CN115661212A - Building structure reconstruction and extension safety investigation method and device based on computer vision - Google Patents
- Publication number: CN115661212A
- Application number: CN202211713032.7A
- Authority: CN (China)
- Prior art keywords: point cloud, cloud data, data set, target building, target
- Prior art date: 2022-12-30
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications (under Y04S, systems integrating technologies related to power network operation; Y04, information or communication technologies having an impact on other technology areas)
Landscapes
- Image Processing (AREA)
Abstract
The application relates to a computer vision-based building structure reconstruction and extension safety investigation method, a computer vision-based building structure reconstruction and extension safety investigation device, a computer device, a storage medium and a computer program product. The method comprises the following steps: scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time; carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result; performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model; and displaying a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building. By adopting the method, the missed-detection rate can be reduced.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for building structure reconstruction and extension security investigation based on computer vision.
Background
In recent years, unauthorized extension and reconstruction of residential buildings have become increasingly common, and illegal extension and reconstruction can cause structural accidents such as collapse. Illegal extension mainly involves adding storeys without approval, while illegal reconstruction mainly involves adding door and window openings without authorization; both create potential safety hazards for the building structure in use and therefore need to be detected and investigated in a timely manner.
Traditional building detection and investigation rely on manual inspection, in which inspectors check for illegal extension and reconstruction on site. However, manual inspection suffers from a high missed-detection rate.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product for computer vision-based security investigation of building structure reconstruction and extension.
In a first aspect, the application provides a computer vision-based building structure reconstruction and extension safety investigation method. The method comprises the following steps:
scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, the scanning the target building at each preset detection time to obtain the point cloud data set corresponding to the target building at each detection time includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration processing of the point cloud data contained in each point cloud data set to obtain a registration result includes:
sorting the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, the three-dimensional model superposition processing is performed according to the registration result and the point cloud data contained in each point cloud data set, and obtaining a superposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model.
In one embodiment, the registration result comprises a rotation matrix and a translation matrix between the point cloud data sets adjacent in detection time; and the superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model includes:
and translating or rotating the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the method further comprises:
identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises a component type of each component, coordinates of a geometric center of each component, and a component number corresponding to each component type;
determining a compliance detection result corresponding to the target building based on the identified component types and the component quantity corresponding to the component types;
obtaining the absolute distance between the members according to the coordinates of the geometric centers of the members, and comparing the absolute distance between the members with a preset safety threshold value to obtain a safety detection result corresponding to the target building;
and outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In a second aspect, the application further provides a building structure reconstruction and extension safety investigation device based on computer vision. The device comprises:
the scanning module is used for scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
the registration module is used for carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result;
the superposition module is used for carrying out three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and the display module is used for displaying the target model corresponding to the target building according to the superposed three-dimensional model, and the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, the scanning module is specifically configured to:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration module is specifically configured to:
sorting the point cloud data contained in the point cloud data set to obtain a point cloud data set sequence;
in the point cloud data set sequence, the point cloud data contained in the adjacent point cloud data sets are subjected to registration processing through an iterative closest point algorithm, and a registration result between the adjacent point cloud data sets is obtained.
In one embodiment, the overlay module is specifically configured to:
establishing a three-dimensional model for the target building according to the point cloud data;
and carrying out coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the display module is specifically configured to:
and displaying a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, the apparatus further comprises:
the identification module is used for identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises a component type of each component, coordinates of a geometric center of each component, and a component number corresponding to each component type;
the determining module is used for determining a compliance detection result corresponding to the target building based on the identified component types and the component quantity corresponding to the component types;
the comparison module is used for obtaining the absolute distance between the members according to the coordinates of the geometric centers of the members, and comparing the absolute distance between the members with a preset safety threshold to obtain a safety detection result corresponding to the target building;
and the output module is used for outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of the first aspect when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of the first aspect.
In a fifth aspect, the present application further provides a computer program product. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method of the first aspect.
According to the building structure reconstruction and expansion safety investigation method, device, computer equipment, storage medium and computer program product based on computer vision, the target building is scanned at each preset detection time to obtain the point cloud data set corresponding to the target building at each detection time; carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result; performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model; and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building. The missing rate of illegal structures of the house buildings can be reduced by registering the point cloud data sets of the scanned target buildings and superposing the three-dimensional models of the target buildings at different time points according to the registration result.
Drawings
FIG. 1 is a diagram of an application environment of a computer vision-based building structure reconstruction and extension safety investigation method;
FIG. 2 is a schematic flow chart of a computer vision-based building structure reconstruction and extension safety investigation method according to an embodiment;
FIG. 3 is a schematic flow chart of the point cloud registration step in one embodiment;
FIG. 4 is a schematic flow chart of the model superposition step in one embodiment;
FIG. 5 is a schematic flow chart of the crack projection step in another embodiment;
FIG. 6 is a schematic diagram of a computer vision-based building structure reconstruction and extension safety investigation process in one embodiment;
FIG. 7 is a block diagram of a computer vision-based building structure reconstruction and extension safety investigation apparatus in one embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The building structure reconstruction and expansion safety checking method based on computer vision provided by the embodiment of the application can be applied to the application environment shown in figure 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, and tablet computers. The server 104 may be implemented as a stand-alone server or a server cluster comprised of multiple servers. In addition, the method may also be applied to a terminal or a server, and the present application is not limited thereto.
In one embodiment, as shown in fig. 2, a computer vision-based security troubleshooting method for building structure reconstruction and extension is provided, which is described by taking an application of the method to a terminal as an example, and includes the following steps:
Step 202, scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.

The point cloud data set is obtained by scanning with a laser radar, and the terminal stores the laser scanning result as the point cloud data set.
In the embodiment of the application, the terminal and the scanning device can be integrated: the terminal moves along a preset route at the different detection times, and the scanning device scans the target building to obtain the point cloud data of the target building. For example, the user may hold the terminal device and move inside the building to scan the interior and obtain the interior point cloud data, or move around the outside of the building to obtain the exterior point cloud data. Optionally, the terminal may be a SLAM (Simultaneous Localization and Mapping) device including a laser radar, an inertial navigation unit, and a high-resolution camera. In this way, the terminal can scan the target building at the different preset detection times to obtain the point cloud data set corresponding to the target building at each detection time.
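As an illustration of this data-acquisition step, the sketch below schedules one scan per preset detection time and stores each result as a timestamped point cloud data set. It is a minimal sketch under stated assumptions: the `scan_once` callable standing in for the handheld SLAM unit and the concrete detection times are illustrative, not part of the patent or of any real vendor SDK; only the Open3D file I/O call is a real library function.

```python
import datetime
import time
from typing import Callable, List, Tuple

import open3d as o3d

# Illustrative detection times; in practice they come from an inspection plan.
PRESET_DETECTION_TIMES = [
    datetime.datetime(2023, 6, 1, 9, 0),
    datetime.datetime(2023, 12, 1, 9, 0),
]

def collect_point_cloud_sets(
        scan_once: Callable[[], o3d.geometry.PointCloud],
        out_dir: str) -> List[Tuple[datetime.datetime, str]]:
    """Scan the target building at each preset detection time and store one
    point cloud data set per detection time. `scan_once` is assumed to wrap
    the handheld SLAM unit (laser radar + inertial navigation + camera) and
    to return the accumulated point cloud of one walk-through."""
    data_sets = []
    for t in PRESET_DETECTION_TIMES:
        while datetime.datetime.now() < t:      # wait for the preset detection time
            time.sleep(60)
        cloud = scan_once()                     # operator walks inside/around the building
        path = f"{out_dir}/scan_{t:%Y%m%d_%H%M}.pcd"
        o3d.io.write_point_cloud(path, cloud)   # persist the point cloud data set
        data_sets.append((t, path))
    return data_sets
```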
Step 204, carrying out registration processing on the point cloud data contained in each point cloud data set to obtain a registration result.
Wherein the registration result comprises a rotation matrix and a translation matrix.
In the embodiment of the application, the terminal may perform registration on Point cloud data sets of buildings at different time points by using an ICP (Iterative Closest Point) algorithm, so as to obtain a registration result.
Optionally, the terminal may register the point cloud data sets of adjacent time points according to a selection of the user, or the user may designate the point cloud data sets of any two time points to register.
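To make the registration step concrete, the following sketch uses point-to-point ICP from the Open3D library to register the point cloud data sets of two adjacent detection times. It is a sketch rather than the patent's mandated implementation: the voxel size and correspondence distance are illustrative values, and the returned 4x4 matrix carries the rotation matrix and translation vector that form the registration result.

```python
import numpy as np
import open3d as o3d

def register_adjacent_sets(source_path: str, target_path: str,
                           voxel_size: float = 0.05,
                           max_corr_dist: float = 0.2) -> np.ndarray:
    """Register the point cloud data set of the previous detection time (source)
    to that of the current detection time (target) and return the 4x4 transform,
    whose top-left 3x3 block is the rotation matrix and last column the translation."""
    source = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel_size)
    target = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel_size)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # registration result: rotation + translation

# Usage: T = register_adjacent_sets("scan_t1.pcd", "scan_t2.pcd")
#        R, t = T[:3, :3], T[:3, 3]
```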
Step 206, performing three-dimensional model superposition processing according to the registration result and the point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model.
In the embodiment of the application, the terminal establishes a three-dimensional model of the target building according to the point cloud data, and performs coordinate transformation on the three-dimensional models of the target building at different time points by using the rotation matrix and the translation matrix, so that the three-dimensional models of the target building at different time points are converted into the same coordinate system to obtain the superposed three-dimensional model. In one example, the terminal uses the registration result to superpose the three-dimensional model built from the point cloud data set of the target building at the previous time point with the one built from the point cloud data set at the current time point, thereby converting the three-dimensional model of the target building at the previous time point into the same coordinate system as the three-dimensional model at the current time point.
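A minimal sketch of this superposition step, assuming Open3D meshes and the 4x4 transformation produced by the registration above: the model of the previous time point is transformed into the coordinate system of the current model and the two are merged. The colours are only a simple way to tell the two epochs apart and are an assumption of the example.

```python
import copy
import numpy as np
import open3d as o3d

def superpose_models(prev_model: o3d.geometry.TriangleMesh,
                     curr_model: o3d.geometry.TriangleMesh,
                     transformation: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Bring the model of the previous detection time into the coordinate
    system of the current model using the ICP registration result, then
    merge the two so that changed regions can be inspected in one scene."""
    prev_aligned = copy.deepcopy(prev_model)
    prev_aligned.transform(transformation)            # apply rotation + translation
    # Colour the two epochs differently so non-overlapping regions stand out.
    prev_aligned.paint_uniform_color([0.6, 0.6, 0.6])  # earlier epoch: grey
    curr_model.paint_uniform_color([0.9, 0.3, 0.2])    # current epoch: red
    return prev_aligned + curr_model                   # Open3D meshes support '+'
```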
Step 208, displaying the target model corresponding to the target building according to the superposed three-dimensional model.
Wherein, the target model is used for reflecting the deformation characteristics of the target building.
In the embodiment of the application, the terminal displays the target model corresponding to the target building according to the user instruction, and in the target model, the deformation parts of the target building at different time points can be highlighted, so that the deformation characteristics of the target building can be reflected visually.
Specifically, the terminal may perform coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix, match the element coordinates in the three-dimensional model of the target building at the previous time point with the element coordinates in the three-dimensional model of the target building at the current time point, determine the element coordinates that fail to match in the three-dimensional model of the target building at the current time point, and highlight the corresponding elements according to the coordinates that failed to match.
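The sketch below shows one possible way, not the only realisation, of determining the coordinates that fail to match: for every point of the current-epoch cloud, the nearest point of the already-aligned previous-epoch cloud is found with a KD-tree, and points whose nearest-neighbour distance exceeds a tolerance are coloured for highlighted display. The tolerance value is an assumption.

```python
import numpy as np
import open3d as o3d

def highlight_deformation(prev_cloud: o3d.geometry.PointCloud,
                          curr_cloud: o3d.geometry.PointCloud,
                          tolerance: float = 0.05) -> o3d.geometry.PointCloud:
    """Mark points of the current-epoch cloud whose nearest counterpart in the
    (already aligned) previous-epoch cloud is farther than `tolerance`; these
    are the coordinates that fail to match and are shown highlighted."""
    kdtree = o3d.geometry.KDTreeFlann(prev_cloud)
    pts = np.asarray(curr_cloud.points)
    colors = np.tile([0.7, 0.7, 0.7], (len(pts), 1))   # default colour: grey
    for i, p in enumerate(pts):
        _, _, dist2 = kdtree.search_knn_vector_3d(p, 1)
        if np.sqrt(dist2[0]) > tolerance:
            colors[i] = [1.0, 0.0, 0.0]                 # deformed region: red
    curr_cloud.colors = o3d.utility.Vector3dVector(colors)
    return curr_cloud

# o3d.visualization.draw_geometries([highlight_deformation(prev, curr)])
```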
In addition, the method may further include building crack identification: a crack image is recognized by using a crack identification model to obtain a crack contour information graph; the crack image is an image obtained by photographing a cracked part of the building with the detection device.
Feature extraction is then performed on the crack contour information graph to obtain crack width feature points, and the crack contour information graph is projected onto the target model corresponding to the building according to the crack width feature points in the crack image, so as to obtain a target model containing the crack projection.
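The crack identification model and the camera pose needed for the projection are not specified in this description, so the sketch below covers only the intermediate feature-extraction step, under the assumption that the model outputs a binary crack mask; the contour, width and representative-point estimates use standard OpenCV calls, and the projection onto the model is left out.

```python
import cv2
import numpy as np

def crack_width_feature_points(crack_mask: np.ndarray):
    """From a binary crack mask (assumed output of a crack identification model),
    extract crack contours and rough per-crack width feature points; projecting
    these 2D points onto the building model requires the camera pose and is
    outside this sketch."""
    contours, _ = cv2.findContours(crack_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    features = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        length = cv2.arcLength(cnt, True) / 2.0   # rough skeleton length of the crack
        width = area / max(length, 1e-6)          # mean width estimate
        (cx, cy), _ = cv2.minEnclosingCircle(cnt) # representative image point
        features.append({"center": (cx, cy), "width": width, "contour": cnt})
    return features
```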
In the building structure reconstruction and extension safety inspection method based on computer vision, the three-dimensional models of the target buildings at different time points are superposed, and the deformation parts of the target buildings at different time points are highlighted, so that the omission ratio of the building safety inspection can be reduced.
In one embodiment, scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time includes: and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In the embodiment of the application, the terminal scans the target building through the laser radar at different time points (namely, the preset detection times) by using an instant positioning and map construction algorithm to obtain complete point cloud data of the target building. In one example, when the terminal monitors that the current time reaches a preset detection time, the terminal automatically scans the target building; alternatively, the terminal may also scan the target building in response to a scanning instruction from the user.
In this embodiment, complete point cloud data of the target building can be obtained by scanning the target building.
In an embodiment, as shown in fig. 3, performing registration processing on point cloud data included in each point cloud data set to obtain a registration result includes:
Step 302, sorting the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence.

The point cloud data sets comprise the point cloud data of the target building at different time points.
In the embodiment of the application, the terminal may store the acquisition time corresponding to each point cloud data set, and then sort the point cloud data sets according to the order of the acquisition times to obtain a point cloud data set sequence.
Step 304, performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In the embodiment of the application, the terminal registers two adjacent point cloud data sets in the point cloud data set sequence through the ICP (Iterative Closest Point) algorithm to obtain the rotation matrix and translation matrix between the two point cloud data sets at adjacent time points, namely the registration result.
Specifically, for any two point cloud data sets to be registered, the terminal transforms the source point cloud data set toward the target point cloud data set according to an initial rotation matrix and an initial translation matrix obtained by coarse registration, so as to obtain a temporary point cloud data set corresponding to the source point cloud data set. According to the distances between the temporary point cloud data set and the point cloud data in the target point cloud data set, the terminal obtains, for each point in the source point cloud data set, its closest point in the target point cloud data set. Based on these closest points, the initial rotation matrix and the initial translation matrix are re-estimated by the least squares method to obtain an optimized rotation matrix and an optimized translation matrix, and the optimized closest points are then calculated according to the optimized rotation matrix and translation matrix. This process is iterated until the changes of the rotation matrix and the translation matrix are smaller than a certain threshold or the closest points no longer change, yielding the determined transformation relation between the point clouds, namely the target rotation matrix and the target translation matrix.
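A minimal NumPy/SciPy sketch of the iteration just described, assuming both clouds are given as N x 3 arrays and that a coarse registration has already produced the initial rotation matrix and translation vector; production ICP implementations add down-sampling and outlier rejection, which are omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray,
        R0: np.ndarray, t0: np.ndarray,
        max_iter: int = 50, tol: float = 1e-6):
    """Iterative closest point as described above: transform the source cloud with
    the current estimate, find the closest target points, re-estimate the rotation
    and translation by least squares (SVD), and iterate until the update is small."""
    R, t = R0.copy(), t0.copy()
    tree = cKDTree(target)
    for _ in range(max_iter):
        temp = source @ R.T + t                  # temporary point cloud data set
        _, idx = tree.query(temp)                # closest point for each source point
        matched = target[idx]
        # Least-squares estimate of R, t between source and matched points (Kabsch).
        src_c, dst_c = source.mean(axis=0), matched.mean(axis=0)
        H = (source - src_c).T @ (matched - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R_new = Vt.T @ U.T
        if np.linalg.det(R_new) < 0:             # guard against reflections
            Vt[-1] *= -1
            R_new = Vt.T @ U.T
        t_new = dst_c - R_new @ src_c
        converged = (np.linalg.norm(R_new - R) < tol and
                     np.linalg.norm(t_new - t) < tol)
        R, t = R_new, t_new
        if converged:
            break
    return R, t   # target rotation matrix and target translation matrix
```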
In this embodiment, the rotation matrix and translation matrix required for the three-dimensional model superposition can be obtained by registering two point cloud data sets through the ICP algorithm.
In an embodiment, as shown in fig. 4, performing three-dimensional model superposition processing according to the registration result and the point cloud data included in each point cloud data set, and obtaining a superposed three-dimensional model includes:
Step 402, establishing a three-dimensional model for the target building according to the point cloud data.
In the embodiment of the application, after the terminal acquires the point cloud data, three-dimensional reconstruction of the target building can be performed from the point cloud data, for example using the R3LIVE algorithm. Optionally, any reconstruction algorithm capable of realizing three-dimensional reconstruction may be applied to the embodiment of the present application, and the embodiment of the present application is not limited in this respect.
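R3LIVE fuses LiDAR, inertial and camera measurements and is not reproduced here; since the text allows any three-dimensional reconstruction algorithm, the sketch below substitutes Open3D's Poisson surface reconstruction as a simple stand-in for building a mesh model from one epoch's point cloud data set. The normal-estimation radius and octree depth are illustrative values.

```python
import open3d as o3d

def build_model(pcd_path: str, depth: int = 9) -> o3d.geometry.TriangleMesh:
    """Build a three-dimensional model (triangle mesh) of the target building
    from one epoch's point cloud data set via Poisson surface reconstruction,
    used here only as a stand-in for LiDAR-visual reconstruction pipelines."""
    pcd = o3d.io.read_point_cloud(pcd_path)
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```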
Step 404, superposing the three-dimensional model according to the registration result to obtain a superposed three-dimensional model.
Wherein the three-dimensional model comprises three-dimensional models at various time points.
In the embodiment of the application, the terminal rotates and translates the three-dimensional models established from the point cloud data of adjacent time points according to the target rotation matrix and the target translation matrix, so that the three-dimensional models of the target building at adjacent time points are superposed to obtain the target model, from which the deformation characteristics of the target building can be observed intuitively.

In this embodiment, by superposing the three-dimensional models of the target building at adjacent time points, the resulting target model allows the deformation characteristics of the target building to be observed directly.
In one embodiment, as shown in fig. 5, further includes:
Step 502, identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set.

The point cloud data sets comprise the point cloud data sets of the building at the preset different time points.
In the embodiment of the application, the terminal identifies the contours in each point cloud data set according to a deep learning algorithm to obtain the category of each component, which may be a beam, a column, a door, a window, a floor slab and the like. The identification result further comprises a confidence corresponding to each component and the point cloud range of the component, where the confidence is used for judging the accuracy of the component identification, and the point cloud range of the component is the coordinate boundary of the component's point cloud data, namely the range of the point cloud data containing the component.
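The deep-learning identifier itself (for example a PointNet-style segmentation network) is not detailed in the description; assuming such a network has already assigned each point an instance label and each instance a category and confidence, the quantities used in the later steps (geometric centers, point cloud ranges and per-category counts) can be derived as in the sketch below. The data layout is an assumption of the example.

```python
from collections import defaultdict

import numpy as np

def summarise_components(points: np.ndarray, labels: np.ndarray,
                         categories: dict, confidences: dict):
    """points: N x 3 coordinates; labels: per-point component instance id;
    categories/confidences: per-instance outputs of the (assumed) deep-learning
    identifier, keyed by instance id. Returns, per component, its category,
    confidence, geometric center and point cloud range (coordinate boundary)."""
    result = []
    for comp_id in np.unique(labels):
        pts = points[labels == comp_id]
        result.append({
            "component_id": int(comp_id),
            "category": categories[comp_id],        # e.g. beam, column, door, window, floor slab
            "confidence": float(confidences[comp_id]),
            "geometric_center": pts.mean(axis=0),    # coordinates of the geometric center
            "point_cloud_range": (pts.min(axis=0), pts.max(axis=0)),
        })
    return result

def count_by_category(components):
    """Number of components corresponding to each component type."""
    counts = defaultdict(int)
    for comp in components:
        counts[comp["category"]] += 1
    return dict(counts)
```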
Step 504, determining a compliance detection result corresponding to the target building based on the identified component types and the component quantity corresponding to the component types.

Wherein, compliance refers to whether the target building has been illegally reconstructed or extended.
In the embodiment of the application, the terminal performs consistency matching based on the identified component types to obtain a first matching result, performs consistency matching based on the component number corresponding to the identified component types to obtain a second matching result, and determines the compliance detection result corresponding to the target building according to the first matching result and the second matching result.
For example, the terminal identifies the point cloud data set of the target building at time A to obtain the component categories and the number of members in each category at time A, and identifies the point cloud data set of the target building at time B to obtain the component categories and the number of members in each category at time B. If the component categories at time A do not match those at time B, or the number of members in each category at time A does not match that at time B, the target building has been reconstructed or extended, and the compliance detection result of the target building is unqualified; if the component categories at time A match those at time B and the number of members in each category at time A matches that at time B, the target building has not been reconstructed or extended, and the compliance detection result of the target building is qualified.
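A small sketch of this compliance comparison, assuming the component categories identified at time A and time B are available as plain lists: the building is judged qualified only when both the set of categories and the number of members per category match.

```python
from collections import Counter

def compliance_check(categories_a, categories_b) -> str:
    """categories_a / categories_b: lists of component categories identified at
    time A and time B. Qualified only if both the category set and the number
    of members in each category are consistent between the two times."""
    count_a, count_b = Counter(categories_a), Counter(categories_b)
    if set(count_a) != set(count_b):   # a category appeared or disappeared
        return "unqualified"           # reconstructed or extended
    if count_a != count_b:             # e.g. extra door or window openings
        return "unqualified"
    return "qualified"

# compliance_check(["column"] * 8 + ["window"] * 6,
#                  ["column"] * 8 + ["window"] * 7)  -> "unqualified"
```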
Step 506, obtaining the absolute distance between the members through the coordinates of the geometric centers of the members, and comparing the absolute distance between the members with a preset safety threshold value to obtain a safety detection result corresponding to the target building.
Wherein the preset safety threshold is set by a technician according to engineering experience.
In the embodiment of the application, for each component, the terminal calculates the absolute distance between the component's positions at different time points according to the geometric center of the component identified in step 502, and compares this absolute distance with the preset safety threshold: if the absolute distance is greater than the preset safety threshold, the component is determined to be in a dangerous state; if it is smaller than the preset safety threshold, the component is determined to be in a normal state. If all the members are in a normal state, the safety detection result of the target building is determined to be safe.
For example, according to the coordinates of the geometric center of a member at time A and at time B, the terminal calculates the Euclidean distance between the two geometric centers; if the Euclidean distance is greater than the preset safety threshold, the safety of the target building is determined to be dangerous, and if it is smaller than the preset safety threshold, the safety of the target building is determined to be safe.
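The corresponding safety comparison can be sketched as follows, assuming the geometric centers of matched components at time A and time B are available as dictionaries keyed by a component identifier; the preset safety threshold is an input taken from engineering experience.

```python
import numpy as np

def safety_check(centers_a: dict, centers_b: dict, safety_threshold: float) -> str:
    """centers_a / centers_b map a component id to the coordinates of its geometric
    center at time A and time B. A component whose center moved farther than the
    preset safety threshold makes the building's safety result 'dangerous'."""
    for comp_id, c_a in centers_a.items():
        c_b = centers_b.get(comp_id)
        if c_b is None:
            continue  # missing components are handled by the compliance check
        if np.linalg.norm(np.asarray(c_a) - np.asarray(c_b)) > safety_threshold:
            return "dangerous"   # absolute distance exceeds the threshold
    return "safe"

# safety_check({1: (0, 0, 3.00)}, {1: (0, 0, 2.94)}, safety_threshold=0.05) -> "dangerous"
```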
Step 508, outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In the embodiment of the application, in response to a user instruction, the terminal can output a detection report of the target building according to the compliance detection result and the safety detection result; the detection report can also give the position, confidence, point cloud range and the like of each component having a compliance problem or a safety problem.
In the embodiment, the detection report of the target building is obtained by identifying and comparing the types and the number of the members in the target building, so that the missing rate of building safety inspection can be reduced.
In one embodiment, as shown in fig. 6, the present application further provides an example of a method for security investigation of building structure reconstruction and extension based on computer vision, which specifically includes:
step 602, scanning the target building by using an instant positioning and map building algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
Step 604, sorting the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence.

Step 606, performing registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.

Step 608, establishing a three-dimensional model for the target building according to the point cloud data.

Step 610, performing coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional model.

Step 612, identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result includes the component type of each component, the coordinates of the geometric center of each component, and the number of components corresponding to each component type.

Step 614, determining the compliance detection result corresponding to the target building based on the identified component types and the number of components corresponding to the component types.

Step 616, obtaining the absolute distance between the members through the coordinates of the geometric centers of the members, and comparing the absolute distance between the members with a preset safety threshold to obtain a safety detection result corresponding to the target building.

Step 618, outputting a detection report of the target building according to the compliance detection result and the safety detection result.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts related to the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a building structure reconstruction and expansion safety inspection device based on computer vision, which is used for realizing the building structure reconstruction and expansion safety inspection method based on computer vision. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so that specific limitations in one or more embodiments of the building structure reconstruction and expansion security inspection device based on computer vision provided below can be referred to the limitations on the building structure reconstruction and expansion security inspection method based on computer vision, and are not described herein again.
In one embodiment, as shown in fig. 7, a computer vision based security troubleshooting apparatus for building structure reconstruction and extension is provided, comprising: a scanning module 701, a registration module 702, an overlay module 703 and a presentation module 704, wherein:
the scanning module 701 is used for scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
a registration module 702, configured to perform registration processing on point cloud data included in each point cloud data set to obtain a registration result;
the superposition module 703 is configured to perform three-dimensional model superposition processing according to the registration result and the point cloud data included in each point cloud data set, so as to obtain a superposed three-dimensional model;
and the display module 704 is configured to output a target model corresponding to the target building according to the stacked three-dimensional model, where the target model is used to reflect the deformation characteristics of the target building.
In one embodiment, the scanning module is specifically configured to:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration module is specifically configured to:
sorting the point cloud data contained in the point cloud data set to obtain a point cloud data set sequence;
in the point cloud data set sequence, the point cloud data contained in the adjacent point cloud data sets are subjected to registration processing through an iterative closest point algorithm, and a registration result between the adjacent point cloud data sets is obtained.
In one embodiment, the overlay module is specifically configured to:
establishing a three-dimensional model for the target building according to the point cloud data;
and carrying out coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the display module is specifically configured to:
and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring a crack image of the surface of the building;
the identification module is used for identifying the crack image on the surface of the building to obtain a crack contour information graph;
and the projection module is used for extracting the features of the crack contour information graph to obtain crack width feature points, and projecting the crack contour information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
All or part of each module in the building structure reconstruction and expansion safety inspection device based on computer vision can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing a point cloud data set. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize the computer vision-based building structure reconstruction and expansion safety check method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor realizing the following method steps when executing the computer program:
scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration processing of the point cloud data contained in each point cloud data set to obtain a registration result includes:
sorting the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, the three-dimensional model superposition processing is performed according to the registration result and the point cloud data contained in each point cloud data set, and obtaining a superposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model.
In one embodiment, the registration result comprises a rotation matrix and a translation matrix between the point cloud data sets adjacent in detection time; and the superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model includes:
and translating or rotating the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the method further comprises:
collecting a crack image of the surface of the building;
identifying the crack image on the surface of the building to obtain a crack contour information graph;
and performing feature extraction on the crack contour information graph to obtain crack width feature points, and projecting the crack contour information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the following steps:
Scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration processing of the point cloud data contained in each point cloud data set to obtain a registration result includes:
sorting the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, the three-dimensional model superposition processing is performed according to the registration result and the point cloud data contained in each point cloud data set, and obtaining a superposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model.
In one embodiment, the registration result comprises a rotation matrix and a translation matrix between the point cloud data sets adjacent in detection time; and the superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model includes:
and translating or rotating the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the method further comprises:
collecting a crack image of the surface of a building;
identifying the crack image on the surface of the building to obtain a crack contour information graph;
and performing feature extraction on the crack contour information graph to obtain crack width feature points, and projecting the crack contour information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the following steps:
Scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in each point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and outputting a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
In one embodiment, the scanning the target building at each preset detection time to obtain the point cloud data set corresponding to the target building at each detection time includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
In one embodiment, the registration processing of the point cloud data contained in each point cloud data set to obtain a registration result includes:
sorting the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, the three-dimensional model superposition processing is performed according to the registration result and the point cloud data contained in each point cloud data set, and obtaining a superposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model.
In one embodiment, the registration result comprises a rotation matrix and a translation matrix between the point cloud data sets adjacent in detection time; and the superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model includes:
and translating or rotating the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superposed three-dimensional models.
In one embodiment, the method further comprises:
collecting a crack image of the surface of the building;
identifying the crack image on the surface of the building to obtain a crack outline information graph;
and performing feature extraction on the crack contour information graph to obtain crack width feature points, and projecting the crack contour information graph to an initial model corresponding to a building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), magnetic Random Access Memory (MRAM), ferroelectric Random Access Memory (FRAM), phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A building structure reconstruction and extension safety investigation method based on computer vision is characterized by comprising the following steps:
scanning a target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on the point cloud data contained in each point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and displaying a target model corresponding to the target building according to the superposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristic of the target building.
2. The method of claim 1, wherein the scanning the target building at each preset detection time to obtain the point cloud data set corresponding to the target building at each detection time comprises:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
3. The method of claim 1, wherein the registering the point cloud data included in each point cloud data set to obtain a registration result comprises:
sorting the point cloud data contained in each point cloud data set according to the time of each point cloud data set to obtain a point cloud data set sequence;
and performing registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
4. The method of claim 1, wherein the step of performing three-dimensional model superposition processing according to the registration result and the point cloud data included in each point cloud data set to obtain a superposed three-dimensional model comprises:
establishing a three-dimensional model for a target building according to point cloud data contained in the point cloud data set;
and superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model.
5. The method of claim 4, wherein the registration result comprises a rotation matrix and a translation matrix between the point cloud data sets adjacent in detection time; and the superposing the three-dimensional model according to the registration result to obtain the superposed three-dimensional model comprises:
and according to the rotation matrix and the translation matrix, carrying out coordinate transformation on the three-dimensional models at different detection times to obtain the superposed three-dimensional models.
6. The method of claim 1, further comprising:
identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises a component type of each component, coordinates of a geometric center of each component, and a quantity of components corresponding to each component type;
determining a compliance detection result corresponding to the target building based on the identified component types and the quantities of components corresponding to the component types;
obtaining absolute distances between the components according to the coordinates of the geometric centers of the components, and comparing the absolute distances between the components with a preset safety threshold to obtain a safety detection result corresponding to the target building;
and outputting a detection report of the target building according to the compliance detection result and the safety detection result.
7. A computer vision-based building structure reconstruction and extension safety investigation device, characterized in that the device comprises:
the scanning module is used for scanning a target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
the registration module is used for carrying out registration processing on the point cloud data contained in each point cloud data set to obtain a registration result;
the superposition module is used for carrying out three-dimensional model superposition processing according to the registration result and the point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
and the display module is used for displaying the target model corresponding to the target building according to the superposed three-dimensional model, and the target model is used for reflecting the deformation characteristic of the target building.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
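Claim 3 registers point cloud data sets from adjacent detection times with an iterative closest point (ICP) algorithm. The sketch below is a minimal point-to-point ICP written for illustration only; the SVD-based rigid fit, the NumPy/SciPy dependencies, and all function and variable names are assumptions of this sketch, not details taken from the claims.

```python
# Minimal point-to-point ICP sketch (illustrating the registration step of claim 3).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping paired points src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Register `source` (N x 3) onto `target` (M x 3); returns the accumulated R, t."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src)             # closest target point for each source point
        R, t = best_fit_transform(src, tgt[idx])
        src = src @ R.T + t                      # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In practice the coarse poses produced by the SLAM scan of claim 2 would provide the initial alignment, and a point-to-plane variant or an established library implementation (for example, Open3D's registration pipeline) would typically be preferred for large building scans.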
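Claims 4 and 5 superpose the three-dimensional models of different detection times by applying the rotation matrix and translation matrix obtained from registration. The sketch below shows only the geometric core of that step under assumed inputs: the later-epoch cloud is transformed into the reference frame and the two epochs are stacked with labels so they can be rendered together; the labelling scheme is an illustrative assumption.

```python
# Sketch of the coordinate-transformation and superposition step of claims 4 and 5.
import numpy as np

def transform_points(points, R, t):
    """Apply x -> R @ x + t to every row of an (N, 3) point array."""
    pts = np.asarray(points, dtype=float)
    return pts @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)

def superpose_epochs(reference_cloud, later_cloud, R, t):
    """Bring a later-epoch cloud into the reference frame and overlay the two epochs."""
    ref = np.asarray(reference_cloud, dtype=float)
    aligned = transform_points(later_cloud, R, t)
    stacked = np.vstack([ref, aligned])
    epoch = np.concatenate([np.zeros(len(ref), dtype=int),      # 0 = reference epoch
                            np.ones(len(aligned), dtype=int)])  # 1 = later epoch
    return stacked, epoch
```

Given `R, t = icp(later_cloud, reference_cloud)` from the previous sketch, `superpose_epochs(reference_cloud, later_cloud, R, t)` returns one combined cloud in which per-point deviations between epochs (for example, nearest-neighbour distances) can be colour-mapped onto the displayed target model to reflect deformation, as described in claim 1.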
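Claim 6 derives a compliance result from the identified component types and quantities, and a safety result from the absolute distances between component geometric centres compared with a preset safety threshold. The sketch below assumes a simple record layout for the identification results, an approved-count dictionary, and a "closer than the threshold" violation rule; these specifics, and the example data, are illustrative assumptions, and the deep-learning identification step itself is not shown.

```python
# Sketch of the compliance and safety checks of claim 6 (identification results assumed given).
import numpy as np
from collections import Counter
from itertools import combinations

def compliance_check(components, approved_counts):
    """Compare the number of identified components per type against the approved design."""
    found = Counter(c["type"] for c in components)
    return {ctype: {"found": found.get(ctype, 0), "approved": n,
                    "ok": found.get(ctype, 0) == n}
            for ctype, n in approved_counts.items()}

def safety_check(components, min_distance):
    """Flag component pairs whose geometric centres are closer than the safety threshold."""
    violations = []
    for a, b in combinations(components, 2):
        d = float(np.linalg.norm(np.asarray(a["center"]) - np.asarray(b["center"])))
        if d < min_distance:
            violations.append((a["id"], b["id"], round(d, 3)))
    return violations

# Illustrative identification results (component type and geometric-centre coordinates).
components = [
    {"id": 1, "type": "column", "center": (0.0, 0.0, 0.0)},
    {"id": 2, "type": "column", "center": (0.4, 0.0, 0.0)},       # suspiciously close pair
    {"id": 3, "type": "door_opening", "center": (3.0, 0.0, 1.0)},
]
report = {
    "compliance": compliance_check(components, {"column": 2, "door_opening": 0}),
    "safety_violations": safety_check(components, min_distance=0.5),
}
print(report)
```

The two results would then feed the detection report output in the last step of claim 6.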
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211713032.7A CN115661212B (en) | 2022-12-30 | 2022-12-30 | Building structure reconstruction and expansion safety investigation method and device based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115661212A (en) | 2023-01-31
CN115661212B CN115661212B (en) | 2023-06-06 |
Family
ID=85023365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211713032.7A Active CN115661212B (en) | 2022-12-30 | 2022-12-30 | Building structure reconstruction and expansion safety investigation method and device based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115661212B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118628294A (en) * | 2024-08-14 | 2024-09-10 | 广东省建筑设计研究院集团股份有限公司 | Building inspection method and system based on existing building structure |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106225707A (en) * | 2016-08-01 | 2016-12-14 | | A kind of method for the deformation of fast monitored high CFRD extrusion side wall |
CN107316299A (en) * | 2017-07-13 | 2017-11-03 | | Ancient architecture deformed monitoring method and system based on three-dimensional point cloud technology |
CN108844479A (en) * | 2018-06-27 | 2018-11-20 | | A kind of monitoring method of existing spatial steel structure member bending deformation |
US20210048294A1 (en) * | 2019-08-15 | 2021-02-18 | China Institute Of Water Resources And Hydropower Research | System and method for monitoring deformation of dam slope |
WO2021232463A1 (en) * | 2020-05-19 | 2021-11-25 | 北京数字绿土科技有限公司 | Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium |
CN114494274A (en) * | 2022-03-31 | 2022-05-13 | | Building construction evaluation method, building construction evaluation device, electronic equipment and storage medium |
Non-Patent Citations (4)
Title |
---|
YUAN ZHOU ET AL.: "A deep learning framework to early identify emerging technologies in large-scale outlier patents: an empirical study of CNC machine tool" * |
何原荣; 郑渊茂; 潘火平; 陈鉴知: "True three-dimensional modeling and application of complex buildings based on point cloud data" *
刘宇飞 et al.: "Identifying defects and deformation of engineering structures by multi-view geometric three-dimensional reconstruction" *
张亚; 山锋: "Research on registration methods of point cloud data for three-dimensional building reconstruction" *
Also Published As
Publication number | Publication date |
---|---|
CN115661212B (en) | 2023-06-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||