CN115661212B - Building structure reconstruction and expansion safety investigation method and device based on computer vision - Google Patents
Abstract
The present application relates to a computer vision-based building structure reconstruction and expansion safety investigation method, apparatus, computer device, storage medium and computer program product. The method comprises: scanning a target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time; performing registration processing on the point cloud data contained in the point cloud data sets to obtain a registration result; performing three-dimensional model superposition processing according to the registration result and the point cloud data contained in the point cloud data sets to obtain a superimposed three-dimensional model; and displaying a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building. By adopting the method, the missed-detection (omission) rate can be reduced.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, apparatus, computer device, storage medium and computer program product for building structure reconstruction and expansion security inspection based on computer vision.
Background
In recent years, private building expansion and reconstruction have become increasingly common, and illegal expansion and reconstruction have led to structural accidents such as collapses. Illegal expansion mainly takes the form of unauthorized addition of storeys, while illegal reconstruction mainly takes the form of unauthorized opening of doors and windows; both introduce potential safety hazards into the use of the building structure and therefore need to be detected and investigated in time.
Traditional building detection and investigation rely on manual inspection: inspectors patrol buildings on site to identify illegal expansion and reconstruction. However, manual inspection suffers from a relatively high missed-detection (omission) rate.
Disclosure of Invention
Based on this, it is necessary to provide a computer vision-based building structure reconstruction and expansion safety investigation method, apparatus, computer device, computer-readable storage medium and computer program product that address the above technical problems.
In a first aspect, the present application provides a computer vision-based building structure reconstruction and expansion security screening method. The method comprises the following steps:
scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in the point cloud data set to obtain a registration result;
Performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model;
outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, performing registration processing on point cloud data included in the point cloud data set to obtain a registration result includes:
ordering the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and carrying out registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence through an iterative nearest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set, and obtaining a superimposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional models according to the registration result to obtain the superposed three-dimensional models.
In one embodiment, the registration result includes a rotation matrix and a translation matrix between point cloud data sets that are adjacent in detection time; superposing the three-dimensional models according to the registration result to obtain the superimposed three-dimensional model comprises the following steps:
and translating or rotating the three-dimensional model with different detection time according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the method further comprises:
identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises the component category of each component, the coordinates of the geometric center of each component and the number of components corresponding to each component category;
determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories;
Obtaining the absolute distance between the components through the coordinates of the geometric centers of the components, and comparing the absolute distance between the components with a preset safety threshold value to obtain a safety detection result corresponding to the target building;
and outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In a second aspect, the present application also provides a computer vision-based building structure reconstruction and expansion security inspection device. The device comprises:
the scanning module is used for scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
the registration module is used for carrying out registration processing on the point cloud data contained in the point cloud data set to obtain a registration result;
the superposition module is used for carrying out three-dimensional model superposition processing according to the registration result and the point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model;
the display module is used for displaying a target model corresponding to the target building according to the superimposed three-dimensional model, and the target model is used for reflecting deformation characteristics of the target building.
In one embodiment, the scanning module is specifically configured to:
And scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, the registration module is specifically configured to:
ordering the point cloud data contained in the point cloud data set to obtain a point cloud data set sequence;
in the point cloud data set sequence, the point cloud data contained in the adjacent point cloud data sets are registered through an iterative nearest point algorithm, and a registration result between the adjacent point cloud data sets is obtained.
In one embodiment, the superposition module is specifically configured to:
establishing a three-dimensional model for the target building according to the point cloud data;
and carrying out coordinate transformation on the three-dimensional models with different detection times according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the display module is specifically configured to:
and displaying a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, the apparatus further comprises:
the identification module is used for identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises the component category of each component, the coordinates of the geometric center of each component and the number of components corresponding to each component category;
The determining module is used for determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories;
the comparison module is used for obtaining the absolute distance between the components through the coordinates of the geometric centers of the components, and comparing the absolute distance between the components with a preset safety threshold value to obtain a safety detection result corresponding to the target building;
and the output module is used for outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
In a fifth aspect, the present application also provides a computer program product. Computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
According to the above computer vision-based building structure reconstruction and expansion safety investigation method, apparatus, computer device, storage medium and computer program product, the target building is scanned at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time; registration processing is carried out on the point cloud data contained in the point cloud data sets to obtain a registration result; three-dimensional model superposition processing is performed according to the registration result and the point cloud data contained in the point cloud data sets to obtain a superimposed three-dimensional model; and a target model corresponding to the target building is output according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building. Because the point cloud data sets obtained by scanning the target building are registered and the three-dimensional models of the target building at different time points are superimposed according to the registration result, the missed-detection rate of illegal building structures can be reduced.
Drawings
FIG. 1 is an application environment diagram of a computer vision-based building structure reconstruction and expansion security screening method in one embodiment;
FIG. 2 is a flow chart of a method for computer vision based reconstruction and expansion of a building structure according to an embodiment;
FIG. 3 is a flow diagram of a point cloud registration step in one embodiment;
FIG. 4 is a flow chart of a model stacking step in one embodiment;
FIG. 5 is a schematic flow chart of a crack projection step in another embodiment;
FIG. 6 is a flow diagram of an example of a computer vision-based building structure reconstruction and expansion safety investigation method in one embodiment;
FIG. 7 is a block diagram of a computer vision-based building structure reconstruction and expansion safety inspection apparatus in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The building structure reconstruction and expansion safety investigation method based on computer vision provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers. In addition, the method can also be applied to a terminal or a server, and the application is not limited.
In one embodiment, as shown in fig. 2, a computer vision-based building structure reconstruction and expansion safety investigation method is provided. The method is described here, by way of illustration, as applied to a terminal, and comprises the following steps:
Step 202, scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time.
Wherein, the point cloud data set is obtained by laser radar scanning, and the terminal stores the laser scanning result as the point cloud data set.
In the embodiment of the application, the terminal may be integrated with a scanning device, move along a preset route at different time intervals, and scan the target building through the scanning device to obtain the point cloud data of the target building. For example, the user may move the handheld terminal device inside the building to scan the interior and obtain interior point cloud data, or scan the outside of the building to obtain exterior point cloud data. Optionally, the terminal may be a SLAM (Simultaneous Localization and Mapping, instant positioning and map construction) device comprising a lidar, inertial navigation and a high-resolution camera. In this way, the terminal can scan the target building at the different preset detection times to obtain the point cloud data set corresponding to the target building at each detection time.
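As a non-limiting illustration of how the scanned data may be organized on the terminal, the following Python sketch stores one point cloud data set per preset detection time and returns them ordered by acquisition time; the class name, array layout and use of NumPy are illustrative assumptions rather than part of the described method.

```python
import numpy as np
from datetime import datetime


class PointCloudArchive:
    """Stores one point cloud data set per preset detection time (illustrative sketch)."""

    def __init__(self):
        self._scans = {}  # detection time -> (N, 3) array of XYZ points

    def add_scan(self, detection_time: datetime, points: np.ndarray) -> None:
        # Assumption: each SLAM scan session is exported as an N x 3 XYZ array.
        assert points.ndim == 2 and points.shape[1] == 3
        self._scans[detection_time] = points.astype(np.float64)

    def ordered_scans(self):
        """Return the point cloud data sets as a sequence ordered by detection time."""
        return [self._scans[t] for t in sorted(self._scans)]
```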
Step 204, carrying out registration processing on the point cloud data contained in the point cloud data set to obtain a registration result.
Wherein the registration result includes a rotation matrix and a translation matrix.
In the embodiment of the application, the terminal can register the point cloud data sets of the building at different time points by adopting an ICP (Iterative Closest Point) algorithm, so as to obtain a registration result.
Optionally, the terminal may register the point cloud data sets of the adjacent time points according to the selection of the user, or the user may designate the point cloud data sets of any two time points to register.
Step 206, performing three-dimensional model superposition processing according to the registration result and the point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model.
In the embodiment of the application, the terminal establishes a three-dimensional model for the target building according to the point cloud data, and performs coordinate transformation on the three-dimensional model of the target building at different time points by utilizing the rotation matrix and the translation matrix, so that the three-dimensional model of the target building at different time points is transformed into the same coordinate system, and the superimposed three-dimensional model is obtained. In one example, the terminal performs three-dimensional model superposition processing on the point cloud data included in the point cloud data set of the previous time point target building and the point cloud data included in the point cloud data set of the current time point target building through the registration result, so that the three-dimensional model of the previous time point target building can be converted into the same coordinate system as the three-dimensional model of the current time point target building.
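The coordinate transformation described above can be illustrated by the following sketch, which assumes the registration result is expressed as a 3x3 rotation matrix and a 3-vector translation mapping the previous-time point cloud into the coordinate system of the current-time point cloud; the function name and array conventions are assumptions for illustration only.

```python
import numpy as np


def superimpose(previous_points: np.ndarray,
                current_points: np.ndarray,
                rotation: np.ndarray,
                translation: np.ndarray) -> np.ndarray:
    """Transform the previous-time point cloud into the coordinate system of the
    current-time point cloud and stack the two, yielding the superimposed model.

    previous_points, current_points : (N, 3) and (M, 3) XYZ arrays
    rotation : (3, 3) rotation matrix from the registration result
    translation : (3,) translation vector from the registration result
    """
    transformed_previous = previous_points @ rotation.T + translation
    return np.vstack([transformed_previous, current_points])
```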
Step 208, displaying the target model corresponding to the target building according to the superimposed three-dimensional model.
The target model is used for reflecting deformation characteristics of the target building.
In the embodiment of the application, the terminal displays the target model corresponding to the target building according to the user instruction, and in the target model, deformation parts of the target building at different time points can be highlighted, so that deformation characteristics of the target building can be intuitively reflected.
Specifically, the terminal may perform coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix, then match the element coordinates in the three-dimensional model of the target building at the previous time point against the element coordinates in the three-dimensional model of the target building at the current time point, determine the element coordinates in the current three-dimensional model for which matching fails, and highlight the model according to the element coordinates for which matching fails.
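One plausible way to determine the element coordinates for which matching fails is a nearest-neighbour comparison between the aligned models, as sketched below; the use of a k-d tree and the tolerance value are assumptions, not requirements of the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree


def deformed_point_mask(previous_aligned: np.ndarray,
                        current_points: np.ndarray,
                        tolerance: float = 0.05) -> np.ndarray:
    """Return a boolean mask over current_points marking points whose nearest
    neighbour in the aligned previous-time model lies farther than `tolerance`
    (metres; illustrative value). These are the "failed match" coordinates to
    highlight as potential deformation."""
    tree = cKDTree(previous_aligned)
    distances, _ = tree.query(current_points)
    return distances > tolerance
```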
In addition, the embodiment can also comprise building crack identification, and a crack image is identified by using a crack identification model to obtain a crack profile information diagram; the crack image is an image obtained by photographing the crack part of the building by the detection equipment.
Feature extraction is then carried out on the crack profile information graph to obtain crack width feature points, and the crack profile information graph is projected onto the target model corresponding to the building according to the crack width feature points in the crack image, so as to obtain a target model containing the crack projection.
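As a hedged illustration of the projection step, the sketch below back-projects 2D crack width feature points onto a planar facade of the model, assuming the camera intrinsics and the camera pose relative to the reconstructed model (for example from the SLAM trajectory) are known and the cracked surface is approximately planar; all names and the planar-surface assumption are illustrative, not part of the claimed method.

```python
import numpy as np


def project_crack_to_model(pixel_points: np.ndarray,
                           K: np.ndarray,
                           cam_R: np.ndarray, cam_t: np.ndarray,
                           plane_n: np.ndarray, plane_d: float) -> np.ndarray:
    """Back-project 2D crack feature points (u, v) onto a planar facade of the
    building model, given camera intrinsics K and the camera-to-model pose
    (cam_R, cam_t). The facade plane satisfies plane_n . X = plane_d.
    Returns the corresponding 3D crack points in model coordinates."""
    ones = np.ones((pixel_points.shape[0], 1))
    rays_cam = (np.linalg.inv(K) @ np.hstack([pixel_points, ones]).T).T  # camera-frame rays
    rays_world = rays_cam @ cam_R.T          # rotate rays into model coordinates
    origin = cam_t                           # camera centre in model coordinates
    # Ray-plane intersection: origin + s * ray lies on the facade plane.
    s = (plane_d - plane_n @ origin) / (rays_world @ plane_n)
    return origin + s[:, None] * rays_world
```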
According to the building structure reconstruction and expansion safety checking method based on computer vision, the three-dimensional models of the target buildings at different time points are overlapped, and deformation parts of the target buildings at different time points are highlighted, so that the omission ratio of building safety checking can be reduced.
In one embodiment, scanning the target building at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building includes: and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In the embodiment of the application, the terminal scans the target building through the laser radar at different time points (namely preset detection time) by utilizing an instant positioning and map construction algorithm to obtain complete point cloud data of the target building. In one example, the terminal may automatically scan the target building when the current time reaches a preset detection time; alternatively, the terminal may also scan the target building in response to a scan instruction from the user.
In this embodiment, by scanning the target building in this way, complete point cloud data of the interior of the target building can be obtained.
In one embodiment, as shown in fig. 3, performing registration processing on point cloud data included in the point cloud data set to obtain a registration result includes:
Step 302, ordering the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence.
Wherein, the point cloud data sets comprise point cloud data of the target building at different time points.
In the embodiment of the application, the terminal may store the acquisition time corresponding to each point cloud data set, and then sort the point cloud data sets according to the order of their acquisition times to obtain the point cloud data set sequence.
Step 304, performing registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence by an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In the embodiment of the application, the terminal registers two adjacent point cloud data sets in the point cloud data sets through an ICP algorithm to obtain a rotation matrix and a translation matrix of the two adjacent point cloud data sets at the adjacent time points, namely a registration result.
Specifically, for any two point cloud data sets to be registered, the terminal first transforms the source point cloud data set towards the target point cloud data set according to an initial rotation matrix and an initial translation matrix obtained by coarse registration, yielding a temporary point cloud data set. The terminal then finds, for each point in the temporary point cloud data set, its nearest point in the target point cloud data set according to the point-to-point distances, and estimates an optimized rotation matrix and translation matrix from these nearest-point correspondences by the least square method. The optimized matrices are used to recompute the nearest points, and this process is iterated until the change in the rotation matrix and translation matrix is smaller than a certain threshold or the nearest points no longer change, at which point the transformation relation between the point clouds, namely the target rotation matrix and the target translation matrix, is obtained.
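The iteration described above can be sketched as a compact point-to-point ICP in Python. This is a minimal illustration, assuming roughly overlapping clouds and using the closed-form SVD (Kabsch) solution for the least-squares step; in this sketch the returned matrices play the role of the target rotation matrix and target translation matrix.

```python
import numpy as np
from scipy.spatial import cKDTree


def icp(source: np.ndarray, target: np.ndarray,
        max_iterations: int = 50, tolerance: float = 1e-6):
    """Point-to-point ICP sketch. Returns (R, t) such that source @ R.T + t ~ target."""
    R = np.eye(3)     # initial rotation (identity here; a coarse-registration result could be used)
    t = np.zeros(3)   # initial translation
    tree = cKDTree(target)
    prev_error = np.inf

    for _ in range(max_iterations):
        moved = source @ R.T + t                    # temporary point cloud
        dists, idx = tree.query(moved)              # nearest points in the target cloud
        matched = target[idx]

        # Closed-form least-squares rigid transform (Kabsch) between the
        # current temporary cloud and its nearest points.
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s

        # Compose the incremental update into the accumulated transform.
        R = R_step @ R
        t = R_step @ t + t_step

        error = np.mean(dists)
        if abs(prev_error - error) < tolerance:     # convergence test
            break
        prev_error = error

    return R, t
```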
In this embodiment, the two point cloud data are registered by the ICP algorithm, so that a rotation matrix and a translation matrix required in the three-dimensional model superposition can be obtained.
In one embodiment, as shown in fig. 4, performing a three-dimensional model superposition process according to the registration result and the point cloud data included in the point cloud data set, and obtaining a superimposed three-dimensional model includes:
Step 402, establishing a three-dimensional model for the target building according to the point cloud data.
In the embodiment of the application, after the terminal acquires the point cloud data, three-dimensional reconstruction of the target building can be performed from the point cloud data, for example by the R3LIVE algorithm. Optionally, any reconstruction algorithm capable of performing three-dimensional reconstruction may be applied in the embodiments of the present application, which is not limited.
Step 404, superposing the three-dimensional model according to the registration result to obtain a superposed three-dimensional model.
Wherein the three-dimensional model comprises three-dimensional models at respective different points in time.
In the embodiment of the application, the terminal rotates and translates the three-dimensional model established by the point cloud data of the adjacent time points according to the target rotation matrix and the target translation matrix, so that the three-dimensional model of the target building of the adjacent time points is subjected to superposition processing to obtain the target model, and the deformation characteristics of the target building can be intuitively observed by the target model.
In this embodiment, by overlapping three-dimensional models of target buildings at adjacent time points, the target models can be obtained, and deformation characteristics of the target buildings can be intuitively observed.
In one embodiment, as shown in fig. 5, the method further comprises:
Step 502, identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises the component category of each component, the coordinates of the geometric center of each component, and the number of components corresponding to each component category.
Wherein, the point cloud data sets comprise point cloud data sets of the building at different preset time points.
In the embodiment of the application, the terminal identifies the contours in the point cloud data sets according to a deep learning algorithm to obtain the category of each component, which may be a beam, a column, a door, a window, a floor slab and the like; the identification result also comprises a confidence corresponding to each component and the point cloud range of the component, wherein the confidence is used for judging the accuracy of the component identification, and the point cloud range of a component is the coordinate extent of the point cloud data of that component, namely the range containing the point cloud data of the component.
Step 504, determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories.
Wherein, compliance refers to whether the target building has been reconstructed or expanded in violation of regulations.
In the embodiment of the application, the terminal performs consistency matching based on the identified component types to obtain a first matching result, performs consistency matching based on the number of components corresponding to the identified component types to obtain a second matching result, and determines a compliance detection result corresponding to the target building according to the first matching result and the second matching result.
For example, the terminal identifies the point cloud data set of the target building at time A to obtain the component categories and the number of components of each category at time A, and identifies the point cloud data set of the target building at time B to obtain the component categories and the number of components of each category at time B. If the component categories at time A are inconsistent with those at time B, or the number of components of any category at time A is inconsistent with that at time B, this indicates that the target building has been reconstructed or expanded, and the compliance detection result of the target building is disqualified; if the component categories at time A match those at time B and the number of components of each category at time A matches that at time B, this indicates that the target building has not been reconstructed or expanded, and the compliance detection result of the target building is qualified.
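A minimal sketch of this consistency matching, assuming the identification step yields a per-category component count for each detection time; the category names and counts below are illustrative.

```python
from collections import Counter


def compliance_check(categories_a: Counter, categories_b: Counter) -> bool:
    """Compare component categories and per-category counts identified at two
    detection times; the result is qualified only if both match exactly."""
    same_categories = set(categories_a) == set(categories_b)   # first matching result
    same_counts = categories_a == categories_b                 # second matching result
    return same_categories and same_counts


# Illustrative example: a window appears between time A and time B,
# so the compliance detection result is disqualified.
time_a = Counter({"beam": 12, "column": 8, "window": 6, "door": 2})
time_b = Counter({"beam": 12, "column": 8, "window": 7, "door": 2})
print(compliance_check(time_a, time_b))  # False -> disqualified
```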
Step 506, obtaining the absolute distance between the components through the coordinates of the geometric centers of the components, and comparing the absolute distance between the components with a preset safety threshold value to obtain a safety detection result corresponding to the target building.
Wherein the preset safety threshold is set by a technician according to engineering experience.
In this embodiment, for each member, the terminal calculates the absolute distance between the member's geometric centers at different time points according to the geometric centers identified in step 502, and compares this absolute distance with the preset safety threshold: if the absolute distance is greater than the preset safety threshold, the safety of the member is judged to be dangerous; if it is smaller than the preset safety threshold, the safety of the member is judged to be normal. If the safety of any member is dangerous, the safety detection result of the target building is determined to be dangerous; if every member is safe, the safety detection result of the target building is determined to be safe.
For example, the terminal calculates the euclidean distance between the geometric center of the member at time a and the geometric center of the member at time B according to the coordinates of the geometric center of the member at time a and the geometric center of the member at time B, and if the euclidean distance is greater than a preset safety threshold, the terminal determines that the safety of the target building is dangerous; if the Euclidean distance is smaller than the preset safety threshold value, the safety of the target building is judged to be safe.
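The comparison can be sketched as follows, assuming each member's geometric center is available as a 3D coordinate at both detection times; the threshold value would be chosen by a technician according to engineering experience, as noted above.

```python
import numpy as np


def member_safety(center_a: np.ndarray, center_b: np.ndarray,
                  safety_threshold: float) -> bool:
    """Return True (safe) if the Euclidean distance between a member's geometric
    centers at time A and time B stays within the preset safety threshold."""
    return np.linalg.norm(center_b - center_a) <= safety_threshold


def building_safety(centers_a: dict, centers_b: dict, safety_threshold: float) -> bool:
    """The target building is judged safe only if every matched member is safe."""
    return all(member_safety(centers_a[m], centers_b[m], safety_threshold)
               for m in centers_a if m in centers_b)
```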
Step 508, outputting a detection report of the target building according to the compliance detection result and the safety detection result.
In the embodiment of the application, in response to a user instruction, the terminal can output a detection report of the target building according to the compliance detection result and the safety detection result, and can also output, in the detection report, the positions, confidences, point cloud ranges and the like of the components with compliance or safety problems.
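A minimal sketch of assembling the detection report from the two results; the field names and report structure are illustrative assumptions.

```python
def build_detection_report(compliance_ok: bool, safety_ok: bool,
                           flagged_components: list) -> dict:
    """Assemble the detection report from the compliance and safety results.
    Each flagged component is assumed to carry its category, position,
    confidence and point cloud range from the identification step."""
    return {
        "compliance_result": "qualified" if compliance_ok else "disqualified",
        "safety_result": "safe" if safety_ok else "dangerous",
        "flagged_components": flagged_components,
    }
```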
In this embodiment, by identifying and comparing the types and the numbers of the components in the target building, a detection report of the target building is obtained, and the omission ratio of the building safety inspection can be reduced.
In one embodiment, as shown in fig. 6, the application further provides an example of a building structure reconstruction and expansion security check method based on computer vision, which specifically includes:
step 602, scanning the target building by using an instant positioning and map construction algorithm in each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
Step 604, ordering the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence.
Step 606, in the point cloud data set sequence, registering the point cloud data contained in adjacent point cloud data sets through an iterative closest point algorithm to obtain a registration result between the adjacent point cloud data sets.
Step 608, establishing a three-dimensional model for the target building according to the point cloud data.
Step 610, carrying out coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
Step 612, identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result includes the component category of each component, the coordinates of the geometric center of each component, and the number of components corresponding to each component category.
Step 614, determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories.
Step 616, obtaining the absolute distance between the components through the coordinates of the geometric centers of the components, and comparing the absolute distance between the components with a preset safety threshold value to obtain a safety detection result corresponding to the target building.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or stages, which are not necessarily performed at the same time but may be performed at different times, and the order of these steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps, or with steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a computer vision-based building structure reconstruction and expansion safety inspection device for implementing the computer vision-based building structure reconstruction and expansion safety investigation method described above. The implementation scheme provided by the device for solving the problem is similar to that described in the above method, so for the specific limitations in the embodiments of the computer vision-based building structure reconstruction and expansion safety inspection device provided below, reference may be made to the limitations of the computer vision-based building structure reconstruction and expansion safety investigation method above, which will not be repeated here.
In one embodiment, as shown in fig. 7, there is provided a computer vision-based building structure reconstruction and expansion safety inspection apparatus, comprising: a scanning module 701, a registration module 702, a superposition module 703 and a display module 704, wherein:
the scanning module 701 is configured to scan the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
the registration module 702 is configured to perform registration processing on point cloud data included in the point cloud data set, so as to obtain a registration result;
The superposition module 703 is configured to perform a three-dimensional model superposition process according to the registration result and the point cloud data included in the point cloud data set, so as to obtain a superimposed three-dimensional model;
and the display module 704 is used for outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, the scanning module is specifically configured to:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, the registration module is specifically configured to:
ordering the point cloud data contained in the point cloud data set to obtain a point cloud data set sequence;
in the point cloud data set sequence, the point cloud data contained in the adjacent point cloud data sets are registered through an iterative nearest point algorithm, and a registration result between the adjacent point cloud data sets is obtained.
In one embodiment, the superposition module is specifically configured to:
establishing a three-dimensional model for the target building according to the point cloud data;
and carrying out coordinate transformation on the three-dimensional models with different detection times according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the display module is specifically configured to:
outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring the image of the crack on the surface of the building;
the identification module is used for identifying the crack image on the surface of the building to obtain a crack profile information graph;
and the projection module is used for extracting the characteristics of the crack profile information graph to obtain crack width characteristic points, and projecting the crack profile information graph to an initial model corresponding to the building according to the crack width characteristic points in the crack image to obtain a target model containing crack projection.
The above-mentioned building structure reconstruction and expansion safety inspection device based on computer vision can be implemented by all or part of software, hardware and their combination. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing a point cloud data set. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a computer vision-based building structure reconstruction and expansion security screening method.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor, when executing the computer program, performing the method steps of:
scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in the point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model;
outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building includes:
And scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, performing registration processing on point cloud data included in the point cloud data set to obtain a registration result includes:
ordering the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and carrying out registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence through an iterative nearest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set, and obtaining a superimposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional models according to the registration result to obtain the superposed three-dimensional models.
In one embodiment, the registration result includes a rotation matrix and a translation matrix between point cloud data sets that are adjacent in detection time; superposing the three-dimensional model according to the registration result to obtain the superimposed three-dimensional model comprises the following steps:
And translating or rotating the three-dimensional model with different detection time according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the method further comprises:
collecting a crack image of the surface of a building;
identifying the crack image on the surface of the building to obtain a crack profile information graph;
and carrying out feature extraction on the crack profile information graph to obtain crack width feature points, and projecting the crack profile information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
Scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in the point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model;
Outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, performing registration processing on point cloud data included in the point cloud data set to obtain a registration result includes:
ordering the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
and carrying out registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence through an iterative nearest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set, and obtaining a superimposed three-dimensional model includes:
Establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional models according to the registration result to obtain the superposed three-dimensional models.
In one embodiment, the registration result includes a rotation matrix and a translation matrix between point cloud data sets that are adjacent in detection time; superposing the three-dimensional model according to the registration result to obtain the superimposed three-dimensional model comprises the following steps:
and translating or rotating the three-dimensional model with different detection time according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the method further comprises:
collecting a crack image of the surface of a building;
identifying the crack image on the surface of the building to obtain a crack profile information graph;
and carrying out feature extraction on the crack profile information graph to obtain crack width feature points, and projecting the crack profile information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Scanning the target building at each preset detection time to obtain a point cloud data set corresponding to the target building at each detection time;
carrying out registration processing on point cloud data contained in the point cloud data set to obtain a registration result;
performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set to obtain a superposed three-dimensional model;
outputting a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting the deformation characteristics of the target building.
In one embodiment, scanning the target building at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building includes:
and scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
In one embodiment, performing registration processing on point cloud data included in the point cloud data set to obtain a registration result includes:
ordering the point cloud data contained in the point cloud data sets according to the time of each point cloud data set to obtain a point cloud data set sequence;
And carrying out registration processing on each point cloud data contained in adjacent point cloud data sets in the point cloud data set sequence through an iterative nearest point algorithm to obtain a registration result between the adjacent point cloud data sets.
In one embodiment, performing three-dimensional model superposition processing according to the registration result and point cloud data contained in the point cloud data set, and obtaining a superimposed three-dimensional model includes:
establishing a three-dimensional model for the target building according to the point cloud data;
and superposing the three-dimensional models according to the registration result to obtain the superposed three-dimensional models.
In one embodiment, the registration result includes a rotation matrix and a translation matrix between point cloud data sets that are adjacent in detection time; superposing the three-dimensional model according to the registration result to obtain the superimposed three-dimensional model comprises the following steps:
and translating or rotating the three-dimensional model with different detection time according to the rotation matrix and the translation matrix to obtain a superimposed three-dimensional model.
In one embodiment, the method further comprises:
collecting a crack image of the surface of a building;
identifying the crack image on the surface of the building to obtain a crack profile information graph;
and carrying out feature extraction on the crack profile information graph to obtain crack width feature points, and projecting the crack profile information graph to an initial model corresponding to the building according to the crack width feature points in the crack image to obtain a target model containing crack projection.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the present application, which are described in more detail and are not thereby to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (9)
1. A computer vision-based building structure reconstruction and expansion security investigation method, the method comprising:
scanning a target building at each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building;
carrying out registration processing on the point cloud data contained in each point cloud data set to obtain a registration result;
Performing three-dimensional model superposition processing according to the registration result and the point cloud data contained in each point cloud data set to obtain a superposed three-dimensional model;
displaying a target model corresponding to the target building according to the superimposed three-dimensional model, wherein the target model is used for reflecting deformation characteristics of the target building;
identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set; the identification result comprises the component category of each component, the coordinates of the geometric center of each component and the number of components corresponding to each component category;
determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories;
obtaining the absolute distance between the components through the coordinates of the geometric centers of the components, and comparing the absolute distance between the components with a preset safety threshold value to obtain a safety detection result corresponding to the target building;
and outputting a detection report of the target building according to the compliance detection result and the safety detection result.
2. The method of claim 1, wherein scanning the target building at each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building comprises:
And scanning the target building by utilizing an instant positioning and map construction algorithm within each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building.
3. The method of claim 1, wherein performing registration processing on the point cloud data included in each point cloud data set to obtain a registration result includes:
ordering the point cloud data contained in each point cloud data set according to the time of each point cloud data set to obtain a point cloud data set sequence;
and carrying out registration processing on each point cloud data contained in the adjacent point cloud data sets in the point cloud data set sequence through an iterative nearest point algorithm to obtain a registration result between the adjacent point cloud data sets.
4. The method according to claim 1, wherein the performing three-dimensional model superposition processing according to the registration result and the point cloud data included in each point cloud data set to obtain a superimposed three-dimensional model includes:
establishing, for each point cloud data set, a three-dimensional model of the target building according to the point cloud data contained in the point cloud data set;
and superimposing the three-dimensional models according to the registration result to obtain a superimposed three-dimensional model.
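For illustration only (not part of the claims): a sketch of building a surface model from the point cloud data of one detection time. Poisson reconstruction via the open-source Open3D library is only one possible modelling choice, and the normal-estimation radius and reconstruction depth are assumptions; the claim does not prescribe a particular modelling technique.

```python
# Surface model for one detection time via normal estimation + Poisson reconstruction.
# Search radius and reconstruction depth are illustrative assumptions.
import numpy as np
import open3d as o3d

def build_model(points: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Turn an (N, 3) point array into a triangle-mesh model of the scanned building."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```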
5. The method of claim 4, wherein the registration result comprises a rotation matrix and a translation matrix between point cloud data sets with adjacent detection times; and the step of superimposing the three-dimensional models according to the registration result to obtain the superimposed three-dimensional model comprises:
carrying out coordinate transformation on the three-dimensional models at different detection times according to the rotation matrix and the translation matrix to obtain the superimposed three-dimensional model.
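For illustration only (not part of the claims): a minimal NumPy sketch of the coordinate transformation in claim 5, mapping a model from one detection time into the reference frame of another using a rotation matrix R and translation vector t, then stacking the two clouds so that residual offsets between epochs show up as apparent deformation. All names and values are illustrative.

```python
# Rigid coordinate transformation and superposition of two detection-time models.
import numpy as np

def transform_points(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the rigid transform p' = R @ p + t to an (N, 3) array of points."""
    return points @ R.T + t

def superimpose(model_a: np.ndarray, model_b: np.ndarray,
                R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Stack the reference model and the transformed later model into one cloud."""
    return np.vstack([model_a, transform_points(model_b, R, t)])

# Example with a pure translation: model_b is shifted 0.1 m along x before stacking.
model_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
model_b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
merged = superimpose(model_a, model_b, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(merged.shape)  # (4, 3)
```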
6. A computer vision-based building structure reconstruction and expansion security inspection device, the device comprising:
the scanning module is used for scanning the target building at each preset detection time to obtain a point cloud data set corresponding to each detection time of the target building;
the registration module is used for carrying out registration processing on the point cloud data contained in each point cloud data set to obtain a registration result;
the superposition module is used for carrying out three-dimensional model superposition processing according to the registration result and the point cloud data contained in each point cloud data set to obtain a superimposed three-dimensional model;
the display module is used for displaying a target model corresponding to the target building according to the superimposed three-dimensional model, and the target model is used for reflecting deformation characteristics of the target building;
the identification module is used for identifying each point cloud data set according to a deep learning algorithm to obtain an identification result of each point cloud data set, wherein the identification result comprises the component category of each component, the coordinates of the geometric center of each component, and the number of components corresponding to each component category;
the determining module is used for determining a compliance detection result corresponding to the target building based on the identified component categories and the number of components corresponding to the component categories;
the comparison module is used for obtaining absolute distances between the components from the coordinates of the geometric centers of the components, and comparing the absolute distances with a preset safety threshold to obtain a safety detection result corresponding to the target building;
and the output module is used for outputting a detection report of the target building according to the compliance detection result and the safety detection result.
7. The apparatus of claim 6, wherein the scanning module is specifically configured to:
scan the target building by using a simultaneous localization and mapping (SLAM) algorithm at each preset detection time to obtain the point cloud data set corresponding to each detection time of the target building.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211713032.7A (CN115661212B) | 2022-12-30 | 2022-12-30 | Building structure reconstruction and expansion safety investigation method and device based on computer vision
Publications (2)
Publication Number | Publication Date
---|---
CN115661212A (en) | 2023-01-31
CN115661212B (en) | 2023-06-06
Family
ID=85023365
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211713032.7A (Active) | 2022-12-30 | 2022-12-30 | Building structure reconstruction and expansion safety investigation method and device based on computer vision
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115661212B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118628294A (en) * | 2024-08-14 | 2024-09-10 | 广东省建筑设计研究院集团股份有限公司 | Building inspection method and system based on existing building structure |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021232463A1 (en) * | 2020-05-19 | 2021-11-25 | 北京数字绿土科技有限公司 | Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium |
CN114494274A (en) * | 2022-03-31 | 2022-05-13 | 清华大学 | Building construction evaluation method, building construction evaluation device, electronic equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106225707B * | 2016-08-01 | 2018-10-23 | 三峡大学 | A method for fast monitoring of high CFRD extrusion side wall deformation
CN107316299A * | 2017-07-13 | 2017-11-03 | 云南数云信息科技有限公司 | Ancient architecture deformation monitoring method and system based on three-dimensional point cloud technology
CN108844479B * | 2018-06-27 | 2020-12-29 | 清华大学 | Method for monitoring bending deformation of members of an existing space steel structure
CN110453731B (en) * | 2019-08-15 | 2020-06-30 | 中国水利水电科学研究院 | Dam slope deformation monitoring system and method |
Non-Patent Citations (1)
Title
---
Zhang Ya; Shan Feng. Research on point cloud data registration methods for three-dimensional building reconstruction. Henan Science and Technology, 2018, (19), full text. *
Similar Documents
Publication | Title
---|---
US20210365785A1 | Method and system for performing convolutional image transformation estimation
AU2018326401C1 | Method and system for use in performing localisation
Huber et al. | Fully automatic registration of multiple 3D data sets
CN113673530B | Remote sensing image semantic segmentation method, device, computer equipment and storage medium
US10115165B2 | Management of tax information based on topographical information
US20230401691A1 | Image defect detection method, electronic device and readable storage medium
CN115661212B | Building structure reconstruction and expansion safety investigation method and device based on computer vision
CN115526892A | Image defect duplicate removal detection method and device based on three-dimensional reconstruction
CN112991429B | Box volume measuring method, device, computer equipment and storage medium
Wujanz et al. | Plane-based registration of several thousand laser scans on standard hardware
CN116503474A | Pose acquisition method, pose acquisition device, electronic equipment, storage medium and program product
Keyvanfar et al. | Performance comparison analysis of 3D reconstruction modeling software in construction site visualization and mapping
Hu et al. | VODRAC: Efficient and robust correspondence-based point cloud registration with extreme outlier ratios
Hesami et al. | Range segmentation of large building exteriors: A hierarchical robust approach
Moreira et al. | Modeling and Representing Real-World Spatio-Temporal Data in Databases (Vision Paper)
US7379599B1 | Model based object recognition method using a texture engine
US20230401670A1 | Multi-scale autoencoder generation method, electronic device and readable storage medium
Al-Temeemy et al. | Chromatic methodology for laser detection and ranging (LADAR) image description
CN115376018A | Building height and floor area calculation method, device, equipment and storage medium
CN113435384A | Target detection method, device and equipment for medium-low resolution optical remote sensing image
CN113312970A | Target object identification method, target object identification device, computer equipment and storage medium
Urmanov et al. | Computer methods of image processing of volcanoes
Dushepa | A learning-based approach for rigid image registration accuracy estimation
CN115984709B | Content identification method for rapid large-scale remote sensing image
CN115994955B | Camera external parameter calibration method and device and vehicle
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant