CN113160202A - Crack detection method and system
- Publication number: CN113160202A
- Application number: CN202110483642.1A
- Authority: CN (China)
- Prior art keywords: crack, video, image, point, mobile robot
- Prior art date: 2021-04-30
- Legal status: Pending
Classifications
- G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N 3/045: Neural networks; combinations of networks
- G06N 3/08: Neural networks; learning methods
- G06T 7/11: Segmentation and edge detection; region-based segmentation
- G06V 20/40: Scenes; scene-specific elements in video content
- G06V 20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
Abstract
The invention discloses a crack detection method and system. In the method, a mobile robot acquires a video of cracks to be identified in a target area and sends it to a remote terminal; the remote terminal receives the video and performs crack recognition and segmentation on it with a trained deep convolutional neural network model to obtain a crack image; the remote terminal then analyzes the crack image to obtain several index parameters of the crack. Remote control replaces conventional manual road inspection, improving detection efficiency and reducing safety risks to operators; the mobile robot can be driven to any position on a road to detect road cracks, and the cracks are analyzed with image processing techniques, improving analysis efficiency.
Description
Technical Field
The invention relates to the field of road crack detection, and in particular to a crack detection method and system.
Background
Cracks in infrastructure are caused mainly by material degradation and by loads from vehicles, wind, earthquakes, or environmental vibration. Because cracks in a road have a crucial influence on its safety, crack detection and measurement are important parts of structural health monitoring, and quantitative analysis of pavement cracks helps engineers evaluate the condition and durability of the pavement and confirm whether repair is needed. At present, conventional crack detection and measurement is time-consuming, labor-intensive, inefficient, and dangerous.
For prior art, reference may be made to the utility model publication CN 208949695, which discloses a road crack detector comprising universal wheels, a distance-control telescopic box, a protection pad, a support holding rod, a manually pressed body, a telescopic outlet end, a movable auxiliary frame, and auxiliary wheels arranged in the same plane as the universal wheels. The distance-control telescopic box comprises a signal-induction receiving end, an inflation release end, a magnetic-field pushing device, a stress pushing group, a protective shell, a stress trigger device, a pushing rotation device, and a main electric box to be triggered; the main electric box is embedded in the protective shell beside the magnetic-field pushing device, and one end of the stress trigger device passes movably through the top of the protective shell. To use the device, an operator presses one end of it against the road to trigger it, pushes it along the road until it reaches the required position, and then releases it, after which the device is propelled along the road to perform detection.
Manual field operation requires workers on site, demands considerable technical experience, and is difficult to carry out. The field environment is unstable and emergencies can occur, so the safety of personnel cannot be guaranteed, and the efficiency of manual operation in critical situations falls far below that of an automatic crack detection system. For scenes with unclear road conditions or special structures, manual operation becomes even more difficult and the work may be impossible to complete. In addition, the amount of road surface to be inspected is large, and long sessions of visual inspection are a great challenge for inspectors, causing eye fatigue and reducing detection efficiency. For road crack analysis, manual analysis is likewise time- and labor-intensive.
Disclosure of Invention
The invention provides a crack detection method and system to solve one or more technical problems in the prior art, or at least to provide a beneficial alternative or the conditions for creating one.
In a first aspect, an embodiment of the present invention provides a crack detection method, where the method includes:
the mobile robot acquires a crack video to be identified in a target area and sends the crack video to the remote terminal;
the remote terminal receives the crack video to be recognized, and performs crack recognition and segmentation on the crack video to be recognized by using the trained deep convolutional neural network model to obtain a crack image;
and the remote terminal analyzes the crack image to obtain a plurality of index parameters of the crack.
Furthermore, the trained deep convolutional neural network model is an ensemble of a plurality of trained convolutional neural networks: each trained convolutional neural network performs crack recognition and segmentation on the crack video to be recognized to obtain an initial segmentation image, and the initial segmentation images output by all of the trained convolutional neural networks are fused to obtain the crack image;
the trained deep convolutional neural network model is obtained by the following method:
acquiring a crack image dataset;
carrying out crack marking on the crack image data set, and dividing the marked crack image data set into a training set and a testing set;
training the deep convolutional neural network model with the training set;
and testing the deep convolutional neural network model by using the test set, and obtaining the trained deep convolutional neural network model after the test is passed.
Further, the plurality of index parameters includes at least one of the length of the crack, the maximum width of the crack, the area of the crack, and the average width of the crack.
Further, the length of the crack is calculated as follows:

using a fast parallel thinning algorithm to skeletonize the crack in the crack image at the single-pixel level to obtain a skeleton line image;

acquiring the pixel values of all coordinate points in the skeleton line image:

$$I(x, y) \in \{0, 1\}$$

where $(x, y)$ denotes a coordinate in the skeleton line image and $I(x, y)$ denotes the pixel value at $(x, y)$;

counting the number of points with $I(x, y) = 1$ as $n$, where $n$ is the total number of pixel points on the skeleton line;

calculating the length $L_C$ of the crack:

$$L_C = \sum_{i=1}^{n-1} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$$

where $\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$ is the distance between adjacent pixel points on the skeleton line (one pixel by default), $(x_i, y_i)$ are the coordinates of the $i$-th point of the skeleton line, and $(x_{i+1}, y_{i+1})$ are the coordinates of the $(i+1)$-th point of the skeleton line.
Further, the maximum width $d_{Max}$ of the crack is calculated as follows:

let $t$ denote the $t$-th point of the skeleton line, $t = 1, 2, \ldots, n$; the tangent at the $t$-th point of the skeleton line is obtained by linear interpolation, and the normal at that point is obtained from the tangent;

acquiring the coordinates of the $t$-th point of the skeleton line; at the position of the crack image corresponding to those coordinates, judging whether the pixel points along the positive and negative normal vectors of the $t$-th point belong to crack pixels, thereby obtaining the points A and B on the crack edges; and calculating the distance between A and B as the crack width at the $t$-th point;
the crack widths of the n points are compared, and the maximum value is taken as the maximum width of the crack.
Further, the area of the crack is calculated as follows: starting from the origin of the crack image, judge pixel by pixel whether each pixel belongs to the crack, and count the number of crack pixels to obtain the area of the crack.
Further, the average width of the crack is calculated as follows:

$$\bar{d} = \frac{Area}{L_C}$$

where $\bar{d}$ is the average width of the crack, $Area$ is the area of the crack, and $L_C$ is the length of the crack.
Further, before the mobile robot acquires the crack video to be identified in the target area, the method further comprises the following steps:

the mobile robot acquires an environment video and a road surface video and sends them to the remote terminal;

the remote terminal processes the environment video to obtain a processed environment video and sends the processed environment video to the VR device;

the remote controller sends a control instruction to the mobile robot to control it to move to the target area and acquire the crack video to be identified in the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the processed environment video presented by the VR device.
In a second aspect, an embodiment of the present invention further provides a crack detection system, where the system includes:
the mobile robot is used for acquiring a to-be-identified crack video of the target area and sending the to-be-identified crack video to the remote terminal;
the remote terminal is used for receiving the crack video to be recognized, and performing crack recognition and segmentation on the crack video to be recognized by utilizing the trained deep convolutional neural network model to obtain a crack image;
and the remote terminal is also used for analyzing the crack image to obtain a plurality of index parameters of the crack.
Further, the system also includes a VR device and a remote controller;
the mobile robot is also used for acquiring an environment video and a road surface video and sending the environment video and the road surface video to the remote terminal;
the remote terminal is also used for receiving the environment video and the road surface video and processing the environment video to obtain a processed environment video;
the VR device is used for acquiring the processed environment video from the remote terminal;

and the remote controller is used for sending a control instruction to the mobile robot to control it to move to the target area and acquire the crack video to be identified in the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the processed environment video presented by the VR device.
The crack detection method and system provided by the embodiments of the invention have at least the following beneficial effects: the mobile robot acquires the crack video and sends it to the remote terminal, where cracks are recognized and segmented by deep learning, which guarantees the quality of crack detection, and the segmented image is then processed and analyzed to obtain several index parameters of the crack. Remote control replaces conventional manual road inspection, improving detection efficiency and reducing safety risks to operators. The mobile robot can be driven to any position on the road for inspection, which is especially direct and efficient for pavements with large cracked areas, and analyzing the cracks with image processing techniques improves analysis efficiency.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
Fig. 1 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a crack detection system according to an embodiment of the present invention;
FIG. 3 is a flow chart of a crack detection method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a deep convolutional neural network model according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present invention;
fig. 6 is a schematic diagram of detecting the width of a crack according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that although functional block divisions are provided in the system drawings and logical orders are shown in the flowcharts, in some cases, the steps shown and described may be performed in different orders than the block divisions in the systems or in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Fig. 1 discloses a mobile robot comprising a high-definition camera 101, a panoramic camera 102, four active guiding wheels 103, four electrical slip rings 104, a plurality of shock absorbers 105, and two laser radars 106.

The four active guiding wheels 103 are installed at the four corners of the chassis 100; each wheel 103 is driven by two actuators (not shown) and steers independently, which ensures the robot's mobility on the road. An electrical slip ring 104 is mounted on each wheel 103 to keep the wiring connected as the wheel rotates. The suspension of each wheel 103 carries two shock absorbers 105, reducing the influence of disturbances on the vehicle body and improving the stability of the mobile robot. The two laser radars 106 are installed at the two ends of a diagonal of the chassis and sense the environment around the mobile robot.

The high-definition camera 101 and the panoramic camera 102 are mounted on the same side of the mobile robot: the high-definition camera 101 shoots road surface video of the relevant pavement, and the panoramic camera 102 shoots environment video of the road's surroundings. When the mobile robot has moved to the target area, the high-definition camera 101 shoots the video of cracks to be identified in the target area.
The mobile robot also comprises an emergency brake 107; pressing it stops the robot immediately in an emergency, improving safety.
The mobile robot further comprises an industrial personal computer (not shown in the figure) and a wireless communication module (not shown in the figure), wherein the industrial personal computer is respectively connected with the high-definition camera 101, the panoramic camera 102, the laser radar 106 and the wireless communication module, and the industrial personal computer sends videos to a remote terminal through the wireless communication module.
The mobile robot also includes a battery box 108 to provide operating power to the mobile robot.
Fig. 2 is a diagram of a crack detection system according to an embodiment of the present invention, which includes a remote terminal, a mobile robot, a VR device, and a remote controller, where the remote terminal and the remote controller are respectively in communication with the mobile robot through a wireless network.
The mobile robot sends the road surface video shot by the high-definition camera and the environment video shot by the panoramic camera to the remote terminal, and the remote terminal sends the processed environment video to the VR device. The VR device has a display screen, and the remote terminal has a display that can simultaneously show the environment video, the road surface video (or the crack video to be identified in the target area), and the crack image. An operator observes the environment video through the VR device and compares the environment video and the road surface video on the display to assess the traffic and ground conditions of the road. When the operator sees a region of interest, the operator controls the mobile robot to move to the target area, i.e., that region of interest. A region of interest is either a road area where the operator can see cracks from the road surface video and the processed environment video presented by the VR device, or an area where it is hard to tell whether cracks are present, for example a road surface with water stains, zebra crossings, leaves, or other interference that is difficult to judge by direct observation. The mobile robot then collects the crack video to be identified and the environment video of the target area and sends them to the remote terminal. The remote terminal stores a trained deep convolutional neural network model in advance, obtains a crack image by performing crack recognition and segmentation on the crack video to be identified, and analyzes the crack image to obtain several index parameters of the crack, such as the maximum width, length, area, and average width of the crack.
Fig. 3 shows a crack detection method disclosed in an embodiment of the present invention, which includes the following steps:
s101, the mobile robot acquires a to-be-identified crack video of a target area and sends the to-be-identified crack video to a remote terminal;
the target area refers to an area including a crack or an area having a crack which is difficult to distinguish, for example, a road surface area having water stains, zebra stripes, road blade interference, and the like which are difficult to directly observe and determine. The mobile robot acquires a to-be-identified crack video of a target area through the high-definition camera and sends the to-be-identified crack video to the remote terminal through the wireless network.
Step S101 is preceded by steps S201-S203:
s201, the mobile robot acquires an environment video and a road surface video and sends the environment video and the road surface video to a remote terminal;
S202, the remote terminal processes the environment video and sends the processed environment video to the VR device;

and S203, the remote controller sends a control instruction to the mobile robot to control it to move to the target area and acquire the crack video to be identified in the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the environment video presented by the VR device.
In one embodiment, the mobile robot acquires the crack video to be identified in the target area together with an environment video of its surroundings there, and sends both to the remote terminal, so that the user can judge from them whether the mobile robot has reached the target area and observe the traffic and ground conditions of the road.
S102, the remote terminal receives the crack video to be recognized and performs crack recognition and segmentation on it with the trained deep convolutional neural network model to obtain a crack image;
the method comprises the steps that a trained deep convolutional neural network model is integrated by a plurality of trained convolutional neural networks, each trained convolutional neural network carries out crack recognition and segmentation on a to-be-recognized crack video to obtain an initial segmentation image, and the initial segmentation images output by each trained convolutional neural network are fused to obtain a crack image; as shown in fig. 4, the deep convolutional neural network model is integrated from 3 identical convolutional neural networks. As shown in fig. 5, each convolutional neural network comprises an input layer, convolutional layers and fully-connected layers, the convolutional layers comprise six groups of convolutional layers, the fully-connected layers comprise three fully-connected layers, the input layer selects a part of an original image in a sliding window manner as an input of a first group of convolutional layers, an output of the input layer serves as an input of the first group of convolutional layers, the first group of convolutional layers to a sixth group of convolutional layers are connected in sequence, an output of the sixth group of convolutional layers serves as an input of the fully-connected layers, each group of convolutional layers outputs a C × H × W characteristic diagram, wherein H and W are respectively the length and width of a characteristic diagram, C is the number of channels and is a positive integer, an output of the input layer is a diagram with the size of 3 × 27 × 27 selected in the original image, the first group of convolutional layers and the second group of convolutional layers each output a 16 × 27 × 27 characteristic diagram, the third group of convolutional layers outputs a 16 × 14 × 14 characteristic diagram, and the fourth group of convolutional layers and the fifth group of convolutional layers each output a 32 × 14 characteristic diagram, outputting a 32 × 7 × 7 characteristic diagram by a sixth group of convolutional layers, wherein the output of the sixth group of convolutional layers is used as the input of a first fully-connected layer, the first fully-connected layer is sequentially connected to a third fully-connected layer, the output of the third fully-connected layer is used as the output of a convolutional neural network, the third fully-connected layer is a softmax layer, and the number of nodes from the first fully-connected layer to the third fully-connected layer is 64, 64 and 25 respectively.
The image fusion comprises: adding up the pixel values of the initial segmentation images output by the trained convolutional neural networks and computing the per-pixel average; when the average at a pixel is greater than or equal to a set threshold, the pixel is considered crack and the corresponding pixel coordinate takes the value 1, and when the average is below the threshold, the pixel is considered non-crack and takes the value 0. The resulting crack image is therefore a binary image.
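A minimal sketch of this fusion rule, assuming each ensemble member outputs a binary mask; the threshold value of 0.5 is an assumption, since the text only speaks of "a set threshold":

```python
import numpy as np

def fuse_segmentations(masks, threshold=0.5):
    """masks: list of HxW binary arrays, one per trained CNN."""
    mean = np.mean(np.stack(masks, axis=0), axis=0)   # per-pixel average
    return (mean >= threshold).astype(np.uint8)       # 1 = crack, 0 = non-crack

# three ensemble members voting on a 4x4 patch
votes = [np.eye(4, dtype=np.uint8)] * 2 + [np.zeros((4, 4), dtype=np.uint8)]
print(fuse_segmentations(votes))   # diagonal pixels survive the 2/3 vote
```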
The trained deep convolutional neural network model is obtained by the following method:
s301, acquiring a crack image data set;
The crack image dataset contains no fewer than 600 crack images.

S302, performing crack annotation on the crack image dataset, and dividing the annotated crack image dataset into a training set and a test set;

The crack annotation is carried out manually, and the training set and test set are divided in a ratio of 7:3 or 6:4.
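A sketch of the 7:3 division using scikit-learn; the arrays below are placeholders standing in for the manually annotated dataset, and the fixed random seed is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(600, 27, 27, 3)           # >= 600 crack images
masks = np.random.randint(0, 2, (600, 27, 27))    # manual crack annotations

x_train, x_test, y_train, y_test = train_test_split(
    images, masks, test_size=0.3, random_state=0) # 7:3; use 0.4 for 6:4
```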
S303, training a deep convolutional neural network model by using a training set;
and S304, testing the deep convolutional neural network model by using the test set, and obtaining the trained deep convolutional neural network model after the test is passed.
The deep convolutional neural network model is trained with a deep learning method and automatically identifies and segments the cracks contained in the road surface video frames acquired by the mobile robot's high-definition camera.
S103, the remote terminal analyzes the crack image to obtain a plurality of index parameters of the crack.
The plurality of index parameters includes at least one of the length of the crack, the maximum width of the crack, the area of the crack, and the average width of the crack.
The length of the crack is calculated as follows:

Using a fast parallel thinning algorithm, the crack in the crack image is skeletonized at the single-pixel level and converted into a skeleton line, giving a skeleton line image. The fast parallel thinning algorithm (FPT) is an existing image processing algorithm and is not described again here. Because the crack image is analyzed at the single-pixel level, each crack index is obtained with high accuracy.
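As an illustration, the 2-D `skeletonize` routine in scikit-image implements the Zhang-Suen fast parallel thinning algorithm and yields the single-pixel skeleton line described here:

```python
import numpy as np
from skimage.morphology import skeletonize

crack_image = np.zeros((64, 64), dtype=np.uint8)
crack_image[30:34, 5:60] = 1                      # toy binary crack mask
skeleton = skeletonize(crack_image.astype(bool))  # single-pixel skeleton line
print(int(skeleton.sum()))                        # total skeleton points n
```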
Acquiring pixel values of all coordinate points in the skeleton line image:
$$I(x, y) \in \{0, 1\}$$

where $(x, y)$ denotes a coordinate in the skeleton line image and $I(x, y)$ denotes the pixel value at $(x, y)$;

counting the number of points with $I(x, y) = 1$ as $n$, where $n$ is the total number of pixel points on the skeleton line;

calculating the length $L_C$ of the crack:

$$L_C = \sum_{i=1}^{n-1} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$$

where $\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$ is the distance between adjacent pixel points on the skeleton line, $(x_i, y_i)$ are the coordinates of the $i$-th point of the skeleton line, and $(x_{i+1}, y_{i+1})$ are the coordinates of the $(i+1)$-th point of the skeleton line.
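A sketch of the length computation: rather than ordering the skeleton points explicitly, it sums the distance over every pair of 8-adjacent skeleton pixels, which coincides with the summation above when the skeleton is a simple one-pixel-wide curve:

```python
import numpy as np

def crack_length(skel):
    """Sum distances between 8-adjacent skeleton pixels, counting each
    pair once: 1 for horizontal/vertical pairs, sqrt(2) for diagonals."""
    s = skel.astype(bool)
    length = 0.0
    length += np.count_nonzero(s[:, :-1] & s[:, 1:])   # horizontal pairs
    length += np.count_nonzero(s[:-1, :] & s[1:, :])   # vertical pairs
    length += np.sqrt(2) * np.count_nonzero(s[:-1, :-1] & s[1:, 1:])
    length += np.sqrt(2) * np.count_nonzero(s[:-1, 1:] & s[1:, :-1])
    return length
```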
Calculating the maximum width of the crack:

Let $t$ denote the $t$-th point of the skeleton line, $t = 1, 2, \ldots, n$; the tangent at the $t$-th point of the skeleton line is obtained by linear interpolation, and the normal at that point is obtained from the tangent.

Acquiring the coordinates of the $t$-th point of the skeleton line; at the position of the crack image corresponding to those coordinates, judging whether the pixel points along the positive and negative normal vectors of the $t$-th point belong to crack pixels, thereby obtaining the points A and B on the crack edges; and calculating the distance between A and B as the crack width at the $t$-th point;
the crack widths of the n points are compared, and the maximum value is taken as the maximum width of the crack.
The maximum width of the crack is calculated as follows:
S401, let $k = 1$, where $k$ denotes the $k$-th point of the crack skeleton line, and let $d_{Max} = 0$, where $d_{Max}$ represents the maximum width of the crack;

S402, compute the tangent $l$ at the $k$-th point of the skeleton line, and from the tangent compute the normal $l'$ at the $k$-th point;

S403, acquire the coordinates $(x, y)$ of the $k$-th point of the skeleton line;

S404, at the position $(x, y)$ of the crack image corresponding to the $k$-th point, acquire the pixel points along the normal $l'$ in the positive and negative normal-vector directions, judge whether the acquired pixel points belong to crack pixels, obtain the points A and B at the crack edges, and calculate the distance between A and B as the crack width $d$ at the $k$-th point, as shown in Fig. 6;

S405, $d_{Max} = \max(d_{Max}, d)$;

S406, $k = k + 1$;

S407, judge whether $k \le n$; if so, go to step S402; otherwise, go to step S408;

S408, output $d_{Max}$ as the maximum width of the crack.
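A sketch of steps S401 to S408; the central-difference tangent over ordered skeleton points and the unit step along the normal are assumptions standing in for the linear-interpolation construction described above:

```python
import numpy as np

def max_crack_width(mask, skel_pts):
    """mask: HxW binary crack image; skel_pts: (n, 2) array of ordered
    (x, y) skeleton points. Returns d_Max."""
    h, w = mask.shape
    d_max = 0.0                                         # S401
    for k in range(1, len(skel_pts) - 1):
        tx, ty = skel_pts[k + 1] - skel_pts[k - 1]      # tangent l (S402)
        norm = np.hypot(tx, ty)
        if norm == 0:
            continue
        nx, ny = -ty / norm, tx / norm                  # unit normal l'
        ends = []
        for sign in (1.0, -1.0):                        # +/- normal directions
            x, y = map(float, skel_pts[k])              # point k (S403)
            while True:                                 # walk until leaving crack
                cx, cy = x + sign * nx, y + sign * ny
                xi, yi = int(round(cx)), int(round(cy))
                if not (0 <= xi < w and 0 <= yi < h) or mask[yi, xi] == 0:
                    break
                x, y = cx, cy
            ends.append((x, y))                         # edge point A or B (S404)
        (ax, ay), (bx, by) = ends
        d_max = max(d_max, np.hypot(ax - bx, ay - by))  # S405
    return d_max                                        # S408
```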
The area of the crack is calculated as follows: starting from the origin of the crack image, judge pixel by pixel whether each pixel belongs to the crack, and count the number of crack pixels to obtain the area of the crack. A pixel whose value is 1 belongs to the crack; otherwise it does not.
The average width of the crack is calculated as follows:

$$\bar{d} = \frac{Area}{L_C}$$

where $\bar{d}$ is the average width of the crack, $Area$ is the area of the crack, and $L_C$ is the length of the crack.
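A sketch of the two remaining indices, assuming a binary crack image (value 1 for crack pixels) and the `crack_length` helper sketched above:

```python
import numpy as np

def crack_area(crack_image):
    """Scan the binary crack image and count the pixels whose value is 1."""
    return int(np.count_nonzero(crack_image == 1))

def mean_crack_width(crack_image, skeleton):
    """Average width = Area / L_C, as in the formula above."""
    length = crack_length(skeleton)
    return crack_area(crack_image) / length if length > 0 else 0.0
```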
With this crack detection method, the index parameters of road cracks can be analyzed well under varied conditions, such as dim light, bright light, shadow, leaves, and water stains.
The embodiment of the invention also provides a crack detection system, which comprises:
the mobile robot is used for acquiring a to-be-identified crack video of the target area and sending the to-be-identified crack video to the remote terminal;
the remote terminal is used for receiving the crack video to be recognized, and performing crack recognition and segmentation on the crack video to be recognized by utilizing the trained deep convolutional neural network model to obtain a crack image;
and the remote terminal is also used for analyzing the crack image to obtain a plurality of index parameters of the crack.
In an embodiment, the system further comprises a VR device and a remote controller;
the mobile robot is also used for acquiring an environment video and a road surface video and sending the environment video and the road surface video to the remote terminal;
the remote terminal is also used for receiving the environment video and the road surface video and processing the environment video to obtain a processed environment video;
the VR device is used for acquiring the processed environment video from the remote terminal;

and the remote controller is used for sending a control instruction to the mobile robot to control it to move to the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the processed environment video presented by the VR device.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.
Claims (10)
1. A crack detection method, characterized in that the method comprises:
the mobile robot acquires a crack video to be identified in a target area and sends the crack video to the remote terminal;
the remote terminal receives the crack video to be recognized, and performs crack recognition and segmentation on the crack video to be recognized by using the trained deep convolutional neural network model to obtain a crack image;
and the remote terminal analyzes the crack image to obtain a plurality of index parameters of the crack.
2. The crack detection method of claim 1, wherein the trained deep convolutional neural network model is an ensemble of a plurality of trained convolutional neural networks, each trained convolutional neural network performs crack recognition and segmentation on the crack video to be recognized to obtain an initial segmentation image, and the initial segmentation images output by all of the trained convolutional neural networks are fused to obtain the crack image;
the trained deep convolutional neural network model is obtained by the following method:
acquiring a crack image dataset;
carrying out crack marking on the crack image data set, and dividing the marked crack image data set into a training set and a testing set;
training the deep convolutional neural network model with the training set;
and testing the deep convolutional neural network model by using the test set, and obtaining the trained deep convolutional neural network model after the test is passed.
3. The crack detection method of claim 1, wherein the plurality of index parameters includes at least one of a length of the crack, a maximum width of the crack, a crack area, and an average width of the crack.
4. The crack detection method of claim 3, wherein the length of the crack is calculated as follows:

using a fast parallel thinning algorithm to skeletonize the crack in the crack image at the single-pixel level to obtain a skeleton line image;

acquiring the pixel values of all coordinate points in the skeleton line image:

$$I(x, y) \in \{0, 1\}$$

where $(x, y)$ denotes a coordinate in the skeleton line image and $I(x, y)$ denotes the pixel value at $(x, y)$;

counting the number of points with $I(x, y) = 1$ as $n$, where $n$ is the total number of pixel points on the skeleton line;

calculating the length $L_C$ of the crack:

$$L_C = \sum_{i=1}^{n-1} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}.$$
5. The crack detection method of claim 3, wherein the maximum width $d_{Max}$ of the crack is calculated as follows:

let $t$ denote the $t$-th point of the skeleton line, $t = 1, 2, \ldots, n$; the tangent at the $t$-th point of the skeleton line is obtained by linear interpolation, and the normal at that point is obtained from the tangent;

acquiring the coordinates of the $t$-th point of the skeleton line; at the position of the crack image corresponding to those coordinates, judging whether the pixel points along the positive and negative normal vectors of the $t$-th point belong to crack pixels, thereby obtaining the points A and B on the crack edges; and calculating the distance between A and B as the crack width at the $t$-th point;
the crack widths of the n points are compared, and the maximum value is taken as the maximum width of the crack.
6. The crack detection method of claim 4, wherein the area of the crack is calculated as follows: starting from the origin of the crack image, judging pixel by pixel whether each pixel belongs to the crack, and counting the number of crack pixels to obtain the area of the crack.
8. The crack detection method of claim 1, wherein before the mobile robot acquires the crack video to be identified in the target area, the method further comprises:

the mobile robot acquires an environment video and a road surface video and sends them to the remote terminal;

the remote terminal processes the environment video to obtain a processed environment video and sends the processed environment video to the VR device;

the remote controller sends a control instruction to the mobile robot to control it to move to the target area and acquire the crack video to be identified in the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the processed environment video presented by the VR device.
9. A fracture detection system, the system comprising:
the mobile robot is used for acquiring a to-be-identified crack video of the target area and sending the to-be-identified crack video to the remote terminal;
the remote terminal is used for receiving the crack video to be recognized, and performing crack recognition and segmentation on the crack video to be recognized by utilizing the trained deep convolutional neural network model to obtain a crack image;
and the remote terminal is also used for analyzing the crack image to obtain a plurality of index parameters of the crack.
10. The crack detection system of claim 9, wherein the system further comprises a VR device and a remote controller;
the mobile robot is also used for acquiring an environment video and a road surface video and sending the environment video and the road surface video to the remote terminal;
the remote terminal is also used for receiving the environment video and the road surface video and processing the environment video to obtain a processed environment video;
the VR device is used for acquiring the processed environment video from the remote terminal;

and the remote controller is used for sending a control instruction to the mobile robot to control it to move to the target area and acquire the crack video to be identified in the target area, wherein the control instruction is generated by a user operating the remote controller according to the road surface video and the processed environment video presented by the VR device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110483642.1A CN113160202A (en) | 2021-04-30 | 2021-04-30 | Crack detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113160202A (en) | 2021-07-23
Family
ID=76873058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110483642.1A (pending) | Crack detection method and system | 2021-04-30 | 2021-04-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113160202A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103364408A (en) * | 2013-07-10 | 2013-10-23 | 三峡大学 | Method for detecting underwater surface crack of hydraulic concrete structure by using underwater robot system |
CN110378879A (en) * | 2019-06-26 | 2019-10-25 | 杭州电子科技大学 | A kind of Bridge Crack detection method |
Non-Patent Citations (2)
Title |
---|
ZHUN FAN ET AL: "Ensemble of Deep Convolutional Neural Networks for Automatic Pavement Crack Detection and Measurement", Coatings 2020, 10, 152; doi:10.3390/coatings10020152 *
MA Guoxin: "Research on building surface crack detection methods based on images captured by unmanned aerial vehicles", China Master's Theses Full-text Database, Engineering Science and Technology II *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113686874A (en) * | 2021-08-16 | 2021-11-23 | 沭阳林冉塑业有限公司 | Mechanical part damage detection method and system based on artificial intelligence |
CN113686874B (en) * | 2021-08-16 | 2022-08-02 | 沭阳林冉塑业有限公司 | Mechanical part damage detection method and system based on artificial intelligence |
CN114419080A (en) * | 2022-01-26 | 2022-04-29 | 南昌市建筑科学研究所(南昌市建筑工程质量检测中心) | Curtain wall inspection system and method |
CN114419080B (en) * | 2022-01-26 | 2023-05-02 | 南昌市建筑科学研究所(南昌市建筑工程质量检测中心) | Curtain wall inspection system and method |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210723