CN116739357B - Multi-mode fusion perception city existing building wide area monitoring and early warning method and device - Google Patents
- Publication number
- CN116739357B (application number CN202311029710.2A)
- Authority
- CN
- China
- Prior art keywords
- building
- security risk
- deformation
- level
- monitoring
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/08—Construction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A30/00—Adapting or protecting infrastructure or their operation
- Y02A30/60—Planning or developing urban green infrastructure
Abstract
The invention discloses a multi-mode fusion perception city existing building wide area monitoring and early warning method and device, belonging to the technical field of building health monitoring. The method comprises the following steps: collecting basic attribute data of all buildings in a target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and its appearance images; determining the security risk level of each building based on its appearance images and basic attribute data; determining a deformation monitoring mode for each building according to its security risk level and monitoring the deformation of the building; and, according to the monitoring results, issuing graded early warnings for buildings whose deformation exceeds a preset deformation threshold. With the scheme of the invention, wide-area monitoring and early warning of the deformation risk of existing urban buildings can be realized at a controllable cost, supporting urban safety management.
Description
Technical Field
The invention relates to the technical field of building health monitoring, in particular to a multi-mode fusion perception method and device for wide-area monitoring and early warning of an existing city building.
Background
The large number of buildings in a city are basic elements supporting urban functions and human production and life, and their safety must be ensured first. Deformation monitoring of urban buildings is a necessary means of guaranteeing building safety.
The traditional means of building deformation monitoring is to install displacement meters, accelerometers and other contact sensors on different floors of a building, so that early warning decisions can be made from the deformation data recorded in real time. However, the contact sensing equipment currently on the market requires, in addition to the sensor terminals, data acquisition instruments and matching software systems, so a complete sensing system is expensive. Moreover, the service life of current contact sensing equipment is limited, generally only a few years to somewhat over ten years, far shorter than the service life of a normal building, and common external environmental effects such as corrosion and lightning strikes further increase the failure probability of such dedicated sensing equipment. Therefore, conventional technical means based on contact sensing equipment can hardly meet the deformation monitoring needs of the large number of buildings in a city, in terms of both cost and benefit.
In recent years, non-contact measurement techniques for building deformation based on computer vision have gradually developed. However, generally only a portion of the buildings in a city need to be monitored, so when applying non-contact deformation measurement it is first necessary to determine which buildings in the city need monitoring; a mature method for quickly screening, out of a large number of buildings, the high-risk buildings that need monitoring is still lacking.
Therefore, neither contact sensors alone nor computer-vision-based non-contact measurement techniques can effectively meet the wide-area monitoring needs of existing urban buildings, and no mature, feasible method is currently available for realizing wide-area monitoring and early warning of existing urban buildings at limited cost.
Disclosure of Invention
The invention provides a multi-mode fusion perception city existing building wide area monitoring and early warning method and device, which are used to solve the technical problem that the wide-area monitoring needs of existing urban buildings cannot be met effectively by relying only on contact sensors or on computer-vision-based non-contact measurement techniques, and which realize low-cost, practically operable wide-area monitoring and efficient early warning of the deformation risk of existing urban buildings.
In order to solve the technical problems, the invention provides the following technical scheme:
in one aspect, the invention provides a multi-mode fusion perceived city existing building wide area monitoring and early warning method, which comprises the following steps:
collecting basic attribute data of all buildings in a target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images;
determining a security risk level of the building based on the appearance image and the basic attribute data of the building;
determining a deformation monitoring mode of the building according to the safety risk level of the building, and monitoring the deformation of the building;
and according to the monitoring result, implementing hierarchical early warning on the building with the deformation degree exceeding the preset deformation threshold.
Further, the basic attribute data includes: building year, building structure type, building function, building height and building bottom-surface contour data; the building bottom-surface contour data comprise the longitude and latitude coordinates of each corner point forming the bottom-surface contour shape.
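For illustration only, one possible way to organize such a basic attribute record in code is sketched below; the field names and types are assumptions and do not limit the claimed data model.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BuildingAttributes:
    """Basic attribute record for one building (field names are illustrative)."""
    building_id: str
    build_year: int                        # building year
    structure_type: str                    # e.g. "masonry", "RC frame"
    function: str                          # e.g. "residential", "office"
    height_m: float                        # building height in metres
    footprint: List[Tuple[float, float]]   # (longitude, latitude) of each corner
                                           # point of the bottom-surface contour
```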
Further, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images, wherein the method comprises the following steps:
setting longitude and latitude coordinates of an unmanned aerial vehicle aerial photographing path node based on building height and building bottom surface contour data;
acquiring images of all buildings in a target city by using the unmanned aerial vehicle through longitude and latitude coordinates of a set unmanned aerial vehicle aerial photographing path node, and recording state data of the unmanned aerial vehicle when each picture is photographed in the acquisition process, wherein the state data comprise longitude and latitude coordinates, a pitch angle, a roll angle and a course angle of the unmanned aerial vehicle;
for each building in the target city, screening out, from all the images of all buildings acquired by the unmanned aerial vehicle, the images that contain the current building, according to the building bottom-surface contour data and the state data;
and segmenting the appearance image of the current building from all the images comprising the current building through a preset image segmentation network, and establishing a one-to-one or one-to-many mapping relation between each building of the target city and the appearance image of each building.
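For illustration only, a minimal Python sketch of this screening and building-to-image mapping is given below, using the attribute record sketched earlier. The equirectangular distance approximation, the distance and field-of-view thresholds, and the state-data field names are assumptions, not part of the claimed method.

```python
import math

def image_contains_building(state, footprint, max_dist_m=80.0, half_fov_deg=40.0):
    """Simplified visibility test: the footprint centroid must lie within a
    distance and heading cone of the UAV when the picture was taken.
    `state` carries the UAV longitude/latitude and heading (course angle)."""
    lon_c = sum(lon for lon, lat in footprint) / len(footprint)
    lat_c = sum(lat for lon, lat in footprint) / len(footprint)
    # equirectangular approximation of the ground distance (metres)
    dx = (lon_c - state["lon"]) * 111_320 * math.cos(math.radians(state["lat"]))
    dy = (lat_c - state["lat"]) * 110_540
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360            # 0 deg = north
    off_axis = abs((bearing - state["heading"] + 180) % 360 - 180)
    return dist <= max_dist_m and off_axis <= half_fov_deg

def build_mapping(buildings, images):
    """One-to-many mapping: building id -> list of image paths containing it.
    Each item of `images` is assumed to be {"path": ..., "state": ...}."""
    mapping = {b.building_id: [] for b in buildings}
    for img in images:
        for b in buildings:
            if image_contains_building(img["state"], b.footprint):
                mapping[b.building_id].append(img["path"])
    return mapping
```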
Further, the determining the security risk level of the building based on the appearance image and the basic attribute data of the building includes:
inputting the appearance image and basic attribute data of the building into a pre-trained security risk classification model, and obtaining the security risk level of the building by using the security risk classification model; when the security risk classification model is utilized to obtain the security risk level of the building, only one appearance image corresponding to the building is input into the pre-trained security risk classification model each time so as to obtain the security risk level corresponding to the building; if a plurality of appearance images correspond to the building, sequentially taking each appearance image as input of a security risk classification model to obtain the security risk level of the building determined by each appearance image, and taking the highest level of the security risk levels obtained by the plurality of appearance images as the security risk level of the building;
the training process of the security risk classification model comprises the following steps:
basic attribute data and appearance images of a preset number of existing buildings are collected, and the safety risk level of each existing building is determined by organizing evaluation staff through field investigation;
constructing a sample data set by utilizing the collected basic attribute data, appearance images and security risk levels of the preset number of existing buildings;
constructing a security risk classification model by adopting a deep neural network model;
training the constructed security risk classification model by utilizing the sample data set to obtain a trained security risk classification model; the input of the security risk classification model is an appearance image and basic attribute data of a building, and the output of the security risk classification model is a security risk level of the building;
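For illustration, the per-building aggregation over multiple appearance images described above can be sketched as follows; the model interface (a `predict` method returning a level letter) is an assumption.

```python
RISK_LEVELS = ["A", "B", "C", "D"]   # ordered from lowest to highest risk

def classify_building(model, attribute_vector, appearance_images):
    """Run the trained security risk classification model once per appearance
    image and keep the highest (worst) predicted level for the building."""
    worst = 0
    for image in appearance_images:
        level = model.predict(image, attribute_vector)   # returns "A".."D"
        worst = max(worst, RISK_LEVELS.index(level))
    return RISK_LEVELS[worst]
```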
further, the security risk grades of the building are divided into A, B, C, D grades from low to high;
level A means that, according to the preset component risk assessment standard, the building contains no dangerous components;
level B means that, according to the preset component risk assessment standard, the building contains dangerous components, but fewer than 5% of all structural components of the building are dangerous components;
level C means that, according to the preset component risk assessment standard, the building contains dangerous components, and 5%-30% of all structural components of the building are dangerous components;
level D means that, according to the preset component risk assessment standard, more than 30% of all structural components of the building are dangerous components.
Further, the determining a building deformation monitoring mode according to the security risk level of the building includes:
for a building with security risk level D, contact sensors are installed and used to monitor the deformation of the building; the contact sensors include displacement meters and accelerometers;
for a building with security risk level C, if existing public cameras around the building can be used for deformation monitoring, the deformation result of the building is obtained with a preset optical flow tracking algorithm from the video of the current building captured by those cameras; if several cameras are usable for deformation monitoring, the average of the deformation data obtained from these cameras is taken as the deformation result of the building; if no existing public camera is available around the building, contact sensors are installed on the building to monitor its deformation;
for a building with security risk level B or A, no deformation monitoring is carried out.
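For illustration, this selection of the monitoring mode can be sketched as a small dispatch function; the return labels are assumptions used only for this sketch.

```python
def choose_monitoring_mode(risk_level, nearby_public_cameras):
    """Map a building's security risk level to a deformation monitoring mode.
    Returns one of "contact_sensor", "public_camera_optical_flow", "none"."""
    if risk_level == "D":
        return "contact_sensor"            # displacement meters + accelerometers
    if risk_level == "C":
        if nearby_public_cameras:          # reuse existing public cameras
            return "public_camera_optical_flow"
        return "contact_sensor"            # no usable camera, fall back
    return "none"                          # levels A and B are not monitored
```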
Further, the optical flow tracking algorithm is a sparse optical flow tracking algorithm.
Further, when the deformation result of the building is obtained by using a sparse optical flow tracking algorithm, the sparse optical flow tracking algorithm increases the stability of feature point tracking by the following steps:
reverse check step: the sparse optical flow tracking algorithm takes a feature point p_i in frame i of the video as the reference point and searches frame i+1 for the corresponding feature point p_{i+1}; to ensure the tracking quality of the feature points, a reverse check is performed, that is, p_{i+1} is in turn taken as the reference point and frame i is searched for the corresponding feature point p_i'; when the distance between p_i and p_i' is smaller than a preset distance threshold, the current tracking quality is considered to meet the requirement;
rechecking: re-applying Shi-Tomasi corner detection at fixed frame intervals to identify traceable feature points in the region of interest.
Further, when the hierarchical early warning is implemented on the building with the deformation degree exceeding the preset deformation threshold according to the monitoring result, the considered deformation indexes comprise the foundation settlement rate, the overall inclination rate and the overall horizontal displacement of the building;
when the foundation settlement rate of the building is greater than 4 mm/month continuously for two months, or the overall inclination rate of the building is greater than 2%, or the overall horizontal displacement of the building is greater than 10mm, triggering a level I early warning;
triggering a level II early warning when the foundation settlement rate of the building is greater than 2 mm/month but not greater than 4 mm/month continuously or the overall inclination rate of the building is greater than 1% but not greater than 2% or the overall horizontal displacement of the building is greater than 5mm but not greater than 10 mm;
and triggering III-level early warning when the foundation settlement rate of the building is continuously greater than 1 mm/month but not greater than 2 mm/month, or the overall inclination rate of the building is greater than 0.5% but not greater than 1%, or the overall horizontal displacement of the building is greater than 2mm but not greater than 5 mm.
On the other hand, the invention also provides a multi-mode fusion perceived city existing building wide area monitoring and early warning device, which comprises:
the data collection module is used for collecting basic attribute data of all buildings in the target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images of each building in the target city;
the building security risk level determining module is used for determining the security risk level of the building based on the appearance image and the basic attribute data of the building collected by the data collecting module;
the deformation monitoring module is used for determining a building deformation monitoring mode according to the safety risk level of the building determined by the building safety risk level determining module and monitoring the deformation of the building;
and the grading early warning module is used for carrying out grading early warning on the building with the deformation degree exceeding the preset deformation threshold according to the monitoring result obtained by the deformation monitoring module.
In yet another aspect, the present invention also provides an electronic device including a processor and a memory; wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the above-described method.
In yet another aspect, the present invention also provides a computer readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the above method.
The technical scheme provided by the invention has the beneficial effects that at least:
the multi-mode fusion-aware city existing building wide area monitoring and early warning method disclosed by the invention combines the advantages of multi-mode sensing means such as unmanned aerial vehicle machine vision, city existing public camera machine vision, traditional contact sensing equipment and the like, the buildings with monitoring requirements are screened out through the unmanned aerial vehicle machine vision, the cost pressure of newly added sensing equipment is reduced by fully utilizing city existing public camera resources, the deformation monitoring precision of the building with the strongest monitoring requirements is ensured through the traditional contact sensing equipment, the problems of wide area monitoring cost and actual operation of the city existing building by only relying on a contact sensor or a non-contact measurement technology based on computer vision at present are solved, and the wide area monitoring and high-efficiency early warning of the city existing building deformation risks with low cost and actual operation are realized, so that technical support can be provided for effective and intelligent management and control of city safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an execution flow of a multi-mode fusion-aware urban existing building wide area monitoring and early warning method provided by an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
First embodiment
The embodiment provides a multi-mode fusion-aware urban existing building wide area monitoring and early warning method, which can be realized by electronic equipment, wherein the execution flow of the method is shown in figure 1 and comprises the following steps:
s1, collecting basic attribute data of all buildings in a target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images of each building in the target city;
specifically, in the present embodiment, the implementation procedure of S1 is as follows:
s11, collecting basic attribute data of all buildings in a target city through a public geographic information system website, a map website or a government database; wherein the basic attribute data includes: building year, building structure type, building function, building height and building bottom profile data; the building bottom surface contour data comprises longitude and latitude coordinates of each corner point forming the bottom surface contour shape;
s12, setting longitude and latitude coordinates of key nodes of the unmanned aerial vehicle aerial photographing path based on building height and building bottom surface contour data;
s13, acquiring images of all buildings in a target city by using the unmanned aerial vehicle through longitude and latitude coordinates of a set unmanned aerial vehicle aerial photographing path node, and recording state data of the unmanned aerial vehicle when photographing each picture in the acquisition process, wherein the state data comprise longitude and latitude coordinates, pitch angle, roll angle and course angle of the unmanned aerial vehicle;
s14, for each building of the target city, screening out, from all the images acquired by the unmanned aerial vehicle, the images that contain the current building, according to the building bottom-surface contour data and the state data;
s15, segmenting the appearance image of the building from all images comprising the building through a preset image segmentation network, and establishing a one-to-one or one-to-many mapping relation between each building of the target city and the appearance image of each building.
Specifically, in this embodiment, the building year, structure type, building function, building height and bottom-surface contour data of all buildings in a certain area of a city in China are obtained from the OpenStreetMap public geographic information system website, the appearance images of all buildings are acquired with the unmanned aerial vehicle, and the appearance image corresponding to each building is segmented from the images acquired by the unmanned aerial vehicle with a Mask R-CNN deep neural network.
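As an illustration of the segmentation step, the following sketch uses the torchvision Mask R-CNN implementation; the score and mask thresholds and the helper name are assumptions, and in practice a model fine-tuned on building appearance images would replace the COCO-pretrained weights used here as a placeholder.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained weights serve only as a placeholder for a fine-tuned model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_building(image_path, score_threshold=0.7):
    """Return binary instance masks detected in one UAV picture."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] > score_threshold
    # masks: [N, 1, H, W] soft masks in [0, 1]; threshold to binary
    return (output["masks"][keep, 0] > 0.5).cpu().numpy()
```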
S2, determining the security risk level of the building based on the appearance image and the basic attribute data of the building;
specifically, in this embodiment, the implementation procedure of S2 is as follows:
inputting the appearance image and basic attribute data of the building into a pre-trained security risk classification model, and obtaining the security risk level of the building by using the security risk classification model; when the security risk classification model is utilized to obtain the security risk level of the building, only one appearance image corresponding to the building is input into the pre-trained security risk classification model each time so as to obtain the security risk level corresponding to the building; if a plurality of appearance images correspond to the building, sequentially taking each appearance image as input of a security risk classification model to obtain the security risk level of the building determined by each appearance image, and taking the highest level of the security risk levels obtained by the plurality of appearance images as the security risk level of the building;
the training process of the security risk classification model comprises the following steps:
s21, collecting basic attribute data and appearance images of a preset number of existing buildings, organizing experienced house inspection engineers, and determining the security risk level of each existing building through field investigation in accordance with the Chinese appraisal standard for existing buildings, the General Code for Appraisal and Reinforcement of Existing Buildings (GB 55021-2021);
s22, constructing a sample data set by utilizing the collected basic attribute data, the appearance images and the security risk level of the preset number of existing buildings;
s23, constructing a security risk classification model by adopting a deep neural network model;
s24, training the constructed security risk classification model by using the sample data set to obtain a trained security risk classification model; the input of the security risk classification model is an appearance image and basic attribute data of a building, and the output of the security risk classification model is a security risk level of the building.
The network architecture of the security risk classification model comprises a text-image fusion module and a residual neural network. The text-image fusion module is used to fuse the appearance image of the building with its basic attribute data and comprises a text encoder (for encoding the basic attribute data), an image encoder (for encoding the appearance image) and a Hadamard-product-based text-image fusion encoder (for fusing the appearance image of the building with the basic attribute data to obtain fused encoded features). The residual neural network determines the security risk level of the building from the output of the text-image fusion module.
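A minimal PyTorch sketch of such an architecture is given below for illustration; the ResNet-18 image backbone, the embedding size, the attribute-vector length and the shape of the residual head are assumptions rather than the exact network used in this embodiment. During training, the output logits would be compared against the A/B/C/D labels with a cross-entropy loss.

```python
import torch
import torch.nn as nn
import torchvision

class RiskClassifier(nn.Module):
    """Sketch of the text-image fusion classifier: an image encoder and a text
    (attribute) encoder project into a common space, the two embeddings are
    fused by an element-wise (Hadamard) product, and a small residual head
    predicts the four security risk levels A-D."""
    def __init__(self, num_attributes=16, embed_dim=256, num_classes=4):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.image_encoder = backbone
        self.text_encoder = nn.Sequential(
            nn.Linear(num_attributes, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))
        self.res_block = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, image, attributes):
        z_img = self.image_encoder(image)          # [B, embed_dim]
        z_txt = self.text_encoder(attributes)      # [B, embed_dim]
        fused = z_img * z_txt                      # Hadamard-product fusion
        fused = fused + self.res_block(fused)      # residual connection
        return self.head(fused)                    # logits over A/B/C/D

# usage sketch:
# logits = RiskClassifier()(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
```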
The safety risk grade of the building is divided into A, B, C, D grades from low to high;
level A means that, according to the preset component risk assessment standard, the building contains no dangerous components, and the current building can meet the requirements for safe use;
level B means that, according to the preset component risk assessment standard, the building contains dangerous components, but fewer than 5% of all structural components of the building are dangerous components; the safety of the main structure is not affected, and the requirements for safe use can basically be met;
level C means that, according to the preset component risk assessment standard, the building contains dangerous components, 5%-30% of all structural components of the building are dangerous components, and the requirements for safe use cannot be met; part of the building is in a dangerous state, constituting a locally dangerous house;
level D means that, according to the preset component risk assessment standard, more than 30% of all structural components of the building are dangerous components, the requirements for safe use cannot be met, and the whole building is in a dangerous state, constituting an overall dangerous house.
Specifically, in this embodiment, professional engineers from several house safety inspection companies were organized to determine, through detailed field investigation of building components, storeys and each house as a whole, the security risk levels of roughly 20,000 existing buildings, of which about 14,200 buildings are level A, about 3,900 are level B, about 1,300 are level C, and about 600 are level D.
S3, determining a building deformation monitoring mode according to the security risk level of the building, and monitoring the deformation of the building;
specifically, in this embodiment, the implementation procedure of S3 is as follows:
s31, for a building with security risk level D, contact sensors are installed and used to monitor the deformation of the building; the contact sensors include displacement meters and accelerometers;
s32, for a building with security risk level C, if existing public cameras around the building can be used for deformation monitoring, the deformation result of the building is obtained with an optical flow tracking algorithm from the video of the current building captured by those cameras; if several cameras are usable for deformation monitoring, the average of the deformation data obtained from these cameras is taken as the deformation result of the building; if no existing public camera is available around the building, contact sensors are installed on the building to monitor its deformation;
s33, for a building with security risk level B or A, no deformation monitoring is carried out.
Wherein the optical flow tracking algorithm is a sparse optical flow tracking algorithm.
Further, considering that the conventional sparse optical flow tracking algorithm may cause error tracking due to the feature point disappearance problem, when the sparse optical flow tracking algorithm is used to obtain a deformation result of a building, the sparse optical flow tracking algorithm adopts the following two steps to increase the stability of feature point tracking:
1. Reverse check step: the sparse optical flow tracking algorithm takes a feature point p_i in frame i of the video as the reference point and searches frame i+1 for the corresponding feature point p_{i+1}; to ensure the tracking quality of the feature points, a reverse check is performed, that is, p_{i+1} is in turn taken as the reference point and frame i is searched for the corresponding feature point p_i'; when the distance between p_i and p_i' is smaller than a preset distance threshold, the current tracking is considered good;
2. Recheck step: failed feature-point tracking gradually reduces the number of trackable feature points during measurement, so this embodiment re-applies Shi-Tomasi corner detection at fixed frame intervals to identify trackable feature points in the region of interest.
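For illustration, the following OpenCV sketch combines pyramidal Lucas-Kanade tracking of Shi-Tomasi corners with the reverse (forward-backward) check and periodic re-detection described above; the window size, thresholds and re-detection interval are assumptions, not the parameters of this embodiment.

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_roi(video_path, roi, fb_threshold=1.0, redetect_every=100):
    """Track Shi-Tomasi corners inside a region of interest (x, y, w, h) with
    Lucas-Kanade optical flow, keep only points that pass the reverse check,
    re-detect corners at a fixed frame interval, and return the mean per-frame
    displacement of the surviving points."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = roi
    mask = np.zeros_like(prev)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev, 200, 0.01, 5, mask=mask)
    displacements, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok or pts is None or len(pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # forward tracking, then backward tracking for the reverse check
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None, **LK_PARAMS)
        back, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev, nxt, None, **LK_PARAMS)
        fb_err = np.linalg.norm(pts - back, axis=2).ravel()
        good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_threshold)
        if good.any():
            displacements.append(np.mean(np.linalg.norm(nxt[good] - pts[good], axis=2)))
        pts, prev, frame_idx = nxt[good].reshape(-1, 1, 2), gray, frame_idx + 1
        if frame_idx % redetect_every == 0:        # periodic Shi-Tomasi re-detection
            pts = cv2.goodFeaturesToTrack(prev, 200, 0.01, 5, mask=mask)
    cap.release()
    return displacements
```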
S4, performing hierarchical early warning on the building with the deformation degree exceeding a preset deformation threshold according to the monitoring result;
specifically, the embodiment establishes a three-level I-II-III early warning mechanism; deformation indexes considered by early warning are three types, including: the foundation settlement rate, the overall inclination rate and the overall horizontal displacement of the building.
When the foundation settlement rate of the building is greater than 4 mm/month continuously for two months, or the overall inclination rate of the building is greater than 2%, or the overall horizontal displacement of the building is greater than 10mm, triggering a level I early warning;
triggering a level II early warning when the foundation settlement rate of the building is greater than 2 mm/month but not greater than 4 mm/month continuously or the overall inclination rate of the building is greater than 1% but not greater than 2% or the overall horizontal displacement of the building is greater than 5mm but not greater than 10 mm;
and triggering III-level early warning when the foundation settlement rate of the building is continuously greater than 1 mm/month and not greater than 2 mm/month, or the overall inclination rate of the building is greater than 0.5% and not greater than 1%, or the overall horizontal displacement of the building is greater than 2mm and not greater than 5 mm.
A level III early warning means that attention to the monitored building needs to be raised: personnel inside the building should be evacuated as far as possible, and related personnel should not continue to enter the building. A level II early warning means that personnel inside the building must be evacuated immediately, and related personnel should not continue to enter the building. A level I early warning means that personnel inside the building must be evacuated immediately, protective fences must be set up around the building, and demolition procedures must be prepared immediately, so as to prevent the building from suddenly collapsing and endangering the surrounding built environment and personnel.
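The trigger conditions above can be summarized, for illustration, in a small decision function; the argument names are assumptions, and the two-month persistence condition on the settlement rate is assumed to be checked before this function is called.

```python
def warning_level(settlement_rate_mm_per_month, tilt_rate_percent, horizontal_disp_mm):
    """Map the three monitored deformation indices to warning levels I/II/III;
    returns None when no threshold is exceeded."""
    if settlement_rate_mm_per_month > 4 or tilt_rate_percent > 2 or horizontal_disp_mm > 10:
        return "I"
    if settlement_rate_mm_per_month > 2 or tilt_rate_percent > 1 or horizontal_disp_mm > 5:
        return "II"
    if settlement_rate_mm_per_month > 1 or tilt_rate_percent > 0.5 or horizontal_disp_mm > 2:
        return "III"
    return None
```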
In summary, this embodiment provides a multi-mode fusion perception city existing building wide area monitoring and early warning method that fuses the advantages of multiple sensing modalities: unmanned aerial vehicle machine vision, machine vision based on the city's existing public cameras, and traditional contact sensing equipment. Buildings that require monitoring are screened out through unmanned aerial vehicle machine vision; the cost pressure of newly added sensing equipment is reduced by fully utilizing existing public camera resources in the city; and the deformation monitoring accuracy for the buildings with the strongest monitoring needs is guaranteed by traditional contact sensing equipment. The method thereby solves the current problems of cost and practical operability in wide-area monitoring of existing urban buildings when relying only on contact sensors or on computer-vision-based non-contact measurement, realizes low-cost, practically operable wide-area monitoring and efficient early warning of the deformation risk of existing urban buildings, and can provide technical support for effective and intelligent management and control of urban safety.
Second embodiment
The embodiment provides a multi-mode fusion perceived city existing building wide area monitoring and early warning device, which comprises the following modules:
the data collection module is used for collecting basic attribute data of all buildings in the target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images of each building in the target city;
the building security risk level determining module is used for determining the security risk level of the building based on the appearance image and the basic attribute data of the building collected by the data collecting module;
the deformation monitoring module is used for determining a building deformation monitoring mode according to the safety risk level of the building determined by the building safety risk level determining module and monitoring the deformation of the building;
and the grading early warning module is used for carrying out grading early warning on the building with the deformation degree exceeding the preset deformation threshold according to the monitoring result obtained by the deformation monitoring module.
The multi-mode fusion-aware city existing building wide-area monitoring and early warning device of the embodiment corresponds to the multi-mode fusion-aware city existing building wide-area monitoring and early warning method of the first embodiment; the functions realized by the functional modules in the multi-mode fusion-aware urban existing building wide area monitoring and early warning device in the embodiment are in one-to-one correspondence with the flow steps in the multi-mode fusion-aware urban existing building wide area monitoring and early warning method in the first embodiment; therefore, the description is omitted here.
Third embodiment
The embodiment provides an electronic device, which comprises a processor and a memory; wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the method of the first embodiment.
The electronic device may vary considerably in configuration or performance and may include one or more processors (central processing units, CPU) and one or more memories having at least one instruction stored therein that is loaded by the processors and performs the methods described above.
Fourth embodiment
The present embodiment provides a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method of the first embodiment described above. The computer readable storage medium may be, among other things, ROM, random access memory, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc. The instructions stored therein may be loaded by a processor in the terminal and perform the methods described above.
Furthermore, it should be noted that the present invention can be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article or terminal device that comprises the element.
Finally, it should be noted that the above describes preferred embodiments of the invention. Although preferred embodiments have been described, it will be apparent to those skilled in the art that, once the basic inventive concepts are known, various modifications and adaptations can be made without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention. It is therefore intended that the appended claims be interpreted as covering the preferred embodiments and all alterations and modifications falling within the scope of the embodiments of the invention.
Claims (2)
1. A multi-mode fusion perceived city existing building wide area monitoring and early warning method is characterized by comprising the following steps:
collecting basic attribute data of all buildings in a target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images;
determining a security risk level of the building based on the appearance image and the basic attribute data of the building;
determining a deformation monitoring mode of the building according to the safety risk level of the building, and monitoring the deformation of the building;
according to the monitoring result, implementing hierarchical early warning on the building with the deformation degree exceeding the preset deformation threshold value;
the basic attribute data includes: building year, building structure type, building function, building height and building bottom profile data; the building bottom surface contour data comprises longitude and latitude coordinates of each corner point forming the bottom surface contour shape;
the step of collecting the appearance images of all buildings in the target city and establishing the mapping relation between each building in the target city and the appearance images comprises the following steps:
setting longitude and latitude coordinates of an unmanned aerial vehicle aerial photographing path node based on building height and building bottom surface contour data;
acquiring images of all buildings in a target city by using the unmanned aerial vehicle through longitude and latitude coordinates of a set unmanned aerial vehicle aerial photographing path node, and recording state data of the unmanned aerial vehicle when each picture is photographed in the acquisition process, wherein the state data comprise longitude and latitude coordinates, a pitch angle, a roll angle and a course angle of the unmanned aerial vehicle;
for each building in the target city, screening out, from all the images of all buildings acquired by the unmanned aerial vehicle, the images that contain the current building, according to the building bottom-surface contour data and the state data;
the method comprises the steps of segmenting an appearance image of a current building from all images comprising the current building through a preset image segmentation network, and establishing a one-to-one or one-to-many mapping relation between each building of a target city and the appearance image;
the method for determining the security risk level of the building based on the appearance image and the basic attribute data of the building comprises the following steps:
inputting the appearance image and basic attribute data of the building into a pre-trained security risk classification model, and obtaining the security risk level of the building by using the security risk classification model; when the security risk classification model is utilized to obtain the security risk level of the building, only one appearance image corresponding to the building is input into the pre-trained security risk classification model each time so as to obtain the security risk level corresponding to the building; if a plurality of appearance images correspond to the building, sequentially taking each appearance image as input of a security risk classification model to obtain the security risk level of the building determined by each appearance image, and taking the highest level of the security risk levels obtained by the plurality of appearance images as the security risk level of the building;
the training process of the security risk classification model comprises the following steps:
basic attribute data and appearance images of a preset number of existing buildings are collected, and the safety risk level of each existing building is determined by organizing evaluation staff through field investigation;
constructing a sample data set by utilizing the collected basic attribute data, appearance images and security risk levels of the preset number of existing buildings;
constructing a security risk classification model by adopting a deep neural network model;
training the constructed security risk classification model by utilizing the sample data set to obtain a trained security risk classification model; the input of the security risk classification model is an appearance image and basic attribute data of a building, and the output of the security risk classification model is a security risk level of the building;
the safety risk grades of the buildings are divided into A, B, C, D grades from low to high; wherein,
the level A means that, according to a preset component risk assessment standard, the building contains no dangerous components;
the level B means that, according to the preset component risk assessment standard, the building contains dangerous components, but fewer than 5% of all structural components of the building are dangerous components;
the level C means that, according to the preset component risk assessment standard, the building contains dangerous components, and 5%-30% of all structural components of the building are dangerous components;
the level D means that, according to the preset component risk assessment standard, more than 30% of all structural components of the building are dangerous components;
the method for determining the building deformation monitoring mode according to the security risk level of the building comprises the following steps:
for a building with a security risk level of D, installing contact sensors and monitoring the deformation of the building with them; wherein the contact sensors include: displacement meters and accelerometers;
for a building with a security risk level of C, if existing public cameras around the building can be used for deformation monitoring, obtaining a deformation result of the building with a preset optical flow tracking algorithm based on video data of the current building captured by the existing public cameras around the building, and if a plurality of cameras usable for deformation monitoring exist, taking the average of the deformation data obtained from the cameras as the deformation result of the building; if no existing public camera is available around the building, installing contact sensors on the building and monitoring its deformation with them;
for a building with a security risk level of B or A, carrying out no deformation monitoring on the building;
the optical flow tracking algorithm is a sparse optical flow tracking algorithm;
when a deformation result of a building is obtained by using a sparse optical flow tracking algorithm, the sparse optical flow tracking algorithm increases the stability of feature point tracking by the following steps:
a reverse check step: the sparse optical flow tracking algorithm takes a feature point p_i in frame i of the video as the reference point and searches frame i+1 for the corresponding feature point p_{i+1}; to ensure the tracking quality of the feature points, a reverse check is performed, that is, p_{i+1} is in turn taken as the reference point and frame i is searched for the corresponding feature point p_i'; when the distance between p_i and p_i' is smaller than a preset distance threshold, the current tracking quality is considered to meet the requirement;
rechecking: re-applying Shi-Tomasi corner detection at fixed frame intervals, and identifying traceable feature points in the region of interest;
according to the monitoring result, when implementing grading early warning on the building with the deformation degree exceeding the preset deformation threshold, the considered deformation indexes comprise the foundation settlement rate, the overall inclination rate and the overall horizontal displacement of the building;
when the foundation settlement rate of the building is greater than 4 mm/month continuously for two months, or the overall inclination rate of the building is greater than 2%, or the overall horizontal displacement of the building is greater than 10mm, triggering a level I early warning;
triggering a level II early warning when the foundation settlement rate of the building is greater than 2 mm/month but not greater than 4 mm/month continuously or the overall inclination rate of the building is greater than 1% but not greater than 2% or the overall horizontal displacement of the building is greater than 5mm but not greater than 10 mm;
and triggering III-level early warning when the foundation settlement rate of the building is continuously greater than 1 mm/month but not greater than 2 mm/month, or the overall inclination rate of the building is greater than 0.5% but not greater than 1%, or the overall horizontal displacement of the building is greater than 2mm but not greater than 5 mm.
2. A multi-mode fusion perceived city existing building wide area monitoring and early warning device, characterized in that the multi-mode fusion perceived city existing building wide area monitoring and early warning device comprises:
the data collection module is used for collecting basic attribute data of all buildings in the target city, collecting appearance images of all buildings in the target city, and establishing a mapping relation between each building in the target city and the appearance images of each building in the target city;
the building security risk level determining module is used for determining the security risk level of the building based on the appearance image and the basic attribute data of the building collected by the data collecting module;
the deformation monitoring module is used for determining a building deformation monitoring mode according to the safety risk level of the building determined by the building safety risk level determining module and monitoring the deformation of the building;
the grading early warning module is used for carrying out grading early warning on the building with the deformation degree exceeding the preset deformation threshold according to the monitoring result obtained by the deformation monitoring module;
the basic attribute data includes: building year, building structure type, building function, building height and building bottom contour data; the building bottom contour data comprises the longitude and latitude coordinates of each corner point forming the bottom contour shape;
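A possible record layout for the basic attribute data listed above; this is only a sketch, and the field names are assumptions rather than identifiers defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BuildingAttributes:
    building_year: int                      # year of construction
    structure_type: str                     # e.g. masonry, frame, steel
    function: str                           # e.g. residential, office
    height_m: float                         # building height
    footprint: List[Tuple[float, float]]    # (longitude, latitude) of each corner point
```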
the step of collecting the appearance images of all buildings in the target city and establishing the mapping relation between each building in the target city and its appearance images comprises:
setting the longitude and latitude coordinates of the nodes of an unmanned aerial vehicle aerial photography path based on the building height and the building bottom contour data;
acquiring images of all buildings in the target city with the unmanned aerial vehicle flying along the set path nodes, and recording the state data of the unmanned aerial vehicle at the moment each picture is taken, wherein the state data include the longitude and latitude coordinates, the pitch angle, the roll angle and the heading angle of the unmanned aerial vehicle;
for each building in the target city, screening out, from all the images acquired by the unmanned aerial vehicle, every image that contains the current building, according to the building bottom contour data and the state data;
segmenting the appearance image of the current building from every image containing the current building through a preset image segmentation network, and establishing a one-to-one or one-to-many mapping relation between each building in the target city and its appearance images;
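The screening step above pairs each photo with the buildings it shows by combining the recorded UAV state data with the building footprints. A simplified sketch under a strong assumption: the check is reduced to whether the footprint centroid lies within an assumed ground range of the shot position (a real implementation would use the full UAV pose and camera model); function and field names are illustrative.

```python
import math

def ground_distance_m(lon1, lat1, lon2, lat2):
    """Approximate ground distance between two lon/lat points (equirectangular model)."""
    earth_radius = 6_371_000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return earth_radius * math.hypot(x, y)

def images_for_building(footprint, shots, max_range_m=80.0):
    """Return the shots taken within range of the building footprint centroid.

    footprint: list of (lon, lat) corner points; shots: dicts with 'lon'/'lat' keys.
    """
    cx = sum(p[0] for p in footprint) / len(footprint)
    cy = sum(p[1] for p in footprint) / len(footprint)
    return [s for s in shots
            if ground_distance_m(s["lon"], s["lat"], cx, cy) <= max_range_m]
```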
the method for determining the security risk level of the building based on the appearance image and the basic attribute data of the building comprises the following steps:
inputting the appearance image and the basic attribute data of the building into a pre-trained security risk classification model, and obtaining the security risk level of the building with the security risk classification model; when the security risk classification model is used, only one appearance image corresponding to the building is input into the pre-trained security risk classification model at a time to obtain the corresponding security risk level; if a plurality of appearance images correspond to the building, each appearance image is taken in turn as the input of the security risk classification model to obtain the security risk level determined from that image, and the highest of the security risk levels obtained from the plurality of appearance images is taken as the security risk level of the building;
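The per-image inference rule above (one image at a time, worst level wins) can be sketched as follows, assuming a hypothetical `classify(image, attrs)` wrapper around the trained model that returns one of "A"–"D".

```python
RISK_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3}   # low -> high risk

def building_risk_level(images, attrs, classify):
    """Run the classifier once per appearance image and keep the highest risk level."""
    levels = [classify(img, attrs) for img in images]
    return max(levels, key=lambda lv: RISK_ORDER[lv])
```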
the training process of the security risk classification model comprises the following steps:
collecting basic attribute data and appearance images of a preset number of existing buildings, and determining the security risk level of each of these existing buildings by organizing evaluators to conduct field investigation;
constructing a sample data set by utilizing the collected basic attribute data, appearance images and security risk levels of the preset number of existing buildings;
constructing a security risk classification model by adopting a deep neural network model;
training the constructed security risk classification model by utilizing the sample data set to obtain a trained security risk classification model; the input of the security risk classification model is an appearance image and basic attribute data of a building, and the output of the security risk classification model is a security risk level of the building;
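One way such a deep-network classifier could be laid out is a two-branch network that fuses an image embedding with the tabular attribute features; the sketch below uses PyTorch, and the backbone, layer sizes and four-class head are assumptions, not the patent's concrete architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class SecurityRiskClassifier(nn.Module):
    """Fuses a facade-image embedding with basic attribute features into 4 risk levels."""
    def __init__(self, num_attr_features: int, num_classes: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d image embedding
        self.backbone = backbone
        self.attr_net = nn.Sequential(nn.Linear(num_attr_features, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 32, 128), nn.ReLU(),
                                  nn.Linear(128, num_classes))

    def forward(self, image, attrs):
        z = torch.cat([self.backbone(image), self.attr_net(attrs)], dim=1)
        return self.head(z)                      # logits over levels A..D
```

Such a model would be trained on the sample data set with a standard cross-entropy loss against the field-investigated level labels.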
the security risk levels of buildings are divided, from low to high, into levels A, B, C and D; wherein,
level A means that, according to the preset component risk criteria, the building has no dangerous components;
level B means that, according to the preset component risk criteria, dangerous components exist in the building, but they account for less than 5% of all structural components of the building;
level C means that, according to the preset component risk criteria, dangerous components exist in the building and account for 5%-30% of all structural components of the building;
level D means that, according to the preset component risk criteria, dangerous components account for more than 30% of all structural components of the building;
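The A–D grading above is a banding of the dangerous-component share; a small helper illustrating the bands (deciding which components are dangerous follows the preset component risk criteria and is outside this sketch; the 5% boundary placement is an assumption).

```python
def risk_level_from_ratio(dangerous_ratio: float) -> str:
    """Map the share of dangerous structural components (0.0-1.0) to level A-D."""
    if dangerous_ratio <= 0.0:
        return "A"
    if dangerous_ratio < 0.05:
        return "B"
    if dangerous_ratio <= 0.30:
        return "C"
    return "D"
```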
the method for determining the building deformation monitoring mode according to the security risk level of the building comprises the following steps:
for a building with a security risk level of D, a contact sensor is installed and used to monitor its deformation; wherein the contact sensors include displacement meters and accelerometers;
for a building with a security risk level of C, if existing public cameras around the building can be used for deformation monitoring, a preset optical flow tracking algorithm is applied to the video data of the current building captured by those cameras to obtain the deformation result of the building, and if a plurality of cameras can be used for deformation monitoring, the average of the deformation data obtained from these cameras is taken as the deformation result of the building; if no existing public camera is available around the building, a contact sensor is installed to monitor its deformation;
for a building with a security risk level of B or A, no deformation monitoring is carried out;
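The monitoring-mode decision above is a dispatch on risk level plus an averaging step when several cameras are usable; a sketch with assumed function names and return labels.

```python
def plan_monitoring(level: str, nearby_cameras: list) -> str:
    """Choose a deformation-monitoring mode from the security risk level."""
    if level == "D":
        return "contact_sensors"          # displacement meters + accelerometers
    if level == "C":
        return "optical_flow" if nearby_cameras else "contact_sensors"
    return "none"                         # levels A and B: no deformation monitoring

def fuse_camera_deformation(per_camera_results: list) -> float:
    """Average the deformation values obtained from multiple usable cameras."""
    return sum(per_camera_results) / len(per_camera_results)
```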
the optical flow tracking algorithm is a sparse optical flow tracking algorithm;
when the deformation result of a building is obtained with the sparse optical flow tracking algorithm, the sparse optical flow tracking algorithm improves the stability of feature point tracking through the following steps:
a reverse checking step: the sparse optical flow tracking algorithm takes a feature point p_k in frame k of the video as a reference point and searches frame k+1 for the corresponding feature point p_{k+1}; to ensure the tracking quality of the feature point, a reverse check is then performed, that is, p_{k+1} is in turn taken as the reference point and frame k is searched for the corresponding feature point p'_k; when the distance between p_k and p'_k is smaller than a preset distance threshold, the current tracking quality is considered to meet the requirement;
a rechecking step: Shi-Tomasi corner detection is re-applied at fixed frame intervals to identify trackable feature points in the region of interest;
according to the monitoring result, when graded early warning is implemented for buildings whose deformation exceeds the preset deformation threshold, the deformation indexes considered include the foundation settlement rate, the overall inclination rate and the overall horizontal displacement of the building;
a level I early warning is triggered when the foundation settlement rate of the building continuously exceeds 4 mm/month for two months, or the overall inclination rate of the building exceeds 2%, or the overall horizontal displacement of the building exceeds 10 mm;
a level II early warning is triggered when the foundation settlement rate of the building continuously exceeds 2 mm/month but does not exceed 4 mm/month, or the overall inclination rate of the building exceeds 1% but does not exceed 2%, or the overall horizontal displacement of the building exceeds 5 mm but does not exceed 10 mm;
a level III early warning is triggered when the foundation settlement rate of the building continuously exceeds 1 mm/month but does not exceed 2 mm/month, or the overall inclination rate of the building exceeds 0.5% but does not exceed 1%, or the overall horizontal displacement of the building exceeds 2 mm but does not exceed 5 mm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311029710.2A CN116739357B (en) | 2023-08-16 | 2023-08-16 | Multi-mode fusion perception city existing building wide area monitoring and early warning method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116739357A (en) | 2023-09-12
CN116739357B (en) | 2023-11-17
Family
ID=87903047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311029710.2A Active CN116739357B (en) | 2023-08-16 | 2023-08-16 | Multi-mode fusion perception city existing building wide area monitoring and early warning method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116739357B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102468888B1 (en) * | 2021-11-30 | 2022-11-21 | 주식회사 건강 | A system for monitoring the condition of a building based on IoT |
CN115993096A (en) * | 2023-02-08 | 2023-04-21 | 金陵科技学院 | High-rise building deformation measuring method |
CN116229299A (en) * | 2023-03-03 | 2023-06-06 | 深圳市城市公共安全技术研究院有限公司 | Unmanned aerial vehicle-based building or structure damage assessment method, terminal and medium |
Non-Patent Citations (2)
Title |
---|
Research and Implementation of 3D Reconstruction of Planar Scenes; Ye Huiying; CNKI Masters' Theses Electronic Journal (No. 07); pp. 1-8 *
Zhu Dayong et al. (eds.), Proceedings of the 10th National Conference on Building Appraisal and Strengthening, Hefei University of Technology Press, 2010, pp. 767-768. *
Also Published As
Publication number | Publication date |
---|---|
CN116739357A (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114118677B (en) | Tailing pond risk monitoring and early warning system based on Internet of things | |
Wang et al. | Machine learning-based regional scale intelligent modeling of building information for natural hazard risk management | |
KR101937940B1 (en) | Method of deciding cpted cctv position by big data | |
WO2020111934A1 (en) | A method and system for detection of natural disaster occurrence | |
CN117538503A (en) | Real-time intelligent soil pollution monitoring system and method | |
CN115641501A (en) | Road inspection system and road inspection equipment | |
KR102642540B1 (en) | Methodo of providing smart city safety service and server performing the same | |
CN111709661A (en) | Risk processing method, device and equipment for business data and storage medium | |
CN118225179B (en) | Intelligent well lid monitoring method and system based on urban drainage | |
CN115841730A (en) | Video monitoring system and abnormal event detection method | |
Kandoi et al. | Pothole detection using accelerometer and computer vision with automated complaint redressal | |
CN117078045B (en) | Holographic inspection and monitoring method and system for traffic infrastructure | |
CN117114241B (en) | Intelligent remote sensing system for monitoring road disasters | |
CN113033443B (en) | Unmanned aerial vehicle-based automatic pedestrian crossing facility whole road network checking method | |
CN114494845A (en) | Artificial intelligence hidden danger troubleshooting system and method for construction project site | |
CN118212747A (en) | Intelligent monitoring and early warning system and method for slope disasters based on multi-source information fusion | |
US20230408479A1 (en) | Systems and methods for enhancing water safety using sensor and unmanned vehicle technologies | |
KR102157201B1 (en) | System and method for determining disastrous based on image and accident record analysis | |
CN111598885B (en) | Automatic visibility grade marking method for highway foggy pictures | |
CN116739357B (en) | Multi-mode fusion perception city existing building wide area monitoring and early warning method and device | |
CN110765900A (en) | DSSD-based automatic illegal building detection method and system | |
CN116050837A (en) | Comprehensive monitoring early warning and safety assessment scheme for tailing pond multielement disasters | |
CN114646735A (en) | Carbon dioxide concentration monitoring system in air | |
Gao et al. | Measuring urban waterlogging depths from video images based on reference objects | |
Yang et al. | Predicting traffic accident risk in Seoul metropolitan city: a dataset construction approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |