
CN111145166B - Security monitoring method and system - Google Patents

Security monitoring method and system

Info

Publication number: CN111145166B (application CN201911404088.2A)
Authority: CN (China)
Prior art keywords: point cloud, cloud data, dimensional point, server, head
Legal status: Active (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111145166A
Inventor: 朱翔 (Zhu Xiang)
Assignee (current and original): Beijing Shenzhen Survey Technology Co ltd
Application filed by Beijing Shenzhen Survey Technology Co ltd; priority to CN201911404088.2A; publication of CN111145166A; application granted; publication of CN111145166B

Classifications

    • G06T7/0002 — Image analysis: inspection of images, e.g. flaw detection (G Physics; G06 Computing; G06T Image data processing or generation)
    • G06T7/60 — Image analysis: analysis of geometric attributes
    • G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T2207/30232 — Subject of image: surveillance
    • G06T2207/30242 — Subject of image: counting objects in image


Abstract

The application relates to a security monitoring method comprising the following steps: a time-of-flight (TOF) camera collects first three-dimensional point cloud data of a scene at a preset sampling frequency and sends them to a server, the first three-dimensional point cloud data comprising three-dimensional coordinates X, Y, Z; the server performs denoising and foreground extraction on the first three-dimensional point cloud data to generate second three-dimensional point cloud data; the server performs human head extraction on the second three-dimensional point cloud data to generate third three-dimensional point cloud data; the server calculates fourth three-dimensional point cloud data of the center of each human head in the third three-dimensional point cloud data; the server calculates the motion acceleration of the center of each human head according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame; and if the acceleration change of the movement of a head center is larger than a first threshold, the server sends safety alarm information to the terminal.

Description

Security monitoring method and system
Technical Field
The application relates to the field of three-dimensional data processing and automatic control, in particular to a method and a system for safety monitoring.
Background
To keep society safe and orderly, security monitoring plays a vital role: its deterrent effect restrains people's behavior, and its real-time monitoring capability helps the relevant departments quickly find criminals.
However, most existing security systems rely on real-time video monitoring, which requires dedicated security personnel to watch multiple monitoring screens continuously. This is extremely tiring, and if personnel temporarily leave the monitoring room, alarms may be missed and serious incidents may not be discovered in time. Alarms may also be delayed when ambient lighting is poor or the view is occluded. Moreover, in private spaces where video monitoring is inappropriate, such as changing rooms, public toilets, and public dormitories, monitoring blind zones arise.
Disclosure of Invention
The application aims to overcome the defects of the prior art by providing a security monitoring method and system that use a Time-of-Flight (TOF) camera to acquire three-dimensional point cloud data of a scene, process the acquired data, and realize security monitoring of various environments according to the change trends of the acceleration and the height of the coordinates of the center of the human head.
To achieve the above object, in one aspect, the present application provides a method for security monitoring, including:
the method comprises the steps that a time-of-flight TOF camera collects first three-dimensional point cloud data of a scene according to a preset sampling frequency and sends the first three-dimensional point cloud data to a server, wherein the first three-dimensional point cloud data comprise three-dimensional coordinates X, Y, Z;
the server performs denoising and foreground extraction on the first three-dimensional point cloud data to generate second three-dimensional point cloud data;
the server extracts the human head of the second three-dimensional point cloud data to generate third three-dimensional point cloud data;
the server calculates fourth three-dimensional point cloud data of the center of each human head in the third three-dimensional point cloud data;
the server calculates the motion acceleration of the center of the head of each human body according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame;
and if the acceleration change of the movement of the center of the head of the human body is larger than a first threshold value, the server sends safety alarm information to the terminal.
Further, the server performs human head extraction on the second three-dimensional point cloud data, and the generation of the third three-dimensional point cloud data specifically includes:
the server acquires initial human body target three-dimensional point cloud data according to the second three-dimensional point cloud data;
the server calculates and extracts the three-dimensional contour lines of each initial human body target from the initial human body target three-dimensional point cloud data, compares the contour lines of each initial human body target against human-head contour characteristics to exclude non-head targets, extracts the three-dimensional contour lines of all real human heads, and generates third three-dimensional point cloud data.
Further, the calculating, by the server, fourth three-dimensional point cloud data of the head center of each person in the third three-dimensional point cloud data specifically includes:
the server extracts, from the third three-dimensional point cloud data, the first closed contour curve of each human head's three-dimensional contour lines closest to the TOF camera, calculates the center point of each first closed contour curve, and forms the k center points into the fourth three-dimensional point cloud data, where k is the number of human heads.
Further, the motion acceleration includes an acceleration value and a direction.
Further, the method further comprises:
the server calculates the height difference between the height of the center of the head of each human body and the height of the previous frame according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame;
and if the height of the center of the head of the human body continuously decreases, the server sends safety alarm information to the terminal.
Further, the acceleration change includes a change in acceleration value and a change in direction angle.
Further, the method further comprises:
the server acquires the number of people in the acquired scene according to the third three-dimensional point cloud data;
and if the number of people in the acquired scene is less than 2, it is judged that no safety problem can exist, and the server proceeds to process the first three-dimensional point cloud data of the next frame.
Further, if the height of the center of a human head continuously decreases, the method further comprises:
the server calculates the ratio of the number of human heads within a 2-meter radius of the head whose center height continuously decreases to the total number of people;
and if the ratio is larger than a second threshold, the server sends the safety alarm information to the terminal.
On the other hand, the application provides a safety monitoring system, which comprises the time-of-flight TOF camera, the server and the terminal.
The security monitoring method and system provided by the embodiment of the application collect three-dimensional point cloud data of a scene with a Time-of-Flight (TOF) camera, process the collected data, and realize security monitoring of various environments according to the change trends of the acceleration and the height of the coordinates of the center of the human head.
Drawings
FIG. 1 is a flow chart of a method for security monitoring according to an embodiment of the present application;
fig. 2 is a schematic diagram of a system architecture for security monitoring according to an embodiment of the present application.
Detailed Description
The technical scheme of the application is further described in detail through the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
For a better understanding of the security monitoring method of the embodiments of the present application, the system that implements the method is described first. Fig. 2 is a schematic architecture diagram of a security monitoring system according to an embodiment of the present application; the system includes: a TOF camera 1, a server 2 and a terminal 3.
Since the security monitoring method of the application requires the light emitted by the TOF camera to fully cover the tops of people's heads, the TOF camera 1 must be installed at a high position in the building, and the minimum number of installed TOF cameras 1 is determined by the need for full coverage of the monitoring range. Since the TOF camera 1 is not affected by ambient light, external light sources need not be considered during installation; the only consideration is how to cover the largest possible range with the smallest number of cameras, so within the effective monitoring distance of the TOF camera 1, the higher the installation height, the better.
The server 2 is connected to the TOF camera 1 wirelessly or by wire. The server 2 must perform high-speed data computation, and different hardware can be chosen according to the size of the monitored scene: for a small space, a compact, miniaturized embedded computer running Linux can be used; for extensive multi-scene monitoring, such as monitoring multiple residential areas simultaneously, which involves a large amount of data computation and mathematical-model analysis, a mainframe or high-end workstation may be employed.
The terminal 3 is connected to the server 2 by wire or wirelessly; the terminal 3 is a display or other intelligent device and receives the safety alarm information sent by the server 2.
The embodiment of the application provides a safety monitoring method which is applied to the safety monitoring system to safely monitor and alarm public places or monitored gray zones. The method flow chart is shown in fig. 1, and comprises the following steps:
in step S110, the time-of-flight TOF camera collects first three-dimensional point cloud data of the scene according to a preset sampling frequency, and sends the first three-dimensional point cloud data to the server, wherein the first three-dimensional point cloud data includes three-dimensional coordinates X, Y, Z.
Specifically, sampling control of the TOF cameras is executed by the server: the server sends an acquisition instruction and the acquisition frequency to the TOF cameras, which then enter the data acquisition state. If the server controls several TOF cameras simultaneously, each TOF camera is assigned a different ID number; the server can control all TOF cameras to open or close at once, or open and close individual cameras by ID number when some monitoring points are unattended during rest periods. This greatly reduces redundant data, saves system resources, and lowers energy consumption.
Since security monitoring does not demand a high sampling frequency, the preferred sampling frequency can be set to 10 frames/second. The first three-dimensional point cloud data include the three-dimensional coordinates X, Y, Z of the point cloud, where the line from the TOF camera to the top of the human body is parallel to the depth coordinate Z direction, and the plane perpendicular to the Z axis is the XY plane.
The TOF camera adopted in the embodiment of the application emits optical signals through a built-in laser emission module and acquires depth information of the target scene through a built-in Complementary Metal Oxide Semiconductor (CMOS) photosensitive element, completing three-dimensional reconstruction of the target scene. The TOF camera acquires a distance depth map of the three-dimensional scene through a CMOS pixel array and an actively modulated light source, without point-by-point scanning, so the imaging rate can reach hundreds of frames per second; it also has a compact structure and low energy consumption.
The data acquisition principle of the TOF camera is to calculate the distance travelled by the light wave from the phase difference produced when light emitted by the camera's internal light source generator is reflected by the detected object; data acquisition is therefore unaffected by external light sources and works even when external light is completely absent. When the measured object is occluded, distance imaging allows two mutually occluding objects to be accurately separated. The method provided by the embodiment of the application is thus suitable both when the illumination is poor or absent and when the detected object is occluded. In the application, the TOF camera collects only the depth-of-field point cloud data of the scene, not intensity or color information, so it has good application prospects in public monitoring gray zones: for example, changing rooms and toilets can be safely monitored without invading personal privacy.
In step S120, the server performs denoising and foreground extraction on the first three-dimensional point cloud data to generate second three-dimensional point cloud data.
Specifically, the server performs the denoising and foreground extraction on the first three-dimensional point cloud data as follows:
the server extracts depth data in the first three-dimensional point cloud data, generates first point cloud depth data, establishes a mapping relation with the first three-dimensional point cloud data, and enables each first point cloud depth data to find the first three-dimensional point cloud data according to the mapping relation.
The server then generates a two-dimensional point cloud matrix from the first point cloud depth data. Because the first three-dimensional point cloud data acquired by the TOF camera are stored in the TOF sensor as an array whose pixel layout equals the sensor resolution, the numbers of rows and columns of the two-dimensional point cloud matrix are designed to match the sensor resolution. In a specific example, when the sensor resolution of the TOF camera is 32×48, that is, 32 horizontal pixels and 48 vertical pixels, the matrix has 48 rows and 32 columns. Preferably, the element positions in the two-dimensional point cloud matrix keep the storage order of the TOF camera sensor array, so that adjacent elements in the matrix are adjacent in the actual scene.
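As a sketch of this arrangement (the patent gives no implementation, so the function name and the use of the 32×48 example resolution are illustrative assumptions):

```python
import numpy as np

# Hypothetical sensor resolution, taken from the example in the text:
# 32 horizontal pixels x 48 vertical pixels.
SENSOR_COLS, SENSOR_ROWS = 32, 48

def build_depth_matrix(first_point_cloud: np.ndarray) -> np.ndarray:
    """first_point_cloud: (N, 3) array of X, Y, Z points stored in sensor order.

    Returns a (rows, cols) matrix of depth (Z) values whose element positions
    match the TOF sensor array, so matrix neighbours are scene neighbours.
    """
    depth = first_point_cloud[:, 2]                 # Z column only
    assert depth.size == SENSOR_ROWS * SENSOR_COLS
    return depth.reshape(SENSOR_ROWS, SENSOR_COLS)  # row-major, like the sensor

# The mapping back to the original point is simply the flat index:
#   point_index = row * SENSOR_COLS + col
```

With this layout, the mapping relation of the previous paragraph reduces to flat-index arithmetic, with no separate lookup table needed.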
All 3×3 submatrices of the two-dimensional point cloud matrix are extracted. The maximum number P of distinct 3×3 submatrices that can be extracted equals the number of interior elements, i.e. those not in the first row, last row, first column, or last column of the matrix. In a specific example, when the two-dimensional point cloud matrix is 32×48 there are 1536 elements in total; removing the 156 elements in the first row, last row, first column and last column gives P = 1380, so 1380 3×3 submatrices can be extracted. When P takes its maximum value, the noise judgment of every point is ensured to the greatest extent.
A position index of the central element of each 3×3 submatrix in the two-dimensional point cloud matrix is established: the row and column marks of each central element are recorded, and the corresponding depth data in the matrix are matched according to these marks. Because the noise judgment is made on the central element of each 3×3 submatrix, only the position of the central element needs to be marked, which greatly reduces the computation load of the system.
Compare with a fourth threshold the first result, obtained by summing the absolute values of the differences between the central element and each other element of the 3×3 submatrix:
if the first result is smaller than the fourth threshold value, reserving an element corresponding to the position of the center element in the two-dimensional point cloud matrix;
if the first result is not smaller than the fourth threshold, extract the 2×2 submatrices of the 3×3 submatrix;
compare the absolute value of the difference between each element of the 2×2 submatrices and the central element, extract the first minimum value, and compare it with a fifth threshold;
if the first minimum value is not smaller than a fifth threshold value, judging that the central element is a noise point, finding the position of the noise point in the two-dimensional point cloud matrix according to the position index, and discarding the element corresponding to the noise point;
if the first minimum value is smaller than a fifth threshold value, reserving an element corresponding to the position of the central element in the two-dimensional point cloud matrix;
The fourth and fifth thresholds are set to suitable values by trying different thresholds against a standard measured scene; preferably, the fifth threshold is no more than half the fourth threshold. In a specific embodiment, when the two-dimensional point cloud matrix is 240×320, the fourth threshold is preferably 0.2. When the first result is not smaller than the fourth threshold, the 2×2 submatrices of the 3×3 submatrix are extracted for a second noise judgment, which effectively reduces the rate of falsely deleting valid points.
And generating fifth three-dimensional point cloud data by using the first point cloud depth data corresponding to the elements reserved in the two-dimensional point cloud matrix when the first result is smaller than the fourth threshold and the first minimum value is smaller than the fifth threshold.
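The two-stage noise test above can be sketched as follows. This is only an illustrative reading of the text: the threshold values are the text's suggestions, and the second stage is interpreted as taking the minimum absolute difference between the centre and its neighbours (the four 2×2 submatrices each contain the centre itself, whose difference is trivially zero, so it is excluded here).

```python
import numpy as np

def denoise_mask(depth: np.ndarray, t4: float = 0.2, t5: float = 0.1) -> np.ndarray:
    """Boolean mask of interior elements that survive the two-stage noise test.

    t4: fourth threshold (text's suggested 0.2); t5: fifth threshold,
    no more than half of t4 as the text prefers.
    """
    rows, cols = depth.shape
    keep = np.zeros((rows, cols), dtype=bool)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            sub = depth[r - 1:r + 2, c - 1:c + 2]       # 3x3 submatrix
            center = depth[r, c]
            diffs = np.abs(sub - center)
            first_result = diffs.sum()                   # sum of |centre - element|
            if first_result < t4:
                keep[r, c] = True                        # smooth region: keep
                continue
            # Second stage: smallest |difference| among the 8 neighbours
            # (centre's own zero difference excluded).
            first_min = np.delete(diffs.ravel(), 4).min()
            if first_min < t5:
                keep[r, c] = True                        # a close neighbour exists
            # else: judged a noise point and discarded
    return keep
```

A lone depth spike fails both stages and is dropped, while its smooth neighbours pass the second stage because they still have close neighbours of their own.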
The server calculates the normal vectors of the fifth three-dimensional point cloud data. Where foreground and background are mixed together, their point clouds have different normal-vector characteristics; the server removes the background points that share the background's normal-vector characteristics, retains the foreground, and generates the second three-dimensional point cloud data.
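The patent does not specify how the normals are estimated or which characteristic marks the background, so the following is only a minimal sketch under stated assumptions: normals are estimated from grid gradients of an organised cloud, and the background is taken to be the points whose normals align with a dominant (e.g. floor) normal.

```python
import numpy as np

def grid_normals(xyz: np.ndarray) -> np.ndarray:
    """xyz: (rows, cols, 3) organised point cloud. Returns per-point unit
    normals from the cross product of the horizontal and vertical grid
    gradients (one simple estimator; the patent does not name one)."""
    dx = np.gradient(xyz, axis=1)            # change along columns
    dy = np.gradient(xyz, axis=0)            # change along rows
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm == 0, 1.0, norm)

def remove_background(xyz: np.ndarray,
                      bg_normal=np.array([0.0, 0.0, 1.0]),
                      cos_thresh: float = 0.95) -> np.ndarray:
    """Keep points whose normal differs from the assumed background normal --
    a stand-in for removing points with the 'same normal-vector
    characteristic' as the background."""
    n = grid_normals(xyz)
    cos = np.abs((n * bg_normal).sum(axis=2))
    return xyz[cos < cos_thresh]             # foreground: unlike the background
```

On a flat floor every normal matches `bg_normal` and everything is removed; a slanted or vertical surface survives as foreground.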
Step S130, the server performs human head extraction on the second three-dimensional point cloud data, and generates third three-dimensional point cloud data.
Specifically, the server acquires initial human body target three-dimensional point cloud data through edge extraction according to the second three-dimensional point cloud data;
the server calculates and extracts the three-dimensional contour lines of each initial human body target from the initial human body target three-dimensional point cloud data, compares the contour lines of each initial human body target against human-head contour characteristics to exclude non-head targets, extracts the three-dimensional contour lines of all real human heads, and generates the third three-dimensional point cloud data.
By means of feature comparison of three-dimensional contour lines of the initial human body target, non-human body targets extracted by mistake in human body edge contour extraction can be removed, and accuracy of the system is improved.
In step S140, the server calculates fourth three-dimensional point cloud data of the center of each human head in the third three-dimensional point cloud data.
Specifically, the server extracts, from the third three-dimensional point cloud data, the first closed contour curve of each human head's three-dimensional contour lines closest to the TOF camera and calculates the center point of each first closed contour curve; this center point can be approximately regarded as the center of the top of the head. The k center points form the fourth three-dimensional point cloud data, where k is the number of human heads.
The three-dimensional contour lines are formed at intervals of M in the depth direction of the point cloud, starting from the point closest to the TOF camera and proceeding from near to far, where M is a preset depth contour interval value.
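The head-top centre of steps above can be sketched as the centroid of the nearest depth band. The function name and the interval value 0.02 m are illustrative assumptions; the patent leaves M preset and unspecified.

```python
import numpy as np

def head_center(head_points: np.ndarray, m: float = 0.02) -> np.ndarray:
    """head_points: (N, 3) points of one extracted head.

    Takes the band of points within M of the minimum depth (the closed
    contour nearest the TOF camera, i.e. the top of the head for a
    ceiling-mounted camera) and returns its centroid as the approximate
    head-top centre.
    """
    z = head_points[:, 2]
    top_band = head_points[z <= z.min() + m]   # nearest contour band
    return top_band.mean(axis=0)               # centre point of the band
```

Applying this per head and stacking the k results yields the fourth three-dimensional point cloud data.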
Step S150, the server calculates the motion acceleration of each human head center according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame.
Specifically, the motion acceleration includes the magnitude of the speed change and its direction: during sudden acceleration the acceleration direction coincides with the direction of motion, and during sudden deceleration it is opposite to the direction of motion.
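One way to realize this computation is a finite difference over three consecutive head-centre positions (the text speaks of the current and previous adjacent frame; keeping one extra frame, or equivalently the previous velocity, is an assumed implementation detail, as is the 10 fps rate taken from the preferred sampling frequency):

```python
import numpy as np

FPS = 10.0  # text's preferred sampling frequency, frames/second

def head_acceleration(p_prev2, p_prev1, p_curr):
    """Second-difference acceleration of one head centre from three
    consecutive frame positions (each an (x, y, z) sequence).

    Returns (magnitude, unit direction); the direction aligns with the
    motion when speeding up and opposes it when braking, as described.
    """
    p_prev2, p_prev1, p_curr = map(np.asarray, (p_prev2, p_prev1, p_curr))
    acc = (p_curr - 2.0 * p_prev1 + p_prev2) * FPS ** 2  # second difference
    mag = float(np.linalg.norm(acc))
    direction = acc / mag if mag > 0 else np.zeros(3)
    return mag, direction
```

Uniform motion gives zero acceleration; a sudden start or stop gives a large magnitude whose direction the server can compare across frames.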
Step S160, if the acceleration change of the human head center movement is larger than a first threshold value, the server sends safety alarm information to the terminal.
Specifically, when an emergency such as a fight or robbery occurs, people exhibit a period of instantaneous, violent movement pointing in changing directions. Because acceleration measures the change of an object's speed and direction of motion, when the acceleration of a human head's movement in consecutive frames is monitored to change suddenly beyond a preset threshold, with its direction angle pointing in different directions across consecutive frames, a safety event can be judged to exist, and the server sends alarm information to the background terminal. The alarm may be a ring tone, or a text reminder popping up on a display screen or intelligent device.
When a safety event such as a fight occurs, a human body often falls down, and the height of the center of the head changes obviously. Therefore, when the acceleration exceeds the preset threshold, the change of the head center between preceding and following frames is further judged: if the height of the head center continuously decreases, a fight is judged to exist and the server sends alarm information to the terminal.
When the height of the center of a human head continuously decreases, misjudgments caused by actions such as bending down to pick something up or to carry something must be excluded, so the presence of onlookers is further judged: when a fight occurs, people usually gather toward the center of the event. The server calculates the ratio of the number of human heads within a 2-meter radius of the head whose center height continuously decreases to the total number of people; when the number of onlookers exceeds half the number of people in the monitoring area, fighting behavior can be judged to exist, and the server sends alarm information to the terminal.
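The onlooker check above reduces to a distance count. A minimal sketch (function name assumed; the 2 m radius and half-of-total comparison are the text's values, and distance is taken in the XY plane as an assumption):

```python
import numpy as np

def crowd_ratio(falling_center: np.ndarray, all_centers: np.ndarray,
                radius: float = 2.0) -> float:
    """Fraction of all detected head centres lying within `radius` metres
    (XY plane) of the head whose centre height keeps decreasing. The server
    compares this ratio with the second threshold (e.g. 0.5) before alarming."""
    if len(all_centers) == 0:
        return 0.0
    d = np.linalg.norm(all_centers[:, :2] - falling_center[:2], axis=1)
    return float((d <= radius).sum()) / len(all_centers)
```

If the returned ratio exceeds the second threshold, the alarm of the surrounding text fires.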
Since a man-made safety problem cannot occur when only one person is within the monitoring range, if the number of human bodies obtained during head extraction is 1, the steps after S140 need not be executed; the current frame is discarded directly and the next frame is judged.
The above is a process of completely implementing the security monitoring method provided by the embodiment of the present application.
According to the security monitoring method and system provided by the embodiment of the application, a TOF camera collects three-dimensional point cloud data of the monitored scene; the data undergo denoising, foreground extraction, and human head extraction; contour lines are used to remove pseudo human targets and to obtain head center points for locating and tracking dangerous behavior. Locating and tracking only the head center points greatly increases computation speed and reduces a large amount of redundant information. The TOF camera used for acquisition is an active vision device unaffected by ambient illumination; data acquisition works even in complete darkness, so the method is unrestricted in monitoring places with poor visual conditions. Because the three-dimensional point cloud data are acquired as depth information, targets in occluding, overlapping relationships can be well separated, giving a better monitoring effect than traditional monitoring equipment. Meanwhile, the server acquires only the depth three-dimensional coordinates of the target, without color or intensity information, so when the system is applied to monitoring gray zones with potential safety hazards, personal privacy is not exposed while safety monitoring is ensured.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments illustrates the general principles of the application and is not intended to limit its scope; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the application are intended to be included within its scope.

Claims (9)

1. A method of security monitoring, the method comprising:
the method comprises the steps that a time-of-flight TOF camera collects first three-dimensional point cloud data of a scene according to a preset sampling frequency and sends the first three-dimensional point cloud data to a server, wherein the first three-dimensional point cloud data comprise three-dimensional coordinates X, Y, Z;
the server performs denoising and foreground extraction on the first three-dimensional point cloud data to generate second three-dimensional point cloud data;
the server extracts the human head of the second three-dimensional point cloud data to generate third three-dimensional point cloud data;
the server calculates fourth three-dimensional point cloud data of the center of the head of each human body in the third three-dimensional point cloud data;
the server calculates the motion acceleration of the center of the head of each human body according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame;
and if the acceleration change of the movement of the center of the head of the human body is larger than a first threshold value, the server sends safety alarm information to the terminal.
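The last two steps of claim 1 can be sketched as below. This is an illustrative reconstruction, not the patent's implementation: it assumes the heads are already matched by index across frames, and the function names, the three-frame finite-difference scheme, and the parameter values are our assumptions.

```python
import numpy as np

def head_accelerations(centers_prev2, centers_prev, centers_curr, dt):
    """Finite-difference acceleration of each head center, from three
    consecutive frames of (k, 3) coordinate arrays sampled dt apart."""
    v_prev = (centers_prev - centers_prev2) / dt
    v_curr = (centers_curr - centers_prev) / dt
    return (v_curr - v_prev) / dt

def alarm_mask(acc_curr, acc_prev, first_threshold):
    """Flag heads whose change in acceleration magnitude between frames
    exceeds the first threshold (the alarm condition of claim 1)."""
    change = np.abs(np.linalg.norm(acc_curr, axis=1)
                    - np.linalg.norm(acc_prev, axis=1))
    return change > first_threshold
```

In practice the server would run this per frame pair at the preset sampling frequency and send the alarm to the terminal whenever any entry of the mask is true.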
2. The method of claim 1, wherein the server performing human head extraction on the second three-dimensional point cloud data to generate the third three-dimensional point cloud data specifically comprises:
the server acquires initial human body target three-dimensional point cloud data according to the second three-dimensional point cloud data;
the server calculates and extracts the three-dimensional contour line of each initial human body target according to the initial human body target three-dimensional point cloud data, compares the three-dimensional contour line profiles of the initial human body targets to exclude initial human body targets whose contours are not human heads, extracts the three-dimensional contour lines of all real human heads, and generates the third three-dimensional point cloud data.
3. The method according to claim 2, wherein the server calculating the fourth three-dimensional point cloud data of the center of each human head in the third three-dimensional point cloud data specifically comprises:
the server extracts, according to the third three-dimensional point cloud data, the first closed contour curve of each human head's three-dimensional contour line that is closest to the TOF camera, calculates the center point of each first closed contour curve, and generates the fourth three-dimensional point cloud data from the k center points, where k is the number of human heads.
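The center-point step of claim 3 can be approximated as below: take each head's contour points nearest the (typically overhead) TOF camera and use their centroid. This is a hedged sketch; the slice thickness and the assumption that a smaller Z coordinate means closer to the camera are ours, not the patent's.

```python
import numpy as np

def head_center(head_points, slice_thickness=0.02):
    """Centroid of the closed contour slice of one head's point cloud
    nearest the camera: keep the points within slice_thickness (metres,
    assumed) of the minimum depth and average their coordinates."""
    z_min = head_points[:, 2].min()
    top = head_points[head_points[:, 2] <= z_min + slice_thickness]
    return top.mean(axis=0)
```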
4. The method of claim 1, wherein the motion acceleration comprises an acceleration value and a direction.
5. The method of security monitoring of claim 1, further comprising:
the server calculates, according to the fourth three-dimensional point cloud data of the current frame and the previous adjacent frame, the difference between the height of the center of each human head in the current frame and its height in the previous frame;
and if the height of the center of the head of the human body continuously decreases, the server sends safety alarm information to the terminal.
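The "continuously decreases" test of claim 5 could be realised as a simple check over a short history of head-center heights; the window length is an assumed tuning parameter, since the patent only requires that the height decreases continuously.

```python
def is_falling(heights, min_frames=3):
    """True when the head-center height strictly decreased over the
    last min_frames frame-to-frame transitions."""
    if len(heights) < min_frames + 1:
        return False
    recent = heights[-(min_frames + 1):]
    return all(b < a for a, b in zip(recent, recent[1:]))
```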
6. The method of claim 4, wherein the acceleration changes include changes in acceleration values and changes in direction angles.
7. The method of security monitoring of claim 1, further comprising:
the server acquires the number of people in the acquired scene according to the third three-dimensional point cloud data;
and if the number of people in the acquired scene is less than 2, determining that no safety problem exists, and the server performs data processing on the first three-dimensional point cloud data of the next frame.
8. The method of claim 5, wherein if the human head center height is continuously reduced, the method further comprises:
the server calculates the ratio of the number of human heads whose center height is continuously decreasing, within a radius of 2 meters around the center of the human head, to the total number of people within that radius;
and if the proportion is larger than a second threshold value, the server sends safety alarm information to the terminal.
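The crowd ratio of claim 8 could be computed as sketched below, given the head-center coordinates and a per-head falling flag; measuring the 2-metre radius as a horizontal distance and including the falling person in the count are our assumptions.

```python
import numpy as np

def falling_ratio(centers, falling, anchor, radius=2.0):
    """Among the people whose head centers lie within radius metres
    (horizontal distance) of the falling person's head center, return
    the fraction whose head height is also continuously decreasing."""
    d = np.linalg.norm(centers[:, :2] - centers[anchor, :2], axis=1)
    nearby = d <= radius
    return falling[nearby].sum() / nearby.sum()
```

If the returned ratio exceeds the second threshold, the server would send the safety alarm to the terminal.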
9. A system of security monitoring, characterized in that the system comprises a time-of-flight TOF camera, a server and a terminal configured to perform the method according to any one of claims 1-8.
CN201911404088.2A 2019-12-31 2019-12-31 Security monitoring method and system Active CN111145166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911404088.2A CN111145166B (en) 2019-12-31 2019-12-31 Security monitoring method and system


Publications (2)

Publication Number Publication Date
CN111145166A CN111145166A (en) 2020-05-12
CN111145166B true CN111145166B (en) 2023-09-01

Family

ID=70522290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911404088.2A Active CN111145166B (en) 2019-12-31 2019-12-31 Security monitoring method and system

Country Status (1)

Country Link
CN (1) CN111145166B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201123898A (en) * 2009-12-31 2011-07-01 Hon Hai Prec Ind Co Ltd Monitoring system, method, and monitoring apparatus including the same
CN105574889A (en) * 2014-10-09 2016-05-11 中国科学院大学 Individual abnormal behavior detecting method and system
WO2016082252A1 (en) * 2014-11-27 2016-06-02 苏州福丰科技有限公司 Airport security check method through three-dimensional face recognition based on cloud server
CN106407985A (en) * 2016-08-26 2017-02-15 中国电子科技集团公司第三十八研究所 Three-dimensional human head point cloud feature extraction method and device thereof
CN106872180A (en) * 2017-01-24 2017-06-20 中国汽车技术研究中心 Method for judging head injury of passenger in vehicle frontal collision
CN107122705A (en) * 2017-03-17 2017-09-01 中国科学院自动化研究所 Face critical point detection method based on three-dimensional face model
CN107533630A (en) * 2015-01-20 2018-01-02 索菲斯研究股份有限公司 For the real time machine vision of remote sense and wagon control and put cloud analysis
CN109477951A (en) * 2016-08-02 2019-03-15 阿特拉斯5D公司 People and/or identification and the system and method for quantifying pain, fatigue, mood and intention are identified while protecting privacy
CN110579772A (en) * 2018-06-11 2019-12-17 视锐光科技股份有限公司 Operation mode of intelligent safety warning system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396385B2 (en) * 2010-08-26 2016-07-19 Blast Motion Inc. Integrated sensor and video motion analysis method
US9401178B2 (en) * 2010-08-26 2016-07-26 Blast Motion Inc. Event analysis system
CA2834877A1 (en) * 2012-11-28 2014-05-28 Henry Leung System and method for event monitoring and detection
KR101593187B1 (en) * 2014-07-22 2016-02-11 주식회사 에스원 Device and method surveiling innormal behavior using 3d image information
US10852419B2 (en) * 2017-10-20 2020-12-01 Texas Instruments Incorporated System and method for camera radar fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Ke. "Research on 3D Human Behavior Recognition Algorithms Based on Depth Video". China Master's Theses Full-text Database, Information Science and Technology, 2019, (No. 01), full text. *

Also Published As

Publication number Publication date
CN111145166A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
Zhan et al. A high-precision forest fire smoke detection approach based on ARGNet
JP6597609B2 (en) Image processing apparatus, monitoring system, image processing method, and program
CN103726879B (en) Utilize camera automatic capturing mine ore deposit to shake and cave in and the method for record warning in time
EP3016382B1 (en) Monitoring methods and devices
US11232689B2 (en) Smoke detection method with visual depth
US20170206423A1 (en) Device and method surveilling abnormal behavior using 3d image information
CN111753609A (en) Target identification method and device and camera
CN106657921A (en) Portable radar perimeter security and protection system
WO2022078182A1 (en) Throwing position acquisition method and apparatus, computer device and storage medium
CN112216052A (en) Forest fire prevention monitoring and early warning method, device and equipment and storage medium
CN111339826B (en) Landslide unmanned aerial vehicle linear sensor network frame detecting system
CN114664048B (en) Fire monitoring and fire early warning method based on satellite remote sensing monitoring
US11594035B2 (en) Monitoring device, and method for monitoring a man overboard situation
US11776275B2 (en) Systems and methods for 3D spatial tracking
CN111047827A (en) Intelligent monitoring method and system for environment-assisted life
US20220035003A1 (en) Method and apparatus for high-confidence people classification, change detection, and nuisance alarm rejection based on shape classifier using 3d point cloud data
CN105528581B (en) Video smoke event intelligent detecting method based on bionical color reaction model
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
CN110807345A (en) Building evacuation method and building evacuation system
Behera et al. Multi-camera based surveillance system
CN109670391B (en) Intelligent lighting device based on machine vision and dynamic identification data processing method
CN111145166B (en) Security monitoring method and system
Zheng et al. Forest farm fire drone monitoring system based on deep learning and unmanned aerial vehicle imagery
EP3510573B1 (en) Video surveillance apparatus and method
CN109405809A (en) A kind of substation's flood depth of water detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant