
CN118298377A - Perimeter intrusion recognition method and system based on video joint acquisition - Google Patents

Perimeter intrusion recognition method and system based on video joint acquisition

Info

Publication number
CN118298377A
CN118298377A (application CN202410483604.XA)
Authority
CN
China
Prior art keywords
intrusion
monitoring
infrared
activating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410483604.XA
Other languages
Chinese (zh)
Inventor
王瑞
杨文�
马祯
张万鹏
胡昊
陈中雷
杨琦
康剑
杨雪
张德强
陈梦
蔡青
郭志华
赵垒
张瀛丹
栗文韬
白国帅
李超
龚建康
黄健
郭星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Railway Sciences Corp Ltd CARS
China State Railway Group Co Ltd
Beijing Jingwei Information Technology Co Ltd
Original Assignee
China Academy of Railway Sciences Corp Ltd CARS
China State Railway Group Co Ltd
Beijing Jingwei Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Railway Sciences Corp Ltd CARS, China State Railway Group Co Ltd, Beijing Jingwei Information Technology Co Ltd filed Critical China Academy of Railway Sciences Corp Ltd CARS
Priority to CN202410483604.XA
Publication of CN118298377A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a perimeter intrusion recognition method and system based on video joint acquisition, relating to the technical field of data processing. In the method, ambient brightness information of a first monitoring area is collected through a photoreceptor; a first infrared imaging set and a first monitoring image set are independently identified to obtain intrusion object distribution coordinates; according to those coordinates, combined with the ambient brightness information, an infrared cradle head imager array or a binocular cradle head camera array is activated to lock the intrusion object and perform joint tracking, obtaining an intrusion object moving coordinate sequence; training of an intrusion behavior recognition channel is executed to generate an intrusion behavior trigger probability; and when the intrusion behavior trigger probability meets a trigger probability threshold, the intrusion behavior recognition result and the real-time coordinates of the intrusion object are sent to a perimeter intrusion management terminal.

Description

Perimeter intrusion recognition method and system based on video joint acquisition
Technical Field
The invention relates to the technical field of data processing, in particular to a perimeter intrusion recognition method and system based on video joint acquisition.
Background
With the development of video recognition technology, and in particular of the field of perimeter intrusion recognition, perimeter intrusion detection generally refers to monitoring a linear boundary and raising an alarm when behavior that illegally crosses the boundary is detected. Such detection is constrained by conditions such as topography, climate, and weather, so the detection requirements are high and the task is difficult. Existing perimeter intrusion detection products are few in variety and offer little choice, and conventional perimeter intrusion recognition struggles to realize joint evaluation across multiple camera positions, resulting in the technical problem of poor recognition accuracy.
Disclosure of Invention
The application provides a perimeter intrusion recognition method and system based on video joint acquisition, which are used to solve the technical problems in the prior art that conventional perimeter intrusion recognition can hardly realize joint evaluation across multiple camera positions and therefore suffers from poor recognition accuracy.
In view of the above problems, the application provides a perimeter intrusion recognition method and a perimeter intrusion recognition system based on video joint acquisition.
In a first aspect, the present application provides a perimeter intrusion recognition method based on video joint acquisition, the method comprising: collecting the ambient brightness information of a first monitoring area through a photoreceptor; when the ambient brightness information is smaller than or equal to a first brightness threshold, activating an infrared cradle head imager array to monitor, obtaining a first infrared imaging set; when the ambient brightness information is larger than the first brightness threshold, activating a binocular cradle head camera array to monitor, obtaining a first monitoring image set; independently identifying the first infrared imaging set and the first monitoring image set to obtain intrusion object distribution coordinates; activating the infrared cradle head imager array or the binocular cradle head camera array to lock the intrusion object according to the intrusion object distribution coordinates combined with the ambient brightness information and perform joint tracking, obtaining an intrusion object moving coordinate sequence; executing training of an intrusion behavior recognition channel according to the intrusion object moving coordinate sequence, generating an intrusion behavior trigger probability; and when the intrusion behavior trigger probability meets a trigger probability threshold, activating audible and visual alarm equipment to alarm while sending the intrusion behavior recognition result and the real-time coordinates of the intrusion object to a perimeter intrusion management terminal.
In a second aspect, the present application provides a perimeter intrusion recognition system based on video joint acquisition, the system comprising: an information acquisition module for acquiring the ambient brightness information of the first monitoring area through the photoreceptor; a first monitoring module for activating the infrared cradle head imager array to monitor when the ambient brightness information is smaller than or equal to a first brightness threshold, obtaining a first infrared imaging set; a second monitoring module for activating the binocular cradle head camera array to monitor when the ambient brightness information is larger than the first brightness threshold, obtaining a first monitoring image set; an independent identification module for independently identifying the first infrared imaging set and the first monitoring image set to obtain intrusion object distribution coordinates; a joint tracking module for activating the infrared cradle head imager array or the binocular cradle head camera array to lock the intrusion object according to the intrusion object distribution coordinates and the ambient brightness information and perform joint tracking, obtaining an intrusion object moving coordinate sequence; a first training module for executing training of the intrusion behavior recognition channel according to the intrusion object moving coordinate sequence and generating an intrusion behavior trigger probability; and an alarm module for activating the audible and visual alarm equipment when the intrusion behavior trigger probability meets the trigger probability threshold, while sending the intrusion behavior recognition result and the real-time coordinates of the intrusion object to the perimeter intrusion management terminal.
The application provides a perimeter intrusion recognition method and system based on joint video acquisition, relating to the technical field of data processing. It solves the technical problems in the prior art that conventional perimeter intrusion recognition can hardly realize joint evaluation across multiple camera positions and therefore suffers from poor recognition accuracy, and it improves the perimeter intrusion recognition rate through joint acquisition of visible-light and infrared video.
Drawings
FIG. 1 is a schematic flow chart of a perimeter intrusion recognition method based on video joint acquisition;
FIG. 2 is a schematic diagram of the distribution coordinates of an intrusion object in a perimeter intrusion recognition method based on video joint acquisition;
FIG. 3 is a schematic diagram of a perimeter intrusion recognition system based on video joint acquisition.
Reference numerals illustrate: the system comprises an information acquisition module 1, a first monitoring module 2, a second monitoring module 3, an independent identification module 4, a joint tracking module 5, a first training module 6 and an alarm module 7.
Detailed Description
The application provides a perimeter intrusion recognition method and system based on joint video acquisition, which are used to solve the technical problem of a low perimeter intrusion recognition rate caused by the single acquisition method used for perimeter intrusion recognition in the prior art.
Example 1
As shown in fig. 1, an embodiment of the present application provides a perimeter intrusion recognition method based on video joint acquisition, which includes:
Step A100: collecting the ambient brightness information of a first monitoring area through a photoreceptor;
In the application, the perimeter intrusion recognition method based on video joint acquisition is applied to a perimeter intrusion recognition system based on video joint acquisition, and that system is in communication connection with a photoreceptor used to acquire ambient brightness parameters.
Furthermore, in order to better identify perimeter intrusion, the first monitoring area needs to be sensed by the photoreceptor connected to the system so as to acquire the ambient brightness within it. The first monitoring area is the area inside the perimeter that is defined as requiring monitoring. When the photosensitive material in the photoreceptor is irradiated by light of a suitable wavelength, its current increases with the light intensity, realizing photoelectric conversion. Data such as the ambient light intensity, ambient light color, and ambient light direction in the first monitoring area are thereby acquired, summarized, and recorded as the ambient brightness information to be output, which serves as an important reference basis for the subsequent video-joint-based perimeter intrusion recognition and acquisition.
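The photoelectric conversion described above can be sketched in a few lines. This is an illustrative sketch only: the patent does not specify the photoreceptor's interface, so the 12-bit ADC range, the linear response, and the 10,000-lux full scale below are assumptions invented for demonstration.

```python
# Illustrative sketch only: the photoreceptor interface, ADC scale, and lux
# conversion are assumptions, not specified by the patent.
from dataclasses import dataclass

@dataclass
class AmbientBrightnessInfo:
    intensity_lux: float   # ambient light intensity
    color_temp_k: float    # ambient light color (correlated color temperature)
    direction_deg: float   # dominant light direction, in degrees

def adc_to_lux(adc_value: int, adc_max: int = 4095,
               full_scale_lux: float = 10000.0) -> float:
    """Map a raw photoreceptor ADC reading to an approximate illuminance.

    The photoreceptor's current grows with light intensity; here we assume
    a simple linear mapping from the 12-bit ADC range to 0..full_scale_lux.
    """
    if not 0 <= adc_value <= adc_max:
        raise ValueError("ADC reading out of range")
    return adc_value / adc_max * full_scale_lux
```

In a deployment, readings like these would be summarized per monitoring area into an `AmbientBrightnessInfo` record before the threshold comparison of step A200.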
Step A200: when the ambient brightness information is smaller than or equal to a first brightness threshold value, activating an infrared cradle head imager array to monitor so as to obtain a first infrared imaging set;
In the application, in order to improve the accuracy of identifying the intruder during perimeter intrusion recognition, the acquired ambient brightness information is compared with a first brightness threshold, which is obtained from the preset average ambient brightness of the first monitoring area over a historical period. When the ambient brightness information is smaller than or equal to the first brightness threshold, the infrared cradle head imager array is activated.
The infrared cradle head imager array is in communication connection with the perimeter intrusion recognition system based on video joint acquisition. It is formed by arranging a plurality of infrared cradle head imagers at equal distances in the first monitoring area, and it monitors and acquires the infrared radiation energy distribution in that area. A first infrared imaging set is thereby generated, comprising a plurality of infrared thermal images of the first monitoring area, which supports the subsequent video-joint-based perimeter intrusion recognition and acquisition.
Step A300: when the ambient brightness information is larger than the first brightness threshold value, activating a binocular cradle head camera array to monitor, and obtaining a first monitoring image set;
Further, step A300 of the present application further includes:
Step a310: when the ambient brightness information is larger than the first brightness threshold and larger than the second brightness threshold, activating a binocular cradle head camera array to monitor, and obtaining a first monitoring image set;
Step A320: when the ambient brightness information is larger than the first brightness threshold and smaller than or equal to the second brightness threshold, activating a laser light supplementing assembly, and combining the binocular cradle head camera array for monitoring to obtain a first monitoring image set;
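Steps A200 through A320 together define a three-way device selection driven by the two brightness thresholds. The sketch below captures that branching logic; the `MonitorMode` names and any concrete threshold values are illustrative assumptions, not terms fixed by the patent.

```python
# Sketch of the device-selection logic in steps A200-A320; mode names and
# threshold values are illustrative assumptions.
from enum import Enum, auto

class MonitorMode(Enum):
    INFRARED_ARRAY = auto()             # brightness <= first threshold
    BINOCULAR_WITH_LASER_FILL = auto()  # first < brightness <= second threshold
    BINOCULAR_ARRAY = auto()            # brightness > second threshold

def select_monitor_mode(brightness: float,
                        first_threshold: float,
                        second_threshold: float) -> MonitorMode:
    """Choose the acquisition device according to the brightness thresholds.

    The text defines the second threshold as the mean of the infrared
    array's highest working brightness and the binocular camera's lowest,
    and states that it exceeds the first threshold.
    """
    if second_threshold <= first_threshold:
        raise ValueError("second threshold must exceed the first")
    if brightness <= first_threshold:
        return MonitorMode.INFRARED_ARRAY
    if brightness <= second_threshold:
        return MonitorMode.BINOCULAR_WITH_LASER_FILL
    return MonitorMode.BINOCULAR_ARRAY
```

The middle branch corresponds to step A320, where the laser light supplementing assembly augments the binocular cradle head camera array.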
Further, step A300 of the present application further includes:
Step A330: obtaining first monitoring area coordinates of a first binocular cradle head camera of the binocular cradle head camera array;
Step A340: extracting a plurality of vegetation coverage areas according to the first monitoring area coordinates, wherein any one vegetation coverage area is provided with a vegetation coverage width label, a vegetation coverage height label and a vegetation coverage length label;
Step A350: constructing a vegetation coverage three-dimensional model according to the vegetation coverage width label, the vegetation coverage height label and the vegetation coverage length label;
Step A360: based on a historical intrusion event set, configuring an intrusion object width minimum value, an intrusion object height minimum value, and an intrusion object length minimum value, and constructing an intrusion object threshold model;
Step A370: comparing the intrusion object threshold model with the vegetation coverage three-dimensional model; when the vegetation coverage three-dimensional model can occlude the intrusion object threshold model, activating a first infrared cradle head imager at the first monitoring area coordinates to perform auxiliary monitoring, obtaining an infrared monitoring auxiliary image, and adding the infrared monitoring auxiliary image into the first monitoring image set of the first monitoring area coordinates.
In order to identify perimeter intrusion more accurately, on the basis of the first infrared imaging set obtained by the arranged infrared cradle head imager array monitoring the first monitoring area, the binocular cradle head camera array is activated when the ambient brightness information is larger than the first brightness threshold. The binocular cradle head camera array is in communication connection with the perimeter intrusion recognition system based on video joint acquisition and is used to acquire image parameters of intruders in the environment.
Further, when the ambient brightness information is greater than the first brightness threshold and greater than a second brightness threshold, the binocular cradle head camera array is activated to monitor. The second brightness threshold is obtained as the mean of the highest working brightness value of the infrared cradle head imager array and the lowest working brightness value of the binocular cradle head camera, and it is greater than the first brightness threshold. At such brightness the binocular cradle head camera array can perform intelligent video recognition with color imaging of the first monitoring area; the visual effect is good and compensates for the poor resolution of thermal images, and the resulting color images are summarized and recorded as the first monitoring image set. When the ambient brightness information is greater than the first brightness threshold but smaller than or equal to the second brightness threshold, a laser light supplementing assembly connected with the binocular cradle head camera array is activated to supplement light in the first monitoring area while the binocular cradle head camera array performs intrusion monitoring.
During intrusion detection, monitoring coordinates in the first monitoring area are also marked, namely according to the first monitoring area coordinates corresponding to a first binocular cradle head camera in the array, where the first binocular cradle head camera is any camera selected from the binocular cradle head camera array and the first monitoring area coordinates are constructed from the area that this camera monitors. The regions with vegetation in the first monitoring area, that is, the regions containing static irregular objects, are then extracted according to these coordinates and recorded, generating a plurality of vegetation coverage areas, each carrying a vegetation coverage width label, a vegetation coverage height label, and a vegetation coverage length label. The width label marks the widest extent of the vegetation coverage, the height label marks the height of the vegetation above the horizontal plane, and the length label marks the longest extent of the vegetation coverage. Based on these labels, the vegetation coverage width, height, and length are aligned with coordinate feature points through the iterative closest point algorithm: the optimal point-cloud transformation matrix is searched iteratively so that the distances between corresponding points are minimized, and a vegetation coverage three-dimensional model is constructed from the registered data.
At the same time, a historical intrusion event set is taken as a basis, where a historical intrusion event is an identified occurrence of an intruder in the first monitoring area during the historical period. The minimum width, minimum height, and minimum length of intrusion objects are extracted from this set: the minimum width refers to the smallest width among historical intruders, the minimum height to the smallest height, and the minimum length to the smallest length. These width, height, and length data are registered in the spatial coordinate system of the first monitoring area by the same registration process, completing the construction of the intrusion object threshold model. Finally, the intrusion object threshold model is compared with the vegetation coverage three-dimensional model. When the area enclosed by the connected vegetation feature points in the vegetation coverage three-dimensional model is larger than the area enclosed by the connected intrusion feature points in the intrusion object threshold model, the vegetation can occlude the intrusion object; the binocular cradle head camera array then has a blind zone during intrusion monitoring, and the first infrared cradle head imager at the first monitoring area coordinates must be activated to assist it. The infrared monitoring auxiliary image generated by the first infrared cradle head imager is obtained and added to the first monitoring image set of the first monitoring area coordinates, improving the monitoring accuracy of the binocular cradle head camera and laying a solid foundation for the subsequent video-joint-based perimeter intrusion recognition.
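The occlusion comparison of step A370 can be illustrated under a strong simplification. The sketch below reduces both the vegetation coverage three-dimensional model and the intrusion object threshold model to (width, height, length) boxes; the patent's feature-point area comparison is richer, so treat this only as a hedged approximation of the decision rule.

```python
# Simplified stand-in for step A370: both models are reduced to
# (width, height, length) boxes, an assumption made for illustration.
from typing import List, NamedTuple

class BoxModel(NamedTuple):
    width: float
    height: float
    length: float

def vegetation_can_occlude(vegetation: BoxModel, intruder_min: BoxModel) -> bool:
    """True when the vegetation volume could fully hide the smallest
    plausible intruder, i.e. every vegetation dimension meets or exceeds
    the corresponding intrusion-object minimum."""
    return (vegetation.width >= intruder_min.width
            and vegetation.height >= intruder_min.height
            and vegetation.length >= intruder_min.length)

def needs_infrared_assist(vegetation_areas: List[BoxModel],
                          intruder_min: BoxModel) -> bool:
    # Any single occluding vegetation patch creates a blind zone for the
    # binocular camera, so the co-located infrared imager is activated.
    return any(vegetation_can_occlude(v, intruder_min) for v in vegetation_areas)
```

Under this reading, the infrared auxiliary image is requested as soon as one vegetation coverage area can fully conceal the minimal intruder defined by the historical intrusion event set.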
Step A400: independently identifying the first infrared imaging set and the first monitoring image set to obtain an intrusion object distribution coordinate;
Further, as shown in FIG. 2, step A400 of the present application further includes:
Step A410: traversing the first infrared imaging set and the first monitoring image set, and configuring a reference infrared imaging set and a reference monitoring image set, wherein the reference infrared imaging set and the reference monitoring image set are monitoring images containing no intrusion objects;
Step a420: and activating an intrusion object judging channel, traversing the first infrared imaging set and the first monitoring image set, and combining the reference infrared imaging set and the reference monitoring image set to identify so as to obtain the intrusion object distribution coordinates.
Further, step A420 of the present application includes:
Step A421: traversing the first infrared imaging set and the first monitoring image set, and extracting a first image to be identified and a first reference image;
Step a422: when the first image to be identified is an infrared image, a first feature extraction path of an infrared response node of the intrusion object judgment channel is activated to receive the first image to be identified, a second feature extraction path of the infrared response node of the intrusion object judgment channel is activated to receive the first reference image for abnormal identification, and a morphological deviation area is obtained, wherein the infrared response node is a twin neural network;
Step A423: when the first image to be identified is a non-infrared image, activating a third feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first image to be identified, and activating a fourth feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first reference image for abnormal identification, so as to obtain a morphological deviation area, wherein the conventional image response node is a twin neural network;
Step A424: positioning the morphological deviation area based on a cradle head coordinate system to generate the intrusion object distribution coordinates.
In the application, in order to better determine the distribution coordinates of intrusion objects within the perimeter, the first infrared imaging set obtained by the infrared cradle head imager array and the first monitoring image set obtained by the binocular cradle head camera array must be identified independently. The infrared images in the first infrared imaging set and the video images in the first monitoring image set are traversed in sequence, and the images containing no intrusion object are extracted and integrated to generate a reference infrared imaging set and a reference monitoring image set; that is, the reference sets are monitoring images without intrusion objects. After the intrusion object judgment channel is activated, each image data node contained in the first infrared imaging set and the first monitoring image set is accessed in sequence and identified by traversal against the reference infrared imaging set and the reference monitoring image set, during which a first image to be identified and a first reference image are extracted. When the first image to be identified is an infrared image, the first feature extraction path of the infrared response node of the intrusion object judgment channel is activated to receive the first image to be identified, while the second feature extraction path of the infrared response node is activated to receive the first reference image; anomaly identification is carried out on the first image to be identified, and a morphological deviation area is generated accordingly.
The infrared response node is a twin neural network for the first monitoring area. The twin neural network is built on a coupling framework of two artificial neural networks of the same structure. Its input data include configuration parameters obtained by inputting first selection data into a detection device selection unit used for selecting detection devices; it comprises the infrared response node corresponding to the infrared cradle head imager array and the conventional image response node corresponding to the binocular cradle head camera array, with a first construction data set obtained through the infrared response node and a second construction data set obtained through the conventional image response node. The twin neural network is a neural network model that can continuously self-optimize through machine learning; it is obtained by training with a training data set under the supervision of a supervision data set, where each group of training data corresponds one-to-one to a group of supervision data.
Further, the twin neural network training process is as follows: each group of training data in the training data set is input into the twin neural network, and the network's output is supervised and adjusted through the supervision data corresponding to that group. When the output of the twin neural network is consistent with the supervision data, training on the current group is complete; when all groups in the training data set have been processed, training of the twin neural network is complete.
In order to ensure the convergence and accuracy of the twin neural network, convergence may be judged when the output of the network approaches a fixed value, and accuracy may be verified with a test data set. For example, the required test accuracy may be set to 80%, and when the accuracy on the test data set reaches 80%, construction of the twin neural network is complete.
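The twin-network comparison of steps A422 and A423 can be conveyed with a deliberately tiny stand-in. Both branches share one embedding function, which is the defining property of a twin (Siamese) network, and a distance threshold flags a morphological deviation; the linear embedding and the threshold below are toy assumptions standing in for a trained backbone.

```python
# Conceptual sketch of the twin (Siamese) response node: two inputs pass
# through the SAME embedding (shared weights), and the distance between
# embeddings flags a morphological deviation. The linear embedding is a
# toy assumption, not the patent's trained network.
import math
from typing import List

def embed(image: List[float], weights: List[float]) -> List[float]:
    # Shared-weight embedding: elementwise scaling (stand-in for a CNN).
    return [p * w for p, w in zip(image, weights)]

def embedding_distance(a: List[float], b: List[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def deviates(image: List[float], reference: List[float],
             weights: List[float], threshold: float) -> bool:
    """Twin-branch comparison: both images use the same weights, so the
    distance reflects content difference rather than branch difference."""
    return embedding_distance(embed(image, weights),
                              embed(reference, weights)) > threshold
```

The infrared response node and the conventional image response node would each be one such twin comparator, differing only in the modality of the image pairs they receive.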
Similarly, when the first image to be identified is a non-infrared image, the third feature extraction path of the conventional image response node of the intrusion object judging channel is activated to receive the first image to be identified, and the fourth feature extraction path of the conventional image response node is activated to receive the first reference image for abnormal identification, generating a morphological deviation area in the image. The conventional image response node is likewise computed by a twin neural network of the first monitoring area. The morphological deviation area is then positioned in a pan-tilt coordinate system, which takes the feature coordinate points shared by the infrared pan-tilt imager array and the binocular pan-tilt camera array in the first monitoring area as its reference, so that the intrusion object distribution coordinates are generated from the aligned coordinates. Since all intrusion object distribution coordinates lie in the same pan-tilt coordinate system, perimeter intrusion identification based on video joint acquisition is realized.
Step A500: activating the infrared tripod head imager array or the binocular tripod head camera array to lock the invasion object according to the invasion object distribution coordinates and combining the environment brightness information to perform joint tracking so as to obtain an invasion object moving coordinate sequence;
further, the step a500 of the present application further includes:
step A510: based on the intrusion object distribution coordinates, locking an intrusion object at a first monitoring device for tracking and monitoring to obtain a first area intrusion object moving coordinate sequence, wherein the first monitoring device is an infrared holder imager or/and a binocular holder camera;
step A520: when the intrusion object leaves the first area and enters a second area, activating a second monitoring device to lock the intrusion object for tracking and monitoring, obtaining a second area intrusion object moving coordinate sequence;
Step A530: according to the monitoring time sequence, storing the first area intrusion object moving coordinate sequence, the second area intrusion object moving coordinate sequence, and so on up to the K-th area intrusion object moving coordinate sequence together, to obtain the intrusion object moving coordinate sequence.
In the application, the obtained intrusion object distribution coordinates are taken as reference data. After they are combined with the ambient brightness information, the infrared pan-tilt imager array and/or the binocular pan-tilt camera array is activated, and the intrusion object is locked for joint tracking. That is, on the basis of the intrusion object distribution coordinates, the intrusion object is locked and monitored by a first monitoring device, which may be any monitoring device among the infrared pan-tilt imagers or binocular pan-tilt cameras. The intrusion object is dynamically and continuously monitored, and a first area intrusion object moving coordinate sequence is generated from the monitored coordinate points of the intrusion object at each monitoring device. Further, when the intrusion object leaves the first area and enters a second area (the first area and the second area being adjacent, connected block areas), a second monitoring device is activated to lock and monitor the intrusion object, and a second area intrusion object moving coordinate sequence is generated from the monitored coordinate points in the second area. Finally, according to the monitoring time sequence, the first area intrusion object moving coordinate sequence, the second area intrusion object moving coordinate sequence, and so on up to the K-th area intrusion object moving coordinate sequence are stored together to obtain the overall intrusion object moving coordinate sequence, which serves as reference data for the later recognition of perimeter intrusion based on video joint acquisition.
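The per-area stitching of steps A510–A530 can be sketched as follows; the (timestamp, x, y) point layout is an assumption chosen for illustration:

```python
from itertools import chain

def merge_tracks(*area_tracks):
    # Each track is a list of (timestamp, x, y) points recorded by one area's
    # monitoring device; merging by timestamp yields the overall movement sequence.
    return sorted(chain.from_iterable(area_tracks), key=lambda p: p[0])

# Hypothetical tracks from two adjacent block areas.
area1 = [(0, 1.0, 1.0), (1, 1.5, 1.2)]
area2 = [(2, 2.0, 1.4), (3, 2.6, 1.5)]
sequence = merge_tracks(area1, area2)
```

The same call accepts K area tracks, matching the "up to the K-th area" accumulation in step A530.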
Step A600: according to the moving coordinate sequence of the intrusion object, training an intrusion behavior recognition channel is executed, and intrusion behavior triggering probability is generated;
further, the step a600 of the present application further includes:
step a610: configuring an intrusion behavior movement sensitive path;
step a620: collecting an intrusion transaction object moving coordinate sequence data set and a conventional object moving coordinate sequence data set, mixing, and generating an intrusion behavior recognition training data set;
Step a630: configuring an intrusion behavior trigger probability evaluation function:
Yi=(y1,y2,…yl,…,yQ);
wherein P(X0) represents the intrusion behavior trigger probability of the moving coordinate sequence of an intrusion object, Yi represents the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, yl represents the l-th coordinate of the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, X0 represents the moving coordinate sequence of the intrusion object, Q represents the number of coordinates of the i-th intrusion behavior movement sensitive path, and M represents the number of intrusion behavior movement sensitive paths;
Step A640: according to the intrusion trigger probability evaluation function and the intrusion movement sensitive path, an intrusion evaluation rule is constructed, unsupervised training is carried out on the intrusion recognition training data set, the intrusion recognition channel is generated, the intrusion object movement coordinate sequence is analyzed, and the intrusion trigger probability is generated.
In the application, in order to improve the accuracy of perimeter intrusion recognition, the intrusion behavior recognition channel is trained on the basis of the intrusion object moving coordinate sequence. First, the intrusion behavior movement sensitive paths are configured; these are abnormal intrusion paths recorded in the first monitoring area, and may include a climbing path, a railing-crossing path, and the like. Further, an intrusion transaction object moving coordinate sequence data set and a conventional object moving coordinate sequence data set are collected and mixed. The former is a data set of dynamic coordinate points recorded in the first monitoring area and identified as belonging to intrusion objects; the latter is a data set of dynamic coordinate points of objects moving normally in the first monitoring area. On this basis, the intrusion behavior recognition training data set is generated. Further, an intrusion behavior trigger probability evaluation function is constructed from the intrusion behavior movement sensitive paths; the configured intrusion behavior trigger probability evaluation function is:
Yi=(y1,y2,…yl,…,yQ);
wherein P(X0) represents the intrusion behavior trigger probability of the moving coordinate sequence of an intrusion object, Yi represents the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, yl represents the l-th coordinate of the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, X0 represents the moving coordinate sequence of the intrusion object, Q represents the number of coordinates of the i-th intrusion behavior movement sensitive path, and M represents the number of intrusion behavior movement sensitive paths;
The intrusion level of an intrusion object in the first monitoring area is evaluated through the moving coordinate sequence of the intrusion object and the moving coordinate sequences of the intrusion behavior movement sensitive paths. The evaluation is performed according to the degree of influence of the intrusion object on the intruded area: the higher the evaluated level, the greater the influence of the intrusion. The intrusion behavior trigger probability of the intrusion object moving coordinate sequence is thereby obtained. At the same time, unsupervised training is performed on the intrusion behavior recognition training data set, that is, the data set is automatically clustered into several categories of high mutual similarity, completing the division of the training data set and generating the intrusion behavior recognition channel. The intrusion object moving coordinate sequence is then analyzed in the intrusion behavior recognition channel to generate the intrusion behavior trigger probability, improving the accuracy of the later perimeter intrusion recognition based on video joint acquisition.
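The patent's exact P(X0) expression is not reproduced in the text, so the sketch below substitutes an assumed mean point-to-path distance against the M sensitive paths, mapped through exp(-d), purely to illustrate the shape of such an evaluation function:

```python
import math

def trigger_probability(x0, sensitive_paths):
    # x0: the intrusion object's moving coordinate sequence, a list of (x, y) points.
    # sensitive_paths: the M configured movement sensitive paths Y1..YM.
    def mean_dist(track, path):
        # Average distance from each track point to its nearest path point.
        return sum(min(math.dist(p, q) for q in path) for p in track) / len(track)

    d = min(mean_dist(x0, path) for path in sensitive_paths)  # closest of the M paths
    return math.exp(-d)  # in (0, 1]; higher when the track hugs a sensitive path
```

A track that coincides with a configured sensitive path scores 1.0; a distant conventional track scores near 0, which is the ordering the trigger-probability threshold in step A700 relies on.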
Step A700: when the intrusion behavior triggering probability meets a triggering probability threshold, activating audible and visual alarm equipment to alarm, and simultaneously sending an intrusion behavior identification result and real-time coordinates of an intrusion object to a perimeter intrusion management terminal.
In the application, the intrusion behavior trigger probability is first judged by comparing it with a trigger probability threshold, which is set according to the historical maximum trigger probability of intrusions in the first monitoring area. When the intrusion behavior trigger probability meets the trigger probability threshold, the current intrusion behavior is considered to have the greatest influence on the first monitoring area. The audible and visual alarm equipment is therefore activated to give a joint light and sound alarm in the first monitoring area. Meanwhile, the intrusion behavior identification result for the intrusion object corresponding to the current intrusion behavior and the real-time coordinates of the intrusion object are determined in the first monitoring area, integrated, and sent to the perimeter intrusion management terminal, so that the managers of the first monitoring area are notified to control the intrusion object. This ensures better later-stage perimeter intrusion recognition based on video joint acquisition.
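A minimal sketch of this final gate, with the alarm device and management terminal stood in for by plain lists (placeholders, not the patent's actual interfaces):

```python
def handle_intrusion(prob, threshold, result, coords, alarm_device, terminal):
    # Gate the alarm on the trigger-probability threshold from the history of the area.
    if prob >= threshold:
        alarm_device.append("sound+light")       # activate the audible-visual alarm
        terminal.append((result, coords))        # notify the perimeter management terminal
        return True
    return False
```

Below threshold, neither the alarm nor the terminal is touched, matching the step's "only when the probability meets the threshold" condition.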
In summary, the perimeter intrusion recognition method based on video joint acquisition provided by the embodiment of the application has at least the following technical effect: the perimeter intrusion recognition rate is improved through the joint acquisition of visible light and infrared video.
Example two
Based on the same inventive concept as the perimeter intrusion recognition method based on video joint acquisition in the foregoing embodiment, as shown in fig. 3, the present application provides a perimeter intrusion recognition system based on video joint acquisition, the system comprising:
the information acquisition module 1 is used for acquiring the environment brightness information of the first monitoring area through the photoreceptor;
The first monitoring module 2 is configured to activate the infrared pan-tilt imager array to monitor when the ambient brightness information is less than or equal to a first brightness threshold value, so as to obtain a first infrared imaging set;
The second monitoring module 3 is configured to activate the binocular tripod head camera array to monitor when the environmental brightness information is greater than the first brightness threshold value, so as to obtain a first monitoring image set;
the independent identification module 4 is used for independently identifying the first infrared imaging set and the first monitoring image set to obtain an intrusion object distribution coordinate;
The joint tracking module 5 is used for activating the infrared tripod head imager array or the binocular tripod head camera array to lock the invasion object to perform joint tracking according to the invasion object distribution coordinates and the environment brightness information, so as to obtain an invasion object moving coordinate sequence;
the first training module 6 is used for executing training of the intrusion behavior recognition channel according to the intrusion object moving coordinate sequence and generating intrusion behavior triggering probability;
and the alarm module 7 is used for activating the audible and visual alarm equipment to alarm when the intrusion behavior trigger probability meets a trigger probability threshold, and simultaneously sending an intrusion behavior identification result and the real-time coordinates of an intrusion object to the perimeter intrusion management terminal.
Further, the system further comprises:
The first judging module is used for activating the binocular tripod head camera array to monitor when the ambient brightness information is greater than both the first brightness threshold and the second brightness threshold, so as to obtain a first monitoring image set;
The second judging module is used for activating a laser light supplementing assembly and combining the binocular cradle head camera array to monitor when the ambient brightness information is larger than the first brightness threshold and smaller than or equal to the second brightness threshold, so as to obtain a first monitoring image set;
Wherein the second luminance threshold is greater than the first luminance threshold.
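Under the stated relation (second threshold greater than the first), the brightness-driven device selection of the two judging modules can be sketched as follows; the device names are illustrative:

```python
def select_devices(brightness, t1, t2):
    # t1: first brightness threshold, t2: second brightness threshold, with t2 > t1.
    assert t2 > t1
    if brightness <= t1:
        return {"infrared_array"}                        # dark: infrared pan-tilt imagers
    if brightness <= t2:
        return {"binocular_array", "laser_fill_light"}   # dim: cameras plus laser fill light
    return {"binocular_array"}                           # bright: binocular cameras alone
```

The middle branch is the laser light supplementing case: bright enough for cameras only with assistance, hence both devices are returned.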
Further, the system further comprises:
the coordinate acquisition module is used for acquiring first monitoring area coordinates of a first binocular tripod head camera of the binocular tripod head camera array;
The area extraction module is used for extracting a plurality of vegetation coverage areas according to the first monitoring area coordinates, wherein any one vegetation coverage area is provided with a vegetation coverage width label, a vegetation coverage height label and a vegetation coverage length label;
the first model construction module is used for constructing a vegetation coverage three-dimensional model according to the vegetation coverage width label, the vegetation coverage height label and the vegetation coverage length label;
The second model construction module is used for configuring an intrusion object width minimum value, an intrusion object height minimum value and an intrusion object length minimum value based on the historical intrusion transaction set to construct an intrusion object threshold model;
the auxiliary monitoring module is used for comparing the invasive object threshold model with the vegetation coverage three-dimensional model, when the vegetation coverage three-dimensional model can shade the invasive object threshold model, the first infrared cradle head imager of the first monitoring area coordinate is activated to perform auxiliary monitoring, an infrared monitoring auxiliary image is obtained, and the infrared monitoring auxiliary image is added into the first monitoring image set of the first monitoring area coordinate.
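A minimal sketch of the occlusion comparison between the vegetation coverage model and the intrusion object threshold model, reducing both to (width, height, length) extents — a simplifying assumption, since the patent builds full three-dimensional models:

```python
def vegetation_occludes(veg_whl, min_obj_whl):
    # veg_whl: (width, height, length) from the vegetation coverage labels.
    # min_obj_whl: configured minimum intrusion-object (width, height, length).
    # Vegetation that equals or exceeds the minimal object in every extent can
    # fully hide it, which is the trigger for infrared auxiliary monitoring.
    return all(v >= o for v, o in zip(veg_whl, min_obj_whl))
```

When this returns True, the module activates the first infrared pan-tilt imager and merges its auxiliary image into the first monitoring image set.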
Further, the system further comprises:
The first traversing module is used for traversing the first infrared imaging set and the first monitoring image set and configuring a reference infrared imaging set and a reference monitoring image set, wherein the reference infrared imaging set and the reference monitoring image set are monitoring images of non-invasive objects;
The identification module is used for activating an intrusion object judgment channel, traversing the first infrared imaging set and the first monitoring image set, and combining the reference infrared imaging set and the reference monitoring image set for identification to obtain the intrusion object distribution coordinates.
Further, the system further comprises:
the second traversing module is used for traversing the first infrared imaging set and the first monitoring image set and extracting a first image to be identified and a first reference image;
The first anomaly identification module is used for activating a first feature extraction path of an infrared response node of the intrusion object judgment channel to receive the first image to be identified when the first image to be identified is an infrared image, and activating a second feature extraction path of the infrared response node of the intrusion object judgment channel to receive the first reference image for anomaly identification, so as to obtain a morphological deviation area, wherein the infrared response node is a twin neural network;
the second anomaly identification module is used for activating a third feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first image to be identified when the first image to be identified is a non-infrared image, and activating a fourth feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first reference image for anomaly identification, so as to obtain a morphological deviation area, wherein the conventional image response node is a twin neural network;
and the coordinate positioning module is used for positioning the morphological deviation area based on a holder coordinate system and generating the intrusion object distribution coordinate.
Further, the system further comprises:
the first tracking and monitoring module is used for locking an intrusion object at a first monitoring device based on the intrusion object distribution coordinates to carry out tracking and monitoring to obtain a first area intrusion object moving coordinate sequence, wherein the first monitoring device is an infrared cradle head imager or/and a binocular cradle head camera;
The second tracking and monitoring module is used for activating a second monitoring device to lock the intrusion object for tracking and monitoring when the intrusion object leaves the first area and enters a second area, so as to obtain a second area intrusion object moving coordinate sequence;
and the simultaneous storage module is used for storing, according to the monitoring time sequence, the first area intrusion object moving coordinate sequence, the second area intrusion object moving coordinate sequence, and so on up to the K-th area intrusion object moving coordinate sequence, to obtain the intrusion object moving coordinate sequence.
Further, the system further comprises:
the path configuration module is used for configuring an intrusion behavior movement sensitive path;
The mixing module is used for collecting the moving coordinate sequence data set of the intrusion transaction object and the moving coordinate sequence data set of the conventional object, mixing the moving coordinate sequence data set and generating an intrusion behavior recognition training data set;
The function module is used for configuring an intrusion behavior trigger probability evaluation function:
Yi=(y1,y2,…yl,…,yQ);
wherein P(X0) represents the intrusion behavior trigger probability of the moving coordinate sequence of an intrusion object, Yi represents the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, yl represents the l-th coordinate of the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, X0 represents the moving coordinate sequence of the intrusion object, Q represents the number of coordinates of the i-th intrusion behavior movement sensitive path, and M represents the number of intrusion behavior movement sensitive paths;
And the unsupervised training module is used for constructing an intrusion evaluation rule according to the intrusion trigger probability evaluation function and the intrusion movement sensitive path, performing unsupervised training on the intrusion recognition training data set, generating the intrusion recognition channel, analyzing the intrusion object movement coordinate sequence and generating the intrusion trigger probability.
Since the system disclosed in this embodiment corresponds to the method disclosed in the foregoing embodiment, the description of the perimeter intrusion recognition system based on video joint acquisition in this embodiment is relatively brief; those skilled in the art can clearly understand it from the foregoing detailed description of the perimeter intrusion recognition method based on video joint acquisition, and relevant details can be found in the method part of the description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. The perimeter intrusion recognition method based on video joint acquisition is characterized by comprising the following steps:
collecting the ambient brightness information of a first monitoring area through a photoreceptor;
When the ambient brightness information is smaller than or equal to a first brightness threshold value, activating an infrared cradle head imager array to monitor so as to obtain a first infrared imaging set;
when the ambient brightness information is larger than the first brightness threshold value, activating a binocular cradle head camera array to monitor, and obtaining a first monitoring image set;
Independently identifying the first infrared imaging set and the first monitoring image set to obtain an intrusion object distribution coordinate;
Activating the infrared tripod head imager array or the binocular tripod head camera array to lock the invasion object according to the invasion object distribution coordinates and combining the environment brightness information to perform joint tracking so as to obtain an invasion object moving coordinate sequence;
According to the moving coordinate sequence of the intrusion object, training an intrusion behavior recognition channel is executed, and intrusion behavior triggering probability is generated;
When the intrusion behavior triggering probability meets a triggering probability threshold, activating audible and visual alarm equipment to alarm, and simultaneously sending an intrusion behavior identification result and real-time coordinates of an intrusion object to a perimeter intrusion management terminal.
2. The method of claim 1, wherein activating the binocular cradle head camera array to monitor when the ambient brightness information is greater than the first brightness threshold value, obtains a first set of monitored images, further comprising:
When the ambient brightness information is greater than both the first brightness threshold and the second brightness threshold, activating a binocular cradle head camera array to monitor, and obtaining a first monitoring image set;
when the ambient brightness information is larger than the first brightness threshold and smaller than or equal to the second brightness threshold, activating a laser light supplementing assembly, and combining the binocular cradle head camera array for monitoring to obtain a first monitoring image set;
Wherein the second luminance threshold is greater than the first luminance threshold.
3. The method of claim 1, wherein activating the binocular cradle head camera array to monitor when the ambient brightness information is greater than the first brightness threshold value, obtains a first set of monitored images, further comprising:
Obtaining first monitoring area coordinates of a first binocular tripod head camera of the binocular tripod head camera array;
extracting a plurality of vegetation coverage areas according to the first monitoring area coordinates, wherein any one vegetation coverage area is provided with a vegetation coverage width label, a vegetation coverage height label and a vegetation coverage length label;
Constructing a vegetation coverage three-dimensional model according to the vegetation coverage width label, the vegetation coverage height label and the vegetation coverage length label;
Based on the historical invasion transaction set, configuring an invasion object width minimum value, an invasion object height minimum value and an invasion object length minimum value, and constructing an invasion object threshold model;
Comparing the intrusion object threshold model with the vegetation coverage three-dimensional model, when the vegetation coverage three-dimensional model can shade the intrusion object threshold model, activating a first infrared cradle head imager of the first monitoring area coordinate to perform auxiliary monitoring, obtaining an infrared monitoring auxiliary image, and adding the infrared monitoring auxiliary image into a first monitoring image set of the first monitoring area coordinate.
4. The method of claim 1, wherein independently identifying the first set of infrared imaging and the first set of monitoring images to obtain intrusion object distribution coordinates comprises:
Traversing the first infrared imaging set and the first monitoring image set, and configuring a reference infrared imaging set and a reference monitoring image set, wherein the reference infrared imaging set and the reference monitoring image set are monitoring images of non-invasive objects;
and activating an intrusion object judging channel, traversing the first infrared imaging set and the first monitoring image set, and combining the reference infrared imaging set and the reference monitoring image set to identify so as to obtain the intrusion object distribution coordinates.
5. The method of claim 4, wherein activating an intrusion object determination channel, traversing the first set of infrared imaging and the first set of monitoring images, and identifying in combination with the reference set of infrared imaging and the reference set of monitoring images, obtains the intrusion object distribution coordinates, comprises:
Traversing the first infrared imaging set and the first monitoring image set, and extracting a first image to be identified and a first reference image;
When the first image to be identified is an infrared image, a first feature extraction path of an infrared response node of the intrusion object judgment channel is activated to receive the first image to be identified, a second feature extraction path of the infrared response node of the intrusion object judgment channel is activated to receive the first reference image for abnormal identification, and a morphological deviation area is obtained, wherein the infrared response node is a twin neural network;
when the first image to be identified is a non-infrared image, activating a third feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first image to be identified, and activating a fourth feature extraction path of a conventional image response node of the intrusion object judgment channel to receive the first reference image for abnormal identification, so as to obtain a morphological deviation area, wherein the conventional image response node is a twin neural network;
and positioning the morphological deviation area based on a cloud deck coordinate system to generate the intrusion object distribution coordinate.
6. The method of claim 1, wherein activating the infrared pan-tilt imager array or the binocular pan-tilt camera array to lock the intrusion object for joint tracking in combination with the ambient brightness information according to the intrusion object distribution coordinates to obtain an intrusion object movement coordinate sequence, comprising:
based on the intrusion object distribution coordinates, locking an intrusion object at a first monitoring device for tracking and monitoring to obtain a first area intrusion object moving coordinate sequence, wherein the first monitoring device is an infrared holder imager or/and a binocular holder camera;
when the intrusion object leaves the first area and enters a second area, activating a second monitoring device to lock the intrusion object for tracking and monitoring, and obtaining a second area intrusion object moving coordinate sequence;
and according to the monitoring time sequence, storing the first area intrusion object moving coordinate sequence, the second area intrusion object moving coordinate sequence, and so on up to the K-th area intrusion object moving coordinate sequence together, to obtain the intrusion object moving coordinate sequence.
7. The method of claim 1, wherein performing training of an intrusion behavior recognition channel based on the intrusion object movement coordinate sequence, generating intrusion behavior trigger probabilities, comprises:
configuring an intrusion behavior movement sensitive path;
Collecting an intrusion transaction object moving coordinate sequence data set and a conventional object moving coordinate sequence data set, mixing, and generating an intrusion behavior recognition training data set;
Configuring an intrusion behavior trigger probability evaluation function:
Yi=(y1,y2,…yl,…,yQ);
wherein P(X0) represents the intrusion behavior trigger probability of the moving coordinate sequence of an intrusion object, Yi represents the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, yl represents the l-th coordinate of the moving coordinate sequence of the i-th intrusion behavior movement sensitive path, X0 represents the moving coordinate sequence of the intrusion object, Q represents the number of coordinates of the i-th intrusion behavior movement sensitive path, and M represents the number of intrusion behavior movement sensitive paths;
According to the intrusion trigger probability evaluation function and the intrusion movement sensitive path, an intrusion evaluation rule is constructed, unsupervised training is carried out on the intrusion recognition training data set, the intrusion recognition channel is generated, the intrusion object movement coordinate sequence is analyzed, and the intrusion trigger probability is generated.
8. Perimeter intrusion identification system based on video joint acquisition, characterized by comprising:
the information acquisition module is used for acquiring ambient brightness information of a first monitoring area through a photoreceptor;
the first monitoring module is used for activating an infrared pan-tilt imager array to monitor when the ambient brightness information is less than or equal to a first brightness threshold, obtaining a first infrared imaging set;
the second monitoring module is used for activating a binocular pan-tilt camera array to monitor when the ambient brightness information is greater than the first brightness threshold, obtaining a first monitoring image set;
the independent identification module is used for independently identifying the first infrared imaging set and the first monitoring image set to obtain intrusion object distribution coordinates;
the joint tracking module is used for activating the infrared pan-tilt imager array or the binocular pan-tilt camera array to lock onto an intrusion object for joint tracking according to the intrusion object distribution coordinates and the ambient brightness information, obtaining an intrusion object moving coordinate sequence;
the first training module is used for executing training of the intrusion behavior recognition channel according to the intrusion object moving coordinate sequence and generating an intrusion behavior trigger probability;
and the alarm module is used for activating audible and visual alarm equipment to raise an alarm when the intrusion behavior trigger probability meets a trigger probability threshold, and simultaneously sending the intrusion behavior identification result and the real-time coordinates of the intrusion object to a perimeter intrusion management terminal.
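The day/night switching rule shared by the first and second monitoring modules can be sketched as a simple threshold dispatch. The claims only state the comparison direction (brightness at or below the first threshold activates the infrared pan-tilt imager array, brightness above it activates the binocular pan-tilt camera array); the concrete threshold value and the array identifiers below are illustrative assumptions.

```python
def select_sensor(ambient_brightness, first_brightness_threshold=50.0):
    """Choose the acquisition array per the claimed day/night switching rule.

    Assumption: the default threshold (50.0, e.g. in lux) and the returned
    identifiers are placeholders -- the claims fix only the comparison:
    brightness <= threshold -> infrared array, otherwise -> binocular array.
    """
    if ambient_brightness <= first_brightness_threshold:
        return "infrared_pan_tilt_imager_array"
    return "binocular_pan_tilt_camera_array"
```

Note that the boundary case (brightness exactly equal to the threshold) falls to the infrared array, matching the "less than or equal to" wording of the first monitoring module.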
CN202410483604.XA 2024-04-22 2024-04-22 Perimeter intrusion recognition method and system based on video joint acquisition Pending CN118298377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410483604.XA CN118298377A (en) 2024-04-22 2024-04-22 Perimeter intrusion recognition method and system based on video joint acquisition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410483604.XA CN118298377A (en) 2024-04-22 2024-04-22 Perimeter intrusion recognition method and system based on video joint acquisition

Publications (1)

Publication Number Publication Date
CN118298377A true CN118298377A (en) 2024-07-05

Family

ID=91679380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410483604.XA Pending CN118298377A (en) 2024-04-22 2024-04-22 Perimeter intrusion recognition method and system based on video joint acquisition

Country Status (1)

Country Link
CN (1) CN118298377A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118465676A (en) * 2024-07-12 2024-08-09 青岛高科通信股份有限公司 Intelligent electric energy meter verification error analysis and calculation device
CN118800012A (en) * 2024-09-14 2024-10-18 江苏益捷思信息科技有限公司 Multi-path video linkage alarm method and device for intelligent security

Similar Documents

Publication Publication Date Title
CN110543867B (en) Crowd density estimation system and method under condition of multiple cameras
CN118298377A (en) Perimeter intrusion recognition method and system based on video joint acquisition
Maltezos et al. Building extraction from LiDAR data applying deep convolutional neural networks
CN109980781B (en) Intelligent monitoring system of transformer substation
CN108037770B (en) Unmanned aerial vehicle power transmission line inspection system and method based on artificial intelligence
CN112380952A (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
AU2019201977B2 (en) Aerial monitoring system and method for identifying and locating object features
CN110428522A (en) A kind of intelligent safety and defence system of wisdom new city
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN108710126A (en) Automation detection expulsion goal approach and its system
CN110321853A (en) Distribution cable external force damage prevention system based on video intelligent detection
CN110516529A (en) It is a kind of that detection method and system are fed based on deep learning image procossing
CN104813339A (en) Methods, devices and systems for detecting objects in a video
CN108197604A (en) Fast face positioning and tracing method based on embedded device
CN108500992A (en) A kind of multi-functional mobile security robot
CN110414400A (en) A kind of construction site safety cap wearing automatic testing method and system
CN116798176A (en) Data management system based on big data and intelligent security
Hu et al. Building occupancy detection and localization using CCTV camera and deep learning
CN111753780A (en) Transformer substation violation detection system and violation detection method
CN115035470A (en) Low, small and slow target identification and positioning method and system based on mixed vision
CN110188617A (en) Intelligent monitoring method and system for machine room
CN112800918A (en) Identity recognition method and device for illegal moving target
Ji et al. STAE‐YOLO: Intelligent detection algorithm for risk management of construction machinery intrusion on transmission lines based on visual perception
CN113505704B (en) Personnel safety detection method, system, equipment and storage medium for image recognition
CN110688892A (en) Portrait identification alarm method and system based on data fusion technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination