
CN113591651B - Method for capturing image, image display method, device and storage medium - Google Patents


Info

Publication number: CN113591651B
Authority: CN (China)
Prior art keywords: video monitoring, target object, frame, rule, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202110831445.4A
Other languages: Chinese (zh)
Other versions: CN113591651A
Inventors: 吴允, 蔡合瑶, 谢俞胜
Current Assignee: Zhejiang Dahua Technology Co Ltd
Original Assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd; priority to CN202110831445.4A
Publication of CN113591651A; application granted; publication of CN113591651B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiments of the present application provide a method for capturing an image, an image display method, a device and a storage medium, and belong to the technical field of image processing. In the method for capturing an image, if a target object that triggers an alarm rule exists in a video monitoring frame, the target object is identified in a plurality of subsequent video monitoring frames. After a capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the panoramic picture and the identification result of that video monitoring frame are sent to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result. In this way, only one panoramic picture needs to be captured, while the target object that triggered the alarm rule is tracked in real time across the video monitoring frames, which ensures that the back end can accurately extract the target frame including the target object.

Description

Method for capturing image, image display method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method for capturing an image, an image display method, an image display device and a storage medium.
Background
In video monitoring, a camera detects target objects such as pedestrians, motor vehicles and non-motor vehicles. When it detects that a target object triggers a certain rule, such as a tripwire (line-crossing) rule, an area intrusion rule or an item left-behind rule, the camera needs to capture a target frame of the target object and the corresponding panoramic picture so as to preserve the corresponding evidence.
In the related art, two schemes are usually adopted to capture images. The first scheme randomly captures one panoramic picture at the current moment when an event occurs, that is, when a target object triggering a rule is detected; its disadvantage is that the specific target object that triggered the rule may not appear in the panoramic picture. The second scheme accurately captures, when an event occurs, one panoramic picture corresponding to the moment of the event together with the corresponding target frame including the target object; its disadvantages are that, when many events occur, pictures are lost due to the limitations of hardware resources and network bandwidth, and a large amount of storage space is consumed to store the pictures.
Disclosure of Invention
To solve the above technical problems, the embodiments of the present application provide a method for capturing an image, an image display method, a device and a storage medium, which can ensure that a target frame including the target object is accurately extracted from the panoramic picture while reducing the storage space required for storing pictures.
In order to achieve the above object, the technical solution of the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a method for capturing an image, the method including:
if a target object triggering an alarm rule exists in a video monitoring frame, identifying the target object in a plurality of subsequent video monitoring frames;
after a capture rule is satisfied, capturing a panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, where the video monitoring frame at the moment corresponding to the capture rule is the last one of a plurality of continuous video monitoring frames, and each of the plurality of continuous video monitoring frames includes the target object; and
sending the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capture rule to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result.
In the method for capturing an image provided by the embodiments of the present application, if a target object triggering an alarm rule exists in a video monitoring frame, the target object is identified in a plurality of subsequent video monitoring frames; after a capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the panoramic picture and the identification result of that video monitoring frame are sent to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result. The panoramic picture is not captured immediately when the alarm rule is triggered; instead, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the target object is tracked in real time after it triggers the alarm rule. Therefore only one panoramic picture needs to be captured, and the back end is still guaranteed to accurately extract the target frame including the target object.
In an alternative embodiment, if there is a target object triggering an alarm rule in the video monitoring frame, identifying the target object in a plurality of subsequent video monitoring frames includes:
if a target object triggering an alarm rule exists in the video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinates corresponding to the target object are determined according to the first position of the target object in the video monitoring frame;
identifying the target object in a plurality of subsequent video monitoring frames according to the target identifier corresponding to the target object, and respectively determining a second position of the target object in the plurality of subsequent video monitoring frames;
and each time a second position is determined, updating the position coordinates according to the determined second position.
In this embodiment, if a target object triggering an alarm rule exists in the video monitoring frame, the target identifier and the position coordinates corresponding to the target object are determined; the position coordinates are determined according to the first position of the target object in the video monitoring frame. The target object is then identified in a plurality of subsequent video monitoring frames according to its target identifier, its second position in each of those frames is determined, and the position coordinates are updated according to each newly determined second position. In this way the position of the target object is tracked in real time across the video monitoring frames, which ensures that the position coordinates corresponding to the target object are updated accurately.
In an optional embodiment, the identifying the target object in the subsequent continuous multiple video monitoring frames according to the target identifier corresponding to the target object, and determining the second position of the target object in the continuous multiple video monitoring frames respectively includes:
For each video monitoring frame in the continuous plurality of video monitoring frames, respectively executing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
In this embodiment, the following operations may be performed for each of the continuous video monitoring frames: the video monitoring frame is identified and each object in it is determined; the target object is then matched against each object according to the target identifier corresponding to the target object, and the second position of the target object in the video monitoring frame is determined. This ensures that the position of the target object is tracked in real time in the video monitoring frames.
In an optional embodiment, sending the identification result of the video monitoring frame at the moment corresponding to the capture rule to the back end includes:
sending the target identifier and the position coordinates corresponding to the target object to the back end, where the position coordinates corresponding to the target object are determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
In this embodiment, the target identifier and the position coordinates corresponding to the target object are sent to the back end, so that the back end can accurately extract the target frame including the target object from the panoramic picture according to the target identifier and the position coordinates.
In an optional embodiment, capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule includes:
if the duration since the alarm rule was triggered reaches a preset time, capturing the panoramic picture of the video monitoring frame corresponding to the preset time; and/or
if the target object cannot be identified in one of the continuous video monitoring frames but is identified in the video monitoring frame preceding it, capturing the panoramic picture of that preceding video monitoring frame.
In this embodiment, the capture rule is that the duration since the alarm rule was triggered reaches the preset time and/or that the target object disappears from the video monitoring frames before that, so the target object is guaranteed to be present in the video monitoring frame whose panoramic picture is captured at the moment corresponding to the capture rule.
In a second aspect, an embodiment of the present application further provides an image display method, where the method includes:
receiving a panoramic picture sent by a video monitoring device and an identification result of the video monitoring frame corresponding to the panoramic picture, where the panoramic picture is obtained by capturing the last one of a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggered an alarm rule, and the identification result is determined after the target object in that last video monitoring frame is identified; and
extracting a target frame including the target object from the panoramic picture according to the identification result, and displaying the panoramic picture and the target frame.
In the image display method provided by the embodiments of the present application, a panoramic picture sent by the video monitoring device and the identification result of the video monitoring frame corresponding to the panoramic picture are received; the panoramic picture is obtained by capturing the last one of a plurality of continuous video monitoring frames following the video monitoring frame in which the target object triggered the alarm rule, and the identification result is determined after the target object in that last video monitoring frame is identified. A target frame including the target object is then extracted from the panoramic picture according to the identification result, and the panoramic picture and the target frame are displayed. Only one panoramic picture needs to be captured, and the target frame including the target object that triggered the alarm rule is extracted from that panoramic picture, which reduces the storage space required for storing pictures and relieves the storage pressure on the related devices.
In a third aspect, an embodiment of the present application further provides an apparatus for capturing an image, including:
a target object identification unit, configured to identify, if a target object triggering an alarm rule exists in a video monitoring frame, the target object in a plurality of subsequent video monitoring frames;
an image capturing unit, configured to capture, after a capture rule is satisfied, a panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, where the video monitoring frame at the moment corresponding to the capture rule is the last one of a plurality of continuous video monitoring frames, and each of the plurality of continuous video monitoring frames includes the target object; and
an image sending unit, configured to send the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capture rule to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result.
In an alternative embodiment, the target object identifying unit is specifically configured to:
if a target object triggering an alarm rule exists in the video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinates corresponding to the target object are determined according to the first position of the target object in the video monitoring frame;
identifying the target object in a plurality of subsequent video monitoring frames according to the target identifier corresponding to the target object, and respectively determining a second position of the target object in the plurality of subsequent video monitoring frames;
And updating the position coordinates according to the determined second position every time the second position is determined.
In an alternative embodiment, the target object recognition unit is further configured to:
For each video monitoring frame in the continuous plurality of video monitoring frames, respectively executing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
In an alternative embodiment, the image sending unit is specifically configured to:
sending the target identifier and the position coordinates corresponding to the target object to the back end, where the position coordinates corresponding to the target object are determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
In an alternative embodiment, the image capturing unit is specifically configured to:
if the duration since the alarm rule was triggered reaches a preset time, capture the panoramic picture of the video monitoring frame corresponding to the preset time; and/or
if the target object cannot be identified in one of the continuous video monitoring frames but is identified in the video monitoring frame preceding it, capture the panoramic picture of that preceding video monitoring frame.
In a fourth aspect, an embodiment of the present application further provides an image display apparatus, including:
an image receiving unit, configured to receive a panoramic picture sent by a video monitoring device and an identification result of the video monitoring frame corresponding to the panoramic picture, where the panoramic picture is obtained by capturing the last one of a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggered an alarm rule, and the identification result is determined after the target object in that last video monitoring frame is identified; and
an image display unit, configured to extract a target frame including the target object from the panoramic picture according to the identification result, and to display the panoramic picture and the target frame.
In a fifth aspect, embodiments of the present application further provide a computer readable storage medium, in which a computer program is stored, which when executed by a processor, implements the method of capturing an image of the first aspect.
In a sixth aspect, an embodiment of the present application further provides a computer readable storage medium, in which a computer program is stored, the computer program implementing the image presentation method of the second aspect when executed by a processor.
In a seventh aspect, an embodiment of the present application further provides a video monitoring device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, causes the processor to implement the method for capturing an image according to the first aspect.
In an eighth aspect, an embodiment of the present application further provides an image display apparatus, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, causes the processor to implement the image display method of the second aspect.
Technical effects caused by any implementation manner of the third aspect, the fifth aspect and the seventh aspect may refer to technical effects caused by corresponding implementation manners of the first aspect, and are not described herein.
Technical effects caused by any implementation manner of the fourth aspect, the sixth aspect and the eighth aspect may refer to technical effects caused by corresponding implementation manners of the second aspect, and are not described herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it will be apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an image processing system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a video monitoring device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another image display apparatus according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for capturing an image according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for capturing images according to an embodiment of the present application;
Fig. 6 is a flowchart of an image display method according to an embodiment of the present application;
FIG. 7 is a flowchart of another image display method according to an embodiment of the present application;
fig. 8 is a schematic diagram of a video monitoring frame according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image capturing result according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an image display device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "comprises" and "comprising," along with their variants, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following describes in detail the technical solution provided by the embodiments of the present application with reference to the accompanying drawings.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
A schematic structural diagram of an image processing system is exemplarily shown in fig. 1. As shown in fig. 1, the image processing system may include a video surveillance device 100 and an image presentation device 200.
The video monitoring device 100 may be any device capable of implementing the method for capturing images according to the present application, for example, the video monitoring device 100 may be a camera. In this embodiment, the video surveillance device 100 may be structured as shown in fig. 2, including a memory 101, a transmitting component 103, and one or more processors 102.
A memory 101 for storing a computer program for execution by the processor 102. The memory 101 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 101 may be a volatile memory, such as a random-access memory (RAM); the memory 101 may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 101 may also be a combination of the above memories.
The processor 102 may include one or more central processing units (central processing unit, CPUs) or a digital processing unit, or the like. And the processor 102 is used for realizing the method for capturing the image when calling the computer program stored in the memory 101.
The sending component 103 is configured to send, to the image display apparatus 200, a result of identifying a panoramic picture and a video monitoring frame at a time corresponding to the capturing rule.
The specific connection medium between the memory 101, the sending component 103 and the processor 102 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 101 and the processor 102 are connected through the bus 104 in fig. 2; the bus 104 is shown as a thick line in fig. 2, and the connection manner between the other components is only illustrated schematically and is not limiting. The bus 104 may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean that there is only one bus or only one type of bus.
The image display apparatus 200 may be any apparatus capable of implementing the image display method proposed by the present application, and the image display apparatus 200 may be a back end. For example, the image display apparatus 200 may be an NVR (network video recorder), a DVR (digital video recorder), an IVSS (intelligent video surveillance system), a platform, or the like. In this embodiment, the image display apparatus 200 may be structured as shown in fig. 3, including a memory 201, a receiving component 203, and one or more processors 202.
A memory 201 for storing a computer program executed by the processor 202. The memory 201 may mainly include a memory program area and a memory data area, wherein the memory program area may store an operating system, a program required for running an instant communication function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 201 may be a volatile memory, such as a random-access memory (RAM); the memory 201 may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 201 may also be a combination of the above memories.
The processor 202 may include one or more central processing units (central processing unit, CPUs) or a digital processing unit, or the like. The processor 202 is configured to implement the image display method provided by the embodiment of the present application when calling the computer program stored in the memory 201.
The receiving component 203 is configured to receive a recognition result of the panoramic picture and the corresponding video monitoring frame sent by the video monitoring device 100.
The specific connection medium between the memory 201, the receiving component 203 and the processor 202 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 201 and the processor 202 are connected through the bus 204 in fig. 3; the bus 204 is shown as a thick line in fig. 3, and the connection manner between the other components is only illustrated schematically and is not limiting. The bus 204 may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean that there is only one bus or only one type of bus.
The video monitoring device 100 may be directly connected to the image display device 200 or may be communicatively connected to the image display device through a network, or may be communicatively connected to the image display device through other manners, which are not limited herein.
In some embodiments, a flowchart of the method for capturing images performed by the video monitoring apparatus 100 may be shown in fig. 4, and includes the following steps:
In step S401, if there is a target object triggering an alarm rule in the video monitoring frame, the target object is identified in a plurality of subsequent video monitoring frames.
If a target object triggering the alarm rule exists in the video monitoring frame, the target identifier and the position coordinates corresponding to the target object can be determined. The position coordinates corresponding to the target object are determined according to the first position of the target object in the video monitoring frame. Alarm rules may include tripwire (line-crossing) rules, area intrusion rules, item left-behind rules, and the like.
The following operations may then be performed separately for each video monitoring frame of the continuous plurality of video monitoring frames: identifying the video monitoring frame, determining each object in the video monitoring frame, matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
Each time a second position is determined, the position coordinates corresponding to the target object may be updated according to the determined second position.
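As an illustration of this tracking step, the following Python sketch matches the tracked target objects against the objects detected in one video monitoring frame by their target identifiers and updates the stored position coordinates with the second position. It is a minimal sketch under assumed data structures (DetectedObject, a dict of tracked coordinates); the names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class DetectedObject:
    target_id: int   # identifier assigned by the detector/tracker
    box: Box         # position of the object in this frame

def update_positions(tracked: Dict[int, Box], detections: List[DetectedObject]) -> None:
    """Update the stored coordinates of the alarmed target objects with their
    'second position' in the current video monitoring frame."""
    detected_by_id = {d.target_id: d.box for d in detections}
    for target_id in tracked:
        if target_id in detected_by_id:          # match by target identifier
            tracked[target_id] = detected_by_id[target_id]
```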
In step S402, after the capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured.
The video monitoring frame at the moment corresponding to the capture rule is the last one of a plurality of continuous video monitoring frames, and each of the continuous video monitoring frames includes the target object.
In one embodiment, if the duration since the alarm rule was triggered reaches a preset time, the panoramic picture of the video monitoring frame corresponding to the preset time is captured, where the video monitoring frame corresponding to the preset time includes the target object.
In another embodiment, if the target object cannot be identified in one of the continuous video monitoring frames but is identified in the video monitoring frame preceding it, the panoramic picture of that preceding video monitoring frame is captured.
In another embodiment, if the duration since the alarm rule was triggered reaches the preset time and, in addition, the target object cannot be identified in one of the continuous video monitoring frames while being identified in the video monitoring frame preceding it, the panoramic picture of the video monitoring frame corresponding to the current moment is captured.
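A minimal sketch of the capture-rule check described in these embodiments: capture when the preset time since the alarm rule was triggered has elapsed, and/or when a tracked target object can no longer be identified in the current frame although it was identified in the preceding one. The function and parameter names are illustrative assumptions, not an API defined by the patent.

```python
from typing import Set

def capture_rule_satisfied(elapsed_ms: float,
                           preset_ms: float,
                           tracked_ids: Set[int],
                           ids_in_current_frame: Set[int],
                           ids_in_previous_frame: Set[int]) -> bool:
    """Return True when the panoramic picture should be captured."""
    time_reached = elapsed_ms >= preset_ms
    # a tracked target object disappeared: present in the previous frame, absent now
    disappeared = any(tid in ids_in_previous_frame and tid not in ids_in_current_frame
                      for tid in tracked_ids)
    return time_reached or disappeared
```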
In step S403, the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capture rule are sent to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result.
Specifically, the panoramic picture together with the target identifier and position coordinates corresponding to the target object can be sent to the back end. The panoramic picture is obtained by capturing the video monitoring frame at the moment corresponding to the capture rule, and the position coordinates corresponding to the target object are determined according to the second position of the target object in that video monitoring frame.
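The data sent to the back end can be thought of as the single captured panoramic picture plus, for each target object, its target identifier and its coordinates in that frame. A hedged sketch of such a payload follows; the field names and structure are assumptions, since the patent does not prescribe a wire format.

```python
from typing import Dict, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def build_payload(panorama_jpeg: bytes, coords_by_id: Dict[int, Box]) -> dict:
    """Bundle the single captured panoramic picture with the identification
    result: one (target identifier, position coordinates) pair per target."""
    return {
        "panorama": panorama_jpeg,
        "targets": [{"target_id": tid, "box": list(box)}
                    for tid, box in coords_by_id.items()],
    }
```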
In the method for capturing images described above, when a target object triggering an alarm rule is detected in a video monitoring frame, that is, when an event occurs, the panoramic picture is not captured immediately. Instead, the capture is delayed by N milliseconds, that is, until the capture rule is satisfied. Within those N milliseconds the target object is identified in each of the continuous video monitoring frames, and its position coordinates are updated in every frame according to its target identifier; in effect, all target objects whose events occur within the N milliseconds are mapped onto the M-th video monitoring frame corresponding to the N-th millisecond. When the N-th millisecond is reached, the M-th video monitoring frame is captured, and the target identifiers and position coordinates of all target objects in the M-th video monitoring frame are sent to the back end. This guarantees that the positions of all target objects in the M-th video monitoring frame are accurate, so the back end can accurately extract the target frames including the respective target objects from the M-th video monitoring frame. Since only the M-th video monitoring frame is captured, the storage space required for storing pictures is reduced, and the problem that not every target object can be captured in real time when the performance of the related device is insufficient is avoided.
The capture-rule delay N may be calculated from the following quantities:
N is greater than or equal to 0; when N equals 0, the capture rule is satisfied immediately and the video monitoring frame is captured at once. Cs is the intelligent detection frame rate, i.e. the actual frame rate at which the device performs intelligent detection on the video monitoring frames; it may be configured to different values according to the device and service requirements, for example 8, 12, 16 or 24. Cp is a comprehensive evaluation index of the device hardware, with a value range of Cp = {0, 1}, obtained from indexes such as the CPU main frequency Qa (in MHz), the number of CPUs Qn, the bandwidth Qbw (in GB/s) and the computing power Qt (in TOPS) of the device main control, according to the formula Cp = (Qa/900) × Qn + Qbw/3.6 + Qt/0.5. Cf is a comprehensive evaluation coefficient of the device performance indexes, determined from the maximum number of events in a single video monitoring frame Mf and the maximum number of alarms per second Mt: Cf = (Mf + Mt)/16. Ct is the time consumed by intelligent processing of each frame, i.e. the time taken to intelligently process one video monitoring frame of data in the current operating environment; this parameter can be obtained computationally while the device is running.
Nt is used when the target object disappears within the N milliseconds after triggering the alarm rule: it is the time from the moment the target object triggers the alarm rule in one video monitoring frame to the moment it disappears in a later video monitoring frame.
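For illustration, the two sub-formulas quoted above can be evaluated as in the sketch below. Only Cp and Cf are computed, exactly as stated in the text; the expression combining Cs, Cp, Cf and Ct into N is not reproduced in the text above, so no attempt is made to compute N itself, and the function names are assumptions.

```python
def hardware_index(qa_mhz: float, qn: int, qbw_gbps: float, qt_tops: float) -> float:
    """Cp = (Qa/900) * Qn + Qbw/3.6 + Qt/0.5, as quoted above."""
    return (qa_mhz / 900.0) * qn + qbw_gbps / 3.6 + qt_tops / 0.5

def performance_coefficient(mf: int, mt: int) -> float:
    """Cf = (Mf + Mt)/16, from the maximum number of events in a single
    video monitoring frame (Mf) and the maximum number of alarms per second (Mt)."""
    return (mf + mt) / 16.0
```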
In other embodiments, the flowchart of the method for capturing images performed by the video monitoring apparatus 100 may also be shown in fig. 5, and includes the following steps:
Step S501, detecting a video monitoring frame.
Each object in the video monitoring frame can be detected in real time according to the set alarm rule.
Step S502, determining whether any target object in the video monitoring frame triggers an alarm rule; if yes, go to step S503; if not, go to step S504.
In step S503, the target identifier and the position coordinates corresponding to the target object are added to the linked list.
That is, it is judged whether a target object triggering an alarm rule exists in the video monitoring frame; if so, the target identifier and the position coordinates corresponding to the target object are acquired, where the position coordinates are determined according to the position of the target object in the video monitoring frame, and the target identifier and the position coordinates corresponding to the target object are added to a linked list.
Step S504, determining whether a target object that has already triggered the alarm rule exists in the video monitoring frame; if yes, go to step S505; if not, go to step S501.
In step S505, the position coordinates corresponding to the target object in the linked list are updated.
After the target identifier and the position coordinates corresponding to the target object have been added to the linked list, the position coordinates corresponding to the target object in the linked list can be updated according to the position of the target object in the next video monitoring frame.
If no target object newly triggers the alarm rule in the video monitoring frame, it can be judged whether a target object that has already triggered the alarm rule exists in the video monitoring frame; if so, the position coordinates corresponding to that target object in the linked list can be updated according to its position in the video monitoring frame. If not, detection continues with the next video monitoring frame to determine whether a target object in it triggers the alarm rule.
Specifically, the process of updating the position coordinates corresponding to the target object in the linked list may be as follows: each object in the video monitoring frame is obtained, the target identifier corresponding to the target object in the linked list is matched against each object to determine whether any of them is the same target object, and if so, the position coordinates corresponding to the target object in the linked list are updated according to the position of the target object in the video monitoring frame.
Step S506, determining whether the capture rule is satisfied; if yes, go to step S507; if not, go to step S501.
In step S507, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured.
Whether the capture rule is satisfied is judged as follows: whether the duration since the alarm rule was triggered has reached the preset time, and/or whether there is a video monitoring frame in which the target object can no longer be detected although it was detected in the preceding video monitoring frame, that is, whether the target object has disappeared in advance.
If the capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured.
If the capture rule is not satisfied, detection of the video monitoring frames continues, to determine whether a target object in them triggers the alarm rule.
Step S508, the panoramic picture and the target identifier and position coordinates corresponding to the target object are sent to the back end.
That is, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, together with the target identifier and position coordinates corresponding to the target object in the linked list, is sent to the back end, where the position coordinates are the position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
After the panoramic picture and the target identifier and position coordinates corresponding to the target object have been sent to the back end, the linked list can be emptied.
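Putting steps S501 to S508 together, the device-side loop might look like the following sketch. A plain dict stands in for the linked list; detect_objects, triggers_alarm_rule and send_to_backend, as well as the frame attributes (image, timestamp_ms) and detection fields (target_id, box), are assumed helpers rather than APIs defined by the patent.

```python
def monitor(frames, preset_ms, detect_objects, triggers_alarm_rule, send_to_backend):
    """Sketch of the per-frame flow S501-S508; a dict stands in for the linked list."""
    tracked = {}             # target identifier -> latest position coordinates
    prev_ids, prev_frame = set(), None
    start_ms = None
    for frame in frames:                                     # S501: detect the frame
        detections = detect_objects(frame)
        current_ids = {d.target_id for d in detections}
        for det in detections:
            if det.target_id in tracked:                     # S504/S505: update coordinates
                tracked[det.target_id] = det.box
            elif triggers_alarm_rule(det, frame):            # S502/S503: add to the list
                tracked[det.target_id] = det.box
                if start_ms is None:
                    start_ms = frame.timestamp_ms
        if tracked:
            disappeared = bool(set(tracked) & (prev_ids - current_ids))
            time_reached = frame.timestamp_ms - start_ms >= preset_ms
            if time_reached or disappeared:                  # S506: capture rule satisfied
                # S507: on disappearance, capture the preceding frame's panorama
                panorama = prev_frame.image if disappeared and prev_frame else frame.image
                send_to_backend(panorama, dict(tracked))     # S508: send, then empty the list
                tracked.clear()
                start_ms = None
        prev_ids, prev_frame = current_ids, frame
```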
In some embodiments, a flowchart of the image displaying method performed by the image displaying apparatus 200 may be shown in fig. 6, and includes the following steps:
In step S601, a panoramic picture sent by the video monitoring device and an identification result of the video monitoring frame corresponding to the panoramic picture are received.
The panoramic picture is obtained by capturing the last one of a plurality of continuous video monitoring frames following the video monitoring frame in which the target object triggered the alarm rule, and the identification result is determined after the target object in that last video monitoring frame is identified. The identification result consists of the position coordinates corresponding to the target object, determined according to its position in the last video monitoring frame, and the target identifier corresponding to the target object.
Step S602, extracting a target frame comprising a target object from the panoramic picture according to the identification result, and displaying the panoramic picture and the target frame.
A target frame including the target object is extracted from the panoramic picture according to the target identifier and the position coordinates corresponding to the target object, and the panoramic picture and the target frame are displayed in the relevant interface in response to a user instruction to display the pictures.
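On the back end, extracting the target frame amounts to cropping the panoramic picture at the received position coordinates. Below is a minimal sketch using numpy slicing, assuming (x, y, width, height) boxes; it is an illustration, not code from the patent.

```python
import numpy as np

def extract_target_frame(panorama: np.ndarray, box) -> np.ndarray:
    """Crop the target frame of one target object out of the panoramic picture.
    `box` is assumed to be (x, y, width, height) in pixel coordinates."""
    x, y, w, h = box
    height, width = panorama.shape[:2]
    x0, y0 = max(0, x), max(0, y)                      # clamp to the picture bounds
    x1, y1 = min(width, x + w), min(height, y + h)
    return panorama[y0:y1, x0:x1].copy()

def extract_all(panorama: np.ndarray, coords_by_id: dict) -> dict:
    """One cropped target frame per received target identifier."""
    return {tid: extract_target_frame(panorama, box) for tid, box in coords_by_id.items()}
```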
In other embodiments, the flowchart of the image displaying method performed by the image displaying apparatus 200 may also be shown in fig. 7, and includes the following steps:
Step S701, receiving a panoramic picture sent by a video monitoring device and a recognition result of a video monitoring frame corresponding to the panoramic picture.
The panoramic picture is the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, and the identification result includes the position coordinates corresponding to the target object and the target identifier corresponding to the target object, where the position coordinates are determined according to the position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
Step S702, extracting the attribute and the characteristic information of the target object according to the identification result.
According to the position coordinates and the target identification corresponding to the target object, the attribute and the characteristic information of the target object can be extracted from the panoramic picture, and the extracted attribute and characteristic information can be analyzed.
Step S703, determining whether a target frame including a target object is displayed in the panoramic picture; if yes, go to step S704; if not, step S705 is performed.
Step S704, drawing a target frame comprising a target object in the panoramic picture according to the identification result.
In response to an instruction of a user for displaying a target frame comprising a target object in the panoramic picture, the target frame comprising the target object can be drawn in the panoramic picture according to the position coordinates and the target identification corresponding to the target object.
Step S705, determining whether to display the target frame alone; if yes, go to step S706; if not, step S707 is executed.
Step S706, cropping the target frame.
In response to an instruction of a user to display the target frame separately, after the target frame including the target object is drawn in the panoramic picture, the target frame may be cropped.
Step S707, displaying the panoramic picture, the target frame, and the attribute and characteristic information of the target object in the interface.
After the target frame including the target object has been drawn in the panoramic picture, the panoramic picture, the target frame, and the attribute and characteristic information of the target object can be displayed in the relevant interface.
Step S708, displaying the target frame in the interface.
After the target frame has been obtained by cropping, it can be displayed in the relevant interface.
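The branching in steps S703 to S708 can be sketched loosely as follows, using OpenCV to draw the target frame in the panoramic picture and numpy slicing to crop it for standalone display. `show_in_interface`, the flag names and the exact branch order are assumptions; the authoritative flow is the one in fig. 7.

```python
import cv2
import numpy as np

def present(panorama: np.ndarray, box, draw_in_panorama: bool, show_alone: bool,
            attributes: dict, show_in_interface):
    """Loose sketch of steps S703-S708."""
    x, y, w, h = box
    if draw_in_panorama:                                     # S703 yes -> S704: draw the box
        annotated = panorama.copy()
        cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
        show_in_interface(annotated, attributes)             # S707: panorama + attributes
    elif show_alone:                                         # S705 yes -> S706: crop
        target_frame = panorama[y:y + h, x:x + w].copy()
        show_in_interface(target_frame, attributes)          # S708: show the target frame
    else:
        show_in_interface(panorama, attributes)              # S707
```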
For example, as shown in fig. 8, for a plurality of video monitoring frames, assume that the target object ID1 triggers an alarm rule in frame 1 and the target object ID2 triggers an alarm rule in frame 2.
When the 1st frame is processed, the target object ID1 triggers the alarm rule. The 1st frame is not captured at that moment; instead, the position coordinates determined according to the position of ID1 in the 1st frame are stored in a linked list, and the timer starts counting.
At the 2nd frame, the target object ID2 triggers the alarm rule. The position coordinates determined according to the position of ID2 in the 2nd frame are saved in the linked list, the position coordinates of ID1 saved in the linked list are updated to those determined according to the position of ID1 in the 2nd frame, and the timer continues to accumulate.
The same process as for the 2nd frame is performed for the continuous frames after the 2nd frame: in each frame it is detected whether any target object triggers the alarm rule, and if so its position coordinates are stored in the linked list, while the position coordinates of ID1 and ID2 already stored in the linked list are updated according to their positions in that frame.
When the elapsed time reaches the N milliseconds given by the capture rule, or one of the target objects ID1 and ID2 disappears in advance, the panoramic picture of the M-th frame corresponding to the N milliseconds is captured; the captured panoramic picture is the M-th frame in fig. 8. The panoramic picture and the target identifiers and position coordinates corresponding to ID1 and ID2 are then sent to the back end.
After receiving the panoramic picture and the target identifiers and position coordinates corresponding to ID1 and ID2, the back end can extract the target frame including ID1 and the target frame including ID2 from the panoramic picture according to their respective position coordinates; the resulting pictures may be as shown in fig. 9(a) and fig. 9(b), respectively.
Based on the same inventive concept as the method for capturing images shown in fig. 4, an embodiment of the present application further provides an apparatus for capturing images. Since this apparatus corresponds to the method for capturing images of the present application and solves the problem on a similar principle, its implementation can refer to the implementation of the method, and repeated descriptions are omitted.
Fig. 10 shows a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application, and as shown in fig. 10, the apparatus for capturing an image includes a target object recognition unit 1001, an image capturing unit 1002, and an image transmitting unit 1003.
The target object identifying unit 1001 is configured to identify, if a target object triggering an alarm rule exists in a video monitoring frame, the target object in a plurality of subsequent video monitoring frames;
The image capturing unit 1002 is configured to capture, after a capture rule is satisfied, a panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, where the video monitoring frame at the moment corresponding to the capture rule is the last one of a plurality of continuous video monitoring frames, and each of the continuous video monitoring frames includes the target object;
and the image sending unit 1003 is configured to send the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capture rule to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result.
In an alternative embodiment, the target object identification unit 1001 is specifically configured to:
If a target object triggering an alarm rule exists in the video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinates corresponding to the target object are determined according to the first position of the target object in the video monitoring frame;
Identifying the target object in a plurality of subsequent video monitoring frames according to the target identification corresponding to the target object, and respectively determining a second position of the target object in the plurality of subsequent video monitoring frames;
and updating the position coordinates according to the determined second position every time the second position is determined.
In an alternative embodiment, the target object identification unit 1001 is further configured to:
for each video monitoring frame in the continuous plurality of video monitoring frames, the following operations are respectively executed:
Identifying the video monitoring frames, and determining each object in the video monitoring frames;
and matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
In an alternative embodiment, the image sending unit 1003 is specifically configured to:
send the target identifier and the position coordinates corresponding to the target object to the back end, where the position coordinates corresponding to the target object are determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
In an alternative embodiment, the image capturing unit 1002 is specifically configured to:
if the duration since the alarm rule was triggered reaches a preset time, capture the panoramic picture of the video monitoring frame corresponding to the preset time; and/or
if the target object cannot be identified in one of the continuous video monitoring frames but is identified in the video monitoring frame preceding it, capture the panoramic picture of that preceding video monitoring frame.
Based on the same inventive concept as the image display method shown in fig. 6, an embodiment of the present application further provides an image display apparatus. Since this apparatus corresponds to the image display method of the present application and solves the problem on a similar principle, its implementation can refer to the implementation of the method, and repeated descriptions are omitted.
Fig. 11 shows a schematic structural diagram of an image display device according to an embodiment of the present application, and as shown in fig. 11, the image display device includes an image receiving unit 1101 and an image display unit 1102.
The image receiving unit 1101 is configured to receive a panoramic picture sent by a video monitoring device and an identification result of the video monitoring frame corresponding to the panoramic picture, where the panoramic picture is obtained by capturing the last one of a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggered an alarm rule, and the identification result is determined after the target object in that last video monitoring frame is identified;
The image display unit 1102 is configured to extract a target frame including a target object from the panoramic image according to the identification result, and display the panoramic image and the target frame.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method of capturing an image or the image presentation method in the above-described embodiments. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application.

Claims (11)

1. A method of capturing an image, the method comprising:
If a target object triggering an alarm rule exists in the video monitoring frame, identifying the target object in a plurality of subsequent video monitoring frames;
After the grabbing rule is met, grabbing panoramic pictures corresponding to video monitoring frames at the moment corresponding to the grabbing rule; the capturing rule characterizes the time for capturing the panoramic picture after triggering the alarm rule, and the capturing rule is determined according to the performance of the video monitoring equipment or the disappearing time of the target object after triggering the alarm rule; the video monitoring frame meeting the corresponding moment of the grabbing rule is the last video monitoring frame in the continuous multiple video monitoring frames; the target object is included in each of the plurality of continuous video monitoring frames;
And sending the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capturing rule to a back end, so that the back end extracts a target frame comprising the target object from the panoramic picture according to the identification result.
2. The method of claim 1, wherein identifying the target object in the plurality of subsequent video monitoring frames, if the target object triggering the alarm rule exists in the video monitoring frame, comprises:
if the target object triggering the alarm rule exists in the video monitoring frame, respectively determining a target identifier and position coordinates corresponding to the target object; wherein the position coordinates corresponding to the target object are determined according to a first position of the target object in the video monitoring frame;
identifying the target object in a plurality of subsequent video monitoring frames according to the target identifier corresponding to the target object, and respectively determining a second position of the target object in the plurality of subsequent video monitoring frames;
And each time a second position is determined, updating the position coordinates according to the determined second position.
3. The method according to claim 2, wherein identifying the target object in a plurality of subsequent video monitoring frames according to the target identifier corresponding to the target object, and determining the second position of the target object in the plurality of subsequent video monitoring frames respectively includes:
For each video monitoring frame in the continuous plurality of video monitoring frames, respectively executing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
4. The method of claim 2, wherein sending the identification result of the video monitoring frame at the moment corresponding to the capturing rule to the back end includes:
Transmitting the target identifier and the position coordinates corresponding to the target object to the back end; wherein the position coordinates corresponding to the target object are determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capturing rule.
5. The method according to any one of claims 1 to 4, wherein capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capturing rule includes:
If the time elapsed after the alarm rule is triggered reaches a preset time, capturing the panoramic picture of the video monitoring frame corresponding to the preset time; and/or
if the target object cannot be identified in one video monitoring frame of the plurality of continuous video monitoring frames and the target object is identified in the video monitoring frame preceding the one video monitoring frame, capturing the panoramic picture of that preceding video monitoring frame.
6. An image display method, the method comprising:
Receiving a panoramic picture sent by video monitoring equipment and an identification result of the video monitoring frame corresponding to the panoramic picture, wherein the panoramic picture is obtained, after a capturing rule is met, by capturing the last video monitoring frame in a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggers an alarm rule, and the identification result is determined after the target object in the last video monitoring frame is identified; the capturing rule characterizes the time for capturing the panoramic picture after the alarm rule is triggered, and the capturing rule is determined according to the performance of the video monitoring equipment or the disappearing time of the target object after the alarm rule is triggered;
And extracting a target frame comprising the target object from the panoramic picture according to the identification result, and displaying the panoramic picture and the target frame.
7. An apparatus for capturing an image, comprising:
the target object identification unit is used for identifying the target object in a plurality of subsequent video monitoring frames if a target object triggering an alarm rule exists in the video monitoring frame;
The image grabbing unit is used for, after a capturing rule is met, capturing a panoramic picture of the video monitoring frame at the moment corresponding to the capturing rule; wherein the capturing rule characterizes the time for capturing the panoramic picture after the alarm rule is triggered, and the capturing rule is determined according to the performance of the video monitoring equipment or the disappearing time of the target object after the alarm rule is triggered; the video monitoring frame at the moment corresponding to the capturing rule is the last video monitoring frame in the plurality of continuous video monitoring frames; and each of the plurality of continuous video monitoring frames includes the target object;
and the image sending unit is used for sending the panoramic picture and the identification result of the video monitoring frame at the moment corresponding to the capturing rule to a back end, so that the back end extracts a target frame comprising the target object from the panoramic picture according to the identification result.
8. An image display device, comprising:
The image receiving unit is used for receiving a panoramic picture sent by video monitoring equipment and a recognition result of the video monitoring frame corresponding to the panoramic picture, wherein the panoramic picture is obtained, after a capturing rule is met, by capturing the last video monitoring frame in a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggers an alarm rule, and the recognition result is determined after the target object in the last video monitoring frame is recognized; the capturing rule characterizes the time for capturing the panoramic picture after the alarm rule is triggered, and the capturing rule is determined according to the performance of the video monitoring equipment or the disappearing time of the target object after the alarm rule is triggered;
and the image display unit is used for extracting a target frame comprising the target object from the panoramic picture according to the recognition result, and displaying the panoramic picture and the target frame.
9. A video monitoring device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, which when executed by the processor, implements the method of any of claims 1 to 5.
10. An image display device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, the computer program, when executed by the processor, implementing the method of claim 6.
11. A computer-readable storage medium having a computer program stored therein, characterized in that: the computer program, when executed by a processor, implements the method of any one of claims 1 to 5 or claim 6.
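For readers who find the claim language dense, the sketch below restates the device-side flow of claims 1, 2 and 5 as a small Python class: once a target object triggers the alarm rule it is tracked frame by frame, and the panoramic picture is captured either when a preset time has elapsed or when the target disappears, at which point the picture and the identification result are handed to the back end. The detection format, the alarm-rule placeholder and the `send_to_back_end` callback are assumptions made for illustration only and do not reflect the actual implementation of the video monitoring equipment.

```python
import time
from typing import Callable, Dict, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) -- hypothetical coordinate format


class AlarmCaptureSketch:
    """Illustrative restatement of the capture flow in claims 1, 2 and 5."""

    def __init__(self, preset_seconds: float,
                 send_to_back_end: Callable[[object, dict], None]):
        self.preset_seconds = preset_seconds      # capture rule: preset time after the alarm
        self.send_to_back_end = send_to_back_end  # assumed transport to the back end
        self.target_id: Optional[str] = None      # target identifier once the alarm fires
        self.last_box: Optional[Box] = None       # most recent position coordinates
        self.alarm_time: Optional[float] = None
        self.previous_frame = None                # kept so the preceding frame can be captured

    def on_frame(self, frame, detections: Dict[str, Box]) -> None:
        """Process one video monitoring frame.

        `detections` maps object identifiers to boxes in this frame,
        e.g. {"person_17": (120, 80, 60, 140)} -- a hypothetical shape.
        """
        if self.target_id is None:
            triggered = self._check_alarm_rule(detections)
            if triggered is not None:
                self.target_id, self.last_box = triggered
                self.alarm_time = time.monotonic()
        else:
            box = detections.get(self.target_id)
            if box is None:
                # Capture-rule branch 2: the target can no longer be identified,
                # so capture the preceding frame in which it was still identified.
                self._capture(self.previous_frame)
                return
            self.last_box = box
            if time.monotonic() - self.alarm_time >= self.preset_seconds:
                # Capture-rule branch 1: the preset time has been reached.
                self._capture(frame)
                return
        self.previous_frame = frame

    def _check_alarm_rule(self, detections: Dict[str, Box]):
        # Placeholder for the actual alarm rule (e.g. region intrusion); returns
        # (target_id, box) when some object triggers it, otherwise None.
        return None

    def _capture(self, panorama) -> None:
        # Send the panoramic picture together with the identification result
        # (target identifier and position coordinates) to the back end.
        self.send_to_back_end(panorama, {"target_id": self.target_id, "box": self.last_box})
        self.target_id = None  # reset so a later alarm can be tracked afresh
```

Keeping only the previous frame, rather than buffering every frame since the alarm, is enough here because the disappearance branch of the capture rule needs exactly the frame preceding the one in which the target was lost.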
Priority Applications (1)

CN202110831445.4A, priority and filing date 2021-07-22: Method for capturing image, image display method, device and storage medium

Publications (2)

CN113591651A, published 2021-11-02
CN113591651B, published 2024-11-01

Family ID: 78249000

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant