CN116797971A - Video stream identification method and device - Google Patents
- Publication number
- CN116797971A (application CN202310755955.7A)
- Authority
- CN
- China
- Prior art keywords
- optical flow
- video stream
- change information
- space motion
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
One or more embodiments of the present disclosure provide a method and an apparatus for identifying a video stream. The method includes: determining optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition equipment is a target user terminal; determining pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determining a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
Description
The present application is a divisional application of the Chinese patent application No. 2019114033183, entitled "Video stream identification method and apparatus", filed with the China National Intellectual Property Office on December 31, 2019.
Technical Field
This document relates to the field of Internet technologies, and in particular to a video stream identification method and apparatus.
Background
At present, with the advent of the Internet era and the rapid development of mobile Internet technology, the Internet is widely used in people's daily study, work, and life. Many everyday transactions can be handled and presented over the Internet using a user terminal such as a smartphone. A user may install applications in the smartphone according to actual needs, for example a payment application, a financial application, an instant messaging application, or a shopping application.
Currently, if a user needs to complete a service under a certain application, the user triggers, through an upload control of the application, the camera in the smartphone to shoot a target object (such as the user's face, a bank card, an identity card, or a bill) and uploads the video stream information of the target object, so that the corresponding Internet service can be opened for the user based on that video stream information. However, a malicious user may mount a video stream injection attack: by directly hooking the hardware/driver/API layer of the smartphone, the video stream acquired in real time is replaced with pre-stored video stream information, which then serves as the input information source and achieves the purpose of maliciously triggering target service execution.
For example, consider account login of a payment application based on face recognition. A malicious user first captures video of the target user's face in advance. Then, when the face image is supposed to be acquired, the pre-captured face video of the target user is uploaded to the authentication server by means of video frame replacement. The authentication server authenticates the logging-in user based on that face video and determines that the authentication passes, so the malicious user completes authentication and enters the user operation interface. This provides an entrance for the malicious user to perform illegal actions, and the purpose of ensuring account security through identity authentication cannot be achieved.
Therefore, there is a need to provide a technical solution that can quickly, accurately and reliably identify video streams.
Disclosure of Invention
It is an object of one or more embodiments of the present specification to provide a video stream identification method. The video stream identification method includes the following steps:
determining optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition equipment is a target user terminal; determining pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determining a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
It is an object of one or more embodiments of the present specification to provide a video stream identification apparatus. The video stream identification apparatus is configured to:
determine optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition equipment is a target user terminal; determine pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determine a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
It is an object of one or more embodiments of the present specification to provide a video stream identification device, comprising: a processor; and a memory arranged to store computer-executable instructions.
The computer-executable instructions, when executed, cause the processor to: determine optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition equipment is a target user terminal; determine pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determine a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
It is an object of one or more embodiments of the present specification to provide a storage medium for storing computer-executable instructions. The executable instructions, when executed by a processor: determine optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition equipment is a target user terminal; determine pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determine a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
Drawings
To describe the technical solutions in one or more embodiments of the present specification or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description are merely some of the embodiments described in the present specification, and a person skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a first schematic flowchart of a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 2 is a schematic diagram of a specific implementation principle of a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 3 is a second schematic flowchart of a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 4 is a schematic diagram of a specific implementation principle of determining optical flow change information in a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 5 is a third schematic flowchart of a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 6 is a schematic diagram of a specific implementation principle of determining pose change information in a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 7 is a fourth schematic flowchart of a video stream identification method according to one or more embodiments of the present disclosure;
Fig. 8 is a schematic diagram of the module composition of a video stream identification apparatus according to one or more embodiments of the present disclosure;
Fig. 9 is a schematic structural diagram of a video stream identification device according to one or more embodiments of the present disclosure.
Detailed Description
In order that those skilled in the art may better understand the solutions in one or more embodiments of the present specification, the technical solutions in these embodiments will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only a part, not all, of the embodiments of the present specification. All other embodiments obtained by a person of ordinary skill in the art based on one or more embodiments of the present specification without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that, without conflict, one or more embodiments and features of the embodiments in the present specification may be combined with each other. One or more embodiments of the present specification will be described in detail below with reference to the attached drawings and in conjunction with the embodiments.
One or more embodiments of the present disclosure provide a video stream identification method and apparatus. By comparing optical flow change information determined based on a target video stream with pose change information of a target user terminal, it is identified whether the target video stream is video stream information shot and uploaded in real time for a target object. This quickly exposes a malicious video stream attack in which a pre-stored, non-real-time video stream replaces the real-time acquired video stream, so that the injected non-real-time video stream can be intercepted in time and the accuracy of subsequent service processing improved.
Fig. 1 is a schematic flowchart of a video stream identification method provided by one or more embodiments of the present disclosure. The method in fig. 1 may be executed by a user terminal or by a server side. The user terminal may be a mobile terminal such as a smartphone, or a terminal device such as an Internet of Things device. Specifically, the user terminal may collect video stream information of a target object and perform credibility identification on it; when the credibility identification passes, the terminal either decides whether to execute a corresponding control operation based on the target video stream information, or uploads the video stream information to the server side so that the server side can continue with user identity verification based on it. The server side may be a background server or a cloud server, and is specifically configured to receive the video stream information uploaded by the user terminal, perform credibility identification on it, perform user identity verification based on the video stream information when the credibility identification passes, and provide a certain business service for the user when the identity verification passes.
In the process of performing credibility recognition on the video stream information of the target object by the user terminal or the server, as shown in fig. 1, the video stream recognition method at least includes the following steps:
s102, determining optical flow change information corresponding to a target video stream, wherein a theoretical acquisition time period of the target video stream is a preset time period and theoretical acquisition equipment is a target user terminal;
specifically, after detecting a video stream acquisition request, the user terminal acquires video stream information of a target object by using a camera device and acquires a target video stream for user identity verification, wherein a theoretical acquisition time period of the target video stream comprises: a time period from a video stream acquisition start time to an acquisition end time;
The target video stream may be video stream information collected in real time by the camera device, or the real-time video stream may have been replaced, through a preset video stream attack means, with injected video stream information collected in non-real time. Therefore, before user identity verification is performed based on the target video stream, the optical flow change information corresponding to the target video stream is determined and compared with the pose change information of the target user terminal, so as to identify whether the target video stream is trusted video stream information collected in real time by the target user terminal;
For the situation where the target video stream is video stream information collected in real time, the theoretical acquisition time period and the actual acquisition time period of the target video stream are the same, and the target user terminal is both the theoretical acquisition equipment and the actual acquisition equipment; that is, the target video stream is collected by the target user terminal within the preset time period. For the situation where the target video stream has replaced the real-time video stream through a preset video stream attack means, the theoretical acquisition time period differs from the actual acquisition time period; that is, the actual acquisition time of the target video stream is not the preset time period and the actual acquisition equipment is not the target user terminal; instead, the target video stream is pre-stored video stream information used to replace the video stream information collected in real time.
S104, determining pose change information of the target user terminal, wherein the pose change information is determined based on sensor detection information of the target user terminal in a preset time period;
The target user terminal includes: a camera device and at least one preset sensor. The preset sensor may be an IMU sensor, for example a gyroscope or an acceleration sensor, and the sensor detection information may include: at least one of a three-axis attitude angle, an angular velocity, and an acceleration of the sensor;
Specifically, while the target user terminal collects the video stream of the target object with the camera device, it simultaneously collects sensor detection information with the at least one preset sensor, so that pose change information of the target user terminal can be determined based on the sensor detection information; the pose change information then serves as the basis for comparison with the optical flow change information of the target video stream, so as to identify the credibility of the target video stream.
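As a minimal sketch of how pose change information might be derived from IMU detection data, the following integrates gyroscope angular-velocity samples into a cumulative per-axis rotation. The function name and the trapezoidal integration scheme are illustrative assumptions, not the patent's prescribed method.

```python
import numpy as np

def pose_change_from_gyro(angular_velocity, timestamps):
    """Integrate gyroscope angular-velocity samples (rad/s, shape (N, 3))
    over their timestamps (seconds, shape (N,)) into a cumulative rotation
    angle per axis, a stand-in for sensor-derived pose change information."""
    dt = np.diff(timestamps)                               # sample intervals
    mean_w = 0.5 * (angular_velocity[1:] + angular_velocity[:-1])
    delta = mean_w * dt[:, None]                           # trapezoidal rule
    return np.cumsum(delta, axis=0)                        # cumulative pose change

# A terminal rotating at a constant 0.1 rad/s about the x-axis for 1 second:
ts = np.linspace(0.0, 1.0, 11)
w = np.tile([0.1, 0.0, 0.0], (11, 1))
pose = pose_change_from_gyro(w, ts)
# final cumulative x-rotation is approximately 0.1 rad
```

In practice the detection information would come from the platform's sensor API (for example Android's `SensorManager`), sampled over the same preset time period as the video stream.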
S106, determining a credibility identification result for the target video stream according to the comparison information between the determined optical flow change information and the pose change information.
For the situation where the target video stream is acquired in real time, the target video stream and the sensor detection information are collected synchronously while the target user terminal is in a certain jitter state, that is, while the terminal is not absolutely still. Consequently, the optical flow change direction naturally recorded in the target video stream and the sensor spatial movement direction naturally recorded in the sensor detection information should change consistently, and the comparison information between the optical flow change information determined based on the target video stream and the pose change information determined based on the sensor detection information should satisfy a preset change consistency condition;
For the situation where the target video stream is video stream information acquired in non-real time, the target video stream and the sensor detection information are not collected synchronously in the same jitter state. The optical flow change direction naturally recorded in the target video stream and the sensor spatial movement direction naturally recorded in the sensor detection information therefore lack direction change consistency, and the comparison information between the optical flow change information determined based on the target video stream and the pose change information determined based on the sensor detection information does not satisfy the preset change consistency condition;
It follows that whether the target video stream is trusted video stream information acquired by the target user terminal in real time can be identified by comparing the optical flow change information determined based on the target video stream with the pose change information determined based on the sensor detection information;
In one or more embodiments of the present disclosure, by comparing and analyzing the optical flow change information determined based on the target video stream with the pose change information of the target user terminal, it is identified whether the target video stream is video stream information shot and uploaded in real time for the target object. This quickly exposes a malicious video stream attack in which a pre-stored, non-real-time video stream replaces the real-time acquired video stream, so that the injected non-real-time video stream can be intercepted in time and the accuracy of subsequent service processing improved.
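The patent does not specify the change consistency condition; one plausible form, sketched below under that assumption, is a threshold on the mean cosine similarity between per-interval optical-flow motion directions and sensor-derived motion directions. The function name, threshold, and sample values are all illustrative.

```python
import numpy as np

def credibility_check(flow_dirs, sensor_dirs, threshold=0.8):
    """Compare per-interval optical-flow motion directions (M, 2) with the
    terminal's sensor-derived motion directions (M, 2); hypothetical
    consistency condition: mean cosine similarity above `threshold`."""
    f = flow_dirs / (np.linalg.norm(flow_dirs, axis=1, keepdims=True) + 1e-12)
    s = sensor_dirs / (np.linalg.norm(sensor_dirs, axis=1, keepdims=True) + 1e-12)
    score = float(np.mean(np.sum(f * s, axis=1)))   # mean cosine similarity
    return score, score >= threshold

# A live capture shakes in step with the sensors; an injected stream does not.
live = np.array([[1.0, 0.1], [0.2, 1.0], [-1.0, 0.3]])
injected = np.array([[-1.0, 0.0], [0.5, -1.0], [1.0, -0.2]])
sensor = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.2]])
_, ok_live = credibility_check(live, sensor)      # directions agree: trusted
_, ok_fake = credibility_check(injected, sensor)  # directions disagree: flagged
```

A real deployment would also have to align the two time series to the same designated time nodes before comparing them.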
Specifically, for the situation where the user terminal performs credibility recognition on the target video stream, the user terminal obtains the target video stream and the sensor detection information collected in the preset time period, and determines the credibility recognition result for the target video stream based on the above steps S102 to S106. When the credibility recognition result indicates that the target video stream is a trusted video stream collected in real time, the terminal decides whether to execute the corresponding control operation based on the target video stream information, or uploads the target video stream to the server side so that the server side can perform user identity verification based on it.
Correspondingly, for the situation where the server side performs credibility identification on the target video stream, the user terminal obtains the target video stream and the sensor detection information collected in the preset time period and uploads both to the server side, so that the server side determines the credibility identification result for the target video stream based on the above steps S102 to S106; when the credibility identification result indicates that the target video stream is a trusted video stream collected in real time, user identity verification is performed based on the target video stream.
In a specific implementation, as shown in fig. 2, taking the target object as a user certificate and the target user terminal as a smartphone as an example, a schematic diagram of the specific implementation principle of the video stream identification method is provided, which specifically includes:
(1) After detecting a video stream acquisition request, that is, after detecting the user's triggering operation on the video stream upload control, the user terminal collects video stream information of the target object with the camera device. The actual collection time period of the video stream information of the target object is a preset time period, and the preset time period includes a plurality of designated time nodes;
(2) The user terminal collects sensor detection information by using at least one preset sensor in the preset time period; wherein, the target user terminal includes: a camera device and at least one preset sensor;
(3) Acquiring a target video stream to be identified; the theoretical acquisition time period of the target video stream is a preset time period, and the theoretical acquisition equipment is a target user terminal;
Specifically, if the theoretical acquisition time period and the actual acquisition time period of the target video stream are the same, and the theoretical acquisition equipment and the actual acquisition equipment are both the target user terminal, the target video stream is the video stream information of the target object collected by the camera device; otherwise, the target video stream is an untrusted video stream that has been maliciously substituted;
(4) Acquiring sensor detection information acquired in a preset time period; wherein the sensor detection information may include: at least one of a three-axis attitude angle, an angular velocity, and an acceleration of the sensor; the target user terminal collects the video stream of the target object by using the camera device and also collects the sensor detection information by using at least one preset sensor;
(5) Determining optical flow change information corresponding to the target video stream based on a plurality of object image frames corresponding to a plurality of designated time nodes in the obtained target video stream;
(6) Determining pose change information of a target user terminal based on the acquired sensor detection information corresponding to each designated time node;
(7) Determining a credibility identification result for the target video stream according to the comparison information between the determined optical flow change information and the pose change information.
It should be noted that the processes (3) to (7) may be performed by the user terminal, specifically by an information processing module in the user terminal, or by the server side.
For the process of determining the optical flow change information based on the target video stream of the target object, the preset time period includes a plurality of designated time nodes, which may be obtained by dividing the preset time period at a fixed time interval;
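The division of the preset time period into designated time nodes at a fixed interval can be sketched as follows; the interval and boundaries are illustrative values, not ones given in the patent.

```python
import numpy as np

def designated_time_nodes(start, end, interval):
    """Divide the preset acquisition period [start, end] (in seconds) into
    designated time nodes at a fixed interval, as the description suggests."""
    # The small epsilon keeps the endpoint despite floating-point rounding.
    return np.arange(start, end + 1e-9, interval)

nodes = designated_time_nodes(0.0, 2.0, 0.5)
# nodes: [0.0, 0.5, 1.0, 1.5, 2.0]; each adjacent pair bounds one
# image frame combination for optical flow analysis
```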
Correspondingly, as shown in fig. 3, in S102, determining optical flow change information corresponding to the target video stream specifically includes:
S1021, obtaining a target video stream to be identified, where the target video stream includes: a plurality of object image frames containing the target object, corresponding respectively to the plurality of designated time nodes;
For the situation where the server side performs credibility identification on the target video stream, the target video stream is uploaded to the server side by the target user terminal, thereby triggering the server side to perform user identity verification based on the target video stream;
S1022, determining optical flow change information corresponding to the target video stream according to the object image frames at the plurality of designated time nodes.
Specifically, after receiving the target video stream uploaded by the target user terminal, the server side does not directly perform user identity verification based on it. Instead, the server first determines the corresponding optical flow change information based on the target video stream, performs credibility recognition on the target video stream based on that optical flow change information and the pose change information determined from the sensor detection information collected in the same preset time period, and performs user identity verification based on the target video stream only when the credibility recognition result indicates that it is a trusted video stream collected in real time.
Specifically, S1022, determining optical flow change information corresponding to the target video stream according to the object image frames at the plurality of designated time nodes, specifically includes:
Step one, determining the two object image frames corresponding to every two adjacent designated time nodes as an image frame combination;
Step two, for each image frame combination, performing image optical flow information recognition on the image frame combination with a preset optical flow method to obtain a corresponding optical flow spatial motion matrix;
An existing optical flow method may be used to perform optical flow trend analysis on the plurality of object image frames in the target video stream to obtain the optical flow change information. Specifically, the optical flow method assigns a velocity vector to each pixel in each object image frame, forming a motion vector field. The velocity vector feature of each pixel is obtained from the displacement direction of that pixel between the object image frames at two adjacent designated time nodes, and the corresponding optical flow spatial motion matrix is determined from these velocity vector features.
Step three, determining the optical flow change information corresponding to the target video stream according to the optical flow spatial motion matrices respectively corresponding to the image frame combinations.
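The steps above can be illustrated with a simplified stand-in for the preset optical flow method: instead of the per-pixel velocity field a method such as Lucas-Kanade or Farneback would produce, the sketch below estimates a single dominant inter-frame translation by phase correlation for one image frame combination. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def frame_shift(prev, curr):
    """Estimate the dominant inter-frame translation (dy, dx) by phase
    correlation, a simplified stand-in for the motion a dense optical
    flow method would recover for one image frame combination."""
    cross = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                          # wrap to a signed shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))  # simulated camera shake
# frame_shift(frame_a, frame_b) recovers the (3, -2) cyclic displacement
```

Collecting such motion estimates for every adjacent pair of object image frames yields a direction sequence that plays the role of the optical flow change information compared against the pose change information.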
The optical flow naturally recorded in the video stream information is generated by the movement of the camera device caused by the jitter of the user terminal; since the optical flow expresses the change of the image, it reflects the jitter amplitude of the user terminal. If sensor detection information is collected by a preset sensor in the user terminal at the same time, the spatial movement naturally recorded in the sensor detection information is likewise generated by the movement of the preset sensor caused by the jitter of the user terminal, and also reflects that jitter amplitude. Because the two jitter records are synchronous and consistent, the optical flow direction change law naturally recorded in the video stream information is directly correlated with the spatial movement direction change law naturally recorded in the sensor detection information, and the two should change consistently.
Further, consider that if the target object is relatively stationary, for example a document of interest placed on a fixed plane, then the optical flow naturally recorded in the video stream information is due only to movement of the camera device caused by shake of the user terminal. In reality, however, the target object itself may also move; for example, if the target object is a user's face, the optical flow naturally recorded in the video stream information is generated not only by the movement of the camera device caused by shake of the user terminal but also by the movement of the target itself. If the optical flow caused by the target's own movement is taken into account, the accuracy of the comparison information between the optical flow change information and the pose change information may be affected, reducing the accuracy of the credibility recognition result for the target video stream.
Therefore, in order to further improve the accuracy of the credibility recognition result for the target video stream, the image region where the moving target is located may be removed from the object image frames, and the removed-region object image frames are used as the basis for determining the optical flow change information. Correspondingly, in the second step, performing image optical flow information recognition on each image frame combination by using a preset optical flow method to obtain the corresponding optical flow space motion matrix specifically includes:
for each image frame combination, performing designated-area rejection on the two object image frames in the image frame combination to obtain two removed-region object image frames; wherein the designated area is an image region containing a moving object;
and performing image optical flow information recognition on the two removed-region object image frames by using the preset optical flow method to obtain the corresponding optical flow space motion matrix.
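A minimal sketch of the designated-area rejection, under the same simplified global-shift model as in the earlier sketch (the mask layout and synthetic data are illustrative): pixels falling in the masked region, in either frame, are excluded from the matching error, so the moving object no longer biases the estimated camera motion:

```python
def estimate_flow_excluding_region(prev, curr, mask, max_shift=2):
    """Global-shift estimate in which pixels inside the designated region
    (mask[y][x] == True, the area containing the moving object) are
    rejected from the error sum for both frames."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    if mask[y][x] or mask[y - dy][x - dx]:
                        continue  # reject designated area in either frame
                    err += (curr[y][x] - prev[y - dy][x - dx]) ** 2
                    n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

H = W = 10
base = [[(7 * x + 3 * y) % 23 for x in range(W)] for y in range(H)]
prev = [row[:] for row in base]
curr = [[base[y][max(x - 1, 0)] for x in range(W)] for y in range(H)]  # camera shift (1, 0)
# A moving object (value 99) travels right by 3 while the camera shifts by 1.
for y in range(2, 5):
    for x in range(2, 5):
        prev[y][x] = 99
    for x in range(5, 8):
        curr[y][x] = 99
mask = [[2 <= y <= 4 and 2 <= x <= 7 for x in range(W)] for y in range(H)]
print(estimate_flow_excluding_region(prev, curr, mask))  # → (1, 0)
```

With the moving-object region masked out, only the camera-shake-induced displacement remains.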
Further, considering that the optical flow change information of the target video stream needs to be compared with the pose change information of the target user terminal to identify whether the target video stream is trusted video stream information collected by the target user terminal in real time, and in order to improve the accuracy and referenceability of the information comparison, the optical flow change information and the pose change information need to be determined under the same spatial coordinate system. Based on this, the third step, determining the optical flow change information corresponding to the target video stream according to the optical flow space motion matrixes respectively corresponding to the image frame combinations, specifically includes:
For each optical flow space motion matrix, under a preset space coordinate system, carrying out coordinate system transformation on the optical flow space motion matrix to obtain a transformed optical flow space motion matrix;
and determining optical flow change information corresponding to the target video stream according to the optical flow space motion matrixes after transformation.
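The coordinate system transformation step can be illustrated as an orthonormal change of basis; the permutation matrix T below is a made-up example of a device-to-common-frame mapping, not a value from the disclosure:

```python
def mat_mul(A, B):
    """Plain 2-D list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def to_common_frame(M, T):
    """Change of basis: a motion matrix M expressed in the device's own
    coordinate system becomes T·M·T⁻¹ in the preset common coordinate
    system, where T maps device axes to common axes (orthonormal, so
    T⁻¹ = Tᵀ)."""
    return mat_mul(mat_mul(T, M), transpose(T))

# Example: the device z-axis is the common y-axis (axis swap T).
T = [[1, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]
Rz = [[0, -1, 0],   # 90° rotation about the device z-axis
      [1,  0, 0],
      [0,  0, 1]]
R_common = to_common_frame(Rz, T)
print(R_common)  # → [[0, 0, -1], [0, 1, 0], [1, 0, 0]]
```

The result is a rotation about the common y-axis, as expected after swapping axes; applying the inverse basis change recovers the original matrix.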
In a specific embodiment, as shown in fig. 4, a schematic diagram of a specific implementation principle of determining optical flow change information in the video stream identification method is provided, which specifically includes:
(1) Obtaining object image frames corresponding to a plurality of designated time nodes respectively in a preset time period, for example, object image frame 1 … object image frame i … object image frame n;
(2) Two object image frames corresponding to every two adjacent designated time nodes are determined to be image frame combinations;
(3) For each image frame combination, performing image optical flow information identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix, for example, the currently identified image frame combination comprises: object image frame i and object image frame i+1, where i= … n-1;
(4) For each optical flow space motion matrix, under a preset space coordinate system, carrying out coordinate system transformation on the optical flow space motion matrix to obtain a transformed optical flow space motion matrix;
(5) And determining optical flow change information corresponding to the target video stream according to the optical flow space motion matrixes after transformation.
While the target user terminal collects the video stream of the target object through the camera device, it also collects sensor detection information through at least one preset sensor, so that the pose change information of the target user terminal can be determined based on the sensor detection information. Specifically, for the process of determining the pose change information based on the sensor detection information, the preset time period includes a plurality of designated time nodes; the plurality of designated time nodes may be obtained by dividing the preset time period at a certain time interval, that is, they are the same designated time nodes as those selected for determining the optical flow change information.
correspondingly, as shown in fig. 5, the determining pose change information of the target user terminal in S104 specifically includes:
s1041, acquiring sensor detection information acquired by at least one preset sensor of a target user terminal at each designated time node;
In a scenario in which the server side performs credibility recognition on the target video stream, the sensor detection information is uploaded to the server side by the target user terminal, and the actual acquisition time of the sensor detection information is the same as the theoretical acquisition time of the target video stream.
S1042, determining pose change information of the target user terminal according to the sensor detection information of the plurality of designated time nodes.
After the sensor detection information corresponding to each designated time node is obtained, the sensor spatial motion information at each designated time node is determined based on the sensor detection information, and the pose change information of the target user terminal is then determined.
Specifically, S1042 determines pose change information of the target user terminal according to sensor detection information under a plurality of designated time nodes, and specifically includes:
step one, determining a corresponding sensor space motion matrix according to sensor detection information corresponding to every two adjacent designated time nodes;
specifically, based on sensor detection information acquired by using a preset sensor, identifying spatial state information of the preset sensor; determining a corresponding sensor space motion matrix according to space state information corresponding to every two adjacent designated time nodes;
and step two, determining pose change information of the target user terminal according to the spatial motion matrixes of the sensors.
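As a hedged sketch of how one sensor space motion matrix might be derived between two adjacent designated time nodes: assuming the preset sensor is a gyroscope sampled once per interval, its angular-velocity reading can be converted to a rotation matrix with the Rodrigues formula (treating the rate as constant within the interval is a simplification):

```python
import math

def rotation_from_gyro(omega, dt):
    """Rotation matrix for one sampling interval, from a gyroscope
    angular-velocity reading omega = (wx, wy, wz) in rad/s held constant
    for dt seconds (Rodrigues rotation formula)."""
    wx, wy, wz = omega
    theta = math.sqrt(wx * wx + wy * wy + wz * wz) * dt
    if theta < 1e-12:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # no measurable rotation
    kx, ky, kz = wx * dt / theta, wy * dt / theta, wz * dt / theta  # unit axis
    c, s, v = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s, kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,      ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s, c + kz * kz * v],
    ]

# 90° rotation about the z-axis over one second.
R = rotation_from_gyro((0.0, 0.0, math.pi / 2), 1.0)
```

A sequence of such per-interval matrices plays the role of the sensor space motion matrixes described above.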
Further, considering that the optical flow change information of the target video stream needs to be compared with the pose change information of the target user terminal to identify whether the target video stream is trusted video stream information collected by the target user terminal in real time, and in order to improve the accuracy and referenceability of the information comparison, the optical flow change information and the pose change information need to be determined under the same spatial coordinate system. Based on this, step two, determining the pose change information of the target user terminal according to the sensor space motion matrixes, specifically includes:
Aiming at each sensor space motion matrix, under a preset space coordinate system, carrying out coordinate system transformation on the sensor space motion matrix to obtain a transformed sensor space motion matrix;
and determining pose change information of the target user terminal according to the transformed sensor space motion matrixes.
In a specific embodiment, as shown in fig. 6, a schematic diagram of a specific implementation principle of determining pose change information in a video stream identification method is provided, and specifically includes:
(1) Acquiring sensor detection information corresponding to each of a plurality of designated time nodes within a preset time period, for example, sensor detection information 1 … sensor detection information i … sensor detection information n;
(2) According to the sensor detection information corresponding to every two adjacent designated time nodes, determining a corresponding sensor space motion matrix, for example, the currently identified detection information combination comprises: sensor detection information i and sensor detection information i+1, where i= … n-1;
(3) Aiming at each sensor space motion matrix, under a preset space coordinate system, carrying out coordinate system transformation on the sensor space motion matrix to obtain a transformed sensor space motion matrix;
(4) And determining pose change information of the target user terminal according to the transformed sensor space motion matrixes.
Considering that if the target video stream is video stream information collected in real time, there should be direction change consistency between the optical flow change direction naturally recorded in the target video stream and the sensor spatial movement direction naturally recorded in the sensor detection information. Accordingly, for the credibility recognition process for the target video stream, as shown in fig. 7, determining the credibility recognition result for the target video stream according to the comparison information between the determined optical flow change information and the pose change information in S106 specifically includes:
s1061, comparing the determined optical flow change information with pose change information to obtain a corresponding optical flow pose comparison result;
wherein, the optical flow pose comparison result may include: at least one of a relative difference comparison result, a change similarity comparison result, or a relative change trend comparison result between the optical flow change information and the pose change information.
S1062, judging whether the determined optical flow pose comparison result meets a preset change consistency condition;
wherein the preset change consistency condition includes: the optical flow pose relative difference is smaller than a preset maximum difference threshold, the optical flow pose change similarity is larger than a preset minimum similarity threshold, or the optical flow pose relative change trend meets a preset trend consistency condition.
If yes, S1063, determining that the target video stream is trusted video stream information acquired in real time;
specifically, if it is determined that the optical flow pose comparison result meets the preset change consistency condition, this indicates that the theoretical acquisition time period and the actual acquisition time period of the target video stream are the same, and that the theoretical acquisition device and the actual acquisition device are both the target user terminal; that is, the target video stream was collected by the target user terminal within the preset time period, and therefore has not been subjected to a malicious video stream attack.
If the judgment result is negative, S1064, determining that the target video stream is the untrusted video stream information acquired in non-real time;
specifically, if it is determined that the optical flow pose comparison result does not meet the preset change consistency condition, this indicates that the theoretical acquisition time period and the actual acquisition time period of the target video stream are different, that is, the actual acquisition time of the target video stream is not the preset time period, and the actual acquisition device may not be the target user terminal; therefore, the target video stream may be non-real-time-collected video stream information injected by a malicious user through a preset video stream attack means in place of genuinely real-time-collected video stream information.
Wherein the optical flow change information includes: the plurality of optical flow space motion matrixes within the preset time period, and the pose change information includes: the plurality of sensor space motion matrixes within the preset time period; preferably, the optical flow space motion matrixes and the sensor space motion matrixes have been transformed into the same spatial coordinate system;
correspondingly, the step S1061 is to compare the determined optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result, and specifically includes:
and comparing the plurality of optical flow space motion matrixes with the plurality of sensor space motion matrixes to obtain corresponding optical flow pose comparison results.
Specifically, the optical flow space motion matrixes and the sensor space motion matrixes are in one-to-one correspondence, each corresponding optical flow space motion matrix and sensor space motion matrix being associated with the same two adjacent designated time nodes (that is, the same designated time node combination). Therefore, for each designated time node combination, a difference operation may be performed on the corresponding optical flow space motion matrix and sensor space motion matrix to obtain the optical flow pose relative difference corresponding to that designated time node combination; or,
For each appointed time node combination, performing matrix similarity calculation on the corresponding optical flow space motion matrix and the sensor space motion matrix to obtain the optical flow pose change similarity (for example, matrix distance) corresponding to the appointed time node combination; or,
determining a first change trend based on the plurality of optical flow space motion matrixes, determining a second change trend based on the plurality of sensor space motion matrixes, and determining the optical flow pose relative change trend according to the first change trend and the second change trend.
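The difference operation and the matrix similarity calculation can be made concrete; the Frobenius norm and the distance-to-similarity mapping below are one plausible choice, not prescribed by the text:

```python
import math

def frobenius_diff(A, B):
    """Relative optical-flow/pose difference for one designated-time-node
    combination: Frobenius norm of the element-wise difference between
    an optical flow space motion matrix and a sensor space motion matrix."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(A, B)
                         for a, b in zip(ra, rb)))

def matrix_similarity(A, B):
    # One possible change-similarity measure: map the matrix distance into (0, 1].
    return 1.0 / (1.0 + frobenius_diff(A, B))

I2 = [[1.0, 0.0], [0.0, 1.0]]
tilted = [[1.0, 0.0], [0.0, 0.0]]
print(frobenius_diff(I2, I2), frobenius_diff(I2, tilted))  # → 0.0 1.0
```

Identical matrices give distance 0 and similarity 1; growing disagreement between the two motion estimates drives the similarity toward 0.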
Further, in order to further improve the credibility recognition accuracy for the target video stream, the initial change information may be preprocessed before the optical flow change information is compared with the pose change information, and the preprocessed optical flow change information is then compared with the preprocessed pose change information. Based on this, comparing the plurality of optical flow space motion matrixes with the plurality of sensor space motion matrixes to obtain the corresponding optical flow pose comparison result specifically includes:
preprocessing the plurality of optical flow space motion matrixes and the plurality of sensor space motion matrixes to obtain preprocessed optical flow space motion matrixes and preprocessed sensor space motion matrixes, wherein the preprocessing includes: smoothing, denoising and aligning the comparison starting points;
And comparing the preprocessed optical flow space motion matrixes with the preprocessed sensor space motion matrixes to obtain corresponding optical flow pose comparison results.
Wherein, there may be a certain time delay between the acquisition time period of the video stream information and the acquisition time period of the sensor detection information, so the comparison starting points of the optical flow change information and the pose change information are aligned; in addition, partial outliers may exist in the determined optical flow change information and pose change information, so the outliers need to be removed; and the collected information may contain external noise interference, so the change information may be denoised; thereby further improving the credibility recognition accuracy for the target video stream.
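The three preprocessing steps named above (smoothing, denoising, start-point alignment) might look like this on scalar motion sequences; the clipping-based denoiser and the MSE-based lag search are illustrative choices:

```python
def moving_average(xs, k=1):
    """Smoothing: average over a window of radius k (edges clipped)."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - k): i + k + 1]
        out.append(sum(window) / len(window))
    return out

def clip_outliers(xs, limit):
    """Denoising: clamp samples whose magnitude exceeds `limit`."""
    return [max(-limit, min(limit, x)) for x in xs]

def best_lag(a, b, max_lag=3):
    """Start-point alignment: the lag of sequence b relative to a that
    minimises the mean squared difference over the overlapping samples."""
    best, best_err = 0, float("inf")
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best_err, best = err, lag
    return best

flow = [0, 1, 4, 9, 4, 1, 0, 0]
pose = [0, 0, 0, 1, 4, 9, 4, 1]   # same motion, sensor trace delayed two samples
print(best_lag(flow, pose))  # → 2
```

In a real pipeline the same operations would run element-wise over the matrix sequences before the comparison of S1061.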
Specifically, for the process of determining the optical flow pose comparison result, one may analyze whether the difference between the optical flow space motion matrix and the sensor space motion matrix is smaller than a preset threshold, or analyze whether the relative change trends of the optical flow space motion matrixes and the sensor space motion matrixes are consistent. Based on this, comparing the preprocessed optical flow space motion matrixes with the preprocessed sensor space motion matrixes to obtain the corresponding optical flow pose comparison result specifically includes:
For every two adjacent designated time nodes in a preset time period, subtracting the preprocessed optical flow space motion matrix corresponding to the every two adjacent designated time nodes from the preprocessed sensor space motion matrix to obtain a corresponding optical flow pose comparison result;
or,
comparing the first change trend of the preprocessed optical flow space motion matrixes with the second change trend of the preprocessed sensor space motion matrixes to obtain corresponding optical flow pose comparison results;
The first change trend and the second change trend may each be represented by a waveform curve: the vertical amplitude of the first waveform curve corresponding to the first change trend represents the image optical flow change direction, and the vertical amplitude of the second waveform curve corresponding to the second change trend represents the sensor spatial movement direction. Since the image optical flow change direction and the sensor spatial movement direction are both caused by the same shake of the target user terminal, if the vertical amplitude changes of the two waveform curves are consistent, the target video stream is determined to be trusted video stream information collected by the target user terminal in real time. In addition, considering that there may be occasional outliers or external noise, the first waveform curve and the second waveform curve may first be smoothed by filtering, and then whether the vertical amplitudes of the smoothed curves are consistent is compared; if the waveform amplitudes are consistent, the target video stream is determined to be trusted video stream information collected by the target user terminal in real time.
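One concrete way to test whether the vertical amplitudes of the two waveform curves change consistently is the Pearson correlation of the (optionally pre-smoothed) curves; the 0.9 threshold and the sample curves are assumptions for illustration:

```python
import math

def pearson(a, b):
    """Correlation of two waveform curves; values near +1 indicate that
    their up-and-down amplitudes change consistently."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# The sensor curve tracks the optical-flow curve up to scale and offset,
# as expected when both record the same terminal shake.
flow_curve = [0.0, 1.0, -1.0, 2.0, -2.0, 0.5]
pose_curve = [0.1, 2.1, -1.9, 4.1, -3.9, 1.1]   # exactly 2·flow + 0.1
consistent = pearson(flow_curve, pose_curve) > 0.9
```

Smoothing filtering (e.g. a short moving average) would be applied to both curves before this comparison when outliers or noise are expected.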
According to the video stream identification method in one or more embodiments of the present disclosure, optical flow change information corresponding to a target video stream is determined, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition device is a target user terminal; pose change information of the target user terminal is determined, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and a credibility recognition result for the target video stream is determined according to the comparison information between the determined optical flow change information and the pose change information. By comparing and analyzing the optical flow change information determined based on the target video stream with the pose change information of the target user terminal, whether the target video stream is video stream information shot and uploaded in real time for the target object is identified, so that a malicious video stream attack that substitutes a prestored non-real-time-collected video stream for a real-time-collected video stream can be quickly identified, the non-real-time-collected video stream injected by the attack can be intercepted in time, and the accuracy of subsequent service processing is improved.
Corresponding to the video stream recognition methods described in fig. 1 to 7 and based on the same technical concept, one or more embodiments of the present disclosure further provide a video stream recognition device. Fig. 8 is a schematic block diagram of the video stream recognition device provided in one or more embodiments of the present disclosure; the device is configured to perform the video stream recognition methods described in fig. 1 to 7, and as shown in fig. 8, the device includes:
The optical flow change information determining module 801 determines optical flow change information corresponding to a target video stream, wherein a theoretical acquisition time period of the target video stream is a preset time period and theoretical acquisition equipment is a target user terminal; the method comprises the steps of,
a pose change information determination module 802 that determines pose change information of the target user terminal, wherein the pose change information is determined based on sensor detection information of the target user terminal within the preset time period;
a video stream reliability recognition module 803 that determines a reliability recognition result for the target video stream based on the comparison information between the optical flow change information and the pose change information.
In one or more embodiments of the present disclosure, by comparing and analyzing the optical flow change information determined based on the target video stream with the pose change information of the target user terminal, whether the target video stream is video stream information shot and uploaded in real time for the target object is identified, so that a malicious video stream attack that substitutes a prestored non-real-time-collected video stream for a real-time-collected video stream can be quickly identified, the non-real-time-collected video stream injected by the attack can be intercepted in time, and the accuracy of subsequent service processing is improved.
Optionally, the preset time period includes a plurality of designated time nodes; the optical flow change information determination module 801, which:
obtaining a target video stream to be identified, wherein the target video stream comprises: a plurality of object image frames containing the target object corresponding to the specified time nodes respectively;
and determining optical flow change information corresponding to the target video stream according to the object image frames under the plurality of designated time nodes.
Optionally, the optical flow change information determining module 801, which:
determining two object image frames corresponding to the appointed time nodes adjacent to each other as an image frame combination;
for each image frame combination, carrying out image optical flow information identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix;
and determining optical flow change information corresponding to the target video stream according to the optical flow space motion matrixes respectively corresponding to the image frame combinations.
Optionally, the optical flow change information determining module 801, which:
for each optical flow space motion matrix, carrying out coordinate system transformation on the optical flow space motion matrix under a preset space coordinate system to obtain a transformed optical flow space motion matrix;
And determining optical flow change information corresponding to the target video stream according to each transformed optical flow space motion matrix.
Optionally, the preset time period includes a plurality of designated time nodes; the pose change information determination module 802, which:
acquiring sensor detection information acquired by at least one preset sensor of the target user terminal at each designated time node;
and determining pose change information of the target user terminal according to the sensor detection information under the plurality of designated time nodes.
Optionally, the pose change information determination module 802:
determining a corresponding sensor space motion matrix according to the sensor detection information corresponding to the appointed time nodes adjacent to each other;
and determining pose change information of the target user terminal according to the sensor space motion matrixes.
Optionally, the pose change information determination module 802:
performing coordinate system transformation on the sensor space motion matrixes under a preset space coordinate system aiming at each sensor space motion matrix to obtain transformed sensor space motion matrixes;
and determining pose change information of the target user terminal according to the transformed sensor space motion matrixes.
Optionally, the video stream credibility identifying module 803:
comparing the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result;
judging whether the optical flow pose comparison result meets a preset change consistency condition or not;
if the judgment result is yes, determining that the target video stream is the trusted video stream information acquired in real time;
if the judgment result is negative, determining that the target video stream is the untrusted video stream information acquired in non-real time.
Optionally, the optical flow change information includes: the plurality of optical flow space motion matrixes within the preset time period, and the pose change information comprises: a plurality of sensor spatial motion matrices within the preset time period;
the video stream credibility identification module 803:
and comparing the plurality of optical flow space motion matrixes with the plurality of sensor space motion matrixes to obtain corresponding optical flow pose comparison results.
Optionally, the video stream credibility identifying module 803:
preprocessing the optical flow space motion matrixes and the sensor space motion matrixes to obtain preprocessed optical flow space motion matrixes and preprocessed sensor space motion matrixes, wherein the preprocessing includes: smoothing, denoising and aligning the comparison starting points;
And comparing the preprocessed optical flow space motion matrixes with the preprocessed sensor space motion matrixes to obtain corresponding optical flow pose comparison results.
Optionally, the video stream credibility identifying module 803:
for every two adjacent designated time nodes in the preset time period, subtracting the preprocessed optical flow space motion matrix corresponding to the every two adjacent designated time nodes from the preprocessed sensor space motion matrix to obtain a corresponding optical flow pose comparison result;
or,
and comparing the first change trend of the preprocessed optical flow space motion matrixes with the second change trend of the preprocessed sensor space motion matrixes to obtain corresponding optical flow pose comparison results.
The video stream identification device in one or more embodiments of the present disclosure determines optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition device is a target user terminal; determines pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal within the preset time period; and determines a credibility recognition result for the target video stream according to the comparison information between the determined optical flow change information and the pose change information. By comparing and analyzing the optical flow change information determined based on the target video stream with the pose change information of the target user terminal, whether the target video stream is video stream information shot and uploaded in real time for the target object is identified, so that a malicious video stream attack that substitutes a prestored non-real-time-collected video stream for a real-time-collected video stream can be quickly identified, the non-real-time-collected video stream injected by the attack can be intercepted in time, and the accuracy of subsequent service processing is improved.
It should be noted that, the embodiments of the video stream recognition apparatus in the present specification and the embodiments of the video stream recognition method in the present specification are based on the same inventive concept, so that the specific implementation of the embodiments may refer to the implementation of the corresponding video stream recognition method, and the repetition is omitted.
Further, corresponding to the methods shown in fig. 1 to 7 and based on the same technical concept, one or more embodiments of the present disclosure further provide a video stream recognition device, as shown in fig. 9, for performing the video stream recognition method described above.
The video stream identification device may vary widely in configuration or performance, and may include one or more processors 901 and a memory 902, in which one or more applications or data may be stored. The memory 902 may be transient storage or persistent storage. The application program stored in the memory 902 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the video stream identification device. Still further, the processor 901 may be arranged to communicate with the memory 902 and execute, on the video stream identification device, the series of computer-executable instructions in the memory 902. The video stream identification device may also include one or more power supplies 903, one or more wired or wireless network interfaces 904, one or more input/output interfaces 905, one or more keyboards 906, and the like.
In a particular embodiment, the video stream identification device includes a memory and one or more programs, where the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the video stream identification device, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
determining optical flow change information corresponding to a target video stream, wherein a theoretical acquisition time period of the target video stream is a preset time period and theoretical acquisition equipment is a target user terminal; the method comprises the steps of,
determining pose change information of the target user terminal, wherein the pose change information is determined based on sensor detection information of the target user terminal in the preset time period;
and determining a credibility identification result aiming at the target video stream according to the comparison information between the optical flow change information and the pose change information.
In one or more embodiments of the present disclosure, by comparing and analyzing the optical flow change information determined based on the target video stream with the pose change information of the target user terminal, whether the target video stream is video stream information shot and uploaded in real time for the target object is identified, so that a malicious video stream attack that substitutes a prestored non-real-time-collected video stream for a real-time-collected video stream can be quickly identified, the non-real-time-collected video stream injected by the attack can be intercepted in time, and the accuracy of subsequent service processing is improved.
Optionally, when the computer-executable instructions are executed, the preset time period comprises a plurality of designated time nodes;
the determining of the optical flow change information corresponding to the target video stream includes:
obtaining a target video stream to be identified, wherein the target video stream comprises: a plurality of object image frames containing the target object, corresponding to the designated time nodes respectively;
and determining the optical flow change information corresponding to the target video stream according to the object image frames under the plurality of designated time nodes.
Optionally, the determining of the optical flow change information corresponding to the target video stream according to the object image frames under the plurality of designated time nodes includes:
determining the two object image frames corresponding to each pair of adjacent designated time nodes as an image frame combination;
for each image frame combination, performing image optical flow identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations.
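For intuition only, this step can be sketched in code. The specification does not fix a concrete optical flow method (Lucas–Kanade or Farneback tracking would be typical choices), so the sketch below assumes point correspondences have already been tracked between the two object image frames of one image frame combination, and fits a 2x3 affine motion matrix to them as one plausible form of the "optical flow space motion matrix"; all names and values are illustrative:

```python
import numpy as np

def affine_motion_matrix(pts_prev, pts_next):
    """Fit a 2x3 affine motion matrix M so that pts_next ~ M @ [x, y, 1]^T.

    pts_prev, pts_next: (N, 2) arrays of point coordinates tracked between
    two adjacent object image frames (N >= 3). Fitting is least-squares,
    so a handful of noisy correspondences still yields a usable matrix.
    """
    n = pts_prev.shape[0]
    homog = np.hstack([pts_prev, np.ones((n, 1))])    # (N, 3) homogeneous
    m_t, *_ = np.linalg.lstsq(homog, pts_next, rcond=None)
    return m_t.T                                      # (2, 3) motion matrix

# A pure translation by (2, -1): the fitted matrix should recover it.
prev_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
next_pts = prev_pts + np.array([2.0, -1.0])
m = affine_motion_matrix(prev_pts, next_pts)
print(np.round(m, 6))    # recovers [[1, 0, 2], [0, 1, -1]]
```

In practice, the correspondences would come from the preset optical flow method applied to each image frame combination; a full homography could be fitted the same way when perspective effects matter.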
Optionally, when the computer-executable instructions are executed, the determining of the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations includes:
for each optical flow space motion matrix, performing a coordinate system transformation on the optical flow space motion matrix under a preset space coordinate system to obtain a transformed optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to each transformed optical flow space motion matrix.
Optionally, when the computer-executable instructions are executed, the preset time period comprises a plurality of designated time nodes;
the determining of the pose change information of the target user terminal includes:
acquiring the sensor detection information collected by at least one preset sensor of the target user terminal at each designated time node;
and determining the pose change information of the target user terminal according to the sensor detection information under the plurality of designated time nodes.
Optionally, when the computer-executable instructions are executed, the determining of the pose change information of the target user terminal according to the sensor detection information under the plurality of designated time nodes includes:
determining a corresponding sensor space motion matrix according to the sensor detection information corresponding to each pair of adjacent designated time nodes;
and determining the pose change information of the target user terminal according to each sensor space motion matrix.
Optionally, when the computer-executable instructions are executed, the determining of the pose change information of the target user terminal according to each sensor space motion matrix includes:
for each sensor space motion matrix, performing a coordinate system transformation on the sensor space motion matrix under a preset space coordinate system to obtain a transformed sensor space motion matrix;
and determining the pose change information of the target user terminal according to each transformed sensor space motion matrix.
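As a hedged illustration of how sensor detection information at adjacent designated time nodes can yield sensor space motion matrices: assuming the preset sensor is a gyroscope (the specification does not name the sensors), each interval's angular-velocity reading can be turned into a rotation matrix via the Rodrigues formula, and chaining these per-interval matrices gives the pose change over the preset time period. The sample values and sampling interval below are made up:

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Per-interval sensor space motion (rotation) matrix from a gyro reading.

    omega: (3,) angular velocity in rad/s at a designated time node.
    dt: spacing between two adjacent designated time nodes, in seconds.
    Applies the Rodrigues formula to the rotation vector omega * dt.
    """
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)                      # no measurable rotation
    k = omega / np.linalg.norm(omega)         # unit rotation axis
    kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])       # skew-symmetric cross matrix
    return np.eye(3) + np.sin(theta) * kx + (1.0 - np.cos(theta)) * (kx @ kx)

# Four intervals of 0.5 s, each at 45 deg/s about the z axis: chaining the
# per-interval matrices gives the net pose change (90 degrees about z).
gyro_samples = [np.array([0.0, 0.0, np.pi / 4.0])] * 4
pose = np.eye(3)
for w in gyro_samples:
    pose = rotation_from_gyro(w, dt=0.5) @ pose
print(np.round(pose, 6))
```

A real terminal would fuse accelerometer and gyroscope readings and also track translation; this sketch keeps only the rotational part to show the per-interval matrix chaining.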
Optionally, when the computer-executable instructions are executed, the determining of the credibility identification result for the target video stream according to the comparison information between the optical flow change information and the pose change information includes:
comparing the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result;
judging whether the optical flow pose comparison result meets a preset change consistency condition;
if the judgment result is yes, determining that the target video stream is trusted video stream information acquired in real time;
and if the judgment result is no, determining that the target video stream is untrusted video stream information acquired in non-real time.
Optionally, when the computer-executable instructions are executed, the optical flow change information comprises: a plurality of optical flow space motion matrices within the preset time period, and the pose change information comprises: a plurality of sensor space motion matrices within the preset time period;
the comparing of the optical flow change information with the pose change information to obtain the corresponding optical flow pose comparison result includes:
comparing the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
Optionally, when the computer-executable instructions are executed, the comparing of the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result includes:
preprocessing each optical flow space motion matrix and each sensor space motion matrix to obtain preprocessed optical flow space motion matrices and preprocessed sensor space motion matrices, wherein the preprocessing comprises: smoothing, denoising, and aligning the comparison starting points;
and comparing the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
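The preprocessing step can be sketched on scalar motion series derived from the two sets of matrices (e.g., per-interval motion magnitudes). The specification does not fix concrete filters, so in this sketch a moving average stands in for both smoothing and denoising, and subtracting the first smoothed value aligns the comparison starting points:

```python
import numpy as np

def preprocess(series, window=3):
    """Smooth/denoise a motion series and align its comparison starting point.

    series: (N,) per-interval motion magnitudes derived from the spatial
    motion matrices. A moving average with edge padding stands in for the
    smoothing and denoising steps; subtracting the first smoothed value
    makes every series start at zero, aligning comparison starting points.
    """
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed - smoothed[0]

flow = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
sensor = flow + 1.0                      # same motion, offset baseline
f, s = preprocess(flow), preprocess(sensor)
print(np.round(f - s, 6))                # baseline removed: all zeros
```

After this step the two series start at the same point and differ only where the underlying motion differs, which is what the subsequent comparison relies on.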
Optionally, when the computer-executable instructions are executed, the comparing of the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result includes:
for every two adjacent designated time nodes in the preset time period, subtracting the preprocessed optical flow space motion matrix corresponding to the two adjacent designated time nodes from the corresponding preprocessed sensor space motion matrix to obtain the corresponding optical flow pose comparison result;
or,
comparing a first change trend of the preprocessed optical flow space motion matrices with a second change trend of the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
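A minimal sketch of the two alternative comparisons and the subsequent change-consistency judgment, operating on the preprocessed series: (a) per-interval differencing, and (b) a trend comparison, taken here as Pearson correlation. The thresholds are illustrative assumptions, not values from the specification:

```python
import numpy as np

def change_consistent(flow_series, sensor_series, max_diff=0.2, min_corr=0.8):
    """Judge the preset change consistency condition between the two series.

    flow_series, sensor_series: preprocessed per-interval motion magnitudes.
    (a) difference check: every per-interval difference stays under max_diff;
    (b) trend check: the Pearson correlation of the two series is high.
    Either comparison passing counts as consistent in this sketch.
    """
    diff_ok = np.max(np.abs(flow_series - sensor_series)) <= max_diff
    trend_ok = np.corrcoef(flow_series, sensor_series)[0, 1] >= min_corr
    return bool(diff_ok or trend_ok)

live = np.array([0.0, 0.5, 1.0, 0.6, 0.2])        # motion seen in the video
matching = live + 0.05                            # pose change tracks it
replayed = np.array([0.9, 0.1, 0.8, 0.0, 0.7])    # injected, unrelated motion
print(change_consistent(live, matching))   # True: trusted, real-time stream
print(change_consistent(live, replayed))   # False: untrusted, non-real-time
```

A consistent result marks the target video stream as trusted, real-time acquisition; an inconsistent one flags a possible injected, non-real-time stream.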
In one or more embodiments of the present disclosure, a video stream identification device determines optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition device is a target user terminal; determines pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal in the preset time period; and determines a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information. By comparing and analyzing the optical flow change information determined based on the target video stream against the pose change information of the target user terminal, the device identifies whether the target video stream is video stream information shot and uploaded in real time for the target object. A malicious video stream attack that substitutes a pre-stored, non-real-time video stream for the real-time stream can thus be identified quickly, the injected non-real-time video stream can be intercepted in time, and the accuracy of subsequent service processing is improved.
It should be noted that the embodiments related to the video stream identification device in the present specification and the embodiments related to the video stream identification method in the present specification are based on the same inventive concept; therefore, for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding video stream identification method, and repeated details are omitted.
Further, corresponding to the methods shown in fig. 1 to fig. 7, and based on the same technical concept, one or more embodiments of the present disclosure further provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in the storage medium, when executed by a processor, can implement the following flow:
determining optical flow change information corresponding to a target video stream, wherein a theoretical acquisition time period of the target video stream is a preset time period and a theoretical acquisition device is a target user terminal; and
determining pose change information of the target user terminal, wherein the pose change information is determined based on sensor detection information of the target user terminal in the preset time period;
and determining a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information.
In one or more embodiments of the present disclosure, optical flow change information determined based on a target video stream is compared and analyzed against pose change information of a target user terminal to identify whether the target video stream is video stream information shot and uploaded in real time for a target object. A malicious video stream attack that substitutes a pre-stored, non-real-time video stream for the real-time stream can thus be identified quickly, the injected non-real-time video stream can be intercepted in time, and the accuracy of subsequent service processing is improved.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the preset time period comprises a plurality of designated time nodes;
the determining of the optical flow change information corresponding to the target video stream includes:
obtaining a target video stream to be identified, wherein the target video stream comprises: a plurality of object image frames containing the target object, corresponding to the designated time nodes respectively;
and determining the optical flow change information corresponding to the target video stream according to the object image frames under the plurality of designated time nodes.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining of the optical flow change information corresponding to the target video stream according to the object image frames under the plurality of designated time nodes includes:
determining the two object image frames corresponding to each pair of adjacent designated time nodes as an image frame combination;
for each image frame combination, performing image optical flow identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining of the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations includes:
for each optical flow space motion matrix, performing a coordinate system transformation on the optical flow space motion matrix under a preset space coordinate system to obtain a transformed optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to each transformed optical flow space motion matrix.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the preset time period comprises a plurality of designated time nodes;
the determining of the pose change information of the target user terminal includes:
acquiring the sensor detection information collected by at least one preset sensor of the target user terminal at each designated time node;
and determining the pose change information of the target user terminal according to the sensor detection information under the plurality of designated time nodes.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining of the pose change information of the target user terminal according to the sensor detection information under the plurality of designated time nodes includes:
determining a corresponding sensor space motion matrix according to the sensor detection information corresponding to each pair of adjacent designated time nodes;
and determining the pose change information of the target user terminal according to each sensor space motion matrix.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining of the pose change information of the target user terminal according to each sensor space motion matrix includes:
for each sensor space motion matrix, performing a coordinate system transformation on the sensor space motion matrix under a preset space coordinate system to obtain a transformed sensor space motion matrix;
and determining the pose change information of the target user terminal according to each transformed sensor space motion matrix.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining of the credibility identification result for the target video stream according to the comparison information between the optical flow change information and the pose change information includes:
comparing the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result;
judging whether the optical flow pose comparison result meets a preset change consistency condition;
if the judgment result is yes, determining that the target video stream is trusted video stream information acquired in real time;
and if the judgment result is no, determining that the target video stream is untrusted video stream information acquired in non-real time.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the optical flow change information comprises: a plurality of optical flow space motion matrices within the preset time period, and the pose change information comprises: a plurality of sensor space motion matrices within the preset time period;
the comparing of the optical flow change information with the pose change information to obtain the corresponding optical flow pose comparison result includes:
comparing the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the comparing of the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result includes:
preprocessing each optical flow space motion matrix and each sensor space motion matrix to obtain preprocessed optical flow space motion matrices and preprocessed sensor space motion matrices, wherein the preprocessing comprises: smoothing, denoising, and aligning the comparison starting points;
and comparing the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the comparing of the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result includes:
for every two adjacent designated time nodes in the preset time period, subtracting the preprocessed optical flow space motion matrix corresponding to the two adjacent designated time nodes from the corresponding preprocessed sensor space motion matrix to obtain the corresponding optical flow pose comparison result;
or,
comparing a first change trend of the preprocessed optical flow space motion matrices with a second change trend of the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
In one or more embodiments of the present disclosure, the computer-executable instructions stored in the storage medium, when executed by the processor, determine optical flow change information corresponding to a target video stream, where the theoretical acquisition time period of the target video stream is a preset time period and the theoretical acquisition device is a target user terminal; determine pose change information of the target user terminal, where the pose change information is determined based on sensor detection information of the target user terminal in the preset time period; and determine a credibility identification result for the target video stream according to comparison information between the optical flow change information and the pose change information. By comparing and analyzing the optical flow change information determined based on the target video stream against the pose change information of the target user terminal, whether the target video stream is video stream information shot and uploaded in real time for the target object is identified. A malicious video stream attack that substitutes a pre-stored, non-real-time video stream for the real-time stream can thus be identified quickly, the injected non-real-time video stream can be intercepted in time, and the accuracy of subsequent service processing is improved.
It should be noted that the embodiments related to the storage medium in the present specification and the embodiments related to the video stream identification method in the present specification are based on the same inventive concept; therefore, for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding video stream identification method, and repeated details are omitted.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation is written in a specific programming language called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained by merely performing slight logic programming of the method flow into an integrated circuit using one of the several hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller in pure computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing one or more embodiments of the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art will appreciate that one or more embodiments of the present specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
One or more of the present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to one or more embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that one or more embodiments of the present specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments.
The foregoing description is merely illustrative of one or more embodiments of the present disclosure and is not intended to limit the one or more embodiments of the present disclosure. Various modifications and alterations to one or more of this description will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of one or more of the present description, are intended to be included within the scope of the claims of one or more of the present description.
Claims (24)
1. A video stream identification method, comprising:
determining optical flow change information corresponding to a target video stream, wherein the optical flow change information is obtained by performing optical flow trend analysis based on a plurality of target image frames in the target video stream;
determining pose change information of a target user terminal, wherein the pose change information is determined based on sensor space motion information respectively corresponding to sensor detection information of the target user terminal at a plurality of designated time nodes within a preset time period;
and determining whether the target video stream is a video stream acquired by the target user terminal in real time within the preset time period according to the comparison information between the optical flow change information and the pose change information.
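The decision logic of claim 1 — comparing camera-derived motion with sensor-derived motion to judge whether the stream was captured live — can be sketched as follows. The function name, the vector representation of per-interval motion, and the threshold value are all illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def is_live_capture(flow_motion, sensor_motion, threshold=0.5):
    """Judge whether a video stream was captured live by the terminal.

    flow_motion / sensor_motion: per-interval motion estimates derived
    from optical flow and from device sensors (hypothetical inputs).
    threshold: maximum tolerated mean discrepancy (assumed value).
    """
    flow = np.asarray(flow_motion, dtype=float)
    sensor = np.asarray(sensor_motion, dtype=float)
    # Per-interval discrepancy between the two motion estimates; a
    # large discrepancy suggests an injected or replayed stream.
    diff = np.linalg.norm(flow - sensor, axis=-1)
    return bool(diff.mean() <= threshold)
```

When the two motion sequences agree, the stream is accepted as a live capture; otherwise it is flagged as untrusted, which matches the consistency test elaborated in claims 8–11.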
2. The method of claim 1, wherein the determining optical flow change information corresponding to the target video stream comprises:
obtaining the target video stream to be identified, wherein the target video stream comprises: a plurality of object image frames that contain a target object and respectively correspond to the plurality of designated time nodes;
and determining the optical flow change information corresponding to the target video stream according to the object image frames at the plurality of designated time nodes.
3. The method of claim 2, wherein the determining optical flow change information corresponding to the target video stream according to the object image frames at the plurality of designated time nodes comprises:
determining, for each pair of adjacent designated time nodes, the two corresponding object image frames as an image frame combination;
for each image frame combination, performing image optical flow identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations.
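Claim 3's pairing of frames at adjacent time nodes, and the reduction of each pair's flow field to a motion estimate, might look like the sketch below. It is NumPy-only; a real implementation would obtain the dense flow field from a preset optical flow method (for example Lucas-Kanade or Farneback), which is assumed here rather than shown.

```python
import numpy as np

def adjacent_pairs(frames):
    # Combine the two frames at each pair of adjacent designated
    # time nodes into an image frame combination.
    return list(zip(frames[:-1], frames[1:]))

def flow_motion_vector(flow_field):
    # Reduce a dense flow field (H x W x 2) for one frame pair to a
    # single mean motion vector; a full system might instead fit a
    # homography or rotation/translation (the "space motion matrix").
    return np.asarray(flow_field, dtype=float).reshape(-1, 2).mean(axis=0)
```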
4. The method of claim 3, wherein the determining the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations comprises:
for each optical flow space motion matrix, performing coordinate system transformation on the optical flow space motion matrix under a preset space coordinate system to obtain a transformed optical flow space motion matrix;
and determining the optical flow change information corresponding to the target video stream according to each transformed optical flow space motion matrix.
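Claim 4's coordinate-system transformation — expressing each optical flow motion estimate in a preset common spatial coordinate system — can be sketched as a rotation into that frame. The calibration rotation matrix is an assumed input, not something the claims specify.

```python
import numpy as np

def to_common_frame(motion_vector, rotation):
    # rotation: 3x3 matrix mapping the camera frame into the preset
    # common spatial coordinate system (assumed calibration input).
    return np.asarray(rotation, dtype=float) @ np.asarray(motion_vector, dtype=float)
```

The same transformation would be applied to the sensor-side matrices of claim 7, so that both motion sequences end up in one comparable frame.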
5. The method of claim 1, wherein the determining pose change information of the target user terminal comprises:
obtaining sensor detection information acquired by at least one preset sensor of the target user terminal at each designated time node;
and determining the pose change information of the target user terminal according to the sensor detection information at the plurality of designated time nodes.
6. The method of claim 5, wherein the determining pose change information of the target user terminal from the sensor detection information at the plurality of designated time nodes comprises:
determining, for each pair of adjacent designated time nodes, a corresponding sensor space motion matrix according to the sensor detection information corresponding to the pair;
and determining the pose change information of the target user terminal according to the sensor space motion matrices.
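On the sensor side (claim 6), one common way to turn readings at adjacent time nodes into a space motion matrix is to integrate the gyroscope's angular rate over the interval. The first-order small-angle form below is a simplification; a real pipeline would also fuse accelerometer data and use a proper rotation integration.

```python
import numpy as np

def sensor_motion_matrix(gyro_rate, dt):
    # Small-angle rotation accumulated between two adjacent
    # designated time nodes: R ~ I + [w*dt]_x (first-order).
    wx, wy, wz = (r * dt for r in gyro_rate)
    return np.array([[1.0, -wz,  wy],
                     [wz,  1.0, -wx],
                     [-wy, wx,  1.0]])
```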
7. The method of claim 6, wherein the determining pose change information of the target user terminal according to each of the sensor space motion matrices comprises:
for each sensor space motion matrix, performing coordinate system transformation on the sensor space motion matrix under a preset space coordinate system to obtain a transformed sensor space motion matrix;
and determining the pose change information of the target user terminal according to the transformed sensor space motion matrices.
8. The method of claim 1, wherein the determining whether the target video stream is a video stream acquired by the target user terminal in real time within the preset period of time according to the comparison information between the optical flow change information and the pose change information comprises:
comparing the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result;
judging whether the optical flow pose comparison result meets a preset change consistency condition;
if the judgment result is yes, determining that the target video stream is trusted video stream information acquired by the target user terminal in real time within the preset time period;
if the judgment result is no, determining that the target video stream is untrusted video stream information not acquired by the target user terminal in real time within the preset time period.
9. The method of claim 8, wherein the optical flow change information comprises: a plurality of optical flow space motion matrices within the preset time period, and the pose change information comprises: a plurality of sensor space motion matrices within the preset time period;
and the comparing the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result comprises:
comparing the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
10. The method of claim 9, wherein the comparing the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result comprises:
preprocessing the plurality of optical flow space motion matrices and the plurality of sensor space motion matrices to obtain preprocessed optical flow space motion matrices and preprocessed sensor space motion matrices, wherein the preprocessing comprises: smoothing, denoising, and aligning comparison starting points;
and comparing the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
11. The method of claim 10, wherein the comparing the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result comprises:
for each pair of adjacent designated time nodes within the preset time period, computing a difference between the preprocessed optical flow space motion matrix and the preprocessed sensor space motion matrix corresponding to the pair to obtain the corresponding optical flow pose comparison result;
or,
comparing a first change trend of the preprocessed optical flow space motion matrices with a second change trend of the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
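The preprocessing of claim 10 (smoothing, denoising, aligning comparison start points) and the trend-comparison branch of claim 11 could be sketched as follows; the moving-average window size and the correlation threshold are illustrative assumptions.

```python
import numpy as np

def preprocess(series, window=3):
    # Moving-average smoothing doubles as simple denoising here;
    # subtracting the first sample aligns the comparison start point.
    s = np.asarray(series, dtype=float)
    smoothed = np.convolve(s, np.ones(window) / window, mode="same")
    return smoothed - smoothed[0]

def trends_consistent(flow_series, sensor_series, min_corr=0.8):
    # Claim 11's second branch: compare the change trends of the two
    # preprocessed sequences, here via Pearson correlation.
    f, p = preprocess(flow_series), preprocess(sensor_series)
    return bool(np.corrcoef(f, p)[0, 1] >= min_corr)
```

Trend correlation is more tolerant of a constant scale mismatch between the camera and sensor estimates than the per-interval matrix difference of claim 11's first branch, which is presumably why the claim offers both options.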
12. A video stream identification apparatus comprising:
an optical flow change information determining module, configured to determine optical flow change information corresponding to a target video stream, wherein the optical flow change information is obtained by performing optical flow trend analysis based on a plurality of target image frames in the target video stream;
a pose change information determining module, configured to determine pose change information of a target user terminal, wherein the pose change information is determined based on sensor space motion information respectively corresponding to sensor detection information of the target user terminal at a plurality of designated time nodes within a preset time period;
and a video stream credibility identification module, configured to determine whether the target video stream is a video stream acquired by the target user terminal in real time within the preset time period according to comparison information between the optical flow change information and the pose change information.
13. The apparatus of claim 12, wherein the optical flow change information determining module is configured to:
obtain the target video stream to be identified, wherein the target video stream comprises: a plurality of object image frames that contain a target object and respectively correspond to the plurality of designated time nodes;
and determine the optical flow change information corresponding to the target video stream according to the object image frames at the plurality of designated time nodes.
14. The apparatus of claim 13, wherein the optical flow change information determining module is configured to:
determine, for each pair of adjacent designated time nodes, the two corresponding object image frames as an image frame combination;
for each image frame combination, perform image optical flow identification on the image frame combination by using a preset optical flow method to obtain a corresponding optical flow space motion matrix;
and determine the optical flow change information corresponding to the target video stream according to the optical flow space motion matrices respectively corresponding to the image frame combinations.
15. The apparatus of claim 14, wherein the optical flow change information determining module is configured to:
for each optical flow space motion matrix, perform coordinate system transformation on the optical flow space motion matrix under a preset space coordinate system to obtain a transformed optical flow space motion matrix;
and determine the optical flow change information corresponding to the target video stream according to each transformed optical flow space motion matrix.
16. The apparatus of claim 12, wherein the pose change information determining module is configured to:
obtain sensor detection information acquired by at least one preset sensor of the target user terminal at each designated time node;
and determine the pose change information of the target user terminal according to the sensor detection information at the plurality of designated time nodes.
17. The apparatus of claim 16, wherein the pose change information determining module is configured to:
determine, for each pair of adjacent designated time nodes, a corresponding sensor space motion matrix according to the sensor detection information corresponding to the pair;
and determine the pose change information of the target user terminal according to the sensor space motion matrices.
18. The apparatus of claim 17, wherein the pose change information determining module is configured to:
for each sensor space motion matrix, perform coordinate system transformation on the sensor space motion matrix under a preset space coordinate system to obtain a transformed sensor space motion matrix;
and determine the pose change information of the target user terminal according to the transformed sensor space motion matrices.
19. The apparatus of claim 12, wherein the video stream credibility identification module is configured to:
compare the optical flow change information with the pose change information to obtain a corresponding optical flow pose comparison result;
judge whether the optical flow pose comparison result meets a preset change consistency condition;
if the judgment result is yes, determine that the target video stream is trusted video stream information acquired by the target user terminal in real time within the preset time period;
if the judgment result is no, determine that the target video stream is untrusted video stream information not acquired by the target user terminal in real time within the preset time period.
20. The apparatus of claim 19, wherein the optical flow change information comprises: a plurality of optical flow space motion matrices within the preset time period, and the pose change information comprises: a plurality of sensor space motion matrices within the preset time period;
and the video stream credibility identification module is configured to:
compare the plurality of optical flow space motion matrices with the plurality of sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
21. The apparatus of claim 20, wherein the video stream credibility identification module is configured to:
preprocess the plurality of optical flow space motion matrices and the plurality of sensor space motion matrices to obtain preprocessed optical flow space motion matrices and preprocessed sensor space motion matrices, wherein the preprocessing comprises: smoothing, denoising, and aligning comparison starting points;
and compare the preprocessed optical flow space motion matrices with the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
22. The apparatus of claim 21, wherein the video stream credibility identification module is configured to:
for each pair of adjacent designated time nodes within the preset time period, compute a difference between the preprocessed optical flow space motion matrix and the preprocessed sensor space motion matrix corresponding to the pair to obtain the corresponding optical flow pose comparison result;
or,
compare a first change trend of the preprocessed optical flow space motion matrices with a second change trend of the preprocessed sensor space motion matrices to obtain the corresponding optical flow pose comparison result.
23. A video stream identification device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
determining optical flow change information corresponding to a target video stream, wherein the optical flow change information is obtained by performing optical flow trend analysis based on a plurality of target image frames in the target video stream; and
determining pose change information of a target user terminal, wherein the pose change information is determined based on sensor space motion information respectively corresponding to sensor detection information of the target user terminal at a plurality of designated time nodes within a preset time period;
and determining whether the target video stream is a video stream acquired by the target user terminal in real time within the preset time period according to the comparison information between the optical flow change information and the pose change information.
24. A storage medium storing computer executable instructions that, when executed by a processor, implement the following method:
determining optical flow change information corresponding to a target video stream, wherein the optical flow change information is obtained by performing optical flow trend analysis based on a plurality of target image frames in the target video stream; and
determining pose change information of a target user terminal, wherein the pose change information is determined based on sensor space motion information respectively corresponding to sensor detection information of the target user terminal at a plurality of designated time nodes within a preset time period;
and determining whether the target video stream is a video stream acquired by the target user terminal in real time within the preset time period according to the comparison information between the optical flow change information and the pose change information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310755955.7A CN116797971A (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911403318.3A CN111178277B (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
CN202310755955.7A CN116797971A (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911403318.3A Division CN111178277B (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116797971A true CN116797971A (en) | 2023-09-22 |
Family
ID=70658298
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310755955.7A Pending CN116797971A (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
CN201911403318.3A Active CN111178277B (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911403318.3A Active CN111178277B (en) | 2019-12-31 | 2019-12-31 | Video stream identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN116797971A (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112967228B (en) * | 2021-02-02 | 2024-04-26 | 中国科学院上海微系统与信息技术研究所 | Determination method and device of target optical flow information, electronic equipment and storage medium |
CN112966669A (en) * | 2021-04-06 | 2021-06-15 | 海南电网有限责任公司儋州供电局 | Identification method suitable for video stream detection |
CN113850211B (en) * | 2021-09-29 | 2024-08-23 | 支付宝(杭州)信息技术有限公司 | Method and device for detecting attack of injected video |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8755569B2 (en) * | 2009-05-29 | 2014-06-17 | University Of Central Florida Research Foundation, Inc. | Methods for recognizing pose and action of articulated objects with collection of planes in motion |
CN106022263B (en) * | 2016-05-19 | 2019-07-09 | 西安石油大学 | A kind of wireless vehicle tracking of fusion feature matching and optical flow method |
CN107404381A (en) * | 2016-05-19 | 2017-11-28 | 阿里巴巴集团控股有限公司 | A kind of identity identifying method and device |
CN106611157B (en) * | 2016-11-17 | 2019-11-29 | 中国石油大学(华东) | A kind of more people's gesture recognition methods detected based on light stream positioning and sliding window |
CN109215077B (en) * | 2017-07-07 | 2022-12-06 | 腾讯科技(深圳)有限公司 | Method for determining camera attitude information and related device |
US10982968B2 (en) * | 2018-03-29 | 2021-04-20 | Nio Usa, Inc. | Sensor fusion methods for augmented reality navigation |
CN108537845B (en) * | 2018-04-27 | 2023-01-03 | 腾讯科技(深圳)有限公司 | Pose determination method, pose determination device and storage medium |
CN110472458A (en) * | 2018-05-11 | 2019-11-19 | 深眸科技(深圳)有限公司 | A kind of unmanned shop order management method and system |
CN109543513A (en) * | 2018-10-11 | 2019-03-29 | 平安科技(深圳)有限公司 | Method, apparatus, equipment and the storage medium that intelligent monitoring is handled in real time |
CN109387205B (en) * | 2018-11-30 | 2020-11-17 | 歌尔科技有限公司 | Method, device and storage medium for acquiring attitude angle change amplitude |
CN109598242B (en) * | 2018-12-06 | 2023-04-18 | 中科视拓(北京)科技有限公司 | Living body detection method |
CN110264493B (en) * | 2019-06-17 | 2021-06-18 | 北京影谱科技股份有限公司 | Method and device for tracking multiple target objects in motion state |
CN110378936B (en) * | 2019-07-30 | 2021-11-05 | 北京字节跳动网络技术有限公司 | Optical flow calculation method and device and electronic equipment |
CN110415276B (en) * | 2019-07-30 | 2022-04-05 | 北京字节跳动网络技术有限公司 | Motion information calculation method and device and electronic equipment |
- 2019-12-31 CN CN202310755955.7A patent/CN116797971A/en active Pending
- 2019-12-31 CN CN201911403318.3A patent/CN111178277B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111178277A (en) | 2020-05-19 |
CN111178277B (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111178277B (en) | Video stream identification method and device | |
US11423695B2 (en) | Face location tracking method, apparatus, and electronic device | |
CN113095124B (en) | Face living body detection method and device and electronic equipment | |
CN112800997B (en) | Living body detection method, device and equipment | |
Castiglione et al. | Context aware ubiquitous biometrics in edge of military things | |
KR102399017B1 (en) | Method of generating image and apparatus thereof | |
KR20200044171A (en) | Real-time object detection method and apparatus by deep learning network model | |
US20150154455A1 (en) | Face recognition with parallel detection and tracking, and/or grouped feature motion shift tracking | |
US11354544B2 (en) | Fingerprint image processing methods and apparatuses | |
CN111160251B (en) | Living body identification method and device | |
CN110738078A (en) | face recognition method and terminal equipment | |
CN113850211B (en) | Method and device for detecting attack of injected video | |
CN111199231B (en) | Image recognition method and device | |
WO2017062441A1 (en) | Autofocus method and apparatus using modulation transfer function curves | |
WO2014091667A1 (en) | Analysis control system | |
US10401968B2 (en) | Determining digit movement from frequency data | |
US11315256B2 (en) | Detecting motion in video using motion vectors | |
CN111625297A (en) | Application program display method, terminal and computer readable storage medium | |
CN114740975A (en) | Target content acquisition method and related equipment | |
Hofer et al. | Face to Face with Efficiency: Real-Time Face Recognition Pipelines on Embedded Devices | |
CN109118506A (en) | The method and device of pupil image marginal point in a kind of determining eye image | |
JP7524980B2 (en) | Determination method, determination program, and information processing device | |
CN109165488B (en) | Identity authentication method and device | |
CN117237682A (en) | Certificate verification method and device, storage medium and electronic equipment | |
CN116052287A (en) | Living body detection method, living body detection device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||