
CN115511919B - Video processing method, image detection method and device - Google Patents

Video processing method, image detection method and device

Info

Publication number
CN115511919B
CN115511919B (application CN202211169899.0A)
Authority
CN
China
Prior art keywords
feature point
algorithm
image
vector
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211169899.0A
Other languages
Chinese (zh)
Other versions
CN115511919A (en)
Inventor
尹越
孙茳
赵毅晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qiantu Technology Co ltd
Original Assignee
Beijing Qiantu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qiantu Technology Co ltd filed Critical Beijing Qiantu Technology Co ltd
Priority to CN202211169899.0A priority Critical patent/CN115511919B/en
Publication of CN115511919A publication Critical patent/CN115511919A/en
Application granted granted Critical
Publication of CN115511919B publication Critical patent/CN115511919B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/20 Analysis of motion
                        • G06T 7/269 Analysis of motion using gradient-based methods
                        • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
                            • G06T 7/248 Analysis of motion using feature-based methods involving reference images or patches
                        • G06T 7/254 Analysis of motion involving subtraction of images
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10016 Video; Image sequence
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
                            • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
                    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
                            • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
                                • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video processing method, an image detection method, and a device. A feature point optical flow movement vector between two adjacent frames in an image sequence to be processed is acquired according to an optical flow algorithm; a feature point algorithm conclusion movement vector between the same two adjacent frames is obtained according to a preset image processing algorithm; the error between the feature point optical flow movement vector and the feature point algorithm conclusion movement vector is calculated to obtain a stabilized feature point movement vector; and the stabilized feature point movement vector is added to the position of the feature point of the previous frame in the image sequence to obtain the stabilized feature point, which serves as the position of the feature point of the current frame. The method solves the technical problem that stable feature points of a target object cannot be obtained in a video.

Description

Video processing method, image detection method and device
Technical Field
The application relates to the field of video processing and computer image processing, in particular to a video processing method, an image detection method and a device.
Background
When a video is analyzed, the corresponding analysis data in each frame of picture is obtained through a computer vision algorithm. For example, the joints of the objects in the image are used as key points, and all the detected key points are then assigned to each corresponding object.
Because a computer vision algorithm analyzes each frame of the video independently, it cannot be guaranteed that the analysis error for the previous frame of image data and that for the next frame are the same, even under identical conditions. Consequently, it cannot be ensured that the position information of the feature points extracted from each frame of image data remains stable.
No effective solution has yet been proposed for the problem in the related art that stable feature points of a target object cannot be obtained in a video.
Disclosure of Invention
The application mainly aims to provide a video processing method, an image detection method and a device, so as to solve the problem that stable feature points of a target object cannot be obtained in a video.
In order to achieve the above object, according to one aspect of the present application, there is provided a video processing method.
The video processing method according to the present application includes:
according to an optical flow algorithm, acquiring feature point optical flow movement vectors between two adjacent frames in an image sequence to be processed;
according to a preset image processing algorithm, obtaining a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed;
calculating errors between the feature point optical flow moving vector and the feature point algorithm conclusion moving vector to obtain a stabilized feature point moving vector;
and adding the stabilized feature point movement vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point serving as the position of the feature point of the current frame.
In some embodiments, the feature points include preset key point information extracted from each frame of image in the image sequence to be processed, which at least includes joint position information of a target object.
In some embodiments, before the position of the feature point of the previous frame in the image sequence to be processed is added to the stabilized feature point motion vector, the method further includes:
judging whether the actual length of the feature point algorithm conclusion motion vector is greater than A% of the width of the current frame picture;
if the actual length of the feature point algorithm conclusion motion vector is greater than A% of the current frame picture width, the stabilized feature point movement vector can be added to the position of the feature point of the previous frame in the image sequence to be processed, wherein A is preset.
In some embodiments, the method further comprises:
when the target object is stationary, judging whether the target moves according to the optical flow algorithm, and correcting through the stabilized feature point movement vector;
when the target object moves, correcting with the stabilized feature point movement vector so as to reduce errors generated when the feature points are detected.
In some embodiments, the calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector includes:
and obtaining the stabilized feature point movement vector according to the vector subtraction of the feature point optical flow movement vector and the feature point algorithm conclusion movement vector, wherein the stabilization is used for representing whether the feature point of the target object of the current image frame remains stable or not.
In some embodiments, the acquiring feature point optical flow movement vectors between two adjacent frames in the image sequence to be processed according to the optical flow algorithm includes:
determining the corresponding relation between the previous frame image and the current frame image according to the change of pixels in the image sequence to be processed in the time domain and the correlation between adjacent frames;
acquiring motion information of a target object between a previous frame image and a current frame image according to the corresponding relation;
and acquiring the characteristic point optical flow movement vector according to the optical flow starting point and the optical flow ending point in the motion information.
In some embodiments, the obtaining, according to a preset image processing algorithm, a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed includes:
extracting and obtaining the characteristic point position of the previous frame image and the characteristic point position of the current frame image according to a preset image processing algorithm;
and obtaining the conclusion motion vector of the feature point algorithm according to the difference value between the feature point position of the previous frame image and the feature point position of the current frame image.
In order to achieve the above object, according to another aspect of the present application, there is provided an image detection method including: performing image detection on the positions of the feature points of the previous frame obtained by the above video processing method.
In order to achieve the above object, according to another aspect of the present application, there is provided a video processing apparatus.
The video processing apparatus according to the present application includes:
the first acquisition module is used for acquiring feature point optical flow movement vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm;
the second acquisition module is used for acquiring a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed according to a preset image processing algorithm;
the computing module is used for computing errors between the feature point optical flow moving vector and the feature point algorithm conclusion moving vector to obtain a stabilized feature point moving vector;
and the correction module is used for adding the stabilized feature point movement vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point serving as the position of the feature point of the current frame.
In order to achieve the above object, according to yet another aspect of the present application, there is provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to execute the method when run.
To achieve the above object, according to a further aspect of the present application, there is provided an electronic device comprising a memory, in which a computer program is stored, and a processor arranged to run the computer program to perform the method.
According to the video processing method, the image detection method, and the device provided by the application, a feature point optical flow movement vector between two adjacent frames in an image sequence to be processed is acquired according to an optical flow algorithm, and a feature point algorithm conclusion movement vector between the same two adjacent frames is obtained according to a preset image processing algorithm. The error between the feature point optical flow movement vector and the feature point algorithm conclusion movement vector is then calculated to obtain a stabilized feature point movement vector, which is added to the position of the feature point of the previous frame to yield the stabilized feature point serving as the position of the feature point of the current frame. This achieves the technical effect of correcting the feature point positions of a target object in the video, and solves the technical problem that stable feature points of the target object cannot be obtained in the video.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application. In the drawings:
fig. 1 is a flow chart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic structural view of a video processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic diagram of a video processing method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the application herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal" and the like indicate an azimuth or a positional relationship based on that shown in the drawings. These terms are only used to better describe the present application and its embodiments and are not intended to limit the scope of the indicated devices, elements or components to the particular orientations or to configure and operate in the particular orientations.
Also, some of the terms described above may be used to indicate other meanings in addition to orientation or positional relationships, for example, the term "upper" may also be used to indicate some sort of attachment or connection in some cases. The specific meaning of these terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, the terms "mounted," "configured," "provided," "connected," "coupled," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; may be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements, or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
The inventor finds that when a target detection algorithm of the related art is used for target detection, the feature point positions can jump back and forth when the target object (a person) is stationary, so stable acquisition cannot be ensured. In addition, if the target object moves, the acquired data may also contain errors.
To overcome these defects, the video processing method in the embodiments of the application judges the movement of the whole picture at the feature points through an optical flow algorithm, obtains the feature movement vector by vector subtraction of the feature point optical flow movement vector and the feature point algorithm conclusion movement vector, and performs correction within a certain range.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
As shown in fig. 1, a flow chart of a video processing method according to an embodiment of the present application includes the following steps S110 to S140:
step S110, according to the optical flow algorithm, the feature point optical flow movement vector before two adjacent frames in the image sequence to be processed is obtained.
And taking the video stream data as an image sequence to be processed, and acquiring feature point optical flow movement vectors before two adjacent frames in the image sequence to be processed by adopting an optical flow algorithm. It will be appreciated that the two adjacent frames are the previous frame and the current frame. And calculating to obtain the feature point optical flow movement vector between two adjacent frames by adopting an optical flow algorithm.
According to the optical flow algorithm, when the time interval between two consecutive frames of the video is very small, the displacement of a target point between them can also be treated as small.
And step S120, obtaining a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed according to a preset image processing algorithm.
For the same video stream data, a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed is simultaneously obtained according to a preset image processing algorithm. It can be understood that the feature points can be obtained by using an image processing algorithm in the related art, and the feature point algorithm conclusion motion vector between the two adjacent frames is then extracted.
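A minimal sketch of step S120, assuming a keypoint detector (e.g. a pose estimator; the detector itself is outside the patent's scope) that returns an (N, 2) array of (x, y) feature point positions per frame — the conclusion motion vector is then a per-point difference:

```python
import numpy as np

def detector_motion_vectors(prev_keypoints, curr_keypoints):
    """Feature point algorithm conclusion motion vectors: the position each
    detected feature point moved to in the current frame minus its position
    in the previous frame, computed point by point."""
    prev_kp = np.asarray(prev_keypoints, dtype=float)
    curr_kp = np.asarray(curr_keypoints, dtype=float)
    if prev_kp.shape != curr_kp.shape:
        raise ValueError("both frames must report the same set of keypoints")
    return curr_kp - prev_kp
```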
And step S130, calculating errors between the feature point optical flow moving vector and the feature point algorithm conclusion moving vector to obtain a stabilized feature point moving vector.
The stabilized feature point movement vector is calculated from the error between the two vectors. That is, a stable feature point movement vector can be obtained regardless of whether the target object is stationary or moving.
Step S140, adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed, so as to obtain the stabilized feature point as the position of the feature point of the current frame.
The stabilized feature point movement vector is added to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point, which serves as the position of the feature point of the current frame. It can be appreciated that the current frame and the previous frame are adjacent frames.
It should be noted that, at this time, it may be assumed that the position of the feature point of the previous frame is accurate, or the position of the feature point of the previous frame is determined to be accurate according to a preset condition.
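Steps S130 and S140 can be sketched together as follows, reading the patent literally: the stabilized vector is the error between the optical flow vector and the detector's vector (vector subtraction), and it is added to the previous frame's feature point positions, which are assumed accurate. Variable names are mine, not the patent's.

```python
import numpy as np

def stabilize_and_correct(prev_positions, flow_vectors, detector_vectors):
    """Step S130: stabilized movement vector = optical flow vector minus the
    feature point algorithm conclusion vector. Step S140: add it to the
    previous frame's feature point positions to get the corrected positions
    used as the current frame's feature points."""
    stabilized = np.asarray(flow_vectors, float) - np.asarray(detector_vectors, float)
    return np.asarray(prev_positions, float) + stabilized
```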
From the above description, it can be seen that the following technical effects are achieved:
the method comprises the steps of obtaining feature point optical flow moving vectors before two adjacent frames in an image sequence to be processed according to an optical flow algorithm, obtaining feature point algorithm conclusion moving vectors between the two adjacent frames in the image sequence to be processed according to a preset image processing algorithm, calculating errors between the feature point optical flow moving vectors and the feature point algorithm conclusion moving vectors to obtain stabilized feature point moving vectors, and achieving the purpose that the position of a feature point of a previous frame in the image sequence to be processed is added with the stabilized feature point moving vectors to obtain stabilized feature points as the position of a feature point of a current frame, so that the technical effect of correcting the position of a feature point of a target object in the video is achieved, and the technical problem that the stabilized feature point of the target object cannot be obtained in the video is solved.
As a preferable mode in this embodiment, the feature points include preset key point information extracted from each frame of image in the image sequence to be processed, which at least includes joint position information of a target object.
As a preferred embodiment, the preset key point information extracted from each frame of image in the image sequence to be processed may be a target point set according to the requirement. Preferably, when the detection target is a human body, the preset key point information at least includes joint position information of the target object.
Before the position of the feature point of the last frame in the image sequence to be processed is added to the stabilized feature point movement vector, the method further comprises the following steps: judging whether the actual length of the feature point algorithm conclusion motion vector is greater than A% of the width of the current frame picture; if it is, the stabilized feature point movement vector can be added to the position of the feature point of the previous frame in the image sequence to be processed, wherein A is preset.
That is, before the stabilized feature point movement vector is added to the position of the feature point of the previous frame, it must first be determined whether the actual length of the feature point algorithm conclusion motion vector exceeds A% of the current frame's picture width, i.e. whether it meets the preset ratio requirement relative to the frame width. If so, the stabilized feature point movement vector can be added to the position of the feature point of the previous frame in the image sequence to be processed.
It will be appreciated that A may be configured according to actual needs, in relation to the picture width of the current image frame.
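The gating check can be sketched as follows, where `a_percent` stands in for the preset A and vector lengths are measured in pixels (a hedged reading of the condition, since the patent leaves A unspecified):

```python
import numpy as np

def exceeds_width_fraction(motion_vectors, frame_width, a_percent):
    """True for each feature point algorithm conclusion motion vector whose
    actual length is greater than A% of the current frame's picture width."""
    lengths = np.linalg.norm(np.asarray(motion_vectors, float), axis=-1)
    return lengths > (a_percent / 100.0) * frame_width
```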
As a preference in this embodiment, the method further comprises: when the target object is stationary, judging whether the target moves according to the optical flow algorithm and correcting through the stabilized feature point movement vector; when the target object moves, correcting with the stabilized feature point movement vector so as to reduce errors generated when the feature points are detected.
In implementation, when the target object is stationary, the video processing method in the embodiment of the application can judge whether the object moves according to the optical flow algorithm and correct the result through the stabilized feature point movement vector.
Further, when the target object moves, the video processing method in the embodiment of the application can perform correction with the stabilized feature point movement vector so as to reduce errors generated when feature points are detected.
As a preferred embodiment, the calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector includes: and obtaining the stabilized feature point movement vector according to the vector subtraction of the feature point optical flow movement vector and the feature point algorithm conclusion movement vector, wherein the stabilization is used for representing whether the feature point of the target object of the current image frame remains stable or not.
In the implementation, the stabilized feature point movement vector is obtained according to the vector subtraction between the feature point optical flow movement vector and the feature point algorithm conclusion movement vector.
It should be noted that the stabilization is used to characterize whether the feature point positions of the target object in the current image frame remain stable. When the feature point movement vector matches the stabilized feature point movement vector, the error of the feature point movement vector obtained through feature extraction is considered to be small.
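Because the stabilized vector is the residual between the two estimates, its magnitude can serve as the stability indicator described here; the tolerance below is an illustrative value, not one given by the patent:

```python
import numpy as np

def is_stable(flow_vectors, detector_vectors, tol_pixels=2.0):
    """The stabilized vector (optical flow vector minus the feature point
    algorithm conclusion vector) is small when the current frame's feature
    points remain stable; tol_pixels is an assumed, illustrative threshold."""
    residual = np.asarray(flow_vectors, float) - np.asarray(detector_vectors, float)
    return np.linalg.norm(residual, axis=-1) <= tol_pixels
```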
As a preference in this embodiment, the acquiring, according to the optical flow algorithm, the feature point optical flow movement vector between two adjacent frames in the image sequence to be processed includes: determining the correspondence between the previous frame image and the current frame image according to the change of pixels in the image sequence to be processed in the time domain and the correlation between adjacent frames; acquiring motion information of the target object between the previous frame image and the current frame image according to the correspondence; and acquiring the feature point optical flow movement vector from the optical flow start point and the optical flow end point in the motion information.
In practice, optical flow is the instantaneous velocity, on the observation imaging plane, of the pixel motion of a spatially moving object.
Therefore, the optical flow algorithm is a method that uses the change of pixels of an image sequence in the time domain and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby calculates the motion information of an object between adjacent frames. The instantaneous rate of change of the gray level at a specific coordinate point of the two-dimensional image plane is generally defined as an optical flow vector. Specifically, in this scheme, the feature point optical flow movement vector between two adjacent frames in the image sequence to be processed is acquired.
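The correspondence described above rests on the standard brightness-constancy assumption of optical flow; for a small displacement $(dx, dy)$ over time $dt$, a first-order Taylor expansion yields the optical flow constraint equation (standard background, not quoted from the patent):

```latex
I(x, y, t) = I(x + dx,\, y + dy,\, t + dt)
\quad\Longrightarrow\quad
I_x u + I_y v + I_t = 0,
\qquad u = \frac{dx}{dt},\quad v = \frac{dy}{dt}
```

where $I_x$, $I_y$, $I_t$ are the partial derivatives of the image intensity; an optical flow algorithm solves this constraint for the velocity $(u, v)$ at each feature point.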
As a preferred embodiment of the present application, the obtaining, according to a preset image processing algorithm, a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed includes: extracting and obtaining the characteristic point position of the previous frame image and the characteristic point position of the current frame image according to a preset image processing algorithm; and obtaining the conclusion motion vector of the feature point algorithm according to the difference value between the feature point position of the previous frame image and the feature point position of the current frame image.
In the implementation, the conclusion motion vector of the feature point algorithm is obtained through the difference value between the feature point position of the previous frame image and the feature point position of the current frame image.
In another embodiment of the present application, an image detection method is also provided, including: performing image detection on the positions of the feature points of the previous frame obtained by the above video processing method, so that a stable feature point detection result for the target object in the image can be obtained.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the above method, as shown in fig. 2, the apparatus including:
a first obtaining module 210, configured to obtain feature point optical flow movement vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm;
the second obtaining module 220 is configured to obtain a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed according to a preset image processing algorithm;
the calculating module 230 is configured to calculate an error between the feature point optical flow moving vector and the feature point algorithm conclusion moving vector, so as to obtain a stabilized feature point moving vector;
and the correction module 240 is configured to add the stabilized feature point movement vector to the position of the feature point of the previous frame in the image sequence to be processed, so as to obtain the stabilized feature point as the position of the feature point of the current frame.
In the embodiment of the present application, the first obtaining module 210 takes video stream data as the image sequence to be processed and adopts an optical flow algorithm to calculate the feature point optical flow movement vector between two adjacent frames. It will be appreciated that the two adjacent frames are the previous frame and the current frame.
The optical flow algorithm assumes that when the time interval between two consecutive frames of the video is very small, the displacement of the target point is correspondingly small.
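As an illustrative sketch (the tuple representation and the function name are assumptions, not the patent's implementation), the feature point optical flow motion vector can be taken as the difference between the optical flow end point in the current frame and the optical flow start point in the previous frame:

```python
def optical_flow_motion_vector(start_point, end_point):
    """Feature point optical flow motion vector between two adjacent
    frames: flow end point (current frame) minus flow start point
    (previous frame)."""
    (sx, sy), (ex, ey) = start_point, end_point
    return (ex - sx, ey - sy)

# A point tracked from (100.0, 50.0) in the previous frame
# to (103.0, 48.0) in the current frame:
vec = optical_flow_motion_vector((100.0, 50.0), (103.0, 48.0))  # (3.0, -2.0)
```

In practice the start/end points would come from a sparse optical flow tracker; here they are supplied directly as hypothetical inputs.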
In the second obtaining module 220 of the embodiment of the present application, the feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed is obtained from the same video stream data according to the preset image processing algorithm. It can be understood that the feature points can be obtained by using an image processing algorithm in the related art, and the feature point algorithm conclusion motion vector between the two adjacent frames can then be extracted.
The calculation module 230 in the embodiment of the present application calculates the error between these two vectors to obtain the stabilized feature point motion vector. That is, a stable feature point motion vector can be obtained regardless of whether the target object is stationary or moving.
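Claim 2 describes this error as a vector subtraction; the following is a minimal sketch, assuming (x, y) tuples and a subtraction order of optical-flow vector minus algorithm-conclusion vector (the order is not fixed by the text):

```python
def stabilized_motion_vector(flow_vec, algo_vec):
    """Stabilized feature point motion vector obtained as the vector
    difference between the feature point optical flow motion vector
    and the feature point algorithm conclusion motion vector."""
    return (flow_vec[0] - algo_vec[0], flow_vec[1] - algo_vec[1])
```

For a stationary object both vectors are near zero, so the stabilized vector stays near zero; when the algorithm's detected position jitters while the optical flow is steady, the subtraction cancels the jitter.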
In the correction module 240 of the embodiment of the present application, the stabilized feature point motion vector is added to the position of the feature point of the previous frame in the image sequence to be processed, so as to obtain the stabilized feature point, which serves as the position of the feature point of the current frame. It can be appreciated that the current frame and the previous frame are adjacent frames.
It should be noted that, at this time, it may be assumed that the position of the feature point of the previous frame is accurate, or the position of the feature point of the previous frame is determined to be accurate according to a preset condition.
It will be apparent to those skilled in the art that the modules or steps of the application described above may be implemented in a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by the computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
In order to better illustrate the video processing method in the present application, fig. 3 is a schematic diagram of the implementation principle of the video processing method according to an embodiment of the present application, and the implementation process specifically includes the following steps:
firstly, acquiring feature point optical flow motion vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm, and acquiring feature point algorithm conclusion motion vectors between the two adjacent frames according to a preset image processing algorithm;
secondly, calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector;
and finally, adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point, which serves as the position of the feature point of the current frame. Preferably, when the length of the feature point algorithm conclusion motion vector is greater than 5% of the picture width, the stabilized vector is added to the feature point position of the previous frame to obtain the stabilized feature point.
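Under the 5% threshold just described, the correction step might look like the following sketch; the Euclidean length, the (x, y) tuples, and the fallback of keeping the previous position when the threshold is not exceeded are assumptions, since the text leaves the else branch unspecified:

```python
import math

def correct_feature_point(prev_pos, stabilized_vec, algo_vec, frame_width, a_percent=5.0):
    """Add the stabilized motion vector to the previous-frame feature
    position only when the algorithm conclusion motion vector is longer
    than A% of the picture width; otherwise keep the previous position."""
    if math.hypot(algo_vec[0], algo_vec[1]) > frame_width * a_percent / 100.0:
        return (prev_pos[0] + stabilized_vec[0], prev_pos[1] + stabilized_vec[1])
    return prev_pos
```

With a 1000-pixel-wide frame, an algorithm-conclusion vector of length 60 exceeds the 50-pixel threshold and the correction is applied; a vector of length 10 does not, and the previous position is kept.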
The method in the embodiment of the present application differs from the prior art, in which, when feature points are analyzed in a video, the positions obtained for the feature points in adjacent frames with similar scenes can differ greatly for algorithmic reasons, leaving the results unusable. According to the method of the present application, when the target object is stationary, whether it moves is judged according to the optical flow algorithm and correction is performed through the stabilized feature point motion vector; when the target object moves, the stabilized feature point motion vector is likewise used for correction, so that errors generated when detecting the feature points are reduced.
An embodiment of the application also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
S1, acquiring feature point optical flow motion vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm;
S2, acquiring a feature point algorithm conclusion motion vector between the two adjacent frames in the image sequence to be processed according to a preset image processing algorithm;
S3, calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector;
and S4, adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point, which serves as the position of the feature point of the current frame.
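Steps S1 to S4 can be composed into a single pass; detector and tracker outputs are stubbed with plain tuples, and every name here is hypothetical rather than the patent's implementation:

```python
def process_frame_pair(flow_start, flow_end, algo_prev, algo_curr, prev_feature_pos):
    # S1: optical flow motion vector from the flow start/end points
    flow_vec = (flow_end[0] - flow_start[0], flow_end[1] - flow_start[1])
    # S2: algorithm conclusion motion vector from the detected positions
    algo_vec = (algo_curr[0] - algo_prev[0], algo_curr[1] - algo_prev[1])
    # S3: stabilized vector via vector subtraction (subtraction order assumed)
    stab_vec = (flow_vec[0] - algo_vec[0], flow_vec[1] - algo_vec[1])
    # S4: attach the stabilized vector to the previous-frame feature position
    return (prev_feature_pos[0] + stab_vec[0], prev_feature_pos[1] + stab_vec[1])
```

For example, with a flow from (0, 0) to (4, 2), detected positions moving from (0, 0) to (3, 1), and a previous feature position of (10, 10), the stabilized vector is (1, 1) and the corrected position is (11, 11).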
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing a computer program.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, acquiring feature point optical flow motion vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm;
S2, acquiring a feature point algorithm conclusion motion vector between the two adjacent frames in the image sequence to be processed according to a preset image processing algorithm;
S3, calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector;
and S4, adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point, which serves as the position of the feature point of the current frame.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations; details are not repeated here.
The above description covers only the preferred embodiments of the present application and is not intended to limit the present application; those skilled in the art may make various modifications and variations to it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (5)

1. A method of video processing, the method comprising:
acquiring feature point optical flow motion vectors between two adjacent frames in an image sequence to be processed according to an optical flow algorithm;
determining the corresponding relation between the previous frame image and the current frame image according to the change of pixels in the image sequence to be processed in the time domain and the correlation between adjacent frames;
acquiring motion information of a target object between a previous frame image and a current frame image according to the corresponding relation;
acquiring the feature point optical flow motion vector according to the optical flow starting point and the optical flow end point in the motion information;
according to a preset image processing algorithm, obtaining a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed;
extracting and obtaining the characteristic point position of the previous frame image and the characteristic point position of the current frame image according to a preset image processing algorithm;
obtaining a conclusion motion vector of the feature point algorithm according to the difference value between the feature point position of the previous frame image and the feature point position of the current frame image;
calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector, and obtaining a stabilized feature point motion vector through vector subtraction;
adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain a stabilized feature point serving as the position of the feature point of the current frame;
the feature points include: preset key point information extracted from each frame of image in the image sequence to be processed, wherein the preset key point information at least comprises joint position information of a target object,
before adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed, the method further comprises:
judging whether the actual length of the feature point algorithm conclusion motion vector is greater than A% of the current frame picture width;
if the actual length of the feature point algorithm conclusion motion vector is greater than A% of the current frame picture width, the stabilized feature point motion vector can be added at the position of the feature point of the previous frame in the image sequence to be processed, wherein A is preset,
the method further comprises the steps of:
when the target object is stationary, judging whether the target object moves according to the optical flow algorithm, and performing correction through the stabilized feature point motion vector;
when the target object moves, performing correction using the stabilized feature point motion vector, so that errors generated when detecting the feature points are reduced.
2. The method of claim 1, wherein calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector to obtain a stabilized feature point motion vector comprises:
and obtaining the stabilized feature point motion vector according to the vector subtraction of the feature point optical flow motion vector and the feature point algorithm conclusion motion vector, wherein the stabilization represents whether the feature point of the target object in the current image frame remains stable.
3. An image detection method, the method comprising: image detection is performed using the positions of the feature points of the previous frame obtained by the video processing method according to any one of claims 1 to 2.
4. A video processing apparatus, the apparatus comprising:
the first acquisition module is used for acquiring feature point optical flow motion vectors between two adjacent frames in the image sequence to be processed according to an optical flow algorithm;
determining the corresponding relation between the previous frame image and the current frame image according to the change of pixels in the image sequence to be processed in the time domain and the correlation between adjacent frames;
acquiring motion information of a target object between a previous frame image and a current frame image according to the corresponding relation;
acquiring the feature point optical flow motion vector according to the optical flow starting point and the optical flow end point in the motion information;
the second acquisition module is used for acquiring a feature point algorithm conclusion motion vector between two adjacent frames in the image sequence to be processed according to a preset image processing algorithm;
extracting and obtaining the characteristic point position of the previous frame image and the characteristic point position of the current frame image according to a preset image processing algorithm;
obtaining a conclusion motion vector of the feature point algorithm according to the difference value between the feature point position of the previous frame image and the feature point position of the current frame image;
the calculating module is used for calculating the error between the feature point optical flow motion vector and the feature point algorithm conclusion motion vector, and obtaining a stabilized feature point motion vector through vector subtraction;
the correction module is used for adding the stabilized feature point motion vector to the position of the feature point of the previous frame in the image sequence to be processed to obtain the stabilized feature point, which serves as the position of the feature point of the current frame;
before the stabilized feature point motion vector is added to the position of the feature point of the previous frame in the image sequence to be processed, the apparatus is further configured for:
judging whether the actual length of the feature point algorithm conclusion motion vector is greater than A% of the current frame picture width;
if the actual length of the feature point algorithm conclusion motion vector is greater than A% of the current frame picture width, the stabilized feature point motion vector can be added at the position of the feature point of the previous frame in the image sequence to be processed, wherein A is preset,
the apparatus is further configured for:
when the target object is stationary, judging whether the target object moves according to the optical flow algorithm, and performing correction through the stabilized feature point motion vector;
when the target object moves, performing correction using the stabilized feature point motion vector, so that errors generated when detecting the feature points are reduced.
5. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, and the processor is arranged to run the computer program to perform the method of any one of claims 1 to 2.
CN202211169899.0A 2022-09-23 2022-09-23 Video processing method, image detection method and device Active CN115511919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211169899.0A CN115511919B (en) 2022-09-23 2022-09-23 Video processing method, image detection method and device

Publications (2)

Publication Number Publication Date
CN115511919A CN115511919A (en) 2022-12-23
CN115511919B true CN115511919B (en) 2023-09-19

Family

ID=84507011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211169899.0A Active CN115511919B (en) 2022-09-23 2022-09-23 Video processing method, image detection method and device

Country Status (1)

Country Link
CN (1) CN115511919B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930609A (en) * 2010-08-24 2010-12-29 东软集团股份有限公司 Approximate target object detecting method and device
CN108830286A (en) * 2018-03-30 2018-11-16 西安爱生技术集团公司 A kind of reconnaissance UAV moving-target detects automatically and tracking
CN109523502A (en) * 2018-08-28 2019-03-26 顺丰科技有限公司 Loading hatch condition detection method, device, equipment and its storage medium
CN113362371A (en) * 2021-05-18 2021-09-07 北京迈格威科技有限公司 Target tracking method and device, electronic equipment and storage medium
CN113870307A (en) * 2021-09-01 2021-12-31 河北汉光重工有限责任公司 Target detection method and device based on interframe information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant