
CN115035164B - Moving object identification method and device - Google Patents


Info

Publication number
CN115035164B
CN115035164B (application CN202210697761.1A)
Authority
CN
China
Prior art keywords
target
optical flow
target object
image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210697761.1A
Other languages
Chinese (zh)
Other versions
CN115035164A (en)
Inventor
王发平
张翔
姜波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Haixing Zhijia Technology Co Ltd
Original Assignee
Shenzhen Haixing Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Haixing Zhijia Technology Co Ltd
Priority to CN202210697761.1A
Publication of CN115035164A
Application granted
Publication of CN115035164B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a moving object identification method and device. The method comprises: acquiring two adjacent frames of images of a target scene, the target scene comprising a target object; extracting the pixel data corresponding to each of the two adjacent frames of images; inputting the pixel data into an optical flow model and calculating the optical flow energy fields corresponding to different weight parameters; classifying the optical flow energy fields corresponding to the different weight parameters and determining, based on the classification result, the target weight parameter of the target scene in which the target object is located; and calculating the motion state of the target object based on the optical flow model and the target weight parameter. The weight parameter can thus be adjusted flexibly for different target scenes; once the target weight parameter is determined quickly and accurately, the motion state of the moving target is identified accurately and the efficiency of moving target identification is further improved.

Description

Moving object identification method and device
Technical Field
The invention relates to the field of image recognition, in particular to a moving target recognition method and device.
Background
The detection of moving targets plays an important role in fields such as video analysis, image processing, micro-nano manipulation and medical image analysis. The optical flow method offers high precision and rich motion and scene information, and is widely applied in scenes such as forward-scene and obstacle detection for driverless vehicles, identification, segmentation, tracking, navigation and shape recovery. Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observation imaging plane. The optical flow method uses the change of pixels in an image sequence over time and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby calculates the motion information of an object between adjacent frames.
In general, optical flow calculation methods can be classified into gradient-based methods (represented by HS (Horn-Schunck), LK (Lucas-Kanade) and their derivative methods), matching-based methods, energy-based methods, phase-based methods and neurodynamic methods. As variational methods and partial differential equation theory have gradually matured, high-precision variational moving object recognition based on HS and its derivative algorithms has been widely studied. However, these methods are strongly influenced by the weight coefficient of the smoothing term, and adjusting that coefficient is very difficult and requires repeated manual experiments, so moving object recognition efficiency is low.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the low moving object recognition efficiency caused by the difficult weight-coefficient adjustment process in prior-art high-precision variational moving object recognition methods based on HS and its derivative algorithms, and to provide a moving object recognition method and a moving object recognition device accordingly.
According to a first aspect, an embodiment of the present invention provides a moving object recognition method, including:
Acquiring two adjacent frames of images of a target scene, wherein the target scene comprises a target object;
respectively extracting pixel data corresponding to the two adjacent frames of images;
Inputting the pixel data into an optical flow model, and calculating optical flow energy fields corresponding to different weight parameters;
Classifying optical flow energy fields corresponding to different weight parameters, and determining target weight parameters of the target object in a target scene based on classification results;
And calculating the motion state of the target object based on the optical flow model and the target weight parameter.
Optionally, the classifying the optical flow energy fields corresponding to different weight parameters, and determining the target weight parameter under the target scene where the target object is located based on the classification result includes:
Clustering the optical flow energy fields corresponding to different weight parameters to obtain a rapid descent area, a turning area and a stable area;
and determining the weight parameter corresponding to the turning region clustering center as the target weight parameter of the target scene where the target object is located.
Optionally, the calculating, based on the optical flow model and the target weight parameter, a motion state of the target object includes:
Inputting the pixel data into an optical flow model, and performing pyramid filtering processing to obtain a first filtering image and a second filtering image;
pyramid hierarchical sampling is carried out on the basis of the target weight parameter, the first filtering image and the second filtering image, so that a first image and a second image are respectively obtained;
Calculating an optical flow vector result based on the target weight parameter, the first image and the second image;
and determining the motion state of the target object according to the optical flow vector result.
Optionally, the determining the motion state of the target object according to the optical flow vector result includes:
When the optical flow vector result is larger than a preset threshold value, judging that the target object is in a motion state;
And when the optical flow vector result is not greater than a preset threshold value, judging that the target object is in a static state.
Optionally, the method further comprises:
When the target object is in a motion state, extracting all the optical flow vector results reaching a threshold value to obtain pixel extraction data of the target object;
And extracting data based on the pixels of the target object, and marking the motion position of the target object on an original image.
Optionally, the marking the motion position of the target object on the original image based on the pixel extraction data of the target object includes:
Acquiring the position information of the target object in an original image;
matching the position information of the original image with the pixel extraction data of the target object to obtain a matching result;
and marking the motion position of the target object from the original image based on the matching result.
Optionally, the method further comprises:
acquiring a scene image to be detected;
and when the scene image to be detected is consistent with the target scene image, determining the target weight parameter under the target scene as the target weight parameter under the scene to be detected.
According to a second aspect, an embodiment of the present invention provides a moving object recognition apparatus, the apparatus including:
the acquisition module is used for acquiring two adjacent frames of images of a target scene, wherein the target scene comprises a target object;
the extraction module is used for respectively extracting pixel data corresponding to the two adjacent frames of images;
the first calculation module is used for inputting the pixel data into an optical flow model and calculating optical flow energy fields corresponding to different weight parameters;
the second calculation module is used for classifying the optical flow energy fields corresponding to the different weight parameters, and determining the target weight parameters of the target object in the target scene based on the classification result;
And the third calculation module is used for calculating the motion state of the target object based on the optical flow model and the target weight parameter.
According to a third aspect, an embodiment of the present invention provides an electronic device, including:
The system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the method in the first aspect or any optional implementation manner of the first aspect.
According to a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect, or any one of the alternative embodiments of the first aspect.
The technical scheme of the invention has the following advantages:
According to the moving target identification method and device, two adjacent frames of images of a target scene are acquired, the target scene comprising a target object; pixel data corresponding to each of the two adjacent frames of images are extracted; the pixel data are input into an optical flow model, and the optical flow energy fields corresponding to different weight parameters are calculated; the optical flow energy fields corresponding to the different weight parameters are classified, and the target weight parameter of the target scene in which the target object is located is determined based on the classification result; and the motion state of the target object is calculated based on the optical flow model and the target weight parameter. Because the pixel data of two adjacent frames of the target scene are input into the optical flow model, the optical flow energy fields corresponding to different weight parameters are calculated and classified, and the target weight parameter is determined from the classification result, the weight parameter can be adjusted flexibly for different target scenes. The motion state of the target object is then calculated from the optical flow model and the target weight parameter, so that on the basis of quickly and accurately determining the target weight parameter, the motion state of the moving target is identified accurately and the efficiency of moving target identification is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a moving object recognition method according to an embodiment of the present invention;
FIG. 2 is a diagram showing the correspondence between the optical flow energy field and the weight parameter in the moving object recognition method according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a monitoring target object of a moving target recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram showing the calculation result of an optical flow field in the moving object recognition method according to the embodiment of the present invention;
FIG. 5 is a diagram showing the result of calculation of an optical flow vector for a moving object recognition method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of object cutting of a moving object recognition method according to an embodiment of the present invention;
FIG. 7 is a flowchart of an algorithm of a moving object recognition method according to an embodiment of the present invention;
FIG. 8 is a waveform comparison chart of a moving object recognition method according to an embodiment of the present invention;
FIG. 9 is a comparison chart of the result of an optical flow algorithm of a moving object recognition method according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a moving object recognition device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made apparent and fully in view of the accompanying drawings, in which some, but not all embodiments of the invention are shown. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, or can be communicated inside the two components, or can be connected wirelessly or in a wired way. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The optical flow method calculates the motion information of an object between adjacent frames by using the change of pixels in an image sequence over time and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame. The high-precision variational optical flow calculation methods based on HS and its derivative algorithms have the following defects:
1) Solving optical flow is a highly ill-posed problem, and using purely intensity-based constraints generally results in an underdetermined system of equations with the aperture problem;
2) The optical flow method based on HS theory has poor robustness: it cannot accurately handle scenes with illumination changes, scenes with large-gradient boundary areas, or multi-target large-displacement scenes;
3) The optical flow method based on HS theory calculates dense optical flow, so its efficiency is generally not high;
4) The optical flow method based on HS theory is strongly influenced by the weight parameter of the smoothing term, which is difficult to adjust, requires multiple manual experiments and adapts poorly.
The optical flow variation model based on the HS principle consists of two core parts, a data term and a smoothing term. The basic model is described as follows:

E(u, v) = ∬_S [ (I_x·u + I_y·v + I_t)² + α·(|∇u|² + |∇v|²) ] dx dy    (1)

where E(u, v) is the optical flow field calculation result; S is the pixel data set corresponding to the whole image; I_x and I_y are the pixel gradient values in the x and y directions; I_t is the temporal gradient between the two adjacent frames; u and v are the optical flow vectors in the x and y directions respectively; |∇u|² and |∇v|² are the smoothness terms of u and v (minimizing them yields the second-order Laplacians ∇²u and ∇²v in the Euler-Lagrange equations); and α is the weight of the smoothing term.
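As a concrete illustration, the following minimal NumPy sketch evaluates the discrete form of the basic energy (1) for a candidate flow field; the function name, the default weight value and the use of np.gradient are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def hs_energy(I0, I1, u, v, alpha=15.0):
    """Evaluate the basic HS energy (1) for a candidate flow field (u, v).

    I0, I1: two adjacent grayscale frames as float arrays.
    alpha:  weight of the smoothing term (illustrative value).
    """
    # Spatial gradients I_x, I_y on the first frame; temporal gradient I_t.
    Iy, Ix = np.gradient(I0)
    It = I1 - I0

    # Data term: squared residual of the optical flow constraint.
    data = (Ix * u + Iy * v + It) ** 2

    # Smoothing term: squared gradient magnitudes of u and v.
    uy, ux = np.gradient(u)
    vy, vx = np.gradient(v)
    smooth = ux**2 + uy**2 + vx**2 + vy**2

    # Discrete counterpart of the double integral over the pixel set S.
    return float(np.sum(data + alpha * smooth))
```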
In view of the above problems, embodiments of the present invention provide a moving object recognition method for solving the shortcomings in the above-described variational optical flow calculation method, and relatively fast, robust and high-precision optical flow calculation is realized by improving data items and smoothing items in a model.
As shown in fig. 1, the moving object identification method specifically includes the following steps:
step S101: and acquiring two adjacent frames of images of a target scene, wherein the target scene comprises a target object.
Specifically, in practical application, the image of the target scene may be captured by a camera, but the practical situation is not limited thereto: any image acquisition device may capture the images of the target scene and the target object. Separating the target scene and the target object in the images lays the foundation for subsequently identifying the motion state of the target object accurately.
The moving target identification method provided by the embodiment of the invention can be applied to scenes such as traffic roads and the like, and provides theoretical and image technical support for acquiring the moving state of the target object.
Step S102: and respectively extracting pixel data corresponding to two adjacent frames of images.
Step S103: and inputting the pixel data into an optical flow model, and calculating optical flow energy fields corresponding to different weight parameters.
Specifically, in practical application, the embodiment of the invention improves the basic model of the optical flow variation model. The improved TV-L1 optical flow variation model (namely the optical flow model) is:

E(u, v) = ∫_Ω g_t * |I_1(x + h(x)) − I_0(x)| dx + λ·∫_Ω (|∇u| + |∇v|) dx    (2)

where E(u, v) is the optical flow field calculation result; g_t is a filter based on trigonometric polynomial expansion; Ω represents the range of the target scene; I_0 and I_1 are the two frames before and after motion respectively; x = (p_x, p_y)^T is a pixel coordinate on the image; λ is the weight parameter of the smoothing term; |∇u| and |∇v| are the total-variation smoothness terms of the two flow components; and h(x) = [u(x), v(x)]^T is the optical flow vector in the x and y directions to be solved.
For the smoothing term in the basic model, the embodiment of the invention introduces a K-Means algorithm based on energy field classification, giving the model the ability to adaptively select the weight coefficient of the smoothing term. In practical application the K-Means algorithm based on energy field classification is used for weight estimation, but the invention is not limited thereto: changing the type or number of algorithms used to determine the optimal weight parameter also falls within the protection scope of the moving object recognition method provided by the embodiment of the invention. Using K-Means on the classified energy fields for weight estimation replaces the traditional approach of determining the weight parameter through a large number of experiments and saves a great deal of processing time: when selecting the weight parameter for an unknown scene, the weight parameter suitable for the current scene is determined rapidly by analyzing the optical flow energy fields corresponding to different weight parameters, which further improves moving object recognition efficiency on the basis of accurate recognition of the motion state.
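To make step S103 concrete, the sketch below computes an energy value of form (2) for a range of candidate weight parameters, using OpenCV's TV-L1 solver as a stand-in. This is an illustrative assumption, not the patent's own solver: it requires opencv-contrib-python, the optflow module location varies across OpenCV versions, and OpenCV's lambda parameter weights the data term, roughly the inverse role of λ in (2):

```python
import cv2
import numpy as np

def energy_for_lambda(I0, I1, lam):
    """Solve TV-L1 flow at one candidate weight and return the energy (2)
    of the result: one point on the 'optical flow energy field' curve.

    I0, I1: 8-bit grayscale frames.
    """
    tvl1 = cv2.optflow.createOptFlow_DualTVL1()
    tvl1.setLambda(lam)          # NB: OpenCV's lambda weights the data term
    flow = tvl1.calc(I0, I1, None)
    u, v = flow[..., 0], flow[..., 1]

    # L1 data term: warp I1 by the flow and take absolute residuals.
    h, w = I0.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = cv2.remap(I1, xs + u, ys + v, cv2.INTER_LINEAR)
    data = np.abs(warped.astype(np.float32) - I0.astype(np.float32))

    # Total-variation smoothing term |grad u| + |grad v|.
    uy, ux = np.gradient(u)
    vy, vx = np.gradient(v)
    smooth = np.hypot(ux, uy) + np.hypot(vx, vy)
    return float(np.sum(data + lam * smooth))

# Energy curve over candidate weight parameters (illustrative file names).
I0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
I1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
lambdas = np.linspace(0.05, 0.5, 10)
energies = [energy_for_lambda(I0, I1, l) for l in lambdas]
```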
Step S104: classifying the optical flow energy fields corresponding to the different weight parameters, and determining the target weight parameters of the target object in the target scene based on the classification result.
Specifically, in practical application, the embodiment of the invention classifies the optical flow energy fields of the common scenes corresponding to different weight parameters, determines the classification condition of the optical flow energy fields of multiple classes based on the optical flow variation model, and compares the object scene with the common scenes in the model, thereby determining the object weight parameters of the object in the object scene.
According to the embodiment of the invention, the weight parameters suitable for the current common scene are determined by comparing and matching different weight parameters with the optical flow energy fields of the multiple common scenes, and after the image data acquired by the image acquisition equipment are input into the optical flow variation model, the target weight parameters can be determined directly by comparing the target scene, the target object and the common scenes stored in the model, so that the self-adaptive adjustment of the weight parameters in the unknown scene is realized.
Step S105: and calculating the motion state of the target object based on the optical flow model and the target weight parameter.
Specifically, in practical application, the embodiment of the invention calculates the pixel data of the target object based on the optical flow variation model and the target weight parameter, so as to obtain the motion state of the target object, and display the motion condition of the target object.
By executing the above steps, the moving target recognition method provided by the embodiment of the invention acquires two adjacent frames of images of a target scene containing a target object; extracts the pixel data corresponding to each frame; inputs the pixel data into the optical flow model and calculates the optical flow energy fields corresponding to different weight parameters; classifies those energy fields and determines, from the classification result, the target weight parameter of the target scene in which the target object is located; and calculates the motion state of the target object based on the optical flow model and the target weight parameter. The weight parameter is thus adjusted flexibly for different target scenes, and on the basis of quickly and accurately determining the target weight parameter, the motion state of the moving target is identified accurately and the efficiency of moving target recognition is further improved.
Specifically, in an embodiment, the step S104 classifies the optical flow energy fields corresponding to the different weight parameters, determines the target weight parameters under the target scene where the target object is located based on the classification result, and specifically includes the following steps:
Step S201: clustering the optical flow energy fields corresponding to different weight parameters to obtain a rapid descent area, a turning area and a stable area.
Step S202: and determining the weight parameter corresponding to the turning region clustering center as the target weight parameter of the target scene where the target object is located.
Specifically, in practical application, the embodiment of the invention trains and determines the weight parameters based on the optical flow variation model, classifies optical flow energy fields with different weight parameters by adopting various common scenes, determines the target weight parameters under the target scene of the target object based on the classification result, and lays a foundation for directly comparing the scene to be detected with the target scene subsequently, thereby greatly shortening the selection processing time of the weight parameters and rapidly determining the target weight parameters.
Specifically, the K-Means algorithm based on energy field classification performs weight estimation in three steps: energy field calculation > screening of valid energy field data > K-Means adaptive selection of the weight parameter.
Specifically, for different application scenarios, in the embodiment of the present invention, four common scenarios are taken as examples, four sets of image sequences are selected, and a determination process of weight parameters of each scenario is explained, but the actual situation is not limited to this, and the actual situation can be trained and determined according to this process, and will not be described herein.
Specifically, the four groups of image sequences in the embodiment of the present invention are "Army-Group1, Mequon-Group2, Evergreen-Group3 and Baseball-Group4", and the energy field distributions of the different image sequences at different λ are shown in fig. 2. Although the image sequences differ and the energy fields vary with λ, the energy curves share the same overall trend, and the energy fields corresponding to different λ can be classified into three categories: the rapid-descent region, the turning region and the plateau region. Because the optical flow field represented by the turning region has high density and a low noise level, the energy field can be classified by the K-Means method. The specific process is as follows:
① Data normalization: a preprocessing step unifies the data of the different image sequences for convenient subsequent comparison.
② k initialized samples are selected as the initial cluster centers, where k is the number of energy field classes, namely k = 3, giving a_i = a_1, a_2, a_3, where a_i is the i-th cluster center, i = 1, 2, 3.
③ For the energy field of each λ_i, the distances to the k cluster centers are calculated, and the sample is assigned to the class of the nearest cluster center.
④ For each a_i, the cluster center is recalculated as a_i = (1/|c_i|) Σ_{x∈c_i} x, where x is the sample data of the weight parameter λ and c_i is the sample set of the i-th cluster.
Specifically, as shown in fig. 2, c_1, c_2 and c_3 respectively denote the three cluster sets of energy fields corresponding to different λ, namely the weight-parameter sample sets of the rapid-descent region, the turning region and the plateau region. The embodiment of the invention takes c_1 as the rapid-descent-region set, c_2 as the turning-region set and c_3 as the plateau-region set by way of example, but the practical situation is not limited thereto, and the correspondence between c_i and the regions of the energy field may be changed according to actual requirements.
⑤ A termination condition (a limit on the number of iterations) is set, and steps ③-④ are repeated until the rapid-descent region, the turning region and the plateau region are separated.
⑥ The cluster center of the turning region is taken as its optimal solution and output.
Through the ①-⑥ process (a minimal sketch of which is given below), the embodiment of the invention realizes adaptive selection of the weight parameter λ, and this classification method has high efficiency.
The cluster center represents the distribution of its class of data and is representative of that data. In the embodiment of the invention, the turning region is the region where the energy characteristics change fastest, so its cluster center can represent and embody the energy characteristics of the image; adopting the value of the turning-region cluster center as the weight parameter fully embodies the energy characteristics of the target scene and the target object, enabling accurate and precise identification of the motion state of the target object.
The clustering method adopted in the embodiment of the present invention refers to the related description of the clustering calculation process in the prior art, and will not be described in detail herein.
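The following sketch, built on scikit-learn's KMeans, illustrates steps ①-⑥ on a (λ, energy) curve; the normalization scheme, the middle-energy heuristic used to pick the turning region, and all names are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_target_weight(lambdas, energies):
    """Cluster the energy curve into rapid-descent, turning and plateau
    regions, and return the weight at the turning-region cluster center."""
    pts = np.column_stack([lambdas, energies]).astype(float)
    # Step 1: normalize both axes so the distances are comparable.
    lo, span = pts.min(axis=0), np.ptp(pts, axis=0) + 1e-12
    pts_n = (pts - lo) / span

    # Steps 2-5: k = 3 clusters, iterated to convergence.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pts_n)

    # Step 6: take the turning region as the cluster whose center has the
    # middle energy level (rapid descent = high, plateau = low); this
    # ordering heuristic is assumed here for illustration.
    order = np.argsort(km.cluster_centers_[:, 1])
    turning = order[1]

    # De-normalize the lambda coordinate of the turning-region center.
    return lo[0] + km.cluster_centers_[turning, 0] * span[0]

# Example with a synthetic energy curve (rapid descent, turn, plateau).
lams = np.linspace(10, 200, 20)
eng = 1e4 * np.exp(-lams / 40.0) + 50.0
print(select_target_weight(lams, eng))
```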
Specifically, in an embodiment, the step S104 specifically further includes the following steps:
Step S301: and acquiring a scene image to be detected.
Step S302: and when the scene image to be detected is consistent with the target scene image, determining the target weight parameter in the target scene as the target weight parameter in the scene to be detected.
Specifically, in practical application, after the optical flow variation model has been trained, the pixel data of the scene image to be detected are input into the optical flow variation model and first compared with the target scene images in the model. When the scene image to be detected is consistent with a target scene image, the target weight parameter of that target scene is adopted as the target weight parameter of the scene to be detected, which avoids recomputing the weight parameter and greatly improves moving target detection efficiency.
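As an illustration of this reuse step, the sketch below caches one weight per known scene and looks it up by grayscale-histogram correlation; the matching criterion, the threshold and the class itself are assumptions for illustration, not the patent's scene-comparison method:

```python
import cv2

class SceneWeightCache:
    """Reuse a previously determined target weight when the scene to be
    detected matches a known target scene."""

    def __init__(self, match_threshold=0.95):
        self.match_threshold = match_threshold   # illustrative value
        self.entries = []                        # list of (histogram, weight)

    @staticmethod
    def _hist(img):
        # 64-bin grayscale histogram as a cheap scene signature.
        h = cv2.calcHist([img], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()

    def lookup(self, scene_img):
        h = self._hist(scene_img)
        for stored, weight in self.entries:
            if cv2.compareHist(stored, h, cv2.HISTCMP_CORREL) >= self.match_threshold:
                return weight   # consistent scene: reuse its target weight
        return None             # unknown scene: run the K-Means selection first

    def store(self, scene_img, weight):
        self.entries.append((self._hist(scene_img), weight))
```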
Specifically, in an embodiment, the step S105 calculates the motion state of the target object based on the optical flow model and the target weight parameter, and specifically includes the following steps:
Step S401: and inputting the pixel data into an optical flow model, and performing pyramid filtering processing to obtain a first filtering image and a second filtering image.
Specifically, in practical application, the embodiment of the invention inputs the pixel data into the optical flow variation model and performs pyramid layering, that is, downsampling, which consists of filtering followed by resizing the image.
Compared with the traditional pyramid method that uses a Gaussian filter, the embodiment of the invention uses an optimized bilateral filter, keeping the edges of the target image and the target object sharp. Since the optimized bilateral filter structure contains a Gaussian filtering process, the advantages of the traditional Gaussian filter are fully retained while the filtering effect is further improved. In addition, to accelerate bilateral filtering, the embodiment of the invention replaces the Gaussian structure inside the bilateral filter with an O(1)-complexity trigonometric polynomial approximation, greatly improving computational efficiency and enhancing edge sharpness in target detection.
The optimization of the data term in the basic model according to the embodiment of the invention comprises: ① a filter is introduced at the outer layer of the integral term; a Gaussian filter is the usual choice, and a trigonometric polynomial expansion is adopted to replace it, reducing the filtering complexity from O(n²) to O(1) and improving the model's noise immunity and robustness to illumination changes; ② the second-order penalty function in the data term is changed to first order, so that the optical flow calculation keeps sharp edges where the gradient changes strongly.
Step S402: and pyramid hierarchical sampling is carried out on the basis of the target weight parameter, the first filtering image and the second filtering image, so that a first image and a second image are respectively obtained.
Specifically, regarding resizing, the embodiment of the invention reduces the length and width of each of the two input frames; illustratively, each may be reduced to half its original size, i.e. a fourfold reduction in pixel count (the number of downsampling layers can be set manually according to the image size, typically 3-5 layers), so as to obtain the first image at the original size and the reduced second image.
The pyramid layering algorithm lets the two frames undergo target detection starting from the lowest resolution and then be projected step by step back to the original resolution. Even when the two input images are not originally adjacent, for example a first frame and a fifth frame, the pyramid layering algorithm completes target detection on the low-resolution images and then projects to high resolution, successfully handling the large-displacement problem (i.e. frame-skipping detection).
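A minimal sketch of this construction, assuming OpenCV and illustrative filter parameters (edge-preserving bilateral filtering before each resize, per the design above):

```python
import cv2

def build_pyramid(img, levels=4, d=5, sigma_color=25.0, sigma_space=5.0):
    """Build a multi-resolution pyramid for the coarse-to-fine flow solve:
    each level is filtered edge-preservingly, then halved in width and
    height. Parameter values are illustrative, not the patent's."""
    pyramid = [img]
    for _ in range(levels - 1):
        smoothed = cv2.bilateralFilter(pyramid[-1], d, sigma_color, sigma_space)
        h, w = smoothed.shape[:2]
        pyramid.append(cv2.resize(smoothed, (w // 2, h // 2),
                                  interpolation=cv2.INTER_AREA))
    return pyramid   # pyramid[0] full resolution, pyramid[-1] the coarsest
```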
Step S403: and calculating an optical flow vector result based on the target weight coefficient, the first image and the second image.
Specifically, in practical applications, according to equation (2), the embodiment of the present invention calculates, through the target weight parameter λ, the first image, and the second image, an optical flow vector result of the target object, where the optical flow vector result includes the optical flow direction and the optical flow displacement information.
By optimizing the optical flow algorithm, the embodiment of the invention adopts a mode of replacing a Gaussian filter by a trigonometric function polynomial, thereby reducing the complexity of the algorithm; by effectively determining the target weight parameters, the accuracy of the optical flow field and the optical flow vector calculation result is ensured.
Step S404: the motion state of the target object is determined from the optical flow vector result.
Specifically, in practical application, the embodiment of the invention displays the optical flow vector result effectively on the target object, which not only determines the motion state of the target object but also predicts its motion trend to a certain extent, making the recognition more forward-looking.
Specifically, in an embodiment, the step S404 specifically includes the following steps:
step S501: and when the optical flow vector result is larger than a preset threshold value, judging that the target object is in a motion state.
Step S502: and when the optical flow vector result is not greater than the preset threshold value, judging that the target object is in a static state.
Specifically, in practical application, in order to better and quickly determine the motion state of the target object, the embodiment of the invention determines that the target object is in the motion state by setting a preset threshold value when the optical flow vector result is greater than the preset threshold value; when the optical flow vector result is not greater than the preset threshold, the target object is judged to be in a static state, and the value of the preset threshold can be set according to the actual situation, so that the use requirements of different users are met.
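The decision rule of steps S501-S502 can be sketched as follows; the threshold value and the use of the maximum flow magnitude are illustrative assumptions (any statistic of the flow field could be compared against the preset threshold):

```python
import cv2

def motion_state(flow, threshold=1.0):
    """Classify moving vs. static from an optical flow field of shape
    (H, W, 2); the threshold (pixels per frame) is an illustrative value."""
    # Magnitude = displacement, angle = motion direction of each pixel.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return ("moving" if float(mag.max()) > threshold else "static"), mag, ang
```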
Specifically, in an embodiment, after the step S501 is executed to determine that the target object is in a motion state, the method specifically further includes the following steps:
Step S601: and when the target object is in a motion state, extracting all the optical flow vector results reaching the threshold value to obtain pixel extraction data of the target object.
Step S602: and extracting data based on the pixels of the target object, and marking the motion position of the target object on the original image.
As shown in figs. 3-5, the embodiment of the invention first acquires two adjacent frames of images from the monitoring video, then inputs the two frames into the optical flow variation model to obtain the optical flow field calculation result for the target object in the target scene. As shown in fig. 4, pixels 0-639 and 640-1280 on the abscissa represent the flow fields in the x direction (i.e. the u direction) and the y direction (i.e. the v direction) respectively. From the optical flow vector result of the target object calculated according to formula (2), the motion direction and displacement at each position of the target object can be displayed accurately by magnifying the optical flow vector result graph (right-hand image in fig. 5); as fig. 5 shows, the larger the target's motion amplitude, the higher the vector value. Further, as shown in fig. 6, after determining that the target object is in a motion state, the embodiment of the invention can also determine, from the optical flow field and optical flow vector calculation results, the position of the target object within the original video image of the target scene, and segment the target object from the target scene for marker display.
According to the embodiment of the invention, through acquiring two adjacent frames of images, inputting pixel data of the images into an optical flow variation model, processing the pixel data through the target weight parameters corresponding to the target scene to respectively obtain an optical flow field calculation result and an optical flow vector calculation result, a user can visually check the movement direction and movement displacement of the target object through the optical flow vector local amplification image, the identification of the movement state of the target object (namely the moving target) is greatly improved, the movement trend of the target object can be predicted to a certain extent, and the user requirement is met.
Specifically, in an embodiment, the step S602 marks the motion position of the target object on the original image based on the pixel extraction data of the target object, and specifically includes the following steps:
Step S701: and acquiring the position information of the target object in the original image.
Step S702: and matching the position information of the original image with the pixel extraction data of the target object to obtain a matching result.
Step S703: and marking the motion position of the target object from the original image based on the matching result.
Specifically, in practical application, the position information of the target object in the original image is obtained and matched with the pixel extraction data of the target object in the optical flow variation model, and the motion position of the target object is marked from the original image based on the matching result, so that the user can conveniently and quickly determine the position of the target object in the original image while quickly judging the motion state of the target object.
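Putting steps S601-S602 and S701-S703 together, the sketch below extracts the above-threshold flow pixels and marks the target's position on the original frame; the bounding-box marker and the helper names are illustrative assumptions:

```python
import cv2
import numpy as np

def mark_moving_target(original, mag, threshold=1.0):
    """Extract pixels whose flow magnitude reaches the threshold and mark
    the moving target's position on the original image."""
    # Pixel extraction data of the target object: the above-threshold mask.
    mask = (mag > threshold).astype(np.uint8) * 255

    # Match the mask against the original image coordinates: the bounding
    # box of the extracted pixels localizes the target in the frame.
    pts = cv2.findNonZero(mask)
    if pts is None:
        return original              # static scene: nothing to mark
    x, y, w, h = cv2.boundingRect(pts)

    marked = original.copy()
    cv2.rectangle(marked, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return marked
```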
By executing the above steps, the moving object recognition method provided by the embodiment of the invention inputs the pixel data of two adjacent frames of the target scene into the optical flow model, calculates the optical flow energy fields corresponding to different weight parameters and their classification, and determines the target weight parameter of the target scene in which the target object is located from the classification result. The weight parameter is thus adjusted flexibly for different target scenes; the motion state of the target object is then calculated from the optical flow model and the target weight parameter, so that on the basis of quickly and accurately determining the target weight parameter, the motion state of the moving target is identified precisely and the efficiency of moving target recognition is further improved.
The moving object recognition method provided by the embodiment of the invention will be described in detail below with reference to specific application examples.
Referring to fig. 1 to 8, the embodiments of the present invention optimize the basic model (formula (1)) respectively, and specifically described as follows:
1) For the data term: ① a filter is introduced at the outer layer of the integral term; a Gaussian filter is the usual choice, and a trigonometric polynomial expansion is adopted to replace it, reducing the filtering complexity from O(n²) to O(1) and improving the model's noise immunity and robustness to illumination changes; ② the second-order penalty function in the data term is changed to first order, so that the optical flow calculation keeps sharp edges where the gradient changes strongly;
2) For the smoothing term: a K-Means algorithm based on energy field classification is introduced, giving the model the ability to adaptively select the weight coefficient of the smoothing term;
3) For the model as a whole: an optimized bilateral filtering algorithm and a pyramid layering algorithm are introduced. When the pyramid constructs the multi-resolution images, each level is filtered with the optimized bilateral filter before downsampling, so that object edges in the solved optical flow field receive high-precision optical flow estimation. In addition, the Gaussian term in the optimized bilateral filtering algorithm can be replaced directly by the trigonometric polynomial expansion of 1), realizing algorithm reuse.
The modified TV-L 1 optical flow variation model is shown in formula (2), and will not be described here.
The embodiment of the invention is realized through three core steps, as shown in fig. 7, and specifically described as follows:
1) Data item optimization
In the traditional optical flow method based on the HS model, the data term is an L2 penalty function, which nonlinearly amplifies errors in the constancy assumption and affects the flow field edges. To increase the robustness of the data term's flow field estimation, the invention introduces an L1 non-squared penalty function into the data term. Meanwhile, to guarantee the accuracy of the data term's flow field estimation, an optimized filter is added at the outer layer of the integral term. A Gaussian filter is the conventional choice, but HS belongs to the category of dense optical flow calculation and its efficiency is already at a disadvantage; if a conventional filter is still adopted, the convolution process increases the program's execution time and reduces real-time performance. The embodiment of the invention therefore introduces a trigonometric polynomial expansion that approaches Gaussian filtering, giving performance similar to the Gaussian filter at a designated order. The filter has the form:

g(t) = [cos(λt)]^N, t ∈ [−T, T]    (3)

where t is the current variable, t ∈ [−T, T]; λ = π/(2T), with T the dynamic range of the pixels of the current frame of the image sequence being processed; n is the order of a term of the polynomial, n = 2, 3, …, N; and N is the total order. In general, when N is sufficiently large, the following approximation holds:

[cos(λt)]^N ≈ ρ·exp(−t²/(2σ²))    (4)

where the cosine, defined on [−π/2, π/2], is formally converted into the expression of a Gaussian function; σ is the standard deviation of the Gaussian function; and ρ is a scale factor generated during the approximation. Fig. 8 compares the curve of the trigonometric polynomial with the curve of the Gaussian function: under the given conditions, as the order N increases the dashed waveform approaches the solid waveform, i.e. the trigonometric polynomial approaches the Gaussian curve. Experiments show that when N = 4 the Gaussian curve is already approximated well, so the substitution can be made. Since a two-dimensional Gaussian function is implemented with a double loop during encoding, its complexity is O(n²); after replacement by the trigonometric polynomial, the multiplications become additions and the complexity drops to O(1). Furthermore, in the encoding implementation, additions and multiplications over the processed image pixels are accelerated with an instruction-set method such as the SSE instruction set, using 128-bit registers to read 8 pixel values at a time, perform the addition or multiplication, and store the result, improving execution efficiency.
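The convergence claimed above can be checked numerically; this sketch, which pairs σ to λ via the limit [cos(λt/√N)]^N → exp(−(λt)²/2) (an assumption made for illustration), is not the patent's implementation:

```python
import numpy as np

T = 128.0                      # assumed dynamic range of pixel values
lam = np.pi / (2.0 * T)
t = np.linspace(-T, T, 513)

for N in (2, 4, 8):
    # Raised-cosine (trigonometric polynomial) kernel of order N, with the
    # argument scaled by sqrt(N) so that the limit is a fixed Gaussian.
    cosN = np.cos(lam * t / np.sqrt(N)) ** N
    gauss = np.exp(-(lam * t) ** 2 / 2.0)        # limiting Gaussian
    print(f"N = {N}: max deviation = {np.max(np.abs(cosN - gauss)):.4f}")
```

The printed deviation shrinks as N grows, mirroring the waveform comparison of fig. 8.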
2) Smooth item optimization
One key parameter involved in the smoothing term, namely λ in equation (2), represents the weight parameter of the smoothing term, and the selection of the parameter affects the estimation accuracy of the whole flow field. Illustratively, when λ takes 50, 100, and 180, there is a difference in optical flow estimation fields in the u and v directions, and therefore, the influence of the weight parameter on the accuracy is large. The traditional method is to determine the value of the weight parameter in the model through priori knowledge or a large number of experiments, and the method is not applicable to the unknown scene and wastes a large amount of time, so the embodiment of the invention introduces a K-Means algorithm based on energy field classification for weight estimation to solve the problems.
3) Flow field edge and large displacement optimization
High-precision estimation of the variational optical flow field for small displacements at a single scale can be achieved through the data item optimization of 1) and the smoothing item optimization of 2). In practical application, however, if the measured object moves by a large amount at some moment or the illumination changes severely, those two optimizations alone can neither obtain an accurate estimate of the optical flow field nor keep the object edges sharp where the gradient changes, so the optical flow field values cannot be calculated accurately. The intervention of a flow field edge and large displacement optimization method is therefore required.
Algorithms based on pyramid layering have proven very effective for large-displacement calculation, but during pyramid construction the image must be filtered before each downsampling between two layers to prevent aliasing in the downsampled image. Gaussian filtering works well in general algorithms; however, optical flow field estimation is sensitive to gradient changes, and small changes can blur the optical flow field boundary and reduce the robustness of the calculation. The embodiment of the invention therefore introduces a filter with stronger edge-preserving capability as the processing tool for the images between pyramid layers, taking bilateral filtering as an example. Directly using the traditional bilateral filtering algorithm yields higher-precision edges but is extremely time-consuming; benefiting from the idea in "1) data item optimization" of approaching Gaussian filtering with a trigonometric polynomial expansion, that operation is introduced into the traditional bilateral filter, nested with the pyramid layering algorithm, and implemented in code together with the instruction-set method, forming a complete optimization algorithm.
According to the embodiment of the invention, firstly, two adjacent frames of images of a target scene are acquired and respectively corresponding pixel data are extracted, and downsampling processing is carried out on the two frames of images based on a pyramid layering algorithm, so that the optimization process of flow field edge and large displacement processing is realized.
The method comprises the steps of inputting pixel data into an optical flow variation model, carrying out pyramid layering processing, calculating based on a target scene and target objects to obtain optical flow energy field results corresponding to different weight parameters, classifying the optical flow energy field results, determining target weight parameters of the target objects in the target scene, calculating based on the optical flow model and the target weight parameters to obtain the motion state of the target objects, and displaying the optical flow vector results of the target objects on an original image.
By completing all the steps, the optimal estimation based on the TV-L 1 self-adaptive optical flow field can be realized, and for verifying the correctness of the result, the embodiment of the invention also carries out simulation verification on the process and gives a simulation result as shown in fig. 9, wherein the 1 st column is an optical flow field image obtained based on theoretical calculation; column 2 is an optical flow field image calculated based on the LK optical flow method; column 3 is an optical flow field image calculated based on the HS optical flow method; column 4 is the optical flow field image calculated based on the block optical flow method; column 5 is an optical flow field image calculated by the moving object recognition method according to the embodiment of the present invention.
Illustratively, the embodiment of the invention performs the simulation using standard test data from the Middlebury optical flow database to obtain the simulation results.
As can be seen from fig. 9, when processing the two frame pairs "Urban3" and "Venus" in the dataset, the LK optical flow method gives a sparse estimation result compared with the theoretical optical flow field and describes the optical flow at object gradient edges poorly; the HS optical flow method is more precise than LK, but its flow field estimation in adjacent areas is not uniform and is affected by the manual selection of the weight coefficient; the boundary optical flow of the block-based optical flow method is clear and suits flow field estimation for high-contrast image sequences, but erroneous optical flow estimates occur easily, making the flow field estimation more chaotic. The TV-L1-based adaptive optical flow optimization estimation method provided by the embodiment of the invention realizes adaptive selection of the weight coefficient, the obtained flow field estimate is clear and uniform at the edges, and on "Urban3", which contains more occlusion, its detection performance is better than the other three algorithms. The method provided by the embodiment of the invention is comprehensive and more universal, while the other three methods each have their own prominent application fields; for example, the block-based method, although it estimates more erroneous optical flows, performs very well at edges with large gradient changes.
To quantify the accuracy with which the proposed method estimates the optical flow field, the optical flow field estimates of fig. 9 are measured using average angle error (AAE), average end-point error (AEPE) and average standard deviation (ASTD), as shown in table 1. Under the three error evaluation criteria, the errors of the proposed method are not all optimal on every single index, but the overall performance is optimal and all three errors stay at low levels; the calculation times are 0.14 s and 0.27 s respectively, an efficiency improvement of 1-2.5 times over the other three methods.
Table 1 Comparison of algorithm error results

Urban3    LK        HS        SD        Method of the invention
AAE       12.0418   15.1384   29.4743   14.3524
AEPE      1.7547    1.5200    4.7190    0.7857
ASTD      28.9766   31.7956   43.0622   31.0202

Venus     LK        HS        SD        Method of the invention
AAE       12.7544   15.4528   12.5920   12.5015
AEPE      0.7837    1.0784    1.1132    0.6928
ASTD      28.4428   27.6540   29.0916   26.4893
In summary, the chief advantage of the moving object recognition method provided by the embodiment of the invention is that it proposes performance optimizations at the level of the basic model, providing researchers and engineers with a set of solutions for variational optical flow calculation that achieve high precision, low complexity for dense optical flow fields, strong noise immunity, optical flow field boundary preservation and adaptive weight parameter selection. The performance of the algorithm is verified through simulation experiments, providing an effective reference for research, development and application personnel.
An embodiment of the present invention provides a moving object recognition apparatus, as shown in fig. 10, including:
The acquiring module 101 is configured to acquire two adjacent frames of images of a target scene, where the target scene includes a target object. For details, refer to the related description of step S101 in the above method embodiment, and no further description is given here.
The extracting module 102 is configured to extract pixel data corresponding to two adjacent frames of images respectively. For details, refer to the related description of step S102 in the above method embodiment, and no further description is given here.
The first calculation module 103 is configured to input the pixel data into the optical flow model, and calculate optical flow energy fields corresponding to different weight parameters. For details, see the description of step S103 in the above method embodiment, and the details are not repeated here.
The second calculation module 104 is configured to classify the optical flow energy fields corresponding to the different weight parameters, and determine the target weight parameter under the target scene where the target object is located based on the classification result. For details, refer to the related description of step S104 in the above method embodiment, and no further description is given here.
The third calculation module 105 is configured to calculate, based on the optical flow model and the target weight parameter, a motion state of the target object. For details, see the description of step S105 in the above method embodiment, and the details are not repeated here.
For further description of the moving object recognition device, refer to the related description of the moving object recognition method embodiment, and the description thereof will not be repeated here.
Through the cooperation of the above components, the moving object recognition device provided by the embodiment of the invention inputs the pixel data of two adjacent frames of images of the target scene into the optical flow model, calculates the optical flow energy fields corresponding to different weight parameters, classifies these energy fields, and determines the target weight parameter for the target object in the target scene based on the classification result, so that the weight parameter can be flexibly adjusted for different target scenes. The motion state of the target object is then calculated based on the optical flow model and the target weight parameter. On the basis of quickly and accurately determining the target weight parameter, the device thus not only realizes accurate recognition of the motion state of the moving object but also further improves the recognition efficiency.
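To make the division of labor among the modules in fig. 10 concrete, the following minimal Python sketch wires the first, second, and third calculation modules together. It is a sketch under stated assumptions, not the patented implementation: the class and parameter names, the candidate weight grid LAMBDAS, and the maximum-curvature shortcut standing in for the clustering of the energy curve into rapid-descent, turning, and stable regions are all illustrative.

```python
import numpy as np

# Hypothetical grid of candidate weight parameters to sweep; a real
# deployment would choose the range for the target scene.
LAMBDAS = np.logspace(-3, 1, 30)

class MovingObjectRecognizer:
    """Illustrative skeleton of the calculation modules in fig. 10."""

    def __init__(self, optical_flow_model):
        # optical_flow_model: callable (I0, I1, lam) -> (u, v, energy)
        self.model = optical_flow_model

    def target_weight(self, I0, I1):
        # First calculation module: optical flow energy for each candidate weight.
        energies = np.array([self.model(I0, I1, lam)[2] for lam in LAMBDAS])
        # Second calculation module: locate the turning region of the energy
        # curve; here its cluster center is approximated by the point of
        # maximum curvature.
        curvature = np.abs(np.gradient(np.gradient(energies)))
        return float(LAMBDAS[int(np.argmax(curvature))])

    def motion_state(self, I0, I1, threshold=1.0):
        # Third calculation module: solve the flow with the target weight
        # and threshold the flow magnitude to decide moving vs. static.
        lam = self.target_weight(I0, I1)
        u, v, _ = self.model(I0, I1, lam)
        moving_mask = np.hypot(u, v) > threshold
        return bool(moving_mask.any()), moving_mask
```

A caller would pass in any TV-L1 solver with this signature; the returned mask is the per-pixel motion evidence used later for marking.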
An embodiment of the present invention provides an electronic device. As shown in fig. 11, the electronic device includes a processor 901 and a memory 902 that are communicatively connected to each other; the processor 901 and the memory 902 may be connected by a bus or in another manner, and connection by a bus is taken as an example in fig. 11.
The processor 901 may be a central processing unit (CPU). The processor 901 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 902, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory 902, the processor 901 executes various functional applications and performs data processing, i.e., implements the methods in the above-described method embodiments.
The memory 902 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created by the processor 901, and the like. In addition, the memory 902 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 optionally includes memory remotely located relative to the processor 901, and such remote memory may be connected to the processor 901 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 902 that, when executed by the processor 901, perform the methods of the method embodiments described above.
The specific details of the electronic device may be correspondingly understood by referring to the corresponding related descriptions and effects in the above method embodiments, which are not repeated herein.
It will be appreciated by those skilled in the art that all or part of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the steps of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memory.
It is apparent that the above embodiments are given by way of illustration only and are not limiting. On the basis of the above description, those of ordinary skill in the art may make other variations or modifications in different forms. It is neither necessary nor possible to exhaustively list all embodiments here, and the obvious variations or modifications derived therefrom still fall within the protection scope of the invention.

Claims (5)

1. A moving object recognition method, characterized by comprising:
Acquiring two adjacent frames of images of a target scene, wherein the target scene comprises a target object;
respectively extracting pixel data corresponding to the two adjacent frames of images;
inputting the pixel data into an optical flow model, and calculating optical flow energy fields corresponding to different weight parameters; the optical flow model is as follows:
$$E(u,v)=\int_{\Omega}\Big\{\big|\,G_t * \big(I_1(x+h(x))-I_0(x)\big)\,\big| + \lambda\big(|\nabla^2 u|+|\nabla^2 v|\big)\Big\}\,dx$$
Wherein E(u,v) is the optical flow field calculation result; G_t is a filter based on trigonometric polynomial expansion, and * denotes convolution; Ω represents the range of the target scene; I_0 and I_1 are the images of the two frames before and after the movement, respectively; x = (p_x, p_y)^T is a pixel coordinate on the image; λ is the weight parameter of the smoothing term; ∇²u, namely ∂²u/∂p_x² + ∂²u/∂p_y², represents the second-order Laplacian of u, and ∇²v likewise; h(x) = [u(x), v(x)]^T is the optical flow vector in the x and y directions to be solved;
Classifying optical flow energy fields corresponding to different weight parameters, and determining target weight parameters of the target object in a target scene based on classification results;
calculating to obtain the motion state of the target object based on the optical flow model and the target weight parameter;
The classifying the optical flow energy fields corresponding to the different weight parameters and determining the target weight parameter under the target scene where the target object is located based on the classification result comprises:
Clustering the optical flow energy fields corresponding to the different weight parameters to obtain a rapid-descent region, a turning region, and a stable region;
Determining the weight parameter corresponding to the cluster center of the turning region as the target weight parameter of the target scene where the target object is located;
The calculating the motion state of the target object based on the optical flow model and the target weight parameter comprises:
Inputting the pixel data into the optical flow model and performing pyramid filtering to obtain a first filtered image and a second filtered image;
Performing pyramid hierarchical sampling based on the target weight parameter, the first filtered image, and the second filtered image to obtain a first image and a second image, respectively;
Calculating an optical flow vector result based on the target weight parameter, the first image, and the second image;
Determining a motion state of the target object from the optical flow vector result;
the determining the motion state of the target object according to the optical flow vector result includes:
When the optical flow vector result is greater than a preset threshold value, determining that the target object is in a motion state;
When the optical flow vector result is not greater than the preset threshold value, determining that the target object is in a static state;
The method further comprises the steps of:
When the target object is in a motion state, extracting all the optical flow vector results reaching the threshold value to obtain pixel extraction data of the target object;
Marking the motion position of the target object on an original image based on the pixel extraction data of the target object;
The marking the motion position of the target object on the original image based on the pixel extraction data of the target object comprises:
Acquiring the position information of the target object in an original image;
matching the position information of the original image with the pixel extraction data of the target object to obtain a matching result;
and marking the motion position of the target object from the original image based on the matching result.
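Purely as an illustration of the thresholding and marking steps recited in claim 1 above, the following minimal sketch draws a rectangle around the pixels whose flow magnitude reaches the threshold. The function name, the RGB image layout, and the bounding-rectangle strategy are assumptions for illustration, not claim limitations.

```python
import numpy as np

def mark_motion(original, u, v, threshold=1.0):
    """Mark the motion position of the moving object on the original image.

    original: H x W x 3 uint8 image; u, v: H x W optical flow components."""
    mask = np.hypot(u, v) > threshold      # pixel extraction data
    if not mask.any():                     # static: nothing to mark
        return original
    ys, xs = np.nonzero(mask)              # match extracted pixels to positions
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    marked = original.copy()
    # Draw a red rectangle around the moving region.
    marked[top, left:right + 1] = (255, 0, 0)
    marked[bottom, left:right + 1] = (255, 0, 0)
    marked[top:bottom + 1, left] = (255, 0, 0)
    marked[top:bottom + 1, right] = (255, 0, 0)
    return marked
```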
2. The method according to claim 1, wherein the method further comprises:
acquiring a scene image to be detected;
and when the scene image to be detected is consistent with the target scene image, determining the target weight parameter under the target scene as the target weight parameter under the scene to be detected.
3. A moving object recognition apparatus, characterized by comprising:
the acquisition module is used for acquiring two adjacent frames of images of a target scene, wherein the target scene comprises a target object;
the extraction module is used for respectively extracting pixel data corresponding to the two adjacent frames of images;
The first calculation module is used for inputting the pixel data into an optical flow model and calculating optical flow energy fields corresponding to different weight parameters; the optical flow model is as follows:
$$E(u,v)=\int_{\Omega}\Big\{\big|\,G_t * \big(I_1(x+h(x))-I_0(x)\big)\,\big| + \lambda\big(|\nabla^2 u|+|\nabla^2 v|\big)\Big\}\,dx$$
Wherein E(u,v) is the optical flow field calculation result; G_t is a filter based on trigonometric polynomial expansion, and * denotes convolution; Ω represents the range of the target scene; I_0 and I_1 are the images of the two frames before and after the movement, respectively; x = (p_x, p_y)^T is a pixel coordinate on the image; λ is the weight parameter of the smoothing term; ∇²u, namely ∂²u/∂p_x² + ∂²u/∂p_y², represents the second-order Laplacian of u, and ∇²v likewise; h(x) = [u(x), v(x)]^T is the optical flow vector in the x and y directions to be solved;
The second calculation module is used for classifying the optical flow energy fields corresponding to the different weight parameters and determining the target weight parameter of the target object in the target scene based on the classification result; the classifying the optical flow energy fields corresponding to the different weight parameters and determining the target weight parameter under the target scene where the target object is located based on the classification result comprises: clustering the optical flow energy fields corresponding to the different weight parameters to obtain a rapid-descent region, a turning region, and a stable region; and determining the weight parameter corresponding to the cluster center of the turning region as the target weight parameter of the target scene where the target object is located;
The third calculation module is used for calculating the motion state of the target object based on the optical flow model and the target weight parameter; the calculating the motion state of the target object based on the optical flow model and the target weight parameter comprises: inputting the pixel data into the optical flow model and performing pyramid filtering to obtain a first filtered image and a second filtered image; performing pyramid hierarchical sampling based on the target weight parameter, the first filtered image, and the second filtered image to obtain a first image and a second image, respectively; calculating an optical flow vector result based on the target weight parameter, the first image, and the second image; and determining the motion state of the target object from the optical flow vector result; the determining the motion state of the target object according to the optical flow vector result comprises: when the optical flow vector result is greater than a preset threshold value, determining that the target object is in a motion state; when the optical flow vector result is not greater than the preset threshold value, determining that the target object is in a static state; when the target object is in a motion state, extracting all the optical flow vector results reaching the threshold value to obtain pixel extraction data of the target object; and marking the motion position of the target object on an original image based on the pixel extraction data of the target object; the marking the motion position of the target object on the original image based on the pixel extraction data of the target object comprises: acquiring the position information of the target object in the original image; matching the position information of the original image with the pixel extraction data of the target object to obtain a matching result; and marking the motion position of the target object from the original image based on the matching result.
4. An electronic device, comprising:
A memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of any of claims 1-2.
5. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions, the computer instructions for causing a computer to perform the method of any one of claims 1-2.
CN202210697761.1A 2022-06-20 2022-06-20 Moving object identification method and device Active CN115035164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210697761.1A CN115035164B (en) 2022-06-20 2022-06-20 Moving object identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210697761.1A CN115035164B (en) 2022-06-20 2022-06-20 Moving object identification method and device

Publications (2)

Publication Number Publication Date
CN115035164A CN115035164A (en) 2022-09-09
CN115035164B true CN115035164B (en) 2024-11-05

Family

ID=83125668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210697761.1A Active CN115035164B (en) 2022-06-20 2022-06-20 Moving object identification method and device

Country Status (1)

Country Link
CN (1) CN115035164B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880784B (en) * 2023-02-22 2023-05-09 武汉商学院 Scenic spot multi-person action behavior monitoring method based on artificial intelligence
CN118505982B (en) * 2024-07-16 2024-10-18 广东海洋大学 Target detection method based on finite time gradient projection nerve dynamics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104869387A (en) * 2015-04-19 2015-08-26 中国传媒大学 Method for acquiring binocular image maximum parallax based on optical flow method
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028263B (en) * 2019-10-29 2023-05-05 福建师范大学 Moving object segmentation method and system based on optical flow color clustering
CN112132871B (en) * 2020-08-05 2022-12-06 天津(滨海)人工智能军民融合创新中心 Visual feature point tracking method and device based on feature optical flow information, storage medium and terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104869387A (en) * 2015-04-19 2015-08-26 中国传媒大学 Method for acquiring binocular image maximum parallax based on optical flow method
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus

Also Published As

Publication number Publication date
CN115035164A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
Uhrig et al. Sparsity Invariant CNNs
CN110930387A (en) Fabric defect detection method based on depth separable convolutional neural network
CN107633226B (en) Human body motion tracking feature processing method
CN112288758B (en) Infrared and visible light image registration method for power equipment
CN115035164B (en) Moving object identification method and device
CN109029363A (en) A kind of target ranging method based on deep learning
CN109583483A (en) A kind of object detection method and system based on convolutional neural networks
CN109740588A (en) The X-ray picture contraband localization method reassigned based on the response of Weakly supervised and depth
CN109214422B (en) Parking data repairing method, device, equipment and storage medium based on DCGAN
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
CN104574393A (en) Three-dimensional pavement crack image generation system and method
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
CN110704652A (en) Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN101630407A (en) Method for positioning forged region based on two view geometry and image division
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN113313047A (en) Lane line detection method and system based on lane structure prior
CN106919950A (en) Probability density weights the brain MR image segmentation of geodesic distance
CN110598711A (en) Target segmentation method combined with classification task
CN115457130A (en) Electric vehicle charging port detection and positioning method based on depth key point regression
CN108447084B (en) Stereo matching compensation method based on ORB characteristics
CN114170188A (en) Target counting method and system for overlook image and storage medium
CN113378864A (en) Method, device and equipment for determining anchor frame parameters and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant