
CN106296730A - Human Motion Tracking System - Google Patents


Info

Publication number
CN106296730A
CN106296730A
Authority
CN
China
Prior art keywords
image
tracking
motion
tracking object
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610612701.XA
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201610612701.XA
Publication of CN106296730A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10016 — Video; Image sequence

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a human motion tracking system comprising a human motion video acquisition device, an image preprocessing device, a shooting adjustment device and a tracking device. The tracking device processes the video image to obtain the current frame position of the tracked object, predicts the direction of the object's motion, determines a region of interest according to the current frame position, and tracks the object within that region of interest. The shooting adjustment device judges whether the current frame position of the tracked object lies in the central area of the current picture; if so, the camera is not adjusted, and if not, the camera is adjusted according to the direction of the object's motion. The invention achieves a smooth tracking effect, requires no auxiliary positioning device, is not limited by the tracking angle, can track a human body in all directions, and its robustness is unaffected by external conditions.

Description

Human motion tracking system
Technical Field
The invention relates to the technical field of human body tracking, in particular to a human body motion tracking system.
Background
In the related art, human motion tracking means detecting the position of a human body in acquired images and controlling the motion of the corresponding camera so that the tracked object always remains in the middle of the picture. Infrared detection and ultrasonic detection are the techniques commonly used. In infrared detection, an infrared emitting device is worn by the tracked object, and the camera determines its shooting position from the infrared signal received by an infrared receiving device. However, infrared tracking has poor anti-interference performance and is easily affected by thermal light sources in the environment such as visible light and fluorescent lamps; it also requires an auxiliary device, and the infrared signal is lost when the tracked object turns, so the shooting direction cannot be determined, which degrades positioning accuracy and real-time performance. Ultrasonic detection mounts several ultrasonic transmitting and receiving devices of a specific frequency near the tracked object, determines the position of the tracked object from changes in the reflected waves received by the ultrasonic receivers, and thereby determines the camera's shooting direction. Because the emission angle of ultrasound is relatively large, the azimuth resolution of the shot is low and the target cannot be tracked through 360 degrees. In addition, this technique also requires auxiliary equipment, and long-term exposure to ultrasonic radiation is harmful to human health.
Disclosure of Invention
To solve the above problems, the present invention aims to provide a human motion tracking system.
The purpose of the invention is realized by adopting the following technical scheme:
a human motion tracking system comprises a human motion video acquisition device, an image preprocessing device, a shooting adjusting device and a tracking device, wherein the human motion video acquisition device is used for acquiring a video image containing a human body; the image preprocessing device is used for preprocessing the acquired video image and eliminating the influence of video jitter; the tracking device processes the video image to obtain the current frame position of the tracking object, predicts the motion direction of the tracking object, determines the region of interest according to the current frame position of the tracking object, and tracks the tracking object in the region of interest; the shooting adjusting device is used for judging whether the current frame position of the tracking object is in the central area of the current picture, if so, the camera is not adjusted, and if not, the camera is adjusted according to the motion direction of the tracking object.
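To make the cooperation of the four devices concrete, the following minimal Python sketch shows one way the per-frame control flow could be organised. All class, method and parameter names here are illustrative assumptions for the sketch, not terms defined by the patent:

```python
# A structural sketch only: the four collaborating devices are passed in as
# objects with assumed interfaces (read/stabilize/update/...).
class HumanMotionTrackingSystem:
    def __init__(self, capture, preprocessor, tracker, camera_controller):
        self.capture = capture            # human motion video acquisition device
        self.preprocessor = preprocessor  # image preprocessing device (de-jitter)
        self.tracker = tracker            # tracking device (particle filter)
        self.camera = camera_controller   # shooting adjustment device

    def step(self):
        frame = self.capture.read()                  # acquire a video image
        stable = self.preprocessor.stabilize(frame)  # eliminate video jitter
        # current-frame position of the tracked object + predicted direction
        pos, direction = self.tracker.update(stable)
        # adjust the camera only when the target leaves the central area
        if not self.camera.in_central_area(pos):
            self.camera.adjust(direction)
        return pos
```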
The invention has the beneficial effects that: by selecting a tracking object and adjusting the camera position in combination with direction prediction, the invention achieves a smooth tracking effect, requires no auxiliary positioning device, is not limited by the tracking angle, can track a human body in all directions, and its robustness is unaffected by external conditions, thereby solving the technical problems identified above.
Drawings
The invention is further described with reference to the drawings; the application scenarios shown in the drawings do not limit the invention in any way, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a schematic diagram of the module connection of the tracking device of the present invention.
Reference numerals:
the device comprises a human motion video acquisition device 1, an image preprocessing device 2, a shooting adjustment device 3, a tracking device 4, a region-of-interest determination module 41, a candidate motion region extraction module 42, a tracked object positioning module 43, an initialization sub-module 421, a state transition model establishing sub-module 422, an observation model establishing sub-module 423, a candidate motion region calculation sub-module 424, a position correction sub-module 425 and a resampling sub-module 426.
Detailed Description
The invention is further described in connection with the following application scenarios.
Application scenario 1
Referring to fig. 1 and 2, a human motion tracking system in a complex scene according to an embodiment of the application scene includes a human motion video acquisition device 1, an image preprocessing device 2, a shooting adjustment device 3, and a tracking device 4, where the human motion video acquisition device 1 is configured to acquire a video image including a human body; the image preprocessing device 2 is used for preprocessing the acquired video image and eliminating the influence of video jitter; the tracking device 4 processes the video image to acquire the current frame position of the tracking object, predicts the motion direction of the tracking object, determines an interested area according to the current frame position of the tracking object, and tracks the tracking object in the interested area; the shooting adjusting device 3 is used for judging whether the current frame position of the tracking object is in the central area of the current frame, if so, the camera is not adjusted, and if not, the camera is adjusted according to the motion direction of the tracking object.
Preferably, processing the video image to obtain the current frame position of the tracked object includes: processing the image to extract a candidate motion region containing a human body; acquiring a human body target within the candidate motion region; determining a tracking object from the human body target, and acquiring and recording the current frame position of the tracking object; and predicting the motion direction of the tracking object from its current frame position.
This embodiment of the invention achieves a smooth tracking effect by selecting a tracking object and adjusting the camera position in combination with direction prediction; it requires no auxiliary positioning device, is not limited by the tracking angle, can track a human body in all directions, and its robustness is unaffected by external conditions, thereby solving the technical problems identified above.
Preferably, the preprocessing of the acquired video image comprises: selecting the first frame image of the video as a reference frame and dividing the reference frame evenly into four non-overlapping regions, where W denotes the width of the image and H its height, so that each region measures 0.5W × 0.5H; the regions are numbered 1, 2, 3 and 4 clockwise from the upper left of the image. A region $A_0$ of size 0.5W × 0.5H is then selected at the centre of the next received frame and divided, in the same way, into four image sub-blocks $A_1$, $A_2$, $A_3$, $A_4$ of size 0.25W × 0.25H. $A_1$ and $A_2$ are used to estimate the local motion vector in the vertical direction, and $A_3$ and $A_4$ the local motion vector in the horizontal direction; the best matches of $A_1$, $A_2$, $A_3$, $A_4$ are searched in regions 1, 2, 3 and 4 respectively, the global motion vector of the video sequence is estimated from them, and reverse motion compensation is then performed to eliminate the influence of video jitter.
The preferred embodiment performs image stabilization on the video image, avoids the influence of video jitter on subsequent image processing, and has high preprocessing efficiency.
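As an illustration of this stabilization step, the sketch below estimates the global motion vector by block matching under the assumptions stated above. The SAD matching criterion, the exhaustive search and the averaging of the two local vectors per direction are choices of this sketch, not details fixed by the patent:

```python
import numpy as np

def sad_match(block, region):
    # Exhaustive sum-of-absolute-differences search of `block` inside
    # `region`; returns the (y, x) offset of the best match.
    bh, bw = block.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(region.shape[0] - bh + 1):
        for x in range(region.shape[1] - bw + 1):
            sad = np.abs(region[y:y + bh, x:x + bw].astype(np.int32)
                         - block.astype(np.int32)).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

def estimate_global_motion(ref, cur):
    # ref: reference (first) frame, cur: next frame; both H x W grayscale.
    H, W = ref.shape
    # Regions 1..4 of the reference frame (0.5W x 0.5H each), numbered
    # clockwise from the top-left.
    regions = [ref[:H//2, :W//2], ref[:H//2, W//2:],
               ref[H//2:, W//2:], ref[H//2:, :W//2]]
    # Centre region A0 (0.5W x 0.5H) of the current frame, split into
    # sub-blocks A1..A4 (0.25W x 0.25H) in the same clockwise order.
    a0 = cur[H//4:3*H//4, W//4:3*W//4]
    h, w = a0.shape[0] // 2, a0.shape[1] // 2
    blocks = [a0[:h, :w], a0[:h, w:], a0[h:, w:], a0[h:, :w]]
    # Where each sub-block would sit inside its region under zero motion.
    nominal = [(H//4, W//4), (H//4, 0), (0, 0), (0, W//4)]
    matches = [sad_match(b, r) for b, r in zip(blocks, regions)]
    local = [(m[0] - p[0], m[1] - p[1]) for m, p in zip(matches, nominal)]
    dy = (local[0][0] + local[1][0]) / 2  # vertical component from A1, A2
    dx = (local[2][1] + local[3][1]) / 2  # horizontal component from A3, A4
    # Reverse motion compensation would shift the frame by (-dy, -dx).
    return dy, dx
```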
Preferably, the tracking device 4 comprises a region-of-interest determination module 41, a candidate motion region extraction module 42 and a tracked object positioning module 43. The region-of-interest determination module 41 is configured to determine a region of interest $D_1$ in one frame of the video image and use it as the target template; the candidate motion region extraction module 42 is configured to establish particle state transition and observation models and, based on these models, to predict candidate motion regions using particle filtering; the tracked object positioning module 43 is configured to perform a feature similarity measurement between the candidate motion region and the target template, identify the tracked object, and record the current frame position of the tracked object.
The preferred embodiment builds a modular architecture for the tracking device 4.
Preferably, the candidate motion region extraction module 42 includes:
(1) Initialization sub-module 421: used to randomly select n particles in the region of interest $D_1$ and initialize each of them; the initial state of particle i is $x_0^i$ and the initial weights are $\{Q_0^i = 1/n,\ i = 1,\dots,n\}$;
(2) State transition model establishing sub-module 422: used to establish the particle state transition model, which takes the following form:

$$x_m^i = A\,x_{m-1}^i + v_m^i$$

where $x_m^i$ denotes the new particle at time m (m ≥ 2), $v_m^i$ is Gaussian white noise with mean 0, and A is the 4th-order identity matrix; the particles at time m−1 are propagated through this state transition model;
(3) Observation model establishing sub-module 423: used to establish the particle observation model by combining a color histogram, a texture feature histogram and motion edge features;
(4) Candidate motion region calculation sub-module 424: computes the candidate motion region using minimum-variance estimation:

$$x_{now} = \sum_{j=1}^{n} Q_m^j \cdot x_m^j$$

where $x_{now}$ denotes the computed candidate motion region of the current frame image and $x_m^j$ the state value of the jth particle at time m;
(5) Position correction sub-module 425: used to correct abnormal data:

$$x_{pre} = \sum_{j=1}^{n} Q_{m-1}^j \cdot x_{m-1}^j$$

where $x_{pre}$ denotes the candidate motion region predicted from the particle states at time m−1 and $x_{m-1}^j$ the state value of the jth particle at time m−1. A data anomaly evaluation function $P = \lVert x_{now} - x_{pre} \rVert$ is set; if the value of P exceeds a preset empirical threshold T, then $x_{now} = x_{pre}$ is used instead;
(6) Resampling sub-module 426: used to delete particles whose weights are too small through a resampling operation. During resampling, the difference between the system's prediction and its observation at the current time provides an innovation residual, and the sampled particles are adaptively adjusted online by measuring this residual: the number of particles $N_m$ at time m during sampling is made to vary with the innovation residual of the system at time m, and is kept between $N_{min}+1$ and $N_{max}-1$, where $N_{min}$ and $N_{max}$ denote the minimum and maximum particle numbers respectively.
This preferred embodiment updates the weights of the sampled particles by combining a color histogram, a texture feature histogram and motion edge features, which effectively strengthens the robustness of the tracking system; the position correction sub-module 425 prevents abnormal data from affecting the whole system; and in the resampling sub-module 426, the difference between prediction and observation at the current time provides an innovation residual whose measurement drives the online adaptive adjustment of the sampled particles, which better ensures efficient particle sampling and real-time performance of the algorithm. A code sketch of sub-modules (1)-(6) is given below.
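The sketch below condenses sub-modules (1)-(6) into plain Python. The state layout [x, y, vx, vy], the noise level, the thresholds and the adaptation law for the particle count are all assumptions of the sketch (the exact count formula driven by the innovation residual is not legible in the source), and the feature-based observation model of sub-module (3) is left as an external step that supplies the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(roi, n=50):
    # (1) Draw n particles at random inside the region of interest D1,
    # each with initial weight 1/n; the state layout is [x, y, vx, vy].
    x0, y0, x1, y1 = roi
    states = np.column_stack([rng.uniform(x0, x1, n), rng.uniform(y0, y1, n),
                              np.zeros(n), np.zeros(n)])
    return states, np.full(n, 1.0 / n)

def propagate(states, sigma=2.0):
    # (2) x_m = A x_{m-1} + v_m with A the 4th-order identity matrix and
    # v_m zero-mean Gaussian white noise.
    A = np.eye(4)
    return states @ A.T + rng.normal(0.0, sigma, states.shape)

def estimate(states, weights):
    # (4) Minimum-variance estimate: x_now = sum_j Q_m^j * x_m^j.
    return weights @ states

def correct(x_now, x_pre, T=20.0):
    # (5) Anomaly correction: if P = ||x_now - x_pre|| exceeds the empirical
    # threshold T, fall back to the previous-frame prediction x_pre.
    return x_pre if np.linalg.norm(x_now - x_pre) > T else x_now

def resample(states, weights, n_min=20, n_max=80, residual=0.5):
    # (6) Resample in proportion to the weights, discarding low-weight
    # particles; the count grows with the innovation residual and is kept
    # strictly between n_min and n_max (the adaptation law is assumed).
    n = int(np.clip(round(n_min + residual * (n_max - n_min)),
                    n_min + 1, n_max - 1))
    idx = rng.choice(len(states), size=n, p=weights / weights.sum())
    return states[idx], np.full(n, 1.0 / n)
```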
Preferably, the particle weight value updating formula of the particle observation model is as follows:
$$Q_m^j = \frac{\overline{Q_{Cm}^j}\cdot\overline{Q_{Mm}^j}\cdot\overline{Q_{Wm}^j} + \lambda_1\,\overline{Q_{Cm}^j} + \lambda_2^2\,\overline{Q_{Mm}^j} + \lambda_3^2\,\overline{Q_{Wm}^j} + \lambda_1\lambda_2\lambda_3}{(1+\lambda_1)(1+\lambda_2)(1+\lambda_3)}$$

where

$$\overline{Q_{Cm}^j} = Q_{Cm}^j \Big/ \sum_{j=1}^{n} Q_{Cm}^j,\qquad Q_{Cm}^j = Q_{C(m-1)}^j\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{A_m^2}{2\sigma^2}\right)$$

$$\overline{Q_{Mm}^j} = Q_{Mm}^j \Big/ \sum_{j=1}^{n} Q_{Mm}^j,\qquad Q_{Mm}^j = Q_{M(m-1)}^j\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{B_m^2}{2\sigma^2}\right)$$

$$\overline{Q_{Wm}^j} = Q_{Wm}^j \Big/ \sum_{j=1}^{n} Q_{Wm}^j,\qquad Q_{Wm}^j = Q_{W(m-1)}^j\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{C_m^2}{2\sigma^2}\right)$$

where $Q_m^j$ is the final updated weight of the jth particle at time m; $Q_{Cm}^j$ and $Q_{C(m-1)}^j$ are the color-histogram-based update weights of the jth particle at times m and m−1; $Q_{Mm}^j$ and $Q_{M(m-1)}^j$ are the corresponding motion-edge-based weights; $Q_{Wm}^j$ and $Q_{W(m-1)}^j$ are the corresponding texture-feature-histogram-based weights; $A_m$, $B_m$ and $C_m$ are the Bhattacharyya distances between the observed and true values of the jth particle at time m for the color histogram, the motion edge and the texture feature histogram respectively; $\sigma$ is the variance of the Gaussian likelihood model; and $\lambda_1$, $\lambda_2$, $\lambda_3$ are the adaptive adjustment factors for the normalization of the color-histogram, motion-edge and texture-feature-histogram feature weights.

The adaptive adjustment factors are calculated as

$$\lambda_s^m = \xi_{m-1}\cdot\left[-\sum_{j=1}^{n} p_{m-1}^{s/j}\log_2 p_{m-1}^{s/j}\right],\quad s = 1, 2, 3$$

where $\lambda_1^m$, $\lambda_2^m$ and $\lambda_3^m$ are the adaptive adjustment factors at time m for the normalization of the color-histogram, motion-edge and texture-feature-histogram feature weights respectively; $p_{m-1}^{s/j}$ is the observation probability of the corresponding feature value under particle j at time m−1; and $\xi_{m-1}$ is the variance of the spatial positions of all particles at time m−1.
This preferred embodiment gives the particle weight update formula of the particle observation model and the calculation formula of the adaptive adjustment factors; fusing the feature weights of the particles in this way effectively overcomes the shortcomings of purely additive and purely multiplicative fusion, further strengthening the robustness of the tracking system. A sketch of the update is given below.
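For concreteness, the reconstructed update can be written out as below. The per-particle Bhattacharyya distances and the observation probabilities are assumed to be supplied by the observation model, and σ and the clipping guard are illustrative choices of the sketch:

```python
import numpy as np

def gaussian_update(prev_q, dist, sigma=0.5):
    # Per-feature weight update: Q_m^j = Q_{m-1}^j * N(dist_j; 0, sigma),
    # normalised over the n particles. `dist` holds the Bhattacharyya
    # distances (A_m, B_m or C_m) of all particles for one feature.
    q = prev_q * np.exp(-dist**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return q / q.sum()

def adaptive_factor(p_prev, xi_prev):
    # lambda_s^m = xi_{m-1} * [-sum_j p^{s/j}_{m-1} log2 p^{s/j}_{m-1}]:
    # the entropy of one feature's observation probabilities over the
    # particles at time m-1, scaled by the particle-position variance.
    p = np.clip(p_prev, 1e-12, 1.0)
    return xi_prev * -(p * np.log2(p)).sum()

def fuse_weights(qc, qm, qw, l1, l2, l3):
    # Fusion of colour (qc), motion-edge (qm) and texture (qw) weights per
    # the reconstructed formula: multiplicative and additive terms are mixed
    # so that one degraded feature cannot zero out a particle's weight.
    num = qc * qm * qw + l1 * qc + l2**2 * qm + l3**2 * qw + l1 * l2 * l3
    q = num / ((1 + l1) * (1 + l2) * (1 + l3))
    return q / q.sum()
```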
In this application scenario the number of selected particles is n = 50; the tracking speed is improved by a relative 8% and the tracking accuracy by a relative 7%.
Application scenario 2
The system of this application scenario has the same structure and algorithms as application scenario 1; the description is not repeated here.
In this application scenario the number of selected particles is n = 55; the tracking speed is improved by a relative 7% and the tracking accuracy by a relative 8%.
Application scenario 3
The system of this application scenario has the same structure and algorithms as application scenario 1; the description is not repeated here.
In this application scenario the number of selected particles is n = 60; the tracking speed is improved by a relative 6.5% and the tracking accuracy by a relative 8.4%.
Application scenario 4
The system of this application scenario has the same structure and algorithms as application scenario 1; the description is not repeated here.
In this application scenario the number of selected particles is n = 65; the tracking speed is improved by a relative 6.5% and the tracking accuracy by a relative 8.5%.
Application scenario 5
Referring to fig. 1 and 2, a human motion tracking system in a complex scene according to an embodiment of the application scene includes a human motion video acquisition device 1, an image preprocessing device 2, a shooting adjustment device 3, and a tracking device 4, where the human motion video acquisition device 1 is configured to acquire a video image including a human body; the image preprocessing device 2 is used for preprocessing the acquired video image and eliminating the influence of video jitter; the tracking device 4 processes the video image to acquire the current frame position of the tracking object, predicts the motion direction of the tracking object, determines an interested area according to the current frame position of the tracking object, and tracks the tracking object in the interested area; the shooting adjusting device 3 is used for judging whether the current frame position of the tracking object is in the central area of the current frame, if so, the camera is not adjusted, and if not, the camera is adjusted according to the motion direction of the tracking object.
Preferably, the processing the video image to obtain the current frame position of the tracking object includes: processing the image to extract a candidate motion area containing a human body; acquiring a human body target in the candidate motion area; determining a tracking object according to the human body target, and acquiring and recording the current frame position of the tracking object; and predicting the motion direction of the tracking object according to the current frame position of the tracking object.
The embodiment of the invention realizes the smooth tracking effect by selecting the tracking object and combining the direction prediction to adjust the position of the camera, does not need any auxiliary positioning device, is not limited by the tracking angle, can track the human body in an all-around way, has robustness not influenced by the outside, and solves the technical problems.
Preferably, the preprocessing of the acquired video image comprises selecting a first frame image of the video image as a reference frame, averagely dividing the reference frame into four non-overlapping regions, wherein W represents the width of the image, H represents the height of the image, the four regions are all 0.5W × 0.5.5H, the regions 1, 2, 3 and 4 are sequentially arranged from the upper left of the image in the clockwise direction, and selecting a region A at the center position of the image received in the next frame0,A0The size of A is 0.5W × 0.5.5H0The four image sub-blocks a of size 0.25W × 0.25.25H are divided according to the above method1、A2、A3、A4,A1And A2For estimating local motion vectors in the vertical direction, A3And A4For estimating local motion vectors in the horizontal direction, let A1、A2、A3、A4And searching the best match in the four areas of 1, 2, 3 and 4 respectively to estimate the global motion vector of the video sequence, and then performing reverse motion compensation to eliminate the influence of video jitter.
The preferred embodiment performs image stabilization on the video image, avoids the influence of video jitter on subsequent image processing, and has high preprocessing efficiency.
Preferably, the tracking device 4 comprises a region of interest determination module 41, a candidate motion region extraction module 42 and a tracked object localization module 43; the region-of-interest determining module 41 is configured to determine a region of interest D in one frame of image of the video image1And using the template as a target template; the candidate motion region extraction module 42 is configured to establish a particle state transition and observation model and predict a candidate motion region by using particle filtering based on the model; the tracked object positioning module 43 is configured to perform feature similarity measurement on the candidate motion region and the target template, identify a tracked object, and record a current frame position of the tracked object.
The preferred embodiment builds a modular architecture for the tracking device 4.
Preferably, the candidate motion region extraction module 42 includes:
(1) initialization submodule 421: for in the region of interest D1Randomly selecting n particles and initializing each particle, wherein the initial state of the initialized particles is x0 iThe initial weight is { Qo i=1/n,i=1,...n};
(2) State transition model establishing sub-module 422: used to establish the particle state transition model, which adopts the following formula:
x_m^i = A·x_{m-1}^i + v_m^i
where x_m^i denotes the new particle at time m, m ≥ 2, v_m^i is Gaussian white noise with mean 0, and A is the 4th-order identity matrix; the particles at time m-1 are propagated through the state transition model;
(3) Observation model establishing sub-module 423: used to establish the particle observation model by combining a color histogram, a texture feature histogram and motion edge features;
(4) Candidate motion region calculation sub-module 424: computes the candidate motion region using minimum variance estimation:
x_now = Σ_{j=1}^{n} Q_m^j · x_m^j
where x_now denotes the candidate motion region calculated for the current frame image, and x_m^j denotes the state value of the jth particle at time m;
(5) Position correction sub-module 425: used to correct abnormal data:
x_pre = Σ_{j=1}^{n} Q_{m-1}^j · x_{m-1}^j
where x_pre denotes the candidate motion region calculated from the previous frame image, and x_{m-1}^j denotes the state value of the jth particle at time m-1;
a data anomaly evaluation function P = |x_now - x_pre| is set; if the value of P is greater than a preset empirical threshold T, then x_now = x_pre;
(6) Resampling sub-module 426: used to delete particles with excessively small weights through the resampling operation. During resampling, the difference between the system's prediction and observation at the current time provides an innovation residual; the sampled particles are then adaptively adjusted online by measuring this innovation residual, and the relation between the particle number and the innovation residual during sampling is defined such that N_m, the number of particles at time m, is driven by the innovation residual of the system at time m and always lies strictly between N_min and N_max, i.e. within [N_min + 1, N_max - 1], where N_max and N_min denote the maximum and minimum particle numbers respectively.
The preferred embodiment updates the weights of the sampled particles by combining a color histogram, a texture feature histogram and motion edge features, which effectively enhances the robustness of the tracking system; the position correction sub-module 425 prevents abnormal data from affecting the whole system; in the resampling sub-module 426, the difference between prediction and observation at the current time provides an innovation residual, the sampled particles are adaptively adjusted online by measuring this residual, and the relation between the particle number and the innovation residual during sampling is defined, which better guarantees the efficiency of particle sampling and the real-time performance of the algorithm.
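Taken together, sub-modules (1) to (6) describe one pass of a particle filter; the consolidated sketch announced above follows. Several pieces are stand-ins under stated assumptions: the observe argument replaces the fused color/texture/motion-edge likelihood defined further on, the state is a 4-D vector (position plus velocity) so that the 4th-order identity transition matrix applies, and, since the piecewise particle-count formula did not survive in the source text, the resampling step merely clamps a residual-scaled count strictly between N_min and N_max.

import numpy as np

rng = np.random.default_rng(0)

def init_particles(roi_center, n=70):
    # (1) Initialization: n particles scattered around the region of interest
    # D1, with uniform initial weights Q_0^i = 1/n.
    particles = np.tile(np.r_[roi_center, 0.0, 0.0], (n, 1))
    particles += rng.normal(0.0, 10.0, particles.shape)
    return particles, np.full(n, 1.0 / n)

def particle_filter_step(particles, weights, observe, T=30.0,
                         n_min=20, n_max=200, sigma_v=5.0):
    # x_pre = sum_j Q_{m-1}^j x_{m-1}^j (previous-frame estimate).
    x_pre = weights @ particles
    # (2) State transition: x_m^i = A x_{m-1}^i + v_m^i, A = identity.
    particles = particles + rng.normal(0.0, sigma_v, particles.shape)
    # (3) Observation model: placeholder for the fused feature weight.
    weights = weights * np.array([observe(p) for p in particles])
    weights /= weights.sum()
    # (4) Minimum-variance estimate: x_now = sum_j Q_m^j x_m^j.
    x_now = weights @ particles
    # (5) Position correction: if P = |x_now - x_pre| exceeds T, fall back.
    residual = np.linalg.norm(x_now[:2] - x_pre[:2])
    if residual > T:
        x_now = x_pre
    # (6) Resampling with an adaptive particle count; the clamp below is a
    # stand-in for the patent's lost piecewise rule, keeping the count
    # strictly between n_min and n_max.
    n_new = int(np.clip(n_min + residual, n_min + 1, n_max - 1))
    idx = rng.choice(len(particles), size=n_new, p=weights)
    particles = particles[idx]
    weights = np.full(n_new, 1.0 / n_new)
    return particles, weights, x_now

A caller would invoke init_particles once with n = 70 (the count used in the application scenario below) and then call particle_filter_step once per frame, supplying an observe function built from the image features; the returned x_now feeds the shooting adjustment device.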
Preferably, the particle weight update formula of the particle observation model is as follows:
Q_m^j = (Q̄_{Cm}^j · Q̄_{Mm}^j · Q̄_{Wm}^j + λ_1·Q̄_{Cm}^j + λ_2·Q̄_{Mm}^j + λ_3·Q̄_{Wm}^j + λ_1·λ_2·λ_3) / ((1 + λ_1)(1 + λ_2)(1 + λ_3))
where

Q̄_{Cm}^j = Q_{Cm}^j / Σ_{j=1}^{n} Q_{Cm}^j,  Q_{Cm}^j = Q_{C(m-1)}^j · (1/(√(2π)·σ)) · exp(-A_m² / (2σ²))

Q̄_{Mm}^j = Q_{Mm}^j / Σ_{j=1}^{n} Q_{Mm}^j,  Q_{Mm}^j = Q_{M(m-1)}^j · (1/(√(2π)·σ)) · exp(-B_m² / (2σ²))

Q̄_{Wm}^j = Q_{Wm}^j / Σ_{j=1}^{n} Q_{Wm}^j,  Q_{Wm}^j = Q_{W(m-1)}^j · (1/(√(2π)·σ)) · exp(-C_m² / (2σ²))
where Q_m^j denotes the final updated weight of the jth particle at time m; Q_{Cm}^j and Q_{C(m-1)}^j denote the color-histogram-based update weights of the jth particle at times m and m-1; Q_{Mm}^j and Q_{M(m-1)}^j denote the motion-edge-based update weights of the jth particle at times m and m-1; Q_{Wm}^j and Q_{W(m-1)}^j denote the texture-feature-histogram-based update weights of the jth particle at times m and m-1; A_m, B_m and C_m are the Bhattacharyya distances between the observed value and the true value for the jth particle at time m, based on the color histogram, the motion edge and the texture feature histogram respectively; σ is the variance of the Gaussian likelihood model; and λ_1, λ_2 and λ_3 are the adaptive adjustment factors for the normalization of the feature weights based on the color histogram, the motion edge and the texture feature histogram respectively;
the adaptive adjustment factors are calculated as follows:
λ_s^m = ξ_{m-1} · [-Σ_{j=1}^{n} p_{m-1}^{s/j} · log₂ p_{m-1}^{s/j}],  s = 1, 2, 3;
where, for s = 1, λ_1^m denotes the adaptive adjustment factor at time m for the normalization of the color-histogram-based feature weights, and p_{m-1}^{1/j} is the observation probability of the color-histogram feature value under particle j at time m-1; for s = 2, λ_2^m denotes the adaptive adjustment factor at time m for the normalization of the motion-edge-based feature weights, and p_{m-1}^{2/j} is the observation probability of the motion-edge feature value under particle j at time m-1; for s = 3, λ_3^m denotes the adaptive adjustment factor at time m for the normalization of the texture-feature-histogram-based feature weights, and p_{m-1}^{3/j} is the observation probability of the texture-feature-histogram feature value under particle j at time m-1; ξ_{m-1} denotes the variance of the spatial positions of all particles at time m-1.
The preferred embodiment gives the particle weight update formula of the particle observation model and the calculation formula of the adaptive adjustment factors; by fusing the feature weights of the particles it effectively overcomes the drawbacks of purely additive and purely multiplicative fusion, further enhancing the robustness of the tracking system.
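For concreteness, the two formulas above reduce to a few lines of arithmetic per particle. In the sketch below the normalized per-feature weights qc, qm, qw, the Bhattacharyya distances, and the per-particle observation probabilities are assumed to be already computed from the image; the variable names are ours, not the patent's.

import math

def feature_weight(prev_weight, bhattacharyya_dist, sigma):
    # Per-feature update, e.g. Q_Cm = Q_C(m-1) * (1/(sqrt(2*pi)*sigma)) * exp(-A_m^2/(2*sigma^2)).
    return (prev_weight
            * math.exp(-bhattacharyya_dist ** 2 / (2 * sigma ** 2))
            / (math.sqrt(2 * math.pi) * sigma))

def adaptive_factor(xi_prev, probs):
    # lambda_s^m = xi_{m-1} * [ -sum_j p^{s/j} * log2(p^{s/j}) ]: the entropy of
    # the per-particle observation probabilities, scaled by the particle spread.
    return xi_prev * -sum(p * math.log2(p) for p in probs if p > 0)

def fused_weight(qc, qm, qw, lam1, lam2, lam3):
    # Q_m^j: multiplicative and additive terms combined over the three features.
    num = (qc * qm * qw + lam1 * qc + lam2 * qm + lam3 * qw
           + lam1 * lam2 * lam3)
    return num / ((1 + lam1) * (1 + lam2) * (1 + lam3))

The cross terms keep the fused weight from collapsing to zero when a single feature momentarily fails, while still rewarding agreement among all three; this is what distinguishes the rule from purely additive or purely multiplicative fusion.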
In this application scenario the number of selected particles is n = 70; the tracking speed is improved by a relative 6 percent and the tracking precision by a relative 9 percent.
Finally, it should be noted that the above application scenarios are only used for illustrating the technical solutions of the present invention, and not for limiting the protection scope of the present invention, and although the present invention is described in detail with reference to the preferred application scenarios, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (3)

1. A human motion tracking system is characterized by comprising a human motion video acquisition device, an image preprocessing device, a shooting adjusting device and a tracking device, wherein the human motion video acquisition device is used for acquiring a video image containing a human body; the image preprocessing device is used for preprocessing the acquired video image and eliminating the influence of video jitter; the tracking device processes the video image to obtain the current frame position of the tracking object, predicts the motion direction of the tracking object, determines the region of interest according to the current frame position of the tracking object, and tracks the tracking object in the region of interest; the shooting adjusting device is used for judging whether the current frame position of the tracking object is in the central area of the current picture, if so, the camera is not adjusted, and if not, the camera is adjusted according to the motion direction of the tracking object.
2. The system for tracking human motion according to claim 1, wherein the processing the video image to obtain the current frame position of the tracked object comprises: processing the image to extract a candidate motion area containing a human body; acquiring a human body target in the candidate motion area; determining a tracking object according to the human body target, and acquiring and recording the current frame position of the tracking object; and predicting the motion direction of the tracking object according to the current frame position of the tracking object.
3. The human motion tracking system according to claim 2, wherein the preprocessing of the acquired video image comprises: selecting the first frame image of the video as a reference frame and dividing it evenly into four non-overlapping regions, where W denotes the width of the image and H its height, each of the four regions being of size 0.5W × 0.5H and the regions being numbered 1, 2, 3 and 4 clockwise from the top left of the image; selecting a region A0 of size 0.5W × 0.5H at the center of the next received frame and dividing A0 in the same manner into four image sub-blocks A1, A2, A3 and A4, each of size 0.25W × 0.25H, A1 and A2 being used to estimate the local motion vector in the vertical direction and A3 and A4 the local motion vector in the horizontal direction; and searching for the best matches of A1, A2, A3 and A4 within regions 1, 2, 3 and 4 respectively to estimate the global motion vector of the video sequence, and then performing reverse motion compensation to eliminate the influence of video jitter.
CN201610612701.XA 2016-07-27 2016-07-27 A kind of Human Movement Tracking System Pending CN106296730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610612701.XA CN106296730A (en) 2016-07-27 2016-07-27 A kind of Human Movement Tracking System

Publications (1)

Publication Number Publication Date
CN106296730A true CN106296730A (en) 2017-01-04

Family

ID=57663668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610612701.XA Pending CN106296730A (en) 2016-07-27 2016-07-27 A kind of Human Movement Tracking System

Country Status (1)

Country Link
CN (1) CN106296730A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106949354A (en) * 2017-04-19 2017-07-14 成都市宏山科技有限公司 The human body of display follows angle adjuster
CN109525781A (en) * 2018-12-24 2019-03-26 国网山西省电力公司检修分公司 A kind of image capturing method, device, equipment and the storage medium of wire-connection point
CN110086988A (en) * 2019-04-24 2019-08-02 薄涛 Shooting angle method of adjustment, device, equipment and its storage medium
CN111898519A (en) * 2020-07-28 2020-11-06 武汉大学 Portable auxiliary visual servo robot system for motion training in specific area and posture evaluation method
CN113179371A (en) * 2021-04-21 2021-07-27 新疆爱华盈通信息技术有限公司 Shooting method, device and snapshot system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222343A (en) * 2010-04-16 2011-10-19 上海摩比源软件技术有限公司 Method for tracking human body motions and system thereof
CN102368301A (en) * 2011-09-07 2012-03-07 常州蓝城信息科技有限公司 Moving human body detection and tracking system based on video
CN102360423A (en) * 2011-10-19 2012-02-22 丁泉龙 Intelligent human body tracking method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Yuchen: "Research on Video Target Tracking Methods Based on Particle Filtering", China Doctoral Dissertations Full-text Database, Information Science and Technology Series (Monthly) *
Qiu Jiatao: "Research on Electronic Image Stabilization and Visual Tracking Algorithms", China Doctoral Dissertations Full-text Database, Information Science and Technology Series (Monthly) *

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170104)