CN107341460B - Face tracking method and device - Google Patents
- Publication number
- Publication number: CN107341460B (application CN201710493142.XA)
- Authority
- CN
- China
- Prior art keywords
- identification information
- face
- tracked
- tracker
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The disclosure relates to a face tracking method and device. The method includes the following steps: in the current frame image, when two tracked faces are determined to overlap, first identification information of the first tracked face, i.e. the one that is not occluded, is acquired; the tracker corresponding to the first identification information is acquired by comparing the first identification information with the identification information pre-stored in the two trackers; the first tracked face is assigned to the tracker corresponding to the first identification information for tracking, and the other tracked face is assigned to the other tracker. In this way, when one of two tracked faces is occluded, the identification information of the unoccluded face is compared with the identification information pre-stored in the two trackers, and the unoccluded face is assigned to the correct tracker. This improves tracking accuracy, reduces tracking errors, and improves the continuity of the tracking results.
Description
Technical Field
The present disclosure relates to target tracking technologies, and in particular, to a method and an apparatus for tracking a human face.
Background
Face tracking is widely applicable to fields such as face recognition, security surveillance, and mobile phone photography. During tracking, two tracked faces may come to overlap, and in that situation correct tracking is critical.
In the related art, an occlusion-resistant face tracking method can be implemented with a Kalman filter and the mean-shift algorithm, which addresses the problem that a tracked face is easily disturbed by surrounding objects of similar chrominance.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a face tracking method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided a face tracking method, including:
in a current frame image, when two tracked faces are determined to be overlapped, acquiring first identification information of a first tracked face which is not shielded;
acquiring trackers corresponding to the first identification information according to the first identification information and identification information prestored in the two trackers;
and distributing the first tracked face to a tracker corresponding to the first identification information for tracking, and distributing the other tracked face to the other tracker for tracking.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the method further includes:
when the two tracked faces have overlapped and then separated, and the distance between the two tracked faces is smaller than a preset distance threshold, acquiring second identification information of either of the two tracked faces;
acquiring trackers corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers;
and allocating the tracked face corresponding to the second identification information to a tracker corresponding to the second identification information for tracking, and allocating the other tracked face to the other tracker for tracking.
With reference to the first aspect, in a second possible implementation manner of the first aspect, before obtaining the first identification information of the unobstructed first tracked face when it is determined that two tracked faces overlap, the method further includes:
when the distance between the two tracked faces is determined to be smaller than a preset distance threshold, respectively acquiring identification information of the two tracked faces;
and respectively storing the identification information into the tracker according to the corresponding relation between the tracked human face and the tracker and the identification information.
With reference to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the identification information includes geometric information of a tracked face;
the acquiring the tracker corresponding to the first identification information according to the first identification information and the identification information pre-stored in the two trackers includes:
acquiring Euclidean distance between the first identification information and each piece of pre-stored identification information;
and determining a tracker corresponding to the identification information of which the Euclidean distance from the first identification information is smaller than a preset difference threshold value as the tracker corresponding to the first identification information.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the geometric information includes: length, width, interocular distance, width of mouth, and nose tip to eye distance of a human face.
With reference to the third possible implementation manner or the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the obtaining first identification information of a first tracked face that is not occluded includes:
respectively acquiring the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm;
and acquiring first identification information of the first tracked face according to the position.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the tracker further includes a filter, and the allocating the first tracked face to the tracker corresponding to the first identification information for tracking processing includes:
determining a position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image;
convolving the position area image with a filter in a tracker corresponding to the first tracked face in the previous frame image to obtain a response image of the first tracked face in the current frame image;
determining the position corresponding to the maximum response value in the response image as the position of the first tracked face in the current frame image;
and determining a filter in a tracker corresponding to the first tracked face in the current frame image according to the response image and the position area image, wherein the filter in the current frame image is used for determining the response image of the first tracked face in the next frame image.
According to a second aspect of the embodiments of the present disclosure, there is provided a face tracking apparatus, including:
the first acquisition module is configured to acquire first identification information of a first tracked face which is not shielded when two tracked faces are determined to be overlapped in a current frame image;
the second acquisition module is configured to acquire the tracker corresponding to the first identification information according to the first identification information and identification information prestored in the two trackers;
and the first distribution module is configured to distribute the first tracked face to a tracker corresponding to the first identification information for tracking processing, and distribute the other tracked face to another tracker for tracking processing.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the apparatus further includes:
the third acquisition module is configured to acquire second identification information of either of the two tracked faces when the two tracked faces have overlapped and then separated and the distance between the two tracked faces is smaller than a preset distance threshold;
the fourth acquisition module is configured to acquire the tracker corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers;
and the second allocating module is configured to allocate the tracked face corresponding to the second identification information to the tracker corresponding to the second identification information for tracking processing, and allocate another tracked face to another tracker for tracking processing.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the apparatus further includes:
a fifth obtaining module configured to obtain identification information of the two tracked faces respectively when it is determined that the distance between the two tracked faces is smaller than a preset distance threshold;
and the storage module is configured to store the identification information into the tracker respectively according to the corresponding relation between the tracked human face and the tracker and the identification information.
With reference to the second aspect, the first possible implementation manner of the second aspect, or the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the identification information includes geometric information of a tracked face;
the second acquisition module includes:
a first obtaining sub-module configured to obtain a euclidean distance between the first identification information and each piece of pre-stored identification information;
a first determining sub-module configured to determine, as a tracker corresponding to the first identification information, a tracker corresponding to identification information whose euclidean distance to the first identification information is smaller than a preset difference threshold.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the geometric information includes: length, width, interocular distance, width of mouth, and nose tip to eye distance of a human face.
With reference to the third possible implementation manner or the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the first obtaining module includes:
the second acquisition sub-module is configured to respectively acquire the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm;
a third obtaining sub-module configured to obtain first identification information of the first tracked face according to the position.
With reference to the second aspect, in a sixth possible implementation manner of the second aspect, the tracker further includes a filter, and the first allocating module includes:
a second determining sub-module configured to determine a position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image;
a fourth obtaining sub-module, configured to convolve the position area image with a filter in a tracker corresponding to the first tracked face in a previous frame of image, and obtain a response map of the first tracked face in the current frame of image;
a third determining submodule configured to determine a position corresponding to a maximum response value in the response map as a position of the first tracked face in the current frame image;
and the fourth determining sub-module is configured to determine a filter in a tracker corresponding to the first tracked face in the current frame image according to the response map and the position area image, wherein the filter in the current frame image is used for determining the response map of the first tracked face in the next frame image.
According to a third aspect of the embodiments of the present disclosure, there is provided a face tracking apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
in a current frame image, when two tracked faces are determined to be overlapped, acquiring first identification information of a first tracked face which is not shielded;
acquiring trackers corresponding to the first identification information according to the first identification information and identification information prestored in the two trackers;
and distributing the first tracked face to a tracker corresponding to the first identification information for tracking, and distributing the other tracked face to the other tracker for tracking.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium containing computer-executable instructions, which, when executed by a processor of a face tracking apparatus, cause the face tracking apparatus to perform the face tracking method as described in the first aspect or any one of the possible implementations of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In one embodiment, in the current frame image, when it is determined that two tracked faces overlap, first identification information of the first tracked face, i.e. the one that is not occluded, is acquired; the tracker corresponding to the first identification information is acquired by comparing the first identification information with the identification information pre-stored in the two trackers; the first tracked face is then assigned to that tracker for tracking, and the other tracked face is assigned to the other tracker. In this way, when one of two tracked faces is occluded, the identification information of the unoccluded face is compared with the identification information pre-stored in the two trackers, and the unoccluded face is assigned to the correct tracker, which improves tracking accuracy, reduces tracking errors, and improves the continuity of the tracking results.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario of a face tracking method provided in an embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating a face tracking method in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram of the principle of the tracking algorithm in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of acquiring identification information of a tracked face in the embodiment shown in FIG. 2;
FIG. 5 is a block diagram illustrating a face tracking device according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a face tracking device according to another exemplary embodiment;
FIG. 7 is a block diagram illustrating a face tracking device according to yet another exemplary embodiment;
FIG. 8 is a block diagram illustrating a face tracking device according to yet another exemplary embodiment;
FIG. 9 is a block diagram illustrating a face tracking device according to another exemplary embodiment;
FIG. 10 is a block diagram illustrating a face tracking device according to an exemplary embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a schematic diagram of an application scenario of the face tracking method provided in an embodiment of the present disclosure. The method can be applied to a scene in which two tracked faces overlap during motion, i.e. one tracked face is occluded by the other. As shown in Fig. 1, tracked face A and tracked face B are initially a certain distance apart; their moving tracks then overlap, so that tracked face A is occluded by tracked face B. Tracked face A corresponds to one tracker, and tracked face B corresponds to another tracker. Because the two tracked faces overlap, a tracking algorithm of the related art cannot track them correctly: it only addresses interference from surrounding objects of similar chrominance, and fails when two very close objects occlude each other. Moreover, after the two tracked faces separate, an identity switch may occur, i.e. the tracker that initially tracked face A may end up following face B, which reduces the accuracy of face tracking. The face tracking method provided by the embodiments of the present disclosure solves these problems.
FIG. 2 is a flow diagram illustrating a face tracking method according to an exemplary embodiment. As shown in fig. 2, the face tracking method provided by the embodiment of the present disclosure includes the following steps:
in step 101, in a current frame image, when it is determined that two tracked faces overlap, first identification information of a first tracked face which is not occluded is acquired.
The face tracking method in the embodiments of the present disclosure may be executed by a computing device with computing capability, for example, a computer, a terminal device, a personal digital assistant, and the like. Illustratively, the computing device may be a server of video surveillance.
The face tracking method provided by the embodiments of the present disclosure improves on an existing face tracking method. Optionally, it builds on a correlation-filter tracking algorithm such as MOSSE (Minimum Output Sum of Squared Error). The MOSSE algorithm works as follows:
a) In the initial (first) frame, sample data F is generated from the face region of the image, and a response map G is generated from the center position of the face. A correlation filter H is generated by combining the sample data and the response map; specifically, in the frequency domain the filter may be H_1* = (G_1 ⊙ F_1*) / (F_1 ⊙ F_1*), where the index 1 indicates the first frame, ⊙ denotes element-wise multiplication, * denotes the complex conjugate, and F_1 and G_1 are the 2-D Fourier transforms of the sample data and the response map. In this step, the sample data may be generated from a manually identified result; it represents the image of the region where the face is located. The response map may be determined by combining the sample data with a Gaussian function; each value in the response map indicates the likelihood that the corresponding position is the face.
b) In the second frame image, sample data of the current frame is generated from the sample data of the initial frame, and the correlation filter of the first frame is convolved with the sample data of the second frame to obtain the response map of the second frame. When generating the sample data of the second frame from that of the first frame, a position range may be determined by enlarging the position of the sample data in the first frame, and the image within that range in the second frame image is used as the sample data.
c) The position corresponding to the maximum response value in the response map is found; this position is the center position of the face in the second frame image.
d) The correlation filter is updated from the sample data and the response map of the second frame: H_2* = (G_2 ⊙ F_2*) / (F_2 ⊙ F_2*), where the index 2 denotes the second frame. The updated correlation filter may be combined with the filter of the first frame (e.g. as a running average) to form the updated filter of the second frame, which is used to determine the response map in the third frame image.
e) In the third frame, sample data of the third frame is determined from the recognition result of the second frame and convolved with the updated filter of the second frame to obtain the response map of the third frame; the position corresponding to the maximum response value in that map is the center position of the face in the third frame image, and the correlation filter is updated from the sample data and the response map of the third frame: H_i* = (G_i ⊙ F_i*) / (F_i ⊙ F_i*), where i denotes the frame index. The updated correlation filter may be combined with the filter of the second frame to form the updated filter of the third frame, which is used to determine the response map in the fourth frame image. Step e) is repeated frame by frame to track the face. When the sample data of the third frame is determined from the recognition result of the second frame, the position of the recognition result in the second frame may be enlarged to determine a position range, and the image within that range in the third frame image is used as the sample data of the third frame.
Fig. 3 is a schematic diagram of the principle of the tracking algorithm in the embodiment shown in fig. 2. As shown in fig. 3, the input image is sample data F of the current frame generated from the recognition result in the previous frame, and the output response map G can be obtained by convolving the input image with the updated filter H of the previous frame. The position corresponding to the maximum response value in the response map is the position where the center of the tracked face is located.
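The correlation-filter tracking cycle described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions of this sketch, not the patented implementation: the function names, patch size, regularizer eps, and learning rate lr are choices made here, and the filter is kept as a separate numerator A and denominator B so the running-average update is straightforward.

```python
import numpy as np

def gaussian_response(shape, center, sigma=2.0):
    """Desired response map: a 2-D Gaussian peaked at `center` (row, col)."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))

def init_filter(patch, center, eps=1e-5):
    """Initial frame: build the filter H* = A / B in the frequency domain,
    keeping numerator A and denominator B separate for later updates."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_response(patch.shape, center))
    return G * np.conj(F), F * np.conj(F) + eps

def track_step(patch, A, B, lr=0.125, eps=1e-5):
    """Later frames: correlate the new patch with the filter, take the
    response peak as the face position, then update A and B as a
    running average with learning rate `lr`."""
    F = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2((A / B) * F))
    peak = np.unravel_index(np.argmax(response), response.shape)
    G = np.fft.fft2(gaussian_response(patch.shape, peak))
    A = lr * G * np.conj(F) + (1 - lr) * A
    B = lr * (F * np.conj(F) + eps) + (1 - lr) * B
    return peak, A, B
```

Because the correlation is computed in the frequency domain, each tracking step costs only a few FFTs of the search patch, which is what makes this family of trackers fast enough for per-frame use.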
The disclosed embodiments apply to a scene in which two tracked faces exist in an image; for convenience, they are referred to as tracked face A and tracked face B. Correspondingly, there are two trackers, one for each tracked face. In the current frame, following the MOSSE algorithm, a first response map is determined from the filter in the tracker of tracked face A and the sample data of the current frame, and a second response map is determined from the filter in the tracker of tracked face B and the sample data of the current frame. When the first or second response map does not exist, or no maximum value can be found in it, tracked face A and tracked face B are overlapping. The overlap may be complete or partial. When the two tracked faces overlap, the face closer to the camera occludes, from the camera's viewpoint, the face farther away. Because the two faces are then very close or overlapping, it must be determined to which tracker each of the two tracked faces should be assigned. Note that the current-frame sample data used to determine the first response map differs from that used to determine the second response map, because sample data is determined from the recognition result of the previous frame, and the recognition results of the two tracked faces in the previous frame generally differ.
In the embodiment of the disclosure, first identification information of the first tracked face, the one that is not occluded, is acquired. The unoccluded first tracked face is the face whose response map exists, i.e. the one for which a maximum response value can be determined. The first identification information may be any information that identifies the face. In one implementation, it may be the shapes of the facial features and their positions on the face, such as the shape of the nose and its position on the face. In another implementation, it may be geometric information of the face, such as the width and length of the face, the inter-ocular distance, the width of the mouth, and the distance from the nose tip to the eyes.
To obtain the first identification information, a feature point localization algorithm, e.g. the ESR (Explicit Shape Regression) algorithm, may be used to obtain the positions of the feature points of the face in the image. The first identification information of the unoccluded tracked face is then derived from those positions. The feature points may be, for example, points on the contours of the face, the eyes, the mouth, and the nose. Acquiring the identification information with such an algorithm is fast.
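Given located feature points, the geometric identification information can be assembled into a vector of distances. The sketch below assumes a hypothetical landmark dictionary; the key names and the choice of (x, y) coordinates are illustrative, not the output format of any particular localization algorithm.

```python
import numpy as np

def geometric_id(landmarks):
    """Identification vector: face length and width, inter-ocular distance,
    mouth width, and nose-tip-to-eyes distance.  The landmark keys used
    here are illustrative placeholders."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    dist = lambda a, b: float(np.linalg.norm(p[a] - p[b]))
    eyes_mid = (p["left_eye"] + p["right_eye"]) / 2.0  # midpoint between eyes
    return np.array([
        dist("forehead", "chin"),            # face length
        dist("cheek_left", "cheek_right"),   # face width
        dist("left_eye", "right_eye"),       # inter-ocular distance
        dist("mouth_left", "mouth_right"),   # mouth width
        float(np.linalg.norm(p["nose_tip"] - eyes_mid)),  # nose tip to eyes
    ])
```

Collecting the five quantities into one vector makes the later Euclidean-distance comparison against the pre-stored identification information a single norm computation.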
In step 102, a tracker corresponding to the first identification information is obtained according to the first identification information and the identification information pre-stored in the two trackers.
The two trackers have pre-stored identification information therein.
In one implementation, the identification information in the tracker is obtained when the tracked face a and the tracked face B are initially tracked.
In another implementation, the identification information of the two tracked faces is acquired and stored in the corresponding trackers only when tracked face A and tracked face B may come to overlap; performing the acquisition only when needed saves the storage and computing resources of the computing device. The specific process is as follows: before step 101, when the distance between the two tracked faces is smaller than a preset distance threshold, the identification information of the two tracked faces is acquired, and stored into the trackers according to the correspondence between tracked face, tracker, and identification information. For convenience, call the two trackers tracker A and tracker B, with tracker A corresponding to tracked face A and tracker B corresponding to tracked face B; after the identification information of tracked face A is acquired, it is stored in tracker A, and likewise the identification information of tracked face B is stored in tracker B. A distance between tracked face A and tracked face B below the preset threshold indicates that their paths may come to overlap. The distance between the two tracked faces may be calculated once their positions have been identified.
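The lazy storing step above can be sketched as follows. The dictionary fields (`center`, `id_vector`, `stored_id`) and the default threshold are assumptions of this sketch, not values specified by the disclosure.

```python
import numpy as np

def maybe_store_ids(tracker_a, tracker_b, face_a, face_b, dist_threshold=80.0):
    """Store each face's identification vector in its own tracker only
    once the two face centers come within `dist_threshold` pixels,
    i.e. only when an overlap may be about to occur."""
    gap = np.linalg.norm(np.asarray(face_a["center"], dtype=float)
                         - np.asarray(face_b["center"], dtype=float))
    if gap < dist_threshold:
        tracker_a["stored_id"] = face_a["id_vector"]
        tracker_b["stored_id"] = face_b["id_vector"]
    return bool(gap < dist_threshold)
```

Calling this once per frame keeps the trackers free of identification data for the common case where the two faces never approach each other.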
Alternatively, the identification information may include geometric information of the tracked face. The geometric information here may include: length, width, interocular distance, width of mouth, and nose tip to eye distance of a human face.
The obtaining of the first identification information of the first non-occluded tracked face includes: respectively acquiring the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm; and acquiring first identification information of the first tracked face according to the position.
Fig. 4 is a schematic diagram of acquiring identification information of a tracked face in the embodiment shown in fig. 2. As shown in fig. 4, the positions of the feature points 41 in the current frame image are obtained according to a feature point positioning algorithm, and then the identification information of the tracked face is obtained according to the positions of the feature points in the current frame image.
Optionally, when the identification information is the shapes of the facial features and their positions on the face, the tracker corresponding to the first identification information is determined as follows: a weighted difference combining the shape difference and the position difference between the first identification information and each piece of pre-stored identification information is acquired, and the tracker corresponding to the identification information whose difference from the first identification information is smaller than a preset difference threshold is determined as the tracker corresponding to the first identification information.
Optionally, when the identification information is geometric information, the comparison result is obtained as follows: the Euclidean distance between the first identification information and each piece of pre-stored identification information is acquired, and the tracker corresponding to the identification information whose Euclidean distance from the first identification information is smaller than a preset difference threshold is determined as the tracker corresponding to the first identification information. Computing the Euclidean distance is standard and is not described here again.
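The Euclidean-distance matching of this step can be sketched as below; the `stored_id` field name and the threshold value are assumptions of the sketch.

```python
import numpy as np

def match_tracker(id_vector, trackers, diff_threshold=15.0):
    """Return the tracker whose pre-stored identification vector lies
    within `diff_threshold` (Euclidean distance) of `id_vector`, or
    None when neither stored vector is close enough."""
    for tracker in trackers:
        if np.linalg.norm(np.asarray(id_vector) - tracker["stored_id"]) < diff_threshold:
            return tracker
    return None
```

Returning None when neither distance clears the threshold gives the caller a way to fall back (e.g. to re-detection) rather than forcing an uncertain assignment.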
In step 103, the first tracked face is assigned to the tracker corresponding to the first identification information for tracking, and the other tracked face is assigned to the other tracker for tracking.
After the first tracked face is assigned to the tracker corresponding to the first identification information, the filter of that tracker needs to be updated so that the face can be tracked in the next frame image; the filter in the tracker corresponding to the occluded tracked face may be left un-updated. The tracker further includes a filter, and the method further includes: determining a position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image, where the position area image serves as the sample data of the current frame; convolving the position area image with the filter in the tracker corresponding to the first tracked face in the previous frame image to obtain a response map of the first tracked face in the current frame image; determining the position corresponding to the maximum response value in the response map as the position of the first tracked face in the current frame image; and determining the filter in the tracker corresponding to the first tracked face in the current frame image according to the response map and the position area image, where the filter in the current frame image is used to determine the response map of the first tracked face in the next frame image. The filter in the tracker corresponding to the first tracked face is thereby updated. It should be noted that the filter in the tracker corresponding to the first tracked face in the previous frame image refers to the filter as updated in that previous frame.
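The filter-update step described above matches the structure of discriminative correlation-filter trackers. The sketch below follows a MOSSE-style formulation (correlation computed in the frequency domain, a Gaussian desired response, and a running-average filter update); the patent does not name a specific filter design, so the formulas, the Gaussian width, and the learning rate here are assumptions.

```python
import numpy as np

def make_filter(patch, peak, sigma=2.0, eps=1e-3):
    """Build a correlation filter (in the frequency domain) whose ideal
    response to `patch` is a Gaussian peaked at `peak`."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - peak[0]) ** 2 + (xs - peak[1]) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    # Regularized closed-form filter: correlating F with it reproduces G.
    return G * np.conj(F) / (F * np.conj(F) + eps)

def track_step(patch, H, lr=0.125):
    """One tracking step: correlate the position-area image with the filter,
    take the response-map maximum as the new face position, and update the
    filter for the next frame with a running average."""
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    peak = np.unravel_index(np.argmax(response), response.shape)
    H_new = (1 - lr) * H + lr * make_filter(patch, peak)
    return peak, response, H_new
```

The occluded face's tracker would simply skip `track_step`, keeping its last reliable filter until the occlusion ends.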
To further ensure tracking accuracy, when the two tracked faces overlap and then separate while the distance between them is still smaller than the preset distance threshold, it remains necessary to determine which tracked face should be assigned to which tracker. Therefore, after the two tracked faces separate, when the distance between them is smaller than the preset distance threshold, second identification information of either of the two tracked faces is obtained, and the tracker corresponding to the second identification information is obtained according to the second identification information and the identification information pre-stored in the two trackers; the tracked face corresponding to the second identification information is then assigned to that tracker for tracking, and the other tracked face is assigned to the other tracker for tracking. It should be noted that the identification information here may be geometric information of the face. The process of obtaining the tracker corresponding to the second identification information is the same as the process of obtaining the tracker corresponding to the first identification information in step 102, and is not described again here.
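The separation check can be sketched as follows; `id_info` and `match` stand for the feature-extraction and tracker-comparison steps described in the text above, and all record keys and helper signatures are illustrative assumptions:

```python
import math

def reassign_after_separation(faces, trackers, id_info, match, dist_threshold):
    """After two overlapped faces separate, re-check which face belongs to
    which tracker while they remain closer than the distance threshold.

    `faces` is a pair of face records and `trackers` a pair of trackers
    holding pre-stored identification info; `id_info` extracts identification
    info from a face record and `match` compares it against the trackers.
    """
    (xa, ya), (xb, yb) = faces[0]["center"], faces[1]["center"]
    if math.hypot(xb - xa, yb - ya) >= dist_threshold:
        return None  # faces are far apart: current assignments are trusted

    second_id = id_info(faces[0])          # either face may be used
    matched = match(second_id, trackers)   # tracker storing the matching info
    other = trackers[1] if matched is trackers[0] else trackers[0]
    return [(faces[0], matched), (faces[1], other)]
```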
In the face tracking method provided by this embodiment of the disclosure, when two tracked faces are determined to overlap in the current frame image, first identification information of the non-occluded first tracked face is obtained; the tracker corresponding to the first identification information is obtained according to the first identification information and the identification information pre-stored in the two trackers; the first tracked face is assigned to that tracker for tracking, and the other tracked face is assigned to the other tracker for tracking. In this way, when two tracked faces occlude each other, the non-occluded face is assigned to the correct tracker by comparing its identification information with the identification information pre-stored in both trackers, which improves tracking accuracy, eliminates tracking errors, and improves the continuity of the tracking result.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
FIG. 5 is a block diagram illustrating a face tracking device according to an exemplary embodiment. As shown in fig. 5, the face tracking apparatus provided by the present disclosure includes the following modules: a first obtaining module 51, a second obtaining module 52, and a first distribution module 53.
The first obtaining module 51 is configured to obtain, in the current frame image, first identification information of a first tracked face that is not occluded when it is determined that two tracked faces overlap.
The second obtaining module 52 is configured to obtain the tracker corresponding to the first identification information according to the first identification information and the identification information pre-stored in the two trackers.
Optionally, the identification information includes geometric information of the tracked face. The geometric information includes: the length and width of the face, the inter-ocular distance, the width of the mouth, and the distance from the nose tip to the eyes. The second obtaining module 52 includes: a first obtaining sub-module 521 and a first determining sub-module 522. The first obtaining sub-module 521 is configured to obtain the Euclidean distance between the first identification information and each piece of pre-stored identification information. The first determining sub-module 522 is configured to determine the tracker corresponding to the identification information whose Euclidean distance to the first identification information is smaller than the preset difference threshold as the tracker corresponding to the first identification information.
Optionally, the first obtaining module 51 includes: a second obtaining sub-module 511 and a third obtaining sub-module 512. The second obtaining sub-module 511 is configured to obtain the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm. The third obtaining sub-module 512 is configured to obtain the first identification information of the first tracked face according to those positions.
The first distribution module 53 is configured to assign the first tracked face to the tracker corresponding to the first identification information for tracking, and assign the other tracked face to the other tracker for tracking.
In the face tracking device provided by this embodiment of the disclosure, the first obtaining module is configured to obtain first identification information of the non-occluded first tracked face when two tracked faces are determined to overlap in the current frame image; the second obtaining module is configured to obtain the tracker corresponding to the first identification information according to the first identification information and the identification information pre-stored in the two trackers; and the first distribution module is configured to assign the first tracked face to that tracker for tracking and assign the other tracked face to the other tracker for tracking. When two tracked faces occlude each other, the non-occluded face is thus assigned to the correct tracker by comparing its identification information with the identification information pre-stored in the two trackers, which improves tracking accuracy, eliminates tracking errors, and improves the continuity of the tracking result.
FIG. 6 is a block diagram illustrating a face tracking device according to another exemplary embodiment. This embodiment is based on the embodiment shown in fig. 5, and other modules of the face tracking apparatus are explained in detail. As shown in fig. 6, the face tracking apparatus provided in the embodiment of the present disclosure further includes, on the basis of the apparatus shown in fig. 5: a third acquisition module 61, a fourth acquisition module 62 and a second allocation module 63.
The third obtaining module 61 is configured to obtain second identification information of either of the two tracked faces when the two tracked faces have overlapped and then separated and the distance between them is smaller than a preset distance threshold.
The fourth obtaining module 62 is configured to obtain the tracker corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers.
The second allocating module 63 is configured to assign the tracked face corresponding to the second identification information to the tracker corresponding to the second identification information for tracking, and assign the other tracked face to the other tracker for tracking.
In the face tracking device provided by this embodiment of the disclosure, the third obtaining module is configured to obtain second identification information of either tracked face when the two tracked faces have overlapped and separated and the distance between them is smaller than a preset distance threshold; the fourth obtaining module is configured to obtain the tracker corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers; and the second allocating module is configured to assign the tracked face corresponding to the second identification information to that tracker for tracking and assign the other tracked face to the other tracker for tracking, thereby further ensuring tracking accuracy.
FIG. 7 is a block diagram illustrating a face tracking device according to yet another exemplary embodiment. This embodiment is based on the embodiment shown in fig. 5, and other modules of the face tracking apparatus are explained in detail. As shown in fig. 7, the face tracking apparatus provided in the embodiment of the present disclosure further includes, on the basis of the apparatus shown in fig. 5: a fifth acquisition module 71 and a storage module 72.
The fifth obtaining module 71 is configured to respectively obtain the identification information of the two tracked faces when the distance between the two tracked faces is determined to be smaller than the preset distance threshold.
The storage module 72 is configured to respectively store the identification information into the trackers according to the correspondence between the tracked faces, the trackers, and the identification information.
In the face tracking device provided by this embodiment of the disclosure, the fifth obtaining module is configured to respectively obtain the identification information of the two tracked faces when the distance between them is determined to be smaller than the preset distance threshold, and the storage module is configured to respectively store the identification information into the corresponding trackers. The identification information of the two tracked faces is therefore obtained and stored only when the two faces may overlap, which saves the storage and computation resources of the face tracking device.
FIG. 8 is a block diagram illustrating a face tracking device according to yet another exemplary embodiment. This embodiment provides a detailed description of the first distribution module based on the embodiment shown in fig. 5. In the face tracking apparatus provided in this embodiment, the tracker further includes a filter. Accordingly, as shown in fig. 8, the first distribution module 53 includes: a second determination submodule 531, a fourth acquisition submodule 532, a third determination submodule 533, and a fourth determination submodule 534.
The second determining sub-module 531 is configured to determine the position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image.
The fourth obtaining sub-module 532 is configured to convolve the position area image with the filter in the tracker corresponding to the first tracked face in the previous frame image to obtain the response map of the first tracked face in the current frame image.
The third determining sub-module 533 is configured to determine the position corresponding to the maximum response value in the response map as the position of the first tracked face in the current frame image.
The fourth determining sub-module 534 is configured to determine the filter in the tracker corresponding to the first tracked face in the current frame image according to the response map and the position area image.
The filter in the current frame image is used to determine the response map of the first tracked face in the next frame image.
In the face tracking device provided by this embodiment of the disclosure, the second determining sub-module determines the position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image; the fourth obtaining sub-module convolves the position area image with the filter in the tracker corresponding to the first tracked face in the previous frame image to obtain the response map of the first tracked face in the current frame image; the third determining sub-module determines the position corresponding to the maximum response value in the response map as the position of the first tracked face in the current frame image; and the fourth determining sub-module determines the filter in the tracker corresponding to the first tracked face in the current frame image according to the response map and the position area image. The position of the first tracked face can thus be determined from the response map and the filter in the corresponding tracker updated, which further improves tracking accuracy.
Having described the internal functions and structure of the face tracking device, FIG. 9 is a block diagram of a face tracking device according to an exemplary embodiment. As shown in fig. 9, the face tracking apparatus may be implemented as:
a processor 91;
a memory 92 for storing instructions executable by the processor 91;
wherein the processor 91 is configured to:
in a current frame image, when two tracked faces are determined to be overlapped, acquiring first identification information of a first tracked face which is not shielded;
acquiring trackers corresponding to the first identification information according to the first identification information and the identification information prestored in the two trackers;
and allocating the first tracked face to the tracker corresponding to the first identification information for tracking, and allocating the other tracked face to the other tracker for tracking.
FIG. 10 is a block diagram illustrating a face tracking device according to an exemplary embodiment. For example, the face tracking apparatus 1900 may be provided as a server. Referring to FIG. 10, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions (e.g., application programs) executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the face tracking method described above.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The present disclosure also provides a computer-readable storage medium. The storage medium contains computer-executable instructions that, when executed by a processor of a face tracking apparatus, enable the face tracking apparatus to perform the face tracking method described above. The computer readable storage medium may be a non-transitory storage medium.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (14)
1. A face tracking method, comprising:
in a current frame image, when the distance between two tracked faces is determined to be smaller than a preset distance threshold, respectively acquiring identification information of the two tracked faces; respectively storing the identification information into the tracker according to the corresponding relation between the tracked face and the tracker and the identification information; when the two tracked faces are determined to be overlapped, acquiring first identification information of a first tracked face which is not shielded;
acquiring the tracker corresponding to the first identification information according to the first identification information and identification information pre-stored in the two trackers, wherein the identification information is used for identifying the face, and the identification information comprises the facial features and the positions of the facial features on the face, or geometric information of the face;
distributing the first tracked face to a tracker corresponding to the first identification information for tracking, and distributing the other tracked face to the other tracker for tracking;
when the pre-stored identification information is the shapes of the facial features and the positions of the facial features on the face, the process of acquiring the tracker corresponding to the first identification information is as follows: acquiring a weighted sum of the shape difference and the position difference between the first identification information and each piece of pre-stored identification information, and determining the tracker corresponding to the identification information whose weighted difference from the first identification information is smaller than a preset difference threshold as the tracker corresponding to the first identification information.
2. The method of claim 1, further comprising:
when the two tracked faces are overlapped and separated and the distance between the two tracked faces is smaller than a preset distance threshold, acquiring second identification information of any one of the two tracked faces;
acquiring trackers corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers;
and allocating the tracked face corresponding to the second identification information to a tracker corresponding to the second identification information for tracking, and allocating the other tracked face to the other tracker for tracking.
3. The method according to claim 1 or 2, wherein the identification information includes geometric information of the tracked face;
the acquiring the tracker corresponding to the first identification information according to the first identification information and the identification information pre-stored in the two trackers includes:
acquiring Euclidean distance between the first identification information and each piece of pre-stored identification information;
and determining a tracker corresponding to the identification information of which the Euclidean distance from the first identification information is smaller than a preset difference threshold value as the tracker corresponding to the first identification information.
4. The method of claim 3, wherein the geometric information comprises: length, width, interocular distance, width of mouth, and nose tip to eye distance of a human face.
5. The method of claim 3, wherein the obtaining first identification information of the unobstructed first tracked face comprises:
respectively acquiring the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm;
and acquiring first identification information of the first tracked face according to the position.
6. The method according to claim 1, wherein the tracker further includes a filter, and the assigning the first tracked face to the tracker corresponding to the first identification information for tracking processing includes:
determining a position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image;
convolving the position area image with a filter in a tracker corresponding to the first tracked face in the previous frame image to obtain a response image of the first tracked face in the current frame image;
determining the position corresponding to the maximum response value in the response image as the position of the first tracked face in the current frame image;
and determining a filter in a tracker corresponding to the first tracked face in the current frame image according to the response image and the position area image, wherein the filter in the current frame image is used for determining the response image of the first tracked face in the next frame image.
7. A face tracking device, comprising:
the fifth acquisition module is configured to respectively acquire identification information of two tracked faces when the distance between the two tracked faces is determined to be smaller than a preset distance threshold;
the storage module is configured to store the identification information into the tracker according to the corresponding relation between the tracked human face and the tracker and the identification information respectively;
the first acquisition module is configured to acquire first identification information of a first tracked face which is not shielded when two tracked faces are determined to be overlapped in a current frame image;
the second acquisition module is configured to acquire the tracker corresponding to the first identification information according to the first identification information and identification information pre-stored in the two trackers, wherein the identification information is used for identifying the face, and the identification information comprises the facial features and the positions of the facial features on the face, or geometric information of the face;
the first distribution module is configured to distribute the first tracked face to a tracker corresponding to the first identification information for tracking processing, and distribute the other tracked face to the other tracker for tracking processing;
the second obtaining module is further configured to:
when the pre-stored identification information is the shapes of the facial features and the positions of the facial features on the face, the process of acquiring the tracker corresponding to the first identification information is as follows: acquiring a weighted sum of the shape difference and the position difference between the first identification information and each piece of pre-stored identification information, and determining the tracker corresponding to the identification information whose weighted difference from the first identification information is smaller than a preset difference threshold as the tracker corresponding to the first identification information.
8. The apparatus of claim 7, further comprising:
the third acquisition module is configured to acquire second identification information of any one of the two tracked faces when the two tracked faces are overlapped and separated and the distance between the two tracked faces is smaller than a preset distance threshold;
the fourth acquisition module is configured to acquire the tracker corresponding to the second identification information according to the second identification information and the identification information pre-stored in the two trackers;
and the second allocating module is configured to allocate the tracked face corresponding to the second identification information to the tracker corresponding to the second identification information for tracking processing, and allocate another tracked face to another tracker for tracking processing.
9. The apparatus according to claim 7 or 8, wherein the identification information comprises geometric information of a tracked face;
the second acquisition module includes:
a first obtaining sub-module configured to obtain a euclidean distance between the first identification information and each piece of pre-stored identification information;
a first determining sub-module configured to determine, as a tracker corresponding to the first identification information, a tracker corresponding to identification information whose euclidean distance to the first identification information is smaller than a preset difference threshold.
10. The apparatus of claim 9, wherein the geometric information comprises: length, width, interocular distance, width of mouth, and nose tip to eye distance of a human face.
11. The apparatus of claim 9, wherein the first obtaining module comprises:
the second acquisition sub-module is configured to respectively acquire the positions of the feature points of the first tracked face in the current frame image according to a feature point positioning algorithm;
a third obtaining sub-module configured to obtain first identification information of the first tracked face according to the position.
12. The apparatus of claim 7, wherein the tracker further comprises a filter, and the first distribution module comprises:
a second determining sub-module configured to determine a position area image of the first tracked face in the current frame image according to the position of the first tracked face in the previous frame image of the current frame image;
a fourth obtaining sub-module, configured to convolve the position area image with a filter in a tracker corresponding to the first tracked face in a previous frame of image, and obtain a response map of the first tracked face in the current frame of image;
a third determining submodule configured to determine a position corresponding to a maximum response value in the response map as a position of the first tracked face in the current frame image;
and the fourth determining sub-module is configured to determine a filter in a tracker corresponding to the first tracked face in the current frame image according to the response map and the position area image, wherein the filter in the current frame image is used for determining the response map of the first tracked face in the next frame image.
13. A face tracking device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
in a current frame image, when the distance between two tracked faces is determined to be smaller than a preset distance threshold, respectively acquiring identification information of the two tracked faces; respectively storing the identification information into the tracker according to the corresponding relation between the tracked face and the tracker and the identification information; when the two tracked faces are determined to be overlapped, acquiring first identification information of a first tracked face which is not shielded;
acquiring the tracker corresponding to the first identification information according to the first identification information and identification information pre-stored in the two trackers, wherein the identification information is used for identifying the face, and the identification information comprises the facial features and the positions of the facial features on the face, or geometric information of the face;
distributing the first tracked face to a tracker corresponding to the first identification information for tracking, and distributing the other tracked face to the other tracker for tracking;
when the pre-stored identification information is the shapes of the facial features and the positions of the facial features on the face, the process of acquiring the tracker corresponding to the first identification information is as follows: acquiring a weighted sum of the shape difference and the position difference between the first identification information and each piece of pre-stored identification information, and determining the tracker corresponding to the identification information whose weighted difference from the first identification information is smaller than a preset difference threshold as the tracker corresponding to the first identification information.
14. A computer-readable storage medium containing computer-executable instructions that, when executed by a processor of a face tracking apparatus, cause the face tracking apparatus to perform the face tracking method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710493142.XA CN107341460B (en) | 2017-06-26 | 2017-06-26 | Face tracking method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710493142.XA CN107341460B (en) | 2017-06-26 | 2017-06-26 | Face tracking method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107341460A CN107341460A (en) | 2017-11-10 |
CN107341460B true CN107341460B (en) | 2022-04-22 |
Family
ID=60221637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710493142.XA Active CN107341460B (en) | 2017-06-26 | 2017-06-26 | Face tracking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107341460B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629283B (en) * | 2018-04-02 | 2022-04-08 | 北京小米移动软件有限公司 | Face tracking method, device, equipment and storage medium |
CN108734722B (en) * | 2018-04-18 | 2022-03-15 | 南京邮电大学 | Visual tracking error correction method based on PSR |
CN110675426B (en) * | 2018-07-02 | 2022-11-22 | 百度在线网络技术(北京)有限公司 | Human body tracking method, device, equipment and storage medium |
CN111626213A (en) * | 2020-05-27 | 2020-09-04 | 北京嘀嘀无限科技发展有限公司 | Identity authentication method and device, electronic equipment and readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101908149A (en) * | 2010-07-06 | 2010-12-08 | 北京理工大学 | Method for identifying facial expressions from human face image sequence |
CN102496009A (en) * | 2011-12-09 | 2012-06-13 | 北京汉邦高科数字技术股份有限公司 | Multi-face tracking method for intelligent bank video monitoring |
CN102542249A (en) * | 2010-11-01 | 2012-07-04 | Microsoft Corporation | Face recognition in video content |
US8233677B2 (en) * | 2007-07-04 | 2012-07-31 | Sanyo Electric Co., Ltd. | Image sensing apparatus and image file data structure |
CN105678288A (en) * | 2016-03-04 | 2016-06-15 | 北京邮电大学 | Target tracking method and device |
CN106651913A (en) * | 2016-11-29 | 2017-05-10 | 开易(北京)科技有限公司 | Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9373023B2 (en) * | 2012-02-22 | 2016-06-21 | Sri International | Method and apparatus for robustly collecting facial, ocular, and iris images using a single sensor |
- 2017-06-26: Application CN201710493142.XA filed in China (CN); patent CN107341460B granted, status Active
Non-Patent Citations (1)
Title |
---|
Research on Face Detection and Tracking Algorithms; Yan Zhen; China Master's Theses Full-text Database, Information Science and Technology; 2013-06-15 (No. 6); pp. 48-49 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11003893B2 (en) | Face location tracking method, apparatus, and electronic device | |
CN108629283B (en) | Face tracking method, device, equipment and storage medium | |
US10244164B1 (en) | Systems and methods for image stitching | |
CN107341460B (en) | Face tracking method and device | |
US10580206B2 (en) | Method and apparatus for constructing three-dimensional map | |
US9519956B2 (en) | Processing stereo images | |
US11042966B2 (en) | Method, electronic device, and storage medium for obtaining depth image | |
US20180268207A1 (en) | Method for automatic facial impression transformation, recording medium and device for performing the method | |
WO2022052582A1 (en) | Image registration method and device, electronic apparatus, and storage medium | |
WO2019042419A1 (en) | Image tracking point acquisition method and device, and storage medium | |
CN108234858B (en) | Image blurring processing method and device, storage medium and electronic equipment | |
US20130215234A1 (en) | Method and apparatus for stereo matching | |
EP3275213B1 (en) | Method and apparatus for driving an array of loudspeakers with drive signals | |
CN106683100A (en) | Image segmentation and defogging method and terminal | |
US20200394819A1 (en) | Systems and methods for augmented reality applications | |
US9659372B2 (en) | Video disparity estimate space-time refinement method and codec | |
CN107992790A (en) | Target long time-tracking method and system, storage medium and electric terminal | |
CN110651274A (en) | Movable platform control method and device and movable platform | |
CN114037087B (en) | Model training method and device, depth prediction method and device, equipment and medium | |
CN108010052A (en) | Method for tracking target and system, storage medium and electric terminal in complex scene | |
JPWO2015198592A1 (en) | Information processing apparatus, information processing method, and information processing program | |
CN115953813B (en) | Expression driving method, device, equipment and storage medium | |
KR101920159B1 (en) | Stereo Matching Method and Device using Support point interpolation | |
CN113077396B (en) | Straight line segment detection method and device, computer readable medium and electronic equipment | |
CN115937290A (en) | Image depth estimation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||