
CN101499132B - Three-dimensional transformation search method for extracting characteristic points in human face image - Google Patents


Info

Publication number
CN101499132B
CN101499132B (application CN200910037867A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910037867
Other languages
Chinese (zh)
Other versions
CN101499132A (en)
Inventor
易法令
熊伟
黄展鹏
赵洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU HENGBIKANG INFORMATION TECHNOLOGY CO.,LTD.
Guangdong Pharmaceutical University
Original Assignee
Guangdong Pharmaceutical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Pharmaceutical University filed Critical Guangdong Pharmaceutical University
Priority to CN 200910037867 priority Critical patent/CN101499132B/en
Publication of CN101499132A publication Critical patent/CN101499132A/en
Application granted granted Critical
Publication of CN101499132B publication Critical patent/CN101499132B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional transformation search method for extracting feature points from a face image. Building on the ASM (Active Shape Models) face-localization method, it replaces the two-dimensional transformation used in the current ASM shape search with a three-dimensional one. The method comprises the following steps: first, construct a standard three-dimensional face model; second, derive third-dimension coordinates for the two-dimensional statistical model (basic shape) of the face feature points of the ASM training set from the standard model; finally, apply a three-dimensional transformation to the statistical model augmented with third-dimension coordinates and project it onto the two-dimensional plane to approximate the given feature-point shape found by the search, the search adopting a two-step-transformation, iterative-approximation scheme. The method reflects real changes of face pose and therefore searches more precisely; test results show that it approaches the true feature points more closely than the existing two-dimensional transformation search method.

Description

A three-dimensional transformation search method for extracting feature points in a facial image
Technical field
The invention belongs to the field of face recognition and specifically relates to a method for extracting the feature points of facial organs.
Background technology
Face recognition is a biometric identification technology that identifies a person from facial feature information; extracting face feature points is the basis of recognition. As a means of personal identification, face recognition has broad application prospects. At present, although some commercial face recognition systems have entered the market, these technologies and systems are still some distance from practical use, and their performance and accuracy leave much room for improvement. Feature points are currently extracted mostly with the ASM (Active Shape Models) localization method, which generally comprises three steps: (1) obtain a true shape description by aligning the training sample set; (2) capture the statistical information of the aligned shapes; (3) search for shape instances in the image. The method works well on roughly frontal faces, but poorly on faces deflected by some angle. Analysis and experiment show that this is related to the way the image is searched: the current ASM method approaches the target shape by rotating, scaling and translating a two-dimensional basic shape, whereas a face is a three-dimensional object, so those operations cannot fully reflect changes of face pose, and a large gap can remain when the shape search converges.
Summary of the invention
The object of the invention is to address the above problems by providing a method that searches for changes of face pose from a three-dimensional viewpoint; the method improves the accuracy of the ASM face shape search and thereby the precision and efficiency of the whole face recognition system.
The technical scheme of the present invention is as follows:
A three-dimensional transformation search method for extracting feature points in a facial image: first construct a standard three-dimensional face model; next obtain the third-dimension coordinates of the two-dimensional face feature points on the basis of this model; finally carry out the three-dimensionally transformed ASM shape search on the basis of the three-dimensional coordinates. The method comprises the following steps:
(1) Construct a standard three-dimensional face model comprising the three-dimensional coordinates (x, y, z) of the face feature points, the front of the face being the XY plane;
(2) on the basis of the standard three-dimensional model and the two-dimensional statistical model (x_1, y_1, x_2, y_2, ..., x_n, y_n) of the face feature points of the ASM training set, determine proportionally the third-dimension coordinate z_i of each feature point of the statistical model;
(3) rotate the basic shape, now carrying three-dimensional coordinates, about the Z, X and Y axes, scale and translate it, project the transformed result onto the XY plane, and use the projection to approximate the shape of the current search.
In step (1) the standard three-dimensional face model is built by actual measurement: more than 50 faces are selected from the training set and the three-dimensional coordinates (x_1, y_1, z_1, x_2, y_2, z_2, ..., x_n, y_n, z_n) of each feature point are measured, the third-dimension (z) coordinate being measured from the central plane of the neck as the zero plane; the data are normalized and then averaged, which yields the standard three-dimensional face model.
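The measure-normalize-average construction above can be sketched as follows. This is only a sketch: the patent states that the data are normalized and then averaged but does not specify the normalization, so centring each face and scaling it to unit norm is an assumption here, and the function name is illustrative.

```python
import numpy as np

def build_standard_model(measured_faces):
    """Average per-face feature coordinates after normalization.

    measured_faces: list of (n, 3) arrays of measured (x, y, z) feature
    points, with z measured from the central plane of the neck (z = 0).
    Normalization choice (centre + unit norm) is an assumption; the
    patent only says "normalize, then average".
    """
    normalized = []
    for pts in measured_faces:
        pts = np.asarray(pts, dtype=float)
        centered = pts - pts.mean(axis=0)      # remove translation
        scale = np.linalg.norm(centered)       # remove overall size
        normalized.append(centered / scale)
    return np.mean(normalized, axis=0)         # standard 3-D model
```

With this normalization, two measurements of the same face at different scales contribute identically to the average.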
Step (2) is realized as follows:
1) Build the third-dimension (Z-direction) coordinate array SZ = [z_1, z_2, ..., z_n] corresponding to the two-dimensional statistical model of the face feature points of the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the feature points of the ASM training set.
2) Choose three feature points in the standard three-dimensional face model, record their coordinates (x, y) in the XY plane, and compute from these three points. The three chosen points are the outer corners of the two eyes and the nose point (P1, P2, P3, corresponding to points 13, 26 and 41 in Fig. 2). In the standard model their plane coordinates are known, say (xc1, yc1), (xc2, yc2), (xc3, yc3); the corresponding three points of the two-dimensional statistical model of the ASM training set are also known, say (x1, y1), (x2, y2), (x3, y3).
3) Compute the horizontal scale factor C_x from P1 and P2, and the vertical scale factor C_y from the midpoint of P1P2 and P3:
C_x = √((x2 - x1)^2 + (y2 - y1)^2) / (xc2 - xc1)
C_y = √((x3 - (x1 + x2)/2)^2 + (y3 - (y1 + y2)/2)^2) / (yc3 - yc1)
The scale factor in the Z direction is taken as their mean:
C_z = (C_x + C_y) / 2
Multiplying the third-dimension coordinate array SZ by C_z gives the third-dimension coordinates of the two-dimensional face image, i.e. the Z-axis coordinates.
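The scaling computation above can be sketched directly; the three points are passed as (x, y) pairs and the function name is illustrative:

```python
import math

def third_dim_coords(p1, p2, p3, pc1, pc2, pc3, SZ):
    """Scale the standard model's z array SZ to a given 2-D shape.

    p1, p2, p3: (x, y) of the two outer eye corners and the nose point
    in the 2-D statistical model; pc1, pc2, pc3: the same three points
    in the standard model's XY plane. Follows the patent's formulas:
    C_x from P1-P2, C_y from the P1P2 midpoint and P3, C_z = mean.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    (xc1, yc1), (xc2, yc2), (xc3, yc3) = pc1, pc2, pc3
    cx = math.hypot(x2 - x1, y2 - y1) / (xc2 - xc1)
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2          # midpoint of P1P2
    cy = math.hypot(x3 - mx, y3 - my) / (yc3 - yc1)
    cz = (cx + cy) / 2
    return [cz * z for z in SZ]
```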
Step (3) requires finding the rotation angles θ_x, θ_y, θ_z about the three axes, the scale parameters S_x, S_y, S_z, and the offsets T_x, T_y, T_z along the three axes; for convenience of computation T_z is set to 0. Given an original face shape vector x and a target shape vector x', both projections of three-dimensional faces onto the XY plane, a geometric transformation M is applied to minimize the distance between M(x) and x', i.e. to minimize:
E(θ_x, θ_y, θ_z, S_x, S_y, S_z, T_x, T_y) = |M(x) - x'|^2    (1)
The method of two-step transformation with iterative approximation is adopted to drive formula (1) toward the optimal parameter values.
The first step of the two-step transformation is a rotation about the Z axis together with a translation in the XY plane. The detailed process is as follows: given two similar shapes x and x', find a rotation angle θ, a scale s and a translation t such that the geometric transformation X = M(s, θ)[x] + t brings the transformed x closest to x':
E = (M(s, θ)[x] + t - x')^T (M(s, θ)[x] + t - x')    (2)
where
M(s, θ)(x_i, y_i)^T = ((s·cos θ)·x_i - (s·sin θ)·y_i, (s·sin θ)·x_i + (s·cos θ)·y_i)^T
t = (t_x, t_y, ..., t_x, t_y)^T
Letting a = s·cos θ and b = s·sin θ, so that s^2 = a^2 + b^2 and θ = tan^{-1}(b/a), this becomes:
M(s, θ)(x_i, y_i)^T = (a·x_i - b·y_i, b·x_i + a·y_i)^T + (t_x, t_y)^T    (3)
Here a, b, t_x, t_y are the four pose parameters to be computed; they are chosen to minimize E in formula (2), so that the computed transformation matches the actual change.
The second step of the two-step transformation consists of rotations about the Y and X axes followed by projection onto the XY plane. The implementation is as follows:
Let (X_z, Y_z) be the coordinates after the in-plane rotation (about the Z axis) and translation (the z coordinate is unchanged). The point is first rotated about the Y axis by θ_y with scale factor S_y and horizontal offset T_x', giving:
X_y = X_z·S_y·cos θ_y + Z·S_y·sin θ_y + T_x';  Y_y = Y_z;  Z_y = -X_z·S_y·sin θ_y + Z·S_y·cos θ_y    (6)
It is then rotated about the X axis by θ_x with scale factor S_x and vertical offset T_y', giving:
X_x = X_y;  Y_x = Y_y·S_x·cos θ_x - Z_y·S_x·sin θ_x + T_y';  Z_x = Y_y·S_x·sin θ_x + Z_y·S_x·cos θ_x    (7)
Combining the two formulas and projecting onto the XY plane yields the transformed coordinates:
X_e = X_x = X_z·S_y·cos θ_y + Z·S_y·sin θ_y + T_x'
Y_e = Y_x = Y_z·S_x·cos θ_x - (-X_z·S_y·sin θ_y + Z·S_y·cos θ_y)·S_x·sin θ_x + T_y'    (8)
Letting a_y = S_y·cos θ_y, b_y = S_y·sin θ_y, a_x = S_x·cos θ_x, b_x = S_x·sin θ_x in equation (8), and writing the actual transformed coordinates as (x', y'), substitution into formula (1) gives:
|X_z·a_y + Z·b_y + T_x' - x'|^2 + |Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y' - y'|^2    (9)
Minimizing (9) by taking partial derivatives with respect to the parameters gives:
X_z·a_y + Z·b_y + T_x' - x' = 0    (10)
Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y' - y' = 0    (11)
Multiple linear regression on formula (10) yields the parameter values a_y, b_y, T_x'. With n feature points in total, the detailed process is as follows:
1) Compute the means:
X̄_z = (1/n) Σ_{i=1}^{n} X_zi;  Z̄ = (1/n) Σ_{i=1}^{n} Z_i;  x̄' = (1/n) Σ_{i=1}^{n} x'_i
2) Compute the deviation sums:
S_11 = Σ_{i=1}^{n} (X_zi - X̄_z)^2
S_22 = Σ_{i=1}^{n} (Z_i - Z̄)^2
L = Σ_{i=1}^{n} (x'_i - x̄')^2
S_12 = S_21 = Σ_{i=1}^{n} (X_zi - X̄_z)(Z_i - Z̄)
S_10 = Σ_{i=1}^{n} (X_zi - X̄_z)(x'_i - x̄')
S_20 = Σ_{i=1}^{n} (Z_i - Z̄)(x'_i - x̄')
3) Then:
a_y = (S_10·S_22 - S_20·S_12) / (S_11·S_22 - S_12^2)
b_y = (S_20·S_11 - S_10·S_21) / (S_11·S_22 - S_12^2)
T_x' = x̄' - a_y·X̄_z - b_y·Z̄
Substituting the values of a_y, b_y, T_x' into formula (11) and proceeding in the same way yields a_x, b_x, T_y'. Denoting the second-part transformation by M_2, then M_2(a_y, b_y, T_x', a_x, b_x, T_y')(X_zi, Y_zi)^T comes closest to the target points.
The iterative approximation refers to obtaining the intermediate state, i.e. the coordinates (X_z, Y_z) of the feature points after the in-plane rotation about the Z axis and translation, by repeated iteration. With the intermediate state written (X_z, Y_z), the concrete steps are as follows:
1) initially let (X_z, Y_z) be the final values (x', y');
2) substitute (X_z, Y_z) for x' in formula (1) and, following the current two-dimensional ASM transformation method, solve for the four transformation parameters a, b, t_x, t_y of formula (3);
3) substitute a, b, t_x, t_y into formula (3) to obtain the intermediate state (X_z, Y_z);
4) apply the second-part transformation M_2 to (X_z, Y_z) to obtain the parameters a_y, b_y, T_x', a_x, b_x, T_y';
5) starting from (x', y'), compute the inverse of M_2 to obtain the intermediate state (X'_z, Y'_z), namely:
(X'_z, Y'_z) = M_2(a_y, b_y, T_x', a_x, b_x, T_y')^{-1}(x', y')
6) substitute (X'_z, Y'_z) into formula (1) and solve for the four parameters a, b, t_x, t_y of formula (3) by the two-dimensional ASM transformation method; then return to step 3) and iterate; ten iterations suffice to obtain the ten parameters to the required accuracy.
The beneficial effect of the present invention relative to the prior art is: compared with the current two-dimensional search method, the three-dimensional ASM transformation search reflects changes of face pose more truly and therefore approximates the feature points better during the search.
Description of drawings
The present invention is explained in further detail below in conjunction with the drawings and specific embodiments.
Fig. 1 is the flow chart of the three-dimensional face search method of the present invention;
Fig. 2 is a schematic diagram of the feature-point calibration of a two-dimensional face image used in the tests;
Fig. 3 compares the relative approximation degree on training-set images when given concrete feature-point coordinates are approximated;
Fig. 4 compares the relative approximation degree on non-training-set images when given concrete feature-point coordinates are approximated;
Fig. 5 compares the relative approximation degree on training-set images in actual face search;
Fig. 6 compares the relative approximation degree on non-training-set images in actual face search.
Embodiment
The flow chart of the three-dimensional transformation search method of the present invention is shown in Fig. 1. Human heads and facial organs are similar in shape and position; on this basis a standard three-dimensional face model can be constructed. The present invention uses this standard model to determine the three-dimensional coordinates of the facial feature points, then carries out the three-dimensionally transformed ASM shape search on the basis of those coordinates; since the actual search operates on a two-dimensional image, the result must finally be projected onto the two-dimensional plane. The three-dimensional search transformation proceeds as follows:
First, construct the standard three-dimensional face model (x_1, y_1, z_1, x_2, y_2, z_2, ..., x_n, y_n, z_n), the front of the face being the XY plane; the feature points of the model should include those of the actual two-dimensional image (for adaptability of the embodiment, the standard model may contain more points than the two-dimensional image).
Second, when searching a two-dimensional face image, determine proportionally, on the basis of the standard model and the two-dimensional statistical model (basic shape) (x_1, y_1, x_2, y_2, ..., x_n, y_n) of the face feature points of the ASM training set, the third-dimension coordinate z_i of each feature point of the statistical model.
Third, during the shape-search transformation, rotate the basic shape about the Z, X and Y axes, scale and translate it, project the transformed result onto the XY plane, and use the projection to approximate the shape of the current search.
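The out-of-plane rotation and projection used in the third step (detailed later as equations (6)-(8)) can be sketched for a single point as follows; the parameter names mirror the text and the function name is illustrative:

```python
import math

def rotate_and_project(Xz, Yz, Z, theta_y, Sy, Tx, theta_x, Sx, Ty):
    """Forward form of the second-step transform (eqs. (6)-(8)).

    (Xz, Yz, Z): a point already rotated/translated about the Z axis.
    Rotate by theta_y about Y (scale Sy, horizontal offset Tx), then by
    theta_x about X (scale Sx, vertical offset Ty), then drop the z
    coordinate to project onto the XY plane.
    """
    # Rotation about the Y axis, eq. (6)
    Xy = Xz * Sy * math.cos(theta_y) + Z * Sy * math.sin(theta_y) + Tx
    Yy = Yz
    Zy = -Xz * Sy * math.sin(theta_y) + Z * Sy * math.cos(theta_y)
    # Rotation about the X axis, eq. (7), keeping only the projected part
    Yx = Yy * Sx * math.cos(theta_x) - Zy * Sx * math.sin(theta_x) + Ty
    # Projection onto the XY plane, eq. (8)
    return Xy, Yx
```

With zero angles, unit scales and zero offsets the transform is the identity; a 90° rotation about Y maps the z coordinate onto x.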
Three aspects are described below: first, obtaining the third-dimension coordinates of the two-dimensional face feature points from the standard three-dimensional face model; second, obtaining the ten transformation parameters of the three-dimensional search by two-step transformation and iteration; third, testing the implementation effect.
(1) Obtaining the third-dimension coordinates of the two-dimensional statistical model of the face feature points
Generally speaking, the relative positions of the facial organs are essentially fixed in the two-dimensional plane, and the heights of the third dimension (the facial relief) are also basically consistent. Although the heights of the facial organs differ somewhat from person to person, for example some noses are higher and some lower, the differences become very small once the central plane of the neck (the centre of the three-dimensional rotation) is taken as the reference plane of the third-dimension coordinate, and they do not affect the precision of the search approximation. The standard three-dimensional face model likewise consists of feature points, represented as three-dimensional coordinates (x_1, y_1, z_1, x_2, y_2, z_2, ..., x_n, y_n, z_n); it should contain all the feature points selected in the two-dimensional face. Fig. 2 shows the feature points selected on the two-dimensional images in the tests, 59 feature points in all. The third-dimension coordinates of the two-dimensional statistical model of the face feature points are obtained as follows:
(1) Build the third-dimension (Z-direction) coordinate array SZ = [z_1, z_2, ..., z_n] corresponding to the two-dimensional statistical model of the face feature points of the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the feature points of the ASM training set.
(2) Choose three feature points in the standard three-dimensional face model and record their plane coordinates. In the actual tests the three points chosen were the outer corners of the two eyes and the nose point (P1, P2, P3, corresponding to points 13, 26 and 41 in Fig. 2). In the standard model their plane coordinates are known, say (xc1, yc1), (xc2, yc2), (xc3, yc3); the corresponding three points of the two-dimensional statistical model of the ASM training set are also known, say (x1, y1), (x2, y2), (x3, y3).
(3) Compute the horizontal scale factor C_x from P1 and P2, and the vertical scale factor C_y from the midpoint of P1P2 and P3:
C_x = √((x2 - x1)^2 + (y2 - y1)^2) / (xc2 - xc1)
C_y = √((x3 - (x1 + x2)/2)^2 + (y3 - (y1 + y2)/2)^2) / (yc3 - yc1)
The scale factor in the Z direction is taken as their mean:
C_z = (C_x + C_y) / 2
Multiplying the third-dimension coordinate array SZ by C_z gives the third-dimension coordinates of the two-dimensional face image, i.e. the Z-axis coordinates.
(2) Three-dimensional transformation search
Given an original face shape vector x and a target shape vector x', both projections of three-dimensional faces onto the XY plane, the first part of the operation makes the original shape vector x three-dimensional; the three-dimensional search transformation then applies a three-dimensional transformation to the three-dimensionalized x in order to approximate the target shape vector x'. Compared with the current ASM search on face images, the three-dimensional ASM search must find the rotation angles θ_x, θ_y, θ_z about the three axes, the scale parameters S_x, S_y, S_z, and the offsets T_x, T_y, T_z along the three axes (since the result is finally projected onto the XY plane, T_z is set to 0 for convenience), applying a geometric transformation M that minimizes the distance between x and x', i.e. minimizes:
E(θ_x, θ_y, θ_z, S_x, S_y, S_z, T_x, T_y) = |M(x) - x'|^2    (1)
The conventional minimization method is to take the partial derivative of the formula with respect to each parameter, set each to 0, and solve the resulting simultaneous equations for the parameter values. But the parameters here are numerous, and each is coupled to the (x, y) coordinates of some point, so the conventional method can hardly obtain them. The method of two-step transformation with iteration is therefore adopted to approach the optimal parameter values.
1) Two-step transformation
The whole three-dimensional transformation is divided into two parts: the first is a rotation about the Z axis together with a translation in the XY plane; the second consists of rotations about the Y and X axes followed by projection onto the XY plane. The first-part transformation is the same as the current ASM shape-search process; the detailed process is as follows: given two similar shapes x and x', find a rotation angle θ, a scale s and a translation t such that the geometric transformation X = M(s, θ)[x] + t brings the transformed x closest to x':
E = (M(s, θ)[x] + t - x')^T (M(s, θ)[x] + t - x')    (2)
where
M(s, θ)(x_i, y_i)^T = ((s·cos θ)·x_i - (s·sin θ)·y_i, (s·sin θ)·x_i + (s·cos θ)·y_i)^T
t = (t_x, t_y, ..., t_x, t_y)^T
Letting a = s·cos θ and b = s·sin θ, so that s^2 = a^2 + b^2 and θ = tan^{-1}(b/a):
M(s, θ)(x_i, y_i)^T = (a·x_i - b·y_i, b·x_i + a·y_i)^T + (t_x, t_y)^T    (3)
Here a, b, t_x, t_y are the four pose parameters to be computed; they are chosen to minimize E in formula (2), so that the computed transformation matches the actual change. The computation is identical to the current two-dimensional ASM transformation method. With n feature points, it proceeds as follows:
(1) Substituting formula (3) into formula (2) gives:
E(a, b, t_x, t_y) = |M(x) - x'|^2 = Σ_{i=1}^{n} [(a·x_i - b·y_i + t_x - x'_i)^2 + (b·x_i + a·y_i + t_y - y'_i)^2]    (4)
(2) For convenience of description, define the following sums:
S_x = (1/n) Σ_{i=1}^{n} x_i;  S_y = (1/n) Σ_{i=1}^{n} y_i
S_x' = (1/n) Σ_{i=1}^{n} x'_i;  S_y' = (1/n) Σ_{i=1}^{n} y'_i
S_xx = (1/n) Σ_{i=1}^{n} x_i^2;  S_yy = (1/n) Σ_{i=1}^{n} y_i^2;  S_xy = (1/n) Σ_{i=1}^{n} x_i·y_i
S_xx' = (1/n) Σ_{i=1}^{n} x_i·x'_i;  S_yy' = (1/n) Σ_{i=1}^{n} y_i·y'_i
S_xy' = (1/n) Σ_{i=1}^{n} x_i·y'_i;  S_yx' = (1/n) Σ_{i=1}^{n} y_i·x'_i
(3) Taking the partial derivative of formula (4) with respect to each parameter and setting it to 0 gives:
a(S_xx + S_yy) + t_x·S_x + t_y·S_y = S_xx' + S_yy'
b(S_xx + S_yy) + t_y·S_x - t_x·S_y = S_xy' - S_yx'
a·S_x - b·S_y + t_x = S_x'
b·S_x + a·S_y + t_y = S_y'    (5)
(4) Solving the system (5) simultaneously, and moving the centre of the original state x to the origin to simplify the computation, so that S_x = S_y = 0, the values of the four parameters are:
t_x = S_x';  t_y = S_y'
a = (S_xx' + S_yy') / (S_xx + S_yy)
b = (S_xy' - S_yx') / (S_xx + S_yy)
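The closed-form solution above can be written directly in code; a sketch, with x, y, x', y' as separate lists and the base shape assumed already centred at the origin (S_x = S_y = 0), function name illustrative:

```python
def fit_similarity(x, y, xp, yp):
    """Closed-form pose parameters a, b, t_x, t_y from eq. (5), assuming
    the base shape (x, y) is centred so that S_x = S_y = 0."""
    n = len(x)
    Sxx_Syy = sum(xi * xi + yi * yi for xi, yi in zip(x, y)) / n   # S_xx + S_yy
    Sxxp_Syyp = sum(xi * u + yi * v
                    for xi, yi, u, v in zip(x, y, xp, yp)) / n     # S_xx' + S_yy'
    Sxyp_Syxp = sum(xi * v - yi * u
                    for xi, yi, u, v in zip(x, y, xp, yp)) / n     # S_xy' - S_yx'
    a = Sxxp_Syyp / Sxx_Syy
    b = Sxyp_Syxp / Sxx_Syy
    tx = sum(xp) / n        # t_x = S_x'
    ty = sum(yp) / n        # t_y = S_y'
    return a, b, tx, ty
```

For a target that is the base shape rotated by θ, scaled by s and translated, the fit recovers a = s·cos θ and b = s·sin θ exactly.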
The second-part transformation is implemented as follows:
Let (X_z, Y_z) be the coordinates after the in-plane rotation (about the Z axis) and translation (the z coordinate is unchanged). The point is first rotated about the Y axis by θ_y with scale factor S_y and horizontal offset T_x', giving:
X_y = X_z·S_y·cos θ_y + Z·S_y·sin θ_y + T_x';  Y_y = Y_z;  Z_y = -X_z·S_y·sin θ_y + Z·S_y·cos θ_y    (6)
It is then rotated about the X axis by θ_x with scale factor S_x and vertical offset T_y', giving:
X_x = X_y;  Y_x = Y_y·S_x·cos θ_x - Z_y·S_x·sin θ_x + T_y';  Z_x = Y_y·S_x·sin θ_x + Z_y·S_x·cos θ_x    (7)
Combining the two formulas and projecting onto the XY plane yields the transformed coordinates:
X_e = X_x = X_z·S_y·cos θ_y + Z·S_y·sin θ_y + T_x'
Y_e = Y_x = Y_z·S_x·cos θ_x - (-X_z·S_y·sin θ_y + Z·S_y·cos θ_y)·S_x·sin θ_x + T_y'    (8)
Letting a_y = S_y·cos θ_y, b_y = S_y·sin θ_y, a_x = S_x·cos θ_x, b_x = S_x·sin θ_x in equation (8), and writing the actual transformed coordinates as (x', y'), substitution into formula (1) gives:
|X_z·a_y + Z·b_y + T_x' - x'|^2 + |Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y' - y'|^2    (9)
Minimizing (9) by taking partial derivatives with respect to the parameters gives:
X_z·a_y + Z·b_y + T_x' - x' = 0    (10)
Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y' - y' = 0    (11)
Multiple linear regression on formula (10) yields the parameter values a_y, b_y, T_x'. With n feature points in total, the detailed process is as follows:
(1) Compute the means:
X̄_z = (1/n) Σ_{i=1}^{n} X_zi;  Z̄ = (1/n) Σ_{i=1}^{n} Z_i;  x̄' = (1/n) Σ_{i=1}^{n} x'_i
(2) Compute the deviation sums:
S_11 = Σ_{i=1}^{n} (X_zi - X̄_z)^2
S_22 = Σ_{i=1}^{n} (Z_i - Z̄)^2
L = Σ_{i=1}^{n} (x'_i - x̄')^2
S_12 = S_21 = Σ_{i=1}^{n} (X_zi - X̄_z)(Z_i - Z̄)
S_10 = Σ_{i=1}^{n} (X_zi - X̄_z)(x'_i - x̄')
S_20 = Σ_{i=1}^{n} (Z_i - Z̄)(x'_i - x̄')
(3) Then:
a_y = (S_10·S_22 - S_20·S_12) / (S_11·S_22 - S_12^2)
b_y = (S_20·S_11 - S_10·S_21) / (S_11·S_22 - S_12^2)
T_x' = x̄' - a_y·X̄_z - b_y·Z̄
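The regression above is a plain two-variable least-squares fit; a sketch (the function name is illustrative):

```python
def regress_y_params(Xz, Z, xp):
    """Solve eq. (10) by multiple linear regression for a_y, b_y, T_x'.

    Xz: x coordinates after the in-plane transform; Z: third-dimension
    coordinates; xp: target x' coordinates.
    """
    n = len(Xz)
    mX, mZ, mx = sum(Xz) / n, sum(Z) / n, sum(xp) / n
    S11 = sum((v - mX) ** 2 for v in Xz)
    S22 = sum((v - mZ) ** 2 for v in Z)
    S12 = sum((a - mX) * (b - mZ) for a, b in zip(Xz, Z))
    S10 = sum((a - mX) * (c - mx) for a, c in zip(Xz, xp))
    S20 = sum((b - mZ) * (c - mx) for b, c in zip(Z, xp))
    det = S11 * S22 - S12 ** 2          # assumes Xz and Z are not collinear
    ay = (S10 * S22 - S20 * S12) / det
    by = (S20 * S11 - S10 * S12) / det
    Tx = mx - ay * mX - by * mZ
    return ay, by, Tx
```

On data generated exactly as x' = 2·X_z + 3·Z + 1 the fit recovers (2, 3, 1).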
Substituting the values of a_y, b_y, T_x' into formula (11) and proceeding in the same way yields a_x, b_x, T_y'. Denoting the second-part transformation by M_2, then M_2(a_y, b_y, T_x', a_x, b_x, T_y')(X_zi, Y_zi)^T comes closest to the target points.
2) Iterative approximation
The key to carrying out the second-step transformation is to obtain the coordinates (X_z, Y_z) of the feature points after the in-plane rotation about the Z axis and translation; since this intermediate state is unknown, the design uses repeated iterative approximation to obtain the actual intermediate state, and finally the actual transformation parameters. With the intermediate state written (X_z, Y_z), the concrete steps are as follows:
(1) initially let (X_z, Y_z) be the final values (x', y');
(2) substitute (X_z, Y_z) into formula (4) and, following the current two-dimensional ASM transformation method, solve for the four transformation parameters a, b, t_x, t_y of formula (4);
(3) substitute a, b, t_x, t_y into formula (3) to obtain the intermediate state (X_z, Y_z);
(4) apply the second-part transformation M_2 to (X_z, Y_z) to obtain the parameters a_y, b_y, T_x', a_x, b_x, T_y';
(5) starting from (x', y'), compute the inverse of M_2 to obtain the intermediate state (X'_z, Y'_z), namely:
(X'_z, Y'_z) = M_2(a_y, b_y, T_x', a_x, b_x, T_y')^{-1}(x', y')
(6) substitute (X'_z, Y'_z) into formula (4), solve for the four parameters a, b, t_x, t_y by the two-dimensional ASM transformation method, then return to step (3) and iterate; in general ten iterations reach the required accuracy.
The above iteration finally yields the ten parameters of the three-dimensional transformation of the image, namely the four parameters of the first transformation (rotation about the Z axis and translation in the XY plane) and the six parameters of the second transformation (rotations about the Y and X axes followed by projection onto the XY plane).
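The whole loop can be sketched end to end. This is only one possible reading of the iteration, under stated assumptions: the inverse of M_2 is applied coordinate-wise using the Z and Z_y values from the forward pass, the base shape is assumed centred, and all helper names are illustrative.

```python
def fit_similarity(pts, tgt):
    # First-step (in-plane) pose fit of eqs. (3)/(5); pts assumed centred.
    n = len(pts)
    S = sum(x * x + y * y for x, y in pts) / n                       # S_xx + S_yy
    Sa = sum(x * u + y * v for (x, y), (u, v) in zip(pts, tgt)) / n  # S_xx' + S_yy'
    Sb = sum(x * v - y * u for (x, y), (u, v) in zip(pts, tgt)) / n  # S_xy' - S_yx'
    return Sa / S, Sb / S, sum(u for u, _ in tgt) / n, sum(v for _, v in tgt) / n

def regress(u, w, t):
    # Least-squares fit t ≈ p*u + q*w + c (the patent's multiple regression).
    n = len(u)
    mu, mw, mt = sum(u) / n, sum(w) / n, sum(t) / n
    S11 = sum((a - mu) ** 2 for a in u)
    S22 = sum((b - mw) ** 2 for b in w)
    S12 = sum((a - mu) * (b - mw) for a, b in zip(u, w))
    S10 = sum((a - mu) * (c - mt) for a, c in zip(u, t))
    S20 = sum((b - mw) * (c - mt) for b, c in zip(w, t))
    det = S11 * S22 - S12 * S12
    p = (S10 * S22 - S20 * S12) / det
    q = (S20 * S11 - S10 * S12) / det
    return p, q, mt - p * mu - q * mw

def search_3d(pts, Z, tgt, iters=10):
    """Iterative two-step fit: pts is the centred base shape, Z its
    third-dimension coordinates, tgt the target shape (x', y').
    Returns (a, b, t_x, t_y, a_y, b_y, T_x', a_x, b_x, T_y')."""
    mid = list(tgt)                                  # step (1): (Xz, Yz) := (x', y')
    for _ in range(iters):
        a, b, tx, ty = fit_similarity(pts, mid)      # step (2)
        mid = [(a * x - b * y + tx, b * x + a * y + ty) for x, y in pts]  # step (3)
        Xz = [p for p, _ in mid]
        Yz = [q for _, q in mid]
        ay, by, Tx = regress(Xz, Z, [u for u, _ in tgt])                  # eq. (10)
        Zy = [-x * by + z * ay for x, z in zip(Xz, Z)]
        ax, bx, Ty = regress(Yz, [-z for z in Zy], [v for _, v in tgt])   # eq. (11)
        # step (5): invert M2 per coordinate to refresh the intermediate state
        Xz = [(u - z * by - Tx) / ay for (u, _), z in zip(tgt, Z)]
        Yz = [(v + zy * bx - Ty) / ax for (_, v), zy in zip(tgt, Zy)]
        mid = list(zip(Xz, Yz))                      # step (6): iterate
    return a, b, tx, ty, ay, by, Tx, ax, bx, Ty
```

When the target equals the base shape, the loop is stable and all ten parameters come out as the identity transform.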
(3) Implementation effect test
A feature-point extraction system using the three-dimensional ASM method was tested; on feature extraction from non-training-set data the method improves considerably in accuracy over the two-dimensional ASM method. Two types of test were carried out: first, given concrete feature-point coordinates, both methods were used to approximate them and the approximation compared; second, given concrete faces, both methods searched for the feature points with the same search algorithm, and the differences between the search results and the true feature points were compared. Each type of test covered both training-set and non-training-set data. The results show that the improvement of the present method on training-set data is not obvious, but on non-training-set data it is considerable. Since in practical applications most image data belong to no training set, the method has high practical value.
For the test system, 100 face images of different poses were selected as training data and another 30 images as test data, with an image resolution of 125*150; all images were manually calibrated with feature points, 59 per image, as shown in Fig. 2. To compare the two effects more exactly, the concept of a relative approximation degree was defined. Let D1 be the mean distance between the feature points computed by the three-dimensional transformation search of the present invention and the actual calibration points, and D2 the corresponding mean distance for the conventional two-dimensional transformation search; the relative approximation degree RN is:
RN = (D2 - D1) / D1 * 100%
Obviously a positive RN means the three-dimensional approximation is better and a negative RN means the two-dimensional approximation is better; its magnitude indicates the degree of the difference.
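The RN metric is straightforward to compute from the two sets of predicted points and the hand-labelled ground truth; a sketch (function names are illustrative):

```python
import math

def mean_distance(pred, truth):
    """Mean Euclidean distance between predicted and hand-labelled points."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

def relative_approximation(pred3d, pred2d, truth):
    """RN = (D2 - D1) / D1 * 100 (%); positive favours the 3-D search."""
    d1 = mean_distance(pred3d, truth)   # 3-D transformation search error
    d2 = mean_distance(pred2d, truth)   # 2-D transformation search error
    return (d2 - d1) / d1 * 100.0
```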
1. Approximating concrete coordinates
Twelve images were chosen from the training set, their coordinates substituted directly and approximated with the two methods; the results are shown in Fig. 3. As the figure shows, in most cases the two approximations are comparable. Fig. 4 gives the relative approximation degree when images outside the training set are approximated directly; it shows that in most cases the three-dimensional method comes closer to the desired values.
2. Searching concrete faces
For the concrete face search, 15 images were chosen from the training set; the results, shown in Fig. 5, are basically consistent with expectations, with no obvious difference between the two methods. Fig. 6 shows the results of searching and matching 30 images outside the training set; it can be seen that the three-dimensional transformation is clearly better than the two-dimensional one, and that the relative approximation is even better than in the direct approximation of given targets, because during the search the target may be adjusted repeatedly.

Claims (8)

1. A three-dimensional transformation search method for feature point extraction in a face image, characterized in that: first, a standard three-dimensional model of the face is constructed; second, the third-dimension coordinates of the two-dimensional face feature points are obtained on the basis of this standard three-dimensional model; finally, the search of the Active Shape Model (ASM) shape is realized by three-dimensional transformation on the basis of the three-dimensional coordinates; the method specifically comprises the following steps:
(1) constructing the standard three-dimensional model of the face, including the three-dimensional coordinates (x, y, z) of the face feature points in the standard three-dimensional model, where the front of the face is the XY plane;
(2) based on the standard three-dimensional model and on the two-dimensional statistical model (x_1, y_1, x_2, y_2, ..., x_n, y_n) of the face feature points of the ASM training set, determining proportionally the third-dimension coordinate z_i of each feature point in the two-dimensional statistical model;
(3) rotating the basic shape containing the three-dimensional coordinates around the Z axis, X axis and Y axis respectively, scaling and translating it, finally projecting the transformed result onto the XY plane, and approaching the currently searched shape with the projected result.
2. The three-dimensional transformation search method for feature point extraction in a face image according to claim 1, characterized in that the standard three-dimensional model of step (1) is constructed by actual measurement: first, more than 50 faces are arbitrarily selected from the training set and the three-dimensional coordinates (x_1, y_1, z_1, x_2, y_2, z_2, ..., x_n, y_n, z_n) of each feature point are actually measured, where the third-dimension z coordinate is measured from the central plane of the neck taken as the zero plane; the above data are normalized and then averaged, which yields the standard three-dimensional model.
3. The three-dimensional transformation search method for feature point extraction in a face image according to claim 1, characterized in that step (2) is realized as follows:
1) establishing the third-dimension Z-direction coordinate array SZ = [z_1, z_2, ..., z_n] corresponding to the two-dimensional statistical model of the face feature points of the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the face feature points in the ASM training set;
2) choosing three feature points in the standard three-dimensional face model and recording their two-dimensional plane coordinate values (x, y), on which the subsequent calculation is based; the three chosen feature points are the outer corners of the two eyes and the nose tip, three points in total (P1, P2, P3); in the standard three-dimensional model the two-dimensional plane coordinates of these three points are known, denoted (xc1, yc1), (xc2, yc2), (xc3, yc3); the coordinates of the corresponding three points in the two-dimensional statistical model of the face feature points of the ASM training set are also known, denoted (x1, y1), (x2, y2), (x3, y3);
3) computing the horizontal zoom factor C_x from the points P1 and P2, and the vertical zoom factor C_y from the midpoint of P1 and P2 together with P3, as follows:
C_x = √((x2 − x1)² + (y2 − y1)²) / (xc2 − xc1)
C_y = √((x3 − (x1 + x2)/2)² + (y3 − (y1 + y2)/2)²) / (yc3 − yc1)
The zoom factor in the Z direction is taken as the mean of the two, that is:
C_z = (C_x + C_y) / 2
Multiplying C_z by the third-dimension coordinate array SZ then yields the third-dimension coordinates, i.e. the Z-axis coordinates, of the two-dimensional face image.
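The scaling in steps 2)–3) can be sketched numerically; the following is a sketch under the claim's definitions (function name and argument layout are assumptions, not from the patent):

```python
import math

def z_coords_from_scale(p1, p2, p3, pc1, pc2, pc3, sz):
    """Scale the standard model's Z array SZ to a given 2D face.

    p1..p3: the three 2D feature points (outer eye corners, nose tip)
            in the target face image;
    pc1..pc3: the same three points in the standard 3D model's XY plane;
    sz: third-dimension coordinate array SZ of the standard model.
    Returns the estimated Z coordinates of the 2D face's feature points.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    (xc1, yc1), (xc2, yc2), (xc3, yc3) = pc1, pc2, pc3
    cx = math.hypot(x2 - x1, y2 - y1) / (xc2 - xc1)      # horizontal factor C_x
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2                # midpoint of P1, P2
    cy = math.hypot(x3 - mx, y3 - my) / (yc3 - yc1)      # vertical factor C_y
    cz = (cx + cy) / 2                                   # Z factor = mean of both
    return [cz * z for z in sz]
```

For a face imaged at exactly twice the standard model's scale, every element of SZ comes back doubled.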
4. The three-dimensional transformation search method for feature point extraction in a face image according to claim 1, characterized in that step (3) requires finding the rotation angles θ_x, θ_y, θ_z about the three axes, the zoom parameters S_x, S_y, S_z, and the offsets T_x, T_y, T_z along the three directions; for convenience of calculation T_z is set to 0. Given the original shape vector x of the face and the target shape vector x′, the three-dimensional face undergoes three-dimensional rotation, scaling, translation and projection transformation M so that the distance between the projection of x onto the XY plane and x′ is minimized, i.e. the following formula is minimized:
E(θ_x, θ_y, θ_z, S_x, S_y, S_z, T_x, T_y) = |M(x) − x′|²   (1)
5. The three-dimensional transformation search method for feature point extraction in a face image according to claim 4, characterized in that a two-step-transformation iterative approximation method is adopted to make formula (1) approach the optimal parameter values.
6. The three-dimensional transformation search method for feature point extraction in a face image according to claim 5, characterized in that the first step of the two-step transformation is rotation around the Z axis, scaling and translation in the XY plane. The detailed process is as follows: given the original shape vector x of the face and the target shape vector x′, find the rotation angle θ, the scale s and the translation t such that the geometric transformation X = M(s, θ)[x] + t applied to x minimizes the distance between x′ and the transformed x:
E = (M(s, θ)[x] + t − x′)ᵀ (M(s, θ)[x] + t − x′)   (2)
where:
M(s, θ) (x_i, y_i)ᵀ = ((s·cosθ)·x_i − (s·sinθ)·y_i, (s·sinθ)·x_i + (s·cosθ)·y_i)ᵀ
t = (t_x, t_y, ..., t_x, t_y)ᵀ
Let a = s·cosθ and b = s·sinθ, so that s² = a² + b² and θ = tan⁻¹(b/a); then:
M(s, θ) (x_i, y_i)ᵀ = [a, −b; b, a] (x_i, y_i)ᵀ + (t_x, t_y)ᵀ   (3)
where a, b, t_x, t_y are the four pose parameters to be calculated; by finding the four parameters that minimize the value of E in formula (2), the actual variation is made consistent with the calculation.
The computation process is as follows. For convenience, first define the following sums:
S_x = (1/n) Σ_{i=1..n} x_i,  S_y = (1/n) Σ_{i=1..n} y_i
S_x′ = (1/n) Σ_{i=1..n} x_i′,  S_y′ = (1/n) Σ_{i=1..n} y_i′
S_xx = (1/n) Σ_{i=1..n} x_i²,  S_yy = (1/n) Σ_{i=1..n} y_i²
S_xy = (1/n) Σ_{i=1..n} x_i·y_i
S_xx′ = (1/n) Σ_{i=1..n} x_i·x_i′,  S_yy′ = (1/n) Σ_{i=1..n} y_i·y_i′
S_xy′ = (1/n) Σ_{i=1..n} x_i·y_i′,  S_yx′ = (1/n) Σ_{i=1..n} y_i·x_i′
Substituting formula (3) into formula (2) gives:
E(a, b, t_x, t_y) = Σ_{i=1..n} [(a·x_i − b·y_i + t_x − x_i′)² + (b·x_i + a·y_i + t_y − y_i′)²]
Taking the partial derivative of the above formula with respect to each parameter and setting it to 0 gives:
a·(S_xx + S_yy) + t_x·S_x + t_y·S_y = S_xx′ + S_yy′
b·(S_xx + S_yy) + t_y·S_x − t_x·S_y = S_xy′ − S_yx′
a·S_x − b·S_y + t_x = S_x′
b·S_x + a·S_y + t_y = S_y′
To simplify the calculation, the center of the vector x can first be moved to the origin, so that S_x = 0 and S_y = 0; solving the four equations above then gives the values of the 4 parameters:
t_x = S_x′,  t_y = S_y′
a = (S_xx′ + S_yy′) / (S_xx + S_yy),  b = (S_xy′ − S_yx′) / (S_xx + S_yy)
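The closed-form solution for a, b, t_x, t_y can be sketched with NumPy; the code centres the source shape first so that S_x = S_y = 0, as the claim assumes (function name hypothetical):

```python
import numpy as np

def align_2d(x, y, xp, yp):
    """Closed-form 2D similarity alignment (a, b, t_x, t_y) of claim 6.

    x, y: source shape coordinates; xp, yp: target shape coordinates.
    The source shape is centred at the origin, so t_x = S_x', t_y = S_y'.
    """
    x = x - x.mean()                 # move the centre of x to the origin
    y = y - y.mean()
    n = len(x)
    sxx = (x @ x) / n                # S_xx
    syy = (y @ y) / n                # S_yy
    sxxp = (x @ xp) / n              # S_xx'
    syyp = (y @ yp) / n              # S_yy'
    sxyp = (x @ yp) / n              # S_xy'
    syxp = (y @ xp) / n              # S_yx'
    a = (sxxp + syyp) / (sxx + syy)
    b = (sxyp - syxp) / (sxx + syy)
    return a, b, xp.mean(), yp.mean()
```

Feeding in a shape rotated by a known angle and translated by a known offset recovers exactly a = s·cosθ, b = s·sinθ and the offset.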
7. The three-dimensional transformation search method for feature point extraction in a face image according to claim 5, characterized in that the second step of the two-step transformation is rotation around the Y axis and the X axis respectively, followed by projection onto the XY plane; the implementation process is as follows:
Let (X_z, Y_z, Z) be the coordinates after the in-plane rotation around the Z axis and the displacement, the Z coordinate being unchanged. The point is first rotated around the Y axis by the angle θ_y, with zoom factor S_y and horizontal offset T_x′, giving:
X_y = X_z·S_y·cosθ_y + Z·S_y·sinθ_y + T_x′
Y_y = Y_z
Z_y = −X_z·S_y·sinθ_y + Z·S_y·cosθ_y   (6)
where (X_y, Y_y, Z_y) denotes the coordinates of the point (X_z, Y_z, Z) after rotation around the Y axis, scaling and horizontal offset.
The point is then rotated around the X axis by the angle θ_x, with zoom factor S_x and vertical offset T_y′, giving:
X_x = X_y
Y_x = Y_y·S_x·cosθ_x − Z_y·S_x·sinθ_x + T_y′
Z_x = Y_y·S_x·sinθ_x + Z_y·S_x·cosθ_x   (7)
where (X_x, Y_x, Z_x) denotes the coordinates of the point (X_y, Y_y, Z_y) after rotation around the X axis, scaling and vertical offset.
Combining the two formulas above and projecting onto the XY plane gives the transformed coordinates:
X_e = X_x = X_z·S_y·cosθ_y + Z·S_y·sinθ_y + T_x′
Y_e = Y_x = Y_z·S_x·cosθ_x − (−X_z·S_y·sinθ_y + Z·S_y·cosθ_y)·S_x·sinθ_x + T_y′   (8)
In equation (8), let a_y = S_y·cosθ_y, b_y = S_y·sinθ_y, a_x = S_x·cosθ_x, b_x = S_x·sinθ_x, and let the actual coordinates after transformation be (x′, y′); substituting equation (8) into formula (1) gives:
|X_z·a_y + Z·b_y + T_x′ − x′|² + |Y_z·a_x − (−X_z·b_y + Z·a_y)·b_x + T_y′ − y′|²   (9)
To minimize formula (9), take partial derivatives with respect to its parameters, which gives:
X_z·a_y + Z·b_y + T_x′ − x′ = 0   (10)
Y_z·a_x − (−X_z·b_y + Z·a_y)·b_x + T_y′ − y′ = 0   (11)
Performing multiple linear regression analysis on formula (10) yields the parameter values a_y, b_y and T_x′. If there are n feature points in total, then X_z in formula (10) can be expressed as (X_z1, X_z2, ..., X_zn), Z as (Z_1, Z_2, ..., Z_n), and x′ as (x′_1, x′_2, ..., x′_n). The detailed process is as follows:
1) Compute the means:
X̄_z = (1/n) Σ_{i=1..n} X_zi
Z̄ = (1/n) Σ_{i=1..n} Z_i
x̄′ = (1/n) Σ_{i=1..n} x′_i
2) Compute the following sums over i = 1..n as intermediate values for the next step, denoted S_11, S_22, L, S_12, S_10 and S_20 respectively:
S_11 = Σ (X_zi − X̄_z)²
S_22 = Σ (Z_i − Z̄)²
L = Σ (x′_i − x̄′)²
S_12 = S_21 = Σ (X_zi − X̄_z)·(Z_i − Z̄)
S_10 = Σ (X_zi − X̄_z)·(x′_i − x̄′)
S_20 = Σ (Z_i − Z̄)·(x′_i − x̄′)
3) Solve for the three parameter values a_y, b_y and T_x′:
a_y = (S_10·S_22 − S_20·S_12) / (S_11·S_22 − S_12²)
b_y = (S_20·S_11 − S_10·S_21) / (S_11·S_22 − S_12²)
T_x′ = x̄′ − a_y·X̄_z − b_y·Z̄
Substituting the values of a_y, b_y and T_x′ into formula (11), the same method yields a_x, b_x and T_y′. Denoting the second-step transformation by M_2, then M_2(a_y, b_y, T_x′, a_x, b_x, T_y′) (X_zi, Y_zi)ᵀ is closest to the target point.
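The regression in steps 1)–3) is an ordinary two-regressor least-squares fit; a sketch (function name hypothetical, NumPy used for the sums):

```python
import numpy as np

def fit_ay_by_txp(Xz, Z, xp):
    """Fit x' ~ a_y*Xz + b_y*Z + T_x' via the normal-equation
    solution of claim 7, steps 1)-3)."""
    Xz, Z, xp = (np.asarray(v, dtype=float) for v in (Xz, Z, xp))
    xm, zm, pm = Xz.mean(), Z.mean(), xp.mean()      # step 1): means
    s11 = ((Xz - xm) ** 2).sum()                     # step 2): sums
    s22 = ((Z - zm) ** 2).sum()
    s12 = ((Xz - xm) * (Z - zm)).sum()               # = S_21
    s10 = ((Xz - xm) * (xp - pm)).sum()
    s20 = ((Z - zm) * (xp - pm)).sum()
    det = s11 * s22 - s12 ** 2
    a_y = (s10 * s22 - s20 * s12) / det              # step 3): parameters
    b_y = (s20 * s11 - s10 * s12) / det
    t_xp = pm - a_y * xm - b_y * zm
    return a_y, b_y, t_xp
```

On data generated with known coefficients, the fit returns those coefficients exactly (up to rounding).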
8. The three-dimensional transformation search method for feature point extraction in a face image according to claim 5, characterized in that the iterative approximation means that the intermediate-state coordinates (X_z, Y_z) of the feature points, after rotation around the Z axis and translation in the XY plane, are obtained by a method of repeated iterative approximation.
Let the intermediate state be (X_z, Y_z); the concrete steps are as follows:
1) initially, set (X_z, Y_z) to the final values (x′, y′);
2) substitute (X_z, Y_z) into formula (2) in place of x′, and obtain the four transformation parameters a, b, t_x, t_y of formula (3) according to the current two-dimensional ASM method;
3) substitute the parameters a, b, t_x, t_y into formula (3) to obtain the intermediate state (X_z, Y_z);
4) apply the second-step transformation M_2 to (X_z, Y_z) to obtain the parameters a_y, b_y, T_x′, a_x, b_x, T_y′;
5) based on (x′, y′), compute the inverse of M_2 to obtain the intermediate state (X′_z, Y′_z), namely:
(X′_z, Y′_z) = M_2(a_y, b_y, T_x′, a_x, b_x, T_y′)⁻¹ (x′, y′)
6) substitute (X′_z, Y′_z) into formula (2) and obtain the four transformation parameters a, b, t_x, t_y according to the two-dimensional ASM transform method; then return to step 3) and iterate; 10 iterations yield the 10 parameters meeting the accuracy requirement, i.e. the 4 parameters a, b, t_x, t_y of the first transformation and the 6 parameters a_y, b_y, T_x′, a_x, b_x, T_y′ of the second transformation.
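Step 5) needs the inverse of M_2; since equation (8) is linear in (X_z, Y_z) once the six parameters and Z are fixed, the inverse has a closed form. A per-point sketch (function names hypothetical):

```python
def m2_forward(Xz, Yz, Z, ay, by, Txp, ax, bx, Typ):
    """Second-step transform of claim 7, equation (8), for one point."""
    xe = Xz * ay + Z * by + Txp
    ye = Yz * ax - (-Xz * by + Z * ay) * bx + Typ
    return xe, ye

def m2_inverse(xp, yp, Z, ay, by, Txp, ax, bx, Typ):
    """Step 5): recover the intermediate state (X'_z, Y'_z) from (x', y')
    by solving equation (8) for Xz first, then Yz."""
    Xz = (xp - Z * by - Txp) / ay
    Yz = (yp + (-Xz * by + Z * ay) * bx - Typ) / ax
    return Xz, Yz
```

A forward transform followed by the inverse reproduces the original point, which is the consistency the iteration in steps 3)–6) relies on.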
CN 200910037867 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image Expired - Fee Related CN101499132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910037867 CN101499132B (en) 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image


Publications (2)

Publication Number Publication Date
CN101499132A CN101499132A (en) 2009-08-05
CN101499132B true CN101499132B (en) 2013-05-01

Family

ID=40946200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910037867 Expired - Fee Related CN101499132B (en) 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image

Country Status (1)

Country Link
CN (1) CN101499132B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102357340B1 (en) * 2014-09-05 2022-02-03 삼성전자주식회사 Method and apparatus for face recognition
CN105426929B (en) * 2014-09-19 2018-11-27 佳能株式会社 Object shapes alignment device, object handles devices and methods therefor
CN105989326B (en) * 2015-01-29 2020-03-03 北京三星通信技术研究有限公司 Method and device for determining three-dimensional position information of human eyes
CN104899563B (en) * 2015-05-29 2020-01-07 深圳大学 Two-dimensional face key feature point positioning method and system
CN105404861B (en) * 2015-11-13 2018-11-02 中国科学院重庆绿色智能技术研究院 Training, detection method and the system of face key feature points detection model
CN106845327B (en) * 2015-12-07 2019-07-02 展讯通信(天津)有限公司 Training method, face alignment method and the device of face alignment model
CN107016319B (en) * 2016-01-27 2021-03-05 北京三星通信技术研究有限公司 Feature point positioning method and device
CN107341784A (en) * 2016-04-29 2017-11-10 掌赢信息科技(上海)有限公司 A kind of expression moving method and electronic equipment
CN106022281A (en) * 2016-05-27 2016-10-12 广州帕克西软件开发有限公司 Face data measurement method and system
CN106503682B (en) * 2016-10-31 2020-02-04 北京小米移动软件有限公司 Method and device for positioning key points in video data
CN110520056B (en) * 2017-04-07 2022-08-05 国立研究开发法人产业技术综合研究所 Surveying instrument installation assisting device and surveying instrument installation assisting method
CN108932459B (en) * 2017-05-26 2021-12-10 富士通株式会社 Face recognition model training method and device and face recognition method
CN108985220B (en) * 2018-07-11 2022-11-04 腾讯科技(深圳)有限公司 Face image processing method and device and storage medium
CN109692476B (en) * 2018-12-25 2022-07-01 广州方硅信息技术有限公司 Game interaction method and device, electronic equipment and storage medium
CN109606728B (en) * 2019-01-24 2019-10-29 中国人民解放军国防科技大学 Method and system for designing precursor of hypersonic aircraft
CN110032941B (en) * 2019-03-15 2022-06-17 深圳英飞拓科技股份有限公司 Face image detection method, face image detection device and terminal equipment
CN112052847B (en) * 2020-08-17 2024-03-26 腾讯科技(深圳)有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101189637A (en) * 2005-06-03 2008-05-28 日本电气株式会社 Image processing system, 3-dimensional shape estimation system, object position posture estimation system, and image generation system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hu Fengsong et al. Candide-3-based 3D reconstruction of a specific face applied to face recognition. Journal of Hunan University (Natural Sciences), 2008, Vol. 35, No. 11, pp. 69-73. *
Hu Bufa et al. 3D face pose estimation method based on a multi-point model. Journal of Image and Graphics, 2008, Vol. 13, No. 7, pp. 1353-1358. *

Also Published As

Publication number Publication date
CN101499132A (en) 2009-08-05

Similar Documents

Publication Publication Date Title
CN101499132B (en) Three-dimensional transformation search method for extracting characteristic points in human face image
CN102999942B (en) Three-dimensional face reconstruction method
US8379014B2 (en) System and method for 3D object recognition
Newcombe et al. Kinectfusion: Real-time dense surface mapping and tracking
Boult et al. Factorization-based segmentation of motions
CN110363849A (en) A kind of interior three-dimensional modeling method and system
CN102880866B (en) Method for extracting face features
Pons-Moll et al. Model-based pose estimation
JP2016161569A (en) Method and system for obtaining 3d pose of object and 3d location of landmark point of object
CN104346824A (en) Method and device for automatically synthesizing three-dimensional expression based on single facial image
CN105701455A (en) Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method
Kroemer et al. Point cloud completion using extrusions
Shiratori et al. Efficient large-scale point cloud registration using loop closures
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
CN109345570B (en) Multi-channel three-dimensional color point cloud registration method based on geometric shape
CN107507218A (en) Part motility Forecasting Methodology based on static frames
Wu et al. On signature invariants for effective motion trajectory recognition
Chen et al. Learning shape priors for single view reconstruction
Leymarie et al. The SHAPE Lab: New technology and software for archaeologists
Lee et al. Noniterative 3D face reconstruction based on photometric stereo
Spek et al. A fast method for computing principal curvatures from range images
Hafez et al. Visual servoing based on gaussian mixture models
Mian et al. 3D face recognition
Gao et al. Estimation of 3D category-specific object structure: Symmetry, Manhattan and/or multiple images
Dorobantu et al. Conformal generative modeling on triangulated surfaces

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Yi Faling

Inventor after: Xiong Wei

Inventor after: Huang Zhanpeng

Inventor after: Zhao Jie

Inventor before: Yi Faling

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: YI FALING TO: YI FALING XIONG WEI HUANG ZHANPENG ZHAO JIE

C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 510006 Guangdong City, Guangzhou province outside the University of East Ring Road, No. 280

Patentee after: Guangdong Pharmaceutical University

Address before: 510006 Guangdong City, Guangzhou province outside the University of East Ring Road, No. 280

Patentee before: Guangdong Pharmaceutical University

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20170605

Address after: 510000 Guangdong city of Guangzhou province Panyu District Xiaoguwei Street Outer Ring Road No. 280 Building 1, room 207, Department of Guangdong Pharmaceutical University

Patentee after: GUANGZHOU HENGBIKANG INFORMATION TECHNOLOGY CO.,LTD.

Address before: 510006 Guangdong City, Guangzhou province outside the University of East Ring Road, No. 280

Patentee before: Guangdong Pharmaceutical University

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130501

Termination date: 20190312

CF01 Termination of patent right due to non-payment of annual fee