CN102054291A - Method and device for reconstructing three-dimensional face based on single face image - Google Patents
Method and device for reconstructing three-dimensional face based on single face image
- Publication number
- CN102054291A CN102054291A CN2009101127795A CN200910112779A CN102054291A CN 102054291 A CN102054291 A CN 102054291A CN 2009101127795 A CN2009101127795 A CN 2009101127795A CN 200910112779 A CN200910112779 A CN 200910112779A CN 102054291 A CN102054291 A CN 102054291A
- Authority
- CN
- China
- Prior art keywords
- dimensional
- face
- facial image
- model
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method and a device for reconstructing a three-dimensional face from a single face image. The method comprises the following steps: performing pose recognition on a face image by using prior knowledge of the face structure, and estimating the rotation direction and angle of the face plane by combining face anthropometry and projective geometry, so as to set the rotation angles of the three-dimensional face; estimating the depth of the two-dimensional feature points in the face image with an artificial neural network to obtain the three-dimensional coordinates of the feature points; converting a generic three-dimensional face model into a person-specific model with the Dirichlet free-form deformation algorithm; and texture-mapping the three-dimensional face model using the feature points extracted from the two-dimensional image. A true three-dimensional model is thus constructed from a single face image, effectively improving the accuracy of three-dimensional modeling while shortening the modeling time and reducing the modeling cost.
Description
Technical field
The present invention relates to the technical fields of digital image processing, machine vision and artificial intelligence, and in particular to a method and device for three-dimensional face reconstruction from a single face image.
Background technology
Existing image-based face modeling methods fall into two categories: shape reconstruction from several photographs or from a pair of orthogonal images, and three-dimensional face modeling from a single image.
Traditional shape reconstruction from several photographs generally relies on hardware (such as a 3D scanner or other special equipment) to acquire the three-dimensional information of the face; although the theory is relatively mature, both the cost and the implementation are difficult. Reconstruction from orthogonal images exploits the fact that the frontal and profile images are orthogonal, so the three-dimensional coordinates of corresponding feature points can be read from the images, a three-dimensional face model can then be fitted, and adding texture yields the reconstructed three-dimensional face model; however, this approach usually requires manual interaction to locate the facial feature points, and the two images must be strictly orthogonal, which is also difficult in practice.
Three-dimensional face modeling from a single image simplifies the manual operations and automates the modeling process. Without any other constraints, three-dimensional reconstruction from a single image alone is impossible. Blanz et al. overcame this problem with the three-dimensional morphable model, which uses a three-dimensional face database as prior knowledge to constrain the face model and achieves automatic three-dimensional face modeling from a single image. However, this method linearly combines 200 scanned three-dimensional face models together with 22 parameters such as rotation angle and illumination, and the target model is only obtained after an iterative optimization, so it takes too long.
Moreover, these single-image modeling methods do not consider how to recover the face orientation and the depth of the feature points from the image; they only use the two-dimensional coordinates of the feature points, which limits the accuracy of the three-dimensional modeling.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the prior art by providing a method and device for three-dimensional face reconstruction from a single face image, in which the pose angle is estimated from four facial feature points, a neural network is trained on a large set of existing facial feature point distances and texture information, this network then estimates the depth values of the feature points in the current face image, and a true three-dimensional model is constructed, thereby effectively improving the accuracy of three-dimensional modeling while shortening the modeling time and reducing the modeling cost.
The technical solution adopted by the present invention to solve the technical problem is a method for three-dimensional face reconstruction from a single face image, comprising:
a step of inputting a single face image;
a pose estimation step, in which the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, are used to estimate the pose angle, and two vanishing points are used to estimate the rotation angles of the face about the X, Y and Z axes;
a feature point depth estimation step, in which a neural network is trained on a large set of existing facial feature point distances and texture information, the network then estimates the depth values of the feature points in the current face image, and the known two-dimensional coordinates are combined with the estimated depths to obtain estimated three-dimensional coordinates;
a three-dimensional modeling step, in which the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points are used to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm;
a texture mapping step, in which the single frontal face image is projected onto the three-dimensional model, and bilinear interpolation is used to estimate the texture where the frontal information is insufficient and on the sides;
a step of constructing the true three-dimensional model.
The pose estimation step comprises:
a facial pose feature point extraction step, in which an active shape model (ASM) is applied to the input frontal face image to locate the facial features, namely the coordinates of four feature points: the two inner eye corners and the two nose wing points;
a step of computing the two vanishing points of the quadrilateral, in which the two vanishing points are computed from the four facial feature points;
a step of computing the rotation angles about the X, Y and Z axes.
The feature point depth estimation step comprises:
a face image feature point extraction step, in which an active shape model (ASM) is used to locate the facial features;
a step of extracting illumination and texture information;
a step of training the neural network;
a step of estimating the depth (Z) of each feature point and obtaining the three-dimensional coordinates of each point.
The three-dimensional modeling step comprises:
a step of creating a neutral face model;
a step of adjusting the pose of the neutral face model;
a face deformation step;
a step of generating the person-specific face model.
The texture mapping step comprises:
a texture mapping step, in which the obtained face model and the frontal face image are combined by pasting the texture onto the model;
a bilinear interpolation step;
a step of constructing the three-dimensional model.
A device for three-dimensional face reconstruction from a single face image comprises:
an input device for inputting a face image;
a pose estimation processing device, configured to use the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, to estimate the pose angle, and to use two vanishing points to estimate the rotation angles of the face about the X, Y and Z axes;
a feature point depth estimation processing device, configured to train a neural network on a large set of existing facial feature point distances and texture information, to estimate with this network the depth values of the feature points in the current face image, and to combine the known two-dimensional coordinates with the estimated depths to obtain estimated three-dimensional coordinates;
a three-dimensional modeling processing device, configured to use the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm;
a texture mapping processing device, configured to project the single frontal face image onto the three-dimensional model and to estimate by bilinear interpolation the texture where the frontal information is insufficient and on the sides;
a true three-dimensional model construction device, configured to construct the true three-dimensional model.
The output of the input device is connected to the input of the pose estimation processing device; the output of the pose estimation processing device is connected to the input of the feature point depth estimation processing device; the output of the feature point depth estimation processing device is connected to the input of the three-dimensional modeling processing device; the output of the three-dimensional modeling processing device is connected to the input of the texture mapping processing device; and the output of the texture mapping processing device is connected to the true three-dimensional model construction device.
The beneficial effects of the invention are as follows. Prior knowledge of the face structure is used to recognize the pose of the face image, and face anthropometry and projective geometry are combined to estimate the rotation direction and angle of the face plane, so that the rotation angles of the three-dimensional face can be set; an artificial neural network estimates the depth of the two-dimensional feature points in the face image to obtain the three-dimensional coordinates of each feature point; the Dirichlet free-form deformation algorithm converts a generic three-dimensional face model into a person-specific model; and the feature points extracted from the two-dimensional image are used to map the three-dimensional face model. A true three-dimensional model is thus constructed from a single face image, effectively improving the accuracy of three-dimensional modeling while shortening the modeling time and reducing the modeling cost.
The present invention is described in further detail below with reference to the drawings and embodiments; however, the method and device for three-dimensional face reconstruction from a single face image according to the present invention are not limited to the embodiments.
Description of drawings
Fig. 1 is the main flow chart of the method of the invention;
Fig. 2 is the flow chart of the pose estimation processing of the method of the invention;
Fig. 3 is the flow chart of the feature point depth estimation processing of the method of the invention;
Fig. 4 is the flow chart of the three-dimensional modeling processing of the method of the invention;
Fig. 5 is the flow chart of the texture mapping processing of the method of the invention;
Fig. 6 is a schematic diagram of the facial pose feature points of the method of the invention;
Fig. 7 is a schematic diagram of the spatial coordinate system and the facial pose feature points of the method of the invention;
Fig. 8 is a schematic diagram of the slope of the line through the eye corner points in the image plane in the method of the invention;
Fig. 9 is a structural block diagram of the device of the invention.
Embodiment
Referring to Fig. 1, a method for three-dimensional face reconstruction from a single face image according to the present invention comprises:
a step of inputting a single face image, shown as block 101 in Fig. 1;
a pose estimation step, in which the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, are used to estimate the pose angle, and two vanishing points are used to estimate the rotation angles of the face about the X, Y and Z axes, shown as block 102 in Fig. 1;
a feature point depth estimation step, in which a neural network is trained on a large set of existing facial feature point distances and texture information, the network then estimates the depth values of the feature points in the current face image, and the known two-dimensional coordinates are combined with the estimated depths to obtain estimated three-dimensional coordinates, shown as block 103 in Fig. 1;
a three-dimensional modeling step, in which the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points are used to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm, shown as block 104 in Fig. 1;
a texture mapping step, in which the single frontal face image is projected onto the three-dimensional model, and bilinear interpolation is used to estimate the texture where the frontal information is insufficient and on the sides; bilinear interpolation smooths the aliasing between output pixels by processing neighboring pixels in the texture, so the rendered image appears smoother; this step is shown as block 105 in Fig. 1;
a step of constructing the true three-dimensional model, shown as block 106 in Fig. 1.
As shown in Fig. 2, in the method of the present invention, the pose estimation step comprises:
a facial pose feature point extraction step, in which an active shape model (ASM) is applied to the input frontal face image to locate the facial features, namely the coordinates of four feature points: the two inner eye corners and the two nose wing points, shown as block 201 in Fig. 2;
a step of computing the two vanishing points of the quadrilateral, in which the two vanishing points are computed from the four facial feature points, shown as block 202 in Fig. 2;
a step of computing the rotation angles about the X, Y and Z axes, shown as block 203 in Fig. 2.
In step 201, an active shape model (ASM) is applied to the input frontal face image to locate the facial features, which can be adjusted manually if necessary.
Four facial features are located: the coordinates of the two inner eye corners and the two nose wing points, as shown in Fig. 6. These four points are insensitive to changes in facial expression (such as laughing or crying) and to makeup and accessories (such as glasses), so they are highly robust and stable. In space, the line through the two inner eye corners is parallel to the line through the two nose wing points, the left and right lines connecting an inner eye corner to the corresponding nose wing point are also parallel to each other, and the quadrilateral formed by these four points therefore has the properties of a rectangle.
The spatial relationship of the four points is shown in Fig. 7. In the spatial coordinate system O-XYZ, O is the origin, E1 and E2 are the two inner eye corner points of the face in space, and N1 and N2 are the two nose wing points of the face in space. F is the camera imaging plane. e1 and e2 are the projections of the two inner eye corner points onto the image plane, and n3 and n2 are the projections of the two nose wing points onto the image plane.
Because E1, E2, N1 and N2 form a rectangle, the line E1E2 is parallel to the line N1N2 and cannot produce a vanishing point, and likewise the lines E1N1 and E2N2 cannot produce a vanishing point. If the image plane F is not parallel to the plane through E1, E2, N1 and N2, however, the lines e1e2 and n2n3 produce a vanishing point M1, and the lines e1n3 and e2n2 produce a vanishing point M2.
In step 202, the two vanishing points M1 and M2 are computed from the four facial feature points obtained in step 201.
Let the two lines be y = ax + b and y = cx + d. At the intersection,
ax + b = cx + d,
(a - c)x = d - b,
x = (d - b)/(a - c).
Substituting x back into y = ax + b (or y = cx + d) gives
y = a(d - b)/(a - c) + b = c(d - b)/(a - c) + d,
so the intersection point is [(d - b)/(a - c), a(d - b)/(a - c) + b].
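As a concrete illustration of this intersection formula, the following sketch computes a vanishing point from two image point pairs (for example the eye-corner pair e1, e2 and the nose-wing pair n2, n3). The function names and the sample coordinates are illustrative only, not part of the patent:

```python
import numpy as np

def line_through(p, q):
    """Coefficients (a, b) of y = a*x + b through points p and q (non-vertical line)."""
    a = (q[1] - p[1]) / (q[0] - p[0])
    b = p[1] - a * p[0]
    return a, b

def vanishing_point(p1, p2, p3, p4):
    """Intersection of line p1-p2 and line p3-p4, using x = (d - b)/(a - c)."""
    a, b = line_through(p1, p2)
    c, d = line_through(p3, p4)
    if abs(a - c) < 1e-12:
        return None                      # parallel in the image: no vanishing point
    x = (d - b) / (a - c)
    y = a * x + b                        # equivalently c*x + d
    return np.array([x, y])

# Example: M1 from the eye-corner line e1-e2 and the nose-wing line n2-n3.
e1, e2 = (-30.0, 10.0), (30.0, 12.0)     # illustrative image coordinates
n2, n3 = (-20.0, -25.0), (20.0, -22.0)
M1 = vanishing_point(e1, e2, n3, n2)
print(M1)
```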
In step 203, the three-dimensional rotation pose of the face in space is estimated from the coordinates of the two vanishing points.
Let the vector E = {i1, j1, k1} be the direction vector of the line E1E2; since N1N2 is parallel to E1E2, its direction vector is also E. Let the three-dimensional coordinates of M1 be {x1, y1, z1}.
Using projective geometry, the direction vector can be recovered from the image coordinates of M1 (f being the focal length):
E = {x1 × k1 ÷ f, y1 × k1 ÷ f, k1}  (1)
Similarly, let N = {i2, j2, k2} be the direction vector of the line E1N1.
From the image coordinates of M2:
N = {x2 × k2 ÷ f, y2 × k2 ÷ f, k2}  (2)
Computing the normal of the frontal face plane determines the three-dimensional rotation pose of the face.
Let E1, E2, N2, N1 be the four points of the frontal face; the plane formed by these four points is the frontal plane.
Since the direction vectors E and N are orthogonal and both lie in the frontal plane, the normal of the frontal face plane is F = E × N.
Combining (1) and (2) gives:
F = {y1 - y2, x2 - x1, (x1 × y2 - x2 × y1)/f}
Because the direction vectors E and N are orthogonal, the inner product of {x1 × k1 ÷ f, y1 × k1 ÷ f, k1} and {x2 × k2 ÷ f, y2 × k2 ÷ f, k2} is zero.
Rearranging this gives the constraint x1 × x2 + y1 × y2 + f² = 0, i.e. f² = -(x1 × x2 + y1 × y2), where M1(x1, y1) and M2(x2, y2) are the vanishing point coordinates obtained in step 202.
The three-dimensional rotation angles α, β and γ, i.e. the rotation angles about the X, Y and Z axes respectively, are obtained from the normal F.
k is the slope of the eye-corner line e1e2 in the image plane, as shown in Fig. 8:
k = (y2 - y1)/(x2 - x1).
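A minimal numerical sketch of step 203 follows. It assumes image coordinates centered on the principal point, recovers the focal length from f² = -(x1·x2 + y1·y2), forms the frontal-plane normal F = E × N, and converts F (plus the eye-line slope k) into rotation angles. The particular trigonometric mapping from F and k to α, β, γ is one plausible convention chosen for illustration; the patent does not spell out these formulas:

```python
import math
import numpy as np

def rotation_angles(M1, M2, e1, e2):
    """Estimate rotation angles about X, Y, Z from vanishing points M1, M2
    and the two inner eye corners e1, e2 (all centered image coordinates)."""
    x1, y1 = M1
    x2, y2 = M2
    # Orthogonal directions E and N imply x1*x2 + y1*y2 + f^2 = 0.
    f2 = -(x1 * x2 + y1 * y2)
    if f2 <= 0:
        raise ValueError("vanishing points not consistent with orthogonal directions")
    f = math.sqrt(f2)
    E = np.array([x1, y1, f])            # direction of the eye-corner line
    N = np.array([x2, y2, f])            # direction of the eye-to-nose line
    F = np.cross(E, N)                   # normal of the frontal face plane
    F = F / np.linalg.norm(F)
    # Illustrative convention: pitch about X and yaw about Y from the normal,
    # roll about Z from the slope k of the eye-corner line in the image.
    alpha = math.atan2(F[1], F[2])       # rotation about X
    beta = math.atan2(F[0], F[2])        # rotation about Y
    k = (e2[1] - e1[1]) / (e2[0] - e1[0])
    gamma = math.atan(k)                 # rotation about Z
    return alpha, beta, gamma
```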
As shown in Fig. 3, in the method of the present invention, the feature point depth estimation step comprises:
a face image feature point extraction step, in which an active shape model (ASM) is used to locate the facial features, shown as block 301 in Fig. 3;
a step of extracting illumination and texture information, shown as block 302 in Fig. 3;
a step of training the neural network, shown as block 303 in Fig. 3;
a step of estimating the depth (Z) of each feature point and obtaining the three-dimensional coordinates of each point, shown as block 304 in Fig. 3.
In step 301, an active shape model (ASM) is used to locate the facial features, which can be adjusted manually if necessary.
The two-dimensional coordinates of all located facial feature points are recorded, the Euclidean distance between each pair of points is computed, and the longest distance in the face, i.e. the distance from the forehead feature point to the chin feature point, is denoted L.
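A minimal sketch of this distance computation, assuming the ASM landmarks are given as an (n, 2) NumPy array and that the indices of the forehead and chin points are known (the default indices below are placeholders):

```python
import numpy as np

def normalized_distances(points, forehead_idx=0, chin_idx=8):
    """Pairwise Euclidean distances between 2D feature points, divided by
    L = distance from the forehead feature point to the chin feature point."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)            # (n, n) distance matrix
    L = dists[forehead_idx, chin_idx]                 # longest face distance
    return dists / L, L
```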
In step 302, texture information is extracted from the input frontal face image.
The texture is extracted as follows.
The color picture is split into the Y (luminance), Cr (chrominance r) and Cb (chrominance b) components of the YCrCb color space, giving a three-component vector at each pixel. For each pixel, the YCrCb components of the surrounding pixels are used to compute the differences in the horizontal, vertical and diagonal directions; if the color does not change, the value is 0.
Let the current pixel be point; its Y (luminance) component corresponds to Y4 in the following 3×3 neighborhood:
Y0 | Y1 | Y2 |
Y3 | Y4 | Y5 |
Y6 | Y7 | Y8 |
Its Cr (chrominance r) component corresponds to Cr4 in the following neighborhood:
Cr0 | Cr1 | Cr2 |
Cr3 | Cr4 | Cr5 |
Cr6 | Cr7 | Cr8 |
Its Cb (chrominance b) component corresponds to Cb4 in the following neighborhood:
Cb0 | Cb1 | Cb2 |
Cb3 | Cb4 | Cb5 |
Cb6 | Cb7 | Cb8 |
The following directional differences are then computed:
horizontal: y0 = ((Y0 + Y1 + Y2) - (Y6 + Y7 + Y8))/3
vertical: y1 = ((Y0 + Y3 + Y6) - (Y2 + Y5 + Y8))/3
right diagonal: y2 = ((Y0 + Y1 + Y3) - (Y5 + Y7 + Y8))/3
left diagonal: y3 = ((Y1 + Y2 + Y5) - (Y3 + Y6 + Y7))/3
The maximum of the horizontal, vertical, right diagonal and left diagonal values is taken as the value at this pixel:
point.Y = max(y0, y1, y2, y3);
point.Cr and point.Cb are computed in the same way.
A gray-scale (Gray) map is then generated from the resulting YCrCb texture.
Using the feature point coordinates extracted in step 301, the gray value between each pair of feature points is computed, i.e. the gray values between every two feature point coordinates in the texture gray map are summed.
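The directional-difference extraction of step 302 can be sketched as follows for one channel (Y shown; Cr and Cb are handled identically), with the per-pixel value taken as the maximum of the four directional differences, point.Y = max(y0, y1, y2, y3). The function name is illustrative:

```python
import numpy as np

def directional_texture(Y):
    """Per-pixel texture value from a 3x3 neighborhood of channel Y (float array):
    max of horizontal, vertical, right-diagonal and left-diagonal differences."""
    out = np.zeros_like(Y, dtype=float)
    H, W = Y.shape
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            n = Y[r - 1:r + 2, c - 1:c + 2]   # 3x3 neighborhood, n[1,1] is Y4
            y0 = ((n[0, 0] + n[0, 1] + n[0, 2]) - (n[2, 0] + n[2, 1] + n[2, 2])) / 3
            y1 = ((n[0, 0] + n[1, 0] + n[2, 0]) - (n[0, 2] + n[1, 2] + n[2, 2])) / 3
            y2 = ((n[0, 0] + n[0, 1] + n[1, 0]) - (n[1, 2] + n[2, 1] + n[2, 2])) / 3
            y3 = ((n[0, 1] + n[0, 2] + n[1, 2]) - (n[1, 0] + n[2, 0] + n[2, 1])) / 3
            out[r, c] = max(y0, y1, y2, y3)
    return out
```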
Step 303 uses the prior knowledge of each three-dimensional face feature point to train a neural network for each feature point.
The prior knowledge of each three-dimensional face feature point includes the following information:
the distance between each pair of feature points divided by L (the distance from the forehead feature point to the chin feature point);
the texture gray value between each pair of feature points divided by L (the distance from the forehead feature point to the chin feature point).
Let
Eu = Eij (i, j ∈ M, i ≠ j)
Gu = Gij (i, j ∈ M, i ≠ j)
where M is the set of current facial feature points, L is the distance from the forehead feature point to the chin feature point, Eij is the Euclidean distance between two points in the two-dimensional image divided by L, and Gij is the texture gray value between two points in the two-dimensional image divided by L.
To generate the neural network for the forehead feature point, for example, the Eu and Gu values related to the forehead feature point and the depth of the forehead feature point are first collected as training values, and a BP (back-propagation) neural network is trained to obtain its network weights.
The distances and texture information of the feature points are used for two reasons:
1) the face is roughly spherical, so the depth of each point is related to where it lies in the distribution;
2) points at different depths are affected differently by illumination.
Based on the existing three-dimensional face prior knowledge, a neural network can therefore be trained for each feature point.
In step 304, the Euclidean distances, texture gray values and L of the feature points in the current frontal face image are fed into the neural networks to obtain the estimated depth value Z of each feature point; combined with the known X and Y two-dimensional coordinates, this gives the three-dimensional coordinates of each point.
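A compact sketch of steps 303-304 follows: a one-hidden-layer back-propagation (BP) network implemented in NumPy, trained to map the normalized distance (Eu) and gray-value (Gu) features of one feature point to its depth. The network size, learning rate and exact feature layout are illustrative assumptions; in the described method one such network is trained per feature point from the three-dimensional face prior data:

```python
import numpy as np

def train_bp_network(features, depths, hidden=8, lr=0.01, epochs=2000, seed=0):
    """Train a one-hidden-layer BP network mapping normalized distance/gray-value
    features of one feature point (n_samples x n_inputs) to its depth (n_samples,)."""
    rng = np.random.default_rng(seed)
    n_in = features.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    b2 = np.zeros(1)
    y = depths.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(features @ W1 + b1)      # hidden layer activation
        out = h @ W2 + b2                    # linear output = estimated depth
        err = out - y                        # prediction error
        # Back-propagate the error and update the weights (gradient descent).
        grad_W2 = h.T @ err / len(y)
        grad_b2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        grad_W1 = features.T @ dh / len(y)
        grad_b1 = dh.mean(axis=0)
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
    return W1, b1, W2, b2

def estimate_depth(weights, feature_vector):
    """Run the trained network on the features from one image to estimate Z."""
    W1, b1, W2, b2 = weights
    h = np.tanh(feature_vector @ W1 + b1)
    return float(h @ W2 + b2)
```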
As shown in Fig. 4, in the method of the present invention, the three-dimensional modeling step comprises:
a step of creating a neutral face model, i.e. creating a neutral face model from the three-dimensional face prior knowledge, shown as block 401 in Fig. 4;
a step of adjusting the pose of the neutral face model, in which the X, Y and Z rotation angles obtained in step 203 are used to rotate the neutral face model, shown as block 402 in Fig. 4;
a face deformation step, in which the feature points of the model are first moved to the three-dimensional coordinates obtained in step 304, and the Dirichlet free-form deformation algorithm is then used to adjust the remaining three-dimensional coordinates; compared with other free-form deformation methods it is more flexible, since the control points can be placed arbitrarily; this step is shown as block 403 in Fig. 4;
a step of generating the person-specific face model, in which the neutral face model is turned into the specific face model and saved, shown as block 404 in Fig. 4 (a simplified sketch of blocks 402-403 is given after these steps).
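The pose adjustment of block 402 and the deformation of block 403 can be sketched as below. The rotation applies standard rotation matrices built from the angles of step 203. True Dirichlet free-form deformation (DFFD) interpolates control-point displacements with Sibson (natural-neighbor) coordinates; computing those coordinates is beyond a short example, so inverse-distance weighting is used here purely as a simplified stand-in:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Rotation about X, then Y, then Z by angles alpha, beta, gamma (radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def adjust_and_deform(vertices, angles, control_src, control_dst, power=2.0):
    """Rotate the neutral model (block 402), then move it towards the measured
    feature points (block 403). Inverse-distance weighting is a simple stand-in
    for Dirichlet free-form deformation (DFFD)."""
    R = rotation_matrix(*angles)
    v = vertices @ R.T                     # rotated neutral model
    src = control_src @ R.T                # control points follow the rotation
    disp = control_dst - src               # required control-point displacements
    out = v.copy()
    for i, p in enumerate(v):
        d = np.linalg.norm(src - p, axis=1)
        if d.min() < 1e-9:                 # vertex coincides with a control point
            out[i] = p + disp[d.argmin()]
            continue
        w = 1.0 / d ** power
        out[i] = p + (w[:, None] * disp).sum(axis=0) / w.sum()
    return out
```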
As shown in Fig. 5, in the method of the present invention, the texture mapping step comprises:
a texture mapping step, in which the obtained face model and the frontal face image are combined by pasting the corresponding texture information onto the model in proportion, shown as block 501 in Fig. 5;
a bilinear interpolation step, in which texture interpolation is carried out by bilinear interpolation where the frontal image alone is insufficient, shown as block 502 in Fig. 5; bilinear interpolation smooths the aliasing between output pixels by processing neighboring pixels in the texture, so the rendered image appears smoother (see the sketch after these steps);
a step of constructing the three-dimensional model, in which the textured three-dimensional model is output, shown as block 503 in Fig. 5.
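A minimal sketch of the bilinear interpolation used in block 502: the texture value at a fractional position is the distance-weighted average of its four neighboring pixels, which smooths the regions that the projected frontal image does not cover well. The function name is illustrative:

```python
import numpy as np

def bilinear_sample(texture, x, y):
    """Sample a 2D texture (H x W array, optionally with channels) at fractional (x, y)."""
    H, W = texture.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    x0, y0 = max(x0, 0), max(y0, 0)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom
```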
Referring to Fig. 9, a device for three-dimensional face reconstruction from a single face image according to the present invention comprises:
an input device 61 for inputting a face image;
a pose estimation processing device 62, configured to use the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, to estimate the pose angle, and to use two vanishing points to estimate the rotation angles of the face about the X, Y and Z axes;
a feature point depth estimation processing device 63, configured to train a neural network on a large set of existing facial feature point distances and texture information, to estimate with this network the depth values of the feature points in the current face image, and to combine the known two-dimensional coordinates with the estimated depths to obtain estimated three-dimensional coordinates;
a three-dimensional modeling processing device 64, configured to use the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm;
a texture mapping processing device 65, configured to project the single frontal face image onto the three-dimensional model and to estimate by bilinear interpolation the texture where the frontal information is insufficient and on the sides;
a true three-dimensional model construction device 66, configured to construct the true three-dimensional model.
The output of the input device 61 is connected to the input of the pose estimation processing device 62; the output of the pose estimation processing device 62 is connected to the input of the feature point depth estimation processing device 63; the output of the feature point depth estimation processing device 63 is connected to the input of the three-dimensional modeling processing device 64; the output of the three-dimensional modeling processing device 64 is connected to the input of the texture mapping processing device 65; and the output of the texture mapping processing device 65 is connected to the true three-dimensional model construction device 66.
The above embodiments only serve to further illustrate the method and device for three-dimensional face reconstruction from a single face image according to the present invention; the present invention is not limited to these embodiments, and any simple modification, equivalent change or adaptation made to the above embodiments in accordance with the technical spirit of the present invention falls within the protection scope of the technical solution of the present invention.
Claims (6)
1. A method for three-dimensional face reconstruction from a single face image, characterized by comprising:
a step of inputting a single face image;
a pose estimation step, in which the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, are used to estimate the pose angle, and two vanishing points are used to estimate the rotation angles of the face about the X, Y and Z axes;
a feature point depth estimation step, in which a neural network is trained on a large set of existing facial feature point distances and texture information, the network then estimates the depth values of the feature points in the current face image, and the known two-dimensional coordinates are combined with the estimated depths to obtain estimated three-dimensional coordinates;
a three-dimensional modeling step, in which the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points are used to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm;
a texture mapping step, in which the single frontal face image is projected onto the three-dimensional model, and bilinear interpolation is used to estimate the texture where the frontal information is insufficient and on the sides;
a step of constructing the true three-dimensional model.
2. The method for three-dimensional face reconstruction from a single face image according to claim 1, characterized in that the pose estimation step comprises:
a facial pose feature point extraction step, in which an active shape model (ASM) is applied to the input frontal face image to locate the facial features, namely the coordinates of four feature points: the two inner eye corners and the two nose wing points;
a step of computing the two vanishing points of the quadrilateral, in which the two vanishing points are computed from the four facial feature points;
a step of computing the rotation angles about the X, Y and Z axes.
3. The method for three-dimensional face reconstruction from a single face image according to claim 1, characterized in that the feature point depth estimation step comprises: a face image feature point extraction step, in which an active shape model (ASM) is used to locate the facial features;
a step of extracting illumination and texture information;
a step of training the neural network;
a step of estimating the depth (Z) of each feature point and obtaining the three-dimensional coordinates of each point.
4. The method for three-dimensional face reconstruction from a single face image according to claim 1, characterized in that the three-dimensional modeling step comprises:
a step of creating a neutral face model;
a step of adjusting the pose of the neutral face model;
a face deformation step;
a step of generating the person-specific face model.
5. The method for three-dimensional face reconstruction from a single face image according to claim 1, characterized in that the texture mapping step comprises:
a texture mapping step, in which the obtained face model and the frontal face image are combined by pasting the texture onto the model;
a bilinear interpolation step;
a step of constructing the three-dimensional model.
6. A device for three-dimensional face reconstruction from a single face image, characterized by comprising:
an input device for inputting a face image;
a pose estimation processing device, configured to use the coordinates of four feature points, namely the two inner eye corners and the two nose wing points, to estimate the pose angle, and to use two vanishing points to estimate the rotation angles of the face about the X, Y and Z axes;
a feature point depth estimation processing device, configured to train a neural network on a large set of existing facial feature point distances and texture information, to estimate with this network the depth values of the feature points in the current face image, and to combine the known two-dimensional coordinates with the estimated depths to obtain estimated three-dimensional coordinates;
a three-dimensional modeling processing device, configured to use the rotation angles of the three-dimensional face about the X, Y and Z axes and the estimated three-dimensional coordinates of the facial feature points to deform a three-dimensional face model with the Dirichlet free-form deformation (DFFD) algorithm;
a texture mapping processing device, configured to project the single frontal face image onto the three-dimensional model and to estimate by bilinear interpolation the texture where the frontal information is insufficient and on the sides;
a true three-dimensional model construction device, configured to construct the true three-dimensional model;
wherein the output of the input device is connected to the input of the pose estimation processing device; the output of the pose estimation processing device is connected to the input of the feature point depth estimation processing device; the output of the feature point depth estimation processing device is connected to the input of the three-dimensional modeling processing device; the output of the three-dimensional modeling processing device is connected to the input of the texture mapping processing device; and the output of the texture mapping processing device is connected to the true three-dimensional model construction device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009101127795A CN102054291A (en) | 2009-11-04 | 2009-11-04 | Method and device for reconstructing three-dimensional face based on single face image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009101127795A CN102054291A (en) | 2009-11-04 | 2009-11-04 | Method and device for reconstructing three-dimensional face based on single face image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102054291A true CN102054291A (en) | 2011-05-11 |
Family
ID=43958575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009101127795A Pending CN102054291A (en) | 2009-11-04 | 2009-11-04 | Method and device for reconstructing three-dimensional face based on single face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102054291A (en) |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102222363A (en) * | 2011-07-19 | 2011-10-19 | 杭州实时数码科技有限公司 | Method for fast constructing high-accuracy personalized face model on basis of facial images |
CN102436680A (en) * | 2011-08-19 | 2012-05-02 | 合肥鹏润图像科技有限公司 | Method for making digital photo stereo image |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
CN103473804A (en) * | 2013-08-29 | 2013-12-25 | 小米科技有限责任公司 | Image processing method, device and terminal equipment |
CN103634554A (en) * | 2012-08-20 | 2014-03-12 | 联想(北京)有限公司 | A data transmission method, a data reception method and electronic devices |
CN103927515A (en) * | 2014-04-08 | 2014-07-16 | 章建国 | Three-dimension dynamic facial comparison method |
CN104573737A (en) * | 2013-10-18 | 2015-04-29 | 华为技术有限公司 | Feature point locating method and device |
CN104899921A (en) * | 2015-06-04 | 2015-09-09 | 杭州电子科技大学 | Single-view video human body posture recovery method based on multi-mode self-coding model |
CN104978548A (en) * | 2014-04-02 | 2015-10-14 | 汉王科技股份有限公司 | Visual line estimation method and visual line estimation device based on three-dimensional active shape model |
WO2015172679A1 (en) * | 2014-05-14 | 2015-11-19 | 华为技术有限公司 | Image processing method and device |
CN105574518A (en) * | 2016-01-25 | 2016-05-11 | 北京天诚盛业科技有限公司 | Method and device for human face living detection |
CN105701455A (en) * | 2016-01-05 | 2016-06-22 | 安阳师范学院 | Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method |
WO2016146001A1 (en) * | 2015-03-17 | 2016-09-22 | 阿里巴巴集团控股有限公司 | Three-dimensional modelling method and device |
CN106023302A (en) * | 2016-05-06 | 2016-10-12 | 刘进 | Mobile communication terminal, three-dimensional reconstruction method thereof and server |
CN106067190A (en) * | 2016-05-27 | 2016-11-02 | 俞怡斐 | A kind of fast face threedimensional model based on single image generates and alternative approach |
WO2016188318A1 (en) * | 2015-05-22 | 2016-12-01 | 腾讯科技(深圳)有限公司 | 3d human face reconstruction method, apparatus and server |
CN106803284A (en) * | 2017-01-11 | 2017-06-06 | 北京旷视科技有限公司 | Build the method and device of the 3-D view of face |
CN106910241A (en) * | 2017-01-20 | 2017-06-30 | 徐迪 | The reconstructing system and method for the three-dimensional human head based on cell-phone camera and Cloud Server |
CN107452049A (en) * | 2016-05-30 | 2017-12-08 | 腾讯科技(深圳)有限公司 | A kind of three-dimensional head modeling method and device |
CN107527092A (en) * | 2016-06-15 | 2017-12-29 | 福特全球技术公司 | Trained using audible data for colliding the algorithm hidden |
CN107679446A (en) * | 2017-08-17 | 2018-02-09 | 平安科技(深圳)有限公司 | Human face posture detection method, device and storage medium |
CN108475438A (en) * | 2016-10-31 | 2018-08-31 | 谷歌有限责任公司 | The Facial reconstruction of insertion based on study |
CN108491881A (en) * | 2018-03-23 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating detection model |
CN108492373A (en) * | 2018-03-13 | 2018-09-04 | 齐鲁工业大学 | A kind of face embossment Geometric Modeling Method |
CN108537836A (en) * | 2018-04-12 | 2018-09-14 | 维沃移动通信有限公司 | A kind of depth data acquisition methods and mobile terminal |
CN108665555A (en) * | 2018-05-15 | 2018-10-16 | 华中师范大学 | A kind of autism interfering system incorporating real person's image |
CN108961384A (en) * | 2017-05-19 | 2018-12-07 | 中国科学院苏州纳米技术与纳米仿生研究所 | three-dimensional image reconstruction method |
CN109147037A (en) * | 2018-08-16 | 2019-01-04 | Oppo广东移动通信有限公司 | Effect processing method, device and electronic equipment based on threedimensional model |
CN109377557A (en) * | 2018-11-26 | 2019-02-22 | 中山大学 | Real-time three-dimensional facial reconstruction method based on single frames facial image |
CN109671108A (en) * | 2018-12-18 | 2019-04-23 | 重庆理工大学 | A kind of single width multi-angle of view facial image Attitude estimation method arbitrarily rotated in plane |
CN109919876A (en) * | 2019-03-11 | 2019-06-21 | 四川川大智胜软件股份有限公司 | A kind of true face model building of three-dimensional and three-dimensional true face photographic system |
WO2020029572A1 (en) * | 2018-08-10 | 2020-02-13 | 浙江宇视科技有限公司 | Human face feature point detection method and device, equipment and storage medium |
WO2020042720A1 (en) * | 2018-08-28 | 2020-03-05 | 腾讯科技(深圳)有限公司 | Human body three-dimensional model reconstruction method, device, and storage medium |
CN111091624A (en) * | 2019-12-19 | 2020-05-01 | 南京大学 | Method for generating high-precision drivable human face three-dimensional model from single picture |
CN111223175A (en) * | 2018-11-27 | 2020-06-02 | 财团法人交大思源基金会 | Three-dimensional face reconstruction method |
CN111292414A (en) * | 2020-02-24 | 2020-06-16 | 当家移动绿色互联网技术集团有限公司 | Method and device for generating three-dimensional image of object, storage medium and electronic equipment |
CN111428579A (en) * | 2020-03-03 | 2020-07-17 | 平安科技(深圳)有限公司 | Face image acquisition method and system |
CN112507848A (en) * | 2020-12-03 | 2021-03-16 | 中科智云科技有限公司 | Mobile terminal real-time human face attitude estimation method |
CN112597901A (en) * | 2020-12-23 | 2021-04-02 | 艾体威尔电子技术(北京)有限公司 | Multi-face scene effective face recognition device and method based on three-dimensional distance measurement |
CN113705280A (en) * | 2020-05-21 | 2021-11-26 | 北京聚匠艺传媒有限公司 | Human-computer interaction method and device based on facial features |
CN114061761A (en) * | 2021-11-17 | 2022-02-18 | 重庆大学 | Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101404091A (en) * | 2008-11-07 | 2009-04-08 | 重庆邮电大学 | Three-dimensional human face reconstruction method and system based on two-step shape modeling |
-
2009
- 2009-11-04 CN CN2009101127795A patent/CN102054291A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101404091A (en) * | 2008-11-07 | 2009-04-08 | 重庆邮电大学 | Three-dimensional human face reconstruction method and system based on two-step shape modeling |
Non-Patent Citations (1)
Title |
---|
ZHAO Guangji: "Three-dimensional reconstruction based on face photographs with a certain rotation angle", China Masters' Theses Full-text Database *
Cited By (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102222363A (en) * | 2011-07-19 | 2011-10-19 | 杭州实时数码科技有限公司 | Method for fast constructing high-accuracy personalized face model on basis of facial images |
CN102436680A (en) * | 2011-08-19 | 2012-05-02 | 合肥鹏润图像科技有限公司 | Method for making digital photo stereo image |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
CN102663820B (en) * | 2012-04-28 | 2014-10-22 | 清华大学 | Three-dimensional head model reconstruction method |
CN103634554B (en) * | 2012-08-20 | 2017-06-27 | 联想(北京)有限公司 | A kind of method of data transfer, the method for data receiver and electronic equipment |
CN103634554A (en) * | 2012-08-20 | 2014-03-12 | 联想(北京)有限公司 | A data transmission method, a data reception method and electronic devices |
CN103473804A (en) * | 2013-08-29 | 2013-12-25 | 小米科技有限责任公司 | Image processing method, device and terminal equipment |
CN104573737B (en) * | 2013-10-18 | 2018-03-27 | 华为技术有限公司 | The method and device of positioning feature point |
CN104573737A (en) * | 2013-10-18 | 2015-04-29 | 华为技术有限公司 | Feature point locating method and device |
CN104978548A (en) * | 2014-04-02 | 2015-10-14 | 汉王科技股份有限公司 | Visual line estimation method and visual line estimation device based on three-dimensional active shape model |
CN104978548B (en) * | 2014-04-02 | 2018-09-25 | 汉王科技股份有限公司 | A kind of gaze estimation method and device based on three-dimensional active shape model |
CN103927515A (en) * | 2014-04-08 | 2014-07-16 | 章建国 | Three-dimension dynamic facial comparison method |
US10043308B2 (en) | 2014-05-14 | 2018-08-07 | Huawei Technologies Co., Ltd. | Image processing method and apparatus for three-dimensional reconstruction |
WO2015172679A1 (en) * | 2014-05-14 | 2015-11-19 | 华为技术有限公司 | Image processing method and device |
CN105096377A (en) * | 2014-05-14 | 2015-11-25 | 华为技术有限公司 | Image processing method and apparatus |
CN105096377B (en) * | 2014-05-14 | 2019-03-19 | 华为技术有限公司 | A kind of image processing method and device |
WO2016146001A1 (en) * | 2015-03-17 | 2016-09-22 | 阿里巴巴集团控股有限公司 | Three-dimensional modelling method and device |
US10410405B2 (en) | 2015-03-17 | 2019-09-10 | Alibaba Group Holding Limited | Reducing computational complexity in three-dimensional modeling based on two-dimensional images |
CN106033621A (en) * | 2015-03-17 | 2016-10-19 | 阿里巴巴集团控股有限公司 | Three-dimensional modeling method and device |
CN106033621B (en) * | 2015-03-17 | 2018-08-24 | 阿里巴巴集团控股有限公司 | A kind of method and device of three-dimensional modeling |
US10789767B2 (en) | 2015-03-17 | 2020-09-29 | Alibaba Group Holding Limited | Reducing computational complexity in three-dimensional modeling based on two-dimensional images |
US10055879B2 (en) | 2015-05-22 | 2018-08-21 | Tencent Technology (Shenzhen) Company Limited | 3D human face reconstruction method, apparatus and server |
WO2016188318A1 (en) * | 2015-05-22 | 2016-12-01 | 腾讯科技(深圳)有限公司 | 3d human face reconstruction method, apparatus and server |
CN104899921B (en) * | 2015-06-04 | 2017-12-22 | 杭州电子科技大学 | Single-view videos human body attitude restoration methods based on multi-modal own coding model |
CN104899921A (en) * | 2015-06-04 | 2015-09-09 | 杭州电子科技大学 | Single-view video human body posture recovery method based on multi-mode self-coding model |
CN105701455A (en) * | 2016-01-05 | 2016-06-22 | 安阳师范学院 | Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method |
CN105574518A (en) * | 2016-01-25 | 2016-05-11 | 北京天诚盛业科技有限公司 | Method and device for human face living detection |
CN106023302A (en) * | 2016-05-06 | 2016-10-12 | 刘进 | Mobile communication terminal, three-dimensional reconstruction method thereof and server |
CN106067190A (en) * | 2016-05-27 | 2016-11-02 | 俞怡斐 | A kind of fast face threedimensional model based on single image generates and alternative approach |
CN106067190B (en) * | 2016-05-27 | 2019-04-30 | 俞怡斐 | A kind of generation of fast face threedimensional model and transform method based on single image |
CN107452049A (en) * | 2016-05-30 | 2017-12-08 | 腾讯科技(深圳)有限公司 | A kind of three-dimensional head modeling method and device |
CN107527092A (en) * | 2016-06-15 | 2017-12-29 | 福特全球技术公司 | Trained using audible data for colliding the algorithm hidden |
US11335120B2 (en) | 2016-10-31 | 2022-05-17 | Google Llc | Face reconstruction from a learned embedding |
CN108475438A (en) * | 2016-10-31 | 2018-08-31 | 谷歌有限责任公司 | The Facial reconstruction of insertion based on study |
CN108475438B (en) * | 2016-10-31 | 2022-04-12 | 谷歌有限责任公司 | Learning-based embedded face reconstruction |
CN114694221A (en) * | 2016-10-31 | 2022-07-01 | 谷歌有限责任公司 | Face reconstruction method based on learning |
CN106803284B (en) * | 2017-01-11 | 2021-03-23 | 北京旷视科技有限公司 | Method and device for constructing three-dimensional image of face |
CN106803284A (en) * | 2017-01-11 | 2017-06-06 | 北京旷视科技有限公司 | Build the method and device of the 3-D view of face |
CN106910241A (en) * | 2017-01-20 | 2017-06-30 | 徐迪 | The reconstructing system and method for the three-dimensional human head based on cell-phone camera and Cloud Server |
CN108961384A (en) * | 2017-05-19 | 2018-12-07 | 中国科学院苏州纳米技术与纳米仿生研究所 | three-dimensional image reconstruction method |
CN108961384B (en) * | 2017-05-19 | 2021-11-30 | 中国科学院苏州纳米技术与纳米仿生研究所 | Three-dimensional image reconstruction method |
WO2019033576A1 (en) * | 2017-08-17 | 2019-02-21 | 平安科技(深圳)有限公司 | Face posture detection method, device, and storage medium |
CN107679446A (en) * | 2017-08-17 | 2018-02-09 | 平安科技(深圳)有限公司 | Human face posture detection method, device and storage medium |
CN108492373B (en) * | 2018-03-13 | 2019-03-08 | 齐鲁工业大学 | A kind of face embossment Geometric Modeling Method |
CN108492373A (en) * | 2018-03-13 | 2018-09-04 | 齐鲁工业大学 | A kind of face embossment Geometric Modeling Method |
CN108491881A (en) * | 2018-03-23 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating detection model |
CN108537836A (en) * | 2018-04-12 | 2018-09-14 | 维沃移动通信有限公司 | A kind of depth data acquisition methods and mobile terminal |
CN108665555A (en) * | 2018-05-15 | 2018-10-16 | 华中师范大学 | A kind of autism interfering system incorporating real person's image |
US11475708B2 (en) | 2018-08-10 | 2022-10-18 | Zhejiang Uniview Technologies Co., Ltd. | Face feature point detection method and device, equipment and storage medium |
WO2020029572A1 (en) * | 2018-08-10 | 2020-02-13 | 浙江宇视科技有限公司 | Human face feature point detection method and device, equipment and storage medium |
WO2020034698A1 (en) * | 2018-08-16 | 2020-02-20 | Oppo广东移动通信有限公司 | Three-dimensional model-based special effect processing method and device, and electronic apparatus |
CN109147037A (en) * | 2018-08-16 | 2019-01-04 | Oppo广东移动通信有限公司 | Effect processing method, device and electronic equipment based on threedimensional model |
WO2020042720A1 (en) * | 2018-08-28 | 2020-03-05 | 腾讯科技(深圳)有限公司 | Human body three-dimensional model reconstruction method, device, and storage medium |
US11302064B2 (en) | 2018-08-28 | 2022-04-12 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for reconstructing three-dimensional model of human body, and storage medium |
CN109377557A (en) * | 2018-11-26 | 2019-02-22 | 中山大学 | Real-time three-dimensional facial reconstruction method based on single frames facial image |
CN109377557B (en) * | 2018-11-26 | 2022-12-27 | 中山大学 | Real-time three-dimensional face reconstruction method based on single-frame face image |
CN111223175B (en) * | 2018-11-27 | 2023-07-04 | 财团法人交大思源基金会 | Three-dimensional face reconstruction method |
CN111223175A (en) * | 2018-11-27 | 2020-06-02 | 财团法人交大思源基金会 | Three-dimensional face reconstruction method |
CN109671108A (en) * | 2018-12-18 | 2019-04-23 | 重庆理工大学 | A kind of single width multi-angle of view facial image Attitude estimation method arbitrarily rotated in plane |
CN109919876B (en) * | 2019-03-11 | 2020-09-01 | 四川川大智胜软件股份有限公司 | Three-dimensional real face modeling method and three-dimensional real face photographing system |
CN109919876A (en) * | 2019-03-11 | 2019-06-21 | 四川川大智胜软件股份有限公司 | A kind of true face model building of three-dimensional and three-dimensional true face photographic system |
CN111091624A (en) * | 2019-12-19 | 2020-05-01 | 南京大学 | Method for generating high-precision drivable human face three-dimensional model from single picture |
CN111091624B (en) * | 2019-12-19 | 2021-09-28 | 南京大学 | Method for generating high-precision drivable human face three-dimensional model from single picture |
CN111292414A (en) * | 2020-02-24 | 2020-06-16 | 当家移动绿色互联网技术集团有限公司 | Method and device for generating three-dimensional image of object, storage medium and electronic equipment |
WO2021174939A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Facial image acquisition method and system |
CN111428579A (en) * | 2020-03-03 | 2020-07-17 | 平安科技(深圳)有限公司 | Face image acquisition method and system |
CN113705280A (en) * | 2020-05-21 | 2021-11-26 | 北京聚匠艺传媒有限公司 | Human-computer interaction method and device based on facial features |
CN113705280B (en) * | 2020-05-21 | 2024-05-10 | 北京聚匠艺传媒有限公司 | Human-computer interaction method and device based on facial features |
CN112507848A (en) * | 2020-12-03 | 2021-03-16 | 中科智云科技有限公司 | Mobile terminal real-time human face attitude estimation method |
CN112507848B (en) * | 2020-12-03 | 2021-05-14 | 中科智云科技有限公司 | Mobile terminal real-time human face attitude estimation method |
CN112597901A (en) * | 2020-12-23 | 2021-04-02 | 艾体威尔电子技术(北京)有限公司 | Multi-face scene effective face recognition device and method based on three-dimensional distance measurement |
CN112597901B (en) * | 2020-12-23 | 2023-12-29 | 艾体威尔电子技术(北京)有限公司 | Device and method for effectively recognizing human face in multiple human face scenes based on three-dimensional ranging |
CN114061761A (en) * | 2021-11-17 | 2022-02-18 | 重庆大学 | Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction |
CN114061761B (en) * | 2021-11-17 | 2023-12-08 | 重庆大学 | Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102054291A (en) | Method and device for reconstructing three-dimensional face based on single face image | |
CN106023288B (en) | A kind of dynamic scapegoat's building method based on image | |
CN101916454B (en) | Method for reconstructing high-resolution human face based on grid deformation and continuous optimization | |
CN109196561A (en) | System and method for carrying out three dimensional garment distortion of the mesh and layering for fitting visualization | |
CN104008564B (en) | A kind of human face expression cloning process | |
CN101303772A (en) | Method for modeling non-linear three-dimensional human face based on single sheet image | |
CN103593870A (en) | Picture processing device and method based on human faces | |
CN112734890B (en) | Face replacement method and device based on three-dimensional reconstruction | |
EP3186787A1 (en) | Method and device for registering an image to a model | |
US20230169727A1 (en) | Generative Nonlinear Human Shape Models | |
CN104392484B (en) | A kind of Three-dimension Tree modeling method and device | |
Bartoli et al. | Template-based isometric deformable 3D reconstruction with sampling-based focal length self-calibration | |
Neophytou et al. | Shape and pose space deformation for subject specific animation | |
CN106484511A (en) | A kind of spectrum attitude moving method | |
Yu et al. | An rbf-based reparameterization method for constrained texture mapping | |
CN105184856A (en) | Two-phase human skin three-dimensional reconstruction method based on density matching | |
Takahashi et al. | Rank minimization approach to image inpainting using null space based alternating optimization | |
Gu et al. | Customized 3D digital human model rebuilding by orthographic images-based modelling method through open-source software | |
Starck et al. | Model-based human shape reconstruction from multiple views | |
Xie et al. | SkeletonFusion: Reconstruction and tracking of human body in real-time | |
Yamazaki et al. | Non-rigid shape registration using similarity-invariant differential coordinates | |
Zhang et al. | Human model adaptation for multiview markerless motion capture | |
Fan et al. | Image morphing with conformal welding | |
CN112365588B (en) | Virtual three-dimensional somatosensory modeling method, device and system | |
Winkler et al. | Mesh massage: a versatile mesh optimization framework |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20110511 |