
CN109345470A - Facial image fusion method and system - Google Patents

Facial image fusion method and system

Info

Publication number
CN109345470A
Authority
CN
China
Prior art keywords
face
characteristic point
feature
template
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811043607.2A
Other languages
Chinese (zh)
Other versions
CN109345470B (en)
Inventor
梁凌宇 (Liang Lingyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811043607.2A priority Critical patent/CN109345470B/en
Publication of CN109345470A publication Critical patent/CN109345470A/en
Application granted granted Critical
Publication of CN109345470B publication Critical patent/CN109345470B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a facial image fusion method, comprising: extracting the feature points of a target face I and of a reference face R; matching the apparent shape of the reference face R to the target face I according to these feature points; generating an illumination template M_L from the matched target face I' and reference face R'; generating a region template M_Q from the feature points of the target face I; and computing a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused. When the target face and the reference face differ greatly in illumination, the invention resolves the visual inconsistency that such a difference causes in face fusion and produces a natural fusion result.

Description

Facial image fusion method and system
Technical field
The present invention relates to the field of image processing and rendering, and in particular to a facial image fusion method and system.
Background art
Facial image fusion transfers the appearance features of a reference face R onto a target face I, thereby generating new content and styles in images or videos. The technology is widely used in the cultural and creative industries, such as film and television production, digital entertainment, social media, augmented reality, and personal photo editing. To achieve a seamless join between the reference face R and the target face I, their fusion region must be visually consistent. Traditional methods mainly rely on masking and boundary feathering, but this achieves good results only when the appearance difference between the target face I and the reference face R is small. Another class of methods keeps the apparent characteristics of the target face I while estimating the pixel values of the reference face R inside the fusion region; the typical representative is Poisson image editing. This method readjusts the appearance of the reference face R according to the boundary values of the target face I around the fusion region, so a smooth transition is obtained across the fusion boundary and a good fusion result is reached. However, when the target face I and the reference face R differ greatly in illumination, this method introduces visual artifacts into the fusion region. The industry has therefore long sought a method that resolves the visual inconsistency that arises when fusing faces with a large illumination difference.
Summary of the invention
The purpose of the invention is to overcome the above shortcomings of the prior art by providing a facial image fusion method.
Another object of the present invention is to provide a facial image fusion system that overcomes the above shortcomings of the prior art.
The purpose of the present invention is realized by the following technical solution:
A facial image fusion method, comprising:
Step 1: extracting the feature points of the target face I and of the reference face R;
Step 2: matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R;
Step 3: generating an illumination template M_L from the matched target face I' and reference face R';
Step 4: generating a region template M_Q from the feature points of the target face I;
Step 5: computing a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused.
Preferably, generating the illumination template M_L from the matched target face I' and reference face R' comprises: extracting an initial illumination feature T_L from the matched target face I' and reference face R'; and diffusing the initial illumination feature T_L to generate the illumination template M_L.
Preferably, generating the illumination template M_L from the matched target face I' and reference face R' further comprises: converting the target face I and the reference face R from the RGB color space to the CIELAB color space; smoothing the luminance channels of the target face I and of the matched reference face R' with an edge-preserving smoothing filter to obtain the illumination feature I_L of the target face and the illumination feature R_L of the matched reference face; dividing the illumination feature I_L of the target face by the illumination feature R_L of the matched reference face to obtain the initial illumination feature T_L; and diffusing T_L through the first iterative equation to generate the illumination template M_L.
The first iterative equation is:
M_L^(t+1) − M_L^(t) = (A_L − B_L) M_L^(t) + B_L T_L
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_L is a diagonal weight matrix, B_L = diag{B_L(i, i)}, whose weights control the region of illumination diffusion: the weight is B_L(i, i) = 1 inside the face region and B_L(i, i) = 0 elsewhere; A_L is the illumination similarity matrix, which contains the similarity between each illumination feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
Preferably, generating the region template M_Q from the feature points of the target face I comprises: extracting an initial region feature T_Q from the feature points of the target face I; and diffusing the initial region feature T_Q to generate the region template M_Q.
Preferably, diffusing the initial region feature T_Q to generate the region template M_Q comprises: diffusing T_Q through the second iterative equation to generate M_Q.
The second iterative equation is:
M_Q^(t+1) − M_Q^(t) = (A_Q − B_Q) M_Q^(t) + B_Q T_Q
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_Q is a diagonal weight matrix, B_Q = diag{B_Q(i, i)}, whose weights control the region of diffusion: the weight is B_Q(i, i) = 0 inside the face region and B_Q(i, i) = 1 elsewhere; A_Q is the region similarity matrix, which contains the similarity between each feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
Preferably, matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R comprises: calculating a transformation matrix from the distances between the feature points of the target face I and those of the reference face R; and transforming the reference face image R using the transformation matrix.
Preferably, extracting the feature points of the target face I and of the reference face R comprises: extracting the feature points on the outer contour and the facial features of the target face I; and extracting the feature points on the outer contour and the facial features of the reference face R.
Preferably, the weighted-sum formula in step 5 is:
O = M_L M_Q R' + (J − M_Q) I
where O is the fused face obtained from the target face I and the reference face R, and J is the all-ones matrix.
Another object of the present invention is realized by the following technical solution:
A facial image fusion system, comprising: a feature point extraction module for extracting the feature points of the target face I and of the reference face R; a face matching module for matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R; an illumination template generation module for generating the illumination template M_L from the matched target face I' and reference face R'; a region template generation module for generating the region template M_Q from the feature points of the target face I; and a face fusion module for computing a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused.
Preferably, the illumination template generation module comprises: an initial illumination feature extraction unit for extracting the initial illumination feature T_L from the matched target face I' and reference face R'; and an illumination template generation unit for diffusing the initial illumination feature T_L to generate the illumination template M_L.
Compared with the prior art, the present invention has the following advantages:
This scheme matches the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R, so the transition of the fusion region and the apparent features of the reference face can be adjusted automatically from the apparent features of the target and reference faces. An illumination template M_L is generated from the matched target face I' and reference face R', so that M_L adjusts the illumination of the matched reference face; a region template M_Q is generated from the feature points of the target face I; finally, a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R' fuses the target face I and the reference face R. Thus, even when the target and reference faces differ greatly in illumination, the method resolves the visual inconsistency that such a difference causes in face fusion and produces a natural fusion result.
Detailed description of the invention
Fig. 1 is the flow chart of facial image fusion method of the invention.
Fig. 2 is a flow chart of generating the illumination template M_L from the matched target face I' and reference face R' according to the invention.
Fig. 3 (a) is the schematic diagram of target face of the invention.
Fig. 3 (b) is the schematic diagram of the invention with reference to face.
Fig. 3 (c) is the schematic diagram of face after fusion of the invention.
Fig. 4 (a) is a schematic diagram of the initial illumination feature of the invention.
Fig. 4 (b) is the schematic diagram of illumination template of the invention.
Fig. 4 (c) is the schematic diagram of prime area feature of the invention.
Fig. 4 (d) is the schematic diagram of region template of the invention.
Fig. 5 is the structural schematic diagram of facial image emerging system of the invention.
Specific embodiment
The present invention will be further explained below with reference to the attached drawings and examples.
As shown in Figs. 1-4, the facial image fusion method comprises:
S1: extracting the feature points of the target face I (Fig. 3 (a)) and of the reference face R (Fig. 3 (b)). Specifically, the feature points on the outer contour and the facial features of the target face I are extracted, and likewise for the reference face R; the facial features include the ears, eyebrows, eyes, nose, and mouth. In this embodiment, a trained active shape model (ASM) is used for feature point extraction.
S2: matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R; that is, the reference face is registered so that its shape and appearance match the angle and size of the target face. Further, step S2 comprises: calculating a transformation matrix from the distances between the feature points of the target face I and those of the reference face R, and transforming the reference face image R with the transformation matrix. Specifically, a transformation matrix is computed that makes the distances between the transformed reference-face feature points and the target-face feature points as small as possible, and the reference face image is then warped by this matrix. After matching, the target and reference face images have the same size and their face regions occupy the same position.
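The registration in S2 can be sketched as a least-squares similarity transform (uniform scale, rotation, translation) estimated from the landmark correspondences. The Umeyama/Procrustes solution below is one standard choice; the patent does not fix the exact transform, so the function name and method are illustrative assumptions:

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src points onto dst.

    src, dst: (N, 2) arrays of corresponding landmarks
    (reference-face and target-face feature points).
    Returns a 2x3 matrix usable with an affine warp.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                  # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                              # rotation with det = +1
    var_s = (sc ** 2).sum() / len(src)          # variance of centered src points
    s = np.trace(np.diag(S) @ D) / var_s        # isotropic scale
    t = mu_d - s * R @ mu_s                     # translation
    return np.hstack([s * R, t[:, None]])       # 2x3 warp matrix
```

With OpenCV, `cv2.warpAffine(ref_img, M, (w, h))` would then warp the reference face image into the target face's frame, producing the matched reference face R'.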
S3: generating the illumination template M_L (Fig. 4 (b)) from the matched target face I' and reference face R'. Further, step S3 comprises: extracting the initial illumination feature T_L (Fig. 4 (a)) from the matched target face I' and reference face R', and diffusing T_L to generate the illumination template M_L. Step S3 further comprises:
S31: converting the target face I and the reference face R from the RGB color space to the CIELAB color space, so that each image consists of one luminance channel and two color channels. The luminance channel is the L* channel of the image transformed into CIELAB space, and the two color channels are the a* and b* channels of CIELAB space.
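A self-contained sketch of the S31 color space conversion (in practice a library routine such as OpenCV's `cvtColor` would be used); the constants are the standard sRGB/D65 ones:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, float)
    # invert the sRGB gamma to get linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    white = np.array([0.95047, 1.0, 1.08883])    # D65 reference white
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16                     # luminance channel L*
    a = 500 * (f[..., 0] - f[..., 1])            # color channel a*
    b = 200 * (f[..., 1] - f[..., 2])            # color channel b*
    return np.stack([L, a, b], axis=-1)
```

The L* channel of this output is the luminance channel smoothed in step S32 below.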
S32: smoothing the luminance channels of the target face I and of the matched reference face R' with an edge-preserving smoothing filter to obtain the illumination feature I_L of the target face and the illumination feature R_L of the matched reference face. Taking the bilateral filter as an example for extracting the illumination feature of the target face:
the filter kernel is w(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²) − (I_L*(i, j) − I_L*(k, l))² / (2σ_r²)), where (k, l) ranges over the neighborhood pixels of pixel (i, j), the parameters (σ_d, σ_r) control the sizes of the corresponding Gaussian kernels, and I_L* is the luminance channel of the target face; the illumination feature of the reference face is extracted in the same way.
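A minimal numpy sketch of the bilateral filtering in S32, directly implementing the kernel above on a single-channel image, followed by the ratio that S33 computes. The window radius, σ values, and the small `eps` are illustrative defaults, not from the patent; borders wrap around for brevity:

```python
import numpy as np

def bilateral_filter(img, sigma_d=3.0, sigma_r=0.1, radius=4):
    """Edge-preserving smoothing of a single-channel image (bilateral filter).

    Weights combine a spatial Gaussian (sigma_d) and a range Gaussian (sigma_r);
    borders wrap (np.roll), a simplification acceptable for a sketch.
    """
    img = np.asarray(img, float)
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2)
                       - (img - shifted) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm

def initial_illumination_feature(I_lum, R_lum, eps=1e-6):
    """T_L = smoothed target luminance / smoothed matched-reference luminance."""
    return bilateral_filter(I_lum) / (bilateral_filter(R_lum) + eps)
```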
S33: dividing the illumination feature I_L of the target face by the illumination feature R_L of the matched reference face to obtain the initial illumination feature T_L.
S34: diffusing the initial illumination feature T_L through the first iterative equation to generate the illumination template M_L.
The first iterative equation is:
M_L^(t+1) − M_L^(t) = (A_L − B_L) M_L^(t) + B_L T_L
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_L is a diagonal weight matrix, B_L = diag{B_L(i, i)}, whose weights control the region of illumination diffusion: the weight is B_L(i, i) = 1 inside the face region and B_L(i, i) = 0 elsewhere; A_L is the illumination similarity matrix, which contains the similarity between each illumination feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
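The first iterative equation can be sketched directly in matrix form on a flattened feature map. The exact construction of the similarity matrix A_L appears only in the patent's unreproduced formula, so here A is assumed to be a graph-Laplacian-like operator built from neighborhood weights (which keeps the iteration stable); B anchors the selected pixels to the initial feature T. Both the assumption on A and the iteration count are illustrative:

```python
import numpy as np

def diffuse_template(T, A, B_diag, iters=300):
    """Patent-style diffusion: M^(t+1) = M^(t) + (A - B) M^(t) + B T.

    T: (n,) initial feature values (flattened image);
    A: (n, n) similarity operator -- assumed Laplacian-like here,
       A = W - diag(W.sum(1)), with W holding neighborhood similarities;
    B_diag: (n,) diagonal anchor weights (1 anchors a pixel toward T).
    """
    A = np.asarray(A, float)
    B = np.diag(np.asarray(B_diag, float))
    T = np.asarray(T, float)
    M = np.zeros(len(T), float)
    for _ in range(iters):
        M = M + (A - B) @ M + B @ T
    return M
```

At the fixed point, anchored pixels agree with T and unanchored pixels take neighborhood-weighted averages, which is the smooth "spreading" of the initial illumination feature that the patent describes.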
S4: generating the region template M_Q (Fig. 4 (d)) from the feature points of the target face I. Further, step S4 comprises: extracting the initial region feature T_Q (Fig. 4 (c)) from the feature points of the target face I, i.e., a rough localization obtained from the outer contour of the target face I; and diffusing the initial region feature T_Q to generate the region template M_Q. The diffusion comprises:
diffusing T_Q through the second iterative equation to generate the region template M_Q.
The second iterative equation is:
M_Q^(t+1) − M_Q^(t) = (A_Q − B_Q) M_Q^(t) + B_Q T_Q
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_Q is a diagonal weight matrix, B_Q = diag{B_Q(i, i)}, whose weights control the region of diffusion: the weight is B_Q(i, i) = 0 inside the face region and B_Q(i, i) = 1 elsewhere; A_Q is the region similarity matrix, which contains the similarity between each feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
S5: computing the weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused (Fig. 3 (c)). The weighted-sum formula is:
O = M_L M_Q R' + (J − M_Q) I
where O is the fused face obtained from the target face I and the reference face R, and J is the all-ones matrix.
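The final blend of S5 is a per-pixel weighted sum; a direct numpy transcription of O = M_L M_Q R' + (J − M_Q) I, with the products taken elementwise since the template matrices are per-pixel weights:

```python
import numpy as np

def fuse_faces(I, R_warped, M_L, M_Q):
    """O = M_L * M_Q * R' + (J - M_Q) * I, all products elementwise per pixel.

    I: target face image; R_warped: matched reference face R';
    M_L: illumination template; M_Q: region template; J: all-ones matrix.
    """
    J = np.ones_like(np.asarray(M_Q, float))
    return M_L * M_Q * R_warped + (J - M_Q) * I
```

Where M_Q is 0 (outside the fusion region) the target face passes through unchanged; where M_Q is 1 the relit reference face M_L * R' takes over, with the diffused templates providing the smooth transition in between.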
As shown in Fig. 5, the system corresponding to the above facial image fusion method comprises: a feature point extraction module for extracting the feature points of the target face I and of the reference face R; a face matching module for matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R; an illumination template generation module for generating the illumination template M_L from the matched target face I' and reference face R'; a region template generation module for generating the region template M_Q from the feature points of the target face I; and a face fusion module for computing a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused.
In this embodiment, the illumination template generation module comprises: an initial illumination feature extraction unit for extracting the initial illumination feature T_L from the matched target face I' and reference face R'; and an illumination template generation unit for diffusing the initial illumination feature T_L to generate the illumination template M_L.
In this embodiment, the illumination template generation module further comprises a color space unit for converting the target face I and the reference face R from the RGB color space to the CIELAB color space. Further, the initial illumination feature extraction unit is also used to smooth the luminance channels of the target face I and of the matched reference face R' with an edge-preserving smoothing filter, obtaining the illumination feature I_L of the target face and the illumination feature R_L of the matched reference face, and to divide I_L by R_L to obtain the initial illumination feature T_L. The illumination template generation unit is also used to diffuse T_L through the first iterative equation to generate the illumination template M_L.
The first iterative equation is:
M_L^(t+1) − M_L^(t) = (A_L − B_L) M_L^(t) + B_L T_L
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_L is a diagonal weight matrix, B_L = diag{B_L(i, i)}, whose weights control the region of illumination diffusion: the weight is B_L(i, i) = 1 inside the face region and B_L(i, i) = 0 elsewhere; A_L is the illumination similarity matrix, which contains the similarity between each illumination feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
In this embodiment, the region template generation module comprises: an initial region feature extraction unit for extracting the initial region feature T_Q from the feature points of the target face I; and a region template generation unit for diffusing T_Q to generate the region template M_Q.
Further, the region template generation unit is also used to diffuse the initial region feature T_Q through the second iterative equation to generate the region template M_Q.
The second iterative equation is:
M_Q^(t+1) − M_Q^(t) = (A_Q − B_Q) M_Q^(t) + B_Q T_Q
where t is the iteration count with initial value 0, and the maximum number of iterations can be adjusted to the situation; B_Q is a diagonal weight matrix, B_Q = diag{B_Q(i, i)}, whose weights control the region of diffusion: the weight is B_Q(i, i) = 0 inside the face region and B_Q(i, i) = 1 elsewhere; A_Q is the region similarity matrix, which contains the similarity between each feature point p_i and the other points in its neighborhood, specifically:
where the subscripts i and j denote the i-th and j-th pixels of the image and j ∈ N(i) denotes the neighborhood of pixel i; d takes a small value where diffusion is restricted and a large value inside the diffusion region; G = I_L is the guidance feature and G_i − G_j its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
In this embodiment, the face matching module is also used to calculate a transformation matrix from the distances between the feature points of the target face I and those of the reference face R, and to transform the reference face image R using the transformation matrix.
In this embodiment, the feature point extraction module is also used to extract the feature points on the outer contour and the facial features of the target face I, and those on the outer contour and the facial features of the reference face R.
In this embodiment, the weighted-sum formula is:
O = M_L M_Q R' + (J − M_Q) I
where O is the fused face obtained from the target face I and the reference face R, and J is the all-ones matrix.
The beneficial effects of the present invention are as follows:
This scheme matches the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R, so the transition of the fusion region and the apparent features of the reference face can be adjusted automatically from the apparent features of the target and reference faces. An illumination template M_L is generated from the matched target face I' and reference face R', so that M_L adjusts the illumination of the matched reference face; a region template M_Q is generated from the feature points of the target face I; finally, a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R' fuses the target face I and the reference face R. Thus, even when the target and reference faces differ greatly in illumination, the method resolves the visual inconsistency that such a difference causes in face fusion and produces a natural fusion result.
Since the initial illumination feature T_L is extracted with a color space transform and edge-preserving smoothing before the illumination template M_L is generated, the diffusion of T_L adjusts the illumination of the registered reference face according to the illumination gradient features of the different feature points (i.e., G_i − G_j), giving the reference face good illumination consistency with the target face during fusion. The diffusion performed before generating the region template M_Q automatically adjusts the transition of the region fusion according to the gradient features of the facial features. The invention can therefore adaptively adjust the transition of the fusion region according to the shapes and illumination particulars of the target and reference faces, and automatically regulate the illumination of the reference face without human intervention, improving the efficiency and ease of use of the face fusion tool.
The above specific embodiment is a preferred embodiment of the present invention and does not limit the invention; any other change or equivalent substitution made without departing from the technical solution of the present invention falls within the protection scope of the invention.

Claims (10)

1. A facial image fusion method, characterized by comprising:
Step 1: extracting the feature points of the target face I and of the reference face R;
Step 2: matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and of the reference face R;
Step 3: generating an illumination template M_L from the matched target face I' and reference face R';
Step 4: generating a region template M_Q from the feature points of the target face I;
Step 5: computing a weighted sum of the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', whereby the target face I and the reference face R are fused.
2. The facial image fusion method according to claim 1, characterized in that generating the illumination template M_L from the matched target face I' and reference face R' comprises:
extracting the initial illumination feature T_L from the matched target face I' and reference face R';
diffusing the initial illumination feature T_L to generate the illumination template M_L.
3. The facial image fusion method according to claim 2, characterized in that generating the illumination template M_L according to the matched target face I' and the matched reference face R' further comprises:
converting the target face I and the reference face R from the RGB color space into the CIELAB color space;
smoothing the luminance channels of the target face I and of the matched reference face R' with an edge-preserving smoothing filter, respectively, to obtain an illumination feature I_L of the target face and an illumination feature R_L of the matched reference face;
dividing the illumination feature I_L of the target face by the illumination feature R_L of the matched reference face to obtain the initial illumination feature T_L;
diffusing the initial illumination feature T_L by a first iterative equation to generate the illumination template M_L;
the first iterative equation being:
M_L^(t+1) - M_L^(t) = (A_L - B_L) M_L^(t) + B_L T_L
wherein t is the iteration count with an initial value of 0, and the maximum number of iterations can be adjusted as required; B_L is a diagonal weight matrix, B_L = diag{B_L(i,i)}, whose weights control the region over which illumination is diffused, with B_L(i,i) = 1 inside the face region and B_L(i,i) = 0 elsewhere; A_L is an illumination similarity matrix containing the similarity between each illumination feature point p_i and the other points in its neighborhood, specifically:
wherein the subscripts i and j denote the i-th and j-th pixels of the image, and j ∈ N(i) denotes the neighborhood of pixel i; d restricts the diffusion region, taking a small value outside the diffusion region and a large value inside it; G = I_L is the guidance feature and G_i - G_j is its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
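The diffusion in claims 3 and 5 follows the edit-propagation pattern: pixels marked by the diagonal matrix B are clamped to the initial feature T, and the similarity matrix A spreads those values across pixels whose guidance feature G is similar. The NumPy sketch below is an illustration only, not the patented implementation: it uses a simple 4-neighborhood affinity in place of the claim's similarity formula (which is not reproduced here), and writes the update in the standard fixed-point form M ← (I - B)AM + BT rather than the claim's incremental form.

```python
import numpy as np

def build_affinity(G, c=1e-4):
    """Row-normalized affinity over 4-neighborhoods: weights are large where
    the guidance feature G is similar, so values diffuse within smooth
    regions and stop at edges. c keeps the denominator away from 0."""
    h, w = G.shape
    n = h * w
    A = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    A[i, ny * w + nx] = 1.0 / (abs(G[y, x] - G[ny, nx]) + c)
    return A / A.sum(axis=1, keepdims=True)

def propagate(T, B, A, iters=200):
    """Fixed-point edit propagation: M <- (I - B) A M + B T.
    Pixels with B = 1 are clamped to the initial feature T; everywhere else
    the value becomes the affinity-weighted average of its neighbors."""
    b, t = B.ravel(), T.ravel()
    M = np.zeros(t.size)
    for _ in range(iters):
        M = (1.0 - b) * (A @ M) + b * t
    return M.reshape(T.shape)

# Toy example: constant guidance, one clamped pixel in the center.
G = np.ones((5, 5))
T = np.zeros((5, 5)); T[2, 2] = 1.0
B = np.zeros((5, 5)); B[2, 2] = 1.0
M = propagate(T, B, build_affinity(G))
```

With constant guidance the clamped value leaks uniformly to its neighbors; with a real luminance channel as G, the affinity drops across strong gradients and the template respects facial edges.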
4. The facial image fusion method according to claim 1, characterized in that generating the region template M_Q according to the feature points of the target face I comprises:
extracting an initial region feature T_Q according to the feature points of the target face I;
diffusing the initial region feature T_Q to generate the region template M_Q.
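Claim 4 derives the initial region feature T_Q from the target face's feature points without fixing the construction. One minimal, hypothetical realization (not specified in the patent) is to rasterize the convex polygon spanned by the landmarks into a binary mask:

```python
import numpy as np

def region_from_landmarks(landmarks, shape):
    """Binary initial region feature: 1 inside the convex polygon spanned by
    the (x, y) landmarks (assumed ordered counter-clockwise in image
    coordinates), 0 outside. A stand-in for the patent's T_Q extraction."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    inside = np.ones(pts.shape[0], dtype=bool)
    k = len(landmarks)
    for a in range(k):
        p = np.asarray(landmarks[a], float)
        q = np.asarray(landmarks[(a + 1) % k], float)
        edge = q - p
        # a point lies inside a CCW convex polygon iff it is on the same
        # side of every edge (non-negative 2D cross product)
        cross = edge[0] * (pts[:, 1] - p[1]) - edge[1] * (pts[:, 0] - p[0])
        inside &= cross >= 0
    return inside.reshape(h, w).astype(float)

# Square "face region" spanned by four landmarks on a 6x6 image.
T_Q = region_from_landmarks([(1, 1), (4, 1), (4, 4), (1, 4)], (6, 6))
```

The resulting hard mask would then be softened by the diffusion of claim 5 so the fusion boundary is not a visible seam.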
5. The facial image fusion method according to claim 4, characterized in that diffusing the initial region feature T_Q to generate the region template M_Q comprises:
diffusing the initial region feature T_Q by a second iterative equation to generate the region template M_Q;
the second iterative equation being:
M_Q^(t+1) - M_Q^(t) = (A_Q - B_Q) M_Q^(t) + B_Q T_Q
wherein t is the iteration count with an initial value of 0, and the maximum number of iterations can be adjusted as required; B_Q is a diagonal weight matrix, B_Q = diag{B_Q(i,i)}, whose weights control the region over which diffusion takes place, with B_Q(i,i) = 0 inside the face region and B_Q(i,i) = 1 elsewhere; A_Q is a region similarity matrix containing the similarity between each feature point p_i and the other points in its neighborhood, specifically:
wherein the subscripts i and j denote the i-th and j-th pixels of the image, and j ∈ N(i) denotes the neighborhood of pixel i; d restricts the diffusion region, taking a small value outside the diffusion region and a large value inside it; G = I_L is the guidance feature and G_i - G_j is its gradient; c is a small constant that prevents the denominator from being 0; and |z| denotes the absolute value of z.
6. The facial image fusion method according to claim 1, characterized in that matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and the feature points of the reference face R comprises:
calculating a transformation matrix according to the distances between the feature points of the target face I and the feature points of the reference face R;
transforming the image of the reference face R with the transformation matrix.
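Claim 6 does not fix the transformation model or the estimator. A common choice, shown here as a sketch under the assumption of an affine model fitted to landmark correspondences by least squares, is:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src landmarks (N, 2)
    onto dst landmarks (N, 2)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ M ~= dst, M is (3, 2)
    return M.T                                   # (2, 3): [A | t]

def apply_affine(M, pts):
    """Apply the 2x3 transform: p -> A p + t."""
    return pts @ M[:, :2].T + M[:, 2]

# Toy check: recover a known scale + translation from four correspondences.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 2.0 + np.array([3.0, -1.0])
M_affine = estimate_affine(src, dst)
warped = apply_affine(M_affine, src)
```

In a full pipeline the estimated matrix would be applied to every pixel of the reference face image (e.g. by inverse warping with interpolation), not only to the landmarks as in this toy example.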
7. The facial image fusion method according to claim 1, characterized in that extracting the feature points of the target face I and the feature points of the reference face R respectively comprises:
extracting feature points on the outer contour and the facial features of the target face I;
extracting feature points on the outer contour and the facial features of the reference face R.
8. The facial image fusion method according to claim 1, characterized in that the weighted summation in step 5 is given by:
O = M_L M_Q R' + (J - M_Q) I
wherein O is the face obtained by fusing the target face I with the reference face R, and J is an all-ones matrix.
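Read elementwise per pixel, the formula of claim 8 takes the matched reference face R' inside the region mask M_Q (scaled by the illumination template M_L) and keeps the original target I elsewhere. A single-channel toy sketch (the method would apply this per color channel):

```python
import numpy as np

def fuse(I, R_matched, M_L, M_Q):
    """O = M_L * M_Q * R' + (J - M_Q) * I, all products elementwise,
    with J the all-ones matrix from the claim."""
    J = np.ones_like(M_Q)
    return M_L * M_Q * R_matched + (J - M_Q) * I

I_img = np.full((2, 2), 10.0)      # target face (toy intensities)
R_p   = np.full((2, 2), 20.0)      # matched reference face R'
M_L   = np.full((2, 2), 0.8)       # illumination template
M_Q   = np.array([[1.0, 0.0],
                  [0.0, 1.0]])     # region template: 1 = take reference
O = fuse(I_img, R_p, M_L, M_Q)
```

Where M_Q = 1 the output is 0.8 * 20 = 16 (illumination-corrected reference); where M_Q = 0 it is the untouched target value 10.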
9. A facial image fusion system, characterized by comprising:
a feature point extraction module, for extracting the feature points of a target face I and the feature points of a reference face R;
a face matching module, for matching the apparent shape of the reference face R to the target face I according to the feature points of the target face I and the feature points of the reference face R;
an illumination template generation module, for generating an illumination template M_L according to the matched target face I' and the matched reference face R';
a region template generation module, for generating a region template M_Q according to the feature points of the target face I;
a face fusion module, for performing a weighted summation over the illumination template M_L, the region template M_Q, the target face I, and the matched reference face R', thereby fusing the target face I and the reference face R.
10. The facial image fusion system according to claim 9, characterized in that the illumination template generation module comprises:
an initial illumination feature extraction unit, for extracting an initial illumination feature T_L of the matched target face I' and the matched reference face R';
an illumination template generation unit, for diffusing the initial illumination feature T_L to generate the illumination template M_L.
CN201811043607.2A 2018-09-07 2018-09-07 Face image fusion method and system Expired - Fee Related CN109345470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811043607.2A CN109345470B (en) 2018-09-07 2018-09-07 Face image fusion method and system


Publications (2)

Publication Number Publication Date
CN109345470A true CN109345470A (en) 2019-02-15
CN109345470B CN109345470B (en) 2021-11-23

Family

ID=65304586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811043607.2A Expired - Fee Related CN109345470B (en) 2018-09-07 2018-09-07 Face image fusion method and system

Country Status (1)

Country Link
CN (1) CN109345470B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232730A * 2019-06-03 2019-09-13 深圳市三维人工智能科技有限公司 Three-dimensional face model texture map fusion method and computer processing device
CN110852967A (en) * 2019-11-06 2020-02-28 成都品果科技有限公司 Method for quickly removing flaws of portrait photo

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231748A * 2007-12-18 2008-07-30 西安电子科技大学 Image fusion method based on singular value decomposition
CN101694691A (en) * 2009-07-07 2010-04-14 北京中星微电子有限公司 Method and device for synthesizing facial images
CN101739675A (en) * 2009-12-11 2010-06-16 重庆邮电大学 Method and device for registration and synthesis of non-deformed images
CN101945223A (en) * 2010-09-06 2011-01-12 浙江大学 Video consistent fusion processing method
CN104463181A (en) * 2014-08-05 2015-03-25 华南理工大学 Automatic face image illumination editing method under complex background
CN105184273A (en) * 2015-09-18 2015-12-23 桂林远望智能通信科技有限公司 ASM-based dynamic image frontal face reconstruction system and method
CN105741229A (en) * 2016-02-01 2016-07-06 成都通甲优博科技有限责任公司 Method for realizing quick fusion of face image
CN107633499A (en) * 2017-09-27 2018-01-26 广东欧珀移动通信有限公司 Image processing method and related product
CN108257084A * 2018-02-12 2018-07-06 北京中视广信科技有限公司 Lightweight automatic face makeup method based on a mobile terminal

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FENG MIN et al., "Automatic Face Replacement in Video Based on 2D Morphable Model", 2010 International Conference on Pattern Recognition *
Dai Xiaohong, "Research on Digital Image Processing and Recognition Based on Machine Vision", Southwest Jiaotong University Press, 31 March 2012 *
Qu Jian et al., "Component Classification and Retrieval Method for a Reconfigurable Routing and Switching Platform", Computer Engineering *
Liang Lingyu, "Research on Adaptive Beautification and Rendering of Face Images", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Liang Lingyu et al., "Face Image Illumination Transfer via Adaptive Edit Propagation", Optics and Precision Engineering *
Zhong Qianli, "Research and Implementation of Automatic Face Replacement in Images", China Master's Theses Full-text Database, Information Science and Technology *
Wei Lu, "Research on Face Replacement Based on 3D Morphable Models", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232730A (en) * 2019-06-03 2019-09-13 深圳市三维人工智能科技有限公司 A kind of three-dimensional face model textures fusion method and computer-processing equipment
CN110232730B (en) * 2019-06-03 2024-01-19 深圳市三维人工智能科技有限公司 Three-dimensional face model mapping fusion method and computer processing equipment
CN110852967A (en) * 2019-11-06 2020-02-28 成都品果科技有限公司 Method for quickly removing flaws of portrait photo
CN110852967B (en) * 2019-11-06 2023-09-12 成都品果科技有限公司 Method for rapidly removing flaws in portrait photo

Also Published As

Publication number Publication date
CN109345470B (en) 2021-11-23

Similar Documents

Publication Publication Date Title
US11450075B2 (en) Virtually trying cloths on realistic body model of user
CN102881011B (en) Region-segmentation-based portrait illumination transfer method
Liao et al. Automatic caricature generation by analyzing facial features
CN107730573A Cartoon-style portrait generation method based on feature extraction
CN106780367B (en) HDR photo style transfer method dictionary-based learning
US10582733B2 (en) Methods for producing garments and garment designs
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
Ward et al. Depth director: A system for adding depth to movies
CN106920277A Method and system for visualizing simulated beauty and plastic-surgery effects through online free-form face sculpting
CN104408462B Rapid face feature point localization method
Wu et al. Making bas-reliefs from photographs of human faces
CN104123749A (en) Picture processing method and system
CN106652015B (en) Virtual character head portrait generation method and device
CN102360513B Object illumination transfer method based on gradient operations
CN108053366A Image processing method and electronic device
CN104157001A (en) Method and device for drawing head caricature
CN106652037B (en) Face mapping processing method and device
CN109345470A (en) Facial image fusion method and system
CN104157002A (en) Color image texture force tactile reproduction method based on color transform space
Bourached et al. Recovery of underdrawings and ghost-paintings via style transfer by deep convolutional neural networks: A digital tool for art scholars
CN103337088B Edge-preserving shadow editing method for face images
CN109903320A Face intrinsic image decomposition method based on a skin-color prior
CN107204000A Human body segmentation method based on a Kinect depth camera
AU2021101766A4 (en) Cartoonify Image Detection Using Machine Learning
CN113052783A (en) Face image fusion method based on face key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20211123)