CN116597079A - Three-dimensional virtual face generation method and device and electronic equipment - Google Patents

Three-dimensional virtual face generation method and device and electronic equipment

Info

Publication number
CN116597079A
CN116597079A
Authority
CN
China
Prior art keywords
face
dimensional
virtual
texture map
virtual face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310508684.5A
Other languages
Chinese (zh)
Inventor
詹鹏鑫
刘奎龙
庄亦村
杨文波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202310508684.5A priority Critical patent/CN116597079A/en
Publication of CN116597079A publication Critical patent/CN116597079A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the specification provides a method and a device for generating a three-dimensional virtual face and electronic equipment. The method comprises the following steps: acquiring a two-dimensional real face image; generating a two-dimensional virtual face image corresponding to the two-dimensional real face image; carrying out face three-dimensional reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual face to a two-dimensional plane; sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face; and combining the comprehensive texture map and the front texture map of the virtual face, and mapping the combined texture map to the three-dimensional face geometric model to generate the three-dimensional virtual face.

Description

Three-dimensional virtual face generation method and device and electronic equipment
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a method and a device for generating a three-dimensional virtual face and electronic equipment.
Background
With the development of computer technology, more and more applications generate avatars based on computer technology. Accordingly, in daily life and work, more and more users use three-dimensional virtual faces corresponding to their own faces.
In the related art, generating a three-dimensional virtual face requires the user to provide a real face image. Because a single face image has local blind areas (for example, a frontal face image lacks the content of the side face), the user is usually required to provide a plurality of face images taken from different angles.
However, generating a three-dimensional virtual face from a plurality of face images is time-consuming.
Disclosure of Invention
The embodiment of the specification provides a method and a device for generating a three-dimensional virtual face and electronic equipment.
According to a first aspect of embodiments of the present disclosure, there is provided a method for generating a three-dimensional virtual face, the method including:
acquiring a two-dimensional real face image;
generating a two-dimensional virtual face image corresponding to the two-dimensional real face image;
carrying out face three-dimensional reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual face to a two-dimensional plane;
sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face;
and combining the comprehensive texture map and the front texture map of the virtual face, and mapping the combined texture map to the three-dimensional face geometric model to generate the three-dimensional virtual face.
Optionally, the generating a two-dimensional virtual face image corresponding to the two-dimensional real face image includes:
based on a virtual face generation algorithm, mapping real face features in the two-dimensional real face image into virtual face features; the virtual face generation algorithm comprises an algorithm which is constructed with a mapping relation between real face features and virtual face features;
and generating a two-dimensional virtual face image composed of the virtual face features based on the position coordinates of the real face features in the two-dimensional real face image.
Optionally, the mapping the real face feature in the two-dimensional real face image to the virtual face feature based on the virtual face generation algorithm includes:
responding to a triggered target virtual style in a plurality of different virtual styles, and acquiring a virtual face generation algorithm corresponding to the target virtual style;
and mapping the real face features in the two-dimensional real face image into the virtual face features of the target virtual style based on the virtual face generation algorithm.
Optionally, the performing three-dimensional face reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face includes:
performing coding calculation on the two-dimensional virtual face image to obtain face modeling parameters and texture parameters;
modeling the face modeling parameters to obtain a three-dimensional face geometric model; and modeling the texture parameters to obtain a comprehensive texture map of the virtual face.
Optionally, the face modeling parameters include face contour parameters, face pose parameters, and facial expression parameters.
Optionally, the face modeling parameters further include camera parameters;
and taking the face angle represented by the camera parameters as a reference, and adjusting the face angle of the three-dimensional face geometric model in modeling to the reference.
Optionally, the sampling calculation is performed on the two-dimensional virtual face image to obtain a front texture map of the virtual face, including:
projecting the two-dimensional virtual face image to the three-dimensional face geometric model, and obtaining pixel colors of mapping three-dimensional vertex coordinates in the three-dimensional face geometric model to the two-dimensional virtual face image;
projecting the three-dimensional vertex coordinates in the three-dimensional face geometric model to a two-dimensional plane to obtain two-dimensional plane coordinates;
and filling the pixel colors mapped by the three-dimensional vertex coordinates into corresponding two-dimensional plane coordinates to obtain the front texture map of the virtual human face.
Optionally, the merging the full texture map and the front texture map of the virtual face includes:
acquiring a part of the front texture map within a preset face boundary range as a face texture map;
acquiring a part, which is located outside the preset face boundary range, of the comprehensive texture map as a non-face texture map;
and splicing the facial texture map and the non-facial texture map to obtain a combined texture map.
Optionally, after stitching the face texture map and the non-face texture map, the method further comprises:
and smoothing the adjacent region where the spliced line is positioned based on a Gaussian smoothing algorithm.
According to a second aspect of embodiments of the present specification, there is provided a generating apparatus for three-dimensional virtual human face, the apparatus comprising:
the acquisition unit acquires a two-dimensional real face image;
the first generation unit is used for generating a two-dimensional virtual face image corresponding to the two-dimensional real face image;
the modeling unit is used for carrying out face three-dimensional reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual face to a two-dimensional plane;
the sampling unit is used for sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face;
and the second generation unit is used for combining the comprehensive texture mapping and the front texture mapping of the virtual face and mapping the combined texture mapping to the three-dimensional face geometric model so as to generate the three-dimensional virtual face.
According to a third aspect of embodiments of the present specification, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
the processor is configured to be used for generating the three-dimensional virtual human face according to any one of the above methods.
According to a fourth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform any one of the above-described methods for generating a three-dimensional virtual face.
The embodiments of the specification provide a scheme for generating a three-dimensional virtual face: after a two-dimensional real face image is converted into a two-dimensional virtual face image, a comprehensive texture map with lower definition (a non-high-definition full-face map) and a three-dimensional face geometric model are generated, a front texture map with higher definition (a high-definition partial face map) is generated, and after the comprehensive texture map and the front texture map are merged, the merged map is mapped onto the three-dimensional face geometric model to obtain the three-dimensional virtual face.
Because the user only needs to provide one real face image, the long processing time associated with providing a plurality of real face images is avoided.
Drawings
Fig. 1 is a schematic architecture diagram of a three-dimensional virtual face generating system according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for generating a three-dimensional virtual face according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a two-dimensional virtual face to a three-dimensional virtual face according to an embodiment of the present disclosure;
fig. 4 is a hardware configuration diagram of a three-dimensional virtual face generating device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a three-dimensional virtual face generating apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present description as detailed in the accompanying claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present description. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
User information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in this specification are both information and data authorized by the user or sufficiently authorized by the parties, and the collection, use and processing of relevant data requires compliance with relevant laws and regulations and standards of the relevant country and region, and is provided with corresponding operation portals for the user to choose authorization or denial.
As described above, there is a problem in that it takes a long time when generating a three-dimensional virtual face based on a plurality of face images.
Therefore, the specification aims to provide a generation scheme of the three-dimensional virtual human face. Specifically, after converting a two-dimensional real face image into a two-dimensional virtual face image, generating a comprehensive texture map (non-high definition face map) with lower definition and a three-dimensional face geometric model, generating a front texture map (high definition local face map) with higher definition, integrating the comprehensive texture map and the front texture map, and projecting the integrated comprehensive texture map and the front texture map to the three-dimensional face geometric model to obtain the three-dimensional virtual face. Because the user only needs to provide one real face image, the problem that a plurality of real face images take longer time is avoided.
Referring first to fig. 1, a schematic architecture diagram of a three-dimensional virtual face generating system is shown, in which the network nodes communicate with one another by means of a network to complete interaction and data processing. The system may include a server 105 in data communication with one or more clients 106 via a network 112, and a database 115 that may be integrated with the server 105 or independent of it. The server 105 may be an operation server, a server cluster, or a cloud platform built from a server cluster that runs the three-dimensional virtual face service.
The cloud platform may also be referred to as a cloud server, which may refer to a cloud server constructed based on a cloud computing technology. By utilizing resources (such as computing resources, storage resources and the like) endowed by the cloud computing technology, the three-dimensional virtual face can be generated more quickly.
Each network 112 may include wired or wireless telecommunication devices through which network devices on which clients 106 are based may exchange data. For example, each network 112 may include a local area network ("LAN"), a wide area network ("WAN"), an intranet, the internet, a mobile phone network, a Virtual Private Network (VPN), a cellular or other mobile communication network, bluetooth, NFC, or any combination thereof. In the discussion of exemplary embodiments, it should be understood that the terms "data" and "information" are used interchangeably herein to refer to text, images, audio, video, or any other form of information that may exist in a computer-based environment.
The network device upon which each client 106 is based may include a device having a communication module capable of sending out and receiving data via the network 112. For example, the network devices on which each client 106 is based may include servers, desktop computers, laptop computers, tablet computers, smartphones, handheld computers, personal digital assistants ("PDAs"), or any other wired or wireless processor driven device.
In fig. 1, the computing device 103 may be integrated with the server 105 or separate from it. In the separate case, the two are generally connected via an internal or private network, or via an encrypted public network; in the integrated case, a faster and more efficient internal bus connection may be used. In either case, the computing device 103 may access the database 115 directly or through the server 105.
In this description, the server 105 may operate any service platform related, or likely to be related, to the three-dimensional virtual face. For example, a service for generating three-dimensional virtual faces may be provided for users of a social network platform to make social interaction more engaging. For another example, a "cartoon filter" may be provided in a video chat platform: when the user triggers the "cartoon filter", the captured face of the user is converted into a three-dimensional virtual face to achieve a cartoon effect. For another example, a photographing application may also provide a "cartoon filter" service for converting a face photographed by the user into a three-dimensional virtual face.
The following description will describe an embodiment of a method for generating a three-dimensional virtual face in conjunction with fig. 2, which may be applied to the foregoing server in fig. 1, and the method may include the following steps:
step 210: and acquiring a two-dimensional real face image.
In the specification, the server side can receive the two-dimensional real face image uploaded by the client side. The two-dimensional real face image can be a face image shot by a user using the client in real time, or can be a two-dimensional real face image selected by the user from the existing images.
Step 220: generating a two-dimensional virtual face image corresponding to the two-dimensional real face image.
After receiving the two-dimensional real face image, the server side can firstly generate a two-dimensional virtual face image corresponding to the two-dimensional real face image.
In an exemplary embodiment, the step 220 may include:
based on a virtual face generation algorithm, mapping real face features in the two-dimensional real face image into virtual face features; the virtual face generation algorithm comprises an algorithm which is constructed with a mapping relation between real face features and virtual face features;
and generating a two-dimensional virtual face image composed of the virtual face features based on the position coordinates of the real face features in the two-dimensional real face image.
In the present specification, the mapping relationship between the real face features and the virtual face features or the virtual face generation algorithm may be obtained by training based on a preset virtual face data set.
When the method is realized, a reasonable initial algorithm can be set, and a preset virtual face data set is subjected to iterative training by means of machine learning methods such as deep learning, so that coefficients of all parameters in the initial algorithm are obtained, and a unified algorithm formula can be obtained.
Such deep learning methods may include, but are not limited to, the Stable Diffusion algorithm, the CycleGAN algorithm (an algorithm based on the generative adversarial network, GAN), the Deep Stream algorithm (an algorithm based on the convolutional neural network, CNN), and the like.
Taking the Stable Diffusion algorithm as an example, the preset virtual face data set can comprise two-dimensional real face images, two-dimensional virtual face images, and the correspondences between them; during training, the preset virtual face data set is input as training samples into the initial Stable Diffusion algorithm, so that it learns the mapping relation between real face features and virtual face features. The mapping relation can be continuously refined through continuous training (generally called iterative training); when the Stable Diffusion algorithm reaches a preset condition (for example, the accuracy of the two-dimensional virtual face images generated from two-dimensional real face images reaches a preset threshold, or the number of training iterations reaches a preset number), the Stable Diffusion algorithm can be brought online for use.
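The following is a minimal training-loop sketch of the iterative, paired training described above. It is an illustrative assumption only: the tiny convolutional generator, the L1 loss, the random placeholder data, and the stopping thresholds stand in for the actual Stable Diffusion or CycleGAN pipeline and for the real preset virtual face data set.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder paired dataset: (real face image, corresponding virtual face image).
# In practice these pairs come from the preset virtual face data set.
real = torch.rand(32, 3, 64, 64)
virtual = torch.rand(32, 3, 64, 64)
loader = DataLoader(TensorDataset(real, virtual), batch_size=8, shuffle=True)

# Deliberately small convolutional generator standing in for the real mapping model.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

MAX_ITERS = 100        # "preset number of iterations" (assumed value)
LOSS_THRESHOLD = 0.05  # stands in for the accuracy threshold (assumed value)

step, done = 0, False
while not done:
    for real_img, virtual_img in loader:
        pred = generator(real_img)         # map real face features to virtual ones
        loss = loss_fn(pred, virtual_img)  # compare against the paired virtual image
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        step += 1
        if loss.item() < LOSS_THRESHOLD or step >= MAX_ITERS:
            done = True                    # preset condition reached; model can go online
            break
```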
In practical application, a plurality of different virtual styles can be provided, so that a client determines a two-dimensional virtual face image of a target virtual style to be generated;
accordingly, the step 220 may include:
responding to a triggered target virtual style in a plurality of different virtual styles, and acquiring a virtual face generation algorithm corresponding to the target virtual style;
and mapping the real face features in the two-dimensional real face image into the virtual face features of the target virtual style based on the virtual face generation algorithm.
In the specification, the virtual styles can be configured in advance by an operator based on actual requirements, for example an ancient (classical) virtual style or a modern virtual style, and further, for example, a Japanese virtual style, an American virtual style, and the like.
In implementation, the virtual face data set can be divided into a plurality of data subsets according to the different virtual styles; still taking the Stable Diffusion algorithm as an example, a Stable Diffusion model is trained on each data subset to learn the mapping relation between real face features and virtual face features in the corresponding virtual style.
In the specification, through training a plurality of mapping relations matched with different virtual styles, according to a target virtual style selected by a user, a two-dimensional virtual face image corresponding to the target virtual style can be generated, and finally, a three-dimensional virtual face conforming to the target virtual style is generated.
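A minimal dispatch sketch is given below, assuming one mapping model has been trained per virtual style as described above. The style names, the placeholder (untrained) generators, and the function name generate_virtual_face are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Placeholder per-style generators; in practice each entry would be the trained
# real-to-virtual mapping model for that virtual style.
STYLE_GENERATORS = {
    "ancient": nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid()),
    "modern": nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid()),
    "japanese": nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid()),
}

def generate_virtual_face(real_img: torch.Tensor, target_style: str) -> torch.Tensor:
    """Map a real face image to a virtual face in the triggered target style."""
    generator = STYLE_GENERATORS[target_style]  # pick the style-specific mapping model
    generator.eval()
    with torch.no_grad():
        return generator(real_img.unsqueeze(0)).squeeze(0)

# Example: the client triggered the "japanese" target style.
virtual_face = generate_virtual_face(torch.rand(3, 64, 64), "japanese")
```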
Step 230: carrying out face three-dimensional reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual human face to a two-dimensional plane.
In this specification, the parameterization algorithm may be one built on a face dataset. The face dataset may include the FLAME dataset, the FaceWarehouse dataset, a 3DMM dataset, the FFHQ dataset, and the like.
Taking FLAME as an example, a large amount of sample data of real human heads is collected in FLAME; the sample data can include feature data of the chin, the eyeballs, the mouth, the nose, and the like, and different sample data are further distinguished by at least four sets of parameters: shape, expression, pose, and color.
In an exemplary embodiment, a parameterized algorithm may be used to reconstruct the two-dimensional virtual face image in three dimensions; the parameterization algorithm can be divided into a face coding algorithm Encoder and a face modeling algorithm Decoder;
accordingly, the step 230 may further include:
performing coding calculation on the two-dimensional virtual face image based on a face coding algorithm to obtain face modeling parameters and texture parameters;
modeling the face modeling parameters based on a face modeling algorithm to obtain a three-dimensional face geometric model; and modeling the texture parameters to obtain a comprehensive texture map of the virtual face.
In practical application, because the single input two-dimensional real face image has missing content (for example, a frontal face image lacks side-face content), the two-dimensional virtual face image generated from it lacks the same content. For this reason, the two-dimensional virtual face image is reconstructed in three dimensions using the parameterization algorithm; because the parameterization algorithm integrates a large number of face datasets, the texture content missing from the two-dimensional virtual face can be completed, which solves the problem of missing content in the single input real face image.
In this specification, the coefficients of the parameters in the face coding algorithm and the face modeling algorithm may be trained using the FLAME face dataset. The training is similar to that of the virtual face generation algorithm: the FLAME face dataset is input as training samples into the initial face coding algorithm and face modeling algorithm, so that they learn the mapping relation between the two-dimensional virtual face image and the face modeling parameters and texture parameters, the mapping relation between the face modeling parameters and the three-dimensional face geometric model, and the mapping relation between the texture parameters and the comprehensive texture map. These mappings can be continuously refined through continuous training (commonly referred to as iterative training); when the face coding algorithm and the face modeling algorithm reach preset conditions, they can be put into use.
When the two-dimensional virtual face image is reconstructed in three dimensions based on the parameterization algorithm, the two-dimensional virtual face image generated in step 220 can be input into the face coding algorithm, which performs coding calculation on it to obtain the face modeling parameters and texture parameters; the face modeling parameters may include a face contour parameter (shape code), a face pose parameter (pose code), a facial expression parameter (expression code), a camera parameter (camera code), a texture parameter (albedo code), a lighting parameter (light code), and the like.
Then, the face modeling parameters are input into a face modeling algorithm, and a three-dimensional face geometric model is reconstructed by the face modeling algorithm according to face contour parameters, face posture parameters and facial expression parameters; and reconstructing the comprehensive texture map according to the texture parameters.
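The Encoder/Decoder split described above can be sketched as follows. This is a toy sketch under loose FLAME-style assumptions: the parameter dimensions, the small convolutional backbone, and the linear "decoders" for vertices and the coarse UV texture are illustrative placeholders, not the actual parameterization algorithm.

```python
import torch
import torch.nn as nn

# Assumed parameter groups and sizes (illustrative only).
PARAM_DIMS = {"shape": 100, "expression": 50, "pose": 6,
              "camera": 3, "albedo": 50, "light": 27}
N_VERTICES = 5023  # FLAME-like vertex count, used here only as an example

class FaceEncoder(nn.Module):
    """Encode a 2D virtual face image into face modeling and texture parameters."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(32, dim) for name, dim in PARAM_DIMS.items()})

    def forward(self, img):
        feat = self.backbone(img)
        return {name: head(feat) for name, head in self.heads.items()}

class FaceDecoder(nn.Module):
    """Decode geometry from shape/pose/expression and a coarse full texture from albedo."""
    def __init__(self):
        super().__init__()
        geo_in = PARAM_DIMS["shape"] + PARAM_DIMS["expression"] + PARAM_DIMS["pose"]
        self.geometry = nn.Linear(geo_in, N_VERTICES * 3)            # 3D vertices
        self.texture = nn.Linear(PARAM_DIMS["albedo"], 3 * 64 * 64)  # coarse UV map

    def forward(self, params):
        geo_code = torch.cat(
            [params["shape"], params["expression"], params["pose"]], dim=-1)
        vertices = self.geometry(geo_code).view(-1, N_VERTICES, 3)
        full_texture = self.texture(params["albedo"]).view(-1, 3, 64, 64)
        return vertices, full_texture

encoder, decoder = FaceEncoder(), FaceDecoder()
virtual_img = torch.rand(1, 3, 256, 256)      # two-dimensional virtual face image
params = encoder(virtual_img)                  # coding calculation -> parameters
vertices, full_texture_map = decoder(params)   # geometric model + comprehensive texture
```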
In an exemplary embodiment, the face angle of the three-dimensional face geometric model at the time of modeling is adjusted to a reference based on the face angle represented by the camera parameters.
In the specification, an image with the same angle as an input two-dimensional virtual face image can be rendered according to a camera parameter camera code.
The following description refers to the schematic diagram, shown in fig. 3, of converting the two-dimensional virtual face into the three-dimensional virtual face.
As shown in fig. 3, by performing the foregoing parameterization calculation on the two-dimensional virtual face image, a three-dimensional face geometric model and a comprehensive face map can be obtained.
It should be noted that, because the definition of the overall face map obtained by parameterization calculation is low (the low-definition overall face map shown in fig. 3), if the overall face map is directly mapped to the three-dimensional face geometric model, although a three-dimensional virtual face (the low-definition three-dimensional virtual face shown in fig. 3) can also be obtained, the problem of low definition of the three-dimensional virtual face is also present.
In some embodiments, if there is not too high a definition requirement for the three-dimensional virtual face, then after step 230, the full texture map may be mapped directly to the three-dimensional face geometric model to generate the three-dimensional virtual face without performing subsequent steps 240-250.
Step 240: sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face.
In this specification, to address the low definition of the comprehensive face map obtained by the parameterization calculation, a high-definition front texture map of the virtual face can be obtained through a screen-space sampling (UV sampling) algorithm. Note that the front texture map may lack textures for non-frontal portions (for example, the sides of the face).
In an exemplary embodiment, the step 240 may further include:
projecting the two-dimensional virtual face image to the three-dimensional face geometric model, and obtaining pixel colors of mapping three-dimensional vertex coordinates in the three-dimensional face geometric model to the two-dimensional virtual face image;
projecting the three-dimensional vertex coordinates in the three-dimensional face geometric model to a two-dimensional plane to obtain two-dimensional plane coordinates;
and filling the pixel colors mapped by the three-dimensional vertex coordinates into corresponding two-dimensional plane coordinates to obtain the front texture map of the virtual human face.
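A minimal numpy sketch of this UV sampling step follows. The projection here is a toy orthographic mapping, and the vertices, UV coordinates, and image are random placeholders; in the actual pipeline these would come from the three-dimensional face geometric model and the camera parameters.

```python
import numpy as np

H, W = 512, 512   # resolution of the 2D virtual face image
TEX = 256         # resolution of the output front texture map

virtual_img = np.random.rand(H, W, 3)   # two-dimensional virtual face image (placeholder)
vertices = np.random.rand(5023, 3)      # 3D vertices of the geometric model (placeholder)
uv_coords = np.random.rand(5023, 2)     # per-vertex 2D plane (UV) coordinates (placeholder)

def project_to_image(verts: np.ndarray) -> np.ndarray:
    """Toy orthographic projection of 3D vertices into image pixel coordinates."""
    xy = verts[:, :2]                                    # drop depth for this sketch
    px = (xy * np.array([W - 1, H - 1])).astype(int)
    return np.clip(px, 0, [W - 1, H - 1])

# 1) project the image onto the model: sample a pixel colour for each 3D vertex
pix = project_to_image(vertices)
vertex_colors = virtual_img[pix[:, 1], pix[:, 0]]        # (N, 3)

# 2) project the 3D vertices to the 2D texture plane (their UV coordinates)
uv_px = (uv_coords * (TEX - 1)).astype(int)

# 3) fill the sampled colours into the corresponding 2D plane coordinates
front_texture = np.zeros((TEX, TEX, 3))
front_texture[uv_px[:, 1], uv_px[:, 0]] = vertex_colors  # front texture map of the virtual face
```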
Step 250: combining the comprehensive texture map and the front texture map of the virtual face, and mapping the combined texture map to the three-dimensional face geometric model to generate the three-dimensional virtual face.
As shown in fig. 3, since the two-dimensional virtual face image has only front face information, not all three-dimensional vertices can acquire corresponding pixel information after projection (e.g., some vertices of the side cannot acquire color), which results in a front texture map generated by UV sampling that lacks a portion of content (e.g., lacks a texture map of the side).
To solve this problem, the present specification combines the respective advantages of the comprehensive texture map and the front texture map: by merging the two, a high-definition texture map without missing content is obtained.
In an exemplary embodiment, the merging the full texture map and the front texture map of the virtual face may include:
acquiring a part of the front texture map within a preset face boundary range as a face texture map;
acquiring a part, which is located outside the preset face boundary range, of the comprehensive texture map as a non-face texture map;
and splicing the facial texture map and the non-facial texture map to obtain a combined texture map.
In the present specification, a blank map that carries the preset face boundary but no face texture may be designed; as shown in fig. 3, the portion of the front texture map located within the preset face boundary is taken as the face texture map, and the portion of the comprehensive texture map located outside the preset face boundary is taken as the non-face texture map. After stitching the face texture map and the non-face texture map, the resulting merged texture map is a high-definition texture map with no missing content (the high-definition face map shown in fig. 3).
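A minimal sketch of this mask-based merge is given below. The preset face boundary would normally be a fixed mask prepared in advance; here it is approximated by a placeholder circular region, and both texture maps are random placeholders.

```python
import numpy as np

TEX = 256
full_texture = np.random.rand(TEX, TEX, 3)   # lower-definition comprehensive texture map
front_texture = np.random.rand(TEX, TEX, 3)  # high-definition front texture map

# Placeholder "preset face boundary": a circular region in the texture plane.
yy, xx = np.mgrid[0:TEX, 0:TEX]
face_mask = ((xx - TEX / 2) ** 2 + (yy - TEX / 2) ** 2) < (TEX * 0.35) ** 2

# Inside the boundary use the front (face) texture, outside it the comprehensive texture.
merged_texture = np.where(face_mask[..., None], front_texture, full_texture)
```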
In an exemplary embodiment, after stitching the face texture map and the non-face texture map, further comprising:
and smoothing the adjacent region where the spliced line is positioned based on a Gaussian smoothing algorithm.
In this specification, in order to make the transition across the stitched boundary between the face texture map and the non-face texture map more natural, the stitched portion is smoothed using a Gaussian smoothing algorithm.
In implementation, a convolution kernel (e.g., 2×2 pixels) may be set, and then, with each pixel on the stitching boundary as the center, the pixel values within the convolution kernel are aggregated; the aggregation strategy can be preconfigured, for example as the average, maximum, minimum, or median. The aggregated result is then assigned as the pixel value of all pixels within the current convolution kernel. Through such Gaussian smoothing, the pixel values in the neighborhood of the stitching line tend to become consistent, so that the seam looks natural and unobtrusive.
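The seam smoothing can be sketched as follows, here using scipy's Gaussian filter applied only in a narrow band around the stitching line; the band width, the sigma value, and the placeholder mask and texture are illustrative assumptions rather than the exact kernel-averaging procedure above.

```python
import numpy as np
from scipy import ndimage

TEX = 256
merged_texture = np.random.rand(TEX, TEX, 3)  # merged texture map (placeholder)
yy, xx = np.mgrid[0:TEX, 0:TEX]
face_mask = ((xx - TEX / 2) ** 2 + (yy - TEX / 2) ** 2) < (TEX * 0.35) ** 2

# Narrow band around the stitching line (the boundary of the preset face mask).
dilated = ndimage.binary_dilation(face_mask, iterations=3)
eroded = ndimage.binary_erosion(face_mask, iterations=3)
seam_band = dilated & ~eroded

# Gaussian-smooth each channel, then overwrite only the pixels near the seam.
blurred = np.stack(
    [ndimage.gaussian_filter(merged_texture[..., c], sigma=2.0) for c in range(3)],
    axis=-1)
smoothed_texture = merged_texture.copy()
smoothed_texture[seam_band] = blurred[seam_band]
```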
In addition, in this specification, the face skin region in the merged texture map can also be detected based on a face detection algorithm, and a color mapping transformation can be applied, through a color mapping table, to the pixels in the detected face skin region.
In practical application, in order to meet the requirements of different users, a plurality of different color mapping tables can be provided. For example, if some users want a whitening effect, the color mapping table corresponding to the whitening effect can be selected to map the pixels in the detected face skin region to fairer colors. If some users want a darkening effect, the color mapping table corresponding to the darkening effect can be selected to map the pixels in the detected face skin region to darker colors.
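A minimal sketch of such a table-driven color mapping follows. The skin mask would normally come from the face detection step; here it is a placeholder rectangle, and the lookup tables are simple gamma curves named whitening and darkening purely for illustration.

```python
import numpy as np

TEX = 256
merged_texture = np.random.rand(TEX, TEX, 3)  # merged texture map, values in [0, 1]
skin_mask = np.zeros((TEX, TEX), dtype=bool)
skin_mask[64:192, 64:192] = True              # placeholder detected face skin region

# 256-entry color mapping tables, one per desired effect (illustrative gamma curves).
levels = np.linspace(0.0, 1.0, 256)
COLOR_LUTS = {
    "whitening": levels ** 0.7,  # brightens mid-tones
    "darkening": levels ** 1.4,  # darkens mid-tones
}

def apply_lut(texture: np.ndarray, mask: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map every channel of the masked pixels through the color mapping table."""
    out = texture.copy()
    idx = (texture[mask] * 255).astype(int)  # quantize to table indices
    out[mask] = lut[idx]
    return out

whitened_texture = apply_lut(merged_texture, skin_mask, COLOR_LUTS["whitening"])
```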
After the merged texture map is obtained, the server may further map the merged texture map to the three-dimensional face geometric model, thereby generating a three-dimensional virtual face.
As shown in fig. 3, since the merged texture map has a high-definition frontal face texture map, the generated three-dimensional virtual face has higher definition (high-definition three-dimensional virtual face as shown in fig. 3). The difference in sharpness is evident by comparing the high-definition three-dimensional virtual face with the low-definition three-dimensional virtual face of fig. 3.
In summary, after converting a two-dimensional real face image into a two-dimensional virtual face image, the embodiments of the specification first generate a comprehensive texture map with low definition (a non-high-definition face map) and a three-dimensional face geometric model through a parameterization algorithm, then generate a front texture map with high definition (a high-definition partial face map) through a screen-space sampling algorithm, merge the comprehensive texture map and the front texture map, and map the merged texture onto the three-dimensional face geometric model to obtain the three-dimensional virtual face. Because the user only needs to provide one real face image, the long processing time associated with a plurality of real face images is avoided; in addition, the missing content can be completed through the parameterization algorithm, which solves the problem of missing content in the single real face image.
Corresponding to the foregoing embodiments of the method for generating a three-dimensional virtual face, the present disclosure further provides embodiments of an apparatus for generating a three-dimensional virtual face. The apparatus embodiments can be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus, as a logical device, is formed by the processor of the device in which it is located reading the corresponding computer program from nonvolatile memory into memory and running it. At the hardware level, fig. 4 shows a hardware structure diagram of the device in which the three-dimensional virtual face generating apparatus of this specification is located; in addition to the processor, network interface, memory, and nonvolatile memory shown in fig. 4, the device generally includes other hardware according to the actual function of generating the three-dimensional virtual face, which is not described here again.
Referring to fig. 5, a block diagram of a device for generating a three-dimensional virtual human face according to an embodiment of the present disclosure corresponds to the embodiment shown in fig. 2, and the device includes:
an acquisition unit 510 that acquires a two-dimensional real face image of a target user;
a first generating unit 520 for generating a two-dimensional virtual face image corresponding to the two-dimensional real face image;
the modeling unit 530 performs three-dimensional face reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual face to a two-dimensional plane;
the sampling unit 540 is used for sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face;
the second generating unit 550 combines the full-scale texture map and the front-side texture map of the virtual face, and maps the combined texture map to the three-dimensional face geometric model to generate the three-dimensional virtual face.
In an exemplary embodiment, the first generating unit 520 may further include:
the mapping subunit maps the real face features in the two-dimensional real face image into virtual face features based on a virtual face generation algorithm; the virtual face generation algorithm comprises an algorithm which is constructed with a mapping relation between real face features and virtual face features;
and the generation subunit is used for generating a two-dimensional virtual face image composed of the virtual face features based on the position coordinates of the real face features in the two-dimensional real face image.
In an exemplary embodiment, the first generating unit 520 may further include:
responding to a triggered target virtual style in a plurality of different virtual styles, and acquiring a virtual face generation algorithm corresponding to the target virtual style; and mapping the real face features in the two-dimensional real face image into the virtual face features of the target virtual style based on the virtual face generation algorithm.
In an exemplary embodiment, the modeling unit 530 may further include:
the coding subunit is used for carrying out coding calculation on the two-dimensional virtual face image to obtain face modeling parameters and texture parameters;
the modeling module is used for modeling the face modeling parameters to obtain a three-dimensional face geometric model; and modeling the texture parameters to obtain a comprehensive texture map of the virtual face.
In an exemplary embodiment, the face modeling parameters include a face contour parameter, a face pose parameter, and a facial expression parameter.
In an exemplary embodiment, the face modeling parameters further include camera parameters; the modeling subunit may further include adjusting a face angle of the three-dimensional face geometric model during modeling to a reference by using the face angle represented by the camera parameter as the reference.
In an exemplary embodiment, the sampling unit 540 may further include:
the first projection subunit projects the two-dimensional virtual face image to the three-dimensional face geometric model to obtain pixel colors mapped from three-dimensional vertex coordinates in the three-dimensional face geometric model to the two-dimensional virtual face image;
the second projection subunit projects the three-dimensional vertex coordinates in the three-dimensional face geometric model to a two-dimensional plane to obtain two-dimensional plane coordinates;
and the filling subunit fills the pixel colors mapped by the three-dimensional vertex coordinates into corresponding two-dimensional plane coordinates to obtain the front texture map of the virtual face.
In an exemplary embodiment, the merging the global texture map and the front texture map of the virtual face in the second generating unit 550 may further include:
an acquisition subunit, configured to acquire a portion of the front texture map within a preset face boundary range as a face texture map; acquiring a part, which is located outside the preset face boundary range, of the comprehensive texture map as a non-face texture map;
and the splicing subunit splices the face texture map and the non-face texture map to obtain the combined texture map.
In an exemplary embodiment, after the splicing subunit, the method may further include:
and the smoothing processing subunit is used for carrying out smoothing processing on the adjacent area where the splicing line is positioned based on a Gaussian smoothing algorithm.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of this specification. Those of ordinary skill in the art can understand and implement them without undue burden.
Fig. 5 above describes the internal functional modules and a structural schematic of the three-dimensional virtual face generating apparatus; the actual execution subject may be an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute an embodiment of the method for generating a three-dimensional virtual face.
In the above embodiments of the electronic device, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor or any conventional processor. The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk, or a solid-state disk. The steps of the method disclosed in connection with the embodiments of this specification may be embodied as being executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
In addition, the present specification further provides a computer readable storage medium, where instructions in the computer readable storage medium, when executed by a processor of an electronic device, may enable the electronic device to perform an embodiment of any one of the above-mentioned three-dimensional virtual face generating methods.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the electronic device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.

Claims (11)

1. A method for generating a three-dimensional virtual face, the method comprising:
acquiring a two-dimensional real face image;
generating a two-dimensional virtual face image corresponding to the two-dimensional real face image;
carrying out face three-dimensional reconstruction on the two-dimensional virtual face image to obtain a three-dimensional face geometric model and a comprehensive texture map of the virtual face; the comprehensive texture map comprises a texture map obtained by projecting a three-dimensional virtual face to a two-dimensional plane;
sampling and calculating the two-dimensional virtual face image to obtain a front texture map of the virtual face;
and combining the comprehensive texture map and the front texture map of the virtual face, and mapping the combined texture map to the three-dimensional face geometric model to generate the three-dimensional virtual face.
2. The method of claim 1, wherein the generating a two-dimensional virtual face image corresponding to the two-dimensional real face image comprises:
based on a virtual face generation algorithm, mapping real face features in the two-dimensional real face image into virtual face features; the virtual face generation algorithm comprises an algorithm which is constructed with a mapping relation between real face features and virtual face features;
and generating a two-dimensional virtual face image composed of the virtual face features based on the position coordinates of the real face features in the two-dimensional real face image.
3. The method according to claim 2, wherein the mapping the real face features in the two-dimensional real face image to virtual face features based on a virtual face generation algorithm comprises:
responding to a triggered target virtual style in a plurality of different virtual styles, and acquiring a virtual face generation algorithm corresponding to the target virtual style;
and mapping the real face features in the two-dimensional real face image into the virtual face features of the target virtual style based on the virtual face generation algorithm.
4. The method according to claim 1, wherein said reconstructing the two-dimensional virtual face image into a three-dimensional face geometric model and a full texture map of the virtual face comprises:
performing coding calculation on the two-dimensional virtual face image to obtain face modeling parameters and texture parameters;
modeling the face modeling parameters to obtain a three-dimensional face geometric model; and modeling the texture parameters to obtain a comprehensive texture map of the virtual face.
5. The method of claim 4, wherein the face modeling parameters include face contour parameters, face pose parameters, and facial expression parameters.
6. The method of claim 4, wherein the face modeling parameters further comprise camera parameters; the method further comprises the steps of:
and taking the face angle represented by the camera parameters as a reference, and adjusting the face angle of the three-dimensional face geometric model in modeling to the reference.
7. The method according to claim 1, wherein the performing a sampling calculation on the two-dimensional virtual face image to obtain a front texture map of the virtual face comprises:
projecting the two-dimensional virtual face image to the three-dimensional face geometric model, and obtaining pixel colors of mapping three-dimensional vertex coordinates in the three-dimensional face geometric model to the two-dimensional virtual face image;
projecting the three-dimensional vertex coordinates in the three-dimensional face geometric model to a two-dimensional plane to obtain two-dimensional plane coordinates;
and filling the pixel colors mapped by the three-dimensional vertex coordinates into corresponding two-dimensional plane coordinates to obtain the front texture map of the virtual human face.
8. The method of claim 1, wherein the merging the full texture map and the frontal texture map of the virtual face comprises:
acquiring a part of the front texture map within a preset face boundary range as a face texture map;
acquiring a part, which is located outside the preset face boundary range, of the comprehensive texture map as a non-face texture map;
and splicing the facial texture map and the non-facial texture map to obtain a combined texture map.
9. The method of claim 8, further comprising, after stitching the facial texture map and the non-facial texture map:
and smoothing the adjacent region where the spliced line is positioned based on a Gaussian smoothing algorithm.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any of the preceding claims 1-9.
11. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1-9.
CN202310508684.5A 2023-05-05 2023-05-05 Three-dimensional virtual face generation method and device and electronic equipment Pending CN116597079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310508684.5A CN116597079A (en) 2023-05-05 2023-05-05 Three-dimensional virtual face generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116597079A true CN116597079A (en) 2023-08-15

Family

ID=87594820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310508684.5A Pending CN116597079A (en) 2023-05-05 2023-05-05 Three-dimensional virtual face generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116597079A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40100908; Country of ref document: HK)