
US20230042734A1 - Face image processing method and apparatus, face image display method and apparatus, and device - Google Patents


Info

Publication number
US20230042734A1
Authority
US
United States
Prior art keywords
face image
age
face
image
network layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/969,435
Other languages
English (en)
Inventor
Yun Cao
Hui NI
Feida ZHU
Xiaozhong Ji
Ying Tai
Yanhao Ge
Chengjie Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED reassignment TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GE, Yanhao, ZHU, Feida, CAO, YUN, WANG, CHENGJIE, JI, Xiaozhong, NI, HUI, TAI, YING
Publication of US20230042734A1
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/178 Estimating age from face image; using age information for improving recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2004 Aligning objects, relative positioning of parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2021 Shape modification

Definitions

  • This application relates to the technical field of image processing, and particularly, to a face image processing method and apparatus, a face image display method and apparatus, and a device.
  • a computer device can process a first face image to obtain a second face image through the machine learning technology.
  • a first age corresponding to a face in the first face image and a second age corresponding to a face in the second face image are different, but correspond to the same identity.
  • the computer device usually processes the first face image through a machine learning model according to the inputted first face image and a face age change operation of a user, thereby obtaining the second face image.
  • the face age change operation is used for instructing the computer device to make the face in the second face image younger or older.
  • When the face image is processed by the above-mentioned method, the face can only be simply made younger or older. Therefore, the flexibility and accuracy of processing a face image in the age dimension are low.
  • This disclosure provides a face image processing method and apparatus, a face image display method and apparatus, and a device.
  • the flexibility and accuracy of processing of a face image in an age dimension can be improved.
  • the technical solutions are as follows:
  • a face image processing method including:
  • the first image may include a face image of a person
  • the texture difference map being used for reflecting a texture difference between a face texture in the first face image and a face texture of a second face image of the person at the specified age, the second face image being a face image of a face of the person at the specified age;
  • a face image display method including
  • the age change control being a control used for inputting a specified age
  • an age change model training method including:
  • sample image set including a sample image and a sample age label of the sample image
  • the specified age being a random age or the sample age label
  • a face image processing apparatus including:
  • an acquisition module configured to acquire a first face image
  • a prediction module configured to invoke an age change model to predict a texture difference map of the first face image at a specified age, the texture difference map being used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age;
  • a first processing module configured to perform, based on the texture difference map, image processing on the first face image to obtain a second face image, the second face image being a face image of a face in the first face image at the specified age.
  • the age change model includes a conditional generative network layer and a texture synthesis network layer; the prediction module is configured to:
  • invoke the conditional generative network layer to perform prediction on the first face image based on the specified age, and output the texture difference map, the texture difference map being used for reflecting the texture difference between the face texture in the first face image and the face texture of the specified age;
  • the first processing module is configured to invoke the texture synthesis network layer to superimpose the texture difference map with the first face image to obtain the second face image.
  • conditional generative network layer is further used for outputting an attention map;
  • the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point (also referred to as pixel) in the first face image;
  • the first processing module is configured to:
  • the age change model further includes a shape change network layer; the apparatus further includes:
  • a second processing module configured to invoke the shape change network layer to perform shape change processing on a face in the second face image.
  • conditional generative network layer is further used for outputting a shape change information map;
  • shape change information map is used for predicting a face shape change of the face in the first face image relative to the specified age;
  • second processing module is configured to:
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction, the first direction and the second direction being perpendicular to each other; the second processing module is configured to:
  • the apparatus further includes:
  • a third processing module configured to perform semantic image segmentation on the second face image to obtain a hair region in the second face image
  • a calculation module configured to calculate a corresponding target color value of the pixel point in the hair region at the specified age in a mapping manner based on an original color value of a pixel point in the hair region;
  • a replacement module configured to replace the original color value of the pixel point in the hair region by the target color value.
  • the apparatus further includes:
  • a fourth processing module configured to input the first face image into a face detection model, and output a face alignment point in the first face image
  • a fifth processing module configured to perform image matting on the first face image according to the face alignment point and affine transformation to obtain an aligned first face image.
  • the apparatus further includes a training module; the age change model is obtained by training by the training module; the training module is configured to:
  • the sample image set including a sample image and a sample age label of the sample image
  • the specified age being a random age or the sample age label
  • the adversarial loss may be 0 if the prediction result is correct and 1 if the prediction result is wrong;
  • an age prediction model to predict a predicted age of the predicted face image, and calculate an age loss between the predicted age and the specified age
  • determine the generator to be the age change model in a case that a training end condition is satisfied.
  • the training module is configured to:
  • the training module is configured to:
  • the generator includes a conditional generative network layer and a texture synthesis network layer; the training module is configured to:
  • invoke the conditional generative network layer to perform prediction on the sample image based on the specified age, and output the texture difference map, the texture difference map being used for reflecting a face texture difference of a face in the sample image relative to the specified age;
  • conditional generative network layer is further used for outputting an attention map;
  • the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the sample image;
  • training module is configured to:
  • the generator further includes a shape change network layer; the training module is configured to:
  • conditional generative network layer is further used for outputting a shape change information map;
  • shape change information map is used for predicting a face shape change of the face in the sample image relative to the specified age;
  • training module is configured to:
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction, the first direction and the second direction being perpendicular to each other; the training module is configured to:
  • a face image display apparatus including:
  • a display module configured to display a first face image and an age change control, the age change control being a control used for inputting a specified age
  • the processing module configured to invoke, in response to a trigger operation for the age change control, an age change model to process the first face image according to the specified age corresponding to the trigger operation to obtain a second face image, the second face image being a face image of a face of the person in the first face image at the specified age;
  • the display module being configured to display the second face image.
  • the display module is configured to display the second face image and the specified age.
  • an age change model training apparatus including:
  • an acquisition module configured to acquire a sample image set, the sample image set including a sample image and a sample age label of the sample image
  • a first determination module configured to determine a specified age, the specified age being a random age or the sample age label
  • a prediction module configured to invoke a generator in a generative adversarial network to predict, based on the specified age, the sample image to obtain a predicted face image
  • a first calculation module configured to invoke a discriminator in the generative adversarial network to calculate an adversarial loss of the predicted face image, the adversarial loss being a loss used for representing whether the predicted face image is a real face image;
  • a second calculation module configured to invoke an age prediction model to predict a predicted age of the predicted face image, and calculate an age loss between the predicted age and the specified age
  • a training module configured to train the generator according to the adversarial loss and the age loss
  • a second determination module configured to determine, in a case that a training end condition is satisfied, the generator to be an age change model.
  • a computer device including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the face image processing method, the face image displaying method, or the age change model training method according to the foregoing aspect.
  • a non-transitory computer storage medium storing at least one piece of program code, the program code being loaded and executed by a processor to implement the face image processing method, the face image displaying method, or the age change model training method according to the foregoing aspect.
  • a computer program product or a computer program including computer instructions, the computer instructions being stored in a non-transitory computer-readable storage medium.
  • a processor of a computer device reads the computer instructions from the non-transitory computer-readable storage medium, and executes the computer instructions, so that the computer device implements the face image processing method, the face image displaying method, or the age change model training method according to the foregoing aspect.
  • a first face image is processed through an age change model, so that a second face image can be generated and outputted according to a specified age.
  • the second face image is a face image of a face in the first face image at the specified age. That is, a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing the face image in the age dimension are improved, and a clear, natural, and smooth transition animation across all ages can be achieved.
  • FIG. 1 is a schematic structural diagram of an age change model provided according to one exemplary embodiment of this disclosure
  • FIG. 2 is a schematic flowchart of an exemplary face image processing method provided according to an embodiment of this disclosure
  • FIG. 3 is a schematic flowchart of another exemplary face image processing method provided according to an embodiment of this disclosure.
  • FIG. 4 is a schematic diagram of an exemplary implementation process of processing a first face image according to a specified age provided according to an embodiment of this disclosure
  • FIG. 5 is a schematic diagram of an exemplary process of processing a first face image provided according to an embodiment of this disclosure
  • FIG. 6 is a schematic flowchart of an exemplary face image display method provided according to an embodiment of this disclosure.
  • FIG. 7 is a schematic diagram of an exemplary user interface for displaying a first face image and an age change control provided according to an embodiment of this disclosure
  • FIG. 8 is a schematic diagram of an exemplary user interface for displaying a first face image and an age change control provided according to an embodiment of this disclosure
  • FIG. 9 is a schematic diagram of a second exemplary face image provided according to an embodiment of this disclosure.
  • FIG. 10 is a schematic diagram of an exemplary user interface for displaying a second face image provided according to an embodiment of this disclosure.
  • FIG. 11 is a schematic flowchart of an exemplary age change model training method provided according to an embodiment of this disclosure.
  • FIG. 12 is a schematic flowchart of another exemplary age change model training method provided according to an embodiment of this disclosure.
  • FIG. 13 is a schematic diagram of an exemplary implementation process of predicting a sample image provided according to an embodiment of this disclosure
  • FIG. 14 is a schematic diagram of an exemplary implementation process of preprocessing a face image provided according to an embodiment of this disclosure.
  • FIG. 15 is a schematic diagram of an exemplary implementation process of performing age change on a face image provided according to an embodiment of this disclosure
  • FIG. 16 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • FIG. 17 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • FIG. 18 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • FIG. 19 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • FIG. 20 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • FIG. 21 is a schematic structural diagram of a face image display apparatus provided according to an embodiment of this disclosure.
  • FIG. 22 is a schematic structural diagram of an age change model training apparatus provided according to an embodiment of this disclosure.
  • FIG. 23 is a schematic structural diagram of another age change model training apparatus provided according to an embodiment of this disclosure.
  • FIG. 24 is a schematic structural diagram of a terminal provided according to an embodiment of this disclosure.
  • Generative Adversarial Network (GAN): It usually includes a Generator (G) and a Discriminator (D).
  • Unsupervised learning is achieved by mutual competition between the generator and the discriminator.
  • the generator performs random sampling from a latent space to obtain an input, and an output result needs to simulate a real sample in a training set as far as possible.
  • the input to the discriminator is either a real sample or the output of the generator, and its purpose is to distinguish the generator's outputs from the real samples as accurately as possible.
  • the generator, on the other hand, tries to fool the discriminator as much as possible. The generator and the discriminator thus form an adversarial relationship, continuously adjusting their parameters against each other until the generator can produce convincing fake pictures, at which point the training of the model is complete.
  • Semantic image segmentation: It is a very important field in computer vision, which refers to identifying an input image at the pixel level and marking each pixel point in the image with the object category to which it belongs. For example, various elements (including the hair, the face, the facial features, the glasses, the neck, the clothes, the background, etc.) in a picture including a face are distinguished through a neural network.
  • Color look-up table (LUT): Through a color LUT, another corresponding color can be found according to an actually acquired color.
  • AIaaS: Artificial Intelligence as a Service, i.e., an AI cloud service.
  • An AIaaS platform will split several types of common AI services and provide independent or packaged services in a cloud.
  • This service mode is similar to opening an AI-themed mall: all developers can access and use one or more AI services (such as face image processing based on a specified age) provided by the platform through an Application Programming Interface (API); and some senior developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate, and maintain their own dedicated cloud AI services.
  • API: Application Programming Interface.
  • FIG. 1 is a schematic structural diagram of an age change model 101 provided in one exemplary embodiment of this disclosure.
  • the age change model 101 includes a conditional generative network layer 1011, a texture synthesis network layer 1012, and a shape change network layer 1013.
  • a terminal or client can output a texture difference map 105, an attention map 106, and a shape change information map 107 through the conditional generative network layer 1011 according to an inputted first face image 104 and a specified age.
  • the terminal or client is configured to provide a function of changing a face image based on a specified age, and the age change model 101 can be invoked by the terminal or client.
  • the age change model 101 may be set in the terminal, or may be set in a server or implemented as a cloud service. Terminals and servers can be collectively referred to as computer devices.
  • the texture difference map 105 is used for reflecting a texture difference between a face texture in the first face image 104 and a face texture of the face of the same person at the specified age.
  • the attention map 106 is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the first face image 104.
  • the shape change information map 107 includes displacement information corresponding to the pixel point in the first face image 104 in a first direction and a second direction. The first direction and the second direction are perpendicular to each other (such as the horizontal and vertical directions).
  • the terminal or client invokes the texture synthesis network layer 1012 to superimpose, based on the attention map 106, the texture difference map 105 with the first face image 104 to obtain a second face image, and invokes the shape change network layer 1013 to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel points in the second face image, so as to change the shape of the face in the second face image, thereby obtaining an outputted second face image 108.
  • the terminal or client can output face images 109 of the first face image 104 at different specified ages according to those specified ages and the first face image 104.
  • the terminal or client may also preprocess the first face image 104 before inputting the first face image 104 into the age change model 101, wherein the preprocessing includes inputting the first face image 104 into a face detection model and outputting a face alignment point in the first face image, and performing image matting on the first face image 104 according to the face alignment point and affine transformation to obtain an aligned first face image.
  • the terminal or client also performs semantic image segmentation on the outputted second face image 108 to obtain a hair region in the outputted second face image; calculates, in a mapping manner based on an original color value of a pixel point in the hair region, a corresponding target color value of the pixel point in the hair region at the specified age; and replaces the original color value of the pixel point in the hair region with the target color value, thereby obtaining, based on the outputted second face image 108, a second face image in which the hair color is dynamically changed.
  • the terminal or client may fuse the specified age into the features extracted by a plurality of feature extraction layers in the conditional generative network layer 1011 .
  • the age change model 101 and the discriminator 102 can constitute a generative adversarial network.
  • the age change model 101 can also be referred to as a generator in the generative adversarial network.
  • a computer device acquires a sample image and randomly generates a specified age, or takes an age corresponding to the sample image predicted by an age prediction model 103 as the specified age, and then invokes the generator (the age change model 101) to predict, based on the specified age, the sample image to obtain a predicted face image.
  • the predicted face image is a face image of a face in the sample image at the specified age.
  • the computer device then invokes the discriminator 102 to calculate an adversarial loss of the predicted face image, the adversarial loss being a loss used for representing whether the predicted face image is a real face image; invokes the age prediction model 103 to predict a predicted age of the predicted face image, and calculates an age loss between the predicted age and the specified age, thereby training the generator based on the adversarial loss and the age loss; and determines the generator as the age change model 101 when a training end condition (for example, the generator has converged stably) is satisfied.
  • the computer device can invoke the age change model 101 , the discriminator 102 and the age prediction model 103 .
  • the above-mentioned client can be installed in the computer device, and the computer device sends the trained age change model 101 to the client, or the client invokes the age change model 101 through the computer device.
  • the computer device may also preprocess the sample image before inputting the sample image into the generator, wherein the preprocessing includes inputting the sample image into a face detection model and outputting a face alignment point in the sample image, and performing image matting on the sample image according to the face alignment point and affine transformation to obtain an aligned sample image.
  • the preprocessing for the sample image is the same as the preprocessing performed on the first face image 104 before the age change model 101 is used to process the first face image 104 , so that the accuracy of model outputting can be improved, and the training difficulty is lowered.
  • the first face image is processed through the age change model, so that the second face image can be generated according to the specified age.
  • the second face image is a face image of a face in the first face image at the specified age. That is, a face image can be changed according to a specified age customized by a user or a specified age set in a system, so that the flexibility and accuracy of processing of the face image in an age dimension are improved.
  • FIG. 2 is a schematic flowchart of a face image processing method provided according to an embodiment of this disclosure. The method can be applied to a computer device or a client on the computer device. As shown in FIG. 2 , the method includes the following steps:
  • Step 201 Acquire a First Face Image
  • the first face image is from a photo or a video frame in a video.
  • the first face image is any image including information of a face.
  • the first face image includes facial features of the face, and the resolution of the first face image may be 720p, 1080p, 4K, or the like.
  • the first face image is a photo or a video frame of a video uploaded by the user in the client, or a photo or a video frame in a video captured by the computer device where the client is located, or a photo or a video frame in a video acquired by the client through other computer devices.
  • This client is configured to provide a function of processing, based on a specified age, a face image.
  • the client is a short video client, a music client, a live streaming client, a social client, a game client, a mini program, or a web client.
  • a user uses this function by installing the client or by accessing a website corresponding to the client.
  • the client acquires the first face image by taking a photo or by reading photos or videos in a photo album, or receives the first face image transmitted by other devices. Alternatively, the client displays the first face image after acquiring the first face image.
  • Step 202 Invoke an Age Change Model to Predict a Texture Difference Map of the First Face Image at a Specified Age
  • the specified age is determined by the client according to an operation of inputting the specified age by the user, or is generated by the client.
  • the client acquires, according to a trigger operation of the user on an age change control displayed in a user interface that displays the first face image, the specified age, and invokes the age change model to process, based on the specified age, the first face image.
  • the texture difference map is used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age.
  • the texture difference at least includes: a face skin feature difference, a hair color feature difference and a beard feature difference.
  • the face skin feature difference is used for making the skin of the face smoother and finer;
  • the hair color feature difference is used for blackening the hairs;
  • the beard feature difference is used for erasing the beards.
  • the face skin feature difference is used for adding wrinkles on the face; the hair color feature difference is used for whitening the hairs; and the beard feature difference is used for whitening the beards.
  • the age corresponding to the face in the first face image is 21, and the specified age is 50.
  • the texture difference map includes texture information corresponding to the increased wrinkles, texture information corresponding to the whitened hairs, and texture information corresponding to the whitened beards.
  • Step 203 Perform Image Processing on the First Face Image Based on the Texture Difference Map to Obtain a Second Face Image
  • the second face image is a face image of a face in the first face image at the specified age.
  • the age corresponding to the face in the first face image is the same as or different from the specified age.
  • the user uploads an own photo at the age of 21 to the client, and inputs an age of 50 through the age change control.
  • the client invokes the age change model to process the photo taken at the age of 21, so as to obtain a corresponding photo of the user at the age of 50.
  • the second face image is obtained by performing, by the client based on the texture difference map, the image processing on the first face image through the age change model.
  • the client also predicts a face shape change of the face in the first face image relative to the specified age through the age change model, and processes, based on the face shape change, the first face image, so as to obtain a more accurate second face image.
  • the age change model is based on a Convolutional Neural Network (CNN).
  • the age change model is deployed in the client, or in the computer device connected to the client.
  • the computer device is a server, and may be an independent physical server, or may be a server cluster including a plurality of physical servers or a distributed system, or may be a cloud server providing basic cloud computing services, such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an artificial intelligence platform.
  • the client invokes the age change model through an age change model invoking interface provided by the computer device to process, based on the specified age, the first face image.
  • the client sends the first face image and the specified age to the computer device, and the computer device invokes the age change model to process, based on the specified age, the first face image, and sends the obtained second face image to the client.
  • Step 204 Output the Second Face Image
  • the client may output the second face image, for example, display the second face image, after processing the first face image and obtaining the second face image.
  • the client displays the first face image and the second face image in different user interfaces or in the same user interface.
  • the client displays the first face image and the second face image in the same display region in the user interface in a superimposed manner, and a switching display dividing line is also displayed in the display region.
  • the client displays the first face image on one side of the switching display dividing line, and displays the second face image on the other side.
  • when the switching display dividing line is dragged, the display areas of the first face image and the second face image displayed by the client in the display region change accordingly.
  • the client can thus display user interfaces used for comparing the differences between the first face image and the second face image.
  • the client may also display the specified age while displaying the second face image.
  • the first face image is processed through the age change model, so that the second face image can be generated and displayed according to the specified age.
  • the second face image is a face image of a face in the first face image at the specified age. That is, a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing of the face image in an age dimension are improved, and a clear and natural smooth transition change animation in all ages can also be achieved.
  • FIG. 3 is a schematic flowchart of a face image processing method provided according to an embodiment of this disclosure. The method can be applied to a computer device or a client on the computer device. As shown in FIG. 3 , the method includes the following steps:
  • Step 301 Acquire a First Face Image
  • the first face image is from a photo or a video frame in a video.
  • the first face image is a photo or a video frame of a video uploaded by the user in the client, or a photo or a video frame in a video captured by the computer device where the client is located, or a photo or a video frame in a video acquired by the client through other computer devices.
  • the client is configured to provide a function of processing, based on a specified age, a face image.
  • Step 302 Invoke an Age Change Model to Predict, Based on the Specified Age, the First Face Image to Obtain a Second Face Image
  • the second face image is a face image of a face in the first face image at the specified age.
  • the client invokes the age change model to predict a texture difference map of the first face image at the specified age, and performs, based on the texture difference map, image processing on the first face image to obtain the second face image.
  • the client invokes the age change model to predict a texture difference map of the first face image at the specified age and a face shape change of a face in the first face image relative to the specified age, and processes, according to the texture difference map and the face shape change, the first face image, thereby obtaining the second face image.
  • the texture difference map is used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age.
  • the client may also preprocess the first face image, including inputting the first face image into a face detection model, outputting a face alignment point in the first face image, and performing image matting on the first face image according to the face alignment point and affine transformation to obtain an aligned first face image.
  • the face detection model is the same as a model used for preprocessing a sample image during the training of the age change model.
  • the client performs the image matting on the first face image based on affine transformation through the warpAffine function (a public function interface used for achieving image rotation and translation).
  • the client invokes the age change model to process, according to the specified age, the first face image after image matting, thereby obtaining the second face image corresponding to the first face image after image matting, and performs inverse transformation of the affine transformation on the second face image, thereby obtaining the second face image used for being outputted.
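  • As an illustration of this preprocessing step, the following is a minimal sketch (not code from this disclosure) that aligns a face with OpenCV's warpAffine and maps the generated result back with the inverse affine transform. The three-landmark template, the 256x256 crop size, and the function names are assumptions for demonstration:

```python
import cv2
import numpy as np

# Assumed canonical landmark positions (left eye, right eye, mouth center)
# in the aligned 256x256 crop; the real template is not published.
TEMPLATE = np.float32([[89, 110], [167, 110], [128, 190]])

def align_face(image, landmarks):
    # `landmarks` is a float32 (3, 2) array of face alignment points
    # produced by a face detection model.
    M = cv2.getAffineTransform(np.float32(landmarks), TEMPLATE)
    aligned = cv2.warpAffine(image, M, (256, 256), flags=cv2.INTER_LINEAR)
    return aligned, M

def paste_back(original, generated, M):
    # Inverse transformation of the affine transform, mapping the
    # model output back into the original photo's coordinates.
    h, w = original.shape[:2]
    M_inv = cv2.invertAffineTransform(M)
    return cv2.warpAffine(generated, M_inv, (w, h), flags=cv2.INTER_LINEAR)
```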
  • the age change model is integrated with efficient network structure modules (such as MobileNet or a CBAM module), and the performance of the age change model is optimized using a model compression pruning technique and engineering optimization.
  • In this way, the storage space occupied by the age change model is reduced, and the speed at which the age change model processes the first face image can be increased.
  • the age change model includes a conditional generative network layer and a texture synthesis network layer.
  • the age change model can further include a shape change network layer.
  • an implementation process of step 302 includes following step 3021 to step 3023 :
  • Step 3021 Invoke the Conditional Generative Network Layer to Perform Prediction on the First Face Image According to the Specified Age, and Output the Texture Difference Map
  • the texture difference map is used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age.
  • the texture difference includes at least one of a face skin feature difference, a hair color feature difference, and a beard feature difference.
  • the client may fuse the specified age into the features extracted by a plurality of feature extraction layers in the conditional generative network layer, thereby improving the accuracy of the outputted texture difference.
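  • The disclosure does not specify how the age is fused into the extracted features. The sketch below shows one plausible realization: a learned age embedding is broadcast over the spatial grid and concatenated to the feature maps of several feature extraction layers. All layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AgeConditionedEncoder(nn.Module):
    # Hypothetical encoder that fuses a scalar age into multiple layers.
    def __init__(self, age_dim=64):
        super().__init__()
        self.age_embed = nn.Linear(1, age_dim)
        self.conv1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(64 + age_dim, 128, 3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(128 + age_dim, 256, 3, stride=2, padding=1)

    def _fuse(self, feat, age_vec):
        # Broadcast the age embedding over the spatial grid, then concatenate.
        b, _, h, w = feat.shape
        age_map = age_vec.view(b, -1, 1, 1).expand(b, age_vec.shape[1], h, w)
        return torch.cat([feat, age_map], dim=1)

    def forward(self, image, age):
        # `age` is a (batch, 1) float tensor, e.g. normalized to [0, 1].
        age_vec = self.age_embed(age)
        f = torch.relu(self.conv1(image))
        f = torch.relu(self.conv2(self._fuse(f, age_vec)))
        f = torch.relu(self.conv3(self._fuse(f, age_vec)))
        return f
```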
  • Step 3022 Invoke the Texture Synthesis Network Layer to Superimpose the Texture Difference Map with the First Face Image to Obtain the Second Face Image
  • conditional generative network layer is further used for outputting an attention map; and the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the first face image.
  • the client invokes the texture synthesis network layer to superimpose the texture difference map with the first face image based on the attention map, to obtain the second face image.
  • when the texture difference map is superimposed with the first face image based on the attention map, the pixel point in the texture difference map, the pixel point in the first face image, the pixel point in the second face image, and the weight coefficient determined according to the attention map satisfy:

    I_out = α · I_RGB + (1 - α) · I_in

    where I_RGB is the pixel point in the texture difference map, I_in is the pixel point in the first face image, I_out is the pixel point in the second face image, and α is the weight coefficient determined according to the attention map.
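  • Expressed as code, the relation above is a per-pixel weighted blend. The sketch below assumes that α weights the texture difference map, following the convex-combination reading reconstructed above; tensor shapes are illustrative:

```python
import torch

def blend_texture(i_in, i_rgb, alpha):
    # i_in:  first face image, (B, 3, H, W)
    # i_rgb: texture difference map, (B, 3, H, W)
    # alpha: attention map, (B, 1, H, W), the per-pixel weight
    #        coefficient of the texture difference
    return alpha * i_rgb + (1.0 - alpha) * i_in
```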
  • Step 3023 Invoke the Shape Change Network Layer to Perform Shape Change Processing on the Face in the Second Face Image
  • conditional generative network layer is further used for outputting a shape change information map; and the shape change information map is used for predicting a face shape change of the face in the first face image relative to the specified age.
  • the client can invoke the shape change network layer to perform, based on the shape change information map, shape change processing on the face in the second face image.
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction.
  • the first direction and the second direction are perpendicular to each other.
  • the client invokes the shape change network layer to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel point in the second face image, so as to perform the shape change processing on the second face image.
  • the first face image includes three channels (red, green and blue) if it is an RGB image.
  • the texture difference map outputted by the conditional generative network layer based on the first face image also includes three channels (red, green and blue); the attention map includes one channel (weight coefficient); and the shape change information map includes two channels (displacement information in the first direction and displacement information in the second direction).
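  • A bilinear displacement of this kind can be realized as a flow-based warp. The sketch below uses PyTorch's grid_sample as one possible implementation; the pixel-unit flow convention is an assumption:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    # image: (B, 3, H, W) texture-changed second face image
    # flow:  (B, 2, H, W) shape change information map; channel 0 holds
    #        displacements in the first direction, channel 1 in the second
    b, _, h, w = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, h, w, 2)
    # Convert pixel displacements to the normalized coordinate range.
    norm_flow = torch.stack(
        (flow[:, 0] * 2 / max(w - 1, 1), flow[:, 1] * 2 / max(h - 1, 1)),
        dim=-1,
    )
    # Bilinear sampling displaces each pixel by its flow vector.
    return F.grid_sample(image, grid + norm_flow, mode="bilinear",
                         padding_mode="border", align_corners=True)
```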
  • FIG. 5 is a schematic diagram of a process of processing a first face image provided according to an embodiment of this disclosure.
  • the client invokes the conditional generative network layer in the age change model to process, based on the specified age, a first face image 501, so that a texture difference map 502, a shape change information map 504, and an attention map can be obtained.
  • the client superimposes the first face image 501 with the texture difference map 502 based on the attention map through the texture synthesis network layer, so that a texture-changed second face image 503 can be obtained.
  • the client then performs shape change processing on the texture-changed second face image 503 based on the shape change information map 504 through the shape change network layer to obtain a shape-changed second face image 505.
  • the client processes the first face image through the age change model, so that either the second face image obtained by changing only the textures, or the second face image obtained by changing both the textures and the shape, can be outputted.
  • Step 303 Process the Color of a Hair Region in the Second Face Image
  • the second face image here is the second face image obtained by changing the textures, or the second face image obtained by changing both the textures and the shape.
  • the client will also process the color of the hair region in the second face image, specifically including:
  • an image mask corresponding to the hair region can be obtained.
  • in the image mask, the value of a pixel point in the hair region of the second face image is 1, and the value of a pixel point outside the hair region is 0.
  • the hair region can be obtained by multiplying the image mask with the second face image.
  • the client calculates, in a mapping manner, a target color value of a pixel point in the hair region at the specified age, based on an original color value of the pixel point in the hair region, and replaces the original color value of the pixel point in the hair region by the target color value, thereby obtaining a hair color-changed second face image.
  • the client calculates the target color value in the mapping manner according to the original color value and the specified age through a conditional LUT. Here, "conditional" means that the same original color value corresponds to different target color values at different specified ages.
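  • The following sketch illustrates the hair recoloring step. The conditional LUT itself is not published in this disclosure, so lut(age) is a hypothetical interface returning a 256-entry per-channel table; the mask convention (1 inside the hair region, 0 elsewhere) follows the description above:

```python
import numpy as np

def recolor_hair(image, hair_mask, age, lut):
    # image:     (H, W, 3) uint8 second face image
    # hair_mask: (H, W) uint8 semantic segmentation mask of the hair region
    # lut:       hypothetical callable age -> (256, 3) uint8 table mapping
    #            original color values to target color values
    table = lut(age)
    recolored = image.copy()
    for c in range(3):
        channel = image[..., c]
        # Replace only hair pixels; everything else keeps its original value.
        recolored[..., c] = np.where(hair_mask == 1, table[channel, c], channel)
    return recolored
```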
  • Step 304 Output the Second Face Image
  • the client outputs the second face image.
  • the client displays the second face image in a user interface that displays the first face image and the age change control, that is, the client displays the first face image and the second face image in the same user interface.
  • the client displays the first face image and the second face image in different user interfaces.
  • the client may also display the specified age corresponding to the second face image in the user interface that displays the second face image.
  • the first face image is processed through the age change model, so that the second face image can be generated and displayed according to the specified age.
  • the second face image is a face image of a face in the first face image at the specified age. That is, a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing of the face image in an age dimension are improved.
  • the texture change processing and the shape change processing are performed separately on the first face image, and the texture difference map is superimposed with the inputted original image, so that the definition of the outputted face image can be maintained.
  • the hair color processing is then performed on the texture-changed and shape-changed face image, so that the hair color of the finally outputted face image is more realistic and natural, and matches the specified age.
  • FIG. 6 is a schematic flowchart of a face image display method provided according to an embodiment of this disclosure. The method can be applied to a computer device or a client on the computer device. As shown in FIG. 6 , the method includes the following steps:
  • Step 601 Display a First Face Image and an Age Change Control
  • the first face image is from a photo or a video frame in a video.
  • the first face image is any image including information of a face.
  • the first face image is a photo or a video frame of a video uploaded by the user in the client, or a photo or a video frame in a video captured by the computer device where the client is located, or a photo or a video frame in a video acquired by the client through other computer devices.
  • the client is configured to provide a function of processing, based on a specified age, a face image.
  • the age change control is a control used for inputting a specified age.
  • the age change control includes an age input box; or the age change control includes an age selection box; or the age change control includes an age display bar and an element superimposed on the age display bar to indicate the specified age.
  • the client displays the first face image and the age change control in the same user interface.
  • the user interface is used for providing a function of processing, according to the specified age, the first face image.
  • An image uploading control is also displayed in the user interface and is used for uploading the first face image.
  • FIG. 7 is a schematic diagram of a user interface for displaying a first face image and an age change control provided according to an embodiment of this disclosure.
  • a first face image 702 uploaded by a user through an image uploading button 704 is displayed in a first user interface 701.
  • the first user interface 701 is a user interface in the client used for providing a function of processing a face image based on a specified age.
  • An age input box 703 is also displayed in the first user interface 701.
  • the client can acquire the specified age according to an input operation on the age input box 703.
  • FIG. 8 is a schematic diagram of another user interface for displaying a first face image and an age change control provided according to an embodiment of this disclosure.
  • a video frame 802 of a video captured in real time by a computer device where a client is located is displayed in a second user interface 801, and the video frame 802 includes an image of a face.
  • the second user interface 801 is a user interface in the client used for processing the video captured in real time.
  • the client may display an age seek bar 804.
  • the client can acquire a specified age according to a drag operation on the element of the age seek bar 804 used for indicating the selected specified age, and displays the specified age above this element.
  • Step 602 Process, in Response to a Trigger Operation for the Age Change Control, the First Face Image According to the Specified Age Corresponding to the Trigger Operation to Obtain a Second Face Image
  • the second face image is a face image of a face in the first face image at the specified age.
  • the client may invoke an age change model using the method in the above-mentioned embodiment to process, according to the specified age, the first face image to obtain the second face image.
  • the age change model may be the age change model mentioned in the above-mentioned embodiments.
  • the age change control further includes a confirm control when the age change control includes an age input box.
  • the client may acquire the specified age inputted into the input box and confirm that the trigger operation has been received.
  • the age change control also includes a confirm control when the age change control includes an age selection box.
  • the client may acquire the specified age selected through the selection box and confirm that the trigger operation has been received.
  • the client may acquire the specified age indicated by the element and confirm that the trigger operation has been received.
  • when the client receives an input operation on the age input box 703, the client acquires the specified age inputted through the input operation, and invokes the age change model to process, according to the specified age, the first face image.
  • when the client receives the drag operation on the element used for indicating the selected specified age on the age seek bar, the client acquires the specified age currently indicated by the element, and invokes the age change model to process, according to the specified age, the first face image.
  • Step 603 Display the Second Face Image
  • the client displays the second face image in a user interface that displays the first face image and the age change control, that is, the client displays the first face image and the second face image in the same user interface. Or, the client displays the first face image and the second face image in different user interfaces. Alternatively, the client may also display the specified age corresponding to the second face image in the user interface that displays the second face image.
  • FIG. 9 is a schematic diagram of a displayed second face image provided according to an embodiment of this disclosure.
  • a client invokes, according to a first face image 901 uploaded by a user, an age change model to process the first face image based on a plurality of preset specified ages, thereby obtaining a plurality of face images 902 of the first face image 901 at different specified ages.
  • the client may also display, above each displayed face image, the age 903 corresponding to the face in that face image.
  • FIG. 10 is a schematic diagram of a user interface for displaying a second face image provided according to an embodiment of this disclosure.
  • a video frame 802 of a video captured in real time by a computer device where a client is located is displayed in a second user interface 801 , and the video frame 802 includes an image of a face.
  • the client acquires a specified age through the age seek bar 804, invokes the age change model to process the video frame 802 to obtain a target video frame 805, and displays the video frame 802 and the target video frame 805 in a superimposed manner.
  • the target video frame 805 includes an image of a face obtained by processing the image of the face in the video frame 802 .
  • a switching display dividing line 806 is also displayed in the second user interface 801.
  • the client displays the video frame 802 on the left side of the switching display dividing line 806, and displays the target video frame 805 on the other side.
  • when the switching display dividing line 806 is dragged, the display areas of the video frame 802 and the target video frame 805 displayed on the client change. For example, when the switching display dividing line 806 is dragged to the left, the area of the video frame 802 displayed on the client decreases, and the area of the target video frame 805 displayed increases.
  • when the switching display dividing line 806 is dragged to the right, the area of the video frame 802 displayed on the client increases, and the area of the target video frame 805 displayed decreases.
  • in response to the trigger operation for the age change control, the first face image is processed through the age change model according to the specified age, so that the second face image of the face in the first face image at the specified age can be obtained and displayed. That is, a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing the face image in the age dimension are improved.
  • FIG. 11 is a schematic flowchart of an age change model training method provided according to an embodiment of this disclosure. The method can be applied to a computer device or a client on the computer device. As shown in FIG. 11 , the method includes the following steps:
  • Step 1101 Acquire a Sample Image Set
  • the sample image set includes a sample image and a sample age label of the sample image.
  • the sample image set includes sample images of different faces.
  • the sample image set is determined by an administrator providing a face image processing service.
  • the computer device trains an age change model according to the sample image set.
  • the age change model is deployed in the computer device, or the computer device can remotely invoke the age change model.
  • the sample image set may include a plurality of pairs, each pair consisting of a sample image and its corresponding sample age label.
  • the training process described below applies to each of the plurality of pairs.
  • the computer device predicts the age of the sample image through an age prediction model, so as to obtain the sample age label.
  • the age prediction model is based on a convolutional neural network (CNN), and is trained on samples of face images of different ages and identities together with the known ages corresponding to those face images.
  • Step 1102 Determine a Specified Age, the Specified Age Being a Random Age or the Sample Age Label
  • the specified age is randomly generated by the computer device.
  • the computer device randomly generates a number between 10 and 80, and determines it as the specified age.
  • the computer device can also acquire an age selected by the administrator and take it as the specified age.
  • Step 1103 Invoke a Generator in a Generative Adversarial Network to Predict, Based on the Specified Age, the Sample Image to Obtain a Predicted Face Image
  • the predicted face image is a face image of a face in the sample image at the specified age.
  • the generator in the generative adversarial network is the age change model.
  • the computer device invokes the generator to predict a texture difference map of the sample image at the specified age, and performs, based on the texture difference map, image processing on the sample image to obtain the predicted face image.
  • the computer device invokes the generator to predict a texture difference map of the sample image at the specified age and a face shape change of the face in the sample image relative to the specified age, and processes, according to the texture difference map and the face shape change, the sample image to obtain the predicted face image.
  • the texture difference map is used for reflecting a texture difference between a face texture in the sample image and a face texture of the specified age.
  • the generator is based on the CNN.
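  • for illustration, the following sketch shows how a conditional generator of this kind could be laid out, assuming PyTorch; the layer sizes, the age normalization and the 6-channel output split (3 texture difference channels, 1 attention channel, 2 displacement channels, matching the channel counts described later) are assumptions, not the actual network of this disclosure.

```python
# Hypothetical conditional generator: conditioned on the specified age,
# it emits 6 channels per pixel -- 3 for the RGB texture difference map,
# 1 for the attention map, 2 for the shape change (displacement) map.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: the 3-channel image concatenated with a broadcast age plane.
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 6, 3, padding=1),
        )

    def forward(self, image, age):
        n, _, h, w = image.shape
        age_plane = image.new_full((n, 1, h, w), float(age) / 100.0)
        out = self.net(torch.cat([image, age_plane], dim=1))
        texture_diff = torch.tanh(out[:, 0:3])    # texture difference map
        attention = torch.sigmoid(out[:, 3:4])    # weight coefficients in [0, 1]
        displacement = out[:, 4:6]                # per-pixel (dy, dx) offsets
        return texture_diff, attention, displacement
```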
  • Step 1104 Invoke a Discriminator in the Generative Adversarial Network to Calculate an Adversarial Loss for the Predicted Face Image
  • the generator and the discriminator constitute the generative adversarial network.
  • the discriminator is used for determining whether an inputted image is an image generated by the generator or a real image, thereby forming an adversarial relationship with the generator.
  • the computer device inputs the predicted face image into the discriminator, and obtains the determination predicted by the discriminator on whether the predicted face image is a real image, thereby calculating the adversarial loss.
  • the adversarial loss is a loss representing whether the predicted face image is a real face image.
  • the computer device may also input the sample image into the discriminator, and trains the discriminator according to the determination outputted by the discriminator on whether the sample image is a real image.
  • the discriminator is based on the CNN.
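  • a minimal sketch of the adversarial loss computation, assuming PyTorch and a standard non-saturating GAN objective (which this disclosure does not spell out):

```python
# Hypothetical adversarial losses for the discriminator and the generator.
import torch
import torch.nn.functional as F

def adversarial_losses(discriminator, real_image, predicted_face_image):
    real_logit = discriminator(real_image)
    # Detach so the discriminator update does not backpropagate
    # into the generator.
    fake_logit_d = discriminator(predicted_face_image.detach())
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(
                  fake_logit_d, torch.zeros_like(fake_logit_d)))
    # Generator term: push the discriminator to judge the fake as real.
    fake_logit_g = discriminator(predicted_face_image)
    g_loss = F.binary_cross_entropy_with_logits(
        fake_logit_g, torch.ones_like(fake_logit_g))
    return d_loss, g_loss
```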
  • Step 1105 Invoke the Age Prediction Model to Predict a Predicted Age of the Predicted Face Image, and Calculate an Age Loss Between the Predicted Age and the Specified Age
  • the age prediction model is the same as or different from the age prediction model used for predicting the sample age label.
  • the computer device inputs the predicted face image into the age prediction model to obtain the predicted age predicted for the face in the predicted face image, and calculates, according to the predicted age and the specified age, the age loss.
  • the age loss can reflect a deviation between the predicted age and the specified age.
  • Step 1106 Train the Generator According to the Adversarial Loss and the Age Loss
  • the computer device trains, according to the adversarial loss, the generator using gradient backpropagation, and trains, according to the age loss, the generator using gradient backpropagation.
  • the computer device trains the generator alternately, or simultaneously, according to the adversarial loss and the age loss.
  • training with the adversarial loss enables the generator to generate an image that is closer to a real face image, and training with the age loss reduces the deviation between the age reflected by the face in the generated face image and the specified age.
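  • a sketch of a single generator update combining the two losses, assuming PyTorch, an L1 age loss, and a generator(...) call that already returns the composited predicted face image; the loss weights are illustrative assumptions:

```python
# Hypothetical generator training step driven by the adversarial
# loss and the age loss together.
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, age_predictor, optimizer,
                   sample_image, specified_age,
                   adv_weight=1.0, age_weight=1.0):
    predicted_face = generator(sample_image, specified_age)
    # Adversarial term: push the generated face toward "real".
    fake_logit = discriminator(predicted_face)
    adv_loss = F.binary_cross_entropy_with_logits(
        fake_logit, torch.ones_like(fake_logit))
    # Age term: penalize the deviation between predicted and specified age.
    predicted_age = age_predictor(predicted_face)
    age_loss = F.l1_loss(
        predicted_age, torch.full_like(predicted_age, float(specified_age)))
    loss = adv_weight * adv_loss + age_weight * age_loss
    optimizer.zero_grad()
    loss.backward()   # gradient backpropagation, as described above
    optimizer.step()
    return loss.item()
```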
  • Step 1107 Determine the Generator to be the Age Change Model in a Case that a Training End Condition is Satisfied
  • the training end condition is that the parameters of the generator, adjusted based on backpropagation of the adversarial loss and the age loss, have stabilized and converged. Or, the training end condition is determined by the administrator training the generator.
  • the above-mentioned discriminator and age prediction model are mainly used for the computer device to train the generator. When the training of the generator is completed, the computer device acquires the trained generator and determines it to be the age change model.
  • the computer device may set the age change model in the client, or provide an invoking interface of the age change model to the outside, so that a service of processing a face image based on a specified age is provided to the outside.
  • the computer device is a server, which may be an independent physical server, a server cluster including a plurality of physical servers, a distributed system, or a cloud server providing basic cloud computing services, such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an artificial intelligence platform.
  • the computer device training the age change model is the same as or different from the computer device where the client is installed.
  • the generator is trained by the sample image set and the specified age, thereby obtaining the age change model.
  • the age change model is trained based on the adversarial loss and the age loss.
  • a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing of the face image in an age dimension are improved.
  • FIG. 12 is a schematic flowchart of another age change model training method provided according to an embodiment of this disclosure. The method can be applied to a computer device or a client on the computer device. As shown in FIG. 12 , the method includes the following steps:
  • Step 1201 Acquire a Sample Image Set
  • the sample image set includes a sample image and a sample age label of the sample image.
  • the sample image set includes sample images of different faces.
  • the sample image set is determined by an administrator providing a face image processing service.
  • the computer device trains an age change model according to the sample image set.
  • the age change model is deployed in the computer device, or the computer device can remotely invoke the age change model.
  • the computer device invokes the age prediction model to perform age prediction on the sample image to obtain the sample age label, and determines the sample image and the sample age label of the sample image to be the sample image set.
  • the age prediction model is based on a CNN, and is trained by samples of face images of different ages and identities and known ages corresponding to the face images.
  • Step 1202 Preprocess the Sample Image Set
  • the computer device may input the sample image into a face detection model, output a face alignment point in the sample image, and perform image matting on the sample image according to the face alignment point and affine transformation to obtain an aligned sample image.
  • the face detection model is used for determining features of the face included in the sample image, thereby obtaining the face alignment point that reflects face features.
  • the face detection model is trained by a training sample including facial features.
  • the face alignment point includes a pixel point in the sample image used for indicating the facial features.
  • the computer device achieves image matting on the sample image based on affine transformation through the warpAffine function.
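  • a sketch of this alignment step with OpenCV's warpAffine; the choice of three alignment points and the destination template coordinates are illustrative assumptions:

```python
# Sketch of face alignment: map detected alignment points onto a fixed
# template with an affine transform, then crop via warpAffine.
import cv2
import numpy as np

def align_face(image, alignment_points, size=256):
    # alignment_points: three detected (x, y) points, e.g. left eye,
    # right eye, mouth center (an assumed choice of points).
    src = np.asarray(alignment_points, dtype=np.float32)
    # Assumed destination template for the aligned crop.
    dst = np.float32([[0.3 * size, 0.35 * size],
                      [0.7 * size, 0.35 * size],
                      [0.5 * size, 0.75 * size]])
    matrix = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(image, matrix, (size, size))
```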
  • Step 1203 Determine a Specified Age, the Specified Age Being a Random Age or the Sample Age Label
  • the specified age is randomly generated by the computer device.
  • the computer device randomly generates a number between 10 and 80, and determines it as the specified age.
  • the computer device can also acquire an age selected by the administrator and take it as the specified age.
  • Step 1204 Invoke a Generator in a Generative Adversarial Network to Predict, Based on the Specified Age, the Sample Image to Obtain a Predicted Face Image
  • the generator in the generative adversarial network is the age change model.
  • the computer device invokes the generator to predict a texture difference between a face texture in the sample image and a face texture of the specified age, and performs, based on the texture difference, image processing on the sample image to obtain the predicted face image.
  • the computer device invokes the generator to predict a texture difference between a face texture in the sample image and a face texture of the specified age and a face shape change of the face in the sample image relative to the specified age, and processes, based on the face texture difference and the face shape change, the sample image to obtain the predicted face image.
  • the generator includes a conditional generative network layer and a texture synthesis network layer.
  • the generator can further include a shape change network layer.
  • an implementation process of step 1204 includes following step 12041 to step 12043 :
  • Step 12041 Invoke the Conditional Generative Network Layer to Predict, Based on the Specified Age, the Sample Image, and Output the Texture Difference Map
  • the texture difference map is used for reflecting the texture difference between the face texture in the sample image and the face texture of the specified age.
  • the texture difference at least includes: a face skin feature difference, a hair color feature difference and a beard feature difference.
  • for example, when the specified age is younger than the age of the face in the sample image: the face skin feature difference is used for making the skin of the face smoother and finer; the hair color feature difference is used for darkening the hair; and the beard feature difference is used for erasing the beard.
  • conversely, when the specified age is older: the face skin feature difference is used for adding wrinkles on the face; the hair color feature difference is used for whitening the hair; and the beard feature difference is used for whitening the beard.
  • Step 12042 Invoke the Texture Synthesis Network Layer to Superimpose the Texture Difference Map with the Sample Image to Obtain the Predicted Face Image
  • the conditional generative network layer is further used for outputting an attention map.
  • the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the sample image.
  • the weight coefficient is used for reflecting the importance of the texture difference corresponding to a pixel point in the sample image relative to the texture differences corresponding to other pixel points.
  • the computer device invokes the texture synthesis network layer, so that the texture difference map and the sample image can be superimposed based on the attention map to obtain the predicted face image.
  • a pixel point in the texture difference map, a pixel point in the sample image, a pixel point in the predicted face image and the weight coefficient determined according to the attention map satisfy:
  • I_out = I_RGB × α + I_in × (1 − α)
  • where I_RGB is the pixel point in the texture difference map, I_in is the pixel point in the sample image, I_out is the pixel point in the predicted face image, and α is the weight coefficient determined according to the attention map.
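  • the formula above written out with NumPy, assuming H×W×3 float images and an H×W×1 attention map:

```python
import numpy as np

def synthesize(texture_diff_map, sample_image, attention_map):
    # attention_map: per-pixel weight coefficient alpha in [0, 1].
    alpha = attention_map
    # I_out = I_RGB * alpha + I_in * (1 - alpha)
    return texture_diff_map * alpha + sample_image * (1.0 - alpha)
```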
  • Step 12043 Invoke the Shape Change Network Layer to Perform Shape Change Processing on the Face in the Predicted Face Image
  • the conditional generative network layer is further used for outputting a shape change information map; and the shape change information map is used for predicting a face shape change of the face in the sample image relative to the face at the specified age.
  • the computer device can invoke the shape change network layer to perform, based on the shape change information map, shape change processing on the face in the predicted face image.
  • the shape change information map includes displacement information corresponding to the pixel point in the sample image in a first direction and a second direction.
  • since the predicted face image is obtained based on the sample image, the shape change information map can also reflect displacement information corresponding to the pixel point in the predicted face image in the first direction and the second direction.
  • the first direction and the second direction are perpendicular to each other.
  • for example, the first direction is a vertical direction in the predicted face image, and the second direction is a horizontal direction in the predicted face image.
  • the computer device invokes the shape change network layer to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel point in the predicted face image, so as to perform the shape change processing on the predicted face image.
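  • a sketch of this bilinear displacement with OpenCV's remap, assuming the shape change information map holds per-pixel offsets in pixels:

```python
# Hypothetical shape change via bilinear displacement with cv2.remap.
import cv2
import numpy as np

def apply_shape_change(face_image, displacement_map):
    # displacement_map: (H, W, 2) float32 per-pixel offsets, channel 0 for
    # the vertical (first) direction and channel 1 for the horizontal
    # (second) direction.
    h, w = face_image.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_y = grid_y + displacement_map[..., 0]   # first (vertical) direction
    map_x = grid_x + displacement_map[..., 1]   # second (horizontal) direction
    # INTER_LINEAR gives the bilinear sampling described above.
    return cv2.remap(face_image, map_x, map_y, cv2.INTER_LINEAR)
```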
  • the sample image includes three channels (red, green and blue) if it is an RGB image.
  • the texture difference map outputted by the conditional generative network layer based on the sample image also includes three channels (red, green and blue); the attention map includes one channel (weight coefficient); and the shape change information map includes two channels (displacement information in the first direction and displacement information in the second direction).
  • Step 1205 Invoke a Discriminator in the Generative Adversarial Network to Calculate an Adversarial Loss for the Predicted Face Image
  • the computer device inputs the predicted face image into the discriminator, and obtains the determination predicted by the discriminator on whether the predicted face image is a real image, thereby calculating the adversarial loss.
  • the adversarial loss is a loss representing whether the predicted face image is a real face image.
  • the computer device may also input the sample image into the discriminator, and trains the discriminator according to the determination outputted by the discriminator on whether the sample image is a real image.
  • Step 1206 Invoke the Age Prediction Model to Predict a Predicted Age of the Predicted Face Image, and Calculate an Age Loss Between the Predicted Age and the Specified Age
  • the age prediction model is the same as or different from the age prediction model used for predicting the sample age label.
  • the computer device inputs the predicted face image into the age prediction model to obtain the predicted age predicted for the face in the predicted face image, and calculates, according to the predicted age and the specified age, the age loss.
  • the age loss can reflect a deviation between the predicted age and the specified age.
  • Step 1207 Train the Generator According to the Adversarial Loss and the Age Loss
  • the computer device trains, according to the adversarial loss, the generator using gradient backpropagation, and trains, according to the age loss, the generator using gradient backpropagation.
  • the computer device trains the generator alternately, or simultaneously, according to the adversarial loss and the age loss.
  • training with the adversarial loss enables the generator to generate an image that is closer to a real face image, and training with the age loss reduces the deviation between the age reflected by the face in the generated face image and the specified age.
  • Step 1208 Determine the Generator to be the Age Change Model in a Case that a Training End Condition is Satisfied
  • the training end condition is that the parameters of the generator, adjusted based on backpropagation of the adversarial loss and the age loss, have stabilized and converged. Or, the training end condition is determined by the administrator training the generator.
  • the above-mentioned discriminator and age prediction model are mainly used for the computer device to train the generator. When the training of the generator is completed, the computer device acquires the trained generator and determines it to be the age change model.
  • the computer device may train only the conditional generative network layer and the texture synthesis network layer, that is, train the generator according to the outputted predicted face image that is not subjected to the shape change processing. Or, the computer device trains the conditional generative network layer, the texture synthesis network layer and the shape change network layer, that is, trains the generator according to the outputted predicted face image that is subjected to the shape change processing. After completing the training of the age change model, the computer device may set the age change model in the client, or provide an invoking interface of the age change model to the outside, so that a service of processing a face image based on a specified age is provided to the outside.
  • the generator is trained by the sample image set and the specified age, thereby obtaining the age change model.
  • the age change model is trained based on the adversarial loss and the age loss.
  • a face image can be changed according to a specified age customized by a user or a specified age preset in a system, so that the flexibility and accuracy of processing of the face image in an age dimension are improved.
  • the age change model is trained based on the generative adversarial network, so that the generated second face image is more natural.
  • the age change model is trained based on the age prediction model, so that the face in the generated second face image is closer to the face of the specified age. Preprocessing of the sample image can lower the difficulty in training the age change model, thereby improving the training efficiency.
  • FIG. 14 is a schematic diagram of an implementation process of preprocessing a face image provided according to an embodiment of this disclosure.
  • a client acquires a face image according to a picture or a video frame; determines, through a face detection model, a face alignment point in the face image to achieve detection and alignment for a face in the face image; performs, through the warpAffine function according to the face alignment point and affine transformation, image matting on the face image to obtain an aligned face image; and predicts, through an age prediction model, an age of the face image to obtain a combination of the face image and the predicted age.
  • the face image and the predicted age can be used as a sample image and a sample age label to train an age change model.
  • FIG. 15 is a schematic diagram of an implementation process of performing age change on a face image provided according to an embodiment of this disclosure.
  • a client acquires a face image according to a picture or a video frame.
  • the face image is preprocessed for face alignment, and a specified age is then acquired.
  • the client invokes a conditional generative network layer to process, based on the specified age, the face image to obtain a texture difference map, an attention map and a shape change information map.
  • the texture difference map and the face image are superimposed based on the attention map through the texture synthesis network layer, and shape change processing is performed on the outputted texture-changed face image based on the shape change information map through a shape change network layer.
  • the client performs semantic face segmentation on the texture-changed and shape-changed face image to obtain a hair region, and replaces the color of a pixel point in the hair region based on a dynamic hair color algorithm (that is, a conditional color lookup table (LUT)), thereby outputting a final result map.
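  • a sketch of this hair recoloring step, assuming OpenCV; the lookup table here is a stand-in for the conditional color LUT, which would in practice depend on the specified age:

```python
# Hypothetical hair recoloring: a semantic segmentation mask selects the
# hair region, and a color lookup table maps original to target colors.
import cv2
import numpy as np

def recolor_hair(face_image, hair_mask, lut):
    # face_image: (H, W, 3) uint8; hair_mask: (H, W) 0/1 segmentation mask;
    # lut: (1, 256, 3) uint8 per-channel color lookup table.
    recolored = cv2.LUT(face_image, lut)
    mask = hair_mask.astype(bool)
    result = face_image.copy()
    result[mask] = recolored[mask]   # replace colors only inside the hair
    return result
```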
  • FIG. 16 is a schematic structural diagram of a face image processing apparatus provided according to an embodiment of this disclosure.
  • a unit and a module may be hardware such as a combination of electronic circuitries; firmware; or software such as computer instructions.
  • the unit and the module may also be any combination of hardware, firmware, and software.
  • a unit may include at least one module.
  • the apparatus can be applied to a computer device or a client on the computer device. As shown in FIG. 16 , the apparatus 160 includes:
  • an acquisition module 1601 configured to acquire a first face image
  • a prediction module 1602 configured to invoke an age change model to predict a texture difference map of the first face image at a specified age, the texture difference map being used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age;
  • a first processing module 1603 configured to perform, based on the texture difference map, image processing on the first face image to obtain a second face image, the second face image being a face image of a face in the first face image at the specified age.
  • the age change model includes a conditional generative network layer and a texture synthesis network layer.
  • the prediction module 1602 is configured to: invoke the conditional generative network layer to predict, based on the specified age, the first face image, and output the texture difference map;
  • the texture difference map is used for reflecting a texture difference between a face texture in the first face image and a face texture of the specified age.
  • the first processing module 1603 is configured to invoke the texture synthesis network layer to superimpose the texture difference map with the first face image to obtain the second face image.
  • the conditional generative network layer is further used for outputting an attention map; and the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the first face image.
  • the first processing module 1603 is configured to: invoke the texture synthesis network layer to superimpose, based on the attention map, the texture difference map with the first face image to obtain the second face image.
  • the age change model further includes a shape change network layer.
  • the apparatus 160 further includes:
  • a second processing module 1604 configured to invoke the shape change network layer to perform shape change processing on a face in the second face image.
  • the conditional generative network layer is further used for outputting a shape change information map; and the shape change information map is used for predicting a face shape change of the face in the first face image relative to the specified age.
  • the second processing module 1604 is configured to: invoke the shape change network layer to perform, based on the shape change information map, shape change processing on the face in the second face image.
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction, the first direction and the second direction being perpendicular to each other;
  • the second processing module 1604 is configured to: invoke the shape change network layer to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel point in the second face image.
  • the apparatus 160 further includes:
  • a third processing module 1605 configured to perform semantic image segmentation on the second face image to obtain a hair region in the second face image
  • a calculation module 1606 configured to calculate, in a mapping manner based on an original color value of a pixel point in the hair region, a corresponding target color value of the pixel point in the hair region at the specified age;
  • a replacement module 1607 configured to replace the original color value of the pixel point in the hair region by the target color value.
  • the apparatus 160 further includes:
  • a fourth processing module 1608 configured to input the first face image into a face detection model, and output a face alignment point in the first face image
  • a fifth processing module 1609 configured to perform image matting on the first face image according to the face alignment point and affine transformation to obtain an aligned first face image.
  • the apparatus 160 further includes a training module 1610 ; the age change model is obtained through training by the training module 1610 ; the training module 1610 is configured to:
  • acquire a sample image set, the sample image set including a sample image and a sample age label of the sample image; determine a specified age, the specified age being a random age or the sample age label; invoke a generator in a generative adversarial network to predict, based on the specified age, the sample image to obtain a predicted face image; invoke a discriminator in the generative adversarial network to calculate an adversarial loss of the predicted face image, the adversarial loss being a loss representing whether the predicted face image is a real face image; invoke an age prediction model to predict a predicted age of the predicted face image, and calculate an age loss between the predicted age and the specified age; train the generator based on the adversarial loss and the age loss; and determine the generator to be the age change model in a case that a training end condition is satisfied.
  • the training module 1610 is configured to:
  • the training module 1610 is configured to:
  • the generator includes a conditional generative network layer and a texture synthesis network layer.
  • the training module 1610 is configured to:
  • invoke the conditional generative network layer to predict, based on the specified age, the sample image, and output the texture difference map, the texture difference map being used for reflecting the texture difference between the face texture in the sample image and the face texture of the specified age; and invoke the texture synthesis network layer to superimpose the texture difference map with the sample image to obtain the predicted face image.
  • the conditional generative network layer is further used for outputting an attention map; and the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the sample image.
  • the training module 1610 is configured to: invoke the texture synthesis network layer to superimpose, based on the attention map, the texture difference map with the sample image to obtain the predicted face image.
  • the generator further includes a shape change network layer.
  • the training module 1610 is configured to: invoke the shape change network layer to perform shape change processing on the face in the predicted face image.
  • the conditional generative network layer is further used for outputting a shape change information map; and the shape change information map is used for predicting a face shape change of the face in the sample image relative to the specified age.
  • the training module 1610 is configured to: invoke the shape change network layer to perform, based on the shape change information map, shape change processing on the face in the predicted face image.
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction, the first direction and the second direction being perpendicular to each other;
  • the training module 1610 is configured to: invoke the shape change network layer to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel point in the predicted face image.
  • FIG. 21 is a schematic structural diagram of a face image display apparatus provided according to an embodiment of this disclosure.
  • the apparatus can be applied to a computer device or a client on the computer device.
  • the apparatus 210 includes:
  • a display module 2101 configured to display a first face image and an age change control, the age change control being a control used for inputting a specified age;
  • a processing module 2102 configured to process, in response to a trigger operation for the age change control, the first face image according to the specified age corresponding to the trigger operation to obtain a second face image, the second face image being a face image of a face in the first face image at the specified age.
  • the processing module 2102 is configured to invoke, in response to a trigger operation for the age change control, an age change model to process the first face image according to the specified age corresponding to the trigger operation to obtain a second face image, the second face image being a face image of a face in the first face image at the specified age.
  • the age change model is the age change model mentioned in the above-mentioned embodiments.
  • the display module 2101 is configured to display the second face image.
  • the display module 2101 is configured to display the second face image and the specified age.
  • FIG. 22 is a schematic structural diagram of an age change model training apparatus provided according to an embodiment of this disclosure.
  • the apparatus can be applied to a computer device or a client on the computer device.
  • the apparatus 220 includes:
  • an acquisition module 2201 configured to acquire a sample image set, the sample image set including a sample image and a sample age label of the sample image;
  • a first determination module 2202 configured to determine a specified age, the specified age being a random age or the sample age label;
  • a prediction module 2203 configured to invoke a generator in a generative adversarial network to predict, based on the specified age, the sample image to obtain a predicted face image;
  • a first calculation module 2204 configured to invoke a discriminator in the generative adversarial network to calculate an adversarial loss of the predicted face image, the adversarial loss being a loss representing whether the predicted face image is a real face image;
  • a second calculation module 2205 configured to invoke an age prediction model to predict a predicted age of the predicted face image, and calculate an age loss between the predicted age and the specified age;
  • a training module 2206 configured to train the generator according to the adversarial loss and the age loss
  • a second determination module 2207 configured to determine, in a case that a training end condition is satisfied, the generator to be an age change model.
  • the acquisition module 2201 is configured to: invoke an age prediction model to perform age prediction on the sample image to obtain the sample age label, and determine the sample image and the sample age label of the sample image to be the sample image set.
  • the apparatus 220 further includes:
  • a first processing module 2208 configured to input the sample image into a face detection model, and output a face alignment point in the sample image
  • a second processing module 2209 configured to perform image matting on the sample image according to the face alignment point and affine transformation to obtain an aligned sample image.
  • the generator includes a conditional generative network layer and a texture synthesis network layer.
  • the prediction module 2203 is configured to:
  • invoke the conditional generative network layer to predict, based on the specified age, the sample image, and output the texture difference map, the texture difference map being used for reflecting the texture difference between the face texture in the sample image and the face texture of the specified age; and invoke the texture synthesis network layer to superimpose the texture difference map with the sample image to obtain the predicted face image.
  • the conditional generative network layer is further used for outputting an attention map; and the attention map is used for reflecting a weight coefficient of the texture difference corresponding to a pixel point in the sample image.
  • the prediction module 2203 is configured to: invoke the texture synthesis network layer to superimpose, based on the attention map, the texture difference map with the sample image to obtain the predicted face image.
  • the generator further includes a shape change network layer.
  • the prediction module 2203 is configured to: invoke the shape change network layer to perform shape change processing on the face in the predicted face image.
  • the conditional generative network layer is further used for outputting a shape change information map; and the shape change information map is used for predicting a face shape change of the face in the sample image relative to the specified age.
  • the prediction module 2203 is configured to: invoke the shape change network layer to perform, based on the shape change information map, shape change processing on the face in the predicted face image.
  • the shape change information map includes displacement information corresponding to the pixel point in a first direction and a second direction, the first direction and the second direction being perpendicular to each other;
  • the prediction module 2203 is configured to: invoke the shape change network layer to perform, based on the displacement information in the first direction and the second direction, bilinear displacement on the pixel point in the predicted face image.
  • the face image processing apparatus provided in the foregoing embodiments is illustrated with an example of division of the foregoing functional modules.
  • in actual application, the foregoing functions may be assigned to and completed by different functional modules as required. That is, an internal structure of the device may be divided into different functional modules to complete all or some of the functions described above.
  • the face image processing apparatus provided in the foregoing embodiment belongs to the same idea as the face image processing method. See the method embodiment for a specific implementation process thereof, and details are not described herein again.
  • for the face image displaying apparatus in the above embodiment, only the division of the functional modules is illustrated. In actual application, the functions may be assigned to different functional modules for completion as required. In other words, an internal structure of the device is divided into different functional modules to complete all or a part of the functions described above.
  • embodiments of the face image displaying apparatus and the face image displaying method provided in the foregoing embodiments belong to the same concept. For the specific implementation process, reference may be made to the method embodiments, and details are not described herein again.
  • for the age change model training apparatus in the above embodiment, only the division of the functional modules is illustrated. In actual application, the functions may be assigned to different functional modules for completion as required. In other words, an internal structure of the device is divided into different functional modules to complete all or a part of the functions described above.
  • the age change model training apparatus and age change model training method embodiments provided in the foregoing embodiments belong to the same concept. For the specific implementation process, reference may be made to the method embodiments, and details are not described herein again.
  • An embodiment of this disclosure further provides a computer device, including a processor and a memory, the memory storing at least one instruction, at least one segment of program, a code set or an instruction set, the at least one instruction, the at least one segment of program, the code set or the instruction set being loaded and executed by the processor to implement the face image processing method, the face image displaying method, or the age change model training method provided in the foregoing method embodiments.
  • the computer device is a terminal.
  • FIG. 24 is a schematic structural diagram of a terminal according to an embodiment of this disclosure.
  • the terminal 2400 includes a processor 2401 and a memory 2402 .
  • the processor 2401 may include one or more processing cores, for example, a 4-core processor or an 8-core processor.
  • the processor 2401 may be implemented by using at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • the processor 2401 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in a wake-up state, also referred to as a central processing unit (CPU).
  • the coprocessor is a low-power processor configured to process data in a standby state.
  • the processor 2401 may be integrated with a graphics processing unit (GPU) that is responsible for rendering and drawing content needing to be displayed by a display screen.
  • the processor 2401 may further include an artificial intelligence (AI) processor.
  • the AI processor is configured to process computing operations related to machine learning.
  • the memory 2402 may include one or more computer-readable storage media that may be non-transitory.
  • the memory 2402 may further include a high-speed random access memory and a nonvolatile memory, for example, one or more disk storage devices or flash storage devices.
  • a non-transitory computer-readable storage medium in the memory 2402 is configured to store at least one instruction, the at least one instruction being configured to be executed by the processor 2401 to implement the method provided in the method embodiments of this disclosure.
  • An embodiment of this disclosure further provides a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium storing at least one piece of program code, the program code, when loaded and executed by a processor of a computer device, implementing the face image processing method, the face image displaying method, or the age change model training method according to the foregoing method embodiments.
  • This disclosure further provides a computer program product or a computer program, the computer program product or the computer program including computer instructions, the computer instructions being stored in a non-transitory computer-readable storage medium.
  • a processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to implement the face image processing method, the face image displaying method, or the age change model training method according to the foregoing method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
US17/969,435 2020-11-02 2022-10-19 Face image processing method and apparatus, face image display method and apparatus, and device Pending US20230042734A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202011203504.5A CN112287852B (zh) 2020-11-02 2020-11-02 人脸图像的处理方法、显示方法、装置及设备
CN202011203504.5 2020-11-02
PCT/CN2021/122656 WO2022089166A1 (zh) 2020-11-02 2021-10-08 人脸图像的处理方法、显示方法、装置及设备

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122656 Continuation WO2022089166A1 (zh) 2020-11-02 2021-10-08 人脸图像的处理方法、显示方法、装置及设备

Publications (1)

Publication Number Publication Date
US20230042734A1 2023-02-09

Family

ID=74352904

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/969,435 Pending US20230042734A1 (en) 2020-11-02 2022-10-19 Face image processing method and apparatus, face image display method and apparatus, and device

Country Status (4)

Country Link
US (1) US20230042734A1 (zh)
JP (1) JP7562927B2 (zh)
CN (1) CN112287852B (zh)
WO (1) WO2022089166A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287852B (zh) * 2020-11-02 2023-11-21 腾讯科技(深圳)有限公司 人脸图像的处理方法、显示方法、装置及设备
CN113392769A (zh) * 2021-06-16 2021-09-14 广州繁星互娱信息科技有限公司 人脸图像的合成方法、装置、电子设备及存储介质
CN114022931A (zh) * 2021-10-29 2022-02-08 北京字节跳动网络技术有限公司 一种图像处理方法、装置、电子设备及存储介质
CN115147508B (zh) * 2022-06-30 2023-09-22 北京百度网讯科技有限公司 服饰生成模型的训练、生成服饰图像的方法和装置
CN116994309B (zh) * 2023-05-06 2024-04-09 浙江大学 一种公平性感知的人脸识别模型剪枝方法
CN116740654B (zh) * 2023-08-14 2023-11-07 安徽博诺思信息科技有限公司 基于图像识别技术的变电站作业防控方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4351001B2 (ja) 2003-08-11 2009-10-28 株式会社インテックシステム研究所 年齢変化画像生成方法及び肌平滑化画像生成方法
CN101556701A (zh) * 2009-05-15 2009-10-14 陕西盛世辉煌智能科技有限公司 基于平均脸和衰老比例图的人脸图像年龄变换方法
CN107408290A (zh) * 2015-07-09 2017-11-28 瑞穗情报综研株式会社 增龄化预测系统、增龄化预测方法以及增龄化预测程序
US10621771B2 (en) 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
CN109308450A (zh) * 2018-08-08 2019-02-05 杰创智能科技股份有限公司 一种基于生成对抗网络的脸部变化预测方法
CN110348352B (zh) * 2019-07-01 2022-04-29 达闼机器人有限公司 一种人脸图像年龄迁移网络的训练方法、终端和存储介质
CN111612872B (zh) * 2020-05-22 2024-04-23 中国科学院自动化研究所 人脸年龄变化图像对抗生成方法及系统
CN112287852B (zh) * 2020-11-02 2023-11-21 腾讯科技(深圳)有限公司 人脸图像的处理方法、显示方法、装置及设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850463A (en) * 1995-06-16 1998-12-15 Seiko Epson Corporation Facial image processing method and facial image processing apparatus
US20140185926A1 (en) * 2010-09-07 2014-07-03 University Of North Carolina At Wilmington Demographic Analysis of Facial Landmarks
US20180276883A1 (en) * 2017-03-21 2018-09-27 Canfield Scientific, Incorporated Methods and apparatuses for age appearance simulation
US20180365874A1 (en) * 2017-06-14 2018-12-20 Adobe Systems Incorporated Neural face editing with intrinsic image disentangling
US20210407153A1 (en) * 2020-06-30 2021-12-30 L'oreal High-resolution controllable face aging with spatially-aware conditional gans
US20220084173A1 (en) * 2020-09-17 2022-03-17 Arizona Board of Regents on behalf on Arizona State University Systems, methods, and apparatuses for implementing fixed-point image-to-image translation using improved generative adversarial networks (gans)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hongyu Yang, Di Huang, Yunhong Wang, and Anil K Jain, "Learning face age progression: A pyramid architecture of GANs," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 31–39. (Year: 2018) *
Wang et al. "Recurrent face aging." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016, pp. 2378–2386 (Year: 2016) *
Zhifei Zhang, Yang Song, and Hairong Qi, "Age progression/regression by conditional adversarial autoencoder," in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5810–5818. (Year: 2017) *

Also Published As

Publication number Publication date
WO2022089166A1 (zh) 2022-05-05
JP2023539620A (ja) 2023-09-15
CN112287852B (zh) 2023-11-21
JP7562927B2 (ja) 2024-10-08
CN112287852A (zh) 2021-01-29

Similar Documents

Publication Publication Date Title
US20230042734A1 (en) Face image processing method and apparatus, face image display method and apparatus, and device
US10657652B2 (en) Image matting using deep learning
JP7490004B2 (ja) 機械学習を用いた画像カラー化
CN111754596B (zh) 编辑模型生成、人脸图像编辑方法、装置、设备及介质
CN107025457B (zh) 一种图像处理方法和装置
US10867416B2 (en) Harmonizing composite images using deep learning
US20200342576A1 (en) Digital Image Completion by Learning Generation and Patch Matching Jointly
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
CN110490896B (zh) 一种视频帧图像处理方法和装置
US20180114363A1 (en) Augmented scanning of 3d models
CN111553267B (zh) 图像处理方法、图像处理模型训练方法及设备
CN111738243B (zh) 人脸图像的选择方法、装置、设备及存储介质
US12125170B2 (en) Image processing method and apparatus, server, and storage medium
CN115565238B (zh) 换脸模型的训练方法、装置、设备、存储介质和程序产品
CN111353546A (zh) 图像处理模型的训练方法、装置、计算机设备和存储介质
US20220292690A1 (en) Data generation method, data generation apparatus, model generation method, model generation apparatus, and program
WO2022089168A1 (zh) 具有三维效果的视频的生成方法、播放方法、装置及设备
CN111080746A (zh) 图像处理方法、装置、电子设备和存储介质
US20240320807A1 (en) Image processing method and apparatus, device, and storage medium
CN112565887B (zh) 一种视频处理方法、装置、终端及存储介质
US12020403B2 (en) Semantically-aware image extrapolation
CN116958306A (zh) 图像合成方法和装置、存储介质及电子设备
CN113709584A (zh) 视频划分方法、装置、服务器、终端及存储介质
KR102656674B1 (ko) 타겟 스타일 및 타겟 색상 정보에 기반하여, 입력 이미지를 변환하는 방법 및 장치
CN116596752B (zh) 脸部图像替换方法、装置、设备及存储介质

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAO, YUN;NI, HUI;ZHU, FEIDA;AND OTHERS;SIGNING DATES FROM 20220928 TO 20221018;REEL/FRAME:061726/0819

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS