WO2023138345A1 - Avatar generation method and system - Google Patents
Avatar generation method and system
- Publication number
- WO2023138345A1 (application PCT/CN2022/143805, CN2022143805W)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
Description
- This application claims priority to the Chinese patent application No. 202210066981.4, filed on January 20, 2022 and entitled "虚拟形象生成方法和系统" (Avatar Generation Method and System), the entire content of which is incorporated herein by reference.
- The embodiments of the present application relate to the field of computer technology, and in particular to an avatar generation method, system, computer device, and computer-readable storage medium.
- With the development of computer technology, video playback and related services have become popular online services. To make video content more engaging, and to serve content creators' two conflicting needs of expressing themselves while protecting themselves, video platforms offer avatars that users can quickly generate for themselves and integrate into content creation. Taking live streaming as an example, an anchor can configure a similar avatar to stand in for their real self.
- The inventor realized that when a user wants to generate an editable avatar, the results produced by existing technology match the individual features of the user's real appearance poorly, leading to a poor effect.
- The purpose of the embodiments of the present application is to provide an avatar generation method, system, computer device, and computer-readable storage medium to solve the above problems.
- An aspect of the embodiments of the present application provides an avatar generation method, the method comprising:
- determining a reference image, the reference image including a reference object;
- determining a head region of the reference object, the head region comprising a plurality of head parts;
- determining classification attributes of each head part;
- determining a plurality of head key points in the head region;
- determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and
- generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- Optionally, determining the plurality of facial feature attributes of the reference object according to the plurality of head key points includes:
- projecting the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and
- determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- Optionally, the plurality of facial feature attributes include a size feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape includes:
- determining the distance between at least some of the head key points according to the position of each head key point on the preset face shape; and
- determining the size features of one or more head parts according to the distance between the at least some head key points.
- Optionally, the plurality of facial feature attributes include an orientation feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape includes:
- determining the slope of the line connecting at least some of the head key points according to the position of each head key point on the preset face shape; and
- determining the orientation features of one or more head parts according to the slope of the line connecting the at least some head key points.
- Optionally, the classification attributes include a color category, and determining the classification attributes of each head part includes:
- segmenting the head region to obtain the respective head parts;
- determining the dominant color of each head part according to preset rules; and
- determining the color category of each head part according to the dominant color of each head part and a preset color classification rule.
- Optionally, generating the target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts includes:
- determining material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part; and
- synthesizing the material elements of the respective head parts to obtain the head of the target avatar.
- Optionally, determining the material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part includes:
- determining a target head part, the target head part being any one of the plurality of head parts;
- determining a plurality of target attributes of the target head part, where the plurality of target attributes include the classification attributes of the target head part and the corresponding facial feature attributes, and each target attribute corresponds to a weight;
- obtaining, from a preset material library, a first target material element having the plurality of target attributes; and
- when the first target material element does not exist in the preset material library, obtaining, from the preset material library and according to the weight of each target attribute, a second target material element that best matches the target head part.
- Optionally, the reference object further includes a body region, and the method further includes:
- segmenting the body region to obtain a plurality of body parts, the plurality of body parts including individual garments and shoes;
- determining classification attributes of each body part;
- determining the attribution of each garment according to the respective positions of the head region, each garment, and the shoes in the reference image; and
- generating the body of the target avatar according to the classification attributes of each body part and the attribution of each garment.
- An aspect of the embodiments of the present application further provides an avatar generation system, including:
- a first determining module configured to determine a reference image, the reference image including a reference object;
- a second determining module configured to determine a head region of the reference object, the head region including a plurality of head parts;
- a third determining module configured to determine classification attributes of each head part;
- a fourth determining module configured to determine a plurality of head key points in the head region;
- a fifth determining module configured to determine a plurality of facial feature attributes of the reference object according to the plurality of head key points; and
- a generating module configured to generate a target avatar according to the plurality of facial feature attributes and the classification attributes of each head part.
- An aspect of the embodiments of the present application further provides a computer device, the computer device including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer-readable instructions, implements the following steps:
- determining a reference image, the reference image including a reference object;
- determining a head region of the reference object, the head region comprising a plurality of head parts;
- determining classification attributes of each head part;
- determining a plurality of head key points in the head region;
- determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and
- generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- An aspect of the embodiments of the present application further provides a computer-readable storage medium storing computer-readable instructions executable by at least one processor, so that the at least one processor, when executing the computer-readable instructions, implements the following steps:
- determining a reference image, the reference image including a reference object;
- determining a head region of the reference object, the head region comprising a plurality of head parts;
- determining classification attributes of each head part;
- determining a plurality of head key points in the head region;
- determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and
- generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- With the avatar generation method, system, device, and computer-readable storage medium provided in the embodiments of the present application, once the plurality of facial feature attributes and the classification attributes of each head part are obtained, a target avatar can be generated whose attributes are identical or highly similar to them; that is, the generated avatar closely matches each attribute (feature) of the user's real appearance, achieving a good effect.
- Fig. 1 schematically shows a diagram of the application environment of an avatar generation method according to an embodiment of the present application;
- Fig. 2 schematically shows a flowchart of an avatar generation method according to Embodiment 1 of the present application;
- Fig. 3 schematically shows a flowchart of the sub-steps of step S204 in Fig. 2;
- Fig. 4 schematically shows a flowchart of the sub-steps of step S208 in Fig. 2;
- Fig. 5 schematically shows a flowchart of the sub-steps of step S402 in Fig. 4;
- Fig. 6 schematically shows another flowchart of the sub-steps of step S402 in Fig. 4;
- Fig. 7 schematically shows a plurality of head key points in the head region of a reference object;
- Fig. 8 schematically shows a flowchart of the sub-steps of step S210 in Fig. 2;
- Fig. 9 schematically shows a flowchart of the sub-steps of step S800 in Fig. 8;
- Fig. 10 schematically shows a flowchart of steps added to the avatar generation method according to Embodiment 1 of the present application;
- Fig. 11 is a flowchart of an application example;
- Fig. 12 schematically shows a plurality of parts of a reference image and their attributes;
- Fig. 13 schematically shows a block diagram of an avatar generation system according to Embodiment 2 of the present application; and
- Fig. 14 schematically shows a schematic diagram of the hardware architecture of a computer device suitable for implementing an avatar generation method according to Embodiment 3 of the present application.
- In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
- It should be noted that descriptions involving "first", "second", and so on in the embodiments of the present application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments can be combined with each other, but only on the basis that those of ordinary skill in the art can realize them; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist, and it is not within the protection scope claimed by the present application.
- The inventor found that, with the development of video editing and live-streaming applications, people are no longer satisfied with presenting their real image in the online world; more and more people want a virtual character or image to represent them. How to create an image that resembles a user or shares the user's features thus becomes a new topic. A few 3D applications and games can already use generative image algorithms to produce avatars fairly similar to the user, but the resulting effect is not controllable, and user acceptance of such images is low.
- In addition, generating an image close to the user's real appearance with such algorithms is limited to reconstructing the user's face, focuses on the face as a whole, and is hard to make highly controllable: while showing the strengths of the user's appearance, it may also amplify the flaws. For example, if the user has a large mole on the face and the mole is carried into the avatar, acceptance will be low. Such methods also lack extensibility: changing the style of the avatar requires collecting a large amount of new data and retraining the model, at great cost.
- In view of this, the present application aims to propose a new avatar generation scheme that matches and integrates materials against the user's image features, which can greatly improve the degree of beautification of the generated image and offers high extensibility.
- An exemplary application environment of the embodiments of the present application is provided below.
- Fig. 1 schematically shows the application environment of an avatar-based video editing method according to an embodiment of the present application.
- In an exemplary embodiment, the electronic device 2 can connect to the server 4 through one or more networks.
- The electronic device 2 may be a device such as a smartphone, a tablet device, or a PC (personal computer).
- The electronic device 2 may have an avatar editor installed for providing avatar editing services.
- The avatar editor may provide a graphical user interface for avatar editing.
- The video editor can be a client, a browser, or the like.
- The server 4 can provide the electronic device 2 with materials for avatar editing, such as resource files and configurations for configuring the avatar.
- The server 4 may provide services through one or more networks.
- A network may include various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, and/or proxy devices.
- A network may include physical links, such as coaxial cable links, twisted-pair cable links, fiber-optic links, and/or combinations thereof.
- A network may include wireless links, such as cellular links, satellite links, and/or Wi-Fi links.
- The following takes the electronic device 2 as the execution subject and introduces the avatar generation scheme through multiple embodiments. It should be noted that this scheme can also be implemented by the server 4, with the server 4 generating the avatar and returning it to the electronic device 2.
- In the description of the present application, it should be understood that the numeric labels before steps do not indicate the order of execution; they are only used for the convenience of describing the present application and for distinguishing the steps, and should not be construed as limiting the present application.
- Embodiment 1
- Fig. 2 schematically shows a flowchart of a method for generating an avatar according to Embodiment 1 of the present application.
- The avatar generation method may include steps S200 to S210, wherein:
- Step S200: determining a reference image, where the reference image includes a reference object.
- The reference image may be a local picture or a picture captured in real time by an image acquisition device (a camera).
- The reference object may be the head, half body, or whole body of a single person, or the heads, half bodies, or whole bodies of multiple persons. It should be noted that when there are multiple persons, multiple avatars are generated correspondingly.
- In an exemplary application, a client carrying the avatar function is installed on the electronic device 2.
- The client is configured with a graphical user interface on which multiple controls are displayed, such as a manual control and an automatic control.
- If the manual control is detected to be triggered, a material interface pops up so that the user can select materials from it for splicing.
- If the automatic control is detected to be triggered, an import control pops up. Based on the import control, the local image library is accessed or the image acquisition device is started to obtain a reference image. Subsequent automatic generation of the avatar is performed based on this reference image.
- Step S202: determining the head region of the reference object, where the head region includes a plurality of head parts.
- In an exemplary application, the head region (face position) in the reference image may be obtained with a face detection method.
- The face detection method can be a method based on geometric features, a template-based method (such as correlation matching, the eigenface method, linear discriminant analysis, singular value decomposition, a neural network method, or dynamic link matching), or a model-based method (such as a hidden Markov model).
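As a concrete illustration of this step (not part of the claimed method), the face position could be prototyped with an off-the-shelf detector. The following minimal Python sketch assumes OpenCV and its bundled Haar cascade; the parameter values are common defaults rather than values from the patent.

```python
# A minimal sketch of head-region (face) detection for step S202, using an
# off-the-shelf OpenCV Haar-cascade detector as a stand-in for the face
# detection methods listed above.
import cv2

def detect_head_regions(image_path: str):
    """Return a list of (x, y, w, h) face rectangles found in the image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # scaleFactor and minNeighbors are typical defaults, not patent values.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```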
- The plurality of head parts may include: hair, face, eyes, eyebrows, mouth, ears, chin, accessories (glasses, hats), and the like.
- Step S204: determining the classification attributes of each head part.
- It is the differences in shape, size, and structure of these head parts that make every human face in the world distinct; they constitute important features of the head region. These important features can be used to generate an avatar that matches each attribute (feature) of the user's real appearance.
- The classification attributes of each part may include a shape type, a color category, and the like.
- In an exemplary application, an image classification method (such as a classification algorithm based on a convolutional neural network) can be used to classify the shape type of each head part, such as face shape (square face, round face, pointed face, etc.), eyebrow shape, ear type, hair length, hairstyle, bangs type, bangs length, glasses category, hat category, and so on.
- For example, eye types may include the following shape types: almond eyes, phoenix eyes, hanging eyes, elongated eyes, narrowed eyes, round eyes, and the like. The shape type of each eye of the reference object is determined through an image classification method, such as phoenix eyes.
- As another example, eyebrow types may include the following shape types: straight eyebrows, high-raised eyebrows, willow-leaf eyebrows, upward-raised eyebrows, and arched eyebrows. The shape type of each eyebrow of the reference object is determined through an image classification method, such as high-raised eyebrows.
- As an optional embodiment, the classification attributes include a color category.
- As shown in Fig. 3, step S204 may determine the dominant color of each head part through the following steps: step S300, segmenting the head region to obtain the respective head parts; step S302, determining the dominant color of each head part according to preset rules; and step S304, determining the color category of each head part according to the dominant color of each head part and a preset color classification rule.
- In an exemplary application, the head region is segmented by a facial-feature segmentation algorithm (such as a recognition algorithm based on a convolutional neural network); that is, the regions of parts such as facial skin, pupils, lips, eyebrows, and hair are segmented out precisely. Then a clustering algorithm or another color-statistics method is used to find the dominant color of each part region, and color mapping or classification is used to map the part's dominant color to a specific color category. By obtaining the dominant color of each head part, further details of the reference object's real face can be obtained. It should be noted that, in this embodiment, the segmentation in this step is used only for the analysis of color attributes.
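As an illustration of steps S302 to S304, a minimal sketch below clusters a part's segmented pixels with k-means and maps the center of the largest cluster to the nearest preset category; the palette is an assumed example, not one defined by the patent.

```python
# A minimal sketch: dominant color via k-means over a part's segmented pixels,
# then nearest-neighbor mapping to a preset color category.
import numpy as np
from sklearn.cluster import KMeans

COLOR_CATEGORIES = {  # assumed illustrative palette (RGB)
    "black": (20, 20, 20), "brown": (120, 80, 50), "red": (200, 40, 40),
    "pink": (240, 150, 170), "gray": (128, 128, 128), "white": (245, 245, 245),
}

def dominant_color(pixels_rgb: np.ndarray, k: int = 3) -> np.ndarray:
    """pixels_rgb: (N, 3) array of the part's pixels from the segmentation mask."""
    km = KMeans(n_clusters=k, n_init=10).fit(pixels_rgb)
    counts = np.bincount(km.labels_)
    return km.cluster_centers_[counts.argmax()]  # center of the largest cluster

def color_category(pixels_rgb: np.ndarray) -> str:
    main = dominant_color(pixels_rgb)
    names = list(COLOR_CATEGORIES)
    dists = [np.linalg.norm(main - np.array(COLOR_CATEGORIES[n])) for n in names]
    return names[int(np.argmin(dists))]
```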
- Step S206: determining multiple head key points in the head region.
- Head key points (also known as face key points) include the key points of the human face in the eyebrow, eye, nose, mouth, and facial-contour regions. Face key points are important feature points of the various parts of the face, usually contour points and corner points.
- In an exemplary application, a plurality of key points of the facial contour (e.g., 68, 106, or 240 points) can be obtained through a face key point detection algorithm (such as a detection algorithm based on deep learning).
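For illustration, the sketch below retrieves landmarks with dlib's widely used 68-point model; this is an assumed substitute for the detection algorithms mentioned above, and the model file name refers to the predictor distributed with dlib's examples. Detectors with 106 or 240 points would be used the same way.

```python
# A minimal sketch of step S206 using dlib's classic 68-point landmark model.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def head_keypoints(gray_image):
    """Return a list of (x, y) landmark tuples for the first detected face."""
    faces = detector(gray_image, 1)  # upsample once to catch small faces
    if not faces:
        return []
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```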
- Step S208: determining multiple facial feature attributes of the reference object according to the multiple head key points.
- The inventor found that, owing to the differences between faces, the classification attributes of each head part alone (such as shape type) still cannot effectively approximate the real face. For example, even if the eyebrow type is known to be high-raised, the degree attributes of the eyebrows remain unknown, such as whether they are large, medium, or small, or oriented up, middle, or down.
- Therefore, the facial feature attributes (degree attributes) of at least some head parts are analyzed through the head key points. It should be noted that, unlike using key points only to straighten the face, in this embodiment the head key points are used to obtain facial feature attributes. The facial feature attributes analyzed from the head key points reveal further details of the real face.
- The reference object in the reference image is not necessarily a frontal face, which hinders the extraction of facial feature attributes.
- As an optional embodiment, as shown in Fig. 4, step S208 may include: step S400, projecting the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and step S402, determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- The obtained head key points are projected onto the canonical preset face shape through an affine transformation (rotation, translation, scaling). Based on the positions of the head key points on the preset face shape, some facial feature attributes can be analyzed accurately.
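Step S400 can be prototyped with a closed-form similarity fit (rotation, translation, uniform scale), i.e., the Umeyama/Procrustes solution. The sketch below is a minimal version assuming the canonical template coordinates of the preset face shape are supplied by the caller.

```python
# A minimal sketch of step S400: fit the similarity transform that best maps
# the detected key points onto a canonical frontal template, then apply it.
import numpy as np

def project_to_preset_face(points: np.ndarray, template: np.ndarray) -> np.ndarray:
    """points, template: (N, 2) arrays of corresponding key points."""
    mu_p, mu_t = points.mean(axis=0), template.mean(axis=0)
    p, t = points - mu_p, template - mu_t
    # Rotation from the SVD of the cross-covariance matrix (Umeyama's method).
    u, s, vt = np.linalg.svd(t.T @ p)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    r = u @ np.diag([1.0, d]) @ vt
    scale = (s * [1.0, d]).sum() / (p ** 2).sum()
    return (scale * (points - mu_p) @ r.T) + mu_t
```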
- The plurality of facial feature attributes may include degree attributes, such as the size of a part itself and the orientation of a part. For ease of understanding, several optional embodiments are provided below to introduce how some facial feature attributes are obtained.
- As an optional embodiment, the plurality of facial feature attributes include a size feature.
- As shown in Fig. 5, step S402 may include: step S500, determining the distance between at least some head key points according to the position of each head key point on the preset face shape; and step S502, determining the size features of one or more head parts according to the distance between the at least some head key points.
- In this optional embodiment, the size, thickness, and the like of the reference object's eyebrows, eyes, mouth, and so on can be further analyzed accurately.
- As an optional embodiment, the plurality of facial feature attributes include an orientation feature.
- As shown in Fig. 6, step S402 may include: step S600, determining the slope of the line connecting at least some head key points according to the position of each head key point on the preset face shape; and step S602, determining the orientation features of one or more head parts according to the slope of the line connecting the at least some head key points.
- In this optional embodiment, the orientation of the reference object's eyebrows, eye tails, mouth corners, and so on can be further analyzed accurately.
- In an exemplary application, Fig. 7 shows 106 head key points in the head region of a reference object.
- (1) Eye size: by calculating the distance between head key points 75 and 76, or 72 and 73, the eye size of the person as presented in the image can be obtained. Since the head key points have been mapped onto a standard-size face (the preset face shape), the distance between head key points can directly represent the size features of each facial part, such as mouth size and eyebrow thickness. Alternatively, calculate the distance between the head key points (75 and 76, or 72 and 73) in the actual face image, then divide it by the sum of the face width and the vertical height from the midpoint of the lips to the brow, and use the quotient as the eye-size value.
- (2) Face width: the maximum of the distances between corresponding head key points on the two sides of the face can be taken, such as head key points 0 and 32, or 1 and 31.
- (3) Vertical height from the midpoint of the lips to the brow: the distance from the midpoint of head key points 84 and 90 to the midpoint of head key points 33 and 42 can be used. The sum of these two distances (the face width plus this vertical height) fluctuates the least across different faces.
- (4) Part orientation: the slope of the line connecting head key points is used to determine the orientation features of a part, such as eyebrow orientation, eye-tail orientation, and mouth-corner orientation.
- It should be noted that the above enumerates the acquisition of only some facial feature attributes.
- In addition, the degrees of size and orientation can be divided into several grades, and several specific thresholds can be formulated for classification with reference to the definitions of actual product requirements.
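Putting the measurements above together, a minimal sketch under the 106-point layout of Fig. 7 might look as follows. The index pairs for the eye, face width, and lip-to-brow height follow the text; the eyebrow indices and the grading thresholds are illustrative assumptions standing in for product-defined values.

```python
# A minimal sketch of steps S500/S502 and S600/S602 on the 106-point layout.
import numpy as np

def dist(pts, i, j):
    return float(np.linalg.norm(np.array(pts[i]) - np.array(pts[j])))

def facial_features(pts):
    """pts: sequence of 106 (x, y) key points already projected onto the preset face."""
    eye_size = dist(pts, 75, 76)                    # or points 72 and 73
    face_width = max(dist(pts, 0, 32), dist(pts, 1, 31))
    lip_mid = (np.array(pts[84]) + np.array(pts[90])) / 2
    brow_mid = (np.array(pts[33]) + np.array(pts[42])) / 2
    lip_to_brow = float(np.linalg.norm(lip_mid - brow_mid))
    normalized_eye = eye_size / (face_width + lip_to_brow)
    # Eyebrow orientation from the slope of a line through two brow points
    # (indices 35 and 38 are assumed for illustration).
    dx, dy = np.array(pts[38]) - np.array(pts[35])
    slope = float(dy / dx) if dx else float("inf")
    # Assumed thresholds for a 3-grade size classification.
    grade = "small" if normalized_eye < 0.05 else "medium" if normalized_eye < 0.08 else "large"
    return {"eye_size": normalized_eye, "eye_grade": grade, "brow_slope": slope}
```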
- Step S210: generating a target avatar according to the plurality of facial feature attributes and the classification attributes of each head part.
- After the plurality of facial feature attributes and the classification attributes of each head part are obtained, an avatar can be generated that closely matches each attribute (feature) of the user's real appearance; that is, an avatar with the same attributes as the reference object (the user's real image) is obtained. Moreover, the avatar can be modified and extended efficiently by adjusting some of the attributes, giving high controllability.
- As an optional embodiment, as shown in Fig. 8, step S210 may include: step S800, determining the material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part; and step S802, synthesizing the material elements of the respective head parts to obtain the head of the target avatar.
- In this optional embodiment, material elements conforming to each attribute of each head part are found, and the found material elements are spliced together to obtain the head of the target avatar.
- The target avatar corresponds closely to the reference object: the attributes (features) of each of its head parts are the same as, or highly similar to, those of the reference object.
- In addition, when the style of the avatar is to be changed, the attributes of one or more head parts can be adjusted so that new material elements are matched and a new avatar is spliced together.
- In other embodiments, a material element can also be selected directly from the material library; the selected material element can replace a material element in the avatar or be added to it, so as to obtain an updated avatar.
- It can be seen that this optional embodiment greatly improves the degree of beautification of the generated image, giving the avatar high controllability, high extensibility, and low computational cost, and avoids the following problem of the prior art: changing the style of the avatar requires collecting a large amount of new data and retraining the model, at great cost.
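Step S802 can be illustrated as simple alpha compositing of per-part layers. The sketch below assumes the material elements are transparent PNG files pre-aligned on a shared canvas; the file-name mapping and layer order are illustrative assumptions.

```python
# A minimal sketch of step S802: compositing per-part material elements into
# the avatar head, back-to-front.
from PIL import Image

LAYER_ORDER = ["face", "ears", "eyes", "eyebrows", "mouth", "hair", "glasses"]

def synthesize_head(element_paths: dict, size=(512, 512)) -> Image.Image:
    """element_paths: mapping of part name -> path of the chosen material PNG."""
    canvas = Image.new("RGBA", size, (0, 0, 0, 0))
    for part in LAYER_ORDER:
        path = element_paths.get(part)
        if path is None:  # a part may be absent, e.g., no glasses
            continue
        layer = Image.open(path).convert("RGBA").resize(size)
        canvas = Image.alpha_composite(canvas, layer)
    return canvas
```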
- In step S800, the material elements of each head part are to be determined. However, these head parts vary greatly in shape, size, and structure, so it is rather difficult to find material elements that exactly match all attributes of each head part.
- For example, the multiple attributes of the eyebrows (classification attributes plus facial feature attributes) include eyebrow shape, eyebrow orientation, eyebrow thickness, and eyebrow color; matching a material element that satisfies all of these attributes is difficult.
- In view of this, as shown in Fig. 9, step S800 can be implemented through the following steps: step S900, determining a target head part, the target head part being any one of the plurality of head parts; step S902, determining a plurality of target attributes of the target head part, where the plurality of target attributes include the classification attributes of the target head part and the corresponding facial feature attributes, and each target attribute corresponds to a weight; step S904, obtaining, from a preset material library, a first target material element having the plurality of target attributes; and step S906, when the first target material element does not exist in the preset material library, obtaining, from the preset material library and according to the weight of each target attribute, a second target material element that best matches the target head part.
- Through the above flow, the optimal material element for each part can be matched efficiently.
- The weight of each target attribute can be the same, or different weights can be assigned according to the importance of each attribute.
- Taking eyebrows as an example: if statistics on actual effects determine that the four eyebrow attributes influence the user's image to different degrees, such as eyebrow shape > eyebrow thickness > eyebrow orientation > eyebrow color, then the weights may be set as eyebrow shape 0.4, eyebrow thickness 0.3, eyebrow orientation 0.2, and eyebrow color 0.1.
- For example, suppose the preset material library contains eyebrow material elements A and B. Element A matches the reference object's eyebrows in eyebrow shape, eyebrow thickness, and eyebrow orientation, while element B matches in eyebrow thickness, eyebrow orientation, and eyebrow color. Although both elements match three attributes, element A scores 0.9 in total while element B scores only 0.6, so eyebrow material element A is selected as the more suitable material element.
- In addition, within the same attribute there are degree categories and other similar categories. For example, the orientation classification can be divided into up, middle, and down, and the color category can be divided into black, brown, blue, green, pink, red, purple, orange, gray, and white.
- In such cases, if the preset material library has no material element identical to the person's specific feature, a similar material element can be used, analyzed comprehensively in combination with the other attributes. For example, "up" can be substituted with "middle", and red can be substituted with pink or orange, but not with black.
- The preset material library includes material elements for the head and for each part of the body. It should be noted that the classification of the material attribute tags of each material element is determined based on a predetermined classification of real-person attributes: each material accessory is classified and tagged according to the categories of person features, and the specific categories correspond one-to-one with the features in the feature extraction module. In the preset material library, the attributes of each part of the reference object can be matched against the material attribute tags of each material element to find the material element with the highest matching degree for synthesizing the avatar.
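The matching flow of steps S900 to S906 can be sketched as an exact-match lookup followed by a weighted-score fallback. The library entries below mirror the eyebrow example above; the tag names and data structures are assumptions for illustration.

```python
# A minimal sketch of steps S904/S906: exact match first, weighted fallback second.
EYEBROW_WEIGHTS = {"shape": 0.4, "thickness": 0.3, "orientation": 0.2, "color": 0.1}

def match_material(target_attrs: dict, library: list, weights: dict) -> dict:
    # Step S904: a first target element matching every attribute.
    for element in library:
        if all(element["tags"].get(k) == v for k, v in target_attrs.items()):
            return element
    # Step S906: otherwise, the element with the highest weighted score.
    def score(element):
        return sum(w for k, w in weights.items()
                   if element["tags"].get(k) == target_attrs.get(k))
    return max(library, key=score)

library = [
    {"id": "A", "tags": {"shape": "high-raised", "thickness": "thick",
                         "orientation": "up", "color": "brown"}},
    {"id": "B", "tags": {"shape": "straight", "thickness": "thick",
                         "orientation": "up", "color": "black"}},
]
target = {"shape": "high-raised", "thickness": "thick",
          "orientation": "up", "color": "black"}
print(match_material(target, library, EYEBROW_WEIGHTS)["id"])  # -> "A" (0.9 vs 0.6)
```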
- The foregoing mainly introduced the generation of the head of the target avatar.
- In some embodiments, after the head of the target avatar is generated, body clothing and the like can be matched automatically according to the user's gender.
- In some embodiments, after the head of the target avatar is generated, the user can select body props for splicing.
- Of course, when the reference object includes a half body or a whole body, the body of the target avatar can also be generated from the body of the reference object.
- As an optional embodiment, as shown in Fig. 10, the method may further include: step S1000, segmenting the body region to obtain a plurality of body parts, the plurality of body parts including individual garments and shoes; step S1002, determining the classification attributes of each body part; step S1004, determining the attribution of each garment according to the respective positions of the head region, each garment, and the shoes in the reference image; and step S1006, generating the body of the target avatar according to the classification attributes of each body part and the attribution of each garment.
- In this optional embodiment, the fit between the avatar and the real person can be increased further.
- In an exemplary application, a clothing segmentation algorithm segments the precise regions and classification attributes of each garment and shoe in the reference image (clothing: short-sleeved tops, long-sleeved jackets, dresses, skirts, trousers, etc.; shoes: sports shoes, leather shoes, boots, etc.). The position of the head region and the positions of the garments and shoes are then used to infer the attribution of each garment. Taking the face discussed above as the target person, the colors of the corresponding top, bottom, and shoe regions are calculated. Finally, identical or similar material elements are found in the preset material library for splicing. In this exemplary application, the dominant color, the weights of the attributes, and so on can likewise be used to match material elements.
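One plausible way to prototype the attribution inference of step S1004 is a geometric rule over bounding boxes: a garment is attributed to the head whose horizontal span it overlaps and below which it sits. The patent only states that positions are used, so this specific rule is an assumption.

```python
# A minimal sketch of step S1004 with boxes given as (x, y, w, h) tuples.
def overlaps_horizontally(a, b):
    return a[0] < b[0] + b[2] and b[0] < a[0] + a[2]

def attribute_garments(head_box, garment_boxes):
    """Return the garments lying below the head and overlapping it horizontally."""
    head_bottom = head_box[1] + head_box[3]
    return [g for g in garment_boxes
            if overlaps_horizontally(head_box, g) and g[1] >= head_bottom]
```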
- For ease of understanding, an application example is provided below with reference to Fig. 11 and Fig. 12. In this application example, an avatar with matching attributes is generated based on a real person image (in the reference image). Features are extracted from the person's face and clothing, appropriate material elements are selected automatically from the material library according to the attributes of each part, and a complete avatar is synthesized from the selected material elements. The main process modules involved are: (1) a feature extraction module; (2) a material tagging module; and (3) a material matching module.
- (1) Feature extraction module: each attribute (feature) possessed by the person in the reference image is extracted.
- The attributes here include, but are not limited to, age, gender, face shape, facial skin color, eyebrow shape, eyebrow orientation, eyebrow thickness, eyebrow color, mouth size, mouth-corner orientation, lip color, eye size, eye-tail orientation, pupil color, ear type, hair length, hairstyle, bangs type, bangs length, glasses category, hat category, top type, top color, bottom type, bottom color, shoe type, and so on.
- The avatar can be divided into: hair, face, eyes, eyebrows, mouth, ears, accessories (glasses, hats, etc.), top, bottom, shoes, and so on.
- (2) Material tagging module: the material elements in the preset material library are tagged with category labels of the corresponding classifications.
- (3) Material matching module: the person's attributes are matched against the material tags to find the material element of each part with the highest matching degree.
- Finally, the found material elements of the parts are spliced together to obtain an avatar that can have the same attributes (features) as the user's real image.
- Embodiment 2
- Fig. 13 schematically shows a block diagram of an avatar generation system according to Embodiment 2 of the present application. The avatar generation system can be divided into one or more program modules, and the one or more program modules are stored in a storage medium and executed by one or more processors to complete the embodiments of the present application.
- The program modules referred to in the embodiments of the present application are a series of computer-readable instruction segments capable of accomplishing specific functions. The following description specifically introduces the functions of the program modules of this embodiment.
- As shown in Fig. 13, the avatar generation system 1300 may include a first determining module 1310, a second determining module 1320, a third determining module 1330, a fourth determining module 1340, a fifth determining module 1350, and a generating module 1360, wherein:
- the first determining module 1310 is configured to determine a reference image, the reference image including a reference object;
- the second determining module 1320 is configured to determine a head region of the reference object, the head region including a plurality of head parts;
- the third determining module 1330 is configured to determine the classification attributes of each head part;
- the fourth determining module 1340 is configured to determine a plurality of head key points in the head region;
- the fifth determining module 1350 is configured to determine a plurality of facial feature attributes of the reference object according to the plurality of head key points; and
- the generating module 1360 is configured to generate the target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- In an optional embodiment, the fifth determining module 1350 is further configured to:
- project the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and
- determine the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- In an optional embodiment, the plurality of facial feature attributes include a size feature, and the fifth determining module 1350 is further configured to:
- determine the distance between at least some of the head key points according to the position of each head key point on the preset face shape; and
- determine the size features of one or more head parts according to the distance between the at least some head key points.
- In an optional embodiment, the plurality of facial feature attributes include an orientation feature, and the fifth determining module 1350 is further configured to:
- determine the slope of the line connecting at least some of the head key points according to the position of each head key point on the preset face shape; and
- determine the orientation features of one or more head parts according to the slope of the line connecting the at least some head key points.
- In an optional embodiment, the classification attributes include a color category, and the third determining module 1330 is further configured to:
- segment the head region to obtain the respective head parts;
- determine the dominant color of each head part according to preset rules; and
- determine the color category of each head part according to the dominant color of each head part and a preset color classification rule.
- In an optional embodiment, the generating module 1360 is further configured to:
- determine the material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part; and
- synthesize the material elements of the respective head parts to obtain the head of the target avatar.
- In an optional embodiment, the generating module 1360 is further configured to:
- determine a target head part, the target head part being any one of the plurality of head parts;
- determine a plurality of target attributes of the target head part, where the plurality of target attributes include the classification attributes of the target head part and the corresponding facial feature attributes, and each target attribute corresponds to a weight;
- obtain, from a preset material library, a first target material element having the plurality of target attributes; and
- when the first target material element does not exist in the preset material library, obtain, from the preset material library and according to the weight of each target attribute, a second target material element that best matches the target head part.
- In an optional embodiment, the reference object further includes a body region, and the system further includes a body generation module (not shown) configured to:
- segment the body region to obtain a plurality of body parts, the plurality of body parts including individual garments and shoes;
- determine the classification attributes of each body part;
- determine the attribution of each garment according to the respective positions of the head region, each garment, and the shoes in the reference image; and
- generate the body of the target avatar according to the classification attributes of each body part and the attribution of each garment.
- Embodiment 3
- Fig. 14 schematically shows the hardware architecture of a computer device 10000 suitable for implementing an avatar generation method according to Embodiment 3 of the present application. The computer device 10000 may be the electronic device 2 or a component thereof, or the server 4 or a component thereof.
- In this embodiment, the computer device 10000 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. For example, it may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of multiple servers).
- As shown in Fig. 14, the computer device 10000 at least includes, but is not limited to, a memory 10010, a processor 10020, and a network interface 10030, which can communicate with one another through a system bus. Of these:
- The memory 10010 includes at least one type of computer-readable storage medium; the readable storage medium includes flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
- In some embodiments, the memory 10010 may be an internal storage module of the computer device 10000, such as a hard disk or memory of the computer device 10000.
- In other embodiments, the memory 10010 may also be an external storage device of the computer device 10000, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device 10000.
- Of course, the memory 10010 may also include both an internal storage module of the computer device 10000 and its external storage device.
- In this embodiment, the memory 10010 is generally used to store the operating system and various application software installed on the computer device 10000, such as the program code of the avatar generation method.
- In addition, the memory 10010 can also be used to temporarily store various types of data that have been output or are to be output.
- In some embodiments, the processor 10020 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
- The processor 10020 is generally used to control the overall operation of the computer device 10000, such as performing control and processing related to data interaction or communication with the computer device 10000.
- In this embodiment, the processor 10020 is configured to run the program code stored in the memory 10010 or to process data.
- The network interface 10030 may include a wireless network interface or a wired network interface, and is generally used to establish a communication link between the computer device 10000 and other computer devices.
- For example, the network interface 10030 is used to connect the computer device 10000 with an external terminal through a network and to establish a data transmission channel and a communication link between the computer device 10000 and the external terminal.
- The network may be a wireless or wired network such as an enterprise intranet, the Internet, the Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
- It should be pointed out that Fig. 14 only shows a computer device having components 10010-10030, but it should be understood that implementing all of the illustrated components is not required; more or fewer components may be implemented instead.
- In this embodiment, the avatar generation method stored in the memory 10010 can also be divided into one or more program modules and executed by one or more processors (the processor 10020 in this embodiment) to complete the embodiments of the present application.
- Embodiment 4
- The embodiments of the present application further provide a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of the avatar generation method in the embodiments are implemented.
- In this embodiment, the computer-readable storage medium includes flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
- In some embodiments, the computer-readable storage medium may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device.
- In other embodiments, the computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device.
- Of course, the computer-readable storage medium may also include both the internal storage unit of the computer device and its external storage device.
- In this embodiment, the computer-readable storage medium is generally used to store the operating system and various application software installed on the computer device, such as the program code of the avatar generation method in the embodiments.
- In addition, the computer-readable storage medium can also be used to temporarily store various types of data that have been output or are to be output.
- Obviously, those skilled in the art should understand that each module or step of the above embodiments of the present application can be implemented by a general-purpose computing device; the modules or steps can be concentrated on a single computing device or distributed across a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given here, or the modules or steps may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module.
- In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
- The above are only exemplary embodiments of the present application and do not thereby limit its patent scope. Any variation or substitution that those skilled in the art can readily conceive within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Abstract
The embodiments of the present application provide an avatar generation method, the method comprising: determining a reference image, the reference image including a reference object; determining a head region of the reference object, the head region comprising a plurality of head parts; determining classification attributes of each head part; determining a plurality of head key points in the head region; determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts. The embodiments of the present application also provide an avatar generation system, a computer device, and a computer-readable storage medium. The avatar generated by the technical solution provided in the embodiments of the present application closely matches each feature of the user's real appearance, achieving a good effect.
Claims (20)
- 1. An avatar generation method, the method comprising: determining a reference image, the reference image including a reference object; determining a head region of the reference object, the head region comprising a plurality of head parts; determining classification attributes of each head part; determining a plurality of head key points in the head region; determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- 2. The avatar generation method according to claim 1, wherein determining the plurality of facial feature attributes of the reference object according to the plurality of head key points comprises: projecting the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- 3. The avatar generation method according to claim 2, wherein the plurality of facial feature attributes include a size feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape comprises: determining the distance between at least some of the head key points according to the position of each head key point on the preset face shape; and determining the size features of one or more head parts according to the distance between the at least some head key points.
- 4. The avatar generation method according to claim 2, wherein the plurality of facial feature attributes include an orientation feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape comprises: determining the slope of the line connecting at least some of the head key points according to the position of each head key point on the preset face shape; and determining the orientation features of one or more head parts according to the slope of the line connecting the at least some head key points.
- 5. The avatar generation method according to any one of claims 1 to 4, wherein the classification attributes include a color category, and determining the classification attributes of each head part comprises: segmenting the head region to obtain the respective head parts; determining the dominant color of each head part according to preset rules; and determining the color category of each head part according to the dominant color of each head part and a preset color classification rule.
- 6. The avatar generation method according to any one of claims 1 to 4, wherein generating the target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts comprises: determining material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part; and synthesizing the material elements of the respective head parts to obtain the head of the target avatar.
- 7. The avatar generation method according to claim 6, wherein determining the material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part comprises: determining a target head part, the target head part being any one of the plurality of head parts; determining a plurality of target attributes of the target head part, wherein the plurality of target attributes include the classification attributes of the target head part and the corresponding facial feature attributes, and each target attribute corresponds to a weight; obtaining, from a preset material library, a first target material element having the plurality of target attributes; and when the first target material element does not exist in the preset material library, obtaining, from the preset material library and according to the weight of each target attribute, a second target material element that best matches the target head part.
- 8. The avatar generation method according to any one of claims 1 to 4, wherein the reference object further includes a body region, and the method further comprises: segmenting the body region to obtain a plurality of body parts, the plurality of body parts including individual garments and shoes; determining classification attributes of each body part; determining the attribution of each garment according to the respective positions of the head region, each garment, and the shoes in the reference image; and generating the body of the target avatar according to the classification attributes of each body part and the attribution of each garment.
- 9. An avatar generation system, comprising: a first determining module configured to determine a reference image, the reference image including a reference object; a second determining module configured to determine a head region of the reference object, the head region including a plurality of head parts; a third determining module configured to determine classification attributes of each head part; a fourth determining module configured to determine a plurality of head key points in the head region; a fifth determining module configured to determine a plurality of facial feature attributes of the reference object according to the plurality of head key points; and a generating module configured to generate a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- 10. A computer device, the computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps: determining a reference image, the reference image including a reference object; determining a head region of the reference object, the head region comprising a plurality of head parts; determining classification attributes of each head part; determining a plurality of head key points in the head region; determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- 11. The computer device according to claim 10, wherein determining the plurality of facial feature attributes of the reference object according to the plurality of head key points comprises: projecting the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- 12. The computer device according to claim 11, wherein the plurality of facial feature attributes include a size feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape comprises: determining the distance between at least some of the head key points according to the position of each head key point on the preset face shape; and determining the size features of one or more head parts according to the distance between the at least some head key points.
- 13. The computer device according to claim 11, wherein the plurality of facial feature attributes include an orientation feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape comprises: determining the slope of the line connecting at least some of the head key points according to the position of each head key point on the preset face shape; and determining the orientation features of one or more head parts according to the slope of the line connecting the at least some head key points.
- 14. The computer device according to any one of claims 10 to 13, wherein the classification attributes include a color category, and determining the classification attributes of each head part comprises: segmenting the head region to obtain the respective head parts; determining the dominant color of each head part according to preset rules; and determining the color category of each head part according to the dominant color of each head part and a preset color classification rule.
- 15. The computer device according to any one of claims 10 to 13, wherein generating the target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts comprises: determining material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part; and synthesizing the material elements of the respective head parts to obtain the head of the target avatar.
- 16. The computer device according to claim 15, wherein determining the material elements of each head part according to the plurality of facial feature attributes and the classification attributes of each head part comprises: determining a target head part, the target head part being any one of the plurality of head parts; determining a plurality of target attributes of the target head part, wherein the plurality of target attributes include the classification attributes of the target head part and the corresponding facial feature attributes, and each target attribute corresponds to a weight; obtaining, from a preset material library, a first target material element having the plurality of target attributes; and when the first target material element does not exist in the preset material library, obtaining, from the preset material library and according to the weight of each target attribute, a second target material element that best matches the target head part.
- 17. The computer device according to any one of claims 10 to 13, wherein the reference object further includes a body region, and the processor, when executing the computer-readable instructions, further implements the following steps: segmenting the body region to obtain a plurality of body parts, the plurality of body parts including individual garments and shoes; determining classification attributes of each body part; determining the attribution of each garment according to the respective positions of the head region, each garment, and the shoes in the reference image; and generating the body of the target avatar according to the classification attributes of each body part and the attribution of each garment.
- 18. A computer-readable storage medium, the computer-readable storage medium storing computer-readable instructions executable by at least one processor, so that the at least one processor, when executing the computer-readable instructions, implements the following steps: determining a reference image, the reference image including a reference object; determining a head region of the reference object, the head region comprising a plurality of head parts; determining classification attributes of each head part; determining a plurality of head key points in the head region; determining a plurality of facial feature attributes of the reference object according to the plurality of head key points; and generating a target avatar according to the plurality of facial feature attributes and the classification attributes of the respective head parts.
- 19. The computer-readable storage medium according to claim 18, wherein determining the plurality of facial feature attributes of the reference object according to the plurality of head key points comprises: projecting the plurality of head key points onto a preset face shape, the preset face shape being a frontal face shape with a preset shape; and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape.
- 20. The computer-readable storage medium according to claim 19, wherein the plurality of facial feature attributes include a size feature, and determining the plurality of facial feature attributes according to the position of each head key point on the preset face shape comprises: determining the distance between at least some of the head key points according to the position of each head key point on the preset face shape; and determining the size features of one or more head parts according to the distance between the at least some head key points.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210066981.4 | 2022-01-20 | |
CN202210066981.4A (published as CN114419202A) | 2022-01-20 | 2022-01-20 | 虚拟形象生成方法和系统 (Avatar generation method and system)
Publications (1)
Publication Number | Publication Date
---|---
WO2023138345A1 (zh) | 2023-07-27
Family
ID=81275646
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2022/143805 (published as WO2023138345A1, 2023-07-27) | 虚拟形象生成方法和系统 (Avatar generation method and system) | 2022-01-20 | 2022-12-30
Country Status (2)
Country | Link
---|---
CN | CN114419202A (zh)
WO | WO2023138345A1 (zh)
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN114419202A (zh) | 2022-01-20 | 2022-04-29 | 上海幻电信息科技有限公司 | 虚拟形象生成方法和系统 (Avatar generation method and system)
CN114913058B (zh) | 2022-05-27 | 2024-10-01 | 北京字跳网络技术有限公司 | 显示对象的确定方法、装置、电子设备及存储介质 (Method and apparatus for determining a display object, electronic device, and storage medium)
CN118001741A (zh) | 2024-04-09 | 2024-05-10 | 湖南速子文化科技有限公司 | 一种大量虚拟人物展示方法、系统、设备及介质 (Method, system, device, and medium for displaying a large number of virtual characters)
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN108510437A (zh) * | 2018-04-04 | 2018-09-07 | 科大讯飞股份有限公司 | 一种虚拟形象生成方法、装置、设备以及可读存储介质 (Avatar generation method, apparatus, device, and readable storage medium)
CN109949207A (zh) * | 2019-01-31 | 2019-06-28 | 深圳市云之梦科技有限公司 | 虚拟对象合成方法、装置、计算机设备和存储介质 (Virtual object synthesis method, apparatus, computer device, and storage medium)
CN110782515A (zh) * | 2019-10-31 | 2020-02-11 | 北京字节跳动网络技术有限公司 | 虚拟形象的生成方法、装置、电子设备及存储介质 (Avatar generation method, apparatus, electronic device, and storage medium)
CN112766027A (zh) * | 2019-11-05 | 2021-05-07 | 广州虎牙科技有限公司 | 图像处理方法、装置、设备及存储介质 (Image processing method, apparatus, device, and storage medium)
CN114419202A (zh) * | 2022-01-20 | 2022-04-29 | 上海幻电信息科技有限公司 | 虚拟形象生成方法和系统 (Avatar generation method and system)
Application events:
- 2022-01-20: Chinese application CN202210066981.4A filed; published as CN114419202A (status: pending).
- 2022-12-30: PCT application PCT/CN2022/143805 filed; published as WO2023138345A1.
Also Published As
Publication Number | Publication Date
---|---
CN114419202A (zh) | 2022-04-29
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22921764; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE