
CN113240778B - Method, device, electronic equipment and storage medium for generating virtual image - Google Patents

Method, device, electronic equipment and storage medium for generating virtual image

Info

Publication number
CN113240778B
CN113240778B (application CN202110456059.1A)
Authority
CN
China
Prior art keywords: avatar, shape, shape parameter, reference model, face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110456059.1A
Other languages
Chinese (zh)
Other versions
CN113240778A
Inventor
彭昊天
陈睿智
赵晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110456059.1A
Publication of CN113240778A
Application granted
Publication of CN113240778B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a method, an apparatus, an electronic device, and a storage medium for generating an avatar, and relates in particular to artificial intelligence fields such as computer vision and deep learning. The specific scheme is as follows: receiving an avatar generation request, where the generation request includes a face image and an avatar reference model; acquiring a plurality of avatar shape models corresponding to the avatar reference model; analyzing the face image to determine each first shape parameter corresponding to the face image; and fusing the plurality of avatar shape models and the avatar reference model according to each first shape parameter to generate an avatar corresponding to the face image. In this way, the selected avatar reference model and the plurality of avatar shape models are fused according to the first shape parameters of the face image, so that an avatar can be generated quickly, meeting user requirements while reducing labor cost and improving efficiency.

Description

Method, device, electronic equipment and storage medium for generating virtual image
Technical Field
The disclosure relates to the field of computer technology, in particular to artificial intelligence fields such as computer vision and deep learning, and especially to a method, an apparatus, an electronic device, and a storage medium for generating an avatar.
Background
With the rapid development of computer technology, the field of artificial intelligence has also advanced quickly. Avatars are widely used in character-modeling scenarios such as social networking, live streaming, and games, and how to generate an avatar quickly is a problem that currently needs to be solved.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for generating an avatar.
In one aspect of the present disclosure, there is provided a method of generating an avatar, including:
receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model;
acquiring a plurality of avatar shape models corresponding to the avatar reference model;
analyzing the face image to determine each first shape parameter corresponding to the face image;
and according to the first shape parameters, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image.
In another aspect of the present disclosure, there is provided an avatar generating apparatus including:
the receiving module is used for receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model;
An acquisition module for acquiring a plurality of avatar shape models corresponding to the avatar reference model;
the analysis module is used for analyzing the face image to determine each first shape parameter corresponding to the face image;
and the generation module is used for fusing the plurality of avatar shape models and the avatar reference model according to the first shape parameters so as to generate an avatar corresponding to the face image.
In another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the avatar generation method of the embodiment of the above aspect.
In another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing the computer to perform the steps of the avatar generation method of the embodiment of the above aspect.
In another aspect of the present disclosure, there is provided a computer program product including a computer program which, when executed by a processor, implements the avatar generation method of the embodiment of the above aspect.
With the method, apparatus, electronic device, and storage medium for generating an avatar of the disclosure, an avatar generation request is first received, where the generation request includes a face image and an avatar reference model; a plurality of avatar shape models corresponding to the avatar reference model are then acquired, and the face image is analyzed to determine each first shape parameter corresponding to the face image, so that the plurality of avatar shape models and the avatar reference model can be fused according to each first shape parameter to generate an avatar corresponding to the face image. Because the selected avatar reference model and the plurality of avatar shape models are fused according to the first shape parameters of the face image, an avatar can be generated quickly, meeting user requirements, reducing labor cost, and improving efficiency.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart illustrating a method for generating an avatar according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for generating an avatar according to another embodiment of the present disclosure;
FIG. 2a is a schematic diagram illustrating the generation of a face shape model according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a method for generating an avatar according to still another embodiment of the present disclosure;
fig. 3a is a schematic view illustrating generation of an avatar according to another embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a platform according to another embodiment of the present disclosure;
fig. 4 is a schematic structural view of an avatar generation apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural view of an avatar generating apparatus according to another embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a method of generating an avatar according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Artificial intelligence is the discipline that studies how to make a computer mimic certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning); it covers both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning and deep learning, big data processing, and knowledge graph technologies.
Computer vision is an interdisciplinary scientific field that studies how to obtain high-level understanding from digital images or video. From an engineering point of view, it seeks to automate tasks that the human visual system can accomplish. Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world to produce numerical or symbolic information, for example in the form of decisions.
Deep learning refers to multi-layer artificial neural networks and the methods used to train them. A neural network takes a large amount of matrix-form data as input, weights it and applies nonlinear activations, and produces another set of data as output. With suitable weight matrices, multiple layers are linked together to form a neural network 'brain' capable of precise and complex processing, much like people recognizing and labeling objects in pictures.
The following describes a method, apparatus, electronic device, and storage medium for generating an avatar according to an embodiment of the present disclosure with reference to the accompanying drawings.
The avatar generation method of the embodiment of the present disclosure may be performed by the avatar generation apparatus provided by the embodiment of the present disclosure, which may be configured in an electronic device.
For convenience of explanation, the generation device of the avatar in the embodiments of the present disclosure may be simply referred to as a "generation device".
Fig. 1 is a flowchart illustrating a method for generating an avatar according to an embodiment of the present disclosure.
As shown in fig. 1, the avatar generation method may include the steps of:
step 101, receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model.
The avatar reference model may be any of various types of reference models, such as a cartoon-style reference model or a cat-style reference model, and the shape of the reference model may subsequently be adjusted to generate other models that meet user requirements.
In the embodiments of the present disclosure, when a user needs to generate an avatar, the user may send a generation request to the generation apparatus by submitting the corresponding face image and the selected avatar reference model, so that the generation apparatus acquires the avatar generation request.
It will be appreciated that the avatar reference model may be provided by the user according to his own needs, or may be selected by the user from a plurality of avatar reference models provided by the generating apparatus, etc., which is not limited in this disclosure.
In addition, the generation request acquired by the generating device may be sent by the user through an application program, or may be sent by the user through a web page, for example, a specific control in the web page is triggered, which is not limited in the disclosure.
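As an illustration only, the generation request described above can be represented as a simple data structure; the field names below (face_image, reference_model_id) and the use of Python/NumPy are assumptions made for this sketch, not part of the disclosure.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class AvatarGenerationRequest:
    """Hypothetical container for the generation request (names are illustrative)."""
    face_image: np.ndarray       # H x W x 3 face photo uploaded by the user
    reference_model_id: str      # identifier of the selected avatar reference model


def receive_request(face_image: np.ndarray, reference_model_id: str) -> AvatarGenerationRequest:
    # Package the face image and the selected avatar reference model into one request.
    return AvatarGenerationRequest(face_image=face_image, reference_model_id=reference_model_id)
```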
Step 102, acquiring a plurality of avatar shape models corresponding to the avatar reference model.
The avatar shape model may be a model generated by shape adjustment based on an avatar reference model, etc., which is not limited in this disclosure.
For example, suppose the avatar reference model is a cartoon-style reference model whose facial features have normal proportions. If the eyes of the reference model are enlarged so that they occupy 2/3 of the whole face, the adjusted model is a corresponding avatar shape model.
Note that the above examples are merely illustrative, and are not to be construed as limiting the avatar reference model, the avatar shape model, and the like in the embodiments of the present disclosure.
In the embodiments of the present disclosure, a plurality of avatar reference models, together with the plurality of avatar shape models corresponding to each of them, may be preconfigured in the generation apparatus. Thus, after receiving the avatar generation request, the generation apparatus may determine the plurality of avatar shape models corresponding to the avatar reference model included in the request.
It is understood that one avatar reference model may correspond to one or more avatar shape models, any of which have respective characteristics. For example, the aspect ratio of the face of one avatar shape model is 7:1, the opening of the mouth of the other avatar shape model is 80%, etc., which is not limited in the present disclosure.
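A minimal sketch of the preconfigured correspondence between avatar reference models and their avatar shape models described above; all model names are invented purely for illustration.

```python
# Hypothetical registry of the preconfigured correspondence (all names invented).
AVATAR_SHAPE_MODELS = {
    "cartoon_base": ["cartoon_wide_face", "cartoon_big_eyes", "cartoon_open_mouth"],
    "cat_base": ["cat_round_face", "cat_big_eyes"],
}


def get_shape_models(reference_model_id: str) -> list:
    # Step 102: look up the avatar shape models configured for this reference model.
    return AVATAR_SHAPE_MODELS.get(reference_model_id, [])
```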
Optionally, the generating device may further acquire each second shape parameter corresponding to the avatar reference model, and then adjust the avatar reference model based on each second shape parameter to generate a plurality of avatar shape models.
Each second shape parameter may be a shape parameter set in advance, or a shape parameter set by the user according to the user's own needs. For example, the second shape parameters set by the user may be: the aspect ratio of the face is 8:1, the eyes occupy 2/3 of the whole face, and so on, which is not limited by the present disclosure.
It is understood that the generating means may adjust the avatar reference model based on each of the second shape parameters to generate an avatar shape model corresponding to each of the second shape parameters. Alternatively, the plurality of second shape parameters may be combined as needed, so that the corresponding parts of the avatar reference model are adjusted based on the combined plurality of second shape parameters, thereby generating the avatar shape model corresponding to the plurality of second shape parameters.
For example, when the avatar reference model is adjusted based on the second shape parameter with the aspect ratio of 8:1, the aspect ratio of the face of the avatar reference model may be adjusted, thereby generating a corresponding avatar shape model. Or, based on the second shape parameter with the aspect ratio of the face being 8:1 and the opening degree of the mouth being 90%, the avatar reference model may be adjusted, so as to generate an avatar shape model with the aspect ratio of the face being adjusted and the opening degree of the mouth being adjusted, which is not limited in the disclosure.
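The sketch below illustrates one plausible way to adjust an avatar reference model with second shape parameters, treating each parameter as a weighted per-vertex displacement of one facial region. It is a hedged illustration under that mesh-based assumption, not the implementation fixed by the disclosure.

```python
import numpy as np


def apply_shape_parameter(reference_vertices: np.ndarray,
                          region_mask: np.ndarray,
                          displacement: np.ndarray,
                          weight: float) -> np.ndarray:
    """Return a new avatar shape model by deforming one region of the reference mesh.

    reference_vertices: (V, 3) vertex positions of the avatar reference model
    region_mask:        (V,) boolean mask selecting e.g. the eyes or the mouth
    displacement:       (V, 3) per-vertex deformation direction for that region
    weight:             scalar second shape parameter (e.g. mouth opening = 0.9)
    """
    shaped = reference_vertices.astype(float).copy()
    shaped[region_mask] += weight * displacement[region_mask]
    return shaped


def apply_many(reference_vertices: np.ndarray, adjustments: list) -> np.ndarray:
    # Combine several second shape parameters (e.g. face aspect ratio + mouth opening)
    # into a single avatar shape model, as in the example above.
    shaped = reference_vertices.astype(float).copy()
    for region_mask, displacement, weight in adjustments:
        shaped[region_mask] += weight * displacement[region_mask]
    return shaped
```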
In the embodiment of the disclosure, when the user is not satisfied with the avatar shape model and has special requirements, the generating device can adjust the avatar reference model according to each second shape parameter to generate the avatar shape model meeting the requirements of the user, thereby improving the efficiency.
Step 103, analyzing the face image to determine each first shape parameter corresponding to the face image.
The face image may be analyzed in various ways.
For example, various features in the face image may be extracted to determine corresponding respective first shape parameters. For example, features such as the size of the eyebrow, the distance between the two eyes, the opening degree of each of the two eyes, the opening degree of the mouth, the convexity degree of the cheek, and the like are extracted, so as to determine corresponding first shape parameters, which are not limited in the present disclosure.
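A hedged sketch of turning detected facial landmarks into first shape parameters such as eye distance and mouth opening; the 68-point landmark layout and the normalization by face width are assumptions made only for illustration, and any face landmark detector could supply the input.

```python
import numpy as np


def first_shape_parameters(landmarks: np.ndarray) -> dict:
    """landmarks: (68, 2) 2D facial landmarks from any detector (layout assumed)."""
    left_eye = landmarks[36:42]
    right_eye = landmarks[42:48]
    mouth = landmarks[48:68]

    face_width = np.ptp(landmarks[:, 0])                 # horizontal extent of the face
    eye_distance = np.linalg.norm(left_eye.mean(axis=0) - right_eye.mean(axis=0))
    mouth_opening = np.ptp(mouth[:, 1])                  # vertical extent of the mouth

    # Normalize by face width so the parameters do not depend on image scale.
    return {
        "eye_distance": eye_distance / face_width,
        "mouth_opening": mouth_opening / face_width,
    }
```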
Step 104, fusing the plurality of avatar shape models and the avatar reference model according to the first shape parameters to generate an avatar corresponding to the face image.
For example, the avatar shape models may be adjusted according to each first shape parameter corresponding to the face image, and the adjusted avatar shape models and the avatar reference model may then be fused to generate an avatar corresponding to the face image, which is not limited in the present disclosure.
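A minimal blend-shape style fusion sketch consistent with the relation used later in this description (result = reference + sum_i K_i * (shape_i - reference)); it illustrates the fusion step only and is not a definitive implementation.

```python
import numpy as np


def fuse(reference: np.ndarray, shape_models: list, first_params: np.ndarray) -> np.ndarray:
    """reference: (V, 3); shape_models: list of (V, 3) arrays; first_params: one weight per model."""
    avatar = reference.astype(float)
    for k, shape in zip(first_params, shape_models):
        avatar = avatar + k * (shape - reference)   # weighted difference of each shape model
    return avatar
```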
In the embodiments of the disclosure, when the avatar corresponding to the face image is generated, the corresponding avatar shape models can be determined simply from the avatar reference model selected by the user, and the selected avatar reference model and the plurality of avatar shape models can then be fused using the first shape parameters determined by analyzing the face image. An avatar meeting the user's requirements can thus be generated quickly, reducing labor cost and improving efficiency.
With the avatar generation method of the embodiments of the present disclosure, an avatar generation request is first received, where the generation request includes a face image and an avatar reference model; a plurality of avatar shape models corresponding to the avatar reference model are then obtained, and the face image is analyzed to determine each first shape parameter corresponding to the face image, so that the avatar shape models and the avatar reference model can be fused according to each first shape parameter to generate the avatar corresponding to the face image. Because the selected avatar reference model and the plurality of avatar shape models are fused according to the first shape parameters of the face image, the avatar can be generated quickly, meeting user requirements, reducing labor cost, and improving efficiency.
In the above embodiment, the avatar may be generated by fusing the selected avatar reference model and the plurality of avatar shape models using the first shape parameters corresponding to the face image. In one possible implementation, the avatar shape models may be generated by correcting any of the initial shape parameters of the initial avatar shape models, and the same adjustments may then be applied to a selected face reference model to generate corresponding face shape models, as further described below in connection with fig. 2.
Fig. 2 is a flowchart of a method for generating an avatar according to an embodiment of the present disclosure, and as shown in fig. 2, the method for generating an avatar may include the following steps:
step 201, receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model.
Step 202, obtaining each initial shape parameter corresponding to the virtual image reference model.
Step 203, adjusting the avatar reference model based on the respective initial shape parameters to generate a plurality of initial avatar shape models.
The initial shape parameters may be a plurality of shape parameters set in advance, or they may be set by the user according to the user's own needs. For example, the preset initial shape parameters may be: the opening of the mouth is 50%, the distance between the two eyes is 5 cm, and so on, which is not limited by the present disclosure.
In addition, the correspondence between the avatar reference model and each initial shape parameter may be set in advance, or may be set correspondingly by the user according to his own needs, or the like.
It is understood that the avatar reference model may be adjusted based on each initial shape parameter to generate an avatar shape model corresponding to each initial shape parameter. Alternatively, the plurality of corresponding portions of the avatar reference model may be adjusted based on a combination of a plurality of initial shape parameters, thereby generating an avatar shape model corresponding to the plurality of shape parameters, and the like, which is not limited in the present disclosure.
For example, when the avatar reference model is adjusted based on the initial shape parameter with a face aspect ratio of 3:1, the face aspect ratio of the avatar reference model may be adjusted to 3:1, thereby generating a corresponding initial avatar shape model. Alternatively, based on a combination of initial shape parameters in which the aspect ratio of the face is 3:1, the eyes occupy 2/3 of the whole face, and the distance between the eyes is 5 cm, the face aspect ratio of the avatar reference model is adjusted and the size of and distance between the eyes are adjusted as well, thereby generating a corresponding initial avatar shape model.
It should be noted that the above examples are only illustrative, and are not intended to limit the initial shape parameters, the initial avatar shape model, and the like in the embodiments of the present disclosure.
Step 204, displaying each initial shape parameter and the corresponding initial avatar shape model on a display interface.
After the initial avatar shape model is generated, each initial shape parameter and the corresponding initial avatar shape model can be displayed on the display interface, so that a user can know each initial shape parameter and the corresponding initial avatar shape model more clearly and intuitively.
For example, each initial shape parameter may be displayed at a middle position of the display interface, and each corresponding initial avatar shape model may be displayed below each initial shape parameter; alternatively, each initial avatar shape model may be displayed from a position of the left third of the display interface, each corresponding initial shape parameter may be displayed below each initial avatar shape model, and so on.
It should be noted that the foregoing examples are only illustrative, and should not be taken as limiting the display interface in the embodiments of the present disclosure to display each initial shape parameter and the corresponding initial avatar shape model.
Step 205, in response to receiving a correction operation performed on any initial shape parameter at the display interface, adjusting that initial shape parameter to generate a corresponding second shape parameter.
It will be appreciated that the user may adjust any of the initial shape parameters as desired.
For example, the user may first select any initial shape parameter to be modified in the display interface, then modify the initial shape parameter to be the data desired by the user, and click the corresponding control to save the data. When the generating device acquires the correction operation, the generating device can adjust any initial shape parameter to generate a second shape parameter corresponding to any initial shape parameter.
Alternatively, the user may perform operations such as zooming in, zooming out, and rotating the initial avatar shape model in any ratio on the display interface as needed, and then modify any one of the initial shape parameters by long-press operation, sliding, clicking, and dragging operations, to generate a second shape parameter corresponding to the one of the initial shape parameters.
For example, the user may first enlarge the eye portion in the initial avatar shape model 1, then adjust the eye portion by touching or dragging with a finger, and the display interface may display corresponding shape parameters in real time. After the user is satisfied with the eye position adjustment, the initial shape parameter to be corrected can be corrected according to the shape parameter displayed in real time. When the generating device acquires the correction operation, the generating device can adjust any initial shape parameter to generate a second shape parameter corresponding to any initial shape parameter.
It should be noted that the above examples are only illustrative, and are not intended to limit the display interface, the initial shape parameters, the correction operations performed on the initial shape parameters, and the like in the embodiments of the present disclosure.
Step 206, adjusting the avatar reference model based on the respective second shape parameters to generate a plurality of avatar shape models.
In the embodiment of the disclosure, a user may autonomously select any initial shape parameter to be corrected and perform corresponding correction on the initial shape parameter, so that when the generating device acquires the correction operation, the generating device may adjust the any initial shape parameter to generate a second shape parameter corresponding to the any initial shape parameter. And then corresponding parts in the avatar reference model can be adjusted according to the generated second shape parameters to generate a plurality of avatar shape models.
It is understood that the avatar reference model may be adjusted based on each of the second shape parameters, to generate an avatar shape model corresponding to each of the second shape parameters. Alternatively, the plurality of second shape parameters may be combined as needed, so that the corresponding parts of the avatar reference model may be adjusted based on the combined plurality of second shape parameters, so as to generate an avatar shape model corresponding to the plurality of second shape parameters, and the like, which is not limited in the present disclosure.
Step 207, performing shape adjustment on the face reference model based on the second shape parameters to generate a plurality of face shape models.
The face reference model may be provided by a generating device, or may be provided by a user, or the like, which is not limited in this disclosure.
It will be appreciated that the avatar reference model may be adjusted based on the respective second shape parameters to generate a corresponding plurality of avatar shape models. The face reference model can be subjected to the same shape adjustment based on the same second shape parameters to generate a plurality of corresponding face shape models, so that the generated face shape models better match the user's requirements, improving accuracy and providing a better user experience.
Step 208, analyzing the face image based on the face shape models and the face reference model to determine each first shape parameter corresponding to the face image.
The face image may be analyzed in a plurality of ways based on a plurality of face shape models and face reference models.
For example, the difference between any face shape model and the face reference model may be determined first, and then the first shape parameter corresponding to the face image may be determined according to the relationship satisfied by the face shape model, the face reference model, and the shape parameter corresponding to the face image.
For example, the face shape models, the face reference model, and the shape parameters corresponding to the face image may satisfy the following relationship: D1×K1 + D2×K2 + D3×K3 + D0 = face image, where D1, D2, and D3 are the differences between the respective face shape models and the face reference model, K1, K2, and K3 are the first shape parameters corresponding to the face image, and D0 is the face reference model. The first shape parameters K1, K2, and K3 corresponding to the face image can therefore be determined from this relationship.
The foregoing examples are merely illustrative, and are not intended to be limiting of the manner in which the first shape parameter corresponding to the face image is determined in the embodiments of the present disclosure.
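Under the relationship above, the first shape parameters K1, K2, K3 can be recovered by least squares once the models are flattened into vectors; the sketch below assumes that vertex-array representation purely for illustration and does not prescribe the disclosure's actual solver.

```python
import numpy as np


def solve_first_parameters(face_shape_models: list, face_reference: np.ndarray,
                           target_face: np.ndarray) -> np.ndarray:
    """Return one weight K_i per face shape model by least squares."""
    d0 = face_reference.reshape(-1).astype(float)
    # Each column holds the flattened difference D_i between one face shape model and D0.
    deltas = np.stack([m.reshape(-1) - d0 for m in face_shape_models], axis=1)
    k, *_ = np.linalg.lstsq(deltas, target_face.reshape(-1) - d0, rcond=None)
    return k
```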
For example, as shown in fig. 2a, the user may zoom, stretch, or otherwise adjust a certain part of the avatar reference model on the display interface, thereby generating the corresponding second shape parameter. The generation apparatus then adjusts the avatar reference model based on the determined second shape parameters, or a combination of several of them, to generate the plurality of avatar shape models shown in fig. 2a. The generation apparatus may then map the determined second shape parameters to the face image to obtain the corresponding first shape parameters, and use a skeleton-and-skinning system to perform shape adjustment on the face reference model (as shown in fig. 2a) according to the first shape parameters, thereby generating the plurality of face shape models shown in fig. 2a, each corresponding one-to-one to an avatar shape model.
The above examples are merely illustrative, and are not intended to limit the generation of shape models or the like in the embodiments of the present disclosure.
Step 209, according to each first shape parameter, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image.
With the avatar generation method of the embodiments of the present disclosure, an avatar generation request is first received, and each initial shape parameter corresponding to the avatar reference model in the request is acquired. The avatar reference model is adjusted based on the initial shape parameters to generate a plurality of initial avatar shape models, and any initial shape parameter may then be corrected to generate a corresponding second shape parameter. The avatar reference model is adjusted based on the second shape parameters to generate a plurality of avatar shape models, and the face reference model is shape-adjusted to generate a plurality of face shape models. The face image is then analyzed to determine each first shape parameter corresponding to it, so that the plurality of avatar shape models and the avatar reference model can be fused according to each first shape parameter to generate the avatar corresponding to the face image. In this way, the corresponding avatar can be generated quickly from the face image, the shape models, and the reference models, meeting the user's avatar-generation needs while reducing labor cost and improving efficiency.
It will be appreciated that after the avatar is generated, the user may make adjustments to the avatar if he is not satisfied with it. The above process is described in detail with reference to fig. 3.
Step 301, receiving an avatar generation request, wherein the generation request includes a face image and an avatar reference model.
Step 302, a plurality of avatar shape models corresponding to the avatar reference model are acquired.
Step 303, analyzing the face image to determine each first shape parameter corresponding to the face image.
Step 304, determining a target second shape parameter corresponding to the first shape parameter according to a mapping relationship between the preset first shape parameter and the second shape parameter.
It can be appreciated that the mapping relationship between the first shape parameter and the second shape parameter may be set in advance. Therefore, after the first shape parameters corresponding to the face image are determined, the second shape parameters of the target corresponding to the face image can be determined by searching the mapping relation.
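A minimal sketch of the preset mapping lookup in step 304; representing the mapping as a nested table keyed by part name and first-parameter value is an assumption made only for this illustration.

```python
def target_second_parameters(first_params: dict, mapping: dict) -> dict:
    """mapping[part][first value] -> target second shape parameter for that part."""
    return {
        part: mapping[part][value]
        for part, value in first_params.items()
        if part in mapping and value in mapping[part]
    }
```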
Step 305, fusing the plurality of avatar shape models and the avatar reference model according to the target second shape parameters to generate an avatar corresponding to the face image.
For example, the differences between any avatar shape model and the avatar reference model may be determined part by part, each difference multiplied by the corresponding target second shape parameter, the weighted differences summed, and the avatar reference model added at the end, so that the avatar corresponding to the face image is generated.
Alternatively, the differences between the plurality of avatar shape models and the avatar reference model may be determined, multiplied by the corresponding target second shape parameters, summed, and added to the avatar reference model, which likewise generates the avatar corresponding to the face image.
The above examples are merely illustrative, and are not intended to limit the manner in which the avatar corresponding to the face image is generated in the embodiments of the present disclosure.
The generation of the avatar provided in the present disclosure will be described below by taking the generation schematic of the avatar shown in fig. 3a as an example.
In the schematic diagram shown in fig. 3a, the user uploads a face image, and the uploaded face image is analyzed according to a face reference model and the plurality of face shape models corresponding to it, to determine the first shape parameters corresponding to the face image. The target second shape parameters corresponding to the first shape parameters are then determined according to the preset mapping relationship between first and second shape parameters, and the avatar reference model and the plurality of avatar shape models are fused according to the target second shape parameters to generate the avatar corresponding to the face image.
The above examples are merely illustrative, and are not intended to limit the generation of an avatar corresponding to a face image or the like in the embodiment of the present disclosure.
It can be appreciated that the method for generating the avatar provided by the present disclosure may be applied to various character modeling scenarios such as social interaction, live broadcast, game, etc., and may also be configured in a character modeling system, etc., which is not limited in this disclosure.
For example, applying the scheme provided by the present disclosure to a PTA (photo-to-avatar) platform, the architecture of the platform may be as shown in fig. 3 b.
As shown in fig. 3b, the user may perform various operations through the display interface, such as uploading a face image and obtaining the PTA result after the run, or acquiring the configuration file of an avatar reference model from the console.
Further, the user can obtain the complete configuration file through a pull-down, the display interface can show the complete set of shape parameters and the corresponding models, and the user can download the corresponding models from the interface as needed.
The mapping relationship between the first shape parameters and the second shape parameters may be preset in the platform, or the user may set the mapping relationship according to the user's own requirements. The user can also view his or her operation history through the display interface, for example which shape parameters were adjusted and the avatar shape models before and after adjustment.
In addition, the display interface can display each shape model and the corresponding shape parameters, and the user can adjust the shape parameters as needed. The console then configures the corresponding adjustment instructions according to the user's adjustment operations, and the PTA configuration tool updates the corresponding configuration information according to those instructions. After the console runs the instructions, the PTA generates the corresponding avatar with the updated configuration and returns the result to the console, so that the avatar is displayed on the display interface.
Note that the above examples are merely illustrative, and are not intended to limit the procedure of generating an avatar or the like in the embodiments of the present disclosure.
Step 306, displaying the avatar on the display interface.
It is understood that after the avatar is generated, the avatar may be displayed at a display interface. Alternatively, the avatar and its corresponding second shape parameters, etc. may also be displayed on the display interface, which is not limited in this disclosure.
Step 307, in response to receiving an adjustment operation performed on the avatar at the display interface, determining the target part corresponding to the adjustment operation and the adjusted target second shape parameter.
The user can adjust any target second shape parameter on the display interface according to the requirement.
For example, the user may first select any target second shape parameter to be adjusted, then modify the target second shape parameter to be the data desired by the user, and click on the corresponding control to save the data. When the generating device acquires the adjustment operation, the generating device can determine the target part corresponding to the adjustment operation and the adjusted target second shape parameter.
Alternatively, the user may first zoom in, zoom out, rotate, or otherwise manipulate the avatar in any proportion on the display interface, determine the target part to be adjusted through long-press, sliding, clicking, or dragging operations, and then adjust the corresponding target second shape parameter. When the generation apparatus acquires the adjustment operation, it can determine the target part corresponding to the adjustment operation and the adjusted target second shape parameter.
For example, if the user wants to adjust the shape of the nose in the avatar, the nose part of the avatar can first be enlarged and then reshaped through operations such as finger touch and mouse drag, while the corresponding target second shape parameter is displayed on the display interface in real time. Once satisfied with the nose shape, the user can click the corresponding control to save it. When the generation apparatus acquires the adjustment operation, it can determine the target part corresponding to the adjustment operation and the adjusted target second shape parameter.
Note that the above examples are merely illustrative, and are not intended to limit the adjustment operations and the like performed for the avatar in the embodiments of the present disclosure.
Step 308, adjusting the mapping relationship between the preset first shape parameter and the second shape parameter according to the target part and the adjusted target second shape parameter.
After determining the target part corresponding to the adjustment operation and the adjusted target second shape parameter, the generation apparatus may add the pair formed by the first shape parameter corresponding to the face image and the adjusted target second shape parameter to the preset mapping relationship and delete the original entry, so that the preset mapping relationship between first and second shape parameters can be updated in time to meet user requirements.
Alternatively, the second shape parameter corresponding to the first shape parameter in the mapping relationship may be replaced with the adjusted target second shape parameter. For example, suppose the mapping relationship gives the second shape parameter corresponding to the first shape parameter as: the distance between the eyes is 5 cm, while the adjusted target second shape parameter is: the distance between the eyes is 4 cm. The second shape parameter corresponding to that first shape parameter in the mapping relationship can then be modified to: the distance between the eyes is 4 cm, and so on, which is not limited by the present disclosure.
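A hedged sketch of step 308 under the same table representation assumed in the earlier lookup sketch: the adjusted target second shape parameter overwrites the stored entry so later requests with the same first shape parameter reuse the user's adjustment.

```python
def update_mapping(mapping: dict, target_part: str, first_value: float,
                   adjusted_second: float) -> None:
    # Overwrite (or create) the entry for this part so the same first shape
    # parameter now maps to the user's adjusted target second shape parameter.
    mapping.setdefault(target_part, {})[first_value] = adjusted_second
```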
Therefore, when the user subsequently needs an avatar, the target second shape parameters that meet the user's requirements can be determined quickly from the adjusted mapping relationship and the first shape parameters corresponding to the face image, so that an avatar meeting the user's requirements is generated, improving both efficiency and the user experience.
With the avatar generation method of the embodiments of the present disclosure, a plurality of avatar shape models corresponding to the avatar reference model can be determined from the face image and avatar reference model included in the avatar generation request, and the uploaded face image can be analyzed to determine each first shape parameter corresponding to it. The target second shape parameters corresponding to the first shape parameters are then determined according to the preset mapping relationship between first and second shape parameters, and the plurality of avatar shape models and the avatar reference model are fused to generate an avatar corresponding to the face image. The avatar is then displayed on the display interface; when an adjustment operation performed on the avatar is received at the display interface, the target part corresponding to the adjustment operation and the adjusted target second shape parameter are determined, and the preset mapping relationship between the first and second shape parameters is adjusted accordingly. The avatar can thus be adjusted to meet the user's requirements, reducing labor cost and improving efficiency.
In order to implement the above-described embodiments, the present disclosure also proposes an avatar generation apparatus.
Fig. 4 is a schematic structural view of an avatar generation device according to an embodiment of the present disclosure.
As shown in fig. 4, the avatar generation apparatus 400 includes: a receiving module 410, an acquisition module 420, a parsing module 430, and a generating module 440.
The receiving module 410 is configured to receive an avatar generation request, where the generation request includes a face image and an avatar reference model.
An acquisition module 420 for acquiring a plurality of avatar shape models corresponding to the avatar reference model.
The parsing module 430 is configured to parse the face image to determine each first shape parameter corresponding to the face image.
And a generating module 440, configured to fuse the plurality of avatar shape models and the avatar reference model according to the first shape parameters, so as to generate an avatar corresponding to the face image.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
With the avatar generation apparatus provided by the disclosure, an avatar generation request may be received, where the generation request includes a face image and an avatar reference model; a plurality of avatar shape models corresponding to the avatar reference model are then acquired, and the face image is parsed to determine each first shape parameter corresponding to the face image, so that the plurality of avatar shape models and the avatar reference model may be fused according to each first shape parameter to generate an avatar corresponding to the face image. Because the selected avatar reference model and the plurality of avatar shape models are fused according to the first shape parameters of the face image, an avatar can be generated quickly, meeting user requirements, reducing labor cost, and improving efficiency.
Fig. 5 is a schematic structural view of an avatar generation device according to an embodiment of the present disclosure.
As shown in fig. 5, the avatar generation apparatus 500 includes: a receiving module 510, an acquisition module 520, a parsing module 530, and a generating module 540.
The receiving module 510 is configured to receive an avatar generation request, where the generation request includes a face image and an avatar reference model.
An acquisition module 520 for acquiring a plurality of avatar shape models corresponding to the avatar reference model.
And the parsing module 530 is configured to parse the face image to determine each first shape parameter corresponding to the face image.
And a generating module 540, configured to fuse the plurality of avatar shape models and the avatar reference model according to the first shape parameters, so as to generate an avatar corresponding to the face image.
In one possible implementation, the acquiring module 520 includes:
an obtaining unit, configured to obtain each second shape parameter corresponding to the avatar reference model;
and a generation unit for adjusting the avatar reference model based on the respective second shape parameters to generate the plurality of avatar shape models.
In a possible implementation manner, the acquiring unit is specifically configured to: acquiring each initial shape parameter corresponding to the virtual image reference model; adjusting the avatar reference model based on the respective initial shape parameters to generate a plurality of initial avatar shape models; displaying the initial shape parameters and the corresponding initial avatar shape models on a display interface; and in response to receiving a correction operation performed on any initial shape parameter at the display interface, adjusting the any initial shape parameter to generate a second shape parameter corresponding to the any initial shape parameter.
In one possible implementation, the parsing module 530 is specifically configured to: based on each second shape parameter, carrying out shape adjustment on the face reference model to generate a plurality of corresponding face shape models; and analyzing the face image based on the face shape models and the face reference model to determine each first shape parameter corresponding to the face image.
In one possible implementation, the generating module 540 is specifically configured to: determining a target second shape parameter corresponding to the first shape parameter according to a mapping relation between a preset first shape parameter and a second shape parameter; and according to the target second shape parameter, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image.
In one possible implementation manner, the apparatus may further include:
and a display module 550 for displaying the avatar on a display interface.
And the determining module 560 is configured to determine, in response to receiving an adjustment operation performed on the avatar at the display interface, the target part corresponding to the adjustment operation and the adjusted target second shape parameter.
The adjusting module 570 is configured to adjust the mapping relationship between the preset first shape parameter and the second shape parameter according to the target portion and the adjusted target second shape parameter.
It is understood that the receiving module 510, the acquisition module 520, the parsing module 530, and the generating module 540 in the embodiments of the present disclosure may have the same structure and function as the receiving module 410, the acquisition module 420, the parsing module 430, and the generating module 440 in the embodiments described above, respectively.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
With the avatar generation apparatus of the embodiments of the present disclosure, a plurality of avatar shape models corresponding to the avatar reference model may be determined from the face image and avatar reference model included in the avatar generation request, and the uploaded face image may be parsed to determine each first shape parameter corresponding to it. The target second shape parameters corresponding to the first shape parameters are then determined according to the preset mapping relationship between first and second shape parameters, and the plurality of avatar shape models and the avatar reference model are fused to generate an avatar corresponding to the face image. The avatar is then displayed on the display interface; when an adjustment operation performed on the avatar is received at the display interface, the target part corresponding to the adjustment operation and the adjusted target second shape parameter are determined, and the preset mapping relationship between the first and second shape parameters is adjusted accordingly. The avatar can thus be adjusted to meet the user's requirements, reducing labor cost and improving efficiency.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 601 performs the respective methods and processes described above, for example, an avatar generation method. For example, in some embodiments, the avatar generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the avatar generation method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the avatar generation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs that run on the respective computers and have a client-server relationship with each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
According to the technical solution of the present disclosure, an avatar generation request may first be received, wherein the generation request includes a face image and an avatar reference model. A plurality of avatar shape models corresponding to the avatar reference model are then obtained, and the face image is parsed to determine each first shape parameter corresponding to the face image, so that the plurality of avatar shape models and the avatar reference model can be fused according to each first shape parameter to generate an avatar corresponding to the face image. In this way, the selected avatar reference model and the plurality of avatar shape models are fused according to the first shape parameters corresponding to the face image, and the avatar can be generated quickly, which meets user requirements, reduces labor cost, and improves efficiency.
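As a purely illustrative sketch (assumed names and simplifications, not the claimed method), the parsing of the face image into first shape parameters can be viewed as a least-squares fit of face shape model deltas to landmarks detected on the uploaded image; landmark detection, camera pose estimation, and regularization are omitted here:

    import numpy as np

    def solve_first_shape_params(landmarks, face_reference, face_shape_models):
        # landmarks:         [L, 3] points detected on the uploaded face image
        # face_reference:    [L, 3] corresponding points on the face reference model
        # face_shape_models: [K, L, 3] corresponding points on each face shape model
        deltas = face_shape_models - face_reference[None, :, :]   # [K, L, 3]
        A = deltas.reshape(len(face_shape_models), -1).T          # [3L, K]
        b = (landmarks - face_reference).reshape(-1)              # [3L]
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.clip(params, 0.0, 1.0)  # one first shape parameter per face shape model

The resulting first shape parameters can then be passed through the preset mapping and used as fusion weights, as sketched earlier.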
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (8)

1. A method of generating an avatar, comprising:
receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model;
acquiring each initial shape parameter corresponding to the avatar reference model;
adjusting the avatar reference model based on the respective initial shape parameters to generate a plurality of initial avatar shape models;
displaying the initial shape parameters and the corresponding initial avatar shape models on a display interface;
in response to receiving a correction operation performed on any one of the initial shape parameters at the display interface, adjusting that initial shape parameter to generate a second shape parameter corresponding to that initial shape parameter;
adjusting the avatar reference model based on the respective second shape parameters to generate a plurality of avatar shape models;
analyzing the face image to determine each first shape parameter corresponding to the face image;
according to the first shape parameters, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image;
the analyzing the face image to determine each first shape parameter corresponding to the face image includes:
based on the second shape parameters, performing shape adjustment on the face reference model to generate a plurality of corresponding face shape models;
and analyzing the face image based on the face shape models and the face reference model to determine each first shape parameter corresponding to the face image.
2. The method of claim 1, wherein the fusing the plurality of avatar shape models and the avatar reference model according to the respective first shape parameters to generate the avatar corresponding to the face image comprises:
determining a target second shape parameter corresponding to the first shape parameter according to a mapping relation between a preset first shape parameter and a second shape parameter;
and according to the target second shape parameter, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image.
3. The method of claim 2, further comprising:
displaying the avatar on a display interface;
in response to receiving an adjustment operation performed on the avatar at the display interface, determining a target part corresponding to the adjustment operation and an adjusted target second shape parameter;
and adjusting the mapping relation between the preset first shape parameter and the second shape parameter according to the target part and the adjusted target second shape parameter.
4. An avatar generation apparatus comprising:
the receiving module is used for receiving an avatar generation request, wherein the generation request comprises a face image and an avatar reference model;
the acquisition module is used for acquiring each initial shape parameter corresponding to the avatar reference model; adjusting the avatar reference model based on the respective initial shape parameters to generate a plurality of initial avatar shape models; displaying the initial shape parameters and the corresponding initial avatar shape models on a display interface; in response to receiving a correction operation performed on any one of the initial shape parameters at the display interface, adjusting that initial shape parameter to generate a second shape parameter corresponding to that initial shape parameter; and adjusting the avatar reference model based on the respective second shape parameters to generate a plurality of avatar shape models;
The analysis module is used for analyzing the face image to determine each first shape parameter corresponding to the face image;
the generating module is used for fusing the plurality of avatar shape models and the avatar reference model according to the first shape parameters so as to generate an avatar corresponding to the face image;
the analysis module is specifically configured to:
based on the second shape parameters, performing shape adjustment on the face reference model to generate a plurality of corresponding face shape models;
and analyzing the face image based on the face shape models and the face reference model to determine each first shape parameter corresponding to the face image.
5. The apparatus of claim 4, wherein the generating module is specifically configured to:
determining a target second shape parameter corresponding to the first shape parameter according to a mapping relation between a preset first shape parameter and a second shape parameter;
and according to the target second shape parameter, fusing the plurality of avatar shape models and the avatar reference model to generate an avatar corresponding to the face image.
6. The apparatus of claim 5, further comprising:
the display module is used for displaying the avatar on a display interface;
the determining module is used for, in response to receiving an adjustment operation performed on the avatar at the display interface, determining a target part corresponding to the adjustment operation and an adjusted target second shape parameter;
and the adjusting module is used for adjusting the mapping relation between the preset first shape parameter and the second shape parameter according to the target part and the adjusted target second shape parameter.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
8. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-3.
CN202110456059.1A 2021-04-26 2021-04-26 Method, device, electronic equipment and storage medium for generating virtual image Active CN113240778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110456059.1A CN113240778B (en) 2021-04-26 2021-04-26 Method, device, electronic equipment and storage medium for generating virtual image

Publications (2)

Publication Number Publication Date
CN113240778A CN113240778A (en) 2021-08-10
CN113240778B true CN113240778B (en) 2024-04-12

Family

ID=77129420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110456059.1A Active CN113240778B (en) 2021-04-26 2021-04-26 Method, device, electronic equipment and storage medium for generating virtual image

Country Status (1)

Country Link
CN (1) CN113240778B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187392B (en) * 2021-10-29 2024-04-19 北京百度网讯科技有限公司 Virtual even image generation method and device and electronic equipment
CN114119935B (en) * 2021-11-29 2023-10-03 北京百度网讯科技有限公司 Image processing method and device
CN114239241B (en) * 2021-11-30 2023-02-28 北京百度网讯科技有限公司 Card generation method and device and electronic equipment
CN114445528B (en) * 2021-12-15 2022-11-11 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
WO2024082180A1 (en) * 2022-10-19 2024-04-25 云智联网络科技(北京)有限公司 Virtual image shaping method and related device
CN116152403B (en) * 2023-01-09 2024-06-07 支付宝(杭州)信息技术有限公司 Image generation method and device, storage medium and electronic equipment
CN116993876B (en) * 2023-09-28 2023-12-29 世优(北京)科技有限公司 Method, device, electronic equipment and storage medium for generating digital human image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510437A (en) * 2018-04-04 2018-09-07 科大讯飞股份有限公司 A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing
CN109523604A (en) * 2018-11-14 2019-03-26 珠海金山网络游戏科技有限公司 A kind of virtual shape of face generation method, device, electronic equipment and storage medium
US10325416B1 (en) * 2018-05-07 2019-06-18 Apple Inc. Avatar creation user interface
CN109922355A (en) * 2019-03-29 2019-06-21 广州虎牙信息科技有限公司 Virtual image live broadcasting method, virtual image live broadcast device and electronic equipment
CN110782515A (en) * 2019-10-31 2020-02-11 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN111402399A (en) * 2020-03-10 2020-07-10 广州虎牙科技有限公司 Face driving and live broadcasting method and device, electronic equipment and storage medium
CN112337105A (en) * 2020-11-06 2021-02-09 广州酷狗计算机科技有限公司 Virtual image generation method, device, terminal and storage medium
CN112634416A (en) * 2020-12-23 2021-04-09 北京达佳互联信息技术有限公司 Method and device for generating virtual image model, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275795A (en) * 2012-04-09 2020-06-12 英特尔公司 System and method for avatar generation, rendering and animation
US10636188B2 (en) * 2018-02-09 2020-04-28 Nicholas T. Hariton Systems and methods for utilizing a living entity as a marker for augmented reality content

Also Published As

Publication number Publication date
CN113240778A (en) 2021-08-10

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant