CN113344770B - Virtual model and construction method thereof, interaction method and electronic device - Google Patents
- Publication number
- CN113344770B CN113344770B CN202110481008.4A CN202110481008A CN113344770B CN 113344770 B CN113344770 B CN 113344770B CN 202110481008 A CN202110481008 A CN 202110481008A CN 113344770 B CN113344770 B CN 113344770B
- Authority
- CN
- China
- Prior art keywords
- model
- virtual model
- error term
- expression
- standard model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T3/04 — Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G10L15/08 — Speech recognition; speech classification or search
Abstract
The present application discloses a virtual model, a method for constructing it, an interaction method, and an electronic device. The construction method comprises: obtaining a three-dimensional model of a target person; parameterizing the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model; transforming each point in the standard model based on the target transformation matrix to obtain a transformed standard model; and editing the transformed standard model to obtain a virtual model, wherein the virtual model contains a plurality of expression targets. The method uses the standard model to approximate the three-dimensional model of the target person and obtains the three-dimensional virtual model by editing the standard model, whose point-cloud and face counts are relatively fixed and evenly distributed, so the cost is low.
Description
Technical Field
The present application relates to the field of human-computer interaction, and in particular to a virtual model, a method for constructing it, an interaction method, and an electronic device.
Background Art
With the development of artificial intelligence, human-computer interaction has become increasingly widespread. For example, some service industries use voice robots to interact with users, saving labor costs and improving service efficiency.

However, interaction between a voice robot and a user is generally voice-only: it lacks visual intuitiveness and gives a poor user experience. Current visual interaction approaches generally use 2D information in a video stream (such as two-dimensional pictures) to interact with the user, which lacks realism compared with 3D information (such as a three-dimensional digital human model).

Three-dimensional digital human models in the prior art are generally built by modelers using modeling software. Not only is the modeling cost high, but model quality also depends on the modeler's experience: models produced by different modelers vary greatly, which is unfavorable for practical application.
Summary of the Invention
One advantage of the present invention is to provide a virtual model, a method for constructing it, an interaction method, and an electronic device, wherein the method can construct a three-dimensional virtual model for human-computer interaction without manual modeling and at low cost.
In a first aspect, the present invention provides a method for constructing a virtual model, comprising:
obtaining a three-dimensional model of a target person;
parameterizing the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model;
transforming each point in the standard model based on the target transformation matrix to obtain a transformed standard model; and
editing the transformed standard model to obtain a virtual model, wherein the virtual model contains a plurality of expression targets.
In one possible implementation, parameterizing the three-dimensional model using the standard model to obtain the target transformation matrix for each point in the standard model comprises:
obtaining a first error term according to the constraint on each pair of adjacent points in the three-dimensional model;
obtaining a second error term according to the constraint on the transformation matrix of each point in the standard model;
obtaining a third error term according to the constraint on corresponding points of the standard model and the three-dimensional model; and
determining the target transformation matrix according to the minimum of the weighted sum of the first error term, the second error term, and the third error term.
In one possible implementation, the first error term is computed by the formula

    E_S = Σ_{(i,j) ∈ adjacency} ‖T_i − T_j‖²,

where E_S is the first error term and T_i and T_j are the transformation matrices of adjacent points i and j;

the second error term is computed by the formula

    E_I = Σ_i ‖T_i − I‖²,

where E_I is the second error term and I is the identity matrix;

the third error term is computed by the formula

    E_C = Σ_i ‖V_i − C_i‖²,

where E_C is the third error term, V_i is the coordinate of the i-th point of the standard model after transformation by its transformation matrix, and C_i is the coordinate of the point in the three-dimensional model corresponding to the point V_i of the standard model; and

the minimum of the weighted sum is computed by the formula

    min (W_S · E_S + W_I · E_I + W_C · E_C),

where W_S, W_I, and W_C are weight values.
In one possible implementation, editing the transformed standard model to obtain the virtual model comprises:
in response to a user operation of editing expression targets, constructing a plurality of the expression targets in the transformed standard model to obtain the virtual model.
In a second aspect, the present application provides an interaction method for a virtual model, comprising:
obtaining interaction information input by a user;
obtaining a video stream based on the interaction information, wherein the video stream contains a plurality of pictures;
extracting facial features of a person from each of the pictures;
obtaining, based on the facial features and a virtual model, expression coefficients of a plurality of expression targets in the virtual model, the virtual model being obtained by the method of the first aspect; and
driving the virtual model based on the expression coefficient of each expression target, wherein the driven virtual model is displayed dynamically.
In one possible implementation, extracting the facial features of a person from each of the pictures comprises:
obtaining a captured facial image of the person from each of the pictures; and
performing feature extraction on the facial image to obtain the facial features.
In one possible implementation, extracting the facial features of a person from each of the pictures comprises:
obtaining a captured speech signal from each of the pictures; and
recognizing the mouth shape of each word in the speech signal to obtain the facial features.
In one possible implementation, there is a correspondence between feature points in the facial features and key points in the virtual model, and obtaining the expression coefficients of the plurality of expression targets in the virtual model based on the facial features and the virtual model comprises:
determining the expression coefficient of each expression target according to the positional relationship between the feature points and the key points.
In one possible implementation, the driven virtual model is computed by the formula

    B = Base + Σ_{i=1}^{n} W_i · B_i,

where Base is the virtual model, B_i is the i-th expression target, W_i is the expression coefficient of the i-th expression target, and B is the driven virtual model.
In a third aspect, the present application provides an apparatus for constructing a virtual model, comprising:
a model obtaining module, configured to obtain a three-dimensional model of a target person;
a processing module, configured to parameterize the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model;
a transformation module, configured to transform each point in the standard model based on the target transformation matrix to obtain a transformed standard model; and
an editing module, configured to edit the transformed standard model to obtain a virtual model, wherein the virtual model contains a plurality of expression targets.
In a fourth aspect, the present application provides an interaction apparatus for a virtual model, comprising:
an interaction information obtaining module, configured to obtain interaction information input by a user;
a video stream obtaining module, configured to obtain a video stream based on the interaction information, wherein the video stream contains a plurality of pictures;
an extraction module, configured to extract facial features of a person from each of the pictures;
an expression coefficient obtaining module, configured to obtain, based on the facial features and a virtual model, expression coefficients of a plurality of expression targets in the virtual model, the virtual model being obtained by the method of the first aspect; and
a driving module, configured to drive the virtual model based on the expression coefficient of each expression target, wherein the driven virtual model is displayed dynamically.
In a fifth aspect, the present application provides a virtual model constructed by the method of the first aspect, the virtual model being used for display on an electronic device.
In a sixth aspect, the present invention provides an electronic device, comprising:
one or more processors; a memory; and one or more computer programs stored in the memory, the one or more computer programs comprising instructions which, when executed by the device, cause the device to perform the method of the first aspect or the second aspect.
In a seventh aspect, the present invention provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method of the first aspect or the second aspect.
In an eighth aspect, the present application provides a computer program which, when executed by a computer, performs the method of the first aspect or the second aspect.
In one possible design, the program of the eighth aspect may be stored, in whole or in part, on a storage medium packaged with the processor, or, in whole or in part, on a memory not packaged with the processor.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an embodiment of the virtual model construction method of the present invention.
FIG. 2 is a schematic diagram of an embodiment of the virtual model interaction method of the present invention.
FIG. 3 is a schematic structural diagram of an embodiment of the virtual model construction apparatus of the present invention.
FIG. 4 is a schematic structural diagram of an embodiment of the virtual model interaction apparatus of the present invention.
FIG. 5 is a schematic structural diagram of an embodiment of the electronic device of the present invention.
DETAILED DESCRIPTION
The following description discloses the present invention so that those skilled in the art can implement it. The preferred embodiments described below are only examples, and those skilled in the art will conceive of other obvious variations. The basic principles of the present invention defined in the following description can be applied to other embodiments, variations, improvements, equivalents, and other technical solutions that do not depart from the spirit and scope of the present invention.
It should be understood that the term "a" is to be read as "at least one" or "one or more": in one embodiment the number of an element may be one, while in another embodiment the number of that element may be plural, so the term "a" is not to be understood as a limitation on quantity.
Application Overview
One existing human-computer interaction method uses a voice robot to interact with the user. This is a voice-only interaction: it lacks visual intuitiveness and gives a poor user experience. Current visual interaction approaches generally use 2D information in a video stream (such as two-dimensional pictures) to interact with the user, which lacks realism compared with 3D information (such as a three-dimensional digital human model).

However, three-dimensional digital human models in the prior art are generally built by modelers using modeling software. Not only is the modeling cost high, but model quality also depends on the modeler's experience: models produced by different modelers vary greatly, which is unfavorable for practical application.

The applicant found that a three-dimensional model obtained by 3D scanning has huge numbers of point-cloud points, meshes, and texture maps; these numbers are not fixed and the elements are unevenly distributed. Such a model is therefore difficult to edit, which makes it difficult to change its shape and to construct different expression targets in it, such as smiling, mouth-opening, or blinking expressions. This in turn makes the model difficult to drive, which is unfavorable for human-computer interaction.

Therefore, the present application provides a method for constructing a virtual model, which may comprise: obtaining a three-dimensional model of a target person; parameterizing the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model; transforming each point in the standard model based on the target transformation matrix to obtain a transformed standard model; and editing the transformed standard model to obtain a virtual model containing a plurality of expression targets. The method thus uses the standard model to approximate the three-dimensional model of the target person and obtains the three-dimensional virtual model by editing the standard model, whose point-cloud and face counts are relatively fixed and evenly distributed. Editing is therefore easy (the target person's three-dimensional model itself need not be edited), different expression targets can readily be constructed in the model, and the virtual model can be driven to achieve human-computer interaction, all without manual modeling and at low cost.
Exemplary Virtual Model Construction Method
Referring to FIG. 1, a method for constructing a virtual model according to an embodiment of the present invention is described. The three-dimensional virtual model constructed by the method can be displayed on an electronic device, such as a computer, mobile phone, smart watch, smart robot, smart home appliance, or car, to achieve human-computer interaction.
As shown in FIG. 1, the method may comprise:
S101. Obtain a three-dimensional model of a target person.
In this embodiment, a 3D scanner performs a 360° scan of the target person (e.g., a natural person) to obtain the person's three-dimensional model. Preferably, the three-dimensional model may contain a point cloud, triangular meshes, and texture maps. It should be understood that the method may further comprise: performing point-cloud denoising, multi-frame alignment, or triangulation on the scanned three-dimensional model to obtain a processed three-dimensional model.
S102. Parameterize the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model.
In step S102, parameterization means transforming each point in the standard model with a transformation matrix so that the standard model approaches the three-dimensional model, and determining the target transformation matrix of each point according to how closely the standard model approaches the three-dimensional model. Preferably, the transformation matrices for which the standard model is closest to the three-dimensional model are the target transformation matrices.
That is, after each point in the standard model is transformed according to its target transformation matrix, the transformed standard model is as close as possible to the three-dimensional model.
It should be understood that the numbers of point-cloud points and faces in the standard model are relatively fixed and evenly distributed. The standard model may be a three-dimensional standard human model; it is not unique and may differ according to users' needs.
In one possible implementation, step S102 may comprise:
S201. Obtain a first error term according to the constraint on each pair of adjacent points in the three-dimensional model.
S202. Obtain a second error term according to the constraint on the transformation matrix of each point in the standard model.
S203. Obtain a third error term according to the constraint on corresponding points of the standard model and the three-dimensional model.
S204. Determine the target transformation matrix according to the minimum of the weighted sum of the first error term, the second error term, and the third error term.
In one possible implementation, for example, suppose the transformation matrix of each point in the standard model is T_i (e.g., a 4×4 matrix).

Then the first error term is computed by the formula (or first optimization function)

    E_S = Σ_{(i,j) ∈ adjacency} ‖T_i − T_j‖²,

where E_S is the first error term and T_i and T_j are the transformation matrices of adjacent points i and j;

the second error term is computed by the formula (or second optimization function)

    E_I = Σ_i ‖T_i − I‖²,

where E_I is the second error term and I is the identity matrix;

the third error term is computed by the formula (or third optimization function)

    E_C = Σ_i ‖V_i − C_i‖²,

where E_C is the third error term, V_i is the coordinate of the i-th point of the standard model after transformation by its transformation matrix, and C_i is the coordinate of the point in the three-dimensional model corresponding to the point V_i of the standard model; and

the minimum of the weighted sum is computed by the formula

    min (W_S · E_S + W_I · E_I + W_C · E_C),

where W_S, W_I, and W_C are weight values.

It should be understood that the target transformation matrices are obtained by substituting into the above formulas the first, second, and third error terms corresponding to the minimum of the weighted sum.

Those skilled in the art will appreciate that, as long as the same principle or function is achieved, the calculation formulas used in the above parameterization are not unique and are not limited here.
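The weighted objective above can be sketched in code. The following is a minimal, illustrative NumPy implementation, not the patent's implementation: the adjacency list, the point correspondences, and the default weight values are hypothetical placeholders, and the per-point transforms are assumed to be 4×4 matrices as in the example above.

```python
import numpy as np

def registration_energy(T, adjacency, V, C, w_s=1.0, w_i=0.1, w_c=1.0):
    """Weighted sum W_S*E_S + W_I*E_I + W_C*E_C from the parameterization
    step (formulas reconstructed; weight values are placeholders)."""
    T = np.asarray(T, dtype=float)          # (n, 4, 4) per-point transforms
    I = np.eye(4)
    # E_S: smoothness -- adjacent points should transform similarly
    e_s = sum(np.sum((T[i] - T[j]) ** 2) for i, j in adjacency)
    # E_I: regularization -- each transform should stay near the identity
    e_i = np.sum((T - I) ** 2)
    # E_C: data term -- transformed standard-model points V should match
    # their corresponding scanned-model points C
    e_c = np.sum((np.asarray(V) - np.asarray(C)) ** 2)
    return w_s * e_s + w_i * e_i + w_c * e_c
```

In an actual optimizer this energy would be minimized over all T_i (e.g., by alternating closest-point search and a linear least-squares solve), which is outside the scope of this sketch.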
S103. Transform each point in the standard model based on the target transformation matrix to obtain a transformed standard model.
That is, each point in the standard model is transformed with its target transformation matrix, changing the coordinates of each point and yielding the transformed standard model.
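As an illustrative sketch (assuming, as in the example above, one 4×4 target transformation matrix per point), applying the transforms in homogeneous coordinates might look like:

```python
import numpy as np

def transform_points(points, transforms):
    """Apply a per-point 4x4 target transformation matrix to each 3D
    point using homogeneous coordinates [x, y, z, 1] (step S103 sketch)."""
    out = []
    for p, T in zip(points, transforms):
        ph = np.append(np.asarray(p, dtype=float), 1.0)  # homogeneous coord
        q = np.asarray(T, dtype=float) @ ph
        out.append(q[:3] / q[3])                         # back to 3D
    return np.array(out)
```

For example, a matrix that translates by +2 along x maps the point (1, 1, 1) to (3, 1, 1).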
It should be understood that, because the numbers of point-cloud points and faces in the standard model are relatively fixed and evenly distributed, the standard model is easy to edit, which facilitates constructing different expression targets in the model.
S104. Edit the transformed standard model to obtain a virtual model, wherein the virtual model contains a plurality of expression targets.
In this embodiment, the expression targets can be used to drive the virtual model so as to change its shape, enabling the virtual model to show corresponding expressions such as blinking, smiling, or mouth-opening.
For example, the formula for driving the virtual model with the expression targets is:

    B = Base + Σ_{i=1}^{n} W_i · B_i,

where Base is the virtual model, B_i is the i-th expression target, W_i is the expression coefficient of the i-th expression target, and B is the driven virtual model.
That is, as the expression coefficient of each expression target changes, the virtual model is driven to change correspondingly, for example to show the corresponding expression, which facilitates human-computer interaction.
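The driving step can be sketched as a simple blendshape combination. This minimal NumPy example assumes each expression target B_i is stored as an array of per-vertex offsets relative to the neutral model (a common blendshape convention; the patent does not fix the storage format):

```python
import numpy as np

def drive_model(base, targets, coeffs):
    """B = Base + sum_i W_i * B_i: blend expression targets into the base
    model; targets are assumed to be per-vertex offset arrays."""
    driven = np.asarray(base, dtype=float).copy()   # (n_vertices, 3)
    for b_i, w_i in zip(targets, coeffs):
        driven += w_i * np.asarray(b_i, dtype=float)
    return driven
```

Varying the coefficients over time (e.g., per video frame) animates the model between the neutral shape and the expressions.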
In one possible implementation, step S104 may comprise:
in response to a user operation of editing expression targets, constructing a plurality of the expression targets in the transformed standard model to obtain the virtual model.
That is, the user can edit targets made in editing software, such as a smiling, mouth-opening, or blinking expression, into the standard model to obtain the edited standard model, i.e., the virtual model.
In summary, the virtual model construction method provided by the present application uses a standard model to approximate the three-dimensional model of the target person and obtains the three-dimensional virtual model by editing the standard model, whose point-cloud and face counts are relatively fixed and evenly distributed. Editing is therefore easy (the target person's three-dimensional model itself need not be edited), different expression targets can readily be constructed in the model, and the virtual model can be driven to achieve human-computer interaction, all without manual modeling and at low cost.
It should be understood that some or all of the steps or operations in the above embodiments are merely examples; embodiments of the present application may also perform other operations or variations of the operations. In addition, the steps may be performed in an order different from that presented above, and not all of the operations above necessarily need to be performed.
示例性虚拟模型的交互方法Exemplary virtual model interaction method
如图2所示，本申请实施例提供了一种虚拟模型的交互方法，所述交互方法应用于电子设备(如计算机、手机、智能手表、智能机器人等)，所述虚拟模型可以模仿如视频流等中的人物表情，从而实现人机交互。As shown in Figure 2, an embodiment of the present application provides a virtual model interaction method, which is applied to electronic devices (such as computers, mobile phones, smart watches, smart robots, etc.). The virtual model can imitate the expressions of people in, for example, video streams, thereby realizing human-computer interaction.
具体地,所述交互方法可以包括:Specifically, the interaction method may include:
S301、获取用户输入的交互信息。S301: Acquire interaction information input by a user.
也就是说,所述交互信息可以包括由用户通过键盘或鼠标输入文字信息,或者实时采集的用户语音,或者实时采集的用户图像等。That is to say, the interaction information may include text information input by the user through a keyboard or a mouse, or a user voice collected in real time, or a user image collected in real time, etc.
S302、基于所述交互信息，获取视频流，其中，所述视频流中包含多个画面；S302, acquiring a video stream based on the interaction information, wherein the video stream includes a plurality of pictures;
在本实施例中，所述视频流预存储于电子设备中，通过对所述交互信息进行识别，从所述电子设备中调取对应的视频流，所述视频流中包含与用户的交互信息相匹配的信息，以用于人机交互。所述视频流中可以包含多个画面，画面中可以包含人物面部图像或者语音信号等。举例地，步骤S302中，可以通过对交互信息(如用户发出的语音信号)进行语音识别，根据识别结果从存储于电子设备中的多个视频流中提取目标视频流，以作为人机交互的目标视频流，如所述目标视频流可以用于应答用户所发出的语音信号等。In this embodiment, the video stream is pre-stored in the electronic device. By recognizing the interaction information, the corresponding video stream is retrieved from the electronic device; this video stream contains information matching the user's interaction information and is used for human-computer interaction. The video stream may contain multiple pictures, and a picture may contain a facial image of a person, a voice signal, or the like. For example, in step S302, voice recognition may be performed on the interaction information (such as a voice signal uttered by the user), and a target video stream may be extracted, according to the recognition result, from the multiple video streams stored in the electronic device to serve as the target video stream for human-computer interaction; for instance, the target video stream may be used to respond to the voice signal uttered by the user.
S303、从每个所述画面中获取人物面部特征。S303: Acquire facial features of the person from each of the pictures.
在本实施例中，所述人物面部特征可以包括在所述画面中人物的面部特征，以用于表示画面中人物当前的面部表情状态，如开心、疑惑、生气等面部状态。In this embodiment, the facial features of the person may include the facial features of the person in the picture, which are used to indicate the person's current facial expression state, such as a happy, confused, or angry facial state.
其中一种可能的实现方式中,步骤S303可以包括:In one possible implementation, step S303 may include:
S401、从每个所述画面中获取采集到的人物面部图像;S401, acquiring a collected facial image of a person from each of the pictures;
S402、对所述人物面部图像进行特征提取,以获得人物面部特征。S402: extracting features from the facial image of the person to obtain facial features of the person.
例如，所述人物面部图像可以是利用相机拍摄采集到的图像，或者从视频流的画面中提取到的人物面部图像等，以从所述人物面部图像中提取得到人物面部特征，所述人物面部特征中可以包含从人物面部图像中提取到的特征点，如特征点的坐标或深度信息等。For example, the facial image of the person may be an image captured by a camera, or a facial image extracted from a picture of the video stream, so that facial features of the person can be extracted from it. The facial features may include feature points extracted from the facial image, such as the coordinates or depth information of the feature points.
其中一种可能的实现方式中,步骤S303可以包括:In one possible implementation, step S303 may include:
S501、从每个所述画面中获取采集到的语音信号;S501, acquiring a collected voice signal from each of the pictures;
S502、对所述语音信号中每个字的口型进行识别,以获得人物面部特征。S502: Recognize the mouth shape of each word in the voice signal to obtain facial features of the person.
例如,所述语音信号可以是从视频流中的画面中提取到的语音信号,或者由语音传感器采集到的语音信号等,并利用语音识别模型对采集到的语音信号中的每个字的口型进行识别,从而得到人物面部特征。For example, the voice signal may be a voice signal extracted from a picture in a video stream, or a voice signal collected by a voice sensor, etc., and a voice recognition model is used to recognize the mouth shape of each word in the collected voice signal to obtain facial features of the character.
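As an illustration only, the per-word mouth-shape recognition described above can be caricatured as a lookup from each recognized syllable's vowel to a mouth-openness value; the table and its values below are toy assumptions for the sketch, not the recognition model used in the patent.

```python
# Toy sketch: map each recognized word/syllable to a mouth-shape
# ("viseme") parameter that can serve as a facial feature for driving
# the model. The vowel table and openness values are illustrative
# assumptions, not the patent's recognition model.
VOWEL_OPENNESS = {"a": 1.0, "o": 0.8, "e": 0.5, "i": 0.2, "u": 0.3}

def mouth_shapes(syllables):
    """Return one mouth-openness value per syllable, keyed on its first vowel."""
    shapes = []
    for s in syllables:
        vowel = next((ch for ch in s if ch in VOWEL_OPENNESS), None)
        shapes.append(VOWEL_OPENNESS.get(vowel, 0.0))  # closed mouth if no vowel
    return shapes

features = mouth_shapes(["ni", "hao"])  # e.g. romanized syllables of a greeting
```

A real system would replace the lookup with the speech-recognition model the patent refers to, but the output shape (one mouth parameter per word) is the same.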
S304、基于所述人物面部特征以及虚拟模型,获取所述虚拟模型中多个表情目标体的表情系数。S304: Based on the facial features of the character and the virtual model, acquiring expression coefficients of a plurality of expression targets in the virtual model.
在本实施例中,所述虚拟模型由如图1所示方法实施例提供的构建方法获得,其具体步骤或原理可以参考上述图1所示方法实施例,在此不再赘述。In this embodiment, the virtual model is obtained by the construction method provided by the method embodiment shown in FIG1 . The specific steps or principles thereof can refer to the method embodiment shown in FIG1 above, and will not be described in detail here.
优选地,所述人物面部特征中的特征点与所述虚拟模型中的关键点之间存在对应关系,例如,人物面部特征中的眼角(特征点)对应于所述虚拟模型中的眼角(关键点)。Preferably, there is a corresponding relationship between the feature points in the facial features of the person and the key points in the virtual model, for example, the corners of the eyes (feature points) in the facial features of the person correspond to the corners of the eyes (key points) in the virtual model.
其中一种可能的实现方式中,步骤S304可以包括:In one possible implementation, step S304 may include:
根据所述特征点与所述关键点的位置关系,确定每个表情目标体的表情系数。The expression coefficient of each expression target body is determined according to the positional relationship between the feature points and the key points.
也就是说,利用获取到的所述人物面部特征的特征点与所述虚拟模型中关键点的位置关系(如坐标差值等),可以计算得到每个表情目标体的表情系数。That is to say, the expression coefficient of each expression target body can be calculated by using the acquired positional relationship (such as coordinate difference, etc.) between the feature points of the character's facial features and the key points in the virtual model.
在本实施例中,利用所述虚拟模型投影至相机坐标系中的关键点的坐标或位置与实时采集到的人物面部图像中的特征点的坐标或位置的约束,通过优化函数优化得到每个表情目标体的表情系数。In this embodiment, the expression coefficient of each expression target body is obtained by optimizing the optimization function using the constraints of the coordinates or positions of the key points projected from the virtual model into the camera coordinate system and the coordinates or positions of the feature points in the real-time collected facial image of the person.
例如，所述优化函数可以表示为以下公式：min_{W,R,t} Σ_k ||Π(R·B_k + t) − P_k||²，其中，B为驱动后的虚拟模型，B_k为B中第k个关键点的坐标，R为旋转矩阵，t为平移距离，所述平移距离t可以由所述特征点与所述关键点之间的距离确定，Π(·)表示向相机坐标系的投影，P_k为人物面部图像中第k个特征点的坐标。其中，表情目标体的表情系数为W=[W_1,W_2,…,W_m]，W_i为第i个表情目标体的表情系数。For example, the optimization function can be expressed as the formula min_{W,R,t} Σ_k ||Π(R·B_k + t) − P_k||², where B is the driven virtual model, B_k is the coordinate of the k-th key point of B, R is the rotation matrix, and t is the translation distance, which can be determined from the distance between the feature points and the key points; Π(·) denotes projection into the camera coordinate system, and P_k is the coordinate of the k-th feature point in the facial image. The expression coefficients of the expression target bodies are W = [W_1, W_2, …, W_m], where W_i is the expression coefficient of the i-th expression target body.
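The coefficient fitting described above — matching the model's key points to the observed feature points — can be sketched as an ordinary least-squares problem. This is only an illustrative sketch under the assumption that each expression target contributes a linear offset at the key points and that the head pose (R, t) has already been resolved; the function and variable names are not from the patent.

```python
import numpy as np

def solve_expression_coefficients(base_keypoints, target_deltas, observed_keypoints):
    """Estimate expression coefficients W so that the driven keypoints
    Base + sum_i W_i * delta_i best match the observed feature points.

    base_keypoints:     (k, 3) neutral-model keypoint coordinates
    target_deltas:      (m, k, 3) per-target offsets at the keypoints
    observed_keypoints: (k, 3) feature points recovered from the face image
    (All names, and the clipping of W to [0, 1], are illustrative assumptions.)
    """
    m = target_deltas.shape[0]
    # Each expression target contributes one column of the design matrix.
    A = target_deltas.reshape(m, -1).T                  # (3k, m)
    b = (observed_keypoints - base_keypoints).ravel()   # (3k,)
    W, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(W, 0.0, 1.0)  # keep coefficients in a plausible range

# Toy example: two keypoints, one "open mouth" target, mouth observed half open.
base = np.zeros((2, 3))
deltas = np.array([[[0.0, 1.0, 0.0], [0.0, -1.0, 0.0]]])  # shape (1, 2, 3)
observed = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
W = solve_expression_coefficients(base, deltas, observed)
```

In the patent's formulation the pose (R, t) is optimized jointly with W; the sketch isolates only the linear coefficient-solving step.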
S305、基于每个所述表情目标体的表情系数,对所述虚拟模型进行驱动,其中,驱动后的虚拟模型进行动态显示。S305 , driving the virtual model based on the expression coefficient of each of the expression targets, wherein the driven virtual model is dynamically displayed.
具体地，所述驱动后的虚拟模型由公式 B = Base + Σ_{i=1}^{m} W_i·B_i 计算得到，其中，Base为虚拟模型，B_i为第i个表情目标体，W_i为第i个表情目标体的表情系数，B为驱动后的虚拟模型。Specifically, the driven virtual model is calculated by the formula B = Base + Σ_{i=1}^{m} W_i·B_i, where Base is the virtual model, B_i is the i-th expression target body, W_i is the expression coefficient of the i-th expression target body, and B is the driven virtual model.
也就是说，驱动后的虚拟模型可以模仿视频流中的人物的表情，以模仿视频流中的人物表情在电子设备中进行实时地显示，如动态显示，从而实现与用户进行人机交互。所述视频流中可以包含多个2D画面，所述视频流可以预存储于电子设备中。在进行人机交互时，通过获取用户输入的交互信息，根据所述交互信息，从多个视频流中调取目标视频流，根据所述目标视频流，获取所述目标视频流中多个画面的人物面部特征，基于每个所述人物面部特征以及虚拟模型，获取所述虚拟模型中多个表情目标体的表情系数，基于每个所述表情目标体的表情系数，对所述虚拟模型进行驱动，其中，驱动后的虚拟模型可以在电子设备中动态显示，以模仿目标视频流中的人物表情在所述电子设备上实时地显示，从而实现了与用户的人机交互。That is to say, the driven virtual model can imitate the expressions of the people in the video stream and be displayed in the electronic device in real time (for example, dynamically), thereby realizing human-computer interaction with the user. The video stream may contain multiple 2D pictures and may be pre-stored in the electronic device. During human-computer interaction, the interaction information input by the user is acquired; a target video stream is retrieved from multiple video streams according to that interaction information; the facial features of the people in the pictures of the target video stream are acquired; based on each set of facial features and the virtual model, the expression coefficients of the multiple expression target bodies in the virtual model are acquired; and the virtual model is driven based on the expression coefficient of each expression target body. The driven virtual model can be displayed dynamically in the electronic device, imitating the expressions of the people in the target video stream in real time, thereby realizing human-computer interaction with the user.
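The driving step itself can be sketched in a few lines, assuming the common blendshape reading in which each expression target B_i is blended as an offset from the base model (the inline formula image is not reproduced in the text, so this reading is an assumption); names are illustrative.

```python
import numpy as np

def drive_model(base_vertices, targets, weights):
    """Blend the neutral model toward each expression target:
    B = Base + sum_i W_i * (B_i - Base).

    base_vertices: (n, 3) vertices of the neutral virtual model
    targets:       list of (n, 3) expression-target meshes
    weights:       one expression coefficient per target
    """
    driven = base_vertices.astype(float).copy()
    for target, w in zip(targets, weights):
        driven += w * (target - base_vertices)  # weighted per-target offset
    return driven

# One vertex, two targets: "smile" lifts y by 2, "open mouth" lowers it by 1.
base = np.array([[0.0, 0.0, 0.0]])
smile = np.array([[0.0, 2.0, 0.0]])
open_mouth = np.array([[0.0, -1.0, 0.0]])
driven = drive_model(base, [smile, open_mouth], [0.5, 1.0])
```

Re-evaluating this blend with the coefficients recovered for each frame is what produces the dynamic display described in step S305.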
可以理解的是，上述实施例中的部分或全部步骤或操作仅是示例，本申请实施例还可以执行其它操作或者各种操作的变形。此外，各个步骤可以按照与上述实施例呈现的不同的顺序来执行，并且有可能并非要执行上述实施例中的全部操作。It is to be understood that some or all of the steps or operations in the above embodiments are merely examples; embodiments of the present application may also perform other operations or variations of these operations. In addition, the steps may be performed in orders different from those presented in the above embodiments, and not all of the operations in the above embodiments necessarily need to be performed.
示例性虚拟模型的构建装置Exemplary virtual model construction device
如图3所示,本申请一个实施例提供了一种虚拟模型的构建装置100,所述装置100包括:As shown in FIG3 , an embodiment of the present application provides a virtual model construction device 100, the device 100 comprising:
模型获取模块110,用于获取目标人物的三维模型;A model acquisition module 110 is used to acquire a three-dimensional model of a target person;
处理模块120,用于利用标准模型对所述三维模型进行参数化处理,以得到所述标准模型中每个点的目标变换矩阵;A processing module 120 is used to perform parameterization processing on the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model;
变换模块130,用于基于所述目标变换矩阵对所述标准模型中的每个点进行变换处理,以得到变换后的标准模型;A transformation module 130, configured to transform each point in the standard model based on the target transformation matrix to obtain a transformed standard model;
编辑模块140,用于对所述变换后的标准模型进行编辑,以得到虚拟模型,其中,所述虚拟模型中包含多个表情目标体。The editing module 140 is used to edit the transformed standard model to obtain a virtual model, wherein the virtual model includes a plurality of expression targets.
其中一种可能的实现方式中,所述处理模块包括:In one possible implementation, the processing module includes:
根据所述三维模型中的每个邻接点的约束,获得第一误差项;Obtaining a first error term according to the constraint of each adjacent point in the three-dimensional model;
根据所述标准模型中每个点的变换矩阵的约束,获得第二误差项;Obtaining a second error term according to the constraints of the transformation matrix of each point in the standard model;
根据所述标准模型与所述三维模型中对应点的约束,获得第三误差项;Obtaining a third error term according to constraints on corresponding points in the standard model and the three-dimensional model;
根据所述第一误差项、所述第二误差项以及所述第三误差项的加权和最小值,确定目标变换矩阵。A target transformation matrix is determined according to a weighted minimum value of the first error term, the second error term, and the third error term.
其中一种可能的实现方式中，所述第一误差项由公式 E_S = Σ_i Σ_{j∈N(i)} ||T_i − T_j||_F² 计算得到，其中，E_S为第一误差项，T_i与T_j为相邻点的变换矩阵，N(i)为第i个点的邻接点集合；所述第二误差项由公式 E_I = Σ_i ||T_i − I||_F² 计算得到，其中，E_I为第二误差项，I为单位矩阵；所述第三误差项由公式 E_C = Σ_i ||V_i − C_i||² 计算得到，其中，E_C为第三误差项，V_i为所述标准模型中经变换矩阵变换后的第i个点的坐标，C_i为所述三维模型中与所述标准模型中的V_i点相对应的第i个点的坐标；所述加权和最小值由公式 min(W_S·E_S + W_I·E_I + W_C·E_C) 计算得到，其中，W_S、W_I、W_C为权重值。In one possible implementation, the first error term is calculated by the formula E_S = Σ_i Σ_{j∈N(i)} ||T_i − T_j||_F², where E_S is the first error term, T_i and T_j are the transformation matrices of adjacent points, and N(i) is the set of points adjacent to the i-th point; the second error term is calculated by the formula E_I = Σ_i ||T_i − I||_F², where E_I is the second error term and I is the identity matrix; the third error term is calculated by the formula E_C = Σ_i ||V_i − C_i||², where E_C is the third error term, V_i is the coordinate of the i-th point of the standard model after transformation by its transformation matrix, and C_i is the coordinate of the corresponding i-th point in the three-dimensional model; and the weighted-sum minimum is calculated by the formula min(W_S·E_S + W_I·E_I + W_C·E_C), where W_S, W_I, and W_C are weight values.
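The three error terms and their weighted sum described above can be sketched as follows. The exact norms are assumptions, since the patent's formula images are not reproduced in the text: adjacent transforms are encouraged to agree, each transform to stay near the identity, and transformed points V_i to match their correspondences C_i.

```python
import numpy as np

def registration_energy(transforms, edges, V, C, ws=1.0, wi=0.001, wc=1.0):
    """Weighted sum of the three error terms under one common reading:

    E_S: adjacent per-point transforms should be similar,
    E_I: each transform should stay close to the identity,
    E_C: transformed standard-model points V_i should match the
         corresponding 3D-model points C_i.
    (The specific squared Frobenius norms are illustrative assumptions.)
    """
    I = np.eye(3)
    e_s = sum(np.sum((transforms[i] - transforms[j]) ** 2) for i, j in edges)
    e_i = sum(np.sum((T - I) ** 2) for T in transforms)
    e_c = np.sum((V - C) ** 2)
    return ws * e_s + wi * e_i + wc * e_c

# Two points, identity transforms, perfect correspondences -> zero energy.
T = [np.eye(3), np.eye(3)]
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
energy = registration_energy(T, edges=[(0, 1)], V=V, C=V.copy())
```

Minimizing this energy over the per-point transforms (e.g. with a nonlinear least-squares solver) is what yields the target transformation matrices of the parameterization step.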
其中一种可能的实现方式中,所述变换模块包括:In one possible implementation, the transformation module includes:
响应用户编辑表情目标体的操作,在所述变换后的标准模型中构建多个所述表情目标体,以得到虚拟模型。In response to the user's operation of editing the expression target body, a plurality of the expression target bodies are constructed in the transformed standard model to obtain a virtual model.
可以理解的是,图3所示实施例提供的虚拟模型的构建装置可用于执行本申请图1所示方法实施例的技术方案,其实现原理和技术效果可以进一步参考方法实施例中的相关描述。It can be understood that the virtual model construction device provided in the embodiment shown in Figure 3 can be used to implement the technical solution of the method embodiment shown in Figure 1 of the present application, and its implementation principle and technical effects can be further referred to the relevant description in the method embodiment.
示例性虚拟模型的交互装置Example virtual model interactive device
如图4所示,本申请另一个实施例提供了一种虚拟模型的交互装置200,所述装置200包括:As shown in FIG. 4 , another embodiment of the present application provides a virtual model interaction device 200, the device 200 comprising:
交互信息获取模块210,用于获取用户输入的交互信息;The interaction information acquisition module 210 is used to acquire the interaction information input by the user;
视频流获取模块220,用于基于所述交互信息,获取视频流,其中,所述视频流中包含多个画面;A video stream acquisition module 220, configured to acquire a video stream based on the interaction information, wherein the video stream includes a plurality of pictures;
提取模块230,用于从每个所述画面中,提取得到人物面部特征;An extraction module 230 is used to extract facial features of a person from each of the pictures;
表情系数获取模块240，用于基于所述人物面部特征以及虚拟模型，获取所述虚拟模型中多个表情目标体的表情系数，所述虚拟模型由如图1所示方法实施例提供的方法获得；An expression coefficient acquisition module 240, configured to acquire, based on the facial features of the person and the virtual model, the expression coefficients of multiple expression target bodies in the virtual model, where the virtual model is obtained by the method provided in the method embodiment shown in FIG. 1;
驱动模块250,用于基于每个所述表情目标体的表情系数,对所述虚拟模型进行驱动,其中,驱动后的虚拟模型进行动态显示。The driving module 250 is used to drive the virtual model based on the expression coefficient of each expression target body, wherein the driven virtual model is dynamically displayed.
其中一种可能的实现方式中,所述提取模块包括:In one possible implementation, the extraction module includes:
从每个所述画面中获取采集到的人物面部图像;Acquire the collected facial image of the person from each of the pictures;
对所述人物面部图像进行特征提取,以获得人物面部特征。Feature extraction is performed on the facial image of the person to obtain facial features of the person.
其中一种可能的实现方式中,所述提取模块包括:In one possible implementation, the extraction module includes:
从每个所述画面中获取采集到的语音信号;Acquire the collected voice signal from each of the pictures;
对所述语音信号中每个字的口型进行识别,以获得人物面部特征。The mouth shape of each word in the speech signal is recognized to obtain the facial features of the person.
其中一种可能的实现方式中,所述人物面部特征中的特征点与所述虚拟模型中的关键点之间存在对应关系,所述表情系数获取模块包括:In one possible implementation, there is a corresponding relationship between the feature points in the facial features of the character and the key points in the virtual model, and the expression coefficient acquisition module includes:
根据所述特征点与所述关键点的位置关系,确定每个表情目标体的表情系数。The expression coefficient of each expression target body is determined according to the positional relationship between the feature points and the key points.
其中一种可能的实现方式中，所述驱动后的虚拟模型由公式 B = Base + Σ_{i=1}^{m} W_i·B_i 计算得到，其中，Base为虚拟模型，B_i为第i个表情目标体，W_i为第i个表情目标体的表情系数，B为驱动后的虚拟模型。In one possible implementation, the driven virtual model is calculated by the formula B = Base + Σ_{i=1}^{m} W_i·B_i, where Base is the virtual model, B_i is the i-th expression target body, W_i is the expression coefficient of the i-th expression target body, and B is the driven virtual model.
可以理解的是,图4所示实施例提供的虚拟模型的交互装置200可用于执行本申请图2所示方法实施例的技术方案,其实现原理和技术效果可以进一步参考方法实施例中的相关描述。It can be understood that the virtual model interaction device 200 provided in the embodiment shown in Figure 4 can be used to execute the technical solution of the method embodiment shown in Figure 2 of the present application, and its implementation principle and technical effects can be further referred to the relevant description in the method embodiment.
示例性电子设备Exemplary Electronic Devices
图5为本申请电子设备一个实施例的结构示意图,如图5所示,上述电子设备可以包括:一个或多个处理器;存储器;以及一个或多个计算机程序。FIG5 is a schematic diagram of the structure of an embodiment of an electronic device of the present application. As shown in FIG5 , the electronic device may include: one or more processors; a memory; and one or more computer programs.
其中,上述电子设备可以为手机,电脑,服务器,移动终端(手机),智能机器人,计算机,智慧屏,无人机,智能网联车(Intelligent Connected Vehicle;以下简称:ICV),智能(汽)车(smart/intelligent car)或车载设备等设备。Among them, the above-mentioned electronic devices can be mobile phones, computers, servers, mobile terminals (mobile phones), intelligent robots, computers, smart screens, drones, intelligent connected vehicles (Intelligent Connected Vehicle; hereinafter referred to as: ICV), smart/intelligent cars or vehicle-mounted devices and other devices.
其中上述一个或多个计算机程序被存储在上述存储器中,上述一个或多个计算机程序包括指令,当上述指令被上述设备执行时,使得上述设备执行以下步骤:The one or more computer programs are stored in the memory, and the one or more computer programs include instructions. When the instructions are executed by the device, the device performs the following steps:
获取目标人物的三维模型;Obtain a three-dimensional model of the target person;
利用标准模型对所述三维模型进行参数化处理,以得到所述标准模型中每个点的目标变换矩阵;Parameterizing the three-dimensional model using a standard model to obtain a target transformation matrix for each point in the standard model;
基于所述目标变换矩阵对所述标准模型中的每个点进行变换处理,以得到变换后的标准模型;Transforming each point in the standard model based on the target transformation matrix to obtain a transformed standard model;
对所述变换后的标准模型进行编辑,以得到虚拟模型,其中,所述虚拟模型中包含多个表情目标体。The transformed standard model is edited to obtain a virtual model, wherein the virtual model includes a plurality of expression target bodies.
其中一种可能的实现方式中,当上述指令被上述设备执行时,使得上述设备执行所述利用标准模型对所述三维模型进行参数化处理,以得到所述标准模型中每个点的目标变换矩阵,包括:In one possible implementation, when the above instruction is executed by the above device, the above device performs parameterization processing on the three-dimensional model using the standard model to obtain a target transformation matrix for each point in the standard model, including:
根据所述三维模型中的每个邻接点的约束,获得第一误差项;Obtaining a first error term according to the constraint of each adjacent point in the three-dimensional model;
根据所述标准模型中每个点的变换矩阵的约束,获得第二误差项;Obtaining a second error term according to the constraints of the transformation matrix of each point in the standard model;
根据所述标准模型与所述三维模型中对应点的约束,获得第三误差项;Obtaining a third error term according to constraints on corresponding points in the standard model and the three-dimensional model;
根据所述第一误差项、所述第二误差项以及所述第三误差项的加权和最小值,确定目标变换矩阵。A target transformation matrix is determined according to a weighted minimum value of the first error term, the second error term, and the third error term.
其中一种可能的实现方式中，所述第一误差项由公式 E_S = Σ_i Σ_{j∈N(i)} ||T_i − T_j||_F² 计算得到，其中，E_S为第一误差项，T_i与T_j为相邻点的变换矩阵，N(i)为第i个点的邻接点集合；所述第二误差项由公式 E_I = Σ_i ||T_i − I||_F² 计算得到，其中，E_I为第二误差项，I为单位矩阵；所述第三误差项由公式 E_C = Σ_i ||V_i − C_i||² 计算得到，其中，E_C为第三误差项，V_i为所述标准模型中经变换矩阵变换后的第i个点的坐标，C_i为所述三维模型中与所述标准模型中的V_i点相对应的第i个点的坐标；所述加权和最小值由公式 min(W_S·E_S + W_I·E_I + W_C·E_C) 计算得到，其中，W_S、W_I、W_C为权重值。In one possible implementation, the first error term is calculated by the formula E_S = Σ_i Σ_{j∈N(i)} ||T_i − T_j||_F², where E_S is the first error term, T_i and T_j are the transformation matrices of adjacent points, and N(i) is the set of points adjacent to the i-th point; the second error term is calculated by the formula E_I = Σ_i ||T_i − I||_F², where E_I is the second error term and I is the identity matrix; the third error term is calculated by the formula E_C = Σ_i ||V_i − C_i||², where E_C is the third error term, V_i is the coordinate of the i-th point of the standard model after transformation by its transformation matrix, and C_i is the coordinate of the corresponding i-th point in the three-dimensional model; and the weighted-sum minimum is calculated by the formula min(W_S·E_S + W_I·E_I + W_C·E_C), where W_S, W_I, and W_C are weight values.
其中一种可能的实现方式中,当上述指令被上述设备执行时,使得上述设备执行所述对所述变换后的标准模型进行编辑,以得到虚拟模型,包括:In one possible implementation, when the instruction is executed by the device, the device edits the transformed standard model to obtain a virtual model, including:
响应用户编辑表情目标体的操作,在所述变换后的标准模型中构建多个所述表情目标体,以得到虚拟模型。In response to the user's operation of editing the expression target body, a plurality of the expression target bodies are constructed in the transformed standard model to obtain a virtual model.
其中一种可能的实现方式中,当上述指令被上述设备执行时,使得上述设备执行以下步骤:In one possible implementation manner, when the above instruction is executed by the above device, the above device performs the following steps:
获取用户输入的交互信息;Get interactive information input by the user;
基于所述交互信息,获取视频流,其中,所述视频流中包含多个画面;Based on the interaction information, a video stream is acquired, wherein the video stream includes a plurality of pictures;
从每个所述画面中,提取得到人物面部特征;Extracting facial features of the person from each of the pictures;
基于所述人物面部特征以及虚拟模型，获取所述虚拟模型中多个表情目标体的表情系数，所述虚拟模型由如图1所示方法实施例提供的方法获得；Based on the facial features of the person and the virtual model, acquiring the expression coefficients of multiple expression target bodies in the virtual model, where the virtual model is obtained by the method provided in the method embodiment shown in FIG. 1;
基于每个所述表情目标体的表情系数,对所述虚拟模型进行驱动,其中,驱动后的虚拟模型进行动态显示。Based on the expression coefficient of each of the expression targets, the virtual model is driven, wherein the driven virtual model is dynamically displayed.
其中一种可能的实现方式中,当上述指令被上述设备执行时,使得上述设备执行所述从每个所述画面中,提取得到人物面部特征,包括:In one possible implementation, when the above instruction is executed by the above device, the above device executes the step of extracting facial features of a person from each of the pictures, including:
从每个所述画面中获取采集到的人物面部图像;Acquire the collected facial image of the person from each of the pictures;
对所述人物面部图像进行特征提取,以获得人物面部特征。Feature extraction is performed on the facial image of the person to obtain facial features of the person.
其中一种可能的实现方式中,当上述指令被上述设备执行时,使得上述设备执行所述从每个所述画面中,提取得到人物面部特征,包括:In one possible implementation, when the above instruction is executed by the above device, the above device executes the step of extracting facial features of a person from each of the pictures, including:
从每个所述画面中获取采集到的语音信号;Acquire the collected voice signal from each of the pictures;
对所述语音信号中每个字的口型进行识别,以获得人物面部特征。The mouth shape of each word in the speech signal is recognized to obtain the facial features of the person.
其中一种可能的实现方式中,所述人物面部特征中的特征点与所述虚拟模型中的关键点之间存在对应关系,当上述指令被上述设备执行时,使得上述设备执行所述基于所述人物面部特征以及虚拟模型,获取所述虚拟模型中多个表情目标体的表情系数,包括:In one possible implementation, there is a correspondence between feature points in the facial features of the person and key points in the virtual model. When the above instruction is executed by the above device, the above device executes the acquisition of expression coefficients of multiple expression targets in the virtual model based on the facial features of the person and the virtual model, including:
根据所述特征点与所述关键点的位置关系,确定每个表情目标体的表情系数。The expression coefficient of each expression target body is determined according to the positional relationship between the feature points and the key points.
其中一种可能的实现方式中，所述驱动后的虚拟模型由公式 B = Base + Σ_{i=1}^{m} W_i·B_i 计算得到，其中，Base为虚拟模型，B_i为第i个表情目标体，W_i为第i个表情目标体的表情系数，B为驱动后的虚拟模型。In one possible implementation, the driven virtual model is calculated by the formula B = Base + Σ_{i=1}^{m} W_i·B_i, where Base is the virtual model, B_i is the i-th expression target body, W_i is the expression coefficient of the i-th expression target body, and B is the driven virtual model.
图5所示的电子设备可以是终端设备，也可以是内置于上述终端设备的电路设备。该设备可以用于执行本申请图1所示实施例提供的构建方法中的功能/步骤，或者用于执行本申请图2所示实施例提供的交互方法中的功能/步骤。The electronic device shown in FIG. 5 may be a terminal device or a circuit device built into such a terminal device. The device can be used to perform the functions/steps of the construction method provided in the embodiment shown in FIG. 1 of the present application, or to perform the functions/steps of the interaction method provided in the embodiment shown in FIG. 2 of the present application.
如图5所示,电子设备900包括处理器910和存储器920。其中,处理器910和存储器920之间可以通过内部连接通路互相通信,传递控制和/或数据信号,该存储器920用于存储计算机程序,该处理器910用于从该存储器920中调用并运行该计算机程序。As shown in FIG5 , the electronic device 900 includes a processor 910 and a memory 920. The processor 910 and the memory 920 can communicate with each other through an internal connection path to transmit control and/or data signals. The memory 920 is used to store computer programs, and the processor 910 is used to call and run the computer program from the memory 920.
上述存储器920可以是只读存储器(read-only memory,ROM)、可存储静态信息和指令的其它类型的静态存储设备、随机存取存储器(random access memory,RAM)或可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(electricallyerasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备,或者还可以是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质等。The above-mentioned memory 920 can be a read-only memory (ROM), other types of static storage devices that can store static information and instructions, a random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, or it can be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed optical disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage medium or other magnetic storage device, or it can also be any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and can be accessed by a computer.
上述处理器910和存储器920可以合成一个处理装置，更常见的是彼此独立的部件，处理器910用于执行存储器920中存储的程序代码来实现上述功能。具体实现时，该存储器920也可以集成在处理器910中，或者独立于处理器910。The processor 910 and the memory 920 may be combined into one processing device, though more commonly they are components independent of each other; the processor 910 executes the program code stored in the memory 920 to implement the above functions. In a specific implementation, the memory 920 may also be integrated into the processor 910, or be independent of the processor 910.
应理解,图5所示的电子设备900能够实现本申请图1或图2所示实施例提供的方法的各个过程。电子设备900中的各个模块的操作和/或功能,分别为了实现上述方法实施例中的相应流程。具体可参见本申请图1或图2所示方法实施例中的描述,为避免重复,此处适当省略详细描述。It should be understood that the electronic device 900 shown in FIG. 5 can implement each process of the method provided in the embodiment shown in FIG. 1 or FIG. 2 of the present application. The operations and/or functions of each module in the electronic device 900 are respectively to implement the corresponding processes in the above method embodiments. For details, please refer to the description in the method embodiment shown in FIG. 1 or FIG. 2 of the present application. To avoid repetition, the detailed description is appropriately omitted here.
除此之外,为了使得电子设备900的功能更加完善,该电子设备900还可以包括摄像头930、电源940、输入单元950等中的一个或多个。In addition, in order to make the functions of the electronic device 900 more complete, the electronic device 900 may also include one or more of a camera 930, a power supply 940, an input unit 950, etc.
可选地，电源940用于给电子设备中的各种器件或电路提供电源。Optionally, the power supply 940 is used to supply power to various devices or circuits in the electronic device.
应理解,图5所示的电子设备900中的处理器910可以是片上系统SOC,该处理器910中可以包括中央处理器(Central Processing Unit;以下简称:CPU),还可以进一步包括其他类型的处理器,例如:图像处理器(Graphics Processing Unit;以下简称:GPU)等。It should be understood that the processor 910 in the electronic device 900 shown in Figure 5 can be a system on chip SOC, which can include a central processing unit (Central Processing Unit; hereinafter referred to as: CPU), and can further include other types of processors, such as: a graphics processor (Graphics Processing Unit; hereinafter referred to as: GPU), etc.
总之,处理器910内部的各部分处理器或处理单元可以共同配合实现之前的方法流程,且各部分处理器或处理单元相应的软件程序可存储在存储器920中。In summary, the various processors or processing units within the processor 910 can work together to implement the previous method flow, and the corresponding software programs of the various processors or processing units can be stored in the memory 920.
本申请还提供一种电子设备,所述设备包括存储介质和中央处理器,所述存储介质可以是非易失性存储介质,所述存储介质中存储有计算机可执行程序,所述中央处理器与所述非易失性存储介质连接,并执行所述计算机可执行程序以实现本申请图1或图2所示实施例提供的方法。The present application also provides an electronic device, which includes a storage medium and a central processing unit. The storage medium may be a non-volatile storage medium, in which a computer executable program is stored. The central processing unit is connected to the non-volatile storage medium and executes the computer executable program to implement the method provided in the embodiment shown in Figure 1 or Figure 2 of the present application.
以上各实施例中,涉及的处理器可以例如包括CPU、DSP、微控制器或数字信号处理器,还可包括GPU、嵌入式神经网络处理器(Neural-network Process Units;以下简称:NPU)和图像信号处理器(Image Signal Processing;以下简称:ISP),该处理器还可包括必要的硬件加速器或逻辑处理硬件电路,如ASIC,或一个或多个用于控制本申请技术方案程序执行的集成电路等。此外,处理器可以具有操作一个或多个软件程序的功能,软件程序可以存储在存储介质中。In the above embodiments, the processor involved may include, for example, a CPU, a DSP, a microcontroller or a digital signal processor, and may also include a GPU, an embedded neural network processor (Neural-network Process Units; hereinafter referred to as: NPU) and an image signal processor (Image Signal Processing; hereinafter referred to as: ISP). The processor may also include necessary hardware accelerators or logic processing hardware circuits, such as ASIC, or one or more integrated circuits for controlling the execution of the program of the technical solution of the present application. In addition, the processor may have the function of operating one or more software programs, and the software programs may be stored in a storage medium.
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行本申请图1或图2所示实施例提供的方法。An embodiment of the present application also provides a computer-readable storage medium, which stores a computer program. When the computer-readable storage medium is run on a computer, the computer executes the method provided in the embodiment shown in Figure 1 or Figure 2 of the present application.
本申请实施例还提供一种计算机程序产品,该计算机程序产品包括计算机程序,当其在计算机上运行时,使得计算机执行本申请图1或图2所示实施例提供的方法。An embodiment of the present application also provides a computer program product, which includes a computer program. When the computer program is run on a computer, it enables the computer to execute the method provided by the embodiment shown in Figure 1 or Figure 2 of the present application.
本领域普通技术人员可以意识到,本文中公开的实施例中描述的各单元及算法步骤,能够以电子硬件、计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in the embodiments disclosed herein can be implemented in a combination of electronic hardware, computer software, and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the systems, devices and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.
在本申请所提供的几个实施例中,任一功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory;以下简称:ROM)、随机存取存储器(Random Access Memory;以下简称:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。In several embodiments provided in the present application, if any function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application can be essentially or partly embodied in the form of a software product that contributes to the prior art. The computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory; hereinafter referred to as: ROM), random access memory (Random Access Memory; hereinafter referred to as: RAM), disk or optical disk, and other media that can store program codes.
The above are only specific embodiments of this application. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in this application, and all such changes or substitutions shall fall within the protection scope of this application. The protection scope of this application shall be subject to the protection scope of the claims.
Those skilled in the art should understand that the embodiments of the present invention described above and shown in the accompanying drawings are given by way of example only and do not limit the present invention. The advantages of the present invention have been fully and effectively realized. The functional and structural principles of the present invention have been shown and described in the embodiments, and implementations of the present invention may be varied or modified in any way without departing from those principles.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110481008.4A CN113344770B (en) | 2021-04-30 | 2021-04-30 | Virtual model and construction method thereof, interaction method and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113344770A CN113344770A (en) | 2021-09-03 |
CN113344770B true CN113344770B (en) | 2024-11-08 |
Family
ID=77469214
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110481008.4A Active CN113344770B (en) | 2021-04-30 | 2021-04-30 | Virtual model and construction method thereof, interaction method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344770B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116681596A (en) * | 2023-05-12 | 2023-09-01 | 华为技术有限公司 | Object model rotation method and related equipment thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103377484A (en) * | 2012-04-28 | 2013-10-30 | 上海明器多媒体科技有限公司 | Method for controlling role expression information for three-dimensional animation production |
CN110335334A (en) * | 2019-07-04 | 2019-10-15 | 北京字节跳动网络技术有限公司 | Avatars drive display methods, device, electronic equipment and storage medium |
CN111045582A (en) * | 2019-11-28 | 2020-04-21 | 深圳市木愚科技有限公司 | Personalized virtual portrait activation interaction system and method |
CN111127641A (en) * | 2019-12-31 | 2020-05-08 | 中国人民解放军陆军工程大学 | Three-dimensional human body parametric modeling method with high-fidelity facial features |
CN112181127A (en) * | 2019-07-02 | 2021-01-05 | 上海浦东发展银行股份有限公司 | Method and device for man-machine interaction |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109584353B (en) * | 2018-10-22 | 2023-04-07 | 北京航空航天大学 | Method for reconstructing three-dimensional facial expression model based on monocular video |
CN110490959B (en) * | 2019-08-14 | 2024-01-30 | 腾讯科技(深圳)有限公司 | Three-dimensional image processing method and device, virtual image generating method and electronic equipment |
CN110866968A (en) * | 2019-10-18 | 2020-03-06 | 平安科技(深圳)有限公司 | Method for generating virtual character video based on neural network and related equipment |
CN111325846B (en) * | 2020-02-13 | 2023-01-20 | 腾讯科技(深圳)有限公司 | Expression base determination method, avatar driving method, device and medium |
CN112508049B (en) * | 2020-11-03 | 2023-11-17 | 北京交通大学 | Clustering method based on group sparse optimization |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11748934B2 (en) | Three-dimensional expression base generation method and apparatus, speech interaction method and apparatus, and medium | |
US10860838B1 (en) | Universal facial expression translation and character rendering system | |
CN111383308B (en) | Method and electronic device for generating animated expressions | |
CN112836064A (en) | Knowledge graph completion method, device, storage medium and electronic device | |
CN109859305A (en) | Three-dimensional face modeling, recognition methods and device based on multi-angle two-dimension human face | |
CN103765479A (en) | Image-based multi-view 3D face generation | |
WO2013020247A1 (en) | Parameterized 3d face generation | |
US10964083B1 (en) | Facial animation models | |
CN107578467B (en) | Three-dimensional modeling method and device for medical instrument | |
CN114021222B (en) | Building modeling method, electronic device and computer storage medium | |
CN115346262A (en) | Method, device and equipment for determining expression driving parameters and storage medium | |
CN111739134A (en) | Virtual character model processing method and device and readable storage medium | |
CN113344770B (en) | Virtual model and construction method thereof, interaction method and electronic device | |
CN114708636A (en) | Dense face grid expression driving method, device and medium | |
CN112435316B (en) | Method and device for preventing mold penetration in game, electronic equipment and storage medium | |
KR102737091B1 (en) | Method and system for generating morphable 3d moving model | |
CN117974867B (en) | A monocular face avatar generation method based on Gaussian point rendering | |
CN111914595A (en) | A method and device for 3D pose estimation of human hand based on color image | |
CN114299203A (en) | Method and device for processing virtual model | |
CN117422831B (en) | Method and device for generating three-dimensional eyebrow shape, electronic device and storage medium | |
CN116204167B (en) | Method and system for realizing full-flow visual editing Virtual Reality (VR) | |
CN113837053B (en) | Biological facial alignment model training method, biological facial alignment method and device | |
TWI815021B (en) | Device and method for depth calculation in augmented reality | |
US20240362844A1 (en) | Facial expression processing method and apparatus, computer device, and storage medium | |
US20240233230A9 (en) | Automated system for generation of facial animation rigs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||