
CN111275799A - Animation generation method and device and electronic equipment - Google Patents

Animation generation method and device and electronic equipment Download PDF

Info

Publication number
CN111275799A
Authority
CN
China
Prior art keywords
grid points
grid
target object
points
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010062743.7A
Other languages
Chinese (zh)
Other versions
CN111275799B (en)
Inventor
李佩易 (Li Peiyi)
王长虎 (Wang Changhu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010062743.7A priority Critical patent/CN111275799B/en
Publication of CN111275799A publication Critical patent/CN111275799A/en
Application granted granted Critical
Publication of CN111275799B publication Critical patent/CN111275799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the disclosure discloses a method and a device for generating animation and electronic equipment. The animation generation method comprises the following steps: acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order; converting the plurality of grid points into at least one key point according to a first mapping relation; calculating weight values between the plurality of grid points and the at least one key point; acquiring a first position of the at least one key point; determining positions of the plurality of grid points according to the first positions and the weight values; and generating the animation of the target object according to the positions of the plurality of grid points. The method and the device for generating the animation solve the technical problem that the animation is difficult to generate due to too many grid points in the prior art by converting the grid points into the key points and controlling the movement of the grid points through the key points to generate the animation of the target object.

Description

Animation generation method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for generating an animation, and an electronic device.
Background
Currently, with the continuous progress of computer technology and the development of multimedia technology, three-dimensional reconstruction has become a research hotspot in the field of graphics in recent years. Reconstruction approaches are generally divided into direct methods, which acquire three-dimensional data directly (for example with depth or laser scanners), and indirect methods. The indirect method refers to reconstructing a three-dimensional target object from one or more two-dimensional images, and includes three-dimensional target object reconstruction based on statistical models, based on multi-view geometry, based on photometric stereo, and based on machine learning, the last of which has developed rapidly in recent years.
In the prior art, a target object may be represented using a mesh that partitions the target object into patches. However, when a mesh is used to represent a target object, animating it is very difficult because the mesh is composed of many vertices. In general, displaying a target object in fine detail requires a very large number of vertices, every one of these vertices must be updated when the target object moves, and it is therefore difficult to generate an animation of the target object by driving the vertices directly.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
a method for generating animation includes:
acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order;
converting the plurality of grid points into at least one key point according to a first mapping relation;
calculating weight values between the plurality of grid points and the at least one key point;
acquiring a first position of the at least one key point;
determining positions of the plurality of grid points according to the first positions and the weight values;
and generating the animation of the target object according to the positions of the plurality of grid points.
According to another aspect of the present disclosure, the following technical solutions are also provided:
an animation generation apparatus comprising:
a grid point obtaining module, configured to obtain a plurality of grid points of a target object, where the plurality of grid points are a plurality of vertices constituting a grid of the target object, and the plurality of grid points are arranged in a first order;
a key point conversion module, configured to convert the plurality of grid points into at least one key point according to a first mapping relationship;
the weight calculation module is used for calculating weight values between the grid points and the at least one key point;
a key point position obtaining module, configured to obtain a first position of the at least one key point;
a grid point position obtaining module, configured to determine positions of the grid points according to the first positions and the weight values;
and the animation generation module is used for generating the animation of the target object according to the positions of the grid points.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions, so that the processor realizes the steps of any animation generation method when executing.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer-readable storage medium for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the animation generation methods described above.
The embodiment of the disclosure discloses a method and a device for generating animation and electronic equipment. The animation generation method comprises the following steps: acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order; converting the plurality of grid points into at least one key point according to a first mapping relation; calculating weight values between the plurality of grid points and the at least one key point; acquiring a first position of the at least one key point; determining positions of the plurality of grid points according to the first positions and the weight values; and generating the animation of the target object according to the positions of the plurality of grid points. The method and the device for generating the animation solve the technical problem that the animation is difficult to generate due to too many grid points in the prior art by converting the grid points into the key points and controlling the movement of the grid points through the key points to generate the animation of the target object.
The foregoing is a summary of the present disclosure, provided to promote a clear understanding of its technical means; the present disclosure may also be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flow chart of a method for generating an animation according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a triangular mesh in an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an animation generation apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an animation generation method. The animation generation method provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrated in a server, a terminal device and the like. As shown in fig. 1, the animation generation method mainly includes the following steps S101 to S106. Wherein:
step S101, acquiring a plurality of grid points of a target object;
wherein the plurality of grid points are a plurality of vertices of a grid constituting the target object, wherein the plurality of grid points are arranged in a first order. Optionally, the mesh includes a plurality of triangles, and a vertex of each triangle is the mesh point. Fig. 2 is a schematic diagram of a triangular mesh.
Optionally, the obtaining a plurality of grid points of the target object includes: converting a two-dimensional image of the target object into grid points of a three-dimensional image of the target object through an image reconstruction model. In one embodiment, the target object is a three-dimensional target object obtained from a two-dimensional target object. For example, the three-dimensional target object is obtained through an image reconstruction model that regresses, from the two-dimensional target object, the grid points of the corresponding three-dimensional target object. As can be appreciated, the training set of the image reconstruction model consists of sample image pairs, each including a two-dimensional target object and its corresponding three-dimensional target object; the three-dimensional target object is preprocessed into a vector or matrix of grid points arranged in the first order and used as supervision data. The image reconstruction model generates, from the two-dimensional object, a vector or matrix of the same dimension as the supervision data, and an error against the supervision data is calculated so as to update the parameters of the image reconstruction model. After the image reconstruction model is trained, it can directly regress, from a two-dimensional object, the grid points of the corresponding three-dimensional target object, with the grid points arranged in the first order.
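As a rough illustration of this training loop, the sketch below assumes a PyTorch-style setup in which some feature extractor has already produced a fixed-length feature vector per image; the network architecture, feature dimension, and loss are placeholders, since the patent only requires that the model regress an N x 3 grid-point matrix in the first order and be updated from its error against the supervision data.

```python
import torch
import torch.nn as nn

# Hypothetical image reconstruction model: the patent does not specify an
# architecture, so a small MLP over precomputed image features stands in here.
class MeshRegressor(nn.Module):
    def __init__(self, feature_dim: int, num_vertices: int):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_vertices * 3),  # N x 3 grid-point coordinates
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # Output reshaped to (batch, N, 3): grid points in the first order.
        return self.net(image_features).view(-1, self.num_vertices, 3)


def training_step(model, optimizer, image_features, gt_vertices):
    """One supervised step: regress grid points, compare them with the
    supervision data (ground-truth vertices in the first order), and update."""
    pred = model(image_features)                      # (batch, N, 3)
    loss = nn.functional.mse_loss(pred, gt_vertices)  # error vs. supervision data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```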
Optionally, the obtaining a plurality of grid points of the target object includes: acquiring a first order table;
acquiring a plurality of grid points of a target object and arranging the grid points into a grid point matrix according to the first order table. In this alternative embodiment, a first order table is obtained before the plurality of grid points of the target object are obtained; the order table specifies the arrangement order of the grid points. After the plurality of grid points of the target object are obtained, they are arranged sequentially according to the first order table into a matrix or vector of grid points. For example, if the target object is a three-dimensional target object, the position of each grid point is its coordinates on the three coordinate axes. Each grid point carries an identifier in the grid; if the identifier is a number, the first order table specifies the order of those numbers, and the grid points are then arranged into a matrix or vector according to their numbers and their positions in the first order table. For a three-dimensional target object, each vertex can be represented as a 1 × 3 vector, so a mesh containing N vertices in total can be represented as an N × 3 matrix or vector.
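A minimal sketch of this arrangement step, assuming grid points keyed by numeric identifiers and a first order table given as a list of those identifiers (both illustrative assumptions):

```python
import numpy as np

def arrange_grid_points(grid_points: dict, first_order_table: list) -> np.ndarray:
    """Arrange grid points into an N x 3 matrix following the first order table.

    grid_points maps each grid point's identifier (e.g. its number in the mesh)
    to its (x, y, z) coordinates; both the mapping and the numeric identifiers
    are illustrative assumptions.
    """
    return np.array([grid_points[idx] for idx in first_order_table], dtype=np.float64)

# Usage: three vertices of a toy mesh, ordered 2 -> 0 -> 1 by the order table.
points = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
vertex_matrix = arrange_grid_points(points, first_order_table=[2, 0, 1])  # shape (3, 3)
```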
Since the grid points of all the target objects are arranged in the first order, once the correspondence between the key points and the grid points has been determined, it can be applied to all target objects without recalculating the correspondence between key points and grid points for each target object.
Step S102, converting the grid points into at least one key point according to a first mapping relation;
the key points are key points of the target object, and specifically, the key points may be bone key points of the target object.
The first mapping relation is used to determine which grid points each key point is composed of, and with what weight each of those grid points contributes. For example, if the first mapping relation specifies that key point 1 is composed of grid points 1, 2 and 3, each with a weight of 1/3, then the coordinates of key point 1 are calculated with the coordinates of grid points 1, 2 and 3 each contributing 1/3.
Optionally, the first mapping relationship is a first conversion matrix; the number of rows of the first conversion matrix is the number of key points, the number of columns is the number of grid points, and the values of the elements in the first conversion matrix are the weight values with which the key points are generated from the grid points. Illustratively, the first conversion matrix is an M × N matrix, where M is the number of key points and N is the number of grid points, and all grid points together form an N × 3 matrix. Illustratively, M is 3, N is 6, and the first conversion matrix is:
$$A = \begin{bmatrix} a_{11} & a_{12} & 0 & 0 & a_{15} & 0 \\ a_{21} & 0 & a_{23} & 0 & 0 & a_{26} \\ 0 & a_{32} & 0 & 0 & a_{35} & a_{36} \end{bmatrix}$$

It represents a total of 3 key points: key point 1 is composed of grid points 1, 2 and 5, whose weights are $a_{11}$, $a_{12}$ and $a_{15}$; key point 2 is composed of grid points 1, 3 and 6, whose weights are $a_{21}$, $a_{23}$ and $a_{26}$; key point 3 is composed of grid points 2, 5 and 6, whose weights are $a_{32}$, $a_{35}$ and $a_{36}$.

The matrix of the grid points is:

$$P = \begin{bmatrix} p_{1} \\ p_{2} \\ p_{3} \\ p_{4} \\ p_{5} \\ p_{6} \end{bmatrix}$$

where each element $p_{i}$ is a 1 × 3 vector representing the coordinates of grid point $i$ in the three-dimensional coordinate system. Let the three key points be $k_{1}$, $k_{2}$ and $k_{3}$; then the positions of the key points can be calculated by the following formula:

$$\begin{bmatrix} k_{1} \\ k_{2} \\ k_{3} \end{bmatrix} = A \cdot P = \begin{bmatrix} a_{11} & a_{12} & 0 & 0 & a_{15} & 0 \\ a_{21} & 0 & a_{23} & 0 & 0 & a_{26} \\ 0 & a_{32} & 0 & 0 & a_{35} & a_{36} \end{bmatrix} \begin{bmatrix} p_{1} \\ p_{2} \\ p_{3} \\ p_{4} \\ p_{5} \\ p_{6} \end{bmatrix}$$
Thereby, the plurality of grid points may be converted into the at least one key point. It is understood that the matrices and their dimensions in the above example are illustrative only and do not limit the present disclosure; any other method may be used to convert the grid points into the key points according to actual needs.
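The example above can be written out directly as a matrix product. The sketch below uses NumPy and fills the non-zero entries of the first conversion matrix with equal weights of 1/3 purely for illustration; only the sparsity pattern follows the example.

```python
import numpy as np

# First conversion matrix A (M x N): row i holds the weights with which the
# grid points compose key point i.  The sparsity pattern mirrors the example
# above; the 1/3 values are placeholders, not values from the patent.
A = np.array([
    [1/3, 1/3, 0.0, 0.0, 1/3, 0.0],   # key point 1 from grid points 1, 2, 5
    [1/3, 0.0, 1/3, 0.0, 0.0, 1/3],   # key point 2 from grid points 1, 3, 6
    [0.0, 1/3, 0.0, 0.0, 1/3, 1/3],   # key point 3 from grid points 2, 5, 6
])

# Grid point matrix P (N x 3): one 3D coordinate per row, in the first order.
P = np.random.rand(6, 3)

# Key points K (M x 3): the weighted combination of grid points, K = A · P.
K = A @ P
```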
The first mapping relationship may be a preset relationship; for example, the correspondence between grid points and key points is predetermined from prior knowledge. Alternatively, the mapping relationship may be obtained through deep learning: for example, a model regresses a mesh to M points, the regressed M points and the M points in the supervision data are used to calculate an error for updating the model, and after the model is trained the mapping relationship is determined. Other ways of obtaining the first mapping relationship are not described in detail here; any way of obtaining the first mapping relationship may be used in the present disclosure.
Step S103, calculating weight values between the grid points and the at least one key point;
When generating the animation, the positions of the grid points need to be determined according to the positions of the key points, and grid points closer to a key point are influenced by it more strongly. Therefore, optionally, step S103 includes: calculating the weight value between each grid point and each key point according to the distance between each grid point and each key point. It can be understood that the distance between a grid point and a key point is the distance within the mesh, i.e. it must be computed along the connection relationships of the grid points in the mesh rather than as the straight-line spatial distance; for example, the distance from a fingertip to a toe tip of a human body must follow the mesh, because there is no direct path between them through space. Typically, the shortest distance from a grid point to each key point can be calculated with Dijkstra's algorithm, and a weight value is then assigned from the grid point to each key point according to the ranking of the distances; the weight value may also be set to 0 once the distance exceeds a predetermined value, in which case the motion of that key point is considered to have no influence on the grid point. It is understood that other methods, such as the projection method or the heat-equilibrium method, may also be used to calculate the weight values, and are not described again here.
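As one possible realization of the Dijkstra-based variant described above, the sketch below builds an edge-weighted graph from the mesh, measures in-mesh distances with SciPy's shortest-path routine, and converts them to normalized weights. The inverse-distance conversion, the per-key-point seed vertices, and the cutoff handling are all assumptions; the patent only requires that weights follow the in-mesh distance ranking and may be zeroed beyond a threshold.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def mesh_keypoint_weights(vertices, edges, keypoint_seed_vertices, cutoff=np.inf):
    """Weight matrix W (N x M) from in-mesh shortest-path distances.

    vertices: (N, 3) array of grid-point coordinates.
    edges: list of (i, j) vertex-index pairs taken from the mesh triangles.
    keypoint_seed_vertices: one vertex index per key point, assumed to lie at
    (or nearest to) that key point; this seeding is an illustrative assumption.
    """
    n = len(vertices)
    i, j = np.asarray(edges).T
    lengths = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    graph = coo_matrix((lengths, (i, j)), shape=(n, n))

    # Shortest in-mesh distance from every key point seed to every vertex.
    dist = dijkstra(graph, directed=False, indices=keypoint_seed_vertices)  # (M, N)

    inv = 1.0 / (dist.T + 1e-8)        # closer key points get larger weights
    inv[dist.T > cutoff] = 0.0         # beyond the cutoff a key point has no influence
    return inv / inv.sum(axis=1, keepdims=True)   # each vertex's weights sum to 1
```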
It will be appreciated that the weight values may constitute a weight matrix to facilitate subsequent calculations, for example, if the number of vertices of the mesh is N, and the number of keypoints is M, then the weight matrix is a matrix of N × M, where each element in the matrix represents a weight between a vertex of the mesh and a keypoint.
Step S104, acquiring a first position of the at least one key point;
In the present disclosure, the target object is moved by means of the key points; therefore, in this step, the first positions of the key points, i.e. the positions of the key points after they have been moved, are acquired. Illustratively, the positions of the key points are controlled through a human-computer interaction interface, a preset script, or the like. In this step, the position of a key point after movement is acquired as its first position.
Step S105, determining the positions of the grid points according to the first positions and the weight values;
Optionally, step S105 includes: calculating the positions of the plurality of grid points according to a key point matrix formed by the coordinates of the plurality of first positions and a weight matrix formed by the plurality of weight values. Illustratively, the weight matrix is an N × M matrix and the key point matrix is an M × 3 matrix, so the N × 3 matrix of the grid points can be obtained by matrix multiplication of the weight matrix and the key point matrix; thus the positions of the grid points can be determined from the positions of the key points. Exemplarily, N is 6, M is 3, and the weight matrix is:
$$W = \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \\ w_{31} & w_{32} & w_{33} \\ w_{41} & w_{42} & w_{43} \\ w_{51} & w_{52} & w_{53} \\ w_{61} & w_{62} & w_{63} \end{bmatrix}$$

The key point matrix is:

$$K = \begin{bmatrix} k_{1} \\ k_{2} \\ k_{3} \end{bmatrix}$$

where $k_{j}$ is a 1 × 3 vector representing the three-dimensional coordinates of key point $j$ in space, and $w_{ij}$ represents the weight value between the $i$-th grid point and the $j$-th key point.

The coordinate matrix of the grid points can be obtained by the following formula:

$$P = W \cdot K = \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \\ w_{31} & w_{32} & w_{33} \\ w_{41} & w_{42} & w_{43} \\ w_{51} & w_{52} & w_{53} \\ w_{61} & w_{62} & w_{63} \end{bmatrix} \begin{bmatrix} k_{1} \\ k_{2} \\ k_{3} \end{bmatrix}$$
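Concretely, step S105 is a single matrix multiplication. The sketch below assumes a 6 × 3 weight matrix and three moved key points; the numeric values are placeholders.

```python
import numpy as np

# Assumed inputs for illustration: 6 grid points, 3 key points.
W = np.full((6, 3), 1/3)                 # weight matrix (N x M); each row sums to 1
K_moved = np.array([[0.1, 0.0, 0.0],     # first positions of the key points,
                    [0.5, 0.2, 0.0],     # i.e. their coordinates after being moved
                    [0.0, 0.4, 0.3]])

# Positions of the grid points: every vertex is a weighted blend of key points.
P_new = W @ K_moved                      # (N x 3) coordinate matrix of the grid points
```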
step S106, generating the animation of the target object according to the positions of the grid points.
In this step, a new mesh is re-rendered according to the positions of the plurality of grid points determined in step S105, so that a new target object can be generated. Performing the above operations over consecutive frames, that is, continuously controlling the positions of the grid points through the positions of the key points, generates an animation of the target object. Since the number of key points is much smaller than the number of grid points, the motion of all the grid points can be controlled by controlling only a few points, so generating the animation of the target object becomes simple. Meanwhile, since the grid points of all target objects are arranged in the first order, the subsequent mapping relationship, the weight-value calculation method, and the grid-point-position calculation method are the same; that is, the above method can be applied to any target object of the same type (such as a human body or a quadruped animal) without resetting the relationship and calculation methods for different target objects.
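Putting the steps together, a per-frame driving loop might look like the following sketch. The `render` callback and the source of the per-frame key-point positions are assumptions; the patent only specifies that a new mesh is re-rendered from the recomputed grid-point positions each frame.

```python
import numpy as np

def animate(W, keypoint_positions_per_frame, render):
    """Sketch of the per-frame flow once the weight matrix W is known.

    W: weight matrix, N x M, from step S103.
    keypoint_positions_per_frame: iterable of M x 3 first positions (step S104),
        e.g. produced by a UI or a preset script; an assumed input here.
    render: callback that turns an N x 3 vertex matrix into one rendered frame;
        its signature is an assumption, the patent only requires re-rendering.
    """
    frames = []
    for K in keypoint_positions_per_frame:   # moved key points, frame by frame
        P = W @ K                            # step S105: new grid-point positions
        frames.append(render(P))             # step S106: re-render the mesh
    return frames

# Usage with placeholder data: 6 vertices, 3 key points, 2 frames.
W = np.full((6, 3), 1/3)
keypoints_per_frame = [np.zeros((3, 3)), np.full((3, 3), 0.1)]
frames = animate(W, keypoints_per_frame, render=lambda P: P.copy())
```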
The embodiment of the disclosure discloses a method and a device for generating animation and electronic equipment. The animation generation method comprises the following steps: acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order; converting the plurality of grid points into at least one key point according to a first mapping relation; calculating weight values between the plurality of grid points and the at least one key point; acquiring a first position of the at least one key point; determining positions of the plurality of grid points according to the first positions and the weight values; and generating the animation of the target object according to the positions of the plurality of grid points. The method and the device for generating the animation solve the technical problem that the animation is difficult to generate due to too many grid points in the prior art by converting the grid points into the key points and controlling the movement of the grid points through the key points to generate the animation of the target object.
In the above, although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, other steps may also be added by those skilled in the art, and these obvious modifications or equivalents should also be included in the protection scope of the present disclosure, and are not described herein again.
For convenience of description, only the relevant parts of the embodiments of the present disclosure are shown, and details of the specific techniques are not disclosed, please refer to the embodiments of the method of the present disclosure.
The embodiment of the disclosure provides an animation generation device. The apparatus may perform the steps described in the above-described animation generation method embodiment. As shown in fig. 3, the apparatus 300 mainly includes: a grid point acquisition module 301, a keypoint conversion module 302, a weight calculation module 303, a keypoint position acquisition module 304, a grid point position acquisition module 305, and an animation generation module 306. Wherein,
a grid point obtaining module 301, configured to obtain a plurality of grid points of a target object, where the plurality of grid points are a plurality of vertices constituting a grid of the target object, and the plurality of grid points are arranged in a first order;
a keypoint conversion module 302, configured to convert the plurality of grid points into at least one keypoint according to a first mapping relationship;
a weight calculating module 303, configured to calculate weight values between the mesh points and the at least one keypoint;
a keypoint location obtaining module 304, configured to obtain a first location of the at least one keypoint;
a grid point position obtaining module 305, configured to determine positions of the plurality of grid points according to the first positions and the weight values;
and an animation generating module 306, configured to generate an animation of the target object according to the positions of the plurality of grid points.
Further, the first mapping relationship is a first conversion matrix, the number of rows of the first conversion matrix is the number of the keypoints, the number of columns of the first conversion matrix is the number of the grid points, and the values of the elements in the first conversion matrix are weight values for generating the keypoints through the grid points.
Further, the weight calculating module 303 is further configured to:
and calculating the weight value between each grid point and each key point according to the distance between each grid point and each key point.
Further, the distance between each grid point and each key point is the distance in the grid.
Further, the grid point position obtaining module 305 is further configured to:
and calculating the positions of the plurality of grid points according to a key point matrix formed by the coordinates of the plurality of first positions and a weight matrix formed by the plurality of weight values.
Further, the grid point obtaining module 301 is further configured to:
acquiring a first order table;
acquiring a plurality of grid points of a target object and arranging the grid points into a grid point matrix according to the first order table.
Further, the grid point obtaining module 301 is further configured to:
and converting the two-dimensional image of the target object into grid points of the three-dimensional image of the target object through an image reconstruction model.
The apparatus shown in fig. 3 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText transfer protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order; converting the plurality of grid points into at least one key point according to a first mapping relation; calculating weight values between the plurality of grid points and the at least one key point; acquiring a first position of the at least one key point; determining positions of the plurality of grid points according to the first positions and the weight values; and generating the animation of the target object according to the positions of the plurality of grid points.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an animation generation method, including:
acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order;
converting the plurality of grid points into at least one key point according to a first mapping relation;
calculating weight values between the plurality of grid points and the at least one key point;
acquiring a first position of the at least one key point;
determining positions of the plurality of grid points according to the first positions and the weight values;
and generating the animation of the target object according to the positions of the plurality of grid points.
Further, the first mapping relationship is a first conversion matrix, the number of rows of the first conversion matrix is the number of the keypoints, the number of columns of the first conversion matrix is the number of the grid points, and the values of the elements in the first conversion matrix are weight values for generating the keypoints through the grid points.
Further, the calculating the weight values between the plurality of grid points and the at least one key point includes:
and calculating the weight value between each grid point and each key point according to the distance between each grid point and each key point.
Further, the distance between each grid point and each key point is the distance in the grid.
Further, the determining the positions of the plurality of grid points according to the first positions and the weight values includes:
and calculating the positions of the plurality of grid points according to a key point matrix formed by the coordinates of the plurality of first positions and a weight matrix formed by the plurality of weight values.
Further, the acquiring a plurality of grid points of the target object includes:
acquiring a first order table;
acquiring a plurality of grid points of a target object and arranging the grid points into a grid point matrix according to the first order table.
Further, the acquiring a plurality of grid points of the target object includes:
and converting the two-dimensional image of the target object into grid points of the three-dimensional image of the target object through an image reconstruction model.
According to one or more embodiments of the present disclosure, there is provided an animation generation apparatus including:
a grid point obtaining module, configured to obtain a plurality of grid points of a target object, where the plurality of grid points are a plurality of vertices constituting a grid of the target object, and the plurality of grid points are arranged in a first order;
a key point conversion module, configured to convert the plurality of grid points into at least one key point according to a first mapping relationship;
the weight calculation module is used for calculating weight values between the grid points and the at least one key point;
a key point position obtaining module, configured to obtain a first position of the at least one key point;
a grid point position obtaining module, configured to determine positions of the grid points according to the first positions and the weight values;
and the animation generation module is used for generating the animation of the target object according to the positions of the grid points.
Further, the first mapping relationship is a first conversion matrix, the number of rows of the first conversion matrix is the number of the keypoints, the number of columns of the first conversion matrix is the number of the grid points, and the values of the elements in the first conversion matrix are weight values for generating the keypoints through the grid points.
Further, the weight calculation module is further configured to:
and calculating the weight value between each grid point and each key point according to the distance between each grid point and each key point.
Further, the distance between each grid point and each key point is the distance in the grid.
Further, the grid point position obtaining module is further configured to:
and calculating the positions of the plurality of grid points according to a key point matrix formed by the coordinates of the plurality of first positions and a weight matrix formed by the plurality of weight values.
Further, the grid point obtaining module is further configured to:
acquiring a first order table;
acquiring a plurality of grid points of a target object and arranging the grid points into a grid point matrix according to the first order table.
Further, the grid point obtaining module is further configured to:
and converting the two-dimensional image of the target object into grid points of the three-dimensional image of the target object through an image reconstruction model.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions, so that the processor realizes the steps of any animation generation method when executing.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the animation generation methods described above.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (10)

1. A method for generating an animation, the method comprising:
acquiring a plurality of grid points of a target object, wherein the plurality of grid points are a plurality of vertexes of a grid forming the target object, and the plurality of grid points are arranged in a first order;
converting the plurality of grid points into at least one key point according to a first mapping relation;
calculating weight values between the plurality of grid points and the at least one key point;
acquiring a first position of the at least one key point;
determining positions of the plurality of grid points according to the first positions and the weight values;
and generating the animation of the target object according to the positions of the plurality of grid points.
2. The animation generation method according to claim 1, wherein the first mapping relationship is a first conversion matrix whose number of rows is the number of the keypoints, whose number of columns is the number of the grid points, and whose values of elements in the first conversion matrix are weight values at which keypoints are generated by the grid points.
3. The animation generation method of claim 1, wherein the calculating weight values between the plurality of mesh points and the at least one keypoint comprises:
and calculating the weight value between each grid point and each key point according to the distance between each grid point and each key point.
4. The animation generation method of claim 3, wherein the distance between each grid point and each keypoint is a distance in the grid.
5. The animation generation method of claim 1, wherein the determining the locations of the plurality of grid points according to the first locations and the weight values comprises:
and calculating the positions of the plurality of grid points according to a key point matrix formed by the coordinates of the plurality of first positions and a weight matrix formed by the plurality of weight values.
6. The animation generation method as claimed in claim 1, wherein the obtaining of the plurality of mesh points of the target object includes:
acquiring a first order table;
acquiring a plurality of grid points of a target object and arranging the grid points into a grid point matrix according to the first order table.
7. The animation generation method as claimed in claim 1, wherein the obtaining of the plurality of mesh points of the target object includes:
and converting the two-dimensional image of the target object into grid points of the three-dimensional image of the target object through an image reconstruction model.
8. An animation generation apparatus comprising:
a grid point obtaining module, configured to obtain a plurality of grid points of a target object, where the plurality of grid points are a plurality of vertices constituting a grid of the target object, and the plurality of grid points are arranged in a first order;
a key point conversion module, configured to convert the plurality of grid points into at least one key point according to a first mapping relationship;
the weight calculation module is used for calculating weight values between the grid points and the at least one key point;
a key point position obtaining module, configured to obtain a first position of the at least one key point;
a grid point position obtaining module, configured to determine positions of the grid points according to the first positions and the weight values;
and the animation generation module is used for generating the animation of the target object according to the positions of the grid points.
9. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when running implements the animation generation method of any of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform the animation generation method of any of claims 1-7.
CN202010062743.7A 2020-01-20 2020-01-20 Animation generation method and device and electronic equipment Active CN111275799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010062743.7A CN111275799B (en) 2020-01-20 2020-01-20 Animation generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010062743.7A CN111275799B (en) 2020-01-20 2020-01-20 Animation generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111275799A true CN111275799A (en) 2020-06-12
CN111275799B CN111275799B (en) 2021-03-23

Family

ID=71003309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010062743.7A Active CN111275799B (en) 2020-01-20 2020-01-20 Animation generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111275799B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549767A (en) * 2022-04-24 2022-05-27 广州中望龙腾软件股份有限公司 PLM (product quality model) processing method, system and device and readable medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020089507A1 (en) * 2000-12-07 2002-07-11 Yasunori Ohto Animation generation method and apparatus
CN102800119A (en) * 2012-06-13 2012-11-28 天脉聚源(北京)传媒科技有限公司 Animation display method and device of three-dimensional curve
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
WO2016205265A1 (en) * 2015-06-19 2016-12-22 Schlumberger Technology Corporation Efficient algorithms for volume visualization on irregular grids
CN106780697A (en) * 2016-12-07 2017-05-31 珠海金山网络游戏科技有限公司 It is a kind of based on normal direction, geometry, uv factors lattice simplified method
CN106960459A (en) * 2016-12-26 2017-07-18 北京航空航天大学 The method relocated in role animation based on the dynamic (dynamical) covering technology of expanding location and weight
CN109035388A (en) * 2018-06-28 2018-12-18 北京的卢深视科技有限公司 Three-dimensional face model method for reconstructing and device
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110288681A (en) * 2019-06-25 2019-09-27 网易(杭州)网络有限公司 Skinning method, device, medium and the electronic equipment of actor model

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020089507A1 (en) * 2000-12-07 2002-07-11 Yasunori Ohto Animation generation method and apparatus
CN102800119A (en) * 2012-06-13 2012-11-28 天脉聚源(北京)传媒科技有限公司 Animation display method and device of three-dimensional curve
WO2016205265A1 (en) * 2015-06-19 2016-12-22 Schlumberger Technology Corporation Efficient algorithms for volume visualization on irregular grids
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
CN106780697A (en) * 2016-12-07 2017-05-31 珠海金山网络游戏科技有限公司 It is a kind of based on normal direction, geometry, uv factors lattice simplified method
CN106960459A (en) * 2016-12-26 2017-07-18 北京航空航天大学 The method relocated in role animation based on the dynamic (dynamical) covering technology of expanding location and weight
CN109035388A (en) * 2018-06-28 2018-12-18 北京的卢深视科技有限公司 Three-dimensional face model method for reconstructing and device
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110288681A (en) * 2019-06-25 2019-09-27 网易(杭州)网络有限公司 Skinning method, device, medium and the electronic equipment of actor model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Ya (王娅): "Application of mesh simplification based on vertex weights in virtual human faces", Computer Simulation (《计算机仿真》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549767A (en) * 2022-04-24 2022-05-27 广州中望龙腾软件股份有限公司 PLM (product quality model) processing method, system and device and readable medium

Also Published As

Publication number Publication date
CN111275799B (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN111260774B (en) Method and device for generating 3D joint point regression model
CN109754464B (en) Method and apparatus for generating information
CN111243085B (en) Training method and device for image reconstruction network model and electronic equipment
WO2021008627A1 (en) Game character rendering method and apparatus, electronic device, and computer-readable medium
WO2022033444A1 (en) Dynamic fluid effect processing method and apparatus, and electronic device and readable medium
CN114494328B (en) Image display method, device, electronic equipment and storage medium
CN112734910A (en) Real-time human face three-dimensional image reconstruction method and device based on RGB single image and electronic equipment
US20240378784A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113205601B (en) Roaming path generation method and device, storage medium and electronic equipment
CN110069195A (en) Image pulls deformation method and device
WO2023035935A1 (en) Data processing method and apparatus, and electronic device and storage medium
CN111275799B (en) Animation generation method and device and electronic equipment
CN109598344B (en) Model generation method and device
WO2020077912A1 (en) Image processing method, device, and hardware device
WO2022218104A1 (en) Collision processing method and apparatus for virtual image, and electronic device and storage medium
CN114627971B (en) Data processing method and device for solid system
CN111275813B (en) Data processing method and device and electronic equipment
WO2022033446A1 (en) Dynamic fluid effect processing method and apparatus, and electronic device and readable medium
CN113378808B (en) Person image recognition method and device, electronic equipment and computer readable medium
CN113808050B (en) Denoising method, device and equipment for 3D point cloud and storage medium
WO2022135022A1 (en) Dynamic fluid display method and apparatus, and electronic device and readable medium
CN111627105B (en) Face special effect splitting method, device, medium and equipment
CN109670577B (en) Model generation method and device
CN117974877A (en) Texture mapping processing method and device for three-dimensional model and electronic equipment
CN119399335A (en) Animation processing method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.