CN113450444B - Method and device for generating illumination map, storage medium and electronic equipment - Google Patents
Method and device for generating illumination map, storage medium and electronic equipment
- Publication number
- CN113450444B (application number CN202110780845.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- normal
- illumination
- face model
- sampling
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/04—Texture mapping
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T15/005—General purpose rendering architectures
- G06T15/506—Illumination models
- G06T15/60—Shadow generation
Abstract
The disclosure relates to a method and an apparatus for generating an illumination map, a storage medium, and an electronic device. The method comprises the following steps: acquiring a face model of a virtual character; controlling virtual light to illuminate the face model from a plurality of illumination angles, and determining a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and, at each of the plurality of illumination angles, sampling the face normal with the corresponding sampling keyframe to generate an illumination map of the face model. The method and apparatus solve the prior-art problems of a complex workflow and low processing efficiency in generating illumination maps for three-dimensional character face models in games.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for generating an illumination map, a storage medium, and an electronic device.
Background
At present, there are two main schemes for achieving stylized, customized light-and-shadow effects when rendering cartoon-style games: the first uses a normal-modification tool as its core to customize the light and shadow; the second manually draws sampling-importance keyframes on the UV layout of the character's face model, performs distance-field calculation on all the keyframes, and generates a facial illumination map with which the shadows are computed.
However, the first scheme can only produce the shadow effect at a fixed illumination angle and cannot be applied to scenes in which the character rotates freely. With multiple characters and multiple fixed angles, the workload of manually modifying the facial normals multiplies and the process becomes extremely difficult; moreover, because this scheme directly modifies the model's normals, it can affect any other rendering effect whose calculation uses those normals. The second scheme relies on manually drawing sampling-importance keyframes on the face model and computing distance fields over them, which requires coordination among several tools and several people. Because so many tools are involved in drawing the sampling-importance keyframes by hand, the workflow is long and complex, results cannot be previewed quickly, the hand-drawn keyframes are difficult to modify, and achieving a good facial shadow effect consumes a great deal of time.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present disclosure provide a method and an apparatus for generating an illumination map, a storage medium, and an electronic device, so as to at least solve the prior-art problems of a complex workflow and low processing efficiency in generating illumination maps for three-dimensional character face models in games.
According to one aspect of the embodiments of the present disclosure, there is provided a method of generating an illumination map, comprising: acquiring a face model of a virtual character; controlling virtual light to illuminate the face model from a plurality of illumination angles, and determining a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and, at each of the plurality of illumination angles, sampling the face normal with the corresponding sampling keyframe to generate an illumination map of the face model.

According to another aspect of the embodiments of the present disclosure, there is also provided an apparatus for generating an illumination map, comprising: an acquisition module, configured to acquire a face model of a virtual character; a control module, configured to control virtual light to illuminate the face model from a plurality of illumination angles and to determine a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and a sampling module, configured to sample the face normal with the corresponding sampling keyframe at each of the plurality of illumination angles to generate an illumination map of the face model.

According to another aspect of the embodiments of the present disclosure, there is also provided a computer-readable storage medium comprising a stored program, where, when the program runs, the device on which the computer-readable storage medium resides is controlled to execute any one of the above methods for generating an illumination map.

According to another aspect of the embodiments of the present disclosure, there is also provided an electronic device comprising a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to perform any one of the above methods for generating an illumination map.
In the embodiments of the present disclosure, a face model of a virtual character is acquired; virtual light is controlled to illuminate the face model from a plurality of illumination angles, and a face normal and a sampling keyframe are determined for each of the plurality of illumination angles; and, at each illumination angle, the face normal is sampled with the corresponding sampling keyframe to generate an illumination map of the face model. By automatically modifying the normals in real time and automatically baking a high-precision illumination map, the illumination map of a three-dimensional character face model in a game is generated automatically. This improves the processing efficiency of illumination-map generation and thereby solves the prior-art problems of a complex workflow and low processing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
FIG. 1 is a flow diagram of a method of generating an illumination map according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an effect of a hand drawing style according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of an automated tessellation exhibition, according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a weight assignment effect according to an embodiment of the disclosure;
FIG. 5 is a schematic illustration of a nose shadow effect according to an embodiment of the disclosure;
FIG. 6 is a schematic illustration of a face normal after homogenization treatment in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic illustration of a shadow shift effect according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram of a key frame sampling effect according to an embodiment of the present disclosure;
FIG. 9 is a schematic illustration of a light and shadow transition effect according to an embodiment of the disclosure;
FIG. 10 is a schematic diagram of a final generated lighting map according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of an apparatus for generating an illumination map according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present disclosure, there is provided a method embodiment of generating an illumination map, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The technical solution of this method embodiment may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android or iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. The mobile terminal may include one or more processors (which may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), a Neural Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for communication, an input/output device, and a display device. Those skilled in the art will understand that the foregoing structural description is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than described above, or have a different configuration.
The memory may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the method for generating an illumination map in the embodiments of the present disclosure. The processor executes various functional applications and data processing by running the computer programs stored in the memory, thereby implementing the method for generating an illumination map. The memory may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor; such remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of such a network include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device includes a network adapter (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device may be a Radio Frequency (RF) module, which communicates with the Internet wirelessly. The technical solution of this method embodiment can be applied to various communication systems, such as: a Global System for Mobile communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an LTE Frequency Division Duplex (FDD) system, an LTE Time Division Duplex (TDD) system, a Universal Mobile Telecommunications System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication system, or a 5G system. Optionally, Device-to-Device (D2D) communication may be performed between multiple mobile terminals. Optionally, the 5G system or 5G network is also referred to as a New Radio (NR) system or NR network.
The display device may be, for example, a touch-screen Liquid Crystal Display (LCD), also referred to as a "touch screen" or "touch display screen", which enables the user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which the user can interact through finger contacts and/or gestures on the touch-sensitive surface. The human-computer interaction functions optionally include creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing. The executable instructions for performing these human-computer interaction functions are configured/stored in one or more processor-executable computer program products or computer-readable storage media.
FIG. 1 is a flowchart of a method of generating an illumination map. As shown in FIG. 1, the method includes the following steps:
step S102, obtaining a face model of the virtual character;
step S104, controlling virtual light to illuminate the face model from a plurality of illumination angles, and determining a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles;
and step S106, at each of the plurality of illumination angles, sampling the face normal with the corresponding sampling keyframe to generate an illumination map of the face model.
In the embodiments of the present disclosure, a face model of a virtual character is acquired; virtual light is controlled to illuminate the face model from a plurality of illumination angles, and a face normal and a sampling keyframe are determined for each of the plurality of illumination angles; and, at each of the plurality of illumination angles, the face normal is sampled with the corresponding sampling keyframe to generate an illumination map of the face model. Automatically modifying the normals in real time and automatically baking a high-precision illumination map achieves the goal of automatically generating the illumination map of a three-dimensional character face model in a game, which improves the processing efficiency of illumination-map generation and thereby solves the prior-art problems of a complex workflow and low processing efficiency.
Optionally, in the embodiments of the present disclosure, the virtual character may be, but is not limited to, a character in a game running on a PC or a game console, for example a character in a Japanese cartoon-style game. The face model of the virtual character may cover the entire face or only part of the face. The virtual light may be a virtual fluorescent lamp in the game scene or virtual sunlight, and its illumination angle may be changed automatically according to the needs of the game scene or controlled and adjusted independently by the player.
As an alternative embodiment, the face normal determines the boundary between the shadow region and the non-shadow region formed where the virtual light strikes the face model. As shown in FIG. 2, the normal directions of the virtual character's face model are modified into a uniform range so that, at a fixed illumination angle, the face presents relatively complete light and dark solid-color blocks, achieving a hand-drawn-style light-and-shadow effect; a rough sketch of this two-tone split follows.
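This is a hypothetical illustration only, not code from the patent: it hard-steps a Lambert term into two flat tones, assuming the f@df value computed by the wrangle shown later in this section and an assumed threshold channel.

// Hypothetical VEX sketch: quantise the Lambert term into flat light/dark
// blocks to get the hand-drawn-style solid color blocks described above.
float toon = f@df > chf("threshold") ? 1.0 : 0.0; // hard light/dark split (threshold is an assumption)
v@Cd = set(toon, toon, toon);                     // flat two-tone color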
As an alternative embodiment, the sampling keyframes are a plurality of automatically generated sampling-importance keyframes. For example, taking the illumination angle directly in front of the virtual character's face model as 0°, keyframes can be generated at several fixed illumination angles such as 0°, 30°, 60°, -30°, and -60°. Different illumination angles produce different keyframes, and hence different final illumination maps, so different keyframes can be obtained by varying the illumination angle according to the actual situation of the virtual character in the game scene, and the resulting keyframes determine the final illumination map; a loop over such fixed angles is sketched below.
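A minimal sketch of iterating the fixed keyframe angles, reusing the light-position convention of the VEX wrangle shown further below; the angle list is only the example set named above.

// Hypothetical VEX sketch: derive a light position for each fixed
// keyframe angle from the example above.
float key_angles[] = {-60.0, -30.0, 0.0, 30.0, 60.0};
foreach (float a; key_angles) {
    float rad = radians(a);                        // degrees to radians
    vector light_pos = set(cos(rad), 0, sin(rad)); // light position for this keyframe
    // ... bake one sampling keyframe at this angle
}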
In an alternative embodiment, obtaining the face model of the virtual character comprises:
step S202, acquiring the face area of the virtual character;
step S204, performing tessellation processing on the face region to obtain the face model.
As an alternative embodiment, the face region of the virtual character may be obtained in, but not limited to, three-dimensional computer graphics software, for example Houdini. First, the face model of the virtual character is imported; as shown in FIG. 3, the face region is then automatically tessellated to obtain the face model. Tessellation greatly increases the face count of the model, which allows finer adjustment of the light and shadow and improves the precision of both the face model and the final illumination map. The illumination angle and the face-model information are then computed by analyzing the face model, and the face normals are automatically modified in real time so that the shadow curve fits the face model more closely; the lighting shadow is also adjusted algorithmically to come closer to the hand-drawn style.
In the embodiments of the present disclosure, the final result can be generated automatically in Houdini simply by importing the face model and clicking a single button, thereby achieving the technical effects of automatically modifying the normals in real time and automatically baking a high-precision illumination map, and improving the processing efficiency of illumination-map generation.
In the embodiments of the present disclosure, the normal is calculated and modified in real time according to the illumination angle; the specific contents and definitions of the code are as follows:

float rad = radians(chf("angle"));             // define the illumination angle (converted to radians)
vector light_pos = set(cos(rad), 0, sin(rad)); // define the position of the light source
vector a = @N;                                 // define the face vertex normal
vector b = normalize(light_pos - @P);          // calculate the unit vector from the face vertex to the light source
f@df = chf("lambert") * dot(a, b) + (1 - chf("lambert")); // Lambert illumination model: the light source's illumination influence on the face
f@df = fit(f@df, chf("min_value"), 1, 0, 1);   // remap the dot of the light vector and the vertex normal to 0-1 for display as color
v@Cd = f@df;                                   // assign the calculated value to the color attribute
In an alternative embodiment, controlling the virtual lighting to illuminate the face model from a plurality of illumination angles, the determining the face normal for each of the plurality of illumination angles comprises:
step S302, determining a triangle light position and a target shadow position of the face model, wherein the triangle light position comprises a projection area of a brow bone, a nose bridge and a cheekbone in the face model, and the target shadow position comprises a shadow area of a nose in the face model;
step S304, respectively acquiring a first normal of the position of the triangular light and a second normal of the position of the target shadow under the plurality of illumination angles;
step S306 is to determine the face normal line based on the first normal line and the second normal line.
As an alternative embodiment, the triangle-light position of the face model is the projection region (i.e., the non-shadow region) of the brow bone, the nose bridge, and the cheekbones in the face model; the target shadow position includes, but is not limited to, the shadow region of the nose in the face model. In the embodiments of the present disclosure, different shadow regions may be determined according to the character features of different virtual characters, for example a shadow region formed by a dimple or one formed by hair. After the target shadow region is determined, the first normal is determined from the triangle-light position and the second normal from the shadow region.
As an alternative embodiment, at each different illumination angle the first normal of the triangle-light position and the second normal of the target shadow position are obtained, and the face normal is determined from the actual situation of the first and second normals; for example, one of the two normals is selected as the face normal, or the boundary line between the first normal and the second normal is taken as the face normal.
In an optional embodiment, after acquiring the first normal of the position of the triangle light, the method further includes:
step S402, acquiring the face curvature, the first normal direction and the height information of the face model;
step S404, determining a target region where the position of the triangle light is located in the face model based on the curvature of the face, the first normal direction and the height information, wherein a central position point of the target region is a central point of the position of the triangle light;
step S406, assigning a weight value and a set normal to each of the position points in the target area, where position points closer to the central position point have higher weight values and position points farther from the central position point have lower weight values;
step S408, calculating a vector sum value between the normal of the face and the set normal according to the weight value;
step S410, after normalizing the vector sum, adjusting the weight values and the direction of the set normal based on the illumination angle so as to modify the first normal.
In the embodiments of the present disclosure, the face model of the virtual character is acquired, and its face curvature, normal directions, and height information are calculated; the position range of the face triangle light, that is, the target area in the face model where the triangle-light position lies, is then calculated automatically from the face curvature, the first normal direction, and the height information, with the central position point of the target area being the center point of the triangle-light position.
In the embodiments of the present disclosure, within the located target area of the face model, the target area is represented by an editable curve marking the position of the face triangle light, and the curve is mirrored to obtain the other, symmetric half of the triangle light. Every position point in the target area is then assigned a weight value: the closer a point is to the center, the higher its weight; the closer to the edge, the lower its weight. As shown in FIG. 4, after the weights are assigned, the center of the region appears dark, meaning a high weight, while the edges appear light, meaning a low weight. A new normal attribute set_N, i.e., the set normal, is defined for all points in the region; for each point, the vector sum of the original face normal and the newly set normal is calculated according to the point's weight and then normalized: normalize(@N + @density * @set_N), where N denotes the normal, density denotes the weight value assigned to a point in the region, set_N denotes the direction of the normal newly set according to the weight value, and normalize denotes normalization. After the normalization is complete, the weight values and the direction of the set normal are adjusted based on the illumination angle so as to automatically modify the first normal.
In the embodiments of the present disclosure, the normal direction of the face triangle-light area is adjusted according to the illumination angle, that is, the weight values and the newly set normal directions are adjusted, so that under different illumination angles the triangle-light area has correspondingly different normal directions; the normals can therefore be modified automatically in real time according to the light direction. This removes the otherwise long and complicated manual workflow and is both efficient and high-quality; a sketch of the per-point blend follows.
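The blend described above can be written as a short point wrangle. This is a minimal sketch under stated assumptions: density and set_N are the weight and set-normal attributes from the text, while the light_angle channel and the angle-driven falloff ramp are hypothetical additions for illustration.

// Hypothetical VEX point wrangle: blend each point's normal toward the
// artist-set target normal, scaled by the painted weight and an assumed
// angle-driven falloff.
float angle = chf("light_angle");                           // current illumination angle in degrees (assumed channel)
float falloff = chramp("angle_falloff", abs(angle) / 90.0); // assumed ramp controlling how strongly this angle reshapes the normal
float w = @density * falloff;                               // per-point weight, highest at the centre of the triangle light
@N = normalize(@N + w * v@set_N);                           // weighted vector sum, then normalization, as in the text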
In an optional embodiment, before assigning a weight value and setting a normal to each of all location points in the target area, the method further includes:
step S502, marking a first face triangular light in the target area by adopting an editable curve;
step S504, a second face triangular light symmetrical to the first face triangular light is obtained by carrying out mirror image processing on the editable curve;
step S506 is performed to identify all the position points in the target area based on the first surface triangle light and the second surface triangle light.
That is, the target area is represented by an editable curve identifying the first face triangle light, i.e., the position of the face triangle light; mirroring the curve yields the second face triangle light, symmetric to the first; and all position points in the target area are determined from the first and second face triangle lights.
In an alternative embodiment, the obtaining the second normal of the target shadow position by:
step S602, determining the face orientation of the face model;
step S604, determining the target shadow position based on the face orientation, wherein the shape of the target shadow position is represented by an editable curve;
step S606, based on the editable curve, determining the second normal line of the target shadow position according to the angle change of the illumination angle.
As an alternative embodiment, the face orientation of the virtual character may be determined from distinct facial features, for example the eyes, ears, nose, and mouth; the target shadow position, such as the shadow region of the nose in the face model, is determined after the face orientation of the face model is known.
In the embodiments of the present disclosure, the shape of the nose's shadow region in the face model may be represented by an editable curve. The point of the face furthest along the face orientation is first chosen as the position of the nose; the shape of the nose shadow is represented by an editable curve, and mirroring this curve yields the shape of the other half of the nose shadow, as shown in FIG. 5. According to the change of the illumination angle, two roughly strip-shaped curves are set for the transition, so that the nose shadow grows from small to large, or shrinks from large to small, as the light angle changes, simulating the hand-drawn-style nose shadow effect under real illumination; a sketch of such an angle-driven blend follows.
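One way to drive this transition is to remap the light angle into a blend factor between the two strip curves. A minimal sketch; all channel names here are assumptions, as the patent does not specify them.

// Hypothetical VEX sketch: blend between the small and large nose-shadow
// strip curves as the light angle changes.
float angle = abs(chf("light_angle"));                          // unsigned illumination angle in degrees (assumed channel)
float t = fit(angle, chf("angle_min"), chf("angle_max"), 0, 1); // transition factor between the two strip curves
f@shadow_blend = t;                                             // 0 = small nose shadow, 1 = large nose shadow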
In an optional embodiment, before sampling the face normal with the sampling keyframe at each of the plurality of illumination angles, respectively, and generating an illumination map of the face model, the method further includes:
step S702, homogenizing the face normals by projecting sphere-like normals onto the face model to obtain homogenized face normals, where the shadow curve of the homogenized face normals at each illumination angle meets a predetermined requirement.
As an alternative embodiment, under real illumination the shadow boundary calculated from the model's own normals is irregular and lacks the complete color-block characteristic of the hand-drawn style, so a certain correction must be applied to the face normals. In the embodiments of the present disclosure, the adopted method may be, but is not limited to, projecting the normals of a sphere-like surface onto the model's face, so that the face normals are homogenized and a clean shadow curve can be presented at every illumination angle. As shown in FIG. 6, the positions of the cheek apples, i.e., the triangle light, and of the nose are located automatically during homogenization, and the normals are corrected automatically to present a uniform, clean shadow curve; a sketch of the projection step follows.
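A minimal sketch of such a homogenization step; the sphere_center and strength channels are assumptions introduced for illustration, not parameters named in the patent.

// Hypothetical VEX point wrangle: homogenize face normals by blending each
// point's normal toward the normal of a sphere fitted to the head.
vector center = chv("sphere_center");     // assumed: centre of the fitted sphere
vector sphere_n = normalize(@P - center); // sphere-like normal at this point
float s = chf("strength");                // 0 = keep the original normal, 1 = fully spherical
@N = normalize(lerp(@N, sphere_n, s));    // homogenized face normal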
It should be noted that the predetermined requirement is that the face shadow curve appears uniform and clean. As shown in FIG. 6, before automatic normal correction the face shadow appears as an uneven curve, that is, it does not meet the predetermined requirement; after automatic correction the face shadow appears uniform and smooth, meeting the requirement.
In the embodiments of the present disclosure, several fixed illumination angles may also be set; the frames at these fixed angles are the sampling keyframes (i.e., sampling-importance keyframes), and targeted processing (e.g., correction) is applied to them. For example, when the angle between the light and the face model's orientation is 90°, the face shadow at that angle is offset by about 2°, which avoids the light-dark boundary line falling exactly in the middle of the face model and makes the final map more attractive. As shown in FIG. 7, the left side shows the shadow effect at an orientation angle of 90°, with the light-dark boundary in the middle of the face model; the right side shows the effect after a -2° offset, where the boundary has shifted away from the middle of the face model, better matching the rendering style of cartoon characters; the correction can be sketched as a small angle adjustment, shown below.
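A sketch of this keyframe-specific correction; the tolerance and offset channels are assumptions, while the 2° figure comes from the example above.

// Hypothetical VEX sketch: nudge keyframe angles near 90 degrees so the
// light-dark boundary does not sit exactly on the centre line of the face.
float angle = chf("angle");                    // sampled keyframe angle in degrees
if (abs(abs(angle) - 90.0) < chf("tolerance")) // only correct keyframes near 90 degrees
    angle -= sign(angle) * chf("offset");      // e.g. a 2-degree shift, as in the text
f@corrected_angle = angle;                     // angle actually used for baking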
In an alternative embodiment, the sampling the face normal with the sampling keyframe at each of the plurality of illumination angles, respectively, and generating the illumination map of the face model includes:
step S802, at each of the plurality of illumination angles, sampling the homogenized face normals with a plurality of sampling keyframes to obtain a plurality of sampling results;

step S804, performing interpolation on the plurality of sampling results to obtain a plurality of interpolated frames;

step S806, generating the illumination map based on the interpolated-frame mean calculated from the plurality of interpolated frames.
As an alternative embodiment, as shown in FIG. 8, the illumination angle is changed automatically, the keyframes are sampled, and the illumination map at each angle selected by a keyframe can be generated automatically. This removes the need to hand-draw a large number of keyframes and to modify the shadow curves repeatedly.
In the embodiments of the present disclosure, distance-field calculation and interpolation may also be performed on the keyframe sampling results, and keyframe weights may be set to control the rhythm of important light-and-shadow transitions, so that attractive light-and-shadow shapes persist longer as the illumination changes, as shown in FIG. 9. Finally, the mean of the interpolated frames is computed to obtain the final facial illumination map, as shown in FIG. 10.
It should be noted that the interpolated frames are used for generating the illumination map automatically. For example, when a target object has different attributes defined on two keyframes, such as its face angle or a position change, the frames automatically generated between the two keyframes are the interpolated frames, and there may be several of them. The interpolated frames are then averaged to obtain the interpolated-frame mean, and the illumination map is generated from that mean; a sketch of the averaging step follows.
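A minimal sketch of the final averaging, assuming each interpolated frame has been baked into a per-point attribute named frame_0, frame_1, and so on; the attribute naming, the num_frames channel, and the second-input wiring are all assumptions.

// Hypothetical VEX sketch: average the baked interpolated frames into the
// final per-point illumination value.
int nframes = chi("num_frames");                     // number of interpolated frames
float sum = 0;
for (int i = 0; i < nframes; i++)
    sum += point(1, sprintf("frame_%d", i), @ptnum); // read frame i's value from the second input
f@illum = sum / max(nframes, 1);                     // interpolated-frame mean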
In an alternative embodiment, the homogenizing the face normal, and obtaining a homogenized face normal includes:
step S902, when the angle between the homogenized face light and the face orientation of the face model is a predetermined angle, determining the offset angle corresponding to that predetermined angle, where the predetermined angle is the angle corresponding to a sampling keyframe;
step S904, performing an offset process on the face shadow of the face model according to the offset angle to adjust the position of the bright-dark boundary line presented on the face model.
In the embodiments of the present disclosure, when the angle between the homogenized face light and the face orientation of the face model is an angle corresponding to a sampling keyframe, the offset angle corresponding to that predetermined angle is determined, and the face shadow of the face model is offset accordingly. The degree of the offset can be varied according to the actual needs of the virtual character or the virtual scene, making the final result more attractive and better suited to the cartoon rendering style.
According to an embodiment of the present disclosure, there is further provided an apparatus for implementing the above method for generating an illumination map. FIG. 11 is a schematic structural diagram of an apparatus for generating an illumination map according to an embodiment of the present disclosure. As shown in FIG. 11, the apparatus includes: an acquisition module 110, a control module 112, and a sampling module 114, wherein:
an acquisition module 110, configured to acquire a face model of a virtual character; a control module 112, configured to control virtual light to illuminate the face model from a plurality of illumination angles and to determine a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and a sampling module 114, configured to sample the face normal with the corresponding sampling keyframe at each of the plurality of illumination angles to generate an illumination map of the face model.
It should be noted that the above modules may be implemented by software or by hardware; in the latter case, the modules may all be located in the same processor, or distributed among different processors in any combination.

It should be noted here that the acquisition module 110, the control module 112, and the sampling module 114 correspond to steps S102 to S106 of the method embodiment; the examples and application scenarios implemented by these modules are the same as those of the corresponding steps, but are not limited to the disclosure of the method embodiment. The modules may run in a computer terminal as part of the apparatus.
It should be noted that, for alternative or preferred embodiments of the present embodiment, reference may be made to the related description in the method embodiment, and details are not described herein again.
The apparatus for generating an illumination map may further include a processor and a memory, where the acquisition module 110, the control module 112, the sampling module 114, and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor includes a kernel, which retrieves the corresponding program unit from the memory; one or more kernels may be provided. The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
According to an embodiment of the present disclosure, there is also provided a computer-readable storage medium embodiment. Optionally, in this embodiment, the computer-readable storage medium includes a stored program, where when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the above methods for generating an illumination map.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network or in any one of a group of mobile terminals, and the computer-readable storage medium includes a stored program.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: acquiring a face model of the virtual character; controlling virtual light to illuminate the face model from a plurality of illumination angles, and determining a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and, at each of the plurality of illumination angles, sampling the face normal with the sampling keyframe to generate an illumination map of the face model.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: acquiring the face area of the virtual character; and performing surface subdivision processing on the face area to obtain the face model.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: determining a triangle light position and a target shadow position of the face model, wherein the triangle light position comprises a projection area of a brow bone, a nose bridge and a cheekbone in the face model, and the target shadow position comprises a shadow area of a nose in the face model; under the plurality of illumination angles, respectively acquiring a first normal of the position of the triangular light and a second normal of the position of the target shadow; the face normal is determined based on the first normal and the second normal.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: acquiring the face curvature, the first normal direction, and the height information of the face model; determining a target area in the face model where the triangle-light position lies, based on the face curvature, the first normal direction, and the height information, where the central position point of the target area is the center point of the triangle-light position; assigning a weight value and a set normal to each of the position points in the target area, where position points closer to the central position point have higher weight values and position points farther from it have lower weight values; calculating the vector sum of the face normal and the set normal according to the weight values; and, after normalizing the vector sum, adjusting the weight values and the direction of the set normal based on the illumination angle so as to automatically modify the first normal.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: identifying a first face triangle light in the target area by adopting an editable curve; performing mirror image processing on the editable curve to obtain second face triangular light symmetrical to the first face triangular light; all the position points in the target area are determined based on the first surface triangular light and the second surface triangular light.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: determining the face orientation of the face model; determining the target shadow position based on the face orientation, wherein the shape of the target shadow position is represented by an editable curve; and determining the second normal of the target shadow position according to the angle change of the illumination angle based on the editable curve.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: and homogenizing the face normal by projecting the sphere-like face normal to the face model to obtain a homogenized face normal, wherein a shadow curve of the homogenized face normal at each illumination angle meets a preset requirement.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: at each of the plurality of illumination angles, sampling the homogenized face normals with a plurality of sampling keyframes to obtain a plurality of sampling results; performing interpolation on the plurality of sampling results to obtain a plurality of interpolated frames; and generating the illumination map based on the interpolated-frame mean calculated from the plurality of interpolated frames.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: determining an offset angle corresponding to a predetermined angle when the angle between the homogenized face light and the face direction of the face model is the predetermined angle, wherein the predetermined angle is an angle corresponding to the sampling key frame; and performing offset processing on the face shadow of the face model according to the offset angle so as to adjust the position of a bright-dark boundary line presented on the face model.
According to an embodiment of the present disclosure, there is also provided an embodiment of a processor. Optionally, in this embodiment, the processor is configured to execute a program, where the program executes any one of the above methods for generating an illumination map.
An embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform any one of the above methods for generating an illumination map.
The present disclosure also provides a computer program product adapted, when executed on a data processing device, to execute a program that initializes the following method steps: acquiring a face model of the virtual character; controlling virtual light to illuminate the face model from a plurality of illumination angles, and determining a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles; and, at each of the plurality of illumination angles, sampling the face normal with the sampling keyframe to generate an illumination map of the face model.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present disclosure, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present disclosure, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned computer-readable storage media include: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present disclosure, and it should be noted that modifications and embellishments could be made by those skilled in the art without departing from the principle of the present disclosure, and these should also be considered as the protection scope of the present disclosure.
Claims (11)
1. A method of generating an illumination map, comprising:
acquiring a face model of the virtual character;
controlling virtual light to irradiate the face model from a plurality of illumination angles, and determining a face normal and a sampling key frame corresponding to each illumination angle in the plurality of illumination angles;
sampling the normal of the face by adopting the sampling key frame respectively under each illumination angle in the plurality of illumination angles to generate an illumination map of the face model;
wherein the controlling of the virtual light to illuminate the face model from a plurality of illumination angles, the determining of a face normal corresponding to each illumination angle of the plurality of illumination angles comprises:
determining a triangle-light position and a target shadow position of the face model, wherein the triangle-light position comprises a projection region of a brow bone, a nose bridge and a cheekbone in the face model, and the target shadow position comprises a shadow region of a nose in the face model;
under the plurality of illumination angles, respectively acquiring a first normal of the position of the triangular light and a second normal of the position of the target shadow;
determining the face normal based on the first normal and the second normal, wherein the face normal comprises: the first normal or the second normal, or the boundary line between the first normal and the second normal.
2. The method of claim 1, wherein obtaining a face model of the virtual character comprises:
acquiring a face area of the virtual character;
and performing surface subdivision processing on the face area to obtain the face model.
3. The method of claim 1, wherein after acquiring the first normal of the position of the triangle light, the method further comprises:
acquiring the face curvature, the first normal direction and the height information of the face model;
determining a target area where the triangle light position is located in the face model based on the face curvature, the first normal direction and the height information, wherein a central position point of the target area is a central point of the triangle light position;
respectively assigning a weight value and a set normal to each of the position points in the target area, wherein position points closer to the central position point have higher weight values, and position points farther from the central position point have lower weight values;
calculating a vector sum value between the face normal and the set normal according to the weight value;
after normalizing the vector sum value, adjusting the weight value and the normal direction of the set normal based on the illumination angle to modify the first normal.
4. The method of claim 3, wherein before assigning a weight value and setting a normal to each of all location points in the target area, the method further comprises:
identifying a first facial triangle light in the target region using an editable curve;
performing mirror image processing on the editable curve to obtain second face triangular light symmetrical to the first face triangular light;
determining all location points in the target region based on the first and second face triangle lights.
5. The method of claim 1, wherein obtaining the second normal of the target shadow position comprises:
determining a face orientation of the face model;
determining the target shadow position based on the face orientation, wherein the shape of the target shadow position is represented by an editable curve;
and determining the second normal of the target shadow position according to the angle change of the illumination angle based on the editable curve.
6. The method of claim 1, wherein prior to generating the illumination map of the face model using the sampling keyframe to sample the face normal at each of the plurality of illumination angles, the method further comprises:
and adopting a mode of projecting the sphere-like normal line to the face model, and carrying out homogenization treatment on the face normal line to obtain a homogenized face normal line, wherein a shadow curve of the homogenized face normal line under each illumination angle meets a preset requirement.
7. The method of claim 6, wherein sampling the face normal with the sampling keyframe at each of the plurality of illumination angles, respectively, generating an illumination map of the face model comprises:
sampling the homogenized face normal by adopting a plurality of sampling key frames respectively under each illumination angle in the plurality of illumination angles to obtain a plurality of sampling results;
performing interpolation calculation on the plurality of sampling results to obtain a plurality of interpolated frames;

and generating the illumination map based on the interpolated-frame mean calculated from the plurality of interpolated frames.
8. The method of claim 6, wherein homogenizing the face normal to obtain a homogenized face normal comprises:
when the angle between the homogenized face light ray and the face orientation of the face model is a preset angle, determining an offset angle corresponding to the preset angle, wherein the preset angle is the angle corresponding to the sampling key frame;
and carrying out offset processing on the face shadow of the face model according to the offset angle so as to adjust the position of a bright-dark boundary presented on the face model.
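A minimal sketch of the offset in claim 8, under the assumption that the bright-dark boundary is the Lambert terminator: biasing the cutoff from 0 to sin(offset_angle) slides the boundary across the face without touching the geometry. The cutoff mapping is a choice made for the example, not taken from the patent.

```python
import numpy as np

def offset_terminator(normals, light, offset_angle):
    """Shift the bright/dark boundary by biasing the Lambert cutoff."""
    ndotl = normals @ light                              # cosine of normal/light angle
    return (ndotl > np.sin(offset_angle)).astype(np.float32)  # 1 = lit, 0 = shadow
```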
9. An apparatus for generating an illumination map, comprising:
an acquiring module, configured to acquire a face model of a virtual character;
a control module, configured to control virtual light to illuminate the face model from a plurality of illumination angles, and to determine a face normal and a sampling keyframe corresponding to each of the plurality of illumination angles;
a sampling module, configured to sample the face normal with the sampling keyframe at each of the plurality of illumination angles to generate an illumination map of the face model;
wherein the control module is further configured to: determine a triangle-light position and a target shadow position of the face model, wherein the triangle-light position comprises the projection regions of the brow bone, nose bridge and cheekbone in the face model, and the target shadow position comprises the shadow region of the nose in the face model; acquire, at the plurality of illumination angles, a first normal of the triangle-light position and a second normal of the target shadow position; and determine the face normal based on the first normal and the second normal, wherein the face normal comprises the first normal, the second normal, or a normal at the boundary line between the first normal and the second normal.
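The normal selection described for the control module can be sketched as a per-point mask operation. The boolean region masks (assumed to come from the editable curves above) and the normalize-on-overlap rule for the boundary are assumptions made for illustration.

```python
import numpy as np

def select_face_normal(first_n, second_n, in_triangle_light, in_shadow):
    """Per point, use the triangle-light normal, the shadow normal, or a
    normalized blend where the two regions meet."""
    out = first_n.copy()
    out[in_shadow] = second_n[in_shadow]
    boundary = in_triangle_light & in_shadow   # overlap of the two regions
    mixed = first_n[boundary] + second_n[boundary]
    out[boundary] = mixed / np.linalg.norm(mixed, axis=1, keepdims=True)
    return out
```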
10. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the method for generating an illumination map according to any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for generating an illumination map according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110780845.7A | 2021-07-09 | 2021-07-09 | Method and device for generating illumination map, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113450444A CN113450444A (en) | 2021-09-28 |
CN113450444B (en) | 2023-03-24 |
Family
ID=77815916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110780845.7A (Active) | Method and device for generating illumination map, storage medium and electronic equipment | 2021-07-09 | 2021-07-09 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113450444B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115375828B (en) * | 2022-10-24 | 2023-02-03 | 腾讯科技(深圳)有限公司 | Model shadow generation method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |