
CN112494941A - Display control method and device of virtual object, storage medium and electronic equipment - Google Patents

Display control method and device of virtual object, storage medium and electronic equipment

Info

Publication number
CN112494941A
CN112494941A
Authority
CN
China
Prior art keywords
virtual object
target
rendering data
rendering
grids
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011476410.5A
Other languages
Chinese (zh)
Other versions
CN112494941B (en)
Inventor
刘宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011476410.5A
Publication of CN112494941A
Application granted
Publication of CN112494941B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure belongs to the field of computer technology and relates to a display control method and device of a virtual object, a storage medium, and electronic equipment. The method comprises the following steps: acquiring the position of a first virtual object in a virtual scene; determining a target grid corresponding to the position from a plurality of first grids and a plurality of second grids, where the plurality of first grids are obtained by dividing the virtual scene and the plurality of second grids are obtained by dividing the plurality of first grids according to a second virtual object; and obtaining corresponding target rendering data according to the target grid index, and rendering the first virtual object according to the target rendering data. In this disclosure, on the one hand, the problem that key areas and non-key areas cannot be distinguished is solved, and disk and memory occupancy is reduced; on the other hand, virtual objects are rendered differentially according to the grid in which the real-time position of the first virtual object falls, improving rendering speed while preserving the rendering effect.

Description

Display control method and device of virtual object, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display control method for a virtual object, a display control apparatus for a virtual object, a computer-readable storage medium, and an electronic device.
Background
With the development of Internet technology, forms of entertainment have diversified, and people's expectations for entertainment experiences, especially gaming experiences, keep rising. To enhance the sense of realism in a game scene, the influence of illumination on virtual objects is generally considered during rendering, so illumination-based rendering techniques are widely applied in games. For example, global illumination refers to a rendering technique that considers both illumination arriving directly from light sources and illumination reflected by other objects in the scene.
In the related art, there are two common approaches to global illumination. One is the Ambient Cube global illumination technique, which stores too little color and shadow information and therefore renders virtual objects poorly. The other is the SHVolume global illumination technique based on third-order spherical harmonics; it stores more color and shadow information than the Ambient Cube technique and renders virtual objects well, but the stored data occupies a large amount of memory, imposing a heavy computational cost. Moreover, because the position of a dynamic virtual object changes in real time, rendering every position with a uniform grid likewise incurs heavy computational cost and wastes considerable performance. In view of this, there is a need in the art for a new display control method and apparatus for virtual objects.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a display control method of a virtual object, a display control apparatus of a virtual object, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problems of computational overhead and wasted performance caused by the limitations of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of embodiments of the present invention, there is provided a display control method of a virtual object, the method including: acquiring a target position of a first virtual object in a virtual scene; determining a target grid corresponding to the target location from a plurality of first grids and a plurality of second grids; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to a second virtual object; and obtaining corresponding target rendering data according to the target grid index, and rendering the first virtual object according to the target rendering data.
In an exemplary embodiment of the present invention, the acquiring a target position of the first virtual object in the virtual scene includes: determining a bounding box of a first virtual object in a virtual scene, and determining a target position of the first virtual object in the virtual scene according to the bounding box.
In an exemplary embodiment of the present invention, the dividing the plurality of first meshes according to the second virtual object includes: and carrying out division processing on a first mesh to be divided, wherein the first mesh to be divided is the first mesh where a second virtual object is located.
In an exemplary embodiment of the present invention, the dividing the first mesh to be divided includes: determining a way-finding grid of the first virtual object from the first grid to be divided as a second grid to be divided; and dividing the second grid to be divided, wherein the second grid to be divided is a routing grid of the first virtual object determined from the first grid to be divided.
In an exemplary embodiment of the present invention, the target mesh and the rendering data are indexed by a mapping relationship; rendering data is stored in a data pool; the obtaining of corresponding rendering data according to the target grid index includes: determining identification information corresponding to the target grid; determining rendering data corresponding to the identification information from the data pool according to the mapping relation; and rendering the target object according to rendering data corresponding to the identification information.
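As a rough sketch of the index step just described (all names are hypothetical, not taken from the patent), the mapping relationship amounts to resolving a target grid's identification information to rendering data held in a shared data pool:

```python
# Hypothetical sketch: each target grid carries identification information,
# and a mapping relation resolves that identifier to rendering data stored
# in a shared data pool. The payloads here are placeholders.
data_pool = {
    "grid_0": {"sh_coefficients": [0.0] * 27},
    "grid_1": {"sh_coefficients": [0.5] * 27},
}

def lookup_rendering_data(target_grid_id, pool):
    """Resolve a target grid's identifier to its rendering data (None if absent)."""
    return pool.get(target_grid_id)

data = lookup_rendering_data("grid_1", data_pool)
print(data is not None)  # True
```

The object would then be rendered with whatever rendering data the lookup returns; the dictionary stands in for the patent's mapping relationship between identification information and the data pool.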
In an exemplary embodiment of the present invention, the target rendering data includes first rendering data corresponding to the first mesh and/or second rendering data corresponding to the second mesh; the rendering the first virtual object according to the target rendering data includes: rendering the first virtual object according to the first rendering data and/or the second rendering data.
In an exemplary embodiment of the present invention, the rendering the first virtual object according to the target rendering data includes: calculating to obtain third rendering data corresponding to the rest grids of the first virtual object according to the target rendering data; rendering the first virtual object according to the target rendering data and the third rendering data.
In an exemplary embodiment of the present invention, the calculating, according to the target rendering data, third rendering data of the remaining grids where the first virtual object is located includes: and calculating the target data by utilizing a third-order spherical harmonic function to obtain third rendering data of the rest grids where the first virtual object is located.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for controlling display of a virtual object, the apparatus including: an acquisition module configured to acquire a target position of a first virtual object in a virtual scene; a determination module configured to determine a target grid corresponding to the target location from a plurality of first grids and a plurality of second grids; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to a second virtual object; and the rendering module is configured to obtain corresponding target rendering data according to the target grid index and render the first virtual object according to the target rendering data.
According to a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display control method of a virtual object in any of the above-described exemplary embodiments.
According to a fourth aspect of the embodiments of the present invention, there is provided an electronic apparatus, including: a processor and a memory; wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the display control method of the virtual object of any of the above-described exemplary embodiments.
As can be seen from the foregoing technical solutions, the display control method of a virtual object, the display control apparatus of a virtual object, the computer storage medium, and the electronic device in the exemplary embodiments of the present invention have at least the following advantages and positive effects:
in the method and apparatus provided in the exemplary embodiment of the present disclosure, a virtual scene is first divided in a non-uniform grid form to obtain first grids, the first grids are then divided again according to a second virtual object to obtain second grids, a target grid corresponding to the first virtual object is determined from the first grids and the second grids, and finally target rendering data of the target grid is determined and the first virtual object is rendered according to that data. On the one hand, this solves the problem that key areas and non-key areas cannot be distinguished and reduces disk and memory occupancy; on the other hand, the virtual object is rendered differentially according to the grid in which its real-time position falls, improving rendering speed while preserving the rendering effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically illustrates a flowchart of a display control method of a virtual object in an embodiment of the present disclosure;
FIG. 2 schematically illustrates a top-down structural view of different levels of a target grid in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating the structure of rendering data in an embodiment of the present disclosure;
fig. 4 is a schematic view showing a display control effect of a virtual object in the related art;
FIG. 5 is a schematic diagram illustrating a display control effect of a virtual object in an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating rendering of a first virtual object according to target rendering data in an embodiment of the present disclosure;
fig. 7 schematically illustrates a flowchart of a dividing process performed on a first mesh to be divided in the embodiment of the present disclosure;
FIG. 8 is a schematic flow chart illustrating obtaining corresponding rendering data according to a target grid index in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a two-dimensional space before status update of identification information in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram illustrating a two-dimensional space after status update of identification information in an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a display control apparatus for a virtual object in an embodiment of the present disclosure;
fig. 12 schematically illustrates an electronic device for a display control method of a virtual object in an embodiment of the present disclosure;
fig. 13 schematically illustrates a computer-readable storage medium for a display control method for a virtual object in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In the related art, two kinds of global illumination are used when rendering a virtual object: the Ambient Cube global illumination technique and the SHVolume global illumination technique based on third-order spherical harmonics.
In the Ambient Cube global illumination technique, color information and shadow information in 6 directions (up, down, left, right, front, and rear) are obtained from a data pool that stores the color and shadow information.
Although global illumination of a virtual object can be rendered with the Ambient Cube technique, only color and shadow information in 6 directions is stored; the small amount of stored data results in a poor rendering effect.
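To make the 6-direction storage concrete, here is a small Python sketch of one common Ambient Cube sampling scheme (the squared-normal weighting popularized by Valve); the function and data names are illustrative assumptions, not from the patent:

```python
def sample_ambient_cube(normal, cube):
    """Blend the six stored colors along `normal` using squared-normal
    weights (a common Ambient Cube weighting; names are illustrative)."""
    nx, ny, nz = normal
    weights = (nx * nx, ny * ny, nz * nz)  # sums to 1 for a unit normal
    faces = (
        cube["+x"] if nx >= 0.0 else cube["-x"],
        cube["+y"] if ny >= 0.0 else cube["-y"],
        cube["+z"] if nz >= 0.0 else cube["-z"],
    )
    return tuple(sum(weights[i] * faces[i][c] for i in range(3)) for c in range(3))

# Six RGB colors, one per axis direction; the +/- axes play the role of the
# up, down, left, right, front, and rear directions in the text.
cube = {
    "+x": (1.0, 0.0, 0.0), "-x": (0.1, 0.0, 0.0),
    "+y": (0.0, 1.0, 0.0), "-y": (0.0, 0.1, 0.0),
    "+z": (0.0, 0.0, 1.0), "-z": (0.0, 0.0, 0.1),
}
print(sample_ambient_cube((0.0, 0.0, 1.0), cube))  # (0.0, 0.0, 1.0)
```

Only six colors are stored per sample point, which is exactly why the technique is cheap but coarse.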
In the SHVolume global illumination technique based on third-order spherical harmonics, a third-order spherical harmonic expansion is a way to represent a spherical function with 9 coefficients. Since a rendered color comprises three channels (red, green, and blue) and each channel requires 9 coefficients, a total of 27 coefficients must be obtained from the data pool storing color and shadow information.
Obviously, compared with the Ambient Cube global illumination technology, the SHVolume global illumination technology based on the third-order spherical harmonic function obtains more data, and the rendering effect is better.
However, the data acquired by the SHVolume technique based on third-order spherical harmonics is not easy to compress, so it occupies a large amount of memory and increases the performance cost.
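The coefficient counts above can be checked with a short sketch: the real spherical-harmonic basis through order l=2 has 1 + 3 + 5 = 9 functions, so three color channels need 27 coefficients. The constants below are the standard real SH normalization factors, included for illustration:

```python
def sh_basis_order3(x, y, z):
    """Real spherical-harmonic basis up to l=2 ("third order"): 9 values.
    (x, y, z) is assumed to be a unit direction vector."""
    return [
        0.282095,                          # l=0
        0.488603 * y,                      # l=1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                  # l=2
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

basis = sh_basis_order3(0.0, 0.0, 1.0)
print(len(basis))       # 9 coefficients per color channel
print(3 * len(basis))   # 27 values for red, green, and blue together
```

Reconstructing the lighting in a given direction is then a dot product of these 9 basis values with each channel's 9 stored coefficients.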
Meanwhile, both of these related-art approaches are based on uniform grids and do not distinguish key areas from non-key areas of the virtual scene.
As the virtual scene expands, the lack of distinction between key and non-key areas inflates memory occupancy and, with it, the performance cost.
In view of the problems in the related art, the present disclosure provides a display control method of a virtual object. Fig. 1 shows a flowchart of a display control method for a virtual object, where an execution subject of the method may be a server or a terminal, and each step in the method may be executed by the same execution subject or different execution subjects. As shown in fig. 1, the method for controlling display of a virtual object at least includes the steps of:
step S110, the position of the first virtual object in the virtual scene is obtained.
S120, determining a target grid corresponding to the position from the plurality of first grids and the plurality of second grids; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to the second virtual object.
And S130, obtaining corresponding target rendering data according to the target grid index, and rendering the first virtual object according to the target rendering data.
In the method and apparatus provided in the exemplary embodiment of the present disclosure, a virtual scene is first divided in a non-uniform grid form to obtain first grids, the first grids are then divided again according to a second virtual object to obtain second grids, a target grid corresponding to the first virtual object is determined from the first grids and the second grids, and finally target rendering data of the target grid is determined and the first virtual object is rendered according to that data. On the one hand, this solves the problem that key areas and non-key areas cannot be distinguished and reduces disk and memory occupancy; on the other hand, the virtual object is rendered differentially according to the grid in which its real-time position falls, improving rendering speed while preserving the rendering effect.
The following describes each step of the virtual object display control method in detail.
In step S110, a target position of the first virtual object in the virtual scene is acquired.
In an exemplary embodiment of the present disclosure, a virtual scene refers to an imaginary scene that does not exist in the real world. Such scenes commonly appear in works such as films and television shows, as well as in various games.
The virtual scene may be a virtual scene in a movie, a virtual scene in a computer animation, a virtual scene in a game at a mobile phone end, or a virtual scene in a game at a computer end, which is not particularly limited in this exemplary embodiment.
The first virtual object refers to a specific object in the virtual scene, and the first virtual object may be a dynamic virtual character in the virtual scene, for example, the first virtual object may be a virtual character controlled by a game player, may also be a dynamic virtual creature in the virtual scene, and may also be a dynamic virtual prop in the virtual scene, which is not particularly limited in this exemplary embodiment.
The target position refers to three-dimensional coordinate information indicating a position of the first virtual object, and may be three-dimensional coordinate information of a center point of the first virtual object, or three-dimensional coordinate information of a certain point in the first virtual object, or three-dimensional coordinate information obtained by calculating three-dimensional coordinate information of all points in the first virtual object according to a certain algorithm, which is not particularly limited in this exemplary embodiment. Illustratively, the acquisition of the target location is a real-time acquisition.
Specifically, the target position may be the center point of the bounding box of the first virtual object, the upper left corner of that bounding box, its upper right corner, its lower left corner, or the position of any point on the bounding box of the first virtual object, which is not particularly limited in this exemplary embodiment.
The bounding box refers to an algorithm for solving an optimal bounding space of a discrete point set, and the basic idea is to approximately replace a complex geometric object by a geometric body (bounding box) with a slightly larger volume and simple characteristics. The bounding box may be a rectangular parallelepiped, a cube, or a sphere, which is not limited in this exemplary embodiment.
For example, there are a variety of three-dimensional scenes in shooting games, including forest scenes, glacier scenes, and battlefield scenes.
When the shooting game takes place in a forest scene, the forest scene is the virtual scene. A game character A may exist in the shooting game; the game character A is then the first virtual object, and the target position may be the position of the center point of game character A's bounding box, specifically the three-dimensional coordinates of that center point.
In an alternative embodiment, obtaining a target position of a first virtual object in a virtual scene includes: determining a bounding box of a first virtual object in a virtual scene, and determining a target position of the first virtual object in the virtual scene according to the bounding box.
The bounding box refers to an algorithm for solving an optimal bounding space of a discrete point set, and the basic idea is to approximately replace a complex geometric object by a geometric body (bounding box) with a slightly larger volume and simple characteristics. The bounding box may be a rectangular parallelepiped, a cube, or a sphere, which is not limited in this exemplary embodiment.
For example, in a shooting game, a game character A exists in a forest scene; the game character A is then the first virtual object. An optimal bounding space is first calculated from a set of discrete points of game character A, yielding a cuboid that can stand in for game character A; this cuboid is the bounding box of game character A. The position of the center point of this bounding box is then acquired as the target position.
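A minimal illustration of this embodiment, assuming an axis-aligned bounding box and invented sample points for "game character A":

```python
def bounding_box_center(points):
    """Axis-aligned bounding box (AABB) of a discrete point set and its center.
    Returns ((min_corner, max_corner), center)."""
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    return (lo, hi), center

# Hypothetical discrete points sampled from the character model.
points = [(0.0, 0.0, 0.0), (2.0, 0.5, 1.0), (1.0, 1.5, 0.5)]
(_, _), target_position = bounding_box_center(points)
print(target_position)  # (1.0, 0.75, 0.5)
```

An AABB is the simplest of the bounding volumes the text mentions; a sphere or oriented box would follow the same idea with a different fit.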
In an alternative embodiment, the accuracy of determining the target position may be improved by determining the bounding box of the first virtual object.
Determining a target mesh corresponding to the target position from the plurality of first meshes and the plurality of second meshes in step S120; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to the second virtual object.
The first grid refers to a three-dimensional body obtained by splitting a virtual scene once. The first grid may be a rectangular parallelepiped, a cube, or any three-dimensional solid figure, which is not particularly limited in this exemplary embodiment.
The second grid refers to a solid obtained by further splitting the first grid, where the split may be performed once, twice, three times, or any number of times, determined according to actual requirements.
It should be noted that the first mesh and the second mesh may be obtained through division processing in advance, or may be obtained through real-time division based on the position of the routing mesh where the second virtual object is located, and this is not limited in this exemplary embodiment.
For example, in a forest scene of a shooting game there are a game character A, a grass B, and a sea C. The forest is the virtual scene, and the grass B and the sea C contain way-finding grids of game character A. The forest scene can be divided in advance to obtain the first grids, and the first grids at the positions of the grass B and the sea C can then be divided again in advance to obtain the second grids.
For example, in a forest scene of a shooting game there are a game character A, a grass B, and a sea C. The forest is the virtual scene, the game character is the first virtual object, the grass B and the sea C are second virtual objects, and the grass B and the sea C contain way-finding grids of game character A.
When the first virtual object moves to the grass B or the sea C, the virtual scene area where the grass B and the sea C are located is split in real time to obtain a first grid, and then the first grid is split again to obtain a second grid.
Specifically, the first mesh may be a first-level target mesh, the second mesh obtained by splitting the first mesh once may be a second-level target mesh, the third-level target mesh may be obtained by continuously splitting the second-level target mesh once, and so on, and target meshes of different levels may be obtained according to splitting of different times.
The splitting criterion may be a preset splitting value, which may be the number of the second grids, or a splitting criterion of the first grid, and this is not particularly limited in this exemplary embodiment.
Specifically, as shown in fig. 2, a schematic diagram of a top-view structure of target grids of different levels is shown, and as shown in fig. 2, the target grids are assumed to be divided into three levels. Target grid 210 is a first-level target grid, i.e., the first grid, target grid 220 is a second-level target grid, and target grid 230 is a third-level target grid.
The target grids of different levels have different length, width and height. And the length, width and height values of the first-level target grid are greater than those of the second-level target grid, and the length, width and height values of the second-level target grid are greater than those of the third-level target grid.
Here, the first-level target mesh 210 may be divided into 8 second-level target meshes 220 by dividing the first-level target mesh 210 into 2 regions in each of the front-back direction, the left-right direction, and the up-down direction.
Dividing the second-level target mesh 220 into 2 regions along each of the same three directions likewise splits each second-level target mesh 220 into 8 third-level target meshes 230.
Since fig. 2 is a top view, 4 second-level target meshes and 4 third-level target meshes are shown, and in practice, the number of the second-level target meshes and the number of the third-level target meshes are both 8.
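The level-by-level split described above can be sketched as follows: dividing a grid into 2 regions along each of the three axes yields 8 children, so a first-level grid produces 8 second-level grids and, after another round, 64 third-level grids (the box representation is an assumption for illustration):

```python
def subdivide(box):
    """Split an axis-aligned grid (min corner, max corner) into 2 regions
    along each of the three axes, yielding 8 child grids."""
    (x0, y0, z0), (x1, y1, z1) = box
    mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    children = []
    for ax, bx in ((x0, mx), (mx, x1)):
        for ay, by in ((y0, my), (my, y1)):
            for az, bz in ((z0, mz), (mz, z1)):
                children.append(((ax, ay, az), (bx, by, bz)))
    return children

first_level = ((0.0, 0.0, 0.0), (100.0, 100.0, 100.0))
second_level = subdivide(first_level)
third_level = [c for box in second_level for c in subdivide(box)]
print(len(second_level))  # 8 second-level target grids
print(len(third_level))   # 64 third-level target grids (8 per second-level grid)
```

This is the same recursive scheme an octree uses; deeper levels follow by repeating the split.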
The target mesh may be divided into 2, 3, 4, or more regions along each of the front-back, left-right, and up-down directions, which is not limited in this exemplary embodiment.
The second virtual object refers to a virtual scene object to which the first virtual object can walk, and such virtual scene object usually contains corresponding lighting information. Specifically, the second virtual object may be a house in the virtual scene, may be a forest in the virtual scene, may be a vehicle in the virtual scene, may be a grass in the virtual scene, may be a sea in the virtual scene, may be a river in the virtual scene, and may be an area with illumination in the virtual scene.
It should be noted that the second virtual object refers to any virtual scene object that the first virtual object can go to in the virtual scene, and this exemplary embodiment is not particularly limited to this.
For example, a game character a and a tree B exist in a forest scene of a shooting game, where the game character a is a first virtual object, the forest scene is a virtual scene, the tree B is a second virtual object, and the target position corresponding to the game character a may be (1, 1, 1).
Assume the preset splitting value is 4 and the virtual scene has been split into a plurality of first grids, for example 10 first grids, each a cube with a side length of 100 pixels. Suppose 5 of these first grids contain the tree B and are split again: according to the preset splitting value, each of the 5 first grids is divided into 4 regions along each of the left-right, up-down, and front-back directions, so each is split into 64 second grids.
At this time, there are 5 first meshes and 320 second meshes in the forest scene. A target mesh corresponding to the target position (1, 1, 1) is determined among the 5 first meshes and the 320 second meshes.
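A minimal sketch of locating the target grid for a target position such as (1, 1, 1) among the first grids and second grids might look as follows; the tuple representation, the `children` mapping, and the function names are illustrative assumptions, not part of the disclosure.

```python
def contains(grid, pos):
    # grid is (min_x, min_y, min_z, size); a cube, per the example above.
    gx, gy, gz, s = grid
    x, y, z = pos
    return gx <= x < gx + s and gy <= y < gy + s and gz <= z < gz + s

def split(grid, parts):
    # Divide a grid into `parts` regions along each axis (parts=4 -> 64).
    gx, gy, gz, s = grid
    c = s / parts
    return [(gx + i * c, gy + j * c, gz + k * c, c)
            for i in range(parts) for j in range(parts) for k in range(parts)]

def find_target_grid(first_grids, children, pos):
    """Return the finest grid containing pos.

    `children` maps a split grid to its second grids; an unsplit first
    grid is its own target grid.
    """
    for g in first_grids:
        if not contains(g, pos):
            continue
        while g in children:
            g = next(c for c in children[g] if contains(c, pos))
        return g
    return None
```

For a first grid that was split with the preset splitting value 4, the lookup descends into one of its 64 second grids; for an unsplit first grid, the first grid itself is returned as the target grid.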
In an alternative embodiment, the target rendering data comprises first rendering data corresponding to the first mesh and/or second rendering data corresponding to the second mesh; rendering the first virtual object according to the target rendering data, comprising: rendering the first virtual object according to the first rendering data and/or the second rendering data.
The first rendering data refers to rendering data in the data pool having a mapping relationship with the first grid, and the second rendering data refers to rendering data in the data pool having a mapping relationship with the second grid.
The target rendering data may include only the first rendering data, in which case the first virtual object is rendered according to the first rendering data.
It may include only the second rendering data, in which case the first virtual object is rendered according to the second rendering data.
It may also include both the first rendering data and the second rendering data, in which case the first virtual object is rendered according to the first rendering data and the second rendering data.
Fig. 3 shows a schematic diagram of the structure of rendering data, and as shown in fig. 3, rendering data 310 is stored in the size of the first grid, for example, 128 m × 128 m. The rendering data includes global illumination information and shadow information.
Part of the global illumination information is stored in RGBA16F format, and the remaining global illumination information and the shadow information are stored in RGBA8 format. Here, R, G, B, and A denote the red, green, blue, and transparency channels; 16F indicates a 16-bit floating-point value per channel, and 8 indicates an 8-bit value per channel.
In addition, the rendering data 310 includes both the first rendering data 320 and the second rendering data 330. The first rendering data 320 is used when rendering with the first mesh, and the second rendering data 330 is used when rendering with the second mesh and any finer second meshes.
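As an illustration of the storage formats just mentioned, the per-texel sizes work out as follows: RGBA16F stores four 16-bit floating-point channels (8 bytes per texel) and RGBA8 stores four 8-bit channels (4 bytes per texel). The sketch below is an assumption-level estimate, not part of the disclosure; in particular, the square one-probe-per-metre layout over the 128 m × 128 m first grid is only an illustrative assumption.

```python
def texel_bytes(channels, bits_per_channel):
    # Size of one texel in bytes for a packed format.
    return channels * bits_per_channel // 8

RGBA16F = texel_bytes(4, 16)  # four half-float channels -> 8 bytes per texel
RGBA8 = texel_bytes(4, 8)     # four 8-bit channels      -> 4 bytes per texel

def grid_storage_bytes(probes_per_side, rgba16f_layers, rgba8_layers):
    """Storage for one first grid's rendering data, assuming a square
    probes_per_side x probes_per_side probe layout (e.g. one probe per
    metre over the 128 m x 128 m first grid mentioned above)."""
    probes = probes_per_side ** 2
    return probes * (rgba16f_layers * RGBA16F + rgba8_layers * RGBA8)
```

Under these assumptions, one RGBA16F layer plus one RGBA8 layer over a 128 × 128 probe grid costs 128 × 128 × 12 bytes, i.e. 192 KiB per first grid.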
In addition, fig. 4 is a schematic diagram illustrating a display control effect of a virtual object in the related art, as shown in fig. 4, a scene 410 is a virtual scene, a person 420 is a first virtual object in the virtual scene, and a rendering effect 430 is a rendering effect implemented by using the Ambient Cube global illumination technology.
Fig. 5 is a schematic diagram illustrating a rendering effect according to an embodiment of the present disclosure, and as shown in fig. 5, a rendering effect 510 is a rendering effect achieved by using an embodiment of the present disclosure.
Compared with the rendering effect 430 in fig. 4, the rendering effect 510 of the embodiment of the present disclosure renders colors more clearly and achieves a better rendering effect.
In addition, there are two rendering modes in the embodiment of the present disclosure, one is a level-based mode, and the other is a viewpoint position-based mode.
In the level-based mode, different areas are rendered differently according to their level values in the scene: some areas are rendered using the target grid information corresponding to the target grid in that area, while other areas are rendered using the original grid information corresponding to that area.
The viewpoint position-based manner means that a region close to the viewpoint is rendered using target mesh information corresponding to the region, and a region far from the viewpoint is rendered using original mesh information corresponding to the region.
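The viewpoint-position-based mode can be sketched as a simple distance test. The function below is an illustrative assumption (the names and the `near_distance` threshold are not from the disclosure); it merely shows the selection rule: near regions use the finer target-grid data, distant regions fall back to the original-grid data.

```python
import math

def pick_rendering_data(region_center, viewpoint, near_distance,
                        target_data, original_data):
    """Viewpoint-based mode: a region close to the viewpoint is rendered
    with the target mesh information for that region; a region far from
    the viewpoint is rendered with its original mesh information."""
    d = math.dist(region_center, viewpoint)  # Euclidean distance (3D)
    return target_data if d <= near_distance else original_data
```

The same structure could drive the level-based mode by comparing a region's level value to a threshold instead of its distance to the viewpoint.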
For example, in a shooting game, if the grids where the first virtual object is located include only first meshes, the first virtual object is rendered using the first rendering data corresponding to those first meshes.
If the grids where the first virtual object is located include both first meshes and second meshes, then when the first virtual object is rendered, the first meshes are rendered using the first rendering data and the second meshes are rendered using the second rendering data.
In an optional embodiment, the second rendering data is used for rendering the places that require high-precision rendering, and the first rendering data is used for rendering the places that do not, so that the rendering speed of the first virtual object is increased and the occupancy rates of the disk and the memory are reduced.
In an alternative embodiment, fig. 6 shows a flowchart of rendering a first virtual object according to target rendering data, and as shown in fig. 6, the method at least includes the following steps: in step S610, third rendering data of the remaining grids where the first virtual object is located is obtained by calculation according to the target rendering data.
The target rendering data refers to rendering data having a mapping relationship with the target grid, where the target grid is determined according to the target position. However, the target position is only one position in the first virtual object, and the first virtual object spans a plurality of positions; therefore, the remaining grids refer to the grids corresponding to positions of the first virtual object other than the target position, and the third rendering data refers to the rendering data in the data pool corresponding to those grids.
For example, the target rendering data is the rendering data corresponding to the target mesh, but it covers only one mesh of the first virtual object. To render the entire first virtual object, the rendering data corresponding to the remaining meshes, i.e., the third rendering data, must first be calculated from the target rendering data.
In step S620, the first virtual object is rendered according to the target rendering data and the third rendering data.
Since the first virtual object is divided into a plurality of meshes, and the plurality of meshes are composed of the target mesh and the other meshes, it is necessary to render the first virtual object according to target rendering data corresponding to the target mesh and third rendering data corresponding to the other meshes.
For example, the obtained target rendering data is yellow, the third rendering data obtained by calculating the target rendering data includes orange, red, and green, and the first virtual object is rendered according to the yellow, orange, red, and green.
In an optional embodiment, the rendering of the first virtual object may be implemented without searching for rendering data corresponding to the remaining grids, which increases the rendering speed of the first virtual object and avoids unnecessary performance loss.
In an optional embodiment, the obtaining, according to the target rendering data, third rendering data of the remaining grids where the first virtual object is located by calculation includes:
and calculating the target data by utilizing a third-order spherical harmonic function to obtain third rendering data of the rest grids where the first virtual object is located.
The third-order spherical harmonic function is a representation of the rendering data; it takes a normal vector as a parameter, and different normal vector values yield different function results and thus different rendering data.
The target grid and the remaining grids differ in the directions of their normal vectors, so substituting the normal vectors of the remaining grids into the normal vector parameter of the target grid's function yields the third rendering data corresponding to the remaining grids.
For example, the normal vector coordinates of the third-order spherical harmonic corresponding to the target rendering data are (1, 1, 1), 2 remaining meshes exist in the first virtual object, and the normal vectors of the two remaining meshes are (1, 1, 0) and (1, 1, 2), respectively, and the (1, 1, 0) and (1, 1, 2) are substituted into the third-order spherical harmonic corresponding to the target rendering data to obtain third rendering data corresponding to the 2 remaining meshes. It should be noted that the number of grids of the first virtual object may also be other values, and this exemplary embodiment is not particularly limited thereto.
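The evaluation described above can be sketched as follows. This is an illustrative Python sketch, not code from the disclosure: the function names and the use of the standard real spherical-harmonic basis for bands l = 0..2 (nine basis functions, i.e. "third order") are assumptions. Shading a remaining grid substitutes its normal vector into the same function; since example normals such as (1, 1, 2) are not unit length, the sketch normalises its input.

```python
import math

def sh_basis(normal):
    """Nine real spherical-harmonic basis values (bands l = 0, 1, 2)
    evaluated at a direction; the standard graphics constants are used."""
    x, y, z = normal
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n  # the basis expects a unit vector
    return [
        0.282095,                          # l=0
        0.488603 * y,                      # l=1, m=-1
        0.488603 * z,                      # l=1, m=0
        0.488603 * x,                      # l=1, m=1
        1.092548 * x * y,                  # l=2, m=-2
        1.092548 * y * z,                  # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),    # l=2, m=0
        1.092548 * x * z,                  # l=2, m=1
        0.546274 * (x * x - y * y),        # l=2, m=2
    ]

def shade(coeffs, normal):
    """Reconstruct the stored lighting for a normal vector: the dot
    product of the 9 stored coefficients with the basis at that normal."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(normal)))
```

With the nine coefficients stored per grid (per color channel), evaluating `shade` with the normal vector of each remaining grid produces that grid's third rendering data.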
In an optional embodiment, the third rendering data is obtained by utilizing the third-order spherical harmonic function and the rest of grids, so that the accuracy of the obtained rendering data can be improved, the rendering effect of the first virtual object is further improved, and the user experience is optimized.
In step S130, corresponding target rendering data is obtained according to the target grid index, and the first virtual object is rendered according to the target rendering data.
The target rendering data refers to rendering data having a mapping relationship with the target grid, where the target grid is determined according to the target position.
Rendering refers to the process of converting the first virtual object in the virtual scene into a digital image or bitmap image according to the rendering data. It should be noted that, in the exemplary embodiment of the present disclosure, rendering based on the target rendering data applies to the first virtual object rather than to static virtual objects in the virtual scene: since a static virtual object's position in the game scene is fixed, a corresponding illumination map may be preset for it before the game is executed, so static virtual objects need not be rendered according to the target rendering data.
Here, a digital image refers to a two-dimensional image, and a bitmap image refers to an image composed of individual pixels.
For example, a game character a exists in a forest scene of a shooting game, and at this time, the game character a is a first virtual object, and corresponding target rendering data is found to be red according to a target grid, so that the game character a is rendered to be red.
In an alternative embodiment, the dividing the plurality of first meshes according to the second virtual object includes: and carrying out division processing on the first mesh to be divided, wherein the first mesh to be divided is the first mesh in which the second virtual object is positioned.
The first grid refers to a cube obtained after the virtual scene is divided according to a preset value. The preset value may be the number of cubes, or it may be split values in the front-back direction, the up-down direction, and the left-right direction, which is not particularly limited in this exemplary embodiment.
The second virtual object refers to a virtual scene object that the first virtual object can walk to, and the second virtual object may be a house in the virtual scene, a forest in the virtual scene, a vehicle in the virtual scene, a grass in the virtual scene, a sea in the virtual scene, a river in the virtual scene, or an area with illumination in the virtual scene.
The first grid to be split refers to the first grid in which the second virtual object is located, which is to be split again.
For example, consider a forest scene of a shooting game in which a game character a, a tree B, and a room C exist. In this case, the first virtual object refers to the game character a, and the second virtual objects refer to the tree B and the room C.
Assuming that the preset value is 4, the forest scene is split into 4 regions along each direction, namely 64 cubes; these 64 cubes are the first grids. It is then determined whether a second virtual object exists in any of the 64 cubes, and each first grid in which a second virtual object exists is determined to be a first grid to be split.
In an optional embodiment, the first grid in which the second virtual object is located is determined as the first grid to be divided and is split, so that the grids are no longer uniform and key areas are distinguished from non-key areas. The virtual object is then rendered differentially according to the grid in which the real-time position of the first virtual object falls, which increases the rendering speed while ensuring the rendering effect.
In an alternative embodiment, fig. 7 is a schematic flowchart illustrating a process of dividing a first mesh to be divided, and as shown in fig. 7, the method at least includes the following steps: in step S710, a way-finding mesh of the first virtual object is determined from the first mesh to be divided as a second mesh to be divided.
Wherein the way-finding mesh refers to a set of triangles in the first virtual object walkable plane.
Here, the walkable plane refers to a plane to which the first virtual object in the scene can walk. In addition, each way-finding grid has a point that may represent its position; this point may be the position of the center point of a triangle in the walkable plane, the position of one vertex of the triangle, or the average of the position information of all vertices of the triangle, which is not particularly limited in this exemplary embodiment.
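For illustration, the representative point of a way-finding triangle can be computed as its center point, which for a triangle coincides with the average of its three vertices. The sketch below is an assumption-level example, not code from the disclosure.

```python
def triangle_centroid(a, b, c):
    """Center point of a way-finding triangle: the component-wise
    average of its three vertices a, b, c (each an (x, y, z) tuple)."""
    return tuple((pa + pb + pc) / 3.0 for pa, pb, pc in zip(a, b, c))
```

This representative point can then be tested against the first grids and second grids (as in the target-grid lookup earlier) to decide which grid a way-finding triangle belongs to.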
The second mesh to be divided refers to a mesh that the first virtual object can reach in the first mesh.
For example, a game character a, a bush B, and a sea C exist in a forest scene of a shooting game. The forest scene, i.e., the virtual scene, is divided into 64 first grids, each a cube with a side length of 100 pixels, and 4 of the 64 first grids contain the bush B or the sea C, i.e., a second virtual object.
Of those first grids containing the bush B or the sea C, only 3 are grids to which the game character a, i.e., the first virtual object, can travel; these 3 first grids contain the way-finding grids and are also the second grids to be divided.
In step S720, a second mesh to be divided, which is a routing mesh of the first virtual object determined from the first mesh to be divided, is divided.
That is, a first mesh to be divided is split again when it is both a way-finding mesh and a mesh in which the second virtual object is located.
For example, in a forest scene of a shooting game, there are a game character a, a bush B, and a sea C. There are 3 grids that are both way-finding grids and grids where the bush B or the sea C is located; these three second grids to be divided are split again to obtain a plurality of second grids.
Specifically, the splitting process may divide each second grid to be divided into 4 regions along each of the left-right, up-down, and front-back directions according to the preset value of 4, so as to obtain a plurality of second grids.
In an optional embodiment, the way-finding grids within the first grids to be divided are further divided into second grids, so that the grids are no longer uniform and key areas are distinguished from non-key areas. The virtual object is then rendered differentially according to the grid in which the real-time position of the first virtual object falls, which increases the rendering speed while ensuring the rendering effect.
In an alternative embodiment, fig. 8 is a schematic flowchart illustrating a process of obtaining corresponding rendering data according to a target grid index, where as shown in fig. 8, the target grid and the rendering data are indexed by a mapping relationship, and the rendering data is stored in a data pool, where the method at least includes the following steps: in step S810, identification information corresponding to the target mesh is determined.
If the target grids were uniform, then given the target position of a target grid, the rendering data corresponding to that position could be found directly in the data pool.
However, the target meshes in the embodiment of the present disclosure are non-uniform, so there is no direct correspondence between the target position and the rendering data. A correspondence between the target position and the identification information is therefore established first, and the rendering data is then obtained through the correspondence between the identification information and the rendering data.
Based on this, the identification information refers to information having a correspondence relationship with the target position. The identification information may be a number or a character string, which is not limited in this exemplary embodiment.
For example, in a shooting game, the position information of the point at the upper left corner of the target grid is acquired first; this position is the target position of the target grid, and the identification information corresponding to that target position is then acquired.
In addition, the identification information is updated correspondingly with the splitting condition of the first grid.
Fig. 9 shows a schematic diagram of a two-dimensional space before status update of identification information, and as shown in fig. 9, identification information 910 is identification information before status update of identification information. Here, the identification information 912 is identification information corresponding to the first mesh a, the identification information 914 is identification information corresponding to the first mesh B, the identification information 916 is identification information corresponding to the first mesh C, and the identification information 918 is identification information corresponding to the first mesh D.
Before the status update, the identification information 910 is uniformly divided into the identification information 912, 914, 916, and 918.
Fig. 10 shows a schematic diagram of a two-dimensional space after status update of the identification information; as shown in fig. 10, identification information 1010 is the identification information after the status update. Identification information 1012 corresponds to the first grid A after the status update, identification information 1014 corresponds to the first grid B, identification information 1016 corresponds to the first grid C, and identification information 1018 corresponds to the first grid D.
Since the first meshes A, B, and C are not divided, their corresponding identification information does not change. By contrast, since division processing is performed on the first mesh D, the identification information 1018 is updated according to the division of the first mesh D.
Specifically, since the first mesh D is divided into the second meshes D1, D2, D3, and D4, and the second mesh D4 is further divided into the finer second meshes G4-1, G4-2, G4-3, and G4-4, the identification information 1018 is divided accordingly, and the identification information originally at 1018 is overwritten by the new identification information.
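The identification-information update of figs. 9-10 can be sketched in 2D as overwriting the split cell's region in an indirection map with its children's ids. The function below is an illustrative assumption, not part of the disclosure; it assumes the region to be split is an axis-aligned square occupying at least 2 × 2 cells of the finest-resolution map.

```python
def update_identifiers(id_grid, old_id, new_ids):
    """Overwrite the region labelled old_id with a 2x2 block of new ids,
    as in figs. 9-10: the split cell's identification information is
    replaced quadrant by quadrant by its children's ids. id_grid is a
    list-of-lists at the finest resolution."""
    cells = [(r, c) for r, row in enumerate(id_grid)
             for c, v in enumerate(row) if v == old_id]
    rows = sorted({r for r, _ in cells})
    cols = sorted({c for _, c in cells})
    r0, c0 = rows[0], cols[0]
    half_r, half_c = len(rows) // 2, len(cols) // 2
    for r, c in cells:
        quadrant = 2 * ((r - r0) // half_r) + (c - c0) // half_c
        id_grid[r][c] = new_ids[quadrant]
```

After the update, a target position still indexes the same map, but the cell it lands in now carries the id of the finer grid, so the data-pool lookup in step S820 transparently returns the finer rendering data.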
In step S820, rendering data corresponding to the identification information is determined from the data pool according to the mapping relationship.
The mapping relationship refers to the correspondence between the target grid and the rendering data; specifically, it may be a relationship in which elements of two sets correspond to each other. Assuming x maps to y: for each value of x there is one and only one corresponding value of y, while a single value of y may correspond to a plurality of values of x.
The rendering data refers to data for rendering the first virtual object, and the rendering data may be color data, transparency data, or texture data, which is not particularly limited in this exemplary embodiment.
For example, in a shooting game, after the identification information is determined, the data pool corresponding to the identification information is obtained, and the rendering data corresponding to the identification information, such as color information, is obtained from the data pool.
In step S830, the target object is rendered according to the rendering data corresponding to the identification information.
Rendering refers to a process of converting a first virtual object in a virtual scene into a digital image or a bitmap image according to rendering data.
Here, a digital image refers to a two-dimensional image, and a bitmap image refers to an image composed of individual pixels.
For example, a game character a exists in a shooting game, where the game character a is a first virtual object, and when rendering data of the first virtual object is determined, the first virtual object is converted into an image of a three-dimensional virtual scene to obtain a rendered first virtual object.
In the method and apparatus provided in the exemplary embodiment of the present disclosure, a virtual scene is first divided according to a non-uniform grid form to obtain a first grid, then the first grid is divided again according to a second virtual object to obtain a second grid, then a target grid corresponding to the first virtual object is determined according to the first grid and the second grid, finally target rendering data of the target grid is determined, and the first virtual object is rendered according to the rendering data. On one hand, the problem that key areas and non-key areas cannot be distinguished is solved, and the occupancy rates of a disk and a memory are reduced; on the other hand, the virtual objects are rendered in a distinguishing mode according to the grids where the real-time positions of the first virtual objects are located, and the rendering speed is improved on the basis of guaranteeing the rendering effect.
Further, in an exemplary embodiment of the present disclosure, there is also provided a display control apparatus of a virtual object. Fig. 11 is a schematic structural diagram of a display control apparatus for a virtual object, and as shown in fig. 11, a display control apparatus 1100 for a virtual object may include: an acquisition module 1110, a determination module 1120, and a rendering module 1130. Wherein:
an obtaining module 1110 configured to obtain a target position of a first virtual object in a virtual scene; a determining module 1120 configured to determine a target grid corresponding to the target location from the plurality of first grids and the plurality of second grids; the first grids are obtained by dividing the virtual scene, and the second grids are obtained by dividing the first grids according to the second virtual object; the rendering module 1130 is configured to obtain corresponding target rendering data according to the target grid index, and render the first virtual object according to the target rendering data.
The details of the display control apparatus 1100 for virtual objects are described in detail in the display control method for corresponding virtual objects, and therefore are not described herein again.
It should be noted that although several modules or units of the display control apparatus 1100 of the virtual object are mentioned in the above detailed description, such division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into and embodied by a plurality of modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1200 according to such an embodiment of the invention is described below with reference to fig. 12. The electronic device 1200 shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 12, the electronic device 1200 is embodied in the form of a general purpose computing device. The components of the electronic device 1200 may include, but are not limited to: the at least one processing unit 1210, the at least one memory unit 1220, the bus 1230 connecting the various system components (including the memory unit 1220 and the processing unit 1210), and the display unit 1240.
Wherein the memory unit stores program code that is executable by the processing unit 1210 to cause the processing unit 1210 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification.
The storage unit 1220 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 1221 and/or a cache memory unit 1222, and may further include a read-only memory unit (ROM) 1223.
Storage unit 1220 may also include a program/utility 1224 having a set (at least one) of program modules 1225, such program modules 1225 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which examples, or some combination thereof, may include an implementation of a network environment.
Bus 1230 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1200 may also communicate with one or more external devices 1270 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1200, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1250. Also, the electronic device 1200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 1260. As shown, the network adapter 1260 communicates with the other modules of the electronic device 1200 via the bus 1230. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the present description, when said program product is run on the terminal device.
Referring to fig. 13, a program product 1300 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. A method for controlling display of a virtual object, the method comprising:
acquiring a target position of a first virtual object in a virtual scene;
determining a target grid corresponding to the target location from a plurality of first grids and a plurality of second grids; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to a second virtual object;
and obtaining corresponding target rendering data by indexing according to the target grid, and rendering the first virtual object according to the target rendering data.
2. The method for controlling display of a virtual object according to claim 1, wherein the obtaining of the target position of the first virtual object in the virtual scene comprises:
determining a bounding box of a first virtual object in a virtual scene, and determining a target position of the first virtual object in the virtual scene according to the bounding box.
3. The method according to claim 1, wherein the dividing the plurality of first meshes according to the second virtual object includes:
and carrying out division processing on a first mesh to be divided, wherein the first mesh to be divided is the first mesh where a second virtual object is located.
4. The virtual object display control method according to claim 3, wherein the dividing the first mesh to be divided includes:
determining a way-finding grid of the first virtual object from the first mesh to be divided as a second mesh to be divided;
and carrying out division processing on the second mesh to be divided.
5. The method according to claim 1, wherein a mapping relationship exists between the target grid and rendering data, and the rendering data is stored in a data pool;
the obtaining of corresponding rendering data according to the target grid index includes:
determining identification information corresponding to the target grid;
determining rendering data corresponding to the identification information from the data pool according to the mapping relation;
and rendering the first virtual object according to the rendering data corresponding to the identification information.
6. The method according to claim 1, wherein the target rendering data includes first rendering data corresponding to the first mesh and/or second rendering data corresponding to the second mesh;
the rendering the first virtual object according to the target rendering data includes:
rendering the first virtual object according to the first rendering data and/or the second rendering data.
7. The method for controlling display of a virtual object according to claim 1, wherein said rendering the first virtual object according to the target rendering data includes:
calculating, according to the target rendering data, third rendering data corresponding to the remaining grids in which the first virtual object is located;
rendering the first virtual object according to the target rendering data and the third rendering data.
8. The method according to claim 7, wherein the calculating, according to the target rendering data, the third rendering data of the remaining grids in which the first virtual object is located includes:
performing calculation on the target rendering data by using a third-order spherical harmonic function to obtain the third rendering data of the remaining grids in which the first virtual object is located.
9. An apparatus for controlling display of a virtual object, comprising:
an acquisition module configured to acquire a target position of a first virtual object in a virtual scene;
a determination module configured to determine a target grid corresponding to the target location from a plurality of first grids and a plurality of second grids; the plurality of first grids are obtained by dividing the virtual scene, and the plurality of second grids are obtained by dividing the plurality of first grids according to a second virtual object;
and the rendering module is configured to obtain corresponding target rendering data according to the target grid index and render the first virtual object according to the target rendering data.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the display control method of a virtual object according to any one of claims 1 to 8.
11. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the display control method of a virtual object according to any one of claims 1 to 8 via execution of the executable instructions.
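Claims 1 to 8 together describe a lookup pipeline: the virtual scene is split into coarse first grids, the first grids around a second virtual object are subdivided into finer second grids, and each grid's identification information indexes precomputed rendering data in a data pool (for example, spherical-harmonic lighting coefficients, as claim 8 suggests). The sketch below illustrates that flow under stated assumptions; the names (`Scene`, `target_grid_id`, `CELL`, `SUBDIV`), the 2D layout, and the string-valued pool entries are illustrative choices by the editor, not part of the claimed implementation.

```python
# Hypothetical sketch of the claimed flow: coarse first-level grids,
# subdivision around a second virtual object, and a grid-id-keyed data pool.
from dataclasses import dataclass

CELL = 8.0    # assumed edge length of a first-level grid
SUBDIV = 2    # assumed subdivision factor for second-level grids

@dataclass
class Grid:
    grid_id: tuple          # identification information used as the index
    subdivided: bool = False

class Scene:
    def __init__(self):
        self.first_grids = {}   # (i, j) -> Grid
        self.data_pool = {}     # grid_id -> rendering data (e.g. SH coefficients)

    def first_key(self, x, y):
        # Which first-level grid contains position (x, y)?
        return (int(x // CELL), int(y // CELL))

    def subdivide_around(self, x, y):
        """Mark the first-level grid containing a second virtual object for subdivision."""
        key = self.first_key(x, y)
        self.first_grids.setdefault(key, Grid(key)).subdivided = True

    def target_grid_id(self, x, y):
        """Determine the target grid for a first virtual object's target position."""
        key = self.first_key(x, y)
        grid = self.first_grids.get(key)
        if grid is not None and grid.subdivided:
            # Descend into the second-level grid inside this first-level grid.
            sub = CELL / SUBDIV
            return key + (int((x % CELL) // sub), int((y % CELL) // sub))
        return key

    def rendering_data(self, x, y):
        """Index the data pool with the target grid's identification information."""
        return self.data_pool.get(self.target_grid_id(x, y))

scene = Scene()
scene.data_pool[(0, 0)] = "coarse-light-data"
scene.data_pool[(1, 0, 1, 1)] = "fine-light-data"
scene.subdivide_around(12.0, 3.0)        # a second virtual object sits in grid (1, 0)

print(scene.target_grid_id(3.0, 3.0))    # (0, 0): undivided first-level grid
print(scene.target_grid_id(13.0, 7.0))   # (1, 0, 1, 1): second-level grid
print(scene.rendering_data(13.0, 7.0))   # fine-light-data
```

In a real renderer the pool entries would hold per-grid lighting samples, and the third rendering data of claim 8 would be interpolated between neighboring grids' third-order spherical-harmonic coefficients rather than fetched for a single cell.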
CN202011476410.5A 2020-12-14 2020-12-14 Virtual object display control method and device, storage medium and electronic equipment Active CN112494941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011476410.5A CN112494941B (en) 2020-12-14 2020-12-14 Virtual object display control method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112494941A true CN112494941A (en) 2021-03-16
CN112494941B CN112494941B (en) 2023-11-28

Family

ID=74973493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011476410.5A Active CN112494941B (en) 2020-12-14 2020-12-14 Virtual object display control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112494941B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105378801A (en) * 2013-04-12 2016-03-02 微软技术许可有限责任公司 Holographic snap grid
EP3246879A1 (en) * 2016-05-20 2017-11-22 Thomson Licensing Method and device for rendering an image of a scene comprising a real object and a virtual replica of the real object
CN108196765A (en) * 2017-12-13 2018-06-22 网易(杭州)网络有限公司 Display control method, electronic equipment and storage medium
WO2019033859A1 (en) * 2017-08-18 2019-02-21 腾讯科技(深圳)有限公司 Rendering method for simulating illumination, and terminal
CN109364486A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 The method and device of HDR rendering, electronic equipment, storage medium in game
CN110193198A (en) * 2019-05-23 2019-09-03 腾讯科技(深圳)有限公司 Object jump control method, device, computer equipment and storage medium
CN110384924A (en) * 2019-08-21 2019-10-29 网易(杭州)网络有限公司 The display control method of virtual objects, device, medium and equipment in scene of game
CN111311339A (en) * 2020-05-09 2020-06-19 支付宝(杭州)信息技术有限公司 Target object display method and device and electronic equipment
CN111583378A (en) * 2020-06-11 2020-08-25 网易(杭州)网络有限公司 Virtual asset processing method and device, electronic equipment and storage medium
WO2020207202A1 (en) * 2019-04-11 2020-10-15 腾讯科技(深圳)有限公司 Shadow rendering method and apparatus, computer device and storage medium
CN111899323A (en) * 2020-06-30 2020-11-06 上海孪数科技有限公司 Three-dimensional earth drawing method and device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268666A (en) * 2021-05-25 2021-08-17 北京达佳互联信息技术有限公司 Content recommendation method, device, server and computer readable storage medium
CN113268666B (en) * 2021-05-25 2024-01-23 北京达佳互联信息技术有限公司 Content recommendation method, device, server and computer readable storage medium
CN113426131A (en) * 2021-07-02 2021-09-24 腾讯科技(成都)有限公司 Virtual scene picture generation method and device, computer equipment and storage medium
CN113426131B (en) * 2021-07-02 2023-06-30 腾讯科技(成都)有限公司 Picture generation method and device of virtual scene, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112494941B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN111068312B (en) Game picture rendering method and device, storage medium and electronic equipment
US20170154468A1 (en) Method and electronic apparatus for constructing virtual reality scene model
CN111773709B (en) Scene map generation method and device, computer storage medium and electronic equipment
CN114677467B (en) Terrain image rendering method, device, equipment and computer readable storage medium
CN112138386A (en) Volume rendering method and device, storage medium and computer equipment
CN112494941B (en) Virtual object display control method and device, storage medium and electronic equipment
CN110288688A (en) Rendering method, device, storage medium and the electronic equipment of virtual vegetation
CN110478898B (en) Configuration method and device of virtual scene in game, storage medium and electronic equipment
CN115082607B (en) Virtual character hair rendering method, device, electronic equipment and storage medium
CN112802170A (en) Illumination image generation method, apparatus, device, and medium
CN112717414A (en) Game scene editing method and device, electronic equipment and storage medium
CN111744199A (en) Image processing method and device, computer readable storage medium and electronic device
CN115937389A (en) Shadow rendering method, device, storage medium and electronic equipment
CN109448123A (en) The control method and device of model, storage medium, electronic equipment
WO2024198719A1 (en) Data processing method and apparatus, computer device, computer-readable storage medium, and computer program product
CN112206519B (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN118135081A (en) Model generation method, device, computer equipment and computer readable storage medium
CN111973984A (en) Coordinate control method and device for virtual scene, electronic equipment and storage medium
CN116485969A (en) Voxel object generation method, voxel object generation device and computer-readable storage medium
US9311747B2 (en) Three-dimensional image display device and three-dimensional image display program
CN112354188B (en) Image processing method and device of virtual prop, electronic equipment and storage medium
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN116899216B (en) Processing method and device for special effect fusion in virtual scene
CN116206066B (en) Method, storage medium and system for generating video based on scene reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant