GB2406253A - Image rendering in 3D computer graphics using visibility data - Google Patents
- Publication number: GB2406253A (application GB0321891A)
- Authority: GB (United Kingdom)
- Prior art keywords: image, computer model, texture data, rendering, camera
- Prior art date: 2003-09-18
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
An image of a 3D computer model is rendered using texture data from a camera image and also texture data from one or more texture maps storing texture data for all polygons in the computer model. The image to be used for rendering is selected from images showing different views. Rendering is performed in two passes. First, texture data from the texture map is rendered onto the computer model polygons. Second, image data from the selected image is rendered onto polygons visible both in the selected image and the image being generated. The texture data from the selected image is combined with the texture data from the first pass for each polygon in accordance with a weighting dependent upon the visibility of the polygon in the selected image and also the image being generated.
Description
IMAGE RENDERING IN 3D COMPUTER GRAPHICS

The present invention relates to the field of three-dimensional (3D) computer graphics, and more particularly to the rendering of images of a 3D computer model using texture data.
The technique of texture mapping is well known in the field of 3D computer graphics. In this technique, an image (either a digitised camera image or a synthetic image) known as a "texture map" is stored and mapped onto one or more surfaces of a three-dimensional computer model during rendering to represent surface detail in the final image of the model. The texture map is made up of a two-dimensional matrix of individual elements known as "texels" (like the pixels of an image) and, typically, a respective red, green and blue value is stored for each texel to define the texture data in the texture map.
Coordinates defining a point in the texture map are assigned to each vertex of each polygon in the 3D computer model. In this way the assigned texture map coordinates for a polygon's vertices define a corresponding polygon in the texture map containing the texture data which is to be mapped onto the 3D model polygon during rendering. The texture data assigned to each polygon in the 3D computer model is therefore constant, and so the same texture data is applied to a polygon for every viewing position and direction from which the polygon is rendered.
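For illustration, the sketch below (not part of the patent; sample_texture is a hypothetical helper) shows the fixed lookup implied by assigned texture coordinates: a vertex's (u, v) coordinates always address the same texel, whatever the rendering viewpoint.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of the texel addressed by (u, v) in [0, 1]."""
    h, w, _ = texture.shape
    x = min(int(u * (w - 1) + 0.5), w - 1)
    y = min(int(v * (h - 1) + 0.5), h - 1)
    return texture[y, x]

# A small RGB texture map and one vertex's assigned texture coordinates.
texture_map = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
vertex_uv = (0.25, 0.75)
print(sample_texture(texture_map, *vertex_uv))
```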
A problem occurs, however, when the texture data in the texture map is real image data from one or more camera images.
More particularly, the illumination, shadows and reflections in a real camera image are only correct for one particular viewing position and direction (that is, those from which the image was recorded relative to the object and the scene light sources). When the viewing position and direction from which the 3D computer model is viewed changes, the viewer would expect to see the illumination, shadows and reflections in the rendered image change accordingly. However, because the same texture data is mapped onto the 3D computer model regardless of the viewing position and direction, these changes are not seen and instead the viewer sees errors in the rendered images. These errors are particularly noticeable when there is a significant variation in the lighting intensity and distribution in the camera image(s) used to generate the texture data, and/or when the object has regions of highlights or self-shadows.
To address this problem, so-called "view-dependent texture mapping" has been proposed. In this technique, the vertices of each polygon in the 3D computer model are mapped into a plurality of camera images recorded from different viewing positions and directions to provide texture data. Accordingly, texture data from a plurality of respective camera images is available for each polygon. The image to be used to provide texture data for a particular virtual image is selected in dependence upon the relationship between the viewing position and direction for the virtual image and the viewing positions and directions of the camera images.
Conventional view-dependent texture mapping techniques suffer from a number of problems, however.
For example, a first problem which occurs in conventional view-dependent texture mapping techniques is that each camera image can only be used to generate texture data for polygons in the 3D computer model that are visible in the image. When a virtual image of the 3D computer model is rendered from a viewing position and direction different to that of a camera image, polygons will be visible in the virtual image which are not visible in the camera image and for which the camera image does not store texture data. Accordingly, post-processing is performed after rendering to identify polygons onto which no texture data was rendered, and to generate texture data for these polygons. The generation of texture data is performed by extrapolation of existing texture data in the rendered image, or by performing processing to identify another camera image in which the polygon for which texture data is missing is visible and to extract texture data therefrom (this processing having to be performed many times because many polygons typically do not have texture data after the initial rendering). The identification of polygons not having texture data after the initial rendering, and the generation of texture data therefor, is very time-consuming, resulting in a large amount of time being required to produce the final image.
In addition, errors are often noticeable in the texture data generated subsequent to the initial rendering because the texture data is taken from at least one different camera image or is an extrapolation of existing rendered texture data.
A second problem which occurs in conventional view dependent texture mapping techniques is that when the viewing position and direction from which the 3D computer model is viewed changes, the image from which texture data is taken for rendering onto the 3D computer model can change. As a result of this change of source of texture data, the rendered images appear discontinuous to the viewer (that is, the viewer sees a "jump" in the rendered images when the image from which texture data is taken changes).
The present invention aims to address at least one of the problems above.
According to the present invention, there is provided a 3D computer graphics processing method and apparatus in which a 3D computer model is rendered in two passes. In one pass, rendering uses texture data that is available for all parts of the 3D computer model. This texture data may be stored in one or more texture maps and the rendering may be view-independent or view-dependent. In the other pass, view-dependent rendering is performed, selecting an image from a plurality of images to be used for rendering. The data from the two passes is combined in different proportions for different parts of the 3D computer model in dependence upon the visibility of the respective parts.
These features enable a high quality rendered image of the 3D computer model to be generated quickly from a viewing position and/or direction different to those of the camera images available for selection for rendering. In particular, no post-processing to generate additional texture data after the image has been rendered is required, because texture data is available in one pass for all polygons in the 3D computer model irrespective of whether they are visible in the camera image selected for rendering.

In addition, these features enable more accurate images of the 3D computer model to be rendered as the viewing position and/or direction changes using texture data derived from camera images in which there are significant lighting changes, in which there are specular reflections, and/or in which there are self-shadows on the object. In particular, the blending of the image data from a camera image with other texture data ensures that changes in the image data are reduced, and are therefore less visible to a viewer, when the camera image from which image data is taken for rendering changes.
The present invention also provides a computer program product, embodied for example as a storage device carrying instructions or a signal carrying instructions, comprising instructions for programming a programmable processing apparatus to become operable to perform a method as set out above or to become configured as an apparatus as set out above.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 schematically shows the components of an embodiment of the invention, together with the notional functional processing units into which the processing apparatus component may be thought of as being configured when programmed by computer program instructions;

Figure 2 shows an example to illustrate the data input to the processing apparatus in Figure 1 to be processed to generate a texture map for a 3D computer model;

Figure 3 shows the processing operations performed by the processing apparatus in Figure 1;

Figure 4, comprising Figures 4a and 4b, shows the processing operations performed at step S3-10 in Figure 3; and

Figure 5 shows an example to illustrate the processing performed at step S4-10 in Figure 4.
Referring to Figure 1, an embodiment of the invention comprises a programmable processing apparatus 2, such as a personal computer (PC), containing, in a conventional manner, one or more processors, memories, graphics cards etc. together with a display device 4, such as a conventional personal computer monitor, and user input devices 6, such as a keyboard, mouse etc. The processing apparatus 2 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium 12 (such as an optical CD ROM, semiconductor ROM, magnetic recording medium, etc), and/or as a signal 14 (for example an electrical or optical signal input to the processing apparatus 2, for example from a remote database, by transmission over a communication network (not shown) such as the Internet or by transmission through the atmosphere), and/or entered by a user via a user input device 6 such as a keyboard.
As will be described in more detail below, the programming instructions comprise instructions to program the processing apparatus 2 to become configured to render an image of a 3D computer model of an object using texture data from a camera image of the object and also texture data from one or more texture maps, which, together, store texture data for all polygons in the 3D computer model. The camera image to be used for rendering is selected from a plurality of camera images showing different views of the object in dependence upon the respective viewing direction of each camera image and the viewing direction from which the image is to be rendered. Rendering with the texture data is performed in two passes. In the first pass, texture data from the texture map(s) storing texture data for all polygons is rendered onto the polygons of the 3D computer model that are visible in the image to be generated. In the second pass, image data from the selected camera image is rendered onto those polygons of the 3D computer model that are visible both in the selected camera image and the image to be generated, with the texture data from the selected camera image being combined with the texture data from the first pass. The texture data from the camera image added to each polygon in the second pass is weighted in dependence upon the visibility of the polygon in the camera image and also the image to be generated.
As a result of performing rendering in this way, texture data is generated for all polygons in the 3D computer model (because the texture map(s) used for the first pass rendering contains texture data for all polygons). In addition, changes and errors are less noticeable in images generated when the viewing position and direction changes and the camera image changes from which texture data is taken for rendering (because the image data from a camera image is blended during the second pass rendering with the texture data from the first pass rendering).
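To make the control flow concrete, the following sketch outlines the two passes and the final combination; it is illustrative only, and every parameter (render_with_texture_maps, select_camera_image, and so on) is a hypothetical placeholder for the operations described in the remainder of this description.

```python
def render_frame(model, texture_maps, camera_images, view_pos, view_dir,
                 render_with_texture_maps, render_with_camera_image,
                 select_camera_image, compute_alpha):
    """Two-pass rendering sketch: blend a texture-map pass with a camera-image pass."""
    # First pass: texture data is available for every polygon in the model.
    pass1 = render_with_texture_maps(model, texture_maps, view_pos, view_dir)

    # Select the camera image whose view best matches the requested view.
    camera = select_camera_image(camera_images, view_dir)

    # Second pass: image data from the selected camera image, where visible.
    pass2 = render_with_camera_image(model, camera, view_pos, view_dir)

    # Per-pixel, visibility-dependent weights (the alpha channel described below).
    alpha = compute_alpha(model, camera, view_pos, view_dir)[..., None]

    # Polygons poorly visible in the camera image keep the first-pass texture.
    return (1.0 - alpha) * pass1 + alpha * pass2
```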
When programmed by the programming instructions, processing apparatus 2 can be thought of as being configured as a number of functional units for performing processing operations. Examples of such functional units and their interconnections are shown in Figure 1. The units and interconnections illustrated in Figure 1 are, however, notional, and are shown for illustration purposes only to assist understanding; they do not necessarily represent units and connections into which the processor, memory etc of the processing apparatus 2 actually become configured.
Referring to the functional units shown in Figure 1, central controller 20 is operable to process inputs from the user input devices 6, and also to provide control and processing for the other functional units. Memory 30 is provided for use by central controller 20 and the other functional units.
Input data interface 40 is arranged to control the storage of input data within processing apparatus 2. The data may be input to processing apparatus 2 for example as data stored on a storage medium 42, as a signal 44 transmitted to the processing apparatus 2, or using a user input device 6.
In this embodiment, the input data comprises data defining a plurality of camera images of a subject object recorded at different relative positions and orientations, data defining a 3D computer model of the surface of the subject object, data defining the relative 3D positions and orientations of the camera images and the 3D computer surface model, and data defining at least one texture map storing texture data for each polygon of the 3D computer model. In addition, in this embodiment, the input data also includes data defining the intrinsic parameters of each camera which recorded an image, that is, the aspect ratio, focal length, principal point (the point at which the optical axis intersects the imaging plane), first order radial distortion coefficient, and skew angle (the angle between the axes of the pixel grid, which may not be exactly orthogonal).
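For illustration, these intrinsic parameters can be gathered into a 3x3 calibration matrix; the sketch below uses one common construction (the parameter values and the skew parameterisation are assumptions, and radial distortion is handled separately because it is not a linear operation).

```python
import numpy as np

def intrinsic_matrix(focal_length, aspect_ratio, principal_point, skew_angle):
    """Build a 3x3 pinhole calibration matrix from the listed intrinsic parameters."""
    fx = focal_length
    fy = focal_length * aspect_ratio
    cx, cy = principal_point
    skew = fx * np.tan(skew_angle)   # zero when the pixel axes are exactly orthogonal
    return np.array([[fx, skew, cx],
                     [0.0, fy,  cy],
                     [0.0, 0.0, 1.0]])

# Assumed example values for a 1024x768 image.
K = intrinsic_matrix(focal_length=1200.0, aspect_ratio=1.0,
                     principal_point=(512.0, 384.0), skew_angle=0.0)
```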
Thus, referring to Figure 2, the input data defines a plurality of camera images 200-214 and a 3D computer surface model 300 having positions and orientations defined in 3D space. In this embodiment, the 3D computer surface model 300 comprises a mesh of connected polygons (triangles in this embodiment) but other forms of 3D computer surface model may be processed, as will be described later. In addition, the input data defines the imaging parameters of the camera images 200-214, which define, inter alia, the respective focal point position 310-380 in 3D space of each image. The input data also defines at least one texture map storing texture data such that texture data is defined for every polygon in the 3D computer model 300 (although, if there is more than one texture map, each individual texture map does not need to store texture data for every polygon in the 3D computer model).
The input data defining the camera images 200-214 of the subject object, the data defining the 3D computer surface model 300, and the data defining the positions and orientations of the images and 3D computer surface model may be generated in any of a number of different ways.
For example, processing may be performed as described in co-pending EPC application EP-A-1,204,073 and co-pending US application US 2002-0085748-A1.

The input data defining the texture map(s) may be generated in any of a number of different ways, and the texture data may comprise synthetic texture data and/or texture data from camera images. For example, the texture data may be generated from the input camera images 200-214 as described in co-pending EPC application EP-A-1,204,073 and co-pending US application US 2002-0085748-A1 (the full contents of which are incorporated herein by cross-reference).
The input data defining the intrinsic camera parameters may be input, for example, by a user using a user input device 6.
Referring again to Figure 1, rendering engine 50 is operable to render an image of the 3D computer model 300 from any viewing position and direction using texture data taken from the input texture map(s) and a camera image 200-214.
In this embodiment, rendering engine 50 includes image selector 60, texture map renderer 70, and image renderer 80.
Image selector 60 is operable to select one of the camera images 200-214 as a camera image to provide texture data for rendering, the selection being carried out in accordance with the viewing position and direction from which an image of the 3D computer model 300 is to be rendered and the different viewing positions and directions of the camera images 200-214.
Texture map renderer 70 is operable to perform a first pass of rendering by rendering texture data from the input texture map(s) onto the polygons in the 3D computer model 300.
Image renderer 80 is operable to perform a second pass of rendering by rendering image data from the camera image 200-214 selected by image selector 60 onto the polygons in the 3D computer model and combining the texture data for each polygon with the texture data generated by texture map renderer 70 in accordance with a weighting representative of the visibility of the polygon in the camera image 200-214 selected by image selector 60 and also the visibility of the polygon in the image to be generated.
Display controller 90, under the control of central controller 20, is arranged to control display device 4 to display image data generated by rendering engine 50 and also to display instructions to the user.
Output data interface 100 is arranged to control the output of data from processing apparatus 2. In this embodiment, the output data comprises the image data generated by rendering engine 50. Output data interface 100 is operable to output the data for example as data on a storage medium 112 (such as an optical CD ROM, semiconductor ROM, magnetic recording medium, etc), and/or as a signal 114 (for example an electrical or optical signal transmitted over a communication network such as the Internet or through the atmosphere). A recording of the output data may be made by recording the output signal 114 either directly or indirectly (for example by making a first recording as a "master" and then making a subsequent recording from the master or from a descendent recording thereof) using a recording apparatus (not shown).
Figure 3 shows the processing operations performed by processing apparatus 2 to process input data in this embodiment.
Referring to Figure 3, at step S3-2, rendering engine 50 performs a preprocessing operation before performing any image rendering. More particularly, rendering engine 50 projects each polygon in the 3D computer model 300 into each camera image 200-214 and stores the positions of the projected polygon vertices in each camera image. As a result of this processing, the mapping of the polygons in the 3D computer model into each camera image 200-214 is defined, so that texture data is defined in the camera images 200-214 for the polygons in the 3D computer model 300.
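A minimal sketch of this pre-processing, assuming a pinhole camera model with an intrinsic matrix K and a world-to-camera rotation R and translation t per camera image (all names are illustrative; lens distortion is ignored here):

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project Nx3 world-space vertices into a camera image (pinhole model).

    Returns an Nx2 array of pixel positions, one per vertex."""
    cam = vertices @ R.T + t          # world -> camera coordinates
    pix = cam @ K.T                   # camera -> homogeneous image coordinates
    return pix[:, :2] / pix[:, 2:3]   # perspective divide

def precompute_projections(vertices, cameras):
    """Step S3-2 sketch: store projected vertex positions for every camera image."""
    return {cam_id: project_vertices(vertices, c["K"], c["R"], c["t"])
            for cam_id, c in cameras.items()}
```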
At steps S3-4 to S3-14 processing is performed to render images of the 3D computer model.
More particularly, at step S3-4, rendering engine 50 reads the viewing position and viewing direction for which the next image is to be rendered.
At step S3-6, image selector 60 selects one of the camera images 200-214 for rendering. In this embodiment, image selector 60 calculates the respective dot product of the viewing direction received at step S3-4 and the viewing direction of each camera image 200-214. The camera image for which the calculated dot product value is largest is selected for rendering.
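The selection rule at step S3-6 can be sketched as follows (an illustrative snippet, not taken from the patent; camera_view_dirs is an assumed list of viewing direction vectors, one per camera image 200-214).

```python
import numpy as np

def select_camera_image(camera_view_dirs, render_view_dir):
    """Step S3-6 sketch: pick the camera image whose viewing direction gives the
    largest dot product with the requested viewing direction."""
    d = np.asarray(render_view_dir, dtype=float)
    d /= np.linalg.norm(d)
    best_index, best_dot = -1, -np.inf
    for i, v in enumerate(camera_view_dirs):
        v = np.asarray(v, dtype=float)
        dot = np.dot(v / np.linalg.norm(v), d)
        if dot > best_dot:
            best_index, best_dot = i, dot
    return best_index
```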
At step S3-8, texture map renderer 70 renders an image of the 3D computer model 300 from the viewing position and direction read at step S3-4 using the texture map(s) storing texture data for all of the polygons in the 3D computer model 300. This rendering is performed in a conventional way and generates texture data for every triangle in the 3D computer model 300 that is visible from the viewing position and direction read at step S3-4.
At step S3-10, image renderer 80 performs a second pass of rendering in which image data from the camera image selected at step S3-6 is combined with the data from the first pass of rendering performed at step S3-8.
Figure 4 shows the processing operations performed by image renderer 80 at step S3-10.
Referring to Figure 4, at step S4-2, image renderer 80 selects the next triangle in the 3D computer model 300 (this being the first triangle the first time step S4-2 is performed).
At step S4-4, image renderer 80 calculates a visibility weight for the triangle selected at step S4-2 representative of the visibility of the polygon in the camera image 200-214 selected at step S3-6 and also in the image to be rendered from the viewing position and direction read at step S3-4. More particularly, in this embodiment, image renderer 80 calculates a visibility weight Vw in accordance with the following equation:

Vw = cos θ cos α .... (1)

where θ is the angle between the surface normal vector of the triangle selected at step S4-2 (that is, a vector perpendicular to the plane of the triangle) and the viewing direction of the camera image selected at step S3-6 (such that cos θ represents the visibility of the triangle in the selected camera image); and α is the angle between the surface normal vector of the triangle selected at step S4-2 and the viewing direction read at step S3-4 (such that cos α represents the visibility of the triangle in the image to be rendered).

It will be seen from equation (1) that the visibility weight Vw of the triangle will have the value 0 if the plane in which the triangle lies is parallel to either the viewing direction of the camera image selected at step S3-6 or the viewing direction of the image to be rendered (that is, if the triangle is not visible from either viewing direction). In addition, in this embodiment, if cos θ and/or cos α has a negative value, then the value of Vw is set to 0. Consequently, Vw has the value 0 if the surface normal of the triangle makes an angle of 90° or more with either the viewing direction of the camera image selected at step S3-6 or the viewing direction of the image to be rendered.
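The clamped product in equation (1) translates directly into code. The sketch below is a minimal illustration (not the patent's implementation); it assumes viewing directions point from the viewer into the scene, so the cosines are measured against the negated directions.

```python
import numpy as np

def visibility_weight(normal, camera_view_dir, render_view_dir):
    """Equation (1) sketch: Vw = cos(theta) * cos(alpha), clamped to zero when the
    triangle faces away from either viewing direction.

    Assumes viewing directions point from the viewer towards the scene (a sign
    convention), so a front-facing triangle's normal opposes those directions."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    c = -np.asarray(camera_view_dir, dtype=float)
    r = -np.asarray(render_view_dir, dtype=float)
    cos_theta = np.dot(n, c / np.linalg.norm(c))   # visibility in the camera image
    cos_alpha = np.dot(n, r / np.linalg.norm(r))   # visibility in the image to render
    if cos_theta <= 0.0 or cos_alpha <= 0.0:
        return 0.0
    return cos_theta * cos_alpha
```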
At step S4-6, image renderer 80 determines whether there is another triangle in the 3D computer model 300 to be processed. Steps S4-2 to S4-6 are repeated until a visibility weight has been calculated for each triangle in the 3D computer model in the way described above.
At step S4-8, image renderer 80 calculates a respective visibility weight for each triangle vertex in the 3D computer model 300. More particularly, the visibility weight for each triangle vertex is calculated by calculating the average of the visibility weights Vw calculated at step S4-4 of the triangles which meet at the vertex.
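Step S4-8 can be sketched as a simple accumulation over the triangle list; the snippet below is illustrative only, with assumed array layouts (triangles as an array of vertex indices, one weight per triangle).

```python
import numpy as np

def vertex_visibility_weights(triangles, triangle_weights, num_vertices):
    """Step S4-8 sketch: each vertex weight is the mean of the weights of the
    triangles which meet at that vertex.

    triangles: (T, 3) array of vertex indices; triangle_weights: (T,) array."""
    sums = np.zeros(num_vertices)
    counts = np.zeros(num_vertices)
    for tri, w in zip(triangles, triangle_weights):
        for v in tri:
            sums[v] += w
            counts[v] += 1
    counts[counts == 0] = 1           # isolated vertices keep a weight of zero
    return sums / counts
```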
At step S4-10, image renderer 80 calculates a respective attenuation factor for each triangle vertex in the 3D computer model 300. More particularly, in this embodiment, image renderer 80 performs processing to identify each vertex in the 3D computer model which is a vertex for both a front facing triangle and a back facing triangle from the viewing position and direction read at step S3-4 of the image to be generated. Each such identified vertex lies on a boundary in the selected camera image 200-214 separating parts of the image of the object which are visible from the viewing position and direction read at step S3-4 of the image to be generated and parts which are invisible therefrom.
To illustrate this, Figure 5 shows an example using camera image 208. Each vertex in the 3D computer model 300 which is a vertex for both a triangle which is front-facing and a triangle which is back-facing from the viewing position and direction read at step S3-4 lies on the boundary 400. This boundary 400 separates part 410 of the image of the object which is visible from the viewing position and direction of the new image to be generated (read at step S3-4) and the part 420 of the image of the object which is invisible from the viewing position and direction of the new image to be generated.
Image data from part 420 will not contribute to the image being rendered from the viewing position and direction read at step S3-4 because polygon vertices in this area will have visibility weights of zero.
In this embodiment, image renderer 80 assigns an attenuation factor of 0.5 to each identified vertex (that is, each vertex lying along the boundary line 400) and assigns a value 1.0 to every other vertex in the 3D computer model 300.
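A possible sketch of this attenuation-factor assignment, assuming a per-triangle boolean flag indicating whether the triangle is front facing for the image being rendered (the names and layout are illustrative, not from the patent):

```python
import numpy as np

def attenuation_factors(triangles, front_facing, num_vertices):
    """Step S4-10 sketch: vertices shared by both a front-facing and a back-facing
    triangle (for the image being rendered) lie on the visibility boundary and
    receive 0.5; every other vertex receives 1.0.

    front_facing: (T,) boolean array, one flag per triangle."""
    touches_front = np.zeros(num_vertices, dtype=bool)
    touches_back = np.zeros(num_vertices, dtype=bool)
    for tri, front in zip(triangles, front_facing):
        for v in tri:
            if front:
                touches_front[v] = True
            else:
                touches_back[v] = True
    factors = np.ones(num_vertices)
    factors[touches_front & touches_back] = 0.5
    return factors
```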
At step S4-12, image renderer 80 calculates a respective combined visibility and attenuation value for each triangle vertex in the 3D computer model 300. More particularly, in this embodiment, for each triangle vertex, image renderer 80 multiplies the visibility weight for the vertex calculated at step S4-8 by the attenuation factor for the vertex calculated at step S4-10.
At step S4-14, image renderer 80 calculates a respective alpha blending value for each pixel in the image to be rendered. This processing is performed by rendering into the alpha channel using Gouraud rendering and the combined vertex values calculated at step S4-12. As a result, the combined vertex values calculated for the vertices of a polygon are linearly interpolated along the edges of the polygon and between edges along scan lines in the images to generate a respective interpolated value for each pixel defining the alpha blending value for the pixel.
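Step S4-14 leaves the interpolation to the graphics pipeline's Gouraud shading of the alpha channel. For illustration, the software sketch below reproduces the effect for a single triangle using barycentric coordinates, which give the same linear interpolation of the per-vertex combined values over the triangle's pixels (an assumed formulation, not the patent's implementation).

```python
import numpy as np

def rasterise_alpha(p0, p1, p2, a0, a1, a2, width, height):
    """Linearly interpolate per-vertex combined values (a0, a1, a2) over the pixels
    covered by a triangle with 2D pixel positions p0, p1, p2, writing the result
    into an alpha image (Gouraud-style interpolation)."""
    alpha = np.zeros((height, width))
    xs, ys = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)

    def edge(a, b, px, py):
        # Signed area term used for barycentric coordinates.
        return (px - a[0]) * (b[1] - a[1]) - (py - a[1]) * (b[0] - a[0])

    area = edge(p0, p1, p2[0], p2[1])
    if area == 0:
        return alpha                              # degenerate triangle
    w0 = edge(p1, p2, xs, ys) / area              # barycentric coordinate of p0
    w1 = edge(p2, p0, xs, ys) / area              # barycentric coordinate of p1
    w2 = edge(p0, p1, xs, ys) / area              # barycentric coordinate of p2
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    alpha[inside] = (w0 * a0 + w1 * a1 + w2 * a2)[inside]
    return alpha
```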
At step S4-16, image renderer 80 renders the new image using the camera image selected at step S3-6 and alpha blending with the alpha values calculated at step S4-14 to combine the image data with the first pass rendering data. The mapping of image data from the selected camera image onto the polygons in the 3D computer model being rendered is defined by the projected polygon vertex positions previously calculated at step S3-2.
Accordingly, the colour component values of each pixel in the rendered image after step S4-16 has been performed are given by:

RF = (1-a)R1 + aR2 .... (2)
GF = (1-a)G1 + aG2 .... (3)
BF = (1-a)B1 + aB2 .... (4)

where RF, GF, BF are the red, green and blue colour component values of the pixel in the final image; a is the alpha blending value for the pixel calculated at step S4-14; R1, G1, B1 are the red, green and blue colour component values of the pixel after the first rendering pass at step S3-8; and R2, G2, B2 are the red, green and blue colour component values from the camera image selected at step S3-6 to be added to the pixel.
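Equations (2) to (4) amount to a per-channel linear interpolation; a small illustrative example (with arbitrary colour values) follows.

```python
import numpy as np

def blend_pixel(first_pass_rgb, camera_rgb, alpha):
    """Equations (2)-(4): final colour = (1 - a) * first pass + a * camera image."""
    first_pass_rgb = np.asarray(first_pass_rgb, dtype=float)
    camera_rgb = np.asarray(camera_rgb, dtype=float)
    return (1.0 - alpha) * first_pass_rgb + alpha * camera_rgb

# A pixel whose polygon is well visible in the selected camera image.
print(blend_pixel([120, 80, 60], [140, 90, 70], alpha=0.75))   # -> [135.  87.5 67.5]
```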
By performing the second pass rendering at step S3-10 in the way described above, the contribution of image data from the camera image 200-214 selected at step S3-6 may be different for each vertex in the 3D computer model, and is determined by the visibility of the vertex in the new image to be rendered and the visibility of the vertex in the camera image selected at step S3-6 (the contribution being determined by the processing at steps S4-2 to S4-8). This variation in the contribution of image data from the selected camera image for different vertices of the 3D computer model 300 blends the image data with the texture data from the first pass rendering in such a way that changes in the image data are reduced when the viewing position and direction of the image to be rendered changes and the camera image selected at step S3-6 changes.
Consequently, sudden changes in the images seen by the user when the camera image changes are ameliorated. In addition, as a result of the processing at steps S4-10 and S4-12, the image data from the camera image selected at step S3-6 is attenuated for portions of the new image in the vicinity of the boundary between regions that receive no texture data from the selected camera image 200-214 and regions that do receive such texture data, with the result that the texture data blends together seamlessly in this region, creating image data free of errors and discontinuities.
Referring again to Figure 3, at step S3-12, central controller 20 controls display controller 90 to display the pixel data generated at step S3-10 on display device 4. In addition or instead, the pixel data generated at step S3-10 is output by output data interface 100 as data stored on a storage medium 112 or as data carried by a signal 114.
At step S3-14, rendering engine 50 determines whether a new viewing position and/or viewing direction has been received defining a new image of the 3D computer model 300 to be rendered. Steps S3-4 to S3-14 are repeated until images of the 3D computer model have been rendered from all required viewing positions and directions.
When the viewing position and/or direction of an image to be rendered changes at step S3-4 such that a different camera image is selected for rendering at step S3-6 compared to the camera image for the previous image, the blending of image data from the camera image with texture data from the texture map(s) during the second pass rendering at step S3-10 reduces the changes in the image data in the rendered image, with the result that the change in camera image cannot be discerned by the viewer and the displayed images do not have any noticeable discontinuities or visual artefacts.
Many modifications and variations can be made to the embodiment described above within the scope of the accompanying claims.
For example, in the embodiment described above, the 3D computer surface model 300 comprises a plurality of vertices in 3D space connected to form a polygon mesh.
However, different forms of 3D computer surface model may be rendered. For example, a 3D surface defined by a "point cloud" representation (comprising unconnected points in 3D space representing points on the object surface and a respective surface normal vector for each point) may be processed. In this case, each point in the point cloud may be thought of as an infinitely small triangle so that the same processing as that performed in the embodiment above is carried out for rendering.
In this case, the texture data in the input texture map(s) comprises a respective red, green and blue value for each 3D point in the point cloud. Accordingly, each input texture map comprises a list of red, green and blue values such that the texture map can be considered to be one-dimensional (instead of a two-dimensional array of texels as in the embodiment). Consequently, the term "texture map" will be understood to include such a list of red, green and blue values as well as any other form of texture map.
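As an illustration of this variant, the sketch below (with assumed array names) shows the point-cloud surface data and the corresponding one-dimensional texture list, indexed by point number rather than by texel coordinates.

```python
import numpy as np

# A point-cloud surface model: N unconnected points, each with a surface normal.
num_points = 1000
points = np.random.rand(num_points, 3)                    # 3D positions
normals = np.random.randn(num_points, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# The "one-dimensional texture map": one red, green, blue triple per point,
# addressed by point index instead of (u, v) texel coordinates.
texture_list = np.random.randint(0, 256, size=(num_points, 3), dtype=np.uint8)

colour_of_point_42 = texture_list[42]
```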
In the embodiment described above, the respective visibility weight for each triangle may be calculated at step S4-4 taking into account occlusions by other triangles in the 3D computer model from the viewing position and direction of the camera image 200-214 selected at step S3-6. For example, each visibility weight calculated at step S4-4 may be multiplied by a value between 0 and 1 defining the proportion of the triangle that is visible in the selected camera image 200-214. The proportion of each triangle that is visible may be pre-calculated, for example, as described in co-pending EPC application EP-A-1,204,073 and co-pending US application US 2002-0085748-A1 (the full contents of which are incorporated herein by cross-reference).
Similarly, each visibility weight may be calculated at step S4-4 taking into account occlusions by triangles in the 3D computer model from the viewing position and direction read at step S3-4.
In the embodiment described above, the visibility weight for each polygon is calculated at step S4-4 in dependence upon both the visibility of the polygon in the image to be rendered and the visibility of the polygon in the selected camera image. However, instead, each visibility weight may be calculated in dependence upon the visibility of the polygon in the image to be rendered alone (such that Vw = cos α) or in dependence upon the visibility of the polygon in the selected camera image alone (such that Vw = cos θ).
In the embodiment described, a camera image 200-214 is selected for rendering at step S3-6 by calculating the respective dot product of the viewing direction read at step S3-4 and each camera image viewing direction, and then selecting the camera image having the largest calculated dot product value. However, instead, a camera image may be selected for rendering by calculating the respective dot product of the viewing direction read at step S3-4 with each camera viewing direction (as in the embodiment described above), calculating the respective distance between the viewing position read at step S3-4 and the viewing position of each camera image, dividing each dot product value by the corresponding distance (that is, the distance calculated for the same camera image), and selecting the camera image having the largest resulting value. In this way, the camera image is selected in dependence upon not only viewing direction but also viewing position.
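The modified selection rule might be sketched as follows (illustrative only; the cameras are assumed to be described by dictionaries with 'dir' and 'pos' entries, and a small guard avoids division by zero when the viewing positions coincide).

```python
import numpy as np

def select_camera_image_with_position(cameras, render_view_dir, render_view_pos):
    """Modified step S3-6 sketch: score each camera image by the viewing-direction
    dot product divided by the distance between viewing positions, and pick the
    camera image with the highest score."""
    d = np.asarray(render_view_dir, dtype=float)
    d /= np.linalg.norm(d)
    p = np.asarray(render_view_pos, dtype=float)
    best_index, best_score = -1, -np.inf
    for i, cam in enumerate(cameras):
        v = np.asarray(cam["dir"], dtype=float)
        dot = np.dot(v / np.linalg.norm(v), d)
        dist = np.linalg.norm(np.asarray(cam["pos"], dtype=float) - p)
        score = dot / max(dist, 1e-9)     # guard against coincident positions
        if score > best_score:
            best_index, best_score = i, score
    return best_index
```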
In the embodiment described above, the processing performed at steps S4-10 and S4-12 may be omitted.
In the embodiment above, the processing at step S3-8 may be performed after the processing at step S3-10 so that image data from the selected camera image is rendered in the first pass and texture data from the texture map(s) is rendered in the second pass and blended with the image data from the first pass.
In the embodiment described above, the texture map(s) storing texture data for all polygons in the 3D computer model 300 is view-independent - that is, the same texture data therefrom is rendered onto the same polygons in the 3D computer model in the first pass rendering at step S3-8 irrespective of the viewing position and direction of the new image read at step S3-4. However, instead, a plurality of view-dependent texture maps may be stored, comprising, for example, a view-dependent texture map for each camera image 200-214, with each respective view-dependent map storing texture data for all polygons in the 3D computer model. In this case, the processing to select a camera image for rendering at step S3-6 would also select a view-dependent texture map to be used to generate texture data during the first pass rendering at step S3-8.
In the embodiment described above, the input data defines the intrinsic parameters of the camera(s) used to record the camera images 200-214. However, instead, default values may be assumed for some, or all, of the intrinsic camera parameters, or processing may be performed to calculate the intrinsic parameter values in a conventional manner, for example as described in "Euclidean Reconstruction From Uncalibrated Views" by Hartley in Applications of Invariance in Computer Vision, Mundy, Zisserman and Forsyth eds, pages 237-256, Azores 1993.
In the embodiments described above, processing is performed by a programmable computer using processing routines defined by programming instructions. However, some, or all, of the processing could, of course, be performed using hardware.
Other modifications are, of course, possible.
Claims (25)
1. A method of rendering an image of a three dimensional computer model of an object to generate image data, the method comprising: rendering an image of the three-dimensional computer model using texture data taken from texture data available for all parts of the three-dimensional computer model; selecting a camera image for rendering from among a plurality of camera images of the object; calculating a respective visibility weight for each of a plurality of parts of the three-dimensional computer model representative of the visibility of the part in at least one of the selected camera image and the image to be rendered; rendering an image of the three-dimensional computer model using image data from the selected camera image as texture data; and combining the texture data taken from the texture data available for all parts of the three-dimensional computer model with the texture data taken from the selected camera image in dependence upon the calculated visibility weights.
2. A method according to claim 1, wherein each visibility weight is calculated in dependence upon the orientation of the part of the three-dimensional computer model relative to at least one of the viewing direction of the selected camera image and the viewing direction of the image to be rendered.
3. A method according to claim 1 or claim 2, wherein the visibility weights are calculated taking into account occlusions of part of the three-dimensional computer model by other parts.
4. A method according to any preceding claim, wherein each camera image has an associated viewing direction and the selection of the camera image for rendering is made in dependence upon the viewing direction of the image to be rendered and the viewing directions associated with the camera images.
5. A method according to any preceding claim, wherein each camera image has an associated viewing position and the selection of the camera image for rendering is made in dependence upon the viewing position of the image to be rendered and the viewing positions associated with the camera images.
6. A method according to any preceding claim, further comprising: calculating a respective attenuation factor for each of a plurality of parts of the three-dimensional computer model in dependence upon the position of the part relative to parts which are visible in the image to be rendered and relative to parts which are invisible in the image to be rendered; and combining the visibility weight and the attenuation factor for at least some parts of the three-dimensional computer model to give a combined value; and wherein, for said at least some parts, the texture data taken from the texture data available for all parts of the three-dimensional computer model and the texture data taken from the selected camera image are combined in dependence upon the combined values.
7. A method according to any preceding claim, wherein the process of rendering an image of the three dimensional computer model using texture data taken from texture data available for all parts of the three dimensional computer model comprises: selecting a texture map for rendering from among a plurality of texture maps, each texture map storing texture data for all parts of the three-dimensional computer model; and rendering an image of the three-dimensional computer model using texture data from the selected texture map.
8. A method according to claim 7, wherein each texture map stores texture data comprising the combination of image data from a plurality of camera images.
9. A method according to any preceding claim, wherein the three-dimensional computer model comprises a mesh of polygons, and each part of the three-dimensional computer model comprises a vertex of a polygon.
10. A method according to any of claims 1 to 8, wherein the three-dimensional computer model comprises a plurality of unconnected points in three-dimensional space, and each part of the three-dimensional computer model comprises at least one of the points.
11. A method of rendering a sequence of images of a three-dimensional computer model, the method comprising: rendering an image in accordance with the method of any preceding claim in which a first camera image is selected from among the plurality of camera images to render a first image in the sequence; and rendering an image in accordance with the method of any preceding claim in which a second camera image is selected from among the plurality of camera images to render a second image in the sequence.
12. A method according to any preceding claim, further comprising generating a signal carrying the generated image data.
13. A method according to any preceding claim, further comprising making a direct or indirect recording of the generated image data.
14. Apparatus for rendering an image of a three dimensional computer model of an object to generate image data, the apparatus comprising: a camera selector operable to select a camera image for rendering from among a plurality of camera images of the object; a visibility weight calculator operable to calculate a respective visibility weight for each of a plurality of parts of the three-dimensional computer model representative of the visibility of the part in at least one of the selected camera image and the image to be rendered; and an image renderer operable to: render an image of the three-dimensional computer model using texture data taken from texture data available for all parts of the three-dimensional computer model; render an image of the three-dimensional computer model using image data from the selected camera image as texture data; and combine the texture data taken from the texture data available for all parts of the three-dimensional computer model with the texture data taken from the selected camera image in dependence upon the calculated visibility weights.
15. Apparatus according to claim 14, wherein the visibility weight calculator is operable to calculate the respective visibility weight for each part of the three dimensional computer model in dependence upon the orientation of the part relative to at least one of the viewing direction of the selected camera image and the viewing direction of the image to be rendered.
16. Apparatus according to claim 14 or claim 15, wherein the visibility weight calculator is operable to calculate the visibility weights taking into account occlusions of parts of the three-dimensional computer model by other parts.
17. Apparatus according to any of claims 14 to 16, wherein each camera image has an associated viewing direction and the camera selector is operable to select the camera image for rendering in dependence upon the viewing direction of the image to be rendered and the viewing directions associated with the camera images.
18. Apparatus according to any of claims 14 to 16, wherein each camera image has an associated viewing position and the camera selector is operable to select the camera image for rendering in dependence upon the viewing position of the image to be rendered and the viewing positions associated with the camera images.
19. Apparatus according to any of claims 14 to 18, further comprising: an attenuation factor calculator operable to calculate a respective attenuation factor for each of a plurality of parts of the three-dimensional computer model in dependence upon the position of the part relative to parts which are visible in the image to be rendered and relative to parts which are invisible in the image to be rendered; and a value combiner operable to combine the visibility weight and the attenuation factor for at least some parts of the three-dimensional computer model to give a combined value; and wherein, for said at least some parts, the image renderer is operable to combine the texture data taken from the texture data available for all parts of the three-dimensional computer model and the texture data taken from the selected camera image in dependence upon the combined values.
20. Apparatus according to any of claims 14 to 19, further comprising a texture map selector operable to select a texture map for rendering from among a plurality of texture maps, each texture map storing texture data for all parts of the three-dimensional computer model, and wherein the image renderer is operable to render an image of the three-dimensional computer model using texture data taken from texture data available for all parts of the three-dimensional computer model by rendering an image of the three-dimensional computer model using texture data from the selected texture map.
21. Apparatus according to claim 20, wherein each texture map stores texture data comprising the combination of image data from a plurality of camera images.
22. Apparatus according to any of claims 14 to 21, wherein the three-dimensional computer model comprises a mesh of polygons, and each part of the three-dimensional computer model comprises a vertex of a polygon.
23. Apparatus according to any of claims 14 to 21, wherein the three-dimensional computer model comprises a plurality of unconnected points in three-dimensional space, and each part of the three-dimensional computer model comprises at least one of the points.
24. A storage medium storing computer program instructions for programming a programmable processing apparatus to become operable to perform a method as set out in at least one of claims 1 to 11.
25. A signal carrying computer program instructions for programming a programmable processing apparatus to become operable to perform a method as set out in at least one of claims 1 to 11.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0321891A (GB2406253B) | 2003-09-18 | 2003-09-18 | Image rendering in 3d computer graphics |
Publications (3)

| Publication Number | Publication Date |
|---|---|
| GB0321891D0 | 2003-10-22 |
| GB2406253A | 2005-03-23 |
| GB2406253B | 2008-04-02 |
Family ID: 29266236
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110975284A | 2019-12-06 | 2020-04-10 | 珠海金山网络游戏科技有限公司 | Unity-based NGUI resource rendering processing method and device |
| CN117258303B | 2023-11-20 | 2024-03-12 | 腾讯科技(深圳)有限公司 | Model comparison method and related device |
Patent Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2369541A | 2000-10-27 | 2002-05-29 | Canon Kk | Method and apparatus for generating visibility data |
Legal Events

| Code | Title | Description |
|---|---|---|
| PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20190918 |