CN1410948A - Real-time ray tracing method of non-linear refraction and reflection on plane and spherical surfaces
Abstract
The optical-mapping virtual vertex of each vertex of a polyhedron to be refracted or reflected is calculated. Based on the topological relations of the polyhedron, the virtual vertices are connected to form a virtual object composed of several virtual faces. During rendering, the virtual object is projected onto the relevant refracting or reflecting surfaces by a recursive algorithm and blended with the color values of those surfaces, yielding realistic graphics similar to those obtained by ray tracing. The invention gives formulas for calculating the virtual vertex P' of any point P in five circumstances: planar refraction with the object outside or inside the refractor, three-dimensional refraction, spherical reflection, and spherical refraction, together with the algorithms and the drawing procedure.
Description
Technical Field
A real-time ray tracing method for plane and spherical non-linear refraction and reflection belongs to the technical field of computer graphics.
Background
The ray tracing algorithm is one of the most commonly used algorithms for generating three-dimensional realistic graphics; it was proposed by T. Whitted in the paper "An Improved Illumination Model for Shaded Display", published in 1980 in Comm. ACM, Vol. 23, No. 6. The method can generate high-quality realistic graphics, but its calculation speed is slow and it cannot be used in real-time interactive applications. To increase the speed of ray tracing, various improved methods have been developed. In 1984, P. S. Heckbert and P. Hanrahan published a paper in Computer Graphics that first proposed beam tracing. Beam tracing is an improvement over traditional ray tracing, used to increase rendering speed, deal with aliasing, and improve rendering quality and efficiency. However, beam tracing can only handle specular reflection from planar reflectors; it cannot handle non-linear refraction problems such as thick glass or curved-surface reflection, for which it can only make linear approximations.
In 1996, M. Levoy and P. Hanrahan published a paper at SIGGRAPH proposing a method for rendering graphics and images using a 4-dimensional light field. The mapping of the image is computed in a preprocessing step using the multi-dimensional light field, and the mapping results are then reused in each frame during real-time display. However, the problems of image-information storage, aliasing, and fast search in the light-field method still need to be solved.
A method of view-independent environment mapping was proposed by W. Heidrich et al. in 1998. Realistic graphics are displayed using environment mapping with hardware support; interactive speed can be achieved, but not real-time performance. The method is generally used to handle reflection, and there are often significant errors.
In 1998, Ofek and Rappoport proposed using reflection virtual objects and an explosion map to achieve the ray-tracing effect of curved-surface reflection: for each reflector, all vertices that may lie in its reflection area are transformed to generate virtual objects, which are then processed uniformly together with the real objects. However, their method cannot handle refraction; the error of the explosion map is large, the accumulation and propagation of this error in the recursion is severe, and the quality of the resulting images is not high. Although they improve the result by interpolation, the benefit is limited and the amount of calculation increases.
At present, many interactive graphics systems use hardware that can process a large number of three-dimensional patches per second, but they basically provide only the rendering effect of the depth-buffer (Z-buffer) algorithm and cannot handle real-time realistic rendering of scenes with reflection and refraction.
Disclosure of Invention
The invention aims to provide a real-time ray tracing method for plane and spherical non-linear refraction and reflection which handles non-linear reflection and refraction uniformly and improves the algorithm's efficiency.
The invention is characterized in that:
For a refracted or reflected polyhedron, the virtual vertices of all its vertices are first calculated; the virtual vertices are then connected in sequence according to the original point-face topology of the polyhedron to form a number of virtual faces and finally the reflected or refracted optical-mapping virtual object, called a virtual object for short. When displaying the image, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm, giving an image effect similar to ray tracing. The virtual vertex P' can be expressed as: P' = V + l·(B - V)/|B - V|,
where V is the viewpoint, l is the sum of the lengths of the path segments traversed by the ray L as it is refracted or reflected to reach the point P in the scene corresponding to the virtual vertex, and B is a sample point on the projection plane.
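For illustration only, the following C++ sketch (with a hypothetical Vec3 helper type, not part of the patent) evaluates this general formula; it assumes the path length l and the sample point B have already been obtained by one of the case-specific formulas below.

```cpp
// A minimal sketch (not the patent's code) of the general virtual-vertex formula
// P' = V + l * (B - V) / |B - V|, using a hypothetical Vec3 helper type.
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

inline double length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// V: viewpoint; B: sample point on the projection plane (first hit on the
// reflecting/refracting surface); l: total path length of the reflected or
// refracted ray from V to the scene point P.
Vec3 virtualVertex(const Vec3& V, const Vec3& B, double l) {
    Vec3 dir = B - V;
    return V + dir * (l / length(dir));   // place P' at distance l from V along VB
}
```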
For a planar refractor with the scene outside the refractor, the method comprises the following steps in sequence:
(1) calculating each virtual vertex P': P' = V + l·(I1 - V)/|I1 - V|;
l = |V - I1| + |I1 - I2| + |I2 - P|,
Q = V - w1N,
where l is the sum of the lengths of the path segments traversed by the ray as it is refracted to reach the point P in the scene; w1 is the horizontal distance between the viewpoint V and the refractor; d is the vertical distance from the point I1 to the viewpoint V; Q is the point at the lower end of the entrance face of the refractor at the same horizontal position as the viewpoint V; U is the intersection of the line VP with the entrance face of the refractor, the equation of the entrance face being C0x + C1y + C2z + C3 = 0 with C = (C0, C1, C2)T; I1 and I2 are the intersection points of the ray L with the two refracting faces of the planar refractor, i.e. the entrance and exit points; and N is the unit normal vector at the point I1.
For a planar refractor with the scene inside the refractor, the method comprises the following steps in sequence:
(1) calculating each virtual vertex P': P' = V + l·(I - V)/|I - V|;
l = |V - I| + |I - P|,
Q = V - w1N,
where I is the intersection point of the ray L with the entrance face of the planar refractor, the equation of the entrance face being C0x + C1y + C2z + C3 = 0 with C = (C0, C1, C2)T; N is the unit normal vector at the point I; d is the vertical distance from the point I to the viewpoint V; Q is the point at the lower end of the entrance face of the refractor at the same horizontal position as the viewpoint V; U is the intersection point of the line VP with the entrance face of the refractor.
for the refraction of three-dimensional planes with mutually perpendicular refraction surfaces, the method sequentially comprises the following steps:
(1) calculating each virtual vertex P': <math> <mrow> <msup> <mi>P</mi> <mo>′</mo> </msup> <mo>=</mo> <mi>V</mi> <mo>+</mo> <mi>d</mi> <mfrac> <mrow> <msub> <mi>I</mi> <mn>1</mn> </msub> <mo>-</mo> <mi>V</mi> </mrow> <mrow> <mo>|</mo> <msub> <mi>I</mi> <mn>1</mn> </msub> <mo>-</mo> <mi>V</mi> <mo>|</mo> </mrow> </mfrac> <mo>;</mo> </mrow> </math>
d=|V-I1|+|I1-I2|+|I2-P|,
I1=V+dx+dy+dz,
dx=xex,dy=yey,dz=zez,
ey = N2, ez = -N1, and ex is obtained by normalizing the cross product ey × ez,
where P is a point in the scene (i.e. on an object), V is the viewpoint, N1 is the outward normal vector of the entrance face, N2 is the outward normal vector of the exit face, the entrance face is perpendicular to the exit face, N1 and N2 are unit vectors, ex, ey, ez are the basis of the local coordinate system, x, y, z are coordinates in the local coordinate system, and dx, dy, dz are the offsets of the point I1 from the viewpoint;
I2 = I1 - N1·C·√(1 - (x² + y²)/(η²(x² + y² + z²))) - ê·C·√(x² + y²)/(η·√(x² + y² + z²)),
where ê = (V - I1 - N1[(V - I1)·N1]) / |V - I1 - N1[(V - I1)·N1]|, C = η(a - y)·√(x² + y² + z²)/y,
η is the refractive index of the three-dimensional refractor, and a is the vertical distance from the viewpoint to the exit face of the refractor.
For a spherical reflector, the method comprises the following steps in sequence:
(1) calculating each virtual vertex P': P' = V + (x + y)·(I - V)/|I - V|,
where I is the reflection point of the ray L on the sphere, which can be expressed as I = T·LI, with LI = [LIx, LIy, Pz, 1]T the coordinates of the point I on the sphere in the local coordinate system; T is the transformation matrix from the local coordinate system to the world coordinate system:
where wx = Nx, wy = Ny, wz = Nz; N = (Nx, Ny, Nz)T is the normal vector of the plane VPO formed by the viewpoint V, the sphere center O and the point P in the scene; the local coordinate system is established with O as the origin, N as the z-axis, OP as the y-axis and OP × N as the x-axis;
ux = vy·wz - wy·vz, uy = wx·vz - vx·wz, uz = vx·wy - wx·vy;
vx, vy and vz are the three coordinate components of the unit vector (P - O)/|P - O|, respectively;
x and y are the distances from the point I on the sphere to the viewpoint V and to the point P in the scene, respectively: x = |V - I|, y = |I - P|; the formula I = T·LI transforms the local coordinates LI into world coordinates.
For a spherical refractor, the method comprises the following steps in sequence:
(1) calculating each virtual vertex P': P' = V + (x + |I1 - I2| + y)·(I1 - V)/|I1 - V|,
where I1 and I2 are the entrance and exit points of the ray on the spherical refractor, whose local coordinates are transformed into world coordinates by the formula I = T·LI; x and y are the distance from the viewpoint V to the entrance point I1 and from the exit point I2 to the point P in the scene, respectively, i.e. x = |V - I1|, y = |I2 - P|; T is the transformation matrix from the local coordinate system to the world coordinate system, where
ux = vy·wz - wy·vz, uy = wx·vz - vx·wz, uz = vx·wy - wx·vy;
vx, vy and vz are the three coordinate components of the unit vector (P - O)/|P - O|, respectively;
wx = Nx, wy = Ny, wz = Nz;
N = (Nx, Ny, Nz)T is the normal vector of the plane VPO formed by the viewpoint V, the sphere center O and the point P in the scene; the local coordinate system is established with O as the origin, N as the z-axis, OP as the y-axis and OP × N as the x-axis;
the process for displaying graphics using a recursive algorithm comprises the steps of:
Algorithm 1: preprocessing
Step 1, constructing a binary space partitioning (BSP) tree for the objects in the scene; if the scene contains spherical reflectors or refractors, the 6 faces of the cube tightly enclosing each sphere are used in place of the sphere when constructing the BSP tree;
step 2, collecting various reflecting and refracting bodies in the scene and forming a reflecting and refracting body object table;
step 3, determining the order of the reflectors and refractors in the reflector/refractor object table according to the current viewpoint; because the BSP tree is independent of the viewpoint, this order can be obtained quickly;
step 4, calculating a virtual vertex and generating all virtual objects to generate a virtual scene;
Algorithm 2: generating the virtual scene of a refractor or reflector:
i) is the maximum recursion level reached? If yes, returning;
ii) if it is a reflector, calculate the reflected virtual scene; if it is a refractor, calculate the refracted virtual scene;
iii) order the reflectors and refractors in the virtual scene according to the viewpoint;
iv) if a reflector or refractor still exists in the virtual scene, recursively call this algorithm for it and go to i); otherwise go to v);
v) returning;
algorithm 2 has the following characteristics:
(1) the above algorithm is a recursive algorithm, and the recursion is mainly performed on the reflection and refraction bodies in the scene;
(2) the reflector and the refractor are treated uniformly;
(3) for a reflector, the reflection formula is used to generate the virtual scene and the vertex order of the corresponding patches is reversed, i.e. the vertex order of the patches of the reflection virtual object is opposite to the original order, so that the outward face normals point in the correct direction;
(4) in any virtual scene, the sequence of the reflecting and refracting bodies in the virtual scene is determined again according to the viewpoint and the BSP tree so as to be displayed correctly in real time;
(5) if the number of reflectors or refractors in the scene is greater than 1, the termination condition of the algorithm is that the recursion depth is not greater than the given maximum recursion depth value;
If the viewpoint changes, purely reflective virtual scenes need not be recalculated, but the virtual scene inside any refractor, and everything below it in the recursion, must be recalculated. The following algorithm recalculates virtual scenes when the viewpoint changes (an illustrative code sketch of Algorithms 2 and 3 is given after the steps of Algorithm 3):
algorithm 3 recalculation of virtual scenes when viewpoint position changes
i) is the object a refractor?
ii) if yes, call Algorithm 2;
iii) if not, recursively call this algorithm on the reflectors and refractors in its virtual scene;
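For illustration only, the following C++ sketch shows one way Algorithms 2 and 3 could be organized; the Node structure and the routines computeReflectedScene, computeRefractedScene and sortByViewpoint are hypothetical placeholders for the case-specific formulas and BSP-based ordering described above, not the patent's own implementation.

```cpp
// Illustrative sketch of Algorithms 2 and 3 (not the patent's code).
// Node, computeReflectedScene, computeRefractedScene and sortByViewpoint are
// hypothetical placeholders for the structures and formulas described above.
#include <vector>
#include <memory>

struct Node {
    bool isRefractor = false;                     // refractor vs. reflector
    std::vector<std::unique_ptr<Node>> children;  // reflectors/refractors inside its virtual scene
    // ... geometry, virtual objects, etc.
};

void computeReflectedScene(Node&)  { /* reflection formula, reversed vertex order */ }
void computeRefractedScene(Node&)  { /* refraction formulas of the corresponding case */ }
void sortByViewpoint(std::vector<std::unique_ptr<Node>>&) { /* order via the BSP tree */ }

// Algorithm 2: generate the virtual scene of one reflector/refractor.
void generateVirtualScene(Node& n, int depth, int maxDepth) {
    if (depth >= maxDepth) return;                       // i) maximum recursion level
    if (n.isRefractor) computeRefractedScene(n);         // ii)
    else               computeReflectedScene(n);
    sortByViewpoint(n.children);                         // iii)
    for (auto& child : n.children)                       // iv) recurse into nested bodies
        generateVirtualScene(*child, depth + 1, maxDepth);
}                                                        // v) return

// Algorithm 3: recompute virtual scenes after the viewpoint moves.
void recomputeOnViewpointChange(Node& n, int depth, int maxDepth) {
    if (n.isRefractor) {                                 // refracted scenes depend on the viewpoint
        generateVirtualScene(n, depth, maxDepth);
    } else {                                             // pure reflections are viewpoint-independent
        for (auto& child : n.children)
            recomputeOnViewpointChange(*child, depth + 1, maxDepth);
    }
}
```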
To realize real-time scene rendering, hardware acceleration with an OpenGL-compatible three-dimensional graphics accelerator card can be used; in this case the software interface of the graphics hardware, the OpenGL graphics system, is used. The generation of the virtual objects lays the foundation for real-time rendering. The real-time rendering algorithm we use is as follows.
Algorithm 4: real-time rendering and display (the algorithm is applied to one reflector or refractor in the real scene; an illustrative OpenGL sketch follows the step list below)
x) set the OpenGL drawing parameters (disable color-buffer writes, disable depth comparison, disable alpha blending, and set the stencil operation so that drawing increments the stencil-buffer value by 1);
xi) render the reflector or refractor itself;
xii) if its virtual scene contains reflectors or refractors, go to x) and recursively draw them;
xiii) set the OpenGL drawing parameters (keep the stencil value unchanged when drawing, enable color-buffer writes, enable depth comparison, leave the other drawing parameters unchanged);
xiv) draw the other virtual objects of the current virtual scene within the region marked in the current stencil buffer;
xv) set the OpenGL drawing parameters (enable color-buffer writes, enable depth comparison, enable alpha blending, and set the stencil operation so that drawing decrements the stencil-buffer value by 1);
xvi) render the reflector or refractor itself;
xvii) set the OpenGL drawing parameters (restore depth-buffer writes, enable color-buffer writes, disable alpha blending, disable stencil operations);
xviii) return.
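As a hedged illustration of how these stencil-buffer settings might be expressed with legacy OpenGL state calls (an assumed mapping of the steps above onto standard OpenGL 1.x state, not code from the patent):

```cpp
// Illustrative OpenGL state settings for steps x), xiii) and xv)-xvii) above.
// This is an assumed mapping of the patent's description onto standard
// OpenGL 1.x stencil/blend state, not the patent's own code.
#include <GL/gl.h>

void beginStencilMask() {                       // step x): mark the mirror/refractor region
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
    glDisable(GL_DEPTH_TEST);                            // no depth comparison
    glDisable(GL_BLEND);                                 // no alpha blending
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);              // drawn pixels: stencil += 1
}

void drawInsideStencil(int level) {             // step xiii): draw virtual objects in the region
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glStencilFunc(GL_EQUAL, level, 0xFF);                // only where the stencil matches
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);              // keep stencil values unchanged
}

void blendSurfaceAndEndStencil() {              // steps xv)-xvii): blend the surface, pop the mask
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // mix with the surface color
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);              // drawn pixels: stencil -= 1
    // ... draw the reflector/refractor itself here, then restore state:
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}
```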
Experiments show that the method achieves the intended purpose.
Drawings
FIG. 1 is a schematic diagram of computing a virtual vertex using ray tracing.
FIG. 2 is a schematic illustration of arbitrary planar reflection.
FIG. 3 is a schematic representation of the first case of refraction, in which the scene is outside the refractor.
FIG. 4 is a schematic view of the second case of refraction, in which the scene is inside the refractor.
Fig. 5 is a three-dimensional refractive optical path diagram with mutually perpendicular refractive surfaces.
Fig. 6 is a schematic diagram of the optical path of the spherical reflection.
Fig. 7 is a schematic diagram of the optical path of spherical refraction.
Fig. 8 is a schematic diagram of determining the incident and exit points of a sphere.
FIG. 9 is a schematic view of a square box (a) and its virtual object (b).
Fig. 10 is a block diagram of algorithm 1 preprocessing.
Fig. 11 is a block flow diagram of a process for algorithm 2 to generate virtual scenes with planar refractors or reflectors.
Fig. 12 is a block diagram of the process flow for recalculating virtual scenes when the viewpoint location changes for algorithm 3.
Fig. 13 is a flow chart of the process for rendering and displaying algorithm 4 in real time.
Detailed Description
1. Basic idea
As shown in Fig. 1, a ray L emitted from the viewpoint A is refracted and reflected and finally reaches a point V; this is the basic idea of ray tracing. However, given an arbitrary point V, it is theoretically difficult to obtain exactly the ray L' from the viewpoint A that reaches the point V through refraction, because the refraction problem is in general non-linear.
When a ray L is reflected or refracted several times and intersects a face M of an object at a point V, we do not, as in the conventional ray tracing algorithm, evaluate an illumination model to obtain the color of the corresponding pixel; instead we compute the optical-mapping virtual vertex V' of the point V. Assuming the ray L reaches V after several reflections or refractions, the sum of the path lengths is d = d1 + d2 + ... + d5.
The formula for the virtual vertex V' is V' = A + d·(B - A)/|B - A|, where A is the viewpoint and B is a sample point on the projection plane. This is the concept of the optical-mapping virtual object that we propose.
For a reflected or refracted polyhedron, the virtual vertex of each vertex is first calculated; the virtual vertices are then connected in sequence according to the topological relations to form the faces of the virtual object, finally giving the reflected or refracted optical-mapping virtual object (called a virtual object for short). For example, for a square box, see Fig. 9(a), whose vertices P1, P2, ..., P8 have virtual vertices P'1, P'2, ..., P'8, we obtain the corresponding virtual object of the box, as shown in Fig. 9(b). These virtual objects are projected onto the reflecting or refracting surface when the graphics are displayed. This process can be performed recursively, resulting in an image effect similar to ray tracing. To obtain a high-quality realistic image, the faces to be reflected or refracted must not be too large; otherwise a face should be subdivided into smaller rectangles or triangles. For a curved object that is reflected or refracted in a scene, the curved surface should first be discretized into a number of facets so that the curved object is approximated by a polyhedron.
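A minimal sketch (with hypothetical Mesh/Vec3 types, not from the patent) of how a virtual object can be built by mapping every vertex through a case-specific virtual-vertex function while keeping the original face topology:

```cpp
// Illustrative sketch: build a virtual object by replacing every vertex of a
// polyhedron with its optical-mapping virtual vertex, keeping the same topology.
// Mesh, Vec3 and VirtualVertexFn are hypothetical helper types.
#include <vector>
#include <functional>

struct Vec3 { double x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;            // vertex positions
    std::vector<std::vector<int>> faces;   // each face = indices into 'vertices'
};

using VirtualVertexFn = std::function<Vec3(const Vec3&)>; // e.g. a planar or spherical case formula

Mesh makeVirtualObject(const Mesh& object, const VirtualVertexFn& virtualVertex) {
    Mesh virt;
    virt.vertices.reserve(object.vertices.size());
    for (const Vec3& p : object.vertices)
        virt.vertices.push_back(virtualVertex(p));  // P -> P'
    virt.faces = object.faces;                      // same point-face topology
    // For a reflection virtual object the vertex order of each face would also be
    // reversed here so the outward normals stay correct (see note (3) of Algorithm 2).
    return virt;
}
```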
2. Formula for calculating virtual vertex
For the planar reflection problem, we can calculate the virtual vertex of each vertex with an idea similar to that of the beam tracing algorithm:
suppose there is one over point R ═ Vx,Vy,Vz,1]TAs shown in fig. 2, the unit normal vector of (a), (b), (c), (0) is N ═ a, b, c, 0]T,NTN is 1, and the plane equation is
ax+by+cz+d=0
Wherein d is-NTR and V are viewpoints.
Any point in the three-dimensional scene P ═ Px,Py,Pz,1]TIs reflected and converted on the plane to reach P '═ P'x,P′y,P′z,1]T. From the geometric relationship, one can obtain:
P′=TmP
wherein,
p' is the reflection imaginary vertex of point P.
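A short sketch of this planar-reflection mapping; since the matrix Tm itself is not reproduced in the text above, the code below uses the standard mirror transform of the plane ax + by + cz + d = 0 in vector form, which is assumed (not stated in the patent text) to be the content of Tm.

```cpp
// Illustrative planar reflection of a point about the plane ax + by + cz + d = 0
// with unit normal N = (a, b, c). This applies the standard mirror transform
// P' = P - 2 (N.P + d) N, assumed here to be equivalent to P' = Tm P.
struct Vec3 { double x, y, z; };

Vec3 reflectAboutPlane(const Vec3& P, const Vec3& N, double d) {
    double dist = N.x * P.x + N.y * P.y + N.z * P.z + d;  // signed distance to the plane
    return { P.x - 2.0 * dist * N.x,
             P.y - 2.0 * dist * N.y,
             P.z - 2.0 * dist * N.z };                    // reflection virtual vertex P'
}
```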
The refraction problem is much more complex than the planar reflection problem. There are two cases of non-linear refraction by a planar refractor: the object is either outside the refractor (as shown in Fig. 3) or inside it (see Fig. 4). We discuss these separately below.
1) Two-dimensional planar refraction
(a) The object being outside the refractive body
As shown in FIG. 3, V is the viewpoint, P is an arbitrarily chosen point in the scene that is outside the refractor, and P' is the virtual vertex corresponding to P. The formula for its virtual vertex is derived as follows:
Given w1, w2, w3 and h, let η = sin α / sin β = η2/η1 and ξ = 1 + w3/w1. From the geometric relationship one obtains
Ad⁴ + Bd³ + Cd² + Dd + E = 0 (1)
where
A = (1 - η²)ξ²,
B = 2η²hξ - 2hξ,
C = h² + w2² - η²h² - w1²η²ξ²,
D = 2η²hξw1²,
E = -w1²η²h².
d can be solved from equation (1). From the value of d, the intersection points I1 and I2 of the refracted ray with the two refracting faces can then be calculated, and thereby the virtual vertex P' is obtained.
The solving process of the virtual vertex P' is as follows:
(1) connect V and P and find the intersection point U of the line VP with the entrance face, whose equation is C0x + C1y + C2z + C3 = 0 with C = (C0, C1, C2)T;
(2) Q = V - w1N;
from these, I1 is determined, and similarly I2 can be found.
Let l = |V - I1| + |I1 - I2| + |I2 - P|; then the virtual vertex of the point P is P' = V + l·(I1 - V)/|I1 - V|.
We generated two sets of points by ray tracing and then back-calculated I1 by the above formula; obviously, if I1 agrees, then I2 agrees as well. The specific results are shown in the following table.
Ray tracing generated points | Points back-calculated by formula
Viewpoint V | Point P in the scene | I1 | I2 | I1 | I2
(-3.486133, -91.011276, -30.972000) | (-286.265348, 137.319548, -64.033415) | (-247.000000, 110.000000, -59.442668) | (-270.903693, 150.000000, -62.237392) | (-247.000000, 110.000000, -59.442668) | (-270.903693, 150.000000, -62.237392)
(-3.486133, -91.011276, -30.972000) | (-275.475363, 136.992740, -59.345723) | (237.000000, 110.000000, -55.332001) | (-260.364890, 150.000000, -57.769409) | (237.000000, 110.000000, -55.332001) | (-260.364890, 150.000000, -57.769409)
(b) The object being inside the refractive body
For example, for a fish in a fish tank, the object (the fish) is inside the refractor (the tank). The calculation formula for this case differs from that of the first case. As shown in FIG. 4, V is the viewpoint, P is an arbitrarily chosen point in the scene that is inside the refractor, and P' is the virtual vertex corresponding to P. Similarly, from the geometric relationship we obtain the following quartic equation in one unknown:
Ad⁴ + Bd³ + Cd² + Dd + E = 0 (2)
where
A = η² - 1,
B = -2hA,
C = h²A + η²w1² - w2²,
D = -2η²hw1²,
E = w1²η²h².
d can be solved from equation (2), and the intersection point I of the refracted ray with the refracting face can then be calculated to obtain the virtual vertex P'. The solving process of the virtual vertex P' is as follows:
(1) connect V and P; the line VP intersects the entrance face at U, which can be obtained as in the previous case;
(2) Q = V - w1N;
We generated two sets of points by ray tracing and then back-calculated the intersection point I by the above formula. The specific results are shown in the following table.
Ray tracing generated points | Point back-calculated by formula
Viewpoint V | Point P in the scene | I | I
(-3.486133, -91.011276, -30.972000) | (-286.265348, 137.319548, -64.033415) | (-252.000000, 110.000000, -61.498001) | (-252.000000, 110.000000, -61.498001)
(-3.486133, -91.011276, -30.972000) | (-275.475363, 136.992740, -59.345723) | (-237.000000, 110.000000, -55.332001) | (-237.000000, 110.000000, -55.332001)
(c) Selection of solutions
It is obvious that both equation (1) and equation (2) have multiple solutions, with at most 4 real roots, so after solving the equation the correct root must be selected. According to the optical principle, light travelling from one point to another in a medium follows the shortest path. We therefore connect the viewpoint V with the point P in the scene to obtain a straight line; the solution closest to this line is the one we want.
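As a purely illustrative sketch (not the patent's code), the following C++ routine finds the real roots of the quartic in d numerically by bracketing and bisection and keeps the candidate whose refraction point lies closest to the line VP, following the selection rule above; refractionPointFromD is a hypothetical callback standing for the geometric construction of the refraction point from d described earlier.

```cpp
// Illustrative root selection for the quartic A d^4 + B d^3 + C d^2 + D d + E = 0
// (equations (1)/(2)): real roots are bracketed by sampling, refined by bisection,
// and the root whose refraction point is closest to the line VP is kept.
#include <cmath>
#include <vector>
#include <functional>
#include <limits>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Distance from point X to the straight line through V and P.
static double distToLine(Vec3 X, Vec3 V, Vec3 P) {
    return len(cross(sub(X, V), sub(P, V))) / len(sub(P, V));
}

double selectRoot(double A, double B, double C, double D, double E,
                  Vec3 V, Vec3 P, double dMin, double dMax,
                  const std::function<Vec3(double)>& refractionPointFromD) {
    auto f = [&](double d) { return (((A * d + B) * d + C) * d + D) * d + E; };
    std::vector<double> roots;
    const int n = 2000;                              // sample the bracket for sign changes
    double step = (dMax - dMin) / n, prev = f(dMin);
    for (int i = 1; i <= n; ++i) {
        double xr = dMin + i * step, cur = f(xr);
        if (prev * cur <= 0.0) {                     // refine each sign change by bisection
            double lo = xr - step, hi = xr;
            for (int it = 0; it < 60; ++it) {
                double mid = 0.5 * (lo + hi);
                if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
            }
            roots.push_back(0.5 * (lo + hi));
        }
        prev = cur;
    }
    double best = 0.0, bestDist = std::numeric_limits<double>::max();
    for (double d : roots) {                         // keep the root closest to the line VP
        double dist = distToLine(refractionPointFromD(d), V, P);
        if (dist < bestDist) { bestDist = dist; best = d; }
    }
    return best;
}
```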
2) Three-dimensional planar refraction
As in Fig. 5, the following are known: the viewpoint V, an arbitrarily selected point P in the scene, the refractive index η, AD = a, VA = z, PF = b, e = |(VP)x|, f = |(VP)y|, g = |(VP)z|. In Fig. 5, the ray from the viewpoint V refracts into the medium at the point I1 on the top face, travels to the right face of the medium, refracts back into the air at the point I2, and finally reaches the point P in the scene.
From Snell's law of refraction and the geometric relationships, two equations, referred to below as (3) and (4), can be obtained.
From these two equations x and y can be found, and then from x, y, z and the refractive index η the points I1 and I2 can be calculated in turn, giving the position of the virtual vertex P' of P.
Equations (3) and (4) form a system of two quartic non-linear equations in two unknowns; to improve the calculation speed, the following approximation is adopted:
First, d is calculated: starting from f0 + f1d + f2d² + f3d³ + f4d⁴ = 0 and neglecting the terms in d of order higher than 2, we obtain
f0 + f1d + f2d² = 0,
where f0, f1 and f2 are lengthy closed-form expressions in the auxiliary constants C1, ..., C9, b and η; among these constants, C6 = η²C5, C7 = η²C3, C8 = C4 - C7 and C9 = C7 - C1.
then, by substituting d into the following equation, the value of y when x is 0 can be calculated, and it is denoted as y0I.e. by
y0=a-(0.5+d)。
Usually d is a small quantity, |d| ≤ 0.5. In the above calculation the terms of d of order 3 and higher are neglected; if the order-3 and order-4 terms of d are retained the accuracy improves, but the amount of calculation also increases slightly. The formula is then expanded at the point x = 0 and linearly approximated:
y = y0 + y1x.
The value of x is then found and substituted into equation (4), giving a fourth-order equation in y which can be organized as
H4y⁴ + H3y³ + H2y² + H1y + H0 = 0,
where
H4 = e²(2 - η²),
H3 = -2aex(2 - η²),
H2 = a²x²(2 - η²) + e²(1 - η²)(x² + z²) - b²x²,
H1 = -2aex(1 - η²)(x² + z²),
H0 = a²x²(1 - η²)(x² + z²).
The points I1 and I2 can then be calculated from x, y, z and η, and from them the virtual vertex P' is found.
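For illustration only, a small C++ helper computing the coefficients H4, ..., H0 exactly as listed above; the subsequent real-root search can reuse a routine such as the selectRoot sketch given earlier.

```cpp
// Illustrative computation of the coefficients of the quartic in y,
// H4 y^4 + H3 y^3 + H2 y^2 + H1 y + H0 = 0, from the expressions above.
#include <array>

std::array<double, 5> quarticInY(double a, double b, double e,
                                 double x, double z, double eta) {
    const double eta2 = eta * eta;
    const double xz = x * x + z * z;
    std::array<double, 5> H{};                      // H[0] = H0, ..., H[4] = H4
    H[4] = e * e * (2.0 - eta2);
    H[3] = -2.0 * a * e * x * (2.0 - eta2);
    H[2] = a * a * x * x * (2.0 - eta2)
         + e * e * (1.0 - eta2) * xz
         - b * b * x * x;
    H[1] = -2.0 * a * e * x * (1.0 - eta2) * xz;
    H[0] = a * a * x * x * (1.0 - eta2) * xz;
    return H;
}
```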
When solving the system formed by equations (3) and (4) for x and y, there is an alternative scheme: after y0 is obtained as before, equation (3) is expanded at the point x = 0 and linearly approximated to obtain
y = y0 + y1x,
where y1 = [(η² - 1)y0² + η²z²] / [e(η² - 1)y0].
Then y = y0 + y1x is substituted into equation (4) to obtain
T4x⁴ + T3x³ + T2x² + T1x + T0 = 0,
where the coefficients T4, ..., T0 are closed-form expressions in e, b, y0, y1 and η (one of the terms appearing in them is e²y0²[(1 - η²) + (2 - η²)y1²] - b²y0²). By calculating x from the above equation and substituting the value of x into equation (3), y can be calculated directly:
y = ±√[ (x² + (g - z)²x²/(e - x)² - η²(x² + z²)) / (η² - 1) ].
the former scheme was used in the following calculation experiments.
Specific calculation of I1、I2And the process of the virtual vertex P' is as follows: :
a) find I1
Let P be the point on the object and V the viewpoint; n1 is the outward normal vector of the entrance face and n2 the outward normal vector of the exit face, the entrance face is perpendicular to the exit face, and n1, n2 are unit vectors. The basis of the local coordinate system is ex, ey, ez, where ey = n2, ez = -n1, and ex is obtained by normalizing the cross product ey × ez.
The calculated x, y, z are coordinates in this local coordinate system. Let dx, dy, dz be the offsets of the point I1 from the viewpoint:
dx = x·ex,
dy = y·ey,
dz = z·ez,
then I1 = V + dx + dy + dz.
b) Find I2
Knowing the incident ray direction vector IN = I1 - V, the normal vector n1 of the refracting face and the refractive index η, the refracted ray direction vector OUT can be calculated. From the point I1 on the refracted ray and its direction, together with the normal vector n2 of the exit face and a point on that face, the intersection point I2 of the refracted ray with the exit face is calculated. The formula for I2 is:
I2 = I1 - n1·C·√(1 - (x² + y²)/(η²(x² + y² + z²))) - ê·C·√(x² + y²)/(η·√(x² + y² + z²)),
where ê = (V - I1 - n1[(V - I1)·n1]) / |V - I1 - n1[(V - I1)·n1]| and C = η(a - y)·√(x² + y² + z²)/y.
c) Finding P'
From the viewpoint to the object point, the distance traveled by the ray is d = |V - I1| + |I1 - I2| + |I2 - P|, from which the virtual vertex is calculated as P' = V + d·(I1 - V)/|I1 - V|.
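A sketch transcribing these formulas into C++ (with a hypothetical Vec3 type); it assumes x, y, z, a and η are the quantities defined above and that I1 has already been found.

```cpp
// Illustrative transcription of the I2 and P' formulas above (hypothetical Vec3 type).
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double len(Vec3 a) { return std::sqrt(dot(a, a)); }

// n1: outward normal of the entrance face; x, y, z: local coordinates of I1;
// a: vertical distance from the viewpoint to the exit face; eta: refractive index.
Vec3 exitPointI2(Vec3 V, Vec3 I1, Vec3 n1, double x, double y, double z,
                 double a, double eta) {
    double r2 = x * x + y * y + z * z;
    Vec3 t = sub(sub(V, I1), mul(n1, dot(sub(V, I1), n1)));   // tangential component
    Vec3 eHat = mul(t, 1.0 / len(t));                          // unit vector e^
    double C = eta * (a - y) * std::sqrt(r2) / y;
    double along  = C * std::sqrt(1.0 - (x * x + y * y) / (eta * eta * r2));
    double across = C * std::sqrt(x * x + y * y) / (eta * std::sqrt(r2));
    return sub(sub(I1, mul(n1, along)), mul(eHat, across));    // I2 per the formula above
}

// Virtual vertex: place P' at total path length d along the ray V -> I1.
Vec3 virtualVertex3D(Vec3 V, Vec3 I1, Vec3 I2, Vec3 P) {
    double d = len(sub(I1, V)) + len(sub(I2, I1)) + len(sub(P, I2));
    Vec3 dir = sub(I1, V);
    return add(V, mul(dir, d / len(dir)));
}
```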
we generate two sets of points by ray tracing and then back calculate I by the above formula1Obviously, if I1Close, then I2As well as similar. Specific results are shown in the following table, where ε is the relative error, i.e.: the ratio of the distance from the virtual vertex P 'to the exact virtual vertex P' to the distance from the viewpoint to the virtual vertex, which is back-calculated by the approximation formula
ε=|P′-p′|/|V-p′|。
Ray tracing generated points | Points back-calculated by formula
V | P | I1 | I2 | p' | I1 | I2 | P' | ε
(100, 100, 50) | (217.123957, 210.684713, 102.631579) | (175, 150, 80) | (200.000000, 176.814175, 90.000000) | (200.000000, 176.814175, 90.000000) | (174.540108, 150.000000, 79.788060) | (200.000000, 177.399736, 89.962460) | (236.239496, 191.386704, 104.444653) | 0.002957
(106, 106, 56) | (217.123957, 210.249311, 106.231579) | (181, 156, 86) | (200.000000, 176.378773, 93.600000) | (235.378947, 192.252632, 107.751579) | (173.321464, 150.000000, 83.009974) | (200.000000, 178.308774, 93.713641) | (235.760911, 190.809209, 108.061239) | 0.009305
3) Spherical reflection and refraction
(1) Spherical reflection
Given: the viewpoint V and an arbitrarily selected point P in the scene; the radius of the reflecting sphere is r and its center is the point O, as shown in Fig. 6. Let a = |V - P|, b = |V - O| and c = |O - P|.
According to the geometrical optics principle, the following can be obtained:
2bcxyz·cos(α + β) = bcxy·sin²(α + β) - b²y²z - c²x²z (5)
x(r² + y² - c²) = y(x² + r² - b²) (6)
where z = (a² - (x - y)²)/(4abc).
Take the viewpoint V as the coordinate origin and the distance from the viewpoint to the sphere center O as the unit length. From equation (6), y can be solved and regarded as a function of x. A Taylor expansion of y is performed at x = b - r and a linear approximation is taken:
y = A + Bx, (7)
where A and B are the coefficients of the expansion.
substituting (7) into equation (5) can obtain the following fourth-order equation:
x(A + Bx)[a² - (x - A - Bx)²]·cos(α + β) =
2bcx(A + Bx)·sin²(α + β) - [b²(A + Bx)² + c²x²][a² - (x - A - Bx)²]/(2bc), (8)
The virtual vertex P' is calculated by solving equation (8) for x and substituting the result into (7) to obtain y.
How to calculate the virtual vertex P' is described below:
I. Since the three points V, O and P lie in one plane, we take the point O as the origin and establish a local coordinate system with the normal vector N = [Nx, Ny, Nz]T of the plane VPO as the z-axis, OP as the y-axis and OP × N as the x-axis. From this we obtain the transformation matrix T from the local coordinate system to the world coordinate system:
where wx = Nx, wy = Ny, wz = Nz;
ux = vy·wz - wy·vz, uy = wx·vz - vx·wz, uz = vx·wy - wx·vy;
vx, vy and vz are the three coordinate components of the unit vector (P - O)/|P - O|, respectively;
x and y are the distances from the point I on the sphere to the viewpoint V and to the point P in the scene, respectively: x = |V - I|, y = |I - P|; the formula I = T·LI transforms LI from local coordinates into world coordinates.
II. In the local coordinate system, three circles are drawn with O, V and P as centers and r (the radius of the sphere), x and y as radii, denoted c1, c2 and c3 respectively.
III. Calculate the two intersection points p1, p2 of c1 and c2 and the two intersection points p3, p4 of c1 and c3. Compute the distances p1p3, p1p4, p2p3 and p2p4; the average of the coordinates of the pair of points with the minimum distance gives the coordinates of the point I on the sphere in the local coordinate system, denoted LI = [LIx, LIy, Pz, 1]T. Using the formula I = T·LI, the local coordinates LI are transformed into world coordinates, and the result is denoted I.
IV. Calculate P' = V + (x + y)·(I - V)/|I - V| (an illustrative code sketch of steps I-IV follows).
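A hedged C++ sketch of steps II-IV: intersect the three circles in the local plane of V, O, P, average the closest pair of candidates to get I, and form P'. Vec2 is a hypothetical helper; x and y are assumed to be the distances solved from equations (7)/(8).

```cpp
// Illustrative sketch of steps II-IV above (not the patent's code).
#include <cmath>
#include <array>
#include <algorithm>
#include <limits>

struct Vec2 { double x, y; };

// Intersections of a circle centred at the origin (radius r0) with a circle
// centred at c (radius rc); assumes the circles do intersect.
static std::array<Vec2, 2> circleCircle(double r0, Vec2 c, double rc) {
    double d = std::sqrt(c.x * c.x + c.y * c.y);
    double a = (d * d + r0 * r0 - rc * rc) / (2.0 * d);
    double h = std::sqrt(std::max(0.0, r0 * r0 - a * a));
    Vec2 u{c.x / d, c.y / d};                     // unit vector towards c
    Vec2 base{a * u.x, a * u.y};
    return {Vec2{base.x - h * u.y, base.y + h * u.x},
            Vec2{base.x + h * u.y, base.y - h * u.x}};
}

// Reflection point I in local coordinates: circles centred at O (origin, radius r),
// V_loc (radius x) and P_loc (radius y); the closest pair of candidates is averaged.
Vec2 reflectionPointLocal(double r, Vec2 Vloc, double x, Vec2 Ploc, double y) {
    std::array<Vec2, 2> pv = circleCircle(r, Vloc, x);   // p1, p2
    std::array<Vec2, 2> pp = circleCircle(r, Ploc, y);   // p3, p4
    double best = std::numeric_limits<double>::max();
    Vec2 I{0.0, 0.0};
    for (const Vec2& a : pv)
        for (const Vec2& b : pp) {
            double d2 = (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
            if (d2 < best) { best = d2; I = {0.5 * (a.x + b.x), 0.5 * (a.y + b.y)}; }
        }
    return I;   // transform with I_world = T * LI and then apply
                // P' = V + (x + y) (I_world - V) / |I_world - V|
}
```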
We generated two sets of points by ray tracing and then back-calculated I and the virtual vertex by the above formulas. The specific results are shown in the following table.
Ray tracing generated points | Points back-calculated by formula
Viewpoint | Point in the scene | I | Exact virtual vertex | I | Approximate virtual vertex | Relative deviation ε
(-116.995399, 96.421837, -31.162001) | (270.128968, 1170.523969, 124.238610) | (-81.650197, 34.787454, -38.100506) | (512.488025, -1001.261011, -154.733903) | (-81.651880, 34.786900, -38.100637) | (512.460899, -1001.275845, -154.736798) | 0.000024
(-116.995399, 96.421837, -31.162001) | (-488.154348, 602.063430, -436.722567) | (-87.328756, 32.818073, -42.643174) | (246.401988, -682.683540, -171.799019) | (-87.328580, 32.818149, -42.642386) | (246.404709, -682.683837, -171.789592) | 0.000011
(2) Spherical refraction
As shown in Fig. 7, the radius of the sphere is r. From geometrical optics two equations can be obtained: a non-linear relation between x and y, denoted (9), and
y(x² + r² - a²) = x(y² + r² - b²). (10)
Without loss of generality, take the viewpoint V as the origin of coordinates and the distance from the viewpoint to the sphere center O as the unit length. Let
x = a - r + t. (11)
A Taylor expansion of equation (10) at t = 0 with a linear approximation gives a linear expression for y in t, denoted (12), with coefficients A and B. Substituting (11) and (12) into (9) and ignoring the higher-order terms of t (order 5 and above) gives
y = G0 + G1t + G2t² + G3t³ + G4t⁴, (13)
where
G0 = T0² - C1S0R0;
G1 = 2T0T1 - C1(S0R1 + S1R0);
G2 = 2T0T2 + T1² - C1(S0R2 + S1R1 + S2R0);
G3 = 2T0T3 + 2T1T2 - C1(S0R3 + S1R2 + S2R1 + S3R0);
G4 = 2T0T4 + 2T1T3 + T2² - C1(S0R4 + S1R3 + S2R2 + S3R1 + S4R0);
C0 = a - r; C1 = sin²θ/(a²b²r²); C2 = -1/(4r²); C3 = cosθ/(4abr²); C4 = cosθ/(ab);
C5 = 2/η² + C4AC0; C6 = C4(A + BC0); C7 = C4B;
S0 = C0² + C2(C0² - a² + r²)²;
S1 = 2C0 + 4C2C0(C0² - a² + r²);
S2 = 1 + 2C2(3C0² - a² + r²);
S3 = 4C2C0;
S4 = C2;
Q0 = r²C0 + A(a² - C0²);
Q1 = r² - 2AC0 + B(a² - C0²);
Q2 = -A - 2BC0;
Q3 = -B;
U0 = Q0C0²;
U1 = 2Q0C0 + Q1C0²;
U2 = Q0 + 2Q1C0 + Q2C0²;
U3 = Q1 + 2Q2C0 + Q3C0²;
U4 = Q2 + 2Q3C0;
R0 = Q0U0;
R1 = Q0U1 + Q1U0;
R2 = Q0U2 + Q1U1 + Q2U0;
R3 = Q0U3 + Q1U2 + Q2U1 + Q3U0;
R4 = Q0U4 + Q1U3 + Q2U2 + Q3U1;
P0 = (a² + r² - C0²)(b² + r² - A²);
P1 = -2AB(a² + r² - C0²) - 2C0(b² + r² - A²);
P2 = -B²(a² + r² - C0²) + 4C0AB - (b² + r² - A²);
P3 = 2C0B² + 2AB;
P4 = B²;
T0 = C3C0²P0 - C5S0 + C0²;
T1 = C3(C0²P1 + 2C0P0) - C5S1 - C6S0 + 2C0;
T2 = C3(C0²P2 + 2C0P1 + P0) - C5S2 - C6S1 - C7S0 + 1;
T3 = C3(C0²P3 + 2C0P2 + P1) - C5S3 - C6S2 - C7S1;
T4 = C3(C0²P4 + 2C0P3 + P2) - C5S4 - C6S3 - C7S2;
t can be calculated from equation (13), and x and y are then obtained by substituting t into equations (11) and (12), respectively, whereby the virtual vertex P' can be obtained.
The method of determining I1 and I2 is as follows:
After x and y are obtained, draw a circle of radius x centred at V and a circle of radius y centred at P; these circles intersect the refracting sphere at I1′, I1″ and at I2′, I2″ respectively, see FIG. 8. The incident point I1 on the sphere is one of I1′ and I1″, and the exit point I2 is one of I2′ and I2″. That is, there are four possible (I1, I2) pairs, and the law of refraction is applied to pick out the correct one. Note that if I1 and I2 are the exact incident and exit points, then
$$\frac{\sin\beta}{\sin\alpha}=\frac{1}{\eta},\qquad\text{in other words}\qquad\frac{1-\cos^2\beta}{1-\cos^2\alpha}=\frac{1}{\eta^2},$$
while at the same time
$$\cos\beta=\frac{\lVert I_1I_2\rVert}{2r},\qquad \cos\alpha=-\frac{x^2+r^2-a^2}{2xr}=-\frac{y^2+r^2-b^2}{2yr}.$$
Thus
$$\frac{1-\dfrac{\lVert I_1I_2\rVert^2}{4r^2}}{1-\left(\dfrac{x^2+r^2-a^2}{2xr}\right)^2}=\frac{1}{\eta^2},\qquad \frac{1-\dfrac{\lVert I_1I_2\rVert^2}{4r^2}}{1-\left(\dfrac{y^2+r^2-b^2}{2yr}\right)^2}=\frac{1}{\eta^2}.$$
Our selection criterion is therefore to take the pair (I1, I2) for which
$$\left|\frac{1-\dfrac{\lVert I_1I_2\rVert^2}{4r^2}}{1-\left(\dfrac{x^2+r^2-a^2}{2xr}\right)^2}-\frac{1}{\eta^2}\right| + \left|\frac{1-\dfrac{\lVert I_1I_2\rVert^2}{4r^2}}{1-\left(\dfrac{y^2+r^2-b^2}{2yr}\right)^2}-\frac{1}{\eta^2}\right|$$
is smallest.
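A minimal sketch of this selection rule is given below. It assumes the four candidate points are already available as world-space positions, and that a and b denote the distances |V − O| and |P − O| from the viewpoint and the scene point to the sphere centre, as the law-of-cosines form of the expressions suggests; the Vec3 type and the helper functions are illustrative only, not part of the patent text.

```cpp
// Sketch of the (I1, I2) selection rule: among the four pairings, keep the one
// whose deviation from Snell's law, |f(x) - 1/eta^2| + |f(y) - 1/eta^2|, is smallest.
#include <array>
#include <cmath>
#include <limits>
#include <utility>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// term(s, r, d, chord) = (1 - chord^2/(4r^2)) / (1 - ((s^2 + r^2 - d^2)/(2 s r))^2);
// only the square of the cosine enters, so its sign convention is immaterial.
static double term(double s, double r, double d, double chord) {
    double cosA = (s*s + r*r - d*d) / (2.0 * s * r);
    double num  = 1.0 - (chord * chord) / (4.0 * r * r);
    return num / (1.0 - cosA * cosA);
}

// Returns the indices (0 or 1) of the chosen candidates for I1 and I2.
std::pair<int,int> pickIncidentExit(const std::array<Vec3,2>& I1c,
                                    const std::array<Vec3,2>& I2c,
                                    double x, double y, double r,
                                    double a, double b, double eta) {
    double best = std::numeric_limits<double>::max();
    std::pair<int,int> pick{0, 0};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            double chord = dist(I1c[i], I2c[j]);          // ||I1 I2||
            double err = std::abs(term(x, r, a, chord) - 1.0/(eta*eta))
                       + std::abs(term(y, r, b, chord) - 1.0/(eta*eta));
            if (err < best) { best = err; pick = {i, j}; }
        }
    return pick;
}
```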
How to calculate the virtual vertex P′ is described below:
I. Since the five points V, O, P, I1 and I2 lie in one plane, take the point O as the origin and let N = [Nx, Ny, Nz]T be the normal vector of the plane VPO; a local coordinate system is established with OP as the y-axis and N × OP as the x-axis. From this we obtain the transformation matrix from the local coordinate system to the world coordinate system:
wherein
wx = Nx, wy = Ny, wz = Nz;
ux = vywz − wyvz, uy = wxvz − vxwz, uz = vxwy − wxvy;
vx, vy and vz are the three coordinate components of the unit vector (P − O)/|P − O|.
II. Using the formula I = T · LI, the coordinates of I1 and I2 are converted into the world coordinate system; the results are still denoted I1 and I2.
III. Compute
$$P' = V + \left(x + |I_2 - I_1| + y\right)\frac{I_1 - V}{|I_1 - V|}.$$
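The three steps above can be sketched as follows. This is a minimal illustration rather than the patent's code: the frame convention (origin at O, y-axis along OP, z-axis along the plane normal N with an assumed orientation, x-axis along N × OP, the columns of T being u, v, w) is an assumption consistent with the definitions above, and I1, I2 are taken to be already known in local coordinates.

```cpp
// Sketch of steps I-III for the spherical refraction case.
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double norm(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
static Vec3 unit(Vec3 a) { return (1.0 / norm(a)) * a; }

// Local -> world: world = O + xl*u + yl*v + zl*w (the columns of T are u, v, w).
static Vec3 localToWorld(Vec3 local, Vec3 O, Vec3 u, Vec3 v, Vec3 w) {
    return O + local.x * u + local.y * v + local.z * w;
}

Vec3 virtualVertexSphereRefraction(Vec3 V, Vec3 P, Vec3 O,
                                   Vec3 I1local, Vec3 I2local) {
    // Step I: local coordinate system.
    Vec3 v = unit(P - O);                 // y-axis along OP
    Vec3 w = unit(cross(V - O, P - O));   // z-axis: normal N of plane VPO (orientation assumed)
    Vec3 u = cross(w, v);                 // x-axis: N x OP

    // Step II: transform the incident and exit points back to world coordinates.
    Vec3 I1 = localToWorld(I1local, O, u, v, w);
    Vec3 I2 = localToWorld(I2local, O, u, v, w);

    // Step III: P' = V + (x + |I2 - I1| + y) (I1 - V)/|I1 - V|.
    double x = norm(V - I1);
    double y = norm(I2 - P);
    double l = x + norm(I2 - I1) + y;
    return V + l * unit(I1 - V);
}
```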
We generated two sets of points by ray tracing and then back-calculated I1 and I2 with the above formulas. The results are shown in the following table.
Viewpoint | Point in scene | I1 (ray traced) | I2 (ray traced) | Exact virtual vertex | I1 (back-calculated) | Approximate virtual vertex | Relative deviation
---|---|---|---|---|---|---|---
(-116.995399, 96.421837, -31.162001) | (-50.186456, -53.644613, -41.899063) | (-84.352570, 34.299263, -41.259456) | (-72.458122, 7.408969, -43.096134) | (-40.861831, -48.467971, -54.712513) | (-84.352023, 34.299371, -41.260124) | (-40.860776, -48.467295, -54.714001) | 0.000012
(-116.995399, 96.421837, -31.162001) | (-56.922493, -35.934910, -58.193442) | (-83.721483, 32.225986, -32.146602) | (-72.598188, 7.090201, -38.115781) | (-47.772221, -37.131448, -33.210370) | (-83.712906, 32.219162, -32.131935) | (-47.762689, -37.129611, -33.179613) | 0.000214

The first five columns are generated by ray tracing; the last three are back-calculated with the formulas above.
3. Algorithm implementation
Algorithm 1: preprocessing, as shown in FIG. 10.
Step 1: construct a binary space partitioning (BSP) tree for the objects in the scene; if the scene contains spherical reflectors or refractors, the six faces of the cube tightly bounding each sphere are used in place of the sphere when building the BSP tree;
Step 2: collect all reflectors and refractors in the scene to form a reflector/refractor object table;
Step 3: determine the order of the reflectors and refractors in the object table according to the current viewpoint; since the BSP tree is independent of the viewpoint, this order can be obtained quickly;
Step 4: calculate the virtual vertices and generate all the virtual objects, producing the virtual scene.
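A minimal sketch of this preprocessing pass is given below, using placeholder types; buildBsp and boundingCubeFaces are stubs standing in for the actual BSP construction and for the six bounding-cube faces that replace a sphere, and are not part of the patent text.

```cpp
// Sketch of Algorithm 1 (preprocessing) with placeholder types.
#include <vector>

struct Face   { /* polygon data */ };
struct Object {
    std::vector<Face> faces;
    bool isReflector = false;
    bool isRefractor = false;
    bool isSphere    = false;
};
struct BspNode { /* splitting plane, front/back children, face list */ };

// Stub: the six faces of the cube tightly bounding a sphere.
std::vector<Face> boundingCubeFaces(const Object&) { return std::vector<Face>(6); }

// Stub: build the viewpoint-independent BSP tree from the scene faces.
BspNode* buildBsp(const std::vector<Face>&) { return nullptr; }

struct Preprocessed {
    BspNode* bsp = nullptr;
    std::vector<const Object*> reflectorsAndRefractors;  // step 2: the object table
};

Preprocessed preprocess(const std::vector<Object>& scene) {
    Preprocessed out;
    std::vector<Face> bspFaces;
    for (const Object& o : scene) {                      // step 1: gather faces for the BSP tree
        std::vector<Face> f = o.isSphere ? boundingCubeFaces(o) : o.faces;
        bspFaces.insert(bspFaces.end(), f.begin(), f.end());
    }
    out.bsp = buildBsp(bspFaces);
    for (const Object& o : scene)                        // step 2: collect reflectors/refractors
        if (o.isReflector || o.isRefractor)
            out.reflectorsAndRefractors.push_back(&o);
    // Steps 3 and 4 (viewpoint-dependent ordering, virtual vertex and virtual
    // object generation) are carried out per viewpoint; see Algorithm 2.
    return out;
}
```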
Algorithm 2 generates a virtual scene for a planar refractor or reflector, as shown in FIG. 11. It has the following characteristics (a sketch of the recursion follows this list):
(1) the algorithm is recursive, and the recursion runs over the reflectors and refractors in the scene;
(2) reflectors and refractors are treated uniformly;
(3) for a reflector, the reflection formula is used to generate the virtual scene, and the vertex order of the corresponding faces is changed, i.e. the vertex order of the faces of a reflected virtual object is opposite to the original order, which ensures that the outward normal of each face points in the correct direction;
(4) in any virtual scene, the order of the reflectors and refractors contained in it is determined anew from the viewpoint and the BSP tree, so that it can be displayed correctly in real time;
(5) if the number of reflectors and refractors in the scene is greater than 1, the termination condition of the algorithm is that the recursion depth does not exceed the given maximum recursion depth.
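A minimal sketch of that recursion, with placeholder types, might look as follows; makeReflectionVirtualScene, makeRefractionVirtualScene and sortByViewpoint are stubs standing in for the virtual-vertex formulas and the BSP-based ordering described elsewhere in this document.

```cpp
// Sketch of the recursion in Algorithm 2; all types and helpers are placeholders.
#include <vector>

struct Body;
struct VirtualScene { std::vector<Body*> bodies; };   // reflectors/refractors seen inside it
struct Body {
    bool isReflector = false;                         // otherwise treated as a refractor
    VirtualScene virtualScene;                        // filled in by the recursion
};

// Stubs: build the virtual scene with the reflection formula (reversing the
// vertex order of each face) or with the refraction virtual-vertex formula.
VirtualScene makeReflectionVirtualScene(const Body&) { return {}; }
VirtualScene makeRefractionVirtualScene(const Body&) { return {}; }
// Stub: re-order the bodies in the virtual scene from the viewpoint and the BSP tree.
void sortByViewpoint(VirtualScene&) {}

void generateVirtualScene(Body& body, int depth, int maxDepth) {
    if (depth >= maxDepth) return;                    // termination condition (feature 5)
    VirtualScene vs = body.isReflector
        ? makeReflectionVirtualScene(body)            // feature (3)
        : makeRefractionVirtualScene(body);
    sortByViewpoint(vs);                              // feature (4)
    for (Body* inner : vs.bodies)                     // feature (1): recurse into nested bodies
        generateVirtualScene(*inner, depth + 1, maxDepth);
    body.virtualScene = vs;
}
```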
If the viewpoint changes, a purely reflective virtual scene does not need to be recalculated, but the virtual scene of any refractor, and all virtual scenes appearing in the subsequent recursion, must be recalculated. The following algorithm recalculates the virtual scenes when the viewpoint changes.
Algorithm 3 recalculates the virtual scenes when the viewpoint position changes, as shown in FIG. 12.
To render the scene in real time, hardware acceleration must be used; here we access the graphics hardware through the common software interface OpenGL. The generation of the virtual objects lays the foundation for real-time rendering. The following is the real-time rendering algorithm we use.
Algorithm 4 performs real-time rendering and display, as shown in FIG. 13.
Claims (6)
1. A real-time ray tracing method for planar and spherical nonlinear refraction and reflection, comprising a method for realizing real-time ray tracing by means of virtual objects, characterized in that: for a polyhedron to be refracted or reflected, the virtual vertices of all vertices of the polyhedron are first calculated by computer, and all the virtual vertices are then connected in turn, according to the original topological relation between the points and faces of the polyhedron, into an optical-mapping virtual object which is composed of several virtual faces and finally forms the reflection or refraction, called a virtual object for short; when displaying graphics, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm using a graphics hardware acceleration technique, and alpha blending is carried out with the brightness value of the refracting surface, so that an image effect similar to ray tracing is obtained; the virtual vertex P′ of any point P in the three-dimensional scene can be expressed as
$$P' = V + l\,\frac{B - V}{|B - V|},$$
wherein V is the viewpoint, l is the sum of the lengths of the path segments along which the light ray L travels, after being refracted or reflected, to reach the point P, and B is a sampling point on the projection plane;
for a planar refractor with the scene outside the refractor, the method comprises the following steps in sequence:
(1) calculate each virtual vertex P′:
$$P' = V + l\,\frac{I_1 - V}{|I_1 - V|};\qquad l = |V - I_1| + |I_1 - I_2| + |I_2 - P|,\qquad Q = V - w_1 N,$$
wherein l is the sum of the lengths of the path segments along which the refracted ray travels to reach the point P in the scene; w1 is the distance from the viewpoint V to the incident face of the refractor; Q is the foot of the perpendicular from the viewpoint V onto the incident face of the refractor; d is the distance from point Q to point I1; U is the intersection of the line VP with the incident face of the refractor; the equation of the incident face of the refractor is C0x + C1y + C2z + C3 = 0, with C = (C0, C1, C2)T; I1 and I2 are the intersection points of the ray L with the two corresponding faces of the planar refractor, i.e. the entry and exit points on the two refracting surfaces; and N is the unit normal vector at the point I1;
(2) connect all the virtual vertices in turn into the edges of the virtual object according to the original topological relation of the objects in the scene, the corresponding faces forming all the virtual faces;
(3) project the virtual object onto the refracting surface using a recursive algorithm and display the resulting image after alpha blending with the local brightness values of the refracting surface itself.
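A minimal sketch of steps (1) and (2) is given below. It assumes the entry and exit points I1 and I2 of the refracted path have already been found (for instance via the quantities Q, d and U defined above, which are not computed here); the Vec3 and Face types are illustrative only.

```cpp
// Sketch of the virtual-vertex formula and of re-using the original topology
// to build the virtual object; I1 and I2 are assumed to be known.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double len(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Step (1): virtual vertex of a scene point P seen through the planar refractor.
Vec3 virtualVertex(Vec3 V, Vec3 P, Vec3 I1, Vec3 I2) {
    double l = len(V - I1) + len(I1 - I2) + len(I2 - P);   // geometric path length
    return V + (l / len(I1 - V)) * (I1 - V);
}

struct Face { std::vector<int> vertexIndices; };            // indices into the vertex array

// Step (2): the virtual object keeps the original topology; only the vertex
// positions are replaced by their virtual counterparts.
struct VirtualObject { std::vector<Vec3> vertices; std::vector<Face> faces; };

VirtualObject buildVirtualObject(const std::vector<Vec3>& virtualVertices,
                                 const std::vector<Face>& originalFaces) {
    return { virtualVertices, originalFaces };
}
```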
2. A real-time ray tracing method for planar and spherical nonlinear refraction and reflection, comprising a method for realizing real-time ray tracing by means of virtual objects, characterized in that: for a polyhedron to be refracted or reflected, the virtual vertices of all vertices of the polyhedron are first calculated by computer, and all the virtual vertices are then connected in turn, according to the original topological relation between the points and faces of the polyhedron, into an optical-mapping virtual object which is composed of several virtual faces and finally forms the reflection or refraction, called a virtual object for short; when displaying graphics, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm using a graphics hardware acceleration technique, and alpha blending is carried out with the brightness value of the refracting surface, so that an image effect similar to ray tracing is obtained; the virtual vertex P′ of any point P in the three-dimensional scene can be expressed as
$$P' = V + l\,\frac{B - V}{|B - V|},$$
wherein V is the viewpoint, l is the sum of the lengths of the path segments along which the light ray L travels, after being refracted or reflected, to reach the point P, and B is a sampling point on the projection plane;
for planar refraction with the scene inside the refractor, the method comprises the following steps in sequence:
(1) calculate each virtual vertex P′:
$$P' = V + l\,\frac{I - V}{|I - V|};\qquad l = |V - I| + |I - P|,\qquad Q = V - w_1 N,$$
wherein I is the intersection point of the ray L with the incident face of the planar refractor, the equation of the incident face being C0x + C1y + C2z + C3 = 0, with C = (C0, C1, C2)T; N is the unit normal vector at the point I; Q is the foot of the perpendicular from the viewpoint V onto the incident face of the refractor; d is the distance from point Q to point I; and U is the intersection of the line VP with the incident face of the refractor;
(2) connect all the virtual vertices in turn into the edges of the virtual object according to the original topological relation of the objects in the scene, the corresponding faces forming the virtual faces;
(3) project the virtual object onto the refracting surface using a recursive algorithm and display the resulting image after alpha blending with the local brightness values of the refracting surface itself.
3. A real-time ray tracing method for planar and spherical nonlinear refraction and reflection, characterized in that: for a polyhedron to be refracted or reflected, the virtual vertices of all vertices of the polyhedron are first calculated by computer, and all the virtual vertices are then connected in turn, according to the original topological relation between the points and faces of the polyhedron, into an optical-mapping virtual object which is composed of several virtual faces and finally forms the reflection or refraction, called a virtual object for short; when displaying graphics, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm using a graphics hardware acceleration technique, and alpha blending is carried out with the brightness value of the refracting surface, so that an image effect similar to ray tracing is obtained; the virtual vertex P′ of any point P in the three-dimensional scene can be expressed as
$$P' = V + l\,\frac{B - V}{|B - V|},$$
wherein V is the viewpoint, l is the sum of the lengths of the path segments along which the light ray L travels, after being refracted or reflected, to reach the point P in the scene corresponding to the virtual vertex, and B is a sampling point on the projection plane;
for three-dimensional planar refraction, in which the ray enters and leaves the refractor through two mutually perpendicular refracting faces, the method comprises the following steps in sequence:
(1) calculate each virtual vertex P′:
$$P' = V + d\,\frac{I_1 - V}{|I_1 - V|};$$
$$d = |V - I_1| + |I_1 - I_2| + |I_2 - P|,\qquad I_1 = V + d_x + d_y + d_z,$$
$$d_x = x\,e_x,\quad d_y = y\,e_y,\quad d_z = z\,e_z,\qquad e_y = N_2,\quad e_z = -N_1,\quad e_x = e_y \times e_z,$$
wherein P is a point on an object in the scene, V is the viewpoint, N1 is the outward normal vector of the incident face, N2 is the outward normal vector of the exit face, the incident face is perpendicular to the exit face, N1 and N2 are unit vectors, ex, ey, ez are the basis vectors of the local coordinate system, x, y, z are the coordinates in the local coordinate system, and dx, dy, dz are the offsets of the point I1 relative to the viewpoint;
$$I_2 = I_1 - N_1\,c\,\sqrt{1-\frac{x^2+y^2}{\eta^2\left(x^2+y^2+z^2\right)}} - \hat e\,\frac{c\sqrt{x^2+y^2}}{\eta\sqrt{x^2+y^2+z^2}},$$
wherein
$$\hat e=\frac{V-I_1-N_1\left[(V-I_1)\cdot N_1\right]}{\left|V-I_1-N_1\left[(V-I_1)\cdot N_1\right]\right|},\qquad c=\frac{\eta\,(a-y)}{y}\sqrt{x^2+y^2+z^2},$$
η is the refractive index of the three-dimensional refractor, and a is the perpendicular distance from the viewpoint to the exit face of the refractor;
(2) connect all the virtual vertices in turn into the edges of the virtual object according to the original topological relation of the objects in the scene, the corresponding faces forming the virtual faces;
(3) project the virtual object onto the refracting surface using a recursive algorithm and display the resulting image after alpha blending with the local brightness values of the refracting surface itself.
4. A real-time ray tracing method for planar and spherical nonlinear refraction and reflection, characterized in that: for a polyhedron to be refracted or reflected, the virtual vertices of all vertices of the polyhedron are first calculated by computer, and all the virtual vertices are then connected in turn, according to the original topological relation between the points and faces of the polyhedron, into an optical-mapping virtual object which is composed of several virtual faces and finally forms the reflection or refraction, called a virtual object for short; when displaying graphics, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm using a graphics hardware acceleration technique, and alpha blending is carried out with the brightness value of the refracting surface, so that an image effect similar to ray tracing is obtained; the virtual vertex P′ of any point P in the three-dimensional scene can be expressed as
$$P' = V + l\,\frac{B - V}{|B - V|},$$
wherein V is the viewpoint, l is the sum of the lengths of the path segments along which the light ray L travels, after being refracted or reflected, to reach the point P, and B is a sampling point on the projection plane;
for a spherical reflector, the method comprises the following steps in sequence:
(1) calculate each virtual vertex P′:
$$P' = V + (x + y)\,\frac{I - V}{|I - V|},$$
wherein I is the incident point of the ray L on the sphere, which can be expressed as I = T · LI, where LI = [LIx, LIy, Pz, 1]T is the coordinate of the point I on the sphere in the local coordinate system, and T is the transformation matrix from the local coordinate system to the world coordinate system, in which
ux = vywz − wyvz, uy = wxvz − vxwz, uz = vxwy − wxvy;
vx, vy and vz are the three coordinate components of the unit vector (P − O)/|P − O|;
wx = Nx, wy = Ny, wz = Nz;
N = (Nx, Ny, Nz)T is the normal vector of the plane VPO formed by the viewpoint V, the centre O of the sphere and the point P in the scene; the local coordinate system is established with O as the origin, N as the z-axis, OP as the y-axis and OP × N as the x-axis;
x and y are the distances from the point I on the sphere to the viewpoint V and to the point P in the scene respectively: x = |V − I|, y = |I − P|; using the formula I = T · LI, LI can be transformed from the local coordinate system to the world coordinate system;
(2) connect all the virtual vertices in turn into the edges of the virtual object according to the original topological relation of the objects in the scene, the corresponding faces forming all the virtual faces;
(3) project the virtual object onto the reflecting surface using a recursive algorithm and display the resulting image after alpha blending with the local brightness values of the reflecting surface itself.
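As an illustration only, the reflection point I (which the claim obtains through the local coordinate system) can also be located numerically: the following sketch scans the circle in which the plane VPO cuts the sphere for the point where the law of reflection sends the ray from V towards P, and then applies the virtual-vertex formula above. The brute-force scan and all helper names are assumptions, not part of the claimed method.

```cpp
// Illustrative numeric sketch: find the reflection point I on the circle
// (sphere cut by the plane VPO), then apply P' = V + (x + y)(I - V)/|I - V|.
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double len(Vec3 a) { return std::sqrt(dot(a, a)); }
static Vec3 unit(Vec3 a) { return (1.0 / len(a)) * a; }

Vec3 virtualVertexSphereReflection(Vec3 V, Vec3 P, Vec3 O, double r) {
    const double kPi = 3.14159265358979323846;
    // Orthonormal basis of the plane VPO (V, P, O assumed not collinear).
    Vec3 e1 = unit(V - O);
    Vec3 n  = unit(cross(V - O, P - O));
    Vec3 e2 = cross(n, e1);

    Vec3 bestI = O + r * e1;
    double bestErr = 1e30;
    const int steps = 4096;                               // brute-force angular scan
    for (int k = 0; k < steps; ++k) {
        double t = 2.0 * kPi * k / steps;
        Vec3 I = O + (r * std::cos(t)) * e1 + (r * std::sin(t)) * e2;
        Vec3 N = unit(I - O);                             // outward normal at I
        Vec3 d = unit(I - V);                             // incident direction
        if (dot(d, N) >= 0.0) continue;                   // I must face the viewpoint
        Vec3 refl = d - (2.0 * dot(d, N)) * N;            // law of reflection
        double err = len(refl - unit(P - I));             // reflected ray should point at P
        if (err < bestErr) { bestErr = err; bestI = I; }
    }
    double x = len(V - bestI), y = len(bestI - P);
    return V + (x + y) * unit(bestI - V);                 // the claim's virtual-vertex formula
}
```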
5. A real-time ray tracing method for planar and spherical nonlinear refraction and reflection, characterized in that: for a polyhedron to be refracted or reflected, the virtual vertices of all vertices of the polyhedron are first calculated by computer, and all the virtual vertices are then connected in turn, according to the original topological relation between the points and faces of the polyhedron, into an optical-mapping virtual object which is composed of several virtual faces and finally forms the reflection or refraction, called a virtual object for short; when displaying graphics, the virtual object is recursively projected onto the reflecting or refracting surface by a recursive algorithm using a graphics hardware acceleration technique, and alpha blending is carried out with the brightness value of the refracting surface, so that an image effect similar to ray tracing is obtained; the virtual vertex P′ of any point P in the three-dimensional scene can be expressed as
$$P' = V + l\,\frac{B - V}{|B - V|},$$
wherein V is the viewpoint, l is the sum of the lengths of the path segments along which the light ray L travels, after being refracted or reflected, to reach the point P, and B is a sampling point on the projection plane;
for a spherical refractor, the method comprises the following steps in sequence:
(1) calculate each virtual vertex P′:
$$P' = V + \left(x + |I_1 - I_2| + y\right)\frac{I_1 - V}{|I_1 - V|},$$
wherein I1 and I2 are respectively the incident point and the exit point of the ray on the spherical refractor, whose local coordinates are transformed into coordinates of the world coordinate system by the formula I = T · LI; x and y are respectively the distance from the viewpoint V to the incident point I1 and the distance from the exit point I2 to the point P in the scene, i.e. x = |V − I1|, y = |I2 − P|; T is the transformation matrix from the local coordinate system to the world coordinate system, in which
ux = vywz − wyvz, uy = wxvz − vxwz, uz = vxwy − wxvy;
vx, vy and vz are the three coordinate components of the unit vector (P − O)/|P − O|;
wx = Nx, wy = Ny, wz = Nz;
N = (Nx, Ny, Nz)T is the normal vector of the plane VPO formed by the viewpoint V, the centre O of the sphere and the point P in the scene; the local coordinate system is established with O as the origin, N as the z-axis, OP as the y-axis and OP × N as the x-axis;
(2) connect all the virtual vertices in turn into the edges of the virtual object according to the original topological relation of the objects in the scene, the corresponding faces forming all the virtual faces;
(3) project the virtual object onto the refracting surface using a recursive algorithm and display the resulting image after alpha blending with the local brightness values of the refracting surface itself.
6. The real-time ray tracing method for planar and spherical nonlinear refraction and reflection according to claim 1, 2, 3, 4 or 5, wherein the procedure for displaying graphics using a recursive algorithm comprises the following steps and algorithms:
Algorithm 1: preprocessing.
Step 1: construct a binary space partitioning (BSP) tree for the objects in the scene; if the scene contains spherical reflectors or refractors, the six faces of the cube tightly bounding each sphere are used in place of the sphere when building the BSP tree;
Step 2: collect all reflectors and refractors in the scene to form a reflector/refractor object table;
Step 3: determine the order of the reflectors and refractors in the object table according to the current viewpoint; since the BSP tree is independent of the viewpoint, this order can be obtained quickly;
Step 4: calculate the virtual vertices and generate all the virtual objects, producing the virtual scene;
Algorithm 2: generate a virtual scene for a reflector or refractor:
i) has the maximum recursion level been reached? If yes, return;
ii) if the body is a reflector, calculate the reflected virtual scene; if it is a refractor, calculate the refracted virtual scene;
iii) order the reflectors and refractors in the virtual scene according to the viewpoint;
iv) if reflectors or refractors still exist in the virtual scene, recursively call this algorithm for them and go to i); otherwise go to v);
v) return;
if the viewpoint changes, a planar-reflection virtual scene does not need to be recalculated, but the virtual scene of any refractor or spherical reflector, and all virtual scenes appearing in the subsequent recursion, must be recalculated; the following algorithm recalculates the virtual scenes when the viewpoint changes:
Algorithm 3: recalculate the virtual scenes when the viewpoint position changes:
i) is the body a refractor?
ii) if yes, call Algorithm 2;
iii) if not, recursively call this algorithm;
Algorithm 4: real-time rendering and display (the algorithm is applied to one reflector or refractor in the real scene):
i) set the OpenGL rendering state (disable color buffer writes, disable depth comparison, disable alpha blending, and make the stencil buffer value of drawn pixels increase by 1);
ii) render the reflector or refractor itself;
iii) if the virtual scene contains reflectors or refractors, go to i) and recursively draw the reflectors or refractors in the virtual scene; otherwise go to iv);
iv) set the OpenGL rendering state (keep the stencil buffer value of drawn pixels unchanged, enable color buffer writes, enable the depth test; other rendering parameters unchanged);
v) draw the other virtual objects of the current virtual scene within the current stencil region;
vi) set the OpenGL rendering state (color buffer writable, depth comparison enabled, alpha blending enabled, and make the stencil buffer value of drawn pixels decrease by 1);
vii) render the reflector or refractor itself;
viii) restore the OpenGL rendering state (depth buffer writable, color buffer writable, alpha blending disabled, stencil operations disabled);
ix) return.
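A minimal OpenGL sketch of these nine steps is given below. It is an illustration only: the Body and VirtualScene types, the drawBody and drawVirtualObjects hooks, and the exact stencil bookkeeping are assumptions rather than the patent's implementation.

```cpp
// Illustrative OpenGL sketch of Algorithm 4 for one reflector/refractor; the
// stencil value 'level' masks the screen region in which the virtual scene of
// this body may be drawn.
#include <GL/gl.h>
#include <vector>

struct Body;
struct VirtualScene { std::vector<Body*> bodies; };   // nested reflectors/refractors
struct Body { VirtualScene virtualScene; };

void drawBody(const Body&) {}                         // assumed hook: issues the body's polygons
void drawVirtualObjects(const VirtualScene&) {}       // assumed hook: issues the virtual faces

void renderRecursive(const Body& body, int level, int maxLevel) {
    // i) mask the region covered by the body: stencil +1, no color/depth writes.
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glStencilFunc(GL_EQUAL, level, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawBody(body);                                   // ii)

    // iii) recurse into the reflectors/refractors of the virtual scene.
    if (level + 1 < maxLevel)
        for (const Body* inner : body.virtualScene.bodies)
            renderRecursive(*inner, level + 1, maxLevel);

    // iv)-v) draw the other virtual objects only where the stencil matches.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glStencilFunc(GL_EQUAL, level + 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawVirtualObjects(body.virtualScene);

    // vi)-vii) alpha-blend the body's own surface color on top, stencil -1.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawBody(body);

    // viii) restore state.
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}
```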
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 02130945 CN1410948A (en) | 2002-09-23 | 2002-09-23 | Real time light tracing method of non linear refraction and reflection on plane and spherical surfaces |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 02130945 CN1410948A (en) | 2002-09-23 | 2002-09-23 | Real time light tracing method of non linear refraction and reflection on plane and spherical surfaces |
Publications (1)
Publication Number | Publication Date |
---|---|
CN1410948A true CN1410948A (en) | 2003-04-16 |
Family
ID=4746520
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 02130945 Pending CN1410948A (en) | 2002-09-23 | 2002-09-23 | Real time light tracing method of non linear refraction and reflection on plane and spherical surfaces |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1410948A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101297325A (en) * | 2005-12-29 | 2008-10-29 | 英特尔公司 | Application of interval algorithm for reducing computation time in ray tracking problems |
CN101297325B (en) * | 2005-12-29 | 2013-04-24 | 英特尔公司 | Method and device for radio tracking |
CN101506847B (en) * | 2006-08-22 | 2012-03-28 | 国际商业机器公司 | Methods and systems for partitioning a spatial index |
CN101165721B (en) * | 2006-10-17 | 2010-06-02 | 国际商业机器公司 | Ray tracking method and system |
CN100570638C (en) * | 2006-11-28 | 2009-12-16 | 国际商业机器公司 | The method of dispensing work load and image processing system |
CN100557637C (en) * | 2006-11-28 | 2009-11-04 | 国际商业机器公司 | Carry out the method and system of ray tracing |
CN100557638C (en) * | 2006-11-28 | 2009-11-04 | 国际商业机器公司 | Carry out the method and system of ray tracing |
CN101276479B (en) * | 2007-03-29 | 2011-08-31 | 国际商业机器公司 | Image process method and system |
CN101359404A (en) * | 2007-07-31 | 2009-02-04 | 英特尔公司 | Real-time luminosity dependent subdivision |
CN101918769A (en) * | 2007-10-24 | 2010-12-15 | 伊苏勒有限公司 | Heliostat calibration in a kind of central tower receiver solar power plant and tracking control |
CN101918769B (en) * | 2007-10-24 | 2013-01-16 | 伊苏勒有限公司 | Calibration and tracking control of heliostats in a central tower receiver solar power plant |
CN102282591A (en) * | 2008-09-10 | 2011-12-14 | 柯斯提克绘图有限公司 | Ray tracing system architectures and methods |
CN101894390A (en) * | 2010-06-29 | 2010-11-24 | 浙江大学 | Ray tracing method for non-constant refractive index medium |
CN101894390B (en) * | 2010-06-29 | 2012-07-04 | 浙江大学 | Ray tracing method for non-constant refractive index medium |
CN101982838A (en) * | 2010-11-02 | 2011-03-02 | 长春理工大学 | 3D virtual set ray tracking method for accelerating back light source irradiation |
CN102074041A (en) * | 2010-12-21 | 2011-05-25 | 长春理工大学 | Method for drawing planar caustic effect of 3D virtual scene produced by specular reflection |
CN102074041B (en) * | 2010-12-21 | 2012-10-10 | 长春理工大学 | Method for drawing planar caustic effect of 3D virtual scene produced by specular reflection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |