CG Notes All Units
UNIT I - 2D PRIMITIVES
Output primitives: Line, Circle and Ellipse drawing algorithms - Attributes of output primitives - Two dimensional geometric transformations - Two dimensional viewing - Line, Polygon, Curve and Text clipping algorithms
Introduction
A picture is completely specified by the set of intensities for the pixel positions in the display. Shapes and colors of the objects can be described internally with pixel arrays loaded into the frame buffer, or with sets of basic geometric structures such as straight line segments and polygon color areas. These basic geometric structures used to describe objects are referred to as output primitives.
Each output primitive is specified with input coordinate data and other information about the way that object is to be displayed. Additional output primitives that can be used to construct a picture include circles and other conic sections, quadric surfaces, spline curves and surfaces, polygon color areas, and character strings.
Figure : Pixel positions referenced by scan-line number and column number
To set the color of a pixel at a specified location, we use the low-level procedure

setpixel (x, y)

To retrieve the current frame-buffer intensity setting for a specified location, we use the low-level function

getpixel (x, y)
Line Drawing Algorithms

Digital devices display a straight line segment by plotting discrete points between the two endpoints. Discrete coordinate positions along the line path are calculated from the equation of the line. For a raster video display, the line color (intensity) is then loaded into the frame buffer at the corresponding pixel coordinates. Reading from the frame buffer, the video controller then plots the screen pixels.

Pixel positions are referenced according to scan-line number and column number (pixel position across a scan line). Scan lines are numbered consecutively from 0, starting at the bottom of the screen, and pixel columns are numbered from 0, left to right across each scan line.

The Cartesian slope-intercept equation for a straight line is
y = m . x + b        (1)

where m is the slope of the line and b is the y intercept.
http://csetube.weebly.com/
Given that the two endpoints of a line segment are specified at positions (x1,y1) and (x2,y2), as in the figure, we can determine values for the slope m and y intercept b with the following calculations:
Figure : Straight line Segment with five sampling positions along the x axis
between x1 and x2
m = (y2 − y1) / (x2 − x1)        (2)

b = y1 − m . x1        (3)
For any given x interval Δx along a line, we can compute the corresponding y interval Δy:

Δy = m . Δx        (4)
We sample the line at unit intervals in one coordinate and determine the corresponding integer values nearest the line path for the other coordinate. Consider a line with positive slope. If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as
yk+1 = yk + m        (6)

The subscript k takes integer values starting from 1 for the first point and increases by 1 until the final endpoint is reached. Since m can be any real number between 0 and 1, the calculated y values must be rounded to the nearest integer.

For lines with slope magnitude |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy as calculated from Eq. (4).

For lines with a positive slope greater than 1, we reverse the roles of x and y: we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as

xk+1 = xk + (1/m)        (7)

For lines whose slopes have magnitude |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx, calculated from Eq. (5):

Δx = Δy / m        (5)

For lines with m = 1, Δx = Δy, and the horizontal and vertical deflection voltages are equal.

Equations (6) and (7) are based on the assumption that lines are to be processed from the left endpoint to the right endpoint.
If this processing is reversed, so that the starting endpoint is at the right, then either Δx = −1 and

yk+1 = yk − m        (8)

or, when the slope is greater than 1, Δy = −1 with

xk+1 = xk − (1/m)        (9)
Algorithm Description:
If the absolute value of the slope is less than 1 and the start endpoint is at the left, we set Δx = 1 and calculate y values with Eq. (6).
When the start endpoint is at the right (for the same slope), we set Δx = −1 and obtain y positions from Eq. (8). Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = −1 and Eq. (9), or we use Δy = 1 and Eq. (7).
Example (DDA with slope m = 0.66, sampling at unit x intervals):

k    xk+1             yk+1
0    0+1 = 1          0+0.66 = 0.66
1    1+1 = 2          0.66+0.66 = 1.32
2    2+1 = 3          1.32+0.66 = 1.98
3    3+1 = 4          1.98+0.66 = 2.64
4    4+1 = 5          2.64+0.66 = 3.3
5    5+1 = 6          3.3+0.66 = 3.96
Algorithm
#define ROUND(a) ((int)(a+0.5))
void lineDDA (int xa, int ya, int xb, int yb)
{
int dx = xb - xa, dy = yb - ya, steps, k;
float xIncrement, yIncrement, x = xa, y = ya;
if (abs (dx) > abs (dy)) steps = abs (dx);
else steps = abs (dy);
xIncrement = dx / (float) steps;
yIncrement = dy / (float) steps;
setpixel (ROUND(x), ROUND(y));
for (k = 0; k < steps; k++)
{
x += xIncrement;
y += yIncrement;
setpixel (ROUND(x), ROUND(y));
}
}
Bresenham's Line Algorithm

Bresenham's line algorithm is an efficient raster line-generating algorithm that uses only incremental integer calculations. In addition, Bresenham's line algorithm can be adapted to display circles and other curves.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0,y0) of a given line, we step to each successive
column (x position) and plot the pixel whose scan-line y value is closest to the line
path.
Assuming the pixel at (xk, yk) is displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1: the candidates are (xk+1, yk) and (xk+1, yk+1). At sampling position xk+1, we label the vertical pixel separations from the mathematical line path as d1 and d2. The y coordinate on the mathematical line at pixel column position xk+1 is calculated as
y = m(xk + 1) + b        (1)

Then

d1 = y − yk = m(xk + 1) + b − yk
d2 = (yk + 1) − y = yk + 1 − m(xk + 1) − b

To determine which of the two pixels is closest to the line path, we use an efficient test based on the difference between the two pixel separations:

d1 − d2 = 2m(xk + 1) − 2yk + 2b − 1        (2)
A decision parameter pk for the kth step in the line algorithm can be obtained by rearranging Eq. (2). Substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining the decision parameter as

pk = Δx (d1 − d2)
   = 2Δy . xk − 2Δx . yk + c        (3)

where the constant c = 2Δy + Δx(2b − 1) is independent of pixel position.
If the pixel at yk is closer to the line path than the pixel at yk+1 (that is, d1 < d2), then the decision parameter pk is negative. In that case we plot the lower pixel; otherwise we plot the upper pixel.

Coordinate changes along the line occur in unit steps in either the x or y direction. We can obtain the values of successive decision parameters using incremental integer calculations. At step k+1, the decision parameter is evaluated from Eq. (3) as

pk+1 = pk + 2Δy − 2Δx (yk+1 − yk)

Bresenham's line-drawing algorithm for |m| < 1:

1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate the constants Δx, Δy, 2Δy and 2Δy − 2Δx, and obtain the starting value for the decision parameter as p0 = 2Δy − Δx.
4. At each xk along the line, starting at k = 0, perform the following test:
   If pk < 0, the next point to plot is (xk+1, yk) and
   pk+1 = pk + 2Δy
   otherwise, the next point to plot is (xk+1, yk+1) and
   pk+1 = pk + 2Δy − 2Δx
5. Perform step 4 Δx times.
The constants 2Δy and 2Δy − 2Δx are calculated once for each line to be scan converted, so the arithmetic involves only integer addition and subtraction of these two constants.
else
{
y++;
p+=twoDyDx;
}
setPixel(x,y);
}
}
Example: To illustrate the algorithm, we digitize a line with endpoints (20, 10) and (30, 18), so that

Δx = 10,  Δy = 8

The initial decision parameter has the value

p0 = 2Δy − Δx = 6

and the increments for calculating successive decision parameters are

2Δy = 16
2Δy − 2Δx = −4

We plot the initial point (x0, y0) = (20, 10) and determine successive pixel positions along the line path from the decision parameter as follows.
Tabulation:

k    pk    (xk+1, yk+1)
0    6     (21,11)
1    2     (22,12)
2    −2    (23,12)
3    14    (24,13)
4    10    (25,14)
5    6     (26,15)
6    2     (27,16)
7    −2    (28,16)
8    14    (29,17)
9    10    (30,18)
Advantages
Algorithm is Fast
Uses only integer calculations
Disadvantages
Circle-Generating Algorithms

Properties of a circle

A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc). Stepping along the x axis and solving the circle equation for the corresponding y values is not the best method for generating a circle, for the following reasons:
    Considerable amount of computation
    Spacing between plotted pixels is not uniform

One way to eliminate the unequal spacing is to calculate points along the circle boundary using polar coordinates r and θ. Expressing the circle equation in parametric polar form yields the pair of equations

x = xc + r cosθ
y = yc + r sinθ

When a display is generated with these equations using a fixed angular step size, a circle is plotted with equally spaced points along the circumference. To reduce calculations, we can use a large angular separation between points along the circumference and connect the points with straight line segments to approximate the circular path.
For a more continuous boundary, we can set the angular step size at 1/r. This plots pixel positions that are approximately one unit apart. The shape of the circle is similar in each quadrant: we can determine the curve positions in the first quadrant, then generate the circle section in the second quadrant of the xy plane by noting that the two circle sections are symmetric with respect to the y axis, and circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis.
Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants. A point at position (x, y) on a one-eighth circle sector is mapped into the seven circle points in the other octants of the xy plane.
To generate all pixel positions around a circle, we need only calculate the points within the sector from x = 0 to x = y; the slope of the curve in this octant has a magnitude less than or equal to 1.0. At x = 0 the circle slope is 0, and at x = y the slope is −1.0.

In the midpoint circle algorithm we sample at unit intervals, as in the raster line algorithm, and determine the closest pixel position to the specified circle path at each step. For a given radius r and screen center position (xc, yc), we first calculate pixel positions around a circle path centered at the coordinate origin, and then move each calculated position (x, y) onto its proper screen position by adding xc to x and yc to y.
To apply the midpoint method we define a circle function as
fcircle(x,y) = x2+y2-r2
Any point (x,y) on the boundary of the circle with radius r satisfies the equation
fcircle (x,y)=0. If the point is in the interior of the circle, the circle function is
negative. And if the point is outside the circle the, circle function is positive
fcircle (x,y) <0, if (x,y) is inside the circle boundary
=0, if (x,y) is on the circle boundary
>0, if (x,y) is outside the circle boundary
The tests in the above equation are performed for the midpositions between pixels near the circle path at each sampling step. Thus, the circle function is the decision parameter in the midpoint algorithm.
For a straight line segment, the midpoint method is equivalent to the Bresenham line algorithm. The error involved in locating pixel positions along any conic section using the midpoint test is limited to one-half the pixel separation.
Successive decision parameters are obtained with incremental calculations: if pk < 0, then pk+1 = pk + 2xk+1 + 1; otherwise pk+1 = pk + 2xk+1 + 1 − 2yk+1.

Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk − 2

At the start position (0, r), these two terms have the values 0 and 2r respectively. Each successive value for the 2xk+1 term is obtained by adding 2 to the previous value, and each successive value for the 2yk+1 term is obtained by subtracting 2 from the previous value.
The initial value of the decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

p0 = fcircle(1, r − 1/2) = 1 + (r − 1/2)2 − r2

or

p0 = (5/4) − r

If the radius r is specified as an integer, we can simply round p0 to

p0 = 1 − r        (for r an integer)
Midpoint Circle Algorithm
1. Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as
   (x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as p0 = (5/4) − r.
3. At each xk position, starting at k = 0, perform the following test: if pk < 0, the next point along the circle centered on (0,0) is (xk+1, yk) and
   pk+1 = pk + 2xk+1 + 1
   Otherwise, the next point along the circle is (xk+1, yk − 1) and
   pk+1 = pk + 2xk+1 + 1 − 2yk+1
   where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered at (xc, yc) and plot the coordinate values:
   x = x + xc,  y = y + yc
6. Repeat steps 3 through 5 until x >= y.
Example: Given a circle radius r = 10, we determine the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is

p0 = 1 − r = −9

For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and the initial increment terms for calculating the decision parameters are

2x0 = 0,  2y0 = 20

Successive decision-parameter values and positions along the circle path are:

k    pk    (xk+1, yk−1)    2xk+1    2yk+1
0    −9    (1,10)          2        20
1    −6    (2,10)          4        20
2    −1    (3,10)          6        20
3     6    (4,9)           8        18
4    −3    (5,9)           10       18
5     8    (6,8)           12       16
6     5    (7,7)           14       14
p += 2 * (x - y) + 1;
}
circlePlotPoints (xCenter, yCenter, x, y);
}
}
void circlePlotPoints (int xCenter, int yCenter, int x, int y)
{
setpixel (xCenter + x, yCenter + y);
setpixel (xCenter - x, yCenter + y);
setpixel (xCenter + x, yCenter - y);
setpixel (xCenter - x, yCenter - y);
setpixel (xCenter + y, yCenter + x);
setpixel (xCenter - y, yCenter + x);
setpixel (xCenter + y, yCenter - x);
setpixel (xCenter - y, yCenter - x);
}
Ellipse-Generating Algorithms
Properties of ellipses
An ellipse can be defined in terms of the distances from any point on the ellipse to two fixed positions, called the foci of the ellipse. The sum of these two distances is the same value for all points on the ellipse.
If the distances to the two focus positions from any point p=(x,y) on the
ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as
d1+d2=constant
The general ellipse equation in Cartesian coordinates is

Ax2 + By2 + Cxy + Dx + Ey + F = 0

The coefficients A, B, C, D, E, and F are evaluated in terms of the focal coordinates and the dimensions of the major and minor axes of the ellipse. In parametric polar form, using parameters rx and ry,

x = xc + rx cosθ
y = yc + ry sinθ

The angle θ, called the eccentric angle of the ellipse, is measured around the perimeter of a bounding circle.
The major axis is the straight line segment extending from one side of the
ellipse to the other through the foci. The minor axis spans the shorter dimension of
the ellipse, perpendicularly bisecting the major axis at the halfway position (ellipse
center) between the two foci.
We must calculate pixel positions along the elliptical arc throughout one quadrant; positions in the remaining three quadrants are then obtained by symmetry.
Ellipse equations are greatly simplified if the major and minor axes are oriented to align with the coordinate axes. With the major and minor axes oriented parallel to the x and y axes, parameter rx labels the semimajor axis and parameter ry labels the semiminor axis:
((x-xc)/rx)2+((y-yc)/ry)2=1
1. Start at position (0, ry) and step clockwise along the elliptical path in the first quadrant, shifting from unit steps in x to unit steps in y when the slope becomes less than −1.
2. Start at (rx, 0) and select points in a counterclockwise order:
   2.1 shifting from unit steps in y to unit steps in x when the slope becomes greater than −1.0;
   2.2 using parallel processors to calculate pixel positions in the two regions simultaneously.
3. Start at (0, ry) and step along the ellipse path in clockwise order throughout the first quadrant, using the ellipse function defined for center (xc, yc) = (0, 0):

fellipse(x,y) = ry2x2 + rx2y2 − rx2ry2
which has the following properties:
fellipse (x,y) <0, if (x,y) is inside the ellipse boundary
=0, if(x,y) is on ellipse boundary
>0, if(x,y) is outside the ellipse boundary
Thus, the ellipse function fellipse (x,y) serves as the decision parameter in
the midpoint algorithm.
The midpoint ellipse method is applied throughout the first quadrant in two parts. The figure below shows the division of the first quadrant according to the slope of an ellipse with rx < ry.
Starting at (0, ry), we take unit steps in the x direction until we reach the boundary between region 1 and region 2. We then switch to unit steps in the y direction over the remainder of the curve in the first quadrant.
At each step we test the value of the slope of the curve. The ellipse slope is calculated from the ellipse function as

dy/dx = −(2ry2x) / (2rx2y)

At the boundary between region 1 and region 2, dy/dx = −1.0 and 2ry2x = 2rx2y. We take unit steps in the x direction where the slope of the curve has a magnitude less than 1, and unit steps in the y direction where the slope has a magnitude greater than 1.
Starting at (x0, y0) = (0, ry), we take unit steps in the x direction. The figure shows the midpoint between the two candidate pixels at sampling position xk + 1 in the first region. Assuming position (xk, yk) has been selected in the previous step, we determine the next position along the ellipse path by evaluating the decision parameter at this midpoint:

p1k = fellipse(xk + 1, yk − 1/2)
    = ry2(xk + 1)2 + rx2(yk − 1/2)2 − rx2ry2

If p1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to the ellipse boundary. Otherwise, the midpoint is outside or on the ellipse boundary, and we select the pixel on scan line yk − 1.

At the next sampling position, the decision parameter is evaluated as

p1k+1 = p1k + 2ry2(xk + 1) + ry2 + rx2 [(yk+1 − 1/2)2 − (yk − 1/2)2]

where yk+1 is either yk or yk − 1, depending on the sign of p1k. Increments for the decision parameters can be calculated using only addition and subtraction, as in the circle algorithm, since the terms 2ry2x and 2rx2y can be obtained incrementally. At the initial position (0, ry), these two terms evaluate to

2ry2x = 0
2rx2y = 2rx2ry

As x and y are incremented, updated values are obtained by adding 2ry2 to the current value of the first increment term and subtracting 2rx2 from the current value of the second increment term. The initial decision parameter in region 1 is

p10 = fellipse(1, ry − 1/2)
    = ry2 − rx2ry + (1/4)rx2

We move out of region 1 when the condition 2ry2x >= 2rx2y is satisfied.
Over region 2, we sample at unit steps in the negative y direction, and the midpoint is now taken between horizontal pixels at each step. For this region, the decision parameter is evaluated as

p2k = fellipse(xk + 1/2, yk − 1)
    = ry2(xk + 1/2)2 + rx2(yk − 1)2 − rx2ry2

1. If p2k > 0, the midpoint position is outside the ellipse boundary, and we select the pixel at xk.
2. If p2k <= 0, the midpoint is inside the ellipse boundary, and we select pixel position xk + 1.

To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 − 1 = yk − 2:
p2k+1 = fellipse(xk+1 + 1/2, yk+1 − 1)
      = ry2(xk+1 + 1/2)2 + rx2(yk − 2)2 − rx2ry2

or

p2k+1 = p2k − 2rx2(yk − 1) + rx2 + ry2 [(xk+1 + 1/2)2 − (xk + 1/2)2]

with xk+1 set either to xk or to xk + 1, depending on the sign of p2k. The increment terms are updated as in region 1:

2ry2xk+1 = 2ry2xk + 2ry2
2rx2yk+1 = 2rx2yk − 2rx2

When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then

p20 = fellipse(x0 + 1/2, y0 − 1)
Midpoint Ellipse Algorithm

1. Input rx, ry, and ellipse center (xc, yc), and obtain the first point on an ellipse centered on the origin as
   (x0, y0) = (0, ry)
2. Calculate the initial value of the decision parameter in region 1 as
   p10 = ry2 − rx2ry + (1/4)rx2
3. At each xk position in region 1, starting at k = 0, perform the following test: if p1k < 0, the next point along the ellipse centered on (0, 0) is (xk + 1, yk) and
   p1k+1 = p1k + 2ry2xk+1 + ry2
   Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
   p1k+1 = p1k + 2ry2xk+1 − 2rx2yk+1 + ry2
   with 2ry2xk+1 = 2ry2xk + 2ry2 and 2rx2yk+1 = 2rx2yk − 2rx2. Continue until 2ry2x >= 2rx2y.
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
   p20 = ry2(x0 + 1/2)2 + rx2(y0 − 1)2 − rx2ry2
5. At each yk position in region 2, starting at k = 0, perform the following test: if p2k > 0, the next point along the ellipse is (xk, yk − 1) and
   p2k+1 = p2k − 2rx2yk+1 + rx2
   Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
   p2k+1 = p2k + 2ry2xk+1 − 2rx2yk+1 + rx2
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values:
   x = x + xc,  y = y + yc
8. Repeat the steps for region 1 until 2ry2x >= 2rx2y.
Example: Given input ellipse parameters rx = 8 and ry = 6, we determine raster positions along the ellipse path in the first quadrant. For region 1, the initial point for the ellipse centered on the origin is (x0, y0) = (0, 6), and the initial decision-parameter value is

p10 = ry2 − rx2ry + (1/4)rx2 = −332

Successive midpoint decision-parameter values and the pixel positions along the ellipse are:

k    p1k     (xk+1, yk+1)    2ry2xk+1    2rx2yk+1
0    −332    (1,6)           72          768
1    −224    (2,6)           144         768
2    −44     (3,6)           216         768
3    208     (4,5)           288         640
4    −108    (5,5)           360         640
5    288     (6,4)           432         512
6    244     (7,3)           504         384

We now move out of region 1, since 2ry2x > 2rx2y. For region 2, the initial point is taken as the last position selected in region 1, (x0, y0) = (7, 3). The remaining positions along the ellipse path in the first quadrant are then calculated as:

k    p2k     (xk+1, yk+1)    2ry2xk+1    2rx2yk+1
0    −151    (8,2)           576         256
1    233     (8,1)           576         128
2    745     (8,0)           −           −
/* Region 1 */
p = ROUND (Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
while (px < py)
{
x++;
px += twoRy2;
if (p < 0)
p += Ry2 + px;
else
{
y--;
py -= twoRx2;
p += Ry2 + px - py;
}
ellipsePlotPoints(xCenter, yCenter,x,y);
}
/* Region 2 */
p = ROUND (Ry2 * (x + 0.5) * (x + 0.5) + Rx2 * (y - 1) * (y - 1) - Rx2 * Ry2);
while (y > 0)
{
y--;
py -= twoRx2;
if (p > 0)
p += Rx2 - py;
else
{
x++;
px += twoRy2;
p += Rx2 - py + px;
}
ellipsePlotPoints (xCenter, yCenter, x, y);
}
}
void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
setpixel (xCenter + x, yCenter + y);
setpixel (xCenter - x, yCenter + y);
setpixel (xCenter + x, yCenter - y);
setpixel (xCenter - x, yCenter - y);
}
Attributes of Output Primitives

Line Attributes
Curve Attributes
Color and Grayscale Levels
Area Fill Attributes
Character Attributes
Bundled Attributes
Line Attributes
Basic attributes of a straight line segment are its type, its width, and its color. In
some graphics packages, lines can also be displayed using selected pen or brush
options
Line Type
Line Width
Pen and Brush Options
Line Color

Line type

Possible selections for the line-type attribute include solid lines, dashed lines, and dotted lines. To set line-type attributes in a PHIGS application program, a user invokes the function

setLinetype (lt)
Line width

Implementation of line-width options depends on the capabilities of the output device. We set the line-width attribute with the function

setLinewidthScaleFactor (lw)
Line width parameter lw is assigned a positive number to indicate the relative width
of line to be displayed. A value of 1 specifies a standard width line. A user could set
lw to a value of 0.5 to plot a line whose width is half that of the standard line.
Values greater than 1 produce lines thicker than the standard.
Line Cap

We can adjust the shape of the line ends to give them a better appearance by adding line caps. One kind of line cap is the butt cap, which displays a thick line with square ends perpendicular to the line path. A round cap is obtained by adding a filled semicircle to each butt cap; the circular arcs are centered on the line endpoints and have a diameter equal to the line thickness. A projecting square cap extends the line and adds butt caps that are positioned one-half of the line width beyond the specified endpoints.

Three methods for smoothly joining two line segments:
1. A miter join is accomplished by extending the outer boundaries of each of the two lines until they meet.
2. A round join is produced by capping the connection between the two segments with a circular boundary whose diameter is equal to the line width.
3. A bevel join is generated by displaying the line segments with butt caps and filling in the triangular gap where the segments meet.
A polyline routine displays a line in the current color by setting this color value in the frame buffer at pixel locations along the line path using the setpixel procedure. We set the line color value in PHIGS with the function

setPolylineColourIndex (lc)

Nonnegative integer values, corresponding to allowed color choices, are assigned to the line color parameter lc.
Example: The use of various line attribute commands in an application program is given by the following sequence of statements:

setLinetype (2);
setLinewidthScaleFactor (2);
setPolylineColourIndex (5);
polyline (n1, wcPoints1);
setPolylineColourIndex (6);
polyline (n2, wcPoints2);
This program segment would display two figures, drawn with double-wide dashed
lines. The first is displayed in a color corresponding to code 5, and the second in
color 6.
Curve attributes

Parameters for curve attributes are the same as those for line segments. Curves can be displayed with varying colors, widths, dot-dash patterns, and available pen or brush options.
Color and Grayscale Levels
With the direct storage scheme, whenever a particular color code is specified in an application program, the corresponding binary value is placed in the frame buffer for each component pixel in the output primitives to be displayed in that color. A minimum number of colors can be provided in this scheme with 3 bits of storage per pixel, as shown in the table. Alternatively, we can put the color codes in a separate table and use pixel values as an index into this table.
A user can set color-table entries in a PHIGS applications program with the
function
setColourRepresentation (ws, ci, colorptr)
Color tables (color lookup tables) are an alternate means for providing extended color capabilities to a user without requiring large frame buffers.
Grayscale
With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed primitives.
Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
On color systems, a gray level can be calculated from the RGB components as

intensity = 0.5 [min(r,g,b) + max(r,g,b)]
Area Fill Attributes

Areas are displayed with three basic fill styles: hollow with a color border, filled with a solid color, or filled with a specified pattern or design. A basic fill style is selected in a PHIGS program with the function

setInteriorStyle (fs)
Values for the fill-style parameter fs include hollow, solid, and pattern. Another value for fill style is hatch, which is used to fill an area with selected hatching patterns: parallel lines or crossed lines.
The color for a solid interior or for a hollow area outline is chosen with

setInteriorColourIndex (fc)

where the fill color parameter fc is set to the desired color code.
Pattern Fill
We select fill patterns with setInteriorStyleIndex (pi) where pattern index
parameter pi specifies a table position
For example, the following set of statements would fill the area defined in the fillArea command with the second pattern type stored in the pattern table:

setInteriorStyle (pattern);
setInteriorStyleIndex (2);
fillArea (n, points);
Character Attributes
Text Attributes

Control of text color (or intensity) is managed from an application program with

setTextColourIndex (tc)

where the text color parameter tc specifies an allowable color code. The character height is set with

setCharacterHeight (ch)

where parameter ch is assigned a real value greater than 0 to set the coordinate height of capital letters. The character width is set with

setCharacterExpansionFactor (cw)

where the character width parameter cw is set to a positive real value that scales the body width of characters.
The choice of font (or typeface) is a set of characters with a particular design style, such as Courier, Helvetica, Times Roman, and various symbol groups. The characters in a selected font can also be displayed with assorted underlining styles (solid, dotted, double), in boldface, in italics, and in outline or shadow styles. A particular font and associated style is selected in a PHIGS program by setting an integer code for the text font parameter tf in the function

setTextFont (tf)

Spacing between characters is controlled separately with

setCharacterSpacing (cs)
The orientation for a displayed character string is set according to the direction of the character up vector:

setCharacterUpVector (upvect)

Parameter upvect in this function is assigned two values that specify the x and y vector components. For example, with upvect = (1, 1), the direction of the up vector is 45° and text would be displayed as shown in the figure.

Another handy attribute for character strings is alignment. This attribute specifies how text is to be positioned with respect to the start coordinates. Alignment attributes are set with
setTextAlignment (h,v)
The precision with which character strings are displayed is given with

setTextPrecision (tpr)

Text strings can be displayed in various directions with

setTextPath (tp)

where the text path parameter tp can be assigned the value right, left, up, or down.

Marker Attributes

A marker symbol is a single character that can be displayed in different colors and in different sizes. Marker attributes are implemented by procedures that load the chosen character into the raster at the defined positions with the specified color and size. We select a particular character to be the marker symbol with
setMarkerType (mt)

where marker type parameter mt is set to an integer code. Typical codes for marker type are the integers 1 through 5, specifying, respectively, a dot (.), a vertical cross (+), an asterisk (*), a circle (o), and a diagonal cross (X).
setMarkerSizeScaleFactor(ms)
with parameter marker size ms assigned a positive number. This scaling parameter is applied to the nominal size for the particular marker symbol chosen. Values greater than 1 produce marker enlargement; values less than 1 reduce the marker size.
setPolymarkerColourIndex (mc)

A selected color code for parameter mc is stored in the current attribute list and used to display subsequently specified marker primitives.
Bundled Attributes

The procedures considered so far each reference a single attribute that specifies exactly how a primitive is to be displayed; these specifications are called individual attributes. A particular set of attribute values for a primitive on each output device is instead chosen by specifying the appropriate table index. Attributes specified in this manner are called bundled attributes. The choice between a bundled or an unbundled specification is made by setting a switch called the aspect source flag for each of these attributes.

Entries in the bundle table for line attributes on a specified workstation are set with the function

setPolylineRepresentation (ws, li, lt, lw, lc)

Parameter ws is the workstation identifier, and line index parameter li defines the bundle table position. Parameters lt, lw, and lc are then bundled and assigned values to set the line type, line width, and line color specifications for the designated table index.

Example:

setPolylineRepresentation (1, 3, 2, 0.5, 1);
setPolylineRepresentation (4, 3, 1, 1, 7);
Inquiry functions

Current settings for attributes and other parameters, such as workstation types and status, in the system lists can be retrieved with inquiry functions. For example,

inquirePolylineIndex (lastli)
inquireInteriorColourIndex (lastfc)

copy the current values for line index and fill color into parameters lastli and lastfc.

Two-Dimensional Geometric Transformations

The basic transformations are:

Translation    T(tx, ty)    translation distances
Scale          S(sx, sy)    scale factors
Rotation       R(θ)         rotation angle

Translation

A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another.
.
e
b
u
t
e
s
c
/
:/
tp
We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx
y' = y + ty

The translation distance pair (tx, ty) is called the translation vector or shift vector. The translation equations can be expressed as a single matrix equation by using column vectors to represent coordinate positions and the translation vector:

P = (x, y),   T = (tx, ty),   P' = P + T

Figure: Moving a polygon from one position to another with the translation vector (−5.5, 3.75)
Rotations

A rotation repositions a point from position (x, y) to position (x', y') through an angle θ relative to the coordinate origin. The transformation equations for rotation of a point position P when the pivot point is at the coordinate origin are derived as follows. In the figure, r is the constant distance of the point from the origin, φ is the original angular position of the point from the horizontal, and θ is the rotation angle. In polar form,

x = r cosφ,   y = r sinφ

and the transformation equations for rotating the point through an angle θ about the origin are

x' = x cosθ − y sinθ
y' = x sinθ + y cosθ

Positive values for the rotation angle define counterclockwise rotations about the pivot point; negative values rotate objects in the clockwise direction. The transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point.

In matrix form, the rotation equation is

P' = R . P
Rotation Matrix

R = | cosθ   −sinθ |
    | sinθ    cosθ |

so that

| x' |   | cosθ   −sinθ | | x |
| y' | = | sinθ    cosθ | | y |

Note: Positive values for the rotation angle define counterclockwise rotations about the rotation point, and negative values rotate objects in the clockwise direction.
Scaling
A scaling transformation alters the size of an object. This operation can be carried
out for polygons by multiplying the coordinate values (x,y) to each vertex by
scaling factor Sx & Sy to produce the transformed coordinates (x,y)
Uniform scaling
Non Uniform Scaling
x= x.Sx
y = y.Sy
s
c
/
:/
x'
y'
sx
0
0
sy
x
y
u
t
e
p
t
t
or
To get uniform scaling it is necessary to assign same value for sx and sy. Unequal
values for sx and sy result in a non uniform scaling.
Matrix Representation and Homogeneous Coordinates

Many graphics applications involve sequences of geometric transformations. An animation, for example, might require an object to be translated and rotated at each increment of the motion. In order to combine a sequence of transformations we have to eliminate the matrix addition. To achieve this we represent each transformation matrix as 3 x 3 instead of 2 x 2, introducing an additional dummy coordinate h. Here points are specified by three numbers instead of two. This coordinate system is called the homogeneous coordinate system, and it allows us to express all transformation equations as matrix multiplications.

A Cartesian coordinate position (x, y) is represented by the homogeneous coordinate triple (x, y, h).
For Translation:

x'     1  0  tx     x
y'  =  0  1  ty  .  y
1      0  0  1      1

or

P' = T(tx, ty) . P

where P and P' are represented as homogeneous-coordinate column vectors.

For Scaling:

x'     sx  0   0     x
y'  =  0   sy  0  .  y
1      0   0   1     1

or

P' = S(sx, sy) . P

Two successive translations applied to point P give

P' = {T(tx2, ty2) . T(tx1, ty1)} . P

1  0  tx2     1  0  tx1     1  0  tx1 + tx2
0  1  ty2  .  0  1  ty1  =  0  1  ty1 + ty2
0  0  1       0  0  1       0  0  1

that is,

T(tx2, ty2) . T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2)

which demonstrates that two successive translations are additive.
For Rotation:

x'     cos θ   -sin θ  0     x
y'  =  sin θ    cos θ  0  .  y
1      0        0      1     1

or

P' = R(θ) . P
Composite Transformations

Rotations: Two successive rotations applied to point P produce the transformed position

P' = R(θ2) . {R(θ1) . P} = {R(θ2) . R(θ1)} . P

By multiplying the two rotation matrices, we can verify that two successive rotations are additive:

R(θ2) . R(θ1) = R(θ1 + θ2)

so that the final rotated coordinates can be calculated with the composite rotation matrix as

P' = R(θ1 + θ2) . P
cos θ2   -sin θ2  0     cos θ1   -sin θ1  0     cos(θ1 + θ2)   -sin(θ1 + θ2)  0
sin θ2    cos θ2  0  .  sin θ1    cos θ1  0  =  sin(θ1 + θ2)    cos(θ1 + θ2)  0
0         0       1     0         0       1     0                0             1
Scaling

Concatenating transformation matrices for two successive scaling operations produces the following composite scaling matrix:

sx2  0    0     sx1  0    0     sx1 . sx2   0           0
0    sy2  0  .  0    sy1  0  =  0           sy1 . sy2   0
0    0    1     0    0    1     0           0           1

The resulting matrix indicates that successive scaling operations are multiplicative.
The composite transformation matrix for this sequence is obtained with the concatenation
Concatenating the matrices for these three operations produces the required composite transformation matrix:

x'     1  0  tx     cos θ   -sin θ  0     sx  0   0     x
y'  =  0  1  ty  .  sin θ    cos θ  0  .  0   sy  0  .  y
w'     0  0  1      0        0      1     0   0   1     w
matrix3x3SetIdentity (m);
a = pToRadians (a);
m[0][0] = cosf (a);
m[0][1] = -sinf (a);
m[0][2] = refPt.x * (1 - cosf (a)) + refPt.y * sinf (a);
m[1][0] = sinf (a);
m[1][1] = cosf (a);
m[1][2] = refPt.y * (1 - cosf (a)) - refPt.x * sinf (a);
matrix3x3PreMultiply (m, theMatrix);
}
Other Transformations
1. Reflection
2. Shear
Reflection

A reflection is a transformation that produces a mirror image of an object. The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis. We can choose an axis of reflection in the xy plane, an axis perpendicular to the xy plane, or the coordinate origin.
Reflection about the x axis (the line y = 0) is accomplished with the transformation matrix

1   0   0
0  -1   0
0   0   1
Reflection about the y axis (the line x = 0) is accomplished with the transformation matrix

-1   0   0
 0   1   0
 0   0   1

Reflection relative to the coordinate origin is accomplished with the transformation matrix

-1   0   0
 0  -1   0
 0   0   1
Reflection about the diagonal line y = x is accomplished with the transformation matrix

0   1   0
1   0   0
0   0   1

Reflection about the diagonal line y = -x is accomplished with the transformation matrix

 0  -1   0
-1   0   0
 0   0   1

Shear
X-Shear

An x-direction shear preserves the y coordinates but changes the x values, which causes vertical lines to tilt right or left. It is produced with the transformation matrix

1   shx  0
0   1    0
0   0    1

which transforms coordinate positions as

x' = x + shx . y
y' = y

Y-Shear

A y-direction shear preserves the x coordinates but changes the y values, which causes horizontal lines to slope up or down. It is produced with the transformation matrix

1    0   0
shy  1   0
0    0   1

which transforms coordinate positions as

x' = x
y' = y + shy . x

We can apply x-shear and y-shear transformations relative to other reference lines: in x-shear transformations we can use a y reference line, and in y-shear transformations an x reference line.

An x-direction shear relative to the reference line y = yref is produced with the transformation matrix

1   shx  -shx . yref
0   1     0
0   0     1

which transforms coordinate positions as

x' = x + shx (y - yref)
y' = y

(In the figure example, the shear is taken relative to the reference line yref = -1.)

We can generate y-direction shears relative to the reference line x = xref with the transformation matrix

1    0    0
shy  1   -shy . xref
0    0    1

which transforms coordinate positions as

x' = x
y' = shy (x - xref) + y

(In the figure example, the shear is taken relative to the reference line xref = -1.)
The viewing- coordinate reference frame is used to provide a method for setting up
arbitrary orientations for rectangular windows. Once the viewing reference frame is
established, we can transform descriptions in world coordinates to viewing
coordinates.
We then define a viewport in normalized coordinates (in the range from 0 to 1) and
map the viewing-coordinate description of the scene to normalized coordinates.
At the final step all parts of the picture that lie outside the viewport are clipped, and
the contents of the viewport are transferred to device coordinates. By changing the
position of the viewport, we can view objects at different positions on the display
area of an output device.
A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport. To maintain the same relative placement in the viewport as in the window, we require

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)

(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

Solving these expressions for the viewport position (xv, yv):

xv = xvmin + (xw - xwmin) sx
yv = yvmin + (yw - ywmin) sy

where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
ywsVPortmin, ywsVPortmax)

where parameter ws gives the workstation number. Window-coordinate extents are specified in the range from 0 to 1, and viewport limits are in integer device coordinates.

where x0, y0 are the coordinates of the viewing origin and parameters xv, yv are the world-coordinate positions for the view up vector. An integer error code is generated if the input parameters are in error; otherwise the view matrix for the world-to-viewing transformation is calculated. Any number of viewing transformation matrices can be defined in an application.
Clipping operation
Any procedure that identifies those portions of a picture that are inside or outside of
a specified region of space is referred to as clipping algorithm or clipping. The
region against which an object is to be clipped is called clip window.
Point clipping
Line clipping (Straight-line segment)
Area clipping
Curve clipping
Text clipping
Line and polygon clipping routines are standard components of graphics packages.
setViewRepresentation(ws,viewIndex,viewMatrix,viewMappingMatrix,
xclipmin, xclipmax, yclipmin, yclipmax,
clipxy)
Point Clipping

Assuming that the clip window is a rectangle in standard position, a point P = (x, y) is saved for display if the following inequalities are satisfied:

xwmin <= x <= xwmax
ywmin <= y <= ywmax

If any one of these four inequalities is not satisfied, the point is clipped.
Where parameter ws designates the output device and parameter view index sets an
integer identifier for this window-view port point. The matrices viewMatrix and
viewMappingMatrix can be concatenated and referenced by viewIndex.
setViewIndex(viewIndex)
Line Clipping

A line-clipping procedure involves several parts. First, we test a given line segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify a line as completely inside or completely outside, we perform intersection calculations with one or more clipping boundaries.
We process lines through inside-outside tests by checking the line endpoints. A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries, such as line P3P4, is outside the window.

Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle.
All other lines cross one or more clipping boundaries. For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the parametric representation

x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1),   0 <= u <= 1

can be used to determine values of parameter u for intersections with the clipping boundaries.
Binary region codes are assigned to line endpoints according to their relative position with respect to the clipping rectangle.

Regions are set up in reference to the boundaries. Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top or bottom. Numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions correspond to bit positions as

bit 1: left
bit 2: right
bit 3: below
bit 4: above

A value of 1 in any bit position indicates that the point is in that relative position; otherwise the bit position is set to 0. If a point is within the clipping rectangle, the region code is 0000. A point that is below and to the left of the rectangle has a region code of 0101.
The region-code bit values can be determined as follows: (1) calculate differences between endpoint coordinates and clipping boundaries; (2) use the resultant sign bit of each difference calculation to set the corresponding value in the region code.
Once we have established region codes for all line endpoints, we can quickly determine which lines are completely inside the clip window and which are clearly outside. Any lines that are completely contained within the window boundaries have a region code of 0000 for both endpoints, and we accept these lines. Any lines that have a 1 in the same bit position in the region codes for each endpoint are completely outside the clipping rectangle, and we reject these lines.

For example, we would discard a line that has a region code of 1001 for one endpoint and a code of 0101 for the other endpoint: both endpoints of this line are left of the clipping rectangle, as indicated by the 1 in the first bit position of each region code. A method that can be used to test lines for total clipping is to perform the logical and operation with both region codes: if the result is not 0000, the line is completely outside the clipping region.

Lines that extend from one coordinate region to another may pass through the clip window, or they may intersect clipping boundaries without entering the window, so intersection calculations with the clipping boundaries are required.

Starting with the bottom endpoint of the line from P1 to P2, we check P1 against the left, right, and bottom boundaries in turn and find that this point is below the clipping rectangle. We then find the intersection point P1' with the bottom boundary and discard the line section from P1 to P1'. The line has now been reduced to the section from P1' to P2. Since P2 is outside the clip window, we check this endpoint against the boundaries and find that it is to the left of the window. Intersection point P2' is calculated, but this point is above the window. So the final intersection calculation yields P2'', and the line from P1' to P2'' is saved. This completes processing for this line, so we save this part and go on to the next line.
The intersection with a horizontal boundary, with y set to ywmin or ywmax, is

x = x1 + (y - y1) / m

and the intersection with a vertical boundary, with x set to xwmin or xwmax, is

y = y1 + m (x - x1)

#define LEFT_EDGE   0x1
#define RIGHT_EDGE  0x2
#define BOTTOM_EDGE 0x4
#define TOP_EDGE    0x8
#define TRUE  1
#define FALSE 0
#define INSIDE(a)   (!(a))
#define REJECT(a,b) ((a) & (b))
#define ACCEPT(a,b) (!((a) | (b)))
unsigned char encode(wcPt2 pt, dcPt winmin, dcPt winmax)
{
unsigned char code=0x00;
if(pt.x<winmin.x)
code=code|LEFT_EDGE;
if(pt.x>winmax.x)
code=code|RIGHT_EDGE;
if(pt.y<winmin.y)
code=code|BOTTOM_EDGE;
if(pt.y>winmax.y)
code=code|TOP_EDGE;
return(code);
}
void swappts(wcPt2 *p1,wcPt2 *p2)
{
wcPt2 tmp;
tmp=*p1;
*p1=*p2;
*p2=tmp;
}
void swapcodes(unsigned char *c1,unsigned char *c2)
{
unsigned char tmp;
tmp=*c1;
*c1=*c2;
*c2=tmp;
}
void clipline(dcPt winmin, dcPt winmax, wcPt2 p1, wcPt2 p2)
{
unsigned char code1,code2;
int done=FALSE, draw=FALSE;
float m;
while(!done)
{
code1=encode(p1,winmin,winmax);
code2=encode(p2,winmin,winmax);
if(ACCEPT(code1,code2))
{
done=TRUE;
draw=TRUE;
}
else if(REJECT(code1,code2))
done=TRUE;
else
{
if(INSIDE(code1))
{
swappts(&p1,&p2);
swapcodes(&code1,&code2);
}
if(p2.x!=p1.x)
m=(p2.y-p1.y)/(p2.x-p1.x);
if(code1 &LEFT_EDGE)
{
p1.y+=(winmin.x-p1.x)*m;
p1.x=winmin.x;
}
else if(code1 &RIGHT_EDGE)
{
p1.y+=(winmax.x-p1.x)*m;
p1.x=winmax.x;
}
else if(code1 &BOTTOM_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmin.y-p1.y)/m;
p1.y=winmin.y;
}
else if(code1 &TOP_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmax.y-p1.y)/m;
p1.y=winmax.y;
}
}
}
if(draw)
lineDDA(ROUND(p1.x),ROUND(p1.y),ROUND(p2.x),ROUND(p2.y));
}
Liang-Barsky Line Clipping

In the Liang-Barsky approach, the line is written in parametric form:

x = x1 + u Δx
y = y1 + u Δy,   0 <= u <= 1

where Δx = x2 - x1 and Δy = y2 - y1. A point on the line is inside the clip window if the four inequalities

u . pk <= qk,   k = 1, 2, 3, 4

are satisfied, where

p1 = -Δx,   q1 = x1 - xwmin
p2 =  Δx,   q2 = xwmax - x1
p3 = -Δy,   q3 = y1 - ywmin
p4 =  Δy,   q4 = ywmax - y1
Any line that is parallel to one of the clipping boundaries has pk = 0 for the value of k corresponding to that boundary (k = 1, 2, 3, 4 correspond to the left, right, bottom and top boundaries, respectively). If, for that value of k, we also find qk < 0, the line is completely outside the boundary.
For each line, calculate values for the parameters u1 and u2 that define the part of the line that lies within the clip rectangle. The value of u1 is determined by looking at the rectangle edges for which the line proceeds from the outside to the inside (pk < 0). For these edges we calculate

rk = qk / pk

The value of u1 is taken as the largest of the set consisting of 0 and the various values of r. Conversely, the value of u2 is determined by examining the boundaries for which the line proceeds from inside to outside (pk > 0); u2 is taken as the smallest of the set consisting of 1 and the calculated r values.
If u1>u2, the line is completely outside the clip window and it can be
rejected.
Line intersection parameters are initialized to the values u1 = 0 and u2 = 1. For each clipping boundary, the appropriate values for p and q are calculated and used by the function clipTest to determine whether the line can be rejected or whether the intersection parameters are to be adjusted.
If updating u1 or u2 results in u1 > u2, reject the line. When p = 0 and q < 0, discard the line: it is parallel to and outside the boundary. If the line has not been rejected after all four values of p and q have been tested, the endpoints of the clipped line are determined from the values of u1 and u2.

The Liang-Barsky algorithm is more efficient than the Cohen-Sutherland algorithm, since intersection calculations are reduced. Each update of parameters u1 and u2 requires only one division, and window intersections of the line are computed only once.

The Cohen-Sutherland algorithm can repeatedly calculate intersections along a line path even though the line may be completely outside the clip window, and each intersection calculation requires both a division and a multiplication.
void clipLine (dcPt winMin, dcPt winMax, wcPt2 p1, wcpt2 p2)
{
float u1=0.0, u2=1.0, dx=p2.x-p1.x,dy;
if (clipTest (-dx, p1.x-winMin.x, &u1, &u2))
if (clipTest (dx, winMax.x-p1.x, &u1, &u2))
{
dy=p2.y-p1.y;
if (clipTest (-dy, p1.y-winMin.y, &u1, &u2))
if (clipTest (dy, winMax.y-p1.y, &u1, &u2))
{
if (u2<1.0)
{
p2.x=p1.x+u2*dx;
p2.y=p1.y+u2*dy;
}
if (u1>0.0)
{
p1.x=p1.x+u1*dx;
p1.y=p1.y+u1*dy;
}
lineDDA(ROUND(p1.x),ROUND(p1.y),ROUND(p2.x),ROUND(p2.y));
}
}
}
Nicholl-Lee-Nicholl Line clipping
By creating more regions around the clip window, the Nicholl-Lee-Nicholl (or
NLN) algorithm avoids multiple clipping of an individual line segment. In the
Cohen-Sutherland method, multiple intersections may be calculated. These extra
intersection calculations are eliminated in the NLN algorithm by carrying out more
region testing before intersection positions are calculated.
If both P1 and P2 are inside the clipping rectangle, we simply save the entire line. If P1 is inside the clipping rectangle and P2 is outside, the intersection with the appropriate window boundary is carried out, depending on which one of the four regions (L, T, R, or B) contains P2.

If P1 is in the region to the left of the window, we set up the four regions, L, LT, LR, and LB, shown in Fig.

These four regions determine a unique boundary for the line segment. For instance, if P2 is in region L, we clip the line at the left boundary and save the line segment from this intersection point to P2. But if P2 is in region LT, we save the line segment from the left window boundary to the top boundary. If P2 is not in any of the four regions, L, LT, LR, or LB, the entire line is clipped.
For the third case, when P1 is to the left and above the clip window, we use the clipping regions shown in Fig.

Fig : The two possible sets of clipping regions used in the NLN algorithm when P1 is above and to the left of the clip window
In this case, we have the two possibilities shown, depending on the position of P1 relative to the top left corner of the window. If P2 is in one of the regions T, L, TR, TB, LR, or LB, this determines a unique clip-window edge for the intersection calculations. Otherwise, the entire line is rejected.
To determine the region in which P2 is located, we compare the slope of the line to the slopes of the boundaries of the clip regions. For example, if P1 is left of the clipping rectangle (Fig. a), then P2 is in region LT if

slope P1PTR < slope P1P2 < slope P1PTL

or

(yT - y1) / (xR - x1)  <  (y2 - y1) / (x2 - x1)  <  (yT - y1) / (xL - x1)

Intersection positions on the window boundaries are then calculated from the parametric line equations

x = x1 + (x2 - x1) u
y = y1 + (y2 - y1) u

POLYGON CLIPPING

For polygon clipping, we require an algorithm that will generate one or more closed areas that are then scan converted for the appropriate area fill. The output of a polygon clipper should be a sequence of vertices that defines the clipped polygon boundaries.
1. If the first vertex is outside the window boundary and the second vertex is inside, both the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
3. If the first vertex is inside the window boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
4. If both input vertices are outside the window boundary, nothing is added to the output list.
Clipping a polygon against successive window boundaries.
Clipping a polygon against the left boundary of a window, starting with vertex
1. Primed numbers are used to label the points in the output vertex list for this
window boundary.
Processing the vertices of the polygon in the above fig. through a boundary clipping pipeline. After all vertices are processed through the pipeline, the vertex list is {v2', v2, v3, v3'}.
A point is added to the output vertex list only after it has been determined to be
inside or on a window boundary by all boundary clippers. Otherwise the point does
not continue in the pipeline.
break;
case Top:
ipt.y=wmax.y;
if(p1.x!=p2.x)
ipt.x=p2.x+(wmax.y-p2.y)/m;
else
ipt.x=p2.x;
break;
}
return(ipt);
}
void clippoint(wcPt2 p, Edge b, dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt, wcPt2 *first[], wcPt2 *s)
{
wcPt2 iPt;
if(!first[b])
first[b]=&p;
else
if(cross(p,s[b],b,wmin,wmax))
{
iPt=intersect(p,s[b],b,wmin,wmax);
if(b<Top)
clippoint(iPt,b+1,wmin,wmax,pout,cnt,first,s);
else
{
pout[*cnt]=iPt;
(*cnt)++;
}
}
s[b]=p;
if(inside(p,b,wmin,wmax))
if(b<Top)
clippoint(p,b+1,wmin,wmax,pout,cnt,first,s);
else
{
pout[*cnt]=p;
(*cnt)++;
}
}
currently being processed represents an outside-to-inside pair or an inside-to-outside pair. For clockwise processing of polygon vertices, we use the following rules:

For an outside-to-inside pair of vertices, follow the polygon boundary.

For an inside-to-outside pair of vertices, follow the window boundary in a clockwise direction.
In the below Fig. the processing direction in the Weiler-Atherton algorithm and
the resulting clipped polygon is shown for a rectangular clipping window.
Curve Clipping

Areas with curved boundaries can be clipped with methods that use bounding rectangles for preliminary overlap tests. The figure below illustrates circle clipping against a rectangular window. On the first pass, we can clip the bounding rectangle of the object against the bounding rectangle of the clip region. If the two regions overlap, we will need to solve the simultaneous line-curve equations to obtain the clipping intersection points. But if the bounding rectangle test fails, we can look for other computation-saving approaches: for a circle, we can use the coordinate extents of individual quadrants, and then octants, for preliminary testing before calculating curve-window intersections.
Text clipping
cs
There are several techniques that can be used to provide text clipping in a
graphics package. The clipping technique used will depend on the methods used to
generate characters and the requirements of a particular application.
Exterior clipping:
The appearance of the solid object can be reconstructed from the major
views.
This coordinate reference defines the position and orientation for the plane
of the camera film which is the plane we want to use to display a view of
the objects in the scene.
Object descriptions are then transferred to the camera reference
coordinates and projected onto the selected display plane.
The objects can be displayed in wire frame form, or we can apply lighting
and surface rendering techniques to shade the visible surfaces.
Parallel Projection:

Perspective Projection:

This makes objects farther from the viewing position be displayed smaller than objects of the same size that are nearer to the viewing position.
Depth Cueing:

Depth information is important to identify the viewing direction, that is, which is the front and which is the back of a displayed object. Depth cueing is a method for indicating depth with wire-frame displays by varying the intensity of objects according to their distance from the viewing position.

Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye.

The vibrations of the mirror are synchronized with the display of the scene on the CRT. As the mirror vibrates, the focal length varies so that each point in the scene is projected to a position corresponding to its depth.
The simplest way to identify the visible lines is to highlight them or to display them in a different color. Another method is to display the non-visible lines as dashed lines.
Surface Rendering:

Lighting conditions include the intensity and positions of light sources and the background illumination. Surface characteristics include the degree of transparency and how rough or smooth the surfaces are to be.

Exploded and Cutaway views:

Exploded and cutaway views of objects can be used to show the internal structure and relationships of the object parts.

Three Dimensional Graphics Packages:

World coordinate descriptions are extended to 3D, and users are provided with output and input routines accessed with specifications such as

Polyline3(n, WcPoints)
Fillarea3(n, WcPoints)
Text3(WcPoint, string)
Getlocator3(WcPoint)

where points and vectors are specified with 3 components and transformation matrices have 4 rows and 4 columns.
2.2 Three Dimensional Object Representations

Representation schemes for solid objects are divided into two categories as follows:

1. Boundary representations, which describe a 3D object as a set of surfaces that separate the object interior from the environment.
2. Space-partitioning representations, which describe interior properties by partitioning the spatial region containing the object into a set of small non-overlapping contiguous solids.

Polygon Surfaces

Polygon Tables

For each polygon input, the data are placed into tables that are to be used in the subsequent processing. Polygon data tables can be organized into two groups: geometric tables and attribute tables.

Geometric Tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.

Attribute Tables contain attribute information for an object, such as parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.

A convenient organization is to store the geometric data in three lists: a vertex table, an edge table, and a polygon-surface table. Coordinate values for each vertex in the object are stored in the vertex table, and the polygon-surface table contains pointers back into the edge table to identify the edges for each polygon.
Vertex Table         Edge Table       Polygon-Surface Table
V1 : x1, y1, z1      E1 : V1, V2      S1 : E1, E2, E3
V2 : x2, y2, z2      E2 : V2, V3      S2 : E3, E4, E5, E6
V3 : x3, y3, z3      E3 : V3, V1
V4 : x4, y4, z4      E4 : V3, V4
V5 : x5, y5, z5      E5 : V4, V5
                     E6 : V5, V1

Listing the geometric data in three tables provides a convenient reference to the individual components (vertices, edges and polygons) of each object. The object can be displayed efficiently by using data from the edge table to draw the component lines.

Extra information can be added to the data tables for faster information extraction. For instance, the edge table can be expanded to include forward pointers into the polygon table so that common edges between polygons can be identified more rapidly:

E1 : V1, V2, S1
E2 : V2, V3, S1
E6 : V5, V1, S2

This is useful for rendering procedures that must vary surface shading smoothly across the edges from one polygon to the next. Similarly, the vertex table can be expanded so that vertices are cross-referenced to corresponding edges.

As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y and z values for individual polygons. The more information included in the data tables, the easier it is to check for errors. Some of the tests that could be performed by a graphics package are:

1. that every vertex is listed as an endpoint for at least two edges,
2. that every edge is part of at least one polygon,
3. that every polygon is closed,
4. that each polygon has at least one shared edge, and
5. that, if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.

Plane Equations:

For these processes, we need information about the spatial orientation of the individual surface components of the object. This information is obtained from the vertex coordinate values and the equations that describe the polygon planes. The equation for a plane surface can be expressed in the form

Ax + By + Cz + D = 0    ----(1)

where (x, y, z) is any point on the plane, and the coefficients A, B, C and D are constants describing the spatial properties of the plane.
We can obtain the values of A, B, C and D by solving a set of three plane equations using the coordinate values for three noncollinear points in the plane. We can also identify a point as either inside or outside a plane surface according to the sign (negative or positive) of Ax + By + Cz + D.

For the coefficients, we can select three successive polygon vertices (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D and C/D:

(A/D)xk + (B/D)yk + (C/D)zk = -1,   k = 1, 2, 3    -----(2)

The solution for this set of equations can be obtained in determinant form, using Cramer's rule, as
     | 1  y1  z1 |        | x1  1  z1 |
A =  | 1  y2  z2 |   B =  | x2  1  z2 |
     | 1  y3  z3 |        | x3  1  z3 |

     | x1  y1  1 |          | x1  y1  z1 |
C =  | x2  y2  1 |   D = -  | x2  y2  z2 |    ------(3)
     | x3  y3  1 |          | x3  y3  z3 |

Expanding the determinants, we can write the calculations for the plane coefficients in the form:

A = y1 (z2 - z3) + y2 (z3 - z1) + y3 (z1 - z2)
B = z1 (x2 - x3) + z2 (x3 - x1) + z3 (x1 - x2)
C = x1 (y2 - y3) + x2 (y3 - y1) + x3 (y1 - y2)
D = -x1 (y2 z3 - y3 z2) - x2 (y3 z1 - y1 z3) - x3 (y1 z2 - y2 z1)    ------(4)

As vertex values and other information are entered into the polygon data structure, values for A, B, C and D are computed for each polygon and stored with the other polygon data.

Plane equations are used also to identify the position of spatial points relative to the plane surfaces of an object. For any point (x, y, z) not on a plane with parameters A, B, C, D, we have Ax + By + Cz + D ≠ 0.

Polygon Meshes

One type of polygon mesh is the triangle strip. A triangle strip formed with 11 triangles connects 13 vertices. This function produces n - 2 connected triangles given the coordinates for n vertices.
When functions are specified, a package can project the defining equations for a curve to the display plane and plot pixel positions along the path of the projected function. For surfaces, a functional description is often tessellated to produce a polygon-mesh approximation to the surface.
2.2.3 Quadric Surfaces
Sphere

In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation

x^2 + y^2 + z^2 = r^2    -------(1)

The spherical surface can also be described in parametric form, using latitude and longitude angles:

x = r cos φ cos θ,    -π/2 <= φ <= π/2
y = r cos φ sin θ,    -π <= θ <= π        -------(2)
z = r sin φ

The parametric representation in eqn (2) provides a symmetric range for the angular parameters θ and φ.

Ellipsoid

An ellipsoidal surface can be described as an extension of a spherical surface where the radii in three mutually perpendicular directions can have different values:

(x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1    -------(3)
Torus

The Cartesian representation for points over the surface of a torus can be written in the form

[ r - sqrt( (x/rx)^2 + (y/ry)^2 ) ]^2 + (z/rz)^2 = 1

where r is any given offset value. Parametric representations for the torus are

x = rx (r + cos φ) cos θ,    -π <= φ <= π
y = ry (r + cos φ) sin θ,    -π <= θ <= π
z = rz sin φ
Spline Representations

A spline is a flexible strip used to produce a smooth curve through a designated set of points. Several small weights are distributed along the length of the strip to hold it in position on the drafting table as the curve is drawn. The term spline curve now refers to any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundary of the pieces.

A spline curve is specified by giving a set of coordinate positions, called control points:

1. When polynomial sections are fitted so that the curve passes through each control point, the resulting curve is said to interpolate the set of control points.
2. When the polynomials are fitted to the general control-point path without necessarily passing through any control points, the resulting curve is said to approximate the set of control points.

Figure: A set of six control points approximated with piecewise continuous polynomial sections.
The convex polygon boundary that encloses a set of control points is called the convex hull. To envision the shape of the convex hull, imagine a rubber band stretched around the positions of the control points, so that each control point is either on the perimeter of the hull or inside it.
Convex hull shapes (dashed lines) for two sets of control points
Spline Specifications

There are three equivalent methods for specifying a particular spline representation:

1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions that determine how specified geometric constraints on the curve are combined to calculate positions along the curve path.

Figure: Three control points fitted with two curve sections joined with (a) parametric continuity.

To illustrate these three equivalent specifications, suppose we have the following parametric cubic polynomial representation for the x coordinate along the path of a spline section:

x(u) = ax u^3 + bx u^2 + cx u + dx,   0 <= u <= 1    -----(1)
From the boundary conditions we can obtain the matrix that characterizes this spline curve by first rewriting eq (1) as the matrix product

                          ax
x(u) = [u^3  u^2  u  1] . bx   = U . C    ------(2)
                          cx
                          dx

where U is the row matrix of powers of parameter u and C is the coefficient column matrix.

Using equation (2), we can write the boundary conditions in matrix form and solve for the coefficient matrix C as

C = Mspline . Mgeom    -----(3)

where Mgeom is a column matrix containing the geometric constraint values on the spline and Mspline is the matrix that transforms the geometric constraint values to the polynomial coefficients.

Visualization of Data Sets

Scientific visualization involves the visualization of data sets and processes that may be difficult or impossible to analyze without graphical methods; examples are data from medical scanners and from satellite and spacecraft scanners. Visualization techniques are also useful for analyzing processes that occur over a long period of time or that cannot be observed directly, for example quantum-mechanical phenomena and special-relativity effects produced by objects traveling near the speed of light.
Contour plots are used to display isolines (lines of constant scalar value) for a data set distributed over a surface. The isolines are spaced at some convenient interval to show the range and variation of the data values over the region of space. Contouring methods are applied to a set of data values that is distributed over a regular grid.
A 2D contouring algorithm traces the isolines from cell to cell
within the grid by checking the four corners of grid cells to determine
which cell edges are crossed by a particular isoline.
The path of an isoline across five grid cells
Sometimes isolines are plotted with spline curves, but spline fitting can lead to misinterpretation of the data sets. For example, two spline isolines could cross, or curved isoline paths might not be a true indicator of data trends, since data values are known only at the cell corners.
For 3D scalar data fields we can take cross sectional slices and display the
2D data distributions over the slices. Visualization packages provide a slicer
routine that allows cross sections to be taken at any angle.
Usually, physical tensor quantities are symmetric, so that the tensor has
only six distinct values. Visualization schemes for representing all six components
of a second-order tensor quantity are based on devising shapes that have six
parameters.
Field lines are commonly used for electric, magnetic and gravitational fields. The magnitude of the vector values is indicated by the spacing between field lines, and the direction is the tangent to the field.
2.4.1 Translation
In a three-dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
x'       1   0   0   tx      x
y'   =   0   1   0   ty   .  y        --------(1)
z'       0   0   1   tz      z
1        0   0   0   1       1

(or)   P' = T . P        ----------------(2)
The inverse of the translation matrix in equation (1) can be obtained by negating the translation distances tx, ty and tz.
Explicitly, the translation equations are

x' = x + tx
y' = y + ty        ---------(3)
z' = z + tz
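The translation equations can be sketched directly in code; the function name translatePoint3 is ours, not from the text:

```c
/* Translate the point (x, y, z) by the distances (tx, ty, tz):
   a direct implementation of P' = T . P. */
void translatePoint3(double x, double y, double z,
                     double tx, double ty, double tz,
                     double *xp, double *yp, double *zp)
{
    *xp = x + tx;
    *yp = y + ty;
    *zp = z + tz;
}
```

Calling the same routine with -tx, -ty, -tz applies the inverse translation.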
2.4.2 Rotation
For rotation about the z axis,

x' = x cosθ - y sinθ
y' = x sinθ + y cosθ        --------------------------(2)
z' = z

In homogeneous coordinate form,

x'       cosθ   -sinθ   0   0      x
y'   =   sinθ    cosθ   0   0   .  y        -------(3)
z'        0       0     1   0      z
1         0       0     0   1      1

(or)   P' = Rz(θ) . P        ------------------(4)

Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y and z in equation (2), i.e., we use the replacements

x → y → z → x        ---------(5)

Substituting these replacements in equation (2) gives the equations for rotation about the x axis:

y' = y cosθ - z sinθ
z' = y sinθ + z cosθ        ---------------(6)
x' = x

(or)   P' = Rx(θ) . P        -------(7)

with the rotation matrix

x'       1     0       0     0      x
y'   =   0   cosθ    -sinθ   0   .  y        -----------(8)
z'       0   sinθ     cosθ   0      z
1        0     0       0     1      1

Cyclically permuting coordinates in equation (6) gives the transformation equations for a y-axis rotation:

z' = z cosθ - x sinθ
x' = z sinθ + x cosθ        ---------------(9)
y' = y

with the rotation matrix

x'        cosθ   0   sinθ   0      x
y'   =     0     1     0    0   .  y        --------(10)
z'       -sinθ   0   cosθ   0      z
1          0     0     0    1      1

(or)   P' = Ry(θ) . P        ----------------( 11 )

Since only the sine function is affected by the change in sign of the rotation angle, the inverse matrix can also be obtained by interchanging rows and columns, i.e., we can calculate the inverse of any rotation matrix R by evaluating its transpose (R-1 = RT).

A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combinations of translations and the coordinate-axis rotations.
When an object is to be rotated about an axis that is parallel to one of the coordinate axes, we can attain the desired rotation with the following transformation sequence:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
When the rotation axis is not parallel to a coordinate axis, we need rotations to align the axis with a selected coordinate axis and to bring the axis back to its original orientation.
Given the specifications for the rotation axis and the rotation angle, we can
accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.
2.4.3 Scaling
x'       sx   0    0   0      x
y'   =   0    sy   0   0   .  y        --------(11)
z'       0    0    sz  0      z
1        0    0    0   1      1

(or)   P' = S . P        ---------(12)
where scaling parameters sx, sy, and sz are assigned any positive values.
Explicit expressions for the coordinate transformations for scaling relative
to the origin are
x' = x . sx
y' = y . sy        ----------(13)
z' = z . sz
Scaling an object changes the size of the object and repositions the object relative to the coordinate origin.
If the transformation parameters are not equal, relative dimensions in the
object are changed.
The original shape of the object is preserved with a uniform scaling (sx = sy = sz).
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin.
3. Translate the fixed point back to its original position.

The matrix representation for an arbitrary fixed-point scaling is then

x'       sx   0    0   (1-sx)xf      x
y'   =   0    sy   0   (1-sy)yf   .  y        -------------(14)
z'       0    0    sz  (1-sz)zf      z
1        0    0    0      1          1
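The fixed-point scaling matrix of equation (14) can be sketched as follows (the function name scaleFixedPoint3 is ours, not from the text); note that the fixed point itself is left unchanged:

```c
/* Scale the point (x, y, z) by (sx, sy, sz) relative to the fixed position
   (xf, yf, zf), as in the matrix of equation (14):
   x' = x.sx + (1 - sx).xf, and similarly for y and z. */
void scaleFixedPoint3(double x, double y, double z,
                      double sx, double sy, double sz,
                      double xf, double yf, double zf,
                      double *xp, double *yp, double *zp)
{
    *xp = x * sx + (1.0 - sx) * xf;
    *yp = y * sy + (1.0 - sy) * yf;
    *zp = z * sz + (1.0 - sz) * zf;
}
```

With (xf, yf, zf) = (0, 0, 0) this reduces to the origin scaling of equations (11)-(13).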
Reflections
A reflection relative to a given axis is equivalent to a 180° rotation about that axis.
Reflections are also used in three-dimensional viewing for obtaining general projection transformations.
When the reflection plane is a coordinate plane (either xy, xz or yz), the transformation can be viewed as a conversion between left-handed and right-handed coordinate systems.
An example is the reflection matrix relative to the xy plane:

          1   0   0   0
RFz  =    0   1   0   0
          0   0  -1   0
          0   0   0   1

This matrix changes the sign of the z coordinates, leaving the x and y values unchanged.

Shears
Shearing transformations can be used to modify object shapes. An example is the z-axis shear matrix

          1   0   a   0
SHz  =    0   1   b   0
          0   0   1   0
          0   0   0   1

which alters x and y coordinate values by an amount proportional to the z value, while leaving the z coordinate unchanged. Parameters a and b can be assigned any real values.
Three-Dimensional Transformation Functions

Individual rotations about the coordinate axes are obtained with the functions

rotateX(thetaX, xMatrixRotate)
rotateY(thetaY, yMatrixRotate)
rotateZ(thetaZ, zMatrixRotate)

Composite transformations can be constructed with the functions composeMatrix3, buildTransformationMatrix3 and composeTransformationMatrix3.

The order of the transformation sequence for the buildTransformationMatrix3 and composeTransformationMatrix3 functions is the same as in two dimensions:
1. scale
2. rotate
3. translate

Once a transformation matrix is specified, it can be combined with the current transformation matrix using the options Preconcatenate, Postconcatenate, or Replace.
2.4.7 Modeling and Coordinate Transformations
which transforms unit vectors ux, uy and uz onto the x, y and z axes
respectively.
For instance, tables, chairs and other furniture, each defined in a local
coordinate system, can be placed into the description of a room defined in
another reference frame, by transforming the furniture coordinates to room
coordinates. Then the room might be transformed into a larger scene constructed in world coordinates.
Three dimensional objects and scenes are constructed using structure
operations.
2.5 Three-Dimensional Viewing
        ux1   ux2   ux3
R  =    uy1   uy2   uy3
        uz1   uz2   uz3
In three-dimensional viewing, we can view an object from any spatial position: from the front, from above, or from the back.
2.5.1 Viewing Pipeline
2.5.2 Viewing Coordinates
When we snap the shutter, the scene is cropped to the size of the window of the camera, and light from the visible surfaces is projected onto the camera film.
In the same way, the figure below shows the three-dimensional transformation pipeline, from modeling coordinates to final device coordinates.
Figure: Modeling Co-ordinates → (Modeling Transformation) → World Co-ordinates → (Viewing Transformation) → Viewing Co-ordinates → (Projection Transformation) → Projection Co-ordinates → (Workstation Transformation) → Device Co-ordinates
Before object descriptions can be projected to the view plane, they must be transferred to viewing coordinates. This transformation sequence is:
1. Translate the view reference point to the origin of the world coordinate system.
2. Apply rotations to align the xv, yv and zv axes with the world xw, yw and zw axes, respectively.
The translation is accomplished with the matrix

        1   0   0   -x0
T  =    0   1   0   -y0
        0   0   1   -z0
        0   0   0    1

and the rotation with the matrix

        u1   u2   u3   0
R  =    v1   v2   v3   0
        n1   n2   n3   0
        0    0    0    1

which transforms u onto the world xw axis, v onto the yw axis and n onto the zw axis.
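The world-to-viewing transformation (translate the view reference point to the origin, then project onto the viewing unit vectors u, v, n) can be sketched as follows; worldToViewing and its parameter layout are our own illustration, not from the text, and u, v, n are assumed to be orthonormal:

```c
/* Transform a world-coordinate point into viewing coordinates.
   First the translation T by the view reference point (x0, y0, z0),
   then the rotation R whose rows are the viewing unit vectors u, v, n. */
void worldToViewing(const double u[3], const double v[3], const double n[3],
                    double x0, double y0, double z0,
                    double x, double y, double z,
                    double *xv, double *yv, double *zv)
{
    double dx = x - x0, dy = y - y0, dz = z - z0;   /* T step */
    *xv = u[0]*dx + u[1]*dy + u[2]*dz;              /* R step: row u */
    *yv = v[0]*dx + v[1]*dy + v[2]*dz;              /* row v */
    *zv = n[0]*dx + n[1]*dy + n[2]*dz;              /* row n */
}
```

The view reference point itself maps to the viewing-coordinate origin, as expected.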
Projections
Once world coordinate descriptions of the objects are converted to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane.
There are two basic types of projection:
1. Parallel Projection
2. Perspective Projection

In a parallel projection, coordinate positions are transformed to the view plane along parallel lines. In a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point.
Parallel projections are specified with a projection vector that defines the
direction for the projection lines.
When the projection is perpendicular to the view plane, it is said to be an Orthographic parallel projection; otherwise it is said to be an Oblique parallel projection.
Orientation of the projection vector Vp to produce an
orthographic projection (a) and an oblique projection (b)
Orthographic Projection
Orthographic projections are used to produce the front, side and top views
of an object.
Transformation equations for an orthographic parallel projection are straightforward:

xp = x,   yp = y

where the original z coordinate value is kept for the depth information needed in depth cueing and visible-surface determination procedures.
Oblique Projection
An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. In the figure below, α and φ are the two angles that specify the projection direction. (The orthographic projection that displays more than one face of an object is called an axonometric orthographic projection.)
xp = x + L cosφ
yp = y + L sinφ        - - - -(1)

The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y). This line, of length L, is at an angle φ with the horizontal direction in the projection plane.

tanα = z / L

thus,

L = z / tanα = z L1

where L1 is the inverse of tanα, which is also the value of L when z = 1. Substituting z L1 for L gives

xp = x + z(L1 cosφ)
yp = y + z(L1 sinφ)

The transformation matrix for producing any parallel projection onto the xv yv plane is

              1   0   L1 cosφ   0
Mparallel =   0   1   L1 sinφ   0
              0   0      1      0
              0   0      0      1

An orthographic projection is obtained when L1 = 0; oblique projections are generated with nonzero values for L1.

Perspective Projections

If the projection reference point is set at position zprp along the zv axis and the view plane is placed at zvp, we can write equations describing coordinate positions along a perspective projection line in parametric form as

x' = x - xu
y' = y - yu
z' = z - (z - zprp)u
Parameter u takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line. At u = 0, we are at the point P = (x, y, z). At the other end of the line, u = 1, and we have the projection reference point coordinates (0, 0, zprp).

On the view plane, z' = zvp, and we can solve the z' equation for parameter u at this position along the projection line:

u = (zvp - z) / (zprp - z)        --------(3)

Substituting this value of u into the equations for x' and y', we obtain the perspective transformation equations

xp = x((zprp - zvp) / (zprp - z)) = x(dp / (zprp - z))
yp = y((zprp - zvp) / (zprp - z)) = y(dp / (zprp - z))

where dp = zprp - zvp is the distance of the view plane from the projection reference point.

Using a homogeneous coordinate representation, the perspective projection can be written in matrix form as

xh       1   0       0            0          x
yh   =   0   1       0            0       .  y        --------------(4)
zh       0   0   -(zvp/dp)   zvp(zprp/dp)    z
h        0   0    -1/dp        zprp/dp       1

and the projection coordinates on the view plane are calculated from the homogeneous coordinates as

xp = xh / h
yp = yh / h        ---------------------(5)
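The perspective transformation equations above can be sketched directly (the function name perspectiveProject is ours, not from the text):

```c
/* Perspective projection of (x, y, z) onto the view plane z = zvp, with the
   projection reference point at zprp on the zv axis:
   xp = x * dp / (zprp - z),  yp = y * dp / (zprp - z),  dp = zprp - zvp.
   Assumes z != zprp (points at the projection reference point have no image). */
void perspectiveProject(double x, double y, double z,
                        double zprp, double zvp,
                        double *xp, double *yp)
{
    double dp = zprp - zvp;
    double t = dp / (zprp - z);
    *xp = x * t;
    *yp = y * t;
}
```

A point that already lies on the view plane (z = zvp) projects to itself, since t = 1 there.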
2.6 CLIPPING
The intersection of a line with a boundary is found using the line equations
along with the plane equation.
Intersection coordinates (x1, y1, z1) are values that are on the line and that satisfy the plane equation Ax1 + By1 + Cz1 + D = 0.
To clip a polygon surface, we can clip the individual polygon edges. First,
we could test the coordinate extents against each boundary of the view
volume to determine whether the object is completely inside or completely
outside that boundary. If the coordinate extents of the object are inside all
boundaries, we save it. If the coordinate extents are outside all boundaries,
we discard it. Otherwise, we need to apply the intersection calculations.
Viewport Clipping
The two-dimensional parametric clipping methods of Cyrus-Beck or
Liang-Barsky can be extended to three-dimensional scenes. For a line
segment with endpoints P1 = (x1, y1, z1,) and P2 = (x2, y2, z2), we can write
the parametric line equations as
Lines and polygon surfaces in a scene can be clipped against the viewport
boundaries with procedures similar to those used for two dimensions,
except that objects are now processed against clipping planes instead of
clipping edges.
The two-dimensional concept of region codes can be extended to three dimensions by considering positions in front and in back of the three-dimensional viewport, as well as positions that are left, right, below, or above the volume. For three-dimensional points, we need to expand the region code to six bits. Each point in the description of a scene is then assigned a six-bit region code that identifies the relative position of the point with respect to the viewport.
For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as

bit 1 = 1, if x < xvmin (left)
bit 2 = 1, if x > xvmax (right)
bit 3 = 1, if y < yvmin (below)
bit 4 = 1, if y > yvmax (above)
bit 5 = 1, if z < zvmin (front)
bit 6 = 1, if z > zvmax (back)

The parametric line equations are

x = x1 + (x2 - x1)u
y = y1 + (y2 - y1)u        -------------( 1 )        0 <= u <= 1
z = z1 + (z2 - z1)u

Coordinates (x, y, z) represent any point on the line between the two endpoints. At u = 0, we have the point P1, and u = 1 puts us at P2.

To find the intersection of the line with the front plane of the viewport, for instance, we set z = zvmin and solve for

u = (zvmin - z1) / (z2 - z1)        ---------------------------- ( 2 )

When the calculated value for u is not in the range from 0 to 1, the line segment does not intersect the plane under consideration at any point between endpoints P1 and P2 (line A in the figure).
The intersection coordinates on the front plane are then

x1' = x1 + (x2 - x1) (zvmin - z1) / (z2 - z1)
y1' = y1 + (y2 - y1) (zvmin - z1) / (z2 - z1)

World coordinate vector (xN, yN, zN) defines the normal to the view plane and the direction of the positive zv viewing axis. The world coordinates (xV, yV, zV) give the elements of the view-up vector.
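The parametric intersection test of equations (1) and (2) can be sketched as follows; clipFrontPlane is our own illustrative name, not from the text:

```c
/* Intersection of the segment P1-P2 with the front clipping plane z = zvmin.
   Returns 1 and fills (xi, yi) when the intersection lies between the
   endpoints (0 <= u <= 1); otherwise returns 0. */
int clipFrontPlane(double x1, double y1, double z1,
                   double x2, double y2, double z2,
                   double zvmin, double *xi, double *yi)
{
    double u;
    if (z1 == z2)
        return 0;                 /* segment parallel to the plane */
    u = (zvmin - z1) / (z2 - z1); /* equation (2) */
    if (u < 0.0 || u > 1.0)
        return 0;                 /* no intersection between P1 and P2 */
    *xi = x1 + (x2 - x1) * u;     /* equation (1) at this u */
    *yi = y1 + (y2 - y1) * u;
    return 1;
}
```

The other five viewport planes are handled the same way, solving for u with the appropriate coordinate and boundary value.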
Limits of the 3D viewport within the unit cube are set with normalized coordinates xvmin, xvmax, yvmin, yvmax, zvmin and zvmax. The position of the view plane along the viewing zv axis is set with parameter zview. Positions along the viewing zv axis for the front and back planes of the view volume are given with parameters zfront and zback.
2.8.2 Back-Face Detection
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C and D if

Ax + By + Cz + D < 0        ----------------(1)
When an inside point is along the line of sight to the surface, the
polygon must be a back face .
Thus, in general, we can label any polygon as a back face if its normal vector has a
z component value
C<= 0
By examining parameter C for the different planes defining an object, we
can immediately identify all the back faces.
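The C <= 0 test can be sketched directly. Here the z component of the polygon normal is obtained as a cross product of two edge vectors; only the projected xv yv coordinates of three consecutive vertices are needed for that component. The function name isBackFace is ours, not from the text, and the vertices are assumed to be listed counterclockwise for a front-facing polygon:

```c
/* Back-face test for a polygon with consecutive (projected) vertices
   (x0,y0), (x1,y1), (x2,y2): the z component of the normal is
   C = (x1-x0)(y2-y0) - (y1-y0)(x2-x0), and the polygon is a back face
   when C <= 0. */
int isBackFace(double x0, double y0, double x1, double y1,
               double x2, double y2)
{
    double C = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0);
    return C <= 0.0;
}
```

For convex polyhedra with consistently ordered vertices, this one test per polygon removes roughly half the surfaces before any further visibility processing.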
2.8.3 Depth Buffer Method
Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values. The figure shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the xv yv plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.

We summarize the steps of a depth-buffer algorithm as follows:

1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),

   depth(x, y) = 0,    refresh(x, y) = Ibackgnd

2. For each position on each polygon surface, calculate the depth z at (x, y) and compare it to the value stored in the depth buffer. If z > depth(x, y), then set

   depth(x, y) = z,    refresh(x, y) = Isurf(x, y)

where Ibackgnd is the value for the background intensity, and Isurf(x, y) is the projected intensity value for the surface at pixel position (x, y).
After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

z = (-Ax - By - D) / C        -----------------------------(1)

Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.

For any scan line, adjacent horizontal positions across the line differ by 1, and a vertical y value on an adjacent scan line differs by 1. If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained from Eq. (1) as

z' = (-A(x + 1) - By - D) / C        -----------------------(2)

or

z' = z - A/C        -----------------------(3)

On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line (see the figure below). Depth values at each successive position across the scan line are then calculated by Eq. (3).
The depth value on the left edge for the next scan line down can be obtained from the preceding left-edge depth as

z' = z + (A/m + B) / C        ----------------------(4)

where m is the slope of the edge. If we are processing down a vertical edge, the slope is infinite and the recursive calculations reduce to

z' = z + B/C        -----------------------(5)
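The incremental calculation of Eqs. (1) and (3) can be sketched as follows; scanlineDepths is our own illustrative name, not from the text:

```c
/* Depth values across one scan line for the surface with plane equation
   Ax + By + Cz + D = 0 (C != 0). The depth at the left edge is computed
   once from Eq. (1); each step in x then adds the constant -A/C (Eq. 3),
   avoiding a full plane-equation evaluation per pixel. */
void scanlineDepths(double A, double B, double C, double D,
                    double xLeft, double y, int count, double depth[])
{
    double z = (-A * xLeft - B * y - D) / C;  /* depth at left edge, Eq. (1) */
    double step = -A / C;
    int i;
    for (i = 0; i < count; i++) {
        depth[i] = z;
        z += step;                            /* z' = z - A/C, Eq. (3) */
    }
}
```

Each depth[i] would then be compared with the depth buffer entry for that pixel, as in the algorithm above.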
2.8.4 A-Buffer Method

An extension of the ideas in the depth-buffer method is the A-buffer method, which expands the depth buffer so that each position in the buffer can reference a linked list of surfaces. Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.

Each position in the A-buffer has two fields: a depth field and an intensity field (which stores surface intensity information or a surface identifier). If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as illustrated in Fig. A.
2.8.5 Scan-Line Method

We assume that tables are set up for the various surfaces, which include both an edge table and a polygon table. The edge table contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.
The polygon table contains coefficients of the plane equation for each
surface, intensity information for the surfaces, and possibly pointers into the edge
table.
To facilitate the search for surfaces crossing a given scan line, we can set
up an active list of edges from information in the edge table. This active list will
contain only edges that cross the current scan line, sorted in order of increasing x.
In addition, we define a flag for each surface that is set on or off to
indicate whether a position along a scan line is inside or outside of the surface. Scan
lines are processed from left to right. At the leftmost boundary of a surface, the
surface flag is turned on; and at the rightmost boundary, it is turned off.
Scan lines crossing the projection of two surfaces S1 and S2 in the view
plane. Dashed lines indicate the boundaries of hidden surfaces
The figure illustrates the scan-line method for locating visible portions of surfaces for pixel positions along the line.

The active list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For positions along this scan line between edges AB and BC, only the flag for surface S1 is on. Similarly, between edges EH and FG, only the flag for surface S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values in the other areas are set to the background intensity.

For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations must be made using the plane coefficients for the two surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.

2.8.6 Depth-Sorting Method

Sorting operations are carried out in both image and object space, and the scan conversion of the polygon surfaces is performed in image space:

1. Surfaces are ordered on the first pass according to the smallest z value on each surface.
2. The surface S with the greatest depth is then compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlaps occur, S is scan converted. The figure shows two surfaces that overlap in the xy plane but have no depth overlap.
3. This process is then repeated for the next surface in the list. As long as no overlaps occur, each surface is processed in depth order until all have been scan converted.
We make the following tests for each surface that overlaps with S. If any one of
these tests is true, no reordering is necessary for that surface. The tests are listed in
order of increasing difficulty.
1. The bounding rectangles in the xy plane for the two surfaces do not overlap.
2. Surface S is completely behind the overlapping surface relative to the viewing position.
3. The overlapping surface is completely in front of S relative to the viewing position.
4. The projections of the two surfaces onto the view plane do not overlap.
If tests 1 through 3 have all failed, we try test 4 by checking for intersections
between the bounding edges of the two surfaces using line equations in the xy
plane. As demonstrated in Fig., two surfaces may or may not intersect even though
their coordinate extents overlap in the x, y, and z directions.
Should all four tests fail with a particular overlapping surface S', we
interchange surfaces S and S' in the sorted list.
2.8.7 BSP-Tree Method

A binary space-partitioning (BSP) tree determines object visibility by partitioning space with a set of planes and painting surfaces onto the screen from back to front, as in the painter's algorithm.
With plane P1,we first partition the space into two sets of objects. One set
of objects is behind, or in back of, plane P1, relative to the viewing direction, and
the other set is in front of P1. Since one object is intersected by plane P1, we divide
that object into two separate objects, labeled A and B.
87
http://csetube.weebly.com/
Objects A and C are in front of P1 and objects B and D are behind P1. We
next partition the space again with plane P2 and construct the binary tree
representation shown in Fig.(b).
In this tree, the objects are represented as terminal nodes, with front
objects as left branches and back objects as right branches.
2.8.8 Area Subdivision Method
This technique for hidden-surface removal is essentially an image-space method, but object-space operations can be used to accomplish depth ordering of surfaces.
A surface can have one of four relationships to a specified area boundary: it can be a surrounding surface (completely enclosing the area), an overlapping surface (partly inside and partly outside), an inside surface, or an outside surface.

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivisions of a specified area are needed if one of the following conditions is true:

1. All surfaces are outside surfaces with respect to the area.
2. Only one inside, overlapping, or surrounding surface is in the area.
3. A surrounding surface obscures all other surfaces within the area boundaries.
Another method for carrying out test 3 that does not require depth sorting is to use plane equations to calculate depth values at the four vertices of the area for all surrounding, overlapping, and inside surfaces. If the calculated depths for one of the surrounding surfaces are less than the calculated depths for all other surfaces, test 3 is true. Then the area can be filled with the intensity values of the surrounding surface.
2.8.9 Octree Methods

When an octree representation is used for the viewing volume, hidden-surface elimination is accomplished by projecting octree nodes onto the viewing surface in a front-to-back order.
When a color value is encountered in an octree node, the pixel area in the
frame buffer corresponding to this node is assigned that color value only if no
values have previously been stored in this area. In this way, only the front colors are
loaded into the buffer. Nothing is loaded if an area is void. Any node that is found
to be completely obscured is eliminated from further processing, so that its subtrees
are not accessed.
In the figure below, the front face of a region of space (the side toward the viewer) is formed with octants 0, 1, 2, and 3. Surfaces in the front of these octants are visible to the viewer. Any surfaces toward the rear in the back octants (4, 5, 6, and 7) may be hidden by the front surfaces.
A method for displaying an octree is first to map the octree onto a quadtree
of visible areas by traversing octree nodes from front to back in a recursive
procedure. Then the quadtree representation for the visible surfaces is loaded into
the frame buffer. The below Figure depicts the octants in a region of space and
the corresponding quadrants on the view plane.
Back surfaces are eliminated, for this viewing direction, by processing data elements in the octree nodes in the order 0, 1, 2, 3, 4, 5, 6, 7. This results in a depth-first traversal of the octree, so that nodes representing octants 0, 1, 2, and 3 for the entire region are visited before the nodes representing octants 4, 5, 6, and 7.
In most cases, both a front and a back octant must be considered in determining the correct color values for a quadrant. But if the front octant is homogeneously filled with some color, we do not process the back octant. If the front is empty, the rear octant is processed. Otherwise, two recursive calls are made, one for the rear octant and one for the front octant. The other two quadrants are generated from the pair of octants aligned with each of these quadrants.

A procedure for mapping an octree onto a quadtree of visible areas:

    #include <stdlib.h>

    #define EMPTY -1

    typedef enum { SOLID, MIXED } Status;

    typedef struct tOctree {
        int id;
        Status status;
        union {
            int color;
            struct tOctree * children[8];
        } data;
    } Octree;

    typedef struct tQuadtree {
        int id;
        Status status;
        union {
            int color;
            struct tQuadtree * children[4];
        } data;
    } Quadtree;

    int nQuadtree = 0;

    void octreeToQuadtree (Octree * oTree, Quadtree * qTree)
    {
        Octree * front, * back;
        Quadtree * newQuadtree;
        int i;

        if (oTree->status == SOLID) {
            qTree->status = SOLID;
            qTree->data.color = oTree->data.color;
            return;
        }
        qTree->status = MIXED;

        /* Fill in each quad of the quadtree */
        for (i = 0; i < 4; i++) {
            front = oTree->data.children[i];
            back = oTree->data.children[i+4];
            newQuadtree = (Quadtree *) malloc (sizeof (Quadtree));
            newQuadtree->id = nQuadtree++;
            newQuadtree->status = SOLID;
            qTree->data.children[i] = newQuadtree;

            if (front->status == SOLID) {
                if (front->data.color != EMPTY)
                    qTree->data.children[i]->data.color = front->data.color;
                else if (back->status == SOLID) {
                    if (back->data.color != EMPTY)
                        qTree->data.children[i]->data.color = back->data.color;
                    else
                        qTree->data.children[i]->data.color = EMPTY;
                }
                else {   /* back node is mixed */
                    newQuadtree->status = MIXED;
                    octreeToQuadtree (back, newQuadtree);
                }
            }
            else {   /* front node is mixed */
                newQuadtree->status = MIXED;
                octreeToQuadtree (back, newQuadtree);
                octreeToQuadtree (front, newQuadtree);
            }
        }
    }
2.8.10 Ray-Casting Method

If we consider the line of sight from a pixel position on the view plane through a scene, as in the figure below, we can determine which objects in the scene intersect this line.
In ray casting, we process pixels one at a time and calculate depths for all surfaces along the projection path to that pixel. Ray casting is a special case of ray-tracing algorithms that trace multiple ray paths to pick up global reflection and refraction contributions from multiple objects in a scene. With ray casting, we only follow a ray out from each pixel to the nearest object.
With the octree method, once the representation has been established from the input definition of the objects, all visible surfaces are identified with the same processing procedures.
Curved-Surface Representations
y=f(x,z)
----------------(1)
A curve in the xy plane can then be plotted for values of z within some selected range, using a specified interval Δz. Starting with the largest value of z, we plot the curves from "front" to "back" and eliminate hidden sections.
We draw the curve sections on the screen by mapping an xy range for the
function into an xy pixel screen range. Then, unit steps are taken in x and the
corresponding y value for each x value is determined from Eq. (1) for a given value
of z.
One way to identify the visible curve sections on the surface is to maintain
a list of ymin, and ymax, values previously calculated for the pixel x coordinates on
the screen.
As we step from one pixel x position to the next, we check the calculated y
value against the stored range, ymin, and ymax, for the next pixel.
If ymin <= y <= ymax, that point on the surface is not visible and we do not plot it. But if the calculated y value is outside the stored y bounds for that pixel, the point is visible. We then plot the point and reset the bounds for that pixel.
Wireframe Methods

For each line, depth values are compared to the surfaces to determine which line sections are not visible. We can use coherence methods to identify hidden line segments without actually testing
each coordinate position. If both line intersections with the projection of a surface
boundary have greater depth than the surface at those points, the line segment
between the intersections is completely hidden, as in Fig. (a).
By processing the surfaces from back to front, hidden lines are erased by
the nearer surfaces.An area-subdivision method can be adapted to hidden-line
removal by displaying only the boundaries of visible surfaces. Scan-line methods
can be used to display visible lines by setting points along the scan line that
coincide with boundaries of visible surfaces.
Often, three-dimensional graphics packages accommodate several visible-surface detection procedures, particularly the back-face and depth-buffer methods.
This is the usual situation in a scene, but it is also possible to have lines
and surfaces intersecting each other. When a line has greater depth at one boundary
intersection and less depth than the surface at the other boundary intersection, the
line must penetrate the surface interior, as in Fig. (b). In this case, we calculate the
intersection point of the line with the surface using the plane equation and display
only the visible sections.
A particular function can then be invoked with the procedure name, such as backFace or depthBuffer.
Hidden line sections (dashed) for a line that (a) passes behind a surface and (b)
penetrates a surface
Spectral colors range from the reds through orange and yellow at the low
frequency end to greens, blues and violet at the high end.
Since light is an electromagnetic wave, the various colors are described in terms of either the frequency f or the wavelength λ of the wave. The wavelength and frequency of a monochromatic wave are inversely proportional to each other, with the proportionality constant equal to the speed of light c, where c = fλ.
Color Models
Color Model is a method for explaining the properties or behavior of color within
some particular context. No single color model can explain all aspects of color, so
we make use of different models to help describe the different perceived
characteristics of color.
A light source such as the sun or a light bulb emits all frequencies within
the visible range to produce white light. When white light is incident upon
an object, some frequencies are reflected and some are absorbed by the
object. The combination of frequencies present in the reflected light
determines what we perceive as the color of the object.
Properties of Light
Purity describes how washed out or how pure the color of the
light appears.
At the low frequency end is a red color (4.3 × 10^14 Hz) and at the highest frequency is a violet color (7.5 × 10^14 Hz).
Two different color light sources with suitably chosen intensities can be
used to produce a range of other colors.
If the 2 color sources combine to produce white light, they are called
complementary colors. E.g., Red and Cyan, green and magenta, and blue
and yellow.
X = (x/y)Y,
Z = (z/y)Y
Where z = 1-x-y.
Color models that are used to describe combinations of light in terms of
dominant frequency use 3 colors to obtain a wide range of colors, called
the color gamut.
The 2 or 3 colors used to produce other colors in a color model are called
primary colors.
Starting with the pigment for a pure color, different shades are produced by adding black pigment; more black pigment produces darker shades.
Standard Primaries
XYZ Color Model
Different tints of the color are obtained by adding a white pigment to the
original color, making it lighter as more white is added.
Tones of the color are produced by adding both black and white pigments.
Based on the tristimulus theory of vision, our eyes perceive color through the stimulation of three visual pigments in the cones of the retina.
This is the basis for displaying color output on a video monitor using the 3
color primaries, red, green, and blue referred to as the RGB color model. It is
represented in the below figure.
with x + y + z = 1
Any color can be represented with just the x and y amounts. The
parameters x and y are called the chromaticity values because they depend
only on hue and purity.
Shades of gray are represented along the main diagonal of the cube from the
origin (black) to the white vertex.
2.5.5 YIQ Color Model
The National Television System Committee (NTSC) color model for forming the composite video signal is the YIQ model. In the YIQ color model, luminance (brightness) information is contained in the Y parameter, while chromaticity information (hue and purity) is contained in the I and Q parameters.
A combination of red, green and blue intensities are chosen for the Y
parameter to yield the standard luminosity curve.
The origin represents black, and the vertex with coordinates (1,1,1) is white.
Vertices of the cube on the axes represent the primary colors; the remaining vertices represent the complementary color for each of the primary colors.
The RGB color scheme is an additive model. (i.e.,) Intensities of the primary
colors are added to produce other colors.
Each color point within the bounds of the cube can be represented as the
triple (R, G, B), where the values for R, G and B are assigned in the range from 0
to 1.
The RGB-to-YIQ conversion is

    [Y]   [0.299  0.587  0.144] [R]
    [I] = [0.596 -0.275 -0.321] [G]
    [Q]   [0.212 -0.528  0.311] [B]
The YIQ-to-RGB conversion is

    [R]   [1.000  0.956  0.620] [Y]
    [G] = [1.000 -0.272 -0.647] [I]
    [B]   [1.000 -1.108  1.705] [Q]
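The forward RGB-to-YIQ conversion can be sketched directly from the matrix (the coefficients below are copied as printed in these notes; the function name is ours):

```c
#include <assert.h>

/* RGB -> YIQ, coefficients as given in the notes' NTSC matrix. */
void rgb_to_yiq(double r, double g, double b,
                double *y, double *i, double *q)
{
    *y = 0.299 * r + 0.587 * g + 0.144 * b;  /* luminance  */
    *i = 0.596 * r - 0.275 * g - 0.321 * b;  /* chromaticity */
    *q = 0.212 * r - 0.528 * g + 0.311 * b;  /* chromaticity */
}
```

For pure red (1, 0, 0), the first column of the matrix falls out directly: Y = 0.299, I = 0.596, Q = 0.212.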
A color model defined with the primary colors cyan, magenta, and yellow
(CMY) is useful for describing color output to hard-copy devices.
Equal amounts of each of the primary colors produce grays along the main
diagonal of the cube.
A combination of cyan and magenta ink produces blue light because the
red and green components of the incident light are absorbed.
The printing process often used with the CMY model generates a color
point with a collection of 4 ink dots: one dot is used for each of the
primary colors (cyan, magenta, and yellow) and one dot is black.
The conversion from an RGB representation to a CMY representation is
expressed as
    [C]   [1]   [R]
    [M] = [1] - [G]
    [Y]   [1]   [B]
Where the white is represented in the RGB system as the unit column vector.
The boundary of the hexagon represents the various hues, and it is used as
the top of the HSV hexcone.
    [R]   [1]   [C]
    [G] = [1] - [M]
    [B]   [1]   [Y]
Where black is represented in the CMY system as the unit column vector.
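Both conversions are simple complements, as the two unit-column-vector equations show; a minimal sketch in C (function names are ours):

```c
#include <assert.h>

/* CMY is the complement of RGB: C = 1 - R, M = 1 - G, Y = 1 - B. */
void rgb_to_cmy(double r, double g, double b,
                double *c, double *m, double *y)
{
    *c = 1.0 - r;  *m = 1.0 - g;  *y = 1.0 - b;
}

/* The inverse: R = 1 - C, G = 1 - M, B = 1 - Y. */
void cmy_to_rgb(double c, double m, double y,
                double *r, double *g, double *b)
{
    *r = 1.0 - c;  *g = 1.0 - m;  *b = 1.0 - y;
}
```

Red (1, 0, 0) maps to (0, 1, 1): a combination of magenta and yellow ink absorbs green and blue, leaving red.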
HSV Color Model
The HSV model uses color descriptions that have a more interactive appeal
to a user.
Color parameters in this model are hue (H), saturation (S), and value (V).
The 3D representation of the HSV model is derived from the RGB cube:
viewed along the cube's main diagonal, its outline has a hexagon shape.
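The notes do not give the conversion itself, so the following is a sketch of the classic RGB-to-HSV hexcone algorithm (standard material, not code from these notes):

```c
#include <assert.h>

/* RGB -> HSV hexcone conversion.  h in [0, 360), s and v in [0, 1]. */
void rgb_to_hsv(double r, double g, double b,
                double *h, double *s, double *v)
{
    double max = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double min = r < g ? (r < b ? r : b) : (g < b ? g : b);
    double d = max - min;

    *v = max;                           /* value: height up the cone axis   */
    *s = (max == 0.0) ? 0.0 : d / max;  /* saturation: distance from axis   */

    if (d == 0.0) { *h = 0.0; return; } /* gray: hue undefined, report 0    */
    if (max == r)      *h = 60.0 * ((g - b) / d);          /* red sector    */
    else if (max == g) *h = 120.0 + 60.0 * ((b - r) / d);  /* green sector  */
    else               *h = 240.0 + 60.0 * ((r - g) / d);  /* blue sector   */
    if (*h < 0.0) *h += 360.0;
}
```

Pure red maps to h = 0 and pure green to h = 120, matching the hue positions around the hexagon.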
The remaining colors are specified around the perimeter of the cone in the
same order as in the HSV model.
It defines the motion sequences as a set of basic events that are to take
place.
Animation
Computer animation refers to any time sequence of visual changes in a
scene.
Object Definition
The associated movements of each object are specified along with the
shape.
Example : Advertising animations often transition one object shape into another.
Frame-by-Frame animation
Key frame
A key frame is a detailed drawing of the scene at a certain time in the
animation sequence.
Each frame of the scene is separately generated and stored. Later, the frames can be
recorded on film, or they can be consecutively displayed in "real-time playback"
mode.
Within each key frame, each object is positioned according to the time for
that frame.
Some key frames are chosen at extreme positions in the action; others are
spaced so that the time interval between key frames is not too great.
In-betweens
Object shapes and associated parameter are stored and updated in the
database.
Standard functions can be applied to identify visible surfaces and apply the
rendering algorithms.
Film requires 24 frames per second, and graphics terminals are refreshed at
the rate of 30 to 60 frames per second.
Camera movement functions such as zooming, panning and tilting are used
for motion simulation.
Time intervals for the motion are set up so that there are from 3 to 5 in-betweens
for each pair of key frames.
Given the specification for the key frames, the in-betweens can be
automatically generated.
Depending on the speed of the motion, some key frames can be duplicated.
Raster Animations
For a 1 min film sequence with no duplication, 1440 frames are needed.
Motion verification
Editing
Set the pixels at the first position of the object to on values, and
set the pixels at the other object positions to the background color.
2. Camera motion
3. Generation of in-betweens
Keyframe Systems
Each set of in-betweens is generated from the specification of two
keyframes.
For complex scenes, we can separate the frames into individual
components or objects called cels, a term taken from cartoon animation.
Morphing
Action specification involves the layout of motion paths for the objects and
camera.
Simulating Accelerations
Curve-fitting techniques are often used to specify the animation paths between key
frames. Given the vertex positions at the key frames, we can fit the positions with
linear or nonlinear paths. Figure illustrates a nonlinear fit of key-frame positions.
This determines the trajectories for the in-betweens. To simulate accelerations, we
can adjust the time spacing for the in-betweens.
Suppose we wish to equalize the edge count, and let the parameters Lk and Lk+1
denote the number of line segments in two consecutive frames. We define

Lmax = max(Lk, Lk+1)
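The notes define only Lmax; in the usual edge-equalization scheme one also uses Lmin = min(Lk, Lk+1), splits Ne = Lmax mod Lmin of the shorter frame's edges into Ns + 1 pieces and the rest into Ns = Lmax div Lmin pieces, so both frames end up with Lmax segments. A sketch (Lmin, Ns, Ne and the function name are assumptions following that standard scheme, not definitions from these notes):

```c
#include <assert.h>

/* Edge-count equalization between two consecutive key frames.
   Lk and Lk1 are segment counts; on return, Ne edges of the
   shorter frame get split into Ns+1 pieces and the remaining
   Lmin - Ne edges into Ns pieces, giving Lmax pieces total. */
void equalize_edges(int Lk, int Lk1, int *Ns, int *Ne)
{
    int Lmax = Lk > Lk1 ? Lk : Lk1;
    int Lmin = Lk < Lk1 ? Lk : Lk1;
    *Ns = Lmax / Lmin;   /* base number of pieces per edge */
    *Ne = Lmax % Lmin;   /* edges that get one extra piece */
}
```

For Lk = 15 and Lk+1 = 11 this gives Ns = 1 and Ne = 4: four edges are split in two and seven are kept whole, yielding 4*2 + 7 = 15 segments.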
Motion Specification
There are several ways in which the motions of objects can be specified in
an animation system.
Direct Motion Specification
Here the rotation angles and translation vectors are explicitly given.
Then the geometric transformation matrices are applied to transform
coordinate positions.
For constant speed (zero acceleration), we use equal-interval time spacing for the
in-betweens. Suppose we want n in-betweens for key frames at times t1 and t2.
The time interval between key frames is then divided into n + 1 subintervals,
yielding an in-between spacing of

Δt = (t2 - t1) / (n + 1)

so that the in-between times are tBj = t1 + j Δt, for j = 1, 2, . . ., n.
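The equal-interval timing above can be sketched as a one-line helper (the function name is ours):

```c
#include <assert.h>

/* Time of the j-th in-between (j = 1..n) for key frames at t1 and
   t2 with n equally spaced in-betweens: t1 + j*(t2 - t1)/(n + 1). */
double inbetween_time(double t1, double t2, int n, int j)
{
    double dt = (t2 - t1) / (n + 1);   /* spacing between frames */
    return t1 + j * dt;
}
```

With key frames at t1 = 0 and t2 = 5 and n = 4 in-betweens, the in-betweens fall at t = 1, 2, 3, 4.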
where A is the initial amplitude, ω is the angular frequency, θ0 is the phase angle,
and k is the damping constant.
We can specify the motions that are to take place in general terms that
abstractly describe the actions.
Libraries
These systems are called goal-directed because they determine specific
motion parameters, given the goals of the animation.
OpenGL Utility Library (GLU) contains several routines that use lower-level
OpenGL commands to perform such tasks as setting up matrices for
specific viewing orientations and projections and rendering surfaces.
OpenGL Utility Toolkit (GLUT) is a window-system-independent toolkit,
written by Mark Kilgard, to hide the complexities of differing window
APIs.
Include Files
For all OpenGL applications, you want to include the gl.h header file in every file.
Almost all OpenGL applications use GLU, the aforementioned OpenGL Utility
Library, which also requires inclusion of the glu.h header file. So almost every
OpenGL source file begins with:
#include <GL/gl.h>
#include <GL/glu.h>
If you are using the OpenGL Utility Toolkit (GLUT) for managing your window
manager tasks, you should include:
#include <GL/glut.h>
The following files must be placed in the proper folder to run an OpenGL program.
OpenGL is a software interface that allows you to access the graphics hardware
without taking care of the hardware details or which graphics adapter is in the
system. OpenGL is a low-level graphics library specification. It makes available to
the programmer a small set of geometric primitives - points, lines, polygons,
images, and bitmaps. OpenGL provides a set of commands that allow the
RGBA mode provides more flexibility. In general, use RGBA mode whenever
possible. RGBA mode is the default.
glut32.lib
Include files (place in the include\GL\ subdirectory of Visual C++)
Another decision we need to make when setting up the display mode is whether we
want to use single buffering (GLUT_SINGLE) or double buffering
(GLUT_DOUBLE). If we aren't using animation, stick with single buffering,
which is the default.
gl.h
glu.h
glut.h
3. glutInitWindowSize(640,480)
opengl32.dll
glu32.dll
glut32.dll
4. glutInitWindowPosition(100,15)
1. glutInit(&argc, argv)
The first thing we need to do is call the glutInit() procedure. It should be called
before any other GLUT routine because it initializes the GLUT library. The
parameters to glutInit() should be the same as those to main(), specifically main(int
argc, char** argv) and glutInit(&argc, argv).
2. glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
The window is not actually displayed until glutMainLoop() is entered. The very
last thing we do is call this function.
We must first decide whether we want to use an RGBA (GLUT_RGB) or color-index
(GLUT_INDEX) color model. The RGBA mode stores its color buffers as
red, green, blue, and alpha color components. Color-index mode, in contrast, stores
color buffers in indices. For special effects, such as shading, lighting, and fog,
RGBA mode works best.
The method of associating a callback function with a particular type of event is
called event-driven programming. OpenGL provides tools to assist with the event
management.
GLUT_KEY_PAGE_DOWN
GLUT_KEY_HOME
GLUT_KEY_END
GLUT_KEY_INSERT
1. glutDisplayFunc(mydisplay)
The glutDisplayFunc() procedure is the first and most important event callback
function. A callback function is one where a programmer-specified routine can be
registered to be called in response to a specific type of event. For example, the
argument of glutDisplayFunc(mydisplay) is the function that is called whenever
GLUT determines that the contents of the window need to be redisplayed.
Therefore, we should put all the routines needed to draw the scene in this
display callback function.
4. glutMouseFunc(mymouse)
2. glutReshapeFunc(myreshape)
GLUT supports interaction with the computer mouse, triggered when one of
the three typical buttons is pressed. A mouse callback function can be initiated when
a given mouse button is pressed or released. The command glutMouseFunc() is
used to specify the callback function to use when a specified button is in a given
state at a certain location. The buttons are defined as GLUT_LEFT_BUTTON,
GLUT_RIGHT_BUTTON, or GLUT_MIDDLE_BUTTON, and the states for that button
are either GLUT_DOWN (when pressed) or GLUT_UP (when released). Finally, the x
and y callback parameters indicate the location (in window-relative coordinates) of
the mouse at the time of the event.
Page Down
Home
End
Insert
Special keys can also be used as triggers. The key passed to the callback function,
in this case, takes one of the following values (defined in glut.h).
glutInit(&argc, argv);
GLUT_KEY_UP
GLUT_KEY_RIGHT
GLUT_KEY_DOWN
GLUT_KEY_PAGE_UP
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(465, 250);
glutInitWindowPosition(100, 150);
glutCreateWindow("My First Example");
Up Arrow
Right Arrow
Down Arrow
Page Up
glutDisplayFunc(mydisplay);
glutReshapeFunc(myreshape);
glutMouseFunc(mymouse);
GL_QUAD_STRIP
GL_POLYGON
glutKeyboardFunc(mykeyboard);
myinit();
glVertex( ) : The main function used to draw objects is named glVertex. This
function defines a point (or a vertex), and it can receive from 2 up to 4
coordinates.
glutMainLoop();
return 0;
}
OpenGL provides tools for drawing all the output primitives such as points, lines,
triangles, polygons, quads, etc., and each is defined by one or more vertices.
To draw such objects in OpenGL we pass it a list of vertices. The list occurs
between the two OpenGL function calls glBegin() and glEnd(). The argument of
glBegin() determines which object is drawn.
When we wish to refer to the basic command without regard to the specific arguments
and datatypes, it is written as
glVertex*();
OpenGL Datatypes
The parameter mode of the function glBegin can be one of the following:
GL_POINTS
GL_LINES
GL_LINE_STRIP
GL_LINE_LOOP
GL_TRIANGLES
GL_TRIANGLE_STRIP
GL_TRIANGLE_FAN
GL_QUADS
glBegin(GL_TRIANGLES);
glVertex3f(100.0f, 100.0f, 0.0f);
glVertex3f(150.0f, 100.0f, 0.0f);
glVertex3f(125.0f, 50.0f, 0.0f);
glEnd( );
// the following code draws a line
glBegin(GL_LINES);
glVertex3f(100.0f, 100.0f, 0.0f); // origin of the line
glVertex3f(200.0f, 140.0f, 5.0f); // ending point of the line
glEnd( );
OpenGL State
OpenGL keeps track of many state variables, such as the current size of a point, the
current color of a drawing, the current background color, etc.
Example
//the following code plots three dots
glBegin(GL_POINTS);
glVertex2i(100, 50);
The value of a state variable remains active until a new value is given.
glPointSize() : The size of a point can be set with glPointSize(), which takes one
floating point argument
Example : glPointSize(4.0);
glClearColor() : establishes what color the window will be cleared to. The
background color is set with glClearColor(red, green, blue, alpha), where alpha
specifies a degree of transparency
glVertex2i(100, 130);
glVertex2i(150, 130);
Example : glClearColor (0.0, 0.0, 0.0, 0.0); //set black background color
glEnd( );
glClear() : To clear the entire window to the background color, we use glClear
(GL_COLOR_BUFFER_BIT). The argument GL_COLOR_BUFFER_BIT is
another constant built into OpenGL
glFlush() : ensures that the drawing commands are actually executed rather than
stored in a buffer awaiting additional OpenGL commands, i.e., it forces all issued
OpenGL commands to be executed.
Example : glClear(GL_COLOR_BUFFER_BIT)
glColor3f() : establishes the color to use for drawing objects. All objects drawn after
this point use this color, until it is changed with another call to set the color.
Example:
glShadeModel : Sets the shading model. The mode parameter can be either
GL_SMOOTH (the default) or GL_FLAT.
glColor3f(0.0, 0.0, 0.0); //black
glColor3f(1.0, 0.0, 0.0); //red
glColor3f(0.0, 1.0, 0.0); //green
glColor3f(1.0, 1.0, 0.0); //yellow
glColor3f(0.0, 0.0, 1.0); //blue
glColor3f(1.0, 0.0, 1.0); //magenta
glColor3f(0.0, 1.0, 1.0); //cyan
glColor3f(1.0, 1.0, 1.0); //white
With flat shading, the color of one particular vertex of an independent primitive is
duplicated across all the primitive's vertices to render that primitive. With smooth
shading, the color at each vertex is treated individually.
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
Example : gluOrtho2D(0.0, 640.0, 0.0, 480.0);
void myInit(void)
{
glOrtho() : specifies the coordinate system in three dimensions
glClearColor (1.0, 1.0, 1.0, 0.0);
glutCreateWindow("Example");
glPointSize(4.0);
glutDisplayFunc(Display);
glMatrixMode(GL_PROJECTION);
myInit();
glLoadIdentity();
glutMainLoop();
return 0;
void Display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glBegin(GL_POINTS);
glVertex2i(100, 50);
glVertex2i(100, 130);
glVertex2i(150, 130);
glEnd( );
glFlush();
}
int main (int argc, char **argv)
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void Display(void)
{
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glutInit(&argc, argv);
glClear (GL_COLOR_BUFFER_BIT);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glColor3f (1.0, 1.0, 1.0);
glutInitWindowSize(640,480);
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glutInitWindowPosition(100,150);
glBegin(GL_POLYGON);
glVertex3f (0.25, 0.25, 0.0);
glVertex3f (0.75, 0.25, 0.0);
glVertex3f (0.75, 0.75, 0.0);
glVertex3f (0.25, 0.75, 0.0);
glEnd();
glFlush();
}
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(640,480);
glutCreateWindow("Intro");
glClearColor(0.0,0.0,0.0,0.0);
glutDisplayFunc(Display);
glutMainLoop();
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void myInit(void)
return 0;
glPointSize(4.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 640.0, 0.0, 480.0);
glutInitWindowPosition(100,150);
void Display(void)
glutDisplayFunc(Display);
glClear (GL_COLOR_BUFFER_BIT);
myInit();
glBegin(GL_POINTS);
glutMainLoop();
glVertex2i(289, 190);
return 0;
glVertex2i(320, 128);
glVertex2i(239, 67);
glVertex2i(194, 101);
glVertex2i(129, 83);
glVertex2i(75, 73);
glVertex2i(74, 74);
glVertex2i(20, 10);
glEnd( );
glFlush();
}
glVertex2i(40, 100);
glVertex2i(202, 96);
glEnd();
A line's color is set in the same way as for points, using glColor3f().
glVertex2i(20,10);
glVertex2i(50,10);
To make stippled (dotted or dashed) lines, we use the command glLineStipple() to
define the stipple pattern, and then we enable line stippling with glEnable():
glVertex2i(20,80);
glVertex2i(50,80);
glEnd();
glLineStipple(1, 0x3F07);
glFlush();
glEnable(GL_LINE_STIPPLE);
// draw a rectangle with opposite corners (x1, y1) and (x2, y2);
glVertex2i(50,10);
glVertex2i(20,80);
glVertex2i(50,80);
glEnd();
glFlush();
glRecti(20,20,100,70);
Attributes such as color, thickness and stippling may be applied to polylines in the
same way they are applied to single lines. If it is desired to connect the last point
with the first point to make the polyline into a polygon simply replace
GL_LINE_STRIP with GL_LINE_LOOP.
Polygons
Polygons are the areas enclosed by single closed loops of line segments, where the
line segments are specified by the vertices at their endpoints
A special case of a polygon is the aligned rectangle, so called because its sides are
aligned with the coordinate axes.
Polygons are typically drawn by filling in all the pixels enclosed within the
boundary, but you can also draw them as outlined polygons or simply as points at
the vertices. A filled polygon might be solidly filled, or stippled with a certain
pattern
vertices: v0, v1, v2, then v0, v2, v3, then v0, v3, v4, etc.
To draw a convex polygon based on vertices (x0, y0), (x1, y1), . . ., (xn, yn), use the
usual list of vertices, but place them between a glBegin(GL_POLYGON) and a
glEnd():
glBegin(GL_POLYGON);
glVertex2f(x0, y0);
glVertex2f(x1, y1);
. . . . ..
glVertex2f(xn, yn);
glEnd();
The following list explains the function of each of the five constants:
GL_TRIANGLES: takes the listed vertices three at a time, and draws a separate
triangle for each;
GL_QUADS: takes the vertices four at a time and draws a separate quadrilateral for
each
Example: drawing a smooth-shaded triangle
#include <gl/gl.h>
void init(void)
glutInit(&argc, argv);
glShadeModel (GL_SMOOTH);
glMatrixMode (GL_PROJECTION);
glutCreateWindow ("Shade");
glLoadIdentity ();
init ();
glutDisplayFunc(display);
void display(void)
glutMainLoop();
return 0;
glClear (GL_COLOR_BUFFER_BIT);
glBegin (GL_TRIANGLES);
glColor3f (1.0, 0.0, 0.0);
Polygon Filling
The pattern is specified with 128-byte array of data type GLubyte. The 128 bytes
provides the bits for a mask that is 32 bits wide and 32 bits high.
The first 4 bytes prescribe the 32 bits across the bottom row, from left to right; the
next 4 bytes give the next row up, etc.
Example
0x10, 0x18, 0x18, 0x08, 0x10, 0x00, 0x00, 0x08};
void myInit(void)
#include "stdafx.h"
{
#include "gl/glut.h"
glPointSize(4.0);
GLubyte mask[]={
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
void Display(void)
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_POLYGON_STIPPLE);
glFlush();
}
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(640,480);
glutInitWindowPosition(100,150);
glutCreateWindow("Polygon Stipple");
glutDisplayFunc(Display);
myInit();
glutMainLoop();
return 0;
}
When the user presses or releases a mouse button, moves the mouse, or presses a
keyboard key, an event occurs. Using the OpenGL Utility Toolkit (GLUT), the
programmer can register a callback function with each of these events by using the
following commands:
Mouse interaction.
When a mouse event occurs the system calls the registered function, supplying it
with values for these parameters. The value of button will be one of:
GLUT_MIDDLE_BUTTON,
GLint x = mouseX;
GLUT_RIGHT_BUTTON,
switch(theKey)
with the obvious interpretation, and the value of state will be one of: GLUT_UP or
GLUT_DOWN. The values x and y report the position of the mouse at the time of
the event.
Keyboard interaction.
case 'p':
drawDot(x, y); // draw a dot at the mouse position
break;
As mentioned earlier, pressing a key on the keyboard queues a keyboard event. The
callback function myKeyboard() is registered with this type of event through
List[ last].y = y;
break;
case 'E':
glutKeyboardFunc(myKeyboard).
break; // do nothing
}
}
Drawing three dimensional objects & Drawing three dimensional scenes
Example : gluLookAt(3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glutSwapBuffers () : glutSwapBuffers swaps the buffers of the current window if
double buffered.
glEnd();
// Triangle
glBegin( GL_TRIANGLES );
icosahedron
teapot
cube
octahedron
tetrahedron
dodecahedron
sphere
torus
glutWireCube(double size);
glBegin(GL_QUADS);
glutSolidCube(double size);
glColor3f(1,0,0); //red
glColor3f(0,1,0); //green
glVertex3f(-0.5, 0.5, 0.0);
glColor3f(0,0,1); //blue
glVertex3f(0.5, 0.5, 0.0);
glColor3f(1,1,1); //white
glVertex3f(0.5, -0.5, 0.0);
glutWireTeapot(double size);
glutSolidTeapot(double size);
glEnd();
3D Transformation in OpenGL
cone
Use glPushMatrix and glPopMatrix to save and restore the untranslated coordinate
system.
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void Display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glColor3f(0.0, 1.0, 0.0);
glBegin(GL_POLYGON);
glVertex3f( 0.0, 0.0, 0.0);
// V0 ( 0, 0, 0)
glVertex3f( 1.0f, 0.0, 0.0);
// V1 ( 1, 0, 0)
glVertex3f( 1.0f, 1.0f, 0.0);
// V2 ( 1, 1, 0)
glVertex3f( 0.5f, 1.5f, 0.0);
// V3 (0.5, 1.5, 0)
glVertex3f( 0.0, 1.0f, 0.0);
// V4 ( 0, 1, 0)
glEnd();
glPushMatrix();
glTranslatef(1.5, 2.0, 0.0);
glRotatef(90.0, 0.0, 0.0, 1.0);
glScalef(0.5, 0.5, 0.5);
glBegin(GL_POLYGON);
glVertex3f( 0.0, 0.0, 0.0);
// V0 ( 0, 0, 0)
glVertex3f( 1.0f, 0.0, 0.0);
// V1 ( 1, 0, 0)
glVertex3f( 1.0f, 1.0f, 0.0);
// V2 ( 1, 1, 0)
glVertex3f( 0.5f, 1.5f, 0.0);
// V3 (0.5, 1.5, 0)
Introduction to shading models - Flat and smooth shading - Adding texture to faces -
Adding shadows of objects - Building a camera in a program - Creating shaded
objects - Rendering texture - Drawing shadows.
// V4 ( 0, 1, 0)
glFlush();
glutSwapBuffers();
}
void Init(void)
{
glClearColor(0.0, 0.0, 0.0, 0.0);
}
void Resize(int width, int height)
{
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, width/height, 0.1, 1000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(400, 400);
glutInitWindowPosition(200, 200);
glutCreateWindow("Polygon in OpenGL");
Init();
glutDisplayFunc(Display);
glutReshapeFunc(Resize);
glutMainLoop();
return 0;
}
A shading model uses two types of light source to illuminate the objects in
a scene : point light sources and ambient light. Incident light interacts with the
surface in three different ways:
UNIT IV RENDERING
Each face of a mesh object has two sides. If the object is solid,
one is inside and the other is outside. The eye can see only the outside, and
it is this side for which we must compute light contributions.
plastic. In a more complex model the color of the specular light varies
, providing a better approximation to the shininess of metal surfaces.
The total light reflected from the surface in a certain direction is
the sum of the diffuse component and the specular component. For each
surface point of interest we compute the size of each component that
reaches the eye.
Suppose that light falls from a point source onto one side of a
face; a fraction of it is re-radiated diffusely in all directions from this side.
Some fraction of the re-radiated part reaches the eye, with an intensity
denoted by Id.
On the other hand, the amount of light that illuminates the face
does depend on the orientation of the face relative to the point source:
the amount of light is proportional to the area of the face that the source sees.
1. Some light is absorbed by the surface.
2. Some is reflected diffusely.
3. Some is reflected specularly.
But for simplicity and to reduce computation time, these effects are usually
suppressed when rendering images. A reasonable value for ρd is chosen for
each surface.
4.1.3 Specular Reflection
Real objects do not scatter light uniformly in all directions, and so
a specular component is added to the shading model. Specular reflection
causes highlights, which can add reality to a picture when objects are
shiny. The behavior of specular light can be explained with the Phong model.
Phong Model
Fig (b): the face is turned partially away from the light source
through angle θ. The area subtended is now only cos(θ), so the
brightness of S is reduced by this same factor. This relationship
between brightness and surface orientation is called Lambert's law.
Id = Is ρd max( (s·m) / (|s||m|), 0 )
The reflection coefficient ρd depends on the wavelength of the
incident light, the angle θ, and various physical properties of the surface.
with beam patterns. The distance from P to the beam envelope shows the
relative strength of the light scattered in that direction.
The source is assigned an intensity Ia. Each face in the model is assigned a
value for its ambient reflection coefficient ρa, and the term Ia ρa is added to the
diffuse and specular light that is reaching the eye from each point P on that face. Ia
and ρa are usually found experimentally.
Fig (c) shows how to quantify this beam-pattern effect. The direction r
of perfect reflection depends on both s and the normal vector m to the
surface, according to:
r = -s + 2 ( (s·m) / |m|² ) m   (the mirror-reflection direction)
For surfaces that are shiny but are not true mirrors, the amount of light
reflected falls off as the angle φ between r and v increases. In the Phong model,
the falloff is said to vary as some power f of the cosine of φ, i.e., (cos(φ))^f, in
which f is chosen experimentally and usually lies between 1 and 200.
Too little ambient light makes shadows appear too deep and harsh; too
much makes the picture look washed out and bland.
lambert = max( 0, (s·m) / (|s||m|) )
phong = max( 0, (h·m) / (|h||m|) )
I = Ia ρa + Id ρd × lambert + Isp ρs × phong^f --------------- (1)
The above equations are applied three times to compute the red,
green and blue components of the reflected light.
The light sources have three types of color: ambient = (Iar, Iag, Iab),
diffuse = (Idr, Idg, Idb) and specular = (Ispr, Ispg, Ispb). Usually the diffuse and the specular
light colors are the same. The terms lambert and phong^f do not depend on the
color component, so they need to be calculated only once. To do this we need to define
nine reflection coefficients:
ambient: ρar, ρag, ρab; diffuse: ρdr, ρdg, ρdb; and specular: ρsr, ρsg, ρsb.
The key idea is that the vertices of a mesh are sent down the
pipeline along with their associated vertex normals, and all shading calculations are
done on vertices.
The above fig. shows a triangle with vertices v0,v1 and v2 being
rendered. Vertex vi has the normal vector mi associated with it. These quantities are
sent down the pipeline with calls such as :
The ambient and diffuse reflection coefficients are based on the color of
the surface itself.
The Color of Specular Light
Specular light is mirror-like, so the color of the specular component is the same
as that of the light source.
glBegin(GL_POLYGON);
glEnable(GL_LIGHT0);
Some sources, such as a desk lamp, are in the scene, whereas others, like the
sun, are infinitely remote. OpenGL allows us to create both types by using
homogeneous coordinates to specify the light position:
Colors are specified in RGBA format meaning red, green, blue and
alpha. The alpha value is sometimes used for blending two colors on the screen.
Light sources have various default values. For all sources:
Default diffuse = (1, 1, 1, 1)   brightest possible: white
Default specular = (1, 1, 1, 1)  brightest possible: white
Spotlights
Light sources are point sources by default, meaning that they emit
light uniformly in all directions. But OpenGL allows you to make them into
spotlights, so that they emit light in a restricted set of directions. The fig. shows a
spotlight aimed in direction d with a cutoff angle of α.
No light is seen at points lying outside the cutoff cone. For vertices
such as P, which lie inside the cone, the amount of light reaching P is attenuated by
the factor cos^ε(β), where β is the angle between d and a line from the source to P,
and ε is the exponent chosen by the user to give the desired falloff of light with angle.
ambient source to a non-zero value makes objects in a scene visible even if you have
not invoked any of the lighting functions.
glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 4.0); // ε = 4.0
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION,dir);
The default values for these parameters are d = (0, 0, -1), α = 180 degrees, and ε = 0,
which makes a source an omnidirectional point source.
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
The fig.(a) shows a eye viewing a cube which is modeled using the
counterclockwise order notion. The arrows indicate the order in which the vertices
are passed to OpenGL. For an object that encloses that some space, all faces that are
visible to the eye are front faces, and OpenGL draws them with the correct shading.
OpenGL also draws back faces but they are hidden by closer front faces.
This code sets the ambient source to the color (0.2, 0.3, 0.1). The
default value is (0.2, 0.2, 0.2,1.0) so the ambient is always present. Setting the
void display()
{
GLfloat position[]={2,1,3,1};
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
Fig(b) shows a box with a face removed. Three of the visible faces
are back faces. By default, OpenGL does not shade these properly. To do proper
shading of back faces we use:
glRotated(.);
glTranslated();
glLightfv(GL_LIGHT0,GL_POSITION,position);
glPopMatrix();
gluLookAt(.);
GLfloat pos[]={0,0,0,1};
glMatrixMode(GL_MODELVIEW);
glLightfv(GL_LIGHT0,GL_POSITION,position)
glLoadIdentity();
gluLookAt(.);
This code establishes the light to be positioned at the eye, so that the
light moves with the camera.
The effect of a light source can be seen only when light reflects off
an object's surface. OpenGL provides methods for specifying the various reflection
coefficients. The coefficients are set with variations of the function glMaterial, and
they can be specified individually for front and back faces. The code:
glMaterialfv(GL_FRONT,GL_DIFFUSE,myDiffuse);
sets the diffuse reflection coefficients (ρdr, ρdg, ρdb) equal to (0.8, 0.2, 0.0)
for all specified front faces. The first parameter of glMaterialfv() can take the
following values:
ambient .2 .6 0
scale 4 4 4 sphere
o GL_FRONT
o GL_BACK
o GL_FRONT_AND_BACK
Painting a Face
A face is colored using a polygon fill routine. A polygon fill routine is
sometimes called a tiler because it moves over a polygon pixel by pixel, coloring
each pixel. The pixels in a polygon are visited in a regular order, usually from the
bottom to the top of the polygon and from left to right.
In both kinds of shading, the vertices are passed down the graphics
pipeline, shading calculations are performed to attach a color to each vertex and the
vertices are converted to screen coordinates and the face is painted pixel by pixel
with the appropriate color.
for (int y = ybott; y <= ytop; y++) // for each scan line
{
    find xleft and xright
    for (int x = xleft; x <= xright; x++) // fill across the scan line
    {
        find the color c for this pixel
        put c into the pixel at (x, y)
    }
}
The two kinds of smooth shading are:
Gouraud shading
Phong shading
Gouraud Shading
The main difference between flat and smooth shading is the manner
in which the color c is determined in each pixel.
When a face is flat, like a roof, and the light sources are distant, the
diffuse light component varies little over different points on the roof. In such cases
we use the same color for every pixel covered by the face.
OpenGL offers a rendering mode in which the entire face is drawn
with the same color (established with glShadeModel(GL_FLAT)). In this mode, although a color is passed down the pipeline as
part of each vertex of the face, the painting algorithm uses only one color value. So
the command "find the color c for this pixel" is not inside the loops, but appears
before the loop, setting c to the color of one of the vertices.
colorleft = lerp(colorbott, color4, f),  where  f = (ys - ybott)/(y4 - ybott) ----------(1)
varies between 0 and 1 as ys varies from ybott to y4. Eq(1) involves three
calculations, since each color quantity has a red, green and blue component.
Colorright is found by interpolating the colors at the top and bottom
of the right edge. The tiler then fills across the scan line, linearly interpolating
between colorleft and colorright to obtain the color at pixel x:
C(x) = lerp( colorleft, colorright, (x - xleft)/(xright - xleft) )
When objects are rendered using flat shading, the individual faces
are clearly visible on both sides. Edges between faces actually appear more
pronounced than they would on an actual physical object, due to a phenomenon in
the eye known as lateral inhibition. When there is a discontinuity across an object,
the eye manufactures a Mach band at the discontinuity and a vivid edge is seen.
To speed up the fill, the color can be computed incrementally:
C(x+1) = C(x) + colorinc
The increment is calculated only once, outside of the innermost loop. The
code:
for ( int y= ybott; y<=ytop ; y++)
{
find xleft and xright
find colorleft and colorright
colorinc=( colorright - colorleft) / (xright - xleft);
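The incremental interpolation above can be sketched in full as a small self-contained routine; the function and type names here (fillSpan, Color) are illustrative, not from the text:

```cpp
#include <cassert>

struct Color { double r, g, b; };

// Fill one scan line by incremental linear interpolation between
// colorLeft and colorRight, as the tiler's inner loop does.
// Precondition: xright > xleft; 'out' holds xright - xleft + 1 entries.
void fillSpan(Color out[], int xleft, int xright,
              Color colorLeft, Color colorRight)
{
    int n = xright - xleft;                       // number of steps
    Color inc = { (colorRight.r - colorLeft.r) / n,
                  (colorRight.g - colorLeft.g) / n,
                  (colorRight.b - colorLeft.b) / n };
    Color c = colorLeft;
    for (int x = xleft; x <= xright; x++)         // C(x+1) = C(x) + colorinc
    {
        out[x - xleft] = c;
        c.r += inc.r; c.g += inc.g; c.b += inc.b;
    }
}
```

Only one addition per color component is needed per pixel, which is why the increment is hoisted out of the inner loop.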
Gouraud shading is requested in OpenGL with glShadeModel(GL_SMOOTH);
Phong Shading
The fig shows a projected face with the normal vectors m1, m2, m3
and m4 indicated at the four vertices. The normals mleft and mright are found
by linear interpolation along the left and right edges. Each interpolated vector must be
normalized to unit length before it is used in the shading formula. Once mleft and
mright are known, they are interpolated to form a normal vector at each x along the
scan line; that vector is used in the shading calculation to form the color at the pixel.
Interpolating normals
Procedural Textures
float fakeSphere(float s, float t)
{
    float r = sqrt((s - 0.5) * (s - 0.5) + (t - 0.5) * (t - 0.5));
    if (r < 0.3) return 1 - r / 0.3;  // sphere intensity
    else return 0.2;                  // dark background
}
This function varies from 1 (white) at the center to 0 (black) at the
edges of the sphere.
glBegin(GL_QUADS);
The fig. shows the use of texture coordinates that tile the texture,
making it repeat. To do this, some texture coordinates that lie outside the
interval [0,1] are used. When the rendering routine encounters a value of s or t outside
the unit square, such as s = 2.67, it ignores the integral part and uses only the
fractional part, 0.67. A point on a face that requires (s,t) = (2.6, 3.77) is textured with
texture (0.6, 0.77).
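The wrap rule described above can be written as a one-line helper (the name wrapCoord is illustrative, not from the text):

```cpp
#include <cassert>
#include <cmath>

// Wrap a texture coordinate into [0, 1) by discarding the integer
// part, mimicking the repeat-style tiling described above.
double wrapCoord(double s)
{
    return s - std::floor(s);   // fractional part; also correct for negative s
}
```

Using floor rather than casting to int keeps the rule correct for negative coordinates as well.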
The fig. shows a case where the four corners of the texture
square are associated with the four corners of a rectangle. In this example, the
texture is a 640-by-480 pixel bitmap and it is pasted onto a rectangle with aspect
ratio 640/480, so it appears without distortion.
The points inside F will be filled with texture values lying inside P,
by finding the internal coordinate values (s,t) through the use of interpolation.
to hold all of the coordinate pairs of the mesh. The two important techniques to treat
texture for an object are:
1.
2.
The fig shows the camera taking a snapshot of a face F with texture pasted
onto it and the rendering in progress. The scan line y is being filled from xleft to
xright. For each x along this scan line, we compute the correct position on the face
and, from that, obtain the correct position (s*, t*) within the texture.
The fig. shows the face of a barn. The left edge of the projected face has
endpoints a and b. The face extends from xleft to xright across scan line y. We need to
find appropriate texture coordinates (sleft, tleft) and (sright, tright) to attach to xleft
and xright, which we can then interpolate across the scan line.
Consider finding sleft(y), the value of sleft at scan line y. We know that
texture coordinate sA is attached to point a and sB is attached to point b. If the scan
line at y is a fraction f of the way between ybott and ytop, so that
f = (y - ybott)/(ytop - ybott), the proper texture coordinate to use is
sleft(y) = lerp(sA, sB, f)
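The edge interpolation just described can be sketched directly; the helper names (lerp, sLeft) are illustrative, not from the text:

```cpp
#include <cassert>
#include <cmath>

// Linear interpolation between a and b by fraction f in [0, 1].
double lerp(double a, double b, double f) { return a + (b - a) * f; }

// Texture coordinate attached to the left edge at scan line y:
// sA is attached at ybott, sB at ytop.
double sLeft(double y, double ybott, double ytop, double sA, double sB)
{
    double f = (y - ybott) / (ytop - ybott);  // fraction of the way up
    return lerp(sA, sB, f);
}
```

At the midpoint scan line the result is simply the average of sA and sB.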
Shading calculations are done using this normal, producing the color c = (cr,
cg, cb). The texture coordinates (sA, tA) are attached to A. Vertex A then undergoes the
perspective transformation, producing a = (a1, a2, a3, a4). The texture coordinates and
color c are not altered.
Next, clipping against the view volume is done. Clipping can cause some
vertices to disappear and new vertices to be formed. When a vertex D is created,
we determine its position (d1, d2, d3, d4) and attach to it the appropriate color and
texture point. After clipping, the face still consists of a number of vertices, to each of
which is attached a color and a texture point. For a point A, the information is
stored in the array (a1, a2, a3, a4, sA, tA, c, 1). A final term of 1 has been appended;
this is used in the next step.
Perspective division is then done. Since we need hyperbolic interpolation,
we divide every term in the array that we wish to interpolate hyperbolically by a4, to
obtain the array (x, y, z, 1, sA/a4, tA/a4, c, 1/a4). The first three components of the
array are (x, y, z) = (a1/a4, a2/a4, a3/a4).
Finally, the rendering routine receives the array (x, y, z, 1, sA/a4, tA/a4, c,
1/a4) for each vertex of the face to be rendered.
4.3.3 What does the texture Modulate?
There are three methods to apply the values in the texture map in the
rendering calculations: the texture can make the object glow, it can be painted onto
the surface as the diffuse color, or it can perturb the surface normals (bump mapping).
In bump mapping the goal is to make a scalar function texture(s,t) disturb the normal
vector at each spot in a controlled fashion; this disturbance should depend only on the
shape of the surface and the texture.
The first method creates a glowing object. The intensity is taken directly from the texture:
I = texture(s,t)
The object then appears to emit light or glow. Lower texture values emit
less light and higher texture values emit more light. No additional lighting
calculations are needed. OpenGL does this type of texturing using
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
The second method paints the texture onto the surface. The color of an object is the color
of its diffuse light component, so we can make the texture appear to be painted onto the
surface by varying the diffuse reflection coefficient. The texture function modulates the
value of the reflection coefficient from point to point; we replace eq(1) with
I = texture(s,t) [ Ia ρa + Id ρd × lambert ] + Isp ρs × phongf
For appropriate values of s and t, Phong specular reflections are the color
of the source and not the object, so highlights do not depend on the texture. OpenGL
does this type of texturing using
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
The third method is bump mapping. The fig. shows in cross section how bump mapping works. Suppose the
surface is represented parametrically by the function P(u,v) and has unit normal
vector m(u,v). Suppose further that the 3D point at (u*,v*) corresponds to the texture at
(u*,v*). Blinn's method simulates perturbing the position of the true surface in the
direction of the normal vector by an amount proportional to texture(u*,v*); that is,
P'(u*,v*) = P(u*,v*) + texture(u*,v*) m(u*,v*)
Figure (a) shows how this technique adds wrinkles to the surface. The disturbed
surface has a new normal vector m'(u*,v*) at each point. The idea is to use this
disturbed normal as if it were attached to the original undisturbed surface at each
point, as shown in figure (b). Blinn has demonstrated that a good approximation to
m'(u*,v*) is given by
m'(u*,v*) = m(u*,v*) + d(u*,v*)
where d is the perturbation vector derived from the texture. When computing shadows,
you are getting a second look at the object from the viewpoint of the light source.
There are two methods for computing shadows:
Shadows as texture
Creating shadows with the use of a shadow buffer
4.4.1 Shadows as Texture
Chrome mapping
A rough and blurry image that suggests the surrounding
environment is reflected in the object, as you would see in an object coated with
chrome.
Environment mapping
A recognizable image of the surrounding environment is seen
reflected in the object. Valuable visual clues are obtained from such reflections,
particularly when the object is moving.
4.4 ADDING SHADOWS OF OBJECTS
Shadows make an image more realistic. The way one object casts a
shadow on another object gives important visual clues as to how the two objects are
positioned with respect to each other; shadows convey a lot of information.
Fig(a) shows a box casting a shadow onto the floor. The shape of the
shadow is determined by the projections of each of the faces of the box onto the
plane of the floor, using the light source as the center of projection.
Fig(b) shows the superposed projections of two of the faces: the top face
projects to "top" and the front face to "front".
This provides the key to drawing the shadow. After drawing the plane with
the use of ambient, diffuse and specular light contributions, draw the six projections
of the box's faces on the plane, using only the ambient light. This technique will
draw the shadow in the right shape and color. Finally draw the box.
Building the Projected Face
To make the new face F' produced by F, we project each of the vertices of
F onto the plane. Suppose that the plane passes through point A and has normal
vector n. Consider projecting vertex V, producing V'. V' is the point where the ray
from the source at S through V hits the plane; this point is
V' = S + (V - S) [ n.(A - S) ] / [ n.(V - S) ]
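The projection formula can be evaluated directly; the small vector helpers and the name projectVertex below are illustrative, not from the text:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(double t, Vec3 a) { return { t*a.x, t*a.y, t*a.z }; }

// Project vertex V from light source S onto the plane through A with
// normal n, per V' = S + (V - S) * (n.(A - S)) / (n.(V - S)).
Vec3 projectVertex(Vec3 V, Vec3 S, Vec3 A, Vec3 n)
{
    double t = dot(n, sub(A, S)) / dot(n, sub(V, S));
    return add(S, scale(t, sub(V, S)));
}
```

For a floor plane z = 0, every projected vertex lands with z' = 0, which is a quick sanity check on the formula.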
This method uses a variant of the depth buffer that performs the removal
of hidden surfaces. An auxiliary second depth buffer, called a shadow buffer, is used
for each light source; this requires a lot of memory.
The method is based on the principle that any points in a scene that are
hidden from the light source must be in shadow. If no object lies between a point
and the light source, the point is not in shadow.
The shadow buffer contains a depth picture of the scene from the point of
view of the light source. Each of the elements of the buffer records the distance
from the source to the closest object in the associated direction. Rendering is done
in two stages:
1) Loading the shadow buffer
The shadow buffer is initialized with 1.0 in each element, the largest
pseudodepth possible. Then, through a camera positioned at the light source, each face of
the scene is rasterized, but only the pseudodepth of the point on the face is tested.
Each element of the shadow buffer keeps track of the smallest pseudodepth seen so
far.
The fig. shows a scene being viewed by the usual eye camera and
a source camera located at the light source. Suppose that point P is on the ray from
the source through the shadow buffer pixel d[i][j] and that point B on the pyramid is
also on this ray. If the pyramid is present, d[i][j] contains the pseudodepth to B; if
the pyramid happens to be absent, d[i][j] contains the pseudodepth to P.
2) Rendering the scene
Each face in the scene is rendered using the eye camera. Suppose the
eye camera sees point P through pixel p[c][r]. When rendering p[c][r], we need to
find
The Camera class definition contains fields for eye and the directions u,
v and n. Point3 and Vector3 are the basic data types. It also has fields that describe
the shape of the view volume: viewAngle, aspect, nearDist and farDist.
class Camera {
private:
    Point3 eye;
    Vector3 u, v, n;
    double viewAngle, aspect, nearDist, farDist; // view volume shape
    void setModelViewMatrix(); // tell OpenGL where the camera is
public:
    Camera(); // default constructor
    void set(Point3 eye, Point3 look, Vector3 up); // like gluLookAt()
    void roll(float angle); // roll it
    void pitch(float angle); // pitch it
    void yaw(float angle); // yaw it
    void slide(float delU, float delV, float delN); // slide it
    void setShape(float vAng, float asp, float nearD, float farD);
};
The modelview matrix that setModelViewMatrix() sends to OpenGL is
    | ux  uy  uz  dx |
V = | vx  vy  vz  dy |
    | nx  ny  nz  dz |
    | 0   0   0   1  |
where (dx, dy, dz) = (-eye.u, -eye.v, -eye.n). A typical usage is:
cam.slide(-1, 0, -2); // slide the camera
cam.roll(30);         // roll it through 30 degrees
cam.yaw(20);          // yaw it through 20 degrees
The method set() acts like gluLookAt(): it uses the values of eye, look
and up to compute u, v and n according to the equations n = eye - look,
u = up X n and v = n X u:
void Camera :: set(Point3 Eye, Point3 look, Vector3 up)
{ // create a modelview matrix and send it to OpenGL
    eye.set(Eye); // store the given eye position
    n.set(eye.x - look.x, eye.y - look.y, eye.z - look.z); // make n
    u.set(up.cross(n)); // make u = up X n
    n.normalize(); u.normalize(); // make them unit length
    v.set(n.cross(u)); // make v = n X u
    setModelViewMatrix(); // tell OpenGL
}
The utility routine setModelViewMatrix() loads the matrix into OpenGL:
void Camera :: setModelViewMatrix()
{ // load the modelview matrix with the existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z); // a vector version of eye
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m); // load OpenGL's modelview matrix
}
The routine setShape() is simple. It puts the four argument values into
the appropriate camera fields and then calls glMatrixMode(GL_PROJECTION), along with
glLoadIdentity() and gluPerspective(viewAngle, aspect, nearDist, farDist), to set the
projection matrix.
The central camera functions are slide(), roll(), yaw() and pitch(), which
make relative changes to the camera's position and orientation. The user flies the
camera through a scene interactively by pressing keys or clicking the mouse.
taking different snapshots. If the snapshots are stored and then played back, an
animation is produced of the camera flying around the scene.
There are six degrees of freedom for adjusting a camera: it can be slid in
three dimensions, and it can be rotated about any of three coordinate axes. Rolling,
pitching and yawing the camera each involve a rotation of the camera about one of
its own axes.
To roll the camera we rotate it about its own n-axis. This means that
both the directions u and v must be rotated, as shown in the fig.
Sliding the camera means moving it along one of its own axes, that is, in
the u, v or n direction, without rotating it. Since the camera is looking along the
negative n-axis, movement along n is forward or back. Movement along u is left or
right, and along v is up or down.
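The slide operation is pure vector arithmetic and can be checked in isolation; the standalone helper slideEye below is an illustrative sketch of what Camera::slide() does, not the class method itself:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Move an eye point along the camera's own axes u, v, n:
// eye' = eye + delU*u + delV*v + delN*n.
Vec3 slideEye(Vec3 eye, Vec3 u, Vec3 v, Vec3 n,
              double delU, double delV, double delN)
{
    eye.x += delU * u.x + delV * v.x + delN * n.x;
    eye.y += delU * u.y + delV * v.y + delN * n.y;
    eye.z += delU * u.z + delV * v.z + delN * n.z;
    return eye;
}
```

With an axis-aligned camera, sliding by delN = -2 moves the eye two units forward, since the camera looks along -n.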
To move the camera a distance D along its u axis, set eye to eye + Du.
For convenience ,we can combine the three possible slides in a single function:
slides the camera amount delU along u, delV along v and delN along n. The code is
as follows:
void Camera :: slide(float delU, float delV, float delN)
{
    eye.x += delU * u.x + delV * v.x + delN * n.x;
    eye.y += delU * u.y + delV * v.y + delN * n.y;
    eye.z += delU * u.z + delV * v.z + delN * n.z;
    setModelViewMatrix();
}
To roll the camera through an angle α, two new axes u' and v' are formed that lie in
the same plane as u and v and have been rotated through the angle α radians:
u' = cos(α) u - sin(α) v
v' = sin(α) u + cos(α) v
Implementation of roll()
void Camera :: roll(float angle)
{ // roll the camera through angle degrees
    float cs = cos(3.14159265 / 180 * angle);
    float sn = sin(3.14159265 / 180 * angle);
    Vector3 t = u; // remember old u
    u.set(cs * t.x - sn * v.x, cs * t.y - sn * v.y, cs * t.z - sn * v.z);
    v.set(sn * t.x + cs * v.x, sn * t.y + cs * v.y, sn * t.z + cs * v.z);
    setModelViewMatrix();
}
The implementations of pitch() and yaw() are similar: pitch() rotates the pair (v, n)
about the u-axis and yaw() rotates the pair (u, n) about the v-axis, and each ends with
a call to setModelViewMatrix().
The Camera class can be used with OpenGL to fly a camera through a scene. The
scene consists of only a teapot. The camera is a global object and is set up in
main(). When a key is pressed, myKeyboard() is called and the camera is slid or
rotated, depending on which key was pressed.
For instance, if 'P' is pressed, the camera is pitched up by 1 degree. If
CTRL+P is pressed, the camera is pitched down by 1 degree. After the keystroke
has been processed, glutPostRedisplay() causes myDisplay() to be called again to
draw the new picture.
#include "camera.h"
Camera cam; // global camera object
//---------------------- myKeyboard -------------------------------
void myKeyboard(unsigned char key, int x, int y)
{
    switch(key)
    { // controls for the camera
        case 'F': cam.slide(0, 0, 0.2); break;       // slide camera forward
        case 'F' - 64: cam.slide(0, 0, -0.2); break; // CTRL+F: slide camera back
        case 'P': cam.pitch(-1.0); break;
        case 'P' - 64: cam.pitch(1.0); break;        // CTRL+P
    }
    glutPostRedisplay(); // draw it again
}
//-------------------------- myDisplay ------------------------------
void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutWireTeapot(1.0);
    glFlush();
    glutSwapBuffers();
}
//-------------------------- main ----------------------------
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitWindowPosition(50, 50);
    glutDisplayFunc(myDisplay);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);  // background is white
    glColor3f(0.0f, 0.0f, 0.0f);           // set color of stuff
    cam.set(4, 4, 4, 0, 0, 0, 0, 1, 0);    // make the initial camera
glutMainLoop();
}
UNIT V FRACTALS
Computers are good at repetition. In addition, the high precision with which modern
computers can do calculations allows an algorithm to take a closer look at an object,
to get greater levels of detail.
Computer graphics can produce pictures of things that do not even exist in
nature or perhaps could never exist. We will study the inherent finiteness of any
computer generated picture: it has finite resolution and finite size, and it must be
made in a finite amount of time. The pictures we make can only be approximations,
and the observer of such a picture uses it just as a hint of what the underlying object
really looks like.
The Koch Curve
Successive generations of the Koch curve are denoted K0, K1, K2, ... The
zeroth generation shape K0 is a horizontal line of length unity.
To create K1, divide the line K0 into three equal parts and replace the
middle section with a triangular bump having sides of length 1/3. The total length of
the line is then 4/3. The second-order curve K2 is formed by building a bump on each
of the four line segments of K1.
Subdivide each segment of Kn into three equal parts and replace the middle
part with a bump in the shape of an equilateral triangle.
Other curves are statistically self-similar, such that the wiggles and
irregularities in the curve are the same on the average, no matter how many times
the picture is enlarged. Example: Coastline.
We call n the order of the curve Kn, and we say the order-n Koch
curve consists of four versions of the order (n-1) Koch curve. To draw K2 we draw a
smaller version of K1, then turn left 60°, draw K1 again, turn right 120°, draw K1
again, then turn left 60° and draw K1 one final time. For the snowflake this routine is
performed just three times, with a 120° turn in between.
The recursive method for drawing any order Koch curve is given
in the following pseudocode:
The Koch snowflake of the above figure is formed out of three Koch
curves joined together. The perimeter of the ith generation shape Si is three times the
length of a single Koch curve, namely 3(4/3)^i, which grows forever as i increases. But
the area inside the Koch snowflake grows quite slowly. So the edge of the Koch
snowflake gets rougher and rougher and longer and longer, but the area remains
bounded.
Koch snowflakes S3, S4 and S5
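The perimeter growth can be computed directly from the formula above; the helper name snowflakePerimeter is illustrative, not from the text:

```cpp
#include <cassert>
#include <cmath>

// Perimeter of the ith-generation Koch snowflake: three Koch curves,
// each of length (4/3)^i, giving 3 * (4/3)^i in total.
double snowflakePerimeter(int i)
{
    return 3.0 * std::pow(4.0 / 3.0, i);
}
```

S0 is an equilateral triangle of perimeter 3; each generation multiplies the perimeter by 4/3, so it grows without bound even though the enclosed area stays finite.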
To draw Kn:
{
    if (n == 0) draw a straight line;
    else {
        Draw Kn-1;
        Turn left 60°;
        Draw Kn-1;
        Turn right 120°;
        Draw Kn-1;
        Turn left 60°;
        Draw Kn-1;
    }
}
void drawKoch(double dir, double len, int n)
{ // Koch curve to order n, in direction dir, using line length len
    double dirRad = 0.0174533 * dir; // in radians
    if (n == 0)
        cvs.lineRel(len * cos(dirRad), len * sin(dirRad));
    else {
        n--;        // reduce the order
        len /= 3;   // and the segment length
        drawKoch(dir, len, n);
        dir += 60;
        drawKoch(dir, len, n);
        dir -= 120;
        drawKoch(dir, len, n);
        dir += 60;
        drawKoch(dir, len, n);
    }
}
Since each affine map reduces the size of its image at least slightly, the
orbit converges to a unique image called the attractor of the IFS. We denote the
attractor by the set A; some of its important properties are:
1. The attractor set A is a fixed point of the mapping W(.), which we write as
W(A) = A. That is, putting A through the copier again produces exactly the
same image A. The iterates have already converged to the set A, so iterating once
more makes no difference.
2. Starting with any input image B and iterating the copying process enough
times, we find that the orbit of images always converges to the same A.
If Ik = W(k)(B) is the kth iterate of image B, then as k goes to
infinity, Ik becomes indistinguishable from the attractor A.
I = set of all black points = { (x,y) such that (x,y) is colored black }
I is the input image to the copier. The ith lens, characterized
by transformation Ti, builds a new set of points we denote as Ti(I) and adds them to
the image being produced at the current iteration. Each added set Ti(I) is the set of
all transformed points of I:
Ti(I) = { (x', y') such that (x', y') = Ti(x, y) for some (x, y) in I }
For instance, the copy of the first image I0 is the set W(I0). A routine to draw the
kth iterate of an input point list pts for the IFS has the following skeleton:
void superCopier(RealPolyArray pts, int k)
{ // draw the kth iterate of the input point list pts for the IFS
    int i;
    RealPolyArray newpts; // to hold the transformed points
    if (k == 0) drawPoints(pts);
    else for (i = 1; i <= N; i++) // for each affine map
    {
        newpts.num = N * pts.num;
        // apply the ith affine map to every point of pts, placing
        // the results in newpts
        superCopier(newpts, k - 1);
    }
}
Drawbacks:
Inefficient
A huge amount of memory is required.
5.3.4 The Chaos Game
The iteration
P = ½ (P + p[..])
can be written as
P = | ½  0 | P + ½ p[..]
    | 0  ½ |
so that P is subjected to the affine map, and the transformed version is then written
back into P. The offset for this map depends on which point p[i] is chosen.
int index;
do {
    index = chooseAffine(pr, N);   // choose map i with probability pr[i]
    P = transform(aff[index], P);  // apply the chosen affine map to P
    drawRealDot(P);                // draw the new point
} while (!bored);
One of the three affine maps is chosen at random each time, and
the previous point P is subjected to it. The new point is drawn and becomes the next
point to be transformed. Also listed with each map is the probability pri that the map is
chosen at each iteration.
The idea behind this approach is that the attractor consists of all
points that are reachable by applying a long sequence of affines in the IFS. The
randomness is invoked to ensure that the system is fully exercised, that every
combination of affine maps is used somewhere in the process.
Adding Color
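The chaos game above, specialized to the three halving maps of the Sierpinski triangle, can be sketched as a self-contained routine; the names Pt and chaosGame are illustrative, not from the text:

```cpp
#include <cassert>
#include <cstdlib>

struct Pt { double x, y; };

// Chaos game for the Sierpinski triangle: repeatedly move the current
// point halfway toward a randomly chosen corner, P = (P + p[index]) / 2.
// Returns the point reached after n iterations from 'start'.
Pt chaosGame(Pt start, const Pt corners[3], int n, unsigned seed)
{
    std::srand(seed);
    Pt P = start;
    for (int i = 0; i < n; i++)
    {
        int index = std::rand() % 3;           // pick a corner at random
        P.x = 0.5 * (P.x + corners[index].x);  // scale by 1/2 and offset
        P.y = 0.5 * (P.y + corners[index].y);
    }
    return P;
}
```

Since every map is a contraction toward a corner, the orbit stays inside the triangle's bounding box; plotting each intermediate P (instead of only the last) fills in the attractor.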
The Mandelbrot set arises from the simple iterated function
f(z) = z² + c   -------------------------------(1)
where c is some constant. The system produces each output by squaring its input
and adding c. We assume that the process begins with the starting value s, so the
system generates the sequence of values, or orbit,
d1 = (s)² + c
d2 = ((s)² + c)² + c   ------------------------------(2)
and so on. The fixed points of the system are the solutions of p = p² + c, namely
p+, p- = ½ ± √(¼ - c)   ----------------------------------(3)
For each complex number c, either the orbit is finite, so that however
far along the orbit one goes the values remain finite, or the orbit explodes, that is, the
values get larger without limit. The Mandelbrot set, denoted by M, contains just
those values of c that result in finite orbits.
The IFS works well with both complex and real numbers. Both s
and c are complex numbers, and at each iteration we square the previous result and
add c. Squaring a complex number z = x + yi yields the new complex number
(x + yi)² = x² - y² + 2xyi   --------------------------------(4)
having real part x² - y² and imaginary part 2xy.
Definition:
The Mandelbrot set M is the set of all complex numbers c that
produce a finite orbit of 0.
5.4.3 Computing whether Point c is in the Mandelbrot Set
If the magnitude |dk| of an orbit value ever exceeds the
value of 2, then the orbit will explode at some point. The number of iterations |dk|
takes to exceed 2 is called the dwell of the orbit. If |dk| has not exceeded 2 after
Num iterates, we assume that it never will, and we conclude that c is in M. The
orbits for values of c just outside the boundary of M have a large dwell, and if their
dwell exceeds Num, we wrongly decide that they lie inside M. A drawing based on
too small a value of Num will show a Mandelbrot set that is slightly too large. In the
code, |dk|² is kept in fsq and compared with 4. The dwell() function plays a key role in
drawing the set.
dwell() routine
int dwell(double cx, double cy)
{ // return the dwell of the orbit of 0 for c = cx + cy·i
    #define Num 100 // increase this for better pictures
    double tmp, dx = cx, dy = cy, fsq = cx * cx + cy * cy;
    int count;
    for (count = 0; count <= Num && fsq <= 4; count++)
    {
        tmp = dx;                     // save old real part
        dx = dx * dx - dy * dy + cx;  // new real part
        dy = 2.0 * tmp * dy + cy;     // new imaginary part
        fsq = dx * dx + dy * dy;
    }
    return count; // number of iterations used
}
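The same dwell test can be packaged as a simple membership predicate and checked on known points; the name inMandelbrot is illustrative, not from the text:

```cpp
#include <cassert>

// Test whether c = cx + cy*i appears to be in the Mandelbrot set by
// iterating d = d*d + c from 0 and watching whether |d|^2 exceeds 4
// within maxIter steps (the dwell test described above).
bool inMandelbrot(double cx, double cy, int maxIter)
{
    double dx = 0.0, dy = 0.0;
    for (int k = 0; k < maxIter; k++)
    {
        double tmp = dx;
        dx = dx * dx - dy * dy + cx;   // real part of d^2 + c
        dy = 2.0 * tmp * dy + cy;      // imaginary part of d^2 + c
        if (dx * dx + dy * dy > 4.0) return false; // orbit explodes
    }
    return true;  // orbit assumed finite: c is (probably) in M
}
```

For c = 0 the orbit stays at 0, for c = -1 it cycles between 0 and -1, and for c = 1 it grows without bound, matching the definition of M.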
Each pixel in the raster corresponds to a value of c. If the window in the
complex plane has width W and center (Px, Py), and the raster has cols columns, the
value of c for the pixel in column i has real part
Re(cij) = Px - W/2 + W (i + ½)/cols   ------------------------(5)
with the imaginary part formed in the same way from Py, the window height and the
row number j.
Like the Mandelbrot set, Julia sets are extremely complicated sets
of points in the complex plane. There is a different Julia set, denoted Jc, for each
value of c. A closely related variation is the filled-in Julia set, denoted by Kc, which
is easier to define.
In the IFS we now set c to some fixed chosen value and examine what
happens for different starting points s. We ask how the orbit of starting point s
behaves: either it explodes or it doesn't. If it is finite, we say the starting point s is
in Kc; otherwise s lies outside of Kc.
Definition:
The filled-in Julia set at c, Kc, is the set of all starting points
whose orbits are finite.
u
t
e
s
c
/
:/
p
t
t
When studying Kc, one chooses a single value for c and considers
different starting points. Kc should always be symmetrical about the origin, since
the orbits of s and -s become identical after one iteration.
5.5.2 Drawing Filled-in Julia Sets
Whether a starting point s is in Kc depends on whether its orbit is finite
or explodes; the process of drawing a filled-in Julia set is almost identical to that for
the Mandelbrot set. We choose a window in the complex plane and associate pixels
with points in the window. The pixels now correspond to different values of the starting
point s. A single value of c is chosen, and then the orbit for each pixel position is
examined to see if it explodes and, if so, how quickly it explodes.
Pseudocode for drawing a region of the filled-in Julia set:
for (j = 0; j < rows; j++)
    for (k = 0; k < cols; k++)
    {
        find the value of s that corresponds to pixel (j, k)
        estimate the dwell of the orbit of s
        find Color determined by the estimated dwell
        setPixel(j, k, Color);
    }
The dwell() must be passed the starting point s as well as c.
Making a high-resolution image of a Kc requires a great deal of computer time,
since a complex calculation is associated with every pixel.
Also, when working close to the boundary of the set, you should
use a larger value of Num. The calculation times for each image will increase
as you zoom in on a region of the boundary, but images of modest size
can easily be created on a microcomputer in a reasonable amount of time.
For all other values of c, the set Kc is complex. It has been shown
that each Kc is one of two types:
Kc is connected, or
Kc is a Cantor set
A theoretical result is that Kc is connected for precisely those
values of c that lie in the Mandelbrot set.
s, f(s), f²(s), f³(s), ...   (orbit of s)
If the process started instead at f(s), the image of s, the orbit would be
f(s), f²(s), f³(s), f⁴(s), ...   (orbit of f(s))
and the two orbits have the same values forever. If the orbit of s is finite, then
so is the orbit of its image f(s). All of the points in the orbit, if considered as
starting points on their own, have orbits with the same behavior: they all are
finite or they all explode.
Any starting point whose orbit passes through s has the same
behavior as the orbit that starts at s: the two orbits are identical forever. The point
just before s in the sequence is called the preimage of s and is the inverse of
f(z) = z² + c; the preimages of z are
±√(z - c)   ------------------(6)
Two special cases are simple:
1. c = 0: the filled-in Julia set K0 is the set of all complex numbers lying inside the
unit circle, the circle of radius 1 centered at the origin.
2. c = -2: in this case it turns out that the filled-in Julia set consists of the segment
of the real axis between -2 and 2.
orbit of s is shown in black dots, and the two preimages of s are marked. The two
orbits of these preimages join up with that of s.
Each of these preimages has two preimages, and each of those has
two, so there is a huge collection of orbits that join up with the orbit of s and are
thereafter committed to the same path. The tree of preimages of s is illustrated in
fig(B): s has two parent preimages, four grandparents, etc. Going back k generations,
we find that there are 2^k preimages.
The Julia set Jc can be characterized in many ways that are more
precise than simply saying it is the boundary of Kc. One such characterization that
suggests an algorithm for drawing Jc is the following:
This suggests computing a point of Jc and then all of its preimages, which also
lie in Jc. There are two obstacles:
1. finding a point in Jc
2. keeping track of all the preimages
An approach known as the backward-iteration method
overcomes these obstacles and produces good results. The idea is simple: choose
some point z in the complex plane. The point may or may not be in Jc. Now iterate
in the backward direction: at each iteration choose one of the two square roots
randomly to produce a new z value. The following pseudocode is illustrative:
do {
    if (coin flip is heads) z = +√(z - c);
    else z = -√(z - c);
    draw dot at z;
} while (!bored);
The idea is that for any reasonable starting point iterating
backwards a few times will produce a z that is in Jc. It is as if the backward orbit is
sucked into the Julia set. Once it is in the Julia set, all subsequent iterations are
there, so point after point builds up inside J c, and a picture emerges.
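The backward iteration can be sketched with the standard complex-number library; the function name backwardIterate is illustrative, not from the text:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstdlib>

// Backward-iteration sketch: repeatedly replace z by a randomly chosen
// square root of (z - c). Successive points are drawn toward the Julia
// set Jc. For c = 0, Jc is the unit circle, so |z| approaches 1.
std::complex<double> backwardIterate(std::complex<double> z,
                                     std::complex<double> c,
                                     int n, unsigned seed)
{
    std::srand(seed);
    for (int i = 0; i < n; i++)
    {
        std::complex<double> root = std::sqrt(z - c);
        if (std::rand() % 2 == 0) z = root;   // "coin flip is heads"
        else z = -root;                       // the other square root
    }
    return z;
}
```

Each square root halves the logarithm of |z - c|, which is why even a starting point far from the set is pulled onto Jc within a few dozen iterations.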
The elbow point C is offset from the midpoint of segment AB by an amount t
computed randomly. If t is positive, the elbow lies to one side of AB; if t is negative,
it lies to the other side.
The routine fract() generates curves that approximate actual
fractals. The routine recursively replaces each segment in a random elbow with a
smaller random elbow. The stopping criterion used is: when the length of the
segment is small enough, the segment is drawn using cvs.lineTo(), where cvs is a
Canvas object. The variable t is made to be approximately Gaussian in its
distribution by summing together 12 uniformly distributed random values lying
between 0 and 1. The result has a mean value of 6 and a variance of 1. The mean
value is then shifted to 0, and the variance is scaled as necessary.
void fract(Point2 A, Point2 B, double stdDev)
{ // generate a fractal curve from A to B
    double xDiff = A.x - B.x, yDiff = A.y - B.y;
    Point2 C;
    if (xDiff * xDiff + yDiff * yDiff < minLenSq)
        cvs.lineTo(B.x, B.y); // draw the final short line segment
    else
    {
        stdDev *= factor; // scale stdDev by factor
        double t = 0;
        for (int i = 0; i < 12; i++) // sum 12 uniform random values
            t += rand() / 32768.0;
        t = (t - 6) * stdDev; // shift the mean to 0 and scale
        C.x = 0.5 * (A.x + B.x) - t * (B.y - A.y);
        C.y = 0.5 * (A.y + B.y) + t * (B.x - A.x);
        fract(A, C, stdDev);
        fract(C, B, stdDev);
    }
}
In such fractal curves the underlying noise has a power spectrum of the form
S(f) = 1/f^β
where β, the power of the noise process, is the parameter the user
can set to control the jaggedness of the fractal noise. When β is 2, the process is
known as Brownian motion, and when β is 1, the process is called 1/f noise. 1/f
noise is self-similar and has been shown to be a good model for physical processes such as
clouds. The fractal dimension of such processes is
D = (5 - β)/2   -----------------------------------(7)
for a fractal curve. Values of β larger than 2 lead to smoother curves, and values
smaller than 2 lead to more jagged curves. The value of factor is given by
factor = 2^(1 - β/2)
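The approximate Gaussian variate that fract() uses can be isolated and checked on its own; the helper name gaussianVariate is illustrative, not from the text:

```cpp
#include <cassert>
#include <cstdlib>

// Approximate a Gaussian variate the way fract() does: sum 12 uniform
// values in [0, 1) (mean 6, variance 1), then shift the mean to 0.
double gaussianVariate()
{
    double t = 0.0;
    for (int i = 0; i < 12; i++)
        t += std::rand() / (RAND_MAX + 1.0);  // uniform in [0, 1)
    return t - 6.0;   // mean 0, variance approximately 1
}
```

The sum of 12 independent uniforms has mean 12 × 0.5 = 6 and variance 12 × (1/12) = 1, so subtracting 6 gives a zero-mean, unit-variance variate that the central limit theorem makes nearly Gaussian.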
First the ray is transformed into the generic coordinates of the object, and
then the various intersections with the generic object are computed.
1) Intersecting with a Square
The generic square lies in the z = 0 plane and extends from -1 to 1 in both x
and y. The square can be transformed into any parallelogram positioned in space,
so it is often used in scenes to provide thin, flat surfaces such as walls and windows.
The function hit() first finds where the ray hits the generic plane and then tests
whether this hit spot also lies within the square.
2) Intersecting with a Tapered Cylinder
The side of the cylinder is part of an infinitely long wall with a radius of 1
at z = 0 and a smaller radius of s at z = 1. This wall has the implicit form
F(x, y, z) = x² + y² - (1 + (s - 1)z)² , for 0 < z < 1
If s = 1, the shape becomes the generic cylinder; if s = 0, it becomes the
generic cone. We develop a hit() method for the tapered cylinder, which also
provides a hit() method for the cylinder and cone.
3) Intersecting with a Cube (or any Convex Polyhedron)
The generic cube can be used as an extent for the other generic primitives
in the sense of a bounding box: each generic primitive, such as the cylinder, fits
snugly inside the cube.
To render the surface of an object with an environment map, we project pixel areas onto the surface and then
reflect the projected pixel area onto the environment map to pick up the surface-shading
attributes for each pixel. If the object is transparent, we can also refract the
projected pixel area to the environment map. The environment-mapping process
for reflection of a projected pixel area is shown in the figure. Pixel intensity is
determined by averaging the intensity values within the intersected region of the
environment map.
4) Adding More Primitives
To find where the ray S + ct intersects the surface, we substitute S + ct for
P in F(P) (the implicit form of the shape). This gives the function

d(t) = F(S + ct)

which is:

positive at those values of t for which the ray is outside the object,
zero when the ray coincides with the surface of the object and
negative when the ray is inside the surface.

A simple method for adding surface detail is to model structure and patterns
with polygon facets. For large-scale detail, polygon modeling can give good results.
We could also model an irregular surface with small, randomly oriented polygon
facets, provided the facets are not too small.

Surface-pattern polygons are generally overlaid on a larger surface polygon and
are processed with the parent surface. Only the parent polygon is processed by the
visible-surface algorithms, but the illumination parameters for the surface-detail
polygons take precedence over the parent polygon. When fine surface detail is to be
modeled, polygons are not practical.
For quadrics such as the sphere, d(t) has a parabolic shape; for the torus, it has a
quartic shape. For other surfaces, such as the superellipsoid, d(t) may be so
complicated that we have to search numerically to locate the values of t for which
d(.) equals zero.
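One common numerical search of this kind marches along the ray until d(t) changes sign and then bisects the bracketing interval. The step size and tolerance below are illustrative choices, not values from the text:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Find the first root of d(t) = F(S + c t) on (0, tMax] by marching
// until a sign change, then bisecting. Returns false if no root found.
bool firstRoot(const std::function<double(double)>& d,
               double tMax, double& tHit) {
    const double step = 0.01, tol = 1e-9;
    double a = 0.0, fa = d(a);
    for (double b = step; b <= tMax; b += step) {
        double fb = d(b);
        if (fa * fb <= 0.0) {               // sign change: root in [a, b]
            while (b - a > tol) {           // bisection refinement
                double m = 0.5 * (a + b);
                if (fa * d(m) <= 0.0) b = m;
                else { a = m; fa = d(a); }
            }
            tHit = 0.5 * (a + b);
            return true;
        }
        a = b; fa = fb;
    }
    return false;
}
```

For the unit sphere, d(t) = |S + ct|^2 - 1, so a ray from (3, 0, 0) aimed along (-1, 0, 0) gives d(t) = (3 - t)^2 - 1 with first root t = 2, which the search recovers.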
The mapping from image space to texture space requires calculation of
the inverse viewing-projection transformation mVP^-1 and the inverse texture-map
transformation mT^-1. The texture coordinates are linear functions of the surface
parameters:

U = fu(s, t) = au s + bu t + cu
V = fv(s, t) = av s + bv t + cv

For bump mapping we work with the surface normal

N = Pu x Pv

where Pu and Pv are the partial derivatives of the position vector P with respect to
the parameters u and v. A bump function b(u, v) adds bumps to the surface in the
direction of the unit surface normal n = N/|N|. The perturbed surface normal is
then obtained as

N' = Pu' x Pv'

We calculate the partial derivative with respect to u of the perturbed position vector
as

Pu' = Pu + bu n + b nu

Assuming the bump function b is small, we can neglect the last term and write

Pu' = Pu + bu n

Similarly Pv' = Pv + bv n, and the perturbed surface normal is

N' = Pu x Pv + bv (Pu x n) + bu (n x Pv) + bu bv (n x n)

The last term vanishes because n x n = 0.
Although texture mapping can be used to add fine surface detail, it is not a
good method for modeling the surface roughness that appears on objects such as
oranges, strawberries and raisins. The illumination detail in the texture pattern
usually does not correspond to the illumination direction in the scene.
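The perturbed-normal formula from the derivation above can be sketched directly in code. The Vec3 struct and function names are illustrative assumptions; bu and bv stand for the bump-map partial derivatives at the point:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// N' = Pu x Pv + bv (Pu x n) + bu (n x Pv); the bu*bv*(n x n) term is
// zero and dropped. n is assumed to be the unit surface normal.
Vec3 perturbedNormal(const Vec3& Pu, const Vec3& Pv, const Vec3& n,
                     double bu, double bv) {
    Vec3 N  = cross(Pu, Pv);
    Vec3 t1 = cross(Pu, n);
    Vec3 t2 = cross(n, Pv);
    return { N.x + bv * t1.x + bu * t2.x,
             N.y + bv * t1.y + bu * t2.y,
             N.z + bv * t1.z + bu * t2.z };
}
```

Shading then uses the normalized N' in place of n, which is what makes the flat surface appear bumpy.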
where, for rings about the z-axis, the radius r = sqrt(x^2 + y^2). The value of the
function rings() jumps between zero and unity as r increases from zero.
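A minimal sketch of such a rings() function, assuming unit-width rings about the z-axis (the exact ring spacing is an assumption, not stated in the text):

```cpp
#include <cassert>
#include <cmath>

// Alternates 0, 1, 0, 1, ... as the ring radius r = sqrt(x^2 + y^2)
// crosses each integer, producing concentric rings about the z-axis.
int rings(double x, double y) {
    double r = std::sqrt(x * x + y * y);
    return (int)r % 2;
}
```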
5.8.7 Turbulence
Turbulence is built by summing noise terms over a range of frequencies, with each
doubling of frequency contributing half as much, i.e. terms of the form

(1/2^k) noise(2^k s, x, y, z).
Marble shows veins of dark and light material that have some regularity,
but that also exhibit strongly chaotic irregularities. We can build up a marble-like
3D texture by giving the veins a smoothly fluctuating behavior in the z-direction
and then perturbing it chaotically using turb(). We start with a texture that is
constant in x and y and smoothly varying in z.
marble(x, y, z) = undulate(sin(2 pi z))

Here undulate() is a spline-shaped function that varies between some
dark and some light value as its argument varies from -1 to 1.
One of the great strengths of the ray-tracing method is the ease with which it can
handle both reflection and refraction of light. This allows one to build scenes of
exquisite realism, containing mirrors, fishbowls, lenses and the like. There can be
multiple reflections in which light bounces off several shiny surfaces before
reaching the eye, or elaborate combinations of refraction and reflection. Each of
these processes requires the spawning and tracing of additional rays.
In ray tracing scenes that include transparent objects, we must keep track
of the medium through which a ray is passing, so that we can determine the value
C2/C1 at the next intersection, where the ray either exits from the current object or
enters another one. This tracking is most easily accomplished by adding a field to
the ray that holds a pointer to the object within which the ray is travelling.
Figure 5.15 shows a ray emanating from the eye in the direction dir
and hitting a surface at the point Ph. When the surface is mirror-like or transparent,
the light I that reaches the eye may have five components.
The first three are the familiar ambient, diffuse and specular
contributions. The diffuse and specular parts arise from light sources in the
environment that are visible at Ph. Irefl is the reflected light component, arising
from the light IR that is incident at Ph along the direction r. This direction is such
that the angles of incidence and reflection are equal, so

r = dir - 2 (dir . m) m

where m is the unit normal to the surface at Ph.
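The mirror-reflection formula above translates directly into code. This is a minimal sketch, assuming m is already a unit vector:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Reflection direction r = dir - 2 (dir . m) m, with m the unit normal.
Vec3 reflect(const Vec3& dir, const Vec3& m) {
    double d = dir.x * m.x + dir.y * m.y + dir.z * m.z;  // dir . m
    return { dir.x - 2.0 * d * m.x,
             dir.y - 2.0 * d * m.y,
             dir.z - 2.0 * d * m.z };
}
```

For a ray travelling down-and-right onto a horizontal surface, the reflected ray travels up-and-right, with equal angles on either side of the normal.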
Similarly, Itran is the transmitted light component, arising from the light IT
that is transmitted through the transparent material to Ph along the direction t. A
portion of this light passes through the surface and in so doing is bent, continuing
its travel along dir. The refraction direction t depends on several factors.
I is a sum of various light contributions, and IR and IT each arise from their
own five components: ambient, diffuse and so on. IR is the light that arrives at Ph
along the direction r. To determine IR, we spawn a
secondary ray from Ph in the direction r, find the first object it hits and then repeat
the same computation of light components there. Similarly, IT is found by casting a ray in
the direction t, seeing what surface is hit first, and then computing the light
contributions there.
Intersection lftInter, rtInter;
if (ray misses the extents) return false;
if ((!left->hit(r, lftInter)) || (!right->hit(r, rtInter)))
    return false;
return (inter.numHits > 0);
}
Extent tests are first made to see if there is an early out. Then the proper
hit() routine is called for the left subtree and, unless the ray misses this subtree, the
hit list lftInter is formed. If there is a miss, hit() returns the value false immediately,
because the ray must hit both subtrees in order to hit their intersection. Then the hit
list rtInter is formed.
The code is similar for the UnionBool and DifferenceBool classes. For
UnionBool::hit(), the two hit lists are formed using

if ((!left->hit(r, lftInter)) && (!right->hit(r, rtInter)))
    return false;
which provides an early out only if both hit lists are empty.
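Once both hit lists exist, the Boolean operation combines the t-intervals during which the ray is inside each child. As a simplified sketch, reduce each hit list to a single [tIn, tOut] span (the book's Intersection records hold full lists; Span and intersectSpans are illustrative names):

```cpp
#include <algorithm>
#include <cassert>

// A hit list simplified to one inside-interval per object.
struct Span {
    double tIn, tOut;
    bool empty() const { return tIn >= tOut; }
};

// Intersection: the ray is inside both objects only where spans overlap.
// An empty result means the combined object is missed.
Span intersectSpans(const Span& a, const Span& b) {
    return { std::max(a.tIn, b.tIn), std::min(a.tOut, b.tOut) };
}
```

Union would instead merge overlapping spans, and difference would subtract the right child's spans from the left's; the hit time reported is the smallest tIn of the combined result.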
Ray tracing can also handle objects that are Boolean combinations of simpler objects. The
ray is inside lens L from t3 to t2, and the hit time is t3. If the lens is opaque, the familiar
shading rules will be applied to find what color the lens is at the hit spot. If the lens
is mirror-like or transparent, spawned rays are generated with the proper directions
and are traced as shown in figure 5.18.

Ray 1 first strikes the bowl at t1, the smallest of the times for which it is in S1
but not in either S2 or C. Ray 2, on the other hand, first hits the bowl at t5. Again this
is the smallest time for which the ray is in S1 but in neither the other sphere nor the
cone. The hits at earlier times are hits with component parts of the bowl, but not
with the bowl itself.
if (!left->hit(r, lftInter)) return false;

which gives an early out if the ray misses the left subtree, since it must then miss the
whole object.