
CG Full Sem


Midpoint ellipse drawing algorithm

The midpoint ellipse algorithm plots (finds) points of an ellipse in the first
quadrant by dividing the quadrant into two regions.
Each point (x, y) is then projected into the other three quadrants (-x, y),
(x, -y), (-x, -y), i.e. it uses 4-way symmetry.
Function of the ellipse:
f_ellipse(x, y) = ry^2 x^2 + rx^2 y^2 - rx^2 ry^2
f_ellipse(x, y) < 0  then (x, y) is inside the ellipse.
f_ellipse(x, y) > 0  then (x, y) is outside the ellipse.
f_ellipse(x, y) = 0  then (x, y) is on the ellipse.
 
Decision parameter:
Initially, we have two decision parameters: p1_0 in region 1 and p2_0 in
region 2.
The decision parameter p1_0 in region 1 is given as:
p1_0 = ry^2 - rx^2 ry + (1/4) rx^2
Mid-Point Ellipse Algorithm:
1. Take as input the radius along the x axis (rx), the radius along the y axis
   (ry) and the centre (xc, yc) of the ellipse.
2. Initially, assume the ellipse to be centred at the origin and take the
   first point as (x_0, y_0) = (0, ry).
3. Obtain the initial decision parameter for region 1 as:
   p1_0 = ry^2 - rx^2 ry + (1/4) rx^2
4. For every x_k position in region 1:
   If p1_k < 0, the next point along the ellipse is (x_k + 1, y_k) and
   p1_{k+1} = p1_k + 2 ry^2 x_{k+1} + ry^2
   Else, the next point is (x_k + 1, y_k - 1) and
   p1_{k+1} = p1_k + 2 ry^2 x_{k+1} - 2 rx^2 y_{k+1} + ry^2
5. Obtain the initial value in region 2 using the last point (x_0, y_0)
   of region 1 as:
   p2_0 = ry^2 (x_0 + 1/2)^2 + rx^2 (y_0 - 1)^2 - rx^2 ry^2
6. At each y_k in region 2, starting at k = 0, perform the following test:
   If p2_k > 0, the next point is (x_k, y_k - 1) and
   p2_{k+1} = p2_k - 2 rx^2 y_{k+1} + rx^2
7. Else, the next point is (x_k + 1, y_k - 1) and
   p2_{k+1} = p2_k + 2 ry^2 x_{k+1} - 2 rx^2 y_{k+1} + rx^2
8. Now obtain the symmetric points in the other three quadrants and
   plot the coordinate values as: x = x + xc, y = y + yc.
9. Repeat the steps for region 1 until 2 ry^2 x >= 2 rx^2 y.
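To make the steps above concrete, here is a small illustrative C sketch of the
midpoint ellipse algorithm (not taken from the notes; the plot_point helper
simply prints the four symmetric points and would normally be replaced by a
pixel routine such as putpixel):

#include <stdio.h>

/* Plot (x, y) and its mirror images in the other three quadrants,
   shifted to the ellipse centre (xc, yc).  4-way symmetry. */
void plot_point(int xc, int yc, int x, int y) {
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d)\n",
           xc + x, yc + y, xc - x, yc + y, xc + x, yc - y, xc - x, yc - y);
}

void midpoint_ellipse(int rx, int ry, int xc, int yc) {
    double x = 0, y = ry;
    /* Region 1: p1_0 = ry^2 - rx^2*ry + rx^2/4 */
    double p1 = ry * ry - rx * rx * ry + 0.25 * rx * rx;
    while (2.0 * ry * ry * x < 2.0 * rx * rx * y) {   /* until slope = -1 */
        plot_point(xc, yc, (int)x, (int)y);
        x++;
        if (p1 < 0)
            p1 += 2 * ry * ry * x + ry * ry;
        else {
            y--;
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry;
        }
    }
    /* Region 2: p2_0 = ry^2*(x+1/2)^2 + rx^2*(y-1)^2 - rx^2*ry^2 */
    double p2 = ry * ry * (x + 0.5) * (x + 0.5)
              + rx * rx * (y - 1) * (y - 1) - rx * rx * ry * ry;
    while (y >= 0) {
        plot_point(xc, yc, (int)x, (int)y);
        y--;
        if (p2 > 0)
            p2 += -2 * rx * rx * y + rx * rx;
        else {
            x++;
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx;
        }
    }
}

int main(void) {
    midpoint_ellipse(8, 6, 0, 0);   /* example radii, centred at origin */
    return 0;
}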
2D Transformation
Transformation means changing some graphics into something
else by applying rules. We can have various types of
transformations such as translation, scaling up or down, rotation,
shearing, etc. When a transformation takes place on a 2D plane,
it is called 2D transformation.
Transformations play an important role in computer graphics to
reposition the graphics on the screen and change their size or
orientation.

Homogeneous Coordinates

To perform a sequence of transformations such as translation followed by
rotation and scaling, we need to follow a sequential process −
 Translate the coordinates,
 Rotate the translated coordinates, and then
 Scale the rotated coordinates to complete the composite
transformation.
To shorten this process, we use a 3×3 transformation matrix instead of a
2×2 transformation matrix. To convert a 2×2 matrix into a 3×3 matrix, we
add an extra dummy coordinate W.
In this way, we can represent a point by 3 numbers instead of 2 numbers,
which is called the Homogeneous Coordinate system.
In this system, we can represent all the transformation equations as
matrix multiplications. Any Cartesian point P(X, Y) can be converted to
homogeneous coordinates as P’(Xh, Yh, h).

Translation

A translation moves an object to a different position on the
screen. You can translate a point in 2D by adding the translation
coordinates (tx, ty) to the original coordinates (X, Y) to get the
new coordinates (X′, Y′).
We can write −
X’ = X + tx
Y’ = Y + ty
The pair (tx, ty) is called the translation vector or shift vector.
The above equations can also be represented using column vectors:
P = [X, Y]^T,   P’ = [X′, Y′]^T,   T = [tx, ty]^T
We can write it as −
P’ = P + T
Rotation

In rotation, we rotate the object through a particular angle θ (theta)
about its origin. From the following figure, we can see that the point
P(X, Y) is located at angle φ from the horizontal X axis at distance r
from the origin.
Let us suppose you want to rotate it by the angle θ. After rotating it to
a new location, you will get a new point P’(X′, Y′).
Using standard trigonometric relations, the original coordinates of point
P(X, Y) can be represented as −
X = r cosφ ...... (1)
Y = r sinφ ...... (2)
In the same way we can represent the point P’(X′, Y′) as −
X′ = r cos(φ + θ) = r cosφ cosθ − r sinφ sinθ ...... (3)
Y′ = r sin(φ + θ) = r cosφ sinθ + r sinφ cosθ ...... (4)
Substituting equations (1) and (2) into (3) and (4) respectively, we get
X′ = X cosθ − Y sinθ
Y′ = X sinθ + Y cosθ
Representing the above equations in matrix form,

[X′ Y′] = [X Y] |  cosθ  sinθ |
                | −sinθ  cosθ |

OR
P’ = P . R
Where R is the rotation matrix

R = |  cosθ  sinθ |
    | −sinθ  cosθ |
The rotation angle can be positive or negative.
For a positive rotation angle, we can use the above rotation matrix.
However, for a negative rotation angle, the matrix changes as shown below −

R = |  cos(−θ)  sin(−θ) |   =   | cosθ  −sinθ |
    | −sin(−θ)  cos(−θ) |       | sinθ   cosθ |

(since cos(−θ) = cosθ and sin(−θ) = −sinθ)
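As a brief illustration of composing these transformations with homogeneous
coordinates, the following C sketch (names and matrix layout are assumptions,
not from the notes) builds 3×3 translation and rotation matrices and applies
them to a point as column vectors:

#include <math.h>
#include <stdio.h>

typedef double Mat3[3][3];

/* out = m * in, where points are column vectors (X, Y, 1). */
void apply(double m[3][3], const double in[3], double out[3]) {
    for (int r = 0; r < 3; r++)
        out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2];
}

int main(void) {
    const double PI = 3.14159265358979323846;
    double tx = 2.0, ty = 3.0, theta = PI / 2.0;      /* 90 degrees */
    Mat3 T = { {1, 0, tx}, {0, 1, ty}, {0, 0, 1} };   /* translation */
    Mat3 R = { { cos(theta), -sin(theta), 0 },
               { sin(theta),  cos(theta), 0 },
               { 0,           0,          1 } };      /* rotation about origin */
    double p[3] = {1, 0, 1}, q[3], r2[3];
    apply(R, p, q);     /* rotate first ...                      */
    apply(T, q, r2);    /* ... then translate: composite result  */
    printf("(%.2f, %.2f)\n", r2[0], r2[1]);           /* expect (2.00, 4.00) */
    return 0;
}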
2D Reflection in Computer Graphics-
 

Reflection is a kind of rotation where the angle of rotation is 180 degrees.
The reflected object is always formed on the other side of the mirror.
The size of the reflected object is the same as the size of the original object.
 
Consider a point object O has to be reflected in a 2D plane.
 
Let-
Initial coordinates of the object O = (Xold, Yold)
New coordinates of the reflected object O after reflection = (Xnew, Ynew)
 
Reflection On X-Axis:
 
This reflection is achieved by using the following reflection equations-
Xnew = Xold
Ynew = -Yold
 
In matrix form, the above reflection equations may be represented as-

[Xnew  Ynew] = [Xold  Yold] | 1   0 |
                            | 0  -1 |

For homogeneous coordinates, the above reflection matrix may be
represented as a 3 x 3 matrix as-

| 1   0   0 |
| 0  -1   0 |
| 0   0   1 |

Reflection On Y-Axis:
 
This reflection is achieved by using the following reflection
equations-
Xnew = -Xold
Ynew = Yold
 
In matrix form, the above reflection equations may be represented as-

[Xnew  Ynew] = [Xold  Yold] | -1   0 |
                            |  0   1 |

For homogeneous coordinates, the above reflection matrix may be
represented as a 3 x 3 matrix as-

| -1   0   0 |
|  0   1   0 |
|  0   0   1 |

 
PRACTICE PROBLEMS BASED ON 2D
REFLECTION IN COMPUTER GRAPHICS-
 

Problem-01:
 
Given a triangle with coordinate points A(3, 4), B(6, 4), C(5, 6).
Apply the reflection on the X axis and obtain the new coordinates
of the object.
 
Solution-
 
Given-
Old corner coordinates of the triangle = A (3, 4), B(6, 4),
C(5, 6)
Reflection has to be taken on the X axis
 
For Coordinates A(3, 4)
 
Let the new coordinates of corner A after reflection = (Xnew, Ynew).
 
Applying the reflection equations, we have-
Xnew = Xold = 3
Ynew = -Yold = -4
 
Thus, the new coordinates of corner A after reflection = (3, -4).
Similarly, B(6, 4) maps to (6, -4) and C(5, 6) maps to (5, -6), so the
reflected triangle has corners A'(3, -4), B'(6, -4), C'(5, -6).
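A minimal C sketch of this reflection about the X axis, applied to all three
corners of the triangle (helper names are assumptions for the example):

#include <stdio.h>

/* Reflection about the X axis: Xnew = Xold, Ynew = -Yold */
void reflect_x(double x, double y, double *xn, double *yn) {
    *xn = x;
    *yn = -y;
}

int main(void) {
    double tri[3][2] = {{3, 4}, {6, 4}, {5, 6}};   /* A, B, C */
    for (int i = 0; i < 3; i++) {
        double xn, yn;
        reflect_x(tri[i][0], tri[i][1], &xn, &yn);
        printf("(%g, %g) -> (%g, %g)\n", tri[i][0], tri[i][1], xn, yn);
    }
    return 0;
}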
Sutherland-Hodgman Polygon Clipping:
It is performed by processing the boundary of the polygon against each window
edge. First the entire polygon is clipped against one edge, then the resulting
polygon is clipped against the second edge, and so on for all four edges.

Four possible situations while processing

1. If the first vertex is outside the window and the second vertex is inside
the window, then both the point of intersection of the polygon edge with
the window boundary and the second vertex are added to the output list.
2. If both vertices are inside the window boundary, then only the second
vertex is added to the output list.
3. If the first vertex is inside the window and the second is outside the
window, only the point of intersection of the edge with the window boundary
is added to the output list.
4. If both vertices are outside the window, then nothing is added to the
output list.

Following figures shows original polygon and clipping of polygon against four
windows.
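The sketch below illustrates the four cases above by clipping a polygon
against a single window boundary, here the left edge x = xmin (helper names
are assumptions); repeating the same pass for the right, bottom and top edges
gives the full Sutherland-Hodgman clipper:

#include <stdio.h>

typedef struct { double x, y; } Point;

static int inside_left(Point p, double xmin) { return p.x >= xmin; }

/* Intersection of segment p1-p2 with the vertical line x = xmin. */
static Point intersect_left(Point p1, Point p2, double xmin) {
    Point r;
    double t = (xmin - p1.x) / (p2.x - p1.x);
    r.x = xmin;
    r.y = p1.y + t * (p2.y - p1.y);
    return r;
}

/* Clip in[0..n-1] against x = xmin; returns the number of output vertices. */
int clip_left(const Point *in, int n, Point *out, double xmin) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        Point s = in[i];                 /* first vertex of the edge  */
        Point e = in[(i + 1) % n];       /* second vertex of the edge */
        int s_in = inside_left(s, xmin), e_in = inside_left(e, xmin);
        if (s_in && e_in) {                    /* case 2: both inside   */
            out[count++] = e;
        } else if (!s_in && e_in) {            /* case 1: out -> in     */
            out[count++] = intersect_left(s, e, xmin);
            out[count++] = e;
        } else if (s_in && !e_in) {            /* case 3: in -> out     */
            out[count++] = intersect_left(s, e, xmin);
        }                                      /* case 4: both outside  */
    }
    return count;
}

int main(void) {
    Point tri[3] = {{-1, 0}, {2, 0}, {2, 2}};  /* one vertex outside x >= 0 */
    Point out[6];
    int n = clip_left(tri, 3, out, 0.0);
    for (int i = 0; i < n; i++)
        printf("(%.2f, %.2f)\n", out[i].x, out[i].y);
    return 0;
}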

Example: Consider a polygon with vertices ABCDEFG. Apply the Weiler-Atherton
algorithm to find the clipped polygon.
Solution: Let us assume that the vertices of the window are C1, C2, C3, C4
and the vertices of the polygon are ABCDEFG.
First, create two lists, one for the subject polygon and one for the clip polygon.
Iteration 1-

The First new list is- A, B, C, D, A.


Iteration 2-
The Second new list is- E, E, F, F, F.
Explanation:
 Check A: if it is an intersection point then start from A, otherwise
move to the next vertex.
 A is an intersection point (A will be saved in the new list);
then check the next vertex.
 B is not an intersection point; save it in the new list, then move
to the next vertex.
 C is also not an intersection point; save it in the new list and move
to the next point.
 Now, move to D (D is an intersection point). We jump to the clip
polygon list to find D'.
 We follow the same procedure for the next iteration.
 This process is repeated until we find the ending point.
Random Scan vs Raster Scan

1. The resolution of random scan is higher than that of raster scan; the
   resolution of raster scan is lower than that of random scan.
2. Random scan is costlier than raster scan; raster scan costs less than
   random scan.
3. In random scan, any alteration is easy; in raster scan, any alteration
   is not so easy.
4. In random scan, interlacing is not used; in raster scan, interlacing
   is used.
5. In random scan, mathematical functions are used for image or picture
   rendering; in raster scan, pixels are used for image or picture
   rendering.
6. Random scan is suitable for applications requiring polygon drawings;
   raster scan is suitable for creating realistic scenes.

Applications of Computer Graphics


Computer graphics deals with creation, manipulation and storage of
different type of images and objects.
Some of the applications of computer graphics are:
1. Computer Art:
Using computer graphics we can create fine and commercial art, which
includes the use of animation packages and paint packages. These
packages provide facilities for designing object shapes and
specifying object motion. Cartoon drawing, paintings and logo
design can also be done.
2. Computer Aided Drawing:
Designing of buildings, automobiles and aircraft is done with the
help of computer-aided drawing; this helps in providing minute
details in the drawing and producing more accurate and sharp
drawings with better specifications.
3. Presentation Graphics:
Computer graphics tools are used for the preparation of reports and for
summarising financial, statistical, mathematical, scientific and
economic data for research reports and managerial reports; creation of bar
graphs, pie charts and time charts can also be done using the tools present
in computer graphics.

4. Entertainment:
Computer graphics finds a major part of its utility in the movie
industry and the game industry. It is used for creating motion
pictures, music videos, television shows and cartoon animation
films. In the game industry, where focus and interactivity are
the key players, computer graphics helps in providing such
features in an efficient way.
5. Education:
Computer-generated models are extremely useful for teaching a
large number of concepts and fundamentals in a manner that is
easy to understand and learn. Using computer graphics, many
educational models can be created through which more
interest can be generated among the students regarding the
subject.

6. Training:
Specialised systems for training, such as simulators, can be used for
training candidates in a way that can be grasped in a short
span of time with better understanding. Creation of training
modules using computer graphics is simple and very useful.

7. Visualisation:
Today the need to visualise things has increased drastically. The
need for visualisation can be seen in many advanced technologies;
data visualisation helps in finding insights into the data and in
checking and studying the behaviour of processes around us.

Write a boundary fill procedure to fill an 8-connected region.

Polygon Filling:-
Filling the polygon means highlighting all the pixels which lie inside the
polygon with any colour other than background colour. Polygons are easier
to fill since they have linear boundaries.

There are two basic approaches used to fill the polygon.

One way to fill a polygon is to start from a given "seed", a point known to
be inside the polygon, and highlight outward from this point, i.e. highlight
neighbouring pixels until we encounter the boundary pixels. This approach is
called seed fill, because colour flows from the seed pixel until it reaches
the polygon boundary, like water flooding the surface of a container.

Another approach to fill the polygon is to apply the inside test, i.e. to
check whether a pixel is inside the polygon or outside the polygon, and then
highlight the pixels which lie inside the polygon. This approach is known
as the scan-line algorithm. It avoids the need for a seed pixel but requires
some computation.

Seed Fill :-

The seed fill algorithm is further classified as flood fill


algorithm and boundary fill algorithm.

Boundary Fill Algorithm :


In this method, the edges of the polygon are drawn first. Then, starting
with some seed (any point inside the polygon), we examine the neighbouring
pixels to check whether a boundary pixel has been reached. If boundary pixels
are not reached, pixels are highlighted and the process is continued until
boundary pixels are reached.

Boundary defined regions may be either 4-connected or 8-


connected as shown in the figure (a) and (b).

If a region is 4-connected, then every pixel in the region may be reached


by a combination of moves in only four directions: left, right, up and
down.
For an 8-connected region every pixel in the region may be reached by a
combination of moves in the two horizontal, two vertical, and four
diagonal directions.

In some cases, an 8-connected algorithm is more accurate than the
4-connected algorithm. This is illustrated in figure (c). Here, a
4-connected algorithm produces only a partial fill.

The following procedure illustrates the recursive method for filling an
8-connected region with the colour specified in parameter fill colour
(f_colour) up to a boundary colour specified with parameter boundary
colour (b_colour).

Procedure:-
8-connected boundary / edge fill Algorithm:-

void boundary_fill(int x, int y, int f_colour, int b_colour)
{
    /* Continue only while the pixel is neither the boundary colour nor
       already filled with the fill colour. */
    if (getpixel(x, y) != b_colour && getpixel(x, y) != f_colour)
    {
        putpixel(x, y, f_colour);
        /* Four horizontal/vertical neighbours ... */
        boundary_fill(x + 1, y, f_colour, b_colour);
        boundary_fill(x - 1, y, f_colour, b_colour);
        boundary_fill(x, y + 1, f_colour, b_colour);
        boundary_fill(x, y - 1, f_colour, b_colour);
        /* ... and four diagonal neighbours (8-connected). */
        boundary_fill(x + 1, y + 1, f_colour, b_colour);
        boundary_fill(x - 1, y - 1, f_colour, b_colour);
        boundary_fill(x + 1, y - 1, f_colour, b_colour);
        boundary_fill(x - 1, y + 1, f_colour, b_colour);
    }
}
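A hypothetical usage example with the Turbo C / BGI routines the procedure
above assumes (initgraph, circle, getpixel, putpixel); the empty BGI driver
path and the colours used are assumptions for illustration:

#include <graphics.h>
#include <conio.h>

/* boundary_fill() as defined above is assumed to be declared before main. */

int main(void) {
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");              /* BGI driver path is assumed   */
    setcolor(WHITE);
    circle(100, 100, 20);                 /* boundary drawn in WHITE      */
    boundary_fill(100, 100, RED, WHITE);  /* seed point inside the circle */
    getch();
    closegraph();
    return 0;
}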
Projections
Projections are a very important idea in GIS and mapping. If you are not
familiar with projections, please begin by reading the Projections
Tutorial topic.
 
This documentation provides several paths through the general subject of
working with projections in Manifold. Some material is repeated in the
various topics so that essential material is covered no matter which path
the reader takes through the Help system.
 
·      Experts may jump directly to the Projections Quick
Reference topic for a terse introduction.
 
·      If you are a new user and are working completely within Manifold
System, read this topic and then proceed to the Projecting a
Map topic. Very little expertise in the inner workings of projections
is required to make simple projections when working within Manifold
System.
 
·      To permanently re-project a specific component we open it in a
window and apply the Edit - Change Projection command. If the
component appears in a map and we would like to re-project it to
use the same projection as the map, we can right click its tab in the
map and choose Project to Map.
 
·      If you must deal with maps in legacy formats such as ESRI .shp or
AutoCAD .dxf, please read the Projections and Legacy
Formats topic. Because legacy formats do not normally save
information on projection parameters used, importing maps saved
in legacy formats can require extensive knowledge of projections
and coordinate systems.
 
·      If you are new to projections and coordinate systems in GIS and
would like to learn more about the general concepts involved,
consult the Projections Tutorial and subsequent topics.
 
·      The Projections Readings topics provide a classical introduction
to projection concepts together with a guide to selecting projections
based on the writings of John Parr Snyder, one of the greatest
enthusiasts of map projections ever.
 
·      The Manifold Projections topics discuss the specific
characteristics in summary form of various families of projections
available within Manifold System. This is a reference section that
should be consulted for summary information on a specific
projection. 
·      Experts may specify custom units of measure, ellipsoids, datums
and even customized coordinate systems (projections) by specifying
customized projection presets. See the Customization topic. 
·      Maps are a special case because they are viewports that show
their contents in whatever temporary projection is desired. They
don't change their contents, they simply show them in a different
form. The projection used by a map to display its contents is set in
the Edit - Assign Projection dialog.
 

Why Use Projections?


 

In Manifold, projections are used for three main reasons:


 
·    To provide a more natural looking map,
·    To enable measurements of areas and lengths in printed maps.
·    To enable easy measurement in linear units such as meters when
performing analyses.

  Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called an orthographic projection.
The parallel projection is formed by extending parallel lines from each vertex on the
object until they intersect the plane of the screen. The point of intersection is the
projection of the vertex.

Parallel projections are used by architects and engineers for creating working drawings
of an object; complete representations require two or more views of the object using
different planes.
1. Isometric: the direction of projection makes equal angles with all three principal
axes (the receding axes are commonly drawn at 30°).
2. Dimetric: the direction of projection makes equal angles with two of the principal
axes.
3. Trimetric: the direction of projection makes unequal angles with the three principal
axes.
4. Cavalier: all lines perpendicular to the projection plane are projected with no change
in length.
5. Cabinet: all lines perpendicular to the projection plane are projected to one half of
their length. This gives a more realistic appearance of the object.
Difference Between Parallel Projection and Perspective Projection:

1. Parallel projection represents the object in a different way, like a
   telescope; perspective projection represents the object in a
   three-dimensional way.
2. In parallel projection, far-away objects do not appear smaller; in
   perspective projection, objects that are far away appear smaller and
   objects that are near appear bigger.
3. In parallel projection, the distance of the object from the centre of
   projection is infinite; in perspective projection, the distance of the
   object from the centre of projection is finite.
4. Parallel projection can give an accurate view of the object;
   perspective projection cannot give an accurate (true-scale) view of
   the object.
5. The lines of parallel projection are parallel; the lines of
   perspective projection are not parallel.
6. The projectors in parallel projection are parallel; the projectors in
   perspective projection are not parallel.
7. There are two types of parallel projection: orthographic and oblique.
   There are three types of perspective projection: one-point, two-point
   and three-point perspective.
8. Parallel projection does not form a realistic view of the object;
   perspective projection forms a realistic view of the object.

Visible Surface Detection

When we view a picture containing non-transparent objects and
surfaces, we cannot see those objects which are behind other objects
that are closer to the eye. We must remove these hidden surfaces to
get a realistic screen image. The identification and removal of these
surfaces is called the hidden-surface problem.
There are two approaches for removing hidden-surface problems −
the object-space method and the image-space method.
When we want to display a 3D object on a 2D screen, we need to
identify those parts of a screen that are visible from a chosen
viewing position.
Depth Buffer (Z-Buffer) Method
This method was developed by Catmull. It is an image-space
approach. The basic idea is to test the Z-depth of each surface to
determine the closest visible surface.

Algorithm
Step 1 − Set the buffer values −
    Depthbuffer(x, y) = 0
    Framebuffer(x, y) = background colour
Step 2 − Process each polygon (one at a time)
    For each projected (x, y) pixel position of a polygon, calculate depth z.
    If Z > Depthbuffer(x, y)
        Compute surface colour,
        set Depthbuffer(x, y) = z,
        Framebuffer(x, y) = surfacecolour(x, y)

Scan-Line Method
It is an image-space method to identify visible surfaces. This
method keeps depth information for only a single scan line. In
order to obtain one scan line of depth values, we must group
and process all polygons intersecting a given scan line at the
same time before processing the next scan line. Two important
tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line
in the scene, the inverse slope of each line, and pointers into the
polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface
material properties, other surface data, and may be pointers to
the edge table.

Area-Subdivision Method
The area-subdivision method takes advantage of area coherence by locating
those view areas that represent part of a single surface. It divides the total
viewing area into smaller and smaller rectangles until each small area is the
projection of part of a single visible surface or of no surface at all.

 Surrounding surface − One that completely encloses the area.


 Overlapping surface − One that is partly inside and partly outside
the area.
 Inside surface − One that is completely inside the area.
 Outside surface − One that is completely outside the area.

Back-Face Detection
A fast and simple object-space method for identifying the back faces of a
polyhedron is based on the "inside-outside" tests. A point (x, y, z) is
"inside" a polygon surface with plane parameters A, B, C and D if
A x + B y + C z + D < 0
When an inside point is along the line of sight to the surface, the polygon
must be a back face (we are inside that face and cannot see the front of it
from our viewing position).
In general, if V is a vector in the viewing direction from the eye
(or "camera") position and N = (A, B, C) is the polygon's normal vector,
then this polygon is a back face if
V . N > 0
In a right-handed viewing system with the viewing direction along the
negative z_v axis, the polygon is a back face if C < 0. Also, we cannot see
any face whose normal has z component C = 0, since the viewing direction
is grazing that polygon. Thus, in general, we can label any polygon as a
back face if its normal vector has a z component value
C <= 0
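A minimal C sketch of this back-face test (the plane parameters and the view
vector are assumed to be available from the polygon data; names are
illustrative):

#include <stdio.h>

/* Right-handed viewing system, view direction along the negative z axis:
   a polygon is a back face if the z component C of its normal is <= 0. */
int is_back_face(double C) {
    return C <= 0.0;
}

/* General form: with view vector V and polygon normal N = (A, B, C),
   the face is a back face if V . N > 0. */
int is_back_face_general(const double V[3], const double N[3]) {
    double dot = V[0] * N[0] + V[1] * N[1] + V[2] * N[2];
    return dot > 0.0;
}

int main(void) {
    double V[3] = {0, 0, -1};   /* looking down the negative z axis        */
    double N[3] = {0, 0, -1};   /* normal pointing away from the viewer    */
    printf("%d %d\n", is_back_face(N[2]), is_back_face_general(V, N));
    return 0;                   /* prints 1 1: both tests report back face */
}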

A-Buffer Method
The A-buffer method is an extension of the depth-buffer method.
The A-buffer method is a visibility detection method developed at
Lucasfilm studios for the rendering system Renders Everything
You Ever Saw (REYES).
The A-buffer expands on the depth-buffer method to allow
transparencies. The key data structure in the A-buffer is the
accumulation buffer.
Each position in the A-buffer has two fields −
 Depth field − It stores a positive or negative real number
 Intensity field − It stores surface-intensity information or
a pointer value

Depth Sorting Method

Depth sorting method uses both image space and object-space


operations. The depth-sorting method performs two basic
functions −
 First, the surfaces are sorted in order of decreasing depth.
 Second, the surfaces are scan-converted in order, starting
with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in
image space. This method for solving the hidden-surface
problem is often referred to as the painter's algorithm. When the depth
extents of two surfaces P and Q overlap, the following tests are performed
in order to decide whether the surfaces need to be reordered −

 Do the x-extents not overlap?


 Do the y-extents not overlap?
 Is P entirely on the opposite side of Q’s plane from the
viewpoint?
 Is Q entirely on the same side of P’s plane as the viewpoint?
 Do the projections of the polygons not overlap?
Binary Space Partitioning (BSP) Trees
Binary space partitioning is used to calculate visibility. To build
a BSP tree, one starts with the polygons and labels all the
edges. Dealing with one edge at a time, extend each edge
so that it splits the plane in two. Place the first edge in the tree
as the root. Add subsequent edges based on whether they are inside
or outside the existing partitions. Edges that span the extension of an
edge that is already in the tree are split into two, and both parts are
added to the tree.
Hermite curve
A Hermite curve is a spline where every piece is a third-degree
polynomial defined in Hermite form: that is, by its values and
first derivatives at the end points of the corresponding domain
interval. Cubic Hermite splines are normally used for interpolation
of numeric values defined at certain discrete values x1, x2, x3,
..., xn, to achieve a smooth continuous function. The data should
have the desired function value and derivative at each xk. The
Hermite formula is applied to every interval (xk, xk+1)
individually. The resulting spline is continuous and has a
continuous first derivative.

Cubic polynomial splines are especially used in computer
geometric modelling to obtain curves that pass through defined points
of the plane or of 3D space. For these purposes, each coordinate
is individually interpolated by a cubic spline function of a
separate parameter t.

Cubic splines can be extended to functions of several parameters
in various ways. Bicubic splines are frequently used
to interpolate data on a regular rectangular grid, such as pixel
values in a digital picture. Bicubic surface patches, described by
three bicubic splines, are an essential tool in computer graphics.
Hermite curves are simple to calculate but also powerful.
They are used to interpolate smoothly between key points.

Fig.2.2. Hermite curve


 
The following vectors are needed to compute a Hermite curve:
·        P1: the start point of the Hermite curve
·        T1: the tangent at the start point
·        P2: the end point of the Hermite curve
·        T2: the tangent at the end point

These four vectors are multiplied by the four Hermite basis functions
h1(s), h2(s), h3(s) and h4(s) and added together:

h1(s) = 2s^3 - 3s^2 + 1
h2(s) = -2s^3 + 3s^2
h3(s) = s^3 - 2s^2 + s
h4(s) = s^3 - s^2

A closer look at functions h1 and h2 shows that h1 starts at one and goes
slowly to zero, while h2 starts at zero and goes slowly to one.
Now multiply the start point by h1 and the end point by h2, and let s vary
from zero to one to interpolate between the start and end point of the
Hermite curve. Functions h3 and h4 are applied to the tangents in the same
way. They ensure that the Hermite curve bends in the desired direction at
the start and end point.
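A small illustrative C sketch of evaluating a cubic Hermite curve from P1, T1,
P2, T2 with the basis functions above (2D points and the sample values in main
are assumptions for the example):

#include <stdio.h>

typedef struct { double x, y; } Vec2;

Vec2 hermite(Vec2 P1, Vec2 T1, Vec2 P2, Vec2 T2, double s) {
    double s2 = s * s, s3 = s2 * s;
    double h1 = 2 * s3 - 3 * s2 + 1;   /* weights the start point    */
    double h2 = -2 * s3 + 3 * s2;      /* weights the end point      */
    double h3 = s3 - 2 * s2 + s;       /* weights the start tangent  */
    double h4 = s3 - s2;               /* weights the end tangent    */
    Vec2 p;
    p.x = h1 * P1.x + h2 * P2.x + h3 * T1.x + h4 * T2.x;
    p.y = h1 * P1.y + h2 * P2.y + h3 * T1.y + h4 * T2.y;
    return p;
}

int main(void) {
    Vec2 P1 = {0, 0}, T1 = {1, 1}, P2 = {1, 0}, T2 = {1, -1};
    for (double s = 0.0; s <= 1.0; s += 0.25) {
        Vec2 p = hermite(P1, T1, P2, T2, s);
        printf("s=%.2f -> (%.3f, %.3f)\n", s, p.x, p.y);
    }
    return 0;   /* s=0 gives P1, s=1 gives P2, as expected */
}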

 Animation Functions
1. Morphing: Morphing is an animation function
which is used to transform an object's shape from one
form to another. It is one of the
most complicated transformations. This function is
commonly used in movies, cartoons,
advertisements, and computer games.
The process of morphing involves three steps:
1. In the first step, an initial image and
a final image are added to the morphing
application; as shown in the figure, the 1st and 4th
objects are considered as key frames.
2. The second step involves the selection of key
points on both the images for a smooth transition
between the two images, as shown in the 2nd object.
3. In the third step, the key points of the first
image transform to the corresponding key points of
the second image, as shown in the 3rd object of the
figure.
2. Warping: The warping (wrapping) function is similar to
the morphing function. It distorts only the initial
image so that it matches the final image, and no
fade occurs in this function.
3. Tweening: Tweening is the short form of
'inbetweening.' Tweening is the process of
generating intermediate frames between the initial
and final key frames (a minimal sketch is given
after this list). This function is popular in the
film industry.
4. Panning: Usually panning refers to rotation of
the camera in a horizontal plane. In computer
graphics, panning relates to the movement of a
fixed-size window across an object in a scene.
Whichever direction the fixed-size window moves,
the object appears to move in the opposite
direction, as shown in the figure:

If the window moves in a backward direction, then
the object appears to move in the forward direction;
if the window moves in a forward direction, then
the object appears to move in a backward direction.
5. Zooming: In zooming, we fix the window on an
object and change its size; the object then also
appears to change in size. When the window is made
smaller about a fixed centre, the object inside
the window appears more enlarged. This feature is
known as Zooming In.
When we increase the size of the window about the
fixed centre, the object inside the window appears
smaller. This feature is known as Zooming Out.
6. Fractals: The fractal function is used to generate a
complex picture by using iteration. Iteration
means the repetition of a single formula again and
again with slightly different values based on the
previous iteration result. These results are
displayed on the screen in the form of a
picture.
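A minimal sketch of tweening as linear interpolation between matching key
points of the first and last key frames (structure and names are assumptions
for illustration):

#include <stdio.h>

typedef struct { double x, y; } KeyPoint;

/* Generate an in-between frame at parameter t in [0, 1]. */
void tween(const KeyPoint *first, const KeyPoint *last,
           KeyPoint *out, int n, double t) {
    for (int i = 0; i < n; i++) {
        out[i].x = (1.0 - t) * first[i].x + t * last[i].x;
        out[i].y = (1.0 - t) * first[i].y + t * last[i].y;
    }
}

int main(void) {
    KeyPoint a[2] = {{0, 0}, {10, 0}};     /* initial key frame */
    KeyPoint b[2] = {{0, 5}, {10, 10}};    /* final key frame   */
    KeyPoint mid[2];
    tween(a, b, mid, 2, 0.5);              /* frame halfway through */
    printf("(%.1f, %.1f) (%.1f, %.1f)\n",
           mid[0].x, mid[0].y, mid[1].x, mid[1].y);
    return 0;
}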

Rotate A Point About An Arbitrary Axis (3 Dimensions)

Rotation of a point in 3-dimensional space by theta about an
arbitrary axis defined by a line between two points P1 = (x1, y1, z1)
and P2 = (x2, y2, z2) can be achieved by the following steps:
(1) translate space so that the rotation axis passes through
the origin

(2) rotate space about the x axis so that the rotation axis lies
in the xz plane

(3) rotate space about the y axis so that the rotation axis lies
along the z axis

(4) perform the desired rotation by theta about the z axis

(5) apply the inverse of step (3)

(6) apply the inverse of step (2)

(7) apply the inverse of step (1)

Note:

 If the rotation axis is already aligned with the z axis then


steps 2, 3, 5, and 6 need not be performed.

 In all that follows a right hand coordinate system is assumed


and rotations are positive when looking down the rotation axis
towards the origin.

 Symbols representing matrices will be shown in bold text.

 The inverse of the rotation matrices below are particularly


straightforward since the determinant is unity in each case.

Step 1

Translate space so that the rotation axis passes through the origin.
This is accomplished by translating space by -P1 = (-x1, -y1, -z1). The
translation matrix T and its inverse T^-1 (required for step 7) are

T = | 1  0  0  -x1 |        T^-1 = | 1  0  0  x1 |
    | 0  1  0  -y1 |               | 0  1  0  y1 |
    | 0  0  1  -z1 |               | 0  0  1  z1 |
    | 0  0  0   1  |               | 0  0  0  1  |

Combining all of the steps, the complete transformation is

| x' |                                    | x |
| y' |  =  T^-1 Rx^-1 Ry^-1 Rz Ry Rx T    | y |
| z' |                                    | z |
| 1  |                                    | 1 |

Using quaternions
To rotate a 3D vector "p" by angle theta about a (unit) axis "r" one
forms the quaternion

Q1 = (0,px,py,pz)

and the rotation quaternion

Q2 = (cos(theta/2), rx sin(theta/2), ry sin(theta/2), rz sin(theta/2)).

The rotated vector is given by the last three components of the quaternion

Q3 = Q2 Q1 Q2*

where Q2* is the conjugate of Q2. Note also that the quaternion Q2 is of
unit magnitude.
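An illustrative C sketch of this quaternion rotation, using the Hamilton
product and Q3 = Q2 Q1 Q2* (function names are assumptions):

#include <math.h>
#include <stdio.h>

typedef struct { double w, x, y, z; } Quat;

/* Hamilton product of two quaternions. */
Quat qmul(Quat a, Quat b) {
    Quat q;
    q.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
    q.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
    q.y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x;
    q.z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w;
    return q;
}

/* Rotate vector p about the unit axis r by angle theta. */
void rotate(const double p[3], const double r[3], double theta, double out[3]) {
    double h = theta / 2.0;
    Quat q1 = {0, p[0], p[1], p[2]};                            /* (0, p)    */
    Quat q2 = {cos(h), r[0]*sin(h), r[1]*sin(h), r[2]*sin(h)};  /* rotation  */
    Quat q2c = {q2.w, -q2.x, -q2.y, -q2.z};                     /* conjugate */
    Quat q3 = qmul(qmul(q2, q1), q2c);                          /* Q2 Q1 Q2* */
    out[0] = q3.x; out[1] = q3.y; out[2] = q3.z;
}

int main(void) {
    double p[3] = {1, 0, 0}, z[3] = {0, 0, 1}, out[3];
    rotate(p, z, acos(-1.0) / 2.0, out);        /* 90 degrees about z axis */
    printf("(%.2f, %.2f, %.2f)\n", out[0], out[1], out[2]);  /* ~ (0, 1, 0) */
    return 0;
}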

Z-Buffer Algorithm
It is also called the Depth Buffer Algorithm. The depth buffer algorithm is
the simplest image-space algorithm. For each pixel on the display screen, we
keep a record of the depth of the object within the pixel that lies closest
to the observer. In addition to depth, we also record the intensity that
should be displayed to show the object. The depth buffer is an extension of
the frame buffer. The depth buffer algorithm requires two arrays, intensity
and depth, each of which is indexed by pixel coordinates (x, y).

Algorithm
1. For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y]
to a background value.
2. For each polygon in the scene, find all pixels (x, y) that lie within the
boundaries of the polygon when projected onto the screen. For each of these
pixels:

(a) Calculate the depth z of the polygon at (x, y).

(b) If z < depth [x, y], this polygon is closer to the observer than others
already recorded for this pixel. In this case, set depth [x, y] to z and
intensity [x, y] to a value corresponding to the polygon's shading. If instead
z > depth [x, y], the polygon already recorded at (x, y) lies closer to the
observer than does this new polygon, and no action is taken.

3. After all polygons have been processed, the intensity array will contain
the solution.

4. The depth buffer algorithm illustrates several features common to all
hidden-surface algorithms.

5. First, it requires a representation of all opaque surfaces in the scene,
polygons in this case.

6. These polygons may be faces of polyhedra recorded in the model of the
scene, or may simply represent thin opaque 'sheets' in the scene.

7. The second important feature of the algorithm is its use of a screen
coordinate system. Before step 1, all polygons in the scene are transformed
into the screen coordinate system using matrix multiplication.
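A compact C sketch of the depth-buffer bookkeeping described above (buffer
sizes and sample values are assumptions; note that these notes use z < depth,
i.e. smaller z means closer to the observer):

#include <stdio.h>

#define W 4
#define H 4

double depth[H][W];
int intensity[H][W];

/* Step 1: farthest possible depth everywhere, background intensity. */
void init_buffers(int background) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            depth[y][x] = 1.0;
            intensity[y][x] = background;
        }
}

/* Step 2: record one projected pixel of a polygon; keep it only if it is
   closer than what has been recorded so far. */
void write_pixel(int x, int y, double z, int colour) {
    if (z < depth[y][x]) {
        depth[y][x] = z;
        intensity[y][x] = colour;
    }
}

int main(void) {
    init_buffers(0);
    write_pixel(1, 1, 0.8, 5);   /* far polygon               */
    write_pixel(1, 1, 0.3, 9);   /* nearer polygon wins       */
    printf("intensity[1][1] = %d\n", intensity[1][1]);   /* prints 9 */
    return 0;
}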

Computer Animation Languages

Several animation languages have already been developed. All of them
can be categorized under three groups:

1. Linear list notation languages:

These are languages developed specially to support animation. Each event in
the animation is described by a starting and an ending frame number and
an action that is to take place (the event). An example of this type
of language is SCEFO (SCEne FOrmat). For example:
42, 53, B, rotate, “palm”, 1, 30
Here,
42 => start frame number
53 => ending frame number
B => table (specifies how the action is interpolated between the frames)
rotate => action
palm => object
1 => start angle
30 => end angle

2. General purpose languages:

High-level computer languages which are developed for normal
application software development also have animation-supporting
features along with graphics drawing, for example QBASIC, C, C++,
Java, etc.

3. Graphical languages:

These are also high-level computer languages, developed especially for
graphics drawing and animation; an example of software already developed
in this category is AutoCAD.
COMPUTER ANIMATION LANGUAGES
Languages:

Design and control of animation sequences are handled with a set
of animation routines. A general-purpose language, such as
C, Lisp, Pascal, or FORTRAN, is often used to program the animation
functions, but several specialized animation languages have been
developed. Animation functions include a graphics editor, a key-frame
generator, an in-between generator, and standard graphics
routines. The graphics editor allows us to design and modify
object shapes, using spline surfaces, constructive solid geometry
methods, or other representation schemes.

A typical task in an animation specification is scene
description. This includes the positioning of objects and light
sources, defining the photometric parameters, and setting the
camera parameters (position, orientation, and lens
characteristics). Another standard function is action specification.
This involves the layout of motion paths for the objects and the
camera. And we need the usual graphics routines: viewing and
perspective transformations, geometric transformations to
generate object movements as a function of accelerations or
kinematic path specifications, visible-surface identification, and
surface-rendering operations.

B-spline
(Figure: spline curve drawn as a weighted sum of B-splines with control
points / control polygon, and marked component curves.)
In the mathematical subfield of numerical analysis, a B-
spline or basis spline is a spline function that has
minimal support with respect to a given degree, smoothness,
and domain partition. Any spline function of given degree can be
expressed as a linear combination of B-splines of that degree.
Cardinal B-splines have knots that are equidistant from each
other. B-splines can be used for curve-fitting and numerical
differentiation of experimental data.
In computer-aided design and computer graphics, spline
functions are constructed as linear combinations of B-splines with
a set of control points.
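As an illustration of how a single B-spline basis function can be evaluated,
here is a C sketch of the Cox-de Boor recursion (the knot vector and degree
used in main are example data, not taken from the notes):

#include <stdio.h>

/* Basis function N_{i,k}(t) of degree k over the given knot vector. */
double bspline_basis(int i, int k, double t, const double *knots) {
    if (k == 0)
        return (t >= knots[i] && t < knots[i + 1]) ? 1.0 : 0.0;
    double left = 0.0, right = 0.0;
    double d1 = knots[i + k] - knots[i];
    double d2 = knots[i + k + 1] - knots[i + 1];
    if (d1 > 0.0)
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots);
    if (d2 > 0.0)
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots);
    return left + right;
}

int main(void) {
    /* Uniform knot vector for a quadratic (degree 2) basis function. */
    double knots[] = {0, 1, 2, 3, 4, 5};
    for (double t = 0.0; t < 3.0; t += 0.5)
        printf("N_{0,2}(%.1f) = %.3f\n", t, bspline_basis(0, 2, t, knots));
    return 0;
}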

Shear

A transformation that slants the shape of an object is
called a shear transformation. There are two shear
transformations: X-Shear and Y-Shear. One shifts X
coordinate values and the other shifts Y coordinate values.
However, in both cases only one coordinate changes its
value and the other preserves its value. Shearing is also
termed skewing.
Y-Shear
The Y-Shear preserves the X coordinates and changes
the Y coordinates, which causes the horizontal lines to
transform into lines which slope up or down, as shown in
the following figure.

The Y-Shear can be represented in matrix form as −

Ysh = | 1    0   0 |
      | shy  1   0 |
      | 0    0   1 |

X’ = X
Y’ = Y + Shy . X
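A minimal C sketch of the Y-shear equations above (the function name is an
assumption):

#include <stdio.h>

/* Y-shear: X is preserved and Y is shifted by shy * X. */
void y_shear(double shy, double x, double y, double *xn, double *yn) {
    *xn = x;              /* X' = X           */
    *yn = y + shy * x;    /* Y' = Y + shy * X */
}

int main(void) {
    double xn, yn;
    y_shear(0.5, 4.0, 2.0, &xn, &yn);
    printf("(%.1f, %.1f)\n", xn, yn);   /* prints (4.0, 4.0) */
    return 0;
}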
