Chapter 3

Introduction to The Rendering Process with OpenGL

1
OUTLINE
 The Role of OpenGL in the Reference Model
 Coordinate Systems
 Viewing Using a Synthetic Camera
 Output Primitives and Attributes

2
What Is OpenGL?

 OpenGL is an application programming interface (''API'' for short), which is simply a software library for accessing features in graphics hardware.
 OpenGL is a software interface to graphics hardware.
 This interface consists of more than 700 distinct commands that you use to specify the objects, images, and operations needed to produce interactive three-dimensional computer graphics applications.
3
Cont..

 OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many different hardware platforms.
 To achieve these qualities, no commands are included in OpenGL for performing windowing tasks or obtaining user input; instead, you must work through whatever windowing system controls the particular hardware you’re using.

4
Cont..

 Rendering is the process by which a computer creates an image from models.
 OpenGL is just one example of a rendering system; there are many others. OpenGL is a rasterization-based system, but there are other methods for generating images as well, such as ray tracing.
 Models, or objects, are constructed from geometric primitives such as points, lines, and triangles that are specified by their vertices.
5
Cont..
 The major operations that an OpenGL application would perform to render an image are:
 Specify the data for constructing shapes from OpenGL’s geometric primitives.
 Execute various shaders to perform calculations on the input primitives to determine
their position, color, and other rendering attributes.
 Convert the mathematical description of the input primitives into their fragments
associated with locations on the screen. This process is called rasterization.

6
Cont..
 The rendering pipeline is the sequence of steps that OpenGL takes when rendering objects. Vertex attributes and other data go through a sequence of steps to generate the final image on the screen. There are usually nine steps in this pipeline; some of them are optional and several are programmable.

7
Cont..
The sequence of steps taken by OpenGL to generate an image is the following:

8
Cont..

1. Vertex Specification: In vertex specification, you provide an ordered list of vertices that define the boundaries of the primitive. Along with this, one can define other vertex attributes such as color and texture coordinates. Later this data is sent down and manipulated by the pipeline.

9
Cont..
2. Vertex Shader: The vertex data defined above now passes through the vertex shader.
 A vertex shader is a program written in GLSL that manipulates the vertex data.
 The main job of a vertex shader is to calculate the final positions of the vertices in the scene.

10
Cont..
 Vertex shaders are executed once for every vertex that the GPU processes (for a triangle it executes three times). So if the scene consists of one million vertices, the vertex shader will execute one million times, once for each vertex.
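As an illustration only (not from the slides), a minimal GLSL vertex shader can be stored in a C string and handed to OpenGL; this sketch assumes an OpenGL 3.3 context and that the shader entry points below are available (e.g., loaded through a loader such as GLEW).

    /* Hypothetical sketch: the shader source and names below are illustrative. */
    const char *vertexSrc =
        "#version 330 core\n"
        "layout(location = 0) in vec3 position;\n"
        "void main() {\n"
        "    gl_Position = vec4(position, 1.0); /* the final vertex position */\n"
        "}\n";

    GLuint createVertexShader(void)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER); /* create a vertex shader object */
        glShaderSource(vs, 1, &vertexSrc, NULL);      /* attach the GLSL source text   */
        glCompileShader(vs);                          /* compile it for the GPU        */
        return vs;
    }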

3. Tessellation: This is an optional stage. In this stage primitives are tessellated, i.e. subdivided into a finer mesh of triangles.

11
Cont..
4. Geometry Shader: This shader stage is also optional.
The work of the geometry shader is to take an input primitive and generate zero or more output primitives. If a triangle strip is sent as a single primitive, the geometry shader will see a series of individual triangles.
 The geometry shader is able to remove primitives, or tessellate them by outputting many primitives for a single input.
 Geometry shaders can also convert primitives to different types. For example, a point primitive can become triangles.


12
Cont..
5. Vertex Post-Processing: This is a fixed-function stage, i.e. the user has very limited to no control over it.
The most important part of this stage is clipping.
 Clipping discards the parts of primitives that lie outside the viewing volume.

13
Cont..
6. Primitive Assembly: This stage collects the vertex data into an ordered sequence of simple primitives (points, lines, or triangles).

7. Rasterization: This is an important step in this pipeline. The output of rasterization is fragments.

14
Cont..
8. Fragment Shader: Although not strictly a required stage, it is used in the vast majority of programs.
This user-written program in GLSL calculates the color of each fragment that the user sees on the screen.
 The fragment shader runs for each fragment in the geometry.
 The job of the fragment shader is to determine the final color for each fragment.
15
Cont..
Both vertex and fragment shaders are required in every modern OpenGL program.
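As a companion to the vertex shader sketch above (again illustrative, not from the slides), a minimal GLSL fragment shader can be compiled and linked with it into a single program object:

    const char *fragmentSrc =
        "#version 330 core\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    fragColor = vec4(1.0, 0.0, 0.0, 1.0); /* every fragment is opaque red */\n"
        "}\n";

    GLuint buildProgram(GLuint vs)  /* vs: the compiled vertex shader from the earlier sketch */
    {
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fragmentSrc, NULL);
        glCompileShader(fs);

        GLuint program = glCreateProgram();
        glAttachShader(program, vs);   /* a program needs both shader stages       */
        glAttachShader(program, fs);
        glLinkProgram(program);
        glUseProgram(program);         /* make it the active program for rendering */
        return program;
    }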

9. Per-Sample Operations: There are a few tests that are performed depending on whether the user has activated them.

Some of these tests are, for example, the pixel ownership test, scissor test, stencil test, and depth test.
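As a hedged sketch (not from the slides), these per-sample tests are switched on with glEnable; the scissor rectangle below is an arbitrary example:

    glEnable(GL_DEPTH_TEST);     /* depth test: keep only the nearest fragment per pixel   */
    glEnable(GL_STENCIL_TEST);   /* stencil test: mask fragments using the stencil buffer  */
    glEnable(GL_SCISSOR_TEST);   /* scissor test: restrict drawing to a rectangle          */
    glScissor(0, 0, 320, 240);   /* fragments outside this rectangle are discarded         */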

16
Cont..

 The final generated image consists of pixels drawn on the screen;
 a pixel is the smallest visible element on your display.
 The pixels in your system are stored in the framebuffer, which is a chunk of memory that the graphics hardware manages and feeds to your display device.

17
THE ROLE OF OPENGL IN THE REFERENCE MODEL

 OpenGL (Open Graphics Library)
 It is a cross-platform, hardware-accelerated, language-independent, industry-standard API for producing 3D (including 2D) graphics.
 Modern computers have a dedicated GPU (Graphics Processing Unit) with its own memory to speed up graphics rendering.

18
Cont..
 OpenGL is the software interface to graphics hardware. In other words, OpenGL graphics rendering commands issued by your application can be directed to the graphics hardware and accelerated.

19
Cont..
We use three sets of libraries in our OpenGL programs:

1. Core OpenGL (GL): consists of hundreds of commands, which begin with the prefix "gl" (e.g., glColor, glVertex, glTranslate, glRotate).
 Core OpenGL models an object via a set of geometric primitives such as points, lines, and polygons.

20
Cont..
2. OpenGL Utility Library (GLU): built on top of core OpenGL to provide important utilities (such as setting the camera view and projection) and additional modeling support (such as quadric surfaces and polygon tessellation).
 GLU commands start with the prefix "glu" (e.g., gluLookAt, gluPerspective).

21
Cont..

3. OpenGL Utilities Toolkit (GLUT):


 OpenGL is designed to be independent of the windowing system and operating system.
 GLUT is needed to interact with the operating system (for tasks such as creating a window and handling key and mouse input);
 it also provides additional modeling routines (such as a sphere and a torus).
 GLUT is simple, easy to use, and small.
22
Cont..
OpenGL is based on the GL (Graphics Library) graphics package developed by the graphics hardware manufacturer Silicon Graphics.
It is an API that provides a large set of functions that are used to manipulate graphics and images.
To use OpenGL we need a C++ development environment that supports these two libraries (OpenGL and glut). Two possibilities are:
 The free Dev-C++ environment has OpenGL built in, and glut can easily be added as a separate package.
 Microsoft Visual C++ also has OpenGL built in but not glut. The glut library is available as a free download from http://www.xmission.com/~nate/glut.html. Installation is fairly straightforward.

23
Cont.
 Summary of the most important members of the OpenGL family of graphics libraries:
 OpenGL: the core library. It is platform (i.e. hardware system) independent, but not windowing-system independent (i.e. the code for running on Microsoft Windows will be different from the code for running on the UNIX environments X-Windows or Gnome).
 glut: the GL Utilities Toolkit. It contains some extra routines for drawing 3D objects and other primitives. Using glut with OpenGL enables us to write windowing-system-independent code.
 glu: the OpenGL Utilities. It contains some extra routines for projections and rendering complex 3D objects.
 glui: contains some extra routines for creating user interfaces.
24
Cont.
 Every routine provided by OpenGL or one of the associated libraries listed above
follows the same basic rule of syntax:
 The prefix of the function name is either gl, glu, or glut, depending on which of
these three libraries the routine is from.
 The main part of the function name indicates the purpose of the function.
 The suffix of the function name indicates the number and type of arguments
expected by the function.
 For example, the suffix 3f indicates that 3 floating point arguments are expected.
o The function glVertex2i is an OpenGL routine that takes 2 integer arguments and defines a vertex (see the sketch below).
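A few illustrative calls (a hedged sketch, not from the slides) showing how the suffix describes the arguments:

    glVertex2i(100, 50);              /* suffix 2i: two integer arguments                     */
    glVertex3f(1.0f, 0.5f, 0.0f);     /* suffix 3f: three floating point arguments            */
    glColor3f(1.0f, 0.0f, 0.0f);      /* suffix 3f: an RGB colour as three floats             */
    GLfloat p[3] = {1.0f, 0.5f, 0.0f};
    glVertex3fv(p);                   /* suffix 3fv: three floats passed as a vector (array)  */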
25
Cont.
 Some function arguments can be supplied as predefined symbolic constants. (These are basically identifiers that have been defined using the C++ #define statement.)
 These constants are always written in capital letters.
 For example, GL_RGB, GL_POLYGON and GLUT_SINGLE are all symbolic constants used by OpenGL and its associated libraries.
26
Getting Started with OpenGL
 The first thing we need to know is which header files to include.

If we are only using the core OpenGL library, then the following line must be added near the beginning of your source code: #include <GL/gl.h>

If we also want to use the GL Utilities library, we must add the following line:

#include <GL/glu.h>

For the glui user-interface library we must add:

#include <GL/glui.h>
27
Cont.

If we want to use the glut library (and this makes using OpenGL a

lot easier) we do not need to include the OpenGL or glu header files.

All we need to do is include a single header file for glut:

#include <GL/glut.h>

This will automatically include the OpenGL and glu header files too.

28
Cont.
 Before displaying any graphics primitives we always need to perform some basic initialization tasks and create a window for display.

1. We perform the GLUT initialization with the statement

glutInit (&argc, argv);

2. We use the glutInitWindowPosition function to give an initial location for


the upper left corner of the display window. This position is specified in integer
screen coordinates, whose origin is at the upper-left corner of the screen.

glutInitWindowPosition (50, 100);


29
Cont.
3. The glutInitWindowSize function is used to set the initial pixel width and height of
the display window.

glutInitWindowSize (400, 300);

30
Cont.

4. To set a number of other options for the display window, such as the type of frame buffer we want to have and a choice of color modes, we use the glutInitDisplayMode function.

glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);

5. Then we create the display window using the glutCreateWindow function.

glutCreateWindow ("An Example OpenGL program");


31
Cont.
 OpenGL is an event-driven package. This means that it always executes
code in response to events.
 The code that is executed is known as a callback function, and it must be
defined by the programmer.
 OpenGL will continually detect a number of standard events in an event
loop, and execute the associated callback functions.
 For example, mouse clicks and keyboard presses are considered as OpenGL
events.
32
Cont.

 The most important event is the display event. This occurs whenever

OpenGL decides that it needs to redraw the display window.

 It is guaranteed to happen once at the beginning of the event loop; after this

it will occur whenever the window needs to be redrawn.

 Any primitives that we want to draw should be drawn in the display event

callback function, otherwise they will not be displayed.


33
Cont.
 To specify a callback function we must first write a programmer-
defined function, and then specify that this function should be
associated with a particular event. For example, myDisplay will be
executed in response to the display event:

glutDisplayFunc(myDisplay);
 To start the event loop we write:

glutMainLoop();
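As a hedged sketch (the body below is illustrative, not taken from the slides), such a display callback might look like the following; it assumes a 2D orthographic projection, e.g. one set up with gluOrtho2D, so that the integer coordinates map onto the window:

    void myDisplay(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);   /* clear the window to the background colour    */
        glColor3f(1.0f, 0.0f, 0.0f);    /* draw in red                                  */
        glBegin(GL_POINTS);             /* draw a few point primitives                  */
            glVertex2i(100, 120);
            glVertex2i(150, 160);
            glVertex2i(200, 200);
        glEnd();
        glFlush();                      /* force execution of the buffered OpenGL calls */
    }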
34
Cont.

 Therefore all OpenGL programs consist of a number of parts:

 Perform any initialization tasks such as creating the display window.

 Define any required callback functions and associate them with the

appropriate events.

 Start the event loop.

35
Cont.
#include <GL/glut.h>

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitWindowSize(640, 480);
    glutInitWindowPosition(10, 10);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("GLUT Points demo");
    glutDisplayFunc(display);   /* display() and myInit() are defined elsewhere */
    myInit();
    glutMainLoop();
}

36
Coordinate systems
 Coordinate systems are an important concept in computer graphics, as they
are used to specify the position and orientation of objects in a virtual space.
 A coordinate system is a system of numbers, symbols, and axes that is used
to describe the position of a point or object in space.

37
Cont.
 In geometry, a coordinate system is a system that uses one or more numbers,

or coordinates, to uniquely determine the position of the points or other geometric

elements on a manifold.

 The order of the coordinates is significant, and they are sometimes identified by their

position in an ordered tuple and sometimes by a letter, as in "the x-coordinate".

 The coordinates are taken to be real numbers in elementary mathematics, but may

be complex numbers or elements of a more abstract system such as a commutative

ring.
38
Common Coordinate Systems
1. Number line: The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line.

2. Cartesian coordinate system: The prototypical example of a coordinate system is


the Cartesian coordinate system.
 In the plane, two perpendicular lines are chosen and the coordinates of a point are
taken to be the signed distances to the lines.
 In three dimensions, three mutually orthogonal planes are chosen and the three
coordinates of a point are the signed distances to each of the planes.
39
Cont.
 This can be generalized to create n coordinates for any point in n-dimensional
space.
 Depending on the direction and order of the coordinate axes, the three-dimensional
system may be a right-handed or a left-handed system. This is one of many
coordinate systems.

40
Cont..
 There are several different types of coordinate systems that are commonly used in
computer graphics, including the cartesian, polar, and spherical coordinate
systems.
 The most commonly used coordinate system in computer graphics is the Cartesian coordinate system, which is named after the mathematician René Descartes. In this system, a point in space is defined by two or three numbers: its x-coordinate and y-coordinate. The x-coordinate specifies the point's position along the x-axis, and the y-coordinate specifies the point's position along the y-axis. In 3D graphics a third z-coordinate is used.
41
Cartesian Coordinates (Rectangular Coordinate System)
[Figure: 2D and 3D Cartesian coordinates]

42
3D Cartesian Coordinates
Two types: left-handed system and right-handed system.

Align your right-hand thumb with the x-axis.
If you can curl your right-hand fingers from the y-axis to the z-axis to make a fist, it is a right-handed system.
43
polar coordinate system
 The polar coordinate system is another commonly used coordinate system in
computer graphics. In this system, a point in space is defined by two
numbers: its radius and angle. The radius specifies the distance of the point
from the origin, and the angle specifies the direction of the point from the
origin.

44
convert Cartesian to polar coordinates
 To convert from Cartesian to polar coordinates, you need to first
determine the radius of the point. This can be done using the
Pythagorean theorem, which states that the square of the radius
is equal to the sum of the squares of the x-coordinate and the y-
coordinate. In other words, the radius is the square root of the
sum of the squares of the x-coordinate and the y-coordinate.

45
Example: what is (12, 5) in polar coordinates?
r = sqrt(12^2 + 5^2) = sqrt(169) = 13, and angle = atan(5/12) ≈ 22.6 degrees, so (12, 5) ≈ (13, 22.6°).

46
Cont..
 Once the radius has been determined, the angle can be calculated using the arc tangent
function. This function takes the y-coordinate and the x-coordinate as inputs and
returns the angle in radians. The angle can then be converted to degrees if desired.
 For example, to convert the point (3,4) from Cartesian to polar coordinates, first
determine the radius using the Pythagorean theorem:
 radius = sqrt(3^2 + 4^2) = 5
Next, use the arc tangent function to determine the angle:
 angle = atan2(4, 3) ≈ 0.927 radians ≈ 53.13 degrees
Therefore, the polar coordinates of the point (3,4) are (5, 53.13°).
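The same conversion written as a small C program (a hedged sketch, not from the slides), using sqrt and atan2 from the standard math library:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double x = 3.0, y = 4.0;
        double radius = sqrt(x * x + y * y);        /* Pythagorean theorem   */
        double angle  = atan2(y, x) * 180.0 / PI;   /* atan2 returns radians */
        printf("(%g, %g) -> (r = %g, angle = %g degrees)\n", x, y, radius, angle);
        return 0;
    }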
47
Cont.

The idea of a coordinate system, or coordinate frame is pervasive in

computer graphics.

For example, it is usual to build a model in its own modeling frame,

and later place this model into a scene in the world coordinate frame.

We often refer to the modeling frame as the object frame, and the

world coordinate frame as the scene frame.


48
3-D Viewing: the Synthetic Camera
The synthetic camera is a programmer’s reference model for specifying 3D view and projection parameters to the computer. These parameters include (see the sketch after this list):
 position of camera
 Orientation
 field of view (wide angle, normal…)
 depth of field (near distance, far distance)
 focal distance
 tilt of view/film plane (if not normal to view direction, produces oblique projections)
 perspective or parallel projection? (camera near objects or an infinite distance away)
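As a hedged sketch (not from the slides), these parameters map naturally onto GLU calls; the numeric values below are illustrative assumptions:

    void setupCamera(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0,            /* field of view (degrees)        */
                       640.0 / 480.0,   /* aspect ratio of the window     */
                       1.0, 100.0);     /* near and far (depth of field)  */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 2.0, 10.0,       /* position of the camera (eye)   */
                  0.0, 0.0, 0.0,        /* point to look at (orientation) */
                  0.0, 1.0, 0.0);       /* up direction                   */
    }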

49
View Volume
A view volume contains everything visible from the point of view or direction (it bounds the portion of 3D space that is to be clipped against and projected onto the view plane).
 What does the camera see?
Conical view volumes:
 Approximates what eye sees
 Expensive math (simultaneous quadratic equations)
when clipping objects against cone’s surface.
Can approximate with rectangular cone instead (called a frustum)
 Works well with a rectangular viewing window
 Simultaneous linear equations for easy clipping
50
Position
Determining the Position is analogous to a photographer deciding the vantage point
from which to shoot a photo.
Three degrees of freedom: x, y, and z coordinates in 3-space
This x, y, z coordinate system is right-handed: if you open your right hand, align your
fingers with the +x axis, and curl them towards the +y axis, your thumb will point
along the +z axis

51
Orientation
Orientation is specified by a point in 3D space to look at (or a
direction to look in) and an angle of rotation about this direction.
The default (canonical) orientation is looking down the negative z-axis with the up direction pointing straight up the y-axis.

52
Output primitives and Attributes
 Attribute of a primitive is a parameter that affects the way the
primitive is displayed.
 OpenGL is a state system: it maintains a list of current state
variables that are used to modify the appearance of each primitive.
 State variables represent the attributes of the primitives.
 The values of state variables remain in effect until new values are specified.
 Changes to state variables affect primitives drawn after the change.
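For example (a hedged sketch, not from the slides), once the colour state is set it applies to every primitive drawn until it is changed:

    glColor3f(0.0f, 0.0f, 1.0f);   /* state change: the current colour is now blue */
    glBegin(GL_LINES);             /* this line is therefore drawn in blue         */
        glVertex2f(-0.5f, 0.0f);
        glVertex2f( 0.5f, 0.0f);
    glEnd();
    glColor3f(1.0f, 0.0f, 0.0f);   /* primitives drawn after this point are red    */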

53
Color Attributes
We can define colours in two different ways:
 RGB: the colour is defined as the combination of its red, green
and blue components.
 RGBA: the colour is defined as the combination of its red, green
and blue components together with an alpha value, which
indicates the degree of opacity/transparency of the colour.

54
Color Values
Color Code Red Green Blue Displayed Color

0 0 0 0 Black
1 0 0 1 Blue
2 0 1 0 Green
3 0 1 1 Cyan
4 1 0 0 Red
5 1 0 1 Magenta
6 1 1 0 Yellow

7 1 1 1 White
55
Cont.
 Frame buffer is used by raster graphics systems to store the
image ready for display.
 There are two ways in which graphics packages store color
values in a frame buffer:
 Direct storage
 Look-up table

56
Direct storage

 The color values of the pixel are stored in the frame buffer.
 Each pixel in the frame buffer must have 3 or 4 values stored.

57
Cont.

 The frame buffer takes a lot of memory.
 For example, suppose we have a screen resolution of 1024x768, and we use 8 bits for each of the red, green and blue components of the colors stored.
 The total storage required will be 1024 x 768 x 24 bits = 2,359,296 bytes ≈ 2.4 MB.

58
Color Look-up table

 It stores an index for each pixel in the frame buffer.
 The actual colors are stored in a separate look-up table, and the index looks up a color in this table.

59
Cont.

 Color look-up tables save storage, but they slow down the
rasterization process as an extra look-up operation is required.
 They were commonly used in the early days of computer graphics
when memory was expensive, but these days memory is cheaper,
so most systems use direct storage.
 OpenGL uses direct storage by default, but the programmer can
choose to use a color look-up table if they wish.
60
OpenGL Colour Attributes.
We change the current drawing colour using the glColor* function and the current

background colour using the glClearColor function. The background colour is always set

using an RGBA colour representation (although, as we will see below, the alpha value is

sometimes ignored).
The drawing colour can be set using either RGB or RGBA representations. The following
are all valid examples of setting the drawing and background colours:
glColor3f(1.0,0.0,0.0); // set drawing colour to red
glColor4f(0.0,1.0,0.0,0.05); // set drawing colour to semi-transparent green
glClearColor(1.0,1.0,1.0,1.0); // set background colour to opaque white
61
Cont.
 Before we do any drawing in OpenGL, we must define a frame buffer.
 To do color drawing we must specify a color frame buffer:

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
 To store alpha values in the frame buffer:

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA);
 To use a color look-up table:

glutInitDisplayMode(GLUT_SINGLE | GLUT_INDEX);
62
Point Attributes
Points are the simplest type of primitive.
The only attributes we can modify for points are the color and the size.
 To change the size of points in pixels we use:

glPointSize(size)
 The default point size is 1 pixel.
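A minimal hedged sketch (sizes and colours are illustrative) showing both point attributes in use:

    glPointSize(4.0f);             /* points are now drawn 4 pixels wide */
    glColor3f(1.0f, 1.0f, 0.0f);   /* and in yellow                      */
    glBegin(GL_POINTS);
        glVertex2f(0.25f, 0.25f);
    glEnd();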

63
Line Attributes
The attributes of line primitives that we can modify include the following (see the sketch after this list):
 Line color
 Line width
 Line style (solid/dashed etc.)
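In OpenGL these line attributes are controlled with state calls; a hedged sketch with illustrative values:

    glColor3f(0.0f, 1.0f, 0.0f);   /* line colour: green                          */
    glLineWidth(3.0f);             /* line width: 3 pixels                        */
    glEnable(GL_LINE_STIPPLE);     /* line style: enable dashed (stippled) lines  */
    glLineStipple(1, 0x00FF);      /* pattern: 8 pixels on, 8 pixels off          */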

64
Line Color

 setcolor(color): This is the function that is needed to make a colorful line.
 Using this function along with the Line() function, users can draw a colorful line. In this function, we have to provide the color name as the argument. (Note that setcolor() and Line() come from the legacy graphics.h library rather than from OpenGL.)
 This function needs to be placed before the Line() function so that the line is drawn in the chosen color.


65
Line Width
 The simplest and most common technique for increasing the width of a line is to
 plot a line of width 1 pixel, and then add extra pixels in either the horizontal or vertical direction. Which of these two approaches we use depends on the gradient m of the line.

66
Cont.

If |m| ≤ 1 plot extra pixels vertically, i.e. same x-coordinate,

different y-coordinate.

67
Cont.
 If |m|> 1, plot extra pixels horizontally, i.e. same y-coordinate,
different x-coordinate.
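A hedged sketch (not from the slides) of this pixel-replication idea; setPixel() is a small helper built from an OpenGL point, and a 2D orthographic projection in pixel coordinates is assumed:

    #include <GL/glut.h>
    #include <math.h>

    /* light a single pixel at (x, y) using a point primitive */
    static void setPixel(int x, int y)
    {
        glBegin(GL_POINTS);
            glVertex2i(x, y);
        glEnd();
    }

    /* widen the line at column/row (x, y) to width w, given gradient m */
    static void plotThick(int x, int y, double m, int w)
    {
        int i;
        if (fabs(m) <= 1.0) {
            for (i = -(w / 2); i <= w / 2; i++)   /* |m| <= 1: extra pixels vertically  */
                setPixel(x, y + i);
        } else {
            for (i = -(w / 2); i <= w / 2; i++)   /* |m| > 1: extra pixels horizontally */
                setPixel(x + i, y);
        }
    }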

68
