Introduction to Image Processing and Image Processing Operations (Unit 4, 21CST602)


Syllabus:
Introduction to Image Processing: Overview, Nature of IP, IP and its related fields, Digital Image representation, types of images.
Digital Image Processing Operations: Basic relationships and distance metrics, Classification of image processing operations.
Resources:
1. Text Book 2: S. Sridhar, Digital Image Processing, second edition, Oxford University Press, 2016, Chapters 1 and 3.

Overview
 Images are everywhere! Sources of images include paintings, photographs in magazines and journals, image galleries, digital libraries, newspapers, advertisement boards, television, and the Internet.
 Images are imitations of real-world objects.
 In image processing, the term 'image' is used to denote the image data that is sampled, quantized, and readily available in a form suitable for further processing by digital computers.
 In digital image processing, the image is fed as an input to the system, and the system interprets and understands the content to let further actions happen.
 In other words, image processing enables us to perform the required operations on a particular image, either to enhance the image or to extract information from it. Image processing can be categorized as one of the fields of signal processing.

Nature of IP

Image Processing Environment


 In an image processing environment, first we have a radiation source, which is a light source essential for viewing the object. The sun, lamps, and clouds are all examples of radiation or light sources. Next, we have objects. The object is the target for which the image needs to be created.


 The object can be people, industrial components, or the anatomical structure of a patient.
 The objects can be 2D, 3D, or multidimensional mathematical functions involving many variables. For example, a printed document is a 2D object. Most real-world objects are 3D.
 Sensors are important components of imaging systems. They convert light energy to electric
signals.
 The first major challenge in image processing is to acquire the image for further processing.
 The three ways of acquiring an image are:
1. Reflective mode imaging
 Reflective mode imaging represents the simplest form of imaging and uses a sensor
to acquire the digital image.
 All video cameras, digital cameras, and scanners use some types of sensors for
capturing the image.
2. Emissive type imaging
 Emissive type imaging is the second type, where the images are acquired from self-
luminous objects without the help of a radiation source.
 In emissive type imaging, the objects are self-luminous.
 The radiation emitted by the object is directly captured by the sensor to form an
image.
 MRI(Magnetic Resonance Imaging) is an example of emissive type imaging.
3. Transmissive imaging
 Transmissive imaging is the third type, where the radiation source illuminates the
object.
 The absorption of radiation by the objects depends upon the nature of the material.
 Some of the radiation passes through the objects. The attenuated radiation is
sensed into an image. This is called Transmissive imaging.
 X-Ray imaging is an example of Transmissive type imaging.
 The figure above shows three types of processing:
1. Optical image processing is an area that deals with the object, optics, and how processes are applied to an image that is available in reflected or transmitted form.
2. Analog image processing
 An analog or continuous image is a continuous function f(x, y), where x and y are two spatial coordinates. Analog signals are characterized by continuous signals varying with time. They are often referred to as pictures. The processes that are applied to the analog signal are called analog processes.
 Analog image processing is an area that deals with the processing of analog electrical signals using analog circuits. The imaging systems that use film for recording images are also known as analog imaging systems.
 The analog signal is often sampled, quantized, and converted into digital form using a digitizer. Digitization refers to the process of sampling and quantization.
 Sampling is the process of converting a continuous-valued image f(x, y) into a discrete image, as computers cannot handle continuous data. The main aim is to create a discretized version of the continuous data. Sampling is a reversible process, as it is possible to get the original image back.
 Quantization is the process of converting the sampled analog value of the function f(x, y) into a discrete-valued integer.
3. Digital image processing is an area that uses digital circuits, systems, and software algorithms to carry out the image processing operations. The image processing operations may include quality enhancement of an image, counting of objects, and image analysis.

Reasons for Popularity of DIP (Advantages)

1. It is easy to post-process the image.
2. Small corrections can also be made in the captured image using software.
3. It is easy to store the image in digital memory.
4. It is possible to transmit the image over networks, so sharing an image is quite easy.
5. A digital image does not require any chemical process, so it is very environment friendly, as harmful film chemicals are not required or used.
6. It is easy to operate a digital camera.

Disadvantages
 The disadvantages of digital images are very few.
 Some of the disadvantages are:
 the initial cost,
 problems associated with sensors, such as high power consumption,
 potential equipment failure, and
 security issues associated with the storage and transmission of digital images.

IMAGE PROCESSING AND RELATED FIELDS

 Image processing & Computer Graphics: Image processing deals with raster data or bitmaps, whereas computer graphics primarily deals with vector data.


 Image processing & Signal Processing: Digital signal processing deals with the processing of a one-dimensional signal, while image processing deals with visual information that is often in two or more dimensions.
 Image processing & Machine Vision: The main goal of machine vision is to interpret the image and to extract its physical, geometric, or topological properties.
 Image processing & Video Processing: Image processing is about still images. A video can be considered as a collection of images indexed by time. Thus, video processing is an extension of image processing.
 Image processing & Optics: Optical image processing deals with lenses, light, lighting conditions, and associated optical circuits. The study of lenses and lighting conditions has an important role in the study of image processing.
 Image processing & Statistics: Statistics play an important role in image understanding and image analysis.

Digital Image representation

 An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or colour, of a viewed scene.
 An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.
 In general, a digital image of M rows and N columns can be written as a mathematical function f(x, y) in matrix form as:

f(x, y) = [ f(0, 0)     f(0, 1)     ...  f(0, N-1)
            f(1, 0)     f(1, 1)     ...  f(1, N-1)
            ...
            f(M-1, 0)   f(M-1, 1)   ...  f(M-1, N-1) ]

 The term gray level is often used to refer to the intensity of monochrome images.
 Colour images are formed by a combination of individual 2D images, for example, the RGB colour system.
 When x, y, and f(x, y) are all finite, discrete quantities, we call the image a digital image.
 A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as:
 picture elements
 image elements
 pels
 pixels
 Pixel is the most widely used term.
 Image resolution depends on two factors: the optical resolution of the lens and the spatial resolution.
 Optical resolution:
- Refers to the level of detail that an imaging system, such as a camera or scanner, can capture.
- It is primarily determined by the physical characteristics of the imaging system, including the quality of its lenses, the size of its sensor or film, and other factors.
- Typically measured in terms of the number of pixels per unit length, such as pixels per inch (PPI) or pixels per millimeter (PPM).
- Higher optical resolution means more pixels are used to represent the same area, resulting in finer detail and greater clarity in the captured image.
 Spatial resolution:
- Refers to the scale or size of the smallest unit of an image capable of distinguishing objects, or
- A measure of the smallest angular or linear distance needed to identify adjacent objects in an image.
- Spatial resolution describes how many pixels are employed to comprise a digital image.
- Depends on two parameters:
1. The number of pixels of the image
2. The number of bits necessary for adequate intensity resolution, referred to as the bit depth.
- The number of pixels determines the quality of a digital image.
- The total number of pixels is measured by X (number of rows) x Y (number of columns).
- To represent a pixel intensity value, a certain number of bits is required.
Example: Binary Image
- Binary image pixels can have only two colours, usually black and white.
- Binary images are also called bi-level or two-level images.
- Pixel art made of two colours is often referred to as 1-bit.
- This means that each pixel is stored as a single bit, i.e., a 0 or 1.


- The number of bits required to encode the pixel value is called the bit depth.
- The number of intensity levels is a power of two, written as 2^m for a bit depth of m.
- In a monochrome grey scale image, pixel values range from 0 to 255.
- Eight bits are required to represent a monochrome grey scale image, as 2^8 = 256 (range from 0 to 255). So the bit depth of a grey scale image is 8.
 So the total number of bits necessary to represent the image is = number of rows x number of columns x bit depth (see the sketch below).
 The concept of 2D images can be extended to 3D images also. A 3D image is a function f(x, y, z), where x, y, and z are spatial coordinates.
 In 3D images, the term 'voxel' is used for pixel. Voxel is an abbreviation of 'volume element'.
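To make the storage formula concrete, here is a minimal Python sketch (ours, not from the textbook) that applies total bits = rows x columns x bit depth:

# Total bits needed to store an uncompressed image:
# rows x columns x bit depth.
def image_storage_bits(rows, cols, bit_depth):
    return rows * cols * bit_depth

# Example: a 1024 x 1024 grey scale image (bit depth 8)
bits = image_storage_bits(1024, 1024, 8)
print(bits, "bits =", bits // 8, "bytes")  # 8388608 bits = 1048576 bytes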


Classification of Images
1. Based on Nature:
i. Natural: Images of natural objects obtained using devices called cameras or scanners.
ii. Synthetic: Images that are generated using computer programs.
2. Based on Attributes:
i. Raster: Pixel based; quality depends on the number of pixels.
ii. Vector graphics: Use basic geometric attributes, such as lines and circles, to describe an image.


3. Based on Colour:
i. Grey scale images are different from binary images as they have many shades of grey between black and white. These images are also called monochromatic as there is no colour component in the image, like in binary images. Grey scale is the term that refers to the range of shades between white and black or vice versa. Grey scale images use an 8-bit representation, producing 2^8 = 256 levels. The human visual system can distinguish only 32 different grey levels.
ii. In binary images, the pixels assume a value of 0 or 1, so one bit is sufficient to represent the pixel value. Binary images are also called bi-level images.
iii. In true colour images, the pixel has a colour that is obtained by mixing the primary colours red, green, and blue. Each colour component is represented using 8 bits, so true colour images use 24 bits to represent all the colours, that is, 2^24 = 1,67,77,216 colours.
iv. A special category of colour images is the indexed image. In most images, the full range of colours is not used, so it is better to reduce the number of bits by maintaining a colourmap, gamut, or palette with the image.
v. Like true colour images, pseudo-colour images are also used widely in image processing. True colour images are called three-band images. However, in remote sensing applications, multi-band or multi-spectral images are generally used. These images, which are captured by satellites, contain many bands.
4. Based on Dimensions:
 Images can be classified based on dimensions also.
 Normally, digital images are a 2D rectangular array of pixels.
 If another dimension, of depth or any other characteristic, is considered, it may be necessary to use a higher-order stack of images (3D).
 A good example of a 3D image is a volume image, where pixels are called voxels.
 By '3D image', it is meant that the dimension of the target in the imaging system is 3D.
 Examples: CT images, MRIs, and microscopy images.


5. Based on Data Types:

 Sometimes, image processing operations produce images with negative numbers, decimal fractions, and complex numbers.
 To handle negative numbers, signed and unsigned integer types are used; in signed types, the first bit is used to encode whether the number is positive or negative.
 Floating point involves storing the data in scientific notation. For example, 1230 can be represented as 0.123 x 10^4, where 0.123 is called the significand and the power 4 is called the exponent. There are many floating-point conventions.
 The quality of such data representation is characterized by parameters such as data accuracy and precision. For this, the double datatype is used.
6. Based on Domain Specific:
- Images can be classified based on domains and applications.
i. Range Images: Range images are often encountered in computer vision. In these images, pixel values denote the distance between the object and the camera. These images are also referred to as depth images.
ii. Multispectral Images: Multispectral images are often encountered in remote sensing applications. These images may have many bands that may include the infrared and ultraviolet regions of the EM spectrum.
Fundamental Steps in Digital Image Processing
 The different tasks done in DIP are divided into two broad categories based on the output of the study, and every DIP method or technique must produce output belonging to one of the two categories:
1. Input is an image and the output is also an image.
2. Input is an image and the output is an attribute extracted from the image.
 Below is the graphical notation for the fundamental steps in image processing.


1. Image Acquisition:
 In image processing, it is defined as the action of retrieving an image from some source, usually a hardware-based source, for processing.
 It is the first step in the workflow sequence because, without an image, no processing is possible. The image that is acquired is completely unprocessed. Image acquisition may involve pre-processing such as scaling.
2. Image Enhancement:
 It is the process of adjusting digital images so that the results are more suitable for display
or further image analysis. Usually it includes sharpening of images, brightness & contrast
adjustment, removal of noise, etc.
 In image enhancement, we generally try to modify the image, so as to make it more pleasing
to the eyes.
 It is subjective in nature: for example, some people like high-saturation images and some people like natural colour. It differs from person to person.
3. Image Restoration:
 It is the process of recovering an image that has been degraded, using some knowledge of the degradation function H and the additive noise term. Unlike image enhancement, image restoration is completely objective in nature.
4. Color Image Processing:
 Color image processing is an area that has been gaining its importance because of the
significant increase in the use of digital images over the Internet. This may include color
modeling and processing in a digital domain etc. This handles the image processing of
colored images either as indexed images or RGB images.
5. Wavelets and multiresolution processing:
 Wavelets are small waves of limited duration which are used to calculate wavelet transform
which provides time-frequency information.
 Wavelets lead to multiresolution processing in which images are represented in various
degrees of resolution.
6. Compression:
 Compression deals with the techniques for reducing the storage space required to save an
image or the bandwidth required to transmit it.
 This is particularly relevant for displaying images on the Internet: if the size of the image is large, it uses more bandwidth (data) to fetch the image from the server and increases the loading time of the website.
7. Morphological Processing:
 It deals with extracting image components that are useful in representation and description
of shape.
 It includes basic morphological operations like erosion and dilation.
 As seen from the block diagram that the outputs of morphological processing generally are
image attributes.


8. Segmentation:
 It is the process of partitioning a digital image into multiple segments. It is generally used to
locate objects and boundaries in objects.
 In general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A segmentation procedure brings the process a long way toward successful
solution of imaging problems that require objects to be identified individually.
9. Representation and Description:
 Representation deals with converting the data into a suitable form for computer
processing.
 Boundary representation: it is used when the focus is on external shape
characteristics e.g. corners
 Regional representation: it is used when the focus in on internal properties e.g.
texture
 Description deals with extracting attributes that
 results in some quantitative information of interest
 is used for differentiating one class of objects from others
10. Recognition:
 It is the process that assigns a label (e.g. Notebook, Laptop) to an object based on its
description.
 It is the last step of image processing which use artificial intelligence.
Knowledge Base:
 Knowledge about a problem domain is coded into an image processing system in the form
of a knowledge base.
 This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located.
 The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with a change detection application.

Digital Image Processing Operations


 Generally, image processing operations are divided into two categories as follows:
o Low-level operations
o High-level operations
 Image acquisition, preprocessing, and compression are considered low-level operations.
 Image segmentation and feature extraction deal with the extraction of necessary image portions and the analysis of image features. Hence, segmentation and feature extraction are considered to be important areas; these stages serve as the link between low-level and high-level image processing.
 High-level image processing deals with image understanding and image interpretation in a more meaningful manner. It is based on knowledge, goals, and plans. Image understanding uses concepts of artificial intelligence to imitate human cognition.


Basic Relationships and Distance Metrics
 The various neighbourhood relationships between pixels and distance metrics between image points (pixels) are discussed in this section.
1. Image Coordinate System:
 In the image coordinate system, images can easily be represented as a two-dimensional array (matrix).
 The popularity of the matrix form is due to the fact that most programming languages support the 2D array data structure and can easily implement matrix-level computation.
 Pixels can be visualized logically and physically.
 Logical pixels specify the points of a continuous 2D function. These are logical in the sense that they specify a location but occupy no physical area. Normally, this is represented in the first quadrant of the Cartesian coordinate system.
 Physical pixels, on the other hand, occupy a small amount of space when displayed on the output device.
 The pixel array is represented by the Cartesian coordinate system.
 For example, an analog image of size 3 x 3 is represented in the first quadrant of the Cartesian coordinate system as shown in Fig. 3.1.
 Figure 3.1 illustrates an image f(x, y) of dimension 3 x 3, where f(0, 0) is the bottom left corner. Since it starts from the coordinate position (0, 0), it ends with f(2, 2); that is, x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1.
 x and y define the dimensions of the image.


 However, in digital image processing, the discrete form of the image is often used. Discrete images are usually represented in the fourth quadrant of the Cartesian coordinate system. A discrete image f(x, y) of dimension 3 x 3 is shown in Fig. 3.2(a).
 Many programming environments, including MATLAB, start with an index of (1, 1). The equivalent representation of the given matrix is shown in Fig. 3.2(b).
 The coordinate system used for discrete images is, by default, the fourth quadrant of the Cartesian system.

2. Image Topology
 Image topology is a branch of image processing that deals with the fundamental properties of
the image such as image neighborhood, paths among pixels, boundary, and connected
components.
 It characterizes the image with topological properties such as neighborhood, adjacency, and
connectivity.
Neighbourhood
 Neighbourhood is fundamental to understanding image topology.
 In the simplest case, the neighbours of a given reference pixel are those pixels with which the given reference pixel shares its edges and corners.
 In the 4-neighbourhood N4(p), the reference pixel p(x, y) at the coordinate position (x, y) has two horizontal and two vertical pixels as neighbours. This is shown graphically below. Thus, the four neighbours of p are: {(x-1, y), (x+1, y), (x, y+1), (x, y-1)}.

 The diagonal neighbours of pixel p(x, y) are represented as ND(p). The diagonal neighbours are: {(x-1, y+1), (x+1, y+1), (x-1, y-1), (x+1, y-1)}.
 The 4-neighbourhood (N4) and ND are collectively called the 8-neighbourhood (N8). This refers to all the neighbours and pixels that share a common corner with the reference pixel p(x, y). These pixels are called indirect neighbours. This is represented as N8(p) and is shown graphically in Fig. 3.5.
 The set of pixels N8(p) = N4(p) ∪ ND(p) (see the sketch below).
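A small illustrative Python sketch (ours, not from the textbook) that builds the neighbour sets directly from the definitions above:

# N4: the two horizontal and two vertical neighbours of p = (x, y).
def n4(x, y):
    return {(x - 1, y), (x + 1, y), (x, y + 1), (x, y - 1)}

# ND: the four diagonal neighbours of p.
def nd(x, y):
    return {(x - 1, y + 1), (x + 1, y + 1), (x - 1, y - 1), (x + 1, y - 1)}

# N8 is the union of N4 and ND.
def n8(x, y):
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))  # the eight neighbours of pixel (1, 1)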

Connectivity/Adjacency

 The relationship between two or more pixels is defined by pixel connectivity.
 Connectivity information is used to establish the boundaries of objects.
 The pixels p and q are said to be connected if certain conditions on pixel brightness, specified by the set V, and spatial adjacency are satisfied.
 For a binary image, the set V will be {0, 1}, and for grey scale images, V might be any range of grey levels.
1. 4-Connectivity The pixels p and q are said to be in 4-connectivity when both have the same values, as specified by the set V, and q is in the set N4(p). This implies that on any path from p to q, every pixel is 4-connected to the next pixel.

2. 8-Connectivity It is assumed that the pixels p and q share a common grey scale value. The
pixels p and q are said to be in 8-connectivity if q is in the set N8(p).


3. Mixed connectivity Mixed connectivity is also known as m-connectivity. Two pixels p and q are said to be in m-connectivity when
a. q is in N4(p), or
b. q is in ND(p) and the intersection of N4(p) and N4(q) is empty.

For example, Fig. 3.6 shows 4- and 8-connectivity when V = {0, 1}; the connectivity is shown as lines. Here, a multiple path or loop is present. In m-connectivity, there are no such multiple paths. The m-connectivity for the image in Fig. 3.6 is as shown in Fig. 3.7. It can be observed that the multiple paths have been removed.

Relations
 A binary relation between two pixels a and b, denoted aRb, specifies a pair of elements of an image. For example, consider the image pattern given in Fig. 3.8.
 The set is given as A = {x1, x2, x3}. The set based on the 4-connectivity relation is given as {x1, x2}. It can be observed that x3 is ignored as it is not connected to any other element of the image by 4-connectivity.
 The following are the properties of binary relations: reflexivity (aRa), symmetry (aRb implies bRa), and transitivity (aRb and bRc imply aRc).
 If all these properties hold, the relation is called an equivalence relation.


Distance Measures
 The distance between the pixels p and q in an image can be given by distance measures such as the Euclidean distance (De), D4 distance, and D8 distance.
 Consider three pixels p, q, and z. If the coordinates of the pixels are P(x, y), Q(s, t), and Z(u, w) as shown in Fig. 3.9, the distances between the pixels can be calculated.

i) Euclidean distance (De)
 The Euclidean distance between p and q is defined as:
De(p, q) = sqrt((x - s)^2 + (y - t)^2)

ii) D4 distance (also called city-block distance) between p and q is defined as:

D4(p, q) = |x - s| + |y - t|

Example:
The pixels with distance D4 <= 2 from (x, y) form the following contours of constant distance; the pixels with D4 = 1 are the 4-neighbours of (x, y):

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2


iii) D8 distance (also called chessboard distance) between p and q is defined as:

D8(p, q) = max(|x - s|, |y - t|)

Example:
The pixels with D8 distance <= 2 from (x, y) form the following contours of constant distance; the pixels with D8 = 1 are the 8-neighbours of (x, y):

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

iv) Dm distance:
It is defined as the shortest m-path between the points. In this case, the distance between two pixels depends on the values of the pixels along the path, as well as the values of their neighbours. The three closed-form measures above are computed in the sketch below.
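The following Python sketch (ours) evaluates the three closed-form measures for the pixels p = (5, 2) and q = (1, 5) used in Question 8 at the end of this unit; Dm is path-dependent and is not computed here:

import math

def de(p, q):   # Euclidean distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (5, 2), (1, 5)
print(de(p, q), d4(p, q), d8(p, q))  # 5.0 7 4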


Example:

Solution:

1. Shortest-4 path

So, no 4-path exists.

2. Shortest-8 path

So, the shortest-8 path is 4.

3. Shortest-m path

So, the shortest-m path is 5.


Path
 A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y) and (xn, yn) = (s, t).
 (xi, yi) and (xi-1, yi-1) are adjacent pixels for 1 <= i <= n.
 n is the length of the path.
 If (x0, y0) = (xn, yn), the path is a closed path.
 We can define 4-, 8-, or m-paths depending on the type of adjacency specified.

Region
 A connected set is also called a region.
 Two regions Ri and Rj are said to be adjacent if their union forms a connected set; such regions are called adjacent or joint regions.
 Regions that are not adjacent are said to be disjoint regions.
 4- and 8-adjacency are considered when referring to regions.


 When discussing a particular region, the type of adjacency must be specified.
 In the figure below, the two regions are adjacent only if 8-adjacency is considered.

Boundary
 The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R.
 It is the set of pixels in the region that have at least one background neighbour, i.e., one or more neighbours that are not in R.
 Inner border: border of the foreground.
 Outer border: border of the background.

Classification of image processing operations

 There are various ways to classify image operations.
 One way to characterize the operations based on neighbourhood is as follows:
1. Point operations
2. Local operations
3. Global operations
 Point operations are those whose output value at a specific coordinate is dependent only on the input value at that coordinate.
 A local operation is one whose output value at a specific coordinate is dependent on the input values in the neighbourhood of that pixel.
 Global operations are those whose output value at a specific coordinate is dependent on all the values in the input image. A sketch of one operation of each kind follows.
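The following NumPy sketch (ours; f is an assumed 8-bit test image, not from the textbook) shows one operation of each kind:

import numpy as np

f = np.random.randint(0, 256, (5, 5), dtype=np.uint8)  # assumed test image

# Point operation: each output pixel depends only on the input pixel
# at the same coordinate (here, a brightness increase).
brighter = np.clip(f.astype(int) + 50, 0, 255).astype(np.uint8)

# Local operation: each output pixel depends on a 3 x 3 neighbourhood
# (here, a mean filter).
padded = np.pad(f.astype(float), 1, mode="edge")
local = np.zeros(f.shape)
for i in range(f.shape[0]):
    for j in range(f.shape[1]):
        local[i, j] = padded[i:i + 3, j:j + 3].mean()

# Global operation: each output pixel depends on all input pixels
# (here, contrast stretching using the global minimum and maximum).
stretched = (f - f.min()) / max(f.max() - f.min(), 1) * 255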
 Another way of categorizing the operations is as follows:
1. Linear operations
2. Non-linear operations
 An operator H is called a linear operator if it obeys the following rules of additivity and homogeneity:
1. Property of additivity: H(f1 + f2) = H(f1) + H(f2)
2. Property of homogeneity: H(k * f) = k * H(f), for any constant k
 A non-linear operator does not obey these properties.
 Image operations are array operations. These operations are done on a pixel-by-pixel basis.
 Array operations are different from matrix operations.
 For example, consider two images F1 and F2. The multiplication of F1 and F2 is element-wise: each pixel of the result is the product of the corresponding pixels of F1 and F2.
 In addition, one can observe that F1 * F2 = F2 * F1, whereas matrix multiplication is clearly different, since for matrices, A x B ≠ B x A in general.
 By default, image operations are array operations only.

Arithmetic Operations
 Arithmetic operations on images include addition, subtraction, multiplication, and division.

1. Image Addition
 Two images can be added in a direct manner, as given by:
g(x, y) = f1(x, y) + f2(x, y)
 The pixels of the input images f1(x, y) and f2(x, y) are added to obtain the resultant image g(x, y).
 Similarly, it is possible to add a constant value k to a single image to increase its brightness or intensity:
g(x, y) = f(x, y) + k

2. Image Subtraction
 The subtraction of two images can be done as follows:
g(x, y) = |f1(x, y) - f2(x, y)|
 The subtraction is a modulus operation.
 Similarly, it is possible to subtract a constant value k:
g(x, y) = f(x, y) - k


 This decreases the intensity or brightness.
 Some of the practical applications of image subtraction are:
1. Background elimination
2. Brightness reduction
3. Change detection

3. Image Multiplication
 Image multiplication can be done in the following manner:
g(x, y) = f1(x, y) * f2(x, y)
 Scaling the given image by a constant k can be performed as:
g(x, y) = k * f(x, y)
 If k is less than 1, the contrast of the image decreases; if k is greater than 1, the contrast increases.

4. Image Division
 The image division operation can be done as follows:
g(x, y) = f1(x, y) / f2(x, y)
 The division operation may result in floating-point numbers, hence the float datatype is used in programming for this operation.
 Division by a constant k can also be performed:
g(x, y) = f(x, y) / k
 A sketch of these operations on 8-bit images follows.
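A short NumPy sketch (ours, with assumed 2 x 2 test images). Plain uint8 arithmetic in NumPy wraps around modulo 256, so the results are computed in a wider type and clipped back to [0, 255]:

import numpy as np

f1 = np.array([[100, 200], [50, 250]], dtype=np.uint8)
f2 = np.array([[100, 100], [100, 100]], dtype=np.uint8)

add = np.clip(f1.astype(int) + f2, 0, 255).astype(np.uint8)  # saturated add
sub = np.abs(f1.astype(int) - f2).astype(np.uint8)           # modulus subtraction
mul = np.clip(f1.astype(int) * f2, 0, 255).astype(np.uint8)
div = f1.astype(float) / np.maximum(f2, 1)                   # float result, avoids /0

print(add)  # 250 + 100 saturates to 255 instead of wrapping around to 94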


Applications of Arithmetic Operations

 Arithmetic operations can be combined and put to effective use.
 For example, the image averaging operation can be used to remove noise in an image.
 Noise is a random fluctuation of pixel values, which affects the quality of an image.
 A noisy image can be expressed as the original image added with a noise component as follows:
g(x, y) = f(x, y) + η(x, y)
where f(x, y) is the input image and g(x, y) is the observed noisy image.


 Several instances g1, g2, ..., gM of the noisy image can be taken and averaged as:
g_bar(x, y) = (1/M) * Σ (i = 1 to M) gi(x, y)
A sketch of this averaging follows.
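A synthetic NumPy sketch (ours) of image averaging: M noisy observations g_i = f + eta_i of the same scene are averaged, which reduces the noise standard deviation by roughly a factor of sqrt(M):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 128.0)                                 # "true" image
noisy = [f + rng.normal(0, 20, f.shape) for _ in range(32)]  # M = 32 instances

g_bar = np.mean(noisy, axis=0)                               # averaged image

print(np.std(noisy[0] - f))  # noise level of one image, about 20
print(np.std(g_bar - f))     # about 20 / sqrt(32), roughly 3.5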

Logical Operations
 Bitwise logical operations can be applied to image pixels.
 The resultant pixel is determined by the rules of the particular operation.
 Some of the logical operations that are widely used in image processing are as follows:
1. AND/NAND
2. OR/NOR
3. XOR/XNOR
4. NOT

1. AND/NAND

 The operators AND and NAND take two images as input and produce one output image.
 The output image pixels are the output of the logical AND/NAND of the individual pixels.
 Some of the practical applications of the AND and NAND operators are as follows:
1. Computation of the intersection of images
2. Design of filter masks
3. Slicing of grey scale images

2. OR/NOR
 The practical applications of the OR and NOR operators are as follows:
1. OR is used as the union operator of two images.
2. OR can be used as a merging operator.


3. XOR/XNOR

 The practical applications of the XOR and XNOR operators are as follows:
1. Change detection
2. Use as a subcomponent of a complex imaging operation. XOR for identical inputs is zero. Hence it can be observed that the common region of image 1 and image 2 in Figs (a) and (b), respectively, is zero and hence dark. This is illustrated in Fig. 3.19.


4. Invert/Logical NOT

 For grey scale values, the inversion operation is described as:
g(x, y) = 255 - f(x, y)
 The practical applications of the inversion operator are as follows:
1. Obtaining the negative of an image. Figure 3.20(b) shows the negative of the original image shown in Fig. 3.20(a).
2. Making features clear to the observer
3. Morphological processing
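A brief NumPy sketch (ours, with assumed 8-bit test images) of the bitwise operations:

import numpy as np

f1 = np.array([[0, 255], [255, 0]], dtype=np.uint8)
f2 = np.array([[255, 255], [0, 0]], dtype=np.uint8)

and_img = f1 & f2   # bitwise AND: intersection of bright regions
or_img = f1 | f2    # bitwise OR: union/merging of images
xor_img = f1 ^ f2   # bitwise XOR: zero wherever the inputs match
not_img = 255 - f1  # inversion/negative: g(x, y) = 255 - f(x, y)

print(xor_img)  # common regions come out dark (0)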


Geometric operations

1. Translation
 Translation is the movement of an image to a new position.
 Let us assume that a point at the coordinate position X = (x, y) of the matrix F is moved to the new position X', whose coordinate position is (x', y').
 Mathematically, this can be stated as a translation of a point X to the new position X'.
 The translation is represented as:
x' = x + δx
y' = y + δy
 In a homogeneous coordinate system, this is written in matrix form as:
[x']   [1 0 δx] [x]
[y'] = [0 1 δy] [y]
[1 ]   [0 0 1 ] [1]

2. Scaling
 Scaling means enlargement or shrinking.
 In the homogeneous coordinate system, the scaling of the point (x, y) of the image F to the new point (x', y') of the image F' is described as:
x' = x * Sx
y' = y * Sy
 where Sx and Sy are called the scaling factors along the x and y axes, respectively.
 If the scaling factors are greater than 1, the object appears larger.
 If the scaling factors are less than 1, the object shrinks.
 In the homogeneous coordinate system, it is represented as:
[x']   [Sx 0  0] [x]
[y'] = [0  Sy 0] [y]
[1 ]   [0  0  1] [1]


3. Mirror or Reflection Operation

 This operation creates the reflection of the object as in a plane mirror.
 The operation returns an image in which the pixels are reversed.
 The reflected object is of the same size as the original object, but the object is in the opposite quadrant.
 The reflection can also be described as rotation by 180°.
 The reflection along the x-axis is given by:
x' = x, y' = -y
 Similarly, the reflection along the y-axis is given by:
x' = -x, y' = y

4. Shearing

 Shearing is a transformation that produces a distortion of the shape.
 This can be applied either along the x-axis or along the y-axis.
 In this transformation, the parallel and opposite layers of the object are shifted relative to each other.


5. Rotation
 An image can be rotated by various degrees, such as 90°, 180°, or 270°.
 In matrix form, rotation in the homogeneous coordinate system is given as:
[x']   [cos θ  -sin θ  0] [x]
[y'] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]
 This can be represented as F' = RA, where R is the rotation matrix.
 The parameter θ is the angle of rotation with respect to the x-axis.
 It is assumed that the rotation is about the origin.
 The value of θ can be positive or negative.
 A positive angle represents counter-clockwise rotation and a negative angle represents clockwise rotation.


Affine Transformation

 The affine transform is a compact way of representing all the transformations:
x' = a0*x + a1*y + a2
y' = b0*x + b1*y + b2
 Translation: a0 = 1, a1 = 0, a2 = δx, b0 = 0, b1 = 1, b2 = δy
 Scaling: a0 = Sx, a1 = 0, a2 = 0, b0 = 0, b1 = Sy, b2 = 0
 Rotation: a0 = cos θ, a1 = -sin θ, a2 = 0, b0 = sin θ, b1 = cos θ, b2 = 0
 Horizontal shear: a0 = 1, a1 = Shx, a2 = 0, b0 = 0, b1 = 1, b2 = 0
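A NumPy sketch (ours) that builds homogeneous matrices from the parameters above and composes them by matrix multiplication:

import numpy as np

def translate(dx, dy):
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Compose: rotate by 90 degrees, scale by 2, then translate by (10, 5).
T = translate(10, 5) @ scale(2, 2) @ rotate(np.pi / 2)

p = np.array([1, 0, 1.0])  # the point (1, 0) in homogeneous coordinates
print(T @ p)               # transformed point: approximately [10, 7, 1]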


Inverse Transformation

3D Transforms


Set operations

 Morphology is a collection of operations based on set theory, to accomplish various tasks such
as extracting boundaries, filling small holes present in the image, and removing noise present
in the image.
 Mathematical morphology is a very powerful tool for analysing the shapes of the objects that
are present in the images. The theory of mathematical morphology is based on set theory.
 The set operators such as intersection, union, inclusion, and complement can then be applied to
the images.


 The two most widely used morphological operations are erosion and dilation.
1. Erosion
 Erosion shrinks the image pixels, i.e., it removes pixels on object boundaries.
 Let us assume that A and B are sets of pixel coordinates. The erosion of A by B can be denoted as:
A ⊖ B = ∩ over (u, v) in B of {(x - u, y - v) : (x, y) ∈ A}
where x and y correspond to the set A, and u and v correspond to the set B. The coordinates are subtracted and the intersection is carried out to create the resultant set.

2. Dilation
 Dilation expands the image pixels, i.e., it adds pixels on object boundaries.
 It can be applied to binary as well as grey scale images.
 The basic effect of this operator on a binary image is that it gradually enlarges the boundaries of the region, while the small holes that are present in the image become smaller.
 Let us assume that A and B are sets of pixel coordinates. The dilation of A by B can be denoted as:
A ⊕ B = ∪ over (u, v) in B of {(x + u, y + v) : (x, y) ∈ A}
where x and y correspond to the set A, and u and v correspond to the set B. The coordinates are added and the union is carried out to create the resultant set. A sketch follows.


Statistical operations

 Statistics play an important role in digital image processing.
 An image can be treated as a set of discrete points.
 Statistical operations can be applied to an image to get desired results, such as manipulation of brightness and contrast.
 Some of the very useful statistical operations include the mean, median, mode, and mid-range. The measures of data dispersion include quartiles, the inter-quartile range, and the variance.
 Some of these common statistical measures are sketched below.
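A NumPy sketch (ours, with an assumed 3 x 3 test image) of these measures:

import numpy as np

f = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 255]], dtype=np.uint8)
v = f.astype(float).ravel()  # pixel values as a flat array

mean = v.mean()
median = np.median(v)
mode = np.bincount(v.astype(int)).argmax()  # most frequent pixel value
mid_range = (v.min() + v.max()) / 2
variance = v.var()
q1, q3 = np.percentile(v, [25, 75])
iqr = q3 - q1                               # inter-quartile range

print(mean, median, mode, mid_range, variance, iqr)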


Questions on Unit-4

1. Write a program to read a digital image and split and display the image into four quadrants: up, down, right, and left.
2. Write a program to show rotation, scaling, and translation of an image. (A sketch for Questions 1 and 2 follows the question list.)
3. In detail explain the fundamental steps involved in digital image processing systems.
4. Explain in detail the classification of images.
5. Illustrate the relationship between image processing and other related fields.
6. Given a grey-scale image of size 5 inches by 6 inches scanned at the rate of 300 dpi, answer
the following:
(a) How many bits are required to represent the image?
(b) How much time is required to transmit the image if the modem is 28 kbps?
(c) Repeat (a) and (b) if it were a binary image.

7. Explain Digital image representation. A picture of physical size 2.5 inches by 2 inches is
scanned at 150 dpi. How many pixels would be there in the image?
8. Explain distance measures. Compute the Euclidean distance (D1), city-block distance (D2), and chessboard distance (D3) for points p and q, where p and q are (5, 2) and (1, 5) respectively. Give the answer in the form (D1, D2, D3).
9. Describe the fundamental steps in image processing?
10. Describe the basic relationship between the pixels
a. Neighbours of a pixel
b. Adjacency, Connectivity, Regions and Boundaries
c. Distance measures
11. All solved problems in notes.
12. Summarize the Arithmetic operations on digital images with relevant expressions.
13. Summarize the Logical operations on digital images with relevant expressions.
14. Explain 2D Geometric transformation with equations and homogeneous matrix.
15. Consider two pixels x and y whose coordinates are (0, 0) and (6, 3). Compute the De, D4, and D8 distances between x and y.
16. Consider the following two images. Perform the arithmetic operations: addition,
multiplication, division. Assume that all the operations are uint8.

17. Consider the images f1 and f2 in above question. What is the result of image subtraction
and image absolute difference? Is there any difference between them?


18.

19. Consider the following two images. The addition and subtraction of images are given by f1+f2 and
f1−f2. Assume both the images are of the 8-bit integer type.

20. Consider the following two images. Perform the logical operations AND, OR, NOT and
difference.

DIFFERENCE:
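For Questions 1 and 2, the following is a hedged Python sketch assuming OpenCV (cv2) is installed; the file name 'image.png' is a placeholder for any test image:

import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
h, w = img.shape

# Q1: split the image into four quadrants.
top_left, top_right = img[:h // 2, :w // 2], img[:h // 2, w // 2:]
bot_left, bot_right = img[h // 2:, :w // 2], img[h // 2:, w // 2:]

# Q2: rotation (45 degrees about the centre), scaling, and translation.
R = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1.0)
rotated = cv2.warpAffine(img, R, (w, h))
scaled = cv2.resize(img, None, fx=1.5, fy=1.5)
T = np.float32([[1, 0, 50], [0, 1, 30]])             # shift by (50, 30)
translated = cv2.warpAffine(img, T, (w, h))

cv2.imwrite("top_left.png", top_left)
cv2.imwrite("rotated.png", rotated)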
