Experiments


Experiment 0:
Objective: Write an introduction about MATLAB and image processing tools.
# Write 3-4 pages on your own.
Experiment 1:
Objective: Write a program to read an image, convert it to greyscale and binary, and show all the images
on a single plot.

Answer 1:
I=imread('monkey.jpg');
G=rgb2gray(I);
B=im2bw(I);
whos
subplot(2,2,1);
imshow(I);
title('Original image');
subplot(2,2,2);
imshow(G);
title('Grey image');
subplot(2,2,3);
imshow(B);
title('Black and White');

Experiment 2[a]:
Objective: Write a program to add, subtract and multiply values in an image, and also fuse two images.

Program:
I=imread('monkey.jpg');
C=imread('cat.jpg');
F=imfuse(I,C);
G=rgb2gray(I);
Z=imadd(I,50);
S=imsubtract(I,50);
M=immultiply(I,2);
subplot(2,3,1);
imshow(Z);
title('Addition');
subplot(2,3,2);
imshow(S);
title('Subtraction');
subplot(2,3,3);
imshow(M);
title('Multiplication');
subplot(2,3,4);
imshow(I);
title('Original');
subplot(2,3,5);
imshow(F);
title('Fuse');
subplot(2,3,6);
imshow(C);
title('Original 2');

Experiment 2[b]:
Objective: Write a program to implement logical operations such as OR, XOR and AND, and the complement of an
image.

Program:
clc;
A=imread('ex.jpg');
A=imresize(A,[256,256]);
A=im2bw(A);
subplot(3,2,1),imshow(A);

B=imread('ex.jpg');
B=imresize(B,[256,256]);
B=im2bw(B);
subplot(3,2,2),imshow(B);

Resand=A&B;
subplot(3,2,3),imshow(Resand);
title('AND');

Resor=A|B;
subplot(3,2,4),imshow(Resor);
title('OR');

Resxor=xor(A,B);
subplot(3,2,5),imshow(Resxor);
title('XOR');

D=imcomplement(A);
subplot(3,2,6),imshow(D);
title('Complement');
Experiment–3
Object: To show image rotation, scaling, and translation using geometric transformations.

Software: MATLAB

Theory:
Perform generic geometric transformations using the imwarp workflow. Geometric
transformations map pixel coordinates in the output image to coordinates in the input image. The
mapping process then interpolates the value of output pixels from the input image. Use these
functions to perform general 2-D, 3-D, and N-D geometric transformations. To perform a 2-D or 3-D
geometric transformation, first create a geometric transformation object that stores information
about the transformation. Then, pass the image to be transformed and the geometric transformation
object to the imwarp function.
Functions
imwarp Apply geometric transformation to image

affineOutputView Create output view for warping images

fitgeotrans Fit geometric transformation to control point pairs

findbounds Find output bounds for spatial transformation

fliptform Flip input and output roles of spatial transformation structure

makeresampler Create resampling structure

maketform Create spatial transformation structure (TFORM)

tformarray Apply spatial transformation to N-D array

tformfwd Apply forward spatial transformation

tforminv Apply inverse spatial transformation
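
The program below uses the convenience functions imtranslate, imrotate, imresize, and imcrop; for comparison, here is a minimal sketch of the imwarp workflow described above (the sample image peppers.png and the 30-degree rotation matrix are illustrative assumptions):

I = imread('peppers.png');                   % sample image shipped with MATLAB (assumption)
theta = 30;                                  % illustrative rotation angle
T = [cosd(theta)  sind(theta) 0
     -sind(theta) cosd(theta) 0
      0            0          1];
tform = affine2d(T);                         % geometric transformation object
J = imwarp(I, tform);                        % apply the transformation
subplot(1,2,1), imshow(I), title('Original');
subplot(1,2,2), imshow(J), title('Rotated with imwarp');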


Program:
clc;
clear all;
fname = imgetfile;
I = imread(fname);
T = imtranslate(I,[50 50]);
O = imrotate(I,90);
S1 = imresize(I,[300,300]);
S2 = imresize(I,0.5);
C = imcrop(I,[15 68 600 500]);
subplot(2,3,1), imshow(I), title('Original image');
subplot(2,3,2), imshow(T), title('Translate(50,50)');
subplot(2,3,3), imshow(O), title('Rotation(90)');
subplot(2,3,4), imshow(S1), title('Scaling(300,300)');
subplot(2,3,5), imshow(S2), title('Scaling(0.5)');
subplot(2,3,6), imshow(C), title('Cropping');

Result: We have performed operations on a digital image and shown image rotation, scaling, and translation
using geometric transformations.
Viva Questions

1. What is meant by geometric transformation?

2. What is spatial transformation?

3. What are the types of geometric transformation?

4. What is geometric transformation in image processing?

5. What are the basic transformations?

6. What are the rules of translations?

7. What is the meaning of transformation?

8. What is a coordinate rule?

9. What is transformation in computer graphics?

10. What do you mean by 3d transformation?

Experiment–4[a]

Object: WAP to implement histogram and histogram equalization using inbuilt functions.

Software: MATLAB

Theory:
Histogram is a graphical representation of the intensity distribution of an image. In simple terms, it
represents the number of pixels for each intensity value considered.
Histogram Equalization is a computer image processing technique used to improve contrast in
images. It accomplishes this by effectively spreading out the most frequent intensity values, i.e.,
Stretching out the intensity range of the image. This method usually increases the global contrast
of images when its usable data is represented by close contrast values. This allows for areas of
lower local contrast to gain a higher contrast.
A color histogram of an image represents the number of pixels in each type of color component.
Histogram equalization cannot be applied separately to the red, green and blue components of the
image as it leads to dramatic changes in the image's color balance. However, if the image is first
converted to another color space, like the HSL/HSV color space, the algorithm can then be applied to the
luminance or value channel without changing the hue and saturation of the image.
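
A minimal sketch of this HSV approach (the sample image peppers.png, shipped with MATLAB, is an assumption):

RGB = imread('peppers.png');          % sample image (assumption)
HSV = rgb2hsv(RGB);
HSV(:,:,3) = histeq(HSV(:,:,3));      % equalize only the Value (brightness) channel
out = hsv2rgb(HSV);
subplot(1,2,1), imshow(RGB), title('Original');
subplot(1,2,2), imshow(out), title('V-channel equalized');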
Adaptive Histogram Equalization
Adaptive Histogram Equalization differs from ordinary histogram equalization in the respect that
the adaptive method computes several histograms, each corresponding to a distinct section of the
image, and uses them to redistribute the lightness values of the image. It is therefore suitable
for improving the local contrast and enhancing the definitions of edges in each region of an image.

Contrast Limited Adaptive Histogram Equalization

Contrast Limited AHE (CLAHE) differs from adaptive histogram equalization in its contrast limiting. In the
case of CLAHE, the contrast limiting procedure is applied to each neighborhood from which a
transformation function is derived. CLAHE was developed to prevent the overamplification of noise that
adaptive histogram equalization can give rise to.
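
Neither program below uses CLAHE; a minimal sketch with the toolbox function adapthisteq (the low-contrast sample image pout.tif and the clip limit are illustrative assumptions):

G = imread('pout.tif');                     % low-contrast sample image (assumption)
J1 = histeq(G);                             % ordinary (global) histogram equalization
J2 = adapthisteq(G, 'ClipLimit', 0.02);     % CLAHE: tile-wise equalization with contrast limiting
subplot(1,3,1), imshow(G),  title('Original');
subplot(1,3,2), imshow(J1), title('histeq');
subplot(1,3,3), imshow(J2), title('adapthisteq (CLAHE)');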
Program 4[a]:

clc;
i=imread('ams');
k=rgb2gray(i);
subplot(2,2,1);
imshow(k);

subplot(2,2,2);
imhist(k);

j=histeq(k);
subplot(2,2,3);
imshow(j);

subplot(2,2,4);
imhist(j);
Experiment–4[b]

Object: WAP to implement histogram and histogram equalization manually.


Histogram Manually:

I = imread('parrot.jpg');
J = rgb2gray(I);

[r, c] = size(J);

h = zeros(1, 256);

for i = 1:r
for j = 1:c
t = double(J(i,j)) + 1;    % shift intensities 0-255 to indices 1-256
h(t) = h(t)+1;
end
end

disp(h);
bar(h);

subplot(2,2,1), imshow(I);
title('RGB image');
subplot(2,2,2), imshow(J);
title('Gray image');
subplot(2,2,3), imhist(J);
title('Histogram using imhist');
subplot(2,2,4), bar(h, 'b');
title('Histogram calculated');
Histogram Equalization Manually:

clc;
clear all;

I = imread('parrot.jpg');
J = rgb2gray(I);

[r, c] = size(J);
s = r*c;

h = zeros(1, 256);
z = zeros(1, 300);

for i = 1:r
for j = 1:c
t = double(J(i,j)) + 1;    % shift intensities 0-255 to indices 1-256
h(t) = h(t)+1;
end
end

pdf = h/s;                 % probability density function
cdf(1) = pdf(1);

for i = 2:256
cdf(i) = cdf(i-1)+pdf(i);  % cumulative distribution function
end

new = round(cdf*255);      % map the CDF back to the 0-255 intensity range

for i=1:r
for j=1:c
temp = double(J(i,j)) + 1;
b(i,j) = new(temp);        % equalized intensity (0-255)
t = b(i,j) + 1;
z(t) = z(t)+1;             % histogram of the equalized image
end
end

subplot(2,2,1), imshow(J);
title('Gray image');
subplot(2,2,2), bar(h, 'b');
title('Histogram calculated');
subplot(2,2,3), imshow(uint8(b));
title('Enhanced image calculated');
subplot(2,2,4), bar(z, 'b');
title('Histogram calculated for Enhanced image');
Viva Questions

1. What is histogram equalization in digital image processing?

2. Why is histogram equalization needed?

3. What happens if histogram equalization is applied twice?

4. Why is a histogram used in image processing?

5. What is histogram equalization in Matlab?

6. What is entropy in image processing?

7. Why is a histogram useful?

8. What is the advantage of histogram?

9. What is contrast in image processing?

10. How do you make a histogram graph?


Experiment – 5[a]

Objective: WAP to extract red, green and blue components from an image.

Program:
clc;
i=imread('exp.jpg');
subplot(2,2,1),imshow(i);

r=i(:, :, 1);    % red channel
subplot(2,2,2),imshow(r);

g=i(:, :, 2);    % green channel
subplot(2,2,3),imshow(g);

b=i(:, :, 3);    % blue channel
subplot(2,2,4),imshow(b);

Experiment – 5[b]

Objective: WAP to implement bit plane slicing.

Program:
i=imread('earcell.jpg');
b0=double(bitget(i,1));
b1=double(bitget(i,2));
b2=double(bitget(i,3));
b3=double(bitget(i,4));
b4=double(bitget(i,5));
b5=double(bitget(i,6));
b6=double(bitget(i,7));
b7=double(bitget(i,8));
subplot(3,3,1);imshow(i);title('Original Image');
subplot(3,3,2);subimage(b0);title('BIT PLANE 0');
subplot(3,3,3);subimage(b1);title('BIT PLANE 1');
subplot(3,3,4);subimage(b2);title('BIT PLANE 2');
subplot(3,3,5);subimage(b3);title('BIT PLANE 3');
subplot(3,3,6);subimage(b4);title('BIT PLANE 4');
subplot(3,3,7);subimage(b5);title('BIT PLANE 5');
subplot(3,3,8);subimage(b6);title('BIT PLANE 6');
subplot(3,3,9);subimage(b7);title('BIT PLANE 7');
Experiment–3

Object: To perform the two-dimensional Fourier transform operation on an image.

Software: MATLAB

Theory:
The Fourier Transform is an important image processing tool which is used to decompose an image
into its sine and cosine components. The output of the transformation represents the image in the
Fourier or frequency domain, while the input image is the spatial domain equivalent. In the Fourier
domain image, each point represents a particular frequency contained in the spatial domain image.
The Fourier Transform is used in a wide range of applications, such as image analysis, image
filtering, image reconstruction and image compression.
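
As an aside, the image filtering application mentioned above can be sketched with a simple frequency-domain low-pass filter (the sample image cameraman.tif and the cut-off radius of 30 are illustrative assumptions):

G = im2double(imread('cameraman.tif'));          % sample image shipped with the toolbox (assumption)
F = fftshift(fft2(G));                           % centre the zero-frequency component
[r, c] = size(G);
[X, Y] = meshgrid(1:c, 1:r);
mask = sqrt((X - c/2).^2 + (Y - r/2).^2) < 30;   % keep only low frequencies
Gs = real(ifft2(ifftshift(F .* mask)));          % back to the spatial domain
subplot(1,2,1), imshow(G), title('Original');
subplot(1,2,2), imshow(Gs, []), title('Low-pass filtered');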
Program:
clc;
clear all;
fname = imgetfile;
I = imread(fname);
G = rgb2gray(I);
F = fft2(G);
T = real(ifft2(F));        % imaginary part is only numerical round-off
subplot(2,2,1), imshow(I), title('Original image');
subplot(2,2,2), imshow(G), title('Greyscale image');
subplot(2,2,3), imshow(log(1 + abs(fftshift(F))), []), title('Fourier spectrum (log magnitude)');
subplot(2,2,4), imshow(uint8(T)), title('Retrieved from FT');

Result: Performed the two-dimensional Fourier transform operation on an image.
Viva Questions

1. What is the Fourier transform of an image?

2. What is 2D Fourier transform?

3. What is spatial frequency in image processing?

4. Why FFT is used in image processing?

5. What is the difference between DFT and FFT?

6. What is the convolution theorem in image processing?

7. What is Fourier Transform and its applications?

8. What is an image signal?

9. What is FFT size?

10. What are the steps involved in digital image processing?
Experiment–4

Object: To perform linear filtering using convolution in an image.

Software: MATLAB

Theory:
Linear filtering of an image is accomplished through an operation called convolution. Convolution is a
neighborhood operation in which each output pixel is the weighted sum of neighboring input
pixels. The matrix of weights is called the convolution kernel, also known as the filter. A convolution
kernel is a correlation kernel that has been rotated 180 degrees.
For example, suppose the image is

A = [17 24  1  8 15
     23  5  7 14 16
      4  6 13 20 22
     10 12 19 21  3
     11 18 25  2  9]
and the convolution kernel is

h = [8 1 6
     3 5 7
     4 9 2]

Compute the (2,4) output pixel using these steps:
1. Rotate the convolution kernel 180 degrees about its center element.
2. Slide the center element of the convolution kernel so that it lies on top of the (2,4) element of A.
3. Multiply each weight in the rotated convolution kernel by the pixel of A underneath.
4. Sum the individual products from step 3.

Hence the (2,4) output pixel is 575.
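
A quick numerical check of this hand computation (a minimal sketch using only the matrices given above):

A = [17 24  1  8 15
     23  5  7 14 16
      4  6 13 20 22
     10 12 19 21  3
     11 18 25  2  9];
h = [8 1 6
     3 5 7
     4 9 2];
C = conv2(A, h, 'same');   % 2-D convolution, output the same size as A
disp(C(2,4))               % prints 575, matching the hand computation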
Program:
clc;
clear all;
close all;
fname = imgetfile;
u = imread(fname);
imshow(u);
Hm = fspecial('motion', 20, 45);        % motion-blur kernel
MotionBlur = imfilter(u, Hm, 'replicate');
figure;
imshow(MotionBlur);
Hb = fspecial('disk', 10);              % circular averaging (disk) kernel
blurred = imfilter(u, Hb, 'replicate');
figure;
imshow(blurred);
Result: We have performed linear filtering using convolution on an image.
Viva Questions

1. What is linear filtering in image processing?

2. What is a convolution filter?

3. What is a linear image?

4. Why filters are used in image processing?

5. What makes a filter linear?

6. Are convolution filters linear?

7. What is convolution of an image?

8. What is the purpose of convolution?

9. What is a nonlinear filter in image processing?

10. What is the intensity of an image?
Experiment–5

Object: Image Edge Detection Using Sobel Filtering and Canny Filtering.

Software: MATLAB

Theory:
Edge detection is an image processing technique for finding the boundaries of objects within
images. It works by detecting discontinuities in brightness. Edge detection is used for image
segmentation and data extraction in areas such as image processing, computer vision, and machine
vision. Common edge detection algorithms include Sobel, Canny, Prewitt, Roberts, and
fuzzy logic methods.

The edge detection method is specified as one of the following.

Method          Description

'Sobel'         Finds edges at those points where the gradient of the image I is maximum, using the Sobel approximation to the derivative.

'Prewitt'       Finds edges at those points where the gradient of I is maximum, using the Prewitt approximation to the derivative.

'Roberts'       Finds edges at those points where the gradient of I is maximum, using the Roberts approximation to the derivative.

'log'           Finds edges by looking for zero-crossings after filtering I with a Laplacian of Gaussian (LoG) filter.

'zerocross'     Finds edges by looking for zero-crossings after filtering I with a filter that you specify, h.

'Canny'         Finds edges by looking for local maxima of the gradient of I. The edge function calculates the gradient using the derivative of a Gaussian filter. This method uses two thresholds to detect strong and weak edges, including weak edges in the output if they are connected to strong edges. By using two thresholds, the Canny method is less likely than the other methods to be fooled by noise, and more likely to detect true weak edges.

'approxcanny'   Finds edges using an approximate version of the Canny edge detection algorithm that provides faster execution time at the expense of less precise detection. Floating point images are expected to be normalized in the range [0 1].
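
Since the Canny method accepts an explicit threshold pair, a minimal sketch of supplying one (the sample image coins.png and the threshold values are illustrative assumptions, not tuned settings):

G = imread('coins.png');                      % greyscale sample image (assumption)
BW_auto   = edge(G, 'canny');                 % thresholds chosen automatically
BW_manual = edge(G, 'canny', [0.04 0.10]);    % explicit [low high] thresholds in [0 1]
imshowpair(BW_auto, BW_manual, 'montage');
title('Canny: automatic vs. manual thresholds');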

Program:
clc;
clear all;
close all;
po = imgetfile;
I = imread(po);             % select the coin image
if ndims(I) == 3
    I = rgb2gray(I);        % edge expects a greyscale image
end
imshow(I)
BW1 = edge(I,'sobel');
BW2 = edge(I,'canny');
figure;
imshowpair(BW1,BW2,'montage');
title('Sobel Filter, Canny Filter');

Result: We have performed image edge detection using Sobel filtering and Canny filtering.
Viva Questions

1. What does a Sobel filter do?

2. What is Canny edge detection in image processing?

3. How does Sobel edge detection work?

4. How do you implement Canny edge detection?

5. How is edge detection done?

6. What does Sobel mean?

7. Why is Sobel edge detection used?

8. What does a Laplacian filter do?

9. What is an edge in an image?

10. What is an edge filter?
Experiment–6

Object: To perform the following operations in an image.
(a) erosion,
(b) dilation.

Software: MATLAB

Theory:
Morphology is a broad set of image processing operations that process images based on shapes.
Morphological operations apply a structuring element to an input image, creating an output image
of the same size. In a morphological operation, the value of each pixel in the output image is based
on a comparison of the corresponding pixel in the input image with its neighbors.

The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The number of
pixels added or removed from the objects in an image depends on the size and shape of the structuring
element used to process the image. In the morphological dilation and erosion operations, the state of
any given pixel in the output image is determined by applying a rule to the corresponding pixel and
its neighbors in the input image. The rule used to process the pixels defines the operation as a
dilation or an erosion: dilation sets the output pixel to the maximum value in the neighborhood of the
corresponding input pixel, while erosion sets it to the minimum.
Dilation and erosion are often used in combination to implement image processing operations. For
example, the definition of a morphological opening of an image is an erosion followed by a
dilation, using the same structuring element for both operations. We can combine dilation and
erosion to remove small objects from an image and smooth the border of large objects.
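
A minimal sketch of that combination, checking that erosion followed by dilation with the same structuring element matches imopen (the demo image circles.png and the disk radius are illustrative assumptions):

BW = imread('circles.png');                 % binary demo image (assumption)
se = strel('disk', 7);                      % illustrative structuring element
opened = imdilate(imerode(BW, se), se);     % erosion, then dilation
isequal(opened, imopen(BW, se))             % returns logical 1 (true)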

Program:
(a) Erosion
clc;
clear all;
close all;
po = imgetfile;
I = imread(po);
originalBW = I;
se = strel('disk',11);
erodedBW = imerode(originalBW,se);
imshow(originalBW), figure, imshow(erodedBW)

(b) Dilation

clc;
clear all;
close all;
po = imgetfile;
I = imread(po);
se = strel('ball',5,5);
I2 = imdilate(I,se);
imshow(I), title('Original')
figure, imshow(I2), title('Dilated')

Result: We have performed the erosion and dilation operations on an image.
Viva Questions

1. What is erosion in image processing?

2. What are morphological operators?

3. What is binary dilation?

4. How is segmentation done in image processing?

5. What is erosion and dilation?

6. How do you do dilation?

7. What are the types of morphology?

8. What is resolution of an image?

9. What is the highest resolution photo?

10. How many pixels is considered high resolution?
Experiment–7

Object: To perform the following operations in an image.
(a) opening,
(b) closing.

Software: MATLAB

Theory:
Morphological image processing is a collection of non-linear operations related to the shape or
morphology of features in an image. According to Wikipedia, morphological operations rely only
on the relative ordering of pixel values, not on their numerical values, and therefore are
especially suited to the processing of binary images. Morphological operations can also be applied to
greyscale images such that their light transfer functions are unknown and therefore their absolute
pixel values are of no or minor interest.
Morphological techniques probe an image with a small shape or template called a structuring
element. The structuring element is positioned at all possible locations in the image and it is
compared with the corresponding neighborhood of pixels. Some operations test whether the
element "fits" within the neighborhood, while others test whether it "hits" or intersects the
neighborhood:
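
A minimal illustration with made-up binary values: erosion keeps a pixel only where the structuring element fits entirely inside the object, while dilation marks every pixel where it hits the object.

BW = logical([0 0 0 0 0
              0 1 1 1 0
              0 1 1 1 0
              0 0 1 0 0
              0 0 0 0 0]);
se = strel('square', 3);       % 3x3 structuring element
fits = imerode(BW, se)         % 1 only where se fits entirely inside the object
hits = imdilate(BW, se)        % 1 wherever se hits (intersects) the object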

Program:

(a) Opening
clc;
clear all;
close all;
po = imgetfile;
I = imread(po);
figure, imshow(I);
se = strel('disk',5);
afterOpening = imopen(I,se);
figure, imshow(afterOpening,[]);

(b) Closing
clc;
clear all;
close all;
po = imgetfile;
I = imread(po);
originalBW = I;
imshow(originalBW);
se = strel('disk',10);
closeBW = imclose(originalBW,se);
figure, imshow(closeBW);
Result: We have performed the opening and closing operations on an image.
Viva Questions

1. What are morphological operators?

2. What is opening and closing in image processing?

3. What is an opening image?

4. What are the types of morphology?

5. What is morphological reconstruction?

6. What is the meaning of morphological structure?

7. What is another word for morphology?

8. What is meant by digital image?

9. What do you mean by image processing?

10. Why is image processing important?
