
IPMV Viva Questions


1. Steps of image processing?

(A textbook answer is expected here.)


ANS:
1. Image Acquisition
This is the first of the fundamental steps of digital image processing. Image
acquisition can be as simple as being given an image that is already in digital
form. Generally, the image acquisition stage also involves preprocessing, such as
scaling.
2. Image Enhancement
Image enhancement is among the simplest and most appealing areas of digital
image processing. The idea behind enhancement techniques is to bring out detail
that is obscured, or simply to highlight certain features of interest in an image,
for example by changing brightness and contrast.
3. Image Restoration
Image restoration is an area that also deals with improving the appearance of an
image. However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on mathematical
or probabilistic models of image degradation.

4. Color Image Processing


Color image processing is an area that has been gaining in importance because of
the significant increase in the use of digital images over the Internet. It may
include color modeling and processing in a digital domain.
5. Wavelets and Multiresolution Processing
Wavelets are the foundation for representing images in various degrees of
resolution. Images are subdivided successively into smaller regions for data
compression and for pyramidal representation.
6. Compression
Compression deals with techniques for reducing the storage required to save an
image or the bandwidth required to transmit it. Compression is particularly
important when images are transmitted over the Internet.
7. Morphological Processing
Morphological processing deals with tools for extracting image components that are
useful in the representation and description of shape.
8. Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In
general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward
successful solution of imaging problems that require objects to be identified
individually.
9. Representation and Description
Representation and description almost always follow the output of a segmentation
stage, which usually is raw pixel data, constituting either the boundary of a region or
all the points in the region itself. Choosing a representation is only part of the
solution for transforming raw data into a form suitable for subsequent computer
processing. Description deals with extracting attributes that result in some
quantitative information of interest or are basic for differentiating one class of objects
from another.
10. Object recognition
Recognition is the process that assigns a label, such as "vehicle", to an object
based on its descriptors.
11. Knowledge Base:
Knowledge may be as simple as detailing regions of an image where the information
of interest is known to be located, thus limiting the search that has to be conducted
in seeking that information. The knowledge base also can be quite complex, such as
an interrelated list of all major possible defects in a materials inspection problem or
an image database containing high-resolution satellite images of a region in
connection with change-detection applications.
2. Fourier transform of a sine wave of 25 Hz?
ANS:
The Fourier transform of a pure 25 Hz sine wave is a pair of impulses in the
frequency domain, located at +25 Hz and -25 Hz; the magnitude is zero at all other
frequencies.
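A minimal NumPy sketch showing the two impulses (the 200 Hz sampling rate and the
1-second duration are assumptions made for the example):

PYTHON CODE (sketch):
import numpy as np

fs = 200                        # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)     # 1 second of samples
x = np.sin(2 * np.pi * 25 * t)  # 25 Hz sine wave

X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1 / fs)

# The magnitude spectrum is zero everywhere except at +/-25 Hz,
# i.e. the transform of a pure sine is a pair of impulses at +/-25 Hz.
peaks = freqs[np.abs(X) > 1e-6]
print(peaks)                    # expected: [ 25. -25.]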
3. What is opening? (Asked with an example picture: what effect will it produce?)
ANS:
Opening is the morphological operation of erosion followed by dilation with the
same structuring element. It removes objects, protrusions and noise that are
smaller than the structuring element, smooths object contours and breaks narrow
connections, while largely preserving the shape and size of the larger objects.
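A minimal OpenCV sketch of opening (the input file name 'coins.png' and the 5x5
structuring element are assumptions):

PYTHON CODE (sketch):
import cv2
import numpy as np

img = cv2.imread('coins.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kernel = np.ones((5, 5), np.uint8)                   # assumed 5x5 structuring element

# Opening = erosion followed by dilation with the same structuring element.
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

# Equivalent two-step form:
opened_two_step = cv2.dilate(cv2.erode(img, kernel), kernel)

On a picture, small bright specks and thin protrusions disappear, while the larger
objects keep roughly their original size and shape.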
4. If we have 2, 5 and 10 rupee coins and want only the 5 rupee coins, which method
should we apply?
5. Purpose of histogram equalization
ANS:
- Histogram equalization is used to enhance contrast.
- There are some cases where histogram equalization can make the result worse; in
such cases the contrast is actually decreased.

Consider an example image and its histogram (shown in the source). We now perform
histogram equalization on it as follows.

PMF
First we calculate the PMF (probability mass function) of all the gray levels in
the image, i.e. the normalized histogram.

CDF
Next we calculate the CDF (cumulative distribution function) by accumulating the
PMF values over the gray levels.
Calculate the new gray levels from the CDF
Multiply each CDF value by (number of gray levels - 1). For a 3 bpp image the
number of levels is 8, and 8 - 1 = 7, so each CDF value is multiplied by 7 and
rounded to the nearest integer.

Map the old gray levels to the new ones
In the last step, each old gray level is mapped to its new value, and the pixel
count of every old level is carried over to the corresponding new level. Applying
this technique to the original image gives the equalized image and its histogram.

(The source shows the equalized image, its cumulative distribution function, its
histogram, and a comparison of both histograms and images.)
Conclusion
As can be clearly seen from the images, the contrast of the new image has been
enhanced and its histogram has been equalized. One important thing to note is that
during histogram equalization the overall shape of the histogram changes, whereas
in histogram stretching the overall shape of the histogram remains the same.
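A minimal NumPy sketch of the same procedure for an 8-bit grayscale image (the
variable img is assumed to be a 2-D uint8 array):

PYTHON CODE (sketch):
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: PMF -> CDF -> scale by (levels - 1) -> remap."""
    hist = np.bincount(img.ravel(), minlength=levels)  # histogram
    pmf = hist / img.size                              # probability mass function
    cdf = np.cumsum(pmf)                               # cumulative distribution function
    new_levels = np.round(cdf * (levels - 1)).astype(np.uint8)
    return new_levels[img]                             # map old gray levels to new ones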
6. Chain code rotation
ANS:
A chain code is a lossless representation (and compression scheme) for monochrome
images. The basic principle of chain codes is to encode each connected component,
or "blob", in the image separately. For each such region a point on the boundary is
selected and its coordinates are transmitted; the boundary is then followed and
encoded as a sequence of direction numbers.

The differential chain code (first difference) is rotation invariant (for rotations
in multiples of the basic direction angle), so the shape number of a rotated object
is the same as the shape number of the original object.
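A small Python sketch (the chain code values are hypothetical) showing that the
first difference does not change when the object is rotated by 90 degrees, i.e.
when every code digit is shifted by 1 modulo 4:

PYTHON CODE (sketch):
def first_difference(chain, directions=4):
    """Treat the chain as circular: each digit minus the previous one, modulo 4."""
    return [(chain[i] - chain[i - 1]) % directions for i in range(len(chain))]

chain = [0, 3, 3, 2, 1, 1]              # hypothetical 4-directional chain code
rotated = [(c + 1) % 4 for c in chain]  # the same shape rotated by 90 degrees

print(first_difference(chain))          # [3, 3, 0, 3, 3, 0]
print(first_difference(rotated))        # identical to the line above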
7. Difference between enhancement and restoration
ANS:
Enhancement is subjective: it brings out obscured detail or highlights features of
interest according to what looks good to the viewer. Restoration is objective: it
attempts to recover the original image using mathematical or probabilistic models
of the image degradation.
8. Code for a checkerboard
ANS:
MATLAB has a built-in function checkerboard; the code below is a simple
implementation of the same idea. First the size of each square is declared,
followed by the number of rows and columns. Two matrices are initialized, one with
zeros and one with ones, and using mod 2 the blocks of ones and zeros are placed
alternately.
MATLAB CODE:
% Size of each square
sz = 45;
% Number of rows of squares
xvalue = 8;
% Number of columns of squares
yvalue = 8;
% Initialize matrix A with zeros and matrix B with ones
A = zeros([sz sz]);
B = ones([sz sz]);
clear C
m = sz;
n = 1;
num = 2;
for i = 1:xvalue
    n1 = 1;
    m1 = sz;
    for j = 1:yvalue
        if (mod(num, 2) == 0)
            C(n:m, n1:m1) = A;
            num = num + 1;
        else
            C(n:m, n1:m1) = B;
            num = num + 1;
        end
        m1 = m1 + sz;
        n1 = n1 + sz;
    end
    if (mod(yvalue, 2) == 0)
        num = num + 1;
    end
    n = n + sz;
    m = m + sz;
end
imshow(C)
Example outputs shown in the source: an 8x8 board with square size 45, and an
8x20 board with square size 15.
9. Why central shifting?

ANS:
The Fourier spectrum is shifted so that the zero-frequency (DC) term lies at the
centre of the frequency rectangle, either by multiplying the image by (-1)^(x+y)
before taking the transform or by using fftshift. A centred spectrum is easier to
interpret, and frequency-domain filters, which are usually defined as functions of
the distance from the centre, become easier to design and apply.

10. What is contrast stretching

ANS:
CONTRAST STRETCHING:
Contrast stretching (often called normalization) is a simple image enhancement
technique that attempts to improve the contrast in an image by 'stretching' the
range of intensity values it contains so that it spans a desired range of values,
usually the full range of pixel values that the image type concerned allows.

Why:
This is normally done to accentuate image details that may be difficult for the
human viewer to observe; contrast enhancement can also help to recover detail from
blurred images and improve image quality.
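A minimal NumPy sketch of linear contrast stretching (an output range of 0-255 is
assumed):

PYTHON CODE (sketch):
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Map the input range [img.min(), img.max()] linearly onto [out_min, out_max]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()   # assumes hi > lo (non-constant image)
    stretched = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)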
11. Histogram plotting for an RGB image

from PIL import Image
import matplotlib.pyplot as plt

im = Image.open('image.jpg')   # 'image.jpg' is a placeholder for any RGB image
pl = im.histogram()            # concatenated R, G and B histograms (3 x 256 values)
plt.bar(range(256), pl[:256], color='r', alpha=0.5)
plt.bar(range(256), pl[256:2*256], color='g', alpha=0.4)
plt.bar(range(256), pl[2*256:], color='b', alpha=0.3)
plt.show()

The histogram() function can be used to compute the histogram (a table of pixel
values versus frequencies) of the pixels for each channel and return the
concatenated output (for example, for an RGB image, the output contains
3 x 256 = 768 values).
12. Mask of a LPF in the spatial domain
ANS:
A low-pass filter attenuates high frequencies and leaves low frequencies unchanged.
The result in the spatial domain is equivalent to that of a smoothing filter,
because the blocked high frequencies correspond to sharp intensity changes, i.e. to
the fine-scale details and noise in the spatial-domain image. A typical
spatial-domain low-pass mask is the 3x3 averaging (box) mask, with all nine
coefficients equal to 1/9.
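A minimal SciPy sketch of applying the averaging mask as a spatial low-pass filter
(the 3x3 size is the usual textbook choice):

PYTHON CODE (sketch):
import numpy as np
from scipy.ndimage import convolve

# 3x3 averaging (box) mask: every coefficient is 1/9.
lpf_mask = np.ones((3, 3)) / 9.0

def smooth(img):
    """Convolve the image with the averaging mask (img assumed to be a 2-D array)."""
    return convolve(img.astype(np.float64), lpf_mask, mode='reflect')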

13. Gamma coefficient

ANS:
Gamma correction is a function that maps luminance levels to compensate for the
non-linear luminance response of display devices (or to match human perceptual
sensitivity to brightness). It has the power-law form s = c * r^γ, where r is the
input intensity, s is the output intensity, c is a constant and "^" is the power
operator. The constant γ is said to be the gamma.
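A minimal NumPy sketch of the power-law (gamma) transform for an 8-bit image
(c = 1 assumed):

PYTHON CODE (sketch):
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law transform s = c * r ** gamma for an 8-bit image."""
    r = img.astype(np.float64) / 255.0   # normalize input intensities to [0, 1]
    s = c * np.power(r, gamma)           # apply the power law
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions, gamma > 1 darkens the image.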

14. What is contrast


ANS:
Contrast is the difference in luminance or colour that makes an object (or its
representation in an image or display) distinguishable.
16. Find the first and second derivative
ANS:
In image processing, and especially in edge detection:
- applying the Sobel convolution masks to an image gives (an approximation of) the
first derivative of the image;
- applying the Laplacian mask to the image gives (an approximation of) the second
derivative.
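A minimal SciPy sketch of the standard Sobel and Laplacian masks (img assumed to be
a 2-D float array):

PYTHON CODE (sketch):
import numpy as np
from scipy.ndimage import convolve

# Sobel masks: first-derivative approximations in x and y.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
sobel_y = sobel_x.T

# Laplacian mask: second-derivative approximation.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def derivatives(img):
    """Return gradient magnitude (first derivative) and Laplacian (second derivative)."""
    gx = convolve(img, sobel_x)
    gy = convolve(img, sobel_y)
    first = np.hypot(gx, gy)
    second = convolve(img, laplacian)
    return first, second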

17. What is a shape number

ANS:
● The chain code depends on the starting point; we select the starting point that
gives the smallest number in its representation.
● The shape number of a boundary is the first difference of smallest magnitude.
● First differences make it invariant to rotation.
● The order n of a shape number is defined as the number of digits in its
representation.

18. Fax question: which method, other than dilation, can be used to get back the
normal image?
19. Which is better, Sobel or Prewitt, and why?
ANS:
SOBEL IS BETTER.
The Sobel masks have slightly superior noise-suppression (smoothing)
characteristics, an important issue when dealing with derivatives.

20. What is SVM, hyperplanes


ANS:
A Support Vector Machine (SVM) performs classification by finding the hyperplane that
maximizes the margin between the two classes. The vectors (cases) that define the
hyperplane are the support vectors.

In geometry, a hyperplane is a subspace whose dimension is one less than that of its ambient
space. If a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the
space is 2-dimensional, its hyperplanes are the 1-dimensional lines.
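A minimal scikit-learn sketch (the toy 2-D data points are made up) of training a
linear SVM and inspecting the support vectors that define the hyperplane:

PYTHON CODE (sketch):
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: two linearly separable classes (hypothetical values).
X = np.array([[1, 1], [2, 1], [1, 2],      # class 0
              [5, 5], [6, 5], [5, 6]])     # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear', C=1.0)          # maximum-margin linear classifier
clf.fit(X, y)

print(clf.support_vectors_)                # the cases that define the hyperplane
print(clf.predict([[2, 2], [6, 6]]))       # expected: [0 1]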

22. What to do if the sum of two 3-bit numbers becomes a 4-bit number?
ANS:
The result no longer fits in the available range, so it must either be clipped
(saturated) to the maximum 3-bit value, 7, or the whole result must be rescaled
(normalized) back into the 0-7 range.

23. Coins of 5, 10 and 15 on a sheet: extract the 5 coins and name the process.
25. How to find the negative of an image
ANS:
The negative is obtained with the point transformation s = (L - 1) - r, where r is
the input gray level and L is the number of gray levels; for an 8-bit image this is
simply s = 255 - r.
26. Differentiate between histogram matching and histogram equalization
ANS:
Histogram Equalization

● Histogram Equalization is an image processing technique.


● It enhances contrast by adjusting the intensities of the image.
● It ensures that an image contains a uniform distribution of intensities.

Histogram Specification

● It is also known as histogram matching.


● It refers to the generalized version of Histogram Equalization.
● It is a transformation technique which ensures that the histogram of
an image matches a specified histogram.

Contrast Stretching

● It is often called normalization.


● It is an image enhancement technique.
● It enhances the contrast of the image by stretching its intensity
values.
27. What is the ringing effect and what causes it?
ANS:
- The ringing effect, also known as the Gibbs phenomenon in the mathematical
methods of image processing, is an annoying effect in images and video that appears
as a rippling artifact near sharp edges.
- This effect is caused by distortion or loss of the high-frequency information in
the image (for example, by an ideal low-pass filter or by lossy compression).
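A minimal NumPy sketch of the classic cause: an ideal low-pass filter whose sharp
cut-off in the frequency domain produces ripples (ringing) near edges in the
filtered image (the cutoff radius is an assumed parameter):

PYTHON CODE (sketch):
import numpy as np

def ideal_lowpass(img, cutoff):
    """Zero all frequencies farther than `cutoff` from the centre of the spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    F[dist > cutoff] = 0                       # abrupt cut-off -> ringing near edges
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))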

29. Morphological operations

ANS:
Morphological processing deals with tools for extracting image components that are
useful in the representation and description of shape. Common morphological
operations include dilation, erosion, opening and closing.
30. Is the histogram point processing or neighborhood processing?

ANS:
POINT PROCESSING (the equalization transform maps each pixel's value independently
of the values of its neighbours).
31. Difference between point processing and neighbourhood processing
ANS:
In point processing the new value of a pixel depends only on the original value of
that same pixel (e.g. negative, thresholding, gamma correction), whereas in
neighbourhood processing the new value is computed from the pixel values in a
neighbourhood (mask/window) around it (e.g. averaging, median filtering, Sobel).
32. How do you use a spatial-domain filter?
ANS:
A mask (kernel) is slid over the image; at every pixel the filter response is
computed from the mask coefficients and the pixel values in the neighbourhood
covered by the mask, and this response becomes the new value of the pixel under the
mask centre.

33. Why is restoration an objective method?

ANS:
The process of restoration is objective in nature; that is, it aims at a specific
goal, such as the removal of blur from an image by means of a deblurring function,
based on a model of the degradation. The techniques used in the restoration of
images can be formulated in the spatial domain or in the frequency domain.
34. Filters used in restoration
ANS:
1. Inverse Filter:
Inverse filtering is the process of recovering the input of a system from its
output. It is the simplest approach to restoring the original image once the
degradation function is known.
2. Pseudo-Inverse Filter:
The pseudo-inverse filter is a modified, stabilized version of the inverse filter.
Pseudo-inverse filtering gives better results than inverse filtering, but both the
inverse and the pseudo-inverse filter are sensitive to noise.
3. Wiener Filter (Minimum Mean Square Error Filter):
The Wiener filter executes an optimal trade-off between inverse filtering and noise
smoothing. It removes the additive noise and inverts the blurring simultaneously.
The Wiener filter is real and even. A minimal sketch of Wiener filtering is given
below.
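
A minimal NumPy sketch of Wiener filtering in the frequency domain (the PSF is
assumed to be known, and K approximates the noise-to-signal power ratio):

PYTHON CODE (sketch):
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Restore an image blurred by a known PSF with a simple Wiener filter."""
    H = np.fft.fft2(psf, s=blurred.shape)   # degradation function H(u,v)
    G = np.fft.fft2(blurred)                # spectrum of the degraded image
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener filter H*(u,v) / (|H|^2 + K)
    return np.real(np.fft.ifft2(W * G))     # restored image estimate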

Drawbacks of Restoration Filters:

● Not effective when images are restored for the human eye.
● Cannot handle the common case of non-stationary signals and noise.
● Cannot handle a spatially variant blurring point spread function.
35. What are the different types of adjacency?
ANS:
a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
1) q is in N4(p), or
2) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

36. Connectivity
ANS:
Connectivity between pixels
It is an important concept in digital image processing.
It is used for establishing the boundaries of objects and the components of regions in an image.
Two pixels are said to be connected:
● if they are adjacent in some sense (they are neighbour pixels under 4-, 8- or m-adjacency), and
● if their gray levels satisfy a specified criterion of similarity (e.g. equal intensity levels).
There are three types of connectivity on the basis of adjacency:
a) 4-connectivity: two or more pixels are said to be 4-connected if they are 4-adjacent to
each other.
b) 8-connectivity: two or more pixels are said to be 8-connected if they are 8-adjacent to
each other.
c) m-connectivity: two or more pixels are said to be m-connected if they are m-adjacent to
each other.
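
A small Python sketch (pixel coordinates as (row, col) tuples; image borders are
ignored) of the neighbour sets N4, ND and N8 used in the adjacency definitions
above:

PYTHON CODE (sketch):
def n4(p):
    """4-neighbours of pixel p = (row, col): horizontal and vertical neighbours."""
    r, c = p
    return {(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)}

def nd(p):
    """Diagonal neighbours of p."""
    r, c = p
    return {(r + 1, c + 1), (r + 1, c - 1), (r - 1, c + 1), (r - 1, c - 1)}

def n8(p):
    """8-neighbours of p: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

def is_4_adjacent(img, p, q, V):
    """p and q (with values from the set V) are 4-adjacent if q is in N4(p)."""
    return img[p] in V and img[q] in V and q in n4(p)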

37. Clipping
ANS:
Clipping is used to enhance features within an image. You provide a threshold level
that determines where the clipping occurs: the values above (or below) the
threshold remain the same, while all other values are set equal to the threshold
level.
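A minimal NumPy sketch of clipping against a threshold level (both directions
shown):

PYTHON CODE (sketch):
import numpy as np

def clip_below(img, level):
    """Values above `level` are kept; everything else is set to `level`."""
    return np.where(img > level, img, level)

def clip_above(img, level):
    """The complementary form: values below `level` are kept, the rest set to `level`."""
    return np.where(img < level, img, level)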
