
BM3652 - MIP - Unit 2 Notes


DEPARTMENT OF BIOMEDICAL ENGINEERING

BM3652 – Medical Image Processing

NOTES

UNIT II ENHANCEMENT TECHNIQUES


Gray level transformation - Log transformation, Power law transformation, Piecewise linear
transformation. Histogram processing - Histogram equalization, Histogram matching. Spatial domain
filtering - Smoothing filters, Sharpening filters. Frequency domain filtering - Smoothing filters,
Sharpening filters, Homomorphic filtering. Medical image enhancement using Hybrid filters. Performance
measures for enhancement techniques. Experiment with various filtering techniques for noise reduction
and enhancement in medical images using Matlab.

UNIT-2
IMAGE ENHANCEMENT TECHNIQUES

1.1 Enhancement by Point Processing

The principal objective of enhancement is to process an image so that the result is more suitable than the
original image for a specific application. Image enhancement approaches fall into two broad categories:

 Spatial domain methods


 Frequency domain methods

The term spatial domain refers to the image plane itself, and approaches in this category are based on
direct manipulation of the pixels in an image. Spatial domain processes are denoted by the expression
g(x,y)=T[f(x,y)]

where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f defined over
some neighborhood of (x,y). The neighborhood of a point (x,y) is usually defined as a square or
rectangular subimage area centered at (x,y).
The center of the subimage is moved from pixel to pixel, starting at the top left corner. The operator T is
applied at each location (x,y) to find the output g at that location. The process uses only the pixels in
the area of the image spanned by the neighborhood.
The simplest form of the transformation arises when the neighborhood is of size 1 x 1. In this case g
depends only on the value of f at (x,y), and T becomes a gray level transformation function of the form

s = T(r)

where r denotes the gray level of f(x,y) and s denotes the gray level of g(x,y) at any point (x,y).

Because enhancement at any point in an image depends only on the gray level at that point, techniques in this
category are referred to as point processing.

There are basically three kinds of functions in gray level transformation: linear (negative and identity), logarithmic (log and inverse log), and power law (nth power and nth root).

Point Processing:
Contrast stretching - It produces an image of higher contrast than the original. The operation is
performed by darkening the levels below a value m and brightening the levels above m in the original image.

In this technique the values of r below m are compressed by the transformation function into a narrow
range of s towards black. The opposite effect takes place for the values of r above m.
Thresholding function: This is a limiting case in which T(r) produces a two-level (binary) image. The values
below m are mapped to black and those above m to white.

1.2 Basic Gray Level Transformation:


These are the simplest image enhancement techniques.
1.2.1 Image Negative: The negative of an image with gray levels in the range [0, L-1] is obtained by
using the negative transformation, given by the expression

s = L - 1 - r

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic
negative. This type of processing is particularly suited for enhancing white or gray details embedded in
dark regions of an image, especially when the black areas are dominant in size.
1.2.2 Log transformations:
The general form of the log transformation is s = c log(1 + r), where c is a constant and r ≥ 0. This
transformation maps a narrow range of low gray level values in the input image into a wider range of
output levels; the opposite is true for higher values of input levels. We would use this transformation
to expand the values of dark pixels in an image while compressing the higher level values. The opposite
is true of the inverse log transformation. The log transformation has the important characteristic that
it compresses the dynamic range of images with large variations in pixel values.

A classical example is the display of a Fourier spectrum, whose dynamic range is far too large to be shown directly; the log transformation brings out detail that would otherwise be invisible.
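A minimal Matlab sketch of this example, assuming the toolbox test image cameraman.tif and an arbitrary scaling constant c = 1:

% Log transformation s = c*log(1 + r) applied to a Fourier spectrum
f = im2double(imread('cameraman.tif'));
F = abs(fftshift(fft2(f)));          % centered spectrum magnitude
c = 1;                               % scaling constant (illustrative)
s = c * log(1 + F);                  % compresses the dynamic range
figure, imshow(mat2gray(F)), title('Raw spectrum')
figure, imshow(mat2gray(s)), title('Log-transformed spectrum')

Without the log, only the brightest spectrum values are visible; after it, the weaker frequency components become apparent.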

1.2.3 Power Law Transformation:


Power law transformations have the basic form

s = c r^γ

where c and γ are positive constants. Power law curves with fractional values of γ map a narrow range
of dark input values into a wider range of output values, with the opposite being true for higher values
of input gray levels. We may obtain a family of transformation curves by varying the value of γ.

A variety of devices used for image capture, printing and display respond according to a power law. The
process used to correct this power law response phenomenon is called gamma correction.
For example, CRT devices have an intensity-to-voltage response that is a power function.
Gamma correction is important if displaying an image accurately on a computer screen is of concern.
Images that are not corrected properly can look either bleached out or too dark. Gamma correction is also
applied to color images, and it has become more important with the growing use of images over the
internet. The transformation is also useful for general purpose contrast manipulation: γ > 1 darkens an
image, while γ < 1 lightens it.
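A short Matlab sketch of gamma manipulation, assuming the toolbox test image pout.tif and illustrative gamma values:

% Power law (gamma) transformation s = c*r^gamma
f = im2double(imread('pout.tif'));
c = 1;                               % scaling constant (illustrative)
s1 = c * f .^ 0.4;                   % gamma < 1 expands dark values
s2 = c * f .^ 2.5;                   % gamma > 1 darkens the image
subplot(1,3,1), imshow(f),  title('Original')
subplot(1,3,2), imshow(s1), title('gamma = 0.4')
subplot(1,3,3), imshow(s2), title('gamma = 2.5')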

1.2.4 Piecewise linear transformation functions


The principal advantage of piecewise linear functions is that they can be arbitrarily complex. Their main
disadvantage is that their specification requires considerably more user input.
 Contrast Stretching
It is the simplest piecewise linear transformation function. Low contrast images can result from several
causes, such as poor illumination, limited dynamic range of the imaging sensor, or a wrong setting of the
lens aperture during image acquisition. The idea behind contrast stretching is to increase the dynamic
range of the gray levels in the image being processed.

The locations of the points (r1, s1) and (r2, s2) control the shape of the curve.
a) If r1 = r2 and s1 = s2, the transformation is a linear function that produces no change in gray levels.
b) If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a
binary image.
c) Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray values of the
output image, thus affecting its contrast. Generally r1 ≤ r2 and s1 ≤ s2 are required so that the function
is single valued and monotonically increasing, as in the Matlab sketch below.
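A minimal sketch of the three-segment stretching function; the control points are illustrative choices, not values prescribed in these notes:

% Piecewise linear contrast stretching with control points (r1,s1), (r2,s2)
f  = im2double(imread('pout.tif'));
r1 = 0.3; s1 = 0.1;                  % compress the dark end
r2 = 0.7; s2 = 0.9;                  % stretch the mid-range
g = zeros(size(f));
seg1 = f <= r1; seg2 = f > r1 & f <= r2; seg3 = f > r2;
g(seg1) = (s1 / r1) * f(seg1);
g(seg2) = s1 + ((s2 - s1) / (r2 - r1)) * (f(seg2) - r1);
g(seg3) = s2 + ((1 - s2) / (1 - r2)) * (f(seg3) - r2);
imshowpair(f, g, 'montage')

Because r1 < r2 and s1 < s2, the mapping is single valued and monotonically increasing, as required.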

 Gray Level Slicing


Highlighting a specific range of gray levels in an image is often desirable, for example when enhancing
features such as masses of water in satellite images or flaws in X-ray images.
There are two ways of doing this-
(1) One method is to display a high value for all gray levels in the range of interest and a low value for
all other gray levels.
(2) The second method is to brighten the desired range of gray levels but preserve the background and gray
level tonalities of the image.

 Bit Plane Slicing

Sometimes it is important to highlight the contribution made to the total image appearance by specific
bits. Suppose that each pixel is represented by 8 bits. Imagine that the image is composed of eight 1-bit
planes, ranging from bit plane 0 for the least significant bit to bit plane 7 for the most significant bit. In
terms of 8-bit bytes, plane 0 contains all the lowest order bits in the image and plane 7 contains all the
highest order bits. The high order bits contain the majority of the visually significant data, while the
lower order planes contribute the more subtle details in the image. Separating a digital image into its bit
planes is useful for analyzing the relative importance of each bit in the image. It helps in determining the
adequacy of the number of bits used to quantize each pixel, and it is also useful for image compression.
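A small Matlab sketch of bit plane extraction, assuming an 8-bit toolbox test image:

% Bit plane slicing of an 8-bit image using bitget
f = imread('cameraman.tif');         % uint8, values 0..255
for k = 1:8
    plane = bitget(f, k);            % k = 1 is the least significant bit
    subplot(2, 4, k), imshow(logical(plane))
    title(sprintf('Bit plane %d', k - 1))
end

The upper planes resemble the original image, while the lowest planes look essentially like noise, confirming the observations above.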

1.3 Histogram Processing:


The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function of the form

h(rk) = nk

where rk is the kth gray level and nk is the number of pixels in the image having the level rk.
A normalized histogram is given by the equation

p(rk) = nk / n for k = 0, 1, 2, ..., L-1

p(rk) gives an estimate of the probability of occurrence of gray level rk, and the sum of all components of a
normalized histogram is equal to 1. Histogram plots are simply plots of h(rk) = nk versus rk.

In a dark image the components of the histogram are concentrated on the low (dark) side of the gray
scale. In a bright image the histogram components are biased towards the high side of the gray
scale. The histogram of a low contrast image is narrow and centred towards the middle of
the gray scale.

The components of the histogram of a high contrast image cover a broad range of the gray scale. The
net effect is an image that shows a great deal of gray level detail and has a high dynamic range.
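A Matlab sketch for computing and plotting a normalized histogram, assuming the low contrast toolbox image pout.tif:

% Normalized histogram p(r_k) = n_k / n
f = imread('pout.tif');
[nk, rk] = imhist(f);                % counts n_k at gray levels r_k
p = nk / numel(f);                   % normalized histogram (sums to 1)
stem(rk, p, 'Marker', 'none')
xlabel('Gray level r_k'), ylabel('p(r_k)')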

1.4 Histogram Equalization

Histogram equalization is a common technique for enhancing the appearance of images. Suppose we
have an image which is predominantly dark. Its histogram would then be skewed towards the lower end
of the grey scale, and all the image detail would be compressed into the dark end of the histogram. If we
could 'stretch out' the grey levels at the dark end to produce a more uniformly distributed histogram, the
image would become much clearer.
Let r represent the gray levels of the image to be enhanced, treated as a continuous variable. The range
of r is [0, 1], with r = 0 representing black and r = 1 representing white. The transformation function is
of the form

s = T(r), 0 ≤ r ≤ 1


It produces a level s for every pixel value r in the original image. The transformation function is assumed
to fulfill two conditions:
 T(r) is single valued and monotonically increasing in the interval 0 ≤ r ≤ 1, with 0 ≤ T(r) ≤ 1. The
single valued requirement guarantees that the inverse transformation exists, and the monotonicity
condition preserves the increasing order from black to white in the output image.
 The condition 0 ≤ T(r) ≤ 1 guarantees that the output gray levels will be in the same range as the input
levels.
The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most
fundamental descriptor of a random variable is its probability density function (PDF). Let pr(r) and ps(s)
denote the probability density functions of the random variables r and s respectively. A basic result from
elementary probability theory states that if pr(r) and T(r) are known, and T(r) is continuous and
differentiable over the range of values of interest, then the PDF of the transformed (mapped) variable s
can be obtained using the simple formula

ps(s) = pr(r) |dr/ds|

The PDF of the output intensity variable, s, is thus determined by the PDF of the input intensities and the
transformation function used.
A transformation function of particular importance in image processing has the form

s = T(r) = ∫ (w = 0 to r) pr(w) dw

where w is a dummy variable of integration. The right side of this equation is recognized as the
cumulative distribution function (CDF) of the random variable r. Because PDFs are always positive, and
recalling that the integral of a function is the area under the function, it follows that this transformation
function is single valued and monotonically increasing, so both conditions are satisfied.
To find the ps(s) corresponding to this transformation we differentiate, which gives

ds/dr = dT(r)/dr = pr(r)

Substituting this result, and keeping in mind that all probability values are positive, yields

ps(s) = pr(r) |dr/ds| = pr(r) · 1/pr(r) = 1, 0 ≤ s ≤ 1

That is, ps(s) is a uniform PDF: the transformation produces an output image whose gray levels are uniformly distributed, which is exactly the equalized histogram sought.
1.5 Basics of Spatial Filtering

Spatial filtering is an example of a neighborhood operation: the operation works on the values of the
image pixels in a neighborhood together with the corresponding values of a subimage that has the same
dimensions as the neighborhood. This subimage is called a filter, mask, kernel, template or window; the
values in the filter subimage are referred to as coefficients rather than pixels. Spatial filtering operations
are performed directly on the pixel values (amplitude/gray scale) of the image. The process consists of
moving the filter mask from point to point in the image; at each point (x,y) the response is calculated
using a predefined relationship.

For linear spatial filtering the response is given by a sum of products of the filter coefficients and the
corresponding image pixels in the area spanned by the filter mask.

The result R of linear filtering with the filter mask at a point (x, y) in the image is the sum of products of
the mask coefficients with the corresponding pixels directly under the mask; for a 3 x 3 mask,

R = w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + ... + w(0,0) f(x,y) + ... + w(1,1) f(x+1,y+1)

The coefficient w(0,0) coincides with the image value f(x,y), indicating that the mask is centered at (x,y)
when the computation of the sum of products takes place.
For a mask of size m x n we assume m = 2a+1 and n = 2b+1, where a and b are non-negative integers;
that is, all masks are of odd size.
In general, linear filtering of an image f of size M x N with a filter mask of size m x n is given by the
expression

g(x,y) = Σ (s = -a to a) Σ (t = -b to b) w(s,t) f(x+s, y+t)

Where a = (m-1)/2 and b = (n-1)/2
To generate a complete filtered image this equation must be applied for x = 0, 1, 2, ..., M-1 and
y = 0, 1, 2, ..., N-1, so that the mask processes all the pixels in the image. The process of linear filtering
is similar to the frequency domain concept of convolution. For this reason, linear spatial filtering is often
referred to as convolving a mask with an image, and filter masks are sometimes called convolution masks.

R = w1 z1 + w2 z2 + ... + wmn zmn

where the w's are the mask coefficients, the z's are the values of the image gray levels corresponding to
those coefficients, and mn is the total number of coefficients in the mask.
An important point in implementing neighborhood operations for spatial filtering is what happens when
the center of the filter approaches the border of the image. There are several options (see the Matlab
sketch after this list):
i) Limit the excursions of the center of the mask to a distance of at least (n-1)/2 pixels from the border.
The resulting filtered image will be smaller than the original, but all its pixels will have been processed
with the full mask.
ii) Filter all pixels, but only with the section of the mask that is fully contained in the image. This creates
bands of pixels near the border that are processed with a partial mask.
iii) Pad the image by adding rows and columns of 0s, or pad by replicating rows and columns. The
padding is removed at the end of the process.
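A Matlab sketch of these border strategies using imfilter's padding options (zero padding corresponds to option iii with 0s; 'replicate' and 'symmetric' are the replication and mirror variants):

% Linear spatial filtering with different border handling
f = im2double(imread('cameraman.tif'));
w = ones(3) / 9;                     % 3x3 box (averaging) mask
g0 = imfilter(f, w, 0);              % pad the border with zeros
gr = imfilter(f, w, 'replicate');    % pad by replicating border pixels
gs = imfilter(f, w, 'symmetric');    % pad by mirror reflection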

2.6.1 Smoothing Spatial Filters


These filters are used for blurring and noise reduction. Blurring is used in preprocessing steps such as
the removal of small details from an image prior to object extraction, and the bridging of small gaps in
lines or curves.
Smoothing Linear Filters
The output of a smoothing linear spatial filter is simply the average of the pixels contained in the
neighborhood of the filter mask. These filters are also called averaging filters or low pass filters. The
operation is performed by replacing the value of every pixel in the image by the average of the gray
levels in the neighborhood defined by the filter mask. This process reduces sharp transitions in gray
levels in the image.
A major application of smoothing is noise reduction, but because edges are also characterized by sharp
transitions in gray level, smoothing filters have the undesirable side effect of blurring edges. Smoothing
also reduces false contouring, an artifact caused by using an insufficient number of gray levels in the
image, and it can remove irrelevant detail, meaning detail that is not of interest. A spatial averaging filter
in which all coefficients are equal is sometimes referred to as a box filter.
A weighted average filter is one in which the pixels are multiplied by different coefficients.

The general implementation for filtering an M x N image with a weighted averaging filter of size m x n
is given by

g(x,y) = [ Σ (s = -a to a) Σ (t = -b to b) w(s,t) f(x+s, y+t) ] / [ Σ (s = -a to a) Σ (t = -b to b) w(s,t) ]
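A Matlab sketch comparing a box filter with the common 3 x 3 weighted average mask; the specific masks are standard textbook examples, not values fixed by these notes:

% Box filter versus weighted averaging filter
f = im2double(imread('cameraman.tif'));
box  = ones(5) / 25;                 % 5x5 box filter
wavg = [1 2 1; 2 4 2; 1 2 1] / 16;   % 3x3 weighted average mask
gb = imfilter(f, box,  'replicate');
gw = imfilter(f, wavg, 'replicate');

The division by 25 and 16 normalizes each mask so that its coefficients sum to 1, as the averaging equation above requires.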

2.6.2 Order Statistics Filter


These are nonlinear spatial filters whose response is based on ordering (ranking) the pixels contained in
the image area encompassed by the filter, and then replacing the value of the center pixel with the value
determined by the ranking result.
The best known example of this category is the median filter, in which the value of the center pixel is
replaced by the median of the gray levels in the neighborhood of that pixel.
These filters are particularly effective in the case of impulse or salt-and-pepper noise, so called because
of its appearance as white and black dots superimposed on the image. The median ξ of a set of values is
such that half the values in the set are less than or equal to ξ and half are greater than or equal to it.
To perform median filtering at a point in an image, we first sort the values of the pixel in question and
its neighbors, determine their median, and assign this value to that pixel.
We introduce some additional order-statistics filters. Order-statistics filters are spatial filters whose
response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter.
The response of the filter at any point is determined by the ranking result.
Median filter

The best known order-statistics filter is the median filter, which, as its name implies, replaces the value
of a pixel by the median of the gray levels in the neighborhood of that pixel:

f̂(x, y) = median { g(s, t) }, (s, t) ∈ Sxy

where Sxy is the window centered at (x, y).

The original value of the pixel is included in the computation of the median. Median filters are quite
popular because, for certain types of random noise, they provide excellent noise-reduction capabilities,
with considerably less blurring than linear smoothing filters of similar size. Median filters are
particularly effective in the presence of both bipolar and unipolar impulse noise. In fact, the median
filter yields excellent results for images corrupted by this type of noise.
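A Matlab sketch contrasting median filtering with linear averaging on salt-and-pepper noise; the noise density of 0.05 is an illustrative choice:

% Median filtering of salt-and-pepper (impulse) noise
f = imread('cameraman.tif');
g = imnoise(f, 'salt & pepper', 0.05);   % add 5% impulse noise
gmed = medfilt2(g, [3 3]);               % 3x3 median filter
gavg = imfilter(g, ones(3)/9);           % 3x3 average, for comparison
subplot(1,3,1), imshow(g),    title('Noisy')
subplot(1,3,2), imshow(gavg), title('3x3 average')
subplot(1,3,3), imshow(gmed), title('3x3 median')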
Max and min filters
Although the median filter is by far the order-statistics filter most used in image processing, it is by no
means the only one. The median represents the 50th percentile of a ranked set of numbers, but ranking
lends itself to many other possibilities. For example, using the 100th percentile results in the so-called
max filter, given by

f̂(x, y) = max { g(s, t) }, (s, t) ∈ Sxy

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low
values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy. The
0th percentile filter is the min filter,

f̂(x, y) = min { g(s, t) }, (s, t) ∈ Sxy

which is useful for finding the darkest points in an image and for reducing salt noise.
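Both filters are order-statistic operations and can be written in Matlab with ordfilt2; a minimal sketch:

% Max and min filters as order-statistic filters
g = imread('cameraman.tif');
gmax = ordfilt2(g, 9, ones(3));      % 9th of 9 ranked values: 3x3 max
gmin = ordfilt2(g, 1, ones(3));      % 1st of 9 ranked values: 3x3 min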

2.7 Sharpening Spatial Filters


 The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that
has been blurred, either in error or as a natural effect of a particular method of image acquisition.
 The applications of image sharpening range from electronic printing and medical imaging to
industrial inspection and autonomous guidance in military systems.
 Just as smoothing can be achieved by integration (averaging), sharpening can be achieved by spatial
differentiation. The strength of the response of a derivative operator is proportional to the degree of
discontinuity of the image at the point at which the operator is applied. Image differentiation therefore
enhances edges and other discontinuities and deemphasizes areas with slowly varying gray levels.
 It is common practice to approximate the magnitude of the gradient by using absolute values instead
of squares and square roots.
A basic definition of the first order derivative of a one-dimensional function f(x) is the difference

∂f/∂x = f(x+1) - f(x)

and the second order derivative of a one-dimensional function f(x) is

∂²f/∂x² = f(x+1) + f(x-1) - 2 f(x)
Development of the Laplacian method
The two-dimensional Laplacian operator for continuous functions is

∇²f = ∂²f/∂x² + ∂²f/∂y²

and its digital approximation is

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

The Laplacian highlights gray level discontinuities in an image and deemphasizes regions of slowly
varying gray levels, which tends to produce edge lines on a featureless black background. The
background texture can be recovered by adding the original and Laplacian images.
To sharpen an image, the Laplacian is combined with the original image:

g(x, y) = f(x, y) - ∇²f(x, y) if the center coefficient of the Laplacian mask is negative, and
g(x, y) = f(x, y) + ∇²f(x, y) if the center coefficient of the Laplacian mask is positive.
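A Matlab sketch of Laplacian sharpening with the negative-center mask, assuming the toolbox image moon.tif:

% Laplacian sharpening with a negative-center mask
f = im2double(imread('moon.tif'));
lap = [0 1 0; 1 -4 1; 0 1 0];        % Laplacian mask, center coefficient -4
L = imfilter(f, lap, 'replicate');   % Laplacian of the image
g = f - L;                           % subtract because the center is negative
imshowpair(f, g, 'montage')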

2.7.1 Unsharp Masking and High Boost Filtering

Unsharp masking means subtracting a blurred version of an image from the image itself:

fs(x, y) = f(x, y) - f̄(x, y)

where fs(x, y) denotes the sharpened image obtained by unsharp masking and f̄(x, y) is a blurred version
of f(x, y).

A slight further generalization of unsharp masking is called high boost filtering. A high boost filtered
image is defined at any point (x, y) as

fhb(x, y) = A f(x, y) - f̄(x, y), A ≥ 1

which reduces to unsharp masking when A = 1.
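A minimal Matlab sketch, assuming a Gaussian blur for the smoothed image and an illustrative boost factor A:

% Unsharp masking and high boost filtering
f = im2double(imread('moon.tif'));
fbar = imfilter(f, fspecial('gaussian', 5, 1), 'replicate');  % blurred copy
fs = f - fbar;                       % unsharp mask
A = 1.2;                             % boost factor, A >= 1
fhb = A * f - fbar;                  % high boost filtered image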

2.8 Frequency Domain Filtering

Basis of Filtering in Frequency Domain

The basic steps of filtering in the frequency domain are:

i) Multiply the input image by (-1)^(x+y) to center the transform
ii) Compute F(u, v), the Fourier transform of the image
iii) Multiply F(u, v) by a filter function H(u, v)
iv) Compute the inverse DFT of the result of (iii)
v) Take the real part of the result of (iv)
vi) Multiply the result in (v) by (-1)^(x+y)

H(u, v) is called a filter because it suppresses certain frequencies in the image while leaving others
unchanged.
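A Matlab sketch of the whole pipeline; fftshift/ifftshift play the role of the (-1)^(x+y) centering step, and the Gaussian low pass filter with D0 = 30 is an illustrative choice (the transfer functions themselves are defined in the sections that follow):

% Frequency domain filtering with a Gaussian low pass filter
f = im2double(imread('cameraman.tif'));
[M, N] = size(f);
F = fftshift(fft2(f));                   % centered Fourier transform
[u, v] = meshgrid(-N/2:N/2-1, -M/2:M/2-1);
D = sqrt(u.^2 + v.^2);                   % distance D(u,v) from the center
D0 = 30;                                 % cutoff frequency (illustrative)
H = exp(-(D.^2) / (2 * D0^2));           % Gaussian LPF transfer function
G = H .* F;                              % G(u,v) = H(u,v) F(u,v)
g = real(ifft2(ifftshift(G)));           % back to the spatial domain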

2.8.1 Smoothing Frequency Domain Filters - Low pass filtering:
Edges and other sharp transitions in the gray levels of an image contribute significantly to the high
frequency content of its Fourier transform. Hence smoothing is achieved in the frequency domain by
attenuating a specified range of high frequency components in the transform of a given image.

The basic model of filtering in the frequency domain is

G(u, v) = H(u, v) F(u, v)

where F(u, v) is the Fourier transform of the image to be smoothed and H(u, v) is a filter transfer
function. The objective is to select a filter function H(u, v) that yields G(u, v) by attenuating the high
frequency components of F(u, v).
There are three types of low pass filters
1. Ideal
2. Butterworth
3. Gaussian

Ideal Low Pass Filter


It is the simplest of the three filters. It cuts off all high frequency components of the Fourier transform
that are at a distance greater than a specified distance D0 from the origin of the (centered) transform. It is
called a two-dimensional ideal low pass filter (ILPF) and has the transfer function

H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0

where D(u, v) is the distance from the point (u, v) to the center of the frequency rectangle.

Butterworth Low Pass Filter

It has a parameter called the filter order. For high values of the filter order it approaches the form of the
ideal filter, whereas for lower order values it is closer to a Gaussian filter; it may thus be viewed as a
transition between the two extremes. The transfer function of a Butterworth low pass filter (BLPF) of
order n, with cutoff frequency at distance D0 from the origin, is defined as

H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))

The most commonly used value of n is 2. Unlike the ILPF, the BLPF has no sharp discontinuity
establishing a clear cutoff between passed and filtered frequencies, so defining a cutoff frequency is a
main concern with these filters. The filter gives a smooth transition in blurring as a function of increasing
cutoff frequency. A Butterworth filter of order 1 has no ringing; ringing increases as a function of filter
order, because higher orders make the spatial response of the filter take on negative values.
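Reusing the distance matrix D and cutoff D0 from the pipeline sketch above, the BLPF transfer function is one line of Matlab:

% Butterworth low pass transfer function of order n
n = 2;                               % the most common choice of order
Hb = 1 ./ (1 + (D ./ D0).^(2 * n));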

Gaussian Low Pass Filter


The transfer function of a Gaussian low pass filter (GLPF) is

H(u, v) = e^(-D²(u, v) / (2 D0²))

where D(u, v) is the distance of the point (u, v) from the center of the transform and σ = D0 is the
specified cutoff frequency. The filter has the important characteristic that its inverse Fourier transform is
also Gaussian, so a GLPF produces no ringing.

2.8.2 Sharpening Frequency Domain Filters - High pass filtering:

Image sharpening can be achieved in the frequency domain by a high pass filtering process, which
attenuates the low frequency components without disturbing the high frequency information. The filters
considered here are radially symmetric and completely specified by a cross section.

Given the transfer function Hlp(u, v) of a low pass filter, the corresponding high pass filter can be
obtained using the equation

Hhp(u, v) = 1 - Hlp(u, v)
Ideal High Pass Filter


This filter is the opposite of the ideal low pass filter and has the transfer function

H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
Butterworth High Pass Filter


The transfer function of a Butterworth high pass filter (BHPF) of order n is given by the equation

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))
Gaussian High Pass Filter


The transfer function of a Gaussian high pass filter (GHPF) is given by the equation

H(u, v) = 1 - e^(-D²(u, v) / (2 D0²))
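Reusing D, D0 and n from the earlier sketches, the three high pass transfer functions can be written in Matlab as:

% High pass transfer functions obtained as Hhp = 1 - Hlp
Hihpf = double(D > D0);                   % ideal HPF
Hbhpf = 1 - 1 ./ (1 + (D ./ D0).^(2*n));  % Butterworth HPF of order n
Hghpf = 1 - exp(-(D.^2) / (2 * D0^2));    % Gaussian HPF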

Homomorphic Filtering

Homomorphic filtering is a frequency domain technique that aims at a simultaneous increase in contrast
and compression of the dynamic range. It is mainly applied to non-uniformly illuminated images, such as
medical and sonar images, for edge enhancement that makes the image details clearer to the observer.


Homomorphic filters are used in situations where the image is subject to multiplicative interference or
noise. An image is formed as the product f(x, y) = i(x, y) r(x, y) of an illumination component and a
reflectance component (see below). We cannot easily operate separately on the frequency components of
illumination and reflectance, because the Fourier transform of a product is not separable; that is,

F{f(x, y)} ≠ F{i(x, y)} · F{r(x, y)}

We can separate the two components by taking the logarithm of both sides:

ln f(x, y) = ln i(x, y) + ln r(x, y)

Taking Fourier transforms on both sides we get

F{ln f(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}

that is, F(x, y) = I(x, y) + R(x, y),

where F, I and R are the Fourier transforms of ln f(x, y), ln i(x, y) and ln r(x, y) respectively.

The function F represents the Fourier transform of the sum of two images: a low frequency illumination
image and a high frequency reflectance image. If we now apply a filter with a transfer function that
suppresses low frequency components and enhances high frequency components, we can suppress the
illumination component and enhance the reflectance component.

Features & Application:

1. Homomorphic filter is used for image enhancement.


2. It simultaneously normalizes the brightness across an image and increases contrast.
3. It is also used to remove multiplicative noise.
Images normally consist of light reflected from objects. The basic nature of the image f(x,y) may
therefore be characterized by two components:
(1) the amount of source light incident on the scene being viewed, and
(2) the amount of light reflected by the objects in the scene.

These portions of light are called the illumination and reflectance components, denoted i(x,y) and r(x,y)
respectively. The functions i and r combine multiplicatively to give the image function f:

f(x,y) = i(x,y) r(x,y)

where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1; a reflectance of 0 corresponds to perfect absorption (black) and
a reflectance of 1 to perfect reflection (white).

Since i and r combine multiplicatively, they can be made additive by taking the logarithm of the image
intensity, so that they can be separated in the frequency domain. Illumination variations can be thought
of as multiplicative noise and can be reduced by filtering in the log domain. To make the illumination of
an image more even, the high frequency components are increased and the low frequency components
are decreased, as in the sketch below.
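A Matlab sketch of the complete homomorphic chain (log, FFT, high emphasis filter, inverse FFT, exponential); the Gaussian-shaped transfer function and the parameter values gammaL, gammaH and D0 are illustrative assumptions:

% Homomorphic filtering: log -> FFT -> filter -> IFFT -> exp
f = im2double(imread('pout.tif')) + eps;   % eps avoids log(0)
[M, N] = size(f);
Z = fftshift(fft2(log(f)));                % transform of the log image
[u, v] = meshgrid(-N/2:N/2-1, -M/2:M/2-1);
D2 = u.^2 + v.^2;
gammaL = 0.5; gammaH = 1.5; D0 = 20;       % illustrative parameters
H = (gammaH - gammaL) * (1 - exp(-D2 / (2 * D0^2))) + gammaL;
g = exp(real(ifft2(ifftshift(H .* Z))));   % enhanced image

Choosing gammaL < 1 and gammaH > 1 suppresses the low frequency illumination component while boosting the high frequency reflectance component, as described above.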

Medical Image Enhancement using Hybrid filter

Medical image enhancement addresses the problem of improving the visibility of significant features in
a medical image in order to facilitate diagnosis. Poor visualization of medical images may be due to
limited pixel resolution, limited image dimensions, and the inevitable effect of noise.

This section presents a hybrid filter architecture using the combination of


(1) Fourier descriptors (FDs) for shape similarity-invariant enhancement and rotation-invariant
enhancement to accommodate the arbitrary orientation of textures and contours,
(2) An Adaptive Multistage Nonlinear Filter (AMNF), and
(3) A Multiresolution/ Multiorientation Wavelet Transform (MMWT) that are specifically designed for
image enhancement in medical imaging.
Here three visual attributes of the delineated masses and microcalcification clusters (MCCs) in
mammograms are considered, namely shape, texture, and area.

Filter Architecture
A block diagram of the hybrid filter architecture is shown in Figure. The input mammogram image
g(i, j) is first processed by the Fourier descriptors and the Adaptive Multistage Nonlinear Filter
(FDs-AMNF), enhancing desired features while suppressing image noise and smoothing the details of
background parenchymal tissue structures. The output image, expressed as gFDsAMNF (i, j), is
processed in two different ways:
(a) A weight coefficient α1 is applied to the output image producing α1 gFDsAMNF (i, j) and
(b) The same output image is processed by the wavelet transform. The MMWT, as shown in Figure,
decomposes the output image, gFDsAMNF (i, j), into a set of independent, spatially oriented frequency
bands or lower resolution sub images. These sub images are then classified into two categories: one
category primarily contains structures of interests, and another category mainly contains background
structures. The sub images are then reconstructed by the MMWT into two images, gW1 (i, j) and gW2
(i, j), that contain the desired features and background features, respectively. Finally, the outputs of the
reconstructed sub images weighted by coefficients α2 and α3 and the original weighted output image
α1 gFDsAMNF (i, j) are combined as indicated in Figure to yield the output image g0 that further
improves the enhancement of MCCs/masses as follows:
g0 = α1 gFDsAMNF (i, j) + α2 gW1 (i, j) - α3 gW2 (i, j)

A linear gray scaling is then used to scale the enhanced images.


Shape Similarity-Invariant and Rotation-Invariant Descriptors
Expand the functions x(s) and y(s) separately to obtain the elliptic Fourier descriptors (EFDs). The EFDs
corresponding to the nth harmonic of a contour composed of K points are denoted ψ.

When the first harmonic locus is an ellipse, the rotations are defined relative to the semi-major axis of
the locus and produce two related representations of the curve.

Adaptive Multistage Nonlinear Filtering (AMNF)


Basic Filter Structure
An image g(i, j) can be considered to consist of two parts, a low frequency part gL(i, j) and a high
frequency part gH(i, j), and can be expressed as

g(i, j) = gL(i, j) + gH(i, j)
The low-frequency part may be dominant in homogeneous regions, whereas the high-frequency part
may be dominant in edge regions. The two-component image model allows different treatment of the
components, and it can be used for adaptive image filtering and enhancement. The high-frequency part
may be weighted with a signal-dependent weighting factor to achieve enhancement. A two-component
model is suitable not only for noise suppression, but also for many other enhancement operations such
as statistical differencing and contrast enhancement.
The TSF is based on the central weighted median filter (CWMF), which provides a selectable
compromise between noise removal and edge preservation in the operation of the conventional median
filter. Consider an M x N window W, with M and N odd, centered around a pixel x(i, j) in the input image.
The output y(i, j) of the CWMF is obtained by computing the median of the pixels in the window
augmented by 2K repetitions of x(i, j),

y(i, j) = median { W(i, j) ; 2K copies of x(i, j) }

where 2K is an even positive integer such that 0 < 2K < MN - 1. If K = 0, the CWMF reduces to the
standard median filter, and if 2K ≥ MN - 1 the CWMF becomes the identity filter. Larger values of K
preserve more image detail at the expense of noise smoothing, compared to smaller values.
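A minimal Matlab sketch of the CWMF as a function; the function name and the replicate border padding are assumptions of this sketch, not specified in the original description:

% Center weighted median filter: the center pixel is repeated 2K extra
% times before the median of the M x N window is taken.
function y = cwmf(x, M, N, K)
xp = padarray(double(x), [(M-1)/2, (N-1)/2], 'replicate');
y = zeros(size(x));
for i = 1:size(x, 1)
    for j = 1:size(x, 2)
        w = xp(i:i+M-1, j:j+N-1);            % M x N window around (i,j)
        c = w((M+1)/2, (N+1)/2);             % center pixel x(i,j)
        y(i, j) = median([w(:); repmat(c, 2*K, 1)]);
    end
end
end

For K = 0 this reduces to the ordinary M x N median filter, matching the limiting case stated above.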
The structure of the AMNF is shown in Figure, and its output is

gAMNF (i, j) = gAF (i, j) + b(i, j) [g(i, j) - gAF (i, j)]

where b(i, j) is a signal dependent weighting factor that is a measure of the local signal activity, and
gAF (i, j) is the output of the second stage of the filter in Figure.

Adaptive Operation
In order to achieve better adaptive properties, five different filters with different window sizes are
selected according to the value of b(i, j). In choosing an appropriate window size W, two competing
effects must be balanced: noise suppression increases with increasing window size, while spatial
resolution decreases.

Representing the outputs of the five filters as the vector gj, this adaptive process can be written in vector
notation,

where gl is the vector whose nonzero element is the output gAF (i, j), and the elements of c are set
adaptively according to the weighting factor b(i, j) as follows:

The thresholds τ1 to τ4 have to be set by the user according to the application.

Parameter Computation for the AMNF

The local mean ḡ(i, j) and local variance σ²g(i, j) of g(i, j) needed in the equations above can be
calculated over a uniform moving average window of size (2r+1) x (2s+1).

Considering the image to be formed by a noise-free ideal image f(i, j) and a noise process n(i, j) such
that g(i, j) = f(i, j) + n(i, j), it is straightforward to show that the local variance of f(i, j) is given by

σ²f(i, j) = σ²g(i, j) - σ²n(i, j)

where σ²n(i, j) is the nonstationary noise variance, assuming that f(i, j) and n(i, j) are independent. The
variance σ²n(i, j) is assumed to be known from an a priori measurement on the imaging system.

Multiresolution/Multiorientation Wavelet Transform (MMWT)

Multiresolution Wavelet Transform

An efficient way to construct two-dimensional wavelets is to use "separable wavelets" obtained from
products of one-dimensional wavelets and scaling functions, where A and B represent linear operators
in the corresponding decomposition equations.

Multiorientation Wavelet Transform

Directional sensitivity can be obtained by retaining only the components that lie in the desired
orientation in the WT domain. Selection of the orientation can be achieved with a fan having a desired
angular width and oriented in the desired orientation in the WT domain.

The appropriate fan width can change from image to image, and narrower fan widths may be better for
many images. Therefore the DWT is implemented using multiorientation filter banks with nine fan
widths: 45° (4 orientations), 30° (6 orientations), 22.5° (8 orientations), 15° (12 orientations), 11.25°
(16 orientations), 9° (20 orientations), 7.2° (25 orientations), 6° (30 orientations), and 5.63° (32
orientations). In each orientation the maximal gradient magnitude is defined as Gi (i = 1, 2, ..., 9). The
technique of the adaptive directional filter bank (ADFB) is shown in Figure, where the output depends
on the maximal gradient among all orientations,

Gmax = max { αi Gi, i = 1, 2, ..., 9 }

where the αi (i = 1, 2, ..., 9) are normalizing factors chosen to make all αi Gi have the value given by a
unit arc area, and the βi (i = 1, 2, ..., 9) are adaptive control parameters. If Gmax = αi Gi, then βi = 1 and
βj = 0 for j ≠ i. For example, when Gmax = α9 G9, the output of the 32-channel directional filter is used
as the orientation of the striation in the window.

Performance Measures

The visual evaluation was conducted on a dual monitor workstation used for image enhancement,
receiver operating characteristic (ROC) analysis, and free-response ROC (FROC) analysis. FROC/ROC
curves were computed and generated using software developed for this purpose. A customized interface
was developed to review left and right breast mammograms on the dual monitor display and to allow
real-time pan/zoom for each view.

