Biomedical Signal Processing

Prof. Sudipta Mukhopadhyay


Department of Electronics and Electrical Communication Engineering
Indian Institute of Technology, Kharagpur

Lecture – 34
Frequency Domain Characterization

Good morning. Today we will start a new chapter, and for that we first need to introduce a few things.

(Refer Slide Time: 00:30)

So, here we will take you through the reference list once again; all the previous references remain as they are.
(Refer Slide Time: 00:37)

And we will just add one more at the end, the last one: Steven Kay's book, 'Modern Spectral Estimation: Theory and Application'. At times we will refer to it, and I think it will be useful in the following chapters.

So, we will proceed to frequency domain characterization. To start this topic, the first thing we should note is that we engineers are problem solvers, and for that we need to look for whatever kind of solution suits the problem best. For example, when we look at our heart rate, we find it easier to express it as 72 beats per minute than to say that each cycle lasts so many seconds or milliseconds. So, based on convenience we pick the representation.
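As a simple illustration of this convenience (my own worked numbers, not from the slide): a heart rate of 72 beats per minute corresponds to

\[
f = \frac{72\ \text{beats}}{60\ \text{s}} = 1.2\ \text{Hz}, \qquad T = \frac{1}{f} \approx 0.833\ \text{s} \approx 833\ \text{ms per beat},
\]

and "72 beats per minute" is clearly the more natural way to quote it.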
(Refer Slide Time: 01:59)

And here let us look back at one of the examples. Say we have the phonocardiogram (PCG) signal, which has two components, S1 and S2. The way it is shown here it is actually a filtered, clean signal, but in real life it will have a lot of noise superimposed on it. Now, if a lot of noise is superimposed on it, then not just here but here also S1 and S2 do not come out so clean.

Now, once it is not so clean, the problem is that it becomes very difficult to get the characteristics of S1 and S2 as two different phenomena. So, the first thing we do is try to eliminate most of the noisy portion, the portion in between S1 and S2: we want to isolate the parts containing S1 and S2 and eliminate the rest, because we know there is no activity there, only noise. For that we took the help of the QRS complex of the ECG signal and the notch in the carotid pulse. In that way we found that we can confine S1 and S2 in the time domain and get rid of the problems there.
(Refer Slide Time: 03:48)

First we confine them in the time domain. Now, after that, when we try to characterize them, what we find is this: suppose there is some abnormality in the S1 wave, which consists of a number of components. First, the blood is coming out of the ventricles at high speed and opening up the valves towards the arteries, which causes some turbulence; next, the opening of those valves itself makes some sound; then the valves which were so far passing the blood from the atria to the ventricles are getting closed, and if there is any leakage there, that will also contribute some sound; and if there is any constriction in the arteries which are supposed to carry the blood from the ventricles to the next stage, that contributes as well. So, all these things have some component in the S1 wave.

So, even the S1 signal is a multi-component signal, with a lot of different causes behind it. One thing we know is that when there is a murmur, the frequency content of the S1 wave increases; but noise is already present, and techniques like zero-crossing counting or the turns count are not very accurate measures of frequency. They give only a rough measure of the change in frequency. So, it becomes difficult to capture in that way what change occurs between a person who is normal and a person who has the disease.

In fact, at an early age, or for people who are quite young, there are also some murmurs; but as they grow older those subside, and only in the pathological cases do we get a murmur, and that murmur has some different frequency components. So, what people found is that instead of looking at the signal in the time domain, if we look at it in the frequency domain it becomes much easier to diagnose. So, the first thing we note here: if we look at the S1 wave of the PCG signal, then for the normal case we get a peak at around 40 hertz.

So, most of the energy is concentrated near the 40 hertz band. But if there is any abnormality in the heart, the spectral characteristics change: the power in the 40 hertz band subsides and we get an increase in the lower band. In the case of acute myocardial infarction, there is a lack of blood, and hence a lack of oxygen, in those muscles.

So, then we find that the components near 30 hertz increase, with a steep decrease as the frequency goes further down. When the myocardial infarction is getting healed, some more changes happen: the lower-frequency components towards 30 hertz again decrease, but there is no increase near the 40 hertz band; the higher-frequency components, however, increase, moving back towards those of a normal subject.

So, if we look at the frequency domain description of the S1 wave, it becomes much easier to capture the changes. For the normal case the characteristic is clear: the spectrum has its maximum in the 40 hertz band. For acute myocardial infarction, the signal energy is concentrated more near 30 hertz, and at higher frequencies, above 50 or 60 hertz, it is much lower. When it gets healed, the components at the higher frequencies, 50 to 60 hertz, move upward again, and the increase that was there near 30 hertz is again subdued.
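As a rough illustration of this idea, the sketch below compares the energy of an S1-like segment in two bands (around 30 hertz and around 40 hertz) from its magnitude spectrum. It is only a minimal sketch on a synthetic signal, not real PCG data; the sampling rate, band edges and the helper name band_energy are my own assumptions, not part of the lecture.

```python
import numpy as np

def band_energy(x, fs, f_lo, f_hi):
    """Sum of |FFT|^2 of x over the band [f_lo, f_hi] Hz (one-sided)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return np.sum(np.abs(X[band]) ** 2)

# Synthetic stand-in for an isolated, noisy S1 segment (assumed parameters).
fs = 1000.0                        # assumed sampling rate in Hz
t = np.arange(0, 0.15, 1.0 / fs)   # ~150 ms segment
rng = np.random.default_rng(0)
s1 = np.sin(2 * np.pi * 40 * t) * np.exp(-20 * t) + 0.2 * rng.standard_normal(t.size)

e30 = band_energy(s1, fs, 25, 35)  # energy near 30 Hz
e40 = band_energy(s1, fs, 35, 45)  # energy near 40 Hz
print(f"energy near 30 Hz: {e30:.1f}, energy near 40 Hz: {e40:.1f}")
```

For a synthetic "normal" S1 built this way, the 40 hertz band dominates; shifting the synthetic component towards 30 hertz would mimic the change described above for acute myocardial infarction.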

So, the frequency domain description is more effective in this case, and that is why, as engineers, we find that instead of the time domain we should at times look at the frequency domain description of the signal; from that point of view we take up frequency domain analysis. We know that in the classical definition the power spectral density (PSD) is described with the help of the autocorrelation coefficients: the autocorrelation function and the PSD have a one-to-one relationship through the Fourier transform. But calculating the autocorrelation function for all the lags is itself a big task. So, we try to find an easier way, so that we can get the PSD directly from the signal, and from that point of view we look at a new technique called the periodogram.

(Refer Slide Time: 11:44)

So, here the definition of the periodogram is given. We take the Fourier transform of a sequence starting from -M and going up to +M, that is, 2M+1 points; we take the mod square of that sum and divide by the number of samples; after that we take the expectation, because x(n) is a random process; and then we take the limit as M tends to infinity. That is the definition of the periodogram.
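Written out, the definition described above (my transcription, following the standard form, since the slide itself is not reproduced here) is

\[
S_{xx}(f) \;=\; \lim_{M\to\infty} E\!\left[\frac{1}{2M+1}\left|\sum_{n=-M}^{M} x(n)\,e^{-j2\pi f n}\right|^{2}\right].
\]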

Now, what we would like to know is how this compares with the standard definition of the PSD, which is a sum from -∞ to +∞ over all lags of the autocorrelation coefficients rxx, whose Fourier transform gives the PSD. What do these two definitions have in common, if anything? That is the first question we ask: we have a standard definition, and we are taking a new definition, the periodogram, in which we do not have to compute the autocorrelation coefficients explicitly but get the PSD directly; will the two give us the same thing, or something different? So, we start with this question.
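For reference, the classical (Wiener-Khinchin) definition mentioned here is

\[
S_{xx}(f) \;=\; \sum_{k=-\infty}^{\infty} r_{xx}(k)\,e^{-j2\pi f k},
\]

so the question is whether the limit above reduces to this sum.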
(Refer Slide Time: 14:21)

So, for that we do the derivation. First we take the periodogram formula. In the periodogram formula we have taken the mod square, and the mod square can be represented as the quantity multiplied by its own conjugate. So, we take the conjugate of the same expression, and just to make sure the two indices do not get mixed up, we write the two summations with two indices, one with m and the other with n; one of them gets conjugated, giving x*(n), and because of the conjugation the exponential contains the term (m - n).

So, after getting rid of the square we get an expression like this: a quadratic term involving x(m) and x*(n). Now, let us look at the expectation operator. The expectation operator acts on the random variables; the rest of the terms do not change over the probability space. So, if we take it inside, we get the term E[x(m) x*(n)], which is nothing but the autocorrelation function at lag (m - n).

We could write this as rxx(m - n) only under an assumption. We have assumed that this is a stationary process, and because it is stationary it is shift invariant: the autocorrelation does not depend on the absolute time indices m and n, only on the difference of the two time instants. That is why we could write it this way; otherwise we would need a more complicated description and would have to write rxx(m, n). So, that property has been used. Hence, when we go through this kind of derivation, one thing is implied: we are dealing with stationary signals. Only then do these steps follow; otherwise they could fail.
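The step described above can be written compactly as follows (my reconstruction of the slide's algebra from the description):

\[
E\!\left[\frac{1}{2M+1}\left|\sum_{n=-M}^{M} x(n)\,e^{-j2\pi f n}\right|^{2}\right]
= \frac{1}{2M+1}\sum_{m=-M}^{M}\sum_{n=-M}^{M} E\!\left[x(m)\,x^{*}(n)\right] e^{-j2\pi f (m-n)}
= \frac{1}{2M+1}\sum_{m=-M}^{M}\sum_{n=-M}^{M} r_{xx}(m-n)\,e^{-j2\pi f (m-n)},
\]

where the last equality uses stationarity, i.e. E[x(m) x*(n)] = rxx(m - n).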

Now, when we try to simplify, we have ended up with a double summation, which does not make things simpler; in fact, it is longer than the initial expression. So, we try to do something about it, and here we notice something: if we take a double summation of a function of the form g(m - n), then it can be expressed as a single summation over almost double the number of lags. Taking (m - n) as the lag, the double summation ran from -M to +M in each index, while the single summation runs from -2M to +2M.

So, we have that many lags in this case. We have replaced (m - n) by a new index k; however, each term g(k) is weighted. The weight is 2M + 1 - |k|, which means that at lag 0 the most terms get accumulated: when k = 0 there are 2M + 1 such terms, and as we move away from 0 on either side, the number of terms reduces. In any case, whatever the scaling, one thing is clear: a function that can be expressed as g(m - n) under a double summation over m and n can be rewritten as a single summation with m - n as the new index. We apply that here, because in this expression rxx depends on (m - n) and the exponential also depends on (m - n).
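The identity used here, written out (my reconstruction), is

\[
\sum_{m=-M}^{M}\sum_{n=-M}^{M} g(m-n) \;=\; \sum_{k=-2M}^{2M}\bigl(2M+1-|k|\bigr)\,g(k),
\]

which follows simply by counting, for each lag k = m - n, how many index pairs (m, n) give that lag.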
(Refer Slide Time: 21:24)

So, using that, we simplify and write the expression in a simpler way, as in the second line: taking the new index k, we can write it as a single summation of rxx(k) times exp(-j2πfk). Here, one thing to keep in mind is that f is a continuous variable, so we are getting a continuous spectrum, in the same sense as what we are supposed to get from the classical definition using the autocorrelation function.

In the next step we take the factor 1/(2M + 1) inside and combine it with the weight 2M + 1 - |k|, so we get a new scaling 1 - |k|/(2M + 1) multiplying rxx(k) exp(-j2πfk). Already this looks very similar to the classical expression: this part is the autocorrelation function of the signal, and this part is the kernel of the Fourier transform, so we have come quite close. Now we take the limit; so far we have not considered the limit, so we apply it now.

The limit is on M, and we see that these two factors are not directly affected, because they do not involve M. What changes is, first of all, the range of the index k: the limits -2M to +2M become -∞ to +∞. And as for the weight, as M tends to infinity its second part, |k|/(2M + 1), goes to 0 for every finite value of k, so we can simply replace the weight by its limiting value, 1. So, we get the classical definition of the PSD. In other words, the expression we have derived, the periodogram, is nothing but a new way to find the PSD without first computing the autocorrelation function and then taking its Fourier transform. It is a direct way to get the PSD, and that is how the periodogram gets a legitimate place in the literature.
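Putting the pieces together (my reconstruction of the final step):

\[
\lim_{M\to\infty}\sum_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right) r_{xx}(k)\,e^{-j2\pi f k}
\;=\; \sum_{k=-\infty}^{\infty} r_{xx}(k)\,e^{-j2\pi f k}
\;=\; S_{xx}(f).
\]

As a small numerical check of this direct route to the PSD, the sketch below computes a periodogram straight from the samples of a noisy 40 hertz sinusoid and locates its spectral peak. This is only a minimal sketch, not from the lecture; the helper name periodogram, the signal, the sampling rate and the scaling are my own choices.

```python
import numpy as np

# Periodogram computed directly from the samples, with no explicit
# autocorrelation estimate, following the definition derived above.
def periodogram(x, fs, nfft=None):
    """Return (frequencies in Hz, periodogram values) for a real signal x."""
    n = len(x)
    nfft = nfft or n
    X = np.fft.rfft(x, n=nfft)
    pxx = (np.abs(X) ** 2) / n               # |sum x(n) e^{-j2pi f n}|^2 / N
    f = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return f, pxx

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)  # 40 Hz tone in noise

f, pxx = periodogram(x, fs)
print(f"peak of the periodogram is near {f[np.argmax(pxx)]:.1f} Hz")  # ~40 Hz
```

In practice, a single periodogram computed from one finite record is a noisy estimate of the PSD; averaging over several segments or records is the usual way to reduce that variance.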

So, with that we stop here.

Thank you.
