Laboratory Journal: Signal Coding & Estimation Theory
D. Y. PATIL GROUP
Dr. D. Y. Patil Group of Institutions
CERTIFICATE
This is to certify that Mr./Ms. ...................................................................................... of
Class ....................., Roll No. .................., University Examination Seat No.
................. has satisfactorily completed all the practicals in the subject Signal Coding & Estimation Theory as prescribed by the University of Pune.
Bahubali Shiragapur
Head of Department
Dept. of E&TC
Dr. S. S. Sonavane
Principal
DYPSOE
Contents

Determination of various entropies and mutual information for various channels
Generation and evaluation of Shannon-Fano coding
Generation and evaluation of variable length source code (Huffman coding)
Generating and decoding of linear block code (LBC)
Generating and decoding cyclic code
Implementation of convolution code
Decoding convolution codes using Viterbi algorithm
Implementation of BCH algorithm
Case Study: RS coding and decoding
Index
p(A, B) = p(A/B) p(B)

The information content of a message of probability p is

I = log2 (1/p)

and its entropy H is considered the average information per message of the source. Entropy can be regarded intuitively as the uncertainty or disorder of any event:

H = Σ p log2 (1/p) = - Σ p log2 p
Entropy measures the uncertainty inherent in the distribution of a random variable. Joint entropy and conditional entropy are simple extensions: the former measures the uncertainty in the joint distribution of a pair of random variables, while the latter measures the remaining uncertainty in one random variable given knowledge of the other.
Joint Entropy:- The joint entropy H(X, Y) of a pair of discrete random variables with a joint distribution p(x, y) is defined as

H(X, Y) = Σx Σy p(x, y) log2 (1/p(x, y))

Conditional Entropy:- The conditional entropy of X given Y is similarly

H(X/Y) = Σx Σy p(x, y) log2 (1/p(x/y))
Chain Rule for Entropy:- The joint entropy, conditional entropy, and marginal entropy
for two ensembles X and Y are related by:
H ( X, Y ) = H ( X ) + H (Y/X ) = H (Y ) + H ( X/Y )
Mutual Information:- The mutual information between two random variables measures the amount of information that one conveys about the other. Equivalently, it measures the average reduction in uncertainty about X that results from learning about Y. It is defined as

I(X; Y) = Σx Σy p(x, y) log2 [ p(x, y) / (p(x) p(y)) ]
I(X; Y) = H(X) - H(X/Y)
I(X; Y) = H(Y) - H(Y/X)
I(X; Y) = H(X) + H(Y) - H(X, Y)
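These identities are easy to check numerically. A minimal MATLAB check with an illustrative 2 x 2 joint distribution (the matrix is chosen here only for illustration, not taken from the experiment):

p = [0.4 0.1; 0.1 0.4];        %joint distribution p(x,y)
px = sum(p,2); py = sum(p,1);  %marginal distributions
Hx = -sum(px.*log2(px));       %H(X) = 1 bit
Hy = -sum(py.*log2(py));       %H(Y) = 1 bit
Hxy = -sum(p(:).*log2(p(:)));  %H(X,Y) = 1.7219 bits
Ixy = Hx + Hy - Hxy            %I(X;Y) = 0.2781 bits
Hx_y = Hxy - Hy;               %H(X/Y) from the chain rule
Ixy2 = Hx - Hx_y               %the same value, via I = H(X) - H(X/Y)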
Classification of Digital Communication Channels:

Noise free channel:- A noise free channel has only diagonal elements in the joint probability matrix, so that

H(X) = H(Y) = H(X, Y)
Error free channel:- A channel is said to be error free if the capacity of the channel is greater than the entropy of the source. For such a channel H(X/Y) = 0, so

H(Y) = H(X, Y)
H(X) ≠ H(X, Y)
Binary Symmetric Channel:- The BSC channel is characterized by

No. of inputs = No. of outputs = 2

P(Y/X) = [ 1-P    P  ]
         [  P    1-P ]
and, for equiprobable inputs, H(X) = H(Y) = 1 bit.
Noisy channel:- For a noisy channel,

H(X) ≠ H(Y) ≠ H(X, Y)
Algorithm:-
1. Input the number of transmitters and receivers, i.e. m and n.
2. Input the joint probability matrix, i.e. p(x,y).
3. Calculate p(x) from p(x,y), i.e. the sum of all column elements for each row.
4. Calculate p(y) from p(x,y), i.e. the sum of all row elements for each column.
5. Calculate the entropy of the transmitter H(X) using p(x).
6. Calculate the entropy of the receiver H(Y) using p(y).
7. Calculate the joint entropy with the help of the formula
   H(X, Y) = Σx Σy p(x, y) log2 (1/p(x, y))
8. From the values of H(X), H(Y) and H(X,Y), calculate the conditional entropies as
   H(X/Y) = H(X, Y) - H(Y)
   H(Y/X) = H(X, Y) - H(X)
9. Calculate the mutual information as
   I(X; Y) = H(X) - H(X/Y)
   I(X; Y) = H(Y) - H(Y/X)
Post-lab:- Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields.
Program 1: Determination of various entropies and mutual information for various channels
%(opening of the listing was lost in extraction; reconstructed below as a
%minimal sketch that follows the algorithm above and feeds the surviving
%tail of the listing)
clc;
close all;
clear all;
p=input('enter the joint probability matrix p(x,y)=');
[m,n]=size(p);
px=sum(p,2);                  %p(x): sum of all column elements for each row
py=sum(p,1);                  %p(y): sum of all row elements for each column
Hx=0;
for i=1:m
Hx=Hx+px(i)*log2(1/px(i));    %entropy of transmitter H(X)
end
disp('Entropy H(X) ')
disp(Hx)
Hy=0;
for j=1:n
Hy=Hy+py(j)*log2(1/py(j));    %entropy of receiver H(Y)
end
disp('Entropy H(Y) ')
disp(Hy)
Hxy=0;
for i=1:m
for j=1:n
if p(i,j)>0                   %skip zero entries (0*log2(1/0) is NaN)
Hxy=Hxy+p(i,j)*log2(1/p(i,j)); %joint entropy H(X,Y)
end
end
end
disp('Entropy H(X,Y) ')
disp(Hxy)
%calculation of H(x/y)
disp('Conditional Entropy H(X/Y) ')
Hx_y=Hxy-Hy
disp('Conditional Entropy H(Y/X) ')
Hy_x=Hxy-Hx
%calculation of I(x,y)
disp('Mutual Information I(X;Y) ')
Ixy=Hx+Hy-Hxy
Program 2: Determination of various entropies and mutual information for a Binary Symmetric Channel
clc;
close all;
clear all;
x= input('enter the input matrix P(X)=');
t= input('enter the transition matrix P(Y/X)');
n= length(x);
y= x*t;
m= length(y);
disp('Py');
disp(y);
Hx=0;
for i=1:n
Hx=(x(i)*(log2(1/x(i))))+Hx;
end
disp('Hx');
disp(Hx);
Hy=0;
for j=1:m
Hy=(y(j)*(log2(1/y(j))))+Hy;
end
disp('Hy');
disp(Hy);
d=diag(x);
disp('d=');
disp(d);
for k=1:n
d(k,k)=x(k);
end
q=d*t;
l=length(q);
disp('Pxy');
disp(q);
Hxy=0;
for i=1:n
for j=1:m
if q(i,j)>0 %skip zero entries (0*log2(1/0) is NaN otherwise)
Hxy=(q(i,j)*(log2(1/q(i,j))))+Hxy;
end
end
end
disp('Hxy');
disp(Hxy);
Hxby=Hxy-Hy;
disp('Hxby');
disp(Hxby);
Hybx=Hxy-Hx;
disp('Hybx');
disp(Hybx);
Ixy=Hx-Hxby;
disp('Ixy');
disp(Ixy);
Result:-
The Output (Program 1)
[entered joint probability matrix; only the entries 0.2500, 0.1500, 0.1000 were recovered]
Entropy H(X)
    1.4855
Entropy H(Y)
    1
Entropy H(X,Y)
    2.4855
Conditional Entropy H(X/Y)
Hx_y =
    1.4855
Conditional Entropy H(Y/X)
Hy_x =
    1.0000
Mutual Information I(X;Y)
Ixy =
   4.4409e-16
The Output (Program 1, noise-free channel)
[entered diagonal joint probability matrix; only the entries 0.5000, 0.2000 were recovered]
Entropy H(X)
    1.4855
Entropy H(Y)
    1.4855
Entropy H(X,Y)
    1.4855
Conditional Entropy H(X/Y)
Hx_y =
    0
Conditional Entropy H(Y/X)
Hy_x =
    0
Mutual Information I(X;Y)
Ixy =
    1.4855
The Output (Program 2)
Py
    0.3600    0.3400
Hx
    1.0499
Hy
    1.0598
d=
    0.3000         0
         0    0.4000
Pxy
    0.1200    0.1800
    0.2400    0.1600
Hxy
    1.7295
Hxby
    0.6697
Hybx
    0.6797
Ixy
    0.3801
Conclusion:-
Viva Questions
1. What is entropy?
Symbols   Respective Probability   Coding Steps (I II III IV)   Codeword
S1        0.4                      0                            0
S2        0.3                      1 0                          10
S3        0.2                      1 1 0                        110
S4        0.05                     1 1 1 0                      1110
S5        0.05                     1 1 1 1                      1111
Program:-
clear all;
close all;
clc;
ssS=input('enter the symbols you want to encode');
[a,b]=size(ssS);
ss=input('enter the symbols occurences or probabilities');
sf=0; fano=0;                 %initializations for Pk (sf init reconstructed; probabilities assumed sorted in descending order)
siling=ceil(log2(1/ss(1)));   %length of the first codeword (init reconstructed)
n=1;Hs=0;                     %initializations for entropy H(s)
for iii=1:length(ss)
Hs=Hs+ ss(iii)*log2(1/ss(iii)); %solving for entropy
end
for o=1:length(ss)-1
fano=fano+ss(o);
sf=[sf 0]+[zeros(1,o) fano]; %solving for Pk for every codeword
siling=[siling 0]+[zeros(1,o) ceil(log2(1/ss(o+1)))]; %solving for the length of every codeword
end
for r=1:length(sf)
esf=sf(r);
for p=1:siling(r)
esf=mod(esf,1)*2;
h(p)=esf-mod(esf,1); %converting Pk into a binary number
end
hh(r)=h(1)*10^(siling(r)-1); %initialization for making the binary a whole number
for t=2:siling(r)
hh(r)=hh(r)+h(t)*10^(siling(r)-t); %making the binary a whole number
end
%e.g. 0.1101 ==> 1101
end
c={'0','1'};
for i=1:length(hh)
u=1; %converting the codes into a string
for t=siling(i):-1:1
f=floor(hh(i)/10^(t-1)); %1001 ==> 1 (getting the first highest digit of a number)
hh(i)=mod(hh(i),10^(t-1)); %1001 ==> 001 (eliminating the first highest digit of a number)
if f==1
if u==1
d=c{2}; %conversion part (num(1001) to str(1001))
else
d=[d c{2}];
end
else
if u==1
d=c{1};
else
d=[d c{1}];
end
end
codex{i,:}={d};
u=u+1;
end
end
tao=siling(1)*ss(1); %initialization for codeword length
for u=1:length(ss)-1 %computing total weighted codeword length
tao=tao+siling(u+1)*ss(u+1);
end
T=tao/n; %computing average codeword length
B=[flipud(rot90(ss)),flipud(rot90(siling)),flipud(rot90(sf))];
disp('Code')
for i=1:b
disp(codex{i})
end
disp(['Hs = ',num2str(Hs)])
disp(['T = ',num2str(T),'bits/symbol'])
disp([num2str(Hs),' <= ',num2str(T),' <= ',num2str(Hs+1)])
Result:-
The Output
enter the symbols you want to encode1:5
enter the symbols occurences or probabilities[.3 .3 .2 .1 .1]
Code
00
01
100
1100
1110
Hs = 2.171
T = 2.6bits/symbol
2.171 <= 2.6 <= 3.171
Conclusion:-
Symbols   Probability   Codeword
S1        0.4           1
S2        0.2           01
S3        0.2           000
S4        0.1           0010
S5        0.1           0011
Post-Lab:- Huffman coding today is often used as a back-end to other compression methods; multimedia codecs such as JPEG and MP3 have a front-end model and quantization stage followed by Huffman coding. These coding techniques can also be used in cryptography.
Advancement:- Compare the efficiencies of Shannon-Fano and Huffman coding (a sketch follows below).
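A minimal sketch of such a comparison for the five-symbol source p = [.3 .3 .2 .1 .1] used in both experiments (the Huffman lengths are read off the dictionary in the Huffman result below; efficiency = H(S)/T):

p = [0.3 0.3 0.2 0.1 0.1];
Hs = sum(p.*log2(1./p));   %source entropy, about 2.171 bits
L_sf = ceil(log2(1./p));   %lengths used by the Shannon-Fano program
T_sf = sum(p.*L_sf);       %2.6 bits/symbol
L_hu = [2 2 2 3 3];        %lengths from the Huffman dictionary
T_hu = sum(p.*L_hu);       %2.2 bits/symbol
eff = [Hs/T_sf, Hs/T_hu]   %about 0.835 versus 0.9868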
Program:-
clear all;
close all;
clc;
symbols = input('enter the symbols you want to encode ') % Alphabet vector
[r,c]=size(symbols);
%(middle of the listing was lost in extraction; reconstructed below as a
%minimal sketch assuming the Communications Toolbox huffmandict function)
prob = input('enter the probabilities ');   %e.g. [.3 .3 .2 .1 .1]
[dict,avglen] = huffmandict(symbols,prob)   %build the Huffman code dictionary
hx = sum(prob.*log2(1./prob));              %source entropy H(S)
disp('Entropy');
disp(hx);
disp('Efficiency');
disp(hx/avglen);
%
% Pretty print the dictionary.
temp = dict;
for i = 1:length(temp)
temp{i,2} = num2str(temp{i,2});
end
temp
Result:-
The Output
enter the symbols you want to encode 1:5
symbols =
     1     2     3     4     5
[entered probabilities: 0.3000 0.3000 0.2000 0.1000 0.1000]
dict =
    [1]    [1x2 double]
    [2]    [1x2 double]
    [3]    [1x2 double]
    [4]    [1x3 double]
    [5]    [1x3 double]
avglen =
    2.2000
Entropy
    2.1710
Efficiency
    0.9868
temp =
[pretty-printed dictionary; the codeword strings were not fully recovered]
Conclusion:-
Viva Questions
1. What is Shannon's first theorem?
7. What is the need for compression?
9. How do we calculate average codeword length and efficiency? Explain with a formula and an example.
For the parity matrix

P = [ 1 0 1 ]
    [ 0 1 1 ]
    [ 1 1 0 ]

the generator matrix G = [I3 : P] is

G = [ 1 0 0 1 0 1 ]
    [ 0 1 0 0 1 1 ]
    [ 0 0 1 1 1 0 ]
Now from the above generator matrix and message vectors we can calculate the codewords. No. of message vectors = 2^k = 2^3 = 8. The codeword vectors are obtained using the formula

C = mG
Message (m)   Codeword (C)
000           000000
001           001110
010           010011
011           011101
100           100101
101           101011
110           110110
111           111000
Decoding:- The parity check matrix H can be represented in terms of the parity matrix P and an identity matrix using the following formula:

H = [P^T : I(n-k)]

So in this particular problem,

H = [ 1 0 1 1 0 0 ]
    [ 0 1 1 0 1 0 ]
    [ 1 1 0 0 0 1 ]
Now, considering one-bit error patterns, the syndrome table can be calculated using S = e H^T as:

Error (e)    Syndrome (S)
100000       101
010000       011
001000       110
000100       100
000010       010
000001       001
For the received vector r = [1 0 1 1 1 0], the syndrome is

S = r H^T
  = [1 0 1 1 1 0] [ 1 0 1 ]
                  [ 0 1 1 ]
                  [ 1 1 0 ]
                  [ 1 0 0 ]
                  [ 0 1 0 ]
                  [ 0 0 1 ]
  = [1 0 1]
This matches the first row of the syndrome table. It means that the error has occurred in the first bit. Now take the all-zero vector of size n = 6 and make its first bit 1. Let this vector be the error vector e = [1 0 0 0 0 0]. We can correct the error by modulo-2 addition of this error pattern and the received codeword, and therefore the corrected codeword is: Cc = [0 0 1 1 1 0]
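These hand computations can be cross-checked in a few lines of MATLAB; a minimal sketch for the (6,3) code of this example:

G = [1 0 0 1 0 1; 0 1 0 0 1 1; 0 0 1 1 1 0];   %generator matrix [I3 : P]
H = [1 0 1 1 0 0; 0 1 1 0 1 0; 1 1 0 0 0 1];   %parity check matrix [P^T : I3]
c = mod([0 0 1]*G, 2)      %message 001 -> codeword 001110, as tabulated
r = [1 0 1 1 1 0];         %received word: c with the first bit flipped
S = mod(r*H', 2)           %syndrome 101 -> error pattern e = 100000
Cc = xor(r, [1 0 0 0 0 0]) %corrected codeword 001110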
Program:-
%(opening of the listing was lost in extraction; reconstructed below as a
%minimal sketch consistent with the theory above and the surviving tail)
clc;
clear all;
close all;
n=input('enter the codeword length n=');
k=input('enter the message length k=');
P=input('enter the parity matrix P (k x n-k)=');
G=[eye(k),P]                %generator matrix G = [Ik : P]
H=[P',eye(n-k)]             %parity check matrix H = [P^T : In-k]
M=de2bi(0:2^k-1,'left-msb') %all 2^k message vectors
c=mod(M*G,2)                %codeword table C = mG
%eye(n) returns an identity matrix of order n x n for the error patterns
e=eye(n);
%calculating syndromes
s1=e*H';
%checking validity of received vector
r=input('enter the received vector (in binary) r=');
%A = length of received vector
A=length(r);
%B = length of parity check matrix
B=length(H');
if A==B
s=r*H';
S1=mod(s,2)
else
disp('entered received vector is not correct...')
end
Erp=zeros(1,n); %error pattern (length n)
for i=1:n
if S1(1,:)==s1(i,:)
Erp=e(i,:)
end
end
%corrected codeword
disp('Corrected Codeword')
c_correct=xor(r,Erp)
Result:-
The Output
[the program echoes the entered matrices, then displays G, H, the 8 x 6
codeword table c, the detected error pattern Erp, and the corrected
codeword c_correct; the raw column dumps were not recovered]
Conclusion:-
Viva Questions
1. What is a block code?
6. What are the other types of linear block codes? Also comment on the error correcting capability of linear block codes.
C = [1101001]
Algorithm:-
1. Enter the length of the codewords, i.e. n.
2. Enter the length of the message word, i.e. k.
3. Find the number of parity bits, i.e. q = n - k.
4. Enter the input data sequence m.
5. Enter the coefficients of the generator polynomial g.
6. Find the message polynomial m(x).
7. Find the generator polynomial g(x).
8. Find x^(n-k) m(x).
9. Find p(x) = mod[x^(n-k) m(x) / g(x)].
10. c(x) = x^(n-k) m(x) + p(x).
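Steps 8-9 can be traced with deconv, the same polynomial division used by the program below. A worked sketch, assuming the common (7,4) generator g(x) = 1 + x + x^3 and the illustrative message m = [1 0 0 1] (both are assumptions, not taken from the example above):

g1 = [1 0 1 1];           %g(x) = x^3 + x + 1, highest degree first
mx = [1 0 0 1 0 0 0];     %x^(n-k) m(x): message 1001 shifted by n-k = 3
[q, r] = deconv(mx, g1);  %polynomial division over the reals
p = mod(abs(r), 2)        %p(x): parity bits 1 1 0 (last three entries)
c = mod(mx + p, 2)        %c(x) = x^(n-k) m(x) + p(x) -> 1 0 0 1 1 1 0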
Program:-
clear all;
close all;
clc;
n=input('enter the codeword length= ');
k=input('enter the no. of message bits= ');
M_d=0:2^k-1;
disp('message bits are :')
M_b=de2bi(M_d,'left-msb')
g=input('enter the generator polynomial g0+g1x+g2x^2+... as [g0,g1,g2,...]:: g=')
g1=fliplr(g);
for i=1:2^k
[q,r]= deconv([M_b(i,:),zeros(1,n-k)],g1);
R=rem(abs(r),2);
c(i,:)=xor([M_b(i,:),zeros(1,n-k)],R);
end
disp('Encoded Code Words are::')
disp(c);
R=input('enter the received codeword= ');
pause;
[q2,r2]=deconv(R,g1);
ra=rem(abs(r2),2);
s=sum(ra');
if s==0
disp('error is not present');
else
disp('error is present');
end
disp('error pattern is')
erp1=eye(n);
for i=1:n
[q,r3]=deconv([erp1(i,:)],g1);
S(i,:)=rem(abs(r3),2);
end
for i=1:n
if S(i,:)==ra
p=i;
end
end
disp(erp1(p,:))
disp('correct code word is :')
disp(xor(R,erp1(p,:)))
Result:-
The Output
message bits are :
[16 x 4 matrix of all four-bit messages; column dump not recovered]
g =
[entered generator polynomial; the displayed encoded codewords, error
check, error pattern and corrected codeword were not recovered]
Conclusion:-
Viva Questions
1. What is the difference between LBC and cyclic codes?
But large block lengths cause delays. There is another coding scheme in which
much smaller blocks of uncoded data of length k0 are used. These are called
information frames. Convolution codes have memory which retains the previous
incoming information frames. Thus decoding & encoding in this code is based on
past information i.e. memory is required.
Theory:-
Convolution Encoding:- To encode data, start with k memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders, and n generator polynomials, one for each adder. An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits. Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m-1) and wait for the next input bit. If there are no remaining input bits, the encoder continues output until all registers have returned to the zero state. Figure 3 shows a rate 1/3 (k/n) encoder with constraint length K = 3. The generator polynomials are

G1 = (1, 1, 1),
G2 = (0, 1, 1),
G3 = (1, 0, 1).
Therefore, the output bits are calculated (modulo 2) as follows:

n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1
Steps For Code Tree:-
The tree becomes repetitive after the 3rd branch; beyond the 3rd branch, the two nodes labeled a are identical.
The encoder has memory M = K - 1 = 2 message bits. Hence when the third message bit enters the encoder, the 1st message bit is shifted out of the register.
In the code tree, if there is a 1 in the input sequence, go downward (shown by a dotted line) and note down the code written on that line.
If there is a 0 in the input sequence, go upward (shown by a solid line) and note down the code written on that line.
Thus trace the code tree up to level = number of bits in the input sequence to get the corresponding output sequence.
Steps For Code Trellis:-
If there is a 0 in k0, trace the upper (solid) line and note down the code written above the line.
If there is a 1 in k0, trace the lower (dotted) line and note down the code written above the line.
Thus for

k0 = 1 1 0 1 0 0 0

we get

n0 = 11 01 01 00 10 11 00
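This trace can be cross-checked with the toolbox encoder used in the program below (rate 1/2, K = 3, generators 7 and 5 octal):

trellis = poly2trellis(3, [7 5]);        %K = 3, g1 = 111, g2 = 101
code = convenc([1 1 0 1 0 0 0], trellis) %the k0 sequence above
%returns 1 1 0 1 0 1 0 0 1 0 1 1 0 0, i.e. 11 01 01 00 10 11 00 as above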
Algorithm:-
1. Enter the number of input bits, i.e. k.
2. Enter the number of output bits, i.e. n.
3. Find the code rate r = k/n.
4. Enter the constraint length K.
5. Enter the input data sequence to be coded.
Program:-
%convolutional Encoding
clc;
clear all;
close all;
n=input('enter the length of codeword frame=');
L=input('enter the length of message vector=');
M=input('enter the no of flipflops=');
msg=input('enter the message bit');
g1=input('enter the impulse response of adder 1=');
g2= input('enter the impulse response of adder 2=');
K=M+1
msg=[msg,zeros(1,K-1)] %append K-1 flush zeros
b=num2str(g1)
c=num2str(g2)
cv1=str2num(dec2base(bin2dec(b),8))
cv2=str2num(dec2base(bin2dec(c),8))
cv=[cv1,cv2]
trellis=poly2trellis(K,cv)
code= convenc(msg,trellis)
Result:-
The Output
enter the length of codeword frame=5
enter the length of message vector=5
enter the no of flipflops=2
enter the message bit[1 0 0 1 1]
enter the impulse response of adder 1=[1 1 1]
enter the impulse response of adder 2=[1 0 1]
K =
3
msg =
     1     0     0     1     1     0     0
b =1 1 1
c =1 0 1
cv1 =
     7
cv2 =
     5
cv =
     7     5
trellis =
     numInputSymbols: 2
    numOutputSymbols: 4
           numStates: 4
          nextStates: [4x2 double]
             outputs: [4x2 double]

code =
  Columns 1 through 10
     1     1     1     0     1     1     1     1     0     1
  Columns 11 through 14
     0     1     1     1
Conclusion:-
Viva Questions
1. What are the disadvantages of the code tree and the advantages of the code trellis?
Program:-
clc;
clear all;
n = input('enter no. of bits in output');
k = input('enter no. of shift register');
m = input('enter the input message');
l=k+1;
disp('constraint length l=');
disp(l);
g = [];  %accumulated generator rows
g1 = []; %generator polynomials as octal-compatible numbers
for i=1:n;
gx = input('enter the generator matrix rowwise');
g=[g,gx];
a=0;
for j=1:l;
a=a+((2^(j-1))*gx(j)); %binary row to a decimal value (a single octal digit for l=3)
end;
g1=[g1,a];
end;
%formation of trellis
trellis=poly2trellis(l,g1);
disp(trellis);
%encoding
c=convenc(m,trellis);
disp('convolutionally encoded binary data');
disp(c);
%DECODING
v=vitdec(c,trellis,l,'trunc','hard');
disp('convolutionally decoded binary data')
disp(v);
Result:-
The Output
enter the generator matrix rowwise[1 0 0]
enter the generator matrix rowwise[1 1 1]
enter the generator matrix rowwise[1 0 1]
[the displayed trellis structure and the encoded (30-bit) and decoded bit
sequences were not recovered]
Conclusion:-
Viva Questions
1. Why is Viterbi decoding efficient?
n - k ≤ mt = 18
dmin ≥ 2t + 1 = 7

This is a triple-error-correcting (63, 45) BCH code.
Decoding procedure:-
1. Syndrome computation.
2. Determination of the error pattern.
3. Error correction.
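The whole encode/corrupt/decode cycle can also be sketched with the higher-level toolbox routines bchenc/bchdec and the randerr error injector (assumed here; the program below builds the same flow from lower-level Galois field calls):

m = 6; n = 2^m - 1;               %n = 63
k = 45; t = 3;                    %triple-error-correcting code
msg = gf(randi([0 1], 1, k));     %random message over GF(2)
code = bchenc(msg, n, k);         %systematic BCH encoding
rx = code + gf(randerr(1, n, t)); %inject t random bit errors
[dec, numerr] = bchdec(rx, n, k); %syndrome decoding
isequal(dec, msg)                 %expect true, with numerr == 3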
Program:-
clear all
%BCH code parameters
n=input('enter the code word length=');
k=input('enter the msg length=');
%parameters of GF(p^m)
p=2;
m=log2(n+1);
n=2^m-1;    %codeword length
dmin=n-k+1  %designed distance
%%%%%
nw=input('enter the no. of words to process= ');
msg=gf(randint(nw,k)) %random k-symbol messages represented over the Galois field
%generate the Galois field,GF(2^m)
pol=gfprimfd(m,'min',p);
%compute the nonzero elements
%n=2^m-1;
for i=1:n;
field(i,:)=(gftuple(i-1,pol,2));
end
%(middle of the listing was lost in extraction; reconstructed below as a
%minimal sketch with the toolbox BCH routines, consistent with the
%surviving output)
t2=bchnumerr(n,k)                      %error correcting capability t
code=bchenc(msg,n,k);                  %BCH encoding
noisycode=code+gf(randerr(nw,n,1:t2)); %add up to t random errors per word
[newmsg,err]=bchdec(noisycode,n,k);    %decode
err                                    %number of corrected errors per word
if all(err(:)>=0)
disp('All errors were corrected.')
end
if newmsg==msg
disp('The message was recovered perfectly.')
%elseif newmsg~=msg
%disp('The message received contains error/errors.')
end
Result:-
The Output
enter the code word length=7
n =
     7
enter the msg length=4
k =
     4
All errors were corrected.
The message was recovered perfectly.

enter the code word length=7
n =
     7
enter the msg length=4
k =
     4
t2 =
     1
[echoed message and codeword matrices from the run; column dumps not recovered]
err =
[matrix dump not recovered]
Conclusion:-
Case Study
RS Coding and Decoding
Index

Decoding convolution codes using Viterbi algorithm, 50-57
Determination of various entropies and mutual information, 4-9
Determination of Mutual Information & Entropy, 59
Generating and decoding Cyclic Code, 36-43
Generating and decoding of linear block code (LBC), 27-36
Generation and evaluation of variable length source code, 14-19
Generation and evaluation of Shannon-Fano coding, 15-19
Implementation of Convolution Code, 43-50
Implementation of BCH algorithm, 57-62
RS Coding and Decoding, 62