
Laboratory Journal: Signal Coding Estimation Theory


Laboratory Journal

Signal Coding Estimation Theory


Bachelor of Engineering
Third Year-E & TC.

D. Y. PATIL GROUP
Dr. D. Y. Patil Group of Institutions

Dr. D. Y. PATIL SCHOOL OF ENGINEERING


Knowledge City, Charoli (Bk), Lohegaon, Pune.

CERTIFICATE
This is to certify that Mr./Ms. ...................................................................................... of
Class ....................., Roll No. .................., University Examination Seat No.
................. has satisfactorily completed all practicals in the subject Signal Coding & Estimation Theory as prescribed by the University of Pune.

Sumit Kumar Choubey


Sub. In-charge
Dept. of E&TC

Bahubali Shiragapur
Head of Department
Dept. of E&TC

Dr. S. S. Sonavane
Principal
DYPSOE

Contents

Friday, 10 January 2014
1  Determination of various entropies and mutual information . . . . . . .   1

Friday, 17 January 2014
2  Generation and evaluation of variable length source code . . . . . . . .  11
   2.1  Generation and evaluation of Shannon-Fano coding . . . . . . . . .  12

Friday, 24 January 2014
   2.2  Generation and evaluation of Huffman Coding . . . . . . . . . . . .  15

Friday, 31 January 2014
3  Generating and decoding of linear block code (LBC) . . . . . . . . . . .  22

Friday, 21 February 2014
4  Generating and decoding Cyclic Code . . . . . . . . . . . . . . . . . . .  31

Friday, 28 February 2014
5  Implementation of Convolution Code . . . . . . . . . . . . . . . . . . .  38

Friday, 07 March 2014
6  Decoding convolution codes using Viterbi algorithm . . . . . . . . . . .  45

Friday, 14 March 2014
7  Implementation of BCH algorithm . . . . . . . . . . . . . . . . . . . . .  51

Case Study
8  RS Coding and Decoding . . . . . . . . . . . . . . . . . . . . . . . . . .  56

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  59

Friday, 10 January 2014


1 Determination of various entropies and mutual
information
AIM:- Implementation of an algorithm for determination of various entropies and
mutual information of the given channel. Test various types of channels such as:
1. Noise free channel
2. Error free channel
3. Binary symmetric channel
4. Noisy channel
Pre-lab:-
Random variables:- Random variables are variables that take on values determined
by probability distributions. They may be discrete or continuous, in either their
domain or their range. For example, a stream of ASCII encoded text characters
in a transmitted message is a discrete random variable, with a known probability
distribution for any given natural language. Probability theory rests upon two
rules:
1. Product Rule:- The joint probability of both A and B, p(A,B), is given by
   p(A,B) = p(A/B) p(B), or equivalently p(B/A) p(A).
2. Sum Rule:- If event A depends on a number of other events B, then the total
probability of A is the sum of its joint probabilities with all B:
   p(A) = sum over B of p(A,B) = sum over B of p(A/B) p(B)

Bayes Theorem:-
p(B/A) = p(A/B) p(B) / p(A)


Theory:-
Information & Entropy:- The information content I of a single event or message is
defined as the base-2 logarithm of the inverse of its probability p:

I = log2(1/p) = -log2 p

and its entropy H is the average information per message. Entropy can be regarded
intuitively as the uncertainty or disorder of an event:

H = sum of p log2(1/p) = -sum of p log2 p

Entropy measures the uncertainty inherent in the distribution of a random variable.
Joint entropy and conditional entropy are simple extensions that measure the
uncertainty in the joint distribution of a pair of random variables, and the
uncertainty in the conditional distribution of a pair of random variables.

Joint Entropy:- The joint entropy H(X,Y) of a pair of discrete random variables with
a joint distribution p(x,y) is defined as

H(X,Y) = -sum over x sum over y of p(x,y) log2 p(x,y)

Conditional entropy:- It measures the uncertainty remaining about random variable
X after specifying that random variable Y has taken on a particular value y = bj. It
is defined as

H(X/Y) = -sum over x sum over y of p(x,y) log2 p(x/y)

Chain Rule for Entropy:- The joint entropy, conditional entropy, and marginal entropy
for two ensembles X and Y are related by

H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)
Mutual Information:- The mutual information between two random variables measures
the amount of information that one conveys about the other. Equivalently, it
measures the average reduction in uncertainty about X that results from learning
about Y. It is defined as

I(X;Y) = sum over x sum over y of p(x,y) log2 [ p(x,y) / ( p(x) p(y) ) ]

I(X;Y) = H(X) - H(X/Y)
I(X;Y) = H(Y) - H(Y/X)
I(X;Y) = H(X) + H(Y) - H(X,Y)
Classification of Digital Communication Channels :-
Noise free channel :- A noise free channel has only diagonal elements in the joint
probability matrix.
H(X) = H(Y) = H(X,Y)
Error free channel :- A channel is said to be error free if the capacity of the channel
is greater than the entropy of the source.
H(Y) = H(X,Y)
H(X) is not equal to H(X,Y)
Binary Symmetric Channel:- A BSC is characterized by
No. of inputs = No. of outputs = 2
Its conditional probability matrix is as follows:

[ 1-P    P  ]
[  P    1-P ]

For equiprobable inputs, H(X) = H(Y) = 1 bit.
Noisy channel :- For a noisy channel,
H(X), H(Y) and H(X,Y) are all different.
Algorithm:-
1. Input the number of transmitters and receivers, i.e. m and n.
2. Input the joint probability matrix, i.e. p(x,y).
3. Calculate p(x) from p(x,y), i.e. the sum of all column elements for each row.
4. Calculate p(y) from p(x,y), i.e. the sum of all row elements for each column.
5. Calculate the entropy of the transmitter H(X) using p(x).
6. Calculate the entropy of the receiver H(Y) using p(y).
7. Calculate the joint entropy with the help of the formula
   H(X,Y) = -sum over x sum over y of p(x,y) log2 p(x,y)
8. From the values of H(X), H(Y) and H(X,Y) we can calculate the conditional entropies as
   H(X/Y) = H(X,Y) - H(Y)
   H(Y/X) = H(X,Y) - H(X)
9. Calculate the mutual information as
   I(X;Y) = H(X) - H(X/Y)
   I(X;Y) = H(Y) - H(Y/X)
(A compact vectorized MATLAB sketch of these steps is given immediately below.)
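The following lines are a compact vectorized cross-check of steps 1-9, written here as an illustration only; the example joint probability matrix is the noisy-channel input used in the Result section, and this is not part of the prescribed program.

% Vectorized sketch: entropies and mutual information from a joint probability matrix
Pxy = [0.25 0.25; 0.15 0.15; 0.1 0.1];   % example joint matrix p(x,y) (noisy channel case)
px  = sum(Pxy,2);                        % p(x): row sums
py  = sum(Pxy,1);                        % p(y): column sums
ent = @(p) -sum(p(p>0).*log2(p(p>0)));   % entropy helper; ignores zero probabilities
Hx  = ent(px);  Hy = ent(py);  Hxy = ent(Pxy(:));
Hx_y = Hxy - Hy;                         % H(X/Y)
Hy_x = Hxy - Hx;                         % H(Y/X)
Ixy  = Hx + Hy - Hxy;                    % I(X;Y)
fprintf('H(X)=%.4f H(Y)=%.4f H(X,Y)=%.4f I(X;Y)=%.4f\n', Hx, Hy, Hxy, Ixy)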
Post-lab:- Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPGs),
and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and
electrical engineering. Its impact has been crucial to the success of the Voyager
missions to deep space, the invention of the compact disc, the feasibility of mobile
phones, the development of the Internet, the study of linguistics and of human
perception, the understanding of black holes, and numerous other fields. Important


sub-fields of information theory are source coding, channel coding, algorithmic


complexity theory, algorithmic information theory, information-theoretic security,
and measures of information.
Advancements: Find all entropies when conditional probabilities are given.
Find conditional probabilities.
Program:-

Program 1: Determination of various entropy and mutual information for various channel
%Exp. No. 1: Implementation of algorithm for determination of various entropies
%and mutual information
clc;
clear all;
close all;
k=input('Enter the probability')
[m,n]=size(k);
%calculation of p(x)
px=sum(k');
%calculation of p(y)
py=sum(k);
%calculation of H(x)
Hx=0;
for c=1:m
    hx=-px(c)*(log(px(c))/log(2));
    Hx=Hx+hx;
end
disp('Entropy H(X) ')
disp(Hx)
%calculation of H(y)
Hy=0;
for d=1:n
    hy=-py(d)*(log(py(d))/log(2));
    Hy=Hy+hy;
end
disp('Entropy H(Y) ')
disp(Hy)
%calculation of H(x,y)
Hxy=0;
for r=1:m
    for c=1:n
        if k(r,c)==0
            hxy=0;
        else
            hxy=-k(r,c)*(log(k(r,c))/log(2));
        end
        Hxy=Hxy+hxy;
    end
end
disp('Entropy H(X,Y) ')
disp(Hxy)
%calculation of H(x/y) and H(y/x)
disp('Conditional Entropy H(X/Y) ')
Hx_y=Hxy-Hy
disp('Conditional Entropy H(Y/X) ')
Hy_x=Hxy-Hx
%calculation of I(x;y)
disp('Mutual Information I(X;Y) ')
Ixy=Hx+Hy-Hxy

Program 2: Determination of various entropy and mutual information for binary symmetric Channel
clc;
close all;
clear all;
x= input('enter the input matrix P(X)=');
t= input('enter the transition matrix P(Y/X)');
n= length(x);
y= x*t;
m= length(y);
disp('Py');
disp(y);
Hx=0;
for i=1:n
    Hx=(x(i)*(log2(1/x(i))))+Hx;
end
disp('Hx');
disp(Hx);
Hy=0;
for j=1:m
    Hy=(y(j)*(log2(1/y(j))))+Hy;
end
disp('Hy');
disp(Hy);
d=diag(x);
disp('d=');
disp(d);
for k=1:n
    d(k,k)=x(k);
end
q=d*t;
l=length(q);
disp('Pxy');
disp(q);
Hxy=0;
for i=1:n
    for j=1:m
        Hxy=(q(i,j)*(log2(1/q(i,j))))+Hxy;
    end
end
disp('Hxy');
disp(Hxy);
Hxby=Hxy-Hy;
disp('Hxby');
disp(Hxby);
Hybx=Hxy-Hx;
disp('Hybx');
disp(Hybx);
Ixy=Hx-Hxby;
disp('Ixy');
disp(Ixy);

Result:-

Output for Noisy Channel

Enter the probability[.25 .25;.15 .15;.1 .1]
k =
    0.2500    0.2500
    0.1500    0.1500
    0.1000    0.1000
Entropy H(X)
    1.4855
Entropy H(Y)
    1
Entropy H(X,Y)
    2.4855
Conditional Entropy H(X/Y)
Hx_y =
    1.4855
Conditional Entropy H(Y/X)
Hy_x =
    1.0000
Mutual Information I(X;Y)
Ixy =
    4.4409e-16


Output for Noise Free Channel

Enter the probability[.3 0 0;0 .5 0; 0 0 .2]
k =
    0.3000         0         0
         0    0.5000         0
         0         0    0.2000
Entropy H(X)
    1.4855
Entropy H(Y)
    1.4855
Entropy H(X,Y)
    1.4855
Conditional Entropy H(X/Y)
Hx_y =
    0
Conditional Entropy H(Y/X)
Hy_x =
    0
Mutual Information I(X;Y)
Ixy =
    1.4855

Output for Binary Symmetric Channel

enter the input matrix P(X)=[ .3 .4]
enter the transition matrix P(Y/X)[.4 .6;.6 .4]
Py
    0.3600    0.3400
Hx
    1.0499
Hy
    1.0598
d=
    0.3000         0
         0    0.4000
Pxy
    0.1200    0.1800
    0.2400    0.1600
Hxy
    1.7295
Hxby
    0.6697
Hybx
    0.6797
Ixy
    0.3801

Conclusion:-


Viva Questions
1. What is entropy?

2. What are the units of entropy and mutual information?

3. Why do we calculate entropy?


4. What is the significance of conditional entropy and joint entropy?

5. What are conditions for different channel entropies?


Friday, 17 January 2014


2 Generation and evaluation of variable length source
code
AIM:- Implementation of algorithm for generation and evaluation of variable length
source coding using
   Shannon-Fano Coding
   Huffman Coding
Pre-lab:-
Coding:- Coding is the process of data transformation from one form to another form.
Source coding:- Source coding is coding applied to the messages generated by
the source. Some of the source coding techniques are listed below:
1. Shannon-Fano coding
2. Huffman coding
3. LZW coding
Fixed length coding:- In this, the codes assigned to each message are of the same length.
Variable length coding:- In this, fewer bits are assigned to more probable messages
and vice versa.
Need of source coding:- To increase the coding efficiency and compression.

Figure 1: A Communication System Model


2.1 Generation and evaluation of Shannon-Fano coding

Theory:- In the field of data compression, Shannon-Fano coding, named after Claude
Shannon and Robert Fano, is a technique for constructing a prefix code based on a
set of symbols and their probabilities (estimated or measured). In Shannon-Fano
coding, the symbols are arranged in order from most probable to least probable,
and then divided into two sets whose total probabilities are as close as possible to
being equal. All symbols then have the first digits of their codes assigned; symbols
in the first set receive 0 and symbols in the second set receive 1. As long as any
sets with more than one member remain, the same process is repeated on those sets
to determine successive digits of their codes. When a set has been reduced to one
symbol, its code is complete and will not form the prefix of any other symbol's code.
Algorithm:-
1. Accept the number of symbols and their respective probabilities. Sort the
probabilities in descending order.
2. Partition these probabilities such that the sums of the two groups are equal or
nearly equal.
3. Assign 0 to the upper group symbols and 1 to the lower group symbols.
4. Repeat steps 2 and 3 till all the probabilities are finished.
5. For the formation of the code, read the assigned bits from left to right.
Example:- For 5 symbols with probabilities {0.4, 0.3, 0.2, 0.05 and 0.05}, Shannon-Fano
coding can be applied as follows (a short sketch checking the average codeword
length follows the table):

Symbols   Respective Probability   Coding Steps         Codeword
                                   I   II   III   IV
S1        0.4                      0                    0
S2        0.3                      1   0                10
S3        0.2                      1   1    0           110
S4        0.05                     1   1    1     0     1110
S5        0.05                     1   1    1     1     1111
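The table can be checked numerically in a couple of lines; the probabilities and code lengths below are taken from the example above, and the snippet is an illustrative sketch only, not the prescribed program.

p   = [0.4 0.3 0.2 0.05 0.05];   % probabilities from the example
len = [1 2 3 4 4];               % Shannon-Fano codeword lengths from the table
H = -sum(p.*log2(p));            % source entropy, about 1.95 bits/symbol
L = sum(p.*len);                 % average codeword length, 2.0 bits/symbol
fprintf('H = %.3f bits/symbol, L = %.2f bits/symbol, efficiency = %.3f\n', H, L, H/L)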

Program:-

Program 3: Shannon-fano Coding


clear all;
close all;
clc;
ssS=input('enter the symbols you want to encode');
[a,b]=size(ssS);
ss=input('enter the symbols occurences or probabilities');
%input  = row matrix of occurrences or probabilities, e.g. ss=[1 3 4 5] or
%ss=[0.4 0.3 0.2 0.1]
%outputs = string of codewords, average codeword length
ss=ss./sum(ss);          %if occurrences are inputted, probabilities are gained
ss=sort(ss,'descend');   %the probabilities are sorted in descending order
siling=ceil(log2(1/ss(1)));  %initial length is computed
sf=0;
fano=0;       %initializations for Pk
n=1;Hs=0;     %initializations for entropy H(s)
for iii=1:length(ss)
    Hs=Hs+ ss(iii)*log2(1/ss(iii));   %solving for entropy
end
for o=1:length(ss)-1
    fano=fano+ss(o);
    sf=[sf 0]+[zeros(1,o) fano];   %solving for Pk for every codeword
    siling=[siling 0]+[zeros(1,o) ceil(log2(1/ss(o+1)))];  %solving for length of every codeword
end
for r=1:length(sf)
    esf=sf(r);
    for p=1:siling(r)
        esf=mod(esf,1)*2;
        h(p)=esf-mod(esf,1);   %converting Pk into a binary number
    end
    hh(r)=h(1)*10^(siling(r)-1);   %initialization for making the binary a whole number
    for t=2:siling(r)
        hh(r)=hh(r)+h(t)*10^(siling(r)-t);   %making the binary a whole number
    end
    %e.g. 0.1101 ==> 1101
end
c={'0','1'};
for i=1:length(hh)
    u=1;   %converting the codes into a string
    for t=siling(i):-1:1
        f=floor(hh(i)/10^(t-1));     %1101 ==> 1 (getting the first highest digit of a number)
        hh(i)=mod(hh(i),10^(t-1));   %1101 ==> 101 (eliminating the first highest digit of a number)
        if f==1
            if u==1
                d=c{2};   %conversion part (numeric to string)
            else
                d=[d c{2}];
            end
        else
            if u==1
                d=c{1};
            else
                d=[d c{1}];
            end
        end
        codex{i,:}={d};
        u=u+1;
    end
end
tao=siling(1)*ss(1);       %initialization for codeword length
for u=1:length(ss)-1       %computing for codeword length
    tao=tao+siling(u+1)*ss(u+1);
end
T=tao/n;                   %computing for average codeword length
B=[flipud(rot90(ss)),flipud(rot90(siling)),flipud(rot90(sf))];
disp('Code')
for i=1:b
    disp(codex{i})
end
disp(['Hs = ',num2str(Hs)])
disp(['T = ',num2str(T),'bits/symbol'])
disp([num2str(Hs),' <= ',num2str(T),' <= ',num2str(Hs+1)])

Result:-

The Output
enter the symbols you want to encode1:5
enter the symbols occurences or probabilities[.3 .3 .2 .1 .1]
Code
00
01
100
1100
1110
Hs = 2.171
T = 2.6bits/symbol
2.171 <= 2.6 <= 3.171

Conclusion:-

Friday, 24 January 2014


2.2 Generation and evaluation of Huffman Coding
Theory:- The Shannon-Fano algorithm doesn't always generate an optimal code. In
1952, David A. Huffman gave a different algorithm that always produces an optimal
tree for any given probabilities.
The Shannon-Fano algorithm works and it produces fairly efficient variable-length
encodings; when the two smaller sets produced by a partitioning are in fact of
equal probability, the one bit of information used to distinguish them is used most
efficiently. Unfortunately, Shannon-Fano does not always produce optimal prefix
codes; the set of probabilities {0.35, 0.17, 0.17, 0.16, 0.15} is an example of one
that will be assigned non-optimal codes by Shannon-Fano coding. For this reason,
Shannon-Fano is almost never used; Huffman coding is almost as computationally
simple and produces prefix codes that always achieve the lowest expected codeword
length, under the constraint that each symbol is represented by a code formed of
an integral number of bits.
Algorithm:-
1. List the source symbols in order of decreasing probability.
2. Combine the probabilities of the two symbols having the lowest probabilities
and reorder the resultant probabilities. The same procedure is repeated until
there are two ordered probabilities remaining.
3. Start encoding with the last reduction, which consists of exactly two ordered
probabilities. Assign 0 as the first digit in the codewords for all the source
symbols associated with the first probability; assign 1 to the second probability.
4. Now go back and assign 0 and 1 to the second digit for the two probabilities
that were combined in the previous reduction step, retaining all assignments
made in step 3.
5. Keep regressing this way until the first column is reached.
Example:- For 5 symbols with probabilities {0.4, 0.2, 0.2, 0.1 and 0.1}, Huffman coding
can be applied as follows:


Figure 2: Maximum Variance Huffman Coding

Symbols   Probability   Codeword
S1        0.4           1
S2        0.2           01
S3        0.2           000
S4        0.1           0010
S5        0.1           0011

Post-Lab:- Huffman coding today is often used as a back-end to some other compression methods and multimedia codecs such as JPEG and MP3 have a front-end
model and quantization followed by Huffman coding. These coding techniques can
be used in cryptography.
Advancement:- Compare the efficiencies of Shannon-Fano and Huffman coding (a short sketch using huffmandict is given below).
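A minimal sketch of such a comparison, assuming the Communications Toolbox function huffmandict is available; the probabilities are those of the Shannon-Fano example and the Shannon-Fano lengths are taken from its table, so this is illustrative only.

p = [0.4 0.3 0.2 0.05 0.05];            % probabilities of the Shannon-Fano example
[dict, huffLen] = huffmandict(1:5, p);  % Huffman dictionary and average codeword length
H     = -sum(p.*log2(p));               % source entropy
sfLen = sum(p.*[1 2 3 4 4]);            % Shannon-Fano average length from the earlier table
fprintf('Entropy      : %.3f bits/symbol\n', H)
fprintf('Shannon-Fano : L = %.2f, efficiency = %.3f\n', sfLen, H/sfLen)
fprintf('Huffman      : L = %.2f, efficiency = %.3f\n', huffLen, H/huffLen)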
Program:-

Program 4: Huffman Coding


clear all;
close all;
clc;
symbols = input('enter the symbols you want to encode ')   % Alphabet vector
[r,c]=size(symbols);
prob = input('enter their respective probability ')        % Symbol probability vector
[dict,avglen] = huffmandict(symbols,prob)
%Entropy
hx=0;
for i=1:c
    hx=hx-(prob(1,i)*(log(prob(1,i))/log(2)));
end
disp('Entropy');
disp(hx);
disp('Efficiency');
disp(hx/avglen);
% Pretty print the dictionary.
temp = dict;
for i = 1:length(temp)
    temp{i,2} = num2str(temp{i,2});
end
temp

Result:-

The Output
enter the symbols you want to encode 1:5
symbols =
     1     2     3     4     5
enter their respective probability [.3 .3 .2 .1 .1]
prob =
    0.3000    0.3000    0.2000    0.1000    0.1000
dict =
    [1]    [1x2 double]
    [2]    [1x2 double]
    [3]    [1x2 double]
    [4]    [1x3 double]
    [5]    [1x3 double]
avglen =
    2.2000
Entropy
    2.1710
Efficiency
    0.9868
temp =
    [1]    '0  1'
    [2]    '0  0'
    [3]    '1  0'
    [4]    '1  1  1'
    [5]    '1  1  0'


Conclusion:-


Viva Questions
1. What is Shannon's first theorem?

2. Which coding is better, FLC or VLC? Why?

3. Which coding is better, Shannon-Fano or Huffman coding? Why?


4. Why is source coding required?

5. What is the difference between lossy and lossless compression? (Examples)

6. What are efficiency and redundancy?

7. What is the need of compression?


8. What are the types of source coding?

9. How do we calculate the average codeword length and efficiency? Explain with formula and
example.

Friday, 31 January 2014


3 Generating and decoding of linear block code (LBC)
AIM:-Implementation of algorithm for generating and decoding of linear block code
(LBC).
Pre-lab:- In channel coding, we map the incoming data sequence to a channel input
sequence. This encoding procedure is done by the channel encoder. The encoded
sequence is then transmitted over the noisy channel. The channel output sequence
at the receiver is inverse mapped back into a data sequence. This is called the
decoding procedure and is carried out by the channel decoder. Both the encoder and
decoder are under the designer's control. The encoder introduces redundancy in
a prescribed manner.
The decoder exploits the redundancy in a prescribed manner in order to reconstruct
the source sequence as accurately as possible. Channel coding makes reliable
communication possible over unreliable channels. Channel coding is also called
error control coding. In coding theory, a linear code is an error-correcting code for
which any linear combination of codewords is also a codeword. Linear codes are
traditionally partitioned into block codes and convolution codes, although Turbo
codes can be seen as a hybrid of these two types. Linear codes allow for more
efficient encoding and decoding algorithms than other codes (syndrome decoding).
Linear codes are used in forward error correction and are applied in methods for
transmitting symbols (e.g., bits) on a communications channel so that, if errors occur
in the communication, some errors can be corrected or detected by the recipient of
a message block. The codewords in a linear block code are blocks of symbols which
are encoded using more symbols than the original value to be sent. A linear code of
length n transmits blocks containing n symbols. For example, the [7,4,3] Hamming
code is a linear binary code which represents 4-bit messages using 7-bit codewords.
Two distinct codewords differ in at least three bits. As a consequence, up to two
errors per codeword can be detected and a single error can be corrected. This code
contains 2^4 = 16 codewords.
Theory:- A linear block code is a block code in which every codeword can be represented
as the sum of two other codewords. The coding and decoding process utilizes this
property, and we get a very efficient coding and decoding process.
Encoding and decoding procedure of the linear block code:- Consider the (n,k) systematic
linear block code (LBC), where n is the length of the codeword and k is the length
of the message. The simplicity of the coding and decoding process is demonstrated


using a simple example.


Encoding :- Let us consider the (6,3) LBC with the following parity matrix

P = [ 1 0 1
      0 1 1
      1 1 0 ]

As we know, the generator matrix is

G = [ I_k : P ]

so

G = [ 1 0 0 1 0 1
      0 1 0 0 1 1
      0 0 1 1 1 0 ]

Now from the above generator matrix and the message vectors we can calculate the
codewords. No. of message vectors = 2^k = 2^3 = 8.
We can calculate the codeword vectors using the formula
C = mG
Message (m)    Codeword (C)
000            000000
001            001110
010            010011
011            011101
100            100101
101            101011
110            110110
111            111000

Decoding:- The parity check matrix H can be represented in terms of the parity matrix
P and an identity matrix using the following formula:

H = [ P^T : I_(n-k) ]

so here, in this particular problem,

H = [ 1 0 1 1 0 0
      0 1 1 0 1 0
      1 1 0 0 0 1 ]

Now, considering one-bit error patterns, the syndrome table can be calculated using
S = e H^T as


Error(e)
100000
010000
001000
000100
000010
000001

Syndrome(S)
101
011
110
100
010
001

Let the received codeword be R = [101110]. Then S can be calculated as

S = [101110] H^T = [101110] [ 1 0 1
                              0 1 1
                              1 1 0
                              1 0 0
                              0 1 0
                              0 0 1 ] = [101]

This matches the first row of the syndrome table. It means that an error has occurred in
the first bit. Now take the all-zero vector of size n = 6 and make its first bit 1.
Let this vector be the error vector e, which is [100000]. We can correct the error by
modulo-2 addition of this error pattern and the received codeword. Therefore, the
corrected codeword is
Cc = [001110]

Algorithm:-
1. Enter the length of codewords, i.e. n.
2. Enter the length of the message word, i.e. k.
3. Find the number of parity bits, i.e. q = n - k.
4. Enter the parity matrix, say p.
5. Take the identity matrix ik of size (k x k).
6. Find the generator matrix G = [I : P].
7. Enter the message which is to be coded, say m.
8. Find the codeword for the entered message using C = m*G.
9. Find the transpose of the parity matrix P and generate the identity matrix iq of size (q x q).
10. Find the parity check matrix H = [P' : iq].
11. Find the transpose of the parity check matrix, H'.
12. Enter the received codeword r.
13. Find the syndrome vector s = r*H'.
14. Find the error patterns E = eye(n,n).
15. Find the row of the syndrome table matching the syndrome vector and take the corresponding
error vector ei.
16. Correct the codeword using y = r xor ei.
(A compact sketch of these steps for the (6,3) example above is given after the Post-lab note below.)
Post-lab:- Channel encoding adds extra data bits to make the transmission of data
more robust to disturbances present on the transmission channel. A typical music
CD uses the Reed-Solomon code to correct for scratches and dust. In this application
the transmission channel is the CD itself. Cell phones also use coding techniques to
correct for the fading and noise of high frequency radio transmission. Data modems,
telephone transmissions, and NASA all employ channel coding techniques to get
the bits through, for example the turbo code.
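The (6,3) example worked out in the Theory section can be reproduced in a few lines. The sketch below is only an illustration of steps 1-16 for that particular parity matrix (it assumes de2bi from the Communications Toolbox); the full interactive program follows.

P = [1 0 1; 0 1 1; 1 1 0];          % parity matrix of the (6,3) example
k = 3;  n = 6;
G = [eye(k) P];                     % generator matrix G = [I : P]
H = [P' eye(n-k)];                  % parity check matrix H = [P' : I]
m = de2bi(0:2^k-1, k, 'left-msb');  % all 2^3 = 8 message vectors
C = mod(m*G, 2)                     % the 8 codewords of the table above
r = [1 0 1 1 1 0];                  % received vector R = [101110]
s = mod(r*H', 2);                   % syndrome, here [1 0 1]
[~, pos] = ismember(s, mod(eye(n)*H', 2), 'rows');  % matching single-bit error position
e = zeros(1,n);  e(pos) = 1;        % error pattern [1 0 0 0 0 0]
c_corrected = xor(r, e)             % corrected codeword [0 0 1 1 1 0]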
Program:-

Program 5: Linear Block Code


%LINEAR BLOCK CODING
clear all;
close all;
clc;
n=input('enter the block length n=');
k=input('enter the msg bits k=');
%parity matrix
p=input('enter the parity check matrix p=');
%generating a generator matrix
G=[eye(k),p]
%calculate the parity check matrix
H=[p',eye(n-k)]
%no. of possible combinations of msgs
a=[0:2^k-1];
%converting decimal to binary; 'left-msb' places the MSB on the left
a1=de2bi(a,'left-msb');
b=a1*G;
%performing mod-2 operation on b
disp('calculated Binary Code Word ')
c=mod(b,2)
%calculating minimum hamming weight
wt=sum(c,2);
wt(1)=[];
disp('minimum hamming weight')
dmin=min(wt)
%error detection and correction capability
disp('Error Detecting Capability ')
D=dmin-1
disp('Error Correcting Capability');
D1=floor(D/2)
%eye(n) returns an identity matrix of order nXn for the error patterns
e=eye(n);
%calculating the syndrome table
s1=e*H';
%checking validity of the received vector
r=input('enter the received vector (in binary) r=');
A=length(r);      %length of received vector
B=length(H');     %length of parity check matrix
if A==B
    s=r*H';
    S1=mod(s,2)
else
    disp('entered received vector is not correct...')
end
Erp=zeros(1,n-k);
for i=1:n
    if S1(1,:)==s1(i,:)
        Erp=e(i,:)
    end
end
%corrected codeword
disp('Corrected Codeword')
c_correct=xor(r,Erp)

Result:-

The Output
enter the block length n=7
enter the msg bits k=4
enter the parity check matrix p=[1 1 0;1 1 1;0 1 1;1 1 1]
G =
     1     0     0     0     1     1     0
     0     1     0     0     1     1     1
     0     0     1     0     0     1     1
     0     0     0     1     1     1     1
H =
     1     1     0     1     1     0     0
     1     1     1     1     0     1     0
     0     1     1     1     0     0     1
calculated Binary Code Word
c =
     0     0     0     0     0     0     0
     0     0     0     1     1     1     1
     0     0     1     0     0     1     1
     0     0     1     1     1     0     0
     0     1     0     0     1     1     1
     0     1     0     1     0     0     0
     0     1     1     0     1     0     0
     0     1     1     1     0     1     1
     1     0     0     0     1     1     0
     1     0     0     1     0     0     1
     1     0     1     0     1     0     1
     1     0     1     1     0     1     0
     1     1     0     0     0     0     1
     1     1     0     1     1     1     0
     1     1     1     0     0     1     0
     1     1     1     1     1     0     1
minimum hamming weight
dmin =
     2
Error Detecting Capability
D =
     1
Error Correcting Capability
D1 =
     0
enter the received vector (in binary) r=[1 1 1 1 1 0 0]
S1 =
     0     0     1
Erp =
     0     0     0     0     0     0     1
Corrected Codeword
c_correct =
     1     1     1     1     1     0     1

Conclusion:-


Viva Questions
1. What is Block code?

2. Explain linearity property of LBC?

3. What is the difference between systematic and non-systematic Block codes?


4. What is significance of syndrome vector?

5. What is the need of channel coding?

6. What are the other types of linear block codes? Also comment on the error correcting
capability of a linear block code.

Friday, 21 February 2014


4 Generating and decoding Cyclic Code
AIM:- Implementation of algorithm for generation and decoding of cyclic codes.
Pre-lab:- Cyclic codes form a subclass of linear block codes. An advantage of cyclic
codes over most other types of codes is that they are easy to encode. Furthermore,
they possess a well defined mathematical structure which has led to the development
of very efficient decoding schemes for them. A code C is cyclic if and only if the
following are true:
1. C is a linear code, and
2. C is cyclic-shift invariant.
Theory:- Encoding:- Consider the (7,4) cyclic code, i.e. n = 7 and k = 4. The generator
polynomial is
G(X) = X^3 + X + 1
Let the message be m = [1 1 0 1]. So the message polynomial is
M(X) = X^3 + X^2 + 1
The code polynomial is a multiple of the generator polynomial,
C(X) = Q(X) G(X)
where Q(X) is the quotient obtained on dividing M(X) X^(n-k) by G(X). Equivalently,
for systematic encoding,
C(X) = M(X) X^(n-k) + rem[ M(X) X^(n-k) / G(X) ]
so
C(X) = X^6 + X^5 + X^3 + 1
and
C = [1101001]
Decoding:- Let the received codeword be 1111001. Hence the corresponding polynomial is
R(X) = X^6 + X^5 + X^4 + X^3 + 1
Now calculate the syndrome using
S(X) = rem[ R(X) / G(X) ]


So the syndrome is S(X) = X^2 + X. Since the syndrome depends upon the error pattern and
the error pattern alone, assuming a single bit error the error pattern E is [0010000]. So,
under the above assumption, the corrected codeword is
C = R xor E
C = [1101001]
Algorithm:-
1. Enter the length of codewords, i.e. n.
2. Enter the length of the message word, i.e. k.
3. Find the number of parity bits, i.e. q = n - k.
4. Enter the input data sequence m.
5. Enter the coefficients of the generator polynomial g.
6. Find the message polynomial m(x).
7. Find the generator polynomial g(x).
8. Find x^(n-k) m(x).
9. Find p(x) = mod[ x^(n-k) m(x) / g(x) ].
10. c(x) = x^(n-k) m(x) + p(x).
(A short sketch of these steps for the worked (7,4) example is given below, before the full program.)
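Before the full program, the worked (7,4) example can be checked with the polynomial-division helper gfdeconv (Communications Toolbox); note that these functions take coefficient vectors in ascending powers of X, so the sketch below is illustrative only.

n = 7;  k = 4;
g  = [1 1 0 1];                  % g(X) = 1 + X + X^3, ascending powers
m  = [1 0 1 1];                  % M(X) = 1 + X^2 + X^3, i.e. the message 1101 in ascending powers
mx = [zeros(1,n-k) m];           % X^(n-k) * M(X)
[~, r] = gfdeconv(mx, g);        % remainder of X^(n-k) M(X) divided by g(X) over GF(2)
r = [r zeros(1, n-length(r))];   % pad the remainder to length n
c = mod(mx + r, 2);              % C(X) = X^(n-k) M(X) + remainder
codeword = fliplr(c)             % coefficients from X^6 down to X^0: 1 1 0 1 0 0 1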
Program:-

Program 6: Cyclic Code


clear all;
close all;
clc;
n=input('enter the codeword length= ');
k=input('enter the no. of message bits= ');
M_d=0:2^k-1;
disp('message bits are :')
M_b=de2bi(M_d,'left-msb')
g=input('enter the generator polynomial g0+g1X+g2X^2+... as [g0,g1,g2,...]:: g=')
g1=fliplr(g);
for i=1:2^k
    [q,r]= deconv([M_b(i,:),zeros(1,n-k)],g1);
    R=rem(abs(r),2);
    c(i,:)=xor([M_b(i,:),zeros(1,n-k)],R);
end
disp('Encoded Code Words are::')
disp(c);
R=input('enter the received codeword= ');
pause;
[q2,r2]=deconv(R,g1);
ra=rem(abs(r2),2);
s=sum(ra');
if s==0
    disp('error is not present');
else
    disp('error is present');
end
disp('error pattern is')
erp1=eye(n);
for i=1:n
    [q,r3]=deconv([erp1(i,:)],g1);
    S(i,:)=rem(abs(r3),2);
end
for i=1:n
    if S(i,:)==ra
        p=i;
    end
end
disp(erp1(p,:))
disp('correct code word is :')
disp(xor(R,erp1(p,:)))

Result:-

The Output
enter the codeword length= 7
enter the no. of message bits= 4
message bits are :
M_b =
     0     0     0     0
     0     0     0     1
     0     0     1     0
     0     0     1     1
     0     1     0     0
     0     1     0     1
     0     1     1     0
     0     1     1     1
     1     0     0     0
     1     0     0     1
     1     0     1     0
     1     0     1     1
     1     1     0     0
     1     1     0     1
     1     1     1     0
     1     1     1     1
enter the generator polynomial g0+g1X+g2X^2+... as [g0,g1,g2,...]:: g=[1 1 0 1]
g =
     1     1     0     1
Encoded Code Words are::
     0     0     0     0     0     0     0
     0     0     0     1     0     1     1
     0     0     1     0     1     1     0
     0     0     1     1     1     0     1
     0     1     0     0     1     1     1
     0     1     0     1     1     0     0
     0     1     1     0     0     0     1
     0     1     1     1     0     1     0
     1     0     0     0     1     0     1
     1     0     0     1     1     1     0
     1     0     1     0     0     1     1
     1     0     1     1     0     0     0
     1     1     0     0     0     1     0
     1     1     0     1     0     0     1
     1     1     1     0     1     0     0
     1     1     1     1     1     1     1
enter the received codeword= [1 1 0 0 0 1 1]
error is present
error pattern is
     0     0     0     0     0     0     1
correct code word is :
     1     1     0     0     0     1     0

Conclusion:-


Viva Questions
1. What is the difference between LBC and cyclic codes?

2. Explain cyclic property of Cyclic codes?

3. What is the difference between systematic and non-systematic cyclic codes?


4. How to find the generator polynomial?

5. Comment on error correcting capability of cyclic code.

6. When is a linear code called a cyclic code?


7. What are the properties of the syndrome table?

Friday, 28 February 2014


5 Implementation of Convolution Code
AIM:- Implementation of algorithms for generating convolution code using
1. Code Tree
2. Code Trellis
Pre-lab:- In telecommunication, a convolution code is a type of error-correcting
code in which each m-bit information symbol (each m-bit string) to be encoded is
transformed into an n-bit symbol, where m/n is the code rate (n >= m) and the
transformation is a function of the last k information symbols, where k is the constraint
length of the code.
Requirement of convolution codes:- In block coding, a block of k information symbols is
encoded into a block of n coded symbols. There is always a one-to-one correspondence
between the uncoded block of symbols and the coded block of symbols. It is
used for high data rate applications. A large block length is important because:
Many of the good codes that have large distance properties are of large block lengths.
A larger block length ensures that the encoding overhead is small.

Figure 3: A Convolution Encoder


But large block lengths cause delays. There is another coding scheme in which
much smaller blocks of uncoded data of length k0 are used. These are called
information frames. Convolution codes have memory which retains the previous
incoming information frames. Thus decoding and encoding in this code are based on
past information, i.e. memory is required.
Theory:-
Convolution Encoding:- To encode data, start with k memory registers, each holding
1 input bit. Unless otherwise specified, all memory registers start with a value of
0. The encoder has n modulo-2 adders, and n generator polynomials - one for each
adder (see Figure 3 above). An input bit m1 is fed into the leftmost register. Using
the generator polynomials and the existing values in the remaining registers, the
encoder outputs n bits. Now bit-shift all register values to the right (m1 moves to
m0, m0 moves to m-1) and wait for the next input bit. If there are no remaining
input bits, the encoder continues output until all registers have returned to the zero
state. Figure 3 shows a rate 1/3 (m/n) encoder with constraint length (k) of 3.
The generator polynomials are
G1 = (1, 1, 1),
G2 = (0, 1, 1),
G3 = (1, 0, 1).
Therefore, the output bits are calculated (modulo 2) as follows:
n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1
Steps For Code Tree :-
The tree becomes repetitive after the 3rd branch. Beyond the 3rd branch, the two nodes
labeled "a" are identical.
The encoder has memory M = K - 1 = 2 message bits. Hence when the third message
bit enters the encoder, the 1st message bit is shifted out of the register.
In the code tree, if there is a 1 in the input sequence then go
downward (shown by the dotted line) and note down the code written on that line.
If there is a 0 in the input sequence then go upward (shown by the solid line) and note down the
code written on that line.
Thus trace the code tree up to a depth equal to the number of bits in the input sequence to get the
corresponding output sequence.
Steps For Code Trellis:-


Figure 4: Example of Code-Tree

If there is a 0 in k0, then trace the upper i.e. solid line and note down the code written
above the line.
If there is a 1 in k0, then trace the lower i.e. dotted line and note down the code written
above the line.
Thus for
k0 = 1 1 0 1 0 0 0
we get
n0 = 11 01 01 00 10 11 00
Algorithm:-
1. Enter the number of input bits, i.e. k.
2. Enter the number of output bits, i.e. n.
3. Find the code rate r = k/n.
4. Enter the constraint length K.
5. Enter the input data sequence to be coded.
6. Enter the combinations (connections) of the shift registers.
7. Find the output states of the shift registers.
8. Find the corresponding outputs from the states and combinations by ex-oring.
9. Plot the code tree according to the given input sequence.
Post-lab:- Convolution codes are often used to improve the performance of digital
radio, mobile phones and satellite links.
Advancements:- Design a convolution encoder for constraint length K=3, code rate 1/2 and
no. of modulo-2 adders v=2 (a short sketch of exactly this encoder is given below).
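A minimal sketch of that advancement, assuming the Communications Toolbox functions poly2trellis and convenc: a K = 3, rate-1/2 encoder with octal generators 7 and 5 (i.e. 111 and 101, the same pair that appears in the Result below). It is illustrative only, not the prescribed program.

K = 3;                              % constraint length
trellis = poly2trellis(K, [7 5]);   % rate-1/2 encoder, generators 111 (octal 7) and 101 (octal 5)
msg  = [1 0 0 1 1];                 % message bits
msg  = [msg zeros(1, K-1)];         % append K-1 zeros to flush the shift register
code = convenc(msg, trellis);       % two output bits per input bit
disp(reshape(code, 2, []).')        % one row of output bits per input bit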

Program:-

Program 7: Convolution Code


%convolutional Encoding
clc;
clear all;
close all;
n=input('enter the length of codeword frame=');
L=input('enter the length of message vector=');
M=input('enter the no of flipflops=');
msg=input('enter the message bit');
g1=input('enter the impulse response of adder 1=');
g2=input('enter the impulse response of adder 2=');
K=M+1
msg=[msg,zeros(1,K-1)]
b=num2str(g1)
c=num2str(g2)
cv1=str2num(dec2base(bin2dec(b),8))
cv2=str2num(dec2base(bin2dec(c),8))
cv=[cv1,cv2]
trellis=poly2trellis(K,cv)
code=convenc(msg,trellis)

Result:-

The Output
enter the length of codeword frame=5
enter the length of message vector=5
enter the no of flipflops=2
enter the message bit[1 0 0 1 1]
enter the impulse response of adder 1=[1 1 1]
enter the impulse response of adder 2=[1 0 1]
K =
     3
msg =
     1     0     0     1     1     0     0
b =
1  1  1
c =
1  0  1
cv1 =
     7
cv2 =
     5
cv =
     7     5
trellis =
     numInputSymbols: 2
    numOutputSymbols: 4
           numStates: 4
          nextStates: [4x2 double]
             outputs: [4x2 double]
code =
  Columns 1 through 10
     1     1     1     0     1     1     1     1     0     1
  Columns 11 through 14
     0     1     1     1

Conclusion:-


Viva Questions
1. What are the disadvantages of code tree and advantages of code trellis?

2. Define constraint length & code rate.

3. What are the properties of convolutional codes?


4. Differentiate between block code and convolutional codes.

5. What are the different representations of convolutional codes?

Friday, 07 March 2014


6 Decoding convolution codes using Viterbi algorithm
AIM:- Implementation of algorithms for decoding convolution codes using Viterbi
algorithm.
Pre-lab:- Convolution decoding can be performed using the Viterbi algorithm. A
Viterbi decoder logically explores in parallel every possible user data sequence.
It encodes and compares each one against the received sequence and picks the
closest match: it is a maximum likelihood decoder. To reduce the complexity (the
number of possible data sequences doubles with each additional data bit), the decoder
recognizes at each point that certain sequences cannot belong to the maximum
likelihood path and it discards them. The encoder memory is limited to K bits; a
Viterbi decoder in steady-state operation keeps only 2^(K-1) paths. Its complexity
increases exponentially with the constraint length K.
Theory:- Convolution Decoding using the Viterbi Algorithm:-
1. Initialization:- Label the leftmost state on the trellis (i.e. the all-zero state at
level 0) as 0, since there is no discrepancy at this point in the computation.
2. Computation step:-
a) Let j = 0, 1, 2, ... and suppose that at the previous step j we have done two things:
   i. All survivor paths are identified.
   ii. The survivor path and its metric for each state of the trellis are stored.
b) Then at level j+1, compute the metric for all the paths entering each state
of the trellis by adding the metric of the incoming branches to the metric
of the connecting survivor path from level j.
c) Hence for each state, identify the path with the lowest metric as the
survivor of step j+1, thereby updating the computation.
3. Final Step:-
a) Continue the computation until the algorithm completes its forward
search through the trellis and reaches the termination node, at which time
it makes a decision on the maximum likelihood path.
b) Like a block decoder, the sequence of symbols associated with that path
is released to the destination as the decoded version of the received sequence.
c) It is therefore more correct to refer to the Viterbi algorithm as a maximum
likelihood sequence estimator.


Figure 5: decoding of 11 01 11 11 00 01 01 11 using Viterbi's algorithm

Algorithm:-
1. Enter the constraint length, i.e. K.
2. Enter the generator matrix G.
3. Enter the input data sequence.
4. Encode the entered sequence using convenc().
5. Insert an error in a single bit.
6. Decode this sequence using vitdec().
7. Display the decoded sequence.
(A short self-contained sketch of these steps is given after the Post-lab note below.)
Post-lab:- A Viterbi decoder uses the Viterbi algorithm for decoding a bit stream that
has been encoded using a convolutional code. There are other algorithms
for decoding a convolutionally encoded stream (for example, the Fano algorithm).
The Viterbi algorithm is the most resource-consuming, but it does maximum
likelihood decoding. It is most often used for decoding convolutional codes with
constraint lengths k<=10, but values up to k=15 are used in practice.
Advancements:-
Check for more than single bit errors.
Check the effect of the traceback length.
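The steps above can be exercised end to end with a few lines. This is a hedged sketch only: the rate-1/2, K = 3 encoder with octal generators [7 5], the error position and the traceback length of 5 are all illustrative choices, not part of the prescribed program.

K = 3;
trellis = poly2trellis(K, [7 5]);   % rate-1/2, constraint length 3 convolutional code
msg  = [1 1 0 1 1];                 % input data sequence
code = convenc(msg, trellis);       % convolutional encoding
rx   = code;  rx(4) = ~rx(4);       % insert a single bit error (position 4)
tb   = 5;                           % traceback length (illustrative choice)
dec  = vitdec(rx, trellis, tb, 'trunc', 'hard');   % hard-decision Viterbi decoding
disp([msg; dec])                    % the decoded row is expected to match the message row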
Program:-

Program 8: Decoding Convolution Code using viterbi


clc;
clear all;
n = input('enter no. of bits in output');
k = input('enter no. of shift register');
m = input('enter the input message');
l=k+1;
disp('constraint length l=');
disp(l);
g = [zeros(1,0)];
g1 = [zeros(1,n)];
for i=1:n;
    gx = input('enter the generator matrix rowwise');
    g=[g,gx];
    a=0;
    for j=1:l;
        k=j-1;
        a=a+((2^k)*gx(i));
    end;
    g1=[g1,a];
end;
%formation of trellis
trellis=poly2trellis(l,g1);
disp(trellis);
%encoding
c=convenc(m,trellis);
disp('convolutionally encoded binary data');
disp(c);
%DECODING
v=vitdec(c,trellis,l,'trunc','hard');
disp('convolutionally decoded binary data')
disp(v);

Result:-

The Output
enter no. of bits in output3
enter no. of shift register2
enter the input message[1 1 0 1 1]
constraint length l=
     3
enter the generator matrix rowwise[1 0 0]
enter the generator matrix rowwise[1 1 1]
enter the generator matrix rowwise[1 0 1]
     numInputSymbols: 2
    numOutputSymbols: 64
           numStates: 4
          nextStates: [4x2 double]
             outputs: [4x2 double]
convolutionally encoded binary data
  Columns 1 through 11
     0
  Columns 12 through 22
     0
  Columns 23 through 30
     0
convolutionally decoded binary data
     1     1     0     1     1

Conclusion:-


Viva Questions
1. Why is Viterbi decoding efficient?

2. What is the difficulty in its application?

3. Explain maximum likelihood decoding of convolution codes?


4. What is the significance of the traceback length?

5. Explain the vitdec() and convenc() commands.

Friday, 14 March 2014


7 Implementation of BCH algorithm
AIM:- Implementation of algorithms for decoding of BCH code.
Pre-lab:- In coding theory the BCH codes form a class of cyclic error-correcting
codes that are constructed using finite fields. BCH codes were invented in 1959 by
Hocquenghem, and independently in 1960 by Bose and Ray-Chaudhuri. The abbreviation
BCH comprises the initials of these inventors' names. One of the key features
of BCH codes is that during code design, there is precise control over the number
of symbol errors correctable by the code. In particular, it is possible to design binary
BCH codes that can correct multiple bit errors. Another advantage of BCH codes is
the ease with which they can be decoded, namely, via an algebraic method known
as syndrome decoding. This simplifies the design of the decoder for these codes,
using small low-power electronic hardware.
BCH codes are used in applications like satellite communications, compact disc
players, DVDs, disk drives, solid-state drives and two-dimensional bar codes.
Theory:- BCH codes are cyclic codes. Only the codes, not the decoding algorithms,
were discovered by these early writers. The original applications of BCH codes were
restricted to binary codes of length 2^m - 1 for some integer m. These were extended
later by Gorenstein and Zierler (1961) to the nonbinary codes with symbols from the
Galois field GF(q).
Primitive BCH Codes:- For any integers m >= 3 and t < 2^(m-1) there exists a primitive
BCH code with the following parameters:
n = 2^m - 1
n - k <= mt
dmin >= 2t + 1
This code can correct t or fewer random errors over a span of 2^m - 1 bit positions.
The code is a t-error-correcting BCH code.
For example, for m = 6, t = 3:
n = 2^m - 1 = 63
n - k = mt = 18
dmin >= 2t + 1 = 7
This is a triple-error-correcting (63, 45) BCH code.
Decoding procedure:-
1. Syndrome computation.
2. Determination of the error pattern.
3. Error correction.
(A short sketch checking the (63, 45) parameters with bchgenpoly is given below, before the program.)
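As a quick numerical check of the (63, 45) example above, the Communications Toolbox function bchgenpoly returns the generator polynomial and the error-correcting capability t for a given (n, k); the round trip below with randomly placed errors is an illustrative sketch only, not the prescribed program.

n = 63;  k = 45;                    % the primitive BCH code from the example
[genpoly, t] = bchgenpoly(n, k);    % generator polynomial over GF(2) and capability t (= 3 here)
fprintf('(%d,%d) BCH: %d parity bits, corrects t = %d errors\n', n, k, n-k, t)
nw  = 5;                            % number of message words to test
msg = gf(randi([0 1], nw, k));      % random binary messages as a GF(2) array
cw  = bchenc(msg, n, k);            % encode
rx  = cw + randerr(nw, n, 1:t);     % corrupt each codeword in up to t positions
dec = bchdec(rx, n, k);             % decode
isequal(dec, msg)                   % expected to return 1 (all messages recovered)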
Program:-

Program 9: Implementation of BCH Code


clear all
%BCH code parameters
n=input('enter the code word length=');
k=input('enter the msg length=');
%parameters of GF(p^m)
p=2;
m=log2(n+1);
n=2^m-1;    %codeword length
dmin=n-k+1  %designed distance
nw=input('enter the no. of words to process= ');
msg=gf(randint(nw,k))  %random k-symbol msg; data represented using a Galois field
%generate the Galois field GF(2^m)
pol=gfprimfd(m,'min',p);
%compute the nonzero elements
for i=1:n;
    field(i,:)=(gftuple(i-1,pol,2));
end
%print the table of GF(2^m)
fprintf(' ');for j=1:m,fprintf('0');end,fprintf('\n');
for i=1:n
    fprintf('%4d',i-1);
    for j=1:m
        fprintf('%d',field(i,j));
    end
    fprintf('\n');
end
fprintf('\n\n');
%primitive element in GF(2^m)
pold=bin2dec(num2str(pol))
alpha=gf(2,m,pold)
%find t, the error-correcting capability
[genpoly,t]=bchgenpoly(n,k)
%define t2, the no. of errors to add in this example
t2=t
%Encode the msg using an [n,k] BCH encoder
code=bchenc(msg,n,k)
%corrupt up to t2 bits in each codeword
noisycode=code+randerr(nw,n,1:t2)
%decode the noisy code using an [n,k] BCH decoder
[newmsg,err,ccode]=bchdec(noisycode,n,k)
if ccode==code
    disp('All errors were corrected.')
end
if newmsg==msg
    disp('The msg was recovered perfectly.')
end

Result:-

The Output
enter the code word length=7
n =
     7
enter the msg length=4
k =
     4
All errors were corrected.
The msg was recovered perfectly.
enter the code word length=7
n =
     7
enter the msg length=4
k =
     4
t2 =
     1


code = GF(2) array.


Array elements =
0
0
0
0
0
1
0
0
0
1

1
0
0
1
0
1
1
1
1
0

0
1
0
1
1
1
0
0
0
0

0
1
0
1
0
0
0
0
1
0

1
1
0
0
1
1
1
1
1
1

1
0
0
1
1
0
1
1
0
0

1
1
0
0
0
0
1
1
0
1

0
0
0
1
0
0
1
0
1
0

1
1
0
0
1
1
1
1
1
1

1
0
0
1
1
0
1
1
0
0

1
1
0
0
0
0
1
0
0
1

noisycode = GF(2) array.


Array elements =
1
0
0
0
0
1
0
0
0
0

1
0
1
1
1
1
1
1
0
0

0
1
0
0
1
0
0
0
0
0

newmsg = GF(2) array.


Array elements =
0
0
0
0
0
1
0
0
0

1
0
0
1
0
1
1
1
1

0
1
0
1
1
1
0
0
0

0
1
0
1
0
0
0
0
1


err =
1

ccode = GF(2) array.


Array elements =
0
0
0
0
0
1
0
0
0
1

1
0
0
1
0
1
1
1
1
0

0
1
0
1
1
1
0
0
0
0

0
1
0
1
0
0
0
0
1
0

1
1
0
0
1
1
1
1
1
1

1
0
0
1
1
0
1
1
0
0

1
1
0
0
0
0
1
1
0
1

All errors were corrected.


The message was recovered perfectly.

Conclusion:-

Case Study
8 RS Coding and Decoding
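The case study itself is left to be worked out. Purely as a starting point, the sketch below shows one possible round trip with the Communications Toolbox functions rsenc and rsdec, using an illustrative (7,3) Reed-Solomon code over GF(2^3) with two injected symbol errors per codeword; it is an assumed example, not a prescribed solution.

m = 3;  n = 2^m - 1;  k = 3;         % (7,3) RS code over GF(2^3); corrects t = (n-k)/2 = 2 symbol errors
msg = gf([2 7 3; 4 0 6; 5 1 1], m);  % three 3-symbol messages as a GF(2^3) array
cw  = rsenc(msg, n, k);              % encode to 7-symbol codewords
rx  = cw;
rx(:,1:2) = rx(:,1:2) + gf(1, m);    % corrupt the first two symbols of every codeword
[dec, cnumerr] = rsdec(rx, n, k);    % decode; cnumerr gives corrected symbol errors per word
isequal(dec, msg)                    % expected to return 1 (all messages recovered)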



58 | 60

Index
Decoding convolution codes using Viterbi algorithm, 50-57
Determination of various entropies and mutual information, 4-9
Determination of Mutual Information & Entropy, 5-9
Generating and decoding Cyclic Code, 36-43
Generating and decoding of linear block code (LBC), 27-36
Generation and evaluation of variable length source code, 14-19
Generation and evaluation of Shannon-Fano coding, 15-19
Implementation of Convolution Code, 43-50
Implementation of BCH algorithm, 57-62
RS Coding and Decoding, 62
