Comparative Study of Turbo Codes in AWGN Channel Using MAP and SOVA Decoding
Fig. 2 Punctured rate 1/2 turbo encoder.
row of the matrix. Block interleaving then consists of reading the matrix elements column by column, starting from the first one. The resulting sequence is written to an array of length L as shown in Fig. 3.
Turbo decoding
A turbo code is far too complex to decode with a single decoder. Instead, each convolutional code in the turbo code is decoded separately, with soft information being passed from one decoder to the next. The decoding scheme of a turbo code is shown in Fig. 4. The decoder consists of two serially interconnected soft-in soft-out (SISO) decoders, which can be SOVA or MAP
decoders. x_k and y_k are the channel outputs, with x_k corresponding to the systematic encoder output and y_k to a multiplexed stream of the two punctured encoder outputs. Hence, a demultiplexer is necessary at the receiver. z_k is called the a priori information and is equal to zero for the first iteration.
As described in Ref. 1, the decoder soft output, L(d_k), also called the log-likelihood ratio (LLR), can be separated into three components:

L(d_k) = L_sys + L_apr + L_ext    (1)

d_k denotes the actual hard decision of the decoder at step k. L_ext is called the extrinsic information. It is a function of the redundant information introduced by the encoder and has the same sign as d_k (Ref. 7). (L_sys + L_apr) constitutes the
     0  1  2  3
     4  5  6  7
     8  9 10 11
    12 13 14 15

Input sequence:  0, 1, 2, 3, ..., 12, 13, 14, 15
Output sequence: 0, 4, 8, 12, ..., 3, 7, 11, 15

Fig. 3 Block interleaving.
International Journal of Electrical Engineering Education 38/2
106 K. M. S. Soyjaudah and M. T. Russool
Fig. 4 Iterative turbo decoder.
intrinsic information. L_sys is the LLR for the channel output and is the received systematic input to the decoder, scaled by the channel reliability. L_apr is equal to the extrinsic information produced by the previous decoder.
The intrinsic information is subtracted from the soft decoder output (LLR1,
LLR2). The resulting output is the extrinsic information, which is passed on
to the next decoder. This process is repeated until a desired performance is
attained after a number of iterations.
It is the decoding method that gives turbo codes their name, since the
feedback action of the decoder is reminiscent of a turbo-charged engine.
Principle of the SOVA
The SOVA is a modified Viterbi algorithm which produces soft outputs associated with the decoded bit sequence. These modifications include a modified metric computation and a reliability value update along the maximum likelihood (ML) path.
Let m denote a trellis branch into a node and M_k^(m) denote the accumulated metric at time k for branch m. Also, let u_k^(m) be the systematic encoder output (in bipolar voltage form) for path m and x_k^(m) be the corresponding parity output (Fig. 5).

If y_{k,1} and y_{k,2} are the channel outputs corresponding to the systematic and parity outputs of the RSC encoder, then the metric used for the SOVA algorithm becomes (Ref. 4)
M_k^(m) = M_{k-1}^(m) + u_k^(m) L_c y_{k,1} + x_k^(m) L_c y_{k,2}    (2)
Fig. 5 An RSC encoder.
Fig. 6 Trellis section illustrating SOVA metric computation (u_k^(m): systematic encoder output for path m; x_k^(m): parity bit).
Referring to the trellis section in Fig. 6, there are two branches merging into the central node (i.e. the node at time k); therefore, m = 1, 2. In the first step, the metric associated with each trellis node is computed by performing a forward sweep through the trellis. In addition, the metric difference (which represents the reliability value) between the paths merging in any node is calculated and stored. Then, the ML path is found. Finally, the reliability update procedure is performed as follows:
Referring to Fig. 7, where the ML path is shown in full lines, the non-surviving paths, obtained by making an initial opposite decision at each stage along the ML path, are considered one at a time until they merge with the ML path. Only one such non-surviving path is shown (broken lines) in Fig. 7 for the sake of illustration. If the bit on the survivor path at any step is the same as the associated bit on the competing path, the reliability value at that bit position remains unchanged. However, if they differ, for example at times 1 and 4, the reliability values at the corresponding steps need to be updated. The soft output is then the final updated reliability value of each decoded bit.

Fig. 7 Example of reliability update in SOVA.
Principle of the MAP algorithm
The MAP algorithm proceeds by estimating the log-likelihood ratios (LLR) of
the decoded bits, based on the received symbols. The MAP decoder produces
estimates, d_k, of the information bits based on the comparison of the LLR, L(d_k), to a threshold. For equiprobable binary input values over an AWGN channel,

d_k = 1 if L(d_k) ≥ 0;  d_k = 0 if L(d_k) < 0    (3)
Defining α_k^i(m) and β_k^i(m) as forward and backward state metrics respectively, where m is a state on the trellis diagram, the LLR is given by (Ref. 8)

L(d_k) = log [ Σ_m α_k^1(m) β_k^1(m) / Σ_m α_k^0(m) β_k^0(m) ],  where 0 ≤ m ≤ 2^n − 1    (4)
α_k^i(m) represents a state likelihood at time k for input bit i, based on a forward sweep through the trellis, and β_k^i(m) also represents a state likelihood, but based on a backward sweep through the trellis.
The expression for α_k^i(m) is

α_k^i(m) = δ_i(R_k, m) · Σ_{j=0}^{1} α_{k−1}^j [S_b^j(m)]    (5)
where S_b^j(m) is the state going backwards in time which leads to state m with input j. δ_i(R_k, m) is the branch metric and depends on the channel. It is given by
δ_i(R_k, m) = exp { (2/σ²) [x_k i + y_k Y_k^i(m)] }    (6)
where Y_k^i(m) is the encoder output with input bit i and encoder state m.
Also,

β_k^i(m) = Σ_{j=0}^{1} β_{k+1}^j [S_f^i(m)] · δ_j [R_{k+1}, S_f^i(m)]    (7)
where S_f^i(m) is the state going forward in time starting from state m with input bit i.
Using (5) and (7), the state metrics α_k^i(m) and β_k^i(m) can be computed recursively. On a trellis, δ_i(R_k, m) corresponds to the branch of the transitions from time k to k+1, with an initial encoder state m and an information bit i, as shown in Fig. 8.
The forward state metric α_k^i(m) represents the state metric for the transition from state m to the next state, at time k and with a transition bit i, as illustrated in Fig. 9.
Fig. 8 Trellis representation of δ_i(R_k, m).
Fig. 10 Trellis representation of β_k^i(m).
Step 1
Starting at time k = 0, the branch metrics δ_i(R_k, m) are calculated and stored for all received symbols (x_k, y_k) using (6).
Step 2
Since the encoder starts in state 00, α_0^i(00) is initialised to δ_i(R_0, 00) for i = 0, 1, and α_0^i(m) = 0 for all m ≠ 00. α_k^i(m) is then computed for each k from 1 to N − 1 using (5).
Step 3
Since the trellis is terminated in the zero state, β_{N−1}^i [S_b^i(00)] is initialised to 1 for i = 0, 1, and β_{N−1}^i(m) = 0 for all other m. β_k^i(m) is then computed and stored for each k from N − 2 down to 0 using (7).
Step 4
Finally, using (4), L(d_k) is computed for k = 0 to N − 1 and the estimated bit d_k is obtained using (3).
Design and simulation
The aim of this project is to enable the student to have hands-on experience of:
(a) generation of a pseudorandom sequence,
(b) turbo encoding of bits using the parallel concatenated RSC encoders,
(c) generation and addition of AWGN to the encoded bits,
(d) iterative decoding of the corrupted information using SOVA and MAP algorithms,
(e) computation of the BER at different levels of signal-to-noise ratio per bit, and
(f) evaluation of the effect of using different frame sizes.
In working through the project, the student understands the effect of channel impairments on the data transmitted and how the errors are reduced through the use of error-control codes. In addition, the student compares the performance of these codes using the Viterbi, SOVA and MAP decoding approaches. Moreover, the effect of increasing the frame size is made clear.
The project is divided into three parts:
(a) the generation of AWGN and evaluation of its effect on the performance during data transmission using hard decision Viterbi decoding,
(b) study of the effect of employing a convolutional code during data transmission in a noisy channel, and
(c) study of the performance improvement produced by employing turbo codes during data transmission in a noisy channel.
The three schemes to be implemented are illustrated in Fig. 11. Students are
expected to write programs in C to implement a pseudorandom sequence
Fig. 11 The three simulation schemes: (1) pseudorandom bit sequence → AWGN → hard decision decoder → sink; (2) pseudorandom bit sequence → convolutional encoder → AWGN → Viterbi decoder → sink; (3) pseudorandom bit sequence → turbo encoder → AWGN → iterative turbo decoder (SOVA/MAP) → sink.