
WO2007085653A1 - Check-irregular ldpc codes for uep - Google Patents

Check-irregular LDPC codes for UEP

Info

Publication number
WO2007085653A1
WO2007085653A1 (PCT/EP2007/050842)
Authority
WO
WIPO (PCT)
Prior art keywords
check
code
parity
uep
bit
Prior art date
Application number
PCT/EP2007/050842
Other languages
French (fr)
Inventor
Lucile Sassatelli
Werner Henkel
David Declercq
Original Assignee
Jacobs University Bremen Ggmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jacobs University Bremen Ggmbh filed Critical Jacobs University Bremen Ggmbh
Priority to EP07704196A priority Critical patent/EP1992072A1/en
Publication of WO2007085653A1 publication Critical patent/WO2007085653A1/en


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102 Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1148 Structural properties of the code parity-check or generator matrix
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/35 Unequal or adaptive error protection, e.g. by providing a different level of protection according to significance of source information or by adapting the coding according to the change of transmission channel characteristics
    • H03M13/356 Unequal error protection [UEP]
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/61 Aspects and characteristics of methods and arrangements for error correction or error detection, not provided for otherwise
    • H03M13/618 Shortening and extension of codes

Definitions

  • the present invention relates to a method and a corresponding apparatus for unequal error protection for the transmission of data, in particular source-encoded audio and/or video data, allowing the error-correction and coding gains to be adapted to the requirements and sensitivity of the data, using irregular Low-Density Parity-Check (LDPC) codes.
  • the present invention relates further to an encoding method and apparatus for encoding an input signal and to a corresponding decoding method and apparatus for decoding a received signal which has been encoded by such an encoding method.
  • the present invention relates further to a signal which has been encoded by an encoding method according to the invention, to a record carrier carrying such a signal and to a computer program for implementing the methods of the present invention on a computer.
  • UEP codes are a suitable tool to protect data according to quality requirements or importance levels.
  • Low-Density Parity-Check codes are generally known to be (almost) capacity-achieving. The common understanding was that they can be constructed to offer UEP properties by using different connection degrees at the bit-node side of the describing bipartite Tanner graph. Involving a bit node in more checks, i.e., connecting it to more check nodes, would improve its protection. However, the reliabilities of all bits grow with the number of iterations, and the UEP properties finally disappear.
  • the object of the invention is to make available another option to establish UEP properties in an LDPC construction by choosing the check profile, i.e., the connection degree on the check-node side of the Tanner graph, according to the protection requirements of the connected bit nodes.
  • One possible practical realization to obtain the uneven degree distribution is outlined, which uses precoding leading to a sub-code of a chosen mother LDPC code that may not yet have UEP properties.
  • the pruning can be simplified by just setting information bits to fixed known values.
  • the present invention relates to a method for unequal error protection for the transmission of data of different sensitivities in accordance with claim 1. It realizes unequal error protection by an unequal distribution of connections (edges) to check nodes, which represent the parity-check equations in the so-called Tanner graph.
  • Fig. 1 shows the Tanner graph of a regular LDPC code with a constant number of connections
  • Fig. 2 shows the pruning procedure to obtain a check-irregular code from a regular one
  • Fig. 3 shows a block diagram of an apparatus for pruning represented by precoding
  • Fig. 4 shows a flow-chart of an embodiment of the pruning method according to the present invention
  • Fig. 5 shows a block diagram of the encoding and decoding scheme according to the present invention
  • Fig. 6 shows EXIT curves for different error-protection classes of almost concentrated and non-concentrated check-irregular codes
  • Fig. 7 shows bit-error rates of different protection classes of almost concentrated and non-concentrated check-irregular codes
  • Figs. A.1 to A.19 show further figures for illustrating the present invention.
  • Figure 1 shows the Tanner graph of a regular LDPC code with a constant number of connections to variable and check nodes.
  • 10 represent the variable (bit) nodes of the codeword.
  • the parity checks defined by the rows of the parity-check matrix 40 are represented by the check nodes 20 together with the incoming edges 30.
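The correspondence just described between parity-check matrix rows, check nodes, and edges can be made concrete with a small sketch. The matrix below is a made-up toy example, not one of the patent's codes:

```python
import numpy as np

# Toy parity-check matrix: rows are check nodes, columns are variable (bit)
# nodes; every 1 marks one edge of the Tanner graph.
H = np.array([
    [1, 1, 1, 0, 1, 0, 0, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
])

# Each nonzero entry H[c, v] is an edge between check node c and variable node v.
edges = [(c, v) for c, v in zip(*np.nonzero(H))]

# Node degrees = number of incident edges.
check_degrees = H.sum(axis=1)     # ones per row
variable_degrees = H.sum(axis=0)  # ones per column

print(len(edges))        # 16 edges in total
print(check_degrees)     # [4 4 4 4]  -> constant: check-regular
print(variable_degrees)  # [2 2 2 2 2 2 2 2]  -> constant: variable-regular
```

Because both degree sequences are constant, this toy matrix corresponds to a regular code in the sense of Figure 1; making the row weights unequal is exactly what the check-irregular construction does.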
  • Figure 2 sketches an example of the procedure (pruning) according to claim 6 leading to the desired unequal distribution of edges in the graph at the check-node side.
  • variable nodes can have a regular or irregular distribution.
  • data will not be of the same importance or sensitivity.
  • the present invention assumes iterative decoding of the LDPC code.
  • the coded bits are arranged into sensitivity classes C_k. One of them will typically be reserved for the parity symbols, since there is no special protection requirement.
  • let α(k) denote the relative portion of bits devoted to class C_k, i.e., Σ_k α(k) = 1, where N_c denotes the number of classes including the parity-check bits class.
  • the desired non-concentrated check-degree distribution providing the unequal protection of the different classes according to claim 3 can be realized by pruning a mother LDPC code.
  • let the parameters of the mother code be (N_0, K_0), the length and the number of information symbols (bits), respectively.
  • the pruning algorithm is performed in an iterative manner with respect to an optimum order in which the bits are pruned.
  • the optimization of the pruning algorithm takes into account two key parameters of the k-th class, d̄^(C_k) and m_n, characterizing the connection degrees of the check nodes that belong to class C_k.
  • the upper limit t_rmax is the maximum possible check degree.
  • the protection in class C_k can be improved by minimizing the average check connection degree d̄^(C_k), which requires minimizing m_n as well.
  • any pruned bit must not be linked with a check node whose degree already equals the lower limit of the a priori chosen degree distribution.
  • Σ_j ρ_j = 1, where ρ_j denotes the proportion of edges of the graph connected to check nodes of degree j.
  • m_n may be further reduced after ensuring that these listed constraints are fulfilled (if the lower limit of allowed degrees is not yet reached).
  • a further pruning process is used to reduce d̄^(C_k).
  • Minimizing the average check connection degree d̄^(C_k) can be shown to increase the difference between the average mutual information of messages from check nodes of one class C_k, x_cv^(C_k), and the average mutual information of messages from check nodes of the whole graph, x_cv, which is given as a possible measure for the quality of one class relative to the average in claim 4.
  • pruning (as illustrated in Figure 3) can be represented by a precoder P (70) with K_1 inputs and K_0 outputs, delivering the input to the original mother encoder G
  • a permutation matrix Π can be used to formulate H_mprun as
  • H_1 must be of full rank to ensure that P is of full rank. It can be further rephrased: if H_1 is of full rank, then A(N_0 − K_0 + 1 : N_0, :) is invertible.
  • a tool is thus available to determine P, with the additional requirement in the pruning procedure according to claims 5 and 6 to ensure that H_1 is of full rank.
  • a still flexible low-complexity realization of pruning is obtained by just omitting information bits from the input to the encoder of the LDPC mother code and setting them to a fixed value (e.g. zero) also known at the receiver.
  • This is performed by making use of so-called systematic LDPC codes, that is, LDPC codes for which the parity-check matrix has an upper (or lower) triangular structure.
  • the pruning is then performed by simply omitting an information bit of the mother code, or equivalently by removing the corresponding column in the information part of the parity-check matrix (the part which is not upper triangular). By doing so, the dimensions of H_s and G_s will be M_0 × (N_0 − (K_0 − K_1)) and K_1 × (N_0 − (K_0 − K_1)), respectively; only the positions of the omitted bits of the mother code need to be known at the transmitter and receiver in order to be able to encode and decode the pruned code.
  • the method of claim 8 hence describes a pruning without a preprocessing matrix, just realized by omitting input bits to the encoder.
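A minimal sketch of the column-removal view of pruning described above, using a made-up toy mother matrix and an arbitrarily chosen pruned position (neither taken from the patent):

```python
import numpy as np

def prune_columns(H_m, pruned_cols):
    """Prune bits of a mother LDPC code by deleting the corresponding
    columns of its parity-check matrix: the pruned bits are fixed to a
    known value and never transmitted, so their columns (and edges)
    disappear from the graph. Returns H_s and the new check degrees."""
    H_s = np.delete(H_m, pruned_cols, axis=1)
    return H_s, H_s.sum(axis=1)

# Toy mother parity-check matrix (illustrative only).
H_m = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

H_s, check_deg = prune_columns(H_m, pruned_cols=[0])

# Pruning bit 0 lowers the degree of every check it participated in,
# which is exactly how an uneven (non-concentrated) check profile arises.
print(H_m.sum(axis=1))  # [3 3 3]  -> concentrated mother profile
print(check_deg)        # [2 3 2]  -> check-irregular sub-code profile
```

Checks of lower degree converge faster, so the bits attached to them form the better-protected class.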
  • Figure 4 illustrates a flow-chart of the pruning method according to the present invention.
  • Figure 5 shows the principal block structure of a transmission of source (e.g. video-, audio-) encoded data with different sensitivity to errors and its corresponding unequal protection.
  • Well-known coding methods adapted to heterogeneous sensitivity of data often focus on average performance over the whole codeword.
  • UEP could be achieved by puncturing or pruning convolutional codes to adapt the code rate without changing the decoder.
  • UEP properties can also be obtained within the same codeword and thus, the shown different coding gains can appear in the same codeword.
  • UEP in LDPC codes is achieved by irregularity in the check-node profile; the bit-node profile is set to be regular.
  • a UEP coding scheme could be useful in the transmission of multimedia content (voice, fixed image, or video) whose components have heterogeneous sensitivity to errors.
  • the code stream of source-encoded blocks is hierarchically structured and typically contains:
  • compressed data delivered from the source coder, e.g. speech encoder coefficients, image texture, or motion vectors.
  • the parameters of the protection should be adaptable dynamically with minimum changes in the encoder and/or the decoder.
  • variable- and check-node degree distributions with respect to the edge interleaver π, which are the usual polynomials of the conventional representation:
  • let x_cv^l(d) and x_vc^l(b) be the mutual information between the input of the channel and the messages from check nodes of degree d to any bit node at the l-th iteration, and from bit nodes of degree b to any check node, respectively.
  • from Equation (1) we observe that the smaller d is, the greater the mutual information of messages coming out of check nodes of degree d, i.e. the faster the local convergence.
  • from Equation (2) we see that the mutual information of messages coming out of bit nodes of degree b is larger when b is larger. This is what we exploit to optimize the local convergence speeds of UEP LDPC codes.
  • we define a sensitivity class as a set of information bits in the codeword that will have the same protection, i.e., approximately the same error probability at a given number of iterations.
  • the sensitivity classes are defined by the source encoder.
  • B and D can either be the sets of the degrees over the whole graph, and then Equation (1) describes the usual Gaussian approximation of density evolution, or the sets of the degrees inside the kth sensitivity class called C k .
  • a check node will belong to a class C k if it is linked to at least one bit node of this class. Consequently, a check node can belong to several sensitivity classes.
  • the average mutual information of messages coming out of the check nodes of class C_k to the bit nodes of this class can be expressed as
  • the first solution to limit the degradation of the overall convergence threshold is to limit the range of the check irregularity around the concentrated check degree in the optimization. Alternatively, we could check after the optimization process whether the non-concentrated code has a threshold that is not too far from the optimum (concentrated-code) threshold. We have also verified by simulations that, for short block lengths, the UEP-designed codes have global performance similar to that of the concentrated code.
  • Pruning is a well-known method for convolutional codes [1], but not so much for LDPC codes, for which it has been applied to reduce the influence of stopping sets [1]. Pruning away some bits of the codeword means considering them deterministic, i.e. fixing the pruned bits, e.g., to zero. Consequently, we do not transmit these bits; they disappear from the graph of the code since their messages are equal to infinity. Besides, since the edges connected to the pruned bits disappear, the girth (minimum cycle length) of the subcode can only increase. Thus, the columns of the parity-check matrix that correspond to these bits are removed.
  • H_m and G_m denote the parity-check and generator matrices, respectively, of the mother code of dimension K_0 and length N_0.
  • to construct a subcode of dimension K_1, we prune away K_0 − K_1 columns of H_m and obtain the parity-check matrix H_s of the subcode.
  • the next section deals with our sequential pruning procedure.
4.2 The Sequential Pruning Procedure
  • the N_c sensitivity classes to be optimized are defined by the proportions α(k) for k ≤ N_c − 1.
  • the optimization focuses on the two important quantities in the bound (6), ρ^(C_k) and d̄^(C_k), and is composed of two main stages. For a given class C_k:
  • H_mprun will denote H_m with the pruned columns replaced by zero columns. Then we have
  • H_s and G_s are obtained by removing columns in H_m, and the corresponding ones in G_m, which are columns of the identity part of G_m. The corresponding rows of G_m are also removed. Then H_s and G_s are of size M_0 × (N_0 − (K_0 − K_1)) and K_1 × (N_0 − (K_0 − K_1)), respectively. They are both of full rank, and the code rate of the subcode is the target rate: R_1 = K_1 / (N_0 − (K_0 − K_1)).
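The dimension bookkeeping for the pruned code (sizes of H_s and G_s, and the resulting rate) can be checked with a few lines; the concrete numbers N_0, K_0, K_1 below are made-up examples, not values from the patent:

```python
# Illustrative mother-code parameters: length N0, dimension K0,
# and target subcode dimension K1 after pruning K0 - K1 bits.
N0, K0, K1 = 1000, 500, 400

M0 = N0 - K0            # number of parity checks of the mother code
N_s = N0 - (K0 - K1)    # pruned (subcode) length: pruned bits are not sent

print((M0, N_s))        # (500, 900)  -> H_s is M0 x N_s
print((K1, N_s))        # (400, 900)  -> G_s is K1 x N_s
R_s = K1 / N_s
print(R_s)              # 400/900 ~ 0.444, the target rate of the subcode
```

Note the rate drops relative to the mother code's K0/N0 = 0.5: pruning trades rate for the freedom to shape the check profile.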
  • Table 2: Comparison of degree distributions for the different classes of the unconcentrated code.
  • Figure 6 shows the EXIT curves defined in Equation (5) for each class of the almost-concentrated and non-concentrated check-irregular codes.
  • the intermediate classes are quite equivalent, whereas the last class of the non-concentrated code has a slower convergence than the corresponding one of the concentrated code.
  • Figure 7 represents the bit error rates of the UEP almost-concentrated and non-concentrated codes after 30 decoding iterations.
  • the check irregularity is a means to achieve UEP at a low number of iterations (accelerating the convergence), but also at a high number of iterations, since the differences between classes are still visible after 30 decoding iterations when looking at the bit error rates.
  • stretching the check degrees allows a stronger difference in the error protection, without degradation of the total average bit error probability at this code length, compared with the concentrated code.
  • Source coding: in this block, source data are compressed and reshaped. Some parts of the source signal are more vulnerable than others.
  • the code rate R is defined as the ratio between the number of transmitted information bits and the number of transmitted bits.
  • the source can be encoded in a uniform way or in a heterogeneous one in order to take into account the properties of the source signal (unequal error protection (UEP) techniques).
  • Modulation: the order and the type of the modulation are managed here (PSK, ASK, QAM, multicarrier modulation), as well as the power of transmitted symbols, or the spreading when we have to deal with a multiple-access system by code spreading.
  • in the physical channel, the disturbances are introduced.
  • the channel leads to intersymbol interference (ISI, caused by fading channels) due to multipath propagation and multiuser interference (MUI), and adds noise (e.g. white Gaussian or impulse noise).
  • the receiver is composed of corresponding blocks, which can be described as follows:
  • Demodulation/Despreading: used to recover bits from received symbols and to separate users in a multiple-access system.
  • Channel decoding: corrects remaining errors in the previously obtained binary sequence.
  • Source decoding: reconstruction of the emitted data by decompression of the sequence coming out of the channel decoder.
  • LDPC codes have iterative decoding, which allows reaching bit error probabilities of 10^−5 – 10^−6 for a wide range of signal-to-noise ratios. These are the required orders for sensitive applications such as fixed-picture or video transmissions. A delay caused by the interleaver must be tolerated. Therefore, LDPC codes can be an alternative to Turbo codes for UEP-targeted multimedia transmissions.
  • LDPC codes are low-density linear block codes, introduced by Gallager [8] in 1963.
  • the parity-check matrix can be regular or not.
  • a code is regular if the number of non-zero elements in every row (respectively, column) is constant, and irregular if these numbers are not constant.
  • a regular LDPC code with its three parameters (N, t_c, t_r) is defined by a matrix with exactly t_c and t_r ones per column and row, respectively.
  • Those three parameters define a family of regular codes, and one code among this family is given by a particular realization of the parity-check matrix.
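One way to draw a particular realization from the (N, t_c, t_r) family is to connect node "sockets" through a random interleaver. The sketch below is a rough illustration of that idea, not the construction used in the patent; duplicate edges are folded away by XOR, so the result is only approximately regular:

```python
import numpy as np

def random_regular_H(N, t_c, t_r, seed=0):
    """One realization of the (N, t_c, t_r) regular family: connect the
    N*t_c variable-node sockets to the M*t_r check-node sockets through
    a random permutation (the 'interleaver'). Duplicate edges between
    the same node pair cancel under XOR, so this is illustrative, not a
    production construction."""
    assert (N * t_c) % t_r == 0, "N*t_c must be divisible by t_r"
    M = N * t_c // t_r                       # number of check nodes
    var_sockets = np.repeat(np.arange(N), t_c)
    chk_sockets = np.repeat(np.arange(M), t_r)
    rng = np.random.default_rng(seed)
    rng.shuffle(var_sockets)                 # the random interleaver
    H = np.zeros((M, N), dtype=int)
    for c, v in zip(chk_sockets, var_sockets):
        H[c, v] ^= 1                         # XOR folds duplicate edges
    return H

H = random_regular_H(N=12, t_c=2, t_r=4)
print(H.shape)        # (6, 12): M = N*t_c/t_r = 6 check nodes
print(H.sum(axis=0))  # column weights: at most t_c = 2 each
```

Different seeds give different interleaver realizations, i.e. different codes from the same family.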
  • an LDPC code can be represented by a bipartite graph, called factor graph [12], or Tan ner graph, made of two kinds of nodes: variable n odes representing bits of codeword, and check n odes associated to parity-check functions.
  • Those two kinds of vertices are linked with each other by edges indicatin g to which parity equation variable nodes, i.e. the associated bits, take part in .
  • the degree of con nection of a bit n ode (the same for a check n ode) is the number of edges linked to this n ode.
  • a node is said i con nected or of degree i if it is con nected to i edges.
  • One code corresponds to one particular realization of the interleaver.
2.1.1.2 Irregular LDPC Codes
  • a code is irregular if it is not regular.
  • the usual parameterization of irregular LDPC codes is done by means of polynomials:
  • λ_i is the proportion of edges of the graph connected to bit nodes of degree i
  • t_cmax is the maximum number of edges linked to a bit node.
  • ρ_j is the proportion of edges of the graph connected to check nodes of degree j
  • t_rmax is the maximum number of edges linked to a check node
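Given edge-perspective distributions of this kind, the design rate of the ensemble follows from the standard formula R = 1 − (Σ_j ρ_j/j)/(Σ_i λ_i/i). A small sketch with made-up coefficients (not a distribution from the patent):

```python
from fractions import Fraction

# Edge-perspective degree distributions: lambda_[i] (rho_[j]) is the
# fraction of edges attached to bit (check) nodes of degree i (j).
# Illustrative values; each distribution must sum to 1.
lambda_ = {2: Fraction(1, 2), 3: Fraction(1, 2)}
rho_ = {6: Fraction(1, 1)}

assert sum(lambda_.values()) == 1 and sum(rho_.values()) == 1

# Design rate: R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i).
R = 1 - sum(r / j for j, r in rho_.items()) / sum(l / i for i, l in lambda_.items())
print(R)  # 3/5
```

Exact fractions make the bookkeeping transparent: Σλ_i/i = 5/12, Σρ_j/j = 1/6, hence R = 1 − 2/5 = 3/5.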
  • codes should be systematic: information bits are directly copied into the codeword.
  • deriving the generator matrix G from H is not easy. Nevertheless, it is possible to encode using the parity-check matrix.
  • a sub-optimum decoding algorithm known as the Sum-Product or Belief Propagation (BP) algorithm is used instead. It spreads messages along the edges, forwarding probabilities or log-likelihood ratios (LLR). To each branch two messages are associated, one for each direction.
  • the principle of BP is Bayes rule applied locally (on every bit of the codeword) and iteratively to estimate a posteriori probabilities (APP) of every bit.
  • c is the bit value of the node and y denotes all the information available to the node up to the present iteration obtained from edges other than the one carrying v.
  • c is the bit value of the variable node that receives the message from the check node, computed up to the present iteration from edges other than the one carrying u.
  • v_m is the message (LLR) over the m-th edge coming out of a bit node.
  • the messages u_k are the LLRs coming out of a check node, and u_0 is the LLR of the channel observation.
  • initially, all messages u_k are equal to zero.
  • u_k is the message (LLR) over the k-th edge coming out of a check node.
  • the messages v_m are the LLRs coming out of a bit node.
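The two message-update rules just described can be sketched in a simplified scalar form. This is a generic BP sketch (the tanh rule at the check node, LLR summation at the bit node); the function names are ours, not the patent's:

```python
import numpy as np

def check_to_bit(incoming_v):
    """Check-node update (tanh rule): the outgoing LLR u on one edge is
    formed from the LLRs v_m arriving on all *other* edges of the check."""
    t = np.prod(np.tanh(np.asarray(incoming_v) / 2.0))
    t = np.clip(t, -0.999999, 0.999999)   # guard against arctanh(+/-1)
    return 2.0 * np.arctanh(t)

def bit_to_check(u0, incoming_u):
    """Bit-node update: the channel LLR u_0 plus the LLRs u_k arriving
    from all *other* neighboring check nodes."""
    return u0 + sum(incoming_u)

print(bit_to_check(1.5, [2.0, -0.5]))   # 3.0
strong = check_to_bit([4.0, 4.0])       # two confident inputs -> confident output
weak = check_to_bit([4.0, 0.1])         # one weak input dominates the check
print(strong > weak)                    # True
```

The check-node output is always weaker than its weakest input, which is why low-degree checks (fewer inputs to weaken the product) deliver more reliable messages, the effect the check-irregular UEP design exploits.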
  • P ⁇ (z) is the average probability of the codes of the family, such that sub-jacent graph be a tree.
  • the zero codeword is transmitted (since for a chan nel and the BP decoder fulfilling symmetry condition , error probabilities at the Zth iteration do n ot depend on the transmitted codeword).
  • therefore u_0 is Gaussian N(2/σ², 4/σ²), which is consistent according to Def. (2.4).
  • Theorem 3 of [19] (p. 628) asserts that consistency is kept along the iterations for a given binary-input memoryless output-symmetric channel.
  • let v be a message such that v ∼ N(m, 2m), i.e. the output of a binary-input Gaussian channel.
  • the mutual information between v and input c of the virtual channel is given by:
  • Equation (2.10) is rewritten as:
  • J is a continuous and strictly monotonic function, so J^(−1) exists and permits computing the mean of messages from the mutual information.
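In EXIT-style analyses like the one above, J maps the mean m of a consistent Gaussian LLR message v ∼ N(m, 2m) to the mutual information with the channel input. A possible numerical sketch of J and its inverse; the Monte-Carlo sample size, seed, and bisection bounds are our arbitrary choices:

```python
import numpy as np

def J(m, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the mutual information between a BPSK input
    and a consistent Gaussian LLR message v ~ N(m, 2m)."""
    if m <= 0:
        return 0.0
    rng = np.random.default_rng(seed)
    v = rng.normal(m, np.sqrt(2.0 * m), n_samples)
    # I = 1 - E[log2(1 + exp(-v))], computed overflow-safely.
    return 1.0 - np.mean(np.logaddexp(0.0, -v)) / np.log(2.0)

def J_inv(x, lo=1e-6, hi=100.0):
    """Invert J by bisection; valid because J is strictly increasing."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if J(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = J(2.0)
print(0.0 < x < 1.0)              # True: J maps means into (0, 1)
print(abs(J_inv(x) - 2.0) < 0.1)  # True: the inverse recovers the mean
```

With fixed seed, J is a deterministic smooth increasing function, so the bisection recovers the mean reliably; closed-form approximations of J (e.g. ten Brink's) are the usual production alternative.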
  • x is the random variable describing the codeword bit value associated with the variable node a
  • y is the random variable describing all the information incorporated into this message.
  • the stability condition is very important because it controls the mutual-information behavior at very low error probabilities (or equivalently, when the mutual information is close to 1).
  • σ* = min{ σ : F(σ, x) > x, ∀x ∈ [0, 1] }   (3.1)
  • the code stream of source-encoded blocks is hierarchically structured and contains:
  • Compressed data delivered from the source coder, e.g. speech encoder coefficients, image texture, or motion vectors.
  • Irregular punctured/pruned systems: puncturing consists of not emitting some bits of the codeword, thereby increasing the initial code rate R.
  • the receiver knows the puncturing pattern and considers the non-transmitted bits as erasures. This technique worsens the performance of the code but allows obtaining a wide range of rates.
  • unequal error protection can then be achieved by applying different code rates to each part of the source data, according to the required robustness.
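The erasure handling described above can be sketched at the receiver side. This assumes BPSK over AWGN (channel LLR = 2y/σ²); the function name and the concrete numbers are illustrative only:

```python
import numpy as np

def channel_llrs_with_puncturing(y, sigma2, punctured):
    """Puncturing sketch: punctured bits are never transmitted, so the
    receiver, which knows the puncturing pattern, treats them as
    erasures (LLR = 0, i.e. no channel information). `y` holds the
    received BPSK values for the transmitted positions only."""
    n = len(punctured)
    llr = np.zeros(n)                          # erasure default
    sent = [i for i in range(n) if not punctured[i]]
    llr[sent] = 2.0 * np.asarray(y) / sigma2   # BPSK-over-AWGN channel LLR
    return llr

punctured = [False, True, False, True]
llr = channel_llrs_with_puncturing(y=[0.9, -1.1], sigma2=0.5,
                                   punctured=punctured)
print(llr)   # punctured positions stay at LLR 0.0
```

The decoder then runs unchanged: the zero LLRs simply contribute nothing at the bit-node updates, and the iterations must recover the punctured bits from the parity structure.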
  • Another way of adding irregularity is using a pre-processing block before the code, in order to prune it. Puncturing and pruning will be the chosen methods to realize UEP in our work; they have been further studied in [24] for Turbo codes.
  • the local minimum distance associated with each bit of the codeword determines the maximum number of errors in the whole codeword that still allows this bit to be corrected.
  • the local minimum distance can be greater than the global one, which means that the i-th bit can be corrected even if the whole codeword is not recovered by MLD. That explains the interest of such codes under MLD, when considering the previously mentioned JPG transmission, for example. Construction methods of such codes have been presented, but a big problem is the poor control that we have over the proportions of the classes, which can be very disturbing for the latter application.
  • Another family of such irregular coding systems is multi-level coded modulation.
  • Each bit of a symbol is associated with a given code, which differs from the others by its code rate.
  • the protection level of the bits depends on the code, and on the position in the constellation labelling, which means that two kinds of irregularities can be exploited
  • LDPC codes can be punctured [9] in order to create average irregularity. Puncturing influences the code rate: average performances differ between two codewords encoded with different puncturing patterns. Nevertheless, it is better suited to make use of an irregularity that leads to unequal error protection of bits inside a codeword: the most connected bits will have the lowest error probability. This has been highlighted in [6] and applied for optimization over several transmit channels. The optimization for AWGN done by Poulliat in [17] will be presented in the next chapter.
  • the first considers the LDPC code as a linear block code and optimizes the code according to the local minimum distances [4, 16].
  • the second approach is an asymptotic optimization for BP decoding and is based on pruning and puncturing of a mother code.
  • Equation (4.2) is obtained by adding the mutual information coming into each class of bit nodes, since there is no overlap between the classes. We can then derive convergence and stability conditions from this fact.
4.1.2 Cost Function for UEP
  • let X_u be the random variable whose distribution is N(0, 1).
  • X is Gaussian consistent at any iteration, according to the symmetry of the channel and the conservation of consistency along the iterations:
  • we improperly denote by X the variable X conditioned on bit 0, but this will make the expressions clearer.
  • J^(−1)(x_l) is an increasing function of l. Since Q is a decreasing function, (4.3) shows that, at a given number of iterations, the more a bit is connected, the more it is protected, considering the associated error probability (the convergence of this node is faster).
  • the derived linear-programming algorithm is meant to achieve a joint optimization of the degree distributions, under the constraints of proportion, code rate, convergence, stability, and hierarchy (since the optimization is sequential, the irregularity profile of already optimized classes must not be modified by the current optimization).
  • the degree of independence of the ith column of the parity-check matrix of the code is the minimum number of columns that are included in a linear combination that equals zero, with a coefficient one at the ith column.
  • Lemma 4.1: The local minimum distance d_i of the ith bit of the codeword is the degree of independence of the ith column of the parity-check matrix of the code.
  • the protection level f_i of the ith bit of a codeword is the maximum number of errors in the codeword that still allows the correction of this bit.
  • the local minimum distance associated with each bit of the codeword determines the maximum number of errors in the whole codeword that still allows the correction of this bit.
  • the local minimum distance can be greater than the global one, which means that the ith bit can be corrected even if the whole codeword cannot be restored by MLD.
  • these algebraic properties can be linked to Majority Logic Decoding, presented in [3], which works on a poorer definition of the local minimum distance to simplify the decoding.
  • classes are not defined by their proportions at the beginning, which is another drawback of the linear-coding approach.
  • we do not intend to construct an arbitrary linear block code, but a subcode of a mother code, from which we choose the right columns to be removed in the parity-check matrix so that the resulting parity-check matrix defines a code with the required properties.
  • the parameters of the optimization are:
  • U is the vector where the indexes of the columns of H to be pruned away are stored (length K_0 − K_1).
  • w_init is the initial w vector, ordered in decreasing order, before optimization of a selected column.
  • a cycle of length 2d is a set of d variable nodes and d constraint nodes connected by edges such that a path exists that travels through every node in the set and connects each node to itself without traversing an edge twice.
  • Definition 4.5 (C_d cycle set)
  • a set of variable nodes in a bipartite graph is a C_d set if (1) it has d elements, and (2) one or more cycles are formed between this set and its neighboring constraint set.
  • a set of d variable nodes does not form a C_d set only if no cycles exist between these variables and their constraint neighbors.
  • a variable node set is called an S_d set if it has d elements and all its neighbors are connected to it at least twice.
  • L_d (linearly dependent set): a variable node set is called an L_d set if it is comprised of exactly d elements whose columns are linearly dependent, but any subset of these columns is linearly independent.
  • the code is decoded in an optimal way, in the sense of the minimum distance.
  • the code can have UEP properties due to its local minimum distances, associated to some cycles in the graph, that can be different from each other.
  • the UEP properties are then dependent on the realization of the H matrix.
  • the local properties of the code are taken into account by the MLD.
  • Belief Propagation is sub-optimum decoding, and quite "global" in the sense that it does not take into account the local properties randomly created with the H matrix. Local differences will be created by the local sub-optimalities of BP decoding at finite code length, and some of these sub-optimalities are associated with small local minimum distances.
  • Belief Propagation decoding is Maximum Likelihood Decoding.
  • the minimum distance tends to infinity, and so does the length of the smallest cycle, called the girth. Therefore, all local minimum distances tend to infinity too, and the UEP properties defined by the two means of decoding tend to be the same.
  • UEP properties depend on the code and also on the way it is decoded: the optimization must be done as a function of the chosen decoding method. This is, of course, practically determined by the code length, since at small N (N ≤ 500) MLD will be used, otherwise BP.
  • Theorem 3 in [11] states that, under a local tree assumption of depth 2T and some other constraints, for any 1 ≤ l ≤ T, the distribution functions Ql(d) of messages originating from check nodes of degree d and Pl(b) of messages originating from bit nodes of degree b are equal to
  • x_cv(l)(d) = 1 − J[(d − 1) · J⁻¹(1 − Σ_{b∈B} λ(b, d) · x_vc(l)(b))]   (4.13)
  • Messages are LLRs, and 0 ≤ |tanh(LLR)| ≤ 1. At a high number of iterations, many LLRs are high.
  • At a bit node, the important LLRs are the highest, because they are summed up. At a given high number l of iterations, we decide that a message coming out of a bit node is of bad quality if the corresponding LLR is below a fixed threshold that does not depend on the considered bit node or on the number of iterations. At a high enough number l of iterations, a bit node produces a bad message (i.e., a low LLR) if the number of incoming high LLRs is below a fixed number that we choose in terms of the number of iterations, i.e.
  • This ratio is a constant. It does not depend on ql, i.e., on the number of iterations, for a high enough number of iterations. The behaviors of different check nodes remain different even at a high number of iterations, i.e., at a low bit-error rate. This explains that the UEP created by irregularities over check nodes remains at a high number of iterations, which we exploit in this work.
  • Density: according to Gallager's result, the densest codes have the lowest gap to capacity. At a given code rate, there is one optimum average connectivity of check nodes ρ̄ that minimizes the gap to capacity, Fig. (A.7) (for infinite code length and an infinite number of iterations). This key parameter of the code, linked with tcmax, determines the density of the code. The denser the graph, the higher the connectivity has to be. If the value of ρ̄ is moved from the optimum, the value of tcmax must be changed too.
  • The required ρ̄ can be achieved either with a concentrated degree distribution at the check node side or with an unconcentrated one.
  • Our optimization, by removing bit nodes, decreases ρ̄ while tcmax is kept. The less dense UEP code must have a higher threshold.
  • If transmission has to be achieved, even with poor quality, we allow a large spread of the check node degrees. For example, if one wants to transmit a JPG picture even with bad quality, putting headers and very low frequency DCT coefficients into the most protected classes ensures the transmission, even if the resulting picture is quite fuzzy.
  • This might not be a problem if the maximum connection degree of the bit nodes is adapted, i.e., in a joint optimization. It could raise a problem if the check optimization is performed after the bit node optimization, as a second stage. If it is done before, a constraint on tcmax should be added in the optimization of the bit node profile if one wants to keep the best convergence threshold.
  • Figure (A.17) shows the coding scheme that we use as a starting point.
  • Let H and G be the parity-check (size M0 × N0) and generator (size K0 × N0) matrices of the mother code, and assume that they are in a systematic form (i.e., full rank).
  • Let R0 be the code rate of the mother code.
  • The subcode has a given number of info bits: K1.
  • R1 is the code rate of the subcode.
  • P is a preprocessing generator matrix.
  • This preprocessing matrix is not needed if we prune away only information columns of the H matrix and choose the K1 best protected columns among the information columns of the H matrix, which reduces a lot the possible UEP configurations.
  • Hs and Gs, the parity-check and generator matrices of the subcode, are obtained by removing the columns to be pruned away in H, and the corresponding ones, which are columns of the identity, in G, where we also remove the corresponding rows (i.e., the row where the one was). Since the best protected columns are chosen as being information columns, they are already part of the identity. Then Hs and Gs are of size M0 × (N0 − (K0 − K1)) and K1 × (N0 − (K0 − K1)), respectively.
  • The obtained rate is the desired one.
  • Definition 4.11: A matrix is in reduced row echelon form if it is made of an upper triangular part whose size is the rank of the matrix, after linear combinations of its rows and then permutation of its columns.
  • Definition 4.12: A matrix is said to be in reduced row form if the previous manipulations on its rows have been made, but without permuting its columns at the end.
  • Theorem 4.1: A necessary and sufficient condition on G' that allows to compute a P fulfilling Eq. (4.21) is: rank(G') ≥ K1 (4.22)
  • Equation (4.23) can be represented by
  • If G' fulfills Condition (4.22), a solution for P2 exists, and if rank(G') < K0, then we have degrees of freedom for P2, and then also for P.
  • A constraint, called Code rate constraint in the optimization algorithm, ensures that the parity-check matrix of the subcode, i.e., the matrix of the mother code without the pruned columns, will have a code rate of K1/(N0 − (K0 − K1)), or equivalently that rank(Hmother,pruned) = N0 − K0.
  • Hpruned is no longer the parity-check matrix of the subcode, since other parity equations are added.
  • The subcode is defined by:
  • Gs = P · G : K1 × N0, and Hs : (N0 − K1) × N0.
  • Hs is made of Hmothercode and Hp, the parity-check matrix of the generator preprocessing matrix P.
  • Hp is of size (K0 − K1) × K0:
  • IK0−K1 is the identity associated with the redundancy columns of the precode P, and H(K0−K1)×K1 is associated with the information bits of the subcode.
  • The K0 bits of the codeword of the precode P are directly copied into the K0 information bits of the mother code.
  • Hs of the subcode is not in a systematic form in Eq. (4.31), and then we cannot distinguish redundancy columns and information columns of the subcode in this form.
  • In the systematic form Hs,sys, the last K1 columns are associated with the K1 information bits of the subcode, and the K0 − K1 pruned columns are taken among the N0 − K1 columns, which are the columns of a square upper triangular matrix.
  • Condition (4.22) must be fulfilled to be able to compute the P matrix and obtain a code rate of the subcode equal to the desired one, even if we choose the columns to be pruned away and the best protected columns among the redundancy of the mother code.
  • Computation of the preprocessing matrix: after having verified that we can choose the K0 − K1 bits to be pruned away and the K1 best protected among the N0 bits of the mother code, we are going to explain how the P matrix is computed.
  • Another possibility is to consider the P matrix as some additional parity-check equations, as shown in the expression of Hs.
  • One can choose an arbitrary Hp, for example such that it improves the UEP properties of interesting bits by choosing its irregularity accordingly, or as a part of Hmother to decrease the required memory.
  • The user will have to choose the constraints on the optimization, and thus the strength of the UEP, according to the available memory and processing power.
  • Hp : (K0 − K1) × K0, whose elements are h(i, j), and Pr : K0 × K1, whose elements are d(i, j).
  • N0(bit) denotes the set of check nodes linked to the variable node bit.
  • N1(bit) is the set of bit nodes linked to each check node belonging to N0(bit).
  • bitpruned = arg max (d̄(bit)) under:
  • This condition is automatically fulfilled in the case of a regular mother code.
  • the mother code has parameters (2000,3,6).
  • The decoding is done by using only the pruned parity-check matrix of the mother code.
  • Fig. (A.18) shows EXIT curves, defined in Eq. (4.16), for each class of the almost concentrated and unconcentrated check-irregular codes.
  • The intermediate classes are quite equivalent, whereas the last class of the unconcentrated code has a slower convergence than the corresponding one in the concentrated code.
  • Figure (A.19) shows the behavior at low bit-error rates, which cannot be seen from an EXIT curve. This would be near the (1,1) point in the EXIT chart, i.e., at a high number of iterations; here, for 30 iterations.
  • UEP properties remain also at a high number of iterations, which constitutes a huge difference from the UEP properties generated by irregularities over bit nodes, which induce convergence speed differences.
  • The check optimization would be a means to achieve UEP at a low number of iterations (accelerating the convergence) and at a high number. This behavior can be explained by Fig. (A.14) and the comments following it in the first section.
  • Puncturing could be a method to realize UEP by increasing the code rate and worsening certain bits, but without the possibility to improve some others.
  • The quality of the messages coming to interesting checks, i.e., those belonging to one class, would be more important than the degrees of these checks.
  • Erasure messages, i.e., with an LLR, defined in Def. (2.2), that equals zero.
  • Pruning, i.e., an LLR that equals infinity, which makes the bit node and the linked edges disappear from the graph).
  • Gi,j is the set of bit nodes of degree i linked to check nodes of degree j.
  • πi,j is the proportion of punctured symbols in Gi,j before decoding.
  • λ(i, j) and ρ(i, j) are the proportion of bit nodes of degree i among bit nodes linked to check nodes of degree j, and the proportion of check nodes of degree j among check nodes linked to bit nodes of degree i, respectively.
  • (i.e., the bit node is of degree i and linked to a check of degree j)
  • The design goal of optimal puncturing defined in [9] is to maximize the puncturing fraction for a given Eb/N0, such that Eq. (4.40) is fulfilled.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Error Detection And Correction (AREA)

Abstract

Data, especially when it is source coded multimedia information (video, speech, audio), often should be protected differently, since the effects of errors would lead to disturbances of different susceptibility. So-called 'Unequal Error Protection (UEP) codes' are a suitable tool to protect data according to quality requirements or importance levels. Low-Density Parity-Check codes are generally known to be (almost) capacity-achieving. The common understanding was that they can be constructed to offer UEP properties by using different connection degrees at the bit-node side of the describing bi-partite Tanner graph. Involving a bit node into more checks, i.e., connecting it to more check nodes would improve the protection. However, the reliabilities of all bits grow with the number of iterations and the UEP properties finally disappear with the number of iterations. The invention proposes a method for the transmission of data of different sensitivities and realizes unequal error protection by an unequal distribution of connections (edges) to check nodes, representing the parity-check equations in the Tanner graph.

Description

CHECK-IRREGULAR LDPC CODES FOR UEP
The present invention relates to a method and a corresponding apparatus for unequal error protection for the transmission of data, in particular source-encoded audio and/or video data, allowing to adapt the error correction and coding gains to requirements and sensitivity of the data, using irregular Low-Density Parity-Check (LDPC) codes.
The present invention relates further to an encoding method and apparatus for encoding an input signal and to a corresponding decoding method and apparatus for decoding a received signal which has been encoded by such an encoding method.
The present invention relates further to a signal which has been encoded by an encoding method according to the invention, to a record carrier carrying such a signal and to a computer program for implementing the methods of the present invention on a computer.
Data, especially when it is source-coded multimedia information (video, speech, audio), often should be protected differently, since the effects of errors would lead to disturbances of different susceptibility. So-called "Unequal Error Protection (UEP) codes" are a suitable tool to protect data according to quality requirements or importance levels. Low-Density Parity-Check codes are generally known to be (almost) capacity-achieving. The common understanding was that they can be constructed to offer UEP properties by using different connection degrees at the bit-node side of the describing bipartite Tanner graph. Involving a bit node into more checks, i.e., connecting it to more check nodes would improve the protection. However, the reliabilities of all bits grow with the number of iterations and the UEP properties finally disappear with the number of iterations.
The object of the invention is to make available another option to establish UEP properties in an LDPC construction by choosing the check profile, i.e., the connection degree on the check-node side of the Tanner graph to be chosen according to the protection requirements of the connected bit nodes. One possible practical realization to obtain the uneven degree distribution is outlined which uses a precoding leading to a sub-code of a chosen mother LDPC code, which may not yet have UEP properties. In the case of systematic representations of the generator and parity-check matrices, the pruning can be simplified by just setting information to fixed known values. The present invention relates to a method for unequal error protection for the transmission of data of different sensitivities in accordance with claim 1. It realizes unequal error protection by an unequal distribution of connections (edges) to check nodes, representing the parity-check equations in the so-called Tanner graph.
Preferred embodiments of the invention are defined in the dependent claims. It shall be understood that the encoding method and apparatus, the decoding method and apparatus, the signal and the computer program of the present invention have similar and/or identical preferred embodiments as defined in the dependent claims.
Methods for unequal error protection are known since long. Two central publications for earlier approaches not yet based on iterative decoding are:
• Hagenauer, J., "Rate-Compatible Punctured Convolutional Codes (RCPC Codes) and their Applications," IEEE Trans. on Comm., Vol. 36, No. 4, April 1988, pp. 389-400. This approach changed the transmission rate and with it the error protection by puncturing output bits.
• C-H. Wang and C-C. Chao, "Path-Compatible Pruned Convolutional (PCPC) Codes: A New Scheme for Unequal Error Protection," proc. ISIT 1998, Cambridge, MA, USA, 1998. This more recent publication uses pruning with a so-called path-locating matrix, which the authors did not yet see as a precoder. They applied the principle also recursively.
Since the present invention is based on the iterative decoding of LDPC codes, general methods for iterative decoding are described in:
• R. G. Gallager, "Low-Density Parity-Check Codes," IRE Trans, on Inform. Theory, pages 21-28, 1962 and
• R. G. Gallager, Low-Density Parity-Check Codes, PhD thesis, MIT, 1963.
Applying and analyzing LDPC codes for unequal error protection by modifying the bit-node profile, but keeping the check-node profile in a concentrated form with only 2 possible connection degrees, in contrast to the present invention, was published in:
• C. Poulliat, D. Declercq, and I. Fijalkow, "Optimization of LDPC Codes for UEP Channels," proc. ISIT 2004, Chicago, USA, June 2004. Pruning, as described in the already mentioned works by Wang et al., has only been used for LDPC codes to reduce the influence of stopping sets, thereby lowering the error floor, in:
• T. Tian, C. Jones, J. D. Villasenor, and R. D. Wesel, "Construction of Irregular LDPC
Codes with Low Error Floors," proc. ICC 2003, Anchorage, Alaska, USA, 2003. Unequal error protection was not treated.
The invention will now be explained in more detail with reference to the drawings in which Fig. 1 shows the Tanner graph of a regular LDPC code with a constant number of connections,
Fig. 2 shows the pruning procedure to obtain a check-irregular code from a regular one, Fig. 3 shows a block diagram of an apparatus for pruning represented by precoding, Fig. 4 shows a flow-chart of an embodiment of the pruning method according to the present invention, Fig. 5 shows a block diagram of the encoding and decoding scheme according to the present invention, Fig. 6 shows EXIT curves for different error-protection classes of almost concentrated and non-concentrated check-irregular codes, Fig. 7 shows bit-error rates of different protection classes of almost concentrated and non-concentrated check-irregular codes, Figs. A.1 to A.19 show further figures for illustrating the present invention.
Figure 1 shows the Tanner graph of a regular LDPC code with a constant number of connections to variable and check nodes. In the figure, 10 represent the variable (bit) nodes of the codeword. The parity checks defined by the rows of the parity-check matrix 40 are represented by the check nodes 20 together with the incoming edges 30. Figure 2 sketches an example of the procedure (pruning) according to claim 6 leading to the desired unequal distribution of edges in the graph at the check-node side. In the said figure, an original regular code 50 is modified to obtain the irregular code 60 with a varying check-node profile. It also shows resulting classes Cj, j = 1,...,3, obtained by eliminating pruned bits
(shown dashed). These classes are specified by their average connection degree of the check nodes they are connected to. The distribution needs to be chosen such that more than 2 different numbers of connections at these check nodes are used. The counterpart of the check nodes, the variable nodes, can have a regular or irregular distribution.
In many practical applications, data will not be of the same importance or sensitivity. There may, for example, be headers, control data, compressed data, with different resolution (e.g. scalable video or audio) that all would lead to differently susceptible artefacts or distortions when they would contain transmission errors. One may also think of transmitting data of a money wire transfer. The effect would, of course, dramatically depend on the position of an error in the decimal number.
According to claim 2, the present invention assumes iterative decoding of the LDPC code.
According to claim 3, the coded bits are arranged into sensitivity classes Ck. One of them will typically be reserved for the parity symbols, since there is no special protection requirement. Let us name the relative portion of bits devoted to class Ck as α(k), i.e.,

Σ_{k=1}^{Nc} α(k) = 1,

where Nc denotes the number of classes including the parity-check bits class.
According to claim 6, the desired non-concentrated check-degree distribution realizing the unequal protection of the different classes according to claim 3 can be realized by pruning a mother LDPC code. Let the parameters of the mother code be (N0, K0), the length and the number of information symbols (bits), respectively. The rate is R0 = K0 / N0. The subcode obtained by pruning according to claim 6 should have the parameters (N1, K1) and R1 = K1 / N1. Pruning leads to a rate of R1 = K1 / (N0 − K0 + K1), since instead of K0, only K1 information symbols (bits) are provided to the pruned code and, accordingly, K0 − K1 symbols are omitted from the output data as well. The amount of redundancy stays the same in the mother and the sub-code, i.e., (1 − R1) N1 = (1 − R0) N0.
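The rate relations of the pruning step can be checked with a short script (an illustrative sketch; the function name and example parameters are not taken from the patent):

```python
# Hedged sketch: code-rate bookkeeping for pruning a mother LDPC code.
# Relations used: R0 = K0/N0, N1 = N0 - (K0 - K1), R1 = K1/N1, and the
# invariant (1 - R1)*N1 == (1 - R0)*N0 (the redundancy stays the same).

def pruned_code_parameters(n0: int, k0: int, k1: int):
    """Return (N1, R0, R1) for a subcode with k1 info bits obtained by
    pruning an (n0, k0) mother code."""
    assert 0 < k1 <= k0 < n0
    n1 = n0 - (k0 - k1)          # K0 - K1 symbols are omitted from the output
    r0 = k0 / n0
    r1 = k1 / n1
    # The amount of redundancy is unchanged by pruning:
    assert abs((1 - r1) * n1 - (1 - r0) * n0) < 1e-9
    return n1, r0, r1

print(pruned_code_parameters(2000, 1000, 600))  # (1600, 0.5, 0.375)
```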
The pruning algorithm is performed in an iterative manner with respect to an optimum order in which the bits are pruned. The optimization of the pruning algorithm takes into account two key parameters of the kth class, d̄(Ck) and dmin(Ck). The first is the average check connection degree of the kth class,

d̄(Ck) = Σ_{d = dmin(Ck)}^{dmax(Ck)} d · ρd(Ck),

where dmin(Ck) and dmax(Ck) are the minimum and maximum check connection degrees, respectively, and ρd(Ck) is the relative portion of check nodes with connection degree d that belong to class Ck. A proportion relation (given as an image in the original) is obtained, in which the second sum starts at a connection degree of 2, since a check node should at least be connected to 2 variable nodes. The upper limit tcmax is the maximum possible check degree.
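The average check connection degree of a class can be evaluated directly from the class's degree proportions (an illustrative sketch; the example distribution is hypothetical):

```python
# Hedged sketch: d_bar(Ck) = sum_d d * rho_d(Ck), where rho_d(Ck) is the
# relative portion of the class's check nodes having connection degree d.

def average_check_degree(rho: dict) -> float:
    """rho maps check degree d -> relative portion of class check nodes."""
    assert abs(sum(rho.values()) - 1.0) < 1e-9   # proportions must sum to 1
    return sum(d * p for d, p in rho.items())

# Hypothetical class seeing check degrees 4, 6 and 8:
print(average_check_degree({4: 0.25, 6: 0.5, 8: 0.25}))  # 6.0
```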
According to claim 5, the protection in class Ck can be improved by minimizing the average check connection degree d̄(Ck), which requires minimizing dmin(Ck) as well. For each considered class, d̄(Ck) is lowered as much as possible, minimizing dmin(Ck) step by step as well. For a chosen dmin(Ck), one would try to put a maximum number of check nodes with the minimum degree dmin(Ck) in order to decrease d̄(Ck). Although this may be interpreted as keeping the degree distribution concentrated inside a certain class, we do not see this as a strict rule. The reduction of d̄(Ck) may be realized in different ways. However the steps and the succession in pruning are chosen, including possible reallocations of variable nodes to classes, the following constraints need to be fulfilled and checked every time:
1. Any pruned bit must not be linked with a check node of degree already identical to the lower limit of a priori chosen degree distributions.
2. Involuntary pruning shall be avoided, meaning that a column of the parity-check matrix H becomes independent from all the others and then it would not define a code any more.
3. The chosen code rate K1/N1 must still be achieved, given by the total number of checks N − K and the number of bit nodes N.
4. Convergence at a desired signal-to-noise ratio (near the Shannon-capacity limit) must be ensured, typically by investigating the so-called EXIT charts. (S. ten Brink, "Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes," IEEE Transactions on Communications, Vol. 49, No. 10, Oct. 2001, pp. 1727-1737.)
5. A stability constraint has to be ensured, which is formulated as a rule for λ2, which is the proportion of edges of the graph connected to bit nodes of degree 2. This rule has been presented in S.Y. Chung, T. Richardson, and R. Urbanke, "Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes using a Gaussian Approximation," IEEE Trans. on Inform. Theory, Vol. 47, No. 2, pp. 657-670, Feb. 2001, as

λ2 ≤ e^(1/(2σ²)) / Σj ρj (j − 1),

where ρj denotes the proportion of edges of the graph connected to check nodes of degree j.
In a possible iterative procedure, dmin(Ck) may be further reduced after ensuring that these listed constraints are fulfilled (if the lower limit of allowed degrees is not yet reached). A further pruning process is used to reduce d̄(Ck).
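A possible greedy form of this iterative pruning order can be sketched as follows (an illustrative simplification: only the minimum-check-degree constraint from the list above is enforced; the rate, EXIT-convergence, and stability checks would have to be added in a complete design):

```python
# Hedged sketch of a greedy pruning order: repeatedly prune the bit node
# whose neighboring checks have the highest average degree, while no
# neighboring check may drop to or below a minimum degree d_min.
# Data structures and the selection rule are illustrative simplifications,
# not the patented algorithm.

def greedy_prune(adj, n_prune, d_min=2):
    """adj maps bit node -> set of neighboring check nodes.
    Returns the list of pruned bit nodes in pruning order."""
    check_deg = {}
    for checks in adj.values():
        for c in checks:
            check_deg[c] = check_deg.get(c, 0) + 1

    pruned = []
    for _ in range(n_prune):
        candidates = [
            b for b, checks in adj.items()
            if b not in pruned and all(check_deg[c] > d_min for c in checks)
        ]
        if not candidates:
            break  # the degree constraint forbids any further pruning
        # prune the bit attached to the highest-degree checks on average
        best = max(candidates,
                   key=lambda b: sum(check_deg[c] for c in adj[b]) / len(adj[b]))
        for c in adj[best]:
            check_deg[c] -= 1   # pruning removes one edge from each neighbor
        pruned.append(best)
    return pruned

print(greedy_prune({0: {0, 1}, 1: {0, 2}, 2: {1, 2}}, 2, d_min=1))  # [0]
```

Pruning the bit attached to the highest-degree checks lowers the average check degree of the remaining graph fastest.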
Minimizing the average check connection degree d̄(Ck) can be shown to increase the difference between the average mutual information of messages from check nodes of one class Ck, xcv(Ck), and the average mutual information of messages from check nodes of the whole graph, xcv, which is given as a possible measure for the quality of one class relative to the average in claim 4.
According to claim 6, pruning (as illustrated in Figure 3) can be represented by a precoder P 70 with K1 inputs and K0 outputs, delivering the input to the original mother encoder G
80. The existence of a pruning precoder can be shown with the following derivation. This will finally lead to a matrix A, consisting of the generator matrix of the mother code Gm and permutation matrices Π1 and Π2, which has to be full rank to be able to determine the precoder generator matrix P.
The following steps outline the derivation of the matrix A. Also with a precoder P, the duality to Hm is present, i.e., Hm GsT = Hm (P Gm)T = 0. When replacing the pruned columns by zero columns, a matrix Hmprun of size (N0 − K0) × N0 is obtained. A permutation matrix Π1 can be used to formulate Hmprun as

Hmprun = [H1 | 0(N0−K0)×(K0−K1)] Π1.

H1 must be of full rank to ensure that P is of full rank. It can be further rephrased as

[H1 | 0(N0−K0)×(K0−K1)] = Q [I(N0−K0) | R(N0−K0)×K1 | 0(N0−K0)×(K0−K1)] Π2

with some matrices Q and R and a second permutation matrix Π2, where I(N0−K0) denotes an identity matrix. Based on this rephrasing, A can now be defined from the generator matrix Gm of the mother code and the permutation matrices Π1 and Π2 (the explicit expression is given as an image in the original). Let i be the information word of length K1 of the sub-code. The information part of A PT iT should be associated with R(N0−K0)×K1, while the part of A PT iT corresponding to pruned columns should be associated with all-zero columns. P is therefore computed such that these conditions hold (the defining equation is given as an image in the original). If H1 is of full rank, then A(N0 − K0 + 1 : N0, :) is invertible. Thus, a tool is available to determine P, and the additional requirement in the pruning procedure according to claims 5 and 6 is to ensure that H1 is full rank.
According to claim 8, a still flexible low-complexity realization of pruning is obtained by just omitting information bits from the input to the encoder of the LDPC mother code and setting them to a fixed value (e.g., to zero), also known at the receiver. This is performed by making use of so-called systematic LDPC codes, that is, LDPC codes for which the parity-check matrix has an upper (or lower) triangular structure. The pruning is then performed by simply omitting an information bit of the mother code, or equivalently, by removing the corresponding column in the information part of the parity-check matrix (the part which is not upper triangular). By doing so, the dimensions of Hs and Gs will be M0 × (N0 − (K0 − K1)) and K1 × (N0 − (K0 − K1)), respectively. The code rate is obtained as

R1 = 1 − M0/(N0 − (K0 − K1)) = K1/(N0 − (K0 − K1)).

Only the indices of the pruned columns of the
mother code need to be known at the transmitter and receiver in order to be able to encode and decode the pruned code. Thus, there is almost no complexity increase for realizing different UEP configurations with the same mother LDPC code. The method of claim 8 hence describes a pruning without a preprocessing matrix, just realized by omitting input bits to the encoder. One may slightly sacrifice performance, but the realization simplification will outweigh giving up the optimization options of a possible preprocessing.
Figure 4 illustrates a flow-chart of the pruning method according to the present invention.
Figure 5 shows the principal block structure of a transmission of source (e.g. video-, audio-) encoded data with different sensitivity to errors and its corresponding unequal protection. Well-known coding methods adapted to heterogeneous sensitivity of data often focus on average performance over the whole codeword. UEP could be achieved by puncturing or pruning convolutional codes to adapt the code rate without changing the decoder. One may interpret Figure 5 in this direction, where essentially different codes (possibly obtained by pruning or puncturing) would be devoted to the different importance classes. However, UEP properties can also be obtained within the same codeword and thus, the shown different coding gains can appear in the same codeword.
In the following a further description of the present invention is provided illustrating further embodiments of the invention by way of further figures.
1 Introduction
This paper deals with unequal error protection (UEP) LDPC codes, achieved by irregularity in the check node profile, while the bit node profile is set to be regular. A UEP coding scheme could be useful in the transmission of multi-media content (voice, fixed image, or video) whose characteristics have heterogeneous sensitivity to errors. The code stream of source-encoded blocks is hierarchically structured and typically contains:
• headers to describe the type and parameters of compression,
• control data for code-stream synchronization, position, or indexing,
• compressed data delivered from the source coder, e.g., speech encoder coefficients, image texture, or motion vectors.
For such data streams, errors on headers or control data generally lead to a source-decoder failure, since the true reconstruction parameters of the compression are missing. In contrast, errors on high-frequency coefficients of a DCT may be much less disturbing. Consequently, uniform protection for such a code stream would be sub-optimal. This highlights the interest of realizing unequal error protection by modifying the structure of a code. Well-known coding methods adapted to heterogeneous sensitivity of data often focus on average performance over the whole codeword. UEP could be achieved by puncturing or pruning (see [1]) convolutional codes to adapt the code rate without changing the decoder. In our work, we make use of LDPC codes [2] as a UEP coding scheme, because more and more standards consider these codes thanks to their very good performance. We concentrated on heterogeneously protecting some bits inside a codeword by suitable selection of irregularities of the code. The linear problem of hierarchical optimization of the variable node profile, assuming the check node profile to be fixed, has been studied in [3]. We chose to focus on the dual case of bit-regular check-irregular LDPC codes. To do so, we will start by presenting an asymptotic study of the new, minutely represented code ensembles, i.e., a generalized density evolution [4], and we will show in particular that a carefully chosen cost function can lead to families of UEP LDPC codes which are bit-regular. We discuss the optimization procedure and the advantage of our approach in Section 3. Moreover, in real systems, flexibility in terms of the types of data that can be protected is important, i.e., the parameters of the protection should be adaptable dynamically with minimum changes in the encoder and/or the decoder. In order to achieve this flexibility of our UEP codes, we propose a practical and flexible method based on the pruning of a mother code. This procedure is presented in Section 4. Finally, the results are presented and a conclusion given.
2 Detailed Representation of Irregular LDPC Codes
A very useful parameterization for our work is the detailed representation of irregular LDPC codes presented by Kasai et al. in [4]. They constructed new families of LDPC codes which are sub-ensembles of conventional irregular LDPC code ensembles. The detailed representation they adopted allows to design optimal codes more accurately by restricting the possible choices for the interleaver.
2.1 The Parameterization
These new code ensembles are introduced by specifying the fraction of edges connecting sets of variable nodes and check nodes with given degrees at the same time. Let B and D be two sets of irregularity degrees of variable and check nodes, respectively. A function π : B × D → [0, 1] is said to be the joint degree distribution of (B, D) if Σ_{b∈B} Σ_{d∈D} π(b, d) = 1. This function describes the connections between the different degrees of the code and is called the detailed representation of the code.
They also defined marginal degree distributions of variable and check blocks with respect to π, which are the usual polynomials of the conventional representation:

λ(b) = Σ_{d∈D} π(b, d),   ρ(d) = Σ_{b∈B} π(b, d).
It can be verified that ρ(b, d) equals the fraction of edges connecting nodes of degree b and d among all edges of degree b. This detailed representation can be used, for example, to describe the different Poisson constructions explored by MacKay et al. in [5].
2.2 Density Evolution with Detailed Representation
In [4], the authors also present a generalized density evolution for the newly proposed code ensembles, which contains the density evolution of conventional code ensembles as a special case. From Theorem 3 in [4], we can derive a detailed evolution of the mutual information of the messages on the edges under a Gaussian approximation. The description is "detailed" in the sense that we distinguish the mutual information of messages coming from bit nodes or check nodes of different degrees. Let s = 2/σ², where σ² is the noise variance of the additive white Gaussian noise (AWGN) channel. The function J, the mutual information between a bit and its LLR message when the latter is consistent Gaussian with mean m and variance 2m, is defined by

J(m) = 1 − ∫ (1/√(4πm)) e^{−(u−m)²/(4m)} log₂(1 + e^{−u}) du .

Let x_u^(l)(d) and x_v^(l)(b) be the mutual information between the input of the channel and the messages from check nodes of degree d to any bit node at the l-th iteration, and from bit nodes of degree b to any check node, respectively:

x_u^(l)(d) = 1 − J( (d−1) Σ_{b∈B} λ(d, b) J^{−1}( 1 − x_v^(l−1)(b) ) )    (1)

x_v^(l)(b) = J( s + (b−1) Σ_{d∈D} ρ(b, d) J^{−1}( x_u^(l)(d) ) )    (2)

where ρ(b, d) = π(b, d)/λ(b) and, dually, λ(d, b) = π(b, d)/ρ(d) is the fraction of edges connected to bit nodes of degree b among all edges of degree d.
From Equation (1), we observe that the smaller d is, the greater is the mutual information of the messages coming out of check nodes of degree d, i.e., the faster is the local convergence. In contrast, we see from Equation (2) that the mutual information of the messages coming out of bit nodes of degree b is larger when b is larger. This is what we are going to exploit to optimize the local convergence speeds of UEP LDPC codes.
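These monotonicity claims can be checked numerically. The sketch below is our own illustration, not part of the original text: the helper names (J, J_inv, check_update, bit_update) are ours, and J is evaluated by brute-force numerical integration of the consistent-Gaussian mutual information.

```python
import numpy as np

def J(m, n=4001):
    """Mutual information between a bit and its LLR ~ N(m, 2m) (consistent Gaussian)."""
    if m <= 0:
        return 0.0
    x = np.linspace(m - 12 * np.sqrt(2 * m), m + 12 * np.sqrt(2 * m), n)
    pdf = np.exp(-((x - m) ** 2) / (4 * m)) / np.sqrt(4 * np.pi * m)
    return 1.0 - np.sum(pdf * np.log2(1 + np.exp(-x))) * (x[1] - x[0])

def J_inv(y, lo=1e-9, hi=200.0):
    """Invert the (increasing) function J by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if J(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def check_update(d, x_v):
    """Eq. (1) for a single incoming MI x_v: output MI of a degree-d check node."""
    return 1.0 - J((d - 1) * J_inv(1.0 - x_v))

def bit_update(b, s, x_u):
    """Eq. (2) for a single incoming MI x_u: output MI of a degree-b bit node, s = 2/sigma^2."""
    return J(s + (b - 1) * J_inv(x_u))
```

Evaluating check_update for d = 3 and d = 6 at the same input confirms the observation above: the smaller check degree yields the larger output mutual information, while for bit nodes the larger degree wins.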
3 Achieving UEP with Check-Irregular LDPC Codes
3.1 A Cost Function for UEP
Let us define a sensitivity class as a set of information bits in the codeword that will have the same protection, i.e., approximately the same error probability at a given number of iterations. In practice, the sensitivity classes are defined by the source encoder. B and D can either be the sets of the degrees over the whole graph, in which case Equation (1) describes the usual Gaussian approximation of density evolution, or the sets of the degrees inside the k-th sensitivity class, called C_k. A check node belongs to a class C_k if it is linked to at least one bit node of this class. Consequently, a check node can belong to several sensitivity classes. The average mutual information of the messages coming out of the check nodes of class C_k to the bit nodes of this class can be expressed as
x_u^(l)(C_k) = Σ_{d∈D^(C_k)} ρ^(C_k)(d) · x_u^(l)(d) ,

where D^(C_k) is the set of check degrees occurring in C_k and ρ^(C_k)(d) denotes the fraction of edges connected to check nodes of degree d among all edges between the bit nodes of C_k and their check nodes.
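On a finite graph, the class profile ρ^(C_k)(d) entering this average can simply be measured from the parity-check matrix. The helper below is our own illustration (its name and interface are not from the text):

```python
import numpy as np
from collections import Counter

def class_check_profile(H, class_bits):
    """Edge-perspective check-degree profile of class C_k: among the edges
    between the bits of C_k and their checks, the fraction ending on a
    check node of total degree d."""
    deg = H.sum(axis=1)                       # total degree of each check node
    sub = H[:, class_bits]
    checks = np.flatnonzero(sub.sum(axis=1))  # checks linked to at least one bit of C_k
    cnt = Counter()
    for m in checks:
        cnt[int(deg[m])] += int(sub[m].sum()) # edges from C_k into this check
    total = sum(cnt.values())
    return {d: c / total for d, c in cnt.items()}
```

For a regular mother code every check has the same degree, so the profile is trivially concentrated; pruning is what makes it spread out per class.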
3.2 Discussion on Breaking the Concentration of Check Degrees
Chung et al. have shown in [6] that a concentrated check-degree distribution ρ(x) = ρ x^{j−1} + (1 − ρ) x^{j} maximizes the speed of convergence of the overall code. This also means that concentrated check degrees minimize the convergence threshold of the code, expressed in Eb/N0 for an AWGN channel. To achieve UEP properties by irregularity of the check profile, we have to increase the range of the check degrees, and thus break the concentration. Since not obeying concentration increases the gap to capacity of the overall code, we must define a tolerance on this gap before starting the optimization process. The global UEP code will converge more slowly, but its most protected classes faster than the ones of the concentrated code. A first solution to limit the degradation of the overall convergence threshold is to limit the range of the check irregularity around the concentrated degrees in the optimization. Alternatively, we could check after the optimization process whether the non-concentrated code has a threshold that is not too far from the optimum (concentrated-code) threshold. We have also verified by simulations that for short block lengths, the UEP-designed codes have a global performance similar to that of the concentrated code.
4 A Practical Means to Achieve UEP: Pruning a Mother Code
The problem in optimizing the check irregularity using Equation (6) lies in the fact that the optimization is non-linear due to the dependence between the irregularities of the different sensitivity classes. In [3], the authors proposed a hierarchical procedure to overcome this problem. In this section, we present a sequential procedure based on "code pruning", which has the advantage of providing UEP codes based on Equation (6) and also leads to a practical and flexible scheme for various UEP configurations.
4.1 General Presentation of Pruning
To achieve irregularity of the check nodes, we chose to prune a regular mother code in order to build a subcode with UEP behavior. Pruning is a well-known method for convolutional codes [1], but not so much for LDPC codes, for which it has been applied to reduce the influence of stopping sets [7]. Pruning away some bits of the codeword means considering them deterministic, i.e., fixing the pruned bits, e.g., to zero. Consequently, we do not transmit these bits, which disappear from the graph of the code since their messages are equal to infinity. Besides, since the edges connected to the pruned bits disappear, the girth (minimum cycle length) of the subcode can only increase. Thus, the columns of the parity-check matrix that correspond to these bits are removed. Let H_m and G_m denote the parity-check and generator matrices, respectively, of the mother code of dimension K0 and length N0. To construct a subcode of dimension K1, we prune away K0 − K1 columns of H_m and obtain the parity-check matrix H_s of the subcode. The subcode will have a length of N1 = N0 − (K0 − K1). The next section deals with our sequential pruning procedure.

4.2 The Sequential Pruning Procedure
Due to the chosen coding scheme, K1 and the mother code are fixed at the beginning of the optimization; therefore, the code rate is fixed to R1 = K1/N1. The Nc sensitivity classes to be optimized are defined by the proportions α(k). The number of information bits in class C_k is α(k)·R1·N1 for k = 1, ..., Nc − 1, with Σ_{k=1}^{Nc−1} α(k) = 1. The last class (the Nc-th class) contains the redundancy. The amount of redundancy is the same in the mother code and in the subcode, and equals (1 − R1)·N1 = (1 − R0)·N0. The optimization focuses on the two important quantities in the bound (6), ρ^(C_k) and d_max^(C_k), and is composed of two main stages. For a given class C_k:
• We choose the α(k)·K1 most protected bit nodes.
• For a given maximum degree d_max^(C_k) of the check nodes linked to the class C_k, we select the columns to be pruned in order to decrease ρ^(C_k).
• The following constraints need to be fulfilled:
• no pruned bit may be linked to a check node whose degree is lower than or equal to the concentration limit,
• no involuntary pruning (a column of H could otherwise become independent of all the others and then no longer define a code),
• the proportion constraint,
• the code rate constraint,
• the convergence constraint,
• the stability constraint.
If these previous constraints are fulfilled:
• We decrease d_max^(C_k) by one if the tolerance that we fixed regarding the concentration is not yet reached, and start over again.
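The loop above can be sketched as a greedy pass over the candidate columns. This is a hypothetical simplification of the procedure, not the authors' exact optimizer: it models only the concentration constraint (a floor d_min on the degree of any check node touched by a pruned bit) and scores candidates by how much high-degree check mass they remove; the proportion, rate, convergence, and stability constraints are not modeled.

```python
import numpy as np

def greedy_prune(H, n_prune, d_min):
    """Zero out (prune) up to n_prune columns of H, never driving a touched
    check node below degree d_min; prefer columns attached to high-degree checks."""
    H = H.copy()
    pruned = []
    for _ in range(n_prune):
        deg = H.sum(axis=1)                      # current check-node degrees
        best, best_score = None, -1
        for j in range(H.shape[1]):
            if j in pruned:
                continue
            col = H[:, j].astype(bool)
            if not col.any() or (deg[col] - 1 < d_min).any():
                continue                          # concentration constraint violated
            score = int(deg[col].sum())
            if score > best_score:
                best, best_score = j, score
        if best is None:
            break                                 # no admissible column left
        H[:, best] = 0                            # prune: column becomes all-zero
        pruned.append(best)
    return pruned, H
```

Zeroing the column (rather than deleting it) mirrors the H_m,prun notation used in the next section.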
4.3 General Preprocess
In this section, our goal is to find the generator matrix G_s of the subcode obtained by pruning the mother code, given H_s, which is H_m without the pruned columns. Consequently, the generator matrix G_s must fix the pruned bits to zero. A great advantage of the pruning method is that it enables us to use the mother decoder to decode any subcode created by pruning. However, we assume that the encoder of the subcode can differ from the encoder of the mother code. We observe that for any matrix P of size K1 × K0, we still have H_m G_m^T P^T = 0_{(N0−K0)×K1}. Thus, we look for a matrix P that is used at the transmitter side as G_s = P G_m and such that, for every binary information vector i of size K1, the elements of the codeword c = G_s^T i corresponding to the pruned bits are equal to zero. Consequently, the matrix P will be called the preprocessing matrix, since it can be considered as a kind of precode (see the corresponding figure). We now make explicit all the conditions that must be fulfilled by P, and how this matrix can be computed.
In the following, H_m,prun will denote H_m with its pruned columns replaced by zero columns. Then we have
Figure imgf000015_0002
We can reformulate H_m,prun using a permutation matrix Π_1 on its columns:

H_m,prun Π_1 = [ H_1 | 0_{(N0−K0)×(K0−K1)} ] ,

where H_1 is of size (N0 − K0) × N1.
In order to reach the target code rate R1 = K1/N1 and to have P of full rank, H_1 must be of full rank. This implies adding a constraint at the end of the optimization. We then assume in the following that H_1 is of full rank, which ensures that there exist a matrix Q and a permutation Π_2 such that:
Q H_1 Π_2 = [ I_{N0−K0} | H_{(N0−K0)×K1} ] ,

where I_{N0−K0} denotes the identity matrix of size N0 − K0 and H_{(N0−K0)×K1} is an arbitrary matrix of size (N0 − K0) × K1. Using Equation (7), we have
Figure imgf000016_0003
Let A = Π_2 Π_1 G_m^T and let i be the information word of length K1 of the subcode. The information part of A P^T i^T should be associated with the identity block, while the part of A P^T i^T corresponding to the pruned columns should consist of all-zero entries. P is therefore computed such that
Figure imgf000016_0004
We can show that if H_1 is of full rank, then A(N0 − K0 + 1 : N0, :) is invertible. With this procedure, we obtain a preprocessing matrix P which is of full rank. This method allows us to achieve different UEP configurations using different subcodes, only by changing the preprocessing matrix at the transmitter and knowing the indices of the pruned columns of the mother code at the receiver. Our scheme thus offers some flexibility. However, storing the preprocessing matrices can represent a drawback, since they are block-code matrices placed in front of the LDPC encoder. In the next section, we present a completely flexible particular case of the pruning procedure.
4.4 A Flexible Particular Pruning Case
What has been explained previously allows one to explore all reachable UEP configurations given the number of pruned columns, but may be too complex for a real system. An alternative is to restrict ourselves to choosing the pruned columns and the information columns of the subcode only among the information columns of the mother code. Then H_s and G_s are obtained by removing columns of H_m, together with the corresponding columns of G_m, which are columns of the identity part of G_m; the corresponding rows of G_m are also removed. H_s and G_s are then of size M0 × (N0 − (K0 − K1)) and K1 × (N0 − (K0 − K1)), respectively. They are both of full rank, and the code rate of the subcode is the target rate:
R_s = K1 / (N0 − (K0 − K1)) = K1/N1 = R1 .
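In this flexible case, pruning reduces to pure index bookkeeping. Below is our own sketch, using a systematic (7,4) Hamming code as a stand-in mother code (an assumption for illustration only): because G_m = [I_K | A] is systematic, dropping an information bit removes one row and one column of G_m and one column of H_m.

```python
import numpy as np

def prune_systematic(G_m, H_m, pruned_info):
    """Prune information bits of a systematic mother code G_m = [I_K | A]:
    drop the corresponding rows and columns of G_m and columns of H_m."""
    drop = set(pruned_info)
    rows = [i for i in range(G_m.shape[0]) if i not in drop]
    cols = [j for j in range(G_m.shape[1]) if j not in drop]
    return G_m[np.ix_(rows, cols)], H_m[:, cols]

# Stand-in mother code: systematic (7,4) Hamming code.
A = np.array([[1,1,0],[1,0,1],[0,1,1],[1,1,1]])
G_m = np.hstack([np.eye(4, dtype=int), A])
H_m = np.hstack([A.T, np.eye(3, dtype=int)])

G_s, H_s = prune_systematic(G_m, H_m, [0])
```

Every row of G_s is still a mother codeword with the pruned bit equal to zero, so H_s · G_s^T = 0 (mod 2) holds by construction and the mother decoder can be reused on the shortened words.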
Despite the restriction, this is a very flexible approach, since we only need to know the indices of the pruned columns of the mother code at the transmitter and at the receiver to encode and decode, so almost no memory is required to reach different UEP configurations with the same mother code. Thus, since the preprocessing matrix P has a non-negligible size, if we accept a slight loss in optimization, we would prefer the flexible case for real-world applications.

Table 1: Comparison of the degree distributions ρ^(C_k)(j) for the different classes of the concentrated code.

Figure imgf000017_0002

Table 2: Comparison of the degree distributions ρ^(C_k)(j) for the different classes of the non-concentrated code.

Figure imgf000017_0003
5 Results
We present here the results we obtained using the general pruning case, that is, the optimization ran over all columns of the parity-check matrix. In the simulations, we did not use the preprocessing matrix, since we worked with the all-zero codeword. We started from a regular (3, 6) LDPC mother code of length N0 = 2000 and code rate R0 = 1/2. The subcode has a length of N1 = 1500 and code rate R1 = 1/3. The optimization is done for Nc = 3 classes with α(1) = 0.1 and α(2) = 0.9. We compare the performance of the optimized non-concentrated code (check degrees between 2 and 6) and the almost concentrated code (check degrees between 4 and 6). The decoding is done using only the pruned parity-check matrix of the mother code.
Figure (6) shows the EXIT curves defined in Equation (5) for each class of the almost concentrated and non-concentrated check-irregular codes. The more the first class is protected, the more the less protected ones are degraded: the best protected class converges faster for the non-concentrated code than the corresponding one of the concentrated code. The intermediate classes are quite equivalent, whereas the last class of the non-concentrated code converges more slowly than the corresponding one of the concentrated code. Figure (7) represents the bit error rates of the UEP almost concentrated and non-concentrated codes after 30 decoding iterations. We observe that the check irregularity is a means to achieve UEP at a low number of iterations (accelerating the convergence), but also at a high number of iterations, since the differences between the classes are still visible after 30 decoding iterations when looking at the bit error rates. In particular, we still have better performance at the 30th iteration for the first class of the non-concentrated code than for the concentrated one, whereas the last class is worse for the non-concentrated code than for the almost concentrated one. We can see that stretching the check degrees allows a stronger differentiation of the error protection, without degradation of the total average bit error probability at this code length, compared with the concentrated code.
6 Conclusion
In this paper, we have proposed a method to optimize the UEP properties of a bit-regular code by optimizing its check irregularity. The so-called detailed representation of LDPC codes allowed us to define a cost function which improves the local convergence of the messages, thereby creating UEP behavior. We implemented the cost function by a highly flexible pruning method, which allows different UEP configurations with the same mother code. The next step of this work would be to combine bit and check irregularities to provide the best possible UEP with LDPC codes under iterative decoding.
References
[1] C.-H. Wang and C.-C. Chao. Path-Compatible Pruned Convolutional (PCPC) Codes: A New Scheme for Unequal Error Protection. In ISIT 1998, Cambridge, MA, USA, 1998.
[2] R.G. Gallager. Low-Density Parity-Check Codes. IRE Trans. on Inform. Theory, pages 21-28, 1962.
[3] C. Poulliat, D. Declercq, and I. Fijalkow. Optimization of LDPC Codes for UEP Channels. In ISIT 2004, Chicago, USA, June 2004.
[4] K. Kasai, T. Shibuya, and K. Sakaniwa. Detailedly Represented Irregular Low-Density Parity-Check Codes. IEICE Trans. Fundamentals, E86-A(10):2435-2443, October 2003.
[5] D.J.C. MacKay, S.T. Wilson, and M.C. Davey. Comparison of Constructions of Irregular Low-Density Parity-Check Codes. IEEE Trans. on Communications, 47(10):1449-1453, October 1999.
[6] S.-Y. Chung, T. Richardson, and R. Urbanke. Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation. IEEE Trans. on Inform. Theory, 47(2):657-670, February 2001.
[7] T. Tian, C. Jones, J.D. Villasenor, and R.D. Wesel. Construction of Irregular LDPC Codes with Low Error Floors. In ICC 2003, Anchorage, Alaska, USA, 2003.
[8] J. Chen, A. Dholakia, E. Eleftheriou, M.P.C. Fossorier, and X.-Y. Hu. Reduced-Complexity Decoding of LDPC Codes. IEEE Trans. on Communications, 53(8):1288-1299, August 2005.
[9] T. Richardson, A. Shokrollahi, and R. Urbanke. Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes. IEEE Trans. on Inform. Theory, 47(2):619-637, February 2001.
[10] I.M. Boyarinov and G.L. Katsman. Linear Unequal Error Protection Codes. IEEE Trans. on Inform. Theory, 27(2):168-175, March 1981.
Contents
1 Introduction
2 General Presentation of LDPC Codes
2.1 Definition, Parameterization, and Usual Notations
2.1.1 Definitions
2.1.1.1 Regular LDPC Codes
2.1.1.2 Irregular LDPC Codes
2.1.1.3 Systematic coding of LDPC codes
2.1.2 Decoding LDPC Codes by Belief Propagation
2.1.3 Density evolution and Gaussian Approximation
2.1.3.1 Density evolution
2.1.3.2 Gaussian Approximation: Mutual Information Evolution
2.1.3.3 Mutual Information for a Gaussian Consistent Channel
2.1.3.4 Evolution Equations
2.1.3.5 Stability Condition
3 State of the Art
3.1 Optimization of LDPC codes over some channels
3.1.1 Optimization over the AWGN Channel
3.1.2 Optimization over other channels
3.2 UEP Optimization: An Introduction
4 UEP LDPC Codes
4.1 UEP properties created by irregularities over bit nodes
4.1.1 Parameterization and Asymptotic Study of such UEP LDPC Codes
4.1.2 Cost Function for such UEP
4.2 Short LDPC: UEP linear block code optimization
4.2.1 Algebraic Properties and Unequal Error Protection
4.2.2 The Derived Algorithm
4.2.3 Results under Maximum Likelihood and Belief Propagation Decoding
4.3 Optimization of the Check-Node Profile
4.3.1 Parameterization of UEP LDPC codes
4.3.2 Density evolution for the detailed representation and UEP properties
4.3.3 Set of Possible Good Codes
4.3.4 A particular choice to realize UEP over check nodes
4.3.4.1 The chosen coding scheme
4.3.4.2 Case where we do not need P: only info columns of H are pruned and protected
4.3.4.3 Case where we prune away redundancy in H or choose protected columns among it, and then need P
Existence of P
Code rate of the subcode
Computation of the preprocessing matrix
4.3.4.4 Hierarchical optimization algorithm
4.3.4.5 Results
4.4 Puncturing
5 Conclusions
A
A.1 Transition from proportions of edges to proportions of nodes
A.2 Transition from proportions of nodes to proportions of edges
A.3 Expression of the code rate in terms of proportions
Bibliography
Chapter 1
Introduction
Nowadays, high-rate digital communications are very important: they allow increased data rates and numbers of users while achieving good performance. In order to position our work, we first present the global transmission scheme in Fig. A.1.
To illustrate our problem, let us describe the elements of this chain. We begin with the following blocks in the transmitter:
• Source coding: in this block, the source data are compressed and reshaped. Some parts of the source signal are more vulnerable than others.
• Channel coding: structured redundancy is added to make the data robust to errors caused by the channel. The code rate R is defined as the ratio between the number of transmitted information bits and the total number of transmitted bits. The source can be encoded in a uniform way or in a heterogeneous one in order to take the properties of the source signal into account (unequal error protection (UEP) techniques).
• Modulation: the order and the type of the modulation (PSK, ASK, QAM, multicarrier modulation) are managed here, as well as the power of the transmitted symbols, or the spreading when we have to deal with a multiple-access system based on code spreading.
• The physical channel: disturbances are introduced. The channel leads to intersymbol interference (ISI, caused by a fading channel) due to multi-paths and to multiuser interference (MUI), and adds noise (e.g., white Gaussian or impulse noise).
The receiver is composed of the corresponding blocks, which can be described as follows:
• Estimation and Equalization: the transmission channel (multiple paths) is estimated. Equalization allows to reduce or cancel the ISI.
• Demodulation/Despreading: used to recover bits from the received symbols and to separate the users in a multiple-access system.
• Channel decoding: corrects the remaining errors in the previously obtained binary sequence.
• Source decoding: reconstruction of the emitted data by decompressing the sequence coming out of the channel decoder.
In this work, we focus on the optimization of channel codes for UEP, which has to minimize the effect of errors on the media reconstruction by adapting the channel code to the structure of the source data (errors should rather affect the less important bits, considering the properties of the source). Here, we will only consider Low-Density Parity-Check (LDPC) codes, which are very flexible and show very good performance. Studies of other UEP methods have already been carried out, especially for UEP Turbo Codes [24].
Within the framework of channel-coding optimization for UEP using LDPC codes, the analysis of the UEP properties of LDPC codes by means of the usual or the detailed parameterization permits to express the bit error probability for each sensitivity class of the codeword. An optimization method is then proposed. In [17], a construction method is developed in order to achieve UEP for a given code rate and given UEP constraints (proportions of the classes), only by modifying the irregularity profile of the bit nodes while considering the check-node profile to be fixed.
The goal of this work was to focus on the UEP properties of LDPC codes achieved by pruning and puncturing. Pruning some bits of a codeword, i.e., fixing them (e.g., to 0) and then not transmitting them, or equivalently replacing the corresponding columns by zero columns in the parity-check matrix, directly modifies the irregularity profile of the check nodes and can achieve some UEP configuration. The resulting code, of lower code rate, is a subcode derived from a mother code. Thus, we study the consequences of modifying the check-node profile. The idea behind this is to achieve different UEP configurations with different pruning schemes and the same decoder.
By changing the check-node profile, we will see that we construct codes that converge faster on a part of the codeword, but also achieve UEP at an infinite number of iterations.
This thesis is structured as follows. In Chapter 2, we present an overall study of LDPC codes and especially density evolution, the analytic tool to analyze these codes and their behavior. An overview of the usual optimizations of LDPC codes is given in Chapter 3, as well as an introduction to Unequal Error Protection (UEP) methods. Chapter 4 explains the work of this thesis, i.e., the developed theory of UEP LDPC codes, a practical system to realize it, and the expected experimental results. Finally, conclusions are given in Chapter 5.

Chapter 2
General Presentation of LDPC Codes
LDPC (Low-Density Parity-Check) codes are known as a family of high-performance codes that are capacity-achieving on some standard channels like the binary erasure channel. They are an alternative to Turbo Codes, which also achieve very good performance on some standard channels. The following aspects guided us in considering this family of channel codes for UEP applications:
• The performance fits UEP-targeted multimedia communications: like Turbo Codes, LDPC codes are decoded iteratively, which allows to reach bit error probabilities of 10^-5 to 10^-6 for a wide range of signal-to-noise ratios. These are the required orders of magnitude for sensitive applications such as still-picture or video transmissions. A delay caused by the interleaver must be tolerated. Therefore, LDPC codes can be an alternative to Turbo Codes for UEP-targeted multimedia transmissions.
• Systematic code: for our applications, a systematic code is very useful. In such a code, the information bits are integrally copied into the codeword. The first interest is that even in the case of decoding errors (convergence to a word that is not a codeword), we should be able to find the systematic part, although with errors. Moreover, if the receiver has no channel decoder, the source decoder only has to ignore the redundancy part. Building a systematic code from any code is very easy.
• Easy to parameterize and thus to optimize: one of the biggest advantages of this family of codes is the possibility to optimize them according to the channel and the application, thanks to the explicit (analytical) characterization of the state of the decoder during the iterations in an asymptotic approach (infinite codeword length), as a function of the channel and receiver parameters. This represents a huge advantage compared to Turbo constructions, for which no analytical description exists, only simulations based on EXIT charts.
• A not completely explored domain: the optimization of the irregularity of LDPC codes for standard and non-standard channels has already been studied, especially by Poulliat in [17] for UEP applications, but only considering the bit nodes. Publications on the use of pruning for LDPC codes do not seem numerous (e.g., some to avoid stopping sets [18]), and are almost non-existent when pruning is considered to achieve UEP inside a codeword.
2.1 Definition, Parameterization, and Usual Notations
LDPC codes are low-density linear block codes, introduced by Gallager [8] in 1963. An LDPC code over GF(Q) (with Q = 2^q) is represented by its sparse parity-check matrix H of size M × N, whose non-zero elements belong to the Galois field GF(Q). N is the codeword length, K the number of information bits related to a codeword, M ≥ N − K the number of redundancy bits, and the code rate is R = K/N ≥ 1 − M/N, with equality if H is of full rank. The code is the set of words c ∈ GF(Q)^N such that H·c = 0. When Q = 2, this is the case of binary LDPC codes, and their description is done by parity equations. When Q > 2, this is the case of non-binary LDPC codes. In this work, we consider only binary LDPC codes. In each case, the structure of the parity-check matrix can be regular or not. A code is regular if the number of non-zero elements in every row (respectively column) is constant; it is irregular if these numbers are not constant.
2.1.1 Definitions
2.1.1.1 Regular LDPC Codes
Definition 2.1 A regular LDPC code with its three parameters (N, tc, tr) is defined by a matrix with exactly tc and tr ones per column and row, respectively.
The code rate is R = K/N ≥ 1 − t_c/t_r, with equality if H is of full rank. These three parameters define a family of regular codes, and one code among this family is given by a particular realization of the parity-check matrix. In an equivalent way, an LDPC code can be represented by a bipartite graph, called factor graph [12] or Tanner graph, made of two kinds of nodes: variable nodes representing the bits of the codeword, and check nodes associated with the parity-check functions. These two kinds of vertices are linked with each other by edges indicating in which parity equations the variable nodes, i.e., the associated bits, take part.
The i-th bit node and the l-th check node are connected if H_{l,i} = 1. The degree of connection of a bit node (likewise for a check node) is the number of edges linked to this node. A node is said to be i-connected, or of degree i, if it is connected to i edges. Figure (A.2) shows the representation of a regular code parameterized by (N = 8, t_c = 2, t_r = 4). One code corresponds to one particular realization of the interleaver.

2.1.1.2 Irregular LDPC Codes
A code is irregular if it is not regular. The usual parameterization of irregular LDPC codes is done by means of polynomials:
• Polynomial associated with the variable nodes:

λ(x) = Σ_{i=2}^{t_c,max} λ_i x^{i−1}

where λ_i is the proportion of edges of the graph connected to bit nodes of degree i, and t_c,max is the maximum number of edges linked to a bit node.

• Polynomial associated with the check nodes:

ρ(x) = Σ_{j=2}^{t_r,max} ρ_j x^{j−1}

where ρ_j is the proportion of edges of the graph connected to check nodes of degree j, and t_r,max is the maximum number of edges linked to a check node.

These two quantities are linked by the code rate:

R = 1 − (Σ_j ρ_j/j) / (Σ_i λ_i/i)
There is also a dual parameterization of the previous one:

• Polynomial associated with the variable nodes:

λ̃(x) = Σ_{i=2}^{t_c,max} λ̃_i x^{i−1}

where λ̃_i is the proportion of bit nodes of degree i.

• Polynomial associated with the check nodes:

ρ̃(x) = Σ_{j=2}^{t_r,max} ρ̃_j x^{j−1}

where ρ̃_j is the proportion of check nodes of degree j. The transition from one parameterization to the other is done by:

λ̃_i = (λ_i/i) / (Σ_k λ_k/k) ,   ρ̃_j = (ρ_j/j) / (Σ_k ρ_k/k)

Thus, a family of irregular codes is parameterized by (N, λ(x), ρ(x)). The regular case is the particular case of this parameterization in which λ(x) and ρ(x) are monomials. Figure (A.3) is a graphical representation of this kind of code. (λ(x), ρ(x)) defines the irregularity profile of the code with respect to the columns and rows.
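As a small numerical check, the edge-to-node conversion and the code-rate formula can be coded directly (the helper names are ours, not from the text):

```python
def edge_to_node(edge):
    """Convert edge-perspective fractions {degree: fraction} into node fractions."""
    z = sum(f / d for d, f in edge.items())
    return {d: (f / d) / z for d, f in edge.items()}

def code_rate(lam, rho):
    """R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i)."""
    return 1.0 - sum(f / d for d, f in rho.items()) / sum(f / d for d, f in lam.items())
```

For the regular (3,6) code, code_rate({3: 1.0}, {6: 1.0}) gives the expected rate 1/2.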
2.1.1.3 Systematic coding of LDPC codes
For practical reasons, codes should be systematic: information bits are directly copied into the codeword. In general, building the generator matrix G from H is not easy. Nevertheless, it is possible to encode using the parity-check matrix. Several methods exist to make the encoding systematic; however, they are out of our scope. We will only consider the simplest method, using an upper-triangular matrix for H with ones on the main diagonal: the info bits are associated with the non-triangular part, and the redundancy bits are computed recursively from the parity equations and associated with the triangular part. The sparsity of the matrix is more or less preserved. We will thus only consider systematic matrices.
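For an H of the form [A | T] with T upper triangular and unit diagonal, the recursive computation of the redundancy can be sketched as follows. This is our own illustration; the small example reuses a systematic (7,4) Hamming parity-check matrix, whose triangular part happens to be the identity.

```python
import numpy as np

def encode_triangular(H, info):
    """Systematic encoding with H = [A | T], T upper triangular with unit diagonal.
    Parity bits are obtained by back-substitution through the parity equations."""
    M, N = H.shape
    K = N - M
    c = np.zeros(N, dtype=int)
    c[:K] = info
    for m in range(M - 1, -1, -1):
        # choose c[K+m] so that parity equation m is satisfied (mod 2)
        c[K + m] = (H[m, :K + m] @ c[:K + m] + H[m, K + m + 1:] @ c[K + m + 1:]) % 2
    return c

H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
c = encode_triangular(H, [1, 0, 1, 1])
```

Back-substitution works because, with T upper triangular, parity equation m involves only the parity bits K+m, ..., K+M−1, which are already known when row m is processed.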
2.1.2 Decoding LDPC Codes by Belief Propagation
Although maximum-likelihood decoding of LDPC codes is possible, its complexity increases too much as soon as we consider sufficiently long codes, which is important to obtain decent performance. A sub-optimal decoding algorithm, known as the Sum-Product or Belief Propagation (BP) algorithm, is used instead. It spreads messages carrying probabilities or log-likelihood ratios (LLRs) along the edges. To each edge, two messages are associated, one for each direction. The principle of BP is Bayes' rule applied locally (to every bit of the codeword) and iteratively to estimate the a posteriori probability (APP) of every bit. It has been shown that over a cycle-free graph (tree case), the local factorization of Bayes' rule leads to the exact computation of the APPs of the bit nodes. In this case, the messages over the whole graph are independent of each other. However, in a graph with cycles (which is the case for any finite-length LDPC code), the messages are not independent, and the APPs are not computed exactly, that is, the algorithm is no longer optimal. The sparser the graph, i.e., H, the fewer cycles there are, and the less important the dependency between the messages will be.
Definition 2.2 The messages over the edges are one-dimensional and called LLRs, for log-likelihood ratios. The LLR of a message coming from a variable node will be denoted by v, and u will denote a message coming from a check node. They are respectively defined by

v = log( Pr(c = 0 | y) / Pr(c = 1 | y) ) ,   u = log( Pr(c = 0 | y) / Pr(c = 1 | y) ) ,   (2.3)

where, for v, c is the bit value of the node and y denotes all the information available to the node up to the present iteration from edges other than the one carrying v; for u, c is the bit value of the variable node that receives the message, and y denotes all the information available to the check node up to the present iteration from edges other than the one carrying u.
Let us now present the BP algorithm, where the messages over the edges are one-dimensional (LLRs), with v = log(Pr(c=0|y)/Pr(c=1|y)) the outgoing message from a bit node and u = log(Pr(c=0|y)/Pr(c=1|y)) the outgoing message from a check node. Each decoding iteration consists of two stages:
• Update of a bit node of degree i (notations in Fig. (A.4)):

v_m = u_0 + Σ_{k≠m} u_k

v_m is the message (LLR) on the m-th edge going out of a bit node. The messages u_k are the LLRs coming out of the check nodes, and u_0 is the LLR of the channel observation. At the first iteration, all messages u_k are equal to zero.
• Update of a check node of degree j (notations in Fig. (A.5)):

tanh(u_k/2) = Π_{m≠k} tanh(v_m/2) , ∀k = 1, …, j

u_k is the message (LLR) on the k-th edge going out of a check node. The messages v_m are the LLRs coming out of the bit nodes.
One iteration of the BP algorithm is accomplished when all messages of the graph have been computed once by the previous equations. After L iterations, we can compute the a posteriori ratio of each bit node, given by

u_n^app = u_0 + Σ_{k=1}^{i} u_k ,

and the final decision on the binary value of data node n is taken as

ĉ_n = (1 − sign(u_n^app)) / 2 , ∀n = 1, …, N.
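A direct, unoptimized transcription of the two update rules and the final decision can be written as follows. The edge bookkeeping and the small Hamming-code test matrix below are our own; a real implementation would vectorize this.

```python
import numpy as np

def bp_decode(H, llr_ch, n_iter=30):
    """Flooding BP on a binary parity-check matrix H with channel LLRs llr_ch."""
    M, N = H.shape
    rows = [np.flatnonzero(H[m]) for m in range(M)]     # bits of each check
    cols = [np.flatnonzero(H[:, n]) for n in range(N)]  # checks of each bit
    u = {(m, n): 0.0 for m in range(M) for n in rows[m]}  # check -> bit LLRs
    hard = (np.asarray(llr_ch, dtype=float) < 0).astype(int)
    for _ in range(n_iter):
        # bit-node update: v = u0 + sum of the other incoming check messages
        v = {(m, n): llr_ch[n] + sum(u[(m2, n)] for m2 in cols[n] if m2 != m)
             for (m, n) in u}
        # check-node update: tanh rule
        for (m, n) in u:
            prod = np.prod([np.tanh(v[(m, n2)] / 2.0) for n2 in rows[m] if n2 != n])
            u[(m, n)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # a posteriori LLR and hard decision
        app = np.array([llr_ch[n] + sum(u[(m, n)] for m in cols[n]) for n in range(N)])
        hard = (app < 0).astype(int)
        if not ((H @ hard) % 2).any():
            break                                        # valid codeword found
    return hard
```

The clipping before arctanh avoids infinities when all incoming messages are strongly saturated.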
2.1.3 Density evolution and Gaussian Approximation
Here, the analysis and optimization of LDPC codes are presented in an asymptotic context under BP decoding.
2.1.3.1 Density evolution
In [19] and [20], a general method that permits to predict the asymptotic performance is presented: the authors of [20] proved a so-called concentration theorem, according to which the decoding performance over any random graph converges to its average performance when the codeword length is large enough. Thus, a relevant evaluation of the performance of LDPC codes is possible in the limit case of infinite codeword lengths. The infinite graph can then be considered as a tree (cycle-free graph), which allows to consider all messages as independent of each other. The method called Density Evolution, proposed in [19, 20], follows the evolution of the probability densities of the messages spreading over the whole graph when using belief propagation. The messages are assumed to be independent and identically distributed (iid). Let us express this evolution of the densities of the messages [19].
Let a denote one bit node, to which is associated the channel observation u_0 = log(P(y_a|x_a = 0)/P(y_a|x_a = 1)).
Update of bit node a of degree i:
m_0 = u_0 + m_1 + … + m_{i−1}   (2.4)
Update of check node β of degree j:
m_0 = γ^{−1}( γ(m_1) + … + γ(m_{j−1}) )   (2.5)
where
γ(z) = ( sign(z), −log tanh(|z|/2) ) ∈ GF(2) × [0, +∞],
with
sign(z) = 0 if z > 0; 0 with probability 0.5 if z = 0; 1 with probability 0.5 if z = 0; 1 if z < 0.
Figure (A.6) shows what density evolution computes: the densities P_{l+1} and Q_{l+1} of the two kinds of messages as functions of P_l and Q_l, respectively. Let us do this computation. Hypotheses:
• The zero codeword is transmitted: x = 1 (BPSK mapping).
• Cycle-free graph (all messages are independent), due to the concentration theorem around the cycle-free case for infinite code length.
P_l(z) is the average density over the codes of the family whose underlying graph is a tree.
Let ⊗ denote convolution.
• At a bit node:
By the independence assumption, the random variables that are summed up in Eq. (2.4) are independent. So, the density of their sum is the convolution of their densities: for a degree-i bit node, the outgoing density is P_0 ⊗ Q_l^{⊗(i−1)}. Since an edge is connected to a bit node of degree i with probability λ_i, we obtain for an irregular graph:
P_{l+1} = P_0 ⊗ Σ_i λ_i Q_l^{⊗(i−1)}   (2.7)
• At a check node:
Let Γ denote the density transformation induced by γ: if a message has density P, then its γ-image has density Γ(P). Using Eq. (2.5), and since the γ-images of the j − 1 independent incoming messages are summed, their densities convolve in the transform domain. For a degree-j check node the outgoing density is therefore Γ^{−1}( Γ(P_l)^{⊗(j−1)} ), and, averaging over ρ and using the linearity of Γ^{−1}:
Q_l = Γ^{−1}( Σ_j ρ_j Γ(P_l)^{⊗(j−1)} )   (2.9)
• By combining Eq. (2.7) and Eq. (2.9), we obtain the desired recursion for P_{l+1} in terms of P_l:
Definition 2.3 Density evolution, presented in [19], is expressed by
P_{l+1} = P_0 ⊗ λ_⊗( Γ^{−1}( ρ_⊗( Γ(P_l) ) ) ),
with λ_⊗(Q) = Σ_i λ_i Q^{⊗(i−1)} and ρ_⊗(P) = Σ_j ρ_j P^{⊗(j−1)}.
Assuming the zero codeword is transmitted, they computed the equations describing density evolution along the iterations, and this analysis led to the following main results when binary-input, output-symmetric channels are considered (remember that a binary-input channel, x = 0, 1, is said to be output-symmetric if and only if the conditional probability of the output fulfills p(y|x = 0) = p(−y|x = 1)):
• Consistency
Definition 2.4 A probability density f(x) is said to be consistent (i.e., exponentially symmetric) iff
f(x) = e^x f(−x), ∀x ∈ ℝ
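Def. (2.4) can be checked numerically for the Gaussian densities used below; a small sketch (ours; the means and test points are arbitrary):

```python
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def is_consistent_gaussian(m, points=(-3.0, -0.5, 0.8, 2.4), tol=1e-12):
    # a Gaussian N(m, 2m) satisfies f(x) = e^x * f(-x) for every x
    return all(abs(gauss_pdf(x, m, 2.0 * m)
                   - math.exp(x) * gauss_pdf(-x, m, 2.0 * m)) < tol
               for x in points)
```

A Gaussian whose variance is not twice its mean, e.g. N(2, 2), violates the condition.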
According to [19], if the channel is a binary-input, output-symmetric channel, the initial message densities are consistent in the sense of Def. (2.4) (Proposition 1 of [19], page 629), and this property is kept along the iterations of decoding (Theorem 3 of [19], page 628).
• Convergence
According to the consistency conservation properties, Theorems 7 and 8 of [19] show that the error probability P_e^{(l)} is a non-increasing function of l and converges to zero as l tends to infinity, or equivalently as the message densities tend to Δ.
• Stability condition
Analysing convergence by density evolution, Theorem 5 of [19] (page 630) shows that studying the stability in the neighborhood of the fixed point allows determining a necessary condition on the parameters of the code to ensure the convergence of the error probability to zero.
Theorem 2.1 Let S = ∫_ℝ f_0(x) e^{−x/2} dx, where f_0 is the consistent initial density of the messages, and let λ'(x) and ρ'(x) denote the derivatives of the irregularity polynomials λ(x) = Σ_{i=2}^{tc_max} λ_i x^{i−1} and ρ(x) = Σ_{j=2}^{tr_max} ρ_j x^{j−1}. If λ'(0)ρ'(1) < S^{−1}, then the error probability converges to zero; otherwise it converges to a non-zero value.
This condition gives an upper bound on the value of λ_2. Under BP decoding, density evolution permits showing that LDPC codes have a threshold behavior: there is an optimal threshold δ* of the signal-to-noise ratio beyond which the block error probability converges to zero for infinite codeword length. In the case of the AWGN (Additive White Gaussian Noise) channel, the optimal threshold is given by the signal-to-noise ratio δ* = (Eb/N0)*. The asymptotic performance of BP-decoded LDPC codes can be compared with the Shannon limit, and we can then determine what could be the best family of codes for a given channel. This has been done in [19] to find the degree distribution pairs (λ(x), ρ(x)) that have the lowest threshold at a given code rate.
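For the BPSK/AWGN case the initial density is f_0 = N(2/σ², 4/σ²), and a standard Gaussian moment computation gives S = e^{−m_0/4} = e^{−1/(2σ²)} with m_0 = 2/σ², hence the bound λ_2 < e^{1/(2σ²)}/ρ'(1). A small numerical sketch (ours; the sample ρ(x) = x^5 is hypothetical):

```python
import numpy as np

def S_numeric(sigma2, half_width=40.0, n=200001):
    # S = integral of f0(x) * exp(-x/2) with f0 = N(m0, 2*m0), m0 = 2/sigma2
    m0 = 2.0 / sigma2
    x = np.linspace(m0 - half_width, m0 + half_width, n)
    y = (np.exp(-(x - m0) ** 2 / (4.0 * m0)) / np.sqrt(4.0 * np.pi * m0)
         * np.exp(-x / 2.0))
    return float((y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0]))  # trapezoid rule

def lambda2_stability_bound(sigma2, rho):
    # Theorem 2.1: lambda'(0) * rho'(1) < 1/S  =>  lambda_2 < 1 / (S * rho'(1))
    rho_prime_1 = sum(r * (j - 1) for j, r in rho.items())
    return 1.0 / (S_numeric(sigma2) * rho_prime_1)
```

The numerical value matches the closed form e^{−1/(2σ²)}, confirming the exponent in the stability bound.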
2.1.3.2 Gaussian Approximation: Mutual Information Evolution
We now present the asymptotic study of LDPC codes over the AWGN channel, which we are going to use from now on. Since the previously introduced density evolution method is too complex to be easily applied, a simpler version has been introduced by Chung [6] for the particular case of transmission over the Gaussian channel. The message densities are modeled by a Gaussian density or a mixture of Gaussian densities for regular and irregular codes, respectively. Thanks to the consistency of the messages along the stages of BP decoding, this approximation allows boiling the asymptotic decoding study down to the study of only one parameter along the iterations. This kind of density approximation was first introduced by ten Brink for the analysis of the convergence behavior of the iterative decoding of parallel concatenated codes [22] using EXIT charts. This approach has then been used for many concatenated systems, like turbo-equalization, by modeling the input extrinsic information by a Gaussian density N(m, 2m). Due to the difficulty of expressing density update rules in many of these systems, the input-output relations are computed by Monte Carlo simulations, which is equivalent to an asymptotic study when considering infinite codeword lengths.
The huge advantage of LDPC codes is that their parameters and the parameters of the channel allow finding an analytical relation for the value of the study parameter (which can be m, for example) between iterations l and l + 1. In [6], the densities of the messages u coming from check nodes and v from bit nodes are modeled by consistent Gaussian densities. This seems realistic for the messages v, but debatable for the messages u (Figures 4 and 5 in [6]). We assume transmission using BPSK modulation over an AWGN channel whose noise variance is σ². The zero codeword is transmitted (since, for a channel and a BP decoder fulfilling the symmetry condition, the error probabilities at the lth iteration do not depend on the transmitted codeword). Therefore u_0 is Gaussian N(2/σ², 4/σ²), which is consistent according to Def. (2.4). Theorem 3 of [19] (p. 628) asserts that consistency is kept along the iterations for a given binary-input, memoryless, output-symmetric channel. Then, in order to be able to model the message densities as Gaussian, they must fulfil the consistency condition: N(m, σ²) is consistent if and only if σ² = 2m. That is the reason why the density evolution can be expressed as a one-parameter evolution. [6] chose the mean of the messages, but we can choose, still under the Gaussian approximation of the message densities, to follow the mutual information of a virtual AWGN channel whose output would be the messages v or u coming out of the bit nodes or check nodes, respectively.
2.1.3.3 Mutual Information for a Gaussian Consistent Channel
Let v be a message such that v ∼ N(±m, 2m), i.e., the output of a binary-input Gaussian channel. The mutual information between v and the input c of the virtual channel is given by:
I(v; c) = Σ_{c=±1} (1/2) ∫ f(v|c) log2( 2 f(v|c) / (f(v|c = 1) + f(v|c = −1)) ) dv   (2.10)
We now shorten f_{v|c}(v|c) to f(v|c). Since the channel is symmetric, we have f(v|c = −1) = f(−v|c = 1), and by consistency f(−v|c = 1) = f(v|c = 1) e^{−v}; this implies:
I(v; c) = 1 − ∫ f(v|c = 1) log2(1 + e^{−v}) dv
J is called the mutual information function, linking the mutual information of a Gaussian consistent channel to the mean of the messages whose density is N(m, 2m). Equation (2.10) is rewritten as:
x = J(m) = 1 − E_{v∼N(m,2m)}[ log2(1 + e^{−v}) ]
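J(m) has no simple closed form, but it is easy to evaluate numerically, and its inverse can be obtained by bisection since J is strictly increasing; a sketch (ours; the integration span, the point counts, and the bracketing interval are arbitrary choices):

```python
import numpy as np

def J(m, n=20001):
    """Mutual information x = J(m) of a consistent Gaussian N(m, 2m)."""
    if m <= 0.0:
        return 0.0
    s = np.sqrt(2.0 * m)
    v = np.linspace(m - 12.0 * s, m + 12.0 * s, n)
    pdf = np.exp(-(v - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    y = pdf * np.log2(1.0 + np.exp(-v))
    return 1.0 - float((y.sum() - 0.5 * (y[0] + y[-1])) * (v[1] - v[0]))

def J_inv(x, lo=0.0, hi=400.0, iters=80):
    """Inverse of J by bisection (J is continuous and strictly increasing)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)
```

J(0) = 0 (no information), J grows monotonically with the mean, and J(m) → 1 as m → ∞.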
J is continuous and a strictly monotonic function, so J^{−1} exists and permits computing the mean of the messages from the mutual information.
2.1.3.4 Evolution Equations
Let x_cv and x_vc be the mutual information associated with the messages going from check nodes to variable nodes and from variable nodes to check nodes, respectively. From [7], we have the following update relations:
• Update at variable nodes
x_vc^{(l)} = Σ_{i=2}^{tc_max} λ_i J( 2/σ² + (i − 1) J^{−1}( x_cv^{(l−1)} ) )   (2.12)
• Update at check nodes
x_cv^{(l)} = 1 − Σ_{j=2}^{tr_max} ρ_j J( (j − 1) J^{−1}( 1 − x_vc^{(l)} ) )   (2.13)
Proof: Let a denote a bit node and m̄ the mean of a message m. From Eq. (2.4) we have
m̄_0^{(l)} = m̄_{u_0} + m̄_1^{(l−1)} + … + m̄_{i−1}^{(l−1)}.
Assuming that m̄^{(l−1)} is the average over the whole graph of the messages coming from the check nodes, and by the symmetry of the channel:
m̄_0^{(l)} = 2/σ² + (i − 1) · m̄^{(l−1)}   (2.14)
(with i the degree of a; for the a posteriori ratio, all i incoming messages are summed and the mean is 2/σ² + i · m̄^{(l)}). And using the J function, we have m̄ = J^{−1}(x). So, in the same way as for general density evolution, taking the average over the whole graph of the messages coming from the variable nodes leads to Eq. (2.12).
Proof of Eq. (2.13): Now let β denote a check node. By Bayes' rule, a message v can be written v = log(v[0]/v[1]), where x is the random variable describing the codeword bit value associated with the variable node a, y is the random variable describing all the information incorporated into this message, and v[0] = P(y|x = 1), v[1] = P(y|x = −1), normalized so that v[0] + v[1] = 1. A message going out of a check node will generally be called u. With this notation tanh(v/2) = v[0] − v[1], and the check-node update reads
tanh(u_k/2) = Π_{m≠k} tanh(v_m/2).
Using the discrete Fourier transform on GF(2), DFTv = [v[0] + v[1], v[0] − v[1]] = [1, tanh(v/2)], this product becomes a component-wise product of the transformed messages. Defining the log-ratio of the transform,
DFTv = log( (v[0] + v[1]) / (v[0] − v[1]) ) = −log tanh(v/2),
the check node sums up the messages, exactly like a bit node, but in the frequency domain. Moreover, according to Eq. (2.10), we can express the mutual information of DFTv, which is exactly:
x_DFTv = 1 − x_v   (2.15)
We can then describe the evolution of x_DFTu exactly in the same way as x_vc, since the update of DFTu is exactly the same as that of m_{0,a}: the transformed check-node output is the sum of the j − 1 transformed incoming messages. Finally, averaging over the whole graph and applying Eq. (2.15), we prove Eq. (2.13):
x_cv^{(l)} = 1 − Σ_j ρ_j J( (j − 1) J^{−1}( 1 − x_vc^{(l)} ) )   □
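Equations (2.12) and (2.13) can be iterated directly to check the convergence behavior of a degree distribution pair. The following self-contained sketch (ours; it re-implements the numerical J so it can run on its own, and the (3,6)-regular pair and the noise levels are example values) tracks x_cv over the iterations:

```python
import numpy as np

def J(m, n=4001):
    # numerical mutual information function of a consistent Gaussian N(m, 2m)
    if m <= 0.0:
        return 0.0
    s = np.sqrt(2.0 * m)
    v = np.linspace(m - 12.0 * s, m + 12.0 * s, n)
    pdf = np.exp(-(v - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    y = pdf * np.log2(1.0 + np.exp(-v))
    return 1.0 - float((y.sum() - 0.5 * (y[0] + y[-1])) * (v[1] - v[0]))

def J_inv(x, lo=0.0, hi=400.0, iters=60):
    # bisection inverse; J is continuous and strictly increasing
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def exit_recursion(lam, rho, sigma2, n_iter=100):
    """Iterate Eqs. (2.12)-(2.13); lam / rho map a degree to its edge proportion."""
    x_cv = 0.0
    for _ in range(n_iter):
        x_vc = sum(l * J(2.0 / sigma2 + (i - 1) * J_inv(x_cv))
                   for i, l in lam.items())           # Eq. (2.12)
        x_cv = 1.0 - sum(r * J((j - 1) * J_inv(1.0 - x_vc))
                         for j, r in rho.items())     # Eq. (2.13)
    return x_cv

x_good = exit_recursion({3: 1.0}, {6: 1.0}, sigma2=0.5)  # above threshold
x_bad = exit_recursion({3: 1.0}, {6: 1.0}, sigma2=0.9)   # below threshold
```

With R = 1/2 and BPSK, σ² = 0.5 corresponds to Eb/N0 = 3 dB, above the (3,6) threshold of about 1.1 dB, so x_cv converges to 1; at σ² = 0.9 (about 0.46 dB) the recursion stalls at a fixed point below 1.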
Note that we can also express the evolution by following the mean of the messages over the graph during decoding. To do so, we need the function φ.
Definition 2.5
φ(x) = 1 − (1/√(4πx)) ∫_ℝ tanh(u/2) e^{−(u−x)²/(4x)} du if x > 0, and φ(0) = 1.   (2.16)
φ is continuous and strictly decreasing on [0, +∞), so its inverse φ^{−1} exists.   (2.18)
We then express the evolution of the mean m_u^{(l)} of the check-to-bit messages, computed exactly in the same way as the mutual information [6]:
m_u^{(l)} = Σ_j ρ_j φ^{−1}( 1 − [ 1 − Σ_i λ_i φ( 2/σ² + (i − 1) m_u^{(l−1)} ) ]^{j−1} )
The combination of Eq. (2.12) and (2.13) yields the EXIT chart of the code defined by (λ(x), ρ(x)). It is an explicit non-linear function of (λ(x), ρ(x)) and of the parameters of the channel (σ² in our case):
x_cv^{(l)} = F( λ, ρ, x_cv^{(l−1)}, σ² ),
where λ = [λ_2, …, λ_{tc_max}]^T and ρ = [ρ_2, …, ρ_{tr_max}]^T. Several optimization methods exist, which can consider ρ(x) as fixed in order to make F linear in λ. The convergence threshold is defined by the smallest Eb/N0 beyond which x_cv^{(l)} → 1 when l → ∞.
2.1.3.5 Stability Condition
Under the Gaussian approximation, [6] provides a looser stability condition in the neighborhood of the fixed point, whereas the general condition given by density evolution for the AWGN channel is
λ'(0) ρ'(1) < e^{1/(2σ²)}.
Jensen's inequality shows that the first condition is looser than the second one.
The stability condition is very important because it controls the mutual information behavior at very low error probabilities (or, equivalently, when the mutual information is close to 1).
Chapter 3
State of the Art
This chapter first introduces the principal possibilities for optimization using the characteristics of LDPC codes; we then focus on already studied UEP optimization strategies, before presenting our own strategies in the next chapter.
3.1 Optimization of LDPC codes over some channels
3.1.1 Optimization over the AWGN Channel
One of the most common criteria is to minimize the convergence threshold of an LDPC code of code rate R:
δ* = argmin { Eb/N0 : F(λ, ρ, x, σ²) > x, ∀x ∈ [0, 1) }   (3.1)
In order to simplify the optimization, we can first optimize only λ(x) for a fixed ρ(x). The optimization is carried out in two stages:
• Maximization of the code rate for fixed ρ(x) and σ²:
The most used cost function consists in optimizing the code rate R. The problem becomes linear, since the cost function and the constraints are linear in λ(x), and it can thus be solved by linear programming. For given ρ(x) and σ², we determine the λ(x) that maximizes R. Let λ = [λ_2, …, λ_{tc_max}]^T and 1/tc = [1/2, …, 1/tc_max]^T. Remembering the relation
R = 1 − ( Σ_j ρ_j / j ) / ( Σ_i λ_i / i ),
the optimization problem can be expressed as:
λ_opt = argmax_λ ( (1/tc)^T λ )
under the constraints:
• Mixture constraint: λ^T 1 = 1
• Proportion constraint: ∀i = 2 … tc_max, λ_i ∈ [0, 1]
• Convergence constraint: F(λ, ρ, x, σ²) > x, ∀x ∈ [0, 1)
• Stability constraint: λ_2 ≤ e^{1/(2σ²)} / ρ'(1)
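A sketch of this linear program (ours: the discretization of the convergence constraint on a finite grid, the slack eps, the numerical J, and the use of SciPy's linprog are our choices, not prescribed by this document):

```python
import numpy as np
from scipy.optimize import linprog

def J(m, n=4001):
    # numerical mutual information function of a consistent Gaussian N(m, 2m)
    if m <= 0.0:
        return 0.0
    s = np.sqrt(2.0 * m)
    v = np.linspace(m - 12.0 * s, m + 12.0 * s, n)
    pdf = np.exp(-(v - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    y = pdf * np.log2(1.0 + np.exp(-v))
    return 1.0 - float((y.sum() - 0.5 * (y[0] + y[-1])) * (v[1] - v[0]))

def J_inv(x, lo=0.0, hi=400.0, iters=60):
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def maximize_rate(degrees, rho, sigma2, n_grid=30, eps=1e-4):
    """Maximize sum_i lambda_i / i for fixed rho(x) and sigma^2 (linear program)."""
    rho_prime_1 = sum(r * (j - 1) for j, r in rho.items())
    A_ub, b_ub = [], []
    for x in np.linspace(0.0, 0.995, n_grid):
        # mutual information out of the check nodes when x_vc = x
        x_cv = 1.0 - sum(r * J((j - 1) * J_inv(1.0 - x)) for j, r in rho.items())
        # convergence constraint, linear in lambda: sum_i lambda_i f_i(x) >= x + eps
        f = [J(2.0 / sigma2 + (i - 1) * J_inv(x_cv)) for i in degrees]
        A_ub.append([-fi for fi in f])
        b_ub.append(-(x + eps))
    # stability constraint: lambda_2 <= exp(1/(2 sigma^2)) / rho'(1)
    A_ub.append([1.0 if i == 2 else 0.0 for i in degrees])
    b_ub.append(float(np.exp(1.0 / (2.0 * sigma2))) / rho_prime_1)
    res = linprog([-1.0 / i for i in degrees], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * len(degrees)], b_eq=[1.0], bounds=(0.0, 1.0))
    lam = dict(zip(degrees, res.x))
    rate = 1.0 - sum(r / j for j, r in rho.items()) / sum(l / i for i, l in lam.items())
    return rate, lam, res.status

rate, lam, status = maximize_rate([2, 3, 4, 5, 6, 7, 8], {6: 1.0}, sigma2=0.5)
```

At σ² = 0.5, the (3,6)-regular profile λ_3 = 1 is feasible, so the optimized mixture reaches a rate of at least 1/2 while respecting the stability bound on λ_2.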
• Threshold minimization
Definition 3.1 A concentrated degree distribution over the check nodes is defined by the polynomial
ρ(x) = ρ x^{k−1} + (1 − ρ) x^k
From here on, we will, improperly, call a code with such a concentrated ρ polynomial a concentrated code.
Once R is determined, we increase σ² as long as R can be reached. The resulting threshold is δ = (Eb/N0)* = 1/(2Rσ*²). ρ(x) can now be optimized. From Theorem 2 of [6] (page 665), we learn that a concentrated degree distribution optimizes the speed of convergence. That is why, assuming ρ(x) of this form, this optimization means optimizing the one parameter ρ̄ = k − ρ, which is the average degree of connectivity of the check nodes. Finally, the degree distribution pair (λ(x), ρ(x)) that minimizes the threshold is chosen.
Hence, the evolution of the mutual information, in the form of an asymptotic study for infinite codeword length, shows that an optimal value ρ̄_opt exists for each tc_max, which allows reaching a minimum convergence threshold. This threshold approaches the Shannon capacity when tc_max increases, and then ρ̄ does too (these two parameters are strongly linked together), see Fig. (A.7), extracted from [7].
This is similar to Gallager's result under maximum-likelihood decoding, according to which the densest codes are the best regarding the convergence threshold. Two important remarks must be made:
• This asymptotic result is verified only for large codeword lengths (e.g., N = 30000); for short ones (e.g., N = 1000), the tree assumption is not valid enough. Cycles in the graph of the densest codes worsen them, breaking the message independence and thus the BP optimality. Less dense codes have a higher girth (the girth is the length of the smallest cycle), which ensures a better efficiency of BP decoding. Hence, the code hierarchy for "short" code lengths is the contrary of the expected one.
• A second remark, important for our work on UEP, is to highlight that the convergence threshold deals with the global behavior of the code, and we will see that a lower BER than the global one can be achieved for most protection classes, even if the global threshold is increased. The differences between classes will depend on which offset on the global threshold is allowed, but this will be seen in detail in the next chapter.
3.1.2 Optimization over other channels
Similar optimization s can be performed for other chan nels (BEC, BSC, Laplace, AWGN [19, 20], Rayleigh), amon g which optmization strategies on multiple access chan nels, mul- ticarriers chan nels, memory chan nel or high spectrum efficiency chan nels. We restricted ourselves to an introduction into the usual LDPC optimization for AWGN chan nels. This offers some insights and links to our UEP optimization s.
3.2 UEP Optimization: An Introduction
Let us consider the transmission of media like voice, still images, or video, whose characteristic is a heterogeneous sensitivity to errors in the data stream. The code stream of source-encoded blocks is hierarchically structured and contains:
• Headers, to describe the type and parameters of the compression.
• Structure control data, i.e., indicators of code stream synchronization, position, or indexing.
• Compressed data delivered by the source coder: e.g., speech encoder coefficients, image texture, or motion vectors.
This constitutes a very logical ensemble, and it is obvious that errors in the headers are a disaster, since the true reconstruction parameters of the compression are then not known at all. The final result at the receiver will thus be completely different according to the error localization. Sensitivity classes can also be distinguished inside the compressed data, according to the compression system used. For video, errors on motion vectors are more disturbing than errors on texture. The same holds for the low-frequency coefficients in still images, whereas the high-frequency ones are generally associated with details. Actually, uniformly protecting such a code stream would be sub-optimal. This highlights the interest of realizing unequal error protection by modifying the irregularity of a code. When speaking about irregularity for UEP, we distinguish systems with irregularity caused by puncturing and/or pruning, and those with intrinsic irregularity:
• Irregular punctured/pruned systems: Puncturing consists of not emitting some bits of the codeword, thereby increasing the initial code rate R. The receiver knows the puncturing pattern and considers the non-transmitted bits as erasures. This technique worsens the performance of the code while allowing a wide range of rates to be obtained. Unequal error protection can then be achieved by applying a different code rate to each part of the source data, according to the required robustness. Another way of adding irregularity is using a pre-processing block before the code, in order to prune it. Puncturing and pruning will be the chosen methods to realize UEP in our work; they have been further studied in [24] for Turbo Codes.
• Intrinsic irregularity systems:
One can think of systems without an a posteriori added irregularity block, but with intrinsic unequal protection properties.
• [4, 16] presented unequal error protection linear codes. Linear codes can achieve different protection levels inside a codeword, i.e., the averaged error probability is not uniform inside the codeword. These properties are due to algebraic characteristics of the parity-check matrix, considering maximum-likelihood decoding (MLD, syndrome decoding). The protection level of the ith bit is associated with its local minimum distance, which is exactly the minimum codeword weight with a one in the ith position. This is also the degree of independence of the ith column of the parity-check matrix: the minimum number of columns included in a linear combination that leads to zero, with coefficient one at the ith column. The local minimum distance associated with each bit of the codeword determines the maximum number of errors in the whole codeword that still allows this bit to be corrected. The local minimum distance can be greater than the global one, which means that the ith bit can be corrected even if the whole codeword is not recovered by MLD. That explains the interest of such codes under MLD, when considering, for example, the previously mentioned JPEG transmission. Construction methods for such codes have been presented, but a big problem is the poor control that we have over the proportions of the classes, which can be very disturbing for the latter application.
• Another family of such irregular coding systems is multi-level coded modulation. Each bit of a symbol is associated with a given code, which differs from the others by its code rate. The protection level of the bits then depends on the code and on the position in the constellation labelling, which means that two kinds of irregularities can be exploited.
• LDPC codes can be punctured [9] in order to create average irregularity. Puncturing influences the code rate: the average performances differ between two codewords encoded with different puncturing patterns. Nevertheless, it is better suited to make use of an irregularity that leads to unequal error protection of the bits inside a codeword: the most connected bits will have the lowest error probability. This has been highlighted in [6] and applied in optimizations for several transmission channels. The optimization for the AWGN channel done by Poulliat in [17] will be presented in the next chapter.
In the following, we will present our work, which concentrated on two approaches. The first considers the LDPC code as a linear block code and optimizes the code according to the local minimum distances [4, 16]. The second approach is an asymptotic optimization for BP decoding and is based on pruning and puncturing of a mother code.
Chapter 4
UEP LDPC Codes
In the previous chapter, we presented the family of LDPC codes, its parameterization, and the asymptotic study of belief propagation (BP) decoding: density evolution under the Gaussian approximation. We are now going to focus on the possibilities that both dimensions of their irregularity profile provide to achieve unequal error protection inside a codeword.
In the first section of this chapter, the usual UEP optimization of LDPC codes is presented, which allocates the most important bits to the most connected variable nodes. We then develop a pruning method for linear block codes, completely derived from [4, 16], that has no real practical interest. Finally, a check optimization is carried out. This is not a usual method, due to the fact that the check profile must be concentrated, but this will be seen in detail. We then look at the pruning method to optimize the check irregularity, and finally analyse briefly what an optimal puncturing of such a UEP code could be.
4.1 UEP properties created by irregularities over bit nodes
We now present the optimization realized by C. Poulliat in [17, 18]. This was a quite unusual optimization when it was presented, because the known methods were focusing on global average performances, such as the convergence threshold on a given channel or puncturing pattern. The global convergence of the error probability to zero is usually the only cost considered, because the parameterization and the evolution equations of LDPC codes do not distinguish information and redundancy inside the codeword and consider an infinite code length and an infinite number of iterations. We try to see how to specialize those equations for UEP created by irregularities on bit and check nodes. We are interested here in the local convergence of a part of the codeword, associated with sensitive data, for a finite number of iterations, which determines the following kind of optimization.
4.1.1 Parameterization and Asymptotic Study of such UEP LDPC Codes
Let us assume that the proportion of each sensitivity class C_k is defined by the source. We use from here on the classes {C_k}, k = 1, …, N_c, with proportions α_k, where N_c is the number of classes over the whole codeword; the information bits are spread over the N_c − 1 first classes, the N_c-th class containing the whole redundancy. We have
Σ_{k=1}^{N_c} α_k = 1.
The proportions of the bits of the codeword inside the classes are thereby fixed. Let us further define
λ^{(k)}(x) = Σ_i λ_i^{(k)} x^{i−1} and λ̃^{(k)}(x) = Σ_i λ̃_i^{(k)} x^{i−1},
where λ_i^{(k)} is the proportion of edges linked to degree-i bit nodes belonging to the kth class, and λ̃_i^{(k)} is the proportion of degree-i bit nodes belonging to C_k.
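The two profiles are the usual edge- and node-perspective views, related by λ̃_i = (λ_i/i) / Σ_{i'} (λ_{i'}/i'); a one-line conversion sketch (ours):

```python
def edge_to_node(lam_edge):
    """Convert edge-perspective proportions {i: lambda_i} to node perspective."""
    norm = sum(l / i for i, l in lam_edge.items())
    return {i: (l / i) / norm for i, l in lam_edge.items()}
```

For example, an edge profile with half the edges on degree-2 nodes and half on degree-3 nodes corresponds to 60% degree-2 nodes and 40% degree-3 nodes.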
The specialized evolution equations of the mutual information can then be derived from Eq. (2.12) and Eq. (2.13):
• Update of the check nodes
x_cv^{(l)} = 1 − Σ_j ρ_j J( (j − 1) J^{−1}( 1 − x_vc^{(l−1)} ) )   (4.1)
• Update of the variable nodes
x_vc^{(l)} = Σ_k Σ_i λ_i^{(k)} J( 2/σ² + (i − 1) J^{−1}( x_cv^{(l)} ) )   (4.2)
Equation (4.2) is obtained by adding the mutual information coming into each class of bit nodes, since there is no overlap between the classes. We can then derive the convergence and stability conditions from the fact that Σ_k λ^{(k)}(x) = λ(x).
4.1.2 Cost Function for such UEP
Under the Gaussian approximation, [6] gives the error probability associated with a degree-i bit node at the lth iteration:
P_e^{(i,l)} = Q( √( ( 2/σ² + i · J^{−1}( x_cv^{(l)} ) ) / 2 ) )   (4.3)
Proof: Let X be the random variable that denotes the a posteriori LLR of one bit. Let us express the error probability of one bit:
Pe(bit) = P(X < 0 | bit = 0) P(bit = 0) + P(X > 0 | bit = 1) P(bit = 1)   (4.4)
Thanks to the symmetry of the channel:
Pe(bit) = P(X < 0 | bit = 0)   (4.5)
Let X_n be the random variable whose distribution is N(0, 1). Under the Gaussian approximation, X is Gaussian consistent at any iteration, according to the symmetry of the channel and the conservation of consistency along the iterations: X | bit = 0 ∼ N(m, σ² = 2m). Let X now improperly denote X | bit = 0; this will make the expressions clearer. We then have:
X = σ X_n + m = √(2m) X_n + m
Inserted into Eq. (4.5), we finally have
Pe(bit) = P( X_n < −m/√(2m) ) = Q( m/√(2m) ) = Q( √(m/2) )
By replacing the mean m by the mean of the sum of all the messages coming into that bit, 2/σ² + i · J^{−1}( x_cv^{(l)} ) (cf. Eq. (2.14)), we obtain Eq. (4.3). □
Beyond the asymptotic convergence threshold, J^{−1}( x_cv^{(l)} ) is an increasing function of l. Since Q is a decreasing function, Eq. (4.3) shows that, at a given number of iterations, the more a bit is connected, the more it is protected, considering the associated error probability (the convergence of this node is faster).
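This monotonic behavior is easy to verify from the Q(√(m/2)) form of Eq. (4.3); a sketch (ours; the channel and check-message means are hypothetical values):

```python
import math

def Q(x):
    # Gaussian tail function Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_degree(i, m_channel, m_check):
    # mean of the sum of incoming messages: channel LLR plus i check messages
    m = m_channel + i * m_check
    return Q(math.sqrt(m / 2.0))

# error probability per connection degree, for fixed message means
pes = [pe_degree(i, 4.0, 1.0) for i in range(2, 10)]
```

The list pes is strictly decreasing: the more connected a bit node, the smaller its error probability at a given iteration.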
The protection of one class can then be expressed as
P_e^{(k,l)} = Σ_i λ̃_i^{(k)} P_e^{(i,l)}   (4.7)
and can then be bounded using
ā_k = Σ_i i λ̃_i^{(k)},
the average degree of a bit node in C_k. The lower bound is directly obtained by the convexity of the Q(·) function, and the upper bound by the fact that Q(·) is decreasing.
The derived linear programming algorithm is meant to achieve a joint optimization of the class profiles λ^{(k)}(x), under the constraints of proportion, code rate, convergence, stability, and hierarchy (since the optimization is sequential, the irregularity profile of the already optimized classes must not be modified by the current optimization).
4.2 Short LDPC: UEP linear block code optimization
We are now going to present some results regarding the UEP optimization of an LDPC code, considering it as a linear block code under maximum-likelihood decoding. Our optimization algorithm is based on computing the degree of independence of the columns of the H matrix. This approach has a huge drawback: due to the computation time, it can be applied only to very short codes (N < 50), which excludes the required practical approach. However, this can still serve as a first step towards understanding UEP created by irregularities over the check nodes.
4.2.1 Algebraic Properties and Unequal Error Protection
Masnick in [16] and then Boyarinov in [4] presented linear unequal error protection codes under MLD.
Definition 4.1 Inside a codeword, the local minimum distance d_i of the ith bit is exactly the minimum weight of a codeword with a one in the ith position.
Definition 4.2 The degree of independence of the ith column of the parity-check matrix of the code is the minimum number of columns that are included in a linear combination that equals zero, with a coefficient one at the ith column.
Lemma 4.1 The local minimum distance d_i of the ith bit of the codeword is the degree of independence of the ith column of the parity-check matrix of the code.
Definition 4.3 The protection level f_i of the ith bit of a codeword is the maximum number of errors in the codeword that still allows the correction of this bit:
f_i = ⌊ (d_i − 1) / 2 ⌋
Thus, the local minimum distance associated with each bit of the codeword determines the maximum number of errors in the whole codeword that still allows the correction of this bit. The local minimum distance can be greater than the global one, which means that the ith bit can be corrected even if the whole codeword cannot be restored by MLD. These algebraic properties can be linked to Majority Logic Decoding, presented in [3], which works on a weaker definition of the local minimum distance to simplify the decoding.
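As an illustration of Lemma 4.1, the local minimum distances of a very short code can be computed by exhaustively enumerating the null space of H; a brute-force sketch (ours), which also shows why the approach is limited to very short codes:

```python
import itertools
import numpy as np

def null_space_gf2(H):
    """Basis of the GF(2) null space of H (i.e., of the code defined by H)."""
    H = np.array(H, dtype=int) % 2
    m, n = H.shape
    pivots, r = [], 0
    for c in range(n):
        rows = [i for i in range(r, m) if H[i, c]]
        if not rows:
            continue
        H[[r, rows[0]]] = H[[rows[0], r]]       # bring a pivot row up
        for i in range(m):
            if i != r and H[i, c]:
                H[i] ^= H[r]                     # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        v = np.zeros(n, dtype=int)
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = H[i, f]
        basis.append(v)
    return basis

def local_min_distances(H):
    """d_i = minimum weight of a codeword with a one in position i (Def. 4.1)."""
    basis = null_space_gf2(H)
    n = len(basis[0])
    d = [np.inf] * n
    for bits in itertools.product([0, 1], repeat=len(basis)):
        if not any(bits):
            continue
        cw = np.zeros(n, dtype=int)
        for b, v in zip(bits, basis):
            if b:
                cw ^= v
        w = int(cw.sum())
        for i in range(n):
            if cw[i]:
                d[i] = min(d[i], w)
    return d

H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]  # [7,4] Hamming code
d = local_min_distances(H)
```

For this [7,4] Hamming parity-check matrix, every position has local minimum distance 3, i.e., this code is not UEP under MLD; matrices with unbalanced column dependencies give non-uniform profiles.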
4.2.2 The Derived Algorithm
Classes are not defined by their proportions at the beginning, which is another drawback of the linear coding approach. Actually, we do not intend to construct an arbitrary linear block code, but a subcode of a mother code, from which we choose the right columns to be removed from the parity-check matrix, so that the resulting parity-check matrix defines a code with the required properties. Here are the parameters of the optimization:
• Let us denote the set of initial (i.e., mother code) local minimum distances by w_1 = [w_1(1), …, w_1(N_0)], which has to evolve towards w_2 along the optimization.
• Let c_i be the number of different zero linear combinations of w_1(i) columns (we have c_i ≥ w_1(i)), and C_i the set of the corresponding indices (card(C_i) = c_i).
In our example:
• We start from a regular mother code with parameters (N_0 = 20, tc = 3, tr = 6); the number of information bits of the subcode is K_1 = 3. This defines a subcode length of N_1 = N_0 − (K_0 − K_1) = 13.
• Let the required UEP profile be w_2 = [w_2(1), …, w_2(K_1)], i.e., the required local minimum distances on the information bits of the subcode.
• U is the vector where the indices of the columns of H to be pruned away are stored (length K_0 − K_1).
• w_1init is the initial w_1 vector, ordered in decreasing order, before the optimization of a selected column.
We sequentially choose the best column to be optimized by running through w_1init from left to right, Fig. (A.10).
4.2.3 Results under Maximum-Likelihood and Belief Propagation Decoding
Figures (A.11) and (A.12) show that the code of length 20 seems better than the UEP code of length 13 in any case, although the biggest differences between the classes can be seen in the latter code. This could have two possible reasons:
• Gallager's result: the densest code is the best under MLD (and even under BP, because it is longer in our case)
• The rescaling due to the huge difference between the two code rates (1/2 and 3/13)
Moreover, considering the mother code as an LDPC code under BP decoding, we would say that it is not UEP at all, since it is completely regular (on bit and check nodes). On the contrary, considering it as a linear block code under syndrome decoding, it has some UEP properties, since the local minimum distances are either 4 or 6. This specificity of unequal error protection, which depends on the code and on the means of decoding, has to be further explained.
All the essential ingredients for the explanation are already available from [23], from which we extract some required definitions.
Definition 4.4 (Cycle) A cycle of length 2d is a set of d variable nodes and d constraint nodes connected by edges such that a path exists that travels through every node in the set and connects each node to itself without traversing an edge twice.
Definition 4.5 (C_d Cycle set) A set of variable nodes in a bipartite graph is a C_d set if (1) it has d elements, and (2) one or more cycles are formed between this set and its neighboring constraint set. A set of d variable nodes does not form a C_d set only if no cycles exist between these variables and their constraint neighbors.
Definition 4.6 (S_d Stopping set) A variable node set is called an S_d set if it has d elements and all its neighbors are connected to it at least twice.
Definition 4.7 (L_d Linearly dependent set) A variable node set is called an L_d set if it is comprised of exactly d elements whose columns are linearly dependent, while any subset of these columns is linearly independent.
According to Lemma 1 and Theorem 2 of [23], we can summarize the relationship between C_d, S_d, and L_d in Fig. (A.13).
This says that the local minimum distance of each variable node is associated with a cycle of length 2d_min, but the converse is not true (the variable nodes that form a cycle are not necessarily dependent). So, an ML decoder can successfully decode an erased stopping set, because the associated local minimum distances of the nodes are not necessarily low, whereas such a stopping set can never be overcome by an iterative decoder.
Thus, the local minimum distance of one bit, defined in Def. (4.1), is an upper bound on the depth plus one of the biggest local tree that starts at the considered variable node. Let us then analyse the case of a regular LDPC code:
• For a finite code length N:
• Under maximum-likelihood decoding, the code is decoded in an optimal way, in the sense of the minimum distance. The code can have UEP properties due to its local minimum distances, associated with some cycles in the graph, which can be different from each other. The UEP properties are then dependent on the realization of the H matrix. The local properties of the code are taken into account by the MLD.
• Belief propagation is a sub-optimal decoding, and quite "global" in the sense that it does not take into account the local properties randomly created with the H matrix. Local differences will be created by the local sub-optimalities of BP decoding at finite code length, and some of these sub-optimalities are associated with small local minimum distances.
• For an infinite code length: belief propagation decoding is maximum-likelihood decoding. The minimum distance tends to infinity, and so does the length of the smallest cycle, called the girth. Therefore, all local minimum distances tend to infinity too, and the UEP properties defined by the two means of decoding tend to be the same. Thus, UEP properties depend on the code and also on the way it is decoded: the optimization must be done as a function of the chosen decoding method. This is, of course, practically determined by the code length, since at low N (N < 500) MLD will be used, otherwise BP.
4.3 Optimization of the Check-Node Profile
We first describe the specific parameterization of UEP LDPC codes, then the density evolution for this detailed representation, and finally our optimization algorithm. It is based on the optimization of the check-node irregularity profile, which still considers local performances, but not only for a finite number of iterations anymore.
4.3.1 Parameterization of UEP LDPC codes
A very useful parameterization for our work is the detailed representation of irregular LDPC codes presented by Kasai in [11]. They constructed new families of LDPC codes which are sub-ensembles of conventional irregular LDPC code ensembles. The detailed representation they adopted allows to design optimal codes more accurately by restricting the choices for the interleaver.
Definition 4.8 Let B and D be two sets of irregularity degrees. A function π : B × D → [0, 1] is said to be the joint degree distribution of (B, D) if Σ_{b∈B} Σ_{d∈D} π(b, d) = 1. This function, describing the connections between the different degrees of the code, is called the detailed representation of the code.
Definition 4.9 We also define marginal degree distributions of variable and check blocks with respect to π by

λ(x) := Σ_{b∈B} λ_b x^{b−1}   and   ρ(x) := Σ_{d∈D} ρ_d x^{d−1}

with λ_b := Σ_{d∈D} π(b, d) , ρ_d := Σ_{b∈B} π(b, d)
Definition 4.10 For π(b, d), we define two fractions

λ(b, d) := π(b, d)/ρ_d , ρ(b, d) := π(b, d)/λ_b

It can be verified that ρ(b, d) equals the fraction of edges connecting nodes of degree b and d among all edges of degree b.
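As a quick illustration of Definitions 4.8–4.10, the following Python sketch (the π table is an invented toy example) derives the marginals λ_b, ρ_d and the detailed fractions from a joint degree distribution π(b, d):

```python
# pi maps (b, d) -> fraction of edges between degree-b bit nodes
# and degree-d check nodes; the entries must sum to 1 (Def. 4.8)
pi = {
    (3, 5): 0.4, (3, 6): 0.2,
    (4, 5): 0.1, (4, 6): 0.3,
}

lam = {}   # lambda_b = sum_d pi(b, d)   (Def. 4.9)
rho = {}   # rho_d    = sum_b pi(b, d)
for (b, d), frac in pi.items():
    lam[b] = lam.get(b, 0.0) + frac
    rho[d] = rho.get(d, 0.0) + frac

# detailed fractions (Def. 4.10):
# lambda(b,d) = pi(b,d)/rho_d , rho(b,d) = pi(b,d)/lambda_b
lam_bd = {(b, d): f / rho[d] for (b, d), f in pi.items()}
rho_bd = {(b, d): f / lam[b] for (b, d), f in pi.items()}
```

For a fixed b, the fractions ρ(b, d) sum to one over d, consistent with their reading as the edge fractions of degree b connected to degree-d checks.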
This detailed representation can be used to describe the methods for the different Poisson constructions explored by MacKay et al. in [15]. They distinguish a Poisson, a super-Poisson, and a sub-Poisson construction, which differ from each other by the variance of the distribution of the high-weight columns per row.
In this work, we focus only on the influence of the check-node irregularities on the UEP properties, i.e., on the distribution of the row weights. We consider codes with a regular bit-node profile (all of the same degree).
4.3.2 Density evolution for the detailed representation and UEP properties
Theorem 3 in [11] states that, under the local tree assumption of depth 2T and some other constraints, for any 1 ≤ l ≤ T, the distribution functions Q_l(d) of messages originating from check nodes of degree d and P_l(b) of messages originating from bit nodes of degree b are equal to
[Equation (4.9): the density Q_l(d) of messages leaving check nodes of degree d — equation image not extracted]

P_l(b) = P_0(b) ⊗ ( Σ_{d∈D} ρ(b, d) Q_{l−1}(d) )^{⊗(b−1)}    (4.10)
Note that P_l(b) and Q_l(d) do not depend on d and b, respectively. These expressions are directly derived from equations (2.8) and (2.6), respectively.
Let s still denote 2/σ2. From Eq. (4.9), we can derive a Gaussian approximation for the detailed representation:
m_u^{(l)}(d) = φ^{−1}( 1 − [ 1 − Σ_{b∈B} λ(b, d) φ(m_v^{(l)}(b)) ]^{d−1} )    (4.11)

m_v^{(l)}(b) = s + (b − 1) Σ_{d∈D} ρ(b, d) m_u^{(l−1)}(d)    (4.12)

and

x_u^{(l)}(d) = 1 − J[ (d − 1) J^{−1}( 1 − Σ_{b∈B} λ(b, d) x_v^{(l)}(b) ) ]    (4.13)

x_v^{(l)}(b) = J( s + (b − 1) Σ_{d∈D} ρ(b, d) J^{−1}(x_u^{(l−1)}(d)) )

We may mention the equality between m_u(d) in (4.11) and f_j in [6] in the case that B denotes the degrees of irregularity over the whole graph; d and j denote the same thing: the connectivity degree of one check node. Let t be the mean over the whole graph of the messages coming out of the check nodes. f_j(s, t) is the mean of the messages coming out of a check node of degree j, in terms of s = 2/σ² and t, the mean at the previous iteration. In [6] we find
f_j(s, t) = φ^{−1}( 1 − [ 1 − Σ_i λ_i φ(s + (i − 1)t) ]^{j−1} )

From this equation we observe that the lower j is (or equivalently d in our work), the higher the messages coming out of a check node of degree j (d) are, i.e., the faster their local convergence is. Figure (A.14), extracted from [6], shows the gaps f_j(s, t) − t that represent this local convergence. The curves are parameterized by j; t increases with the number of iterations, from left to right. We clearly see the previously quoted effect of j on the local convergence. We must also notice that the difference between messages coming from check nodes of different degrees does not seem to decrease while decoding. b and i denote the same thing: the connectivity degree of one bit node. Let r denote the function φ(.) of the mean over the whole graph of the messages coming out of the bit nodes. h_i(s, r) is the function φ(.) (see Def. (2.5)) of the mean of the messages coming out of bit nodes of degree i, in terms of s = 2/σ² and r, the mean at the previous iteration. In [6], we find
h_i(s, r) = φ( s + (i − 1) Σ_j ρ_j φ^{−1}( 1 − (1 − r)^{j−1} ) )
We see from this equation that the higher i is (or equivalently b in our work), the lower φ(.) of the messages coming out of a bit node of degree i (b) is, i.e., the faster their local convergence is. Figure (A.15), extracted from [6], shows the gaps h_i(s, r) − r that represent this local convergence. The curves are parameterized by i; r decreases as the number of iterations increases, from right to left. We clearly see the previously quoted effect of i on the local convergence, and the flattening out of the differences between messages coming from bit nodes of different degrees at a high enough number of iterations, in contrast to the behavior at the check-node side. This check-node behavior is very interesting, and we will elaborate on it further.
The behaviors of check nodes of different degrees seem to remain different despite the increasing number of iterations. This behavior is directly linked with the erasure-correction capability of a check node. In the sequel, we do not provide a rigorous argumentation, but we try to give an intuition of what is happening at both kinds of nodes.
An erasure message corresponds to the LLR L = 0, since we have absolutely no information from the channel whether the bit was 0 or 1.
• At a bit node, LLRs are summed up. Then it is sufficient to have at least one message not erased to ensure that all the messages but one coming from this bit node are different from 0. So the probability that at least one message is not erased grows with the connectivity of the bit node, which explains why the more a bit node is connected, the more it is protected.
• At a check node, LLRs are multiplied (improper but equivalent, since (tanh L = 0) ⟺ (L = 0)). Then it is sufficient that two messages are erased to have all the messages coming from this check node equal to 0. So the probability that at least two messages are erased grows with the connectivity of the check node, which can explain why the less a check node is connected, the more it can correct incoming erasures.
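The two bullets above can be checked with the textbook erasure-channel density-evolution formulas (a hedged sketch; the erasure probabilities are arbitrary illustrative values, not from the text): an outgoing check message is erased as soon as one of the other d − 1 inputs is erased, while an outgoing bit message is erased only if the channel value and all other i − 1 inputs are erased.

```python
def check_erasure_out(d, e):
    # erased iff at least one of the other d-1 incoming messages is erased
    return 1 - (1 - e) ** (d - 1)

def bit_erasure_out(i, e_ch, e):
    # erased iff the channel value AND all other i-1 inputs are erased
    return e_ch * e ** (i - 1)

e = 0.3                                    # incoming erasure probability
p_check = [check_erasure_out(d, e) for d in range(3, 9)]
p_bit = [bit_erasure_out(i, 0.5, e) for i in range(2, 8)]
```

Numerically, p_check grows with d (low-degree checks correct erasures better) and p_bit shrinks with i (highly connected bits are better protected), matching the intuition above.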
Remember that 0 < tanh(LLR) < 1 for a finite positive LLR. At a high number of iterations, many LLRs are high.
• At a bit node, the important LLRs are the highest, because they are summed up. At a given high number l of iterations, we decide that a message coming out of a bit node is of bad quality if the corresponding LLR is below a fixed threshold that depends neither on the considered bit node nor on the number of iterations. At a high enough number l of iterations, a bit node produces a bad message (i.e., a low LLR) if the number of incoming high LLRs is below a fixed number, chosen in terms of the number of iterations, i.e., in terms of the order of magnitude of the current LLRs that can be added to reach the fixed threshold, but not in terms of the degree of the bit, since the quality criterion for the messages is the same over the whole graph. Let fix(l) denote this maximum number of high LLRs that still produces bad messages at the lth iteration. Let p_l denote the probability that an LLR is considered high (i.e., a message of good quality) at the lth iteration.
Then we can write

P_bad^{(l)}(i) = P(a message coming from a bit node of degree i is of bad quality | high number l of iterations)

[equation image not extracted: P_bad^{(l)}(i) written as the probability that at most min(fix(l), i) of the incoming LLRs are high]
Since

lim_{l→∞} LLR = ∞, we have lim_{l→∞} fix(l) = 0,

whereas the connectivity degree i of a bit node is, of course, fixed when decoding. Therefore,

lim_{l→∞} min(fix(l), i) = fix(l)

Hence, at a high enough number of iterations, we obtain

[equation image not extracted: the limiting expression of P_bad^{(l)}(i), which no longer involves i]
and then can state that the probability P_bad(i) that a bit node of degree i produces a bad message tends to be independent of its degree i when the number of iterations increases sufficiently.
We conclude that, when the number of iterations tends to infinity, the probabilities P_bad^{∞}(i_1) and P_bad^{∞}(i_2) of two bit nodes of different degrees i_1 and i_2 to generate bad messages tend to be the same. This means that all the variable nodes of the graph, of any degree, have the same behavior at a high number of iterations.
This explains why the UEP created by irregularities over the bit nodes disappears at a high number of iterations.
• At the check-node side, in contrast, high LLRs have no influence, since they are translated into ones by the hyperbolic tangent and then multiplied. The most important LLRs, which determine the quality of outgoing messages, are the smallest ones. A message coming out of a check node is of bad quality if at least one of the incoming LLRs is small. Let q_l denote the probability that an LLR entering a check node is considered small at a given high number of iterations l. Then we can write:
P_bad^{(l)}(j) = P(a message coming from a check node of degree j is of bad quality | high number l of iterations)
= P(at least one of the incoming LLRs is small | high number l of iterations)
= 1 − (1 − q_l)^{j−1}

Since we are at a given high number of iterations (q_l small), this can be approximated by

P_bad^{(l)}(j) ≈ (j − 1) q_l
Let us now express the ratio between the probabilities of the outgoing LLRs of a check node to be small, for two check nodes of different degrees j_1 and j_2:

P_bad^{(l)}(j_1) / P_bad^{(l)}(j_2) ≈ (j_1 − 1)/(j_2 − 1)

This ratio is a constant. It does not depend on q_l, i.e., on the number of iterations, for a high enough number of iterations. The behaviors of different check nodes remain different even at a high number of iterations, i.e., at a low bit-error rate. This explains why the UEP created by irregularities over the check nodes remains at a high number of iterations, which we exploit in this work.
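The constancy of this ratio is easy to check numerically (a small sketch; the degrees and the values of q_l are arbitrary illustrative choices):

```python
def p_bad_check(j, q):
    # exact probability that at least one of the j-1 incoming LLRs is small
    return 1 - (1 - q) ** (j - 1)

# ratio for degrees j1 = 8 and j2 = 4 as q_l shrinks over the iterations
ratios = [p_bad_check(8, q) / p_bad_check(4, q)
          for q in (1e-1, 1e-2, 1e-3, 1e-4)]
```

As q_l → 0 the ratios approach (8 − 1)/(4 − 1) = 7/3, independently of the iteration count, which is exactly the degree-dependent gap that survives at low bit-error rates.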
We continue by switching from the mean of the messages to their mutual information, in order to be able to plot EXIT charts (Extrinsic Information Transfer charts) for UEP codes and to express the error probability. B and D can either be the degrees contained in the whole graph, in which case Eq. (4.13) describes the usual Gaussian approximation of density evolution, or the degrees inside one class of protection. We have seen at the very beginning of this chapter that a class is defined by the bit nodes that belong to it. The check nodes belonging to a class will be the ones linked to the bit nodes of this class.
The averaged mutual information of the messages going from the check nodes of this class to the bit nodes of this class can be expressed as

[equation image not extracted: the class-averaged mutual information x_cv^{(l)}(C_k)]

with

[equation image not extracted: the corresponding edge-weighting coefficients]

Together with Eq. (4.13), we obtain

[equation image not extracted: Eq. (4.14)]
And so we can express the difference defining the convergence of one class, i.e., the medium quality arriving at its bit nodes, compared to the medium quality of all messages of the graph at the previous iteration:

[equation image not extracted]

which is, in our particular case of regularity over the bit nodes (λ(i) = δ(i − 3)):

[equation image not extracted]

We can rewrite this as

[equation image not extracted]
In the following we will keep the last expression of this gap, since the relation between x_vc and x_cv does not depend on the parameters included in the optimization.
According to Eq. (4.6), we can express the error probability associated with a bit at the lth iteration by

[equation image not extracted]
In our particular case of a regular degree distribution over the bit nodes, the difference between the bits is not due anymore to their connection degree i, but is caused by the connection degrees of the check nodes linked to each bit. Equation (4.3) (extracted from [6]) is

[equation image not extracted]

And the error probability of the bits inside class C_k is:

[equation image not extracted]
We can then derive bounds on the error probability of class C_k, where d_min^{(k)} is the minimum degree of the checks belonging to the class C_k:

[equation image not extracted]
However, more interesting bounds seem to be the ones directly on the difference between the mutual informations, defining the convergence of one class, expressed in our particular case as:

[equation image not extracted: the bounds of Eq. (4.19)]
We see a dependency on the average check connection degree of the class C_k:

[equation image not extracted: the definition of ρ̄^{(C_k)}]
Our algorithm is directly derived from the given bounds in Eq. (4.19). It is first meant to speed up the convergence, but we will see that the UEP properties also remain at a high number of iterations. At a given iteration l, and then at a given x_cv^{(l)}, we try to maximize the bounds of Eq. (4.19) by jointly optimizing both parameters ρ̄^{(C_k)} and d_min^{(C_k)}.
Therefore, the most protected classes, at a given number of iterations, are the ones linked with the check nodes of lowest degrees, and this holds even at a high number of iterations according to Eq. (4.14). We should highlight two results of the asymptotic approach that appear to be contradictory to the first section of Chapter 3:

• The correction capability of a check node increases when its connection degree decreases, whereas
• The convergence threshold decreases when the connection degree of the check nodes increases (together with the variable-node degrees).

We may think that at low SNR (bad quality of messages) and a low number of iterations (which increases the risk that a poorly protected bit receives only bad messages), the hierarchy of classes is reversed. This is not the case when looking at simulations: at any SNR, at any number of iterations, the class with the smallest ρ̄ has the lowest BER. We should rather think that improving the convergence of some classes, while still acting on the check nodes, implies worsening some others (see Fig. (A.19)), and worsening the overall convergence threshold of the code. That is why we should define the set of possible good codes, considering practical code lengths and expected performances.
4.3.3 Set of Possible Good Codes
Irregularity on the check profile leads to two problems:
• Concentration problem: it influences the speed of convergence of the code. Indeed, Chung has shown in [6] that a concentrated ρ(x) polynomial, ρ(x) = ρx^d + (1 − ρ)x^{d−1}, defined in Def. (3.1), maximizes the speed of convergence of the whole code. To achieve UEP properties by irregularity on the check profile, some tolerance on the concentration, and hence on the global speed of convergence, has to be defined. The global code will converge more slowly, but its most protected class will converge faster than the ones of the concentrated code, Fig. (A.18). However, the problem is not so important, since the complexity does not increase much with the number of iterations, thanks to intelligent scheduling algorithms [5].
• Density problem: according to Gallager's result, the densest codes have the lowest gap to capacity. At a given code rate, there is one optimum average connectivity of the check nodes ρ̄ that minimizes the gap to capacity, Fig. (A.7) (for infinite code length and an infinite number of iterations). This key parameter of the code, linked with t_cmax, determines the density of the code. The denser the graph, the higher the connectivity ratio has to be. If the value of ρ̄ is moved from the optimum, the value of t_cmax must be changed too. Thus, at a given t_cmax, the required ρ̄ can be achieved either with a concentrated degree distribution at the check-node side, or with an unconcentrated one. But obtaining UEP by reducing the degree of some check nodes requires adapting t_cmax. If one does not do so, the global convergence threshold of the code, expressed by Eb/N0, will increase. That is the main problem in the chosen optimization scheme that we present later. Our optimization, by removing bit nodes, decreases ρ̄ while t_cmax is kept. The less dense UEP code must have a higher threshold. However, if we consider a finite (quite short) code length N, a ρ̄-reducing approach could be relevant due to the reversed hierarchy (Chapter 3, first section). For N = 1500, both codes have about the same global threshold (at an infinite number of iterations). Although the thresholds of these found UEP codes are the same, their UEP properties are quite different, Fig. (A.19). A possible approach for long codes would be to first choose a tolerance offset ε on the global threshold, Fig. (A.16), in order to fix a trade-off between the differences between classes and the degradation of the threshold; but this work is not able to ensure that the most protected classes of an unconcentrated code with a degraded threshold will have a lower error probability than the global, more concentrated code with a lower gap to capacity, for long code lengths.
We should quantify, for a high number of iterations, the gain in the local thresholds of the most protected classes as a function of the amplitude over the check degrees. At least for short enough code lengths, the chosen optimization seems to be relevant.

Thus, at short enough code lengths, and at a low or high number of iterations, such an approach that reduces ρ̄ in our chosen optimization system is quite flexible:
• If the transmission has to be achieved, even with poor quality, we allow a big amplitude on the degrees of the check nodes. For example, if one wants to transmit a JPG picture even with bad quality, putting the headers and the very low-frequency DCT coefficients in the most protected classes ensures the transmission, even if the resulting picture is quite fuzzy.
• If the initial global threshold must be kept, this UEP method, allowing an almost concentrated ρ polynomial (three consecutive degrees), can be seen as a kind of patch, or second-stage method, after UEP optimization over the bit nodes.
Remember that the spreading of the check-node degrees should not be a problem if the maximum connection degree of the bit nodes is adapted, i.e., in a joint optimization. It could raise a problem if the check optimization is performed after the bit-node optimization, as a second stage. If it is done before, a constraint on t_cmax should be added in the optimization of the bit-node profile if one wants to keep the best convergence threshold.
4.3.4 A particular choice to realize UEP over check nodes
The goal of this work is to focus on the UEP properties obtained by pruning and puncturing methods. Pruning some bits of the codeword means fixing them (e.g., to 0) and then not transmitting them, or equivalently, replacing the corresponding columns of the H matrix by 0; we thereby directly modify the irregularity profile of the check nodes, and can achieve some UEP configuration. The resulting code is a subcode derived from a mother code. By doing so, we intend to reach different UEP configurations, with different pruning schemes, with the same mother code and the same decoder.
We assume the number of information components of the subcode to be given. Then the code rate of the subcode is given too. We will see that the amount of redundancy is the same in the mother code and in the subcode.
4.3.4.1 The chosen coding scheme
Figure (A.17) shows the coding scheme that we use as a starting point.
Let H and G be the parity-check (size M0 × N0) and generator (size K0 × N0) matrices of the mother code, and assume that they are in a systematic form (i.e., full rank). Let R0 be the code rate of the mother code. The subcode has a given number of information bits: K1. Then we are able to prune away K0 − K1 columns of the H matrix, and the subcode has a length of N1 = N0 − (K0 − K1). We introduce a preprocessing generator matrix, called P (size K1 × K0), which is used to fix the desired bits of the codewords of the subcode. Let u be the number of pruned bits; then the code rate of the subcode is

R1 = (K0 − u)/(N0 − u) = K1/(N0 − (K0 − K1))    (4.20)

Then we can write R1 as a function of v = u/N0:

R1 = (R0 − v)/(1 − v)

which is a decreasing function of v. Whatever the number of bits we prune away, pruning decreases the code rate.
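Eq. (4.20) and its equivalence with the K1 form can be checked in a few lines (the code sizes are invented toy values):

```python
def subcode_rate(N0, K0, u):
    """Rate of the subcode after pruning u of the information bits,
    Eq. (4.20): R1 = (K0 - u) / (N0 - u)."""
    return (K0 - u) / (N0 - u)

def subcode_rate_k1(N0, K0, K1):
    """Equivalent form in terms of K1 = K0 - u."""
    return K1 / (N0 - (K0 - K1))

N0, K0 = 2000, 1000                  # mother code of rate R0 = 0.5
rates = [subcode_rate(N0, K0, u) for u in (0, 100, 200, 400)]
```

The sequence of rates is strictly decreasing in u, and both forms of the formula agree, as the identity K0 − u = K1 implies.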
This preprocessing matrix is not needed if we prune away only information columns of the H matrix and choose the K1 best protected columns among the information columns of the H matrix, which reduces a lot the possible UEP configurations.
Let G′ of size K0 × K0 be: G′ = [protected columns of G, columns of G to be pruned away], and B of size K1 × K0 be: B = [I_{K1}, 0_{K1×(K0−K1)}].
We are going to choose some columns of H to be pruned away. This means that the corresponding bits of the codewords of the subcode must equal zero, i.e., be fixed deterministically. Then the corresponding columns of the generator matrix of the subcode have to be made of only zeros. Once we have determined the K1 best protected columns of the H matrix of the mother code, the corresponding columns in the generator matrix of the subcode have to be columns of the identity matrix, since the UEP code must be systematic to be able to control the UEP over the information bits.
Then the preprocessing matrix P, which is the tool to achieve the UEP we chose, is designed such that

P · G′ = B    (4.21)
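A small GF(2) sketch of Eq. (4.21) (toy matrices, and plain Gaussian elimination with free variables set to 0 — not the authors' construction): solving P · G′ = B amounts to solving G′ᵀ · Pᵀ = Bᵀ column by column.

```python
def gf2_solve(M, C):
    """Solve M @ X = C over GF(2); M, C are lists of 0/1 rows.
    Free variables are set to 0; returns X, or None if inconsistent."""
    n, m = len(M), len(M[0])
    k = len(C[0])
    A = [M[i][:] + C[i][:] for i in range(n)]     # augmented matrix [M | C]
    pivots, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(n):
            if i != r and A[i][c]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    if any(any(A[i][m:]) for i in range(r, n)):   # 0 = nonzero: no solution
        return None
    X = [[0] * k for _ in range(m)]
    for row, c in enumerate(pivots):
        X[c] = A[row][m:]
    return X

def gf2_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) & 1 for col in zip(*B)]
            for row in A]

T = lambda A: [list(r) for r in zip(*A)]          # transpose helper

# toy G' (K0 = 3, invertible) and B = [I_{K1} | 0] with K1 = 2
Gp = [[1, 1, 0],
      [0, 1, 1],
      [0, 0, 1]]
B = [[1, 0, 0],
     [0, 1, 0]]
P = T(gf2_solve(T(Gp), T(B)))                     # P . G' = B over GF(2)
```

Multiplying back, P · G′ reproduces B exactly, i.e., the rows of P select the GF(2) combinations of the rows of G′ that yield the systematic pattern.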
We are going to verify that we can choose totally freely the K0 − K1 bits to prune away and the K1 best protected ones among the N0 bits of the mother code, by discussing different choices of columns to prune away and to protect, and show that P permits to reach the expected and desired code rate.
4.3.4.2 Case where we do not need P: only information columns of H are pruned and protected
Then Hs and Gs, the parity-check and generator matrices of the subcode, are obtained by removing the columns to be pruned away in H, and the corresponding ones, which are columns of the identity, in G, where we also remove the corresponding rows (i.e., the row where the one was). Since the best protected columns are chosen as being information columns, they already form the identity. Then Hs and Gs are of size M0 × (N0 − (K0 − K1)) and K1 × (N0 − (K0 − K1)), respectively. They are both of full rank (because Gs contains the identity I_{K1} and Hs contains I_{M0}, since the pruned columns are not identity columns, which are associated with the redundancy). Then the code rate of the subcode is:

R1 = 1 − rank(Hs)/(N0 − (K0 − K1)) = K1/(N0 − (K0 − K1))

The obtained rate is the desired one.
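A toy numerical check of this rate computation (random systematic H with invented sizes, not from the text): pruning information columns leaves the rank, and hence the amount of redundancy, unchanged.

```python
import random

def gf2_rank(rows):
    """Rank of a binary matrix (list of 0/1 rows) over GF(2)."""
    rows = [r[:] for r in rows]
    rank, n = 0, len(rows[0])
    for c in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

random.seed(1)
N0, K0, K1 = 12, 6, 4                  # u = K0 - K1 = 2 columns to prune
# systematic parity-check matrix H = [I_{N0-K0} | T], full rank N0 - K0
Tpart = [[random.randint(0, 1) for _ in range(K0)] for _ in range(N0 - K0)]
H = [[1 if j == i else 0 for j in range(N0 - K0)] + Tpart[i]
     for i in range(N0 - K0)]
# prune u = 2 information columns (taken among the last K0 columns)
H_pruned = [row[:N0 - K0] + row[N0 - K0 + 2:] for row in H]
rate = 1 - gf2_rank(H_pruned) / (N0 - (K0 - K1))
```

The pruned matrix still has rank N0 − K0 (the identity part is untouched), so the rate comes out as K1/(N0 − (K0 − K1)), as claimed.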
4.3.4.3 Case where we prune away redundancy in H, or choose protected columns among it, and then need P

We prune non-identity columns of G, and then may have rank(G′) < K0, which can raise a problem for the existence of a P matrix that fulfills Eq. (4.21), because G′ may not be invertible anymore. We first prove that we can find a full-rank P matrix, and then will see that the code rate of the subcode is the one desired.
Existence of P
Definition 4.11 A matrix is in reduced row echelon form if it is made of an upper triangular part of size the rank of the matrix, obtained after linear combinations of its rows and then permutation of its columns.

Definition 4.12 A matrix is said to be in reduced row form if the previous manipulations on its rows have been made, but without permuting its columns at the end.
So we may have rank(G′) < K0. We want to find a condition on G′ such that we can compute a P that fulfills Eq. (4.21).
Theorem 4.1 A necessary and sufficient condition on G′ that allows to compute a P that fulfills Eq. (4.21) is:

rank(G′) ≥ K1    (4.22)
Proof: Let G2 be G′ in reduced row form (i.e., without any manipulation of the columns of G′). We prove that such linear combinations of the rows of G′ still allow to find a P2 matrix such that

P2 · G2 = B    (4.23)

Let g_{2,k} denote the kth row of the G2 matrix. Then we have the linear combination

[equation image not extracted: each g_{2,k} as a linear combination of the rows of G′, i.e., G2 = A · G′]
Remember that, for any matrices,

rank(A · B) ≤ min(rank(A), rank(B))    (4.24)

• Necessity

Using Eq. (4.24) in Eq. (4.23), we obtain

rank(B) = K1 ≤ min(rank(P2), rank(G2))

A necessary condition is then rank(G2) ≥ K1; since rank(G2) = rank(G′) by construction of G2, this condition is equivalent to rank(G′) ≥ K1.

• Sufficiency

Assuming that rank(G2) ≥ K1, the computed P2 matrix from Eq. (4.23) will be of rank greater than or equal to K1, by construction. Since

G2 = A · G′

we translate Eq. (4.23) into

P2 · A · G′ = B    (4.25)

which means that

P = P2 · A    (4.26)

and from which we can infer rank(P2 · A = P) ≥ K1. The P matrix will be computed by using Eq. (4.26).

The condition rank(G′) ≥ K1 is then necessary and sufficient to ensure the existence of a matrix P that fulfills Eq. (4.21), and since P is of size K1 × K0, P will be of full rank according to Eq. (4.25). □
Let rg denote rank(G′). Equation (4.23) can be represented by

[matrix equation image not extracted]

Provided G′ fulfills Condition (4.22), a solution for P2 exists, and if rank(G′) < K0, then we have degrees of freedom for P2, and thus also for P.
Code rate of the subcode. Let us now compute the rate of the subcode. For this, we consider the decoding. We have two possibilities for the decoding:

• Either we use the decoder of the mother code without adding anything; the parity-check matrix of the subcode will be exactly H_mother with the pruned columns removed. This allows to save memory and complexity, but does not exploit all the available parity-check equations, since the ones of the preprocessing code P are not used, which limits the performance. Then we have

R1 = 1 − rank(H_mother,pruned)/(N0 − (K0 − K1))

A constraint, called the code rate constraint in the optimization algorithm, ensures that the parity-check matrix of the subcode, i.e., the matrix of the mother code without the pruned columns, will have a code rate of K1/(N0 − (K0 − K1)), or, equivalently, that rank(H_mother,pruned) = N0 − K0.

• Or we use all the available parity-check equations to have better performance. Let us study this case in what follows. In this last case, let us forget pruning and consider the subcode as a usual serial concatenation (without any interleaver, discussed later) of the two codes P and G (i.e., the mother code).
H_pruned is not anymore the parity-check matrix of the subcode, since other parity equations are added. The subcode is defined by:

Gs = P · G : K1 × N0,   Hs : (N0 − K1) × N0

Hs is made of H_mothercode and of Hp, the parity-check matrix of the preprocessing generator matrix P. Hp is of size (K0 − K1) × K0:

[equation image not extracted]

I_{K0−K1} is the identity associated with the redundancy columns of the precode P, and H′_{(K0−K1)×K1} is associated with the information bits of the subcode. The same form holds for the H matrix of the mother code:

[equation image not extracted]

The K0 bits of the codeword of the precode P are directly copied into the K0 information bits of the mother code. We then have Hs in this form:

[equation image not extracted]

That can be rewritten as

[equation image not extracted: Eq. (4.31)]
G is in a systematic form but P is not, i.e., H_mother is in a systematic form but Hp is not. We are only sure that the bits of the whole codeword that fulfill the parity-check equations of the precode P are the information bits of the mother code. The parity-check matrix Hs of the subcode is not in a systematic form in Eq. (4.31), and then we cannot distinguish columns of redundancy and columns of information of the subcode in this form. To put Hs into a systematic form, i.e., into reduced row echelon form, the permutations of its columns that we would have to do will show that the information of the subcode can correspond to redundancy of the mother code (be careful not to confuse subcode and precode). We now want to show that, after having pruned any K0 − K1 columns of the Hs matrix in the given non-systematic form, we have rank(Hs) = N0 − K0, in order for the code rate to be

R1 = 1 − rank(Hs)/(N0 − (K0 − K1)) = K1/(N0 − (K0 − K1))

Proof: By doing linear combinations on the rows of the matrix Hp, only the K0 − K1 last rows of the matrix Hs are manipulated. Then, to put Hp into a systematic form, only the K0 last columns of the matrix Hs are permuted. We then obtain the following form of Hs (Eq. (4.32)), called H_Ssys, where the last K1 columns are associated with the K1 information bits of the subcode, and the K0 − K1 pruned columns are taken among the N0 − K1 first columns, which are the columns of a square upper triangular matrix.

H_Ssys = [ I_{N0−K0}                 T_{(N0−K0)×K0}
           0_{(K0−K1)×(N0−K0)}   I_{K0−K1}   R_{(K0−K1)×K1} ]    (4.32)

Equation (4.32) shows that at least the N0 − K1 first columns are independent from each other; then, if we prune K0 − K1 of these columns, we have

rank(Hs) = rank(H_Ssys) ≥ N0 − K0

But what we prune is redundancy of the subcode (be careful not to confuse it with redundancy of the precode or of the mother code), by construction of H_Ssys. Therefore, since the number of rows of the parity-check matrix is exactly the number of redundancy bits, we must remove the row corresponding to each pruned column (of the same index as the column, where the one on the main diagonal is). Then, at the end of the pruning, H_Ssys is of size (N0 − K1 − (K0 − K1)) × (N0 − (K0 − K1)), i.e., (N0 − K0) × (N0 − (K0 − K1)); then we have

rank(Hs) = rank(H_Ssys) ≤ N0 − K0

We conclude

rank(Hs) = N0 − K0

□
This ensures that, whatever the columns we choose to prune away, the code rate of the subcode will be R1 = 1 − (N0 − K0)/(N0 − (K0 − K1)) = K1/(N0 − (K0 − K1)). Note that the resulting H_Ssys,prun after pruning will be equal to H_Smother,prun if we chose to prune only information bits of the mother code, and different otherwise.
Then it is sufficient that Condition (4.22) be fulfilled to be able to compute the P matrix and to have a code rate of the subcode equal to the one desired, even if we choose the columns to be pruned away and the best protected columns among the redundancy of the mother code.

Computation of the preprocessing matrix. After having verified that we can choose the K0 − K1 bits to be pruned away and the K1 best protected ones among the N0 bits of the mother code, we are going to explain how the P matrix is computed.
Let us describe the solution of the system

A_sys · C_sys = B_sys    (4.33)

where A_sys (size K0 × K0) is G2 after permuting its columns to transfer it into echelon form, and then transposing; B_sys (size K0 × K1) is B after permuting its columns in the same way as G2, and then transposing; and C_sys (size K0 × K1) is P2 transposed. Let rg still denote the rank of G′. We can note that, provided rg < K0, we have (K0 − rg) degrees of freedom for the coefficients of C_sys:

∀j ∈ [1, K1], ∀k ∈ [1, rg]:  c_sys(k, j) = Σ_{l=rg+1}^{K0} a_sys(k, l) c_sys(l, j) + b_sys(k, j)

These (K1 × rg) equations determine which elements of P can be chosen arbitrarily, and how to compute the remaining elements from the chosen ones.
We now propose a method to fix these degrees of freedom. Fixing them adds information that we could use by considering the preprocessing matrix simply as a precoder. A first possibility would be to think of it in terms of a serial concatenation of two codes (the outer code of generator matrix P, and the inner one the mother code), and to decode this in an iterative way, for example, if we find P to be an LDPC code (due to its size), and adding an interleaver. However, the serial concatenation of two LDPC codes does not improve the decoding much, even if the girth is improved. Another possibility is to consider the P matrix as some additional parity-check equations, as shown in the expression of Hs. Let us choose an arbitrary Hp, for example such that it improves the UEP properties of the interesting bits by choosing its irregularity accordingly, or as a part of H_mother to decrease the required memory. Note that the user will have to choose the constraints on the optimization, and thus the strength of the UEP, according to the available memory and processing power.
Once Hp is chosen, we now describe how to compute Csys(rg + 1 : K0, 1 : K1). Let Hp : (K0 − K1) × K0 have elements h(i, j), and let Pr : K0 × K1 have elements d(i, j).
[Equation image not reproduced]
is rephrased as
[equation image not reproduced]
⇔
[equation image not reproduced]
Finally,
[equation image not reproduced]
Rewritten in matrix form, this reads:
[equation image not reproduced]
Let E denote the matrix resulting from the multiplication of the first two terms. In order to ensure the existence of a solution, it is sufficient that rank(E) ≤ K0 − rg. However, Eq. (4.22) yields rank(E) ≤ min(K0 − K1, K0 − rg), assuming that Hp is chosen to be of full rank. Thus, we are sure to have a unique solution for Csys(rg + 1 : K0, 1 : K1), provided the previous condition rank(G') ≥ K1 holds.
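The rank conditions above are over GF(2). A minimal way to check them numerically is Gaussian elimination with XOR row operations; the function below is a generic sketch (the matrix is a toy example, not the G' of the text).

```python
import numpy as np

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    m = np.array(mat, dtype=np.uint8) % 2
    rows, cols = m.shape
    rank = 0
    for col in range(cols):
        # find a pivot row at or below 'rank' with a 1 in this column
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]               # XOR = addition over GF(2)
        rank += 1
    return rank

# Toy check of the existence condition rank(G') >= K1
G = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
K1 = 2
print(gf2_rank(G) >= K1)  # True: this G has rank 3 over GF(2)
```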
4.3.4.4 Hierarchical optimization algorithm
Let us recall the bounds of Eq. (4.19) on which our optimization is based:
[Bounds (4.19), equation image not reproduced]
with
[equation image not reproduced]
Due to the chosen coding scheme, K1 and the mother code are fixed at the beginning of the optimization; therefore the code rate is fixed to R = K1/N1, and the optimization does not consider it at all. Let us denote the minimum degree of the check nodes of the whole graph by d_min (j and d are used to denote the same thing), and their average degree by ρ̄. The optimization focuses on the two important quantities of the bounds (4.19), ρ̄^(Ck) and d_min^(Ck), and is composed of two main stages, for a given class:
• We choose the (a_k · K1) best protected bit nodes.
• At a given d_min, we try to set as many check nodes linked to these bit nodes as possible to degree d_min, in order to decrease ρ̄^(Ck).
• We check whether the following constraints are fulfilled. If yes:
• We decrease d_min by one if the tolerance that we fixed regarding the concentration is not yet reached, and start over again.
Note that we work with d_min and not d_max, because the composition of the class Ck is updated at the beginning of each iteration of the optimization algorithm, which allows us to take advantage of all the pruning possibilities of every check node. We determine the composition of the classes only at the very end of our algorithm. In a more detailed way:
Definition 4.13 N0(bit) denotes the set of check nodes linked to the variable node bit. N1(bit) is the set of bit nodes linked to each check node belonging to N0(bit).
Definition 4.14 d̄l(bit) denotes the average of the degrees of the checks linked to a given variable node bit, and let |set| denote the cardinality of the set set. Then we have:
Then, the adopted algorithm can be described as follows. For each class Ck:
• while (the constraints are fulfilled and the tolerance over the break of concentration is not reached)
• the values d̄l(bit) over the whole graph are arranged in increasing order
• for each check node in Ck, we search for a bit to be pruned away, such that bit_pruned = arg max_bit (d̄l(bit)), under:
• hierarchical constraints:
• bit_pruned ∉ Ci, ∀i < k
• bit_pruned must not be linked with a check node of degree greater than or equal to the concentration constraint
• avoid involuntary pruning (a column of H can become independent from all the others and then no longer defines a code) and the usual constraints (described in Chapter 2)
• proportion constraint

Σ_{k=1}^{Nc} Σ_{j=2}^{j_max} ρ_j^(Ck) = 1

where ρ_j^(Ck) is the proportion of check nodes of degree j belonging to the class Ck.
• code rate constraint

Let us denote the number of pruned columns at the current iteration of the optimization procedure by u; the code rate at this iteration then has to be R = (K0 − u)/(N0 − u). We then must have

[equation image not reproduced, relating the check-degree proportions to (1 − R)]
• convergence constraint (see Eq. (2.22))
• stability constraint (see Eq. (2.24))
[Stability condition, equation image not reproduced]
This condition is automatically fulfilled in the case of a regular mother code.
At the end of the optimization: the constraint that ensures the existence of a P matrix (see Condition (4.22)): rank(G') ≥ K1.
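The greedy loop above can be sketched in code. This is a deliberately simplified, hypothetical rendition (no hierarchical classes, no rate, convergence, or stability constraints): only the choice bit_pruned = arg max d̄l(bit) together with the safeguard that no neighboring check may fall below degree 2.

```python
import numpy as np

def avg_linked_check_degree(H, bit):
    """d_bar(bit): average degree of the check nodes connected to 'bit'."""
    checks = np.flatnonzero(H[:, bit])
    return H[checks].sum(axis=1).mean()

def greedy_prune(H, protected_bits, n_prune):
    """Greedily prune bits whose neighboring checks have the highest
    average degree, lowering the mean check degree seen by the rest.
    Returns the pruned bit indices and the pruned matrix."""
    H = H.copy()
    pruned = []
    candidates = set(range(H.shape[1])) - set(protected_bits)
    for _ in range(n_prune):
        # a bit may be pruned only if it is still in the graph and every
        # linked check currently has degree > 2 (degree >= 2 afterwards)
        ok = [b for b in candidates
              if H[:, b].any()
              and all(H[c].sum() > 2 for c in np.flatnonzero(H[:, b]))]
        if not ok:
            break
        best = max(ok, key=lambda b: avg_linked_check_degree(H, b))
        H[:, best] = 0          # pruning: the bit is fixed/known, column removed
        pruned.append(best)
        candidates.discard(best)
    return pruned, H

H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 0, 0, 1, 1]], dtype=np.uint8)
pruned, H_pruned = greedy_prune(H, protected_bits=[0, 1], n_prune=2)
print(pruned)  # two bits outside the protected class
```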
4.3.4.5 Results
The curves correspond to a regular LDPC mother code of length N0 = 2000 and code rate R0 = 1/2. The subcode has a length of N1 = 1000 and code rate R1 = 1/3. The Nc classes to be optimized are defined by the proportions a(k) for k ≤ Nc − 1 (the number of info bits in the class Ck is a(k) · R1 · N1 if k ≤ Nc − 1, with Σ_{k=1}^{Nc−1} a(k) = 1, and (1 − R1) · N1 bits in the last one, which then contains the whole redundancy). The optimization is done for Nc = 3 classes with a(1) = 0.1, a(2) = 0.9. The mother code has parameters (2000, 3, 6).
Optimizations are done to obtain codes with an unconcentrated (check degrees between 2 and 6) and an almost concentrated (check degrees between 4 and 6) degree distribution, in order to compare the performances.
The decoding is done by using only the pruned parity-check matrix of the mother code.
[Figure images not reproduced]
Fig. (A.18) shows the EXIT curves defined in Eq. (4.16) for each class of the almost concentrated and unconcentrated check-irregularity codes. The more the first class is protected, the more the less protected ones are degraded: the best protected class converges faster in the unconcentrated code than the corresponding one in the concentrated code. The intermediate classes are roughly equivalent, whereas the last class of the unconcentrated code converges more slowly than the corresponding one in the concentrated code.
Figure (A.19) shows the behavior at low bit-error rates, which cannot be seen from an EXIT curve; this would be near the (1,1) point in the EXIT chart, i.e., at a high number of iterations (here, 30 iterations). We clearly see that the UEP properties remain also at a high number of iterations, which constitutes a major difference from UEP properties generated by irregularities over bit nodes, which only induce convergence-speed differences. The check optimization would thus be a means to achieve UEP both at a low number of iterations (accelerating the convergence) and at a high number. This behavior can be explained by Fig. (A.14) and the comments following it in the first section. Likewise, at 30 iterations we still have better performance for the first class of the unconcentrated code than for the concentrated one, equivalent performance for the middle class, and poorer performance for the last one. These created UEP properties that remain even at a high number of iterations might be very interesting, since techniques exist to substantially increase the number of iterations without increasing the complexity too much [5].

4.4 Puncturing
Puncturing could be a method to realize UEP by increasing the code rate and worsening certain bits, but without the possibility to improve others. In a punctured code, the quality of the messages coming into the interesting checks (i.e., those belonging to one class) would matter more than the degrees of these checks. At a check node, we add up erasure messages (i.e., with an LLR, defined in Def. (2.2), that equals zero) instead of making them deterministic by pruning (i.e., an LLR equal to infinity, which makes the bit node and the linked edges disappear from the graph). Then, instead of achieving UEP only by puncturing the code, we can be more interested in puncturing a code whose UEP is created by irregularities over check nodes and bit nodes. The puncturing must then be compatible with the UEP properties. In order to match the definitions of [9], we have to define and redefine some variables.
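The distinction between puncturing (an erasure, LLR = 0) and pruning (a known bit, LLR = ∞) is visible directly in the standard sum-product check-node rule; the sketch below uses the generic tanh rule, not code from this work, and the LLR values are made up.

```python
import math

def check_node_llr(incoming_llrs):
    """Sum-product check-node update: L_out = 2 * atanh( prod_i tanh(L_i / 2) )."""
    prod = 1.0
    for llr in incoming_llrs:
        prod *= math.tanh(llr / 2.0)
    # clip to the open interval (-1, 1) so atanh stays finite
    prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)
    return 2.0 * math.atanh(prod)

good = [1.5, 2.0, 1.2]
print(check_node_llr(good))           # informative outgoing message
print(check_node_llr(good + [0.0]))   # punctured neighbor: output erased (0.0)
print(check_node_llr(good + [1e9]))   # "pruned" neighbor: as if the edge were absent
```

A single zero-LLR (punctured) input erases the check's outgoing message, while a near-infinite LLR (pruned, known bit) leaves it untouched; this is exactly why pruning removes the node from the graph whereas puncturing degrades the messages passing through it.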
The old π of Def. (4.8), which defines the detailed representation of LDPC codes, turns into π_i.
Definition 4.15 G_{i,j} is the set of bit nodes of degree i linked to check nodes of degree j.
Definition 4.16 π^(0)_{i,j} is the proportion of punctured symbols in G_{i,j} before decoding.
Recall the following useful definition (4.10):
Definition 4.17 λ(i, j) and ρ(i, j) are the proportion of bit nodes of degree i among bit nodes linked to check nodes of degree j, and the proportion of check nodes of degree j among check nodes linked to bit nodes of degree i, respectively.
Thus we define
Definition 4.18 The total puncturing fraction p^(0) is the proportion of punctured variable nodes over the whole graph:

p^(0) = Σ_i Σ_j π^(0)_{i,j} · Pr(bit node ∈ G_{i,j})
Proof:
p^(0) = Σ_i Pr(bit node is of degree i and is punctured)
= Σ_i Σ_j Pr(bit node is of degree i, is linked to a check of degree j, and is punctured)
= Σ_i Σ_j Pr(bit node is punctured | bit node is of degree i and linked to a check of degree j) · Pr(bit node is of degree i and linked to a check of degree j)

Using an analysis with the Gaussian approximation, we can follow the evolution of the proportion of punctured symbols during decoding. To do so, we need some other definitions.
[Equation images not reproduced]
These quantities are used to compute the residual puncturing proportion π^(k)_{i,j} and the proportion p^(k) of punctured symbols at the kth iteration:
[Equation (4.36), image not reproduced]
Since we are interested in UEP and our criterion is the difference between the evolutions of the messages, let us express the mean of the messages coming from the check nodes in terms of the puncturing pattern. To do so, we need some other definitions. First, in order to shorten the notation, let us define
[Equation images not reproduced]
which are the initial proportion of punctured bits of degree i among all the bits linked to check nodes of degree j, the initial proportion of unpunctured bits of degree i among all the bits linked to check nodes of degree j, and the initial proportion of punctured bits of degree i, respectively.
Definition 4.20 Let χ^(k)_{m,n} be the probability that exactly n of the messages coming into a bit node of degree m are not erased at the kth iteration. If C^m_n denotes the binomial coefficient, we have
[Equation image not reproduced]
Thus, we can express the updated mean of a check node of degree j as

[equation image not reproduced]
The term in the square brackets is composed of the mean of the messages coming from bit nodes punctured at the kth iteration (there is no observation from the channel, so u0 = 0) and the mean of the messages coming from bit nodes unpunctured at the kth iteration. We have
[Equation image not reproduced]
The evolution of the puncturing fraction in Eq. (4.36) indicates that the residual puncturing fraction during decoding does not depend on the SNR but only on the detailed distribution pair
[Equation image not reproduced]
We abbreviate the sum by r̄ and define a function H such that
[Equation image not reproduced]
and
[Equation image not reproduced]
where
[Equation image not reproduced]
with
Definition 4.21 λ^(k)_i denotes the proportion of punctured bits of degree i, which equals, according to Bayes' rule,
[Equation image not reproduced]
Proof:
[Equation image not reproduced]
By definition
[Equation image not reproduced]
Then
[Equation image not reproduced]
According to Def. (4.21)
[Equation image not reproduced]
Which is exactly
[Equation image not reproduced]
Then we have
[Equation image not reproduced]
For error-free decoding, this last recursive equation must grow to infinity, i.e., m^(k+1)_u > m^(k)_u for any k ≥ 0, or equivalently, with
[Equation image not reproduced]
which leads to another form for the condition of the convergence of the decoding
[Equation image not reproduced]
The design goal of optimal puncturing defined in [9] is to maximize the puncturing fraction p^(0) for a given Eb/N0, such that Eq. (4.40) is fulfilled.
In our case of a UEP code, the UEP using check irregularity is "defined" by the comparison between the gaps
[Equation image not reproduced]
Finally, puncturing such a UEP code requires defining a tolerance limiting how much the gap (4.41) can be decreased (it cannot be increased) in order not to destroy the UEP properties more than allowed. These constraints on the local gaps defining the UEP must be included in the design of the detailed puncturing distribution. The design of the detailed puncturing distribution
[Equation image not reproduced]
could be done with the same means as used in [9], i.e., discretized density evolution, but this has not been studied further in this work.

Chapter 5
Conclusions
In this work we have proposed a method to optimize the unequal error protection properties of LDPC codes. We have shown that it is possible to adapt the two kinds of irregularities in order to speed up the local convergence. We first discussed the definition of UEP properties, and highlighted the fact that an LDPC code can have UEP properties if decoded by maximum likelihood, but none if decoded by belief propagation. UEP properties must then be defined depending on the decoding used.
We have adopted a detailed representation of LDPC codes that allows describing subsets of possible interleavers that fit the UEP requirements, defining local convergence, and finding a cost function. Since the irregularities of the bit-node profile have already been studied, we especially focused on the check-node profile optimization, keeping the bit-node profile regular. We found that the irregularities over check nodes do not only influence the speed of a local convergence, but also generate different behaviors at different parts of the codeword at a high number of iterations, in contrast to irregularities over bit nodes; we tried to explain these two behaviors formally. The fact that UEP properties remain at a high number of iterations is very interesting if we consider the recent work in [5], which reduces the complexity of decoding and thus allows a higher number of iterations with the same resources. However, acting on check irregularities implied sub-optimality of the overall code in the case when the maximum degree of bit nodes is not adapted, and we then had to define a validity domain for our optimization, which can be considered and achieved either as a second stage in the optimization of the whole code, i.e., after the bit-node optimization, or as a first stage that would add a constraint on the following optimization. We would then have to keep all the parameters in the cost function, and optimize the check-node profile in terms of the fixed bit-node profile.
From a practical point of view, we tried to optimize a so-called mother code by pruning, i.e., by making some bits deterministic, in order to construct a subcode, with lower code rate, that fulfils the UEP requirements, and that can be decoded by the same decoder as the mother code, or a better one according to the available memory of the receiver. Finally, we briefly analyzed what the optimal puncturing of such UEP codes should be, still using the detailed representation of LDPC codes, in order to compensate the code rate loss due to pruning.
Such an optimization provides flexibility in selecting the appropriate scheme from performance, computational-complexity and memory-requirements perspectives.
As further tasks, testing the robustness to variations of proportions of classes should be useful considering practical applications of such codes. Another work would be to optimize both kinds of irregularities in a joint way, and not sequentially anymore, by properly describing the cost function, and still considering the required performance and the constraints of the target system. The difficulty of such an approach would lie in the non-linearity of the optimization.
Appendix A
A.1 Transition from proportions of edges to proportions of nodes
λ̃_i = proportion of bit nodes of degree i
= (number of bit nodes of degree i) / (number of bit nodes in the whole graph)
= [(number of edges linked to bit nodes of degree i) / i] / Σ_k [(number of edges linked to bit nodes of degree k) / k]
= [(λ_i / i) · total number of edges] / Σ_k [(λ_k / k) · total number of edges]

which is translated to

λ̃_i = (λ_i / i) / Σ_k (λ_k / k)
The same reasoning is carried out to obtain the similar expression for ρ_j.
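The conversion of A.1 (and its inverse, A.2 below) can be written as a pair of maps; a sketch with a made-up degree distribution:

```python
def edges_to_nodes(edge_props):
    """lambda~_i = (lambda_i / i) / sum_k (lambda_k / k):
    edge-perspective degree distribution -> node perspective."""
    z = sum(frac / deg for deg, frac in edge_props.items())
    return {deg: (frac / deg) / z for deg, frac in edge_props.items()}

def nodes_to_edges(node_props):
    """lambda_i = (lambda~_i * i) / sum_k (lambda~_k * k): the inverse map."""
    z = sum(frac * deg for deg, frac in node_props.items())
    return {deg: (frac * deg) / z for deg, frac in node_props.items()}

edge = {2: 0.4, 3: 0.6}        # fractions of edges touching degree-2/3 bit nodes
node = edges_to_nodes(edge)    # fractions of bit nodes of each degree
back = nodes_to_edges(node)
print(node)   # low-degree nodes are relatively more numerous than their edge share
print(back)   # round-trips to the original edge distribution
```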
A.2 Transition from proportions of nodes to proportions of edges
λ_i = proportion of edges linked to bit nodes of degree i
= (number of edges linked to bit nodes of degree i) / (number of edges in the whole graph)
= [(number of bit nodes of degree i) · i] / Σ_k [(number of bit nodes of degree k) · k]
= [(λ̃_i · i) · total number of bit nodes] / Σ_k [(λ̃_k · k) · total number of bit nodes]

which is translated to

λ_i = (λ̃_i · i) / Σ_k (λ̃_k · k)
The same reasoning is carried out to obtain the similar expression for ρ_j.
A.3 Expression of the code rate in terms of proportions
1 − R = (total number of check nodes) / (total number of bit nodes). According to the previous proofs, we easily obtain:

1 − R = (Σ_j ρ_j / j) / (Σ_i λ_i / i)

Finally,

R = 1 − (Σ_j ρ_j / j) / (Σ_i λ_i / i)
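The rate formula of A.3 in code form, checked on the (3,6)-regular profile of the mother code used in the results section:

```python
def design_rate(lambda_edge, rho_edge):
    """R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i)
    for edge-perspective bit (lambda) and check (rho) degree distributions."""
    checks = sum(frac / deg for deg, frac in rho_edge.items())
    bits = sum(frac / deg for deg, frac in lambda_edge.items())
    return 1.0 - checks / bits

# A regular (3, 6) code: every bit node has degree 3, every check node degree 6.
print(design_rate({3: 1.0}, {6: 1.0}))  # 1 - (1/6)/(1/3), i.e. rate 1/2
```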
Bibliography
[1] A. Amraoui. LDPCopt, http://lthcwww.epfl.ch/research/ldpcopt/.
[2] A. Ashikhmin, G. Kramer, and S. ten Brink. Extrinsic Information Transfer Functions: Model and Erasure Channel Properties. IEEE Trans. on Inform. Theory, 50(11):2657-2673, November 2004.
[3] R.E. Blahut. Theory and Practice of Error Control Codes . Addison- Wesley, 1984.
[4] I.M. Boyarinov and G.L. Katsman. Linear Unequal Error Protection Codes. IEEE Trans. on Inform. Theory, 27(2):168-175, March 1981.
[5] J.C. Chen, A. Dholakia, E. Eleftheriou, M.P.C. Fossorier, and X.-Y. Hu. Reduced-Complexity Decoding of LDPC Codes. IEEE Trans. on Communications, 53(8):1288-1299, August 2005.
[6] S.-Y. Chung, T. Richardson, and R. Urbanke. Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation. IEEE Trans. on Inform. Theory, 47(2):657-670, February 2001.
[7] D. Declercq. Optimisation et performances des codes LDPC pour des canaux non-standards. Habilitation à diriger des recherches, Université de Cergy-Pontoise, December 2003.
[8] R.G. Gallager. Low-Density Parity-Check Codes. IRE Trans. on Inform. Theory, pages 21-28, 1962.
[9] J. Ha and S.W. McLaughlin. Optimal Puncturing of Irregular Low-Density Parity-Check Codes. In IEEE International Conference on Communications, pages 3110-3114, Anchorage, Alaska, USA, May 2003.
[10] S. Haykin. Communication Systems. J. Wiley and Sons, Inc., 4th edition, 2001.
[11] K. Kasai, T. Shibuya, and K. Sakaniwa. Detailedly Represented Irregular Low-Density Parity-Check Codes. IEICE Trans. Fundamentals, E86-A(10):2435-2443, October 2003.
[12] F.R. Kschischang, B.J. Frey, and H.-A. Loeliger. Factor Graphs and the Sum-Product Algorithm. In IEEE Trans. on Inform. Theory, volume to see, page to see, July 1998.
[13] G. Lechner. Convergence of the Sum-Product Algorithm for Short Low-Density Parity-Check Codes. Master's thesis, Vienna University of Technology, April 2003.
[14] D.J.C. MacKay and R.M. Neal. Near Shannon Limit Performance of Low-Density Parity-Check Codes. Electronics Letters, 33(6):457, March 1997.
[15] D.J.C. MacKay, S.T. Wilson, and M.C. Davey. Comparison of Constructions of Irregular Low-Density Parity-Check Codes. IEEE Trans. on Communications, 47(10):1449-1453, October 1999.
[16] B. Masnick and J. Wolf. On Linear Unequal Error Protection Codes. IEEE Trans. on Inform. Theory, 13(4):600-607, October 1967.
[17] C. Poulliat. Allocation et Optimisation de Ressources pour la Transmission de Données Multimédia. PhD thesis, Université de Cergy-Pontoise, 2004.
[18] C. Poulliat, D. Declercq, and I. Fijalkow. Optimization of LDPC Codes for UEP Channels. In ISIT 2004, Chicago, USA, June 2004.
[19] T. Richardson, A. Shokrollahi, and R. Urbanke. Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes. IEEE Transactions on Communications, 47(2):619-637, February 2001.
[20] T. Richardson and R. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. on Inform. Theory, 47:599-618, February 2000.
[21] A. Roumy. Lecture Notes, DEA TIS, 2005.
[22] S. ten Brink. Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes. IEEE Transactions on Communications, 49(10):1727-1737, October 2001.
[23] T. Tian, C. Jones, J.D. Villasenor, and R.D. Wesel. Construction of Irregular LDPC Codes with Low Error Floors. In ICC 2003, Anchorage, Alaska, USA, 2003.
[24] N. von Deetzen. Unequal Error Protection Turbo Codes. Master's thesis, Universität Bremen, February 2005.
[25] X. Yang and D. Yuan. New Research on Unequal Error Protection Property of Irregular LDPC Codes. In IEEE Consumer Communications and Networking Conference, pages 361-363, January 2004.
While the invention is herein illustrated and described in detail in the drawings and this description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single unit or several units may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Any reference signs in the claims should not be construed as limiting the scope.
In the following a further description of the present invention is provided illustrating further embodiments of the invention by way of further figures.

Claims

1 . Method for unequal error protection for the transmission of data, in particular source-encoded audio and/or video data, allowing to adapt the error correction and coding gains to requirements and sensitivity of the data, using irregular Low-Density Parity-Check (LDPC) codes, characterized in that the encoding and decoding is carried out according to a code that can be represented by a bipartite graph, in particular a Tanner graph, having check nodes which are defined by check equations, in particular the rows of the parity-check matrix, said representing graph having an unequal distribution of check-node connections, not excluding further irregularities on the variable-node side, said check-node distribution being widespread using more than two different numbers of connections at check-node side, said widespread distribution resulting in different protections of the variable nodes linked to that check nodes with different connection degree, wherein said variable nodes are classified into different protection levels and the data requiring these different protection (quality) levels will be placed into the corresponding codeword positions at the transmitter in a systematic or non-systematic fashion, ensuring that the decoder at the receiver operating on the irregular graph of the said code will ensure the desired different protection levels.
2. Method for unequal error protection according to claim 1 , characterized in that the decoding of the LDPC code is performed in an iterative manner, in particular by message passing or a sum-product algorithm.
3. Method for unequal error protection according to claim 1 , characterized in that data is subdivided into sensitivity classes C k, which are protected differently according to an uneven check-node distribution, wherein a check node is considered to belong to a certain sensitivity class, if at least one direct link (edge) in the graph exists between the check node and a data bit of that said class.
4. Method for unequal error protection according to claim 1 , characterized in that the difference between the average mutual information of messages from check nodes of one class C_k, xcv (C-k), and the average mutual information of messages from check nodes of the whole graph, xcv, serves as a measure for the quality of one class relative to the average.
5. Method for unequal error protection according to any one of the preceding claims, characterized in that to achieve a better protection in a certain class the average check connection degree of that class is minimized, resulting in a reduction of the minimum connection degree in the class.
6. Method for unequal error protection according to claim 1 , characterized in that the uneven check degree distribution is achieved by precoding an LDPC mother code by another code leading to a sub-code, so-called pruning.
7. Method for unequal error protection according to claims 1 and 6, characterized in that the precoding is an LDPC code.
8. Method for unequal error protection according to claims 1 and 6, characterized in that the original LDPC mother code has an upper or lower triangular parity-check matrix and the pruning is realized by just omitting columns in the generator matrix in the identity part together with omitting the corresponding row and also deleting the corresponding column in the parity-check matrix, which means omitting an information input to the encoder of the LDPC mother code and setting it to a fixed known value.
9. Method for unequal error protection according to any one of the preceding claims, characterized in that the encoder is chosen according to the selected parity-check equations.
10. Method for unequal error protection according to any one of the preceding claims, wherein the UEP LDPC encoded data sequence fulfils the low-density parity-check equations given by the uneven check-degree distribution.
1 1 . Method for unequal error protection according to any one of the preceding claims, wherein the parity-check equations are defined over any binary or non-binary Galois field.
12. Apparatus for unequal error protection for the transmission of data, in particular source-encoded audio and/or video data, allowing to adapt the error correction and coding gains to requirements and sensitivity of the data, using irregular Low-Density Parity-Check (LDPC) codes, characterized in that the encoding and decoding is carried out according to a code that can be represented by a bipartite graph, in particular a Tanner graph, having check nodes which are defined by check equations, in particular the rows of the parity-check matrix, said representing graph having an unequal distribution of check-node connections, not excluding further irregularities on the variable-node side, said check-node distribution being widespread using more than two different numbers of connections at check-node side, said widespread distribution resulting in different protections of the variable nodes linked to that check nodes with different connection degree, wherein said variable nodes are classified into different protection levels and the data requiring these different protection (quality) levels will be placed into the corresponding codeword positions at the transmitter in a systematic or non-systematic fashion, ensuring that the decoder at the receiver operating on the irregular graph of the said code will ensure the desired different protection levels.
13. Encoding method for encoding an input signal, characterized in that either a generator matrix is used corresponding to the parity-check matrix of the LDPC code with an irregular check-node profile according to claim 1 for encoding said input signal by multiplying the information bits or symbols of the input signal by this generator matrix or that the dependencies given by the rows of the parity-check matrix, which define the Tanner graph, are directly applied, which can be interpreted as an erasure decoding on this graph, where the erasures are the parity bits or symbols of the code.
14. Encoding method according to claim 13 for encoding an input signal in the case of a systematic representation of the parity-check matrix in a upper or lower triangular form according to Claim 8, which has a triangular part at the parity locations and is rectangular at the information part, alternatively to using the corresponding systematic generator matrix, the parity-check matrix is directly used for recursive encoding for determining the parity bits or symbols starting from the given information bits or symbols, wherein the first parity bit (symbol) results from a weighted summation of the information, the following one from a weighted summation of the information and the first parity position, then from the information and the first two parity positions and so forth.
15. Encoder for encoding an input signal, characterized by means for using either a generator matrix corresponding to the parity- check matrix of the LDPC code with an irregular check-node profile according to claim 1 for encoding said input signal by multiplying the information bits or symbols of the input signal by this generator matrix or for directly applying the dependencies given by the rows of the parity-check matrix which define the Tanner graph, which can be interpreted as an erasure decoding on this graph, where the erasures are the parity bits or bytes of the code.
16. Decoding method for decoding coded data, which has been encoded according to a method of claim 14, wherein the decoding is carried out in an iterative fashion by a sum-product algorithm or a message-passing algorithm.
17. Decoder for decoding coded data, which has been encoded according to a method of claim 14, wherein the decoder comprises means for carrying out the decoding in an iterative fashion by a sum-product algorithm or a message-passing algorithm.
18. Signal characterized in that it results from a mapping and/or modulation where the input data to this mapping and/or modulation fulfils the parity equations of a check-irregular LDPC code according to claim 1, which become visible after demodulation and/or demapping.
19. Record carrier carrying a signal as claimed in claim 18.
20. Computer program comprising program code means for causing a computer to perform the steps of one of the methods as claimed in claims 1 , 13 or 16 when said computer program is carried out on a computer.
21. Method for unequal error protection in transmission allowing to adapt the error correction and coding gains to requirements and sensitivity of the data, especially source-encoded data, e.g., coming from a video or audio encoder, using Low-Density Parity-Check (LDPC) codes, characterized in that the check nodes of the corresponding representing graph (Tanner graph) possess an unequal distribution of connections (edges) and this distribution is widespread, using more than 2 different numbers of connections at check nodes, whereas the bit-node profile may be regular or irregular.
22. Apparatus for unequal error protection according to claims 1 to 11 or 21, such that the UEP-LDPC encoder is placed at the transmitter processing data of different sensitivities delivered from, e.g., a video or audio encoder, and the corresponding UEP-LDPC decoder is part of the receiver, delivering the data again with different reliabilities, and the UEP-LDPC decoder implements an uneven check-degree distribution of more than 2 different check degrees, adapted to the sensitivities of the protected data.
PCT/EP2007/050842 2006-01-27 2007-01-29 Check-irregular ldpc codes for uep WO2007085653A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP07704196A EP1992072A1 (en) 2006-01-27 2007-01-29 Check-irregular ldpc codes for uep

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06100979.1 2006-01-27
EP06100979 2006-01-27

Publications (1)

Publication Number Publication Date
WO2007085653A1 true WO2007085653A1 (en) 2007-08-02

Family

ID=37950565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2007/050842 WO2007085653A1 (en) 2006-01-27 2007-01-29 Check-irregular ldpc codes for uep

Country Status (2)

Country Link
EP (1) EP1992072A1 (en)
WO (1) WO2007085653A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102158696A (en) * 2011-01-25 2011-08-17 天津大学 Three-dimensional video transmission method based on expanding window fountain code
US8375278B2 (en) 2009-07-21 2013-02-12 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
US8516352B2 (en) 2009-07-21 2013-08-20 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
US8516351B2 (en) 2009-07-21 2013-08-20 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
WO2015165093A1 (en) * 2014-04-30 2015-11-05 华为技术有限公司 Data transmission method and device
US9397699B2 (en) 2009-07-21 2016-07-19 Ramot At Tel Aviv University Ltd. Compact decoding of punctured codes
US20190165812A1 (en) * 2017-11-30 2019-05-30 Korea University Research And Business Foundation Method and apparatus for deciding decoding order for shuffled decoding of ldpc codes
US10382064B2 (en) 2015-10-13 2019-08-13 SK Hynix Inc. Efficient LDPC encoder for irregular code
CN112711523A (en) * 2019-10-24 2021-04-27 珠海格力电器股份有限公司 Program problem positioning method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1596501A1 (en) * 2004-05-12 2005-11-16 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding block low density parity check codes with a variable coding rate

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1596501A1 (en) * 2004-05-12 2005-11-16 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding block low density parity check codes with a variable coding rate

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
PISHRO-NIK, RAHNAVARD, FEKRI: "Nonuniform Error Correction Using Low-Density Parity-Check Codes", IEEE TRANSACTIONS ON INFORMATION THEORY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 51, no. 7, July 2005 (2005-07-01), pages 2702 - 2714, XP011135603, ISSN: 0018-9448 *
POULLIAT C ET AL: "Optimization of LDPC codes for UEP channels", INFORMATION THEORY, 2004. ISIT 2004. PROCEEDINGS. INTERNATIONAL SYMPOSIUM ON CHICAGO, ILLINOIS, USA JUNE 27-JULY 2, 2004, PISCATAWAY, NJ, USA,IEEE, 27 June 2004 (2004-06-27), pages 451 - 451, XP010750162, ISBN: 0-7803-8280-3 *
RICHARDSON, SHOKROLLAHI, URBANKE: "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes", IEEE TRANSACTIONS ON INFORMATION THEORY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 47, no. 2, February 2001 (2001-02-01), XP011027879, ISSN: 0018-9448 *
SCHLEGEL C ET AL: "Trellis and Turbo Coding", TRELLIS AND TURBO CODING, WILEY-IEEE PRESS, PISCATAWAY, NJ, US, March 2004 (2004-03-01), pages complete, XP002425730 *
TAO TIAN ET AL: "Rate-compatible low-density parity-check codes", INFORMATION THEORY, 2004. ISIT 2004. PROCEEDINGS. INTERNATIONAL SYMPOSIUM ON CHICAGO, ILLINOIS, USA JUNE 27-JULY 2, 2004, PISCATAWAY, NJ, USA,IEEE, 27 June 2004 (2004-06-27), pages 153 - 153, XP010749864, ISBN: 0-7803-8280-3 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8375278B2 (en) 2009-07-21 2013-02-12 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
US8516352B2 (en) 2009-07-21 2013-08-20 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
US8516351B2 (en) 2009-07-21 2013-08-20 Ramot At Tel Aviv University Ltd. Compact decoding of punctured block codes
US9397699B2 (en) 2009-07-21 2016-07-19 Ramot At Tel Aviv University Ltd. Compact decoding of punctured codes
CN102158696A (en) * 2011-01-25 2011-08-17 天津大学 Three-dimensional video transmission method based on expanding window fountain code
WO2015165093A1 (en) * 2014-04-30 2015-11-05 华为技术有限公司 Data transmission method and device
CN106464421A (en) * 2014-04-30 2017-02-22 华为技术有限公司 Data transmission method and device
US10135466B2 (en) 2014-04-30 2018-11-20 Huawei Technologies Data sending method and apparatus
CN106464421B (en) * 2014-04-30 2019-10-18 华为技术有限公司 A kind of data transmission method for uplink and device
US10382064B2 (en) 2015-10-13 2019-08-13 SK Hynix Inc. Efficient LDPC encoder for irregular code
US20190165812A1 (en) * 2017-11-30 2019-05-30 Korea University Research And Business Foundation Method and apparatus for deciding decoding order for shuffled decoding of ldpc codes
CN112711523A (en) * 2019-10-24 2021-04-27 珠海格力电器股份有限公司 Program problem positioning method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
EP1992072A1 (en) 2008-11-19

Similar Documents

Publication Publication Date Title
WO2007085653A1 (en) Check-irregular ldpc codes for uep
KR101021465B1 (en) Apparatus and method for receiving signal in a communication system using a low density parity check code
US8161363B2 (en) Apparatus and method to encode/decode block low density parity check codes in a communication system
TWI387212B (en) Apparatus and method for encoding and decoding channel in a communication system using low-density parity-check codes
Jose et al. Analysis of hard decision and soft decision decoding algorithms of LDPC codes in AWGN
EP1665538A2 (en) Low-density parity-check codes for multiple code rates
Luby et al. Verification-based decoding for packet-based low-density parity-check codes
Golmohammadi et al. Concatenated spatially coupled LDPC codes with sliding window decoding for joint source-channel coding
Nozaki Zigzag decodable fountain codes
US8019020B1 (en) Binary decoding for correlated input information
Mitzenmacher Polynomial time low-density parity-check codes with rates very close to the capacity of the $ q $-ary random deletion channel for large $ q$
Liang et al. Rateless polar-spinal coding scheme with enhanced information unequal error protection
TWI713310B (en) Interleaver
Wang et al. Variable-length coding with shared incremental redundancy: Design methods and examples
Tsimbalo et al. CRC error correction for energy-constrained transmission
Morero et al. Novel serial code concatenation strategies for error floor mitigation of low-density parity-check and turbo product codes
Ratzer Error-correction on non-standard communication channels
Thomos et al. Product code optimization for determinate state LDPC decoding in robust image transmission
Liu et al. On the design of generalized LDPC codes with component BCJR decoding
Henkel et al. UEP concepts in modulation and coding
Hu et al. A new coding scheme for the noisy-channel Slepian-Wolf problem: Separate design and joint decoding
Luby et al. Verification codes
Wu et al. Joint source-channel polar-coded modulation
Jose et al. Channel coding using low density parity check codes in AWGN
Nozaki Reduction of decoding iterations for zigzag decodable fountain codes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2007704196

Country of ref document: EP