Properties of syndrome distribution for blind reconstruction of cyclic codes

  • Original Paper
  • Published:
Applicable Algebra in Engineering, Communication and Computing

Abstract

In the problem of blind reconstruction of channel codes, the receiver does not know the channel code used at the transmitter, and the aim is to identify this unknown code from the received data. For cyclic codes, typical blind reconstruction methods make use of some elementary properties of the syndromes (remainders) of the received polynomials. The main aim of this paper is to provide a detailed analysis of the properties of the syndromes that could be useful in designing more efficient blind reconstruction methods. Specifically, we prove that the syndrome distribution of the noise-free sequence is either degenerate, uniform, or restricted uniform. We also provide necessary and sufficient conditions for the syndrome distribution to be of a given type. For the noise-affected received sequence we identify additional structural properties exhibited by the syndrome distribution. Finally, we apply these results to analyze the performance of existing blind reconstruction methods.
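The syndrome-counting idea analyzed in the paper can be illustrated with a small simulation. The Python sketch below is not the paper's algorithm: it assumes a toy (7,4) cyclic code with generator \(g(X) = 1 + X + X^3\), a BSC with small crossover probability, and a handful of candidate divisors \(f(X)\), and it simply counts how often the remainder of each received block modulo a candidate \(f(X)\) is zero.

```python
# A minimal sketch (not the paper's algorithm): count zero syndromes for a few
# candidate divisors f(X). Polynomials are bit masks (bit i <-> coefficient of
# X^i). The (7,4) cyclic code with g(X) = 1 + X + X^3 and the BSC crossover
# probability are illustrative assumptions.
import random

def poly_mod(a, f):
    """Remainder of a(X) divided by f(X) over GF(2)."""
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def poly_mul(a, b):
    """Carry-less (GF(2)) polynomial product."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

n, g, p = 7, 0b1011, 0.02
random.seed(1)

def noisy_codeword():
    c = poly_mul(random.getrandbits(4), g)                 # random codeword
    e = sum((random.random() < p) << i for i in range(n))  # BSC(p) error pattern
    return c ^ e

blocks = [noisy_codeword() for _ in range(2000)]
# Three factors of X^7 + 1 and one non-factor, as candidate divisors.
for f in (0b1011, 0b1101, 0b111, 0b11):
    zero = sum(poly_mod(y, f) == 0 for y in blocks) / len(blocks)
    print(f"f = {f:#06b}: fraction of zero syndromes = {zero:.3f}")
```

In such a scan, the candidate that drives the zero-syndrome fraction close to one points to (a divisor of) the true generator polynomial; the paper characterizes the distributions that underlie this behaviour.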


Notes

  1. In the literature, a potential low-weight codeword in the dual space can be chosen arbitrarily or can be obtained using information-set decoding algorithms [5, 9, 10].

  2. A linear block code C(n) is said to be degenerate if its generator matrix G can be written as a concatenation of two or more identical copies of a generator matrix \(G^{\prime }\) of some other linear block code [20, Ch. 8].

  3. \(C_1(n_1) + C_2(n_2) :=\Big \{ [{\mathbf {c}}_1 {~} {\mathbf {c}}_2] \text{ such } \text{ that } {\mathbf {c}}_1 \in C_1(n_1) \text{ and } {\mathbf {c}}_2 \in C_2(n_2) \Big \}\) [15].

  4. The assumed length n is correct up to an integer multiple of \(n_0\).

  5. The dual of the code obtained by puncturing C(n) at any d coordinate locations is equal to the code obtained by shortening \(C^{\perp }(n)\) at the same locations [23, Section 3.2.5].

  6. When \(p=0\), \({\mathbb {P}}[{\mathbf {e}}_j(X) \in C(n,f)] = 1\) and \({\mathbb {P}}[{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)] = 0\) for any \({\mathcal {G}}_d(n,f)\).

References

  1. Barbier, J., Letessier, J.: Forward error correcting codes characterization based on rank properties. In: IEEE WCSP, pp. 1–5. Nanjing, China (2009)

  2. Berlekamp, E., McEliece, R., Tilborg, H.: On the inherent intractability of certain coding problems. IEEE Trans. Inf. Theory 24, 384–386 (1978)

  3. Bonvard, A., Houcke, S., Gautier, R., Marazin, M.: Classification based on Euclidean distance distribution for blind identification of error correcting codes in noncooperative contexts. IEEE Trans. Signal Process. 66(10), 2572–2583 (2018)

  4. Burel, G., Gautier, R.: Blind estimation of encoder and interleaver characteristics in a non cooperative context. In: Proceedings of the IASTED. Scottsdale, USA (2003)

  5. Canteaut, A., Chabaud, F.: A new algorithm for finding minimum-weight words in a linear code: application to McEliece’s cryptosystem and to narrow-sense BCH codes of length 511. IEEE Trans. Inf. Theory 44(1), 367–378 (1998)

  6. Carrier, K., Tillich, J.P.: Identifying an unknown code by partial Gaussian elimination. Des. Codes Cryptogr. 87(2–3), 685–713 (2019)

  7. Chabot, C.: Recognition of a code in a noisy environment. In: Proceedings of IEEE International Symposium on Information Theory, pp. 2211–2215. Nice, France (2007)

  8. Chabot, C.: Reconnaissance de codes, structure des codes quasi-cycliques. PhD thesis, University of Limoges (2009)

  9. Cluzeau, M.: Block code reconstruction using iterative decoding techniques. In: Proceedings of International Symposium on Information Theory, pp. 2269–2273. Seattle, USA (2006)

  10. Cluzeau, M., Finiasz, M.: Recovering a code’s length and synchronization from a noisy intercepted bitstream. In: Proceedings of IEEE International Symposium on Information Theory, pp. 2737–2741. Seoul, Korea (2009)

  11. Côte, M., Sendrier, N.: Reconstruction of a turbo-code interleaver from noisy observation. In: Proceedings of IEEE International Symposium on Information Theory, pp. 2003–2007. Austin, Texas (2010)

  12. Cover, T., Thomas, J.: Elements of Information Theory. Wiley, New York (1991)

  13. Dingel, J., Hagenauer, J.: Parameter estimation of a convolutional encoder from noisy observations. In: Proceedings of IEEE International Symposium on Information Theory, pp. 1776–1780. Nice, France (2007)

  14. Filiol, E.: Reconstruction of convolutional encoders over GF(q). In: Darnell, M. (ed.) Cryptography and Coding, vol. 1335, pp. 101–109. Springer, Berlin (1997)

  15. Huffman, W., Pless, V.: Fundamentals of Error-Correcting Codes. Cambridge University Press, Cambridge (2003)

  16. Jo, D., Kwon, S., Shin, D.: Blind reconstruction of BCH codes based on consecutive roots of generator polynomials. IEEE Commun. Lett. 22(5), 894–897 (2018)

  17. Lee, H., Park, C., Lee, J., Song, Y.: Reconstruction of BCH codes using probability compensation. In: Proceedings of IEEE APCC, pp. 591–594. Jeju Island, Korea (2012)

  18. Lidl, R., Niederreiter, H.: Introduction to Finite Fields and Their Applications. Cambridge University Press, Cambridge (1986)

  19. Lin, S., Costello, D.: Error Control Coding, 2nd edn. Prentice-Hall, Englewood Cliffs (2004)

  20. MacWilliams, F., Sloane, N.: The Theory of Error Correcting Codes. North-Holland Publishing Company, Amsterdam (1977)

  21. Marazin, M., Gautier, R., Burel, G.: Blind recovery of \(k/n\) rate convolutional encoders in a noisy environment. EURASIP J. Wirel. Commun. Netw. 1, 1–9 (2011)

  22. Moosavi, R., Larsson, E.: Fast blind recognition of channel codes. IEEE Trans. Commun. 62(5), 1393–1405 (2014)

  23. Pellikaan, R., Wu, X.W., Bulygin, S., Jurrius, R.: Codes, Cryptology and Curves with Computer Algebra, vol. 1. Cambridge University Press, Cambridge (2017)

  24. Rice, B.: Determining the parameters of a rate 1/n convolutional encoder over GF(q). In: Proceedings of 3rd International Conference on Finite Fields and Applications. Glasgow, Scotland (1995)

  25. Sicot, G., Houcke, S., Barbier, J.: Blind detection of interleaver parameters. Signal Process. 89(4), 450–462 (2009)

  26. Sullivan, D.: A fundamental inequality between the probabilities of binary subgroups and cosets. IEEE Trans. Inf. Theory 13(1), 91–94 (1967)

  27. Swaminathan, R., Madhukumar, A., Wang, G., Shang Kee, T.: Blind reconstruction of Reed-Solomon encoder and interleavers over noisy environment. IEEE Trans. Broadcast., pp. 1–16 (2018)

  28. Valembois, A.: Detection and recognition of a binary linear code. Discrete Appl. Math. 111, 199–218 (2001)

  29. Yardi, A.: Blind reconstruction of cyclic codes over binary erasure channel. In: Proceedings of IEEE International Symposium on Information Theory and Its Applications, pp. 301–305. Singapore (2018)

  30. Yardi, A., Vijayakumaran, S., Kumar, A.: Blind reconstruction of binary cyclic codes. In: Proceedings of European Wireless, pp. 849–854. Barcelona, Spain (2014)

  31. Yardi, A., Vijayakumaran, S., Kumar, A.: Blind reconstruction of binary cyclic codes from unsynchronized bitstream. IEEE Trans. Commun. 64(7), 2693–2706 (2016)

  32. Zhou, J., Huang, Z., Liu, C., Su, S., Zhang, Y.: Information-dispersion-entropy-based blind recognition of binary BCH codes in soft decision situations. Entropy 15(5), 1705–1725 (2013)

  33. Zhou, J., Huang, Z., Su, S., Shaowu, Y.: Blind recognition of binary cyclic codes. EURASIP J. Wirel. Commun. Netw. 2013(1), 1–17 (2013)

Acknowledgements

This work is supported by DST-INSPIRE faculty program of Government of India. The authors would also like to acknowledge the support of the Bharti Centre for Communication at IIT Bombay, India.

Author information

Corresponding author

Correspondence to Arti D. Yardi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Some properties of syndrome distribution

In this appendix, we will study the distribution of \({\mathbf {r}}(X) = {\mathbf {w}}(X) \bmod f(X)\) in Lemma 1, where \({\mathbf {w}}(X)\) lies in any linear subspace \({\mathcal {W}}(n)\) of \({\mathbb {F}}_2^n\) and \(f(X) \in {\mathbb {F}}_2[X]\). Towards this, we first consider some properties of \({\mathbf {r}}(X)\).

Property 1

Consider a linear subspace \({\mathcal {W}}(n)\) of \({\mathbb {F}}_2^n\) and \(f(X) \in {\mathbb {F}}_2[X]\). For \({\mathbf {w}}(X) \in {\mathcal {W}}(n)\), suppose

$$\begin{aligned} \begin{aligned} {\mathbf {r}}(X)&= {\mathbf {w}}(X) \bmod f(X) = r_0 + r_1X + \cdots + r_{\deg (f)-1}X^{\deg (f)-1}, \end{aligned} \end{aligned}$$
(11)

where each \(r_l \in {\mathbb {F}}_2\), for \(l = 0,1,\ldots ,\deg (f)-1\). Then for every coefficient \(r_l\) of \({\mathbf {r}}(X)\), there exists a vector \({\mathbf {h}}_l \in {\mathbb {F}}_2^n\) such that \(r_l = {\mathbf {w}} {\mathbf {h}}_l^T\).

Proof

For the polynomial f(X), define the map L acting on \({\mathbf {w}} \in {\mathcal {W}}(n)\) as \(L({\mathbf {w}}) :={\mathbf {w}}(X) \bmod f(X) = {\mathbf {r}}(X)\). It can be seen that L is a linear map. Let \({\mathbf {r}}\) be the vector corresponding to \({\mathbf {r}}(X)\), where \({\mathbf {r}} \in {\mathbb {F}}_2^{\deg (f)}\). Since \({\mathbf {w}}\) and \({\mathbf {r}}\) are in one-to-one correspondence with \({\mathbf {w}}(X)\) and \({\mathbf {r}}(X)\) respectively, the linear map L can be viewed as a map \(L : {\mathbb {F}}_2^n \rightarrow {\mathbb {F}}_2^{\deg (f)}\). Every linear transformation L has an associated matrix \(A \in {\mathbb {F}}_2^{n \times \deg (f)}\) such that

$$\begin{aligned} L ({\mathbf {w}}) = {\mathbf {w}} A = {\mathbf {w}} \begin{bmatrix} {\mathbf {h}}_0&{\mathbf {h}}_1&\cdots&{\mathbf {h}}_{\deg (f)-1} \end{bmatrix} = {\mathbf {r}}, \end{aligned}$$
(12)

where \({\mathbf {h}}_l \in {\mathbb {F}}_2^n\) for \(l = 0,1,\ldots ,\deg (f)-1\) are the columns of matrix A. From (12), the lth coefficient of \({\mathbf {r}}\) is equal to \(r_l = {\mathbf {w}}{\mathbf {h}}_l^T\) and the proof is complete. \(\square \)
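As a concrete illustration of Property 1, the short Python sketch below builds the matrix A for an assumed toy setting (n = 7 and \(f(X) = 1 + X + X^3\), chosen only for illustration) by reducing each monomial \(X^i\) modulo f(X), and checks that every syndrome coefficient \(r_l\) equals the inner product \({\mathbf {w}} {\mathbf {h}}_l^T\).

```python
# Sketch of Property 1: build the matrix A of the linear map
# L(w) = w(X) mod f(X); its columns h_l satisfy r_l = w . h_l^T.
# The values n = 7 and f(X) = 1 + X + X^3 are illustrative assumptions;
# polynomials are bit masks (bit i <-> coefficient of X^i).
import random

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

n, f = 7, 0b1011
deg_f = f.bit_length() - 1

# Row i of A holds the coefficients of X^i mod f(X); by linearity,
# w(X) mod f(X) = sum_i w_i (X^i mod f(X)), i.e. r = w A.
A = [[(poly_mod(1 << i, f) >> l) & 1 for l in range(deg_f)] for i in range(n)]
h = [[A[i][l] for i in range(n)] for l in range(deg_f)]    # columns h_0, ..., h_{deg(f)-1}

random.seed(0)
for _ in range(5):
    w = random.getrandbits(n)
    r = poly_mod(w, f)
    via_h = [sum(((w >> i) & 1) & h[l][i] for i in range(n)) % 2 for l in range(deg_f)]
    assert via_h == [(r >> l) & 1 for l in range(deg_f)]
print("checked r_l = w . h_l^T for random w; columns h_l =", h)
```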

Property 2

In Property 1, when f(X) is a factor of \(X^n+1\), the vectors \(\{ {\mathbf {h}}_0, {\mathbf {h}}_1, \ldots , {\mathbf {h}}_{\deg (f)-1} \}\) form a basis of \(C(n,f^{\perp })\).

Proof

It is known that any \({\mathbf {w}}(X) \in {\mathcal {P}}_n\) belongs to C(n,f) if and only if \({\mathbf {r}}(X) = {\mathbf {w}}(X) \bmod f(X) = 0\). Using this, the proof follows from Property 1 and we skip the details. \(\square \)

Property 3

Let \({\mathcal {W}}^{\perp }(n)\) be the dual code of \({\mathcal {W}}(n)\). In Property 1, when \(\deg (f) > n\), the matrix formed by the set of vectors \(\{ {\mathbf {h}}_0, {\mathbf {h}}_1, \ldots , {\mathbf {h}}_{n-1} \}\) is equal to the identity matrix of size \(n \times n\) and each \({\mathbf {h}}_{j}\) for \(j = n, \ldots , \deg (f)-1\) can be chosen to be any codeword in \({\mathcal {W}}^{\perp }(n)\).

Proof

When \(\deg (f) > n\), we have \({\mathbf {w}}(X) \bmod f(X) = {\mathbf {r}}(X) = {\mathbf {w}}(X)\). For \(j = 0, 1, \ldots , n-1\), we have \(w_j = {\mathbf {w}} {\mathbf {h}}_j^T\). This is possible when each \({\mathbf {h}}_j\) is equal to the jth column of the identity matrix of size n. For \(j = n, \ldots , \deg (f)-1\), we have \(r_j = {\mathbf {w}} {\mathbf {h}}_j^T = 0\) for any \({\mathbf {w}} \in {\mathcal {W}}(n)\) and hence \({\mathbf {h}}_j \in {\mathcal {W}}^{\perp }(n)\). \(\square \)

Lemma 1

Consider a linear block code \({\mathcal {W}}(n)\) of length n. Suppose every \({\mathbf {w}}(X)\) is chosen independently and uniformly from \({\mathcal {W}}(n)\). For \(f(X) \in {\mathbb {F}}_2[X]\), suppose \({\mathbf {r}}(X) = {\mathbf {w}}(X) \bmod f(X)\) and let \({\mathcal {H}}(n)\) be the subspace spanned by the vectors \({\mathbf {h}}_0, {\mathbf {h}}_1, \ldots , {\mathbf {h}}_{\deg (f)-1}\) defined in Property 1. We then have the following three cases.

  1. When \({\mathcal {H}}(n) \subseteq {\mathcal {W}}^{\perp }(n)\), \({\mathbf {r}}(X)\) follows the degenerate distribution.

  2. When \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\), \({\mathbf {r}}(X)\) follows the restricted uniform distribution when either \(\deg (f) > n\) or \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) \ne \{ {\mathbf {0}}_n \}\).

  3. When \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\), \({\mathbf {r}}(X)\) follows the uniform distribution when \(\deg (f) \le n\) and \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) = \{ {\mathbf {0}}_n \}\).

Proof

Suppose \({\mathbf {r}}(X) = {\mathbf {w}}(X) \bmod f(X)\) is given by,

$$\begin{aligned} \begin{aligned} {\mathbf {r}}(X)&= r_0 +r_1X + \cdots + r_{{\deg (f)}-1}X^{{\deg (f)}-1} \\&= {\mathbf {w}}{\mathbf {h}}_0^T +{\mathbf {w}}{\mathbf {h}}_1^T X + \cdots + {\mathbf {w}}{\mathbf {h}}_{{\deg (f)}-1}^TX^{{\deg (f)}-1}, \end{aligned} \end{aligned}$$
(13)

where the last equality is obtained from Property 1. Let \(\{ R_0, R_1, \ldots , R_{\deg (f)-1} \}\) be the set of random variables corresponding to \(\{ r_0, r_1, \ldots , r_{\deg (f)-1} \}\). For a given \({\mathbf {h}}_l\) and \({\mathcal {W}}(n)\) there are two possibilities: either \({\mathbf {h}}_l \in {\mathcal {W}}^{\perp }(n)\) or \({\mathbf {h}}_l \notin {\mathcal {W}}^{\perp }(n)\). When \({\mathbf {h}}_l \in {\mathcal {W}}^{\perp }(n)\), the corresponding \(R_l\) is always zero, and when \({\mathbf {h}}_l \notin {\mathcal {W}}^{\perp }(n)\), the corresponding \(R_l\) is equally likely to be zero or one [20, Lemma 5.2.2]. For the two codes \({\mathcal {W}}^{\perp }(n)\) and \({\mathcal {H}}(n)\), exactly one of the following situations holds.

  1. \({\mathcal {H}}(n) \subseteq {\mathcal {W}}^{\perp }(n)\):

    In this case, since every \({\mathbf {h}}_l \in {\mathcal {W}}^{\perp }(n)\), each \(R_l\) will be always zero for \(l = 0, 1, \ldots , \deg (f)-1\). This implies that \({\mathbf {r}}(X)\) follows the degenerate distribution.

  2. \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\) and \(\deg (f) > n\):

    When \(\deg (f) > n\), from Property 3 we have \(r_j = 0\) for \(j = n, \ldots , \deg (f)-1\). Thus \({\mathbf {r}}(X)\) cannot take all possible values in \({\mathcal {P}}_{\deg (f)}\) and hence cannot follow the uniform distribution. Further, from Property 3, \({\mathbf {h}}_j \in {\mathcal {W}}^{\perp }(n)\) for \(j = n, \ldots , \deg (f)-1\). If \({\mathbf {h}}_j \in {\mathcal {W}}^{\perp }(n)\) for every \(j = 0,1, \ldots , n-1\), then the condition \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\) is not satisfied. Thus there exists at least one \({\mathbf {h}}_j, 0 \le j < n\), such that \({\mathbf {h}}_j \notin {\mathcal {W}}^{\perp }(n)\). The corresponding \(R_j\) will be equally likely to be zero or one and hence \({\mathbf {r}}(X)\) follows the restricted uniform distribution.

  3. \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\), \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) \ne \{ {\mathbf {0}}_n \}\), and \(\deg (f) \le n\):

    Suppose \({\mathbf {h}} \in {\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n)\) such that \({\mathbf {h}} \ne {\mathbf {0}}_n\). Since \({\mathbf {h}} \in {\mathcal {H}}(n)\) we have,

    $$\begin{aligned} {\mathbf {h}} = a_0 {\mathbf {h}}_0 + a_1 {\mathbf {h}}_1 + \cdots + a_{\deg (f)-1} {\mathbf {h}}_{\deg (f)-1}, \end{aligned}$$
    (14)

    where each \(a_l \in {\mathbb {F}}_2\) for \(l = 0,1,\ldots , \deg (f)-1\) such that for some i, \(0 \le i < \deg (f)\), \(a_i \ne 0\). Since \({\mathbf {h}} \in {\mathcal {W}}^{\perp }(n)\), we have \({\mathbf {w}} {\mathbf {h}}^T = 0\) and from (14) we get

    $$\begin{aligned} \begin{aligned} {\mathbf {w}} \Big (a_0 {\mathbf {h}}_0 + a_1 {\mathbf {h}}_1 + \cdots + a_{\deg (f)-1} {\mathbf {h}}_{\deg (f)-1} \Big )^T&= 0 \\ \implies a_0 r_0 + a_1 r_1 + \cdots + a_{\deg (f)-1} r_{\deg (f)-1}&= 0, \end{aligned} \end{aligned}$$
    (15)

    where the last equality is obtained from (13). This implies that the set of random variables \(\{R_0, R_1, \ldots , R_{\deg (f)-1}\}\) satisfy a linear relation given in (15) and the random vector \([R_0 {~} R_1 {~} \ldots {~} R_{\deg (f)-1}]\) cannot take all possible \(2^{\deg (f)}\) values. As each \(R_l\) is either zero with probability one or equally likely to be zero or one, \({\mathbf {r}}(X)\) will follow the restricted uniform distribution.

  4. \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\), \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) = \{ {\mathbf {0}}_n \}\), and \(\deg (f) \le n\):

    Since \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) = \{ {\mathbf {0}}_n \}\), the random variables \(\{ R_0, R_1, \ldots , R_{\deg (f)-1} \}\) do not satisfy any linear relation and this implies that \(\{ R_0, R_1, \ldots , R_{\deg (f)-1} \}\) are independent. We now prove by contradiction that each \(R_l\) is equally likely to be zero or one. Suppose for some i, \(0 \le i < \deg (f)\), \(R_i\) is always zero, which implies that \({\mathbf {h}}_i \in {\mathcal {W}}^{\perp }(n)\). Since \({\mathbf {h}}_i \in {\mathcal {H}}(n)\) we have \({\mathbf {h}}_i \in {\mathcal {W}}^{\perp }(n) \cap {\mathcal {H}}(n)\) such that \({\mathbf {h}}_i \ne {\mathbf {0}}_n\), which is a contradiction. Hence each \(R_l\) is equally likely to be zero or one and, since the \(R_l\) are independent, \({\mathbf {r}}(X)\) follows the uniform distribution.

\(\square \)
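A minimal way to see the three cases of Lemma 1 in action is to enumerate a small code exhaustively. The sketch below assumes \({\mathcal {W}}(n) = C(7, 1+X+X^3)\) and a few illustrative choices of f(X); it tabulates \({\mathbf {w}}(X) \bmod f(X)\) over all codewords and labels the resulting distribution.

```python
# Sketch of the trichotomy in Lemma 1 by exhaustive enumeration: for w(X)
# uniform over a small code W(n), tabulate w(X) mod f(X) and label the
# distribution. The code W(n) = C(7, 1 + X + X^3) and the candidate f(X)
# below are illustrative assumptions.
from collections import Counter

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def poly_mul(a, b):
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

g0 = 0b1011                                    # g0(X) = 1 + X + X^3
codewords = [poly_mul(m, g0) for m in range(16)]

# f = g0 gives the degenerate case and f = g0*(1 + X) a restricted uniform case.
for f in (0b1011, 0b1101, 0b11101, 0b11):
    counts = Counter(poly_mod(c, f) for c in codewords)
    support, deg_f = len(counts), f.bit_length() - 1
    if support == 1:
        kind = "degenerate"
    elif support == 2 ** deg_f and len(set(counts.values())) == 1:
        kind = "uniform"
    else:
        kind = "restricted uniform"
    print(f"f = {f:b}: support size {support} of {2 ** deg_f} -> {kind}")
```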

Appendix B: Proof of Theorem 1

In case (a), \(n=qn_0\) for some \(q \in {\mathbb {N}}\), \(q \ge 1\), \(s = s_0\), and f(X) divides \(g_0(X)\). In this case, each \({\mathbf {w}} \in {\mathcal {W}}(n)\) will be formed by the concatenation of q codewords of \(C(n_0,g_0)\) (see Fig. 3). Since f(X) divides \(g_0(X)\), \({\mathbf {w}}(X) \bmod f(X)\) will always be zero and \({\mathbf {w}}(X) \bmod f(X)\) follows the degenerate distribution.

We shall now consider case (b), i.e., when \(n \ne qn_0\) or \(s \ne s_0\) or f(X) does not divide \(g_0(X)\). In our system model, we have assumed that every codeword is chosen independently and uniformly from \(C(n_0, g_0)\). This implies that each \({\mathbf {w}}(X)\) is equally likely to be the polynomial representation of any of the codewords in \({\mathcal {W}}(n)\). We can now apply Lemma 1 of Appendix A to study the distribution of \({\mathbf {w}}(X) \bmod f(X)\). When \(C(n_0,g_0)\) is non-degenerate, it has been proved in the literature that in case (b) there exists a codeword \({\mathbf {w}}_1(X) \in {\mathcal {W}}(n)\) such that \({\mathbf {w}}_1(X) \notin C(n,f)\) [31, Proposition 1]. For this \({\mathbf {w}}_1(X)\), the corresponding syndrome \({\mathbf {r}}(X) = {\mathbf {w}}_1(X) \bmod f(X)\) will be a non-zero polynomial. Since the all-zero vector is always a codeword in any linear block code, \({\mathbf {r}}(X)\) will be the zero polynomial for the all-zero codeword. Since \({\mathbf {r}}(X)\) takes at least two values in \({\mathcal {P}}_{\deg (f)}\) with non-zero probability, \({\mathbf {r}}(X)\) cannot follow the degenerate distribution and, from Lemma 1, the distribution of \({\mathbf {r}}(X)\) is either uniform or restricted uniform. \(\square \)
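The dichotomy of Theorem 1 can be checked on synthetic data. The following sketch (an illustration, not the reconstruction method of the paper) generates a noise-free stream from the assumed code \(C(7, 1+X+X^3)\), cuts it with an assumed block length n and synchronization s, and reports whether every block has a zero remainder modulo f(X).

```python
# Sketch of the dichotomy in Theorem 1: cut a noise-free coded stream with an
# assumed block length n and offset s, reduce each block modulo f(X), and check
# whether the zero syndrome always occurs (case (a)) or not (case (b)).
# The code C(7, 1 + X + X^3), the stream length and the (n, s, f) triples are
# illustrative assumptions.
import random

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def poly_mul(a, b):
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

random.seed(2)
n0, g0 = 7, 0b1011
stream = []
for _ in range(400):                            # noise-free stream of codeword bits
    c = poly_mul(random.getrandbits(4), g0)
    stream += [(c >> i) & 1 for i in range(n0)]

def all_zero_syndromes(n, s, f):
    to_poly = lambda bits: sum(b << i for i, b in enumerate(bits))
    blocks = [stream[j:j + n] for j in range(s, len(stream) - n, n)]
    return all(poly_mod(to_poly(b), f) == 0 for b in blocks)

for n, s, f in [(7, 0, 0b1011),     # correct n, s and f | g0        -> degenerate
                (14, 0, 0b1011),    # n = 2*n0 and f | g0            -> still degenerate
                (7, 3, 0b1011),     # wrong synchronization          -> not degenerate
                (8, 0, 0b1011),     # wrong block length             -> not degenerate
                (7, 0, 0b1101)]:    # f does not divide g0           -> not degenerate
    print(f"n = {n:2d}, s = {s}, f = {f:b}: all syndromes zero? {all_zero_syndromes(n, s, f)}")
```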

Appendix C: Proof of Theorem 2

From (5), the code \({\mathcal {W}}(n)\) is given by,

$$\begin{aligned} {\mathcal {W}}(n) = C_1^{\prime }(d_1) + \underbrace{ C(n_0,g_0) + {~} \cdots {~} + C(n_0,g_0)}_{q \text { times}} + {~} C_2^{\prime }(d_2), \end{aligned}$$
(16)

where recall that \(C_i^{\prime }(d_i)\) is the code obtained by puncturing the last \(n_0-d_i\) coordinates of \(C(n_0,g_0)\), for \(i = 1,2\). To use Lemma 1 we require \({\mathcal {W}}^{\perp }(n)\), which is given by

$$\begin{aligned} {\mathcal {W}}^{\perp }(n) = C_1^{\prime \perp }(d_1) + C(n_0,g_0^{\perp }) + {~} \cdots {~} + C(n_0,g_0^{\perp }) + C_2^{\prime \perp }(d_2). \end{aligned}$$
(17)

When \(d_i \le k_0\), \(C_i^{\prime }(d_i) = {\mathbb {F}}_2^{d_i}\) since for a cyclic code, any consecutive \(k_0\) locations can form an information set. This implies that when \(d_i \le k_0\), \(C_i^{\prime \perp }(d_i) = \{ {\mathbf {0}}_{d_i} \}\), where recall that \({\mathbf {0}}_{d_i}\) denotes the all-zero vector of length \(d_i\). When \(d_i > k_0\), \(C_i^{\prime \perp }(d_i)\) will be equal to the code obtained by shortening \(C(n_0,g_0^{\perp })\) in the last \(n_0-d_i\) locations [23, Section 3.2.5] (see Footnote 5). Observe that for any value of \(d_i\), every codeword in \(C_i^{\prime \perp }(d_i)\) is a multiple of \(g_0^{\perp }(X)\) and hence any \({\mathbf {w}}^{\prime }(X) \in {\mathcal {W}}^{\perp }(n)\) can be written as

$$\begin{aligned} \begin{aligned} {\mathbf {w}}^{\prime }(X)&= \big [ {\mathbf {b}}_1(X) g_0^{\perp }(X) \big ] + \big [X^{d_1} {\mathbf {u}}_1(X) g_0^{\perp }(X)\big ] + \big [X^{n_0+d_1} {\mathbf {u}}_2(X) g_0^{\perp }(X)\big ] \\&\quad + \cdots + \big [X^{(q-1)n_0+d_1} {\mathbf {u}}_q(X) g_0^{\perp }(X)\big ] + \big [X^{qn_0+d_1} {\mathbf {b}}_2(X) g_0^{\perp }(X) \big ], \\&= {\mathbf {u}}^{\prime }(X) g_0^{\perp }(X), \end{aligned} \end{aligned}$$
(18)

where \({\mathbf {u}}^{\prime }(X) :={\mathbf {b}}_1(X) + X^{d_1} {\mathbf {u}}_1(X) + \cdots + X^{qn_0+d_1} {\mathbf {b}}_2(X)\), \({\mathbf {u}}_i(X) \in {\mathcal {P}}_{n_0-k_0}\) and \({\mathbf {b}}_j(X) \in {\mathcal {P}}_{d_j-k_0}\), for \(i = 1,2,\ldots ,q\) and \(j=1,2\).

Since f(X) is a factor of \(X^n+1\), \(\deg (f) < n\) and, from Lemma 1, the distribution of \({\mathbf {w}}(X) \bmod f(X)\) is restricted uniform when \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\) and \({\mathcal {H}}(n) \cap {\mathcal {W}}^{\perp }(n) \ne \{ {\mathbf {0}}_n \}\). As explained in the proof of Theorem 1 (Appendix B), the condition \({\mathcal {H}}(n) \nsubseteq {\mathcal {W}}^{\perp }(n)\) is satisfied for case (b). Further, from Property 2, \({\mathcal {H}}(n) = C(n,f^{\perp })\). The condition \(C(n,f^{\perp }) \cap {\mathcal {W}}^{\perp }(n) \ne \{ {\mathbf {0}}_n \}\) is satisfied exactly when there exists a non-zero polynomial \({\mathbf {h}}(X) \in C(n,f^{\perp }) \cap {\mathcal {W}}^{\perp }(n)\) such that

$$\begin{aligned} {\mathbf {h}}(X) = {\mathbf {m}}(X) f^{\perp }(X) = {\mathbf {u}}^{\prime }(X) g_0^{\perp }(X), \end{aligned}$$
(19)

where \({\mathbf {m}}(X) f^{\perp }(X) \in C(n,f^{\perp })\) and \({\mathbf {u}}^{\prime }(X) g_0^{\perp }(X) \in {\mathcal {W}}^{\perp }(n)\) for some \({\mathbf {m}}(X) \in {\mathcal {P}}_{n-\deg (f^{\perp })}\) and \({\mathbf {u}}^{\prime }(X) \in {\mathcal {P}}_{n-k_0}\). It is given that \(h(X) = \text{ gcd }(g_0^{\perp }, f^{\perp })\) and \(f^{\perp }(X) = h(X) f^{\prime }(X)\). Suppose \(g_0^{\perp }(X) = h(X) g_0^{\prime }(X)\). Using this in (19) we get

$$\begin{aligned} {\mathbf {m}}(X) f^{\prime }(X) = {\mathbf {u}}^{\prime }(X) g_0^{\prime }(X). \end{aligned}$$
(20)

Since \(g_0^{\prime }(X)\) and \(f^{\prime }(X)\) are coprime, in (20) \(g_0^{\prime }(X)\) and \(f^{\prime }(X)\) must divide \({\mathbf {m}}(X)\) and \({\mathbf {u}}^{\prime }(X)\) respectively. A necessary condition for this is \(\deg (m) \ge \deg (g_0^{\prime })\) and \(\deg (u^{\prime }) \ge \deg (f^{\prime })\). Further, since \({\mathbf {m}}(X) \in {\mathcal {P}}_{n-\deg (f^{\perp })}\) and \({\mathbf {u}}^{\prime }(X) \in {\mathcal {P}}_{n-k_0}\), we have \(\deg (m) < n - \deg (f^{\perp })\) and \(\deg (u^{\prime }) < n - k_0\). Simplifying these inequalities using (20) we get \(\deg (h) > k_0 - \deg (f)\), which proves the necessary condition of the theorem. Note that the condition \(\deg (m) \ge \deg (g_0^{\prime })\) is sufficient to ensure that a suitable \({\mathbf {m}}(X)\) divisible by \(g_0^{\prime }(X)\) exists, since \({\mathbf {m}}(X)\) can take any value in \({\mathcal {P}}_{n-\deg (f^{\perp })}\). However, \({\mathbf {u}}^{\prime }(X)\) cannot take all possible values in \({\mathcal {P}}_{n-k_0}\) (see (18)). Thus \(f^{\prime }(X)\) divides \({\mathbf {u}}^{\prime }(X)\) if for some \({\mathbf {u}}_i(X) \in {\mathcal {P}}_{n_0-k_0}\) and \({\mathbf {b}}_j(X) \in {\mathcal {P}}_{d_j-k_0}\), \({\mathbf {u}}^{\prime }(X) \in C(n,f^{\prime })\), where \(i = 1,2,\ldots ,q\) and \(j=1,2\). This proves the sufficient condition of the theorem. \(\square \)
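Checking the condition of Theorem 2 in practice requires polynomial gcd computations over \({\mathbb {F}}_2\). The sketch below provides the needed arithmetic; it assumes (an assumption stated for illustration only) that the dual generator \(a^{\perp }(X)\) of a factor a(X) of \(X^n+1\) is the reciprocal of \((X^n+1)/a(X)\), the standard generator of the dual of a cyclic code, and evaluates the necessary condition \(\deg (h) > k_0 - \deg (f)\) for one toy example.

```python
# Helper sketch for the condition of Theorem 2: polynomial gcd over GF(2).
# It assumes (for illustration only) that the dual generator a_perp(X) of a
# factor a(X) of X^n + 1 is the reciprocal of (X^n + 1)/a(X), the standard
# generator of the dual of a cyclic code. Polynomials are bit masks.
def poly_divmod(a, b):
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q, a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_divmod(a, b)[1]
    return a

def reciprocal(a):
    d = a.bit_length() - 1
    return sum(((a >> i) & 1) << (d - i) for i in range(d + 1))

def dual_generator(g, n):                       # assumed: reciprocal of (X^n + 1)/g(X)
    quotient, rem = poly_divmod((1 << n) | 1, g)
    assert rem == 0, "g(X) must divide X^n + 1"
    return reciprocal(quotient)

# Toy example: (7,4) cyclic code with g0(X) = 1 + X + X^3 and candidate
# factor f(X) = 1 + X of X^7 + 1 (both choices are illustrative).
n0, k0, g0, f = 7, 4, 0b1011, 0b11
deg = lambda a: a.bit_length() - 1
h = poly_gcd(dual_generator(g0, n0), dual_generator(f, n0))
print(f"deg h = {deg(h)}, k0 - deg f = {k0 - deg(f)};",
      "necessary condition deg(h) > k0 - deg(f):", deg(h) > k0 - deg(f))
```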

Appendix D

Proof of Theorem 3

Recall that \({\mathcal {S}}\) is the support set of \({\mathbf {w}}_j(X) \bmod f(X)\). For the degenerate distribution \(|{\mathcal {S}}| = 1\), for the uniform distribution \(|{\mathcal {S}}| = 2^{\deg (f)}\), and for the restricted uniform distribution we have \(1< |{\mathcal {S}}| < 2^{\deg (f)}\). In all three cases, the distribution of \({\mathbf {w}}_j(X) \bmod f(X)\) is uniform over the set \({\mathcal {S}}\). The probability that \({\mathbf {y}}_j(X) \bmod f(X)\) takes the value \(b(X) \in {\mathcal {P}}_{\deg (f)}\) is given by,

$$\begin{aligned}&{\mathbb {P}}\Big [ {\mathbf {y}}_j(X) \bmod f(X) = b(X)\Big ] = {\mathbb {P}}\Big [ [{\mathbf {w}}_j(X) + {\mathbf {e}}_j(X)] \bmod f(X) = b(X) \Big ] \nonumber \\&\quad = {\mathbb {P}}\Big [ {\mathbf {e}}_j(X) \bmod f(X) = [{\mathbf {w}}_j(X) \bmod f(X)] + b(X) \Big ] \nonumber \\&\quad {\mathop {=}\limits ^{(a)}} \sum _{a(X) \in {\mathcal {S}}} {\mathbb {P}}\Big [ {\mathbf {e}}_j(X) \bmod f(X) = a(X) + b(X) \Big ] {\mathbb {P}}\Big [{\mathbf {w}}_j(X) \bmod f(X) = a(X)\Big ] \nonumber \\&\quad = \frac{1}{|{\mathcal {S}}|} \sum _{a(X) \in {\mathcal {S}}} {\mathbb {P}}\Big [ {\mathbf {e}}_j(X) \bmod f(X) = a(X) + b(X) \Big ], \end{aligned}$$
(21)

where the equality in (a) is obtained by conditioning on the support set \({\mathcal {S}}\) of \({\mathbf {w}}_j(X) \bmod f(X)\). When \({\mathbf {w}}_j(X) \bmod f(X)\) follows the uniform distribution, \({\mathcal {S}} = {\mathcal {P}}_{\deg (f)}\) and \(a(X) + b(X)\) in (21) takes all possible values in \({\mathcal {P}}_{\deg (f)}\). Thus in (21) we have \({\mathbb {P}}[{\mathbf {y}}_j(X) \bmod f(X) = b(X)] = 1/2^{\deg (f)}\) and \({\mathbf {y}}_j(X) \bmod f(X)\) follows the uniform distribution.

We now consider the case when the distribution of \({\mathbf {w}}_j(X) \bmod f(X)\) is either degenerate or restricted uniform. Since \({\mathcal {S}}_b\) is the set of polynomials obtained by adding b(X) to every \(a(X) \in {\mathcal {S}}\), from (21) we get

$$\begin{aligned} {\mathbb {P}}\Big [{\mathbf {y}}_j(X) \bmod f(X) = b(X)\Big ]&= \frac{1}{|{\mathcal {S}}|} \sum _{d(X) \in {\mathcal {S}}_b} {\mathbb {P}}\big [ {\mathbf {e}}_j(X) \bmod f(X) = d(X) \big ]. \end{aligned}$$
(22)

Recall that \({\mathcal {G}}_d(n,f)\) is the coset of C(n,f) obtained by adding \(d(X) \in {\mathcal {P}}_{\deg (f)}\) to all possible elements in C(n,f). Given any polynomial \({\mathbf {e}}_j(X) \in {\mathcal {P}}_n\) we have \({\mathbf {e}}_j(X) \bmod f(X) = d(X)\) if and only if \({\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)\) and using this in (22) we get,

$$\begin{aligned} \begin{aligned} {\mathbb {P}}\Big [{\mathbf {y}}_j(X) \bmod f(X) = b(X)\Big ]&= \frac{1}{|{\mathcal {S}}|} \sum _{d(X) \in {\mathcal {S}}_b} {\mathbb {P}}\big [{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)\big ] \\&= \frac{1}{|{\mathcal {S}}|} \sum _{d(X) \in {\mathcal {S}}_b} \sum _{i=0}^{n} D_i p^i (1-p)^{n-i}, \end{aligned} \end{aligned}$$
(23)

where \(D_i\) denotes the number of weight-i polynomials in the coset \({\mathcal {G}}_d(n,f)\) and the second equality follows from the fact that each coefficient of the error polynomial \({\mathbf {e}}_j(X)\) is one with probability p for the BSC(p). \(\square \)
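The expression (23) can be verified numerically on a toy example. The sketch below assumes the code \(C(7, 1+X+X^3)\), the divisor \(f(X) = (1+X)(1+X+X^3)\) (which makes the noise-free syndrome distribution restricted uniform) and p = 0.05; it compares a direct computation of \({\mathbb {P}}[{\mathbf {y}}(X) \bmod f(X) = b(X)]\) with the coset-weight-enumerator expression on the right-hand side of (23).

```python
# Sketch verifying (23) on a toy example: the distribution of
# y(X) mod f(X) = (w(X) + e(X)) mod f(X), computed directly, is compared with
# (1/|S|) * sum_{d in S_b} sum_i D_i p^i (1-p)^(n-i), where D_i counts the
# weight-i polynomials in the coset G_d(n, f). The code C(7, 1 + X + X^3),
# f(X) = (1 + X)(1 + X + X^3) and p = 0.05 are illustrative assumptions.
from collections import Counter, defaultdict
from itertools import product

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def poly_mul(a, b):
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

n, g0, f, p = 7, 0b1011, 0b11101, 0.05
codewords = [poly_mul(m, g0) for m in range(16)]
S = sorted(set(poly_mod(c, f) for c in codewords))          # support of w(X) mod f(X)

D = defaultdict(Counter)                                     # coset weight enumerators
for e in range(2 ** n):
    D[poly_mod(e, f)][bin(e).count("1")] += 1

def rhs(b):                                                  # right-hand side of (23)
    total = 0.0
    for a in S:                                              # d(X) = a(X) + b(X) runs over S_b
        d = a ^ b
        total += sum(cnt * p ** i * (1 - p) ** (n - i) for i, cnt in D[d].items())
    return total / len(S)

def lhs(b):                                                  # direct value of P[y mod f = b]
    total = 0.0
    for w, e in product(codewords, range(2 ** n)):
        if poly_mod(w ^ e, f) == b:
            wt = bin(e).count("1")
            total += (1 / len(codewords)) * p ** wt * (1 - p) ** (n - wt)
    return total

for b in (0b0000, 0b0101, 0b1011):                           # a few syndromes b(X)
    print(f"b = {b:04b}: direct = {lhs(b):.6f}, via (23) = {rhs(b):.6f}")
```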

A justification of Property 2 of Section 4: Suppose \(b(X) \in {\mathcal {S}}\) and \(b^{\prime }(X) \notin {\mathcal {S}}\). For \(b(X) \in {\mathcal {S}}\) in (22) we have \({\mathcal {S}} = {\mathcal {S}}_b\) and \({\mathbb {P}}[{\mathbf {y}}_j(X) \bmod f(X) = b(X)]\) will be

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\Big [{\mathbf {y}}_j(X) \bmod f(X) = b(X)\Big ] = \frac{1}{|{\mathcal {S}}|} \sum _{d(X) \in {\mathcal {S}}} {\mathbb {P}}\big [ {\mathbf {e}}_j(X) \bmod f(X) = d(X) \big ]\\&\quad {\mathop {=}\limits ^{(a)}} \frac{1}{|{\mathcal {S}}|} \bigg [ {\mathbb {P}}\big [{\mathbf {e}}_j(X) \bmod f(X) = 0\big ] + \sum _{\begin{array}{c} d(X) \in {\mathcal {S}} \\ d(X) \ne 0 \end{array}} {\mathbb {P}}\big [{\mathbf {e}}_j(X) \bmod f(X) = d(X)\big ] \bigg ] \\&\quad = \frac{1}{|{\mathcal {S}}|} \bigg [ {\mathbb {P}}\big [{\mathbf {e}}_j(X) \in C(n,f) \big ] + \sum _{\begin{array}{c} d(X) \in {\mathcal {S}}, d(X) \ne 0 \end{array}} {\mathbb {P}}\big [{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)\big ] \bigg ], \end{aligned}\nonumber \\ \end{aligned}$$
(24)

where the equality in (a) is obtained since the zero polynomial is always an element of \({\mathcal {S}}\), and in the last equality, \({\mathcal {G}}_d(n,f)\) is the coset of C(n,f) obtained by adding d(X) to all elements in C(n,f). Note that whenever \(d(X) \ne 0\), \({\mathcal {G}}_d(n,f)\) will be a proper coset of C(n,f). For \(b^{\prime }(X) \notin {\mathcal {S}}\), the zero polynomial will not belong to \({\mathcal {S}}_{b^{\prime }}\) and from (23) we have,

$$\begin{aligned} {\mathbb {P}}\Big [{\mathbf {y}}_j(X) \bmod f(X) = b^{\prime }(X)\Big ] = \frac{1}{|{\mathcal {S}}|} \sum _{\begin{array}{c} d(X) \in {\mathcal {S}}_{b^{\prime }} \end{array}} {\mathbb {P}}\big [{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)\big ]. \end{aligned}$$
(25)

For any proper coset \({\mathcal {G}}_d(n,f)\) of C(n,f), from Sullivan’s subgroup-coset inequality theorem [26] we have \({\mathbb {P}}[{\mathbf {e}}_j(X) \in C(n,f)] \ge {\mathbb {P}}[{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)]\). Further, it has been observed via simulations that for small values of the crossover probability p of the BSC, in (24) and (25) the term \({\mathbb {P}}[{\mathbf {e}}_j(X) \in C(n,f)]\) dominates the term \({\mathbb {P}}[{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)]\) (see Footnote 6). For larger values of p, \({\mathbb {P}}[{\mathbf {e}}_j(X) \in C(n,f)] \approx {\mathbb {P}}[{\mathbf {e}}_j(X) \in {\mathcal {G}}_d(n,f)]\). A justification of the observation \({\mathbb {P}}[{\mathbf {y}}_j(X) \bmod f(X) = b(X)] \ge {\mathbb {P}}[{\mathbf {y}}_j(X) \bmod f(X) = b^{\prime }(X)]\) now follows from (24) and (25). \(\square \)
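The two observations above (dominance of \({\mathbb {P}}[{\mathbf {e}}_j(X) \in C(n,f)]\) for small p and near-equality for large p) are easy to reproduce numerically. The sketch below assumes n = 7 and \(f(X) = 1 + X + X^3\) and compares the code probability with the largest proper-coset probability for a few values of p.

```python
# Sketch of the two observations above: for a BSC(p), compare
# P[e in C(n,f)] with the largest proper-coset probability max_d P[e in G_d(n,f)].
# The values n = 7 and f(X) = 1 + X + X^3 are illustrative assumptions.
from collections import defaultdict

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

n, f = 7, 0b1011
cosets = defaultdict(list)
for e in range(2 ** n):
    cosets[poly_mod(e, f)].append(bin(e).count("1"))   # weights grouped by syndrome d(X)

def coset_prob(weights, p):
    return sum(p ** w * (1 - p) ** (n - w) for w in weights)

for p in (0.01, 0.05, 0.1, 0.3, 0.5):
    p_code = coset_prob(cosets[0], p)
    p_coset = max(coset_prob(ws, p) for d, ws in cosets.items() if d != 0)
    print(f"p = {p:.2f}: P[e in C(n,f)] = {p_code:.4f}, largest proper-coset prob = {p_coset:.4f}")
```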

Appendix E: Proof of Theorem 4

Let P and Q denote the probability mass functions of \({\mathbb {I}}_{\{{\mathbf {r}}_j(X)=0\}}\) under hypotheses \(H_0\) and \(H_1\), respectively. Suppose \(P :=\big [p_0 {~} p_1 \big ]\) and \(Q :=\big [q_0 {~} q_1 \big ]\), where \(p_0\) and \(q_0\) are the probabilities of observing the all-zero syndrome under \(H_0\) and \(H_1\) respectively. A lower bound on the KL-divergence \(D_{KL}(P,Q)\) is given by [12, Sec. 11.6],

$$\begin{aligned} D_{KL}(P,Q) \ge \frac{1}{2 \ln 2} \Big ( |p_0-q_0| + |p_1-q_1| \Big )^2. \end{aligned}$$
(26)

Substituting \(p_1=1-p_0\) and \(q_1=1-q_0\) in (26) we get,

$$\begin{aligned} D_{KL}(P,Q)&\ge \frac{1}{2 \ln 2} \Big ( |p_0-q_0| + |(1-p_0)-(1-q_0)| \Big )^2 \end{aligned}$$
(27)
$$\begin{aligned}&{\mathop {=}\limits ^{(a)}} \frac{1}{2 \ln 2} \Big ( 2|p_0-q_0| \Big )^2 = \frac{2}{\ln 2} \big (p_0-q_0\big )^2, \end{aligned}$$
(28)

where the equality in (a) is obtained since \(|p_0-q_0| = |q_0-p_0|\). Since \(p_0 = P(C(n,f))\), we now obtain an upper bound on \(q_0\) that will give a lower bound on \(D_{KL}(P,Q)\) from (28).
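The bound (26) is the \(L_1\)-distance lower bound on the KL divergence from [12, Sec. 11.6], stated in bits. A quick numeric check for binary distributions, with arbitrary illustrative values of \(p_0\) and \(q_0\), is sketched below.

```python
# Numeric check of (26), the L1 lower bound on the KL divergence (in bits)
# from [12, Sec. 11.6], for binary pmfs P = [p0, 1 - p0] and Q = [q0, 1 - q0].
# The (p0, q0) pairs are arbitrary illustrative values.
from math import log, log2

def kl_bits(p0, q0):
    return sum(a * log2(a / b) for a, b in ((p0, q0), (1 - p0, 1 - q0)) if a > 0)

def l1_bound_bits(p0, q0):
    return (abs(p0 - q0) + abs((1 - p0) - (1 - q0))) ** 2 / (2 * log(2))

for p0, q0 in ((0.9, 0.5), (0.6, 0.5), (0.51, 0.5)):
    print(f"p0 = {p0}, q0 = {q0}: D_KL = {kl_bits(p0, q0):.4f} bits >= bound = {l1_bound_bits(p0, q0):.4f}")
```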

Since the upper bound on \(q_0\) is the same for any j, for simplicity of notation we drop the subscript j from \({\mathbf {y}}_j(X)\). Using this, \(q_0\) is given by

$$\begin{aligned} \begin{aligned} q_0&= {\mathbb {P}}\big [{\mathbf {y}}(X) \bmod f(X) = 0\big ] = {\mathbb {P}}\big [{\mathbf {y}}(X) \in C(n,f)\big ]. \end{aligned} \end{aligned}$$
(29)

Let Q denote the event that \({\mathbf {w}}(X) \in C(n,f)\) and let \(Q^c\) denote its complement. Applying the law of total probability in (29) we get,

$$\begin{aligned} q_0&= {\mathbb {P}}\Big [{\mathbf {y}}(X) \in C(n,f) \Big | Q \Big ] {\mathbb {P}}[Q] + {\mathbb {P}}\Big [{\mathbf {y}}(X) \in C(n,f) \Big | Q^c \Big ] {\mathbb {P}}[Q^c] \nonumber \\&= {\mathbb {P}} \Big [{\mathbf {e}}(X) \in C(n,f) \Big ] {\mathbb {P}}[Q] + {\mathbb {P}} \Big [ {\mathbf {e}}(X) \in {\mathcal {G}}(n,f) \Big ] {\mathbb {P}}[Q^c], \end{aligned}$$
(30)

where the last equality follows since, when the event Q is true, \({\mathbf {y}}(X) \in C(n,f)\) if and only if \({\mathbf {e}}(X) \in C(n,f)\), as \({\mathbf {y}}(X) = {\mathbf {w}}(X) + {\mathbf {e}}(X)\). Similarly, when the event \(Q^c\) is true, \({\mathbf {y}}(X) \in C(n,f)\) if and only if \({\mathbf {e}}(X)\) belongs to a proper coset \({\mathcal {G}}(n,f)\) (namely \({\mathbf {w}}(X) + C(n,f)\)) of the code C(n,f).

From Sullivan’s subgroup-coset inequality theorem [26], for any proper coset \({\mathcal {G}}(n,f)\) of C(n,f) we have,

$$\begin{aligned} \frac{{\mathbb {P}}[{\mathbf {e}}(X) \in {\mathcal {G}}(n,f)]}{{\mathbb {P}}[{\mathbf {e}}(X) \in C(n,f)]} \le \frac{1-(1-2p)^{n-\deg (f)+1}}{1+(1-2p)^{n-\deg (f)+1}} = \lambda . \end{aligned}$$
(31)

Substituting (31) in (30) we have,

$$\begin{aligned} q_0&\le {\mathbb {P}} \Big [{\mathbf {e}}(X) \in C(n,f) \Big ] {\mathbb {P}}[Q] + \lambda {\mathbb {P}}\Big [ {\mathbf {e}}(X) \in C(n,f) \Big ] {\mathbb {P}}[Q^c] \end{aligned}$$
(32)
$$\begin{aligned}&{\mathop {=}\limits ^{(b)}} {\mathcal {P}}(C(n,f)) {\mathbb {P}}[Q] + \lambda {\mathcal {P}}(C(n,f)) \big (1-{\mathbb {P}}[Q]\big ) \end{aligned}$$
(33)
$$\begin{aligned}&= {\mathcal {P}}(C(n,f)) \Big [ {\mathbb {P}}[Q] (1 - \lambda ) + \lambda \Big ] \end{aligned}$$
(34)

The equality in (b) is obtained since \({\mathbb {P}}[{\mathbf {e}}(X) \in C(n,f)] = P(C(n,f))\). From Theorem 1, under hypothesis \(H_1\) the distribution of \({\mathbf {w}}(X) \bmod f(X)\) is either uniform or restricted uniform. This implies that \({\mathbb {P}}[{\mathbf {w}}(X) \in C(n,f)] = {\mathbb {P}}[{\mathbf {w}}(X) \bmod f(X) = 0] = {\mathbb {P}}[Q] \le 1/2\). Using this in (34) we get,

$$\begin{aligned} q_0 \le {\mathcal {P}}(C(n,f)) \left( \frac{1}{2} (1 - \lambda ) + \lambda \right) = {\mathcal {P}}(C(n,f)) \left( \frac{\lambda +1}{2} \right) \end{aligned}$$
(35)

The bound in the theorem is now obtained by substituting (35) in (28), and this completes the proof. \(\square \)
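For a rough feel of the quantities appearing in this proof, the sketch below evaluates \(P(C(n,f))\), \(\lambda \) from (31), the upper bound on \(q_0\) from (35) and the resulting lower bound from (28) for an assumed toy setting with n = 7 and \(f(X) = 1 + X + X^3\).

```python
# Sketch evaluating the quantities in this proof for an assumed toy setting
# (n = 7, f(X) = 1 + X + X^3): P(C(n,f)) under BSC(p), lambda from (31), the
# upper bound on q0 from (35), and the resulting lower bound from (28).
from collections import defaultdict
from math import log

def poly_mod(a, f):
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

n, f = 7, 0b1011
deg_f = f.bit_length() - 1
weights = defaultdict(list)
for e in range(2 ** n):
    weights[poly_mod(e, f)].append(bin(e).count("1"))

for p in (0.01, 0.05, 0.1):
    p0 = sum(p ** w * (1 - p) ** (n - w) for w in weights[0])    # p0 = P(C(n,f))
    lam = (1 - (1 - 2 * p) ** (n - deg_f + 1)) / (1 + (1 - 2 * p) ** (n - deg_f + 1))
    q0_bound = p0 * (lam + 1) / 2                                # bound (35)
    kl_bound = 2 / log(2) * (p0 - q0_bound) ** 2                 # plugged into (28)
    print(f"p = {p:.2f}: P(C(n,f)) = {p0:.4f}, lambda = {lam:.4f}, "
          f"q0 <= {q0_bound:.4f}, D_KL >= {kl_bound:.4f}")
```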

About this article

Cite this article

Yardi, A.D., Vijayakumaran, S. Properties of syndrome distribution for blind reconstruction of cyclic codes. AAECC 31, 23–41 (2020). https://doi.org/10.1007/s00200-019-00392-0
