Genetic algorithms in coding theory — a table for A3(n,d)

1993, Discrete Applied Mathematics

EINDHOVEN UNIVERSITY OF TECHNOLOGY
Department of Mathematics and Computing Science
Memorandum COSOR 91-22
ISSN: 0926-4493
Eindhoven, September 1991

Genetic Algorithms in Coding Theory: A Table for A3(n, d)

R.J.M. Vaessens, E.H.L. Aarts, J.H. van Lint
Eindhoven University of Technology, Department of Mathematics and Computing Science, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Philips Research Laboratories, P.O. Box 80000, 5600 JA Eindhoven, The Netherlands

Abstract

We consider the problem of finding values of A3(n, d), i.e. the maximal size of a ternary code of length n and minimum distance d. Our approach is based on a search for good lower bounds and a comparison of these bounds with known upper bounds. Several lower bounds are obtained using a genetic local search algorithm. Other lower bounds are obtained by constructing codes. For those cases in which lower and upper bounds coincide, this yields exact values of A3(n, d). A table is included containing the known values of the upper and lower bounds for A3(n, d), with n ≤ 16. For some values of n and d the corresponding codes are given.

1 Introduction

We consider the problem of finding values of Aq(n, d), i.e.
the maximal size of a code of length n over an alphabet of q elements, having minimum distance d. A code has minimum distance d if d is the smallest number of positions in which two distinct codewords differ. To prove that Aq(n, d) is equal to a certain value, say M, one has to verify the following two conditions.

• There are no codes of length n and minimum distance d over a q-ary alphabet having more than M codewords.
• There exists a code of length n and minimum distance d over a q-ary alphabet having M codewords.

To verify the first condition, one has to prove that every q-ary code of length n and minimum distance d has at most M codewords. Since for most parameters q, n and d the number of such codes is very large, it is impracticable to construct all these codes and show that they have at most M codewords. Therefore one resorts to estimating upper bounds for Aq(n, d). In the literature many upper bounds are known and we will give some of them below.

To verify the latter condition, one has to give a code or a construction method for such a code. In practice the only thing one can do is search for codes with a large number of codewords and hope that such a code has a number of codewords equal to an upper bound for Aq(n, d). In this case the value of Aq(n, d) is exactly determined. In the following we mention some known methods for constructing codes from other codes with different parameters. Unfortunately, no other constructive methods for finding codes with a large number of codewords are known. Therefore we use local search algorithms to find such codes. These algorithms originate from the field of combinatorial optimization. They iteratively generate a sequence of subsets of the solution space of a combinatorial optimization problem, such that each subset is in the neighbourhood of the previous subset. Well-known algorithms belonging to this class of generally applicable algorithms are Simulated Annealing, Threshold Accepting and Genetic Algorithms. Applications of such algorithms in coding theory can be found in [6, 16, 5]. For the present work we use a genetic local search algorithm.

This paper is organized as follows. In Section 2 we give a formal description of the problem of determining the values Aq(n, d). In Section 3 we study some properties of Aq(n, d). First we treat a number of upper and lower bounds for Aq(n, d). Next we mention some methods for constructing codes from other codes. In Section 4 we present a template of a Genetic Local Search algorithm. This template constitutes a class of algorithms that are generally applicable to combinatorial optimization problems. Next we discuss such an algorithm for designing large codes. Finally we discuss the performance of this algorithm. In Section 5 we present a full table with bounds for A3(n, d) for n ≤ 16. For some values of n and d we also present codes with a number of codewords equal to the corresponding lower bound.

2 Preliminaries

For the following we recall the notation given in Van Lint [17]. Let q, n ∈ ℕ with q ≥ 2; let Zq denote the set {0, 1, ..., q-1}, and Zq^n the set of all n-tuples over Zq. We call a code C ⊆ Zq^n a q-ary (n, M, d)-code or merely an (n, M, d)-code, if C has minimum Hamming distance d and size |C| = M. We call a code C ⊆ Zq^n a q-ary (n, M, d, w)-code or merely an (n, M, d, w)-code, if C is an (n, M, d)-code in which each word has weight w.
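The definitions above are easy to make concrete in code. The following small Python sketch is our own illustration (the function names are ours, not the memorandum's); it computes Hamming distances and the minimum distance of a code, which is all that is needed to check whether a given set of words is an (n, M, d)-code.

    from itertools import combinations

    def hamming(x, y):
        # Number of positions in which the words x and y differ.
        return sum(a != b for a, b in zip(x, y))

    def minimum_distance(code):
        # Smallest Hamming distance between two distinct codewords.
        return min(hamming(x, y) for x, y in combinations(code, 2))

    # The three constant words of length 4 over Z_3 form a (4, 3, 4)-code.
    example = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)]
    assert minimum_distance(example) == 4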
We call an (n, M, d)-code or an (n, M, d, w)-code maximal, if it cannot be extended to an (n, M0, d)-code or an (n, M0, d, w)-code, respectively, with size M0 > M. Suppose q, n, d and, possibly, w are fixed. Then by Aq(n, d) we denote the maximum of the sizes M over all q-ary (n, M, d)-codes, and by Aq(n, d, w) the maximum of the sizes M over all q-ary (n, M, d, w)-codes. A central problem in coding theory is that of determining the values of Aq(n, d) and Aq(n, d, w). In the present work we focus our attention on determining the values of Aq(n, d) with q = 3 and n ≤ 16.

3 Properties of Aq(n, d)

3.1 Upper bounds for Aq(n, d)

In the literature a number of upper bounds for Aq(n, d) are known; we mention the Singleton bound [24], the Plotkin bound [23], the Elias bound [7], the Linear Programming bound [4], the Hamming bound [12, 10] and the Johnson bound [15]. Here we present an extension of the Hamming bound for even values of d and a generalization of the Johnson bound for q ≥ 2. Below we assume that q, n and d are fixed such that q ≥ 2 and 1 ≤ d ≤ n.

The following extension of the Hamming bound is based on the idea of the Johnson bound.

Theorem 1 (Hamming bound for even d) If d is even, say d = 2e, then inequality (1) holds, where Vq(n, e-1) denotes the volume of a sphere in Zq^n with radius e-1.

The following bound is a generalization of the Johnson bound [15]. Its proof is very similar to the proof given in Van Lint [17].

Theorem 2 (Generalized Johnson bound) If d is odd, say d = 2e+1, then inequality (2) holds, with

  Aq(n, d, w) ≤ ⌊ (n(q-1)/w) · Aq(n-1, d, w-1) ⌋ ≤ ⌊ (n(q-1)/w) ⌊ ((n-1)(q-1)/(w-1)) ⌊ ... ⌋ ⌋ ⌋    (3)

for arbitrary w ∈ ℕ satisfying w ≤ n and d ≤ min(n, 2w).

3.2 Lower bounds for Aq(n, d)

In this subsection we study the problem of finding lower bounds for Aq(n, d). There are only two analytical lower bounds known, called the Gilbert-Varshamov bound [9, 27] and the Algebraic Geometry bound [19], and both are of low quality for our purpose. Since no other analytical lower bounds for Aq(n, d) are known, to determine appropriate lower bounds we have to design codes with a prescribed length and minimum distance, preferably with a large number of codewords. However, the number of codes with given length n and minimum distance d grows enormously for larger values of n, since in most cases the equivalence class of such a code already contains n! q^n codes. Therefore it is impracticable to enumerate all these codes and find a largest one. Thus, instead of finding a largest code by enumeration, other methods must be used for finding codes with a large number of codewords.

In the literature a number of constructive methods for codes are known that make use of other codes with different parameters. We mention puncturing, extending, shortening and repeating [20], the (u, u+v)-construction [23] and the (a, a-b, a+b+c)-construction [18]. The last two constructions can be seen as special cases of a construction due to Blokh and Zyablov [2]; see also [20], Ch. 10, § 8.2.

Since these and possibly other generally applicable constructive methods do not necessarily lead to large codes with given length and minimum distance, we have to find such codes in another way. To this end we introduce an iterative method that tries to find a maximal code in the set of all codes with a given length and minimum distance. Our newly proposed method basically uses a local search algorithm which is augmented with a genetic component to enable the algorithm to escape from locally optimal solutions. This algorithm is the subject of the following section.
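To illustrate how such bounds are evaluated in practice, the following Python sketch (our own; it implements the classical bounds, not the sharper Theorems 1 and 2 above) computes the sphere volume Vq(n, r) and from it the Hamming (sphere-packing) upper bound for odd d and the Gilbert-Varshamov lower bound.

    from math import comb

    def sphere_volume(q, n, r):
        # V_q(n, r): number of words of Z_q^n within Hamming distance r of a fixed word.
        return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

    def hamming_upper_bound(q, n, d):
        # Sphere-packing bound for odd d = 2e + 1: A_q(n, d) <= q^n / V_q(n, e).
        e = (d - 1) // 2
        return q ** n // sphere_volume(q, n, e)

    def gilbert_varshamov_lower_bound(q, n, d):
        # Gilbert-Varshamov bound: A_q(n, d) >= q^n / V_q(n, d - 1).
        return -(-(q ** n) // sphere_volume(q, n, d - 1))  # ceiling division

    # For q = 3, n = 7, d = 5 these give 22 and 3; Theorem 8 below shows A_3(7, 5) = 10,
    # which illustrates why the analytical lower bounds are of low quality for our purpose.
    print(hamming_upper_bound(3, 7, 5), gilbert_varshamov_lower_bound(3, 7, 5))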
4 Genetic Code Design

In 1975 Holland introduced the concept of Genetic Algorithms [14]. These algorithms constitute a class of search algorithms built on concepts based on a strong analogy between biological evolution processes and the problem of solving optimization problems. In this section we describe a variant of these generally applicable approximation algorithms, called genetic local search. Genetic local search algorithms have been applied with moderate success to several combinatorial optimization problems, such as the Travelling Salesman Problem [26, 21] and the Job Shop Scheduling Problem [25]. In this section we develop such an algorithm to construct large codes.

4.1 Genetic Local Search Algorithms

In this subsection we present a template of a genetic local search algorithm. Such algorithms combine standard local search algorithms with recombination mechanisms from population genetics. To this end an initial population - a subset of locally optimal solutions - is generated, and in each step of the algorithm the population is modified by the following steps.

1. First, the population is enlarged by recombining the solutions to form new solutions.
2. Next, the newly created solutions are improved by using them as start solutions of a local search function. Note that as a result of this step the entire population again consists of locally optimal solutions.
3. Finally, the enlarged population is reduced to its original size by selecting the best ones.

The iteration process terminates if some stop criterion is satisfied. Usually this stop criterion is chosen heuristically. The Genetic Local Search algorithm obtained in this way is given in Pseudo Pascal in Figure 1.

    GENETIC LOCAL SEARCH(input Pstart: population);
    begin
      P := Pstart;
      for all i ∈ P do i := LOCAL SEARCH(i);
      repeat
        P' := RECOMBINATION(P);
        for all i ∈ P' do
        begin
          i := LOCAL SEARCH(i);
          P := P ∪ {i}
        end;
        P := SELECTION(P)
      until STOP
    end;

    Figure 1: Template of a Genetic Local Search Algorithm

4.2 A Genetic Code Design Algorithm

In this subsection we introduce a Genetic Local Search algorithm for handling the problem of finding good lower bounds for Aq(n, d). The algorithm follows the template presented in the previous subsection. Filling in the details of the template amounts to constructing the procedures RECOMBINATION, LOCAL SEARCH and SELECTION. For this we reformulate our problem of finding appropriate lower bounds as a combinatorial optimization problem.

4.2.1 Problem Reformulation

The problem of finding q-ary (n, M, d)-codes with large M can be formulated as a combinatorial optimization problem (S, f). The solution space S of an instance of such a problem is chosen as

  S = { C ⊆ Zq^n | C is a code with minimum distance at least d }.

The cost function f : S → ℝ is defined as f(C) = |C|, for C ∈ S. Obviously we try to find the maximum of this function.

4.2.2 Filling in the Details

In the following we describe the various elements in the template of Figure 1 tailored to the problem of finding large codes. The resulting algorithm is called the genetic code design algorithm.

Input Population

In most cases the populations given in the input of our genetic code design algorithm are randomly generated. The number of words in a code of such a population is chosen randomly between 1 and the best known upper bound divided by 10. Here we assume that codes of these sizes do exist. For the instances we investigated, this assumption did not give any problem.
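For completeness, the template of Figure 1 can also be rendered as ordinary Python; the rendering below is ours, and the procedures LOCAL_SEARCH, RECOMBINATION and SELECTION as well as the stop criterion are passed in as functions, to be filled in as described in the following paragraphs.

    def genetic_local_search(p_start, local_search, recombination, selection, stop):
        # Python rendering of the Pseudo Pascal template in Figure 1.
        population = [local_search(i) for i in p_start]
        while not stop(population):
            offspring = recombination(population)                # enlarge the population
            population += [local_search(i) for i in offspring]   # improve the new solutions
            population = selection(population)                   # reduce to the original size
        return population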
Local Search

In this part we describe a Local Search procedure for the problem of determining (n, M, d)-codes with large M. To this end we define a neighbourhood structure N on the solution space S by

  N(C) = { C' ∈ S : |C' \ C| = 1 },   C ∈ S.

Hence a code is locally optimal with respect to this neighbourhood if and only if this code is a maximal code. Furthermore we need the following definition of a lexicographical ordering ≤L of the elements of Zq^n.

Definition 3 (Lexicographical ordering) Let x, y ∈ Zq^n. Then

  x ≤L y  ⟺  x = y  ∨  ∃ l, 1 ≤ l ≤ n : (∀ 1 ≤ i < l : xi = yi) ∧ xl < yl.

We denote the successor of an element x by succ(x). Note that the element (q-1, ..., q-1) has no successor. Starting off with x = (0, ..., 0), we determine in each iteration the value of d(x, C). If d(x, C) ≥ d, then we can extend C by x. Otherwise we determine the successor of x. Note that if C is extended by x in this way, all elements y with y ≤L x have distance smaller than d to C ∪ {x}. Therefore it suffices to continue with the successor of x in this case. The resulting Local Search procedure is given in Pseudo Pascal in Figure 2. The local optimum obtained by this procedure with input C is denoted by LOCAL SEARCH(C).

Several refinements can be made to speed up this Local Search procedure. For instance, instead of generating words of length n, we can generate partial words with alphabet symbols on only the first l positions, 0 ≤ l < n. In some cases we can then observe that this partial word cannot be completed to a word of length n with distance at least d to the words of the code. In this case we can skip all the words starting with this partial word. Otherwise we fill in the (l+1)th position and repeat the same procedure. However, since these refinements do not change the basic idea of this procedure, we do not treat them here in detail.

In this Local Search procedure q^n different candidates are considered for enlarging the code. For computing the distance between such a candidate and a codeword, n steps are needed.

    procedure LOCAL SEARCH(input Cstart: solution);
    begin
      C := Cstart;
      x := (0, ..., 0);
      repeat
        if d(x, C) ≥ d then C := C ∪ {x};
        x := succ(x)
      until x = (q-1, ..., q-1)
    end;

    Figure 2: A Local Search procedure

The number of codewords is bounded by Aq(n, d). Hence the Local Search procedure requires O(n · q^n · Aq(n, d)) steps.

We observe that for a given initial code C, the Local Search procedure always gives the same local optimum. Consequently, the genetic code design algorithm eventually produces populations consisting of identical elements. In practice it turns out to be desirable to have more variability in the populations. For this we first transform the initial code C into a randomly chosen equivalent code C'. Thus, having randomly chosen a permutation of the positions {1, 2, ..., n} and for each position a permutation of the symbols {0, 1, ..., q-1}, we apply these permutations to every codeword in C. Next, we apply the procedure LOCAL SEARCH to C'. Finally, we use the inverses of the given permutations to transform the code LOCAL SEARCH(C') into a maximal code which has the initial code C as a subset.
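A minimal Python sketch of the Local Search procedure of Figure 2, without the speed-ups mentioned above, could look as follows (our illustration; the helper names are ours). It scans Zq^n in the lexicographical order of Definition 3 and adds every word whose distance to the current code is at least d.

    from itertools import product

    def local_search(c_start, q, n, d):
        # Greedy completion of c_start to a maximal code (cf. Figure 2).
        code = list(c_start)
        for x in product(range(q), repeat=n):   # words of Z_q^n in lexicographical order
            if all(sum(a != b for a, b in zip(x, c)) >= d for c in code):
                code.append(x)                  # d(x, C) >= d, so C can be extended by x
        return code

    # Starting from the empty code with q = 3, n = 4, d = 3 this yields a maximal code with
    # 9 words, which equals A_3(4, 3) = 9 (the perfect ternary Hamming code of length 4).
    print(len(local_search([], q=3, n=4, d=3)))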
Recombination of Codes

We now present a strategy for recombining a set of codes to a new set of codes. This recombination produces two child codes from two parent codes and is based on the following theorem.

Theorem 4 (Recombination) Let C1, C2 ∈ S be two codes with length n and minimum distance at least d. Let z be a word in Zq^n and let D be an integer with 0 ≤ D ≤ n + d. Then

  { x ∈ C1 : d(x, z) ≤ D - d } ∪ { x ∈ C2 : d(x, z) ≥ D }

is a code with length n and minimum distance at least d.

From now on we assume that all populations that are input populations for the procedure RECOMBINATION contain the same even number of codes, say p. Now we select p/2 parent pairs of codes in such a way that good codes, i.e. codes with a large number of words, are chosen as parents with higher probability. If code i of the population has Mi codewords, i ∈ {1, ..., p}, and Mmin is the minimum of the Mi, then code i is selected as parent with probability

  (1/c) · (Mi - Mmin + 1),   where c = Σ_{j=1}^{p} (Mj - Mmin + 1).

Let C1, C2 be a parent pair of codes selected in this way. Now we randomly choose a word z in Zq^n and an integer D with 0 ≤ D ≤ n + d. Then the parent codes C1 and C2 produce the two child codes

  { x ∈ C1 : d(x, z) ≤ D - d } ∪ { x ∈ C2 : d(x, z) ≥ D }   and   { x ∈ C2 : d(x, z) ≤ D - d } ∪ { x ∈ C1 : d(x, z) ≥ D }.

In this way p child codes are produced by p parent codes. Obviously, the recombination of codes in a population needs O(p · n · Aq(n, d)) steps.

Selection of Codes from a Population

From a population of p parent codes and p child codes, with p even, we simply take the p best codes, i.e. those codes with the largest number of codewords. This selection of codes needs O(p log p) steps.

A Stop Criterion

Obviously, if a code is found having a number of codewords that is equal to one of the upper bounds described in Section 3.1, it is needless to continue the iteration process. If no such code is found we have to stop the algorithm in another way. For this we simply give an upper bound on the number of generations and let the algorithm terminate once this number of generations has been reached. In our applications this number varies between 20 and 400.

4.3 Performance

The performance of our genetic code design algorithm has been investigated by carrying out an empirical analysis. The genetic code design algorithm has been implemented in Pascal on a VAX-11/750 computer. Most of the time in our numerical experiments was spent on the problem of finding ternary codes. We have restricted ourselves to the instances with n ≤ 16, since the Local Search procedure causes the computation time to grow exponentially with n. To store a population of codes we need a working space of O(p · n · Aq(n, d)) positions, where p is the population size. Since the number of memory places is limited, we have restricted ourselves to the instances for which the upper bound on the number of codewords is at most 150.

We observe that there is a linear correspondence between the computation time and both the number of generations and the population size. That the computation time depends linearly on the number of generations is obvious. That the computation time depends linearly on the population size p can be explained as follows. In each generation we have p calls of the local search procedure that each need O(n · q^n · Aq(n, d)) steps. The recombination and selection need O(p · n · Aq(n, d)) and O(p log p) steps, respectively. Since in our numerical experiments log p is much smaller than n · q^n · Aq(n, d), the computation time depends linearly on the population size.

Furthermore we observe that for a fixed population size the quality of the codes increases with an increasing number of generations. An explanation for this behaviour is that there is more time for a population to evolve. For a fixed number of generations the quality of the codes has a weak tendency to become worse when the population size increases. An explanation for this behaviour is that larger populations need more time to evolve.
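To make the recombination and selection steps described in this section concrete, the sketch below is our own illustration (the helper names are hypothetical); it produces two child codes from two parents following Theorem 4, and picks parents with probability proportional to Mi - Mmin + 1 as described above.

    import random

    def recombine(c1, c2, q, n, d):
        # One application of Theorem 4: a random word z and threshold D split the
        # parents into two child codes of length n and minimum distance at least d.
        z = tuple(random.randrange(q) for _ in range(n))
        D = random.randint(0, n + d)
        dist_z = lambda x: sum(a != b for a, b in zip(x, z))
        child1 = [x for x in c1 if dist_z(x) <= D - d] + [x for x in c2 if dist_z(x) >= D]
        child2 = [x for x in c2 if dist_z(x) <= D - d] + [x for x in c1 if dist_z(x) >= D]
        return child1, child2

    def select_parent(population):
        # Code i is chosen with probability (M_i - M_min + 1) / sum_j (M_j - M_min + 1).
        sizes = [len(c) for c in population]
        weights = [m - min(sizes) + 1 for m in sizes]
        return random.choices(population, weights=weights, k=1)[0]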
5 A Table for A3(n, d)

5.1 Exact values of Aq(n, d)

In this subsection we treat some special cases for which the exact values of A3(n, d) or Aq(n, d) are already known or can be easily derived. For d equal to 1 or 2 we have the following theorem.

Theorem 5 Aq(n, 1) = q^n and Aq(n, 2) = q^(n-1).

The next theorem shows in which cases A3(n, d) = 3. Before giving the theorem we obtain the following lemma from the Plotkin bound.

Lemma 6 If Aq(n, d) ≥ q + 1, then d ≤ ⌊ ((q-1)(q+2) / (q(q+1))) · n ⌋.

Theorem 7 A3(n, d) = 3 if and only if ⌊(5/6)·n⌋ < d ≤ n.

Proof We prove the following: A3(n, d) ≥ 4 ⟺ d ≤ ⌊(5/6)·n⌋. Because of Lemma 6 it suffices to give an (n, 4, d)-code if 6d ≤ 5n. Note that

  {(n, d) | n, d ∈ ℕ, 6d ≤ 5n} = {(6λ - μ, 5λ - μ) | λ ∈ ℕ, μ ∈ ℕ0, μ < 5λ}.

Now the code consisting of the words (0,0,0,0,0,0), (0,1,2,1,2,2), (2,0,1,2,1,2) and (1,2,0,2,2,1) is a (6, 4, 5)-code. By repeating λ times and puncturing μ times we obtain a (6λ - μ, 4, 5λ - μ)-code. □

Theorem 8 A3(7, 5) = 10.

Proof (Sketch) To obtain A3(7, 5) ≤ 10 we proved the non-existence of a (7, 11, 5)-code. We did this by trying to construct such a code, extensively making use of the fact that a (6, 4, 5)-code is essentially unique; up to equivalence it may be taken to be the (6, 4, 5)-code given in the proof of Theorem 7. The following code is a (7, 10, 5)-code and is equivalent to a code found by the genetic code design algorithm:

  0000000
  0012211
  0120121
  0201112
  1011120
  1101201
  1110012
  2022102
  2202021
  2220210
□

5.2 A Table for A3(n, d)

In this subsection we present a table for A3(n, d) for 1 ≤ n ≤ 16. If only one number occurs in a position of this table, then this number is the exact value of A3(n, d) for the corresponding values of n and d. If two numbers are given, the upper one denotes the best known upper bound for A3(n, d) and the lower one the best known lower bound. Furthermore, the entries in the table are explained by the following key.

Key to Table 1

• Exact Values: Exact values which are marked with a number are obtained by using the corresponding theorem in Subsection 5.1.
• Upper Bounds: Unmarked upper bounds are obtained by using inequalities following from the constructive methods mentioned in Subsection 3.2. Upper bounds marked with a character as superscript are explained as follows:
  p: Plotkin bound [23];
  l: Linear Programming bound [4];
  h: Hamming bound [12, 10] or Theorem 1.
• Lower Bounds: Unmarked lower bounds are obtained by puncturing, extending or shortening (see Subsection 3.2). Lower bounds marked with a character as subscript are explained as follows:
  a linear code (see Subsection 5.3);
  a code obtained by the genetic code design algorithm or equivalent to such a code (see Subsection 5.3);
  r: a code obtained by repeating a code [20];
  u: a code obtained by the (u, u+v)-construction [23];
  a: a code obtained by the (a, a-b, a+b+c)-construction [18];
  c: a code obtained by another construction method (see Subsection 5.3).

Table 1: A3(n, d) for 1 ≤ n ≤ 16
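The small codes of Subsection 5.1 are easy to check by machine. For instance, the following snippet (our own, not part of the memorandum) verifies that the (7, 10, 5)-code given in the proof of Theorem 8 indeed has 10 words and minimum distance 5.

    from itertools import combinations

    # The (7,10,5)-code from the proof of Theorem 8, written as strings over Z_3.
    code = ["0000000", "0012211", "0120121", "0201112", "1011120",
            "1101201", "1110012", "2022102", "2202021", "2220210"]

    dist = lambda x, y: sum(a != b for a, b in zip(x, y))
    assert len(code) == 10
    assert min(dist(x, y) for x, y in combinations(code, 2)) == 5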
5.3 Lower Bounds

In this subsection we explain how the marked lower bounds in the table for A3(n, d) are obtained. In some cases we also give a code with a number of codewords equal to the corresponding lower bound.

5.3.1 Linear Codes

If C is a code with minimum distance d that is a linear subspace of Zq^n, and k is the dimension of this subspace, then we say that C is an [n, k, d]-code.

• Linear codes with distance 3 are Hamming codes, which are given in [12, 10], or shortened Hamming codes.
• The Ternary Golay code, given in [10], is an [11, 6, 5]-code.
• A [16, 9, 5]-code is obtained by shortening a [20, 13, 5]-code, which is given in [8].
• The Extended Ternary Golay code, given in [10], is a [12, 6, 6]-code.
• A [14, 7, 6]-code is given in [20], p. 483.
• A [16, 8, 6]-code is given in [3], p. 321.
• A [16, 6, 7]-code is obtained by puncturing a [24, 12, 9]-code twice and then shortening six times. A [24, 12, 9]-code is given in [1], p. 135, and in [22], p. 126.
• A [13, 3, 9]-code is given in [11] and in [17], p. 58.
• A [16, 5, 9]-code is given in [13], p. 71.

5.3.2 Codes Found by the Genetic Code Design Algorithm

• A (6, 37, 3)-code is given in Figure 3.
• A (6, 18, 4)-code is obtained from the ternary Golay code with generator matrix

    100000 01221
    010000 10122
    001000 21012
    000100 22101
    000010 12210
    000001 11111

  by taking all words starting with five 0's or with four 0's and one 1, and then puncturing this subcode in the first five positions. This code is equivalent to a code obtained by the genetic code design algorithm. (A verification sketch of this derivation follows at the end of this subsection.)
• Let the code C contain the codewords

    00000000  00012221  01200210  01201021  00110022  00122202  00101211

  and let it satisfy the following property for all c ∈ C:
    (i) c + (11102220) ∈ C;
    (ii) 2·c ∈ C;
    (iii) c^(15)(26)(37)(48) ∈ C;
    (iv) c^(123)(567) ∈ C.
  Here in (iii) and (iv) images of codewords are obtained by applying the given permutations to the positions. Then C is an (8, 99, 4)-code.
We observed that a code with minimum distance 4, satisfying the property above, can have at most 99 words, and that such a code with 99 words is essentially unique. Furthermore, note that in each position every symbol occurs exactly 33 times. So by shortening this code we cannot obtain a code of length 7 and distance 4 having more than 33 words. Finally, the (8, 99, 4)-code above is equivalent to a code obtained by the genetic code design algorithm.

• A (10, 13, 7)-code is given in Figure 4.
• A (12, 44, 7)-code is given in Figure 5.
• Let C' be the code of Figure 6. Then C' ∪ (C' + 1) ∪ (C' + 2) is a (12, 36, 8)-code. This code is equivalent to a code obtained by the genetic code design algorithm.
• A (14, 30, 9)-code is given in Figure 7.
• A (14, 12, 10)-code is given in Figure 8.
• A (15, 22, 10)-code is given in Figure 9.

    202120 020202 211101 211212 222211 201022 012222 102112 100222 212021
    021001 021122 122022 011020 111221 120121 220012 200000 212200 222102
    110100 121200 210122 220220 001211 002002 102201 111002 022010 000021
    012111 010210 101010 110011 221110 200111 001100

    Figure 3: A (6, 37, 3)-code

    0002121021 1021122102 1022201210 1000010120 1101200001 1112012012 1210120211
    2011212221 2120102020 2100221112 2222011101 0111021200 0212200122

    Figure 4: A (10, 13, 7)-code

    202110112221 110002220012 110000001120 110001112201 111221120020 112210222200
    120112100211 120111211000 120120022021 120222202102 121012112022 221100122112
    222211002111 200010120000 200021011011 202222220211 012101000221 022011121120
    022220100001 000122200020 001220021110 002002212110 121020210111 121021021202
    122102020100 122200011012 100211010122 101101202011 101202101200 210212111110
    211011200210 212020012102 220200210220 112121101112 111122012210 000020102212
    001012010201 111112221121 111110000002 200102021222 210120201201 211202022001
    222002201021 011201211102

    Figure 5: A (12, 44, 7)-code

    000000000000 012111222000 021111000222 200211021120 200121210201 200112102012
    020012211210 020120121102 020201112021 002210120211 002102201121 002021012112

    Figure 6: The subcode C' of a (12, 36, 8)-code

    22100022102201 22211100021201 22211211102012 22022100102120 22022211210201
    20220001222102 20220112000210 20220220111021 20001112111102 21121010120222
    21121121201000 21121202012111 21202010201111 21202121012222 21202202120000
    21010010012000 21010202201222 01211220210210 01001001000021 02210012121120
    02222102211012 00110100100002 12200020020012 12001201221110 20112001111210
    20112112222021 21010121120111 00002220022101 12112122010100 10122211001122

    Figure 7: A (14, 30, 9)-code

    22212102200112 22210210011220 20021002212200 00110122121100 12220111222001
    21121212101011 20000021000121 01122120210221 10201010120212 02001101111022
    01012010002002 11102201012110

    Figure 8: A (14, 12, 10)-code

    122011201220202 120001021001110 120002102212021 121122212010210 100220201122010
    101200222200101 102020022111222 112211110212110 221220101001221 200111002012102
    201001210111011 202012120000020 210100121210212 212022011102101 001101011222220
    002122200201112 010121222021021 011010102121100 121121120102002 120110010120121
    100202110021202 221212022221012

    Figure 9: A (15, 22, 10)-code
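As announced in the second item of this subsection, the following sketch (our own illustration, not part of the memorandum) carries out the Golay-based derivation of the (6, 18, 4)-code: it generates the [11, 6, 5] code from the generator matrix given above, keeps the codewords whose first five symbols are all 0 or contain four 0's and a single 1, and punctures the first five positions. Following the text, the final line should report a code of length 6 with 18 words and minimum distance 4.

    from itertools import product, combinations

    # Generator matrix of the ternary Golay [11,6,5]-code, as given in the item above.
    G = [[1,0,0,0,0,0, 0,1,2,2,1],
         [0,1,0,0,0,0, 1,0,1,2,2],
         [0,0,1,0,0,0, 2,1,0,1,2],
         [0,0,0,1,0,0, 2,2,1,0,1],
         [0,0,0,0,1,0, 1,2,2,1,0],
         [0,0,0,0,0,1, 1,1,1,1,1]]

    def encode(m):
        # Codeword belonging to the message m, a 6-tuple over Z_3.
        return tuple(sum(mi * gi for mi, gi in zip(m, col)) % 3 for col in zip(*G))

    golay = [encode(m) for m in product(range(3), repeat=6)]

    def keep(c):
        # First five symbols all 0, or four 0's and exactly one 1.
        head = c[:5]
        return head.count(0) == 5 or (head.count(0) == 4 and head.count(1) == 1)

    punctured = [c[5:] for c in golay if keep(c)]   # delete the first five positions

    dist = lambda x, y: sum(a != b for a, b in zip(x, y))
    print(len(punctured), min(dist(x, y) for x, y in combinations(punctured, 2)))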
5.3.3 Other Construction Methods

• Codes of size 4 are constructed in the proof of Theorem 7.
• A (16, 728271, 3)-code is obtained by puncturing a (18, 6554439, 3)-code twice. The latter code is obtained by the (a, a-b, a+b+c)-construction.
• The code in Figure 10 is an equidistant (15, 10, 11)-code. Note that every codeword has weight 9. Since A3(15, 11, w) ≤ A3(15, 11) ≤ 10 for all w ≥ 0, we have A3(15, 11, 9) = 10. Furthermore, by interchanging the symbols 0 and 2 in the first ten positions, we obtain a code in which every word has weight 10, so we have A3(15, 11, 10) = 10.

    011212110010200
    101120211001020
    210110021100102
    121011002120010
    112101100202001
    220002201021201
    022000220112120
    002201022001212
    000220102220121
    200022010212012

    Figure 10: An equidistant (15, 10, 11)-code

• A (16, 54, 10)-code is obtained by shortening a (18, 54, 12)-code twice. The latter code is obtained by the (a, a-b, a+b+c)-construction.
• A (16, 18, 11)-code is obtained from a (18, 54, 12)-code by both puncturing and shortening once. A code of size 18 was also found by the genetic code design algorithm.

Acknowledgement

The authors are very grateful to L.M.G.M. Tolhuizen for pointing out the use of the (a, a-b, a+b+c)-construction.

References

[1] ASSMUS, E.F. & H.F. MATTSON, New 5-Designs, J. of Comb. Theory 6, 1969, pp. 122-151.
[2] BLOKH, E.L. & V.V. ZYABLOV, Coding of Generalized Concatenated Codes, Problems Inform. Trans. 10, 1974, pp. 218-222.
[3] CONWAY, J.H., V. PLESS & N.J.A. SLOANE, Self-Dual Codes over GF(3) and GF(4) of Length not Exceeding 16, IEEE Trans. Info. Theory 25, 1979, pp. 312-322.
[4] DELSARTE, P., Bounds for Unrestricted Codes, by Linear Programming, Philips Res. Reports 27, 1972, pp. 47-64.
[5] DUECK, G. & T. SCHEUER, Threshold Accepting: A General Purpose Optimization Algorithm Appearing Superior to Simulated Annealing, IBM Scientific Center Tech. Report 88.10.011, 1988.
[6] EL GAMAL, A.A., L.A. HEMACHANDRA, I. SHPERLING & V.K. WEI, Using Simulated Annealing to Design Good Codes, IEEE Trans. Info. Theory 33, 1987, pp. 116-123.
[7] ELIAS, P., Error-Free Coding, Trans. Inst. Radio Engrs. Prof. Group on Inform. Theory, PGIT-4, 1954, pp. 29-37.
[8] GAMES, R.A., The Packing Problem for Projective Geometries over GF(3) with Dimension Greater Than Five, J. of Comb. Theory, Series A, 35, 1983, pp. 126-144.
[9] GILBERT, E.N., A Comparison of Signalling Alphabets, Bell Syst. Tech. J. 31, 1952, pp. 504-522.
[10] GOLAY, M.J.E., Notes on Digital Coding, Proc. IRE 37, 1949, p. 657.
[11] GULATI, B.R., On the Packing Problem and Its Applications, Thesis, University of Connecticut, Storrs, 1969.
[12] HAMMING, R.W., Error Detecting and Error Correcting Codes, Bell Syst. Tech. J. 29, 1950, pp. 147-160.
[13] HILL, R. & D.E. NEWTON, Some Optimal Ternary Linear Codes, Ars Combinatoria 25A, 1988, pp. 61-72.
[14] HOLLAND, J.H., Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Michigan, 1975.
[15] JOHNSON, S.M., A New Upper Bound for Error-Correcting Codes, IRE Trans. Info. Theory 8, 1962, pp. 203-207.
[16] LAARHOVEN, P.J.M. VAN, E.H.L. AARTS, J.H. VAN LINT & L.T. WILLE, New Upper Bounds for the Football Pool Problem for 6, 7 and 8 Matches.
[17] LINT, J.H. VAN, Introduction to Coding Theory, Springer-Verlag, New York-Heidelberg-Berlin, 1982.
[18] LINT, J.H. VAN, Repeated-Root Cyclic Codes, IEEE Trans. Info. Theory, March 1991 (to appear).
[19] LINT, J.H. VAN & T.A. SPRINGER, Generalized Reed-Solomon Codes from Algebraic Geometry, IEEE Trans. Info. Theory 33, 1987, pp. 305-309.
[20] MACWILLIAMS, F.J. & N.J.A. SLOANE, The Theory of Error-Correcting Codes, North-Holland Publ. Co., Amsterdam-New York-Oxford, 1977.
[21] MÜHLENBEIN, H., M. GORGES-SCHLEUTER & O. KRAMER, Evolution Algorithms in Combinatorial Optimization, Parallel Computing 7, 1988, pp. 65-85.
[22] PLESS, V., Symmetry Codes over GF(3) and New Five-Designs, J. of Comb. Theory, Series A, 12, 1972, pp. 119-142.
[23] PLOTKIN, M., Binary Codes with Specified Minimum Distance, IRE Trans. Info. Theory 6, 1960, pp. 445-450.
[24] SINGLETON, R.C., Maximum Distance Q-Nary Codes, IEEE Trans. Info. Theory 10, 1964, pp. 116-118.
[25] ULDER, N.L.J., Genetic Local Search: A Population-based Search Algorithm, Master's Thesis, Eindhoven University of Technology, 1990.
[26] ULDER, N.L.J., E. PESCH, P.J.M. VAN LAARHOVEN, H.J. BANDELT & E.H.L. AARTS, Improving TSP Exchange Heuristics by Population Genetics, Proc. International Workshop on Parallel Problem Solving from Nature, Dortmund, October 1-3, 1990.
[27] VARSHAMOV, R.R., Estimate of the Number of Signals in Error Correcting Codes, Doklady Akad. Nauk S.S.S.R. 117, no. 5, 1957, pp. 739-741.