ISIT 1997, Ulm, Germany, June 29 - July 4
The Complexity of Hard-Decision Decoding of Linear Codes
A. Barg*   E. Krouk†   H. C. A. van Tilborg‡
*Bell Laboratories 2C-375, 700 Mountain Avenue, Murray Hill, NJ 07974, USA
†St. Petersburg State Academy of Aerospace Instrumentation, Bol'shaja Morskaja 61, 190000 St. Petersburg, Russia
‡Dept. of Mathematics and Computer Science, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
Abstract - We study a general method of minimum distance decoding of linear codes that, instead of decoding the original code, recovers the transmitted codeword by a number of decodings of shortened codes. We present an implementation of this method whose complexity for long linear codes has the smallest known value for any code rate R, 0 < R < 1.
Minimum distance decoding is the most powerful decoding method from the point of view of transmission reliability. Its applicability is hindered by high implementation complexity. Even in the hard-decision setting, all known algorithms have complexity that grows exponentially with the length of the code. Implementations of this decoding include brute-force methods such as successive inspection of all codewords and building up and storing the syndrome table (or the syndrome trellis). We study the worst-case complexity of decoding algorithms, measured either as the number of computer operations (time complexity) or as the size of memory used for the decoding (space complexity). Thus, for an [n, k, d] code the decoding complexity does not exceed O(n · min(q^k, q^(n-k))).
The most efficient implementation of minimum distance decoding is to successively encode groups of k coordinates of the received vector y that correspond to information sets of the code and to choose the codeword c closest to y. General algorithms that have the smallest known asymptotic complexity [5], [2], [4] are all based on this idea.
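The information-set idea can be sketched for a toy binary code. The brute-force illustration below (all names are ours; this is not the refined algorithms of [5], [2], [4]) tries every k-subset of coordinates, re-encodes the message agreeing with y on each information set, and keeps the closest codeword:

```python
from itertools import combinations

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; A is square.
    Returns x as a list of bits, or None if A is singular."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ c for a, c in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def encode(G, m):
    """Codeword m*G over GF(2)."""
    return [sum(mi & gij for mi, gij in zip(m, col)) & 1 for col in zip(*G)]

def info_set_decode(G, y):
    """On every information set S, solve for the message matching y on S,
    re-encode it, and keep the codeword closest to y in Hamming distance."""
    k, n = len(G), len(G[0])
    best, best_dist = None, n + 1
    for S in combinations(range(n), k):
        A = [[G[i][j] for i in range(k)] for j in S]   # k x k submatrix of G
        m = gf2_solve(A, [y[j] for j in S])
        if m is None:                                  # S is not an information set
            continue
        c = encode(G, m)
        d = sum(ci ^ yi for ci, yi in zip(c, y))       # Hamming distance to y
        if d < best_dist:
            best, best_dist = c, d
    return best, best_dist
```

For the [7, 4] Hamming code, a received word with one flipped bit is returned to the transmitted codeword at distance 1; the efficient algorithms cited above avoid the exhaustive loop over all k-subsets.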
After briefly commenting on these methods we discuss a new approach. The idea is to perform a number of decodings of supercodes of the original code C, i.e., linear codes C' with C ⊂ C'. Decoding a supercode amounts to restricting oneself to a subset of the parity checks of C, i.e., building a list of candidates based on a part of the received syndrome. This idea proves efficient for short codes, allowing us to construct reduced syndrome tables. We work out an example for the [48, 24] code. Decoding up to 5 errors requires about 8K of memory and about 3000 computer operations.
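A minimal sketch of the reduced-syndrome-table mechanism, on a toy [7, 4] code rather than the paper's [48, 24] example (function names and the decoding loop are our assumptions): a table keyed by the partial syndrome of a subset of parity checks maps to a short list of low-weight error candidates, which are then screened against the full syndrome.

```python
from itertools import combinations
from collections import defaultdict

def syndrome(rows, supp):
    """Syndrome of the error with support supp w.r.t. the given parity checks."""
    return tuple(sum(row[j] for j in supp) & 1 for row in rows)

def reduced_table(H_sub, n, t):
    """Reduced syndrome table of a supercode: only the checks in H_sub are
    used, so one partial syndrome maps to a *list* of weight-<=t candidates."""
    table = defaultdict(list)
    for w in range(t + 1):
        for supp in combinations(range(n), w):
            table[syndrome(H_sub, supp)].append(supp)
    return table

def decode_via_supercode(H, H_sub, table, y):
    """Look up candidates by the partial syndrome of y, then keep the one
    whose full syndrome matches, i.e., for which y - e lies in the code C."""
    n = len(y)
    y_supp = [j for j in range(n) if y[j]]
    full = syndrome(H, y_supp)
    for supp in table[syndrome(H_sub, y_supp)]:
        if syndrome(H, supp) == full:
            return supp          # positions of the corrected errors
    return None
```

With only two of the three Hamming-code checks in the table, a single-error syndrome lookup returns two candidates, and the full syndrome singles out the true error position.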
Asymptotic results of our work are established for almost all linear codes, except for a fraction of codes that decays exponentially as the code length n grows. Let C be an [n, k] linear code and suppose k/n → R as n → ∞. Let H be the parity-check matrix of C. We restrict ourselves to correcting all coset leaders of weight up to nδ0(R), where δ0(R) = H_q^(-1)(1 - R). By [1], this is sufficient for maximum likelihood decoding for almost all long codes.
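For concreteness, in the binary case δ0(R) = H_2^(-1)(1 - R) can be evaluated by bisection, since the binary entropy function is increasing on [0, 1/2]. This numeric helper is our illustration, not part of the paper:

```python
from math import log2

def H2(x):
    """Binary entropy function H2(x) = -x log2 x - (1-x) log2 (1-x)."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def delta0(R, tol=1e-12):
    """Relative Gilbert-Varshamov radius: the root of H2(d) = 1 - R
    on [0, 1/2], found by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H2(mid) < 1 - R:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

At rate R = 1/2 this gives δ0 ≈ 0.110, i.e., the algorithm targets coset leaders of weight up to roughly 0.110 n.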
Each iteration of our algorithm consists of the following steps. First, we choose a random partition of the set {1, 2, ..., n} into subsets of size x, k - x, and n - k. Then the last subset is partitioned into s = ⌈(n - k)/y⌉ segments of length y. For every placement of the y-segment, we isolate a linear code C(x|y) of length x + y with y parity checks, whose parity-check matrix is formed by the corresponding rows and columns of H. This code is decoded using the decoding algorithm of [3]. This decoding supplies us with a list of candidates for the message set of C. The final list of candidates for a chosen partition is formed from those message vectors that appear as decoding results of C(x|y) for at least two different placements of the y-segment.
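The index bookkeeping of one iteration can be sketched as follows (a hypothetical helper with names of our choosing; the inner decoding of [3] is not reproduced):

```python
import random

def iteration_partition(n, k, x, y, rng=random.Random(1)):
    """Randomly split {0, ..., n-1} into parts of size x, k - x and n - k,
    then cut the last part into ceil((n - k)/y) segments of length <= y."""
    perm = rng.sample(range(n), n)
    part_x, part_rest, checks = perm[:x], perm[x:k], perm[k:]
    segments = [checks[i:i + y] for i in range(0, n - k, y)]
    return part_x, part_rest, segments
```

Each placement of a y-segment then selects y rows of H, and the columns indexed by part_x together with the segment, to form the parity-check matrix of C(x|y).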
Let ε(a, R) = (1 - R)[1 - H_2(a)]. The asymptotic complexity of the algorithm is determined by the following theorem.
Theorem 1. For almost all linear codes, the decoding algorithm studied here performs maximum likelihood decoding. Its sequential implementation has complexity q^(nγ(R)(1+o(1))). For q = 2 the function γ(R) has the form
The function γ_q(R) is also immediate but requires a few more lines. Computations show that this function improves the best known result [4] by a small but finite value for any q ≥ 2 and all code rates R ∈ (0, 1).
REFERENCES
[1] V. M. Blinovskii, "Lower asymptotic bound on the number of linear code words in a sphere of given radius in F_q^n," Problems of Info. Trans., 23 (2) (1987), 50-53 (in Russian) and 130-132 (English translation).
[2] J. T. Coffey and R. M. F. Goodman, "The complexity of information set decoding," IEEE Trans. Inform. Theory, IT-36 (5) (1990), 1031-1037.
[3] I. Dumer, "Two decoding algorithms for linear codes," Problems of Info. Trans., 25 (1) (1989), 24-32 and 17-23.
[4] I. Dumer, "On minimum distance decoding of linear codes," Proc. 5th Joint Soviet-Swedish Int. Workshop Inform. Theory, Moscow (1991), pp. 50-52.
[5] E. A. Krouk, "Decoding complexity bound for linear block codes," Problems of Info. Trans., 25 (3) (1989), 103-107 and 251-254.
*Research done while at Dept. of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands.
0-7803-3956-8/97/$10.00 ©1997 IEEE