The Zero-Guards Algorithm for General Minimum-Distance Decoding Problems





Therefore, from Lemma 1 and from (32)–(34), we have

$$\mathrm{E}\bigl[P_{e,Ed}(C, C_o)\bigr] \le P_e(C) + \exp\{-N E_r(R_o)\}.$$

Then, the second and third assertions follow since there must exist a $C_o$ which satisfies
$$\sum_{y}\,\sum_{x \in C_o} \varphi[\,y \notin S(x, N\tau_o)\,] \le 2^{n+1} \exp\bigl\{-(1 - M/M_o)\, e^{N(\lambda/2 - \lambda')}\bigr\}$$
and
$$P_{e,Ed}(C, C_o) - P_e(C) \le 2 \exp\{-N E_r(R_o)\}$$
simultaneously.

V. Proof of Corollary 3.1

Let $\epsilon_1$ be a given positive number and let $C$ be an approximately optimal code which satisfies
$$P_{ers,Th}(C) \le \exp\{-N E_{sp}(R_o)\}$$
and
$$P_{uer,Th}(C) \le \exp\{-N [E_{Th}(R, R_o) - \epsilon_1]\}.$$
From the assumption on the exponent functions, we may assume that $P_{uer,Th}(C) \le P_{ers,Th}(C)$. Thus from Lemma 1, we have
$$P_e(C) \le 2 P_{ers,Th}(C). \qquad (35)$$
Combining (35) and Theorem 3, we have, for a given $\epsilon_2$,
$$P_{uer,Ed}(C, C_o) \le \exp\{-N [E_{Th}(R, R_o) - \epsilon_1]\}$$
and
$$P_{ers,Ed}(C, C_o) \le \exp\{-N [E_{sp}(R_o) - \epsilon_2]\}$$
for a sufficiently large $N$, where we used $E_{sp}(R_o) = E_r(R_o)$ for $R_o \ge R_{cr}$. This completes the proof of the corollary.

ACKNOWLEDGMENT

The authors wish to thank Prof. S. Hirasawa and Dr. T. Niinomi of Waseda University for inspiring discussions of erasures-and-errors decoding. They also wish to thank the members of their laboratory, especially H. Tsuchiya, for providing a good computer environment, and anonymous referees for providing good suggestions and opinions.


The Zero-Guards Algorithm for General Minimum-Distance Decoding Problems

Yunghsiang S. Han, Member, IEEE, and Carlos R. P. Hartmann, Fellow, IEEE

Abstract—In this correspondence we present some properties of an improved version of the Zero-Neighbors algorithm—the Zero-Guards algorithm. These properties can be used to find a Zero-Guards. A new decoding procedure using a Zero-Guards is also given.

Index Terms—Decoding, linear block codes, minimum-distance decoding.

I. INTRODUCTION

Minimum-distance decoding for a linear block code has been proved to be an NP-hard computational problem [1]. The complexity of the best known decoding algorithms is determined by $\min(2^k, 2^{n-k})$, where $n$ is the code length and $k$ is the number of information bits [3]. The Zero-Neighbors algorithm [3] provides a better method for solving the problem.

Manuscript received June 24, 1996; revised March 21, 1997. This work was supported in part by the National Science Council ROC under Grant NSC 84-2213-E-211-004.

Y. S. Han is with the Department of Electronic Engineering, Hua Fan College of Humanities and Technology, Taipei Hsien, Taiwan, ROC.

C. R. P. Hartmann is with the Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY 13244-4100 USA.

Publisher Item Identifier S 0018-9448(97)05614-9.



The algorithm uses the concept of Zero-Neighbors—a special set of codewords. Only these codewords need to be stored and used in the decoding procedure. The size of a Zero-Neighbors is very small compared with $\min(2^k, 2^{n-k})$ for $n \gg 1$ and a wide range of code rates $R = k/n$. Recently, an improvement of the Zero-Neighbors algorithm, the Zero-Guards algorithm (ZGA), was presented in [2] and [4]. The ZGA further reduces the number of codewords to be stored. The special set of these codewords is called a Zero-Guards. Thus the size of a Zero-Guards is the main factor that determines the complexity of the algorithm.

In this correspondence, we investigate the properties of a Zero-Guards. These properties can be used to find a Zero-Guards efficiently. Moreover, we also present a new decoding procedure using a Zero-Guards that is much simpler than the one given in [3]. In Section II, we briefly review the Zero-Neighbors algorithm. In Section III, we give a description of the Zero-Guards algorithm and a new decoding procedure using a Zero-Guards. In the next section, properties of a Zero-Guards are presented. Simulation results and conclusions are given in Section V.

II. THE ZERO-NEIGHBORS ALGORITHM

In this section, we briefly describe the Zero-Neighbors algorithm. First we give some definitions.

Let $Z$ be the set of all binary vectors of length $n$, and let $C \subseteq Z$ be a binary linear block code. If $x \in Z$, we call $x$ a vector in the space $Z$. Let $d(x_1, x_2)$ denote the Hamming distance between $x_1, x_2 \in Z$. Let $w(x) = d(x, 0)$ denote the Hamming weight of $x$ and let $\oplus$ denote modulo-2 addition. Furthermore, let $d_{\min}$ be the minimum nonzero weight of codewords in $C$.

Definition 1: The domain $D(c)$ of a codeword $c \in C$ is the set of all $x \in Z$ such that $d(x, c) \le d(x, c')$ for all $c' \in C$.

Definition 2: The vicinity $B(x)$ of $x \in Z$ is the set of all $y \in Z$ such that $d(x, y) = 1$. The domain frame $G(c)$ of a codeword $c \in C$ is the set
$$G(c) = \bigcup_{x \in D(c)} B(x) - D(c).$$

Definition 3: A Zero-Neighbors is a set $N_0$ of codewords such that
$$G(0) \subseteq \bigcup_{c \in N_0} D(c)$$
where
$$|N_0| = \min\Bigl\{|N| \;\Bigm|\; N \subseteq C,\; G(0) \subseteq \bigcup_{c \in N} D(c)\Bigr\}.$$

It can be shown that if $x \notin D(0)$, then there exists a $c \in N_0$ such that $w(x \oplus c) < w(x)$. Thus the Zero-Neighbors algorithm is as follows:

Algorithm: Let $y = y^0 \in Z$ be the received vector to be decoded. At the $i$th step of the algorithm we calculate $w(y^{i-1} \oplus c)$ for all $c \in N_0$. If there exists a $c_i \in N_0$ such that $w(y^{i-1} \oplus c_i) < w(y^{i-1})$, we set $y^i = y^{i-1} \oplus c_i$ and go to the next step; otherwise, the algorithm terminates. If the algorithm terminates at the $(m+1)$th step, then
$$y^m = y \oplus \sum_{i=1}^{m} c_i \in D(0)$$
and can be taken as a coset leader of minimum weight, while
$$c = \sum_{i=1}^{m} c_i \in C$$
is a codeword that is one of the closest to $y$.

We need only to store the codewords in a Zero-Neighbors to accomplish this algorithm. It can be shown that the number of steps $m \le n$. A more complex decoding procedure based on the syndrome of the received vector is given in [3]. The number of decoding steps is $m \le n - k - \lfloor d_{\min}/2 \rfloor$ for this decoding procedure, where $\lfloor a \rfloor$ denotes the integral part of $a$. We do not describe the procedure here since we will give a new decoding procedure in the next section, which also has the number of decoding steps $m \le n - k - \lfloor d_{\min}/2 \rfloor$.
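Since neither [3] nor this correspondence gives pseudocode, the following is a minimal Python sketch of the iterative step just described, assuming the special set of codewords ($N_0$ here, or $RN_0$ in Section III) has already been computed and that vectors are represented as tuples of bits; all function names are illustrative, not from the paper.

```python
# Minimal sketch (not from the paper) of the iterative weight-reduction step
# shared by the Zero-Neighbors algorithm and, with RN0 in place of N0,
# Algorithm 1 of Section III.  Vectors are tuples of 0/1 bits.

def weight(v):
    """Hamming weight w(v)."""
    return sum(v)

def add(u, v):
    """Componentwise modulo-2 addition (u XOR v)."""
    return tuple(a ^ b for a, b in zip(u, v))

def iterative_decode(y, special_set):
    """Given the received vector y and a precomputed set of codewords
    (a Zero-Neighbors or a Zero-Guards), return (coset_leader, codeword_sum),
    where coset_leader = y XOR codeword_sum lies in D(0) and codeword_sum
    is a codeword closest to y."""
    current = tuple(y)
    codeword_sum = tuple(0 for _ in y)        # running sum of the chosen c_i
    while True:
        w_cur = weight(current)
        c_i = next((c for c in special_set
                    if weight(add(current, c)) < w_cur), None)
        if c_i is None:                       # no codeword reduces the weight:
            return current, codeword_sum      # current vector is in D(0), stop
        current = add(current, c_i)           # y^i = y^{i-1} XOR c_i
        codeword_sum = add(codeword_sum, c_i)
```

The stopping test mirrors Theorem 1 below (and the analogous property of $N_0$ from [3]): when no codeword in the stored set reduces the weight, the current vector lies in $D(0)$, so the returned pair is a minimum-weight coset leader and a closest codeword.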

III. THE ZERO-GUARDS ALGORITHM

In this section, we will describe a minimum-distance decoding algorithm that is similar to the Zero-Neighbors algorithm except that we use the concept of Zero-Guards instead of Zero-Neighbors.

Definition 4: A vector $v$ is an immediate descendant of $x$ if and only if $v$ can be obtained from $x$ by changing one nonzero component to zero. A vector $v$ is a descendant of $x$ if and only if there is a chain $x^0 = x, x^1, x^2, \cdots, x^n = v$ such that, for each $i$, $x^i$ is an immediate descendant of $x^{i-1}$. Furthermore, if $v \ne x$, then $v$ is a proper descendant of $x$ [5].

Definition 5: The frontier $F(0)$ of $0$ is the set of all $x \in Z$ such that all its proper descendants belong to $D(0)$ and $x \notin D(0)$.

Definition 6: A Zero-Guards (ZG) is a set $RN_0 \subseteq C$ of codewords such that
$$F(0) \subseteq \bigcup_{c \in RN_0} D(c)$$
where
$$|RN_0| = \min\Bigl\{|N| \;\Bigm|\; N \subseteq C,\; F(0) \subseteq \bigcup_{c \in N} D(c)\Bigr\}.$$
In other words, the set of domains of codewords in $RN_0$ forms a minimum covering of $F(0)$. It is easy to see that $RN_0$ always exists, since the number of all such subsets $N \subseteq C$ is finite.

It is not difficult to see that $F(0) \subseteq G(0)$. Consequently, the number of codewords in a Zero-Guards is less than or equal to that in a Zero-Neighbors.
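To make Definitions 5 and 6 concrete, here is a brute-force Python sketch that enumerates $D(0)$ and $F(0)$ for a very small code and then covers $F(0)$ greedily with codeword domains. The exhaustive enumeration is exponential in $n$ and the greedy cover is only an upper bound on $|RN_0|$ (finding a true Zero-Guards is NP-hard, as noted in Section V); both choices are illustrative assumptions, not the paper's construction.

```python
# Brute-force illustration (exponential time; tiny codes only).
# A code is a collection of n-bit tuples over {0, 1}.
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def domain_of_zero(code, n):
    """D(0): vectors no farther from 0 than from any codeword."""
    return {x for x in product((0, 1), repeat=n)
            if all(sum(x) <= hamming(x, c) for c in code)}

def frontier_of_zero(code, n):
    """F(0): x outside D(0) whose immediate descendants all lie in D(0)
    (by Lemma 2 in Section IV, checking immediate descendants suffices)."""
    d0 = domain_of_zero(code, n)
    def immediate_descendants(x):
        return [x[:i] + (0,) + x[i + 1:] for i in range(n) if x[i] == 1]
    return {x for x in product((0, 1), repeat=n)
            if x not in d0 and all(v in d0 for v in immediate_descendants(x))}

def greedy_guards(code, n):
    """Cover F(0) greedily with the sets D(c) restricted to F(0); the result
    is a valid (but not necessarily minimum) guard set, so its size only
    upper-bounds |RN0|."""
    f0 = frontier_of_zero(code, n)
    domains = {c: {x for x in f0
                   if all(hamming(x, c) <= hamming(x, c2) for c2 in code)}
               for c in code if any(c)}
    guards, uncovered = [], set(f0)
    while uncovered:
        best = max(domains, key=lambda c: len(domains[c] & uncovered))
        guards.append(best)
        uncovered -= domains[best]
    return guards
```

For the (7,4) Hamming code, for instance, frontier_of_zero returns exactly the 21 vectors of weight 2, consistent with Theorem 4 below.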

Next we give the main theorem that the ZGA is based on.

Lemma 1: Let $x \in Z$ and $x \notin D(0)$. Then there exists a descendant $v$ of $x$ such that $v \in F(0)$.

Proof: Let $M(x) = \{v \mid v \in Z,\; v\ \text{a descendant of}\ x,\ \text{and}\ v \notin D(0)\}$. Thus $M(x) \ne \emptyset$, since at least $x \in M(x)$. Then any vector of minimum weight in $M(x)$ belongs to $F(0)$, since all of its proper descendants are descendants of $x$ of smaller weight and therefore belong to $D(0)$.

Theorem 1: $x \notin D(0)$ if and only if there exists a $c \in RN_0$ such that $w(x \oplus c) < w(x)$.

Proof: First, assume that $x \notin D(0)$. From Lemma 1, there exists a descendant of $x$, named $v$, with $v \in F(0)$. Consider a $c \in RN_0$ such that $v \in D(c)$. Hence
$$w(x \oplus c) = d(x, c) \le d(x, v) + d(v, c) < d(x, v) + d(v, 0) = w(x).$$
Next, assume $x \in D(0)$. Then $d(x, 0) \le d(x, c)$ for all $c \in C$. Thus $w(x) \le w(x \oplus c)$. Therefore, there is no $c \in RN_0$ such that $w(x \oplus c) < w(x)$.

Obviously, if $w(c)$ is even, then $w(x) - w(x \oplus c)$ is even, too. Thus we have the following corollary.

Corollary 1: If all codewords in a Zero-Guards are of even weight and $x \notin D(0)$, then there exists a $c \in RN_0$ such that $w(x \oplus c) \le w(x) - 2$.

The following algorithm and arguments are similar to those in [3] except that we use the concept of Zero-Guards instead of Zero-Neighbors.

Algorithm 1: Let $y = y^0 \in Z$ be the received vector to be decoded. At the $i$th step of the algorithm we calculate $w(y^{i-1} \oplus c)$ for all $c \in RN_0$. If there exists a $c_i \in RN_0$ such that $w(y^{i-1} \oplus c_i) < w(y^{i-1})$, we set $y^i = y^{i-1} \oplus c_i$ and go to the next step; otherwise, the algorithm terminates.


If the algorithm terminates at the $(m+1)$th step, then
$$y^m = y \oplus \sum_{i=1}^{m} c_i \in D(0)$$
and can be taken as a coset leader of minimum weight, while
$$c = \sum_{i=1}^{m} c_i \in C$$
is a codeword that is one of the closest to $y$.

We call the algorithm the Zero-Guards algorithm. Only the codewords in a Zero-Guards must be stored in order to accomplish this algorithm. The algorithm will stop when $y^m \in D(0)$. Thus if $w(y^m) \le \lfloor d_{\min}/2 \rfloor$, the algorithm stops immediately. Since at each step of the algorithm the weight of $y$ decreases at least by one, the number of steps is
$$m \le w(y) - \lfloor d_{\min}/2 \rfloor \le n - \lfloor d_{\min}/2 \rfloor.$$
Moreover, it follows from Corollary 1 that for codes with only even-weight codewords, each step of the algorithm decreases the weight of $y$ at least by $2$; therefore, in this case
$$m \le \lfloor (w(y) - \lfloor d_{\min}/2 \rfloor)/2 \rfloor \le \lfloor (n - \lfloor d_{\min}/2 \rfloor)/2 \rfloor.$$

Next, we give another algorithm that is simpler than that presented in [3], which is based on the syndrome of the received vector.

Algorithm 2: Let $G$ be a generating matrix of a systematic code $C$. Let $y = (y_0, y_1, \cdots, y_{n-1})$ and
$$c = (c_0, c_1, \cdots, c_{n-1}) = (y_0, y_1, \cdots, y_{k-1})\,G.$$
Then $c_i = y_i$ for $0 \le i \le k - 1$. Take
$$x = c \oplus y = (x_0, x_1, \cdots, x_{n-1}).$$
Thus $x_i = 0$ for $0 \le i \le k - 1$. It is clear that $x$ and $y$ are in the same coset. Instead of decoding $y$ directly, we decode $x$ as the process in Algorithm 1. Assume the process in Algorithm 1 terminates at the $(m+1)$th step; we then have
$$y^m = x \oplus \sum_{i=1}^{m} c_i \in D(0).$$
Then
$$c \oplus \sum_{i=1}^{m} c_i$$
is a codeword that is one of the closest to $y$.

Since $w(x) \le n - k$, the number of decoding steps is
$$m \le n - k - \lfloor d_{\min}/2 \rfloor.$$
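A corresponding Python sketch of Algorithm 2, reusing add() and iterative_decode() from the sketch in Section II; the matrix representation (a list of $n$-bit row tuples) and all names are illustrative assumptions, not taken from the paper.

```python
# Sketch of Algorithm 2 (reuses add() and iterative_decode() from the
# Section II sketch).  G is a systematic generating matrix given as k rows,
# each an n-bit tuple, so the first k coordinates of a codeword equal the
# information bits.

def encode(info_bits, G):
    """Compute (y_0, ..., y_{k-1}) G over GF(2)."""
    n = len(G[0])
    out = (0,) * n
    for bit, row in zip(info_bits, G):
        if bit:
            out = add(out, row)
    return out

def decode_via_reencoding(y, G, guard_set):
    """Algorithm 2: re-encode the first k received bits, decode
    x = c XOR y (zero on the first k positions) with the iterative
    procedure, and return a codeword closest to y."""
    k = len(G)
    c = encode(y[:k], G)                  # c_i = y_i for 0 <= i <= k-1
    x = add(c, y)                         # x and y lie in the same coset
    _, correction = iterative_decode(x, guard_set)
    return add(c, correction)             # c XOR (c_1 + ... + c_m)
```

Since $w(x) \le n - k$, the loop inside iterative_decode runs at most $n - k - \lfloor d_{\min}/2 \rfloor$ times, matching the bound above.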

IV. SOME PROPERTIES OF A ZERO-GUARDS

In this section, we present some properties of the frontier $F(0)$ and the Zero-Guards that can help to find $RN_0$.

Lemma 2: Let
$$S(x, a) = \{v \mid v \in Z,\; w(v) = a,\ \text{and}\ v\ \text{is a descendant of}\ x\}.$$
Then $x \in F(0)$ if and only if $x \notin D(0)$ and $S(x, w(x)-1) \subseteq D(0)$.

Proof: If $x \in F(0)$, then by definition $x \notin D(0)$ and $S(x, w(x)-1) \subseteq D(0)$. Assume now that $x \notin D(0)$ and $S(x, w(x)-1) \subseteq D(0)$. By [5, Theorem 3.9], if $v \in D(0)$, then all its descendants also belong to $D(0)$. Thus $x \notin D(0)$ and all its proper descendants belong to $D(0)$.

Lemma 3: If $x \in F(0)$, then there exists at least one $c \in C$ such that $x \in D(c)$ and $x$ is a descendant of $c$.

Proof: Because $x \notin D(0)$ there exists at least one $c \in C$ such that $x \in D(c)$. Suppose $x$ is not a descendant of $c$. Then $x$ must differ from $c$ in at least one position where $c$ has a $0$. Let $v$ be the immediate descendant of $x$ that differs from $x$ in the position just mentioned. Then $d(x, c) > d(v, c)$ and $w(v) = w(x) - 1$. Since $d(x, c) < w(x)$, $w(v) \ge d(x, c)$. Thus $w(v) > d(v, c)$, which contradicts $v \in D(0)$. Therefore, $x$ is a descendant of $c$.

Lemma 4: Let $x \in F(0)$. If $d(x, c) < w(x)$, then $x$ is a descendant of $c$.

Proof: The proof is similar to that of Lemma 3.

Lemma 5: For every $c \in C$, $c \ne 0$, there exists a descendant $v$ of $c$ such that $v \in F(0)$.

Proof: For any nonzero $c \in C$, $c \notin D(0)$ and $c \in D(c)$. From Lemma 1 there exists a descendant $v$ of $c$ such that $v \in F(0)$.

Lemma 6: If $c \in RN_0$, then there exists an $x \in F(0)$ such that $x \in D(c)$ and $x \notin D(c')$, $c' \ne c$, $c' \in RN_0$.

Proof: Assume that there is no $x \in F(0)$ such that $x \in D(c)$ and $x \notin D(c')$, $c' \ne c$. Then for every $x \in D(c)$ with $x \in F(0)$ there exists at least one $c' \in RN_0$, $c' \ne c$, such that $x \in D(c')$. Therefore, if we remove $c$ from $RN_0$ we still have
$$F(0) \subseteq \bigcup_{c'' \in RN_0 \setminus \{c\}} D(c'').$$
This contradicts the fact that $RN_0$ is a minimum set with this property. Therefore, the lemma holds.

Lemma 7: Let $r$ be the covering radius of the code $C$. If $c \in C$ and $w(c) > 2r + 1$, then $c \notin RN_0$.

Proof: Assume $c \in RN_0$. From Lemma 6, there exists an $x \in F(0)$ such that $x \in D(c)$ and $x \notin D(c')$, $c' \ne c$. Since $x \in D(c)$, $d(x, c) \le r$, and since $x \in F(0)$, $w(x) \le r + 1$. Hence, $w(c) = w(x) + d(x, c) \le 2r + 1$. Thus if $w(c) > 2r + 1$, then $c \notin RN_0$.

Lemma 8: If $x \in F(0)$ and there exists a $c \in C$ such that $d(x, c) < d(x, c')$, $c \ne c'$, for all $c' \in C$, then $c \in RN_0$.

Proof: Assume $c \notin RN_0$. Then
$$x \notin \bigcup_{c' \in RN_0} D(c').$$
Hence
$$F(0) \not\subseteq \bigcup_{c' \in RN_0} D(c')$$
which is a contradiction.

Theorem 2: Let $c_1, c_2 \in C$, where $c_1$ is a nonzero proper descendant of $c_2$. Then $c_2 \notin RN_0$.

Proof: Assume $c_2 \in RN_0$ and let $c_3 = c_1 \oplus c_2$. Then, by Lemma 6, there exists an $x \in F(0)$ such that $x \in D(c_2)$ and $x \notin D(c')$, $c' \ne c_2$, $c' \in RN_0$. Furthermore, by Lemma 4, $x$ is a descendant of $c_2$. By Lemma 4, if $d(x, c_1) < w(x)$, then $x$ is a descendant of $c_1$. In this case, $x \notin D(c_2)$, which contradicts the fact that $x \in D(c_2)$. Therefore, $d(x, c_1) \ge w(x)$. Similarly, we have $d(x, c_3) \ge w(x)$. Therefore,
$$d(x, c_2) = d(x, c_1) + d(x, c_3) - w(x) \ge w(x).$$
This contradicts the fact that $x \in D(c_2)$.

By [3, Theorem 3], if $c_1$ and $c_3$ are in $N_0$ then $c_2 \notin N_0$; however, by the above theorem, a codeword cannot be in $RN_0$ if any of its nonzero proper descendants is a codeword.

Theorem 3: All codewords of minimum weight belong to $RN_0$.

Proof: Let $c$ be a codeword of minimum weight. From Lemma 5, there exists a $v \in F(0)$ that is a descendant of $c$. Thus
$$d(c, c') \le d(c, v) + d(v, c') = w(c) - w(v) + d(v, c')$$
where $c' \ne c$ and $c' \in C$. Hence
$$d(v, c') \ge w(v) + [d(c, c') - w(c)].$$
Since $c$ is of minimum weight, $d(c, c') - w(c) \ge 0$. Thus $d(v, c') \ge w(v)$. But $v \notin D(0)$; then $v \in D(c)$ and $v \notin D(c')$. Therefore, $d(v, c') > d(v, c)$. From Lemma 8, $c \in RN_0$.


TABLE I: THE COMPARISON OF ZERO-NEIGHBORS AND ZERO-GUARDS (table not reproduced)

Corollary 2: If $v \in F(0)$ and $v$ is a descendant of $c$, which is a minimum-weight codeword, then $v$ is not a descendant of any other minimum-weight codeword.

Proof: Suppose $v$ is also a descendant of another minimum-weight codeword $c'$. Then from the proof of Theorem 3, $d(v, c') > d(v, c)$ and $d(v, c) > d(v, c')$, which is a contradiction.

Theorem 4: All descendants of weight $\lfloor d_{\min}/2 \rfloor + 1$ of minimum-weight codewords belong to $F(0)$.

Proof: Assume $d_{\min}$ is even and $d_{\min} = 2t$. Then $\lfloor d_{\min}/2 \rfloor + 1 = t + 1$. For any vector $v$ that is a descendant of a minimum-weight codeword $c$, if the weight of $v$ is $t + 1$, then $v \in D(c)$. Moreover, any vector of weight $t$ belongs to $D(0)$. Thus $v \in F(0)$. The argument for odd $d_{\min}$ is similar.

Theorem 5: If there are $m$ codewords of minimum weight, then there are at least
$$\binom{d_{\min}}{\lfloor d_{\min}/2 \rfloor + 1} \cdot m$$
vectors of weight $\lfloor d_{\min}/2 \rfloor + 1$ belonging to $F(0)$.

Proof: The theorem follows directly from Corollary 2 and Theorem 4.

Theorem 6: For an odd minimum-weight linear code, any vector $v$ of weight $\lfloor d_{\min}/2 \rfloor + 1$ that belongs to $F(0)$ is a descendant of a minimum-weight codeword.

Proof: From Lemma 3 we can conclude that the vector $v$ is a descendant of a codeword $c$ and $v \in D(c)$. Then
$$w(c) \le \lfloor d_{\min}/2 \rfloor + 1 + \lfloor d_{\min}/2 \rfloor.$$
Thus $w(c) \le d_{\min} - 1 + 1 = d_{\min}$. Therefore, $c$ is a minimum-weight codeword.

From Theorems 5 and 6 we have the following corollary.

Corollary 3: For an $(n, k)$ Hamming code, the number of minimum-weight codewords is $\binom{n}{2}\big/3$.
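As a quick numerical check of Corollary 3 (a standard example, not from the correspondence): the $(7,4)$ Hamming code has $d_{\min} = 3$ and exactly seven codewords of weight $3$, and indeed
$$\binom{7}{2}\Big/3 = \frac{21}{3} = 7.$$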

V. SIMULATION RESULTS AND CONCLUSIONS

Even though we have so far been unable to find a good method for estimating the size of $RN_0$, the simulation results show that $|RN_0|$ is dramatically smaller than $|N_0|$. Some results are shown in Table I. Just as with the Zero-Neighbors algorithm presented in [3], the ZGA can easily be generalized to linear codes over GF$(p)$, $p > 2$. To implement the ZGA, we must find a Zero-Guards. As expected, finding a Zero-Guards is an NP-hard problem. However, it needs to be found only once for a given code.

ACKNOWLEDGMENT

The authors wish to thank the reviewer and the Associate Editor T. Kløve for their invaluable suggestions, which we believe have helped to improve the presentation of our correspondence. In addition, the authors wish to thank E. Weinman for her invaluable help on the language check.

REFERENCES

[1] E. R. Berlekamp, R. J. McEliece, and H. C. A. van Tilborg, "On the inherent intractability of certain coding problems," IEEE Trans. Inform. Theory, vol. IT-24, pp. 384–386, May 1978.

[2] C. R. P. Hartmann and L. B. Levitin, "An improvement of the zero-neighbors minimum distance decoding algorithm: The zero-guards algorithm," in IEEE Int. Symp. on Information Theory (Kobe, Japan), 1988.

[3] L. B. Levitin and C. R. P. Hartmann, "A new approach to the general minimum distance decoding problem: The zero-neighbors algorithm," IEEE Trans. Inform. Theory, vol. IT-31, pp. 378–384, May 1985.

[4] L. B. Levitin, M. Naidjate, and C. R. P. Hartmann, "Generalized identity-guards algorithm for minimum distance decoding of group codes in metric space," in IEEE Int. Symp. on Information Theory (San Diego, CA), 1990.

[5] W. W. Peterson, Error-Correcting Codes, 2nd ed. Cambridge, MA: MIT Press, 1972.

A Probability-Ratio Approach to Approximate Binary Arithmetic Coding

Linh Huynh and Alistair Moffat

Abstract—We describe an alternative mechanism for approximate binary arithmetic coding. The quantity that is approximated is the ratio between the probabilities of the two symbols. Analysis is given to show that the inefficiency so introduced is less than 0.7% on average; and in practice the compression loss is negligible.

Index Terms—Approximate arithmetic coding, bilevel coding, binary arithmetic coding, data compression.

I. BINARY ARITHMETIC CODING

The need for binary arithmetic coding arises in many applications, including bilevel image compression [1] and general bit-based data compression [2]. In this correspondence, a novel mechanism for approximating the various calculations involved in binary arithmetic coding is described, and error bounds limiting the inaccuracy of the resulting representation are given. We also report experimental results

Manuscript received March 2, 1995; revised February 1, 1997. This work was supported by the Australian Research Council and the Collaborative Information Technology Research Institute. The material in this correspondence was presented in part at the 1994 IEEE Data Compression Conference, Snowbird, UT, March 1994.

L. Huynh is with the Department of Computer Science, RMIT, GPO Box 2476V, Melbourne 3001, Australia, and with the Department of Computer Science, University of Melbourne, Parkville, Victoria 3052, Australia.

A. Moffat is with the Department of Computer Science, University of Melbourne, Parkville, Victoria 3052, Australia.

Publisher Item Identifier S 0018-9448(97)05409-6.
