
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 7, JULY 2005 2721

In order to evaluate (83), we need to count the number of matrices in the set $\{H \in \mathcal{E} : H z^t = s\}$. Assume that a binary $n$-tuple $z$ has weight $w$. From a simple combinatorial argument, it is easy to derive that

$$D_0 \triangleq \#\{h \in Z(n,k) : |h z^t| \text{ is even}\} = \sum_{i=0}^{\lfloor k/2 \rfloor} \binom{w}{2i}\binom{n-w}{k-2i} \quad (84)$$

$$D_1 \triangleq \#\{h \in Z(n,k) : |h z^t| \text{ is odd}\} = \binom{n}{k} - D_0. \quad (85)$$

Let $s = (s_1, s_2, \ldots, s_m) \in \mathbb{F}_2^m$. Since the size of $\{H \in \mathcal{E} : H z^t = s\}$ is equal to $D_{s_1} D_{s_2} \cdots D_{s_m}$, the number of matrices $\#\{H \in \mathcal{E} : H z^t = s\}$ is given by

$$\binom{n}{k}^m [\beta(n,k,w)]^{m-|s|} [1 - \beta(n,k,w)]^{|s|} \quad (86)$$

where $\beta(n,k,w)$ is defined by

$$\beta(n,k,w) \triangleq \frac{1}{\binom{n}{k}} \sum_{i=0}^{\lfloor k/2 \rfloor} \binom{w}{2i}\binom{n-w}{k-2i} \quad (87)$$

(see also the discussion on the average weight distribution of Ensemble E in [4, p. 903]). Note that the value of (86) depends only on the weight of $z$. Thus, combining the above results, we have the average coset weight distribution of $\mathcal{E}$

$$\tilde{A}_w(s) = [\beta(n,k,w)]^{m-|s|} [1 - \beta(n,k,w)]^{|s|} \binom{n}{w}. \quad (88)$$
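The counting argument behind (84) and (85) is easy to check numerically. The sketch below (Python; the function names are ours, not from the correspondence) brute-forces $D_0$ over all weight-$k$ binary $n$-tuples, assuming $Z(n,k)$ denotes that set and $|h z^t|$ the number of overlapping ones, and compares the result with the closed-form sum.

```python
from itertools import combinations
from math import comb

def D0_formula(n, k, w):
    # Eq. (84): weight-k tuples h whose overlap with a weight-w tuple z is even.
    # An even overlap picks 2i ones inside z's support and k-2i outside it.
    return sum(comb(w, 2*i) * comb(n - w, k - 2*i) for i in range(k // 2 + 1))

def D0_bruteforce(n, k, w):
    z_support = set(range(w))              # put z's w ones in the first positions
    count = 0
    for support in combinations(range(n), k):
        overlap = len(z_support & set(support))   # |h z^t| = number of common ones
        if overlap % 2 == 0:
            count += 1
    return count

n, k, w = 10, 4, 3
assert D0_formula(n, k, w) == D0_bruteforce(n, k, w)
# Eq. (85): the odd-overlap tuples are the complement within the C(n, k) total.
odd = sum(1 for s in combinations(range(n), k)
          if len(set(range(w)) & set(s)) % 2 == 1)
assert comb(n, k) - D0_formula(n, k, w) == odd
```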

ACKNOWLEDGMENT

The author would like to thank Dr. Jun Muramatsu of NTT Communication Science Laboratories for inspiring discussions on LDPC code ensembles. The author also wishes to thank the anonymous reviewers for valuable comments.

REFERENCES

[1] R. G. Gallager, Low Density Parity Check Codes. Cambridge, MA: MIT Press, 1963.

[2] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999.

[3] G. Miller and D. Burshtein, "Bounds on maximum-likelihood decoding error probability of low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2696–2710, Nov. 2001.

[4] S. Litsyn and V. Shevelev, "On ensembles of low-density parity-check codes: Asymptotic distance distributions," IEEE Trans. Inf. Theory, vol. 48, no. 4, pp. 887–908, Apr. 2002.

[5] ——, "Distance distributions in ensembles of irregular low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 49, no. 11, pp. 3140–3159, Nov. 2003.

[6] D. Burshtein and G. Miller, "Asymptotic enumeration methods for analyzing LDPC codes," IEEE Trans. Inf. Theory, vol. 50, no. 6, pp. 1115–1131, Jun. 2004.

[7] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.

[8] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein, Covering Codes. Amsterdam, The Netherlands: North-Holland, 1997.

[9] G. Dueck and J. Körner, "Reliability function of a discrete memoryless channel at rates above capacity," IEEE Trans. Inf. Theory, vol. IT-25, no. 1, pp. 82–85, Jan. 1979.

Iterative Joint Channel Decoding of Correlated Sources Employing Serially Concatenated Convolutional Codes

Fred Daneshgaran, Member, IEEE, Massimiliano Laddomada, Member, IEEE, and Marina Mondin, Member, IEEE

Abstract—This correspondence looks at the problem of joint decoding of serially concatenated convolutional codes (SCCCs) used for channel coding of multiple correlated sources. We assume a simple model whereby two correlated sources transmit SCCC encoded data to a single destination receiver. We do not assume the existence of, nor do we use, channel side information at the receiver. In particular, we present a novel iterative joint channel decoding algorithm for correlated sources that uses the empirical cross-correlation measurements at successive decoding iterations to provide extrinsic information to the outer codes of the SCCC configuration. Two levels of soft metric iterative decoding are used at the receiver: 1) iterative maximum a posteriori probability (MAP) decoding is used for efficient decoding of the individual SCCC codes (local iterations), and 2) iterative extrinsic information feedback, generated from the estimates of the empirical cross correlation in partial decoding steps, is used to pass soft information to the outer decoders of the global joint SCCC decoder (global iterations). We provide analytical results followed by simulation studies confirming the robustness of the cross-correlation estimates to channel-induced errors, justifying the use of such estimates in iterative decoding. Experimental results suggest that relatively few global iterations (two to five), during which multiple local iterations are conducted, are sufficient to reap significant gains using this approach, especially when the sources are highly correlated.

Index Terms—Concatenated codes, convolutional code, correlated sources, iterative decoding, joint decoding, serially concatenated codes, Slepian–Wolf, soft decoding, turbo codes.

I. INTRODUCTION

Consider a communication scenario whereby we have two nodes, say nodes A and B, that have data to communicate to an aggregation node C for further processing. In connection with source coding, the result of the Slepian–Wolf theorem [1], [2] on correlated sources is that rates above the joint entropy are achievable even though the sources are encoded separately. The question of interest in channel coding is whether the fact that the sources are correlated can be used by a channel decoder to improve the performance of the communication links between A and C (henceforth denoted link-1) and B and C (link-2), as measured by the bit error rate (BER) and/or frame error rate (FER) of the links. Notice that, in connection with both source coding and channel coding, there is no side information available at the receiver. In this correspondence, we present results demonstrating that simple processing at the receiver that jointly decodes A's and B's messages (henceforth taken as two fixed-size data packets) can improve the performance of the individual links, depending on the degree of correlation between the data packets.

Message relay is another potential area where source correlation can be used to improve performance. In particular, suppose node A sends a packet to node B that has data correlated with the message sent from

Manuscript received January 5, 2004; revised February 22, 2005. This work was supported by Euroconcepts S.r.l. (http://www.euroconcepts.it), and a patent is pending on the subject.

F. Daneshgaran is with the Electrical and Computer Engineering Department, California State University, Los Angeles, CA 90032 USA (e-mail: [email protected]).

M. Laddomada and M. Mondin are with the Dipartimento di Elettronica, Politecnico di Torino, 10129 Turin, Italy (e-mail: [email protected]; [email protected]).

Communicated by A. E. Ashikhmin, Associate Editor for Coding Theory.
Digital Object Identifier 10.1109/TIT.2005.850220

0018-9448/$20.00 © 2005 IEEE


A. In this scenario, node B is forwarding the packet generated from node A in addition to sending its own packet. The potential correlation between the packet at node B and the packet sent by A can be used by the decoder at B, which needs to decode A's message before forwarding it to the next node along the chain. Of course, the previous scenarios can be combined, and many other scenarios can be envisioned. To have a clear model in mind, in this correspondence our focus shall be on the first scenario.

Let $\tilde X$ and $\tilde Y$ be two binary vectors of length $L$ (i.e., messages generated by nodes A and B, respectively). Correlation between two binary streams can be measured in two ways. One technique is to map a logic-0 to the integer $+1$, a logic-1 to $-1$, and to take the inner product of the two vectors defining our data packets and divide by the vector length (assumed to be the same for both packets). It is well known that the resulting empirical correlation estimate lies in the range $-1 \le \rho \le +1$. A positive value of $\rho$ signifies a similarity between $\tilde X$ and $\tilde Y$, while a negative $\rho$ signifies a similarity between $\tilde X$ and $\bar{\tilde Y}$, where $\bar{\tilde Y}$ is the complement of vector $\tilde Y$. This traditional measure of correlation is not of direct interest to us, since we wish to relate a correlation measure to an empirical probability that is strictly positive. To measure the empirical correlation between two data packets, we use the following. Let us define $z_n = x_n \oplus y_n$ as the XOR of the $n$th components of the vectors $\tilde X$ and $\tilde Y$. Similarly, we define $\tilde Z = \tilde X \oplus \tilde Y$, whereby $\tilde Z$ is obtained via componentwise XOR of the components of the vectors $\tilde X$ and $\tilde Y$. The empirical cross correlation between $\tilde X$ and $\tilde Y$ is defined as the ratio $\varepsilon = W_H(\tilde Z)/L$, where $W_H(\cdot)$ denotes the Hamming weight of the argument. Note that, with this definition, we may take $\varepsilon$ as the empirical probability that $z_n = 1$, and values of, for instance, $\varepsilon = 0.7$ and $\varepsilon = 0.3$ would have the same information content as measured by the value of the binary entropy function. (We note in passing that in [1, p. 47] the authors define a measure of correlation between two identically distributed but not necessarily independent random variables as $\rho = I(X_1;X_2)/H(X_1)$, where $I(X_1;X_2)$ and $H(X_1)$ are the mutual information and the entropy of the noted random variables, respectively. With this definition, once again, it is easily shown that $0 \le \rho \le 1$.) Since, as shall be demonstrated later, intrinsic soft information is generated from probability estimates, this measure of correlation is more useful to us than the traditional definition. Note that, defining the soft information as $\log\frac{1-\varepsilon}{\varepsilon}$, while the values $\varepsilon = 0.7$ and $\varepsilon = 0.3$ would have the same information content as measured by the entropy function, the corresponding soft information would have different signs. Hence, it is not really possible to use a definition like $\varepsilon = \max(W_H(\tilde Z),\, L - W_H(\tilde Z))/(2L)$, which may be intuitively more appealing.
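As a concrete illustration, the sketch below (Python; variable and function names are ours) computes both correlation measures for a pair of short packets and verifies the two observations above: $\varepsilon = 0.7$ and $\varepsilon = 0.3$ yield the same binary entropy, yet opposite-signed soft information $\log\frac{1-\varepsilon}{\varepsilon}$.

```python
import math

def traditional_corr(x, y):
    # Map logic-0 -> +1 and logic-1 -> -1, then inner product / length: -1 <= rho <= +1.
    return sum((1 - 2*a) * (1 - 2*b) for a, b in zip(x, y)) / len(x)

def empirical_corr(x, y):
    # epsilon = W_H(X xor Y) / L: Hamming weight of the componentwise XOR.
    return sum(a ^ b for a, b in zip(x, y)) / len(x)

def binary_entropy(e):
    return 0.0 if e in (0.0, 1.0) else -e*math.log2(e) - (1 - e)*math.log2(1 - e)

def soft_info(e):
    # Soft information log((1 - eps)/eps): positive for eps < 0.5, negative above.
    return math.log((1 - e) / e)

x = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1]
y = [0, 1, 0, 0, 1, 0, 1, 0, 1, 1]
eps, rho = empirical_corr(x, y), traditional_corr(x, y)
assert abs(rho - (1 - 2*eps)) < 1e-12        # the two measures are affinely related
assert abs(binary_entropy(0.7) - binary_entropy(0.3)) < 1e-12  # same information content
assert soft_info(0.3) > 0 > soft_info(0.7)   # ... but opposite-signed soft information
```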

The following are core features of the communication problem relevant to our study.

• The data packets to be transmitted by A and B are either very similar (i.e., a cross-correlation value near 1, implying $\tilde X$ and $\tilde Y$ are very similar) or very different (i.e., a cross-correlation value near 0, implying $\tilde X$ and the complement of $\tilde Y$ are very similar). The correlation may arise, for instance, if A and B sample an environmental parameter that does not change significantly at their locations. On the other hand, the data generated by A and B may exhibit a large difference. The key point is that the data packets generated by A and B cannot be assumed to represent, for instance, two sequences of independent identically distributed random variables.

• Nodes A and B employ serially concatenated convolutional codes (SCCCs) for channel coding.

• We assume relative timing synchronicity of the nodes engaged in this communication.

• No side information of any kind is available at the receiver, aside from the implicit knowledge at the receiver that some correlation may exist between the source data transmitted by A and B.

• We assume the nodes engaged in communication to be stationary, at least for the duration of the transmission of the packet of data.

With this background, let us provide a quick survey of the recent literature related to the problem addressed in this correspondence. This survey is by no means exhaustive and is meant to simply provide a sampling of the literature in this fertile area. We note that most literature in this area deals with correlated source coding, as opposed to correlated channel coding, which is the focus of our correspondence. If source coding is followed by channel coding, as may be the case, one needs to look at the global impact of the use of implicit knowledge of the source correlation. This joint approach is a topic of further research.

In [3]–[5], the authors deal with applications of sensor networks for tracking and processing of information to be sent to a common destination. In particular, in [4], the authors, inspired by information theory concepts and in particular the Slepian–Wolf theorem [2], present the problem of distributed source coding using syndromes. The approach is based on the use of coset codes, and significant compression gains are achieved using the proposed technique. We note that what we propose here is fundamentally different in that our focus is on conventional channel coding as opposed to source coding. In [6], the authors present a tutorial perspective on channel coding and transmission aspects in wireless multimedia applications.

In [7], the author proposes the concept of source-controlled channel decoding as an effective means to increase the performance of a channel decoder by exploiting a priori and a posteriori information about the source bits. The approach focuses on a single communication link and can lead to potential gains even in a system using a well-designed source codec, since a certain amount of bit correlation can still be present in the encoded information frame. This residual correlation can be used at the decoder employing a joint source–channel decoder in order to further combat the effects of channel noise. A similar approach has been proposed in [8] and [9], whereby in Viterbi decoding, interframe and intraframe correlation are jointly used to increase the decoder performance (see also [10]–[12]). Almost all proposed works in the literature use soft metrics for improving the decoding performance. An excellent work describing the "mathematics of soft decoding" is the paper by Hagenauer et al. [13].

In [14], the authors show that turbo codes can closely reach the Slepian–Wolf bound in lossless distributed source coding. In [15]–[17], the authors have proposed a practical coding scheme for separate encoding of the correlated sources for the Slepian–Wolf problem. In [18], the authors have proposed the use of punctured turbo codes for compression of correlated binary sources, whereby compression has been achieved via puncturing. The proposed source decoder utilizes an iterative scheme to estimate the correlation between two different sources. The paper [19] focuses on the problem of reducing the transmission rate in a distributed environment of correlated sources, and the authors propose a source coding scheme for correlated images exploiting modulo encoding of pixel values and compression of the resulting symbols with binary and nonbinary turbo codes.

The remainder of the correspondence is organized as follows. Section II deals with the issue of sensitivity of the cross-correlation function between two sequences to channel-induced errors. In the absence of side information, any correlation measure generated at the receiving node C in a partial decoding process is based on noise-corrupted estimates of the information bits from transmit nodes A and B. We demonstrate the relative robustness of the empirical cross-correlation measure to channel-induced errors. In Section III, we provide a brief background on soft-decision maximum a posteriori probability (MAP) decoding and follow up with details of our algorithm for iterative joint channel decoding of correlated sources employing SCCC codes. In Section IV, we present simulation results confirming the potential gains that can


Fig. 1. PMF of $\hat\varepsilon$ for three different values of the raw error rate $p$ when the true cross correlation between the data packets is $\varepsilon = 0.8$, at data block length $L = 320$.

be obtained from our approach for varying degrees of cross correlationbetween the data packets. Conclusions are presented in Section V.

II. SENSITIVITY OF THE CROSS CORRELATION TO CHANNEL-INDUCED ERRORS

In this section, we wish to demonstrate the relative robustness of the empirical cross correlation of the received data to channel-induced errors. Recall that $\tilde X$ and $\tilde Y$ are two binary vectors of length $L$, $z_n = x_n \oplus y_n$ is the XOR of the $n$th components of the vectors $\tilde X$ and $\tilde Y$, and $\tilde Z = \tilde X \oplus \tilde Y$ is obtained via componentwise XOR of the components of the vectors $\tilde X$ and $\tilde Y$.

Let the number of places in which $\tilde X$ and $\tilde Y$ agree be $r$, so that the empirical cross correlation between these two vectors is $\varepsilon = r/L$. Let us suppose that what is available at the receiver are noisy versions of $\tilde X$ and $\tilde Y$, denoted $\hat X$ and $\hat Y$, respectively. For instance, $\hat X$ and $\hat Y$ could be erroneous versions of $\tilde X$ and $\tilde Y$ obtained after transmission through a noisy channel modeled as a binary symmetric channel (BSC) with transition probability $p$. We assume that the error events afflicting the two sequences are independent identically distributed (i.i.d.). Note that the assumption of i.i.d. noise samples is the least favorable as far as the estimation of the correlation parameter is concerned: any a priori information about the noise statistics can actually improve the estimate. The problem is that, as shall be seen subsequently, $\hat X$ and $\hat Y$ are themselves obtained at the output of a soft decision iterative decoder. The noise at the output of such a decoder is difficult to model and may be subject to change from one iteration to the next. In this sense, the model that assumes no a priori knowledge of the noise statistics is the most appropriate to use. If the empirical estimate is robust under this model, it is surely robust in the case where the noise is correlated and this correlation is known a priori.

The receiver generates an empirical estimate of the cross correlation based on the use of the sequences $\hat X$ and $\hat Y$ by forming the vector $\hat Z = \hat X \oplus \hat Y$ and counting the number of places where $\hat Z$ is zero. Let us denote this count as $\hat r$. Clearly, $\hat r$ is a random variable. The question is: what is the probability mass function (PMF) of $\hat r$? Knowledge of this PMF allows us to assess the sensitivity of our estimate of the cross correlation to errors in the original sequences.

The probability that $\hat z_n = z_n$ corresponds to the probability of the event that both bits are received wrong or both are received correctly, and is given by

$$\Pr(\hat z_n = z_n) = (1-p)^2 + p^2 \quad (1)$$

$$\Pr(\hat z_n \neq z_n) = 2p(1-p). \quad (2)$$
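Equations (1) and (2) follow from enumerating the four flip patterns of the two independent BSCs; a quick numerical check (Python, our naming) confirms the reasoning.

```python
def p_agree(p):
    # Eq. (1): both bits correct, or both flipped, so the XOR is unchanged.
    return (1 - p)**2 + p**2

def p_disagree(p):
    # Eq. (2): exactly one of the two bits flipped, so the XOR changes.
    return 2 * p * (1 - p)

# Exhaustive check over the four flip patterns of the two independent channels.
for p in (0.02, 0.2, 0.316):
    total = agree = 0.0
    for fx in (0, 1):            # flip on x_n?
        for fy in (0, 1):        # flip on y_n?
            prob = (p if fx else 1 - p) * (p if fy else 1 - p)
            total += prob
            if fx ^ fy == 0:     # z_n is unchanged iff the flips cancel
                agree += prob
    assert abs(total - 1.0) < 1e-12
    assert abs(agree - p_agree(p)) < 1e-12
    assert abs((total - agree) - p_disagree(p)) < 1e-12
```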

Consider applying a permutation $\pi$ to the sequences $\tilde X$ and $\tilde Y$ so that the permuted sequences agree in the first $r$ locations and disagree in the remaining $(L-r)$ locations. The permutation is applied only to simplify the explanation of how we may go about obtaining the PMF of $\hat r$ and by no means impacts the results. It is evident that the permuted sequence $\pi(\tilde Z)$ contains $r$ zeros in the first $r$ locations and $(L-r)$ ones in the remaining locations. Now consider evaluation of $\Pr(\hat r = r+k)$ for $k = 0, 1, \ldots, (L-r)$. We define $\pi(\tilde Z)_r$ to represent the first $r$ bits of $\pi(\tilde Z)$ and $\pi(\tilde Z)_{L-r}$ to represent the remaining $(L-r)$ bits. Similarly, we define $\pi(\hat Z)_r$ and $\pi(\hat Z)_{L-r}$. For a fixed $k$, the event $\{\hat r = r+k\}$ corresponds to the union of events of the type: $\pi(\hat Z)_{L-r}$ differs from $\pi(\tilde Z)_{L-r}$ in $(k+l)$ positions for some $l \in \{0, 1, \ldots, r\}$, $\pi(\hat Z)_r$ differs from $\pi(\tilde Z)_r$ in $l$ positions, and the remaining bits of $\pi(\hat Z)$ and


Fig. 2. Difference $(\varepsilon - M(\hat\varepsilon))$ versus $\varepsilon$, where $M(\hat\varepsilon)$ denotes the most probable value of $\hat\varepsilon$ obtained from empirical evaluation of the cross correlation from noisy received vectors (block length $L = 320$).

$\pi(\tilde Z)$ are identical. The probability of such an elementary event is given by

$$\binom{r}{l}\binom{L-r}{k+l}\left[(1-p)^2 + p^2\right]^{L-k-2l}\left[2p(1-p)\right]^{k+2l}. \quad (3)$$

The probability of the event $\{\hat r = r+k\}$ for $k = 0, 1, \ldots, (L-r)$ is given by

$$\Pr(\hat r = r+k) = \sum_{l=0}^{r}\binom{r}{l}\binom{L-r}{k+l}\left[(1-p)^2 + p^2\right]^{L-k-2l}\left[2p(1-p)\right]^{k+2l}. \quad (4)$$

Using similar arguments, for $m = 1, 2, \ldots, r$, we have

$$\Pr(\hat r = r-m) = \sum_{l=m}^{r}\binom{r}{l}\binom{L-r}{l-m}\left[(1-p)^2 + p^2\right]^{L-2l+m}\left[2p(1-p)\right]^{2l-m}. \quad (5)$$
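The PMF given by (4) and (5) is straightforward to evaluate numerically. The sketch below (Python; the function name is ours) builds the full distribution of $\hat r$ and checks two properties implied by the derivation: the PMF sums to one, and the mean of $\hat r$ equals $r[(1-p)^2+p^2] + (L-r)\,2p(1-p)$, since each agreeing position remains an agreement with probability $(1-p)^2+p^2$ and each disagreeing position becomes an agreement with probability $2p(1-p)$.

```python
from math import comb

def pmf_rhat(L, r, p):
    """PMF of the agreement count r_hat at the receiver, per (4) and (5):
    the true sequences agree in r of L places, and each sequence passes
    through an independent BSC with transition probability p."""
    q = (1 - p)**2 + p**2          # Pr(z_hat_n = z_n), eq. (1)
    s = 2 * p * (1 - p)            # Pr(z_hat_n != z_n), eq. (2)
    pmf = {}
    for k in range(L - r + 1):     # r_hat = r + k, eq. (4)
        pmf[r + k] = sum(comb(r, l) * comb(L - r, k + l)
                         * q**(L - k - 2*l) * s**(k + 2*l)
                         for l in range(r + 1) if k + l <= L - r)
    for m in range(1, r + 1):      # r_hat = r - m, eq. (5)
        pmf[r - m] = sum(comb(r, l) * comb(L - r, l - m)
                         * q**(L - 2*l + m) * s**(2*l - m)
                         for l in range(m, r + 1) if l - m <= L - r)
    return pmf

pmf = pmf_rhat(L=320, r=256, p=0.1)             # true eps = r/L = 0.8, as in Fig. 1
assert abs(sum(pmf.values()) - 1.0) < 1e-9      # a valid probability distribution
mean = sum(k * v for k, v in pmf.items())
q, s = 0.9**2 + 0.1**2, 2 * 0.1 * 0.9
assert abs(mean - (256*q + 64*s)) < 1e-6        # E[r_hat] = 221.44 < 256: biased toward L/2
```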

Later in the correspondence, we shall present simulation results for iterative joint SCCC decoding of correlated sources for a data packet size of $L = 320$. Hence, let us look at representative results associated with the estimation of $\varepsilon$ for this block length.

1) Fig. 1 depicts the PMF of $\hat\varepsilon$ for three different values of the raw error rate $p$ when the true cross correlation between the data packets is $\varepsilon = 0.8$ at data block length $L = 320$. The following observations are in order. First, there is a bias in the empirical estimate of $\varepsilon$, as measured between the most probable value of $\hat\varepsilon$ and the true value of $\varepsilon$, which is a strong function of $p$. Second, at a value of $p = 0.316$ (representative of a high raw error rate occurring at low values of SNR), there is a nonzero probability that $\hat\varepsilon < 0.5$, implying that such cross-correlation information, when used by the iterative decoder, may actually increase the error rate. Fortunately, for the majority of the received data packets, $\hat\varepsilon > 0.5$, and cross-correlation feedback actually improves performance. Simulation results shown later suggest that for the majority of the data frames, the cross-correlation feedback reduces the error rate, while for a very small number of data packets, the error rate increases. The net effect is often such that the overall BER actually decreases even at very low SNR values. Finally, as expected, the variance of the estimate diminishes rapidly, and the bias is reduced, as $p$ decreases.

2) As noted previously, the most probable value of $\hat\varepsilon$, denoted $M(\hat\varepsilon)$ (i.e., the mode), obtained from evaluation of the empirical cross correlation from noisy received vectors, is not necessarily the true value $\varepsilon$. This is particularly so at larger values of $p$ and for small and large values of $\varepsilon$. Fig. 2 captures this behavior for the two values $p = 0.2$ and $p = 0.02$ as a function of $\varepsilon$ for block length $L = 320$. In particular, this figure shows the difference $(\varepsilon - M(\hat\varepsilon))$ versus $\varepsilon$ obtained from empirical evaluation of the cross correlation from noisy received vectors.

3) The standard deviation of $\hat\varepsilon$ is independent of $\varepsilon$ in the range $\varepsilon = 0.1$ to $\varepsilon = 0.9$ for a fixed value of $p$, as should be expected. However, this standard deviation is a strong function of $p$ itself. Fig. 3 depicts the standard deviation of $\hat\varepsilon$ as a function of $p$ for $L = 320$. This figure shows that the standard deviation indeed increases slowly with increasing $p$. Note that even at values of


Fig. 3. Standard deviation of $\hat\varepsilon$ as a function of $p$ (block length $L = 320$).

$p$ as large as $p = 0.3$, this standard deviation is still relatively small for $\varepsilon$ in the range $\varepsilon = 0.1$ to $\varepsilon = 0.9$.

While the above analysis has focused on the short block length of $L = 320$, our experimental results suggest that similar conclusions also hold for larger values of $L$. In assessing how large a value of $L$ can be used, what is more critical is the performance of the iterative decoder. Our experimental results suggest that the approach works for values of $L$ less than 1200; beyond this value, the iterative decoder does not converge. We are currently conducting research to better understand the origin of this convergence problem for large $L$. The conclusion from the above analysis is that the computation of the empirical cross correlation between two received noisy vectors is relatively insensitive to the errors afflicting the two sequences, even at rather large values of the error probability $p$. Hence, the empirical cross correlation between two sequences is robust to channel-induced errors.
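The robustness claim is also easy to probe by simulation. The sketch below (Python; a Monte Carlo experiment with our own parameter choices) corrupts the positionwise agreement pattern through the BSC model above and confirms that, even at a raw error rate of $p = 0.3$, the spread of $\hat\varepsilon = \hat r/L$ stays small, although the estimate is biased toward $0.5$.

```python
import random
import statistics

def noisy_eps_samples(L, eps_true, p, trials=2000, seed=7):
    """Sample the empirical cross correlation computed from BSC-corrupted
    sequences: a position agreeing before the channel still agrees with
    probability (1-p)^2 + p^2, a disagreeing one with probability 2p(1-p)."""
    rng = random.Random(seed)
    r = round(eps_true * L)
    q, s = (1 - p)**2 + p**2, 2 * p * (1 - p)
    samples = []
    for _ in range(trials):
        r_hat = sum(rng.random() < q for _ in range(r))
        r_hat += sum(rng.random() < s for _ in range(L - r))
        samples.append(r_hat / L)
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = noisy_eps_samples(L=320, eps_true=0.8, p=0.3)
assert std < 0.06          # small spread even at a very high raw error rate
assert 0.5 < mean < 0.8    # but biased toward 0.5 (E[eps_hat] = 0.548 here)
```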

III. JOINT ITERATIVE DECODING OF SCCC ENCODED CORRELATED SOURCES

Let the two data sequences be represented by two packets of data $x$ and $y$, which are correlated. The individual source nodes A and B independently encode their data using serially concatenated convolutional codes and transmit the encoded data blocks over independent additive white Gaussian noise (AWGN) channels. At the receiver, the sufficient statistics for both sources are processed jointly. We note that, aside from the fact that the receiver may a priori presume that some correlation might exist between the encoded received data, no side information is communicated to the receiver. The receiver uses an iterative soft decision decoding technique for joint detection of the transmitted data sequences. Hence, the starting point in our development shall be the mathematical equations needed for joint soft decision decoding.

Let $Z$ be a random variable in the Galois field $\mathbb{F}_2$ assuming values from the set $\{+1, -1\}$ with equal probability, where $+1$ is the "null" element under modulo-2 addition. As explained in [13], the log-likelihood ratio (LLR) of a binary random variable $Z$ is defined as $L_Z(z) = \log\frac{P_Z(z=+1)}{P_Z(z=-1)}$,$^1$ where $P_Z(z)$ is the probability that the random variable $Z$ takes on the value $z$. Under modulo-2 addition, it is easy to prove that for statistically independent random variables $X$ and $Y$, the following relation is valid:

$$P(X \oplus Y = +1) = P(X=+1)P(Y=+1) + (1 - P(X=+1))(1 - P(Y=+1)). \quad (6)$$

Hence, for $Z = X \oplus Y$,

$$P_Z(z=+1) = \frac{e^{L_Z(z)}}{1 + e^{L_Z(z)}}. \quad (7)$$

Furthermore, the following approximation holds:

$$L_Z(z) = \log\frac{1 + e^{L_X(x)} e^{L_Y(y)}}{e^{L_X(x)} + e^{L_Y(y)}} \approx \operatorname{sign}(L_X(x)) \cdot \operatorname{sign}(L_Y(y)) \cdot \min(|L_X(x)|, |L_Y(y)|). \quad (8)$$
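The exact expression in (8) and its sign-min approximation can be compared directly. The sketch below (Python; function names are ours) implements both and checks that the approximation preserves the sign and never deviates by more than $\log 2$, the well-known worst-case gap for this "boxplus" combination of LLRs.

```python
import math
import random

def llr_xor_exact(lx, ly):
    # Eq. (8), exact form: LLR of Z = X xor Y from the LLRs of X and Y.
    return math.log((1 + math.exp(lx) * math.exp(ly)) /
                    (math.exp(lx) + math.exp(ly)))

def llr_xor_minsum(lx, ly):
    # Eq. (8), sign-min approximation used in practical SISO decoders.
    return math.copysign(1.0, lx) * math.copysign(1.0, ly) * min(abs(lx), abs(ly))

rng = random.Random(1)
for _ in range(1000):
    lx, ly = rng.uniform(-6, 6), rng.uniform(-6, 6)
    exact, approx = llr_xor_exact(lx, ly), llr_xor_minsum(lx, ly)
    # The approximation keeps the sign of the exact LLR ...
    assert (math.copysign(1.0, exact) == math.copysign(1.0, approx)
            or abs(exact) < 1e-9)
    # ... and its absolute error is bounded by log 2.
    assert abs(exact - approx) < math.log(2) + 1e-9
```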

Soft decision joint iterative decoding of the received signals can best be described after first describing the SCCC decoder shown in Fig. 4. The SCCC decoder works at the bit level, employing soft-in soft-out (SISO) elementary modules following the decoding algorithm proposed in [20], with some modifications as proposed in [21] to use integer arithmetic. In order to keep the presentation concise, we will only

$^1$In the sequel, where not specified, we use the natural logarithm.


Fig. 4. Architecture of the encoder and iterative decoder for the individual SCCCs.

Fig. 5. Generic trellis section useful to clarify the notation adopted for the description of the SISO decoding algorithm.

deal with the modifications made to the SCCC decoder in comparison with the standard decoder proposed in [20].

In the classic SCCC decoder [20], at any decoding iteration, the outer SISO decoding module receives the LLRs L(c; I) of its code symbols from the inner SISO, while always setting the extrinsic information L^{(i)}(x; I) to zero because of the assumption that the transmitted source information symbols are equally likely. In our setup, the joint iterative decoding algorithm is able to estimate the LLRs L^{(i)}(x; I) using cross-correlation information and to pass this information on to the outer SISO decoding module during the iterations of the SCCC decoder. Because of this fact, the outer SISO decoder must be modified to account for the nonzero L^{(i)}(x; I) values. Let us focus only on these modifications, omitting the details of the inner SISO decoder, for which the interested reader can refer to [20].

The outer SISO decoder operates on the time-invariant trellis of a generic rate R_o = 1/2 convolutional encoder (the code rate can be different; since in simulations we have used rate-1/2 codes, we make reference to this code rate). Fig. 5 depicts a generic trellis section for such a code. In this figure, the trellis edge is denoted by e, and the information and code symbols associated with the edge e are denoted by x(e) and c(e), respectively. The starting and ending states of the edge e are identified by s^S(e) and s^E(e), respectively.

The SISO operates on a block of encoded bits at a time. In order to simplify the notation, where not specified, x and y indicate blocks of data bits. Sequence x is composed of the bits x_k for k = 1, ..., L. A similar notation is used for the sequence y produced by the other source. Furthermore, we shall formulate the metric evaluations only for the received data associated with the first source, denoted by x. This formulation obviously applies to the received data associated with the other source y as well. Let us denote the LLR associated with the information symbol x by L(x). We use the following notation (see Fig. 4).

• L^{(i)}(x; I) denotes the LLRs of the extrinsic information associated with the source bits x at the input of the outer SISO decoder at iteration i of the proposed joint decoding algorithm, which will be presented shortly. Iteration index i is a global iteration index. The decoding of each SCCC encoded sequence itself requires a number of local iterations, whose index is hidden for now for simplicity.

• L(c; I) denotes the LLRs of the code bits coming from the inner SISO decoder after the application of the inverse permutation π^{-1}.

• L(c^{inn}; I) denotes the LLRs of the coded symbols c^{inn} at the output of the matched filter, corresponding to the sufficient statistics from the channel.

• L(x; O) denotes the extrinsic LLRs related to the information bits x at the output of the outer SISO decoder, evaluated under the code constraints imposed by the outer code.

• x̂ represents the hard estimates of the source bits x (i.e., the decoded bits after a predefined number of iterations at the output of the SCCC decoders).

Consider the SCCC shown in Fig. 4. The outer encoder at the source receives an input data block of L bits and generates an output data block of L · R_o^{-1} bits, whereby R_o is the rate of the outer convolutional encoder. It is also evident that the product L · R_o^{-1} corresponds to the size of the interleaver embedded in the SCCC (there is a small difference in the actual size due to trellis termination of the outer encoder).

Let the input bit to the convolutional encoder (for a rate-1/2 code), denoted x_k(e), represent the input bit x_k on a trellis edge at time k (k = 1, ..., L), and let the corresponding output symbol of the convolutional encoder c_k(e) at time k be represented by the output bits c_{k,t}(e) with t = 1, 2 and k = 1, ..., L. Based on these assumptions, the LLRs of the source bits x_k can be evaluated by the outer SISO decoder at local iteration j of the SCCC as follows:

L_k^{(j)}(x_k; O) = \max^{*}_{e: x_k(e)=1} \Big\{ \alpha_{k-1}[s^S(e)] + \sum_{t=1}^{2} c_{k,t}(e) L_k[c_{k,t}; I] + \beta_k[s^E(e)] \Big\} - \max^{*}_{e: x_k(e)=0} \Big\{ \alpha_{k-1}[s^S(e)] + \sum_{t=1}^{2} c_{k,t}(e) L_k[c_{k,t}; I] + \beta_k[s^E(e)] \Big\}, \quad k = 1, \ldots, L (9)

where the forward recursion at time k, α_k(·) [22], can be evaluated through

\alpha_k(s) = \max^{*}_{e: s^E(e)=s} \Big\{ \alpha_{k-1}[s^S(e)] + x_k(e) L_k^{(i)}[x_k; I] + \sum_{t=1}^{2} c_{k,t}(e) L_k[c_{k,t}; I] + h_\alpha \Big\}, \quad k = 1, \ldots, L-1 (10)

while the backward recursion β_k(·) can be evaluated through

\beta_k(s) = \max^{*}_{e: s^S(e)=s} \Big\{ \beta_{k+1}[s^E(e)] + x_{k+1}(e) L_{k+1}^{(i)}[x_{k+1}; I] + \sum_{t=1}^{2} c_{k+1,t}(e) L_{k+1}[c_{k+1,t}; I] + h_\beta \Big\}, \quad k = L-1, \ldots, 1. (11)

To initialize the above recursions, the following are used:

\alpha_0(s) = \begin{cases} 0, & \text{if } s = S_0 \\ -\infty, & \text{otherwise} \end{cases} (12)

and

\beta_L(s) = \begin{cases} 0, & \text{if } s = S_L \\ -\infty, & \text{otherwise} \end{cases} (13)



Fig. 6. Architecture of the global iterative joint channel decoder for correlated sources employing SCCC codes. The block "max*" in the diagram performs the computations suggested by (8).

Fig. 7. Sample simulation results of the proposed decoding algorithm for correlated sources employing rate-1/4 SCCC codes for varying degrees of correlation between the source sequences.

where S_0 and S_L are the initial and terminal states of the convolutional codes (assumed to be the all-zero state). The SISO module operates in the log-domain so that only summations of terms are needed. The operator max* above signifies

\max^{*}_i (a_i) = \log \sum_{i=1}^{Q} e^{a_i} = \max_i(a_i) + \delta(a_1, \ldots, a_Q) (14)

where δ(a_1, ..., a_Q) is a correction term that can be computed using a lookup table.
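The max* operation of (14) is a log-sum-exp; for two operands it reduces to a plain max plus the correction term log(1 + e^{-|a-b|}), which is what the lookup table stores. A minimal sketch (function names are ours):

```python
import math

def max_star(values):
    """max* over Q operands, eq. (14): log of the sum of exponentials,
    computed stably by factoring out the maximum."""
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def max_star2(a: float, b: float) -> float:
    """Two-operand form: max(a, b) plus the correction delta(a, b).
    In a hardware SISO the log1p term is read from a small lookup table."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

print(max_star2(1.0, 2.0))        # equals max_star([1.0, 2.0])
print(max_star([1.0, 2.0, 0.5]))
```

Dropping the correction term entirely yields the max-log-MAP simplification, at a small performance cost.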

Finally, h_α and h_β are two normalization constants that, for a hardware implementation of the SISO, are selected to prevent buffer overflows.
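As a concrete illustration of the recursions (9)-(11), the sketch below runs them on a toy two-state rate-1/2 code with generators (1, 1+D). The code choice, the open (unterminated) backward initialization, the omission of h_α and h_β, and all names are ours; the paper's codes and trellis termination differ, so this is only a data-flow sketch of the outer SISO:

```python
import math

def max_star(values):
    """Eq. (14): log-sum-exp, the max* operator."""
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

# Toy 2-state rate-1/2 code, generators (1, 1+D); our choice for illustration.
# An edge e = (s, x) has start state sS = s, input x, end state sE = x,
# and code bits c(e) = (x, x ^ s).
def edges():
    for s in (0, 1):
        for x in (0, 1):
            yield s, x, x, (x, x ^ s)

def siso_outer(Lc, Lx_in):
    """Extrinsic LLRs (9) via the forward (10) / backward (11) recursions.
    Lc[k] = (L_k[c_{k,1}; I], L_k[c_{k,2}; I]); Lx_in[k] = a priori LLR of x_k.
    Normalization constants h_alpha, h_beta are omitted (no overflow here)."""
    L = len(Lc)
    NEG = -1e30  # stands in for -infinity in (12)-(13)

    def gamma(k, x, c):  # edge metric: x(e) L[x;I] + sum_t c_t(e) L[c_t;I]
        return x * Lx_in[k] + c[0] * Lc[k][0] + c[1] * Lc[k][1]

    alpha = [[NEG, NEG] for _ in range(L + 1)]
    alpha[0] = [0.0, NEG]                       # start in the all-zero state
    for k in range(L):
        for s in (0, 1):
            alpha[k + 1][s] = max_star([alpha[k][sS] + gamma(k, x, c)
                                        for sS, x, sE, c in edges() if sE == s])
    beta = [[0.0, 0.0] for _ in range(L + 1)]   # open termination (simplified)
    for k in range(L - 1, -1, -1):
        for s in (0, 1):
            beta[k][s] = max_star([beta[k + 1][sE] + gamma(k, x, c)
                                   for sS, x, sE, c in edges() if sS == s])
    out = []
    for k in range(L):  # eq. (9): a priori term on x_k excluded (extrinsic)
        def term(sS, sE, c):
            return (alpha[k][sS] + c[0] * Lc[k][0] + c[1] * Lc[k][1]
                    + beta[k + 1][sE])
        num = max_star([term(sS, sE, c) for sS, x, sE, c in edges() if x == 1])
        den = max_star([term(sS, sE, c) for sS, x, sE, c in edges() if x == 0])
        out.append(num - den)
    return out

# Strong channel LLRs for the codeword of input bits [1, 0, 1]:
llrs = siso_outer([(4.0, 4.0), (-4.0, 4.0), (4.0, 4.0)], [0.0, 0.0, 0.0])
print([1 if l > 0 else 0 for l in llrs])  # recovers [1, 0, 1]
```

Note how the output metric in (9) deliberately excludes the a priori term on x_k itself, so the result is extrinsic.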

The bit decisions x̂^{(j)} at local iteration j can be obtained from the LLRs of x_k, ∀ k = 1, ..., L, by computing

L_x^{(j)} = L_k^{(j)}(x_k; O) + L_k^{(i)}(x_k; I) (15)

and making a hard decision on the sign of these metrics. In the same way, the bit decisions ŷ^{(j)} at local iteration j can be obtained from the LLRs of y_k, ∀ k = 1, ..., L, by computing

L_y^{(j)} = L_k^{(j)}(y_k; O) + L_k^{(i)}(y_k; I) (16)

and making a hard decision on the sign of these metrics. The architecture of the joint channel decoder is depicted in Fig. 6. Let us elaborate on the signal processing involved. In particular, as before, let X and Y



Fig. 8. Sample simulation results of the proposed decoding algorithm for correlated sources employing rate-1/4 SCCC codes as a function of the global iteration index (two, five, nine, and 13 global iterations) for two correlated sources with 70% correlation. Twelve preliminary local iterations of the SCCCs have been performed for the estimation of L_Z^{(0)}(z). During each global iteration, three local iterations are performed for the decoding of the individual SCCCs.

be two correlated binary random variables which can take on the values {+1, -1}, and let Z = X ⊕ Y. Let us assume that random variable Z takes on the values {+1, -1} with probabilities P_Z(z = +1) = p_z and P_Z(z = -1) = 1 - p_z.

Both sources, independently from each other, encode the binary sequences x and y with a rate-R_S SCCC. For simplicity, let us consider a rate-1/4 SCCC constituted by the serial concatenation of two rate-1/2 convolutional codes. Both encoded sequences are transmitted over independent AWGN channels. The received sequences are r_x and r_y, which take on values in ℝ^{L·R_S^{-1}} (ℝ is the set of real numbers) in the case where the transmitted bits are encoded in blocks of length L. For each sequence index k, there are R_S^{-1} received statistics that are processed by the decoder. Hence, to each information symbol x_k, we associate r_{x_{k,t}}, t = 1, 2, ..., R_S^{-1}, received statistics. Let N_o/2 denote the double-sided noise power spectral density and recall that σ_n^2 = N_o/2. With this setup, the LLRs related to the observation samples r_x and r_y at the output of the matched filters can be evaluated as

L_k(c^{inn}_{1,k_t}; I) = \frac{2}{\sigma_n^2} r_{x_{k,t}}, \quad k = 1, \ldots, L, \; t = 1, 2, \ldots, R_S^{-1}

L_k(c^{inn}_{2,k_t}; I) = \frac{2}{\sigma_n^2} r_{y_{k,t}}. (17)
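For BPSK over AWGN, (17) says the channel LLRs are just the matched-filter outputs scaled by 2/σ_n². A minimal sketch (the bit-to-symbol mapping and all names are our choices):

```python
import math
import random

def channel_llrs(r, sigma2):
    """Eq. (17): LLR = 2 * r / sigma_n^2 for each received statistic."""
    return [2.0 * ri / sigma2 for ri in r]

random.seed(1)
sigma2 = 0.5                                   # sigma_n^2 = No / 2
code_bits = [1, 0, 1, 1]
tx = [1.0 if b else -1.0 for b in code_bits]   # BPSK mapping (our convention)
rx = [s + random.gauss(0.0, math.sqrt(sigma2)) for s in tx]
print(channel_llrs(rx, sigma2))                # large |LLR| = reliable bit
```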

The LLRs L_Z^{(i)}(z) at iteration i are evaluated as

L_Z^{(i)}(z) = \log \frac{1 - p_z}{p_z} (18)

by counting the number of places in which x̂^{(i)} and ŷ^{(i)} differ, or equivalently by evaluating the Hamming weight w_H(·) of the sequence ẑ^{(i)} = x̂^{(i)} ⊕ ŷ^{(i)}, whereby, in the previous equation, p_z = w_H(ẑ^{(i)})/L.

In the latter case, by assuming that the sequence Ẑ = X̂ ⊕ Ŷ is i.i.d., we have

L_Z^{(i)}(z) = \log \frac{L - w_H(\hat{z}^{(i)})}{w_H(\hat{z}^{(i)})} (19)

where L is the data block size. Finally, applying (8), we can obtain an estimate of the extrinsic information on the source bits for the next iteration:

L^{(i)}(x; I) = L(\hat{z}^{(i-1)} \oplus \hat{y}^{(i)}) (20)

and

L^{(i)}(y; I) = L(\hat{z}^{(i-1)} \oplus \hat{x}^{(i)}). (21)
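Steps (19)-(21) can be sketched from hard bit estimates as follows. Helper names are ours, the clamp on the Hamming weight is our safeguard against log(0), and the per-bit combination uses the min-sum approximation (8):

```python
import math

def llr_z_hat(x_hat, y_hat):
    """Eq. (19): LLR of Z from the Hamming weight of z = x xor y."""
    L = len(x_hat)
    w = sum(a ^ b for a, b in zip(x_hat, y_hat))
    w = min(max(w, 1), L - 1)   # clamp: our safeguard against log(0)
    return math.log((L - w) / w)

def boxplus_minsum(la, lb):
    """Min-sum approximation (8), used for the updates (20)-(21)."""
    s = (1 if la >= 0 else -1) * (1 if lb >= 0 else -1)
    return s * min(abs(la), abs(lb))

x_hat = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_hat = [0, 1, 0, 0, 0, 1, 0, 1, 1, 0]   # differs in 2 of 10 positions
Lz = llr_z_hat(x_hat, y_hat)
print(Lz)                                 # log(8/2) ~ 1.386

# Per-bit extrinsic update in the spirit of (20): combine Lz with example
# soft outputs for the other source (values invented for the demo).
Ly_out = [3.2, -1.0, 0.4, -2.5]
print([boxplus_minsum(Lz, ly) for ly in Ly_out])
```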

Note that, as far as the LLR of the difference sequence z is concerned, a correlation of, for instance, 10% or 90% between x and y carries the same amount of information. Hence, the performance gain of the iterative joint decoder is the same in either case (we have verified this experimentally). From an information-theoretic point of view, this simply says that the entropy of the random variable Z is symmetric about the 50% correlation point.
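The symmetry statement amounts to H(p_z) = H(1 - p_z) for the binary entropy function, which is easy to check:

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy of a binary random variable, in bits; symmetric about p = 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# 10% and 90% correlation give Z the same entropy, hence the same information.
print(binary_entropy(0.1), binary_entropy(0.9))
```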

The joint decoding algorithm can be formalized as follows.

1) Set the iteration index i = 0, and set the LLRs L^{(0)}(x; I) and L^{(0)}(y; I) to zero (see Fig. 6). Compute the LLRs for the channel outputs using (17) for both received sequences r_x and r_y. Conduct a preliminary set of iterations of each SCCC decoder in order to obtain an estimate of both sequences x̂^{(0)} and ŷ^{(0)}, and evaluate w_H(ẑ^{(0)}) = w_H(x̂^{(0)} ⊕ ŷ^{(0)}). Use w_H(ẑ^{(0)}) to evaluate L_Z^{(0)}(z) in (19). Note that if the receiver already has



Fig. 9. Estimated ρ at the end of the final global decoding iteration of the correlated sources as a function of SNR E_b/N_o for various block lengths and various degrees of correlation between the data generated by the correlated sources.

an estimate of the correlation between the two transmitted sequences x and y (i.e., with side information), it can directly evaluate (19). In our simulations, we do not assume the availability of any side information.

2) Set L^{(1)}(x; I) and L^{(1)}(y; I) to zero.

3) For iteration i = 1, ..., q, perform the following steps.

a) Make a predefined total number of iterations of the SCCC decoder for both received sequences r_x and r_y by using the LLRs as expressed in (17).

b) Evaluate L_Z^{(i)}(z) using (19).

c) Evaluate L^{(i)}(x; I) by using L_Z^{(i-1)}(z) and L^{(i)}(y; O). Evaluate L^{(i)}(y; I) by using L_Z^{(i-1)}(z) and L^{(i)}(x; O).

d) Go back to a) and continue until the last iteration q.

As can be seen from the algorithm, at any global iteration i the joint decoder estimates the extrinsic LLRs L^{(i)}(x; I) and L^{(i)}(y; I) by using the new estimates of the source bits x̂^{(i)} and ŷ^{(i)} and the previous estimate of the difference sequence ẑ^{(i-1)} (note that the LLRs L^{(i)}(x; I) and L^{(i)}(y; I) are supplied to the outer decoder in the respective SCCCs).
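The data flow of steps 1)-3) can be summarized in a skeleton like the one below. `run_sccc_decoder` is a placeholder of ours (here stubbed by a simple threshold, nothing like a real SCCC SISO), so the sketch shows only how the global loop wires the cross-correlation estimate into the a priori inputs:

```python
import math

def llr_z_from_hard(x_hat, y_hat):
    """Eq. (19), with our clamp on the Hamming weight to avoid log(0)."""
    L = len(x_hat)
    w = sum(a ^ b for a, b in zip(x_hat, y_hat))
    w = min(max(w, 1), L - 1)
    return math.log((L - w) / w)

def boxplus(la, lb):
    """Min-sum approximation (8)."""
    s = (1 if la >= 0 else -1) * (1 if lb >= 0 else -1)
    return s * min(abs(la), abs(lb))

def joint_decode(run_sccc_decoder, Lch_x, Lch_y, L, q):
    """Global joint decoding loop, steps 1)-3).
    run_sccc_decoder(Lch, La, n_local) -> (hard_bits, soft_out) is a stub."""
    zeros = [0.0] * L
    # Step 1: preliminary local iterations with zero a priori information.
    x_hat, _ = run_sccc_decoder(Lch_x, zeros, 12)
    y_hat, _ = run_sccc_decoder(Lch_y, zeros, 12)
    Lz = llr_z_from_hard(x_hat, y_hat)
    # Step 2: the first global iteration also starts from zero extrinsics.
    La_x, La_y = zeros, zeros
    for _ in range(q):                                        # Step 3
        x_hat, Lx_out = run_sccc_decoder(Lch_x, La_x, 3)      # a)
        y_hat, Ly_out = run_sccc_decoder(Lch_y, La_y, 3)
        Lz_new = llr_z_from_hard(x_hat, y_hat)                # b)
        La_x = [boxplus(Lz, l) for l in Ly_out]               # c): uses the
        La_y = [boxplus(Lz, l) for l in Lx_out]               # previous Lz
        Lz = Lz_new
    return x_hat, y_hat

def _stub_sccc(Lch, La, n_local):
    """Toy stand-in: threshold channel-plus-a-priori LLRs (not a real SISO)."""
    tot = [c + a for c, a in zip(Lch, La)]
    return [1 if t > 0 else 0 for t in tot], tot

x_hat, y_hat = joint_decode(_stub_sccc, [3.0, -3.0, 3.0, -3.0],
                            [3.0, 3.0, 3.0, -3.0], 4, q=2)
print(x_hat, y_hat)  # [1, 0, 1, 0] [1, 1, 1, 0]
```

Note that, as in step c), the a priori inputs for the current global iteration are built from the cross-correlation estimate of the previous one.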

Note that there is no need to subtract the available a priori information (e.g., from the previous iteration) from one global iteration to the next. Looking at the SCCC decoder for one of the two sources at a given global iteration, the updated estimate of the cross correlation is used to generate a priori soft information on the source bits, which is combined with the intrinsic information derived from the channel to restart a sequence of local decoding iterations in the SCCC decoder. On the other hand, extrinsic information generated by a given block at iteration (p - 1) within the SCCC iterative decoding loop must be subtracted at iteration p for proper processing.

IV. SIMULATION RESULTS

We have conducted simulations of the proposed iterative joint channel decoder for correlated sources to verify functionality and assess the potential gains of the approach. A sample simulation result for a rate-1/4 SCCC obtained from a serial concatenation of an outer encoder with generator matrix

G(D) = \left[ 1, \frac{1 + D + D^3 + D^4}{1 + D + D^4} \right]

and an inner encoder with generator matrix

G(D) = \left[ 1, \frac{1 + D^2}{1 + D + D^2} \right]

and employing a spread-25 Fragouli/Wesel interleaver of length 640 [23] is shown in Fig. 7. In the figure, we show the BER and FER performance of the individual SCCCs (without joint decoding) after 35 iterations for comparison purposes. We have verified that more than 35 iterations did not yield further performance improvement of the individual SCCC BER and FER. In the same figure, we show the performance of the proposed iterative joint channel decoder after five global iterations of the proposed algorithm, whereby, during each global iteration, ten local iterations of the individual SCCCs have been conducted using the MAX*-log-MAP algorithm with 3-b quantization, as specified in [24]. The simulation results reflect the performance of the iterative joint decoder for various correlation coefficients between the two sequences. All simulations have been conducted by counting 100 erroneous frames. The assumed modulation format is binary phase-shift keying (BPSK). The number of preliminary iterations to initialize the global iterations was set to 12. To give an idea of the maximum achievable performance of the proposed algorithm, we show the performance in the case of 100% correlation between the two sequences (i.e., the case in which the two sequences are identical).

Fig. 10. Variance of the estimate of ρ at the end of the final global decoding iteration of the correlated sources as a function of SNR E_b/N_o for various block lengths and various degrees of correlation between the data generated by the correlated sources.

To see the impact of global iterations, simulation results shown in Fig. 8 refer to the same rate-1/4 SCCC as above and depict the performance of the iterative joint decoder as a function of the number of global iterations. In the figure, we show the BER of the individual SCCCs after 35 iterations for comparison purposes alongside the performance of the proposed algorithm for various numbers of global iterations (two, five, nine, and 13 iterations), during each of which three local iterations of the MAX*-log-MAP algorithm have been applied for decoding of the individual SCCC codes. The simulation results are for a reference correlation coefficient of 70%. The number of preliminary iterations to initialize the global iterations was set to 12.

To verify some of the theoretical results in connection with the estimation of the cross-correlation coefficient in the case of real decoding (recall that we assumed the error sequences were i.i.d. in our analysis, and this is clearly not the case during actual joint decoding of the SCCC codes), we have compiled data from several simulation runs on the same SCCC codes as above employing the iterative joint decoder and have generated several empirical curves. In particular, Fig. 9 shows the estimated ρ at the end of the final global decoding iteration as a function of signal-to-noise ratio (SNR) E_b/N_o for various block lengths and various degrees of correlation between the data generated by the correlated sources. Note the dependence of the estimate on SNR and the data block length, and the existence of the bias, all of which were predicted by the theoretical analysis. Finally, Fig. 10 depicts the variance of the estimate of ρ at the end of the final global decoding iteration as a function of SNR E_b/N_o for various block lengths and various degrees of correlation between the data generated by the correlated sources. Once again, the dependence of this variance on SNR and the data block length was correctly predicted by the simplified theoretical analysis.

V. CONCLUSION

In this correspondence, we have presented a novel algorithm for joint channel decoding of correlated information sources employing



serially concatenated convolutional code (SCCC) channel codes. The algorithm utilizes the empirical cross-correlation estimates obtained during partial global decoding iterations to generate what is in effect a priori soft information on the transmitted bits to feed the soft-in soft-out (SISO) outer decoders of the local SCCC codes. Normally, this soft information does not exist for the outer codes of a typical SCCC configuration. The sample simulation results demonstrate significant achievable gains, especially when the sources are highly correlated. Our empirical studies of the estimates of the cross correlation during actual decoding confirm the validity of many of the properties of such estimates obtained from a simplified theoretical analysis based on the assumption of independent and identically distributed (i.i.d.) error sequences presented in the correspondence.

REFERENCES

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.

[2] D. Slepian and J. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. IT-19, no. 4, pp. 471–480, Jul. 1973.

[3] L. J. Guibas, "Sensing, tracking, and reasoning with relations," IEEE Signal Processing Mag., vol. 19, no. 3, pp. 73–85, Mar. 2002.

[4] S. S. Pradhan, J. Kusuma, and K. Ramchandran, "Distributed compression in a dense microsensor network," IEEE Signal Processing Mag., vol. 19, no. 3, pp. 51–60, Mar. 2002.

[5] F. Zhao, J. Shin, and J. Reich, "Information-driven dynamic sensor collaboration," IEEE Signal Processing Mag., vol. 19, no. 3, pp. 61–72, Mar. 2002.

[6] J. Hagenauer and T. Stockhammer, "Channel coding and transmission aspects for wireless multimedia," Proc. IEEE, vol. 87, no. 10, pp. 1764–1777, Oct. 1999.

[7] J. Hagenauer, "Source-controlled channel decoding," IEEE Trans. Commun., vol. 43, no. 9, pp. 2449–2457, Sep. 1995.

[8] C. Veaux, P. Scalart, and A. Gilloire, "Channel decoding using inter- and intra-correlation of source encoded frames," in Proc. Data Compression Conf. (DCC), Snowbird, UT, Mar. 2000, pp. 103–112.

[9] ——, "Channel decoding using adaptive interframe and intraframe bit prediction in GSM system," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), vol. 5, Turkey, May 2000, pp. 2589–2592.

[10] M. Adrat, R. Haenel, and P. Vary, "On joint source-channel decoding for correlated sources," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), vol. 3, Orlando, FL, May 2002, pp. III-2505–III-2508.

[11] M. Adrat, U. von Agris, and P. Vary, "Convergence behavior of iterative source-channel decoding," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), vol. 4, Hong Kong, China, Apr. 2003, pp. IV-269–IV-272.

[12] M. Adrat, P. Vary, and J. Spittka, "Iterative source-channel decoder using extrinsic information from softbit-source decoding," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), vol. 4, Salt Lake City, UT, May 2001, pp. 2653–2656.

[13] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 429–445, Mar. 1996.

[14] A. Aaron and B. Girod, "Compression with side information using turbo-codes," in Proc. Data Compression Conf. (DCC), Snowbird, UT, Apr. 2002, pp. 252–261.

[15] J. Bajcsy and P. Mitran, "Coding for the Slepian–Wolf problem with turbo codes," in Proc. IEEE GLOBECOM, vol. 2, San Antonio, TX, Nov. 2001, pp. 1400–1404.

[16] ——, "Coding for the Wyner–Ziv problem with turbo-like codes," in Proc. Int. Symp. Information Theory, Lausanne, Switzerland, Jun./Jul. 2002, p. 91.

[17] J. Bajcsy and I. Deslauriers, "Serial turbo coding for data compression and the Slepian–Wolf problem," in Proc. Information Theory Workshop, Paris, France, Mar./Apr. 2003, pp. 296–299.

[18] J. Garcia-Frias and Y. Zhao, "Compression of correlated binary sources using turbo-codes," IEEE Commun. Lett., vol. 5, no. 10, pp. 417–419, Oct. 2001.

[19] A. D. Liveris, Z. Xiong, and C. N. Georghiades, "A distributed source coding technique for correlated images using turbo-codes," IEEE Commun. Lett., vol. 6, no. 9, pp. 379–381, Sep. 2002.

[20] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 909–926, May 1998.

[21] G. Montorsi and S. Benedetto, "Design of fixed-point iterative decoders for concatenated codes with interleavers," IEEE J. Sel. Areas Commun., vol. 19, no. 5, pp. 871–882, May 2001.

[22] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol. IT-20, no. 2, pp. 284–287, Mar. 1974.

[23] C. Fragouli and R. D. Wesel, "Semi-random interleaver design criteria," in Proc. IEEE GLOBECOM, vol. 5, Rio de Janeiro, Brazil, Dec. 1999, pp. 2352–2356.

[24] G. Montorsi and S. Benedetto, "Design of fixed-point iterative decoders for concatenated codes with interleavers," IEEE J. Sel. Areas Commun., vol. 19, no. 5, pp. 871–882, May 2001.