

Research Article
A Doping Bits Based BP Decoding for Rateless Distributed Source Coding Applications

N. M. Masoodhu Banu 1 and S. Sasikumar 2

1 Department of Electronics and Communication Engineering, Sethu Institute of Technology, Virudhunagar, India
2 Department of Electronics and Communication Engineering, R.M.D. Engineering College, Gummidipoondi Taluk, Thiruvallur District, Chennai, India

Correspondence should be addressed to N. M. Masoodhu Banu; [email protected]

Received 15 September 2013; Revised 25 March 2014; Accepted 25 March 2014; Published 23 April 2014

Academic Editor: Sing Kiong Nguang

Copyright © 2014 N. M. Masoodhu Banu and S. Sasikumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Journal of Electrical and Computer Engineering, Hindawi Publishing Corporation, Volume 2014, Article ID 467628, 9 pages, http://dx.doi.org/10.1155/2014/467628

A novel doping bits based belief propagation decoding algorithm for rate-adaptive LDPC codes built on a fixed bipartite graph is proposed. The proposed work modifies the decoding algorithm by converting the punctured nodes to regular source nodes and by following the encoding rule at the decoder. The doping bits transmitted in place of the punctured bits, together with the modified decoding algorithm at the decoder, feed all the punctured nodes with reliable log-likelihood ratios. This enables the proposed decoding algorithm to recover all punctured nodes in the early iterations. The fast convergence leads to a reduction in decoder complexity while providing a considerable improvement in performance.

1. Introduction

Conventional video coding technologies have been dominated by the MPEG and H.26x families of standards, of which the most recent, H.264/AVC [1], represents the state of the art. They offer high-quality, low-bitrate video for demanding applications, but at the cost of complex motion estimation and compensation for better rate-distortion performance. However, some emerging applications, such as low-power wireless video surveillance, wireless PC cameras, and mobile camera phones, require low-complexity encoding. Distributed source coding (DSC) [2, 3], a new coding paradigm which allows for very low complexity encoding, is well matched to these applications as it shifts the encoding complexity to the decoder. Motion estimation [4] plays an important role in H.264 and MPEG-4 video compression. It estimates the motion vector between the intra- and interframes and transmits only the motion vector for interframes, whereas the source and channel coded data are transmitted for intraframes. The motion vector estimation module is the one that increases the complexity of the encoder. Distributed video coding (DVC), based on DSC, does not depend on complex motion estimation. The core idea of DVC comes from information-theoretic results of the 1970s: the Slepian-Wolf [5] and Wyner-Ziv [6] theorems. They state that if X and Y are two correlated sources, X can be encoded at a rate greater than or equal to its conditional entropy H(X | Y). Though H(X | Y) is less than the entropy H(X), X can be decoded provided that Y was encoded at a rate H(Y) and is available at the decoder as side information. Wyner pointed out the possibility of using linear error-correcting codes for Slepian-Wolf coding in [7]. As a practical DVC system needs channel codes with high coding efficiency, the implementation became possible only after the invention of turbo and LDPC codes [8, 9], which approach Shannon's capacity limit at long block lengths. Though the issue of high coding efficiency ended with the invention of LDPC codes, the problem of assuming the conditional entropy at the encoder remains. As the conditional entropy determines the rate of the code to be used for X, the syndrome bits transmitted based on this presumption may or may not be sufficient to decode X. If the bit error rate (BER) at the decoder exceeds a given value, more syndromes are requested from the encoder. In this circumstance, the code designed should be incremental; that is, the encoder should not re-encode the data.



Figure 1: Decoding graphs if the encoder transmits (a) entire syndrome, (b) accumulated syndrome, and (c) even indexed syndrome bits.

More precisely, the first bits are kept and only additional bits are sent upon request; this is essentially done by rate-adaptive codes.

The concept of rate-adaptive codes originated in the channel coding literature. One of the key techniques for achieving rate adaptivity is puncturing, which comes at a relatively small performance loss [10]. Ha et al. [10] investigated the existence of good puncturing patterns in LDPC codes for rate adaptivity, but the length of the code was large. They also proposed a systematic way to find the locations of punctured symbols in short-length LDPC codes [11], which is rate adaptive and minimizes the performance loss due to the puncturing. Hyo et al. [12] proposed an algorithm to prevent the formation of a stopping set from the punctured variable nodes even when the amount of puncturing is quite large. In spite of the existing works on rate-adaptive channel codes, the design of rate-adaptive LDPC codes for Slepian-Wolf coding is a challenging task due to the wide range of compression rates needed to achieve the theoretical compression limit. It has been studied extensively by various authors; Varodayan et al. [13] proposed a rate-adaptive scheme by sending doping source bits in addition to the source bits in a deterministic manner. They also proposed a new construction of codes [14], called LDPC accumulated (LDPCA) codes, to fit Slepian-Wolf coding. The LDPCA code set is designed by a syndrome splitting method that keeps the size of the stopping sets in check. However, it does not carefully eliminate the short cycles for each member code and results in low girth [15]. Rate-adaptive PEG codes have been suggested in [16], which increase the girth to reduce short cycles. The common shortcoming of these codes is that they handle the condition of poor side information at the decoder by sending more syndromes on request, with a variable bipartite graph for each rate change.

Figure 1 demonstrates how the bipartite graph varies with each rate change; Figure 1(a) shows the Tanner graph for the base code, whose rate is one, and Figures 1(b) and 1(c) show 1/2 rate codes. Figure 1(b) combines the nodes connected to the odd indexed syndromes with the even indexed ones to achieve a 1/2 rate code. Figure 1(c) manipulates the even indexed syndromes to achieve a 1/2 rate code, which leaves unconnected source nodes as shown. This does not favor decoding, as the source nodes are already available as side information at the decoder. The varying structure of the Tanner graph shown in Figures 1(a), 1(b), and 1(c) needs large memory management at the encoder. If the same LDPCA codes have to be used for the Slepian-Wolf theoretical compression limit, leading to a variable number of source nodes, zeros have to be padded up to the actual length of the information bits. Hence, not only does LDPCA need more complex memory management, but it also does not support rate adaptivity for arbitrary variations in block length. Skipping highly correlated colocated blocks in block based video coding leads to variable block lengths and hence variable source nodes. Hence the rate-adaptive codes for Slepian-Wolf coding have to serve two purposes: handle variable source node and check node lengths.

Chen et al. [17] showed that, under density evolution, each Slepian-Wolf coding problem is equivalent to a channel coding problem for a binary-input output-symmetric channel. Based on this, any rate-adaptive channel code design can be directly converted to a Slepian-Wolf code design. Jiang et al. have addressed the problem of the variable Tanner graph and presented a solution based on a fixed Tanner graph in [18, 19] by using the analytical results on punctured LDPC codes in [17, 20]. They also showed that any arbitrarily small gap to capacity for the original code can be preserved after puncturing on the binary erasure channel (BEC), and it is also shown that punctured LDPC codes are as good as ordinary LDPC codes. The issue of the variable Tanner graph is solved in [18] by designing the code with extra (20–25%) punctured source and check nodes, which may or may not carry information. This method uses standard BP decoding to recover the extended punctured parity nodes, leading to slow convergence. The main contribution of this paper is to develop an algorithm for [18] that speeds up the convergence of the decoding and at the same time gives improved BER performance. The work presented here gives a new construction method to ease the code design and modifies the decoding algorithm.


Thereby it improves some results from [18], where an earlier version of the algorithm was presented. The developed algorithm is called the doping bits based fixed Tanner graph algorithm, since it is based on sending doping bits instead of extended parity bits on additional syndrome requests. PEG-constructed LDPC codes [21] are used, as the bit plane processing of DSC applications requires short-length codes. The iterative belief propagation (BP) decoder [22], with estimated side information and doping bits, leads to performance improvement in terms of decoder complexity and BER.

The rest of the paper is organized as follows. Some important definitions and notations used throughout this paper are given in Section 2, and the novel concept of the doping bits based decoding algorithm is introduced in Section 3. Simulation results are discussed in Section 4, and finally the conclusion is presented in Section 5.

2. Definitions and Notations

The notations and definitions used throughout the paper are introduced here. LDPC codes are represented by a Tanner graph [23] and consist of two sets of nodes connected by edges; the variable nodes correspond to the N columns of the parity-check matrix H and the M check nodes represent the rows of H. There are two ways to represent an LDPC code for DSC applications: (1) by (N, M), where N (input) is the number of information bits and M (output) is the number of syndromes, that is, only the syndromes are transmitted; (2) by (K, N − K), where K (input) is the number of information bits and N − K (output) is the number of parity bits, that is, only parity bits are transmitted. The work proposed here is based on the syndrome method. Since the N variable nodes are used for information bits, from now on they are called source nodes. Apart from the N source nodes and M syndromes as in the usual scheme, there are L extended source nodes, called parity nodes, and L extended syndrome nodes for rate adaptation. Figure 2 shows the Tanner graph of the system described in [18]. Finally, M_eq = M + L represents the total number of syndromes and N_eq = N + L the total number of source nodes.

The source nodes are a combination of the information bits S1 to S6 and the parity bits P1 to P3. The check nodes are a combination of the check nodes C1 to C3 and the extended check nodes EC1 to EC3. The extended check nodes are equal in number to the parity nodes.
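To make the fixed-graph dimensions concrete, the following is a minimal Python sketch (our illustration, not the authors' code) of extending a base M x N parity-check matrix with L parity columns and L extended check rows so that M_eq = M + L and N_eq = N + L. The one-to-one connection of each extended check to its parity column follows the proposed construction described later in Section 3.2; the extra source-node connections and the toy base matrix are arbitrary choices made only for illustration.

import numpy as np

def extend_parity_check(H, L, anchor_cols):
    """Extend an M x N parity-check matrix with L parity columns and L
    extended check rows. Each extended check EC_i connects the parity
    column N + i and one existing source column (one-to-one design)."""
    M, N = H.shape
    H_eq = np.zeros((M + L, N + L), dtype=np.uint8)
    H_eq[:M, :N] = H                      # the original bipartite graph is kept fixed
    for i in range(L):
        H_eq[M + i, N + i] = 1            # extended check EC_i <-> parity node P_i
        H_eq[M + i, anchor_cols[i]] = 1   # ... plus one regular source node
    return H_eq

# Toy example: a 3 x 6 base code extended by L = 2, giving N_eq = 8, M_eq = 5.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
H_eq = extend_parity_check(H, L=2, anchor_cols=[5, 2])
print(H_eq.shape)                         # (5, 8), i.e., M_eq x N_eq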

3. Doping Bits Based Rate-Adaptive LDPC Codes

3.1. Algorithm. LDPC codes in syndrome form have been used effectively in fixed-rate distributed source coding due to their higher compression ratio compared with the parity-based scheme. The BP decoding is modified slightly in order to take the syndrome information into account [22]. LDPC codes used as Slepian-Wolf codes have to be rate adaptive in two ways: with respect to variation in the source length and with respect to variation in the quality of the side information estimate. Filling the extra source nodes with zeros is one way to adapt to variable source lengths, but it results in more encoding complexity.

Figure 2: Code design by [18]. (Source nodes S1–S6 and parity nodes P1–P3; check nodes C1–C3 and extended check nodes EC1–EC3.)

The design used in [18] gives a solution for source length variation as well as for variation in side information quality. Side information quality variation is handled by incremental syndromes. The encoding steps are as follows:

(1) generation of the parity sequence such that the extended syndromes become zero;

(2) generation of the regular syndromes from the actual source using the parity sequence generated in step 1;

(3) mixing a variable portion of the source bits with the generated parity sequence according to a predefined arithmetic operation;

(4) in addition, the proposed scheme sends uncoded source bits, called doping bits, according to the desired compression rate; these bits are called doping bits for the reason explained later in the decoding section.

A different decoding method at the decoder makes use of these doping bits. Computation of the parity sequence, that is, P1 to PL, involves solving L linear equations associated with the corresponding check nodes. The basic idea is that, by way of the modified decoding, the punctured parity nodes become reliable and assist the decoding process in converging in the early iterations. The pattern of the doping bits over the source nodes is known to the decoder. There are two schemes to be explained: variable source length with fixed syndrome coding (Scheme A) and variable source length with variable syndrome coding (Scheme B). The supported rate for Scheme A is M/N to M/N_eq and for Scheme B it is M/N to M_eq/N_eq. The doping technique is applied when the decoder requests additional syndromes.
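As a minimal sketch of the encoding steps listed above (a simplified illustration under the one-to-one extended check assumption; the step-3 mixing is omitted and all helper names are hypothetical, not taken from the paper):

import numpy as np

def encode(H_eq, M, L, source_bits, doping_positions):
    """Sketch of the encoder: zero extended syndromes, regular syndromes,
    and uncoded doping bits (the mixing of step 3 is omitted for brevity)."""
    N = len(source_bits)
    x = np.zeros(N + L, dtype=np.uint8)
    x[:N] = source_bits
    # Step 1: choose each parity bit so that its extended syndrome is zero.
    # With a one-to-one design, P_i simply cancels the XOR of the connected
    # source bits in extended check row M + i.
    for i in range(L):
        row = H_eq[M + i]
        x[N + i] = np.bitwise_xor.reduce(row[:N] & x[:N])
    # Step 2: regular syndromes from the source and parity bits together.
    syndromes = (H_eq[:M] @ x) % 2
    # Step 4: doping bits are selected source bits sent uncoded.
    doping_bits = x[list(doping_positions)]
    return syndromes, doping_bits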

3.1.1. Scheme A: Variable Source Length and Fixed Syndrome Coding. This scheme comes into the picture when the source statistics change and give variations in the source length. The proposed scheme punctures the extended parity bits according to the compression ratio and estimates the punctured parity bits using the modified decoding scheme at the decoder, whereas the algorithm described in [18] does not perform this estimation. Estimation makes a punctured node a reliable node, and hence the LLRs computed for these bits are highly reliable. The reliable punctured nodes obtained by the modified decoding lead to early convergence at the decoder.


Additional source bits s_{N+i} are sent for variable source length coding as

e_i = P_i + s_{N+i}, where i = 0 to L. (1)

3.1.2. Scheme B: Variable Source Length and Variable Syndrome Coding. This scheme is applied whenever the decoder requests supplementary incremental syndromes due to poor side information estimation. For every additional syndrome requested, instead of sending the parity bits P1 to PL, the encoder sends uncoded source bits to the decoder. Combined with the P1 to PL estimated at the decoder, this improves the performance and also makes the decoding fast. In a nutshell, the role of the rate-adaptive decoder is to recover the source X from the uncoded source bits and syndrome bits received so far from the encoder, in conjunction with the block-candidate side information Y, for any data length and number of syndromes. As these uncoded source bits enrich the performance, they are called doping bits.

3.1.3. Decoding. The key to the proposed solution is the modified decoding algorithm, which follows the encoding steps. It differs from [18] in the way the parity nodes are computed and in the BP decoding for the doping bits at the decoder. Once the parity nodes have been used to produce the zero extended syndromes, they are used to find the other, nonzero syndromes in combination with the other source symbols. The encoding part is elucidated first in order to appreciate the decoding. For simplicity, an (8, 4) code is assumed where 20% of the nodes, that is, 2 nodes, are punctured. The new code dimension is (10, 6); that is, 2 syndrome nodes must be made zero. The Tanner graph is drawn partially in Figure 3 in order to highlight only the computation of the zero syndrome nodes. In Figure 3, EC1 and EC2 are zero syndrome nodes and P1 and P2 are parity nodes. The equations for the zero syndrome nodes at the encoder are given by

EC1 = P1 ⊕ S6,
EC2 = P2 ⊕ S3.    (2)

At the encoder, P1 and P2 are computed such that EC1 and EC2 are zero, with S6 and S3 known. The same can be reversed at the decoder, as the core idea of DSC coding is that the side information is correlated with the source being encoded. Hence, at the decoder, the equations to find the parity bits are given by

P1 = EC1 ⊕ S6,
P2 = EC2 ⊕ S3.    (3)

As the punctured nodes are computed to have a value, they are not fed with a zero LLR. This modification leverages the reliability of the punctured/parity nodes and hence the decoding performance. It transforms the intentional puncturing with heterogeneous parallel channel (BSC/BEC) parameters [20] into a homogeneous channel. The strengthened LLR values obtained with the modified algorithm at the decoder lead to fast convergence.
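The following is a minimal sketch of this decoder-side estimation (assuming the one-to-one extended check structure of Figure 3; the LLR magnitude assigned to the recovered parity nodes is an illustrative choice and not a value specified in the paper):

import numpy as np

def init_llrs_with_estimated_parity(H_eq, M, L, N, side_info, llr_side_info, big_llr=20.0):
    """Initial LLRs for all N_eq source nodes. Regular source nodes take the
    channel LLRs derived from the side information Y; each punctured parity
    node is estimated by reversing the encoding rule (eq. (3)) and is given
    a strong LLR instead of the usual zero (erasure) LLR."""
    llr = np.zeros(N + L)
    llr[:N] = llr_side_info
    for i in range(L):
        src_cols = np.flatnonzero(H_eq[M + i, :N])           # source nodes in EC_{i+1}
        p_est = np.bitwise_xor.reduce(side_info[src_cols])   # P = EC (= 0) xor S_j ...
        llr[N + i] = big_llr if p_est == 0 else -big_llr     # sign convention: + means bit 0
    return llr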

Figure 3: Illustrating the modified decoding with punctured node estimation. (Source nodes S1–S8 and parity nodes P1–P2; check nodes C1–C4 and extended check nodes EC1–EC2.)

Eliminating the punctured parity nodes as described above leaves space to send some other bits in place of the extended syndromes, and these bits are called doping bits. As the doping bits are 100% reliable source bits, some modification can be made in the BP decoding; that is, the message from a doping bit node to the check nodes is unaltered and always will be

p_s^out = 1 − D,    (4)

where p_s^out, the output, is the message from the source node to the check nodes and D is the doping bit [13]. This in turn not only converges fast but also reduces the computational complexity.
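A minimal sketch of the fixed message rule of (4) inside a probability-domain BP variable-node update (the surrounding data structures are hypothetical; only the clamping of doped nodes follows directly from the paper, the rest is the standard update):

def source_to_check_message(node, check, p_channel, check_to_source, doped_bits):
    """Message (probability that the bit is 0) from one source node to one check.
    For a doped node the message is clamped to 1 - D, as in eq. (4), and never
    changes across iterations; other nodes use the usual extrinsic product rule."""
    if node in doped_bits:
        return 1.0 - doped_bits[node]           # doping bit D is known with certainty
    p0 = p_channel[node]                        # intrinsic P(bit = 0) from side information
    p1 = 1.0 - p0
    for c, msg in check_to_source.get(node, {}).items():
        if c == check:                          # exclude the target check (extrinsic rule)
            continue
        p0 *= msg
        p1 *= 1.0 - msg
    return p0 / (p0 + p1)

Because the doped messages never change, they can be computed once and reused, which is consistent with the complexity reduction reported in Section 4.2.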

3.2. Code Design. The exclusive properties of the code design needed by the system in [18], in comparison with a standard LDPC code, are that (1) the extended source nodes and extended check nodes should be placed at the end in order to allow for puncturing within the binary erasure channel; (2) in addition to the extended source nodes, the other source nodes connected to the extended check nodes should have a one-to-one connection, as shown in Figure 3. Here the parity bits are directly connected to the extended check nodes and only a few other source nodes are connected, to make the computation easier. If EC2 had been connected to two parity nodes as in Figure 2, then the equations would change to

EC1 = P1 ⊕ S1,
EC2 = P2 ⊕ P3 ⊕ S6,
EC3 = P3 ⊕ S2.    (5)

P1, P2, and P3 are computed in such a way that EC1, EC2, and EC3 are zero in each input cycle. Solving these equations will be computationally intensive if the three equations above are not independent. It is very hard to place both extended source nodes and syndrome nodes at the end with independent check node equations for a specified BER parameter. As the scheme is based on a fixed Tanner graph, the extended syndrome nodes can be placed anywhere, with the only restriction of having independent check node equations. Selecting the extended syndromes in this way opens up many choices, which is otherwise not possible with the direct design. The position of the extended syndrome nodes has to be known by the decoder in this case. This is not very complicated, as it is a one-time process. The new construction is shown in Figure 4.


Figure 4: Proposed construction. (Source nodes S1–S8 and parity nodes P1–P2; check nodes C1–C4 and extended check nodes EC1–EC2.)
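The independence requirement above can be checked mechanically; the following is a minimal sketch (an illustrative helper using GF(2) Gaussian elimination, not part of the authors' design flow) that verifies that the L extended check equations can be solved uniquely for the parity bits:

import numpy as np

def parity_equations_independent(H_eq, M, L, N):
    """True if the L x L submatrix of the extended check rows restricted to
    the parity columns has full rank over GF(2), so P1..PL are uniquely
    determined once the source bits are fixed."""
    A = np.array(H_eq[M:M + L, N:N + L], dtype=np.uint8) % 2
    rank = 0
    for col in range(L):
        pivots = [r for r in range(rank, L) if A[r, col]]
        if not pivots:
            continue
        A[[rank, pivots[0]]] = A[[pivots[0], rank]]   # move a pivot row up
        for r in range(L):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                       # eliminate over GF(2)
        rank += 1
    return rank == L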

The selection of the number of codes is explained with an example by taking a QCIF video. The total number of blocks with a block size of 4×4 for a QCIF (176×144) video is 1584. The number of source nodes when processed in bit planes without compression will therefore be 1584. An additional example is a 512×512 hyperspectral image [24], which can also be DSC coded due to the high correlation between neighboring bands.

In skip mode the length of the source node changes, due to skipping blocks which are highly correlated with the colocated blocks in the previous frame. This can be further explained with the help of the grandma video, taking the Y component of the 3rd and 4th frames, and the hyperspectral image, taking the 28th and 29th bands. As shown in Figures 5(c) and 5(d), since the grandma sequence is a slow-motion video, all the blocks are almost the same and the PSNR metric between them is more than 45 dB. The difference between the frames is only due to the camera lighting condition. Skipping these blocks for coding reduces the source bits for LDPC coding. Changing the PSNR of the skip blocks for a specified rate-distortion performance will also change the number of source bits for coding. Another factor contributing to variable source nodes is quantized DCT coefficients, which can be discarded due to their low magnitude. Apart from the above, variations in correlation with every adjacent frame of a video lead to variable source length. Due to the similarity of the decoding algorithm, the degree distributions optimized for LDPC channel codes (e.g., using [25]) can be applied directly to the proposed rate-adaptive codes.

4. Results and Discussion

The performance of the doping bits based rateless code is analyzed on the basis of storage complexity, encoder-decoder complexity, and BER performance. The experimental setup parameters are as follows.

(i) The channel is modeled as multiple BSC channels in order to incorporate the precise correlation of each bit plane independently [26].

(ii) The channel code for the proposed system has been designed for the BSC channel with 20% punctured or extra source nodes using density evolution. The threshold crossover probability of the resulting code is 0.12.

(iii) The constructions used for the code design are the standard and PEG constructions.

(iv) The virtual channel correlation was modeled as BSC/BEC when punctured nodes are involved and as BSC when the punctured nodes are estimated.

(v) The maximum source length considered is 1584 and the minimum is 500, as explained in Section 3.

(vi) A set of 5 codes has been designed with base code rate 1/2, within the range 500–1584, as in Table 1.

Table 1: Comparison of the number of codes required for the proposed scheme with other codes.

Code dimension for other rate-adaptive codes | Code dimension for the proposed scheme
(500, 600)   | (600, 350)
(600, 700)   | (720, 420)
(700, 800)   | (864, 504)
(800, 900)   | (1036, 605)
(900, 1000)  | (1244, 726)
(1000, 1100) | (1493, 871)
(1100, 1200) | —
(1200, 1300) | —
(1300, 1400) | —
(1400, 1500) | —
(1500, 1600) | —

4.1. Storage Complexity. There is no difference between the number of codes used in the proposed scheme and in reference [18]; but, to appreciate the advantage of the proposed scheme over rate-adaptive codes which do only syndrome adaptation, Table 1 compares the number of codes. The number of codes with other rate-adaptive codes for a source length variation from 500 to 1584 is 11, whereas it is only 6 for the proposed algorithm. Also, the resolution step is 100 for the regular codes and 1 for the proposed algorithm.
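As a quick reading aid (not part of the scheme), the snippet below simply tabulates the code dimensions listed in Table 1 and confirms the 11-versus-6 code count and the roughly 20% growth between consecutive proposed codes:

# Code dimensions taken directly from Table 1.
other = [(n, n + 100) for n in range(500, 1501, 100)]            # (500, 600) ... (1500, 1600)
proposed = [(600, 350), (720, 420), (864, 504),
            (1036, 605), (1244, 726), (1493, 871)]

print(len(other), len(proposed))                                  # 11 versus 6 codes
print([round(b[0] / a[0], 3) for a, b in zip(proposed, proposed[1:])])
# -> [1.2, 1.2, 1.199, 1.201, 1.2], i.e., about 20% growth per code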

4.2. Complexity of Encoder and Decoder. The encoder complexity is given by S = xH, where S is the syndrome vector, x is the information bit vector, and H is the parity-check matrix. Let j and k be the source and check node degrees, respectively; this means that k source nodes are connected to each check node. As syndromes are nothing but check node values, each syndrome computation needs k multiplications (MULs) and k − 1 XOR additions. There is no encoder complexity saving as such, but the design methodology always results in a one-to-one relationship between extended parity nodes and extended syndrome nodes. This gain is achieved by the interleaved extended syndrome nodes.
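A minimal sketch of the syndrome computation and the per-check operation count stated above (the dense matrix form and the toy inputs are for illustration only):

import numpy as np

def syndromes_and_xor_count(H, x):
    """Compute the syndromes S = xH over GF(2) and count the XORs a sparse
    encoder needs: a degree-k check uses k selections and k - 1 XOR additions."""
    S = (H @ x) % 2
    xors = sum(int(row.sum()) - 1 for row in H)
    return S, xors

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
print(syndromes_and_xor_count(H, x))   # three checks of degree 3: S = [0, 1, 1], 6 XORs in total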

The figures given in Tables 2 and 3 illustrate the number of iterations required for the (1800, 1050) rate-adaptive code, where the number of extended data and syndromes is 300. The decoder complexity reduction is achieved by the modified decoding.


Figure 5: (a) and (b) 28th and 29th bands of the hyperspectral image. (c) and (d) Y component of the 3rd and 4th frames of the grandma video.

Table 2: Number of iterations required for Scheme A.

Number of punctured parity nodes | Number of extended data | Iterations needed by reference [18] | Iterations needed by modified decoding
0   | 300 | 10 | 10
150 | 150 | 30 | 25
300 | 0   | 50 | 38

Table 3: Number of iterations required for Scheme B.

Number of punctured parity nodes | Number of extended syndromes or number of doping bits | Iterations needed by reference [18] | Iterations needed by doping bits along with modified decoding
0   | 300 | 10 | 4
150 | 150 | 21 | 10
300 | 0   | 50 | 38

For Scheme A, the gain comes from converting the punctured nodes into nodes with certainty. For Scheme B, the gain is due to two factors: the conversion of punctured nodes into nodes with certainty and the high-confidence LLR values of the doping bits. Accordingly, the improvement for item 2 in Table 2 is due only to the modified decoding and hence is a moderate improvement of only 16.66%, whereas the improvement for item 2 in Table 3 is due to two factors: modified decoding for 50% of the nodes and 50% doping bits in place of the extended syndromes.
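The quoted figure follows directly from the tables; a quick check (the Scheme B percentage is our computation from Table 3, not a figure quoted by the authors):

# Item 2 of Table 2: 30 iterations for reference [18] versus 25 with modified decoding.
print(round((30 - 25) / 30 * 100, 2))   # 16.67% reduction (Scheme A)
# Item 2 of Table 3: 21 versus 10 iterations once doping bits are also used.
print(round((21 - 10) / 21 * 100, 2))   # 52.38% reduction (Scheme B)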

4.3. Bit Error Rate Performance. In this section the performance of Scheme A and Scheme B is compared for various rates and with that of the rate-adaptive LDPC codes of [18] for distributed source coding of two binary correlated sources. The simulation parameters are listed in Table 4. The virtual correlation channel is modeled as BSC due to bit plane coding. The variable conditional entropy was simulated by adding noise with various crossover probabilities to the motion-compensated frame 1.


Table 4: Simulation parameters for rate compatibility.

Base code rate: (1500, 750)
Derived code rate for use as rate-adaptive codes: (1800, 1050); L = 20% of N = 300; N = 1500; M = 750; N_eq = N + L = 1800; M_eq = M + L = 1050.
Source X: bit planes of frame 2 of the grandma video sequence.
Source Y: bit planes of frame 1 of the grandma video sequence.
Side information: motion-compensated frame 1 at the decoder.
Rates considered for Scheme A, Rate = M/(N + i), where i = 0 to 300:
  (1) 750/1500 = 0.5; punctured nodes = 300.
  (2) 750/1600 = 0.46875; punctured nodes = 200; extended source nodes = 100.
Rates considered for Scheme B, Rate = M/N to M_eq/N_eq:
  (1) 1000/1500 = 0.666; punctured nodes = 50; doping nodes (extended syndromes) = 250; extended source nodes = 0.
  (2) 900/1600 = 0.5625; punctured nodes = 50; doping nodes (extended syndromes) = 150; extended source nodes = 100.

Figure 6 shows the BER performance comparison under variable source length and fixed syndrome coding (Scheme A). The rates are modified by changing the number of source nodes while keeping the syndromes constant. The performance improvement in the lower-rate codes, due to the smaller number of punctured nodes, is almost balanced by the smaller number of zero syndromes. The rate 0.5 and 0.46875 codes have only a small performance gap for both the proposed and the reference [18] schemes, and this too goes unseen at high correlation, as the punctured nodes converge faster with highly correlated side information. However, comparatively, the higher gain of the proposed scheme is due to the estimation of the punctured nodes at the decoder; that is, the proposed scheme reaches a BER of 10^-5 at a crossover probability of 0.06, whereas the scheme of reference [18] reaches it only at a crossover probability of 0.025.

Figure 7 shows the BER performance comparison under variable source length and variable syndrome coding (Scheme B). The rates are modified by changing the number of syndrome nodes and the source length. High-rate codes perform better, as the doping bits serve as the requested extra syndromes in addition to the modified decoding. The performance gap keeps increasing as the number of doping bits increases.

The performance of the proposed Scheme A and Scheme B is compared with that of reference [18] in Figure 8. The rate 0.46875 code was used for the comparison. Scheme A with estimated punctured nodes performs better than [18].

Figure 6: Scheme A: comparison of BER with crossover probability for rate adaptation. (Curves: reference [18] R = 0.5, proposed Scheme A R = 0.5, reference [18] R = 0.46875, proposed Scheme A R = 0.46875.)

Figure 7: Scheme B: comparison of BER with crossover probability for rate adaptation. (Curves: reference [18] R = 0.666, proposed Scheme B R = 0.666, reference [18] R = 0.5625, proposed Scheme B R = 0.5625.)

Due to the additional syndromes, the rate becomes 0.5625 for Scheme B and, unsurprisingly, it shows better performance than any other scheme because of the doping bits. Figure 9 compares the performance of LDPC codes constructed by the standard algorithm and by the PEG construction algorithm; as expected, the PEG-based codes outperform the standard ones due to their higher girth.


Figure 8: Performance comparison of Schemes A and B. (Curves: reference [18] R = 0.46875, Scheme A R = 0.46875, Scheme B R = 0.5625.)

Figure 9: Performance comparison of PEG and standard codes. (Curves: reference [18] PEG R = 0.5, Scheme A PEG R = 0.5, Scheme B PEG R = 0.5625, reference [18] STD R = 0.5, Scheme A STD R = 0.5, Scheme B STD R = 0.5625.)

5. Conclusion

The modified decoding algorithm presented in this paper converts the punctured nodes to regular source nodes. The conversion changes the original BSC/BEC parallel channel to a homogeneous BSC/BSC channel and hence improves the decoding performance. Doping bits sent in place of additional syndromes, made possible by the estimated punctured nodes, play a crucial role in the decoding complexity reduction and the BER performance. The experimental results also show that the proposed scheme effortlessly integrates the skip-block-based compression technique with DSC coding.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] J. Ostermann, J. Bormans, P. List et al., "Video coding with H.264/AVC: tools, performance, and complexity," IEEE Circuits and Systems Magazine, vol. 4, no. 1, pp. 7–28, 2004.
[2] S. S. Pradhan and K. Ramchandran, "Distributed source coding using syndromes (DISCUS): design and construction," IEEE Transactions on Information Theory, vol. 49, no. 3, pp. 626–643, 2003.
[3] A. Aaron, R. Zhang, and B. Girod, "Wyner-Ziv coding of motion video," in Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, pp. 240–244, Pacific Grove, Calif, USA, November 2002.
[4] N. Brady, "MPEG-4 standardized methods for the compression of arbitrarily shaped video objects," IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, no. 8, pp. 1170–1189, 1999.
[5] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471–480, 1973.
[6] A. D. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Transactions on Information Theory, vol. 22, no. 1, pp. 1–10, 1976.
[7] A. D. Wyner, "Recent results in the Shannon theory," IEEE Transactions on Information Theory, vol. 20, no. 1, pp. 2–10, 1974.
[8] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, no. 2, pp. 399–431, 1999.
[9] S.-Y. Chung, G. D. Forney Jr., T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Communications Letters, vol. 5, no. 2, pp. 58–60, 2001.
[10] J. Ha, J. Kim, and S. W. McLaughlin, "Rate-compatible puncturing of low-density parity-check codes," IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2824–2836, 2004.
[11] J. Ha, J. Kim, D. Klinc, and S. W. McLaughlin, "Rate-compatible punctured low-density parity-check codes with short block lengths," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 728–738, 2006.
[12] Y. P. Hyo, W. K. Jae, S. Kwang, and C. W. Keum, "Efficient puncturing method for rate-compatible low-density parity-check codes," IEEE Transactions on Wireless Communications, vol. 6, no. 11, pp. 3914–3919, 2007.
[13] D. Varodayan, Y.-C. Lin, and B. Girod, "Adaptive distributed source coding," IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2630–2640, 2012.
[14] D. Varodayan, A. Aaron, and B. Girod, "Rate-adaptive distributed source coding using low-density parity-check codes," in Proceedings of the 39th Asilomar Conference on Signals, Systems and Computers, pp. 1203–1207, Pacific Grove, Calif, USA, November 2005.
[15] J. Fan, Y. Xiao, and K. Kim, "Design LDPC codes without cycles of length 4 and 6," Journal of Electrical and Computer Engineering, vol. 2008, Article ID 354137, 5 pages, 2008.
[16] M. Jang, J. W. Kang, and S.-H. Kim, "A design of rate-adaptive LDPC codes for distributed source coding using PEG algorithm," in Proceedings of the IEEE Military Communications Conference (MILCOM '10), pp. 277–282, San Jose, Calif, USA, November 2010.
[17] J. Chen, D. He, and A. Jagmohan, "Slepian-Wolf code design via source-channel correspondence," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '06), pp. 2433–2437, Seattle, Wash, USA, July 2006.
[18] J. Jiang, D. He, and A. Jagmohan, "Rateless Slepian-Wolf coding based on rate adaptive low-density parity-check codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '07), pp. 1316–1320, Nice, France, June 2007.
[19] D.-K. He, A. Jagmohan, L. Lu, and V. Sheinin, "Wyner-Ziv video compression using rateless LDPC codes," in Visual Communications and Image Processing, Proceedings of SPIE 6822, San Jose, Calif, USA, January 2008.
[20] H. Pishro-Nik and F. Fekri, "Results on punctured low-density parity-check codes and improved iterative decoding techniques," IEEE Transactions on Information Theory, vol. 53, no. 2, pp. 599–614, 2007.
[21] X.-Y. Hu, E. Eleftheriou, and D. M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Transactions on Information Theory, vol. 51, no. 1, pp. 386–398, 2005.
[22] A. D. Liveris, Z. Xiong, and C. N. Georghiades, "Compression of binary sources with side information at the decoder using LDPC codes," IEEE Communications Letters, vol. 6, no. 10, pp. 440–442, 2002.
[23] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Transactions on Information Theory, vol. 27, no. 5, pp. 533–547, 1981.
[24] "Hyperspectral images of natural scenes 2004," http://personal-pages.manchester.ac.uk/staff/david.foster/Hyperspectral images of natural scenes 04.html.
[25] "LTHC: LdpcOpt," 2001, http://ipgdemos.epfl.ch/ldpcopt.
[26] M. Vaezi and F. Labeau, "Improved modeling of the correlation between continuous-valued sources in LDPC-based DSC," in Proceedings of the IEEE 46th Asilomar Conference on Signals, Systems and Computers, pp. 1659–1663, Pacific Grove, Calif, USA, November 2012.
