

Multi-Level Compress and Forward Coding for Half-Duplex Relays

Jaweria Amjad, Momin Uppal, and Saad Qaisar

Dept. of Electrical Engineering, NUST School of Electrical Engineering & Computer Sciences, Pakistan
Dept. of Electrical Engineering, LUMS School of Science and Engineering, Pakistan
Email: {jaweria.amjad, saad.qaiser}@seecs.edu.pk, [email protected]

Abstract: This paper presents a multi-level compress and forward coding scheme for a three-node relay network in which all transmissions are constrained to be from an M-ary PAM constellation. The proposed framework employs a uniform scalar quantizer followed by Slepian-Wolf coding at the relay. We first obtain a performance benchmark for the proposed scheme by deriving the corresponding information theoretic achievable rate. A practical coding scheme involving multi-level codes is then discussed. At the source node, we use multi-level low-density parity-check codes for error protection. At the relay node, we propose a multi-level distributed joint source-channel coding scheme that uses irregular repeat-accumulate codes, the rates of which are carefully chosen using the chain rule of entropy. For a block length of 2 × 10^5 symbols, the proposed scheme operates within 0.56 and 0.63 dB of the theoretical limits at transmission rates of 1.0 and 1.5 bits/sample, respectively.

    I. INTRODUCTION

CODING schemes for the three-node relay network can be broadly classified into the Decode and Forward (DF), Amplify and Forward (AF), and Compress and Forward (CF) categories [1]. In DF, the relay decodes the source message before forwarding it to the destination. Its performance is dictated by the quality of the source-to-relay channel; a decoding failure at the relay results in poor overall performance. On the other hand, in AF and CF, the relay does not attempt to decode. In AF, the signal received at the relay is simply amplified before being transmitted to the destination, whereas in CF, the relay compresses the signal that it receives by exploiting its correlation with the signal received at the destination. The compressed signal is then transmitted to the destination. Unlike DF, CF always outperforms direct transmission regardless of the source-to-relay channel quality, and has been shown to have optimal asymptotic performance for cooperative ad-hoc networks [2].

There are only a few existing works on practical CF coding schemes in the literature. A CF coding scheme over a half-duplex Gaussian relay channel was first proposed in [3], [4]. A fixed-rate CF relaying scheme using low-density parity-check (LDPC) and irregular repeat-accumulate (IRA) codes with binary modulation for the half-duplex Gaussian relay channel was presented in [5]. It was shown that for practical purposes, a binary quantizer at the relay was sufficient to obtain near-optimal performance. The scheme was later extended to fading relay channels under a rateless coded setting in [6]. A CF coding scheme that implemented Slepian-Wolf (SW) coding [7] at the relay using polar codes was presented in [8]. A CF strategy named Quantize Map and Forward was proposed in [9], in which the quantized indices were mapped onto random Gaussian codewords. A practical implementation of this scheme using LDPC codes was presented in [10]. A variation of [9] was proposed in [11], in which the authors used vector quantizers for compression instead of scalar quantization. To the best of our knowledge, none of the existing works discusses a multi-level scheme for relays.

In this paper, we propose a multi-level CF (ML-CF) coding scheme for a half-duplex Gaussian relay channel where all transmissions (from both the source and the relay) are constrained to be from an M-ary PAM constellation. The scheme utilizes uniform scalar quantization (USQ) followed by SW coding for compression of the quantization indices at the relay. We first present the information theoretic achievable rates under the M-ary constellation constraint, which serve as a performance benchmark for our subsequent code designs. With the help of numerical results, we demonstrate that a quantizer with M levels suffices for an M-ary PAM constellation. Since the quantization indices need to be compressed, as well as transmitted over a noisy relay-to-destination link, we propose a multi-level distributed joint source-channel coding (ML-DJSCC) strategy, implemented with the help of IRA codes, to provide joint compression and error protection to the quantization indices. The rates of the individual IRA codes are carefully chosen using the chain rule of entropy. For transmissions from the source, we employ multi-level LDPC codes to provide error protection. The degree distributions for the IRA and the LDPC codes are optimized using the EXIT chart strategy [12] and the Gaussian assumption [13]. Simulations using optimized codes with a block length of 2 × 10^5 symbols show a performance gap of only 0.56 and 0.63 dB from the theoretical limits at transmission rates of 1.0 and 1.5 bits/sample (b/s), respectively.

II. SYSTEM MODEL

We consider a three-node relay network consisting of a source, relay, and destination node. Let dsd, dsr, and drd be the source-destination, source-relay, and relay-destination link distances, respectively. Throughout the paper, we assume that the distance dsd is fixed while the other two are variable. The exact value at which dsd is fixed is not important since the same channel coefficients can be obtained by scaling the distances appropriately.


However, for expositional clarity, we assume that dsd = 1 m. The corresponding (real) channels suffer path loss, with the channel gains given as csd = 1, csr = (dsd/dsr)^{3/2}, and crd = (dsd/drd)^{3/2}.

We assume the presence of global channel state information, i.e., each node is assumed to be aware of all three channel coefficients. All channels are assumed to have additive white Gaussian noise (AWGN), the variance of which we assume, without loss of generality, to be unity. The transmissions from the source as well as the relay are assumed to be modulated onto an M-ary PAM constellation.¹ Both the source and the relay are assumed to have an average power constraint of Ps and Pr, respectively. Owing to the half-duplex nature of the relay node, the total transmission period of N symbols is divided into the relay receive period (denoted as T1) of length αN symbols and the relay transmit period (denoted as T2) of length βN symbols, where α ∈ [0, 1] is the half-duplex time-sharing constant and β = 1 − α. Throughout the rest of the paper, we will denote sequences with boldface and the associated random variables with italic letters. All logarithms used in the paper are to the base 2.

¹ Since an M-ary PAM constellation represents a quadrature component of an M²-QAM constellation, the results presented in this paper can be easily extended to the case of QAM as well.
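To make the notation concrete, the following Python sketch sets up the geometry, channel gains, and half-duplex split described above. The distances, powers, and time-sharing value are illustrative placeholders, not values prescribed by the text.

```python
import numpy as np

# Minimal bookkeeping for the three-node relay model (illustrative values).
d_sd = 1.0                      # source-destination distance, fixed at 1 m
d_sr, d_rd = 0.95, 0.2          # example relay geometry (as used in Fig. 1)

# Amplitude gains under the (d_sd/d)^{3/2} path-loss model; noise variance is 1.
c_sd = 1.0
c_sr = (d_sd / d_sr) ** 1.5
c_rd = (d_sd / d_rd) ** 1.5

N = 200_000                     # total block length in symbols
alpha = 0.57                    # half-duplex time-sharing constant (placeholder)
beta = 1.0 - alpha
N1 = int(round(alpha * N))      # relay-receive period T1 (alpha*N symbols)
N2 = N - N1                     # relay-transmit period T2 (beta*N symbols)

# Average power budgets in linear scale; the source power is later split so
# that alpha*Ps1 + beta*Ps2 <= Ps, as discussed in Section III.
Ps = 10 ** (4.0 / 10)           # example source power budget, 4 dB
Pr = 10 ** (-6.0 / 10)          # example relay power budget, -6 dB
```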

III. CF RELAYING AND PERFORMANCE BOUNDS

The source partitions its message into m = log M streams and encodes each stream with a separate length-N LDPC code. The individual rates of these LDPC codes are denoted as R1, . . . , Rm, with the overall transmission rate in b/s given as R = Σ_{i=1}^{m} Ri. The LDPC codewords are then modulated onto an M-ary PAM constellation to obtain N symbols. The first αN symbols, denoted by Xs1, are transmitted during T1 subject to a power constraint E[Xs1²] ≤ Ps1, where Xs1 is the random variable associated with the independent and identically distributed (i.i.d.) sequence Xs1. The remaining βN symbols are transmitted during T2 and satisfy the constraint E[Xs2²] ≤ Ps2.

Due to the average power constraint at the source, we have αPs1 + βPs2 ≤ Ps. The length-αN signal sequences received at the relay and the destination during T1 are given as

Yr = csr Xs1 + Zr   and   Yd1 = csd Xs1 + Zd1,

respectively, where Zr and Zd1 are i.i.d. zero-mean, unit-variance Gaussian noise sequences.

The relay quantizes Yr using an L-level USQ to obtain a sequence W of quantization indices. Let k0, . . . , kL be the quantization region boundaries. If q is the quantization step size, we have k0 = −∞, ki = (i − L/2) q for i = 1, . . . , L − 1, and kL = +∞. The quantizer output W = w ∈ {0, . . . , L − 1} if the received signal Yr ∈ {x : x ∈ ℝ, kw ≤ x < kw+1}.
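A minimal sketch of the L-level USQ just described, with boundaries ki = (i − L/2) q; the step size q and the input sequence used here are illustrative placeholders.

```python
import numpy as np

def usq_indices(y_r, L, q):
    """L-level uniform scalar quantizer with boundaries k_i = (i - L/2) * q.

    Returns indices w in {0, ..., L-1} such that k_w <= y < k_{w+1},
    with k_0 = -inf and k_L = +inf.
    """
    # Interior boundaries k_1, ..., k_{L-1}.
    k = (np.arange(1, L) - L / 2.0) * q
    # searchsorted with side="right" counts the boundaries at or below each
    # sample, which is exactly the quantization index defined above.
    return np.searchsorted(k, np.asarray(y_r), side="right")

# Example: quantize a noisy received sequence with a 4-level USQ.
rng = np.random.default_rng(0)
w = usq_indices(rng.normal(size=10), L=4, q=1.0)
```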

The quantization indices are SW coded with Yd1 as the decoder side-information and provided error protection to form a length-βN codeword sequence Xr drawn from an M-ary PAM constellation subject to a power constraint E[Xr²] ≤ Pr/β. Note that since the relay does not transmit anything during T1, normalizing the power constraint by β makes sure that the average power consumption at the relay is Pr. The codeword Xr is then transmitted to the destination during T2, at the same time as Xs2 is transmitted. The destination receives the superposition of both signals:

Yd2 = csd Xs2 + crd Xr + Zd2,

where Zd2 once again is an i.i.d. zero-mean, unit-variance Gaussian noise sequence.

The destination first attempts to recover the quantization indices W by treating the transmission Xs2 from the source as interference. It can do so if [5], [7]

αH(W | Yd1) ≤ βI(Yd2; Xr).   (1)

The information term on the right-hand side of (1) is the constrained capacity of an AWGN channel with an M-ary input and an M-ary interference, and can be computed numerically as (assuming the noise to be of unit variance)

C(S, I) = m − (1/M) Σ_{i=1}^{M} ∫ ψi(y) log( Σ_{j=1}^{M} ψj(y) / ψi(y) ) dy   (2)

with

ψi(y) = (1/M) Σ_{k=1}^{M} fg(y − xi√S − xk√I).

Here xi represents the i-th point of the unit-energy M-ary PAM constellation, S and I are the received source and interferer powers, respectively, and fg(z) is the zero-mean, unit-variance Gaussian probability density function evaluated at z. When I = 0, (2) reduces to the constrained capacity of an AWGN channel with an M-ary input.
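The following sketch evaluates (2) by direct numerical integration over a finite grid. It is a minimal illustration of the formula, not the authors' implementation.

```python
import numpy as np

def pam_points(M):
    """Unit-energy M-ary PAM constellation points."""
    x = 2.0 * np.arange(M) - (M - 1)
    return x / np.sqrt(np.mean(x ** 2))

def constrained_capacity(S, I, M, n_grid=4001):
    """Numerically evaluate C(S, I) in (2): the capacity of a unit-variance
    AWGN channel with an equiprobable M-PAM input of power S and an
    independent, equiprobable M-PAM interferer of power I."""
    m = np.log2(M)
    x = pam_points(M)
    span = 8.0 * np.sqrt(1.0 + S + I)
    y = np.linspace(-span, span, n_grid)
    dy = y[1] - y[0]
    fg = lambda z: np.exp(-z ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    # psi_i(y) = (1/M) * sum_k fg(y - x_i*sqrt(S) - x_k*sqrt(I))
    psi = np.array([np.mean([fg(y - xi * np.sqrt(S) - xk * np.sqrt(I))
                             for xk in x], axis=0) for xi in x])
    ratio = psi.sum(axis=0, keepdims=True) / np.maximum(psi, 1e-300)
    integrals = (psi * np.log2(ratio)).sum(axis=1) * dy   # one integral per i
    return m - integrals.mean()                           # mean gives (1/M)*sum_i

# With I = 0 this reduces to the usual constrained capacity of M-PAM over AWGN.
print(constrained_capacity(S=10.0, I=0.0, M=4))
```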

After recovering W (and consequently Xr), the destination cancels the interference caused by Xr and attempts to recover the source message jointly from Yd1, Yd2, and W. The destination is capable of recovering the source message if the transmission rate satisfies

R ≤ αI(W, Yd1; Xs1) + βI(Xs2; Yd2 | Xr).   (3)

The information terms in (3) can be evaluated numerically

using (2) and the conditional probability density function f(yr | yd1); we leave out the details because of space limitations. It should be noted that the achievable rate expression in (3) corresponds to a particular power allocation Ps1, Ps2, the quantization parameters L and q, and the half-duplexing parameter α that satisfy the constraint (1) and the average power constraint αPs1 + βPs2 ≤ Ps. Thus, one needs to search over these parameters to maximize the achievable rate under the given constraints.
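This search can be organized as a simple grid sweep, as sketched below. The two callables achievable_rate() and relay_constraint_ok() are hypothetical stand-ins for numerical evaluations of (3) and (1); they are not defined in the paper.

```python
import numpy as np

def search_cf_parameters(Ps, Pr, M, L, achievable_rate, relay_constraint_ok):
    """Grid search over alpha, the source power split, and the quantizer step
    size q; returns the best rate found and the parameters that achieve it."""
    best_rate, best_params = 0.0, None
    for alpha in np.linspace(0.05, 0.95, 19):
        beta = 1.0 - alpha
        for rho in np.linspace(0.0, 1.0, 21):      # fraction of Ps spent in T1
            Ps1 = rho * Ps / alpha                 # so that alpha*Ps1 + beta*Ps2 = Ps
            Ps2 = (1.0 - rho) * Ps / beta
            for q in np.linspace(0.1, 4.0, 40):    # quantizer step size
                if not relay_constraint_ok(alpha, Ps1, Ps2, q, Pr, M, L):
                    continue                       # constraint (1) violated
                R = achievable_rate(alpha, Ps1, Ps2, q, Pr, M, L)
                if R > best_rate:
                    best_rate = R
                    best_params = dict(alpha=alpha, Ps1=Ps1, Ps2=Ps2, q=q)
    return best_rate, best_params
```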

    A. Numerical Results and Observations

In Fig. 1, we present the achievable rates of the CF strategy versus Ps, where all transmissions from the source as well as the relay are modulated onto an M = 4-ary PAM constellation. In generating the results, we assume that L = M = 4, and numerically search over α, q, and the power allocation between Ps1 and Ps2 that yield the maximum achievable rate in (3) while satisfying the constraint (1) (in addition to the constraint αPs1 + βPs2 ≤ Ps). Some observations that can be made from these numerical results are as follows.


At an overall transmission rate of 1.0 b/s, the CF strategy outperforms DF by a margin of 1.15 dB, whereas the gain over direct transmission is approximately 1.2 dB. Note that in order to have a fair comparison, we have assumed that for the direct transmission case, the source transmits with a power equal to Ps + Pr.

It was shown in [5] that a binary quantizer (L = 2) sufficed for the case when the transmissions from the source and the relay were modulated onto a binary constellation (M = 2), i.e., no significant gains were observed with L > 2. The results in Fig. 1, however, indicate that a binary quantizer does not suffice for higher-order constellations. For example, for M = 4, we observe that in order to achieve a transmission rate of 1.5 b/s, employing a binary quantizer at the relay requires 0.89 dB more transmission power than the case with L = 4. At the same time, our numerical results (not shown in the figure) seem to indicate that going beyond L = 4 does not yield any noticeable gains.

A similar observation is made for the case of M = 8. As shown in Fig. 2, achieving a transmission rate of 2.0 b/s with a binary quantizer requires 1.2 dB more power at the source than the case with L = 4; the L = 4 quantizer, on the other hand, requires 0.36 dB more power than the case with L = 8. We have also found numerically that going beyond L = 8 in this case does not give any further noticeable gains.

Fig. 1. CF achievable rates versus the source power Ps for M = 4 and their comparison with DF and direct transmission rates. The other parameters are set to Pr = −6 dB, drd = 0.2 m, and dsr = 0.95 m.

Based on these observations, as well as those from [5], we can reasonably conclude that an M-level quantizer should suffice for the case when the transmissions from the source and the relay are modulated onto an M-ary PAM constellation. An intuitive explanation behind this notion is the fact that the number of bits per quantization index for an M-level quantizer is log M, the same as the maximum channel capacity (in b/s) on the relay-to-destination link over which the quantization indices need to be transmitted. Thus, it should be sufficient to employ an M-level quantizer when the relay employs an M-ary PAM constellation.

Fig. 2. A comparison of CF achievable rates for several L. The parameters are set to M = 8, Pr = −3 dB, drd = 0.15 m, and dsr = 0.95 m.

IV. PRACTICAL ML-CF RELAYING SYSTEM

Whereas in the past, researchers focused on minimizing Euclidean distance and maximizing asymptotic gains, recent research in the domain of coded modulation has proved that schemes like multi-level coding (MLC) [14] and bit-interleaved coded modulation (BICM) [15] can achieve capacity while providing both power and bandwidth efficiency. In particular, multi-stage decoding (MSD), where each code/level is decoded individually instead of jointly, has been shown to approach channel capacity with limited complexity [16]. Thus, we implement the practical ML-CF relaying scheme with the help of multi-level LDPC codes to encode the message at the source, and multi-level IRA codes to provide joint compression and error protection at the relay. In the following subsections, we describe our ML-CF coding scheme in detail.

Fig. 3. Encoding at the source node using m LDPC codes. The P2S blocks indicate parallel-to-serial conversion.

    A. Message Encoding

A block diagram of the multi-level message encoder at the source node is shown in Fig. 3. The message to be transmitted to the destination is partitioned into m bit-streams. Each bit-stream is encoded with a length-N LDPC code of rate Ri, i = 1, . . . , m, so that the length of the i-th message bit-stream is N Ri. The sum of the individual code rates gives the overall transmission rate in b/s, i.e., Σ_{i=1}^{m} Ri = R. The resulting codewords are serially fed m bits at a time into a unit-energy M-PAM modulator, i.e., the k-th bit of all codewords forms the k-th symbol of the PAM sequence, k = 1, . . . , N.


Fig. 4. Multi-level DJSCC encoding at the relay using l IRA codes.

The first αN symbols of this PAM sequence are scaled by √Ps1 to form the sequence Xs1, which satisfies the average power constraint of Ps1 and is transmitted to the relay and the destination during T1. The remaining βN symbols are scaled by √Ps2 to form the sequence Xs2, which satisfies an average power constraint of Ps2 and is transmitted during T2.
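A sketch of the bit-to-symbol mapping and power scaling just described is given below. The LDPC encoding itself is assumed to have been performed already, and the natural binary labeling is an illustrative assumption; the text does not prescribe a particular labeling.

```python
import numpy as np

def map_codewords_to_pam(codewords, alpha, Ps1, Ps2):
    """Map m length-N binary codewords to an N-symbol M-PAM sequence and
    split it into the T1/T2 transmissions (a sketch of Fig. 3).

    codewords: (m, N) array of 0/1 bits; the k-th column forms the k-th symbol.
    """
    m, N = codewords.shape
    M = 2 ** m
    # Natural binary label -> unit-energy M-PAM point (illustrative labeling).
    labels = (codewords * (2 ** np.arange(m))[:, None]).sum(axis=0)
    pts = 2.0 * np.arange(M) - (M - 1)
    pts = pts / np.sqrt(np.mean(pts ** 2))
    symbols = pts[labels]
    # Scale and split into the relay-receive and relay-transmit periods.
    N1 = int(round(alpha * N))
    x_s1 = np.sqrt(Ps1) * symbols[:N1]      # transmitted during T1
    x_s2 = np.sqrt(Ps2) * symbols[N1:]      # transmitted during T2
    return x_s1, x_s2
```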

    B. Multilevel Distributed Joint Source Channel Coding

As mentioned earlier, the relay quantizes the received sequence Yr using an L-level quantizer, the quantization step size of which is chosen to maximize the overall transmission rate in (3). The quantizer outputs a sequence W consisting of αN quantization indices, each of length l = log L bits. The sequence W now needs to be compressed using SW coding with Yd1 as the decoder side-information.

At the same time, channel coding is also required to protect its transmission against noise on the relay-to-destination link. Instead of providing separate SW and channel coding, we resort to distributed joint source-channel coding (DJSCC) [5], in which SW coding and error protection are implemented in a joint manner. The challenge, however, is that for L > 2, each quantization index is composed of l > 1 bits, as opposed to the binary case in [5] where the quantization indices were one bit each. Taking this into account, we propose to use multiple binary IRA codes to implement multi-level DJSCC, the details of which are given below.

Encoding: We split the quantization index sequence W into l bit-plane sequences W1, . . . , Wl, each of length αN; W1 corresponds to a sequence comprising the least-significant bits of the original quantization sequence, whereas Wl corresponds to the most-significant bits. One possibility could have been to encode each one of the l quantization bit-planes with m IRA codes, the parity bits of which are then mapped to an M-PAM constellation (similar to Fig. 3). However, this approach requires the use of l·m IRA codes, which becomes prohibitive when both l and m are large.

Fig. 5. DJSCC decoder for the i-th quantization bit-plane.

Instead, we use a single IRA code for each bit-plane, as shown in Fig. 4. Bit-plane i, i = 1, . . . , l, is encoded with a code that has m θi N parity bits (the appropriate choice of the θi's is discussed later). These parity bits are then mapped m bits at a time to a length-θi N symbol sequence Xri (with an average power of Pr/β). These symbol sequences are transmitted to the destination one after the other, starting with Xr1 and ending with Xrl. Since the total number of transmissions from the relay is βN symbols, we have the constraint Σ_{i=1}^{l} θi = β.
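A sketch of the bit-plane decomposition and the resulting parity lengths; the θi values below are placeholders for the allocation rule discussed shortly.

```python
import numpy as np

def bit_planes(w, L):
    """Split quantization indices w in {0,...,L-1} into l = log2(L) binary
    bit-plane sequences W1,...,Wl, least-significant bit first."""
    w = np.asarray(w)
    l = int(np.log2(L))
    return [((w >> i) & 1).astype(np.uint8) for i in range(l)]

# Each bit-plane i is protected by one IRA code with m*theta_i*N parity bits.
N, alpha, m = 200_000, 0.57, 2          # illustrative values
theta = [0.26, 0.17]                    # placeholder theta_1, theta_2
parity_lengths = [int(m * t * N) for t in theta]
```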

Decoding: Note that the systematic bits of the IRA code are not transmitted over the physical channel. However, since Yd1 at the destination is correlated with W, one can think of the systematic bits as being transmitted over a virtual correlation channel with Yd1 as the output. The decoding of the bit-planes is done in stages, starting with W1 and ending with Wl. Thus, when attempting to decode Wi, the calculation of the log-likelihood ratios (LLRs) corresponding to the systematic bits uses not only Yd1 but also Ŵ1, . . . , Ŵ_{i−1}, the decoded versions of the respective quantization index bit-planes from the previous stages, as shown in Fig. 5. This decoding strategy follows directly from the chain rule of entropy, using which the information theoretic constraint (1) necessary for recovery of the quantization index sequence W can be rewritten as

α Σ_{i=1}^{l} H(Wi | Yd1, W1, . . . , W_{i−1}) ≤ Σ_{i=1}^{l} θi I(Xr; Yd2).   (4)

For each bit-plane i, i = 1, . . . , l, we choose θi to satisfy the individual constraint

αH(Wi | Yd1, W1, . . . , W_{i−1}) ≤ θi I(Xr; Yd2).   (5)

This makes sure that the overall conditional entropy of W given Yd1 satisfies (4). Thus, if the codes used are capacity achieving, the proposed methodology should guarantee that the quantization index sequence W is recoverable.
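In other words, each θi follows directly from (5) once the conditional entropies and I(Xr; Yd2) are available from the analysis of Section III. The numbers in the sketch below are placeholders standing in for that analysis.

```python
# Rate allocation across bit-planes via (5).  The entropy and mutual
# information values are illustrative placeholders, not results from the paper.
alpha = 0.57
H_cond = [0.55, 0.35]     # H(W_i | Yd1, W_1,...,W_{i-1}) for i = 1, 2 (bits)
I_rd = 1.7                # I(Xr; Yd2) in bits per relay symbol

theta = [alpha * h / I_rd for h in H_cond]      # smallest theta_i satisfying (5)
# Feasibility check: the allocations must fit in the relay transmit period.
assert sum(theta) <= 1.0 - alpha
```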

Noting that multiple parity bits of an IRA code are mapped to the same modulated symbol, we employ an iterative soft-demodulation strategy [15]. The iterative strategy for recovering the quantization indices is summarized below.


1: Repeat for all bit-planes i = 1, . . . , l.
2: Initialize the extrinsic LLRs (from the parity nodes to the soft demodulator) Le = 0.
3: Use Yd1 and Ŵ1, . . . , Ŵ_{i−1} to calculate the a-priori LLRs for the systematic nodes.
4: While (stopping criterion not met)²
5: (Soft demodulation) Use Yd2 and Le to calculate the a-priori LLRs for the parity nodes.
6: Run one iteration of the belief propagation (BP) algorithm on the IRA decoding graph (Le is updated).
7: end while
8: Obtain Ŵi by hard-thresholding the a-posteriori LLRs from the systematic nodes.

² In our simulations, we stop when either the maximum number of iterations has been exceeded or the correct codeword has been decoded.

After decoding all bit-planes, the DJSCC decoder passes the estimates Ŵ1, . . . , Ŵl to the message decoder.
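The staged decoding loop can be organized as in the following skeleton. The code objects and the systematic_llr / soft_demod helpers are hypothetical interfaces introduced purely for illustration, and the sign convention assumed is LLR = log p(bit = 0)/p(bit = 1).

```python
import numpy as np

def djscc_decode(y_d1, y_d2, codes, systematic_llr, soft_demod, max_iters=200):
    """Stage-wise recovery of the l quantization bit-planes.

    codes[i] wraps the i-th IRA decoding graph and exposes one belief-
    propagation iteration; systematic_llr() and soft_demod() are hypothetical
    helpers for the virtual-correlation-channel LLRs and the iterative soft
    demodulator, respectively.
    """
    decoded = []                                        # hat{W}_1, ..., hat{W}_l
    for i, code in enumerate(codes):
        Le = np.zeros(code.num_parity_bits)             # extrinsic LLRs to demod
        La_sys = systematic_llr(y_d1, decoded)          # uses Yd1 and prior planes
        for _ in range(max_iters):
            La_par = soft_demod(y_d2, Le, i)            # a-priori LLRs, parity nodes
            Le, app_sys, done = code.bp_iteration(La_sys, La_par)
            if done:                                    # stopping criterion met
                break
        decoded.append((app_sys < 0).astype(np.uint8))  # hard threshold
    return decoded
```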

    C. Message Decoding

The destination uses MSD on the m LDPC decoding graphs to recover the corresponding codewords, and hence the original message sequence. We have two types of variable nodes at each stage of the LDPC decoder. The first type corresponds to the symbols received during T1. The a-priori LLRs for these nodes are calculated using Yd1, Ŵ1, . . . , Ŵl, and the decoded codewords corresponding to the previous stages. The second type of variable nodes are those which correspond to T2. For these nodes, Yd2 and the decoded codewords corresponding to the previous stages are used to evaluate the a-priori LLRs.

    D. Code Design

To optimize the degree distributions of the LDPC and IRA codes, we first use the information theoretic analysis of Section III to evaluate (for a given relay position and power Pr) the optimum parameters α, q, Ps1, and Ps2 that are required to achieve a target transmission rate of R b/s. The analysis also yields the target rates Ri, i = 1, . . . , m, for the individual LDPC codes, as well as the θi, i = 1, . . . , l, that govern the target rates of the individual IRA codes.

The degree distributions for the multi-level LDPC and IRA codes are designed using the EXIT chart strategy [17] with the Gaussian approximation [13]. The approach we use is a direct consequence of the chain rule of mutual information/entropy, i.e., we assume perfect knowledge of the prior bit-planes and no information about the subsequent ones. This simplifies the design process in the sense that the individual codes can be designed one by one, in a serial fashion. For each LDPC level, we take into account the fact that there are two types of variable nodes with different SNR characteristics: the first type corresponding to T1 and the other to T2. The design process is similar to the one in [5], so we do not include the details in this paper. In the following, we briefly explain how the degree distributions of a single level of IRA codes are optimized; the procedure has to be repeated for all l levels. Each IRA code has two types of variable nodes, the systematic and the parity nodes.

Fig. 6. Information flow for IRA codes.

The parity nodes are divided into m groups; corresponding bits from each group map to the same M-ary symbol. As shown in Fig. 5, each parity node is connected to two consecutive check nodes and vice versa, with as many parity nodes as check nodes. Thus, the check nodes can also be divided into m types. We fix the check nodes within group i, i = 1, . . . , m, to have a regular degree of di, and then design the systematic node degree distribution λ(x) = Σ_{d=1}^{D} λ_{j,d} x^{d−1} (the subscript j indexes the IRA level), where D is the maximum systematic node degree. Since each group contains an equal number of check nodes, the overall check node degree distribution is given as ρ(x) = Σ_{i=1}^{m} γi x^{di−1} with γi = di (Σ_{j=1}^{m} dj)^{−1}. Let Isc be the a-priori information from the systematic nodes to the check nodes, as shown in Fig. 6.

If I^i_pc is the information flow from the parity nodes to the check nodes in group i, the information from the check nodes to the parity nodes can be evaluated using the approximate check-to-bit-node duality [12] as

I^i_cp ≈ 1 − J( di J^{−1}(1 − Isc) + J^{−1}(1 − I^i_pc) ),   (6)

where J(μ) is the information that a log-likelihood ratio drawn from a Gaussian distribution of mean μ and variance 2μ conveys about the bit it represents. The information from the parity nodes in group i to the soft demodulator is given as I^i_pd = J( 2 J^{−1}(I^i_cp) ).

For informations I^j_pd, j = 1, . . . , m, and given channel conditions, we evaluate the EXIT function of the soft demodulator using Monte-Carlo simulations [18] to obtain I^i_dp, and consequently

I^i_pc = J( J^{−1}(I^i_cp) + J^{−1}(I^i_dp) )   (7)

for all i = 1, . . . , m. This completes one iteration of decoding from the check nodes to the parity nodes to the soft demodulator and back to the parity and check nodes. In order to simplify the optimization process, we assume that the iterations on this side of the decoding graph (the right-hand side of the check nodes in Fig. 6) continue until a fixed point is reached. In other words, for a given Isc, we initially assume the I^i_pc to be zero, and then continue the iterations specified by (6) and (7) until the point where all of I^1_pc, . . . , I^m_pc converge to fixed values. If Ī^i_pc denotes the fixed point, the check-node to systematic-node information is given as

Ics ≈ 1 − Σ_{i=1}^{m} γi J( (di − 1) J^{−1}(1 − Isc) + 2 J^{−1}(1 − Ī^i_pc) ).   (8)
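The J(·) function and the fixed-point iteration of (6) and (7) can be sketched as follows. J is tabulated by Monte-Carlo simulation and inverted by interpolation, and the soft-demodulator EXIT function is passed in as a placeholder callable, since in the paper it is itself obtained by Monte-Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def _J_mc(mu, n=100_000):
    """Monte-Carlo estimate of J(mu): the information carried by an LLR drawn
    from a Gaussian with mean mu and variance 2*mu (consistent Gaussian)."""
    L = rng.normal(mu, np.sqrt(2.0 * mu), size=n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

# Tabulate once, enforce monotonicity, and interpolate for J and its inverse.
_mu = np.linspace(1e-4, 60.0, 800)
_I = np.maximum.accumulate(np.array([_J_mc(mu_k) for mu_k in _mu]))

def J(mu):
    return np.interp(mu, _mu, _I)

def J_inv(I):
    return np.interp(I, _I, _mu)

def fixed_point_Ipc(I_sc, d, demod_exit, tol=1e-5, max_iter=500):
    """Iterate (6) and (7) for a given I_sc until the parity-to-check
    informations I_pc^1..I_pc^m converge.  d holds the regular check degrees
    d_1..d_m; demod_exit is a placeholder for the soft-demodulator EXIT
    function, mapping the parity-to-demodulator informations to I_dp^i."""
    d = np.asarray(d, dtype=float)
    I_pc = np.zeros_like(d)
    for _ in range(max_iter):
        I_cp = 1.0 - J(d * J_inv(1.0 - I_sc) + J_inv(1.0 - I_pc))        # (6)
        I_pd = J(2.0 * J_inv(I_cp))                 # parity -> soft demodulator
        I_dp = demod_exit(I_pd)                     # soft demodulator -> parity
        I_pc_new = J(J_inv(I_cp) + J_inv(I_dp))                          # (7)
        if np.max(np.abs(I_pc_new - I_pc)) < tol:
            break
        I_pc = I_pc_new
    return I_pc_new
```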


For convergence of the IRA code at the j-th level, we need to satisfy the constraint

Σ_{d=1}^{D} λ_{j,d} J( (d − 1) J^{−1}(Ics) + J^{−1}(Ich) ) > Isc   (9)

for all Isc ∈ [0, 1), where Ich is the information on the systematic nodes obtained from the (virtual correlation) channel. When designing the j-th level IRA code, we have Ich = I(Wj; Yd1 | W1, . . . , W_{j−1}). Note that Ics in (9) is in fact a function of Isc, but we omit that dependence for notational convenience. In addition to (9), we have the trivial constraints Σ_{d=1}^{D} λ_{j,d} = 1 and λ_{j,d} ≥ 0 for all d = 1, . . . , D. Under these constraints, the IRA code rate needs to be maximized, which is equivalent to maximizing Σ_{d=1}^{D} λ_{j,d}/d. By discretizing Isc on the interval [0, 1), the optimization can be easily solved using linear programming.
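A sketch of this linear program using scipy is given below. The check-to-systematic curve Ics(·) obtained from (8) via the fixed-point iteration above and the channel information Ich are assumed to be precomputed, and J / J_inv are the helpers tabulated earlier. This is a sketch under those assumptions, not the authors' optimizer.

```python
import numpy as np
from scipy.optimize import linprog

def design_lambda(Ics_curve, Ich, D, J, J_inv, eps=1e-3, n_points=100):
    """LP design of the systematic-node degree distribution lambda_{j,1..D}
    for one IRA level: maximize sum_d lambda_d/d subject to (9),
    sum_d lambda_d = 1, and lambda_d >= 0.  Ics_curve maps I_sc values to the
    check-to-systematic information of (8)."""
    t = np.linspace(0.0, 0.999, n_points)            # discretized I_sc
    d = np.arange(1, D + 1)
    # f[k, j] = J((d_j - 1) * J^{-1}(Ics(t_k)) + J^{-1}(Ich)): the coefficient
    # of lambda_{d_j} on the left-hand side of (9) at I_sc = t_k.
    f = J((d - 1)[None, :] * J_inv(Ics_curve(t))[:, None] + J_inv(Ich))
    c = -1.0 / d                                     # maximize sum_d lambda_d / d
    A_ub, b_ub = -f, -(t + eps)                      # (9): f @ lambda >= t + eps
    A_eq, b_eq = np.ones((1, D)), np.array([1.0])    # sum_d lambda_d = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * D)
    return (d, res.x) if res.success else None
```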

V. SIMULATION RESULTS

In this section, we present simulation results for a 4-PAM CF relaying system with transmission rates of 1.0 and 1.5 b/s. The setup we consider corresponds to drd = 0.2 m, dsr = 0.95 m, and Pr = −6 dB. We list the optimized information theoretic parameters required to achieve the target transmission rates in Table I.

TABLE I
OPTIMIZED PARAMETERS FOR 4-PAM CF RELAYING WITH dsr = 0.95 m, drd = 0.2 m, AND Pr = −6 dB. ALL POWERS ARE SPECIFIED IN dB AND RATES IN b/s.

Rate | α    | θ1   | θ2   | Ps1  | Ps2  | q    | R1   | R2
1.0  | 0.57 | 0.26 | 0.17 | 4.63 | 2.61 | 1.55 | 0.63 | 0.83
1.5  | 0.65 | 0.28 | 0.09 | 8.65 | 7.25 | 2.44 | 0.37 | 0.67

Note that if the above information theoretic parameters are used to design the LDPC and IRA codes, the practical coding losses would imply that the rates of the optimized codes are less than those required. Therefore, we keep α, Ps2, θ1, and θ2 fixed at their theoretical values and increase Ps1 gradually until codes of the required rates are achieved. The power Ps2 is not increased from its theoretical minimum so as not to increase the interference that Xs2 causes while decoding W [5].

We simulate the optimized degree distributions at a finite block length of N = 2 × 10^5 symbols. In this case too, we fix all parameters except Ps1 to their information theoretic values and gradually increase this power until the desired bit-error rate (BER) of 10^{−5} is achieved. Simulation results indicate that the overall power Ps required to achieve the target BER at a transmission rate of 1.0 b/s is only 0.56 dB more than the theoretical limit. On the other hand, at a transmission rate of 1.5 b/s, the proposed ML-CF scheme suffers a loss of only 0.63 dB from the theoretical bound.
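The power ramp used in these simulations amounts to a small loop of the following form; simulate_ber() is a hypothetical stand-in for an end-to-end simulation of the ML-CF chain at a given Ps1.

```python
def find_required_ps1(ps1_theory_db, simulate_ber, target_ber=1e-5, step_db=0.05):
    """Increase Ps1 (in dB) from its information-theoretic value until the
    simulated bit-error rate falls below the target; all other parameters are
    held fixed, as described above."""
    ps1_db = ps1_theory_db
    while simulate_ber(ps1_db) > target_ber:
        ps1_db += step_db
    return ps1_db
```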

VI. CONCLUSIONS

We have presented an ML-CF strategy for a half-duplex Gaussian relay network where the transmissions from the source and the relay are drawn from an M-ary PAM constellation. The compression of the signal received at the relay is achieved by quantizing it before applying SW compression. Numerical evaluation of the information theoretic analysis indicates that it is sufficient to consider an M-level quantizer, i.e., one does not gain much by going beyond M levels. At the same time, one suffers a significant degradation in performance by considering fewer than M levels. A coding scheme using LDPC and IRA codes was also presented. Multi-level LDPC codes were used to encode the source message, whereas multiple IRA codes were used to implement multi-level DJSCC of the quantization indices. Simulation of the proposed methodology indicates performance close to the theoretical bound.

    REFERENCES

[1] T. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 572-584, Sep. 1979.

[2] A. Host-Madsen, "Capacity bounds for cooperative diversity," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1522-1544, Apr. 2006.

[3] Z. Liu, V. Stankovic, and Z. Xiong, "Practical compress-forward code design for the half-duplex relay channel," in Proc. Conference on Information Sciences and Systems, 2005.

[4] Z. Liu, V. Stankovic, and Z. Xiong, "Wyner-Ziv coding for the half-duplex relay channel," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 2005, pp. v-1113.

[5] M. Uppal, Z. Liu, V. Stankovic, and Z. Xiong, "Compress-forward coding with BPSK modulation for the half-duplex Gaussian relay channel," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4467-4481, Nov. 2009.

[6] M. Uppal, G. Yue, X. Wang, and Z. Xiong, "A rateless coded protocol for half-duplex wireless relay channels," IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 209-222, Jan. 2011.

[7] D. Slepian and J. Wolf, "Noiseless coding of correlated information sources," IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471-480, Jul. 1973.

[8] R. Blasco-Serrano, "Coding strategies for compress-and-forward relaying," Licentiate Thesis, Royal Institute of Technology (KTH), Stockholm, Sweden, 2010.

[9] A. Avestimehr, S. Diggavi, and D. Tse, "Wireless network information flow: A deterministic approach," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 1872-1905, Apr. 2011.

[10] V. Nagpal, I.-H. Wang, M. Jorgovanovic, D. Tse, and B. Nikolic, "Quantize-map-and-forward relaying: Coding and system design," in Proc. 48th Annual Allerton Conference on Communication, Control, and Computing, 2010, pp. 443-450.

[11] S. H. Lim, Y.-H. Kim, A. El Gamal, and S.-Y. Chung, "Noisy network coding," IEEE Transactions on Information Theory, vol. 57, no. 5, pp. 3132-3152, May 2011.

[12] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: Model and erasure channel properties," IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2657-2673, 2004.

[13] S. Chung, T. Richardson, and R. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 657-670, 2001.

[14] H. Imai and S. Hirakawa, "A new multilevel coding method using error-correcting codes," IEEE Transactions on Information Theory, vol. 23, no. 3, pp. 371-377, May 1977.

[15] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 927-946, May 1998.

[16] U. Wachsmann, R. Fischer, and J. Huber, "Multilevel codes: Theoretical concepts and practical design rules," IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1361-1391, Jul. 1999.

[17] S. ten Brink, "Designing iterative decoding schemes with the extrinsic information transfer chart," AEU International Journal of Electronics and Communications, vol. 54, no. 6, pp. 389-398, 2000.

[18] C. Wang, S. Kulkarni, and H. Poor, "Density evolution for asymmetric memoryless channels," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4216-4236, 2005.
