
5446 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 9, SEPTEMBER 2013

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Yonathan Murin, Student Member, IEEE, Ron Dabora, Member, IEEE, and Deniz Gündüz, Member, IEEE

Abstract—We study reliable transmission of arbitrarily correlated sources over multiple-access relay channels (MARCs) and multiple-access broadcast relay channels (MABRCs). In MARCs, only the destination is interested in reconstructing the sources, while in MABRCs, both the relay and the destination want to reconstruct them. In addition to arbitrary correlation among the source signals at the users, both the relay and the destination have side information correlated with the source signals. Our objective is to determine whether a given pair of sources can be losslessly transmitted to the destination for a given number of channel symbols per source sample, defined as the source-channel rate. Sufficient conditions for reliable communication based on operational separation, as well as necessary conditions on the achievable source-channel rates, are characterized. Since operational separation is generally not optimal for MARCs and MABRCs, sufficient conditions for reliable communication using joint source-channel coding schemes, based on a combination of the correlation preserving mapping technique with Slepian–Wolf source coding, are also derived. For correlated sources transmitted over fading Gaussian MARCs and MABRCs, we present conditions under which separation (i.e., separate and stand-alone source and channel codes) is optimal. This is the first time optimality of separation is proved for MARCs and MABRCs.

Index Terms—Correlation preserving mapping (CPM), fading, joint source and channel coding, multiple-access relay channel (MARC), separation theorem, Slepian–Wolf (SW) source coding.

I. INTRODUCTION

THE multiple-access relay channel (MARC) models a network in which several users communicate with a single destination with the help of a relay [1]. The MARC is a fundamental multiterminal channel model that generalizes both the multiple access channel (MAC) and the relay channel models, and has received a lot of attention in recent years [1]–[4]. If the relay terminal also wants to decode the source messages,

Manuscript received May 13, 2011; revised December 15, 2012; accepted April 02, 2013. Date of publication April 29, 2013; date of current version August 14, 2013. This work was supported in part by the European Commission's Marie Curie IRG Fellowship PIRG05-GA-2009-246657 under the Seventh Framework Programme. D. Gündüz was supported in part by the Spanish Government under project TEC2010-17816 (JUNTOS) and in part by the European Commission's Marie Curie IRG Fellowship with Reference 256410 (COOPMEDIA). This paper was presented in part at the International Symposium on Wireless Communication Systems, Aachen, Germany, November 2011, and in part at the 2012 IEEE International Symposium on Information Theory.

Y. Murin and R. Dabora are with the Department of Electrical and Computer Engineering, Ben-Gurion University, Beer-Sheva 84105, Israel (e-mail: [email protected]; [email protected]).

D. Gündüz is with the Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, U.K. (e-mail: [email protected]).

Communicated by Y. Oohama, Associate Editor for Source Coding.

Digital Object Identifier 10.1109/TIT.2013.2260592

the model is called the multiple-access broadcast relay channel (MABRC).

Previous work on MARCs considered independent sources at the terminals. In this study, we allow arbitrary correlation among the sources to be transmitted to the destination in a lossless fashion, and also let the relay and the destination have side information correlated with the sources. Our objective is to determine whether a given pair of sources can be losslessly transmitted to the destination for a specific number of channel uses per source sample, which is defined as the source-channel rate.

In [5], Shannon showed that a source can be reliably transmitted over a point-to-point memoryless channel if its entropy is less than the capacity of the channel. Conversely, if the source entropy is greater than the channel capacity, reliable transmission of the source over the channel is not possible. Hence, a simple comparison of the rates of the optimal source code and the optimal channel code for the respective source and channel suffices to determine whether reliable communication is feasible or not. This is called the separation theorem. An implication of the separation theorem is that the independent design of the source and the channel codes is optimal. However, the optimality of source-channel separation does not generalize to multiuser networks [6]–[8], and, in general, the source and the channel codes need to be designed jointly for every particular combination of sources and channel.

The fact that the MARC generalizes both the MAC and the relay channel models reveals the difficulty of the problem studied here. The capacity of the relay channel, which corresponds to a special case of our problem, is still unknown. While the capacity region of a MAC is known in general, the optimal joint source-channel code for transmission of correlated sources over the MAC remains open [7]. Accordingly, the objective of this study is to construct lower and upper bounds for the achievable source-channel rates in MARCs and MABRCs. We shall focus on decode-and-forward (DF) based achievability schemes, such that the relay terminal decodes both source signals before sending cooperative information to the destination. Naturally, DF-based achievable schemes for the MARC directly apply to the MABRC model as well. Moreover, we characterize the optimal source-channel rate in some special cases. Our contributions are listed as follows.

1) We establish an achievable source-channel rate for MARCs based on operational separation [10, Sec. I]. The scheme uses the DF strategy with irregular encoding [2, Sec. I-A], [9], successive decoding at the relay, and backward decoding at the destination. We show that for MARCs with correlated sources and side information,

0018-9448 © 2013 IEEE

MURIN et al.: SOURCE-CHANNEL CODING THEOREMS FOR THE MULTIPLE-ACCESS RELAY CHANNEL 5447

DF with irregular encoding yields a higher achievable source-channel rate than the rate achieved by DF with regular encoding. This is in contrast to the scenario without side information, in which DF with regular encoding achieves the same source-channel rate as DF with irregular encoding. The achievability result obtained for MARCs applies directly to MABRCs as well.

2) We derive two sets of necessary conditions for the achievability of source-channel rates for MARCs (and MABRCs).

3) We investigate MARCs and MABRCs subject to independent and identically distributed (i.i.d.) fading, for both phase fading and Rayleigh fading. We find conditions under which informational source-channel separation (in the sense of [10, Sec. I]) is optimal for each channel model. This is the first time the optimality of separation is proven for some special case of MARCs and MABRCs. Note that these models are not degraded in the sense of [11].

4) We derive two joint source-channel coding achievability schemes for MARCs and MABRCs for the source-channel rate. Both proposed schemes use a combination of Slepian–Wolf (SW) source coding [12] and joint source-channel coding implemented via the correlation preserving mapping (CPM) technique [7]. In the first scheme, CPM is used for encoding information to the relay, and SW source coding combined with an independent channel code is used for encoding information to the destination. In the second scheme, SW source coding is used for encoding information to the relay and CPM is used for encoding information to the destination. These are the first joint source-channel achievability schemes, proposed for a multiuser network with a relay, which take advantage of the CPM technique.
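As a numerical illustration of the SW component used in these schemes, the following minimal sketch evaluates the Slepian–Wolf rate region for a hypothetical doubly symmetric binary source pair (the parameter values are chosen purely for illustration and are not part of the analysis above):

```python
import math

def h2(p):
    # binary entropy function in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Hypothetical doubly symmetric binary source: S1 ~ Bernoulli(1/2),
# S2 = S1 XOR N with N ~ Bernoulli(q) independent of S1.
q = 0.1
H_S1 = 1.0
H_S2 = 1.0
H_S2_given_S1 = h2(q)
H_S1_given_S2 = h2(q)          # by symmetry of the model
H_joint = H_S1 + H_S2_given_S1  # chain rule for entropy

def in_sw_region(R1, R2):
    # Slepian-Wolf conditions for lossless distributed source coding
    return (R1 >= H_S1_given_S2 and
            R2 >= H_S2_given_S1 and
            R1 + R2 >= H_joint)

# The two corner points of the SW region lie on its boundary:
corners = [(H_S1_given_S2, H_S2), (H_S1, H_S2_given_S1)]
for R1, R2 in corners:
    assert in_sw_region(R1, R2)

# Joint compression at the SW sum rate saves bits vs. separate compression:
print(f"H(S1)+H(S2) = {H_S1 + H_S2:.3f} bits, H(S1,S2) = {H_joint:.3f} bits")
```

The gap between `H(S1)+H(S2)` and `H(S1,S2)` is exactly the correlation that the joint schemes above exploit.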

A. Prior Work

The MARC has been extensively studied from a channel coding perspective. Achievable rate regions for the MARC were derived in [2], [3], and [13]. In [2], Kramer et al. derived an achievable rate region for the MARC with independent messages. The coding scheme employed in [2] is based on DF relaying, and uses regular encoding, successive decoding at the relay, and backward decoding at the destination. In [3], it was shown that, in contrast to the classic relay channel, in a MARC, different DF schemes yield different rate regions. In particular, backward decoding can support a larger rate region than sliding-window decoding. Another DF-based coding scheme, which uses offset encoding, successive decoding at the relay, and sliding-window decoding at the destination, was presented in [3]. Outer bounds on the capacity region of MARCs were obtained in [13]. More recently, capacity regions for two classes of MARCs were characterized in [4].

In [14], Shamai and Verdú considered the availability of correlated side information at the receiver in a point-to-point scenario, and showed that source-channel separation still holds. The availability of correlated side information at the receiver enables transmitting the source reliably over a channel with a smaller capacity compared to the capacity needed in the absence of side information. In [7], Cover et al. derived finite-letter sufficient conditions for communicating discrete, arbitrarily correlated sources over a MAC, and showed the suboptimality of source-channel separation when transmitting correlated sources over a MAC. These sufficient conditions were later shown in [15] not to be necessary in general. The transmission technique introduced by Cover et al. is called CPM. In the CPM technique, the channel codewords are correlated with the source sequences, resulting in correlated channel inputs. CPM is extended to source coding with side information over a MAC in [16] and to broadcast channels with correlated sources in [17] (with a correction in [18]).

In [10], Tuncel distinguished between two types of source-channel separation. The first type, called informational separation, refers to classical separation in the Shannon sense. The second type, called operational separation, refers to statistically independent source and channel codes, which are not necessarily the optimal codes for the underlying source or the channel, coupled with a joint decoder at the destination. Tuncel also showed that when broadcasting a common source to multiple receivers, each with its own correlated side information, operational separation is optimal while informational separation is not.

In [8], Gündüz et al. obtained necessary and sufficient conditions for the optimality of informational separation in MACs with correlated sources and side information at the receiver. The work [8] also provided necessary and sufficient conditions for the optimality of operational separation for the compound MAC. Transmission of arbitrarily correlated sources over interference channels (ICs) was studied in [19], in which Salehi and Kurtas applied the CPM technique; however, when the sources are independent, the conditions derived in [19] do not specialize to the Han and Kobayashi (HK) region [20], which is the largest known achievable rate region for ICs. Sufficient conditions based on the CPM technique, which specialize to the HK region, were derived in [21]. Transmission of independent sources over ICs with correlated receiver side information was studied in [22]. The work [22] showed that source-channel separation is optimal when each receiver has access to side information correlated with its own desired source. When each receiver has access to side information correlated with the interfering transmitter's source, [22] provided sufficient conditions for reliable transmission based on a joint source-channel coding scheme that combines HK superposition encoding and partial interference cancellation.

Lossless transmission over a relay channel with correlated side information was studied in [23]–[26]. In [23], Gündüz and Erkip developed a DF-based achievability scheme and showed that operational separation is optimal for physically degraded relay channels as well as for cooperative relay-broadcast channels (CRBCs). The scheme of [23] was extended to multiple relay networks in [24].

Prior work on source transmission over fading channels is mostly limited to point-to-point channels (see [27] and references therein). In this study, we consider two types of fading models: phase fading and Rayleigh fading. Phase fading models apply to high-speed microwave communications where the oscillator's phase noise and the sampling clock jitter are the


key impairments. Phase fading is also the major impairment in communication systems that employ orthogonal frequency division multiplexing [28]. Additionally, phase fading can be used to model systems that employ dithering to decorrelate signals [29]. For cooperative multiuser scenarios, phase-fading models have been considered for MARCs [2], [13], [31], for broadcast-relay channels (BRCs) [2], and for interference channels [32]. Rayleigh fading models are very common in wireless communications and apply to mobile communications in the presence of multiple scatterers without line of sight [30]. Rayleigh fading models have been considered for relay channels in [33]–[35], and for MARCs in [31]. The key similarity between the two fading models is the uniformly distributed phase of the fading process. The phase fading and the Rayleigh fading models differ in the behavior of the fading magnitude, which is fixed for the former but varies following a Rayleigh distribution for the latter.

The rest of this paper is organized as follows. In Section II, the model and notations are presented. In Section III, an achievable source-channel rate based on operational separation is presented. In Section IV, necessary conditions on the achievable source-channel rates are derived. In Section V, the optimality of separation for correlated sources transmitted over fading Gaussian MARCs is studied, and in Section VI, two achievable schemes based on joint source-channel coding are derived. Concluding remarks are provided in Section VII, followed by the appendices.
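As background for the separation results discussed above, Shannon's point-to-point condition can be checked numerically: a source is reliably transmittable when the source entropy is below the number of channel uses per source sample times the channel capacity. A minimal sketch for a Bernoulli source over a binary symmetric channel (the parameter values are illustrative and not taken from the system model of this paper):

```python
import math

def h2(p):
    # binary entropy function in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def separation_feasible(p_source, p_crossover, rate):
    """Shannon's separation condition for lossless transmission of a
    Bernoulli(p_source) source over a BSC(p_crossover), at `rate`
    channel uses per source sample: rate * C > H(S)."""
    H_S = h2(p_source)
    C = 1.0 - h2(p_crossover)   # BSC capacity in bits per channel use
    return rate * C > H_S

# Illustrative numbers: H(S) ~ 0.722 bits, C ~ 0.531 bits/use.
assert separation_feasible(0.2, 0.1, rate=2.0)       # 2 * 0.531 > 0.722
assert not separation_feasible(0.2, 0.1, rate=1.0)   # 1 * 0.531 < 0.722
```

This simple feasibility comparison is exactly what fails to generalize to the multiuser settings studied in this paper, motivating the joint source-channel schemes of Section VI.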

II. NOTATIONS AND MODEL

In the following, we denote the set of real numbers by ℝ, and the set of complex numbers by ℂ. We denote random variables (RVs) with upper case letters, e.g., X, Y, and their realizations with lower case letters, e.g., x, y. A discrete RV X takes values in a set 𝒳. We use |𝒳| to denote the cardinality of a finite, discrete set 𝒳, p_X(x) to denote the probability mass function of a discrete RV X over 𝒳, and f_X(x) to denote the probability density function (p.d.f.) of a continuous RV X. For brevity, we may omit the subscript X when it is the uppercase version of the sample symbol x. We use p(x|y) to denote the conditional distribution of X given Y. We denote vectors with boldface letters, e.g., x, y; the i'th element of a vector x is denoted by x_i, and we use x_i^j, where i < j, to denote (x_i, x_{i+1}, ..., x_j); x^j is a short-form notation for (x_1, x_2, ..., x_j), and, unless specified otherwise, x ≜ x^n. We denote the empty set by ∅, and the complement of the set 𝒳 by 𝒳^c. We use H(·) to denote the entropy of a discrete RV, and I(·;·) to denote the mutual information between two RVs, as defined in [36, Ch. 2]. We use A_ε^{*(n)}(X) to denote the set of ε-strongly typical sequences with respect to the distribution p_X(x) on 𝒳, as defined in [37, Ch. 5.1]; when referring to a typical set, we may omit the RVs from the notation when these variables are clear from the context. We use 𝒞𝒩(μ, σ²) to denote a proper, circularly symmetric, complex Gaussian distribution with mean μ and variance σ² [38], and 𝔼{·} to denote stochastic expectation. We use X − Y − Z to denote a Markov chain formed by the RVs X, Y, Z, as defined in [36, Ch. 2.8], and X ⊥ Y to denote that X is statistically independent of Y.
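The strongly typical sets used throughout the proofs admit a simple empirical check. The sketch below implements one common variant of strong typicality (every empirical frequency within ε of the true probability, and no zero-probability symbols occurring); the exact definition in [37, Ch. 5.1] may differ in normalizing constants:

```python
from collections import Counter

def strongly_typical(seq, pmf, eps):
    """Return True if `seq` is eps-strongly typical w.r.t. `pmf`:
    each empirical frequency is within eps of the true probability,
    and symbols of probability zero do not occur."""
    n = len(seq)
    counts = Counter(seq)
    for a, p in pmf.items():
        if p == 0.0 and counts[a] > 0:
            return False
        if abs(counts[a] / n - p) > eps:
            return False
    # symbols outside the pmf's support must not appear
    return all(a in pmf for a in counts)

pmf = {0: 0.75, 1: 0.25}
assert strongly_typical([0, 0, 0, 1] * 25, pmf, eps=0.01)  # empirical = true
assert not strongly_typical([1] * 100, pmf, eps=0.1)       # far from pmf
```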

Fig. 1. The MABRC with correlated side information. (S̃_1^m, S̃_2^m) are the reconstructions of (S_1^m, S_2^m) at the relay and (Ŝ_1^m, Ŝ_2^m) are the reconstructions at the destination.

A. Problem Formulation

The MARC consists of two transmitters (sources), a receiver (destination), and a relay. Transmitter k has access to the source sequence S_k^m, for k = 1, 2. The receiver is interested in the lossless reconstruction of the source sequences observed by the two transmitters. The relay has access to side information W_3^m, and the receiver has access to side information W^m. The objective of the relay is to help the receiver decode the source sequences. For the MABRC, the relay is also interested in a lossless reconstruction of the source sequences. Fig. 1 depicts the MABRC with side information setup. The MARC is obtained when the reconstruction at the relay is omitted.

The sources and the side information sequences, {S_{1,i}, S_{2,i}, W_i, W_{3,i}}_{i=1}^m, are arbitrarily correlated according to a joint distribution p(s_1, s_2, w, w_3) over a finite alphabet 𝒮_1 × 𝒮_2 × 𝒲 × 𝒲_3, and independent across different sample indices i. All nodes know this joint distribution.

For transmission, a discrete memoryless MARC with inputs X_1, X_2, X_3 over finite input alphabets 𝒳_1, 𝒳_2, 𝒳_3, and outputs Y, Y_3 over finite output alphabets 𝒴, 𝒴_3, is available. The MARC is memoryless, i.e.,

p(y_k, y_{3,k} | x_1^k, x_2^k, x_3^k, y^{k-1}, y_3^{k-1}, s_1^m, s_2^m, w^m, w_3^m) = p(y_k, y_{3,k} | x_{1,k}, x_{2,k}, x_{3,k}).   (1)

Definition 1: An (m, n) source-channel code for the MABRC with correlated side information consists of two encoding functions,

f_k : 𝒮_k^m → 𝒳_k^n,  k = 1, 2,   (2)

a set of causal encoding functions at the relay, {f_{3,k}}_{k=1}^n, such that

x_{3,k} = f_{3,k}(y_3^{k-1}, w_3^m),  1 ≤ k ≤ n,   (3)

and two decoding functions,

g : 𝒴^n × 𝒲^m → 𝒮_1^m × 𝒮_2^m,   (4a)

g_3 : 𝒴_3^n × 𝒲_3^m → 𝒮_1^m × 𝒮_2^m.   (4b)

An (m, n) source-channel code for the MARC is defined as in Definition 1, with the exception that the decoding function g_3 does not exist.

Definition 2: Let (Ŝ_1^m, Ŝ_2^m) denote the reconstruction of (S_1^m, S_2^m) at the receiver, and (S̃_1^m, S̃_2^m) denote the reconstruction of (S_1^m, S_2^m) at the relay,


Fig. 2. Sources transmitted over a fading Gaussian MARC with side information at the relay and destination.

for the MABRC. The average probability of error, P_e^{(m,n)}, of an (m, n) code for the MABRC is defined as

P_e^{(m,n)} ≜ Pr{(Ŝ_1^m, Ŝ_2^m) ≠ (S_1^m, S_2^m) or (S̃_1^m, S̃_2^m) ≠ (S_1^m, S_2^m)},   (5)

while for the MARC the average probability of error is defined as

P_e^{(m,n)} ≜ Pr{(Ŝ_1^m, Ŝ_2^m) ≠ (S_1^m, S_2^m)}.   (6)

Definition 3: A source-channel rate κ is said to be achievable for the MABRC if, for every ε > 0, there exist positive integers n_0, m_0 such that for all n > n_0, m > m_0 with n/m = κ, there exists an (m, n) code for which P_e^{(m,n)} < ε. The same definition applies to the MARC.

B. Fading Gaussian MARCs

The fading Gaussian MARC is depicted in Fig. 2. In fading Gaussian MARCs, the received signals at time k at the receiver and at the relay are given by

Y_k = H_{1,k}X_{1,k} + H_{2,k}X_{2,k} + H_{3,k}X_{3,k} + Z_k,   (7a)

Y_{3,k} = H_{13,k}X_{1,k} + H_{23,k}X_{2,k} + Z_{3,k},   (7b)

for k = 1, ..., n, where Z_k and Z_{3,k} are independent of each other, i.i.d., circularly symmetric, complex Gaussian RVs, 𝒞𝒩(0, 1). The channel input signals are subject to per-symbol average power constraints: 𝔼{|X_{i,k}|²} ≤ P_i, i = 1, 2, 3. In the following, it is assumed that the destination knows the instantaneous channel coefficients from the transmitters and the relay to itself, and the relay knows the instantaneous channel coefficients from both transmitters to itself. This is referred to as receiver channel state information (Rx-CSI). Note that the destination does not have CSI on the links arriving at the relay, and that the relay does not have CSI on the links arriving at the destination. It is also assumed that the sources and the relay do not know the channel coefficients on their outgoing links (no transmitter CSI). We represent the CSI at the destination with H̄ ≜ (H_{1,k}, H_{2,k}, H_{3,k}), the CSI at the relay with H̄_3 ≜ (H_{13,k}, H_{23,k}), and define H ≜ (H̄, H̄_3). We consider two types of fading: phase fading and Rayleigh fading.

1) Phase fading channels: The channel coefficients H_{l,k}, l ∈ {1, 2, 3, 13, 23}, are characterized as H_{l,k} = a_l e^{jΘ_{l,k}}, where the a_l are constants representing the attenuation, and the phases Θ_{l,k} are uniformly distributed over [0, 2π), i.i.d., and independent of each other and of the additive noises Z_k and Z_{3,k}.

2) Rayleigh fading channels: The channel coefficients are characterized as H_{l,k} = a_l U_{l,k}, where the a_l are constants representing the attenuation, and the U_{l,k} are circularly symmetric, complex Gaussian RVs, 𝒞𝒩(0, 1), i.i.d., and independent of each other and of the additive noises Z_k and Z_{3,k}. We define the squared magnitudes Ũ_{l,k} ≜ |U_{l,k}|².

In both models, the values of the a_l are fixed and known at all nodes. Observe that the magnitude of the phase-fading process is constant, |H_{l,k}| = a_l, but for Rayleigh fading the fading magnitude varies between different time instances.
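The contrast between the two fading models can be verified numerically. A minimal simulation sketch with a hypothetical attenuation value `a` (illustrative only): both processes have uniformly distributed phase and the same average power, but only phase fading has constant magnitude.

```python
import cmath
import random

random.seed(0)
n, a = 200_000, 0.8   # number of samples and a hypothetical attenuation

# Phase fading: H = a * exp(j*Theta), Theta ~ Uniform[0, 2*pi)
phase = [a * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
         for _ in range(n)]

# Rayleigh fading: H = a * U, with U circularly symmetric complex
# Gaussian CN(0, 1): real and imaginary parts each N(0, 1/2).
rayleigh = [a * complex(random.gauss(0.0, 0.5 ** 0.5),
                        random.gauss(0.0, 0.5 ** 0.5))
            for _ in range(n)]

# Phase fading has constant magnitude |H| = a at every time instant ...
assert all(abs(abs(h) - a) < 1e-12 for h in phase)

# ... while both processes share the same average power E[|H|^2] = a**2.
avg_phase = sum(abs(h) ** 2 for h in phase) / n
avg_ray = sum(abs(h) ** 2 for h in rayleigh) / n
print(avg_phase, avg_ray)   # both close to a**2 = 0.64
```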

III. ACHIEVABLE SOURCE-CHANNEL RATE BASED ON OPERATIONAL SEPARATION

In this section, we derive an achievable source-channel rate for discrete memoryless (DM) MARCs and MABRCs using separate source and channel codes. The achievability is established by using SW source coding together with a channel coding scheme similar to the one detailed in [3, Secs. II and III], and is based on DF relaying with irregular block Markov encoding, successive decoding at the relay, and backward decoding at the destination. The results are summarized in the following theorem.

Theorem 1: For DM MARCs and DM MABRCs with relay and receiver side information, as defined in Section II-A, source-channel rate κ is achievable if

(8a)

(8b)

(8c)

(8d)

(8e)

(8f)

for some joint distribution that factorizes as

(9)

Proof: The proof is given in Appendix A.

A. Discussion

Remark 1: In Theorem 1, (8a)–(8c) are constraints for reliable decoding at the relay, while (8d)–(8f) are constraints for reliable decoding at the destination.

Remark 2: In regular encoding, the codebooks at the sources and at the relay have the same cardinality; see, for example, [3]. Now, note that the achievable source-channel rate of Theorem 1 is established by using two different SW coding schemes at different coding rates: one for the relay and one for the destination. The main benefit of different encoding rates is that they allow adapting to the different quality of the side information at the relay and at the destination. Since the rates are different, such encoding cannot be realized with regular encoding and requires an irregular coding scheme for the channel code.


Had we applied regular encoding, it would have led to the merger of some of the constraints in (8), in order to force the binning rates to the relay and to the destination to be equal; for example, (8a) and (8d) would be merged into a single constraint. Hence, regular encoding puts extra constraints on the rates. Accordingly, we conclude that irregular encoding allows higher achievable source-channel rates than regular encoding. When the relay and the destination have the same side information (W_3 = W), the irregular and regular encoding schemes achieve the same source-channel rate. This can be observed by setting W_3 = W in (8a) and (8d).

Finally, consider regular encoding in the case of a MARC. Here, the relay is not required to recover the source sequences. Therefore, regular encoding requires the merger of the corresponding right-hand sides (RHSs) of the constraints (8a)–(8f); for example, (8a) and (8d) are merged into a single constraint. This shows that regular encoding is more restrictive than irregular encoding for MARCs as well.
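Remark 2's motivation for irregular encoding can be illustrated numerically: when the relay and the destination observe the source through side-information channels of different quality, the two minimum SW binning rates differ, so a single codebook rate cannot match both. The sketch below uses a hypothetical model (a binary source observed through BSCs), not the general model above:

```python
import math

def h2(p):
    # binary entropy function in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Hypothetical setup: S ~ Bernoulli(1/2); relay side information W3 and
# destination side information W are S observed through BSCs with
# crossover probabilities q3 and q, respectively.
q3, q = 0.05, 0.2            # the relay has better side information here

# For this symmetric model, H(S|W3) = h2(q3) and H(S|W) = h2(q): the
# minimum SW binning rates for lossless recovery at each node.
rate_to_relay = h2(q3)
rate_to_dest = h2(q)
print(f"H(S|W3) = {rate_to_relay:.3f} bits, H(S|W) = {rate_to_dest:.3f} bits")

# The required rates differ, so one regular-encoding codebook rate cannot
# be tight for both nodes; irregular encoding assigns each its own rate.
assert rate_to_relay < rate_to_dest
```

Setting `q3 = q` (identical side information) makes the two rates coincide, matching the observation that regular and irregular encoding then achieve the same source-channel rate.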

IV. NECESSARY CONDITIONS ON THE ACHIEVABLE SOURCE-CHANNEL RATE FOR DISCRETE MEMORYLESS MARCS AND MABRCS

In this section, we derive necessary conditions for the achievability of a source-channel rate κ for MARCs and for MABRCs with correlated sources and side information at the relay and at the destination. The conditions for the MARC are summarized in the following theorem.

Theorem 2: Consider the transmission of arbitrarily correlated sources S_1^m and S_2^m over the DM MARC with relay side information W_3^m and receiver side information W^m. Any achievable source-channel rate κ must satisfy the following constraints:

(10a)

(10b)

(10c)

for some input distribution, and the constraints

(11a)

(11b)

(11c)

for some input distribution.

Proof: The proof is given below in Section IV-A.

Remark 3: The RHS of the constraints in (11) is similar to the broadcast bound¹ when assuming that all relay information is available at the destination.

Remark 4: Setting S_2 = W = W_3 = ∅, the constraints in (10) specialize to the converse of [23, Th. 3.1] for the relay channel.

Theorem 3: Consider the transmission of arbitrarily correlated sources S_1^m and S_2^m over the DM MABRC with relay side information W_3^m and receiver side information W^m. Any achievable source-channel rate κ must satisfy the constraints (10) as well as the following constraints:

(12a)

(12b)

(12c)

for some input distribution.

Proof: The proof follows arguments similar to those in the proof of Theorem 2, and is hence omitted.

A. Proof of Theorem 2

Let P_e^{(m,n)} → 0 as m, n → ∞ for a sequence of encoders and decoders such that κ = n/m is fixed. By Fano's inequality [36, Th. 2.11.1], we have

H(S_1^m, S_2^m | Ŝ_1^m, Ŝ_2^m) ≤ 1 + m·P_e^{(m,n)}·log|𝒮_1 × 𝒮_2| ≜ m·δ(P_e^{(m,n)}),   (13)

where δ(P_e^{(m,n)}) is a nonnegative function that approaches zero as P_e^{(m,n)} → 0. Observe that

H(S_1^m, S_2^m | Y^n, W^m) = H(S_1^m, S_2^m | Y^n, W^m, Ŝ_1^m, Ŝ_2^m) ≤ H(S_1^m, S_2^m | Ŝ_1^m, Ŝ_2^m),   (14)

where the inequality follows from the fact that conditioning reduces entropy [36, Th. 2.6.5], and the equality follows from the fact that (Ŝ_1^m, Ŝ_2^m) is a function of (Y^n, W^m).

1) Proof of Constraints (10): Constraint (10a) is a consequence of the following chain of inequalities:

(15)

¹Here, we use the common terminology for the classic relay channel, in which the term I(X, X_1; Y) is referred to as the MAC bound while the term I(X; Y, Y_1 | X_1) is called the broadcast bound [39, Ch. 16].


where (a) follows from the memoryless channel assumption (see (1)) and the Markov relation implied by it (see [40]); (b) follows from the fact that conditioning reduces entropy; (c) follows from the fact that X_{3,k} is a deterministic function of (Y_3^{k-1}, W_3^m); (d) follows from the nonnegativity of mutual information; and (e) follows from the memoryless sources and side information assumption, and from (13)–(14).

Following arguments similar to those that led to (15), we can also show

(16a)

(16b)

We now recall that the mutual information is concave in the set of joint input distributions [36, Th. 2.7.4]. Thus, taking the limit as m, n → ∞ and letting P_e^{(m,n)} → 0, (15), (16a), and (16b) result in the constraints in (10).

2) Proof of Constraints (11): We begin by defining the following auxiliary RV:

(17)

Constraint (11a) is a consequence of the following chain of inequalities:

(18)

where (a) follows from (17), the fact that is a deterministic function of , and the memoryless channel assumption (see (1)); (b) follows from the fact that conditioning reduces entropy, and from causality [40]; (c) follows from the fact that is a deterministic function of , and from the fact that conditioning reduces entropy; (d) follows again from the fact that conditioning reduces entropy, the memoryless sources and side information assumption, and (13)–(14).

Following arguments similar to those that led to (18), we can also show that

also show that

(19a)

(19b)

Next, we introduce the time-sharing RV , independent of all other RVs, and let with probability . We can write

(20)

where , , , , and . Since and are independent given , for we have

(21)

Hence, the probability distribution is of the form given in Theorem 2 for the constraints in (11). Finally, repeating the steps leading to (20) for (19a) and (19b), and taking the limit , leads to the constraints in (11).
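The time-sharing step above follows a standard identity. Since the paper's own symbols did not survive extraction, the version below uses generic placeholder notation ($X_1$, $X_2$, $Y$, $Q$) rather than the paper's RVs:

```latex
% Standard time-sharing identity, with Q uniform on {1,...,n} and
% independent of all other RVs (placeholder notation):
\frac{1}{n}\sum_{i=1}^{n} I(X_{1,i},X_{2,i};Y_i)
   = \sum_{q=1}^{n}\frac{1}{n}\, I\bigl(X_{1,q},X_{2,q};Y_q \,\big|\, Q=q\bigr)
   = I(X_1,X_2;Y \mid Q),
\qquad X_k \triangleq X_{k,Q},\quad Y \triangleq Y_Q .
```

The single-letter expression on the right is what allows the multiletter bounds to be written over a fixed joint distribution, as in the constraints of (11).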

V. OPTIMALITY OF SOURCE-CHANNEL SEPARATION FOR FADING GAUSSIAN MARCS AND MABRCS

In this section, we study source-channel coding for fading Gaussian MARCs and MABRCs. We derive conditions for the optimality of source-channel separation for the phase and Rayleigh fading models. We begin by considering phase fading Gaussian MARCs, defined in (7). The result is stated in the following theorem:

Theorem 4: Consider the transmission of arbitrarily correlated sources and over a phase fading Gaussian MARC with receiver side information and relay side information . Let the channel inputs be subject to per-symbol power constraints specified by

(22)

and let the channel coefficients and power constraints satisfy

(23a)

(23b)

(23c)


A source-channel rate is achievable if

(24a)

(24b)

(24c)

Conversely, if source-channel rate is achievable, then conditions (24) are satisfied with replaced by .

Proof: The necessity part is proved in Section V-A1, and sufficiency is shown in Section V-A2.

Remark 5: To achieve the source-channel rates stated in Theorem 4, we use channel inputs distributed according to , all mutually independent, and generate the codebooks in an i.i.d. manner. The relay employs the DF scheme.

Remark 6: Note that the phase fading MARC is not degraded in the sense of [11]; see also [2, Remark 33].

Remark 7: The result of Theorem 4 relies on the assumptions of additive Gaussian noise, Rx-CSI, and i.i.d. fading coefficients such that the phases of the fading coefficients are mutually independent, uniformly distributed, and independent of their magnitudes. These assumptions are essential for the result.

Remark 8: Observe that from the achievability result of [31, Appendix A], it follows that the optimal source code and channel code used in the proof of Theorem 4 are separate and stand-alone. Thus, informational separation is optimal. We now provide an intuitive explanation for the optimality of separation in the current scenario. Note that when separate and stand-alone source and channel codes are used, the channel inputs of the two transmitters, and , are mutually independent, i.e., . This puts a restriction on the feasible joint distributions for generating the channel codebooks. Using a joint source-channel code allows generating channel inputs that are statistically dependent on the source symbols. Since and are correlated, this induces statistical dependence between the channel inputs and . This, in turn, enlarges the set of feasible joint input distributions which can be realized for generating the channel codebooks, and therefore, the set of achievable transmission rates over the channel may increase. However, for fading Gaussian MARCs, due to the uniformly distributed phases of the channel coefficients, in the absence of Tx-CSI, the received signal components (from the sources and from the relay) at the destination are uncorrelated. Therefore, there is no advantage, from the perspective of channel coding, in generating correlated channel inputs. Coupled with the entropy maximization property of Gaussian RVs, we conclude that the optimal channel inputs are mutually independent. From this discussion, it follows that there is no benefit from joint source-channel coding, and source-channel separation is optimal.

Remark 9: There exist examples of channels that are not fading Gaussian channels, but which satisfy the rest of the assumptions detailed in Section II-B, for which the DF-based sufficient conditions of Theorem 1 are not optimal. One such example is the Gaussian relay channel with fixed channel coefficients; see also the discussion in [2, Sec. VII-B].

Next, we consider source transmission over Rayleigh fading MARCs.
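The uncorrelatedness argument of Remark 8 can be checked numerically. The following sketch (not from the paper; the block length, seed, and fully correlated inputs are arbitrary illustrative choices) passes the same channel input through two independent uniform-phase fading coefficients, mimicking the absence of Tx-CSI, and estimates the correlation between the two received components:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Fully correlated (identical) channel inputs at a transmitter and the relay.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Independent, uniformly distributed phase-fading coefficients (no Tx-CSI).
h1 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
h3 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))

y1 = h1 * x  # received component originating at the transmitter
y3 = h3 * x  # received component originating at the relay

# Empirical normalized cross-correlation of the two received components.
rho = np.mean(y1 * np.conj(y3)) / np.sqrt(
    np.mean(np.abs(y1) ** 2) * np.mean(np.abs(y3) ** 2))
print(abs(rho))  # close to 0: the received components are uncorrelated
```

Even though the two channel inputs here are fully correlated, the uniformly distributed phases drive the cross-correlation of the received components to zero, which is why correlated channel inputs offer no channel-coding gain in this setting.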

Theorem 5: Consider the transmission of arbitrarily correlated sources and over a Rayleigh fading Gaussian MARC with receiver side information and relay side information . Let the channel inputs be subject to per-symbol power constraints as in (22), and let the channel coefficients and the power constraints satisfy

(25a)

(25b)

(25c)

where ; see [42, eq. (5.1.1)]. A source-channel rate is achievable if

(26a)

(26b)

(26c)

Conversely, if source-channel rate is achievable, then conditions (26) are satisfied with replaced by .

Proof: The proof uses [31, Corollary 1], and follows arguments similar to those in the proof of Theorem 4.

Remark 10: The source-channel rate in Theorem 5 is achieved by using , all i.i.d. and independent of each other, and applying DF at the relay.

A. Proof of Theorem 4

1) Necessity Proof of Theorem 4: Consider the necessary conditions of Theorem 2. We first note that the phase fading MARC model specified in Section II-B belongs exactly to the class of fading relay channels with Rx-CSI2 considered in [2, Th. 8]. Thus, from [2, Th. 8], it follows that for phase fading MARCs with Rx-CSI, the mutual information expressions on the RHS of (10) are simultaneously maximized by mutually independent, zero-mean complex Gaussian RVs, . Applying this input p.d.f. to (10) yields the expressions in (24). Therefore, for phase fading MARCs, when conditions (23) hold, the conditions in (24) coincide with the necessary conditions of Theorem 2, after replacing with .

2) Sufficiency Proof of Theorem 4:

• Codebook construction: For , assign every to one of bins independently, according to a uniform distribution over . Denote

2Rx-CSI is incorporated into Theorem 2 by replacing with in (10), and with in (11), and then by using the fact that, due to the absence of Tx-CSI, ; see [2, eq. (50)].


these assignments by . Set , , and let , , all mutually independent. Construct a channel code based on DF with rates and , and with blocklength , as detailed in [31, Appendix A].
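The uniform random bin assignment in the codebook construction above can be sketched as follows. The source alphabet, block length, and number of bins are illustrative placeholders rather than the values of the form 2^{mR} prescribed by the proof:

```python
import itertools
import random

random.seed(0)

m = 4             # source block length (illustrative)
alphabet = [0, 1]
num_bins = 8      # stand-in for the 2^{mR} bins of the actual construction

# Assign every length-m source sequence to a bin uniformly at random.
bin_of = {s: random.randrange(num_bins)
          for s in itertools.product(alphabet, repeat=m)}

# The encoder transmits only the bin index of the observed source block.
observed = (0, 1, 1, 0)
print(bin_of[observed])
```

The decoder then searches the indicated bin for a sequence jointly typical with its side information, which is the mechanism behind the Slepian-Wolf conditions invoked below.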

• Encoding: Consider sequences of length , , . Partition each sequence into length- subsequences, , , and , . A total of source samples are transmitted over blocks of channel symbols each. Setting , and increasing , we obtain a source-channel rate as . At block , source terminal , , observes and finds its corresponding bin index . Each transmitter sends its corresponding bin index using the channel code described in [31, Appendix A]. Assume that at time , the relay knows . The relay sends these bin indices using the encoding scheme described in [31, Appendix A].

• Decoding and error probability analysis: We apply the decoding rule of [31, eq. (A2)]. From the error probability analysis in [31, Appendix A], it follows that, when the channel coefficients and the channel input power constraints satisfy the conditions in (23), the RHSs of the constraints in (24) characterize the ergodic capacity region (in the sense of [2, eq. (51)]) of the phase fading Gaussian MARC (see [2, Th. 9], [31, Appendix A]). Hence, when conditions (23) are satisfied, the transmitted bin indices can be reliably decoded at the destination as long as

(27a)

(27b)

(27c)

Decoding the sources at the destination: The decoded bin indices, denoted , are given to the source decoder at the destination. Using the bin indices and the side information , the source decoder at the destination estimates by looking for a unique pair of sequences that satisfies and . From the SW theorem [36, Th. 14.4.1], can be reliably decoded at the destination if

(28a)

(28b)

(28c)

Combining conditions (27) and (28) yields (24), which completes the achievability proof.
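The Slepian-Wolf conditions invoked above can be illustrated with a small numerical check. The joint pmf below is an arbitrary toy example, and the sketch omits the side information variables for brevity, so the entropies and the resulting region are illustrative only:

```python
import math

# Toy joint pmf p(s1, s2) for two correlated binary sources (illustrative).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(pmf):
    """Entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

p1 = {0: 0.5, 1: 0.5}  # marginal of S1
p2 = {0: 0.5, 1: 0.5}  # marginal of S2

H12 = H(p)
H1_given_2 = H12 - H(p2)  # H(S1 | S2)
H2_given_1 = H12 - H(p1)  # H(S2 | S1)

def in_sw_region(R1, R2):
    """Slepian-Wolf conditions (here with no extra side information)."""
    return R1 >= H1_given_2 and R2 >= H2_given_1 and R1 + R2 >= H12

print(in_sw_region(1.0, 1.0))  # both bin rates exceed the SW bounds
print(in_sw_region(0.5, 0.5))  # sum rate falls below H(S1, S2)
```

In the proof, the analogous conditions appear in (28), with the side information sequences entering as conditioning variables; the achievability then follows by choosing bin rates inside both the SW region and the channel-code region of (27).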

Remark 11: Note that in the sufficiency proof in Section V-A2, we used the code construction and the decoding procedure of [31, Appendix A], which are designed specifically for fading MARCs. The reason we did not use the result of Theorem 1 is that for the channel inputs to be mutually independent, we must set in Theorem 1. But with such an assignment, the decoding rule of the channel code at the destination given by (A.2) does not apply, as this rule decodes the information carried by the auxiliary RVs. For the same reason, we did not simply cite [2, Th. 9] for the channel coding part of the sufficiency proof of Theorem 4. We conclude that a specialized channel code must be constructed for fading channels. The issue of channel coding for fading MARCs has already been addressed in [31], and we refer the reader to [31] for a detailed discussion.

B. Fading MABRCs

Optimality of informational separation can also be established for MABRCs by using the results for MARCs with three additional constraints. The result is stated in the following theorem:

Theorem 6: For phase fading MABRCs for which the conditions in (23) hold together with

(29a)

(29b)

(29c)

a source-channel rate is achievable if conditions (24) are satisfied. Conversely, if a source-channel rate is achievable, then conditions (24) are satisfied with replaced by . The same statement holds for Rayleigh fading MABRCs, with (25) replacing (23) and (26) replacing (24).

Proof: The sufficiency proof of Theorem 6 differs from the sufficiency proof of Theorem 4 only in the requirement of decoding the source sequences at the relay. Conditions (23) imply that reliable decoding of the channel code at the destination implies reliable decoding of the channel code at the relay. Conditions (29) imply that the achievable source rate region at the relay contains the achievable source rate region at the destination; therefore, reliable decoding of the source code at the destination implies reliable decoding of the source code at the relay. Hence, if conditions (23), (24), and (29) hold, , , can be reliably decoded at both the relay and the destination. Necessity of (24) follows from the necessary conditions of Theorem 3, by following arguments similar to those in the necessity proof of Theorem 4. The extension to Rayleigh fading is similar to the one done for MARCs (from Theorem 4 to Theorem 5).

Remark 12: Conditions (29) imply that for the scenario described in Theorem 4, regular and irregular encoding yield the same achievable source-channel rates (see Remark 2); hence, the channel code construction of [31, Appendix A] can be used without any rate loss.

VI. JOINT SOURCE-CHANNEL ACHIEVABLE RATES FOR DISCRETE MEMORYLESS MARCS AND MABRCS

In this section, we derive two sets of sufficient conditions for the achievability of source-channel rate for DM MARCs and MABRCs with correlated sources and side information. Both achievability schemes are established by using a combination of SW source coding, the CPM technique, and a DF scheme with successive decoding at the relay and backward decoding at the destination. The techniques differ in the way the


source codes are combined. In the first scheme (Theorem 7), SW source coding is used for encoding information to the destination, and CPM is used for encoding information to the relay. In the second scheme (Theorem 8), CPM is used for encoding information to the destination, while SW source coding is used for encoding information to the relay.

Before presenting the results, we first motivate this section by recalling that separate source-channel coding is suboptimal for the MAC [7]. This implies that, in general, separate source-channel coding is suboptimal for the MARC and the MABRC as well.

A. Joint Source-Channel Coding for MARCs and MABRCs

Theorems 7 and 8 below present two new sets of sufficient conditions for the achievability of source-channel rate , obtained by combining SW source coding and CPM. For the sources and , we define common information in the sense of Gács and Körner [44] and Witsenhausen [45] as , where and are deterministic functions. We now state the theorems.

Theorem 7: For DM MARCs and MABRCs with relay and receiver side information as defined in Section II-A, and a source pair with common part , a source-channel rate is achievable if

(30a)

(30b)

(30c)

(30d)

(30e)

(30f)

(30g)

for some joint distribution that factorizes as

(31)

Proof: The proof is given in Appendix B.

Theorem 8: For DM MARCs and MABRCs with relay and receiver side information as defined in Section II-A, and a source pair with common part , a source-channel rate is achievable if

(32a)

(32b)

(32c)

(32d)

(32e)

(32f)

(32g)

for some joint distribution that factorizes as

(33)

Proof: The proof is given in Appendix C.

Fig. 3. (a) Diagram of the Markov chain for the joint distribution considered in (31). (b) Diagram of the Markov chain for the joint distribution considered in (33).

B. Discussion

Fig. 3 illustrates the Markov chains for the joint distributions considered in Theorems 7 and 8.

Remark 13: Conditions (30a)–(30d) in Theorem 7 and conditions (32a)–(32c) in Theorem 8 are constraints for decoding at the relay, while conditions (30e)–(30g) and (32d)–(32g) are decoding constraints at the destination.

Remark 14: Each mutual information expression on the RHS of the constraints in Theorems 7 and 8 represents the rate of one of two encoding types: either source-channel encoding via CPM, or SW encoding. Consider Theorem 7: here, and represent the binning information for and , respectively. Observe that the left-hand side (LHS) of condition (30a) is the entropy of when are known. On the RHS of (30a), as , , , , , and are given, the mutual information expression represents the available rate that can be used for encoding information on the source , in excess of the bin index represented by . The LHS of condition (30e) is the entropy of when are known. The RHS of condition (30e) expresses the amount of binning information that can be reliably transmitted cooperatively by transmitter 1 and the relay to the destination. This can be seen by rewriting the mutual information expression in (30e) as . As is given, this expression represents the rate at which the bin index of source can be transmitted to the destination in excess of the source-channel rate for encoding (see Appendix A). Therefore, each mutual information expression in (30a) and (30e) represents a different type of information sent by the source: either a source-channel codeword to the relay, as in (30a), or a bin index to the destination, as in (30e). This difference arises because SW source coding is used for encoding information to the destination, while CPM is used for encoding information to the relay.

Similarly, consider the RHS of (32a) in Theorem 8. The mutual information expression represents the rate that can be used for encoding the bin index of source to the relay (see Appendix C), since is given. In contrast, the mutual information expression on the RHS of (32d) represents the available rate that can be used for

cooperative source-channel encoding of the source to the destination. This follows as , , , and are given.

Fig. 4. The CRBC with correlated side information. and are the estimates of at the relay and destination, respectively.

Remark 15: Theorems 7 and 8 establish different sufficient

conditions. In [7], it was shown that separate source and channel coding is generally suboptimal for transmitting correlated sources over MACs. It then directly follows that separate coding is also suboptimal for DM MARCs and MABRCs. In Theorem 7, the CPM technique is used for encoding information to the relay, while in Theorem 8, SW coding concatenated with independent channel coding is used for encoding information to the relay. Coupled with the above observation, this implies that the relay decoding constraints of Theorem 7 are generally looser than the relay decoding constraints of Theorem 8. Using similar reasoning, we conclude that the destination decoding constraints of Theorem 8 are looser than the destination decoding constraints of Theorem 7 (as long as coordination is possible; see Remark 18). Considering the distribution chains in (31) and (33), we conclude that these two theorems represent different sets of sufficient conditions: neither is a special case of the other, nor does either include the other.

Remark 16: Theorem 7 coincides with Theorem 1 for and no common information: consider the case in which the source pair has no common part, that is, , and let as well. For an input distribution , conditions (30) specialize to conditions (8), and the transmission scheme of Theorem 7 (see Appendix B) specializes to a separation-based achievability scheme of Theorem 1 for , under these assumptions.

ming from the CPM technique can be specialized to the suffi-cient conditions of [7, Th. 1] derived for a MAC. In Theorem7, letting specializes the relay con-ditions in (30a)–(30d) to the ones in [7, Th. 1], with as thedestination. In Theorem 8, letting specializes thedestination conditions in (32d)–(32g) to the ones in [7, Th. 1],with as the destination.Remark 18: Theorem 7 is optimal in some scenarios: con-

sider the cooperative relay-broadcast channel (CRBC) depictedin Fig. 4. This model is a special case of aMARC obtained whenthere is a single source terminal. For the CRBC with correlatedrelay and destination side information, we can identify exactlythe optimal source-channel rate using Theorems 1 and 3. Thisresult was previously obtained in [23]:

Corollary ([23, Th. 3.1]): For the CRBC with relay and receiver side information, source-channel rate is achievable if

(34a)

(34b)

for some input distribution . Conversely, if rate is achievable, then the conditions in (34) are satisfied with replaced by for some input distribution .

Proof: The achievability follows from Theorem 1 by assigning and . The converse follows from Theorem 3.

For source-channel rate , the conditions in (34) can also be obtained from Theorem 7 by letting , and considering an input distribution

independent of the sources. Observe that Theorem 8 is not optimal for the CRBC: consider the conditions in Theorem 8 with . For this assignment, we obtain the following sufficient conditions:

(35a)

(35b)

for some input distribution that factorizes as

(35c)

Note that the RHSs of (35a) and (35b) are smaller than or equal to the RHSs of (34a) and (34b), respectively. Moreover, not all joint input distributions that are feasible for [23, Th. 3.1] are also feasible with (35c). Hence, the conditions obtained from Theorem 8 for the CRBC setup with are stricter than those obtained from Theorem 7, further illustrating the fact that the two sets of sufficient conditions are not equivalent. We conclude that the downside of using CPM to the destination, as applied in this work, is that it places constraints on the distribution chain, thereby constraining the achievable coordination between the sources and the relay. Due to this restriction, when there is only a single source, the joint distributions of the source and the relay ( and ) permitted by the scheme of Theorem 8 do not exhaust the entire space of joint distributions; as a result, the source-channel sufficient conditions obtained from Theorem 8 are generally more restrictive than those obtained from Theorem 7 in the single-source case. However, in the case of a MARC, it is not possible to determine whether either of the schemes is universally better than the other.

Remark 19: Note that in Theorem 7, it is not useful to generate the relay channel input statistically dependent on the common information, that is, on the auxiliary RV . To understand why, recall that in Theorem 7, SW source coding is used for encoding information to the destination, while CPM is used for encoding information to the relay. The optimality of SW encoding [12] implies that letting the decoder know the common information will not improve the constraints for the source decoder at the destination, as these are based on the SW decoder (see (B.46)). Moreover, note that even though CPM is used for encoding information to the relay, sending common information via the relay channel input will not improve the


decoding constraints at the relay. This follows from the fact that in the DF scheme, cooperation information is used with a delay of one block. Therefore, common information at the relay channel input corresponds to the source sequences of the previous block, which cannot improve the decoding of the source sequences of the current block at the relay, in contrast to Theorem 8. We conclude that in Theorem 7, we cannot benefit from generating the relay channel input statistically dependent on the common information.

Remark 20: In both Theorems 7 and 8, we used a combination of SW coding and CPM. Since CPM can generally support higher source-channel rates, a natural question that arises is whether it is possible to design a scheme based only on CPM, namely, to encode both the cooperation information forwarded by the relay (together with the sources) and the new information transmitted from the sources, using a superposition CPM scheme. This approach cannot be implemented in the framework of the current paper. This follows because the current work uses decoding based on joint typicality, but joint typicality does not apply to different blocks of the same RV; that is, we cannot test the joint typicality of and , as they belong to different time blocks. Using a CPM-only scheme would require us to carry out such tests. For example, consider the case in which the source pair has no common part, that is, , and also let . Using the CPM technique for sending information to both the relay and the destination would lead to the following relay decoding rule: assume that the relay knows at the end of block . The relay decodes by looking for a unique pair such that:

(36)

Note that and cannot be jointly typical, since they correspond to different block indices: corresponds to block , while corresponds to block ; hence, they are independent of each other. Similarly, the destination would be required to check typicality across different blocks. We conclude that a CPM-only scheme cannot be used together with a joint typicality decoder. It may be possible to construct schemes based on a different decoder, or to implement CPM through intermediate RVs, to overcome this difficulty, but these topics are left for future research.

Remark 21: A comparison of the decoding rules of Theorem 7 (see Appendix B-C) and Theorem 8 (see Appendix C-C) reveals a difference in the side information block indices used to assist decoding at the relay and the destination. The decoding rules of Theorem 7 use the side information block with the same index as that of the received vector, while the decoding rules of Theorem 8 use the side information block with an index earlier than that of the received vector. The difference stems from the fact that in Theorem 7, cooperation between the relay and the sources is achieved via auxiliary RVs which represent bin indices, while in Theorem 8, the cooperation is based on the source sequences. In the DF scheme, cooperation information is used with a delay of one block. Therefore, when cooperation is based on the source sequences (Theorem 8), the side information from the previous block is used for decoding, since this is the side information that is correlated with the source sequences.

VII. CONCLUSION

In this paper, we considered the transmission of arbitrarily correlated sources over MARCs and MABRCs with correlated side information at both the relay and the destination. We first derived an achievable source-channel rate for MARCs based on operational separation, which applies directly to MABRCs as well. This result is established by using an irregular encoding scheme for the channel code. We also showed that for both MABRCs and MARCs, regular encoding is more restrictive than irregular encoding. Additionally, we obtained necessary conditions for the achievability of source-channel rates.

Then, we considered phase fading and Rayleigh fading MARCs with side information, and identified conditions under which informational separation is optimal for these channels. Conditions for the optimality of informational separation for fading MABRCs were also obtained. The importance of this result lies in the fact that it supports a modular system design (separate design of the source and channel codes) while achieving the optimal end-to-end performance. We note that this is the first time that optimality of separation has been shown for a MARC or a MABRC configuration.

Finally, we considered joint source-channel coding for DM MARCs and MABRCs at source-channel rate . We presented two new joint source-channel coding schemes which use a combination of SW source coding and joint source-channel coding based on CPM. While in the first scheme, CPM is used for encoding information to the relay and SW coding is used for encoding information to the destination, in the second scheme, SW coding is used for encoding information to the relay and CPM is used for encoding information to the destination. The different combinations of SW coding and CPM enable flexibility in the system design by choosing one of the two schemes according to the qualities of the side information sequences and the received signals at the relay and the destination. In particular, the first scheme generally has looser decoding constraints at the relay, and is therefore better when the source-relay link is the bottleneck, while the second scheme generally has looser decoding constraints at the destination, and is more suitable for scenarios in which the source-destination link is the limiting one.

APPENDIX A
PROOF OF THEOREM 1

Fix a distribution .

A. Codebook Construction

For , assign every to one of bins independently, according to a uniform distribution on . We refer to these two sets as the relay bins, and denote these assignments by . Independently of the relay bin assignments, for , assign every to one of bins independently, according to a uniform distribution on . We refer to these two sets as the destination bins, and denote these assignments by .


Next, generate a superposition channel codebook with blocklength , rates , , auxiliary vectors , , and channel codewords , , , as detailed in [2, Appendix A].

B. Encoding

Consider the sequences and side information , , and , all of length . Partition each sequence into length- subsequences, , , , and , . A total of source samples are transmitted in blocks of channel symbols each. For any fixed with , we can achieve a rate arbitrarily close to by increasing , i.e., as .

At block 1, transmitter , , observes source subsequence and finds its corresponding relay bin index . It transmits the channel codeword . In block , source terminal transmits the channel codeword , where , and . In block , the source terminal transmits .

At block , the relay simply transmits . Assume that at block , the relay estimates . Let denote the estimates. The relay then finds the corresponding destination bin indices , and transmits the channel codeword .
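The block-Markov schedule described above can be sketched as follows. The number of blocks and the bin-index values are illustrative, and the dummy index 1 stands in for the fixed codeword used when no fresh or cooperative information is available (block 1 at the relay, block B+1 at the sources):

```python
# Hedged sketch of the block-Markov transmission schedule in a DF scheme:
# in channel block b, a source terminal superimposes its new bin index on
# the bin index of the previous block, which the relay (having decoded it)
# forwards in parallel. Values are illustrative, not the paper's notation.

B = 4
relay_bin = {b: 10 + b for b in range(1, B + 1)}  # bin index of source block b

schedule = []
for b in range(1, B + 2):           # B + 1 channel blocks in total
    fresh = relay_bin.get(b, 1)     # new bin index (dummy 1 in block B + 1)
    coop = relay_bin.get(b - 1, 1)  # cooperation info (dummy 1 in block 1)
    schedule.append((b, fresh, coop))
    print(f"block {b}: sources send bin {fresh} superimposed on bin {coop}; "
          f"relay forwards block {b - 1}")
```

The one-block delay visible here (the relay forwards block b-1 while the sources send block b) is exactly why backward decoding at the destination, described next, starts from the last block.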

C. Decoding

The relay decodes the source sequences sequentially, trying to reconstruct the source blocks , at the end of channel block , as follows. Let be the estimates of obtained at the end of block . Applying and , the relay finds the corresponding destination bin indices . At time , the relay channel decoder decodes by looking for a unique pair such that:

(A.1)

The decoded relay bin indices, denoted , are then given to the relay source decoder, which estimates . The relay source decoder declares as the decoded sequences if it is the unique pair of sequences that satisfies and . The decoded sequences are denoted by .

Decoding at the destination is done using backward decoding. The destination node waits until the end of channel block . It first tries to decode using the received signal at channel block and its side information . Going backward from the last channel block to the first, we assume that the destination has estimates of , and consider decoding of . From , the destination finds the relay bin indices . At block , the destination channel decoder first estimates the destination bin indices by looking for a unique pair such that:

(A.2)

The decoded destination bin indices, denoted , are then given to the destination source decoder, which estimates the source sequences . The destination source decoder declares as the decoded sequences if it is the unique pair of sequences that satisfies and . The decoded sequences are denoted by .

D. Error Probability Analysis

Using standard techniques, it can be shown that the source sequences can be reliably decoded at the relay as long as (8a)–(8c) hold, and reliably decoded at the destination as long as (8d)–(8f) hold.
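The joint typicality tests underlying decoding rules such as (A.1) and (A.2) can be mimicked numerically. The sketch below uses an arbitrary toy pmf and checks only the joint-entropy condition of weak typicality (omitting the marginal conditions), so it illustrates the principle rather than the full decoder:

```python
import math
import random
from collections import Counter

random.seed(1)

# Toy joint pmf p(x, y) (illustrative values).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def joint_entropy(pmf):
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

def empirical_joint_entropy(xs, ys):
    counts = Counter(zip(xs, ys))
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def jointly_typical(xs, ys, eps=0.1):
    """Simplified weak-typicality test: empirical entropy near H(X, Y)."""
    return abs(empirical_joint_entropy(xs, ys) - joint_entropy(p)) < eps

# Draw a pair of sequences from the joint pmf; they pass the test w.h.p.
n = 5000
pairs = random.choices(list(p), weights=list(p.values()), k=n)
xs, ys = zip(*pairs)
print(jointly_typical(xs, ys))
```

A candidate pair drawn from the wrong distribution (e.g., with the second sequence independently shuffled) would have empirical joint entropy near H(X) + H(Y) instead of H(X, Y) and would fail the test, which is the mechanism that lets the decoder isolate the unique transmitted pair.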

APPENDIX B
PROOF OF THEOREM 7

Fix a distribution.

A. Codebook Construction

For , assign every to one of bins independently, according to a uniform distribution on . Denote this assignment by .

For the channel codebook, for each , generate codewords , , by choosing the letters independently according to the distribution . For each , generate one length- codeword by choosing the letters independently with distribution , . For each pair , find the corresponding , and generate one length- codeword , , by choosing the letters independently with distribution , . Finally, generate one length- relay codeword for each pair , by choosing independently with distribution , .

B. Encoding

Consider the sequences , , and , all of length . Partition each sequence into length- subsequences, , , , and , . A total of source samples are transmitted in blocks of channel symbols each. At block 1, source terminal , , finds , and transmits the channel codeword . At block , source terminal , , transmits the channel codeword , where


is the bin index of source vector . Let be two sequences generated i.i.d. according to . These sequences are known to all nodes. At block , source terminal transmits .

At block , the relay transmits . Assume that at block , the relay has estimates of . It then finds the corresponding bin indices , and transmits the channel codeword .

C. Decoding

The relay decodes the source sequences sequentially, trying to reconstruct at the end of channel block , as follows. Let be the estimates of obtained at the end of block . The relay thus knows the corresponding bin indices . Using this information, its received signal , and the side information , the relay decodes by looking for a unique pair such that:

(B.1)

where . Denote the decoded sequences by .

Decoding at the destination is done using backward de-coding. Let be a sequence generated i.i.d. accordingto . The destinationnode waits until the end of channel block . It first triesto decode using the received signal at channelblock and . Going backward from the last channelblock to the first, we assume that the destination has estimates

of , and therefore has the esti-mates and . At block

, the destination channel decoder first estimates the des-tination bin indices , corresponding to , based

on its received signal and the side information , by looking for a unique pair such that:

(B.2)

The decoded destination bin indices, denoted by , are then given to the destination source decoder, which estimates by looking for a unique pair of sequences

that satisfies and . The decoded sequences are denoted by .
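The backward-decoding schedule described above can be sketched in a few lines: blocks are processed from last to first, so the estimate produced for block b+1 is already available when block b is decoded. The `decode_block` callable is a hypothetical placeholder for the typicality decoder.

```python
def backward_decode(channel_outputs, decode_block):
    # Decode the final block first; when block b is processed, the
    # estimate produced for block b+1 is passed in as known context.
    num_blocks = len(channel_outputs)
    estimates = [None] * num_blocks
    future_estimate = None  # nothing decoded yet when we start
    for b in range(num_blocks - 1, -1, -1):
        estimates[b] = decode_block(channel_outputs[b], future_estimate)
        future_estimate = estimates[b]
    return estimates

# Trivial decoder that records the order in which blocks are visited.
order = []
backward_decode([10, 20, 30], lambda y, fut: order.append(y) or y)
assert order == [30, 20, 10]
```

The price of this schedule is delay: no source block is released until all channel blocks have been received.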

D. Error Probability Analysis

We start with the relay error probability analysis. Let denote the relay decoding

error event in block , assuming are available at the relay, and are the source sequences at block . Thus, this error event is the event that

. Let D denote the event that . The average decoding error probability at the relay in block , , is defined in (B.3) shown at the bottom of the page. In the following, we show that the inner sum in (B.3) can be upper bounded independently of . Therefore, for any fixed value of , we have (B.4) at the bottom of the page, where (B.4a) follows from the union bound and (B.4b) follows from the AEP [37, Ch. 5.1], for sufficiently large , and as is a deterministic function of . This deterministic relationship implies that if and only if . Note also that (B.4b) follows similarly to [7, eq. (16)]. Next, we show that for , the summands in (B.4b) can be upper bounded independently of

, for any fixed value of . Let , and be positive numbers such that

and as . Assuming correct decoding at block

(B.3)

(B.4a)

D (B.4b)


(hence are available at the relay), we define the following events:

From the AEP [37, Ch. 5.1], for sufficiently large , D . Therefore, we can bound

D

(B.5)
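The AEP bound invoked here states that the probability of an i.i.d. block being typical approaches 1 as the block length grows. A quick Monte Carlo check of this for a Bernoulli source (the source parameter, block length, and tolerance are arbitrary illustrative choices):

```python
import random

def typical_fraction(p, n, eps, trials=1000, seed=1):
    # Estimate P(X^n is eps-typical) for i.i.d. Bernoulli(p) samples,
    # where typical means the empirical frequency of ones is within
    # eps of p (the weak law of large numbers / AEP in action).
    rng = random.Random(seed)
    hits = sum(
        abs(sum(rng.random() < p for _ in range(n)) / n - p) <= eps
        for _ in range(trials)
    )
    return hits / trials

# With n = 2000 and eps = 0.05, essentially every block is typical.
assert typical_fraction(p=0.3, n=2000, eps=0.05) > 0.99
```

This is exactly why the atypicality events in the analysis contribute vanishing probability for sufficiently large block length.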

The event is the union of the following events:

Hence, by the union bound, it follows that . To bound , we first define the event as follows:

(B.6)

Recalling that for then , we have

(B.7)

Note that in (B.7) we consider , as otherwise . Therefore, in

the following, we upper bound

via an expression that is independent

of . To reduce clutter, let us denote , , , , , , , , , by , , , , , , , , , , respectively. Note that the joint distribution obeys

(B.8)

Next, we use the assignments

(B.9)

Equation (B.8) shows that the assignments (B.9) satisfy the assumptions of [7, Lemma, Appendix A]. Using this lemma, we bound as follows:

(B.10)

where , and we used the fact that is a function of , and the Markov chains , and

. Plugging (B.10) into (B.7), we have

(B.11)

which can be bounded by , for large enough , as long as

(B.12)

Following similar arguments as in (B.6)–(B.11), we can also show that can be bounded by , for large enough , as long as

(B.13)


and can be bounded by , for large enough , as long as

(B.14)

To bound , we first define the event as follows:

(B.15)

Recalling that , , , we have

(B.16)

Let A denote the event that . Note that if A holds, then

A . Hence, we can write (B.17) at the bottom of the page, where (B.17) follows from [37, Th. 6.7]. In the following, we upper bound the summands in (B.17) independently of , and . To reduce clutter, let us denote , , ,

, , , by , , , , , , , respectively. The joint distribution obeys

(B.18)

Moreover, note that the independence of from , and conditioning on A , implies that for large enough ,

. Hence, we can use [7, Lemma, Appendix A] with the following assignments:

(B.19)

to bound

(B.20)

where follows from the Markov chain . From (B.20) and (B.17), we

obtain

A

(B.21)

and by using the bound , [37, Th. 6.2], we have that

A

(B.22)

Plugging (B.22) into (B.16), we have

(B.23)

A

A A

A

A

A (B.17)


which can be bounded by , for large enough , if

(B.24)

Lastly, to bound , we first define the event as follows:

(B.25)

Recalling that , we have

(B.26)

Then, we have

A

A

A

A (B.27)

where (B.27) follows from the same argument leading to (B.17). In the following, we upper bound the summands in (B.27) independently of , and . Let us denote

, , , , , , by ,

respectively. Note that the joint distribution obeys

(B.28)

Similarly to the analysis for , we have that . Hence, we can use

[7, Lemma, Appendix A] with the following assignments:

(B.29)

Then, we get the following bound:

(B.30)

where follows from the Markov chain . From (B.30) and (B.27), we

have

A

A

A (B.31)

However, for , the following holds:

A

A

Hence, using the fact that , we have that

A

(B.32)

Finally, plugging (B.32) into (B.26), we have

(B.33)

which can be bounded by , for large enough , as long as

(B.34)

However, condition (B.34) is redundant since it is dominated by condition (B.24); hence, we conclude that if conditions (30a)–(30d) hold, then for large enough

(B.35)


Combining (B.4), (B.5), and (B.35) yields

(B.36)

Next, the destination error probability analysis is derived. Channel decoder: Let denote

the channel decoding error event for decoding at the destination at block , assuming is available at the destination, namely, the event that

. Let . The average probability of channel decoding error at the destination at block , , is defined in (B.37) shown at the bottom of the page, where in step (a) leading to (B.37) we apply similar reasoning as in [7, eq. (16)]. In the following, we show that the inner sum in (B.37) can be upper bounded independently of . Assuming correct decoding at block (hence are available at the destination), we now define the following events:

The average probability of error for decoding at the destination at block , for fixed , subject to the event D , is then upper bounded by

D

D (B.38)

where (B.38) follows from the union bound. From the AEP [37, Ch. 5.1], D can be upper bounded by for sufficiently large . Let be a positive number such that and as . To bound

, we first define the event as follows:

(B.39)

Recalling that , we can bound

(B.40)

Using [36, Th. 14.2.3], can be bounded by

(B.41)

where (a) follows from the Markov chains and

. Plugging (B.41) into (B.40), we have

(B.42)

D (B.37)


which can be bounded by , for large enough , as long as

(B.43)

Following similar arguments as in (B.39)–(B.43), we can also show that can be bounded by , for large enough , as long as

(B.44)

and can be bounded by , for large enough , as long as

(B.45)

Hence, if conditions (B.43)–(B.45) hold, then for large enough , .

Source decoder: From the SW theorem [12] it follows that, given correct decoding of , the average probability of error in decoding at the destination can be made arbitrarily small for sufficiently large , as long as

(B.46a)

(B.46b)

(B.46c)

Combining conditions (B.43)–(B.45) with conditions (B.46) yields the destination decoding constraints (30e)–(30g) in Theorem 7.
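The Slepian–Wolf conditions invoked above can be checked numerically for any joint pmf. The rate-pair test below is a generic sketch of the two-source SW region (side information omitted for simplicity); the function names and the example pmf are illustrative assumptions.

```python
import math

def entropy(probs):
    # Shannon entropy in bits of a probability list/iterable.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def in_sw_region(r1, r2, pxy):
    # Slepian-Wolf region for a joint pmf pxy = {(a, b): prob}:
    # r1 >= H(X|Y), r2 >= H(Y|X), and r1 + r2 >= H(X, Y).
    px, py = {}, {}
    for (a, b), p in pxy.items():
        px[a] = px.get(a, 0.0) + p
        py[b] = py.get(b, 0.0) + p
    h_xy = entropy(pxy.values())
    h_x_given_y = h_xy - entropy(py.values())
    h_y_given_x = h_xy - entropy(px.values())
    return r1 >= h_x_given_y and r2 >= h_y_given_x and r1 + r2 >= h_xy

# Doubly symmetric binary source with crossover probability 0.1.
pxy = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
assert in_sw_region(0.5, 1.0, pxy)      # inside the region
assert not in_sw_region(0.4, 0.4, pxy)  # rates too small
```

For this example H(X|Y) = h(0.1) ≈ 0.469 bits, so a rate pair like (0.5, 1.0) is admissible while (0.4, 0.4) is not.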

APPENDIX C
PROOF OF THEOREM 8

Fix a distribution.

A. Codebook Construction

For , assign every to one of bins independently according to a uniform distribution on

. Denote this assignment by . For each , generate one length- codeword by

choosing the letters independently with distribution , for . For each pair , set , and generate one length- codeword

, by choosing the letters independently with distribution

for . Finally, generate one length- relay codeword for each pair

by choosing independently with distribution for

.

B. Encoding

Consider the sequences , , and , all of length . Partition each sequence into length- subsequences, , , , and , . A total of source samples are

transmitted in blocks of channel symbols each. Let be two sequences generated i.i.d. according to . These sequences are known to all nodes. At block 1, source terminal , observes , finds its corresponding bin index , and transmits the channel codeword . At block , source terminal , transmits the channel codeword

, where . At block , source terminal , transmits

. Let . At block , the

relay transmits . Assume that at block , the relay has estimates

of , and let . The relay then transmits the channel codeword

.
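The block-Markov timeline above, in which each transmitted codeword is indexed by both the fresh bin index and the index of the previous block, can be sketched as follows. The dummy index and the extra terminating block are illustrative conventions standing in for the known sequences transmitted in the first and last blocks.

```python
def block_markov_schedule(bin_indices, dummy=1):
    # At block b the terminal transmits the codeword indexed by the pair
    # (current bin index, previous bin index); the first block pairs the
    # fresh index with a fixed dummy index known to all nodes.
    schedule, prev = [], dummy
    for w in bin_indices:
        schedule.append((w, prev))
        prev = w
    # One extra block re-sends the last index paired with the dummy, so
    # the destination has a known starting point for backward decoding.
    schedule.append((dummy, prev))
    return schedule

assert block_markov_schedule([3, 5, 2]) == [(3, 1), (5, 3), (2, 5), (1, 2)]
```

The overlap between consecutive blocks is what lets the relay's transmission in block b+1 help resolve the indices of block b.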

C. Decoding

The relay decodes the source sequences sequentially, trying to reconstruct source block , at the end of channel block as follows. Let be the estimates of at the end of block , and let

. The relay channel decoder at time decodes , by looking for a unique pair

such that:

(C.1)

The decoded bin indices, denoted , are then given to the relay source decoder, which estimates

by looking for a unique pair of sequences that satisfies , ,

and . Denote the decoded sequences by .

Decoding at the destination is done using backward decoding. The destination node waits until the end of channel block . It first tries to decode , using the received signal at channel block and its side information . Going backward from the last channel block to the first, at

channel block , we assume that the destination has estimates of and consider decoding of

. From , the destination finds the corresponding . Then, the destination decodes

by looking for a unique pair such that:

(C.2)

where . Denote the decoded sequences by and .


D. Error Probability Analysis

Following arguments similar to those in Appendix B-D, it can be shown that decoding the source sequences at the relay can be done reliably as long as (32a)–(32c) hold, and decoding the source sequences at the destination can be done reliably as long as (32d)–(32g) hold.

REFERENCES

[1] G. Kramer and A. J. van Wijngaarden, “On the white Gaussian multiple-access relay channel,” in Proc. IEEE Int. Symp. Inf. Theory, Sorrento, Italy, Jun. 2000, p. 40.

[2] G. Kramer, M. Gastpar, and P. Gupta, “Cooperative strategies and capacity theorems for relay networks,” IEEE Trans. Inf. Theory, vol. 51, no. 9, pp. 3037–3063, Sep. 2005.

[3] L. Sankaranarayanan, G. Kramer, and N. B. Mandayam, “Offset encoding for multiaccess relay channels,” IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3814–3821, Oct. 2007.

[4] R. Tandon and H. V. Poor, “On the capacity region of multiple-access relay channels,” in Proc. Conf. Inf. Sci. Syst., Baltimore, MD, USA, Mar. 2011, pp. 1–5.

[5] C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., vol. 27, pp. 379–423 and pp. 623–656, 1948.

[6] C. E. Shannon, “Two-way communication channels,” in Proc. 4th Berkeley Symp. Math. Statist. Probab., 1961, vol. 1, pp. 611–644.

[7] T. M. Cover, A. El Gamal, and M. Salehi, “Multiple access channels with arbitrarily correlated sources,” IEEE Trans. Inf. Theory, vol. 26, no. 6, pp. 648–657, Nov. 1980.

[8] D. Gündüz, E. Erkip, A. Goldsmith, and H. V. Poor, “Source and channel coding for correlated sources over multiuser channels,” IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 3927–3944, Sep. 2009.

[9] T. M. Cover and A. El Gamal, “Capacity theorems for the relay channel,” IEEE Trans. Inf. Theory, vol. 25, no. 5, pp. 572–584, Sep. 1979.

[10] E. Tuncel, “Slepian–Wolf coding over broadcast channels,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1469–1482, Apr. 2006.

[11] L. Sankar, N. B. Mandayam, and H. V. Poor, “On the sum-capacity of the degraded Gaussian multiaccess relay channel,” IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5394–5411, Dec. 2009.

[12] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471–480, Jul. 1973.

[13] L. Sankaranarayanan, G. Kramer, and N. B. Mandayam, “Capacity theorems for the multiple-access relay channel,” in Proc. 42nd Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, USA, Sep. 2004, pp. 1782–1791.

[14] S. Shamai and S. Verdú, “Capacity of channels with side information,” Eur. Trans. Telecommun., vol. 6, no. 5, pp. 587–600, Sep. 1995.

[15] G. Dueck, “A note on the multiple access channel with correlated sources,” IEEE Trans. Inf. Theory, vol. 27, no. 2, pp. 232–235, Mar. 1981.

[16] R. Ahlswede and T. S. Han, “On source coding with side information via a multiple access channel and related problems in multiuser information theory,” IEEE Trans. Inf. Theory, vol. 29, no. 3, pp. 396–412, May 1983.

[17] T. S. Han and M. H. M. Costa, “Broadcast channels with arbitrarily correlated sources,” IEEE Trans. Inf. Theory, vol. 33, no. 5, pp. 641–650, Sep. 1987.

[18] G. Kramer and C. Nair, “Comments on broadcast channels with arbitrarily correlated sources,” in Proc. IEEE Int. Symp. Inf. Theory, Seoul, Korea, Jul. 2009, pp. 2777–2779.

[19] M. Salehi and E. Kurtas, “Interference channels with correlated sources,” in Proc. IEEE Int. Symp. Inf. Theory, San Antonio, TX, USA, Jan. 1993, p. 208.

[20] T. Han and K. Kobayashi, “A new achievable rate region for the interference channel,” IEEE Trans. Inf. Theory, vol. 27, no. 1, pp. 49–60, Jan. 1981.

[21] W. Liu and B. Chen, “Interference channels with arbitrarily correlated sources,” IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 8027–8037, Dec. 2011.

[22] N. Liu, D. Gündüz, A. Goldsmith, and H. V. Poor, “Interference channels with correlated receiver side information,” IEEE Trans. Inf. Theory, vol. 56, no. 12, pp. 5984–5998, Dec. 2010.

[23] D. Gündüz and E. Erkip, “Reliable cooperative source transmission with side information,” in Proc. IEEE Inf. Theory Workshop, Bergen, Norway, Jul. 2007, pp. 22–26.

[24] D. Gündüz, E. Erkip, A. Goldsmith, and H. V. Poor, “Reliable joint source-channel cooperative transmission over relay networks,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2442–2458, Apr. 2013.

[25] R. Kwak, W. Lee, A. El Gamal, and J. Cioffi, “Relay with side information,” in Proc. IEEE Int. Symp. Inf. Theory, Nice, France, Jun. 2007, pp. 606–610.

[26] M. Sefidgaran, B. Akhbari, Y. Mohsenzadeh, and M. R. Aref, “Reliable source transmission over relay networks with side information,” in Proc. IEEE Int. Symp. Inf. Theory, Seoul, Korea, Jul. 2009, pp. 699–703.

[27] D. Gündüz and E. Erkip, “Joint source-channel codes for MIMO block-fading channels,” IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 116–134, Jan. 2008.

[28] S. Wu and Y. Bar-Ness, “OFDM systems in the presence of phase noise: consequences and solutions,” IEEE Trans. Commun., vol. 52, no. 11, pp. 1988–1996, Nov. 2004.

[29] U. Erez, M. D. Trott, and G. W. Wornell, “Rateless coding and perfect rate-compatible codes for Gaussian channels,” in Proc. IEEE Int. Symp. Inf. Theory, Seattle, WA, USA, Jul. 2006, pp. 528–532.

[30] B. Sklar, “Rayleigh fading channels in mobile digital communication systems Part I: characterization,” IEEE Commun. Mag., vol. 35, no. 7, pp. 90–100, Jul. 1997.

[31] R. Dabora, “The capacity region of the fading interference channel with a relay in the strong interference regime,” IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5172–5184, Aug. 2012.

[32] C. T. K. Ng, N. Jindal, A. J. Goldsmith, and U. Mitra, “Capacity gain from two-transmitter and two-receiver cooperation,” IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3822–3827, Oct. 2007.

[33] G. Farhadi and N. C. Beaulieu, “On the ergodic capacity of wireless relaying systems over Rayleigh fading channels,” IEEE Trans. Wireless Commun., vol. 7, no. 11, pp. 4462–4467, Nov. 2008.

[34] G. Farhadi and N. C. Beaulieu, “On the ergodic capacity of multi-hop wireless relaying systems,” IEEE Trans. Wireless Commun., vol. 8, no. 5, pp. 2286–2291, May 2009.

[35] I. Stanojev, O. Simeone, Y. Bar-Ness, and C. You, “Performance of multi-relay collaborative hybrid-ARQ protocols over fading channels,” IEEE Commun. Lett., vol. 10, no. 7, pp. 522–524, Jul. 2006.

[36] T. M. Cover and J. Thomas, Elements of Information Theory. New York, NY, USA: Wiley, 1991.

[37] R. W. Yeung, A First Course in Information Theory. New York, NY, USA: Springer-Verlag, 2002.

[38] F. D. Neeser and J. L. Massey, “Proper complex random processes with applications to information theory,” IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1293–1302, Jul. 1993.

[39] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge, U.K.: Cambridge Univ. Press, 2012.

[40] J. L. Massey, “Causality, feedback and directed information,” in Proc. IEEE Int. Symp. Inf. Theory Appl., Waikiki, HI, USA, Nov. 1990, pp. 303–305.

[41] Y. Oohama, “Gaussian multiterminal source coding,” IEEE Trans. Inf. Theory, vol. 43, no. 6, pp. 1912–1923, Nov. 1997.

[42] M. Abramowitz and I. A. Stegun, Exponential Integral and Related Functions. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed. New York, NY, USA: Dover, 1972.

[43] F. M. J. Willems, “Informationtheoretical Results for the Discrete Memoryless Multiple Access Channel,” Doctor in de Wetenschappen Proefschrift dissertation, Katholieke Univ. Leuven, Leuven, Belgium, Oct. 1982.

[44] P. Gács and J. Körner, “Common information is much less than mutual information,” Probl. Control Inf. Theory, vol. 2, pp. 149–162, 1973.

[45] H. S. Witsenhausen, “On sequences of pairs of dependent random variables,” SIAM J. Appl. Math., vol. 28, pp. 100–113, Jan. 1975.

Yonathan Murin received his B.Sc. degree in Electrical Engineering and Computer Science from Tel-Aviv University, Israel, in 2004, and his M.Sc. (magna cum laude) degree in Electrical Engineering from Ben-Gurion University, Israel, in 2011. He is currently pursuing a Ph.D. degree in Electrical Engineering at Ben-Gurion University. From 2004 to 2010 he worked as a DSP and algorithms engineer, and as a team leader, at Comsys Communication and Signal Processing. His research interests include network information theory, wireless communications, and power line communications.


Ron Dabora (M’03) received his B.Sc. and M.Sc. degrees in 1994 and 2000, respectively, from Tel-Aviv University, and his Ph.D. degree in 2007 from Cornell University, all in Electrical Engineering. From 1994 to 2000 he worked as an engineer at the Ministry of Defense of Israel, and from 2000 to 2003, he was with the Algorithms Group at Millimetrix Broadband Networks, Israel. From 2007 to 2009 he was a postdoctoral researcher at the Department of Electrical Engineering, Stanford University. Since 2009 he is an assistant professor at the Department of Electrical and Computer Engineering, Ben-Gurion University, Israel. His research interests include network information theory, wireless communications, and power line communications. He currently serves as an associate editor for the IEEE SIGNAL PROCESSING LETTERS.

Deniz Gündüz (M’12) received the B.S. degree in electrical and electronics engineering from the Middle East Technical University, Ankara, Turkey, in 2002, and the M.S. and Ph.D. degrees in electrical engineering from Polytechnic Institute of New York University, Brooklyn, NY, in 2004 and 2007, respectively. Currently he is a lecturer in the Electrical and Electronic Engineering Department of Imperial College London, London, U.K. He was a research associate at CTTC in Barcelona, Spain, from November 2009 until September 2012. He also held a visiting researcher position at Princeton University from November 2009 until November 2011. Previously he was a consulting assistant professor at the Department of Electrical Engineering, Stanford University, and a postdoctoral Research Associate at the Department of Electrical Engineering, Princeton University.

He is the recipient of a Marie Curie Reintegration Grant funded by the European Commission, the 2008 Alexander Hessel Award of Polytechnic Institute of New York University given to the best Ph.D. dissertation, and a recipient of the Best Student Paper Award at the 2007 IEEE International Symposium on Information Theory (ISIT).

He is an Editor of the IEEE TRANSACTIONS ON COMMUNICATIONS, and served as a guest editor of the EURASIP Journal on Wireless Communications and Networking, Special Issue on Recent Advances in Optimization Techniques in Wireless Communication Networks. He is a co-chair of the Network Theory Symposium at the 2013 IEEE Global Conference on Signal and Information Processing (GlobalSIP), and also served as a co-chair of the 2012 IEEE European School of Information Theory (ESIT). His research interests lie in the areas of communication theory and information theory with special emphasis on joint source-channel coding, multi-user networks, energy efficient communications, and security.