
Page 1: 46351_ch2

2. Channel Capacity and Coding

Dr. Lim H.S.

v1.1 (Last Updated: 4 Nov 2010)

Lim H.S. @MMU

Page 2: 46351_ch2

Presentation outline

◦ Discrete Memoryless Channels

◦ Channel Capacity

◦ Channel Coding Theorem

◦ Capacity of a Gaussian Channel


Page 3: 46351_ch2

Discrete Memoryless Channels

• Definition

¦ A discrete memoryless channel is a statistical model with an input X and an output Y (a noisy version of X); both X and Y are random variables.

¦ In each unit of time, the channel accepts an input symbol X selected from an alphabet X

X = {x0, x1, ..., xJ−1} (1)

and, in response, it emits an output symbol Y from an alphabet Y

Y = {y0, y1, ..., yK−1} (2)

¦ The channel is discrete when both of the alphabets X and Y have finite sizes.

¦ It is said to be memoryless when the current output symbol depends only on the current input symbol and not on any of the previous ones.


Page 4: 46351_ch2

Discrete Memoryless Channels

• Definition (cont.)

¦ The channel is described in terms of an input alphabet X, an output alphabet Y, and a set of transition probabilities

p(yk|xj) = P (Y = yk|X = xj) for all j and k (3)

Figure 1: Discrete memoryless channel.

¦ The input alphabet X and output alphabet Y need not have the same size.


Page 5: 46351_ch2

Discrete Memoryless Channels

• Definition (cont.)

¦ A convenient way of describing a discrete memoryless channel is to arrange the various transition probabilities of the channel in the form of a matrix as follows:

P = \begin{bmatrix}
      p(y_0|x_0)     & p(y_1|x_0)     & \cdots & p(y_{K-1}|x_0) \\
      p(y_0|x_1)     & p(y_1|x_1)     & \cdots & p(y_{K-1}|x_1) \\
      \vdots         & \vdots         &        & \vdots         \\
      p(y_0|x_{J-1}) & p(y_1|x_{J-1}) & \cdots & p(y_{K-1}|x_{J-1})
    \end{bmatrix}   (4)

¦ The J-by-K matrix P is called the channel matrix.

¦ The sum of the elements along any row of the matrix is equal to one; that is,

\sum_{k=0}^{K-1} p(y_k|x_j) = 1 \quad \text{for all } j   (5)
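As an illustration (not from the original slides), the short Python sketch below builds the channel matrix of a binary symmetric channel with an assumed crossover probability p = 0.1 and checks the row-sum property of Eq. (5):

    import numpy as np

    p = 0.1  # assumed crossover probability, for illustration only

    # Channel matrix P: rows indexed by input x_j, columns by output y_k.
    P = np.array([[1 - p, p],
                  [p, 1 - p]])

    # Eq. (5): every row of the channel matrix must sum to one.
    assert np.allclose(P.sum(axis=1), 1.0)
    print(P)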


Page 6: 46351_ch2

Discrete Memoryless Channels

• Definition (cont.)

¦ Suppose that the inputs to a discrete memoryless channel are selected according to the probability distribution

p(xj) = P (X = xj) for j = 0, 1, ..., J − 1 (6)

¦ The probabilities p(xj) are called the a priori probabilities of the various input symbols.

¦ The joint probability distribution of the random variables X and Y is given by

p(xj, yk) = P (X = xj, Y = yk)

= P (Y = yk|X = xj)P (X = xj)

= p(yk|xj)p(xj) (7)


Page 7: 46351_ch2

Discrete Memoryless Channels

• Definition (cont.)

¦ The marginal probability distribution of the output random variable Y is obtained by averaging out the dependence of p(xj, yk) on xj, as shown by

p(y_k) = P(Y = y_k)
       = \sum_{j=0}^{J-1} P(Y = y_k | X = x_j) P(X = x_j)
       = \sum_{j=0}^{J-1} p(y_k|x_j) p(x_j), \quad k = 0, 1, ..., K-1   (8)

¦ Eq. (8) states that if we are given the input a priori probabilities p(xj) and the channel matrix, then we can calculate the probabilities of the various output symbols, p(yk).
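A minimal sketch of Eq. (8), assuming the binary symmetric channel of the next slide with p = 0.1 and arbitrarily chosen a priori probabilities (both values are my own illustration):

    import numpy as np

    p = 0.1                            # assumed crossover probability
    P = np.array([[1 - p, p],          # channel matrix p(y_k | x_j)
                  [p, 1 - p]])
    px = np.array([0.7, 0.3])          # assumed a priori probabilities p(x_j)

    # Eq. (8): p(y_k) = sum_j p(y_k | x_j) p(x_j)
    py = px @ P
    print(py)                          # [0.66, 0.34] for these numbers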


Page 8: 46351_ch2

Discrete Memoryless Channels

• Binary Symmetric Channel

¦ The binary symmetric channel is a special case of the discrete memoryless channel with J = K = 2.

¦ The channel has two input symbols (x0 = 0, x1 = 1) and two output symbols (y0 = 0, y1 = 1).

¦ The channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is sent. This conditional probability of error is denoted by p.

Figure 2: Transition probability diagram of binary symmetric channel.


Page 9: 46351_ch2

Channel Capacity

• What is Conditional Entropy?

¦ Of practical interest in many communication applications is the number of bits that may be reliably transmitted per second through a given communication channel.

¦ In this section we shall provide a theoretical definition of this channel capacity.

¦ Given that we think of the channel output Y as a noisy version of the channel input X, and that the entropy H(X) is a measure of the prior uncertainty about X, how can we measure the uncertainty about X after observing Y?

¦ To answer this question, we define the conditional entropy of X given that Y = yk as

H(X|Y = y_k) = \sum_{j=0}^{J-1} p(x_j|y_k) \log_2 \left[ \frac{1}{p(x_j|y_k)} \right]   (9)


Page 10: 46351_ch2

Channel Capacity

• What is Conditional Entropy? (cont.)

¦ This quantity is itself a random variable that takes on the values H(X|Y = y0), ..., H(X|Y = yK−1) with probabilities p(y0), ..., p(yK−1), respectively.

¦ The mean of the entropy H(X|Y = yk) over the output alphabet Y is therefore given by

H(X|Y) = \sum_{k=0}^{K-1} H(X|Y = y_k) p(y_k)
       = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j|y_k) p(y_k) \log_2 \left[ \frac{1}{p(x_j|y_k)} \right]
       = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j, y_k) \log_2 \left[ \frac{1}{p(x_j|y_k)} \right]   (10)


Page 11: 46351_ch2

Channel Capacity

• What is Conditional Entropy? (cont.)

¦ The quantity H(X|Y) is called a conditional entropy.

¦ It represents the amount of uncertainty remaining about the channel input after the channel output has been observed.
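As a quick numerical sketch (assuming a binary symmetric channel with p = 0.1 and equiprobable inputs, values chosen for illustration only), Eq. (10) can be evaluated directly from the joint and conditional probabilities:

    import numpy as np

    p = 0.1
    P = np.array([[1 - p, p], [p, 1 - p]])   # p(y_k | x_j)
    px = np.array([0.5, 0.5])                # assumed equiprobable inputs

    pxy = px[:, None] * P                    # joint p(x_j, y_k), Eq. (7)
    py = pxy.sum(axis=0)                     # marginal p(y_k), Eq. (8)
    px_given_y = pxy / py                    # posterior p(x_j | y_k)

    # Eq. (10): H(X|Y) = sum_{j,k} p(x_j, y_k) log2[1 / p(x_j | y_k)]
    H_X_given_Y = np.sum(pxy * np.log2(1.0 / px_given_y))
    print(H_X_given_Y)                       # about 0.469 bits for p = 0.1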


Page 12: 46351_ch2

Channel Capacity

• What is Mutual Information?

¦ Since the entropy H(X) represents our uncertainty about the channel input before observing the channel output, and the conditional entropy H(X|Y) represents our uncertainty about the channel input after observing the channel output, it follows that the difference H(X) − H(X|Y) must represent the uncertainty about the channel input that is resolved by observing the channel output.

¦ This important quantity is called the mutual information of the channel:

I(X ;Y) = H(X )−H(X|Y) (11)

¦ Similarly, we may write

I(Y;X ) = H(Y)−H(Y|X ) (12)

where H(Y) is the entropy of the channel output and H(Y|X) is the conditional entropy of the channel output given the channel input.


Page 13: 46351_ch2

Channel Capacity

• What is Mutual Information? (cont.)

¦ Mutual information has a number of properties:

- It is nonnegative:

I(X;Y) ≥ 0   (13)

- It is symmetric:

I(X;Y) = I(Y;X)   (14)

- By combining the expressions for H(X) and H(X|Y), it may be shown that

I(X;Y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 \left[ \frac{p(x_j|y_k)}{p(x_j)} \right]
       = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 \left[ \frac{p(y_k|x_j)}{p(y_k)} \right]   (15)
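Continuing the same numerical sketch (assumed binary symmetric channel with p = 0.1 and equiprobable inputs), Eq. (15) gives the mutual information directly:

    import numpy as np

    p = 0.1
    P = np.array([[1 - p, p], [p, 1 - p]])   # p(y_k | x_j)
    px = np.array([0.5, 0.5])                # assumed input distribution

    pxy = px[:, None] * P                    # joint p(x_j, y_k)
    py = pxy.sum(axis=0)                     # marginal p(y_k)

    # Eq. (15): I(X;Y) = sum_{j,k} p(x_j, y_k) log2[p(y_k|x_j) / p(y_k)]
    I = np.sum(pxy * np.log2(P / py))
    print(I)                                 # about 0.531 bits, i.e. 1 - H(0.1)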


Page 14: 46351_ch2

Channel Capacity

• What is Mutual Information? (cont.)

Figure 3: Illustrating the relations among various channel parameters.


Page 15: 46351_ch2

Channel Capacity

• How to find Channel Capacity?

¦ The channel capacity of a discrete memoryless channel is the maximum average mutual information I(X;Y) in any single use of the channel (i.e., signaling interval), where the maximization is over all possible input probability distributions {p(xj)} on X:

C = \max_{\{p(x_j)\}} I(X;Y)   (16)

¦ It is measured in bits per channel use.

¦ Note that the channel capacity C is a function only of the transition probabilities p(yk|xj), which define the channel. The calculation of C involves maximization of I(X;Y) over the J variables p(x0), ..., p(xJ−1), subject to two constraints:

p(x_j) \ge 0 \ \text{for all } j \qquad \text{and} \qquad \sum_{j=0}^{J-1} p(x_j) = 1   (17)

¦ In general, finding C is a challenging task.
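The maximization in Eq. (16) rarely has a simple closed form; one crude but instructive approach (my own sketch, not the slides' method) is to sweep the input distribution over a grid and keep the largest mutual information. For the binary symmetric channel this reproduces the curve shown in Figure 4 on the next slide:

    import numpy as np

    def mutual_information(px, P):
        """I(X;Y) per Eq. (15) for input distribution px and channel matrix P."""
        pxy = px[:, None] * P                # joint p(x_j, y_k)
        py = pxy.sum(axis=0)                 # output marginal p(y_k)
        mask = pxy > 0                       # skip 0 * log(0) terms
        return float(np.sum(pxy[mask] * np.log2((P / py)[mask])))

    p = 0.1                                  # assumed BSC crossover probability
    P = np.array([[1 - p, p], [p, 1 - p]])

    # Brute-force search over p(x0) on a fine grid, Eqs. (16)-(17).
    grid = np.linspace(0.0, 1.0, 1001)
    C = max(mutual_information(np.array([q, 1 - q]), P) for q in grid)
    print(C)                                 # about 0.531 bits/use, attained at p(x0) = 1/2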


Page 16: 46351_ch2

Channel Capacity

• How to find Channel Capacity? (cont.)

¦ Example: Consider again the binary symmetric channel. The entropy H(X) is maximized when the channel input probabilities are p(x0) = p(x1) = 1/2. The mutual information I(X;Y) is similarly maximized, so that we may write

C = I(X;Y)\big|_{p(x_0)=p(x_1)=\frac{1}{2}}

Figure 4: Channel capacity of a binary symmetric channel with p = 0.1.


Page 17: 46351_ch2

Channel Capacity

• How to find Channel Capacity? (cont.)

¦ The transition probabilities are given by

p(y_0|x_1) = p(y_1|x_0) = p \qquad \text{and} \qquad p(y_0|x_0) = p(y_1|x_1) = 1 - p

Substituting these into Eq. (15), we find the capacity of the binary symmetric channel:

C = \sum_{j=0}^{1} \sum_{k=0}^{1} p(x_j, y_k) \log_2 \left[ \frac{p(y_k|x_j)}{p(y_k)} \right] \Bigg|_{p(x_0)=p(x_1)=\frac{1}{2}}
  = 1 + p \log_2 p + (1 - p) \log_2(1 - p)   (18)
  = 1 - H(p)
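Eq. (18) is straightforward to evaluate numerically; a small sketch (the values of p other than 0.01 and 0.5 are my own choices for illustration):

    import numpy as np

    def bsc_capacity(p):
        """Eq. (18): C = 1 + p*log2(p) + (1-p)*log2(1-p) = 1 - H(p)."""
        if p in (0.0, 1.0):                  # treat 0*log2(0) as 0
            return 1.0
        return 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

    for p in (0.0, 0.01, 0.1, 0.5):
        print(p, bsc_capacity(p))
    # p = 0    -> C = 1 bit/use  (noise-free channel)
    # p = 0.01 -> C ~ 0.919, the value used in the repetition-code example later
    # p = 0.5  -> C = 0 bits/use (useless channel)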


Page 18: 46351_ch2

Channel Capacity

• How to find Channel Capacity? (cont.)

¦ The channel capacity C varies with the transition probability p.

Figure 5: Variation of channel capacity of a binary symmetric channel with transition probability p.


Page 19: 46351_ch2

Channel Capacity

• How to find Channel Capacity? (cont.)

¦ We make the following observations:

- When the channel is noise-free (p = 0), the channel capacity C attains its maximum value of one bit per channel use, which is exactly the information in each channel input. At this value of p, the entropy function H(p) attains its minimum value of zero.

- When the conditional probability of error p = 1/2 due to noise, the channel capacity C attains its minimum value of zero, whereas the entropy function H(p) attains its maximum value of unity; in such a case the channel is said to be useless.


Page 20: 46351_ch2

Channel Coding Theorem

• What is Channel Coding?

¦ The presence of noise in a channel causes errors between the output and input data sequences of a digital communication system.

¦ For a relatively noisy channel, the probability of error may have a value higher than 10⁻², which means that more than 1 out of 100 transmitted bits are received in error.

¦ For many applications, this level of reliability is far from adequate. Indeed, a probability of error equal to 10⁻⁶ or lower is often a necessary requirement.

¦ To achieve such a high level of performance, we may have to resort to the use of channel coding.

¦ The design goal of channel coding is to increase the resistance of a digital communication system to channel noise.


Page 21: 46351_ch2

Channel Coding Theorem

• What is Channel Coding? (cont.)

¦ Channel coding consists of mapping (or encoding) the incoming data sequence into a channel input sequence, and inverse mapping (or decoding) the channel output sequence into an output data sequence, in such a way that the overall effect of channel noise on the system is minimized.

Figure 6: Block diagram of digital communication system. To simplify the exposition, we have not included source encoding and decoding.

¦ The channel encoder and decoder are both under the designer's control and should be designed to optimize the overall effectiveness of the communication system.


Page 22: 46351_ch2

Channel Coding Theorem

• What is Channel Coding? (cont.)

¦ The approach taken is to introduce redundancy in the channel encoder so as to reconstruct the original source sequence as accurately as possible.

¦ Thus, in a rather loose sense, we may view channel coding as the dual of source coding in that the former introduces controlled redundancy to improve reliability, whereas the latter reduces redundancy to improve efficiency.

¦ For the purpose of our present discussion of channel coding, it suffices to confine our attention to block codes.

¦ In this class of codes, the message sequence is subdivided into sequential blocks each k bits long, and each k-bit block is mapped into an n-bit block, where n > k.

¦ The number of redundant bits added by the encoder to each transmitted block is n − k bits.


Page 23: 46351_ch2

Channel Coding Theorem

• What is Channel Coding? (cont.)

¦ The ratio k/n is called the code rate:

r = \frac{k}{n}   (19)

where, of course, r ≤ 1. For example, a block code that maps k = 4 message bits into n = 7 coded bits has code rate r = 4/7 ≈ 0.57.

¦ The accurate reconstruction of the original source sequence at the destination requires that the average probability of symbol error be arbitrarily low.

¦ This raises the following important question:

Does there exist a sophisticated channel coding scheme such that the probability that a message bit will be in error is less than any positive number ε (i.e., as small as we want it), and yet the channel coding scheme is efficient in that the code rate need not be too small?

¦ The answer is “yes” and is provided by Shannon’s second theorem in terms of the channel capacity C.


Page 24: 46351_ch2

Channel Coding Theorem

• How to construct the Best Channel Code?

¦ Suppose that a discrete memoryless source with alphabet S has entropy H(S) bits per source symbol.

¦ We assume that the source emits a symbol once every Ts seconds. Hence, the average information rate of the source is H(S)/Ts bits per second.

¦ The decoder delivers decoded symbols to the destination from the source alphabet S and at the same source rate of one symbol every Ts seconds.

¦ The discrete memoryless channel has a channel capacity equal to C bits per use of the channel.

¦ We assume that the channel is capable of being used once every Tc seconds. Hence, the channel capacity per unit time is C/Tc bits per second, which represents the maximum rate of information transfer over the channel.


Page 25: 46351_ch2

Channel Coding Theorem

• How to construct the Best Channel Code? (cont.)

¦ The channel coding theorem for a discrete memoryless channel is stated in two parts as follows:

1. If

\frac{H(S)}{T_s} \le \frac{C}{T_c}   (20)

there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error. The parameter C/Tc is called the critical rate.

2. Conversely, if

\frac{H(S)}{T_s} > \frac{C}{T_c}   (21)

it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.


Page 26: 46351_ch2

Channel Coding Theorem

• How to construct the Best Channel Code? (cont.)

¦ The channel coding theorem is the single most important result of information theory.

¦ The theorem specifies the channel capacity C as a fundamental limit on the rate at which the transmission of reliable, error-free messages can take place over a discrete memoryless channel.

¦ It is important to note that the channel coding theorem does not show us how to construct a good code.

¦ Rather, the theorem can be characterized as an existence proof in the sense that it tells us that if the condition of Eq. (20) is satisfied, then good codes do exist.


Page 27: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels

¦ Consider a discrete memoryless source that emits equally likely binary symbols (0s and 1s) once every Ts seconds. The source entropy therefore equals one bit per source symbol.

¦ The information rate of the source is 1/Ts bits per second.

¦ The source sequence is applied to a channel encoder with code rate r.

¦ The channel encoder produces a symbol once every Tc seconds. Hence, the encoded symbol transmission rate is 1/Tc symbols per second.

¦ The channel encoder engages a binary symmetric channel once every Tc seconds. Hence, the channel capacity per unit time is C/Tc bits per second, where (see Eq. (18))

C = 1 + p log2 p + (1 − p) log2(1 − p)

and p is the channel transition probability.


Page 28: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels (cont.)

¦ The channel coding theorem implies that if

\frac{1}{T_s} \le \frac{C}{T_c}   (22)

the probability of error can be made arbitrarily low by the use of a suitable channel encoding scheme.

¦ Since the code rate of the channel encoder is r = Tc/Ts, we may rewrite the condition as

r \le C   (23)

¦ That is, for r ≤ C, there exists a code capable of achieving an arbitrarily low probability of error.


Page 29: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels (cont.)

¦ Example: Repetition Code
Consider a binary symmetric channel with transition probability p = 10⁻². The channel capacity is therefore

C = 1 + p log2 p + (1 − p) log2(1 − p) = 0.9192

Hence, from the channel coding theorem, we may state that for any ε > 0 and r ≤ 0.9192, there exists a code of large enough length n and code rate r, and an appropriate decoding algorithm, such that when the coded bit stream is sent over the given channel, the average probability of channel decoding error is less than ε.


Page 30: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels (cont.)

¦ Consider a simple repetition code where each bit of the message is repeated n times and n = 2m + 1 is an odd integer. For example, for n = 3, we transmit 0 and 1 as 000 and 111, respectively.

¦ Intuitively, it would seem logical to use a majority rule for decoding, which operates as follows: if in a block of n received bits the number of 0s exceeds the number of 1s, the decoder decides in favor of 0; otherwise, it decides in favor of 1.

¦ Hence, an error occurs when m + 1 or more bits out of the n bits are received incorrectly.

¦ It can be shown that the average probability of error Pe is given by

P_e = \sum_{i=m+1}^{n} \binom{n}{i} p^i (1 - p)^{n-i}   (24)

where p is the transition probability of the channel.
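A short sketch that evaluates Eq. (24) for the repetition code (p = 10⁻² assumed, as in the example); it can be used to check the table on the next slide:

    from math import comb

    def repetition_error_prob(n, p):
        """Eq. (24): majority-vote decoding of an n = 2m + 1 repetition code fails
        when m + 1 or more of the n repeated bits are flipped by the channel."""
        m = (n - 1) // 2
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1, n + 1))

    p = 1e-2
    for n in (1, 3, 5, 7, 9, 11):
        print(f"r = 1/{n}:  Pe = {repetition_error_prob(n, p):.1e}")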


Page 31: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels (cont.)

¦ Average probability of error for the repetition code, assuming a binary symmetric channel with p = 10⁻².

Code Rate, r = 1/n    Average Probability of Error, Pe
1                     1 × 10⁻²
1/3                   3 × 10⁻⁴
1/5                   1 × 10⁻⁵
1/7                   4 × 10⁻⁷
1/9                   1 × 10⁻⁸
1/11                  5 × 10⁻¹⁰

¦ It is not necessary to have the code rate r approach zero so as to achieve more and more reliable operation of the communication link. The channel coding theorem merely requires that the code rate be less than the channel capacity C.


Page 32: 46351_ch2

Channel Coding Theorem

• Application to Binary Symmetric Channels (cont.)

Figure 7: Illustrating significance of the channel coding theorem.


Page 33: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel

¦ Consider a zero-mean stationary process X(t) that is band-limited to B Hz. Let Xk, k = 1, 2, ..., K, denote the continuous random variables obtained by uniform sampling of the process X(t) at the Nyquist rate of 2B samples per second.

¦ These samples are transmitted in T seconds over a noisy channel, also band-limited to B Hz. Hence, the number of samples, K, is given by

K = 2BT   (25)

¦ We refer to Xk as a sample of the transmitted signal.

¦ The channel output is perturbed by additive white Gaussian noise of zero mean and power spectral density N0/2. The noise is band-limited to B Hz.

¦ Let the continuous random variables Yk denote samples of the received signal,

Yk = Xk + Nk,   k = 1, 2, ..., K   (26)


Page 34: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ The noise sample Nk is Gaussian with zero mean and variance given by

σ² = N0B   (27)

¦ We assume that the samples Yk, k = 1, ..., K, are statistically independent.

¦ A channel for which the received signal and the noise are as described in Eqs. (26) and (27) is called a discrete-time memoryless Gaussian channel.

Figure 8: Model of discrete-time memoryless Gaussian channel.


Page 35: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ Typically, the transmitter is power-limited, with average transmitted power

E[X_k^2] = P,   k = 1, 2, ..., K   (28)

¦ The power-limited Gaussian channel is of not only theoretical but also practical importance in that it models many communication channels, including radio and satellite links.

¦ Let I(Xk; Yk) denote the average mutual information between Xk and Yk. We may then define the information capacity of the channel as

C = \max_{f_{X_k}(x)} \left\{ I(X_k; Y_k) : E[X_k^2] = P \right\}   (29)

where the maximization is performed with respect to f_{X_k}(x), the probability density function of Xk.


Page 36: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ Performing this optimization is beyond the scope of this subject, but the result is

C = \frac{1}{2} \log_2 \left( 1 + \frac{P}{\sigma^2} \right) \ \text{bits per channel use}   (30)

¦ With the channel used K times for the transmission of K samples of the process X(t) in T seconds, the information capacity per unit time is

C = \frac{K}{T} \times \frac{1}{2} \log_2 \left( 1 + \frac{P}{\sigma^2} \right)
  = B \log_2 \left( 1 + \frac{P}{N_0 B} \right) \ \text{bits/s}   (31)

where in the last line we have used K = 2BT and σ² = N0B.
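A small sketch evaluating Eq. (31); the bandwidth, noise density, and SNR below are illustrative assumptions, not values from the slides:

    import numpy as np

    def awgn_capacity(B, P, N0):
        """Eq. (31)/(32): C = B * log2(1 + P / (N0 * B)) bits per second."""
        return B * np.log2(1.0 + P / (N0 * B))

    B = 1e6                  # assumed bandwidth: 1 MHz
    N0 = 1e-9                # assumed noise power spectral density, W/Hz
    P = 100 * N0 * B         # power chosen so that SNR = P/(N0*B) = 100 (20 dB)

    print(awgn_capacity(B, P, N0))   # about 6.66e6 bits/s, since log2(101) ~ 6.66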


Page 37: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ We may now state Shannon’s third (and most famous) theorem, the information capacity theorem, as follows:
The information capacity of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is given by

C = B \log_2 \left( 1 + \frac{P}{N_0 B} \right) \ \text{bits/s}   (32)

where P is the average transmitted power.

¦ The information capacity theorem is one of the most remarkable results of information theory for, in a single formula, it highlights most vividly the interplay among three key system parameters: channel bandwidth, average transmitted power (or, equivalently, average received signal power), and noise power spectral density at the channel output.


Page 38: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ The theorem implies that, for a given average transmitted power P and channel bandwidth B, we can transmit information at the rate of C bits/s, as defined in Eq. (32), with arbitrarily small probability of error by employing sufficiently complex encoding systems.

¦ It is not possible to transmit at a rate higher than C bits/s by any encoding system without a definite probability of error.

¦ Hence, the channel capacity theorem defines the fundamental limit on the rate of error-free transmission for a power-limited, band-limited Gaussian channel.

¦ To approach this limit, however, the transmitted signal must have statistical properties approximating those of white Gaussian noise.


Page 39: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ From Eq. (32), it is clear that the capacity increases with increasing P; in fact, C → ∞ as P → ∞.

¦ Increasing B, however, has a dual role on the capacity. On one hand, it causes the capacity to increase because a higher bandwidth allows more transmissions over the channel per unit time. On the other hand, increasing B decreases the SNR, defined by P/N0B. This is so because increasing the bandwidth increases the effective noise power entering the receiver.

¦ To see how the capacity changes as B → ∞, we use the relation ln(1 + x) → x as x → 0 to get

C_\infty = \lim_{B \to \infty} B \log_2 \left( 1 + \frac{P}{N_0 B} \right) = (\log_2 e) \frac{P}{N_0} \approx 1.44 \frac{P}{N_0} \ \text{bits/s}   (33)
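The limiting behaviour in Eq. (33) is easy to check numerically; a sketch with an assumed ratio P/N0 = 10⁶ (illustrative only):

    import numpy as np

    P_over_N0 = 1e6                           # assumed P/N0, in Hz
    for B in (1e5, 1e6, 1e7, 1e9):
        C = B * np.log2(1.0 + P_over_N0 / B)  # Eq. (32)
        print(f"B = {B:.0e} Hz:  C = {C:.3e} bits/s")

    # Eq. (33): the capacities above approach (log2 e) * P/N0 as B grows.
    print("limit:", np.log2(np.e) * P_over_N0)   # about 1.443e6 bits/s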


Page 40: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ It is clear that having infinite bandwidth cannot increase the capacity indefinitely; its effect is limited by the amount of available power.

¦ This is in contrast to the effect of having infinite power, which, regardless of the amount of available bandwidth, can increase the capacity indefinitely.

¦ To derive a fundamental relation between the bandwidth and power efficiency of a communication system, we note that for reliable communication we must have Rb < C, which in the case of a band-limited AWGN channel is given by

R_b < B \log_2 \left( 1 + \frac{P}{N_0 B} \right)   (34)

¦ Dividing both sides by B and defining the bandwidth efficiency as γ = Rb/B, we obtain

\gamma < \log_2 \left( 1 + \frac{P}{N_0 B} \right)   (35)


Page 41: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ Using the relation Eb = P/Rb, we obtain

\gamma < \log_2 \left( 1 + \frac{E_b R_b}{N_0 B} \right) = \log_2 \left( 1 + \gamma \frac{E_b}{N_0} \right)   (36)

from which we have

\frac{E_b}{N_0} > \frac{2^\gamma - 1}{\gamma}   (37)

¦ This relation states the condition for reliable communication in terms of the bandwidth efficiency γ and Eb/N0, which is a measure of the power efficiency of a system.
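A sketch of Eq. (37) that also shows the Shannon limit of the later slide emerging as γ → 0 (the chosen γ values are illustrative):

    import numpy as np

    def min_Eb_N0_dB(gamma):
        """Eq. (37): smallest Eb/N0 (in dB) allowing reliable communication
        at bandwidth efficiency gamma = Rb/B."""
        return 10.0 * np.log10((2.0**gamma - 1.0) / gamma)

    for gamma in (8, 4, 2, 1, 0.5, 0.1, 0.001):
        print(f"gamma = {gamma:>6}:  Eb/N0 > {min_Eb_N0_dB(gamma):6.2f} dB")

    # As gamma -> 0 the bound tends to 10*log10(ln 2), about -1.59 dB: the Shannon limit.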


Page 42: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ A plot of bandwidth efficiency γ = Rb/B versus Eb/N0 is called the bandwidth-efficiency diagram.

Figure 9: Bandwidth-efficiency diagram.


Page 43: 46351_ch2

Capacity of a Gaussian Channel

• Discrete-Time Memoryless Gaussian Channel (cont.)

¦ We make the following observations:

1. The minimum value of Eb/N0 for which reliable communication is possible is obtained by letting γ → 0 (or equivalently B → ∞) in Eq. (37), which results in

\frac{E_b}{N_0} > \ln 2 = 0.693 \approx -1.6 \ \text{dB}   (38)

This value is called the Shannon limit. It is the minimum required value of Eb/N0 for any communication system. No system can transmit reliably below this limit.

2. The capacity boundary (Rb = C) separates combinations of system parameters that have the potential for supporting error-free transmission (Rb < C) from those for which error-free transmission is not possible (Rb > C).


