cstaff.ustc.edu.cn/~jingxi/lecture 9_10.pdf
TRANSCRIPT
Differential PSK (DPSK) and Its Performance
s(t) = A(t) cos(2π f_c t + θ(t) + θ)
A(t) = amplitude information
θ(t) = phase information
fc = the carrier frequency
θ = the phase shift introduced by the channel
• In coherent communications, θ is known to receivers
• In non-coherent communications, θ is unknown to receivers and
assumed to be a random variable distributed uniformly over (-π, π)
Coherent Detection of Differentially Encoded
M-PSK Signals
• In coherent communications, phase estimation is required.
• The receiver usually derives its frequency and phase demodulation references
from a carrier synchronization loop.
• The synchronization loop may introduce a phase ambiguity Φ = θ − θ̂, where θ̂ is the estimate of θ acquired by the receiver through the synchronization loop.
• For M-PSK signals, the phase ambiguity Φ may take any value in {0, 2π/M, ..., 2π(M−1)/M}. Thus any value of Φ other than 0 will cause a correctly detected phase to be erroneously mistaken for one of the other possible phases, even in the absence of noise.
• Solution: differential encoding while maintaining coherent detection.
Differentially Encoded M-PSK Signals
• Instead of encoding information into absolute phases, the information is now encoded using phase differences between successive signal transmissions.
Example: 0 mapped to phase 0, 1 mapped to phase π
Binary sequence {an}:                          1   1   0   0   1   1
Absolute phase sequence {∆θn}:                 π   π   0   0   π   π
Differentially encoded phase sequence {θn}:  0 | π   0   0   0   π   0
(the leading 0 is the initial value; {θn} is the actually transmitted phase sequence)

• The information sequence {an} is now carried by the phase differences in {θn}.
Differentially Encoded M-PSK Signals
•Assume the phase ambiguity Φ introduced by the synchronization
loop is constant during successive signal intervals. In the absence
of noise, the receiver would convert the received phase sequence
into 0+Φ, π+Φ, 0+Φ, 0+Φ, 0+Φ, π+Φ, 0+Φ.
•The received sequence would then be differentially decoded into π, π, 0, 0, π, π.
•Then we get the original binary sequence (110011) back.
Differentially Encoded M-PSK Signals
Example: M = 4. Assume that the following Gray mapping is used:

Binary pair:      00    10    11    01
Absolute phase:   0     π/2   π     3π/2

Binary sequence {an}:                        10      11      00      10      01      11
Absolute phase sequence {∆θn}:               π/2     π       0       π/2     3π/2    π
Differentially encoded phase sequence {θn}:  π/2     3π/2    3π/2    0       3π/2    π/2
Estimated phase sequence {θ̂n}
(in the absence of noise):                   π/2+Φ   3π/2+Φ  3π/2+Φ  0+Φ     3π/2+Φ  π/2+Φ
Differentially decoded sequence {∆θ̂n}:               π       0       π/2     3π/2    π
Decoded binary sequence {ân}:                        11      00      10      01      11

Φ ∈ {0, π/2, π, 3π/2} is the phase ambiguity.
The Structure of Differential Encoder and Decoder
[Figure a. A differential encoder: the information phase ∆θn and the previous output θn−1 (Delay T) are combined by modulo-2π addition to give the transmitted phase θn.]
[Figure b. A differential decoder: the received phase estimate θ̂n and its delayed version θ̂n−1 (Delay T) are subtracted modulo 2π to give ∆θ̂n.]
{∆θn} = the absolute phase sequence
{θn} = the transmitted phase sequence
A digital comm. system that employs differential encoding of the information symbols and coherent detection of successive differentially encoded M-PSK signals is called a differentially encoded coherent M-PSK system.
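A minimal Python sketch of Figures a and b (assuming phases that are integer multiples of 2π/M and an initial reference phase of 0; the function names are illustrative, not from the lecture):

import numpy as np

def diff_encode(delta_thetas, theta0=0.0):
    """Figure a: theta_n = (theta_{n-1} + delta_theta_n) mod 2*pi."""
    thetas = [theta0]
    for d in delta_thetas:
        thetas.append((thetas[-1] + d) % (2 * np.pi))
    return thetas  # transmitted phase sequence, including the initial value

def diff_decode(thetas_hat):
    """Figure b: delta_theta_n = (theta_hat_n - theta_hat_{n-1}) mod 2*pi."""
    return [(b - a) % (2 * np.pi) for a, b in zip(thetas_hat, thetas_hat[1:])]

# Binary example from the slides: a_n = 1 1 0 0 1 1, with 0 -> 0 and 1 -> pi
a = [1, 1, 0, 0, 1, 1]
delta = [np.pi * bit for bit in a]
theta = diff_encode(delta)            # 0, pi, 0, 0, 0, pi, 0
phi = 1.234                           # arbitrary constant phase ambiguity
received = [(t + phi) % (2 * np.pi) for t in theta]
recovered = diff_decode(received)     # pi, pi, 0, 0, pi, pi  ->  1 1 0 0 1 1
print(np.allclose(recovered, delta))  # True: the constant ambiguity cancels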
Optimal Coherent Receiver for Differentially
Encoded M-PSK
[Figure: optimal coherent receiver for differentially encoded M-PSK. r(t) is multiplied by √(2/T) cos(2πf_c t + θ̂) and by −√(2/T) sin(2πf_c t + θ̂) (the two references, 90° apart, come from the carrier sync. loop), and each product is integrated over [nT, (n+1)T] to give r_{n,c} and r_{n,s}. The phase η_n = tan⁻¹(r_{n,s}/r_{n,c}) is compared with the candidate phases β_0, ..., β_{M−1}: the branch with the smallest |η_n − β_i| is selected, giving the detected absolute phase θ̂_n; successive detected phases are then subtracted modulo 2π (Delay T, subtractor) to give ∆θ̂_n.]

r(t) = √(2E_s/T) cos(2πf_c t + θ_n + θ),   nT ≤ t ≤ (n+1)T,   n = 0, ±1, ±2, ...
θ  = phase shift introduced by the channel
θ̂  = estimate of θ provided by the carrier synch. loop
Φ = θ − θ̂ = the phase ambiguity
∆θ_n = θ_n − θ_{n−1};  {∆θ_n} represents the information sequence
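A sketch of the decision logic at the back end of this receiver, assuming the correlator outputs r_{n,c} and r_{n,s} are already available and that the phase ambiguity Φ is a multiple of 2π/M as above (names are illustrative):

import numpy as np

def coherent_diff_detect(r_c, r_s, M):
    """Coherent detection of differentially encoded M-PSK.
    r_c, r_s: arrays of correlator outputs r_{n,c}, r_{n,s} for n = 0, 1, ...
    Returns the detected information phases {delta_theta_hat_n}."""
    betas = 2 * np.pi * np.arange(M) / M                 # beta_i = 2*pi*i/M
    eta = np.arctan2(r_s, r_c) % (2 * np.pi)             # eta_n = tan^{-1}(r_{n,s}/r_{n,c})
    # per-symbol coherent decision: pick the beta_i with the smallest |eta_n - beta_i|
    dist = np.abs((eta[:, None] - betas[None, :] + np.pi) % (2 * np.pi) - np.pi)
    theta_hat = betas[np.argmin(dist, axis=1)]           # detected absolute phases
    # differential decoding (Figure b): modulo-2*pi difference of successive decisions
    return (theta_hat[1:] - theta_hat[:-1]) % (2 * np.pi)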
Performance
Let P_i denote Pr{m̂ = m_i | m_0} in the optimal coherent receiver for M-PSK. Then

P_0 = 1 − (1/π) ∫_0^{(M−1)π/M} exp{ −(E_s/N_0) sin²(π/M) / sin²α } dα

where E_s = energy/symbol.
One can show that the prob. of correct decision in the optimal coherent receiver for differentially encoded M-PSK is

P_c = Σ_{i=0}^{M−1} P_i²
⇒ P_e = 1 − P_c = 1 − Σ_{i=0}^{M−1} P_i²

Since P_i < P_0 for i ≠ 0, it follows that

P_c = Σ_{i=0}^{M−1} P_i² < P_0 Σ_{i=0}^{M−1} P_i = P_0 = P_{c | coherent M-PSK}
The symbol error probability therefore satisfies P_e > P_{e | coherent M-PSK}:

P_e = 1 − P_0² − Σ_{i=1}^{M−1} P_i²
    = 2 P_{e | M-PSK} − P²_{e | M-PSK} − Σ_{i=1}^{M−1} P_i²
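A numerical check of these expressions, assuming scipy is available. It evaluates the coherent M-PSK symbol error probability from the integral expression for P_0 above and the bound 2·P_{e|M-PSK} that follows from the last line (the neglected terms are nonnegative); at moderate-to-high SNR the differentially encoded system sits close to that bound:

import numpy as np
from scipy.integrate import quad

def mpsk_ser(M, es_n0):
    """P_{e|coherent M-PSK} = 1 - P_0, with P_0 given by the integral above."""
    integrand = lambda a: np.exp(-es_n0 * np.sin(np.pi / M) ** 2 / np.sin(a) ** 2)
    val, _ = quad(integrand, 0.0, np.pi * (M - 1) / M)
    return val / np.pi            # symbol error probability of coherent M-PSK

M, es_n0 = 4, 10 ** (10 / 10)     # example: QPSK at E_s/N_0 = 10 dB
p_mpsk = mpsk_ser(M, es_n0)
# P_e(differentially encoded, coherent detection) = 2*p - p^2 - sum_{i>=1} P_i^2 <= 2*p
print(p_mpsk, 2 * p_mpsk - p_mpsk ** 2)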
Differential PSK (DPSK)
• The phase shift is treated as a random variable distributed uniformly over
(-π, π), and no phase estimation is provided at the receiver.
• We are now concerned with non-coherent communications
• The phase shift θ is the same during two consecutive signal transmission
intervals [(n-1)T, nT) and [nT, (n+1)T).
• The information phase sequence {∆θn} is still differentially encoded as
before.
• The transmitted signal s(t) in the interval [(n−1)T, (n+1)T] is

  s(t) = √(2E_s/T) cos(2πf_c t + θ_{n−1} + θ),   (n−1)T ≤ t < nT
       = √(2E_s/T) cos(2πf_c t + θ_n + θ),        nT ≤ t < (n+1)T

• ∆θn = θn − θn−1. θn and θn−1 are independent. Both of them take values uniformly over {0, 2π/M, ..., 2π(M−1)/M}.
• {∆θn} and {θn} are i.i.d. sequences.
Differential PSK (DPSK)
• The received waveform is then r(t)=s(t)+n(t)
• To derive the optimal receiver, the observation interval should be
taken as (n-1)T ≤ t < (n+1)T since θ is the same in this interval.
• We have four orthonormal basis functions:

  φ1(t) = √(2/T) cos 2πf_c t,   (n−1)T ≤ t < nT;    0,   nT ≤ t < (n+1)T
  φ2(t) = −√(2/T) sin 2πf_c t,  (n−1)T ≤ t < nT;    0,   nT ≤ t < (n+1)T
  φ3(t) = 0,   (n−1)T ≤ t < nT;    √(2/T) cos 2πf_c t,   nT ≤ t < (n+1)T
  φ4(t) = 0,   (n−1)T ≤ t < nT;    −√(2/T) sin 2πf_c t,  nT ≤ t < (n+1)T
[Figure: M-DPSK receiver front end. r(t) is multiplied by √(2/T) cos 2πf_c t and by −√(2/T) sin 2πf_c t (local oscillator, 90° apart) and integrated over [nT, (n+1)T] to give r_{n,c} and r_{n,s}; Delay-T elements provide r_{n−1,c} and r_{n−1,s}.]
Differential PSK (DPSK)
The waveform channel is thus converted into an equivalent vector channel:

  r_{n,c} = √E_s cos(θ_n + θ) + η_{n,c},    r_{n,s} = √E_s sin(θ_n + θ) + η_{n,s}

  η_{n,c}, η_{n,s}, η_{n−1,c}, η_{n−1,s} are i.i.d., each ~ N(0, N_0/2)
We estimate ∆θn from the new observation (rn,c, rn,s, rn-1,c, rn-1,s)
Let β_i = 2πi/M, 0 ≤ i ≤ M − 1. Given θ, θ_{n−1}, and ∆θ_n = β_i:

P(r_{n,c}, r_{n,s}, r_{n−1,c}, r_{n−1,s} | θ, θ_{n−1}, ∆θ_n = β_i)
  = ∏_{j=n−1}^{n} (1/(πN_0)) exp{ −( [r_{j,c} − √E_s cos(θ_j + θ)]² + [r_{j,s} − √E_s sin(θ_j + θ)]² ) / N_0 }
  = c exp{ (2√E_s/N_0) | r_n e^{−jβ_i} + r_{n−1} | cos(θ − α) }

where
1) c does not depend on θ, θ_{n−1} and ∆θ_n
2) r_n = r_{n,c} + j r_{n,s} and r_{n−1} = r_{n−1,c} + j r_{n−1,s}
3) α is given by r_n e^{−jθ_n} + r_{n−1} e^{−jθ_{n−1}} = | r_n e^{−j∆θ_n} + r_{n−1} | e^{jα}
Differential PSK (DPSK)
θ is distributed uniformly over (−π, π) and θ_{n−1} takes values uniformly over {0, 2π/M, ..., 2π(M−1)/M}. Averaging over θ:

P(r_{n,c}, r_{n,s}, r_{n−1,c}, r_{n−1,s} | θ_{n−1}, ∆θ_n = β_i)
  = (1/2π) ∫_{−π}^{π} c exp{ (2√E_s/N_0) | r_n e^{−jβ_i} + r_{n−1} | cos(θ − α) } dθ
  = c I_0( (2√E_s/N_0) | r_n e^{−jβ_i} + r_{n−1} | )

where I_0(x) = (1/2π) ∫_{−π}^{π} e^{x cos y} dy is the modified Bessel function.

The above formula is independent of θ_{n−1}:
⇒ P(r_{n,c}, r_{n,s}, r_{n−1,c}, r_{n−1,s} | ∆θ_n = β_i) = c I_0( (2√E_s/N_0) | r_n e^{−jβ_i} + r_{n−1} | )
Differential PSK (DPSK)
Choose ∆θ̂_n = β_k iff

P(r_{n,c}, r_{n,s}, r_{n−1,c}, r_{n−1,s} | ∆θ_n = β_k) = max_i P(r_{n,c}, r_{n,s}, r_{n−1,c}, r_{n−1,s} | ∆θ_n = β_i)
⇔ I_0( (2√E_s/N_0) | r_n e^{−jβ_k} + r_{n−1} | ) = max_i I_0( (2√E_s/N_0) | r_n e^{−jβ_i} + r_{n−1} | )
   (for x > 0, I_0(x) is an increasing function)
⇔ | r_n e^{−jβ_k} + r_{n−1} | = max_i | r_n e^{−jβ_i} + r_{n−1} |
⇔ | r_n || r_{n−1} | cos(η_n − η_{n−1} − β_k) = max_i [ | r_n || r_{n−1} | cos(η_n − η_{n−1} − β_i) ]
⇔ | η_n − η_{n−1} − β_k | = min_i | η_n − η_{n−1} − β_i |

where r_n = | r_n | e^{jη_n} and r_{n−1} = | r_{n−1} | e^{jη_{n−1}}.
Differential PSK (DPSK)
Applying the MAP rule, we now obtain the optimal receiver for M-DPSK shown below.
Optimal Receiver for M-DPSK
[Figure: optimal receiver for M-DPSK. r(t) is multiplied by √(2/T) cos 2πf_c t and by −√(2/T) sin 2πf_c t (local oscillator, 90° apart) and integrated over [nT, (n+1)T] to give r_{n,c} and r_{n,s}; η_n = tan⁻¹(r_{n,s}/r_{n,c}) is computed; a Delay-T element and a subtractor form ψ = η_n − η_{n−1}; the branch with the smallest |ψ − β_i| is selected, giving ∆θ̂_n.]

Difference from the receiver for differentially encoded M-PSK?
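For comparison, a minimal sketch of the M-DPSK decision: unlike the previous receiver there is no carrier synchronization loop, and the comparison with the β_i is made directly on the phase difference ψ (names are illustrative):

import numpy as np

def mdpsk_detect(r_c, r_s, M):
    """M-DPSK decision: choose beta_k minimizing |psi - beta_k|,
    with psi = eta_n - eta_{n-1} and eta_n = tan^{-1}(r_{n,s}/r_{n,c})."""
    betas = 2 * np.pi * np.arange(M) / M
    eta = np.arctan2(r_s, r_c)
    psi = (eta[1:] - eta[:-1]) % (2 * np.pi)          # successive phase differences
    dist = np.abs((psi[:, None] - betas[None, :] + np.pi) % (2 * np.pi) - np.pi)
    return betas[np.argmin(dist, axis=1)]             # detected {delta_theta_hat_n}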
Performance of M-DPSK
Over the two observation intervals,

  r_{n,c}   = √E_s cos(θ_n + θ) + η_{n,c} = √E_s cos(θ_{n−1} + ∆θ_n + θ) + η_{n,c}
  r_{n,s}   = √E_s sin(θ_n + θ) + η_{n,s} = √E_s sin(θ_{n−1} + ∆θ_n + θ) + η_{n,s}
  r_{n−1,c} = √E_s cos(θ_{n−1} + θ) + η_{n−1,c}
  r_{n−1,s} = √E_s sin(θ_{n−1} + θ) + η_{n−1,s}

  η_{n,c}, η_{n,s}, η_{n−1,c}, η_{n−1,s} ~ i.i.d. N(0, N_0/2)

One way to calculate the performance of M-DPSK is to first find the pdf of the phase difference

  ψ = η_n − η_{n−1},  where  η_n = tan⁻¹(r_{n,s}/r_{n,c})  and  η_{n−1} = tan⁻¹(r_{n−1,s}/r_{n−1,c}).

[Figure: the signal vectors (r_{n,c}, r_{n,s}) and (r_{n−1,c}, r_{n−1,s}) with their noise vectors (η_{n,c}, η_{n,s}) and (η_{n−1,c}, η_{n−1,s}); the angles θ_{n−1} + θ, ∆θ_n, and ψ are indicated.]
It follows that the pdf of ψ is
independent of the angle θn-1+ θ.
It can be shown that the
conditional pdf of ψ given ∆θn
satisfies the following equation
P(ψ_1 ≤ ψ ≤ ψ_2 | ∆θ_n) = ∫_{ψ_1}^{ψ_2} f_ψ(ψ | ∆θ_n) dψ
  = F_{∆θ_n}(ψ_2) − F_{∆θ_n}(ψ_1) + 1,   if ψ_1 < ∆θ_n < ψ_2
  = F_{∆θ_n}(ψ_2) − F_{∆θ_n}(ψ_1),        if ∆θ_n < ψ_1 or ∆θ_n > ψ_2

where

F_{∆θ_n}(ψ) = − ( sin(ψ − ∆θ_n) / 4π ) ∫_{−π/2}^{π/2} exp{ −(E_s/N_0)[1 − cos(ψ − ∆θ_n) cos t] } / [1 − cos(ψ − ∆θ_n) cos t] dt
Performance of M-DPSK
P(∆θ̂_n = ∆θ_n | ∆θ_n = β_i)
  = P(β_i − π/M ≤ ψ ≤ β_i + π/M | ∆θ_n = β_i)
  = F_{β_i}(β_i + π/M) − F_{β_i}(β_i − π/M) + 1
  = 1 − (sin(π/M)/4π) ∫_{−π/2}^{π/2} exp{ −(E_s/N_0)(1 − cos(π/M) cos t) } / (1 − cos(π/M) cos t) dt
      − (sin(π/M)/4π) ∫_{−π/2}^{π/2} exp{ −(E_s/N_0)(1 − cos(π/M) cos t) } / (1 − cos(π/M) cos t) dt

⇒ P_c = 1 − (sin(π/M)/2π) ∫_{−π/2}^{π/2} exp{ −(E_s/N_0)(1 − cos(π/M) cos t) } / (1 − cos(π/M) cos t) dt

⇒ P_e = (sin(π/M)/2π) ∫_{−π/2}^{π/2} exp{ −(E_s/N_0)(1 − cos(π/M) cos t) } / (1 − cos(π/M) cos t) dt
Performance of M-DPSK
From the expression for P_e, we then get two special cases:
• M = 2:  P_b = P_e = (1/2) e^{−E_b/N_0}
• For large M, M-DPSK requires 3 dB more E_b/N_0 than coherent M-PSK.
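A quick numerical check of the expression for P_e, assuming scipy is available and using the integral form reconstructed above. For M = 2 it reproduces P_b = (1/2)e^{−E_b/N_0} exactly:

import numpy as np
from scipy.integrate import quad

def mdpsk_ser(M, es_n0):
    """Exact M-DPSK symbol error probability from the integral above."""
    c = np.cos(np.pi / M)
    integrand = lambda t: np.exp(-es_n0 * (1 - c * np.cos(t))) / (1 - c * np.cos(t))
    val, _ = quad(integrand, -np.pi / 2, np.pi / 2)
    return np.sin(np.pi / M) / (2 * np.pi) * val

# M = 2 check: should equal 0.5*exp(-Eb/N0)
eb_n0 = 10 ** (8 / 10)
print(mdpsk_ser(2, eb_n0), 0.5 * np.exp(-eb_n0))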
Comparison of Digital Modulation Methods
• To make a fair comparison among different digital modulation
methods, one must investigate the tradeoff among the bandwidth
efficiency, bit error probability, and bit SNR.
• The bandwidth efficiency of a modulation method is defined as the
bit rate (Rb) to bandwidth (W) ratio Rb/W
• The bandwidth is calculated from the power spectral density function
of the transmitted waveform, which in turn depends on the signal set
used.
Assume that the optimal signal set is used.
Theorem 3.7.1: The maximum number N of dimensions of the signal space spanned by a set of signals of duration T and "bandwidth" W grows linearly with time T and bandwidth W:
N=KWT
where K is a constant around 2.
The value of K depends on specific definitions of bandwidth. We
normally choose K=2.
Comparison of Digital Modulation Methods
Examples

PAM signals: N = 1 => W = 1/(2T) (single sideband). The bandwidth efficiency of M-ary PAM is
  R_b/W = (log₂M / T) / (1/(2T)) = 2 log₂M bits/second/Hz

M-PSK: N = 2 => W = 1/T
  R_b/W = (log₂M / T) / (1/T) = log₂M bits/second/Hz

M²-QAM: N = 2 => W = 1/T
  R_b/W = (log₂M² / T) / (1/T) = 2 log₂M bits/second/Hz

M-FSK: N = M => W = M/(2T)
  R_b/W = (log₂M / T) / (M/(2T)) = 2 log₂M / M bits/second/Hz
  As M approaches infinity, R_b/W goes to 0.
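A small sketch tabulating these ratios (log base 2 throughout; for the QAM entry, M refers to the M²-point constellation of the slide):

import math

def bandwidth_efficiency(scheme, M):
    """R_b/W in bits/second/Hz for the four signal sets above."""
    k = math.log2(M)
    # "QAM" here is the M^2-point QAM constellation, as in the slide
    return {"PAM": 2 * k, "PSK": k, "QAM": 2 * k, "FSK": 2 * k / M}[scheme]

for M in (2, 4, 16, 64):
    print(M, [round(bandwidth_efficiency(s, M), 3) for s in ("PAM", "PSK", "QAM", "FSK")])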
Comparison of Digital Modulation Methods
• In the case of PAM, QAM, and PSK, increasing M results in a higher
bit rate-to-bandwidth ratio R/W.
• However, the cost of achieving the higher data rate is an increase in
the SNR per bit.
• Therefore, these modulation methods are appropriate for channels
that are bandlimited, where we desire R/W > 1 and where there is
sufficiently high SNR to support increases in M.
• Example: Telephone channels
• M-ary orthogonal signals yield R/W ≤ 1. As M increases, R/W
decreases.
• However, the SNR/bit required to achieve a given error probability
(10-5) decreases as M increases.
• Consequently, M-ary orthogonal signals are appropriate for power-
limited channels that have sufficiently large bandwidth to
accommodate a large number of signals.
• As M goes to infinity, Pe approaches zero provided that SNR/bit ≥ −1.6 dB
Comparison of Digital Modulation Methods
Digital Communication Block Diagram

[Fig 1.1: Block diagram of a digital communication system. Information source (text, speech, audio, image, video) → Source encoder (natural redundancy removal) → Channel encoder (controlled redundancy insertion) → Modulator → Channel (noise) → Demodulator → Channel decoder → Source decoder → User. The source encoder/decoder pair performs source coding, the channel encoder/decoder pair performs channel coding, and the modulator, waveform channel, and demodulator together form the discrete channel.]
Channel Capacity and Coded Modulation
Channel Models and Channel Capacity
Binary Symmetric Channels
[Figure: {a_n} → BPSK Modulator → s(t) → AWGN Channel → Demodulator and detector → {â_n}]
Assume that an=0 or 1, i.i.d, and symbol 0 and 1 are equally likely.
Then
P_b = P(â_n ≠ a_n) = Q( √(2E_b/N_0) )

[Equivalent diagram: a binary symmetric channel (BSC) with inputs 0, 1 and outputs 0, 1; a transmitted bit is received correctly with probability 1 − q and flipped with probability q, where q = P_b.]
Binary Symmetric Channels
P(â_1 â_2 ⋯ â_n ≠ a_1 a_2 ⋯ a_n) = 1 − [1 − p_b]^n
For any fixed bit SNR Eb/N0, the block error prob. goes to 1 as n
approaches infinity.
In order to transmit information reliably over the BSC, one has to
employ channel encoders and decoders.
BSC
{an} Channel
encoder
ˆ{ }naChannel
decoder
An important question: with the use of channel encoders and decoders, how many bits of information can be reliably transmitted over the BSC?
The answer is the channel capacity of the BSC.
Channel Capacity of the BSC
C = max_X I(X; Y)
X is the input random variable to the BSC, Y is the corresponding output.
The maximization is taken over all possible input random variable X.
For BSC, C = 1 – H(q) with the maximum achieved by the equally likely
input X.
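A sketch chaining the two formulas, q = Q(√(2E_b/N_0)) for hard-decision BPSK and C = 1 − H(q) for the resulting BSC (scipy's erfc is used for Q; an illustration, not part of the lecture):

import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def bsc_capacity(eb_n0):
    q = Q(np.sqrt(2 * eb_n0))                       # crossover probability of the BSC
    if q <= 0.0 or q >= 1.0:
        return 1.0
    H = -q * np.log2(q) - (1 - q) * np.log2(1 - q)  # binary entropy H(q)
    return 1.0 - H                                  # bits per channel use

for db in (0, 3, 6, 9):
    print(db, "dB ->", round(bsc_capacity(10 ** (db / 10)), 4))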
[Figure: {a_n} → Channel encoder → BPSK Modulator → s(t) → AWGN Channel → Demodulator and detector → Channel decoder → {â_n}]
•The concatenation of the channel encoder and BPSK modulator is a
binary coded modulation scheme and gives binary coded signals.
•The BPSK detector is no longer optimal since coded signals are correlated and the detector makes a hard decision (0 or 1) bit by bit.
•To improve the performance, we may consider a detector that outputs
Q>2 outputs. That is, a detector quantizes the demodulator output into
Q>2 outputs (or no quantization). In this case, we say that the detector
has made a soft decision.
•With a detector making a soft decision, we get an equivalent discrete
channel with two inputs and Q>2 possible outputs.
Discrete Memoryless Channels
[Figure: a discrete channel with inputs m_0, m_1, ..., m_{M−1} and outputs y_0, y_1, ..., y_{Q−1}, described by the transition probabilities P(y_j | m_i), e.g. P(y_0 | m_0), ..., P(y_{Q−1} | m_{M−1}).]
•With any M-ary modulator and a soft (or hard) decision detector, we get
an equivalent discrete channel with M inputs and Q≥M possible outputs.
•The maximum number of bits of information that can be reliably
transmitted over the channel is given by channel capacity
A general discrete channel
C = max_X I(X; Y)
Discrete Input, Continuous-Output Channels
[Figure: {a_n} → PAM Modulator → s(t) → AWGN Channel → Demodulator → {y_n}]
•The demodulator converts the received waveform into scalars; no estimation is made at this point. The corresponding composite channel is then equivalent to the scalar channel
Y = X + n
•X takes values in the amplitude set of the PAM modulator
•n is a Gaussian random variable with n ~ N(0, N0/2)
•Channel capacity
C = max_X I(X; Y)
•The modulator is given, but the coded PAM signals and their receiver, including the detector and channel decoder, are left open.
Band-limited, power-limited AWGN channels
Assume the channel is band limited and power limited
r(t) = x(t) + n(t)
x(t) is the input waveform band-limited to W and power-limited to P
n(t) = the AWGN with Sn(f) = N0/2.
To compute the channel capacity, we first convert the waveform
channel into discrete time Gaussian channel.
x(t) = Σ_{n=−∞}^{+∞} x_n g(t − n/(2W)),  where x_n = x(n/(2W)) and g(t) = sinc(2Wt) = sin(2πWt)/(2πWt)

This is equivalent to projecting x(t) onto the space spanned by the orthonormal basis {√(2W) g(t − n/(2W))}:

x_n = 2W ∫_{−∞}^{+∞} x(t) g(t − n/(2W)) dt

[Figure: x(t) and n(t) both have spectra confined to (−W, W); the received waveform is sampled at t = n/(2W), n = 0, ±1, ±2, ..., i.e. every 1/(2W) seconds.]

Sampling gives the discrete-time channel

r_n = x_n + n_n,  producing the sequence {r_n},

where {n_i}_{i=−∞}^{+∞} is i.i.d. and each n_i is Gaussian, n_i ~ N(0, N_0 W).
Since x(t) is power-limited to P, each sample xn is then energy limited
to P. The channel capacity CN per use of the discrete-time Gaussian
channel is
C_N = max { I(X_n; r_n) : X_n is an input with E[x_n²] ≤ P } = (1/2) log₂(1 + P/(N_0 W))
During T seconds, we have 2WT samples, and the discrete-time channel
is used 2WT times. Thus, the channel capacity C in bits per second is
C = 2W × C_N = W log₂(1 + P/(N_0 W)) bits/second
Band-limited, power-limited AWGN channels
Bandwidth Efficiency versus Bit SNR
C/W = log₂(1 + P/(N_0 W)) = log₂(1 + (E_b/N_0)(C/W))

since P = energy/second = E_b C

⇒ E_b/N_0 = (2^{C/W} − 1) / (C/W)
[Figure: ideal diagram of bandwidth efficiency versus bit SNR; C/W (from 0.1 to 10, log scale) plotted against E_b/N_0 in dB, with the curve approaching the Shannon limit of −1.6 dB as C/W → 0.]
E_b/N_0 : bit SNR;   C/W : bandwidth efficiency
1) Let W → ∞; we have

   C_∞ = lim_{W→∞} W log₂(1 + P/(N_0 W)) = (P/N_0) log₂ e bits/second

   and E_b/N_0 = (2^{C/W} − 1)/(C/W) → ln 2 ≈ −1.6 dB as C/W → 0.
2) In order to achieve essentially error free transmission, Eb/N0 must
be ≥-1.6 dB (Shannon limit)
3) Any transmission rate Rb < C is achievable; any rate Rb > C is not achievable (Shannon's noisy channel coding theorem)
4) The proof of the noisy channel coding theorem involves the well-known random coding argument
Bandwidth Efficiency versus Bit SNR
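A short numerical illustration of the relation E_b/N_0 = (2^{C/W} − 1)/(C/W) and of the Shannon limit it approaches as C/W → 0 (a sketch; numpy assumed):

import numpy as np

def required_eb_n0_db(spectral_efficiency):
    """E_b/N_0 (in dB) needed at capacity for a given C/W in bits/s/Hz."""
    r = spectral_efficiency
    return 10 * np.log10((2 ** r - 1) / r)

for r in (0.01, 0.5, 1, 2, 4, 8):
    print(r, round(required_eb_n0_db(r), 2), "dB")
print("Shannon limit:", round(10 * np.log10(np.log(2)), 2), "dB")  # about -1.59 dB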
Achieving Channel Capacity with Orthogonal Signals
• The error probability can be made arbitrarily small as T goes to infinity (M
goes to infinity for a fixed bit rate Rb = log M/T ).
• For a power-limited AWGN channel with unbounded bandwidth, orthogonal signaling (or M-FSK) is asymptotically optimal in the sense that the corresponding (symbol) error probability goes to 0 exponentially as M goes to infinity if the bit rate Rb = log M/T is less than the channel capacity.
P_e ≤ 2 · 2^{−T(C_∞/2 − R_b)},       if 0 ≤ R_b ≤ C_∞/4
P_e ≤ 2 · 2^{−T(√C_∞ − √R_b)²},      if C_∞/4 ≤ R_b ≤ C_∞
Coded Modulation-A Probabilistic Approach
• Although orthogonal signaling is asymptotically optimal, there is a
considerable gap between the performance of practical uncoded
communication systems and the optimal performance theoretically
achievable given by the noisy channel coding theorem.
• To reduce the gap, one must resort to coded modulation.
� Algebraic approach (specific coding design techniques)
� Probabilistic approach (analysis of the performance of a general
class of coded signals)
Random Coding Based on M-ary Binary Coded Signals
• Objective
– Design a binary code C consisting of M binary codewords
Ci=(Ci1,Ci2,···, Cin), 0≤ i ≤M-1
– The corresponding coded signals {si(t)} gives good error prob.
performance
• The encoding bit rate is Rb bits/second
• Blocks of k bits are encoded at a time into one of the M waveforms {s_i(t)}_{i=0}^{M−1}, M = 2^k = 2^{R_b T}
• Assume R_b < 1/T_c and T = nT_c. This implies that k < n, and

  M/2^n = 2^{−(n−k)} = 2^{−T(D − R_b)},  where D = 1/T_c

• The coded signals are

  s_i(t) = (2C_ij − 1) √(2ε_c/T_c) cos 2πf_c t,   1 ≤ j ≤ n,   t ∈ [(j−1)T_c, jT_c]

  with vector representation s_i = √ε_c [(2C_i1 − 1), ..., (2C_in − 1)].
• We consider the ensemble of (2n)M distinct ways in which we can
select M codewords from the total 2n possible binary sequences.
• Associated with each of the (2n)M binary codes, there is a coded
communication system.
[Figure: one coded communication system per binary code. For each code C_j, j = 1, 2, ..., 2^{nM}: {a_n} → Channel encoder (Code C_j) → BPSK Modulator → {s_i^j(t)} → AWGN Channel → Optimal Receiver → {â_n}.]
Random Coding Based on M-ary Binary Coded Signals
• Instead of evaluating the error prob. of each coded system individually, we compute the average error prob. of the 2^{nM} coded communication systems:

  P̄_e = (1/2^{nM}) Σ_{j=1}^{2^{nM}} P_e{ {s_i^j(t)} }

• This is equivalent to choosing a binary code C at random from the set of all possible codes C_j, 1 ≤ j ≤ 2^{nM}, with

  P(C = C_j) = 1/2^{nM},   1 ≤ j ≤ 2^{nM},

  and evaluating the average error prob. E[P_e(C)]:

  E[P_e(C)] = Σ_{j=1}^{2^{nM}} P_e{ {s_i^j(t)} } P(C = C_j)

Applying the union bound, we get

  P_e{ {s_i^j(t)} } ≤ (1/M) Σ_{i=0}^{M−1} Σ_{l≠i} Q( ||s_i^j(t) − s_l^j(t)|| / √(2N_0) )
                   = (1/M) Σ_{i=0}^{M−1} Σ_{l≠i} Q( √( 2ε_c d(C_i^j, C_l^j) / N_0 ) )

where C_i^j denotes the ith codeword of the jth codebook C_j and d(·,·) is the Hamming distance.
Random Coding Based on M-ary Binary Coded Signals
Since Q(x) ≤ e^{−x²/2} for any x > 0,

  P_e{ {s_i^j(t)} } ≤ (1/M) Σ_{i} Σ_{l≠i} exp{ −ε_c d(C_i^j, C_l^j) / N_0 }

  P̄_e ≤ (1/2^{nM}) Σ_{j=1}^{2^{nM}} (1/M) Σ_{i} Σ_{l≠i} exp{ −ε_c d(C_i^j, C_l^j) / N_0 }
      = (1/M) Σ_{i} Σ_{l≠i} [ (1/2^{nM}) Σ_{j} exp{ −ε_c d(C_i^j, C_l^j) / N_0 } ]
      = (1/M) Σ_{i} Σ_{l≠i} E[ exp{ −ε_c d(C_i, C_l) / N_0 } ]

where C_i and C_l are the ith and lth codewords of the random codebook C, respectively.
Random Coding Based on M-ary Binary Coded Signals
C_i and C_l are independent, each taking values uniformly over the set {0, 1}^n, so

  P( d(C_i, C_l) = d ) = C(n, d) 2^{−n},   0 ≤ d ≤ n

  E[ exp{ −ε_c d(C_i, C_l) / N_0 } ] = Σ_{d=0}^{n} C(n, d) 2^{−n} e^{−ε_c d / N_0}
      = 2^{−n} (1 + e^{−ε_c/N_0})^n
      = 2^{−n [1 − log₂(1 + e^{−ε_c/N_0})]}
⇒ P̄_e ≤ (M − 1) 2^{−n[1 − log₂(1 + e^{−ε_c/N_0})]} < 2^{R_b T} 2^{−T D R_0} = 2^{−T(D R_0 − R_b)}

where R_0 = 1 − log₂(1 + e^{−ε_c/N_0}),  D = 1/T_c,  n = TD.
Random Coding Based on M-ary Binary Coded Signals
Since P̄_e is the average error prob. of the 2^{nM} coded comm. systems, there exists at least one binary code C_j such that the corresponding error prob. satisfies

  P_e(C_j) ≤ P̄_e < 2^{−T(D R_0 − R_b)} = 2^{−n(R_0 − R_c)}

where R_c = k/n = R_b/D is referred to as the code rate.

When R_b < D R_0 (or R_c < R_0), one can always design a binary coded system so that the block error prob. goes to 0 exponentially fast as the block length n approaches infinity.
Random Coding Based on M-ary Binary Coded Signals
• If one selects a code at random, then this code is good with very high probability.
• The probability that the error probability of the randomly chosen code satisfies P_e(C) ≥ α P̄_e is less than 1/α (Markov's inequality).
• When the dimension n is large enough, good codes are abundant. But the implementation complexity is high.
• The rate R_0 is called the cutoff rate.
[Figure: the cutoff rate R_0 (bits/dimension, from about 0.2 to 1.0) plotted versus E_c/N_0, the SNR per dimension, from −5 dB to 10 dB.]
Random Coding Based on M-ary Binary Coded Signals
Review of Finite Fields
Group Structures
Definition 1 A group is a set G together with a binary operation *
on G such that the following three properties hold:
1) * is associative, that is, for any a, b, c ∈ G, (a*b)*c = a*(b*c)
2) There is an identity (or unity) element e in G such that for all a ∈ G, a*e = e*a = a
3) For each a ∈ G, there exists an inverse element a⁻¹ ∈ G such that a*a⁻¹ = e
Sometimes, we denote the group as a triple (G, *, e). If the group also satisfies
4) for all a, b ∈ G, a * b = b * a
then the group is called abelian or commutative.
Example 1 Let
• Z, the set consisting of all integers
• Q, the set of all rational numbers
+ and · are ordinary addition and multiplication.
Then (Z, +, 0), (Q, +, 0), (Q*, ·, 1) are all groups, where Q* is the set of all nonzero rational numbers.
Furthermore, they are all abelian.
How about (Z*, ·, 1)?
Let n be a positive integer (n > 1) and let Zn represent the set of remainders of all integers on division by n, i.e.,
Zn = {0, 1, 2, ···, n−1}
We define a+b and ab as the ordinary sum and product of a and b reduced modulo n, respectively.
Let Zn* = { a ∈ Zn | a ≠ 0 }
Proposition 1
(a) (Zn, +, 0) forms a group
(b) (Zp*, ., 1) forms a group
for any prime p
Definition 2 A multiplicative group G is said to be cyclic if there is an element a ∈ G such that for any b ∈ G there is some integer i with b = a^i. Such an element is called the generator of the cyclic group, and we write
G = <a>
Examples
(Z3*, ·, 1) is a cyclic group with generator 2: Z3* = {1, 2} = <2>, 2^1 = 2, 2^2 = 1 mod 3
(Z7*, ·, 1) is a cyclic group with generator 3: 3^1 = 3, 3^2 = 2, 3^3 = 6, 3^4 = 4, 3^5 = 5, 3^6 = 1 mod 7
(Z5*, ·, 1) is a cyclic group with generator 2: 2^1 = 2, 2^2 = 4, 2^3 = 3, 2^4 = 1 mod 5
Every element in Z5* can be written as a power of 2.
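A small helper (illustrative, not from the text) that lists the powers of a candidate generator modulo p, which makes it easy to check whether the element generates Z_p*:

def powers_mod(a, p):
    """Return [a^1, a^2, ..., a^(p-1)] mod p."""
    out, x = [], 1
    for _ in range(p - 1):
        x = (x * a) % p
        out.append(x)
    return out

print(powers_mod(3, 7))   # [3, 2, 6, 4, 5, 1] -> 3 generates Z7*
print(powers_mod(2, 5))   # [2, 4, 3, 1]       -> 2 generates Z5*
print(powers_mod(2, 7))   # [2, 4, 1, 2, 4, 1] -> 2 does NOT generate Z7*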
Finite Groups
Definition 3 A group G is called finite if it contains a finite
number of elements. The number of elements in G is called
the order of G, denoted as |G|
Definition 4 A ring (R,+, ·) is a set R, together with two binary
operations, denoted by + and · such that
1) R is an abelian group with respect to +
2) · is associative, that is,
( a · b) · c = a · (b · c ) for all a, b, c ∈ R.
3) The distributive law holds:
a · (b + c ) = a · b + a · c
(b + c ) · a = b · a + c · a
(Z, +, ·) and (Q, +, ·) are rings
(Zn, +, ·) forms a ring, called residue class ring modulo n
(Z4, +, ·) is a ring.
Rings
Let (F, +, ·) be a ring, and let
F* = { a ∈ F | a ≠ 0 }
be the set of nonzero elements in F.
Definition 5 A field is a ring (F ,+, ·) such that F* together
with the multiplication · forms an abelian group.
Fields
•(Q ,+, ·),
•(R ,+, ·), and
•(C ,+, ·) are fields
where R is the set of all real numbers,
and C is the set of all complex numbers
Definition 6 A finite field F is a field that contains a finite
number q of elements. This number is called the order of the
field. F is also called a Galois field, denoted by GF(q).
Finite Fields
Proposition 2 Let p be a prime; then (Zp, +, ·) is a finite field with order p. This field is denoted as GF(p).
The addition and multiplication are carried out modulo p.
Examples: GF(2) and GF(3) on page 266
Polynomials
Let R be an arbitrary ring. A polynomial over R is an expression of the form
f(x) = a0 + a1x + ··· + anx^n
where n is a nonnegative integer and the coefficients ai are elements of R. x is a symbol not belonging to R, called an indeterminate over R.
f(x) is said to be irreducible if f(x) cannot be factored into a product of lower-degree polynomials over R.
x^2 + x + 1 is irreducible over GF(2).
Construction of GF(2^n) and GF(p^n)
Step 1 Select a positive integer n and a prime p.
Step 2 Choose an irreducible polynomial f(x) over GF(p) of degree n.
Step 3 We agree that α is an element that satisfies f(α) = 0.
Let GF(p^n) = { a0 + a1α + ··· + a_{n−1}α^{n−1} | ai ∈ GF(p) }
For two elements g(α) and h(α) in GF(p^n),
g(α) = a0 + a1α + ··· + a_{n−1}α^{n−1}
h(α) = b0 + b1α + ··· + b_{n−1}α^{n−1}
Define two operations:
a) Addition
g(α)+ h(α)= a0+b0+(a1+b1)α+···+(an-1+bn-1) αn-1
b) Multiplication
g(α) h(α)= r(α)
where r(x) is the remainder of g(x)h(x) divided by f(x)
Theorem: The set GF(pn) together with the two operations above forms
a finite field, and the order of the field is pn
Construction of GF(2n) and GF(pn)
Example: Let p = 2 and f(x) = 1 + x + x^3. Then f(x) is irreducible over GF(2). Let α be a root of f(x), that is, f(α) = 0. The finite field GF(2^3) is defined by
GF(2^3) = { a0 + a1α + a2α^2 | ai ∈ GF(2) }

As a power of α   As a polynomial   As a 3-tuple
0                 0                 000
1                 1                 001
α                 α                 010
α^2               α^2               100
α^3               1 + α             011
α^4               α + α^2           110
α^5               1 + α + α^2       111
α^6               1 + α^2           101

α^7 = 1
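A sketch that rebuilds this table by representing elements of GF(2^3) as bit-masks of polynomial coefficients (a2 a1 a0) and reducing products modulo f(x) = x^3 + x + 1 (helper names are illustrative):

def gf_mult(a, b, f=0b1011, m=3):
    """Multiply two GF(2^m) elements (coefficient bit-masks) modulo f(x)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):        # degree reached m: reduce by f(x)
            a ^= f
    return r

alpha = 0b010                    # the element alpha, i.e. the polynomial x
x = 1
for k in range(1, 8):
    x = gf_mult(x, alpha)
    print("alpha^%d = %s" % (k, format(x, "03b")))  # 010, 100, 011, 110, 111, 101, 001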
Primitive Elements and Primitive Polynomials
Fact: For any finite field F, its multiplicative group F*, the set of nonzero elements in F, is cyclic.
Definition: A generator of the cyclic group GF(p^n)* is called a primitive element of GF(p^n). A polynomial that has a primitive element as a zero is called a primitive polynomial.
Example: x^3 + x + 1 is a primitive polynomial over GF(2).
x^7 − 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1)
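A quick check of this factorization by multiplying the factors as polynomials over GF(2), with each polynomial represented as a bit-mask (least-significant bit = constant term):

def poly_mult_gf2(a, b):
    """Multiply two polynomials over GF(2), represented as bit-masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

p = poly_mult_gf2(0b11, poly_mult_gf2(0b1011, 0b1101))  # (x+1)(x^3+x+1)(x^3+x^2+1)
print(bin(p))            # 0b10000001, i.e. x^7 + 1 (= x^7 - 1 over GF(2))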