
E2 205 Error-Control Coding, Lecture 18

Himani Kamboj

October 14, 2019

1 More on Error Probability

Setting: Consider an AWGN channel. C is a linear code of block length n.

c_i ∈ F_2^n  ⟹  s_i = (−1)^{c_i},   0 ≤ i ≤ M − 1

Let c_0 = 0 (since the code is linear, the all-zero codeword belongs to C).

Define

c_i = [c_i1 c_i2 … c_in]^T

A_i = diag( (−1)^{c_i1}, (−1)^{c_i2}, …, (−1)^{c_in} )

Goal: to show that, under MLD with appropriately chosen decision regions, the probability of codeword error of a linear block code is independent of the transmitted codeword. Furthermore, the same holds true for the residual error vector

ê = s − ŝ ,   s = (−1)^c


By s_i = (−1)^{c_i}, we mean

[s_i1 s_i2 … s_in] = [(−1)^{c_i1} (−1)^{c_i2} … (−1)^{c_in}]

Note: over the AWGN channel, MLD is equivalent to minimum Euclidean distance decoding.
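As a quick illustration of the BPSK mapping s_i = (−1)^{c_i} and of MLD as minimum Euclidean distance decoding, here is a minimal sketch; the [4, 2] generator matrix G is an illustrative assumption, not taken from the lecture.

```python
# Toy [4, 2] binary linear code; this generator matrix is an
# illustrative assumption, not from the lecture.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]

def encode(m):
    """c = m G over F_2."""
    return [sum(m[r] * G[r][j] for r in range(len(G))) % 2 for j in range(4)]

msgs = [[a, b] for a in (0, 1) for b in (0, 1)]
codewords = [encode(m) for m in msgs]                       # c_0 = [0,0,0,0]
signals = [[(-1) ** bit for bit in c] for c in codewords]   # s_i = (-1)^{c_i}

def mld(y):
    """Minimum Euclidean distance decoding: index of the closest s_i."""
    d2 = [sum((yj - sj) ** 2 for yj, sj in zip(y, s)) for s in signals]
    return d2.index(min(d2))
```

With no noise, `mld(signals[i])` recovers i; adding Gaussian noise to `signals[i]` before calling `mld` simulates the AWGN channel.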

Proof: Let y be the received vector. Define

R_0 = { y | P(y|c_0) ≥ P(y|c_i), i ≠ 0 } = { y | d_E(y, s_0) ≤ d_E(y, s_i), i ≠ 0 }

and R_i = A_i R_0.

Declare ĉ = c_i if y ∈ R_i, 0 ≤ i ≤ M − 1.

Claim: Decoding using the decision regions R_i implements MLD, i.e.,

y ∈ R_i  ⟹  P(y|c_i) ≥ P(y|c_j) for all j ≠ i,

equivalently d_E(y, s_i) ≤ d_E(y, s_j) for all j ≠ i.

Proof: Let y = A_i x with x ∈ R_0. Then

d_E(y, s_i) = d_E(A_i x, s_i) = d_E(A_i x, A_i (A_i s_i)) = d_E(x, A_i s_i)

since A_i is an orthogonal matrix (hence an isometry, so it preserves distances) and A_i A_i = I. Moreover A_i s_i = s_0, because each entry is (−1)^{c_ij} (−1)^{c_ij} = 1; thus d_E(y, s_i) = d_E(x, s_0).
Similarly d_E(y, s_j) = d_E(x, A_i s_j). Letting c_i + c_j = c_k, we have A_i s_j = s_k, so d_E(y, s_j) = d_E(x, s_k).
Since x ∈ R_0, d_E(x, s_0) ≤ d_E(x, s_k) for all k ≠ 0, and therefore d_E(y, s_i) ≤ d_E(y, s_j) for y ∈ R_i.
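The two facts the proof rests on can be checked numerically. In this sketch A_i is taken as the diagonal matrix with entries (−1)^{c_ij} (an assumption consistent with A_i being orthogonal with A_i A_i = I), applied as componentwise sign flips; the toy [4, 2] code is again my own illustrative choice.

```python
import math, random

# Toy [4, 2] code and its BPSK signals.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
msgs = [[a, b] for a in (0, 1) for b in (0, 1)]
C = [[sum(m[r] * G[r][j] for r in range(2)) % 2 for j in range(4)] for m in msgs]
S = [[(-1) ** bit for bit in c] for c in C]

def apply_A(i, v):
    """A_i v, with A_i = diag(s_i): componentwise sign flips."""
    return [S[i][j] * v[j] for j in range(4)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

random.seed(0)
u = [random.gauss(0, 1) for _ in range(4)]
v = [random.gauss(0, 1) for _ in range(4)]
for i in range(4):
    # A_i is an isometry: it preserves Euclidean distance.
    assert abs(dist(apply_A(i, u), apply_A(i, v)) - dist(u, v)) < 1e-12
    for j in range(4):
        # A_i s_j = s_k whenever c_i + c_j = c_k  (j = i gives A_i s_i = s_0).
        k = C.index([(C[i][t] + C[j][t]) % 2 for t in range(4)])
        assert apply_A(i, S[j]) == S[k]
```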

Next we want to show that the probability of codeword error is independent of the transmitted codeword.

Pr(Correct | c_0) = ∫_{R_0} P(y|s_0) dy

Pr(Correct | c_i) = ∫_{R_i} P(y|s_i) dy = ∫_{R_i} P(y|A_i s_0) dy = ∫_{R_i} P(A_i y|s_0) dy

(the last step holds because the white Gaussian density P(y|s) depends only on ‖y − s‖, and ‖y − A_i s_0‖ = ‖A_i y − s_0‖)


Let x = A_i y, so dx = |det A_i| dy, and y ∈ R_i ⟹ x ∈ A_i R_i = R_0 (since A_i A_i = I gives A_i R_i = A_i A_i R_0 = R_0). Then

Pr(Correct | c_i) = ∫_{R_0} P(x|s_0) dx / |det A_i| = ∫_{R_0} P(x|s_0) dx = Pr(Correct | c_0)

since |det A_i| = 1.
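A Monte Carlo sanity check of this conclusion, using the toy [4, 2] code as an illustration (noise level and trial count are my own choices): the empirical probability of correct decoding should come out the same whichever codeword is transmitted.

```python
import random

# Toy [4, 2] code, BPSK signals, and min-distance (ML) decoding.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
msgs = [[a, b] for a in (0, 1) for b in (0, 1)]
S = [[(-1) ** (sum(m[r] * G[r][j] for r in range(2)) % 2) for j in range(4)]
     for m in msgs]

def mld(y):
    d2 = [sum((yj - sj) ** 2 for yj, sj in zip(y, s)) for s in S]
    return d2.index(min(d2))

random.seed(1)
sigma, trials = 0.8, 20000
p_correct = []
for i in range(4):
    hits = 0
    for _ in range(trials):
        y = [sj + random.gauss(0, sigma) for sj in S[i]]  # AWGN channel
        hits += (mld(y) == i)
    p_correct.append(hits / trials)

# All four estimates agree up to Monte Carlo fluctuation.
assert max(p_correct) - min(p_correct) < 0.025
```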

Next we will show that the residual error is independent of the transmitted codeword.

Let c_i + c_j = c_k. Then

s_i ∘ s_j = s_k   (componentwise product)

A_i A_j = A_k

Pr(ĉ = c_j | c_i) = ∫_{R_j} P(y|c_i) dy = ∫_{R_j} P(y|s_i) dy = ∫_{R_j} P(y|A_i s_0) dy = ∫_{R_j} P(A_i y|s_0) dy

= ∫_{A_i R_j} P(x|s_0) dx = ∫_{R_k} P(x|s_0) dx


(since A_i R_j = A_i A_j R_0 = A_k R_0 = R_k)

Therefore Pr(ĉ = c_j | c_i) = Pr(ĉ = c_k | c_0): the residual error corresponds to the codeword c_i + c_j = c_k, and its distribution does not depend on the transmitted codeword.
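The full statement — the whole confusion matrix, not just the probability of correctness — can also be checked by simulation on the toy [4, 2] code (parameters are my own choices): Pr(decode c_j | send c_i) should match Pr(decode c_k | send c_0) whenever c_i + c_j = c_k.

```python
import random

# Toy [4, 2] code, BPSK signals, min-distance decoding.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
msgs = [[a, b] for a in (0, 1) for b in (0, 1)]
C = [[sum(m[r] * G[r][j] for r in range(2)) % 2 for j in range(4)] for m in msgs]
S = [[(-1) ** bit for bit in c] for c in C]

def mld(y):
    d2 = [sum((yj - sj) ** 2 for yj, sj in zip(y, s)) for s in S]
    return d2.index(min(d2))

random.seed(2)
sigma, trials = 1.0, 20000
P = []                              # P[i][j] = Pr(decode c_j | send c_i)
for i in range(4):
    counts = [0] * 4
    for _ in range(trials):
        y = [sj + random.gauss(0, sigma) for sj in S[i]]
        counts[mld(y)] += 1
    P.append([c / trials for c in counts])

for i in range(4):
    for j in range(4):
        k = C.index([(C[i][t] + C[j][t]) % 2 for t in range(4)])
        assert abs(P[i][j] - P[0][k]) < 0.025   # equal up to MC noise
```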

2 Bit Error Probability (BEP) of a rate-k/n convolutional code

Assume K(N − ν) message bits and Kν flush bits.

P_be = (1 / (K(N − ν))) ∑_{i=1}^{K(N−ν)} P_{be,i}

Set J = K(N − ν). (The same bit errors will be counted in two different ways.) Assume the all-zero codeword is transmitted.

P_be = (1/J) ∑_{IP} Pr(P̂ = IP) · IW(IP)

where
P = path associated with the transmitted message sequence
P̂ = decoded path
IP = incorrect path
DPS = diverged path segment
IW = input weight (Hamming weight of the input message sequence associated with the IP or DPS)

P_be = (1/J) ∑_{IP} Pr(P̂ = IP) ∑_{DPS in IP} IW(DPS)

     = (1/J) ∑_{DPS} IW(DPS) ∑_{IP through DPS} Pr(P̂ = IP)

     = (1/J) ∑_{j=1}^{N−ν} ∑_{DPS_j} IW(DPS_j) ∑_{IP through DPS_j} Pr(P̂ = IP)

(DPS_j: a DPS that diverges at node level j in the trellis)


Pr(P̂ = IP) ≤ Pr( d(DPS_j, y) ≤ d(0, y) )

Call the event { d(DPS_j, y) ≤ d(0, y) } "DPS_j prevails".

Since the events {P̂ = IP} for the different IPs through a given DPS_j are disjoint, and each of them implies that DPS_j prevails,

∑_{IP through DPS_j} Pr(P̂ = IP) ≤ Pr(DPS_j prevails).

Hence

P_be ≤ (1/J) ∑_{j=1}^{N−ν} ∑_{DPS_j} IW(DPS_j) Pr(DPS_j prevails)

     ≤ (1/J) ∑_{j=1}^{N−ν} ∑_{all DPS} IW(DPS) Pr(DPS prevails)

     = (1/K) ∑_{all DPS} IW(DPS) Pr(DPS prevails)

where the last step uses the fact that the inner sum is the same at each of the N − ν node levels, and (N − ν)/J = 1/K.


For a DPS of Hamming weight d,

Pr(DPS prevails) = Q( √(E_b d) / √(N_0/2) )

= Q( √(2 E_b d / N_0) )

= Q( √(2 E_b (d_free + (d − d_free)) / N_0) )

≤ Q( √(2 E_b d_free / N_0) ) · exp( −(d − d_free) E_b / N_0 )

= Q( √(2 E_b d_free / N_0) ) · exp( d_free E_b / N_0 ) · exp( −d E_b / N_0 )

Therefore,

P_be ≤ (1/K) · ∂A_END(L, I, D)/∂I · Q( √(2 E_b d_free / N_0) ) · exp( d_free E_b / N_0 )

evaluated at L = 1, I = 1, D = exp(−E_b/N_0), where

Q(x) = (1/√(2π)) ∫_x^∞ e^{−y²/2} dy

and using Q(√(x + y)) ≤ Q(√x) exp(−y/2).
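The bound Q(√(x + y)) ≤ Q(√x)·exp(−y/2) invoked above is easy to spot-check numerically, expressing Q through the standard complementary error function.

```python
import math

# Q-function via the complementary error function: Q(x) = 0.5 erfc(x/sqrt(2)).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Spot-check Q(sqrt(x+y)) <= Q(sqrt(x)) * exp(-y/2) over a grid of x, y >= 0.
for x in (0.25, 1.0, 4.0, 9.0):
    for y in (0.0, 0.5, 2.0, 10.0):
        lhs = Q(math.sqrt(x + y))
        rhs = Q(math.sqrt(x)) * math.exp(-y / 2.0)
        assert lhs <= rhs + 1e-15   # equality at y = 0
```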

For the BSC,

P_be ≤ (1/K) · ∂A_END(L, I, D)/∂I

evaluated at L = 1, I = 1, D = 2√(ε(1 − ε)).
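To make the BSC bound concrete, here is a worked example not taken from the lecture: the standard rate-1/2, memory-2 convolutional code with octal generators (7, 5), whose transfer function with the length variable set to L = 1 is the well-known T(D, I) = D^5 I / (1 − 2DI), so that ∂T/∂I at I = 1 equals D^5/(1 − 2D)^2, with K = k = 1.

```python
import math

# BSC bound P_be <= (1/K) dT/dI |_{I=1}, with D = 2 sqrt(eps(1-eps)),
# for the (7,5) code: T(D, I) = D^5 I / (1 - 2 D I)  (L set to 1), so
# dT/dI |_{I=1} = D^5 / (1 - 2D)^2.
def bsc_bep_bound(eps, K=1):
    D = 2.0 * math.sqrt(eps * (1.0 - eps))   # Bhattacharyya parameter
    if D >= 0.5:
        raise ValueError("series diverges unless 2D < 1")
    return (1.0 / K) * D ** 5 / (1.0 - 2.0 * D) ** 2

for eps in (1e-4, 1e-3, 1e-2):
    print(f"eps = {eps:g}:  P_be <= {bsc_bep_bound(eps):.3e}")
```

As expected, the bound shrinks rapidly as the crossover probability ε decreases, driven by the D^5 = d_free factor.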
