TRANSCRIPT
A Mathematical Theory of Communication
Jin Woo Shin
Sang Joon Kim
Review of the Paper by C.E. Shannon
Contents
Introduction
Summary of Paper
Discussion
Introduction
This paper founded the field of information theory.
Before this paper, it was believed that the only way to make the error probability smaller was to reduce the data rate.
This paper revealed that there is an achievable positive data rate with negligible errors.
C.E. Shannon
Summary of Paper
Preliminary
Discrete Source & Discrete Channel
Discrete Source & Cont. Channel
Cont. Source & Cont. Channel
[Summary of Paper]
Preliminary
Entropy
- Discrete source: H = -∑_{i=1}^{n} p_i log p_i
- Continuous source: h = -∫ p(x) log p(x) dx
Ergodic source
- Irreducible, aperiodic property
Capacity
- C = lim_{T→∞} (log N(T)) / T
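As a quick numerical check of the discrete entropy definition, here is a minimal Python sketch (the helper name `entropy` is ours, not from the paper):

```python
import math

def entropy(probs):
    """Discrete entropy H = -sum_i p_i log2 p_i, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per symbol; a biased coin carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # about 0.469
```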
[Summary of Paper]
Disc. Source & Disc. Channel
Capacity Theorem (Theorem 11, page 22)
- The most important result of this paper
If the discrete source entropy H is less than or equal to the channel capacity C, then there exists a code that can be transmitted over the channel with an arbitrarily small frequency of errors. If H > C, there is no method of encoding which gives equivocation less than H - C.
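Theorem 11 can be made concrete with the binary symmetric channel, whose capacity has the standard closed form C = 1 - H(p); this worked example is ours, not from the slides:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

# A noiseless channel has capacity 1 bit/use; a fully random one has 0.
print(bsc_capacity(0.0))   # 1.0
print(bsc_capacity(0.11))  # ~0.5: a source with H <= 0.5 bits/symbol is transmissible
```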
[Summary of Paper]
Disc. Source & Cont. Channel
The domain sizes of the channel input and output become infinite. The capacity of a continuous channel is:
C = max_{p(x)} ( h(x) - h(x|y) )
The Tx rate does not exceed the channel capacity:
H(u) - H(u|v) ≤ h(x) - h(x|y)
[Diagram] message u (discrete) → Encode → Tx signal x (cont.) → Channel with noise (cont.) → Rx signal y (cont.) → Decode → recovered message v (discrete)
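For the band-limited Gaussian channel, Shannon evaluates this maximization in closed form, giving C = W log2(1 + S/N). A numerical sketch (the function name is ours):

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = W log2(1 + S/N) of a band-limited AWGN channel, in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 3 kHz telephone-grade channel at 30 dB SNR (S/N = 1000):
print(awgn_capacity(3000, 1000))  # about 29.9 kbit/s
```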
[Summary of Paper]
Cont. Source & Cont. Channel
A continuous source needs an infinite number of binary digits for exact specification.
Fidelity: the measure of how much distortion we allow.
The rate of a cont. source P(X) under fidelity constraint D is:
R = min_{p(y|x)} I(X; Y)
with D = ∫∫ P(x,y) d(x,y) dx dy
For a given fidelity constraint D, R ≤ C is achievable.
[Diagram] message u (cont.) → Mapping → binary digits u' (discrete) → Encode → Tx signal x (cont.) → Channel with noise (cont.) → Rx signal y (cont.) → Decode → recovered binary digits v' (discrete) → Remapping → recovered message v (cont., within the allowed distortion / fidelity)
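As an illustration of a rate under a fidelity constraint, the Gaussian source with squared-error distortion has the well-known closed form R(D) = (1/2) log2(σ²/D); this worked example is ours, not from the slides:

```python
import math

def gaussian_rd(variance, distortion):
    """Rate-distortion function R(D) = 1/2 log2(var/D) for a Gaussian source
    under squared-error distortion (0 for D >= var)."""
    return 0.0 if distortion >= variance else 0.5 * math.log2(variance / distortion)

# Halving the allowed distortion costs half a bit per sample.
print(gaussian_rd(1.0, 0.25))   # 1.0 bit/sample
print(gaussian_rd(1.0, 0.125))  # 1.5 bits/sample
```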
Discussion
Ergodic source
Practical approach
Rate distortion
[Discussion]
Ergodic source
The ergodic-source assumption is the essential one in the paper:
source is ergodic → AEP holds → capacity theorem.
Finding a source that is not ergodic but still satisfies the AEP is meaningful work. One example:
[Figure: a two-state example; states A and B, entered with initial probabilities P = (1/2, 1/2)]
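The AEP that this argument relies on can be observed numerically for a simple i.i.d. (hence ergodic) source: the per-symbol log-probability of a long sequence concentrates around the entropy H. A simulation sketch (parameters are ours):

```python
import math
import random

random.seed(0)

p = 0.2  # P(symbol == 1) for a Bernoulli(0.2) i.i.d. (hence ergodic) source
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # about 0.722 bits

n = 100_000
seq = [1 if random.random() < p else 0 for _ in range(n)]
# Per-symbol log-probability of the observed sequence:
empirical = -sum(math.log2(p if s == 1 else 1 - p) for s in seq) / n

print(round(H, 4), round(empirical, 4))  # the two values should be close (AEP)
```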
[Discussion]
Practical approach - 1
This paper provides the upper bound on the achievable data rate; finding a good encoding scheme is another problem.
Turbo codes and LDPC codes are among the most efficient codes known.
Block size, rate, BER, and decoding complexity are important factors when choosing a code for a specific system.
[Discussion]
Practical approach - 2

Year   Rate-1/2 Code           SNR Required for BER < 10^-5
1948   Shannon (bound)         0 dB
1967   (255,123) BCH           5.4 dB
1977   Convolutional Code      4.5 dB
1993   Iterative Turbo Code    0.7 dB
2001   Iterative LDPC Code     0.0245 dB
[Figure: SNR vs. BER for rate-1/2 codes, comparing the Shannon bound, uncoded transmission, a convolutional code with ML decoding, and iterative Turbo and LDPC codes; a 4 dB gap is annotated]
** This graph and chart are modified from the presentation data of Engling Yeo, Jan 15, 2003.
C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding And Decoding: Turbo-Codes," IEEE Trans. Comms., Vol.44, No.10, Oct 1996.
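The 0 dB entry in the 1948 row can be reproduced from the capacity formula; a sketch assuming the real AWGN channel relation R = (1/2) log2(1 + 2R·Eb/N0) (function name ours):

```python
import math

def ebn0_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at code rate R, obtained by
    solving R = 1/2 * log2(1 + 2*R*Eb/N0) for the real AWGN channel."""
    ebn0 = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(ebn0)

print(ebn0_limit_db(0.5))  # 0.0 dB, matching the 1948 Shannon row of the table
```

As the rate approaches 0 the same formula tends to the ultimate limit of about -1.59 dB.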
[Discussion]
Rate distortion
The ‘fidelity’ concept motivates ‘rate distortion’ theory.
The rate of a discrete source P(x) with distortion (fidelity) D is defined as:
R_I(D) = min_{p(y|x)} I(X; Y)
subject to D = ∑_{x,y} P(x,y) d(x,y) = E[d(x,y)]
H (entropy) is the rate with zero distortion.
(The rate distortion theorem) We can compress a disc. source P(x) down to rate R_I(D) when allowing distortion D.
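For a concrete instance, the Bernoulli(p) source with Hamming distortion has the closed form R_I(D) = H(p) - H(D) for 0 ≤ D ≤ min(p, 1-p); this worked example is ours, not from the slides:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binary_rd(p, d):
    """R_I(D) = H(p) - H(D) for a Bernoulli(p) source with Hamming distortion,
    clamped to 0 once D exceeds min(p, 1-p)."""
    return max(0.0, h2(p) - h2(d))

# D = 0 recovers the entropy H(p), as the slide states; allowing errors lowers the rate.
print(binary_rd(0.5, 0.0))   # 1.0
print(binary_rd(0.5, 0.11))  # ~0.5
```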