Communication Systems
Unit -I
Amplitude Modulation
Contents
• Time-domain and frequency-domain descriptions of continuous-wave modulation
• Noise performance of the modulation schemes:
  - Amplitude modulation: AM, DSB-SC, SSB, VSB
  - Angle modulation: FM, PM
2.1 Introduction
Figure 2.1
Components of a continuous-wave modulation system:
(a) transmitter, and (b) receiver.
In addition to the signal received from the transmitter, the
receiver input includes channel noise. The degradation in
receiver performance due to channel noise is determined by the
type of modulation used. So, it is necessary to study various
modulation types and their noise performance.
Basic concepts about modulation
• Message signal = information-bearing signal = baseband signal = modulating signal/wave
• Carrier: a sinusoidal wave
• Modulated signal/wave
Modulation and Demodulation
• Modulation: the process by which some
characteristic of a carrier is varied in
accordance with a modulating signal. It also
shifts the signal to a new frequency range.
• Demodulation: is the reverse of the
modulation process.
(a) Carrier wave
(b) Sinusoidal
modulating signal
(c) Amplitude-
modulated signal
(d) Frequency-
modulated signal
Demo for AM and FM signals
2.2 Amplitude Modulation AM
Carrier wave: c(t) = A_c cos(2π f_c t)
A_c: carrier amplitude; f_c: carrier frequency
Amplitude-modulated wave:
s(t) = A_c [1 + k_a m(t)] cos(2π f_c t)
k_a: amplitude sensitivity
AM is defined as a process in which the
amplitude of the carrier wave c(t) is varied about
a mean value, linearly with the baseband signal.
Baseband signal m(t)
AM wave for | kam(t) | < 1 AM wave for | kam(t) | > 1
Figure 2.3
Illustrating the amplitude modulation process.
Observations from Figure 2.3
• If |k_a m(t)| < 1 for all t, the envelope of the modulated signal s(t) varies linearly with the modulating signal m(t). Therefore, an envelope detector can be used in the receiver to recover the message signal.
• If |k_a m(t)| > 1 for some t, carrier phase reversals occur. This is called overmodulation. In this case the envelope of s(t) is no longer linear with the modulating signal m(t), and the message signal cannot be recovered by an envelope detector.
In AM, two requirements must be satisfied:
1. f_c >> W, where W is the message bandwidth, so that the envelope of s(t) can be visualized satisfactorily.
2. |k_a m(t)| < 1 for all t, so that overmodulation is avoided.
Descriptions of AM signal
Time-domain description:
s(t) = A_c [1 + k_a m(t)] cos(2π f_c t)
Frequency-domain description, with m(t) ⇌ M(f) a Fourier-transform pair:
S(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)] + (k_a A_c/2)[M(f − f_c) + M(f + f_c)]
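To make these two descriptions concrete, here is a minimal Python sketch (all parameter values, such as the tone frequencies and the sensitivity k_a, are illustrative choices, not from the text): it builds a single-tone AM wave and checks the spectrum for the carrier line and the two side frequencies.

```python
import numpy as np

fs = 100_000                 # sampling rate (Hz), well above 2*fc
t = np.arange(0, 0.1, 1/fs)
fc, fm = 10_000, 500         # carrier and single-tone message frequencies
Ac, ka, Am = 1.0, 0.5, 1.0   # carrier amplitude, sensitivity, message amplitude

m = Am * np.cos(2*np.pi*fm*t)                  # message m(t)
s = Ac * (1 + ka*m) * np.cos(2*np.pi*fc*t)     # AM wave s(t)

# Spectrum: expect lines at fc and at fc +/- fm (upper/lower side frequencies)
S = np.abs(np.fft.rfft(s)) / len(s)
f = np.fft.rfftfreq(len(s), 1/fs)
for peak in (fc - fm, fc, fc + fm):
    k = np.argmin(np.abs(f - peak))
    print(f"{f[k]:8.0f} Hz : amplitude {2*S[k]:.3f}")  # ~1.0 at fc, ~0.25 at sides
```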
(a) Spectrum of baseband signal (b) Spectrum of AM wave
Note the negative-frequency image, the upper and lower sidebands, and the transmission bandwidth B_T = 2W.
AM practical circuits
In the transmitter, modulation is accomplished using a nonlinear device; in the receiver, demodulation is likewise accomplished using a nonlinear device.
Virtues and Limitations of AM
Virtue:
Simplicity of implementation
Limitations:
1. AM is wasteful of power
2. AM is wasteful of bandwidth
(a) Spectrum of baseband signal (b) Spectrum of AM wave
How to overcome these limitations ?
Step 1: Suppress the carrier: DSB-SC
Step 2: Modify the sidebands: SSB, VSB
DSB-SC: double sideband-suppressed carrier, where only the upper and lower sidebands are transmitted; there is no carrier-frequency component.
SSB: single sideband, where only one sideband (the lower
sideband or the upper sideband) is transmitted.
VSB: vestigial sideband, where only a vestige of one of the
sidebands and a corresponding modified version of the other
sideband are transmitted.
2.3 Linear Modulation Schemes
Linear modulation (a narrowband signal) is defined by
s(t) = s_I(t) cos(2π f_c t) − s_Q(t) sin(2π f_c t)
s_I(t): in-phase component of s(t); s_Q(t): quadrature component of s(t)
In linear modulation, both s_I(t) and s_Q(t) are low-pass signals that are linearly related to the message signal m(t).
Linear modulation is defined by
s(t) = s_I(t) cos(2π f_c t) − s_Q(t) sin(2π f_c t)
Table 2.1 Different forms of linear modulation

Type of modulation      s_I(t)         s_Q(t)
AM                      1 + k_a m(t)   0
DSB-SC                  m(t)           0
SSB, USB transmitted    ½ m(t)         +½ m̂(t)
SSB, LSB transmitted    ½ m(t)         −½ m̂(t)
VSB, vestige of USB     ½ m(t)         −½ m′(t)
VSB, vestige of LSB     ½ m(t)         +½ m′(t)

m(t) = message signal; m̂(t) = Hilbert transform of m(t); m′(t) = output of the VSB quadrature filter H_Q(f) driven by m(t)
Descriptions of AM-family signals

AM:  s(t) = A_c [1 + k_a m(t)] cos(2π f_c t)

DSB: s(t) = A_c m(t) cos(2π f_c t)

SSB: s(t) = ½ A_c m(t) cos(2π f_c t) ± ½ A_c m̂(t) sin(2π f_c t)
     + : lower sideband transmitted; − : upper sideband transmitted

VSB: s(t) = ½ A_c m(t) cos(2π f_c t) ± ½ A_c m′(t) sin(2π f_c t)
     + : vestige of the upper sideband; − : vestige of the lower sideband
Two important points from table 2.1
1. The in-phase component SI(t) is solely dependent
on the message signal m(t).
2. The quadrature component SQ(t) is a filtered
version of m(t). The spectral modification of the
modulated wave s(t) is solely due to SQ(t).
To be more specific, the role of the quadrature component is
merely to interfere with the in-phase component, so as to
reduce or eliminate power in one of the sidebands of the
modulated signal s(t), depending on how the quadrature
component is defined.
DSB-SC: Double Sideband-Suppressed Carrier Modulation

AM:  s(t) = A_c [1 + k_a m(t)] cos(2π f_c t)
     S(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)] + (k_a A_c/2)[M(f − f_c) + M(f + f_c)]

DSB: s(t) = A_c m(t) cos(2π f_c t)
     S(f) = (A_c/2)[M(f − f_c) + M(f + f_c)]
Figure 2.5 (a) Block diagram of product modulator.
(b) Baseband signal. (c) DSB-SC modulated wave.
The DSB modulated signal undergoes a phase reversal
whenever the message signal crosses zero. Consequently, the
envelope of DSB signal is different from the message signal.
This is unlike the case of AM wave.
(a) Spectrum of baseband signal (b) Spectrum of DSB wave
(a) Spectrum of baseband signal (b) Spectrum of AM wave
Demodulation of DSB
Can we use an envelope detector to demodulate DSB signals? No: the envelope of a DSB signal no longer follows the modulating signal.
How, then, are DSB signals demodulated? The baseband signal m(t) can be recovered from a DSB wave by coherent detection.
Coherent Detection
Coherent detection
Synchronous demodulation
Why is it called Coherent Detection?
Local oscillator signal is exactly synchronized with
carrier in both frequency and phase.
Coherent Detection Process

DSB signal: s(t) = A_c m(t) cos(2π f_c t)
The local-oscillator signal is taken to be A_c′ cos(2π f_c t + φ).
The output of the product modulator is then
v(t) = A_c′ cos(2π f_c t + φ) s(t)
     = A_c A_c′ cos(2π f_c t) cos(2π f_c t + φ) m(t)
     = ½ A_c A_c′ cos(4π f_c t + φ) m(t) + ½ A_c A_c′ cos φ · m(t)
v(t) = ½ A_c A_c′ cos(4π f_c t + φ) m(t) + ½ A_c A_c′ cos φ · m(t)
Output of the low-pass filter:
v_0(t) = ½ A_c A_c′ cos φ · m(t)
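A quick numerical check of this result, sketched in Python (illustrative single-tone message and parameter values; NumPy/SciPy assumed available): the DSB wave is multiplied by a local carrier with phase offset φ and low-pass filtered, so the recovered amplitude should scale as cos φ and vanish at φ = 90°, anticipating the quadrature null effect discussed below.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000
t = np.arange(0, 0.05, 1/fs)
fc, fm = 20_000, 500             # illustrative carrier/message frequencies
m = np.cos(2*np.pi*fm*t)         # message m(t)
s = m * np.cos(2*np.pi*fc*t)     # DSB-SC wave (Ac = 1)

b, a = butter(4, 2000, fs=fs)    # LPF: passes the message, rejects 2*fc terms
for phi_deg in (0, 45, 90):
    phi = np.deg2rad(phi_deg)
    v = s * np.cos(2*np.pi*fc*t + phi)         # product modulator (Ac' = 1)
    v0 = filtfilt(b, a, v)                     # low-pass filter output
    amp = np.max(np.abs(v0[fs//100:-fs//100])) # skip filter edge transients
    print(f"phi = {phi_deg:2d} deg: amplitude {amp:.3f} (theory {0.5*np.cos(phi):.3f})")
```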
Discussion:

With a phase difference φ, the detector output is
v_0(t) = ½ A_c A_c′ cos φ · m(t)

Case 1: φ = constant. Then v_0(t) = ½ A_c A_c′ cos φ · m(t), and m(t) is recovered without any distortion.
• φ = 0: v_0(t) = ½ A_c A_c′ m(t), the maximum output.
• φ = ±90°: v_0(t) = 0, zero output.
This is the quadrature null effect.
Case 2: In practice, φ varies randomly with time, so synchronism must be ensured in both frequency and phase.
How is synchronization maintained? With a phase-locked loop:
• Squaring loop
• Costas loop
Virtues and limitations of DSB
Virtue: saving transmitted power
The resulting system complexity is the price
that must be paid for suppressing the carrier
wave to save transmitted power.
Limitations:
Complexity
Waste of bandwidth
Quadrature-Carrier Multiplexing
Quadrature Amplitude Modulation QAM
Theory basis: quadrature null effect
The quadrature null effect of the coherent
detector may also be put to good use in the
construction of quadrature-carrier multiplexing.
QAM is a bandwidth-conservation scheme.
why?
This scheme enables two DSB signals to
occupy the same channel bandwidth, and
yet it allows for the separation of the two
message signals at the output. It is therefore
a bandwidth-conservation scheme.
Figure 2.10
Quadrature-carrier multiplexing system: transmitter and receiver.
Modulation and demodulation process

The transmitted signal s(t) consists of the sum of two product-modulator outputs:
s(t) = A_c m_1(t) cos(2π f_c t) + A_c m_2(t) sin(2π f_c t)
where m_1(t) and m_2(t) are two different message signals.

Demodulation (coherent, with quadrature local carriers):
s(t) × 2 cos(2π f_c t) → LPF → A_c m_1(t)
s(t) × 2 sin(2π f_c t) → LPF → A_c m_2(t)
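The sketch below (Python, illustrative tone frequencies) pushes two messages through the same channel band on quadrature carriers and separates them with the two coherent detectors above; each output contains only its own message, courtesy of the quadrature null effect.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000
t = np.arange(0, 0.05, 1/fs)
fc = 20_000
m1 = np.cos(2*np.pi*300*t)      # message 1 (300 Hz tone, illustrative)
m2 = np.sin(2*np.pi*700*t)      # message 2 (700 Hz tone)

s = m1*np.cos(2*np.pi*fc*t) + m2*np.sin(2*np.pi*fc*t)   # QAM signal (Ac = 1)

b, a = butter(4, 2000, fs=fs)
r1 = filtfilt(b, a, s * 2*np.cos(2*np.pi*fc*t))  # should recover m1
r2 = filtfilt(b, a, s * 2*np.sin(2*np.pi*fc*t))  # should recover m2

sl = slice(fs//100, -fs//100)    # ignore filter edge transients
print("m1 max error:", np.max(np.abs(r1 - m1)[sl]).round(4))
print("m2 max error:", np.max(np.abs(r2 - m2)[sl]).round(4))
```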
Single-Sideband Modulation (SSB)

SSB: single sideband, where only one sideband (the lower sideband or the upper sideband) is transmitted.

s_SSB(t) = Re{[m(t) + j m̂(t)] e^{j2π f_c t}}
         = m(t) cos(2π f_c t) − m̂(t) sin(2π f_c t)
(upper sideband; replacing +j m̂(t) by −j m̂(t) gives the lower sideband)
Hilbert Transform

Definition: x̂(t) = x(t) * 1/(π t), i.e. x(t) → [h(t) = 1/(π t)] → x̂(t)

In the frequency domain, 1/(π t) ⇌ −j sgn(f), i.e.
H(f) = −j for f > 0; +j for f < 0
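These formulas translate directly into a phase-shift SSB modulator; here is a minimal Python sketch using scipy.signal.hilbert (which returns the analytic signal m(t) + j m̂(t), so its imaginary part is the Hilbert transform). The tone and carrier frequencies are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000
t = np.arange(0, 0.1, 1/fs)
fc, fm = 10_000, 1_000            # illustrative carrier and message tones
m = np.cos(2*np.pi*fm*t)

m_hat = np.imag(hilbert(m))       # Hilbert transform of m(t)
usb = m*np.cos(2*np.pi*fc*t) - m_hat*np.sin(2*np.pi*fc*t)  # upper sideband
lsb = m*np.cos(2*np.pi*fc*t) + m_hat*np.sin(2*np.pi*fc*t)  # lower sideband

# Verify: each signal should show a single spectral line, at fc+fm or fc-fm
f = np.fft.rfftfreq(len(t), 1/fs)
for name, x in (("USB", usb), ("LSB", lsb)):
    X = np.abs(np.fft.rfft(x))
    print(name, "peak at", f[np.argmax(X)], "Hz")
```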
Methods to generate SSB signals
1) Frequency-discrimination method to
generate SSB
2) Phase-shift method to generate SSB
3) Weaver’s method to generate SSB
Energy Gap
Figure 2.11 (a) Spectrum of a message signal m(t) with an
energy gap of width 2fa centered on the origin. (b) Spectrum
of corresponding SSB signal containing the upper sideband.
Typical example: telephone voice, f ∈ (300, 3100) Hz, so the energy gap is 2f_a = 600 Hz.
SSB Coherent Detection

s_SSB(t) × A_c′ cos(2π f_c t) → low-pass filter → m(t)

How is the local oscillator kept synchronized with the carrier in the transmitter?
1. Transmit a low-power pilot carrier, or
2. use a highly stable oscillator.
Vestigial Sideband Modulation VSB
One of the sidebands is partially suppressed, and a vestige of the other sideband is transmitted to compensate for that suppression.
Vestige of lower sideband
Vestige of upper sideband
Frequency Discrimination Method to
Generate VSB
Key: the design of band-pass filter
Magnitude response of the VSB filter: odd symmetry around f_c
1. The sum of the values of the magnitude response |H(f)| at any two frequencies equally displaced above and below f_c is unity:
   H(f − f_c) + H(f + f_c) = 1, for −W ≤ f ≤ W
2. The phase response arg H(f) is linear.
Transmission bandwidth: B_T = W + f_v
VSB description in the time domain
s(t) = ½ A_c m(t) cos(2π f_c t) ± ½ A_c m′(t) sin(2π f_c t)
+ : vestige of the upper sideband; − : vestige of the lower sideband

Phase-shift method to generate VSB: m(t) feeds the in-phase product modulator (carrier cos(2π f_c t)) directly, and feeds the quadrature product modulator (carrier shifted by −π/2) through the filter jH_Q(f), which produces m′(t); the two products are combined to form s_VSB(t).
2.4 Frequency Translation
• The basic operation in SSB is in fact a form
of frequency translation.
• SSB modulation is also called frequency
changing, mixing, or heterodyning.
Mixer

cos(2π f_1 t) · cos(2π f_l t) produces:
Sum frequency:        f_2 = f_1 + f_l
Difference frequency: f_2 = f_1 − f_l

The mixer is a device that consists of a product modulator followed by a band-pass filter.
Up conversion:   f_2 = f_1 + f_l, i.e. f_l = f_2 − f_1
Down conversion: f_2 = f_1 − f_l, i.e. f_l = f_1 − f_2
Figure 2.17 Mixer
Spectrum of modulated signal s1(t) at the mixer input
Spectrum of the corresponding signal s´(t) at the output of the
product modulator in the mixer
BPF
2.5 Frequency-Division Multiplexing FDM
Multiplexing refers to combining a number of independent signals into a composite signal suitable for transmission over a common channel.
Types of Multiplexing
• FDM
Separate the signals according to frequency.
• TDM
Separate the signals according to time.
• CDM
Separate the signals according to code.
Block diagram of FDM system
Figure 2.19 Illustrating the modulation steps in an FDM system.
Example 2.1 Carrier Telephone System SSB/FDM
2.9 Superheterodyne Receiver (Superhet)
Since Armstrong invented the superheterodyne radio receiver in 1918, almost all radio and TV receivers have been of the superheterodyne type.
Receivers in a broadcasting system perform the following functions:
• Carrier-frequency tuning
• Filtering
• Amplification
The superhet is a special type of receiver that fulfills all three functions in an elegant and practical fashion.
Image frequency

If f_LO = f_c + f_IF (local oscillator above the carrier): f_image = f_c + 2 f_IF
If f_LO = f_c − f_IF (local oscillator below the carrier): f_image = f_c − 2 f_IF

Image interference: a signal at the image frequency mixes down to the same IF as the desired carrier.
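A worked instance (the standard AM-broadcast numbers, used here purely for illustration): with f_IF = 455 kHz and high-side injection, a station at 1000 kHz has its image 2 f_IF above it.

```python
f_if = 455e3          # intermediate frequency (Hz)
f_c  = 1000e3         # desired carrier (Hz)

f_lo = f_c + f_if               # high-side local oscillator
f_image = f_c + 2*f_if          # image frequency for high-side injection

# Both the desired signal and its image land on the same IF:
assert abs(f_lo - f_c) == f_if
assert abs(f_image - f_lo) == f_if
print(f"LO = {f_lo/1e3:.0f} kHz, image = {f_image/1e3:.0f} kHz")
```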
Figure 2.32 Basic elements of an AM radio receiver of the
superheterodyne type.
f_IF = f_LO − f_RF
Communication Systems
Unit -II
Angle Modulation
2.6 Angle Modulation
• Definition:
The angle of the carrier wave is varied according to the baseband signal, while the amplitude of the carrier is maintained constant. Angle modulation consists of PM and FM.
• An important feature:
It provides better discrimination against
noise and interference than amplitude
modulation.
Tradeoff
This improvement in performance is achieved
at the expense of increased transmission
bandwidth.
That is, angle modulation provides us with a
practical means of exchanging channel
bandwidth for improved noise performance.
Such a tradeoff is not possible with amplitude
modulation, regardless of its form.
Basic Definition of Angle Modulation

Let θ_i(t) denote the angle of a modulated carrier; the angle-modulated wave can then be expressed as
s(t) = A_c cos[θ_i(t)],  with θ_i(t) a function of m(t)

If θ_i(t) increases monotonically with time, the average frequency in hertz, over an interval from t to t + Δt, is given by
f_Δt(t) = [θ_i(t + Δt) − θ_i(t)] / (2π Δt)     (2.20)
Instantaneous frequency:
f_i(t) = lim_{Δt→0} f_Δt(t) = lim_{Δt→0} [θ_i(t + Δt) − θ_i(t)] / (2π Δt) = (1/2π) dθ_i(t)/dt     (2.21)
Equivalently, θ_i(t) = 2π ∫ f_i(t) dt.

In the simple case of an unmodulated carrier, the angle θ_i(t) is
θ_i(t) = 2π f_c t + φ_c
The constant φ_c is the value of θ_i(t) at t = 0; it is usually assumed to be zero for convenience.
Angle modulation is defined as varying the angle θ_i(t) of the carrier wave with the modulating signal m(t). There are an infinite number of ways in which the angle may be varied with the message signal; however, we shall consider only the two commonly used methods, FM and PM.
Phase Modulation (PM)
The angle θ_i(t) is varied linearly with the message signal m(t):
θ_i(t) = 2π f_c t + k_p m(t)
k_p: phase sensitivity
The PM signal is described in the time domain by
s_PM(t) = A_c cos[θ_i(t)] = A_c cos[2π f_c t + k_p m(t)]     (2.23)
Frequency Modulation (FM)
The instantaneous frequency f_i(t) is varied linearly with the message signal m(t):
f_i(t) = f_c + k_f m(t)     (2.24)
k_f: frequency sensitivity
Integrating,
θ_i(t) = 2π ∫₀ᵗ f_i(τ) dτ = 2π f_c t + 2π k_f ∫₀ᵗ m(τ) dτ     (2.25)
so the FM signal is described in the time domain by
s_FM(t) = A_c cos[θ_i(t)] = A_c cos[2π f_c t + 2π k_f ∫₀ᵗ m(τ) dτ]     (2.26)
Relationship between FM and PM

θ_i(t) = 2π ∫ f_i(t) dt  ⇔  f_i(t) = (1/2π) dθ_i(t)/dt

s_PM(t) = A_c cos[2π f_c t + k_p m(t)]     (2.23)
s_FM(t) = A_c cos[2π f_c t + 2π k_f ∫₀ᵗ m(τ) dτ]     (2.26)

So we may deduce all the properties of PM signals from those of FM signals and vice versa; hence, we concentrate our attention on FM signals.
Indirect method to generate an FM signal:
modulating signal → integrator → phase modulator (carrier A_c cos(2π f_c t)) → FM signal

Direct method to generate an FM signal:
modulating signal → frequency modulator (carrier A_c cos(2π f_c t)) → FM signal
Indirect method to generate a PM signal:
modulating signal → differentiator → frequency modulator (carrier A_c cos(2π f_c t)) → PM signal

Direct method to generate a PM signal:
modulating signal → phase modulator (carrier A_c cos(2π f_c t)) → PM signal
2.7 Frequency Modulation FM
• The FM signal s(t) is a nonlinear function of
the modulating signal m(t), which makes the
frequency modulation a nonlinear modulation
process.
• Consequently, unlike amplitude modulation,
the spectrum of an FM signal is not related in
a simple manner to that of the modulating
signal; rather, its analysis is much more
difficult than that of an AM signal.
How can we tackle the spectral analysis of an FM signal?

Objective: to establish an empirical formula between the transmission bandwidth of an FM signal and the bandwidth of the message signal. We proceed in the following manner:
1. First consider the simplest case: single-tone modulation, narrowband FM.
2. Then consider the more general case: single-tone modulation, wideband FM.
Single-tone frequency modulation

A single-tone modulating signal is defined by
m(t) = A_m cos(2π f_m t)     (2.27)
The instantaneous frequency of the FM signal is
f_i(t) = f_c + k_f A_m cos(2π f_m t) = f_c + Δf cos(2π f_m t)     (2.28)
where Δf = k_f A_m is the frequency deviation and f_m is the modulation frequency of the modulating signal.
θ_i(t) = 2π ∫₀ᵗ f_i(τ) dτ = 2π f_c t + (Δf/f_m) sin(2π f_m t)     (2.30)

We define the modulation index β = Δf/f_m, so that
θ_i(t) = 2π f_c t + β sin(2π f_m t)

FM signal for a single-tone modulating signal:
s(t) = A_c cos[2π f_c t + β sin(2π f_m t)]     (2.33)
Two important concepts in FM (page 110):

Frequency deviation Δf: the maximum departure of the instantaneous frequency of the FM signal from the carrier frequency f_c. It is proportional to the amplitude of the modulating signal and independent of the modulation frequency f_m.

Modulation index β: the phase deviation, i.e. the maximum departure of the angle θ_i(t) from the angle 2π f_c t of the unmodulated carrier.
Narrowband and Wideband FM
Depending on the value of the modulation index β, there are two cases of frequency modulation:
Narrowband FM: β << 1
Wideband FM: otherwise
2.7.1 Narrowband Frequency Modulation (NBFM)

FM signal for single-tone m(t):
s(t) = A_c cos[2π f_c t + β sin(2π f_m t)]
Expanding it:
s(t) = A_c cos(2π f_c t) cos[β sin(2π f_m t)] − A_c sin(2π f_c t) sin[β sin(2π f_m t)]     (2.34)

When β << 1:
cos[β sin(2π f_m t)] ≈ 1
sin[β sin(2π f_m t)] ≈ β sin(2π f_m t)

Hence, the NBFM signal can be expressed as
s_NBFM(t) ≈ A_c cos(2π f_c t) − β A_c sin(2π f_c t) sin(2π f_m t)
s_NBFM(t) = A_c cos(2π f_c t) − β A_c sin(2π f_c t) sin(2π f_m t)

Narrowband frequency modulator: the modulating signal is first integrated and then applied to a narrowband phase (NBPM) modulator. Inside the NBPM modulator, the carrier A_c cos(2π f_c t) passes through a −π/2 phase shifter to give A_c sin(2π f_c t), which multiplies the integrated message; this product is subtracted from the carrier to form the NBFM signal.
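A minimal Python rendering of this narrowband modulator (single-tone message; all parameters illustrative, with β chosen well below 0.3): the integrated, scaled message multiplies the phase-shifted carrier, and the product is subtracted from the carrier. Comparing against the exact FM wave shows the approximation error is small.

```python
import numpy as np

fs = 1_000_000
t = np.arange(0, 0.02, 1/fs)
fc, fm, beta = 100_000, 1_000, 0.2   # illustrative; beta << 1 for NBFM
Ac = 1.0

# beta*sin(2*pi*fm*t) is the integrated, scaled message (the phase term)
phase = beta*np.sin(2*np.pi*fm*t)

nbfm  = Ac*np.cos(2*np.pi*fc*t) - Ac*phase*np.sin(2*np.pi*fc*t)  # approximation
exact = Ac*np.cos(2*np.pi*fc*t + phase)                          # ideal FM

print("max |error| =", np.max(np.abs(nbfm - exact)).round(4))    # ~beta^2/2
```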
NBFM is different from ideal FM. Why?
1. The envelope contains a residual amplitude
modulation
2. The angle contains harmonic distortion
If β < 0.3 radians, the effects of residual AM and harmonic PM are limited to negligible levels.
NBFM is similar to AM.
Basic difference: the algebraic sign of the lower side frequency.

s_NBFM(t) = A_c cos(2π f_c t) − β A_c sin(2π f_c t) sin(2π f_m t)
          = A_c cos(2π f_c t) + ½ β A_c {cos[2π(f_c + f_m)t] − cos[2π(f_c − f_m)t]}     (2.36)

s_AM(t) = A_c [1 + k_a m(t)] cos(2π f_c t) = A_c [1 + k_a A_m cos(2π f_m t)] cos(2π f_c t)
        = A_c cos(2π f_c t) + ½ k_a A_m A_c {cos[2π(f_c + f_m)t] + cos[2π(f_c − f_m)t]}     (2.37)
2.7.2 Wideband Frequency Modulation (WBFM)

We have seen that the single-tone FM signal is
s(t) = A_c cos[2π f_c t + β sin(2π f_m t)]     (2.33)
For convenience, we use the complex form to describe this band-pass signal:
s(t) = Re{A_c exp[j2π f_c t + jβ sin(2π f_m t)]} = Re{s̃(t) exp(j2π f_c t)}     (2.38)
Complex envelope:
s̃(t) = A_c exp[jβ sin(2π f_m t)]     (2.39)
s̃(t) is periodic with fundamental frequency f_m, so it may be expanded in a complex Fourier series:
s̃(t) = Σ_{n=−∞}^{∞} c_n exp(j2π n f_m t)     (2.40)
where the Fourier coefficients c_n are
c_n = f_m ∫_{−1/2f_m}^{1/2f_m} s̃(t) exp(−j2π n f_m t) dt
    = f_m A_c ∫_{−1/2f_m}^{1/2f_m} exp[jβ sin(2π f_m t) − j2π n f_m t] dt     (2.41)
Let x = 2π f_m t. Then we may rewrite equation (2.41) in the new form
c_n = (A_c/2π) ∫_{−π}^{π} exp[j(β sin x − nx)] dx
Recognizing the nth-order Bessel function of the first kind,
J_n(β) = (1/2π) ∫_{−π}^{π} exp[j(β sin x − nx)] dx     (2.44)
we may reduce c_n to
c_n = A_c J_n(β)     (2.45)
Substituting c_n into s̃(t), we get
s̃(t) = A_c Σ_{n=−∞}^{∞} J_n(β) exp(j2π n f_m t)
Therefore,
s(t) = Re{s̃(t) exp(j2π f_c t)} = A_c Σ_{n=−∞}^{∞} J_n(β) cos[2π(f_c + n f_m)t]     (2.48)
Taking the Fourier transform,
S(f) = (A_c/2) Σ_{n=−∞}^{∞} J_n(β) [δ(f − f_c − n f_m) + δ(f + f_c + n f_m)]
Figure 2.23 Plots of Bessel functions of the first kind for
varying order.
Bessel Function Properties

1. J_{−n}(β) = (−1)ⁿ J_n(β) for all n, both positive and negative.

2. For small values of the modulation index β:
   J_0(β) ≈ 1
   J_1(β) ≈ β/2
   J_n(β) ≈ 0 for n ≥ 2     (2.51)

3. Σ_{n=−∞}^{∞} J_n²(β) = 1     (2.52)
Observations About FM

1. The spectrum of an FM signal contains a carrier component and an infinite set of side frequencies at f_c ± n f_m.

2. If β << 1, the FM signal is effectively composed of a carrier and a single pair of side frequencies at f_c ± f_m. This situation corresponds to NBFM.

3. The average power of an FM signal is constant:
   P = ½ A_c²
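These observations can be checked numerically; the Python sketch below (illustrative parameters) compares the FFT line amplitudes of a single-tone FM wave against A_c |J_n(β)| from scipy.special.jv, and confirms the power sum property Σ J_n² = 1.

```python
import numpy as np
from scipy.special import jv

fs = 1_000_000
fc, fm, beta, Ac = 100_000, 1_000, 2.0, 1.0   # illustrative values
t = np.arange(0, 1.0, 1/fs)                    # 1 s -> 1 Hz FFT resolution

s = Ac*np.cos(2*np.pi*fc*t + beta*np.sin(2*np.pi*fm*t))
S = 2*np.abs(np.fft.rfft(s))/len(s)            # one-sided amplitude spectrum

for n in range(4):                             # carrier and first 3 side lines
    k = int(fc + n*fm)                         # bin index = frequency (1 Hz bins)
    print(f"n={n}: FFT {S[k]:.4f}  vs  Ac*|Jn(beta)| {Ac*abs(jv(n, beta)):.4f}")

# Power check: the sum of Jn^2 over all n equals 1
print("sum Jn^2 =", sum(jv(n, beta)**2 for n in range(-50, 51)).round(6))
```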
How to estimate the transmission bandwidth of FM signals?
• Percent method (universal curve)
• Carson's rule
Percent method:
B_T = 2 n_max f_m
where n_max is the largest number of significant side frequencies, i.e. the largest n for which |J_n(β)| exceeds 1% of the unmodulated carrier amplitude.

Carson's rule:
B_T ≈ 2Δf + 2f_m = 2Δf (1 + 1/β) = 2(β + 1) f_m     (2.55)
From a single-tone signal to an arbitrary signal:

Deviation ratio: D = Δf / W
(Δf: frequency deviation; W: highest modulation frequency)

Carson's rule is modified to
B_T = 2Δf + 2W = 2Δf (1 + 1/D) = 2(D + 1) W
D plays the same role as β.
Example 2.3
FM radio broadcasting: W = 15 kHz, Δf = 75 kHz, B_T = ?
The deviation ratio is D = 75/15 = 5.
1) According to Carson's rule, the transmission bandwidth is B_T = 2(75 + 15) = 180 kHz.
2) According to the percent method, the universal curve gives B_T = 3.2 Δf = 3.2 × 75 = 240 kHz.
In practice, a bandwidth of 200 kHz is allocated to each FM transmitter.
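The same arithmetic in a few lines of Python, with a Bessel-based check of how many side-frequency pairs are significant (the 1% criterion of the universal-curve method is assumed here):

```python
from scipy.special import jv

W, df = 15e3, 75e3                  # message bandwidth, frequency deviation
D = df / W                          # deviation ratio
bt_carson = 2*(df + W)              # Carson's rule
print(f"D = {D}, Carson: {bt_carson/1e3:.0f} kHz")

# Percent method for an equivalent single tone at fm = W:
# count side pairs with |Jn(D)| > 1% of the unmodulated carrier
n_max = max(n for n in range(1, 100) if abs(jv(n, D)) > 0.01)
print(f"percent method: BT = 2*{n_max}*W = {2*n_max*W/1e3:.0f} kHz")  # 240 kHz
```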
Generation of FM signals, two basic methods:
1. Direct FM: m(t) → voltage-controlled oscillator → s_FM(t)
2. Indirect FM: m(t) → NBFM modulator → frequency multiplier → s_FM(t)
Figure 2.27 Block diagram of the indirect
method of generating a wideband FM signal
NBFM
Why can a frequency multiplier change NBFM into WBFM?

Figure 2.28 Block diagram of a frequency multiplier.

The input-output relation of a (memoryless) nonlinear device may be expressed in the general form
v(t) = a_1 s(t) + a_2 s²(t) + ... + a_n sⁿ(t)
The input is an FM signal defined by
s(t) = A_c cos[2π f_c t + 2π k_f ∫₀ᵗ m(τ) dτ]
whose instantaneous frequency is
f_i(t) = f_c + k_f m(t)

After band-pass filtering the nth-order term, the output signal s′(t) is
s′(t) = A_c′ cos[2π n f_c t + 2π n k_f ∫₀ᵗ m(τ) dτ]
with instantaneous frequency
f_i′(t) = n f_c + n k_f m(t)
So f_c′ = n f_c and Δf′ = n Δf: NBFM → WBFM.
Demodulation of FM Signals (page 121)
Two basic methods:
1. Indirect method: phase-locked loop
2. Direct method: frequency discriminator
   FM wave → slope circuit → envelope detector → baseband signal (Figure 2.51)
Fig. 2.30 Balanced frequency discriminator.
Communication Systems
Unit –III
Random Process
Review of last lecture
• The points worth noting are:
  – The source coding algorithm plays an important role in achieving a higher code rate (compressing the data).
  – The channel encoder introduces redundancy into the data.
  – The modulation scheme plays an important role in deciding the data rate and the immunity of the signal to the errors introduced by the channel.
  – The channel can introduce many types of errors, e.g. due to thermal noise.
Review:
Layering of Source Coding
• Source coding includes
– Sampling
– Quantization
– Symbols to bits
– Compression
• Decoding includes
– Decompression
– Bits to symbols
– Symbols to sequence of numbers
– Sequence to waveform (Reconstruction)
Review:
Layering of Channel Coding
• Channel coding is divided into:
  – Discrete encoder/decoder: used to correct channel errors
  – Modulation/demodulation: used to map bits to waveforms for transmission
Review:
Resources of a Communication System
• Transmitted Power
– Average power of the transmitted signal
• Bandwidth (spectrum)
– Band of frequencies allocated for the signal
• Type of communication system
  – Power-limited systems, e.g. space communication links
  – Band-limited systems, e.g. telephone systems
Review:
Digital communication system • Important features of a DCS:
– Transmitter sends a waveform from a finite set of possible waveforms during a limited time
– Channel distorts, attenuates the transmitted signal and adds noise to it.
– Receiver decides which waveform was transmitted from the noisy received signal
– Probability of erroneous decision is an important measure for the system performance
Review of Probability
Sample Space and Probability
• Random experiment: its outcome, for
some reason, cannot be predicted with
certainty.
– Examples: throwing a die, flipping a coin and
drawing a card from a deck.
• Sample space: the set of all possible
outcomes, denoted by S. Outcomes are
denoted by E’s and each E lies in S, i.e., E ∈ S.
• A sample space can be discrete or
continuous.
Three Axioms of Probability
• For a discrete sample space S, define a probability measure P on S as a set function that assigns nonnegative values to all events E in S, such that the following conditions are satisfied:
• Axiom 1: 0 ≤ P(E) ≤ 1 for all E ∈ S
• Axiom 2: P(S) = 1 (when an experiment is conducted, there has to be an outcome)
• Axiom 3: for mutually exclusive events E1, E2, E3, ... we have P(E1 ∪ E2 ∪ E3 ∪ ...) = Σ P(Ei)
Conditional Probability
• We observe or are told that event E1 has occurred but are actually interested in event E2: knowledge that E1 has occurred changes the probability of E2 occurring.
• If it was P(E2) before, it now becomes P(E2|E1), the probability of E2 occurring given that event E1 has occurred.
• This conditional probability is given by P(E2|E1) = P(E2 ∩ E1) / P(E1).
• If P(E2|E1) = P(E2), or P(E2 ∩ E1) = P(E1)P(E2), then E1 and E2 are said to be statistically independent.
• Bayes' rule: P(E2|E1) = P(E1|E2)P(E2) / P(E1)
Mathematical Model for Signals
• Mathematical models for representing signals: deterministic or stochastic.
• Deterministic signal: no uncertainty with respect to the signal value at any time. Deterministic signals or waveforms are modeled by explicit mathematical expressions, such as x(t) = 5 cos(10t). Is this appropriate for real-world problems?
• Stochastic/random signal: some degree of uncertainty in the signal values before they actually occur. For a random waveform it is not possible to write such an explicit expression. However, a random waveform/random process may exhibit certain regularities that can be described in terms of probabilities and statistical averages, e.g. thermal noise in electronic circuits, due to the random movement of electrons.
Energy and Power Signals
• The performance of a communication system depends on the received signal energy: higher-energy signals are detected more reliably (with fewer errors) than lower-energy signals.
• An electrical signal can be represented as a voltage v(t) or a current i(t), with instantaneous power p(t) across a resistor R defined by
  p(t) = v²(t)/R  or  p(t) = i²(t) R
Energy and Power Signals
• In communication systems, power is often normalized by assuming R to be 1 Ω. The normalization convention allows us to express the instantaneous power as
  p(t) = x²(t)
  where x(t) is either a voltage or a current signal.
• The energy dissipated during the time interval (−T/2, T/2) by a real signal with this instantaneous power can then be written as
  E_x^T = ∫_{−T/2}^{T/2} x²(t) dt
• The average power dissipated by the signal during the interval is
  P_x^T = E_x^T / T = (1/T) ∫_{−T/2}^{T/2} x²(t) dt
Energy and Power Signals
• We classify x(t) as an energy signal if, and only if, it has nonzero but finite energy (0 < E_x < ∞), where
  E_x = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt
• An energy signal has finite energy but zero average power.
• Signals that are both deterministic and non-periodic are termed energy signals.
Energy and Power Signals
• Power is the rate at which energy is delivered.
• We classify x(t) as a power signal if, and only if, it has nonzero but finite power (0 < P_x < ∞), where
  P_x = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt
• A power signal has finite power but infinite energy.
• Signals that are random or periodic are termed power signals.
Random Variable
• A function whose domain is a sample space and whose range is some set of real numbers is called a random variable.
• Types of RVs:
  – Discrete, e.g. the outcome of flipping a coin
  – Continuous, e.g. the amplitude of a noise voltage at a particular instant of time
Random Variables
• All useful signals are random, i.e. the receiver does not know a priori what waveform is going to be sent by the transmitter.
• Let a random variable X(A) represent the functional relationship between a random event A and a real number.
• The distribution function F_X(x) of the random variable X is given by F_X(x) = P(X ≤ x).
Random Variable
• A random variable is a mapping from the
sample space to the set of real numbers.
• We shall denote random variables by
boldface, i.e., x, y, etc., while individual
or specific values of the mapping x are
denoted by x(w).
Random process
• A random process is a collection of time functions, or signals,
corresponding to various outcomes of a random experiment. For each
outcome, there exists a deterministic function, which is called a sample
function or a realization.
(Figure: sample functions, or realizations, each a deterministic function of time t; the set of their values at a fixed time instant forms a random variable.)
Random Process
• A mapping from a sample space to a set of time functions.
Random Process (contd.)
• Ensemble: the set of possible time functions that one sees. Denote this set by x(t), where the time functions x1(t, w1), x2(t, w2), x3(t, w3), ... are specific members of the ensemble.
• At any time instant t = tk we have a random variable x(tk).
• At any two time instants, say t1 and t2, we have two different random variables x(t1) and x(t2); the relationship between any two such random variables is described by their joint PDF.
Classification of Random Processes
• Based on whether its statistics change with time, the process is non-stationary or stationary.
• Different levels of stationarity:
  – Strictly stationary: the joint pdf of any order is independent of a shift in time.
  – Nth-order stationary: the joint pdf does not depend on the time shift, but depends only on the time spacings.
Cumulative Distribution Function (cdf)
• The cdf gives a complete description of the random variable. It is defined as
  F_X(x) = P(E ∈ S : X(E) ≤ x) = P(X ≤ x)
• The cdf has the following properties:
  – 0 ≤ F_X(x) ≤ 1 (this follows from Axiom 1 of the probability measure).
  – F_X(x) is non-decreasing: F_X(x1) ≤ F_X(x2) if x1 ≤ x2 (because the event X(E) ≤ x1 is contained in the event X(E) ≤ x2).
  – F_X(−∞) = 0 and F_X(+∞) = 1 (X(E) ≤ −∞ is the empty set, hence an impossible event, while X(E) ≤ ∞ is the whole sample space, i.e. a certain event).
  – P(a < X ≤ b) = F_X(b) − F_X(a).
Probability Density Function (pdf)
• The pdf is defined as the derivative of the cdf: f_X(x) = d F_X(x)/dx
• It follows that F_X(x) = ∫_{−∞}^{x} f_X(u) du and P(a < X ≤ b) = ∫_{a}^{b} f_X(u) du.
• For a discrete random variable with probabilities p_i, one has, for all i, p_i ≥ 0 and Σ p_i = 1.
Cumulative Joint PDF / Joint PDF
• Often encountered when dealing with combined experiments or repeated trials of a single experiment.
• Multiple random variables are basically multidimensional functions defined on a sample space of a combined experiment.
• Experiment 1: S1 = {x1, x2, ..., xm}; Experiment 2: S2 = {y1, y2, ..., yn}
• If we take any one element from S1 and one from S2:
  – 0 ≤ P(xi, yj) ≤ 1 (joint probability of two or more outcomes)
  – Marginal probability distributions: Σ_j P(xi, yj) = P(xi) and Σ_i P(xi, yj) = P(yj)
Expectation of Random Variables (Statistical Averages)
• Statistical averages, or moments, play an important role in the characterization of the random variable.
• The first moment of the probability distribution of a random variable X is called the mean value m_X, or expected value, of the random variable X.
• The second moment of a probability distribution is the mean-square value of X.
• Central moments are the moments of the difference between X and m_X; the second central moment is the variance of X.
• The variance is equal to the difference between the mean-square value and the square of the mean.
• The variance provides a measure of the variable's "randomness". The mean and variance of a random variable give a partial description of its pdf.
Time Averaging and Ergodicity
• A process where any member of the
ensemble exhibits the same statistical
behavior as that of the whole ensemble.
• For an ergodic process: To measure
various statistical averages, it is sufficient
to look at only one realization of the
process and find the corresponding time
average.
• For a process to be ergodic it must be
stationary. The converse is not true.
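A small numerical illustration of this idea (a random-phase sinusoid, a textbook example of an ergodic process; all parameters are illustrative): the time average over one long realization approximates the ensemble average across many realizations.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, T = 1000, 5.0, 200.0          # sample rate, tone frequency, duration (s)
t = np.arange(0, T, 1/fs)

# Process: x(t) = cos(2*pi*f0*t + theta), theta uniform on [0, 2*pi)
one_realization = np.cos(2*np.pi*f0*t + rng.uniform(0, 2*np.pi))
time_avg = one_realization.mean()      # time average over a single member

thetas = rng.uniform(0, 2*np.pi, 10_000)
ensemble_avg = np.cos(2*np.pi*f0*1.0 + thetas).mean()  # across ensemble at t = 1 s

print(f"time average     = {time_avg:+.4f}")   # both ~ 0, the true mean
print(f"ensemble average = {ensemble_avg:+.4f}")
```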
Gaussian (or Normal) Random Variable (Process)
• A continuous random variable whose pdf is
  f_X(x) = (1/√(2πσ²)) exp[−(x − μ)²/(2σ²)]
  where μ and σ² are parameters; usually denoted N(μ, σ²).
• The most important and most frequently encountered random variable in communications.
Central Limit Theorem
• The CLT provides justification for using a Gaussian process as a model, provided that
  – the random variables are statistically independent, and
  – the random variables have probability distributions with the same mean and variance.
CLT
• The central limit theorem states that the probability distribution of V_N (the normalized sum of the N random variables) approaches a normalized Gaussian distribution N(0, 1) in the limit as the number of random variables approaches infinity.
• When N is finite, the Gaussian limit may provide a poor approximation to the actual probability distribution.
Autocorrelation
Autocorrelation of Energy Signals
• Correlation is a matching process; autocorrelation refers to the matching of a signal with a delayed version of itself.
• The autocorrelation function of a real-valued energy signal x(t) is defined as
  R_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt
• The autocorrelation function R_x(τ) provides a measure of how closely the signal matches a copy of itself as the copy is shifted τ units in time.
• R_x(τ) is not a function of time; it is only a function of the time difference τ between the waveform and its shifted copy.
Autocorrelation properties (energy signals):
• symmetrical in τ about zero: R_x(−τ) = R_x(τ)
• maximum value occurs at the origin: |R_x(τ)| ≤ R_x(0)
• autocorrelation and ESD form a Fourier transform pair, as designated by the double-headed arrows
• the value at the origin is equal to the energy of the signal
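These properties are easy to verify numerically; the sketch below (Python, with an illustrative decaying-pulse energy signal) computes R_x(τ) with np.correlate and checks symmetry, the peak at the origin, and R_x(0) = signal energy.

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1/fs)
x = np.exp(-3*t) * np.sin(2*np.pi*10*t)   # an illustrative energy signal

# R_x(tau) on a discrete grid; scale by 1/fs to approximate the integral
R = np.correlate(x, x, mode="full") / fs
mid = len(x) - 1                          # index of tau = 0

print("symmetric:", np.allclose(R[mid:], R[:mid+1][::-1]))
print("peak at origin:", np.argmax(R) == mid)
print("R(0) =", R[mid].round(5), " energy =", (np.sum(x**2)/fs).round(5))
```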
Autocorrelation of a Periodic (Power) Signal
• The autocorrelation function of a real-valued power signal x(t) is defined as
  R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt
• When the power signal x(t) is periodic with period T0, the autocorrelation function can be expressed as
  R_x(τ) = (1/T0) ∫_{−T0/2}^{T0/2} x(t) x(t + τ) dt
Autocorrelation of power signals
The autocorrelation function of a real-valued periodic signal has properties similar to those of an energy signal:
• symmetrical in τ about zero
• maximum value occurs at the origin
• autocorrelation and PSD form a Fourier transform pair, as designated by the double-headed arrows
• the value at the origin is equal to the average power of the signal
Spectral Density
• The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain.
• This concept is particularly important when considering filtering in communication systems, where we must evaluate the signal and noise at the filter output.
• The energy spectral density (ESD) or the power spectral density (PSD) is used in the evaluation.
• We need to determine how the average power or energy of the process is distributed in frequency.
Spectral Density
• Taking the Fourier transform of the random process directly does not work: a sample function of a power-type random process is not integrable, so the process is characterized through its autocorrelation function and PSD instead.
Energy Spectral Density
• Energy spectral density describes the energy per unit bandwidth, measured in joules/hertz.
• It is represented as ψ_x(f), the squared magnitude spectrum:
  ψ_x(f) = |X(f)|²
• According to Parseval's relation,
  E_x = ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df
• Therefore E_x = ∫_{−∞}^{∞} ψ_x(f) df.
• The energy spectral density is symmetrical in frequency about the origin, so the total energy of the signal x(t) can also be expressed as E_x = 2 ∫_{0}^{∞} ψ_x(f) df.
Power Spectral Density
• The power spectral density (PSD) function G_x(f) of the periodic signal x(t) is a real, even, and nonnegative function of frequency that gives the distribution of the power of x(t) in the frequency domain.
• For a periodic signal, the PSD is a line spectrum built from the Fourier-series coefficients c_n:
  G_x(f) = Σ_{n=−∞}^{∞} |c_n|² δ(f − n f_0)
• The PSD of a non-periodic signal is defined as a limit over a truncated version of the signal.
• The average power of a periodic signal x(t) is then
  P_x = Σ_{n=−∞}^{∞} |c_n|²
Noise in the Communication System
• The term noise refers to unwanted electrical signals that are always present in electrical systems, e.g. spark-plug ignition noise, switching transients and other electromagnetic signals, or noise from the atmosphere, the sun, and other galactic sources.
• Thermal noise can be described as a zero-mean Gaussian random process.
• A Gaussian process n(t) is a random function whose value n at any arbitrary time t is statistically characterized by the Gaussian probability density function
  p(n) = (1/(σ√(2π))) exp[−½ (n/σ)²]
White Noise
• The primary spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems.
• A thermal noise source emanates an equal amount of noise power per unit bandwidth at all frequencies, from dc to about 10¹² Hz.
• Power spectral density: G_w(f) = N0/2 for all f
• The autocorrelation function of white noise is R_w(τ) = (N0/2) δ(τ)
• The average power P of white noise is infinite.
White Noise
• Since R_w(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated.
• Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise) then the noise samples are also independent.
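A numeric sanity check in Python (a discrete-time stand-in for white Gaussian noise; parameters illustrative): the sample autocorrelation should be nearly zero at every nonzero lag and equal to the noise power at lag zero.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 100_000, 1.0
w = rng.normal(0, sigma, N)              # white Gaussian noise samples

def autocorr(x, lag):
    """Biased sample autocorrelation at a given lag."""
    return np.dot(x[:len(x)-lag], x[lag:]) / len(x)

print("lag 0:", autocorr(w, 0).round(3))             # ~ sigma^2 = 1
for lag in (1, 5, 50):
    print(f"lag {lag}:", autocorr(w, lag).round(3))  # ~ 0: uncorrelated
```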
Additive White Gaussian Noise (AWGN)
• The effect on the detection process of a channel with additive white Gaussian noise (AWGN) is that the noise affects each transmitted symbol independently.
• Such a channel is called a memoryless channel.
• The term "additive" means that the noise is simply superimposed or added to the signal; there are no multiplicative mechanisms at work.
Random Processes and Linear Systems
• If a random process forms the input to a time-invariant linear system, the output will also be a random process.
Distortionless Transmission
(Recall linear and non-linear group delays in DSP.)
Distortionless Transmission
What is required of a network for it to behave like an ideal transmission line?
• The output signal from an ideal transmission line may have some time delay and a different amplitude compared with the input.
• It must have no distortion: it must have the same shape as the input.
• For ideal distortionless transmission: y(t) = K x(t − t0), for some gain K and delay t0.
Ideal Distortionless Transmission
• The overall system response must have a constant magnitude response.
• The phase shift must be linear with frequency.
• All of the signal's frequency components must also arrive with identical time delay in order to add up correctly.
• The time delay t0 is related to the phase shift θ(f) and the radian frequency ω = 2πf by
  t0 = −θ(f)/(2πf)
• A characteristic often used to measure delay distortion of a signal is called the envelope delay, or group delay, which is defined as
  τ(f) = −(1/2π) dθ(f)/df
BANDWIDTH OF DIGITAL DATA
• Baseband signals
– Signals containing frequencies ranging from 0 to
some frequency fs
• Bandpass or Passband Signals
– Signals containing frequencies ranging from fs1 to
some frequency fs2
Communication Systems
Unit –IV
Noise Characterization
2.10 Noise in CW Modulation Systems
Formulating two models:
1. Channel model
2. Receiver model
AWGN: additive white Gaussian noise

Receiver model:
modulated signal s(t) + noise w(t) → band-pass filter → x(t) → demodulator → output signal
Signal-to-Noise Ratio: Basic Definitions
SNR = (average power of signal) / (average power of noise)
Power spectral density of the white noise w(t):
S_W(f) = N0/2, −∞ < f < ∞
N0 is the average noise power per unit bandwidth.
Furthermore, the output noise of the band-pass filter can be regarded as narrowband noise:
n(t) = n_I(t) cos(2π f_c t) − n_Q(t) sin(2π f_c t)
For ideal band-pass filtered noise of bandwidth B_T, the average noise power is N0 B_T.
The input of the demodulator is
x(t) = s(t) + n(t)
where s(t) is the useful modulated signal and n(t) is narrowband noise:
s(t) + w(t) → band-pass filter → x(t) → demodulator → output signal
with SNR_I measured at the demodulator input and SNR_O at its output.
Requirements for a fair comparison:
1. The modulated signal s(t) has the same average power.
2. The channel noise w(t) has the same average power, measured in the message bandwidth W.
Figure 2.35 The baseband transmission model, assuming a message signal of bandwidth W, used for calculating the channel signal-to-noise ratio SNR_C.

Figure of merit = SNR_O / SNR_C
2.11 Noise in Linear Receivers Using Coherent
Detection
• Linear receiver:
DSB-SC and SSB coherent detector
• Nonlinear receiver:
AM envelope detector
We take DSB as the example for analyzing the noise performance of coherent detection.
s_DSB(t) = C A_c m(t) cos(2π f_c t)
C: system-dependent scaling factor
Let the message signal m(t) have power spectral density S_M(f); its average power P is then
P = ∫_{−W}^{W} S_M(f) df

Model of the DSB-SC receiver using coherent detection:
s(t) + w(t) → BPF → x(t) → [× cos(2π f_c t)] → v(t) → LPF → y(t)
Average power of the DSB signal: S_DSB = C² A_c² P / 2
Average noise power in the message bandwidth: W N0
SNR_{C,DSB} = C² A_c² P / (2 W N0)     (2.84)

The band-pass filtered noise power is N_i = 2 W N0, so
SNR_I = C² A_c² P / (4 W N0)
What is SNR_O?

The input of the product modulator is
x(t) = s(t) + n(t)
     = C A_c m(t) cos(2π f_c t) + n_I(t) cos(2π f_c t) − n_Q(t) sin(2π f_c t)

The output of the product modulator is
v(t) = x(t) cos(2π f_c t)
     = ½ C A_c m(t) + ½ n_I(t) + ½ [C A_c m(t) + n_I(t)] cos(4π f_c t) − ½ n_Q(t) sin(4π f_c t)

Output of the LPF:
y(t) = ½ C A_c m(t) + ½ n_I(t)     (2.86)
The receiver output is y(t) = ½ C A_c m(t) + ½ n_I(t).

Output message signal: s_o(t) = ½ C A_c m(t)
Power of the output signal: ¼ C² A_c² P

Output noise: n_o(t) = ½ n_I(t)
The average power of the in-phase noise component n_I(t) is the same as that of the filtered noise n(t), namely N_i = 2 W N0. The output noise power is therefore
N_out = (½)² · 2 W N0 = ½ W N0
The output SNR for a DSB-SC receiver using coherent detection is therefore
SNR_{O,DSB-SC} = (C² A_c² P / 4) / (W N0 / 2) = C² A_c² P / (2 W N0)     (2.87)

Therefore
Figure of merit = SNR_O / SNR_C |_DSB = 1

A similar analysis for the SSB demodulator (Problem 2.49) gives
Figure of merit = SNR_O / SNR_C |_SSB = 1
SSB has the same figure of merit as DSB.
2.12 Noise in AM Receivers Using Envelope Detection

AM signal: s(t) = A_c [1 + k_a m(t)] cos(2π f_c t)
Power of the AM signal: S_AM = ½ A_c² (1 + k_a² P)
Noise power in the message bandwidth: W N0

Figure 2.37 Model of AM receiver.
With band-pass noise power N_I = 2 W N0, the input SNR is
SNR_{I,AM} = A_c² (1 + k_a² P) / (4 W N0)

Channel SNR:
SNR_{C,AM} = A_c² (1 + k_a² P) / (2 W N0)     (2.90)
It is difficult to obtain an exact relationship between signal and noise at the envelope-detector output, so we discuss it under different conditions.

x(t) = s(t) + n(t)
     = [A_c + A_c k_a m(t) + n_I(t)] cos(2π f_c t) − n_Q(t) sin(2π f_c t)     (2.91)

y(t) = envelope of x(t)
     = √{[A_c + A_c k_a m(t) + n_I(t)]² + n_Q²(t)}     (2.92)
1) When signal >> noise:

y(t) = √{[A_c + A_c k_a m(t) + n_I(t)]² + n_Q²(t)}
     ≈ [A_c + A_c k_a m(t)] √{1 + 2 n_I(t) / [A_c + A_c k_a m(t)]}

Using √(1 + x) ≈ 1 + x/2 when x << 1, and noting that n_I(t)/[A_c + A_c k_a m(t)] << 1 when the signal dominates,
y(t) ≈ A_c + A_c k_a m(t) + n_I(t)
Output SNR (the dc term A_c is blocked):
output signal: A_c k_a m(t); average output signal power: S_O = A_c² k_a² P
output noise power: N_O = 2 W N0

SNR_{O,AM} = A_c² k_a² P / (2 W N0)

Figure of merit = SNR_O / SNR_C |_AM = k_a² P / (1 + k_a² P)

(equivalently, G_AM = SNR_O / SNR_I = 2 k_a² P / (1 + k_a² P))
Comparison: DSB, SSB, AM
• The figure of merit of DSB and SSB receivers using coherent detection is always unity; the corresponding figure of merit of an AM receiver using envelope detection is always less than unity.
• In other words, the noise performance of a full AM receiver is always inferior to that of a DSB or SSB receiver. This is due to the wastage of transmitted power that results from transmitting the carrier as a component of the AM wave.
Example 2.4 Single-Tone Modulation

Single-tone modulating signal: m(t) = A_m cos(2π f_m t)
The corresponding AM wave: s(t) = A_c [1 + μ cos(2π f_m t)] cos(2π f_c t)
Modulation factor: μ = k_a A_m
Average power of m(t): P = ½ A_m²

SNR_O / SNR_C |_AM = (½ k_a² A_m²) / (1 + ½ k_a² A_m²)
Discussion

SNR_O / SNR_C |_AM = (½ k_a² A_m²) / (1 + ½ k_a² A_m²) = μ² / (2 + μ²)

• When μ = 1, corresponding to 100% modulation, we get a figure of merit equal to 1/3.
• This means that, other factors being equal, an AM system must transmit three times as much average power as a suppressed-carrier system to achieve the same quality of noise performance.
Figure 2.37 Model of AM receiver.

y(t) = envelope of x(t) = √{[A_c + A_c k_a m(t) + n_I(t)]² + n_Q²(t)}     (2.92)
2) When signal << noise:
Represent the narrowband noise in envelope-phase form, n(t) = r(t) cos[2π f_c t + ψ(t)]. Then, approximately,
y(t) ≈ r(t) + A_c cos[ψ(t)] + A_c k_a m(t) cos[ψ(t)]
In this case the detector output has no component strictly proportional to the message signal m(t): the message is multiplied by the random factor cos[ψ(t)].
Threshold Effect
Threshold effect: the loss of the message in an envelope detector that operates at a low SNR is referred to as the threshold effect.
By threshold we mean a value of the SNR below which the noise performance of a detector deteriorates much more rapidly than in proportion to the SNR.
Communication Systems
Unit –IV
Noise Characterization
2.13 Noise in FM Receivers

Figure 2.40 Model of an FM receiver:
s_FM(t) + noise w(t) → BPF → x(t) → limiter → discriminator → v(t) → LPF → y(t)

In theory, the discriminator consists of two parts; in practice, the two parts are usually implemented as integral parts of a single physical unit.
n(t) = n_I(t) cos(2π f_c t) − n_Q(t) sin(2π f_c t)

In terms of its envelope and phase,
n(t) = r(t) cos[2π f_c t + ψ(t)]
r(t) = [n_I²(t) + n_Q²(t)]^{1/2}
ψ(t) = tan⁻¹[n_Q(t)/n_I(t)]
r(t) is Rayleigh distributed; ψ(t) is uniformly distributed.
The incoming FM signal is
s(t) = A_c cos[2π f_c t + 2π k_f ∫₀ᵗ m(τ) dτ]
We define φ(t) = 2π k_f ∫₀ᵗ m(τ) dτ, thus s(t) = A_c cos[2π f_c t + φ(t)].

The noisy signal at the band-pass filter output is
x(t) = s(t) + n(t) = A_c cos[2π f_c t + φ(t)] + r(t) cos[2π f_c t + ψ(t)]
Figure 2.41 Phasor diagram for the FM wave plus narrowband noise, for the case of high carrier-to-noise ratio.

θ(t) = φ(t) + tan⁻¹{ r(t) sin[ψ(t) − φ(t)] / (A_c + r(t) cos[ψ(t) − φ(t)]) }     (2.137)

The envelope of x(t) is of no interest to us, because any envelope variations at the band-pass filter output are removed by the limiter. So, we only focus on the phase of x(t).
The discriminator output

When CNR >> 1 (CNR: carrier-to-noise ratio), that is, r(t) << A_c,
θ(t) ≈ φ(t) + [r(t)/A_c] sin[ψ(t) − φ(t)]
     = 2π k_f ∫₀ᵗ m(τ) dτ + [r(t)/A_c] sin[ψ(t) − φ(t)]     (2.139)

The discriminator output is then
v(t) = (1/2π) dθ(t)/dt = k_f m(t) + n_d(t)     (2.140)
with noise term
n_d(t) = (1/2π A_c) d{r(t) sin[ψ(t) − φ(t)]}/dt     (2.141)

Equation (2.140) shows that if the CNR is large, the output of the discriminator consists of the message signal plus noise.
Then, we may simplify equation (2.141) as
n_d(t) = (1/2π A_c) d{r(t) sin[ψ(t) − φ(t)]}/dt     (2.142)

Because n_Q(t) = r(t) sin[ψ(t)], where ψ(t) is uniformly distributed over 2π radians, and because, if the CNR is high, ψ(t) − φ(t) can also be shown to be uniformly distributed over 2π radians, we obtain
n_d(t) = (1/2π A_c) dn_Q(t)/dt     (2.144)

This means that the additive noise n_d(t) is determined by the carrier amplitude A_c and the quadrature component n_Q(t) of the narrowband noise n(t).
To calculate SNR_O:
SNR_O = (average power of the demodulated signal) / (average power of the noise at the receiver output)

Output: v(t) = k_f m(t) + n_d(t)
Output signal: k_f m(t); average signal power: k_f² P, where P is the power of m(t).

For the noise, n_d(t) is obtained by passing n_Q(t) through a differentiator with frequency response
H(f) = j2πf / (2π A_c) = jf / A_c
To calculate the output noise power, use S_Y(f) = |H(f)|² S_X(f) (1.58). The power spectral density of n_d(t) is
S_{Nd}(f) = (f²/A_c²) S_{NQ}(f)
Because n_Q(t) is low-pass filtered noise with S_{NQ}(f) = N0 for |f| ≤ B_T/2,
S_{Nd}(f) = N0 f²/A_c² for |f| ≤ B_T/2; 0 otherwise     (2.146)
The power spectral density of n_o(t) at the receiver output (after the low-pass filter) is
S_{No}(f) = N0 f²/A_c² for |f| ≤ W; 0 otherwise     (2.147)
Figure 2.42 Noise analysis of FM receiver.
(a) Power spectral density of quadrature component nQ(t) of
narrowband noise n(t).
(b) Power spectral density of noise nd(t) at the discriminator output.
(c) Power spectral density of noise no(t) at the receiver output.
Average signal power: k_f² P
Average power of the FM signal: S_FM = A_c²/2
Average noise power in the message bandwidth: W N0
Therefore
SNR_{C,FM} = A_c² / (2 N0 W)     (2.150)

Average power of the output noise:
∫_{−W}^{W} (N0 f²/A_c²) df = 2 N0 W³ / (3 A_c²)     (2.148)
Therefore
SNR_{O,FM} = 3 A_c² k_f² P / (2 N0 W³)     (2.149)
SNR_O / SNR_C |_FM = 3 k_f² P / W²     (2.151)

Deviation ratio (frequency-deviation ratio):
D = Δf / W
(Δf: frequency deviation; W: highest modulation frequency)
Carson's rule: B_T = 2Δf + 2W = 2Δf (1 + 1/D) = 2(D + 1) W

Since Δf is proportional to k_f, D is proportional to k_f / W, so
figure of merit = SNR_O / SNR_C |_FM = 3 D² P (for a message normalized to unit peak amplitude), i.e. proportional to D².
D is similar to the modulation index β.
Conclusion
• When the carrier-to-noise ratio is high, an increase in the transmission bandwidth B_T provides a corresponding quadratic increase in the output signal-to-noise ratio (figure of merit):
  figure of merit = SNR_O / SNR_C |_FM ∝ D² P
• FM improves noise performance at the cost of transmission bandwidth:
  D ↑ → figure of merit ↑ : good!
  D ↑ → transmission bandwidth B_T ↑ : bad!
FM Threshold Reduction
Figure 2.47 FM demodulator with negative feedback.
Threshold reduction in FM receivers may be achieved
by using an FM demodulator with negative feedback,
or by using a phase-locked loop demodulator.
Pre-Emphasis and De-Emphasis in FM
Figure 2.48 (a) Power spectral density of noise at FM receiver output.
(b) Power spectral density of a typical message signal.
Fig. 2.49 Use of pre-emphasis and de-emphasis in an FM system:
m(t) → pre-emphasis H_pe(f) → FM transmitter → channel (noise w(t) added) → FM receiver → de-emphasis H_de(f) → recovered signal m(t)
• Pre-emphasize the high-frequency components of the message signal in the transmitter only;
• de-emphasize the high-frequency components of the message signal and the noise in the receiver;
• this effectively increases the output SNR.

The frequency response H_pe(f) of the pre-emphasis filter and the frequency response H_de(f) of the de-emphasis filter satisfy
H_de(f) = 1 / H_pe(f), −W ≤ f ≤ W
Without pre-emphasis and de-emphasis:
S_{Nd}(f) = N0 f²/A_c² for |f| ≤ B_T/2; 0 otherwise     (2.146)

With pre-emphasis and de-emphasis, the noise PSD at the discriminator output is shaped by |H_de(f)|²:
|H_de(f)|² S_{Nd}(f) = |H_de(f)|² N0 f²/A_c² for |f| ≤ B_T/2; 0 otherwise

After the low-pass filter (−W, W):
average output noise power with de-emphasis = (N0/A_c²) ∫_{−W}^{W} f² |H_de(f)|² df
Improvement factor:
I = (average output noise power without pre-emphasis and de-emphasis) / (average output noise power with pre-emphasis and de-emphasis)

I = 2W³ / { 3 ∫_{−W}^{W} f² |H_de(f)|² df }     (2.160)
Example 2.6
(a) Pre-emphasis filter: H_pe(f) = 1 + jf/f0
(b) De-emphasis filter: H_de(f) = 1 / (1 + jf/f0)
I = 2W³ / { 3 ∫_{−W}^{W} f² df / [1 + (f/f0)²] }
  = (W/f0)³ / { 3 [(W/f0) − tan⁻¹(W/f0)] }

In commercial FM broadcasting: f0 = 2.1 kHz, W = 15 kHz, which gives
I ≈ 22, i.e. about 13 dB.
The improvement is remarkable.
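The quoted number can be reproduced with a few lines of Python (NumPy used only for the arctangent; the values are those of the example):

```python
import numpy as np

f0, W = 2.1e3, 15e3     # de-emphasis corner frequency, message bandwidth (Hz)
r = W / f0
# Improvement factor I = (W/f0)^3 / (3*[(W/f0) - arctan(W/f0)])
I = r**3 / (3*(r - np.arctan(r)))
print(f"I = {I:.1f}  ({10*np.log10(I):.1f} dB)")   # ~21, ~13 dB
```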
Unit – 1: Information Theory
1.1 Introduction:
• Communication
Communication involves explicitly the transmission of information from one point to another, through a succession of processes.
• Basic elements to every communication system
o Transmitter
o Channel and
o Receiver
• Information sources are classified as analog or discrete:
  Analog: emits a continuous-amplitude, continuous-time electrical waveform.
  Discrete: emits a sequence of letters or symbols.
The output of a discrete information source is a string or sequence of symbols.
1.2 Measure the information:
To measure the information content of a message quantitatively, we are required to arrive at an intuitive concept of the amount of information.
Consider the following examples:
A trip to Mercara (Coorg) in the winter time during evening hours,
1. It is a cold day
2. It is a cloudy day
3. Possible snow flurries
(Figure: classification of information sources into analog and discrete.)
(Figure: communication system; source of information → transmitter → channel → receiver → user of information, with the message signal, transmitted signal, received signal, and estimate of the message signal.)
Amount of information received is obviously different for these messages.
o Message (1) Contains very little information since the weather in coorg is ‘cold’ for most part of the time during winter season.
o The forecast of ‘cloudy day’ contains more information, since it is not an event that occurs often.
o In contrast, the forecast of ‘snow flurries’ conveys even more information, since the occurrence of snow in coorg is a rare event.
• On an intuitive basis, then, with a knowledge of the occurrence of an event, what can be said about the amount of information conveyed? It is related to the probability of occurrence of the event.
• What do you conclude from the above example with regard to the quantity of information? The message associated with the event least likely to occur contains the most information.
• The information content of a message can be expressed quantitatively as follows:
The above concepts can now be framed in terms of probabilities as follows:
Say that an information source emits one of q possible messages m1, m2, ..., mq with p1, p2, ..., pq as their probabilities of occurrence.
Based on the above intuition, the information content of the kth message can be written as
I(mk) ∝ 1/pk
Also, to satisfy the intuitive concept of information, I(mk) must tend to zero as pk tends to 1.
Therefore:
I(mk) > I(mj) if pk < pj
I(mk) → 0 as pk → 1
I(mk) ≥ 0 when 0 < pk < 1
Another requirement is that when two independent messages are received, the total information content is –
Sum of the information conveyed by each of the messages.
Thus, we have
I (mk & mq) I (mk & mq) = Imk + Imq ------ I
∴ We can define a measure of information as –
I (mk ) = log
kp
1 ------ III
Unit of information measure
Base of the logarithm will determine the unit assigned to the information content.
Natural logarithm base : ‘nat’
Base - 10 : Hartley / decit
Base - 2 : bit
Why use the binary digit as the unit of information?
It is based on the fact that if two possible binary digits occur with equal probability (p1 = p2 = ½), then the correct identification of the binary digit conveys

I(m1) = I(m2) = – log2 (½) = 1 bit

∴ One bit is the amount of information that we gain when one of two possible and equally likely events occurs.
Illustrative Example
A source puts out one of five possible messages during each message interval. The probabilities of these messages are

p1 = 1/2,  p2 = 1/4,  p3 = 1/8,  p4 = 1/16,  p5 = 1/16

What is the information content of these messages?

I(m1) = – log2 (1/2)  = 1 bit
I(m2) = – log2 (1/4)  = 2 bits
I(m3) = – log2 (1/8)  = 3 bits
I(m4) = – log2 (1/16) = 4 bits
I(m5) = – log2 (1/16) = 4 bits
HW: Calculate I for the above messages in nats and Hartley
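For this homework, a small Python sketch (illustrative; the message names m1 to m5 follow the example above) that evaluates I(mk) in all three units:

```python
import math

probs = {"m1": 1/2, "m2": 1/4, "m3": 1/8, "m4": 1/16, "m5": 1/16}

for name, p in probs.items():
    bits     = -math.log2(p)       # base 2  -> bits
    nats     = -math.log(p)        # base e  -> nats
    hartleys = -math.log10(p)      # base 10 -> Hartleys (decits)
    print(f"I({name}) = {bits:.0f} bits = {nats:.3f} nats = {hartleys:.3f} Hartleys")
```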
Digital Communication System:
Entropy and rate of Information of an Information Source /
Model of a Markoff Source
1.3 Average Information Content of Symbols in Long Independent Sequences
Suppose that a source is emitting one of M possible symbols s1, s2, ..., sM in a statistically independent sequence.
Let p1, p2, ..., pM be the probabilities of occurrence of the M symbols respectively. Suppose further that during a long period of transmission a sequence of N symbols has been generated.
On an average, s1 will occur N p1 times, s2 will occur N p2 times, ..., si will occur N pi times.
The information content of the i-th symbol is I(si) = log (1/pi) bits.
∴ The N pi occurrences of si contribute an information content of

N pi · I(si) = N pi · log (1/pi) bits
∴ Total information content of the message is = Sum of the contribution due to each of
[Figure: Digital communication system. Source of information → source encoder → channel encoder → modulator → channel → demodulator → channel decoder → source decoder → user of information; the transmitter produces the source code word and channel code word, and the receiver forms estimates of the channel codeword, the source codeword, and the message signal.]
M symbols of the source alphabet,

i.e.,  Itotal = Σ (i = 1 to M) N pi log (1/pi) bits

∴ The average information content per symbol is given by

H = Itotal / N = Σ (i = 1 to M) pi log (1/pi) bits per symbol      ---- IV

This is the equation used by Shannon. The average information content per symbol is also called the source entropy.
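Equation IV translates directly into a few lines of Python; the helper below (our own naming, shown as a sketch) computes H for any probability list:

```python
import math

def entropy(probs):
    """Source entropy H = sum(p * log2(1/p)) in bits per symbol (Eq. IV);
    zero-probability terms contribute nothing and are skipped."""
    return sum(p * math.log2(1/p) for p in probs if p > 0)

print(entropy([1/2, 1/4, 1/4]))   # 1.5 bits/symbol, matching the worked example below
```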
1.4 The average information associated with extremely unlikely and extremely likely messages, and the dependence of H on the probabilities of the messages
Consider the situation where you have just two messages with probabilities 'p' and '(1 – p)'.
The average information per message is

H = p log (1/p) + (1 – p) log (1/(1 – p))

At p = 0, H = 0, and at p = 1, H = 0 again. The maximum value of H is obtained at p = ½:

Hmax = ½ log2 2 + ½ log2 2 = log2 2 = 1

∴ Hmax = 1 bit / message

[Figure: Plot of H versus p; H rises from 0 at p = 0 to a maximum of 1 bit at p = ½ and falls back to 0 at p = 1.]

The above observation can be generalized for a source with an alphabet of M symbols: the entropy attains its maximum value when the symbol probabilities are equal,
i.e., when p1 = p2 = p3 = ... = pM = 1/M, and then

Hmax = Σ (i = 1 to M) pi log2 (1/pi) = Σ (i = 1 to M) (1/M) log2 M = log2 M bits / symbol
• Information rate
If the source is emitting symbols at a fixed rate of 'rs' symbols/sec, the average source information rate 'R' is defined as –
R = rs . H bits / sec
• Illustrative Examples
1. Consider a discrete memoryless source with a source alphabet A = { so, s1, s2} with respective probs. p0 = ¼, p1 = ¼, p2 = ½. Find the entropy of the source.
Solution: By definition, the entropy of a source is given by

H = Σ (i = 1 to M) pi log (1/pi) bits/symbol

H for this example is

H(A) = Σ (i = 0 to 2) pi log (1/pi) = p0 log (1/p0) + p1 log (1/p1) + p2 log (1/p2)

Substituting the values given, we get

H(A) = ¼ log2 4 + ¼ log2 4 + ½ log2 2 = 3/2 = 1.5 bits/symbol

If rs = 1 symbol per second, then

H′(A) = rs H(A) = 1.5 bits/sec
2. An analog signal is band limited to B Hz, sampled at the Nyquist rate, and the samples are quantized into 4 levels. The quantization levels Q1, Q2, Q3, and Q4 (messages) are assumed independent and occur with probabilities

P1 = P4 = 1/8  and  P2 = P3 = 3/8.

Find the information rate of the source.
Solution: By definition, the average information H is given by

H = p1 log (1/p1) + p2 log (1/p2) + p3 log (1/p3) + p4 log (1/p4)

Substituting the values given, we get

H = (1/8) log 8 + (3/8) log (8/3) + (3/8) log (8/3) + (1/8) log 8 = 1.8 bits/message.

The information rate of the source, by definition, is

R = rs H = 2B × (1.8) = (3.6 B) bits/sec
3. Compute the values of H and R if, in the above example, the quantization levels are so chosen that they are equally likely to occur.
Solution:
The average information per message is

H = 4 × (¼ log2 4) = 2 bits/message

and R = rs H = 2B × (2) = (4B) bits/sec
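Both quantizer examples can be checked with a short sketch (the variable names are ours; B is kept symbolic since rs = 2B, so R scales linearly with the bandwidth):

```python
import math

def entropy(probs):
    return sum(p * math.log2(1/p) for p in probs if p > 0)

for probs in ([1/8, 3/8, 3/8, 1/8],   # Example 2: unequal levels
              [1/4, 1/4, 1/4, 1/4]):  # Example 3: equally likely levels
    H = entropy(probs)
    # rs = 2B samples/sec at the Nyquist rate, so R = 2*H per hertz of bandwidth
    print(f"H = {H:.2f} bits/message, R = {2*H:.2f}*B bits/sec")
```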
1.5 Markoff Model for Information Sources
Assumption
A source puts out symbols belonging to a finite alphabet according to certain probabilities depending on the preceding symbols as well as on the particular symbol in question.
• Define a random process
A statistical model of a system that produces a sequence of symbols as stated above, governed by a set of probabilities, is known as a random process.
Therefore, we may consider a discrete source as a random process, and the converse is also true:
a random process that produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source.
• Discrete stationary Markoff process
Provides a statistical model for the symbol sequences emitted by a discrete source. A general description of the model is as follows:
1. At the beginning of each symbol interval, the source will be in one of 'n' possible states 1, 2, ..., n, where 'n' is bounded by

n ≤ M^m

M = number of symbols/letters in the alphabet of the discrete stationary source,
m = the number of symbols for which the residual influence lasts, i.e., m represents the order of the source
(m = 2 means a second-order source, m = 1 a first-order source).
The source changes state once during each symbol interval from say i to j. The probabilityy of
this transition is Pij. Pij depends only on the initial state i and the final state j but does not depend on
the states during any of the preceeding symbol intervals.
2. When the source changes state from i to j, it emits a symbol. The symbol emitted depends on the initial state i and the transition i → j.
3. Let s1, s2, ….. sM be the symbols of the alphabet, and let x1, x2, x3, …… xk,…… be a sequence of
random variables, where xk represents the kth symbol in a sequence emitted by the source.
Then, the probability that the kth symbol emitted is sq will depend on the previous symbols x1, x2,
x3, …………, xk–1 emitted by the source.
i.e., P (Xk = sq / x1, x2, ……, xk–1)
4. The residual influence of
x1, x2, ……, xk–1 on xk is represented by the state of the system at the beginning of the kth symbol
interval.
i.e., P(xk = sq / x1, x2, ..., xk–1) = P(xk = sq / Sk)
where Sk is a discrete random variable representing the state of the system at the beginning of the k-th interval.
The term 'state' is used to capture past history or residual influence, in the same sense as state variables in system theory or states in sequential logic circuits.
System Analysis of Markoff Sources
Representation of discrete stationary Markoff sources:
o They are represented in graph form, with the nodes of the graph representing states and the transitions between states shown as directed lines from the initial to the final state.
o The transition probabilities and the symbols emitted on each transition are marked along the lines of the graph.
A typical example for such a source is given below.
o It is an example of a source emitting one of three symbols A, B, and C
o The probability of occurrence of a symbol depends on the particular symbol in question and on the symbol immediately preceding it.
o The residual or past influence lasts only for a duration of one symbol.
Last symbol emitted by this source
o The last symbol emitted by the source can be A or B or C. Hence past history can be represented by three states- one for each of the three symbols of the alphabet.
• Nodes of the source
o Suppose that the system is in state (1) and the last symbol emitted by the source was A.
o The source now emits symbol (A) with probability ½ and returns to state (1).
OR
o The source emits letter (B) with probability ¼ and goes to state (3)
OR
o The source emits symbol (C) with probability ¼ and goes to state (2).
State transition and symbol generation can also be illustrated using a tree diagram.
Tree diagram
• A tree diagram is a planar graph whose nodes correspond to states and whose branches correspond to transitions; a transition between states occurs once every Ts seconds.
Along the branches of the tree, the transition probabilities and the symbols emitted are indicated. The tree diagram for the source under consideration is shown below.
[Figure: State diagram of the source. Initial state probabilities: P1(1) = P2(1) = P3(1) = 1/3. From each state the source emits that state's own symbol (A for state 1, C for state 2, B for state 3) with probability ½, returning to the same state, and each of the other two symbols with probability ¼, moving to the corresponding state.]

[Figure: Tree diagram for the source. Starting from each initial state (probability 1/3), the branches carry the transition probabilities (½, ¼, ¼) and the symbols emitted, showing the state at the end of the first and second symbol intervals and the resulting symbol sequences AA, AC, AB, CA, CC, CB, BA, BC, BB.]
Use of the tree diagram
Tree diagram can be used to obtain the probabilities of generating various symbol sequences.
Generating a symbol sequence, say AB
This sequence can be generated by any one of the following state-transition paths:

1 → 1 → 3   or   2 → 1 → 3   or   3 → 1 → 3

Therefore the probability of the source emitting the two-symbol sequence AB is given by

P(AB) = P(S1 = 1, S2 = 1, S3 = 3)
     or P(S1 = 2, S2 = 1, S3 = 3)      ----- (1)
     or P(S1 = 3, S2 = 1, S3 = 3)

Note that the three transition paths are disjoint. Therefore,

P(AB) = P(S1 = 1, S2 = 1, S3 = 3) + P(S1 = 2, S2 = 1, S3 = 3) + P(S1 = 3, S2 = 1, S3 = 3)      ----- (2)

The first term on the RHS of equation (2) can be written as

P(S1 = 1, S2 = 1, S3 = 3)
  = P(S1 = 1) P(S2 = 1 / S1 = 1) P(S3 = 3 / S1 = 1, S2 = 1)
  = P(S1 = 1) P(S2 = 1 / S1 = 1) P(S3 = 3 / S2 = 1)

Recall the Markoff property: the transition probability to S3 depends on S2 but not on how the system got to S2.

Therefore, P(S1 = 1, S2 = 1, S3 = 3) = 1/3 × ½ × ¼

Similarly, the other terms on the RHS of equation (2) can be evaluated. Therefore,

P(AB) = 1/3 × ½ × ¼ + 1/3 × ¼ × ¼ + 1/3 × ¼ × ¼ = 4/48 = 1/12
Similarly the probs of occurrence of other symbol sequences can be computed.
Therefore,
In general the probability of the source emitting a particular symbol sequence can be computed by summing the product of probabilities in the tree diagram along all the paths that yield the particular sequences of interest.
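The path-summing rule can be mechanized. The sketch below encodes our reading of the figure (state 1 ↔ A, state 2 ↔ C, state 3 ↔ B, each state emitting its own symbol with probability ½ and each of the other two with probability ¼, the emitted symbol determining the next state); it reproduces P(AB) = 1/12:

```python
symbols = ["A", "C", "B"]          # symbol associated with states 1, 2, 3
initial = [1/3, 1/3, 1/3]          # P1(1) = P2(1) = P3(1) = 1/3

def p_emit(state, sym):
    """P(emit sym | state): 1/2 for the state's own symbol, 1/4 for each other symbol."""
    return 0.5 if symbols[state] == sym else 0.25

def sequence_prob(seq):
    """Total probability: sum over initial states of the product along each path."""
    total = 0.0
    for s0, p0 in enumerate(initial):
        p, state = p0, s0
        for sym in seq:
            p *= p_emit(state, sym)
            state = symbols.index(sym)   # emitting a symbol moves to that symbol's state
        total += p
    return total

print(sequence_prob("AB"))         # 0.08333... = 1/12, matching the hand calculation
```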
Illustrative Example:
1. For the information source given draw the tree diagram and find the probs. of messages of lengths 1, 2 and 3.
Source given emits one of 3 symbols A, B and C
Tree diagram for the source outputs can be easily drawn as shown.
[Figure: Two-state source, p1 = ½, p2 = ½. From state 1 the source emits A with probability ¾ (staying in state 1) and C with probability ¼ (going to state 2); from state 2 it emits B with probability ¾ (staying in state 2) and C with probability ¼ (going to state 1).]
Messages of length (1) and their probabilities:

A → ½ × ¾ = 3/8
B → ½ × ¾ = 3/8
C → ½ × ¼ + ½ × ¼ = 1/8 + 1/8 = ¼

Messages of length (2)
How many such messages are there? Seven.
Which are they? AA, AC, CB, CC, BB, BC and CA.
What are their probabilities?

Message AA: ½ × ¾ × ¾ = 9/32
Message AC: ½ × ¾ × ¼ = 3/32, and so on.
[Figure: Tree diagram for the two-state source. Starting from state 1 (probability ½) and state 2 (probability ½), branches labelled ¾ and ¼ carry the emitted symbols and the states after each symbol interval, yielding the three-symbol sequences AAA, AAC, ACC, ACB, CCA, CCC, CBC, CBB, CAA, CAC, BCA, BCC, BBC, BBB.]
Messages of Length (1)    Messages of Length (2)    Messages of Length (3)
A    3/8                  AA   9/32                 AAA  27/128
B    3/8                  AC   3/32                 AAC   9/128
C    1/4                  CB   3/32                 ACC   3/128
                          CC   2/32                 ACB   9/128
                          BB   9/32                 BBB  27/128
                          BC   3/32                 BBC   9/128
                          CA   3/32                 BCC   3/128
                                                    BCA   9/128
                                                    CCA   3/128
                                                    CCB   3/128
                                                    CCC   2/128
                                                    CBC   3/128
                                                    CAC   3/128
                                                    CBB   9/128
                                                    CAA   9/128
• A second-order Markoff source
The model shown is an example of a source where the probability of occurrence of a symbol depends not only on the particular symbol in question, but also on the two symbols preceding it.
Number of states: n ≤ M^m; here 4 ≤ M², with
M = 2 = number of letters/symbols in the alphabet, and
m = 2 = number of symbols for which the residual influence lasts (a duration of two symbols).
Say the system is in state 3 at the beginning of a symbol interval; then the last two symbols emitted by the source were BA. Similar comments apply to the other states.
1.6 Entropy and Information Rate of Markoff Sources
• Definition of the entropy of the source
Assume that the probability of being in state i at the beginning of the first symbol interval is the same as the probability of being in state i at the beginning of the second symbol interval, and so on; the probability of going from state i to state j also does not depend on time. The entropy of state 'i' is defined as the average information content of the symbols emitted from the i-th state:

Hi = Σ (j = 1 to n) pij log2 (1/pij) bits/symbol      ------ (1)

The entropy of the source is defined as the average of the entropies of the states:

H = E(Hi) = Σ (i = 1 to n) pi Hi      ------ (2)

where pi is the probability that the source is in state 'i'. Using equation (1), equation (2) becomes
[Figure: Four-state (second-order) Markoff source with states (AA), (AB), (BA), (BB); initial state probabilities P1(1) = P4(1) = 2/18 and P2(1) = P3(1) = 7/18; transition probabilities such as 7/8, 1/8, 3/4, 1/4 are marked together with the symbol (A or B) emitted on each transition.]
H = Σ (i = 1 to n) pi Σ (j = 1 to n) pij log (1/pij) bits/symbol      ------ (3)

The average information rate for the source is defined as

R = rs · H bits/sec

where 'rs' is the number of state transitions per second, i.e., the symbol rate of the source.
The above concepts can be illustrated with an example
Illustrative Example:
1. Consider an information source modeled by the discrete stationary Markoff random process shown in the figure. Find the source entropy H and the average information content per symbol in messages containing one, two and three symbols.
• The source emits one of three symbols A, B and C.
• A tree diagram can be drawn as illustrated in the previous session to understand the various symbol sequences and their probabilities.
[Figure: The two-state source of the previous session (p1 = p2 = ½; A emitted with probability ¾ from state 1, B with probability ¾ from state 2, and C with probability ¼ from either state), together with its tree diagram as drawn earlier.]
As per the outcome of the previous session, we have the message probabilities for lengths 1, 2 and 3 as tabulated earlier.
By definition, Hi is given by

Hi = Σ (j = 1 to n) pij log (1/pij)

Put i = 1 (here n = 2):

H1 = Σ (j = 1 to 2) p1j log (1/p1j) = p11 log (1/p11) + p12 log (1/p12)

Substituting the values, we get

H1 = (3/4) log (4/3) + (1/4) log 4 = 0.8113

Similarly, H2 = (1/4) log 4 + (3/4) log (4/3) = 0.8113.

By definition, the source entropy is given by

H = Σ (i = 1 to 2) pi Hi = ½ (0.8113) + ½ (0.8113) = 0.8113 bits/symbol
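The same computation takes only a few lines of Python (a sketch; each row of P lists the transition probabilities out of one state, and zero entries are skipped):

```python
import math

# Two-state source: from state 1 emit A (prob 3/4, stay) or C (prob 1/4, go to 2);
# from state 2 emit B (prob 3/4, stay) or C (prob 1/4, go to 1).
P  = [[3/4, 1/4],
      [1/4, 3/4]]        # transition probabilities out of each state
pi = [1/2, 1/2]          # state probabilities p1, p2

def state_entropy(row):
    """Hi = sum_j pij * log2(1/pij), skipping zero entries."""
    return sum(p * math.log2(1/p) for p in row if p > 0)

H_states = [state_entropy(row) for row in P]
H = sum(w * Hi for w, Hi in zip(pi, H_states))
print(H_states, H)       # [0.8113, 0.8113] and H = 0.8113 bits/symbol
```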
To calculate the average information content per symbol in messages containing two symbols:
• How many messages of length (2) are present, and what is their information content?
There are seven such messages, and their information content is

I(AA) = I(BB) = log (1/P(AA)) = log (1/(9/32)) = 1.83 bits

Similarly, calculate for the other messages and verify that

I(AC) = I(CB) = I(BC) = I(CA) = log (1/(3/32)) = 3.415 bits
I(CC) = log (1/(2/32)) = 4 bits
• Computation of the average information content of these messages
Thus we have

H(two) = Σ (i = 1 to 7) Pi log (1/Pi) = Σ (i = 1 to 7) Pi · Ii bits

where the Ii are the information contents calculated above for the different messages of length two. Substituting the values, we get

H(two) = (9/32)(1.83) + (3/32)(3.415) + (3/32)(3.415) + (2/32)(4) + (9/32)(1.83) + (3/32)(3.415) + (3/32)(3.415)

∴ H(two) = 2.56 bits
• Computation of the average information content per symbol in messages containing N symbols, using the relation

GN = (average information content of messages of length N) / (number of symbols in the message)

Here N = 2:

G2 = H(two) / 2 = 2.56 / 2 = 1.28 bits/symbol

Similarly compute the other G's of interest for the problem under discussion, viz. G1 and G3. You get

G1 = 1.5612 bits/symbol
G3 = 1.0970 bits/symbol
• From the values of the G's calculated, we note that

G1 > G2 > G3 > H
• Statement
It can be stated that the average information per symbol in the message reduces as the length of the message increases.
• The generalized form of the above statement
If P(mi) is the probability of a sequence mi of N symbols from the source, and the average information content per symbol in messages of N symbols is defined by

GN = – (1/N) Σi P(mi) log P(mi)

where the sum is over all sequences mi containing N symbols, then GN is a monotonically decreasing function of N, and in the limiting case

lim (N → ∞) GN = H bits/symbol

(recall that H is the entropy of the source). The above example illustrates the basic concept that the average information content per symbol from a source emitting a dependent sequence decreases as the message length increases.
• It can also be stated as,
Alternatively, it tells us that the average number of bits per symbol needed to represent a message decreases as the message length increases.
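GN is easy to evaluate directly from the message-probability tables above; a minimal sketch (the dictionary contents are copied from the tabulated values):

```python
import math

def G(messages, N):
    """G_N = -(1/N) * sum over all N-symbol messages of P(m) * log2 P(m)."""
    return -sum(p * math.log2(p) for p in messages.values()) / N

len1 = {"A": 3/8, "B": 3/8, "C": 1/4}
len2 = {"AA": 9/32, "BB": 9/32, "AC": 3/32, "CB": 3/32,
        "BC": 3/32, "CA": 3/32, "CC": 2/32}

print(G(len1, 1))   # G1 = 1.5613 bits/symbol
print(G(len2, 2))   # G2 = 2.56/2 = 1.28 bits/symbol
```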
Problems:
Example 1
The state diagram of the stationary Mark off source is shown below
Find (i) the entropy of each state
(ii) The entropy of the source (iii) G1, G2 and verify that G1 ≥ G2 ≥ H the entropy of the source.
[Figure: State diagram for Example 1. Three states with P(state 1) = P(state 2) = P(state 3) = 1/3; from each state the source emits one of the symbols A, B, C with probability ½ and each of the other two with probability ¼, the transitions being marked with the symbols emitted.]

Example 2
For the Markoff source shown, calculate the information rate.

[Figure: State diagram for Example 2. A three-state source with p1 = ¼, p2 = ½, p3 = ¼, emitting the symbols L, S, R; the transition probabilities ½ and ¼ are marked with the symbols emitted on each transition.]
Solution:
By definition, the average information rate for the source is given by
R = rs . H bits/sec ------ (1)
Where, rs is the symbol rate of the source
And H is the entropy of the source.
To compute H
Calculate the entropy of each state using

Hi = Σ (j = 1 to n) pij log (1/pij) bits/symbol      ----- (2)

For this example,

Hi = Σ (j = 1 to 3) pij log (1/pij),  i = 1, 2, 3    ----- (3)

Put i = 1:

H1 = – p11 log p11 – p12 log p12 – p13 log p13
   = – ½ log ½ – ½ log ½ – 0
   = ½ log 2 + ½ log 2

∴ H1 = 1 bit/symbol
Put i = 2 in equation (2):

H2 = – Σ (j = 1 to 3) p2j log p2j = – [ p21 log p21 + p22 log p22 + p23 log p23 ]

Substituting the values given, we get
H2 = – [ ¼ log ¼ + ½ log ½ + ¼ log ¼ ]
   = ¼ log 4 + ½ log 2 + ¼ log 4

∴ H2 = 1.5 bits/symbol

Similarly calculate H3; it will be

H3 = 1 bit/symbol

With the Hi computed, you can now compute H, the source entropy, using

H = Σ (i = 1 to 3) pi Hi = p1 H1 + p2 H2 + p3 H3

Substituting the values, we get

H = ¼ × 1 + ½ × 1.5 + ¼ × 1 = ¼ + 0.75 + ¼ = 1.25 bits/symbol

∴ H = 1.25 bits/symbol

Now, using equation (1), the source information rate is R = rs × 1.25.
Taking 'rs' as one symbol per second, we get

R = 1 × 1.25 = 1.25 bits/sec
Review questions:
(1) Explain the terms (i) Self information (ii) Average information (iii) Mutual Information.
(2) Discuss the reason for using logarithmic measure for measuring the amount of information.
(3) Explain the concept of amount of information associated with message. Also explain what infinite information is and zero information.
(4) A binary source emits an independent sequence of 0's and 1's with probabilities p and (1 – p) respectively. Plot the entropy of the source.
(5) Explain the concept of information, average information, information rate and redundancy as referred to information transmission.
(6) Let X represent the outcome of a single roll of a fair die. What is the entropy of X?
(7) A code is composed of dots and dashes. Assume that a dash is 3 times as long as a dot and has one-third the probability of occurrence. (i) Calculate the information in a dot and in a dash; (ii) calculate the average information in the dot-dash code; and (iii) assuming that a dot lasts for 10 ms and that this same time interval is allowed between symbols, calculate the average rate of information transmission.
(8) What do you understand by the term extension of a discrete memoryless source? Show that the entropy of the nth extension of a DMS is n times the entropy of the original source.
(9) A card is drawn from a deck of playing cards. a) You are informed that the card you drew is a spade. How much information did you receive, in bits? b) How much information did you receive if you are told that the card you drew is an ace? c) How much information did you receive if you are told that the card you drew is the ace of spades? Is the information content of the message "ace of spades" the sum of the information contents of the messages "spade" and "ace"?
(10) A black and white TV picture consists of 525 lines of picture information. Assume that each line consists of 525 picture elements and that each element can have 256 brightness levels. Pictures are repeated at the rate of 30 per second. Calculate the average rate of information conveyed by a TV set to a viewer.
(11) A zero memory source has a source alphabet S= {S1, S2, S3} with P= {1/2, 1/4, 1/4}. Find
the entropy of the source. Also determine the entropy of its second extension and verify that H (S2) = 2H(S).
(12) Show that the entropy is maximum when source transmits symbols with equal probability.
Plot the entropy of this source versus p (0<p<1).
(13) The output of an information source consists of 128 symbols, 16 of which occur with probability 1/32 and the remaining 112 with probability 1/224. The source emits 1000 symbols/sec. Assuming that the symbols are chosen independently, find the rate of information of the source.
Unit - 2: SOURCE CODING
Syllabus:
Encoding of the source output, Shannon’s encoding algorithm. Communication Channels, Discrete communication channels, Continuous channels.
Text Books:
• Digital and analog communication systems, K. Sam Shanmugam, John Wiley, 1996.
Reference Books:
• Digital Communications - Glover and Grant; Pearson Ed. 2nd Ed 2008
Unit - 2: SOURCE CODING
2.1 Encoding of the Source Output:
• Need for encoding
Suppose that there are M = 2^N messages, all equally likely to occur. Recall that the average information per message interval is then H = N.
Say further that each message is coded into N bits.
∴ The average information carried by an individual bit is H/N = 1 bit.
If the messages are not equally likely, then H will be less than N and each bit will carry less than one bit of information.
• Is it possible to improve the situation?
Yes, by using a code in which not all messages are encoded into the same number of bits: the more likely a message is, the fewer the bits that should be used in its code word.
• Source encoding
Process by which the output of an information source is converted into a binary sequence.
• If the encoder operates on blocks of 'N' symbols, it produces an average bit rate of GN bits/symbol, where

GN = – (1/N) Σi p(mi) log p(mi)

p(mi) = probability of the sequence 'mi' of 'N' symbols from the source, the sum being over all sequences 'mi' containing 'N' symbols.
GN is a monotonically decreasing function of N, and

lim (N → ∞) GN = H bits/symbol
Performance measure for the encoder
Coding efficiency ηc, defined as

ηc = (source information rate) / (average output bit rate of the encoder) = H(S) / ĤN

[Figure: Source encoder. Input: the symbol sequence emitted by the information source; output: a binary sequence.]
2.2 Shannon's Encoding Algorithm:
• Formulation of the design of the source encoder
The design can be formulated as follows:

'q' messages:               m1, m2, ..., mi, ..., mq
probabilities of messages:  p1, p2, ..., pi, ..., pq
ni: an integer number of bits assigned to message mi

• The objective of the designer
To find 'ni' and 'ci' for i = 1, 2, ..., q such that the average number of bits per symbol ĤN used in the coding scheme is as close to GN as possible, where

ĤN = (1/N) Σ (i = 1 to q) ni pi   and   GN = (1/N) Σ (i = 1 to q) pi log (1/pi)

i.e., the objective is to have ĤN → GN as closely as possible.

• The algorithm proposed by Shannon and Fano
Step 1: The messages for a given block size (N), m1, m2, ..., mq, are arranged in decreasing order of probability.
Step 2: The number of bits 'ni' (an integer) assigned to message mi is bounded by

log2 (1/pi) ≤ ni < log2 (1/pi) + 1

Step 3: The code word is generated from the binary fraction expansion of 'Fi', defined as

Fi = Σ (k = 1 to i–1) pk, with F1 taken to be zero.

Step 4: Choose the first 'ni' bits in the expansion of step (3).
Say i = 2; then if ni as per step (2) is 3, and if Fi as per step (3) is 0.0011011..., step (4) says that the code word for message (2) is 001. Similar comments apply for the other messages of the source. The codeword for the message 'mi' is the binary fraction expansion of Fi up to 'ni' bits:

ci = (Fi)binary, ni bits

[Figure: Source encoder. Input: one of 'q' possible messages (a message = N symbols); output: a unique binary code word 'ci' of length 'ni' bits for the message 'mi'. The encoder replaces the input message symbols by a sequence of binary digits.]

Step 5: The design of the encoder is completed by repeating the above steps for all messages of the chosen block length.
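The five steps can be collected into a short program. The sketch below (our own function name; ties among equal probabilities may sort in a different order than the hand-worked tables) implements steps 1 to 5 for one block size:

```python
import math

def shannon_code(messages):
    """Shannon's algorithm: sort by decreasing probability, take
    n_i = ceil(log2(1/p_i)), and use the first n_i bits of the binary
    fraction expansion of F_i = p_1 + ... + p_{i-1}."""
    items = sorted(messages.items(), key=lambda kv: -kv[1])
    code, F = {}, 0.0
    for msg, p in items:
        n = math.ceil(math.log2(1/p))   # log2(1/p) <= n < log2(1/p) + 1
        bits, frac = [], F
        for _ in range(n):              # binary expansion of F_i, truncated to n bits
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        code[msg] = "".join(bits)
        F += p
    return code

# Two-symbol blocks (N = 2) of the example source:
print(shannon_code({"AA": 9/32, "BB": 9/32, "AC": 3/32, "CB": 3/32,
                    "BC": 3/32, "CA": 3/32, "CC": 2/32}))
```

Running it on the two-symbol messages reproduces the N = 2 codewords tabulated below (AA → 00, BB → 01, AC → 1001, ..., CC → 1111).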
• Illustrative Example
Design a source encoder for the information source given (the two-state source of session 3). Compare the average output bit rate and efficiency of the coder for N = 1, 2 and 3.
Solution:
The value of 'N' is to be specified.
Case – I: Say N = 3 (block size)
Step 1: Draw the tree diagram and obtain the symbol sequences of length 3.

[Figure: Two-state source (p1 = ½, p2 = ½; A with probability ¾ from state 1, B with probability ¾ from state 2, C with probability ¼ from either state) and its tree diagram, as in illustrative example (1) of session (3).]
From the previous session we know that the source emits fifteen (15) distinct three-symbol messages. They are listed below:

Message:     AAA    AAC   ACC   ACB   BBB    BBC   BCC   BCA   CCA   CCB   CCC   CBC   CAC   CBB   CAA
Probability: 27/128 9/128 3/128 9/128 27/128 9/128 3/128 9/128 3/128 3/128 2/128 3/128 3/128 9/128 9/128
Step 2: Arrange the messages 'mi' in decreasing order of probability.

Message:     AAA    BBB    CAA   CBB   BCA   BBC   AAC   ACB   CBC   CAC   CCB   CCA   BCC   ACC   CCC
Probability: 27/128 27/128 9/128 9/128 9/128 9/128 9/128 9/128 3/128 3/128 3/128 3/128 3/128 3/128 2/128
Step 3: Compute the number of bits to be assigned to message 'mi' using

log2 (1/pi) ≤ ni < log2 (1/pi) + 1,   i = 1, 2, ..., 15

Say i = 1; then the bound on n1 is

log2 (128/27) ≤ n1 < 1 + log2 (128/27)
i.e., 2.245 ≤ n1 < 3.245

Recall that 'ni' has to be an integer.
∴ n1 can be taken as n1 = 3.
Step 4: Generate the codeword using the binary fraction expansion of Fi, defined as

Fi = Σ (k = 1 to i–1) pk, with F1 = 0.

Say i = 2, i.e., the second message; calculate n2 and you should get 3 bits.
Next, calculate F2 = p1 = 27/128. Get the binary fraction expansion of 27/128: you get 0.0011011.
Step 5: Since n2 = 3, truncate this expansion to 3 bits. ∴ The codeword is 001.
Step 6: Repeat the above steps and complete the design of the encoder for the other messages listed above. The following table may be constructed:
Message mi   pi       Fi        ni   Binary expansion of Fi   Code word ci
AAA          27/128   0         3    .0000000                 000
BBB          27/128   27/128    3    .0011011                 001
CAA          9/128    54/128    4    .0110110                 0110
CBB          9/128    63/128    4    .0111111                 0111
BCA          9/128    72/128    4    .1001000                 1001
BBC          9/128    81/128    4    .1010001                 1010
AAC          9/128    90/128    4    .1011010                 1011
ACB          9/128    99/128    4    .1100011                 1100
CBC          3/128    108/128   6    .1101100                 110110
CAC          3/128    111/128   6    .1101111                 110111
CCB          3/128    114/128   6    .1110010                 111001
CCA          3/128    117/128   6    .1110101                 111010
BCC          3/128    120/128   6    .1111000                 111100
ACC          3/128    123/128   6    .1111011                 111101
CCC          2/128    126/128   6    .1111110                 111111
• The average number of bits per symbol used by the encoder
Average number of bits per message = Σ ni pi
Substituting the values from the table, we get: average number of bits per message = 3.89.

∴ Average number of bits per symbol:  ĤN = (Σ ni pi) / N

Here N = 3, so Ĥ3 = 3.89 / 3 = 1.3 bits/symbol
The state entropy is given by

Hi = Σ (j = 1 to n) pij log (1/pij) bits/symbol

Here the number of states the source can be in is two, i.e., n = 2.
Say i = 1; then the entropy of state (1) is

H1 = p11 log (1/p11) + p12 log (1/p12)

Substituting the known values, we get

H1 = (3/4) log (4/3) + (1/4) log 4
∴ H1 = 0.8113

Similarly we can compute H2:

H2 = p21 log (1/p21) + p22 log (1/p22) = (1/4) log 4 + (3/4) log (4/3)
H2 = 0.8113

The entropy of the source, by definition, is

H = Σ (i = 1 to n) pi Hi,   pi = probability that the source is in the i-th state
  = p1 H1 + p2 H2 = ½ × 0.8113 + ½ × 0.8113 = 0.8113

∴ H = 0.8113 bits/symbol
• What is the efficiency of the encoder?
By definition we have

ηc = (H / Ĥ3) × 100 = (0.8113 / 1.3) × 100 = 62.4%

∴ ηc for N = 3 is 62.4%.
Case – II
Say N = 2. The number of messages of length two and their probabilities (obtained from the tree diagram) can be listed as shown in the table below.

N = 2
Message   pi     ni   ci
AA        9/32   2    00
BB        9/32   2    01
AC        3/32   4    1001
CB        3/32   4    1010
BC        3/32   4    1100
CA        3/32   4    1101
CC        2/32   4    1111

Calculate Ĥ2 and verify that it is 1.44 bits/symbol.
∴ The encoder efficiency for this case is

ηc = (H / Ĥ2) × 100 = (0.8113 / 1.44) × 100 = 56.34%
Case – III: N = 1
Proceeding on the same lines you would see that:

N = 1
Message   pi    ni   ci
A         3/8   2    00
B         3/8   2    01
C         1/4   2    11

Ĥ1 = 2 bits/symbol and ηc = 40.56%
• Conclusion for the above example
We note that the average output bit rate ĤN of the encoder decreases as 'N' increases, and hence the efficiency of the encoder increases as 'N' increases.
Operation of the Source Encoder Designed:
I. Consider a symbol string ACBBCAAACBBB at the encoder input. If the encoder uses a block size of 3, find the output of the encoder.
Recall from the outcome of session (5) that for the source given, the possible three-symbol sequences and their corresponding code words are:

Message mi   ni   Codeword ci
AAA          3    000
BBB          3    001
CAA          4    0110
CBB          4    0111
BCA          4    1001
BBC          4    1010
AAC          4    1011
ACB          4    1100
CBC          6    110110
CAC          6    110111
CCB          6    111001
CCA          6    111010
BCC          6    111100
ACC          6    111101
CCC          6    111111

The output of the encoder is obtained by replacing successive groups of three input symbols by the code words shown in the table. The input symbol string groups as

ACB  BCA  AAC  BBB

so the encoded version of the symbol string is

1100 1001 1011 001
II. If the encoder operates on two symbols at a time, what is the output of the encoder for the same symbol string?
Again recall from the previous session that for the source given, the different two-symbol sequences and their encoded bits are:

N = 2
Message mi   ni   ci
AA           2    00
BB           2    01
AC           4    1001
CB           4    1010
BC           4    1100
CA           4    1101
CC           4    1111

For this case the symbol string groups as AC BB CA AA CB BB and is encoded as

1001 01 1101 00 1010 01
DECODING
• How is decoding accomplished?
By starting at the left-most bit and matching groups of bits against the codewords listed in the table.
Case – I: N = 3
i) Take the first 3-bit group, viz. 110 (the shortest codewords are 3 bits long).
ii) Check for a matching word in the table.
iii) If no match is obtained, then try the first 4-bit group, 1100, and again check for a matching word.
iv) On matching, decode the group.
NOTE: For this example, step (ii) is not satisfied; with step (iii) a match is found and the decoding results in ACB.
Repeat this procedure beginning with the fifth bit to decode the remaining symbol groups. The symbol string decodes as ACB BCA AAC BCA.
• Conclusion from the above example with regard to decoding
It is clear that decoding can be done easily, knowing the codeword lengths a priori, if no errors occur in the bit string during transmission.
• The effect of bit errors in transmission
Bit errors lead to serious decoding problems.
Example: For the case of N = 3, the bit string 1100100110111001 was received at the decoder input with one bit error, as 1101100110111001. What then is the decoded message?
Solution: The received bit string groups, left to right, as

110110  0110  111001  1   →   CBC  CAA  CCB  (with one stray bit left over)      ----- (1)

For the errorless bit string you have already seen that the decoded symbol string is

ACB BCA AAC BCA      ----- (2)

(1) and (2) reveal the decoding problem caused by a single bit error.
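Because the code is prefix-free, the grow-a-group procedure above is exactly greedy matching; a sketch (the codebook is copied from the N = 3 table):

```python
codebook = {"000": "AAA", "001": "BBB", "0110": "CAA", "0111": "CBB",
            "1001": "BCA", "1010": "BBC", "1011": "AAC", "1100": "ACB",
            "110110": "CBC", "110111": "CAC", "111001": "CCB", "111010": "CCA",
            "111100": "BCC", "111101": "ACC", "111111": "CCC"}

def decode(bits):
    out, word = [], ""
    for b in bits:
        word += b
        if word in codebook:          # grow the group until it matches a codeword
            out.append(codebook[word])
            word = ""
    return " ".join(out), word        # leftover bits signal a decoding failure

print(decode("1100100110111001"))     # ('ACB BCA AAC BCA', '')
print(decode("1101100110111001"))     # one bit error -> ('CBC CAA CCB', '1')
```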
Illustrative examples on source encoding
1. A source emits independent sequences of symbols from a source alphabet containing five symbols with probabilities 0.4, 0.2, 0.2, 0.1 and 0.1.
i) Compute the entropy of the source.
ii) Design a source encoder with a block size of two.
Solution: Source alphabet = (s1, s2, s3, s4, s5); probabilities p1, ..., p5 = 0.4, 0.2, 0.2, 0.1, 0.1.

(i) Entropy of the source:

H = – Σ (i = 1 to 5) pi log pi bits/symbol
  = – [0.4 log 0.4 + 0.2 log 0.2 + 0.2 log 0.2 + 0.1 log 0.1 + 0.1 log 0.1]

H = 2.12 bits/symbol
(ii) Source encoder with N = 2
The different two-symbol sequences for the source (writing s1...s5 as A...E) are:

(s1s1) AA   (s2s2) BB   (s3s3) CC   (s4s4) DD   (s5s5) EE
(s1s2) AB   (s2s3) BC   (s3s4) CD   (s4s5) DE   (s5s4) ED
(s1s3) AC   (s2s4) BD   (s3s5) CE   (s4s3) DC   (s5s3) EC
(s1s4) AD   (s2s5) BE   (s3s2) CB   (s4s2) DB   (s5s2) EB
(s1s5) AE   (s2s1) BA   (s3s1) CA   (s4s1) DA   (s5s1) EA

A total of 25 messages.
Arrange the messages in decreasing order of probability and determine the number of bits 'ni' as explained:

Messages                          Probability pi   No. of bits ni
AA                                0.16             3
AB, BA, AC, CA                    0.08             4
BB, BC, CB, CC, AD, DA, AE, EA    0.04             5
BD, DB, BE, EB, CD, DC, CE, EC    0.02             6
DD, DE, ED, EE                    0.01             7

Calculate Ĥ2 = (1/2) Σ ni pi = (1/2)(3 × 0.16 + 4 × 0.32 + 5 × 0.32 + 6 × 0.16 + 7 × 0.04) = 2.30 bits/symbol
2. A technique used in constructing a source encoder consists of arranging the messages in decreasing order of probability and dividing them into two almost equally probable groups. The messages in the first group are given the bit '0' and the messages in the second group the bit '1'. The procedure is then applied again to each group separately, and continued until no further division is possible. Using this algorithm, find the code words for six messages occurring with probabilities 1/24, 1/12, 1/24, 1/6, 1/3, 1/3.

Solution: (1) Arrange in decreasing order of probability and divide (1st, 2nd, 3rd and 4th divisions shown left to right):

m5   1/3    0  0
m6   1/3    0  1
m4   1/6    1  0
m2   1/12   1  1  0
m1   1/24   1  1  1  0
m3   1/24   1  1  1  1

∴ The code words are

m1 = 1110
m2 = 110
m3 = 1111
m4 = 10
m5 = 00
m6 = 01
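A recursive sketch of this binary-partition technique (our own implementation; when two splits are equally balanced the choice is a convention, here resolved toward the later split so that the example's codewords are reproduced):

```python
def shannon_fano(symbols):
    """Sort by decreasing probability, split into two groups of (nearly) equal
    total probability, give '0' to the first group and '1' to the second,
    then recurse within each group."""
    code = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        run, best_k, best = 0.0, 0, float("inf")
        for i in range(len(group) - 1):
            run += group[i][1]
            if abs(total - 2 * run) <= best:   # ties resolved toward the later split
                best, best_k = abs(total - 2 * run), i
        for j, (s, _) in enumerate(group):
            code[s] += "0" if j <= best_k else "1"
        split(group[:best_k + 1])
        split(group[best_k + 1:])

    split(sorted(symbols, key=lambda sp: -sp[1]))
    return code

msgs = [("m1", 1/24), ("m2", 1/12), ("m3", 1/24),
        ("m4", 1/6), ("m5", 1/3), ("m6", 1/3)]
print(shannon_fano(msgs))
# {'m5': '00', 'm6': '01', 'm4': '10', 'm2': '110', 'm1': '1110', 'm3': '1111'}
```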
Example (3)
a) For the source shown, design a source encoding scheme using a block size of two symbols and variable-length code words.
b) Calculate Ĥ2 used by the encoder.
c) If the source is emitting symbols at a rate of 1000 symbols per second, compute the output bit rate of the encoder.

[Figure: State diagram of a three-state source with p1 = ¼, p2 = ½, p3 = ¼, emitting the symbols L, S, R, with transition probabilities ½ and ¼ marked along with the symbols emitted.]
Solution (a)
1. Draw the tree diagram for the source (see the figure below).
2. Note that there are seven messages of length (2): SS, LL, LS, SL, SR, RS and RR.
3. Compute the message probabilities and arrange them in descending order.
4. Compute ni, Fi, Fi (in binary) and ci as explained earlier, and tabulate the results with the usual notation.

Message mi   pi    ni   Fi    Fi (binary)   ci
SS           1/4   2    0     .0000         00
LL           1/8   3    1/4   .0100         010
LS           1/8   3    3/8   .0110         011
SL           1/8   3    4/8   .1000         100
SR           1/8   3    5/8   .1010         101
RS           1/8   3    6/8   .1100         110
RR           1/8   3    7/8   .1110         111
[Figure: Tree diagram showing the different messages of length two. The path probabilities (1/8, 1/16, 1/32, ...) combine to give the seven messages LL, LS, SL, SS, SR, RS, RR with the probabilities tabulated above.]
G2 = – (1/2) Σ (i = 1 to 7) pi log2 pi = 1.375 bits/symbol

(b) Ĥ2 = (1/2) Σ (i = 1 to 7) pi ni = 1.375 bits/symbol

Recall ĤN ≤ GN + 1/N; here N = 2, and indeed Ĥ2 ≤ G2 + 1/2.

(c) Output bit rate = rs × Ĥ2 = 1000 × 1.375 = 1375 bits/sec.
2.3 SOURCE ENCODER DESIGN AND COMMUNICATION CHANNELS
• The schematic of a practical communication system is shown below.

[Fig. 1: Binary communication channel characterization. Source encoder → channel encoder → modulator → electrical communication channel / transmission medium (with noise added) → demodulator → channel decoder, with points b, c, d, e, f, g, h marked. The portion between c and g is the (discrete) coding channel, the portion between d and f is the (analog) modulation channel, and the overall discrete path is the data communication channel.]

• 'Communication Channel'
The term communication channel carries different meanings and characterizations depending on its terminal points and functionality.
(i) Portion between points c and g:
– Referred to as the coding channel.
– Accepts a sequence of symbols at its input and produces a sequence of symbols at its output.
– Completely characterized by a set of transition probabilities pij. These probabilities depend on the parameters of (1) the modulator, (2) the transmission medium, (3) the noise, and (4) the demodulator.
– It is a discrete channel.
(ii) Portion between points d and f:
– Provides the electrical connection between the source and the destination.
– The input to and the output of this channel are analog electrical waveforms.
– Referred to as the 'continuous' or modulation channel, or simply the analog channel.
– Subject to several varieties of impairments:
   • amplitude and frequency response variations of the channel within the passband;
   • variation of the channel characteristics with time;
   • non-linearities in the channel;
   • statistical corruption of the signal by various types of additive and multiplicative noise.
2.4 Mathematical Model for the Discrete Communication Channel:
This is the channel between points c and g of Fig. (1).
• The input to the channel
In the general case, a symbol belonging to an alphabet of 'M' symbols.
• The output of the channel
A symbol belonging to the same alphabet of 'M' input symbols.
• Is the output symbol in a symbol interval the same as the input symbol during that interval?
Not necessarily; noise may cause the received symbol to differ from the transmitted one. The discrete channel is completely modeled by a set of probabilities:

pi^t : the probability that the input to the channel is the i-th symbol of the alphabet (i = 1, 2, ..., M), and
pij  : the probability that the i-th symbol is received as the j-th symbol of the alphabet at the output of the channel.

• Discrete M-ary channel
If a channel is designed to transmit and receive one of 'M' possible symbols, it is called a discrete M-ary channel.
• The discrete binary channel and its statistical model
The statistical model of a binary channel is shown in Fig. (2).

[Fig. 2: Binary channel. Transmitted digit X, received digit Y; paths 0→0 (probability p00), 1→1 (p11), 0→1 (p01), 1→0 (p10); pij = p(Y = j / X = i); p0^t = p(X = 0), p1^t = p(X = 1), p0^r = p(Y = 0), p1^r = p(Y = 1); p00 + p01 = 1 and p11 + p10 = 1.]

• Its features
– X and Y are binary-valued random variables.
– The input nodes are connected to the output nodes by four paths:
(i) the path at the top of the graph represents an input '0' appearing correctly as '0' at the channel output;
(ii) the path at the bottom of the graph represents an input '1' appearing correctly as '1';
(iii) the diagonal path from 0 to 1 represents an input bit 0 appearing incorrectly as 1 at the channel output (due to noise);
(iv) the diagonal path from 1 to 0 represents an input bit 1 appearing incorrectly as 0.
Errors occur in a random fashion, and the occurrence of errors can be statistically modelled by assigning probabilities to the paths shown in Figure (2).
• A memoryless channel
A channel is memoryless if the occurrence of an error during a bit interval does not affect the behaviour of the system during other bit intervals.
The probability of error can be evaluated as

P(error) = Pe = P(X ≠ Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
Pe = P(X = 0) P(Y = 1 / X = 0) + P(X = 1) P(Y = 0 / X = 1)

which can also be written as

Pe = p0^t p01 + p1^t p10      ------ (1)

We also have, from the model,

p0^r = p0^t p00 + p1^t p10
p1^r = p0^t p01 + p1^t p11      ------ (2)
• Binary symmetric channel (BSC)
If p00 = p11 = p (say), then the channel is called a BSC.
• Parameters needed to characterize a BSC
A single transition probability p, together with the input probability p0^t, completely characterizes the BSC.
• Model of an M-ary DMC
This can be analysed along the same lines presented above for the binary channel:
pj^r = Σ (i = 1 to M) pi^t pij      ----- (3)

• The probability of error for the M-ary channel
Generalising equation (1) above, we have

P(error) = Pe = Σ (i = 1 to M) pi^t [ Σ (j = 1 to M, j ≠ i) pij ]      ----- (4)

• In a DMC, how many statistical processes are involved, and which are they?
Two: (i) the input to the channel, and (ii) the noise.

[Fig. 3: M-ary discrete memoryless channel. Input X with symbols 1, 2, ..., i, ..., M and output Y with symbols 1, 2, ..., j, ..., M; pi^t = p(X = i), pj^r = p(Y = j), pij = p(Y = j / X = i); paths p11, p12, ..., pij, ..., piM connect each input to every output.]
• Definition of the different entropies for the DMC
i) Entropy of the input X:

H(X) = – Σ (i = 1 to M) pi^t log pi^t bits/symbol      ----- (5)

ii) Entropy of the output Y:

H(Y) = – Σ (j = 1 to M) pj^r log pj^r bits/symbol      ----- (6)

iii) Conditional entropy H(X/Y):

H(X/Y) = – Σ (j = 1 to M) Σ (i = 1 to M) P(X = i, Y = j) log P(X = i / Y = j) bits/symbol      ----- (7)

iv) Joint entropy H(X,Y):

H(X,Y) = – Σ (i = 1 to M) Σ (j = 1 to M) P(X = i, Y = j) log P(X = i, Y = j) bits/symbol      ----- (8)

v) Conditional entropy H(Y/X):

H(Y/X) = – Σ (i = 1 to M) Σ (j = 1 to M) P(X = i, Y = j) log P(Y = j / X = i) bits/symbol      ----- (9)

Interpretation of the conditional entropies
• H(X/Y) represents how uncertain we are, on the average, of the channel input X when we know the channel output Y. Similar comments apply to H(Y/X).
vi) Joint entropy:

H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)      ----- (10)
ENTROPIES PERTAINING TO THE DMC
• To prove the relation for H(XY)
By definition, we have

H(XY) = – Σi Σj p(i, j) log p(i, j)

where i is associated with the variable X and j with the variable Y. Writing p(i, j) = p(i) p(j/i),

H(XY) = – Σi Σj p(i) p(j/i) log [ p(i) p(j/i) ]
      = – Σi Σj p(i) p(j/i) log p(i) – Σi Σj p(i) p(j/i) log p(j/i)

In the first term on the RHS, holding 'i' constant gives Σj p(j/i) = 1, so that term reduces to – Σi p(i) log p(i). Hence

H(XY) = – Σi p(i) log p(i) – Σi Σj p(i, j) log p(j/i)

∴ H(XY) = H(X) + H(Y/X)

Hence the proof.
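All five entropies follow mechanically from the joint matrix, with the two conditional entropies obtained through the chain rule just proved. A sketch (function names are ours; the sample matrix is the joint matrix of worked problem 2 below):

```python
import math

def H(probs):
    return sum(p * math.log2(1/p) for p in probs if p > 0)

def channel_entropies(Pxy):
    """All five entropies of a DMC from the joint matrix P(X=i, Y=j)."""
    Px = [sum(row) for row in Pxy]                 # row sums
    Py = [sum(col) for col in zip(*Pxy)]           # column sums
    Hxy = H(p for row in Pxy for p in row)
    return {"H(X)": H(Px), "H(Y)": H(Py), "H(XY)": Hxy,
            "H(X/Y)": Hxy - H(Py),                 # chain rule: H(XY) = H(Y) + H(X/Y)
            "H(Y/X)": Hxy - H(Px)}

Pxy = [[1/2, 1/4],
       [1/12, 1/6]]     # P(X=0) = 3/4 and transitions (2/3, 1/3), as in problem 2
print(channel_entropies(Pxy))
# H(X)=0.8113, H(Y)=0.9799, H(XY)=1.7296, H(X/Y)=0.7497, H(Y/X)=0.9183
```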
1. For the discrete channel model shown, find the probability of error.

[Figure: Binary channel. Transmitted digit X, received digit Y; 0→0 and 1→1 each with probability p. Since the channel is symmetric, p(1, 0) = p(0, 1) = (1 – p). Probability of error means the situation X ≠ Y.]

P(error) = Pe = P(X ≠ Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
         = P(X = 0) P(Y = 1 / X = 0) + P(X = 1) P(Y = 0 / X = 1)

Assuming that 0 and 1 are equally likely to occur,

P(error) = ½ (1 – p) + ½ (1 – p)

∴ P(error) = (1 – p)

2. A binary channel has the following noise characteristics:
P(Y/X):   Y = 0   Y = 1
X = 0     2/3     1/3
X = 1     1/3     2/3

If the input symbols are transmitted with probabilities ¾ and ¼ respectively, find H(X), H(Y), H(XY), H(X/Y) and H(Y/X).
Solution:
Given P(X = 0) = ¾ and P(X = 1) = ¼,

H(X) = Σi pi log (1/pi) = (3/4) log (4/3) + (1/4) log 4 = 0.811278 bits/symbol

Compute the probabilities of the output symbols:

p(Y = y1) = p(X = x1, Y = y1) + p(X = x2, Y = y1)      ----- (1)

To evaluate this, construct the P(XY) matrix using P(XY) = p(X) p(Y/X):

P(XY) = [ (3/4)(2/3)   (3/4)(1/3) ] = [ 1/2    1/4 ]
        [ (1/4)(1/3)   (1/4)(2/3) ]   [ 1/12   1/6 ]      ----- (2)

∴ P(Y = y1) = 1/2 + 1/12 = 7/12 (sum of the first column of matrix (2))
Similarly, P(y2) = 1/4 + 1/6 = 5/12 (sum of the second column of P(XY)).
Construct the P(X/Y) matrix using P(XY) = p(Y) p(X/Y), i.e., p(X/Y) = p(XY) / p(Y):

p(x1/y1) = (1/2) / (7/12) = 6/7, and so on.

∴ P(X/Y) = [ 6/7   3/5 ]
           [ 1/7   2/5 ]      -----(3)
H(Y) = Σj pj log (1/pj) = (7/12) log (12/7) + (5/12) log (12/5) = 0.979868 bits/symbol

H(XY) = Σi Σj p(XY) log (1/p(XY))
      = (1/2) log 2 + (1/4) log 4 + (1/12) log 12 + (1/6) log 6
∴ H(XY) = 1.729573 bits/symbol

H(X/Y) = Σi Σj p(XY) log (1/p(X/Y))
       = (1/2) log (7/6) + (1/4) log (5/3) + (1/12) log 7 + (1/6) log (5/2)
       = 0.7497 bits/symbol   (check: H(XY) – H(Y) = 1.7296 – 0.9799 = 0.7497)

H(Y/X) = Σi Σj p(XY) log (1/p(Y/X))
       = (1/2) log (3/2) + (1/4) log 3 + (1/12) log 3 + (1/6) log (3/2)
       = 0.9183 bits/symbol   (check: H(XY) – H(X) = 1.7296 – 0.8113 = 0.9183)
3. The joint probability matrix for a channel is given below. Compute H(X), H(Y), H(XY), H(X/Y) and H(Y/X).

P(XY) = [ 0.05   0      0.2    0.05 ]
        [ 0      0.1    0.1    0    ]
        [ 0      0      0.2    0.1  ]
        [ 0.05   0.05   0      0.1  ]

Solution:
The row sums of P(XY) give the row matrix P(X):
∴ P(X) = [0.3, 0.2, 0.3, 0.2]
The column sums of the P(XY) matrix give the row matrix P(Y):
∴ P(Y) = [0.1, 0.15, 0.5, 0.25]

Get the conditional probability matrix P(Y/X) = p(XY)/p(X) (each row divided by its row sum):

P(Y/X) = [ 1/6   0     2/3   1/6 ]
         [ 0     1/2   1/2   0   ]
         [ 0     0     2/3   1/3 ]
         [ 1/4   1/4   0     1/2 ]

Get the conditional probability matrix P(X/Y) = p(XY)/p(Y) (each column divided by its column sum):

P(X/Y) = [ 1/2   0     2/5   1/5 ]
         [ 0     2/3   1/5   0   ]
         [ 0     0     2/5   2/5 ]
         [ 1/2   1/3   0     2/5 ]

Now compute the various entropies using their defining equations.

(i) H(X) = Σ p(X) log (1/p(X)) = 2 × 0.3 log (1/0.3) + 2 × 0.2 log (1/0.2)
∴ H(X) = 1.9705 bits/symbol

(ii) H(Y) = Σ p(Y) log (1/p(Y)) = 0.1 log 10 + 0.15 log (1/0.15) + 0.5 log 2 + 0.25 log 4
∴ H(Y) = 1.74273 bits/symbol

(iii) H(XY) = Σ Σ p(XY) log (1/p(XY)) = 4 × 0.05 log 20 + 4 × 0.1 log 10 + 2 × 0.2 log 5
∴ H(XY) = 3.12192 bits/symbol

(iv) H(X/Y) = Σ Σ p(XY) log (1/p(X/Y))
Substituting the values, we get
∴ H(X/Y) = H(XY) – H(Y) = 3.12192 – 1.74273 = 1.3792 bits/symbol

(v) H(Y/X) = Σ Σ p(XY) log (1/p(Y/X))
Substituting the values, we get
∴ H(Y/X) = H(XY) – H(X) = 3.12192 – 1.9705 = 1.1514 bits/symbol
4. Consider the channel represented by the statistical model shown. Write the channel matrix and compute H(Y/X).

[Figure: Channel with inputs X1, X2 (each with probability ½) and outputs Y1, Y2, Y3, Y4; transition probabilities 1/3, 1/3, 1/6, 1/6 from X1 and 1/6, 1/6, 1/3, 1/3 from X2.]

For the channel, write the conditional probability matrix P(Y/X):

P(Y/X):   Y1    Y2    Y3    Y4
X1        1/3   1/3   1/6   1/6
X2        1/6   1/6   1/3   1/3

NOTE: The 2nd row of P(Y/X) is the 1st row written in reverse order. When this is the situation, the channel is called a symmetric channel.

The joint probabilities are obtained by multiplying each row of P(Y/X) by the corresponding input probability:

First row of P(Y/X) × P(X1):  ½ × 1/3, ½ × 1/3, ½ × 1/6, ½ × 1/6
Second row of P(Y/X) × P(X2): ½ × 1/6, ½ × 1/6, ½ × 1/3, ½ × 1/3
Recall P(XY) = p(X) p(Y/X).
∴ P(X1Y1) = p(X1) p(Y1/X1) = ½ × 1/3 = 1/6, P(X1Y3) = ½ × 1/6 = 1/12, and so on.

P(XY) = [ 1/6    1/6    1/12   1/12 ]
        [ 1/12   1/12   1/6    1/6  ]

H(Y/X) = Σ Σ p(XY) log (1/p(Y/X))

Substituting the various probabilities, we get

H(Y/X) = 4 × (1/6) log 3 + 4 × (1/12) log 6
       = (2/3) log 3 + (1/3) log 6
∴ H(Y/X) = 1.9183 bits/symbol
5. Given the joint probability matrix for a channel, compute the various entropies for the input and output random variables of the channel.

P(XY) = [ 0.2    0      0.2    0    ]
        [ 0.1    0.01   0.01   0.01 ]
        [ 0      0.02   0.02   0    ]
        [ 0.04   0.04   0.01   0.06 ]
        [ 0      0.06   0.02   0.2  ]

Solution:
P(X) = row matrix formed by the sum of each row of the P(XY) matrix:
∴ P(X) = [0.4, 0.13, 0.04, 0.15, 0.28]
P(Y) = column sums = [0.34, 0.13, 0.26, 0.27]

1. H(XY) = Σ Σ p(XY) log (1/p(XY)) = 3.1883 bits/symbol
2. H(X) = Σ p(X) log (1/p(X)) = 2.0219 bits/symbol
3. H(Y) = Σ p(Y) log (1/p(Y)) = 1.9271 bits/symbol

Construct the P(X/Y) matrix using p(XY) = p(Y) p(X/Y), i.e., P(X/Y) = p(XY)/p(Y) (each column of P(XY) divided by the corresponding entry of P(Y)):

P(X/Y) = [ 0.2/0.34    0           0.2/0.26    0         ]
         [ 0.1/0.34    0.01/0.13   0.01/0.26   0.01/0.27 ]
         [ 0           0.02/0.13   0.02/0.26   0         ]
         [ 0.04/0.34   0.04/0.13   0.01/0.26   0.06/0.27 ]
         [ 0           0.06/0.13   0.02/0.26   0.2/0.27  ]

4. H(X/Y) = – Σ Σ p(XY) log p(X/Y) = 1.26118 bits/symbol
Problem:
Construct the P(Y/X) matrix and hence compute H(Y/X). (Check: H(Y/X) = H(XY) – H(X) = 3.1883 – 2.0219 = 1.1664 bits/symbol.)
Rate of Information Transmission over a Discrete Channel:
• For an M-ary DMC accepting symbols at the rate of rs symbols per second, the average amount of information per symbol going into the channel is given by the entropy of the input random variable X:

H(X) = – Σ (i = 1 to M) pi^t log2 pi^t      ----- (1)

The assumption is that the symbols in the sequence at the input to the channel occur in a statistically independent fashion.
• The average rate at which information is going into the channel is

Din = H(X) · rs bits/sec      ----- (2)

• Is it possible to reconstruct the input symbol sequence with certainty by operating on the received sequence?
• Example: two symbols 0 and 1 are transmitted at a rate of 1000 symbols (bits) per second, with p0^t = p1^t = ½. Then Din at the input to the channel is 1000 bits/sec. Assume the channel is a BSC with probability of errorless transmission p = 0.95.

Rate of transmission of information:
• Recall that H(X/Y) is a measure of how uncertain we are of the input X given the output Y.
• In an ideal errorless channel, H(X/Y) = 0: knowing the output removes all uncertainty about the input.
• H(X/Y) may therefore be used to represent the amount of information lost in the channel.
• Define the average rate of information transmission over the channel, Dt, as the amount of information going into the channel minus the amount of information lost, per second:

Dt = [ H(X) – H(X/Y) ] · rs bits/sec

When the channel is very noisy, so that the output is statistically independent of the input, H(X/Y) = H(X); all the information going into the channel is lost and no information is transmitted over the channel.
DISCRETE CHANNELS:
1. A binary symmetric channel is shown in the figure. Find the rate of information transmission over this channel when p = 0.9, 0.8 and 0.6. Assume that the symbol (bit) rate is 1000 per second.

[Figure: Example of a BSC. Input X, output Y; 0→0 and 1→1 with probability p, 0→1 and 1→0 with probability (1 – p); p(X = 0) = p(X = 1) = ½.]

Solution:

H(X) = ½ log 2 + ½ log 2 = 1 bit/symbol
∴ Din = rs H(X) = 1000 bits/sec
By definition we have

Dt = [ H(X) – H(X/Y) ] · rs

where H(X/Y) = – Σi Σj p(XY) log p(X/Y), with X and Y each taking the values 0 and 1.

The conditional probability p(X/Y) is to be calculated for all possible values of X and Y.
Say X = 0, Y = 0; then

P(X = 0 / Y = 0) = p(Y = 0 / X = 0) P(X = 0) / p(Y = 0)

where

p(Y = 0) = p(Y = 0 / X = 0) p(X = 0) + p(Y = 0 / X = 1) p(X = 1) = p · ½ + (1 – p) · ½ = ½

∴ p(X = 0 / Y = 0) = p

Similarly we can calculate

p(X = 1 / Y = 0) = 1 – p
p(X = 1 / Y = 1) = p
p(X = 0 / Y = 1) = 1 – p

∴ H(X/Y) = – [ ½ p log p + ½ (1 – p) log (1 – p) + ½ p log p + ½ (1 – p) log (1 – p) ]
         = – [ p log p + (1 – p) log (1 – p) ]

∴ Dt, the rate of information transmission over the channel, is [ H(X) – H(X/Y) ] · rs:

p = 0.9:  Dt = 531 bits/sec
p = 0.8:  Dt = 278 bits/sec
p = 0.6:  Dt = 29 bits/sec
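A compact check of these three values (a sketch; h2 is the binary entropy function and the function names are ours):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def bsc_rate(p, rs=1000):
    """Dt = [H(X) - H(X/Y)] * rs with equally likely inputs: H(X) = 1, H(X/Y) = h2(p)."""
    return (1.0 - h2(p)) * rs

for p in (0.9, 0.8, 0.6):
    print(p, round(bsc_rate(p)))   # 531, 278, 29 bits/sec
```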
• What does the quantity (1 – p) represent?
• What do you understand from the above example?
2. A discrete channel has 4 inputs and 4 outputs. The input probabilities are P, Q, Q and P. The conditional probabilities between the output and the input are:

P(y/x):   y = 0   y = 1     y = 2     y = 3
x = 0     1       –         –         –
x = 1     –       p         (1 – p)   –
x = 2     –       (1 – p)   p         –
x = 3     –       –         –         1

Write the channel model.
Solution: The channel model can be deduced as shown below.
Given P(X = 0) = P, P(X = 1) = Q, P(X = 2) = Q, P(X = 3) = P.
Of course it is true that P + Q + Q + P = 1, i.e., 2P + 2Q = 1.

[Figure: Channel model. Inputs 0, 1, 2, 3 and outputs 0, 1, 2, 3; inputs 0 and 3 pass through error-free, while inputs 1 and 2 interchange with probability (1 – p) = q and pass correctly with probability p.]

• What is H(X) for this channel?

H(X) = – [ 2P log P + 2Q log Q ]

• What is H(X/Y)?

H(X/Y) = – 2Q [ p log p + q log q ] = 2Q · α,  where α = – [ p log p + q log q ]
1. A source delivers the binary digits 0 and 1 with equal probability into a noisy channel at a rate of 1000 digits/second. Owing to noise on the channel, the probability of receiving a transmitted '0' as a '1' is 1/16, while the probability of transmitting a '1' and receiving a '0' is 1/32. Determine the rate at which information is received.
Solution:
The rate of reception of information is given by

R = [ H(X) – H(X/Y) ] · rs bits/sec      -----(1)

where

H(X) = – Σi p(i) log p(i) bits/symbol
H(X/Y) = – Σi Σj p(ij) log p(i/j) bits/symbol      -----(2)

H(X) = – [ ½ log ½ + ½ log ½ ] = 1 bit/symbol
The channel model, or flow graph, is:

[Figure: Binary channel. Input 0 → output 0 with probability 15/16 and output 1 with probability 1/16; input 1 → output 1 with probability 31/32 and output 0 with probability 1/32. Index 'i' refers to the input of the channel and index 'j' to the output (receiver).]

The probability of having transmitted symbol i given that symbol j was received is denoted p(i/j).
• How would you compute p(0/0)?
Recall the probability of a joint event AB:

P(AB) = p(A) p(B/A) = p(B) p(A/B)
i.e., p(ij) = p(i) p(j/i) = p(j) p(i/j)

from which we have

p(i/j) = p(i) p(j/i) / p(j)      -----(3)

Say i = 0 and j = 0. The quantity p(j = 0) is the total probability of receiving a '0':

p(j = 0) = p(i = 0) p(j = 0 / i = 0) + p(i = 1) p(j = 0 / i = 1) = ½ × 15/16 + ½ × 1/32 = 31/64

Substituting into equation (3), we find

p(i = 0 / j = 0) = (½ × 15/16) / (31/64) = 30/31 = 0.967

• Similarly, calculate and check the following:

p(i = 1 / j = 0) = 1/31,   p(i = 1 / j = 1) = 31/33,   p(i = 0 / j = 1) = 2/33
• Calculate the entropy H(X/Y):

H(X/Y) = – [ p(00) log p(0/0) + p(01) log p(0/1) + p(10) log p(1/0) + p(11) log p(1/1) ]

Substituting for the various probabilities, we get

H(X/Y) = (15/32) log (31/30) + (1/32) log (33/2) + (1/64) log 31 + (31/64) log (33/31)

Simplifying, H(X/Y) = 0.27 bit/symbol.

∴ R = [ H(X) – H(X/Y) ] · rs = (1 – 0.27) × 1000

∴ R = 730 bits/sec
2. A transmitter produces three symbols A, B, C, which are related by the probabilities shown:

p(i):   A: 9/27,   B: 16/27,   C: 2/27

p(j/i):   j = A   j = B   j = C
i = A     0       4/5     1/5
i = B     1/2     1/2     0
i = C     1/2     2/5     1/10

Calculate H(XY).
Solution:
By definition we have

H(XY) = H(X) + H(Y/X)      -----(1)

where

H(X) = – Σi p(i) log p(i) bits/symbol      -----(2)
H(Y/X) = – Σi Σj p(ij) log p(j/i) bits/symbol      -----(3)

From equation (2), H(X) = 1.257 bits/symbol.

• To compute H(Y/X), first construct the p(ij) matrix using p(ij) = p(i) p(j/i):

p(i, j):   j = A    j = B    j = C
i = A      0        4/15     1/15
i = B      8/27     8/27     0
i = C      1/27     4/135    1/135

• From equation (3), calculate H(Y/X) and verify that it is

H(Y/X) = 0.934 bits/symbol

• Using equation (1),

H(XY) = H(X) + H(Y/X) = 1.257 + 0.934 = 2.19 bits/symbol
2.5 Capacity of a Discrete Memoryless Channel (DMC):
• Capacity of a noisy DMC
Defined as the maximum possible rate of information transmission over the channel. In equation form,

C = Max over P(x) of Dt      -----(1)

i.e., Dt maximized over the set of input probabilities P(x) for the discrete channel.
Definition of Dt: the average rate of information transmission over the channel,

Dt = [ H(x) – H(x/y) ] · rs bits/sec      -----(2)

∴ Equation (1) becomes

C = Max over P(x) of { [ H(x) – H(x/y) ] · rs }      -----(3)

Consider, for example, the channel with matrix

P(y/x):   y = 0   y = ?   y = 1
x = 0     p       q       0
x = 1     0       q       p

• What type of channel is this? A binary erasure channel: each input is received correctly with probability p or erased to the output symbol '?' with probability q = 1 – p, and is never received as the opposite symbol.
• What is H(x) for this channel? Say P(x = 0) = P and P(x = 1) = Q = (1 – P); then

H(x) = – P log P – (1 – P) log (1 – P)

• What is H(y/x)?

H(y/x) = – [ p log p + q log q ]
DISCRETE CHANNELS WITH MEMORY:
In a memoryless channel, the occurrence of an error during a particular symbol interval does not influence the occurrence of errors during succeeding symbol intervals; there is no inter-symbol influence. This will not be so in practical channels: errors do not occur as independent events but tend to occur in bursts. Such channels are said to have memory. Examples:
– telephone channels that are affected by switching transients and dropouts;
– microwave radio links that are subjected to fading.
In these channels, impulse noise occasionally dominates the Gaussian noise, and errors occur in infrequent long bursts. Because of the complex physical phenomena involved, detailed characterization of channels with memory is very difficult. The GILBERT model has been moderately successful in characterizing error bursts in such channels. Here the channel is modeled as a discrete memoryless BSC in which the probability of error is a time-varying parameter; the changes in the probability of error are modeled by the Markoff process shown in Fig. 1 below.

The error-generating mechanism in the channel occupies one of three states, and the transition from one state to another is modeled by a discrete, stationary Markoff process. For example, when the channel is in state 2, the bit error probability during a bit interval is 10^-2, and the channel stays in this state during the succeeding bit interval with probability 0.998. However, the channel may go to state 1, wherein the bit error probability is 0.5. Since the system stays in that state with probability 0.99, errors tend to occur in bursts (or groups). State 3 represents a low bit error rate, and errors in this state are produced by Gaussian noise; errors very rarely occur in bursts while the channel is in this state. Other details of the model are shown in Fig. 1. The maximum rate at which data can be sent over the channel can be computed for each state using the BSC model corresponding to that state. Other characteristic parameters of the channel, such as the mean time between error bursts and the mean duration of the error bursts, can be calculated from the model.
2. LOGARITHMIC INEQUALITIES:
Fig. 2 shows the graphs of two functions, y1 = x – 1 and y2 = ln x. The first is a linear measure and the second a logarithmic measure. Observe that the log function always lies below the linear function except at x = 1, where the straight line is tangent to the log curve. This is true only for natural logarithms: for example, y2 = log2 x equals y1 = x – 1 at two points, viz. x = 1 and x = 2, and between these two values y1 > y2. Keep this point in mind when using the inequalities obtained below. From the graphs it follows that y2 ≤ y1, with equality if and only if x = 1. In other words,

ln x ≤ (x – 1), equality iff x = 1      …… (2.1)

Multiplying equation (2.1) throughout by –1 and noting that –ln x = ln (1/x), we obtain another inequality:

ln (1/x) ≥ (1 – x), equality iff x = 1      …… (2.2)

This property of the logarithmic function will be used in establishing the extremal property (maximum or minimum property) of the entropy function. As an additional property, let {p1, p2, p3, ..., pn} and {q1, q2, q3, ..., qn} be any two sets of probabilities
such that pi ≥ 0, qj ≥ 0 ∀ i, j and Σ (i = 1 to n) pi = Σ (j = 1 to n) qj = 1. Then we have

Σ (i = 1 to n) pi log2 (qi / pi) = log2 e · Σ (i = 1 to n) pi ln (qi / pi)

Now, using Eq. (2.1), it follows that

Σ (i = 1 to n) pi log2 (qi / pi) ≤ log2 e · Σ (i = 1 to n) pi ( qi/pi – 1 )
                                 = log2 e · [ Σ (i = 1 to n) qi – Σ (i = 1 to n) pi ] = 0

This then implies

Σ (i = 1 to n) pi log2 (qi / pi) ≤ 0

That is,

Σ (i = 1 to n) pi log2 (1/pi) ≤ Σ (i = 1 to n) pi log2 (1/qi),  equality iff pi = qi ∀ i = 1, 2, 3, ..., n      .......... (2.3)

This inequality will be used later in arriving at a measure of code efficiency.

3. PROPERTIES OF ENTROPY:
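Inequality (2.3), and, by choosing qk = 1/q, the upper bound H(S) ≤ log q derived in the extremal property below, can be spot-checked numerically; a small sketch with arbitrarily chosen distributions:

```python
import math

def H(p):
    """Entropy in bits of a probability distribution."""
    return sum(pi * math.log2(1/pi) for pi in p if pi > 0)

p = [0.5, 0.2, 0.15, 0.1, 0.05]
q = [0.2] * 5                               # the uniform distribution, q_k = 1/q

# Eq. (2.3): sum p log(1/p) <= sum p log(1/q_k), equality iff p == q
lhs = H(p)
rhs = sum(pi * math.log2(1/qi) for pi, qi in zip(p, q))
print(lhs, "<=", rhs)                       # approx. 1.923 <= 2.322

# With q_k = 1/q the RHS equals log2(q), giving the extremal property H(S) <= log q
print(H(q), "==", math.log2(5))             # equality for equal probabilities
```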
We shall now investigate the properties of the entropy function.
1. The entropy function is continuous in each and every independent variable pk in the interval (0, 1).
This property follows since each pk is continuous in the interval (0, 1) and the logarithm of a continuous function is itself continuous.
2. The entropy function is a symmetrical function of its arguments, i.e., H(pk, 1 – pk) = H(1 – pk, pk) ∀ k = 1, 2, 3, ..., q.
That is to say, the value of the function remains the same irrespective of the locations (positions) of the probabilities: as long as the probabilities in the set are the same, it does not matter in which order they are arranged. Thus the sources S1, S2 and S3 with probabilities

P1 = {p1, p2, p3, p4},  P2 = {p3, p2, p4, p1},  P3 = {p4, p1, p3, p2},  with Σ (k = 1 to 4) pk = 1,

all have the same entropy, i.e., H(S1) = H(S2) = H(S3).
3. Extremal property: Consider a zero-memory information source with a q-symbol alphabet S = {s1, s2, s3, ..., sq} with associated probabilities P = {p1, p2, p3, ..., pq}.
Then we have for the entropy of the source (as you have studied earlier):

H(S) = Σ (k = 1 to q) pk log (1/pk)

Consider log q – H(S). We have:

log q – H(S) = log q – Σ (k = 1 to q) pk log (1/pk)
             = Σ (k = 1 to q) pk log q – Σ (k = 1 to q) pk log (1/pk)
             = log e · [ Σ (k = 1 to q) pk ln q – Σ (k = 1 to q) pk ln (1/pk) ]
             = log e · Σ (k = 1 to q) pk ln (q pk)

Invoking the inequality in Eq. (2.2), we have:

log q – H(S) ≥ log e · Σ (k = 1 to q) pk ( 1 – 1/(q pk) ),  equality iff q pk = 1, ∀ k = 1, 2, 3, ..., q
             = log e · [ Σ (k = 1 to q) pk – Σ (k = 1 to q) 1/q ],  equality iff pk = 1/q, ∀ k = 1, 2, 3, ..., q

Since Σ (k = 1 to q) pk = Σ (k = 1 to q) 1/q = 1, it follows that log q – H(S) ≥ 0, or in other words

H(S) ≤ log q      …………………. (2.4)

The equality holds good iff pk = 1/q, ∀ k = 1, 2, 3, ..., q. Thus, for a zero-memory information source with a q-symbol alphabet, the entropy becomes a maximum if and only if all the source symbols are equally probable. From Eq. (2.4) it follows that

H(S)max = log q  … iff pk = 1/q, ∀ k = 1, 2, 3, ..., q      ………………….. (2.5)

Particular case: zero-memory binary sources:
Or in other words H(S) ≤ log q …………………. (2.4) The equality holds good iffy p k = 1/q,∀∀∀∀ k =1, 2, 3…q. Thus “for a zero memory information source, with a q-symbol alphabet, the Entropy becomes a maximum if and only if all the source symbols are equally probable” .From Eq (2.4) it follows that: H(S) max = log q … iffy p k = 1/q,∀ k =1, 2, 3 … q ………………….. (2.5) Particular case- Zero memory binary sources: