DVB-T2 Graduation Project Book 2011


Digital Video Broadcasting 2nd Generation Terrestrial Simulation

Graduation Team Members

1. Amr Kamal El-Din Gamal El-Din.
2. Eman Magdy Ibrahim.
3. Islam Haytham Ahmad.
4. Khaled Salah Mohamad.
5. Mohamad Hashem Abd-El-Rehem.
6. Omnia Ahmad Mohmoud Akl.
7. Sarah Mostafa Mohamad.

Acknowledgments

We wrote this book during our fourth year at the Department of Communications and Electronics Engineering, Alexandria University. It applies our theoretical studies in communications, and we spared no effort to present this graduation project in the best possible form. Certainly, it could not have been written without the support and patience of many people. We are therefore indebted to many people for their information, feedback, and assistance during the development of this book.

First of all, we must thank Allah, who always assists us; we owe to Him any success and progress we have made in our lives. We want to express our gratitude to our supervisor, Dr. Karim G. Seddik, for all the helpful advice, encouragement, and discussions. The opportunity to work under his supervision was a precious experience; he devoted all his effort and time to helping us learn, search, and do our best in this project.

We also want to thank our professors in the Communications Department, who did their best to share their experience in the field of communication engineering. Deep thanks go to teaching assistant Karim Banwan, who was our beacon throughout our project journey.

Most of all, we thank our beloved families for their immeasurable support, encouragement, and patience while we worked on this project.


Preface

Technological developments should not be regarded as exogenous determining factors but rather as the product of activities and relationships within society as a whole. Besides the technical factors involved, scientific, economic, market, political, and legal factors can determine the establishment of technologies in society. This book aims to provide an overview of these aspects with regard to DTV and to help explain how DTV, including conditional access, can be successfully embedded in society.

Broadcasting is the distribution of audio and video content to a dispersed audience via radio, television, or other media. Receiving parties may include the general public or a relatively large subset thereof.

The term broadcast originally referred to the literal sowing of seeds on farms by scattering them over a wide field. It was first adopted by early radio engineers from the Midwestern United States to refer to the analogous dissemination of radio signals. Broadcasting forms a very large segment of the mass media; broadcasting to a very narrow range of audience is called narrowcasting.

The DVB-T standard is the most successful digital terrestrial television standard in the world. First published in 1995, it has been adopted by more than half of all countries in the world. Since the publication of the DVB-T standard, however, research in transmission technology has continued, and new options for modulating and error-protecting broadcast streams have been developed. Simultaneously, the demand for broadcasting frequency spectrum has increased, as has the pressure to release broadcast spectrum for non-broadcast applications, making it ever more necessary to maximize spectrum efficiency. In response, the DVB Project has developed the second-generation digital terrestrial television (DVB-T2) standard. The specification, first published by the DVB Project in June 2008, has been standardized by the European Telecommunications Standards Institute (ETSI) since September 2009. Implementation and product development using this new standard have already begun.

The possibility of increasing the capacity of a digital terrestrial television (DTT) multiplex is one of the key benefits of the DVB-T2 standard. In comparison with the current digital terrestrial television standard, DVB-T, the second-generation standard, DVB-T2, provides an increase in capacity of at least 30% in equivalent reception conditions using existing receiving antennas. Some preliminary testing, however, suggests that the increase in capacity obtained in practice may be closer to 50%. This can make possible the launch of new broadcast services that make intensive use of frequency capacity.
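To give a rough feel for what a 30-50% capacity gain means, the sketch below compares an often-quoted maximum DVB-T payload in an 8 MHz channel (about 31.67 Mbit/s, for 64-QAM, code rate 7/8, guard interval 1/32) with an illustrative DVB-T2 figure of about 45.5 Mbit/s for a 256-QAM mode. Both bit rates are mode-dependent example values assumed here for illustration, not figures taken from this book; the exact gain depends on the configurations being compared.

```python
# Illustrative capacity comparison between DVB-T and DVB-T2 in an
# 8 MHz channel. The bit rates are assumed example figures for
# specific modes, not definitive values from the standards.

DVB_T_MBPS = 31.67   # example max DVB-T payload (64-QAM, CR 7/8, GI 1/32)
DVB_T2_MBPS = 45.5   # example DVB-T2 payload (a 256-QAM mode)

# Relative capacity gain, in percent.
gain_percent = (DVB_T2_MBPS - DVB_T_MBPS) / DVB_T_MBPS * 100
print(f"Capacity gain: {gain_percent:.1f}%")  # falls in the 30-50% range

# Assuming ~4 Mbit/s per standard-definition MPEG-2 service, the extra
# capacity corresponds to roughly this many additional SD services:
extra_services = (DVB_T2_MBPS - DVB_T_MBPS) / 4.0
print(f"Roughly {extra_services:.1f} extra SD services per multiplex")
```

For these example modes the gain works out to a little over 40%, consistent with the "closer to 50% in practice" observation above.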

However, the implementation of a new digital terrestrial television (DTT) standard will have a profound impact upon the broadcast industry. The cost of developing, distributing, and implementing new equipment will need to be borne by manufacturers, network operators, and viewers. Business issues related to financing the launch of services using the DVB-T2 standard need to be explored. The demand for services using DVB-T2 will likely vary depending on the demands of the market, as will the approach for launching such services. Broadcasters will also need to consider possible business cases and how current revenue streams can be maintained and/or augmented.

Two excellent documents, the DVB-T2 specification (ETSI EN 302 755) and the Implementation Guidelines (DVB Bluebook A133), are available with the details of the technology. However, this handbook seeks to provide a wider understanding of the DVB-T2 standard, encompassing issues beyond the technology. It addresses the key technical, business, and regulatory issues that must be taken into consideration by the broadcast industry when contemplating a launch of services using the DVB-T2 standard.


Table of Contents

Chapter Number 1: Television History ------------------------------------------------------------------------------------ 1

1.1 – Early History of Television ------------------------------------------------------------------------------------------------------ 1

1.2 – Why it’s called Television? ------------------------------------------------------------------------------------------------------ 3

1.3 – The very first Broadcasts -------------------------------------------------------------------------------------------------------- 6

1.4 – Color Television Transmission ------------------------------------------------------------------------------------------------- 8

1.5 – Analog Television (ATV) ------------------------------------------------------------------------------------- 10
1.5.1 – Scanning an Original Black/White Picture --------------------------------------------------------------------------------------------- 12
1.5.2 – Horizontal and Vertical Synchronization Pulses -------------------------------------------------------------------------------------- 13
1.5.3 – Adding Colors Information ----------------------------------------------------------------------------------------------------------- 14

1.6 – Digital Television (DTV) -------------------------------------------------------------------------------------- 16
1.6.1 – What is Digital Television? ----------------------------------------------------------------------------------------------------------- 16
1.6.2 – Shannon’s Information Theorem --------------------------------------------------------------------------------------------------- 17
1.6.3 – Digitizing a Video Signal --------------------------------------------------------------------------------------------------------------- 17
1.6.3.1 – Why? -------------------------------------------------------------------------------------------------------------------------------------- 17
1.6.3.2 – How? -------------------------------------------------------------------------------------------------------------------------------------- 18
1.6.4 – Digital Video Signal --------------------------------------------------------------------------------------------------------------------- 19
1.6.5 – Compressing Digital Signal ----------------------------------------------------------------------------------------------------------- 20
1.6.5.1 – Compressing Still Images ------------------------------------------------------------------------------------------------------------ 20
1.6.5.2 – Compressing Non-Still Images (Movies) ----------------------------------------------------------------------------------------- 20
1.6.6 – Encapsulating into Transport Stream Packets ---------------------------------------------------------------------------------------- 20
1.6.7 – System Information -------------------------------------------------------------------------------------------------------------------- 22
1.6.8 – Picture and Sound Quality ----------------------------------------------------------------------------------------------------------- 22
1.6.9 – Advantages of Digital Transmission ----------------------------------------------------------------------------------------------------- 22

1.7 – High Definition Television (HDTV) ------------------------------------------------------------------------------ 23
1.7.1 – Historical view over HD development -------------------------------------------------------------------------------------------------- 23
1.7.2 – Interlaced and Progressive Scanning ---------------------------------------------------------------------------------------------------- 24
1.7.3 – HDTV Display Resolutions ------------------------------------------------------------------------------------------------------------ 24
1.7.4 – Practical Aspects of Receiving HDTV ---------------------------------------------------------------------------------------------------- 24

1.8 – Stereoscopic Television (3DTV) ---------------------------------------------------------------------------------- 25
1.8.1 – Historical View -------------------------------------------------------------------------------------------------------------------------- 25
1.8.2 – Used Technologies --------------------------------------------------------------------------------------------------------------------- 25
1.8.2.1 – Anaglyphic 3D -------------------------------------------------------------------------------------------------------------------------- 26
1.8.2.2 – Polarization 3D ------------------------------------------------------------------------------------------------------------------------- 27
1.8.2.3 – Alternate Frame Sequencing 3D --------------------------------------------------------------------------------------------------- 28
1.8.2.4 – Alternate Frame Sequencing 3D --------------------------------------------------------------------------------------------------- 28

Chapter Number 2: Digital Video Broadcasting ---------------------------------------------------------------------- 29

2.1 – History ------------------------------------------------------------------------------------------------------------------------------ 29

2.2 – Digital Video Broadcasting Standards ------------------------------------------------------------------------------------- 30
2.2.1 – Digital Video Broadcasting – Satellite (DVB-S) ---------------------------------------------------------------------------------------- 30
2.2.2 – Digital Video Broadcasting – Cable (DVB-C) ------------------------------------------------------------------------------------------- 32


2.2.2.1 – DVB-C Transmitter -------------------------------------------------------------------------------------------------------------------- 32
2.2.2.2 – DVB-C Receiver ------------------------------------------------------------------------------------------------------------------------- 33

2.2.3 – Digital Video Broadcasting – Handheld (DVB-H) ------------------------------------------------------------------------------------- 34
2.2.4 – Digital Video Broadcasting – Satellite services to Handheld (DVB-SH) --------------------------------------------------------- 36

2.2.4.1 – DVB-H vs. DVB-SH --------------------------------------------------------------------------------------------------------------------- 37
2.2.5 – Digital Video Broadcasting – Terrestrial (DVB-T) ------------------------------------------------------------------------------------- 37

2.2.5.1 – DVB-T2 ------------------------------------------------------------------------------------------------------------------------------------ 39
2.2.5.2 – DVB-T2 vs. DVB-T ---------------------------------------------------------------------------------------------------------------------- 40
2.2.5.3 – DVB-T representation on world map --------------------------------------------------------------------------------------------------- 40

Chapter Number 3: Source Coding --------------------------------------------------------------------------------------- 41

3.1 – Introduction ----------------------------------------------------------------------------------------------------------------------- 41

3.2 – Moving Pictures Experts Group (MPEG) Data Stream ----------------------------------------------------------------- 42

3.3 – Video Compression Technique ----------------------------------------------------------------------------------------------- 44

3.4 – The Packetized Elementary Stream (PES) --------------------------------------------------------------------------------- 44

3.5 – MPEG-2 Coding ------------------------------------------------------------------------------------------------------------------- 45

3.6 – MPEG-2 Transport Stream Packet ------------------------------------------------------------------------------------------ 47

3.7 – Information for the Receiver ------------------------------------------------------------------------------------- 49
3.7.1 – Synchronizing to the Transport Stream ------------------------------------------------------------------------------------------------ 49
3.7.2 – Reading out the Current Program Structure ------------------------------------------------------------------------------------------ 49
3.7.3 – Accessing a Program ------------------------------------------------------------------------------------------------------------------- 51
3.7.4 – Accessing Scrambled Programs ----------------------------------------------------------------------------------------------------------- 51
3.7.5 – Program Synchronization (PCR, DTS, and PTS) --------------------------------------------------------------------------------------- 52
3.7.6 – Additional Information in the Transport Stream ------------------------------------------------------------------------------------- 52
3.7.7 – Non-Private and Private Sections and Tables ----------------------------------------------------------------------------------------- 52

3.8 – Scalability -------------------------------------------------------------------------------------------------------------------------- 53

3.9 – MPEG-2 Picture Types ---------------------------------------------------------------------------------------------------------- 53

3.10 – MPEG-2 Problems-------------------------------------------------------------------------------------------------- 54
3.10.1 – Problems in Coding at Low Bit-Rate --------------------------------------------------------------------------------------------------- 54
3.10.2 – Problems in Coding of Chrominance Components in Interlaced Video ------------------------------------------------------ 54

Chapter Number 4: Digital Modulation Techniques ----------------------------------------------------------------- 55

4.1 – Concept of Modulation -------------------------------------------------------------------------------------------- 55
4.1.1 – Importance of Modulation ---------------------------------------------------------------------------------------------------------------- 55

4.1.1.1 – Main Aim of Analog Modulation -------------------------------------------------------------------------------------------------- 55
4.1.1.2 – Main Aim of Digital Modulation --------------------------------------------------------------------------------------------------- 55
4.1.1.3 – Benefits of Signal Modulation ------------------------------------------------------------------------------------------------------ 55

4.1.2 – Digital vs. Analog ----------------------------------------------------------------------------------------------------------------------------- 56
4.1.3 – Modulation Techniques Performance -------------------------------------------------------------------------------------------------- 56

4.1.3.1 – Power Efficiency ----------------------------------------------------------------------------------------------------------------------- 57
4.1.3.2 – Bandwidth Efficiency ----------------------------------------------------------------------------------------------------------------- 57
4.1.3.3 – Tradeoff between Power Efficiency and Bandwidth Efficiency ---------------------------------------------------------------- 57
4.1.3.4 – Power Spectral Density (PSD) ------------------------------------------------------------------------------------------------------ 58
4.1.3.5 – System Complexity -------------------------------------------------------------------------------------------------------------------- 58

4.1.4 – Digital Modulation Schemes -------------------------------------------------------------------------------------------------------------- 58
4.1.5 – Geometric Representation of Modulated Signals ----------------------------------------------------------------------------------- 59


4.1.5.1 – Basis Signal Conditions ---------------------------------------------------------------------------------------------------------------- 59
4.1.5.2 – Constellation Diagrams -------------------------------------------------------------------------------------------------------------- 59
4.1.5.3 – Probability of Error Calculation using Constellation Diagrams ------------------------------------------------------------ 60

4.1.6 – Types of Modulation Technique used in different Communication Systems ------------------------------------------------ 61

4.2 – Line Codes ------------------------------------------------------------------------------------------------------- 62
4.2.1 – Non Return to Zero (NRZ) Line Coding -------------------------------------------------------------------------------------------------- 62

4.2.1.1 – Unipolar Non Return to Zero Line Coding --------------------------------------------------------------------------------------- 62
4.2.1.2 – Polar Non Return to Zero Line Coding ------------------------------------------------------------------------------------------- 63
4.2.1.3 – Non Return to Zero Space Line Coding ------------------------------------------------------------------------------------------ 63
4.2.1.4 – Non Return to Zero Inverted (Mark) Line Coding ----------------------------------------------------------------------------- 63

4.2.2 – Return to Zero (RZ) Line Coding ---------------------------------------------------------------------------------------------------------- 63
4.2.2.1 – Unipolar Return to Zero Line Coding --------------------------------------------------------------------------------------------- 64
4.2.2.2 – Polar Return to Zero Line Coding -------------------------------------------------------------------------------------------------- 64
4.2.2.3 – Bipolar Return to Zero Line Coding (AMI) --------------------------------------------------------------------------------------- 64

4.2.3 – Manchester Line Coding -------------------------------------------------------------------------------------------------------------------- 65
4.2.4 – Differential Line Coding --------------------------------------------------------------------------------------------------------------------- 65

4.3 – Amplitude Shift Keying (ASK) Modulation Technique ----------------------------------------------------------------- 66
4.3.1 – Binary Amplitude Shift Keying (BASK) --------------------------------------------------------------------------------------------------- 66

4.3.1.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 66
4.3.1.2 – Calculating Probability of Error ---------------------------------------------------------------------------------------------------- 67

4.3.2 – M’ary Amplitude Shift Keying (MASK) -------------------------------------------------------------------------------------------------- 68
4.3.2.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 68
4.3.2.2 – Calculating Probability of Error ---------------------------------------------------------------------------------------------------- 68

4.4 – Phase Shift Keying (PSK) Modulation Technique ----------------------------------------------------------------- 69
4.4.1 – Binary Phase Shift Keying (BPSK) --------------------------------------------------------------------------------------------------------- 69

4.4.1.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 69
4.4.1.2 – BPSK Representation ----------------------------------------------------------------------------------------------------------------- 69

4.4.1.2.1 – Equations Representation ---------------------------------------------------------------------------------------------------- 69
4.4.1.2.2 – Time Domain Representation ------------------------------------------------------------------------------------------------ 70
4.4.1.2.3 – Spectrum and Bandwidth Representation-------------------------------------------------------------------------------- 70
4.4.1.2.4 – Constellation Representation ------------------------------------------------------------------------------------------------ 71

4.4.1.3 – BPSK Modulator ------------------------------------------------------------------------------------------------------------------------ 71
4.4.1.4 – BPSK De-Modulator ------------------------------------------------------------------------------------------------------------------- 71
4.4.1.5 – Power and Bandwidth Properties of BPSK -------------------------------------------------------------------------------------- 72
4.4.1.6 – Probability of Error for BPSK Modulation Scheme ---------------------------------------------------------------------------- 72

4.4.2 – Differential Phase Shift Keying (DPSK) -------------------------------------------------------------------------------------------------- 72
4.4.2.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 72
4.4.2.2 – Differential Encoding and Decoding Method ---------------------------------------------------------------------------------- 72
4.4.2.3 – DPSK Modulation ---------------------------------------------------------------------------------------------------------------------- 72
4.4.2.4 – DPSK De-Modulation ----------------------------------------------------------------------------------------------------------------- 73

4.4.2.4.1 – Suboptimum Receiver ---------------------------------------------------------------------------------------------------------- 73
4.4.2.4.2 – Optimum Receiver -------------------------------------------------------------------------------------------------------------- 73

4.4.2.5 – DPSK Advantages and Disadvantages -------------------------------------------------------------------------------------------- 74
4.4.2.6 – DPSK Power Spectral Density Representation --------------------------------------------------------------------------------- 74
4.4.2.7 – Probability of Error in DPSK System ---------------------------------------------------------------------------------------------- 74

4.4.3 – M’ary Phase Shift Keying (MPSK) --------------------------------------------------------------------------------------------------------- 75
4.4.3.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 75
4.4.3.2 – MPSK Representations --------------------------------------------------------------------------------------------------------------- 75

4.4.3.2.1 – Signal Equation Representation --------------------------------------------------------------------------------------------- 75
4.4.3.2.2 – Constellation Representation ------------------------------------------------------------------------------------------------ 76


4.4.3.3 – Probability of Error in MPSK Scheme --------------------------------------------------------------------------------------------- 76
4.4.3.4 – Power and Bandwidth Efficiency of MPSK --------------------------------------------------------------------------------------- 76
4.4.3.5 – MPSK Modulator ----------------------------------------------------------------------------------------------------------------------- 78
4.4.3.6 – MPSK De-Modulator ------------------------------------------------------------------------------------------------------------------- 78

4.4.4 – Special MPSK Techniques: “QPSK” ------------------------------------------------------------------------------------------------------- 79
4.4.4.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 79
4.4.4.2 – QPSK Representations---------------------------------------------------------------------------------------------------------------- 79

4.4.4.2.1 – Signal Equation Representation --------------------------------------------------------------------------------------------- 79
4.4.4.2.2 – Constellation Representation ------------------------------------------------------------------------------------------------ 80

4.4.4.3 – QPSK Probability of Error ------------------------------------------------------------------------------------------------------------ 80
4.4.4.4 – Bandwidth of QPSK ------------------------------------------------------------------------------------------------------------------- 80
4.4.4.5 – QPSK Modulator ----------------------------------------------------------------------------------------------------------------------- 81
4.4.4.6 – QPSK De-Modulator ------------------------------------------------------------------------------------------------------------------ 81

4.4.5 – Special MPSK Techniques: “OQPSK” ---------------------------------------------------------------------------------------------------- 82
4.4.6 – Special MPSK Techniques: “π/4-QPSK” ------------------------------------------------------------------------------------------------ 83

4.4.6.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 83
4.4.6.2 – “π/4” QPSK Constellation Representation -------------------------------------------------------------------------------------- 83
4.4.6.3 – “π/4” QPSK Phase Distribution ---------------------------------------------------------------------------------------------------- 83
4.4.6.4 – “π/4” QPSK Illustrating Example --------------------------------------------------------------------------------------------------- 84

4.5 – Frequency Shift Keying (FSK) Modulation Technique ----------------------------------------------------------------- 85
4.5.1 – Historical View -------------------------------------------------------------------------------------------------------------------------------- 85
4.5.2 – Binary Frequency Shift Keying (BFSK) --------------------------------------------------------------------------------------------------- 85

4.5.2.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 85
4.5.2.2 – BFSK Representations ---------------------------------------------------------------------------------------------------------------- 85

4.5.2.2.1 – Signal Representation ---------------------------------------------------------------------------------------------------------- 85
4.5.2.2.2 – Orthogonality Condition ------------------------------------------------------------------------------------------------------- 86

4.5.2.3 – BFSK Illustrating Example ------------------------------------------------------------------------------------------------------------ 86
4.5.2.4 – Power Spectral Density Evaluations ---------------------------------------------------------------------------------------------- 87
4.5.2.5 – BFSK Modulator ------------------------------------------------------------------------------------------------------------------------ 87
4.5.2.6 – BFSK De-Modulators ------------------------------------------------------------------------------------------------------------------ 88

4.5.2.6.1 – Coherent Detector -------------------------------------------------------------------------------------------------------------- 88
4.5.2.6.2 – Non-Coherent Detector ------------------------------------------------------------------------------------------------------- 88

4.5.2.7 – Probability of Error for BFSK -------------------------------------------------------------------------------------------------------- 89
4.5.2.7.1 – Coherent Detector --------------------------------------------------------------------------------------------------------------- 89
4.5.2.7.2 – Non-Coherent Detector ------------------------------------------------------------------------------------------------------- 89

4.5.3 –M’ary Frequency Shift Keying (MFSK) --------------------------------------------------------------------------------------------------- 90 4.5.3.1 – Overview --------------------------------------------------------------------------------------------------------------------------------- 90 4.5.3.2 – MFSK Representations --------------------------------------------------------------------------------------------------------------- 90

4.5.3.2.1 – Signal Representation ---------------------------------------------------------------------------------------------------------- 90 4.5.3.2.2 – Orthogonality Condition ------------------------------------------------------------------------------------------------------- 90

4.5.3.3 – Symbol and Bit Probability of Error in MFSK Systems ----------------------------------------------------------------------- 90 4.5.3.3.1 – Symbol Probability of Error --------------------------------------------------------------------------------------------------- 90 4.5.3.3.2 – Bit Probability of Error --------------------------------------------------------------------------------------------------------- 90 4.5.3.3.3 – Effect of Changing “M” on the Probability of Error -------------------------------------------------------------------- 91

4.5.4 –Other FSK Techniques ----------------------------------------------------------------------------------------------------------------------- 91 4.5.4.1 – Minimum Shift Keying (MSK) ------------------------------------------------------------------------------------------------------- 91 4.5.4.2 – Gaussian Minimum Shift Keying (GMSK) ---------------------------------------------------------------------------------------- 92

4.6 – Quadrature Amplitude Modulation (QAM) Modulation Technique ----------------------------------------------- 93 4.6.1 –Overview ---------------------------------------------------------------------------------------------------------------------------------------- 93 4.6.2 –QAM types-------------------------------------------------------------------------------------------------------------------------------------- 93

4.6.2.1 – Circular QAM---------------------------------------------------------------------------------------------------------------------------- 93

4.6.2.1.1 – 16-QAM as an Example on Circular QAM ---------- 94
4.6.2.2 – Rectangular QAM ---------- 94
4.6.2.2.1 – 16-QAM as an Example on Rectangular QAM ---------- 95
4.6.3 – Calculating Probability of Error ---------- 95
4.6.4 – QAM Modulator ---------- 96
4.6.5 – QAM De-Modulator ---------- 96
4.6.6 – QAM Bandwidth Efficiency ---------- 96
4.7 – Coherent Detection ---------- 97
4.7.1 – Carrier Recovery and Symbol Synchronization ---------- 97
4.7.2 – Clock Recovery ---------- 97
4.8 – Comparison between Different Modulation Techniques ---------- 98
4.8.1 – Probability of Error ---------- 98
4.8.2 – Bit Error Rate Curves ---------- 99
4.8.2.1 – Phase Shift Keying BER Curves ---------- 100
4.8.2.1.1 – Notes on Phase Shift Keying BER Curves ---------- 101
4.8.2.2 – Frequency Shift Keying BER Curves ---------- 101
4.8.2.2.1 – Notes on Frequency Shift Keying BER Curves ---------- 102
4.8.2.3 – Quadrature Amplitude Modulation BER Curves ---------- 103
4.8.2.3.1 – Notes on Quadrature Amplitude Modulation BER Curves ---------- 103
4.8.2.4 – Comparative Simulations ---------- 104
4.8.2.4.1 – At Fixed Modulation Order ---------- 104
4.8.2.4.2 – All Introduced Modulation Techniques ---------- 105
4.8.3 – Overall Modulation Techniques Discussion ---------- 106
4.8.4 – Modulation and Demodulation using MATLAB ---------- 106

Chapter Number 5: Wireless Channel Problems ---------- 108

5.1 – Introduction ---------- 108
5.1.1 – Analog and Digital Channel Models ---------- 108
5.1.2 – Noise in Wireless Channels ---------- 109
5.1.3 – Basic Propagation Mechanisms ---------- 109
5.1.3.1 – Reflection ---------- 109
5.1.3.2 – Diffraction ---------- 110
5.1.3.3 – Scattering ---------- 112
5.2 – Path Loss in Wireless Communication Channels ---------- 113
5.2.1 – Free Space Model ---------- 114
5.2.2 – 2-Ray Model ---------- 114
5.2.3 – General Ray Model ---------- 115
5.2.4 – Empirical Path Loss Models ---------- 115
5.2.4.1 – Okumura's Model ---------- 116
5.2.4.2 – Hata's Model ---------- 117
5.2.4.2.1 – Hata Model for Suburban Areas ---------- 117
5.2.4.2.2 – Hata Model for Urban Areas ---------- 117
5.2.4.2.3 – Hata Model for Open Areas ---------- 118
5.2.4.3 – COST 231-Hata Model ---------- 118
5.2.4.4 – Walfisch-Bertoni Model ---------- 119
5.2.4.5 – COST 231-Walfisch-Ikegami Model ---------- 121
5.2.4.6 – Stanford University Interim (SUI) Model ---------- 123
5.2.4.7 – Comparison between Empirical Models ---------- 124
5.3 – Interference in Wireless Communication Channels ---------- 126

5.3.1 – Main Reasons of Interference ---------- 126
5.3.1.1 – Multiple Signals in the Front Ends of a Communication System ---------- 127
5.3.1.2 – Receiver Overload ---------- 127
5.3.1.3 – Out-of-Band Emission (OOBE) ---------- 127
5.3.1.4 – Base-Station Intermodulation Products ---------- 127
5.3.1.5 – Skip ---------- 127
5.3.1.6 – Ducting ---------- 127
5.3.1.7 – General RF Noise ---------- 127
5.3.2 – Inter-Symbol Interference (ISI) ---------- 128
5.3.2.1 – Causes of Inter-Symbol Interference ---------- 128
5.3.2.2 – Countering Inter-Symbol Interference ---------- 129
5.3.3 – Inter-Carrier Interference (ICI) ---------- 130
5.3.3.1 – Doppler Effect ---------- 130
5.3.3.2 – Synchronization Error ---------- 131
5.3.3.3 – Multi-Path Fading ---------- 131
5.3.3.4 – Solutions for Inter-Carrier Interference ---------- 132
5.3.3.4.1 – CFO Estimation ---------- 132
5.3.3.4.2 – Windowing Estimation ---------- 132
5.3.3.4.3 – Inter-Carrier Interference Self-Cancellation ---------- 132
5.4 – Large Scale Fading in Wireless Communication Channels ---------- 133
5.4.1 – Shadowing Model ---------- 133
5.4.2 – Combined Path Loss and Shadowing Model ---------- 135
5.4.3 – Outage Probability under Path Loss and Shadowing Effects ---------- 136
5.5 – Small Scale Fading in Wireless Communication Channels ---------- 136
5.5.1 – Introduction ---------- 136
5.5.2 – Flat Fading ---------- 137
5.5.3 – Fast Fading ---------- 139
5.5.4 – Slow Fading ---------- 139
5.5.5 – Rayleigh Fading ---------- 140
5.5.6 – Ricean Fading ---------- 142
5.5.7 – Nakagami-m Fading ---------- 143

Chapter Number 6: Channel Coding ---------- 146

6.1 – Introduction ---------- 146
6.1.1 – Difference between Channel Coding and Source Coding ---------- 147
6.1.2 – Minimum Distance Considerations ---------- 147
6.1.3 – Modulo-2 Arithmetic Operations ---------- 149
6.2 – Comparison of Typical Coded versus Un-coded Systems' Performance ---------- 150
6.2.1 – Error Performance versus Band-Width Performance ---------- 150
6.2.2 – Power Performance versus Band-Width Performance ---------- 150
6.2.3 – Data Rate Performance versus Band-Width Performance ---------- 150
6.3 – Hard Decisions versus Soft Decisions ---------- 151
6.3.1 – Hard Decision Decoding ---------- 152
6.3.2 – Soft Decision Decoding ---------- 152
6.4 – Error Control Coding ---------- 152
6.4.1 – Linear Block Codes ---------- 152
6.4.1.1 – Design Equations ---------- 153
6.4.1.2 – Properties of Linear Block Codes ---------- 155
6.4.2 – Cyclic Codes ---------- 155

6.4.2.1 – Introduction ---------- 155
6.4.2.2 – Generator Polynomial ---------- 156
6.4.2.3 – Parity-Check Polynomial ---------- 157
6.4.2.4 – Generator and Parity-Check Matrix Representations ---------- 158
6.4.2.5 – Case Study I: Cyclic Redundancy Check (CRC) Codes ---------- 159
6.4.2.5.1 – 16-Bit CRC-CCITT (USA Model) ---------- 159
6.4.2.5.2 – 16-Bit CRC-ATM ---------- 160
6.4.2.6 – Case Study II: BCH Codes ---------- 160
6.4.2.6.1 – Introduction ---------- 160
6.4.2.6.2 – Encoding Algorithm ---------- 161
6.4.2.6.3 – Decoding Algorithm ---------- 161
6.4.3 – Convolutional Codes ---------- 163
6.4.3.1 – Introduction ---------- 163
6.4.3.2 – Encoder Structure ---------- 163
6.4.3.3 – Connection Representation ---------- 164
6.4.3.4 – Convolutional Encoder Representation ---------- 165
6.4.3.4.1 – Polynomial Representation ---------- 165
6.4.3.4.2 – State Representation ---------- 165
6.4.3.4.3 – Tree Diagram Representation ---------- 166
6.4.3.4.4 – Trellis Diagram Representation ---------- 167
6.4.3.5 – Case Study: Low-Density Parity-Check (LDPC) Codes ---------- 169
6.4.3.5.1 – Introduction ---------- 169
6.4.3.5.2 – Matrix Representation of LDPC Codes ---------- 169
6.4.3.5.3 – Graphical Representation of LDPC Codes ---------- 170
6.4.3.5.4 – Regular and Irregular LDPC Codes ---------- 170
6.4.3.5.5 – Constructing LDPC Codes ---------- 170
6.4.3.5.6 – Hard Decoding of LDPC Codes ---------- 170
6.4.3.5.7 – Soft-Decision Decoding ---------- 172
6.4.3.5.8 – LDPC Performance ---------- 173

Chapter Number 7: Orthogonal Frequency Division Multiplexing ---------- 174

7.1 – Introduction ---------- 174
7.2 – OFDM Historical Overview ---------- 175
7.3 – Important Definitions ---------- 175
7.3.1 – Inter-Symbol Interference (ISI) ---------- 175
7.3.2 – Inter-Carrier Interference (ICI) ---------- 176
7.4 – OFDM as a Multicarrier Transmission Technique ---------- 177
7.5 – OFDM Concept ---------- 179
7.5.1 – Illustrative Example ---------- 179
7.5.2 – Analysis ---------- 179
7.5.2.1 – Time Domain Analysis ---------- 180
7.5.2.2 – Frequency Domain Analysis ---------- 180
7.5.2.3 – Conclusions from Both Domain Analyses ---------- 181
7.6 – Orthogonality of OFDM ---------- 181
7.7 – Comparing FDM to OFDM ---------- 182
7.7.1 – Frequency Division Multiplexing (FDM) ---------- 182
7.7.2 – Orthogonal Frequency Division Multiplexing (OFDM) ---------- 183

7.8 – OFDM Implementation ---------- 184
7.8.1 – Implementation using "FFT/IFFT" ---------- 185
7.9 – OFDM Transmitter ---------- 185
7.10 – OFDM Receiver ---------- 186
7.11 – Cyclic Prefix ---------- 186
7.11.1 – Cyclic Prefix Representation in an OFDM System ---------- 187
7.11.2 – Advantages of Using a Cyclic Prefix ---------- 188
7.11.3 – Disadvantages of Using a Cyclic Prefix ---------- 188
7.12 – Coded Orthogonal Frequency Division Multiplexing (COFDM) ---------- 188
7.13 – Orthogonal Frequency Division Multiple Access (OFDMA) ---------- 190
7.14 – Peak-to-Average Power Ratio (PAPR) ---------- 191
7.14.1 – Conceptual Meaning ---------- 191
7.14.2 – The Cause of PAPR ---------- 192
7.14.3 – PAPR Effect ---------- 193
7.14.4 – PAPR Reduction Techniques ---------- 193
7.14.4.1 – Distorting Method ---------- 193
7.14.4.2 – Non-Distorting Method ---------- 193
7.14.4.2.1 – Selective Mapping Method ---------- 193

Chapter Number 8: Space Time Coding ---------- 194

8.1 – Space Time Coding Concept ---------- 194
8.2 – Diversity ---------- 194
8.2.1 – Diversity Classes ---------- 194
8.2.1.1 – Time Diversity ---------- 194
8.2.1.2 – Frequency Diversity ---------- 195
8.2.1.3 – Antenna Diversity ---------- 196
8.2.1.3.1 – Multiple Input Single Output (MISO) ---------- 197
8.2.1.3.2 – Single Input Multiple Output (SIMO) ---------- 197
8.2.1.3.3 – Multiple Input Multiple Output (MIMO) ---------- 198
8.2.1.3.4 – Multiple Input Multiple Output Multi-User (MIMO-MU) ---------- 199
8.2.2 – Diversity Effect on Bit-Error-Rate ---------- 199
8.2.3 – Diversity Condition ---------- 200
8.2.3.1 – Polarization Diversity ---------- 201
8.2.3.2 – Angle Diversity ---------- 201
8.3 – Spatial Multiplexing ---------- 201
8.4 – MIMO Concept ---------- 201
8.4.1 – Advantages of MIMO Systems ---------- 202
8.4.2 – MIMO General Model ---------- 202
8.4.3 – MIMO General Capacity Equation ---------- 203
8.4.4 – Factors Affecting MIMO System Capacity ---------- 204
8.4.5 – How Does a MIMO System Work? ---------- 205
8.5 – Receiving Space Time Codes ---------- 205
8.5.1 – Maximum Likelihood Decoder ---------- 205
8.5.2 – Zero Forcing Decoder ---------- 206
8.5.3 – Minimum Mean Square Error (MMSE) Decoder ---------- 206

8.5.4 – Successive Cancellation Decoder ------------------------------------------------------------------------------------------------------ 206 8.5.4.1 – Order Successive Interference Cancellation Algorithm ------------------------------------------------------------------- 207 8.5.4.2 – Vertical-Ball Laboratories Layered Space-Time Algorithm --------------------------------------------------------------- 207

Chapter Number 9: Digital Video Broadcasting 2nd Generation Terrestrial Simulation ------------------ 208

9.1 – Introduction ---------------------------------------------------------------------------------------------------------------------- 208 9.1.1 –DVB-T2 Key Features ---------------------------------------------------------------------------------------------------------------------- 208

9.1.1.1 – Physical Layer Pipes (PLPs) -------------------------------------------------------------------------------------------------------- 208 9.1.1.1.1 – Input Mode “A” ---------------------------------------------------------------------------------------------------------------- 208 9.1.1.1.2 – Input Mode “B” ---------------------------------------------------------------------------------------------------------------- 208

9.1.1.2 – Additional Band-Widths (1.7 MHz, 10 MHz) --------------------------------------------------------------------------------- 209 9.1.1.3 – Extended Carrier Mode (8K, 16K, 32K)----------------------------------------------------------------------------------------- 209 9.1.1.4 – Alamouti-Based MISO (In Frequency direction) ----------------------------------------------------------------------------- 209 9.1.1.5 – Pre-ambles (P1 and P2) ------------------------------------------------------------------------------------------------------------ 209 9.1.1.6 – Pilot Patterns ------------------------------------------------------------------------------------------------------------------------- 210 9.1.1.7 – 256-QAM Modulation Technique ----------------------------------------------------------------------------------------------- 210 9.1.1.8 – Rotated Constellations ------------------------------------------------------------------------------------------------------------- 210 9.1.1.9 – 16K and 32K FFT Sizes and (1/128) guard-interval fraction -------------------------------------------------------------- 211 9.1.1.10 – LDPC and BCH error correcting codes ---------------------------------------------------------------------------------------- 211 9.1.1.11 – Interleavers (Bit, Cell, Time, and Frequency) ------------------------------------------------------------------------------- 211 9.1.1.12 – Peak-Average Power Reduction Techniques ------------------------------------------------------------------------------- 211 9.1.1.13 – Future Extension Frames (FEFs) ------------------------------------------------------------------------------------------------ 212

9.1.2 – System Architecture ---------- 212
9.1.2.1 – Mode Adaptation Module ---------- 212
9.1.2.2 – Stream Adaptation Module ---------- 213
9.1.2.3 – Bit Interleaved Coding and Modulation Module ---------- 213
9.1.2.4 – Frame Mapper Module ---------- 213
9.1.2.5 – Modulator Module ---------- 213

9.2 – Mode Adaptation Module ---------- 213
9.2.1 – Input Formats ---------- 214
9.2.2 – Input Interface ---------- 214
9.2.3 – Input Stream Synchronization ---------- 214
9.2.4 – Compensating Delay for Transport Stream ---------- 215
9.2.5 – Null Packet Deletion ---------- 215
9.2.6 – CRC-Encoding ---------- 216
9.2.7 – Base-Band Header Insertion ---------- 216
9.2.8 – Mode Adaptation Sub-system Output Stream Formats ---------- 217

9.2.8.1 – Normal Mode, GDPS and TS ---------- 217
9.2.8.2 – High Efficiency Mode, Transport Stream ---------- 218
9.2.8.3 – Normal Mode, GCS and GSE ---------- 219
9.2.8.4 – High Efficiency Mode, GSE ---------- 219

9.3 – Stream Adaptation Module ---------- 220
9.3.1 – Scheduler ---------- 220
9.3.2 – Padding ---------- 220
9.3.3 – Using Padding Fields for In-band Signaling ---------- 220
9.3.4 – Scrambler ---------- 221

9.4 – Bit Interleaving and Modulation Module ---------- 222
9.4.1 – Forward Error Correction Encoding ---------- 222

9.4.1.1 – Outer Coding (BCH) ---------- 223
9.4.1.2 – Inner Coding (LDPC) ---------- 223

Page 12: DVB-T2 graduation project book 2011

Digital Video Broadcasting 2nd Generation Terrestrial Simulation

2011

J | P a g e

9.4.1.2.1 – Inner Coding for Normal FEC Frame ---------- 224
9.4.1.2.2 – Inner Coding for Short FEC Frame ---------- 225
9.4.1.2.3 – Inner Coding Simulation Results ---------- 225

9.4.2 – Bit Interleaver ---------- 226
9.4.3 – Mapping Bits onto Constellations ---------- 227

9.4.3.1 – Bits to Cells Word De-Multiplexer ---------- 227
9.4.3.3 – Cell Word Mapping into I/Q Constellations ---------- 230

9.4.3.3.1 – Equation Representation ---------- 230
9.4.3.3.2 – Constellation Representation ---------- 231

9.4.4 – Constellation Rotation and Cyclic Q-Delay ---------- 234

9.5 – Frame Mapper Module ---------- 235
9.5.1 – Cell Interleaver ---------- 235
9.5.2 – Time Interleaver ---------- 236

9.5.2.1 – Case I ---------- 237
9.5.2.2 – Case II ---------- 237
9.5.2.3 – Case III ---------- 237

9.5.3 – Frame Builder ---------- 238
9.5.3.1 – Frame Structure ---------- 239
9.5.3.2 – Super Frame ---------- 239
9.5.3.3 – T2-Frame ---------- 240
9.5.3.4 – Duration of the T2-Frame ---------- 240
9.5.3.5 – Capacity and Structure of T2-Frame ---------- 240
9.5.3.6 – Signaling of the T2-Frame Structure and PLPs ---------- 241
9.5.3.7 – Overview of T2-Frame Mapping ---------- 242
9.5.3.8 – Auxiliary Stream Insertion ---------- 242
9.5.3.9 – Future Extension Frames ---------- 243

9.5.4 – Frequency Interleaver ---------- 243

9.6 – Modulator Module ---------- 244
9.6.1 – Alamouti Space Time Coding ---------- 244

9.6.1.1 – Alamouti Encoding ---------- 245
9.6.1.2 – Alamouti Modified Encoding ---------- 246
9.6.1.3 – Alamouti Modified Decoding using Zero-Forcing Decoder ---------- 247
9.6.1.4 – Alamouti Performance ---------- 247

9.6.2 – Pilot Insertion ---------- 248
9.6.2.1 – Definition of Reference Plane ---------- 249

9.6.2.1.1 – Symbol Level ---------- 249
9.6.2.1.2 – Frame Level ---------- 249

9.6.2.2 – Scattered Pilots ---------- 249
9.6.2.2.1 – Locations ---------- 250
9.6.2.2.2 – Modulation ---------- 250

9.6.2.3 – Continual Pilots ---------- 250
9.6.2.3.1 – Locations ---------- 251
9.6.2.3.2 – Modulation ---------- 251

9.6.2.4 – Edge Pilots ---------- 251
9.6.2.5 – P2 Pilots ---------- 251

9.6.2.5.1 – Locations ---------- 251
9.6.2.5.2 – Modulation ---------- 251

9.6.2.6 – P2 Pilots ---------- 252
9.6.2.6.1 – Locations ---------- 252
9.6.2.6.2 – Modulation ---------- 252

9.6.2.7 – Modifications on Pilots for MISO Operation ---------- 252


9.6.3 – Channel Estimation ---------- 253
9.6.3.1 – Least Square Estimator ---------- 254
9.6.3.2 – Interpolation ---------- 254

9.6.3.2.1 – Temporal Interpolation ---------- 254
9.6.3.2.2 – Frequency Interpolation ---------- 255

9.6.4 – OFDM Implementation using IFFT ---------- 255
9.6.5 – Peak-to-Average Power Reduction ---------- 255

9.6.5.1 – PAPR De-Mapping ---------- 257
9.6.6 – Guard Interval Insertion ---------- 257

9.7 – Synchronization using Control Signals ---------- 257
9.7.1 – P1-Symbols ---------- 257

9.7.1.1 – P1-Symbol Overview ---------- 258
9.7.1.2 – P1-Symbol Description ---------- 258
9.7.1.3 – P1-Symbol Generation ---------- 259
9.7.1.4 – Carrier Distribution in P1-Symbol ---------- 259
9.7.1.5 – Modulation of the Active Carriers in P1-Symbol ---------- 259
9.7.1.6 – P1-Symbol Decoding ---------- 261

9.7.2 – P2-Symbols ---------- 261
9.7.2.1 – L1-Signaling Data ---------- 261
9.7.2.2 – Repetition of L1-Post Dynamic Data ---------- 262
9.7.2.3 – L1-Post Extension Field ---------- 263
9.7.2.4 – CRC for the L1-Post Signaling ---------- 263
9.7.2.5 – Error Correction Coding and Modulation of L1-Pre Signaling ---------- 263
9.7.2.6 – Error Correction Coding and Modulation of L1-Post Signaling ---------- 263


Book Structure

Chapter Number one: Television History

This chapter illustrates the history of television, which records the work of numerous engineers and inventors in several countries over many decades. The fundamental principles of television were initially explored using electromechanical methods to scan, transmit and reproduce an image. As electronic camera and display tubes were perfected, electromechanical television gave way to all-electronic systems in nearly all applications.

Chapter Number two: Digital Video Broadcasting

This chapter discusses Digital Video Broadcasting (DVB), a suite of internationally accepted open standards for digital television. DVB standards are maintained by the DVB Project, an international industry consortium with more than 270 members, and are published by a Joint Technical Committee (JTC) of the European Telecommunications Standards Institute (ETSI), the European Committee for Electrotechnical Standardization (CENELEC) and the European Broadcasting Union (EBU). The interaction of the DVB sub-standards is described in the DVB Cookbook. Many aspects of DVB are patented, including elements of the MPEG video and audio coding.

Chapter Number three: Source Coding

This chapter is devoted to source coding, especially MPEG-2, a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio compression methods that permit the storage and transmission of movies using currently available storage media and transmission bandwidth.

Chapter Number four: Digital Modulation Techniques

This chapter illustrates digital modulation schemes, which transform digital signals into waveforms that are compatible with the nature of the communications channel. There are two major categories of digital modulation: one uses a constant-amplitude carrier and conveys the information in phase or frequency variations (PSK, FSK), while the other conveys the information in carrier amplitude variations and is known as amplitude shift keying (ASK). The chapter discusses various schemes such as BPSK, QPSK and M-ary QAM (16-QAM, 64-QAM, 256-QAM), and highlights the advantages and disadvantages of each one.
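The distinction between the two categories can be sketched numerically. The snippet below is our own illustration (not code from the book): it maps random bits onto Gray-coded QPSK and 16-QAM constellations and checks that only the QAM signal varies the carrier amplitude.

```python
import numpy as np

def qpsk_map(bits):
    """Map bit pairs to unit-energy QPSK symbols (Gray coding)."""
    b = np.asarray(bits).reshape(-1, 2)
    # bit 0 -> +1, bit 1 -> -1 on each rail
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qam16_map(bits):
    """Map 4-bit groups to 16-QAM symbols on a {-3,-1,+1,+3} grid."""
    b = np.asarray(bits).reshape(-1, 4)
    levels = np.array([-3, -1, 3, 1])  # Gray-coded level order
    i = levels[2 * b[:, 0] + b[:, 1]]
    q = levels[2 * b[:, 2] + b[:, 3]]
    return (i + 1j * q) / np.sqrt(10)  # normalise average symbol energy to 1

bits = np.random.randint(0, 2, 1200)
qpsk = qpsk_map(bits)
qam = qam16_map(bits)
print(np.ptp(np.abs(qpsk)))  # 0.0: every QPSK symbol has the same amplitude
print(np.ptp(np.abs(qam)))   # > 0: 16-QAM symbols span several amplitudes
```

The amplitude spread of the QAM signal is exactly what makes it more sensitive to noise and non-linear amplifiers, at the benefit of more bits per symbol.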

Chapter Number five: Wireless Channel Problems

Chapter five deals with the problems and impairments that degrade the performance of wireless channels. It begins with the effects of noise and interference on the channel, then analyzes large-scale fading and the mathematical and statistical models that describe this phenomenon. After that, small-scale fading is introduced in detail, as it is very detrimental to signal transmission and is the main problem in wireless channels. The parameters and classifications of small-scale fading channels are then listed, and the chapter ends with the main characteristics of such channels and the mathematical and empirical models that deal with multipath fading.
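The small-scale fading discussed in that chapter can be illustrated in a few lines of Python (an illustrative sketch with arbitrary parameters, not part of the book's simulation): summing many equal-power multipath components with random phases yields a complex channel gain whose envelope is Rayleigh-distributed, including occasional deep fades.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_samples = 32, 50_000  # arbitrary illustration values

# Each channel realisation is a sum of equal-power paths with random phases
phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_paths))
h = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)
env = np.abs(h)  # Rayleigh-distributed envelope

print(np.mean(env ** 2))   # average channel power close to 1
print(np.mean(env < 0.1))  # small but non-zero probability of a deep fade
```

The non-zero probability of the envelope dropping far below its RMS value is what motivates the interleaving and error-correction machinery of later chapters.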

Chapter Number six: Channel Coding

This chapter gives short notes on selected channel coding topics that become relevant in subsequent chapters, starting with a basic description of linear block codes, continuing with a summary of convolutional and BCH codes, and ending with LDPC codes. The chapter gives the properties of each code together with its advantages and disadvantages.
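As a taste of the linear block codes the chapter opens with, the sketch below works through a toy (7,4) Hamming code (our example; the BCH and LDPC codes used in DVB-T2 are far larger): a generator matrix G produces the codeword, and the syndrome computed with the parity-check matrix H locates a single bit error.

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
code = msg @ G % 2           # encode (arithmetic over GF(2))

rx = code.copy()
rx[2] ^= 1                   # the channel flips one bit

syndrome = H @ rx % 2
# The syndrome equals the column of H at the error position
err = int(np.argmax((H.T == syndrome).all(axis=1)))
rx[err] ^= 1                 # correct the located bit
print((rx == code).all())    # True: the single error is corrected
```

The same encode/syndrome-decode structure, scaled up to thousands of bits and solved iteratively, is what the LDPC decoder of Chapter nine does.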


Chapter Number seven: Orthogonal Frequency Division Multiplexing

This chapter illustrates orthogonal frequency-division multiplexing (OFDM), essentially identical to coded OFDM (COFDM) and discrete multi-tone modulation (DMT), a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation method. A large number of closely spaced orthogonal sub-carriers are used to carry data. The data is divided into several parallel streams, one for each sub-carrier, and each sub-carrier is modulated with a conventional modulation scheme (such as quadrature amplitude modulation or phase-shift keying) at a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation schemes in the same bandwidth.
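The multi-carrier idea can be demonstrated in a few lines of Python (an illustrative sketch with arbitrary parameters, not the DVB-T2 ones): an IFFT multiplexes QPSK symbols onto orthogonal sub-carriers, a cyclic prefix is prepended, and an FFT at the receiver separates the sub-carriers again, recovering the symbols exactly over an ideal channel.

```python
import numpy as np

N = 64    # number of sub-carriers (assumed for the example)
cp = 16   # cyclic-prefix length
rng = np.random.default_rng(0)

# One QPSK symbol per sub-carrier
symbols = (2 * rng.integers(0, 2, N) - 1
           + 1j * (2 * rng.integers(0, 2, N) - 1)) / np.sqrt(2)

tx = np.fft.ifft(symbols) * np.sqrt(N)   # multiplex onto sub-carriers
tx_cp = np.concatenate([tx[-cp:], tx])   # cyclic prefix guards against multipath

rx = tx_cp[cp:]                          # receiver discards the prefix...
recovered = np.fft.fft(rx) / np.sqrt(N)  # ...and demultiplexes with an FFT
print(np.max(np.abs(recovered - symbols)))  # ~0 over an ideal channel
```

With a real multipath channel, the cyclic prefix turns the channel into a per-sub-carrier complex gain, which is why OFDM equalization reduces to one multiplication per carrier.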

Chapter Number eight: Space Time Coding

This chapter gives short notes on space–time codes (STCs), methods employed to improve the reliability of data transmission in wireless communication systems using multiple transmit antennas. STCs rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at least some of them survive the physical path between transmission and reception in a good enough state to allow reliable decoding.
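This redundancy idea can be made concrete with the Alamouti scheme for two transmit antennas, which reappears in the DVB-T2 chapter. The snippet below is our own minimal sketch, assuming a flat noiseless channel, one receive antenna and perfect channel knowledge:

```python
import numpy as np

rng = np.random.default_rng(1)
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)   # two QPSK symbols
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
h1, h2 = h  # flat-fading gains from the two transmit antennas

# Slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1))
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining decouples the two symbols, scaled by the channel energy
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(abs(s1_hat - s1), abs(s2_hat - s2))  # both ~0 in the noiseless case
```

Because each symbol is carried over both antennas and both time slots, it only fails if both channel gains fade simultaneously; this is the diversity gain the chapter analyzes.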

Chapter Number nine: DVB-T2 Physical Layer

This chapter discusses the block diagram of the DVB-T2 system in detail and introduces the key features of the standard that improve its performance compared to previous DVB standards. The chapter also includes simulation results for the system.


Figure (1.1)

Figure (1.2)

Chapter Number 1: Television History

1.1 – Early History of Television

In Ireland, in 1873, a young telegraph operator, Joseph May, discovered the photoelectric effect: selenium bars, exposed to sunlight, show a variation in resistance. Variations in light intensity can therefore be transformed into electrical signals, which means they can be transmitted, as shown below in “Figure (1.1)”.

In Boston (USA), in 1875, George Carey proposed a system, shown in “Figure (1.2)”, based on exploring every point in the image simultaneously: a large number of photoelectric cells are arranged on a panel facing the image and wired to a panel carrying the same number of bulbs.

This system was impracticable if any reasonable quality criteria were to be respected. Even to match the quality of cinema films of that period, thousands of parallel wires would have been needed from one end of the circuit to the other.


Figure (1.3)

In France, in 1881, Constantine Senlecq published a sketch detailing a similar idea in an improved form: two rotating switches were placed between the panels of cells and lamps, and as these turned at the same rate they connected each cell, in turn, with the corresponding lamp. With this system, all the points in the picture could be sent one after the other along a single wire.

This is the basis of modern television: the picture is converted into a series of picture elements. Nonetheless, Senlecq's system, like that proposed by Carey, needed a large number of cells and lamps.

In Germany, in 1884, Paul Nipkow applied for a patent covering another image scanning system, shown below in “Figure (1.3)”. It was to use a rotating disc with a series of holes arranged in a spiral, each spaced from the next by the width of the image; a beam of light shining through the holes would illuminate each line of the image.

The light beam, whose intensity depended on the picture element, was converted into an electrical signal by the cell. At the receiving end, there was an identical disc turning at the same speed in front of a lamp whose brightness changed according to the received signal.

After a complete rotation of the discs, the entire picture had been scanned. If the discs rotated sufficiently rapidly, in other words if the successive light stimuli followed quickly enough one after the other, the eye no longer perceived them as individual picture elements. Instead, the entire picture was seen as if it were a single unit.

The idea was simple but it could not be put into practice with the materials available at the time. Other scientific developments were to offer an alternative.

The electron, the tiny grain of negative electricity which revolutionised physical science at the end of the 19th century, was the key. The extreme narrowness of electron beams and their absence of inertia caught the imagination of many researchers and oriented their studies towards what in time became known as electronics. The mechanical approach nevertheless stood its ground, and the competition lasted until 1937.

The cathode ray tube with a fluorescent screen was invented in 1897. Karl Ferdinand Braun, of the University of Strasbourg, had the idea of placing two electromagnets around the neck of the tube to make the electron beam move horizontally and vertically. On the fluorescent screen, the movement of the electron beam had the effect of tracing visible lines.

A Russian scientist, Boris Rosing, suggested this might be used as a receiver screen and conducted experiments in 1907 in his laboratory in Saint Petersburg.


Figure (1.4)

As early as 1908 the Scotsman A. A. Campbell Swinton outlined a system using cathode ray tubes at both the sending and receiving ends and represented it as shown in “Figure (1.4)”. This was the first purely electronic proposal. He published a description of it in 1911, including the following:

The image is thrown onto a photoelectric mosaic fixed to one of the tubes;

A beam of electrons then scans it and produces the electric signal.

At the receiving end, this electric signal controls the intensity of another beam of electrons which scans the fluorescent screen.

The methods proposed by Nipkow and Campbell Swinton were at the time theoretical ideas only. The available cells were not sensitive enough and they reacted too slowly to changes in light intensity. The signals were very weak and amplifiers had not yet been invented.

1.2 – Why is it called Television?

The names given to the first systems, at the end of the 19th century, highlighted the form of energy used for transmission; names such as "télectroscope" and "electrical telescope" were used. The German word "Fernsehen" was first used in 1890 by the physicist Eduard Liesegang. This became "fjernsyn" in Danish.

The French word "Télévision" was used for the first time in 1900 by the Russian physicist Constantin Perskyi, who delivered a speech on the subject during the great Paris exhibition. "Télévision" caught on, and it became "television" in English, "televisie" in Dutch, "televisione" in Italian, "televisión" in Spanish, etc.

But science marched on. There was the potassium cell, which reacted much more rapidly than the selenium cell; then came the triode, manufactured in large quantities from about 1915, the development of which owed much to the new-born "wireless".

There was also the neon lamp, whose light intensity could be varied rapidly, making it suitable for use in disc receivers. It was Nipkow's ideas which were the first to benefit from these inventions, and the first to become practical realities.

In 1925, an electrical engineer from Scotland, John Logie Baird, exhibited in Selfridge’s department store in London an apparatus with which he reproduced a simple image, in fact white letters on a black background, at a distance. It was not really television, because the two discs which served to transmit the image and to reproduce it were mounted on the same shaft, as shown in “Figure (1.5)”.


Figure (1.5)

However, Baird did effectively demonstrate that the principle of successive scanning could be applied in practice. He did it again in 1926, in his laboratory, with the first transmission of a real scene: the head of a person. The picture was scanned in 30 lines, with 5 full pictures every second.

Similar machines were built in Germany. A smaller mechanical apparatus, called the "Telehor", was presented at the Berlin Radio Show in 1928 by Denes von Mihaly. Here too the picture was scanned with 30 lines, but at a picture rate of 10 frames per second.

In France, sometime later, the "Semivisor" appeared. It also used 30-line scanning and was built by René Bartholemy. It was about this time that the first tests with the radio-electric transmission of television took place, using the medium-wave radio band.

These transmissions attracted the attention of many amateur enthusiasts who built their own disc receivers. The public slowly became aware of the research that was under way. Manufacturers joined in the new adventure, organising systematic studies in their laboratories. New companies were born, such as "Fernseh" in Germany (1929).

But what happened to Rosing's experiments? Had everyone forgotten them? In fact many researchers kept his work in mind, but they had to wait for developments in the design of cathode-ray tubes before these could be put to any practical use. Around 1930, a number of researchers independently developed the principle of interlaced scanning, which involves exploring first all the odd-numbered lines, followed by the even-numbered lines; this technique avoids flicker. Industry developed techniques to achieve a very great vacuum in tubes. Receivers with cathode-ray tubes came onto the market in 1933.

However, the use of cathode-ray tubes at the transmission source, where the picture was scanned, remained the stumbling-block for many years. Initially, the spot of light produced on the fluorescent screen was made to substitute for the light beam in the Nipkow system. In Germany, Manfred von Ardenne built the first "flying spot" cathode-ray tube, thereby enabling transparencies to be scanned. A complete transmission system was presented at the 1931 Berlin Radio Show. This scanning method was subsequently used for all television films.


Figure (1.6)

The process nonetheless posed enormous problems when applied to real scenes because the light beam had to operate in a darkened environment. Outdoor scenes, for example, were totally impossible. Another process, known as the "intermediate film" system, provided a roundabout solution to this problem for a number of years. Scenes were shot on film, and this was immediately developed and scanned by a disc or flying spot scanner.

The solution to the problem of outdoor shooting came from across the Atlantic. Following up an idea he had had in 1923, Vladimir Zworykin (one of Rosing's assistants, who had immigrated to the United States) invented the "Iconoscope" shown in “Figure (1.6)”. This was a globe-shaped cathode-ray tube, and it contained the first photoelectric mosaic, made from metal particles applied to both sides of a sheet of mica.

This first camera tube was more compact than the disc, easier to use and more sensitive. The electron beam which "visits" the elements of the mosaic at considerable speed collects from each point all the photoelectric charge that has accumulated there since the last visit, whereas in the mechanical systems the photoelectric cell receives the light from each point only during the very short period while it is actually being scanned.

Zworykin presented the first prototype iconoscope at a meeting of engineers in New York in 1929. The apparatus was built by RCA in 1933. It scanned the image in 120 lines, at a rate of 24 frames per second.

Progress was then rapid: as early as 1934, 343-line definition had been achieved and interlacing was being used.

In England, Isaac Schoenberg (another Russian emigrant and a childhood friend of Zworykin) led developments in the EMI Company on a camera tube similar to the iconoscope. This was the Emitron, and it had certain advantages over its rival. EMI, too, adopted interlacing. Also, as early as 1934, Schoenberg was aiming at a greater number of lines than RCA: the target was 405 lines.

The system of mechanical analysis, based on the Nipkow disk, nevertheless continued to hold favour with some.

In 1929, Baird convinced the BBC that it ought to make television transmissions outside normal radio programme hours, using a 30-line system giving 12½ frames per second. He marketed his first disc receivers, known as "televisors". He steadily improved his equipment, increasing the scanning to 60, 90, 120 and even 180 lines.

In France, René Bartholemy embarked on the development of a particular variant of the disc. During 1931, he gave two demonstrations, which brought him considerable renown, involving 30-line transmission and reception.


Figure (1.7)

Figure (1.8)

Barthélemy's system, which had been tried by certain German engineers, had a mirror drum instead of a disc with holes. The mirrors, which served to illuminate the subject with light from a bright source, were inclined to an increasing degree with respect to the drum axis. They therefore scanned the subject in a series of parallel lines. Potassium cells collected the light reflected from the subject.

Baird, too, built similar systems. However, the mirror drum was bulky and unsuitable for the high speeds needed to achieve a large number of lines. It was therefore abandoned in 1933 and work on Nipkow disc systems was resumed.

1.3 – The Very First Broadcasts

March 1935. A television service was started in Berlin using the iconoscope camera shown below in “Figure (1.7)” (180 lines/frame, 25 frames/second). Pictures were produced on film and then scanned using a rotating disk. Electronic cameras were developed in 1936, in time for the Berlin Olympic Games.

November 1935. Television broadcasting began in a studio in Paris, shown in “Figure (1.8)”, again using a mechanical system for picture analysis (180 lines/frame, 25 frames/second).


That same year, spurred on by the work of Schoenberg, the EMI Company in England developed a fully electronic television system with 405-line definition, 25 frames/second, and interlace.

The Marconi Company provided the necessary support regarding the development of transmitters. The British government authorised the use of this standard, along with that of Baird, for the television service launched by the BBC in London in November 1936 (the Baird system used mechanical scanning, 240 lines, 25 frames per second, and no interlace). The two systems were used in turn, during alternate weeks.

The 240-line mechanical scanning system pushed the equipment to the limit and suffered from poor sensitivity; the balance thus swung in favour of the all-electronic 405-line system, which was finally adopted in England in February 1937.

The same year, France introduced a 455-line all-electronic system. Germany followed suit with 441 lines, and this standard was also adopted by Italy. The iconoscope was triumphant: it was sensitive enough to allow outdoor shooting.

It was by means of a monster no less than 2.2 m long, the "television cannon" (in fact an iconoscope camera built by Telefunken), that the people of Berlin and Leipzig were able to see pictures from the Berlin Olympic Games. Viewing rooms, known as Fernsehstuben, were built for the purpose. Equipment that was easier to handle was used by the BBC for the coronation of His Majesty King George VI in 1937 and, the following year, for the Epsom Derby.

Public interest was aroused: from 1937 to 1939, receiver sales in London soared from 2,000 to 20,000. Research in the United States (Zworykin and the RCA Company) bore fruit at about the same time. The first public television service was inaugurated in New York in 1939 with a 340-line system operating at 30 frames per second.

Two years later, the United States adopted a 525-line, 60 frames/second standard. The first transmitters were installed in the capital cities (London, Paris, Berlin, Rome, New York), so only a small proportion of the population of each country was able to benefit. Plans were made to cover other regions.

The War stopped the expansion of television in Europe. However, the intensive research into electronic systems during the War, and the practical experience it gave, led to enhancements of television technology. Work on radar screens, for example, benefited cathode-ray tube design, and circuits able to operate at higher frequencies were developed.

When the War was over, broadcasts resumed in the national standards fixed previously: 405 lines in England, 441 lines in Germany and Italy, 455 lines in France. Research showed the advantages of higher picture definition, and systems with more than 1,000 lines were investigated. The 819-line standard emerged in France. It was not until 1952 that a single standard (625 lines, 50 frames/second) was proposed, and progressively adopted, for use throughout Europe. Modern television was born.


Figure (1.9)

Figure (1.10)

1.4 – Color Television Transmission

The first practical demonstration of colour television was given back in 1928 by Baird; he used mechanical scanning with a Nipkow disk having three spirals, one for each primary, each spiral provided with a separate set of colour filters. In 1929, H.E. Ives and his colleagues at Bell Telephone Laboratories presented a system using a single spiral, through the holes of which the light from three coloured sources was passed; the signal for each primary was then sent over a separate circuit.

As 1940 approached, only cathode-ray tubes were envisaged, at least for displaying the received picture. In 1938, Georges Valensi, in France, proposed the principle of dual compatibility:

Programmes transmitted in colour should also be receivable by black and white receivers.

Programmes transmitted in black and white should also be seen as black and white by colour receivers.

In 1940, Peter Goldmark demonstrated a sequential system for transmitting three primaries, obtained using three colour filters placed in the light path before scanning, represented below in “Figure (1.9)”.

The system was barely practicable. In addition, it required three times as large a range of frequencies (i.e. bandwidth) as black-and-white transmission; other researchers were therefore looking for a non-mechanical solution which would not require such a large bandwidth.

In 1953, simultaneous research at RCA and the Hazeltine laboratories in the US led to the first compatible system. This was standardised by the National Television System Committee, made up of television experts working in industry, and is known as the NTSC system, shown in “Figure (1.10)”.


Figure (1.11)

The signal is no longer transmitted in the form of three primaries, but as a combination of these primaries. This provides a "luminance" signal Y which can be used by black and white receivers. The colour information is combined to constitute a single "chrominance" signal C. The Y and C signals are brought together for transmission.

The isolation of the chrominance and luminance information in the transmitted signal also allows bandwidth savings to be made. In effect, the bandwidth of the chrominance information can be made much smaller than that of the luminance, because the acuity of the human eye is lower for changes of colour than it is for changes of brightness.

The visual appearance of a colour can be defined in terms of three physical parameters for which words exist in our everyday vocabulary:

The hue (which is generally indicated by a noun).

The saturation (indicated by an adjective, with the extremes referred to as "pure" colours and "washed-out" colours).

The brightness or lightness (also indicated by an adjective, the extremes here being "bright" colours and "dark" colours).

The compatible colour television signal is made up in such a way as to ensure that these parameters are incorporated, as shown below in “Figure (1.11)”.

The amplitude of the C signal corresponds to the colour saturation, and its phase corresponds to the hue. The system was launched in the United States as early as 1954. The first American equipment was very susceptible to hue errors caused by certain transmission conditions. European researchers tried to develop a more robust signal, less sensitive to phase distortions.


Figure (1.12)

In 1961, Henri de France put forward the SECAM system (Séquentiel Couleur à Mémoire), in which the two chrominance components are transmitted in sequence, line after line, using frequency modulation. In the receiver, the information carried in each line is memorised until the next line has arrived, and then the two are processed together to give the complete colour information for each line.

In 1963, Dr. Walter Bruch, in Germany, proposed a variant of the NTSC system known as PAL (Phase Alternation by Line). It differs from NTSC in that one of the chrominance components is transmitted in opposite phase on successive lines, thus compensating phase errors automatically. Both solutions found application in the colour television services launched in 1967 in England, Germany and France, successively.

1.5 – Analog Television (ATV)

Throughout the world, there are only two major analog television standards: the 625-line system with a 50 Hz frame rate and the 525-line system with a 60 Hz frame rate. The composite color video-and-blanking signal (CVBS, CCVS) of these systems is transmitted in the following color transmission standards:

PAL (Phase Alternating Line).

NTSC (National Television System Committee).

SECAM (Séquentiel Couleur à Mémoire).

PAL, NTSC and SECAM color transmission is possible in 625-line systems and in 525-line systems, as illustrated by “Figure (1.12)”. However, not all the possible combinations have actually been implemented. The video signal with its composite coding is then modulated onto a carrier, the vision carrier, mostly with negative-going amplitude modulation; it is only in Standard L (France) that positive-going modulation (sync inside) is used. The first and second sound subcarriers are usually FM-modulated, but an amplitude-modulated sound subcarrier is also used (Standard L, France). In Northern Europe, the second sound subcarrier is a digitally modulated NICAM subcarrier.

Although the differences between the methods applied in the various countries are only minor, together they result in a multiplicity of standards which are mutually incompatible. The analog television standards are numbered through alphabetically from A to Z and essentially describe the channel frequencies and bandwidths in VHF bands I and III (47 ... 68 MHz, 174 ... 230 MHz) and UHF bands IV and V (470 ... 862 MHz).

An example is Standards B/G (Germany): B = 7 MHz channels in VHF, G = 8 MHz channels in UHF.


Figure (1.13)

In the television camera, each frame is dissected into a line structure of 625 or 525 lines. Because of the finite beam fly-back time in the television receiver, however, a vertical and a horizontal blanking interval became necessary; as a result, not all lines are visible, the remainder forming part of the vertical blanking interval. In a line, too, only a certain part is actually visible. In the 625-line system, 50 lines are blanked out and the number of visible lines is 575. In the 525-line system, between 38 and 42 lines fall into the area of the vertical blanking interval.

To reduce the flickering effect, each frame is divided into two fields, combining the even-numbered lines and the odd-numbered lines respectively. The fields are transmitted alternately and together they result in a field repetition rate of twice the frame rate. The beginning of a line is marked by the horizontal sync pulse, a pulse which is below the zero-volt level in the video signal and has a magnitude of -300 mV. All the timing in the video signal is referred to the front edge of the sync pulse, and there exactly to the 50% point. In the 625-line system, the active image area in the line begins 10 µs after the sync pulse falling edge, and has a length of 52 µs.

In the matrix in the television camera, the luminance (luminous density) signal (Y signal or black/white signal) is first obtained and converted into a signal having a voltage range from 0 V (corresponding to black level) to 700 mV (100% white). The matrix in the television camera also produces the color difference signals from the Red, Green and Blue outputs. Color difference signals were chosen because, on the one hand, the luminance has to be transmitted separately for reasons of compatibility with black/white television and, on the other hand, color transmission had to conserve bandwidth as effectively as possible. Due to the reduced color resolution of the human eye, it was possible to reduce the bandwidth of the color information. In fact, the color bandwidth is reduced quite significantly compared with the luminance bandwidth: the luminance bandwidth is between 4.2 MHz (PAL M), 5 MHz (PAL B/G) and 6 MHz (PAL D/K, L), whereas the chrominance bandwidth is only about 1.3 MHz in most cases.

“Figure (1.13)” shows the vector diagram of the composite PAL video signal, and “Figure (1.14)” shows the analog composite video signal.

In the studio, the color difference signals U = B - Y and V = R - Y are still used directly. For transmission purposes, however, the color difference signals U and V are vector modulated (IQ modulated) onto a color subcarrier in PAL and NTSC. In SECAM, the color information is transmitted frequency-modulated.

Figure (1.14)

Figure (1.15)

The common feature of PAL, SECAM and NTSC is that the color information is modulated onto a color subcarrier of a higher frequency which is placed at the upper end of the video frequency band and is simply added to the luminance signal. The frequency of the color subcarrier was selected such that it causes as little interference to the luminance channel as possible. It is frequently impossible, however, to avoid crosstalk between luminance and chrominance and vice versa, e.g. if a newsreader is wearing a pinstriped suit. The color effects which are then visible on the pinstriped pattern are the result of this crosstalk (cross-color or cross-luminance effects).

Vision terminals can have the following video interfaces:

CVBS, CCVS: 75 Ohms, 1 Vpp (video signal with composite coding).

RGB components (SCART, Peritel).

Y/C (separate luminance and chrominance, to avoid cross-color or cross-luminance effects).

In the case of digital television, it is advisable to use an RGB (SCART) connection or a Y/C connection for the cabling between the receiver and the TV monitor in order to achieve optimum picture quality. In digital television only frames are transmitted, no fields; it is only at the very end of the transmission link that fields are regenerated in the set-top box or in the decoder of the IDTV receiver. The original source material, too, is provided in interlaced format, which must be taken into account in the compression (field coding).

1.5.1 –Scanning an Original Black/White Picture

At the beginning of the age of television, the pictures were only in “black and white”. The circuit technology available in the 1950s consisted of tube circuits which were relatively large, susceptible to faults, and consumed a lot of power. The television technician was still a real repairman and, in the case of a fault, visited his customers carrying his box of vacuum tubes. Let us look at how such a black/white signal, the “luminance signal”, is produced. Using the letter “A” as an example, its image is filmed by a TV camera which scans it line by line, as shown in “Figure (1.15)”. In the early days, this was done by a tube camera in which a light-sensitive layer, onto which the image was projected by optics, was scanned line by line by an electron beam deflected by horizontal and vertical magnetic fields.

Today, CCD (charge coupled device) chips are universally used in cameras, and the principle of the deflected electron beam is now only preserved in TV receivers; even there, the technology is changing to LCD and plasma screens.


Figure (1.16)

Figure (1.17)

The result of scanning the original is the luminance signal where 0 mV corresponds to 100% black and 700 mV is 100% white. The original picture is scanned line by line from top to bottom, resulting in 625 or 525 active lines depending on the TV standard used. However, not all lines are visible. Because of the finite beam fly-back time, a vertical blanking interval of up to 50 lines had to be inserted. In the line itself, too, only a certain part represents visible picture content, the reason being the finite fly-back time from the right-hand to the left-hand edge of the line which results in the horizontal blanking interval.

1.5.2 –Horizontal and Vertical Synchronization Pulses

However, it is also necessary to mark the top edge and the bottom edge of the image in some way, in addition to the left-hand and right-hand edges. This is done by means of the horizontal and vertical synchronization pulses. Both types of pulses were created at the beginning of the television age so as to be easily recognizable and distinguishable by the receiver and are located in the blacker than black region below zero volts. The horizontal sync pulse marks the beginning of a line. The beginning is considered to be the 50% value of the front edge of the sync pulse (nominally -150 mV).

All the timing within a line is referred to this time. By definition, the active line, which has a length of 52 µs, begins 10 µs after the sync pulse front edge. The sync pulse itself is 4.7 µs long and stays at -300 mV during this time.

At the beginning of television, the restricted processing techniques of the time, which were nevertheless quite remarkable, had to suffice. This is also reflected in the nature of the sync pulses. The horizontal sync pulse (H sync), shown in “Figure (1.16)”, was designed as a relatively short pulse (approximately 5 µs), whereas the vertical sync pulse (V sync) has a length of 2.5 lines (approximately 160 µs). In a 625-line system, the length of a line including H sync is 64 µs. The V sync pulse can therefore be easily distinguished from H sync. The V sync pulse, shown in “Figure (1.17)”, is also in the blacker-than-black region below zero volts and marks the beginning of a frame or field respectively.
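The timing figures quoted above are easy to cross-check; this short sketch simply recomputes them from the line and frame counts (all values are taken from the text):

```python
# Cross-check of the line timing quoted in the text for the
# 625-line / 25 frames-per-second system.
lines_per_frame = 625
frames_per_second = 25

lines_per_second = lines_per_frame * frames_per_second   # 15625 lines/s
line_period_us = 1e6 / lines_per_second                  # one line, in microseconds

h_sync_us = 4.7         # width of the horizontal sync pulse
active_start_us = 10.0  # sync front edge to start of active picture
active_line_us = 52.0   # visible part of the line

# The remainder of the line is the horizontal blanking interval.
h_blanking_us = line_period_us - active_line_us

print(line_period_us)  # 64.0
print(h_blanking_us)   # 12.0
```

The 64 µs line period quoted in the text thus follows directly from 625 lines × 25 frames = 15,625 lines per second.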


Figure (1.18)

As already mentioned, a frame, which has a frame rate of 25 Hz (25 frames per second) in a 625-line system, is subdivided into two fields. This makes it possible to cheat the eye, rendering flickering effects largely invisible. One field is made up of the odd-numbered lines and the other of the even-numbered lines. They are transmitted alternately, resulting in a field rate of 50 Hz in a 625-line system. A frame (beginning of the first field) begins when the V sync pulse goes to the -300 mV level for 2.5 lines at the precise beginning of a line. The second field begins when the V sync pulse drops to the -300 mV level for 2.5 lines at the center of line 313.

The first and second fields are transmitted interlaced with one another, thus reducing the flickering effect. Because of the limitations of pulse technology at the beginnings of television, a 2.5-line-long V sync pulse would have caused the line oscillator to lose lock. For this reason, additional pre- and post-equalizing pulses were gated in, and these still contribute to the current appearance of the V sync pulse even though today's signal processing technology renders them unnecessary.

“Figure (1.18)” shows Vertical synchronization pulses with pre- and post-equalizing pulses in the 625 line-system.

1.5.3 –Adding Color Information

At the beginning of the television age, black/white rendition was adequate because the human eye has its highest resolution and sensitivity in the area of brightness differences and the brain receives its most important information from these; there are many more black/white receptors than color receptors in the retina. But just as in the cinema, television managed the transition from black/white to color because its viewers desired it. Today this is called innovation. When color was added in the sixties, knowledge about the anatomy of the human eye was taken into consideration. With only about 1.3 MHz, color (chrominance) was allowed much less resolution, i.e. bandwidth, than brightness (luminance), which is transmitted with about 5 MHz. At the same time, chrominance is embedded compatibly into the luminance signal so that a black/white receiver is undisturbed while a color receiver is able to reproduce both color and black/white correctly. If a receiver falls short of these ideals, so-called cross-luminance and cross-color effects are produced.


Figure (1.19)

In all three systems, PAL, SECAM and NTSC, the Red, Green and Blue color components are first acquired in three separate pickup systems (initially tube cameras, now CCD chips) and then supplied to a matrix, where the luminance signal is formed as a weighted sum of R, G and B, together with the chrominance signal. The chrominance signal consists of two color difference signals: Blue minus luminance and Red minus luminance. The luminance and chrominance signals must be matrixed, i.e. calculated with the appropriate weighting factors according to the eye's sensitivity, using the following formulas:

Y = 0.3 · R + 0.59 · G + 0.11 · B
U = 0.49 · (B - Y)
V = 0.88 · (R - Y)

The luminance signal Y can be used directly for reproduction by a black/white receiver. The two chrominance signals are also transmitted and are used by the color receiver. From Y, U and V it is possible to recover R, G and B. The color information is then available in correspondingly reduced bandwidth, and the luminance information in greater bandwidth (the "paint box principle").
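The matrixing can be sketched directly in Python; the helper function below is a hypothetical illustration using the weighting factors from the formulas above:

```python
def rgb_to_yuv(r, g, b):
    """Matrix an RGB triple (each component 0.0..1.0) into the luminance
    signal Y and the colour difference signals U and V, using the
    weighting factors given in the text. (Hypothetical helper.)"""
    y = 0.3 * r + 0.59 * g + 0.11 * b
    u = 0.49 * (b - y)
    v = 0.88 * (r - y)
    return y, u, v

# For white (R = G = B), the colour difference signals vanish (to
# rounding error), which is exactly what compatibility requires:
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

Note that the weights sum to 1, so a grey-scale input produces Y equal to that grey level and zero chrominance, which is why a black/white receiver can simply display Y.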

To embed the color information into a CVBS (composite video, blanking and sync) signal intended initially for black/white receivers, a method had to be found which has the fewest possible adverse effects on a black/white receiver, i.e. keeps it free of color information, and at the same time contains all that is necessary for a color receiver.

Two basic methods were chosen: embedding the information either by analog amplitude/phase modulation (IQ modulation), as in PAL and NTSC, or by frequency modulation, as in SECAM. In PAL and NTSC, the color difference signals are supplied to an IQ modulator with a reduced bandwidth compared to the luminance signal, as shown in “Figure (1.19)”. The IQ modulator generates the chrominance signal as an amplitude/phase modulated color subcarrier, the amplitude of which carries the color saturation and the phase of which carries the hue. An oscilloscope would therefore only show whether there is color, and how much, but would not identify the hue; this would require a vectorscope, which supplies information on both. In PAL and in NTSC, the color information is modulated onto a color subcarrier which lies within the frequency band of the luminance signal but is spectrally intermeshed with the latter in such a way that it is not visible in the luminance channel. This is achieved by the appropriate choice of the color subcarrier frequency.

Equation (1.1)


Figure (1.20)

In SECAM, the frequency-modulated color difference signals are modulated alternately, from line to line, onto two different color subcarriers. The SECAM process is currently only used in France, in French-speaking countries in North Africa, and in Greece. Countries of the former Eastern Bloc changed from SECAM to PAL in the nineties. Compared with NTSC, PAL has a great advantage due to its insensitivity to phase distortion, because its phase changes from line to line; the color therefore cannot be changed by phase distortion on the transmission path. NTSC is used in analog television mainly in North America, where it is sometimes ridiculed as “Never Twice the Same Color” because of the color distortions.

The composite PAL, NTSC or SECAM video signal is generated by mixing the black/white signal, the sync information and the chrominance signal, and is now called a CCVS (Composite Color, Video and Sync) signal. “Figure (1.20)” shows the CCVS signal of a color bar signal. The color burst can be seen clearly; it is used for conveying the reference phase of the color subcarrier to the receiver so that its color oscillator can lock to it.

1.6 – Digital Television (DTV)

1.6.1 –What is Digital Television?

Digital television (DTV) is a new television service representing the most significant development in television technology since the advent of color television. DTV can provide movie-theater quality pictures and sound, a wider screen, better color rendition, multiple video programming or a single program of high definition television (HDTV), and other new services currently being developed. DTV can be HDTV, or the simultaneous transmission of multiple programs of standard definition television (SDTV), which is a lesser quality picture than HDTV but significantly better than today's television.

The rationale often cited for the digital transition is that, aside from offering superior broadcast quality to consumers, DTV will allow over-the-air broadcasters to offer the same kinds of digitally-based services (such as pay-per-view) currently offered by cable and satellite television providers. Additionally, it is argued that digital television uses the radio-frequency spectrum more efficiently than traditional analog television, thereby conserving a scarce resource (bandwidth) that can be used for other wireless applications.


Equation (1.2)

There are three major components of DTV service that must be present in order for consumers to enjoy a fully realized “high definition” television viewing experience. First, digital programming must be available. Digital programming is content produced with digital cameras and other digital production equipment. Such equipment is distinct from what is currently used to produce conventional analog programming. Second, digital programming must be delivered to the consumer via a digital signal. Digital signals can be broadcast over the airwaves (requiring new transmission towers or DTV antennas on existing towers), transmitted by cable or satellite television technology, or delivered by a prerecorded source such as a digital video disc (DVD). And third, consumers must have a digital television product capable of receiving the digital signal and displaying digital programming on their television screens.

1.6.2 –Shannon’s Information Theorem

There are certain natural laws that put limitations on how information may be transferred. Shannon's information theorem is such a natural law: it says that all transfer of information is limited by two factors, the received amount of signal power and the bandwidth of the channel used for the transmission, and can be closely represented by “Equation (1.2)”.

C = B · log2(1 + S/N)

C: Channel capacity (the maximum error-free information rate, in bit/s).

B: Bandwidth of the signal.

S/N: Signal to Noise Ratio.

This is a phenomenon that should be regarded as a law of nature and is therefore impossible to circumvent. The amount of received power is decided by the strength of the transmitter and the efficiency of the antenna used for reception.

The bandwidth is the amount of frequency space that is occupied by the transmitted signal. Actual radio signals are always limited in bandwidth as well as in signal power. This applies to radio signals distributed by satellites, terrestrial transmitters and cable TV networks. If we want to increase the amount of information in the signal, we have to increase the transmission power, the bandwidth, or both; the alternative is to decrease the quality of the signal, something we probably wish to avoid.
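As a sketch of what the theorem implies, the capacity formula can be evaluated numerically. The 30 MHz transponder bandwidth is the figure mentioned later in this section; the 10 dB signal-to-noise ratio is an assumed, purely illustrative value:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon's limit: C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A satellite transponder bandwidth of about 30 MHz is mentioned in
# this section; the 10 dB signal-to-noise ratio is an assumed figure.
b_hz = 30e6
snr = 10 ** (10.0 / 10)            # 10 dB expressed as a linear factor

c = shannon_capacity(b_hz, snr)
print(f"{c / 1e6:.1f} Mbit/s")     # roughly 103.8 Mbit/s
```

Doubling the bandwidth doubles the capacity, while doubling the signal power only adds one more `log2` step, which is why bandwidth is the more precious resource.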

When using satellites for distribution, there are several transmitters (called transponders) in each satellite. A transponder is limited in bandwidth and output power. A traditional satellite transponder has a bandwidth of about 30 MHz and may be used to transmit one TV channel. To transmit more than one TV channel through such a transponder, we must find a way to decrease the amount of information that is required for each TV channel.

The principles for the reduction of unnecessary information have long been known: the unnecessary parts containing repeated information have to be removed and a reduced, compressed version of the signal has to be created. In the receiver, the original signal has to be re-created from the compressed signal.
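As a toy illustration of this principle (and emphatically not the actual compression scheme used in digital television, which is far more sophisticated), a run-length encoder removes one simple kind of repeated information: runs of identical sample values. The function names and data here are invented for this sketch:

```python
def rle_encode(samples):
    """Collapse runs of identical values into (value, run length) pairs."""
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1][1] += 1
        else:
            encoded.append([s, 1])
    return encoded

def rle_decode(encoded):
    """Recreate the original sample sequence from the compressed form."""
    out = []
    for value, count in encoded:
        out.extend([value] * count)
    return out

# A line of mostly uniform picture content compresses well:
line = [255] * 20 + [0] * 5 + [255] * 20
packed = rle_encode(line)
assert rle_decode(packed) == line          # lossless round trip
print(len(line), "samples ->", len(packed), "pairs")   # 45 samples -> 3 pairs
```

The decoder recreating the original from the compressed form corresponds to the receiver-side step described above.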

1.6.3 –Digitizing a Video Signal

1.6.3.1 – Why?

In order to let a computer handle a TV signal, we first have to digitize it. Once we have a digital signal, it is quite simple to manipulate it as we wish. Digitizing the signal means that the audio and video signals are represented by a series of digits, rather than by any physical medium.


Figure (1.21)

The digits are then sent to a receiver which has the ability to recreate the physical analog audio and video signals from this information. A digital signal contains a representation of the signal rather than the actual signal itself.

1.6.3.2 – How?

The values of the brightness or colour of picture elements along a television line can be represented by a series of numbers, and each value can be transformed into a sequence of electrical pulses.

The operation which converts from the "analogue" world to the "digital" world comprises two stages, as shown in “Figure (1.21)”:

Sampling, in which the signal value is measured at regular intervals.

Quantification, in which each measurement is converted into a binary number.

These operations are carried out by an analogue-to-digital converter. The series of "1"s and "0"s obtained after quantification can be modified (i.e. coded) to counteract more effectively the disturbances the signal will meet during transmission. Digital television technology is an extension of computer and image processing technology. Its advantages are easy storage and great scope for image processing.
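The two stages can be sketched in a few lines of Python; the function, its parameters and the test tone below are hypothetical illustrations, not part of any standard:

```python
import math

def sample_and_quantize(signal, sample_rate_hz, duration_s, bits):
    """Sample an 'analog' signal (a function of time returning 0.0..1.0)
    at regular intervals, then quantize each measurement to one of
    2**bits binary levels. Hypothetical illustration only."""
    levels = 2 ** bits
    n = int(sample_rate_hz * duration_s)
    codes = []
    for i in range(n):
        t = i / sample_rate_hz                     # sampling at regular intervals
        value = min(max(signal(t), 0.0), 1.0)      # clip to the nominal range
        codes.append(round(value * (levels - 1)))  # quantification step
    return codes

# One millisecond of a 1 kHz test tone, sampled at 8 kHz with 8 bits.
tone = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * 1000 * t)
print(sample_and_quantize(tone, 8000, 0.001, 8))  # eight codes, each 0..255
```

Each integer code word can then be written out as a sequence of "1"s and "0"s, which is the form the channel coding operates on.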

Each picture element is isolated and can be called up independently for varied and complex processing. Since the signal has only two possible values (0 or 1), detection is based on the presence or absence of the signal.


Equation (1.3)

Figure (1.22)

1.6.4 –Digital Video Signal

Uncompressed digital video signals have been used for some time in television studios, based on the original CCIR Standard CCIR 601, designated as ITU-R BT.601 today; this data signal is obtained as follows:

To start with, the video camera supplies the analog Red, Green and Blue (R, G, B) signals.

These signals are matrixed in the camera to form luminance (Y: Black and White content) and chrominance (color difference CB and CR) signals.

These signals are produced by simple addition or subtraction, as shown below in “Equation (1.3)”:

Y = 0.299 · R + 0.587 · G + 0.114 · B
CB = 0.564 · (B - Y)
CR = 0.713 · (R - Y)

R: Red Component. G: Green Component. B: Blue Component.

The luminance bandwidth is then limited to 5.75 MHz using a low-pass filter. The two color difference signals

are limited to 2.75 MHz, i.e. the color resolution is clearly reduced compared with the brightness resolution. In analog television (NTSC, PAL, SECAM), too, the color resolution is reduced to about 1.3 MHz. The low-pass filtered Y, CB and CR signals are then sampled and digitized by means of analog/digital converters.

Most A/D converters use seven- or eight-bit resolution. For reference, eight bits correspond to 256 levels, and ordinary bitmap image software provides an excellent gray-scale picture if the pixels are described by 8 bits. Audio requires a much better dynamic range than can be achieved with 8 bits and 256 levels; the audio of a CD record is stored in 16-bit samples, corresponding to 65,536 levels. The A/D converter in the luminance branch operates at a sampling frequency of 13.5 MHz, and the two CB and CR color difference signals are sampled at 6.75 MHz each, as shown in "Figure (1.22)".
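As an illustration, the matrixing step can be sketched with the weighting factors from ITU-R BT.601 (the coefficients below come from that standard; they are an assumption in the sense that the text above does not quote them explicitly):

```python
def rgb_to_ycbcr(r, g, b):
    """R, G, B normalized to 0..1; returns (Y, CB, CR) per ITU-R BT.601."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (black-and-white content)
    cb = 0.564 * (b - y)                    # blue color difference
    cr = 0.713 * (r - y)                    # red color difference
    return y, cb, cr

# Pure white carries full luminance and no color difference:
y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)
print(round(y, 6), round(cb, 6), round(cr, 6))  # 1.0 0.0 0.0
```

The luminance weights sum to one, which is why a neutral gray or white produces zero colour difference.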


The three A/D converters (for CB, CR, and Y) can each have a resolution of 8 or 10 bits. With a resolution of 10 bits, this results in a gross data rate of 10 x (13.5 + 6.75 + 6.75) million bits per second, i.e. 270 Mbit/s, which is suitable for distribution in the studio but much too high for TV transmission via existing channels (terrestrial, satellite or cable). A satellite transponder (that previously could be used to distribute one TV channel) can only carry between 38 and 44 Mbit/s if QPSK modulation is used. The same problem exists in a cable TV channel, and the lack of bandwidth is even worse when it comes to terrestrial television, where 22 Mbit/s in a conventional transmission channel is quite common.
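The gross-rate arithmetic can be checked in a few lines:

```python
bits = 10
f_y, f_cb, f_cr = 13.5e6, 6.75e6, 6.75e6   # BT.601 sampling frequencies in Hz
gross = bits * (f_y + f_cb + f_cr)          # gross data rate in bit/s
print(gross / 1e6)  # 270.0 (Mbit/s)
```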

The conclusion is that an uncompressed digital TV channel requires much more bandwidth than an analog TV channel. Therefore we must find a way to compress the digital TV signal so that it requires less bandwidth; otherwise the signal becomes more or less unusable once it leaves the building of the TV station.

1.6.5 –Compressing Digital Signal

In order to be able to handle the digital video signal, we have to go further in trying to minimize the amount of information in the signal. To solve this, a number of experts got the task to find new and smart methods to eliminate unnecessary information in digital images and digital video sequences. For this purpose, the expert groups JPEG and MPEG were formed.

1.6.5.1 – Compressing Still Images

The Joint Photographic Experts Group (JPEG) got the task to develop a standard to compress digital still images. This was very important in order to be able to store images in computers as well as transferring pictures in between computers in an efficient way. An uncompressed picture with a certain number of pixels will always require the same number of bits and bytes independent of the content of the picture. Compressing a picture means that the picture data file is recalculated using a certain algorithm (calculation rule) that takes several things into account.

1.6.5.2 – Compressing Non-Still Images (Movies)

The JPEG format is developed for compression of digital stills. When this task was done, another group of experts, the Moving Pictures Experts Group (MPEG), got a similar task to do the corresponding work for standardizing compression formats for moving pictures. The aim was to make it easier to handle movie clips in computers and to transfer these files between computers in a less bit rate-consuming way.

The first part of the standard was ready in the beginning of the 1990s and is called MPEG-1. The compression algorithm is optimized for video files with a small bandwidth below approximately 2 Mbit/s, and the compression is based on the principle of analyzing each picture in the video signal and finding the differences between successive pictures.

MPEG-1 has been very popular for distributing video clips on the Internet but the standard cannot

achieve the performance that is required to replace analog television.

The next step was to establish a standard that could be used for standard television, thus completing the dream of distributing digital TV via satellite, cable and terrestrial transmitters and even storing video on DVD records. The second standard, MPEG-2, is optimized for higher bitrates of 2 Mbit/s and up.

1.6.6 –Encapsulating into Transport Stream Packets

We have now achieved a compressed video signal at a bit rate of about 4 Mbit/s.


Figure (1.23)

Figure (1.24)

However, we also need to transmit audio and possibly tele-text signals. The audio is compressed in an audio compression format called "Musicam". The audio encoding can be chosen at different bitrates, just as with the video compression.

A common bitrate for a stereo pair is 256 Kbit/s, but 192 Kbit/s or 128 Kbit/s may also be chosen. A digital TV signal thus consists of at least two or three signals: one video, one audio, and perhaps also a tele-text signal at 200 Kbit/s.

Fortunately, it is quite easy to combine several digital signals into one single signal. This process of combining the signals is called multiplexing; however, in order to multiplex the signals, each signal first has to be divided into packets. By transmitting the packets of the different signals at different rates, it is possible to mix fast, high-bitrate signals (video) with those having a low bitrate (audio and tele-text), as in "Figure (1.23)" below.

The packages into which the signals are divided are called transport stream packets (see "Figure (1.24)"). The length of each packet is 188 bytes, and the first four bytes of each packet are called the header. The first byte in the header contains a synchronization word that is unique and that indicates the start of a new packet.

This byte is followed by the Packet Identification Data (PID): two bytes with the identity of the signal to which the packet belongs. This is the label that makes it possible to separate the signals again (demultiplexing) at the receiving end. The fourth byte contains a counter that indicates the order of the packets belonging to a specific signal; this counter is also a way to determine whether any packet has been lost on the way to the receiver. The remaining 184 bytes contain the useful information, the payload. Together, these packets form the transport stream.
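The packet layout just described can be sketched as follows. This is a simplified, hedged sketch following the byte-level description in the text; the real MPEG-2 TS header packs the sync byte, a 13-bit PID, flags and a 4-bit continuity counter at bit level:

```python
SYNC_BYTE = 0x47  # the MPEG-2 transport stream synchronization word

def make_ts_packet(pid, counter, payload):
    """Build a 188-byte packet: 4-byte header + 184 bytes of payload."""
    assert len(payload) == 184, "payload must be exactly 184 bytes"
    header = bytes([SYNC_BYTE, (pid >> 8) & 0xFF, pid & 0xFF, counter & 0xFF])
    return header + payload

pkt = make_ts_packet(pid=0x0100, counter=5, payload=bytes(184))
print(len(pkt), hex(pkt[0]))  # 188 0x47
```

A demultiplexer simply groups incoming 188-byte packets by their PID field and checks the counter for gaps.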


Splitting the information into packets provides several advantages in addition to the ability to combine signals with different bitrates. The signal also becomes less sensitive to noise and other disturbances: radio disturbances are short spikes of power, often of very short duration, and if one packet is disturbed, the receiver can reject that specific packet and continue to unpack the rest of the packets.

IP traffic across the Internet is two-way communication: if a packet is lost, the sender is notified and can retransmit that specific packet. Broadcast signals, however, are fed through a one-way distribution chain, so no retransmission of lost packets can be done. To compensate for this, we have to use error protection, which means adding extra bits according to clever algorithms.

These extra bits make it possible for the receiver to repair the content of broken packets. Therefore, to secure the signal even further, an additional 16 bytes are added to each individual packet. The information in these bytes is calculated from the whole 188-byte packet using the Reed-Solomon encoding algorithm which, in this configuration, can correct up to eight erroneous bytes per packet; if there are more than eight errors, the complete packet is rejected.
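In DVB this outer protection is the shortened Reed-Solomon code RS(204,188): 16 parity bytes per packet, and a code with 2t parity symbols corrects up to t symbol errors. The figures quoted above follow directly:

```python
n, k = 204, 188        # RS(204,188): code word and message lengths in bytes
parity = n - k         # 16 extra bytes appended to each 188-byte packet
t = parity // 2        # correctable byte errors per packet
print(parity, t)  # 16 8
```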

1.6.7 –System Information

It is not enough to deliver packets containing audio and video. The receiver also must be able to know which packets contain what data. This is solved by including system information in the transmitted signal. The system information is a number of tables that are distributed in separate bit streams. The most important table is the Program Association Table (PAT), and this signal always has the packet address PID=0. The first thing the receiver has to do is to find the PAT and read the content of the table. In the PAT, there are references to which PID addresses contain the second-most important kind of table, the Program Map Table (PMT). Each radio or TV channel that is distributed in the transport stream is called a “service” and each service has its own PMT. In the PMT of each service, the receiver can find the PID for each component of that service. For a TV channel, that would be the PIDs that are associated with the video, the audio and the tele-text information bit streams.
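The two-step lookup described above can be sketched with plain dictionaries. All PID values here are made-up examples, not real allocations:

```python
PAT = {1: 0x0100}   # PAT (always on PID 0): service number -> PMT PID
PMTS = {            # one PMT per service: component name -> component PID
    0x0100: {"video": 0x0101, "audio": 0x0102, "teletext": 0x0103},
}

def pids_for_service(service):
    pmt_pid = PAT[service]   # step 1: the PAT points at the service's PMT
    return PMTS[pmt_pid]     # step 2: the PMT lists the component PIDs

print(pids_for_service(1))  # {'video': 257, 'audio': 258, 'teletext': 259}
```

This mirrors what a receiver does at start-up: read the PAT on PID 0, then fetch the PMT of the chosen service to learn the video, audio and tele-text PIDs.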

1.6.8 –Picture and Sound Quality

Bad analog signals result in noisy pictures and noisy sound. There is a big difference between analog and digital signals when it comes to distortions. Analog terrestrial TV is subject to reflections and white Gaussian noise. Analog frequency modulated signals are subject to spike noise but there are no reflections.

In digital TV, the audio and video quality is always the same, as long as the signal is strong enough to allow for reception.

If reception is too bad, audio and video disappear completely, leaving a black screen and no sound at all.

1.6.9 –Advantages of Digital Transmission

DTV has several advantages over analog TV:

The most significant is that digital channels take up less bandwidth, and the bandwidth needs are continuously variable, with a corresponding reduction in image quality depending on the level of compression as well as the resolution of the transmitted image.

Digital signals react differently to interference than analog signals. For example, common problems with analog television include ghosting of images, noise from weak signals, and many other potential problems which degrade the quality of the image and sound, although the program material may still be watchable.


Digital TV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source.

Digital processing techniques of this kind are already in widespread use for special effects on existing images and lie at the root of computerized image synthesis systems.

1.7 – High Definition Television (HDTV)

High Definition Television (HDTV) as well as Standard Definition Television (SDTV) are both parts of Digital Television technology (DTV). The major differences compared with present-day television technology are:

HDTV uses a higher resolution and therefore offers a better picture quality than SDTV or the old analog.

Wider image format (16:9).

Higher spatial resolution (about 1000 lines).

Larger viewing screens.

High-definition television would offer a quality comparable to that of 35-mm film and would therefore allow films to be shot electronically.

1.7.1 –Historical view over HD development

The ideas about increasing the resolution of the TV pictures have been around for a long time. The intention of HDTV was from the beginning to increase the resolution by a factor of four. In principle, this would give the viewer the possibility to sit at half the distance from the screen and then double the picture size in the field of vision.

By the 1980s, an analog HDTV system was introduced in Japan. This system was based on 1,125 lines, of which

1,035 lines are active in vertical resolution. It used an analog compression system called MUSE to make it possible to distribute the high resolution signal by means of the technology available at the time.

In Europe in the early 1990s, a HDTV standard based on 1,250 lines (including 1,080 active lines) was proposed.

This would facilitate conversion to/from the European 625 line SDTV (Standard Definition Television) system since it was simply a matter of doubling the number of lines.

Although HDTV broadcasts had been demonstrated in Europe since the early 1990s, the first regular broadcasts

started on January 1, 2004 when the Belgian company Euro-1080 launched the HD1 channel with the traditional “Vienna New Year's Concert”. Test transmissions had been active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the official start of the HD1 channel, and the start of HDTV in Europe.

Euro-1080, a division of the Belgian TV services company "Alfacam", broadcast HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought mean no HD broadcasts..." and kick-start HDTV interest in Europe. The HD1 channel was initially free-to-air and mainly comprised sporting, dramatic, musical and other cultural events, broadcast with a multi-lingual soundtrack on a rolling schedule of 4 or 5 hours per day.

These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S signal

from SES Astra's 1H satellite. Euro-1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in Europe.


Figure (1.25)

1.7.2 –Interlaced and Progressive Scanning

Before we dig deeper into the various HDTV systems available today, we need to look closer at the principles of scanning a TV picture.

Conventional TV is based on interlaced scanning: the picture is split up into two fields, each containing every other line of the picture. This concept works well as long as the picture does not contain too much movement. However, as seen in "Figure (1.25)", if the objects in the picture move too much in the horizontal direction between two fields, there will be "weaving" that destroys the sharpness of vertical contours.

In progressive scanning, the lines are drawn right after each other. In addition to providing more

stable vertical contours, this is also much better from a compression point of view.

1.7.3 –HDTV Display Resolutions

Video Format | Screen Resolution            | Pixels (Actual / Advertised) | Aspect Ratio (Image / Pixel) | Description
720p         | 1248 x 702 "Clean Aperture"  | 876,096 / 0.9 Mega           | 16:9 / 1:1                   | Used for 750-line video with faster artifact/overscan compensation.
1080p        | 1888 x 1062 "Clean Aperture" | 2,005,056 / 2.0 Mega         | 16:9 / 1:1                   | Used for 1125-line video with faster artifact/overscan compensation.
1080i        | 1440 x 1080 "HDCAM / HDV"    | 1,555,200 / 1.6 Mega         | 16:9 / 4:3                   | Used for anamorphic 1125-line video in the HDCAM and HDV formats.

1.7.4 –Practical Aspects of Receiving HDTV

The signals from the HDTV channels have been received using a computer-based satellite receiver from the Austrian company Digital-Everywhere; this unit belongs to the first generation of HDTV satellite receivers and can also receive conventional MPEG-2 DVB-S signals.

This is the same kind of satellite signal as is used for standard-definition digital TV channels. However, the bitrate increases by a factor of about four compared to the standard-definition signals, which also requires quite a lot of processing power.


A 3.2 GHz Intel processor will have to work hard to cope with the 16 Mbit/s MPEG-2 bit stream that is provided by the Fire DTV set-top box. This also means that the processing power of an HDTV receiver has to be much larger than that of a conventional standard-definition receiver.

1.8 – Stereoscopic Television (3DTV)

No satisfactory method has yet been found for giving an impression of relief (3-D) in television. One of the main problems is that systems relying on colour separation create an artificial impression. Researchers are today investigating techniques using neutral polarized glasses.

1.8.1 –Historical View

In the late 1890s, a British film pioneer named William Friese-Greene filed a patent for a 3-D movie process in which two images, when viewed stereoscopically, are combined by the brain to produce 3-D depth perception. On June 10, 1915, Edwin S. Porter and William E. Waddell presented tests to an audience at the Astor Theater in New York City. In red-green anaglyph, the audience was presented three reels of tests, which included rural scenes, test shots of Marie Doro, a segment of John Mason playing a number of passages from Jim the Penman (a film released by Famous Players-Lasky that year, but not in 3-D), Oriental dancers, and a reel of footage of Niagara Falls. However, according to Adolph Zukor in his 1953 autobiography The Public Is Never Wrong: My 50 Years in the Motion Picture Industry, nothing was produced in this process after these tests.

The stereoscope was improved by Louis Jules Duboscq, and a famous picture of “Queen Victoria” was displayed

at "The Great Exhibition" in 1851. In 1855 the "Kinematoscope", i.e. the stereo animation camera, was invented. The first anaglyph movie (using red-and-blue glasses, invented by L. D. DuHauron) was produced in 1915, and in 1922 the first public 3D movie was displayed. Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie Baird in his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. In 1935 the first 3D color movie was produced, and by the Second World War, stereoscopic 3D still cameras for personal use were already fairly common.

In the 1950s, when TV became popular in the United States, many 3D movies were produced. The first such

movie was Bwana Devil from United Artists that could be seen all across the US in 1952. One year later, in 1953, came the 3D movie "House of Wax", which also featured stereophonic sound. Alfred Hitchcock produced his film Dial M for Murder in 3D, but for the purpose of maximizing profits the movie was released in 2D because not all cinemas were able to display 3D films.

Subsequently, television stations started airing 3D serials in 2009 based on the same technology as 3D movies.

1.8.2 –Used Technologies

3D-TVs with lenses:

Anaglyphic 3D (with passive red-cyan lenses).

Polarization 3D (with passive polarized lenses).

Alternate-frame sequencing (with active shutter lenses).

Head-mounted display (with a separate display positioned in front of each eye, and lenses used primarily to relax eye focus).

3D-TVs without lenses:

Auto-stereoscopic Displays (commercially called Auto 3D displays).


Figure (1.26)

1.8.2.1 – Anaglyphic 3D

Anaglyph images are used to provide a stereoscopic 3D effect, when viewed with glasses where the two lenses are different (usually chromatically opposite) colors, such as red and cyan. Images are made up of two color layers, superimposed, but offset with respect to each other to produce a depth effect. Usually the main subject is in the center, while the foreground and background are shifted laterally in opposite directions. The picture contains two differently filtered colored images, one for each eye. When viewed through the "color coded" "anaglyph glasses", they reveal an integrated stereoscopic image. The visual cortex of the brain fuses this into perception of a three dimensional scene or composition.
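Per pixel, the composition described above amounts to taking the red channel from the left-eye image and the cyan (green and blue) channels from the right-eye image. This is a toy illustration, not a production colour pipeline:

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one left-eye and one right-eye pixel (0..255 tuples) into
    a red/cyan anaglyph pixel: red from the left, cyan from the right."""
    r_left, _, _ = left_rgb
    _, g_right, b_right = right_rgb
    return (r_left, g_right, b_right)

print(anaglyph_pixel((200, 10, 10), (10, 150, 160)))  # (200, 150, 160)
```

Viewed through the glasses, the red filter passes only the left-eye content and the cyan filter only the right-eye content, so each eye sees its intended image.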

Anaglyph images have seen a recent resurgence due to the presentation of images and video on

the Internet, Blu-ray HD discs, CDs, and even in print. Low cost paper frames or plastic-framed glasses hold accurate color filters that typically, after 2002, make use of all 3 primary colors. The current norm is red and cyan, with red being used for the left channel.

The cheaper filter material used in the monochromatic past dictated red and blue for convenience and cost; with the cyan filter there is a material improvement in full-color images, especially for accurate skin tones.

“Figure (1.26)” shows an anaglyphic monochrome 3D image.


Figure (1.27)

1.8.2.2 – Polarization 3D

Polarization is a property of certain types of waves that describes the orientation of their oscillations. Electromagnetic waves, such as light, and gravitational waves exhibit polarization; acoustic waves (sound waves) in a gas or liquid do not have polarization because the direction of vibration and direction of propagation are the same.

By convention, the polarization of light is described by specifying the orientation of the wave's electric

field at a point in space over one period of the oscillation. When light travels in free space, in most cases it propagates as a transverse wave—the polarization is perpendicular to the wave's direction of travel. In this case, the electric field may be oriented in a single direction (linear polarization), or it may rotate as the wave travels (circular or elliptical polarization as shown in “Figure (1.27)”).

In the latter cases, the oscillations can rotate either towards the right or towards the left in the direction

of travel. Depending on which rotation is present in a given wave it is called the wave's chirality or handedness. In general the polarization of an electromagnetic (EM) wave is a complex issue. For instance in a waveguide such as an optical fiber, or for radially polarized beams in free space, the description of the wave's polarization is more complicated, as the fields can have longitudinal as well as transverse components. Such EM waves are either TM or hybrid modes.

For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the

direction of travel, so there is no polarization. In a solid medium, however, sound waves can be transverse. In this case, the polarization is associated with the direction of the shear stress in the plane perpendicular to the propagation direction. This is important in seismology.

Polarization is significant in areas of science and technology dealing with wave propagation, such

as optics, seismology, telecommunications and radar science. The polarization of light can be measured with a polarimeter.


Figure (1.28)

1.8.2.3 – Alternate Frame Sequencing 3D

Alternate Frame Sequencing is a method of showing 3-D film that is used in some venues. It is also used on PC systems to render 3-D games into true 3-D.

The movie is filmed with two cameras like most other 3-D films. Then the images are placed into a single

strip of film in alternating order. In other words, there is the first left-eye image, then the corresponding right-eye image, then the next left-eye image, followed by the corresponding right-eye image and so on.

The film is then run at 48 frames-per-second instead of the traditional 24 frames-per-second. The

audience wears very specialized LCD shutter glasses that have lenses that can open and close in rapid succession. The glasses also contain special radio receivers. The projection system has a transmitter that tells the glasses which eye to have open. The glasses switch eyes as the different frames come on the screen.
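The interleaving described above is easy to sketch: left- and right-eye frames alternate in a single sequence running at twice the base frame rate (24 frames per second per eye gives 48 frames per second in total):

```python
left_frames = ["L0", "L1", "L2"]    # frames for the left eye
right_frames = ["R0", "R1", "R2"]   # the corresponding right-eye frames

# Interleave: L0, R0, L1, R1, ... shown at 48 fps instead of 24 fps
sequence = [f for pair in zip(left_frames, right_frames) for f in pair]
print(sequence)  # ['L0', 'R0', 'L1', 'R1', 'L2', 'R2']
```

The shutter glasses open the matching eye for each frame in this sequence.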

1.8.2.4 – Head-Mounted Display 3D

A head-mounted display or helmet mounted display, both abbreviated HMD, is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one (monocular HMD) or each eye (binocular HMD).

A typical HMD like the one shown below in “Figure (1.28)” has either one or two small displays with lenses and semi-transparent mirrors embedded in a helmet, eye-glasses (also known as data glasses) or visor. The display units are miniaturized and may include CRT, LCDs, Liquid crystal on silicon (LCos), or OLED. Some vendors employ multiple micro-displays to increase total resolution and field of view.


Chapter 2 – Digital Video Broadcasting

2.1 – History

Digital Video Broadcasting (DVB) is a suite of internationally accepted open standards for digital television. DVB standards are maintained by the DVB Project, an international industry consortium with more than 270 members, and they are published by a Joint Technical Committee (JTC) of the European Telecommunications Standards Institute (ETSI), the European Committee for Electrotechnical Standardization (CENELEC) and the European Broadcasting Union (EBU). The interaction of the DVB sub-standards is described in the DVB Cookbook. Many aspects of DVB are patented, including elements of the MPEG video coding and audio coding.

Until late 1990, digital television broadcasting to the home was thought to be impractical and costly to implement. During 1991, broadcasters and consumer equipment manufacturers discussed how to form a concerted pan-European platform to develop digital terrestrial TV. Towards the end of that year, broadcasters, consumer electronics manufacturers and regulatory bodies came together to discuss the formation of a group that would oversee the development of digital television in Europe.

The DVB Project began the first phase of its work in 1993. The project’s philosophy was as follows:

The initial task was to develop a complete suite of digital satellite, cable, and terrestrial broadcasting technologies in one ‘pre-standardisation’ body.

Rather than having a one-to-one correspondence between a delivery channel and a programme channel, the systems would be ‘containers’ which carry any combination of image, audio, or multimedia. They would thus be open and ready for SDTV, EDTV, HDTV, surround sound, or any kind of new media which arose over time.

The work should result in ETSI standards for the physical layers, error correction, and transport for each delivery medium.

Wherever possible there should be commonality across the different delivery platforms, to lower costs for users and manufacturers. Only when there was no other choice would there be differences between different delivery media.

The DVB Project should not re-invent anything, and would use existing open standards whenever they are available.

At the beginning of the 1990s, change was coming to the European satellite broadcasting industry, and it was becoming clear that the once state-of-the-art MAC systems would have to give way to all-digital technology. It became clear that satellite and cable would deliver the first broadcast digital television services. Fewer technical problems and a simpler regulatory climate meant that they could develop more rapidly than terrestrial systems. Market priorities meant that digital satellite and cable broadcasting systems would have to be developed rapidly. Terrestrial broadcasting would follow.

The DVB-S system for digital satellite broadcasting was developed in 1993. It is a relatively straightforward

system using QPSK. The specification described different tools for channel coding and error protection which were later used for other delivery media systems.

A higher efficiency digital satellite broadcasting system DVB-S2 has recently been developed. It has both DVB-S

backwards-compatible and non-backwards-compatible versions. The non-compatible version allows about 30% more data capacity for the same receiving dish size compared to DVB-S. It uses 8-PSK and LDPC to achieve the efficiency increase. DVB-S2 is likely to be used for all future new European digital satellite multiplexes, and satellite receivers will be equipped to decode both DVB-S and DVB-S2.


The DVB-C system for digital cable networks was developed in 1994. It is centered on the use of 64 QAM, and for the European satellite and cable environment can, if needed, convey a complete satellite channel multiplex on a cable channel. The DVB-CS specification described a version which can be used for satellite master antenna television installations.

A more flexible and robust digital terrestrial system, DVB-H, has also recently been developed. The system is

intended to be receivable on handheld receivers and thus includes features which will reduce battery consumption (time slicing) and a 4K OFDM mode, together with other measures. DVB-H services will probably also use more efficient video compression systems such as MPEG4 AVC or SMPTE VC1.

The digital terrestrial television system DVB-T was more complex because it was intended to cope with a

different noise and bandwidth environment, and with multi-path propagation. The system has several dimensions of receiver 'agility', where the receiver is required to adapt its decoding according to signaling. The key element is the use of OFDM. There are two modes: 2K carriers with QAM, and 8K carriers with QAM. The 8K mode allows more multi-path protection, but the 2K mode can offer Doppler advantages where the receiver is moving.
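The two OFDM modes can be compared numerically. For an 8 MHz channel the DVB-T elementary period is T = 7/64 microseconds, and the carrier spacing is the reciprocal of the useful symbol duration Tu = N x T (a sketch using the standard DVB-T parameters):

```python
T = 7 / 64e6   # elementary period in seconds for an 8 MHz DVB-T channel

spacings = {}
for name, fft_size in [("2K", 2048), ("8K", 8192)]:
    tu = fft_size * T          # useful symbol duration Tu = N * T
    spacings[name] = 1 / tu    # carrier spacing in Hz

print(round(spacings["2K"]), round(spacings["8K"]))  # 4464 1116
```

The four-times-wider carrier spacing of the 2K mode is what makes it more tolerant of the Doppler shifts seen by a moving receiver, while the longer 8K symbol tolerates longer multi-path echoes.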

2.2 – Digital Video Broadcasting Standards

More than one standard has been developed to cover a large number of applications, each of which has its own needs and requirements. In this section we will quickly highlight the features of each of these standards.

2.2.1 –Digital Video Broadcasting – Satellite (DVB-S)

The DVB-S system is based on QPSK modulation and convolutional forward error correction (FEC), concatenated with Reed-Solomon coding. Since its introduction, DVB-S links have also been proposed for professional point-to-point transmission of television programs, conveying directly to the broadcaster's premises audio/video material originated in the studios (TV contribution) and/or from remote locations by outside broadcasting vans or portable uplink terminals [digital satellite news gathering (DSNG)], without requiring local access to the fixed telecom network. In 1998, DVB produced its second standard for satellite applications, DVB-DSNG, which extends the functionalities of DVB-S to include higher-order modulations (8PSK and 16QAM) for DSNG and other TV contribution applications by satellite.

In the last decade, studies in the field of digital communications and, in particular, of error-correcting techniques suitable for recursive decoding have brought new impulse to technology innovation. The results of this evolutionary trend, together with the increase in operators' and consumers' demand for larger capacity and innovative services by satellite, led DVB to define in 2003 the second-generation system for satellite broadband services, DVB-S2, now recognized as an ITU-R and European Telecommunications Standards Institute (ETSI) standard. This system has been designed for different types of applications:

Broadcasting of standard definition and high-definition TV (SDTV and HDTV);

Interactive Services, including Internet access, for consumer applications (for integrated receivers–decoders (IRDs) and personal computers).

Professional applications, such as digital TV contribution and news gathering.

Data content distribution and Internet trunking.

To be able to cover all the application areas while still keeping the single-chip decoder at reasonable complexity levels, DVB-S2 is structured as a toolkit, thus also enabling the use of mass-market products for professional or niche applications.

Page 46: DVB-T2 graduation project book 2011

Digital Video Broadcasting 2nd Generation Terrestrial Simulation

2011

31 | P a g e

Figure (2.1)

The DVB-S2 standard has been specified around three key concepts:

1. Best transmission performance.
2. Total flexibility.
3. Reasonable receiver complexity.

To achieve the best performance–complexity trade-off, quantifiable in about a 30% capacity gain over DVB-S, DVB-S2 benefits from recent developments in channel coding and modulation. For interactive point-to-point applications such as IP unicasting, the adaptive coding and modulation (ACM) functionality allows the transmission parameters to be optimized for each individual user on a frame-by-frame basis, depending on path conditions, under closed-loop control via a return channel (connecting the IRD/PC to the DVB-S2 uplink station via terrestrial or satellite links and signalling the IRD/PC reception conditions). This further increases the spectrum utilization efficiency of DVB-S2 over DVB-S, allows the space-segment design to be optimized, and thus makes possible a drastic reduction in the cost of satellite-based IP services.

DVB-S2 is flexible enough to cope with the characteristics of any existing satellite transponder, with a large variety of spectrum efficiencies and associated SNR requirements, and it is designed to handle a variety of advanced audio–video formats. DVB-S2 accommodates any input stream format, including single or multiple MPEG transport streams (TSs) (characterized by 188-byte packets), IP and ATM packets, and continuous bit-streams.

The DVB-S2 transmission system is structured as a sequence of functional blocks, schematically represented in “Figure (2.1)”.

Signal generation is based on two levels of framing structures:

BBFRAME at base-band (BB) level, carrying a variety of signalling bits, to configure the receiver flexibly according to the application scenario.

PLFRAME at physical layer (PL) level, carrying a few highly protected signalling bits, to provide robust synchronization and signalling at the physical layer.


Equation (2.1): gross data rate = 6.9 MS/s × 6 bit/symbol = 41.4 Mbit/s

Equation (2.2): net data rate = 41.4 Mbit/s × 188/204 ≈ 38.15 Mbit/s

Equation (2.3): net data rate = symbol rate × bits per symbol × 188/204

2.2.2 – Digital Video Broadcasting – Cable (DVB-C)

We will see that in the DVB-C modulator, the MPEG-2 transport stream passes through almost the same conditioning stages as in the DVB-S satellite standard. Only the last stage, convolutional coding, is missing here: it is simply not needed, because the propagation medium is so much more robust. This is followed by 16, 32, 64, 128 or 256QAM quadrature amplitude modulation. In coaxial cable systems, 64QAM is used virtually always, whereas optical-fibre networks frequently use 256QAM.

Consider a conventional coaxial system with a channel spacing of 8 MHz. It normally uses a 64QAM-modulated carrier with a symbol rate of, for example, 6.9 MS/s; the symbol rate must be lower than the system bandwidth, 8 MHz in the present case. The modulated signal is rolled off smoothly towards the channel edges with a roll-off factor of r = 0.15. Given 6.9 MS/s and 64QAM (6 bits/symbol), the gross data rate is obtained as shown in “Equation (2.1)”.

In DVB-C, only Reed–Solomon error protection is used, the same as in DVB-S. Thus, an MPEG-2 transport stream packet of 188 bytes is provided with 16 bytes of error protection, resulting in a total packet length of 204 bytes during transmission. The resulting net data rate is shown in “Equation (2.2)”.

Thus, a 36-MHz-wide satellite channel with a symbol rate of 27.5 MS/s and a code rate of 3/4 has the same net data rate, i.e. the same transport capacity, as this DVB-C channel with a width of only 8 MHz. “Equation (2.3)” gives the general relation for DVB-C.
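The arithmetic behind Equations (2.1)–(2.3) can be sketched in a few lines; the helper below is purely illustrative and only restates the figures quoted in the text:

```python
# Sketch: DVB-C data-rate arithmetic from this section.
from math import log2

def dvbc_rates(symbol_rate_msps, qam_order):
    """Return (gross, net) data rates in Mbit/s for a DVB-C channel.

    The net rate accounts only for the Reed-Solomon (204,188) overhead,
    since DVB-C has no inner convolutional code.
    """
    bits_per_symbol = log2(qam_order)           # 64QAM -> 6 bits/symbol
    gross = symbol_rate_msps * bits_per_symbol  # Equation (2.1)
    net = gross * 188 / 204                     # Equations (2.2)/(2.3)
    return gross, net

gross, net = dvbc_rates(6.9, 64)
print(round(gross, 2), round(net, 2))  # 41.4 and about 38.15 Mbit/s
```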

The DVB-C channel, however, also has a much better signal-to-noise ratio (S/N), about >30 dB, compared with about 10 dB in the case of DVB-S. The constellations provided in the DVB-C standard are 16QAM, 32QAM, 64QAM, 128QAM and 256QAM, and the spectrum is roll-off filtered with a roll-off factor of r = 0.15. The transmission method specified in DVB-C is also known as the international standard ITU-T J83A. There is also the parallel standard ITU-T J83B used in North America, which will be described later, and ITU-T J83C, which is used in 6-MHz-wide channels in Japan. In principle, J83C has the same structure as DVB-C but uses a different roll-off factor for 128QAM (r = 0.18) and for 256QAM (r = 0.13); everything else is identical. ITU-T J83B, the method found in the US and in Canada, has a completely different FEC.

2.2.2.1 – DVB-C Transmitter

The DVB-C modulator does not need to be described in as much detail, since most of its stages are identical to those of the DVB-S modulator; it is shown below in “Figure (2.2)”. The modulator locks to the MPEG-2 transport stream fed to it at the baseband interface, which consists of 188-byte-long transport stream packets. Each TS packet consists of a 4-byte header, beginning with the sync byte (0x47), followed by 184 bytes of payload.


Figure (2.2)

Following this, every 8th sync byte is inverted to 0xB8 to carry long-term time markers in the data stream, which the receiver uses for energy dispersal and its cancellation. This is followed by the energy dispersal stage (or randomizer) proper, and then the Reed–Solomon coder, which adds 16 bytes of error protection to each 188-byte-long TS packet. The packets, now 204 bytes long, are then supplied to the interleaver to make the data stream more resistant to error bursts. The interleaving is cancelled in the DVB-C demodulator, breaking up error bursts and making it easier for the Reed–Solomon block decoder to correct errors.
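The 0x47 → 0xB8 relationship is simply a bitwise inversion of the sync byte, which can be checked directly (plain Python, nothing DVB-specific assumed):

```python
SYNC = 0x47  # MPEG-2 TS sync byte

# Energy-dispersal framing: every 8th sync byte is bit-inverted.
inverted = ~SYNC & 0xFF  # one's complement, truncated to one byte
print(hex(inverted))     # 0xb8
```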

The error-protected data stream is then fed into the mapper, where, in contrast to DVB-S and DVB-T, the QAM quadrant must be differentially coded. This is because the carrier can only be recovered in multiples of 90° in the 64QAM demodulator, so the DVB-C receiver can lock to any of these carrier phases.

The mapper is followed by the quadrature amplitude modulation, which is now done digitally. Usually, 64QAM is selected for coaxial links and 256QAM for fibre-optic links. The signal is roll-off filtered with a roll-off factor of r = 0.15; this gradual roll-off towards the band edges optimizes the eye opening of the modulated signal. After power amplification, the signal is injected into the broadband cable system.

2.2.2.2 – DVB-C Receiver

The DVB-C receiver, whether set-top box or integrated, receives the DVB-C channel in the 50–860 MHz band. The transmission link adds impairments such as noise, reflections, and amplitude and group-delay distortion.

The first module of the DVB-C receiver is the cable tuner, which is essentially identical to a tuner for analog television. The tuner converts the 8-MHz-wide DVB-C channel down to an IF with a band center at about 36 MHz; this also corresponds to the band center of an analog TV IF channel according to ITU standard B/G (Europe). Adjacent-channel components are suppressed by a downstream SAW filter with a bandwidth of exactly 8 MHz; where 7 or 6 MHz channels are used, the filter must be replaced accordingly. This band-pass filtering to 8, 7 or 6 MHz is followed by further down-conversion to a lower intermediate frequency in order to simplify the subsequent analog/digital conversion.


Figure (2.3)

Before the A/D conversion, however, all frequency components above half the sampling rate must be removed by a low-pass filter. The signal is then sampled at about 20 MHz with a resolution of 10 bits. The digitized IF is supplied to an IQ demodulator and then to a digitally implemented square-root raised-cosine matched filter. In parallel, the carrier and the clock are recovered. The recovered carrier, with an uncertainty of multiples of 90°, is fed into the carrier input of the IQ demodulator.

This is followed by a channel equalizer, partly combined with the matched filter: a complex FIR filter which attempts to correct the channel distortion caused by amplitude-response and group-delay errors. This equalizer operates according to the maximum-likelihood principle, i.e. it tries to optimize the signal quality by “tweaking” digital “setscrews”, namely the taps of the digital filter. The signal, thus optimized, passes into the demapper, where the data stream is recovered.

This data stream will still contain bit errors, so the error protection must now be exploited. First, the interleaving is cancelled and error bursts are turned into single errors. The following Reed–Solomon decoder can then eliminate up to 8 errored bytes per 204-byte RS packet. The result is again transport stream packets with a length of 188 bytes, which, however, are still energy-dispersed. If there are more than 8 errors in a packet, they can no longer be repaired, and the transport error indicator in the TS header is then set to ‘one’. After the RS decoder, the energy dispersal and the inversion of every 8th sync byte are cancelled, and the MPEG-2 transport stream is again present at the physical baseband interface. In practice, all modules from the A/D converter to the transport stream output are implemented in one chip.
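The figure of 8 correctable bytes follows directly from the Reed–Solomon code parameters; a generic check (not DVB-specific code):

```python
def rs_correctable_bytes(n, k):
    """A Reed-Solomon (n, k) code corrects up to t = (n - k) // 2 symbol errors."""
    return (n - k) // 2

# DVB-C/DVB-S outer code: RS(204, 188)
print(rs_correctable_bytes(204, 188))  # 8
```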

The essential components of a DVB-C set-top box are the tuner, some discrete components, the DVB-C demodulator chip and the MPEG-2 decoder chip, all controlled by a microprocessor.

2.2.3 – Digital Video Broadcasting – Handheld (DVB-H)

The DVB-H system is based on the existing DVB-T standard for fixed and in-car reception of digital TV. The main additional elements, located in the link layer (i.e., the layer above the physical layer), are time slicing and additional forward error correction (FEC) coding. Time slicing reduces the average power consumption of the receiver front-end significantly,


Figure (2.4)

by up to about 90–95%, and also enables smooth and seamless frequency handover when the user leaves one service area to enter a new cell. Use of time slicing is mandatory in DVB-H.
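The quoted saving is essentially the fraction of the time-slicing cycle during which the front-end can stay off; a sketch with illustrative burst and cycle durations (the numbers are assumptions for illustration, not taken from the standard):

```python
def timeslice_power_saving(burst_ms, cycle_ms):
    """Fraction of front-end power saved when the receiver is on only
    during its own service burst of each time-sliced cycle."""
    return 1.0 - burst_ms / cycle_ms

# e.g. a 200 ms burst every 4 s -> 95% saving
print(round(timeslice_power_saving(200, 4000), 2))  # 0.95
```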

FEC for multiprotocol-encapsulated data (MPE-FEC) improves the carrier-to-noise (C/N) and Doppler performance in mobile channels and, moreover, improves tolerance to impulse interference. Use of MPE-FEC is optional in DVB-H. It should be emphasized that neither time slicing nor MPE-FEC touches the DVB-T physical layer in any way, as both are implemented in the link layer. This means that existing DVB-T receivers are not disturbed by DVB-H signals: DVB-H is fully backward compatible with DVB-T. It is also important to note that the payload of DVB-H consists of IP datagrams or other network-layer datagrams encapsulated in MPE sections. In view of the restricted data rates suggested for individual DVB-H services and the small displays of typical handheld terminals, the classical audio and video coding schemes used in digital broadcasting do not suit DVB-H well. It is therefore suggested to replace MPEG-2 video with H.264/AVC or another high-efficiency video coding standard. The physical layer has four extensions relative to the existing DVB-T physical layer.

First, the transmitter parameter signalling (TPS) bits have been upgraded with two additional bits to indicate the presence of DVB-H services and the possible use of MPE-FEC, to enhance and speed up service discovery.

Second, a new 4K orthogonal frequency division multiplexing (OFDM) mode is adopted for trading off mobility against single-frequency network (SFN) cell size, allowing single-antenna reception in medium-size SFNs at very high speeds. This gives additional flexibility in network design. The 4K mode is an option for DVB-H complementing the 2K and 8K modes, which are available as well. All the modulation formats (QPSK, 16QAM and 64QAM, in non-hierarchical or hierarchical modes) may be used for DVB-H.

Third, there is a new way of using the symbol interleaver of DVB-T. For the 2K and 4K modes, the operator may select, instead of the native interleaver that interleaves the bits over one OFDM symbol, an in-depth interleaver that interleaves the bits over four or two OFDM symbols, respectively. This brings the basic impulse-noise tolerance of these modes up to the level attainable with the 8K mode and also improves robustness in mobile environments. Finally, the fourth addition to the DVB-T physical layer is a 5-MHz channel bandwidth for use in non-broadcast bands. This is of interest, e.g., in the United States, where a network at about 1.7 GHz runs DVB-H in a 5-MHz channel.

The conceptual structure of DVB-H user equipment is depicted in “Figure (2.4)”. It includes a DVB-H receiver and a DVB-H terminal. The DVB-T demodulator recovers the MPEG-2 transport stream (TS) packets from the received DVB-T RF signal. It offers three transmission modes, 8K, 4K and 2K, with the corresponding signalling.

The time-slicing module controls the receiver so that it decodes the wanted service and shuts off while the bursts of other services are transmitted. It aims to reduce receiver power consumption while also enabling a smooth and seamless frequency handover.


Figure (2.5)

The MPE-FEC module, provided by DVB-H, offers, in addition to the error correction in the physical-layer transmission, a complementary FEC function that allows the receiver to cope with particularly difficult reception situations. An example of using DVB-H for the transmission of IP services is given in “Figure (2.5)”.

In this example, both traditional MPEG-2 services and time-sliced “DVB-H services” are carried over the same multiplex. The handheld terminal decodes/uses the IP services only. Note that the 4K mode and the in-depth interleavers are not available, for compatibility reasons, when the multiplex is shared between services intended for fixed DVB-T receivers and services for DVB-H devices.

2.2.4 – Digital Video Broadcasting – Satellite services to Handheld (DVB-SH)

DVB-SH, Digital Video Broadcasting – Satellite services to Handhelds, is a physical-layer standard for delivering IP-based media content and data to handheld terminals such as mobile phones or PDAs. It is based on a hybrid satellite/terrestrial downlink and, for example, a GPRS uplink. The DVB Project published the DVB-SH standard in February 2007.

The DVB-SH system was designed for frequencies below 3 GHz, supporting the UHF band, L-band or S-band. It complements and improves the existing DVB-H physical-layer standard. Like its sister specification DVB-H, it is based on the DVB IP Datacast (IPDC) delivery, electronic service guide, and service purchase and protection standards.

DVB-SH specifies two operational modes:

SH-A: specifies the use of COFDM modulation on both satellite and terrestrial links with the possibility of running both links in SFN mode.

SH-B: uses Time-Division Multiplexing (TDM) rather than COFDM on the satellite link, and COFDM on the terrestrial link.


Figure (2.6)

2.2.4.1 – DVB-H vs. DVB-SH

DVB-SH incorporates a number of enhancements compared to DVB-H, as shown in “Figure (2.6)”:

More alternative coding rates are available.

The inclusion of support for 1.7 MHz bandwidth and 1K FFT.

FEC using turbo coding algorithms.

Improved time interleaving.

Support for antenna diversity in terminals.

Results from the BMCO forum (Alcatel, April 2008) showed a radio-link improvement of at least 5.5 dB in signal requirements between DVB-H and DVB-SH at UHF frequencies. The improvement in signal requirements translates into better in-building penetration, better in-car coverage and an extension of outdoor coverage. At the time, DVB-SH chipsets were being developed by DiBcom and NXP Semiconductors and were expected to be available in early 2008; initial specifications showed that the chipsets support both UHF and S-band and are compatible with DVB-H.

DiBcom announced a DVB-SH chip with availability in Q3 2008; the chip "has dual RF tuners supporting VHF, UHF, L-Band and S-Band frequencies".

2.2.5 – Digital Video Broadcasting – Terrestrial (DVB-T)

In 1995, the terrestrial standard for the transmission of digital TV programs was defined in ETS 300744 in connection with the DVB-T project.

A DVB-T channel can have a bandwidth of 8, 7 or 6 MHz, and there are two operating modes: the 2K mode and the 8K mode, where 2K stands for a 2048-point IFFT and 8K for an 8192-point IFFT; the number of COFDM subcarriers must be a power of two. In DVB-T, it was decided to use symbols with a length of about 250 μs (2K mode) or about 1 ms (8K mode). Depending on the requirements, one or the other mode can be selected. The 2K mode has a greater subcarrier spacing of about 4 kHz, but its symbol period is much shorter. Compared with the 8K mode with a


subcarrier spacing of about 1 kHz, it is much less susceptible to spreading in the frequency domain caused by Doppler effects due to mobile reception and multiple echoes, but much more susceptible to longer echo delays. In single-frequency networks, for example, the 8K mode will always be selected because of the greater transmitter spacing it allows.

For mobile reception, the 2K mode is better because of its greater subcarrier spacing. The DVB-T standard allows flexible control of the transmission parameters. Apart from the symbol length, which results from the choice of 2K or 8K mode, the guard interval can be adjusted within a range of 1/4 to 1/32 of the symbol length. The type of modulation (QPSK, 16QAM or 64QAM) can also be selected. The error protection (FEC) is designed to be the same as in the DVB-S satellite standard. The DVB-T transmission can be adapted to the respective requirements regarding robustness or net data rate by adjusting the code rate (1/2 ... 7/8).
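The symbol lengths and subcarrier spacings quoted above follow from the FFT size and the sampling clock of an 8-MHz channel (64/7 MHz); the sketch below assumes that clock and gives the exact, unrounded values behind the "about 250 μs / 4 kHz" and "1 ms / 1 kHz" figures:

```python
# Sketch of DVB-T OFDM numerology for 8-MHz channels (clock assumed 64/7 MHz).
F_S = 64e6 / 7  # sampling frequency in Hz

def ofdm_params(fft_size, guard_fraction):
    """Return (useful symbol time in s, subcarrier spacing in Hz, guard time in s)."""
    t_useful = fft_size / F_S           # 2K -> 224 us, 8K -> 896 us
    spacing = 1.0 / t_useful            # 2K -> ~4.46 kHz, 8K -> ~1.12 kHz
    t_guard = guard_fraction * t_useful # e.g. 1/4 of the useful symbol
    return t_useful, spacing, t_guard

tu, df, tg = ofdm_params(8192, 1 / 4)
print(round(tu * 1e6), round(df))  # 896 (us) and 1116 (Hz)
```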

In addition, the DVB-T standard provides for hierarchical coding as an option. In hierarchical coding, the modulator has two transport stream inputs and two independently configurable but identical FECs. The idea is to apply a large amount of error correction to a transport stream with a low data rate and to transmit it with a very robust type of modulation; this transport stream path is called the high-priority (HP) path.

The second transport stream has a higher data rate and is transmitted with less error correction and, e.g., 64QAM modulation; this path is called the low-priority (LP) path. It would then be possible, e.g., to subject the identical program packet to MPEG-2 coding twice, once at the higher data rate and once at the lower data rate, and to transport the two versions in independent transport streams. A higher data rate automatically means better (picture) quality. The stream with the lower data rate, and correspondingly lower picture quality, is fed to the high-priority path, and the one with the higher data rate to the low-priority path. At the receiving end, the high-priority signal is easier to demodulate than the low-priority one. Depending on the reception conditions, the HP path or the LP path is selected at the receiving end. If reception is poor, there will at least still be reception, thanks to the lower data rate and higher compression, even if the picture and sound quality is inferior.

In DVB-T, coherent COFDM modulation is used, i.e. the payload carriers are mapped absolutely and are not differentially coded. However, this requires channel estimation and correction, for which numerous pilot signals are provided in the DVB-T spectrum and used as test signals for the channel estimation.

The complete DVB-T modulator is shown below in “Figure (2.7)”.


Figure (2.7)

2.2.5.1 – DVB-T2

DVB-T2 is an abbreviation for Digital Video Broadcasting – Second Generation Terrestrial. It is the extension of the television standard DVB-T, issued by the DVB consortium and devised for the broadcast transmission of digital terrestrial television.

This system transmits compressed digital audio, video and other data in "physical layer pipes" (PLPs), using OFDM modulation with concatenated channel coding and interleaving. The higher bit rate offered with respect to its predecessor DVB-T makes it a suitable system for carrying HDTV signals on a terrestrial TV channel (though many broadcasters still use plain DVB-T for this purpose).

It is currently broadcast in the UK (Freeview, four channels), Italy (Europa 7 HD, twelve channels) and, since November 1, 2010, Sweden (five channels).


Figure (1.7)

The DVB-T2 draft standard was ratified by the DVB Steering Board on June 26, 2008, and published on the DVB homepage as DVB-T2 standard Bluebook; it was handed over to the European Telecommunications Standards Institute (ETSI) by DVB.ORG on June 20, 2008.

The ETSI process resulted in the adoption of the DVB-T2 standard on September 9, 2009. The ETSI process had several phases, but the only changes were text clarifications. Since the DVB-T2 physical-layer specification was complete and there would be no further technical enhancements, receiver VLSI chip design could start with confidence in the stability of the specification. A draft PSI/SI (program and system information) specification document was also agreed with the DVB-TM-GBS group.

2.2.5.2 – DVB-T2 vs. DVB-T

The following table compares the physical layers of the DVB-T and DVB-T2 systems; the options that appear only in the DVB-T2 column are the elements newly added in DVB-T2.

                          DVB-T                                 DVB-T2
Forward Error Correction  Convolutional Coding + Reed-Solomon   LDPC + BCH
Coding Rates              1/2, 2/3, 3/4, 5/6, 7/8               1/2, 3/5, 2/3, 3/4, 4/5, 5/6
Modulations               QPSK, 16QAM, 64QAM                    QPSK, 16QAM, 64QAM, 256QAM
Guard Intervals           1/4, 1/8, 1/16, 1/32                  1/4, 19/256, 1/8, 19/128, 1/16, 1/32, 1/128
FFT Sizes                 2k, 8k                                1k, 2k, 4k, 8k, 16k, 32k
Scattered Pilots          8% of total                           1%, 2%, 4%, 8% of total
Continual Pilots          2.6% of total                         0.35% of total

2.2.5.3 – DVB-T representation on world map


Chapter 3 – Source Coding

3.1 – Introduction

In computer science, source code is text written in a computer programming language. Such a language is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code, which can then be automatically translated to binary machine code that the computer can directly read and execute. An interpreter translates to machine code and executes it on the fly, while a compiler only translates to machine code that it stores as executable files; these can then be executed as a separate step.

Most computer applications are distributed in a form that includes executable files, but not their source code, which is only useful to a computer programmer who wishes to understand or modify the program. The source code constituting a program is usually held in one or more text files stored on a computer's hard disk; usually these files are carefully arranged into a directory tree, known as a source tree. Source code can also be stored in a database (as is common for stored procedures) or elsewhere.

Source code also appears in books and other media, often in the form of small code snippets but occasionally as complete code bases; a well-known case is the source code of PGP. The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages, neither of which is textual in nature. For the purpose of clarity, ‘source code’ is taken to mean any fully executable description of a software system. It is therefore construed to include machine code, very high-level languages and executable graphical representations of systems.

The code base of a programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems.

The source code for a particular piece of software may be contained in a single file or many files. Though the practice is uncommon, a program's source code can be written in different programming languages. For example, a program written primarily in the C programming language might have portions written in assembly language for optimization purposes. It is also possible for some components of a piece of software to be written and compiled separately, in an arbitrary programming language, and later integrated into the software using a technique called library linking. This is the case in some languages, such as Java: each class is compiled separately into a file and linked by the interpreter at runtime.

Yet another method is to make the main program an interpreter for a programming language, either designed specifically for the application in question or general-purpose, and then to write the bulk of the actual user functionality as macros or other forms of add-ins in this language, an approach taken for example by the GNU Emacs text editor.

Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or even hundreds, of different source code files. In these cases, instructions for compilation, such as a Makefile, are


included with the source code. These describe the relationships among the source code files and contain information about how they are to be compiled. The revision control system is another tool frequently used by developers for source code maintenance.

In our Digital Video Broadcasting – 2nd Generation Terrestrial system, MPEG is the source coding used, so in this chapter we will focus only on the different types of MPEG source coding.

3.2 – Moving Pictures Experts Group (MPEG) Data Stream

The abbreviation MPEG stands for Moving Pictures Experts Group; that is to say, MPEG deals mainly with the digital transmission of moving pictures. However, the data signal defined in the MPEG-2 standard can also carry data which have nothing at all to do with video and audio, such as Internet data.

MPEG can be divided into more than one branch of source coding standards; “Figure (3.1)” shows the different MPEG standards.

Figure (3.1): MPEG standards

All the same, the description of the data signal structure will begin with the uncompressed video and audio signals. An SDTV (Standard Definition Television) signal without data reduction has a data rate of 270 Mbit/s, and a digital stereo audio signal in CD quality has a data rate of about 1.5 Mbit/s.

The video signals are compressed to about 1 Mbit/s in MPEG-1 and to about 2–7 Mbit/s in MPEG-2. The video data rate can be constant or variable (statistical multiplex). The audio signals have a data rate of about 100–400 kbit/s (mostly 192 kbit/s) after compression, but the audio data rate is always constant and a multiple of 8 kbit/s. The compressed video and audio signals in MPEG are called “elementary streams”, ES for short. There are thus video streams, audio streams and, quite generally, data streams, the latter containing any type of compressed or uncompressed data. Immediately after having been compressed (i.e. encoded), all elementary streams are divided into variable-length packets, both in MPEG-1 and in MPEG-2 (“Figure (3.2)”).



Since the achievable compression varies with the instantaneous video and audio content, variable-length containers are needed in the data signal. These containers carry one or more compressed frames in the case of the video signal, and one or more compressed audio signal segments in the case of the audio signal.

These elementary streams, divided into packets as shown above in “Figure (3.2)”, are called “packetized elementary streams”, or simply PES for short. Each PES packet usually has a size of up to 64 kbytes, as shown below in “Figure (3.3)”. It consists of a relatively short header and a payload. The header contains, inter alia, a 16-bit length indicator for the maximum packet length of 64 kbytes. The payload part contains either the compressed video and audio streams or a pure data stream.

Figure (3.2)

Figure (3.3)


According to the MPEG standard, however, the video packets can also be longer than 64 kbytes in some cases; the length indicator is then set to zero and the MPEG decoder has to use other mechanisms to find the end of the packet.

3.3 – Video Compression Technique

To compress data, it is possible to remove redundant or irrelevant information from the data stream; redundant means superfluous, irrelevant means unnecessary, as shown below in “Figure (3.4)”.

Superfluous information is information which occurs several times in the data stream, which has no information content, or which can easily and losslessly be recovered by mathematical processes at the receiving end. Redundancy reduction can be achieved, e.g., by variable-length coding: instead of transmitting ten zeroes, the information ‘ten times zero’ can be sent by means of a special, much shorter code.
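The ‘ten times zero’ idea is run-length coding; a minimal generic sketch (an illustration of the principle, not the actual MPEG entropy coder):

```python
from itertools import groupby

def run_length_encode(samples):
    """Collapse runs of identical values into (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(samples)]

data = [0] * 10 + [5, 5, 7]
print(run_length_encode(data))  # [(0, 10), (5, 2), (7, 1)]
```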

Irrelevant information is information which cannot be perceived by the human senses. In the case of the video signal, these are the components which the eye does not register due to its anatomy. The human eye has far fewer color receptors than detection cells for brightness information; for this reason, the “sharpness of the color” can be reduced, which means a reduction in the bandwidth of the color information. The receptors for black/white are called rods and the color receptors cones.

In MPEG, the following steps are carried out in order to achieve a data reduction factor of up to 130:

8 bits resolution instead of 10 bits (irrelevance reduction).

Omitting the horizontal and vertical blanking interval (redundancy reduction).

Reducing the color resolution also in the vertical direction (4:2:0) (irrelevance reduction).

Differential pulse code modulation (DPCM) of moving pictures (redundancy reduction).

Discrete cosine transform (DCT) followed by quantization (irrelevance reduction).

Zigzag scanning with variable-length coding (redundancy reduction).

Huffman coding (redundancy reduction).
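The last three steps above can be sketched in code. The following Python sketch (the 8×8 block and its coefficient values are made up for illustration; real encoders combine the run-length pairs with Huffman codes) zigzag-scans a quantized DCT block and run-length codes the zero runs:

```python
def zigzag_order(n=8):
    """Generate the zigzag scan order for an n x n coefficient block."""
    order = []
    for s in range(2 * n - 1):               # walk the anti-diagonals
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()                   # alternate the scan direction
        order.extend(diag)
    return order

def run_length_code(block):
    """Zigzag-scan a quantized block and emit (zero_run, value) pairs."""
    scanned = [block[r][c] for r, c in zigzag_order(len(block))]
    codes, zeros = [], 0
    for v in scanned:
        if v == 0:
            zeros += 1
        else:
            codes.append((zeros, v))
            zeros = 0
    codes.append("EOB")                      # end of block: the rest is zero
    return codes

# After quantization, most high-frequency coefficients are zero:
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 12, 3, -2
print(run_length_code(block))                # [(0, 12), (0, 3), (0, -2), 'EOB']
```

Because the zigzag scan visits the low-frequency coefficients first, the many trailing zeros collapse into a single end-of-block symbol, which is where most of the redundancy reduction comes from.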

3.4 – The Packetized Elementary Stream (PES)

All elementary streams in MPEG are first packetized into variable-length packets called PES packets. The packets, which can be up to 64 Kbytes long, begin with a PES header of 6 bytes minimum length. The first 3 bytes of this header are the “start code prefix”, whose content is always 00 00 01 and which is used for identifying the start of a PES packet.

Figure (3.4)


The byte following the start code is the “stream ID” which describes the type of elementary stream following in the payload. It indicates whether it is, e.g. a video stream, an audio stream or a data stream which follows. After that there are two “packet length” bytes which are used to address the up to 64 Kbytes of payload. If both of these bytes are set to zero, a PES packet having a length which may exceed these 64 Kbytes can be expected. The MPEG decoder then has to use other arrangements to find the PES packet limits, e.g. the start code.
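The fixed 6-byte part of the header can be parsed in a few lines. The Python sketch below (the function name is ours; interpretation of the stream ID values is omitted) extracts the stream ID and the packet length:

```python
def parse_pes_header(data: bytes):
    """Parse the 6-byte fixed part of a PES header."""
    if data[0:3] != b"\x00\x00\x01":
        raise ValueError("missing 00 00 01 start code prefix")
    stream_id = data[3]
    # Two length bytes; zero signals an 'unbounded' packet (> 64 Kbytes)
    packet_length = (data[4] << 8) | data[5]
    return stream_id, packet_length

# 0xE0 is a video stream ID; the payload here is 16 bytes long
print(parse_pes_header(b"\x00\x00\x01\xe0\x00\x10"))  # (224, 16)
```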

After these 6 bytes of PES header, an “optional PES header” is transmitted, which is an optional extension of the PES header and is adapted to the requirements of the elementary stream currently being transmitted. It is controlled by 11 flags in a total of 12 bits in this optional PES header.

These flags show which components are actually present in the “optional fields” of the optional PES header and which are not. The total length of the PES header is given in the “PES header data length” field. The optional fields contain, among other things, the “presentation time stamps” (PTS) and “decoding time stamps” (DTS), which are important for synchronizing video and audio. At the end of the optional PES header there may also be stuffing bytes. Following the complete PES header, the actual payload of the elementary stream is transmitted, which can usually be up to 64 Kbytes long, plus the optional header, or even longer in special cases.

In MPEG-1, video PES packets are simply multiplexed with audio PES packets and stored on a data medium, as shown in “Figure (3.5)”.

The maximum data rate is about 1.5 Mbit/s for video and audio, and the data stream includes only one video stream and one audio stream. This “Packetized Elementary Stream” (PES) with its relatively long packet structures is not, however, suitable for transmission, and especially not for broadcasting a number of programs in one multiplexed data signal.

3.5 – MPEG-2 Coding

MPEG-2 is an extension of the MPEG-1 international standard for digital compression of audio and video signals. MPEG-1 was designed to code progressively scanned video at bit rates up to about 1.5 Mbit/s for applications such as CD-i (compact disc interactive). MPEG-2 is directed at broadcast formats at higher data rates; it provides extra algorithmic 'tools' for efficiently coding interlaced video, supports a wide range of bit rates and provides for multichannel surround sound coding.

MPEG-2 is used in Digital Video Broadcasting and the Digital Versatile Disc (DVD), as shown below in “Figure (3.6)”.

Figure (3.5)


In MPEG-2, on the other hand, the objective has been to assemble up to 6, 10 or even 20 independent TV or radio programs to form one common multiplexed MPEG-2 data signal. This data signal is then transmitted via satellite, cable or terrestrial transmission links. To this end, the long PES packets are additionally divided into smaller packets of constant length: 184-byte-long pieces are taken from the PES packets and a 4-byte-long header is added to each, as shown in “Figure (3.7)”, making up 188-byte-long packets called “transport stream packets”, which are then multiplexed.

To do this, first the transport stream packets of one program are multiplexed together. A program can consist of one or more video and audio signals. All the multiplexed data streams of all the programs are then multiplexed again and combined to form a complete data stream which is called an “MPEG-2 transport stream” (TS for short).

An MPEG-2 transport stream contains the 188-byte-long transport stream packets of all programs with all their video, audio and data signals. Depending on the data rates, packets of one or the other elementary streams will occur more or less frequently in the MPEG-2 transport stream. For each program there is one MPEG encoder which encodes all elementary streams, generates a PES structure and then packetizes these PES packets into transport stream packets. The data rate for each program is usually approx. 2 - 7 Mbit/s. The transport streams of all the programs are then combined in a multiplexed MPEG-2 data stream to form one overall transport stream (“Figure (3.8)”) which can then have a data rate of up to about 40 Mbit/s. There are often up to 6, 8 or 10 or even 20 programs in one transport stream. The data rates can vary during the transmission but the overall data rate has to remain constant. A program can contain video and audio, only audio (audio broadcast) or only data, and the structure is thus flexible and can also change during the transmission.
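The PES-to-TS division described above can be sketched as follows. This is a deliberately simplified Python sketch: the last packet is padded with 0xFF bytes instead of using a proper adaptation field, and scrambling control is left at zero; the function name and PID value are ours.

```python
def packetize(pes: bytes, pid: int):
    """Split one PES packet into 188-byte transport stream packets.
    Simplified: 0xFF padding stands in for adaptation-field stuffing."""
    packets, cc = [], 0
    for i in range(0, len(pes), 184):
        chunk = pes[i:i + 184].ljust(184, b"\xff")
        pusi = 0x40 if i == 0 else 0x00          # payload_unit_start_indicator
        header = bytes([0x47,                    # sync byte
                        pusi | (pid >> 8),       # PUSI + top 5 PID bits
                        pid & 0xff,              # low 8 PID bits
                        0x10 | cc])              # payload present + continuity
        packets.append(header + chunk)
        cc = (cc + 1) & 0x0f                     # 4-bit continuity counter
    return packets

ts = packetize(b"\x00" * 400, pid=0x100)
print(len(ts), len(ts[0]))  # 3 188
```

A 400-byte PES packet thus becomes three 188-byte TS packets (184 + 184 + 32 payload bytes, the last one stuffed).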

Figure (3.6)

Figure (3.7)


3.6 – MPEG-2 Transport Stream Packet

The MPEG-2 transport stream consists of packets having a constant length, as shown in “Figure (3.9)”. This length is always 188 bytes, with 4 bytes of header and 184 bytes of payload. The payload contains the video, audio or general data. The header contains numerous items of importance to the transmission of the packets. The first header byte is the “sync byte”.

The sync byte is used for synchronizing to the transport stream packets; it is its constant value plus the constant spacing which is used for synchronization. According to MPEG, synchronization at the decoder occurs after five transport stream packets have been received. Another important component of the transport stream packet header is the 13-bit “packet identifier”, or PID for short. The PID describes the current content of the payload part of the packet. This 13-bit number, in combination with tables also included in the transport stream, shows which elementary stream or content is being carried.

The bit immediately following the sync byte is the “transport error indicator” bit, as shown in “Figure (3.9)”. With this bit, transport stream packets are flagged as errored after their transmission. It is set by demodulators at the end of the transmission link if, e.g., too many errors have occurred and there was no further possibility of correcting them by means of the error correction mechanisms used during the transmission.
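The three header fields discussed so far can be pulled out with a few shifts and masks; a minimal Python sketch (the function name is ours):

```python
def parse_ts_header(packet: bytes):
    """Extract sync byte, transport error indicator and PID from the
    4-byte header of a 188-byte TS packet."""
    sync = packet[0]                              # always 0x47
    tei = (packet[1] >> 7) & 0x01                 # transport error indicator
    pid = ((packet[1] & 0x1f) << 8) | packet[2]   # 13-bit packet identifier
    return sync, tei, pid

pkt = bytes([0x47, 0x41, 0x00, 0x10]) + b"\x00" * 184
print([hex(v) for v in parse_ts_header(pkt)])  # ['0x47', '0x0', '0x100']
```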

Figure (3.8)

Figure (3.9)


In DVB (Digital Video Broadcasting), e.g., the primary error protection used is always the Reed-Solomon error correction code shown in “Figure (3.10)”. In one of the first stages of the (DVB-S, DVB-C or DVB-T) modulator, 16 bytes of error protection are added to the initial 188 bytes of the packet. These 16 bytes of error protection are a special checksum which can be used for repairing up to 8 errors per packet at the receiving end. If, however, there are more than 8 errors in a packet, there is no further possibility of correcting them; the error protection has failed and the packet is flagged as errored by the transport error indicator. Such a packet must no longer be decoded by the MPEG decoder which, instead, has to mask the error which, in most cases, can be seen as a type of “blocking” in the picture.
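The figure of 8 correctable errors follows directly from the code parameters: the DVB outer code is a Reed-Solomon (204, 188) code, i.e. n − k = 16 parity bytes per packet, and a Reed-Solomon code corrects t = (n − k)/2 byte errors:

```python
def rs_correctable_errors(n: int, k: int) -> int:
    """Byte-error correction capability t of a Reed-Solomon (n, k) code."""
    return (n - k) // 2

print(rs_correctable_errors(204, 188))  # 8
```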

It may be necessary occasionally to transmit more than 4 bytes of header per transport stream packet. The header is extended into the payload field in this case. The payload part becomes correspondingly shorter but the total packet length remains a constant 188 bytes. This extended header is called an “adaptation field” (“Figure (3.11)”). The other contents of the header and of the adaptation field will be discussed later. “Adaptation control bits” in the 4 byte-long header show if there is an adaptation field or not.

Figure (3.10)

Figure (3.11)


The fact that the MPEG-2 transport stream is a completely asynchronous data signal is of particularly decisive significance. There is no way of knowing what information will follow in the next time slot (transport stream packet); this can only be determined by means of the PID of the transport stream packet. The actual payload data rates can fluctuate; there may be stuffing to fill up the 184-byte payload. This asynchronism has great advantages with regard to future flexibility.

3.7 – Information for the Receiver

In the following paragraphs, the components of the transport stream which are necessary for the receiver will be considered. Necessary components means in this case: what does the receiver, i.e. the MPEG decoder, need in order to extract from the large number of transport stream packets with the most varied contents exactly those which are needed for decoding the desired program? In addition, the decoder must be able to synchronize correctly to this program. The MPEG-2 transport stream is a completely asynchronous signal and its contents occur in a purely random fashion or on demand in the individual time slots. There is no absolute rule which can be used for determining what information will be contained in the next transport stream packet. The decoder and every element on the transmission link must lock to the packet structure. The PID (packet identifier) can be used for finding out what is actually being transmitted in the respective packet. On the one hand, this asynchronism has advantages because of the total flexibility provided, but there are also disadvantages with regard to power saving: every single transport stream packet must first be analyzed in the receiver.

3.7.1 – Synchronizing to the Transport Stream

When the MPEG-2 decoder input is connected to an MPEG-2 transport stream, it must first lock to the transport stream, i.e. to the packet structure. The decoder therefore looks for the sync bytes in the transport stream. These always have the value 0x47 and always appear at the beginning of a transport stream packet; they are thus present at constant intervals of 188 bytes. These two factors together, the constant value 0x47 and the constant spacing of 188 bytes, are used for the synchronization. If a byte having the value 0x47 appears, the decoder examines the positions “n” times 188 bytes before and after this byte in the transport stream for the presence of another sync byte. If there is one, this is a sync byte; if not, it is simply some code word which has accidentally assumed this value, since it is inevitable that the code word 0x47 will also occur elsewhere in the continuous transport stream. Synchronization occurs after 5 transport stream packets and the decoder loses lock after a loss of 3 packets (as quoted in the MPEG-2 Standard).
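The search described above can be sketched as a brute-force scan (a simplification: a real decoder locks on the fly rather than over a buffered stream, and the junk prefix in the demo is made up):

```python
def find_sync(stream: bytes, n_lock: int = 5) -> int:
    """Return the offset of the first sync byte confirmed by n_lock
    consecutive 0x47 bytes spaced 188 bytes apart, or -1 if none."""
    span = 188 * (n_lock - 1)
    for start in range(len(stream) - span):
        if all(stream[start + 188 * k] == 0x47 for k in range(n_lock)):
            return start
    return -1

# Ten junk bytes followed by six well-formed packets:
stream = b"\xab" * 10 + (bytes([0x47]) + b"\x00" * 187) * 6
print(find_sync(stream))  # 10
```

Requiring several confirmations at 188-byte spacing is what rejects payload bytes that happen to equal 0x47.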

3.7.2 – Reading out the Current Program Structure

The number and structure of the programs transmitted in the transport stream are flexible and open. The transport stream can contain one program with one video and one audio elementary stream, or there can be 20 programs or more being broadcast, some with only audio, some with video and audio and some with video and a number of audio signals. It is therefore necessary to include certain lists in the transport stream which describe its instantaneous structure. These lists provide the so-called “program specific information”, or PSI for short (“Figure (3.12)”). They are tables which are transmitted from time to time in the payload part. The first table is the “Program Association Table” (PAT).

This table occurs precisely once per transport stream and is repeated every 0.5 s. It shows how many programs there are in this transport stream. Transport stream packets containing this table carry the packet identifier (PID) zero and can thus be easily identified. In the payload part of the program association table, a list of special PIDs is transmitted; there is exactly one PID per program in the program association table (“Figure (3.11)”).

These PIDs are pointers, as it were, to other information describing each individual program in more detail. They point to other tables, the so-called “Program Map Tables” (PMT).


The program map tables, in turn, are special transport stream packets with a special payload part and a special PID. The PIDs of the PMTs are transmitted in the PAT. If it is intended to receive, e.g., program no. 3, the third PID is selected from the list in the payload part of the program association table (PAT). If this is, e.g., 0x1FF3, the decoder looks for transport stream packets having PID = 0x1FF3 in their header. These packets form the program map table for program no. 3 in the transport stream. The program map table, in turn, contains the PIDs of all elementary streams contained in this program (video, audio and data).

Since there can be a number of video and audio streams, the viewer must select the elementary streams to be decoded. Ultimately he will select exactly two PIDs: one for the video stream and one for the audio stream, resulting, e.g., in the two hexadecimal numbers PID1 = 0x100 and PID2 = 0x110. PID1 is then, e.g., the PID for the video stream to be decoded and PID2 the PID for the audio stream to be decoded. From now on, the MPEG-2 decoder will only be interested in these transport stream packets: it collects them, i.e. de-multiplexes them, and assembles them again to form PES packets. It is precisely these PES packets which are supplied to the video and audio decoders in order to regenerate the video and audio signals.
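This de-multiplexing step can be sketched as a per-PID payload collector (simplified: continuity counters, adaptation fields and the rules for reassembling payloads into PES packets are ignored; the helper and PID values are ours):

```python
def demux(ts_packets, wanted_pids):
    """Collect the payloads of the selected PIDs from a packet list."""
    streams = {pid: bytearray() for pid in wanted_pids}
    for pkt in ts_packets:
        pid = ((pkt[1] & 0x1f) << 8) | pkt[2]     # 13-bit PID from the header
        if pid in streams:
            streams[pid].extend(pkt[4:])          # append the 184-byte payload
    return streams

def make_packet(pid):                             # demo helper
    return bytes([0x47, pid >> 8, pid & 0xff, 0x10]) + b"\x01" * 184

out = demux([make_packet(0x100), make_packet(0x100), make_packet(0x110)],
            {0x100, 0x110})
print(len(out[0x100]), len(out[0x110]))  # 368 184
```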

The composition of the transport stream can change during the transmission; e.g., local programs may only be transmitted within certain time windows. A set-top box decoder, e.g. for DVB-S signals, must therefore continuously monitor the instantaneous structure of the transport stream in the background, read out the PAT and PMTs and adapt to new situations. The header of a table contains version management for this purpose, which signals to the receiver whether something has changed in the structure. Regrettably, this still does not hold true for all DVB receivers; a receiver often recognizes a change in the program structure only after a new program search has been started. In many regions of Germany, so-called “regional window programs” are inserted into the public service broadcast programs at certain times of the day. These are implemented by a so-called “dynamic PMT”, i.e. the contents of the PMT are altered to signal changes in the PIDs of the elementary streams.

Figure (3.12)


3.7.3 – Accessing a Program

After the PIDs of all elementary streams contained in the transport stream have become known from the information contained in the PAT and the PMTs, and the user has committed himself to a program, i.e. to a video and an audio stream, precisely two PIDs are now defined: the PID for the video signal to be decoded and the PID for the audio signal to be decoded.

The MPEG-2 decoder, on instruction by the user of the set-top box, will now only be interested in these packets. Assuming that the video PID is 0x100 and the audio PID is 0x110, in the following demultiplexing process all TS packets with PID 0x100 are assembled into video PES packets and supplied to the video decoder. The same applies to the 0x110 audio packets, which are collected together and reassembled to form PES packets supplied to the audio decoder. If the elementary streams are not scrambled, they can now also be decoded directly.

“Figure (3.13)” represents accessing a program via video and audio PIDs.

3.7.4 – Accessing Scrambled Programs

Often, however, the elementary streams are transmitted scrambled. All or some of the elementary streams are protected by an electronic code in the case of pay TV, or for licensing reasons involving local restrictions on reception. The elementary streams are scrambled (“Figure (3.14)”) by various methods (Viaccess, Betacrypt, Irdeto, Conax, Nagravision, etc.) and cannot be received without additional hardware and authorization. This additional hardware must be supplied with the appropriate descrambling and authorization data from the transport stream. For this purpose, a special table is transmitted in the transport stream: the “conditional access table” (CAT).

The CAT supplies the PIDs for other data packets in the transport stream in which this descrambling information is transmitted. This additional descrambling information is called ECM (entitlement control message) and EMM (entitlement management message). The ECMs are used for transmitting the scrambling codes and the EMMs are used for user administration.

Figure (3.13)


3.7.5 – Program Synchronization (PCR, DTS, and PTS)

Once the PIDs for video and audio have been determined, any scrambled programs have been descrambled and the streams have been demultiplexed, video and audio PES packets are generated again. These are then supplied to the video and audio decoders. The actual decoding, however, requires a few more synchronization steps. The first step consists of linking the receiver clock to the transmitter clock. As indicated initially, the luminance signal is sampled at 13.5 MHz and the two chrominance signals are sampled at 6.75 MHz. 27 MHz is a multiple of these sampling frequencies, which is why this frequency is used as the reference, or basic, frequency for all processing steps in the MPEG encoding at the transmitter end. A 27 MHz oscillator in the MPEG encoder feeds the “system time clock” (STC). At the receiving end, another system time clock (STC) must be provided.

To accomplish this, reference information is transmitted in the MPEG data stream. In MPEG-2, these are the “Program Clock Reference” (PCR) values, which are nothing other than an up-to-date copy of the STC counter fed into the transport stream at a certain time. The data stream thus carries an accurate internal “clock time”. All coding and decoding processes are controlled by this clock time. To do this, the receiver, i.e. the MPEG decoder, must read out the “clock time”, namely the PCR values, and compare them with its internal system clock. If the received PCR values are locked to the system clock in the decoder, the 27 MHz clock at the receiving end matches that at the transmitting end. If there is a deviation, a control variable for a PLL can be generated from the magnitude of the deviation, i.e. the oscillator at the receiving end can be corrected.
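The principle can be illustrated numerically: from PCR samples whose arrival spacing is known (a hypothetical 40 ms here), the receiver can estimate the transmitter's 27 MHz clock. This is a toy sketch: a real decoder feeds the deviation into a PLL loop filter rather than averaging, and the sample values below are made up.

```python
def estimate_tx_clock(pcr_values, interval_s=0.04):
    """Estimate the transmitter clock frequency from consecutive PCR
    samples arriving at a known interval (simplified open-loop version)."""
    diffs = [b - a for a, b in zip(pcr_values, pcr_values[1:])]
    return (sum(diffs) / len(diffs)) / interval_s  # ticks per second = Hz

# PCR copies of an exact 27 MHz counter, one every 40 ms:
pcrs = [k * 1_080_000 for k in range(6)]           # 27e6 * 0.04 = 1 080 000
print(estimate_tx_clock(pcrs))                     # close to 27 MHz
```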

3.7.6 – Additional Information in the Transport Stream

According to MPEG, the information transmitted in the transport stream is fairly hardware-oriented, only relating to the absolute minimum requirements, as it were. However, this does not make the operation of a set-top box particularly user-friendly. For example, it makes sense, and is necessary, to transmit program names for identification purposes. It is also desirable to simplify the search for adjacent physical transmission channels. It is also necessary to transmit electronic program guides (EPG) and time and date information.

3.7.7 – Non-Private and Private Sections and Tables

To cope with any extensions, the MPEG Group has incorporated an “open door” in the MPEG-2 Standard. In addition to the “program specific information” (PSI), the “program map table” (PMT) and the “conditional access table” (CAT), it created the possibility to incorporate so-called “private sections and private tables” in the transport stream.

Figure (3.14)


The group has defined mechanisms which specify what a section or table has to look like, what its structure has to be, and by what rules it is to be linked into the transport stream. According to MPEG-2 Systems (ISO/IEC 13818-1), the following was specified for each type of table:

A table is transmitted in the payload part of one or more transport stream packets with a special PID which is reserved for only this table (DVB) or some types of tables (ATSC).

Each table begins with a table ID which is a special byte which identifies only this table alone. The table ID is the first payload byte of a table.

Each table is subdivided into sections which are allowed to have a maximum size of 4 Kbytes. Each section of a table is terminated with a 32-bit-long CRC checksum over the entire section.

The “Program Specific Information” (PSI) has exactly the same structure. The PAT has a PID of zero and begins with a table ID of zero. The PMT has the PIDs defined in the PAT as PID and has a table ID of 2. The CAT has a PID and a table ID of one in each case. The PSI can be composed of one or more transport stream packets for PAT, PMT and CAT depending on content.

Apart from the PSI tables PAT, PMT and CAT mentioned above, another table, the so-called “network information table” (NIT) was provided in principle but not standardized in detail. It was actually implemented as part of the DVB (Digital Video Broadcasting) project.

All tables are implemented through the mechanism of sections. There are non-private and private sections. Non-private sections are defined in the original MPEG-2 Systems Standard; all others are correspondingly private. The non-private sections include the PSI tables, and the private ones include the SI sections of DVB and the MPEG-2 DSM-CC (Digital Storage Media Command and Control) sections which are used for data broadcasting. The header of a table contains the administration of the version number of a table and information about the number of sections of which a table is made up. A receiver must first scan through the headers of these sections before it can evaluate the rest of the sections and tables. Naturally, all sections must be broken down from an original maximum length of 4 Kbytes to the maximum 184-byte payload length of an MPEG-2 transport stream packet before they are transmitted.

3.8 – Scalability

The purpose of scalable video is to achieve video of more than one resolution, quality or implementation complexity simultaneously. MPEG-2 supports four types of scalability modes: SNR, spatial, temporal and data partitioning. These allow different trade-offs in bandwidth, video resolution or quality, and overall implementation complexity. Data partitioning is bit stream scalability, where a single coded video bit stream is artificially partitioned into two or more layers. SNR scalability is quality scalability, where each layer is at a different quality but at the same spatial resolution. Spatial scalability is spatial resolution scalability, where each layer has the same or a different spatial resolution. Temporal scalability is frame rate scalability, where each layer has the same or a different temporal resolution but is at the same spatial resolution.

3.9 – MPEG-2 Picture Types

In MPEG-2, three 'picture types' are defined. The picture type defines which prediction modes may be used to code each block.

'Intra' pictures (I-pictures) are coded without reference to previous pictures. Moderate compression is achieved by reducing spatial redundancy, but not temporal redundancy. They can be used periodically to provide access points in the bit stream where decoding can begin.

'Predictive' pictures (P-pictures) can use the previous I- or P-picture for motion compensation and may be used as a reference for further prediction. Each block in a P-picture can either be predicted or intra-coded. By reducing spatial and temporal redundancy, P-pictures offer increased compression compared to I-pictures.

'Bidirectional-predictive' pictures (B-pictures) can use the previous and next I- or P-pictures for motion-compensation, and offer the highest degree of compression. Each block in a B-picture can be forward, backward or bidirectionally predicted or intra-coded. To enable backward prediction from a future frame, the coder reorders the pictures from natural 'display' order to 'bitstream' order so that the B-picture is transmitted after the previous and next pictures it references. This introduces a reordering delay dependent on the number of consecutive B-pictures.
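The display-to-bitstream reordering described above can be sketched as follows (the picture labels are illustrative; each B-picture is assumed to reference the nearest preceding and following I/P picture):

```python
def display_to_bitstream(display_order):
    """Reorder pictures so that every B-picture is transmitted after
    both of the reference pictures it depends on."""
    out, held_b = [], []
    for pic in display_order:
        if pic[0] == "B":
            held_b.append(pic)        # hold until the next reference arrives
        else:                         # I- or P-picture: a reference
            out.append(pic)
            out.extend(held_b)        # the held B-pictures may now follow
            held_b = []
    return out + held_b

print(display_to_bitstream(["I0", "B1", "B2", "P3", "B4", "B5", "P6"]))
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```

Note how P3 is transmitted before B1 and B2, which illustrates the reordering delay: the decoder must buffer P3 until both B-pictures that reference it have been decoded and displayed.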

3.10 – MPEG-2 Problems

3.10.1 – Problems in Coding at Low Bit-Rate

The MPEG-2 video-coded bit stream is classified syntactically into coding modes, motion vectors and DCT (Discrete Cosine Transform) coefficients. When using a conventional MPEG-2 encoder that employs the frame picture structure (PS = frame) and FPFD (frame_pred_frame_dct) = '0', the motion vector codes account for most of the coding bits for HDTV sequences with large motions, irrespective of the target bit rates; sufficient coding bits therefore cannot be assigned to the DCT coefficients, resulting in picture-quality degradation at lower bit rates.

3.10.2 – Problems in Coding of Chrominance Components in Interlaced Video

There are two major causes of color degradation in fast moving pictures in interlaced video:

Incorrect predictions of chrominance samples.

Non-adaptive DCT mode for chrominance components.

Most existing encoders use only luminance samples for motion estimation because of the simple implementation. However, doing so sometimes causes noticeable color degradation.

Moreover, in the frame-based prediction of 4:2:0 interlaced video, the prediction of chrominance samples in the two fields may use opposite-parity and non-optimum temporal displacements. Such inefficient prediction will increase prediction errors for the chrominance components and will thus cause chrominance degradation.


Chapter 4 – Digital Modulation Techniques

4.1 – Concept of Modulation

Modulation is generally known as the process of varying one or more properties of a high-frequency periodic waveform, called the carrier signal, with respect to a modulating signal (which typically contains the information to be transmitted). This is done in a similar fashion to a musician modulating a tone (a periodic waveform) from a musical instrument by varying its characteristics, which may be the volume, timing and pitch. The three key parameters of a periodic waveform are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of which can be modified to obtain a modulated output signal. In telecommunications, modulation is the process of conveying a message signal, for example a digital bit stream or an analog audio signal, inside another signal that can be physically transmitted. A device that performs modulation is known as a modulator and a device that performs the inverse operation is known as a demodulator or detector. A device that can do both operations is a modem (modulator-demodulator).

4.1.1 – Importance of Modulation

4.1.1.1 – Main Aim of Analog Modulation

Analog modulation aims to transfer an analog baseband signal, for example an audio signal or TV signal, over an analog bandpass channel, for example a limited radio frequency band or a cable TV network channel.

4.1.1.2 – Main Aim of Digital Modulation

Digital modulation aims to transfer a digital bit stream over an analog bandpass channel, for example over the public switched telephone network (where a bandpass filter limits the frequency range to between 300 and 3400 Hz), or over a limited radio frequency band.

4.1.1.3 – Benefits of Signal Modulation

Easing propagation by transmitting at higher frequencies, which allows the use of an antenna of suitable length, since the effective radiation of electromagnetic waves requires antenna dimensions comparable to the wavelength (λ) of the signal to be received. Example: a 3 kHz signal has λ = 100 km and would require an antenna approximately 100 km long, while a 3 GHz signal has λ = 10 cm and needs an antenna only approximately 10 cm long.

Sharing access to the telecommunication channel resources among several users by means of the frequency division multiplexing (FDM) technique.

Transmitting larger power over a wide area: if the baseband data signal itself were amplified by power amplifiers it would be distorted, so modulation is performed and the carrier power is amplified instead.

Reducing noise effects in the case of non-white Gaussian noise.
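The antenna-length figures in the first benefit above can be checked from λ = c/f:

```python
C = 299_792_458.0                    # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength: lambda = c / f."""
    return C / freq_hz

print(round(wavelength_m(3e3)))      # 99931  -> roughly 100 km at 3 kHz
print(round(wavelength_m(3e9), 3))   # 0.1    -> roughly 10 cm at 3 GHz
```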


4.1.2 – Digital vs. Analog

Modern mobile communication systems use digital modulation techniques. Advancements in very large-scale integration and digital signal processing technology have made digital modulation more cost effective than analog transmission systems. Digital modulation offers many advantages over analog modulation, including greater noise immunity and robustness to channel impairments, easier multiplexing of various forms of information (e.g., voice, data, and video), and greater security. Furthermore, digital transmissions accommodate digital error-control codes which detect and/or correct transmission errors, and support complex signal conditioning and processing techniques such as source coding, encryption, and equalization to improve the performance of the overall communication link. New multipurpose programmable digital signal processors have made it possible to implement digital modulators and demodulators completely in software. Instead of having a particular modem design permanently frozen as hardware, embedded software implementations now allow alterations and improvements without having to redesign or replace the modem. The following table compares analog and digital modulation schemes to assess their use in wireless communication systems.

Point of view | Analog system | Digital system
Bandwidth | Low BW usage (advantage). | High BW usage (disadvantage).
Quality | High; no quantization takes place (advantage). | Lower; errors arise from the quantization operation, which is non-reversible (disadvantage).
Complexity | Low; no signal processing and conditioning required (advantage). | High; complex signal conditioning and processing techniques such as source coding, encryption, and equalization are required (disadvantage).
Security | Low (disadvantage). | High, due to the availability of ciphering and authentication (advantage).
Noise immunity | Low (disadvantage). | High, due to channel coding techniques in addition to sending only 2 levels of data (advantage).
Available multiplexing techniques | FDM (disadvantage). | FDM, TDM, CDM, OFDM (advantage).
Supported services | Voice only (disadvantage). | Voice, SMS, data "Internet access", images, video calls (advantage).
Hardware design | Difficult (disadvantage). | Simple (advantage).

4.1.3 – Modulation Techniques Performance

Performance of any modulation technique is measured by the following parameters:

Power efficiency.

Bandwidth efficiency.

Power spectral density.

System complexity.


4.1.3.1 – Power Efficiency

The power efficiency is defined as the ratio of the signal energy per bit "Eb" to the noise power spectral density "No" required at the input of the receiver for a certain bit error probability "Pb" over an AWGN channel; that is, power efficiency is measured by the required "Eb/No". Power efficiency describes the ability of a modulation technique to preserve the bit error probability of the digital message at low power levels. In digital modulation systems, increasing the noise immunity requires increasing the signal power, so there is a trade-off between signal power and bit error probability. The power efficiency is a measure of how favorably this trade-off is made.

4.1.3.2 – Bandwidth Efficiency (η_B)

The bandwidth efficiency describes the ability of a modulation scheme to accommodate data within a limited bandwidth. As the data rate increases, the pulse width of the digital symbols decreases and hence the bandwidth increases, as shown in "Equation (4.1)":

Equation (4.1):    η_B = R_b / B    [bits/s/Hz]

where R_b is the data rate and B is the occupied bandwidth.

The system capacity of a digital mobile communication system is directly related to the bandwidth efficiency of the modulation scheme: a scheme with greater bandwidth efficiency transmits more data in a given spectrum allocation. However, as is well known, the maximum possible bandwidth efficiency is limited by the noise in the channel, according to Shannon's theorem:

Equation (4.2):    η_B,max = C / B = log₂(1 + S/N)

where C is the channel capacity and S/N is the signal-to-noise ratio.
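Shannon's bound is easy to evaluate numerically. The sketch below puts Equation (4.2) in executable form (the helper name is ours):

```python
import math

def shannon_bw_efficiency_limit(snr_db):
    # Equation (4.2): maximum achievable bits/s/Hz at a given channel SNR
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

# an SNR of S/N = 15 (about 11.8 dB) caps the efficiency at 4 bits/s/Hz,
# which is roughly what an uncoded 16-point constellation needs
print(shannon_bw_efficiency_limit(10 * math.log10(15)))
```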

4.1.3.3 – Tradeoff between Power Efficiency and Bandwidth Efficiency

Adding error-control coding to a message increases the required bandwidth, so bandwidth efficiency decreases; but the received power required for a particular bit error rate decreases, and hence power efficiency increases. On the other hand, using higher-level M'ary modulation schemes* decreases the bandwidth occupancy, so bandwidth efficiency increases; but the received power required for a particular bit error rate increases, and hence power efficiency decreases.

* All M'ary schemes except M'ary FSK, which is not a bandwidth-limited modulation scheme.


4.1.3.4 – Power Spectral Density (PSD)

Power spectral density describes how the power of a signal is distributed over frequency. Here power can be the actual physical power or, more often and for convenience with abstract signals, the squared value of the signal. Since a signal with nonzero average power is not square integrable, its Fourier transform does not exist in this case. Fortunately, the Wiener–Khinchin theorem provides a simple alternative: if the signal can be treated as a wide-sense stationary random process, the PSD is the Fourier transform of its autocorrelation function, as shown in "Equation (4.3)":

Equation (4.3):    S_x(f) = ∫ R_x(τ) e^(−j2πfτ) dτ

where R_x(τ) = E[x(t) x(t + τ)] is the autocorrelation function.
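The Wiener–Khinchin relation can be verified numerically for a finite record: the DFT of the circular autocorrelation equals the periodogram. A small sketch (assuming NumPy; record length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# circular autocorrelation: r[k] = sum_n x[n] * x[(n + k) mod N]
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(len(x))])

psd_wk = np.fft.fft(r).real              # PSD via the autocorrelation route
psd_direct = np.abs(np.fft.fft(x)) ** 2  # PSD via the direct transform (periodogram)
```

The two routes agree to numerical precision, which is exactly the discrete form of the theorem.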

4.1.3.5 – System Complexity

System complexity refers to the amount of circuitry involved and the technical difficulty of the system. Associated with system complexity is the cost of manufacturing, which is of course a major concern in choosing a modulation technique. Usually the demodulator is more complex than the modulator, and a coherent demodulator is much more complex than a non-coherent one, since carrier recovery is required. Some demodulation methods need sophisticated algorithms such as the Viterbi algorithm. Note also that for personal communication systems serving a large user community, the cost and complexity of the subscriber receiver must be minimized, so a modulation that is simple to detect is most attractive. All of these form the basis for complexity comparison. Throughout, we will pay attention to power efficiency, bandwidth efficiency, and system complexity, as they are the main criteria for choosing a modulation technique.

4.1.4 – Digital Modulation Schemes

The modulation schemes are classified into two large categories as shown in “Figure (4.1)”

Figure (4.1) – Digital Modulation Schemes:

Constant envelope:
- FSK: BFSK, M'ary FSK, MSK, GMSK
- PSK: BPSK, DPSK, M'ary PSK, QPSK, OQPSK, π/4 QPSK

Non-constant envelope:
- ASK: ON-OFF Keying, M'ary ASK
- M'ary QAM: rectangular QAM, circular QAM


Digital modulation techniques may also be classified into coherent and non-coherent schemes, as shown in "Figure (4.2)", depending on whether or not the receiver is equipped with a phase-recovery circuit. The phase-recovery circuit ensures that the oscillator supplying the locally generated carrier wave in the receiver is synchronized (in both frequency and phase) to the transmitter oscillator.

Figure (4.2) – Coherent schemes: all types of modulation. Non-coherent schemes: all types of modulation except PSK.

4.1.5 – Geometric Representation of Modulated Signals

To proceed with the analysis of the digital modulation schemes, the modulator chooses particular signals from a finite set of possible signal waveforms (symbols) based on the information bits applied to it. If there is a total of M possible signals, the signal set is S = {s₁(t), s₂(t), …, s_M(t)}. For a binary information bit, S contains two signals, and in general a signal set of size M can carry log₂M bits per symbol, as shown in "Equation (4.4)":

Equation (4.4):    n = log₂ M    [bits per symbol]

Vector space analysis provides valuable insight into the performance of a particular modulation scheme.

4.1.5.1 – Basis Signal φ_i(t) Conditions

Any signal in the set can be represented by a linear combination of basis functions.

Basis signals are orthogonal to each other in time, as shown in "Equation (4.5a)":

Equation (4.5a):    ∫ φ_i(t) φ_j(t) dt = 0,    i ≠ j

Basis signals are normalized to unit energy, as shown in "Equation (4.5b)":

Equation (4.5b):    ∫ φ_i²(t) dt = 1

4.1.5.2 – Constellation Diagrams

A constellation diagram is a representation of a signal modulated by a digital modulation scheme: the signal is displayed as a two-dimensional scatter diagram in the complex plane at the symbol sampling instants, as shown in "Figure (4.3)". In a more abstract sense, it represents the possible symbols that may be selected by a given modulation scheme as points in the complex plane. By representing a transmitted symbol as a complex number and modulating cosine and sine carrier signals with the real and imaginary parts respectively, the symbol can be sent with two carriers on the same frequency; these are often referred to as quadrature carriers. A coherent detector is able to demodulate these carriers independently. Plotting several symbols in a scatter diagram produces the constellation diagram. The points on a constellation diagram are called constellation points; they are the set of modulation symbols that comprise the modulation alphabet.

D_{i,j}, the spacing between the i-th and j-th constellation points, is called the "Euclidean distance".

4.1.5.3 – Probability of Error Calculation using Constellation Diagrams

The constellation diagram can be used to calculate the upper bound for symbol error rate as shown in “Equation (4.6)”.


Equation (4.6):

    P_e ≤ (1/M) Σ_{i=1}^{M} Σ_{j≠i} Q( D_{i,j} / √(2 N₀) )

where Q(·) is the Gaussian tail probability and N₀/2 is the noise power spectral density.
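The union bound is straightforward to evaluate directly from a constellation. The sketch below (function names are ours) applies it to a unit-energy QPSK constellation at Es/N0 = 10 dB:

```python
import itertools
import math

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_ser(points, N0):
    # Equation (4.6): average, over all symbols, of the pairwise Q-terms
    M = len(points)
    total = sum(qfunc(abs(points[i] - points[j]) / math.sqrt(2 * N0))
                for i, j in itertools.permutations(range(M), 2))
    return total / M

# unit-energy QPSK points on the diagonals; N0 = 0.1 gives Es/N0 = 10
qpsk = [complex(math.cos(a), math.sin(a))
        for a in (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4)]
bound = union_bound_ser(qpsk, N0=0.1)
print(bound)
```

At high SNR the bound is dominated by the nearest-neighbor terms, so it is a tight approximation to the true symbol error rate.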

4.1.6 – Types of Modulation Technique used in different Communication Systems

Communication System                                                  | Used Modulation Technique
GSM "2G" – Global System for Mobile Communications                    | GMSK
GPRS "2.5G" – General Packet Radio Service                            | GMSK
EDGE "2.75G" – Enhanced Data Rates for GSM Evolution                  | 8PSK
CDMA2000 – Code Division Multiple Access                              | QPSK in downlink, OQPSK in uplink
UMTS "3G" – Universal Mobile Telecommunication System                 | QPSK
HSDPA "3.5G" – High Speed Downlink Packet Access                      | Adaptive modulation depending on cell usage and signal quality: QPSK, 16QAM
HSUPA "3.75G" – High Speed Uplink Packet Access                       | Adaptive modulation depending on cell usage and signal quality: QPSK, 16QAM, 64QAM
WiMAX – Worldwide Interoperability for Microwave Access               | Adaptive modulation using: QPSK, 16QAM, 64QAM
LTE "4G" – Long Term Evolution                                        | Adaptive modulation using: QPSK, 16QAM, 64QAM
DVB-T – Digital Video Broadcasting – Terrestrial                      | Adaptive modulation using: QPSK, 16QAM, 64QAM
DVB-T2 – Digital Video Broadcasting – Terrestrial 2                   | Adaptive modulation using: QPSK, 16QAM, 64QAM, 256QAM


4.2 – Line Codes

Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment), assuming that:

symbols 0 and 1 are equiprobable,

the average power is normalized to unity, and

the frequency f is normalized with respect to the bit rate 1/Tb.

A variety of waveforms have been proposed in an effort to find ones with desirable properties, such as good bandwidth and power efficiency and adequate timing information. These baseband modulation waveforms are variously called line codes, baseband formats (or waveforms), or PCM waveforms (or formats, or codes). After line coding, the signal is put through a "physical channel", either a "transmission medium" or a "data storage medium", in one of the following forms:

The line-coded signal can be put directly on a transmission line, in the form of variations of voltage or current (often using differential signaling).

The line-coded signal (the "baseband signal") undergoes further pulse shaping (to reduce its frequency bandwidth) and is then modulated (to shift its frequency band) to create the "RF signal" that can be sent through free space.

The line-coded signal can be used to turn a light on and off in Free Space Optics, most commonly infrared remote control.

The line-coded signal can be printed on paper to create a bar code.

The line-coded signal can be converted to pits on an optical disc.

Unfortunately, most long-distance communication channels cannot transport a DC component. The DC component is also called the disparity, the bias, or the DC coefficient. The simplest possible line code, called unipolar because it has an unbounded DC component, gives too many errors on such systems. Most line codes therefore eliminate the DC component; such codes are called DC-balanced, zero-DC, zero-bias, or DC-equalized. There are two ways of eliminating the DC component: using a constant-weight code (every codeword contains equal numbers of high and low levels) or using a paired-disparity code (codewords with nonzero disparity come in pairs used alternately, as in AMI).

Line coding should also make it possible for the receiver to synchronize itself to the phase of the received signal. If the synchronization is not ideal, the signal to be decoded will not have optimal differences (in amplitude) between the various digits or symbols used in the line code, which will increase the error probability in the received data. It is also preferred for the line code to have a structure that enables error detection.

4.2.1 –Non Return to Zero (NRZ) Line Coding

In telecommunication, a non-return-to-zero (NRZ) line code is a binary code in which 1s are represented by one significant condition (usually a positive voltage) and 0s are represented by some other significant condition, with no neutral or rest condition.

4.2.1.1 – Unipolar Non Return to Zero Line Coding

In this line code, shown in "Figure (4.4)", symbol 1 is represented by transmitting a pulse of amplitude "A" for the duration of the symbol, and symbol 0 is represented by switching off the pulse (sending a zero-amplitude pulse), so this line code is referred to as on-off signaling.


Disadvantages of on-off signaling:

The waste of power due to transmitting the DC level.

The fact that the power spectrum of the transmitted signal does not approach zero at zero frequency.

4.2.1.2 – Polar Non Return to Zero Line Coding

In this line code shown in “Figure (4.5)”, symbol 1 and 0 are represented by transmitting pulse of amplitudes “+A” and “–A” respectively.

This line code is relatively easy to generate, but its main disadvantage is that the power spectrum of the signal is large near zero frequency.

4.2.1.3 – Non Return to Zero Space Line Coding

"One" is represented by no change in physical level while "Zero" is represented by a change in physical level as shown in “Figure (4.6)”.

This "change-on-zero" is used by High-Level Data Link Control and USB. They both avoid long periods of no transitions (even when the data contains long sequences of 1 bits) by using zero-bit insertion.

4.2.1.4 – Non Return to Zero Inverted (Mark) Line Coding

"One" is represented by a transition of the physical level while "Zero" has no transition as shown in “Figure (4.7)”.

4.2.2 –Return to Zero (RZ) Line Coding

In telecommunication, a return-to-zero (RZ) line code is a binary code in which 1s are represented by one significant condition (usually a positive voltage) and 0s are represented by some other significant condition, with a periodic return to a neutral or rest condition.


An attractive feature of the unipolar RZ line code (described next) is the presence of delta functions at f = ±1/Tb in the power spectrum of the transmitted signal, which can be used for bit-timing recovery at the receiver. However, its disadvantage is that it requires 3 dB more power than polar return-to-zero signaling for the same probability of symbol error.

4.2.2.1 – Unipolar Return to Zero Line Coding

In this line code, symbol 1 is represented by a rectangular pulse of amplitude "A" and half-symbol width, and symbol 0 is represented by transmitting no pulse, as illustrated in "Figure (4.8)".

4.2.2.2 – Polar Return to Zero Line Coding

In this line code, symbol 1 is represented by a rectangular pulse of amplitude "A" and half-symbol width, and symbol 0 is represented by a rectangular pulse of the same width but reversed amplitude "−A", as illustrated in "Figure (4.9)".

4.2.2.3 – Bipolar Return to Zero Line Coding (AMI)

This line code uses three amplitude levels as indicated in “Figure (4.10)”. Specifically, positive and negative pulses of equal amplitude (i.e., +A, –A) are used alternately for symbol 1, with each pulse having a half-symbol width; no pulse is always used for symbol 0. A useful property of the BRZ signaling is that the power spectrum of the transmitted signal has no DC component and relatively insignificant low-frequency components for the case when symbols 1 and 0 occur with equal probability. This line code is also called alternate mark inversion (AMI) signaling.

Since this line code has three levels (+A, −A, 0), it is called a pseudoternary system; this makes the receiver more complex, since it must distinguish three levels.


4.2.3 –Manchester Line Coding

In telecommunication, Manchester code is a line code in which the encoding of each data bit has at least one transition and occupies the same time, as shown in "Figure (4.11)". It therefore has no DC component and is self-clocking, which means that it may be inductively or capacitively coupled and that a clock signal can be recovered from the encoded data. Because the DC component of the encoded signal does not depend on the data, it carries no information, allowing the signal to be conveyed conveniently by media (e.g. Ethernet) which usually do not pass a DC component; having a line code free of DC content is a big advantage.

Manchester code is widely used (e.g. in Ethernet, and also in RFID and Near Field Communication). There are more complex codes, such as 8B/10B encoding, that use less bandwidth to achieve the same data rate but may be less tolerant of frequency errors. Manchester code ensures frequent line-voltage transitions, in direct proportion to the clock rate, which helps clock recovery.
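The waveforms of the main line codes above can be sketched as short sample generators, two samples per bit. This is a sketch with our own helper names; the Manchester polarity convention (G. E. Thomas: 1 is high-then-low) is an assumption, since IEEE 802.3 uses the opposite:

```python
def polar_nrz(bits):
    # Section 4.2.1.2: hold +A or -A for the whole bit interval (A = 1 here)
    return [s for b in bits for s in ((1, 1) if b else (-1, -1))]

def ami_rz(bits):
    # Section 4.2.2.3 (AMI): half-width pulse of alternating polarity for 1, nothing for 0
    out, polarity = [], 1
    for b in bits:
        if b:
            out.extend([polarity, 0])
            polarity = -polarity
        else:
            out.extend([0, 0])
    return out

def manchester(bits):
    # Section 4.2.3, G. E. Thomas convention: 1 -> high/low, 0 -> low/high
    return [s for b in bits for s in ((1, -1) if b else (-1, 1))]
```

Summing any Manchester waveform gives zero, which is the DC-balance property discussed above; the AMI output is DC-balanced on average because the pulse polarity alternates.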

4.2.4 –Differential Line Coding

This method encodes information in terms of signal transitions. In particular, a transition is used to designate symbol 0 in the incoming binary data stream, while no transition is used to designate symbol 1. The differentially encoded data stream is shown in the following example (the leading 1 is the reference bit):

Original binary data:           0 1 1 0 1 0 0 1
Differentially encoded data:  1 0 0 0 1 1 0 1 1

This output differentially encoded data will be then represented as shown in “Figure (4.12)”.
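The transition rule above can be captured in a few lines (a sketch; function names are ours):

```python
def diff_encode(bits, ref=1):
    # Section 4.2.4 rule: a transition marks a 0, no transition marks a 1
    out = [ref]
    for b in bits:
        out.append(out[-1] if b == 1 else 1 - out[-1])
    return out

def diff_decode(encoded):
    # a repeated level decodes to 1, a level change decodes to 0
    return [1 if a == b else 0 for a, b in zip(encoded, encoded[1:])]

data = [0, 1, 1, 0, 1, 0, 0, 1]
enc = diff_encode(data)
print(enc)
```

Note that the decoder only compares adjacent levels, so inverting the whole waveform leaves the decoded data unchanged.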


4.3 – Amplitude Shift Keying (ASK) Modulation Technique

Amplitude-shift keying (ASK) is a form of modulation that represents digital data as variations in the amplitude of a carrier wave: the amplitude of an analog carrier signal (the modulated signal) varies in accordance with the bit stream (the modulating signal), keeping frequency and phase constant. The level of amplitude can be used to represent binary logic 0s and 1s, so we can think of the carrier signal as a switch that is turned ON or OFF. In the modulated signal, logic 0 is represented by the absence of the carrier, giving ON/OFF keying operation. ASK is linear and sensitive to atmospheric noise, distortion, etc. Both ASK modulation and demodulation processes are relatively inexpensive.

4.3.1 –Binary Amplitude Shift Keying (BASK)

4.3.1.1 – Overview

A binary amplitude-shift keying (BASK) signal is defined as shown in "Equations (4.7)":

Equations (4.7):

    s(t) = A m(t) cos(2π f_c t),    0 ≤ t ≤ T_b

where A is a constant, m(t) is '1' or '0' depending on the message, f_c is the carrier frequency, and T_b is the bit duration.

But the power of the signal is P = A²/2, thus A = √(2P).

And since the energy of the signal is E_b = P T_b:

    s(t) = √(2 E_b / T_b) m(t) cos(2π f_c t)

Let φ(t) = √(2 / T_b) cos(2π f_c t), the orthonormal basis function; then:

    s(t) = √(E_b) m(t) φ(t)

The binary amplitude shift keying constellation structure can be represented using the orthonormal basis function declared above, as clearly shown in "Figure (4.13)".

It is also important to look at the frequency response of BASK (see "Equations (4.8)"), from which we can estimate the BASK bandwidth efficiency.


Equations (4.8):

    S_BASK(f) = (1/2) [ M(f − f_c) + M(f + f_c) ]

where M(f) = ∫ m(t) e^(−j2πft) dt is the Fourier transform of m(t). The resulting null-to-null bandwidth is B = 2/T_b = 2R_b.

From the previous equations we can represent the response of the BASK across frequency as shown below in “Figure (4.14)”.

Finally, "Figure (4.15)" shows a simple illustrative example of encoding the binary sequence (0 1 0 1 0 0 1) using a special case of BASK called ON/OFF Keying (OOK).

4.3.1.2 – Calculating Probability of Error

The probability of error for BASK, and for its special case OOK, is given directly by "Equation (4.9)":

Equation (4.9):    P_b = Q( √(E_b / N₀) )

where P_b is the bit error probability, E_b is the average energy per bit, and N₀/2 is the noise power spectral density.
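The OOK error rate can be checked against Equation (4.9) by simulation. A sketch (assuming NumPy; the seed, bit count, and the 10 dB operating point are arbitrary choices):

```python
import math
import numpy as np

def q(x):
    # Gaussian tail function used in Equation (4.9)
    return 0.5 * math.erfc(x / math.sqrt(2))

rng = np.random.default_rng(1)
n_bits = 200_000
Eb = 1.0                    # average energy per bit
EbN0 = 10.0                 # Eb/N0 = 10 dB in linear form
N0 = Eb / EbN0
amp = math.sqrt(2 * Eb)     # OOK levels 0 and sqrt(2*Eb) average to Eb

bits = rng.integers(0, 2, n_bits)
rx = bits * amp + rng.normal(0.0, math.sqrt(N0 / 2), n_bits)
ber = float(np.mean((rx > amp / 2) != (bits == 1)))   # mid-point threshold detector
theory = q(math.sqrt(Eb / N0))
print(ber, theory)
```

The simulated rate lands close to the Q-function prediction; the residual gap is Monte Carlo noise that shrinks as the bit count grows.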


4.3.2 –M’ary Amplitude Shift Keying (MASK)

4.3.2.1 – Overview

An M’ary amplitude-shift keying (MASK) signal is defined as shown in “Equations (4.10)”.

Equations (4.10):

    s_i(t) = A_i cos(2π f_c t),    A_i = (2i − (M − 1)) d,    i = 0, 1, 2, …, (M − 1),    M ≥ 4

where d is a constant setting half the spacing between adjacent amplitude levels. As with BASK, let φ(t) = √(2 / T_s) cos(2π f_c t) be the orthonormal basis function; then:

    s_i(t) = √(E_i) φ(t),    E_i = A_i² T_s / 2

"Figure (4.16)" shows a simple illustrative example of encoding a stream of binary bits using the 4-ASK technique.

4.3.2.2 – Calculating Probability of Error

The probability of error for MASK is given directly by "Equation (4.11)":

Equation (4.11):

    P_s = (2(M − 1)/M) Q( √( 6 E_avg / ((M² − 1) N₀) ) )

where E_avg is the average symbol energy.


4.4 – Phase Shift Keying (PSK) Modulation Technique

Phase Shift Keying (PSK) is a digital modulation scheme that conveys data by changing, or modulating, the phase of a reference signal (the carrier wave). Any digital modulation scheme uses a finite number of distinct signals to represent digital data; PSK uses a finite number of phases, each assigned a unique pattern of binary digits. Usually, each phase encodes an equal number of bits, and each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal; such a system is termed coherent (and referred to as CPSK). Alternatively, instead of using the bit patterns to set the phase of the wave, they can be used to change it by a specified amount. The demodulator then determines the changes in the phase of the received signal rather than the phase itself. Since this scheme depends on the difference between successive phases, it is termed differential phase-shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary PSK, since there is no need for the demodulator to keep a copy of the reference signal to determine the exact phase of the received signal (it is a non-coherent scheme). In exchange, it produces more erroneous demodulations. The exact requirements of the particular scenario under consideration determine which scheme is used.

4.4.1 –Binary Phase Shift Keying (BPSK)

4.4.1.1 – Overview

Here the phase of a constant-amplitude carrier signal is switched between two values according to the two possible message symbols m₁ and m₂, which correspond to binary 1 and 0. Normally the two phases are separated by a 180° phase shift, and the carrier has amplitude "A_c".

4.4.1.2 – BPSK Representation

In this clause we will talk about different BPSK representations in different domains which will allow us to show briefly all the features of this modulation scheme.

4.4.1.2.1 – Equations Representation

BPSK can be represented as shown in "Equations (4.12)":

Equations (4.12):

    s₁(t) = √(2 E_b / T_b) cos(2π f_c t)        for binary 1
    s₂(t) = −√(2 E_b / T_b) cos(2π f_c t)       for binary 0

Equivalently s(t) = m(t) √(2 E_b / T_b) cos(2π f_c t), where m(t) ∈ {+1, −1} is the message (modulating) waveform. s₁(t) and s₂(t) are known as antipodal signals.


4.4.1.2.2 – Time Domain Representation

"Figure (4.17)" shows an example of a BPSK modulated signal for the binary modulating waveform (1 0 1 1 0).

4.4.1.2.3 – Spectrum and Bandwidth Representation

The power spectral density (PSD) of BPSK can be represented as shown in "Equations (4.13)":

Equations (4.13):

    S(f) = (E_b / 2) [ sinc²((f − f_c) T_b) + sinc²((f + f_c) T_b) ]

From these equations it is obvious that the null-to-null bandwidth is B = 2/T_b, and from the power spectral distribution shown in "Figure (4.18)" it is clear that most of the signal energy is contained in this null-to-null bandwidth, so rectangular or raised-cosine filters can be used with this type of modulation.


4.4.1.2.4 – Constellation Representation

BPSK is represented using a single basis function and two constellation points separated from each other by 2√(E_b), as shown in "Figure (4.19)"; since only one basis function is needed, this is called a one-dimensional constellation representation.

4.4.1.3 – BPSK Modulator

Using a balanced modulator and data in its polar NRZ form (see Clause "4.2.1.2") we can generate the BPSK signal as shown in "Figure (4.20)", taking into consideration that the carrier frequency must satisfy f_c = n/T_b for an integer n, so that each bit interval contains a whole number of carrier cycles.

4.4.1.4 – BPSK De-Modulator

As we pointed out before (see "Figure (4.2)"), PSK modulation techniques must be coherently demodulated, so a carrier-recovery circuit must be applied to obtain the carrier again at the receiver side. To detect the original binary sequence of 1s and 0s, we apply the noisy PSK signal to a correlator supplied with the locally generated carrier; the correlator output is then compared with a zero-volt threshold detector. If the output exceeds zero the receiver decides in favor of symbol 1; otherwise the receiver decides in favor of symbol 0, as illustrated in "Figure (4.21)".


4.4.1.5 – Power and Bandwidth Properties of BPSK

As shown before in "Figure (4.19)", the BPSK constellation contains only 2 constellation points, which makes it straightforward to deduce the following:

BPSK is a high power-efficiency system, since the two antipodal points are maximally separated for a given energy.

But BPSK is a low bandwidth-efficiency system, since it carries only one bit per symbol.

**** Power and bandwidth efficiency are defined clearly in "Clause (4.1.3.2)".

4.4.1.6 – Probability of Error for BPSK Modulation Scheme

The probability of error for a BPSK modulated system is given by "Equations (4.14)":

Equations (4.14):

    P_b = Q( d / √(2 N₀) ),    where d is the distance between the two constellation points.

For BPSK d = 2√(E_b), so:

    P_b = Q( √(2 E_b / N₀) ) = (1/2) erfc( √(E_b / N₀) )
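Equations (4.14) in executable form (a sketch; the helper names are ours), confirming along the way that the Q-function and erfc forms agree:

```python
import math

def q(x):
    # Gaussian tail function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk(EbN0):
    # Equations (4.14): Pb = Q(sqrt(2 Eb/N0)) = (1/2) erfc(sqrt(Eb/N0))
    return q(math.sqrt(2 * EbN0))

# the two closed forms are the same function
for g in (1.0, 4.0, 10.0):
    assert math.isclose(ber_bpsk(g), 0.5 * math.erfc(math.sqrt(g)))

print(ber_bpsk(10 ** (9.6 / 10)))  # Eb/N0 of 9.6 dB
```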

4.4.2 –Differential Phase Shift Keying (DPSK)

4.4.2.1 – Overview

As we saw previously, BPSK is a coherent scheme and so needs coherent detection at the receiver side, which requires a complex and expensive detector. The noncoherent version of BPSK appeared to solve this problem.

4.4.2.2 – Differential Encoding and Decoding Method

The modulating signal must be differentially encoded before being passed onto the carrier. The encoding depends on the current input bit and the previous encoded bit (so it needs memory), and it must be reversed at the receiver side, as shown in "Equations (4.15)":

Equations (4.15):

    Encoding:    d_k = b_k ⊙ d_{k−1}    (XNOR of the input bit with the previous encoded bit)
    Decoding:    b_k = d_k ⊙ d_{k−1}

Differential encoding example (the leading 1 is the reference bit):

    Input bits b_k:          1 0 0 1 0 1 1 0
    Delayed bits d_{k−1}:    1 1 0 1 1 0 0 0
    Encoded bits d_k:      1 1 0 1 1 0 0 0 1

4.4.2.3 – DPSK Modulation

It consists of a one-bit delay element and a logic circuit interconnected so as to generate the differentially encoded sequence from the input binary sequence, as shown in "Figure (4.22)". Each output bit is delayed by one bit duration and XNORed with the next input bit; the output sequence is then converted to polar NRZ and passed through a product modulator to obtain the DPSK signal, exactly as in BPSK.


4.4.2.4 – DPSK De-Modulation

DPSK can be demodulated back using so many different demodulators; we will discuss two of them in this clause.

4.4.2.4.1 – Suboptimum Receiver

A process complementary to the encoding process, shown in "Figure (4.23)", is used to retrieve the original bit stream.

4.4.2.4.2 – Optimum Receiver

The demodulator shown in "Figure (4.24)" does not require phase synchronization between the reference and received signals, which is the main advantage for which the DPSK technique was introduced. It does, however, require the reference frequency to be the same as that of the received signal; this can be maintained by using stable oscillators, such as crystal oscillators, in both transmitter and receiver. However, where a Doppler shift exists in the carrier frequency, such as in mobile communications, frequency tracking is needed to maintain the same frequency. Therefore the suboptimum receiver is more practical, and indeed it is the usual-sense DBPSK receiver; its error performance is slightly inferior to that of the optimum receiver.
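The differential-detection idea behind the suboptimum receiver can be sketched end-to-end at baseband (function names are ours). Note in particular that a fixed 180° carrier-phase ambiguity, which would flip every coherent BPSK decision, does not affect the DPSK decisions at all:

```python
def dbpsk_tx(bits, ref=1):
    # Equations (4.15): d_k = XNOR(b_k, d_{k-1}); then map 1 -> +1, 0 -> -1 (polar NRZ)
    d = [ref]
    for b in bits:
        d.append(1 - (b ^ d[-1]))
    return [2.0 * v - 1.0 for v in d]

def dbpsk_rx(symbols):
    # suboptimum receiver idea: compare each symbol with the previous one;
    # equal signs decode to 1, a sign change decodes to 0
    return [1 if a * b > 0 else 0 for a, b in zip(symbols, symbols[1:])]

bits = [1, 0, 0, 1, 0, 1, 1, 0]
tx = dbpsk_tx(bits)
print(dbpsk_rx(tx))
```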


4.4.2.5 – DPSK Advantages and Disadvantages

Advantage: a simple detector at the receiver side. Disadvantage: energy efficiency is worse than that of ordinary coherent PSK (up to about 3 dB at low SNR, shrinking to under 1 dB at practical error rates).

4.4.2.6 – DPSK Power Spectral Density Representation

The power spectral density of DPSK is the same as the power spectral density of BPSK, which was illustrated clearly in "Clause (4.4.1.2.3)".

4.4.2.7 – Probability of Error in DPSK System

The probability of error when using the DPSK modulation technique is given by "Equation (4.16)":

Equation (4.16):    P_b = (1/2) e^(−E_b/N₀)

DPSK provides a gain of 3 dB over noncoherent binary FSK for the same value of E_b/N₀, as shown in "Figure (4.25)".
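Equation (4.16) side by side with the coherent BPSK result (a sketch; function names are ours), showing the modest noncoherence penalty at a typical operating point:

```python
import math

def ber_dbpsk(EbN0):
    # Equation (4.16): Pb = (1/2) exp(-Eb/N0), noncoherent DBPSK
    return 0.5 * math.exp(-EbN0)

def ber_bpsk(EbN0):
    # coherent BPSK reference: Pb = (1/2) erfc(sqrt(Eb/N0))
    return 0.5 * math.erfc(math.sqrt(EbN0))

g = 10.0  # Eb/N0 of 10 dB in linear form
print(ber_bpsk(g), ber_dbpsk(g))
```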


4.4.3 –M’ary Phase Shift Keying (MPSK)

4.4.3.1 – Overview

The aim of introducing MPSK as a modulation technique is to increase the bandwidth efficiency of the PSK modulation schemes. In BPSK, a data bit is represented by one symbol; in MPSK, n data bits are represented by one symbol, where n = log₂M, so the bandwidth efficiency is increased n times. Among all MPSK schemes, QPSK is the most often used, since it increases the bandwidth efficiency to a good extent without suffering BER degradation, while higher-order MPSK schemes increase bandwidth efficiency at the expense of BER performance.

4.4.3.2 – MPSK Representations

4.4.3.2.1 – Signal Equation Representation

"Equations (4.17)" shows the MPSK signal representation:

Equations (4.17):

    s_i(t) = √(2 E_s / T_s) cos( 2π f_c t + 2π(i − 1)/M ),    i = 1, 2, …, M,    0 ≤ t ≤ T_s

where T_s = T_b log₂M is one symbol's duration, E_s = E_b log₂M is one symbol's energy, T_b is one bit's duration, and E_b is one bit's energy. Using trigonometric identities, the previous equation becomes:

    s_i(t) = √(2 E_s / T_s) { cos(2π(i − 1)/M) cos(2π f_c t) − sin(2π(i − 1)/M) sin(2π f_c t) }

Let:

    φ₁(t) = √(2 / T_s) cos(2π f_c t)    (1st basis function)
    φ₂(t) = √(2 / T_s) sin(2π f_c t)    (2nd basis function)

Then the previous equation is denoted by:

    s_i(t) = √(E_s) { cos(2π(i − 1)/M) φ₁(t) − sin(2π(i − 1)/M) φ₂(t) }


4.4.3.2.2 – Constellation Representation

From the previous clause it is clear that the constellation representation of MPSK is two-dimensional, as the scheme has two basis functions. "Figure (4.26)" shows some examples of MPSK constellation representations at different values of M.

4.4.3.3 – Probability of Error in MPSK Scheme

The probability of error can be calculated using the inter-distance between constellation points, as shown in "Equations (4.18)":

Equations (4.18):

    d_min = 2 √(E_s) sin(π/M)

where d_min is the inter-distance between any 2 adjacent constellation points. The probability of symbol error can then be bounded by:

    P_s ≤ 2 Q( d_min / √(2 N₀) ) = 2 Q( √(2 E_s / N₀) sin(π/M) )

For M ≥ 4 this bound is also a tight approximation:

    P_s ≈ 2 Q( √(2 E_s / N₀) sin(π/M) )
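The approximation in Equations (4.18) as a small helper (a sketch; function names are ours), illustrating how the error rate worsens as M grows at fixed Es/N0:

```python
import math

def q(x):
    # Gaussian tail function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ser_mpsk(EsN0, M):
    # Equations (4.18) approximation, intended for M >= 4
    return 2 * q(math.sqrt(2 * EsN0) * math.sin(math.pi / M))

EsN0 = 10 ** (14 / 10)  # Es/N0 of 14 dB
for M in (4, 8, 16):
    print(M, ser_mpsk(EsN0, M))
```

For M = 4 the expression collapses to 2Q(√(Es/N0)), since sin(π/4) = 1/√2; this matches the QPSK result derived later.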

4.4.3.4 –Power and Bandwidth Efficiency of MPSK

As the value of "M" increases, the bandwidth efficiency increases. At the same time, increasing M implies that the constellation is more densely packed, and hence the power efficiency (noise tolerance) decreases. So, according to "Equations (4.19)", increasing M:

increases bandwidth efficiency, and

decreases power efficiency.


Equations (4.19):

    η_B = R_b / B = log₂M / 2    [bits/s/Hz, for the null-to-null bandwidth B = 2/T_s]

The relation between Symbol error rate and signal to noise ratio is shown by “Figure (4.27)”.


4.4.3.5 –MPSK Modulator

"Figure (4.28)" shows a simple hardware modulator circuit for the MPSK modulation scheme.

4.4.3.6 –MPSK De-Modulator

"Figure (4.29)" shows a simple hardware demodulator circuit for the MPSK modulation scheme.


Equations (4.20)

4.4.4 –Special MPSK Techniques: “QPSK”

4.4.4.1 – Overview

In this modulation technique two bits are transmitted in a single modulation symbol, so QPSK has twice the bandwidth efficiency of BPSK. The phase of the carrier takes on one of four equally spaced values, like the values shown in the next table, where each value of phase corresponds to a unique pair of message bits.

Message    Phase
0 0        0
0 1        π/2
1 1        π
1 0        3π/2

** The arrangement of message bits shown in the table’s first column is the best arrangement: it applies Gray coding, which guarantees that any two adjacent symbols differ from each other in only one bit, in order to decrease the bit error rate as much as possible. As with BPSK, QPSK can be differentially encoded to support noncoherent detection at the receiver side.
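The Gray property of the mapping above can be checked mechanically; a minimal sketch (the dictionary and helper names are ours, and the phase assignment is the one assumed in the table):

```python
import math

# Gray-coded bit-pair -> carrier phase mapping (assumed assignment:
# 00 -> 0, 01 -> pi/2, 11 -> pi, 10 -> 3*pi/2).
GRAY_QPSK_PHASE = {
    (0, 0): 0.0,
    (0, 1): math.pi / 2,
    (1, 1): math.pi,
    (1, 0): 3 * math.pi / 2,
}

def hamming(a, b):
    """Number of bit positions in which two tuples differ."""
    return sum(x != y for x, y in zip(a, b))

# Walking around the circle, adjacent phases always differ in exactly one bit.
pairs = sorted(GRAY_QPSK_PHASE, key=GRAY_QPSK_PHASE.get)
for i, cur in enumerate(pairs):
    nxt = pairs[(i + 1) % 4]
    assert hamming(cur, nxt) == 1
```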

4.4.4.2 – QPSK Representations

4.4.4.2.1 – Signal Equation Representation

QPSK signal can be represented as shown in “Equations (4.20)”.

s_i(t) = √(2E_s/T_s) cos( 2πf_c t + (2i − 1)π/4 ),  i : 1, 2, 3, 4.

Using trigonometric simplification techniques, the previous equation would be:

s_i(t) = √(2E_s/T_s) { cos( (2i − 1)π/4 ) cos(2πf_c t) − sin( (2i − 1)π/4 ) sin(2πf_c t) }

Let: φ1(t) = √(2/T_s) cos(2πf_c t)

and φ2(t) = √(2/T_s) sin(2πf_c t)

φ1(t) : 1st basis function. φ2(t) : 2nd basis function. Then the previous equation would be denoted by:

s_i(t) = √(E_s) { cos( (2i − 1)π/4 ) φ1(t) − sin( (2i − 1)π/4 ) φ2(t) }


Figure (4.30)

Equations (4.21)

Figure (4.31)

4.4.4.2.2 – Constellation Representation

QPSK can be represented with two different constellations, shown in “Figure (4.30)”.

4.4.4.3 – QPSK Probability of Error

From the general equation of the probability of error we can derive the probability of error for the QPSK modulation scheme, as shown in “Equations (4.21)”:

P_s = 2 Q( √(2E_s/N_0) sin(π/M) ) with M = 4, giving P_s ≈ 2 Q( √(E_s/N_0) )

and, per bit:

P_b = Q( √(2E_b/N_0) ) = ½ erfc( √(E_b/N_0) )

4.4.4.4 – Bandwidth of QPSK

The bandwidth of QPSK can be easily calculated from the spectrum distribution shown in “Figure (4.31)”; it is half of BPSK’s bandwidth.


Figure (4.32)

Figure (4.33)

4.4.4.5 – QPSK Modulator

Modulation is done in more than one step, as shown in “Figure (4.32)”; these steps are illustrated briefly in the next lines.

Initially the unipolar binary input stream must be converted into a bipolar non-return-to-zero sequence using a unipolar-to-bipolar converter.

The data sequence is separated by the serial-to-parallel converter to form the odd numbered bit sequence for “I” channel and the even numbered bit sequence for “Q” channel.

Next the “I” channel train is multiplied by the in-phase carrier cos(2πf_c t) and the “Q” channel train is multiplied by the quadrature carrier sin(2πf_c t).

Finally an adder is used to add these two waveforms together to produce the final QPSK signal.

The BPF at the output of the modulator confines the power spectrum of the QPSK signal within the allocated band; this prevents spill-over of signal energy into adjacent channels.
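The chain above (bipolar conversion, serial-to-parallel split, quadrature combining) can be sketched in baseband form; this is a simplified model, not the hardware circuit of the figure, and the function name is ours:

```python
import math

def qpsk_modulate(bits):
    """Baseband QPSK: unipolar bits -> bipolar NRZ, split into I (odd-numbered
    bits) and Q (even-numbered bits) rails, combined as I + jQ per symbol."""
    assert len(bits) % 2 == 0, "need an even number of bits"
    symbols = []
    for k in range(0, len(bits), 2):
        i = 1 - 2 * bits[k]        # odd-numbered bit -> in-phase rail
        q = 1 - 2 * bits[k + 1]    # even-numbered bit -> quadrature rail
        symbols.append(complex(i, q) / math.sqrt(2))   # unit-energy symbol
    return symbols
```

Each symbol has unit magnitude, reflecting QPSK’s constant envelope before pulse shaping.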

4.4.4.6 – QPSK De-Modulator

De-modulation is likewise done in more than one step, as shown in “Figure (4.33)”; these steps are illustrated briefly in the next lines.

The front-end bandpass filter is complementary to the one used at the modulator; it removes out-of-band noise and adjacent channel interference. The filtered output is split into two portions; each portion is coherently demodulated using the in-phase and quadrature carriers, which are recovered from the received signal using a carrier recovery circuit. The outputs of the demodulators are passed through decision circuits which generate the in-phase and quadrature binary streams; the two components are then multiplexed to reproduce the original binary stream.


Figure (4.35)

Figure (4.36)

4.4.5 – Special MPSK Techniques: “OQPSK”

Offset quadrature phase-shift keying (OQPSK) is a variant of phase-shift keying modulation that uses four different values of the phase to transmit, exactly as in QPSK, with some non-conceptual differences as shown below in “Figure (4.34)”. Taking two bits at a time to construct a QPSK symbol can allow the phase of the signal to jump by as much as 180° at a time. The amplitude of a QPSK signal is ideally constant; however, when QPSK signals are pulse shaped, they lose the constant envelope property. The phase shift of π radians can cause the signal envelope to pass through zero for just an instant. Any kind of hard limiting or nonlinear amplification of the zero-crossings brings back the filtered side lobes, since the fidelity of the signal at small voltage levels is lost in transmission. To prevent the regeneration of side lobes and spectral widening, it is imperative that QPSK signals be amplified only using linear amplifiers, which are less efficient. A modified form of QPSK, called offset QPSK, is less susceptible to these deleterious effects and supports more efficient amplification. By offsetting the timing of the odd and even bits by one bit-period, or half a symbol-period, the in-phase and quadrature components never change at the same time. This limits the phase shift to no more than 90° at a time, which yields much lower amplitude fluctuations than non-offset QPSK and is sometimes preferred in practical implementations.

The modulated signal shown below in “Figure (4.36)” represents a short segment of a random binary data stream. The sudden phase shifts occur about twice as often as for QPSK (since the I and Q signals no longer change together), but they are less severe; in other words, the magnitude of the jumps is smaller in OQPSK than in QPSK.
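The 90°-versus-180° phase-jump claim can be checked with a small baseband experiment; a sketch under our own conventions (rectangular pulses, two samples per symbol, arbitrary random data):

```python
import math
import random

def max_phase_jump(bits, offset):
    """Largest instantaneous phase change (degrees) of rectangular-pulse
    (O)QPSK, sampled twice per symbol; offset=True delays Q by half a symbol."""
    i_bits = [1 - 2 * b for b in bits[0::2]]
    q_bits = [1 - 2 * b for b in bits[1::2]]
    n = len(i_bits)
    i_tr = [i_bits[k // 2] for k in range(2 * n)]   # two samples per symbol
    q_tr = [q_bits[k // 2] for k in range(2 * n)]
    if offset:
        q_tr = [q_tr[0]] + q_tr[:-1]                # half-symbol delay on Q
    phases = [math.atan2(q, i) for i, q in zip(i_tr, q_tr)]
    worst = 0.0
    for a, b in zip(phases, phases[1:]):
        d = abs(math.degrees(b - a))
        worst = max(worst, min(d, 360.0 - d))       # wrap to [0, 180]
    return worst

random.seed(1)
data = [random.randint(0, 1) for _ in range(200)]
```

With the offset enabled the worst-case jump stays at 90°, while plain QPSK reaches 180° whenever both rails flip together.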


Figure (4.37)

4.4.6 – Special MPSK Techniques: “π/4-QPSK”

4.4.6.1 – Overview

π/4 QPSK is a shifted version of QPSK modulation which offers a compromise between OQPSK and QPSK in terms of the maximum allowed phase transitions, and it may be demodulated in a coherent or noncoherent fashion. In π/4 QPSK, the maximum phase change is limited to ±135°, as compared with 180° for QPSK and 90° for OQPSK. Hence, the band-limited π/4 QPSK signal preserves the constant envelope property better than a band-limited QPSK signal, but it is more susceptible to envelope variations than OQPSK. An extremely attractive feature of π/4 QPSK is that it can be noncoherently detected, which greatly simplifies receiver design and decreases the overall system cost. Further, it has been found that in the presence of multipath spread and fading, π/4 QPSK performs better than OQPSK. Very often, π/4 QPSK signals are differentially encoded to facilitate easier implementation of differential detection or coherent demodulation with phase ambiguity in the recovered carrier; when differentially encoded, π/4 QPSK is called π/4 DQPSK.

4.4.6.2 – “π/4” QPSK Constellation Representation

π/4 QPSK uses two identical constellations which are rotated by 45° with respect to one another, as shown below in “Figure (4.37)”. Usually, the even-numbered symbols select points from one of the constellations while the odd-numbered symbols select points from the other. This reduces the maximum phase shift from 180° to 135°, so the amplitude fluctuations of π/4 QPSK lie between those of OQPSK and non-offset QPSK. One property this modulation scheme possesses is that the signal trajectory does not pass through the origin, which lowers the dynamic range of the fluctuations in the signal; this is desirable in communications. In a π/4 QPSK modulator, the signaling points of the modulated signal are selected from two QPSK constellations which are shifted by π/4 with respect to each other.

4.4.6.3 – “π/4” QPSK Phase Distribution

Next table shows the phase distribution in π/4 QPSK modulation scheme.

Data stream bits    Phase transition
1 1                 π/4
0 1                 3π/4
0 0                 −3π/4
1 0                 −π/4


Figure (4.38)

4.4.6.4 – “π/4” QPSK Illustrating Example

“Figure (4.38)” shows the modulation of the binary data stream (1 1 0 0 0 1 1 0) using the π/4 QPSK modulation technique.

Successive symbols are taken from the two constellations shown in the diagram. Thus, the first symbol (1 1) is taken from the “blue” constellation and the second symbol (0 0) is taken from the “green” constellation, shown above in “Figure (4.37)”.
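The alternation between the two constellations can be sketched in a few lines; the bit-to-point mapping below is a hypothetical choice of ours, not necessarily the one in the figure:

```python
import cmath
import math

# Even-numbered symbols drawn from the {45, 135, 225, 315} degree set,
# odd-numbered symbols from the 45-degree-rotated {0, 90, 180, 270} set.
SET_A = [cmath.exp(1j * math.radians(a)) for a in (45, 135, 225, 315)]
SET_B = [cmath.exp(1j * math.radians(a)) for a in (0, 90, 180, 270)]

def pi4_qpsk(dibits):
    """Map a list of 2-bit tuples to unit symbols, alternating constellations
    (the index mapping 2*b1 + b0 is an assumption for illustration)."""
    out = []
    for k, (b1, b0) in enumerate(dibits):
        sel = SET_A if k % 2 == 0 else SET_B
        out.append(sel[2 * b1 + b0])
    return out

syms = pi4_qpsk([(1, 1), (0, 0), (0, 1), (1, 0)])
```

Every symbol-to-symbol phase change lands on an odd multiple of 45°, i.e. ±45° or ±135°, never 180°.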


Equations (4.21)

4.5 – Frequency Shift Keying (FSK) Modulation Technique

Frequency Shift Keying is also known as frequency shift signaling. In Frequency Shift Keying, a data signal is converted into a specific frequency or tone; in other words, it is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier wave, in order to transmit it over wire, cable, optical fiber or wireless media to a destination point.

4.5.1 –Historical View

The history of FSK dates back to the early 1900s, when the technique was discovered and then used alongside teleprinters to transmit messages by radio. FSK, with some modifications, is still effective in many settings, including the digital world, where it is commonly used in conjunction with computers and low-speed modems. In fact, the contributions of FSK are much more far-reaching: the principle of FSK has paved the way for similar techniques such as Audio Frequency Shift Keying (AFSK) and Multiple Frequency Shift Keying (MFSK), to name a few. In Frequency Shift Keying, the modulating signal shifts the output frequency between predetermined levels. Technically, FSK has two classifications, non-coherent and coherent FSK. In non-coherent FSK, the instantaneous frequency is shifted between two discrete values, named the mark and space frequencies respectively. On the other hand, in coherent frequency shift keying there is no phase discontinuity in the output signal. In this digital era, modulation is carried out by a computer, which converts binary data to FSK signals for transmission and, in turn, receives incoming FSK signals and converts them to the corresponding digital low and high levels, the language the computer understands best. The basic principle of Frequency Shift Keying is at least a century old. Despite its age, FSK has successfully maintained its use in more modern times, has adapted well to the digital domain, and continues to serve those that need to transfer data via computer, cable, or wire. There is no doubt that FSK will be around as long as there is a need to transmit information in a highly effective and affordable manner.

4.5.2 –Binary Frequency Shift Keying (BFSK)

4.5.2.1 – Overview

In binary frequency shift keying (BFSK), the frequency of a constant amplitude carrier signal is switched between two values according to the two possible message states (High and Low), corresponding to a binary stream value (1 or 0).

4.5.2.2 – BFSK Representations

4.5.2.2.1 – Signal Representation

“Equations (4.21)” shows the signal representation equations for the BFSK modulation technique.

s_1(t) = √(2E_b/T_b) cos( 2π(f_c + Δf)t ),  0 ≤ t ≤ T_b  (symbol “1”)

s_2(t) = √(2E_b/T_b) cos( 2π(f_c − Δf)t ),  0 ≤ t ≤ T_b  (symbol “0”)

Δf : constant offset from the nominal carrier frequency f_c.


Equations (4.22)

Figure (4.39)

4.5.2.2.2 – Orthogonality Condition

The most important factor to keep in mind while designing an FSK system is to keep the frequencies of the different symbols orthogonal, so as to minimize the correlation between the two symbols to zero, assuming perfect synchronization of the receiver oscillators. To achieve this we have to discuss the orthogonality condition, which is briefly derived below in “Equations (4.22)”.

Replacing s_1(t) and s_2(t) from “Equations (4.21)”:

ρ = ∫₀^{T_b} √(2E_b/T_b) cos( 2π(f_c + Δf)t ) · √(2E_b/T_b) cos( 2π(f_c − Δf)t ) dt

Simplification using trigonometric principles:

ρ = (E_b/T_b) ∫₀^{T_b} { cos(4πΔf t) + cos(4πf_c t) } dt

Finally, by integrating the previous equation:

ρ = E_b { sin(4πΔf T_b)/(4πΔf T_b) + sin(4πf_c T_b)/(4πf_c T_b) }

Since in practice f_c ≫ 1/T_b, the term on the right can be ignored, so the equation can be approximately converted to:

ρ ≈ E_b { sin(4πΔf T_b)/(4πΔf T_b) }

The general orthogonality condition is ρ = 0. Then we can derive the condition of orthogonality in FSK systems to be:

sin(4πΔf T_b) = 0  ⟹  4πΔf T_b = nπ  ⟹  f_1 − f_2 = 2Δf = n/(2T_b)

n : integer number; i.e. the tone separation must be an integer multiple of half the bit rate.
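The orthogonality condition can also be verified numerically: with a tone separation that is an integer multiple of 1/(2T_b) the correlation vanishes, otherwise it does not. A sketch (the bit rate and carrier values below are arbitrary assumptions):

```python
import math

def tone_correlation(f1, f2, tb, n=100_000):
    """Normalized correlation of two FSK tones over one bit period Tb,
    computed by midpoint-rule numerical integration; equal tones give ~1.0."""
    dt = tb / n
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        acc += math.cos(2 * math.pi * f1 * t) * math.cos(2 * math.pi * f2 * t)
    return 2.0 * acc * dt / tb

TB = 1e-3        # 1 kbit/s, so the minimum orthogonal spacing is 1/(2*Tb) = 500 Hz
FC = 10_000.0    # carrier chosen well above the bit rate
```

A separation of 1000 Hz (twice the minimum spacing) gives essentially zero correlation, while a 250 Hz separation leaves substantial residual correlation.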

4.5.2.3 – BFSK Illustrating Example

The next “Figure (4.39)” shows BFSK modulation of the binary sequence (0 1 0 1 0 0 1 1 0).


Equations (4.23)

4.5.2.4 – Power Spectral Density Evaluations

As is well known, by evaluating the PSD we can obtain the bandwidth properties of the system, so it is important to evaluate the PSD of any system. To evaluate the PSD of the FSK modulated signal we have to expand it, as shown below in “Equations (4.23)” (here Sunde’s FSK is assumed, with tone separation 2Δf = 1/T_b):

s(t) = √(2E_b/T_b) cos( 2πf_c t + 2π b_k Δf t )

= √(2E_b/T_b) { cos(2πΔf t) cos(2πf_c t) − b_k sin(2πΔf t) sin(2πf_c t) }

b_k : constant depending on the binary message (+1 or −1). By dividing the equation into two separate components we get an in-phase component, which is deterministic (independent of b_k), and a quadrature component, which carries the data.

Applying the Fourier transform to each of the two components separately produces the following. First component (a pure tone at ±Δf), contributing delta functions:

S_I(f) = ( E_b / 2T_b ) { δ( f − 1/(2T_b) ) + δ( f + 1/(2T_b) ) }

Second component (a half-sinusoid data pulse), contributing a continuous spectrum:

S_Q(f) = 8 E_b cos²(π T_b f) / ( π² (4 T_b² f² − 1)² )

Adding the two results in the frequency domain produces the baseband PSD:

S_B(f) = ( E_b / 2T_b ) { δ( f − 1/(2T_b) ) + δ( f + 1/(2T_b) ) } + 8 E_b cos²(π T_b f) / ( π² (4 T_b² f² − 1)² )

4.5.2.5 – BFSK Modulator

To generate a binary FSK signal we may use the simplified illustrating circuit shown in “Figure (4.40)”. The input binary sequence is represented in its on-off form, i.e. unipolar form, with symbol “1” represented by a constant amplitude of √E_b volts and symbol “0” represented by zero volts. By using an inverter in the lower channel, we in fact make sure that when we have symbol “1” at the input, the oscillator with frequency f_1 in the upper channel is switched on while the oscillator with frequency f_2 in the lower channel is switched off, with the result that only frequency f_1 is transmitted. Conversely, when we have symbol “0” at the input, the oscillator in the upper channel is switched off and the oscillator in the lower channel is switched on, with the result that only frequency f_2 is transmitted.

The two frequencies f_1 and f_2 are chosen to be integer multiples of the bit rate 1/T_b to achieve orthogonality, as proved before. In this transmitter we assume that the two oscillators are perfectly synchronized, so that their outputs satisfy the requirements of the two orthogonal basis functions. Alternatively, we may use a single keyed (voltage-controlled) oscillator. In either case, the frequency of the modulated wave is shifted with continuous phase, in accordance with the input binary wave; that is to say, phase continuity is always maintained, including at the inter-bit switching times.


Figure (4.40)

Figure (4.41)

4.5.2.6 – BFSK De-Modulators

4.5.2.6.1 – Coherent Detector

In order to detect the original binary sequence given the noisy received wave, we may use the receiver shown in “Figure (4.41)”, which mainly consists of two correlators with a common input, supplied with locally generated coherent reference signals. The correlator outputs are subtracted, one from the other, and the resulting difference is compared with a threshold of zero volts. If this difference is greater than “0”, the receiver decides in favor of “1”; on the other hand, if the difference is less than “0”, it decides in favor of “0”.

4.5.2.6.2 – Non-Coherent Detector

For noncoherent detection, the receiver consists of a pair of matched filters followed by envelope detectors, as clearly shown in “Figure (4.42)”: the filter in the upper path of the receiver is matched to the symbol signal with frequency f_1, and the filter in the lower path is matched to the symbol signal with frequency f_2.


Figure (4.42)

Equation (4.24)

Equation (4.25)

The resulting envelope detector outputs are sampled at t = T_b and their values are compared. The envelope samples of the upper and lower paths are denoted l_1 and l_2 respectively; then, if l_1 > l_2, the receiver decides in favor of symbol “1”, and if l_1 < l_2, the receiver decides in favor of symbol “0”.

4.5.2.7 – Probability of Error for BFSK

4.5.2.7.1 –Coherent Detector

To study the coherent-demodulator error performance of the transmitted FSK signal, knowing that the distance between the two message points is d_{12}, we have to derive P_e as shown below in “Equations (4.24)”.

d_{12} = √(2E_b)

P_e = Q( d_{12} / √(2N_0) ) = Q( √(E_b/N_0) )

4.5.2.7.2 – Non-Coherent Detector

Probability of error in the case of using a noncoherent detector at the receiver side can be represented by “Equation (4.25)”:

P_e = ½ exp( −E_b / 2N_0 )
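The two BFSK error probabilities (coherent and noncoherent) can be evaluated side by side; a sketch (function names are ours; Eb/N0 is a linear ratio, not dB):

```python
import math

def bfsk_pe_coherent(eb_n0):
    """Orthogonal BFSK with coherent detection: Pe = Q(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(eb_n0) / math.sqrt(2.0))

def bfsk_pe_noncoherent(eb_n0):
    """Orthogonal BFSK with envelope detection: Pe = 0.5 * exp(-Eb/(2*N0))."""
    return 0.5 * math.exp(-eb_n0 / 2.0)
```

At any positive Eb/N0 the coherent detector is the better of the two, which is the comparison drawn in section 4.8.2.2.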


Equation (4.26)

Equation (4.27)

Equation (4.28)

Equation (4.29)

4.5.3 –M’ary Frequency Shift Keying (MFSK)

4.5.3.1 – Overview

Multiple frequency-shift keying (MFSK) is a variation of frequency-shift keying that uses more than two frequencies. MFSK is a form of M’ary orthogonal modulation, where each symbol consists of one element from an alphabet of orthogonal waveforms. M, the size of the alphabet, is usually a power of two, so that each symbol represents log2(M) bits; M is usually between 2 and 64.

4.5.3.2 – MFSK Representations

4.5.3.2.1 – Signal Representation

Transmitted signal using MFSK can be represented using “Equation (4.26)”.

s_i(t) = √(2E_s/T_s) cos( (π/T_s)(n_c + i) t ),  0 ≤ t ≤ T_s

i : 1, 2, 3, …, M.  n_c : constant integer.

4.5.3.2.2 – Orthogonality Condition

The orthogonality condition can be derived as done before; see “Equation (4.27)”. Here the frequencies of the M waveforms are separated by 1/(2T_s), the minimum spacing that keeps them mutually orthogonal.

4.5.3.3 – Symbol and Bit Probability of Error in MFSK Systems

4.5.3.3.1 – Symbol Probability of Error

The average probability of symbol error is bounded as shown below in “Equation (4.28)”:

P_s ≤ (M − 1) Q( √(E_s/N_0) )

4.5.3.3.2 – Bit Probability of Error

The average probability of error for a single bit is shown below in “Equation (4.29)”:

P_b = ( (M/2) / (M − 1) ) P_s

Substituting P_s from “Equation (4.28)” we get:

P_b ≤ (M/2) Q( √(E_s/N_0) )
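The symbol and bit bounds above can be evaluated together; a sketch (function names are ours; the inputs are linear Eb/N0 ratios, not dB, with Es = log2(M)·Eb):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mfsk_symbol_bound(eb_n0, m):
    """Union bound on M-FSK symbol error: Ps <= (M - 1) * Q(sqrt(Es/N0))."""
    es_n0 = math.log2(m) * eb_n0
    return (m - 1) * q_func(math.sqrt(es_n0))

def mfsk_bit_bound(eb_n0, m):
    """Pb = (M/2)/(M - 1) * Ps, using the symbol-error bound above."""
    return (m / 2) / (m - 1) * mfsk_symbol_bound(eb_n0, m)
```

Evaluating at a fixed Eb/N0 shows the bound shrinking as M grows, the behavior discussed in the next clause.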


Figure (4.43)

4.5.3.3.3 – Effect of Changing “M” on the Probability of Error

Unlike the bandwidth-efficient M’ary techniques discussed earlier, in orthogonal MFSK increasing “M” (at a fixed E_b/N_0) decreases the probability of bit error, at the cost of increased transmission bandwidth, as clearly shown in “Figure (4.43)”.

4.5.4 –Other FSK Techniques

In this clause we will quickly show the concept and main features of some other FSK techniques.

4.5.4.1 – Minimum Shift Keying (MSK)

In the coherent detection of the binary FSK signal described before, the phase information contained in the received signal was not fully exploited, other than to provide for synchronization of the receiver to the transmitter. By proper utilization of the phase when performing detection, it is possible to improve the noise performance of the receiver significantly. This improvement is achieved at the expense of increased receiver complexity. In digital modulation, minimum-shift keying (MSK) is a type of continuous-phase frequency-shift keying that was developed in the late 1950s and 1960s. Similar to OQPSK, MSK is encoded with bits alternating between quadrature components, with the Q component delayed by half the symbol period. However, instead of the square pulses that OQPSK uses, MSK encodes each bit as a half sinusoid. This results in a constant-modulus signal, which reduces problems caused by non-linear distortion. In addition to being viewed as related to OQPSK, MSK can also be viewed as a continuous-phase frequency-shift-keyed (CPFSK) signal with a frequency separation of one-half the bit rate.


4.5.4.2 – Gaussian Minimum Shift Keying (GMSK)

Gaussian minimum shift keying (GMSK) is a modified version of MSK. A Gaussian filter is used to shape the digital data stream before it is applied to the frequency modulator, reducing the bandwidth of the baseband pulse train prior to modulation; it is called a pre-modulation filter. The Gaussian pre-modulation filter smooths the phase trajectory of the MSK signal, thus limiting the instantaneous frequency variations. The result after passing the filtered signal through the frequency modulator is an FM signal with a much narrower bandwidth. This bandwidth reduction does not come for free, since the pre-modulation filter smears the individual pulses in the pulse train. As a consequence of this smearing in time, adjacent pulses interfere with each other, generating what is commonly called inter-symbol interference, or ISI. In the applications where GMSK is used, the trade-off between power efficiency and bandwidth efficiency is well worth the cost. GMSK is most notably used in the Global System for Mobile Communications (GSM). There are two methods to generate GMSK: one is frequency-shift-keyed modulation, and the other is quadrature phase-shift-keyed modulation.


Equation (4.30)

Figure (4.44)

4.6 – Quadrature Amplitude Modulation (QAM) Modulation Technique

4.6.1 –Overview

Quadrature amplitude modulation (QAM) is both an analog and a digital modulation scheme. It conveys two analog message signals, or two digital bit streams, by modulating the amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation scheme or the amplitude modulation (AM) analog modulation scheme. The two carrier waves, usually sinusoids, are out of phase with each other by 90° and are thus called quadrature carriers or quadrature components; hence the name of the scheme. The modulated waves are summed, and the resulting waveform is a combination of both phase-shift keying (PSK) and amplitude-shift keying (ASK), or in the analog case of phase modulation (PM) and amplitude modulation (AM). In the digital QAM case, a finite number of at least two phases and at least two amplitudes are used. PSK modulators are often designed using the QAM principle, but are not considered QAM since the amplitude of the modulated carrier signal is constant. QAM is used extensively as a modulation scheme for digital telecommunication systems. From another point of view, M’ary PSK systems consist of fixed-step phase shifts with a constant envelope. In trying to increase such a system’s capacity, the constellation points get closer to each other, increasing the bit error rate. A simple solution is to increase the radius of the constellation circle, but of course that also increases the power used. A new technique was developed to overcome that problem by making use of the available space inside the constellation circle.

4.6.2 –QAM types

QAM modulation technique can be classified into two main types according to the constellation point arrangement.

4.6.2.1 – Circular QAM

It’s simply a type of QAM modulation that can be represented by “Equation (4.30)”:

s_i(t) = r_i cos( 2πf_c t + θ_i )

r_i : normalization level constant (amplitude level). θ_i : symbol’s phase.

And it can also be represented on the constellation diagram, as shown in “Figure (4.44)”.


Equation (4.31)

Figure (4.46)

Figure (4.45)

4.6.2.1.1 – 16-QAM as an Example on Circular QAM

Applying the concept shown above in “Figure (4.44)” to the 16-QAM modulation technique, we get the circular constellation shown in “Figure (4.45)”.

4.6.2.2 – Rectangular QAM

It’s simply a type of QAM modulation that can be represented by “Equation (4.31)”:

s_i(t) = √(2E_0/T) a_i cos(2πf_c t) − √(2E_0/T) b_i sin(2πf_c t)

(a_i, b_i) : a pair of independent integers chosen to specify a certain constellation point, a_i, b_i ∈ {±1, ±3, …, ±(L − 1)}, such that:

L = √M

E_0 : energy of the signal with the lowest amplitude. And it can also be represented on the constellation diagram, as shown in “Figure (4.46)”.


Equation (4.32)

Figure (4.47)

Equations (4.33)

4.6.2.2.1 – 16-QAM as an Example on Rectangular QAM

Applying the concept shown above in “Equation (4.31)” to 16-QAM we get:

{a_i, b_i} =
[ (−3, 3)   (−1, 3)   (1, 3)   (3, 3)
  (−3, 1)   (−1, 1)   (1, 1)   (3, 1)
  (−3, −1)  (−1, −1)  (1, −1)  (3, −1)
  (−3, −3)  (−1, −3)  (1, −3)  (3, −3) ]

And using the conceptual diagram shown in “Figure (4.46)” for 16-QAM, which needs 4×4 constellation points, and applying Gray coding, which ensures that between any two adjacent points only one bit changes, we get the diagram shown in “Figure (4.47)”.
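A Gray-mapped 16-QAM grid of this kind can be generated and checked programmatically; a sketch assuming one common 2-bit Gray ladder per rail (the exact bit labeling in the figure may differ):

```python
from itertools import product

# Assumed 2-bit Gray ladder for the four amplitude levels on each rail.
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_point(bits):
    """(b3, b2) Gray-select the a_i level, (b1, b0) the b_i level."""
    b3, b2, b1, b0 = bits
    return GRAY2[(b3, b2)], GRAY2[(b1, b0)]

constellation = {bits: qam16_point(bits) for bits in product((0, 1), repeat=4)}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Every horizontally or vertically adjacent pair of points differs in one bit.
by_point = {pt: bits for bits, pt in constellation.items()}
for (a_i, b_i), bits in by_point.items():
    for neighbor in ((a_i + 2, b_i), (a_i, b_i + 2)):
        if neighbor in by_point:
            assert hamming(bits, by_point[neighbor]) == 1
```

Because each rail uses its own Gray ladder, nearest-neighbor symbol errors cost exactly one bit, which is what keeps the bit error rate low.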

4.6.3 –Calculating Probability of Error

The QAM modulation technique always deals with the symbol error rate rather than the bit error rate, so here we focus on calculating the probability of error for one given symbol, as shown in “Equations (4.33)”.

For the L-level amplitude modulation on each quadrature rail:

P_e' = 2 (1 − 1/L) Q( √(2E_0/N_0) )

And L = √M, so the probability of a correct symbol decision is (1 − P_e')², giving:

P_e = 1 − (1 − P_e')² ≈ 2P_e' ≈ 4 (1 − 1/√M) Q( √(2E_0/N_0) )

But the average symbol energy is:

E_av = 2(M − 1)E_0 / 3

Then finally we get:

P_e ≈ 4 (1 − 1/√M) Q( √( 3E_av / ((M − 1)N_0) ) )
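The final square M-QAM expression can be evaluated numerically; a sketch (function names are ours; the average Es/N0 is passed as a linear ratio, not dB):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ser(es_n0, m):
    """Approximate square M-QAM symbol error rate:
    Pe ~= 4 * (1 - 1/sqrt(M)) * Q( sqrt(3 * Es / ((M - 1) * N0)) )."""
    return 4.0 * (1.0 - 1.0 / math.sqrt(m)) * q_func(math.sqrt(3.0 * es_n0 / (m - 1)))
```

At a fixed average Es/N0, moving from 16-QAM to 64-QAM visibly raises the symbol error rate, the trade-off revisited in section 4.8.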


Figure (4.48)

Figure (4.49)

Equation (4.34)

4.6.4 –QAM Modulator

Binary data are split into two parallel paths; in each path the bits are amplitude shift keyed to L = √M levels and then phase shift keyed using the two independent quadrature carriers. The two paths are then combined again to form the M’ary QAM signal, as shown below in “Figure (4.48)”.

4.6.5 –QAM De-Modulator

As in PSK systems, in QAM modulation coherent and differentially coherent detection could be used, but the most widely used detector is the coherent one, whose block diagram is shown in “Figure (4.49)”.

4.6.6 –QAM Bandwidth Efficiency

It is identical to the M’ary phase shift keying bandwidth efficiency, as shown in “Equation (4.34)”:

η_B = log2(M)


Figure (4.51)

4.7 – Coherent Detection

The coherent detection of a digitally modulated signal is a very wide topic that involves lots of details; here we will give a quick introduction to some of its main points. Coherent detection, irrespective of its form, requires good synchronization between transmitter and receiver. We say that two sequences of events (representing a transmitter and a receiver) are synchronous relative to each other when the events in one sequence and the corresponding events in the other occur simultaneously. The process of making a situation synchronous, and maintaining it in this situation, is called synchronization. From the discussion presented on the operation of digital modulation techniques, we recognize the need for two basic modes of synchronization. First, when coherent detection is used, knowledge of both the frequency and the phase of the carrier is necessary; the estimation of the carrier phase and frequency is called carrier recovery or carrier synchronization. Second, to perform demodulation the receiver has to know the instants of time at which the modulation can change its state; that is, it has to know the starting and finishing times of the individual symbols, so that it may determine when to sample and when to quench the product-integrators. The estimation of these times is called clock recovery or symbol synchronization.

4.7.1 –Carrier Recovery and Symbol Synchronization

Symbol synchronization is required in every digital communication system which transmits information synchronously; carrier recovery is required if the signal is detected coherently. There are two main types of carrier synchronizers: the “Mth” power loop, shown in “Figure (4.50)” at left, and the Costas loop, shown in “Figure (4.50)” at right.

4.7.2 –Clock Recovery

The clock or symbol timing recovery can be classified into two basic groups. One group is the open loop synchronizer which uses nonlinear devices. These circuits recover the clock signal directly from the data stream by nonlinear operations on the received data stream. Another group is the closed-loop synchronizers which attempt to lock a local clock signal onto the received data stream by use of comparative measurements on the local and received signals.


4.8 – Comparison between Different Modulation Techniques

To sum up the whole chapter, in this clause we will compare all the modulation techniques shown before from more than one point of view.

4.8.1 –Probability of Error

In the next table the probabilities of error for the different modulation techniques, as calculated previously, are collected.

Modulation Scheme     Probability of Error

ASK (coherent)        P_e = Q( √(E_b/N_0) )

M’ary ASK             P_e = 2 (1 − 1/M) Q( √( 6E_av / ((M² − 1)N_0) ) )

BPSK                  P_e = Q( √(2E_b/N_0) ) = ½ erfc( √(E_b/N_0) )

DPSK                  P_e = ½ e^(−E_b/N_0)

M’ary PSK             P_e ≈ 2 Q( √(2E_s/N_0) sin(π/M) )

QPSK (per bit)        P_e = Q( √(2E_b/N_0) ) = ½ erfc( √(E_b/N_0) )

MSK                   P_e = Q( √(2E_b/N_0) )

GMSK                  P_e = Q( √(2αE_b/N_0) ), α set by the Gaussian filter’s BT product

BFSK (coherent)       P_e = Q( √(E_b/N_0) )

M’ary FSK             P_e ≤ (M − 1) Q( √(E_s/N_0) )

QAM (square)          P_e ≈ 4 (1 − 1/√M) Q( √( 3E_av / ((M − 1)N_0) ) )


Figure (4.52)

4.8.2 –Bit Error Rate Curves

Using Matlab’s bit error rate tool (BERTool), shown below in “Figure (4.52)”, we can plot the bit error rate versus signal-to-noise ratio relation, which is considered one of the most important relations in advanced digital communications; it summarizes the performance of the different modulation techniques, so a fair comparison between the different modulation schemes can easily be made.

This powerful bit error rate tool, which is part of Matlab’s Communications Toolbox, supports lots of features, like:

1. Managing sequences of simulations with different values of signal-to-noise ratio.
2. Plotting the produced results to compare between various modulation techniques.
3. Choosing between theoretical, semianalytic or Monte Carlo analysis.
4. Simulation in a non-fading environment or a Rayleigh fading environment.
5. Exporting simulation results to Matlab’s workspace for further processing.
6. Channel-coded sequences (convolutional and block encodings).
7. Coherent and noncoherent detection.
8. Differentially and normally encoded sequences.
9. Simulation of practical synchronization errors.
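The Monte Carlo idea behind such a tool can be sketched in a few lines of Python for BPSK over AWGN (a toy stand-in for illustration, not the Matlab tool itself; function names are ours):

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber_monte_carlo(eb_n0_db, n_bits=200_000, seed=0):
    """Monte Carlo BER of BPSK over AWGN: map each bit to +/-1 (unit energy),
    add Gaussian noise of variance N0/2, decide with a zero threshold."""
    rng = random.Random(seed)
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * eb_n0))   # noise standard deviation for Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        rx = (1 - 2 * bit) + rng.gauss(0.0, sigma)
        errors += (rx < 0) != (bit == 1)     # count decision errors
    return errors / n_bits
```

With enough bits the simulated BER should land close to the theoretical Q(√(2E_b/N_0)) curve, which is exactly the check the theoretical and Monte Carlo modes of the tool let you make side by side.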


Figure (4.53)

Figure (4.54)

4.8.2.1 – Phase Shift Keying BER curves

The next “Figure (4.53)” compares differentially encoded binary phase shift keying with normally encoded binary phase shift keying.

“Figure (4.54)” then shows a brief comparison between all phase shift keying modulation techniques in the presence of an AWGN channel.


Figure (4.57)

Figure (4.55)

Figure (4.56)

4.8.2.1.1 – Notes on Phase Shift Keying BER curves

By simulating PSK modulation schemes from BPSK up to 64-PSK, as shown above, over the signal-to-noise ratio range from 0 dB to 25 dB, we notice the following:

BPSK and QPSK have the same probability of error but QPSK has higher spectral efficiency.

As “M” increases the probability of error increases.

The arrangement of the PSK modulation schemes from the bit error rate point of view is shown in “Figure (4.55)”.

The arrangement of the PSK modulation schemes from the bit rate point of view is shown in “Figure (4.56)”.

From “Figure (4.54)” power efficiency and spectral efficiency can be calculated to be:

Modulation Scheme                        BPSK     QPSK     8-PSK    16-PSK   64-PSK
Spectral efficiency (log2 M, bit/s/Hz)   1        2        3        4        6
Power efficiency @ BER = 10⁻⁶            10.5 dB  10.5 dB  18.5 dB  23.2 dB  28.5 dB

From the above comparisons it’s obvious that choosing “M” is a power/bandwidth efficiency trade off.

4.8.2.2 – Frequency Shift Keying BER curves

As is well known, FSK modulation can be detected using coherent or non-coherent detectors; the next “Figure (4.57)” compares the performance of coherent and non-coherent detection.

[Figure (4.55) legend: BER increases in the order BPSK/QPSK, 8-PSK, 16-PSK, 64-PSK.]

[Figure (4.56) legend: bit rate increases in the order BPSK, QPSK, 8-PSK, 16-PSK, 64-PSK.]


Figure (4.58)

“Figure (4.58)” shows a brief comparison of all frequency shift keying modulation schemes.

4.8.2.2.1 – Notes on Frequency Shift Keying BER curves

Simulating the FSK modulation schemes from BFSK to 32-FSK over the signal-to-noise ratio range from 0 dB to 25 dB, as shown above, we notice the following:

Coherent detection is better than non-coherent detection from the BER point of view, but non-coherent detection is still better from the complexity point of view.

As the order of modulation increases (M increases) the BER decreases.

From “Figure (4.58)” power efficiency and spectral efficiency can be calculated to be:

Modulation scheme                         BFSK     4-FSK    8-FSK    16-FSK   32-FSK
Spectral efficiency (proportional to log2(M)/M, decreasing with M)
Power efficiency @ BER = 10^-6            13.5 dB  10.7 dB  9.2 dB   8.2 dB   7.5 dB

From the above comparison it is obvious that choosing “M” is a trade-off between power efficiency and required transmission bandwidth.
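The coherent/non-coherent gap can be checked against the closed-form BER expressions for orthogonal BFSK: Pb = Q(√(Eb/N0)) for coherent detection and Pb = ½·exp(−Eb/N0 / 2) for non-coherent detection. A small Python sketch (function names are ours):

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bfsk_coherent_ber(ebno_db):
    """BER of coherently detected orthogonal BFSK: Pb = Q(sqrt(Eb/N0))."""
    return q(math.sqrt(10 ** (ebno_db / 10)))

def bfsk_noncoherent_ber(ebno_db):
    """BER of non-coherently detected orthogonal BFSK: Pb = 0.5 exp(-Eb/N0 / 2)."""
    return 0.5 * math.exp(-10 ** (ebno_db / 10) / 2)

if __name__ == "__main__":
    for ebno_db in (0, 4, 8, 12):
        print(f"Eb/N0 = {ebno_db} dB: coherent {bfsk_coherent_ber(ebno_db):.4e}, "
              f"non-coherent {bfsk_noncoherent_ber(ebno_db):.4e}")
```

Evaluating both curves over the SNR sweep shows the non-coherent detector is always worse, with the gap narrowing to roughly a fraction of a dB at high SNR, which matches the behavior seen in “Figure (4.57)”.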


Figure (4.59)

4.8.2.3 – Quadrature Amplitude Modulation BER curves

“Figure (4.59)” shows a brief comparison of the performance of all QAM modulation techniques.

4.8.2.3.1 – Notes on Quadrature Amplitude Modulation BER curves

Simulating the QAM modulation schemes from 4-QAM to 1024-QAM over the signal-to-noise ratio range from 0 dB to 25 dB, as shown above, we notice the following:

As M increases the BER increases.

2-QAM is equivalent to BPSK, while 4-QAM is equivalent to QPSK, which were previously simulated.

As M increases the spectral efficiency increases

From “Figure (4.59)” power efficiency and spectral efficiency can be calculated to be:

Modulation scheme                 8-QAM    16-QAM   32-QAM   64-QAM   128-QAM  256-QAM  512-QAM  1024-QAM
Spectral efficiency               1.5      2.0      2.5      3.0      3.5      4.0      4.5      5.0
Power efficiency @ BER = 10^-6    13.5 dB  14.5 dB  17.5 dB  18.7 dB  22 dB    23.5 dB  27 dB    28.5 dB

From the above comparison it is obvious that choosing “M” is a power/bandwidth efficiency trade-off.
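For square Gray-coded M-QAM the trade-off is captured by the standard BER approximation Pb ≈ (4/k)(1 − 1/√M)·Q(√(3·k·Eb/N0 / (M − 1))), with k = log2(M). A minimal Python sketch (names are ours):

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mqam_ber(m, ebno_db):
    """Approximate BER of Gray-coded square M-QAM on AWGN.

    Pb ~ (4/k)(1 - 1/sqrt(M)) Q( sqrt(3 k Eb/N0 / (M - 1)) ), k = log2(M).
    """
    k = math.log2(m)
    ebno = 10 ** (ebno_db / 10)
    return (4 / k) * (1 - 1 / math.sqrt(m)) * q(math.sqrt(3 * k * ebno / (m - 1)))

if __name__ == "__main__":
    for m in (4, 16, 64, 256):
        print(f"{m}-QAM BER at Eb/N0 = 15 dB: {mqam_ber(m, 15):.3e}")
```

The (M − 1) term in the argument of Q shows why each quadrupling of the constellation costs several dB of power efficiency for the same target BER, consistent with the table above.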


Figure (4.60)

Figure (4.61)

4.8.2.4 – Comparative Simulations

4.8.2.4.1 – At Fixed Modulation Order

“Figure (4.60)” compares different modulation schemes at a fixed modulation order of 16.

From the previous figure we can notice the following:

16-FSK is the best modulation scheme in BER terms, since it trades the better performance for excessive transmission bandwidth, as shown before. 16-QAM is better than 16-PSK, since the symbols in 16-QAM cover the whole constellation plane rather than being confined to a densely packed circle.

To sum up: when the same spectral efficiency is required, square QAM is used instead of PSK, while PSK is used when linear amplification cannot be guaranteed.

We can reach the same conclusions if we fix the modulation order at another value, for example 8, exactly as shown below in “Figure (4.61)”.


Figure (4.61)

4.8.2.4.2 – All Introduced Modulation Techniques

“Figure (4.62)” gathers all the previously introduced modulation techniques, including all the phase shift keying, frequency shift keying, and quadrature amplitude modulation schemes.

Arranging some modulation schemes in descending order of power efficiency (from the most to the least, using the previous figure): 32-FSK, 16-FSK, 8-FSK, BPSK/QPSK, 4-FSK, BFSK, 8-PSK, 16-QAM, 32-QAM, 16-PSK, 64-QAM, 256-QAM, 512-QAM, 1024-QAM.


4.8.3 – Overall Modulation Techniques Discussion

In this clause we highlight some important points concerning each of the modulation schemes introduced in this chapter.

1. Coherent reception provides better performance than differential reception, but requires a more complex receiver.

2. The detailed information above about the various modulation techniques shows that bandwidth efficiency is always traded off against power efficiency.

3. MFSK is power efficient, but not bandwidth efficient (because the probability of error decreases by increasing M and that would increase the transmission bandwidth).

4. MPSK and QAM are bandwidth efficient but not power efficient.
5. Mobile radio systems are bandwidth limited; therefore PSK is more suited.
6. Phase shift keying is often used, as it is counted to be a highly bandwidth-efficient modulation scheme.
7. The constant-envelope class is generally suitable for communication systems whose power amplifiers must operate in the nonlinear region of the input-output characteristic in order to achieve maximum amplifier efficiency.
8. QPSK modulation is very robust, but requires some form of linear amplification.
9. OQPSK and π/4-QPSK can be implemented more easily than normal QPSK; moreover, they are able to reduce the envelope variations of the signal.
10. Constant-envelope schemes (such as GMSK) can be employed since an efficient, nonlinear amplifier can be used.
11. Generic non-constant-envelope schemes, such as ASK and QAM, are generally not suitable for systems with nonlinear power amplifiers. However, QAM with a large signal constellation can achieve extremely high bandwidth efficiency.
12. QAM has been widely used in modems for telephone networks, such as computer modems. QAM can even be considered for satellite systems.
13. High-level M-ary schemes (such as 256-QAM) are very bandwidth efficient, but more susceptible to noise and require linear amplification.
14. In our “Digital Video Broadcasting - Second Generation Terrestrial” advanced communication system, some of the discussed techniques are used, namely BPSK, QPSK, 16-QAM, 64-QAM, and 256-QAM.

4.8.4 – Modulation and Demodulation using Matlab

In this clause we show the syntax of the commands required to implement the modulation and demodulation processes directly, using the embedded functions in Matlab's Communications Toolbox.

Operation Matlab Syntax Quick help

Differential phase shift keying modulation.

“dpskmod”

Y = dpskmod(X,M) “Y” Complex envelope of the DPSK modulated signal. “X” Message signal. “M” The alphabet size and must be an integer.

Differential phase shift keying demodulation.

“dpskdemod”

Z = dpskdemod(Y,M) “Z” DPSK demodulated output. “Y” Complex envelope of the DPSK modulated signal. “M” as modulation.



Operation Matlab Syntax Quick help

Offset quadrature phase shift keying modulation

“oqpskmod” Y = oqpskmod(X) “Y” Complex envelope of the OQPSK modulated signal. “X” Message signal.

Offset quadrature phase shift keying de-modulation

“oqpskdemod” Z = oqpskdemod(Y) “Z” OQPSK demodulated output. “Y” Complex envelope of the OQPSK modulated signal.

Minimum shift keying modulation

“mskmod”

Y = mskmod(X,NSAMP) “Y” Complex envelope of the MSK modulated signal. “X” Message signal. “NSAMP” The number of samples per symbol and must be an integer greater than 1.

Minimum shift keying de-modulation

“mskdemod”

Z = mskdemod(Y,NSAMP) “Z” MSK demodulated output. “Y” Complex envelope of the MSK modulated signal. “NSAMP” as modulation.

Phase shift keying modulation

“pskmod”

Y = pskmod(X,M) “Y” Complex envelope of the PSK modulated signal. “X” Message signal. “M” The alphabet size and must be an integer.

Phase shift keying de-modulation

“pskdemod”

Z = pskdemod(Y,M) “Z” PSK demodulated output. “Y” Complex envelope of the PSK modulated signal. “M” as modulation.

Frequency shift keying modulation

“fskmod”

Y = fskmod(X,M,FREQ_SEP,NSAMP) “Y” Complex envelope of the FSK modulated signal. “X” Message signal. “M” The alphabet size and must be an integer. “FREQ_SEP” Desired separation between successive frequencies (in Hz). “NSAMP” The number of samples per symbol and must be an integer greater than 1.

Frequency shift keying de-modulation

“fskdemod”

Z = fskdemod(Y,M,FREQ_SEP,NSAMP) “Z” Noncoherently FSK demodulated output. “Y” Complex envelope of the FSK modulated signal. “M”, “FREQ_SEP”, “NSAMP” as modulation.

Quadrature amplitude modulation

“qammod”

Y = qammod(X,M) “Y” Complex envelope of the QAM modulated signal. “X” Message signal. “M” The alphabet size, must be an integer power of two (8, 16, 64...1024).

Quadrature amplitude de-modulation

“qamdemod”

Z = qamdemod(Y,M) “Z” QAM demodulated output. “Y” Complex envelope of the QAM modulated signal. “M” as modulation.
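The pskmod/pskdemod pair in the table can be mimicked in a few lines to see what these functions do internally. The Python sketch below is illustrative only: it uses a zero initial phase and natural (not Gray) symbol mapping, so it is a simplified stand-in for the Matlab functions, not a reimplementation of them:

```python
import cmath
import math

def psk_mod(symbols, m):
    """Map integer symbols 0..M-1 onto equally spaced points on the unit circle
    (zero initial phase, natural mapping)."""
    return [cmath.exp(2j * math.pi * s / m) for s in symbols]

def psk_demod(waveform, m):
    """Recover symbols by quantizing the phase of each received sample to the
    nearest constellation point."""
    out = []
    for y in waveform:
        phase = cmath.phase(y) % (2 * math.pi)   # fold phase into [0, 2*pi)
        out.append(round(phase * m / (2 * math.pi)) % m)
    return out

if __name__ == "__main__":
    data = [0, 3, 7, 1, 4]
    print(psk_demod(psk_mod(data, 8), 8))   # round trip recovers the data
```

In the noiseless round trip the demodulator returns the original symbols exactly; adding channel noise before demodulation turns the phase quantization into the minimum-distance decision that the BER curves of Section 4.8.2 characterize.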


Chapter Number 5 Wireless Channel Problems

5.1 – Introduction

In telecommunications and computer networking, a communication channel, or channel, refers either to a physical transmission medium such as a wire or to a logical connection over a multiplexed medium such as a radio channel. A channel is used to convey an information signal, for example a digital bit stream, from one or several senders (or transmitters) to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.

A channel can be modeled physically by trying to calculate the physical processes which modify the transmitted signal. For example in wireless communications the channel can be modeled by calculating the reflection off every object in the environment. A sequence of random numbers might also be added in to simulate external interference and/or electronic noise in the receiver.

Statistically, a communication channel is usually modeled as a triple consisting of an input alphabet, an output alphabet, and, for each pair (i, o) of input and output elements, a transition probability p(i, o). Semantically, the transition probability is the probability that the symbol o is received given that i was transmitted over the channel. Statistical and physical modeling can be combined. For example, in wireless communications the channel is often modeled by a random attenuation (known as fading) of the transmitted signal, followed by additive noise. The attenuation term is a simplification of the underlying physical processes and captures the change in signal power over the course of the transmission. The noise in the model captures external interference and/or electronic noise in the receiver. If the attenuation term is complex it also describes the relative time a signal takes to get through the channel. The statistics of the random attenuation are decided by previous measurements or physical simulations.
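The transition-probability view can be illustrated with the simplest channel in the next subsection, the binary symmetric channel, where p(0→1) = p(1→0) = p. A minimal Python sketch (function names are ours):

```python
import random

def bsc(bits, p, seed=7):
    """Binary symmetric channel: flip each bit independently with probability p,
    i.e. transition probabilities p(0->1) = p(1->0) = p."""
    rng = random.Random(seed)
    return [b ^ 1 if rng.random() < p else b for b in bits]

if __name__ == "__main__":
    tx = [0, 1] * 50_000
    rx = bsc(tx, p=0.1)
    observed = sum(a != b for a, b in zip(tx, rx)) / len(tx)
    print(f"observed crossover probability: {observed:.4f}")
```

Running a long sequence through the channel and counting disagreements recovers the crossover probability p empirically, which is exactly the statistical definition of the channel as a set of transition probabilities.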

5.1.1 – Analog and Digital Channel Models

Digital channel models can be summarized as follows:

Binary symmetric channel (BSC), a discrete memoryless channel with a certain bit error probability.

Binary burst bit error channel model, a channel with memory.

Binary erasure channel (BEC), a discrete channel with a certain bit error detection (erasure) probability.

Packet erasure channel, where packets are lost with a certain packet loss probability or packet error rate.

Arbitrarily varying channel (AVC), where the behavior and state of the channel can change randomly.

While analog channel models can be summarized as follows:

Additive white Gaussian noise (AWGN) channel, a linear continuous memoryless model.

Interference model, for example cross-talk (co-channel interference) and inter-symbol interference (ISI).

Distortion model, for example a non-linear channel model causing intermodulation distortion (IMD).

Frequency response model, including attenuation and phase shift.

Group delay model.

Fading model, for example Rayleigh fading, Ricean fading, log-normal shadow fading and frequency-selective (dispersive) fading.

Doppler shift model, which combined with fading results in a time-variant system.

Ray tracing models, which attempt to model the signal propagation and distortions for specified transmitter-receiver geometries, terrain types, and antennas.


Figure (5.1)

Figure (5.2)

5.1.2 – Noise in Wireless Channels

In wireless communication systems, an accepted and simple model for the noise is the additive white Gaussian noise (AWGN) channel, represented below in “Figure (5.1)”, in which the only impairment to communication is a linear addition of wideband (white) noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian amplitude distribution. The model does not account for fading, frequency selectivity, interference, nonlinearity or dispersion. However, it produces simple and tractable mathematical models which are useful for gaining insight into the underlying behavior of a system before these other phenomena are considered.

This noise can come from a number of natural sources such as the thermal vibrations of atoms in conductors (referred to as thermal noise), shot noise, black-body radiation from the earth and other warm objects, and celestial sources such as the Sun.

The AWGN channel is a good model for many satellite and deep-space communication links. It is not a good model for most terrestrial links because of multipath, terrain blocking, interference, etc. However, for terrestrial path modeling, AWGN is commonly used to simulate the background noise of the channel under study, in addition to the multipath, terrain blocking, interference, ground clutter and self-interference that modern radio systems encounter in terrestrial operation.

5.1.3 –Basic Propagation Mechanisms

5.1.3.1 – Reflection

There are electric and magnetic waves that serve to propagate radio energy. The electric field can be represented as a sum of two orthogonal polarization components, for example vertical and horizontal, or left-hand and right-hand circular. What happens when these two components of the electric field hit the boundary between two different dielectric media? We talk about the plane of incidence, that is, the plane containing the direction of travel of the waves (incident, reflected, and transmitted) and perpendicular to the boundary surface (the plane where the two media meet).

The geometry for calculating the reflection coefficients at the boundary between two dielectrics is shown below in “Figure (5.2)”.


Equations (5.1)

To get the reflected and transmitted field strengths, the reflection coefficients for the two polarizations are given by “Equations (5.1)”; they depend on the angle of incidence and on the intrinsic impedances of the two media.

Γ∥: the reflection coefficient for the electric field component parallel to the plane of incidence.

Γ⊥: the reflection coefficient for the electric field component perpendicular to the plane of incidence.

The transmission angle θt is determined by Snell's law,

  √ε1 sin θi = √ε2 sin θt,

while the angle of incidence equals the angle of reflection:

  θi = θr

5.1.3.2 – Diffraction

Radio signals may also undergo diffraction. It is found that when signals encounter an obstacle they tend to travel around it. This can mean that a signal may be received from a transmitter even though it may be “shaded” by a large object between them. This is particularly noticeable on some long wave broadcast transmissions. For example, the BBC long wave transmitter on 198 kHz is audible in the Scottish glens where other transmissions cannot be heard. As a result, the long wave transmissions can be heard in many more places than transmissions on VHF FM.

“Figure (5.3)” shows diffraction due to a natural obstacle with a sharp edge, such as a mountain, as an illustrating example, and “Figure (5.4)” shows another two types of diffraction.

To understand how this happens it is necessary to look at Huygens's principle. This states that each point on a spherical wave front can be considered as a source of a secondary wave front. Even though there will be a shadow zone immediately behind the obstacle, the signal will diffract around the obstacle and start to fill the void. It is found that diffraction is more pronounced when the obstacle becomes sharper and more like a “knife edge”. For a radio signal, a mountain ridge may provide a sufficiently sharp edge. A more rounded hill will not produce such a marked effect. It is also found that low frequency signals diffract more markedly than higher frequency ones.


Figure (5.3)

Figure (5.4)

Equations (5.2)


Equations (5.3)

For this reason, signals in the long wave band are able to provide coverage even in hilly or mountainous terrain where signals at VHF and higher would not.

The Fresnel-Kirchhoff diffraction parameter v can be represented using “Equations (5.2)”:

  v = h √( 2 (d1 + d2) / (λ d1 d2) )

where h is the height of the obstruction above the line of sight, d1 and d2 are the distances from the obstruction to the transmitter and the receiver, and λ is the wavelength.

Fresnel zones are the successive regions in which the excess path difference Δ produces constructive or destructive interference, as shown in “Equations (5.3)”:

  Δ ≈ n λ / 2

Δ: the path difference between the direct ray and the diffracted ray. For n = 1: “null”, n = 2: “peak”, n = 3: “null”.


Equations (5.4)

Figure (5.5)

Figure (5.6)

The path loss associated with knife-edge diffraction is generally a function of v. Approximations for the knife-edge diffraction gain (in dB) relative to the LOS path, given by Lee, are shown below in “Equations (5.4)”:

  G(dB) = 0                                            for v ≤ −1
  G(dB) = 20 log10(0.5 − 0.62 v)                       for −1 ≤ v ≤ 0
  G(dB) = 20 log10(0.5 e^(−0.95 v))                    for 0 ≤ v ≤ 1
  G(dB) = 20 log10(0.4 − √(0.1184 − (0.38 − 0.1 v)²))  for 1 ≤ v ≤ 2.4
  G(dB) = 20 log10(0.225 / v)                          for v > 2.4

“Figure (5.5)” shows a comparison between the predicted and the real knife-edge diffraction gain as a function of the Fresnel diffraction parameter v.

5.1.3.3 – Scattering

Scattering occurs when the medium consists of objects with dimensions small compared to the wavelength and when the number of obstacles per unit volume is large, as shown below in “Figure (5.6)”.


Equation (5.5)

Figure (5.7)

Equation (5.6)

The received signal due to a scattered ray is given by the bistatic radar “Equation (5.5)”.

5.2 – Path-loss in Wireless Communication Channels

Path loss (or path attenuation) is the reduction in power density (attenuation) of an electromagnetic wave as it propagates through space. Path loss is a major component in the analysis and design of the link budget of a telecommunication system.

This term is commonly used in wireless communications and signal propagation. Path loss may be due to many effects, such as free-space loss, refraction, diffraction, reflection, aperture-medium coupling loss, and absorption, as shown below in “Figure (5.7)”.

The simplest model for this path loss uses the Friis transmission equation shown below in “Equation (5.6)”.


Equations (5.7)

Equations (5.8)

Path loss can be described by a number of models:
1. Free space model.
2. 2-ray model.
3. General ray model.
4. Empirical path loss models.

5.2.1 –Free Space Model

In telecommunication, free-space path loss (FSPL) is the loss in signal strength of an electromagnetic wave that would result from a line-of-sight path through free space (usually air), with no obstacles nearby to cause reflection or diffraction. It can be expressed as shown below in “Equations (5.7)”:

  FSPL = (4 π d / λ)²

d: distance from the transmitter in meters. λ: signal wavelength in meters.

The previous equation can be represented in dB scale as follows:

  FSPL (dB) = 32.45 + 20 log10(d) + 20 log10(f)

d: distance from the TX in km. f: frequency of the signal in MHz.
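The free-space loss is easy to evaluate directly from the definition 20·log10(4πdf/c), which avoids memorizing the 32.45 dB constant. A small Python sketch (names are ours):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

if __name__ == "__main__":
    # A 1 km link at 1 GHz; equals 32.45 + 20 log d(km) + 20 log f(MHz)
    print(f"{fspl_db(1_000, 1e9):.2f} dB")
```

Doubling either the distance or the frequency adds about 6 dB of loss, which is the 20 dB/decade slope characteristic of free-space propagation.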

5.2.2 – 2-Ray Model

A single line-of-sight path between two mobile nodes is seldom the only means of propagation. The two-ray ground reflection model considers both the direct path and a ground-reflected path. It is shown that this model gives a more accurate prediction at long distances than the free space model. The received power at distance d is predicted by “Equations (5.8)”:

  Pr = Pt Gt Gr ht² hr² / d⁴

E: the total received electric field used in the derivation. E0: the reference (free-space) field strength. Pt: the transmitted power. Gt and Gr: transmitter and receiver antenna gains. ht and hr: transmitter and receiver antenna heights. d: distance between transmitter and receiver.
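The large-distance two-ray approximation can be sketched in a few lines of Python (names are ours; the expression is only valid well beyond the crossover distance, roughly 4π·ht·hr/λ):

```python
def two_ray_received_power(pt, gt, gr, ht, hr, d):
    """Two-ray ground reflection approximation for large distances:

    Pr = Pt * Gt * Gr * (ht * hr)^2 / d^4

    pt: transmit power (W); gt, gr: antenna gains (linear);
    ht, hr: antenna heights (m); d: link distance (m).
    """
    return pt * gt * gr * (ht * hr) ** 2 / d ** 4

if __name__ == "__main__":
    p1 = two_ray_received_power(1.0, 1.0, 1.0, 10.0, 2.0, 1_000.0)
    p2 = two_ray_received_power(1.0, 1.0, 1.0, 10.0, 2.0, 2_000.0)
    print(f"doubling distance divides Pr by {p1 / p2:.1f}")
```

Note the d⁻⁴ dependence: doubling the distance divides the received power by 16 (a 12 dB drop), compared with the 6 dB drop of free space; this is why the model predicts lower powers at long range.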


Figure (5.8)

Equation (5.9)

Equation (5.10)

2-Ray model can be represented as shown below in “Figure (5.8)”.

5.2.3 – General Ray Model

General Ray Tracing (GRT) can be used to predict field strength and delay spread for any building configuration and antenna placement. For this model, the building database (height, location, and dielectric properties) and the transmitter and receiver locations relative to the buildings must be specified exactly.

In general, in practice there are more than two rays, and it is very difficult to trace each one of them, so the general ray model is adopted as shown in “Equation (5.9)”, where the path loss exponent δ takes a value between “2” and “6”.

5.2.4 – Empirical Path Loss Models

Empirical studies confirm for many propagation environments that the basic path loss behaviors are somewhat similar to those predicted in “Equation (5.10)”.

There can, however, be significant variations in the detailed behavior versus range, frequency, and antenna heights. It is not surprising that the flat Earth model does not achieve a perfect match to empirical data, given the model's simplicity and its lack of scattering and diffraction phenomena. A particular deficiency is the absence of frequency dependence in “Equation (5.10)”; such behavior is very rare empirically.


Equation (5.11)

Equations (5.12)

A typical form of simplified empirical path loss is represented by “Equation (5.11)”.

5.2.4.1 – Okumura’s Model

The Okumura model for urban areas is a radio propagation model built using data collected in the city of Tokyo, Japan. The model is ideal for use in cities with many urban structures but not many tall blocking structures. It served as a base for the Hata model.

The Okumura model has three modes: for urban, suburban and open areas. The model for urban areas was built first and used as the base for the others.

Important facts about Okumura's model: it covers frequencies from 150 MHz up to 1920 MHz, mobile station antenna heights between 1 m and 10 m, base station antenna heights between 30 m and 1000 m, and link distances between 1 km and 100 km. Okumura's model is formally expressed by “Equations (5.12)”:

  L = LFSL + Amu(f, d) − G(hte) − G(hre) − GAREA

L: median value of the propagation path loss. LFSL: free space loss. Amu: median attenuation relative to free space, read from Okumura's curves, which resulted from his intensive measurements. G(hte): base station antenna height gain factor. G(hre): mobile station antenna height gain factor. GAREA: correction factor gain for the type of environment (water surfaces, isolated obstacles, etc.).

For the transmitter and receiver antenna heights, Okumura provides these equations:

  G(hte) = 20 log10(hte / 200)   for 30 m < hte < 1000 m
  G(hre) = 10 log10(hre / 3)     for hre ≤ 3 m
  G(hre) = 20 log10(hre / 3)     for 3 m < hre < 10 m

Okumura's model is wholly based on measured data and does not provide any analytical explanation. For many situations, extrapolations of the derived curves can be made to obtain values outside the measurement range, although the validity of such extrapolations depends on the circumstances and the smoothness of the curve in question.


Equations (5.13)

Okumura's model is considered to be among the simplest and best in terms of accuracy in path loss prediction for mature cellular and land mobile radio systems in cluttered environments. It is very practical and has become a standard for system planning in modern land mobile radio systems in Japan. The major disadvantage with the model is its slow response to rapid changes in terrain; therefore the model is fairly good in urban and suburban areas, but not as good in rural areas. Common standard deviations between predicted and measured path loss values are around 10 dB to 14 dB.

5.2.4.2 – Hata’s Model

The Hata model is an empirical formulation of the graphical path loss data provided by Okumura and is valid over roughly the same range of frequencies. This empirical model simplifies calculation of path loss since it is a closed-form formula and is not based on empirical curves for the different parameters.

Practically, the Hata model is divided into three models:
1. Hata model for suburban areas.
2. Hata model for urban areas.
3. Hata model for open areas.

5.2.4.2.1 – Hata Model for Suburban Areas

The Hata model for suburban areas, also known as the Okumura-Hata model for being a developed version of the Okumura model, is the most widely used radio propagation model for predicting the behavior of cellular transmissions in city outskirts and other rural areas. It incorporates the graphical information from the Okumura model and develops it further to better suit the need. It also has two more varieties, for transmission in urban areas and open areas.

The Hata model predicts the total path loss along a link of terrestrial microwave or other types of cellular communications as a function of the transmission frequency and the average path loss in urban areas. The Hata model for suburban areas is formulated as shown below in “Equations (5.13)”:

  LSU = LU − 2 [log10(f / 28)]² − 5.4

LSU: path loss in suburban areas, in decibels (dB). LU: average path loss in urban areas, in decibels (dB). f: frequency of transmission, in megahertz (MHz).

This particular version of the Hata model is applicable to transmissions just outside cities and in rural areas where man-made structures exist but are not as high and dense as in the cities. To be more precise, this model is suitable where buildings exist but the mobile station does not have a significant variation of its height. The model is suited for both point-to-point and broadcast transmissions.

5.2.4.2.2 – Hata Model for Urban Areas

This particular version of the Hata model is applicable to radio propagation within urban areas. It is suited for both point-to-point and broadcast transmissions and is based on extensive empirical measurements. PCS is another extension of the Hata model, and the Walfisch and Bertoni model is a further advance.


Equations (5.14)

Equations (5.15)

The coverage capabilities of this model are limited to:

Frequency: 150 MHz to 1500 MHz

Mobile Station Antenna Height: between 1 m and 10 m

Base station Antenna Height: between 30 m and 200 m

Link distance: between 1 km and 20 km.

The mathematical formulation for this model is shown by “Equations (5.14)”:

  LU = 69.55 + 26.16 log10(f) − 13.82 log10(hB) − CH + [44.9 − 6.55 log10(hB)] log10(d)

For small or medium-sized cities, CH can be calculated from:

  CH = 0.8 + [1.1 log10(f) − 0.7] hM − 1.56 log10(f)

And for large cities:

  CH = 8.29 [log10(1.54 hM)]² − 1.1   for 150 MHz ≤ f ≤ 200 MHz
  CH = 3.2 [log10(11.75 hM)]² − 4.97  for 200 MHz < f ≤ 1500 MHz

LU: path loss in urban areas (dB). hB: height of the base station antenna (m). hM: height of the mobile station antenna (m). f: frequency of transmission (MHz). CH: antenna height correction factor. d: distance between the base and mobile stations (km).
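The standard Hata urban formula is a closed form, so it is straightforward to evaluate directly. A short Python sketch (variable names are ours; the large-city branch below uses the correction valid for f ≥ 200 MHz):

```python
import math

def hata_urban_loss_db(f_mhz, h_base_m, h_mobile_m, d_km, large_city=False):
    """Hata path loss for urban areas.

    f_mhz: carrier frequency in MHz (150-1500);
    h_base_m: base station antenna height in m (30-200);
    h_mobile_m: mobile antenna height in m (1-10);
    d_km: link distance in km (1-20).
    """
    if large_city:
        # large-city mobile antenna correction (valid for f >= 200 MHz)
        ch = 3.2 * math.log10(11.75 * h_mobile_m) ** 2 - 4.97
    else:
        # small/medium city correction
        ch = (0.8 + (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
              - 1.56 * math.log10(f_mhz))
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m)
            - ch + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

if __name__ == "__main__":
    # e.g. a 900 MHz cell, 30 m base antenna, 1.5 m handset, 5 km range
    print(f"{hata_urban_loss_db(900, 30, 1.5, 5):.1f} dB")
```

Note the distance slope (44.9 − 6.55·log10 hB) dB per decade: raising the base antenna flattens the loss-versus-distance curve as well as lowering its absolute level.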

5.2.4.2.3 – Hata Model for Open Areas

This particular version of the Hata model is applicable to transmissions in open areas where no obstructions block the transmission link; it is suited for both point-to-point and broadcast transmissions.

The mathematical representation for this model is shown below in “Equation (5.15)”:

  LO = LU − 4.78 [log10(f)]² + 18.33 log10(f) − 40.94

LO: path loss in open areas, in decibels (dB). LU: path loss in urban areas, in decibels (dB). f: frequency of transmission, in megahertz (MHz).

5.2.4.3 – Cost 231-Hata Model

The Okumura-Hata model for medium to small cities has been extended to cover 1500 MHz to 2000 MHz.

Cost 231 takes the characteristics of the city structure into account:

Heights of buildings hRoof.

Widths of roads w.

Building separation b.

Road orientation with respect to the direct radio path Φ.


Equations (5.16)

Equations (5.17)

Figure (5.9)

This increases the accuracy of the propagation estimate, at the cost of a more complex model, and allows estimation from a range of 20 m (instead of 1 km for the Okumura-Hata model).

The model can be represented by “Equations (5.16)”:

  L = 46.3 + 33.9 log10(f) − 13.82 log10(hB) − CH + [44.9 − 6.55 log10(hB)] log10(d) + C

where CH is the same antenna height correction factor as in the Hata model, and

  C = 0 dB for medium-sized cities and suburban areas,
  C = 3 dB for metropolitan centers.

5.2.4.4 – Walfisch-Bertoni Model

This was the first theoretical model to explain and address the effects of buildings on signal propagation in urban areas when operating in the UHF band (300 MHz - 3 GHz).

“Equation (5.17)” describes the influence of buildings in neighborhoods composed of residential, commercial, and other light industrial buildings which take up the majority of urban land area.

The elevated fixed (base station) antenna is viewed as radiating fields that propagate over the rooftops by a process of multiple diffraction past rows of buildings.

“Figure (5.9)” shows various ray paths in presence of buildings.


Equation (5.18)

Model’s Considerations:

Transmitting antenna is not visible from the street level, thus propagation must take place through buildings, between them, or over rooftops with the field diffracted at the roofs down to the street level.

Primary propagation path lies over the tops of buildings.

Fields reaching street level result from diffraction of the fields incident on the rooftops in the vicinity of the mobile.

Rows of buildings have the form of cylindrical obstacles.

Propagation over rooftops involves diffraction past a series of parallel cylinders with dimensions large compared to wavelength.

At each cylinder a portion of the field will diffract to the ground but will rejoin those above the building after a series of multiple reflections and diffractions.

Model’s Assumptions:

Field incident on the top of each row of buildings is backward diffracted as well as forward diffracted.

All the rows are of the same height.

Elevated fixed antenna is achieved by using the local plane-wave approximation to find the influence of the buildings on the spherical-wave radiation by the elevated antenna.

Propagation is perpendicular to the rows of buildings and magnetic field is polarized parallel to the ground.

Propagation path loss factors:

Path loss between the antennas in free space, L0.

Reduction of the rooftop field amplitude due to a plane wave of unit amplitude incident at a glancing angle on the array of building rows (the settled field), Lms.

Effect of diffraction of the rooftop fields down to street level, Lrts.

The total path loss here can be represented by “Equation (5.18)”:

  L = L0 + Lrts + Lms   (all terms in dB)

Model’s versus measurements:

Model was compared the measurements of average received signals made in Philadelphia.

6 fixed antenna locations with antenna heights ranging between 45 and 255 feet above street level.

Frequency of 820 MHz with radiated power and antenna gains totaling 23.3 dB.

Mobile antenna height of 5 feet. “Figure (5.10)” shows the comparison for the six different antenna heights (in meters), where the model’s predictions appear as dotted lines and the measurements as solid lines.


Figure (5.10)

Figure (5.11)

5.2.4.5 – COST 231 Walfisch-Ikegami Model

The COST 231-WI model, shown in “Figure (5.11)”, takes the characteristics of the city structure into account:

Heights of buildings hRoof.

Widths of roads w.

Building separation b.

Road orientation with respect to the direct radio path Φ


Equations (5.19)

Model’s Restrictions:

Frequency f between 800 MHz and 2000 MHz.

TX height hBase between 4 and 50 m

RX height hMobile between 1 and 3 m

TX - RX distance d between 0.02 and 5 km

And here we have 2 cases (the Line Of Sight case and the No Line Of Sight case); “Equations (5.19)” give the path loss in each of them.

For the LOS case (d in km, f in MHz):

L_LOS = 42.6 + 26 log10(d) + 20 log10(f), for d ≥ 0.02 km

For the NLOS case:

L_NLOS = L0 + L_rts + L_msd, for L_rts + L_msd > 0 (otherwise L_NLOS = L0)

Where

L0 = 32.4 + 20 log10(d) + 20 log10(f) : Free-space loss.

L_rts : Roof-to-street diffraction loss.

L_msd : Multiscreen diffraction loss.

The roof-to-street diffraction loss is

L_rts = −16.9 − 10 log10(w) + 10 log10(f) + 20 log10(Δh_Mobile) + L_Ori, with Δh_Mobile = h_Roof − h_Mobile

Where:

L_Ori = { −10 + 0.354 Φ,          for 0° ≤ Φ < 35°
        { 2.5 + 0.075 (Φ − 35),   for 35° ≤ Φ < 55°
        { 4.0 − 0.114 (Φ − 55),   for 55° ≤ Φ ≤ 90°

And the multiscreen diffraction loss is

L_msd = L_bsh + k_a + k_d log10(d) + k_f log10(f) − 9 log10(b), with Δh_Base = h_Base − h_Roof

And

L_bsh = { −18 log10(1 + Δh_Base), for h_Base > h_Roof
        { 0,                      for h_Base ≤ h_Roof

And

k_a = { 54,                           for h_Base > h_Roof
      { 54 − 0.8 Δh_Base,            for d ≥ 0.5 km and h_Base ≤ h_Roof
      { 54 − 0.8 Δh_Base (d / 0.5),  for d < 0.5 km and h_Base ≤ h_Roof

k_d = { 18,                          for h_Base > h_Roof
      { 18 − 15 Δh_Base / h_Roof,    for h_Base ≤ h_Roof

And Finally

k_f = { −4 + 0.7 (f/925 − 1),  for medium-sized cities and suburban centres
      { −4 + 1.5 (f/925 − 1),  for metropolitan centres


Equations (5.20)

5.2.4.6 – Stanford University Interim (SUI) Model

The proposed standards for the frequency bands below 11 GHz contain the channel models developed by Stanford University, namely the SUI models. Note that these models are defined for the Multipoint Microwave Distribution System (MMDS) frequency band, which is from 2.5 GHz to 2.7 GHz. Their applicability to the 3.5 GHz frequency band that is in use in the UK has so far not been clearly established. The SUI models are divided into three types of terrain, namely A, B and C. Type A is associated with maximum path loss and is appropriate for hilly terrain with moderate to heavy foliage densities. Type C is associated with minimum path loss and applies to flat terrain with light tree densities. Type B is characterized by either mostly flat terrain with moderate to heavy tree densities or hilly terrain with light tree densities.

The basic SUI path loss equation, for d > d0, is

PL = A + 10 γ log10(d / d0) + s

where d is the distance between the Access Point (AP) and the Customer Premises Equipment (CPE) antennas in meters, d0 = 100 m, and s is a log-normally distributed factor used to account for the shadow fading owing to trees and other clutter, with a value between 8.2 dB and 10.6 dB. The other parameters are defined in “Equation (5.20)”:

A = 20 log10(4π d0 / λ)   and   γ = a − b h_b + c / h_b

where λ is the wavelength in meters, the parameter h_b is the base station height above ground in meters and should be between 10 m and 80 m, and the constants a, b and c are given in the following table. The parameter γ is the path loss exponent; for a given terrain type, it is determined by h_b.

The following table shows the parameters of the SUI model in the different types of environment:

Model parameter | Terrain A | Terrain B | Terrain C
a               | 4.6       | 4.0       | 3.6
b (1/m)         | 0.0075    | 0.0065    | 0.005
c (m)           | 12.6      | 17.1      | 20

The correction factors for the operating frequency and for the CPE antenna height for the model are given by “Equation (5.21)”.


Equations (5.21)

Figure (5.12)

The frequency correction factor is

X_f = 6.0 log10(f / 2000)

and for terrain types A and B the CPE antenna height correction is

X_h = −10.8 log10(h_r / 2000)

but for terrain type C it would be

X_h = −20.0 log10(h_r / 2000)

where f is the frequency in MHz and h_r is the CPE antenna height above ground in meters. With these corrections, the complete path loss is PL = A + 10 γ log10(d / d0) + X_f + X_h + s. The SUI model is used to predict the path loss in all three environments, namely rural, suburban and urban.
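The SUI model above can be sketched as a short function. The terrain constants are the standard published SUI values repeated from the table, and the default antenna heights and shadowing value are illustrative assumptions.

```python
import math

# Standard SUI terrain constants (a, b, c) as tabulated above.
SUI_TERRAIN = {"A": (4.6, 0.0075, 12.6),
               "B": (4.0, 0.0065, 17.1),
               "C": (3.6, 0.0050, 20.0)}

def sui_path_loss(d_m, f_mhz, h_b=30.0, h_r=2.0, terrain="B", s_db=9.0):
    """Sketch of the SUI path loss in dB, for d_m > d0 = 100 m.

    h_b, h_r and the shadowing term s_db default to assumed example values.
    """
    d0 = 100.0
    lam = 3e8 / (f_mhz * 1e6)                 # wavelength in meters
    a, b, c = SUI_TERRAIN[terrain]
    gamma = a - b * h_b + c / h_b             # path loss exponent
    A = 20 * math.log10(4 * math.pi * d0 / lam)
    x_f = 6.0 * math.log10(f_mhz / 2000)      # frequency correction
    k = 20.0 if terrain == "C" else 10.8      # terrain-dependent height factor
    x_h = -k * math.log10(h_r / 2000)         # CPE antenna height correction
    return A + 10 * gamma * math.log10(d_m / d0) + x_f + x_h + s_db
```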

5.2.4.7 – Comparison between Empirical Models

“Figure (5.12)” shows readings of measured path loss in different areas at changing the distance between transmitter and receiver.

“Figure (5.13)” shows comparison between performances of empirical models in an urban environment.


Figure (5.13)

Figure (5.14)

“Figure (5.14)” shows comparison between performances of empirical models in a suburban environment.


5.3 – Interference in Wireless Communication Channels

Wireless communication enables the transfer of data over both short and long distances by utilizing a very natural phenomenon known as radiation, more specifically electromagnetic (EM) radiation. An antenna is a transducer (a device that converts one form of energy to another) designed to transmit or receive electromagnetic radiation. The simplest wireless communication requires two antennas. One antenna has an alternating current applied to it, causing electrons to move and radiate an electromagnetic field, while the other antenna, at some distance away, receives that EM wave through its elements, inducing an alternating current in the wires attached to it. Through this conversion-transmission-conversion process, data is transferred and devices communicate with each other almost magically. EM waves, like light, can travel through the vacuum of space, where other waves such as sound cannot.

Interference is fundamental to wireless communication systems, in which multiple transmissions often take place simultaneously over a common communication medium. In recent years, there has been rapidly growing interest in developing reliable and spectrally efficient wireless communication systems. One primary challenge in such development is how to deal with interference, which may substantially limit the reliability and throughput of a wireless communication system. In most existing wireless communication systems, interference is dealt with by coordinating users to orthogonalize their transmissions in time or frequency, or by increasing transmission power and treating each other's interference as noise. Over the past twenty years, a number of sophisticated receiver designs, for example multiuser detection, have been proposed for interference suppression under various settings. Recently, the paradigm has shifted to focus on how to intelligently exploit the knowledge and/or the structure of interference to achieve improved reliability and throughput of wireless communication systems.

Wireless system designers have always had to contend with interference from both natural sources and other users of the medium. Thus, the classical wireless communications design cycle has consisted of measuring or predicting channel impairments, choosing a modulation method, signal preconditioning at the transmitter, and processing at the receiver to reliably reconstruct the transmitted information. These methods have evolved from simple (like FM and pre-emphasis) to relatively complex (like code-division multiple access (CDMA) and adaptive equalization). However, all share a common attribute: once the modulation method is chosen, it is difficult to change. For example, an amplitude shift-keying (ASK) system cannot simply be modified to obtain a phase shift-keying (PSK) system, owing to the complexities of the transmission and reception hardware. Universal radios change this paradigm by providing the communications engineer with a radio which can be programmed to produce almost arbitrary output waveforms and act as an almost arbitrary receiver type. Thus, it is no longer unthinkable to instruct the transmitting and receiving radios to use a more effective modulation in a given situation. Of course, practical radios of this sort are probably many years away. Nonetheless, if Moore's law holds true, they are certainly on the not-too-distant horizon. It is, therefore, probable that wireless systems of the near future will have elements which adapt dynamically to changing patterns of interference by adjusting modulation and processing methods in much the same way that power control is used today, albeit on a possibly slower time scale. Furthermore, if the release of 300 MHz of unlicensed spectrum in the 5-GHz range is any indication, one might expect an abundance of mutually interfering independent systems with no central control for efficient coordination. This provides added impetus to understand mutual interference of systems at some general level, and implicit coordination in a multi-system environment.

5.3.1 –Main reasons of Interference

Interference can result from more than one cause; in this clause we briefly review the main causes of interference between electromagnetic signals.


5.3.1.1 – Multiple signals in the Front Ends of a Communication System

The first amplification stages (the front end) of communications-type radios are wide (possibly 50 MHz or more) and very sensitive. If the radio is near (usually 1 km or less) several radio transmission sites such as cellular, Nextel, other communications sites, private two-way sites, etc., the signals from these sites mix in the non-linear front end of the communications radio. If the frequencies are just right, an intermodulation product will emerge on or near the desired reception frequency and cause interference.

5.3.1.2 – Receiver Overload

If a receiver is near a transmitting system, the RF from that system may be too strong for the first amplification stage to handle. The transistors saturate and the receiver sensitivity decreases, and additional intermodulation products may be created within the receiver. Since many communications receivers have wideband front ends, they will be affected by strong out-of-band signals entering their antennas. This is not the same as out-of-band emissions from a nearby transmitter.

5.3.1.3 – Out Of Band Emission (OOBE)

All transmitters emit radio frequency energy at frequencies other than the exactly desired frequencies. This is known generally as out-of-band emission, or "OOBE". Since these undesired signals may fall on the desired receive frequencies, they can only be eliminated by filtering the offending transmitter or moving it to another location.

5.3.1.4 – Base-Station Intermodulation Products

Often there is more than one transmission system at a tower site, or more than one frequency involved at a single transmission site. Since all physical systems are non-linear, these frequencies can mix and produce other frequencies that may have energy in the communications band. This is a common occurrence. It can usually be tracked down and filtered, or other solutions put in place, so that interference does not occur. To demonstrate how tricky this can be, in one reported case a rusty tower joint (a non-linear system) on an FM radio tower in Florida caused interference 40 miles away. The good news is that this type of interference can usually be found and fixed.

5.3.1.5 – Skip

Skip is a phenomenon caused by the ionization of gases in the ionosphere. RF energy leaves the antenna and reflects off this ionized layer back to earth at a great distance. This phenomenon is much more prevalent at frequencies below 30 or 40 MHz and is time dependent; however, it does occur at much higher frequencies from time to time. The interference usually occurs more at night and in the evenings. Thus, a two-way facility at 150 MHz in California could conceivably interfere with a 150 MHz system in Georgia.

5.3.1.6 – Ducting

Ducting occurs when air of different temperatures and humidities forms layers in the lower atmosphere. RF energy is refracted and reflected by these layers and travels much farther than normal. Ducting usually occurs when the atmosphere is still, in the early morning, and typically ends by noon. This phenomenon can cause severe interference, often blocking the desired signal just miles from the main transmitter site.

5.3.1.7 – General RF Noises

Devices other than radio equipment can also emit radio frequency energy. Typical sources of such interference are arc welders, electric motors, faulty spark plug wires, lightning, and even rusty tower bolts. These types of problems can usually be tracked down and eliminated.


Figure (5.15)

5.3.2 –Inter-Symbol Interference (ISI)

In telecommunication, inter-symbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon, as the previous symbols have a similar effect to noise, making the communication less reliable. ISI is usually caused by multipath propagation or by the inherent non-flat frequency response of a channel, which causes successive symbols to "blur" together. The presence of ISI introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight inter-symbol interference include adaptive equalization and error-correcting codes.

“Figure (5.15)” shows the effect of ISI on a received signal.

5.3.2.1 – Causes of Inter-Symbol Interference

One of the causes of inter-symbol interference is what is known as multipath propagation, in which a wireless signal from a transmitter reaches the receiver via many different paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree), and atmospheric effects such as atmospheric ducting and ionospheric reflection.

Since all of these paths have different lengths, and some of these effects also slow the signal down, the different versions of the signal arrive at different times. This delay means that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.


Figure (5.16)

Figure (5.17)

Another cause of inter-symbol interference is the transmission of a signal through a band-limited channel, i.e., one where the frequency response is zero above a certain frequency (the cutoff frequency). Passing a signal through such a channel results in the removal of frequency components above this cutoff frequency; in addition, the amplitude of the frequency components below the cutoff frequency may also be attenuated by the channel.

This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. Filtering a rectangular pulse not only changes its shape within the first symbol period, but also spreads it out over subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with the following symbols.

5.3.2.2 – Countering Inter-Symbol Interference

There are several techniques in telecommunication and data storage that try to work around the problem of inter-symbol interference, as shown below:

1. Design systems such that the impulse response is short enough that very little energy from one symbol smears into the next symbol as shown below in “Figure (5.16)”.

2. Separate symbols in time with guard periods (which may be implemented as a cyclic prefix, "CP") as shown below in “Figure (5.17)”.
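The second countermeasure can be sketched in a few lines: an OFDM-style cyclic prefix copies the last L samples of each time-domain symbol in front of it, so channel echoes shorter than L samples corrupt only the prefix, which the receiver discards. The values of N and L below are illustrative assumptions.

```python
import numpy as np

N, L = 64, 16                              # subcarriers and CP length (assumed)

def add_cp(freq_symbols):
    """IFFT the subcarrier values and prepend the cyclic prefix."""
    x = np.fft.ifft(freq_symbols, n=N)
    return np.concatenate([x[-L:], x])     # length N + L

def remove_cp(rx):
    """Drop the prefix, keeping the ISI-free portion of the symbol."""
    return rx[L:L + N]

tx = add_cp(np.ones(N))                    # example: all-ones subcarriers
```

Because the prefix is a copy of the symbol tail, the transmitted block is periodic over its first and last L samples, which is what turns the channel's linear convolution into a circular one.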


Equation (5.22)

Figure (5.18)

Equation (5.23)

5.3.3 –Inter-Carrier Interference (ICI)

Inter-carrier interference (ICI) is a particular problem in OFDM systems, and it is different from the co-channel interference in cellular systems. Co-channel interference is caused by reused channels in other cells, while ICI results from the other sub-channels in the same data block of the same user. Even if only one user is in communication, ICI might occur, yet co-channel interference will not. There are two factors that cause ICI, namely frequency offset and time variation. Some kinds of time variation of the channel can be modeled as white Gaussian noise when N is large enough, while other time variations can be modeled as frequency offsets, such as Doppler shift. Only frequency offset is discussed in this chapter. The ICI problem becomes more complicated when multipath fading is present.

The OFDM transmitted signal can be represented by “Equation (5.22)”.

5.3.3.1 – Doppler Effect

The relative motion between receiver and transmitter, or mobile medium among them, would result in the Doppler effect, a frequency shift in narrow-band communications. For example, the Doppler effect would influence the quality of a cell phone conversation in a moving car.

In general, the Doppler frequency shift can be formulated in a function of the relative velocity, the angle between the velocity direction and the communication link, and the carrier frequency.

The Doppler effect can be simply represented by “Figure (5.18)”.

This Doppler shift could be mathematically represented as shown below in “Equation (5.23)”.


Equation (5.24)

f_D = (v / λ) cos θ, where θ is the angle between the velocity and the communication link, which is generally modeled as uniformly distributed between 0 and 2π, v is the receiver velocity, and λ is the carrier wavelength.
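The Doppler shift formula above is easy to evaluate directly; the carrier frequency and speed in the example are assumed for illustration.

```python
import math

def doppler_shift(v_mps, f_carrier_hz, theta_rad):
    """Doppler shift f_D = (v / lambda) * cos(theta) in Hz."""
    lam = 3e8 / f_carrier_hz               # carrier wavelength in meters
    return (v_mps / lam) * math.cos(theta_rad)

# A car at 30 m/s (~108 km/h) on a 900 MHz link, moving straight
# toward the transmitter (theta = 0):
fd = doppler_shift(30.0, 900e6, 0.0)       # 90 Hz
```

At θ = π/2 the motion is perpendicular to the link and the shift vanishes, which matches the cos θ factor.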

Three kinds of Doppler effect models will be discussed here: the classical model, the uniform model, and the two-ray model. The classical model is also referred to as the Jakes model, which was proposed in 1968. In this model, the transmitter was assumed to be fixed with a vertically polarized antenna, there was no line-of-sight path (NLOS), and all path gains were subject to identical statistics. It was proved in 1972 that the spectrum of this kind of Doppler shift could be given as shown below in “Equation (5.24)”:

S(f) = A / (π f_m √(1 − (f / f_m)²)), for |f| < f_m

where A is a constant fixed for a given channel and antenna, and f_m is the maximum Doppler shift. The uniform model is much simpler: both velocity and angle are supposed to be uniformly distributed, and in this case the power spectrum is uniform over the Doppler band:

S(f) = A / (2 f_m), for |f| ≤ f_m

The two-ray model assumes that there are only two paths between the transmitter and receiver. Accordingly, the resulting power spectrum consists of two impulses at the maximum Doppler shift:

S(f) = (A / 2) [δ(f − f_m) + δ(f + f_m)]

5.3.3.2 – Synchronization Error

It can be assumed that most wireless receivers cannot achieve perfect frequency synchronization. In fact, practical oscillators used for synchronization are usually unstable, which introduces a frequency offset. Although this small offset is negligible in traditional communication systems, it is a severe problem in OFDM systems. In most situations, the oscillator frequency offset varies from 20 ppm (parts per million) to 100 ppm [31]. Provided an OFDM system operates at 5 GHz, the maximum offset would be 100 kHz to 500 kHz (20 to 100 ppm), whereas the subcarrier frequency spacing is only 312.5 kHz; hence the frequency offset cannot be ignored. The frequency offset can be normalized by the subcarrier spacing (the reciprocal of the symbol duration). For example, if a system has a bandwidth of 10 MHz and the number of subcarriers is 128, then the subcarrier frequency spacing is 10 MHz/128 ≈ 78 kHz. If the receiver frequency offset is 1 kHz, the normalized frequency offset is 1/78 ≈ 1.3%. If the normalized frequency offset is larger than 1, only the decimal part needs to be considered.
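The normalization in the example above can be written out directly, using the same numbers as the text:

```python
# Normalized CFO for the example in the text:
# 10 MHz bandwidth, 128 subcarriers, 1 kHz receiver frequency offset.
bandwidth_hz = 10e6
n_subcarriers = 128
cfo_hz = 1e3

subcarrier_spacing = bandwidth_hz / n_subcarriers   # 78.125 kHz
epsilon = cfo_hz / subcarrier_spacing               # 0.0128, i.e. ~1.3%

# When |epsilon| > 1, an integer offset is a pure shift of subcarrier
# indices; only the decimal part causes the inter-carrier leakage.
epsilon_frac = epsilon - int(epsilon)
```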

5.3.3.3 – Multi-Path Fading

The influence of multipath fading on ICI has seldom been discussed before. As a matter of fact, multipath fading does not cause ICI, but it makes the ICI problem worse. Since ICI cannot be neglected in practice, the impact of multipath fading should be discussed. It is recognized that the cyclic prefix eliminates ISI entirely, and therefore only ICI needs to be considered. Because there are many time-delayed versions of the received signal with different gains and different phase offsets, the ICI is more complicated to calculate.


Figure (5.19)

There are many multipath channel models. Even in the European COST 207 standard, there are four different typical models. In addition, Rayleigh and Ricean channels could also be considered as multipath models.

5.3.3.4 – Solutions for Inter-Carrier Interference

Many schemes have been proposed to attack the ICI problem. Three kinds of approaches are addressed here. The first approach is based on CFO estimation and compensation, which makes use of pilot sequences, virtual carriers or blind signal processing techniques. The second approach is based on the windowing technique in either the time domain or the frequency domain, such as Nyquist windowing and Hanning windowing. The third one is called ICI self-cancellation, or polynomial cancellation coding (PCC), where repeated bits are transmitted to mitigate inter-carrier interference.

5.3.3.4.1 – CFO Estimation

In order to compensate the CFO, it must first be estimated. Once a precise CFO estimate is obtained, an equalizer can be designed to eliminate the ICI. Signal processing methods are applied to solve this problem. Liu and Tureli proposed MUSIC-based and ESPRIT-based algorithms to estimate the CFO; strictly speaking, their methods are non-blind because they use virtual carriers. Other researchers proposed maximum likelihood CFO estimation, which was proved to be equivalent to the MUSIC-based algorithm. Other CFO estimation methods involve training sequences.

5.3.3.4.2 – Windowing Estimation

“Figure (5.19)” shows the OFDM transmitter structure with the windowing technique, while the receiver remains the same as the basic OFDM receiver. It is well known that a multiplication in the frequency domain is equivalent to a circular convolution in the time domain. Different choices of the windowing parameters wi result in different digital filtering.

There are many kinds of windowing schemes to reduce the ICI due to CFO, such as the Hanning window, the Nyquist window, and the Kaiser window.

5.3.3.4.3 – Inter-Carrier Interference Self-Cancellation

The ICI self-cancellation scheme is a method involving encoded redundancy. Compared with other schemes, only half or less of the bandwidth is used for information transmission; in other words, only half or less of the full data rate can be achieved. The same bandwidth efficiency is achieved by ICI self-cancellation and rate-1/2 convolutional coding; however, empirical results have shown that ICI self-cancellation outperforms convolutional coding in most channel environments.

The redundant information in an ICI self-cancellation encoder can be applied to eliminate the ICI at the receiver. The principles of ICI self-cancellation are illustrated in “Figure (5.20)”.


Figure (5.20)

Figure (5.21)

As shown in “Figure (5.20)” (on the left), only the first and second subcarriers have a large gain difference; the difference between any other pair of neighboring subcarriers is much smaller. The (l+1)th subcarrier carries the inverted version of the signal on the l-th subcarrier, so the inter-carrier interference from the l-th and (l+1)th subcarriers cancels out. This is the key idea of the ICI self-cancellation schemes.

Armstrong extended the original ICI self-cancellation encoder to any larger number of redundant repetitions. This new scheme is also referred to as the polynomial cancellation coding (PCC) scheme. Another ICI self-cancellation scheme has been proposed recently; in this new scheme a different bit encoder is applied, and it results in better performance than the PCC scheme in certain channel models.

5.4 – Large Scale Fading in Wireless Communication Channels

Large-scale fading can also be termed the shadowing effect.

5.4.1 –Shadowing Model

Wireless signals often experience variation caused by blockage (say, buildings) in the signal path. The variation may also come from changes in reflecting surfaces and scattering objects; we call this shadow fading. We usually model the shadow fading channel gain as a log-normal random variable, so that the shadow fading channel gain in dB is a Gaussian random variable with mean μ and some variance; a typical value of the variance is 8.9 dB, and usually we set μ = 0. The fading is random because the number and type of obstructions are random.

Experiments reported by Egli in 1957 showed that, for paths longer than a few hundred meters, the received (local-mean) power fluctuates with a 'log-normal' distribution about the area-mean power. By 'log-normal' is meant that the local-mean power expressed in logarithmic values, such as dB or neper, has a normal (i.e., Gaussian) distribution. “Figure (5.21)” shows the variation in the path profile within a constant range from the transmitter.



Equation (5.25)


We need now to clearly distinguish between:

Local means: average over about 40λ, to remove multipath fading; denoted by a single overbar.

Area means: average over tens or hundreds of meters, to remove both multipath fading and shadowing; denoted by a double overbar.

The received power P_log, expressed in logarithmic units (neper), is defined as the natural logarithm of the local-mean power over the area-mean power, as shown below in “Equation (5.25)”.

It has the normal probability density:

Egli studied the error of a propagation model predicting the path loss using only distance, antenna heights and frequency. For average terrain, he reported a logarithmic standard deviation of about σ = 8.3 dB and 12 dB for VHF and UHF frequencies, respectively. Such large fluctuations are caused not only by local shadow attenuation by obstacles in the vicinity of the antenna, but also by large-scale effects (hills, foliage, etc.) along the path profile, which cause attenuation. Hence, any estimate of the area-mean power which ignores these effects may be coarse.

This log-normal fluctuation was called 'large-area shadowing' by Marsan, Hess and Gilbert. They measured semi-circular routes in Chicago, thus fixing the distance to the base station, the antenna heights and the frequency, but measuring different path profiles. The standard deviation of the path loss ranged from 6.5 dB to 10.5 dB, with a median of 9.3 dB. This 'large-area' shadowing thus reflects shadow fluctuations as the vehicle moves over many kilometers.

Performance in log-normal shadowing is typically parameterized by the log mean μψ dB, which is referred to as the average dB path loss and is in units of dB. The linear mean path loss in dB, 10 log10 μψ, is referred to as the average path loss.

In the log-normal shadowing model the path loss ψ is assumed random with a log-normal distribution, given by “Equation (5.27)”:

p(ψ) = [ξ / (√(2π) σψ dB ψ)] exp[ −(10 log10 ψ − μψ dB)² / (2 σψ dB²) ], ψ > 0

where ξ = 10/ln 10, μψ dB is the mean of ψdB = 10 log10 ψ in dB, and σψ dB is the standard deviation of ψdB. Note that if the path loss is log-normal, then the received power and receiver SNR will also be log-normal, since these are just constant multiples of ψ. The mean of ψ (the linear average path loss) can be defined as shown below in “Equation (5.26)”:

μψ = E[ψ] = exp[ μψ dB / ξ + σψ dB² / (2 ξ²) ]


Equation (5.26)

Equation (5.27)

Figure (5.22)


With a change of variables we see that the distribution of the dB value of ψ is Gaussian with mean μψ dB and standard deviation σψ dB (“Equation (5.27)”).

The log-normal distribution is defined by two parameters: μψ dB and σψ dB. Since blocking objects cause signal attenuation, μψ dB is always nonnegative. However, in some cases the average attenuation due to both path loss and shadowing is incorporated into the path loss model. For example, piecewise linear path loss models based on empirical data will incorporate the average shadowing associated with the measurements into the path loss model. In this case the shadowing model superimposed on the simplified path loss model should have μψ dB = 0. However, if the path loss model does not incorporate average attenuation due to shadowing, or if the shadowing model incorporates path loss via its mean, then μψ dB as well as σψ dB will be positive, and must be obtained from an analytical model, simulation, or empirical measurements.

5.4.2 –Combined Path Loss and Shadowing Model

Models for path loss and shadowing can be superimposed to capture power falloff versus distance along with the random attenuation about this path loss from shadowing. In this combined model, the average dB path loss (μψ dB) is characterized by the path loss model, and shadow fading, with a mean of 0 dB, creates variations about this path loss, as illustrated by the path loss and shadowing curve in “Figure (5.22)”. Specifically, this curve plots the combination of the simplified path loss model and the log-normal shadowing random process. For this combined model, the ratio of received to transmitted power in dB is given by “Equation (5.28)”.


Equation (5.28)

Equation (5.29)

where ψdB is a Gauss-distributed random variable with mean zero and variance σψ dB².

As shown in “Figure (5.22)”, the ratio of received to transmitted power decreases linearly with log10 d, with a slope of 10γ dB per decade, where γ is the path loss exponent. The variations due to shadowing change more rapidly, on the order of the decorrelation distance Xc.

5.4.3 –Outage Probability under Path Loss and Shadowing Effects

The combined effects of path loss and shadowing have important implications for wireless system design. In wireless systems there is typically a target minimum received power level Pmin below which performance becomes unacceptable (e.g., the voice quality in a cellular system is too poor to understand). However, with shadowing, the received power at any given distance from the transmitter is log-normally distributed with some probability of falling below Pmin. We define the outage probability pout(Pmin, d) under path loss and shadowing to be the probability that the received power at a given distance d, Pr(d), falls below Pmin: pout(Pmin, d) = p(Pr(d) < Pmin). For the combined path loss and shadowing model, this becomes as shown below in “Equation (5.29)”.

where the Q function is defined as the probability that a Gaussian random variable x with mean zero and variance one is larger than z:

Q(z) = p(x > z) = ∫_z^∞ (1/√(2π)) e^(−y²/2) dy

The conversion between the Q function and the complementary error function is:

Q(z) = (1/2) erfc(z / √2)

5.5 – Small Scale Fading in Wireless Communication Channels

5.5.1 –Introduction

The type of fading experienced by a signal propagating through a mobile radio channel depends on the nature of the transmitted signal with respect to the characteristics of the channel. Depending on the relation between the signal parameters (such as bandwidth, symbol period, etc.) and the channel parameters (such as rms delay spread and Doppler spread), different transmitted signals will undergo different types of fading. The time dispersion and frequency dispersion mechanisms in a mobile radio channel lead to four possible distinct effects, which are manifested depending on the nature of the transmitted signal, the channel, and the velocity. While multipath delay spread leads to time dispersion and frequency selective fading, Doppler spread leads to frequency dispersion and time selective fading. The two propagation mechanisms are independent of one another. “Figure (5.23)” shows a tree of the four different types of fading.

Page 152: DVB-T2 graduation project book 2011

Digital Video Broadcasting 2nd Generation Terrestrial Simulation

2011

137 | P a g e

Figure (5.23)

5.5.2 –Flat Fading

If the mobile radio channel has a constant gain and linear phase response over a bandwidth which is greater than the bandwidth of the transmitted signal, then the received signal will undergo flat fading. This type of fading is historically the most common type of fading described in the technical literature. In flat fading, the multipath structure of the channel is such that the spectral characteristics of the transmitted signal are preserved at the receiver. However, the strength of the received signal changes with time, due to fluctuations in the gain of the channel caused by multipath. The characteristics of a flat fading channel are illustrated in “Figure (5.24)”: if the channel gain changes over time, a change of amplitude occurs in the received signal. Over time, the received signal r(t) varies in gain, but the spectrum of the transmission is preserved. In a flat fading channel, the reciprocal bandwidth of the transmitted signal is much larger than the multipath time delay spread of the channel, and the channel can be approximated as having no excess delay (i.e., a single delta function with τ = 0).

Flat fading channels are also known as amplitude varying channels and are sometimes referred to as narrowband channels, since the bandwidth of the applied signal is narrow compared to the channel's flat fading bandwidth. Typical flat fading channels cause deep fades, and thus may require 20 or 30 dB more transmitter power to achieve low bit error rates during times of deep fades, compared to systems operating over non-fading channels. The distribution of the instantaneous gain of flat fading channels is important for designing radio links, and the most common amplitude distribution is the Rayleigh distribution. The Rayleigh flat fading channel model assumes that the channel induces an amplitude which varies in time according to the Rayleigh distribution. To summarize, a signal undergoes flat fading if Bs ≪ Bc and Ts ≫ στ, where Ts is the reciprocal bandwidth (e.g., symbol period) and Bs is the bandwidth, respectively, of the transmitted modulation, while στ and Bc are the rms delay spread and coherence bandwidth, respectively, of the channel.

“Figure (5.24)” shows flat fading channel characteristics.


Figure (5.24)

Figure (5.25)

If the channel possesses a constant-gain and linear phase response over a bandwidth that is smaller than the bandwidth of the transmitted signal, then the channel creates frequency selective fading on the received signal. Under such conditions, the channel impulse response has a multipath delay spread which is greater than the reciprocal bandwidth of the transmitted message waveform. When this occurs, the received signal includes multiple versions of the transmitted waveform which are attenuated (faded) and delayed in time, and hence the received signal is distorted. Frequency selective fading is due to time dispersion of the transmitted symbols within the channel. Thus the channel induces intersymbol interference (ISI). Viewed in the frequency domain, certain frequency components in the received signal spectrum have greater gains than others. Frequency selective fading channels are much more difficult to model than flat fading channels since each multipath signal must be modeled and the channel must be considered to be a linear filter. It is for this reason that wideband multipath measurements are made, and models are developed from these measurements. When analyzing mobile communication systems, statistical impulse response models such as the two-ray Rayleigh fading model (which considers the impulse response to be made up of two delta functions which independently fade and have sufficient time delay between them to induce frequency selective fading upon the applied signal), or computer generated or measured impulse responses, are generally used for analyzing frequency selective small-scale fading.

“Figure (5.25)” illustrates the characteristics of a frequency selective fading channel. For frequency selective fading, the spectrum S(f) of the transmitted signal has a bandwidth which is greater than the coherence bandwidth Bc of the channel. Viewed in the frequency domain, the channel becomes frequency selective, where the gain is different for different frequency components. Frequency selective fading is caused by multipath delays which approach or exceed the symbol period of the transmitted symbol.


Frequency selective fading channels are also known as wideband channels since the bandwidth of the signal s(t) is wider than the bandwidth of the channel impulse response. As time varies, the channel varies in gain and phase across the spectrum of s(t), resulting in time varying distortion in the received signal r(t).

To summarize, a signal undergoes frequency selective fading if Bs > Bc and Ts < στ. A common rule of thumb is that a channel is flat fading if Ts ≥ 10στ, and a channel is frequency selective if Ts < 10στ.
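This rule of thumb can be expressed as a small, illustrative Python helper (the function name and the example numbers are our own):

```python
def fading_bandwidth_type(symbol_period, rms_delay_spread):
    """Rule of thumb from the text: flat fading if Ts >= 10*sigma_tau,
    frequency selective if Ts < 10*sigma_tau."""
    if symbol_period >= 10 * rms_delay_spread:
        return "flat"
    return "frequency selective"

# hypothetical numbers: 1 ms symbols over a channel with 5 us rms delay spread
assert fading_bandwidth_type(1e-3, 5e-6) == "flat"
# much shorter 10 us symbols over the same channel suffer ISI
assert fading_bandwidth_type(10e-6, 5e-6) == "frequency selective"
```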

5.5.3 –Fast Fading

Depending on how rapidly the transmitted base band signal changes as compared to the rate of change of the channel, a channel may be classified either as a fast fading or slow fading channel. In a fast fading channel, the channel impulse response changes rapidly within the symbol duration. That is, the coherence time of the channel is smaller than the symbol period of the transmitted signal. This causes frequency dispersion (also called time selective fading) due to Doppler spreading, which leads to signal distortion.

Therefore, a signal undergoes fast fading if Ts > Tc and Bs < BD, where Tc is the coherence time and BD the Doppler spread of the channel. It should be noted that specifying a channel as fast or slow fading does not specify whether the channel is flat fading or frequency selective in nature. Fast fading only deals with the rate of change of the channel due to motion. In the case of the flat fading channel, we can approximate the impulse response to be simply a delta function (no time delay). Hence, a flat fading, fast fading channel is a channel in which the amplitude of the delta function varies faster than the rate of change of the transmitted baseband signal. In the case of a frequency selective, fast fading channel, the amplitudes, phases, and time delays of any one of the multipath components vary faster than the rate of change of the transmitted signal. In practice, fast fading only occurs for very low data rates.

5.5.4 –Slow Fading

In a slow fading channel, the channel impulse response changes at a rate much slower than the transmitted baseband signal s(t). In this case, the channel may be assumed to be static over one or several reciprocal bandwidth intervals. In the frequency domain, this implies that the Doppler spread of the channel is much less than the bandwidth of the baseband signal. Therefore, a signal undergoes slow fading only if Ts ≪ Tc and Bs ≫ BD. It should be clear that the velocity of the mobile (or velocity of objects in the channel) and the baseband signaling determine whether a signal undergoes fast fading or slow fading.
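The fast/slow classification can likewise be sketched in Python. The coherence-time formula Tc ≈ 9/(16π·fd) is one common approximation from the literature, and the carrier frequency and vehicle speed used here are hypothetical:

```python
import math

def coherence_time(f_doppler):
    """One common approximation: Tc ~ 9 / (16*pi*fd)."""
    return 9.0 / (16.0 * math.pi * f_doppler)

def fading_rate_type(symbol_period, t_c):
    """Fast fading if the channel decorrelates within a symbol (Ts > Tc)."""
    return "fast" if symbol_period > t_c else "slow"

# hypothetical: 30 m/s mobile, 900 MHz carrier -> 90 Hz maximum Doppler shift
fd = 30.0 * 900e6 / 3e8
tc = coherence_time(fd)                        # roughly 2 ms

assert fading_rate_type(10e-3, tc) == "fast"   # very low-rate signalling
assert fading_rate_type(10e-6, tc) == "slow"   # typical high-rate signalling
```

Note how only the very low symbol rate experiences fast fading, consistent with the remark above that fast fading occurs in practice only for very low data rates.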

The relation between the various multipath parameters and the type of fading experienced by the signal is summarized in “Figure (5.26)”. Over the years, some authors have confused the terms fast and slow fading with the terms large-scale and small-scale fading. It should be emphasized that fast and slow fading deal with the relationship between the time rate of change in the channel and the transmitted signal, and not with propagation path loss models.


Figure (5.26)

5.5.5 –Rayleigh Fading

Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. Rayleigh fading models assume that the magnitude of a signal that has passed through such a transmission medium (also called a communications channel) will vary randomly, or fade, according to a Rayleigh distribution — the radial component of the sum of two uncorrelated Gaussian random variables.

Rayleigh fading is viewed as a reasonable model for tropospheric and ionospheric signal propagation as well as the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver. If there is a dominant line of sight, Rician fading may be more applicable.

Rayleigh fading is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver. The central limit theorem holds that, if there is sufficient scattering, the channel impulse response will be well modeled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed.

Calling this random variable R, it will have a probability density function as shown below in “Equation (5.30)”.


Equation (5.30): pR(r) = (2r/Ω) exp(-r²/Ω), r ≥ 0

Equation (5.31): R = |X + jY| = √(X² + Y²)

Figure (5.27)

where Ω = E(R²).

Often, the gain and phase elements of a channel's distortion are conveniently represented as a complex number. In this case, Rayleigh fading is exhibited by the assumption that the real and imaginary parts of the response are modeled by independent and identically distributed zero-mean Gaussian processes, so that the amplitude of the response is the modulus of the sum of two such processes, as shown below in “Equation (5.31)”.
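This construction of the Rayleigh envelope can be checked with a brief Monte-Carlo sketch in Python (sample size and seed are arbitrary choices): drawing the real and imaginary parts as i.i.d. zero-mean Gaussians and taking the modulus should reproduce Ω = E(R²) = 2σ² and the Rayleigh mean σ√(π/2):

```python
import math
import random

random.seed(1)
sigma = 1.0
n = 200_000

# envelope of two i.i.d. zero-mean Gaussians (real/imag parts of the channel)
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

omega = sum(x * x for x in r) / n   # empirical Omega = E[R^2]; theory: 2*sigma^2
mean_r = sum(r) / n                 # theory: sigma*sqrt(pi/2) ~ 1.2533
```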

“Figure (5.27)” shows the Rayleigh distribution for different values of σ.

It is important to remember that Rayleigh fading is a small-scale effect; there will be bulk properties of the environment, such as path loss and shadowing, upon which the fading is superimposed.

The requirement that there be many scatterers present means that Rayleigh fading can be a useful model in heavily built-up city centers where there is no line of sight between the transmitter and receiver and many buildings and other objects attenuate, reflect, refract and diffract the signal. Experimental work in Manhattan has found near-Rayleigh fading there. In tropospheric and ionospheric signal propagation the many particles in the atmospheric layers act as scatterers, and this kind of environment may also approximate Rayleigh fading. If the environment is such that, in addition to the scattering, there is a strongly dominant signal seen at the receiver, usually caused by a line of sight, then the mean of the random process will no longer be zero, varying instead around the power level of the dominant path. Such a situation may be better modeled as Rician fading.

Since it is based on a well-studied distribution with special properties, the Rayleigh distribution lends itself to analysis, and the key features that affect the performance of a wireless network have analytic expressions.


Equation (5.31): S(f) = 1 / (π fd √(1 - (f/fd)²)), |f| < fd

Figure (5.28)

Equation (5.32)

The Doppler power spectral density of a fading channel describes how much spectral broadening it causes. This shows how a pure frequency, e.g. a pure sinusoid, which is represented in the frequency domain as an impulse, is spread out across frequency when it passes through the channel. It is the Fourier transform of the time-autocorrelation function. For Rayleigh fading with a vertical receive antenna with equal sensitivity in all directions, this has been shown in “Equation (5.31)”, where f is the frequency shift relative to the carrier frequency and fd is the maximum Doppler shift. The equation is valid only for values of f between -fd and fd; the spectrum is zero outside this range. The 'bowl shape' or 'bathtub shape' is the classic form of this Doppler spectrum; “Figure (5.28)” shows the normalized Doppler power spectrum of Rayleigh fading with a maximum Doppler shift of 10 Hz.
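A minimal Python sketch of this spectrum, assuming the classical normalized form S(f) = 1/(π fd √(1 - (f/fd)²)) for |f| < fd, illustrates the bathtub shape numerically:

```python
import math

def jakes_psd(f, f_d):
    """Classic 'bathtub' Doppler spectrum for Rayleigh fading with an
    omnidirectional antenna; zero outside |f| < f_d (singular at the edges)."""
    if abs(f) >= f_d:
        return 0.0
    return 1.0 / (math.pi * f_d * math.sqrt(1.0 - (f / f_d) ** 2))

fd = 10.0  # maximum Doppler shift in Hz, as in the figure
# symmetric about the carrier, minimum at f = 0, growing toward the band edges
assert jakes_psd(5, fd) == jakes_psd(-5, fd)
assert jakes_psd(0, fd) < jakes_psd(9, fd)
assert jakes_psd(12, fd) == 0.0
```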

5.5.6 –Ricean Fading

Ricean fading is a stochastic model for the radio propagation anomaly caused by partial cancellation of a radio signal by itself: the signal arrives at the receiver by several different paths (hence exhibiting multipath interference), and at least one of the paths is changing (lengthening or shortening). Rician fading occurs when one of the paths, typically a line of sight signal, is much stronger than the others. In Rician fading, the amplitude gain is characterized by a Rician distribution.

Ricean fading follows the distribution shown below in “Equation (5.32)”.

“Figure (5.29)” shows the Rician distribution for different values of σ.


Figure (5.29)

Equations (5.33)

A Ricean fading channel can be described by two parameters: K and Ω. K is the ratio between the power in the direct path and the power in the other, scattered, paths. Ω is the total power from both paths, and acts as a scaling factor to the distribution.

The received signal amplitude (not the received signal power) R is then Rice distributed as shown below in “Equations (5.33)”. The Rician distribution depends on these parameters through ν² = KΩ/(1 + K), the power in the direct path, and σ² = Ω/(2(1 + K)), the variance of each scattered component, so the distribution can be written in terms of K and Ω.
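These parameter relations can be verified with a small Monte-Carlo sketch in Python (K, Ω, sample size, and seed are arbitrary choices): building the envelope from a line-of-sight amplitude ν plus complex Gaussian scatter should recover Ω as the mean envelope power:

```python
import math
import random

random.seed(2)
K, omega = 4.0, 1.0                       # hypothetical: strong LOS, unit power
nu = math.sqrt(K * omega / (1 + K))       # LOS amplitude, nu^2 = K*omega/(1+K)
sigma = math.sqrt(omega / (2 * (1 + K)))  # scatter std-dev per dimension

n = 200_000
r = [math.hypot(nu + random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

emp_omega = sum(x * x for x in r) / n     # should recover omega = nu^2 + 2*sigma^2
```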

5.5.7 –Nakagami-m Fading

Data communications in wireless networks generally take place over fading channels with time-varying characteristics. The extent to which the dynamic nature of the wireless medium impacts the Quality of Service (QoS) of transmitted data depends on factors such as the severity of the fading channel and the resource allocation policy being employed to adapt to the time-varying channel. This is in contrast to wired point-to-point links where QoS is exclusively a function of the data traffic arrival statistics and the fixed capacity of the transmitter. In the wired case, QoS attributes, such as delay performance, can usually be studied by using appropriate queuing models and analyses. The time-varying wireless channel, on the other hand, poses a challenge in terms of queuing analysis and performance evaluation.


Equations (5.34)

The measurement of achievable performance of wireless communications over fading channels has historically been relegated to the realm of information theory, where channel capacity is the figure of merit. The delay component that accounts for the time that data spends in a transmit buffer, as well as other measures of QoS, are typically decoupled from the information theoretic problem, and often times simply ignored. This separation is reasonable for wired links where a constant transmit data rate can be assumed, but results in an inability to capture the important relationship between physical layer behavior and higher layer performance in a wireless network.

The Nakagami-m distribution has gained a lot of attention lately, due to its ability to model a wider class of fading channel conditions and to fit empirical data well. However, simulation models for Nakagami-m distributed fading channels exist in the literature only for values of the Nakagami parameter m between 0.5 and 10, and the modeling for 0.5 < m < 0.65 is typically computationally quite expensive. The simplest model for simulating Nakagami-m fading channels, based on a simplified approximation of the inverse of the cumulative distribution function (CDF) of the Nakagami distribution, was given by Beaulieu et al. Although that model is meant to cover arbitrary values of m between 0.65 and 10, only 12 values in this interval are tabulated, and the procedure for obtaining an arbitrary value besides those tabulated is not explicit or straightforward.

Here, we will investigate a single user channel subject to time varying, slow, flat fading with additive white Gaussian noise (AWGN) at the receiver. The fading is modeled as Nakagami-m, which has been shown to be a suitable model for a number of wireless environments. Under this type of fading, the SNR is gamma distributed; therefore the probability density function of the SNR is given by “Equation (5.34)”.

Here m is the Nakagami fading parameter, Γ(·) is the Gamma function, and δ̄ is the expected value of the SNR random variable δ. At low SNR, the data transmission rate is reduced to a level such that transmission may not be justified. Hence, links are considered to be in outage when the SNR is below a predetermined threshold, denoted δth and referred to as the SNR threshold. When in an outage, no data may be transmitted over the link.
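The outage event described above can be sketched numerically. Assuming, as in the text, that the SNR is gamma distributed with shape m and mean δ̄, a Monte-Carlo estimate (all numbers hypothetical) looks like:

```python
import math
import random

random.seed(3)

def outage_prob_nakagami(m, mean_snr, snr_th, n=200_000):
    """Monte-Carlo outage probability under Nakagami-m fading:
    SNR ~ Gamma(shape=m, scale=mean_snr/m), outage when SNR < snr_th."""
    below = sum(random.gammavariate(m, mean_snr / m) < snr_th
                for _ in range(n))
    return below / n

# m = 1 reduces to Rayleigh fading, where P(SNR < th) = 1 - exp(-th/mean)
p = outage_prob_nakagami(m=1.0, mean_snr=10.0, snr_th=1.0)
```

The m = 1 case gives a closed-form check against the exponential distribution, which the estimate should match to within Monte-Carlo error.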

In the face of outages and time varying fading, adaptive resource allocation strategies have been developed to more efficiently use the wireless resource. Typically, the performance of resource allocation policies is measured in terms of mean capacity and mean spectral efficiency. Unfortunately, in a wireless network these metrics may not be ideal measures of link performance, because links, rather than data, are their frame of reference. A more practical measure of link performance is delay, which measures the experience of data in the system. The delay experienced by data influences application performance and the overall utility of the network. Because delay is composed of transmit time and time waiting in transmit buffers, we develop and utilize a queuing analysis to measure the mean delay on links for various resource allocation strategies.

The Nakagami-m PDF is shown below in “Figure (5.30)”.


Figure (5.30)


Chapter 6 – Channel Coding

6.1 – Introduction

The engineering problem treated by the subject of error-control codes is that of protecting digital data against the errors that occur during transmission or storage. The storage and transmission of digital data lies at the heart of modern computers and telecommunications. If data is corrupted in storage or transmission, the consequences can range from mildly annoying to disastrous. Many error-protection techniques have been developed based on a rich mathematical theory, and the rapid advances in digital integrated circuitry have made possible the implementation of these algorithms. Channel coding is an important signal processing operation which provides reliable transmission of digital information over a channel. It is used mainly to minimize the effect of noise by facilitating two basic operations: error detection and error correction.

The task facing the designer of a digital communication system is that of providing a cost effective facility for transmitting information from one end of the system at a rate and a level of reliability and quality that are acceptable to a user at the other end. The two key system parameters available to the designer are transmitted signal power and channel bandwidth. These two parameters, together with the power spectral density of receiver noise, determine the signal energy per bit-to-noise power spectral density ratio Eb/N0. We showed before that this ratio uniquely determines the bit error rate for a particular modulation scheme. Practical considerations usually place a limit on the value that we can assign to Eb/N0. Accordingly, in practice, we often arrive at a modulation scheme and find that it is not possible to provide acceptable data quality (i.e., low enough error performance). For a fixed Eb/N0, the only practical option available for changing data quality from problematic to acceptable is to use error-control coding.

Another practical motivation for the use of coding is to reduce the required Eb/N0 for a fixed bit error rate. This reduction in Eb/N0 may, in turn, be exploited to reduce the required transmitted power or to reduce the hardware costs by requiring a smaller antenna size in the case of radio communications.

Error control for data integrity may be exercised by means of forward error correction (FEC). “Figure (6.1a)” shows the model of a digital communication system using such an approach. The discrete source generates information in the form of binary symbols. The channel encoder in the transmitter accepts message bits and adds redundancy according to a prescribed rule, thereby producing encoded data at a higher bit rate. The channel decoder in the receiver exploits the redundancy to decide which message bits were actually transmitted. The combined goal of the channel encoder and decoder is to minimize the effect of channel noise. That is, the number of errors between the channel encoder input (derived from the source) and the channel decoder output (delivered to the user) is minimized.

For a fixed modulation scheme, the addition of redundancy in the coded messages implies the need for increased transmission bandwidth. Moreover, the use of error-control coding adds complexity to the system, especially for the implementation of decoding operations in the receiver. Thus, the design trade-offs in the use of error-control coding to achieve acceptable error performance include considerations of bandwidth and system complexity.

There are many different error-correcting codes (with roots in diverse mathematical disciplines) that we can use. Historically, these codes have been classified into block codes and convolutional codes. The distinguishing feature for this particular classification is the presence or absence of memory in the encoders for the two codes.

To generate an (n, k) block code, the channel encoder accepts information in successive k-bit blocks; for each block, it adds n - k redundant bits that are algebraically related to the k message bits, thereby producing an overall encoded block of n bits, where n > k. The n-bit block is called a code word, and n is called the block length of the


Figure (6.1)

code. The channel encoder produces bits at the rate R0 = (n/k)Rs, where Rs is the bit rate of the information source. The dimensionless ratio r = k/n is called the code rate, where 0 < r < 1. The bit rate R0, coming out of the encoder, is called the channel data rate. Thus, the code rate is a dimensionless ratio, whereas the data rate produced by the source and the channel data rate are both measured in bits per second.
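These rate relations can be captured in a couple of illustrative Python helpers (the (7, 4) code and the source rate are hypothetical examples):

```python
def code_rate(n, k):
    """Dimensionless code rate r = k/n, with 0 < r < 1."""
    return k / n

def channel_data_rate(n, k, source_rate):
    """R0 = (n/k) * Rs: the encoder adds n-k redundant bits per k message bits,
    so the output bit rate is higher than the source bit rate."""
    return (n / k) * source_rate

# hypothetical (7, 4) block code carrying a 10 kbit/s source
assert code_rate(7, 4) == 4 / 7
assert channel_data_rate(7, 4, 10_000) == 17_500
```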

In a convolutional code, the encoding operation may be viewed as the discrete time convolution of the input sequence with the impulse response of the encoder. The duration of the impulse response equals the memory of the encoder. Accordingly, the encoder for a convolutional code operates on the incoming message sequence, using a "sliding window" equal in duration to its own memory. This, in turn, means that in a convolutional code, unlike a block code, the channel encoder accepts message bits as a continuous sequence and thereby generates a continuous sequence of encoded bits at a higher rate.

In the model depicted in “Figure (6.1a)”, the operations of channel coding and modulation are performed separately in the transmitter; likewise for the operations of detection and decoding in the receiver. When, however, bandwidth efficiency is of major concern, the most effective method of implementing forward error-correction coding is to combine it with modulation as a single function, as shown in “Figure (6.1b)”. In such an approach, coding is redefined as a process of imposing certain patterns on the transmitted signal.

6.1.1 –Difference between Channel Coding and Source Coding

Channel encoding means the addition of controlled redundancy bits to the data to combat noise, whereas source encoding means compression of the information stream (speech coding, image and video compression) so that no significant information is lost, enabling a perfect reconstruction of the information. Thus, by eliminating superfluous and uncontrolled redundancy, the load on the transmission system is reduced.

6.1.2 –Minimum Distance Considerations

We would like to find good codes. In such codes every code word is as different as possible from every other code word. In the following sections we will focus on some examples of good codes, but first, we must define the


Figure (6.2)

Figure (6.3)

term 'distance between code words'. Consider a pair of code words c1 and c2 that have the same number of elements. The Hamming distance between such a pair of code words is defined as the number of locations in which their respective elements differ. For example, taking c1 = (0 0 0) and c2 = (1 1 1), the Hamming distance is d(c1, c2) = 3. The minimum distance dmin of a code is defined as the smallest Hamming distance between any pair of code words in the code. The concept of distance between code words can be shown for short codes using the 3-D cube, as shown below in “Figure (6.2)”. All the possible words with 3 bits are the corners of the cube, while the code words are a subset of these words and are colored in black.

In the example, there are only 2 code words of 3 bits each, which leads to the conclusion that each code word has 2 redundancy bits.
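The Hamming-distance and minimum-distance definitions translate directly into a short Python sketch, using the 3-bit repeat code from the example:

```python
from itertools import combinations

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length code words differ."""
    return sum(a != b for a, b in zip(c1, c2))

def minimum_distance(code):
    """Smallest Hamming distance over all pairs of code words in the code."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

repeat_code = ["000", "111"]     # the 3-bit repeat code from the text
assert hamming_distance("000", "111") == 3
assert minimum_distance(repeat_code) == 3
```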

There are other possible codes of length equal to 3, with different minimum distance, as shown in “Figure (6.3)”.


Equation (6.1): t = ⌊(dmin - 1)/2⌋

Figure (6.4)

Figure (6.5)

The minimum distance of a code is an important parameter of the code. Specifically, it determines the error-correcting capability of the code. A code of minimum distance dmin can correct up to t errors, where t is defined as shown below in “Equation (6.1)”.

If we go back to the repeat code, the minimum distance of the code is the Hamming distance between the two words of the code, dmin = 3, so t = 1, which means that the code can correct up to one error. The whole process of sending a word, an error occurring, and its correction can be seen in “Figure (6.4)”.
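The error-correcting capability and the majority-vote correction of the repeat code can be sketched as follows (function names are our own):

```python
def correctable_errors(d_min):
    """t = floor((d_min - 1) / 2): errors correctable by a code of minimum
    distance d_min."""
    return (d_min - 1) // 2

def repeat_decode(word):
    """Majority vote decodes the 3-bit repeat code, fixing any single error."""
    return "111" if word.count("1") >= 2 else "000"

assert correctable_errors(3) == 1      # the repeat code corrects one error
assert repeat_decode("101") == "111"   # one flipped bit is corrected
assert repeat_decode("001") == "000"
```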

6.1.3 –Modulo-2 Arithmetic Operations

Definition: arithmetic operations applied to binary codes only. In this notation, all arithmetic operations are the same as we perform on any two integers except for addition, where we use the exclusive-or operation to add any two code words. Thus, according to the notation, the rules for modulo-2 addition are as shown below in “Figure (6.5)”, and one must bear in mind that 1 + 1 = 0, from which it follows that 1 = -1. Hence, in binary arithmetic, subtraction is the same as addition. The rules for modulo-2 multiplication are also shown in “Figure (6.5)”.

Division is trivial, in that we have 1 ÷ 1 = 1 and 0 ÷ 1 = 0, and division by 0 is not permitted. Modulo-2 addition is the EXCLUSIVE-OR operation in logic, and modulo-2 multiplication is the AND operation.
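In code, the modulo-2 rules reduce to the XOR and AND operators; a minimal Python sketch:

```python
# Modulo-2 addition is XOR; modulo-2 multiplication is AND.
def add2(a, b):
    return a ^ b

def mul2(a, b):
    return a & b

# the full addition and multiplication tables from Figure (6.5)
assert [add2(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
assert [mul2(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
# subtraction equals addition: 1 - 1 = 1 + 1 = 0 (mod 2)
assert add2(1, 1) == 0
```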


Figure (6.6)

6.2 – Comparison between Typical Coded versus Un-coded Systems Performance

“Figure (6.6)” shows a comparison between the performance of a system that sends data directly and one that passes the data through a channel coding block before sending.

6.2.1 –Error Performance versus Band-Width Performance

From the curve, if no coding is used and we want to decrease the bit error rate from 10^-2 to 10^-4 (moving from point A to point B in “Figure (6.6)”), Eb/N0 must be increased from 8 dB to 9 dB. If instead we want to decrease the error rate at a constant Eb/N0 of 8 dB (moving from point A to point C), coding must be used; the trade-off in this case is an increase in bandwidth.

6.2.2 –Power Performance versus Band-Width Performance

Consider a system without coding operating at point D on the curve of “Figure (6.6)”. If it is required to decrease the power Eb/N0 from 14 dB to 9 dB (from point D to point E) at the same bit error rate, i.e. the same quality of service, coding must be used; the cost in this case is again an increase in bandwidth.

6.2.3 –Data Rate Performance versus Band-Width Performance

Consider a system without coding operating at point D in “Figure (6.6)” with Eb/N0 = 14 dB and PB = 10^-6. Assume that there is no problem with the data quality and no need to reduce power, but we need to increase the data rate. From “Equation (6.2)”, if the data rate R increases, Eb/N0 decreases, and the operating point moves


Equation (6.2)

Equation (6.3)

Figure (6.7)

upward from point D to point F, but the bit error rate increases. By using coding, the system can instead move to point E, so the data rate is increased at the same data quality. The price of error-correction coding is, once again, increased bandwidth.

Equation (6.2): Eb/N0 = (S/N)(W/R)

6.3 – Hard Decisions versus Soft Decisions

The code word sequence U(m), made up of branch words with each branch word comprised of n code symbols, can be considered to be an endless stream, as opposed to a block code, in which the source data and their code words are partitioned into precise block sizes. Consider that a binary signal transmitted over a symbol interval (0, T) is represented by s1(t) for a binary one and s2(t) for a binary zero. The received signal is represented by “Equation (6.3)”.

where n(t) is a zero-mean Gaussian noise process.

The detection of r(t) consists of two basic steps. In the first step, the received waveform is reduced to a single number Z(T) = ai + n0, where ai is the signal component of Z(T) and n0 is the noise component. The noise component n0 is a zero-mean Gaussian random variable, and thus Z(T) is a Gaussian random variable with a mean of either a1 or a2, depending on whether a binary one or binary zero was sent.

In the second step of the detection process, a decision is made as to which signal was transmitted, on the basis of comparing Z(T) to a threshold, as shown below in “Figure (6.7)”.

Since the decoder operates on the hard decisions made by the demodulator, the decoding is called hard-decision decoding. The demodulator can also be configured to feed the decoder with a quantized value of Z(T) with more than two levels. Such an implementation furnishes the decoder with more information than is provided in the hard-decision case; when the quantization level of the demodulator output is greater than two, the decoding is called soft-decision decoding. Eight levels (3 bits) of quantization are illustrated on the abscissa of “Figure (6.7)”. When the demodulator sends a hard binary decision to the decoder, it sends a single binary symbol. When the demodulator sends a soft binary decision, quantized to eight levels, it sends the decoder a 3-bit word describing the interval along Z(T). Referring to “Figure (6.7)”, if the demodulator sends 111 to the decoder, this is tantamount to declaring the code symbol to be a one with very high confidence, while sending 100 is tantamount to declaring the code symbol to be a one with very low confidence. It should be clear that ultimately every message decision out of the decoder must be a hard decision; otherwise, one might see computer printouts that read: "think it's a 1", "think it's a 0", and so on.


6.3.1 – Hard Decision Decoding

When binary coding is used, the modulator has only binary inputs. If binary demodulator output quantization is used, the decoder has only binary inputs. In this case, the demodulator is said to make hard decisions.

Decoding based on the hard decisions made by the demodulator is called "hard-decision decoding".

6.3.2 – Soft Decision Decoding

If the output of the demodulator consists of more than two quantization levels or is left unquantized, the demodulator is said to make soft decisions.

Decoding based on the soft decisions made by the demodulator is called soft-decision decoding. Hard-decision decoding is much easier to implement than soft-decision decoding; however, soft-decision decoding offers much better performance.
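The contrast between the two demodulator output formats can be sketched in a few lines. The following hypothetical Python illustration (not part of the project simulation; the quantizer range `limit` is an assumed parameter) keeps only the sign of the sample z(T) for a hard decision, and maps z(T) onto one of eight intervals for a 3-bit soft decision:

```python
def hard_decision(z):
    """Return a single bit: 1 if z(T) is above the zero threshold, else 0."""
    return 1 if z >= 0.0 else 0

def soft_decision_3bit(z, limit=1.0):
    """Quantize z(T) into one of 8 levels (0..7) across [-limit, +limit].
    Level 7 reads as 'one with very high confidence', level 0 as
    'zero with very high confidence'."""
    z = max(-limit, min(limit, z))          # clip to the quantizer range
    level = int((z + limit) / (2 * limit) * 8)
    return min(level, 7)

# a strongly positive sample: a confident 'one' in both formats
print(hard_decision(0.9), soft_decision_3bit(0.9))   # 1 7
# a weakly positive sample: the hard decision hides the low confidence
print(hard_decision(0.1), soft_decision_3bit(0.1))   # 1 4
```

The soft output carries the same sign information as the hard output plus a measure of reliability, which is exactly what the decoder exploits.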

6.4 – Error Control Coding

This chapter is the natural sequel to the preceding chapter on Shannon's information theory. In particular, in this chapter we present error-control coding techniques that provide different ways of implementing Shannon's channel-coding theorem. Each error-control coding technique involves the use of a channel encoder in the transmitter and a decoding algorithm in the receiver.

The error-control coding techniques described here include the following important classes of codes:

Linear Block Codes.

Cyclic Codes.

Convolutional Codes.

Compound codes, exemplified by turbo codes and low-density parity-check codes, and their irregular variants.

6.4.1 – Linear Block Codes

A block code consists of a set of fixed-length vectors called code words. The length of a code word is the number of elements in the vector and is denoted by n. The elements of a code word are selected from an alphabet of q elements. When the alphabet consists of two elements, 0 and 1, the code is a binary code and the elements of any code word are called bits. When the elements of a code word are selected from an alphabet having q elements (q > 2), the code is non-binary. It is interesting to note that when q is a power of 2, i.e., q = 2^b where b is a positive integer, each q-ary element has an equivalent binary representation consisting of b bits, and thus a non-binary code of block length N can be mapped into a binary code of block length n = bN.

There are 2^n possible code words in a binary block code of length n. From these 2^n code words, we may select M = 2^k code words (k < n) to form a code. Thus, a block of k information bits is mapped into a code word of length n selected from the set of M = 2^k code words. We refer to the resulting block code as an (n, k) code, and the ratio Rc = k/n is defined to be the rate of the code.

More generally, in a code having q elements, there are q^n possible code words. A subset of M = 2^k code words may be selected to transmit k-bit blocks of information.

Page 168: DVB-T2 graduation project book 2011

Digital Video Broadcasting 2nd Generation Terrestrial Simulation

2011

153 | P a g e


Besides the code rate parameter Rc, an important parameter of a code word is its weight, which is simply the number of nonzero elements it contains. In general, each code word has its own weight. The set of all weights in a code constitutes the weight distribution of the code. When all M code words have equal weight, the code is called a fixed-weight code or a constant-weight code.

The encoding and decoding functions involve the arithmetic operations of addition and multiplication performed on code words. These arithmetic operations are performed according to the conventions of the algebraic field that has as its elements the symbols contained in the alphabet. For example, the symbols in a binary alphabet are 0 and 1; hence, the field has two elements. In general, a field F consists of a set of elements that has two arithmetic operations defined on its elements, namely addition and multiplication, that satisfy certain properties (axioms).

6.4.1.1 – Design Equations

Using matrix notation, we define the 1-by-k message vector (or information vector) m, the 1-by-(n − k) parity vector b, and the 1-by-n code vector c as shown below in "Equation (6.4)":

m = [m1, m2, …, mk]
b = [b1, b2, …, b(n−k)]
c = [c1, c2, …, cn]    "Equation (6.4)"

Note that all three vectors are row vectors. The use of row vectors is adopted for the sake of being consistent with the notation commonly used in the coding literature. We may thus rewrite the set of simultaneous equations defining the parity bits in the compact matrix form shown below in "Equation (6.5)":

b = mP    "Equation (6.5)"

P: is the k-by-(n − k) coefficient matrix defined by:

P = [ p11  p12  …  p1(n−k)
      p21  p22  …  p2(n−k)
      …
      pk1  pk2  …  pk(n−k) ]

pij: "0" or "1".

From the definitions given before, we see that c may be expressed as a partitioned row vector as shown below in "Equation (6.6)":

c = [b | m]    "Equation (6.6)"

Hence:

c = m [P | Ik]

Using the definition of the generator matrix G = [P | Ik], we may simplify it to be:

c = mG

The full set of code words, referred to simply as the code, is generated in accordance with the generator equation by letting the message vector m range through the set of all 2^k binary k-tuples (1-by-k vectors). Moreover, the sum of any two code words is another code word. This basic property of linear block codes is called closure. To prove its validity,



Figure (6.8)

consider a pair of code vectors ci and cj corresponding to a pair of message vectors mi and mj, respectively. Using the generator equation, we may express the sum of ci and cj as shown below in "Equation (6.7)":

ci + cj = (mi + mj) G    "Equation (6.7)"

The modulo-2 sum of mi and mj represents a new message vector. Correspondingly, the modulo-2 sum of ci and cj represents a new code vector. There is another way of expressing the relationship between the message bits and parity-check bits of a linear block code. Let H denote an (n − k)-by-n matrix, defined as shown in "Equation (6.8)":

H = [I(n−k) | P^T]    "Equation (6.8)"

P^T: is an (n − k)-by-k matrix, representing the transpose of the coefficient matrix P.
I(n−k): is the (n − k)-by-(n − k) identity matrix.

Accordingly, we may perform the following multiplication of partitioned matrices:

HG^T = P^T + P^T

Then:

HG^T = 0

We have used the fact that multiplication of a rectangular matrix by an identity matrix of compatible dimensions leaves the matrix unchanged. In modulo-2 arithmetic, we have P^T + P^T = 0, where 0 denotes an (n − k)-by-k null matrix (i.e., a matrix that has zeros for all of its elements).

The matrix H is called the parity-check matrix of the code, and the set of equations cH^T = 0 specified by it are called the parity-check equations. The generator equation c = mG and the parity-check detector equation are depicted in the form of block diagrams in "Figure (6.8)".
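The design equations above are easy to check numerically. The sketch below (Python, using the well-known (7, 4) Hamming code as an assumed example; the coefficient matrix P shown is one common choice, not something prescribed by this text) builds G = [P | Ik] and H = [I(n−k) | P^T], then verifies both HG^T = 0 and the closure property:

```python
from itertools import product

k, n = 4, 7
# one common coefficient matrix P for the (7,4) Hamming code (k rows, n-k columns)
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
# G = [P | Ik]  (k-by-n)  and  H = [I(n-k) | P^T]  ((n-k)-by-n)
G = [P[i] + [1 if j == i else 0 for j in range(k)] for i in range(k)]
H = [[1 if j == i else 0 for j in range(n - k)] + [P[r][i] for r in range(k)]
     for i in range(n - k)]

# HG^T = 0 (mod 2): every row of G satisfies all parity-check equations
for h in H:
    for g in G:
        assert sum(hj & gj for hj, gj in zip(h, g)) % 2 == 0

# generate the full code by letting m range over all 2^k binary k-tuples
def encode(m):
    return tuple(sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n))

code = {encode(m) for m in product([0, 1], repeat=k)}
print(len(code))                                   # 16 code words

# closure: the modulo-2 sum of any two code words is another code word
for c1 in code:
    for c2 in code:
        assert tuple((a + b) % 2 for a, b in zip(c1, c2)) in code
```

Any k-by-(n − k) matrix P defines a linear code this way; the particular P above happens to give the single-error-correcting Hamming code.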



6.4.1.2 – Properties of Linear Block Codes

In this clause we quickly review the properties and features of linear block codes, the first type of channel coding described previously.

Linearity: the linear combination (modulo-2 sum) of two code-words yields a new code-word.

Hamming weight "w(c)": the number of non-zero symbols in a code-word.

Hamming distance "d(c1, c2)": the number of differing symbols between code-word "c1" and code-word "c2".

Error-detection capability "e" and error-correction capability "t" are given by "Equations (6.9)":

e = dmin − 1
t = ⌊(dmin − 1) / 2⌋    "Equations (6.9)"

dmin: the minimum distance of the code, i.e., the smallest Hamming distance between any two code-words.
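These quantities can be computed by brute force for a small code. The sketch below (Python, again taking the (7, 4) Hamming code with an assumed, commonly used coefficient matrix P) uses the fact that, for a linear code, dmin equals the smallest nonzero code-word weight:

```python
from itertools import product

k, n = 4, 7
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]   # assumed (7,4) coefficient matrix
G = [P[i] + [1 if j == i else 0 for j in range(k)] for i in range(k)]  # [P | Ik]

def encode(m):
    return tuple(sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n))

code = {encode(m) for m in product([0, 1], repeat=k)}

# for a linear code, d_min is the minimum nonzero Hamming weight
d_min = min(sum(c) for c in code if any(c))
e = d_min - 1          # error-detection capability
t = (d_min - 1) // 2   # error-correction capability
print(d_min, e, t)     # 3 2 1
```

With dmin = 3, the code detects any two errors and corrects any single error per code word, which is the defining property of Hamming codes.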

6.4.2 – Cyclic Codes

6.4.2.1 – Introduction

Cyclic coding forms a subclass of linear block codes (LBC). An advantage of cyclic codes over other types is that they are easy to encode and possess a well-defined mathematical structure, which has led to the development of very efficient decoding schemes.

A binary code is said to be a cyclic code if it exhibits two main properties:
1. Linearity property: the sum of any two code words in the code is also a code word.
2. Cyclic property: any cyclic shift of a code word in the code is also a code word.

Property 1 restates the fact that a cyclic code is a linear block code, so it can be described as a parity-check code. Property 2 can be restated in mathematical terms: let the n-tuple (c0, c1, …, c(n−1)) denote a code word of a linear block code. The code is a cyclic code if the shifted n-tuple in "Equation (6.10)" is also a code word:

(c(n−1), c0, c1, …, c(n−2))    "Equation (6.10)"

All such cyclically shifted n-tuples must be valid code words in the code, which leads us to the code polynomial shown below in "Equation (6.11)" that helps us to develop the algebraic properties of cyclic codes:

c(X) = c0 + c1 X + c2 X^2 + … + c(n−1) X^(n−1)    "Equation (6.11)"

X: is the indeterminate.



For binary codes the coefficients are 1s and 0s. Each power of X in the code polynomial c(X) represents a one-bit shift in time. This means that multiplication of the polynomial c(X) by X may be viewed as a shift to the right. So how can we make such a shift cyclic?

In the following lines and equations one can find a clear answer to that question. Initially, let the code polynomial be multiplied by X^i, as shown below in "Equation (6.12)":

X^i c(X) = c0 X^i + c1 X^(i+1) + … + c(n−1) X^(n+i−1)    "Equation (6.12)"

After rearranging and using the modulo-2 addition properties, we may reduce each term with a power of X^n or higher by means of X^(n+j) = X^j (X^n + 1) + X^j. By introducing the shifted polynomial

c^(i)(X) = c(n−i) + c(n−i+1) X + … + c(n−1) X^(i−1) + c0 X^i + … + c(n−i−1) X^(n−1),

the last equation can be written as:

X^i c(X) = q(X) (X^n + 1) + c^(i)(X)

Because c^(i)(X) is the remainder that results from dividing X^i c(X) by (X^n + 1), and its coefficients are a cyclic shift of those of c(X), the remainder is a code polynomial for any cyclic shift i.
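This shift property can be verified directly: reducing X^i · c(X) modulo (X^n + 1) over GF(2) must reproduce the cyclically shifted code word. A minimal Python sketch (polynomials as coefficient lists, an assumed illustration rather than project code):

```python
def poly_mod_shift(c, i):
    """Compute X^i * c(X) mod (X^n + 1) over GF(2), with c given as
    the coefficient list [c0, c1, ..., c_{n-1}]."""
    n = len(c)
    shifted = [0] * i + list(c)            # multiply by X^i
    reduced = [0] * n
    for power, coeff in enumerate(shifted):
        reduced[power % n] ^= coeff        # use X^n = 1 modulo (X^n + 1)
    return reduced

c = [1, 1, 0, 1, 0, 0, 0]                  # c(X) = 1 + X + X^3, n = 7
print(poly_mod_shift(c, 2))                # [0, 0, 1, 1, 0, 1, 0]
assert poly_mod_shift(c, 2) == c[-2:] + c[:-2]   # a two-step cyclic shift
```

The same reduction `X^n = 1` is all that the modulo-(X^n + 1) division in the derivation amounts to.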

6.4.2.2 – Generator Polynomial

The polynomial X^n + 1 and its factors play a vital role in the generation of cyclic codes. Let g(X) be a polynomial of degree (n − k) that is a factor of X^n + 1. g(X) may be expressed as shown below in "Equation (6.13)":

g(X) = 1 + g1 X + g2 X^2 + … + g(n−k−1) X^(n−k−1) + X^(n−k)    "Equation (6.13)"

The coefficient gi is equal to "0" or "1". According to this expansion, the polynomial g(X) has two terms with coefficient 1 separated by (n − k − 1) terms. The polynomial g(X) is called the generator polynomial of the cyclic code. A cyclic code is uniquely determined by its generator polynomial, in that each code polynomial in the code can be expressed in the form of a polynomial product, as shown in "Equation (6.14)".



c(X) = a(X) g(X)    "Equation (6.14)"

a(X): a polynomial in X with degree k − 1 or less.

Suppose we are given the generator polynomial g(X) and the requirement is to encode the message sequence (m0, m1, …, m(k−1)) into a systematic cyclic code. That is, the message bits are transmitted in unaltered form, as shown by the following structure for a code word ("Equation (6.15)"):

c = (b0, b1, …, b(n−k−1), m0, m1, …, m(k−1))    "Equation (6.15)"

Let the message polynomial be defined as:

m(X) = m0 + m1 X + … + m(k−1) X^(k−1)

And let:

X^(n−k) m(X) = a(X) g(X) + b(X)

But we want the code polynomial to be in the form:

c(X) = b(X) + X^(n−k) m(X)

With the aid of the modulo-2 addition rules we may convert the previous equation to be:

b(X) = X^(n−k) m(X) + a(X) g(X)

The last equation states that the polynomial b(X) is the remainder left over after dividing X^(n−k) m(X) by g(X).

We may summarize the steps involved in the encoding procedure for an (n, k) cyclic code assured of a systematic structure. Specifically, we proceed as follows:

1. Multiply the message polynomial m(X) by X^(n−k).
2. Divide the result of the 1st step by the generator polynomial g(X), obtaining the remainder b(X).
3. Obtain the code word c(X) = b(X) + X^(n−k) m(X), i.e., c = (b | m).
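The three steps above can be sketched directly with GF(2) polynomial division. The following Python illustration uses the (7, 4) cyclic code with g(X) = 1 + X + X^3, an assumed textbook example rather than anything fixed by this text:

```python
def gf2_div_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are coefficient
    lists ordered from X^0 upward."""
    r = list(dividend)
    d = len(divisor) - 1                       # degree of g(X)
    for i in range(len(r) - 1, d - 1, -1):
        if r[i]:                               # eliminate the leading term
            for j, gj in enumerate(divisor):
                r[i - d + j] ^= gj
    return r[:d]

def cyclic_encode(m, g, n):
    k = len(m)
    shifted = [0] * (n - k) + list(m)          # step 1: X^(n-k) * m(X)
    b = gf2_div_remainder(shifted, g)          # step 2: remainder b(X)
    return b + list(m)                         # step 3: c = (b | m)

g = [1, 1, 0, 1]                               # g(X) = 1 + X + X^3
c = cyclic_encode([1, 0, 0, 1], g, n=7)
print(c)                                       # [0, 1, 1, 1, 0, 0, 1]
# every code polynomial must be divisible by g(X)
assert gf2_div_remainder(c, g) == [0, 0, 0]
```

The final assertion checks the defining property c(X) = a(X) g(X): the encoded word leaves a zero remainder when divided by the generator polynomial.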

6.4.2.3 – Parity-Check Polynomial

An (n, k) cyclic code is uniquely specified by its generator polynomial g(X) of degree (n − k). Such a code is also uniquely specified by another polynomial of degree k, called the parity-check polynomial, defined by "Equation (6.16)":

h(X) = 1 + h1 X + h2 X^2 + … + h(k−1) X^(k−1) + X^k    "Equation (6.16)"

The coefficients hi are "0" or "1".



The parity-check polynomial h(X) has a form similar to the generator polynomial in that there are two terms with coefficient 1, but separated by (k − 1) terms.

The generator polynomial g(X) is equivalent to the generator matrix G as a description of the code. Correspondingly, the parity-check polynomial, denoted by h(X), is an equivalent representation of the parity-check matrix H. We thus find that the matrix relation HG^T = 0 for linear block codes corresponds to the relationship shown below in "Equation (6.17)":

g(X) h(X) mod (X^n + 1) = 0    "Equation (6.17)"

This equation shows that the generator polynomial and the parity-check polynomial are factors of the polynomial X^n + 1, as shown in "Equation (6.18)":

g(X) h(X) = X^n + 1    "Equation (6.18)"

This property provides the basis for selecting the generator or parity-check polynomial of a cyclic code. In particular, we may state that if g(X) is a polynomial of degree (n − k) and it is also a factor of X^n + 1, then g(X) is the generator polynomial of an (n, k) cyclic code. Equivalently, we may state that if h(X) is a polynomial of degree k and it is also a factor of X^n + 1, then h(X) is the parity-check polynomial of an (n, k) cyclic code. A final comment is in order. Any factor of X^n + 1 with degree (n − k), the number of parity bits, can be used as a generator polynomial. For large values of n, the polynomial X^n + 1 may have many factors of degree (n − k). Some of these polynomial factors generate good cyclic codes, whereas some of them generate bad cyclic codes. The issue of how to select generator polynomials that produce good cyclic codes is very difficult to resolve. Indeed, coding theorists have expended much effort in the search for good cyclic codes.
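The factorization is easy to check for a small code. For the (7, 4) cyclic code with g(X) = 1 + X + X^3, the parity-check polynomial is h(X) = 1 + X + X^2 + X^4 (an assumed textbook example), and multiplying the two over GF(2) should give exactly X^7 + 1:

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (X^0 first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]          # g(X) = 1 + X + X^3       (degree n - k = 3)
h = [1, 1, 1, 0, 1]       # h(X) = 1 + X + X^2 + X^4 (degree k = 4)
print(gf2_mul(g, h))      # [1, 0, 0, 0, 0, 0, 0, 1] -> X^7 + 1
```

All the intermediate cross-terms cancel under modulo-2 addition, leaving only the two end terms, which is exactly Equation (6.18).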

6.4.2.4 – Generator and Parity-Check Matrix Representations

Given the generator polynomial g(X) of an (n, k) cyclic code, we may construct the generator matrix G by noting that the k polynomials g(X), X g(X), …, X^(k−1) g(X) span the code. The n-tuples corresponding to these polynomials are used as rows to generate the matrix G.

The construction of the parity-check matrix H of the cyclic code from the parity-check polynomial requires special attention, as described in "Equations (6.19)". Multiplying the code polynomial by h(X):

c(X) h(X) = a(X) g(X) h(X)

But:

g(X) h(X) = X^n + 1

We get:

c(X) h(X) = a(X) (X^n + 1) = a(X) + a(X) X^n    "Equations (6.19)"

The product on the left-hand side of the last equation contains powers extending up to X^(n+k−1). On the other hand, the polynomial a(X) has degree (k − 1) or less, so the powers X^k, X^(k+1), …, X^(n−1) do not appear in the polynomial on the right-hand side of this equation. Thus we set the coefficients of these powers in the product c(X) h(X) to zero, as shown below in "Equations (6.20)":

Σ(i = 0 to k) hi c(j−i) = 0,   j = k, k+1, …, n−1    "Equations (6.20)"



We will arrange the coefficients of h(X) in reversed order by defining the reciprocal polynomial:

X^k h(X^−1) = 1 + h(k−1) X + … + h1 X^(k−1) + X^k

The n-tuples corresponding to the (n − k) polynomials X^k h(X^−1), X^(k+1) h(X^−1), …, X^(n−1) h(X^−1) are now used as the rows of the parity-check matrix H.

6.4.2.5 – Case Study I: Cyclic Redundancy Check Codes (CRC)

CRC codes are error-detecting codes typically used in automatic-repeat-request (ARQ) systems. CRC codes have no error-correction capability, but they can be used in combination with an error-correcting code to improve the performance of the system.

A CRC constructed from an (n, k) cyclic code is capable of detecting any error burst of length (n − k) or less. Binary (n, k) CRC codes are capable of detecting the following error patterns:

1. All error bursts of length (n − k) or less.
2. A fraction of error bursts of length equal to (n − k + 1); the fraction equals 1 − 2^−(n−k−1).
3. A fraction of error bursts of length greater than (n − k + 1); the fraction equals 1 − 2^−(n−k).
4. All combinations of (dmin − 1) or fewer errors.
5. All error patterns with an odd number of errors, if the generator polynomial g(X) for the code has an even number of nonzero coefficients.

CRC codes can be divided into more than one type, as shown below:

Code name           Generator polynomial                    n − k
CRC-12 Code         1 + X + X^2 + X^3 + X^11 + X^12         12
CRC-16 Code (USA)   1 + X^2 + X^15 + X^16                   16
CRC-ITU Code        1 + X^5 + X^12 + X^16                   16
CRC-ATM Code        1 + X + X^2 + X^8                       8

The next sub-clauses will quickly discuss the important CRC models.

6.4.2.5.1 – 16-Bit CRC-CCITT (USA Model)

The generator polynomial of that type is shown below in "Equation (6.21)":

g(X) = 1 + X^5 + X^12 + X^16    "Equation (6.21)"

The check pattern is defined as the remainder of dividing X^16 m(X) by g(X). The code words are formed as a k-bit information sequence followed by the check pattern of 16 bits at the end of the message. The result is a (k + 16, k) code.

The 16-bit CRC encoder is shown below in "Figure (6.9)".


Figure (6.9)

In the encoding process, two zero bytes are added to the end of the message; they are used when computing the CRC.

At the receiver, the decoder computes the CRC of the message part, adds the result to the received CRC bytes, and then tests whether the result equals zero.
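A bit-serial form of this CRC division is short enough to sketch here. The Python illustration below uses the MSB-first shift-register form of g(X) = X^16 + X^12 + X^5 + 1 (polynomial word 0x1021) with an all-zero initial register; note that some common CCITT variants instead start the register at 0xFFFF, so the check value shown applies only to this zero-initialized variant:

```python
def crc16(data: bytes, crc: int = 0x0000) -> int:
    """Bitwise CRC-16 with generator X^16 + X^12 + X^5 + 1 (0x1021)."""
    for byte in data:
        crc ^= byte << 8                     # feed the next message byte
        for _ in range(8):                   # one shift per message bit
            if crc & 0x8000:                 # leading bit set: subtract g(X)
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

msg = b"123456789"
print(hex(crc16(msg)))                       # 0x31c3 for this variant
# decoder check: the CRC computed over message + check bytes is zero
assert crc16(msg + crc16(msg).to_bytes(2, "big")) == 0
```

The final assertion is exactly the receiver-side test described above: a valid code word, message followed by its 16 check bits, leaves a zero remainder.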

6.4.2.5.2 – CRC-ATM

In the field of telecommunications, among the numerous cyclic redundancy codes in use, the ATM CRC-32 is difficult to compute because it is based on a polynomial of degree 32 that has many more terms than any other CRC polynomial in common use. CRC checking and generation are generally carried out on a per-byte basis, in an attempt to cope with the dramatic increase of the data throughput of higher-speed lines. More calculations are needed to process a new incoming byte of data if the number of terms of the polynomial is large, because more bits of the current intermediate result must be combined to calculate each bit of the next one. This tends to counteract the intrinsic speed advantage of the per-byte computation by requiring that more processing be done at each cycle. This section describes a method that overcomes the intrinsic complexity of the CRC-32 computation. It permits expediting AAL5 messages, which must be segmented into ATM cells, with the CRC-32 computed at one end, and reassembled from cells, with the CRC-32 checked for data integrity at the destination.

The calculation is done in two steps:

1. A first division is done on the entire message (until the last cell is received or segmented) by a much simpler polynomial.
2. A second division, on the remainder of the first division by the regular CRC-32 polynomial, is performed only once, in order to obtain the final result.

6.4.2.6 – Case Study II: B. C. H. Codes

6.4.2.6.1 – Introduction

A BCH code is an example of a multilevel, cyclic, variable-length digital error-correcting code used to correct multiple random error patterns (t errors per code word).

In coding theory, the BCH codes form a class of parameterized error-correcting codes which have been the subject of much academic attention in the last fifty years. BCH codes were invented in 1959 by Hocquenghem, and independently in 1960 by Bose and Ray-Chaudhuri. The acronym BCH comprises the initials of these inventors' names.



The principal advantage of BCH codes is the ease with which they can be decoded, via an elegant algebraic method known as syndrome decoding. This allows very simple electronic hardware to perform the task, obviating the need for a computer, thus a decoding device can be made small and low-powered. This class of codes is also highly flexible, allowing control over block length and acceptable error thresholds, thus a custom code can be designed to a given specification (subject to mathematical constraints).

Reed–Solomon codes, which are a special case of BCH codes, are used in applications like satellite communications, compact disc players, DVDs, disk drives, and two-dimensional bar codes.

In technical terms a BCH code is a multilevel cyclic variable-length digital error-correcting code used to correct multiple random error patterns. BCH codes may also be used with multilevel phase-shift keying whenever the number of levels is a prime number or a power of a prime number. A BCH code in 11 levels has been used to represent the 10 decimal digits plus a sign digit.

BCH codes can be divided into more than one category; here we deal with binary BCH codes.

6.4.2.6.2 – Encoding Algorithm

BCH encoding can be summarized in the following four steps, which are represented in "Equations (6.22)":

1. Convert the message into polynomial form.
2. Multiply the polynomial form of the message by X^(n−k).
3. Divide by the generator polynomial g(X) and let the remainder be b(X).
4. Construct the code word using b(X) as the parity polynomial.

Step 1: m(X) = m0 + m1 X + … + m(k−1) X^(k−1)

Step 2: X^(n−k) m(X)

Step 3: X^(n−k) m(X) = q(X) g(X) + b(X)

Step 4: c(X) = b(X) + X^(n−k) m(X), i.e., c = (b | m)    "Equations (6.22)"
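As a concrete sketch (Python, an assumed illustration, not project code): for the double-error-correcting (15, 7) binary BCH code, the generator polynomial is the product of the minimal polynomials of α and α^3 over GF(2). Multiplying them and checking that the result divides X^15 + 1 confirms that it generates a cyclic code:

```python
def gf2_mul(a, b):
    """GF(2) polynomial product, coefficients listed from X^0 upward."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division."""
    r = list(dividend)
    d = len(divisor) - 1
    for i in range(len(r) - 1, d - 1, -1):
        if r[i]:
            for j, gj in enumerate(divisor):
                r[i - d + j] ^= gj
    return r[:d]

m1 = [1, 1, 0, 0, 1]     # minimal polynomial of alpha:   1 + X + X^4
m3 = [1, 1, 1, 1, 1]     # minimal polynomial of alpha^3: 1 + X + X^2 + X^3 + X^4
g = gf2_mul(m1, m3)      # generator of the (15, 7) t = 2 BCH code
print(g)                 # [1, 0, 0, 0, 1, 0, 1, 1, 1] -> 1 + X^4 + X^6 + X^7 + X^8

x15_plus_1 = [1] + [0] * 14 + [1]
assert not any(gf2_mod(x15_plus_1, g))   # g(X) divides X^15 + 1
```

Since g(X) has degree 8 = n − k, the code carries k = 7 message bits per 15-bit code word, and the systematic encoding itself is the same four-step division procedure listed above.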

6.4.2.6.3 – Decoding Algorithm

BCH decoding can be summarized in the following seven steps, which are represented in "Equations (6.23)" through "Equation (6.26)":


Figure (6.10)


1. Extend GF(2) to GF(2^m), where m is the biggest power in any of the generator equations; an example of extending GF(2) to GF(2^4) using a primitive polynomial (e.g., p(X) = 1 + X + X^4) is shown below in "Figure (6.10)".

2. Calculate the 2t syndromes using "Equations (6.23)". Each syndrome is obtained by converting the received code word to polynomial form r(X) and evaluating it at a power of α:

S1 = r(α): convert the received code word to a polynomial and replace X by α.
S2 = r(α^2): convert the received code word to a polynomial and replace X by α^2.
…
S(2t) = r(α^(2t))    "Equations (6.23)"

3. Detect the actual number of errors (v) in the received code word using "Equations (6.24)": v is the largest integer for which the v-by-v syndrome matrix

Mv = [ S1   S2    …  Sv
       S2   S3    …  S(v+1)
       …
       Sv  S(v+1) …  S(2v−1) ]

is nonsingular, i.e., det(Mv) ≠ 0.    "Equations (6.24)"

4. Knowing all the syndromes, solve Newton's equations ("Equation (6.25)") for the error-locator coefficients σi.


[ S1   S2    …  Sv        [ σv          [ S(v+1)
  S2   S3    …  S(v+1)      σ(v−1)        S(v+2)
  …                    ×    …        =    …
  Sv  S(v+1) …  S(2v−1) ]   σ1 ]          S(2v) ]    "Equation (6.25)"

5. Using all the values of σi, solve for the roots of the error-locator polynomial ("Equation (6.26)"):

σ(X) = 1 + σ1 X + σ2 X^2 + … + σv X^v = 0    "Equation (6.26)"

6. The calculated roots represent the reciprocals of the error locations.
7. Flip the received data bits at the error locations determined before.
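Step 1 of the decoder, extending GF(2) to GF(2^m), amounts to building the antilog table of the field. Assuming the common primitive polynomial p(X) = 1 + X + X^4 for GF(16) (an illustrative choice, not fixed by this text), each successive power of α is reduced with the identity α^4 = α + 1:

```python
def gf16_exp_table():
    """Powers alpha^0 .. alpha^14 of GF(16), elements stored as 4-bit
    integers (bit i = coefficient of alpha^i), built with alpha^4 = alpha + 1."""
    table, value = [], 1
    for _ in range(15):
        table.append(value)
        value <<= 1                 # multiply by alpha
        if value & 0x10:            # a degree-4 term appeared:
            value ^= 0x13           # reduce by p(alpha) = alpha^4 + alpha + 1 = 0
    return table

exp_table = gf16_exp_table()
print(exp_table)
# alpha^4 = alpha + 1 (binary 0011), and the 15 nonzero elements are all distinct
assert exp_table[4] == 0b0011
assert len(set(exp_table)) == 15
```

With this table, each syndrome Si = r(α^i) reduces to table lookups and XORs, which is why the algebraic BCH decoder maps so well onto simple hardware.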

6.4.3 – Convolutional Codes

6.4.3.1 – Introduction

This section deals with convolutional coding. A convolutional code is described by three integers: n, k, and K. The integer K is a parameter known as the constraint length; it represents the number of k-bit stages in the encoding shift register. An important characteristic of convolutional codes, different from block codes, is that the encoder has memory: the output of the convolutional encoder is not only a function of the current input k-tuple, but also a function of the previous K − 1 input k-tuples.

6.4.3.2 – Encoder Structure

The general convolutional encoder, shown in "Figure (6.11)", consists of a K-stage shift register and n modulo-2 adders, where K is the constraint length. The constraint length represents the number of k-bit shifts over which a single information bit can influence the encoder output. At each unit of time, k bits are shifted into the first k stages of the register, all bits in the register are shifted k stages to the right, and the outputs of the n adders are sampled. Since there are n code bits for each input group of k message bits, the code rate is k/n message bits per code bit, where k < n. In the most commonly used binary convolutional encoders k = 1, which means that one bit is shifted in per unit of time. As the constraint length K increases, and with different code rates, the convolutional code becomes very complicated, so a simple convolutional code will be used to describe the code properties, as shown in "Figure (6.12)".


Figure (6.11)

Figure (6.12)

6.4.3.3 – Connection Representation

"Figure (6.12)" represents the (2, 1) convolutional encoder with constraint length K = 3. There are n = 2 modulo-2 adders, so the code rate k/n is 1/2. At each unit of time a bit is shifted into the leftmost stage and the bits in the register are shifted one position to the right. The output is then sampled from each modulo-2 adder (i.e., first the upper adder, then the lower adder) at each unit of time. One way to represent the encoder is to specify a set of n connection vectors, one for each modulo-2 adder.

Each vector has dimension K and describes the connection of the encoding shift register to that modulo-2 adder. A one in the ith position of the vector indicates that the corresponding stage in the shift register is connected to the modulo-2 adder, and a zero in a given position indicates that no connection exists between that stage and the modulo-2 adder. For the encoder in "Figure (6.12)", we can write the connection vector g1 for the upper connections and g2 for the lower connections as g1 = 111 and g2 = 101.
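A rate-1/2, K = 3 encoder with connection vectors g1 = 111 and g2 = 101 (the classic textbook example, assumed here for illustration) can be sketched as a shift register in a few lines of Python; K − 1 flush zeros are appended so the register returns to the all-zeros state:

```python
G1 = (1, 1, 1)   # upper modulo-2 adder taps
G2 = (1, 0, 1)   # lower modulo-2 adder taps

def conv_encode(bits, flush=True):
    reg = [0, 0, 0]                       # K = 3 stage shift register
    out = []
    for bit in list(bits) + ([0, 0] if flush else []):
        reg = [bit] + reg[:-1]            # shift the new bit in
        u1 = sum(b & g for b, g in zip(reg, G1)) % 2
        u2 = sum(b & g for b, g in zip(reg, G2)) % 2
        out += [u1, u2]                   # upper adder first, then lower
    return out

print(conv_encode([1, 0, 1]))   # [1,1, 1,0, 0,0, 1,0, 1,1]
```

The message 101 produces the branch words 11 10 00 10 11, which can be traced by hand against the connection vectors.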



6.4.3.4 – Convolutional Encoder Representation

To describe a convolutional code, we need to know the encoding function G(m), so that, given an input sequence m, we can easily compute the output sequence U.

Several methods are used for representing a convolutional encoder:

1. Polynomial representation.
2. State diagram representation.
3. Tree diagram representation.
4. Trellis diagram representation.

6.4.3.4.1 – Polynomial Representation

Sometimes the encoder connections are represented by generator polynomials. We can represent a convolutional encoder with a set of n generator polynomials, one for each modulo-2 adder, each of degree K − 1 or less, describing the connection of the encoding shift register to the modulo-2 adder in much the same way that a connection vector does.

The coefficient of each term in the (K − 1)-degree polynomial is either 1 or 0, depending on whether a connection exists or does not exist between the shift register and the modulo-2 adder. For the encoder in "Figure (6.12)", we can write the generator polynomials g1(X) for the upper connections and g2(X) for the lower connections as shown below in "Equation (6.27)":

g1(X) = 1 + X + X^2
g2(X) = 1 + X^2    "Equation (6.27)"

The lowest-order term in the polynomial corresponds to the input stage of the register, and the output sequence is found as shown below in "Equation (6.28)":

U(X) = m(X) g1(X) interlaced with m(X) g2(X)    "Equation (6.28)"

So this example shows that the convolutional encoder can be treated as a set of cyclic-code shift registers, because the encoder is represented with polynomials in the same way as is used for describing cyclic codes.
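The polynomial description gives the same output sequence as the shift-register picture: multiply m(X) by each generator polynomial over GF(2) and interlace the two product streams. A short Python sketch (assumed illustration, with g1(X) = 1 + X + X^2 and g2(X) = 1 + X^2 as above):

```python
def gf2_mul(a, b):
    """GF(2) polynomial product, coefficients listed from X^0 upward."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g1 = [1, 1, 1]        # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]        # g2(X) = 1 + X^2
m = [1, 0, 1]         # message 101, i.e. m(X) = 1 + X^2

u1 = gf2_mul(m, g1)   # [1, 1, 0, 1, 1]
u2 = gf2_mul(m, g2)   # [1, 0, 0, 0, 1]
U = [bit for pair in zip(u1, u2) for bit in pair]   # interlace the two streams
print(U)              # [1,1, 1,0, 0,0, 1,0, 1,1]
```

The interlaced sequence 11 10 00 10 11 is identical to what the shift-register encoder produces for the same message, confirming Equation (6.28).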

6.4.3.4.2 – State Representation

The convolutional encoder belongs to a class of devices known as finite-state machines, which means that it has a memory of past signals. The state consists of the smallest amount of information that, together with the current input, determines the output. For a convolutional encoder of rate 1/n, the state is represented by the


Figure (6.13)

contents of the rightmost K − 1 stages. Knowledge of the state and the next input is necessary and sufficient to determine the output. The state-diagram representation of the encoder shown in "Figure (6.12)" is given below in "Figure (6.13)".

In "Figure (6.13)", the states shown in the ellipses of the diagram represent the possible contents of the rightmost K − 1 stages of the register, and the paths between the states represent the output branch words resulting from such state transitions. The states of the register are as follows: a = 00, b = 10, c = 01, and d = 11. The diagram shown in "Figure (6.13)" illustrates all the state transitions that are possible for the encoder in "Figure (6.12)". There are only two transitions emanating from each state, corresponding to the two possible input bits. Next to each path between states is written the output branch word associated with the state transition. In the drawing, a solid line indicates an input of zero and a dashed line indicates an input of one.
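The state diagram can be captured as a small lookup table. In this hypothetical Python sketch (for the assumed g1 = 111, g2 = 101 encoder, with the states labeled a = 00, b = 10, c = 01, d = 11 as above), the key is (state, input bit) and the value is the output branch word together with the next state:

```python
# (state, input) -> (output branch word, next state); state = rightmost K-1 stages
TRANSITIONS = {
    ("00", 0): ("00", "00"), ("00", 1): ("11", "10"),
    ("10", 0): ("10", "01"), ("10", 1): ("01", "11"),
    ("01", 0): ("11", "00"), ("01", 1): ("00", "10"),
    ("11", 0): ("01", "01"), ("11", 1): ("10", "11"),
}

def encode(bits, state="00"):
    """Walk the state diagram, emitting one branch word per input bit."""
    out = []
    for bit in bits:
        word, state = TRANSITIONS[(state, bit)]
        out.append(word)
    return out

print(encode([1, 0, 1, 0, 0]))   # ['11', '10', '00', '10', '11']
```

Exactly two transitions leave each state, one per possible input bit, mirroring the two branches (solid and dashed) emanating from each ellipse in the diagram.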

6.4.3.4.3 – Tree Diagram Representation

The state diagram completely describes the encoder, but one cannot easily use it to track the encoder transitions as a function of time, since the diagram cannot represent time history. The tree diagram adds the dimension of time to the state diagram; the tree diagram for the convolutional encoder is shown in "Figure (6.14)". In the tree diagram, at each input time unit the encoding steps can be described by moving through the tree from left to right, each tree branch describing an output branch word. The rule for finding a code-word sequence is as follows: if the input bit is a zero, move upward and read the branch word from the branch; if the input bit is a one, move downward and read the branch word from the branch, assuming that the initial contents of the encoder are all zeros. The diagram shows that if the first input bit is a zero, the output branch word is 00, and if the first input bit is a one, the output branch word is 11. Similarly, if the first input bit is a one and the second input bit is a zero, the second output branch word is 10; if the first input bit is a one and the second input bit is a one, the second output branch word is 01, and so on. The tree diagram solves the problem of time history, but if the sequence is long the number of branches grows exponentially, so the tree diagram becomes difficult to use.


Figure (6.14)

“Figure (6.14)” shows the tree diagram of encoder (rate ½, K=3).

6.4.3.4.4 – Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c). This compact representation is very helpful for decoding convolutional codes, as discussed later. Observation of the tree diagram in "Figure (6.14)" shows that, for this example, the structure repeats itself at time t4, after the third branching (in general, the tree structure repeats after K branchings, where K is the constraint length). We label each node in the tree of "Figure (6.14)" to correspond to the four possible states in the shift register as follows: a = 00, b = 10, c = 01, and d = 11.

The first branching of the tree structure, at time t1, produces a pair of nodes labeled a and b. At each successive branching the number of nodes doubles. The second branching, at time t2, results in four nodes labeled a, b, c, and d. After the third branching, there are a total of eight nodes: two are labeled a, two are labeled b, two are labeled c, and two are labeled d. We can see that all branches emanating from two nodes of the same state generate identical branch-word sequences. In "Figure (6.15)" the trellis diagram uses the same convention introduced with the state diagram: a solid line denotes the output generated by an input bit of zero and a dashed line denotes the output generated by an input bit of one. The nodes of the trellis


Figure (6.15)


characterize the encoder states; the first-row nodes correspond to the state a = 00, and the second, third, and fourth rows correspond to the states b = 10, c = 01, and d = 11. At each unit of time the trellis requires 2^(K−1) nodes to represent the 2^(K−1) possible encoder states.

Each of the states can be entered from either of two preceding states. Also, each of the states can transition to one of two states; of the two outgoing branches, one corresponds to an input bit of zero (solid line) and the other corresponds to an input bit of one (dashed line). In "Figure (6.15)" the output branch words corresponding to the state transitions appear as labels on the trellis branches.

If all input message sequences are equally likely, a decoder that achieves the minimum probability of error is one that compares the conditional probabilities, also called the likelihood functions P(Z|U(m)), where Z is the received sequence and U(m) is one of the possible transmitted sequences, and chooses the maximum. The decoder chooses U(m′) if “Equation (6.29)” is satisfied.

In binary demodulation there are only two equally likely possible signals, s1(t) or s2(t), that might have been transmitted. Therefore, making the binary maximum likelihood decision, given a received signal, means deciding that s1(t) was transmitted if “Equation (6.30)” is satisfied.

We will assume that the noise is additive white Gaussian with zero mean and that the channel is memoryless, which means that the noise affects each code symbol independently of all the other symbols. For a convolutional code of rate 1/n, we can therefore express the likelihood as shown below in “Equation (6.31)”.


Equation (6.31):   P(Z|U(m)) = ∏i P(Zi|Ui(m)) = ∏i ∏j=1..n p(zji|uji(m))

Equation (6.32):   γU(m) = log P(Z|U(m)) = Σi log P(Zi|Ui(m)) = Σi Σj=1..n log p(zji|uji(m))

where Zi is the ith branch of the received sequence and Ui(m) is the ith branch of a particular code-word sequence U(m).

The decoder problem is mainly choosing a path through the trellis such that the likelihood P(Z|U(m)) is maximized.

Generally, it is computationally more convenient to use the logarithm of the likelihood function since this permits the summation instead of the multiplication of terms. We are able to use this transformation because the logarithm is a monotonically increasing function and thus will not alter the final result in our code-word selection.

We can modify the definition of the log-likelihood function to be as shown below in “Equation (6.32)”.

The decoder problem now consists of choosing a path through the tree or trellis such that γU(m) is maximized.

For the decoding of convolutional codes, either the tree or the trellis structure can be used. In the tree representation of the code, the fact that the paths remerge is ignored. Since, for a binary code, the number of possible sequences made up of L branch words is 2^L, using a tree diagram requires the "brute force" or exhaustive comparison of 2^L accumulated log-likelihood metrics, representing all the possible code-word sequences that could have been transmitted. Hence it is not practical to consider maximum likelihood decoding with a tree structure.

6.4.3.5 – Case Study: Low-Density Parity-Check Codes (LDPC)

6.4.3.5.1 – Introduction

Low-Density Parity-Check (LDPC) codes are a class of linear block codes. The name comes from the characteristic of their parity-check matrix, which contains only a few 1’s in comparison to the number of 0’s. Their main advantage is that they provide a performance which is very close to capacity for many different channels, together with decoding algorithms of linear time complexity.

Furthermore, they are suited for implementations that make heavy use of parallelism. They were first introduced by Gallager in his PhD thesis in 1960, but due to the computational effort of implementing the encoder and decoder for such codes, and the introduction of Reed-Solomon codes, they were mostly ignored until about ten years ago.

6.4.3.5.1 – Matrix Representation of LDPC Codes

Let’s look at an example of a low-density parity-check matrix first. The matrix defined in “Equation (6.33)” is a parity-check matrix of dimension m × n = 4 × 8 for an (8, 4) code. We can now define two numbers describing this matrix: wr, the number of 1’s in each row, and wc, the number of 1’s in each column. For a matrix to be called low-density, wr must be small compared to the number of columns n and wc small compared to the number of rows m. For this to hold, the parity-check matrix should usually be very large, so the example matrix can’t really be called low-density.


Equation (6.33):

        [ 0 1 0 1 1 0 0 1 ]
    H = [ 1 1 1 0 0 1 0 0 ]
        [ 0 0 1 0 0 1 1 1 ]
        [ 1 0 0 1 1 0 1 0 ]

Figure (6.16)

6.4.3.5.2 – Graphical Representation of LDPC Codes

Tanner graphs are bipartite graphs. That means that the nodes of the graph are separated into two distinct sets, and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes).

“Figure (6.16)” is an example of such a Tanner graph and represents the same code as the matrix in “Equation (6.33)”. The creation of such a graph is rather straightforward. It consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a code word). Check node fj is connected to variable node ci if the element hji of H is a 1.
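This connection rule can be sketched in a few lines of Python. The matrix below is the regular (8, 4) example from this section (reconstructed here to match the worked decoding example, so treat it as illustrative):

```python
# Building the Tanner-graph connections from a parity-check matrix:
# check node f_j is connected to variable node c_i whenever H[j][i] = 1.
# H is the (8,4) example matrix used in this section.

H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

# For every check node, list the variable nodes it is connected to.
edges = {f"f{j}": [f"c{i}" for i, h in enumerate(row) if h]
         for j, row in enumerate(H)}

print(edges["f0"])  # ['c1', 'c3', 'c4', 'c7']
```

Reading the matrix column-wise gives the other direction of the graph: for example, c0 is connected to f1 and f3, exactly as used in the decoding example below.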

6.4.3.5.3 – Regular and Irregular LDPC Codes

An LDPC code is called regular if wc is constant for every column and wr = wc · (n/m) is also constant for every row. The example matrix is regular with wc = 2 and wr = 4. It’s also possible to see the regularity of this code by looking at the graphical representation: there is the same number of incident edges for every v-node, and likewise for all the c-nodes. If H is low-density but the number of 1’s in each row or column isn’t constant, the code is called an irregular LDPC code.
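The regularity condition can be checked mechanically. A minimal sketch, using the same (8, 4) example matrix (reconstructed, so illustrative):

```python
# Checking the degree profile of a parity-check matrix: the code is
# regular if every column has the same weight wc and every row the
# same weight wr.

H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

col_weights = [sum(row[i] for row in H) for i in range(len(H[0]))]
row_weights = [sum(row) for row in H]

wc, wr = col_weights[0], row_weights[0]
is_regular = (all(w == wc for w in col_weights)
              and all(w == wr for w in row_weights))
print(wc, wr, is_regular)  # 2 4 True, matching wc = 2 and wr = wc*(n/m) = 4
```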

6.4.3.5.4 – Constructing LDPC Codes

Several different algorithms exist to construct suitable LDPC codes. Gallager himself introduced one. Furthermore, MacKay proposed one to semi-randomly generate sparse parity-check matrices. This is quite interesting, since it indicates that constructing well-performing LDPC codes is not a hard problem; in fact, completely randomly chosen codes are good with high probability. The problem that arises is that the encoding complexity of such codes is usually rather high.

6.4.3.5.5 – Hard Decoding of LDPC Codes

Hard decision means the decoder uses the hard output of the demodulator (1’s and 0’s). Consider the H-matrix above, and suppose that the code word at the transmitter is [1 0 0 1 0 1 0 1].


Figure (6.17)

Figure (6.18)

After it passes through a binary symmetric channel, which flips bit c1 to 1, the code word at the receiver is [1 1 0 1 0 1 0 1].

The decoding algorithm can be represented as follows:

1. All v-nodes ci send a message to their c-nodes fj containing the bit they believe to be the correct one for them. At this stage the only information a v-node ci has is its corresponding received bit yi. That means, for example, that c0 sends a message containing 1 to f1 and f3, node c1 sends messages containing 1 to f0 and f1, and so on, as shown below in “Figure (6.17)“.

2. Every check node fj calculates a response to every connected variable node. The response message contains the bit that fj believes to be the correct one for this v-node ci, assuming that the other v-nodes connected to fj are correct. In other words: in the example, every c-node fj is connected to 4 v-nodes, so a c-node fj looks at the messages received from three v-nodes and calculates the bit that the fourth v-node should have in order to fulfill the parity-check equation. This might also be the point at which the decoding algorithm terminates; this will be the case if all check equations are fulfilled. We will later see that the whole algorithm contains a loop, so another possibility to stop is a threshold on the number of loop iterations.

3. The v-nodes receive the messages from the check nodes and use this additional information to decide whether their originally received bit is correct. A simple way to do this is a majority vote. Coming back to our example, each v-node has three sources of information concerning its bit: the originally received bit and two suggestions from the check nodes. “Figure (6.18)” illustrates this step. Now the v-nodes can send another message with their (hard) decision for the correct value to the check nodes.

4. Go to step 2. In our example, the second execution of step 2 would terminate the decoding process, since c1 has voted for 0 in the last step. This corrects the transmission error, and all check equations are now satisfied.
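The four steps above can be sketched as a small Python routine. This is a minimal majority-vote decoder run on the (8, 4) example; the function name, loop cap, and the reconstructed H are illustrative choices:

```python
# Hard-decision (majority-vote) decoding of the (8,4) example LDPC code.

H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

def hard_decode(y, H, max_iters=10):
    y = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        # Step 2: each check node tells each neighbour the bit that
        # would satisfy its parity equation (XOR of the other bits).
        suggestions = [[] for _ in range(n)]
        for j in range(m):
            vars_j = [i for i in range(n) if H[j][i]]
            for i in vars_j:
                parity = 0
                for i2 in vars_j:
                    if i2 != i:
                        parity ^= y[i2]
                suggestions[i].append(parity)
        # Terminate if every parity-check equation is already satisfied.
        if all(sum(y[i] for i in range(n) if H[j][i]) % 2 == 0
               for j in range(m)):
            return y
        # Step 3: majority vote among the received bit and suggestions.
        for i in range(n):
            votes = [y[i]] + suggestions[i]
            y[i] = 1 if 2 * sum(votes) > len(votes) else 0
    return y

received = [1, 1, 0, 1, 0, 1, 0, 1]   # bit c1 flipped by the channel
print(hard_decode(received, H))       # [1, 0, 0, 1, 0, 1, 0, 1]
```

Exactly as in the worked example, the single flipped bit c1 loses the vote 2-to-1 in the first pass, and the second pass finds all checks satisfied.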


Equation (6.34):   rji(0) = 1/2 + 1/2 ∏ over i′∈Vj\i of (1 − 2qi′j(1)),   rji(1) = 1 − rji(0)

Equation (6.35):   qij(0) = Kij (1 − Pi) ∏ over j′∈Ci\j of rj′i(0),   qij(1) = Kij Pi ∏ over j′∈Ci\j of rj′i(1)

Equation (6.36):   Qi(0) = Ki (1 − Pi) ∏ over j∈Ci of rji(0),   Qi(1) = Ki Pi ∏ over j∈Ci of rji(1)

6.4.3.5.6 – Soft-Decision Decoding

Soft decision means the decoder uses the soft output of the demodulator (either probabilities of 1’s and 0’s or log-likelihood ratios); in this case we will use probabilities. The main advantage of soft decoding is that it gives better decoding performance, and thus it is the preferred method. Notation used in this algorithm:

Pi = Pr(ci = 1|yi)

qij is a message sent by the variable node ci to the check node fj. Every message always contains the pair qij(0) and qij(1), which stands for the amount of belief that yi is a ”0” or a ”1”.

rji is a message sent by the check node fj to the variable node ci. Again, there is an rji(0) and an rji(1) that indicate the (current) amount of belief that yi is a ”0” or a ”1”.

The decoding algorithm can be represented as follows:

1. All variable nodes send their qij messages. Since no other information is available at this step, qij(1) = Pi and qij(0) = 1 − Pi.

2. The check nodes calculate their response messages rji using “Equation (6.34)”. They calculate the probability that there is an even number of 1’s among the variable nodes except ci (this is exactly what Vj\i means); this probability is equal to the probability rji(0) that ci is a 0.

3. The variable nodes update their response messages to the check nodes according to “Equation (6.35)”. The constants Kij are chosen in a way that ensures qij(0) + qij(1) = 1; the product runs over all check nodes connected to ci except fj (this is what Ci\j means).

4. At this point the v-nodes also update their current estimate of their variable ci. This is done by calculating the probabilities for 0 and 1 using “Equation (6.36)” and voting for the bigger one. These equations are quite similar to the ones used to compute qij(b), but now the information from every connected c-node is used. We then have two cases: the estimate of ci equals one if Qi(1) > Qi(0), and zero in any other case.

If the current estimated code word fulfills the parity-check equations, the algorithm terminates. Otherwise, termination is ensured through a maximum number of iterations.


Figure (6.19)

5. Go to step 2 for a new iteration.
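The soft-decision loop above can be sketched in the probability domain. The channel values here are fabricated for illustration (a binary symmetric channel with assumed crossover probability 0.2, the same received word as in the hard-decision example, and the reconstructed example H):

```python
# Probability-domain sum-product decoding sketch for the (8,4) example.

H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
m, n = len(H), len(H[0])
received = [1, 1, 0, 1, 0, 1, 0, 1]            # bit c1 flipped by the channel
p = 0.2                                        # assumed crossover probability
P = [1 - p if bit else p for bit in received]  # Pi = Pr(ci = 1 | yi)

# Step 1: initialize variable-to-check messages qij(1) = Pi.
q = [[P[i] for i in range(n)] for _ in range(m)]   # q[j][i] = qij(1)

est = list(received)
for _ in range(20):
    # Step 2 (Equation (6.34)): rji(0) = 1/2 + 1/2*prod(1 - 2*qi'j(1)).
    r0 = [[0.0] * n for _ in range(m)]
    for j in range(m):
        vars_j = [i for i in range(n) if H[j][i]]
        for i in vars_j:
            prod = 1.0
            for i2 in vars_j:
                if i2 != i:
                    prod *= 1.0 - 2.0 * q[j][i2]
            r0[j][i] = 0.5 + 0.5 * prod
    # Steps 3 and 4 (Equations (6.35), (6.36)): update qij and estimates.
    for i in range(n):
        checks_i = [j for j in range(m) if H[j][i]]
        Q0, Q1 = 1.0 - P[i], P[i]
        for j in checks_i:
            Q0 *= r0[j][i]
            Q1 *= 1.0 - r0[j][i]
        est[i] = 1 if Q1 > Q0 else 0
        for j in checks_i:
            q0, q1 = 1.0 - P[i], P[i]
            for j2 in checks_i:
                if j2 != j:        # extrinsic: exclude the target check
                    q0 *= r0[j2][i]
                    q1 *= 1.0 - r0[j2][i]
            q[j][i] = q1 / (q0 + q1)   # Kij normalization
    # Step 5: loop until every parity-check equation is satisfied.
    if all(sum(est[i] for i in range(n) if H[j][i]) % 2 == 0
           for j in range(m)):
        break

print(est)   # the flipped bit is corrected: [1, 0, 0, 1, 0, 1, 0, 1]
```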

6.4.3.5.7 – LDPC Performance

“Figure (6.19)” shows LDPC performance curves at two different code rates.


Figure (7.1)

Chapter 7 – Orthogonal Frequency Division Multiplexing

7.1 – Introduction
Due to its high data rate and its robustness against frequency-selective fading, orthogonal frequency division multiplexing (OFDM) is a promising technique in current broadband wireless communication systems.

The idea of OFDM technology is to split a high-rate data stream into a number of lower-rate streams that are transmitted simultaneously over a number of subcarriers. Because the symbol duration increases for the lower-rate parallel subcarriers, the relative amount of time dispersion caused by multipath delay spread is decreased.

In a classical parallel data system, the total signal frequency band is divided into “N” non-overlapping frequency sub-channels; each sub-channel is modulated with a separate symbol, and then the “N” sub-channels are frequency-multiplexed. It seems good to avoid spectral overlap of channels to eliminate inter-channel interference; however, this leads to inefficient use of the available spectrum. To cope with this inefficiency, the idea proposed from the mid-1960s onward was to use parallel data and FDM with overlapping sub-channels, each carrying a signaling rate b and spaced apart in frequency, to avoid the use of high-speed equalization, to combat impulsive noise and multipath distortion, and to fully use the available bandwidth.

The employment of the discrete Fourier transform to replace the banks of sinusoidal generators, as shown in “Figure (7.1)”, for both modulation and demodulation significantly reduces the implementation complexity of OFDM modems.

Inter-symbol interference is eliminated almost completely by introducing a guard interval with zero padding in every OFDM symbol. In the guard time, the OFDM symbol is cyclically extended to avoid inter-carrier interference (the cyclic prefix).


7.2 – OFDM Historical Overview
In 1971, Weinstein and Ebert applied the Discrete Fourier Transform (DFT) to parallel data transmission systems as part of the modulation and demodulation process.

In the 1980s, OFDM was studied for high-speed modems, digital mobile communications, and high-density recording; pilot tones were used to stabilize carrier and frequency control, and trellis coding was implemented.

In 1980, Hirosaki suggested an equalization algorithm in order to suppress both inter-symbol and inter-carrier interference caused by the channel impulse response or timing and frequency errors.

In the 1990s, OFDM was exploited for wideband data communications over mobile radio FM channels and fixed-wire networks:

• High-bit-rate digital subscriber line (HDSL)
• Asymmetric digital subscriber line (ADSL)
• Very-high-speed digital subscriber line (VDSL)
• Digital audio broadcasting (DAB)
• Digital video broadcasting (DVB)
• High-definition television (HDTV) terrestrial broadcasting

COFDM underlies the European digital terrestrial television broadcasting system, and several OFDM-based wireless LAN standards exist:

• HIPERLAN/2 (European)
• IEEE 802.11a (U.S.A.)
• IEEE 802.11g (U.S.A.)

Today, OFDM has been adopted in the new European DAB and DVB standards; it is widely used in all WiMAX implementations, is a candidate for 4G mobile communications, and appears in the IEEE 802.16 broadband wireless access and IEEE 802.20 mobile broadband wireless access standards, among other advanced communication systems.

7.3 – Important Definitions
Inter-symbol interference (ISI) and inter-carrier interference (ICI) are very important phenomena that a signal can face, so we will focus here on their detailed meaning.

7.3.1 – Inter-Symbol Interference (ISI)

In telecommunications, inter-symbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols, as shown below in “Figure (7.2)”. This is an unwanted phenomenon, as the previous symbols have an effect similar to noise, making the communication less reliable. ISI is usually caused by multipath propagation or by the inherent non-linear frequency response of a channel, causing successive symbols to "blur" together. The presence of ISI introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight inter-symbol interference include adaptive equalization and error-correcting codes.

From the receiver’s point of view, the channel introduces time dispersion, in which the duration of the received symbol is stretched; extending the symbol duration causes the current received symbol to overlap previously received symbols, resulting in inter-symbol interference (ISI).


Figure (7.2)

Figure (7.3)

7.3.2 – Inter-Carrier Interference (ICI)

ICI is interference caused by data symbols on adjacent subcarriers. It occurs when the multipath channel varies over one OFDM symbol time, as shown below in “Figure (7.3)”: the Doppler shifts on each multipath component cause a frequency offset on the subcarriers, resulting in the loss of orthogonality among them. The same situation can be viewed from the time-domain perspective, in which the integer number of cycles for each subcarrier within the FFT interval of the current symbol is no longer maintained due to the phase transition introduced by the previous symbol. Finally, any offset between the subcarrier frequencies of the transmitter and receiver also introduces ICI into an OFDM symbol.


Figure (7.4)

Figure (7.5)

7.4 – OFDM as a Multicarrier Transmission Technique
OFDM is a special case of multicarrier transmission, where a single data stream is transmitted over a number of lower-rate subcarriers. Multi-carrier methods belong to the most complicated transmission methods of all and are in no way inferior to the code division multiple access (CDMA) methods. But why this complexity? The reason is simple: the transmission medium is an extremely difficult medium to deal with.

The terrestrial transmission medium involves:

• Terrestrial transmission paths,
• Difficult line-associated transmission conditions.

The terrestrial transmission paths, in particular, exhibit the following characteristic features:

• Multipath reception via various echo paths caused by reflections from buildings, mountains, trees, and vehicles.
• Additive white Gaussian noise (AWGN).
• Narrow-band or wide-band interference caused by internal combustion engines, streetcars, or other radio sources.
• Doppler effect, i.e. frequency shift in mobile reception.

Multipath reception leads to location- and frequency-selective fading phenomena, as shown in “Figure (7.4)”.

An effect known as the “red-light effect” occurs in car radios: the car stops at a red stop light and radio reception ceases; if one were to select another station or move the car slightly forward, reception would be restored. If information is transmitted by only one discrete carrier at precisely one particular frequency, echoes will cause cancellations of the received signal at particular locations at exactly this frequency. This effect is a function of the frequency, as shown in “Figure (7.5)”, of the intensity of the echo, and of the echo delay.


Figure (7.6)

If high data rates of digital signals are transmitted by vector-modulated (I/Q-modulated) carriers, they will exhibit a bandwidth which corresponds to the symbol rate.

The available bandwidth is usually specified; the symbol rate is obtained from the type of modulation and the data rate. However, single-carrier methods have a relatively high symbol rate, often in a range from more than 1 MS/s up to 30 MS/s. This leads to very short symbol periods of 1 μs and shorter (the inverse of the symbol rate). However, echo delays can easily be in a range of up to 50 μs or more in terrestrial transmission channels. Such echoes would lead to inter-symbol interference between adjacent symbols, or even far-distant symbols, and render transmission more or less impossible. An obvious trick is to make the symbol period as long as possible in order to minimize inter-symbol interference, as shown in “Figure (7.6)”; in addition, pauses can be inserted between the symbols, so-called guard intervals.

However, there is still the problem of the location- and frequency-selective fading phenomena. If the information is not transmitted via a single carrier but is distributed over many subcarriers, up to thousands of them, with a corresponding overall error protection built in and the available channel bandwidth remaining constant, individual carriers or carrier bands will be affected by the fading, but not all of them.

At the receiving end, sufficient error-free information can then be recovered from the relatively undisturbed carriers to reconstruct an error-free output data stream by means of the error protection measures taken. If many thousands of subcarriers are used instead of one carrier, the symbol rate is reduced by the factor of the number of subcarriers, and the symbols are correspondingly lengthened several thousand times, up to a millisecond. The fading problem is solved and, at the same time, the problem of inter-symbol interference is also solved, due to the longer symbols and the appropriate pauses between them.

A multi-carrier method is born, called Coded Orthogonal Frequency Division Multiplex (COFDM). It is now only necessary to ensure that the many adjacent carriers do not interfere with one another, i.e. that they are orthogonal to one another.


Figure (7.7)

7.5 – OFDM Concept
An OFDM signal consists of a number of closely spaced modulated carriers. When modulation of any form (voice, data, etc.) is applied to a carrier, sidebands spread out on either side. A receiver must be able to receive the whole signal to successfully demodulate the data. As a result, when signals are transmitted close to one another, they must be spaced so that the receiver can separate them using a filter, and there must be a guard band between them. This is not the case with OFDM: although the sidebands from each carrier overlap, they can still be received without the interference that might be expected, because they are orthogonal to one another. This is achieved by making the carrier spacing equal to the reciprocal of the symbol period.

To see how OFDM works, it is necessary to look at the receiver. It acts as a bank of demodulators, translating each carrier down to DC. The resulting signal is integrated over the symbol period to regenerate the data from that carrier. The same demodulator also sees the other carriers; but since the carrier spacing is equal to the reciprocal of the symbol period, they have a whole number of cycles in the symbol period, and their contribution sums to zero. In other words, there is no interference contribution.

7.5.1 – Illustrative Example

One way to see OFDM usage is the analogy of making a shipment: we have two options, hire one big truck or a bunch of smaller ones, as shown in “Figure (7.8)”. Both methods carry the exact same amount of data, but in case of an accident, only 1/4 of the data on the OFDM “trucking convoy” will suffer. These four smaller trucks, when seen as signals, are called the sub-carriers in an OFDM system, and they must be orthogonal for this idea to work; the independent sub-carrier channels can then be multiplexed by frequency division multiplexing (FDM).

7.5.2 – Analysis

To completely understand the concept of OFDM, we analyze it in both the time and frequency domains.

Figure (7.8)


Figure (7.9)

Figure (7.10)

7.5.2.1 – Time Domain Analysis

Assume a channel that has the impulse response “h(t)”. If we send a pulse “S(t)” over this channel, the pulse shape is convolved with the channel impulse response, as shown in “Figure (7.9)”.

Note that the pulse becomes dispersed, or extended in time, interfering with surrounding pulses and causing inter-symbol interference (ISI). Now compare the two cases T >> τ and T < τ, where “T” is the pulse duration and “τ” is the channel delay spread:

T < τ: the pulse is completely distorted; ISI is significant in this case.

T >> τ: the pulse is extended, but the extension is much smaller than “T”; the output behaves like the transmitted rectangular pulse.

7.5.2.2 – Frequency Domain Analysis

A wideband signal is completely distorted, while a narrowband signal essentially sees a flat channel, as shown in “Figure (7.10)”.

A high-data-rate transmitted signal has a consequently large bandwidth; this means that it is subjected to frequency-selective (frequency-dependent) fading, which can distort the signal significantly. One solution is to divide the bandwidth available for transmission into many narrowband sub-channels.


Figure (7.11)

Each low-data-rate sub-channel’s frequency components encounter an almost flat channel (the band over which the channel is almost constant is called the coherence bandwidth of the channel). Relative to the narrow sub-channel, the channel is basically a frequency-independent complex number, i.e. an amplitude and a phase shift.

7.5.2.3 – Conclusion from both Domains Analysis

Short pulses suffer severely from the channel, but we need these short pulses in order to send a greater number of them in a given time period, i.e. to increase the data or bit transmission rate. One solution is to use short pulses (high data rate) and an equalizer at the receiver; the equalizer is a filter that compensates for the distortion induced by the channel characteristics.

For very high data rates a sophisticated equalizer is needed and may not even be feasible. Is there another solution? Yes: stick to long pulses. But how can we then increase the data rate? We can use many frequency channels (called sub-channels), and hence the name FDM (Frequency Division Multiplexing). Over each of these sub-channels the data rate is low, but taken together, and since they operate in parallel, a very high data rate can be achieved while circumventing the dispersive influence of the channel.

7.6 – Orthogonality of OFDM
In OFDM, the spectra of the subcarriers overlap but remain orthogonal to each other, as shown in “Figure (7.11)”; this means that at the maximum of each subcarrier spectrum, the spectra of all other subcarriers are zero.

The receiver samples the data symbols on the individual subcarriers at these maximum points and demodulates them free from any interference from the other subcarriers, hence no ICI. The orthogonality of the subcarriers can be viewed in either the time domain or the frequency domain:

From the time domain perspective, each subcarrier is a sinusoid with an integer number of cycles within one FFT interval.

From the frequency domain perspective, this corresponds to each subcarrier having the maximum value at its own center frequency and zero at the center frequency of each of the other subcarriers.


Equation (7.1):   ⟨xq(t), xk(t)⟩ = ∫ from a to b of xq(t)·xk*(t) dt = K if q = k, and 0 if q ≠ k

Equation (7.2):   (1/TU) ∫ from 0 to TU of ( Σq aq e^(j2πqt/TU) ) · e^(−j2πkt/TU) dt = ak

Figure (7.12)

Two functions, xq(t) and xk(t), are orthogonal over an interval [a, b] if “Equation (7.1)” holds.

The output of the integrator for subcarrier k can be expressed as shown in “Equation (7.2)”: only the term with q = k survives the integration, yielding the data symbol ak.

This integral satisfies the orthogonality definition; to implement it, sine wave carriers are used. Consequently, the harmonic exponential functions (sine wave carriers) with frequency separation “Δf = 1/TU” are orthogonal, as shown below in “Figure (7.12)”.
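The orthogonality integral can be checked numerically. A minimal sketch, where the symbol period TU and the sample count are illustrative assumptions:

```python
# Numerical check of subcarrier orthogonality: complex exponentials
# spaced by delta_f = 1/TU integrate to zero over one symbol period
# unless the subcarrier indices are equal.
import numpy as np

TU = 1e-3                      # useful symbol period (assumed value)
N = 4096                       # samples used for the numerical integral
t = np.arange(N) / N * TU

def correlate(q, k):
    # (1/TU) * integral over [0, TU] of exp(j2pi*q*t/TU) * exp(-j2pi*k*t/TU) dt
    x = np.exp(2j * np.pi * q * t / TU) * np.exp(-2j * np.pi * k * t / TU)
    return x.mean()            # Riemann sum divided by TU

print(abs(correlate(3, 3)))    # 1.0  (same subcarrier)
print(abs(correlate(3, 5)))    # ~0   (orthogonal subcarriers)
```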

7.7 – Comparing FDM to OFDM

7.7.1 – Frequency Division Multiplexing (FDM)

In a frequency division multiplexing system, signals from multiple transmitters are transmitted simultaneously (in the same time slot) over multiple frequencies. Each frequency range (sub-carrier) is modulated separately by a different data stream, and a spacing (guard band) is placed between sub-carriers to avoid signal overlap, as shown in “Figure (7.13)”.


Figure (7.13)

Figure (7.14)

7.7.2 – Orthogonal Frequency Division Multiplexing (OFDM)

OFDM is a multiplexing technique that divides the bandwidth into multiple frequency sub-carriers. OFDM also uses multiple sub-carriers, but the sub-carriers are closely spaced to each other without causing interference, removing the guard bands between adjacent sub-carriers. Here all the sub-carriers are orthogonal to each other: two periodic signals are orthogonal when the integral of their product over one period is equal to zero. The use of OFDM results in a bandwidth saving, as seen in “Figure (7.14)”.

Finally, “Figure (7.15)” shows the difference between the spectra of FDM and OFDM.

Figure (7.15)


Figure (7.16)

Equation (7.3)

7.8 – OFDM Implementation
Keep a time slot of length TS fixed and consider modulation in the frequency direction for each time slot. Start with a base transmit pulse “g(t)”, then obtain frequency-shifted replicas of this pulse; that is, if “g(t) = g0(t)” is located at the frequency “f = 0”, then “gk(t)” is located at “f = fk”. In contrast to the first scheme, for each time instant l, the set of K (or K + 1) modulation symbols is transmitted by using the different pulse shapes “gk(t)”; the parallel data stream excites a filter bank of K (or K + 1) different band-pass filters. The filter outputs are then summed before transmission. This setup is depicted in “Figure (7.16)”.

The transmitted signal in the complex baseband can be represented as shown below in “Equation (7.3)”:

s(t) = Σl Σk skl · gk(t − l·TS)

It is obvious that we come back to the first setup if we replace the modulation symbols “skl” by “skl·e^(−j2πfk·l·TS)” in the equation; such a time-frequency-dependent phase rotation does not change the performance, so both methods can be regarded as equivalent.

However, the second – the filter bank – point of view is closer to implementation, especially for the case of OFDM, where the filter bank is just an FFT, as it will be shown later.

Frequency Division Multiplexing versus Orthogonal Frequency Division Multiplexing:

• FDM: bandwidth dedicated to several sources. OFDM: all sub-channels are dedicated to a single data source.
• FDM: no relationship between the carriers. OFDM: sum of a number of orthogonal carriers.
• FDM: there is a guard band between carriers. OFDM: no guard band between carriers.
• FDM: low spectral efficiency. OFDM: better spectral efficiency.
• FDM: more subject to ISI and external interference from other RF sources. OFDM: overcomes ISI and delay spread.


Figure (7.17)

Figure (7.18)

7.8.1 – Implementation using “FFT/IFFT”

In order to overcome the daunting requirement for L RF radios in both the transmitter and the receiver, OFDM uses an efficient computational technique, the discrete Fourier transform (DFT), as shown in “Figure (7.17)”.

The DFT lends itself to a highly efficient implementation commonly known as the Fast Fourier Transform (FFT); the FFT and its inverse operation (IFFT) can create a multitude of orthogonal subcarriers using a single radio.

7.9 – OFDM Transmitter
The OFDM transmitter shown below in “Figure (7.18)” will be briefly discussed in the next few words. Initially, the bit sequence is subjected to channel encoding to reduce the probability of error at the receiver due to channel effects. These bits are mapped to symbols of QPSK, QAM, or any other modulation scheme; the symbol sequence is converted to parallel format; IFFT (OFDM modulation) is applied to convert the block of frequency-domain data to a block of time-domain data that modulates the carrier; and the sequence is once again converted to serial format.

Now the OFDM symbol has been generated. After that, a guard time is provided between the OFDM symbols and is filled with the cyclic extension of the OFDM symbol; windowing is applied to the OFDM symbols to make the fall-off rate of the spectrum steeper. The resulting sequence is converted to an analog signal using a DAC and passed on to the RF modulation stage. Finally, the resulting RF-modulated signal is transmitted to the receiver using the transmit antennas.
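The baseband part of this chain can be sketched in a few lines. This is a minimal illustration: the FFT size, cyclic-prefix length, and random bits are assumptions for the example, not DVB-T2 parameters:

```python
# Minimal OFDM modulation sketch: map bits to QPSK, apply the IFFT,
# and fill the guard time with the cyclic extension of the symbol.
import numpy as np

N_FFT, N_CP = 64, 16
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2 * N_FFT)          # stand-in for the coded bits

# QPSK mapping: two bits per symbol, constellation (+/-1 +/- 1j)/sqrt(2)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# IFFT = OFDM modulation (frequency-domain block -> time-domain block)
time_block = np.fft.ifft(symbols)

# Guard time filled with the cyclic extension of the OFDM symbol
ofdm_symbol = np.concatenate([time_block[-N_CP:], time_block])

print(len(ofdm_symbol))                               # 80 samples per symbol
# The receiver-side FFT recovers the QPSK symbols exactly:
print(np.allclose(np.fft.fft(time_block), symbols))   # True
```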


Figure (7.19)

7.10 – OFDM Receiver

The OFDM receiver shown below in “Figure (7.19)” reverses the transmitter operations:

1. RF demodulation is performed, then the signal is digitized using an ADC.
2. Timing and frequency synchronization are performed.
3. The guard time is removed from each OFDM symbol.
4. The sequence is converted to parallel format and the FFT (OFDM demodulation) is applied to get back to the frequency domain.
5. The output is serialized and symbol de-mapping recovers the coded bit sequence.
6. Channel decoding is then performed to obtain the user bit sequence.
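A minimal loopback sketch ties the two chains together: the transmitter steps are re-created locally, the channel is ideal, and the receiver steps above recover the original bits. The QPSK mapping and block sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP = 64, 16                           # illustrative sizes

# Transmit side: QPSK mapping -> IFFT -> add cyclic prefix.
bits = rng.integers(0, 2, 2 * N)
tx_syms = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
tx = np.fft.ifft(tx_syms)
tx = np.concatenate([tx[-CP:], tx])

rx = tx                                  # ideal channel for illustration

# Receive side: remove guard time, FFT (OFDM demodulation), de-map.
rx_block = rx[CP:]                       # discard the cyclic-prefix samples
rx_syms = np.fft.fft(rx_block)           # back to the frequency domain
rx_bits = np.empty(2 * N, dtype=int)
rx_bits[0::2] = (rx_syms.real < 0).astype(int)   # invert the QPSK mapping
rx_bits[1::2] = (rx_syms.imag < 0).astype(int)

assert np.array_equal(rx_bits, bits)     # the chain is lossless over an ideal channel
```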

7.11 – Cyclic Prefix

The key to making OFDM realizable in practice is the use of the FFT algorithm, due to its low complexity.

In telecommunications, the term cyclic prefix refers to the prefixing of a symbol with a repetition of the end. Although the receiver is typically configured to discard the cyclic prefix samples, the cyclic prefix serves two purposes.

As a guard interval, it eliminates the inter-symbol interference from the previous symbol, while as a repetition of the end of the symbol, it allows the linear convolution of a frequency-selective multipath channel to be modeled as a circular convolution, which in turn may be transformed to the frequency domain using a discrete Fourier transform.

In order for the cyclic prefix to be effective (i.e. to serve its aforementioned objectives), the length of the cyclic

prefix must be at least equal to the length of the multipath channel. Although the concept of cyclic prefix has been traditionally associated with OFDM systems, the cyclic prefix is now also used in single carrier systems to improve the robustness to multipath.

In order for the IFFT/FFT to create an ISI-free channel, the channel must appear to provide a circular convolution. This is achieved by adding a cyclic prefix to the transmitted signal: copy the last “n” values of the symbol sequence and place them in the guard region, as shown in “Figure (7.20)”.
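The circular-convolution claim is easy to verify numerically: with a cyclic prefix in place, the channel output over the data block equals the circular convolution of the block with the channel. The block size N and channel length L below are toy values.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 16, 4                                    # block size and channel length (toy values)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # one time-domain block
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # multipath channel taps

# Prefix the block with its last L-1 samples (cyclic prefix of length n = L-1).
tx = np.concatenate([x[-(L - 1):], x])

# Linear convolution with the channel; keep the N samples after the prefix,
# exactly what the receiver retains after discarding the cyclic prefix.
y = np.convolve(tx, h)[L - 1 : L - 1 + N]

# The same N samples obtained via CIRCULAR convolution of x and h (zero-padded):
h_pad = np.concatenate([h, np.zeros(N - L)])
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_pad))
assert np.allclose(y, y_circ)                   # CP turned linear conv into circular conv

# Hence, in the frequency domain, each subcarrier sees a flat (scalar) channel:
assert np.allclose(np.fft.fft(y), np.fft.fft(h_pad) * np.fft.fft(x))
```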


Figure (7.20)

Equations (7.4)

In addition to preventing interference between successive symbols, the cyclic prefix realizes the purpose of OFDM by making each sub-channel see a flat channel response. One must take into consideration that the cyclic prefix must be longer than the delay spread to overcome the inter-symbol interference.

7.11.1 – Cyclic Prefix Representation in OFDM System

Cyclic prefixes are used in OFDM in order to combat multipath by making channel estimation easy. As an example, consider an OFDM system with N subcarriers. The OFDM symbol is constructed by taking the inverse discrete Fourier transform (IDFT) of the message symbols, followed by cyclic prefixing; this simple system can be represented by the set of equations shown below in “Equations (7.4)”.

The message symbol can be written as:

    X = [X[0], X[1], ..., X[N-1]]

Let the symbol obtained by the IDFT be denoted by:

    x = [x[0], x[1], ..., x[N-1]]

Prefixing it with a cyclic prefix of length L-1, the OFDM symbol obtained is:

    x_cp = [x[N-L+1], ..., x[N-1], x[0], x[1], ..., x[N-1]]

Assume that the channel is represented by its impulse response:

    h = [h[0], h[1], ..., h[L-1]]

Then, after convolution with the channel, the received samples following the prefix are:

    y[n] = sum over l of h[l] · x_cp[n - l]

which is the circular convolution y[n] = x[n] (*) h[n]. So, taking the discrete Fourier transform, and knowing that H[k] is the DFT of h[n], we get:

    Y[k] = X[k] · H[k]


From the previous set of equations, Y[k] is the discrete Fourier transform of y[n]. Thus, a multipath channel is converted into scalar parallel sub-channels in the frequency domain, thereby simplifying the receiver design considerably.

The task of channel estimation is also simplified: we just need to estimate the scalar coefficient H[k] for each sub-channel. Once the values of {H[k]} are estimated, for the duration in which the channel does not vary significantly, merely multiplying the received demodulated symbols by the inverse of H[k] yields the estimates of {X[k]} and hence the actual transmitted symbols.
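The resulting one-tap equalizer can be sketched directly: with the channel known, dividing each demodulated subcarrier by H[k] recovers the transmitted symbols. This is a noiseless toy example; BPSK symbols and the sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 16, 4                                               # toy sizes
X = 1 - 2 * rng.integers(0, 2, N).astype(complex)          # BPSK symbols X[k]
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # channel of length L

x = np.fft.ifft(X)
tx = np.concatenate([x[-(L - 1):], x])                     # cyclic prefix of L-1 samples
rx = np.convolve(tx, h)[L - 1 : L - 1 + N]                 # received block, CP removed

H = np.fft.fft(h, N)              # per-subcarrier scalar coefficients H[k]
Y = np.fft.fft(rx)                # demodulated subcarriers: Y[k] = H[k] * X[k]
X_hat = Y / H                     # one-tap equalizer: multiply by the inverse of H[k]
assert np.allclose(X_hat, X)      # transmitted symbols recovered exactly (noiseless)
```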

7.11.2 – Advantages of using Cyclic Prefix

1. It changes the effect of the channel into a circular convolution, which can be equalized simply by division at the receiver side.

2. In addition to preventing interference between successive symbols, it realizes the purpose of OFDM by making each sub-channel see a flat-fading channel response, which is much more reliable to deal with than a selective-fading channel.

3. The cyclic prefix is elegant and simple and does not require any complex processing.

7.11.3 – Disadvantages of using Cyclic Prefix

The cyclic prefix is not entirely free; it comes with both a bandwidth and power penalty as shown below:

1. Since “n” redundant samples are sent per symbol, the required bandwidth for OFDM increases by the factor (N + n)/N.

2. Similarly, the additional “n” samples must be counted against the transmit-power budget, so the cyclic prefix carries a power penalty of 10·log10((N + n)/N) dB in addition to the bandwidth penalty.

3. Peak to Average Power Ratio (PAPR) problem appears and we will deal with it later.

In summary, the use of the cyclic prefix entails both a data-rate loss and a power loss. The wasted power has increased importance in an interference-limited wireless system, since it causes interference to neighboring users.

It can be noted that for N >> n, the inefficiency owing to the cyclic prefix can be made arbitrarily small by increasing the number of subcarriers. The cyclic prefix also provides a guard interval for all multipaths following the first-arriving signal; as a result, the timing requirement of the observation window is quite relaxed (up to a τmax ambiguity).

7.12 – Coded Orthogonal Frequency Division Multiplexing (COFDM)

Orthogonal frequency division multiplex is a multi-carrier method with up to thousands of subcarriers, none of which interfere with each other because they are orthogonal to one another. The information to be transmitted is distributed, interleaved, to the many subcarriers, after first adding the appropriate error protection, resulting in coded orthogonal frequency division multiplex (COFDM). Each of these subcarriers is vector modulated, i.e. in DVB-T2 QPSK, 16QAM and often up to 256QAM modulated.

Figure (7.21)


Figure (7.22)

COFDM is a composite of orthogonal (at right angles to one another or, in other words, not interfering with one another) and frequency division multiplex (division of the information into many subcarriers in the frequency domain).

In a transmission channel, information can be transmitted continuously or in time slots. It is then possible to transport different messages in the various time slots, e.g. data streams from different sources. This timeslot method has long been applied, mainly in telephony for the transmission of different calls on one line, one satellite channel or also one mobile radio channel.

The typical impulse-type interference caused by a mobile telephone conforming to the GSM standard with

irradiation into stereo systems and TV sets has its origin in this time slot method, also called time division multiple access (TDMA) in this case. However, it is also possible to subdivide a transmission channel of a certain bandwidth in the frequency domain, resulting in sub-channels into each one of which a subcarrier can be placed. Each subcarrier is modulated independently of the others and carries its own information independently of the other subcarriers. Each of these subcarriers can be vector modulated, i.e. QPSK, 16QAM and often up to 64QAM modulated.

All subcarriers are spaced apart by a constant interval Δf. A communication channel can contain up to thousands

of subcarriers, each of which could carry the information from a source which would have nothing at all to do with any of the others. However, it is also possible first to provide a common data stream with error protection and then to divide it into the many subcarriers. This is then frequency division multiplex (FDM). Thus, in FDM, a common data stream is split up and transmitted in one channel, not via a single carrier but via many, up to thousands of subcarriers, digitally vector modulated. Since these subcarriers are very close to one another, e.g. with a spacing of a few kHz, great care must be taken to see that these subcarriers do not interfere with one another. The carriers must be orthogonal to each other. The term orthogonal normally stands for ‘at 90 degrees to each other’ but in communications engineering quite generally means signals which do not interfere with one another due to certain characteristics.

When will adjacent carriers of an FDM system then influence each other to a greater or lesser extent?

Surprisingly, one has to start with a rectangular pulse and its Fourier Transform “Figure (7.22)”. A single rectangular pulse of duration Δt provides a sin(x)/x-shaped spectrum in the frequency domain, with nulls spaced apart by a constant Δf = 1/Δt in the spectrum. A single rectangular pulse exhibits a continuous spectrum.


Figure (7.23)

Varying the period Δt of the rectangular pulse varies the spacing Δf of the nulls in the spectrum.

If Δt is allowed to tend towards zero, the nulls in the spectrum will tend towards infinity. This results in a Dirac pulse which has an infinitely flat spectrum which contains all frequencies.

If Δt tends towards infinity, the nulls in the spectrum will tend towards zero. This results in a spectral line at zero frequency which is DC. All cases in between simply correspond to Δf = 1/Δt.
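The Δf = 1/Δt relationship is easy to check numerically with NumPy's normalized sinc function; the pulse duration chosen below is arbitrary.

```python
import numpy as np

dt = 2.0e-3                       # pulse duration Δt = 2 ms (illustrative)
df = 1.0 / dt                     # predicted null spacing Δf = 1/Δt = 500 Hz

# The amplitude spectrum of a rectangular pulse of duration Δt follows
# |sin(pi*f*Δt)/(pi*f*Δt)|.  Note np.sinc(x) = sin(pi*x)/(pi*x).
spectrum = lambda f: np.abs(np.sinc(f * dt))

# Nulls fall exactly at integer multiples of Δf ...
for k in (1, 2, 3):
    assert spectrum(k * df) < 1e-12

# ... and the spectrum is clearly nonzero between them.
assert spectrum(0.5 * df) > 0.5
```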

A train of rectangular pulses of period TP and pulse width Δt also corresponds to this sin(x)/x-shaped variation, but there are now only discrete spectral lines spaced apart by fP = 1/TP which, however, conform to this sin(x)/x-shaped envelope.

What then is the relationship between the rectangular pulse and orthogonality? The carrier signals are sinusoidal.

A sine wave signal of frequency fS = 1/TS results in a single spectral line at frequency “fS” and “−fS” in the frequency domain. However, these sinusoidal carriers carry information by amplitude and phase-shift keying.

These sinusoidal carrier signals do not extend continuously from minus infinity to plus infinity but change their

amplitude and phase after a particular time Δt. Thus one can imagine a modulated carrier signal to be composed of sinusoidal sections cut out rectangularly, so-called burst packets. Mathematically, a convolution occurs in the frequency domain.

The spectra of the rectangular window pulse and of the sine wave become superimposed. In the frequency domain there is then a sin(x)/x-shaped spectrum at the “fS” and “−fS” positions instead of a discrete spectral line. The nulls of the sin(x)/x spectrum are determined by the length of the rectangular window Δt; the space between the nulls is Δf = 1/Δt, as shown in “Figure (7.23)”.

7.13 – Orthogonal Frequency Division Multiple Access (OFDMA)

Both OFDM and OFDMA symbols are structured in a similar way. In OFDMA each symbol consists of sub-channels that carry data sub-carriers carrying information, pilot sub-carriers as reference frequencies and for various estimation purposes, a DC sub-carrier as the center frequency, and guard sub-carriers or guard bands for keeping the space between OFDMA signals, as shown in “Figure (7.24)”.


Figure (7.24)

Figure (7.25)

Equation (7.6)

Active (data and pilot) sub-carriers are grouped into subsets of sub-carriers called sub-channels. Sub-channelization defines sub-channels that can be allocated to subscriber stations (SSs) depending on their channel conditions and data requirements. Using sub-channelization, within the same time slot a BS can, for example, allocate more transmit power to SSs with lower SNR and less power to user devices with higher SNR. Sub-channelization also enables the BS to allocate higher power to sub-channels assigned to indoor SSs, resulting in better in-building coverage. Sub-channelization in the uplink can save a user device transmit power, because it can concentrate power only on the sub-channel(s) allocated to it. This power-saving feature is particularly useful for battery-powered user devices. The concept of sub-channelization is explained in “Figure (7.25)”.

7.14 – Peak to Average Power Ratio (PAPR)

7.14.1 –Conceptual Meaning

Peak to Average Power Ratio (PAPR) is proportional to the number of sub-carriers used in an OFDM system and can be calculated simply using “Equation (7.5)”. An OFDM system with a large number of sub-carriers will thus have a very large PAPR when the sub-carriers add up coherently. A large PAPR makes the implementation of the Digital-to-Analog Converter (DAC) and Analog-to-Digital Converter (ADC) extremely difficult, and the design of the RF amplifier also becomes increasingly difficult as the PAPR increases.

    PAPR = max{ |x(t)|^2 } / E{ |x(t)|^2 }
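Both points are easy to illustrate numerically: PAPR computed from the definition of peak power over average power, and the worst case in which all subcarriers add coherently, giving PAPR = N. The sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def papr_db(x):
    """PAPR = max|x|^2 / mean|x|^2, expressed in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Random-phase symbols on N subcarriers: the PAPR tends to grow with N.
for N in (16, 64, 256):
    X = np.exp(2j * np.pi * rng.random(N))      # unit-magnitude symbols
    x = np.fft.ifft(X)                          # time-domain OFDM symbol
    print(N, "subcarriers ->", round(papr_db(x), 1), "dB")

# Worst case: identical symbols on all carriers add fully coherently,
# so the peak power is N times the average power (PAPR = 10*log10(N) dB).
x_worst = np.fft.ifft(np.ones(256))
assert abs(papr_db(x_worst) - 10 * np.log10(256)) < 1e-9
```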


Figure (7.26)

Figure (7.27)

7.14.2 – The cause of PAPR

Before transmission, the signal must pass through a high power amplifier (HPA). There are two main types of power amplifiers:

Linear Power Amplifiers: perfect from the linearity point of view, as shown below in “Figure (7.26)”, but their decisive disadvantage is a very low efficiency of at most 25%; using one would rapidly exhaust a mobile phone's battery.

Non-Linear Power Amplifiers: perfect from the efficiency point of view, as they operate with very high efficiency so the battery can survive for a long time; on the other side, their main disadvantage is operating in a nonlinear region, as shown below in “Figure (7.27)”, which leads to loss of orthogonality and hence to inter-carrier interference.


When transmitted through a nonlinear device, a high peak signal generates out-of-band energy (spectral regrowth) and in-band distortion (constellation tilting and scattering). These degradations may affect the system performance severely, so, we must operate our system in the linear region of the nonlinear power amplifier.

7.14.3 – PAPR Effect

The power amplifiers at the transmitter need to have a large linear range of operation.

Non-linear distortions and peak amplitude limiting introduced by the High Power amplifier (HPA) will produce inter-modulation between the different carriers and introduce additional interference into the system.

Additional interference leads to an increase in the Bit Error Rate (BER) of the system.

The Analog to Digital converters and Digital to Analog converters need to have a wide dynamic range and this increases complexity.

A way to avoid non-linear distortion is to force the amplifier to work in its linear region, but unfortunately such a solution is not power efficient and thus not suitable for wireless communication. Moreover, if the output signal is simply clipped, in-band distortion (additional noise) and out-of-band radiation (adjacent-channel interference, ACI) take place.

7.14.4 – PAPR Reduction Techniques

We have two main reduction methods as shown next in this clause.

7.14.4.1 – Distorting Method

This method distorts the transmitted signal by clipping the signal peaks (by means of filtering and clipping), which leads to a higher bit error rate and a decrease in bandwidth efficiency.

7.14.4.2 – Non-Distorting Method

The signal remains undistorted, which keeps the overall bit error rate low; the penalty is an increase in the required bandwidth, which in turn decreases the bandwidth efficiency. Selective Mapping is a typical example of this reduction method.

7.14.4.2.1 – Selective Mapping Method

1. Multiply the data signal with M different sequences “r1 to rM”.

2. Apply an Inverse Fast Fourier Transform (IFFT) of size “N” to each candidate.

3. At the transmitting side, calculate the PAPR of all “M” candidate sequences and transmit the one with the smallest PAPR; the index of the chosen sequence is signalled to the receiver as side information.
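The three steps can be sketched as follows. The BPSK data block, the QPSK-phase candidate sequences, and the sizes are illustrative assumptions; note that the first candidate is left unmodified, so SLM never does worse than plain OFDM.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 64, 8                          # subcarriers and number of candidate sequences

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

X = 1 - 2 * rng.integers(0, 2, N).astype(complex)   # data block (BPSK here)
baseline = papr_db(np.fft.ifft(X))                  # PAPR without SLM

# Step 1: candidate phase sequences r_1..r_M (elements from {1, j, -1, -j});
# the first is all-ones so the unmodified block is always a candidate.
R = np.exp(2j * np.pi * rng.integers(0, 4, (M, N)) / 4)
R[0] = 1.0

# Step 2: IFFT of size N applied to every candidate.
candidates = np.fft.ifft(X * R, axis=1)

# Step 3: keep the candidate with the smallest PAPR.
paprs = np.array([papr_db(c) for c in candidates])
best = int(np.argmin(paprs))

assert paprs[best] <= baseline        # SLM can only improve on the unmodified block
```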

Figure (7.27)


Chapter Number 8 Space Time Coding

8.1 – Space Time Coding Concept

A space–time code (STC) is a method employed to improve the reliability of data transmission in wireless communication systems using multiple transmit antennas. STCs rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at least some of them may survive the physical path between transmission and reception in a good enough state to allow reliable decoding.

STC may be further subdivided according to whether the receiver knows the channel impairments. In coherent

STC, the receiver knows the channel impairments through training or some other form of estimation. These codes have been studied more widely because they are less complex than their non-coherent counterparts. In noncoherent STC the receiver does not know the channel impairments but knows the statistics of the channel. In differential space–time codes neither the channel nor the statistics of the channel are available.

8.2 – Diversity

In telecommunications, a diversity scheme refers to a method for improving the reliability of a message signal by using two or more communication channels with different characteristics. Diversity plays an important role in combating fading and co-channel interference and avoiding error bursts. It is based on the fact that individual channels experience different levels of fading and interference. Multiple versions of the same signal may be transmitted and/or received and combined in the receiver. Alternatively, a redundant forward error correction code may be added and different parts of the message transmitted over different channels. Diversity techniques may exploit the multipath propagation, resulting in a diversity gain, often measured in decibels.

8.2.1 –Diversity Classes

8.2.1.1 – Time Diversity

In this class of diversity, multiple versions of the same signal are transmitted at different time instants; alternatively, a redundant forward error correction code is added and the message is spread in time by means of bit-interleaving before it is transmitted. Thus error bursts are avoided, which simplifies the error correction. In other words, we average the fading of the channel over time by using channel coding and interleaving, so that every part of the code-word is affected by a different fading state over time; if a deep fade occurs, only part of the code-word is lost, not the whole code-word, as explained below in “Figure (8.1)”.

As we can see in “Figure (8.1)”, every code-word in this example consists of 4 symbols, so the number of diversity branches is L = 4. There is a deep fade in the 3rd time slot, which would affect the 3rd code-word: if we transmitted the data without interleaving, the code-word “x2” would be distorted and could not be recovered. With interleaving, the deep fade distorts only one symbol of each code-word, so that symbol can be recovered by the channel coding at the receiver. Each part of the code-word is thus affected by a different fading state over time, and if a deep fade occurs, the missing part can be recovered from the received parts of the code-word, depending on the type and rate of the channel coding.

Increasing number of diversity branches decreases the overall probability of error.
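The interleaving argument can be demonstrated with a toy block interleaver: 4 codewords of 4 symbols each, matching the L = 4 of the figure.

```python
import numpy as np

L = 4                      # symbols per codeword (diversity branches)
n_words = 4                # codewords x1..x4, as in the figure

# Rows = codewords; entry (w, c) is symbol c of codeword w.
codewords = np.arange(n_words * L).reshape(n_words, L)

# Block interleaving: transmit column by column instead of row by row,
# so consecutive transmitted symbols belong to DIFFERENT codewords.
tx = codewords.T.flatten()

# A deep fade wipes out one burst of n_words consecutive transmitted
# symbols, e.g. the 3rd time slot.
faded = set(tx[2 * n_words : 3 * n_words])

# After de-interleaving, each codeword has lost at most ONE of its L
# symbols, which the channel code can recover; without interleaving the
# same burst would have erased an entire codeword.
for row in codewords:
    assert len(faded.intersection(row)) <= 1

# De-interleaving at the receiver restores the original arrangement.
assert np.array_equal(tx.reshape(L, n_words).T, codewords)
```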


8.2.1.2 – Frequency Diversity

The signal is transmitted using several frequency channels or spread over a wide spectrum that is affected by frequency-selective fading. Middle-late 20th century microwave radio relay lines often used several regular wideband radio channels, and one protection channel for automatic use by any faded channel.

In time diversity we assumed that the channel exhibits flat fading that changes with time, but what about frequency-selective fading channels? With this type of channel we use frequency diversity: in high-bit-rate systems the transmitted signal suffers from selective fading because its bandwidth exceeds the coherence bandwidth of the channel, so we use techniques that combat the fading over frequency, such as:

OFDM modulation in combination with subcarrier interleaving and forward error correction, in which we divide the available band into smaller sub-bands on sub-carriers, so that every sub-band is affected by flat fading, as shown in “Figure (8.2)”.

Spread spectrum, in all its types, for example Frequency Hopping or Direct Sequence CDMA. Frequency Hopping Spread Spectrum is a method of transmitting radio signals by rapidly switching the carrier among many frequency channels, using a pseudorandom sequence known to both transmitter and receiver.

In Direct Sequence Spread Spectrum, the original signal is multiplied with a pseudo-noise sequence (PN code) and transmitted over a bandwidth larger than the original one. Assuming the receiver is synchronized to the time delay and RF phase of the direct path, the PN code arriving over the non-direct paths is not synchronized to the PN code of the direct path and is rejected, thanks to the autocorrelation property of the PN code shown below in “Equations (8.1)”.
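A toy direct-sequence sketch of the spreading and rejection idea; random ±1 chips stand in here for a proper PN code, and the spreading factor is an illustrative value.

```python
import numpy as np

rng = np.random.default_rng(6)
G = 32                                     # spreading factor (chips per bit)

bits = 1 - 2 * rng.integers(0, 2, 8)       # data bits as +/-1
pn = 1 - 2 * rng.integers(0, 2, G)         # +/-1 pseudo-noise chip sequence

# Spreading: each bit is multiplied by the whole PN sequence,
# so the transmitted bandwidth grows by the factor G.
chips = np.repeat(bits, G) * np.tile(pn, len(bits))

# Synchronized despreading: correlate each chip block with the same PN
# code; the correlation peak is G per bit, and the data is recovered.
rx = chips.reshape(len(bits), G)
decisions = np.sign(rx @ pn)
assert np.array_equal(decisions, bits)

# A misaligned (delayed) copy, as would arrive over a non-direct path,
# correlates poorly -- this is the autocorrelation-based rejection.
delayed = np.roll(pn, 7)
assert abs(np.dot(pn, delayed)) < G        # far below the synchronized peak G
```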

Figure (8.1)


8.2.1.3 – Antenna Diversity

Antenna diversity, also known as space diversity, is any one of several wireless diversity schemes that use two or more antennas to improve the quality and reliability of a wireless link. Often, especially in urban and indoor environments, there is no clear line-of-sight (LOS) between transmitter and receiver. Instead the signal is reflected along multiple paths before finally being received. Each of these bounces can introduce phase shifts, time delays, attenuations, and distortions that can destructively interfere with one another at the aperture of the receiving antenna. Antenna diversity is especially effective at mitigating these multipath situations. This is because multiple antennas offer a receiver several observations of the same signal. Each antenna will experience a different interference environment. Thus, if one antenna is experiencing a deep fade, it is likely that another has a sufficient signal. Collectively such a system can provide a robust link. While this is primarily seen in receiving systems (diversity reception), the analog has also proven valuable for transmitting systems (transmit diversity) as well.

Inherently an antenna diversity scheme requires additional hardware and integration versus a single antenna

system but due to the commonality of the signal paths a fair amount of circuitry can be shared. Also with the multiple signals there is a greater processing demand placed on the receiver, which can lead to tighter design requirements. Typically, however, signal reliability is paramount and using multiple antennas is an effective way to decrease the number of drop-outs and lost connections.

We have more than one type of antenna diversity, as shown below:
1. Multiple Input Single Output (transmit diversity).
2. Single Input Multiple Output (receive diversity).
3. Multiple Input Multiple Output.
4. Multiple Input Multiple Output (multi user).

Figure (8.2)

Equations (8.1)


8.2.1.3.1 – Multiple Input Single Output (MISO)

It is also called transmit diversity, as we use multiple antennas at the transmitter side and only a single antenna at the receiver side; “Figure (8.3)” shows an example of MISO diversity using 2 transmitting antennas.

In the example shown in “Figure (8.3)”, the receiver receives two signals carrying the same transmitted data, one from each antenna, so if one of them suffers a deep fade the information can be obtained from the other. Each path is affected by different fading, so the diversity occurs over space (paths). Note that the data transmitted from the antennas is the same, so the data rate does not change, and the more antennas we use the better the performance. This leads us to define the diversity order: the number of direct paths the transmitted signal can take from the transmitter to the receiver, which in our case equals the number of transmit antennas.

Relation between received signal and transmitted signal can be represented using “Equation (8.2)”.

    y = [h1  h2] · [x1, x2]^T + n = h1·x1 + h2·x2 + n

In our DVB-T2 system this type of diversity is the used one.

8.2.1.3.2 – Single Input Multiple Output (SIMO)

It is also called receive diversity, as we use multiple antennas at the receiver side and only a single antenna at the transmitter side; “Figure (8.4)” shows an example of SIMO diversity where 2 antennas are used at the receiving side.

Here the receiver has many copies of the transmitted signal at its antennas, each affected by different fading due to the path it passed through, so if one copy suffers a deep fade the receiver can obtain the information from the other received signals. The diversity here occurs over space, and in this type the transmitter is not required to know any information about the channel, so this type of the

Figure (8.3)

Figure (8.4)

Equation (8.2)


diversity is the commonly used type, and the diversity order here is the number of receive antennas. How does the receiver obtain the signal from the many copies that reach it? By using one of the many diversity combining techniques, of which we will select three: 1. Selective combining (SC). 2. Maximal ratio combining (MRC). 3. Equal gain combining (EGC).
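Of these, maximal ratio combining is the easiest to sketch: each branch is weighted by the conjugate of its channel gain, so stronger branches contribute more. The example below is a noiseless toy case with 2 receive antennas.

```python
import numpy as np

rng = np.random.default_rng(7)
Mr = 2                                     # receive antennas (SIMO)

x = (1 + 1j) / np.sqrt(2)                  # one transmitted QPSK symbol
h = rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)   # branch gains
y = h * x                                  # the Mr received copies (noiseless here)

# Maximal ratio combining: weight branch i by conj(h_i) and sum, then
# normalise by the total channel power, so a deep fade on one antenna
# is compensated by the stronger branch.
combined = np.vdot(h, y)                   # sum_i conj(h_i) * y_i
x_hat = combined / np.sum(np.abs(h) ** 2)
assert np.isclose(x_hat, x)

# With noise present, MRC maximises the output SNR, which equals the
# SUM of the individual branch SNRs.
```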

8.2.1.3.3 – Multiple Input Multiple Output (MIMO)

In this type we use multiple antennas at both the transmitter and receiver sides; “Figure (8.5)” shows an example of MIMO diversity using 2 antennas at the transmitting side and 2 antennas at the receiving side.

Here the diversity order equals the number of transmit antennas multiplied by the number of receive antennas (Mt × Mr), which is the number of independent possible paths between the transmitter and receiver: each transmit antenna sends a copy of the message, which passes through a different path to each receive antenna, giving Mr paths per transmit antenna and (Mt × Mr) paths in total.

Increasing the number of antennas used increases the diversity order, and with a higher diversity order a more nearly flat-faded channel can be achieved; “Figure (8.6)” shows a comparison of the Rayleigh channel effect for a system with first-order diversity and another with second-order diversity.

Figure (8.5)

Figure (8.6)


Relation between received signal and transmitted signal can be represented using “Equation (8.3)”.

    [y1]   [h11  h12] [x1]   [n1]
    [y2] = [h21  h22] [x2] + [n2]

8.2.1.3.4 – Multiple Input Multiple Output Multi Users (MIMO-Mu)

The main difference from the MIMO system is that we have many receivers, each with its own antenna, rather than a single receiver; “Figure (8.7)” shows a MIMO-Mu system with 2 transmitting antennas and 2 receivers, each with a single receiving antenna.

This type is a typical example of mobile communication systems, i.e. the transmit and receive link between the BTS and the users in a cell: each user has only one antenna to transmit and receive with, but the BTS has two antennas to transmit data to and receive data from the users.

Relation between received signal and transmitted signal can be represented using “Equation (8.4)”.

    [y1]   [h11  h12] [x1]   [n1]
    [y2] = [h21  h22] [x2] + [n2]

8.2.2 –Diversity Effect on Bit-Error-Rate

It is thus obvious that diversity can be used to enhance the system performance by decreasing the channel’s effect on the transmitted signal, thereby decreasing the overall bit error rate, as shown below in “Figure (8.8)”.

Figure (8.7)

Equation (8.3)

Equation (8.4)


8.2.3 –Diversity Condition

In these types of diversity (spatial diversity), the distance between the antennas must be larger than the coherence distance, to ensure that each antenna transmits a data stream uncorrelated with the one transmitted from the other antenna. As shown in “Figure (8.9)”, an array antenna consists of many antennas close to each other, so the transmitted data streams are dependent on each other (correlated); antennas separated by more than the coherence distance, which is about half the wavelength (or, to be exact, 0.38λ), are uncorrelated.

Figure (8.8)

Figure (8.9)


We can use different diversity techniques without worrying about the coherence distance: 1. Polarization diversity. 2. Angle diversity. In the next few lines we quickly show the conceptual definition of both techniques.

8.2.3.1 – Polarization Diversity

In this type of diversity horizontal and vertical polarization signals are transmitted by two different polarized antennas and received correspondingly by two different polarized antennas at the receiver. Different polarizations ensure that there is no correlation between the data streams, without having to worry about coherent distance of separation between the antennas.

8.2.3.2 – Angle Diversity

This applies at carrier frequencies in excess of 10 GHz. At such frequencies, the transmitted signals are highly scattered in space. In such an event the receiver can have two highly directional antennas facing in totally different directions. This enables the receiver to collect two samples of the same signal, which are totally independent of each other.

8.3 – Spatial Multiplexing

Spatial multiplexing (abbreviated SM or SMX) is a transmission technique in MIMO wireless communication that transmits independent and separately encoded data signals, so-called streams, from each of the multiple transmit antennas. The space dimension is therefore reused, or multiplexed, more than once. If the transmitter is equipped with “Nt” antennas and the receiver has “Nr” antennas, the maximum spatial multiplexing order (the number of streams) is given by “Equation (8.4)”:

    Ns = min(Nt, Nr)

If a linear receiver is used, this means that “Ns” streams can be transmitted in parallel, ideally leading to an “Ns”-fold increase of the spectral efficiency (the number of bits per second per Hz that can be transmitted over the wireless channel). The practical multiplexing gain can be limited by spatial correlation, which means that some of the parallel streams may have very weak channel gains.

Previously, in diversity, we sent the same data (i.e. the same bit stream) from all antennas, whereas here different data is transmitted from each antenna.
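The round-robin splitting of one symbol stream across the transmit antennas can be sketched in a few lines of Python (an illustrative sketch, not part of any standard; the function names are invented for this example):

```python
# Maximum spatial multiplexing order, as in Equation (8.4).
def max_streams(nt: int, nr: int) -> int:
    return min(nt, nr)

# De-multiplex one symbol stream across nt transmit antennas (round-robin),
# so each antenna carries different data, unlike diversity transmission.
def demux(symbols, nt):
    return [symbols[i::nt] for i in range(nt)]

streams = demux([0, 1, 2, 3, 4, 5], nt=2)
# streams == [[0, 2, 4], [1, 3, 5]]
```

With a 4x2 antenna configuration, for instance, at most min(4, 2) = 2 independent streams can be separated by a linear receiver.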

8.4 – MIMO Concept

A MIMO system is a physical-layer technique that uses antenna arrays at both the transmitter and receiver in combination with space-time modulation and coding techniques to achieve very high spectral efficiencies. In other words, a MIMO system is a space-time signal processing approach in which the time dimension is complemented with the spatial dimension through the use of multiple spatially distributed antennas.



Wireless system designers are faced with numerous challenges, including limited availability of radio frequency spectrum and transmission problems caused by such factors as fading and multipath distortion. Meanwhile, there is increasing demand for higher data rates, better quality service, fewer dropped calls, and higher network capacity. Meeting these needs requires new techniques that improve spectral efficiency and network links’ operational reliability. Multiple-input-multiple-output (MIMO) technology promises a cost-effective way to provide these capabilities. MIMO uses antenna arrays at both the transmitter and receiver.

Algorithms in a radio chipset send information out over the antennas. The radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. But MIMO sends data over these multiple paths, thereby increasing the amount of information the system carries. The data is received by multiple antennas and recombined properly by other MIMO algorithms. This technology promises to let engineers scale up wireless bandwidth or increase transmission range. MIMO is an underlying technique for carrying data. It operates at the physical layer, below the protocols used to carry the data, so its channels can work with virtually any wireless transmission protocol. For example, MIMO can be used with the popular IEEE 802.11 (Wi-Fi) technology, and in upcoming mobile generations and broadband solutions such as IEEE 802.16 (WiMAX) and Long Term Evolution (LTE). For these reasons, it is thought that MIMO will eventually become the standard for carrying almost all wireless traffic and a core technology in wireless systems; it is really the only economical way to increase bandwidth and range. MIMO still must prove itself in large-scale, real-world implementations, and it must overcome several obstacles to its success, including energy consumption, cost, and competition from similar technologies.

8.4.1 –Advantages of MIMO Systems

There are many reasons to believe that MIMO will become a core technology in wireless systems; some are listed here, and the coming future will demonstrate the power and importance of MIMO technology.

The MIMO technique is able to:

- Exploit multipath by taking advantage of random fading, since the main impairment to the performance of a wireless communication system is fading due to multipath and interference.
- Achieve very high spectral efficiency, making it a strong answer to limited bandwidth availability.
- Save system power, as it increases capacity and reliability without consuming excessive power.
- Increase system capacity so it can support a large number of users.
- Increase system throughput, as it can support high data rates.
- Increase both the quality of service and revenues significantly.

Given these reasons, there is no doubt about the importance of the MIMO technique, so the aim of this section is to provide a complete and concise overview of this promising technique.

8.4.2 –MIMO General Model

The assumptions of the MIMO system model can be summarized in the following points:

- The channel is a deterministic Gaussian channel.
- The channel frequency response is flat, as the signal bandwidth is narrow.
- The channel matrix H is known at the receiver but unknown at the transmitter.
- P is the total transmitted power across the transmit antennas, irrespective of their number.
- The signals transmitted from each antenna have equal powers of P/MT (channel unknown to the transmitter).
- The received power at each of the receive antennas is equal to the total transmitted power (ignoring signal attenuation, antenna gains, and so on).


- Each of the MR receive branches has identical noise power N0.
- The SNR at each receive branch is ρ = P/N0.

The MIMO model can be represented as shown below in “Figure (8.10)”.

8.4.3 –MIMO General Capacity Equation

The capacity of any MIMO channel is generally defined mathematically as shown in “Equation (8.5)”:

    C = max over p(s) of I(s; y)          Equation (8.5)

C: the capacity.
p(s): the probability distribution of the vector s.
I(s; y): the mutual information between the vectors s and y.

Where:

    I(s; y) = H(y) − H(y | s)

But:

    H(y | s) = H(n)

Then, after some mathematical simplification, we get:

    C = log2 det( I_MR + (ρ/MT) H H^H )

where ρ is the SNR at each receive branch and H^H denotes the conjugate transpose of H.
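As a numerical illustration of this capacity formula, the following Python sketch (an illustrative example, not from the original text; numpy is assumed to be available, and the random channel realization is arbitrary) evaluates C = log2 det(I_MR + (ρ/MT) H H^H) for one channel realization:

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bit/s/Hz) of one channel realization H (Mr x Mt),
    channel unknown at the transmitter: equal power per antenna."""
    mr, mt = H.shape
    A = np.eye(mr) + (snr / mt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(A))))

# SISO sanity check: H = [[1]] gives log2(1 + snr) = log2(4) = 2
assert abs(mimo_capacity(np.array([[1.0]]), 3.0) - 2.0) < 1e-9

# One illustrative 4x4 Rayleigh-like realization
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
c = mimo_capacity(H, snr=10.0)   # capacity of this single realization
```

Averaging `mimo_capacity` over many random realizations approximates the ergodic capacity discussed in the factors below.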

Figure (8.10)



Note that the previously derived capacity expression,

    C = log2 det( I_MR + (ρ/MT) H H^H )

M_R: number of antennas at the receiving side.
M_T: number of antennas at the transmitting side.

can be modified to obtain the general capacity equation for any SIMO model. For SIMO, M_T = 1 and the channel matrix reduces to a vector h, so:

    C = log2 ( 1 + ρ ‖h‖² )

And with unit-gain channels ‖h‖² = M_R, then:

    C = log2 ( 1 + ρ M_R )

Multi-antenna system schemes are summarized in the next table:

Scheme | M_T | M_R | Physical meaning                        | Comments
SISO   | 1   | 1   | Neither transmit nor receive diversity. | No diversity.
SIMO   | 1   | >1  | Receive diversity only.                 | Diversity proportional to M_R.
MISO   | >1  | 1   | Transmit diversity only.                | Diversity proportional to M_T.
MIMO   | >1  | >1  | Transmit and receive diversity.         | Diversity proportional to M_T × M_R.

8.4.4 –Factors Affecting MIMO System Capacity

The superior performance of MIMO systems is influenced by several factors, as shown in the analysis equations, such as:

1. Whether channel knowledge is available at the transmitter.
2. Changing the SNR.
3. Changing the number of antennas used.

“Figure (8.11)” shows the effect of changing the previously mentioned parameters on the normalized capacity of the channel.


Figure (8.11)


8.4.5 –How a MIMO System Works

Consider the multi-antenna system diagram shown below in “Figure (8.12)”. A digital input signal is fed to a serial-to-parallel splitter after error-control coding and mapping to complex modulation symbols. The splitter produces several separate symbol streams, each of which is mapped onto one of the multiple transmit antennas; this mapping may include spatial weighting of the antenna elements or space-time pre-coding. At the receiver, the signals are captured by multiple antennas and recovered after demodulation and de-mapping. This can be considered an extension of conventional smart-antenna applications. The intelligence of the multi-antenna system lies in the weight-selection algorithm, which can offer a more reliable communication link in the presence of adverse propagation conditions such as multipath fading and interference.

8.5 – Receiving Space Time Codes

The reality of MIMO receivers is that we need to contend with multi-stream interference (MSI), since the transmitted streams interfere with each other. In addition, we have the usual problems of channel fading and additive noise. Initially we assume un-coded SM (i.e., the data stream comprises un-coded data: no temporal coding has been employed, only mapping).

The following types of receivers decode the received signal given by “Equation (8.6)”:

    y = H s + n          Equation (8.6)

s: transmitted vector of signals.
y: received vector of signals.
H: channel fading effect matrix.
n: noise added due to the channel effect.

8.5.1 – Maximum Likelihood Decoder

This is the optimum receiver. If the data stream is temporally un-coded, the ML receiver solves “Equation (8.7)”:

    ŝ = arg min over s of ‖ y − H s ‖²          Equation (8.7)

The ML receiver searches through the entire vector constellation for the most probable transmitted signal vector. This implies investigating M^MT combinations (for a constellation of size M), a very difficult task. Hence, these receivers are difficult to implement, but they provide full M_R diversity and no power loss as a consequence of the detection process. In this sense the receiver is optimal. There have been developments based on fast algorithms employing sphere decoding.
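The brute-force search of Equation (8.7) can be sketched as follows (an illustrative Python example with an assumed 2x2 BPSK link, not part of the original text; numpy is assumed available):

```python
import itertools
import numpy as np

def ml_detect(y, H, constellation):
    """Exhaustive ML search over all M**Mt candidate vectors s,
    returning the minimizer of ||y - H s||^2 (Equation (8.7))."""
    mt = H.shape[1]
    best, best_metric = None, float("inf")
    for cand in itertools.product(constellation, repeat=mt):
        s = np.array(cand)
        metric = float(np.linalg.norm(y - H @ s) ** 2)
        if metric < best_metric:
            best, best_metric = s, metric
    return best

# 2x2 link with BPSK symbols (+1/-1); noiseless for clarity
H = np.array([[1.0, 0.5], [0.3, 1.0]])
s_true = np.array([1.0, -1.0])
s_hat = ml_detect(H @ s_true, H, [-1.0, 1.0])
```

The loop visits M^MT = 2² = 4 candidates here; for 64-QAM over 4 antennas it would visit 64⁴ ≈ 1.7 × 10⁷, which is why sphere decoding is attractive.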

Figure (8.12)



8.5.2 – Zero Forcing Decoder

The ZF receiver is a linear receiver. It behaves like a linear filter that separates the data streams and then independently decodes each stream. We assume that the channel matrix H is invertible (a must) and estimate the transmitted symbol vector as shown below in “Equation (8.8)”:

    ŝ = H⁺ y          Equation (8.8)

H⁺: the pseudo-inverse of the (generally non-square) matrix H, H⁺ = (H^H H)⁻¹ H^H.

The zero-forcing receiver decomposes the link into M_T parallel streams, each with diversity gain and array gain proportional to M_R − M_T + 1. Hence, it is suboptimal.

The zero-forcing decoder completely ignores the presence of noise and its effect, but in reality additive white Gaussian noise is always present, so the resulting equation is modified as shown in “Equation (8.9)”:

    ŝ = H⁺ y = s + H⁺ n          Equation (8.9)

If the channel matrix's determinant is relatively small, which corresponds to paths that are not perfectly independent, the inverse will be large, and when applied to the noise component it will enhance the noise, resulting in bad decoder performance. To reach an optimum trade-off between interference cancellation and noise enhancement, the MMSE decoding mechanism was introduced.
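Equations (8.8) and (8.9) translate directly into code (an illustrative Python sketch with an assumed 2x2 BPSK link and small additive noise; numpy assumed available):

```python
import numpy as np

def zf_detect(y, H, constellation):
    """Zero forcing: apply the pseudo-inverse H+ (Equation (8.8)),
    then slice each stream to the nearest constellation point."""
    s_tilde = np.linalg.pinv(H) @ y   # = s + H+ n, as in Equation (8.9)
    return np.array([min(constellation, key=lambda c: abs(x - c))
                     for x in s_tilde])

H = np.array([[1.0, 0.4], [0.2, 0.9]])
s = np.array([1.0, -1.0])
noise = np.array([0.05, -0.02])            # small additive noise
s_hat = zf_detect(H @ s + noise, H, [-1.0, 1.0])
```

Replacing H with a nearly singular matrix makes `pinv(H) @ noise` large, which is exactly the noise-enhancement problem described above.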

8.5.3 – Minimum Mean Square Error (MMSE) Decoder

Here, we choose a matrix B that minimizes the mean squared error, as shown in “Equation (8.10)”:

    B = arg min over B of E{ ‖ B y − s ‖² }          Equation (8.10)

E{·}: statistical expectation; the minimized quantity is the mean squared error.

The solution of the linear MMSE problem produces:

    B = ( H^H H + (M_T/ρ) I )⁻¹ H^H

Then:

    ŝ = B y = ( H^H H + (M_T/ρ) I )⁻¹ H^H y
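A minimal sketch of the MMSE detector, using one common form of the filter matrix B (an illustrative Python example with an assumed 2x2 BPSK link; numpy assumed available):

```python
import numpy as np

def mmse_detect(y, H, snr, constellation):
    """Linear MMSE: B = (H^H H + (Mt/snr) I)^-1 H^H, the regularized
    form that trades interference cancellation for less noise boost."""
    mt = H.shape[1]
    B = np.linalg.inv(H.conj().T @ H + (mt / snr) * np.eye(mt)) @ H.conj().T
    return np.array([min(constellation, key=lambda c: abs(x - c))
                     for x in (B @ y)])

H = np.array([[1.0, 0.4], [0.2, 0.9]])
s = np.array([1.0, -1.0])
s_hat = mmse_detect(H @ s, H, snr=100.0, constellation=[-1.0, 1.0])
```

As snr grows large the regularization term (Mt/snr)·I vanishes and B tends to the zero-forcing pseudo-inverse, which shows how MMSE bridges the two designs.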

8.5.4 – Successive Cancellation Decoder

The successive interference cancellation (SIC) algorithm is usually combined with V-BLAST receivers. It provides improved performance at the cost of increased computational complexity. Rather than jointly decoding the transmitted signals, this nonlinear detection scheme first detects one stream and then cancels its effect from the overall received signal vector before proceeding to the next. The reduced channel matrix then has dimension M_R × (M_T − 1) and the signal vector has dimension (M_T − 1) × 1. The same operation is repeated on the next stream: the channel matrix reduces to M_R × (M_T − 2) and the signal vector to (M_T − 2) × 1, and so on. If we assume that all the decisions at each layer are correct, then there is no error propagation. Otherwise, the error-rate performance is dominated by the weakest stream, which is the first stream decoded by the receiver.



Hence, the improved diversity performance of the succeeding layers does not help. To get around this problem, the ordered successive interference cancellation (OSIC) receiver was introduced. In this case, the signal with the strongest signal-to-interference-plus-noise ratio (SINR) is selected for processing. This improves the quality of the decision and reduces the chances of error propagation. It is an inherent form of selection diversity, wherein the signal with the strongest SINR is selected.

8.5.4.1 – Ordered Successive Interference Cancellation Algorithm

As shown before, OSIC is a decoding algorithm that helps receive the space-time-coded transmitted signal with high efficiency. The algorithm procedure can be summarized as follows:

a. Ordering: determine the optimal detection order by choosing the row with minimum Euclidean norm (strongest SINR).
b. Nulling: estimate the strongest transmit signal by nulling out all the weaker transmit signals.
c. Slicing: detect the value of the strongest transmit signal by slicing to the nearest signal constellation value.
d. Cancellation: cancel the effect of the detected signal from the received signal vector to reduce the detection complexity for the remaining signals.
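The four steps above can be sketched as follows (an illustrative Python implementation using zero-forcing nulling, assumed BPSK symbols and an arbitrary 2x2 channel; numpy assumed available):

```python
import numpy as np

def osic_detect(y, H, constellation):
    """OSIC sketch: order by strongest stream (smallest row norm of the
    pseudo-inverse), null, slice, cancel, then repeat on the rest."""
    H = H.astype(float).copy()
    y = y.astype(float).copy()
    mt = H.shape[1]
    s_hat = np.zeros(mt)
    remaining = list(range(mt))
    while remaining:
        W = np.linalg.pinv(H[:, remaining])
        # a. Ordering: the pseudo-inverse row with minimum norm
        #    corresponds to the strongest stream.
        k = int(np.argmin(np.linalg.norm(W, axis=1)))
        s_tilde = W[k] @ y                                # b. Nulling
        s_k = min(constellation, key=lambda c: abs(s_tilde - c))  # c. Slicing
        idx = remaining[k]
        s_hat[idx] = s_k
        y = y - H[:, idx] * s_k                           # d. Cancellation
        remaining.pop(k)
    return s_hat

H = np.array([[1.0, 0.3], [0.2, 1.1]])
s = np.array([-1.0, 1.0])
s_hat = osic_detect(H @ s, H, [-1.0, 1.0])
```

Each pass shrinks the effective channel by one column, matching the M_R × (M_T − 1), M_R × (M_T − 2), … reduction described in the text.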

8.5.4.2 – Vertical Bell Laboratories Layered Space-Time (V-BLAST) Algorithm

V-BLAST is a detection algorithm for the reception of multi-antenna MIMO systems, first proposed in 1996 at Bell Laboratories in New Jersey, USA, by Gerard J. Foschini. It works by successively eliminating the interference caused by each transmitted stream.

Its principle is quite simple: first detect the most powerful signal, then regenerate the received signal corresponding to that decision. The regenerated signal is subtracted from the received signal and, with this new signal, the receiver proceeds to detect the second most powerful stream, since the first has already been removed, and so forth.


Chapter Number 9
Digital Video Broadcasting 2nd Generation Terrestrial Simulation

9.1 – Introduction

The Digital Video Broadcasting – Terrestrial system, second generation (DVB-T2), is developed on top of the DVB-T standard in order to provide additional facilities and features. DVB-T2 uses the latest modulation, coding and error-correction techniques in order to enable highly efficient use of valuable terrestrial spectrum.

9.1.1 –DVB-T2 Key Features

9.1.1.1 – Physical Layer Pipes (PLPs)

The commercial requirement for service-specific robustness together with the need for different stream types is met by the concept of fully transparent physical-layer pipes (“Figure (9.i1)”) which enable the transport of data independently of its structure, with freely selectable, PLP-specific physical parameters. Both the allocated capacity and the robustness can be adjusted to the content/service providers' particular needs, depending on the type of receiver and the usage environment to be addressed.

9.1.1.1.1 – Input Mode “A”

This simplest mode can be viewed as a straightforward extension of DVB-T, employing the other advanced features outlined in this clause but not being sub-divided into multiple PLPs. Here only a single PLP is used, transporting a single transport stream. Consequently the same robustness applies to all content, as in DVB-T.

9.1.1.1.2 – Input Mode “B”

This more advanced mode of operation applies the concept of multiple physical-layer pipes (“Figure (9.i2)”). Besides the advantage of service-specific robustness, this mode offers potentially longer time-interleaving depth as well as the option of power saving in the receiver. Therefore, even in the case of identical physical-parameter settings, it might be useful to apply this mode, especially if portable and/or mobile devices are to be targeted.

Figure (9.i1)

Figure (9.i2)


9.1.1.2 – Additional Band-Widths (1.7 MHz, 10 MHz)

In order to make DVB-T2 also suitable for professional use, e.g. transmissions between radio cameras and mobile studios, a 10 MHz option is included; consumer receivers are not expected to support the 10 MHz mode. To allow DVB-T2 to be used in narrower RF channel assignments, e.g. in Band III and in the L-band, a 1.712 MHz bandwidth is also included. The 1.712 MHz bandwidth is intended for mobile services.

9.1.1.3 – Extended Carrier Mode (8K, 16K, 32K)

Because the rectangular part of the spectrum rolls off more quickly for the larger FFT sizes, the outer ends of the OFDM signal's spectrum can be extended, i.e. more sub-carriers per symbol can be used for data transport. The gain achieved is between 1.4 % (8K) and 2.1 % (32K).

9.1.1.4 – Alamouti-Based MISO (In Frequency direction)

DVB-T2 incorporates the option of using the Alamouti technique with a pair of transmitters (see “Figure (9.i3)”). Alamouti is an example of a Multiple-Input Single-Output (MISO) system, in which every constellation point is transmitted by each transmitter, but the second transmitter (Tx2 in “Figure (9.i3)”) transmits a slightly modified version of each pair of constellation points, in reverse order in frequency. The technique gives performance equivalent to diversity reception in the sense that the operations performed by the receiver result in an optimum combination of the two signals; the resulting signal-to-noise ratio is as though the powers of the two signals had combined in the air. The extra complexity required in the receiver includes a few extra multipliers for the Alamouti processing, and some parts of the channel estimation need to be duplicated. There is a significant overhead increase in the sense that the density of scattered pilots needs to be doubled for a given guard-interval fraction.
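The pairwise Alamouti processing can be sketched as follows (an illustrative Python example of the classical Alamouti scheme applied across two adjacent carriers; the channel gains and QPSK cells are arbitrary, and perfect channel knowledge is assumed):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """One Alamouti pair: Tx1 sends (s1, s2); Tx2 sends the modified
    copies (-conj(s2), conj(s1)) in reverse order on the same carriers."""
    return (s1, s2), (-np.conj(s2), np.conj(s1))

def alamouti_combine(r1, r2, h1, h2):
    """Receiver combining: recovers s1, s2 scaled by |h1|^2 + |h2|^2,
    i.e. the two signal powers add as if combined in the air."""
    z1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    z2 = np.conj(h1) * r2 - h2 * np.conj(r1)
    return z1, z2

h1, h2 = 0.8 + 0.2j, -0.3 + 0.9j   # flat channel gains (assumed known)
s1, s2 = 1 + 1j, -1 + 1j           # two QPSK cells
tx1, tx2 = alamouti_encode(s1, s2)
r1 = h1 * tx1[0] + h2 * tx2[0]     # carrier k (noiseless for clarity)
r2 = h1 * tx1[1] + h2 * tx2[1]     # carrier k+1
g = abs(h1) ** 2 + abs(h2) ** 2
z1, z2 = alamouti_combine(r1, r2, h1, h2)
# z1/g recovers s1 and z2/g recovers s2 in the noiseless case
```

The combiner output scaling by |h1|² + |h2|² is exactly the "powers combined in the air" behavior described above.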

9.1.1.5 – Pre-ambles (P1 and P2)

The initial symbols of a DVB-T2 physical-layer frame are preamble symbols, transporting a limited amount of signaling data in a robust way. The frame starts with the highly robust, differentially BPSK-modulated P1 symbol, with guard intervals at both ends, which carries 7 bits of information (including the FFT size of the payload symbols). The subsequent P2 symbols, whose number is fixed for a given FFT size, provide all static, configurable and dynamic layer-1 signaling (the dynamic part can also be transmitted as "in-band signaling" within the PLPs). The first few bits of signaling (the L1 pre-signaling) have fixed coding and modulation; for the remainder (the L1 post-signaling) the code rate is fixed at 1/2, but the modulation can be chosen from QPSK, 16-QAM and 64-QAM. The P2 symbols will in general also contain data for common and/or data PLPs, which continue into the other symbols of the T2-frame.

Figure (9.i3)


9.1.1.6 – Pilot Patterns

Scattered pilots of pre-defined amplitude and phase are inserted into the signal at regular intervals in both time and frequency directions. They are used by the receiver to estimate changes in channel response in both time and frequency dimensions.

DVB-T2 has chosen a more flexible approach by defining eight patterns that can be selected depending on the FFT size and Guard Interval fraction adopted for the particular transmission. This reduces the pilot overhead whilst assuring a sufficient channel-estimation quality. The example in “Figure (9.i4)” shows a corresponding overhead reduction from 8 to 4 percent when using the PP3 pattern with a 1/8 guard interval.

9.1.1.7 – 256-QAM Modulation Technique

In DVB-T the highest-order constellation is 64-QAM, delivering a gross data rate of 6 bits per symbol per carrier (i.e. 6 bits per OFDM cell). In DVB-T2, the use of 256-QAM increases this to 8 bits per OFDM cell, a 33 % increase in spectral efficiency and in the capacity transported for a given code rate. Normally this would require a significantly higher carrier-to-noise ratio (4 dB to 5 dB higher, depending on channel and code rate), because the Euclidean distance between two neighboring constellation points is roughly half that of 64-QAM, so reception is more sensitive to noise. However, the performance of LDPC codes is much better than that of convolutional codes, and if a slightly stronger code rate is chosen for 256-QAM compared to the rate currently used with 64-QAM in DVB-T, the required C/N is maintained whilst still achieving a significant bit-rate increase.
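The "roughly half" distance claim can be checked numerically (an illustrative sketch using the standard minimum-distance formula d = sqrt(6/(M − 1)) for square M-QAM normalized to unit average energy):

```python
import math

def qam_min_distance(m):
    """Minimum Euclidean distance of square M-QAM normalized to unit
    average symbol energy: d = sqrt(6 / (M - 1))."""
    return math.sqrt(6.0 / (m - 1))

bits_64, bits_256 = math.log2(64), math.log2(256)     # 6 vs 8 bits/cell
ratio = qam_min_distance(256) / qam_min_distance(64)
# ratio is about 0.497: the 256-QAM minimum distance is roughly half
# that of 64-QAM, consistent with the 4-5 dB extra C/N quoted above.
```

The 2 extra bits per cell over 64-QAM's 6 bits give the quoted 33 % capacity increase for a given code rate.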

9.1.1.8 – Rotated Constellations

A new technique of constellation rotation and Q-delay has been introduced in DVB-T2. After forming a constellation, it is rotated in the complex "I-Q" plane so that each axis on its own (u1, u2) now carries enough information to tell which of the points was sent (see “Figure (9.i5)”).

Figure (9.i4)

Figure (9.i5)


The I and Q components are now separated by the interleaving process so that, in general, they travel on different frequencies and at different times. If the channel destroys one of the components, the other component can be used to recover the information. This technique gives no loss of performance in Gaussian channels, and a gain of 0.7 dB in typical fading channels. The gain is even greater in 0 dB-echo channels (e.g. Single-Frequency Networks) and erasure channels (e.g. impulsive interference, deep selective fading).

9.1.1.9 – 16K and 32K FFT Sizes and (1/128) guard-interval fraction

Increasing the FFT size results in a narrower sub-carrier spacing but a longer symbol duration. The first attribute leads to greater difficulties with inter-carrier interference, hence a lower tolerable Doppler frequency. Whilst this is not a setting preferred for mobile reception in UHF Band IV/V or higher, it could nevertheless be used in lower frequency bands. The second attribute, longer symbol duration, means that the guard-interval fraction is smaller for a given guard-interval duration in time.

Other advantages include better robustness against impulsive noise, a quasi-rectangular spectrum down to lower power-spectral-density levels, and the option to interpolate in the frequency direction only. The memory requirements for interpolation in the receiver are of the same order for 32K as for 8K, but for 16K they are doubled. The FFT-calculation complexity is only slightly increased. The 1/128 guard-interval fraction is also new in DVB-T2: it can, for example, allow 32K to be used with an absolute guard-interval duration equivalent to 8K 1/32, with a resulting reduction in overhead.

9.1.1.10 – LDPC and BCH error correcting codes

Whereas inner and outer error-control coding in DVB-T was based on convolutional and Reed-Solomon codes, ten years of technological development mean that the higher complexity of LDPC decoding can now be handled in the receiver. DVB-T2 uses concatenated LDPC/BCH coding, as in DVB-S2. These codes assure better protection, allowing more data to be transported in a given channel.

9.1.1.11 – Interleavers (Bit, Cell, Time, and Frequency)

The target of the interleaving stages is to spread the content in the time/frequency plane in such a way that neither impulsive noise (disturbance of the OFDM signal over a short time period) nor frequency-selective fading (disturbance over a limited frequency span) would erase long sequences of the original data stream. Furthermore, the interleaving is matched to the behavior of the error-control coding, which does not protect all data equally. Lastly, the interleaving is designed such that bits carried by a given transmitted constellation point do not correspond to a sequence of consecutive bits in the original stream.

9.1.1.12 – Peak-Average Power Reduction Techniques

High PAPR in OFDM systems can reduce RF power-amplifier efficiency, i.e. RF power out vs. supply power in. Two PAPR-reduction techniques, Active Constellation Extension (ACE) and Tone Reservation (TR) are supported in DVB-T2, leading to a substantial reduction of PAPR, at the expense of a small average-power increment and/or at most 1 % reserved sub-carriers. Early practical implementations have shown a reduction of 2 dB in PAPR (at 36 dB MER). The ACE technique reduces the PAPR by extending outer constellation points in the frequency domain, while the TR reduces the PAPR by directly cancelling out signal peaks in time domain using a set of impulse-like kernels made of the reserved sub-carriers. The two techniques are complementary, i.e. the ACE outperforms the TR in a low-order modulation while the TR outperforms the ACE in a high-order modulation. The two techniques are not mutually exclusive and a combination of them can be used. However, ACE cannot be used with rotated constellations.


9.1.1.13 – Future Extension Frames (FEFs)

In order to build a hook into the original DVB-T2 standard for future advancements, such as MIMO or a fully mobile branch of the standard, a placeholder known as Future Extension Frame (FEF) parts is included. The only currently defined attributes of FEF parts, which are inserted between T2-frames (see “Figure (9.i6)”), are that they begin with a P1 symbol, and that their positions in the super-frame and their duration in time are signaled in the L1 signaling in the T2-frames. This enables early receivers to ignore the FEFs whilst still receiving the T2 signal, as desired.

9.1.2 –System Architecture

A generic block diagram of the DVB-T2 system is presented in “Figure (9.i7)”. There may be multiple copies of the sub-modules, depending on the number of physical layer pipes (PLPs); these are represented by the shadows behind the sub-modules.

9.1.2.1 – Mode Adaptation Module

At the beginning of the DVB-T2 system, the input interface may have an input pre-processor that takes transport streams or generic streams as input. Afterwards, the service splitter inside the input pre-processor may split the services of the transport streams into logical data streams, which are then carried by individual PLPs. The mode adaptation module then works individually on the contents of each PLP, slicing the stream into data fields and adding a baseband header at the start of each field. Its sub-modules are: input stream synchronizer, null packet deletion, cyclic redundancy check (CRC-8) encoder, and baseband header insertion.

Figure (9.i6)

Figure (9.i7)


9.1.2.2 – Stream Adaptation Module

The stream adaptation module takes a baseband header followed by a data field and builds baseband frames. It comprises three sub-modules: scheduler, in-band signaling, and baseband scrambler.

9.1.2.3 – Bit Interleaved Coding and Modulation Module

It takes a baseband frame as input and produces an output for the next frame mapper module. To carry out this task, the BICM module performs FEC encoding (BCH+LDPC), bit interleaving, de-multiplexing bits to cells, mapping cells to constellation points, and finally performs constellation rotation and cyclic Q-delay.

9.1.2.4 – Frame Mapper Module

It consists of a cell interleaver, time interleaver, frame builder and frequency interleaver. It takes its input from the BICM module and produces output for the modulator module.

9.1.2.5 – Modulator Module

The modulator module consists of MISO processing, pilot insertion, IFFT, PAPR reduction, guard-interval insertion, P1 symbol insertion and D/A converter sub-modules.

9.2 – Mode Adaptation Module

The input to the T2 system shall consist of one or more logical data streams. One logical data stream is carried by one Physical Layer Pipe (PLP). The mode adaptation modules, which operate separately on the contents of each PLP, slice the input data stream into data fields which, after stream adaptation, will form baseband frames (BBFRAMEs). The mode adaptation module comprises the input interface, followed by three optional sub-systems (the input stream synchronizer, null packet deletion and the CRC-8 encoder), and then finishes by slicing the incoming data stream into data fields and inserting the baseband header (BBHEADER) at the start of each data field.

Each input PLP may have one of the formats specified in “Clause (9.2.1.1)”. The mode adaptation module can process input data in one of two modes, normal mode (NM) or high efficiency mode (HEM), which are described in “Clause (9.2.1.7)” and “Clause (9.2.1.8)” respectively. In HEM, further stream-specific optimizations may be performed to reduce signaling overhead. The BBHEADER (“Clause (9.2.1.7)”) signals the input stream type and the processing mode. “Figure (9.1)” shows the input interface of DVB-T2.

Figure (9.1)


9.2.1 –Input Formats

The input pre-processor/service splitter shall supply the mode adaptation module(s) with a single stream or multiple streams (one for each mode adaptation module). In the case of a TS, the packet rate will be constant, although only a proportion of the packets may correspond to service data, the remainder being null packets. Each input stream (PLP) of the T2 system shall be associated with a modulation and FEC protection mode which is statically configurable.

Each input PLP may take one of the following formats:

- Transport Stream (TS).
- Generic Encapsulated Stream (GSE).
- Generic Continuous Stream (GCS) (a variable-length packet stream where the modulator is not aware of the packet boundaries).
- Generic Fixed-length Packetized Stream (GFPS); this form is retained for compatibility with DVB-S2, but it is expected that GSE would now be used instead.

9.2.2 –Input Interface

The input interface subsystem shall map the input into an internal logical-bit format. The first received bit will be indicated as the Most Significant Bit (MSB). Input interfacing is applied separately to each physical layer pipe (PLP). The input interface shall read a data field composed of DFL bits (Data Field Length), where 0 ≤ DFL ≤ (Kbch − 80), with Kbch the number of bits protected by the BCH and LDPC codes.

The maximum value of DFL depends on the chosen LDPC code, carrying a protected payload of Kbch bits. The 10-byte (80-bit) BBHEADER is appended to the front of the data field, and is also protected by the BCH and LDPC codes. The input interface shall either allocate a number of input bits equal to the available data-field capacity, thus breaking User Packets (UPs) across subsequent data fields (an operation called "fragmentation"), or shall allocate an integer number of UPs within the data field (no fragmentation). The available data-field capacity is equal to Kbch − 80 when in-band signaling is not used, but less when in-band signaling is used. When DFL < Kbch − 80, a padding field shall be inserted by the stream adapter to complete the LDPC/BCH code-block capacity. A padding field, if applicable, shall also be allocated in the first BBFRAME of a T2-frame to transmit in-band signaling (whether fragmentation is used or not).
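The capacity and padding arithmetic above can be sketched numerically (an illustrative Python example; Kbch = 32 208 is the value for the 64 800-bit LDPC frame at rate 1/2 in the DVB-T2/DVB-S2 coding tables):

```python
def data_field_capacity(kbch: int, inband_bits: int = 0) -> int:
    """Available data-field bits in one BBFRAME: Kbch minus the 80-bit
    BBHEADER, minus any in-band signaling carried in the frame."""
    return kbch - 80 - inband_bits

def padding_bits(kbch: int, dfl: int) -> int:
    """Padding inserted by the stream adapter when DFL < Kbch - 80."""
    return kbch - 80 - dfl

# Rate-1/2 long-frame example (Kbch = 32208)
cap = data_field_capacity(32208)   # maximum DFL without in-band signaling
pad = padding_bits(32208, 30000)   # padding when only 30000 payload bits
```

Here `cap` is 32 128 bits, and carrying a 30 000-bit data field leaves 2 128 padding bits to fill the code block.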

9.2.3 –Input Stream Synchronization

Data processing in the DVB-T2 modulator may produce a variable transmission delay on the user information. The input stream synchronizer subsystem shall provide suitable means to guarantee a constant bit rate (CBR) and constant end-to-end transmission delay for any input data format. The use of the input stream synchronizer subsystem is optional, except that it shall always be used for PLPs carrying transport streams where the number of FEC blocks per T2-frame may vary. This process shall follow the specification given in annex C of the DVB-T2 standard [ETSI EN 302 755 V1.1.1 (2009-09)]. Examples of receiver implementation are given in annex I of the same standard. This process also allows synchronization of multiple input streams traveling in independent PLPs, since the reference clock and the counter of the input stream synchronizers shall be the same.

The ISSY field (Input Stream Synchronization, 2 bytes or 3 bytes) carries the value of a counter clocked at the modulator clock rate and can be used by the receiver to regenerate the correct timing of the regenerated output stream. The carriage of the ISSY field depends on the input stream format and on the Mode. In Normal Mode the ISSY field is appended to UPs for packetized streams. In High Efficiency Mode a single ISSY field is transmitted per BBFRAME in the BBHEADER, taking advantage of the fact that the UPs of a BBFRAME travel together and therefore experience the same delay/jitter. When the ISSY mechanism is not being used, the corresponding fields of the BBHEADER, if any, shall be set to '0'. A full description of the format of the ISSY field is given in annex C of the DVB-T2 standard [ETSI EN 302 755 V1.1.1 (2009-09)].

9.2.4 –Compensating Delay for Transport Stream

The interleaving parameters PI and NTI, and the frame interval IJUMP, may be different for the data PLPs in a group and the corresponding common PLP. In order to allow the Transport Stream recombining mechanism described in annex D [ETSI EN 302 755 V1.1.1 (2009-09)] without requiring additional memory in the receiver, the input Transport Streams shall be delayed in the modulator following the insertion of Input Stream Synchronization information. The delay (and the indicated value of TTO; see annex C [ETSI EN 302 755 V1.1.1 (2009-09)]) shall be such that, for a receiver implementing the buffer strategy defined in clause C.1.1 of the same standard, the partial transport streams at the output of the de-jitter buffers for the data and common PLPs would be essentially co-timed, i.e. packets with corresponding ISCR values on the two streams would be output within 1 ms of one another.

9.2.5 –Null Packet Deletion

Transport Stream rules require that the bit rates at the output of the transmitter's multiplexer and at the input of the receiver's demultiplexer are constant in time, and that the end-to-end delay is also constant. For some Transport-Stream input signals, a large percentage of null packets may be present in order to accommodate variable bit-rate services in a constant bit-rate TS. In this case, in order to avoid unnecessary transmission overhead, TS null packets shall be identified (PID = 8191D) and removed. The process is carried out in such a way that the removed null packets can be re-inserted in the receiver in exactly their original places, thus guaranteeing a constant bit rate and avoiding the need for time-stamp (PCR) updating.

When Null Packet Deletion is used, Useful Packets (i.e. TS packets with PID ≠ 8191D), including the optional appended ISSY field, shall be transmitted, while null packets (i.e. TS packets with PID = 8191D), including the optional appended ISSY field, may be removed. See “Figure (9.2)”.

After transmission of a UP, a counter called DNP (Deleted Null-Packets, 1 byte) shall be first reset and then incremented at each deleted null packet. When DNP reaches the maximum allowed value of 255D and the following packet is again a null packet, that null packet is kept as a useful packet and transmitted.

Figure (9.2)
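The deletion rule, including the DNP saturation at 255, can be sketched as follows. This is our own simplification in which a TS packet is reduced to its bare PID value:

```python
NULL_PID = 8191  # PID of TS null packets

def delete_null_packets(pids):
    """Given a sequence of TS packet PIDs, return (kept, dnps):
    kept - PIDs actually transmitted;
    dnps - for each kept packet, the DNP value appended after it
           (the number of null packets deleted just before it)."""
    kept, dnps = [], []
    dnp = 0
    for pid in pids:
        if pid == NULL_PID and dnp < 255:
            dnp += 1                 # delete this null packet and count it
        else:
            # a useful packet, or a 256th consecutive null kept as useful
            kept.append(pid)
            dnps.append(dnp)
            dnp = 0                  # DNP is reset after each transmitted UP
    return kept, dnps
```

The receiver reverses the process by re-inserting DNP null packets before each useful packet, restoring the original constant-rate stream.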


Figure (9.3)

Figure (9.4)

9.2.6 –CRC-Encoding

CRC is applied for error detection at the UP level (Normal Mode and packetized streams only). The UPL - 8 bits of each UP (after sync-byte removal, when applicable) shall be processed by the systematic 8-bit CRC-8 encoder defined in annex F [ETSI EN 302 755 V1.1.1 (2009-09)]. The computed CRC-8 shall be appended after the UP.
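A bitwise CRC-8 encoder can be sketched as below. The generator polynomial x^8 + x^7 + x^6 + x^4 + x^2 + 1 is the one used by the corresponding DVB-S2 annex and is assumed here to match annex F; treat it as an assumption to be checked against the standard rather than a verified constant.

```python
CRC8_POLY = 0xD5  # x^8 + x^7 + x^6 + x^4 + x^2 + 1 (assumed from annex F)

def crc8(data: bytes) -> int:
    """Systematic CRC-8 over the UP, MSB first, zero initial register."""
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            fb = bit ^ (reg >> 7)            # feedback from the top stage
            reg = ((reg << 1) & 0xFF) ^ (CRC8_POLY if fb else 0)
    return reg
```

With a zero initial register and no final XOR, re-running the computation over the UP followed by its appended CRC-8 yields zero, which is how the receiver detects errors.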

9.2.7 –Base-Band Header Insertion

A fixed-length BBHEADER of 10 bytes shall be inserted in front of the baseband data field in order to describe the format of the data field. The BBHEADER shall take one of two forms, as shown in “Figure (9.3)” for normal mode (NM) and in “Figure (9.4)” for high efficiency mode (HEM). The current mode (NM or HEM) may be detected by the MODE field (EXORed with the CRC-8 field).

MATYPE (1st byte): describes the input stream format and the type of Mode Adaptation, as explained in the next table, where the fields are as follows:

TS/GS field (2 bits), Input Stream Format: Generic Packetized Stream (GFPS); Transport Stream; Generic Continuous Stream (GCS); Generic Encapsulated Stream (GSE).

SIS/MIS field (1 bit): Single or Multiple Input Streams (referred to the global signal, not to each PLP).

CCM/ACM field (1 bit): Constant Coding and Modulation or Variable Coding and Modulation.

ISSYI (1 bit) (Input Stream Synchronization Indicator): ISSYI = 1 means the ISSY mechanism is active.

NPD (1 bit): Null-packet deletion active/not active. If NPD active, then DNP shall be computed and appended after UPs.


EXT (2 bits), media specific (for T2, EXT=0: reserved for future use).

MATYPE (2nd byte): if SIS/MIS = Multiple Input Stream, then the second byte is the Input Stream Identifier (ISI); otherwise the second byte is set to '0' (reserved for future use).

BBHeader fields’ descriptions are shown in next table.

NOTE: The term ACM is retained for compatibility with DVB-S2. CCM means that all PLPs use the same coding and modulation, whereas ACM means that not all PLPs use the same coding and modulation. In each PLP, the modulation and coding will be constant in time (although it may be statically reconfigured).

9.2.8 –Mode Adaptation sub-system Output Stream Formats

This clause describes the Mode Adaptation processing and fragmentation for the various Modes and Input Stream formats, as well as illustrating the output stream format.

9.2.8.1 – Normal Mode, GFPS and TS

For Transport Streams, O-UPL = 188 × 8 bits, and the first byte shall be a Sync-byte (47HEX). UPL (the transmitted user packet length) shall initially be set equal to O-UPL. The Mode Adaptation unit shall perform the following sequence of operations (see “Figure (9.5)”).

Optional input stream synchronization; UPL increased by 16D or 24D bits according to ISSY field length; ISSY field appended after each UP. For TS, either the short or long format of ISSY may be used; for GFPS, only the short format may be used.

If a sync-byte is the first byte of the UP, it shall be removed, and stored in the SYNC field of the BBHEADER, and UPL shall be decreased by 8D. Otherwise SYNC in the BBHEADER shall be set to 0 and UPL shall remain unmodified.


Figure (9.5)

For TS only, optional null-packet deletion; DNP computation and storage after the next transmitted UP; UPL increased by 8D.

CRC-8 computation at UP level; CRC-8 storage after the UP; UPL increased by 8D.

SYNCD computation (pointing at the first bit of the first transmitted UP which starts in the Data Field) and storage in BBHEADER. The bits of the transmitted UP start with the CRC-8 of the previous UP, if used, followed by the original UP itself, and finish with the ISSY and DNP fields, if used. Hence SYNCD points to the first bit of the CRC-8 of the previous UP.

• For GFPS: UPL storage in BBHEADER.

9.2.8.2 – High Efficiency Mode, Transport Stream

For Transport Streams, the receiver knows a priori the sync-byte configuration and O-UPL = 188 × 8 bits; therefore the UPL and SYNC fields in the BBHEADER shall be re-used to transmit the ISSY field. The Mode Adaptation unit shall perform the following sequence of operations (see “Figure (9.6)”).

Optional input stream synchronization relevant to the first complete transmitted UP of the data field; ISSY field inserted in the UPL and SYNC fields of the BBHEADER.

Sync-byte removed, but not stored in the SYNC field of the BBHEADER.

Optional null-packet deletion; DNP computation and storage after the next transmitted UP.

CRC-8 at UP level shall not be computed nor inserted.

SYNCD computation (pointing at the first bit of the first transmitted UP which starts in the Data Field) and storage in BBHEADER. The bits of the transmitted UP start with the original UP itself after removal of the sync-byte, and finish with the DNP field, if used. Hence SYNCD points to the first bit of the original UP following the sync-byte.

UPL not computed nor transmitted in the BBHEADER.


Figure (9.6)

Figure (9.7)

9.2.8.3 – Normal Mode, GCS and GSE

For GCS the input stream shall have no structure, or its structure shall not be known by the modulator. For GSE the first GSE packet shall always be aligned to the data field (no GSE fragmentation allowed). For both GCS and GSE the Mode Adaptation unit shall perform the following sequence of operations (see “Figure (9.7)”).

Set UPL = 0D. SYNC values 0x00-0xB8 are reserved for transport-layer protocol signaling, and SYNC values 0xB9-0xFF are user private. SYNCD is reserved for future use and shall be set to 0D when not otherwise defined.

Null packet deletion and CRC-8 computation for the Data Field shall not be performed.

9.2.8.4 – High Efficiency Mode, GSE

GSE variable-length or constant-length UPs may be transmitted in HEM. If GSE packet fragmentation is used, SYNCD shall be computed. If the GSE packets are not fragmented, the first packet shall be aligned to the Data Field and SYNCD shall always be set to 0D. The receiver may derive the length of the UPs from the packet header; therefore UPL is not transmitted in the BBHEADER. As for TS, the optional ISSY field is transmitted in the BBHEADER.

The Mode Adaptation unit shall perform the following sequence of operations:

Optional input stream synchronization (see “Clause (5.1.3)”) relevant to the first transmitted UP which starts in the data field; ISSY field inserted in the UPL and SYNC fields of the BBHEADER.

Null-packet Deletion and CRC-8 at UP level shall not be computed nor inserted.


Figure (9.8)

SYNCD computation (pointing at the first bit of the first transmitted UP which starts in the Data Field) and storage in BBHEADER. The transmitted UP corresponds exactly to the original UP itself. Hence SYNCD points to the first bit of the original UP.

UPL not computed nor transmitted.

9.3 – Stream Adaptation Module

Stream adaptation, shown below in “Figure (9.8)”, provides the following:

Scheduling (for input mode 'B').

Padding to complete a constant-length (Kbch bits) BBFRAME and/or to carry in-band signaling.

Scrambling for energy dispersal.

The input stream to the stream adaptation module shall be a BBHEADER followed by a DATA FIELD. The output stream shall be a BBFRAME, as shown in “Figure (9.8)”.

9.3.1 –Scheduler

In order to generate the required L1 dynamic signaling information, the scheduler must decide exactly which cells of the final T2 signal will carry data belonging to which PLPs. Although this operation has no effect on the data stream itself at this stage, the scheduler shall define the exact composition of the frame structure. The scheduler works by counting the FEC blocks from each of the PLPs: starting from the beginning of the Interleaving Frame (which corresponds to one or more T2-frames), it counts separately the start of each FEC block received from each PLP. The scheduler then calculates the values of the dynamic parameters for each PLP for each T2-frame, and forwards the calculated values for insertion as in-band signaling data, and to the L1 signaling generator. The scheduler does not change the data in the PLPs whilst it is operating; instead, the data are buffered in preparation for frame building, typically in the time interleaver memories.

9.3.2 –Padding

Kbch depends on the FEC rate. Padding may be applied when the user data available for transmission are not sufficient to completely fill a BBFRAME, or when an integer number of UPs has to be allocated in a BBFRAME. In this case (Kbch - DFL - 80) zero bits shall be appended after the DATA FIELD, so that the resulting BBFRAME has a constant length of Kbch bits.
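The padding step is simple enough to state directly as code. In this sketch of ours, bit strings are used for clarity; a real modulator would of course work on packed words:

```python
def stream_adapt(bbheader: bytes, data_field_bits: str, kbch: int) -> str:
    """Assemble a BBFRAME of exactly Kbch bits by zero-padding.

    bbheader must be 10 bytes (80 bits); data_field_bits is a '0'/'1'
    string of length DFL, with 0 <= DFL <= Kbch - 80.
    """
    header_bits = ''.join(f'{b:08b}' for b in bbheader)
    dfl = len(data_field_bits)
    padding = '0' * (kbch - dfl - 80)      # (Kbch - DFL - 80) zero bits
    bbframe = header_bits + data_field_bits + padding
    assert len(bbframe) == kbch            # constant BBFRAME length
    return bbframe
```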

9.3.3 –Using Padding Fields for in-band Signaling

In input mode 'B', the PADDING field may also be used to carry in-band signaling. In-band signaling carrying L1/L2 update information and co-scheduled information is defined as in-band type A. When the IN-BAND_FLAG field in the L1-post signaling is set to '0', in-band type A is not carried in the PADDING field. The use of in-band type A is mandatory for PLPs that appear in every T2-frame and for which one Interleaving Frame is mapped to one T2-frame (i.e. the values of PI and IJUMP for the current PLP are both equal to 1).


Figure (9.9)

Equation (9.1)

Figure (9.10)

The L1 dynamic signaling for Interleaving Frame n+1 (Interleaving Frame n+2 in the case of TFS, see annex E [DVB-T2 standards ETSI EN 302 755 V1.1.1 (2009-09)]) of a PLP or multiple PLPs is inserted in the PADDING field of the first BBFRAME of Interleaving Frame n of each PLP. If NUM_OTHER_PLP_IN_BAND=0, the relevant PLP carries only its own in-band L1 dynamic information. If NUM_OTHER_PLP_IN_BAND>0, it carries L1 dynamic information of other PLPs as well as its own information, for shorter channel switching time.

“Figure (9.9)” shows padding format at the output of the stream adapter for in-band signaling.

9.3.4 –Scrambler

The complete BBFRAME shall be randomized. The randomization sequence shall be synchronous with the BBFRAME, starting from the MSB and ending after Kbch bits. The scrambling sequence shall be generated by the feed-back shift register of “Figure (9.10)”. The polynomial for the Pseudo Random Binary Sequence (PRBS) generator shall be as shown below in “Equation (9.1)”.

Loading of the sequence (100101010000000) into the PRBS register, as indicated in “Figure (9.10)”, shall be initiated at the start of every BBFRAME.
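A sketch of this scrambler, assuming the usual DVB baseband PRBS polynomial 1 + x^14 + x^15 is the one meant by “Equation (9.1)” (check that figure before relying on it):

```python
def bb_scramble(bits):
    """XOR the BBFRAME bits with the PRBS generated by 1 + x^14 + x^15.

    The 15-stage register is loaded with 100101010000000 at the start
    of every BBFRAME, as described in the text."""
    reg = [int(b) for b in "100101010000000"]   # stages x^1 ... x^15
    out = []
    for b in bits:
        fb = reg[13] ^ reg[14]                  # taps at x^14 and x^15
        out.append(b ^ fb)                      # XOR data with PRBS bit
        reg = [fb] + reg[:-1]                   # shift feedback into x^1
    return out
```

Because scrambling is a plain XOR with a fixed sequence, applying the same function twice recovers the original bits, which is exactly how the receiver descrambles.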


Figure (9.11)

9.4 – Bit Interleaving and Modulation Module

9.4.1 –Forward Error Correction Encoding

This sub-system shall perform outer coding (BCH), Inner Coding (LDPC) and Bit interleaving; the input stream shall be composed of BBFRAMEs and the output stream of FECFRAMEs.

Each BBFRAME (Kbch bits) shall be processed by the FEC coding subsystem to generate a FECFRAME (Nldpc bits). The parity check bits (BCHFEC) of the systematic BCH outer code shall be appended after the BBFRAME, and the parity check bits (LDPCFEC) of the inner LDPC encoder shall be appended after the BCHFEC field, as shown in “Figure (9.11)”.

Coding parameters for normal frame operation are shown in the first table below, while coding parameters for short frame operation are shown in the second table.

NOTE: For Nldpc = 64800 as well as for Nldpc = 16200 the LDPC code rate is given by Kldpc / Nldpc. In the first table the LDPC code rates for Nldpc = 64800 are given by the values in the 'LDPC Code' column. In the second table the LDPC code rates for Nldpc = 16200 are given by the values in the 'Effective LDPC rate' column, i.e. for Nldpc = 16200 the 'LDPC Code identifier' is not equivalent to the LDPC code rate.


9.4.1.1 – Outer Coding (B. C. H.)

A t-error-correcting BCH (Nbch, Kbch) code shall be applied to each BBFRAME to generate an error-protected packet. The generator polynomial of the t-error-correcting BCH encoder is obtained by multiplying the first t polynomials shown in “Figure (9.12)” for Nldpc = 64800 and in “Figure (9.13)” for Nldpc = 16200.

The encoding algorithm is illustrated in “Clause (6.4.3.6.2)”, while the decoding algorithm is briefly illustrated in “Clause (6.4.3.6.3)”.

9.4.1.2 – Inner Coding (LDPC)

The LDPC encoder treats the output of the outer encoding, I = (i0, i1, ..., iKldpc-1), as an information block of size Kldpc = Nbch, and systematically encodes it onto a code-word Λ of size Nldpc, as shown in “Equation (9.2)”.

Figure (9.13)

Figure (9.12)

Equation (9.2)


9.4.1.2.1 – Inner Coding for Normal FEC Frame

The task of the encoder is to determine Nldpc - Kldpc parity bits (p0, p1, ..., pNldpc-Kldpc-1) for every block of Kldpc information bits (i0, i1, ..., iKldpc-1). The procedure is as follows:

Initialize p0 = p1 = p2 = ... = pNldpc-Kldpc-1 = 0.

Accumulate the first information bit, i0, at the parity bit addresses shown in “Figure (9.14)” [see annex A of the standard]. Note that all additions are in GF(2), i.e. they are implemented as XOR operations.

For the next 359 information bits im, m = 1, 2, ..., 359, accumulate im at parity bit addresses {x + (m mod 360) × Qldpc} mod (Nldpc - Kldpc), where x denotes the address of the parity bit accumulator corresponding to the first bit i0, and Qldpc is a code-rate-dependent constant that can be found in “Figure (9.16)”. Continuing with the example, Qldpc = 60 for rate 2/3, so for information bit i1 the operations shown in “Figure (9.15)” are performed.

Figure (9.14)

Figure (9.15)

Figure (9.16)


For the 361st information bit i360, a new row of parity bit addresses is used [see annex A of the standard]. In a similar manner, the addresses of the parity bit accumulators for the following 359 information bits im, m = 361, 362, ..., 719, are obtained using the formula {x + (m mod 360) × Qldpc} mod (Nldpc - Kldpc), where x now denotes the address of the parity bit accumulator corresponding to the information bit i360 [see annex A of the standard].

In a similar manner, for every group of 360 new information bits, a new row from tables A.1 through A.6 [see annex A of the standard] is used to find the addresses of the parity bit accumulators.
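The accumulation procedure above can be sketched as follows. The address rows come from annex A/B of the standard and are not reproduced here; the table values in the usage example are synthetic, for illustration only. The final sequential accumulation pi = pi XOR pi-1, which the standard applies after all information bits are processed, is included so that the sketch forms a complete encoder.

```python
def ldpc_encode(info_bits, table_rows, n_ldpc, q_ldpc):
    """Accumulator-based LDPC encoding sketch.

    table_rows holds one row of parity bit addresses (from annex A/B)
    per group of 360 information bits."""
    k_ldpc = len(info_bits)
    n_parity = n_ldpc - k_ldpc
    p = [0] * n_parity
    for m, bit in enumerate(info_bits):
        row = table_rows[m // 360]                      # addresses for this group
        for x in row:
            addr = (x + (m % 360) * q_ldpc) % n_parity
            p[addr] ^= bit                              # GF(2) addition is XOR
    # Final step per the standard: sequential accumulation of parity bits
    for i in range(1, n_parity):
        p[i] ^= p[i - 1]
    return info_bits + p                                # systematic code-word
```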

9.4.1.2.2 – Inner Coding for Short FEC Frame

The Kldpc BCH-encoded bits shall be systematically encoded to generate Nldpc bits as described in “Clause (9.4.1.2.1)”, replacing the values shown in “Figure (9.16)” with the values in “Figure (9.17)”, and the tables of annex A with the tables of annex B of the standard.

9.4.1.2.3 – Inner Coding Simulation Results

“Figure (9.18)” shows LDPC performance comparison between different code rates.

Figure (9.17)

Figure (9.18)


9.4.2 –Bit Interleaver

The output Λ of the LDPC encoder shall be bit interleaved, which consists of parity interleaving followed by column twist interleaving. The parity interleaver output is denoted by U and the column twist interleaver output by V.

In the parity interleaving part, parity bits are interleaved using “Equation (9.3)”.

The configuration of the column twist interleaving for each modulation format is specified in “Figure (9.19)”.

In the column twist interleaving part, the data bits ui from the parity interleaver are serially written into the column-twist interleaver column-wise, and serially read out row-wise (the MSB of the BBHEADER is read out first) as shown in “Figure (9.20)”, where the write start position of each column is twisted by tc. This interleaver is described by the following:

The input bit ui with index i, for 0 ≤ i < Nldpc, is written to column ci, row ri of the interleaver, as shown in “Equations (9.4)”.

The output bit vj with index j, for 0 ≤ j < Nldpc, is read from row rj, column cj, as shown below in “Equation (9.5)”.

So for 64-QAM and Nldpc = 64800, the output bit order of the column twist interleaving would be: (v0, v1, v2, ..., v64799) = (u0, u5400, u16198, ..., u53992, u59391, u64790).

A longer list of the indices on the right-hand side, illustrating all 12 columns, is: 0, 5400, 16198, 21598, 26997, 32396, 37796, 43195, 48595, 53993, 59392, 64791, ..., 5399, 10799, 16197, 21597, 26996, 32395, 37795, 43194, 48594, 53992, 59391, 64790.
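The column-twist operation can be sketched generically as below. The twist values tc used in the usage check for 64-QAM, Nldpc = 64800 (0, 0, 2, 2, 3, 4, 4, 5, 5, 7, 8, 9) are inferred from the example indices quoted above and should be verified against the table referenced in “Figure (9.19)”.

```python
def column_twist_interleave(bits, n_cols, twists):
    """Write bits column-wise into an (n_rows x n_cols) array, with the
    write start position of column c shifted down by twists[c], then
    read the array out row-wise."""
    n_rows = len(bits) // n_cols
    grid = [[0] * n_cols for _ in range(n_rows)]
    i = 0
    for c in range(n_cols):
        for r in range(n_rows):
            grid[(r + twists[c]) % n_rows][c] = bits[i]   # twisted write
            i += 1
    return [grid[r][c] for r in range(n_rows) for c in range(n_cols)]
```

Running this with 12 columns over indices 0..64799 reproduces the output order quoted in the example above.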

Equation (9.3)

Figure (9.19)

Equation (9.4)

Equation (9.5)


9.4.3 –Mapping Bits onto Constellations

Each FECFRAME (a sequence of 64800 bits for a normal FECFRAME, or 16200 bits for a short FECFRAME) shall be mapped to a coded and modulated FEC block by first de-multiplexing the input bits into parallel cell words and then mapping these cell words into constellation values. The number of output data cells and the effective number of bits per cell, ηMOD, are defined in “Figure (9.21)”.

9.4.3.1 – Bits to Cells Word De-Multiplexer

“Figure (9.22)” shows the definition of a cell.

Figure (9.20)

Figure (9.21)

Figure (9.22)


The bit-stream vdi from the bit interleaver is de-multiplexed into Nsubstreams sub-streams, as shown in “Figure (9.23)” where the value of Nsubstreams is defined in “Figure (9.24)”.

“Figure (9.25)” shows parameters for de-multiplexing of bits to sub-streams for code rates 1/2, 3/4, 4/5 and 5/6.

“Figure (9.26)” shows parameters for de-multiplexing of bits to sub-streams for code rate 3/5 only.

Figure (9.23)

Figure (9.24)

Figure (9.25)


“Figure (9.27)” shows parameters for de-multiplexing of bits to sub-streams for code rate 2/3 only.

Figure (9.26)

Figure (9.27)


NOTE: “Figure (9.25)” is the same as “Figure (9.27)” except for the modulation format 256-QAM with Nldpc = 64800. Except for QPSK (Nldpc = 64800 or 16200) and 256-QAM (Nldpc = 16200 only), the words of width Nsubstreams are split into two cell words of width ηMOD = Nsubstreams/2 at the output of the demultiplexer. The first Nsubstreams/2 bits [b0,do ... bNsubstreams/2-1,do] form the first of a pair of output cell words [y0,2do ... yηMOD-1,2do], and the remaining bits [bNsubstreams/2,do ... bNsubstreams-1,do] form the second output cell word [y0,2do+1 ... yηMOD-1,2do+1] fed to the constellation mapper.

In the case of QPSK (Nldpc = 64 800 or 16 200) and 256-QAM (Nldpc=16 200 only), the words of width Nsubstreams from the demultiplexer form the output cell words and are fed directly to the constellation mapper.
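The resulting cell structure can be illustrated as follows. This is a simplification of ours: the actual demultiplexer also permutes bits between sub-streams according to the parameter tables referenced above, which is omitted here.

```python
ETA_MOD = {"QPSK": 2, "16-QAM": 4, "64-QAM": 6, "256-QAM": 8}

def fecframe_to_cells(bits, modulation):
    """Group an interleaved FECFRAME into cell words of eta_mod bits each;
    the number of output data cells is len(bits) / eta_mod."""
    eta = ETA_MOD[modulation]
    assert len(bits) % eta == 0
    return [tuple(bits[i:i + eta]) for i in range(0, len(bits), eta)]
```

For example, a normal 64800-bit FECFRAME with 64-QAM yields 10800 cells of 6 bits each.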

9.4.3.3 – Cell Word Mapping into I/Q constellations

9.4.3.3.1 – Equation Representation

Each cell word (y0,q..yηmod-1,q) from the demultiplexer shall be modulated using either QPSK, 16-QAM, 64-QAM or 256-QAM constellations to give a constellation point zq prior to normalization.

BPSK is only used for the L1 signalling but the constellation mapping is specified here.

The exact values of the real and imaginary components Re(zq) and Im(zq) for each combination of the relevant input bits ye,q are shown below for the various constellations.

“Figure (9.28)” shows the real and imaginary portions of mapped BPSK cells.

“Figure (9.29)” shows the real and imaginary portions of mapped QPSK cells.

“Figure (9.30)” shows the real and imaginary portions of mapped 16-QAM cells.

“Figure (9.31)” shows the real and imaginary portions of mapped 64-QAM cells.

Figure (9.28)

Figure (9.29)

Figure (9.30)


“Figure (9.32)” shows the real and imaginary portions of mapped 256-QAM cells.

9.4.3.3.2 – Constellation Representation

“Figure (9.33)” shows the constellation representation of mapped QPSK cells.

“Figure (9.34)” shows the constellation representation of mapped 16-QAM cells.

Figure (9.31)

Figure (9.32)

Figure (9.33)

Figure (9.34)


“Figure (9.35)” shows the constellation representation of mapped 64-QAM cells.

“Figure (9.36)” shows the constellation representation of mapped 256-QAM cells.

Figure (9.35)

Figure (9.36)


The constellation points zq for each input cell word (y0,q..yηmod-1,q) are normalized according to “Figure (9.37)” to obtain the correct complex cell value fq to be used.
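Assuming the usual odd-integer QAM grids (±1, ±3, ... on each axis), the mean constellation energies are 2, 10, 42 and 170, which gives the normalization sketch below. The exact factors are the ones in “Figure (9.37)” and should be checked there.

```python
import math

# Divide zq by the square root of the mean constellation energy so that
# the average cell power is 1 (assumed odd-integer grids).
NORM = {"QPSK": math.sqrt(2), "16-QAM": math.sqrt(10),
        "64-QAM": math.sqrt(42), "256-QAM": math.sqrt(170)}

def normalize(zq, modulation):
    """Map an unnormalized constellation point zq to the cell value fq."""
    return zq / NORM[modulation]

# Check: the normalized 16-QAM grid has unit average energy
grid = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
avg = sum(abs(normalize(z, "16-QAM")) ** 2 for z in grid) / len(grid)
```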

“Figure (9.38)” shows a brief comparison between the performance of the modulation techniques discussed above.

Figure (9.37)

Figure (9.38)


9.4.4 –Constellation Rotation and Cyclic Q-Delay

When constellation rotation is used, the normalized cell values of each FEC block F=(f0, f1, …, fNcells-1), coming from the constellation mapper are rotated in the complex plane and the imaginary part cyclically delayed by one cell within a FEC block. Ncells is the number of cells per FEC block and is given in “Figure (9.39)”. The output cells G=(g0, g1, …, gNcells-1) are given by “Equations (9.6)”.

Where the rotation angle Φ depends on the modulation and is as shown in “Figure (9.40)”.

Constellation rotation shall only be used for the common PLPs and the data PLPs, and never for the cells of the L1 signalling. When constellation rotation is not used, the cells are passed on to the cell interleaver unmodified. If rotated constellation diagrams are used, the information about the position of a constellation point is contained both in the I component and in the Q component of the signal (“Figure (9.41)”). In the case of disturbance, this can be used to provide more reliable information about the position of the constellation point, in contrast to a non-rotated diagram (“Figure (9.42)”), contributing to better decidability. In contrast to a non-rotated constellation diagram, the IQ information, which is now discrete, can be used for soft decisions if necessary. Practice will show how much actual benefit can be derived from this.

Figure (9.39)

Equation (9.6)

Figure (9.40)

Figure (9.41)


In reality, however, the whole process is slightly more complex. With a rotated diagram, the Q component is not transmitted on the same carrier, or more precisely in the same "cell", but with a delay on another carrier, i.e. in another cell (“Figure (9.42)”, “Figure (9.43)”). From one QAM, virtually two ASKs (Amplitude Shift Keying modulations) in the I and Q directions are produced, which are then transmitted on independent carriers ("cells") that are disturbed differently in practice and are thus intended to contribute to the reliability of demodulation.
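A sketch of the rotation and cyclic Q-delay. The angles used here (29.0°, 16.8°, 8.6° and atan(1/16) for QPSK, 16-QAM, 64-QAM and 256-QAM respectively) are quoted from memory of the standard's table and must be verified against “Figure (9.40)”.

```python
import cmath
import math

# Rotation angles in degrees (assumed values; verify against the standard)
PHI = {"QPSK": 29.0, "16-QAM": 16.8, "64-QAM": 8.6,
       "256-QAM": math.degrees(math.atan(1 / 16))}

def rotate_and_q_delay(cells, modulation):
    """Rotate each normalized cell by PHI and cyclically delay the Q
    component by one cell within the FEC block."""
    phi = math.radians(PHI[modulation])
    rot = [f * cmath.exp(1j * phi) for f in cells]
    n = len(cells)
    # g_q takes the I component of cell q and the Q component of cell q-1
    return [complex(rot[q].real, rot[(q - 1) % n].imag) for q in range(n)]
```

Since the rotation is unitary and the Q-delay only permutes the imaginary parts within the block, the total energy of the FEC block is unchanged.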

9.5 – Frame Mapper Module

9.5.1 –Cell Interleaver

The Pseudo-Random Cell Interleaver (CI), illustrated in “Figure (9.44)” below, shall uniformly spread the cells in the FEC code-word, to ensure in the receiver an uncorrelated distribution of channel distortions and interference along the FEC code-words, and shall differently "rotate" the interleaving sequence in each of the FEC blocks of one Time Interleaver Block.

Figure (9.41)

Figure (9.42)

Figure (9.43)


The input of the Cell Interleaver, G(r) = (gr,0, gr,1, gr,2, ..., gr,Ncells-1), shall be the data cells (g0, g1, g2, ..., gNcells-1) of the FEC block of index 'r', generated by the constellation rotation and cyclic Q delay. When time interleaving is not used, the value of 'r' shall be 0 for every FEC block.

The output of the CI shall be a vector D(r) = (dr,0, dr,1, ..., dr,Ncells-1) defined by “Equation (9.6)” as dr,Lr(q) = gr,q for q = 0, 1, ..., Ncells-1. The permutation function Lr(q) is given by Lr(q) = [L0(q) + P(r)] mod Ncells, where L0(q) is the basic permutation function and P(r) is the shift value to be used in FEC block r of the TI-block. “Figure (9.44)” shows the structure of the cell interleaving.
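Under the assumption that the permutation has the shifted form Lr(q) = (L0(q) + P(r)) mod Ncells, the cell interleaver reduces to a few lines. The basic permutation, which the standard generates with an LFSR, is replaced here by an arbitrary fixed permutation for illustration.

```python
def cell_interleave(cells, perm_basic, shifts, r):
    """Sketch of the CI: out[L_r(q)] = cells[q], with
    L_r(q) = (perm_basic[q] + shifts[r]) % Ncells.

    perm_basic stands in for the basic permutation L0 and shifts[r]
    rotates it differently for each FEC block of the TI-block."""
    n = len(cells)
    out = [None] * n
    for q, g in enumerate(cells):
        out[(perm_basic[q] + shifts[r]) % n] = g
    return out
```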

9.5.2 –Time Interleaver

The FEC blocks from the cell interleaver for each PLP shall be grouped into Interleaving Frames (which are mapped onto one or more T2-frames).

Each Interleaving Frame shall contain a dynamically variable whole number of FEC blocks. The number of FEC blocks in the Interleaving Frame of index n is denoted by NBLOCKS_IF(n) and is signaled as PLP_NUM_BLOCKS in the L1 dynamic signaling. NBLOCKS_IF may vary from a minimum value of 0 to a maximum value NBLOCKS_IF_MAX, which is signaled in the configurable L1 signaling as PLP_NUM_BLOCKS_MAX; the largest value this may take is 1023.

Equation (9.6)

Figure (9.44)


Each Interleaving Frame is either mapped directly onto one T2-frame or spread out over several T2-frames, as described before. Each Interleaving Frame is also divided into one or more (NTI) TI-blocks, where a TI-block corresponds to one usage of the time interleaver memory, as described before. The TI-blocks within an Interleaving Frame can contain a slightly different number of FEC blocks. If an Interleaving Frame is divided into multiple TI-blocks, it shall be mapped to only one T2-frame.

There are therefore three options for time interleaving for each PLP:

9.5.2.1 – Case I

Each Interleaving Frame contains one TI-block and is mapped directly to one T2-frame, as shown in “Figure (9.45)”. This option is signaled in the L1-signalling by TIME_IL_TYPE='0' and TIME_IL_LENGTH='1'.

9.5.2.2 – Case II

Each Interleaving Frame contains one TI-block and is mapped to more than one T2-frame. “Figure (9.46)” shows an example in which one Interleaving Frame is mapped to two T2-frames, with FRAME_INTERVAL (IJUMP) = 2. This gives greater time diversity for low data-rate services. This option is signaled in the L1-signalling by TIME_IL_TYPE='1'.

9.5.2.3 – Case III

Each Interleaving Frame is mapped directly to one T2-frame and the Interleaving Frame is divided into several TI-blocks, as shown in “Figure (9.47)”. Each of the TI-blocks may use up to the full TI memory, thus increasing the maximum bit-rate for a PLP. This option is signaled in the L1-signalling by TIME_IL_TYPE='0'.

Figure (9.45)

Figure (9.46)


Graphical representation of the time interleaving block is shown below in “Figure (9.48)”.

9.5.3 –Frame Builder

The frame builder functions described in this clause always apply for a T2 system with a single RF channel. Some of the frame builder functions for a TFS system with multiple RF channels differ from those defined in this clause; the TFS-specific frame builder functions are defined in annex E. The other frame builder functions for a TFS system apply as described in this clause.

Figure (9.47)

Figure (9.48)

The function of the frame builder is to assemble the cells produced by the time interleavers for each of the PLPs and the cells of the modulated L1 signaling data into arrays of active OFDM cells corresponding to each of the OFDM symbols which make up the overall frame structure. The frame builder operates according to the dynamic information produced and the configuration of the frame structure.

9.5.3.1 – Frame Structure

The DVB-T2 frame structure is shown in “Figure (9.49)”. At the top level, the frame structure consists of super-frames, which are divided into T2-frames and these are further divided into OFDM symbols. The super-frame may in addition have FEF parts.

9.5.3.2 – Super Frame

A super-frame, shown below in "Figure (9.50)", can carry T2-frames and may also have FEF parts; the number of T2-frames in a super-frame is a configurable parameter NT2 that is signaled in the L1-pre signaling.

A FEF part may be inserted between T2-frames. There may be several FEF parts in the super-frame, but a FEF part shall not be adjacent to another FEF part. The location in time of the FEF parts is signaled based on the super-frame structure.

The maximum value for the super-frame length TSF is 64s if FEFs are not used (equivalent to 255 frames of 250 ms) and 128s if FEFs are used. Note also that the indexing of T2-frames and NT2 are independent of Future Extension Frames.

Figure (9.49)

Figure (9.50)

9.5.3.3 – T2-Frame

The T2-frame comprises one P1 preamble symbol, followed by one or more P2 preamble symbols, followed by a configurable number of data symbols. In certain combinations of FFT size, guard interval and pilot pattern, the last data symbol shall be a frame closing symbol. The details of the T2-frame structure are described later.

The P1 symbols are unlike ordinary OFDM symbols and their insertion is described later; the P2 symbol(s) follow immediately after the P1 symbol.

The main purpose of the P2 symbol(s) is to carry L1 signaling data. The L1 signaling data to be carried, its modulation and error correction coding, and the mapping of this data onto the P2 symbol(s) are all described later.

9.5.3.4 – Duration of the T2-Frame

The beginning of the first preamble symbol (P1) marks the beginning of the T2-frame. The number of P2 symbols NP2 is determined by the FFT size as given in table 45, whereas the number of data symbols Ldata in the T2-frame is a configurable parameter signaled in the L1-pre signaling, i.e. Ldata = NUM_DATA_SYMBOLS. The total number of symbols in a frame (excluding P1) is given by LF = NP2 + Ldata. The T2-frame duration is therefore given by "Equation (9.7)".

TS is the total OFDM symbol duration and TP1 is the duration of the P1 symbol. The maximum value for the frame duration TF shall be 250 ms, which bounds the maximum number of symbols LF. The minimum number of OFDM symbols LF shall be NP2+3 when the FFT size is 32K and NP2+7 in the other modes; in addition, when the FFT size is 32K, the number of OFDM symbols LF shall be even.

The P1 symbol carries only P1-specific signaling information. P2 symbol(s) carry all the remaining L1 signaling information and, if there is free capacity, they also carry data from the common PLPs and/or data PLPs. Data symbols carry only common PLPs or data PLPs as defined.
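As a worked check of the LF limit described above, the following sketch computes the maximum symbol count for one configuration, assuming the frame duration relation TF = LF · TS + TP1. The numeric values (8 MHz channel with elementary period T = 7/64 µs, 32K FFT, guard interval 1/128, NP2 = 1, P1 duration of 2048 T) are assumptions made for the example, not values taken from this text.

```python
# Worked check of the T2-frame symbol-count limits, under assumed example
# values: 8 MHz channel (elementary period T = 7/64 us), 32K FFT,
# guard interval 1/128, NP2 = 1 P2 symbol, P1 duration = 2048 T.
T_US = 7.0 / 64.0                 # elementary period in microseconds
TU_US = 32768 * T_US              # useful symbol duration, 32K mode
TS_US = TU_US * (1 + 1 / 128)     # total symbol duration incl. guard interval
TP1_US = 2048 * T_US              # P1 preamble duration
TF_MAX_US = 250_000               # maximum frame duration: 250 ms

# Largest LF satisfying TF = LF * TS + TP1 <= 250 ms
lf_max = int((TF_MAX_US - TP1_US) // TS_US)
if lf_max % 2:                    # 32K mode additionally requires LF even
    lf_max -= 1

ldata_max = lf_max - 1            # Ldata = LF - NP2 with NP2 = 1
```

For these assumed values the limit works out to 68 symbols per frame, illustrating how the 250 ms cap and the even-LF rule for 32K interact.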

The mapping of PLPs into the symbols is done at the OFDM cell level, and thus P2 or data symbols can be shared between multiple PLPs. If there is free capacity left at the end of the T2-frame, it is filled with auxiliary streams (if any) and dummy cells as defined later. In the T2-frame, the common PLPs are always located before the data PLPs.

9.5.3.5 – Capacity and Structure of T2-Frame

The frame builder shall map the cells from both the time interleaver (for the PLPs) and the constellation mapper (for the L1-pre and L1-post signaling) onto the data cells of each OFDM symbol in each frame,

where:

"m" is the T2-frame number.

"l" is the index of the symbol within the frame, starting at 0 for the first P2 symbol, 0 ≤ l < LF.

"p" is the index of the data cell within the symbol prior to frequency interleaving and pilot insertion.

Data cells are the cells of the OFDM symbols which are not used for pilots or tone reservation.

Equation (9.7): TF = LF · TS + TP1

The P1 symbol is not an ordinary OFDM symbol and does not contain any active OFDM cells; the values of Cdata when tone reservation is used are calculated by subtracting the value in the "TR cells" column from the Cdata value without tone reservation. For 8K, 16K and 32K two values are given corresponding to normal carrier mode and extended carrier mode.

In some combinations of FFT size, guard interval and pilot pattern, the last symbol of the T2-frame is a special frame closing symbol. It has a denser pilot pattern than the other data symbols, and some of its cells are not modulated in order to maintain the same total symbol energy. When there is a frame closing symbol, the number of data cells it contains is denoted by NFC and is defined in "Figure (9.51)". The lesser number of active cells, i.e. the data cells that are modulated, is denoted by CFC and is defined in table 44. Both NFC and CFC are tabulated for the case where tone reservation is not used; the corresponding values when tone reservation is used are calculated by subtracting the value in the "TR cells" column from the value without tone reservation.

9.5.3.6 – Signaling of the T2-Frame structure and PLPs

The configuration of the T2-frame structure is signaled by the L1-pre and L1-post signaling. The locations of the PLPs themselves within the T2-frame can change dynamically from T2-frame to T2-frame, and this is signaled both in the dynamic part of the L1-post signaling in P2 and in the in-band signaling. Repetition of the dynamic part of the L1-post signaling may be used to improve robustness.

In a system with one RF channel, the L1-post dynamic signaling transmitted in P2 refers to the current T2-frame and the in-band signaling refers to the next Interleaving Frame; this is depicted in "Figure (9.52)". In a TFS system, the L1-post dynamic signaling transmitted in P2 refers to the next T2-frame and the in-band signaling refers to the next-but-one Interleaving Frame, as described in annex E. When the Interleaving Frame is spread over more than one T2-frame, the in-band signaling carries the dynamic signaling for each T2-frame of the next Interleaving Frame.

Figure (9.51)

Figure (9.52)

9.5.3.7 – Overview over T2-Frame Mapping

The slices and sub-slices of the PLPs, the auxiliary streams and the dummy cells are mapped into the symbols of the T2-frame as illustrated in "Figure (9.53)". The T2-frame starts with a P1 symbol followed by NP2 P2 symbols. The L1-pre and L1-post signaling are first mapped into the P2 symbol(s). After that, the common PLPs are mapped right after the L1 signaling. The data PLPs follow the common PLPs, starting with the type 1 PLPs; the type 2 PLPs follow the type 1 PLPs.

The auxiliary stream or streams, if any, follow the type 2 PLPs, and this can be followed by dummy cells.

Together, the PLPs, auxiliary streams and dummy data cells shall exactly fill the remaining cells in the frame.

Note: The L1 signaling carries control bits, which are the most important bits, so a low BER (bit error rate) must be achieved to guarantee that the right data arrive. QPSK is therefore the best modulation for achieving a low BER, and the rate-1/4 code is likewise the best choice for the same reason.

9.5.3.8 – Auxiliary Stream Insertion

Following the type 2 PLPs, one or more auxiliary streams may be added. Each auxiliary stream consists of a sequence of cell values in each T2-frame, indexed by the auxiliary stream index. The cell values shall have the same mean power as the data cells of the data PLPs, but apart from this restriction they may be used as required by the broadcaster or network operator. The auxiliary streams are mapped one after another onto the cells in order of increasing cell address, starting from the first address following the last cell of the last sub-slice of the last type 2 PLP.

The start position and number of cells for each auxiliary stream may vary from T2-frame to T2-frame, and bits are reserved to signal these parameters in the L1 dynamic signaling.

The cell values for auxiliary streams need not be the same for all transmitters in a single frequency network. However, if MISO is used, care shall be taken to ensure that the auxiliary streams do not interfere with the correct decoding of the data PLPs.

Specific uses of auxiliary streams, including coding and modulation, will be defined either in future editions of the present document or elsewhere. The auxiliary streams may be ignored by the receiver. If the number of auxiliary streams is signaled as zero, this clause is ignored.

Figure (9.53)

9.5.3.9 – Future Extension Frames

Future Extension Frame (FEF) insertion enables carriage of frames defined in a future extension of the DVB-T2 standard in the same multiplex as regular T2-frames. The use of future extension frames is optional.

A future extension frame may carry data in a way unknown to a DVB-T2 receiver addressing the current standard version. A receiver addressing the current standard version is not expected to decode future extension frames, but all receivers are expected to detect FEF parts.

A FEF part shall begin with a P1 symbol that can be detected by all DVB-T2 receivers. The maximum length of a FEF part is 250 ms. All other parts of the future extension frames will be defined in future extensions of the present document or elsewhere. The detection of FEF parts is enabled by the L1 signaling carried in the P2 symbol(s). The configurable L1 fields signal the size and structure of the super-frame: NUM_T2_FRAMES describes the number of T2-frames carried during one super-frame; the location of the FEF parts is described by the L1 signaling field FEF_INTERVAL, which is the number of T2-frames at the beginning of a super-frame before the beginning of the first FEF part (the same field also describes the number of T2-frames between two FEF parts); and the length of the FEF part is given by the FEF_LENGTH field of the L1 signaling, which describes the time between the two DVB-T2 frames preceding and following a FEF part as a number of elementary time periods T, i.e. samples in the receiver.

The parameters affecting the configuration of FEFs shall be chosen to ensure that, if a receiver obeys the TTO signaling and implements the model of buffer management, the receiver's de-jitter buffer and time de-interleaver memory shall neither overflow nor underflow.

NOTE: In order not to affect the reception of the T2 data signal, it is assumed that the receiver's automatic gain control will be held constant for the duration of the FEF part, so that it is not affected by any power variations during the FEF part.

9.5.4 –Frequency Interleaver

The purpose of the frequency interleaver, operating on the data cells of one OFDM symbol, is to map the data cells from the frame builder onto the Ndata available data carriers in each symbol. Ndata = CP2 for the P2 symbol(s), Ndata = Cdata for the normal symbols, and Ndata = NFC for the Frame Closing symbol, if present. For the P2 symbol(s) and all other symbols, the frequency interleaver shall process the data cells Xm,l = (xm,l,0, xm,l,1, …, xm,l, Ndata-1) of the OFDM symbol l of T2-frame m, from the frame builder. Thus for example in the 8k mode with scattered pilot pattern PP7 and no tone reservation, blocks of 6698 data cells from the frame builder during normal symbols form the input vector Xm,l = (xm,l,0, xm,l,1, xm,l,2,...xm,l,6697). A parameter Mmax is then defined as shown below in “Figure (9.54)”.

Figure (9.54)

The interleaved vector Am,l = (am,l,0, am,l,1, am,l,2, ..., am,l,Ndata-1) is defined by "Equation (9.8)".

For mode 32K:
am,l,H(p) = xm,l,p for even symbols of the frame (l mod 2 = 0), for p = 0, ..., Ndata-1;
am,l,p = xm,l,H(p) for odd symbols of the frame (l mod 2 = 1), for p = 0, ..., Ndata-1.

For modes 1K, 2K, 4K, 8K and 16K:
am,l,p = xm,l,H0(p) for even symbols of the frame (l mod 2 = 0), for p = 0, ..., Ndata-1;
am,l,p = xm,l,H1(p) for odd symbols of the frame (l mod 2 = 1), for p = 0, ..., Ndata-1.

H(p), H0(p) and H1(p) are permutation functions based on bit sequences R'i of Nr-1 bits, where Nr = log2 Mmax. R'i takes the following values:

i = 0, 1: R'i[Nr-2, Nr-3, ..., 1, 0] = 0, 0, ..., 0, 0
i = 2: R'i[Nr-2, Nr-3, ..., 1, 0] = 0, 0, ..., 0, 1
2 < i < Mmax: R'i[Nr-3, Nr-4, ..., 1, 0] = R'i-1[Nr-2, Nr-3, ..., 2, 1], with the new most significant bit formed by XORing taps of R'i-1:
In the 1K mode: R'i[8] = R'i-1[0] XOR R'i-1[4]
In the 2K mode: R'i[9] = R'i-1[0] XOR R'i-1[3]
In the 4K mode: R'i[10] = R'i-1[0] XOR R'i-1[2]
In the 8K mode: R'i[11] = R'i-1[0] XOR R'i-1[1] XOR R'i-1[4] XOR R'i-1[6]
In the 16K mode: R'i[12] = R'i-1[0] XOR R'i-1[1] XOR R'i-1[4] XOR R'i-1[5] XOR R'i-1[9] XOR R'i-1[11]
In the 32K mode: R'i[13] = R'i-1[0] XOR R'i-1[1] XOR R'i-1[2] XOR R'i-1[12]

A vector Ri is derived from the vector R'i by bit permutations depending on the FFT size, as shown below in "Figure (9.55)". The permutation function H(p) is defined by the algorithm represented by "Equations (9.9)".
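The address-generation recursion above can be sketched as follows for the 1K mode (Mmax = 1024, Nr = 10, with the new MSB formed from taps R'[0] and R'[4]). The mode-specific R' to R bit permutation belongs in "Figure (9.55)", so an identity permutation is used here purely as a placeholder assumption, and Ndata = 853 below is only an illustrative symbol size.

```python
# Sketch of the frequency-interleaver address generation for the 1K mode.
# The R' -> R bit permutation is a placeholder (identity) assumption.
def h_addresses(ndata, nr=10, taps=(0, 4), bit_perm=None):
    width = nr - 1
    mmax = 1 << nr
    if bit_perm is None:
        bit_perm = list(range(width))        # placeholder permutation
    # Build the R'_i sequence: R'_0 = R'_1 = 0...0, R'_2 = 0...01, then the
    # shift-register recursion with the new MSB formed by XORing the taps.
    seq = [0, 0, 1]
    state = 1
    for _ in range(3, mmax):
        msb = 0
        for t in taps:
            msb ^= (state >> t) & 1
        state = (state >> 1) | (msb << (width - 1))
        seq.append(state)
    addresses = []
    for i, r in enumerate(seq):
        ri = 0                               # apply the R' -> R permutation
        for j in range(width):
            ri |= ((r >> j) & 1) << bit_perm[j]
        addr = ((i % 2) << width) + ri       # toggle the MSB on odd i
        if addr < ndata:                     # keep addresses inside the symbol
            addresses.append(addr)
    return addresses
```

Because the toggle bit and the maximal-length shift register together enumerate each of the Mmax addresses exactly once, the filtered output visits every cell index 0 to Ndata-1 exactly once, i.e. the mapping is a permutation.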

9.6 – Modulator Module

9.6.1 –Alamouti Space Time Coding

The Alamouti scheme is historically the first space-time block code to provide full transmit diversity for systems with two transmit antennas. It is worthwhile to mention that delay diversity schemes can also achieve full diversity, but they introduce interference between symbols and complex detectors are required at the receiver.

Equation (9.8)

Equation (9.9)

9.6.1.1 – Alamouti Encoding

Let us assume that an M-ary modulation scheme is used. In the Alamouti space-time encoder, each group of m information bits is first modulated, where m = log2 M. Then, the encoder takes a block of two modulated symbols x1 and x2 in each encoding operation and maps them to the transmit antennas according to a code matrix given by "Equation (9.10)".

X = [  x1    x2
      -x2*   x1* ]

where the rows correspond to the two transmission periods and the columns to the two transmit antennas.

"Figure (9.55)" shows a brief block diagram for the Alamouti space-time code.

The encoder's outputs are transmitted in two consecutive transmission periods from the two transmit antennas.

During the first transmission period, two signals x1 and x2 are transmitted simultaneously from antenna one and antenna two, respectively. In the second transmission period, signal -x2* is transmitted from transmit antenna one and signal x1* from transmit antenna two, where x1* is the complex conjugate of x1.

It is clear that the encoding is done in both the space and time domains. We can denote the transmit sequences from antennas one and two by X1 and X2 such that X1 = [x1, -x2*] and X2 = [x2, x1*].

The key feature of the Alamouti scheme is that the transmit sequences from the two transmit antennas are orthogonal, since the inner product of the sequences X1 and X2 is zero. Let us assume that one receive antenna is used at the receiver. The fading channel coefficients from the first and second transmit antennas to the receive antenna at time t are denoted by h1(t) and h2(t), respectively. Assuming that the fading coefficients are constant across two consecutive symbol transmission periods, they can be expressed as shown below in "Equation (9.11)".

hi(t) = hi(t + T) = hi = |hi| e^(jθi), i = 1, 2

where |hi| and θi are the amplitude gain and phase shift for the path from transmit antenna i to the receive antenna, and T is the symbol duration.

At the receive antenna, the received signals over two consecutive symbol periods, denoted by r1 and r2, are shown below in "Equation (9.12)".

Equation (9.10)

Figure (9.55)

Equation (9.11)

Equation (9.12)

9.6.1.2 – Alamouti Modified Encoding

DVB-T2 contains MISO (Multiple Input/Single Output) as an option as shown below in “Figure (9.56)”. This means that possibly two transmitting antennas may be used which, however, do not radiate the same transmitted signal. Instead, adjacent symbols are transmitted repeatedly once by one and once by the other transmitting antenna in accordance with the modified Alamouti principle. This is an attempt to come closer to the Shannon limit.

The Alamouti modified encoding algorithm can be represented as a type of coding which is performed in pairs. Two data carriers are transmitted unmodified on TX1, while they are mathematically adapted and exchange positions for transmission on TX2, as shown below in "Equation (9.14)" and graphically represented in "Figure (9.57)".

Figure (9.56)

Equation (9.14)

Figure (9.57)

9.6.1.3 – Alamouti Modified Decoding using Zero Forcing Decoder

The signal components can be recovered at the receiver side as follows. The received complex values for the cells of a MISO pair, r1 and r2, are given by "Equations (9.15)".

Then:

h1 and h2 describe the channel transfer functions between Tx1 and Rx and between Tx2 and Rx, respectively, while n1 and n2 characterize the noise terms. The diversity signal may be used with an arbitrary number of receive aerials.

The received pair of cells can be represented by a vector r, and the zero forcing decoding algorithm shown below in "Equations (9.16)" is applied.

r1 = h1 · x1 - h2 · x2* + n1
r2 = h1 · x2 + h2 · x1* + n2

But, taking the complex conjugate of r2:

r2* = h2* · x1 + h1* · x2* + n2*

And, writing the pair in matrix form:

[r1 ; r2*] = [h1  -h2 ; h2*  h1*] · [x1 ; x2*] + [n1 ; n2*], i.e. r = H · x + n

Then the zero forcing estimate is:

x̂ = (H^H · H)^(-1) · H^H · r = H^H · r / (|h1|^2 + |h2|^2)

Thus the transmitted pair (x1, x2) is recovered, since H^H · H = (|h1|^2 + |h2|^2) · I.
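The pair-wise encode and zero-forcing decode steps can be sketched end-to-end as below. The signal model used here (Tx1 sending (x1, x2) and Tx2 sending (-x2*, x1*) over a flat, noiseless channel pair) is an assumption made for this illustration.

```python
# End-to-end sketch of pair-wise modified-Alamouti encoding and zero-forcing
# decoding, under an assumed flat, noiseless two-path channel model.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=2) + 1j * rng.normal(size=2)
h1, h2 = h                                   # channels Tx1->Rx and Tx2->Rx
x1, x2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # example QPSK cells

# Received pair: each cell is the superposition of both antennas' symbols
r1 = h1 * x1 - h2 * np.conj(x2)
r2 = h1 * x2 + h2 * np.conj(x1)

# Stack r = [r1, r2*] so that r = H @ [x1, x2*] with an orthogonal H
H = np.array([[h1, -h2], [np.conj(h2), np.conj(h1)]])
r = np.array([r1, np.conj(r2)])

# Zero forcing: H^H H = (|h1|^2 + |h2|^2) I, so H^H r / norm inverts H
norm = abs(h1) ** 2 + abs(h2) ** 2
est = H.conj().T @ r / norm
x1_hat, x2_hat = est[0], np.conj(est[1])
```

The orthogonality of H is what makes the zero-forcing step a simple scaled matched filter rather than a full matrix inversion.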

9.6.1.4 – Alamouti Performance

“Figure (9.58)” shows DVB-T2 system’s performance using normal transmission (SISO) and using Alamouti principle at transmission (MISO).

Equation (9.15)

Equation (9.16)

9.6.2 –Pilot Insertion

Various cells within the OFDM frame are modulated with reference information whose transmitted value is known to the receiver. These cells, called pilot cells, are transmitted at a "boosted" power level.

The pilots can be used for frame synchronization, frequency synchronization, time synchronization, channel

estimation, transmission mode identification and can also be used to follow the phase noise. DVB-T2 has the following pilots, 1. Edge pilots. 2. Continual Pilots. 3. Scattered Pilots. 4. P2-Pilots. 5. Frame Closing Pilots. “Figure (9.59)” gives an overview of the different types of pilots and there distribution over different types of

transmitted symbols.

Figure (9.58)

Figure (9.59)

9.6.2.1 – Definition of Reference Plane

The pilots are modulated according to a reference sequence, rl,k, where l and k are the symbol and carrier indices as previously defined. The reference sequence is derived from a symbol-level PRBS, wk, and a frame-level PN-sequence, pnl. This reference sequence is applied to all the pilots (i.e. scattered, continual, edge, P2 and frame closing pilots) of each symbol of a T2-frame, including both P2 and frame closing symbols.

The reference sequence can be generated by XORing the two sequences of bits, as shown below in "Figure (9.60)".

9.6.2.1.1 – Symbol Level

The symbol-level PRBS sequence is generated according to "Figure (9.61)"; the shift register is initialized with all '1's so that the sequence begins w0, w1, w2, ... = 1,1,1,1,1,1,1,1,1,1,1,0,0, ...
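A minimal sketch of such a generator is shown below, assuming the generator polynomial X^11 + X^2 + 1 with feedback taken from stages 11 and 2; the exact tap placement is an assumption, chosen so that the output reproduces the stated prefix.

```python
# Sketch of the symbol-level PRBS generator, assuming the polynomial
# X^11 + X^2 + 1 (feedback from stages 11 and 2) and an all-ones init.
def symbol_prbs(n):
    reg = [1] * 11                 # 11-stage shift register, all '1's
    out = []
    for _ in range(n):
        out.append(reg[-1])        # output taken from the last stage
        fb = reg[-1] ^ reg[1]      # feedback: stage 11 XOR stage 2
        reg = [fb] + reg[:-1]      # shift the register by one
    return out
```

The first thirteen output bits reproduce the sequence quoted above: eleven '1's followed by two '0's.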

9.6.2.1.2 – Frame Level

Each value of the frame level PN-sequence is applied to one OFDM symbol of the T2-frame. The length of the frame level PN-sequence NPN is therefore equal to the T2-frame length LF, i.e. the number of symbols in the T2-frame excluding P1.

9.6.2.2 – Scattered Pilots

Scattered pilots are used by the DVB-T2 receiver to make measurements of the channel and to estimate the channel response for every OFDM cell. The measurements need to be sufficiently dense that they can follow channel variations as a function of both frequency and time.

Figure (9.60)

Figure (9.61)

In DVB-T2, a choice of 8 different pilot patterns is possible, PP1 to PP8, which gives the possibility to adapt to particular channel scenarios. The choice depends on the FFT size and impacts the Doppler performance and the performance regarding self-interference.

Reference information, taken from the reference sequence, is transmitted in scattered pilot cells in every symbol except P1, P2 and the frame-closing symbol (if applicable) of the T2-frame.

9.6.2.2.1 – Locations

A given carrier k of the OFDM signal on a given symbol l will be a scattered pilot if the appropriate relation of "Equations (9.17)", shown below, is satisfied.

For Normal Carrier mode:

k mod (Dx · Dy) = Dx · (l mod Dy)

For Extended Carrier mode:

(k - Kext) mod (Dx · Dy) = Dx · (l mod Dy)

Dx is the separation of pilot-bearing carriers and Dy is the number of symbols forming one scattered pilot sequence; Dx and Dy depend on the pilot pattern (PP1-PP8).

k ∈ [Kmin; Kmax]; l ∈ [NP2; LF-2] when there is a frame closing symbol and l ∈ [NP2; LF-1] when there is no frame closing symbol, where NP2 is the number of P2 symbols and LF is the total number of symbols in one frame.

"Figure (9.62)" shows the scattered pilot distribution over different subcarriers.
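The location test can be sketched directly from the normal-carrier-mode relation k mod (Dx·Dy) = Dx·(l mod Dy); the pattern values Dx = 3, Dy = 4 used below are illustrative.

```python
# Sketch of the scattered-pilot location test for normal carrier mode,
# with illustrative pattern parameters Dx = 3, Dy = 4.
def is_scattered_pilot(k, l, dx, dy):
    return k % (dx * dy) == dx * (l % dy)

# One pilot every Dx*Dy carriers in each symbol, shifted by Dx per symbol
grid = [[is_scattered_pilot(k, l, 3, 4) for k in range(24)] for l in range(8)]
pilots_per_symbol = [sum(row) for row in grid]
```

Each symbol carries pilots on 1 in every Dx·Dy carriers, and the pattern shifts by Dx carriers from symbol to symbol, repeating after Dy symbols.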

9.6.2.2.2 – Modulation

The modulation value of the scattered pilots is simply given by “Equation (9.18)”.

Re{cm,l,k} = 2 ASP (1/2 - rl,k), Im{cm,l,k} = 0

where ASP is the scattered pilot amplitude, rl,k is the reference sequence, m is the T2-frame index, k is the frequency index of the carrier and l is the time index of the symbol.
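The mapping of "Equation (9.18)" reduces to a boosted BPSK value, as the one-line sketch below shows; the amplitude value passed in is only an example, not a pattern-specific boosting factor.

```python
# Equation (9.18) as a function: a reference-sequence bit r in {0, 1}
# becomes the boosted real pilot value +Asp or -Asp (example amplitude).
def scattered_pilot(r, a_sp):
    return complex(2 * a_sp * (0.5 - r), 0.0)
```

So r = 0 maps to +ASP and r = 1 maps to -ASP, with a zero imaginary part.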

9.6.2.3 – Continual Pilots

In addition to the scattered pilots described above, a number of continual pilots are inserted in every symbol of the frame except for P1 and P2 and the frame closing symbol (if any). The number and location of continual pilots depends on both the FFT size and scattered pilot pattern PP1-PP8 in use.

Equation (9.17)

Equation (9.18)

9.6.2.3.1 – Locations

The continual pilot locations are taken from one or more "CP groups" (continual pilot groups) depending on the FFT mode. The pilot locations belonging to each CP group depend on the scattered pilot pattern in use. The carrier index for each CP is given by k = ki,32K mod Kmod, where Kmod is known for each FFT size.

When a carrier's location is such that it would be both a continual and a scattered pilot, the boosting value for the scattered pilot pattern (ASP) shall be used.

9.6.2.3.2 – Modulation

The modulation value for the continual pilots is simply given by "Equation (9.19)":

Re{cm,l,k} = 2 ACP (1/2 - rl,k), Im{cm,l,k} = 0

where ACP is the amplitude of the continual pilots and rl,k is the reference sequence.

9.6.2.4 – Edge Pilots

The edge carriers, carriers k=Kmin and k=Kmax, are edge pilots in every symbol except for the P1 and P2 symbol(s). They are inserted in order to allow frequency interpolation up to the edge of the spectrum.

The modulation of these cells is exactly the same as for the scattered pilots.

9.6.2.5 – P2 Pilots

9.6.2.5.1 – Locations

In 32K SISO mode, cells in the P2 symbol(s) for which k mod 6 = 0 are P2 pilots.

In all other modes (including 32K MISO), cells in the P2 symbol(s) for which k mod 3 = 0 are P2 pilots. In extended carrier mode, all cells for which Kmin ≤ k < Kmin + Kext and for which Kmax - Kext < k ≤ Kmax are also P2 pilots.

9.6.2.5.2 – Modulation

The corresponding modulation is given by “Equation (9.20)”.

Re{cm,l,k} = 2 AP2 (1/2 - rl,k), Im{cm,l,k} = 0, where AP2 is the amplitude of the P2 pilots, m is the T2-frame index, k is the frequency index of the carriers and l is the symbol index.

Equation (9.19)

Equation (9.20)

9.6.2.6 – Frame Closing Pilots

When certain combinations of FFT size, guard interval and scattered pilot pattern are used (in SISO mode), the last symbol of the frame is a special frame closing symbol. Frame closing symbols are always used in MISO mode, except with pilot pattern PP8, with which frame closing symbols are never used.

9.6.2.6.1 – Locations

The cells in the frame closing symbol for which k mod Dx = 0, except when k = Kmin and k = Kmax, are frame closing pilots, where Dx is the value used by the scattered pilot pattern in use. With an FFT size of 1K with pilot patterns PP4 and PP5, and with an FFT size of 2K with pilot pattern PP7, carrier Kmax-1 shall be an additional frame closing pilot.

9.6.2.6.2 – Modulation

The corresponding modulation is given by "Equation (9.21)":

Re{cm,l,k} = 2 ASP (1/2 - rl,k), Im{cm,l,k} = 0, where m is the T2-frame index, k is the frequency index of the carriers and l is the time index of the symbols.

9.6.2.7 – Modifications on Pilots for MISO Operation

In MISO mode, the phases of some of the scattered, continual, edge, P2 and frame closing pilots transmitted from transmitters in MISO group 2 are inverted relative to those transmitted from transmitters in MISO group 1, as described below.

The scattered pilots from transmitters in MISO group 2 are inverted compared to MISO group 1 on alternate scattered-pilot-bearing carriers as shown below in “Equations (9.22)”.

Re{cm,l,k} = 2 (-1)^(k/Dx) ASP (1/2 - rl,k), Im{cm,l,k} = 0.

The continual pilots from transmitters in MISO group 2 falling on scattered-pilot-bearing carriers are inverted compared to MISO group 1, as shown below in "Equation (9.23)".

Re{cm,l,k} = 2 (-1)^(k/Dx) ACP (1/2 - rl,k) for k mod Dx = 0
Re{cm,l,k} = 2 ACP (1/2 - rl,k) otherwise
Im{cm,l,k} = 0.

Those cells which would be both a continual and a scattered pilot are treated as scattered pilots and therefore have the amplitude ASP.

The edge pilots from transmitters in MISO group 2 are inverted compared to MISO group 1 on odd-numbered OFDM symbols, as represented in "Equation (9.24)".

Equation (9.21)

Equation (9.22)

Equation (9.23)

Re{cm,l,k} = 2 (-1)^l ASP (1/2 - rl,k), Im{cm,l,k} = 0.

The P2 pilots from transmitters in MISO group 2 are inverted compared to MISO group 1 on carriers whose indices are odd multiples of three as shown below in “Equation (9.25)”:

Re{cm,l,k} = 2 (-1)^(k/3) AP2 (1/2 - rl,k) for k mod 3 = 0
Re{cm,l,k} = 2 AP2 (1/2 - rl,k) otherwise
Im{cm,l,k} = 0.

The frame closing pilots from transmitters in group 2 are inverted compared to group 1 on alternate scattered-pilot-bearing carriers, as shown below in "Equation (9.26)":

Re{cm,l,k} = 2 (-1)^(k/Dx) ASP (1/2 - rl,k), Im{cm,l,k} = 0.

The locations and amplitudes of the pilots in MISO are the same as in SISO mode for transmitters from both MISO group 1 and MISO group 2, but additional P2 pilots are also added.

In normal carrier MISO mode, carriers in the P2 symbol(s) for which k= Kmin+1, k= Kmin+2, k=Kmax-2 and k=Kmax-1 are additional P2 pilots, but are the same for transmitters from both MISO group 1 and MISO group 2.

In extended carrier MISO mode, carriers in the P2 symbol(s) for which k = Kmin+Kext+1, k = Kmin+Kext+2, k = Kmax-Kext-2 and k = Kmax-Kext-1 are additional P2 pilots, but are the same for transmitters from both MISO group 1 and MISO group 2.

Hence, for these additional P2 pilots in MISO mode, the modulation can be obtained by applying "Equation (9.27)":

Re{cm,l,k} = 2 AP2 (1/2 - rl,k), Im{cm,l,k} = 0.

9.6.3 –Channel Estimation

The received carrier amplitudes output by the receiver FFT are not in general the same as those transmitted; they are affected by the channel through which the signal has passed on its way from the transmitter.

The channel estimate can be derived from the known information inserted in certain OFDM cells (pilots), where a cell is the entity conveyed by a particular combination of carrier (location in frequency) and symbol (location in time). The cells containing pilots carry information known to the receiver; they are affected by the channel in exactly the same way as the data, and thus the channel and noise effects can be estimated. There are two different methods for channel estimation, decision-directed and pilot-symbol-aided, and there are two major problems in designing channel estimators for wireless OFDM systems.

Equation (9.24)

Equation (9.25)

Equation (9.26)

Equation (9.27)

The first problem is the arrangement of pilot or reference signal for transmitters and receivers, the second problem is the design of an estimator with both low complexity and good performance.

9.6.3.1 – Least Square Estimator

There are many types of estimators: the minimum mean-square error (MMSE) estimator, the modified MMSE estimator, the maximum likelihood (ML) estimator and the parametric channel modeling-based (PCMB) estimator. The LS estimator has the lowest complexity of them all, but suffers from a high mean square error.

The task here is to estimate the channel conditions (specified by H) given the pilot signals (specified by the matrix X or vector X) and the received signals (specified by Y), as represented below in "Equation (9.28)", where Ĥ denotes the channel estimates.
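A minimal sketch of the LS estimator at the pilot positions is shown below, using the per-carrier form Ĥ = Y / X of the solution; the pilot and channel values are synthetic illustrations.

```python
# Minimal least-squares channel-estimation sketch at pilot positions:
# the per-carrier form H_ls = Y / X. All values here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_pilots = 8
x = 1.0 - 2.0 * rng.integers(0, 2, n_pilots)   # known +/-1 pilot symbols
h = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)  # true channel
y = h * x                                      # noiseless received pilots

h_ls = y / x                                   # LS estimate per pilot carrier
```

In the noiseless case the estimate is exact; with noise, the division amplifies the noise on weak pilots, which is the high-MSE weakness mentioned above.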

9.6.3.2 – Interpolation

The frequency interpolator can only make use of the finite number of pilot-bearing carriers. Similarly, the temporal interpolator clearly cannot access measurements from before the time the receiver was switched on or the current radio-frequency channel was selected. Much more importantly, the length of the temporal interpolator is tightly limited by the fact that the main signal stream awaiting equalization has to be delayed while the measurements to be input into the temporal interpolator are gathered.

9.6.3.2.1 – Temporal Interpolation

The presumption is that most receivers, on grounds of simplicity and of minimizing delay, will use at most simple linear temporal interpolation. "Figure (9.62)" shows an example pilot pattern and three different cases of temporal interpolation. To begin with, we will consider the simplest case of interpolation between scattered pilots.
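Linear temporal interpolation between scattered-pilot estimates on one pilot-bearing carrier can be sketched as follows; the pilot spacing of four symbols and the toy real-valued estimates are illustrative assumptions.

```python
# Sketch of simple linear temporal interpolation of channel estimates on
# one pilot-bearing carrier; pilots every 4 symbols, toy real values.
import numpy as np

pilot_symbols = np.array([0, 4, 8])           # symbol indices with pilots
pilot_estimates = np.array([1.0, 2.0, 3.0])   # channel estimates there

all_symbols = np.arange(9)
interpolated = np.interp(all_symbols, pilot_symbols, pilot_estimates)
```

The estimates at the pilot symbols are passed through unchanged, and the symbols in between receive linearly weighted combinations of the two surrounding pilot measurements.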

Equation (9.28): ĤLS = X^(-1) · Y

Figure (9.62)

9.6.3.2.2 – Frequency Interpolation

Fortunately, the frequency interpolator can use rather more taps. This is in fact necessary, since the 'time-width' (an analogous term, for frequency-domain sampling, to the common use of bandwidth in relation to time-domain sampling) of the interpolator now has to be an appreciable fraction of the Nyquist limit. "Figure (9.63)" shows a frequency interpolation example.

9.6.4 –OFDM implementation using IFFT

This part specifies the OFDM structure to be used for each transmission mode. The transmitted signal is organized in frames. Each frame has a duration TF and consists of LF OFDM symbols. NT2 frames constitute one super-frame.

Each symbol is constituted by a set of Ktotal carriers transmitted with a duration TS. It is composed of two parts: a

useful part with duration TU and a guard interval with duration. The symbols in an OFDM frame (excluding P1) are numbered from 0 to LF-1. All symbols contain data and

reference information. Since the OFDM signal comprises many separately-modulated carriers, each symbol can in turn be considered to

be divided into cells, each corresponding to the modulation carried on one carrier during one symbol. The carriers are indexed by k € [Kmin; Kmax] and determined by Kmin and Kmax. The spacing between adjacent

carriers is 1/TU while the spacing between carriers Kmin and Kmax are determined by (Ktotal-1)/TU.
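The symbol structure above can be sketched as follows. The FFT size, the guard-interval fraction of 1/8, and the use of plain QPSK cells without pilots are illustrative assumptions, not values fixed by the standard:

```python
import numpy as np

Nfft = 1024                     # toy FFT size (stands in for Ktotal cells)
g_frac = 1 / 8                  # assumed guard-interval fraction
Ng = int(Nfft * g_frac)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * Nfft)
# Map bit pairs to unit-power QPSK cells (one cell per carrier)
cells = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

useful = np.fft.ifft(cells)                       # useful part, duration TU
symbol = np.concatenate([useful[-Ng:], useful])   # guard interval + useful
# total length: Nfft + Ng samples, i.e. duration TS = TU + guard interval
```

The guard interval is a cyclic prefix, so the first Ng samples repeat the last Ng samples of the useful part, and an FFT over the useful part recovers the cells exactly.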

9.6.5 – Peak-to-Average Power Reduction

PAPR reduction stands for Peak-to-Average Power Ratio reduction, shown below in “Figure (9.64)”, and means nothing other than the reduction of the crest factor. The crest factor is the ratio of the maximum peak voltage to the RMS value.
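The PAPR of a sampled baseband signal can be computed directly; the two signals below are toy examples chosen to show the contrast between a constant-envelope carrier and an OFDM sum of many carriers:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=2048)
ofdm = np.fft.ifft(qpsk)                  # sum of many modulated carriers
tone = np.exp(2j * np.pi * 0.01 * np.arange(2048))   # constant envelope
```

A single tone has 0 dB PAPR, while the OFDM signal shows a PAPR of many dB; it is this gap that ACE and TR try to narrow.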

Figure (9.63)


In order to be able to limit this crest factor in DVB-T2, two methods of PAPR reduction are provided, namely Active Constellation Extension (ACE) (“Figure (9.65)”) and Tone Reservation (TR).

In the case of Active Constellation Extension, use is made of the fact that the outermost constellation points can be shifted further outwards, within certain limits and without affecting demodulation, in order to reduce the current crest factor, which results from the summation of all carriers, by suitably adapting certain carrier amplitudes. However, ACE is not possible with rotated constellation diagrams, which is why this method will probably not be used in DVB-T2.

In the case of Tone Reservation, the basic idea is that some carriers are reserved for reducing the PAPR. These reserved carriers do not carry any data and are instead filled with a peak-reduction signal. Because the data and reserved carriers are allocated in disjoint subsets of subcarriers, Tone Reservation needs no side information at the receiver other than an indication that the technique is in use.

“Figure (9.66)” shows the structure of an OFDM transmitter using Tone Reservation. Reserved carriers are allocated according to predetermined carrier locations, the reserved-carrier indices. After the IFFT, peak cancellation is performed to reduce the PAPR using a predetermined signal. This predetermined signal, or kernel, is generated from the reserved carriers.
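A highly simplified, single-step sketch of the tone-reservation idea. The reserved-carrier set, the kernel normalization, and the clipping target are all assumptions for illustration; the real DVB-T2 algorithm iterates over several peaks and uses the reserved-carrier indices defined in the standard:

```python
import numpy as np

N = 256
rng = np.random.default_rng(2)
reserved = np.sort(rng.choice(N, size=16, replace=False))  # assumed set
data = np.setdiff1d(np.arange(N), reserved)

cells = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)
cells[reserved] = 0                     # reserved carriers carry no data
x = np.fft.ifft(cells)

# Kernel: a peak-reduction signal living only on the reserved carriers
kern_f = np.zeros(N, dtype=complex)
kern_f[reserved] = 1.0
kernel = np.fft.ifft(kern_f)
kernel = kernel / kernel[0]             # unit sample at position 0

# One cancellation step: scale and shift the kernel onto the largest peak
k = int(np.argmax(np.abs(x)))
target = 0.8 * np.abs(x).max()          # assumed clipping target
alpha = x[k] - target * x[k] / np.abs(x[k])
y = x - alpha * np.roll(kernel, k)
```

Because the kernel occupies only reserved carriers, the data carriers of the corrected signal are untouched, which is exactly why the receiver needs no side information.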

Figure (9.64)

Figure (9.65)

Figure (9.66)


9.6.5.1 – PAPR De-Mapping

The ACE PAPR technique makes modifications to the outer transmitted-constellation points in order to reduce the peak-to-mean ratio of the transmitted signal. This could potentially affect the LLR de-mapping process in the receiver, since the assumption built into the traditional LLR is that the transmitted points are all on the QAM grid and any departure of the received point from these points is the result of additive noise. Constellation-based measures such as the average distance from the nearest constellation point are also used to estimate the level of noise and interference affecting each carrier. Clearly, this could also be misled by the use of ACE.

Receiver implementers should therefore be careful to take the use of ACE into account when developing these algorithms. The receiver can tell whether ACE is in use by examining the L1 signaling parameter 'PAPR'.

9.6.6 – Guard Interval Insertion

In DVB-T2, Alamouti coding is applied in the frequency direction. For each TX branch, pilots are inserted and a set of tones is reserved for PAPR reduction purposes. Finally, the OFDM symbols are transformed into the time domain, with subsequent insertion of the guard interval.

For each FFT size, a guard-interval fraction (Δ/TU) is defined.

9.7 – Synchronization using Control Signals

Synchronization is very important in a DVB-T2 system, and the receiver has to perform several synchronization tasks before starting the demodulation of the subcarriers. Firstly, the receiver has to obtain correct symbol timing in order to minimize the effects of ISI and ICI. Secondly, it has to determine and correct the carrier frequency offset of the received signal to remove the ICI. To deal with these different synchronization tasks, the DVB-T2 system provides several signal features, such as the P1 symbol, the P2 symbols, and the scattered and continual pilots.

At the beginning, the receiver starts the scanning process without any knowledge of the channel or the signal. Whenever the receiver finds a P1 symbol, it can deduce the presence of a T2 signal in the channel. The receiver then uses the P1 symbol to perform some symbol- and frequency-related synchronization and can determine the FFT size. Once the receiver knows the FFT size, it can move on to deduce the guard interval. To learn which pilot pattern is used in the main payload, the receiver needs to decode the L1 signaling in the P2 symbols. By decoding the L1 signaling information, the receiver knows the pilot pattern and guard interval of the data symbols.

9.7.1 – P1-Symbols

According to the frame structure of DVB-T2, every T2 frame has a P1 symbol at the very beginning. After that P1 symbol, there will be one or more P2 symbol(s) before the data symbols. “Figure (9.67)” shows the position of the P1 symbol in the DVB-T2 frame.

Figure (9.67)


9.7.1.1 – P1-Symbol Overview

The preamble symbol P1 has four main purposes. First, it is used during the initial signal scan for fast recognition of the T2 signal, for which the detection of P1 alone is enough. The construction of the symbol is such that any frequency offset can be detected directly even if the receiver is tuned to the nominal center frequency. This saves scanning time, as the receiver does not have to test all the possible offsets separately.

The second purpose of P1 is to identify the preamble itself as a T2 preamble; the P1 symbol is designed so that it can be distinguished from the other formats used in the FEF parts coexisting in the same super-frame. The third task is to signal the basic transmission parameters that are needed to decode the rest of the preamble, which helps during the initialization process, and to tell whether the transmission is SISO or MISO. The fourth purpose of P1 is to enable the receiver to detect and correct frequency and timing offsets.

9.7.1.2 – P1-Symbol Description

P1 is a 1K OFDM symbol with two 1/2 "guard interval-like" portions added. The total symbol lasts 224 µs in the 8 MHz system, comprising the 112 µs duration of the useful part 'A' of the symbol plus two modified 'guard-interval' sections 'C' and 'B' of roughly 59 µs (542 samples) and 53 µs (482 samples); see “Figure (9.68)”.

Out of the 853 useful carriers of a 1K symbol, only 384 are used; the others are set to zero. The used carriers occupy roughly a 6.83 MHz band in the middle of the nominal 7.61 MHz signal bandwidth. The design of the symbol is such that even with a maximum offset of 500 kHz, most of the used carriers in the P1 symbol are still within the 7.61 MHz nominal bandwidth, and the symbol can be recovered with the receiver tuned to the nominal center frequency. The first active carrier is carrier 44, while the last one is carrier 809 (see “Figure (9.69)”).
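The durations quoted above can be checked from the sample counts, using the DVB-T2 elementary period T = 7/64 µs for 8 MHz channels:

```python
T = 7 / 64                 # elementary period in µs for 8 MHz channels
t_A = 1024 * T             # useful part 'A' of the 1K symbol
t_C = 542 * T              # leading guard-like section (542 samples)
t_B = 482 * T              # trailing guard-like section (482 samples)
total = t_A + t_C + t_B    # whole P1 symbol: 2048 samples
```

This gives 112 µs for the useful part, about 59.3 µs and 52.7 µs for the two guard-like sections, and 224 µs in total, matching the figures in the text.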

Figure (9.68)


9.7.1.3 – P1-Symbol Generation

The actual P1 symbol structure in the DVB-T2 standard was inspired by the C-A-B structure and was implemented with a few modifications both in the symbol structure and in the receiver processing chain; “Figure (9.70)” shows a block diagram of P1 symbol generation.

9.7.1.4 – Carrier Distribution in P1-Symbol

The active carriers are distributed using the following algorithm: out of the 853 carriers of the 1K symbol, the 766 carriers in the middle are considered. Of these 766 carriers, only 384 carry pilots; the others are set to zero. In order to identify which of the 766 carriers are active, three complementary sequences are concatenated: the two sequences at the ends are each 128 chips long, while the sequence in the middle is 512 chips long (128 + 512 + 128 = 768). The last two bits of the concatenation are dropped, resulting in 766 carriers, of which 384 are active.

9.7.1.5 – Modulation of the Active Carriers in P1-Symbol

The active carriers are DBPSK-modulated with a modulation pattern encoding two signaling fields, S1 and S2; the P1 symbol has the capability to convey 7 bits. Up to 8 values (3 bits) can be signaled in the S1 field and up to 16 values (4 bits) in the S2 field. Patterns to encode S1 are based on 8 orthogonal sets of 8 complementary sequences of length 8 (the total length of each S1 pattern is 64), while patterns

Figure (9.69)

Figure (9.70)


to encode S2 are based on 16 orthogonal sets of 16 complementary sequences of length 16 (the total length of each S2 pattern is 256).

The P1 symbol carries two types of information: the first is carried by the S1 field to distinguish the preamble format and the frame type, and the second is carried by the S2 field to signal the FFT size to the receiver, as presented in “Figure (9.71)”.

Figure (9.71)


9.7.1.6 – P1-Symbol Decoding

At this stage, the P1 structure is assumed to be correctly detected and the system knows which of the active carriers are used; the received P1 sequence then has to be extracted from the carrier distribution, and unscrambling and de-mapping have to be performed in order to decode the P1 signaling. In the transmitter, the signaling fields S1 and S2 are encoded with specified patterns. Therefore, those signaling fields can be recovered in the receiver by performing a simple comparison of the received sequences with those expected patterns.
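The comparison against expected patterns amounts to picking the candidate with the largest correlation. The toy below invents random ±1 patterns, much shorter than the real complementary-sequence-based S1/S2 patterns, purely to show the decision rule:

```python
import numpy as np

rng = np.random.default_rng(4)
# 8 candidate patterns of length 64 stand in for the real S1 sequences
patterns = np.sign(rng.standard_normal((8, 64)))

tx_index = 5                             # value the transmitter encodes
rx = patterns[tx_index] + 0.3 * rng.standard_normal(64)  # noisy reception

scores = patterns @ rx                   # correlate against every candidate
s1_hat = int(np.argmax(scores))          # decoded field value
```

Because the true patterns are drawn from orthogonal sets, the correct candidate's correlation dominates all others even in considerable noise.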

Decoding the P1 symbol provides the basic transmission parameters. First, the P1 symbol discloses whether it belongs to a T2 frame or to a FEF part, because it is possible to find FEF parts within the same channel carrying the T2 signal. Decoding the P1 symbol then gives the FFT size and whether the transmission mode is SISO or MISO.

9.7.2 – P2-Symbols

The layer-1 signaling (L1 signaling) is transmitted from the modulator to the receiver in 1 to 16 P2 symbols (preamble symbols 2) per DVB-T2 frame. Physically, a P2 symbol has almost the same structure as the later data symbols: the FFT mode corresponds to that of the data symbols and is already signaled in the P1 symbol. However, the pilot density is greater.

A P2 symbol consists of a pre- and a post-signaling component. The two components are modulated and error-protected differently. The L1-pre signaling enables the reception and decoding of the L1-post signaling, which in turn conveys the parameters needed by the receiver to access the physical layer pipes. The L1-post signaling is further split into two main parts, configurable and dynamic, which may be followed by an optional extension field. The L1-post finishes with a CRC and padding (if necessary); see “Figure (9.72)”.

9.7.2.1 – L1-Signaling Data

All L1 signaling data, except for the dynamic L1-post signaling, shall remain unchanged for the entire duration of one super-frame. Hence any change to the current configuration (i.e. the contents of the L1-pre signaling or the configurable part of the L1-post signaling) shall always be made at the border between two super-frames.

Figure (9.72)


The pre-signaling component is permanently BPSK-modulated and protected with a constant error protection known to the receiver. The transmission parameters of the P2 pre-signaling component are:

- Modulation: BPSK.
- FEC encoding: BCH + 16K LDPC.
- LDPC code rate: 1/2.

The transmission parameters of the P2 post-signaling component are:

- Modulation: BPSK, QPSK, 16-QAM, or 64-QAM.
- FEC encoding: BCH + 16K LDPC.
- LDPC code rate: 1/2 with QPSK, 16-QAM, and 64-QAM, or 1/4 with BPSK.

P2 data in part 1 (constant length, L1-pre signaling) contains:

- Guard interval.
- Pilot pattern.
- Cell ID.
- Network ID.
- PAPR usage.
- Number of data symbols.
- L1-post signaling parameters.

P2 data in part 2 (variable length, L1-post signaling) contains:

- Number of PLPs.
- RF frequency.
- PLP IDs.
- PLP signaling parameters.

9.7.2.2 – Repetition of L1-Post dynamic data

To obtain increased robustness for the dynamic part of the L1-post signaling, the information may be repeated in the preambles of two successive T2-frames. The use of this repetition is signaled by the L1-pre parameter L1_REPETITION_FLAG. If the flag is set to '1', the dynamic L1-post signaling for both the current and the next T2-frame is present in the P2 symbol(s), as illustrated in “Figure (9.73)”. Thus, if repetition of the L1-post dynamic data is used, the L1-post signaling consists of one configurable and two dynamic parts, as depicted.

Figure (9.73)


9.7.2.3 – L1-Post Extension Field

The L1-post extension field allows for future expansion of the L1 signaling. Its presence is indicated by an L1-pre field.

9.7.2.4 – CRC for the L1-Post Signaling

A 32-bit error detection code is applied to the entire L1-post signaling including the configurable, the dynamic for the current T2-frame, the dynamic for the next T2-frame, if present, and the L1-post extension field, if present. The location of the CRC field can be found from the length of the L1-post.
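A bitwise sketch of such a 32-bit CRC is shown below. The polynomial 0x04C11DB7, all-ones initial value, no bit reflection, and no final XOR (the MPEG-2/DVB style of CRC-32) are assumed here; EN 302 755 defines the exact generator used for the L1-post CRC:

```python
def crc32_mpeg(data: bytes) -> int:
    """Bitwise 32-bit CRC, MPEG-2 style: polynomial 0x04C11DB7,
    initial value 0xFFFFFFFF, no reflection, no final XOR (assumed)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc

l1_post = bytes.fromhex("0123456789abcdef")   # placeholder signaling bytes
crc = crc32_mpeg(l1_post)                     # value appended as CRC field
```

The receiver recomputes the CRC over the received L1-post bits and compares it with the transmitted CRC field to detect errors.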

9.7.2.5 – Error Correction Coding and Modulation of L1-Pre Signaling

The L1-pre signaling is protected by a concatenation of a BCH outer code and an LDPC inner code. The L1-pre signaling bits have a fixed length; they shall first be BCH-encoded, and the resulting BCH parity-check bits shall be appended to the L1-pre signaling. The concatenated L1-pre signaling and BCH parity-check bits are further protected by a shortened and punctured 16K LDPC code with code rate 1/4 (Nldpc = 16 200).

After the shortening and puncturing, the encoded bits of the L1-pre signaling shall be mapped to 1840 BPSK symbols. Finally, the BPSK symbols are mapped to OFDM cells.

9.7.2.6 – Error Correction Coding and Modulation of L1-Post Signaling

The number of L1-post signaling bits is variable, and the bits shall be transmitted over one or more 16K LDPC blocks depending on the length of the L1-post signaling. Each block of information is protected by a concatenation of a BCH outer code and an LDPC inner code. The concatenated information bits of each block and the BCH parity-check bits are further protected by a shortened and punctured 16K LDPC code with code rate 1/2.

The modulation order used for the L1-post signaling is one of BPSK, QPSK, 16-QAM, or 64-QAM.

