
AUTOMATIC MODULATION CLASSIFICATION USING FEATURE BASED APPROACH

A Thesis submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy

By

Sajjad Ahmed Ghauri

0909-PDEE-004

Department of Electronic Engineering
School of Engineering & Applied Sciences (SEAS)

ISRA University, Islamabad Campus
Islamabad, Pakistan

August 2015


AUTOMATIC MODULATION CLASSIFICATION USING FEATURE BASED APPROACH

Submitted by

Sajjad Ahmed Ghauri

0909-PDEE-004

Dr. Ijaz Mansoor Qureshi (Supervisor)
Professor
Department of Electrical Engineering
Air University, Islamabad.

Dr. Tanweer Ahmad Cheema
HOD
School of Engineering & Applied Sciences
ISRA University, Islamabad

Dr. Muhammad Sher (External Examiner)
Dean
Faculty of Basic & Applied Sciences,
International Islamic University, Islamabad.

Dr. Ihsan ul Haq (External Examiner)
Principal ICT
Faculty of Engineering & Technology,
International Islamic University, Islamabad.


CERTIFICATE

It is certified that the research work contained in this thesis, carried out under the supervision of Dr. Ijaz Mansoor Qureshi at ISRA University, Islamabad Campus, is original. It is fully adequate, in scope and quality, as a thesis for the degree of Doctor of Philosophy.

Signature: ___________________

Supervisor
Prof. Dr. Ijaz Mansoor Qureshi
Department of Electrical Engineering,
Air University, Islamabad.


DEDICATED TO

MY WORTHY FATHER


ACKNOWLEDGEMENTS

I am grateful to The Almighty Allah, The Beneficent and The Merciful, Who enabled me to complete this research work. I cordially thank my supervisor, Dr. I. M. Qureshi, for his constant encouragement and bountiful enthusiasm. His guidance in pursuing innovative ideas in wireless communications has always inspired and motivated me to explore new horizons of research. He has always treated me like his own child and has paid special attention to guiding me in my studies and research at all stages. He has not only provided me with a range of solutions for wireless communication problems, but has also trained me in research methodology. Above all, I have learned to live a simple and pious life from such a dynamic and consummate individual. I pay tribute to my teachers Dr. A. N. Malik, Dr. T. A. Cheema and all the other teachers who have taught me from nursery to postgraduate classes.

My words are not enough to express the gratitude I hold in my heart for my parents. All that they have done for me reflects the sublime love for knowledge they have instilled in me. I have always found them wishing me well in their late-night prayers. I cannot forget to thank the rest of my family members for their humble prayers and generous assistance, especially my beloved wife, who has always inspired me, valued my ideas and encouraged me in the arduous moments.

Last but not least, I appreciate my friend Mr. Hannan Adeel for his sincere friendship. It is hard to forget those study sessions during our course work. Lastly, I pray for all who are acquainted with me.


ABSTRACT

Automatic Modulation Classification (AMC) is a scheme that classifies a modulated signal by observing features of the received signal. The received signal is usually corrupted by various impairments, such as white Gaussian noise and fading, which degrade the signal quality. Automatic modulation classification plays an important role in cognitive radio communication. Due to the increased use of digital signals in technologies such as cognitive radio, researchers have focused on recognizing these signal types, and AMC is expected to be incorporated in upcoming cognitive communication systems. Generally, digital signal type classification methods fall into two major categories: decision theoretic (DT) methods and pattern classification (PC) methods.

In this research we focus on PC methods, which are based upon feature extraction. Feature-extraction-based modulation classification is accomplished in two modules: the first module is feature extraction, and the second is the classification process, which makes a decision based upon the extracted features. The features extracted from the received signal are higher order moments, higher order cumulants, spectral features, cyclo-stationary features and novel Gabor features. The classification of digital modulation formats such as pulse amplitude modulation (PAM), quadrature amplitude modulation (QAM), phase shift keying (PSK) and frequency shift keying (FSK) is considered throughout the research. The performance of the proposed classifiers is analyzed on the additive white Gaussian noise (AWGN) channel, Rayleigh flat fading channel, Rician flat fading channel and log-normal fading channel.

The proposed classification algorithm for different unknown modulated signals is based on normalized higher even-order cumulant features and spectral features. The proposed classifiers are based on the likelihood function,


multilayer perceptron and linear discriminant analysis. The simulation results show that the proposed algorithms have high classification accuracy even at low signal to noise ratio (SNR), and that they perform efficiently compared to existing classifiers.

A novel joint feature extraction and classification technique is proposed to classify digitally modulated signals by adaptively tuning the parameters of a Gabor filter network. The Gabor atom parameters are tuned using the delta rule, and the weights of the Gabor filter are tuned using the least mean square (LMS) algorithm. The proposed algorithm efficiently classifies PSK, FSK and QAM signals with 100% classification accuracy. A modified Gabor filter network is proposed for the classification of M-PAM signals.

The proposed hidden Markov model (HMM) and Gabor filter network together formulate an optimal classifier structure. The proposed classifier uses the Baum-Welch algorithm and the genetic algorithm (GA) to update the Gabor filter network and HMM parameters. The fitness function for the genetic algorithm is the probability of the observation sequence given the model, and the objective is to maximize this probability. To improve the classification accuracy, three parameters of the Gabor filter (GF) network and one HMM parameter are adjusted simultaneously such that the probability of the observation sequence is maximized.

The proposed classifiers are compared with well-known techniques in the literature, and simulation results show the superiority of the proposed schemes over contemporary techniques.


ABBREVIATIONS

Abbreviation          Term

ADMRA-------Analog and Digital Modulation Recognition Algorithm

ADMR----------------------Automatic Digital Modulation Classification

ALRT--------------------------------------- Average Likelihood Ratio Test

AMC--------------------------------- Automatic Modulation Classification

AMRA------------------------Analog Modulation Recognition Algorithm

ANN-------------------------------------------------Artificial Neural Network

AWGN------------------------------------ Additive White Gaussian Noise

BPA--------------------------------------------Back Propagation Algorithm

CDP-----------------------------------------------------Cyclic Domain Profile

COMINT---------------------------------------Communication Intelligence

CR ----------------------------------------------------------- Cognitive Radios

DCS----------------------------------------Digital Communication System

DMRA-------------------------Digital Modulation Recognition Algorithm

FBA------------------------------------------------ Feature Based Approach

FFBPA---------------------Feed Forward Back Propagation Algorithm

FFBPNN-----------Feed Forward Back Propagation Neural Network

FFT---------------------------------------------------Fast Fourier Transform

FSK-------------------------------------------------- Frequency Shift Keying

GA-----------------------------------------------------------Genetic Algorithm

GFN------------------------------------------------------Gabor Filter Network

GLRT---------------------------------- Generalized Likelihood Ratio Test


GNN---------------------------------------------------Gabor Neural Network

GP-KNN-------------Genetic Programming with K-Nearest Neighbor

HLRT----------------------------------------- Hybrid Likelihood Ratio Test

HMM---------------------------------------------------Hidden Markov Model

HOC----------------------------------------------Higher Order Cumulants

HOS---------------------------------------------------Higher Order Statistics

HT------------------------------------------------------------Hough Transform

IB---------------------------------------------------Instantaneous Bandwidth

IF-----------------------------------------Instantaneous Carrier Frequency

LBA---------------------------------Likelihood Function Based Approach

LDA--------------------------------------------Linear Discriminant Analysis

LLF---------------------------------------------------------Likelihood Function

LMS-------------------------------------------------------Least Mean Square

LUT----------------------------------------------------------------Lookup Table

MC ------------------------------------------------- Modulation classification

MGFN---------------------------------------Modified Gabor Filter Network

MHD----------------------------------------------Margenau-Hill Distribution

ML--------------------------------------------------------Maximum Likelihood

MLP-----------------------------------------------------Multilayer Perceptron

MMSE----------------------------------------Minimum Mean Square Error

MSE--------------------------------------------------------Mean Square Error

ODST------------------------------Optimized Distribution Sampling Test

PAM--------------------------------------------Pulse Amplitude Modulation


PCC-----------------------------------Probability of Correct Classification

PHC---------------------------------------------Phase-Histogram Classifier

PPDF---------------------------------Phase Probability Density Function

PRA-----------------------------------------Pattern Recognition Approach

PSD--------------------------------------------------Power Spectral Density

PSK ------------------------------------------------------- Phase Shift Keying

PSO--------------------------------------------Particle Swarm Optimization

QAM----------------------------------- Quadrature Amplitude Modulation

qLLR--------------------------------------------Quasi-Log-Likelihood Ratio

RBPA------------------------------Resilient Back Propagation Algorithm

RD---------------------------------------------------------Rihaczek Distribution

RLS--------------------------------------------------Recursive Least Square

SCF-------------------------------------------Spectral Coherence Function

SDR -------------------------------------------------Software Defined Radio

SLC-----------------------------------------------------Square Law Classifier

SNR-----------------------------------------------------Signal to Noise Ratio

STP--------------------------------------------Serial to Parallel Conversion

SUMC-----------------------------Single User Modulation Classification

SVD-----------------------------------------Singular Value Decomposition

SVM-------------------------------------------------Support Vector Machine


TABLE OF CONTENTS

Page

ACKNOWLEDGEMENTS ------------------------------------------------------------ iv
ABSTRACT ------------------------------------------------------------------------------ v
ABBREVIATIONS ---------------------------------------------------------------------- vii
TABLE OF CONTENTS --------------------------------------------------------------- x
LIST OF TABLES ---------------------------------------------------------------------- xiii
LIST OF FIGURES -------------------------------------------------------------------- xvi

CHAPTER I – INTRODUCTION ---------------------------------------------------- 01
1.1 Introduction ----------------------------------------------------------------------- 01
1.2 Motivation and Problem Statement ---------------------------------------- 03
1.3 Contribution of the Thesis ---------------------------------------------------- 05
1.4 Organization of the Thesis --------------------------------------------------- 06

CHAPTER II – LITERATURE REVIEW ------------------------------------------ 09
2.1 Introduction ----------------------------------------------------------------------- 09
2.2 Likelihood based Decision Theoretic Approach ------------------------- 10
2.2.1 Likelihood Ratio Test ------------------------------------------------------- 11
2.2.1.1 Average Likelihood Ratio Test ---------------------------------------- 11
2.2.1.2 Generalized Likelihood Ratio Test ----------------------------------- 11
2.2.1.3 Hybrid Likelihood Ratio Test ------------------------------------------- 12
2.2.1.4 Quasi-Hybrid Likelihood Ratio Test ---------------------------------- 12
2.2.1.5 Sequential Probability Ratio Test ------------------------------------- 13

2.3 Features based Pattern Recognition Approach -------------------------- 15
2.3.1 Feature Extraction ---------------------------------------------------------- 16
2.3.1.1 Statistical Features ------------------------------------------------------- 16
2.3.1.2 Spectral Features --------------------------------------------------------- 17
2.3.1.3 Cyclo-stationary Features ---------------------------------------------- 17
2.3.1.4 Time Frequency Features ---------------------------------------------- 18
2.3.2 Classification ----------------------------------------------------------------- 18
2.3.2.1 Nature Inspired Heuristic Techniques ------------------------------- 19
2.3.2.2 Artificial Neural Network ------------------------------------------------ 20
2.3.2.3 Fuzzy c-Means ------------------------------------------------------------ 21
2.3.2.4 Hidden Markov Models ------------------------------------------------- 22

2.4 Summary---------------------------------------------------------------------------- 25

CHAPTER III – AUTOMATIC MODULATION CLASSIFICATION USING FEATURE EXTRACTION TECHNIQUES ------- 26
3.1 Generalized System Model --------------------------------------------------- 26
3.2 Automatic Modulation Classification using Higher Order Statistics --- 29

3.2.1 Introduction------------------------------------------------------------------- 29


3.2.2 Statistical Features --------------------------------------------------------- 30
3.2.3 Proposed Algorithm for AMC --------------------------------------------- 33
3.2.4 Simulation Results ---------------------------------------------------------- 33
3.2.5 Summary ---------------------------------------------------------------------- 40

3.3 Automatic Modulation Classification using Linear Discriminant Analysis (LDA) ------- 40
3.3.1 Introduction ------------------------------------------------------------------- 40
3.3.2 Higher Order Cumulants as Feature Set ------------------------------ 41
3.3.3 Proposed Algorithm for AMC --------------------------------------------- 42
3.3.4 Simulation Results ---------------------------------------------------------- 43
3.3.5 Summary ---------------------------------------------------------------------- 48

3.4 Automatic Modulation Classification using Spectral Features -------- 48
3.4.1 Introduction ------------------------------------------------------------------- 48
3.4.2 Spectral Features ----------------------------------------------------------- 48
3.4.3 Proposed Algorithm using MLP ------------------------------------------ 51
3.4.4 Simulation Results ---------------------------------------------------------- 54
3.4.5 Proposed Algorithm using SVM ------------------------------------------ 57
3.4.6 Simulation Results ---------------------------------------------------------- 61
3.4.7 Comparison with Existing Techniques --------------------------------- 62
3.4.8 Summary ---------------------------------------------------------------------- 64

CHAPTER IV – AUTOMATIC MODULATION CLASSIFICATION USING GABOR FILTER NETWORK ------- 65
4.1 Introduction ---------------------------------------------------------------------- 65
4.2 System Model ------------------------------------------------------------------- 66
4.3 Gabor filter for Classification and Feature Extraction ------------------ 66
4.4 Training and Testing of Gabor filter network ----------------------------- 69
4.5 The Proposed Algorithm for Modulation Classification ---------------- 74
4.6 Simulation Results ------------------------------------------------------------- 76
4.7 Modified Gabor Filter Network for Classification of PAM signals ---- 88
4.7.1 Modified Gabor Filter Network ------------------------------------------- 89
4.7.2 Modified Proposed Algorithm for Modulation Classification ------ 90
4.7.3 Simulation Results of Modified Gabor Filter Network --------------- 91

4.8 Comparison with Existing Techniques ------------------------------------ 98
4.9 Summary ------------------------------------------------------------------------- 98

CHAPTER V – AUTOMATIC MODULATION CLASSIFICATION USING HIDDEN MARKOV MODELS ------- 101
5.1 Introduction ---------------------------------------------------------------------- 101
5.2 System Model and Gabor Filter Network ---------------------------------- 102
5.3 Genetic Algorithm -------------------------------------------------------------- 103
5.4 Hidden Markov Model --------------------------------------------------------- 104
5.4.1 Baum Welch (BW) Algorithm --------------------------------------------- 106
5.5 Proposed Classifier and its Training --------------------------------------- 110

5.5.1 Training of Classifier ------------------------------------------------------ 112


5.5.2 Testing of Classifier -------------------------------------------------------- 113
5.6 Simulation Results ------------------------------------------------------------- 114
5.7 Comparison with Existing Techniques ------------------------------------ 118
5.8 Summary ------------------------------------------------------------------------- 121

CHAPTER VI – CONCLUSIONS AND FUTURE DIRECTIONS ------------- 122
6.1 Summary of Results ----------------------------------------------------------- 122
6.2 Future Directions --------------------------------------------------------------- 124

REFERENCES –----------------------------------------------------------------------- 126


LIST OF TABLES

Table Page

II – 1 A Summary of Likelihood based Classifiers --------------------------------------13

II – 2 A Summary of Feature based Classifiers -----------------------------------------23

III – 1 Theoretical Normalized 4th, 6th & 8th order cumulants for various Modulation Constellations -------- 32

III – 2 Theoretical Values of Normalized cumulants of Considered Modulation Types -------- 42

III – 3 Confusion Matrix for PSK modulation in AWGN channel (Average Performance 99.95%) -------- 45

III – 4 Confusion Matrix for PSK modulation in AWGN + Rician Flat Fading Channel (Average Performance 92.26%) -------- 45

III – 5 Confusion Matrix for PSK modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 91.38%) -------- 45

III – 6 Confusion Matrix for FSK modulation in AWGN channel (Average Performance 99.5%) -------- 46

III – 7 Confusion Matrix for FSK modulation in AWGN + Rician Flat Fading Channel (Average Performance 91%) -------- 46

III – 8 Confusion Matrix for FSK modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 88%) -------- 46

III – 9 Confusion Matrix for QAM modulation in AWGN channel (Average Performance 99.95%) -------- 47

III – 10 Confusion Matrix for QAM modulation in AWGN + Rician Flat Fading Channel (Average Performance 81%) -------- 47

III – 11 Confusion Matrix for QAM modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 77.36%) -------- 47


III – 12 Specifications for the proposed classifier------------------------------------------53

III – 13 Percentage of correct classification on AWGN channel at 10dB of SNR -------- 55

III – 14 Percentage of correct classification on Rician flat fading channel plus AWGN at 10dB of SNR -------- 56

III – 15 Percentage of correct classification on Rayleigh flat fading channel plus AWGN at 10dB of SNR -------- 56

III – 16 Performance of Recognizer at SNR of 0 to 10 dB ------------------------------61

III – 17 Performance comparison of Spectral Features with existing techniques -------- 62

III – 18 Performance comparison of HOC features with existing techniques ------63

IV – 1 Updated Shift and Modulation Parameter for PSK modulation 2-64--------78

IV – 2 Updated Scale and weight Parameter for PSK modulation 2-64------------79

IV – 3 Updated Shift and Modulation Parameter for FSK modulation 2-64--------79

IV – 4 Updated Scale and weight Parameter for FSK modulation 2-64-------- 80

IV – 5 Updated Shift and Modulation Parameter for QAM modulation 2-64--- 81

IV – 6 Updated Scale and weight Parameter for QAM modulation 2-64------- 81

IV – 7 Training Performance of MGFN of M-PAM signal classification without Noise -------- 93

IV – 8 Training Performance of MGFN of M-PAM signal classification on AWGN channel -------- 94

IV – 9 Testing Performance of MGFN of M-PAM signal classification on AWGN channel -------- 94


IV – 10 Testing Performance of MGFN for M-PAM signal classification at SNR=10dB on AWGN channel -------- 94

IV – 11 Testing Performance Comparison of MGFN on AWGN and Fading channels for the example of 8-PAM format -------- 97

IV – 12 Performance Comparison with the Existing Techniques-----------------------99

V – 1 Classification accuracy for the proposed classifier for different no. of samples and SNRs -------- 115

V – 2 Percentage Classification performance at different SNRs and 2048 samples -------- 117

V – 3 Performance Comparison with the existing techniques -----------------------120


LIST OF FIGURES

Figure Page

III – 1 The Generalized System Model ----------------------------------------------------- 28

III – 2 PCC on AWGN channel in scenario {BPSK, QPSK}, N=250----------------- 34

III – 3 PCC on AWGN channel in scenario {QAM 2 to 64}, N=3000----------------- 34

III – 4 PCC on AWGN channel in scenario {PAM 2 to 64}, N=3000----------------- 35

III – 5 PCC on Flat Fading channel in scenario {BPSK, QPSK}, N=250------------ 35

III – 6 PCC on Flat Fading channel in scenario {PAM 2 to 64}, N=2000----------- 36

III – 7 PCC on Flat Fading channel in scenario {QAM 2 to 64}, N=2000----------- 36

III – 8 PCC on Rayleigh Flat Fading channel in scenario {BPSK, QPSK}, N=250 -------- 37

III – 9 PCC on Lognormal Fading channel in scenario {BPSK, QPSK}, N=250 -------- 37

III – 10 PCC on Rician Flat Fading channel in scenario {BPSK, QPSK}, N=250 -------- 38

III – 11 Performance of ADMC on Faded channel in scenario {BPSK, QPSK}, N=250 -------- 38

III – 12 Performance of ADMC on Faded channel in scenario {QAM 2 to 64}, N=3000 -------- 39

III – 13 Performance of ADMC on Faded channel in scenario {PAM 2 to 64}, N=2500 -------- 39

III – 14 Flow Chart for Modulation Classification------------------------------------------- 42

III – 15 Proposed Architecture for classification of modulation formats--------------- 52


III – 16 The FFBPNN result for the proposed classifier {PSK 2 to 64} on AWGN channel at SNR of 10dB -------- 55

IV – 1 System Model for Modulation Classification--------------------------------------- 66

IV – 2 Gabor filter bank with input layer and output layer------------------------------- 67

IV – 3 Testing Scheme for Modulation Classification------------------------------------ 69

IV – 4 Training of Gabor filter parameters and weights for modulation classification in case of PSK modulation 2-64 for different number of iterations at SNR=10dB -------- 83

IV – 5 Training of Gabor filter parameters and weights for modulation classification in case of FSK modulation 2-64 for different number of iterations at SNR=10dB -------- 84

IV – 6 Training of Gabor filter parameters and weights for modulation classification in case of QAM modulation 2-64 for different number of iterations at SNR=10dB -------- 84

IV – 7 Training of Gabor filter parameters and weights for modulation classification in case of PSK modulation 2-64 at different SNRs and fixed number of iterations -------- 85

IV – 8 Training of Gabor filter parameters and weights for modulation classification in case of FSK modulation 2-64 at different SNRs and fixed number of iterations -------- 85

IV – 9 Training of Gabor filter parameters and weights for modulation classification in case of QAM 2-64 at different SNRs and fixed number of iterations -------- 86

IV – 10 Probability of correctness (PCC) versus Number of Iterations atSNR=10dB ------------------------------------------------------------------------------- 87

IV – 11 Probability of correctness (PCC) versus SNR for fixed Number of Iterations -------- 88

IV – 12 Training of MGFN for the M-PAM formats under no Noise-------------------- 92


IV – 13 Training of MGFN for the M-PAM formats on AWGN channel---------------- 93

IV – 14 Probability of Correctness curve for the example of PAM16------------------ 96

IV – 15 Probability of Correctness curve under AWGN channel for the example of PAM16 -------- 97

V – 1 Flow chart of Genetic Algorithm ----------------------------------------------------- 103

V – 2 Proposed Classifier Structure -------------------------------------------------------- 111

V – 3 Structure of Proposed Classifier ----------------------------------------------------- 112

V – 4 Testing of Proposed Classifier ------------------------------------------------------- 114

V – 5 Classifier training for the case of PAM and QAM signals --------------------- 116

V – 6 Average classification performance for different numbers of samples -------- 116


CHAPTER I

INTRODUCTION

1.1 Introduction

Generally, in a communication system there is mutual cooperation/handshaking between the transmitter and the receiver, so the receiver has a priori knowledge of the modulation format of the transmitted signal. In analog communication systems, the modulation format includes the modulation type, modulation index and nominal carrier frequency. In digital communication systems (DCS), the modulation format includes the modulation type, alphabet size, nominal carrier frequency, symbol constellation, frequency deviation (for frequency modulated signals only) and the symbol rate. The system designer generally focuses on making the DCS reliable (Azzouz et al., 1996).

In a DCS, security is one of the fundamental requirements: two users do not want their communication to be known to any third user. At the same time, the communication authorities may wish to detect non-licensed transmitters; this detection is done by recognizing or classifying the modulation format. Some of the applications are interference identification, spectrum management, surveillance, threat analysis, warning and target acquisition (Azzouz et al., 1996a).

In communication intelligence (COMINT) systems, the classification of modulation has traditionally been done manually, using banks of demodulators in which each demodulator is designed for one modulation


type. The demodulator outputs indicate the modulation type of the received signal, which is corrupted by noise and channel effects. The classification process requires long signal durations and experienced analysts, and it also requires an intelligent decision algorithm at the demodulator output. Implementing such a classification process is complex, requires excessive storage, and cannot guarantee reliable classification results. Moreover, the modulation types that can be recognized are limited by the demodulator bank employed.

With the emergence of software-defined radios (SDRs), radio frequency (RF) communication devices have the capability to transmit at low power, change the transmitting frequency, and modify the modulation format during transmission. Adaptive modulation varies the data transmission rate according to the channel operating conditions. In an environment without handshaking between the transmitter and receiver, RF signals need to be recognized. Automatic Modulation Classification (AMC) is therefore a widely demanded feature on the receiver side, allowing adaptation without handshaking between a transmitter and its corresponding receiver.

Cognitive radio (CR) has become a key research area in digital communication. CR is a promising technology that improves spectrum efficiency by opportunistically finding and utilizing unoccupied frequency bands. Automatic modulation classification has various applications in cognitive radio (cooperative and non-cooperative communication), civilian and military communication, electronic warfare and surveillance. In military applications, no information is available about the enemy signal, so the CR needs to identify the modulation format employed in that signal. AMC recognizes the modulation type of the received signal, which has undergone


channel effects such as fading, noise and interference during transmission. AMC is basically a non-cooperative communication technique, though it also includes some aspects of cooperative communication, such as channel tracking and identification, and estimation and detection of signal parameters.

1.2 Motivation and Problem Statement

Link adaptation, also known as adaptive modulation and coding, creates an adaptive modulation scheme in which a pool of multiple modulations is employed by the same system. It enables optimization of transmission reliability and data rate through adaptive selection of modulation schemes according to channel conditions. While the transmitter has the freedom to choose how signals are modulated, the receiver must know the modulation format in order to demodulate the signal. An easy way to achieve this is to add modulation information to each frame, so that the receiver is notified of any change in the modulation scheme. However, this reduces spectrum efficiency because of the extra modulation information in each frame. The alternative is automatic modulation classification (Zhu, 2014).

Existing approaches for AMC are based on the decision theoretic approach and the pattern recognition approach. The likelihood-based approach is optimal, whereas the pattern recognition approach is sub-optimal. Both approaches provide classification/recognition of modulation formats on fading channels. In this research we focus only on the feature based pattern classification approach. The modulation formats considered are PAM, QAM, PSK and FSK of order 2 to 64.


The previously designed classifiers, e.g. those of Cai et al. (2004), Dobre et al. (2007) and Mustafa et al. (2004), can classify only a small number of modulation formats. The major problem in feature based pattern classification is the extraction of the features. The most popular feature extraction techniques in the literature are spectral feature extraction, cyclo-stationary property based feature extraction, higher order moments and higher order cumulants (Azzouz et al., (1995); Poopola et al., (2011); Wong et al., (2001); Nandi et al., (1998); Dobre et al., (2007); Dobre et al., (2010); Ebrahimzadeh et al., (2012); He et al., (2008); Lanjun et al., (2010); Lopatka et al., (2000); Ramkumar (2009)). The feature based approach is easy to implement and computationally simple (Dobre et al., 2003).

The number of modulation formats classified by previous methods is limited, and the classification accuracy of these classifiers is mostly evaluated only for the additive white Gaussian noise (AWGN) channel model. Classifying a larger set of modulation formats (PAM, QAM, PSK and FSK of order 2 to 64) is therefore one of the problems that needs to be addressed. The classification of modulation formats not only in the presence of AWGN but also on fading channels (Rayleigh flat fading, Rician flat fading and log-normal fading) is another open issue. In the effort to produce more efficient and accurate classification performance, the feature based pattern recognition approach is frequently used. Extracting new features from the received signal and devising an efficient classifier design significantly improve classifier performance. Thus, the problem statement is to identify/classify the digital modulation formats from the noisy received signal.


1.3 Contribution of the Thesis

The objective of this thesis is to design an efficient classifier structure with improved performance compared to well-known state-of-the-art existing techniques. The contributions are summarized as follows:

1. Modulation classification based on normalized higher order cumulants is presented. Higher even-order normalized cumulants are used for the classification of modulation formats. The main advantage of the proposed classifiers, which are based on maximum likelihood and linear discriminant analysis, is the improvement in classification accuracy, and this improvement is claimed as a major contribution. The performance of the proposed classifiers is also compared with well-known existing classifiers (Chen et al., 2008; Orlic et al., 2009).

2. An improved digital modulation classification scheme based on spectral features is presented. The proposed classifiers use a feed-forward back propagation neural network (FFBPNN) and a support vector machine (SVM). The classification accuracy of the proposed classifiers is better than that of existing techniques (Ghauri et al., 2014).

3. New features, named Gabor features, are extracted; to the best of our knowledge these have not previously been utilized for the problem of modulation classification. The proposed classifier parameters and the weights of the adaptive filter are adjusted until the cost function is minimized. The training and testing of the proposed classifier show


that the proposed algorithm is capable of classifying the modulation formats (PSK, FSK and QAM) of order 2 to 64 with a high probability of correctness.

4. A modified Gabor filter network for the classification of M-PAM signals is presented. Changes are proposed to handle PAM signals, since the classification of M-PAM is difficult because the amplitude values grow as M varies from 2 to 64. The proposed changes make the algorithm robust for the classification of M-PAM modulation formats, not only on the AWGN channel but also on fading channels.

5. A classifier based upon the hidden Markov model (HMM) and the Gabor filter network is presented. The parameters of the HMM and the Gabor filter network are updated to maximize the fitness function. The proposed classifier structure is capable of efficiently classifying M-QAM and M-PAM signals. The proposed classifier is also compared with existing classifiers.

1.4 Organization of the Thesis

Chapter II provides a brief overview of automatic modulation classification techniques in the literature over the past two decades. The literature survey is divided into two subsections (decision theoretic approach and pattern recognition approach). In the first subsection, an overview of existing techniques based on the decision theoretic approach is provided, covering the lookup table (LUT), average likelihood ratio test (ALRT), hybrid likelihood ratio test


(HLRT), generalized likelihood ratio test (GLRT) and quasi-HLRT. In the second subsection, the feature based pattern recognition approach is presented, and neural network based classifiers are briefly discussed. The limitations and drawbacks of the different types of classifiers are also discussed.

In Chapter III, two feature extraction techniques are given. In the first technique, the extracted features are higher order moments (HOM) and higher order cumulants (HOC), and the classifiers used are based on maximum likelihood and linear discriminant analysis. In the second technique, spectral features are extracted from the received signal, and the proposed classifiers are based upon the FFBPNN and SVM. A quantitative analysis is presented to compare the performance of the proposed classifiers with existing classifiers.

In Chapter IV, two algorithms are presented to classify the digital modulation formats (PSK, FSK, QAM and PAM). New features, named Gabor features, are extracted. In these algorithms, a Gabor filter network and a modified Gabor filter network are used for joint feature extraction and classification. The training and testing of the Gabor filter network reveal that the proposed classifier provides better results than the existing techniques.

In Chapter V, the classification of M-PAM and M-QAM signals is carried out using a Gabor filter bank and hidden Markov models. The Gabor filter bank parameters and HMM parameters are updated using the genetic algorithm and the Baum-Welch algorithm. The objective is to maximize the fitness function. The


training and testing of the proposed classifier demonstrate improved classification accuracy over state-of-the-art existing techniques.

Finally, Chapter VI summarizes and concludes the work and gives suggestions for future directions.


CHAPTER II

LITERATURE REVIEW

2.1 Introduction

A comprehensive review of the literature regarding automatic modulation classification techniques is presented. Automatic modulation classification (AMC) is an intermediate step between the detection of a signal and the demodulation of the information it carries. AMC is an approach that classifies the modulation format of the received signal at the receiver side. Automatic modulation classification has found extensive importance in the fields of electronic surveillance, military sensitive areas, electronic countermeasures, civilian domains, software defined radios and, lately, cognitive radios (Dobre et al., (2010); Ye et al., (2007); Zeng et al., (2012); Zhou et al., (2010); Boutte et al., (2009); Avci et al., (2007); Avci et al., (2008)). In the military domain it may be used for monitoring and interference recognition, whereas in the civil domain its uses include interference confirmation, spectrum management and signal confirmation.

The most important applications in the civil domain are intelligent modems, software defined radios and cognitive radios (Dobre et al., 2006). Due to emerging technologies such as cognitive radios, recent research has focused on identifying and then classifying these types of signals (Mitola et al., 1999; Mitola, 2000; Haykin, 2005). In cognitive radios, the secondary user uses the spectrum of the primary user either with cooperation or by


sensing the primary user, subject to the constraint of limited interference to the primary user. For this reason the secondary user should have knowledge of the modulation format of the primary user's signal, as suggested by Ramkumar (2011). In this chapter, a brief review of different techniques for the classification of modulation formats is presented.

2.2 Likelihood based Decision Theoretic Approach

As one of the automatic modulation classification techniques, the likelihood function based (LB) decision theoretic approach is computationally complex but optimal (Ghauri et al., 2014). Modulation classification in the decision theoretic approach is viewed as a multiple hypothesis test, or as a sequence of pair-wise hypothesis tests. Once the likelihood function is set up, likelihood ratio tests (LRT) are used to determine the modulation format of the received signal (Wang et al., 2010). Due to phase errors, channel effects, timing jitter and frequency offset, the decision theoretic approach is not robust to model mismatch (Yucek et al., 2004).

The classifiers used in the decision theoretic approach are based on likelihood ratio tests, log-likelihood ratio tests, phase-based, phase histogram-based and square-law based classifiers. Some of these tests are the average likelihood ratio test (ALRT), generalized likelihood ratio test (GLRT), hybrid likelihood ratio test (HLRT), quasi-HLRT, quasi-log-likelihood ratio (qLLR), log-likelihood ratio test (LLRT) and sequential probability ratio test (SPRT) (Dobre et al., 2005; Hameed et al., 2009; Dobre et al., 2006; Lin et al., 1997).


2.2.1 Likelihood Ratio Test (LRT)

The idea behind the likelihood based decision theoretic approach is that the probability density function (PDF) of the observed waveform, conditioned on the embedded modulated signal, contains all the information needed for signal classification.
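For two candidate hypotheses this leads to the familiar likelihood ratio test, written here in a standard form (the symbols below are introduced only for illustration and are not necessarily the notation used in later chapters):

\[
\Lambda(\mathbf{r}) \;=\; \frac{p(\mathbf{r}\mid H_i)}{p(\mathbf{r}\mid H_j)} \;\underset{H_j}{\overset{H_i}{\gtrless}}\; \eta ,
\]

where r is the observed waveform, H_i denotes the hypothesis that the i-th modulation format was transmitted, and η is a decision threshold (η = 1 for equiprobable formats under the maximum a posteriori criterion).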

2.2.1.1 Average Likelihood Ratio Test (ALRT)

In the average likelihood ratio test, the unknown quantities are treated as random variables (RVs) with assumed probability density functions (PDFs). The likelihood of the received signal under each hypothesis is computed by averaging over these unknowns. The ALRT method gives the maximum probability of correct classification if the assumed and actual distributions of the unknown quantities coincide. The ALRT suffers from high computational complexity when the number of unknown parameters increases (Huang et al., 1995; Long et al., 1994; Abdi et al., 2004; Sills, 1999; Wei et al., 2000; Hong et al., 2003; Beidas et al., 1995). The ALRT is optimal in the Bayesian sense if the chosen PDF is the same as the true PDF (Beidas et al., 1998).
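In a standard formulation (sketched here for illustration; u_i simply collects the unknown quantities under hypothesis H_i and is not the notation of this thesis), the ALRT likelihood is obtained by averaging the conditional likelihood over the assumed prior of the unknowns:

\[
L_A(\mathbf{r}\mid H_i) \;=\; \int p(\mathbf{r}\mid \mathbf{u}_i, H_i)\, p(\mathbf{u}_i\mid H_i)\, d\mathbf{u}_i .
\]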

2.2.1.2 Generalized Likelihood Ratio Test (GLRT)

The generalized likelihood ratio test (GLRT) treats the unknown quantities as deterministic unknowns. These unknowns are estimated under each hypothesis, and the unknown quantities are then replaced by the maximum likelihood estimates (MLEs) obtained from the observed data in order to perform the likelihood ratio test. As the length of the observed data increases, the computational complexity of the GLRT increases exponentially (Panagiotou et al.,


2000). When the constellations are nested, the GLRT fails (Abdi et al., 2004; Dobre et al., 2007). The remedy is either to avoid nested modulation schemes or to change the weighting probabilities.

2.2.1.3 Hybrid Likelihood Ratio Test (HLRT)

In the hybrid likelihood ratio test, the ALRT and GLRT are combined: some of the unknown quantities are treated as random variables and others are treated as deterministic unknowns. In the HLRT, the problem of nested constellations is overcome by averaging over the signal constellation points (Dobre et al., 2007; Sills et al., 1999).
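For comparison with the ALRT expression above (again a standard-form sketch rather than the thesis notation), the GLRT replaces the average over the unknowns u_i by a maximization, while the HLRT maximizes over one subset u_{i,1} (typically channel and synchronization parameters) and averages over the remaining subset u_{i,2} (typically the unknown data symbols):

\[
L_G(\mathbf{r}\mid H_i) = \max_{\mathbf{u}_i} \, p(\mathbf{r}\mid \mathbf{u}_i, H_i),
\qquad
L_H(\mathbf{r}\mid H_i) = \max_{\mathbf{u}_{i,1}} \int p(\mathbf{r}\mid \mathbf{u}_{i,1}, \mathbf{u}_{i,2}, H_i)\, p(\mathbf{u}_{i,2}\mid H_i)\, d\mathbf{u}_{i,2} .
\]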

The ALRT requires a multidimensional integration and the GLRT requires a multidimensional maximization. The ALRT is impractical because it must perform a multidimensional integration over a large number of unknown quantities and also requires knowledge of the prior PDFs. The GLRT requires multidimensional maximization over the unknown data, which can lead to the same value of the likelihood function for nested signal constellations and thus yields incorrect classification (Tress, 2001).

2.2.1.4 Quasi-Hybrid Likelihood Ratio Test (q-HLRT)

To reduce the computational complexity of the maximum likelihood estimation in the HLRT, a sub-optimal algorithm, namely the quasi-HLRT, is used. In the q-HLRT, the ML estimates are obtained from the conditional observation PDF instead of the unconditional PDF used in the HLRT. The other parameters are estimated either by maximizing the PDF or the LFs. In the q-HLRT, the likelihood function (LF) under each hypothesis is computed using method of moments (MoM) estimates


of the unknown parameters and averaging over the unknown constellation points. The MoM estimates are computed from the second- and fourth-order moments of the received signal (Fahad et al., 2006).

2.2.1.5 Sequential Probability Ratio Test (SPRT)

In a sequential test, a random number of samples is used. For multiple hypotheses, the sequential test provides flexibility in controlling the error probabilities. The maximum a posteriori (MAP) test has no control over the individual error probabilities conditioned on each hypothesis, while it minimizes the total error probability. The Neyman-Pearson test maximizes the probability of correct classification for binary hypotheses but is not practical for multiple hypotheses. The sequential probability ratio test (SPRT) can be utilized for binary as well as multi-hypothesis cases (Lin et al., 1997). It achieves a target decision error probability with the least number of samples. The contributions of numerous articles are summarized in compact form in Table II-1.

Table II-1: A Summary of Likelihood based Classifiers

Author(s) | Classifier(s) | Modulations | Unknown parameter(s) | Channel
Kim et al. (1988) and Hsue et al. (1989) | Quasi-ALRT | BPSK, QPSK, BFSK, 4FSK, 8FSK | Carrier phase θ | AWGN
Long et al. (1994) | Quasi-ALRT | 16PSK, 16QAM, V29 | Carrier phase θ | AWGN
Beidas et al. (1995) | ALRT, Quasi-ALRT | 32FSK, 64FSK | Phase jitter {θk}, k = 1, …, K | AWGN


Nandi et al. (1995) | Quasi-ALRT UW | BPSK, QPSK, 8PSK, 16PSK (12 modulations) | Carrier phase θ & timing offset | AWGN
Chuggi et al. (1995) | HLRT | BPSK, QPSK, OQPSK | Carrier phase θ, signal power & PSD | AWGN
Ho et al. (1995) | HLRT | MPSK, MFSK | Angle of arrival | AWGN
Sapiano et al. (1996) | ALRT UW | BPSK, QPSK, 8PSK | - | AWGN
Beidas et al. (1996) and Beidas et al. (1998) | ALRT, Quasi-ALRT | 32FSK, 64FSK | Phase jitter & timing offset | AWGN
Sills (1999) | ALRT | BPSK, QPSK, 16QAM, V29, 32QAM, 64QAM | Carrier phase θ | AWGN
Wei et al. (2000) | ALRT | 16QAM, V29 | - | AWGN
Panagiotou et al. (2000) | GLRT, HLRT | 16PSK, 16QAM, V29 | Carrier phase θ | AWGN
Hong et al. (2003) | ALRT | BPSK, QPSK | Signal level | AWGN
Dobre et al. (2004) | HLRT | BPSK, QPSK, 8PSK, 16PSK, 16QAM, 64QAM | Channel amplitude & phase φ | Flat Fading
Abdi et al. (2004) | ALRT, Quasi-HLRT | 16QAM, 32QAM, 64QAM | Channel amplitude a & phase φ | Flat Fading
Li et al. (2005) | Quasi-HLRT | 4QAM, 16QAM, 64QAM | Frequency offset Δf | AWGN
Dobre et al. (2006) | HLRT, Quasi-HLRT | BPSK, QPSK, 8PSK, 16PSK | Noise PSD N, channel amplitude a & phase φ | Flat Fading


Su et al. (2008) | ALRT, LUT | 16-QAM, 32-QAM | - | AWGN
Hameed et al. (2009) | HLRT, q-HLRT | 16QAM, BPSK, QPSK | Signal amplitude, phase & noise power | Fading
Rvayuli et al. (2013) | HML | 16-QAM, 32-QAM, 64-QAM | - | Rayleigh Fading
Kebrya et al. (2013) | LLF, Weighted Sum Algorithm | QPSK, 16QAM | Amplitude, phase and noise variance | AWGN
Mohammadi et al. (2013) | Expectation Maximization (EM) | BPSK, QPSK, 16QAM, 64QAM | Channel coefficients, noise power | -
Sergienko et al. (2014) | LLF | MPSK, 16QAM | - | -
Zhu et al. (2014) | Non-parametric LF | BPSK, QPSK, 8PSK, 4QAM, 16QAM, 64QAM | Noise variance | Fading

2.3 Features based Pattern Recognition Approach

The feature based (FB) pattern recognition method is a suboptimal solution. In the FB approach, modulation recognition is carried out in two modules. The first module is the feature extraction subsystem, in which features are extracted from the received signal. The second module is the pattern recognizer subsystem, in which the extracted features are input to the classifier structure, which determines the modulation format. Owing to its robustness with respect to model mismatches and its low computational complexity, the FB approach is widely used for modulation recognition (Lanjun et al., 2011; Orlic et al., 2012; Wong et al., 2001).


The feature based pattern recognition approach generally comprises feature extraction and classification. The most common features extracted from the received signal are spectral features, statistical features, cyclo-stationary features, time-frequency features and wavelet features. The classifiers are generally based on nature inspired heuristic algorithms, artificial neural networks, fuzzy logic and hidden Markov models.

2.3.1 Feature Extraction

A feature provides information, relevant to an application, required for solving a computational task. Feature selection depends on the application area. In most applications, extracting only one feature from the received signal does not provide sufficient information about the modulation format; hence, two or more different features are extracted instead of one. Some of the features used in the literature are multi-fractal features, mean and variance, the empirical characteristic function (ECF), the correlation function, wavelet features and higher order correlations (He et al., 2008; Puengnim et al., 2010; Dulek et al., 2014; Marey et al., 2014; Avci et al., 2007; Zeng et al., 2012; Liu et al., 2014 and Beidas et al., 1995).

2.3.1.1 Statistical Features

Higher order statistics involve moments and cummulants. Cummulants are a set of quantities that provide an alternative to moments and are constructed from moments; they may be of 2nd, 4th, 6th and 8th order. The higher order moments (HOM) and higher order cummulants (HOC) are extracted from the received signal to discriminate the modulation formats. Higher order cyclic cummulants are also used as a feature set. The HOC are utilized for modulation classification by Liu et al., (2014), Su, (2013), Eldemerdash et al., (2013), Orlic et al., (2012), Sanderson et al., (2013), Zhou et al., (2010), Prakasam et al., (2008), Lopatka et al., (2000), Dai et al., (2002), Cai et al., (2004) and Ebrahimzadeh et al., (2011).

2.3.1.2 Spectral Features

The spectral features are obtained from three basic parameters, i.e. amplitude, phase and frequency. The amplitude, phase and frequency carry the hidden information content of the modulated signal; these three parameters are obtained by taking the Hilbert transform of the modulated signal. The spectral features are the standard deviations of the centered instantaneous amplitude, frequency and phase, and the power spectral density (PSD). Spectral features were exploited for the classification of a limited number of modulation formats by Nandi et al., (1995), Popoola et al., (2011), Ebrahimzadeh, (2011) and Valipour et al., (2012). Cyclic spectral features, which are extracted from the cyclo-stationary property, were also used by Xianci et al., (1995).

2.3.1.3 Cyclo-stationary Features

In practice, most modulated signals have some parameters which change with time. In digital communication, these parameters may be the periodic keying of amplitude, frequency and phase. In previous decades, these periodicities had not been used for extracting parameters from the received signal. Most signals have the cyclo-stationary property, which can be exploited for the classification of modulation formats. The features extracted from the cyclo-stationary property are known as cyclo-stationary features. These features were exploited for the classification of a limited number of modulations by Ramkumar, (2009), Dobre et al., (2009), Like et al., (2009) and Sanderson et al., (2013).

2.3.1.4 Time-Frequency Features

The Fourier transform (FT) mainly emphasizes the frequency domain and cannot analyze variations in the time domain. To analyze frequency and time jointly, the short time Fourier transform is used. The Short Time Fourier Transform (STFT), the simplest time-frequency representation, is a two-dimensional representation created by computing the FT over a sliding temporal window. Using the STFT, we can observe how the frequency content of the signal changes with time. The features extracted from the time-frequency analysis are used to classify the modulation formats. The Wigner-Ville distribution and the Margenau-Hill distribution are also used in time-frequency analysis. A limited number of modulation formats were classified by Ketterer et al., (1999), Ye et al., (2007), Zhang et al., (1999) and Gandetto et al., (2014) using time-frequency features. A small sketch of the sliding-window STFT computation follows.
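As an illustration of the sliding-window computation described above, the following sketch (an illustrative assumption written for this text, not code from the cited works) computes a magnitude STFT of a sampled signal with NumPy:

```python
import numpy as np

def stft_magnitude(x, win_len=256, hop=64):
    """Short Time Fourier Transform magnitude of a 1-D signal.

    A Hann window slides over the signal; an FFT is taken per window,
    giving a time-frequency map |STFT(frame, frequency bin)|.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.empty((n_frames, win_len // 2 + 1))
    for m in range(n_frames):
        segment = x[m * hop : m * hop + win_len] * window
        frames[m] = np.abs(np.fft.rfft(segment))
    return frames  # rows: time frames, columns: frequency bins

# Example: a 2FSK-like signal whose frequency switches every 500 samples
fs = 4000.0
t = np.arange(4000) / fs
freqs = np.where((np.arange(4000) // 500) % 2 == 0, 400.0, 800.0)
signal = np.cos(2 * np.pi * freqs * t)
tf_map = stft_magnitude(signal)
print(tf_map.shape)
```

The frequency keying shows up as energy alternating between two frequency bins across successive time frames, which is the kind of structure time-frequency features summarize.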

2.3.2 Classification

In recent years, several classifiers have been proposed for the classification of modulation formats. The features discussed in section 2.3.1 are input to the classifier structure, which decides the modulation format of the received signal. The classifier structures include nature-inspired heuristic techniques, artificial neural networks, K-nearest neighbours (KNN), support vector machines (SVM), fuzzy c-means and hidden Markov models (Ling et al., 2010; Puengnim et al., 2010; Avci et al., 2007 and Khaarbech et al., 2014).

2.3.2.1 Nature Inspired Heuristic Techniques

Nature-inspired heuristic techniques have been exploited for modulation classification over the past several decades. The techniques involved are the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), bee colony optimization (BCO) and artificial bee colony optimization (ABCO). The above listed algorithms are frequently used together with neural networks for modulation classification.

Ling et al., (2010) used particle swarm optimization with subtractive clustering (PSO-SC), where the extracted features are clustered by PSO-SC with the best clustering radius. Zhu et al., (2014) used an optimized distribution sampling test (ODST) and GA to optimize the distance metrics. Ebrahimzadeh, (2011) used spectral features and genetic algorithm (GA) based clustering for modulation classification. A combination of spectral features, statistical features and wavelet based features was used, and the performance of the classifier was optimized using PSO, by Valipour et al., (2012). BCO was also used to improve the overall performance of the proposed classifier by Ebrahimzadeh, (2012).


2.3.2.2 Artificial Neural Network

An artificial neural network (ANN) is a mathematical model that resembles the interconnected neurons of the human brain. It contains interconnected processing elements arranged in layers. It acts as a parallel distributed processor capable of storing information during training and making it available for later use when required (Haykin, (2008); Cheng et al., (2014); Gandetto et al., (2014); Guldemir et al., (2007)). Several neural network-based modulation classification schemes have been proposed by Wong et al., (2004), Cheng et al., (2014), Wong et al., (2001), Yang et al., (2001), Popoola et al., (2011), Zadeh et al., (2006), Desimio et al., (1988) and Luo et al., (2014).

In these techniques, discrete wavelet transform (DWT) and principal

component analysis (PCA) are used for feature extraction and reduction,

respectively. The ANN techniques used for classification of modulation formats

are back propagation neural network (BPNN), SVM, multi-layer perceptron

(MLP), radial basis function (RBF) and independent component analysis (ICA).

The back propagation neural network and its variants are used for

classification of modulation formats by Popoola (2014), Desimio et al. (1988),

Nandi et al. (1995), Azzouz et al. (1996), Azzouz et al. (1997a) and Azzouz et

al. (1997b), Park et al. (2006), Wong et al. (2004), Popoola et al. (2011), Cheng

et al. (2014) and Hassan et al. (2010).

One of the simplest classification techniques is the KNN classifier. Classification of an input feature vector is done by determining the k closest training vectors according to a suitable distance metric; the vector is then assigned to the class to which the majority of those k nearest neighbours belong. KNN classifiers were used together with neural networks and evolutionary computing techniques for modulation classification by Aslam et al., (2012), where the fitness function for genetic programming (GP) is evaluated using KNN during the training phase. A minimal sketch of the KNN rule is given below.
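The following sketch (illustrative only; the feature vectors and labels are hypothetical placeholders) implements the majority-vote KNN rule described above with NumPy:

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feat, k=5):
    """Assign test_feat to the class holding the majority among the
    k closest training feature vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    nearest = np.argsort(dists)[:k]             # indices of k closest vectors
    votes = train_labels[nearest]
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]            # majority class wins

# Hypothetical 2-D feature vectors (e.g. normalized cummulants) for two classes
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(-1.0, 0.1, (20, 2)),    # class 0, e.g. QPSK-like
                   rng.normal(-2.0, 0.1, (20, 2))])   # class 1, e.g. BPSK-like
labels = np.array([0] * 20 + [1] * 20)
print(knn_classify(feats, labels, np.array([-1.9, -2.1])))   # expected: 1
```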

The support vector machine is a state-of-the-art classification method based on machine learning theory. Compared with other methods such as ANN, decision trees and Bayesian networks, SVMs have the significant advantages of higher accuracy, elegant mathematical tractability and a direct geometric interpretation. Besides, they do not need a large number of training samples to avoid over-fitting. The original SVMs are linear classifiers; the SVM and the multiclass SVM can both be used for modulation classification.

Ebrahimzadeh et al., (2010) used SVM and multi-class SVM as classifiers, with the classifier performance optimized using GA. Ebrahimzadeh, (2012) used a hierarchical support vector machine with BCO to improve the overall performance of the classifier. Ebrahimzadeh et al., (2011) used a hybrid neural network classifier for classification. Some related work using SVM was carried out by Valipour et al., (2012), Ebrahimzadeh et al., (2009), Mustafa et al., (2004) and Park et al., (2006).

2.3.2.3 Fuzzy c-Means

The fuzzy c-means (FCM) method is a simple statistical feature comparison that distinctively characterizes the object. FCM uses an objective function to allow cluster formation in a multidimensional space; each data point in the space can belong to more than one cluster, with different membership values. Avci et al., (2007) utilized an expert discrete wavelet adaptive network based fuzzy inference system for modulation classification, where twenty different feature sets are generated using the Daubechies, Coiflets, Biorthogonal and Symlets wavelet families. Mobasseri, (1999) used a fuzzy c-means clustering algorithm for recovery of the constellation, after which the recovered constellations are modeled by binomial non-homogeneous spatial random fields. Lopatka et al., (2000) used a Mamdani fuzzy classifier for the classification of modulation formats. A small sketch of the FCM membership update is given below.
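The following sketch (a generic FCM iteration written for illustration; it is not taken from the cited works, and the sample points are hypothetical) shows how fuzzy membership values and cluster centres are updated from the objective-function viewpoint:

```python
import numpy as np

def fuzzy_c_means(data, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy c-means: every point gets a membership in [0, 1]
    for every cluster; centres are membership-weighted means."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)         # renormalize memberships
    return centres, u

# Hypothetical 2-D constellation samples drawn around two cluster centres
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal((1, 1), 0.1, (50, 2)),
                 rng.normal((-1, -1), 0.1, (50, 2))])
centres, memberships = fuzzy_c_means(pts)
print(np.round(centres, 2))
```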

2.3.2.4 Hidden Markov Model

In the HMM approach, the main interest is in determining the a posteriori probability of the received signal. These probabilities are computed using the Baum-Welch (BW) algorithm. Puengnim et al., (2010) used the BW algorithm for classifying modulation formats; the features extracted are the mean and variance, and the classifier estimates the a posteriori probabilities of the received signal for each modulation type and plugs them into the optimum Bayes decision rule. Some related work using HMM was reported by Puengnim et al., (2008) and Puengnim et al., (2007). Automatic modulation classification (AMC) algorithms which rely on feature extraction from the received signal are discussed in detail by Dobre et al., 2007. The contributions of numerous articles are summarized in Table II-2.


Table II-2: A Summary of Feature Based Classifiers

Author(s) + Year | Features | Modulation Formats | Channel
Hsue et al. (1990); Yang et al. (1991) | Variance of the zero-crossing interval sequence, phase difference and zero-crossing interval histograms | UW, BPSK, QPSK, 8PSK, BFSK, 4FSK, 8FSK | AWGN
Sapiano et al. (1995) | DFT of phase PDF | UW, BPSK, QPSK, 8PSK | AWGN
Azzouz et al. (1996) | Maximum PSD of normalized centered amplitude; standard deviations of normalized centered amplitude, phase and frequency | 2ASK, 4ASK, BPSK, QPSK, 2FSK, 4FSK | AWGN
Martret et al. (1997) | Fourth- and second-order moments of the received signal | QPSK, 16QAM | AWGN
Yang et al. (1997); Yang et al. (1998) | PDF of phase | UW, BPSK, QPSK, 8PSK | AWGN
Marchand et al. (1998) | Fourth- and second-order cyclic cummulants of the received signal | 4QAM, 16QAM, 64QAM | AWGN
Hong (1999) | Variance of HWT magnitude and normalized HWT magnitude | QPSK, 4FSK, 16QAM | AWGN
Swami et al. (2000) | Normalized fourth-order cummulants of the received signal and AMA cost function | BPSK, 4ASK, QPSK, 16QAM, V29, V32, 64QAM | Frequency selective channel
Wei et al. (2000) | Normalized fourth-order cummulants of the received signal | BPSK, 4ASK, 16QAM, 8PSK, V32, V29, V29c | AWGN, impulsive noise, co-channel interference
Hong et al. (2002) | Variance of HWT magnitude, HWT magnitude and peak magnitude histograms | BPSK, QPSK, 8PSK, 2FSK, 4FSK, 8FSK, CP2FSK, CP4FSK, CP8FSK, MSK | AWGN
Dobre et al. (2003) | Eighth-order cyclic cummulants of the received signal | BPSK, QPSK, 8PSK, ASK, 8ASK, 16QAM, 64QAM, 256QAM | AWGN
Yu et al. (2003) | DFT of the received signal | 2FSK, 4FSK, 8FSK, 16FSK, 32FSK | AWGN
Dobre et al. (2004) | Eighth-, sixth- and fourth-order cyclic cummulants of the received signal | 4QAM, 16QAM | AWGN, impulsive noise
Dobre et al. (2005) | Eighth-order cyclic cummulants of the signal at the output of a selection combiner | 4ASK, 8ASK, BPSK, QPSK, 16QAM, 32QAM, 64QAM | Rayleigh & Rician fading channels
Avci et al. (2007) | Daubechies, Coiflets, Biorthogonal and Symlets wavelets | ASK8, FSK8, PSK8, QASK8 | AWGN
He et al. (2008) | Multi-fractal features/dimensions | CW, BFSK, BPSK, 4ASK, 16QAM | AWGN
Like et al. (2009) | Hierarchical cyclo-stationary features | AM, BPSK, OFDM, CDMA, 4ASK, 8ASK, 1-16PSK, 16 & 64 QAM | Fading
Puengnim et al. (2010) | Mean and variance | QPSK, 8PSK, 16APSK, 32APSK | AWGN + fading
Ling et al. (2010) | Particle swarm optimization subtractive clustering (PSO-SC) | M-QAM | AWGN
Ebrahimzadeh et al. (2010) | HOC & SVM | BPSK, QPSK, QAM | Multipath fading
Hassan et al. (2010) | HOM of continuous wavelet transform | M-ary shift keying | AWGN + fading
Ahmadi (2011) | Constellation diagram and two-threshold algorithm | 4QAM, 16QAM, 64QAM, 256QAM | AWGN
Orlic et al. (2012) | 6th order cummulants | QAM and PAM | Multipath fading
Sanderson et al. (2013) | 4th and 6th order cummulants & cyclostationary features | BPSK, QPSK, 16QAM | Fading + AWGN
Liu et al. (2014) | Higher order cyclic cummulants | BPSK, 4PAM, QPSK, 8PSK, 16QAM | AWGN
Marey et al. (2014) | Correlation function | BPSK, QPSK, 8PSK, 16QAM | Fading
Cheng et al. (2014) | Spectral features | 2ASK, 4ASK, 2FSK, 4FSK, BPSK & QPSK | AWGN

2.4 Summary

In this chapter, we focused only on the feature based pattern recognition approach. The popular modulation formats considered for classification purposes are PAM, QAM, PSK and FSK of order 2 to 64. The most popular feature extraction techniques in the literature are spectral features, cyclo-stationary features, higher-order moments and higher-order cummulants. A detailed summary of algorithms based on the decision theoretic approach and the pattern recognition approach is also given in tabular form.


CHAPTER III

AUTOMATIC MODULATION CLASSIFICATION USING FEATURE EXTRACTION TECHNIQUES

In this chapter, the generalized system model for the classification of digital modulation formats is presented. The features extracted from the received signal are normalized higher order cummulants and spectral features. The 8th order normalized cummulants are used for the classification of modulation formats, whereas the existing features are the 4th and 6th order normalized cummulants. The performance comparison with existing techniques shows that high classification accuracy is achieved using the proposed features and that the classifier is capable of classifying PSK, PAM and QAM efficiently. The classifier performance is also evaluated on fading channels. The classifiers proposed in this chapter are based on linear discriminant analysis, maximum likelihood, the feed forward back propagation neural network and the support vector machine. The classification accuracy of the proposed classifiers is much better at lower SNRs, and the classifier structures are capable of classifying a large number of modulation formats. The classification performance is also compared with well-known existing techniques in the literature.

3.1 Generalized System Model

In a communication system, the modulation format of the transmitted signal is one of its important parameters. For non-cooperative communication systems, identification of the signal modulation format is a complex issue. In practical communication systems, signal modulation formats have become more diverse to improve anti-interference ability. Modulation classification (MC) is performed before demodulation of the signal.

The generalized system model for automatic modulation classification (AMC) is shown in Figure III-1. The input symbols are modulated by any of the modulation formats (PSK, FSK, PAM and QAM) of order 2 to 64. The modulated symbols undergo channel effects as well as channel noise. The generalized expression for the received signal is given in Eq. III.1.

r(t) = x(t; \mathbf{u}_i) + y(t)   (Eq.III.1)

where y(t) is AWGN and x(t; \mathbf{u}_i) is the noise-free baseband complex envelope of the received signal, given in its comprehensive form by

x(t; \mathbf{u}_i) = a_i \, e^{j(2\pi \Delta f t + \theta)} \sum_{j=1}^{J} e^{j\phi_j} \, x_{ij} \, h\big(t - (j-1)T - \varepsilon T\big)   (Eq.III.2)

where a_i = \sqrt{E_s / (\sigma_{x_i}^2 E_p)} is the unknown amplitude factor, with E_s the baseband signal energy, \sigma_{x_i}^2 the variance of the i-th signal constellation and E_p the transmitted pulse energy; \Delta f is the carrier frequency offset, \theta is the carrier phase, \phi_j is the phase jitter (varying from sample to sample), x_{ij} is the symbol sequence from the i-th modulation format, \varepsilon is the timing offset, h(\cdot) is the channel impulse response and T is the symbol period.


Figure III-1. The Generalized System Model

There are four basic types of modulation schemes: FSK, PSK, PAM and QAM. Suppose we have a baseband (message) signal m(t) and a carrier signal

m_c(t) = A_c \cos(2\pi f_c t + \theta_c)   (Eq.III.3)

where A_c is the carrier amplitude, f_c is the carrier frequency and \theta_c is the carrier phase angle. The transmitted signal is x(t) = m(t) \cdot m_c(t), where m(t) is the baseband signal and m_c(t) is the carrier signal. The modulated signals for frequency shift keying (FSK) and phase shift keying (PSK) are defined by

m_{FSK}(t) = \cos\big(2\pi f_c t + \theta(t)\big)   (Eq.III.4)

m_{PSK}(t) = A \cos\big(2\pi f_c t + \theta_i\big)   (Eq.III.5)

where i = 1, 2, 3, \dots, M. In pulse amplitude modulation (PAM), the amplitude of the carrier varies in proportion to the sampled values of the baseband signal:

m_{PAM}(t) = \sum_{n} m(n)\, p(t - nT)   (Eq.III.6)

where m(n) are the pulse amplitudes, T is the pulse repetition interval and 1/T is the symbol rate. Quadrature amplitude modulation (QAM) requires changing the amplitude and phase of signal and carrier. QAM can be achieved by mixing two sine waves that are 90 degrees out of phase with each other; varying the amplitude of either component varies the phase and amplitude of the mixed signal. Let m_1(t) and m_2(t) be two signals such that m_1(t) = A\cos(\varphi) and m_2(t) = A\sin(\varphi). The modulated signal for QAM is

m_{QAM}(t) = m_1(t)\cos(2\pi f_c t) + m_2(t)\sin(2\pi f_c t)   (Eq.III.7)
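To make the signal model concrete, the sketch below (an illustrative assumption, not the thesis simulation code) generates a block of QAM symbols and passes them through the AWGN model of Eq. III.1 at a prescribed SNR:

```python
import numpy as np

def qam_symbols(n_sym, order=16, seed=0):
    """Draw random square-QAM symbols with unit average energy."""
    m = int(np.sqrt(order))                       # points per I/Q axis
    rng = np.random.default_rng(seed)
    levels = 2 * rng.integers(0, m, (2, n_sym)) - (m - 1)
    sym = levels[0] + 1j * levels[1]
    return sym / np.sqrt(np.mean(np.abs(sym) ** 2))

def add_awgn(x, snr_db, seed=1):
    """r = x + y, with y complex AWGN scaled to the requested SNR."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(np.abs(x) ** 2) / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(len(x))
                                        + 1j * rng.standard_normal(len(x)))
    return x + noise

r = add_awgn(qam_symbols(3000, order=16), snr_db=10)
print(r[:3])
```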

3.2 Automatic Modulation Classification using Higher Order Statistics (HOS)

3.2.1 Introduction

In this section, AMC is presented using normalized 8th order

cummulants. The proposed algorithm considers the PAM 2 to 64, QAM 2 to

64, BPSK and QPSK for the classification. The proposed algorithm for

automatic digital modulation classification (ADMC) has significant

classification performance under the effect of AWGN. The classification

problem is divided into three scenarios of {BPSK, QPSK}, {PAM 2-64} and

{QAM 2-64}. The simulation results show that the proposed algorithm has high

classification accuracy at lower SNR. The performance is also compared with

6th order and 4th order normalized cummulants by Orlic et al., (2009) and Chen

et al., (2008).


3.2.2 Statistical Features

As cummulants are made up of moments, various moments have been used as features. For the complex-valued stationary random process x(n), the cummulants of 2nd, 4th, 6th and 8th order have the following definitions by Ghauri et al., (2013):

C_{20,x} = \mathrm{cum}\{x(n), x(n)\} = E[x^2(n)]   (Eq.III.8)

C_{21,x} = \mathrm{cum}\{x(n), x^*(n)\} = E[|x(n)|^2]   (Eq.III.9)

C_{40,x} = \mathrm{cum}\{x(n), x(n), x(n), x(n)\} = M_{40} - 3M_{20}^2   (Eq.III.10)

C_{42,x} = \mathrm{cum}\{x(n), x(n), x^*(n), x^*(n)\} = M_{42} - |M_{20}|^2 - 2M_{21}^2   (Eq.III.11)

C_{63,x} = \mathrm{cum}\{x(n), x(n), x(n), x^*(n), x^*(n), x^*(n)\} = M_{63} - 9M_{21}M_{42} + 12M_{21}^3 - 3M_{20}M_{43} - 3M_{22}M_{41} + 18M_{20}M_{21}M_{22}   (Eq.III.12)

C_{84,x} = \mathrm{cum}\{x(n), x(n), x(n), x(n), x^*(n), x^*(n), x^*(n), x^*(n)\} = M_{84} - 16C_{63}C_{21} - |C_{40}|^2 - 18C_{42}^2 - 72C_{42}C_{21}^2 - 24C_{21}^4   (Eq.III.13)

where M_{pq} stands for the moments of the received signal and is given by

M_{pq} = E\big[x(k)^{\,p-q}\,\big(x^*(k)\big)^{q}\big]   (Eq.III.14)

The normalized 8th order cummulant \hat{C}_{84,x} is given by Ghauri et al. (2013):

\hat{C}_{84,x} = \frac{C_{84,x}}{C_{21,x}^{4}}   (Eq.III.15)

In ADMC, \hat{C}_{84,x} is the key feature. \hat{C}_{84,x} can also be estimated from the received signal r(t), which is corrupted by AWGN and fading:

\hat{C}_{84,x} = \frac{C_{84,r}}{\beta\,(C_{21,r} - \sigma_g^2)^{4}}   (Eq.III.16)

\beta = \frac{\sum_{k=0}^{L-1} |h(k)|^{8}}{\left(\sum_{k=0}^{L-1} |h(k)|^{2}\right)^{4}}   (Eq.III.17)

The normalized 6th order cummulant \hat{C}_{63,x} is given by Orlic et al. (2009):

\hat{C}_{63,x} = \frac{C_{63,x}}{C_{21,x}^{3}}   (Eq.III.18)

\hat{C}_{63,x} = \frac{C_{63,r}}{\beta\,(C_{21,r} - \sigma_g^2)^{3}}   (Eq.III.19)

\beta = \frac{\sum_{k=0}^{L-1} |h(k)|^{6}}{\left(\sum_{k=0}^{L-1} |h(k)|^{2}\right)^{3}}   (Eq.III.20)

The normalized 4th order cummulant \hat{C}_{42,x} is given by Wu et al. (2008):

\hat{C}_{42,x} = \frac{C_{42,x}}{C_{21,x}^{2}}   (Eq.III.21)

\hat{C}_{42,x} = \frac{C_{42,r}}{\beta\,(C_{21,r} - \sigma_g^2)^{2}}   (Eq.III.22)

\beta = \frac{\sum_{k=0}^{L-1} |h(k)|^{4}}{\left(\sum_{k=0}^{L-1} |h(k)|^{2}\right)^{2}}   (Eq.III.23)

where \sigma_g^2 is the noise variance and h(k), k = 0, \dots, L-1, are the channel coefficients.
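As a sanity check on these definitions, the following sketch (illustrative only; the symbol generation and variable names are assumptions, not the thesis code) estimates the sample moments and the normalized cummulants from a block of noiseless QPSK symbols; the result should approach the theoretical QPSK values (-1, 4, -34) listed in Table III-1 below:

```python
import numpy as np

def moment(x, p, q):
    """Sample moment M_pq = E[ x^(p-q) * conj(x)^q ]."""
    return np.mean(x ** (p - q) * np.conj(x) ** q)

def normalized_cumulants(x):
    """Estimate C42/C21^2, C63/C21^3 and C84/C21^4 per Eq. III.8 to III.15."""
    m = {pq: moment(x, *pq) for pq in
         [(2, 0), (2, 1), (2, 2), (4, 0), (4, 1), (4, 2), (4, 3), (6, 3), (8, 4)]}
    c21 = m[2, 1]
    c40 = m[4, 0] - 3 * m[2, 0] ** 2
    c42 = m[4, 2] - abs(m[2, 0]) ** 2 - 2 * m[2, 1] ** 2
    c63 = (m[6, 3] - 9 * m[2, 1] * m[4, 2] + 12 * m[2, 1] ** 3
           - 3 * m[2, 0] * m[4, 3] - 3 * m[2, 2] * m[4, 1]
           + 18 * m[2, 0] * m[2, 1] * m[2, 2])
    c84 = (m[8, 4] - 16 * c63 * c21 - abs(c40) ** 2 - 18 * c42 ** 2
           - 72 * c42 * c21 ** 2 - 24 * c21 ** 4)
    return (c42 / c21 ** 2).real, (c63 / c21 ** 3).real, (c84 / c21 ** 4).real

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 20000))
print(normalized_cumulants(qpsk))   # expected to approach (-1, 4, -34)
```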

Table III-1 shows the theoretical values of the normalized cummulants \hat{C}_{42,x}, \hat{C}_{63,x} and \hat{C}_{84,x} for the considered modulation scenarios.

Table III-1. Theoretical normalized 4th, 6th & 8th order cummulants for various modulation constellations

Modulation | \hat{C}_{42,x} | \hat{C}_{63,x} | \hat{C}_{84,x}
BPSK | -2 | 13 | -163
QPSK | -1 | 4 | -34
PAM 2 | -2 | 13 | -163
PAM 4 | -1.3586 | 70.7 | 440.60
PAM 8 | -1.2368 | 522.26 | 1292.9717
PAM 16 | -1.2113 | 3636.415 | 36934.8038
PAM 32 | -1.2039 | 23019.7704 | 931687.15872
PAM 64 | -1.1988 | 15363.424446 | 2588813.3566
QAM 2 | -2 | 13 | -163
QAM 4 | -1 | 1.96 | 13.6
QAM 8 | -1.0011 | 0.0192 | 0.0637
QAM 16 | -0.6778 | 2.08 | -13.9808
QAM 32 | -0.6876 | 1.9448 | -12.005
QAM 64 | -0.6167 | 1.7972 | -11.5022


3.2.3 Proposed Algorithm for AMC

The algorithm for automatic digital modulation classification on fading channels (Rayleigh flat, Rician flat and log-normal) by Ghauri et al. (2014) is as follows; a minimal sketch of the comparison in Step 4 is given after the list.

Step 1. Calculate the normalized channel coefficients h(k).
Step 2. Calculate β according to Eq. III-17, Eq. III-20 and Eq. III-23.
Step 3. Calculate the normalized even-order cummulants according to Eq. III-15, Eq. III-18 and Eq. III-21.
Step 4. Compare \hat{C}_{84,x}, \hat{C}_{63,x} and \hat{C}_{42,x} with the theoretical values listed in Table III-1 to determine the modulation type of the received signal.
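The following sketch (an assumption of how the comparison in Step 4 could be coded; the lookup table reuses a few of the theoretical values from Table III-1) picks the modulation whose theoretical normalized cummulants lie closest to the estimated ones:

```python
import numpy as np

# Subset of Table III-1: (C42_hat, C63_hat, C84_hat) theoretical values
THEORY = {
    "BPSK":  (-2.0,    13.0,   -163.0),
    "QPSK":  (-1.0,     4.0,    -34.0),
    "QAM16": (-0.6778,  2.08,   -13.9808),
    "QAM64": (-0.6167,  1.7972, -11.5022),
}

def classify_by_cumulants(c42, c63, c84):
    """Nearest theoretical value wins (Step 4 of the proposed algorithm).

    Each dimension is scaled so the large C84 spread does not dominate the
    Euclidean distance; this scaling is an implementation choice, not
    specified in the thesis.
    """
    feats = np.array([c42, c63, c84])
    table = np.array(list(THEORY.values()))
    scale = np.abs(table).max(axis=0)
    dists = np.linalg.norm((table - feats) / scale, axis=1)
    return list(THEORY.keys())[int(np.argmin(dists))]

print(classify_by_cumulants(-0.95, 4.3, -32.0))   # expected: QPSK
```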

3.2.4 Simulation Results

The performance of the proposed classifier is analyzed in this section. The simulation results show the probability of correct classification (PCC) in the presence of an AWGN channel, as well as on fading channels such as Rayleigh flat, Rician flat and lognormal fading channels. The results also compare the use of the normalized eighth-order \hat{C}_{84,x}, sixth-order \hat{C}_{63,x} and fourth-order \hat{C}_{42,x} cummulants. The correct classification rate using 8th order cummulants is much higher than that obtained using 6th and 4th order cummulants.


Figure III-2. PCC on AWGN channel in scenario {BPSK, QPSK}, N=250

Figure III-2 shows the classifier performance using 8th order cummulants on the AWGN channel. The average PCC is approximately 0.99 at SNR = 2 dB, whereas using 6th and 4th order cummulants the average PCC is 0.91 and 0.82, respectively, at the same SNR. Figure III-3 shows the PCC curve for the scenario {QAM 2 to 64} on the AWGN channel with N = 3000. The 8th order cummulants give an average PCC of approximately 0.9 at SNR = 0 dB, whereas using 6th and 4th order cummulants the average PCC is 0.85 and 0.77, respectively, at the same SNR.

Figure III-3. PCC on AWGN channel in scenario {QAM 2 to 64}, N=3000


Figure III-4. PCC on AWGN channel in scenario {PAM 2 to 64}, N=3000

Figure III-4 shows that the correct classification rate using 8th order cummulants reaches approximately 1 at SNR = 2 dB, whereas using 6th and 4th order cummulants the average PCC is 0.96 and 0.84, respectively, at the same SNR. Figures III-5, III-6 and III-7 show the PCC curves for the scenarios {BPSK, QPSK}, {PAM 2 to 64} and {QAM 2 to 64} on fading channels with 250, 2000 and 2000 samples, respectively.

Figure III-5. PCC on Flat Fading channel in scenario {BPSK, QPSK}, N=250


Figure III-6. PCC on Flat Fading channel in scenario {PAM 2 to 64}, N=2000

The curves show that the PCC using 8th order cummulants is higher than that using 6th and 4th order cummulants in all three cases. At SNR = 0 dB, the average PCC using 8th order cummulants is 0.95 for the scenario {BPSK, QPSK}, 0.7 for the scenario {PAM 2 to 64} and 0.64 for the scenario {QAM 2 to 64}.

Figure III-7. PCC on Flat Fading channel in scenario {QAM 2 to 64}, N=2000


Figure III-8. PCC on Rayleigh Flat Fading channel in scenario {BPSK, QPSK}, N=250

Figures III-8, III-9 and III-10 show the PCC curves for the scenario {BPSK, QPSK} on Rayleigh flat fading, lognormal fading and Rician flat fading channels with N = 250. The curves show that the PCC using 8th order cummulants is higher on faded channels: the average PCC is 0.94 using 8th order cummulants, while it is 0.8 using 4th order cummulants.

Figure III-9. PCC on Lognormal Fading channel in scenario {BPSK, QPSK}, N=250


Figure III-10. PCC on Rician Flat Fading channel in scenario {BPSK, QPSK}, N=250

Figure III-11 shows the performance evaluation of ADMC on the faded channels and the AWGN channel for the scenario {BPSK, QPSK}. The 8th order cummulants feature provides higher accuracy on all faded channels.

Figure III-11. Performance of ADMC on Faded channel in scenario {BPSK, QPSK}, N=250


Figure III-12. Performance of ADMC on Faded channel in scenario {QAM 2 to 64}, N=3000

The PCC for the scenario {QAM 2 to 64} on Rayleigh flat, Rician flat and lognormal fading channels is shown in Figure III-12. Using 8th order cummulants gives a better classification rate than 6th and 4th order cummulants. For example, the average PCC is 0.57 at SNR = -2 dB on the lognormal fading channel, while using 6th and 4th order cummulants the average PCC is 0.51 and 0.45, respectively, at the same SNR.

Figure III-13. Performance of ADMC on Faded channel in scenario {PAM 2 to 64}, N=2500


The PCC for the scenario {PAM 2 to 64} on Rayleigh flat, Rician flat and lognormal fading channels is shown in Figure III-13. The performance of the classifier using the 8th order cummulants feature is much better than with 6th and 4th order cummulants. For example, the average PCC is 0.6 at SNR = 0 dB on the Rician fading channel, while using 6th and 4th order cummulants the average PCC is 0.57 and 0.51, respectively, at the same SNR.

3.2.5 Summary

The performance of the normalized 8th order cummulants is compared with that of the normalized 6th and 4th order cummulants, and it is found that the probability of correct classification using 8th order cummulants is much better than with 6th and 4th order cummulants at lower SNRs.

3.3 Automatic Modulation Classification using Linear Discriminant Analysis (LDA)

3.3.1 Introduction

In this section, modulation classification is carried out using linear discriminant analysis (LDA) on fading channels. The features used for classification are higher order cummulants and normalized higher order cummulants. LDA classifies the received signal into a set of different classes; the performance metric is a minimum distance criterion which discriminates the received signal data set into different classes. The performance of the proposed algorithm shows substantial separation of the different digitally modulated signals on faded channels.

3.3.2 Higher Order Cummulants as Feature Set

For the classification of modulated signals, higher order moments and higher order cummulants are used. The HOC and HOM are defined in Eq. III-8 to Eq. III-14. The 4th, 6th and 8th order cummulants are defined by Ghauri et al., (2013):

C_{41} = \mathrm{cum}\{x(n), x(n), x(n), x^*(n)\} = M_{41} - 3M_{20}M_{21}   (Eq.III.24)

C_{60} = \mathrm{cum}\{x(n), x(n), x(n), x(n), x(n), x(n)\} = M_{60} - 15M_{20}M_{40} + 30M_{20}^3   (Eq.III.25)

C_{61} = \mathrm{cum}\{x(n), x(n), x(n), x(n), x(n), x^*(n)\} = M_{61} - 5M_{21}M_{40} - 10M_{20}M_{41} + 30M_{20}^2 M_{21}   (Eq.III.26)

C_{62} = \mathrm{cum}\{x(n), x(n), x(n), x(n), x^*(n), x^*(n)\} = M_{62} - 6M_{20}M_{42} - 8M_{21}M_{41} - M_{22}M_{40} + 6M_{20}^2 M_{22} + 24M_{21}^2 M_{20}   (Eq.III.27)

C_{80} = \mathrm{cum}\{x(n), x(n), x(n), x(n), x(n), x(n), x(n), x(n)\} = M_{80} - 35M_{40}^2 - 28M_{60}M_{20} + 420M_{40}M_{20}^2 - 630M_{20}^4   (Eq.III.28)

The theoretical values of the normalized 4th order and 6th order cummulants for the {FSK 2 to 64} modulation formats are given in Table III-2.

Table III-2. Theoretical values of normalized cummulants of the considered modulation types

Modulation | \hat{C}_{42,x} | \hat{C}_{63,x}
FSK 2 | -2 | 13
FSK 4 | -1.3586 | 7.07
FSK 8 | -1.2368 | 5.22
FSK 16 | -1.2113 | 3.63
FSK 32 | -1.2039 | 2.30
FSK 64 | -1.1988 | 1.536

3.3.3 Proposed Algorithm for AMC

The proposed algorithm based on LDA is presented by Ghauri et al. (2014). LDA works by projecting data into a feature space using a linear mapping and then comparing the result to a centroid for each class. The flow chart for classification of the modulation schemes is given below.

Figure III-14: Flow Chart for Modulation Classification
[Flow chart blocks: Signal Received -> Spectrum Analysis -> Calculate Statistics -> Apply Projection Matrix -> Calculate Distance]


Proposed Algorithm for Modulation Classification using LDA (a minimal sketch follows the list):

Step 1. Create the two data sets, i.e. classes c1 & c2.
Step 2. Calculate the mean of each data set, μ1 & μ2.
Step 3. Calculate the weighted mean μ = p1·μ1 + p2·μ2.
Step 4. Calculate the covariance matrix S_i of each data set and the mean of the data set.
Step 5. Class-independent case: compute inverse(S_w)·S_B, where S_w and S_B are the within-class scatter matrix and the between-class scatter matrix,

S_w = \sum_{i=1}^{N} S_i, \qquad S_B = \sum_{i=1}^{N} n_i (m_i - m)(m_i - m)^T

Step 6. Find the projection matrix W.
Step 7. Find the optimum weight vector.
Step 8. Find the transformed data set y = W^T x.
Step 9. Find the minimum distance between the transformed data set and the test data.
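The sketch below (a generic two-class LDA written from the steps above; the feature data are hypothetical placeholders, not the thesis data) builds the scatter matrices, projects onto the leading eigenvector of inverse(S_w)·S_B and classifies a test vector by its distance to the projected class centroids:

```python
import numpy as np

def train_lda(class_data):
    """class_data: list of (n_i, d) feature arrays, one per modulation class."""
    m = np.mean(np.vstack(class_data), axis=0)                  # overall mean
    d = class_data[0].shape[1]
    s_w, s_b = np.zeros((d, d)), np.zeros((d, d))
    for x in class_data:
        mi = x.mean(axis=0)
        s_w += np.cov(x, rowvar=False) * (len(x) - 1)            # within-class scatter
        diff = (mi - m)[:, None]
        s_b += len(x) * diff @ diff.T                            # between-class scatter
    eigval, eigvec = np.linalg.eig(np.linalg.inv(s_w) @ s_b)
    w = np.real(eigvec[:, np.argmax(np.real(eigval))])           # projection vector W
    centroids = np.array([x.mean(axis=0) @ w for x in class_data])
    return w, centroids

def lda_classify(w, centroids, feat):
    return int(np.argmin(np.abs(centroids - feat @ w)))          # minimum-distance rule

rng = np.random.default_rng(0)
c0 = rng.normal([-1.0, 4.0], 0.2, (200, 2))     # e.g. QPSK-like cumulant features
c1 = rng.normal([-2.0, 13.0], 0.2, (200, 2))    # e.g. BPSK-like cumulant features
w, cent = train_lda([c0, c1])
print(lda_classify(w, cent, np.array([-1.9, 12.5])))   # expected: 1
```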

3.3.4 Simulation Results

First, 30 training signals of 10,000 symbols are generated for the (PSK2, PSK4, PSK8, PSK16, PSK32, PSK64, FSK2, FSK4, FSK8, FSK16, FSK32, FSK64, QAM2, QAM4, QAM8, QAM16, QAM32, QAM64) modulation formats, and the higher order moments and cummulants are calculated for each training signal. These parameters are then passed to a training function that determines a projection matrix and class centroids. The spectrum of each signal is analyzed by calculating the fast Fourier transform (FFT) of the first 4096 samples. After that, the normalized cummulants are calculated and projected into the feature space using the projection matrix. The projected features are compared to the class centroids and the closest one is chosen as the modulation format. Tables III-3 to III-5 show the classification performance, in the form of confusion matrices, for PSK modulations on the AWGN channel, the Rayleigh flat fading channel and the Rician flat fading channel at an SNR of 10 dB. The average classifier performance in the case of the AWGN channel is 99.95%, whereas in the case of the Rayleigh channel the performance is 91.3% at an SNR of 10 dB.

Tables III-6 to III-8 show the classification performance, in the form of confusion matrices, for FSK modulations on the AWGN, Rayleigh flat fading and Rician flat fading channels at an SNR of 10 dB. The average classifier performance in the case of the AWGN channel is 99.5%, whereas in the case of the Rayleigh channel the performance is 88% and in the case of the Rician channel it is 91% at an SNR of 10 dB. The performance is better on the Rician channel than on the Rayleigh channel because the Rician channel has a line-of-sight component which preserves the spectral components of the FSK signals, resulting in higher classification accuracy.

Tables III-9 to III-11 show the confusion matrices for QAM modulations on the AWGN channel, the Rayleigh flat fading channel and the Rician flat fading channel at an SNR of 10 dB. The average classifier performance in the case of the AWGN channel is 99.95%, whereas in the case of the Rayleigh channel the performance is 77.36% and in the case of the Rician channel it is 81% at an SNR of 10 dB.


Table III-3. Confusion Matrix for PSK modulation in AWGN channel (Average Performance 99.95%)

10 dB 2PSK 4PSK 8PSK 16PSK 32PSK 64PSK

2PSK 10000 0 0 0 0 0

4PSK 0 10000 0 0 0 0

8PSK 0 0 10000 0 0 0

16PSK 0 0 3 9992 5 0

32PSK 0 0 0 10 9985 5

64PSK 0 0 0 0 10 9990

Table III-4. Confusion Matrix for PSK modulation in AWGN + Rician Flat Fading Channel (Average Performance 92.26%)

10dB 2PSK 4PSK 8PSK 16PSK 32PSK 64PSK

2PSK 9793 207 0 0 0 0

4PSK 295 9395 290 40 0 0

8PSK 0 111 9639 250 0 0

16PSK 0 190 413 8967 430 0

32PSK 0 0 174 309 9091 426

64PSK 0 0 110 390 1038 8462

Table III-5. Confusion Matrix for PSK modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 91.38%)

10dB 2PSK 4PSK 8PSK 16PSK 32PSK 64PSK

2PSK 9793 207 0 0 0 0

4PSK 496 9304 200 0 0 0

8PSK 0 185 9625 190 0 0

16PSK 0 9 895 8925 171 0

32PSK 0 0 48 931 8920 101

64PSK 0 0 0 681 994 8325


Table III-6. Confusion Matrix for FSK modulation in AWGN channel (Average Performance 99.5%)

10dB 2FSK 4FSK 8FSK 16FSK 32FSK 64FSK

2FSK 10000 0 0 0 0 0

4FSK 0 10000 0 0 0 0

8FSK 0 4 9996 0 0 0

16FSK 0 8 31 9961 0 0

32FSK 0 11 22 34 9921 12

64FSK 0 2 11 27 133 9827

Table III-7. Confusion Matrix for FSK modulation in AWGN + Rician Flat Fading Channel (Average Performance 91%)

10dB 2FSK 4FSK 8FSK 16FSK 32FSK 64FSK

2FSK 10000 0 0 0 0 0

4FSK 416 9584 0 0 0 0

8FSK 0 421 9196 383 0 0

16FSK 0 0 921 8942 137 0

32FSK 0 0 0 876 8425 699

64FSK 0 0 0 713 893 8394

Table III-8. Confusion Matrix for FSK modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 88%)

10dB 2FSK 4FSK 8FSK 16FSK 32FSK 64FSK

2FSK 10000 0 0 0 0 0

4FSK 632 9179 189 0 0 0

8FSK 0 645 8924 431 0 0

16FSK 0 0 1116 8795 89 0

32FSK 0 0 69 1721 8210 0

64FSK 0 0 18 381 1891 7710


Table III-9. Confusion Matrix for QAM modulation in AWGN channel (Average Performance 99.95%)

10dB 2QAM 4QAM 8QAM 16QAM 32QAM 64QAM

2QAM 10000 0 0 0 0 0

4QAM 0 10000 0 0 0 0

8QAM 0 0 9996 0 0 0

16QAM 0 0 0 9995 3 2

32QAM 0 0 4 4 9990 2

64QAM 0 0 0 6 8 9986

Table III-10. Confusion Matrix for QAM modulation in AWGN + Rician Flat Fading Channel (Average Performance 81%)

10dB 2QAM 4QAM 8QAM 16QAM 32QAM 64QAM

2QAM 10000 0 0 0 0 0

4QAM 639 9158 203 0 0 0

8QAM 0 793 8854 353 0 0

16QAM 0 0 1432 7698 870 0

32QAM 0 0 0 1834 6825 1341

64QAM 0 0 0 999 2921 6080

Table III-11. Confusion Matrix for QAM modulation in AWGN + Rayleigh Flat Fading Channel (Average Performance 77.36%)

10dB 2QAM 4QAM 8QAM 16QAM 32QAM 64QAM

2QAM 10000 0 0 0 0 0

4QAM 686 8925 389 0 0 0

8QAM 0 813 8456 731 0 0

16QAM 0 0 1392 7121 1487 0

32QAM 0 0 0 2031 6489 1480

64QAM 0 0 0 1486 3089 5425


3.3.5 Summary

The linear classifiers proposed here are very effective on fading channels. The simulation shows that the classifier performance is approximately 99.9% at an SNR of 10 dB. The average performance of the classifier for PSK modulated signals in the presence of the AWGN channel is 99.95% at an SNR of 10 dB, while the performance of the classifier on the Rayleigh flat fading and Rician flat fading channels is approximately 91.38% and 92.26%, respectively.

3.4 Automatic Modulation Classification using Spectral Features

3.4.1 Introduction

In this section we use spectral features for the classification of PSK, PAM, QAM and FSK modulation formats. The proposed classifiers are the multilayer perceptron (MLP), also referred to as the feed forward back propagation neural network (FFBPNN), and the support vector machine (SVM). The performance comparison with existing techniques shows the superiority of the proposed classifiers.

3.4.2 Spectral Features

A common method is to use the information content of the instantaneous amplitude, frequency and phase of the modulated signal. We use the standard deviations of the normalized signal frequency, phase and amplitude, derived from the instantaneous amplitude, phase and frequency of the considered (FSK, PSK, PAM, QAM) modulated signals, as in Dobre et al., (2005), Nandi et al., (1998), Ghauri et al., (2013) and Ghauri et al., (2014).

A signal with sampling rate f_s = 4000 is generated and digitally modulated using Eq. III-1. After modulation, the Hilbert transform is used to calculate the instantaneous amplitude, frequency and phase. These three parameters are then used to extract the features (\sigma_{ap}, \sigma_{dp}, \sigma_{aa}, \sigma_{fa}, \sigma_{fn}, \gamma_{max}); a sketch of the extraction is given after the feature definitions below.

A. \sigma_{ap}: standard deviation of the absolute value of the centered non-linear component of the instantaneous phase,

\sigma_{ap} = \sqrt{ \frac{1}{N_s}\sum_i \phi_{NL}^2(i) - \left( \frac{1}{N_s}\sum_i |\phi_{NL}(i)| \right)^2 }   (Eq.III.29)

where N_s is the number of samples of \phi_{NL}(i) at the instants t = i/f_s. The non-linear component of the centered instantaneous phase is \phi_{NL}(i) = \phi(i) - \phi_o, with \phi_o = \frac{1}{N_s}\sum_i \phi(i). \sigma_{ap} is used to discriminate the modulation formats which carry absolute phase information.

B. \sigma_{dp}: standard deviation of the direct value of the centered non-linear component of the instantaneous phase,

\sigma_{dp} = \sqrt{ \frac{1}{N_s}\sum_i \phi_{NL}^2(i) - \left( \frac{1}{N_s}\sum_i \phi_{NL}(i) \right)^2 }   (Eq.III.30)

\sigma_{dp} is used to discriminate the modulation formats which carry direct phase information from those which do not.

C. \sigma_{aa}: standard deviation of the absolute value of the normalized centered instantaneous amplitude,

\sigma_{aa} = \sqrt{ \frac{1}{N_s}\sum_i A_{cn}^2(i) - \left( \frac{1}{N_s}\sum_i |A_{cn}(i)| \right)^2 }   (Eq.III.31)

where A_{cn}(i) is the normalized centered instantaneous amplitude at the instant t = i/f_s, with A_{cn}(i) = A_n(i) - 1, A_n(i) = A(i)/m_a, and m_a = \frac{1}{N_s}\sum_i A(i) the average value of the instantaneous amplitude over one frame. This feature is used to distinguish the modulation formats which have absolute amplitude information from those which do not.

D. \sigma_{fa}: standard deviation of the absolute value of the normalized centered instantaneous frequency,

\sigma_{fa} = \sqrt{ \frac{1}{N_s}\sum_i f_n^2(i) - \left( \frac{1}{N_s}\sum_i |f_n(i)| \right)^2 }   (Eq.III.32)

The normalized centered instantaneous frequency is f_n(i) = f_c(i)/r_s, where r_s is the symbol rate, f_c(i) = f(i) - m_f and m_f = \frac{1}{N_s}\sum_i f(i). \sigma_{fa} is used to discriminate the modulation formats which carry no absolute frequency information from those which carry direct frequency information.

E. \sigma_{fn}: standard deviation of the direct value of the normalized centered instantaneous frequency,

\sigma_{fn} = \sqrt{ \frac{1}{N_s}\sum_i f_n^2(i) - \left( \frac{1}{N_s}\sum_i f_n(i) \right)^2 }   (Eq.III.33)

F. \gamma_{max}: maximum value of the power spectral density of the normalized centered instantaneous amplitude,

\gamma_{max} = \frac{\max \big| DFT\big(A_{cn}(i)\big) \big|^2}{N_s}   (Eq.III.34)

where DFT is the discrete Fourier transform of the modulated signal. \gamma_{max} is used to discriminate the modulation formats which have amplitude information from those which have none (PSD = 0).
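As an illustration of how these quantities can be obtained from the analytic signal, the sketch below (written for this text; it is not the thesis code and the test signal is a hypothetical stand-in) extracts the instantaneous amplitude, phase and frequency via the Hilbert transform and computes \sigma_{aa} and \gamma_{max}:

```python
import numpy as np
from scipy.signal import hilbert

def spectral_features(x, fs):
    """Instantaneous-parameter features from a real modulated signal."""
    analytic = hilbert(x)                        # complex analytic signal
    amp = np.abs(analytic)                       # instantaneous amplitude A(i)
    phase = np.unwrap(np.angle(analytic))        # instantaneous phase (rad)
    freq = np.diff(phase) * fs / (2 * np.pi)     # instantaneous frequency (Hz)

    a_cn = amp / amp.mean() - 1.0                # normalized centered amplitude
    sigma_aa = np.sqrt(np.mean(a_cn ** 2) - np.mean(np.abs(a_cn)) ** 2)
    gamma_max = np.max(np.abs(np.fft.fft(a_cn)) ** 2) / len(a_cn)
    return sigma_aa, gamma_max, freq

fs = 4000.0
t = np.arange(4000) / fs
am_like = (1.0 + 0.5 * np.sign(np.sin(2 * np.pi * 10 * t))) * np.cos(2 * np.pi * 500 * t)
print(spectral_features(am_like, fs)[:2])        # amplitude-keyed signal: gamma_max > 0
```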

3.4.3 Proposed Algorithm using MLP

The proposed FFBPNN has three layers: an input layer, a hidden layer and an output layer. The input layer neurons are used only to distribute the extracted features to the hidden layer neurons, which perform the computations; the output of the hidden layer is the input to the output layer. Six neurons are used in the input layer, corresponding to the number of features extracted from the received signal. The hidden layer carries ten neurons, while the output layer has six neurons corresponding to the number of outputs/modulation formats. The proposed classifier architecture for the classification of the considered modulation formats is shown in Figure III-15 by Ghauri et al. (2014).


Figure III-15: Proposed Architecture for classification of modulation formats

Training of the algorithm: The input data set and target data set are used to train the proposed classifier for the considered modulation formats. The difference between the output value and the target value is known as the error value, which is back-propagated to the hidden layer. The feed forward back propagation algorithm involves a forward and a backward pass. In the forward pass, the weights are initialized for training the feed forward network and their values are kept fixed. The error signal is given by

e_j = t_j - y_j   (Eq.III.35)

where t_j is the target response of the jth input and y_j is the output of the network. In the backward pass, the weights are updated until the error is minimized in a statistical sense using the mean square error criterion:

\text{Cost function } J = \frac{1}{N}\sum_{j=0}^{N} (t_j - y_j)^2   (Eq.III.36)

The training of the proposed classifier proceeds as follows:

Step 1. The input and target vectors are concatenated to form the data matrix.
Step 2. The generated data are normalized and randomly sorted.
Step 3. 50% of the normalized data are used for training the neural network and 20% for validation.
Step 4. The remaining 30% of the normalized data are used for testing the network.
Step 5. The feed forward back propagation neural network is created. The activation functions used are tan-sigmoid (tanh) and log-sigmoid (logistic).

Testing of the proposed algorithm: For testing the FFBPNN, the Netlab toolbox is used. 30% of the normalized data are used to test the performance of the classifier for different values of SNR.

Table III-12: Specifications for the proposed classifier

S. No. | Parameter | Value
1 | Neural network architecture | Feed-forward
2 | Input layer neurons | 6
3 | Hidden layer neurons | 10
4 | Output layer neurons | 6
5 | Weight-decay coefficient | 0.001
6 | Hidden & output layer activation function | Logistic
7 | Iterations | 500
8 | Performance metric | MSE
9 | Learning algorithm | SCG
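A rough equivalent of this 6-10-6 network can be sketched with scikit-learn as below. This is an assumption for illustration: scikit-learn's MLPClassifier offers neither the scaled conjugate gradient optimizer nor the Netlab toolbox, so its default solver and L2 penalty stand in for the SCG learning algorithm and the weight-decay coefficient, and the feature matrix is a hypothetical placeholder.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: 6 spectral features per signal, 6 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = np.repeat(np.arange(6), 100)                # labels for six modulation orders

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,),   # one hidden layer of 10 neurons
                    activation='logistic',      # log-sigmoid activation
                    alpha=0.001,                # L2 penalty standing in for weight decay
                    max_iter=500)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))          # accuracy on the held-out 30%
```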


3.4.4 Simulation Results

The feature vectors and target vectors are concatenated to form the data set. The data set is divided into three portions: 50% is used for training, while the remaining 50% is used for validation and testing of the proposed algorithm. The performance of the proposed classifier is evaluated under the effects of different channel models at an SNR of 10 dB. The simulation results in the form of confusion matrices show that the classification performance is approximately 100% at an SNR of 10 dB.

Figure III-16 shows the FFBPNN output for the case of the PSK 2 to 64 modulation formats. From Figure III-16 it is clear that the probability of failure is approximately zero and the proposed classifier classifies the modulation formats perfectly. Table III-13 shows the percentage of correct classification at a fixed SNR of 10 dB. The overall performance of the classifier is 99.62% in the case of FSK, 99.55% in the case of PSK, 99.56% in the case of PAM and 99.34% in the case of QAM.

Table III-14 shows the percentage of correct classification for FSK 2 to 64, PSK 2 to 64, PAM 2 to 64 and QAM 2 to 64 at a fixed SNR of 10 dB. The performance of the classifier in the form of a confusion matrix shows approximately 98% classification. The overall performance of the classifier is 98.07% in the case of FSK, 98.15% in the case of PSK, 97.98% in the case of PAM and 98.34% in the case of QAM.

Table III-15 shows the percentage of correct classification in the form of a confusion matrix. The overall performance of the classifier is 96.99% in the case of FSK, 97.13% in the case of PSK, 96.84% in the case of PAM and 97.35% in the case of QAM.


Figure III-16: The FFBPNN result for the proposed classifier {PSK 2 to 64} on AWGN channel at SNR of 10 dB.

Table III-13: Percentage of correct classification on AWGN channel at 10 dB of SNR

Modulation | M=2 | M=4 | M=8 | M=16 | M=32 | M=64
FSK | 99.61 | 99.92 | 99.99 | 99.94 | 99.54 | 98.76
PSK | 99.99 | 99.92 | 99.54 | 98.93 | 99.32 | 99.64
PAM | 99.98 | 99.32 | 99.59 | 99.65 | 99.29 | 99.55
QAM | 99.99 | 99.64 | 99.87 | 99.29 | 99.13 | 98.12

Table III-14: Percentage of correct classification on Rician flat fading channel plus AWGN at 10 dB of SNR

Modulation | M=2 | M=4 | M=8 | M=16 | M=32 | M=64
FSK | 98.79 | 98.32 | 97.47 | 98.61 | 97.15 | 98.10
PSK | 99.18 | 98.12 | 97.83 | 97.95 | 98.53 | 97.31
PAM | 97.72 | 98.93 | 97.45 | 98.23 | 97.51 | 98.05
QAM | 99.37 | 99.23 | 98.57 | 98.37 | 97.34 | 97.19

Table III-15: Percentage of correct classification on Rayleigh flat fading channel plus AWGN at 10 dB of SNR

Modulation | M=2 | M=4 | M=8 | M=16 | M=32 | M=64
FSK | 96.32 | 97.81 | 97.62 | 96.43 | 96.15 | 97.62
PSK | 97.40 | 97.93 | 96.23 | 97.11 | 97.02 | 97.13
PAM | 97.23 | 96.54 | 96.78 | 97.01 | 96.29 | 97.19
QAM | 98.45 | 98.54 | 96.59 | 96.82 | 96.99 | 96.72

3.4.5 Proposed Algorithm using SVM

The concept behind the SVM is to provide a learning technique for classification by constructing hyperplanes. It also provides regression-based estimation, combines learning and optimization theory, gives an optimized solution as opposed to artificial neural networks or decision trees and, most importantly, uses the kernel trick to extract features. The working principle of the SVM is based upon the construction of hyperplanes for linearly separable and non-linearly separable data patterns (Haykin, 2008).

Suppose that training data samples 1

,N

i i ix d

, ,     { 1,1}d

ix R d can be

separated by hyper plane 0Tiw x b , where x is the input pattern (total

Page 77: AUTOMATIC MODULATION CLASSIFICATION …prr.hec.gov.pk/jspui/bitstream/123456789/6771/1/Sajjad...2016/03/18  · i AUTOMATIC MODULATION CLASSIFICATION USING FEATURE BASED APPROACH Submitted

58

number of patterns are N ), b is the bias, w is the weight vector and d is the

The decision function is $D(\mathbf{x}) = \mathbf{w}^T\mathbf{x}_i + b = 0$. If this hyperplane maximizes the margin, then $d_i(\mathbf{w}^T\mathbf{x}_i + b) \geq 1$, or equivalently $d_i(\mathbf{w}^T\mathbf{x}_i + b) - 1 \geq 0$. The margin of the hyperplane is $M = 2/\lVert\mathbf{w}\rVert$, so that as $\lVert\mathbf{w}\rVert$ decreases $M$ grows, where $\lVert\mathbf{w}\rVert = \sqrt{w_1^2 + w_2^2 + \cdots + w_M^2}$. The problem is to maximize the margin by minimizing $\lVert\mathbf{w}\rVert$ subject to the constraint $d_i(\mathbf{w}^T\mathbf{x}_i + b) - 1 \geq 0$. Minimizing $\lVert\mathbf{w}\rVert$ is the same as minimizing $\tfrac{1}{2}\lVert\mathbf{w}\rVert^2 = \tfrac{1}{2}\mathbf{w}^T\mathbf{w}$. The cost function is

$$J(\mathbf{w}, b, \alpha) = \frac{1}{2}\mathbf{w}^T\mathbf{w} - \sum_{i=1}^{N}\alpha_i\left[d_i(\mathbf{w}^T\mathbf{x}_i + b) - 1\right] \qquad \text{(Eq. III.37)}$$

Differentiating the cost function with respect to the weight vector $\mathbf{w}$ and the bias $b$, setting the derivatives to zero and substituting back into $J$, yields the dual function

$$Q(\alpha) = \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\mathbf{w}^T\mathbf{w}, \qquad \mathbf{w} = \sum_{i=1}^{N}\alpha_i d_i\,\mathbf{x}_i \qquad \text{(Eq. III.38)}$$

$$Q(\alpha) = \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j d_i d_j\,\mathbf{x}_i^T\mathbf{x}_j \qquad \text{(Eq. III.39)}$$

$Q(\alpha)$ should be maximized under the constraints $\sum_{i=1}^{N}\alpha_i d_i = 0$ and $\alpha_i \geq 0$, for $i = 1, 2, \ldots, N$. The training points for which $d_i(\mathbf{w}^T\mathbf{x}_i + b) = 1$ holds with equality are called support vectors (SV).

The optimal bias for any support vector $\mathbf{x}_i$ is given by

$$b^{*} = d_i - \mathbf{w}^{*T}\mathbf{x}_i \qquad \text{(Eq. III.40)}$$

The optimal decision function (ODF) is then given by

$$D(\mathbf{x}) = \operatorname{sgn}\!\left(\sum_{i=1}^{N}\alpha_i^{*} d_i\,\mathbf{x}_i^T\mathbf{x} + b^{*}\right) \qquad \text{(Eq. III.41)}$$

where the $\alpha_i^{*}$ are the optimal Lagrange multipliers. In general it is not possible to construct a separating hyperplane without encountering classification errors, so instead we find an optimal hyperplane that minimizes the probability of classification error. SVM uses soft margins to cope with high noise levels in the input data. To this end, a new set of non-negative scalar variables, the slack variables $\xi_i$, is introduced into the definition of the separating hyperplane. The slack variables measure the deviation of a data point from the ideal condition of pattern separability. The separating hyperplane then satisfies

$$d_i(\mathbf{w}^T\mathbf{x}_i + b) \geq 1 - \xi_i \qquad \text{(Eq. III.42)}$$

For $0 \leq \xi_i \leq 1$, the data point falls inside the region of separation but on the correct side of the decision surface. For $\xi_i > 1$, it falls on the wrong side of the separating hyperplane. The support vectors are those particular data points that satisfy Eq. III.42 with equality, even if $\xi_i > 0$. The weight vector $\mathbf{w}$ and the slack variables minimize the cost function:

$$\Phi(\mathbf{w}, \xi) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + C\sum_{i=1}^{N}\xi_i \qquad \text{(Eq. III.43)}$$

Here $C$ is a user-specified positive parameter known as the penalty parameter. For the nonlinearly separable scenario, SVM uses a kernel function $K(\mathbf{x}_i, \mathbf{x}_j)$. The radial basis function (RBF) is used as the kernel function in the simulations and is given by:

$$K(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\lVert\mathbf{x} - \mathbf{y}\rVert^2 / 2\sigma^2\right) \qquad \text{(Eq. III.44)}$$

where $\sigma$ is the width of the radial basis kernel function. The problem then becomes

$$Q(\alpha) = \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j d_i d_j\,K(\mathbf{x}_i, \mathbf{x}_j) \qquad \text{(Eq. III.45)}$$

$$\alpha^{*} = \arg\max_{\alpha} Q(\alpha), \qquad 0 \leq \alpha_i \leq C, \quad \sum_{i=1}^{N}\alpha_i d_i = 0, \quad i = 1, 2, \ldots, N$$

The optimal decision function becomes

$$D(\mathbf{x}) = \operatorname{sgn}\!\left(\sum_{i=1}^{N}\alpha_i^{*} d_i\,K(\mathbf{x}_i, \mathbf{x}) + b^{*}\right) \qquad \text{(Eq. III.46)}$$

Proposed algorithm based on SVM:

Step 1. Extraction of key features: the following two sets of features are extracted: (i) spectral features; (ii) higher-order cumulants.

Step 2. Select the kernel function: the radial basis function is used as the kernel function.

Step 3. Calculate the parameters of the kernel function: select the best parameters with cross-validation.

Step 4. Train the samples.

Step 5. Test the network.


3.4.6 Simulation Results

The features used for the SVM are the spectral features and higher-order cumulants. The number of realizations taken for each modulation format is 1200. Based on the experiments, the values $\sigma = 1$ and $C = 10$ are selected for the SVM. The performance of the classifier on different channels is shown in Table III-16. The average classification performance on the fading channel plus AWGN is 97.66% at 10 dB of SNR.
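For illustration, the classifier configuration described above (an RBF kernel with $\sigma = 1$ and $C = 10$ chosen by cross-validation) can be sketched as follows. This is a hedged sketch only: scikit-learn and the placeholder feature matrix are assumptions made purely for the example, since the thesis experiments were performed in MATLAB. Note that scikit-learn parameterizes the RBF kernel by gamma, which corresponds to $1/(2\sigma^2)$, so $\sigma = 1$ gives gamma $= 0.5$.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# X: feature matrix (spectral features + higher-order cumulants), one row per
# realization; y: modulation-format labels.  Both are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 8))
y = rng.integers(0, 4, size=1200)

# RBF kernel K(x, y) = exp(-||x - y||^2 / (2*sigma^2)); gamma = 1/(2*sigma^2).
param_grid = {"C": [1, 10, 100], "gamma": [0.05, 0.5, 5.0]}
clf = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)   # cross-validation
clf.fit(X, y)
print(clf.best_params_, clf.best_score_)
```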

Table III-16. Performance of the recognizer at SNRs of 0 to 10 dB

SNR (dB) | Testing: AWGN channel | Testing: Rayleigh flat fading
0  | 70.14% | 60.35%
1  | 74.52% | 66.89%
2  | 78.26% | 71.73%
3  | 82.56% | 75.54%
4  | 86.50% | 79.69%
5  | 89.75% | 82.88%
6  | 92.85% | 86.91%
7  | 94.74% | 89.39%
8  | 97.42% | 92.42%
9  | 99.92% | 95.59%
10 | 100%   | 97.66%


3.4.7 Comparison with Existing Techniques

Table III-17 shows the performance comparison of the proposed classifiers using spectral features with existing schemes on AWGN and fading channels. Table III-18 shows the corresponding comparison using normalized higher-order cumulants with well-known techniques in the literature. Tables III-17 and III-18 show that the classification accuracy of the proposed classifiers at 10 dB of SNR is much better than that of the existing classifiers.

Table III-17. Performance comparison of spectral features with existing techniques

Author & Year | Features + Algorithm | % Classification Accuracy
Azzouz et al. (1995) | Spectral features | 90% (AWGN)
Wong et al. (2004) | Spectral features and MLP + GA | 98% (AWGN)
Ye et al. (2007) | Time-frequency features + MLP | 97.6% (AWGN)
Bouttee et al. (2009) | Independent Component Analysis (ICA) + SVM | 93% (AWGN)
Abaviasani et al. (2009) | Euclidean distance and Constant Modulation (CM) equalization | 60.1% (Fading)
Ebrahimzadeh (2009) | Spectral features + Radial Basis Function (RBF) | 90% (Fading)
Avci et al. (2009) | Discrete wavelet (DW) neural network | 90.24%
Avci et al. (2009) | DW ANFIS | 96.51%
Hassan et al. (2010) | Wavelet features and MLP | 98% (AWGN)
Valipour et al. (2012) | Spectral features, HOM and SVM | 98.5% (AWGN)
Valipour et al. (2012) | Spectral features, HOM and MLP | 97% (AWGN)
Proposed Method 2014 | Spectral features, SVM and MLP | 99% (AWGN), 95% (Fading channels)

Table III-18. Performance comparison of HOC features with existing techniques

Author & Year | Features + Algorithm | % Classification Accuracy
Swami et al. (2000) | Cumulants | 90% (AWGN)
Dobre et al. (2003) | Higher-order cyclic cumulants | 70% (AWGN)
Mirarab et al. (2007) | HOC | 85% (Fading)
Wong et al. (2008) | Naive Bayes classifier | 94.4% (AWGN)
Chen et al. (2008) | Normalized 4th-order cumulants | 85% (Fading)
Orlic et al. (2009) | Normalized 6th-order cumulants | 88% (Fading)
Orlic et al. (2010) | Sixth-order cumulants | 70% (Fading)
Chaithanya et al. (2010) | HOC and HOM | 90% (AWGN)
Aslam et al. (2011) | Cumulants and GP-KNN | 96.4% (AWGN)
Proposed Method 2014 | Normalized higher-order cumulants + LDA | 99% (AWGN), 90% (Fading channels)
Proposed Method 2014 | Normalized higher-order cumulants + ML | 99% (AWGN), 91% (Fading channels)

3.4.8 Summary

In this chapter, higher-order normalized cumulants and spectral features are used as the feature sets to classify the modulation formats effectively. The proposed classifiers are based on ML, LDA, FFBPN and SVM. The classifier performance is evaluated on different channels. When compared with existing techniques, the proposed classifiers show better classification performance.


CHAPTER IV

AUTOMATIC MODULATION CLASSIFICATION USING GABOR FILTER NETWORK

4.1 Introduction

In this chapter, a Gabor filter (GF) network is proposed for the joint feature extraction and classification of digital modulation formats. The digital modulations considered are PSK2, PSK4, PSK8, PSK16, PSK32, PSK64, FSK2, FSK4, FSK8, FSK16, FSK32, FSK64, QAM2, QAM4, QAM8, QAM16, QAM32 and QAM64. The proposed classifier structure is divided into two phases. In the training phase of the classifier, the GF parameters are adjusted using the delta rule, and the weights of the adaptive filter are adjusted using the LMS algorithm, so as to minimize the cost function. For each considered modulation format, the Gabor filter network is trained and the GF parameters are saved. In the testing phase, the minimum error of the classifier corresponds to the desired modulation format. The proposed algorithm gives high classification accuracy even at lower SNRs.

A modified GF network is also proposed for the classification of M-PAM signals. We have made some changes in the previously proposed method to make it efficient for M-PAM signals. The performance of the classifier is evaluated on Rician and Rayleigh flat fading channels. The classifier performance is also compared with well-known existing techniques.


4.2 System Model

The system model for the classification of modulation formats is shown in Fig. IV-1. First, the PSK, QAM and FSK modulation formats are classified using the GF network; the proposed algorithm is then modified to enable it to classify PAM signals. The received signal is corrupted by AWGN in addition to the channel effects. Features are extracted from the received signal using the GF network and input to the classifier, which makes the decision about the modulation format of the received signal.

4.3 Gabor filter for Classification and Feature Extraction

The Gabor atom is an efficient tool for feature extraction from the received signal. The Gabor atom in its simple form can be written as:

$$g_{c,\sigma,f}(t) = \frac{1}{\sqrt{\sigma}}\, g\!\left(\frac{t-c}{\sigma}\right)\cos(2\pi f t) \qquad \text{(Eq. IV.1)}$$

where $c$, $\sigma$ and $f$ are the shift parameter, scale parameter and modulation parameter, respectively.

Figure IV-1: System Model for Modulation Classification


Figure IV-2: Gabor filter bank with input layer and output layer

Fig. IV-2 shows the GFN, which has two layers. The inputs to the filter are first serial-to-parallel converted into $\{x_i\},\ i = 1, 2, 3, \ldots, N$, and the outputs are $\{y_k\},\ k = 1, 2, 3, \ldots, N$. Let $\{g_i\},\ i = 1, 2, 3, \ldots, N$ be the $i$th class Gabor atom, defined as

$$g_i(t) = \frac{1}{\sqrt{\sigma_i}}\, g\!\left(\frac{t-c_i}{\sigma_i}\right)\cos(2\pi f_i t) \qquad \text{(Eq. IV.2)}$$

The Gabor atom parameters $(c, \sigma, f)$ are required to be adjusted until the cost function is minimized. The input layer has $N$ nodes $\psi_1, \psi_2, \ldots, \psi_N$, also called Gabor nodes. The output of the $i$th Gabor atom node is $\psi_i$, corresponding to the input signal $x_i$. Thus the output of the Gabor atom is defined as:

$$\psi_i = \langle g_i, x_i \rangle = \int \frac{1}{\sqrt{\sigma_i}}\, g^{*}\!\left(\frac{t-c_i}{\sigma_i}\right) e^{-j2\pi f_i t}\, x_i(t)\, dt \qquad \text{(Eq. IV.3)}$$


where

$$g_i(t) = \frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\cos(2\pi f_i t) \qquad \text{(Eq. IV.4)}$$
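As an illustration of Eqs. IV.3-IV.4, a Gabor node output can be computed numerically as follows. This is a sketch with assumed parameter values and an assumed test signal, written in Python for illustration only (the thesis simulations were carried out in MATLAB).

```python
import numpy as np

def gabor_atom(t, c, sigma, f):
    """Real Gabor atom of Eq. IV.4: Gaussian window with a cosine carrier."""
    return (1.0 / np.sqrt(sigma)) * np.exp(-np.pi * ((t - c) / sigma) ** 2) \
           * np.cos(2.0 * np.pi * f * t)

def gabor_node(x, t, c, sigma, f):
    """Output psi_i of one Gabor node: inner product of the atom with the
    input segment x (a discrete approximation of Eq. IV.3)."""
    return np.dot(gabor_atom(t, c, sigma, f), x)

# Toy example with assumed values: a noisy tone projected onto one atom.
t = np.arange(256) / 256.0
x = np.cos(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)
psi = gabor_node(x, t, c=0.5, sigma=0.2, f=20.0)
```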

The output layer consists of $N$ nodes $y_k,\ k = 1, 2, 3, \ldots, N$, and for convenience $N$ is usually set to 1. The output of Gabor atom node $i$ in the input layer is weighted by $w_i$, i.e.

$$y_{kn} = \sum_{i=1}^{N} w_{in}\,\psi_{in}, \qquad n = 1, 2, 3, \ldots, K \qquad \text{(Eq. IV.5)}$$

The GFN thus consists of two layers: in the input layer, features are extracted, and the second layer holds the GF weights, which constitute the linear classification part. The Gabor atom parameters and GF weights are adjusted to minimize the sum of squared errors. The difference between the desired output $d_k$ and the actual output $y_k$ of the GF is defined as:

$$e_k = d_k - y_k \qquad \text{(Eq. IV.6)}$$

In the training phase of modulation classification, two adaptive algorithms are performed by the GFN: 1) updating of the Gabor atom parameters $(c, \sigma, f)$; 2) for a given set of Gabor atom parameters, updating of the Gabor filter weights. In the testing phase, shown in Fig. IV-3, the modulated signal may be PSK 2 to 64, FSK 2 to 64 or QAM 2 to 64. The modulated signal is passed through each GFN, whose four parameters $(c, \sigma, f, w)$ have been updated, and based upon these parameters the error is calculated. The minimum error corresponds to the decision about the modulation format of the received signal.


Figure IV-3: Testing Scheme for Modulation Classification

4.4 Training and Testing of Gabor filter network

The training of the GFN is partitioned into two phases: the first phase is the training of the Gabor atom parameters $(c, \sigma, f)$, while in the second phase the weights $w$ of the adaptive filter are updated. Let $\gamma_i$ denote one of the $i$th Gabor node parameters, i.e. the shift parameter $c_i$, the scale parameter $\sigma_i$ or the modulation parameter $f_i$ (Ghauri et al. (2014)). According to the delta rule,

$$\Delta\gamma_i = -\eta\,\frac{\partial J(k)}{\partial \gamma_i} \qquad \text{(Eq. IV.7)}$$


where $\eta$ is the learning rate. The cost function is the square of the difference between the desired response and the output of the Gabor filter, i.e.

$$J(k) = E\big[(d(k) - y(k))^2\big] \qquad \text{(Eq. IV.8)}$$

The partial derivatives of the cost function with respect to the shift parameter $c_i$, scale parameter $\sigma_i$ and modulation parameter $f_i$ are as follows:

$$c_i(n+1) = c_i(n) + \Delta c_i(n) \qquad \text{(Eq. IV.9)}$$

$$\Delta c_i = -\frac{\eta_c}{2}\,\frac{\partial J(k)}{\partial c_i} \qquad \text{(Eq. IV.10)}$$

$$\sigma_i(n+1) = \sigma_i(n) + \Delta\sigma_i(n) \qquad \text{(Eq. IV.11)}$$

$$\Delta\sigma_i = -\frac{\eta_\sigma}{2}\,\frac{\partial J(k)}{\partial \sigma_i} \qquad \text{(Eq. IV.12)}$$

$$f_i(n+1) = f_i(n) + \Delta f_i(n) \qquad \text{(Eq. IV.13)}$$

$$\Delta f_i = -\frac{\eta_f}{2}\,\frac{\partial J(k)}{\partial f_i} \qquad \text{(Eq. IV.14)}$$

From (Eq. IV.8),

$$\frac{\partial J(k)}{\partial \psi_i} = \frac{\partial}{\partial \psi_i}\big(d(k) - y(k)\big)^2 = -2\big(d(k) - y(k)\big)\frac{\partial y(k)}{\partial \psi_i} \qquad \text{(Eq. IV.15)}$$


From (Eq. IV.5),

$$\frac{\partial y(k)}{\partial \psi_i} = \frac{\partial}{\partial \psi_i}\sum_{j=1}^{M} w_j\,\psi_j = w_i \qquad \text{(Eq. IV.16)}$$

Putting (Eq. IV.16) into (Eq. IV.15), we get

$$\frac{\partial J(k)}{\partial \psi_i} = -2\big(d(k) - y(k)\big)\, w_i \qquad \text{(Eq. IV.17)}$$

From (Eq. IV.10), (Eq. IV.12), (Eq. IV.14) and (Eq. IV.17),

$$\Delta c_i = \eta_c\big(d(k) - y(k)\big)\, w_i\, \frac{\partial \psi_i}{\partial c_i} \qquad \text{(Eq. IV.18)}$$

$$\Delta \sigma_i = \eta_\sigma\big(d(k) - y(k)\big)\, w_i\, \frac{\partial \psi_i}{\partial \sigma_i} \qquad \text{(Eq. IV.19)}$$

$$\Delta f_i = \eta_f\big(d(k) - y(k)\big)\, w_i\, \frac{\partial \psi_i}{\partial f_i} \qquad \text{(Eq. IV.20)}$$

From (Eq. IV.3) and (Eq. IV.4),

$$\psi_i = x_i\, \frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\cos(2\pi f_i t) \qquad \text{(Eq. IV.21)}$$

For real-valued signals the Gabor atoms are also real; in that case the partial derivatives of $\psi_i$ with respect to the shift parameter $c_i$, scale parameter $\sigma_i$ and modulation parameter $f_i$ are as follows:


$$\frac{\partial \psi_i}{\partial c_i} = \frac{\partial}{\partial c_i}\!\left[x_i\, \frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\cos(2\pi f_i t)\right] = x_i\cos(2\pi f_i t)\, \frac{2\pi\,(t-c_i)}{\sigma_i^{5/2}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.22)}$$

$$\frac{\partial \psi_i}{\partial \sigma_i} = \frac{\partial}{\partial \sigma_i}\!\left[x_i\, \frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\cos(2\pi f_i t)\right] = x_i\cos(2\pi f_i t)\left[\frac{2\pi\,(t-c_i)^2}{\sigma_i^{3}} - \frac{1}{2\sigma_i}\right]\frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.23)}$$

$$\frac{\partial \psi_i}{\partial f_i} = \frac{\partial}{\partial f_i}\!\left[x_i\, \frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\cos(2\pi f_i t)\right] = -\,x_i\, \frac{2\pi t}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\sin(2\pi f_i t) \qquad \text{(Eq. IV.24)}$$
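Because Eqs. IV.22-IV.24 drive the parameter updates, they can be checked numerically. The sketch below compares the analytic derivative of Eq. IV.22 with a central finite difference for an assumed test signal and assumed parameter values; it is purely illustrative and not part of the thesis code (which was written in MATLAB).

```python
import numpy as np

def psi(x, t, c, sigma, f):
    # Gabor node output as a discrete inner product (cf. Eq. IV.21).
    atom = (1/np.sqrt(sigma)) * np.exp(-np.pi*((t-c)/sigma)**2) * np.cos(2*np.pi*f*t)
    return np.dot(atom, x)

def dpsi_dc(x, t, c, sigma, f):
    # Analytic derivative of Eq. IV.22, summed over the discrete samples.
    env = np.exp(-np.pi*((t-c)/sigma)**2)
    return np.sum(x * np.cos(2*np.pi*f*t) * (2*np.pi*(t-c)/sigma**2.5) * env)

t = np.arange(128)/128.0
x = np.cos(2*np.pi*10*t)                       # assumed test signal
c, sigma, f, eps = 0.5, 0.2, 10.0, 1e-6
numeric = (psi(x, t, c+eps, sigma, f) - psi(x, t, c-eps, sigma, f)) / (2*eps)
print(np.isclose(numeric, dpsi_dc(x, t, c, sigma, f), rtol=1e-3))   # True
```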

The updates of the Gabor atom parameters (shift parameter $c_i$, scale parameter $\sigma_i$ and modulation parameter $f_i$) according to the delta rule are therefore:

$$\Delta c_i = \eta_c\big(d(k) - y(k)\big)\, w_i\, x_i\cos(2\pi f_i t)\, \frac{2\pi\,(t-c_i)}{\sigma_i^{5/2}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.25)}$$

$$c_i(n+1) = c_i(n) + \eta_c\big(d(k) - y(k)\big)\, w_i\, x_i\cos(2\pi f_i t)\, \frac{2\pi\,(t-c_i)}{\sigma_i^{5/2}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.26)}$$


$$\Delta \sigma_i = \eta_\sigma\big(d(k) - y(k)\big)\, w_i\, x_i\cos(2\pi f_i t)\left[\frac{2\pi\,(t-c_i)^2}{\sigma_i^{3}} - \frac{1}{2\sigma_i}\right]\frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.27)}$$

$$\sigma_i(n+1) = \sigma_i(n) + \eta_\sigma\big(d(k) - y(k)\big)\, w_i\, x_i\cos(2\pi f_i t)\left[\frac{2\pi\,(t-c_i)^2}{\sigma_i^{3}} - \frac{1}{2\sigma_i}\right]\frac{1}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2} \qquad \text{(Eq. IV.28)}$$

$$\Delta f_i = -\,\eta_f\big(d(k) - y(k)\big)\, w_i\, x_i\, \frac{2\pi t}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\sin(2\pi f_i t) \qquad \text{(Eq. IV.29)}$$

$$f_i(n+1) = f_i(n) - \eta_f\big(d(k) - y(k)\big)\, w_i\, x_i\, \frac{2\pi t}{\sqrt{\sigma_i}}\, e^{-\pi\left(\frac{t-c_i}{\sigma_i}\right)^2}\sin(2\pi f_i t) \qquad \text{(Eq. IV.30)}$$

Eqs. IV.26, IV.28 and IV.30 give the updated shift, scale and modulation parameters of the GFN. The weights of the adaptive filter are updated as follows:

$$w_i(n+1) = w_i(n) + \Delta w_i(n) \qquad \text{(Eq. IV.31)}$$

$$\Delta w_i = -\frac{\eta_w}{2}\,\frac{\partial J(k)}{\partial w_i} = -\frac{\eta_w}{2}\,\frac{\partial}{\partial w_i}\big(d(k)-y(k)\big)^2 = \eta_w\big(d(k)-y(k)\big)\frac{\partial y(k)}{\partial w_i} \qquad \text{(Eq. IV.32)}$$

$$\frac{\partial y(k)}{\partial w_i} = \frac{\partial}{\partial w_i}\sum_{j=1}^{N} w_j\,\psi_j = \psi_i \qquad \text{(Eq. IV.33)}$$


Substituting (Eq. IV.33) into (Eq. IV.32), we get $\Delta w_i = \eta_w\big(d(k)-y(k)\big)\,\psi_i$, and hence from (Eq. IV.31)

$$w_i(n+1) = w_i(n) + \eta_w\big(d(k)-y(k)\big)\,\psi_i \qquad \text{(Eq. IV.34)}$$

Eq. IV.34 gives the weight update of the adaptive filter.
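As a compact illustration of the update rules derived above, one training iteration can be written as follows. This is a single-node sketch under assumed scalar settings (the thesis uses a bank of N Gabor nodes and MATLAB), so the variable names and the learning rate are illustrative only.

```python
import numpy as np

def train_step(x, t, d, c, sigma, f, w, eta=0.01):
    """One GFN update: delta rule for (c, sigma, f) per Eqs. IV.26/28/30 and
    an LMS step for the filter weight per Eq. IV.34 (single-node sketch)."""
    env = np.exp(-np.pi * ((t - c) / sigma) ** 2)
    carrier = np.cos(2 * np.pi * f * t)
    psi = np.dot(x, env * carrier / np.sqrt(sigma))   # Gabor node output
    y = w * psi                                       # adaptive-filter output
    e = d - y                                         # error (Eq. IV.6)

    # Partial derivatives of psi (Eqs. IV.22-IV.24), summed over samples.
    dpsi_dc = np.dot(x * carrier, 2*np.pi*(t - c)/sigma**2.5 * env)
    dpsi_ds = np.dot(x * carrier,
                     (2*np.pi*(t - c)**2/sigma**3 - 0.5/sigma) * env/np.sqrt(sigma))
    dpsi_df = np.dot(x, -2*np.pi*t*np.sin(2*np.pi*f*t) * env/np.sqrt(sigma))

    # Delta-rule updates for the atom parameters and LMS update for the weight.
    c     += eta * e * w * dpsi_dc
    sigma += eta * e * w * dpsi_ds
    f     += eta * e * w * dpsi_df
    w     += eta * e * psi
    return c, sigma, f, w, e**2
```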

4.5 The Proposed Algorithm for Modulation Classification

The parameters of the GFN (shift, scale and modulation) are updated before their outputs are input to the adaptive filter, where the weights of the adaptive filter are adjusted to minimize the error function. The error is calculated; if the error is less than the threshold, the training process stops, otherwise the GF parameters and the weights of the adaptive filter are updated until the cost function is minimized. In the test phase of the classifier algorithm, the input modulated signal is fed to the trained GFN. The parameters of the GFN and the weights of the adaptive filter are updated, and the error is calculated. The minimum error corresponds to the desired modulation format.

The proposed algorithm for the training and testing of the GFN for the problem of modulation classification is presented by Ghauri et al. (2014). Algorithm 1 shows the training of the Gabor filter network and Algorithm 2 shows the testing of the Gabor filter network for the classification of M-QAM, M-FSK and M-PSK signals.


Algorithm 1: Training of Gabor filter Network

Step 1. Initialize the Gabor atom parameters (shift parameter $c_i$, scale parameter $\sigma_i$ and modulation parameter $f_i$) and the weights of the Gabor filter ($w_i$).

Step 2. Calculate the Gabor atoms using (Eq. IV.4) and compute all Gabor atom nodes using (Eq. IV.21).

Step 3. The Gabor atom nodes ($\psi_i$) are now input to the adaptive filter; adjust the weights of the adaptive filter using LMS, (Eq. IV.31)-(Eq. IV.34).

Step 4. Evaluate the error defined in (Eq. IV.6). If the error is less than the chosen threshold, training of the algorithm is stopped; save the Gabor atom parameters and Gabor filter weights $(c, \sigma, f, w)$.

Step 5. If the error is not less than the threshold, repeat Step 3 using the error calculated in Step 4.

Step 6. Tune the Gabor atom parameters $(c, \sigma, f)$ using (Eq. IV.26), (Eq. IV.28) and (Eq. IV.30).

Step 7. Stopping criterion.

Step 8. Save the Gabor atom parameters and Gabor filter weights $(c_i, \sigma_i, f_i, w_i)$.

Algorithm 2: Testing of Gabor filter Network


Step 1. Input the digitally modulated signal, which may be PSK, FSK or QAM of order 2 to 64.

Step 2. Compute the output of each GFN.

Step 3. Compute the error function of each GFN.

Step 4. The minimum error corresponds to the desired modulation format of the input signal (see the sketch below).
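A minimal sketch of this minimum-error decision rule, assuming the trained parameters and each format's desired response have been stored; the names, the dictionary structure and the scalar single-node form are illustrative assumptions, not the thesis code.

```python
import numpy as np

def classify(x, t, trained_models):
    """Pick the modulation format whose trained GFN reproduces its own
    desired response with the smallest squared error (Algorithm 2).
    trained_models: dict mapping format name -> (c, sigma, f, w, desired)."""
    errors = {}
    for name, (c, sigma, f, w, d) in trained_models.items():
        atom = np.exp(-np.pi*((t - c)/sigma)**2) * np.cos(2*np.pi*f*t) / np.sqrt(sigma)
        y = w * np.dot(atom, x)          # adaptive-filter output for this model
        errors[name] = (d - y) ** 2      # squared error against stored target
    return min(errors, key=errors.get)   # minimum error -> decided format
```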

4.6 Simulation Results

The modulation classification using the Gabor filter is evaluated in this section. First the training of the algorithm is presented, and then the testing of the algorithm in the presence of an AWGN channel. The PCC curves are simulated against the number of iterations and against SNR, for three different modulation scenarios.

Tables IV-1 to IV-3 and Fig. IV-4 to Fig. IV-9 show the training of the GFN for the considered modulation formats (PSK, FSK & QAM) of order 2 to 64. The GFN parameters (shift, scale & modulation) and weights are updated for each considered modulation format. For the minimized error function, the Gabor atom parameters and the weights of the Gabor filter $(c_i, \sigma_i, f_i, w_i)$ are stored.

The updated Gabor atom parameters and weights of the Gabor filter $(c_i, \sigma_i, f_i, w_i)$ are shown in Tables IV-1 to IV-6. Fig. IV-4 to Fig. IV-6 show the training of the GFN for different numbers of iterations in the case of PSK modulation, FSK modulation and QAM, respectively. The training process for different SNRs is also shown in Fig. IV-7 to Fig. IV-9. The training shows that the mean square error dies down as the number of iterations is increased and also as the SNR is increased. Fig. IV-10 to Fig. IV-15 show the testing of the GFN for the considered modulation formats (PSK 2 to 64, FSK 2 to 64 & QAM 2 to 64) in the presence of an AWGN channel. The probability of correctness is plotted against signal-to-noise ratio (SNR) and against different numbers of iterations to evaluate the classification accuracy of the proposed GFN. The simulation results show the classification accuracy for the examples of PSK4, FSK16 and QAM32, which is 100% for fixed SNR and different numbers of iterations.

Table IV-1 and Table IV-2 show the updated Gabor atom parameters for the PSK modulation formats of order 2 to 64. Together they have four parts: the updated shift parameters for PSK 2 to 64, the updated modulation parameters, the updated scale parameters, and the updated weights of the adaptive filter. All values of the updated GF parameters and adaptive filter weights correspond to the minimum mean square error.

Table IV-3 and Table IV-4 show the updated Gabor atom parameters for the FSK modulation formats of order 2 to 64, arranged in the same four parts. Table IV-5 and Table IV-6 show the updated Gabor atom parameters for the QAM modulation formats of order 2 to 64, again with shift, modulation and scale parameters and the updated weights of the adaptive filter; all values correspond to the minimum mean square error.

Table IV-1: Updated shift and modulation parameters for PSK modulation 2-64

Shift parameter (c)
PSK2   PSK4   PSK8   PSK16  PSK32  PSK64
5.54   5.07   5.77   5.65   4.81   5.27
5.77   5.90   4.25   4.45   4.18   4.29
4.69   4.24   4.33   5.20   4.56   5.56
4.63   4.72   4.83   4.17   4.67   4.56
5.02   4.29   5.00   4.08   5.13   4.59
5.03   5.18   5.68   5.89   5.23   4.79
5.16   5.09   5.89   4.91   5.79   4.63
4.61   4.67   5.52   5.91   4.35   4.47
4.48   4.59   5.13   5.62   5.61   4.47
5.43   5.14   5.13   4.94   4.15   4.75

Modulation parameter (f)
PSK2   PSK4   PSK8   PSK16  PSK32  PSK64
2.35   -0.05  0.80   3.14   -0.53  1.82
-2.72  -0.56  -0.37  -0.53  2.90   -3.02
1.91   2.52   -1.90  0.95   -1.33  -2.24
-1.02  0.10   -1.55  1.46   2.47   0.38
1.73   1.08   1.83   1.51   -0.75  -1.89
-2.75  -1.83  -2.57  -1.42  2.66   1.10
0.47   0.20   -2.79  -2.21  0.54   -1.57
0.32   0.11   -0.93  -1.56  -0.94  -0.55
1.88   0.33   -2.59  0.31   -1.28  0.68
-0.76  -2.27  0.07   2.83   -1.01  -1.70

Table IV-2: Updated scale and weight parameters for PSK modulation 2-64

Scale parameter (sigma)
PSK2   PSK4   PSK8   PSK16  PSK32  PSK64
15.63  16.25  10.25  16.76  1.67   2.45
16.03  8.57   8.17   5.18   13.53  16.21
14.87  14.08  2.55   18.87  6.78   5.33
1.92   18.27  12.31  6.63   13.33  4.66
6.44   3.16   11.89  18.48  9.38   5.94
13.32  3.17   4.80   6.85   11.90  2.50
11.49  12.94  12.83  15.33  7.44   15.29
14.39  3.33   7.33   7.58   6.03   5.21
19.04  14.23  7.39   5.13   18.00  12.81
8.67   2.94   8.48   1.05   12.56  2.59

Weights (w)
PSK2   PSK4   PSK8   PSK16  PSK32  PSK64
5.059  -2.65  -0.74  -0.119 0.837  -4.310
-1.83  -1.97  0.671  0.297  1.309  -3.073
6.844  -2.42  -0.53  -0.371 -3.033 8.333
1.847  2.528  0.499  -0.356 -0.894 -3.802
5.384  -0.09  0.677  -0.346 -1.484 1.413
8.671  1.923  0.256  -0.917 -0.843 3.887
0.058  -2.85  -0.70  -0.455 0.449  -1.762
-1.36  1.852  -0.56  -0.487 -1.844 2.682
1.899  1.055  0.481  -1.498 0.844  -3.901
-3.6   -2.7   0.402  -0.664 1.832  2.549

Table IV-3: Updated shift and modulation parameters for FSK modulation 2-64

Shift parameter (c)
FSK2   FSK4   FSK8   FSK16  FSK32  FSK64
4.25   4.36   5.35   4.67   5.74   4.22
4.09   4.88   5.80   4.20   4.55   5.93
4.58   4.59   5.54   5.94   4.01   5.89
5.50   5.81   5.71   5.35   5.57   5.25
4.46   5.55   4.60   5.83   4.59   5.96
5.36   4.93   4.28   4.01   5.50   5.89
4.27   4.49   4.16   4.31   5.15   5.95
4.38   5.28   4.75   5.70   5.76   5.40
4.35   5.03   4.41   5.04   5.39   4.77
4.38   4.29   5.58   5.97   5.31   5.84

Modulation parameter (f)
FSK2   FSK4   FSK8   FSK16  FSK32  FSK64
-0.15  -2.97  2.87   1.64   2.42   -1.70
1.15   2.77   0.84   0.76   -1.48  -2.07
1.22   1.82   2.69   -2.54  -1.23  -0.84
-1.50  2.91   -0.67  2.37   0.77   3.10
2.87   1.94   -0.99  -2.95  0.16   0.45
0.66   2.08   2.29   0.04   -3.00  -2.56
-2.45  -0.72  2.18   -1.43  -1.85  0.41
0.19   2.87   -1.23  2.59   2.05   -1.09
-0.44  1.69   -1.04  -2.30  0.69   0.34
2.55   -0.76  1.41   -1.80  0.79   -0.54

Table IV-4: Updated scale and weight parameters for FSK modulation 2-64

Scale parameter (sigma)
FSK2   FSK4   FSK8   FSK16  FSK32  FSK64
4.44   6.46   6.80   17.24  19.65  12.27
14.22  2.53   3.57   3.73   6.25   19.22
2.11   8.02   15.44  10.10  17.67  6.86
5.89   10.94  4.75   11.42  4.03   6.93
19.21  2.30   11.89  2.10   6.77   19.96
19.44  2.78   14.26  5.46   4.12   14.04
17.74  4.84   11.82  10.50  3.96   5.87
1.37   6.67   10.98  17.24  17.69  5.87
19.53  15.99  13.96  8.73   19.91  3.74
12.67  3.57   1.45   14.57  3.65   19.59

Weights (w)
FSK2   FSK4   FSK8   FSK16  FSK32  FSK64
-2.58  -7.1   0.931  0.124  -0.318 8.540
-0.49  -0.57  -1.2   -0.239 -1.668 5.174
-1.53  -3.56  -1.20  3.627  4.663  -1.570
6.719  -5.81  -1.1   -0.984 -2.764 0.566
-9.32  4.059  -0.08  -0.283 0.840  -2.799
-8.05  -2.46  0.068  0.990  3.381  -1.628
0.012  3.768  0.443  0.242  4.211  12.898
-12.2  1.989  -1.49  1.298  -1.675 -6.270
8.351  5.954  -1.45  -0.296 -4.969 4.876
-4.07  -0.03  0.40   1.027  -5.423 -8.421

Table IV-5: Updated shift and modulation parameters for QAM modulation 2-64

Shift parameter (c)
QAM2   QAM4   QAM8   QAM16  QAM32  QAM64
5.54   5.07   5.77   5.65   4.81   5.27
5.77   5.90   4.25   4.45   4.18   4.29
4.69   4.24   4.33   5.20   4.56   5.56
4.63   4.72   4.83   4.17   4.67   4.56
5.02   4.29   5.00   4.08   5.13   4.59
5.03   5.18   5.68   5.89   5.23   4.79
5.16   5.09   5.89   4.91   5.79   4.63
4.61   4.67   5.52   5.91   4.35   4.47
4.48   4.59   5.13   5.62   5.61   4.47
5.43   5.14   5.13   4.94   4.15   4.75

Modulation parameter (f)
QAM2   QAM4   QAM8   QAM16  QAM32  QAM64
2.35   -0.05  0.80   3.14   -0.53  1.82
-2.72  -0.56  -0.37  -0.53  2.90   -3.02
1.91   2.52   -1.90  0.95   -1.33  -2.24
-1.02  0.10   -1.55  1.46   2.47   0.38
1.73   1.08   1.83   1.51   -0.75  -1.89
-2.75  -1.83  -2.57  -1.42  2.66   1.10
0.47   0.20   -2.79  -2.21  0.54   -1.57
0.32   0.11   -0.93  -1.56  -0.94  -0.55
1.88   0.33   -2.59  0.31   -1.28  0.68
-0.76  -2.27  0.07   2.83   -1.01  -1.70

Table IV-6: Updated scale and weight parameters for QAM modulation 2-64

Scale parameter (sigma)
QAM2   QAM4   QAM8   QAM16  QAM32  QAM64
15.63  16.25  10.25  16.76  1.67   2.45
16.03  8.57   8.17   5.18   13.53  16.21
14.87  14.08  2.55   18.87  6.78   5.33
1.92   18.27  12.31  6.63   13.33  4.66
6.44   3.16   11.89  18.48  9.38   5.94
13.32  3.17   4.80   6.85   11.90  2.50
11.49  12.94  12.83  15.33  7.44   15.29
14.39  3.33   7.33   7.58   6.03   5.21
19.04  14.23  7.39   5.13   18.00  12.81
8.67   2.94   8.48   1.05   12.56  2.59

Weights (w)
QAM2   QAM4   QAM8   QAM16  QAM32  QAM64
-0.031 -0.504 1.009  -1.428 -1.095 0.339
-0.021 -0.446 -0.796 -1.339 -0.559 -0.835
-0.043 -0.187 -0.285 -0.725 0.304  0.442
0.194  0.794  0.299  -1.122 1.359  -0.558
0.354  1.061  0.664  -0.476 -1.109 0.506
0.250  -0.591 -0.306 0.044  -1.357 -0.234
0.097  -0.622 -0.380 1.193  0.290  0.022
0.007  0.247  -0.769 -0.316 0.624  -0.177
0.245  0.467  -0.608 0.984  -0.334 0.694
0.046  -0.117 0.536  0.552  -0.294 0.423


Figure IV-4 shows the training of the GF network for the PSK modulation formats of order 2 to 64 for different numbers of iterations at a fixed SNR of 10 dB. The parameters of the GFN are trained for more than 50 iterations in the case of PSK 2, 4 and 8, while for PSK 16, 32 and 64 the training of the GFN takes fewer than 50 iterations. The mean square error is minimized and approaches zero for all curves shown in Fig. IV-4 as the number of iterations is increased.

Figure IV-5 shows the GFN training for the FSK modulation formats of order 2 to 64 at a fixed SNR. As shown in Fig. IV-5, the mean square error approaches zero as the number of iterations is increased. The training of the GFN is completed within a maximum of 50 iterations for the FSK modulation case.

Figure IV-6 shows the training of the GFN for the QAM case with a fixed SNR of 10 dB and different numbers of iterations. The training of the GFN exhibits a minimized mean square error for all curves shown in Fig. IV-6 within a small number of iterations. In Fig. IV-6, QAM 16, 32 and 64 are trained within 20 iterations, while QAM 2, 4 and 8 are trained in more than 50 iterations.

Figure IV-7 shows the training of the GFN parameters and the weights of the adaptive filter for PSK modulation of order 2 to 64 with a fixed number of iterations and different SNRs. As the SNR increases from 0 to 20 dB, the mean square error approaches zero. The training of the proposed algorithm for all cases of the considered modulations is carried out successfully, and Fig. IV-7 shows that the proposed algorithm for modulation classification is trained at an SNR of 10 dB.


Figure IV-4: Training of Gabor filter parameters and weights for modulation classification in case of PSK modulation 2-64 for different numbers of iterations at SNR = 10 dB.


Figure IV-5: Training of Gabor filter parameters and weights for modulation classification in case of FSK modulation 2-64 for different numbers of iterations at SNR = 10 dB.

Figure IV-6: Training of Gabor filter parameters and weights for modulation classification in case of QAM modulation 2-64 for different numbers of iterations at SNR = 10 dB.


Figure IV-7: Training of Gabor filter parameters and weights for modulation classification in case of PSK modulation 2-64 at different SNRs and a fixed number of iterations.

Figure IV-8: Training of Gabor filter parameters and weights for modulation classification in case of FSK modulation 2-64 at different SNRs and a fixed number of iterations.


Figure IV-9: Training of Gabor filter parameters and weights for modulation classification in case of QAM modulation 2-64 at different SNRs and a fixed number of iterations.

Figure IV-8 shows the training of the GFN parameters and the weights of the adaptive filter for FSK modulation of order 2 to 64 with a fixed number of iterations and different SNRs. The training of the FSK modulation formats is completed at SNRs of 10-15 dB. The training of the GFN for QAM 2 to 64 at different SNRs and a fixed number of iterations is shown in Figure IV-9. The parameters of the GFN and the weights of the adaptive filter are updated, and the mean square error is minimized as the SNR is increased from 0 to 20 dB.

The examples considered in Figure IV-10 are PSK4, FSK16 and QAM32. The probability of correctness versus the number of iterations at SNR = 10 dB is shown in Figure IV-10. The PCC in Figure IV-10 is approximately 1 when the number of iterations is increased up to 200.

In Figure IV-11, the probability of correctness for the different modulation scenarios is shown as a function of SNR. The PCC curve shows that the classification


performance is approximately 100% at SNR = 10 dB for a fixed number of iterations.

The simulation results show the 100% classification accuracy of the proposed algorithm. The features extracted from the proposed architecture and the classifier based upon the GFN provide correct classification among the group of considered modulation formats. Moreover, although the received signal is corrupted by additive white Gaussian noise, the classification accuracy is approximately 100% at lower SNRs. The algorithm is also computationally less complex, and the classification accuracy is attained within a small number of iterations.

Figure IV-10: Probability of correctness (PCC) versus number of iterations at SNR = 10 dB


Figure IV-11: Probability of correctness (PCC) versus SNR for a fixed number of iterations

4.7 Modified Gabor Filter Network for Classification of PAM Signals

This section presents modified Gabor filter based feature extraction from the received signal, which, to the best of our knowledge, has not previously been utilized for the problem of modulation classification of M-PAM signals. The received signal has passed through an additive white Gaussian noise (AWGN) channel, a Rayleigh flat fading channel or a Rician flat fading channel. After the successful extraction of features, the weights of the adaptive filter are updated using the Recursive Least Squares (RLS) algorithm. The algorithm previously proposed in Section 4.5 is capable of classifying M-QAM, M-PSK and M-FSK signals but is not efficient for M-PAM signals. In the modified Gabor filter network


(MGFN), we have made two important changes to make it efficient for M-PAM

signals.

4.7.1 Modified Gabor Filter Network

To classify the M-PAM signals, the proposed algorithm has to be trained and tested. The training of the GFN for PAM2, PAM4, PAM8, PAM16, PAM32 and PAM64 is carried out by adjusting the three parameters of the GFN, namely the shift, scale and modulation parameters $(c, \sigma, f)$, and the weights of the adaptive filter $(w)$. The PAM formats are spread along the x-axis, and as M increases (it may vary from 2 to 64), the amplitude values also increase. The increased amplitude values destroy the convergence of the algorithm. To cope with this divergence problem, the following changes to the existing algorithm of Ghauri et al. (2014) are proposed for the classification of PAM formats (Ghauri et al. (2015)):

1. The absolute values of the amplitudes are taken instead of the whole input modulated signal; e.g. PAM4 has amplitudes $\{-3, -1, 1, 3\}$, but after taking absolute values the amplitudes become $\{1, 3\}$.

2. The desired response for each of the considered modulation formats is the average amplitude, i.e. $d = \frac{1}{M}\sum_{j} |A_j|$, where $A_j \in \{-7, -5, -3, -1, 1, 3, 5, 7\}$ for PAM8; for example, for PAM8 the desired response is 4.


3. The weights of the adaptive filter are updated using the RLS algorithm instead of the LMS algorithm, for two reasons. First, the convergence rate of RLS is faster than that of LMS. Second, the mean square error produced by RLS is smaller than that of LMS. (A short sketch of changes 1 and 2 follows this list.)
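The first two changes can be illustrated with a small sketch. The amplitude sets follow the definitions above; the Python form is for illustration only and is not the thesis implementation.

```python
import numpy as np

def pam_preprocess(symbols):
    """Change 1: work with the absolute amplitude values of the PAM symbols."""
    return np.abs(symbols)

def pam_desired_response(M):
    """Change 2: desired response = average of the absolute amplitude levels,
    e.g. PAM-8 levels {1, 3, 5, 7} give (1 + 3 + 5 + 7) / 4 = 4."""
    levels = np.arange(1, M, 2)      # positive amplitudes 1, 3, ..., M-1
    return levels.mean()

print(pam_preprocess(np.array([-3, -1, 1, 3])))   # [3 1 1 3] -> values {1, 3}
print(pam_desired_response(8))                    # 4.0
```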

4.7.2 Modified Proposed Algorithm for Modulation Classification

The weights of the adaptive filter are updated using the RLS algorithm as follows:

$$\mathbf{k}(n) = \frac{\mathbf{K}(n-1)\,\boldsymbol{\psi}(n)}{\lambda + \boldsymbol{\psi}^T(n)\,\mathbf{K}(n-1)\,\boldsymbol{\psi}(n)} \qquad \text{(Eq. IV.35)}$$

$$e(n) = d(n) - y(n) = d(n) - \sum_{i=1}^{M}\psi_i\, w_i \qquad \text{(Eq. IV.36)}$$

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\, e(n) \qquad \text{(Eq. IV.37)}$$

$$\mathbf{K}(n) = \lambda^{-1}\left[\mathbf{K}(n-1) - \mathbf{k}(n)\,\boldsymbol{\psi}^T(n)\,\mathbf{K}(n-1)\right] \qquad \text{(Eq. IV.38)}$$

1 11 1Tn n n n n K K k K (Eq. IV.38)

To initialize the algorithm, the weights are set to $\mathbf{w}(0) = [1, 1, \ldots, 1]$; $\mathbf{K}$ is referred to as the inverse correlation matrix, $\boldsymbol{\psi}(n)$ is the input vector and $\lambda$ is the forgetting factor. The algorithms for training and testing the proposed modified Gabor filter network for the classification of M-PAM signals are as follows:


Algorithm 1: Training of Modified Gabor filter Network

Step 1. Initialize the Gabor atom parameters.

Step 2. Compute all Gabor atom nodes using (Eq. IV.21).

Step 3. Adjust the adaptive filter weights using RLS, (Eq. IV.35)-(Eq. IV.38).

Step 4. After adjusting the weights, calculate the error using (Eq. IV.6).

Step 5. If the error is less than the chosen threshold, training of the algorithm is stopped; save the Gabor atom parameters and Gabor filter weights $(c_i, \sigma_i, f_i, w_i)$.

Step 6. If the error is not less than the threshold, repeat Step 3 using the error calculated in Step 4.

Step 7. Tune the Gabor atom parameters $(c_i, \sigma_i, f_i)$ using (Eq. IV.26), (Eq. IV.28) and (Eq. IV.30).

Algorithm 2: Testing of Modified Gabor filter Network

Step 1. Input the digitally modulated signal, which may be PAM of order 2 to 64.

Step 2. Compute the output of each GFN using the relation $y = \sum_{i=1}^{M} w_i\,\psi_i$.

Step 3. Compute the error function of each GFN.

Step 4. The minimum error corresponds to the desired modulation format of the input signal.

4.7.3 Simulation Results of Modified Gabor Filter Network

The training of the MGFN for M-PAM signal classification is evaluated in tabular form, together with simulation results for the mean square error versus the number of iterations and versus the signal-to-noise ratio. The tables and figures show that the mean square error approaches approximately zero as the number of iterations and the SNR are increased. At the end of training, the three parameters of the MGFN (shift, scale and modulation parameters) and the adaptive filter weights are saved for the minimum mean square error. In the second module, the testing of the GFN is carried out by finding the error function of each Gabor filter network. The minimum squared error corresponds to the desired modulation format.

Table IV-7 shows the training performance of the MGFN, in the form of a diagonal (accuracy) matrix, for the classification of the considered modulation formats. The training performance is approximately 100% under noise-free conditions. Table IV-8 shows the training performance of the MGFN, in the same form, for the classification of M-PAM signals under the influence of additive white Gaussian noise. The training of the MGFN is done at an SNR of 10 dB, and the accuracy is approximately 99.5% for the considered modulation formats.

Figure IV-12. Training of MGFN for the M-PAM formats under no Noise


Table IV-7. Training performance of MGFN for M-PAM signal classification without noise (diagonal entries of the accuracy matrix)

PAM 2: 100%   PAM 4: 100%   PAM 8: 100%   PAM 16: 99.2%   PAM 32: 99.6%   PAM 64: 100%

Figure IV-13. Training of MGFN for the M-PAM formats on AWGN channel

Table IV-8. Training performance of MGFN for M-PAM signal classification on AWGN channel (diagonal entries of the accuracy matrix)


94

PAM 2 PAM 4 PAM 8 PAM 16 PAM 32 PAM 64

PAM 2 100%

PAM 4 99.9%

PAM 8 100%

PAM 16 98.4%

PAM 32 99.2%

PAM 64 99.3%

Table IV-9. Testing performance of MGFN for M-PAM signal classification on AWGN channel (diagonal entries of the accuracy matrix)

PAM 2: 98.6%   PAM 4: 97.8%   PAM 8: 96.6%   PAM 16: 96.1%   PAM 32: 98.1%   PAM 64: 97.6%

Table IV-10. Testing performance of MGFN for M-PAM signal classification at SNR = 10 dB on AWGN channel (diagonal entries of the accuracy matrix)

PAM 2: 99.5%   PAM 4: 98.9%   PAM 8: 99.9%   PAM 16: 98.3%   PAM 32: 98.5%   PAM 64: 98.9%

Figure IV-12 shows the training of the MGFN under noise-free conditions. From Figure IV-12 it is clear that the minimum mean square error (MMSE) approaches zero as the number of iterations increases, for all considered modulation formats. The training of the network is stopped when the MMSE reaches a chosen threshold or zero, and the features are stored.

Figure IV-13 shows the training of the MGFN under the influence of the AWGN channel with a fixed number of iterations. The MMSE approaches zero as the SNR increases from -10 to 20 dB for the considered modulation formats, as shown in Figure IV-13.

Table IV-9 shows the testing performance of the MGFN, in the form of a diagonal matrix, for M-PAM signal classification. The performance is evaluated at an SNR of 5 dB, and the table shows that the percentage accuracy for classifying the modulation formats is good even at low SNRs.

Table IV-10 shows the testing performance of the MGFN, in the form of a diagonal matrix, for M-PAM signal classification at an SNR of 10 dB; the percentage accuracy for classifying the modulation formats is about 98.7%. The testing performance is good for two reasons: first, the choice of efficient features from the MGFN, and second, the classifier.

Figure IV-14 shows the probability of correctness (POC) plotted against the number of iterations; from Figure IV-14 it is clear that the POC is approximately 1 when the number of iterations is increased up to 500. The example considered in the figure is PAM 16, which is classified correctly among the class of M-PAM signals.

Figure IV-15 shows the probability of correctness (POC) plotted against the signal-to-noise ratio (SNR) for a fixed number of iterations; from Figure IV-15 it is clear that the POC is approximately 1 when the SNR is increased up to 10 dB. The example considered in the figure is PAM 16, and the classification accuracy is approximately 90% above 3 dB of SNR.

The classification performance for the example of PAM 8 among the M-PAM signals is evaluated in Table IV-11. Table IV-11 shows the comparison of the percentage classification on the AWGN channel, the Rician flat fading channel and the Rayleigh flat fading channel. The classifier performance is approximately 100% on the AWGN channel, 95% on the Rician flat fading channel and 92% on the Rayleigh flat fading channel at an SNR of 10 dB. The efficient feature extraction of the MGFN classifies the considered modulation formats with a very low probability of error.

Figure IV-14. Probability of Correctness curve for the example of PAM16.


Figure IV-15. Probability of Correctness curve under AWGN channel for the example of PAM16

Table IV-11. Testing performance comparison of MGFN on AWGN and fading channels for the example of the 8-PAM format

SNR (dB) | AWGN  | Rician flat fading | Rayleigh flat fading
-4 | 45.0% | 35.1%  | 31.0%
-3 | 57.0% | 45.6%  | 37.0%
-2 | 66.2% | 55.44% | 46.8%
-1 | 74.2% | 64.64% | 54.9%
0  | 81.0% | 72.64% | 63.4%
1  | 85.2% | 78.84% | 70.8%
2  | 88.4% | 83.36% | 76.8%
3  | 91.6% | 87.16% | 81.0%
4  | 94.4% | 89.6%  | 85.1%
5  | 96.6% | 91.4%  | 87.8%
6  | 98.4% | 92.76% | 89.8%
7  | 99.4% | 93.76% | 91.2%
8  | 99.8% | 94.36% | 91.8%
10 | 100%  | 94.87% | 92.0%


4.8 Comparison with Existing Techniques

Table IV-12 shows the performance comparison with the existing techniques. The table lists the proposed method, the year, the number of modulation formats to be classified, the classification accuracy in percentage at an SNR of 10 dB, and the number of features extracted from the received signal. In the existing techniques, fewer modulation formats are considered for classification, more features are used, and the classification accuracy is not above 90% in most cases. The efficient features we extract are only three, and we successfully classify the maximum number of modulation formats at an SNR of 8 dB.

4.9 Summary

In this chapter, the proposed joint approach for feature extraction and classification of a multi-signal vector is used for modulation classification. In the first part, the Gabor filter based approach is used to classify the digital modulated signals {PSK2, PSK4, PSK8, PSK16, PSK32, PSK64, FSK2, FSK4, FSK8, FSK16, FSK32, FSK64, QAM2, QAM4, QAM8, QAM16, QAM32 and QAM64}. In the second part, the modified Gabor filter network based classification algorithm is used to efficiently classify the M-PAM signals.

The proposed algorithm gives high classification accuracy at lower SNRs. The performance of the proposed modified classifier is also compared on three different channels, which shows the success rate of the classifier. In the end, the proposed GFN and MGFN based modulation classification algorithms are compared with the existing techniques.

Table IV-12. Performance comparison with the existing techniques

Method, Year & Reference | Modulation formats classified | Classification accuracy (%) at 10 dB of SNR | No. of features
Pdf of the received signal, zero crossing (Hsue et al., 1990) | PSK, FSK (6) | 98 (at 15 dB of SNR) | 4
Spectral features (Azzouz et al., 1995) | PSK, FSK (6) | 90 | 5
Spectral features and moments (Nandi et al., 1998) | PSK, FSK, other types (12) | 96 (at 15 dB of SNR) | 9
Constellation shape (Mobasseri et al., 2000) | PSK, QAM (3) | 90 (at 5 dB of SNR) | 4
HOC (Swami et al., 2000) | PSK, QAM (4) | 96 | 2
Neural networks (Zhao et al., 2003) | PSK, FSK, QAM (7) | 93 (at 8 dB of SNR) | 5
Back Propagation Algorithm (BPA), Resilient BPA (Wong et al., 2004) | PSK, FSK, QAM (10) | 89.96 / 99.95 | 17
Time-frequency features (Ye et al., 2007) | FSK, PSK (6) | 97.64 | 4
Normalized cumulants (Wu et al., 2008) | PSK, QAM (5) | 97.5 | 1
Cumulant features, multiclass SVM (Ebrahimzadeh et al., 2010) | PSK, QAM (9) | 97.56 | 12
Spectral features and moments (Popoola et al., 2011) | FSK, QAM (12) | 99.95 | 8
HOC & spectral features, multiclass SVM (Ebrahimzadeh, 2012) | PSK, QAM, FSK (11) | 97.45 | 11
Gabor filter based features (Ghauri et al., 2014) | PSK, FSK, QAM (18) | 100 | 3
Modified Gabor filter based features (proposed) | PAM (6) | 100 (at 8 dB of SNR) | 3


CHAPTER V

AUTOMATIC MODULATION CLASSIFICATION USINGHIDDEN MARKOV MODELS

5.1 Introduction

In this chapter, we have proposed joint feature extraction and classifier

structure based upon GFs network and hidden markov model (HMM) for

classification of modulation formats that differ with the existing classifiers. The

proposed classifier structure uses Baum-Welch (BW) algorithm for training of

HMM, which computes the probability of observation sequence given the

model. The probability of observation sequence is maximized via updating four

parameters. Three parameters (shift, scale, modulation) of GFs network are

updated using GA, and one HMM parameter, the probability distribution in

each of the states, is updated using BW algorithm.

In the training of the classifier, the probability of the observation sequence has to be maximized while updating the four parameters. For each modulation format, the Gabor filter network and the HMM are trained individually. We simultaneously optimize the feature extractor (the Gabor filter bank) and the classifier (the hidden Markov model). In the testing phase, the proposed classifier classifies the considered modulation formats. The efficient feature extractor and the HMM classifier are trained simultaneously to formulate an optimal structure. The simulation results show a significant performance improvement when compared with other existing techniques. The PAM and QAM signals are considered for classification, and all the experiments are carried out in the MATLAB environment.

5.2 System Model and Gabor Filter Network

The system model for the Gabor filter is the same as discussed in Chapter 3. The input to the GF is the received signal, which is corrupted by AWGN. First, the proposed GF converts the input sequence from serial to parallel. The GF layer has $N$ nodes $O_{k1}, O_{k2}, \ldots, O_{kN}$, also called Gabor nodes. The Gabor atom is defined as:

$$g_i(t_k) = \frac{1}{\sqrt{\sigma_i}}\, g\!\left(\frac{t_k - c_i}{\sigma_i}\right) e^{j w_i t_k}, \qquad 1 \leq k \leq K \qquad \text{(Eq. V.1)}$$

where $g(t_k) = 2^{1/4} e^{-\pi t_k^2}$ and $(c, \sigma, f)$ are the shift parameter, scale parameter and modulation parameter, respectively. The output of the $i$th Gabor atom node is $O_{ki}$, corresponding to the input signal $x_{ki}$. Thus the output of the Gabor atom, which represents the inner product of the Gabor atom and the transmitted signal, is defined as:

$$O_{ki} = \langle g_i(t_k), x_{ki} \rangle \qquad \text{(Eq. V.2)}$$

The outputs of the GF are input to the classifier structure, the HMM. After maximizing the fitness function, the parameters of the GF become the features which we use to classify the PAM and QAM signals.


5.3 Genetic Algorithm

Genetic algorithms enjoy a lot of variety in terms of representation, generation of the initial population, selection criteria, and mutation and crossover methods. The parameters to be optimized, known as genes, are concatenated to make a string which is called a chromosome. In our problem, the GF parameters (scale, shift and modulation) form a chromosome. A major part of a GA is the fitness function; the fitness function in our problem is the probability of the observation sequence given the model parameters, i.e. $P[O\,|\,\lambda]$. There are a number of ways to apply crossover, such as single-point crossover, multipoint crossover and uniform crossover; we use single-point crossover. The purpose of mutation is to avoid stagnation and premature convergence. In order to preserve the best individuals and promote them to the next generation, the elitism operator is used. The flow diagram of the genetic algorithm is shown in Figure V-1.

Figure V-1. Flow Chart of the Genetic Algorithm (initialize the population of Gabor filter parameters [shift, scale, modulation]; evaluate the fitness function, i.e. the probability of the observation given the model; apply selection, crossover and mutation; repeat until the optimization criterion is met).
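The following MATLAB sketch illustrates one generation of the GA described above (single-point crossover, mutation and elitism). It is only a sketch: the population size, gene count, mutation rate and the placeholder fitness function are assumptions, since in the proposed scheme the fitness would be $P[O\,|\,\lambda]$ returned by the HMM.

% Sketch of one GA generation over chromosomes of GF parameters (illustrative only)
popSize = 100; nGenes = 3*8;                   % 3 parameters per Gabor node (assumption)
pop = rand(popSize, nGenes);                   % random initial population
fitness = @(chrom) -sum((chrom - 0.5).^2);     % placeholder for P[O|lambda]

fit = zeros(popSize,1);
for p = 1:popSize, fit(p) = fitness(pop(p,:)); end
[~, idx] = sort(fit, 'descend');
elite = pop(idx(1:2), :);                      % elitism: keep the best individuals

newPop = elite;
while size(newPop,1) < popSize
    parents = pop(idx(randi(20,1,2)), :);      % selection from the fittest 20
    cut = randi(nGenes-1);                     % single-point crossover
    child = [parents(1,1:cut), parents(2,cut+1:end)];
    mask = rand(1,nGenes) < 0.01;              % mutation with probability 0.01
    child(mask) = rand(1, sum(mask));
    newPop(end+1, :) = child;                  %#ok<AGROW>
end
pop = newPop;                                  % next generation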


5.4 Hidden Markov Model

The main characteristics of an HMM are its states, the state transition probabilities, the observation symbol probabilities and the initial state distribution.

1. States of the HMM

The state of the HMM at time instant $t$ is $q_t$; thus $q_t$ takes a value from $\{q_1, q_2, \ldots, q_N\}$, where $N$ is the number of states.

2. State Transition Probability

The set of state transition probabilities is $A = \{a_{ij}\}$, where

$$a_{ij} = P[q_{t+1} = j \,|\, q_t = i], \qquad 1 \le i, j \le N \qquad \text{(Eq. V.3)}$$

with $a_{ij} \ge 0$ and $\sum_{j=1}^{N} a_{ij} = 1$. We assume that for all constellations the state transition probabilities are equally probable, i.e. $1/M$, where $M$ is the number of distinct observation symbols.

3. Observation Symbol Probability

The probability of observing a specific symbol from a specific state is $B = \{b_j(l)\}$, where

$$b_j(l) = P[O_t = v_l \,|\, q_t = j], \qquad 1 \le j \le N, \; 1 \le l \le K \qquad \text{(Eq. V.4)}$$

$O_t$ is the vector of output symbols and $v_l$ is the $l$th observation symbol, with $b_j(l) \ge 0$ and $\sum_{l=1}^{K} b_j(l) = 1$. Here we assume $b_j(l)$ to be flat and update it using the BW algorithm.

4. Initial State Distribution

The probability of starting the process from a specific state, the initial state distribution, is

$$\pi_i = P[q_1 = i] = 1/N \qquad \text{(Eq. V.5)}$$

A complete specification of an HMM requires the three probability measures $(A, B, \pi)$. Therefore $\lambda = (A, B, \pi)$ is used to denote the complete set of HMM parameters with discrete probability distributions. The following assumptions are made in the HMM to reduce the computational complexity:

1. Markov Assumption

$$a_{ij} = P[q_{t+1} = j \,|\, q_t = i]$$

The above statement defines a first-order HMM with one-step memory, because the next state depends only on the current state.

2. Stationarity Assumption

The state transition probabilities do not change with time:

$$P[q_{t_1+1} = j \,|\, q_{t_1} = i] = P[q_{t_2+1} = j \,|\, q_{t_2} = i] \qquad \text{(Eq. V.6)}$$
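As a concrete illustration of the uniform initialization described above, the following MATLAB sketch sets up $\lambda = (A, B, \pi)$ with equiprobable state transitions, a flat observation symbol distribution and a uniform initial state distribution. The values of $N$ and $K$ are arbitrary assumptions, and the transitions are set to $1/N$ here so that each row of $A$ sums to one.

% Sketch: uniform initialization of the HMM lambda = (A, B, Pi); N and K assumed
N  = 4;                    % number of states (assumption)
K  = 16;                   % number of distinct observation symbols (assumption)
A  = ones(N, N) / N;       % equiprobable state transitions
B  = ones(N, K) / K;       % flat observation symbol distribution b_j(l), Eq. V.4
Pi = ones(N, 1) / N;       % uniform initial state distribution, Eq. V.5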


There are three basic problems of interest in HMMs: the evaluation problem, the learning problem and the decoding problem.

1. Evaluation Problem: Given an HMM $\lambda$ and a sequence of observations $O$, what is the probability $P[O\,|\,\lambda]$ that the observations were generated by the model?

2. Learning Problem: Given an HMM $\lambda$ and a sequence of observations $O$, how should we adjust the model parameters in order to maximize $P[O\,|\,\lambda]$?

3. Decoding Problem: Given an HMM $\lambda$ and a sequence of observations $O$, what is the most likely state sequence in the model that produced the observations?

5.4.1 Baum-Welch (BW) Algorithm

Since only the first two problems are exploited in this thesis, their solutions are based on the forward-backward procedure, which iteratively estimates the unknown model parameters so as to maximize the probability $P[O\,|\,\lambda]$; this is known as the Baum-Welch algorithm. The procedure to evaluate $P[O\,|\,\lambda]$ is as follows.

The forward variable $\alpha_t(i)$ is defined as the probability of the partial observation sequence $o_1, o_2, \ldots, o_t$ with the process in state $i$ at time $t$:

$$\alpha_t(i) = P[o_1, o_2, \ldots, o_t, q_t = i \,|\, \lambda]$$


Initialization

$$\alpha_1(i) = P[o_1, q_1 = i \,|\, \lambda] = P[o_1 \,|\, q_1 = i]\, P[q_1 = i] = \pi_i\, b_i(o_1), \qquad 1 \le i \le N \qquad \text{(Eq. V.7)}$$

Induction

$$\alpha_{t+1}(j) = P[o_1, o_2, \ldots, o_t, o_{t+1}, q_{t+1} = j \,|\, \lambda] = P[o_{t+1} \,|\, q_{t+1} = j] \sum_{i=1}^{N} P[o_1, o_2, \ldots, o_t, q_t = i \,|\, \lambda]\, P[q_{t+1} = j \,|\, q_t = i]$$

$$\phantom{\alpha_{t+1}(j)} = b_j(o_{t+1}) \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \qquad \text{(Eq. V.8)}$$

Using the recursion formula we obtain

$$\alpha_T(i) = P[O, q_T = i \,|\, \lambda]$$

Termination

The required probability of the observation sequence given the model is

$$P[O \,|\, \lambda] = \sum_{i=1}^{N} \alpha_T(i) = \sum_{i=1}^{N} P[O, q_T = i \,|\, \lambda] \qquad \text{(Eq. V.9)}$$

The backward variable $\beta_t(i)$ is defined as the probability of the partial observation sequence $o_{t+1}, o_{t+2}, \ldots, o_T$ given that the current state is $i$:

$$\beta_t(i) = P[o_{t+1}, o_{t+2}, \ldots, o_T \,|\, q_t = i, \lambda]$$


Initialization

$$\beta_T(i) = 1, \qquad 1 \le i \le N$$

Induction

A recursive relationship that can be used to calculate $\beta_t(i)$ is

$$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j) \qquad \text{(Eq. V.10)}$$

Note that

$$\alpha_t(i)\,\beta_t(i) = P[o_1, o_2, \ldots, o_t, q_t = i \,|\, \lambda]\; P[o_{t+1}, o_{t+2}, \ldots, o_T \,|\, q_t = i, \lambda] = P[O, q_t = i \,|\, \lambda]$$

Termination

Therefore another way to calculate $P[O\,|\,\lambda]$, using the forward and backward variables, is

$$P[O \,|\, \lambda] = \sum_{i=1}^{N} P[O, q_t = i \,|\, \lambda] = \sum_{i=1}^{N} \alpha_t(i)\,\beta_t(i) \qquad \text{(Eq. V.11)}$$
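A minimal MATLAB sketch of the forward and backward recursions (Eq. V.7 to Eq. V.11) is given below. The model $(A, B, \pi)$ and the symbol sequence are small arbitrary examples, not values from the thesis; the two ways of computing $P[O\,|\,\lambda]$ should agree for any time index $t$.

% Sketch: forward-backward evaluation of P[O | lambda] on an assumed toy model
N = 3; K = 4; T = 6;
A  = ones(N,N)/N;  B = ones(N,K)/K;  Pi = ones(N,1)/N;   % toy uniform model
O  = [1 3 2 4 1 2];                                      % observed symbol indices

alpha = zeros(N,T); beta = ones(N,T);
alpha(:,1) = Pi .* B(:,O(1));                            % initialization, Eq. V.7
for t = 1:T-1
    alpha(:,t+1) = B(:,O(t+1)) .* (A' * alpha(:,t));     % induction, Eq. V.8
end
for t = T-1:-1:1
    beta(:,t) = A * (B(:,O(t+1)) .* beta(:,t+1));        % induction, Eq. V.10
end
P1 = sum(alpha(:,T));                % termination, Eq. V.9
t  = 3;                              % any time index
P2 = sum(alpha(:,t) .* beta(:,t));   % Eq. V.11, equals P1
fprintf('P[O|lambda] = %g (forward) = %g (forward-backward)\n', P1, P2);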

In addition to the forward and backward variables, we need to define two more auxiliary variables. The first, $\xi_t(i, j)$, is the probability of being in state $i$ at time $t$ and in state $j$ at time $t+1$:

$$\xi_t(i, j) = P[q_t = i, q_{t+1} = j \,|\, O, \lambda] = \frac{P[q_t = i, q_{t+1} = j, O \,|\, \lambda]}{P[O \,|\, \lambda]} = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\displaystyle\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)} \qquad \text{(Eq. V.12)}$$


The second, the posterior probability $\gamma_t(i)$, is the probability of being in state $i$ at time $t$ given the observation sequence $O$:

$$\gamma_t(i) = P[q_t = i \,|\, O, \lambda] = \frac{P[q_t = i, O \,|\, \lambda]}{P[O \,|\, \lambda]} = \frac{\alpha_t(i)\,\beta_t(i)}{\displaystyle\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)} \qquad \text{(Eq. V.13)}$$

The relationship between $\gamma_t(i)$ and $\xi_t(i, j)$ is as follows:

$$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i, j) \qquad \text{(Eq. V.14)}$$

According to the BW algorithm, given an initial model $\lambda = (A, B, \pi)$, the forward variables $\alpha_t(i)$ and backward variables $\beta_t(i)$ are first calculated, followed by $\xi_t(i, j)$ and $\gamma_t(i)$. Finally, the HMM parameters are updated so as to maximize $P[O\,|\,\lambda]$ according to the following equations.

$$\hat{\pi}_i = \gamma_1(i) \qquad \text{(Eq. V.15)}$$

$$\hat{a}_{ij} = \frac{\displaystyle\sum_{t=1}^{T-1} \xi_t(i, j)}{\displaystyle\sum_{t=1}^{T-1} \gamma_t(i)} \qquad \text{(Eq. V.16)}$$

where $\sum_{t=1}^{T-1} \gamma_t(i)$ is the expected number of times the process is in state $i$, or equivalently the expected number of transitions made from state $i$:


$$\sum_{t=1}^{T-1} \gamma_t(i) = \gamma_1(i) + \gamma_2(i) + \cdots + \gamma_{T-1}(i) = P[q_1 = i \,|\, O, \lambda] + P[q_2 = i \,|\, O, \lambda] + \cdots + P[q_{T-1} = i \,|\, O, \lambda] \qquad \text{(Eq. V.17)}$$

Similarly, $\sum_{t=1}^{T-1} \xi_t(i, j)$ is the expected number of transitions from state $i$ to state $j$:

$$\sum_{t=1}^{T-1} \xi_t(i, j) = P[q_1 = i, q_2 = j \,|\, O, \lambda] + P[q_2 = i, q_3 = j \,|\, O, \lambda] + \cdots + P[q_{T-1} = i, q_T = j \,|\, O, \lambda] \qquad \text{(Eq. V.18)}$$

The observation symbol probabilities are re-estimated as

$$\hat{b}_j(l) = \frac{\displaystyle\sum_{\substack{t=1 \\ o_t = v_l}}^{T} \gamma_t(j)}{\displaystyle\sum_{t=1}^{T} \gamma_t(j)} \qquad \text{(Eq. V.19)}$$

i.e. the expected number of times the process is in state $j$ and observes symbol $v_l$, divided by the expected number of times it is in state $j$.

Using the re-estimation formulas (Eq. V-15), (Eq. V-16) and (Eq. V-19), the HMM parameters are updated so as to maximize the probability of the observation sequence, i.e. $P[O\,|\,\lambda]$.
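A single Baum-Welch re-estimation pass (Eq. V.12 to Eq. V.19) can be sketched in MATLAB as follows. The toy model and observation sequence are assumptions, the loop-based form is written for clarity rather than speed, and the row-wise division relies on implicit expansion.

% Sketch: one Baum-Welch re-estimation pass on an assumed toy model
N = 3; K = 4; T = 6;
A = ones(N,N)/N; B = ones(N,K)/K; Pi = ones(N,1)/N;
O = [1 3 2 4 1 2];

alpha = zeros(N,T); beta = ones(N,T);
alpha(:,1) = Pi .* B(:,O(1));
for t = 1:T-1, alpha(:,t+1) = B(:,O(t+1)) .* (A' * alpha(:,t)); end
for t = T-1:-1:1, beta(:,t) = A * (B(:,O(t+1)) .* beta(:,t+1)); end
PO = sum(alpha(:,T));                                   % P[O | lambda]

gamma = (alpha .* beta) / PO;                           % Eq. V.13
xi = zeros(N,N,T-1);                                    % Eq. V.12
for t = 1:T-1
    xi(:,:,t) = (alpha(:,t) * (B(:,O(t+1)) .* beta(:,t+1))') .* A / PO;
end

PiNew = gamma(:,1);                                     % Eq. V.15
ANew  = sum(xi,3) ./ sum(gamma(:,1:T-1), 2);            % Eq. V.16 (row-wise division)
BNew  = zeros(N,K);
for l = 1:K
    BNew(:,l) = sum(gamma(:,O==l), 2) ./ sum(gamma, 2); % Eq. V.19
end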

5.5 Proposed Classifier and its Training

In the proposed classifier structure, the objective is to maximize the probability of the observations given the model parameters, $P[O\,|\,\lambda]$. To maximize $P[O\,|\,\lambda]$, we update the HMM parameters using the Baum-Welch algorithm and the Gabor parameters using the GA. The proposed classifier structure is shown in Figure V-2. The symbols considered are $K \times N$, and the observation vector is given by

$$O = \begin{bmatrix} O_{k1} & O_{k2} & \cdots & O_{kN} \end{bmatrix}^{T} \qquad \text{(Eq. V.20)}$$

where $O_{k1} = x_k\, e^{-\left(\frac{t_k - c_1}{\sigma_1}\right)^{2}} \cos(w_1 t_k),\; k = 1, 2, \ldots, K$.

One of the HMM parameters, the probability of observing a specific symbol from a specific state, is updated using the forward-backward algorithm. The proposed classifier structure has two phases: in the first phase the classifier is trained, while in the second phase the classifier is tested. In the training phase, the probability of the observation sequence given the model, $P[O\,|\,\lambda]$, is to be maximized. This probability is the total likelihood of the observations and can be expressed as $L_{tot} = P[O\,|\,\lambda]$. There is no analytical way to solve for the model $\lambda = (A, B, \pi)$ that maximizes $L_{tot}$, but the model parameters can be chosen such that $L_{tot}$ is locally maximized using an iterative procedure such as the Baum-Welch algorithm. The structure of the proposed classifier is shown in Figure V-3.

Figure V-2. Proposed Classifier Structure


Figure V-3. Structure of the Proposed Classifier (the digitally modulated data set is split into training and testing sets; features are extracted from the Gabor filter, the classifier model is built and the fitness function is evaluated; selection, crossover and mutation are applied until the criterion is met, yielding the optimized classifier parameters and feature subset).

5.5.1 Training of Classifier

In the training phase of the classifier structure, the objective is to maximize the probability of the observations; for this we solve the learning problem of the HMM, in which the HMM parameters are adjusted. We use the GA to adjust the three parameters of the GFs. The length of each chromosome is 3 times the number of GF parameters, i.e. $\{c_1^1, \sigma_1^1, f_1^1, c_2^1, \sigma_2^1, f_2^1, \ldots, c_N^1, \sigma_N^1, f_N^1\}$. The fitness function for the GA is the probability of the observation sequence, i.e. $P[O\,|\,\lambda]$. We also use the BW algorithm to maximize $P[O\,|\,\lambda]$ by updating one of the HMM parameters, the probability distribution in each of the states. The proposed training algorithm for the classifier is as follows:

Algorithm for Training the Classifier

Step 1. Initialize the HMM model parameters as stated in (Eq. V-3), (Eq. V-4) and (Eq. V-5).
Step 2. Input the observation sequence using (Eq. V-6).
Step 3. Evaluate $P[O\,|\,\lambda]$ using (Eq. V-9) and (Eq. V-11).
Step 4. Calculate $\xi_t(i, j)$ and $\gamma_t(i)$ using (Eq. V-12) and (Eq. V-13), respectively.
Step 5. Update the HMM parameters (re-estimation formulas) using (Eq. V-19).
Step 6. Evaluate the fitness function, i.e. $P[O\,|\,\lambda]$, using the updated HMM parameters.
Step 7. Update the Gabor filter parameters using Figure V-1.
Step 8. Evaluate $P[O\,|\,\lambda]$.
Step 9. Save the Gabor filter and HMM parameters giving the maximum $P[O\,|\,\lambda]$.

5.5.2 Testing of Classifier

In the test phase of the classifier, we solve the HMM evaluation problem, i.e. the probability of the observation sequence given the model parameters. Figure V-4 shows the proposed classifier testing scheme, in which the received signal is fed to the classifier, consisting of a bank of Gabor filters (BGF) network and an HMM; the classifier evaluates the probability of the observation sequence given the model. The maximum among all the outputs of the classifier indicates the modulation format of the received signal.

The steps involved in the testing phase are as follows:

Algorithm for Testing the Classifier

Step 1. Input the received signal corrupted by AWGN.
Step 2. Evaluate $P[O\,|\,\lambda_i]$ for each of the considered modulation formats.
Step 3. The maximum $P[O\,|\,\lambda_i]$ gives the modulation format of the received signal.

Figure V-4. Testing of Proposed Classifier
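The decision rule of Step 3 amounts to an arg-max over the per-class likelihoods. A brief MATLAB sketch is given below; the likelihood values are placeholders standing in for the outputs of the trained GF+HMM models, not results from the thesis.

% Sketch: decision rule of the testing phase (placeholder likelihoods assumed)
formats = {'PAM4','PAM8','PAM16','QAM4','QAM16','QAM64'};
logLik  = [-410.2 -395.7 -402.1 -388.4 -399.0 -407.5];  % stand-ins for log P[O|lambda_i]
[~, k] = max(logLik);
fprintf('Classified modulation format: %s\n', formats{k});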

5.6 Simulation Results

The classification performance of the proposed classifier is evaluated in this section. The considered modulation formats are PAM and QAM signals corrupted by an AWGN channel. The training and testing of the classifier are carried out using Gabor features and the HMM in conjunction with the GA. The population size for the genetic algorithm is 100 throughout the experiments. Random values of the scale, shift and modulation parameters are selected in a specified range to form the population.

Table V-1. Classification accuracy of the proposed Feature Extractor plus Classifier (FEPC) for different numbers of samples and SNRs.

SNR      512     1024    2048    4096
0 dB     0.73    0.88    0.94    0.98
5 dB     0.84    0.97    1.00    1.00
10 dB    0.96    1.00    1.00    1.00
15 dB    0.99    1.00    1.00    1.00

The numbers of samples are 512, 1024, 2048 and 4096, at SNRs of 0 dB, 5 dB, 10 dB and 15 dB. For each value of SNR and each considered modulation format, training is done with 10,000 realizations. The training curves for the PAM and QAM signals are shown in Figure V-5. The probability of correctness approaches 1, which is the objective of the training, at lower SNRs; the training of the classifier for the considered modulation formats is carried out at lower SNRs.

Table V-1 shows the classification accuracy for different values of SNR and different numbers of samples. The performance is shown in the form of the average probability of correct classification (PCC) versus SNR.


Figure V-5. Classifier training for the case of PAM and QAM signals (probability of correctness versus SNR in dB for PAM-4, PAM-8, PAM-16 and QAM-4, QAM-16, QAM-64).

Figure V-6. Average classification performance (probability of correct classification versus SNR in dB) for different numbers of samples (512, 1024, 2048 and 4096).


Table V-2. Percentage classification performance (confusion-matrix diagonal) at different SNRs with 2048 samples.

SNR      PAM4     PAM8     PAM16    QAM4     QAM16    QAM64
0 dB     72.35    70.94    68.99    72.35    71.92    69.96
5 dB     97.32    98.01    97.29    97.57    97.09    97.78
10 dB    99.92    99.89    99.95    99.99    99.98    99.84

Figure V-6 shows the performance of the classifier for the considered modulation formats at different numbers of samples. From the figure it is clear that as the number of samples increases, the probability of correct classification also increases. The PCC reaches 1 at 6 dB of SNR when 4096 samples are used.


The classification performance is much better at lower SNRs and with a smaller number of features. Table V-2 shows the performance of the classifier with 2048 samples at different SNRs in the form of a confusion matrix. It is also clear from the table that classification of the modulation formats is achieved at lower SNRs with 10,000 realizations. The joint PAM and QAM classification in Table V-2 is approximately 100% at SNRs below 10 dB. The confusion matrix is shown only for three SNRs: 0 dB, 5 dB and 10 dB.

5.7 Comparison with Existing Techniques

Table V-3 shows a performance comparison with well-known existing techniques. The table lists the features used, the considered modulation formats and the classification accuracy. The classification performance of the proposed FEPC is much better at lower SNRs and with fewer features.

Dobre et al. (2003) proposed 8th-order cumulants to classify modulation formats with a classification accuracy of 70% at 10 dB SNR. At the same SNR, the classification accuracy we achieve is 100% for the six-class classification problem.

The classifier of Shi et al. (2011) achieves a classification accuracy of 83% for higher-order QAMs at an SNR of 10 dB, using the characteristic function to compensate for the inefficiency of cumulants. The authors also considered timing offset; without timing offset, we achieve 100% classification accuracy while classifying higher-order QAM and PAM signals.

Puengnim et al. (2007) use 2000 samples to classify 4-QAM and 16-QAM, with a classifier accuracy of 90% at an SNR of 12 dB, while we achieve 100% classification accuracy with 2048 samples at an SNR of 10 dB.

Chaithanya et al. (2010) achieve a classification accuracy of 93.5% at an SNR of 10 dB with 1000 samples, while we achieve 99% at the same SNR using the novel Gabor features and the HMM+GA classifier.

Mirarab et al. (2007) report a classification accuracy of 94% at an SNR of 10 dB using 2000 samples with cumulants as the feature set, while we achieve 100% accuracy at the same SNR with 2048 samples using the novel classifier and Gabor features.

Xi et al. (2006) utilized 8th-order cumulants to obtain a classification accuracy of 94% with 2000 samples at an SNR of 10 dB, while our proposed classifier classifies the higher-order QAMs with approximately 100% accuracy at the same SNR and a comparable number of samples.

He et al. (2008) classify the modulation formats with a classification accuracy of 97% at an SNR of 15 dB, while our classifier classifies QAM16 with approximately 100% accuracy at an SNR of 10 dB. Wu et al. (2008) use 4th-order cumulants for classifying QAM signals and report a classification performance of 94% at 10 dB SNR; the classification accuracy of our proposed model at the same SNR is 99.99%.


Aslam et al. (2012), using genetic programming with K-nearest neighbour, report a classification accuracy of 98.55% at 15 dB SNR with 1024 samples for the 2-class and 4-class problems. We achieve 100% accuracy at an SNR of 10 dB for the 6-class problem with the same number of samples, and the overall classification accuracy for the considered modulation formats is approximately 100% at 8 dB SNR. Ghauri et al. (2014) classify the modulation formats at 10 dB SNR with a classification performance of 99% for QAM signals, while we classify the QAM signals with 100% accuracy at an SNR of 8 dB.

Table V-3. Performance Comparison with Existing Techniques

Features, Year & Author                                   | Classification Accuracy at 10 dB SNR  | Proposed Classifier: Classification Accuracy at 10 dB SNR
8th-order cumulants, Dobre et al. (2008)                  | 70%                                   | 100% (six-class problem)
Characteristic function + cumulants, Shi et al. (2011)    | 83%                                   | 100%
BW algorithm, Puengnim et al. (2007)                      | 90% at 12 dB SNR (4-QAM & 16-QAM)     | 100% (six-class problem)
HOM & HOC, Chaithanya et al. (2010)                       | 93.5% (1000 samples)                  | 99% (1024 samples)
Cumulant features, Mirarab et al. (2007)                  | 94% (2000 samples)                    | 100% (2048 samples)
8th-order cumulants, Xi et al. (2006)                     | 94% (2000 samples)                    | 100% (2048 samples)
Multifractal features, He et al. (2008)                   | 97% at 15 dB SNR                      | 100%
4th-order cumulants, Wu et al. (2008)                     | 94%                                   | 99.9%
Genetic programming + KNN, Aslam et al. (2012)            | 98.55% at 15 dB SNR                   | 100% at 8 dB SNR (overall classification accuracy)

5.8 Summary

In this chapter, the classification of PAM and QAM signals has been carried out using a novel classification approach. The proposed classifier structure is basically a hidden Markov model which evaluates the probability of the observation sequence. A genetic algorithm is used to adjust the GF parameters so as to maximize the probability of the observation sequence, and the HMM parameters are updated using the BW algorithm. The proposed classifier classifies the considered modulation formats with high classification accuracy at low SNRs. The classification performance is also evaluated for different numbers of samples; the classifier performance is approximately 100% for PAM and QAM signal classification at low SNR.


CHAPTER VI

CONCLUSIONS AND FUTURE DIRECTIONS

6.1 Summary of Results

Automatic modulation classification is one of the most prominent and necessary features of many current and future communication systems. In AMC, the identification/classification of the received signal's modulation format with high probability of correctness is one of the objectives of this research. In this dissertation, only the feature-based pattern recognition approach is used to solve the classification problem. New features have been extracted and modified classifiers designed to classify modulation formats with low probability of error.

In the literature, many algorithms have been investigated for the solution of this problem, but many of them are capable of classifying only a limited number of modulation formats at high SNRs. An efficient feature extractor together with a suitable classifier structure formulates the optimum classifier. Hence, the optimum solution demands novel feature extraction and classification of modulation formats with high classification accuracy. This has been the theme of this dissertation, described in Chapters III, IV and V.

In Chapter III, the proposed classifiers are based on ML, LDA, FFBPNN and SVM. The feature sets used for classification are normalized higher-order cumulants and spectral features. The classification accuracy of the proposed classifier using 8th-order cumulants is much better than that of existing classifiers based on 4th-order and 6th-order cumulants. The classifiers based on spectral features are also capable of classifying the considered modulation formats efficiently. The classification accuracy is compared with well-known existing techniques, and it is found that the proposed classifiers perform well on the AWGN channel as well as on fading channels. The classifiers proposed in Chapter III use existing features in such a way as to classify the maximum number of modulation formats with high classification accuracy.

In Chapter IV, new efficient features, named Gabor features, are extracted from the Gabor filter network. The extracted features are capable of classifying M-QAM, M-FSK and M-PSK signals efficiently. For each modulation format, one Gabor filter network is trained and the GFN parameters are saved. The training and testing of the classifier show that the Gabor-filter-based classifier efficiently classifies the modulation formats on the AWGN channel. The classification process is difficult for M-PAM signals, as these signals are spread around a single axis; a modification of the classifier algorithm makes it capable of classifying M-PAM signals at lower SNRs. The performance is also compared with existing schemes in the literature, which shows that the Gabor features are efficient enough to classify M-PAM, M-QAM, M-FSK and M-PSK signals under the effects of the AWGN channel, the Rayleigh flat fading channel and the Rician flat fading channel.

In Chapter V, a novel classifier structure and feature extractor are proposed to formulate an optimum classifier. The classifier is based on an HMM and a bank of Gabor filters (BGF) network. The simulation results show approximately 100% classification accuracy even at low SNRs for the classification of M-QAM and M-PAM signals. The classifier performance is also compared with well-known existing techniques, which demonstrates the superiority of the proposed classifier.

Moreover, it is shown by simulations that the proposed schemes perform significantly better than many existing schemes in the literature. The newly extracted Gabor features are also compared with higher-order cumulants and spectral features, and the performance of the proposed features is much better than that of the existing features.

6.2 Future Directions

Several directions can be explored using the Gabor-features-based classifier structure in application to adaptive communications. A few of these directions are given below.

The proposed classifier structure may be extended from single-user AMC to multi-user AMC. In this regard, multiple-input multiple-output (MIMO) systems can also be investigated.

The performance of the classifier based on HMM+GA is demonstrated over an AWGN channel; other channels such as Rayleigh fading, Rician fading and Nakagami-m fading channels may also be investigated for different channel parameters such as Doppler shift and other fading characteristics.

The Gabor features may be optimized with other optimization techniques, such as nature-inspired algorithms and fuzzy rule-based systems. The performance of the classifier may be improved by jointly optimizing the Gabor features and the classifier structure.


The proposed classifier scheme may be extended to the classification of radar signals and images. Genetic programming may be used to improve the classification accuracy with the Gabor feature set.


REFERENCES

ABAVIASANI, M.S. and TABATABAVAKILI, V.A. (2009) Novel algorithm forblind adaptive recognition between 8-PSK and /4-Shifted QPSKmodulated signals for software defined radio applications. 4thInternational Conference on Cognitive Radio Oriented WirelessNetworks and Communications, pp. 1-6.

ABDI, A., DOBRE, O. A., CHOUDHRY, R., BAR-NESS, Y. and SU, W. (2004)Modulation classification in fading channels using antenna arrays. IEEEMILCOM, pp. 211-217.

AHMADI, N. (2011) Modulation recognition based on Constellation shapeusing TTSAS algorithm and template matching. Journal of patternrecognition research, Vol. 1, pp. 43-55.

ASLAM, M.W., ZHU, Z. and NANDI, A.K. (2011) Robust QAM classificationusing Genetic programming and fisher criterion. 19th European SignalProcessing Conference, pp. 995-999.

ASLAM, M.W., ZHU, Z. and NANDI, A.K. (2012) Automatic modulationclassification using combination of Genetic programming and KNN.IEEE transactions on wireless communication, Vol. 11, No. 8, pp. 2742-2750.

ASSALEH, K., FARRELL, K. and MAMMONE, J.R. (1992) A new method ofmodulation classification for digitally modulated signals. MILCOMCommunications-Fusing Command, Control and Intelligence, Vol. 2,pp. 712-716.

AVCI, E. and AVCI, D. (2008) The performance comparison of discrete waveletneural network and discrete wavelet adaptive network based fuzzyinference system for digital modulation recognition. Experts system withapplications, Vol. 35, pp. 90-101.

AVCI, E., HANBAY, D. and VAROL, A. (2007) An expert discrete waveletadaptive network based fuzzy inference system for digital modulationrecognition. Experts system with applications, Vol. 33, pp. 582-589.

AZZOUZ, E.E. and NANDI, A.K. (1995) Automatic recognition of digitalmodulations. Signal Processing, Vol. 47, No. 1, pp. 55-69.

AZZOUZ, E.E. and NANDI, A.K. (1996) Automatic modulation recognition ofcommunication signals. Kluwer Academic Publishers.

AZZOUZ, E.E. and NANDI, A.K. (1996a) Automatic modulation recognition ofcommunication signals. Kluwer Academic Publishers.

AZZOUZ, E.E. and NANDI, A.K. (1996b) Procedure for automatic recognitionof analogue and digital modulations. IEEE Proceedings onCommunications, Vol. 143, No. 5, pp. 259-266.


AZZOUZ, E.E. and NANDI, A.K. (1997a) Automatic modulation recognition-I.Journal of the Franklin Institute, Vol. 33(4B), No. 2, pp. 241-273.

AZZOUZ, E.E. and NANDI, A.K. (1997b) Automatic modulation recognition-II.Journal of the Franklin Institute, Vol. 33(4B), No. 2, pp. 275-305.

BEIDAS, B. F. and Weber, C. L. (1995) Higher-order correlation-basedapproach to modulation classification of digitally frequency-modulatedsignals. IEEE Journal on Selected Areas in Communications, Vol. 13,pp. 89-101.

BEIDAS, B. F. and Weber, C. L. (1998) Asynchronous classification of MFSKsignals using the higher order correlation domain. IEEE Transactionson Communications, Vol. 46, pp. 480-493.

BEIDAS, B.F. and WEBER, C.L. (1996) Higher-order correlation-basedclassification of asynchronous MFSK signal. IEEE MILCOM, pp. 1003-1009.

BLOCK, H.W. and FANG, Z. (1988) A multivariate extension of Hoeffding’sLemma. The Annals of Probability, Vol. 16, No. 4, pp.1803-1820.

BOUTTE, D. and SANTHANAM, B. (2009) A feature weighted hybrid ICA-SVMapproach to Automatic modulation recognition. IEEE DSP & SPEducation Workshop, pp. 399-403.

CAI, Q., WEI, P. and XIAO, X. (2004) A digital modulation recognition method.ICCCAS, Vol. 2, pp. 863-866.

CHAITHANYA, V and REDDY, V.U. (2010) Blind modulation classification inthe presence of carrier frequency offset. International conference onsignal processing and communication, pp. 1-5.

CHENG, L. and LIU, J. (2014) An optimal neural network classifier forAutomatic Modulation Recognition. TELKOMNIKA indonessain journalof Electrical Engineering, Vol. 12, No. 2, pp. 1343-1352.

CHUGG, K.M., LONG, C.S. and POLYDOROS, A. (1995) Combined likelihoodpower estimation and multiple hypothesis modulation classification.ASILOMAR, pp. 1137-1141.

DAI, W., WANG, Y. and WANG, J. (2002) Joint power estimation andmodulation classification using second- and higher statistics. WCNC,Vol. 1, pp. 155-158.

DESIMIO P.M. and PRESCOTT, G.E. (1988) Adaptive generation of decisionfunctions for classification of digitally modulated signals. NAECON, pp.1010-1014.

DOBRE, O.A, NESS, B.Y, and SU. W. (2003) Higher-order cyclic cummulantsfor high order modulation classification. IEEE MILCOM, vol. 1, pp. 112–117.


DOBRE, O.A., ABDI, A., NESS, B.Y. and SU, W. (2005) Selection combiningfor modulation recognition in fading channels. IEEE MILCOM, pp. 1-7.

DOBRE, O.A., ABDI, A., NESS, B.Y. and SU, W. (2007) Survey of automaticmodulation classification techniques: classical approaches and newtrends. IET Communications, Vol.1, No. 2, pp. 137-156.

DOBRE, O.A., ABDI, A., NESS, Y.B. and SU, W. (2010) Cyclostatoinaritybased modulation classification of linear digital modulations in flatfading channels. Wireless Personal Communications, Vol. 54, pp. 699-717.

DOBRE, O.A., and Hameed, F. (2006) Likelihood-based algorithms for lineardigital modulation classification in fading channel. IEEE CCECE.

DOBRE, O.A., NESS, B.Y. and SU, W. (2004) On the classification of linearlymodulated signals in fading channel. CISS Conference.

DOBRE, O.A., NESS, B.Y. and SU, W. (2004) Robust QAM modulationclassification algorithm based on cyclic cummulants. WCNC, pp. 745-748.

DOBRE, O.A., RAJAN, S. and INKOL, R. (2009) Joint signal detection andclassification based on first order cyclo-stationarity for cognitive radios.EURASIP Journal on Advances in signal processing, Vol. 2009.

DULEK, B., OZDEMIR, O., VARSHNEY, P.K. and SU, W. (2014) A novelapproach to dictionary construction for Automatic modulationclassification. Journal of the Franklin Institute, Vol. 351, pp. 2991-3012.

EBRAHIMZADEH, A. (2009) Automatic modulation recognition using RBFNNand efficient features in fading channel. 1st International conference onNetworked Digital Technologies, pp. 485-488.

EBRAHIMZADEH, A. (2012) A novel method for automatic modulationrecognition. Applied Soft Computing, Vol. 12, pp. 453-461.

EBRAHIMZADEH, A. and GHAZALIAN, R. (2011) Blind digital modulationclassification in software radio using optimized classifier and featuresubset selection. Journal of engineering applications of ArtificialIntelligence, Vol. 24, No. 1, pp. 50-59.

EBRAHIMZADEH, A. and ZADEH, M.H. (2011) A novel method using GAbased clustering and spectral features for modulation classification.IEEE, pp. 4705-4708.

EBRAHIMZADEH, A., AZIMI, H. and NAEEMI, H.M. (2010) Classification ofcommunication signals using an optimized classifier and efficientfeatures. Arabian Journal of Science and Engineering, Vol. 35, No. 1B,pp. 225-235.

ELDEMERDASH, Y.A., MAREY, M., DOBRE, O.A., KARAGIANNDIS, G.K.and INKOL, R. (2013) Fourth order statistics for Blind classification of


spatial multiplexing and alamouti space time block code signals. IEEEtransactions on Communications, Vol. 61, No. 6.

GANDETTO, M., GUAINAZZO, M. and REGAZZONI, C.S. (2014) Use of timefrequency analysis and neural networks for mode identification in awireless software defined radio approach. EURASIP Journal on appliedsignal processing, pp. 1778-1790.

GARDNER, W.A. and SPOONER, C.M. (1992) Signal interception:Performance advantages of cyclic-feature detectors. IEEETransactions on Communications, Vol. 40, No. 1, pp. 149–159.

GHAURI, S.A., QURESHI, I.M., MALIK, A.N., CHEEMA T.A. (2014) A novelmodulation classification approach using Gabor filter network. TheScientific World Journal (TSWJ), pp. 1-14.

GHAURI, S.A., QURESHI, I.M. (2015) M-PAM signals classificationusing modified Gabor filter network. Mathematical Problems inEngineering, pp. 1-10.

GHAURI, S.A., QURESHI, I.M., BASIR, S., DIN, H. (2014) ModulationClassification using Spectral Features on Fading Channels. ScienceInternational, Vol. 27, No. 1, pp.147-153.

GHAURI, S.A., QURESHI, I.M., SHAH, I., KHAN, N. (2014) ModulationClassification using Cyclo-stationary Features on Fading Channels.Research Journal of Applied Sciences Engineering & Technology(RJASET), Vol. 7, No. 24, pp-5331-5339.

GHAURI, S.A., QURESHI, I.M., AZIZ, A., CHEEMA T.A. (2014) Classificationof Digital Modulated Signals using Linear Discriminant Analysis onFaded Channel. World Applied Sciences Journal, Vol. 29, No. 10, pp.1220-1227.

GHAURI, S.A., QURESHI, I.M., MALIK, A.N., CHEEMA T.A. (2014) AutomaticDigital Modulation Classification Technique using Higher OrderCummulants on Faded Channels. J. Basic. Appl. Sci. Res., Vol. 4, No.3, pp. 1-12.

GHAURI, S.A., QURESHI, I.M., MALIK, A.N., CHEEMA T.A. (2013) HigherOrder Cummulants based Digital Modulation Classification Scheme.Research Journal of Applied Sciences Engineering & Technology(RJASET), Vol. 6, No. 20, pp.3910-3915.

GHAURI, S.A., QURESHI, I.M. (2013) Automatic Classification of DigitalModulated Signals using Linear Discriminant Analysis on AWGNChannel. IJICTT, Vol. 1, No. 1, pp. 1-4.

GULDEMIR, H. and SENGUR, A. (2007) Online modulation recognition ofanalog communication signals using neural network. Experts Systemwith Applications, Vol. 33, No. 1, pp. 206-214.


HAMEED, F., DOBRE, O.A. and POPESCU, D.C. (2009) On the likelihoodbased approah to modulation classification. IEEE Transactions onwireless communications, Vol. 8, No. 12, pp. 5884-5892.

HASSAN, K., DAYOUB, I., HAMOUDA, W. and BERBINEAU, M. (2010)Automatic modulation recognition using wavelet transform and neuralnetworks in wireless systems. EURASIP Journal on Advances in signalprocessing, Vol. 2010, pp. 1-13.

HAYKIN, S. (2005) Cognitive radio: brain-empowered wirelesscommunications. IEEE Journal on Selected Areas in Communications,Vol. 23, No. 2, pp. 201-220.

HAYKIN, S. (2008) Neural Networks and Learning Machines. 3rd Edition,ISBN-13: 978-0131471399, ISBN-10: 0131471392.

HE, T. and ZHOU, Z. (2008) Classification of modulated signals usingmultifractal features. Journal of the Chinese institute of Engineers, Vol.31, No. 2, pp. 335-338.

HO, K.C., PROKOPIW, W. and CHAN, Y. T. (1995) Modulation identificationby the wavelet transform. IEEE MILCOM, pp. 886-890.

HO, K.C., PROKOPIW, W. and CHAN, Y. T. (2000) Modulation identificationof digital signals by the wavelet transform. IEEE Transactions on Radar,Sonar and Navigation, Vol. 47, pp. 169-176.

HONG, L. and HO, K. C. (2002) An antenna array likelihood modulationclassifier for BPSK and QPSK signals. IEEE MILCOM, pp. 647-651.

HONG, L. and HO, K.C. (2003) Classification of BPSK and QPSK signals withunknown signal level using the Bayes technique. IEEE ISCAS, pp. 41-44.

HONG, L., and HO, K.C. (1999) Identification of digital modulation types usingthe wavelet transform. IEEE MILCOM, Vol. 1, pp. 427-431.

HSUE, S.Z. and SOLIMAN, S.S. (1989) Automatic modulation recognition ofdigitally modulated signals. IEEE MILCOM, pp. 645-649.

HSUE, S.Z. and SOLIMAN, S.S. (1990) Automatic modulation classificationusing zero-crossing. IEEE radar sonar navigation, Vol. 137, pp. 459-464.

HUANG, C.Y. and POLYDOROS, A. (1995) Likelihood methods for MPSKmodulation classification. IEEE Transactions on Communications, Vol.43, No. 2/3/4, pp. 1493-1504.

KAVALOV, D. (2001) Improved noise characteristics of a saw artificial neuralnetwork RF signal processor for modulation recognition. IEEEUltrasonic Symposium, Vol. 1, pp. 19-22.

KEBRYA, A.R., KIM, I.M., KIM, D.I., CHAN, F. and INKOL, R. (2013)Likelihood based modulation classification for multiple antenna


receiver. IEEE transactions on Communications, Vol. 61, No. 9, pp.3816-3829.

KETTERER, H., JONDRAL F. and COSTA, H.A. (1999) Classification ofmodulation modes using time-frequency methods. ICASSP, Vol. 5, pp.2471-2474.

KHARBECH, S., DAYOUB, I., COLIN, M. Z., SIMON, E.P. and HASSAN, K.(2014) Blind digital modulation identification for time selective MIMOchannels. IEEE wireless communication letters, Vol. 3, No. 4, pp. 373-376.

KIM, K. and POLYDOROS, A. (1988) Digital modulation classification: theBPSK versus QPSK case. IEEE MILCOM, pp. 431-436.

KIM, K., AKBAR, I.A., BAE, K.K., UM, J.S., SPOONER, C.M. and REED, J. H.(2007) Cyclo-stationary approaches to signal detection andclassification in cognitive radio. IEEE DySpan, pp. 212–215.

LANJUN. Q. and CANYAN. Z. (2010) Modulation classification based on cyclicspectral features and neural network. International Congress on Image& Signal Processing, vol. 8, pp. 3601-3605.

LENIR, V., VANWATERSHOOT, T., MOONEN, M. and DUPLICY, J. (2009)Blind CP-OFDM and ZP-OFDM parameter estimation in frequencyselective channels. EURASIP Journal on wireless communication andnetworking, Vol. 2009.

LI, H., DOBRE, O.A., NESS, B.Y. and SU, W. (2005) Quasi-hybrid likelihoodmodulation classification with nonlinear carrier frequency offsetsestimation using antenna arrays. IEEE MILCOM, pp. 1-6.

LIKE, E., CHAKRAVARTHY, V.D., RATAZZI, P. and WU, Z. (2009) SignalClassification in fading channel using cyclic spectral analysis. EURASIPJournal on wireless communication and networking, vol. 2009, pp. 1-14.

LIN, Y.C. and IBOB, C.C.J. (1997) Classification of quadrature amplitudemodulated (QAM) via sequential probability ratio test (SPRT). SignalProcessing, Vol. 60, pp. 263-280.

LING, L.Y., BING, L.B. and YI, C.Y. (2010) Modulation classification of M-QAMsignals using particle swarm optimization and substrative clustering.ICSP, pp. 1537-1540.

LIU, P. and SHUI, P.L. (2014) Digital modulation classifier with rejection abilityvia greedy convex hull learning and alternative convex hull shrinkage infeature space. IEEE transactions on wireless communication, Vol. 13,No. 5, pp. 2683-2695.

LIU, Y., SIMEONE, O., HAIMOUICH, A.M. and SU, W. (2014) Modulationclassification via Gibbs Sampling based on a Latent Dirichlet BayesianNetwork. IEEE signal processing letters, Vol. 21, No. 9, pp. 1135-1139.


LONG, C., CHUGG, K. and POLYDOROS, A. (1994) Further results inlikelihood classification of QAM signals. IEEE MILCOM, pp. 57-61.

LOPATKA, J. and PEDZISZ, M. (2000) Automatic modulation classificationusing statistical moments and a fuzzy classifier. Signal ProcessingProceedings, WCCCICSP, Vol. 3, pp. 1500-1506.

LUO, M.G., LI, L., QIAN, G. and LU, J. (2014) A blind modulation identificationalgorithm for STBC systems using multidimensional ICA. Concurrencyand Computation: Practice and Experience, Vol. 26, No. 8,pp. 1490-1505.

MARCHAND, P., LACOUME, J.L. and MARTRET, C.L. (1997) Classificationof linear modulations by a combination of different orders cycliccummulants. IEEE Signal Processing Workshop on Higher-OrderStatistics, pp. 47–51.

MARCHAND, P., LACOUME, J.L. and MARTRET, C.L. (1998) Multiplehypothesis classification based on cyclic cummulants of differentorders. ICASSP, pp. 2157-2160.

MARCY, M. and DOBRE, O.A. (2014) Blind modulation classification for singleand multiple antenna system over frequency selective channels. IEEEsignal processing letters, Vol. 21, No. 9, pp. 1098-1102.

MARTRET, C. and BOITEAU, D.M. (1997) Modulation classification by meansof different order statistical moments. IEEE MILCOM, pp. 1387-1391.

MIRARAB, M.R. and SOBHANI, M.A. (2007) Robust modulation classificationfor PSK/QAM/ASK using higher order cummulants, 6th InternationalConference on Information, Communications & Signal Processing, pp.1-4.

MITOLA, J and MAGUIRE, G.Q. (1999) Cognitive radio: making softwareradios more personal. IEEE Personal Communication Magazine, Vol.6, No. 4, pp. 13–18.

MITOLA, J, III, (2000) Cognitive radio: An integrated agent architecture forsoftware defined radio, Phd Thesis, KTH, Stockholm.

MOBASSERI, B.G. (1999) Constellation shape as a robust signature for digitalmodulation recognition. IEEE MILCOM, Vol. 1, pp. 442-446.

MOBASSERI, B.G. (2000) Digital modulation classification using constellationshape. Signal Processing, Vol. 80, No. 2, pp. 251-277.

MOHAMMADI, E.S. and POUR, M.N. (2013) Blind modulation classificationover fading channels using expectation maximization. IEEEcommunication letters, Vol. 17, No. 9, pp. 1692-1695.

MUSTAFA, H. and DOROSLOVACKI, M. (2004) Digital modulation recognitionusing support vector machine classifier. Thirty-Eighth ASILOMARConference on Signals, Systems and Computers, Vol. 2, pp. 2238-2242.


NAGY, P.A.J., (1994) A modulation classifier for multi-channel systems andmulti transmitter situations. IEEE MILCOM 94, Vol. 3, pp. 816-820.

NANDI, A.K. and AZZOUZ, E.E. (1995) Automatic analogue modulationrecognition. Signal Processing, Vol. 46, No. 2, pp. 211-222.

NANDI, A.K. and AZZOUZ, E.E. (1998) Algorithms for automatic modulationrecognition of communication signals. IEEE Transactions onCommunications, Vol. 46, No. 4, pp. 431-436.

ORLIC, V.D. and DUKIC, M.L. (2010) Multipath channel estimation algorithmfor automatic modulation classification. Electronics Letters, Vol. 46, No.19, pp. 1348-1349.

ORLIC, V.D. and DUKIC, M.L. (2012) Automatic modulation classification:Sixth order cummulants feature as a solution for real world challenges.20th Telecommunication forum TELEFOR, pp. 1-8.

ORLIC. V. D and DUKIC. M. L. (2009) Automatic modulation classificationalgorithm using higher-order cummulants under real-world channelconditions. IEEE communication letters, Vol. 13, No. 12, pp. 917-919.

PANAGIOTOU, P., ANASTASOPOULOS, A. and POLYDOROS, A. (2000)Likelihood ratio tests for modulation classification. IEEE MILCOM, Vol.2, pp.670-674.

PARK, C.S and KIM, D.Y (2006) A novel robust feature of modulationclassification for reconfigurable software radio. IEEE Transactions onConsumer Electronics, Vol. 52, pp. 1193-1200.

POLYDOROS, A and Kim, K. (1990) On the detection and classification ofquadrature digital modulations in broad-band noise. IEEE Transactionson Communications, Vol. 38, No. 8, pp. 1199-1211.

POPOOLA, J.J. (2014) Automatic recognition of both inter and intra classes ofdigital modulated signals using artificial neural network. Journal ofEngineering Science and Technology, Vol. 9, No. 2, pp. 273-285.

POPOOLA, J.J. and ADELOYE, V.S.A. (2007) A study of effectiveness ofspace transmits diversity technique for combating signal fading inmobile communication system in Nigeria. FUTA International Journal ofEngineering and Engineering Technology (FUTAJEET), Vol. 5, No. 2,pp. 77-82.

POPOOLA, J.J. and OLST, R.V. (2011) A novel modulation sensing method.IEEE vehicular technology magazine, pp. 60-69.

POPOOLA, J.J. and OLST, R.V. (2011) Automatic classification of combinedanalog and digital modulation schemes using feed forward neuralnetwork. IEEE Africon, pp.1-6.

POPOOLA, J.J. and VANOLST, R. (2011) Automatic recognition of analogmodulated signals using artificial neural networks. Journal of ComputerTechnology and Applications, Vol. 2, No. 1, pp. 29-35.


POPOOLA, J.J. and VANOLST, R. (2011) Cooperative sensing reliabilityimprovement for primary radio signal detection in cognitive radioenvironment. Southern Africa Telecommunication Networks andApplications Conference 2011 (SATNAC 2011), pp. 131-136.

POPOOLA, J.J. and VANOLST, R. (2011) Novel modulation sensing methodas a remedy for uncertainty around the practical use of cognitive radiotechnology. 26th Wireless World Research Forum 2011 (WWRF 2011),pp.11-13.

PRAKASAM, P. and MADHESWARAN, M. (2008) Digital modulationidentification using wavelet transform and statistical parameters.Journal of computer system, networks and communications, Vol. 2008,pp. 1-8.

PRAKASAM, P. and MADHESWARAN, M. (2008) M-ary shift keyingmodulation scheme identification algorithm using wavelet transform andhigher order statistical moment. Journal of Applied Sciences, Vol. 8, No.1, pp. 112-119.

PUENGNIM, A., THOMAS, N., TOURNERET, J.Y. and VIDAL, J. (2007)Hidden markov models for digital modulation classification in unknownISI channels. 15th European signal processing conference, pp. 1882-1886.

PUENGNIM, A., THOMAS, N., TOURNERET, J.Y. and VIDAL, J. (2008)Classification of linear and nonlinear modulations using the Baum-Welch algorithm. 16th European signal processing conference, pp. 25-29.

PUENGNIM, A., THOMAS, N., TOURNERET, J.Y. and VIDAL, J. (2010)Classification of linear and nonlinear modulation using the Baum-Welchalgorithms and MCMC methods. Signal Processing, Vol. 90, No. 12, pp.3342-3355.

QAIN, S. and CHEN, D. (1999) Joint time frequency analysis. SignalProcessing Magazine, Vol. 16, pp. 52-67.

RAMKUMAR, B. (2009) Automatic modulation classification for cognitiveradios using cyclic feature detection. IEEE circuits and systemmagazine, pp. 27-45.

RAMKUMAR, B. (2011) Automatic modulation classification and blindequalization for Cognitive radios, Phd Thesis, Blacksburg, Virginia.

OZDEMIR, O., LI, R. and VARSHNEY, P.K. (2013) Hybrid maximum likelihoodmodulation classification using multiple radios. IEEE communicationletters, Vol. 17, No. 10, pp. 1889-1892.

SANDERSON, J., LI, X., LIU, Z. and WU, Z. (2013) Hierarchical blindmodulation classification for underwater acoustic communication signalvia cyclostaionary and maximal likelihood analysis. IEEE MILCOMM,Vol. 29, No. 34.


SAPIANO, P.C. and MARTIN, J.D. (1996) Maximum likelihood PSK classifier.ICASSP, pp. 1010-1014.

SAPIANO, P.C., MARTIN, J. and HOLBECHE, R. (1995) Classification of PSKsignals using the DFT of phase histogram. ICASSP, pp. 1868-1871.

SAYAN, T.G., LEBLEBICIOGLU, K. and INCE, T. (1999) Electromagnetictarget classification using time-frequency analysis and neural networks.Micro wave Opt. Technology letters, Vol. 21, pp.63-69.

SERGIENKO, A.B. and OSIPOV, A.V. (2014) Digital modulation recognitionusing circular harmonic approximation of likelihood function. IEEEinternational conference on acoustic, speech and signal processing(ICASSP), pp. 3460-3463.

SHI, Q., GONG, Y. and GUAN, Y. L. (2011) Modulation classification forasynchronous high-order QAM signals. Wireless Communications andMobile Computing, pp. 1415–1422.

SHI, Y. and ZHANG, D.X. (2001) A gabor atom network for signal classificationwith application in radar target recognition. IEEE Transactions on SignalProcessing, Vol. 49, pp. 2994-3004.

SILLS, J.A. (1999) Maximum-likelihood modulation classification forPSK/QAM. IEEE MILCOM, Vol.1, pp. 217-220.

SOLIMAN, S.S. and HSUE, S.Z. (1992) Signal classification using statisticalmoments. IEEE Transactions on Communications, Vol. 40, No. 5, pp.908-916.

SPOONER, C.M. (1995) Classification of co-channel communication signalsusing cyclic cummulants. ASILOMAR, pp. 531-536.

SPOONER, C.M., BROWN, W.A. and YEUNG, G.K. (2000) Automatic radio-frequency environment analysis. ASILOMAR, pp. 1181-1186.

SU, W. (2013) Feature space analysis of modulation classification using veryhigher order statistics. IEEE communication letters, Vol. 17, No. 9, pp.1688-1691.

SU, W., XU, J.L. and ZHOU, M. (2008) Real time modulation classificationbased on Maximum likelihood. IEEE communication letters, Vol. 12, No.11, pp. 801-803.

SWAMI, A. and SADLER, B.M. (2000) Hierarchical digital modulationclassification using cummulants. IEEE Transactions onCommunications, Vol. 48, No. 3, pp. 416-429.

SWAMI, A., BARBAROSSA, S. and SADLER, B. (2000) Blind sourceseparation and signal classification. ASILOMAR, pp. 1187-1191.

TREES, H. L. V. (2001) Detection, Estimation and Modulation Theory- Part I.Wiley.


VALIPOUR, M.H., HOMAYOUNPOOR, M.M. and MEHRALIAN, M.A. (2012)Automatic Digital Modulation recognition in the presence of noise usingSVM and PSO. 6th International Symposium on Telecommunication,pp. 378-382.

WANG, F. and WANG, X. (2010) Fast and robust modulation classification viaKolomogorov-Smirnov test. IEEE Transactions on Communication, Vol.58, No. 8, pp. 2324-2332.

WEI, W. and MENDEL, J. M. (2000) Maximum-likelihood classification fordigital amplitude-phase modulations. IEEE Transactions onCommunications, Vol. 48, pp. 189-193.

WONG, M. L. D and NANDI. A. K. (2004) Automatic digital modulationrecognition using artificial neural network and genetic algorithm. SignalProcessing, Vol. 84, pp. 351-365.

WONG, M.L.D. and NANDI, A.K. (2001) Automatic digital modulationrecognition using spectral and statistical features with multi-layerperceptron’s. Sixth International Symposium on Signal Processing andits Applications, Vol. 2, pp. 390-393.

WONG, M.L.D. TING, S.K. and NANDI, A.K. (2008) Navie bayes classificationof adaptive broad band wireless modulation types with higher ordercummulants. International Conference on signal processing andcommunication systems.

WU, H, SAQUIB. M and YUN. Z. (2008) Novel automatic modulationclassification using cummulants features for communications viamultipath channels. IEEE transactions on Wireless Communications,Vol. 7, pp. 3098-3105.

XI, S. and WU, H.C. (2006) Robust automatic modulation classification usingcummulants features in the presence of fading channels. IEEE wirelesscommunication and networking conference, Vol. 4, pp. 2094-2099.

XIANCI, L.M.X and LEMING, L. (1996) Cyclic spectral features basedmodulation recognition. ICCT, Vol. 2, pp. 792-795.

YANG, Y. and LIU, C.H. (1998) An asymptotic optimal algorithm for modulationclassification. IEEE Communication Letters, Vol. 2, pp. 117-119.

YANG, Y. and SOLIMAN, S.S. (1991) Optimum classifier for M-ARY PSKsignals. IEEE MILCOM, Vol. 3, pp. 1693-1697.

YANG, Y. and SOLIMAN, S.S. (1991) Statistical moments based classifier forMPSK signals. GLOBECOM, Vol.1, pp. 72-76.

YANG, Y. and SOLIMAN, S.S. (1997) A suboptimal algorithm for modulationclassification. IEEE Transactions on Aerospace and ElectronicSystems, Vol. 33, pp. 38-45.

YAQIN, Z., GUANGHUI, R., XUEXIA, W., ZHILU, W. and XUEMAI, G. (2003)Automatic digital modulation recognition using artificial neural networks.


IEEE International Conference on Neural Networks and Signalprocessing, pp. 257-260.

YE, Y. and WENBO, M. (2007) Digital modulation classification usingmultilayer perceptron and time-frequency features. Journal of systemsengineering and electronics, Vol. 18, No. 2, pp. 249-254.

YEUNG, G.K. and GARDNER, W. A. (1996) Search efficient methods ofdetection of cyclo-stationary signals. IEEE Transactions on SignalProcessing, Vol. 44, No. 5, pp. 1214–1223.

YU, Z., SHI, Y.Q. and SU, W. (2003) M-ary frequency shift keying signalclassification based on discrete Fourier transform. IEEE MILCOM, pp.1167-1172.

YUCEK, T. and ARSLAN, H. (2004) A novel sub-optimum maximum-likelihoodmodulation classification algorithm for adaptive OFDM systems. IEEEWireless Communications and Networking Conference, Vol. 2, pp. 739–744.

ZADEH, A.E., SEYEDIN, S.A. and DEHGHAN, M. (2006) An intelligent methodfor modulation type identification. 3rd International Conference onMobile Technology, Applications and Systems, pp. 1-4.

ZAERIM, M. and SEYFE, B. (2012) Multiuser modulation classification basedon cummulants in AWGN channel. IET Signal Processing, Vol. 6, No.9, pp. 815-823.

ZENG, D., ZEND, X., CHENG, H. and TANG, B. (2012) Automatic modulationclassification using the Rihaczek distirbution and Hough transform. IETRadar Sonar Navigation, Vol. 6, No. 5, pp. 322-333.

ZHANG, Y., ANSARI, N. and SU, W. (2013) Multisensor signal fusion basedmodulation classification by using wireless sensor networks. Wirelesscommunication and mobile computing.

ZHOU, X., WU, Y. and YANG, B. (2010) Signal classification method basedon SVM and higher order cummulants. Wireless Sensor Networks, Vol.2, pp. 48-52.

ZHU, F., ZHANG, X.D. and HU, Y.F. (2009) Gabor filter approach to jointfeature extraction and target recognition. IEEE transactions onAerospace and Electronic Systems, Vol. 45, No. 1, pp. 17-30.

ZHU, Z. and NANDI, A. K. (2014) Blind digital modulation classification usingminimum distance centroid estimator and non-parametric likelihoodfunction. IEEE transactions on wireless communication, Vol. 13, No. 8,pp. 4483-4494.

ZHU, Z., ASLAM, M. W. and NANDI, A.K. (2014) Genetic algorithm optimizeddistribution sampling test for M-QAM modulation classification. SignalProcessing, Vol. 94, pp. 264-277.