
Page 1: Sparse Regularization Approach ... - supcom.mincom.tn

University of Carthage

Higher School of Communications of Tunis (SUP’COM)

Doctoral School of Information and Communication Technologies (ICT) of SUP’COM

Sparse Regularization Approach

Application to Wireless Communications

A thesis presented in fulfilment of the requirements for the degree of

Doctor of Philosophy in

Information and Communication Technologies

By

Zakia Jellali

Defended on the 24th of January 2017 before the committee composed of:

Chair Prof. Monia Turki Professor at ENIT, Tunisia

Reviewers Prof. Benoit Geller Professor at ENSTA ParisTech, France

Dr. Ines Kammoun Jemal Associate Professor at ENIS, Tunisia

Examiner Prof. Pascal Larzabal Professor at ENS Cachan, France

Supervisor Dr. Leïla Najjar Atallah Associate Professor at SUP’COM, Tunisia

Co-Supervisor Prof. Sofiane Cherif Professor at SUP’COM, Tunisia

COmmunication, Signaux et IMages (COSIM) Research laboratory

Academic year: 2016/2017


Acknowledgements

Writing this thesis involved far more than typing words on a computer: it not only reflects the experience of carrying a complete scientific research project to its end, but also marks the first milestone of my research career. Over the past four years, I have learned how to read related works and extract and analyze their main ideas, how to identify the drawbacks and limitations of proposed methods in order to improve and extend them, and how to propose new ideas in a specific research area.

This work was carried out as part of the PhD program at the COmmunication, Signaux et IMages (COSIM) laboratory, SUP’COM, University of Carthage, under the supervision of Dr. Leïla Najjar Atallah, associate professor at SUP’COM, and Prof. Sofiane Cherif, professor at SUP’COM.

First of all, my grateful thanks go to my supervisors for their support and encouragement throughout my PhD. I express my gratitude to them for their quality leadership, valuable guidance, and the fruitful discussions that enabled me to overcome the many complexities of the research topic. I also wish to thank them for sharing their outstanding expertise and knowledge, for their patience, and for continuously offering precious advice and help.

I would like to express my very deep appreciation to all my committee members for their insightful comments and constructive feedback on my dissertation work and research directions, their kind flexibility, their concern, and their valuable time: Prof. Monia Turki, professor at ENIT, as chair; Prof. Benoit Geller, professor at ENSTA ParisTech, and Dr. Ines Kammoun Jemal, associate professor at ENIS, as reviewers; and Prof. Pascal Larzabal, professor at ENS Cachan, as examiner.

I am thankful to my wonderful friends and all the members of the COSIM team at SUP’COM (permanent members and PhD students) for their support and for providing a stimulating and fun environment. I would like to thank Miss Donia Lassaoud for her useful suggestions and warm-hearted help. My thanks also go to all the SUP’COM staff for their availability and assistance.

My deepest gratitude goes to my parents, to whom I dedicate this work, for their encouragement, care, and sacrifice. My every little success would have been impossible without their prayers. I would also like to give special thanks to my brothers and sisters. They all taught me the value of knowledge and provided me with constant support, encouragement, and unconditional love. Thank you all very much!

Finally, I am grateful to all the people who have supported and encouraged me and who have contributed to the realization of this project.


Abstract

This thesis falls within the scope of sparse representation. Compressed Sensing (CS) theory defines a framework in which signals that have a sparse structure in a given basis can be estimated uniquely, and with improved accuracy, from an incomplete set of measurements. Our main goal is to exploit the potential of CS to improve the performance of wireless communication systems.

The first part of this thesis deals with the recovery and tracking of sparse signals in known bases. In this context, the problem of rare event detection and counting in Wireless Sensor Networks (WSN) is considered. We first address a small-scale WSN scenario in which, under the rare-events hypothesis, the number of targets per cell forms a sparse vector with discrete integer components. New approaches based on the greedy principle and on the Orthogonal Matching Pursuit (OMP) algorithm are proposed for target detection and counting. We then focus on large-scale WSN, where large-scale fading severely affects the transmitted signal. To improve detection performance in this setting, we propose collaborative schemes that control the transmit power of certain targets based on a CS coherence criterion. We also consider the tracking of a continuous-valued sparse parameter under the assumption that its support varies slowly in time. The envisaged application is the tracking of the sparse Channel Impulse Response (CIR) in OFDM systems, where the slow time variation of the support and the sparse CIR structure are exploited to improve channel estimation accuracy. A new scheme is suggested that combines delay subspace tracking by Kalman filtering with an adaptive CIR support tracking procedure. The proposed approaches outperform tracking approaches that do not account for channel sparsity. In addition, we consider the problem of optimized pilot subcarrier placement in OFDM systems to increase spectral efficiency. In this context, we propose a new scheme that iteratively finds a near-optimal pilot pattern in a forward manner using a tree-based structure. Besides improving performance, the proposed forward scheme noticeably reduces the computational load compared to former schemes.

The second part of this thesis focuses on a more general framework in which the observed signal is a priori non-sparse yet compressible in a basis that remains to be optimized. The chosen application is that of spatially correlated 2D WSN measurements. In this framework, in addition to conventional sparsity-inducing transforms, we propose a new technique based on Linear Prediction Coding (LPC) that exploits the spatial correlation of the data. This technique yields a sparsifying transform that enables the application of CS, thus guaranteeing recovery of the original signal from a reduced number of sensor readings. A 1D reading of the network is considered first; a 2D scenario, better suited to exploiting the spatial correlation, is then envisaged.

Keywords: Compressed Sensing, sparsity, coherence, Greedy, small-scale WSN, large-scale WSN, power control, OFDM systems, tracking, Kalman filter, optimized pilot allocation, spatial correlation, LPC, 2D WSN readings.
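The greedy recovery machinery referred to throughout (OMP and its variants) can be illustrated with a short, self-contained sketch. This is not the thesis's GOMP/2S-GOMP procedure, only plain OMP on a toy problem; the dimensions N = 64, M = 25, K = 4 echo the simulation settings quoted in the list of figures, and the Gaussian sensing matrix and unit target counts are assumptions made for the example:

```python
import numpy as np

def omp(A, y, K):
    """Greedy OMP: select K columns of A that best explain y."""
    M, N = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    coef = np.zeros(0)
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, K = 64, 25, 4                             # cells, measurements, active targets
A = rng.standard_normal((M, N)) / np.sqrt(M)    # assumed Gaussian sensing matrix
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = 1.0   # K rare events (one target per cell)
y = A @ x                                       # noiseless aggregated measurements
x_hat = omp(A, y, K)
print(np.count_nonzero(x_hat))                  # size of the recovered support
```

Each iteration selects the column most correlated with the residual, refits by least squares on the selected support, and stops after K atoms; rounding the recovered amplitudes to integers would mirror the discrete-counting setting considered in Chapter 3.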


Résumé

This thesis addresses sparse representations. In this framework, Compressed Sensing (CS) theory makes it possible to estimate, uniquely and with improved accuracy, signals that have a sparse structure in a given incomplete basis. Our main objective is to exploit the potential of CS to improve performance in wireless communication systems. The first part of the thesis deals with the reconstruction and tracking of sparse signals in known bases. In this context, the problem of detecting and counting rare events in wireless sensor networks (WSN) is considered. Under the rare-events hypothesis, the number of targets per cell forms a sparse vector with discrete components. We first address the small-scale WSN scenario. New approaches based on the greedy principle and the Orthogonal Matching Pursuit (OMP) algorithm are proposed for target detection and counting. We then consider large-scale WSN, in which path loss severely affects the transmitted signal. To improve detection performance in these networks, we propose collaborative schemes that control the transmit power of certain targets based on the coherence criterion. Furthermore, we consider the tracking of continuous-valued sparse parameters under the assumption of a slowly time-varying support. This is applied to the tracking of the Channel Impulse Response (CIR) in OFDM systems, relying on the slow time variation of the sparse CIR structure. A new scheme is suggested that combines delay subspace tracking by Kalman filtering with an adaptive CIR support tracking procedure. The resulting performance improves on approaches that do not take the sparse CIR structure into account. We also consider the problem of optimizing the placement of pilot subcarriers in OFDM systems to increase spectral efficiency. In this context, we propose an iterative tree-structured scheme that optimizes the pilot subcarrier allocation; the proposed scheme markedly reduces the computational load compared to existing schemes. The second part of this thesis addresses a general framework in which the observed signal is a priori non-sparse but compressible in a basis to be optimized. The chosen application is that of spatially correlated 2D WSN measurements. Thus, in addition to classical transforms, we propose a new technique based on Linear Prediction Coding (LPC) that exploits the spatial correlation. This technique yields a sparsifying transform that enables the application of CS, thus guaranteeing recovery of the original signal from a reduced number of sensor readings. A 1D reading of the network is considered first; a 2D scenario, better suited to exploiting the spatial correlation, is then envisaged.

Keywords: Compressed Sensing, sparsity, coherence, Greedy, small-scale WSN, large-scale WSN, power control, OFDM systems, tracking, Kalman filter, optimized pilot allocation, spatial correlation, LPC, 2D WSN readings.


Contents

Acknowledgements i

Abstract iii

List of Figures xi

List of Tables xv

Abbreviations xvii

Notations xix

1 Introduction 1

2 Sparse Representation Approach and Compressed Sensing Theory: Applications in Wireless Communications 11

2.1 Compressed Sensing and Sparse Recovery . . . . . . . . . . . . . . . . . . . . . 11

2.1.1 Conventional compression . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.1.2 Compressed Sensing context . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.1.3 Performance guarantees . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.1.3.1 Sparsity concept . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.1.3.2 Design of CS matrices . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 Compressed Sensing Recovery Algorithms . . . . . . . . . . . . . . . . . . . . . 17


2.2.1 Original problem formulation . . . . . . . . . . . . . . . . . . . . . . . . 18

2.2.2 Relaxed formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.2.3 Greedy pursuit algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.3 Sparse Representation in Wireless Communications . . . . . . . . . . . . . . . . 23

2.3.1 Literature overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.3.2 Wireless sensor networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.3.2.1 WSN components . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.3.2.2 Wireless communication standards . . . . . . . . . . . . . . . . 25

2.3.2.3 WSN energy optimization . . . . . . . . . . . . . . . . . . . . . 26

2.4 Thesis Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3 Sparse Static Signal Recovery through Rare Events Detection and Counting in Small Scale WSN 29

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2 System Model Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.2.1 Network model and assumptions . . . . . . . . . . . . . . . . . . . . . . 31

3.2.2 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.2.3 Data aggregation schemes . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.3 Sparse Targets Detection Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.3.1 Large scale WSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3.3.2 Small scale WSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3.4 Proposed Rare Events Detection Algorithms . . . . . . . . . . . . . . . . . . . . 38

3.4.1 MP based algorithms for discrete parameter recovery . . . . . . . . . . . 38

3.4.1.1 Greedy MP (GMP) algorithm . . . . . . . . . . . . . . . . . . 38

3.4.1.2 Two-stage GMP (2S-GMP) algorithm . . . . . . . . . . . 39

3.4.1.3 Generalized GMP and 2S-GMP algorithms . . . . . . . . . . . 40

3.4.2 OMP based algorithms for discrete parameter recovery . . . . . . . . . . 41


3.4.2.1 Greedy OMP (GOMP) algorithm for discrete parameter recovery 42

3.4.2.2 Two-stage GOMP (2S-GOMP) algorithm . . . . . . . . . 44

3.5 Optimized Sensors Selection Schemes . . . . . . . . . . . . . . . . . . . . . . . . 46

3.6 Numerical Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.6.1 Parameters setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.6.2 Numerical results analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.6.2.1 Generalized versions performance . . . . . . . . . . . . . . . . 48

3.6.2.2 Performance of GOMP . . . . . . . . . . . . . . . . . . . . . . 49

3.6.2.3 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.6.3 Computational load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4 Power Control Mechanism for Sparse Static Events Detection in Large Scale WSN 59

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

4.2 Model Description and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2.1 System model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

4.3 Existing Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.4 Proposed Approaches for Power Control . . . . . . . . . . . . . . . . . . . . 65

4.4.1 Framework overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.4.2 Proposed power control mechanism based on distance (PCMD) . . . . . 68

4.4.2.1 PCMDg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.4.2.2 PCMDl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

4.4.3 Proposed power control mechanism based on sensors number (PCMSn) 71

4.4.3.1 PCMSng . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

4.4.3.2 PCMSnl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

4.5 Numerical Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


4.5.1 Simulation parameters setting . . . . . . . . . . . . . . . . . . . . . . . . 75

4.5.2 Numerical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5 Sparse Dynamic Signal Tracking: Application to Fast Fading OFDM Channel Estimation 85

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

5.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

5.2.1 Wireless channel characteristics . . . . . . . . . . . . . . . . . . . . . . . 86

5.2.2 Framework overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.2.2.1 Classical approaches for sparse OFDM channel estimation . . 89

5.2.2.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.3 Sparse Channel Estimation in OFDM Systems . . . . . . . . . . . . . . . . . . 90

5.3.1 System model description . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5.3.2 Structured LS channel estimation . . . . . . . . . . . . . . . . . . . . . . 91

5.4 CIR Tracking in Fast Fading Channel . . . . . . . . . . . . . . . . . . . . . . . 92

5.4.1 CIR support detection using thresholding . . . . . . . . . . . . . . . . . 93

5.4.2 CIR tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

5.4.3 Subspace tracking with Kalman filter . . . . . . . . . . . . . . . . . . . 95

5.4.4 CIR support tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.5 Pilots Allocation for Sparse Channel Estimation . . . . . . . . . . . . . . . . . 98

5.5.1 Mathematical model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

5.5.2 Pilots placement optimization . . . . . . . . . . . . . . . . . . . . . . . . 99

5.5.3 Proposed pilots allocation scheme . . . . . . . . . . . . . . . . . . . . . 101

5.6 Discussions and Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.6.1 CIR tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.6.2 OFDM pilot allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110


6 Robust Sparse Data Representation from 1D to 2D Processing Strategies: Application to WSN 113

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

6.2 1D Data Compression in WSN . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

6.2.1 1D CS for 2D signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

6.2.2 Survey on unsupervised sparsity inducing transformations in 1D . . . . 116

6.2.3 1D principal component analysis (1DPCA) method . . . . . . . . . . . . 118

6.3 Contribution in 1D Supervised Sparsifying Basis Learning: Linear Prediction Coding (LPC) Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 120

6.4 Numerical Results Analysis for 1D Processing . . . . . . . . . . . . . . . . . . . 123

6.4.1 Correlated data generation . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.4.2 Performance evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

6.4.3 Numerical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6.4.4 LPC basis relevance analysis . . . . . . . . . . . . . . . . . . . . . . . . 128

6.5 2D Separable Data Compression: Application to WSN . . . . . . . . . . . . . . 130

6.5.1 Sparsity of 2D signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

6.5.2 Predefined 2D transformation . . . . . . . . . . . . . . . . . . . . . . . . 133

6.5.3 2D separable principal component analysis technique . . . . . . . . . . . 133

6.6 Proposed 2D Separable Linear Prediction Coding Approach . . . . . . . . . . . 135

6.6.1 2D separable LPC: causal scenario . . . . . . . . . . . . . . . . . . . . . 137

6.6.2 2D separable LPC: noncausal scenario . . . . . . . . . . . . . . . . . . . 141

6.7 Discussions and Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . 143

6.7.1 Application to WSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

6.7.2 Data spatial correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

6.7.3 Numerical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

6.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

7 Conclusion and Future Work 151


Bibliography 155


List of Figures

2.1 Fundamental idea behind CS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 CS process and its domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.3 Sparsity and inverse problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.4 Sparsity and data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.5 Example of WSN Communication Architecture. . . . . . . . . . . . . . . . . . . 24

2.6 WSN data gathering based on clustering. . . . . . . . . . . . . . . . . . . . . . 27

3.1 Events detection scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.2 Network model scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.3 Measurements aggregation one-hop scheme from M among N active sensors . . 36

3.4 Targets positions estimation when SNR=15dB (one realization), N = 64, M = 25, K = 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.5 Performance evaluation of the simple and generalized versions when N = 64, M = 25 and K = 4. Solid lines correspond to GMP/gGMP and dashed lines to 2S-GMP/g2S-GMP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.6 Targets positions estimation when SNR=15dB, N = 64, M = 25, K = 4. . . . . 51

3.7 Performance comparison when N = 64, M = 20 and K = 3. . . . . . . . . . . . 52

3.8 Performance comparison versus measurements number (M) when SNR=20dB, N = 64 and K = 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.9 Correct events detection versus cells number when M/N ≈ 0.3, K/N ≈ 0.03, and SNR= 20dB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


3.10 Comparison between different proposed sensors selection optimized approaches for GMP algorithm when M = 20 and K = 3. . . . . . . . . . . . . . . . . . 54

3.11 Mean iterations number and mean run time versus SNR when N = 64, M = 20 and K = 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.1 NMSEt versus SNR when N = 64, M = 20, K = 2 and P0 = 1. . . . . . . . . . 62

4.2 PC model. Only the nodes inside the radius dm have power control. . . . . . . 64

4.3 Targets positions estimation when SNR=15dB, N = 64, M = 25, K = 4. . . . . 64

4.4 NMSEt versus SNR when N = 64, M = 20, K = 2 and P0 = 1, case without PC 66

4.5 PC Transmission model. Only the targets inside the coverage zone transmit. . 67

4.6 PCMDg (n = 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.7 PCMSn (n = 2), in the two subsets N2 = 9 nodes are present. . . . . . . . . . . 72

4.8 Performance evaluation for PCMDg approach . . . . . . . . . . . . . . . . . . . 78

4.9 Performance evaluation for PCMDl approach. . . . . . . . . . . . . . . . . . . . 78

4.10 Performance evaluation of PCMSnl approach. . . . . . . . . . . . . . . . . . . . 79

4.11 Comparison between different proposed PC schemes for n = 2 with equal coverage d0 constraint. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.12 Comparison between different proposed schemes for n = 2 when SNR= 25dB. Dashed curves correspond to schemes with equal power constraint w.r.t. the w.o.PC case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.13 Performance evaluation versus coverage d0 when SNR= 20dB. Left figure corresponds to PCMD scheme and right figure to PCMSn scheme. . . . . . . 82

4.14 Total consumed power versus coverage d0 when SNR= 20dB. Dashed lines correspond to local scheme and solid lines to global scheme . . . . . . . . . . 82

5.1 Block diagram of structured LS estimation based on CIR structure detection. . 91

5.2 Proposed Kalman-based subspace tracking and CIR support tracking schemes diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

5.3 CIR support tracking procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . 97


5.4 Two trees based forward structure, Ns = 5, Np = 3, p = 2 . . . . . . . . . . . . 102

5.5 Comparison between different CIR structure detection procedures. . . . . . . . 104

5.6 Normalized Mean Square Error versus SNR in the case of fast fading channels with quasi-stationary delays. . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.7 Fast fading channels with quasi-stationary delays. Solid lines correspond to PFE and dashed lines to TMSE. . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.8 Performance evaluation of the proposed optimized pilot allocation scheme. . . . 108

5.9 Evaluation of the complexity reduction ratio γ for varying numbers of subcarriers Ns and pilots Np. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

6.1 Selection of the number of principal components . . . . . . . . . . . . . . . . . 120

6.2 Spatial correlation for different parameters values. . . . . . . . . . . . . . . . . 124

6.3 Normalized residual norm versus iterations number (K) for N = 100, M = 50, p = 20 and mean correlation ρ = 0.7. . . . . . . . . . . . . . . . . . . . . . . 125

6.4 Normalized prediction error (a) and reconstruction error (b) versus prediction order (p). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6.5 Reconstruction error versus mean correlation ρ when N = 100, M = 50 and SNR= 0dB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

6.6 Reconstruction error versus measurements number (M) when mean correlation ρ = 0.8. Black line corresponds to SNR= −10dB and red line to SNR= 10dB. 127

6.7 Reconstruction error versus SNR when mean correlation ρ = 0.5. Black line corresponds to M = 30 and red line to M = 50. . . . . . . . . . . . . . . . . 128

6.8 Reconstruction error versus cells number when mean correlation ρ = 0.8, M/N ≈ 0.3 and SNR=0dB. . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

6.9 Reconstruction error versus iterations number when SNR= 10dB, M = 50, p = 20 and ρ = 0.7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

6.10 Sparse signal representation for one realization when ρ = 0.7. . . . . . . . . . . 130


6.11 Performance comparison versus OMP stop conditions when SNR= 10dB, M = 50, p = 20 and ρ = 0.7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

6.12 2D WSN sub part. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

6.13 Synthesis Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

6.14 Generated correlated 2D data with mean correlation ρ = 0.9. . . . . . . . . . . 145

6.15 Generated correlated 2D data with mean correlation ρ = 0.5. . . . . . . . . . . 146

6.16 Normalized residual norm versus iteration number (K) when mean correlation ρ = 0.9 and M = 49. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

6.17 Comparison between 1D and 2D scenarios of supervised transforms when mean correlation ρ = 0.9. Solid lines correspond to M = 25 and dashed lines to M = 49. 147

6.18 Comparison between 1D and 2D scenarios of unsupervised transforms when mean correlation ρ = 0.9. Solid lines correspond to M = 25 and dashed lines to M = 49. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

6.19 2D performance evaluation when ρ = 0.8. . . . . . . . . . . . . . . . . . . . . . 148

6.20 Reconstruction error versus cells number (N) when mean correlation ρ ≈ 0.7 and SNR= 0dB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148


List of Tables

1.1 Thesis contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1 Summary of Bluetooth, ZigBee, WiFi and UWB technologies features [1]. . . . 25

3.1 Procedures comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.2 Computational complexity comparison. . . . . . . . . . . . . . . . . . . . . . . 54

3.3 Complexity in terms of multiplications for SNR= 15dB and K given by figure 3.11. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.1 Coherence comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2 Comparison between referenced and proposed approaches. . . . . . . . . . . . . 65

4.3 Decomposition matrix coherence evaluation in absence of PC. . . . . . . . . . . 66

4.4 Simulation parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

4.5 Coherence and power consumption evaluations, benchmark approach. . . . . . 76

4.6 Coherence comparison and power consumption for PCMD approaches. . . . . . 77

4.7 Coherence comparison and power consumption for PCMSn approaches. . . . . 77

4.8 Power consumption comparison for different proposed PC approaches for n = 2. 79

5.1 Computational complexity order. . . . . . . . . . . . . . . . . . . . . . . . . . . 105

5.2 Coherence measure Np × µ(√NsFpg) of the DFT submatrix corresponding to the optimized pilot sets, for different numbers of trees p and various values of the CF weight β, Ns = 256, Np = 16. . . . . . . . . . . . . . . . . . . . . . . 107


5.3 Effect of the number of trees p on the final CS matrix coherence Np×µ(√NsFpg)

for forward and backward schemes using the coherence as cost function (β = 0),

Ns = 256, Np = 16. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

6.1 Coherence comparison for different considered approaches. . . . . . . . . . . . . 129

Page 19: Sparse Regularization Approach ... - supcom.mincom.tn

Abbreviation

AWGN Additive White Gaussian Noise

BP Basis Pursuit

BPDN Basis Pursuit Denoising

BPIC Basis Pursuit Inequality Constraints

CCK Complementary Code Keying

CDS Cyclic Difference Set

CF Cost Function

CFR Channel Frequency Response

CH Cluster Head

CIR Channel Impulse Response

CLB Complexity Load Backward

CLF Complexity Load Forward

COE Counting Error

CP Cyclic Prefix

CS Compressed Sensing

CSI Cubic Spline Interpolation

DCT Discrete Cosine Transform

DFT Discrete Fourier Transform

2S-GMP Two-Stage Greedy Matching Pursuit

2S-GOMP Two-Stage Greedy Orthogonal Matching Pursuit

DSSS Direct Sequence Spread Spectrum

DS-UWB Direct Sequence Ultra Wide Band

EP Equal Power

FDMA Frequency Division Multiple Access

FHSS Frequency Hopping Spread Spectrum

FFT Fast Fourier Transform


GAIC Generalized Akaike Information Criterion

gGMP generalized Greedy Matching Pursuit

g2S-GMP generalized Two-Stage Greedy Matching Pursuit

GMP Greedy Matching Pursuit

GOMP Greedy Orthogonal Matching Pursuit

GP Greedy Pursuit

HCS High Coverage Sensors

IC Iterative Construction

IDCT Inverse Discrete Cosine Transform

IDFT Inverse Discrete Fourier Transform

IoT Internet of Things

LASSO Least Absolute Shrinkage and Selection Operator

LCS Low Coverage Sensors

LPC Linear Prediction Coding

LS Least Squares

LSL Local Sparsity Level

MB-OFDM Multi Band Orthogonal Frequency Division Multiplexing

MP Matching Pursuit

MSE Mean Squares Error

NMSE Normalized Mean Squares Error

OFDM Orthogonal Frequency Division Multiplexing

OMP Orthogonal Matching Pursuit

PC Power Control

PCA Principal Component Analysis

PCMD Power Control Mechanism based on Distance

PCMDg Power Control Mechanism based on Distance global

PCMDl Power Control Mechanism based on Distance local

PCMSn Power Control Mechanism based on Sensors number

PCMSng Power Control Mechanism based on Sensors number global

PCMSnl Power Control Mechanism based on Sensors number local

PDO Positions Differences Occurrences

PFE Probabilistic Framework Estimator

PoV Proportion of Variance

QAM Quadrature Amplitude Modulation


QPSK Quadrature Phase Shift Keying

RIP Restricted Isometry Property

SER Symbol Error Rate

SNR Signal to Noise Ratio

SP Subspace Pursuit

StOMP Stagewise Orthogonal Matching Pursuit

TDMA Time Division Multiple Access

TMSE Threshold-based Mean Squares Error

UWB Ultra Wide Band

WiFi Wireless-Fidelity

WLAN Wireless Local Area Networks

WPAN Wireless Personal Area Networks

WSN Wireless Sensors Networks


Notations

N Set of positive integer elements

R Set of real elements

[•]∗ Conjugate operator

[•]T Transpose operator

[•]H Complex conjugate transpose operator (Hermitian)

[•]† Moore-Penrose inverse

⊙ Element-wise product operator

< •, • > Scalar product

‖ • ‖0 l0 norm

‖ • ‖2 l2 norm

card(•) Cardinality of a given set

vec(•) Vectorization matrix function

E(•) Expectation operator

∆P Adjacent pilot subcarriers spacing

IN Identity matrix of size N ×N

Φ Sensing matrix

Ψ Measurement matrix

α Path loss coefficient

δij Kronecker symbol, equals 1 if i = j, else 0

Ej Set of active cells in jth sensor range

η Complex Additive White Gaussian Noise

θ Sparse signal

µ Coherence value

ρ Mean correlation in the network

ρij Correlation between i and j sensor readings

σ2 Noise variance


σ2ck Variance of the kth active CIR coefficient

A Decomposition/reconstruction matrix

d Distance between adjacent sensors

d0 Targeted coverage radius for transmitted power P0

dij Distance separating sensor i and sensor j

dmax Maximum sensors coverage

E 2D prediction error sequence

e 1D prediction error sequence

Fp Np ×Ns Fourier sub-matrix

Fpg Np ×Ng Fourier sub-matrix

Fps Np × K Fourier sub-matrix

Fs Ns × K Fourier sub-matrix

H Frequency channel response

HKalman CFR estimate of the output of Kalman filter

Hls CFR recovered by LS estimator

Hp CFR estimation over the pilot subcarriers using the LS

h Sampled CIR

h Equivalent Sampled CIR

hls CIR LS estimator

h(t) Continuous CIR

K Degree of sparsity: number of nonzero entries

K Detected active positions number

L Channel memory

M Number of measurements

N Number of WSN cells

Ng Cyclic prefix length

Np Number of pilot sub-carriers per OFDM symbol

Ns Number of sub-carriers per OFDM symbol

Ntot Total number of WSN events

P Power control matrix

Pmax Maximal transmitted targets power

p prediction order

S 2D sparse WSN signal


Ts Sampling period

X 2D correlated WSN signal

Xd Diagonal matrix containing the pilot symbols

Xp 2D predicted signal

x 1D correlated WSN signal

y(i) Residual vector from vector y at ith iteration

zp FFT output over the Np pilot subcarriers


Chapter 1

Introduction

General Context

Over the past few years, low dimension linear representation/modeling methods have become

broadly popular in the mathematical field [2–4]. Sparse approximation has emerged for its

potentials to use limited resources while capturing all relevant information. Parsimony is

classically summarized by Occam's razor: "plurality should not be posited without necessity".

The need for high-ratio compression is growing rapidly with the increasing requirements of large-scale data acquisition and aggregation in the emerging Internet of Things (IoT) context [5–7]. This thesis is concerned with data acquisition from only a few measurements compared to

those required by conventional methods. This is feasible if the data has a sparse representation

in some given basis.

The principle of sparsity, or parsimony, consists of representing some phenomenon with as few

variables as possible. In recent years, a large amount of multi-disciplinary research has been

conducted on sparse models and their applications [8]. In statistics and machine learning,

the sparsity principle is used to perform model selection, that is, automatically selecting a simple model from a large collection of candidates [9–12]. In signal processing [4, 13], a signal

is qualified as sparse if most of its components are zero. Then, the signal can be accurately

described using only a small number of significant components in a suitable dictionary. If

most of a signal components are weak compared to its few significant components, it is said

approximately sparse. In linear algebra, sparse approximation consists in representing data

with linear combinations of a few dictionary elements. Sparse representation is practical in

many applications. In particular, it allows the application of CS [14–16], in which a few


measurements are sufficient to effectively reconstruct a much higher dimension original signal

based on the prior knowledge that the signal is sparse in some adequate basis [17].

Through the work of Donoho on statistical estimation [18], the sparsity concept was developed within the CS paradigm, which examines various aspects of redundancy as a mathematical concept. Indeed, Tao's foundational discovery in CS [19] demonstrated that an intractable formulation of the sparse reconstruction problem under the naturally adapted l0 norm could, under certain assumptions, be replaced by an equivalent convex and highly structured optimization formulation. This result, which provides theoretical support for the use of the l1 norm instead of the l0 norm, opened the door to a burst of work on optimization algorithms that exploit the specific structure of sparse reconstruction problems. As a consequence, efficient algorithms for solving

the resulting optimization problems have been proposed.

CS is an emerging field that has attracted considerable research interest over the past few

years [18, 20–22]. It has gained a lot of attention thanks to its enhanced performance and

reduced resources requirement. CS addresses the problem of retrieving sparse signals from

under-determined linear measurements. Therefore, it offers the advantage of reducing both

computational complexity and material cost in the measurement stage as well. In recent years,

CS has been applied into the domain of wireless communications, in which it allowed to cut

down the signal processing cost as well as to increase system efficiency. In particular, it was

applied for the identification of frequency hopping signal [23, 24], in the spectrum sensing in

cognitive radio [25, 26], in radio channel recovery [27, 28] and in the events detection in WSN

[29, 30].

We start with a motivation for exploiting CS in the wireless communication context.

Research Motivation

This thesis exploits the CS potentials in order to enhance performance in wireless communi-

cation systems. In particular, the first part of the thesis focuses on sparse signal recovery.

This part thus turns around recovery or reconstruction algorithms and enhancements. In this

framework, the problem of events counting in WSN is first addressed. This problem has the

specificity that the sparse vector has integer entries and is addressed within a greedy framework. Secondly, we consider sparse parameter tracking based on the assumption of slow time variation

of the support (non zero entries positions). The considered application is sparse CIR tracking

in OFDM systems.


The second part of the thesis focuses on a more general framework, where the observed signal

is a priori non sparse yet compressible in a given basis to optimize. The chosen application is

that of spatially correlated WSN measurements.

We hereafter emphasize the relevance and timeliness of the chosen wireless applications

as well as their respective main challenges, which we address by CS.

Wireless Sensors Networks: Over the past few years, development and deployment of WSN

have experienced rapid growth. A WSN consists of a large number of tiny, low-cost, low-energy, wirelessly connected sensor nodes distributed over a wide area and used to remotely monitor diverse parameters. Several applications are envisaged, such as environmental monitoring [31, 32], mobile vehicle and robot tracking, etc. In a WSN, each sensor node is capable of measuring data and communicating it to other nodes. In environmental monitoring, the sensed data can be temperature, humidity, chemical composition, etc. Two schemes can be envisaged for data processing:

either distributed at the node level or centralized at a sink or cluster head.

WSN technology is being employed to ensure the connection between the physical world of humans and the virtual world of electronics. Its potential is to provide low-cost solutions

for the problems met in many fields including military [33], industrial [34], environmental

monitoring [35] and many other real life applications [36]. Rapid deployment, self organization,

high sensing fidelity, flexibility and low cost of WSN make them very promising for various

monitoring applications. However, owing to limited storage capacity and power of sensor nodes,

numerous issues and challenges are being faced in WSN. One of the most crucial challenges in

WSN is energy efficiency and stability because battery capacities of sensor nodes are limited

and replacing them is impractical [37], especially in areas of difficult access. Therefore, new

mechanisms to reduce power consumption, cost, delay and traffic are needed. CS theory holds

promise for addressing these issues [38]. As will be shown later, its application in

WSN allows for enhanced performance while reducing the number of deployed sensors and

thus lowering the energy budget.

Radio Channel Estimation and Tracking : In high data rate wireless communication systems,

multipath channels often lead to severe frequency selective fading and serious Inter-Symbol

Interference (ISI). High mobility adds the problem of fast fading. The Orthogonal frequency

division multiplexing (OFDM) technique has been used to mitigate multipath channel impair-

ments. To benefit from OFDM advantages as its simple equalization and ISI avoidance, an

accurate channel estimate is required [39–41] and is critical to the performance of coherent

demodulation. To this end, pilot symbols should be multiplexed with the data for channel response


recovery. The pilot repartition over the time-frequency grid is adapted to the channel dynamics in the time and frequency domains, respectively.

Wideband wireless channels frequently lead to an equivalent sampled impulse response with a

sparse structure. For such channels, most of the CIR coefficients are zero or near zero valued.

This can be related to the large delay spread, compared to the sampling period, and the

relatively low number of propagation paths. Then, for each path, depending on whether the

delay is sample spaced or not, its contribution is respectively restricted to one CIR tap or leaks

over a set of adjacent CIR coefficients. By exploiting the CIR sparse structure, its estimation

performance can be greatly improved [42–44]. Many recent works consider sparse channel

estimation through the recently emerged CS framework. The use of CS has several advantages, such as reducing the number of pilots with respect to the CIR length, which enhances spectral efficiency. It also leads to a denoised and thus more accurate estimate.
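This tap leakage can be illustrated numerically (an illustrative sketch, not taken from the thesis: it assumes ideal band-limited sinc pulse shaping of a single unit-gain path, so the equivalent sampled CIR is h[n] = sinc(n - tau/Ts)):

```python
import numpy as np

def sampled_cir(delay_over_Ts, n_taps=16):
    """Equivalent sampled CIR of a single unit-gain path with delay tau:
    under ideal band-limited (sinc) pulse shaping, h[n] = sinc(n - tau/Ts)."""
    n = np.arange(n_taps)
    return np.sinc(n - delay_over_Ts)

# Sample-spaced delay (tau = 5 Ts): all the energy falls in a single CIR tap
h_int = sampled_cir(5.0)
# Non-sample-spaced delay (tau = 5.3 Ts): energy leaks over adjacent taps
h_frac = sampled_cir(5.3)

print(np.sum(np.abs(h_int) > 1e-9))   # one significant tap
print(np.sum(np.abs(h_frac) > 0.05))  # several significant taps
```

With few propagation paths and a large delay spread, the resulting CIR is thus exactly sparse for sample-spaced delays and only approximately sparse otherwise.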

Contributions of the Dissertation

The thesis contributions are hereafter summarized.

1. Continuous and integer sparse vector recovery and tracking: Although general

CS framework was developed for real and continuous-valued sparse data, some studies

addressed the case of finite alphabets, such as in [45]. We herein consider the case of a sparse

vector with either discrete or continuous entries. These two scenarios were respectively

applied in WSN events detection and channel estimation problems. Also, the issue of

sparse parameter tracking based on its support slow time variation is addressed and

applied for sparse OFDM channel tracking.

• Sparse targets detection and counting in small scale WSN

We consider that the monitored area is divided into cells and that each cell is

equipped with one sensor. The first scenario envisages a small scale WSN with dense sensor deployment. In this part, we exploit the rare nature of cells

holding targets in the monitored area to apply CS approach for targets detection

and counting [46]. To this end, new CS Greedy algorithms and enhancements of

existing algorithms such as the popular Orthogonal Matching Pursuit (OMP) [47]

algorithm are proposed [48, 49].

• Power Control (PC) for rare events detection enhancement in large scale WSN

The problem of rare events detection using CS technique is investigated in large


scale WSN. In such scenario, the large scale fading severely affects the transmitted

signal and thus the detection performance. In order to remedy such impairments,

collaborative schemes which control the transmitted power of some targets based on

CS (coherence criterion) rules are proposed [50].

• Sparse CIR tracking in OFDM systems

Radio propagation of Ultra Wide Band (UWB) transmissions is characterized by

a large delay spread that induces a sparse CIR structure [51, 52]. The problem of

sparse multipath channel estimation in OFDM systems is considered. More precisely, we consider mobility scenarios that induce fast fading channels, typified by highly non-stationary gains and by delays with much slower temporal variation. In this

context, the slow time variation of CIR support is exploited to enhance the channel

estimation and tracking accuracy. Spectral efficiency is also addressed by minimizing

the number of required pilots through CS use while optimizing the CIR estimation

recovery performance [53].

2. Search of adequate sparsity inducing basis: The parameter to be recovered is

here originally non sparse but supposed compressible in some basis. The application to

spatially correlated data compression in WSN is here considered.

Sparse data may naturally exist in some applications. Nevertheless, a sparse representation cannot be easily induced in many other real-world contexts, such as continuous environmental data gathering. In this perspective, and unlike the first part, we

consider correlated data, and in particular 2D reading of sensors in Wireless Sensors

Networks (WSN). Then, in order to reduce the amount of exchanged data, we compress

them based on CS, thus eliminating their redundancy and reducing their size.

This requires the design of a specific sparsifying transform or data compression basis.

In this work, in addition to conventional and predefined sparsity inducing transforms, a

new technique based on linear prediction and exploiting spatial correlation is developed

[54, 55].

The schematic layout of the thesis contributions is given in Table 1.1.

At the end of this section, we provide an outline of the thesis manuscript which organizes the

previously detailed contributions in 6 chapters.


Sparse parameter recovery:

• Sparse static parameter. Discrete: sparse events detection and counting in WSN (Chap. 3). Continuous: CIR recovery in OFDM systems (Chap. 5).

• Sparse dynamic parameter. Continuous: CIR support tracking (Chap. 5).

• Coherence reduction. Discrete: sensors selection and PC mechanism (Chap. 4). Continuous: pilots placement optimization (Chap. 5).

Search of sparsifying basis:

• Compressible static parameter. 2D correlated measurements, case of WSN (Chap. 6).

Advantages of CS applications: energy consumption, sensors deployment, estimation accuracy, overhead reduction, and efficient WSN data gathering.

Table 1.1: Thesis contexts

Organization of the Manuscript

The second chapter is dedicated to a survey of CS theory and its mathematical background.

Then, the specific context of developments and their application to wireless communications

are detailed. Finally, an overview of the general framework of this thesis is given.

The third chapter is dedicated to the problem of sparse and discrete valued sparse parameter

recovery. This is envisaged in the frame of sparse targets detection and counting in WSN.

Greedy CS recovery algorithms are proposed with the aim of providing a better tradeoff between

performance and complexity, compared to methods of the literature. In its first part, a modified

version of the recent Greedy Matching Pursuit (GMP) [56] algorithm is presented. Our main

enhancement is the separation of the detection and counting stages of the algorithm, which is

shown to both improve performance and reduce computational load. In addition, a generalized

extension of the considered greedy versions, allowing to identify simultaneously multiple active

components/cells at each iteration, is suggested.

To take into account the non orthogonality of the decomposition basis, related in our case to

the propagation channel, a new Greedy algorithm named Greedy OMP (GOMP), which both

counts and localizes targets, is proposed. It adapts the OMP algorithm to the discrete nature

of sparse vector, which contains the events number per cell. Further, a modified version of

the proposed GOMP algorithm, based on separating the steps of targets detection and targets

counting is presented. At the end of the chapter, we propose to optimize the sensors placement


in contrast to existing work where they are randomly chosen. This is realized through different

schemes aiming to better satisfy the CS performance guarantee rules. Two kinds of schemes are

envisaged: based on the channel matrix (supposed known) or on the measured observation.

The proposed sensors selection schemes assume equal power transmission of all targets.

For a better choice of the decomposition basis, a further approach is envisaged in chapter 4 in

the framework of large scale WSN where a PC mechanism aims to compensate for path loss.

More precisely, we focus on CS recovery capacity enhancement based on coherence reduction

of the sensing matrix. This is realized in large scale WSN through a mechanism of transmitted

PC. Two collaborative schemes are proposed, in which each sensor listens to the WSN during

one time slot as a cluster head (CH). The first partition is based on the range of sensors to

the cluster head and is referred to as PC Mechanism based on Distance “PCMD”. The second

partition depends on the deployed sensors density around the cluster head and is referred

to as PC Mechanism based on Sensors number “ PCMSn”. The obtained results show an

enhancement of detection and counting performance compared to the case without PC.
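The mutual coherence criterion that drives these PC mechanisms can be computed as follows (a minimal numpy sketch; the random Gaussian matrix is only an illustrative stand-in for the actual WSN sensing matrix):

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence mu(A): largest normalized inner product between two
    distinct columns of A. Lower coherence gives better CS recovery guarantees."""
    An = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(An.conj().T @ An)         # magnitudes of the Gram matrix
    np.fill_diagonal(G, 0.0)             # discard the (unit) diagonal entries
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))        # e.g. 32 measurements, 64 cells
print(mutual_coherence(A))               # a value in (0, 1]
```

A PC scheme can then be assessed simply by comparing mu(A) before and after reweighting the columns of the sensing matrix.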

Chapter 5 addresses the problem of a sparse parameter tracking, when there is a slow variation

of its support (non zero elements positions). This is studied in the case of continuous-valued

sparse parameter and applied to the problem of sparse CIR tracking in OFDM systems. Besides

tracking issues, pilots placement optimization is addressed as a way to optimize the spectrum

efficiency and the reconstruction basis quality.

Contrary to the previous chapters, which consider sparse parameter recovery and tracking issues when the sparsifying basis is a priori known, chapter 6 envisages the case of originally non-sparse yet compressible data. The aim is to optimize the choice of the sparsifying basis in order to obtain the sparsest representation.

In chapter 6, we investigate the issue of data aggregation of non-sparse but spatially corre-

lated data in WSN. We study transformations that sparsify the observed measurements. In

this context, in addition to conventional data compression techniques such as Principal Com-

ponent Analysis (PCA), Discrete Cosine Transform (DCT) and Discrete Fourier Transform

(DFT), we propose a novel sparsity inducing algorithm that exploits the spatial correlation.

This algorithm is based on Linear Prediction Coding (LPC). It enables finding a sparsifying transformation, thus allowing for CS application. This also permits recovering the original signal

from a reduced number of sensors. First, a 1D reading of the network is considered. Then, a

2D reading is envisaged and shown to outperform the first since it is tailored to better exploit

the spatial correlation.
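The linear-prediction idea can be sketched numerically (an illustrative toy example, not the thesis algorithm itself: a slowly drifting 1D field with three abrupt events, prediction order 2, and coefficients fitted by least squares):

```python
import numpy as np

# Toy correlated 1D "sensor readings": a slow drift with three abrupt events
N = 256
jumps = np.zeros(N)
jumps[[60, 130, 200]] = [3.0, -2.0, 1.5]
x = np.cumsum(0.01 + jumps)              # strongly correlated, clearly not sparse

# Order-p linear prediction: predict x[n] from its p previous samples,
# with prediction coefficients fitted by least squares over the whole record
p = 2
X = np.column_stack([x[p - k - 1 : N - k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
e = x[p:] - X @ a                        # prediction error sequence

def n_significant(v, frac=0.99):
    """Number of coefficients carrying `frac` of the total energy of v."""
    s = np.sort(np.abs(v))[::-1] ** 2
    return int(np.searchsorted(np.cumsum(s), frac * s.sum()) + 1)

print(n_significant(x), n_significant(e))  # the residual is far more compressible
```

The prediction error concentrates its energy on the few abrupt events, so CS can be applied in the residual domain even though the raw readings are not sparse.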


Finally, in chapter 7, the dissertation is concluded by presenting a summary of the contributions.

Then, some open issues and perspectives to the realized work and suggestions are developed.

Publications

The contributions of this thesis were published in six conference papers and one journal paper; a second journal paper is under minor revision.

Journal Papers

• Z. Jellali, L. Najjar and S. Cherif, “ Improving rare events detection in WSN through

cluster-based power control mechanism”, International Journal of Distributed Sensor Net-

works, IJDSN, vol. 12, no. 2, 2016.

• Z. Jellali, L. Najjar, “Fast Fading Channel Estimation by Kalman Filtering and CIR

Support Tracking”, IEEE Transactions on Broadcasting, under minor revision.

Conference Papers

• Z. Jellali, L. Najjar and S. Cherif, “Linear Prediction for Data Compression and Recovery

Enhancement in Wireless Sensors Networks”, International Wireless Communications and

Mobile Computing, IWCMC 2016, IEEE, pp. 779− 783, Cyprus, September 2016.

• Z. Jellali, L. Najjar and S. Cherif, “Data Acquisition by 2D Compression and 1D Re-

construction for WSN Spatially Correlated Data”, International Symposium on Signal,

Image, Video and Communications, ISIVC 2016, IEEE, pp. 224−229, Tunisia, November

2016.

• Z. Jellali, L. Najjar and S. Cherif, “Generalized Targets Detection and Counting in Dense

Wireless Sensors Networks”, 23rd European Signal Processing Conference, EUSIPCO

2015, IEEE, pp. 1187 − 1191, 2015.

• Z. Jellali, L. Najjar and S. Cherif, “Greedy Orthogonal Matching Pursuit for Sparse

Target Detection and Counting in WSN”, 22nd European Signal Processing Conference,

EUSIPCO 2014, IEEE, pp. 2250 − 2254, 2014.


• Z. Jellali and L. Najjar, “Tree-based Optimized Forward Scheme for Pilot Placement in

OFDM Sparse Channel Estimation”, International Wireless Communications and Mobile

Computing Conference, IWCMC 2014, IEEE, pp. 925 − 929, 2014.

• Z. Jellali, L. Najjar and S. Cherif, “A Study of Deterministic Sensors Placement for Sparse

Events Detection in WSN based on Compressed Sensing”, International Conference on

Communications and Networking, ComNet 2014, IEEE, pp. 1− 5, 2014.

Conclusion

After the presentation of the thesis context and contributions and of the manuscript organiza-

tion, the following chapter gives the fundamentals and background of CS paradigm and sparse

parameter recovery.


Chapter 2

Sparse Representation Approach

and Compressed Sensing Theory:

Applications in Wireless

Communications

The sparse representation concept has grown very rapidly in the past few years and has attracted

much attention in a wide range of applications. The basic idea of sparsity is that the param-

eter or signal, qualified as sparse, can be described using only a small number of significant

components in a suitable basis.

Sparse approximation is directly related to the CS paradigm, which has recently proved its high potential in the resolution of several problems. The goal of this chapter is to present the basic tools of sparse modeling, their relationship with CS theory, and their applications to

wireless communications. It gives an overview of CS basic ideas, different formulations and

the main sparse recovery algorithms. At the end of this chapter, CS application to wireless

communications will be discussed.

2.1 Compressed Sensing and Sparse Recovery

This introductory part surveys the theory of compressive sampling [57], also known as com-

pressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wis-

dom in data acquisition. In recent years, CS [18, 20–22] has attracted considerable attention in


areas of applied mathematics, computer science, and electrical engineering by suggesting that

it may be possible to surpass the traditional limits of sampling theory. CS builds upon the

fundamental fact that we can represent many signals using only a few non-zero coefficients in

a suitable basis or dictionary. Nonlinear optimization can then enable recovery of such signals

from very few measurements.

After a brief historical overview, we treat the central question of how to accurately recover

a high-dimensional signal from a small set of measurements and how to provide performance

guarantees for a variety of sparse recovery algorithms.

2.1.1 Conventional compression

Claude Shannon, American mathematician and electrical engineer, is considered the father of information theory. His name is associated with the famous sampling theorem, also known as the Nyquist-Shannon criterion, stating that if an analog signal is sampled at a frequency Fe = 1/Ts (where Ts denotes the sampling period) at least equal to twice the maximum signal frequency, i.e. Fe ≥ 2Fmax, then the analog signal can be rebuilt from its samples without information loss.
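This reconstruction can be checked numerically with the Shannon interpolation formula (an illustrative sketch; the sample window is necessarily finite, so a small truncation error remains):

```python
import numpy as np

Fmax = 50.0                      # highest frequency present in the signal (Hz)
Fe = 4 * Fmax                    # sampling frequency, comfortably above 2*Fmax
Ts = 1.0 / Fe                    # sampling period

def f(t):
    """Band-limited test signal: all components below Fmax."""
    return np.sin(2 * np.pi * 13 * t) + 0.5 * np.cos(2 * np.pi * 37 * t)

n = np.arange(-2000, 2000)       # finite sample grid (the theorem assumes all n)
samples = f(n * Ts)

def reconstruct(t):
    """Shannon interpolation: x(t) = sum_n x[nTs] * sinc((t - nTs)/Ts)."""
    return float(np.sum(samples * np.sinc((t - n * Ts) / Ts)))

t0 = 0.01234                     # an instant that falls between two samples
print(abs(f(t0) - reconstruct(t0)))  # small (truncation error only)
```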

Today, with the requirement of storing and transmitting large volumes of information, there is a real need for more efficient compression techniques that allow an optimized use of available resources, and for adapted, high-performance recovery methods. Indeed, in many important

and emerging applications, the resulting Nyquist rate can be so high that we end up with too

many samples that need further compression in order to be stored or transmitted.

In other applications, the cost of signal acquisition is prohibitive, either because of a high cost

per sample, or because state-of-the-art cannot achieve the high sampling rates required by

Shannon/Nyquist. In many situations, an important quantity of samples do not bring much

information and should be thrown away before storage, thus pointing the need for more efficient

acquisition/sampling/sensing approaches.

The main question is then whether it is possible to sample a signal below the minimum

rate prescribed by Shannon. To this end, we design compressed data acquisition protocols

which perform as if it was possible to directly acquire just the important information about

the signals, not acquiring that part of the data that would eventually just be ”thrown away”

by lossy compression. Moreover, such protocols are nonadaptive, i.e., they do not require

knowledge of the signal to be acquired in advance. They only need to know the basis in which

the data is compressible. In specific applications, this principle might enable dramatically


reduced measurement time, dramatically reduced sampling rates, or reduced use of analog-to-

digital converter resources.

CS theory has emerged as a new paradigm in signal processing particularly for data acquisition

[58, 59]. This theory needs fewer samples or measurements than required by the traditional Shannon-Nyquist sampling theory [60, 61]. This compression reduces the size of the sensed

data and thus decreases the storage requirements of the system. It depends on the existence

of a basis in which the signal is either sparse: can be represented by just a few coefficients, or

approximately sparse: only a few components are not negligible.

Figure 2.1: Fundamental idea behind CS

In summary, compared to conventional compression, CS provides a direct method which ac-

quires compressed samples without going through the intermediate stages of conventional com-

pression as shown in figure 2.1. In the following, an overview of CS theory will be given.

2.1.2 Compressed Sensing context

In 2004, a study by Emmanuel Candes on data acquisition under Shannon's theorem raised very natural questions [22]: why go to so much effort to acquire all the data when most of what we get will be thrown away? Can we not just directly measure the part that will not end up being thrown away? This period marks a sort of big bang of CS theory. Around 2006, the theory of CS, based on a sparsity criterion on the signal of interest, appeared through the work of its founders, Emmanuel Candes, Justin Romberg, Terence Tao and David Donoho [18]. Since then, this approach has known many changes and variations and has been explored in many applications by engineers, physicists and scientists of all stripes.


2.1. Compressed Sensing and Sparse Recovery

In the case of a digital signal (sound recording, digital image or video), which can always be represented as a vector x with N components, the question comes down to whether we can perfectly reconstruct x from an observation y such that

y = Φx, (2.1)

where y gathers a reduced number M ≪ N of linear combinations of the entries of x, and where Φ is a sensing matrix of size M × N, modeling the sampling process and verifying certain properties. Ideally, this measurement matrix is designed to reduce the number of measurements M as much as possible while allowing the recovery of a wide class of signals x from their measurement vectors y. We are dealing here with an under-determined system, an extremely ill-posed problem, because there are far fewer equations than unknowns. Typically, such a system has either no solution or an infinite number of solutions. It turns out that under certain conditions, which will be presented in the next subsection, it is possible to perfectly reconstruct the signal x from the observation y.

2.1.3 Performance guarantees

As mentioned above, CS offers a framework for simultaneous sensing and compression of finite-dimensional vectors that relies on linear dimensionality reduction. Indeed, this approach asserts that one can recover certain signals from far fewer samples or measurements than required by conventional Shannon theory. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. The next subsections present these two fundamental premises underlying CS: sparsity and incoherence.

2.1.3.1 Sparsity concept

The fundamental idea of CS is to sample a signal at a significantly lower rate than that prescribed by Shannon, by exploiting its sparsity characteristics. More precisely, CS exploits the fact that many natural signals are sparse or compressible, in the sense that they have concise representations when expressed in a proper basis Ψ. This sparsity expresses the idea of the information rate and signal structure behind many compression algorithms that employ transform coding.


To introduce the sparsity notion, we rely on a representation of the signal x in a given basis Ψ. Using this N × N matrix Ψ = [Ψ1, . . . , ΨN] with vectors Ψi as columns, a signal x can be expressed as [20]

x = ∑_{i=1}^{N} Ψi θi, or x = Ψθ, (2.2)

where θ is the N × 1 column vector of weighting coefficients. The signal θ can also have an approximately sparse representation, in which only a few components have large or non-zero magnitude while the remaining components are of very small magnitude. Such signals can be well approximated by the K largest coefficients, obtained by thresholding the original signal. The matrix Ψ is conventionally chosen as an orthonormal basis. In a general setting, we refer to the matrix Ψ as the sparsifying dictionary and to its columns as atoms. We refer by support to the set of positions at which θ has non-zero or significant entries. When most of the components of θ are zero valued (resp. insignificant), x is said to be sparse (resp. approximately sparse).

As mentioned above, CS focuses on signals that have a sparse representation, where x is a linear combination of just K basis vectors, with K ≪ N. That is, only K of the N coefficients of θ are nonzero and N − K are zero. Sparsity is motivated by the fact that many natural and manmade signals are compressible, in the sense that there exists a basis where the representation in eq. (2.2) has just a few large coefficients and many zero valued or small coefficients. This sparsity concept is one of the constraints required for the low rate sampling and the reconstruction process. Compressible signals are well approximated by K-sparse representations; this is the basis of transform coding. For example, natural images tend to be compressible in the Discrete Cosine Transform (DCT) [62] and wavelet bases [63], on which the JPEG and JPEG-2000 compression standards are based. Audio signals [64] and many communication signals are compressible in a localized Fourier basis.
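To make the transform-coding idea concrete, here is a small numerical sketch (not from the thesis): a signal constructed to be approximately sparse in the DCT basis is well approximated by keeping only its K largest DCT coefficients. The signal construction, the noise level and the value K = 10 are arbitrary demo choices.

```python
import numpy as np

# Orthonormal DCT-II matrix built explicitly (numpy only); its rows are the
# DCT basis functions, so theta = C @ x and x = C.T @ theta.
def dct_matrix(N):
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2 / N)
    C[0] /= np.sqrt(2)                       # first-row scaling for orthonormality
    return C

rng = np.random.default_rng(0)
N, K = 256, 10
C = dct_matrix(N)

# hypothetical demo signal: two DCT atoms plus a small noise floor, so x is
# approximately sparse in this basis by construction
theta_true = np.zeros(N)
theta_true[3], theta_true[30] = 1.0, 0.5
x = C.T @ theta_true + 0.005 * rng.standard_normal(N)

theta = C @ x                                # transform coefficients
idx = np.argsort(np.abs(theta))[::-1]        # indices sorted by magnitude
theta_K = np.zeros(N)
theta_K[idx[:K]] = theta[idx[:K]]            # keep only the K largest coefficients

x_K = C.T @ theta_K                          # best K-term approximation
rel_err = np.linalg.norm(x - x_K) / np.linalg.norm(x)
```

Conventional compression computes all N coefficients and keeps the K largest; CS, discussed next, aims to avoid acquiring all N samples in the first place.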

2.1.3.2 Design of CS matrices

In the conventional compression framework, we need to acquire the full N-sample signal x; compute the complete set of transform coefficients θi via θ = Ψ⁻¹x; locate the K largest coefficients and discard the N − K smallest ones; and encode the K values and locations of the largest coefficients. Unfortunately, this sample-then-compress framework suffers from two inherent inefficiencies. First, we must start with a potentially large number of samples N even if the ultimate desired K is small. Second, the encoder must compute all of the N transform coefficients θi, even though it will discard all but K of them. As an alternative, it is not necessary to invest a lot of power into observing the entries of a sparse signal in all coordinates when


most of them are zero anyway. Our framework of data acquisition condenses the signal directly into a compressed representation without going through the intermediate stage of taking N samples. Consider the more general linear measurement process that computes M < N inner products between x and a collection of vectors {Φj}_{j=1}^{M}, as in yj = ⟨x, Φj⟩. Stacking the measurements yj into the M × 1 vector y and the measurement vectors Φj^T as rows into an M × N measurement matrix Φ, and substituting in eq. (2.2), we can write

y = Φx = ΦΨθ = Aθ, (2.3)

where A = ΦΨ is an M × N sensing matrix for estimating the K-sparse vector θ. The M × N measurement matrix Φ projects the signal x into y. Figure 2.2 illustrates these transforms and figure 2.3 summarizes the previous steps. Therefore, the incomplete observation y is exploited to reconstruct the original sparse vector θ, based on approximating high dimensional data using fewer parameters. Once θ is recovered, the whole N-dimensional signal x is reconstructed using Ψ.

Figure 2.2: CS process and its domains — from the coefficient domain (θ), through the signal domain (x = Ψθ), to the CS domain (y = Φx).

The CS problem consists of designing a stable measurement matrix Φ such that the salient information in any K-sparse or compressible signal is not damaged by the dimensionality reduction from x ∈ R^{N×1} to y ∈ R^{M×1}. This can be guaranteed through the notion of incoherence, which is detailed in the following. Once the measurement and sparsifying matrices Φ and Ψ are identified, the aim is to recover θ from y, then reconstruct x. Several CS recovery algorithms can be envisaged; the next section is dedicated to introducing the main ones. In the following, we show how CS provides solutions to this problem.

To obtain good reconstruction performance, the measurement matrix Φ should be incoherent with the sparsifying basis Ψ [65].

Figure 2.3: Sparsity and inverse problems — an object x of large dimension has a sparse representation x = Ψθ with few nonzero components; a random projection yields the observation y = ΦΨθ = Aθ, an inverse problem with fewer equations than unknowns.

The coherence is defined as the maximum value amongst the normalized inner products between the sparsifying basis Ψ and the measurement matrix Φ. A low value of coherence is desirable in order to ensure mutually incoherent matrices and therefore better compressive sampling [59]. The mutual coherence can be measured as

μ(Φ,Ψ) = max_{1≤i,j≤N} |⟨Φi, Ψj⟩| / (‖Φi‖2 ‖Ψj‖2). (2.4)

If Φ and Ψ present a high coherence, the matrix A cannot be a good sensing matrix. The incoherence condition can be achieved with high probability simply by selecting Φ as a random matrix [66]. One choice for designing the matrix Φ is the Gaussian one, where the matrix elements are independent and identically distributed (i.i.d.) Gaussian samples. This choice is deemed good since a Gaussian sensing matrix satisfies the incoherence condition with high probability for any choice of orthonormal basis Ψ [59].
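As a quick numerical illustration of eq. (2.4) (a sketch under the assumption that the Φi denote the rows of Φ and the Ψj the columns of Ψ), the coherence of an i.i.d. Gaussian Φ with the canonical basis stays far from the maximal value 1, whereas a Φ containing a canonical row is maximally coherent:

```python
import numpy as np

# Hedged sketch of eq. (2.4): mutual coherence between the rows of a
# measurement matrix Phi and the columns of a sparsifying basis Psi.
def mutual_coherence(Phi, Psi):
    R = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm rows
    C = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # unit-norm columns
    return np.max(np.abs(R @ C))                           # largest |inner product|

rng = np.random.default_rng(0)
M, N = 64, 256
Phi_gauss = rng.standard_normal((M, N))   # i.i.d. Gaussian sensing matrix
Psi = np.eye(N)                           # canonical (identity) basis

mu_gauss = mutual_coherence(Phi_gauss, Psi)   # low coherence, well below 1

# a "bad" Phi whose first row is a canonical vector is maximally coherent
Phi_bad = Phi_gauss.copy()
Phi_bad[0] = np.eye(N)[0]
mu_bad = mutual_coherence(Phi_bad, Psi)       # equals 1
```

For these demo dimensions the Gaussian coherence is well below 1, consistent with the claim that random matrices are incoherent with any fixed orthonormal basis.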

2.2 Compressed Sensing Recovery Algorithms

In this section, the main recovery algorithms based on CS theory are introduced, starting by presenting the original sparse signal recovery problem in the first subsection and progressing to the relaxed optimization in the second subsection. Finally, we focus on the greedy pursuit recovery algorithms.


Figure 2.4: Sparsity and data compression — the observed signal y ∈ R^{M×1} (M ≪ N) is obtained through ΦΨ from the sparse signal θ ∈ R^{N×1} with K active coefficients (K ≪ N), where ‖θ‖0 = card({i : θi ≠ 0}).

2.2.1 Original problem formulation

In a general CS setup, few measurements y are available and the task is to reconstruct a larger signal x, passing through the recovery of the sparse signal θ. One may wonder how to reconstruct the signal from this incomplete set of measurements. With the prior information that the signal is compressible or sparse in a given basis, one of the theoretically simplest ways to recover such a vector from its measurements in eq. (2.3) is to solve the l0 minimization problem [67], where the l0 norm counts the number of non-zero entries (see figure 2.4). The reconstruction problem becomes

θ̂ = argmin_{θ∈R^N} ‖θ‖0 subject to y = Aθ, (2.5)

where ‖θ‖0 is the number of non-zero entries of θ. If the observation is corrupted by noise and we have some knowledge about the noise variance, the problem constraint is reformulated from y = Aθ to ‖y − Aθ‖ ≤ ζ, where ζ depends on the noise; we will come back to the noisy setting in the following.

The l0-minimization problem works perfectly in theory. However, it is computationally NP-hard in general [67] and, as a consequence, it is intractable to solve eq. (2.5) for an arbitrary matrix and vector. Our goal, therefore, is to find computationally feasible algorithms that can successfully recover a sparse vector θ from the measurement vector y with the smallest possible number of measurements M.


2.2.2 Relaxed formulation

The convex relaxation methods are based on relaxing problem (2.5) by replacing its objective function by a convex one [68, 69]. An alternative to the l0 norm used in eq. (2.5) is the l1 norm, defined as ‖θ‖1 = ∑_{n=1}^{N} |θn|.

Hereafter, the noiseless and noisy cases are presented, together with recovery and uniqueness guarantees for sparse signals.

• Noiseless Case

The resulting l1 norm adaptation of eq. (2.5) is formally defined as

θ̂ = argmin_{θ∈R^N} ‖θ‖1 subject to y = Aθ. (2.6)

Since the l1 norm is convex, eq. (2.6) can be seen as a convex relaxation of eq. (2.5). Thanks to this convexity, the problem can be solved as a linear program, known as Basis Pursuit (BP) [69].
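As an illustration, BP can be cast as a linear program with the standard variable-splitting trick |θi| ≤ ui (a hedged sketch assuming SciPy's linprog is available; the dimensions are arbitrary demo values):

```python
import numpy as np
from scipy.optimize import linprog

# Basis Pursuit, eq. (2.6), as a linear program over z = [theta; u]:
# minimize sum(u) subject to theta <= u, -theta <= u, and A theta = y.
def basis_pursuit(A, y):
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])        # objective: sum of u
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])                 # theta - u <= 0, -theta - u <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((M, N))])              # equality y = A theta
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

# toy recovery: a 2-sparse vector from M = 25 Gaussian measurements, N = 50
rng = np.random.default_rng(0)
M, N = 25, 50
A = rng.standard_normal((M, N))
theta_true = np.zeros(N)
theta_true[[7, 33]] = [1.5, -1.0]
y = A @ theta_true

theta_hat = basis_pursuit(A, y)
```

With M well above 2K, the l1 solution coincides with the sparse ground truth with high probability, as the recovery guarantees below formalize.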

As M ≪ N , recovering θ from y is an ill-posed problem as there is an infinite number of solu-

tions for θ satisfying (2.3). Indeed, we need to determine some properties of the decomposition

matrix A that guarantee that for distinct signals θ1 and θ2 verifying θ1 6= θ2, lead to different

measurement vectors y1 = Aθ1 6= y2 = Aθ2. In other words, we want each vector y ∈ RM to

be matched to at most one K− sparse vector θ such that y = Aθ. A key relevant property

of the matrix in this context is its spark which is given by the smallest number of columns

that are linearly dependent. Indeed, if spark(A) > 2K [70], then for each measurement vector

y ∈ RM there exists at most one sparse signal θ such that y = Aθ thus guaranteeing solution

unicity. As spark(A) ∈ [2,M + 1], then M ≥ 2K measurements are required.
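Since the spark is defined combinatorially, it can only be computed by brute force on tiny matrices; the following sketch (illustrative only, with a hypothetical toy matrix) checks every column subset:

```python
import numpy as np
from itertools import combinations

# Brute-force spark(A): the smallest number of linearly dependent columns.
def spark(A, tol=1e-10):
    N = A.shape[1]
    for k in range(1, N + 1):
        for cols in combinations(range(N), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k                  # found k dependent columns
    return N + 1                          # full column rank: no dependent subset

# toy matrix: columns e1, e2, e3 and e1 + e2 -> every pair of columns is
# independent, but {e1, e2, e1+e2} are dependent, hence spark = 3
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
s = spark(A)
```

Here spark(A) = 3, so the condition spark(A) > 2K holds only for K = 1: only 1-sparse solutions are guaranteed unique for this toy matrix.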

• Noisy Case

If the measurement vector y is corrupted with noise, the measurement model becomes

y = Aθ + n, (2.7)

where n is an unknown Additive White Gaussian Noise (AWGN) with mean 0_M and covariance σn²I_M.

The last optimization can then be modified to account for noisy measurements y; this is obtained by changing the problem formulation to [71]

θ̂ = argmin_{θ∈R^N} ‖θ‖1 subject to ‖y − Aθ‖2 ≤ ζ, (2.8)

where ζ is an appropriately chosen bound on the noise magnitude. Under AWGN noise, [19] suggests choosing ζ = σn √(M + ε√(2M)), where ε is some tuning parameter.

When the measurements y are contaminated with noise, it is useful to consider somewhat stronger conditions. The most commonly used criterion for evaluating the quality of a CS measurement matrix is the Restricted Isometry Property (RIP), introduced in [18, 22, 72] by Candès and Tao (who initially called it the uniform uncertainty principle). The RIP condition ensures the exact recovery of the sparse signal from the observed measurement y (in the noiseless case). This condition on the CS matrix ΦΨ can be summarized as follows:

(1 − δK)‖θ‖2² ≤ ‖ΦΨθ‖2² ≤ (1 + δK)‖θ‖2², (2.9)

for a positive constant δK in the interval [0, 1] and any K-sparse vector θ. Here ‖·‖2 denotes the l2-norm of a vector. If a given sensing matrix satisfies the RIP of order K, all of its submatrices of size M × K are close to an isometry, and therefore the matrix approximately preserves the distance between any pair of K-sparse vectors. This condition must be satisfied by A = ΦΨ in order to successfully recover the sparse signal. However, checking whether the CS matrix A satisfies the RIP condition has combinatorial computational complexity, which usually prevents its use in practice. Fortunately, the RIP can be achieved with high probability by choosing Φ as a random matrix. It was shown that random Gaussian, Bernoulli, and partial Fourier matrices [73] as choices of Φ lead to a matrix A that verifies the RIP with high probability with a number of measurements nearly linear in the sparsity level, namely M = O(K log(N/K)) [74].
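While certifying the RIP is combinatorial, the concentration expressed by eq. (2.9) is easy to observe empirically: for a Gaussian A with entries of variance 1/M, the ratio ‖Aθ‖2²/‖θ‖2² stays close to 1 over random K-sparse vectors. The dimensions and trial count below are arbitrary demo values.

```python
import numpy as np

# Empirical look at eq. (2.9): draw random K-sparse vectors theta and check
# that ||A theta||^2 / ||theta||^2 concentrates around 1 for a Gaussian A.
rng = np.random.default_rng(0)
M, N, K, trials = 100, 400, 5, 200
A = rng.standard_normal((M, N)) / np.sqrt(M)   # variance 1/M so E||A theta||^2 = ||theta||^2

ratios = np.empty(trials)
for t in range(trials):
    theta = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    theta[support] = rng.standard_normal(K)
    ratios[t] = np.linalg.norm(A @ theta) ** 2 / np.linalg.norm(theta) ** 2

mean_ratio = ratios.mean()                     # close to 1
max_dev = np.abs(ratios - 1).max()             # empirical stand-in for delta_K
```

This is only a sampled check, not a proof: the RIP requires the bound to hold uniformly over all K-sparse vectors, which the random trials cannot certify.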

This modified optimization (2.8) is known as Basis Pursuit Inequality Constraints (BPIC) and is a quadratic program with polynomial complexity solvers [69]. If there is no knowledge about the noise, one can instead further relax the problem to

θ̂ = argmin_{θ∈R^N} υ‖θ‖1 + (1/2)‖y − Aθ‖2², (2.10)

where υ > 0 is a tuning parameter adjustable for different levels of sparsity. This solver is known as Basis Pursuit DeNoising (BPDN).
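The thesis text does not tie BPDN to a specific solver; one classical choice, sketched here purely for illustration, is the iterative soft-thresholding algorithm (ISTA), which alternates a gradient step on the quadratic term with the soft-thresholding proximal step of the l1 term. The dimensions, seed and υ = 0.02 are arbitrary demo values.

```python
import numpy as np

# ISTA sketch for the BPDN objective v*||theta||_1 + 0.5*||y - A theta||_2^2.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, v, n_iter=1000):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ theta - y)         # gradient of the quadratic term
        theta = soft_threshold(theta - grad / L, v / L)   # proximal l1 step
    return theta

# toy demo: recover a 3-sparse vector from M = 40 noisy Gaussian measurements
rng = np.random.default_rng(1)
M, N = 40, 100
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta_true = np.zeros(N)
theta_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ theta_true + 0.01 * rng.standard_normal(M)

theta_hat = ista(A, y, v=0.02)
```

The soft threshold υ/L per step is what drives small coefficients exactly to zero, producing a sparse estimate rather than a merely small one.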


2.2.3 Greedy pursuit algorithms

In this section, we focus on sparse signal recovery based on Greedy Pursuit (GP) algorithms. This class has proven to be computationally efficient and easy to implement while providing better performance, mostly with the Subspace Pursuit version, than the convex relaxation based algorithms [75, 76]. These methods are iterative in nature and select columns of A according to their correlation with the measurements y, as determined by an appropriate inner product.

Given a measurement vector y, the main principle of GP algorithms is to detect (or estimate) the underlying support set of a sparse signal vector θ and then evaluate the associated signal values. To estimate the support set, that is, the set of indices corresponding to the non-zero (active) components in θ, and the associated signal values, GP algorithms use different linear algebraic tools, for instance matched filter detection (based on computing the absolute correlation vector) and least squares estimation. Among the several existing GP algorithms, the most popular ones are Matching Pursuit (MP) [77] and Orthogonal Matching Pursuit (OMP) [47, 78]. At each step of MP, the atom that has the strongest correlation with the

residual signal is selected; this is what "matching" means. Note that MP selects atoms from the whole dictionary at each step. This means that an atom can be selected more than once, which slows down the convergence. The OMP algorithm [78] avoids this problem by projecting, at each iteration, the signal onto the subspace spanned by the set of already selected atoms. In this way, the detected components of θ are updated at each iteration, which makes OMP more complex than MP. Yet, fewer steps are required to converge.

The use of OMP for solving the standard CS problem was studied by Tropp and Gilbert in 2007 [79]. It is described in Algorithm 1. In the kth iteration, OMP performs matched filtered detection: it identifies the index corresponding to the largest correlation amplitude (step 1) and adds it to the support-set estimate (step 2). It proceeds by updating the residual, removing the projection vector (step 4). This process is repeated until a stopping rule is attained. If the degree of sparsity of θ, which we denote by K, is known, then K iterations of OMP are processed. Otherwise, we stop when the residual norm ‖rk‖2 attains a given value.
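A compact numpy sketch of Algorithm 1 follows (illustrative; here the halting criterion is simply a known number of iterations K, and the toy dimensions are arbitrary demo values):

```python
import numpy as np

# OMP sketch: grow the support one index per iteration, re-fit by least
# squares over the selected atoms, and update the residual.
def omp(A, y, n_iter):
    support = []                                # estimated support set Omega_k
    r = y.copy()                                # residual r_0 = y
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        corr = np.abs(A.T @ r)                  # matched filter |<r, A_j>|
        corr[support] = 0                       # never reselect an atom
        support.append(int(np.argmax(corr)))    # steps 1-2: grow the support
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ sol             # steps 3-4: LS fit and residual
    theta[support] = sol
    return theta

# toy demo: recover a 3-sparse theta from M = 40 noiseless measurements
rng = np.random.default_rng(0)
M, N, K = 40, 80, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta_true = np.zeros(N)
theta_true[[4, 19, 61]] = [1.0, -2.0, 1.5]
y = A @ theta_true

theta_hat = omp(A, y, n_iter=K)
```

When K is unknown, the loop would instead stop once the residual norm drops below a chosen tolerance, as noted above.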

Based on the same key idea, Donoho et al. proposed Stagewise Orthogonal Matching Pursuit

(StOMP) [80], for which OMP represents a special case. At each step, StOMP selects the

atoms whose inner products with the current residual exceed a specially-designed threshold. In

addition, Dai and Milenkovic proposed a new GP algorithm called Subspace Pursuit (SP) [81]

which uses similar tools as OMP.

Algorithm 1: Orthogonal Matching Pursuit
Input: CS matrix A, measurement vector y.
Output: sparse representation signal θ̂.
Initialize: index set Ω0 = ∅, residual r0 = y, A0 an empty matrix, iteration counter k = 1.
while halting criterion false do (iteration k)
1. Find the index pk via
   pk = argmax_{j∉Ωk−1} |⟨rk−1, Aj⟩|. (2.11)
2. Augment the index set and the matrix of selected atoms:
   Ωk = Ωk−1 ∪ {pk}, Ak = [Ak−1, Apk].
3. Recompute the coefficients by solving the least-squares problem
   θk = argmin_θ ‖y − Ak θ‖2. (2.12)
4. Update the residual:
   rk = y − Ak θk. (2.13)
k = k + 1
end while
Return θ̂ = θk.

In the OMP strategy, each iteration selects one index that represents a good partial support-set estimate, and then the estimated support

is updated. Once an index is included, it remains in the set throughout the remainder of the reconstruction process. In the SP algorithm, however, a support of size K is maintained and refined during each iteration. An index considered reliable in some iteration can appear to be wrong at a later iteration, and can therefore be removed from the estimated support set and replaced by another index at any stage of the recovery process. In this way, knowledge of the sparsity degree K is needed, contrary to OMP, which can operate with stopping conditions. Another algorithm, called CoSaMP [82], closely resembles the SP technique. It also requires knowledge of the sparsity level as part of its input. For the identification step, it forms a proxy of the residual from the current samples as A*rk and locates the largest components of the proxy. Using the largest coordinates, the support set is updated. Then, an approximation of the signal using an LS estimate is made at each iteration. After each new residual is formed, the proxy measurements are updated. Each iteration of SP and CoSaMP requires more computational effort than an iteration of OMP.
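The CoSaMP iteration just described can be sketched as follows (illustrative; the dimensions and iteration count are arbitrary demo choices):

```python
import numpy as np

# CoSaMP sketch: proxy A^T r, keep the 2K largest proxy entries, merge with
# the current support, solve least squares, then prune back to K entries.
def cosamp(A, y, K, n_iter=30):
    N = A.shape[1]
    theta = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        proxy = A.T @ r                              # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * K:]   # 2K largest components
        T = np.union1d(omega, np.flatnonzero(theta)) # merge supports
        sol, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        b = np.zeros(N)
        b[T] = sol                                   # LS estimate on merged support
        keep = np.argsort(np.abs(b))[-K:]            # prune to the K largest
        theta = np.zeros(N)
        theta[keep] = b[keep]
        r = y - A @ theta                            # new residual
    return theta

# same toy setup as for OMP: 3-sparse theta, M = 40 noiseless measurements
rng = np.random.default_rng(0)
M, N, K = 40, 80, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta_true = np.zeros(N)
theta_true[[4, 19, 61]] = [1.0, -2.0, 1.5]
y = A @ theta_true

theta_hat = cosamp(A, y, K)
```

The prune step is what distinguishes CoSaMP and SP from OMP: a wrongly selected index can later be discarded, at the price of knowing K in advance.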


2.3 Sparse Representation in Wireless Communications

In this part, we give examples of the use of sparse representation in wireless communications. After introducing a literature overview, we concentrate on CS applications in the WSN and channel estimation fields.

2.3.1 Literature overview

The concept of sparsity is used in a wide and diverse range of application fields. More specifically, it has been successfully used in extensive real-world applications, such as image denoising and inpainting, classification and segmentation [83], and image retrieval and biometrics [84].

Sparse representation, from the viewpoint of its origin, is directly related to CS, which was first proposed for efficient data storage and compression of digital images. Recently, interest in CS as a disruptive approach in wireless communication networks and signal processing has begun to grow. In short, the advantages of CS make it very promising for wireless communication networks. In particular, its good performance and high compression ratio make CS an excellent candidate in the emerging Internet of Things (IoT) field [85, 86], which will involve the acquisition and processing of huge data volumes.

IoT aims to effectively interconnect a large number of heterogeneous devices that gather large amounts of data, with applications spanning environmental monitoring, smart agriculture, smart parking, etc. In this scenario, WSN, which consist of a number of interconnected sensor nodes [87, 88], can be integrated into the IoT. However, the sensor platforms supporting WSN applications are very constrained in terms of computational power, memory and storage. Therefore, minimizing the energy consumption of IoT devices is very important, as the overhead of changing batteries is very high in terms of both manpower and economic cost. Hence, low cost data acquisition is necessary to effectively collect and process the data at WSN end nodes. Being aware of these benefits, CS theory has been widely exploited in WSN [89]. In wireless channel estimation, CS allows reducing the number of pilots, thus enhancing spectral efficiency, in addition to providing high denoising capabilities.


2.3.2 Wireless sensor networks

A WSN can be generally described as a network of devices, or nodes, which can monitor physical or environmental conditions such as temperature, sound, pressure, etc. It cooperatively communicates the information gathered from the monitored field through wireless links. The data is forwarded, possibly via multiple hops, to a sink node that is connected to another network, e.g., the Internet, through a gateway. The nodes can be stationary or moving, aware of their location or not, and homogeneous or not. WSN development was motivated by military applications such as battlefield surveillance. Adapted forms of such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.

2.3.2.1 WSN components

The main components of a general WSN are the sensor nodes and the sink, or base station. Each sensor node in a WSN has the capability of sensing, processing and communicating data to the required destination, as shown in figure 2.5.

Figure 2.5: Example of WSN Communication Architecture.

Indeed, a sensor node is a hardware device with an integrated micro-controller, thus consisting of a processing unit, memory and converters. The processing unit is responsible for data acquisition and processing, as well as for incoming and outgoing information. Once the data is measured and processed at the considered node, it is transferred to the other nodes via one- or multi-hop transmission, or directly to the sink. The sink is an interface between the management/processing center and the sensors. It is a special resource node, with unconstrained computational capabilities and energy, used for data gathering.


2.3.2.2 Wireless communication standards

Many technologies are adopted for transmission in WSN, such as Bluetooth, ZigBee, Wireless Fidelity (WiFi) and UWB [1, 90]. Each presents a different use according to its characteristics. The difference between these protocols lies in the quality of service and in some constraints related to the application and environment. For a suitable choice of wireless technology, the main constraints to be considered include range, cost, bandwidth, reliability, transmission speed, flexibility of installation and use, and power consumption. Table 2.1 summarizes the main differences between the four protocols. Each protocol is based on an IEEE standard. Obviously, WiFi and UWB provide a high data rate compared to Bluetooth and ZigBee. In general, Bluetooth and UWB are intended for Wireless Personal Area Networks (WPAN) (a range of 10m), while WiFi and ZigBee are oriented to Wireless Local Area Networks (WLAN) (range from 50m to 150m). A brief description of each protocol is given hereafter.

Bluetooth, also known as the IEEE 802.15.1 standard, is based on a wireless radio system designed for short range, reaching 10m with a 1Mbps rate. ZigBee is a specification for low data rate, low power consumption, reliability and short range applications. It is an IEEE 802.15.4 standard based technology with data rates varying from 20 to 250Kbps, which operates at 868MHz, 915MHz and 2.4GHz with a range between 10m and 100m. It uses direct sequence spread spectrum with binary phase shift keying and offset quadrature phase shift keying modulation techniques. Other candidate technologies for WSN are the various forms of IEEE 802.11 or WiFi, for which the data rate can reach 54Mbps with a range of about 100 meters. Thus, WiFi presents the advantage of high data rate and long range transmission, yet it requires high transmission power.

Technology                     Bluetooth   ZigBee               WiFi              UWB
IEEE standard                  802.15.1    802.15.4             802.11a/b/g       802.15.3a
Frequency band                 2.4GHz      868/915MHz, 2.4GHz   2.4GHz, 5GHz      3.1-10.6GHz
Max signal rate                1Mbps       250Kbps              54Mbps            110Mbps
Nominal range                  10m         10-100m              100m              10m
Spread spectrum                FHSS        DSSS                 DSSS, CCK, OFDM   DS-UWB, MB-OFDM
Power consumption              low         low                  high              low
Multipath performance          poor        poor                 poor              good
Interference to other systems  high        high                 high              low

Table 2.1: Summary of Bluetooth, ZigBee, WiFi and UWB technologies features [1].


UWB is a radio technology used for high data rate communication over short distances. One of the most exciting characteristics of UWB is its large bandwidth, occupying about 500MHz, which can satisfy most multimedia applications such as audio and video delivery in a home environment. Its characteristics are low power, low cost and high speed transmission. UWB has further attractive characteristics, such as low complexity and low cost, resistance to severe multipath and jamming, and a very good time domain resolution for location and tracking applications.

UWB propagation channels differ from narrowband propagation channels. UWB spreads the transmitted signal over a very large bandwidth (typically 500MHz or more) within the 3.1GHz to 10.6GHz band. It uses two spreading techniques: Direct Sequence Ultra Wide Band (DS-UWB) and the Multi Band OFDM (MB-OFDM) spread spectrum method, where the band is divided into thirteen regular sub-bands. Each sub-band uses OFDM modulation for transmission. The transmitting node can adaptively choose one or more sub-bands to avoid interference. The high sampling frequency and large time spread frequently induce a sparse structure of the channel's equivalent sampled impulse response.

2.3.2.3 WSN energy optimization

In a WSN, the energy used by a node is consumed by computing, receiving, transmitting, listening for messages on the radio channel, sampling data and sleeping. WSN are mainly characterized by their limited energy supply. Hence, the need for energy efficient infrastructure and processing is becoming increasingly important, since it impacts the network's operational lifetime.

CS theory combined with sensor node clustering is one of the techniques that can extend the lifespan of the whole network through data aggregation at the cluster head [91]. Firstly, based on the CS paradigm, a compressed signal representation is envisaged without going through the intermediate stage of acquiring N samples, i.e. only a subset of all sensor nodes is randomly activated. Also, CS based methods are built on the hypothesis that the raw sensed data are either sparse or compressible in some domain, such as the DCT, Discrete Fourier Transform (DFT) or wavelet domain. Then, the clustering mechanism groups the considered sensor nodes into clusters, whose radius depends on the sensor sensitivity. Each cluster is made up of nodes and a Cluster Head (CH). The CH receives data from the nodes within its coverage zone, aggregates them, compresses all the sensed data by linear compressed projections and forwards them to the base station (see figure 2.6).


Figure 2.6: WSN data gathering based on clustering.

2.4 Thesis Framework

This thesis applies CS to different problems: first, the recovery of sparse signals with both continuous and integer valued entries. The envisaged applications are, respectively, sparse channel impulse response (CIR) recovery in OFDM and rare events detection and counting in WSN, where both small and large scale networks are considered. Then, the tracking of a sparse parameter whose support is slowly varying is envisaged. This is applied to CIR tracking and exploits the slow time variation of the propagation delays.

The first part of the thesis considers the formulation of the CS recovery problem where the sparsifying basis is known, and focuses on proposing better adapted and more efficient recovery schemes. In the frame of WSN, sparsity allows decreasing the number of required measurements and thus lowering the energy consumption and deployment cost. In the frame of channel estimation, denoising capabilities lead to enhanced estimation and tracking performance. The use of CS also allows achieving a higher spectral efficiency by reducing the number of pilots.

Sparse representation is the key idea of CS application. However, it cannot be easily induced in many real-world contexts. The second part of the thesis studies the optimization of the sparsifying basis search for correlated signals. This requires the design of a specific sparsifying transform to allow a good reconstruction of the original signal based on CS. We consider correlated data for which our objective is to design an adequate compression basis. We envisage applying this approach to 2D spatially correlated WSN measurements. In this way, novel sparsity-inducing schemes, exploiting spatial correlation, are introduced and applied to data aggregation in WSN.


2.5 Conclusion

This chapter first gave the fundamentals of compressed sensing, its close connections with sparse representation, and its interesting potential for resource use optimization. CS indeed uses the signal's sparsity in a given basis to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem. The main CS recovery algorithms have been presented, especially the iterative greedy pursuit algorithms. The reconstruction complexity of these algorithms is significantly lower than that of the convex optimization based on l1 minimization. For this reason, this class will be considered for sparse signal recovery, and enhanced versions of OMP will be proposed. As the sparsity degree may not be available in many of the considered practical applications, OMP operates with adapted stopping rules. Several CS applications in wireless networks are then considered.


Chapter 3

Sparse Static Signal Recovery

through Rare Events Detection and

Counting in Small Scale WSN

3.1 Introduction

In recent years, WSN have received considerable attention in several applications such as environmental monitoring, intrusion detection and target tracking. WSN are frequently deployed to perform tasks such as the detection of specific events. We address the particular scenario of rare events detection in WSN, such as forest fires or intrusions. The objective of the sensor network in this application is to detect and localize rare events. Due to the infrequency of the events of interest, the driving design principle is to minimize the energy consumption and thereby maximize the network lifetime. To this end, the problem of rare events detection in WSN is investigated from the perspective of CS. The problem reduces to using as few sensors as possible to precisely and efficiently detect the events. It falls within the class of sparse vector recovery where the nonzero entries belong to a discrete alphabet.

This chapter starts by setting the network model and the assumptions considered to formulate the problem of CS-based rare events detection in WSN. Then, based on an overview of the existing work addressing this scenario, two categories of detection can be distinguished: several works consider binary detection [92], whereas counting the number of point events or targets is of interest to many WSN applications. In this context, the problem of joint events detection and counting in WSN using CS is investigated. Exploiting the spatial sparsity


of the phenomena, we propose new CS-based sparse events detection and counting algorithms.

As mentioned above, the next section elaborates the system model.

3.2 System Model Description

We consider a monitored area partitioned into regular cells, each equipped with one sensor. We treat the case of discrete events whose number can be quantified, as shown in figure 3.1.

[Figure: a grid-partitioned monitored area in the (X, Y) plane, with one sensor per cell and clustered target/event markers.]

Figure 3.1: Events detection scenario

As mentioned before, several works assume that each cell contains at most one event. In this scenario, the problem of events detection is considered under a binary detection model [92], in which a sensor reports '1' if one or more targets are detected in its sensed area and '0' otherwise. To detect the sparse events locations, a complex Bayesian approach is proposed in [92]. Events counting, as investigated in [56], is of broad interest to many WSN applications such as intrusion detection and mobile object tracking [93, 94]. In our work, we assume that one active cell can hold many targets. The ultimate goal is then counting and positioning multiple targets using as few measurements as possible.

We first describe the context of sparse and discrete events detection. The region where the exceptional phenomenon might happen occupies a very large area. Yet, the cells where the


phenomenon actually happens are not numerous. For this reason, the sensors used to sense the events are usually densely distributed over the area to prevent missed events. However, to reduce the burden of the data processing step, only a subset of the most relevant measurements should be used in the events detection step. The CS application aims at an accurate reconstruction of the events number per cell from the reduced number of sensor measurements. The next subsection establishes the WSN data model.

3.2.1 Network model and assumptions

We consider a WSN deployed in the monitored area to count and localize potential targets that transmit given signals. These targets are considered as point events and are supposed spatially clustered, making the event of one active cell (a cell with targets present) rare. The aim here is to detect (localize) the events and to count their number per cell. For this purpose, and without loss of generality, a grid partitions the monitored area into N regular cells. We denote by s the vector stacking the number of events per cell. For i = 1, . . . , N, we suppose that the number of events per cell s_i ∈ {0, . . . , m}, where m is a positive integer that upper bounds the events number in one cell. The events rareness assumption implies that s is K-sparse, with K ≪ N nonzero elements corresponding to cells holding events. These K cells are referred to as active cells and verify s_i > 0. Note that in this context, we handle strictly sparse vectors s, contrarily to other problems where approximately sparse parameters are considered.

In the following, we state the assumptions used for the data model derivation:

• H0: The signals transmitted by the different targets are uncorrelated.

• H1: The locations of the sensor nodes are fixed and a priori known. For the regular setting, one sensor node is placed at each cell center to sense the targets transmissions, which are used for their detection and counting.

• H2: The vector s containing the events number per cell is K-sparse and stationary in time, which respectively means that the events are rare or concentrated in space, and supposed constant (in position and number) during the processing time.

• H3: The events occurring in the same cell are assumed spatially uniformly distributed within the cell. In the adopted model, the distance separating one event from its cell center is taken constant, equal to d_mean, the mean possible distance in [0.1, 1]d/2, where


[Figure: cell i holding three targets T1, T2, T3, at distances d_{i_1 j}, d_{i_2 j}, d_{i_3 j} from the sensor of a neighboring cell j; sensors and targets are marked on the grid.]

Figure 3.2: Network model scenario

d is the distance between two adjacent sensors. The distance separating these events from the sensors of the other cells is of the order of the distance separating the corresponding cell centers.

As shown in figure 3.2, the s_i = 3 events i_k (k = 1, 2, 3) of cell i, though at different positions, are supposed at the same distance d_mean from sensor i, i.e. d_{i_1 i} = d_{i_2 i} = d_{i_3 i}, where d_{i_k i} is the distance separating the kth event of cell i from its sensor. We take into account that for cells i ≠ j, d_{i_k j} ≫ d_{i_k i}, k ∈ {1, 2, 3}. Then, we can write in general that d_{i_k j} ≃ d_{ij} for k = 1, . . . , s_i and i ≠ j, where d_{ij} denotes the distance between sensors i and j and s_i is the number of events in cell i.

• H4: Channels between pairwise sensors are supposed Rayleigh block fading with known response.

From H3, we also assume that the channels between the events of one cell and another sensor outside the cell are the same as the channel between the corresponding sensors. This is in particular true in the scheme where each sensor retransmits its cell events signals to the network. Also, channels from different cells to a third one are supposed uncorrelated.

In the next subsection, the system model is derived based on the above assumptions.

3.2.2 Problem formulation

In large scale WSN, the propagation model should account for both path loss and Rayleigh

fading effects. In this way, the received signal at sensor j corresponding to targets located in


an active cell i with si events, is given by

z_{ij} = ∑_{k=1}^{s_i} (h_{i_k j} / d_{i_k j}^{α/2}) g_{i_k},   (3.1)

where h_{i_k j} is the channel between the kth event of cell i and sensor j, a Rayleigh fading coefficient modeled as a complex Gaussian process with zero mean and unit variance, d_{i_k j} is the distance between the kth event of cell i and sensor j, and g_{i_k} is the signal generated by the kth event of cell i, modeled as a complex Gaussian variable with zero mean and variance P_0. Note that if cell i is inactive, then s_i = 0 and z_{ij} = 0. The coefficient α denotes the path loss exponent related to the environment density [95]. In our work, α describes free space and is taken equal to 2.
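To make the model concrete, the per-cell received signal of eq. (3.1) can be simulated as follows. This is a minimal NumPy sketch, not from the thesis: the function name `received_signal` and the numeric values (distance, power, seed) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def received_signal(s_i, d, alpha=2.0, P0=1.0):
    # z_ij of eq. (3.1): sum over the s_i events of cell i, each with a
    # Rayleigh fading channel h ~ CN(0, 1), path loss d^(-alpha/2) and a
    # complex Gaussian source signal g ~ CN(0, P0).
    h = (rng.standard_normal(s_i) + 1j * rng.standard_normal(s_i)) / np.sqrt(2)
    g = np.sqrt(P0 / 2) * (rng.standard_normal(s_i) + 1j * rng.standard_normal(s_i))
    return np.sum(h / d ** (alpha / 2) * g)

z = received_signal(s_i=3, d=10.0)   # three events at distance 10 from sensor j
print(abs(z))                        # |z_ij|; an inactive cell (s_i = 0) yields 0
```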

Let

c_{ij} = [ h_{i_1 j}/d_{i_1 j}^{α/2}, . . . , h_{i_{s_i} j}/d_{i_{s_i} j}^{α/2} ],   (3.2)

which gathers the channels from the events of cell i to sensor j, and

g_i = [ g_{i_1}, . . . , g_{i_{s_i}} ],   (3.3)

which stacks the signals transmitted by the targets of cell i (if s_i > 0). Then z_{ij} can be written as

z_{ij} = c_{ij} g_i^T.   (3.4)

Let x denote the measurements vector from the N sensors. The received signal at sensor j, denoted by x_j, corresponds to the signals received from all the cell events within its range, disturbed by an additive noise. It is expressed as

x_j = ∑_{i ∈ E_j} c_{ij} g_i^T + η_j,   (3.5)

where E_j denotes the set of active cells in the range of sensor j and η_j is a complex Additive White Gaussian Noise (AWGN) distributed as η_j ∼ CN(0, σ²). The additive noise is spatially uncorrelated, such that E(η_i η_j^*) = σ² δ_{ij}, where δ_{ij} is the Kronecker symbol, and is uncorrelated with the targets signals g_{i_k}. Unless mentioned otherwise, we suppose in this part that the range of every sensor covers the whole WSN, i.e. E_j = E, the set of cells holding events.


Concatenating all sensor measurements, we have in matrix form

x = C g^T + η^T,   (3.6)

where C is the N × N_tot channel matrix, with N_tot = ∑_{i=1}^{N} s_i the total number of events on the grid, constructed as

C = [ c_{11} c_{21} ··· c_{N1} ; c_{12} c_{22} ··· c_{N2} ; ⋮ ⋮ ⋱ ⋮ ; c_{1N} c_{2N} ··· c_{NN} ].   (3.7)

g = [g_1, . . . , g_N] is 1 × N_tot and denotes the generated events signals in the monitored area, and η = [η_1, . . . , η_N] is the noise component captured by the N sensors.

The autocorrelation matrix of the signal x is expressed as

E(x x^H) = E(C g^T g^* C^H) + E(η^T η^*).   (3.8)

The received signal energy at the different sensors is given by the diagonal elements of E(x x^H). For the jth sensor, it is given by

E(x_j x_j^*) = ∑_{n ∈ E} ∑_{m ∈ E} E( c_{nj} g_n^T g_m^* c_{mj}^H ) + E(η_j η_j^*).   (3.9)

H4 states that the channels of the links from different cells are uncorrelated; considering in addition assumption H0, which implies that E(g_n^T g_m^*) = 0 for n ≠ m, we can write

E(x_j x_j^*) = ∑_{n ∈ E} E( c_{nj} g_n^T g_n^* c_{nj}^H ) + E(|η_j|²)   (3.10)
            = ∑_{n ∈ E} E( ∑_{k=1}^{s_n} (h_{n_k j}/d_{n_k j}^{α/2}) g_{n_k} ∑_{l=1}^{s_n} (h_{n_l j}^*/d_{n_l j}^{α/2}) g_{n_l}^* ) + E(|η_j|²).   (3.11)

Then, the last expression is reduced to

E(|x_j|²) = ∑_{n ∈ E} E( ∑_{k=1}^{s_n} (|h_{n_k j}|²/d_{n_k j}^{α}) |g_{n_k}|² ) + σ².   (3.12)

In practice, the received energy at each sensor is evaluated by replacing the expectation in the above expression by an average over the samples collected during a limited observation interval over which the channel is supposed unchanged (block fading channel).
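This sample-average evaluation can be sketched as follows; the helper name `energy_estimate` and the numeric values are illustrative assumptions, with the block-fading channel h held fixed over the observation window.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_estimate(samples):
    # Approximate E(|x_j|^2) by the sample mean of |x_j|^2 over the
    # observation interval during which the channel is assumed constant.
    return np.mean(np.abs(np.asarray(samples)) ** 2)

L = 1000                                                                 # snapshots in the block
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)    # fixed channel over the block
g = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)  # unit-power source signal
x_j = h * g
print(energy_estimate(x_j))   # close to |h|^2 for a unit-power source
```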


Referring to assumptions H3 and H4, for any sensor j and a cell n with s_n events, we use the approximations h_{n_k j} ≃ h_{nj} and d_{n_k j} ≃ d_{nj} for k = 1, . . . , s_n.

Therefore,

E(|x_j|²) = P_0 ∑_{n=1}^{N} (|h_{nj}|²/d_{nj}^{α}) s_n + σ².   (3.13)

Concatenating the E(|x_j|²), for j = 1, . . . , N, leads to the N-element energy vector x written as

x = Ψ s + η,   (3.14)

where Ψ is the N × N target decay energy matrix given by

Ψ = P_0 [ |h_{11}|²/d_{11}^α  |h_{21}|²/d_{21}^α  ···  |h_{N1}|²/d_{N1}^α ;
          |h_{12}|²/d_{12}^α  |h_{22}|²/d_{22}^α  ···  |h_{N2}|²/d_{N2}^α ;
          ⋮  ⋮  ⋱  ⋮ ;
          |h_{1N}|²/d_{1N}^α  |h_{2N}|²/d_{2N}^α  ···  |h_{NN}|²/d_{NN}^α ],   (3.15)

and η = σ² 1_N. Note that d_{ii} is taken equal to d_mean for i = 1, . . . , N. In CS, the matrix Ψ that relates the measurement x to the sparse vector of interest s is called the sensing matrix.
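A small NumPy sketch of the energy model (3.14)-(3.15) follows; the 4×4 grid layout, unit sensor spacing and all numeric values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

N, P0, alpha, sigma2, d_mean = 16, 1.0, 2.0, 0.01, 0.5

# Hypothetical sensor layout: one sensor per cell on a 4x4 grid, unit spacing.
side = int(np.sqrt(N))
pos = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
D = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(D, d_mean)              # d_ii = d_mean (assumption H3)

# Rayleigh fading coefficients h_nj ~ CN(0, 1); Psi[j, n] = P0 |h_nj|^2 / d_nj^alpha.
H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Psi = P0 * np.abs(H) ** 2 / D ** alpha   # eq. (3.15)

s = np.zeros(N, dtype=int)
s[[2, 9]] = [1, 3]                       # K = 2 active cells
x = Psi @ s + sigma2                     # eq. (3.14) with eta = sigma^2 * 1_N
print(x.shape)                           # → (16,)
```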

Once the WSN energy measurements are available at the sink or fusion center, the goal is to accurately locate the events occurrence and their number per cell. This must obviously be done under the constraint of reducing the power consumption and deployment cost, so as to maximize the network lifetime. CS is thus applied to the problem of rare events detection in WSN, as it reduces the number of required measurements from N to some M such that M ≪ N. Indeed, the parameter of interest s being a sparse representation of x in the channel matrix basis Ψ, only M ≪ N node measurements are sufficient to recover s. Therefore, M linear combinations of x, obtained through a measurement matrix Φ, are used for the recovery of s. To reduce the power consumption in WSN, we choose a random selection matrix Φ as measurement matrix, picking the M among N sensors to be active. Mathematically, this selection can be expressed as

y = Φ x = Φ Ψ s + Φ η   (3.16)
  = A s + n,   (3.17)

where Φ is an M × N binary selection matrix, A = ΦΨ and n = Φη is a subvector of η. Unless mentioned otherwise, the selection matrix Φ is chosen randomly (M sensors are randomly selected among the N sensors).
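The random sensor selection can be sketched as below; `selection_matrix` is a hypothetical helper building the binary M × N matrix Φ of eq. (3.16).

```python
import numpy as np

rng = np.random.default_rng(3)

def selection_matrix(M, N, rng):
    # Binary M x N selection matrix Phi: each row holds a single 1, so that
    # the M rows pick M distinct, randomly chosen sensors to remain active.
    rows = rng.choice(N, size=M, replace=False)
    Phi = np.zeros((M, N))
    Phi[np.arange(M), rows] = 1.0
    return Phi

N, M = 16, 6
Phi = selection_matrix(M, N, rng)
x = rng.random(N)            # stand-in for the N-sensor energy vector of eq. (3.14)
y = Phi @ x                  # y keeps only the M selected measurements
print(np.allclose(y, x[Phi.argmax(axis=1)]))   # → True
```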


Our objective in the following is to reconstruct the sparse representation s from the measurement vector y, supposing knowledge of the reconstruction matrix A.

3.2.3 Data aggregation schemes

Concerning the data collection task at the fusion center for processing purposes, different schemes can be envisaged. The data y_j sensed by sensor node j, j = 1, . . . , M, can for example be gathered at the fusion center using a mobile collection robot [96]. We can also adopt a deterministic access such as Frequency or Time Division Multiple Access (FDMA, TDMA), where over each frequency sub-band or during each time slot, one sensor can broadcast and transmit its data (energy measurements) through the network. This supposes a perfect synchronization and reliable communications [97, 98]. The deployed sensors readings can also be transmitted simultaneously to the sink, provided the sink knows the different links and these links are independent.

In our study, we consider a time division multiplexing scheme where the time is divided into packets of M slots, as shown in figure 3.3, in which each slot is dedicated to one sensor to transmit its measured energy to the network/sink. In this way, A(i, j) is the energy of the cell j targets received by sensor i, i ∈ {1, . . . , M} and j ∈ {1, . . . , N}. During time slot i, node i listens and communicates cooperatively through the network to the sink node. Reconstruction is then processed after the collection of all M sensors energy measurements.

[Figure: timeline of M slots; during slot i, the active sensor i transmits y_i = A(i, :) s + n_i.]

Figure 3.3: Measurements aggregation one-hop scheme from M among N active sensors

3.3 Sparse Targets Detection Scenarios

Sensors deployment should be adapted to the envisaged application. We hereafter consider two scenarios.


3.3.1 Large scale WSN

In a large scale WSN, a vast number of distributed sensor nodes are deployed over a large region. Such WSN are especially adequate in military fields, for enemy and arms detection, and in animal tracking in large forests [99]. The channel model adopted in the context of large scale WSN accounts for both Rayleigh fading and path loss effects, which correspond to the signal attenuation caused, respectively, by local scatterers and by distance. Therefore, the received signal at sensor node j corresponding to targets at cell i is expressed as in eq. (3.1), and the corresponding energy signal is given by eq. (3.13). As will be studied in the next chapter, the path loss seriously affects the reconstruction performance, causing a high targets recovery error. This problem is avoided in small scale WSN, as considered hereafter.

3.3.2 Small scale WSN

In this scenario, the distance between neighboring nodes is generally small, as is the case in industrial process monitoring, e.g. the localization of broken-down robots in manufacturing [100], intrusion detection and localization in commercial settings [93], and machine health monitoring [101].

For the remainder of this chapter, we consider small scale dense WSN where the spacing between sensors is so small that the large scale fading effect can be neglected, i.e. the path loss effect is disregarded, which corresponds, at the model level, to setting the path loss exponent α to 0. In this way, the received signal at sensor j corresponding to events located in an active cell i is reformulated as

z_{ij} = ∑_{k=1}^{s_i} h_{i_k j} g_{i_k}.   (3.18)

Then, assuming that all events emit with the same power P_0 and based on the formulation established in the first section, the energy received by the jth sensor can be approximated by

E(|x_j|²) = P_0 ∑_{n=1}^{N} |h_{nj}|² s_n + σ².   (3.19)

Next, we exploit the rareness of the cells holding targets in the monitored area to apply the CS approach to targets detection and counting in the dense small scale WSN scenario. More precisely, new algorithms are proposed which take into account the discrete nature of the sparse vector to be recovered (number of targets per cell).


3.4 Proposed Rare Events Detection Algorithms

Similarly to the work in [56], we consider that more than one target can occur in each cell. Our aim is then to locate the cells where events occur and to count the events number per cell, which confers a discrete nature to the sparse vector.

Recent works have addressed the problem of rare events detection and counting in WSN, to which CS theory is well adapted [102, 103]. In particular, the Matching Pursuit (MP) algorithm is formulated in this context through a greedy version denoted Greedy MP (GMP), which has the advantages of not requiring any knowledge of the signal sparsity level and of accounting for the discrete nature of the sparse parameter to be recovered.

Our first contribution is to enhance the GMP algorithm by providing a better tradeoff between complexity and performance. To this end, two new variants of GMP are proposed. Then, a new method based on the well-known OMP algorithm [78] and adapted to the context of a sparse parameter with discrete components is proposed. These contributions are detailed hereafter.

3.4.1 MP based algorithms for discrete parameter recovery

In the first part of this subsection, an overview of the recent GMP approach is given. Then, in order to reduce the high computational burden of GMP, a new version, denoted Two-stages GMP (2S-GMP) and based on the separation of the detection and counting stages, is proposed. As a second enhancement, generalized versions of both GMP and 2S-GMP are derived to improve the detection capacity. This generalization is based on the identification of more than one active cell position at each iteration.

3.4.1.1 Greedy MP (GMP) algorithm

In this section, we briefly explain how the GMP algorithm proposed in [56] can be used to jointly count and localize targets from a small number of measurements, based on the pseudocode given in Algorithm 2.

Intuitively, the cell holding the largest number of targets should contribute the most to the measured energies contained in y. At each iteration, we search for both the cell i and the corresponding targets number, denoted by z_i in Algorithm 2, that minimize the square error ‖y^{(i−1)} − A z‖₂², where z is a vector containing zeros except at position i, whose value z_i lies in the set {0, 1, . . . , m}, and y^{(i−1)} is the residual vector. The residual is then updated and the selected cell is removed from the set Ω of presumed inactive cells. The algorithm terminates when no cell containing at least one target is found.

Algorithm 2 GMP

Input: matrix A, measurement vector y.
Output: reconstructed signal s.
Initialization: residual y^{(0)} = y, index set Ω = {1, . . . , N}, z = [z_1, . . . , z_N]^T an N-dimensional column vector initialized to 0, i = 1.
while true do
  1) Find the cell index g_i that contains z_i targets via
     (g_i, z_i) = argmin_{n∈Ω, p∈{0,...,m}} ‖y^{(i−1)} − A [0, . . . , z_n = p, . . . , 0]^T‖₂².   (3.20)
  if z_i = 0 then break
  end if
  2) Ω = Ω \ {g_i},
  3) s_{g_i} = z_i,
  4) y^{(i)} = y^{(i−1)} − A z,
  5) Reset the vector z to 0 and set i = i + 1.
end while
Return s.
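A minimal Python transcription of Algorithm 2 is sketched below; the function name `gmp` and the toy orthonormal example are ours, not the thesis' (a real sensing matrix A = ΦΨ is not orthonormal).

```python
import numpy as np

def gmp(A, y, m):
    # Greedy MP (Algorithm 2): at each iteration, jointly pick the cell
    # index and target count p in {0, ..., m} minimizing the residual
    # error; stop when the best count is zero.
    N = A.shape[1]
    s = np.zeros(N, dtype=int)
    r = y.astype(float).copy()
    omega = set(range(N))
    while omega:
        best = (np.inf, None, 0)
        for n in sorted(omega):
            for p in range(m + 1):
                err = np.linalg.norm(r - p * A[:, n]) ** 2
                if err < best[0]:
                    best = (err, n, p)
        _, g, z = best
        if z == 0:                 # no cell with at least one target found
            break
        s[g] = z
        r = r - z * A[:, g]
        omega.discard(g)
    return s

A = np.eye(8)                      # toy decomposition basis for illustration only
s_true = np.zeros(8, dtype=int)
s_true[[1, 5]] = [2, 3]            # two active cells holding 2 and 3 targets
print(gmp(A, A @ s_true, m=3))     # → [0 2 0 0 0 3 0 0]
```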

3.4.1.2 Two-stages GMP (2S-GMP) algorithm

In order to reduce the high complexity of GMP, induced by the joint detection of the active cell and the counting of its targets, we propose to separate the detection and counting stages. At each step, we first identify the active cell and then evaluate the number of targets within the detected cell. The main steps of the proposed two-stages GMP (2S-GMP) algorithm are detailed and summarized in the pseudocode of Algorithm 3.

Our aim is to determine which columns of A participate in the measurement vector y. In each iteration i of the proposed 2S-GMP, we first choose the column of A that is most strongly correlated with the remaining part of y (the residual y^{(i−1)}). This corresponds to the detection of the cell where events occurred. Once the active cell is identified by this projection procedure, we evaluate the number of targets in this cell, which corresponds to the counting step. To this end, we enumerate all possible values of the targets number s_i ∈ {0, 1, . . . , m} for the already detected active cell g_i and find the one that contributes the most to the residual observation vector y^{(i−1)}. Then, we subtract off its contribution and update the residual vector.


Algorithm 3 2S-GMP

Input:
• An M × N matrix A.
• An M-dimensional measurement vector y.
Output: an N-dimensional reconstructed signal s.
Initialization: residual y^{(0)} = y, index set Ω = {1, . . . , N}, i = 1.
Procedure: at the ith iteration
  1) Find the index g_i that solves
     g_i = argmax_{j∈Ω} |⟨y^{(i−1)}, A_j⟩| / (‖A_j‖₂ ‖y^{(i−1)}‖₂),   (3.21)
     where A_j is the jth column vector of the matrix A.
  2) Find the number of targets in the cell, z_{g_i} = p, such that
     z_{g_i} = argmin_{p∈{0,1,...,m}} ‖y^{(i−1)} − p A_{g_i}‖₂².   (3.22)
     If z_{g_i} = 0, then break; else continue.
  3) Updating step
     • Ω = Ω \ {g_i},
     • s_{g_i} = z_{g_i},
     • y^{(i)} = y^{(i−1)} − A_{g_i} z_{g_i} and i = i + 1.
     • Go to step 1.
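Under the same toy assumptions as before (hypothetical `ts_gmp` name, orthonormal example matrix), Algorithm 3 can be transcribed as follows:

```python
import numpy as np

def ts_gmp(A, y, m):
    # Two-stages GMP (Algorithm 3): detect the active cell by maximum
    # normalized correlation with the residual (eq. 3.21), then count its
    # targets by enumerating p in {0, ..., m} (eq. 3.22).
    N = A.shape[1]
    s = np.zeros(N, dtype=int)
    r = y.astype(float).copy()
    omega = list(range(N))
    col_norm = np.linalg.norm(A, axis=0)
    while omega:
        rn = np.linalg.norm(r)
        if rn == 0:                       # residual fully explained
            break
        g = max(omega, key=lambda j: abs(r @ A[:, j]) / (col_norm[j] * rn))
        p = min(range(m + 1), key=lambda t: np.linalg.norm(r - t * A[:, g]) ** 2)
        if p == 0:
            break
        s[g] = p
        r = r - p * A[:, g]
        omega.remove(g)
    return s

A = np.eye(8)
s_true = np.zeros(8, dtype=int)
s_true[[1, 5]] = [2, 3]
print(ts_gmp(A, A @ s_true, m=3))   # → [0 2 0 0 0 3 0 0]
```

Compared with GMP, the detection sweep costs one correlation per remaining column instead of (m + 1) residual evaluations per column.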

3.4.1.3 Generalized GMP and 2S-GMP algorithms

Considering the existing methods developed in the framework of discrete events detection and counting, we propose a generalization of the two algorithms proposed above, GMP and 2S-GMP. The so-generalized greedy algorithms are denoted gGMP, for generalized GMP, and g2S-GMP, for generalized 2S-GMP. The generalization consists in identifying more than one active cell (q > 1) at each iteration. A similar procedure is investigated in [104], in a noiseless scenario, for the recovery of continuous entries sparse vectors by the OMP algorithm.

• Generalized GMP (gGMP) algorithm

With the classical GMP algorithm, at each iteration, only one cell (q = 1) is detected as active and the number of its targets is counted. With the proposed gGMP version, the targets detection and counting in q > 1 cells is processed at each iteration. The different steps of the proposed gGMP are detailed in the pseudocode of Algorithm 4.

At iteration i, we search the set P^{(i)} of possible combinations of q positions taken from Ω^{(i−1)}, where Ω^{(i−1)} is the set of cell indices not yet detected as active. Then, since each active cell can hold a number of targets in {1, . . . , m}, we form V as the set of vectors of q elements, each taking its values in {0, 1, . . . , m}. Its cardinal is card(V) = (m + 1)^q.

The optimization step consists of finding the best element p in P^{(i)} (cells positions) associated to an element v in V (associated number of targets per cell) which most contributes to the residual observation y^{(i)}. The result of iteration i is the N × 1 vector z_opt^{(i)} which minimizes ‖y^{(i)} − A z_p^v‖₂² over the possible vectors z_p^v with active (nonzero) positions p in P^{(i)} and corresponding values v in V. The q cells so detected have positions denoted by p_opt^{(i)} ∈ P^{(i)} and associated targets number v_opt^{(i)} ∈ V.

Ω^{(i)} is then updated by removing the q newly detected positions. y^{(i)} is updated by subtracting the located events contribution from the residual observation according to (3.25), where A^{(i)} is the submatrix of A containing the columns at positions p_opt^{(i)}.

• Generalized 2S-GMP (g2S-GMP) algorithm

A similar procedure is adopted for the generalized version of 2S-GMP. The main stages of g2S-GMP are summarized in the pseudocode of Algorithm 5.

Indeed, q cell positions with the corresponding events numbers are detected at each iteration of the algorithm. It differs from gGMP by the separation of the events detection and counting stages, as is the case for 2S-GMP (Algorithm 3). In this way, at each iteration, the correlations between the updated measurement y^{(i)} (residual) and the columns of A with positions in Ω^{(i)} (the set of not yet detected positions) are compared, and the q indices of the columns with maximum correlation values are chosen as the newly detected active cell positions p_opt^{(i)}. After the detection step, we seek the events number in the q detected cells. As for gGMP, the same set V is checked for the q detected positions to search the optimal values v_opt^{(i)} ∈ V optimizing the cost function given by equation (3.28) in the pseudocode of Algorithm 5. The two operations are repeated until at least one of the q detected cells has an events number equal to zero.

3.4.2 OMP based algorithms for discrete parameter recovery

After having envisaged several enhancements of the GMP algorithm [56] for targets positioning in WSN, we here propose a novel Greedy OMP (GOMP) algorithm that outperforms GMP in both detection and counting performance. Like GMP, GOMP detects at each iteration the cell which contributes the most to the residual observation. The proposed enhancement is based on accounting for the non-orthogonality of the decomposition basis. This non-orthogonality


Algorithm 4 gGMP

Input:
• An M × N matrix A.
• An M-dimensional measurement vector y.
• Number of cells q to detect per iteration.
Output: an N-dimensional reconstructed signal s with integer entries.
Initialization: iteration count i = 0, residual vector y^{(0)} = y, Ω^{(0)} = {1, 2, . . . , N}, l^{(0)} = 0, reconstructed signal s = 0_{N×1}.
Procedure:
while l^{(i)} = 0 do
  i = i + 1
  1) Form the sets P^{(i)} and V, and solve
     (p_opt^{(i)}, v_opt^{(i)}) = argmin_{p∈P^{(i)}, v∈V} ‖y^{(i−1)} − A z_p^v‖₂²,   (3.23)
     with z_p^v(p_i) = v_i.
     l^{(i)} = Card({ j : v_opt^{(i)}(j) = 0, for j = 1, . . . , q }).
  2) Updating phase
     Ω^{(i)} = Ω^{(i−1)} \ p_opt^{(i)},   (3.24)
     y^{(i)} = y^{(i−1)} − A z_{p_opt^{(i)}}^{v_opt^{(i)}},   (3.25)
     s(p_opt^{(i)}) = v_opt^{(i)}.   (3.26)
end while
Return s.
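Under the same toy assumptions as in the previous sketches (hypothetical `ggmp` name, orthonormal example matrix), Algorithm 4 can be transcribed as follows; note that the per-iteration cost grows combinatorially, as C(|Ω|, q)·(m + 1)^q candidates are scanned.

```python
import numpy as np
from itertools import combinations, product

def ggmp(A, y, m, q):
    # Generalized GMP (Algorithm 4): jointly detect q cells and their
    # integer target counts at each iteration; stop once the best solution
    # assigns a zero count to at least one of the q detected cells.
    N = A.shape[1]
    s = np.zeros(N, dtype=int)
    r = y.astype(float).copy()
    omega = set(range(N))
    while len(omega) >= q:
        best = (np.inf, None, None)
        for pos in combinations(sorted(omega), q):      # candidate supports P^(i)
            Ap = A[:, pos]
            for v in product(range(m + 1), repeat=q):   # candidate counts V
                err = np.linalg.norm(r - Ap @ np.array(v)) ** 2
                if err < best[0]:
                    best = (err, pos, v)
        _, pos, v = best
        for j, vj in zip(pos, v):
            s[j] = vj
        r = r - A[:, pos] @ np.array(v)
        omega -= set(pos)
        if 0 in v:                                      # l^(i) > 0: stop
            break
    return s

A = np.eye(6)
s_true = np.array([0, 2, 0, 3, 0, 0])
print(ggmp(A, A @ s_true, m=3, q=2))   # → [0 2 0 3 0 0]
```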

is indeed related to the channel matrix Ψ. Further, a two-stages reduced complexity version of GOMP, 2S-GOMP, is proposed which, as was done for GMP with 2S-GMP, separates the detection and counting stages in order to reduce complexity.

3.4.2.1 Greedy OMP (GOMP) algorithm for discrete parameter recovery

We here develop the novel GOMP algorithm for sparse targets counting and localization in WSN. Following the GMP algorithm (Algorithm 2) and accounting for the discrete entries of the sparse vector, the herein proposed algorithm is also greedy. The main difference is


Algorithm 5 g2S-GMP

Same input and initialization phase as the gGMP algorithm.
while l^{(i)} = 0 do
  i = i + 1
  1) Detection step: select the q indices with maximum normalized correlation,
     p_opt^{(i)} = argmax_{j∈Ω^{(i−1)}} |⟨y^{(i−1)}, A_j⟩| / (‖A_j‖₂ ‖y^{(i−1)}‖₂).   (3.27)
  2) Counting step:
     v_opt^{(i)} = argmin_{v∈V} ‖y^{(i−1)} − A z_{p_opt^{(i)}}^{v}‖₂²,   (3.28)
     l^{(i)} = Card({ j : v_opt^{(i)}(j) = 0 for j = 1, . . . , q }).
  3) Updating phase: as in the gGMP algorithm.
end while
Return s.

related to accounting for the non-orthogonality of the decomposition basis A = ΦΨ. To this end, we adapt the OMP algorithm to the case of a discrete parameter in the pseudocode given by Algorithm 6.

At the ith iteration, we first seek all combinations (possibilities) C^{(i)} of i values (numbers of events) taken from the set {1, 2, . . . , m}. We also incorporate the case where the newly detected position is zero valued (no more events to detect). The number of tested combinations is then Card(C^{(i)}) = m^{(i−1)}(m + 1). The number of possibilities for the new active cell at the ith iteration is N − i + 1. Then, a newly detected active cell g_i containing c_opt targets is found via the optimization problem given by eq. (3.29). It is worth noting that the events numbers in the cells already detected as active (entries of s) are updated after each new active cell detection. Also, the detected positions set is updated as shown in eq. (3.30). Note that the set D^{(i−1)} contains the positions detected up to the (i−1)th iteration, and the ith position is to be determined within Ω. The algorithm stops when the newly detected active cell has zero events.


Algorithm 6 GOMP

Input:
• An M × N matrix A.
• An M-dimensional measurement vector y.
Output: an N-dimensional reconstructed signal s with integer entries.
Initialization: Ω = {1, 2, . . . , N}, D^{(0)} = ∅.
Procedure: at the ith iteration
  1) Form the set C^{(i)} giving all possibilities of the i cells events numbers.
  2) New active cell detection step
     (g_i, c_opt) = argmin_{j∈Ω, c∈C^{(i)}} ‖y − A z_i^c‖₂²,   (3.29)
     where z_i^c is a vector with nonzero elements at the positions in D^{(i−1)} plus one more position j ∈ Ω, with corresponding events numbers taken from c ∈ C^{(i)}.
  3) If the newly detected cell has zero events, then break. Else, update the detected active cells set as
     D^{(i)} = D^{(i−1)} ∪ {g_i} and Ω = Ω \ {g_i},   (3.30)
     and go back to steps 2) and 3).
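Under the same toy assumptions as in the earlier sketches (hypothetical `gomp` name, orthonormal example matrix), Algorithm 6 can be transcribed as follows; the key difference from GMP is that all previously detected counts are re-optimized against the full measurement y at each iteration.

```python
import numpy as np
from itertools import product

def gomp(A, y, m):
    # Greedy OMP (Algorithm 6): at each iteration, re-optimize the integer
    # counts of all previously detected cells (values in 1..m) jointly with
    # one new candidate cell (values in 0..m) against the full measurement
    # y; stop when the best count for the new cell is zero.
    N = A.shape[1]
    s = np.zeros(N, dtype=int)
    detected = []                       # D^(i-1)
    omega = set(range(N))
    while omega:
        best = (np.inf, None, None)
        for j in sorted(omega):
            Ac = A[:, detected + [j]]
            # C^(i): previous cells keep counts in {1..m}, new cell in {0..m}
            for c in product(*([range(1, m + 1)] * len(detected) + [range(m + 1)])):
                err = np.linalg.norm(y - Ac @ np.array(c)) ** 2
                if err < best[0]:
                    best = (err, j, c)
        _, j, c = best
        if c[-1] == 0:                  # newly detected cell holds no event
            break
        detected.append(j)
        omega.discard(j)
        for idx, val in zip(detected, c):
            s[idx] = val
    return s

A = np.eye(8)
s_true = np.zeros(8, dtype=int)
s_true[[1, 5]] = [2, 3]
print(gomp(A, A @ s_true, m=3))   # → [0 2 0 0 0 3 0 0]
```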

3.4.2.2 Two-stage GOMP (2S-GOMP) algorithm

As presented in the previous subsection, the GOMP algorithm is based on the joint detection of the new active cell position and the counting of targets in the set of already detected cells. Despite its potential for improved performance compared to the GMP algorithm, GOMP incurs a high computational load due to the implied multidimensional optimization. To reduce this computational load, we here propose a modified version of GOMP, called 2S-GOMP, which separates the detection and counting steps, as presented in Algorithm 7.

Indeed, at each iteration, as in the 2S-GMP technique (Algorithm 3), we start with the active cell position detection through the optimization of eq. (3.31). Once the new active cell is determined, the number of targets in the set of already detected active cells is updated. Table 3.1 compares the proposed schemes in terms of strategy and number of detected cells per iteration.


Chapter 3. Sparse Static Signal Recovery through Rare Events Detection and Counting in Small Scale WSN

Algorithm 7 2S-GOMP
Input: an M × N matrix A; an M-dimensional signal measurement vector y.
Output: an N-dimensional reconstructed signal s with integer entries.
Initialization: the residual y(0) = y; Ω = {1, 2, . . . , N}; D(0) = ∅.
Procedure: at the ith iteration,
1) Detection step: find the position gi that solves the following optimization

gi = argmax_{j∈Ω} |⟨y(i−1), A_j⟩| / (‖A_j‖₂ ‖y(i−1)‖₂),   (3.31)

where A_j is the jth column vector of the matrix A.
2) Update the detected active cell positions as

D(i) = D(i−1) ∪ {gi}.   (3.32)

3) Counting step: find the number of targets in the detected active cells

copt = argmin_{c∈C(i)} ‖y − A z_{D(i)}^c‖₂²,   (3.33)

where the set C(i) (defined as in GOMP) is to be determined and tested, and z_{D(i)}^c is a vector whose elements at the positions in D(i) are the event numbers c ∈ C(i).
If the optimal copt ∈ C(i) in (3.33) corresponds to a zero value in the ith (last detected) cell, then break. Else, go to step 4).
4) Updating step:
• Update the residual as

y(i) = y − A z_{D(i)}^{copt}.   (3.34)

• Iterate steps 1) to 4).
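The separation of the two stages can be sketched as follows; this is a simplified illustration rather than the thesis code. The detection step implements the normalized correlation of eq. (3.31) and the counting step an exhaustive integer search; a guard on a zero residual is added here for numerical safety (an assumption of this sketch):

```python
import numpy as np
from itertools import product

def two_stage_gomp(y, A, m, max_iter=None):
    """Sketch of Algorithm 7 (2S-GOMP): correlation-based detection,
    then exhaustive integer counting over the detected support."""
    M, N = A.shape
    col_norms = np.linalg.norm(A, axis=0)
    residual = y.copy()
    detected, counts = [], []
    for _ in range(max_iter or N):
        nr = np.linalg.norm(residual)
        if nr == 0:                          # nothing left to explain
            break
        # 1) detection step, eq. (3.31): most correlated remaining column
        corr = np.abs(A.T @ residual) / (col_norms * nr)
        corr[detected] = -np.inf
        g = int(np.argmax(corr))
        support = detected + [g]             # 2) support update, eq. (3.32)
        # 3) counting step, eq. (3.33); the newest cell may take 0 (stop)
        best = (np.inf, None)
        for c_old in product(range(1, m + 1), repeat=len(detected)):
            for c_new in range(0, m + 1):
                z = np.zeros(N)
                z[support] = list(c_old) + [c_new]
                r = np.linalg.norm(y - A @ z) ** 2
                if r < best[0]:
                    best = (r, list(c_old) + [c_new])
        if best[1][-1] == 0:                 # last detected cell is empty
            break
        detected, counts = support, best[1]
        z = np.zeros(N)
        z[detected] = counts
        residual = y - A @ z                 # 4) residual update, eq. (3.34)
    s_hat = np.zeros(N, dtype=int)
    s_hat[detected] = counts
    return s_hat
```

Compared with the previous sketch, only the counting remains exhaustive; the position search is a single correlation maximization per iteration.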

Before assessing the performance of the above algorithms, the next section investigates optimized sensor selection schemes (the choice of Φ).


Algorithm   Detection & counting   Number of detected cells per iteration
GMP         joint                  q = 1
2S-GMP      consecutive            q = 1
gGMP        joint                  q > 1
g2S-GMP     consecutive            q > 1
GOMP        joint                  q = 1
2S-GOMP     consecutive            q = 1

Table 3.1: Procedures comparison.

3.5 Optimized Sensors Selection Schemes

CS theory states that, under sparsity conditions, only a reduced number M ≪ N of measurements is needed for the reconstruction of s. Hence, for a binary measurement matrix Φ, only M sensors need to be activated at a time while the remaining ones can enter sleep mode, which may extend the network lifetime. In this section, we discuss the problem of optimizing the sensor selection procedure, which has so far been performed randomly [56, 92]. Active sensor selection is here optimized with respect to criteria on the reconstruction matrix which, in the CS framework, govern the quality of the reconstruction. In the literature, this quality is evaluated through the RIP condition or the coherence measure. In our context, the reconstruction matrix is A = ΦΨ, where Φ is a binary selection matrix, which implies that A is a sub-matrix made of M among the N rows of Ψ. We hereafter propose and compare several approaches. First, we consider choices based on the channel matrix Ψ, where we aim either to minimize the coherence or to maximize the channel energy. In the second case, we focus on the energy of the measured observation. These schemes are expected to outperform a random sensor selection scheme since they optimize criteria related to the reconstruction matrix coherence or to the energy.

• Optimized schemes based on the channel matrix

Cumulated channel gains: each row of the channel matrix gathers the channels between one sensor and all the others. In this scheme, we keep the M among N sensors with the highest cumulated channel gains (sums of the amplitudes of the corresponding elements of Ψ), i.e., we retain only the M rows with the highest cumulated amplitudes.

Coherence: as stated in CS theory, for the successful recovery of a sparse signal, the coherence of the decomposition matrix A should be minimized [65]. In this second scheme, our aim is to extract from Ψ a sub-matrix of size M × N with a minimum coherence measure. The coherence of a matrix A is indeed evaluated as the maximum absolute correlation between two different


columns, as

μ(A) = max_{1≤i,j≤N, i≠j} |A_i^T A_j| / (‖A_i‖₂ ‖A_j‖₂).   (3.35)

We adopt a backward processing to find a sub-matrix of dimension M × N [105]. We start from the full matrix Ψ and apply an iterative scheme: at each iteration, we test the deletion of every possible row of the current matrix, compute the coherence of each resulting sub-matrix, and eliminate the row whose removal leads to the sub-matrix with the minimal coherence. We repeat this process until a sub-matrix of dimensions M × N is obtained, which requires N − M iterations.
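A small sketch of this backward elimination, using the coherence measure of eq. (3.35) and a greedy row deletion; the function names are illustrative, not from the thesis:

```python
import numpy as np

def coherence(A):
    """Mutual coherence, eq. (3.35): maximum absolute normalized inner
    product between two distinct columns of A."""
    G = A / np.linalg.norm(A, axis=0)        # normalize each column
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)                 # ignore self-correlations
    return C.max()

def backward_row_selection(Psi, M):
    """Greedy backward elimination: starting from the full matrix,
    repeatedly delete the row whose removal yields the sub-matrix with
    the smallest coherence, until M rows remain (N - M iterations)."""
    rows = list(range(Psi.shape[0]))
    while len(rows) > M:
        best_r, best_mu = None, np.inf
        for r in rows:                       # test all possible row deletions
            sub = Psi[[i for i in rows if i != r], :]
            mu = coherence(sub)
            if mu < best_mu:
                best_mu, best_r = mu, r
        rows.remove(best_r)
    return rows                              # indices of the M kept rows
```

Each of the N − M outer iterations evaluates one coherence per remaining row, so this exhaustive variant is only practical for moderate N.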

• Optimized scheme based on the measured observation

In this part, we are interested in the energy of the components of the global measured observation x, and we keep the M measurements with the maximum energy. Contrary to the random selection and to the channel-matrix-based schemes above, the full measured observation x is required here, so all sensor nodes must be activated once.
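Both the channel-based cumulated-gain scheme and this observation-energy scheme reduce, in code, to a simple top-M ranking; a sketch with illustrative function names:

```python
import numpy as np

def select_by_cumulated_gain(Psi, M):
    """Channel-based scheme: keep the M rows of Psi with the largest
    cumulated amplitudes (only the channel matrix is needed)."""
    scores = np.abs(Psi).sum(axis=1)
    return np.sort(np.argsort(scores)[-M:])

def select_by_observation_energy(x, M):
    """Observation-based scheme: keep the M entries of the full
    observation x with maximum energy (all sensors must measure once)."""
    return np.sort(np.argsort(np.abs(x) ** 2)[-M:])
```

The returned index sets define the binary selection matrix Φ (one row per kept index).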

The next section is dedicated to the performance evaluation of the above discussed approaches for rare events detection and counting in WSN.

3.6 Numerical Study

We hereafter evaluate the recovery performance and computational load of the above proposed CS algorithms, and discuss their scalability.

3.6.1 Parameters setting

A small monitored area of 16m × 16m is regularly divided into N = 64 cells (8 by 8 cells).

Each cell is equipped with one sensor. Then, the distances between any two neighboring nodes

are fixed to 2m which leads to a small and dense monitored area with almost 20m as higher

remoteness between sensors. We activate sensors in M = 20 cells. Among the N cells, we

select randomly K ≪ M active cells where the targets number is chosen uniformly at random

from 1, 2, 3 (m = 3). As mentioned above, the large scale fading effect can be neglected,

then the path loss exponent is here fixed to α = 0. The target transmitted power is fixed

to P0 = 1W . For the generalized version, the case of two and three cells detection at each

iteration (q = 2 and q = 3) are hereafter envisaged.

The performance is evaluated in terms of the Normalized Mean Squared Error on s (NMSEt) and on the active cell positions detection, independently of the estimated number of events (NMSEp), for a varying signal to noise ratio defined as SNR = P0/σ². The error in position detection accounts for both misses (undetected active cells) and false alarms (erroneously detected cells where there is no target). The chosen performance evaluation criteria NMSEt and NMSEp are, respectively, given by

NMSEt = E(‖s − ŝ‖₂²) / E(‖s‖₂²),   (3.36)

where s and ŝ denote the vectors concatenating the true and estimated numbers of events per cell, and

NMSEp = E(‖z − ẑ‖₂²) / E(‖z‖₂²),   (3.37)

where z and ẑ are binary vectors obtained from s and ŝ, respectively, by placing 1 at the non-zero entry positions and 0 elsewhere. Additionally, the rates of correct detection and of counting error (COE) over the true detection realizations are reported. More precisely, the COE rate is given by

COE = ( Σ_{i=1}^{N} |n_i − n'_i| ) / ( Σ_{i=1}^{N} n_i ),   (3.38)

where n_i and n'_i denote respectively the actual and the estimated number of targets at cell i.
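For a single realization (the expectations replaced by sample values), these criteria can be sketched as:

```python
import numpy as np

def nmse(true, est):
    """Sample version of the normalized MSE of eqs. (3.36)-(3.37)."""
    return np.sum((true - est) ** 2) / np.sum(true ** 2)

def coe(n_true, n_est):
    """Counting error rate of eq. (3.38)."""
    return np.sum(np.abs(n_true - n_est)) / np.sum(n_true)

def binarize(s):
    """Support indicator: 1 at non-zero entries, 0 elsewhere."""
    return (s != 0).astype(float)

# NMSEp is nmse() applied to the binarized vectors z and z_hat.
```

In the simulations of this chapter, these quantities would then be averaged over the Monte Carlo realizations.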

3.6.2 Numerical results analysis

In this section, we set up simulation results. First, a comparison between GMP, 2S-GMP and

generalized versions gGMP, g2S-GMP algorithms is made. Then, the detection performance of

the proposed GOMP and 2S-GOMP algorithms is evaluated. Finally, a comparative study of

all proposed Greedy versions is carried with both random and optimized sensors selection.

3.6.2.1 Generalized versions performance

We first study the target detection performance of the generalized versions gGMP and g2S-GMP compared to GMP and 2S-GMP in the cases of two and three cells detected at each iteration, i.e. q = 2, 3. Figure 3.4 illustrates an example of target localization where we randomly select M = 25 sensors and fix SNR = 15 dB. This example clearly indicates that gGMP with q = 2, 3 can precisely recover the K = 4 target locations, whereas GMP correctly estimates only 2 among the 4 targets. However, 2S-GMP achieves an enhanced detection compared to g2S-GMP.


[Figure: two panels ("gGMP versions" and "g2S-GMP versions") over the 16 m × 16 m area, showing the selected active sensors, the targets, and the estimates for q = 1, 2, 3.]

Figure 3.4: Targets positions estimation when SNR = 15 dB (one realization), N = 64, M = 25, K = 4.

In the second part, we consider the performance of the proposed gGMP and g2S-GMP compared to GMP and 2S-GMP versus SNR for N = 64, K = 4 and M = 20. Figure 3.5 displays the performance in terms of NMSEt, NMSEp, COE and correct detection rates for different numbers of detected cells per iteration, q = 1, 2, 3. A considerable improvement of the recovery accuracy is obtained with gGMP as q grows. In fact, gGMP with q = 3 achieves the lowest NMSE rates, realizing a gain of the order of 11 dB w.r.t. GMP (case q = 1); for q = 2, this gain is of the order of 7 dB. The lowest COE and the best correct detection rates are also achieved with gGMP, especially for q = 3. For 2S-GMP, the generalization is not beneficial and the performance degrades as q increases: operating in two-stage mode improves upon joint detection/counting only for q = 1.

3.6.2.2 Performance of GOMP

In this part, we evaluate the proposed GOMP algorithm and its 2S version. An example of target position estimation, given in Figure 3.6, demonstrates that the proposed GOMP and 2S-GOMP algorithms correctly detect the active cell positions, unlike GMP and 2S-GMP, for SNR = 15 dB, N = 64, M = 25 and K = 4.

Having examined the detection performance of the proposed greedy versions based on the OMP algorithm, a comparison between gGMP and g2S-GMP with q = 2, GOMP, 2S-GOMP, 2S-GMP and GMP is carried out. Examining the NMSE (NMSEt and NMSEp) rates versus SNR given


[Figure: four panels versus SNR[dB] showing NMSEt, NMSEp, COE and the rate of correct detection for q = 1, 2, 3.]

Figure 3.5: Performance evaluation of the simple and generalized versions when N = 64, M = 25 and K = 4. Solid lines correspond to GMP/gGMP and dashed lines to 2S-GMP/g2S-GMP.

by Figure 3.7, we notice that enhanced NMSEt performance is attained by the proposed GOMP. A similar behavior of the COE and correct detection rates is observed in Figure 3.7: GOMP outperforms the GMP variants and the 2S-GOMP scheme by achieving the lowest COE and the best correct detection rates over the whole SNR range.

We now study the effect of the number M of activated sensors in the network. NMSEt and correct detection rates are displayed in Figure 3.8, obtained for K = 3, SNR = 20 dB and varying values of M. The larger M, the higher the rate of correct active cell position detection and the lower the error rates. GOMP achieves the best performance in terms of NMSEt, followed by the modified 2S-GOMP version. Similarly, GOMP and 2S-GOMP achieve an enhanced detection rate compared to gGMP with q = 2.


[Figure: two panels ("GMP and GOMP comparison", "2S-GMP and 2S-GOMP comparison") over the 16 m × 16 m area, showing the selected active sensors, the targets and the estimated positions.]

Figure 3.6: Targets positions estimation when SNR = 15 dB, N = 64, M = 25, K = 4.

3.6.2.3 Scalability

The scalability effect is analyzed in Figure 3.9, where the rate of correct detection is evaluated versus the number of network cells N while keeping the same spacing between sensors (so that the small scale network assumption remains valid). The results are obtained over 200 realizations and for square networks of size N = N0², with N0 ranging from 5 to 10. The compression and sparsity levels are fixed to M/N = 0.3 and K/N = 0.03, respectively. We observe that the detection performance remains acceptable when the network dimensions are extended: as N increases, the detection rates remain very good, even if a slight decrease is observed with the gGMP version for larger N, as it also implies a larger K. GOMP also outperforms the other schemes for larger networks.

For all the previously presented results, a random selection of the deployed sensors was adopted. In the following, a comparative study of the optimized sensor selection schemes proposed in Section 3.5 is carried out. NMSEt and correct detection curves are displayed in Figure 3.10. This figure demonstrates the enhanced performance of the proposed sensor selection approach based on the measured observation x over those based on the reconstruction matrix, which in turn outperform the random selection. Indeed, it achieves the lowest NMSEt, with a gain of 6 dB at low SNR compared to the random selection, and it also leads to the best correct detection rates, as shown in Figure 3.10. Among the optimized schemes based on the channel matrix, the coherence optimization with backward processing outperforms the criterion


[Figure: four panels versus SNR[dB] showing NMSEt, NMSEp, COE and the rate of correct detection for GMP, 2S-GMP, gGMP (q=2), g2S-GMP (q=2), GOMP and 2S-GOMP.]

Figure 3.7: Performance comparison when N = 64, M = 20 and K = 3.

of maximal energy rows.

After having assessed the recovery performance, we hereafter evaluate the computational load of the envisaged methods.

3.6.3 Computational load

The complexity Calgo of the sparse target detection algorithms is here evaluated in terms of complex multiplications. Table 3.2, where Kalgo denotes the number of iterations of the algorithm indicated in the subscript, presents a comparison of the computational complexities. We recall that the signal sparsity level K (number of active cells) is not available in practice. All considered algorithms operate without knowledge of K and stop iterating when the number of events to count is zero; this advantage is inherited from the discrete nature of the sparse parameter.


[Figure: two panels versus the measurement number M showing NMSEt and the rate of correct detection for GMP, 2S-GMP, gGMP (q=2), g2S-GMP (q=2), GOMP and 2S-GOMP.]

Figure 3.8: Performance comparison versus measurements number (M) when SNR = 20 dB, N = 64 and K = 3.

[Figure: rate of correct detection versus the number of cells for GMP, 2S-GMP, gGMP (q=2), g2S-GMP (q=2), GOMP and 2S-GOMP.]

Figure 3.9: Correct events detection versus cells number when M/N ≈ 0.3, K/N ≈ 0.03, and SNR = 20 dB.

Firstly, we consider GMP and its modified version 2S-GMP. Considering that N ≫ M > K and i ≤ K, for each i we have

(m + 1)(N − i + 1) > (N − i + 1) + (m + 1).   (3.39)

As shown in Figure 3.11, which gives the mean number of iterations and the mean run time, in the case of


[Figure: two panels versus SNR[dB] showing NMSEt and the rate of correct detection for the random selection, backward, max energy, and rows with max energy schemes.]

Figure 3.10: Comparison between the different proposed optimized sensor selection approaches for the GMP algorithm when M = 20 and K = 3.

Algorithm   Complexity
GMP         C_GMP = M (m + 1) Σ_{i=1}^{K_GMP} (N − i + 1)
2S-GMP      C_2S-GMP = M Σ_{i=1}^{K_2S-GMP} ((N − i + 1) + (m + 1))
gGMP        C_gGMP = M q (m + 1)^q Σ_{i=1}^{K_gGMP} C^q_{Ω(i−1)}, with Card(Ω(i−1)) = N − q(i − 1)
g2S-GMP     C_g2S-GMP = M Σ_{i=1}^{K_g2S-GMP} ((N − q(i − 1)) + q (m + 1)^q)
GOMP        C_GOMP = M Σ_{i=1}^{K_GOMP} (N − i + 1) m^(i−1) (m + 1) i
2S-GOMP     C_2S-GOMP = M Σ_{i=1}^{K_2S-GOMP} ((N − i + 1) + m^(i−1) (m + 1) i)

Table 3.2: Computational complexity comparison.
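The formulas of Table 3.2 can be evaluated directly; the sketch below, with C^q interpreted as a binomial coefficient over the current candidate set and m^(i−1) as a power, reproduces the numerical values reported later in Table 3.3 for m = 3, M = 20, N = 64 (an independent check, not thesis code):

```python
from math import comb

# Complexity formulas of Table 3.2 (numbers of complex multiplications).
def c_gmp(M, N, m, K):
    return M * (m + 1) * sum(N - i + 1 for i in range(1, K + 1))

def c_2s_gmp(M, N, m, K):
    return M * sum((N - i + 1) + (m + 1) for i in range(1, K + 1))

def c_ggmp(M, N, m, K, q):
    # Card(Omega^(i-1)) = N - q(i-1); C^q over it is a binomial coefficient
    return M * q * (m + 1) ** q * sum(comb(N - q * (i - 1), q)
                                      for i in range(1, K + 1))

def c_g2s_gmp(M, N, m, K, q):
    return M * sum((N - q * (i - 1)) + q * (m + 1) ** q
                   for i in range(1, K + 1))

def c_gomp(M, N, m, K):
    return M * sum((N - i + 1) * m ** (i - 1) * (m + 1) * i
                   for i in range(1, K + 1))

def c_2s_gomp(M, N, m, K):
    return M * sum((N - i + 1) + m ** (i - 1) * (m + 1) * i
                   for i in range(1, K + 1))

# Simulation parameters at SNR = 15 dB
m, M, N = 3, 20, 64
assert c_ggmp(M, N, m, K=4, q=1) == 20_000       # ~2e4   (gGMP, q = 1)
assert c_ggmp(M, N, m, K=3, q=2) == 3_633_280    # ~3.63e6 (gGMP, q = 2)
assert c_gomp(M, N, m, K=4) == 696_320           # ~6.96e5 (GOMP)
assert c_2s_gomp(M, N, m, K=4) == 16_360         # ~1.63e4 (2S-GOMP)
```

Note that c_ggmp with q = 1 coincides with c_gmp, as expected.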

three cells with events (K = 3) and without a priori knowledge of K, GMP has almost the same mean number of iterations as 2S-GMP, with 3 < K_2S-GMP ≈ K_GMP, which leads to

(m + 1) Σ_{i=1}^{K_GMP} (N − i + 1) > Σ_{i=1}^{K_2S-GMP} ((N − i + 1) + (m + 1)),   (3.40)

thus implying that C_GMP > C_2S-GMP. This proves that, in addition to the performance enhancement, the 2S-GMP complexity is reduced with respect to that of GMP. This is corroborated


by Figure 3.11, which exhibits the mean run time (in seconds) versus SNR.

We now pass to the complexity comparison of the different proposed generalized versions. Figure 3.11 shows that, as expected, the number of iterations decreases for the generalized greedy versions as q increases, compared to GMP and 2S-GMP which correspond to q = 1. It is also worth noting that gGMP and g2S-GMP exhibit a similar mean number of iterations for a given value of q. Based on the mean run time versus SNR in Figure 3.11, g2S-GMP has the lowest mean run time for q = 3, while gGMP is the most complex due to the large set of possibilities to be tested at each iteration. This complexity increases with q, even though a larger q is expected to reduce the required number of iterations. Indeed, gGMP jointly processes detection and counting, which involves a search over a set of C^q_{Card(Ω(i−1))} × (m + 1)^q elements at each iteration, whereas GMP involves a search over a reduced set of (m + 1) × (N − i + 1) elements, which justifies the high mean run time of gGMP.

As shown in Figure 3.11, GOMP suffers from the highest computational load. The modified version of GOMP, 2S-GOMP, achieves a noticeably reduced complexity w.r.t. GOMP. More precisely, joint detection and counting by GOMP involves a search over a set of Card(Ω) × Card(C(i)) elements, whereas 2S-GOMP, which operates detection and counting separately, involves a search over a reduced set of Card(Ω) + Card(C(i)) elements. Then, considering that N ≫ K, N ≫ K_GOMP and N ≫ K_2S-GOMP, the proportionality coefficient of C_GOMP with respect to N is M Σ_{i=1}^{K_GOMP} i (m^i + m^(i−1)), whereas it is M Σ_{i=1}^{K_2S-GOMP} 1 = M K_2S-GOMP for C_2S-GOMP, which is much lower. Since 2S-GOMP also attains the stop condition more rapidly than GOMP, i.e. K_GOMP > K_2S-GOMP as displayed in Figure 3.11, 2S-GOMP provides an important complexity reduction. We can also observe that the corresponding mean run time decreases with SNR, which can be related to the decrease of the false alarm rate.

Table 3.3 gives a numerical evaluation of the complexities of Table 3.2, in terms of multiplications, for the values of m, M and N used in the simulations at SNR = 15 dB and the mean numbers of iterations taken from Figure 3.11. The obtained results corroborate the mean run times depicted in Figure 3.11 and show that the proposed GOMP, 2S-GOMP and generalized gGMP versions lead to a general performance enhancement but not always to a complexity reduction. Indeed, a clear compromise between performance and complexity is noticed: the proposed GOMP, 2S-GOMP and gGMP attain the best detection performance compared to all the other methods, yet with a considerably higher complexity.


[Figure: two panels versus SNR[dB] showing the mean iteration number and the mean run time (s) for GMP, 2S-GMP, gGMP (q=2), g2S-GMP (q=2), gGMP (q=3), g2S-GMP (q=3), GOMP and 2S-GOMP.]

Figure 3.11: Mean iterations number and mean run time versus SNR when N = 64, M = 20 and K = 3.

                    q = 1                    q = 2                    q = 3
gGMP       C_gGMP = 2·10^4          C_gGMP = 3.63·10^6       C_gGMP = 2.98·10^8
           K_gGMP = 4               K_gGMP = 3               K_gGMP = 2
g2S-GMP    C_g2S-GMP = 5.32·10^3    C_g2S-GMP = 5.64·10^3    C_g2S-GMP = 1.52·10^4
           K_g2S-GMP = 4            K_g2S-GMP = 3            K_g2S-GMP = 3

GOMP       C_GOMP = 6.96·10^5 with K_GOMP = 4
2S-GOMP    C_2S-GOMP = 1.63·10^4 with K_2S-GOMP = 4

Table 3.3: Complexity in terms of multiplications for SNR = 15 dB and K_algo given by Figure 3.11.

In contrast, g2S-GMP attains lower performance but with a greatly reduced computational burden compared to gGMP.

3.7 Conclusion

This chapter investigated the problem of sparse target detection and counting in small scale dense WSN from the perspective of CS theory.

In its first part, we provided a detailed theoretical formulation and proved the validity of the CS-based target detection and counting problem. After a detailed scenario description, we considered a small scale WSN where the large scale path loss can be neglected.


In the second part, several CS-based target detection and counting algorithms were proposed. In this context, a new generalized extension of the recent GMP algorithm, which jointly locates and counts the events, was presented. This generalization identifies multiple active cell positions and their event numbers simultaneously at each iteration. Additionally, to account for the non-orthogonality of the CS decomposition basis, a novel greedy OMP (GOMP) algorithm was proposed. In order to reduce the computational load, modified versions of GMP and GOMP based on separating the detection and counting stages were introduced. The obtained results show that the generalization gGMP (q > 1) is beneficial to GMP but not to 2S-GMP: GMP performance improves as the number q of detected active cells per iteration increases. As a second contribution of this chapter, we considered the problem of optimized sensor activation and proposed new schemes based on the measured observation energy and on the channel matrix coherence. These optimized sensor selection schemes are shown to enhance the detection capability compared to the random sensor selection scheme.

To validate the superiority of the proposed schemes over the existing GMP algorithm, a numerical study was carried out at the end of this chapter. Simulation results show that the proposed algorithms achieve a better trade-off between performance and computational load than the GMP algorithm.

The next chapter envisages the case of large scale WSN, which accounts for both Rayleigh fading and path loss. More precisely, some CS criteria will be exploited to enhance target detection in large scale WSN.


Chapter 4

Power Control Mechanism for Sparse Static Events Detection in Large Scale WSN

4.1 Introduction

In this chapter, we investigate the application of CS theory to sparse events detection and counting in large scale WSN. Such networks involve a large number of sensors, each with computational power and sensing capability. In order to save energy and prolong the network lifetime, a CS approach is adopted, which allows using only a small subset of the sensor measurements and thus reduces the global communication cost within the network.

As already mentioned in the introductory chapter, successful CS application necessitates the sparsity and incoherence features. Sparsity is here fulfilled by exploiting the rare nature of the spatially distributed targets within the monitored area. For good reconstruction performance, the RIP can be verified for a sensing matrix with independent and identically distributed (i.i.d.) entries; rather than the RIP, we can also consider minimizing the coherence of the sensing matrix. In the large scale WSN scenario, the sensing matrix depends on the network topology through the sensor locations and does not necessarily satisfy the requirements guaranteeing reconstruction performance. To this end, we here propose a power control (PC) mechanism that forces the sensing matrix to approximately obey the i.i.d. entries hypothesis. This is expected to approach the RIP conditions and consequently to guarantee a good reconstruction performance. In this context, a cooperative


framework is considered, where the targets' transmission power can be controlled and adjusted by the sensors.

The remainder of this chapter is organized as follows. After introducing the system model for CS-based events detection and counting in large scale WSN, related work addressing the problem of cooperative PC in WSN is reviewed. The proposed PC-based mechanisms are then presented. Finally, simulation results are analyzed and the main conclusions are drawn.

4.2 Model Description and Motivation

In this first part, we set up the system model in the framework of large scale WSN. Additionally, before introducing the main contribution based on PC mechanisms, we motivate this scenario.

4.2.1 System model

In this work, we adopt the same scheme as in the previous chapter, where the sensed area is divided into a grid of N cells, each equipped with one sensor. We assume that discrete events can occur and that the sensors are able to adjust the events' transmission power, which corresponds to a cooperative framework.

A large scale WSN scenario frequently involves a large number of sensors distributed over a vast monitored area. In such a context, the transmitted signal is not only affected by Rayleigh fading but is also subject to the distance-induced path loss effect. We recall that the energy received by the jth sensor is expressed, as given in the first section of the previous chapter, by

E(|x_j|²) = P0 Σ_{n=1}^{N} (|h_nj|² / d_nj^α) s_n + σ²,   (4.1)

where P0 is the targets' common transmission power.

In matrix form, the previous formula can be rewritten as

x = Ψs + η,   (4.2)

60

Page 87: Sparse Regularization Approach ... - supcom.mincom.tn

Chapter 4. Power Control Mechanism for Sparse Static Events Detection in Large Scale WSN

where Ψ is the N × N target decay energy matrix given by

        ⎡ |h11|²/d11^α   |h21|²/d21^α   · · ·   |hN1|²/dN1^α ⎤
Ψ = P0  ⎢ |h12|²/d12^α   |h22|²/d22^α   · · ·   |hN2|²/dN2^α ⎥   (4.3)
        ⎢      ⋮               ⋮           ⋱          ⋮      ⎥
        ⎣ |h1N|²/d1N^α   |h2N|²/d2N^α   · · ·   |hNN|²/dNN^α ⎦

The CS application to rare events detection in WSN allows reducing the number of required measurements from N to some M such that M ≪ N. More precisely, we choose a random selection matrix Φ to pick the M among N sensors to be active, i.e., Φ is a binary selection matrix that puts N − M sensors in idle mode. Mathematically, this selection can be expressed as

y = Φx = ΦΨs + Φη   (4.4)
      = As + n.   (4.5)

Note that this choice of Φ makes the elements of A keep the same form as the entries of Ψ.
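A sketch of this measurement model is given below. The Rayleigh power gains |h|² are drawn as unit-mean exponentials and a unit self-distance is used on the diagonal; both are assumptions of this illustration, and the function names are our own:

```python
import numpy as np

def build_psi(coords, P0=1.0, alpha=2.0, rng=None):
    """Target decay-energy matrix of eq. (4.3):
    Psi[j, n] = P0 * |h_nj|^2 / d_nj^alpha."""
    rng = rng or np.random.default_rng()
    N = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, 1.0)                 # avoid d = 0 for the self term
    h2 = rng.exponential(scale=1.0, size=(N, N))   # |h|^2 for Rayleigh h
    return P0 * h2 / d ** alpha

def measure(Psi, s, M, sigma=0.0, rng=None):
    """Random binary selection Phi: keep M of the N rows, eqs. (4.4)-(4.5)."""
    rng = rng or np.random.default_rng()
    N = Psi.shape[0]
    active = np.sort(rng.choice(N, size=M, replace=False))
    A = Psi[active, :]                       # A = Phi Psi (row sub-matrix)
    y = A @ s + sigma * rng.standard_normal(M)
    return y, A, active
```

Setting alpha = 0 recovers the small scale model of the previous chapter, in which only the Rayleigh effect remains.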

Our objective is to reconstruct the sparse representation s from the knowledge of the measurement vector y and of the matrix A. Then, based on the incoherence criterion of the sensing matrix A, which will be handled later, s can be accurately recovered using greedy CS target detection algorithms.

4.2.2 Motivation

A sufficient condition for the successful recovery of a sparse signal is that the decomposition basis A obeys the RIP condition [18, 22]. This can be verified if the measurement matrix Φ is a random matrix with i.i.d. entries. Such matrices are known to have a low coherence with any sensing matrix Ψ [20, 22, 106], and thus verify the RIP with high probability. Examples of such random matrices include matrices whose elements follow a Gaussian distribution with zero mean and variance 1/M [107]. In the large scale WSN context, the sensing matrix A depends on the sensor locations through the distances d_ij, which violates the i.i.d. assumption and might produce a high data recovery error. Therefore, in order to better satisfy the RIP in large scale WSN, for which the path loss cannot be neglected, the location effect, which appears through the distance terms raised to the power α in the matrix A, should be compensated for to reduce its coherence.


In the following, we illustrate the impact of the characteristics of the decomposition matrix A on the recovery performance. To this end, we compare four scenarios differing by the sensing matrix Ψ:

• perfect compensation of the distance, leaving exclusively the Rayleigh effect, where Ψij = P0|hji|², which coincides with the small scale WSN studied in the previous chapter;

• Ψ of eq. (4.3) with an adjacent sensor spacing of d = 5 m, d = 10 m or d = 20 m, for the same number of cells N. These three cases can be seen as small, medium and large scale networks.

The corresponding GMP reconstruction performance is evaluated in terms of the reconstruction error NMSEt on s, given by eq. (3.36). We also consider the coherence measure, which is more likely to describe how well the proposed mechanisms approach the RIP conditions guaranteeing decomposition accuracy and uniqueness. For a given dictionary A, its coherence is defined as the largest absolute normalized inner product between different columns of A:

μ(A) = max_{1≤i,j≤N; i≠j} |A_i^T A_j| / (‖A_i‖₂ ‖A_j‖₂).   (4.6)

The coherence value provides a measure of the worst-case similarity between the dictionary columns.

According to Figure 4.1, displaying the NMSEt on the sparse signal s versus SNR, the path loss fading seriously affects the reconstruction and leads to a high target recovery error: the larger the WSN, the higher the reconstruction error. The Rayleigh

Figure 4.1: NMSEt versus SNR when N = 64, M = 20, K = 2 and P0 = 1 (curves: Rayleigh fading only, and Rayleigh fading + path loss for d = 5 m, 10 m and 20 m).


Chapter 4. Power Control Mechanism for Sparse Static Events Detection in Large Scale WSN

scenario (with α = 0) achieves the lowest recovery error, which coincides with the smallest
coherence value, reaching 0.8951 compared to 0.9902 for the large scale scenario (Rayleigh
and path loss fading). Table 4.1 shows that the coherence value is much less sensitive to the
spacing between sensors than the NMSEt depicted in figure 4.1. This could be predicted, as the
matrices A corresponding to different values of d (for α = 2) are equal up to a multiplicative
scaling factor.

                          α = 2                                α = 0
Scenario      d = 20 m     d = 10 m     d = 5 m      optimal
Coherence     0.9902       0.9903       0.9903       0.8951

Table 4.1: Coherence comparison.

Further, a coherence reduction is shown to guarantee a more accurate recovery. Therefore,
in order to enhance the reconstruction accuracy, we propose to control the transmitted power
of the targets in order to compensate for the path loss and to make A better approach an i.i.d. matrix.

4.3 Existing Work

The study in [108] also addressed the problem of cooperative PC in WSN. It proposed a
method for a proper design of matrix A, to achieve a sufficiently low coherence and guarantee
a good recovery performance. Different models for the sensing matrix are adopted and PC
schemes are proposed accordingly. Two key parameters, dm and dM, are optimized in order to
minimize both the decomposition matrix coherence and the channel estimation cost. dm is the
PC region radius relative to any sensor or cluster head (CH). Indeed, the processing is organized
in periods: in each period, one sensor is considered as a CH, or reference node, and applies PC
for the network. dM denotes the CH coverage, beyond which nodes do not transmit. We hereafter

compare the performance of the above cited 'existing work' to that of Rayleigh fading and of
the case without PC, referred to as w.o.PC, all of which account for the sensors maximum
coverage range. As illustrated in figure 4.2, we choose dM = dmax, the coverage range, and dm
equal to the mean distance to the considered CH of the nodes within the coverage.

The performance is evaluated in terms of NMSEt and NMSEp, given respectively by eq. (3.36)
and eq. (3.37). The missing and false alarm rates are also illustrated. The results, displayed
in figure 4.3, show that the referenced work achieves high NMSE, missing and false alarm
rates, especially at low SNR, compared to the case w.o.PC (without Power Control). This
scheme outperforms the case w.o.PC only above 35 dB and for this reason it will not be


Figure 4.2: PC model. Only the nodes inside the radius dm have power control.

Figure 4.3: Targets positions estimation when SNR = 15 dB, N = 64, M = 25, K = 4 (panels: NMSEt, NMSEp, rate of missing and rate of false alarm versus SNR, for the existing work, w.o.PC and Rayleigh fading).

further considered in the rest of the chapter. Figure 4.3 also shows that the reconstruction
and detection performance of the benchmark scheme, Rayleigh fading, is not very sensitive
to noise, which is indeed due to the dominance of the Rayleigh fading effect. In the following,
we consider large scale WSN and a low to medium SNR range, such that SNR ≤ 30 dB.

We hereafter propose new PC schemes, whose detailed comparison to the existing work [108]
is given in table 4.2. Two new approaches, respectively based on the sensors spatial
repartition and on the sensors number, with global and local distance compensation, are
proposed, detailed and compared in the next section.

Criterion / Approach      existing work                           proposed approach
Cells partition           range                                   range or number
Repartition parameters    cost of power and channel estimation    coverage
Compensation              local                                   local or global
PC                        reduce power of neighboring targets     increase or reduce power
                          according to their range from           per class of nodes
                          the sensor

Table 4.2: Comparison between referenced and proposed approaches.

4.4 Proposed Approaches for Power Control

In this section, we introduce the proposed mechanisms for PC in large scale WSN. Before giving

the details of these methods, the considered framework is firstly presented.

4.4.1 Framework overview

In our work, we envisage a suboptimal PC scheme based on distance compensation per subset
of cells (or clusters). This scheme accounts for the sensors maximum coverage (range) dmax;
in this way, only the nodes inside the coverage region transmit. Note that, as a result of the
sensors limited coverage, the energy saving obtained by canceling the transmissions of the out
of coverage targets leads to a sensing matrix with some zero valued coefficients, which may
adversely impact the coherence and thus the RIP condition. In this context, we envisage a comparison between two

coherence and thus RIP condition. In this context, we envisage a comparison between two

scenarios with different coverage values corresponding to Zigbee [109, 110] and WiFi [111]

standards. We consider the case of regular large scale WSN with N = 64 and d = 20m. The

envisaged Zigbee and WiFi protocols used to insure the devices interconnection have respective

ranges dmax of 100m and 300m. With WiFi protocol, the network is fully recovered: i.e. all

targets are detectable by any considered CH. We denote this case as High Coverage Sensors

(HCS) since the coverage includes the whole network. Whereas, the Zigbee scenario, where

only a part of network is recovered by the reference node is referred to as Low Coverage Sensors

(LCS). We envisage both HCS, as in WiFi, and LCS, as in Zigbee, in case w.o.PC processing.

Examining the detection curves displayed in figure 4.4, we note that HCS achieves a slightly
higher detection rate compared to the LCS scenario. This is in concordance with the coherence


measure displayed in table 4.3, which is slightly lower in the HCS case. The small difference
between the two considered scenarios is due to the path loss being very severe outside the
coverage region: the corresponding received signals are so attenuated that this case approaches
the one where the targets are outside the maximum coverage.

Figure 4.4: NMSEt versus SNR when N = 64, M = 20, K = 2 and P0 = 1, case without PC (HCS and LCS curves).

Scenario      α = 2 and d = 20 m
Case          HCS        LCS
Coherence     0.9905     0.9919

Table 4.3: Decomposition matrix coherence evaluation in absence of PC.

In the following, we focus on large WSN and take into account the sensors coverage; more
precisely, the Zigbee protocol (LCS) will be considered. Actually, the coverage range dmax
is related to the sensors sensitivity and to the maximal transmitted targets power Pmax. As
mentioned above, P0 denotes the targets transmitted power, which corresponds to the coverage
d0.

Referring to the power law model [112], let ε denote the lowest power perceivable by the sensors,
or equivalently the sensitivity threshold; ε then depends on the maximal power and on its
corresponding maximal coverage dmax as follows

\epsilon = \omega \frac{P_{max}}{d_{max}^{\alpha}}, (4.7)

where ω is a constant.


The choice of P0 in the scheme w.PC indeed depends on the targeted coverage radius d0.

According to the chosen value of d0 with d0 ≤ dmax, the transmission power will be

P_0 = P_{max} \left( \frac{d_0}{d_{max}} \right)^{\alpha}. (4.8)

The choice of d0, equivalently of P0, will be analyzed in the numerical results section.
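Numerically, eqs. (4.7) and (4.8) amount to a single power law; a small sketch with the values of table 4.4 (ω = 1 and α = 2 are illustrative assumptions, and the function names are ours):

```python
P_MAX = 1e-3    # Pmax = 1 mW (table 4.4)
D_MAX = 100.0   # Zigbee coverage dmax = 100 m (table 4.4)
ALPHA = 2       # path loss exponent (assumed value)
OMEGA = 1.0     # model constant of eq. (4.7), taken as 1 for illustration

def sensitivity(p_max=P_MAX, d_max=D_MAX, alpha=ALPHA, omega=OMEGA):
    """Eq. (4.7): lowest power perceivable by the sensors."""
    return omega * p_max / d_max ** alpha

def p0_for_coverage(d0, p_max=P_MAX, d_max=D_MAX, alpha=ALPHA):
    """Eq. (4.8): transmission power achieving coverage d0 <= dmax."""
    return p_max * (d0 / d_max) ** alpha

# d0 = 20 m (table 4.4) -> P0 = Pmax * (20/100)^2 = 0.04 mW
print(p0_for_coverage(20.0))
```

Shrinking d0 reduces P0 quadratically here, which is the trade-off between coverage and consumed power exploited by the schemes below.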

In the following, in order to reduce the sensing matrix A coherence, we propose to control the
transmitted power of the targets within the maximal coverage zone of radius dmax. The PC
scheme compensates for the distance separating the sensor device from the target location
throughout the network: it increases the transmit power of the targets furthest from the CH to
compensate for their fading, and reduces the power of the nearest ones. As a benchmark,
we consider the scheme w.o.PC, in which all nodes transmit with power P0. This PC principle
is illustrated by figure 4.5.

Figure 4.5: PC transmission model. Only the targets inside the coverage zone transmit.

We especially suggest two approaches for PC, both of which consider a spatial partition of the
nodes w.r.t. a reference node (CH). Contrary to the perfect PC, where each node within the
CH coverage has a perfect compensation of the path loss effect, in our work, motivated by the
reduction of the total power consumption, the PC is operated by subgroups of nodes. These
subgroups are formed either based on their proximity to the reference node (approach 1:
distance (range) based) or based on their spatial density (approach 2: sensors number). These
partitions result in rings (uniform for the range based partition). For both the range and
number based approaches, two schemes are envisaged, respectively global and local. In the global approach,


the PC is done per class, or cluster, of nodes, where the same transmitted power is allocated
to the nodes belonging to the same ring. In the local approach, the PC considers both the node
distance from the CH and the ring to which it belongs. In the following two sections, these
proposed approaches are detailed.

4.4.2 Proposed power control mechanism based on distance (PCMD)

This approach depends on the sensors maximum range, dmax, related to the sensors
sensitivity. Also, we denote by Pmax the maximum power which can be delivered by the targets;
dmax and Pmax are supposed to be uniform for all sensors and all targets, respectively. A
target is not detectable by the CH when the distance between them exceeds dmax; in such a case,
the PC sets the transmitted power to zero (the corresponding targets do not need to transmit).
Our approach attempts to compensate the energy attenuation caused by distance. It considers
only the cells (sensors) within the coverage of the corresponding CH, i.e. situated at a distance
lower than dmax. This set is divided into n subsets, with 1 ≤ n ≤ N1, where N1 is an upper
bound to prevent empty sets and is related to the ratio between dmax and the adjacent sensors spacing d.

Each subset k contains the sensors lying within a ring delimited by the two circles centered on
the CH, of respective radii R_k^{(n)} and R_{k+1}^{(n)}, such that

R_k^{(n)} = k \frac{d_{max}}{n}, \quad n = 1, \ldots, N_1, \quad 1 \le k \le n, (4.9)

where R_n^{(n)} corresponds to d_{max}.

This mechanism operates in two manners: in a global manner, for which the scheme is
denoted PCMDg, or in a local manner, for which the scheme is named PCMDl. In the following,
these proposed schemes are developed.

4.4.2.1 PCMDg

In this part, we propose to globally compensate for the distance effect: the nodes within the
same ring are allocated the same transmission power. In the following, we discuss the
cases n = 1 and n = 2 for illustrative aim; the scheme is then generalized to greater values of
n, corresponding to a finer spatial partition.

• n = 1: As mentioned above, the CH can hear only the nodes within the coverage d0 (for a
targets transmission power P0). Then, we propose to amplify the transmitted power of the
targets outside the ring of radius d0 and at the same time within the range dmax. Therefore,


the components of the PC matrix, P(i, j), denoting the required transmit power of targets situated
in cell i controlled by sensor j, can be presented as

P(i, j) = \begin{cases} P_0, & d_{ij} \le d_0, \\ P_{max}, & d_0 < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.10)

• n = 2: The PC is based on the partition described in figure 4.6, where the CH coverage is
partitioned into two concentric circular regions.

Figure 4.6: PCMDg (n = 2).

The components of the PC matrix, P(i, j), can be expressed as

P(i, j) = \begin{cases} P_0^{(2)}, & d_{ij} \le R_1^{(2)}, \\ P_1^{(2)} = P_{max}, & R_1^{(2)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}, \end{cases} (4.11)

verifying \frac{P_0^{(2)}}{(R_1^{(2)})^{\alpha}} = \frac{P_{max}}{d_{max}^{\alpha}} = \frac{P_0}{d_0^{\alpha}}.

• General Case:

Generalizing the scheme described above for n = 2 to larger values n ≥ 2, the form of the
power matrix is given by

P(i, j) = \begin{cases} P_l^{(n)}, & R_l^{(n)} < d_{ij} \le R_{l+1}^{(n)}, \quad l = 0, \ldots, n-1, \\ 0, & d_{ij} > d_{max}, \end{cases} (4.12)

such that \frac{P_0}{d_0^{\alpha}} = \frac{P_l^{(n)}}{(R_{l+1}^{(n)})^{\alpha}}, i.e.

P_l^{(n)} = P_0 \left( \frac{R_{l+1}^{(n)}}{d_0} \right)^{\alpha} = P_0^{(n)} (1 + l)^{\alpha}, \quad l = 0, \ldots, n-1.

Let us note that, according to the ratio R_l^{(n)}/d_0, the controlled power can be larger or lower than
P0.
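Eqs. (4.9) and (4.12) fully determine the global scheme; the following is a minimal sketch (function name and the numerical example are ours):

```python
import numpy as np

def pcmdg_power(dij, n, d0, d_max, P0, alpha=2):
    """PCMDg: global distance compensation per ring, eqs. (4.9) and (4.12).
    Returns the transmit power of a target at distance dij from the CH."""
    if dij > d_max:
        return 0.0                        # out of coverage: no transmission
    # Ring radii R_k^(n) = k * dmax / n, k = 0..n (eq. 4.9)
    radii = np.arange(n + 1) * d_max / n
    # Ring index l with R_l^(n) < dij <= R_{l+1}^(n)
    l = max(int(np.searchsorted(radii, dij, side='left')) - 1, 0)
    # Eq. (4.12): P_l^(n) = P0 * (R_{l+1}^(n) / d0)^alpha
    return P0 * (radii[l + 1] / d0) ** alpha

# Example: n = 2 rings, d0 = 20 m, dmax = 100 m, P0 = 1 (arbitrary unit)
print(pcmdg_power(30.0, 2, 20.0, 100.0, 1.0))   # inner ring: (50/20)^2 = 6.25
print(pcmdg_power(60.0, 2, 20.0, 100.0, 1.0))   # outer ring: (100/20)^2 = 25.0
```

All nodes of a ring receive the power computed for the ring's outer radius, so the compensation is exact only at the ring boundary.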

4.4.2.2 PCMDl

In this part, a local distance compensation is considered: each node is assigned a different
transmission power according to its position. We keep the same notation as for PCMDg.
Also, the cases n = 1 and n = 2 will be presented before generalizing to higher orders
of n. For the nodes outside the coverage zone of radius dmax, no transmission is allowed.

• n = 1: In this case, we locally compensate the distance for the targets within the coverage
d0. Then, the components of the PC matrix P are presented as

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le d_0, \\ P_{max}, & d_0 < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.13)

• n = 2: The components of the PC matrix P are expressed as

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le R_1^{(2)}, \\ P_{max}, & R_1^{(2)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.14)

• General Case:

The previous development can be generalized for n ≥ 2 as follows

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le R_{n-1}^{(n)}, \\ P_{max}, & R_{n-1}^{(n)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.15)
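The local scheme of eq. (4.15), valid for n ≥ 2, can be sketched in the same way (the function name and example values are ours):

```python
def pcmdl_power(dij, n, d0, d_max, P0, P_max, alpha=2):
    """PCMDl, eq. (4.15), for n >= 2: each target inside R_{n-1}^(n)
    compensates exactly its own distance; the last ring uses Pmax."""
    if dij > d_max:
        return 0.0                        # out of coverage: silent
    r_last = (n - 1) * d_max / n          # R_{n-1}^(n), from eq. (4.9)
    if dij <= r_last:
        return P0 * (dij / d0) ** alpha   # exact per-node compensation
    return P_max                          # last ring: maximal power

# Example: n = 2, d0 = 20 m, dmax = 100 m, P0 = 1, Pmax = (100/20)^2 = 25
print(pcmdl_power(30.0, 2, 20.0, 100.0, 1.0, 25.0))   # (30/20)^2 = 2.25
print(pcmdl_power(60.0, 2, 20.0, 100.0, 1.0, 25.0))   # last ring: Pmax = 25.0
```

Compared to the global variant, the inner-zone power now varies continuously with dij instead of being constant per ring.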


4.4.3 Proposed power control mechanism based on sensors number (PCMSn)

The PC scheme proposed in section 4.4.2 partitions the nodes into subsets with respect to the
distance separating them from the CH, as a regular fraction of the sensor coverage region radius
dmax. Differently, we now consider a partition which splits the nodes into subsets, still
contained in rings centered on the CH, but with the further constraint that the rings contain
an equal number of nodes. This partition results in irregular rings and changes according
to the deployed sensors density in the monitored area and to the CH position. First, we sort
the sensors according to the distance separating them from CH j, in ascending order (from
the nearest to the farthest); this sorting implies, for any CH j, to consider the sorted distances
which verify dij ≤ di+1j for i = 1, . . . , N − 1. Then, for each CH, the network is divided into n
collections.

As for the distance based PC, two ways can be envisaged for the PCMSn scheme: a global
compensation scheme, denoted PCMSng, and a local compensation scheme, referred to as PCMSnl.
In the following, we limit the discussion to the case n = 2; even if it is not included
in this work, this mechanism can be further generalized to a larger number of node classes n for
both the PCMSng and PCMSnl approaches.

In this case of n = 2, for each sensor j taken as CH, we divide the sensor network into two
sets: the first set contains the (N−1)/2 sensors closest to sensor j, as shown in figure 4.7, and
the remaining (N−1)/2 sensors form the second set. Let d_m^{(1)} and d_m^{(2)} denote the
respective mean distances of the two groups, such that

d_m^{(1)} = \frac{2}{N-1} \sum_{i=1}^{(N-1)/2} d_{ij}, (4.16)

d_m^{(2)} = \frac{2}{N-1} \sum_{i=(N-1)/2+1}^{N} d_{ij}. (4.17)
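The sorting and equal-size split behind eqs. (4.16)-(4.17) can be sketched as follows (the CH itself is excluded, so the N − 1 remaining distances are split in two; the function name is ours):

```python
import numpy as np

def pcmsn_means(d_to_ch):
    """Split the N-1 non-CH sensors into two equal-size groups by distance
    to the CH and return the group means d_m^(1), d_m^(2), eqs. (4.16)-(4.17)."""
    d_sorted = np.sort(np.asarray(d_to_ch, dtype=float))  # nearest first
    half = len(d_sorted) // 2                             # (N - 1) / 2 per group
    return d_sorted[:half].mean(), d_sorted[half:].mean()

# Example with N - 1 = 4 distances
d1m, d2m = pcmsn_means([3.0, 1.0, 4.0, 2.0])
print(d1m, d2m)   # 1.5 3.5
```

Unlike the fixed radii of eq. (4.9), these thresholds adapt to the actual node density around each CH.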

As already considered above, the PC implies that targets outside the CH j coverage remain
silent, i.e. they do not transmit.

The global and local schemes are hereafter detailed for n = 2.

4.4.3.1 PCMSng

This scheme is based on a global compensation of distance: the same transmitted power is
associated to each subset of sensors.


Figure 4.7: PCMSn (n = 2); in each of the two subsets, 9 nodes are present.

• Case 1: d_m^{(2)} \le d_{max}

The components of the PC matrix can be presented as

P(i, j) = \begin{cases} P'_0, & d_{ij} \le d_m^{(1)}, \\ P'_1, & d_m^{(1)} < d_{ij} \le d_m^{(2)}, \\ P'_2, & d_m^{(2)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}, \end{cases} (4.18)

with P'_0 = P_0 \left( \frac{d_m^{(1)}}{d_0} \right)^{\alpha}.

The sensing matrix A components should best fit the i.i.d. entries model in order to
verify the RIP criterion. According to the structure of the submatrix A of matrix Ψ given by
eq. (3.15), this can be achieved by compensating for the path loss effects. To this end, the
PC values are chosen as

\frac{P'_0}{(d_m^{(1)})^{\alpha}} = \frac{P'_1}{(d_m^{(2)})^{\alpha}} = \frac{P'_2}{d_{max}^{\alpha}},

which leads to the power values

P'_1 = P_0 \left( \frac{d_m^{(2)}}{d_0} \right)^{\alpha} \quad \text{and} \quad P'_2 = P_0 \left( \frac{d_{max}}{d_0} \right)^{\alpha} = P_{max}.


• Case 2: d_m^{(1)} < d_{max} < d_m^{(2)}

P(i, j) = \begin{cases} P'_0, & d_{ij} \le d_m^{(1)}, \\ P'_0 \left( \frac{d_{max}}{d_m^{(1)}} \right)^{\alpha}, & d_m^{(1)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.19)

• Case 3: d_m^{(1)} > d_{max}

We seek the targets positions i with distance d_{ij} \le d_{max}. Then, we compute the mean
distance of the sensors located within the coverage zone, denoted d_m^{(0)}. The components of
the corresponding PC matrix P(i, j) can be expressed as

P(i, j) = \begin{cases} P_0 \left( \frac{d_m^{(0)}}{d_0} \right)^{\alpha}, & d_{ij} \le d_m^{(0)}, \\ P_{max}, & d_m^{(0)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.20)
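The three cases of eqs. (4.18)-(4.20) can be gathered in one rule; a sketch under the same notation (d0m stands for d_m^(0), and the helper name and example values are ours):

```python
def pcmsng_power(dij, d1m, d2m, d0m, d0, d_max, P0, P_max, alpha=2):
    """PCMSng (n = 2): global compensation per group, eqs. (4.18)-(4.20)."""
    if dij > d_max:
        return 0.0                                  # silent outside coverage
    if d2m <= d_max:                                # case 1, eq. (4.18)
        if dij <= d1m:
            return P0 * (d1m / d0) ** alpha         # P'_0
        if dij <= d2m:
            return P0 * (d2m / d0) ** alpha         # P'_1
        return P_max                                # P'_2 = P0 (dmax/d0)^alpha
    if d1m < d_max:                                 # case 2, eq. (4.19)
        p0p = P0 * (d1m / d0) ** alpha              # P'_0
        return p0p if dij <= d1m else p0p * (d_max / d1m) ** alpha
    # case 3, eq. (4.20): d_m^(1) > dmax
    return P0 * (d0m / d0) ** alpha if dij <= d0m else P_max
```

For example, with d0 = 20 m, dmax = 100 m, P0 = 1 and group means d_m^(1) = 40 m, d_m^(2) = 80 m, a target at 25 m gets (40/20)^2 = 4 times P0.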

4.4.3.2 PCMSnl

In this part, a local distance compensation is adopted. The case n = 2 is described hereafter.

• Case 1: d_m^{(2)} \le d_{max}

The components of the PC matrix can be presented as

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le d_m^{(2)}, \\ P_{max}, & d_m^{(2)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.21)

• Case 2: d_m^{(1)} < d_{max} < d_m^{(2)}

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le d_m^{(1)}, \\ P_{max}, & d_m^{(1)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.22)


• Case 3: d_m^{(1)} > d_{max}

The corresponding PC procedure is as follows

P(i, j) = \begin{cases} P_0 \left( \frac{d_{ij}}{d_0} \right)^{\alpha}, & d_{ij} \le d_m^{(0)}, \\ P_{max}, & d_m^{(0)} < d_{ij} \le d_{max}, \\ 0, & d_{ij} > d_{max}. \end{cases} (4.23)

The previous steps, for both global and local PC, can be generalized for 2 < n ≤ N1, where N1 is
an upper bound to prevent empty sets. In such a way, the network is divided into n subsets with
the same number of sensors per subset, respecting the deduced values of d_m^{(l)}, l = 1, . . . , n.
Compared to perfect distance compensation, the above local scheme allocates Pmax on the last
coverage ring, thus leading on this ring to a slightly higher power than that required by perfect
distance compensation (both for the distance and sensors number based approaches).

Using PC, the measured energy vector during the sensing period can be written as

y = \frac{1}{P_0} (P \odot A) s + n, (4.24)

where ⊙ is the elementwise product and the elements of matrix P tune the targets transmission
power according to the chosen PC scheme. Compared to eq. (4.5), where no PC is used, the
new decomposition matrix is

A' = \frac{1}{P_0} (P \odot A), (4.25)

which is expected to better verify the RIP and thus to have a lower coherence. Note that (4.5)
coincides with (4.24) for P = P_0 1_{M \times N}.
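The model of eqs. (4.24)-(4.25) is straightforward to assemble once P is built; a minimal sketch with an illustrative random A (the noise term is omitted to keep the check deterministic, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P0 = 20, 64, 1.0
A = rng.random((M, N))            # illustrative decomposition matrix
P = P0 * np.ones((M, N))          # P = P0 * 1_{MxN}: no power control

s = np.zeros(N)
s[[5, 40]] = [2.0, 1.0]           # K = 2 active cells (sparse target vector)

A_prime = (P * A) / P0            # eq. (4.25): A' = (1/P0) (P ⊙ A)
y = A_prime @ s                   # eq. (4.24), noise term n omitted here

# With P = P0 * 1_{MxN}, eq. (4.24) reduces to the no-PC model of eq. (4.5)
assert np.allclose(y, A @ s)
```

Substituting any of the PC matrices built above for P changes only the effective dictionary A', not the recovery algorithm.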

The performance study will hereafter evaluate the accuracy of the recovery of s from the
measurement y. The adopted decomposition algorithm is the GMP, for its computational
advantage in large WSN. As mentioned before, at each iteration it jointly detects a new active
(with events) cell and counts the number of events in the detected cell, and the iterative
projection procedure is updated according to the MP algorithm.

The following section is devoted to the performance evaluation of the above proposed PC
schemes.


4.5 Numerical Results and Analysis

This section aims to evaluate the efficiency of the proposed PC mechanisms and their impact
on the recovery and detection performance. This is evaluated in terms of NMSEt and NMSEp,
respectively given by eq. (3.36) and eq. (3.37), for varying SNR values. We also consider the
coherence measure, given by eq. (4.6), to characterize the capacity of the proposed mechanisms
to approach the RIP conditions. After introducing the simulation parameters, numerical
results are drawn and analyzed.

4.5.1 Simulation parameters setting

The following results are obtained through Monte Carlo simulations based on the
following setting. We consider a regular monitored area divided into N = 64 (8 by 8) cells.
Among these N cells, K = 2 cells are active (where events occur). At each cell center, one
sensor node is placed to detect the events occurring in the network; the adjacent sensors spacing
is d = 10 m. We deploy N sensors, among which M = 20 are selected to be active during the
processing. The number of events occurring in the active cells is chosen uniformly at random
from {1, 2, 3}. As mentioned above, we consider Zigbee as the wireless protocol ensuring the
interconnection between the devices; the dmax and d values correspond to a control region
covering a maximal radius of 5 cells. The PC parameters are described in table 4.4.

Total number of sensors N = 64

Active sensors M = 20

Active cells number (random) K = 2

Max range (Zigbee) dmax = 100m

Targeted coverage radius d0 = 20m

Max transmitted power Pmax = 1mW

Table 4.4: Simulation parameters.

The above proposed PC schemes adjust the power to be transmitted according to a mean
distance (global) or to the actual distance (local) from the CH. It is then of great interest to
also evaluate, for the proposed schemes, the total amount of power consumed in the network,
denoted Ptot and given by

as Ptot and given by

Ptot =M∑

i=1

N∑

j=1

P(i, j)sj . (4.26)

In order to assess the relevance of the proposed approaches in terms of power saving, a
quotient of the required power, normalized to the case w.o.PC described in section 4.4.1, is envisaged.
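Eq. (4.26) and the normalized quotient can be sketched as follows (function names and the toy example are ours):

```python
import numpy as np

def total_power(P, s):
    """Eq. (4.26): Ptot = sum_{i=1}^{M} sum_{j=1}^{N} P(i, j) s_j."""
    return float((P @ s).sum())

def power_quotient(P, s, P0):
    """Ratio of Ptot to the w.o.PC baseline, where all M x N entries equal P0."""
    M = P.shape[0]
    return total_power(P, s) / (M * P0 * s.sum())

# Toy example: doubling every entry of P doubles the consumed power
P = 2.0 * np.ones((3, 4))
s = np.array([1.0, 0.0, 1.0, 0.0])
print(total_power(P, s), power_quotient(P, s, 1.0))   # 12.0 2.0
```

This is the quotient reported in tables 4.5 to 4.8.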


In addition to the case w.o.PC, we consider the benchmark of perfect PC, in which the
distance of each target within the range dmax is locally and perfectly compensated.

4.5.2 Numerical results

In this subsection, numerical results are presented, showing the performance improvement
obtained with each of the proposed PC approaches. Firstly, the coherence µ of the
decomposition matrix with PC, A′ (4.25), and the network total power Ptot are evaluated. Then,
the reconstruction and detection performance, in terms of NMSEt and NMSEp, are evaluated.
Finally, a comparative study of all proposed PC approaches is carried out, with a further
comparison to the cases w.o.PC and with perfect PC.

⋆ Coherence and required power

Results are given in table 4.5 to table 4.7 for the different PC schemes, obtained by averaging
over 500 Monte Carlo trials where the Rayleigh fading, the active cells (K among N) and the
sensors selection (M among N) are randomly generated.

Table 4.5 displays the results relative to the benchmarks, perfect PC and w.o.PC: perfect PC
reduces the coherence but increases the power consumption w.r.t. the case w.o.PC.

Criterion / Approach                        LCS scenario
Cases                                       w.o.PC     Perfect PC
µ                                           0.999      0.8964
Ptot (mW)                                   3.3        17.6
Quotient (normalized to the case w.o.PC)    1          5.33

Table 4.5: Coherence and power consumption evaluations, benchmark approach.

Table 4.6 and table 4.7 respectively give the results of the PCMD and PCMSn schemes.
Table 4.6 shows, for PCMD, that both PCMDg and PCMDl lead to a lower coherence than
w.o.PC, yet higher than perfect PC. As the number of clusters n of cells used in the PC
processing is increased, the coherence decreases, for both global and local schemes, except for
n = 1. Also, for larger n, the power consumption decreases for both global and local schemes.
Compared to local processing, global processing leads to a higher coherence and a higher power
consumption. The same observations hold for the PCMSn scheme, whose results are displayed
in table 4.7.

To summarize:


Criterion / Case          n = 1              n = 2              n = 3              n = 4
Approach                  Global    Local    Global    Local    Global    Local    Global    Local
µ                         0.9271    0.9306   0.9707    0.9122   0.9462    0.8984   0.9186    0.8972
Ptot (mW)                 68        67       40.2      32.5     30.8      21.1     26.3      18
Ptot quotient             21.26     20.94    12.43     10.15    9.54      6.60     8.15      5.61
(normalized to w.o.PC)

Table 4.6: Coherence comparison and power consumption for PCMD approaches.

Criterion / Case                                 PCMSng              PCMSnl
Approach                                         n = 2     n = 3     n = 2     n = 3
µ                                                0.9237    0.9100    0.9161    0.9084
Ptot (mW)                                        32.9      28.1      27.7      24.1
Ptot quotient (normalized to the case w.o.PC)    10.45     8.93      8.57      7.44

Table 4.7: Coherence comparison and power consumption for PCMSn approaches.

• in terms of power consumption, the case w.o.PC has the lowest value, followed by perfect PC, while the proposed schemes consume the largest power;

• in terms of decomposition matrix coherence, the lowest value is obtained by perfect PC, then by the proposed PC schemes, and finally the case w.o.PC has the largest coherence;

• by increasing the number of WSN cells partitions (n), the proposed PC schemes consume less power and lead to a smaller coherence, converging to the perfect PC.

The criteria of power consumption and coherence are not sufficient to evaluate the relevance
of the different schemes. Hereafter, we evaluate the events recovery performance.

⋆ Detection and counting

Figures 4.8 to 4.10 respectively report the performance of PCMDg, PCMDl, PCMSng and
PCMSnl, in terms of NMSEt and NMSEp obtained by the GMP algorithm, for a varying
number of clusters n used for PC.

Figure 4.8 shows a performance enhancement when the number of levels n decreases: the case
n = 1 is the most efficient, achieving the best reconstruction performance. Indeed, even if the
coherence is reduced for larger n, the power consumption also decreases, which negatively
affects the performance.

For all schemes, it is possible to achieve better performance than the perfect PC case, which
can be related to the reduced power consumption of the perfect PC scheme. The worst scheme
is that of w.o.PC. The above performance comparison is elaborated for a given scenario by
the definition of (d0, P0), specifying the designed coverage and the corresponding transmission
power. Applying different PC schemes leads to modified transmission powers, making the
obtained results relative to the consumed power.

Figure 4.8: Performance evaluation for PCMDg approach (NMSEt and NMSEp versus SNR for w.o.PC, n = 1, 2, 3, 4 and perfect PC).

Figure 4.9: Performance evaluation for PCMDl approach (NMSEt and NMSEp versus SNR for w.o.PC, n = 1, 2, 3, 4 and perfect PC).

This difference between powers is induced by choosing the same coverage for all schemes. For

a fair comparison we hereafter consider an equal consumed power constraint.

⋆ Comparison between the proposed PC approaches

In this part, a comparative study of the different proposed PC schemes is carried out. Two
cases are envisaged: case 1, with equal coverage and no constraint on the consumed power, and
case 2, where an equal consumed power is imposed for all schemes. The results are depicted in
figures 4.11 and 4.12, where solid lines correspond to case 1 and dashed lines to case 2.

Case 1: Equal coverage d0 constraint

In case 1, all presented schemes operate under equal coverage d0, thus resulting in different
total consumed powers: imposing the same coverage increases the consumed power of all PC
schemes w.r.t. the w.o.PC case. In such a way, figure 4.11 shows


Figure 4.10: Performance evaluation of the PCMSn approach (top: global scheme, bottom: local scheme; NMSEt and NMSEp versus SNR for w.o.PC, n = 2, 3 and perfect PC).

that the best performance is achieved with the global PC mechanism, since it is the scheme
leading to the lowest reconstruction errors in the low SNR range. This can be related to the
higher consumed power of the global scenario compared to the local scheme, as given by
table 4.8. From SNR = 25 dB, the local schemes lead to the best performance. The perfect PC
achieves a slightly lower performance than the local schemes over the whole SNR range. This
comparison holds for both performance criteria, NMSEt and NMSEp. The behavior of the
perfect PC case can be justified by its lowest consumed power compared to all proposed PC
schemes, as shown in table 4.8.

Power / Case    w.o.PC    PCMSng    PCMDg    PCMSnl    PCMDl    Perfect PC
Ptot (mW)       3.3       32.9      40.2     27.7      32.5     17.6
Quotient        1         10.45     12.43    8.57      10.15    5.33

Table 4.8: Power consumption comparison for different proposed PC approaches for n = 2.

Case 2: Equal total power constraint

We here compare each PC scheme to the w.o.PC case while imposing equality of their consumed
powers. It is possible to unify the PC schemes by setting

P_{sc}(i, j) = \left( \frac{L_{sc}(i, j)}{d_0} \right)^{\alpha} P_0,

where L_{sc}(i, j) depends only on the network topology and on the PC scheme 'sc' among
PCMDg, PCMDl, PCMSng and PCMSnl (refer to eqs. (4.10)-(4.15) and (4.18)-(4.23)).

Figure 4.11: Comparison between different proposed PC schemes for n = 2 with equal coverage d0 constraint (NMSEt and NMSEp versus SNR).

The total consumed power

for PC cases is then expressed as

P_{tot}^{sc} = \sum_{i=1}^{M} \sum_{j=1}^{N} P_{sc}(i, j) s_j = \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \frac{L_{sc}(i, j)}{d_0} \right)^{\alpha} P_0 s_j. (4.27)

The total consumed power of the w.o.PC scheme is expressed as

$$P^{wo}_{tot} = M P_0 \sum_{j=1}^{N} s_j. \quad (4.28)$$

Imposing an Equal Power (EP) constraint between w.o.PC and one PC scheme $sc$ then leads, if the PC scheme considers coverage $d_0$, to a transmission power $P^{EP}_0$ for the w.o.PC scheme such that

$$P^{EP}_0 = P_0\, \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} \left(\frac{L_{sc}(i,j)}{d_0}\right)^{\alpha} s_j}{M \sum_{j=1}^{N} s_j}, \quad (4.29)$$

corresponding to a coverage $d^{EP}_0$ such that

$$\frac{d^{EP}_0}{d_0} = \sqrt[\alpha]{\frac{P^{EP}_0}{P_0}} = \sqrt[\alpha]{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N} \left(\frac{L_{sc}(i,j)}{d_0}\right)^{\alpha} s_j}{M \sum_{j=1}^{N} s_j}}. \quad (4.30)$$

In this way, according to d0 and dmax, the w.o.PC coverage dEP0 under the EP constraint can be larger or smaller than d0, corresponding respectively to a larger or smaller coverage compared to that of the considered PC scheme sc. For example, for the case envisaged in the former detection performance study, imposing the same coverage leads to an increase of the consumed power of all the PC schemes w.r.t. the w.o.PC case (table 4.8). The r.h.s. of eq. (4.30) is then larger than 1, so that under the EP constraint a higher w.o.PC scheme coverage is obtained.
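As a concrete illustration of eqs. (4.27)-(4.30), the following minimal NumPy sketch computes the equal-power transmit level and the corresponding w.o.PC coverage for a toy random topology. All parameter values, the random L_sc values and the event vector are illustrative assumptions, not those of the simulated network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): M sensors, N cells, path-loss exponent alpha.
M, N, alpha, d0, P0 = 8, 20, 2.0, 30.0, 1e-3
L_sc = rng.uniform(5.0, 60.0, size=(M, N))  # stand-in for the scheme-dependent L_sc(i, j)
s = (rng.random(N) < 0.15).astype(float)    # sparse binary event indicator vector
s[0] = 1.0                                  # ensure at least one active event

# Eq. (4.27): total consumed power of the 'sc' PC scheme.
P_sc_tot = np.sum((L_sc / d0) ** alpha * P0 * s)

# Eq. (4.28): total consumed power without PC.
P_wo_tot = M * P0 * s.sum()

# Eqs. (4.29)-(4.30): w.o.PC transmit power and coverage under the EP constraint.
P0_EP = P0 * np.sum((L_sc / d0) ** alpha * s) / (M * s.sum())
d0_EP = d0 * (P0_EP / P0) ** (1.0 / alpha)
```

With this toy draw, d0_EP/d0 is the α-th root of P0_EP/P0, so whenever the r.h.s. of (4.30) exceeds 1, the w.o.PC coverage under the EP constraint is indeed enlarged.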

In this context, in figure 4.12, giving the reconstruction error versus the number of measurements M, dashed curves correspond to the w.o.PC scheme when it uses the same power as one of the PC schemes (solid line with the same symbol) operating with range d0. It is worth noting that with the EP constraint, the consumed power is the same while the coverage changes. We can see that the proposed schemes perform significantly better than w.o.PC for the same total power budget. The highest gain is achieved by PCMSnl (after perfect PC).

[Figure: NMSE_p and NMSE_t versus measurements number M; curves: w.o.PC, PCMSng, PCMDg, PCMSnl, PCMDl, perfect PC.]

Figure 4.12: Comparison between the different proposed schemes for n = 2 when SNR = 25 dB. Dashed curves correspond to schemes with equal power constraint w.r.t. the w.o.PC case.

In the following, the coverage effect is studied by varying d0. Then, a performance comparison of the proposed PC schemes is given under an equal coverage d0 constraint.

⋆ Impact of the coverage through the d0 choice

Detection performance is here evaluated for all schemes for a unified coverage varying from 20 m to 50 m. Figure 4.13 depicts the obtained results for the PCMD and PCMSn approaches in terms of NMSEt and NMSEp. Except for the w.o.PC scheme, for which a higher coverage is beneficial since the transmitted power is higher (refer to (4.28)), an increase of d0 leads to a performance loss for all PC schemes. Indeed, as shown in figure 4.14, the transmission power of w.o.PC increases with d0, whereas it remains constant for all proposed PC schemes because the ratio P0/d0^α is constant whatever the chosen coverage d0 (see (4.27)). Moreover, the number of targets to which PC applies decreases as d0, and thus P0, increases. These figures show the relevance of the proposed PC schemes for all coverage values when considering PCMD, and for low coverage when considering PCMSn.

[Figure: NMSE_t and NMSE_p versus d0 (m), for the global and local schemes; curves: perfect PC, w.o.PC and several values of n.]

Figure 4.13: Performance evaluation versus coverage d0 when SNR = 20 dB. Left figure corresponds to the PCMD scheme and right figure to the PCMSn scheme.

[Figure: total consumed power (W) versus d0 (m); curves: perfect PC, w.o.PC and several values of n.]

Figure 4.14: Total consumed power versus coverage d0 when SNR = 20 dB. Dashed lines correspond to the local scheme and solid lines to the global scheme.

4.6 Conclusion

In this chapter, the problem of rare events detection in a large scale WSN scenario is studied. In order to enhance the CS recovery capacity in WSN, this work proposed an approach that reduces the coherence of the sensing matrix through a control of the transmitted power (PC).

Firstly, we justified the relevance of the power control mechanism in WSN through a motivating discussion. Then, an overview of related work based on PC in WSN was given, together with a comparative study with the proposed schemes. Additionally, before introducing the main contribution, the adopted scenario was accurately described, specifying the considered framework while accounting for practical issues, including the sensors' maximum coverage and the targets' maximum transmission power.

In its main part, this chapter proposed a collaborative scheme which controls the transmitted power of some events in order to reduce the coherence of the sensing matrix, thus guaranteeing an enhancement of the CS detection performance in WSN. In this context, we suggested two new PC schemes that partition the sensor nodes into disjoint sets or clusters, based respectively on the range of the sensors from the cluster head ("PCMD") and on the density of the sensors deployed around the cluster head ("PCMSn"). Each of the two proposed schemes operates with either a local or a global distance effect compensation, the variants being denoted respectively PCMDl and PCMDg for the PCMD mechanism, and PCMSnl and PCMSng for the PCMSn approach. Simulation results validate their superiority over the case without PC in terms of coherence reduction and reconstruction performance. Under an equal consumed power constraint, the global approach of both the distance based and the number of nodes based PC schemes is found to outperform the local approach, which is, however, much less sensitive to the partition granularity.

Until now, the two elaborated chapters handled the case of rare events in WSN, in which the targets are supposed spatially clustered, which leads to the rarity and sparseness of the cells holding targets. This gave an adequate framework for the application of CS theory, with the specificity of integer entries of the sparse parameter. Staying within the sparse signal recovery issue, we will focus in the next chapter on the recovery of a sparse parameter whose active entries take continuous values. The envisaged application is that of sparse channel CIR estimation and tracking in OFDM systems.


Chapter 5

Sparse Dynamic Signal Tracking: Application to Fast Fading OFDM Channel Estimation

5.1 Introduction

There are growing demands for high-speed data transmission in wireless communication systems such as cellular networks. Some applications in WSN also have high data rate requirements, while the sensor nodes operate under severe constraints on energy consumption and resources. Most existing technologies, such as WiFi, Bluetooth and Zigbee, are unable to meet these requirements. Thanks to features such as a large bandwidth and a low power spectral density, UWB technology can offer an ideal solution for high-speed transmission in WSN [113]. In particular, UWB systems provide high data rates, very good time domain resolution and immunity to multipath fading [114].

Among UWB techniques, the MB-OFDM technique is suitable for high data rate WSN thanks to its high spectral efficiency and robustness to multipath environments [115]. Thanks to OFDM-based modulation, such systems can effectively deal with the delay spread and frequency selectivity of UWB channels. Since OFDM modulation provides very good spectral control, interference with narrow band receivers can be avoided. UWB communications are subject to a large time spread, which frequently induces a sparse structure of the channel equivalent sampled impulse response [52, 116]. As the performance of coherent UWB transceivers relies on the availability of accurate channel estimates, it is important to design channel estimation strategies that


achieve the best estimation accuracy. Such techniques can exploit the structural and statistical properties of the propagation channel.

This chapter is concerned with sparse OFDM channel estimation and tracking. More precisely, the problem of structure detection will be investigated in detail. Exploiting the sparse structure of the channel can lead to an important performance enhancement. Next, an overview of the framework is given.

5.2 Motivation

Wideband wireless communications are usually subject to severe multipath impairments. The OFDM technique has been used for its high spectral efficiency and good reliability to mitigate multipath fading [115, 117]. As channel estimation is an essential task in OFDM coherent receivers [39–41], it is recommended to exploit all the available characteristics of the wireless channel in order to estimate it efficiently.

5.2.1 Wireless channel characteristics

The large delay spread and mobility that characterize wireless propagation lead to frequency selective, time varying channels. According to the Doppler frequency, the coherence time can cover several OFDM blocks or a single one. In the first case, block-type channel estimation [118] is considered, where the channel is estimated once over the coherence time, based on one dedicated OFDM pilot block. If the channel varies between adjacent blocks, we rather use comb estimation [119], in which pilots and data are multiplexed in every OFDM block and channel estimation is repeated over each block. The data and pilot repartition over the time-frequency grid is adapted according to the channel dynamics in the time and frequency domains respectively. The Channel Frequency Response (CFR) required for data demodulation is obtained by interpolating the channel response on the pilot sub-carriers [120].
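As a small illustration (not from the thesis), the interpolation step of comb-type estimation can be sketched with NumPy, here with linear interpolation of the real and imaginary parts over a regular pilot grid; the sizes and the toy channel are arbitrary assumptions.

```python
import numpy as np

Ns, Np = 64, 16
pilot_idx = np.arange(0, Ns, Ns // Np)                 # regularly spaced pilot tones
H_true = np.fft.fft(np.exp(-0.3 * np.arange(6)), Ns)   # toy frequency-selective CFR
H_pilot = H_true[pilot_idx]                            # CFR on the pilots (noiseless here)

# Linear interpolation of real and imaginary parts over all Ns subcarriers.
k = np.arange(Ns)
H_full = np.interp(k, pilot_idx, H_pilot.real) + 1j * np.interp(k, pilot_idx, H_pilot.imag)
```

Higher order (second order or cubic spline) interpolators follow the same pattern with a different interpolation routine.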

A second important feature of the considered wireless channels is their time domain sparse structure, where most of the CIR coefficients are zero or near-zero valued. This sparsity in the time domain is induced by the high delay spread occasioned by multipath propagation and can also result from a relatively high sampling rate. In such channels, the CIR presents a sparse structure, or can approximately be considered as sparse. We hereafter derive the relationship between the baseband equivalent CIR and its sampled version.


Consider the continuous CIR h(t) corresponding to an L multipath channel, which can be expressed as

$$h(t) = \sum_{l=0}^{L-1} h_l\, \delta(t - \tau_l), \quad (5.1)$$

where $h_l$ and $\tau_l$ respectively denote the $l$th path gain and delay. Let the sampled equivalent CIR be denoted by $\mathbf{h} = [h_0, \ldots, h_{N_s-1}]^T$, where the sampling period $T_s$ is taken as $T_s = T_u/N_s$, with $T_u$ and $N_s$ denoting respectively the duration and the number of subcarriers of the OFDM symbol. Let $\mathbf{H}$ denote the corresponding CFR, which can be written as

$$\mathbf{H} = \mathbf{F}\mathbf{h}, \quad (5.2)$$

where $\mathbf{F}$ is the $N_s \times N_s$ DFT matrix with entries $F_{l,m} = \frac{1}{\sqrt{N_s}}\, e^{-j2\pi(l-1)(m-1)/N_s}$.

The $k$th component of $\mathbf{H}$ is expressed as

$$H_k = \sum_{l=0}^{L-1} h_l\, e^{-j2\pi k l/N_s}. \quad (5.3)$$

It can also be expressed as

$$H_k = \sum_{l=0}^{L-1} h_l\, e^{-j2\pi k \tau_l/(N_s T_s)}. \quad (5.4)$$

Therefore, the CFR $\mathbf{H}$ can be rewritten as

$$\mathbf{H} = \bar{\mathbf{F}}\bar{\mathbf{h}}, \quad (5.5)$$

where $\bar{\mathbf{F}}$ is the $N_s \times L$ matrix with entries $\bar{F}_{l,m} = \frac{1}{\sqrt{N_s}}\, e^{-j2\pi l \tau_m/(N_s T_s)}$ and $\bar{\mathbf{h}} = [h_0, \ldots, h_{L-1}]^T$ corresponds to the CIR observed at the receiver.

Based on the last established equation, the sampled equivalent CIR $\mathbf{h}$ can be expressed as

$$\mathbf{h} = \mathbf{F}^{-1}\bar{\mathbf{F}}\bar{\mathbf{h}} = \mathbf{M}\bar{\mathbf{h}}, \quad (5.6)$$


where

$$M_{m,n} = \sum_{i=0}^{N_s-1} F^{-1}_{m,i}\,\bar{F}_{i,n}
= \frac{1}{N_s} \sum_{i=0}^{N_s-1} e^{j\frac{2\pi i}{N_s}\left(m - \frac{\tau_n}{T_s}\right)}
= \frac{1}{N_s}\, \frac{1 - e^{j2\pi\left(m - \frac{\tau_n}{T_s}\right)}}{1 - e^{j\frac{2\pi}{N_s}\left(m - \frac{\tau_n}{T_s}\right)}}
= \frac{1}{N_s}\, e^{j\frac{\pi}{N_s}\left(m + (N_s-1)\frac{\tau_n}{T_s}\right)}\, \frac{\sin\!\left(\pi\frac{\tau_n}{T_s}\right)}{\sin\!\left(\frac{\pi}{N_s}\left(m - \frac{\tau_n}{T_s}\right)\right)}. \quad (5.7)$$

Thus, the $k$th component of $\mathbf{h}$ is given by

$$h_k = \frac{1}{N_s} \sum_{l=0}^{L-1} h_l\, e^{j\frac{\pi}{N_s}\left(k + (N_s-1)\frac{\tau_l}{T_s}\right)}\, \frac{\sin\!\left(\pi\frac{\tau_l}{T_s}\right)}{\sin\!\left(\frac{\pi}{N_s}\left(k - \frac{\tau_l}{T_s}\right)\right)}. \quad (5.8)$$

In this way, we can distinguish two cases.

• For a sample-spaced delay $\tau_l$, there exists $p_l \in \mathbb{N}$ such that $\tau_l = p_l T_s$, thus leading to $M_{k,l} = \delta_{k,p_l}$, showing that the energy from the $l$th path is mapped to one tap of the sampled CIR.

• For a non sample-spaced delay, there exists an energy leakage to adjacent channel taps. However, the energy contribution is maximum around the coefficient $p_l$, where $p_l$ is the integer part of $\tau_l/T_s$.

As a consequence, for each path, depending on whether the delay is sample spaced or not, its contribution is respectively restricted to one CIR tap or leaks over a set of adjacent CIR taps. Note that the sampled CIR length corresponds to the nearest integer larger than $\max_l (\tau_l/T_s)$. It is upper bounded by the cyclic prefix length in order to guarantee the absence of inter block interference. For disparate delays, even the case of non sample-spaced delays leads to the presence of very weak CIR coefficients and thus to an approximately or perfectly sparse CIR.
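The two cases above can be checked numerically. The sketch below reimplements the per-path leakage pattern of eq. (5.8) in its equivalent Dirichlet-kernel form (an illustrative reimplementation; Ns, the gains and the delays are arbitrary, and the phase convention may differ from the thesis's by a sign).

```python
import numpy as np

Ns, Ts = 64, 1.0

def sampled_cir(gains, delays, Ns=Ns, Ts=Ts):
    """Map continuous path gains/delays to the Ns-tap sampled CIR (cf. eq. (5.8))."""
    k = np.arange(Ns)
    h = np.zeros(Ns, dtype=complex)
    for g, tau in zip(gains, delays):
        x = tau / Ts
        num = np.sin(np.pi * (k - x))
        den = Ns * np.sin(np.pi * (k - x) / Ns)
        with np.errstate(invalid="ignore", divide="ignore"):
            w = np.where(np.isclose(den, 0.0), 1.0, num / den)  # Dirichlet kernel, limit 1 at k = x
        h += g * np.exp(1j * np.pi * (Ns - 1) * (k - x) / Ns) * w
    return h

h_int = sampled_cir([1.0], [3.0])   # sample-spaced delay: one active tap
h_frac = sampled_cir([1.0], [3.4])  # non sample-spaced delay: leakage around tap 3
```

For the sample-spaced delay, all the energy sits in tap p_l = 3, while for τ = 3.4 T_s the energy spreads over several taps, with its maximum around the coefficient p_l = 3, as stated above.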

In our work, we consider the case of perfectly sparse channels. Based on the above analysis, this model encompasses both cases of sample and non sample-spaced delays. In [121], it is shown that, below an SNR value inversely proportional to the tap energy, neglecting the corresponding coefficient, and thus considering a sparse CIR structure, leads to higher accuracy, in terms of mean squares estimation error, than a structured estimation respecting the true CIR support. In this way, the above assumption of perfect channel sparsity is legitimate.


5.2.2 Framework overview

Before introducing the innovative part, an overview of the conventional techniques for sparse

channel estimation in OFDM systems is required.

5.2.2.1 Classical approaches for sparse OFDM channel estimation

For sparse channels, Least Squares (LS) [122] based channel estimation can be enhanced. Many existing works exploit the a priori knowledge of channel sparsity, which results in a more accurate channel estimation [42–44]. To this aim, several studies have addressed the CIR structure detection issue, such as the Generalized Akaike Information Criterion (GAIC) [123], which operates an iterative processing to locate, one by one, the effective CIR taps. Also, the OMP algorithm, which is based on a projection procedure, has been successfully applied to sparse channel estimation [124, 125]. A low complexity approach based on a thresholding procedure has also been studied, where the coefficients of a coarse LS CIR estimate are compared to optimized tap-dependent thresholds. In this context, two threshold-based CIR structure detection methods, the Probabilistic Framework Estimator (PFE) [126] and the Threshold-based Mean Squares Error (TMSE) [127], minimizing respectively the detection error probability and the elementwise mean squares error, are investigated. By exploiting a priori knowledge about the channel sparsity level, tap-tuned thresholds are designed to improve the channel estimation accuracy. A local sparsity level (LSL) measure is incorporated [128]. It quantifies the probability of each tap to be active (not zero valued) according to its energy, recovered from a first raw CIR estimate.

5.2.2.2 Contribution

Our concern is with frequency selective, fast fading sparse channels, for which comb-type estimation is an adequate choice. In order to enhance the channel estimation accuracy while accounting for the sparsity of the CIR structure, we balance between estimation performance, computational load and spectrum efficiency.

In the rest of this section, we summarize the two main contributions of our work in the frame of sparse channel estimation and tracking.

• In the first part, we propose to exploit, in the channel response estimation, both the CIR sparsity and the slow temporal variation of its support, induced by the slow variation of the propagation delays. Indeed, realistic models for fast fading channels predict highly non stationary gains and a much slower temporal variation of the delays. More precisely, we propose a novel scheme that tracks the CIR by disjointly tracking the delay subspace, by Kalman filtering without prior knowledge of the channel order, and tracking the CIR support. In this context, a thresholding procedure based CIR structure detection over the already detected support is considered.

• In the second part of this chapter, the CIR sparsity is exploited for reducing the number of pilot tones and thus enhancing the spectrum efficiency. More precisely, the problem of optimized pilot placement is addressed and, based on CS rules, a new computationally effective suboptimal pilot arrangement strategy is proposed.

The next section addresses sparse channel estimation in OFDM systems. After this, the CIR tracking aspects are studied and the proposed new schemes are developed. Then, the CS based OFDM pilot allocation is studied.

5.3 Sparse Channel Estimation in OFDM Systems

The system model used for OFDM channel estimation is here established, followed by the description of the structured LS estimator.

5.3.1 System model description

We here consider a cyclic prefix (CP)-OFDM system with $N_s$ subcarriers, among which $N_p$ (with $N_p \leq N_s$) are selected as pilots for comb-type channel estimation. The pilot tone positions, ranged in ascending order, are denoted $P_1, \ldots, P_{N_p}$. We assume perfect synchronization and that the CP of length $N_g$ is longer than the channel memory, denoted $L$, in order to avoid the inter-symbol interference problem. At the receiver, the transmitted OFDM signal is sampled at a sampling rate $\frac{1}{T_s}$ equal to $N_s$ times the subcarrier spacing. Then, the FFT output over the $N_p$ pilot subcarriers after demodulation is expressed as [129]

$$\mathbf{z}_p = \sqrt{N_s}\,\mathbf{A}\mathbf{h} + \boldsymbol{\eta}, \quad (5.9)$$

where $\mathbf{A} = \mathbf{X}_d \mathbf{F}_p$, with $\mathbf{X}_d$ an $N_p \times N_p$ diagonal matrix containing the pilot symbols $x(P_i)$ and $\mathbf{F}_p$ an $N_p \times N_s$ Fourier submatrix, indexed in rows by $[P_1, \ldots, P_{N_p}]$, obtained from the standard $N_s \times N_s$ DFT matrix $\mathbf{F}$. $\mathbf{h} = [h_1, \ldots, h_L, \mathbf{0}_{N_s-L}]^T$ is the sampled equivalent CIR, with coefficients modeled by complex Gaussian processes such that $h_k \sim \mathcal{CN}(0, 2\sigma^2_{c_k})$, and $\boldsymbol{\eta} = [\eta_1, \ldots, \eta_{N_p}]^T \sim \mathcal{CN}(0, \sigma^2 \mathbf{I}_{N_p})$ denotes


the AWGN noise component. Note that the CIR memory is here abusively denoted by L and is different from the number of multipaths in (5.1).

Frequency selective channels can be characterized by a perfectly or approximately sparse CIR structure [121]. In our work, the CIR support is sparse: only K among the L tap coefficients are non zero valued, with K ≪ L.

5.3.2 Structured LS channel estimation

As mentioned above, the sparse channel estimation problem may be formulated as CIR structure detection associated with structured estimation. The CIR structure can be detected using different procedures, as shown in the block diagram of figure 5.1. Comparing the raw CIR estimate coefficients recovered by the LS method to some optimized thresholds allows recovering the active tap positions. Differently, the OMP recovers the CIR structure by iterative projection of the comb CIR estimate on the DFT decomposition matrix. Further, based on the detected CIR support, a structured LS estimator can be designed to denoise its estimate.

[Figure: block diagram; z → comb LS (F†p) → ĥls; CIR structure detection via a thresholding procedure or a projection/information criterion yields Ŝ; structured LS estimation then yields ĥs and Ĥs = Fs ĥs.]

Figure 5.1: Block diagram of structured LS estimation based on CIR structure detection.

We here exploit the sparsity of the CIR, seen through the small number K of non zero-valued taps, whose positions form the set $S = \{s_1, s_2, \ldots, s_K\}$. Let $\mathbf{F}_{ps}$ define the $N_p \times K$ sub-matrix of the DFT matrix $\mathbf{F}$ with the rows corresponding to the pilot tone positions and the columns corresponding to the non zero valued CIR taps. Then, the CFR over the pilot subcarriers can be expressed as

$$\mathbf{H}_p = \mathbf{F}_p \mathbf{h} = \mathbf{F}_{ps}\mathbf{h}_s, \quad (5.10)$$

where the vector $\mathbf{h}_s$ of size K concatenates the non zero-valued entries of $\mathbf{h}$.

In the conventional CFR estimation scheme, once the CFR over the pilot subcarriers is deduced, an interpolation step, ranging from simple linear to second order and cubic spline, can be used to recover the whole frequency response. We here propose to avoid the interpolation step and


to rather process a time domain denoising exploiting the sparse CIR structure, following the scheme in figure 5.1. Indeed, eq. (5.9) can be re-expressed as

$$\mathbf{z}_p = \sqrt{N_s}\,\mathbf{X}_d \mathbf{F}_{ps}\mathbf{h}_s + \boldsymbol{\eta}. \quad (5.11)$$

Let $\hat{S} = \{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{\hat{K}}\}$ denote the positions of the CIR taps detected as active, with cardinality $\hat{K}$. The structured LS CIR estimate is then obtained as

$$\hat{\mathbf{h}}_s = \frac{1}{\sqrt{N_s}}\,\hat{\mathbf{F}}^{\dagger}_{ps}\mathbf{X}^{-1}_d \mathbf{z}_p = (\hat{\mathbf{F}}^{H}_{ps}\hat{\mathbf{F}}_{ps})^{-1}\hat{\mathbf{F}}^{H}_{ps}\mathbf{F}_{ps}\mathbf{h}_s + \frac{1}{\sqrt{N_s}}\,\hat{\mathbf{F}}^{\dagger}_{ps}\mathbf{X}^{-1}_d \boldsymbol{\eta}, \quad (5.12)$$

where $\hat{\mathbf{F}}_{ps}$ is the $N_p \times \hat{K}$ sub-matrix of $\mathbf{F}_p$ obtained by selecting the columns corresponding to the detected tap positions.

Then, the CFR is deduced as

$$\hat{\mathbf{H}}_s = \hat{\mathbf{F}}_s \hat{\mathbf{h}}_s, \quad (5.13)$$

where $\hat{\mathbf{F}}_s$ denotes the $N_s \times \hat{K}$ sub-matrix of $\mathbf{F}$ obtained by selecting the columns corresponding to the detected tap positions.

Let us mention that using (5.12) requires, for the structured LS to be operational, a minimal number of pilot subcarriers proportional to the sparsity level K. For the raw LS estimate used in threshold based structure detection, this constraint becomes a minimal number of pilots equal to the channel memory L. The latter constraint is avoided if a projection or information criterion is used rather than thresholding.
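The pipeline of figure 5.1, raw pilot observations followed by structured LS on a detected support as in eqs. (5.11)-(5.13), can be sketched as follows. This is an illustrative NumPy sketch assuming the true support is perfectly detected; the sizes, pilot symbols and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Np, K = 64, 16, 3

pilots = np.arange(0, Ns, Ns // Np)            # pilot tone positions P_1..P_Np
n = np.arange(Ns)
F = np.exp(-2j * np.pi * np.outer(n, n) / Ns) / np.sqrt(Ns)  # unitary DFT matrix

# Sparse CIR: K active taps among the first L coefficients
support = np.array([0, 3, 9])
h = np.zeros(Ns, dtype=complex)
h[support] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

Xd = np.diag(np.exp(1j * np.pi / 2 * rng.integers(0, 4, Np)))  # QPSK pilot symbols
Fp = F[pilots, :]

# Eq. (5.9)/(5.11): FFT output over the pilot subcarriers
eta = 0.01 * (rng.standard_normal(Np) + 1j * rng.standard_normal(Np))
z_p = np.sqrt(Ns) * Xd @ Fp @ h + eta

# Eq. (5.12): structured LS restricted to the detected support
Fps = Fp[:, support]
h_s = np.linalg.pinv(Fps) @ np.linalg.inv(Xd) @ z_p / np.sqrt(Ns)

# Eq. (5.13): CFR rebuilt from the structured estimate
H_s = F[:, support] @ h_s
```

With a correctly detected support, h_s denoises the active taps, since the noise is projected onto a K-dimensional subspace instead of spreading over all Ns coefficients.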

5.4 CIR Tracking in Fast Fading Channel

The aim of the following is to provide an enhanced solution to the problem of time varying channel tracking for sparse CIR. Our contribution indeed benefits from the slow variation of the sparse CIR support to assist channel subspace tracking. We first give an overview of threshold based CIR support detection, which is part of our proposed scheme, and then move to detailing the proposed solution.


5.4.1 CIR support detection using thresholding

As shown in figure 5.1 describing the structured LS processing, firstly, the CFR estimate over the pilot subcarriers using the LS method is derived as

$$\hat{\mathbf{H}}_p = \frac{1}{\sqrt{N_s}}\,\mathbf{X}^{-1}_d \mathbf{z}_p = \mathbf{F}_p \mathbf{h} + \frac{1}{\sqrt{N_s}}\,\mathbf{X}^{-1}_d \boldsymbol{\eta}. \quad (5.14)$$

Our goal is to estimate $\mathbf{h}$ from $\hat{\mathbf{H}}_p$. The CIR LS estimator is expressed as

$$\hat{\mathbf{h}}_{ls} = \mathbf{F}^{\dagger}_p \hat{\mathbf{H}}_p = \mathbf{F}^{\dagger}_p \mathbf{F}_p \mathbf{h} + \frac{1}{\sqrt{N_s}}\,\mathbf{F}^{\dagger}_p \mathbf{X}^{-1}_d \boldsymbol{\eta}, \quad (5.15)$$

where $\mathbf{F}^{\dagger}_p = \mathbf{F}^{H}_p (\mathbf{F}_p \mathbf{F}^{H}_p)^{-1} = \mathbf{F}^{H}_p$. The last equation can be re-written as

$$\hat{\mathbf{h}}_{ls} = \mathbf{F}^{H}_p \mathbf{F}_p \mathbf{h} + \frac{1}{\sqrt{N_s}}\,\mathbf{F}^{H}_p \mathbf{X}^{-1}_d \boldsymbol{\eta} = \mathbf{G}\mathbf{h} + \frac{1}{\sqrt{N_s}}\,\mathbf{F}^{H}_p \mathbf{X}^{-1}_d \boldsymbol{\eta}, \quad (5.16)$$

where the elements of the $N_s \times N_s$ matrix $\mathbf{G} = \mathbf{F}^{H}_p \mathbf{F}_p$ are given by

$$g_{ij} = \sum_{k=1}^{N_p} (\mathbf{F}^{H}_p)_{ik} (\mathbf{F}_p)_{kj}. \quad (5.17)$$

The $k$th CIR tap LS estimate is then

$$\hat{h}_k = \bar{h}_k + n_{h_k}, \quad (5.18)$$

where $\bar{h}_k = \sum_{i=1}^{N_s} g_{ki} h_i$ denotes the $k$th tap coefficient LS estimate in the noiseless case and $n_{h_k} \sim \mathcal{N}(0, 2\sigma^2_n)$, where $\sigma^2_n = \frac{\sigma^2}{2N^2_s} \sum_{i=1}^{N_p} \frac{1}{|x(P_i)|^2}$ is the CIR LS estimation noise variance per dimension.

For the comb-type LS solution, the uniform pilot allocation is optimal [130], the $N_p$ pilots being regularly spaced. For a regular pilot arrangement, the adjacent pilot subcarrier spacing is $\Delta_P = \frac{N_s}{N_p}\,\frac{1}{T_u}$. It is worth noting that $\Delta_P$ should not exceed the coherence bandwidth [131]. Consequently, $N_p$ should be chosen greater than the channel memory $L$, i.e., $N_p \geq L$. As, in practice, $L$ is unknown to the transmitter, it is sufficient to choose $N_p > N_g$ to fulfill the last constraint.


5.4.2 CIR tracking

In this part, the slow variation of the channel delays, in contrast to the fast fluctuation of its gains between consecutive OFDM symbols, is exploited for CIR tracking purposes. Contrarily to our context, the earlier work on channel tracking in [132] does not consider the particular case of sparse channels. More precisely, a subspace tracking method based on the Kalman filter is here applied for delay-subspace tracking and is combined with CIR support tracking.

The two proposed channel tracking schemes are summarized in the block diagram of figure 5.2. Both schemes exploit the slow variation of the delay profile. An intermediate step based on the Kalman filter is added for delay-subspace tracking and CFR estimation improvement. The CIR estimate is then recovered by IDFT. Scheme a directly associates threshold-based detection on the first Ng components of the recovered CIR estimate. In scheme b, an intermediate step of CIR support tracking is inserted. Like scheme a, scheme b uses a final step of threshold-based CIR structure detection before structured LS application, as depicted in figure 5.2.

[Figure: block diagram; comb LS, subspace Kalman tracking, IDFT, selection of the Ng first components; then either direct threshold-based detection (scheme a) or support tracking with weighting, block structure updates and σn evaluation (scheme b), both followed by threshold-based approaches and structured LS estimation.]

Figure 5.2: Proposed Kalman-based subspace tracking and CIR support tracking schemes diagram.

CIR model

The temporal correlation study of practical wireless channels has shown that the delay of a path varies at a much slower time scale than its gain [133]. This is accounted for in the time variation models of the CIR support and path gains.

• CIR support: With the slowly varying propagation delays, the CIR support is assumed to be static over a block of B adjacent OFDM symbols. It is also supposed that the active positions evolve independently. Each active coefficient position moves between two adjacent blocks by a stochastic value from the set {−u, −u+1, . . . , u−1, u} within the support interval, where u is a positive integer related to mobility. In this way, the CIR support elements are modeled by a discrete time Markov chain following a random walk process.

• Path gains: The gains are assumed independent between OFDM symbols and are randomly generated. Each CIR coefficient hk is modeled by a complex Gaussian variable with zero mean and variance 2σ²ck.

5.4.3 Subspace tracking with Kalman filter

As stated in [132] and [134], the CFR lies within the delay subspace (spanned by $\mathbf{U}_{opt}$), whose temporal variation, like that of the delays, is slow. It can then be tracked independently from the highly non stationary propagation gains, through the use of a subspace tracking approach such as Kalman filtering.

The Kalman filtering can be summarized in the following steps. For OFDM symbol n (nth iteration):

1. Compute $\mathbf{y}(n) = \mathbf{U}^{H}_{opt}(n-1)\hat{\mathbf{H}}_p(n)$, where $\hat{\mathbf{H}}_p(n)$ is the CFR at the $n$th iteration recovered by unstructured LS according to eq. (5.14) and $\mathbf{U}_{opt}$ is an $N_p \times N_g$ submatrix composed of the $N_g$ eigenvectors corresponding to the largest eigenvalues in the decomposition of the $\hat{\mathbf{H}}_p$ correlation matrix, which spans the signal subspace.

2. Compute the gain
$$\mathbf{g}(n) = \frac{\mathbf{K}(n, n-1)\,\mathbf{y}(n)}{\mathbf{y}^{H}(n)\,\mathbf{K}(n, n-1)\,\mathbf{y}(n) + \zeta_e(n-1)},$$
where $\mathbf{K}(1, 0) = \mathbf{I}_{N_g}$ and $\zeta_e(0) = c_1$.

3. Update step:
$$\mathbf{K}(n+1, n) = \mathbf{K}(n, n-1) - \mathbf{g}(n)\,\mathbf{y}^{H}(n)\,\mathbf{K}(n, n-1);$$
the update of $\mathbf{U}_{opt}(n)$ combines $\mathbf{U}_{opt}(n-1)$, corresponding to the previous OFDM symbol, and an equivalent error term:
$$\mathbf{U}_{opt}(n) = \mathbf{U}_{opt}(n-1) + \left[\hat{\mathbf{H}}_p(n) - \mathbf{U}_{opt}(n-1)\,\mathbf{y}(n)\right]\mathbf{g}^{H}(n).$$

The CFR estimate obtained at the output of the Kalman filter over the pilot subcarriers is expressed as [132]

$$\hat{\mathbf{H}}_{kalman}(n) = \mathbf{U}_{opt}(n)\,\mathbf{U}^{H}_{opt}(n)\,\hat{\mathbf{H}}_p(n). \quad (5.19)$$

In order to accelerate convergence, $\zeta_e(n)$ is also updated for each OFDM symbol as

$$\zeta_e(n) = c_2\,\|\hat{\mathbf{H}}_{kalman}(n) - \hat{\mathbf{H}}_p(n)\|, \quad (5.20)$$

where $c_1 = 10$ and $c_2 = 0.75$ are adjustable constant factors, here fixed as in [132].
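One iteration of this recursion can be sketched as follows (an illustrative NumPy sketch; the function name and the initialization mentioned in the usage note are assumptions consistent with steps 1-3 above, not the thesis's code):

```python
import numpy as np

def kalman_subspace_step(U, Kmat, zeta, Hp, c2=0.75):
    """One iteration of the delay-subspace Kalman tracker (steps 1-3, eqs. (5.19)-(5.20))."""
    y = U.conj().T @ Hp                                # step 1: project onto the tracked subspace
    g = (Kmat @ y) / (y.conj().T @ Kmat @ y + zeta)    # step 2: Kalman gain
    Kmat = Kmat - np.outer(g, y.conj()) @ Kmat         # step 3: K(n+1, n)
    U = U + np.outer(Hp - U @ y, g.conj())             # subspace update
    H_kalman = U @ U.conj().T @ Hp                     # eq. (5.19): filtered CFR on the pilots
    zeta = c2 * np.linalg.norm(H_kalman - Hp)          # eq. (5.20): convergence control
    return U, Kmat, zeta, H_kalman
```

Initializing with K(1, 0) = I_{Ng}, ζe(0) = c1 = 10 and any orthonormal Np × Ng matrix U_opt(0), the function is called once per OFDM symbol with the unstructured LS estimate Ĥp(n).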


After the Kalman filter application, in the first scheme (scheme a), displayed in the upper part of figure 5.2, a thresholding procedure with enhanced PFE and TMSE over the first Ng components of the CIR deduced from ĤKalman is applied to detect the CIR structure, and a structured LS estimation is then processed. In the second scheme (scheme b), we suggest incorporating, as will be hereafter detailed, a CIR support tracking step before the threshold-based detection and the structured LS estimation.

5.4.4 CIR support tracking

Exploiting the slow time variation of the propagation delays, and thus the CIR support stationarity over a large number of OFDM symbols, a learning phase is here incorporated after the Kalman filtering and before the threshold-based CIR structure detection, in order to estimate and track the CIR support. According to the delay variation speed, we can assume that the CIR structure is constant over a block of B adjacent OFDM symbols. Then, exploiting the slow variation of the CIR support between two consecutive blocks, a recursive procedure is used to track the support. At each new OFDM symbol, as shown in figure 5.3, the CIR support tracking procedure combines the detected CIR supports $S_{i-1}$ and $S_i$ of the last and current blocks (of B OFDM symbols each) as

$$S_{r_i} = f\big(\xi S_{i-1} + (1 - \xi) S_i\big), \quad (5.21)$$

where the forgetting factor $\xi \in [0, 1]$, and $S_{i-1}$ and $S_i$ are binary vectors (1 for a nonzero and 0 for a zero valued coefficient). They denote the estimated CIR supports for blocks $i-1$ and $i$ respectively, corresponding to the last two adjacent blocks of B OFDM symbols each (see figure 5.3). $f(.)$ is the rounding/clipping function. Then, the CIR support estimated on the $i$th block is obtained, for $j = 1, \ldots, N_g$, as

$$S_{r_i}(j) = \begin{cases} 1 & \text{if } \xi S_{i-1}(j) + (1 - \xi) S_i(j) \geq 0.5, \\ 0 & \text{otherwise.} \end{cases} \quad (5.22)$$

For deriving $S_i$ from the last B OFDM symbols (symbols $i-B+1$ to $i$), we consider, for each OFDM symbol of the block, the CIR support derived from the LS CIR estimate by comparing the $N_g$ first components of $\hat{\mathbf{h}}_{ls}$ given by eq. (5.16) to the noise variance $\sigma^2_n$:

$$\begin{cases} |\hat{h}_{ls}(k)|^2 \geq \sigma^2_n \text{ and } k \leq N_g: & \text{detect as active component,} \\ |\hat{h}_{ls}(k)|^2 < \sigma^2_n \text{ or } k > N_g: & \text{rejection.} \end{cases} \quad (5.23)$$


[Figure: sliding window of B OFDM symbols over blocks i−1 and i; the supports S_{i−1} and S_i are weighted by ξ and 1−ξ, summed, and passed through f(.) to yield S_{r_i} at the ith OFDM symbol.]

Figure 5.3: CIR support tracking procedure.

In practice, to process (5.22), we need the knowledge of the CIR LS estimation noise level $\sigma_n$. We propose to estimate this parameter from the $N_s - N_g$ last, pure noise, components of the LS CIR estimate $\hat{\mathbf{h}}_{ls}$, located outside the CP, as follows [126]:

$$\hat{\sigma}_n = \sqrt{\frac{2}{\pi}}\; \frac{1}{N_s - N_g} \sum_{i=N_g+1}^{N_s} |\hat{h}_i|. \quad (5.24)$$

Once the CIR structures Sl (l = i − B + 1, . . . , i) of the B OFDM symbols of the block are derived, we estimate, for each coefficient (k = 1, . . . , Ng), its probability p(k) of being active. This probability is estimated as the fraction of the B recovered CIR structures of the block in which tap k is detected as active. This can be described by

p(k) = card( { Sl | Sl(k) = 1, l = i − B + 1, . . . , i } ) / B. (5.25)

Then, in Si (refer to figure 5.3), we retain only the components with probability p(k) > λ, where λ denotes a pre-chosen threshold. The obtained CIR structure Si is then used through eq. (5.21) to compute the CIR structure Sri, over which the thresholding step, PFE [126] or TMSE [127], will operate to refine the support detection accuracy.

To summarize, the CIR support tracking algorithm, incorporated within scheme b in figure 5.2, performs (after Kalman filtering and IDFT for Ng-point CIR recovery) the following steps for each OFDM block:

• At the ith OFDM block, we detect the CIR structure on the first Ng components following eq. (5.23).


• During a learning phase over the last B OFDM symbols (from i − B + 1 to i), the CIR support is estimated, then revised to some degree using a weighting criterion combining the past block and the current block CIR structures as follows:

- consider the so-detected last B CIR structures to derive p(k), k = 1, . . . , Ng, following eq. (5.25);

- deduce Si;

- apply eq. (5.21) to get Sri.

After this CIR support tracking, the improved thresholding procedure, PFE [135] or TMSE [127], which will be explained hereafter, is applied over the estimated CIR support in order to enhance the support detection before the structured CIR LS estimation. In this way, in scheme b, PFE and TMSE do not act on all of the first Ng CIR coefficients but only on those detected as active in Sri. The aim of this processing is to reduce the false alarm rate (detection of noise as an active coefficient). It cannot, however, remedy missed detections.
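The tracking steps above can be sketched as follows; this is a minimal illustration assuming binary support vectors, with illustrative names (a real implementation would feed in the per-symbol detections of eq. (5.23)).

```python
import numpy as np

def track_support(S_prev, supports_block, xi=0.5, lam=0.8):
    """Combine the past-block support S_prev with the supports detected
    over the current block of B OFDM symbols (eqs. (5.21), (5.22) and
    (5.25)); supports_block is a (B, Ng) binary array."""
    B = supports_block.shape[0]
    p = supports_block.sum(axis=0) / B         # eq. (5.25): activity rate
    S_cur = (p > lam).astype(int)              # retain taps with p(k) > lambda
    # eqs. (5.21)-(5.22): weighted combination, then rounding to {0, 1}
    S_ri = (xi * S_prev + (1 - xi) * S_cur >= 0.5).astype(int)
    return S_ri, S_cur
```

With ξ = 0.5, a tap is kept in Sri when it is active in the past-block support, in the current-block support, or both.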

As already mentioned above, for better spectral efficiency, the number of pilot subcarriers Np

should be kept as low as possible. Exploiting CIR sparsity, it is possible to go below the

coherence bandwidth constraint and to use a number of pilots proportional to the degree of

sparsity K (number of active CIR coefficients) and apply CS recovery algorithms. For such algorithms, unlike for LS, uniform pilot placement is suboptimal. The optimization of pilot placement is addressed hereafter.

5.5 Pilots Allocation for Sparse Channel Estimation

In this part, we investigate the problem of sparse channel estimation using CS that fully

explores the characteristics of channel sparsity [79], and allows a reduced pilot overhead and

thus a better spectral efficiency.

In the following, the mathematical model for CS based sparse channel estimation will be as-

sessed. Pilot allocation problem will be then introduced and an overview of the literature related

works is given. In a second step, the proposed pilot placement scheme will be presented.


5.5.1 Mathematical model

The CS approach allows relaxing the constraint on the minimal number of pilots from L to a lower number proportional to the sparsity degree K, while guaranteeing the uniqueness of the decomposition [21], which increases the spectral efficiency.

Using L ≤ Ng < Ns, h denotes hereafter the Ng-long zero-padded sampled equivalent CIR, h = [h1, . . . , hL, 0Ng−L]^T. Eq. (5.9) can accordingly be re-expressed as

zp = √Ns Ag h + η, (5.26)

or equivalently

z′

p = X−1d zp =

NsFpgh+X−1d η, (5.27)

where Ag = Xd Fpg is here an Np × Ng submatrix, Fpg being the submatrix containing the first Ng columns of Fp. Adopting Np < L ≤ Ng leads to an overcomplete matrix Ag with more columns than rows, i.e. Np < Ng. Then (5.26) can be interpreted as a sparse approximation

problem where the sparse parameter h is to be determined from zp and the reconstruction

matrix Ag.
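As an illustration, the reconstruction matrix Fpg of (5.27) can be formed by extracting the pilot rows and the first Ng columns of the Ns-point unitary DFT matrix; a sketch with illustrative names:

```python
import numpy as np

def build_fpg(Ns, pilot_idx, Ng):
    """Rows of the unitary Ns-point DFT matrix at the pilot subcarriers,
    restricted to the first Ng columns (the Fpg of eq. (5.27))."""
    n = np.arange(Ns)
    F = np.exp(-2j * np.pi * np.outer(n, n) / Ns) / np.sqrt(Ns)
    return F[np.ix_(pilot_idx, np.arange(Ng))]
```

The resulting Np × Ng matrix is the overcomplete dictionary against which the sparse CIR h is recovered.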

We hereafter address the pilot allocation optimization to better fit the CS requirements for

unique and accurate sparse parameter recovery (coherence consideration).

5.5.2 Pilots placement optimization

The deterministic pilot allocation refers to the choice of Np pilot tones among the Ns subcarri-

ers. This choice directly affects the CS decomposition basis Ag or Fpg, deduced from the DFT

matrix and associated with the pilot subcarriers. Indeed, the coherence of the DFT submatrix, computed as in (5.29), depends on the pilot subcarrier positions. In [136], it is shown that a pilot design based on a Cyclic Difference Set (CDS) is optimal in the sense of achieving the minimal coherence measurement. Unfortunately, the existence of a CDS is limited to some specific pairs (Ns, Np).

For an arbitrary selection of Np pilot subcarriers from a total of Ns subcarriers, there are C(Ns, Np) pilot allocation possibilities. The exhaustive search for the best allocation over all possible pilot subsets is computationally prohibitive. For example, for Ns = 256 and Np = 16, it leads to a number of possibilities of order 10^25. In particular, for fast fading channels, the channel memory length L and the sparsity degree K vary quickly over time. Thereby, the number of


pilots should be updated accordingly, as well as the CS decomposition matrix. To this end, reduced-complexity, near-optimal allocation schemes are highly preferable. In this context, several schemes have been proposed to find a near-optimal pilot pattern. In [137], a discrete stochastic approximation scheme, based respectively on mean squared error and coherence minimization, is proposed to search for the best sub-optimal pilot locations. On the other hand, in [136] a greedy method was proposed, based on variance minimization of the pilot Positions Differences Occurrences (PDO), to find a suitable set of pilot indexes. Also, a tree-based backward iterative pilot generation scheme is adopted in [138], based on coherence minimization at each iteration.

We distinguish between forward and backward pilot allocation schemes. In the former, pilot positions are added one by one. In the latter, the subcarriers not chosen to carry pilot data are eliminated one by one.

• Suboptimal schemes metrics

The first scheme, introduced in [136], is developed in a forward manner. It is initialized by choosing a first row from the matrix formed by the first Ng columns of F. Then, it adds one row to the reconstruction matrix at each of the Np − 1 iterations.

For a given set of positions in {1, . . . , Ns}, we define the positions differences as the distances, in terms of positions, between pairs of positions. The PDO (Positions Differences Occurrences) are the numbers of occurrences of the possible values of the positions differences, Ok = card(Pi − Pj = k, i ≥ j), for k = 1, . . . , Ns − 1.

In the scheme in [136], each row choice is made such that it minimizes the variance of pilots

PDO.

Let P i denote a given pilot subset choice at the ith iteration and Ok the corresponding number of occurrences of the pilot positions difference equal to k ∈ {1, . . . , Ns − 1}. The considered cost function for P i is denoted by C1(P i) and given by

C1(P i) = var_{k ∈ {1, . . . , Ns−1}} (Ok). (5.28)
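A sketch of the PDO-variance cost C1 (eq. (5.28)); names are illustrative, and plain (non-cyclic) pairwise differences are used here for simplicity:

```python
import numpy as np

def pdo_variance(positions, Ns):
    """C1 (eq. (5.28)): variance of the occurrences Ok of the pairwise
    pilot position differences k = 1, ..., Ns - 1."""
    P = sorted(positions)
    # all positive pairwise differences Pi - Pj, i > j
    diffs = [P[i] - P[j] for i in range(len(P)) for j in range(i)]
    Ok = np.bincount(diffs, minlength=Ns)[1:Ns]   # occurrences of each k
    return Ok.var()
```

A CDS-like allocation, whose difference occurrences are all equal, would drive this variance to zero.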

Contrarily to the above algorithm, the scheme in [138] proceeds in a backward way. At the ith iteration, it constructs a decomposition matrix Fi by eliminating one row from the matrix of the previous iteration. The row to eliminate is chosen to minimize the corresponding coherence measure C2(P i) given by (5.29). This scheme thus operates Ns − Np iterations to construct the Np-row decomposition matrix used by the CS recovery algorithm.

The coherence of Fi, corresponding to a given pilot positions subset P i at the ith iteration, is


evaluated as the maximum absolute correlation between two different columns. It is expressed as

C2(P i) = µ(Fi) = max_{1 ≤ m ≠ n ≤ Ng} |<Fim, Fin>| / (‖Fim‖2 ‖Fin‖2), (5.29)

where Fim is the mth column of Fi and <•, •> denotes the scalar product.
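The coherence measure C2 of eq. (5.29) can be computed through the normalized Gram matrix; a minimal sketch:

```python
import numpy as np

def coherence(F):
    """C2 (eq. (5.29)): maximum absolute correlation between two
    distinct columns of the reconstruction matrix F."""
    G = F / np.linalg.norm(F, axis=0)          # normalize the columns
    C = np.abs(G.conj().T @ G)                 # Gram matrix magnitudes
    np.fill_diagonal(C, 0.0)                   # discard the m = n terms
    return C.max()
```

For an orthonormal matrix the coherence is zero; the closer the columns are to collinear, the closer it gets to one.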

5.5.3 Proposed pilots allocation scheme

In this section, the proposed pilots positioning scheme is detailed. The considered metric is

firstly presented. Then, the proposed scheme for pilot allocation is described.

• Adopted metric

In order to assess the relevance of the two previously detailed metrics (5.28) and (5.29), corresponding to PDO variance and coherence, Iterative Constructions (IC) of the CS matrix, driven either by coherence minimization at each row selection/elimination or by minimization of the variance of the cyclic pilot positions differences occurrences (PDO), are considered and compared for both forward (Forward IC) and backward (Backward IC) schemes.

For this comparison, the herein adopted cost function (CF) corresponds to a linearly weighted combination of the two criteria: the PDO variance C1 of eq. (5.28) and the coherence C2 given by eq. (5.29). The CF to minimize is then, for a given pilot allocation P i, expressed as

CF(P i) = β C1(P i) + (1 − β) C2(P i), β ∈ [0, 1]. (5.30)

At each iteration, the optimal pilot subset P i is chosen as that minimizing CF(P i). The first

term C1(P i) in (5.30) corresponds to the variance of the pilot Positions Differences Occurrences (PDO) and is computed as in [136]. It must be minimized to approach CDS performance, for which this variance is zero. The second term C2(P i) in (5.30) denotes the coherence measure.

In practice, the coherence of Fi is computed as the largest off-diagonal upper-triangular component of FiᴴFi, whereas, for the variance component of the CF, we compute, for each potential allocation, the cyclic differences occurrences of the pilot positions and their variance.

• Proposed allocation scheme: tree-based forward pilot generation

We propose a tree-based structure, processing however in a forward manner. This scheme iteratively builds the matrix Fpg by adding one pilot at each iteration. The basic idea is to use a multiple-tree structure in which not only the best solution, in terms of lowest CF value, but a set of the p best solutions is selected for the next iteration.


The main stages of the proposed scheme are summarized as follows:

• We initialize the index sets by S(1)l = {il}, l = 1, . . . , p, where the il are chosen as pairwise different elements within {1, . . . , Ns} and p corresponds to the number of trees kept at each iteration.

• Then, for each tree, we form all Ns − 1 possible two-element subsets by adding one element (one pilot) to S(1)l as follows:

S(2)l = S(1)l ∪ {s}, s ∈ {1, . . . , Ns}\S(1)l. (5.31)

Among the (Ns − 1) × p obtained two-pilot allocation sets, we choose the p best subsets in terms of minimizing the chosen cost function (CF).

We repeat this procedure Np − 1 times until obtaining p CS matrices with Np pilot positions each. Finally, we pick the pilot set achieving the lowest CF as a near-optimal pilot pattern.

In order to avoid choosing the same solution more than once, the p sets selected at each iteration should, whenever possible, have different CF values.
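The stages above can be sketched as follows, using the coherence (β = 0) as cost function for brevity; names are illustrative and the distinct-CF safeguard mentioned above is omitted:

```python
import numpy as np

def coherence_cf(F):
    """CF for beta = 0, i.e. the coherence of eq. (5.29)."""
    G = F / np.linalg.norm(F, axis=0)
    C = np.abs(G.conj().T @ G)
    np.fill_diagonal(C, 0.0)
    return C.max()

def forward_tree_allocation(Ns, Ng, Np, p=2):
    """Tree-based forward pilot generation: keep the p best partial pilot
    sets (lowest CF) and extend each by one pilot per iteration."""
    n = np.arange(Ns)
    F = np.exp(-2j * np.pi * np.outer(n, np.arange(Ng)) / Ns) / np.sqrt(Ns)
    trees = [[l] for l in range(p)]            # p pairwise-different starts
    for _ in range(Np - 1):                    # Np - 1 growth iterations
        cands = []
        for S in trees:
            for s in range(Ns):
                if s not in S:                 # eq. (5.31): add one pilot
                    S2 = S + [s]
                    cands.append((coherence_cf(F[S2, :]), S2))
        cands.sort(key=lambda c: c[0])         # rank extensions by CF
        trees = [S for _, S in cands[:p]]      # keep the p best sets
    return sorted(trees[0])                    # lowest-CF pilot pattern
```

The cost per iteration grows only with the number of pilots already placed, which is what makes the forward variant cheap for Np ≪ Ns.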

Figure 5.4 illustrates a scheme with two trees (p = 2), where the pilot positions to be selected at each iteration are marked in blue.

Figure 5.4: Two trees based forward structure, Ns = 5, Np = 3, p = 2


5.6 Discussions and Numerical Results

We consider an OFDM system with Ns = 256 subcarriers that multiplexes pilot and data symbols. A sparse multipath channel is considered, with a CIR h of length L = Ng where only K = 3 coefficients with random positions are nonzero-valued. An exponentially decaying power-delay channel profile with rate Γ = 2/Ng is used.

The performance is evaluated in terms of the NMSE of the CFR for a varying SNR = Es/σ², with Es denoting the transmitted energy per symbol. Also, the CIR support detection performance

(missed detection and false alarm rates) is investigated. In addition to performance evaluation, the computational load is studied. In the following, we start with the performance evaluation of the tracking scheme. After this, the OFDM pilot allocation performance will be evaluated.

5.6.1 CIR tracking

• Simulation conditions

In this part, as mentioned above, the number of pilot subcarriers Np should be chosen greater than Ng. We set Np = Ns/4 = 64 and Ng = 16. Also, the pilot subcarriers are regularly spaced,

from a 16-QAM constellation. We consider tap-optimized thresholds by both PFE [126] and TMSE [127], which depend on the probability of each tap being active. This probability is here evaluated through the LSL measure below [128, 135]. For the chosen LSL measure, the probability of the kth tap being active is taken as its estimated gain variance Pak = 2σ²ck, evaluated from the coarse CIR LS estimate hls as [135]

σ²ck = max( (2/π) |hls(k)|² − σ²n , 0 ), k = 1, . . . , Ng. (5.32)

Also, enhanced thresholds accounting for the raw LSL estimation noise are here considered. The versions based on these modified thresholds are referred to as enhanced PFE and enhanced TMSE respectively.

For fast fading channels with stationary propagation delays, the CIR support is assumed to be static over a block of B = 200 OFDM symbols, with fast gain fluctuations between adjacent

OFDM symbols. At each block, the CIR active positions are selected independently with

slow variations from one block to the following one. More precisely, we generate a slow CIR

support variation as follows. Each active coefficient moves between 2 adjacent blocks by one


position (to the left or to the right) with probability 0.5 and keeps its position with probability 0.5. This corresponds to choosing u = 1 in the CIR model introduced in subsection 5.4.2. The parameters of the subspace tracking Kalman filter are fixed as in [132], and λ = 0.8 in (5.25) for p(k)

thresholding.

In the following, we consider a static channel and begin with a comparison between the enhanced threshold schemes and other CIR structure detection procedures, such as the projection criterion of the OMP algorithm and the information criterion of the GAIC technique. After this, the tracking performance is evaluated for fast fading channels.

• Numerical results for stationary CIR

Figure 5.5 displays the CIR estimate NMSE and the rate of true CIR structure detection for enhanced PFE, enhanced TMSE and the Rosati et al. method [130], all of which detect the CIR structure through thresholding-based procedures. We also consider the iterative, decomposition-based OMP

thresholding-based procedures. We also consider the iterative and decomposition-based OMP

algorithm [79], as well as the generalized Akaike Information criterion GAIC [123]. We can see

that the considered TMSE and PFE versions with estimated LSL and taking into account the

estimation noise, denoted by enhanced PFE and enhanced TMSE, realize improved performance

compared to the OMP, GAIC and Rosati et al. methods in terms of both CIR structure true

detection rate and NMSE.

Figure 5.5: Comparison between different CIR structure detection procedures. Left: CFR normalized mean squared error versus SNR; right: rate of true CIR structure detection versus SNR. Curves: enhanced PFE, enhanced TMSE, Rosati, GAIC, OMP and (for the NMSE) true structure.

• Computational complexity issues

We show hereafter that, in addition to performance enhancement, the enhanced PFE and TMSE versions are also advantageous in terms of lower computational burden compared to the above


benchmarks. Since all the compared methods above are based on unstructured LS estimation followed by CIR denoising through structured LS, we compare only the computational complexity involved in the CIR structure detection phase. In [126], a comparison between PFE and GAIC with unknown channel sparsity degree K and without threshold enhancement shows that the complexities are of order O(1) and O(N²p) in Np for PFE and GAIC respectively. Likewise, the complexity is of order O(1) in Np for both enhanced PFE and TMSE methods [126]. For the OMP algorithm, we can distinguish the projection phase over Fp and the residual update, with computational loads of NpNg and NpK respectively. Therefore, the OMP complexity is of order O(Np); hence the advantage, in terms of computational load reduction, of PFE and TMSE and their enhanced versions w.r.t. OMP is obvious. Even if the Rosati method has the same complexity order O(1), it leads to a loss of about 2 dB w.r.t. PFE and TMSE at low SNR. The computational complexity orders are summarized in table 5.1.

Algorithms | Complexity order
enhanced PFE and TMSE | O(1)
Rosati et al. | O(1)
OMP | O(Np)
GAIC | O(N²p)

Table 5.1: Computational complexity order.

• Numerical results for time varying CIR

The estimation performance for fast fading channels with slow propagation delay variation is evaluated for schemes a and b depicted in figure 5.2. In figures 5.6 and 5.7, displaying the tracking performance, we fix ξ = 0.5 in eq. (5.21). It is worth noting that, in practice, its value should be adapted according to the temporal correlation of the CIR support. Examining the corresponding false alarm and missing rates of CIR support detection displayed in figure 5.7, we observe that support tracking (scheme b) outperforms the direct application of thresholding-based techniques (scheme a) mainly in terms of false alarm rates, while both achieve the same missing rates. Also, the Kalman filter insertion leads to lower missing rates, at the price of higher false alarm rates; this is mitigated by the better capacity of tracking scheme b to reduce false alarm rates compared to scheme a. The NMSE curves obtained for PFE and TMSE in figure 5.6 compare schemes a and b with tracking the CIR directly from the LS estimate (without delay subspace tracking by Kalman). Also, the true structure LS is envisaged as the best benchmark. The envisaged approaches accounting for sparsity outperform the simple tracking operated by Kalman filtering. Enhanced PFE/TMSE, alone or associated with support tracking, lead to enhanced performance compared to


Kalman. Associating Kalman filtering to support tracking and thresholding (scheme b) or to

thresholding (scheme a) leads to further enhanced performance.

Figure 5.6: Normalized mean square error versus SNR in the case of fast fading channels with quasi-stationary delays. Left: PFE algorithm; right: TMSE algorithm. Curves: enhanced PFE/TMSE, Kalman + enhanced PFE/TMSE, Kalman + support tracking + enhanced PFE/TMSE, support tracking + enhanced PFE/TMSE, Kalman filter and true structure.

Figure 5.7: Fast fading channels with quasi-stationary delays: false alarm rates (left) and rates of missed detection (right) versus SNR. Solid lines correspond to PFE and dashed lines to TMSE. Curves: enhanced thresholding, Kalman + enhanced thresholding, Kalman + support tracking + enhanced thresholding, support tracking + enhanced thresholding.

5.6.2 OFDM pilot allocation

We are here interested in optimized OFDM pilot allocation. We consider an OFDM system with Ns = 256 subcarriers, where Np = 16 pilot subcarriers are used for channel


estimation. QPSK modulation is employed. A sparse CIR structure of length L = Ng = 50 is generated, where K = 4 nonzero coefficients are randomly positioned within the CP. The effective taps are zero-mean complex normally distributed with variance equal to 0.5 per component, corresponding to a uniform power delay profile. For the parameter set Ns = 256 and Np = 16, the number of pilot allocation candidates is so tremendous that an exhaustive search cannot be envisaged in practice.

The performance in terms of coherence minimization is also evaluated. We adopt as the best benchmark the case of perfect channel knowledge. We compare the performance of the proposed method to uniform and randomly positioned pilots and to the recent pilot generation methods of Pakrooh et al. [136] and Qi et al. [138].

• Coherence and performance study

As shown in table 5.2, we evaluate the coherence of the submatrix Fpg according to the number of trees p and to the value of the CF weighting factor β in (5.30). These results, obtained by the proposed scheme, show a reduction of the coherence value when the number of trees p grows. This holds for the different β values. Also, comparing table 5.2 to the results of [138], which are generated for the same parameter set, we can see that the proposed forward scheme realizes an enhancement in terms of coherence reduction (which corresponds to the CF for β = 0).

p\β | β = 0 | β = 0.25 | β = 0.5 | β = 0.75 | β = 1
1 | 5.0390 | 4.9330 | 5.0777 | 4.9137 | 7.2312
2 | 4.8334 | 4.9763 | 4.9028 | 4.9704 | 6.8483
3 | 4.8956 | 4.8956 | 4.9962 | 4.9107 | 7.6798
5 | 4.7485 | 4.8956 | 5.0693 | 4.7229 | 7.7772
7 | 4.9160 | 4.7354 | 4.8078 | 4.7229 | 7.7772
11 | 4.5090 | 4.7583 | 4.8590 | 4.7425 | 6.6366

Table 5.2: Comparison of the DFT submatrix coherence measure Np × µ(√Ns Fpg) corresponding to the optimized pilot sets, for different numbers of trees p and various values of the CF weight β; Ns = 256, Np = 16.

Scheme\p | 1 | 2 | 3 | 5 | 7 | 11
Forward | 5.0390 | 4.8334 | 4.8956 | 4.7485 | 4.9160 | 4.5090
Backward | 5.5717 | 5.4896 | 5.0499 | 5.0668 | 4.9710 | 4.8339

Table 5.3: Effect of the number of trees p on the final CS matrix coherence Np × µ(√Ns Fpg) for the forward and backward schemes using the coherence as cost function (β = 0); Ns = 256, Np = 16.

Table 5.3 shows the effect of the number of trees p on the final CS decomposition matrix coherence for both forward and backward IC schemes. In the following results, we set β = 0


and p = 11, for which the CF reduces to the coherence measurement. We consider OMP as the CS recovery algorithm for sparse channel estimation. Also, the unstructured LS estimator based on uniform pilot allocation with Cubic Spline Interpolation (CSI) [120] is taken as the conventional channel estimation benchmark. Simulation results show that OMP outperforms the LS with CSI interpolation.

The CFR NMSE curves displayed in figure 5.8 show that, compared to the method of [138], the proposed scheme achieves the same performance at low SNR yet realizes an increasing gain for high SNR values. It is worth noting that the worst performance is obtained for a uniform pilot allocation, which is due to the fact that it does not optimize the decomposition basis Fpg, which then has a high coherence. For the herein chosen parameters, the computational loads of the Forward and Backward schemes (in terms of complex multiplications) are CLF = 4.4737 × 10^8 and CLB = 7.49 × 10^10. The computational load of the proposed scheme is thus reduced in a ratio of γ ≈ 6 × 10^−3 compared to that of [138], allowing a noticeable computational burden reduction.

Figure 5.8 also shows a substantial gain over the method of [136], where a one-branch construction (p = 1) is employed and the cost function only considers minimization of the variance of the cyclic pilot Positions Differences Occurrences (PDO). Examining the SER curves displayed in figure 5.8, we note that, for the low-dimensional QPSK constellation, the proposed method outperforms the former schemes and achieves equivalent or lower SER than the recent method [138], respectively for low and high SNR values.

Figure 5.8: Performance evaluation of the proposed optimized pilot allocation scheme: CFR NMSE versus SNR (left) and SER versus SNR (right). Curves: Pakrooh method, Qi method (p = 11), proposed method (p = 11), random generation, uniform allocation, and (for the SER) true channel and true structure.

• Computational complexity


To obtain the pilot allocation by the proposed tree-based forward scheme, Np − 1 iterations are needed, whereas the backward processing of [138] operates Ns − Np iterations. Generally, in the CS framework, the choice of Np should satisfy Np ≪ Ns, which makes the proposed forward scheme realize an important reduction in complexity compared to the backward scheme, as shown hereafter.

At the ith iteration, the proposed forward scheme computes the cost function for p(Ns − i) possible (i + 1) × Ng matrices. On the other hand, at the ith iteration, the backward scheme computes the cost function for p(Ns − i) possible matrices of size (Ns − i) × Ng each. It is straightforward to observe that the coherence computation of a complex matrix of size m1 × n1 implies m1 n1 (n1 − 1)/2 complex multiplications. The computational loads of the considered schemes are then evaluated, for a given number of trees p, in terms of complex multiplications, respectively for the forward and backward processing, as

CLF(Ns, Ng, Np, p) = Λ · Σ_{i=1}^{Np−1} (Ns − i)(i + 1), (5.33)

CLB(Ns, Ng, Np, p) = Λ · Σ_{i=1}^{Ns−Np} (Ns − i)², (5.34)

with Λ = p Ng (Ng − 1)/2. The ratio of these computational burdens depends only on Ns and Np. It is given by

γ(Ns, Np) = CLF / CLB = [ Σ_{i=1}^{Np−1} (Ns − i)(i + 1) ] / [ Σ_{i=1}^{Ns−Np} (Ns − i)² ]. (5.35)
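As a numerical check of (5.33)-(5.35), the loads and their ratio can be computed directly; a small sketch in pure Python with illustrative names:

```python
def cl_forward(Ns, Ng, Np, p):
    """CLF, eq. (5.33): complex multiplications of the forward scheme."""
    lam = p * Ng * (Ng - 1) // 2               # Lambda = p*Ng*(Ng-1)/2
    return lam * sum((Ns - i) * (i + 1) for i in range(1, Np))

def cl_backward(Ns, Ng, Np, p):
    """CLB, eq. (5.34): complex multiplications of the backward scheme."""
    lam = p * Ng * (Ng - 1) // 2
    return lam * sum((Ns - i) ** 2 for i in range(1, Ns - Np + 1))
```

For Ns = 256, Ng = 50, Np = 16 and p = 11, this gives CLF ≈ 4.47 × 10^8, CLB ≈ 7.49 × 10^10 and γ = CLF/CLB ≈ 6 × 10^−3.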

It is worth noting that γ decreases towards 0 as the number of pilots Np becomes small compared to Ns. This condition holds all the more as the sparsity level is high, i.e. K is small and Np is chosen proportionally to K. Some further developments show that, if Np ≪ Ns, CLF is linear in Ns whereas CLB varies proportionally to N³s, which highlights the computational benefit of the proposed scheme compared to the earlier one [138], also based on a tree structure.

Figure 5.9 displays the complexity reduction ratio γ of the forward scheme compared to the backward scheme, for varying numbers of subcarriers and pilots, Ns ∈ {64, 128, 256, 512, 1024} and Np ∈ {4, 8, 16, 32, 32}. It shows that the computational burden of the proposed forward scheme is all the more reduced compared to that of the backward method of [138] as the number of pilots is a small fraction of the number of subcarriers, i.e. Np ≪ Ns. It is worth noting that


Figure 5.9: Evaluation of the complexity reduction ratio, 10 log10(γ), for varying numbers of subcarriers and pilots Ns and Np.

this condition should be verified in the general context of CS theory and that, in our context, it corresponds to the case of highly sparse (dispersive) channels. In summary, good performance is achieved with the proposed tree-structure-based forward scheme, which realizes a substantially lower computational load compared to former schemes.

5.7 Conclusion

In this chapter, the problem of estimating and tracking the sparse and time-varying frequency-selective OFDM channel response is investigated.

This work is particularly adapted to high data rate WSN. UWB is one enabling technology for WSN, whose communication approaches include MB-OFDM, which offers the advantage of spectrum flexibility. The UWB channel is characterized by a sparse CIR structure. In most WSN applications (target detection, counting, ...), channel knowledge is required. For the frequent case of moving targets, channel tracking is then of interest.

In the first part of this chapter, slow time variation of CIR support and channel sparsity in

time domain are exploited to enhance the channel estimation accuracy. Then, a combination of

delay subspace tracking by Kalman filtering and an adaptive CIR support tracking procedure

based on CIR structures over current and past blocks of OFDM symbols is suggested. The


obtained approaches lead to performance enhancement with respect to tracking approaches not

accounting for channel sparsity.

In the second part of this chapter, we considered the problem of optimized pilot subcarrier placement in OFDM systems, with the aim of sparse channel estimation and spectral efficiency enhancement. In this context, we proposed a new scheme that iteratively finds a near-optimal pilot pattern in a forward manner using a tree-based structure. We also showed that the proposed forward scheme allows a noticeable computational load reduction compared to the backward scheme. Simulation results showed that the proposed scheme achieves a performance enhancement, in terms of NMSE and symbol error rate, compared to recently proposed pilot placement schemes.

Until now, the presented chapters have focused on sparse signal recovery, addressing the problems of event detection in WSN and support tracking for sparse OFDM channel estimation. For both envisaged contexts, rare events detection (in WSN) and sparse channel recovery/tracking (in OFDM systems), the sparsifying basis is a priori known: it corresponds to the channel matrix in the first case and to the Fourier transform basis in the second. The main difference between the two problems is the nature of the sparse signal to be recovered, which has integer entries in the first problem (the number of events per WSN cell) and continuous entries in the CIR recovery framework (propagation gains). In the next chapter, we focus on a more general framework where the observed signal is a priori non-sparse yet compressible in a given basis to be optimized. The objective is to search for the best sparsifying basis, leading to better reconstruction performance. This is envisaged for general two-dimensional correlated signals. Once this basis is optimized, the algorithms proposed in the first part of the dissertation, which study CS application to problems with a known decomposition basis, for both integer and continuous sparse vector entries, become conceivable.


Chapter 6

Robust Sparse Data Representation from 1D to 2D Processing Strategies: Application to WSN

6.1 Introduction

The first part of the thesis considers the problem of sparse signal recovery when the basis in which the observation is sparse is known. This was first envisaged for WSN rare events detection, where the sparse vector has discrete entries, namely the number of events per cell. Then, sparse channel estimation and tracking in OFDM was considered, in which the channel frequency response is sparse in the Fourier basis.

Even if sparse (or approximately sparse) data, with few nonzero (resp. significant) elements, may naturally exist in some applications (such as the path reconstruction problem [139], indoor localization [140], sparse events detection in WSN [141], and the radio propagation wideband CIR), as those discussed in previous chapters, a sparse data representation cannot be easily induced in many other real-world contexts. However, in many contexts the data is compressible, such as in meteorological applications and environmental data gathering [142]. In such a framework, to be able to apply CS and to benefit from its advantages, the design of a specific sparsifying transform or data compression basis is required. Its main objective is to enable a good approximation of the original signal at reconstruction by the sink.

We here consider the problem of efficiently gathering large amounts of data in a 2D context.

Our aim is to recover large data sets, with high accuracy from a small collection of readings.


More precisely, we address the problem of searching for a sparse representation of 2D correlated signals (such as WSN measurements) in order to be able to apply efficient CS reconstruction techniques. Indeed, in WSN, sensors are densely deployed in order to monitor one or a set of given environmental parameters. Such parameters (temperature, pressure, humidity, wind, gas concentration, etc.) are generally known to vary continuously in space. As a consequence, adjacent sensor measurements are expected to be highly correlated and thus compressible. This means that we are able to represent the same signal by fewer samples than the original one, which corresponds to finding a sparse representation for it. The envisaged application is environmental monitoring, where the parameters to measure have a low spatial variation allowing for a compressed acquisition.

Our concern here is to apply the CS concept to the scenarios of 1D and 2D readings recovery from a 2D correlated signal, as in WSN, while requiring excellent recovery accuracy. The challenge is to achieve this goal while collecting only a small fraction of the data at a gathering point. Our main objective in this work is then to design a robust and effective sparsity inducing method. We begin by providing a survey of the existing sparsifying transforms proposed in the literature. Further, we propose a new transformation based on Linear Prediction Coding (LPC), in which the sparse signal corresponds to the prediction error. LPC is applied to effectively exploit the correlation between neighboring data: a spatial correlation in the case of sensor readings in WSN. The challenge is that of finding the best sparsifying basis: the one that best compresses the data and leads to the lowest reconstruction loss at the sink.

Even if the developments hereafter apply to the general problem of sparsifying basis search for 2D correlated data, in the following we consider the problem of data recovery from a small subset of sensor readings in a spatially correlated large WSN.

Why is this interesting in the framework of WSN? Ensuring energy-efficient data communication, transmission, and storage has become a major challenge with the deployment of large WSN and the era of IoT [143, 144]. To handle this problem, two possibilities can be envisaged: either locally compute the relevant information to be transmitted, or compress the data and send it to the fusion center. The first approach places complexity at the node level and loses access to the original data. A key approach to managing big data in WSN is thus data compression. It not only increases WSN energy efficiency but also ensures its scalability. Besides conventional compression methods, another promising family of techniques is CS theory, which is essentially based on the sparsity concept. To benefit from the high performance of CS, it is compulsory to find a sparsifying basis for the measurements. This is the main objective of this chapter.


In the following, we first study 2D signal data compression in a 1D reading scenario. A 2D reading scenario will be developed afterwards.

6.2 1D Data Compression in WSN

This section addresses the 1D compression of 2D signals. First, the objective of CS in this context is detailed. Then, sparsity inducing methods from the literature are overviewed. Our contribution based on LPC is then detailed.

6.2.1 1D CS for 2D signals

We consider a WSN composed of N sensors, randomly distributed in a monitored area to serve different tasks, such as continuous environmental surveillance. The 2D sensor readings are forwarded to a central sink node. We use a single-hop transmission model for data gathering, where we assume that each sensor node knows its local routing structure. Due to power consumption constraints, it is inefficient to directly transmit the raw sensed data to the sink, as they often exhibit a high correlation in the spatial domain and can be efficiently compressed to reduce power consumption. Our aim here is to compress the data by exploiting the sensor readings correlation. We focus on designing an efficient sparsifying dictionary for WSN data.

This part envisages a 1D reading of the 2D signal captured by the WSN. The 2D signal X is read by concatenating adjacent sensor readings to form the 1D signal x = [x1, . . . , xN]^T ∈ R^{N×1}. Each node acquires a sample xi, corresponding to a continuous parameter measurement such as temperature, pressure, etc. We assume that, due to its correlation, the signal x is sparse, or approximately sparse, in some domain to be determined. It then admits a representation with a number of nonzero elements K ≪ N. This can be expressed by the existence of a transformation matrix Ψ of dimension N × N such that

x = ∑_{i=1}^{N} θi Ψi   or   x = Ψθ, (6.1)

where θ is the sparse, or approximately sparse, representation of x in the basis Ψ, with only K nonzero (or significant) elements. The signal x can then be represented as a linear combination of only K basis vectors. Once the sparsifying basis of the measurement signal x is successfully designed, a CS algorithm can be applied to recover the original signal x from a small number M of measurements, such that K < M ≪ N. In this way, the sink collects M measurements and


recovers N readings. The received signal at the sink is represented as

y = ΦΨθ + n, (6.2)

= Aθ + n, (6.3)

where y ∈ R^{M×1} gathers the readings at the sink and n is the additive noise that disturbs the sensor measurements. The components of n are assumed independent, with zero mean and covariance matrix σ²I_M, and Φ ∈ R^{M×N} denotes the measurement matrix.

As mentioned in the state of the art of CS in chapter 2, if the measurement matrix Φ satisfies the RIP condition, then perfect signal reconstruction is guaranteed. In order to meet this hypothesis, Φ can be sampled from different distributions, such as the random Gaussian distribution. This choice takes as measurements linear combinations of all the sensor measurements, which requires all of them to be active; this is not optimal in terms of energy consumption. To avoid this, Φ can be chosen as a selection matrix in which M among the N sensors are randomly activated, even if this choice may be suboptimal in terms of RIP. To recover x, the sink requires the measurement vector y and the reconstruction or decomposition matrix A. It first recovers a sparse estimate θ̂ of θ by applying the CS approach. Once θ̂ is recovered, the whole set of measurements is accessible through

x̂ = Ψθ̂. (6.4)
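The gathering model of eqs. (6.1)-(6.4) can be sketched numerically as follows; the dimensions N = 100, M = 50, K = 5, the random orthonormal stand-in for Ψ and the seed are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 50, 5                 # sensors, measurements, sparsity (illustrative)

# stand-in orthonormal sparsifying basis Psi (in the chapter it is designed/learned)
Psi = np.linalg.qr(rng.standard_normal((N, N)))[0]

# K-sparse representation theta and full reading x = Psi theta, eq. (6.1)
theta = np.zeros(N)
theta[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ theta

# binary selection matrix Phi: only M of the N sensors are activated
active = rng.choice(N, size=M, replace=False)
Phi = np.zeros((M, N))
Phi[np.arange(M), active] = 1.0

A = Phi @ Psi                        # decomposition matrix seen by the sink
y = A @ theta                        # noiseless version of eq. (6.2); add n for noise

# selecting rows of x is exactly reading the M active sensors
assert np.allclose(y, x[active])
```

With this selection matrix, the sink observes only the M active sensors yet can target all N readings through the recovered θ̂.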

The literature is rich with sparsity-inducing methods, which were adopted for optimized WSN

operations [145]. Among those methods, several dictionary learning approaches have been

proposed to find a sparse representation of the data, the further aim being to apply the CS approach and benefit from its numerous advantages.

6.2.2 Survey on unsupervised sparsity inducing transformations in 1D

In this section, we consider a variety of sparsity inducing/compression schemes and discuss some important considerations for signal compression in WSN. In the following, we give an overview of unsupervised sparsifying basis search, including the Discrete Cosine and Fourier Transforms (DCT and DFT).

• Discrete Cosine Transform (DCT)

DCT is a method employed to transform correlated measurements into an uncorrelated (white) process. This is known as whitening. For real signals, it yields real coefficients. The number of significant coefficients is reduced in comparison to the original signal, thus allowing its compression. For example, if a set of correlated WSN data is accumulated at the sink node, the DCT attempts to decorrelate it [146] using a group of cosine functions. This leads to a lower quantity of transmitted data and thus reduces the energy consumption in the network. Also, we can recover the original samples by applying the inverse DCT (IDCT) to the uncorrelated coefficients. The most common expression of the one-dimensional DCT is as follows

θ(u) = κ(u) ∑_{n=0}^{N−1} x(n) cos[ π(2n + 1)u / (2N) ], (6.5)

where θ(u) denotes the uth DCT coefficient of the N uncorrelated data, x(n), n = 0, . . . , N − 1, denotes the original data set (measurements), and κ(u) is a constant defined as

κ(u) = √(1/N) for u = 0, and κ(u) = √(2/N) for u ≠ 0. (6.6)

Once the transformation has been performed, the original data can be reconstructed using the inverse DCT, given by

x(n) = ∑_{u=0}^{N−1} κ(u) θ(u) cos[ π(2n + 1)u / (2N) ]. (6.7)
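Eqs. (6.5)-(6.7) correspond to the orthonormal DCT-II, available in SciPy as `dct(..., type=2, norm='ortho')`. The sketch below, with an assumed random-walk signal as a stand-in for correlated readings, checks the round trip and illustrates compression by keeping only the K largest coefficients.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
N = 64
# a smooth, spatially correlated synthetic reading (random walk), for illustration
x = np.cumsum(rng.standard_normal(N)) / np.sqrt(N)

theta = dct(x, type=2, norm='ortho')        # eq. (6.5), kappa(u) normalization
x_rec = idct(theta, type=2, norm='ortho')   # eq. (6.7)
assert np.allclose(x, x_rec)                # the DCT/IDCT round trip is exact

# compression: zero all but the K largest-magnitude coefficients
K = 8
theta_K = theta.copy()
theta_K[np.argsort(np.abs(theta))[:N - K]] = 0.0
x_K = idct(theta_K, type=2, norm='ortho')
rel_err = np.linalg.norm(x - x_K) / np.linalg.norm(x)   # small for correlated x
```

Because the transform is orthonormal, the reconstruction error energy equals exactly the energy of the discarded coefficients.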

• Discrete Fourier Transform (DFT)

DFT is a very interesting transform for computational reasons, since it can be implemented

using the Fast Fourier Transform (FFT) algorithm. The DFT coefficients are computed as

θ(k) = (1/√N) ∑_{n=0}^{N−1} x(n) e^{−j2πnk/N}, (6.8)

and the original signal can be recovered by the Inverse DFT (IDFT) as

x(n) = (1/√N) ∑_{k=0}^{N−1} θ(k) e^{j2πnk/N}. (6.9)
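Eqs. (6.8)-(6.9) are the unitary DFT pair, which NumPy exposes through the `norm='ortho'` option; the synthetic signal below is an assumed stand-in for correlated readings.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = np.cumsum(rng.standard_normal(N)) / np.sqrt(N)   # correlated synthetic reading

# norm='ortho' applies the 1/sqrt(N) factor of eqs. (6.8)-(6.9) on both sides
theta = np.fft.fft(x, norm='ortho')
x_rec = np.fft.ifft(theta, norm='ortho')

assert np.allclose(x, x_rec.real)   # round trip is exact
assert np.iscomplexobj(theta)       # complex coefficients even for real x,
                                    # unlike the DCT, which stays real
```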

The DCT and DFT consider standard decomposition bases, which are not necessarily suited to the considered signals (they do not necessarily sparsify them).

Our aim in the following is to design a robust and more effective sparsifying method suited to the considered signal. To achieve this goal, some statistical properties of the data, such as its correlation, are exploited to derive an adequate data compression technique. This is the main objective of the following, where supervised sparsity inducing matrix design is addressed through Principal Component Analysis (PCA) [147].

6.2.3 1D principal component analysis (1DPCA) method

The PCA technique is one of the dimensionality reduction models. It uses an orthonormal transformation to convert a set of correlated observations into a set of uncorrelated variables called principal components. Recently, PCA-based algorithms have been effectively applied to compress WSN data [148].

PCA is a statistical tool that projects the data onto a new basis and aims to retain the higher variance components while minimizing the data redundancy. It is essentially based on statistical parameters such as the mean and covariance. In order to implement the PCA technique, a training phase is required. To this end, we consider L data collection rounds, during which the sink collects the readings from all the N sensors (L realizations of the 2D observations, and hence L realizations of the 1D reading x). The basic PCA application steps are summarized in Algorithm 8.

Without loss of generality, we consider a regular square grid WSN where √N × √N = N sensors are regularly deployed. The 2D data X can then be read in 1D as x = [x1, . . . , xN]^T ∈ R^{N×1}, gathering the N sensor readings, with X(i, j) placed at the kth position of x, where k = (i − 1)√N + j.

To summarize, thanks to PCA, each original signal x(i) (at realization i = 1, . . . , L) can be transformed into a vector θ(i) that can be considered K-sparse (when neglecting non-significant coefficients). The sparsity degree K of θ depends on the actual level of correlation between the components of the observation x.

Afterwards, we study dimensionality reduction using the PCA technique. This reduction passes by choosing, among the N eigenvectors of the x autocorrelation matrix, the K that correspond to the K largest eigenvalues. More precisely, we consider the proportion of variance (PoV) expressed as

PoV(K) = (λ1 + λ2 + . . . + λK) / (λ1 + λ2 + . . . + λK + . . . + λN), (6.14)

where the eigenvalues λi are sorted in descending order. Typically, K is chosen such that this ratio verifies PoV(K) ≥ 0.9. figure 6.1 displays the eigenvalues of the x autocorrelation matrix in decreasing order. It illustrates an example of two trials of L realizations with L = 200. By examining the last condition on PoV, we see that at least K = 4 principal components are


Algorithm 8 Main steps of PCA

Consider L realizations x(1), . . . , x(L), where x(i), i = 1, . . . , L, is the ith realization, of dimension N × 1.
1) Compute the sample mean vector x̄ as

x̄ = (1/L) ∑_{i=1}^{L} x(i). (6.10)

2) Subtract the mean, d(i) = x(i) − x̄, and form the matrix D = [d(1), . . . , d(L)] of dimensions N × L.
3) Estimate the covariance matrix Ccov of the centered version of x as

Ccov = (1/L) ∑_{n=1}^{L} d(n) d(n)^T = (1/L) D D^T. (6.11)

4) Compute the eigenvalues of Ccov, sorted as λ1 ≥ λ2 ≥ . . . ≥ λN, and the corresponding eigenvectors u1, u2, . . . , uN. This is performed via a Singular Value Decomposition (SVD) of the data matrix D.
5) In the basis u1, u2, . . . , uN, any centered observation x − x̄ can be written as a linear combination of the eigenvectors:

x − x̄ = θ1 u1 + θ2 u2 + . . . + θN uN = ∑_{i=1}^{N} θi ui. (6.12)

6) Proceed to dimensionality reduction by keeping only the terms corresponding to the K, K ≪ N, largest eigenvalues respecting the PoV criterion. Indeed, the representation of x − x̄ in the orthonormal basis U = [u1, . . . , uK] (such that U^T U = IK) is considered, which is expressed as

θ = U^T (x − x̄), (6.13)

where θ = [θ1, . . . , θK]^T.

needed for the first realization, while K = 20 principal components are required for the second

realization.

In order to implement PCA in conjunction with CS recovery algorithms, we alternate the following two phases:

1. a training phase: L data collection rounds, during which the sink collects the readings from all N sensors. This information is used to compute the mean and covariance matrix and, essentially, to derive U for a given K satisfying the PoV criterion;

2. a processing phase: only M ≪ N sensor nodes are activated, and y serves to recover θ and then x based on the sparsifying basis obtained in phase 1.


[Figure: eigenvalues of the x autocorrelation matrix in decreasing order for two realizations, giving K = 4 and K = 20 significant components (PoV = 0.9910 and 0.9925).]

Figure 6.1: Selection of the number of principal components

6.3 Contribution in 1D Supervised Sparsifying Basis Learning:

Linear Prediction Coding (LPC) Approach

The LPC technique is widely used for speech coding [149]. We here introduce it in the framework of sparsifying basis design. LPC is specifically adapted to transform sensed data from its original domain into another domain in which it has a sparse representation. In this section, we first review basic tools of LPC before detailing its application to sparsifying basis design.

In the following, x is the transmitted 1D signal and x(n) denotes its nth component (for a given data collection). In LPC, the predicted sample xp(n) of x(n), using the last p samples, can be expressed as

xp(n) = ∑_{k=1}^{p} ak x(n − k), (6.15)

where the ak are the prediction coefficients and p is the predictor order. Then, the error between the actual sample x(n) and its predicted value xp(n) can be expressed as

e(n) = x(n) − xp(n) = x(n) − ∑_{k=1}^{p} ak x(n − k). (6.16)

• LPC Coefficients Computation

The predictor coefficients ak are determined so as to minimize the squared prediction error J given by

J = ∑_n e(n)² = ∑_n ( x(n) − ∑_{k=1}^{p} ak x(n − k) )². (6.17)

The mean of J, corresponding to the Mean Square Error (MSE), is given by

J = E(e(n)²). (6.18)

The coefficients ak that minimize J are obtained by zeroing the partial derivatives of J with respect to these coefficients:

∂J/∂ai = 0, 1 ≤ i ≤ p, (6.19)

which leads to

∑_{k=1}^{p} ak E( x(n − k) x(n − i) ) = E( x(n) x(n − i) ). (6.20)

Supposing x stationary, we define R(i) = E( x(n) x(n − i) ) as the autocorrelation sequence. Therefore, the last linear prediction equation can be rewritten as

∑_{k=1}^{p} ak R(|i − k|) = R(i), 1 ≤ i ≤ p. (6.21)

The assumption of x stationarity is fulfilled in the uniform correlation case, where the correlation depends only on the spacing (spatial distance in the WSN).

In matrix form, eq. (6.21) is expressed as

r = Ra, (6.22)

where a = [a1, . . . , ap]^T, r = [R(1), . . . , R(p)]^T is the p × 1 vector of autocorrelation coefficients, and

R = [ R(0)     R(1)    · · ·  R(p − 1)
      R(1)     R(0)    · · ·  R(p − 2)
       ...      ...     · · ·   ...
      R(p − 1) R(p − 2) · · ·  R(0)   ],  (6.23)

is the p × p matrix of autocorrelation coefficients.


Equation (6.22) is known as the Wiener-Hopf equation. A direct solution is obtained by multiplying r by the inverse of R:

a = R⁻¹ r. (6.24)

As an alternative to direct inversion, the low-complexity Levinson-Durbin recursive solution [150] is applicable since the correlation matrix R is Toeplitz.
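A minimal sketch of the predictor computation, using SciPy's `solve_toeplitz` (which implements Levinson recursion) on the Yule-Walker system of eq. (6.22); the AR(2) test signal and its coefficients are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coeffs(x, p):
    """Solve the Wiener-Hopf system R a = r of eq. (6.22) for the predictor a.

    solve_toeplitz exploits the Toeplitz structure of R via Levinson recursion
    (O(p^2) instead of O(p^3) for a generic linear solve)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # biased sample autocorrelation R(0), ..., R(p)
    R = np.array([x[:n - k] @ x[k:] for k in range(p + 1)]) / n
    return solve_toeplitz(R[:p], R[1:])   # first column R(0..p-1), rhs R(1..p)

rng = np.random.default_rng(4)
# a stable AR(2) process, for which order p = 2 prediction is (near-)exact
x = np.zeros(2000)
for n_idx in range(2, 2000):
    x[n_idx] = 1.2 * x[n_idx - 1] - 0.4 * x[n_idx - 2] + 0.1 * rng.standard_normal()
a = lpc_coeffs(x, p=2)   # estimate is close to the true coefficients [1.2, -0.4]
```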

• Sparse representation

Eq. (6.16) shows that the prediction error can be interpreted as a filtered version of x. The LPC-based analysis filter can be expressed using a transfer function A(z) as

[Analysis filter: x(n) → A(z) → e(n)]

where A(z) = 1 − ∑_{k=1}^{p} ak z^{−k}. If we have both the prediction error sequence e(n) and the prediction coefficients ak, we can reconstruct the signal x(n) by filtering e(n) with the synthesis filter H(z) = 1/A(z):

X(z) = H(z) E(z). (6.25)

[Synthesis filter: e(n) → H(z) → x(n)]

Since the impulse response h corresponding to H(z) is infinite, we approximate it by truncating the filter response to its first N coefficients h = [h0, . . . , hN−1]. The recovered signal x(n) can then be expressed as

x(n) = ∑_{k=0}^{N−1} hk e(n − k). (6.26)

In matrix form, we can write

x = He, (6.27)


where

H = [ h0       0       · · ·  0
      h1       h0      · · ·  0
       ...      ...    · · ·   ...
      hN−1    hN−2     · · ·  h0 ]  (6.28)

is an N × N lower triangular Toeplitz matrix. Supposing that the prediction error is sparse, H can then be identified with the sparsifying basis Ψ of x, and e with the sparse representation of x in this basis.

In order to implement the LPC-based transform, an offline training phase is considered, during which the sink collects L rounds of readings from all N sensor nodes and uses this information to compute the coefficients ak and the matrix H. During the processing phase, a subset y of the observations x is considered and used to recover e, and then to reconstruct the whole observation x. The measurement matrix Φ is chosen as a binary selection matrix picking the M among N sensors to be active.
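The construction of Ψ = H from the predictor a can be sketched as below; the values of a, N and the driving noise are illustrative assumptions, and the analysis/synthesis filters of eqs. (6.16) and (6.25)-(6.28) are realized with `scipy.signal.lfilter`.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def lpc_sparsifying_basis(a, N):
    """Build Psi = H of eq. (6.28): truncated impulse response of H(z) = 1/A(z),
    arranged as an N x N lower triangular Toeplitz matrix."""
    a = np.asarray(a, dtype=float)
    delta = np.zeros(N)
    delta[0] = 1.0
    h = lfilter([1.0], np.r_[1.0, -a], delta)         # first N samples of h
    return toeplitz(h, np.r_[h[0], np.zeros(N - 1)])  # lower triangular Toeplitz

rng = np.random.default_rng(5)
N = 100
a = np.array([1.2, -0.4])                             # illustrative predictor (p = 2)
# synthetic correlated signal generated through the synthesis filter
x = lfilter([1.0], np.r_[1.0, -a], 0.1 * rng.standard_normal(N))

e = lfilter(np.r_[1.0, -a], [1.0], x)   # analysis filter A(z): prediction error
H = lpc_sparsifying_basis(a, N)
assert np.allclose(x, H @ e)            # synthesis in matrix form: x = H e, eq. (6.27)
```

With zero initial filter conditions, the truncated convolution of eq. (6.26) matches the recursive synthesis filtering exactly, so x = He holds to machine precision.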

6.4 Numerical Results Analysis for 1D Processing

This part concerns 2D correlated data compression with a 1D WSN reading, applied to WSN measurements. We here consider a regular monitored area, split into a grid of N square cells, and place each of the N = 100 sensing nodes within a given cell. The distance between any two neighboring nodes is then fixed to 20 m. We also discuss the generation of the signal considered for the performance evaluation. In this context, we assume that the sensor readings are spatially correlated.

6.4.1 Correlated data generation

In 1D processing, the 2D data X is read as x = [x1, . . . , xN]^T ∈ R^{N×1}. Each component of x represents the reading of one sensor, recovered according to a reading pattern. The data are highly correlated in the spatial domain. We exploit the spatial correlation among sensor readings in the 2D √N × √N WSN using a 1D horizontal and contiguous reading of the network matrix.

Let the correlation coefficient ρij denote the correlation between the readings of sensors i and j. Common correlation models based on spatial locations are introduced in [151]. The most conventional model is the power exponential one, in which ρij is expressed as

ρij = exp( −(dij / θ1)^{θ2} ), θ1 > 0 and 0 < θ2 ≤ 2, (6.29)

where dij denotes the distance separating sensors i and j. Note that this model assumes a uniform correlation: it is the same for any pair of sensors having the same spatial spacing. As a consequence, the corresponding data is spatially stationary.

Figure 6.2: Spatial correlation for different parameter values (θ1 ∈ {2, 5, 10, 100, 10³, 10⁴}).

figure 6.2, displaying the correlation for dij = 20 m versus θ2, shows that the value of the control parameter θ2 affects the correlation model. Indeed, when the quotient dij/θ1 ≥ 1, the correlation decreases as a function of θ2, reaching a maximum value approximately equal to 0.35. Otherwise, if dij/θ1 < 1, the correlation increases versus θ2 for the different chosen values of θ1. Consequently, to get sufficiently correlated

and thus compressible data, we need to choose suitable parameters in model (6.29). We use parameters θ1 and θ2 for which the mean correlation throughout the WSN exceeds 0.5.

From this correlation model, the N × N network correlation matrix Re is generated, with (i, j)th element ρij. Then, we use a Cholesky decomposition to generate the correlated random variables xi, i = 1, . . . , N. To this aim, we first generate a vector z of uncorrelated Gaussian random variables, z ∼ N(0, IN). Then, we compute a square root of Re, i.e., a matrix C such that C C^T = Re; a popular choice to compute C is the Cholesky decomposition. Our target vector of correlated observations x is then given by x = Cz.
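This generation procedure can be sketched as follows for the 10 × 10 grid with 20 m spacing; the parameter values θ1 = 100 and θ2 = 1 are illustrative choices within the ranges of model (6.29).

```python
import numpy as np

rng = np.random.default_rng(6)
n_side = 10
N = n_side * n_side                  # 10 x 10 grid of sensors
theta1, theta2, spacing = 100.0, 1.0, 20.0   # illustrative model parameters

# sensor positions on the regular grid, read row by row (1D reading)
ii, jj = np.divmod(np.arange(N), n_side)
pos = np.stack([ii, jj], axis=1) * spacing

# power-exponential correlation model, eq. (6.29)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
Re = np.exp(-(dist / theta1) ** theta2)

# correlated readings x = C z with C C^T = Re (Cholesky square root)
C = np.linalg.cholesky(Re)
x = C @ rng.standard_normal(N)
```

Since Cov(Cz) = C E(zz^T) C^T = C C^T = Re, the generated vector x has exactly the target network correlation.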


6.4.2 Performance evaluation

The performance is evaluated in terms of the NMSE on x. Once the sparse representation s (here e) is reconstructed from the measurement vector y by applying the OMP algorithm, the estimated signal x̂ is recovered as x̂ = Ψŝ (here Ψ = H). The NMSE reconstruction error, given by

NMSE = E(‖x − x̂‖²₂) / E(‖x‖²₂), (6.30)

is then evaluated experimentally (by averaging over simulations).

For the generated synthetic signal x, the sparsity degree K cannot be controlled. In this scenario, the OMP algorithm operates with a stopping condition related to the residual observation norm. The residual at iteration k is denoted rk, with r0 = y. figure 6.3 displays the normalized residual norm versus the supposed sparsity level (K), or equivalently the number of OMP iterations. It shows that the residual decreases as a function of the number of OMP iterations. The lower speed of residual decrease for LPC (compared to PCA and DCT) may be related to the small amplitude of the entries of e. In the following, we choose ε = ‖rk‖₂/‖r0‖₂ < 10⁻³ as the stopping rule. This is obtained with 32 iterations for PCA and DCT, 40 iterations for DFT, and 42 with LPC.
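A standard OMP sketch with this residual-based stopping rule is given below (an assumed generic implementation, not the thesis code); the Gaussian matrix A and the sparse test vector are illustrative.

```python
import numpy as np

def omp(A, y, eps=1e-3, max_iter=None):
    """Orthogonal Matching Pursuit stopped when ||r_k|| / ||r_0|| < eps."""
    M, N = A.shape
    max_iter = max_iter if max_iter is not None else M
    theta = np.zeros(N)
    r0_norm = np.linalg.norm(y)
    if r0_norm == 0:
        return theta
    support, r, coef = [], y.copy(), np.zeros(0)
    for _ in range(max_iter):
        if np.linalg.norm(r) / r0_norm < eps:
            break
        k = int(np.argmax(np.abs(A.T @ r)))      # atom most correlated with residual
        if k not in support:
            support.append(k)
        # least-squares fit of y on the currently selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef             # updated residual
    theta[support] = coef
    return theta

rng = np.random.default_rng(7)
M, N, K = 50, 100, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta_true = np.zeros(N)
theta_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = A @ theta_true
theta_hat = omp(A, y)

# the stopping rule guarantees a small relative residual at the output
assert np.linalg.norm(y - A @ theta_hat) <= 1e-3 * np.linalg.norm(y)
```

Because the residual is kept orthogonal to the selected atoms, each iteration adds a new atom, and the loop terminates once the relative residual drops below ε.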


Figure 6.3: Normalized residual norm versus iterations number (K) for N = 100, M = 50, p = 20 and mean correlation ρ = 0.7.



Figure 6.4: Normalized prediction error (a) and reconstruction error (b) versus prediction order (p).

6.4.3 Numerical results

We first study the performance of the LPC scheme as a function of the prediction order p. figures 6.4(a) and 6.4(b) respectively illustrate the mean normalized prediction error ‖e‖₂/‖x‖₂, for two values of the mean network correlation ρ = 0.5 and ρ = 0.7, and the NMSE, for two values of the measurements number M = 30 and M = 50, versus the prediction order p. It is noticed that the larger p is, the smaller the prediction error is. Also, we can see that from p = 20 on, the considered error rates remain constant. Therefore, for the following results, we will use p = 20.

As mentioned above, the considered data compression approaches exploit spatial data correlation. Thus, we hereafter evaluate the performance according to the mean correlation. To this end, we modify the control parameters θ1 and θ2 and compute the mean network correlation ρ as the mean of the matrix Re. figure 6.5 depicts the NMSE versus the mean correlation over the whole network ρ for SNR = 0 dB, where the SNR is defined as the ratio of the transmitted signal x mean energy to the noise variance σ². It shows that the proposed LPC approach achieves the lowest reconstruction error over the different considered correlation values. Thus, LPC better exploits the spatial correlation of WSN data. Note that this gain over PCA, DCT and DFT is more significant for lower correlation values.

figure 6.6 illustrates the reconstruction NMSE versus the measurements number M for both SNR = −10 dB and SNR = 10 dB. It can be observed that the LPC technique achieves the best performance, with the lowest NMSE rates, especially for low M. Also, the proposed sparsifying transform is more robust to noise for the chosen compression ratio M/N. The LPC scheme achieves the same reconstruction error with a lower number of observations M, thus reducing energy consumption.



Figure 6.5: Reconstruction error versus mean correlation ρ when N = 100, M = 50 and SNR = 0 dB.


Figure 6.6: Reconstruction error versus measurements number (M) when the mean correlation is ρ = 0.8. Black lines correspond to SNR = −10 dB and red lines to SNR = 10 dB.

NMSE curves versus SNR for the different considered techniques, and for two values of the deployed sensors number M = 30 and M = 50, are superimposed in figure 6.7 and show the enhanced performance of LPC compared to the existing approaches over the whole SNR range. With M = 30, LPC achieves a lower NMSE than all of the benchmarks with M = 50. Also, we can see that the presented NMSE curves have very low dynamics and saturate at high SNR; this may be related to the fact that the obtained signals are only approximately sparse. For the analysis of the sensor density effect, figure 6.8 reports the NMSE rates versus the number of square network cells, ranging from 25 (5 × 5) to 225 (15 × 15). We also fix the compression level as M/N ≈ 0.3.


The results show that, whatever the chosen number N, LPC remains the most efficient.


Figure 6.7: Reconstruction error versus SNR when mean correlation ρ = 0.5. Black lines correspond to M = 30 and red lines to M = 50.

• Computational issues

In addition to the performance enhancement, LPC is also advantageous in terms of a lower computational complexity, evaluated as the number of multiplications, compared to the PCA technique, which, like LPC, incorporates a training phase with L realizations.

• PCA method: we can distinguish the covariance computation and the singular value decomposition, with computational loads of order O(LN²) and O(N³), respectively.

• Proposed LPC: two main steps are considered, the autocorrelation matrix and the predictor coefficients estimation, with computational complexities of O(LN²) and O(p²), respectively.

Therefore, the complexity is of order O(N²) for LPC, whereas it is of order O(N³) for PCA.

6.4.4 LPC basis relevance analysis

In this part, we analyze the relevance of the LPC basis. In this context, figure 6.9 displays the reconstruction error (NMSE) versus the supposed sparsity level (K). It is worth noting that the NMSE decreases and then quickly grows above an optimal value of K for the DFT, DCT and PCA techniques. This means that the signal is perfectly sparse in the corresponding sparsifying basis. However, the LPC reconstruction error exhibits a continuous decrease as the supposed sparsity level increases. This can be related to the fact that the built signal e is



Figure 6.8: Reconstruction error versus cells number when mean correlation ρ = 0.8, M/N ≈ 0.3 and SNR = 0 dB.

approximately sparse: the majority of its components are very small yet nonzero valued, contrary to the other transformations, as shown in figure 6.10. This behavior is in concordance with the higher coherence of the LPC sparsity-inducing transformation compared to the other considered techniques, as shown in table 6.1. In this manner, the high correlation between the column vectors of H induces the relatively slow decay of the residual norm shown in figure 6.3. As a result, the LPC approach needs a larger number of iterations to recover the original signal x. The lowest NMSE obtained by the benchmark methods is achieved by PCA for K = 6 iterations. LPC attains the same NMSE from K = 10 iterations, which respects the CS constraint for unique reconstruction with M = 50.

coherence/technique    PCA     DCT     DFT     LPC
µ(A)                   0.45    0.42    0.24    0.83

Table 6.1: Coherence comparison for the different considered approaches.
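As an aside, the mutual coherence values of table 6.1 can be reproduced for any reconstruction matrix with a few lines of code (a minimal sketch; the random matrix below is only a stand-in for the actual A of each technique):

```python
import numpy as np

def coherence(A):
    """Mutual coherence mu(A): largest absolute inner product
    between two distinct l2-normalized columns of A."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
mu = coherence(rng.normal(size=(50, 100)))
assert 0.0 < mu < 1.0
```

For an orthonormal matrix the coherence is exactly zero, which is why a low-coherence basis speeds up greedy recovery.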

Now, we study the impact of the sparsifying basis on the choice of the OMP stop condition related to the residual and defined as ε = ‖r_k‖₂/‖r_0‖₂. Figure 6.11 shows the reconstruction error versus the mean iterations number. It is worth noting that, for all methods, the mean iterations number increases as ε decreases; LPC and DFT achieve the highest mean iterations numbers. Also, the corresponding reconstruction error increases when ε increases. However, for the benchmark methods the NMSE also degrades for very low ε, which may be related to a false alarm increase. This degradation is avoided for LPC, for which the NMSE keeps decreasing with ε.

Figure 6.9: Reconstruction error (NMSE) versus iterations number (K) for PCA, DCT, DFT and LPC, when SNR = 10dB, M = 50, p = 20 and ρ = 0.7.

Figure 6.10: Sparse signal representation (correlated signal, sparse signal with LPC, sparse signal with PCA) for one realization when ρ = 0.7.

6.5 2D Separable Data Compression: Application to WSN

The above analysis, based on a 1D reading of the 2D measurements, exploits the correlation only in one direction and one way. Note that for spatial data, as in WSN, a 1D prediction envisaging both ways (anti causal w.r.t. the reading) could be considered. Still, the 1D reading in general prevents exploiting the whole correlation features. That is why we aim to better sparsify the data by better exploiting the spatial correlation.

In this section, we address the problem of finding a suitable sparsifying transform or data

compression basis for 2D data, by a 2D reading in order to preserve and better exploit the


Figure 6.11: Performance comparison (mean iterations number and NMSE versus ε) for PCA, DCT, DFT and LPC, versus the OMP stop condition when SNR = 10dB, M = 50, p = 20 and ρ = 0.7.

WSN spatial correlation in two directions.

In the following, the problem formulation of 2D sparse data representation is first established. Then, the two-dimensional compression techniques used for sparsity inducing are presented. After this, through a comparison with respect to the one-dimensional sparse representation of the measurements, we prove the effectiveness of the proposed 2D modeling. This is applied to 2D WSN spatially correlated signals.

6.5.1 Sparsity of 2D signals

A straightforward implementation of 2D CS is to stretch the 2D observation matrices into 1D vectors, as envisaged in the first part of this chapter. However, such direct stretching increases the complexity and memory usage at both encoder and decoder and uses the correlation in 1D only, thus destroying the bi-dimensional correlation of the data. An alternative to 1D stretching is to sample the rows and columns of the 2D signals independently using separable operators [152]. The principle of 2D separable sampling is then as follows. We assume that the 2D signal X is K-sparse under a given transformation. Thus, X can be written as

X = Ψ1 S Ψ2^T,    (6.31)

where Ψ1 (resp. Ψ2) is a √N × √N non singular matrix. There are only K ≪ N important elements (spikes) in S, which is then qualified as sparse. Our aim here is to find how to design the matrices Ψ1 and Ψ2 to better sparsify X.


Consider the sampling operators Φ1 and Φ2 ∈ R^{√M×√N}, where √M < √N and K < M < N, which are used to independently combine the readings from the rows and columns of X. The matrix Φ1 (resp. Φ2) contains the combination coefficients, usually picked at random according to a given distribution in order to verify the RIP condition. Here, Φi (i = 1, 2) is chosen as a selection (binary) matrix. Then, the 2D collected noisy readings Y ∈ R^{√M×√M} at the sink node are expressed as

Y = Φ1 X Φ2^T + N = A1 S A2^T + N,    (6.32)

where Ai = Φi Ψi ∈ R^{√M×√N}, i = 1, 2, denote the 2D reconstruction matrices and N is the 2D additive Gaussian noise matrix disturbing the measurements, whose components are independent with zero mean and variance σ².

If the matrix A1 (resp. A2) obeys the RIP condition and Φ1 (resp. Φ2) has low coherence with Ψ1 (resp. Ψ2) [18], then the sparse representation S can be effectively recovered from Y.

By utilizing 2D separable sampling, the 2D signal recovery problem can, as will be hereafter described, be recast into a 1D signal recovery problem. So, ordinary 1D greedy recovery algorithms such as the OMP method [78] can be applied on a vector y, obtained from Y, to recover the sparse vector s. Then, S and X are recovered in 2D. It is worth noting that, contrarily to the 1D reading, this transformation preserves the correlation among sensed values in the 2D space. Let y = vec(Y) denote the M × 1 vector obtained from the √M × √M matrix Y by concatenating its columns as follows

y = [Y(1,1), ..., Y(√M,1), Y(1,2), ..., Y(√M,2), ..., Y(1,√M), ..., Y(√M,√M)]^T.    (6.33)

From linear algebra, applying the operator vec(·) to eq. (6.32) leads to

y = (A1 ⊗ A2) vec(S) + vec(N) = A s + n,    (6.34)

where A = A1 ⊗ A2 ∈ R^{M×N} is the 1D reconstruction matrix, and s = vec(S) and n = vec(N) are columns of length N and M respectively, obtained by column concatenation of the matrices S and N.
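The 1D-2D equivalence can be checked numerically. Note that with the column-wise stacking of eq. (6.33), the standard linear algebra identity reads vec(A1 S A2^T) = (A2 ⊗ A1) vec(S), i.e. the Kronecker factors appear in reverse order; a minimal check, using NumPy's Fortran-order flatten as the vec(·) operator:

```python
import numpy as np

rng = np.random.default_rng(0)
sqM, sqN = 3, 6
A1 = rng.normal(size=(sqM, sqN))
A2 = rng.normal(size=(sqM, sqN))
S = rng.normal(size=(sqN, sqN))

Y = A1 @ S @ A2.T                # noiseless version of eq. (6.32)
y = Y.flatten(order="F")         # column stacking, as in eq. (6.33)
s = S.flatten(order="F")
A = np.kron(A2, A1)              # factor order under column stacking
assert np.allclose(A @ s, y)     # 1D model y = A s of eq. (6.34)
```

This is only a consistency sketch of the vectorization step; the sizes and the random matrices are illustrative.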

At the sink node, only y and the matrix A, built from the knowledge of Ψi and the chosen Φi (i = 1, 2), are needed to recover the 2D input signal X. Indeed, eq. (6.34) is solved using 1D CS recovery algorithms, such as OMP, to recover the sparse 1D solution s. Then, the 2D solution S is deduced and used to reconstruct X as

X = Ψ1 S Ψ2^T.    (6.35)

Our aim in the following is to design a robust sparsifying transform (Ψ1, Ψ2) through a 2D compression scenario.
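A standard OMP sketch with the residual-based stopping rule ε = ‖r_k‖₂/‖r_0‖₂ used in this chapter may look as follows (a generic implementation for illustration, not the authors' exact code):

```python
import numpy as np

def omp(A, y, eps=1e-3, max_iter=None):
    """Orthogonal Matching Pursuit stopped when
    ||r_k||_2 / ||r_0||_2 <= eps."""
    M, N = A.shape
    max_iter = M if max_iter is None else max_iter
    support = []
    s = np.zeros(N)
    r = y.copy()
    r0 = np.linalg.norm(y)
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated column
        if j not in support:
            support.append(j)
        s_sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ s_sub          # updated residual
        if np.linalg.norm(r) <= eps * r0:
            break
    s[support] = s_sub
    return s

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 40))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
s_true = np.zeros(40)
s_true[[3, 17, 29]] = [1.5, -2.0, 0.7]
y = A @ s_true                                 # noiseless measurements
s_hat = omp(A, y)
assert np.linalg.norm(A @ s_hat - y) <= 1e-3 * np.linalg.norm(y)
```

The same routine applies unchanged to the vectorized 2D problem of eq. (6.34).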

6.5.2 Predefined 2D transformation

As mentioned above, the considered sparsifying transform Ψ is 2D separable. This property holds for the most popular sparsifying transformations, such as the DCT and DFT bases. Such a transform is used in order to obtain a sparse representation of the network correlated readings in some adequately designed basis.

The 2D compression is expected to better exploit the spatial correlation and thus to give a higher compression ratio when compared to 1D compression. Further, the reconstruction can be realized in 1D using the sparsity 1D-2D equivalence expressed through eq. (6.34). In this way, the use of 2D separable linear transforms allows to recover the measured 2D signal X by transforming the 2D reconstruction problem into a 1D one, thus leading to a significant complexity reduction.

6.5.3 2D separable principal component analysis technique

The two-dimensional PCA (2DPCA) approach was proposed in the frame of image processing; it computes the eigenvectors of the so-called image covariance matrix [153]. In the following, we first indicate that PCA essentially works in the row direction of images, and then consider an alternative PCA working in the column direction. By simultaneously considering the row and column directions, we develop the 2-directional 2DPCA for more efficient data compression.

• PCA in the rows direction

Let B ∈ R^{√N×√N} be an orthonormal basis. Projecting X onto B yields a √N × √N matrix [154] as

Z = X B.    (6.36)


But how can a good projection B, making Z as sparse as possible, be determined? Define the covariance matrix of the 2D signal X as

G1 = E((X − E(X))^T (X − E(X))).    (6.37)

Suppose that there are L training realizations of the signal X of size √N × √N, denoted Xk, for k = 1, ..., L, and denote the average signal as

X̄ = (1/L) Σ_{k=1}^{L} Xk.    (6.38)

Then, G1 of size √N × √N can be evaluated by

G1 = (1/L) Σ_{k=1}^{L} (Xk − X̄)^T (Xk − X̄),    (6.39)
   = (1/L) Σ_{k=1}^{L} Σ_{i=1}^{√N} (Xk^{(i)} − X̄^{(i)})^T (Xk^{(i)} − X̄^{(i)}),    (6.40)

where Xk^{(i)} and X̄^{(i)} denote the ith row of Xk and X̄ respectively.

Applying PCA in the rows direction corresponds to considering the projection matrix B composed of the orthonormal eigenvectors B1, ..., B√N of G1 corresponding to the √N eigenvalues sorted in decreasing order λ1 ≥ λ2 ≥ ... ≥ λ√N.
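The training procedure of eqs. (6.38)-(6.39) can be sketched directly (illustrative code; the Gaussian training set is a placeholder for the WSN realizations):

```python
import numpy as np

def row_pca_basis(X_list):
    """Row-direction covariance G1 of eq. (6.39) from L training
    realizations, and its eigenvectors by decreasing eigenvalue."""
    Xbar = np.mean(X_list, axis=0)                       # eq. (6.38)
    G1 = sum((Xk - Xbar).T @ (Xk - Xbar) for Xk in X_list) / len(X_list)
    lam, B = np.linalg.eigh(G1)                          # ascending order
    order = np.argsort(lam)[::-1]
    return lam[order], B[:, order]

rng = np.random.default_rng(0)
X_list = [rng.normal(size=(8, 8)) for _ in range(50)]
lam, B = row_pca_basis(X_list)
assert np.all(np.diff(lam) <= 1e-10)                 # decreasing order
assert np.allclose(B.T @ B, np.eye(8), atol=1e-8)    # orthonormal basis
```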

• Alternative PCA principle

A common definition for the image covariance [155] is

G2 = (1/L) Σ_{k=1}^{L} (Xk − X̄)(Xk − X̄)^T,    (6.41)
   = (1/L) Σ_{k=1}^{L} Σ_{j=1}^{√N} (Xk^{(j)} − X̄^{(j)})(Xk^{(j)} − X̄^{(j)})^T,    (6.42)

where Xk^{(j)} and X̄^{(j)} denote the jth column of Xk and X̄ respectively.

Let Q ∈ R^{√N×√N} be a matrix with orthonormal columns. Projecting X onto Q yields a √N × √N matrix as C = Q^T X. Thus, the projection matrix Q can be obtained by computing the eigenvectors Q1, ..., Q√N of G2 corresponding to the √N eigenvalues sorted in decreasing order λ'1 ≥ λ'2 ≥ ... ≥ λ'√N.

• 2DPCA principle


The above PCA and alternative PCA work in the row and column directions respectively. Indeed, PCA learns an optimal matrix B from a set of training 2D signals, reflecting the information of the rows of the images. Similarly, the alternative PCA learns an optimal matrix Q reflecting the information of the columns.

Suppose we have computed the projection matrices B and Q of the 2D signal X [155] such that

S = Q^T X B,    (6.43)

where the matrix S is called the coefficients matrix in image signal representation. Once an estimate of S is recovered, it can be used to reconstruct the original image signal X as

X = Q S B^T.    (6.44)

In this way, a 2D separable compression based on PCA can be envisaged through a learning phase.
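The projection/reconstruction pair of eqs. (6.43)-(6.44) is easy to verify once Q and B have orthonormal columns (here random orthonormal matrices stand in for the learned eigenvector bases):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random orthonormal stand-ins for the learned projections Q and B
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
B, _ = np.linalg.qr(rng.normal(size=(n, n)))
X = rng.normal(size=(n, n))

S = Q.T @ X @ B        # eq. (6.43): coefficient matrix
X_rec = Q @ S @ B.T    # eq. (6.44): reconstruction
assert np.allclose(X_rec, X)   # exact when Q and B are orthonormal
```

Compression comes from keeping only the leading coefficients of S before reconstruction.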

6.6 Proposed 2D Separable Linear Prediction Coding Approach

In this section, the LPC technique is addressed for 2D signal compression. Let us consider the model given by figure 6.12, which illustrates an example of a 2D spatial sub part of WSN readings with N = 9 sensors partitioned in a square area as a 3 × 3 grid. In this way, the sample X(m,n) denotes the 2D sequence of WSN measurements.

In two dimensional LPC (2DLPC), we assume that the considered sample X(m,n) is predicted using the neighboring samples. The 2D predicted sample Xp(m,n) is then expressed as [156]

Xp(m,n) = Σ_{k∈D1} Σ_{l∈D2} a(k,l) X(m−k, n−l),    (6.45)

where a(k,l) is a 2D prediction coefficient and Di is a set of integers containing the considered sample indexes, which depends on the prediction orders in both directions.

Then, the 2D prediction error sequence E(m,n) is expressed as

E(m,n) = X(m,n) − Xp(m,n) = X(m,n) − Σ_{k∈D1} Σ_{l∈D2} a(k,l) X(m−k, n−l).    (6.46)


Figure 6.12: 2D WSN sub part: the samples X(m−1,n−1), ..., X(m+1,n+1) surrounding X(m,n) in the (m,n) plane.

The prediction error can be seen as a filtered version of X(m,n), considering the filter with transfer function

A(z1, z2) = 1 − Σ_{k∈D1} Σ_{l∈D2} a(k,l) z1^{−k} z2^{−l}.    (6.47)

The model coefficients a(k,l) are chosen to minimize the mean squared value of the 2D prediction error sequence. Stability is a very important issue in linear prediction, and only in the one dimensional case can the autocorrelation method guarantee stability [157]. Thus, if the prediction error filter is structured as the product of two 1D prediction error filters, each one predicting along a different direction of the (m,n) plane, then by using the 1D autocorrelation method, the parameters of each individual filter can be found while stability is guaranteed. We hereafter consider a 2Q × 2Q 2D separable prediction mask. Then, under the separability assumption, the corresponding prediction error filter and prediction error sequence can be respectively expressed as

A(z1, z2) = (1 − Σ_{k=−Q, k≠0}^{Q} a(k) z1^{−k}) (1 − Σ_{l=−Q, l≠0}^{Q} b(l) z2^{−l}),    (6.48)


and

E(m,n) = X(m,n) − Σ_{k=−Q, k≠0}^{Q} a(k) X(m−k, n) − Σ_{l=−Q, l≠0}^{Q} b(l) X(m, n−l)
       + Σ_{k=−Q, k≠0}^{Q} Σ_{l=−Q, l≠0}^{Q} a(k) b(l) X(m−k, n−l).    (6.49)

In general, to find the coefficients of a separable predictor, a(k) and b(k) for k = −Q, ..., −1, 1, ..., Q, we need to minimize the square norm of E(m,n) by setting the partial derivatives with respect to the coefficients to zero and solving the resulting system. However, this is not an easy task since it is evident from eq. (6.49) that the resulting system of equations is nonlinear. Thus, a suboptimal solution is considered, where the 2D separable filter is realized as a cascade of two filters with the error sequence being minimized at the output of each filter separately. In other words, prediction is done first along one direction and then along the other. In the following, the causal and noncausal processing according to the two directions will be discussed, where causal and noncausal refer respectively to processing in one way or in two ways (in each direction).

6.6.1 2D separable LPC: causal scenario

We here start with the 1DLPC causal scenario, following the rows then the columns direction of the 2D signal. Causality here refers to the fact that, in each direction, only one-way neighbors are used for prediction.

• Causal 1DLPC in the rows direction

Let Er(m,n) denote the output of the first prediction error filter

Er(m,n) = X(m,n) − Σ_{k=1}^{Q} a(k) X(m−k, n).    (6.50)

Then, the optimal a(k) coefficients are deduced by minimizing the square norm of Er(m,n), expressed as

J1 = Σ_m Σ_n Er²(m,n).    (6.51)

Considering the corresponding MSE, denoted J1 = E(Er(m,n)²), the a(k) are determined by zeroing the partial derivatives of J1 with respect to these coefficients:

∂J1/∂a(i) = 0,  1 ≤ i ≤ Q.    (6.52)


This leads to

E(X(m,n) X(m−i,n)) = Σ_{k=1}^{Q} a(k) E(X(m−k,n) X(m−i,n)),  1 ≤ i ≤ Q.    (6.53)

Under the stationarity assumption, the autocorrelation of X for shifts k and l is

R(k,l) = E(X(m,n) X(m−k, n−l)).    (6.54)

Then, eq. (6.53) can be rewritten as

R(i,0) = Σ_{k=1}^{Q} a(k) R(|k−i|, 0),  1 ≤ i ≤ Q.    (6.55)

In matrix form, we have

Rr a = rr,    (6.56)

where

Rr = [ R(0,0)      R(1,0)      ···   R(Q−1,0)
       R(1,0)      R(0,0)      ···   R(Q−2,0)
         ⋮           ⋮          ⋱       ⋮
       R(Q−1,0)    R(Q−2,0)    ···   R(0,0)   ],    (6.57)

rr = [R(1,0), ..., R(Q,0)]^T is the autocorrelation vector and a = [a(1), ..., a(Q)]^T is the coefficients vector. According to the last equation, the prediction coefficients a(k) can be estimated as

a = Rr^{−1} rr,    (6.58)

which involves the inversion of a Q × Q matrix.
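Eq. (6.58) is a standard Yule-Walker-type solve. A minimal sketch, using a hypothetical AR(1)-like row autocorrelation R(k,0) = 0.9^k, for which the exact solution is a = [0.9, 0, ..., 0]:

```python
import numpy as np

def causal_lpc_coeffs(R_row, Q):
    """Solve eq. (6.56), Rr a = rr, with Rr the Toeplitz matrix of
    eq. (6.57) built from the row autocorrelation lags R(k, 0)."""
    Rr = np.array([[R_row[abs(k - i)] for k in range(Q)] for i in range(Q)])
    rr = np.array(R_row[1:Q + 1])
    return np.linalg.solve(Rr, rr)

# hypothetical AR(1)-like autocorrelation profile: R(k,0) = 0.9**k
R_row = 0.9 ** np.arange(10)
a = causal_lpc_coeffs(R_row, Q=2)
assert np.allclose(a, [0.9, 0.0])
```

For large Q, the Toeplitz structure can also be exploited by a Levinson-type solver instead of a direct inversion.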

Given both the prediction error sequence Er(m,n) and the prediction coefficients a(k), we can reconstruct the signal X(m,n) by filtering Er(m,n) with the synthesis filter

H1(z1) = 1 / (1 − Σ_{k=1}^{Q} a(k) z1^{−k}),

as

X(z1, z2) = H1(z1) Er(z1, z2).    (6.59)

Since the impulse response h1 corresponding to H1(z1) is infinite, we can approximate it by


truncating the filter response to its first √N coefficients. The recovered signal X(m,n) can then be expressed as

X(m,n) = Σ_{k=0}^{√N−1} h1,k Er(m−k, n),    (6.60)

where h1,k is the kth component of h1. In matrix form, we have

X = H1 Er,    (6.61)

where H1 is the √N × √N lower triangular Toeplitz matrix

H1 = [ h1,0        0          ···   0
       h1,1        h1,0       ···   0
         ⋮           ⋮         ⋱     ⋮
       h1,√N−1     h1,√N−2    ···   h1,0 ].    (6.62)
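The truncated synthesis filter and the lower triangular matrix of eq. (6.62) can be built recursively (a sketch; h is the impulse response of 1/(1 − Σ_k a(k) z^{−k}) with zero initial conditions):

```python
import numpy as np

def synthesis_matrix(a, n):
    """First n impulse response samples of H(z) = 1/(1 - sum_k a(k) z^-k),
    arranged as the lower triangular Toeplitz matrix of eq. (6.62)."""
    h = np.zeros(n)
    h[0] = 1.0
    for m in range(1, n):
        # recursion h[m] = sum_k a(k) h[m-k], with a(k) = a[k-1]
        h[m] = sum(a[k] * h[m - 1 - k] for k in range(min(len(a), m)))
    H = np.zeros((n, n))
    for i in range(n):
        H[i, :i + 1] = h[:i + 1][::-1]      # H[i, j] = h[i - j]
    return H

H1 = synthesis_matrix([0.9], 4)
assert np.allclose(H1[:, 0], [1.0, 0.9, 0.81, 0.729])
```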

Revisiting eqs. (6.46)-(6.48), the 2D prediction error is

E(z1, z2) = A(z1, z2) X(z1, z2),    (6.63)

with

(1 − Σ_k a(k) z1^{−k}) X(z1, z2) = Er(z1, z2).    (6.64)

Then,

E(z1, z2) = Ec(z1, z2) = (1 − Σ_l b(l) z2^{−l}) Er(z1, z2).    (6.65)

These last two equations serve to reconstruct X from Ec, following the diagram of figure 6.13.

Figure 6.13: Synthesis filters: E = Ec is filtered by H2 = 1/(1 − Σ_l b(l) z2^{−l}) to give Er, then by H1 = 1/(1 − Σ_k a(k) z1^{−k}) to give X.

• Causal 1DLPC in the columns direction

The output of the second 1D prediction error filter is given by

Ec(m,n) = Er(m,n) − Σ_{l=1}^{Q} b(l) Er(m, n−l).    (6.66)


The prediction error energy is

J2 = Σ_m Σ_n Ec²(m,n),    (6.67)

and the corresponding MSE is J2 = E(Ec(m,n)²). The b(l) are optimized to minimize J2 and can be found by zeroing the derivative of J2 with respect to b(l), which leads to

E(Er(m,n) Er(m,n−j)) = Σ_{l=1}^{Q} b(l) E(Er(m,n−l) Er(m,n−j)).    (6.68)

Then, we have

R(0,j) = Σ_{l=1}^{Q} b(l) R(0, |l−j|),  1 ≤ j ≤ Q.    (6.69)

In matrix form, we have

Rc b = rc,    (6.70)

where

Rc = [ R(0,0)      R(0,1)      ···   R(0,Q−1)
       R(0,1)      R(0,0)      ···   R(0,Q−2)
         ⋮           ⋮          ⋱       ⋮
       R(0,Q−1)    R(0,Q−2)    ···   R(0,0)   ],    (6.71)

rc = [R(0,1), ..., R(0,Q)]^T is the autocorrelation vector and b = [b(1), ..., b(Q)]^T is the coefficients vector.

In this case, the synthesis filter is given by

H2(z2) = 1 / (1 − Σ_{l=1}^{Q} b(l) z2^{−l}).

Then,

Er(z1, z2) = H2(z2) Ec(z1, z2).    (6.72)

The recovered signal Er(m,n) can then be expressed as

Er(m,n) = Σ_{l=0}^{√N−1} h2,l Ec(m, n−l),    (6.73)

where the h2,l are the first √N impulse response coefficients corresponding to H2. In matrix form, we have

Er = Ec H2^T,    (6.74)


where

H2 = [ h2,0        0          ···   0
       h2,1        h2,0       ···   0
         ⋮           ⋮         ⋱     ⋮
       h2,√N−1     h2,√N−2    ···   h2,0 ].    (6.75)

Then, according to eq. (6.61) and eq. (6.74), we have

X = H1 Ec H2^T.    (6.76)

Ec is identified with the sparse 2D signal S, and is used to recover X.
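Eq. (6.76) can be verified end-to-end: with zero initial conditions, the √N × √N lower triangular analysis (prediction error) matrices and the corresponding truncated synthesis matrices are exact inverses of each other (a sketch with arbitrary illustrative coefficients a and b):

```python
import numpy as np

def analysis_matrix(a, n):
    """Matrix form of the causal prediction error filter:
    (A x)(m) = x(m) - sum_k a(k) x(m-k), zero initial conditions."""
    A = np.eye(n)
    for k, ak in enumerate(a, start=1):
        A -= ak * np.eye(n, k=-k)     # -a(k) on the kth subdiagonal
    return A

rng = np.random.default_rng(0)
n = 8
a, b = [0.6, 0.2], [0.5]                  # illustrative coefficients
X = rng.normal(size=(n, n))
A1, A2 = analysis_matrix(a, n), analysis_matrix(b, n)
Ec = A1 @ X @ A2.T                        # row then column prediction errors
H1, H2 = np.linalg.inv(A1), np.linalg.inv(A2)   # synthesis matrices
assert np.allclose(H1 @ Ec @ H2.T, X)     # eq. (6.76)
```

This also makes explicit why Ec plays the role of the sparse representation S in the 2D separable model.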

6.6.2 2D separable LPC: noncausal scenario

Differently from the above analysis, the prediction in each direction now considers measurements in two ways. In this scenario, the filter (6.48) sums over k and l ranging from −Q to Q rather than from 1 to Q as in the causal scenario.

• Noncausal 1DLPC in the rows direction

In this case, the output of the first prediction error filter is expressed as

Er(m,n) = X(m,n) − Σ_{k=−Q, k≠0}^{Q} a(k) X(m−k, n).    (6.77)

With the same reasoning as earlier, we have

R(i,0) = Σ_{k=−Q, k≠0}^{Q} a(k) R(|k−i|, 0),  −Q ≤ i ≤ Q, i ≠ 0.    (6.78)

In matrix form, we have

Rr a1 = rr,    (6.79)


where Rr is the 2Q × 2Q matrix with entries Rr(i,k) = R(|k−i|, 0) for i, k ∈ {−Q, ..., −1, 1, ..., Q}:

Rr = [ R(0,0)       R(1,0)       ···   R(Q−1,0)    R(Q+1,0)   ···   R(2Q,0)
       R(1,0)       R(0,0)       ···   R(Q−2,0)    R(Q,0)     ···   R(2Q−1,0)
         ⋮            ⋮           ⋱       ⋮           ⋮         ⋱       ⋮
       R(Q−1,0)     R(Q−2,0)     ···   R(0,0)      R(2,0)     ···   R(Q+1,0)
       R(Q+1,0)     R(Q,0)       ···   R(2,0)      R(0,0)     ···   R(Q−1,0)
         ⋮            ⋮           ⋱       ⋮           ⋮         ⋱       ⋮
       R(2Q−1,0)    R(2Q−2,0)    ···   R(Q,0)      R(Q−2,0)   ···   R(1,0)
       R(2Q,0)      R(2Q−1,0)    ···   R(Q+1,0)    R(Q−1,0)   ···   R(0,0) ],    (6.80)

rr = [R(Q,0), R(Q−1,0), ..., R(1,0), R(1,0), ..., R(Q,0)]^T is the autocorrelation vector and a1 = [a(−Q), ..., a(−1), a(1), ..., a(Q)]^T is the prediction coefficients vector. According to the last equation, the prediction coefficients a(k) can be estimated. The steps from eq. (6.59) to eq. (6.62) are then processed in order to generate the sparsity inducing transform H1 in the rows direction.
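The noncausal system of eq. (6.79) can be assembled directly over the index set {−Q, ..., −1, 1, ..., Q}; for a symmetric autocorrelation the resulting two-sided predictor is symmetric, a(−k) = a(k) (a sketch with a hypothetical exponential lag profile):

```python
import numpy as np

def noncausal_lpc_coeffs(R_row, Q):
    """Build and solve eq. (6.79): Rr(i,k) = R(|k-i|, 0) and
    rr(i) = R(|i|, 0) over the indexes {-Q,...,-1, 1,...,Q}."""
    idx = [k for k in range(-Q, Q + 1) if k != 0]
    Rr = np.array([[R_row[abs(k - i)] for k in idx] for i in idx])
    rr = np.array([R_row[abs(i)] for i in idx])
    return np.linalg.solve(Rr, rr)

R_row = 0.8 ** np.arange(10)              # hypothetical lag profile
a1 = noncausal_lpc_coeffs(R_row, Q=1)
assert np.isclose(a1[0], a1[1])           # symmetric two-sided predictor
```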

• Noncausal 1DLPC in the columns direction

The output of the second prediction error filter, in the columns direction, is given by

Ec(m,n) = Er(m,n) − Σ_{l=−Q, l≠0}^{Q} b(l) Er(m, n−l).    (6.81)

Following the above approach, we obtain

R(0,j) = Σ_{l=−Q, l≠0}^{Q} b(l) R(0, |l−j|),  −Q ≤ j ≤ Q, j ≠ 0.    (6.82)

In matrix form, we have

Rc b1 = rc,    (6.83)


where Rc is the 2Q × 2Q matrix with entries Rc(j,l) = R(0, |l−j|) for j, l ∈ {−Q, ..., −1, 1, ..., Q}:

Rc = [ R(0,0)       R(0,1)       ···   R(0,Q−1)    R(0,Q+1)   ···   R(0,2Q)
       R(0,1)       R(0,0)       ···   R(0,Q−2)    R(0,Q)     ···   R(0,2Q−1)
         ⋮            ⋮           ⋱       ⋮           ⋮         ⋱       ⋮
       R(0,Q−1)     R(0,Q−2)     ···   R(0,0)      R(0,2)     ···   R(0,Q+1)
       R(0,Q+1)     R(0,Q)       ···   R(0,2)      R(0,0)     ···   R(0,Q−1)
         ⋮            ⋮           ⋱       ⋮           ⋮         ⋱       ⋮
       R(0,2Q−1)    R(0,2Q−2)    ···   R(0,Q)      R(0,Q−2)   ···   R(0,1)
       R(0,2Q)      R(0,2Q−1)    ···   R(0,Q+1)    R(0,Q−1)   ···   R(0,0) ],    (6.84)

rc = [R(0,Q), R(0,Q−1), ..., R(0,1), R(0,1), ..., R(0,Q)]^T is the autocorrelation vector and b1 = [b(−Q), ..., b(−1), b(1), ..., b(Q)]^T is the prediction coefficients vector. The last equation allows to recover the prediction coefficients b(l). Note that Rc and rc are computed by considering the matrix Er (rather than X) of the rows direction prediction errors. As in the causal scenario, the sparsifying transform H2 in the columns direction is derived from the steps given by eq. (6.72) to eq. (6.75).

• Global scheme

1. Training step: after generating the observation X, the autocorrelation matrix Rr in the rows direction is computed from the lags R(i,0), 0 ≤ i ≤ 2Q, from which the sequence rr is evaluated. In this way, the prediction coefficients a can be computed and the sparsity inducing transform H1 in the rows direction is deduced. For the processing in the columns direction, the 2D prediction error signal Er is needed. The corresponding autocorrelation matrix Rc is then computed to estimate the coefficients vector b and then the transform H2.

2. Reconstruction step: the sparse 2D signal S is derived from the measurement representation Y and the reconstruction matrices A1 and A2 given by eq. (6.32).

6.7 Discussions and Numerical Results

6.7.1 Application to WSN

We consider the same context as above, where a WSN with a regular topology is adopted. The monitored square area is divided into N regular cells, each equipped with one sensor. Each sensor i, where i = 1, ..., N, measures a real valued parameter. In order to exploit the WSN spatial correlation in both the rows and columns directions, the data is viewed as a matrix X of dimensions √N × √N, with element X(i,j) representing the measurement of the sensor at row position i and column position j.

For the simulation parameters, the network model is the same as in the previous 1D processing. A large monitored region with a square area of 200m × 200m is considered. This region is regularly divided into N = 100 (10 by 10) cells. Each cell is equipped with one sensor that measures a real valued continuous parameter. After the sensing step, the readings are gathered at the sink node directly by one hop or through multi-hop transmissions.

The reconstruction performance is evaluated in terms of the NMSE on X, which is given by

NMSE = E(‖X − X̂‖₂²) / E(‖X‖₂²).    (6.85)

The NMSE is experimentally evaluated and studied versus the measurements number M and the SNR, which is defined as the ratio of the transmitted signal X mean energy to the noise variance σ²:

SNR = (1/(Nσ²)) Σ_{i=1}^{√N} Σ_{j=1}^{√N} |X(i,j)|².
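The empirical counterpart of eq. (6.85), averaging over a batch of reconstructed realizations, can be written as a small helper (the realization lists below are placeholders):

```python
import numpy as np

def nmse(X_list, Xhat_list):
    """Empirical NMSE of eq. (6.85): average squared reconstruction
    error energy over average signal energy."""
    num = sum(np.sum((X - Xh) ** 2) for X, Xh in zip(X_list, Xhat_list))
    den = sum(np.sum(X ** 2) for X in X_list)
    return num / den

rng = np.random.default_rng(0)
X_list = [rng.normal(size=(10, 10)) for _ in range(5)]
assert nmse(X_list, X_list) == 0.0                             # perfect reconstruction
assert np.isclose(nmse(X_list, [0 * X for X in X_list]), 1.0)  # all-zero estimate
```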

6.7.2 Data spatial correlation

In this section, we first describe the signal generation used for performance evaluation. For

data generation, we prceed by coloring a random, independent and identically distributed

(i.i.d.) Gaussian field to generate a synthetic signal from the correlation characteristics. We

use the following steps to generate X. First, we generate an i.i.d. and stationary 2D random

Gaussian field w(x, y) with zero mean and unit variance. In order to generate the considered

correlated 2D signal, we color the random Gaussian field through 2D filtering procedure [158].

This coloring involves the following steps

⋆ Transform w(x, y) to frequency domain as W(u, v), where (u, v) ∈ F which is the frequency

domain using the 2D Fourier transform F such as W(u, v) = Fw(x, y).

⋆ Multiply W(u, v) by a matrix M of size√N×

√N . M is related to the 2D Fourier transform

Rf of the network mean correlation matrix Rs such as

M = R1/2f and Rf = FRs. (6.86)

⋆ Compute the inverse Fourier transform of the result X = FHR1/2f W to obtain the desired

colored random field X(x, y).

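The coloring steps above can be sketched as follows (an illustrative implementation; the toy circular exponential correlation model and its parameters are stand-ins for the power exponential model of eq. (6.29)):

```python
import numpy as np

def colored_field(Rs, rng):
    """Color an i.i.d. Gaussian field through the 2D Fourier-domain
    filtering steps above: X = F^H Rf^{1/2} F w, with Rf = F Rs."""
    n = Rs.shape[0]
    W = np.fft.fft2(rng.normal(size=(n, n)))   # W = F w
    Rf = np.fft.fft2(Rs)                       # Rf = F Rs, eq. (6.86)
    M = np.sqrt(np.maximum(Rf.real, 0.0))      # Rf^{1/2}, clipped to >= 0
    return np.real(np.fft.ifft2(M * W))        # X = F^H Rf^{1/2} W

rng = np.random.default_rng(0)
n = 16
# toy separable exponential correlation on a circular grid
d = np.minimum(np.arange(n), n - np.arange(n))
Rs = np.exp(-0.3 * (d[:, None] + d[None, :]))
X = colored_field(Rs, rng)
assert X.shape == (n, n)
```

Clipping the real part of Rf to nonnegative values is a practical safeguard so that the square root stays real.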

To generate Rs, the power exponential correlation model given by eq. (6.29) is considered. Then, the retained correlation matrix Rs, of dimensions √N × √N, is obtained by computing the mean spatial correlation of each sensor node reading with the other nodes of the network. On the other hand, since the 2D correlated signal generation is based on transformations in the Fourier domain, the discrete Fourier basis will serve as a benchmark, since it represents the most adequate basis for the sparse representation of the so generated signal.

Two examples of such colored fields are shown in figure 6.14 and figure 6.15 for two mean correlation values. More precisely, figure 6.14 and figure 6.15 depict two synthetic signals, in 3D and 2D representations, generated using the above procedure for high and low correlation respectively.

The supervised approaches for data compression, such as the PCA technique and the proposed LPC based scenarios, exploit the spatial WSN data correlation. The mean correlation over the whole network is hereafter chosen as ρ ≥ 0.5. Also, we consider only the nearest neighbors of the data to be predicted: in the following, we use Q = 2 for LPC with causal processing and Q = 1 for LPC with the noncausal scenario.

Figure 6.14: Generated correlated 2D data (3D and 2D representations of X(x,y)) with mean correlation ρ = 0.9.

6.7.3 Numerical results

As mentioned above, the OMP CS algorithm is used for the original data recovery. In this scenario, OMP operates with a stopping rule conditioned by the residual observation norm. Figure 6.16 displays the normalized residual norm versus the supposed sparsity level (K), which is


Figure 6.15: Generated correlated 2D data (3D and 2D representations of X(x,y)) with mean correlation ρ = 0.5.

the OMP iterations number. In the following, we choose the normalized residual norm threshold ε = ‖rK‖₂/‖r0‖₂ = 10⁻³ as the stopping rule for all considered data compression schemes. This residual is attained by PCA after 24 iterations and by causal LPC after 34 iterations. We retrieve for 2D LPC a slow decrease of the residual similar to that observed in the 1D LPC case.

Figure 6.16: Normalized residual norm versus iterations number (K) for PCA, DCT, DFT, causal LPC and noncausal LPC, when mean correlation ρ = 0.9 and M = 49.

• Comparison between 1D and 2D processing

In this part, a comparative study between the 1D and 2D compression schemes is established. We evaluate the NMSE of the 1D scenario, as given by eq. (6.30), and of the 2D processing, for two values of the measurements number, M = 25 and M = 49, versus the SNR in figure 6.17 and figure 6.18. We can see that the considered transformations for sparsity inducing in WSN with the 2D scenario achieve enhanced performance compared to 1D data compression, reaching the lowest NMSE rates, especially for a high measurement number M. This behavior is related to the fact that the 2D compression better exploits the correlation between WSN readings.

Figure 6.17: Comparison between the 1D and 2D scenarios of the supervised transforms (PCA, causal LPC and noncausal LPC) when mean correlation ρ = 0.9. Solid lines correspond to M = 25 and dashed lines to M = 49.

Figure 6.18: Comparison between the 1D and 2D scenarios of the unsupervised transforms (DCT and DFT) when mean correlation ρ = 0.9. Solid lines correspond to M = 25 and dashed lines to M = 49.

• 2D separable based approaches performance evaluation

Having justified the validity of the 2D compression scenario, a comparison between the different considered techniques with 2D processing is now presented. Figure 6.19(a) and figure 6.19(b) report the NMSE rates versus respectively the measurement number M and the SNR. It is worth noting that the LPC noncausal scenario achieves the best performance, with the lowest NMSE rates over the whole considered measurement number range (M). Also, the proposed LPC based sparsifying transforms are more robust to the noise level. As shown in figure 6.19(b), we can see that LPC with


Figure 6.19: 2D performance evaluation when ρ = 0.8, for PCA, DCT, DFT, LPC causal (Q = 2) and LPC noncausal (Q = 1): (a) NMSE versus M, for SNR = −5dB and SNR = 5dB; (b) NMSE versus SNR, for M = 49 and M = 64.

noncausal processing achieves the lowest NMSE rates mainly at low SNR values.

Figure 6.20: Reconstruction error (NMSE) versus cells number (N) for PCA, DCT, DFT, causal LPC and noncausal LPC, when mean correlation ρ ≈ 0.7 and SNR = 0dB.

Concerning the analysis of the sensors density effect, figure 6.20 displays the reconstruction error rates versus the number of considered network cells, ranging from 100 (10 × 10) to 400 (20 × 20). The measurement number M is chosen from 72 to 172 in such a way as to keep a constant ratio M/N = 0.5. The obtained results indicate the efficiency of the proposed LPC approaches, which achieve the lowest NMSE rates over the whole sensors number range compared to the existing sparsity inducing transforms.


As the 2D separable data compression techniques used for sparsity inducing in WSN are based on applying 1D data compression in both the rows and columns directions of the 2D WSN readings, they remain the most advantageous in terms of computational complexity. LPC leads to a further reduced complexity when compared to PCA.

6.8 Conclusion

Data compression is an effective solution to circumvent the energy limitation and guarantee

high transmission capacity in WSN. In this chapter, the problem of spatially correlated data

compression in WSN is investigated.

More precisely, we considered transforms for sparsity inducing in WSN, which are required for CS application to reconstruct the original 2D signal from a reduced number of deployed sensors. We proposed a novel sparsity inducing algorithm that exploits spatial data correlation for efficient data aggregation. The proposed approach is based on the Linear Prediction Coding (LPC) technique. Using the LPC-based sparsifying basis, we are able to reduce the number of required measurements and thus the energy consumption in WSN. Several other sparsifying transforms, namely the predefined DFT and DCT and the supervised PCA, are also considered for comparison.
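To illustrate the LPC idea (this is a sketch, not the chapter's exact coefficient training or noncausal variant), a causal order-Q prediction residual can be written as a linear analysis operator; for strongly correlated readings the residual vector is approximately sparse. The coefficient vector `a` is assumed already estimated:

```python
import numpy as np

def lpc_analysis_matrix(a, n):
    """Causal LPC analysis matrix T (n x n) such that s = T @ x is the
    prediction residual s[i] = x[i] - sum_q a[q] * x[i-1-q]. If the data x
    are well predicted from their Q past neighbors, s is approximately
    sparse, so T can serve as a sparsifying transform for CS."""
    T = np.eye(n)
    for i in range(n):
        for q, aq in enumerate(a):
            if i - 1 - q >= 0:
                T[i, i - 1 - q] = -aq
    return T

# A perfectly correlated (constant) field: an order-1 predictor with a = [1]
# leaves a single nonzero boundary sample, i.e. a 1-sparse residual.
T = lpc_analysis_matrix([1.0], 5)
print(T @ np.ones(5))   # -> [1. 0. 0. 0. 0.]
```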

This chapter is divided into two parts. In the first part, we addressed the scenario of 1D data compression. The analysis based on 1D readings of the 2D WSN measurements exploits the correlation in only one direction. To better exploit the spatial correlation, a 2D compression scenario is derived in the second part of this chapter. In this context, using linear algebra tools, the 2D reconstruction step is reformulated as a 1D one, and the whole set of WSN observations is then recovered in 2D based on the 1D OMP recovery algorithm. For both the 1D and 2D data compression scenarios, numerical results confirm the effectiveness of the proposed LPC approach and show its robustness to noise and its superiority over existing WSN data sparsifying transforms.
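The linear-algebra reformulation from 2D to 1D rests on the Kronecker identity vec(A X Bᵀ) = (B ⊗ A) vec(X). A small sketch with hypothetical dimensions:

```python
import numpy as np

# Separable 2D compression Y = Phi_r @ X @ Phi_c.T (one 1D compression per
# direction) is equivalent to the 1D problem y = (Phi_c kron Phi_r) x with
# y = vec(Y) and x = vec(X) (column-major vec), so a standard 1D recovery
# algorithm such as OMP can be applied directly to the 2D readings.
rng = np.random.default_rng(0)
X = np.zeros((8, 8))
X[2, 5] = 1.0                        # toy sparse 2D field
Phi_r = rng.standard_normal((4, 8))  # row-direction measurement matrix
Phi_c = rng.standard_normal((4, 8))  # column-direction measurement matrix

Y = Phi_r @ X @ Phi_c.T
A = np.kron(Phi_c, Phi_r)            # vec(A X B^T) = (B kron A) vec(X)
assert np.allclose(A @ X.flatten(order="F"), Y.flatten(order="F"))
```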


Chapter 7

Conclusion and Future Work

Contributions Summary

With the huge amount of measured and processed data and the advances in wireless communications, efficiently transmitting and communicating information poses a great challenge in terms of energy consumption and deployment cost. In this thesis, we apply CS theory to wireless communications as a new perspective that holds promising improvements for problems such as power consumption, deployment cost and operational performance. CS indeed allows processing a high-dimensional signal at low computational cost, provided it can be represented by a sparse parameter vector in some adequate basis. This makes it well suited to signal acquisition and compression. We started by elaborating an overview of CS theory and its techniques for sparse signal recovery, and then gave a survey of CS potentials in wireless communications.

The first part of this thesis deals with the recovery and tracking of sparse signals in known bases. First, we considered the problem of rare events detection and counting in WSN. In this context, we proved through a theoretical formulation the validity of the CS problem for target detection and counting: the target positions and numbers are modeled as a sparse vector with integer components. The CS formulation in a greedy framework has then been applied to estimate the target locations and number, and several greedy algorithms have been proposed to enhance the detection performance. More precisely, a new generalized extension of the recent GMP algorithm has been devised, which identifies simultaneously, at each iteration, multiple active cell positions together with their event numbers. Also, a Greedy OMP (GOMP) algorithm has been proposed, which efficiently handles the non-orthogonality of the CS decomposition basis. Simulation results validated the superiority of the proposed algorithms over the existing GMP algorithm in terms of mean square error, counting error and correct event position detection. In order to reduce the computational load, modified versions of the GMP and GOMP algorithms, based on separating the detection and counting steps, have been introduced.

Furthermore, we proposed to optimize the sensor activation scheme, with new criteria based on the measured observations and on the channel matrix coherence. Numerical results demonstrated that, at low SNR, the proposed optimal sensor selection enhances the detection capacity and reduces the counting error compared to random selection.
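The greedy recovery of an integer-valued sparse vector can be sketched as plain OMP with a final integer rounding of the recovered counts. This is illustrative only: the GMP and GOMP algorithms of this work refine the atom selection and counting steps, and the dictionary below is hypothetical.

```python
import numpy as np

def omp_count(A, y, k):
    """Plain OMP with k iterations, followed by rounding the recovered
    coefficients to integers (target counts per cell). Illustrative sketch,
    not the exact GMP/GOMP algorithms."""
    residual, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = np.rint(x_s)
    return x

# Two targets in cell 0 of a toy 3-cell dictionary:
A = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.8]])
print(omp_count(A, np.array([2.0, 0.0]), 1))   # -> [2. 0. 0.]
```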

Second, the problem of rare events detection has been studied in a large-scale WSN, in which the distance between two adjacent sensors is so large that large-scale fading severely affects the transmitted signal. We therefore suggested forcing the sensing matrix to approximately obey the i.i.d. entries hypothesis, which is expected to approach the RIP conditions and consequently to guarantee good reconstruction performance. In this context, we proposed collaborative schemes that control the transmitted power of some targets with the aim of reducing the coherence of the sensing matrix and thereby enhancing the detection performance in WSN. Two new power control (PC) schemes that partition the sensor nodes into clusters were proposed, based respectively on the range of the sensors from the cluster head (PCMD) and on the deployed sensor density around the cluster head (PCMSn). Each of the two schemes operates with either a local or a global distance effect compensation, yielding the variants denoted PCMDl and PCMDg for the PCMD mechanism, and PCMSnl and PCMSng for the PCMSn approach. Simulation results validate the superiority of the proposed schemes, in terms of coherence reduction and reconstruction performance, over the case without PC.
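The quantity targeted by these PC schemes is the mutual coherence of the sensing matrix, i.e. the largest absolute correlation between two distinct normalized columns. A minimal sketch:

```python
import numpy as np

def mutual_coherence(Phi):
    """Mutual coherence of a sensing matrix: the maximum absolute inner
    product between distinct unit-normalized columns. Lower coherence brings
    the matrix closer to RIP-friendly behavior, which is the goal of the
    power control schemes."""
    G = Phi / np.linalg.norm(Phi, axis=0)  # normalize columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)            # ignore self-correlations
    return float(gram.max())

print(mutual_coherence(np.eye(3)))                         # -> 0.0
print(round(mutual_coherence(np.array([[1.0, 1.0],
                                       [0.0, 1.0]])), 4))  # -> 0.7071
```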

In the same context of sparse signal recovery, the case of a continuous-valued sparse parameter has been studied. The envisaged application is the tracking of a sparse CIR in OFDM systems. Realistic models for such channels predict highly non-stationary gains together with delays of much slower temporal variation. In order to enhance the channel estimation accuracy, we proposed a scheme that disjointly and successively tracks the delay subspace, by Kalman filtering, and then the CIR structure. Contrary to former subspace-based channel response tracking, the channel order is assumed unknown. The channel sparsity in the time domain is accounted for by incorporating an adaptive CIR support tracking, which combines the CIR structures recovered over the last and current OFDM blocks. A threshold-based CIR structure detection is then applied to the recovered CIR estimate over its detected support, and finally a structured LS estimation is performed. The proposed scheme is proved to outperform the sparsity-unaware Kalman tracking algorithm, and it achieves performance similar to the best benchmark, which relies on perfect knowledge of the CIR structure. In addition, we considered the issue of optimized pilot placement for spectrum efficiency improvement. In this framework, we proposed a new scheme that iteratively finds a near-optimal pilot pattern in a forward manner using a tree-based structure. Simulation results showed that the proposed scheme enhances both the CIR estimation and the symbol error rate compared to former pilot placement schemes, with a noticeable computational load reduction.
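The last two steps of the tracking scheme, thresholding a coarse CIR estimate to detect its support and re-estimating the surviving taps by structured LS, can be sketched as follows (the Kalman delay-subspace stage is omitted; the names and threshold value are ours):

```python
import numpy as np

def structured_ls_cir(F, y, h_coarse, thresh):
    """Keep only the CIR taps whose coarse estimate exceeds the threshold
    (support detection), then re-estimate those taps by least squares under
    the observation model y = F @ h. Sketch of the thresholding and
    structured-LS steps only."""
    support = np.flatnonzero(np.abs(h_coarse) > thresh)
    h = np.zeros_like(h_coarse)
    h[support], *_ = np.linalg.lstsq(F[:, support], y, rcond=None)
    return h

# Toy example: 2 true taps out of 4, observed through an identity model.
h_true = np.array([0.0, 3.0, 0.0, 1.0])
h_coarse = np.array([0.1, 2.9, -0.05, 1.1])   # noisy coarse estimate
print(structured_ls_cir(np.eye(4), h_true, h_coarse, 0.5))  # -> [0. 3. 0. 1.]
```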

The first part of the thesis considered the recovery of sparse signals based on knowledge of the decomposition/reconstruction matrix. However, a sparse data representation cannot be easily induced in many other real-world contexts, such as continuous environmental data gathering. In the second part of the thesis, by contrast, we investigated the aggregation of non-sparse but spatially correlated 2D data, applied in the WSN context. In order to reduce the amount of exchanged data, we compress it and eliminate its redundancy. This requires searching for an adequate sparsity inducing basis, contrary to the first part of the thesis where the sparsifying basis is a priori known. In addition to the conventional sparsity inducing transforms in WSN, namely the supervised Principal Component Analysis (PCA) and the predefined Discrete Cosine Transform (DCT) and Discrete Fourier Transform (DFT), a new technique based on linear prediction and exploiting spatial correlation has been developed. It yields a sparsifying transformation that allows original signal recovery from a reduced number of sensor measurements. First, a 1D reading of the network is considered; then a 2D reading is envisaged and shown to outperform the former, since it is tailored to better exploit the spatial correlation.

Perspectives

A number of potential future research directions emerge at the end of this work.

The elaborated compressive sensing based WSN rare events detection assumes that the targets are stationary, so that the sparse parameter has a constant support during the observation. In practice, however, the targets move, which changes both the support and the values of the sparse vector. As future work, we plan to study the case where the targets move slowly in time. In such a context, new adaptive data gathering schemes can be designed to exploit the slow variation not only of the support (the active cell positions) but also of the number of targets per cell, as a target may move from one cell to a neighboring cell.

Still in the WSN context, the spatial correlation between sensor nodes, which was assumed uniform in the second part of the thesis dedicated to the reconstruction basis search, cannot in practice be perfectly uniform. A uniform measurement correlation per region can then be adopted, where each region is formed by sensor nodes with locally uniform correlation. It is then possible to adopt different uniform correlation models and to proceed by clustering approaches that produce clusters of heterogeneous sizes. This is expected to lead to a more energy efficient WSN under non-uniform spatial data correlation.

Concerning the WSN data compression issue, as part of our future work we suggest applying the proposed sparsity inducing transforms to real-world WSN measurement data sets.

Finally, CS holds promising high-compression capabilities for handling the tremendous quantities of data to be exchanged and stored in the emerging IoT context. Indeed, a low-cost data acquisition system is necessary to effectively collect and process the data and information at the IoT end nodes. WSNs have proven their potential in a wide range of applications in many industrial systems and can be integrated into the IoT, which consists of a number of interconnected sensor nodes. However, the heterogeneity of information systems should be accounted for. Also, multi-hop data transmission scenarios that exploit both intra- and inter-sensor data correlations can be envisaged for the design of novel, efficient, large-scale data aggregation mechanisms.

154

Page 181: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[1] J. S. Lee, Y. W. Su, and C. C. Shen. “A comparative study of wireless protocols:

Bluetooth, UWB, ZigBee, and Wi-Fi”. Conference of the IEEE Industrial Electronics

Society, pages 46–51, 2007.

[2] B. K. Natarajan. “Sparse approximate solutions to linear systems”. SIAM journal on

computing, 24(2):227–234, 1995.

[3] Z. Zhang, Y. Xu, J. Yang, X. Li, and D. Zhang. “A survey of sparse representation:

algorithms and applications”. IEEE Access, 3:490–530, 2015.

[4] A. M. Bruckstein, D. L. Donoho, and M. Elad. “From sparse solutions of systems of

equations to sparse modeling of signals and images”. IEEE Access, 51(1):34–81, 2009.

[5] D. Gil, A. Ferrndez, H. Mora-Mora, and J. Peral. “Internet of Things: A review of

surveys based on context aware intelligent services”. Sensors, 16(7):1069, 2016.

[6] Y. Qin and et al. “When things matter: A survey on data-centric internet of things”.

Journal of Network and Computer Applications, 64(7):137–153, 2016.

[7] S. K. Anithaa and et al. “The internet of things-a survey”. World Scientific News, 41

(7):150, 2016.

[8] M. Elad. “Sparse and redundant representations: From theory to applications in signal

and image processing”. New York, NY, USA: Springer-Verlag, 2010.

[9] Y. Li. “Sparse representation for machine learning”. In Canadian Conference on Artificial

Intelligence, Springer Berlin Heidelberg, pages 352–357, 2013.

[10] Y. Li and A. Ngom. “Sparse representation approaches for the classification of high-

dimensional biological data”. BMC systems biology, 7(Suppl 4):S6, 2013.

155

Page 182: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[11] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan. “Sparse representation

for computer vision and pattern recognition.”. Proceedings of the IEEE, 98(6):1031–1044,

2010.

[12] L. El Ghaoui, G. C. Li, V. A. Duong, V. Phan, A. N. Srivastava, and K. Bhaduri. “Sparse

machine learning methods for understanding large text corpora”. In CIDU, pages 159–

173, 2011.

[13] F. Marvasti, A. Amini, F. Haddadi, M. Soltanolkotabi, B. H. Khalaj, A. Aldroubi, and

J. Chambers. “A unified approach to sparse signal processing”. EURASIP journal on

advances in signal processing, (1):1, 2012.

[14] Y. Tsaig and D. L. Donoho. “Extensions of compressed sensing”. Signal processing, 86

(3):549–571, 2006.

[15] G. Peyre. “Best basis compressed sensing”. In International Conference on Scale Space

and Variational Methods in Computer Vision, Springer Berlin Heidelberg, pages 80–91,

2007.

[16] Y. C. Eldar and G. (eds.). Kutyniok. “Compressed sensing: theory and applications”.

Cambridge University Press, 86(3):549–571, 2012.

[17] R. G. Baraniuk, E. Candes, M. Elad, and Y. Ma. “Applications of sparse representation

and compressive sensing”. Proceedings of the IEEE, 98(6):906–909, 2010.

[18] E. L. Candes and T. Tao. “Near-optimal signal recovery from random projections: Uni-

versal encoding strategies?”. IEEE Transactions on Informations Theory, 52(12):5406–

5425, 2008.

[19] E. J. Candes, J. K. Romberg, and T. Tao. “Stable signal recovery from incomplete

and inaccurate measurements”. Communications on Pure and Applied Mathematics, 59:

1207–1223, 2006.

[20] D. L. Donoho. “Compressed sensing”. IEEE Transactions on Information Theory, 52

(4):1289–1306, 2006.

[21] E. J. Candes, J. Romberg, and T. Tao. “Robust uncertainty principles: Exact signal

reconstruction from highly incomplete frequency information. information theory”. IEEE

Transactions on Information Theory, 52(2):489–509, 2006.

156

Page 183: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography BIBLIOGRAPHY

[22] E. L. Candes and M. B. Wakin. “An introduction to compressive sampling”. IEEE Signal

Processing Magazine, 25(2):21–30, 2008.

[23] Y. U. A. N. Jia, T. I. A. N. Pengwu, and Y. U. Hongyi. “The identification of frequency

hopping signal using compressive sensing”. Communications and Network, 1(01):52, 2007.

[24] F. Liu, Y. Kim, N. A. Goodman, A. Ashok, and A. Bilgin. “Compressive sensing of

frequency-hopping spread spectrum signals”. In SPIE Defense, Security, and Sensing.

International Society for Optics and Photonics, 1(01):83650–83650, 2007.

[25] Z. Tian. “Compressed wideband sensing in cooperative cognitive radio networks”. IEEE

Global Telecommunications Conference, pages 1–5, 2008.

[26] Z. Tian and G. B. Giannakis. “Compressed sensing for wideband cognitive radios”. In

IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP, 4:

IV–1357, 2007.

[27] C. R. Berger, Z. Wang, J. Huang, and S. Zhou. Application of compressive sensing

to sparse channel estimation”. In SPIE Defense, Security, and Sensing. International

Society for Optics and Photonics, 48(11):164–174, 2010.

[28] J. Meng, W. Yin, Y. Li, N. T. Nguyen, and Z. Han. “Compressive sensing based high-

resolution channel estimation for OFDM system”. IEEE Journal of selected topics in

signal processing, 6(1):15–25, 2012.

[29] A. Masoum, N. Meratnia, and P. J. Havinga. “A distributed compressive sensing tech-

nique for data gathering in wireless sensor networks”. Procedia Computer Science, 21:

207–216, 2013.

[30] Z. Han, H. Li, and W. Yin. “Compressive sensing for wireless networks”. Cambridge

University Press, 2013.

[31] F. L. Lewis. “Wireless sensor networks”. Smart environments: technologies, protocols,

and applications, pages 11–46, 2004.

[32] I. F. Akyildiz and M. C. Vuran. “Wireless sensor networks”. John Wiley and & Sons, 4:

11–46, 2010.

[33] M. P. Durisic, Z. Tafa, G. Dimic, and V. Milutinovic. “A survey of military applications

of wireless sensor networks”. In IEEE Mediterranean conference on embedded computing,

MECO, pages 196–199, 2012.

157

Page 184: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[34] G. Zhao. “Wireless sensor networks for industrial process monitoring and control: A

survey”. Network Protocols and Algorithms, 3(1):46–63, 2011.

[35] M. F. Othman and K. Shazali. “Wireless sensor network applications: A study in envi-

ronment monitoring system.”. Procedia Engineering, 41:1204–1210, 2012.

[36] J. A. Stankovic, A. D. Wood, and T. He. “Realistic applications for wireless sensor

networks”. In Theoretical Aspects of Distributed Computing in Sensor Networks, Springer

Berlin Heidelberg, 41:835–863, 2011.

[37] S. Sharma, R. K. Bansal, and S. Bansal. “Issues and challenges in wireless sensor

networks”. International Conference on Machine Intelligence and Research Advance-

ment, ICMIRA, IEEE, pages 58–62, 2013.

[38] M. A. Razzaque and S. Dobson. “Energy-efficient sensing in wireless sensor networks

using compressed sensing”. Sensors, 14(2):2822–2859, 2014.

[39] J. J. Van De Beek and et al. “On channel estimation inOFDM systems”. IEEE Vehicular

Technology Conference, 2:815–819, 1995.

[40] O. Edfors and et al. “Analysis of DFT-based channel estimators for OFDM”. Personal

Wireless Commun. , Kluwer Academic Publisher, 12(1):55–70, 2000.

[41] S. Colieri, M. Ergen, A. Puri, and A. Bahai. “A study of channel estimation in OFDM

systems”. IEEE Vehicular Technology Conference. Proceedings. VTC-Fall, 2:894–898,

2002.

[42] C. Carbonelli, S. Vedantam, and U. Mitra. “Sparse channel estimation with zero tap

detection”. IEEE Transactions on Wireless Communications, 6(5):1743–1763, 2007.

[43] Y. S. Lee, H. C. Shin, and H. N. Kim. “Channel estimation based on a time-domain

threshold forOFDM systems”. IEEE Transactions on Broadcasting, 55(3):656–662, 2009.

[44] G. Dziwoki and J. Izydorczyk. “Iterative identification of sparse mobile channels for

TDS-OFDM systems”. IEEE Transactions on Broadcasting, pages 1–14, 2015.

[45] A. K. Das and S. Vishwanath. “On finite alphabet compressive sensing”. IEEE Interna-

tional Conference on Acoustics, Speech and Signal Processing, pages 5890–5894, 2013.

[46] Z. Jellali, L. N. Atallah, and S. Cherif. “Generalized targets detection and counting

in dense wireless sensors networks”. 23rd European In Signal Processing Conference,

EUSIPCO, IEEE, pages 1187–1191, 2015.

158

Page 185: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography BIBLIOGRAPHY

[47] J. A. Tropp and Gilbert A. C. “Signal recovery from randommeasurements via orthogonal

matching pursuit”. IEEE Transactions on information theory, 53(12):4655–4666, 2007.

[48] Z. Jellali, L. N. Atallah, and S. Cherif. “Greedy orthogonal matching pursuit for sparse

target detection and counting in WSN”. 22rd European In Signal Processing Conference,

EUSIPCO, IEEE, pages 2250–2254, 2014.

[49] Z. Jellali, L. N. Atallah, and S. Cherif. “A study of deterministic sensors placement for

sparse events detection in wsn based on compressed sensing”. International Conference

on Communications and Networking, ComNet, IEEE, pages 1–5, 2014.

[50] Z. Jellali, L. N. Atallah, and S. Cherif. “Improving rare events detection in WSN through

clusterbased power control mechanism”. International Journal of Distributed Sensor

Networks, IJDSN, page 23, 2016.

[51] K. D. Colling and P. Ciorciari. “Ultra wideband communications for sensor networks”.

In MILCOM IEEE Military Communications Conference, pages 2384–2390, 2005.

[52] X. Zhu, Y. Li, X. Liu, T. Zou, and B. Chen. “Impulse radio UWB signal detection based

on compressed sensing”. Communications and Network, 5(03):98, 2013.

[53] Z. Jellali and L. N. Atallah. ‘Tree-based optimized forward scheme for pilot placement in

OFDM sparse channel estimation”. International Wireless Communications and Mobile

Computing Conference, IWCMC, IEEE, pages 925–929, 2015.

[54] Z. Jellali, L. N. Atallah, and S. Cherif. “Linear prediction for data compression and recov-

ery enhancement in wireless sensors networks”. International Wireless Communications

and Mobile Computing, IWCMC, IEEE, pages 779–783, 2016.

[55] Z. Jellali, L. N. Atallah, and S. Cherif. “Data acquisition by 2D compression and 1D

reconstruction for WSN spatially correlated data”. International Symposium on Signal,

Image, Video and Communications, ISIVC, IEEE, 2016.

[56] B. Zhang, X. Cheng, N. Zhang, Y. Cui, Y. Li, and Q. Liang. “Sparse target counting and

localization in sensor networks based on compressive sensing”. INFOCOM, Proceedings

IEEE, pages 2255–2263, 2011.

[57] E. J. Candes. “Compressive sampling”. In Proceedings of the international congress of

mathematicians, 3:1433–1452, 2006.

159

Page 186: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[58] S. Qaisar, R. M. Bilal, W. Iqbal, M. Naureen, and S. Lee. “Compressive sensing: From

theory to applications, a survey”. Journal of Communications and Networks, 5(15):

443–456, 2013.

[59] M. F. Duarte and Y. C. Eldar. “Structured compressed sensing: From theory to

applications”. IEEE Transactions on Signal Processing, 9(59):4053–4085, 2011.

[60] H. Nyquist. “Certain topics in telegraph transmission theory”. Trans. AIEE, 47:617–644,

1928.

[61] C. E. Shannon. “Communications in the presence of noise”. Proc. IRE, 37:10–21, 1949.

[62] K. Cabeen and P. Gent. “Image compression and the discrete cosine transform”. College

of the Redwoods, 1998.

[63] S. Mallat. “A wavelet tour of signal processing”. Academic Press, pages 4692–4702, 1999.

[64] G. Rajesh, A. Kumar, and K. Ranjeet. “Speech compression using different transform

techniques”. IEEE Conference on In Computer and Communication Technology, pages

146–151, 2011.

[65] E. Candes and J. Romberg. “Sparsity and incoherence in compressive sampling”. Inverse

problems, 23(3):969, 2007.

[66] M. A. Davenport, M. F. Duarte, Y. C. Eldar, and G. Kutyniok. “Introduction to com-

pressed sensing”. Preprint, 93(1):2, 2011.

[67] B. K. Natarajan. “Sparse approximate solutions to linear systems”. SIAM journal on

computing, 24(2):227–234, 1995.

[68] S. Boyd and L. Vandenberghe. “Convex optimization”. Cambridge university press, 2004.

[69] M. Rudelson and R. Vershynin. “Sparse reconstruction by convex relaxation: Fourier

and gaussian measurements”. IEEE Annual Conference on In Information Sciences and

Systems, pages 207–212, 2006.

[70] D. L. Donoho and M. Elad. “Optimally sparse representation in general (nonorthogonal)

dictionaries via l1 minimization”. Proc. Nat. Acad. Sci., 100(5):21972202, 2003.

[71] Y. Zhang. “Theory of Compressive Sensing via l1-minimization: a non-RIP analysis and

extension”. Journal of the Operations Research Society of China, 1(1):79–105, 2013.

160

Page 187: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography BIBLIOGRAPHY

[72] E. Candes and T. Tao. “Decoding by linear programming”. IEEE Trans. Inform. Theory,

51(12):42034215, 2005.

[73] M. Rudelson and R. Vershynin. “On sparse reconstruction from Fourier and gaussian

measurements”. Communications on Pure and Applied Mathematics, 61(9):1025–1045,

2008.

[74] R. DeVore R. G. Baraniuk, M. Davenport and M. Wakin. “A simple proof of the restricted

isometry property for random matrices”. Constructive Approximation, 28(3):253263,

2008.

[75] H. Huang and A. Makur. “Backtracking-based matching pursuit method for sparse signal

reconstruction”. IEEE Signal Processing Letters, 18(7):391–394, 2011.

[76] J. A. Tropp. “Greed is good: Algorithmic results for sparse approximation”. IEEE

Transactions on Information theory, 50(10):2231–2242, 2004.

[77] S. G. Mallat and Z. Zahng. “Matching pursuits with time-frequency dictionaries”. IEEE

Transactions on signal processing, 41(12):3397–3415, 1993.

[78] T. T. Cai and L. Wang. “Orthogonal matching pursuit for sparse signal recovery with

noise”. IEEE Transactions on Information Theory, 57(7):4680–4688, 2011.

[79] J.A. Tropp and A. C. Gilbert. “Signal recovery from random measurements via orthogo-

nal matching pursuit”. IEEE Transactions on Information Theory, 53:4655–4666, 2007.

[80] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck. “Sparse solution of underdeter-

mined linear equations of stagewise orthogonal matching pursuit”. IEEE Transactions

on Information Theory, 58(2):1094–1121, 2012.

[81] W. Dai and O. Milenkovic. “Subspace pursuit for compressive sensing signal

reconstruction”. IEEE Transactions on Information Theory, 55:2230–2249, 2009.

[82] D. Needell and J. A. Tropp. “Cosamp: Iterative signal recovery from incomplete and

inaccurate samples”. Applied and Computational Harmonic Analysis, 26:301321, 2009.

[83] D. Needell and J. A. Tropp. “Group sparse reconstruction for image segmentation”.

Neurocomputing, 136:4148, 2014.

[84] S. Shekhar, V. M. Patel, N. M. Nasrabadi, and R. Chellappa. “Joint sparse representation

for robust multimodal biometrics recognition”. IEEE Transactions on Pattern Analysis

and Machine Intelligence, 36(1):113–126, 2014.

161

Page 188: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[85] H. Lee, S. K. Jo, N. Lee, and H. W. Lee. “A method for co-existing heterogeneous

IoT environments based on compressive sensing”. IEEE International Conference on

Advanced Communication Technology, pages 206–209, 2016.

[86] P. Charalampidis, A. G. Fragkiadakis, and E. Z. Tragos. “Rate-adaptive compressive

sensing for IoT applications”. In Vehicular Technology Conference (VTC Spring), pages

1–5, 2015.

[87] L. Mainetti, L. Patrono, and A. Vilei. “Evolution of wireless sensor networks towards

the internet of things: A survey”. International Conference on In Software, Telecommu-

nications and Computer Networks, pages 1–6, 2011.

[88] N. Khalil, M. R. Abid, D. Benhaddou, and M. Gerndt. “Wireless sensors networks for

internet of things”. IEEE International Conference on In Intelligent Sensors, Sensor

Networks and Information Processing, pages 1–6, 2014.

[89] S. Li, L. Da Xu, and X. Wang. “Compressed sensing signal and data acquisition in wireless

sensor networks and internet of things”. IEEE Transactions on Industrial Informatics, 9

(4):2177–2186, 2013.

[90] S. Gargi, V. Mayank, and M. Neha. “Analysis of transmission technologies in wireless

sensor networks”. International Journal of Engineering Research and Technology, 3:

2440–2444, 2014.

[91] Z. Zou, C. Hu, F. Zhang, H. Zhao, and S. Shen. “WSNs data acquisition by combining

hierarchical routing method and compressive sensing”. Sensors, 14(9):16766–16784, 2014.

[92] J. Meang, H. Li, and Z. Han. “Sparse event detection in wireless sensor networks using

compressive sensing”. IEEE Conference on Information Sciences and Systems, pages

181–185, 2009.

[93] M. Ding and X. Cheng. “Fault tolerant target tracking in sensor networks”. In Pro-

ceedings of the tenth ACM international symposium on Mobile ad hoc networking and

computing, pages 125–134, 2009.

[94] H. W. Tsai, C. P. Chu, and T. S. Chen. “Mobile object tracking in wireless sensor

networks”. Computer communications, 30(8):1811–1825, 2007.

162

Page 189: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography BIBLIOGRAPHY

[95] K. T. Herring, J. W. Holloway, D. H. Staelin, and D. W. Bliss. “Path-loss characteristics

of urban wireless channels”. IEEE Transactions on Antennas and Propagation, 58(1):

171–177, 2010.

[96] Y. Liu, X. Zhu, C. Ma, and L. Zhang. “Multiple event detection in wireless sensor

networks using compressed sensing”. International Conference on Telecommunications,

IEEE, pages 27–32, 2011.

[97] J. Yang and B. Geller. “Near optimum low complexity smoothing loops for dynamical

phase estimation application to BPSK modulated signals”. IEEE Transactions on Signal

Processing, 57(9):3704–3711, 2009.

[98] C. Vanstraceele, B. Geller, J. M. Brossier, and J. P. Barbot. “A low complexity block

turbo decoder architecture”. IEEE Transactions on Communications, 56(12):1985–1987,

2008.

[99] X. Wang, M. Fu, and H. Zhang. “Target tracking in wireless sensor networks based on

the combination of KF and MLE using distance measurements”. IEEE Transactions on

Mobile Computing, 11(4):567–576, 2012.

[100] Y. Huang and J. L. Marques. “Robots for environmental monitoring: Significant ad-

vancements and applications”. IEEE Robotics & Automation Magazine, 19(1):24–39,

2012.

[101] Y. Huang, J. L. Beck, S. Wu, and H. Li. “Robust bayesian compressive sensing for signals

in structural health monitoring”. Computer-Aided Civil and Infrastructure Engineering,

29(3):160–179, 2014.

[102] D. Brunelli and C. Caione. “Sparse recovery optimization in wireless sensor networks

with a sub-Nyquist sampling rate”. Sensors, 15(7):16654–16673, 2015.

[103] A. F. Aderohunmu, D. Brunelli, D. J. Deng, and M. K. Purvis. “A data acquisition

protocol for a reactive wireless sensor network monitoring application”. Sensors, 15(5):

10221–10254, 2015.

[104] J. Wang, S. Kwon, and B. Shim. “Generalized orthogonal matching pursuit”. IEEE

Transactions on signal processing, 60(12):6202–6216, 2012.

[105] C. Qi and L. Wu. “Tree-based backward pilot generation for sparse channel estimation”.

Electronics letters, 48(9):501–503, 2012.

163

Page 190: Sparse Regularization Approach ... - supcom.mincom.tn

Bibliography

[106] J. Haupt, W. U. Bajwa, M. Rabat, and R. Nowak. “Compressed sensing for networked

data”. IEEE Signal Processing Magazine, 25(2):92–101, 2008.

[107] X. Wang, Z. Zhao, N. Zhao, and H. Zhang. “On the application of compressed sens-

ing in communication networks”. International conference on In Communications and

Networking in China, pages 1–7, 2010.

[108] D. Guo, X. Qu, L. Huang, and Y. Yao. “Optimized local superposition in wireless sensor

networks with T-average-mutual-coherence”. Progress in Electromagnetics Research, 122:

389–411, 2012.

[109] S. Lin, J. Liu, and Y. Fang. “Zigbee based wireless sensor networks and its applications in

industrial”. IEEE international conference on automation and logistics, pages 1979–1983,

2007.

[110] N. Patel, H. Kathiriya, and A. Bavarva. “Wireless sensor network using Zigbee”. Inter-

national Journal of Research in Engineering Technology, 2013.

[111] L. Li, H. Xiaoguang, C. Ke, and H. Ketai. “The applications of wifi-based wireless

sensor network in internet of things and smart grid”. IEEE Conference on Industrial

Electronics and Applications, 789-793, 2011.

[112] T. S. Rappaport. “Wireless communications: principles and practice”. in Mobile Radio

Propagation: Large-Scale Path Loss, 2/E, chapter 4, Prentice Hall, 2002.

[113] K. Ayub and V. Zagurskis. “Technology implications of UWB on wireless sensor network-

a detailed survey”. International Journal of Communication Networks and Information

Security, 7(3):147, 2015.

[114] J. Reed. “An introduction to ultra wideband communication systems”. Prentice Hall Press, 2005.

[115] J. Zhang, P. V. Orlik, Z. Sahinoglu, A. F. Molisch, and P. Kinney. “UWB systems for wireless sensor networks”. Proceedings of the IEEE, 97(2):313–331, 2009.

[116] J. L. Paredes, G. R. Arce, and Z. Wang. “Ultra-wideband compressed sensing: channel estimation”. IEEE Journal of Selected Topics in Signal Processing, 1(3):383–395, 2007.

[117] K. Ayub and V. Zagurskis. “Technology implications of UWB on wireless sensor network – a detailed survey”. International Journal of Communication Networks and Information Security, 7(3):147, 2015.


[118] L. Shi, B. Guo, and L. Zhao. “Block-type pilot channel estimation for OFDM systems under frequency selective fading channels”. IET International Communication Conference on Wireless Mobile and Computing (CCWMC), pages 21–24, 2009.

[119] H. Minn and V. K. Bhargava. “An investigation into time-domain approach for OFDM channel estimation”. IEEE Transactions on Broadcasting, 46(4):240–248, 2000.

[120] S. Coleri, M. Ergen, A. Puri, and A. Bahai. “Channel estimation techniques based on pilot arrangement in OFDM systems”. IEEE Transactions on Broadcasting, 48(3):223–229, 2002.

[121] L. Najjar. “On optimality limits of channel-structured estimation in multicarrier systems”. IEEE Transactions on Vehicular Technology, 61(5):2382–2387, 2012.

[122] J. C. Lin. “LS channel estimation for mobile OFDM communications on time-varying frequency-selective fading channels”. IEEE International Conference on Communications, ICC’07, pages 3016–3023, 2007.

[123] M. R. Raghavendra and K. Giridhar. “Improving channel estimation in OFDM systems for sparse multipath channels”. IEEE Signal Processing Letters, 12:52–55, 2005.

[124] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak. “A compressed sensing technique for OFDM channel estimation in mobile environments: exploiting channel sparsity for reducing pilots”. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2885–2888, 2008.

[125] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak. “Compressed channel sensing: A new approach to estimating sparse multipath channels”. Proceedings of the IEEE, 98(6):1058–1076, 2010.

[126] L. Najjar. “Sparse channels structured estimation in OFDM systems”. IEEE Vehicular Technology Conference, VTC Spring, pages 1–5, 2011.

[127] Z. Jellali and L. N. Atallah. “Threshold-based channel estimation for MSE optimization in OFDM systems”. EURASIP European Signal Processing Conference (EUSIPCO), pages 1618–1622, 2012.

[128] L. Najjar. “Sparsity level-aware threshold-based channel structure detection in OFDM systems”. Electronics Letters, 48(9):495–496, 2012.

[129] C. Qi and L. Wu. “A study of deterministic pilot allocation for sparse channel estimation in OFDM systems”. IEEE Communications Letters, 16(5):742–744, 2012.


[130] S. Rosati, G. E. Corazza, and A. Vanelli-Coralli. “OFDM channel estimation with optimal threshold-based selection of CIR samples”. In Global Telecommunications Conference (GLOBECOM), IEEE, pages 1–7, 2009.

[131] L. Tong, B. M. Sadler, and M. Dong. “Pilot-assisted wireless transmissions: general model, design criteria, and signal processing”. IEEE Signal Processing Magazine, 21(6):12–25, 2004.

[132] M. Huang, X. Chen, S. Zhou, and J. Wang. “Low-complexity subspace tracking based channel estimation method for OFDM systems in time-varying channels”. IEEE International Conference on Communications, 10:4618–4623, 2006.

[133] I. E. Telatar and D. N. C. Tse. “Capacity and mutual information of wideband multipath fading channels”. IEEE Transactions on Information Theory, 46(4):1384–1400, 2000.

[134] O. Simeone, Y. Bar-Ness, and U. Spagnolini. “Pilot-based channel estimation for OFDM systems by tracking the delay-subspace”. IEEE Transactions on Wireless Communications, 3(1):315–325, 2004.

[135] Z. Jellali and L. N. Atallah. “Time varying sparse channel estimation by MSE optimization in OFDM systems”. In Vehicular Technology Conference, VTC Spring, pages 1–5, 2013.

[136] P. Pakrooh, A. Amini, and F. Marvasti. “OFDM pilot allocation for sparse channel estimation”. EURASIP Journal on Advances in Signal Processing, (1):1–9, 2012.

[137] C. Qi and L. Wu. “A study of deterministic pilot allocation for sparse channel estimation in OFDM systems”. IEEE Communications Letters, 16(5):742–744, 2012.

[138] C. Qi and L. Wu. “Tree-based backward pilot generation for sparse channel estimation”. Electronics Letters, 48(9):501–503, 2012.

[139] Z. Liu, Z. Li, M. Li, W. Xing, and D. Lu. “Path reconstruction in dynamic wireless sensor networks using compressive sensing”. ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 297–306, 2014.

[140] S. Y. Fu, X. K. Kuai, R. Zheng, G. S. Yang, and Z. G. Hou. “Compressive sensing approach based mapping and localization for mobile robot in an indoor wireless sensor network”. International Conference on Networking, Sensing and Control, pages 122–127, 2010.

[141] J. Meng, H. Li, and Z. Han. “Sparse event detection in wireless sensor networks using compressive sensing”. Annual Conference on Information Sciences and Systems (CISS), pages 181–185, 2009.

[142] G. Barrenetxea, F. Ingelrest, G. Schaefer, and M. Vetterli. “Wireless sensor networks for environmental monitoring: the sensorscope experience”. International Zurich Seminar on Communications, IEEE, pages 98–101, 2008.

[143] S. Sharma, R. K. Bansal, and S. Bansal. “Issues and challenges in wireless sensor networks”. International Conference on Machine Intelligence and Research Advancement (ICMIRA), IEEE, pages 58–62, 2013.

[144] K. Gupta and V. Sikka. “Design issues and challenges in wireless sensor networks”. International Journal of Computer Applications, 112(4), 2015.

[145] M. A. Alsheikh, S. Lin, H. P. Tan, and D. Niyato. “Toward a robust sparse data representation for wireless sensor networks”. In Local Computer Networks (LCN), IEEE, pages 117–124, 2015.

[146] R. Banerjee, M. Mobashir, and S. D. Bit. “Partial DCT-based energy efficient compression algorithm for wireless multimedia sensor network”. IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), pages 1–6, 2014.

[147] H. Abdi and L. J. Williams. “Principal component analysis”. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4):433–459, 2010.

[148] R. Masiero, G. Quer, D. Munaretto, M. Rossi, J. Widmer, and M. Zorzi. “Data acquisition through joint compressive sensing and principal component analysis”. In Global Telecommunications Conference (GLOBECOM), IEEE, pages 1–6, 2010.

[149] J. D. Markel and A. J. Gray. “Linear prediction of speech”. Springer Science & Business Media, vol. 12, 2013.

[150] J. Bradbury. “Linear predictive coding”. McGraw-Hill, 2000.

[151] N. Li, F. W., and B. Tang. “WSN data distortion analysis and correlation model based on spatial location”. Journal of Networks, 5(12), 2010.


[152] Y. Rivenson and A. Stern. “Compressed imaging with a separable sensing operator”. IEEE Signal Processing Letters, 16(6):449–452, 2009.

[153] J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang. “Two-dimensional PCA: a new approach to appearance-based face representation and recognition”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1):131–137, 2004.

[154] A. Dwivedi, A. Tolambiya, P. Kandula, N. S. C. Bose, A. Kumar, and P. K. Kalra. “Color image compression using 2-dimensional principal component analysis (2DPCA)”. Proc. of ASID, 2:1, 2006.

[155] Z. Daoqiang and Z. Zhi-Hua. “(2D)²PCA: Two-directional two-dimensional PCA for efficient face representation and recognition”. Neurocomputing, 69(1):224–231, 2005.

[156] P. Maragos, R. Schafer, and R. Mersereau. “Two-dimensional linear prediction and its application to adaptive predictive coding of images”. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(6):1213–1229, 1984.

[157] L. R. Rabiner and R. W. Schafer. “Digital processing of speech signals”. Englewood Cliffs, NJ: Prentice-Hall, 1978.

[158] D. P. Kroese and Z. I. Botev. “Spatial process simulation”. In Stochastic Geometry, Spatial Statistics and Random Fields, Springer International Publishing, pages 369–404, 2015.
