
Purdue University
Purdue e-Pubs

Cyber Center Publications Cyber Center

5-2014

Secure kNN Query Processing in Untrusted Cloud Environments
Sunoh Choi
Purdue University, [email protected]

Follow this and additional works at: http://docs.lib.purdue.edu/ccpubs

Part of the Engineering Commons, Life Sciences Commons, Medicine and Health Sciences Commons, and the Physical Sciences and Mathematics Commons

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.

Choi, Sunoh, "Secure kNN Query Processing in Untrusted Cloud Environments" (2014). Cyber Center Publications. Paper 625. http://dx.doi.org/10.1109/TKDE.2014.2302434


PURDUE UNIVERSITY GRADUATE SCHOOL

Thesis/Dissertation Acceptance


Sunoh Choi

Secure Query Processing in Untrusted Cloud Environments

Doctor of Philosophy

ELISA BERTINO, Co-Chair

ARIF GHAFOOR, Co-Chair

SAURABH BAGCHI

NINGHUI LI

ELISA BERTINO, Co-Chair

M. R. Melloch 04/28/2014


SECURE QUERY PROCESSING

IN UNTRUSTED CLOUD ENVIRONMENTS

A Dissertation

Submitted to the Faculty

of

Purdue University

by

Sunoh Choi

In Partial Fulfillment of the

Requirements for the Degree

of

Doctor of Philosophy

May 2014

Purdue University

West Lafayette, Indiana


UMI Number: 3635814

All rights reserved

INFORMATION TO ALL USERS
The quality of this reproduction is dependent upon the quality of the copy submitted.

In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion.

Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author.

Microform Edition © ProQuest LLC. All rights reserved. This work is protected against unauthorized copying under Title 17, United States Code.

ProQuest LLC
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106 - 1346


To my parents, wife, and daughter


ACKNOWLEDGMENTS

First of all, I would like to express my deepest gratitude to my advisor, Prof. Elisa Bertino. Without her support and encouragement, I could not have completed my thesis. I would also like to thank Prof. Arif Ghafoor, Prof. Saurabh Bagchi, and Prof. Ninghui Li for providing their invaluable comments as members of my committee.

I am also grateful to the colleagues I worked with during my PhD: Dr. Gabriel Ghinita from the University of Massachusetts and Dr. Hyo-Sang Lim from Yonsei University.

Finally, I would like to express my gratitude to my parents, Woonggil and Jongnam, my wife Dowon, and my daughter Semin for their unconditional love and support. I am very grateful to my God for being with me during this time.


TABLE OF CONTENTS

Page

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 SECURE kNN QUERY PROCESSING . . . . . . . . . . . . . . . . . . . 5

2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.1.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.1.2 Privacy Model . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1.3 Secure Range Query Processing Method . . . . . . . . . . . 10

2.2 One Nearest Neighbor (1NN) . . . . . . . . . . . . . . . . . . . . . 13

2.2.1 Voronoi Diagram-based 1NN (VD-1NN) . . . . . . . . . . . 13

2.2.2 Secure Voronoi Cell Enclosure Evaluation . . . . . . . . . . . 13

2.2.3 Performance Analysis . . . . . . . . . . . . . . . . . . . . . . 17

2.3 k Nearest Neighbor (kNN) . . . . . . . . . . . . . . . . . . . . . . . 19

2.3.1 Secure Distance Comparison Method (SDCM) . . . . . . . . 19

2.3.2 Basic k Nearest Neighbor (B-kNN) . . . . . . . . . . . . . . 20

2.3.3 Triangulation-based kNN (TkNN) . . . . . . . . . . . . . . . 23

2.4 Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.4.1 Hybrid Query Processing using Kd-trees . . . . . . . . . . . 26

2.4.2 Parallel Processing . . . . . . . . . . . . . . . . . . . . . . . 28

2.5 Incremental Updates . . . . . . . . . . . . . . . . . . . . . . . . . . 28

2.6 Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . 31

2.6.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . 31

2.6.2 1NN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


2.6.3 kNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

2.6.4 Data Encryption Time at the Data Owner . . . . . . . . . . 36

2.6.5 Precision of TkNN . . . . . . . . . . . . . . . . . . . . . . . 37

2.6.6 Incremental Update Time . . . . . . . . . . . . . . . . . . . 38

3 SECURE PROXIMITY DETECTION . . . . . . . . . . . . . . . . . . . 40

3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.1.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.1.2 Paillier Cryptosystem . . . . . . . . . . . . . . . . . . . . . . 44

3.1.3 GT Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.2 Secure Proximity Detection . . . . . . . . . . . . . . . . . . . . . . 47

3.2.1 Secure Point Evaluation Method (SPEM) . . . . . . . . . . . 48

3.2.2 Secure Point Evaluation Method for Two Lines (t-SPEM) . 51

3.2.3 Secure Line Evaluation Method (SLEM) . . . . . . . . . . . 52

3.2.4 Secure Line Evaluation Method for Two Lines (t-SLEM) . . 54

3.2.5 MBR Filtering . . . . . . . . . . . . . . . . . . . . . . . . . 55

3.2.6 Finding Two Lines for t-SPEM . . . . . . . . . . . . . . . . 58

3.2.7 Finding Two Lines for t-SLEM . . . . . . . . . . . . . . . . 61

3.2.8 Complete Protocol . . . . . . . . . . . . . . . . . . . . . . . 62

3.3 Security Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.4 Performance Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 66

3.4.1 Computation Time . . . . . . . . . . . . . . . . . . . . . . . 66

3.4.2 Communication Bandwidth . . . . . . . . . . . . . . . . . . 68

3.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4 AUTHENTICATED TOP-K AGGREGATION . . . . . . . . . . . . . . 71

4.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.1.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.1.2 Attack Model . . . . . . . . . . . . . . . . . . . . . . . . . . 76

4.1.3 Three Phase Uniform Threshold Algorithm . . . . . . . . . . 77


4.2 Authenticated Top-K Aggregation . . . . . . . . . . . . . . . . . . . 78

4.2.1 Authenticated TPUT . . . . . . . . . . . . . . . . . . . . . . 79

4.2.2 Signature-based TPUT . . . . . . . . . . . . . . . . . . . . . 82

4.3 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

4.4.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

4.4.2 Communication Cost of S-TPUT . . . . . . . . . . . . . . . 92

4.4.3 Communication Cost of I-TPUT . . . . . . . . . . . . . . . 96

4.4.4 Comparing S-TPUT with TNRA . . . . . . . . . . . . . . . 97

5 SECURE PROXIMITY-BASED ACCESS CONTROL . . . . . . . . . . 100

5.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.1.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.1.2 Attack and Failure Model . . . . . . . . . . . . . . . . . . . 104

5.1.3 Protocol Overview . . . . . . . . . . . . . . . . . . . . . . . 105

5.1.4 Bilinear Mapping . . . . . . . . . . . . . . . . . . . . . . . . 106

5.2 Secure Proximity-based Access Control . . . . . . . . . . . . . . . . 107

5.2.1 Simple Proximity-based Access Control Method . . . . . . . 107

5.2.2 Aggregate Signature using Bilinear Mapping . . . . . . . . . 107

5.2.3 Preventing Attacks . . . . . . . . . . . . . . . . . . . . . . . 109

5.2.4 Secure and Resilient Proximity-based Access Control . . . . 113

5.2.5 Covering multiple areas using SPAC . . . . . . . . . . . . . 116

5.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5.3.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5.3.2 Computational Cost . . . . . . . . . . . . . . . . . . . . . . . 120

5.3.3 Communication Cost . . . . . . . . . . . . . . . . . . . . . . 122

6 SECURE SENSOR NETWORK SUM AGGREGATION WITH DETECTION OF MALICIOUS NODES . . . . . . . . . . . . . . . . . . . . . . . . . . 124

6.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126


6.1.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6.1.2 Attack Model . . . . . . . . . . . . . . . . . . . . . . . . . . 127

6.1.3 Additively Homomorphic Symmetric Encryption . . . . . . . 128

6.2 Proposed Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

6.2.1 Bitmap Dissemination Method . . . . . . . . . . . . . . . . . 130

6.2.2 Flexible Aggregation Structure . . . . . . . . . . . . . . . . 132

6.2.3 Advanced Ring Structure . . . . . . . . . . . . . . . . . . . . 134

6.2.4 Flexible Secure Aggregation . . . . . . . . . . . . . . . . . . 136

6.2.5 DAC Algorithm for Finding Malicious Nodes . . . . . . . . . 137

6.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

6.3.1 Security Analysis . . . . . . . . . . . . . . . . . . . . . . . . 139

6.3.2 Reliability of Multipath Routing . . . . . . . . . . . . . . . . 139

6.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 140

6.4.1 Flexible Secure Aggregation . . . . . . . . . . . . . . . . . . 140

6.4.2 Advanced Ring Structure . . . . . . . . . . . . . . . . . . . . 141

6.4.3 Divide and Conquer Algorithm Evaluation . . . . . . . . . . 142

7 SUMMARY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

LIST OF REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

VITA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


LIST OF TABLES

Table Page

2.1 VD-1NN protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.2 BkNN protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.3 Performance Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4.1 An Example Data Set with Three Lists . . . . . . . . . . . . . . . . . . 78

4.2 A-TPUT algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.3 S-TPUT algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

4.4 I-TPUT algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4.5 Experimental Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 91


LIST OF FIGURES

Figure Page

2.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2 MOPE Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.3 MOPE Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.4 Voronoi Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.5 Secure Voronoi Cell Enclosure Evaluation . . . . . . . . . . . . . . . . 16

2.6 VD-1NN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.7 Secure Distance Comparison Method . . . . . . . . . . . . . . . . . . . 19

2.8 BkNN using Query Square . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.9 BkNN Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.10 Triangulation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.11 TkNN Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.12 TkNN Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.13 Partitioning data points using a kd-tree . . . . . . . . . . . . . . . . . 27

2.14 Change of the topological structure . . . . . . . . . . . . . . . . . . . . 29

2.15 1NN Response Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.16 1NN Communication Cost . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.17 1NN Response Time Breakdown . . . . . . . . . . . . . . . . . . . . . . 34

2.18 kNN Response Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.19 kNN Communication Cost . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.20 kNN Response Time Breakdown . . . . . . . . . . . . . . . . . . . . . . 36

2.21 Data Encryption Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.22 TkNN Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.23 Average Incremental Update Time per moving data point (1 million points dataset) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


3.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3.2 Proximity Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3.3 Polygon Enclosure Evaluation Example . . . . . . . . . . . . . . . . . . 47

3.4 SPEM Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.5 t-SPEM Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.6 SLEM Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.7 t-SLEM Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

3.8 MBR Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3.9 MBR Filtering Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.10 kd tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.11 A node in kd*-tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.12 Finding Two Lines for t-SPEM . . . . . . . . . . . . . . . . . . . . . . 60

3.13 Finding Two Lines for t-SLEM . . . . . . . . . . . . . . . . . . . . . . 61

3.14 Additive Blinding: c = v + k . . . . . . . . . . . . . . . . . . . . . . . . 63

3.15 Perturbation in kd*-tree . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.16 CPU Time with MOPE . . . . . . . . . . . . . . . . . . . . . . . . . 69

3.17 CPU Time with k-d*-tree filtering . . . . . . . . . . . . . . . . . . . 69

3.18 Communication Bandwidth with mOPE . . . . . . . . . . . . . . . . . 70

3.19 Communication Bandwidth with k-d*-tree filtering . . . . . . . . . . . 70

4.1 System Model for Top-k Aggregation . . . . . . . . . . . . . . . . . . . 74

4.2 Skewed Merkle Hash Tree . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.3 Performance Comparison of A-TPUT and I-TPUT . . . . . . . . . . . 90

4.4 Communication Cost by m in Zipf law . . . . . . . . . . . . . . . . . . 93

4.5 Communication Cost by n in Zipf law . . . . . . . . . . . . . . . . . . . 94

4.6 Communication Cost by k in Zipf law . . . . . . . . . . . . . . . . . . . 94

4.7 Communication Cost by α in Zipf law . . . . . . . . . . . . . . . . . . 94

4.8 Communication Cost by m in Uniform distribution . . . . . . . . . . . 95

4.9 Communication Cost by n in Uniform distribution . . . . . . . . . . . . 95


4.10 Communication Cost by k in Uniform distribution . . . . . . . . . . . . 95

4.11 Communication Cost by α in Uniform distribution . . . . . . . . . . . 96

4.12 Response Time by m in TNRA and S-TPUT . . . . . . . . . . . . . . . 98

4.13 Response Time by n in TNRA and S-TPUT . . . . . . . . . . . . . . . 98

4.14 Response Time by k in TNRA and S-TPUT . . . . . . . . . . . . . . . 98

4.15 Response Time by α in TNRA and S-TPUT . . . . . . . . . . . . . . . 99

5.1 Proximity-based Access Control . . . . . . . . . . . . . . . . . . . . . . 101

5.2 Simple Proximity-based Access Control . . . . . . . . . . . . . . . . . . 108

5.3 Aggregate Signature using Bilinear Mapping . . . . . . . . . . . . . . . 108

5.4 Collusion Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

5.5 Bluesniping Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

5.6 Covering multiple areas (no fault tolerance) . . . . . . . . . . . . . . . 117

5.7 Covering multiple areas (fault tolerance(3,5)) . . . . . . . . . . . . . . 117

5.8 SPAC protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5.9 Computation Time at LBS on Desktop . . . . . . . . . . . . . . . . . . 120

5.10 Computation Time at Reader on Desktop . . . . . . . . . . . . . . . . 121

5.11 Computation Time at Medical Device on Desktop . . . . . . . . . . . . 121

5.12 Computation Time at LBS on Android . . . . . . . . . . . . . . . . . . 122

5.13 Computation Time at Reader on Android . . . . . . . . . . . . . . . . 122

5.14 Computation Time at Medical Device on Android . . . . . . . . . . . . 123

5.15 Communication Overhead . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.1 Proposed Scheme Overview . . . . . . . . . . . . . . . . . . . . . . . . 130

6.2 Flexible Aggregation Structure . . . . . . . . . . . . . . . . . . . . . . 132

6.3 Advanced Ring Structure . . . . . . . . . . . . . . . . . . . . . . . . . 135

6.4 Communication Overhead in Aggregation . . . . . . . . . . . . . . . . 141

6.5 Number of Successful Transmission . . . . . . . . . . . . . . . . . . . . 143

6.6 Communication Overhead in ARS . . . . . . . . . . . . . . . . . . . . . 143

6.7 Performance of DAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144


ABSTRACT

Choi, Sunoh Ph.D., Purdue University, May 2014. Secure Query Processing in Untrusted Cloud Environments. Major Professor: Elisa Bertino.

Nowadays, data are stored at a third party in cloud environments, and query processing is also performed by the third party in order to reduce the expense of maintaining the system. Although there are many advantages to using independent third parties for query processing, security problems become more crucial, since we cannot completely trust third parties, which can easily be corrupted or malfunction. The security problems with untrusted third parties are multifaceted, spanning areas such as privacy, authentication, and recovery. For privacy, the third party should not be able to learn what the user's query is, since the query itself describes the user's interests. For authentication, the user should be able to verify that the information from the third party has not been tampered with, since the correctness of the query results depends upon the correctness of that information. For recovery, when a result is found to be forged by an adversary, we should be able to identify the adversary and obtain a correct result by removing it.

To address these challenges, we propose several schemes. First, with respect to secure kNN query processing and secure proximity detection, we give novel schemes based on Mutable Order Preserving Encryption (MOPE) and the Secure Point Evaluation Method (SPEM). Second, for authenticated top-k aggregation, we suggest novel schemes using the Three Phase Uniform Threshold Algorithm, the Merkle Hash Tree, and Condensed-RSA. Third, for detecting malicious nodes, we propose novel algorithms based on Additively Homomorphic Encryption and Multipath Transmission. Our experimental evaluations and security analyses demonstrate that robust mechanisms can be deployed with a minimal amount of computational and communication expense.


1. INTRODUCTION

In recent years, database outsourcing has gained tremendous popularity. In order to reduce operation and maintenance costs, an existing content distribution network such as Akamai or an ad-hoc P2P/Grid computing environment may be used for database outsourcing. Database outsourcing involves three types of entities: data owners, service providers, and users. A data owner outsources its database functionality to one or more third parties, called service providers (e.g., a cloud computing service such as Akamai), which have the computational power to support various kinds of query processing. Users issue their queries to the service providers.

Database outsourcing has several advantages: 1) As the data owners store their data at the service providers, they do not need to have their own facilities to store and process the data. 2) Using third-party service providers is a cheaper way to achieve scalability than fortifying the owner's data center and providing more network bandwidth for every user. 3) The database outsourcing model removes the single point of failure in the owner's data center, hence reducing the database's susceptibility to denial-of-service attacks and improving service availability. 4) Users can get query results from a service provider that is close in terms of network latency, without needing to contact the data owners directly.

However, even though database outsourcing has several advantages, it poses several security challenges, because we cannot completely trust the third-party service providers, which can be corrupted by adversaries. The first challenge is privacy. For instance, in an application to find nearby friends, the server stores the locations of the friends. If the location database is outsourced but not properly protected, unauthorized users may gain access to the data, causing privacy breaches for the data owners. In addition, not only the data stored at the service provider but also the query issued to the service provider is sensitive information that should be protected, since the service provider can learn the location of the users. In other application areas, user queries may disclose private details about users such as shopping habits, political or religious affiliation, etc.

The second challenge is authentication. In outsourced databases, the data owners delegate their database functionality, such as range, kNN, proximity, top-k, and SUM queries, to the service providers. If the service providers are compromised, they could return tampered results to the users. Authenticated query processing techniques guarantee the authenticity and completeness of query results in outsourced systems. Authenticity ensures that all the results returned to users originate from the data owners and that no spurious results are introduced. Completeness guarantees that all the results which satisfy the query are present in the result set.

On the other hand, authentication can also be used for location-based access control. Location-based access control grants access to important information only when a user is in a designated restricted area. In order to determine whether the user is in the restricted area, we can require the user to receive partial keys from several Location-based Service (LBS) devices, which is possible only when the user is in the area. Then, when the keys are authenticated, the user is granted access to the information.

The third challenge is recovery. Several protocols acknowledge the above authentication issue and provide authentication in the presence of malicious service providers. All these protocols deal with stealthy attacks, in which the malicious service providers try to modify the result without being detected. Such techniques can verify whether the result is correct, and in case they detect that the result has been tampered with, they raise an alarm. However, they cannot pinpoint the source of the attack. Hence, they cannot identify and remove the malicious service providers, leaving the network vulnerable to denial-of-service attacks. So, when the results are not correct, we need to detect the malicious service providers and produce the correct results in the next round by excluding them.

Next, we explain our contributions to address these challenges in detail.


Secure kNN Query Processing [1]. In order to achieve location privacy, we propose a novel approach to secure kNN query processing. Our methods support efficient and precise evaluation of conditions based on the ciphertexts of data and queries. Our solution relies on Mutable Order Preserving Encryption (MOPE) [6], a transformation that supports comparison between pairs of data items. MOPE was originally proposed for the evaluation of numerical comparisons; we adapt it to support a broad range of condition evaluations, such as polygon enclosure. We propose a novel secure kNN query processing method. Our solution has a reduced computational overhead and does not incur false positives.

Secure Proximity Detection [2]. In addition, when there are two users, we can determine whether the two users are nearby by evaluating polygon enclosure securely. However, since our system is asynchronous, we need another scheme to evaluate polygon enclosure when the data owner has a data point and the client has a polygon. Therefore, we propose the novel schemes t-SPEM and t-SLEM. Furthermore, when the data owner has many polygons, we propose a novel scheme that filters out polygons which are far away from the client by using a kd-tree [22] and perturbation.

Authenticated Top-K Aggregation [3]. For authentication, we investigate algorithms that authenticate top-k aggregation results. We address not only authentication but also efficiency. Our solution is based on a well-known top-k aggregation algorithm, the Three Phase Uniform Threshold (TPUT) algorithm [33]. We first develop an authenticated top-k aggregation algorithm based on TPUT, which we call A-TPUT. The main strengths of A-TPUT are that 1) it provides an authentication capability which is not supported in the original TPUT algorithm, and 2) it requires only a fixed number of communication rounds regardless of the amount of data.

To develop an authenticated version of TPUT, we carefully integrate two authentication techniques. The first technique is the Merkle Hash Tree (MHT) [34], a tree-based data structure for detecting tampering over a series of values. The second technique is the Condensed-RSA algorithm [35]. Condensed-RSA is a digital signature technique which is suitable for combining signatures generated by a single signer into a single condensed signature. We use this signature scheme to reduce the communication cost between trusted parties and untrusted parties.

Secure Proximity-based Access Control [4]. To determine whether the client is in a restricted area, we propose a secure proximity-based access control scheme. We deploy several LBS devices that give partial keys to the client within a restricted area. When the client receives the partial keys, it aggregates them into a single key using Condensed-RSA [35] in order to reduce energy consumption. Then, the target device verifies the aggregated key. In addition, when some LBS devices are not working, we can provide resilience to the client by using a threshold algorithm [42].

Secure SUM Aggregation with Detection of Malicious Nodes [5]. For recovery, we introduce a novel secure aggregation protocol for SUM which not only detects forged results but also localizes malicious nodes and removes them from the aggregation process. The proposed protocol is efficient, as it uses only symmetric key cryptography. Furthermore, in order to check the integrity of SUM aggregation, our technique uses SIES [68] as a building block, which incurs significantly lower communication overhead than SHIA [69].

In order to find malicious nodes that tamper with in-network aggregation, the base station must be able to check whether the partial sums are correct at the aggregators as well as at the base station. In addition, attackers can drop messages from the base station to prevent it from communicating with other nodes. Therefore, we devise a reliable and efficient communication method which does not use flooding.

The rest of this document is organized as follows. In Sections 2 and 3 we introduce the secure kNN query processing method and the secure proximity detection method, respectively. Sections 4 and 5 provide the authenticated top-k aggregation scheme and the secure proximity-based access control scheme, respectively, and Section 6 gives our secure SUM aggregation protocol with detection of malicious nodes. Finally, we conclude with directions for future research in Section 7.


2. SECURE KNN QUERY PROCESSING

The emergence of mobile devices with fast Internet connectivity and geo-positioning capabilities has led to a revolution in customized location-based services (LBS), where users are enabled to access information about points of interest (POI) that are relevant to their interests and are also close to their geographical coordinates. Probably the most important type of query that involves location attributes is the nearest-neighbor (NN) query, where a user wants to retrieve the k POIs (e.g., restaurants, museums, gas stations) that are nearest to the user's current location (kNN).

A vast amount of research has focused on performing such queries efficiently, typically using some sort of spatial indexing to reduce the computational overhead [22]. The issue of privacy for users' locations has also gained significant attention in the past. Note that, in order for the NNs to be determined, users need to send their coordinates to the LBS. However, users may be reluctant to disclose their coordinates if the LBS may collect user location traces and use them for other purposes, such as profiling, unsolicited advertisements, etc. To address user privacy needs, several protocols have been proposed that withhold, either partially or completely, the users' location information from the LBS. For instance, the work in [14-17] replaces locations with larger cloaking regions that are meant to prevent disclosure of exact user whereabouts. Nevertheless, the LBS can still derive sensitive information from the cloaked regions, so another line of research that uses cryptographic-strength protection was started in [26] and continued in [18, 27]. The main idea is to extend existing Private Information Retrieval (PIR) protocols for binary sets to the spatial domain, and to allow the LBS to return the NN to users without learning any information about the users' locations. This method serves its purpose well, but it assumes that the actual data points (i.e., the points of interest) are available in plaintext to the LBS. This model is only suitable for general-interest applications such as Google Maps, where the landmarks on the map represent public information, but it cannot handle scenarios where the data points must be protected from the LBS itself.

More recently, a new model for data sharing has emerged, where various entities generate or collect datasets of POI that cover certain niche areas of interest, such as specific segments of arts, entertainment, travel, etc. For instance, there are social media channels that focus on specific travel habits, e.g., eco-tourism, experimental theater productions, or underground music genres. The content generated is often geo-tagged, for instance related to upcoming artistic events, shows, travel destinations, etc. However, the owners of such databases are likely to be small organizations, or even individuals, and may not have the ability to host their own query processing services. This category of data owners can benefit greatly from outsourcing their search services to a cloud service provider. In addition, such services could also be offered as plug-in components within social media engines operated by large industry players.

Due to the specificity of such data, collecting and maintaining such information is an expensive process, and furthermore, some of the data may be sensitive in nature. For instance, certain activist groups may not want to release their events to the general public, due to concerns that big corporations or oppressive governments may intervene and compromise their activities. Similarly, some groups may prefer to keep their geo-tagged datasets confidential, and only accessible to trusted subscribed users, for fear of backlash from more conservative population groups. It is therefore important to protect the data from the cloud service provider. In addition, due to financial considerations on behalf of the data owner, subscribing users will be billed for the service based on a pay-per-result model. For instance, a subscriber who asks for kNN results will pay for k items, and should not receive more than k results. Hence, approximate querying methods with low precision, such as existing techniques [28] that return many false positives in addition to the actual results, are not desirable.

Such scenarios call for a novel and challenging category of services that provide secure kNN processing in outsourced environments. Specifically, both the POI and the user locations must be protected from the cloud provider. This model has been formulated previously in the literature as "blind queries on confidential data" [16]. In this context, POIs must be encrypted by the data owner, and the cloud service provider must perform NN processing on encrypted data.

This is a very challenging task, as conventional encryption does not support processing on top of ciphertexts, whereas more recent cryptographic tools such as homomorphic encryption are not flexible enough (they support only restricted operations) and are also prohibitively expensive for practical use. To address this problem, previous work such as [19] has proposed privacy-preserving data transformations that hide the data while still allowing some geometric functions to be evaluated. However, such transformations lack the formal security guarantees of encryption. Other methods employ transformations with stronger security, used in conjunction with dataset partitioning techniques [28], but they return a large number of false positives, which is not desirable due to the financial considerations outlined earlier.

In this chapter, we propose a family of techniques that allow processing of NN queries in an untrusted outsourced environment, while at the same time protecting both the POI and the querying users' positions. Our techniques rely on mutable order-preserving encoding (mOPE) [6], which guarantees indistinguishability under ordered chosen-plaintext attack (IND-OCPA) [20, 21]. We also provide performance optimizations to decrease the computational cost inherent to processing on encrypted data, and we consider the case of incrementally updating datasets.

Inspired by previous work in [26, 27] that brought together encryption and geometric data structures that enable efficient NN query processing, we investigate the use of Voronoi diagrams and Delaunay triangulations [22] to solve the problem of secure outsourced kNN queries. We emphasize that previous work assumed that the contents of the Voronoi diagrams [26, 27] are available to the cloud provider in plaintext, whereas in our case the processing is performed entirely on ciphertexts, which is a far more challenging problem.


Our specific contributions are:

(i) We propose the VD-kNN method for secure NN queries, which works by processing encrypted Voronoi diagrams. The method returns exact results, but it is expensive for k > 1, and may impose a heavy load on the data owner.

(ii) To address the limitations of VD-kNN, we introduce TkNN, a method that works by processing encrypted Delaunay triangulations, supports any value of k, and decreases the load at the data owner. TkNN provides exact query results for k = 1, but when k > 1 the results it returns are only approximate. However, we show that in practice the accuracy is high.

(iii) We outline a mechanism for updating encrypted Voronoi diagrams and Delaunay triangulations that allows us to deal efficiently, in an incremental manner, with changing datasets.

(iv) We propose performance optimizations based on spatial indexing and parallel computation to decrease the computational overhead of the proposed techniques.

(v) Finally, we present an extensive experimental evaluation of the proposed techniques and their optimizations, which shows that the proposed methods scale well for large datasets, and clearly outperform competitors.

The rest of the chapter is organized as follows: in Section 2.1, we give an overview of the relevant background for the studied problem. In Section 2.2, we introduce the VD-kNN method, which relies on Voronoi diagrams and provides exact query results. Section 2.3 introduces the TkNN method, which alleviates the load on the data owner at the expense of slightly lower precision in the returned results. In Section 2.4, we present performance optimizations. We discuss mechanisms for efficient handling of incremental updates in Section 2.5. We evaluate experimentally the performance of the proposed techniques in Section 2.6.


2.1 Preliminaries

In this section, we introduce essential preliminary concepts, namely the system model (Section 2.1.1), the privacy model (Section 2.1.2), and an overview of the mutable order-preserving encoding (mOPE) from [6], which we use as a building block in our work (Section 2.1.3).

Fig. 2.1. System Model

2.1.1 System Model

The system model comprises three distinct entities: (1) the data owner; (2) the outsourced cloud service provider (for short, cloud server, or simply server); and (3) the client. The entities are illustrated in Figure 2.1.

The data owner has a dataset with n two-dimensional points of interest, but does not have the necessary infrastructure to run and maintain a system for processing nearest-neighbor queries from a large number of users. Therefore, the data owner outsources the data storage and querying services to a cloud provider. As the dataset of points of interest is a valuable resource to the data owner, the storage and querying must be done in encrypted form (more details will be provided in the privacy model description, Section 2.1.2).

The server receives the dataset of points of interest from the data owner in encrypted format, together with some additional encrypted data structures (e.g., Voronoi diagrams, Delaunay triangulations) needed for query processing (we will provide details about these structures in Sections 2.2 and 2.3). The server receives kNN requests from the clients, processes them, and returns the results. Although the cloud provider typically possesses powerful computational resources, processing on encrypted data incurs a significant processing overhead, so performance considerations at the cloud server represent an important concern.

The client has a query point Q and wishes to find the point's nearest neighbors. The client sends its encrypted location query to the server, and receives the k nearest neighbors as a result. Note that, because the data points are encrypted, the client also needs to perform a small part of the query processing itself, by assisting with certain steps (details will be provided in Sections 2.2 and 2.3).

2.1.2 Privacy Model

As mentioned previously, the dataset of points of interest represents an important asset for the data owner, and an important source of revenue. Therefore, the coordinates of the points should not be known to the server.

We assume an honest-but-curious cloud service provider. In this model, the server correctly executes the given protocol for processing kNN queries, but will also try to infer the location of the data points. It is thus necessary to encrypt all information stored and processed at the server.

To allow query evaluation, a special type of encryption that allows processing on ciphertexts is necessary. In our case, we use the mOPE technique from [6]. mOPE is a provably secure order-preserving encryption method, and our techniques inherit mOPE's IND-OCPA security guarantee against the honest-but-curious server.

Furthermore, we assume that there is no collusion between the clients and the server, and that the clients will not disclose the encryption keys to the server.

2.1.3 Secure Range Query Processing Method

As we will show later in Sections 2.2 and 2.3, processing kNN queries on encrypted data requires complex operations, but at the core of these operations sits a relatively simple scheme called mutable order-preserving encryption (mOPE). mOPE allows secure evaluation of range queries, and is the only provably secure order-preserving encoding system (OPES) known to date. The difference between mOPE and previous OPES techniques (e.g., Boldyreva et al. [20, 21]) is that it allows ciphertexts to change value over time, hence the mutable attribute. Without mutability, it is shown in [6] that a secure OPES is not possible.

Fig. 2.2. MOPE Tree

Fig. 2.3. MOPE Table

Since our methods use both mOPE and conventional symmetric encryption (AES), to avoid confusion we will refer to mOPE operations on plaintexts/ciphertexts as encoding and decoding, whereas AES operations are denoted as encryption/decryption.


The mOPE scheme in a client-server setting works as follows: the client has the secret key of a symmetric cryptographic scheme, e.g., AES, and wants to store the dataset of ciphertexts at the server in increasing order of the corresponding plaintexts. The client engages with the server in a protocol that builds a B-tree at the server. The server only sees the AES ciphertexts, but is guided by the client in building the tree structure. The algorithm starts with the client storing the first value, which becomes the tree root. Every new value stored at the server is accompanied by an insertion into the B-tree. Figure 2.2 shows an example where plaintext values are also illustrated for clarity, although they are not known to the server (for simplicity, we show a binary tree in the example).

Assume the client wants to store an element with value 55: it first requests the ciphertext of the root node from the server, then decrypts E(50) and learns that the new value 55 should be inserted in the tree to the right-hand side of the root. Next, the client requests the right child of the root node and the server sends E(70) to the client. The process repeats recursively until a leaf node is reached, and 55 is inserted in the appropriate position in the sorted B-tree, as the left child of node 60. The client sends the AES ciphertext E(55) to the server, which stores it in the tree. The encoding of value 55 in the tree is given by the path followed from the root to that node, where 0 signifies following the left child and 1 the right child. In addition, the encoding of every value is padded to the same length (in practice 32 or 64 bits) as follows [6]:

mOPE encoding = [mOPE tree path] 1 0 . . . 0 (2.1)

The server maintains an mOPE table with the mapping from ciphertexts to encodings, as illustrated in Figure 2.3 for a tree with four levels (four-bit encoding). Clearly, mOPE is an order-preserving encoding, and it can be used to securely answer range queries without the need to decrypt ciphertexts.
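To make the path-based encoding concrete, the following is a minimal sketch (ours, not taken from [6] or from this dissertation) of how a server-side tree could map ciphertexts to padded path encodings. A plain binary tree stands in for the client-guided B-tree, the names (Node, encode_path, encodings) are illustrative, and AES handling is omitted.

# Illustrative sketch of mOPE-style path encoding (binary-tree variant).
# Names are hypothetical; real mOPE uses a client-guided B-tree over AES ciphertexts.

ENCODING_BITS = 8  # padded encoding length (32 or 64 bits in practice)

class Node:
    def __init__(self, ciphertext):
        self.ciphertext = ciphertext   # ciphertext stored at the server
        self.left = None               # subtree of smaller plaintexts
        self.right = None              # subtree of larger plaintexts

def encode_path(path_bits):
    """Encoding = [tree path] 1 0...0, padded to a fixed length (Eq. 2.1)."""
    return int((path_bits + "1").ljust(ENCODING_BITS, "0"), 2)

def encodings(root):
    """Walk the tree and return {ciphertext: encoding}; encodings follow plaintext order."""
    table = {}
    def visit(node, path):
        if node is None:
            return
        visit(node.left, path + "0")
        table[node.ciphertext] = encode_path(path)
        visit(node.right, path + "1")
    visit(root, "")
    return table

# Example: E(50) at the root, E(70) as its right child, E(55) as the left child of E(70).
root = Node("E(50)")
root.right = Node("E(70)")
root.right.left = Node("E(55)")
print(encodings(root))  # encodings preserve plaintext order: E(50) < E(55) < E(70)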

In addition, the mOPE tree is a balanced structure. Using a B-tree, it is possible to keep the height of the tree low, and thus all search operations are efficient. In order to ensure the balance property, when insertions are performed, it may be necessary to change the encoding of certain ciphertexts. Note that the actual ciphertext image does not change; only its position in the tree, and thus its encoding, changes. Typically, mutation can be performed very efficiently, and the complexity of the operation (i.e., the maximum number of affected values in the tree) is O(log n), where n is the number of stored values.

As shown in [6], mOPE satisfies IND-OCPA, i.e., indistinguishability under ordered chosen-plaintext attack. The scheme does not leak anything besides order, which is exactly the behavior needed to support comparison on ciphertexts.

2.2 One Nearest Neighbor (1NN)

2.2.1 Voronoi Diagram-based 1NN (VD-1NN)

In this section, we focus on securely finding the 1NN of a query point. We employ Voronoi diagrams [22], which are data structures especially designed to support NN queries. An example of a Voronoi diagram is shown in Figure 2.4. Denote the Euclidean distance between two points p and q by dist(p, q), and let P = {p1, p2, . . . , pn} be a set of n distinct points in the plane. The Voronoi diagram (or tessellation) of P is defined as the subdivision of the plane into n convex polygonal regions (called cells) such that a point q lies in the cell corresponding to a point pi if and only if pi is the 1NN of q, i.e., for any other point pj it holds that dist(q, pi) < dist(q, pj) [22]. Answering a 1NN query boils down to checking which Voronoi cell contains the query point.

In our system model, both the data points and the query must be encrypted. Therefore, we need to check the enclosure of a point within a Voronoi cell securely. Next, we propose such a secure enclosure evaluation scheme.

2.2.2 Secure Voronoi Cell Enclosure Evaluation

Fig. 2.4. Voronoi Diagram

Based on the secure range query processing method introduced in Section 2.1.3, we develop a secure scheme that determines whether a Voronoi cell contains the encrypted query point. Consider the sample Voronoi cell in Figure 2.5. For simplicity, we consider a triangle, but the protocol we devise works for any convex polygon as a cell. The data owner sends to the server the encrypted vertices of the cell: V1(x1, y1), V2(x2, y2), and V3(x3, y3).

Step 1: Filter Cells. Checking enclosure of a point within a convex polygonal region is expensive, so the server first performs a filtering step, where it checks whether the query point is inside the minimum bounding rectangle (MBR) of the cell, identified by its lower-left (LL) and upper-right (UR) corners. Checking enclosure within a rectangle is much cheaper, and the polygon protocol is only performed for the cells that pass the filter. For the filtering step, the data owner need not send any additional information to the server, since the coordinates of the MBR are already among the vertex coordinates. The data owner only has to send the indices of the four rectangle corner coordinates within the sequence of vertex coordinates, and the server will be able to compute rectangle enclosure.

By using the secure range query processing method, the server determines whether the encrypted query point Q(xq, yq) is inside the MBR by checking the following four conditions for every Voronoi cell:

xLL < xq (2.2)

xq < xUR (2.3)

yLL < yq (2.4)

yq < yUR (2.5)

In Figure 2.5, the left side boundary of the MBR is given by coordinate x1 and the right side by x3. Similarly, the lower side of the MBR is given by coordinate y3 and the upper side by y2. If all the conditions hold, then the current cell is processed in Step 2; otherwise it is discarded. To improve performance, if a condition is not satisfied, the other conditions need not be checked, thereby reducing query processing time.
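The Step 1 filter can be illustrated with the following minimal sketch, under the assumption that mOPE encodings are stored as integers that compare like the underlying plaintexts; the MBRCell layout and field names are hypothetical, not the dissertation's exact data structure.

# Illustrative sketch of the Step 1 MBR filter on mOPE-encoded coordinates.
# Because encodings compare like the plaintexts, an ordinary '<' suffices.

from dataclasses import dataclass

@dataclass
class MBRCell:
    cell_id: int
    x_ll: int   # mOPE encoding of the MBR's left boundary
    x_ur: int   # mOPE encoding of the right boundary
    y_ll: int   # mOPE encoding of the lower boundary
    y_ur: int   # mOPE encoding of the upper boundary

def mbr_filter(cells, xq_enc, yq_enc):
    """Return the cells whose MBR contains the encoded query point.
    The 'and' chain short-circuits, mirroring the optimization of skipping
    the remaining conditions once one of Eqs. (2.2)-(2.5) fails."""
    return [c for c in cells
            if c.x_ll < xq_enc and xq_enc < c.x_ur
            and c.y_ll < yq_enc and yq_enc < c.y_ur]

cells = [MBRCell(0, 10, 40, 10, 40), MBRCell(1, 50, 90, 50, 90)]
print([c.cell_id for c in mbr_filter(cells, 25, 25)])  # [0]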

Step 2: Calculate Intersection Edges. For the cells that passed Step 1, the server determines the intersection points of the cell edges with the vertical line that passes through the query point. Note that, since Voronoi cells are convex polygons [22], the vertical line always intersects exactly two edges of the cell. In this step, the server determines which two edges the vertical line intersects.

In Figure 2.5, the vertical line meets the Voronoi cell at points A and B; thus, the vertical line meets edges L1 and L3. This can be determined using the secure range query processing method as follows:

x1 < xq < x2 for edge L1

x1 < xq < x3 for edge L3

Since xq < x2, the vertical line does not meet edge L2, so the server need not consider edge L2 in Step 3, but only the two edges L1 and L3. Recall that all comparisons are done on encoded data, so no information about edge coordinates is learned by the server.
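A minimal sketch of Step 2 follows, again assuming integer mOPE encodings; the per-edge tuple representation is illustrative only.

# Illustrative sketch of Step 2: select the edges whose x-range contains xq,
# using only order comparisons on mOPE encodings.

def intersected_edges(edges, xq_enc):
    """edges: list of (edge_id, x_left_enc, x_right_enc) with x_left_enc < x_right_enc.
    For a convex cell whose MBR contains the query point, the vertical line
    x = xq crosses exactly two edges, and those are returned."""
    return [eid for (eid, x_left, x_right) in edges if x_left < xq_enc < x_right]

# Example for the cell in Figure 2.5 (plaintexts shown in place of encodings):
# L1 = (V1, V2), L2 = (V2, V3), L3 = (V1, V3), with x1 = 1, x2 = 5, x3 = 9.
edges = [("L1", 1, 5), ("L2", 5, 9), ("L3", 1, 9)]
print(intersected_edges(edges, 3))  # ['L1', 'L3'] since xq < x2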

Step 3: Determine Polygon Enclosure. In the third step, the server determines whether the query point is in between the two sides found in Step 2. Namely, the query point needs to be below one of the sides and above the other. There are two conditions to be checked, except that this time the sides may be neither horizontal nor vertical, which makes the evaluation more complicated.

Fig. 2.5. Secure Voronoi Cell Enclosure Evaluation

Continuing the earlier example, the server must check whether the query point is below edge L1 and above L3. From Step 1, we know that the query point is within the cell MBR. Denote by fL the line equation corresponding to side L. Then there are three possible cases of query point placement to consider: (i) yq1 > fL1(xq) and yq1 > fL3(xq) (illustrated by Q1 in Figure 2.5); (ii) yq2 < fL1(xq) and yq2 > fL3(xq) (illustrated by Q2); and (iii) yq3 < fL1(xq) and yq3 < fL3(xq) (illustrated by Q3), where yq1, yq2, and yq3 are the y-coordinates of Q1, Q2, and Q3, respectively. In the first and third cases, the query point is outside the cell. The second case is the only one in which the query point is inside the polygon. In the following, we show how to check these cases.

For edge L1 in Figure 2.5, the line equation is:

y = (y2 − y1)/(x2 − x1) ∗ (x − x1) + y1 (2.6)

When we plug xq into Eq. 2.6, if yq is less than y, then the query point is on the lower side of L1. On the other hand, when we plug xq into the equation of L3, if yq is greater than y, then the query point is on the upper side of L3. The following condition must be satisfied if the query point is on the lower side of L1, where Si,j denotes the slope of the edge between two Voronoi vertices Vi and Vj:

yq < (y2 − y1)/(x2 − x1) ∗ (xq − x1) + y1 ⇔ yq < S1,2 ∗ (xq − x1) + y1 (2.7)

The values of xq and yq vary with each query, but the Voronoi diagram does not change with the query, so xi, yi, and Si,j remain constant. We can rewrite the condition above as follows:

L1,2 = yq − S1,2 ∗ xq < −1 ∗ S1,2 ∗ x1 + y1 = R1,2 (2.8)

where we denote the right-hand side and the left-hand side by Ri,j and Li,j, respectively. Ri,j does not depend on the query, and can be determined by the data owner when uploading the database to the server. In addition, the data owner encrypts the value of the slope Si,j with conventional encryption (e.g., AES) and sends it to the server.

For each of the intersecting edges determined in Step 2, the server assembles Eq. 2.8 and sends the encrypted value Si,j for each of the two edges to the client. The client decrypts the Si,j values with the secret AES key shared with the data owner. Next, the client computes Li,j (Eq. 2.8), encodes it, and sends it back to the server. The server is then able to check enclosure for the current cell, and thus find the final query result. The pseudocode in Table 2.1 summarizes the protocol, and Figure 2.6 captures the communication pattern between the parties.

Fig. 2.6. VD-1NN
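To illustrate the division of labor behind Eq. 2.8, here is a minimal sketch in which plaintext arithmetic stands in for AES and mOPE, and the function names are ours: the data owner precomputes Si,j and Ri,j, the client computes Li,j after decrypting the slope, and the server only compares encodings.

# Minimal sketch of the client-assisted comparison behind Eq. 2.8.
# owner_precompute / client_left_side / server_below_edge are illustrative names;
# in the protocol, S is AES-encrypted and R, L are mOPE-encoded.

def owner_precompute(x1, y1, x2, y2):
    """Data owner: slope S and query-independent right-hand side R for edge (V1, V2)."""
    S = (y2 - y1) / (x2 - x1)
    R = -S * x1 + y1
    return S, R

def client_left_side(S, xq, yq):
    """Client: after decrypting S, computes the query-dependent left-hand side L."""
    return yq - S * xq

def server_below_edge(L_enc, R_enc):
    """Server: the query point lies below the edge iff L < R (compared on encodings)."""
    return L_enc < R_enc

# Example with plaintext numbers standing in for encodings:
S, R = owner_precompute(1, 1, 5, 9)      # edge through (1, 1) and (5, 9), slope 2
L = client_left_side(S, xq=3, yq=4)      # query point (3, 4)
print(server_below_edge(L, R))           # True: (3, 4) lies below the line y = 2x - 1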

2.2.3 Performance Analysis

The data owner computes the order-1 Voronoi diagram of the dataset, determines the MBR boundaries of each Voronoi cell, and encodes using mOPE the cell vertex coordinates, as well as the right-hand side Ri,j of Eq. 2.8 for each edge of a Voronoi cell. The slopes Si,j are encrypted using symmetric encryption (e.g., AES).


Table 2.1
VD-1NN protocol

1. Data Owner sends to Server the encoded Voronoi cell vertex coordinates, MBR boundaries for each cell, encoded right-hand side Ri,j, and encrypted Si,j for each cell edge.

2. Client sends its encoded query point to the Server.

3. Server performs the filter step, determines for each kept cell the edges that intersect the vertical line passing through the query point, and sends the encrypted slope Si,j of the two edges to the Client.

4. Client computes the left-hand side Li,j, encodes it, and sends it to the Server.

5. Server finds the Voronoi cell enclosing the query point and returns the result to the Client.

Generation time for the Voronoi diagram is O(n log n) using Fortune's algorithm [22]. The number of Voronoi vertices that require mOPE encoding in a set of n data points is at most 2n − 5 [22]. Thus, the time to encode the Voronoi points is proportional to 4n, since each Voronoi point has an x-coordinate and a y-coordinate. Furthermore, the right-hand side Ri,j of Eq. 2.8 must be encoded for each edge. The number of edges in a Voronoi diagram is at most 3n − 6. The total number of mOPE encoding operations is therefore proportional to 7n. The slopes Si,j are encrypted using AES encryption and do not require mOPE encoding. In total, the data owner performs 3n AES encryptions and 7n mOPE encoding operations.
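As a rough worked example (ours, not the dissertation's), instantiating these bounds for n = 10,000 data points gives:

\[
\begin{aligned}
\text{Voronoi vertices} &\le 2n-5 = 19{,}995 &&\Rightarrow\; \text{coordinate encodings} \approx 4n = 40{,}000,\\
\text{Voronoi edges} &\le 3n-6 = 29{,}994 &&\Rightarrow\; R_{i,j} \text{ encodings} \approx 3n = 30{,}000,\\
\text{total mOPE encodings} &\approx 7n = 70{,}000, &&\quad S_{i,j} \text{ AES encryptions} \approx 3n = 30{,}000.
\end{aligned}
\]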

In Line 2 of the pseudocode, the client encodes the query point with cost O(1). In Line 4, the client encodes the left-hand side Li,j for the two edges of each Voronoi cell whose MBR contains the query point. The number of Voronoi cells considered in this step is typically small, as we have found experimentally.

In Line 3, the server finds the Voronoi cells whose MBR boundaries enclose the query point. Since there are n Voronoi cells, the processing time is O(n). When there are many data points, the time to filter Voronoi cells may be high. In Section 2.4, we provide several optimizations to reduce this computational time.

2.3 k Nearest Neighbor (kNN)

To support secure kNN queries, where k is fixed for all querying users, we could

extend the VD-1NN method from Section 2.2 by generating order-k Voronoi diagrams

[22]. However, this method, which we call VD-kNN, has a serious drawback: the
complexity of generating order-k Voronoi diagrams is either O(k^2 n log n) [7]
or O(k(n−k) log n + n log^3 n) [8], depending on the approach used. This is significantly
higher than the O(n log n) required for order-1 Voronoi diagrams.

2.3.1 Secure Distance Comparison Method (SDCM)

Consider two given encrypted data points Pi and Pj and encrypted query point

Q(xq, yq). If we can securely test which data point is closer to the query point, then

by repeatedly applying this test we can find all k nearest neighbors of Q. In Section

2.2.2, we showed how to determine whether the query point is below or above an

edge of a Voronoi cell. SDCM is an extension of that scheme. Consider the example

Fig. 2.7. Secure Distance Comparison Method

in Figure 2.7, where there are two data points and one query point. First, the data

owner computes the middle point of the segment that connects the two data points,


denoted by Pi,j, as well as the perpendicular bisector Li,j of the segment. The slope

of the bisector is denoted by Si,j. The bisector equation is:

y = -((x_j - x_i)/(y_j - y_i)) * (x - x_{i,j}) + y_{i,j}  ⇔  y = S_{i,j} * (x - x_{i,j}) + y_{i,j}   (2.9)

When we plug xq into the equation, it follows that the query point is in the upper

side of the bisector, hence Pi is closer to Q than Pj, if and only if

y_q > S_{i,j} * (x_q - x_{i,j}) + y_{i,j}  ⇔  L_{i,j} = y_q - S_{i,j} * x_q > -S_{i,j} * x_{i,j} + y_{i,j} = R_{i,j}   (2.10)

Similar to the case of Section 2.2.2, we observe that the right-hand side Ri,j of Eq.

2.10 is independent of the query point, whereas the left-hand side Li,j depends on

the query point. The data owner can thus encode the right-hand side and send it to

the server, together with the slope Si,j of the bisector. Recall that, the slope may be

encrypted using conventional encryption, e.g., AES.

At query time, in order to determine which data point is closer, the server sends

the encrypted slope Si,j to the client. The client computes the left-hand side, encodes

it and sends it back to the server, which in turn determines the outcome of inequality

in Eq. 2.10.
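The geometry behind SDCM can be illustrated with the following plaintext Java sketch (a sketch only: in the protocol R_{i,j} is mOPE-encoded, S_{i,j} is AES-encrypted, and the comparison is carried out by the server on encoded values; the example coordinates are arbitrary and chosen so that P_i lies on the upper side of the bisector, as in Figure 2.7).

    // Plaintext view of SDCM: bisector construction (data owner) and the
    // left-hand-side computation (client) of Eq. 2.10.
    public class SdcmSketch {

        // Data-owner side: perpendicular bisector of segment P_i P_j.
        // Returns {S_ij, R_ij}, where S_ij = -(x_j - x_i)/(y_j - y_i) and
        // R_ij = -S_ij * x_ij + y_ij for the midpoint (x_ij, y_ij). Assumes y_i != y_j.
        static double[] bisector(double xi, double yi, double xj, double yj) {
            double xm = (xi + xj) / 2.0, ym = (yi + yj) / 2.0;
            double s = -(xj - xi) / (yj - yi);
            return new double[] { s, -s * xm + ym };
        }

        // Client side: L_ij = y_q - S_ij * x_q.
        static double leftHandSide(double s, double xq, double yq) {
            return yq - s * xq;
        }

        public static void main(String[] args) {
            double[] b = bisector(2, 5, 4, 1);                   // P_i = (2,5), P_j = (4,1)
            boolean qOnPiSide = leftHandSide(b[0], 1, 4) > b[1]; // L_ij > R_ij for Q = (1,4)
            System.out.println(qOnPiSide);                       // true: Q is nearer to P_i
        }
    }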

2.3.2 Basic k Nearest Neighbor (B-kNN)

Based on SDCM, we introduce the basic secure kNN scheme (BkNN), which in

itself is not efficient, but it illustrates the general concept based on which we introduce

a more efficient approach in Section 2.3.3. For each pair of encrypted data points,

the server must determine according to SDCM which data point is closer to the

encrypted query point. When there are n data points, the perpendicular bisector

must be determined for every pair, for a total of (n(n− 1))/2 bisectors. The encoded

right-hand side and slope must be sent for each bisector from the data owner to the

server, and the server needs to perform (n(n− 1))/2 comparisons on encoded data to

find the first nearest neighbor. Clearly, such cost is prohibitive.


Fig. 2.8. BkNN using Query Square

To reduce this overhead, we propose a basic k nearest neighbor scheme which uses

the concept of query squares. We illustrate this concept in Figure 2.8: the small

query square with side 2r corresponds to a range query selected by the user, whereas

the large query square is computed as the smallest square that encloses the circle in

which the small query square is inscribed.

Suppose a user wishes to retrieve from the server the answer to a 3NN query

(k = 3). Assume the small query square contains three data points and the large

query square contains five data points. Note that, it is possible for a data point that

is outside the query square (in our example P4) to be closer to the query point than

some point inside the square (say P3). This means that if the small query square

contains at least k data points, the large query square will certainly contain k nearest

neighbors. If the small query square does not have at least k data points, then the

client will generate a larger query square and re-issue the query, in a process similar

to incremental range queries.

The size Ssq of the small query square can be determined by the client according

to the estimated number of data points in the data domain. For instance, when the

number of data points is n and the size of the data space side is l, then assuming the

data points are uniformly distributed, we have:

k : n = (2r)^2 : l^2  ⇔  S_{sq} = 2r = \sqrt{k l^2 / n} = l \sqrt{k/n}   (2.11)


The size of the query square is proportional to \sqrt{k/n}: it grows with k and shrinks as n increases. If

k is large, we need a large query square, whereas if n is large (i.e., the dataset has a

higher density), then a smaller query square suffices.
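As a quick numeric illustration (values chosen here for illustration only, not taken from the experimental setup), consider the unit data space l = 1 with n = 1,000,000 uniformly distributed points and k = 10:

    S_{sq} = l \sqrt{k/n} = \sqrt{10 / 10^6} = \sqrt{10^{-5}} \approx 3.2 \times 10^{-3}

so the small query square covers roughly 0.001% of the data space, and the large query square (the smallest square enclosing the circle in which the small one is inscribed) has side \sqrt{2} \cdot S_{sq} \approx 4.5 \times 10^{-3}.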

The number of data points in the large query square is expected to be O(k), so

the number of bisectors used in the query processing step is O(k(k − 1)/2), which is

much cheaper than O(n(n−1)/2). The BkNN protocol is summarized in the following

Fig. 2.9. BkNN Protocol

pseudocode, and a system view with the communication pattern between parties is

provided in Figure 2.9.

Table 2.2 BkNN protocol

1 Data Owner sends to Server: all encoded data points,

and for each pair of points the encoded right-hand side Ri,j

of Eq. 2.10, and encrypted slopes Si,j.

2 Client sends the encoded query to Server.

3 Server finds the data points in the large query square

and sends their AES-encrypted slopes Si,j to Client.

4 Client computes the encoded left-hand sides Li,j of Eq. 2.10

and sends them to Server.

5 Server returns k result points to Client.


The concept of using query squares has been used previously in [9], but that scheme

uses an encryption method which is not secure against chosen plaintext attacks, and

it also returns redundant results to the client.

Performance Analysis. Even though the query processing time is significantly reduced
to O(k(k−1)/2) by using the query square concept, BkNN still incurs a significant
data encryption time of O(n(n−1)/2), because the encrypted slopes of all perpendicular bisectors need to
be sent to the server. Next, we focus on reducing data encryption time.

2.3.3 Triangulation-based kNN (TkNN)

Triangulation-based kNN (TkNN) reduces the overhead at the data owner. TkNN

is an approximate method for k > 1, i.e., it may not always return the true kNN.

However, as we show later in Section 2.6.3, it achieves high precision in practice.

The Delaunay Triangulation is the dual of the order-1 Voronoi diagram [22], and is

illustrated in Figure 2.10. The thick lines show the edges of the triangulation, whereas

the dotted lines show the edges of the Voronoi cells. Let b denote the number of

points that lie on the boundary of the convex hull of the triangulation. Then the

triangulation has 2n− 2− b triangles and 3n− 3− b edges.

Fig. 2.10. Triangulation Example

In TkNN, the data owner computes bisectors for each edge of the triangulation,

for a total of 3n bisectors. This is significantly less than BkNN, which requires

n(n−1)/2 bisectors. In addition, since the Delaunay triangulation is the dual of the


order-1 Voronoi diagram, the data generation time (i.e., the time to compute the data

structure in plaintext) is O(n log n), no larger than in the VD-1NN case.

In TkNN, a bisector is determined for each edge of the triangulation. In the

example of Figure 2.11, there are five edges and a bisector is determined for each

edge. Note that, it is not necessary to determine a bisector for the pair of data points

P1 and P4 since they do not take part as vertices in any triangle together. Hence, we

can reduce the data encryption time and the query processing time to O(n), and the

query encryption time to O(k).

Fig. 2.11. TkNN Evaluation

Using SDCM for the left-hand triangle in Figure 2.11, the server determines which

data point is closer among P1,P2,P3. In addition, from the right-hand triangle, the

server determines which data point is closer among P2,P3,P4. For instance, from the

left-hand triangle, we know that P1 is the nearest to the query point, P3 is second-

nearest, and P2 is third-nearest. From the right triangle, P3 is nearest, P2 is second-

nearest, and P4 is third-nearest. Finally, combining the information from these two

triangles, P1 is the 1NN, P3 the 2NN, P2 the 3NN and P4 is the 4NN. The server is

able to determine the query answer completely from processing the triangulation.

However, the performance advantage of TkNN comes with a tradeoff in query

accuracy. Specifically, when two data points do not exist in the same triangle, a

bisector between the two data points is not determined. In this case, the server may

not be able to determine which one between the two data points is closer to the query

point.

For example, in Figure 2.12, when the query point is closer to P2, we can determine

from the left-hand triangle that P2 is nearest to the query point, P1 is second-nearest,


Fig. 2.12. TkNN Limitation

and P3 is third-nearest. From the right-hand triangle, it results that P2 is the near-

est, P4 is second-nearest, and P3 is third-nearest. From these two triangles, we can

establish a partial order for the four data points as follows:

P2 < {P1, P4} < P3   (2.12)

The first nearest neighbor is always correct. However, in cases where k > 1, the

rest of the returned k results may be approximate. For example, when a 2NN query

is issued, the server may return P2 and P4 to the client as the result. P2 is indeed the

first nearest neighbor, but the 2NN is actually P1.

Performance Analysis. The data generation time of TkNN is O(n log n) and
the data encoding time is O(5n) (accounting for the n two-dimensional point coordinates, i.e., 2n values,

and 3n bisector right-hand side equation values). This is superior to VD-1NN which

requires O(7n) data encoding time (2n two-dimensional Voronoi points and 3n right-

hand side equation values).

In addition, since VD-kNN has kn Voronoi cells, it has O(kn) query processing

time. Triangulation has n data points, hence only O(n) query processing time. TkNN

is k times faster than VD-kNN in terms of query processing.

Finally, BkNN has O(n(n − 1)/2) data encoding time and O(n(n − 1)/2) query

processing time. A performance comparison of the three schemes is provided in Table

2.3.


Table 2.3 Performance Comparison

                          VD-kNN         BkNN           TkNN
  Data Generation Time    k^2 n log n    N/A            n log n
  Data Encoding Time      7kn            n(n−1)/2       5n
  Query Encoding Time     O(1)           O(k(k−1)/2)    O(k)
  Query Processing Time   kn             n(n−1)/2       n

2.4 Optimizations

Our proposed methods for secure nearest-neighbor evaluation perform query pro-

cessing on top of encrypted data, and for this reason they are inherently expensive. It

is a well-known fact that achieving security by processing on encrypted data comes at

the expense of significant computational overhead. Next, we propose two optimiza-

tions that aim at reducing this cost.

2.4.1 Hybrid Query Processing using Kd-trees

As shown in Table 2.3, the query processing time of VD-1NN and TkNN is O(n).

If there are a lot of data points, which is likely to be the case in cloud deployments,

the query processing time will be several seconds or higher. Since the server needs to

return the result to the client within a very short time for good usability (typically less

than 1 second), we propose a hybrid query processing method using kd-trees [10,22].

The data owner performs a pre-processing phase in which the set of data points is

partitioned according to a kd-tree space decomposition. Figure 2.13 illustrates how

the splitting is done. First, the data owner chooses a vertical line (e.g., x = x1) and

splits the set of points into two subsets of equal cardinality. Each of the resulting

subsets is further split along a horizontal line (e.g., y = y1,1 and y = y1,2). In general,

the data owner splits with a vertical line nodes whose depth is even, and with a


horizontal line nodes whose depth is odd. The splitting ends when the cardinality of

a node drops below a certain threshold.

Fig. 2.13. Partitioning data points using a kd-tree

Each of the resulting 16 partitions in Figure 2.13 has roughly n/16 data points, and

is enclosed by its own MBR. MBRs of different nodes do not overlap. For example,

the 10th subspace's lower bound is (x_{2,3}, y_{2,6}) and its upper bound is (x_1, y_{1,1}). Each

MBR is encrypted by the data owner and sent to the server.

When the client sends an encrypted query to the server, the server finds a subspace

which contains the encrypted query point. For instance, for the example in Figure

2.13, secure range query processing first performs the following test:

x_{2,3} < x_q < x_1   (2.13)

y_{2,6} < y_q < y_{1,1}   (2.14)

If these two conditions are satisfied for some partition, then that partition contains

the query point. The subspace has roughly n/16 data points. Next, the server applies

VD-1NN or TkNN only to that partition. Consequently, query processing time is

reduced to about 1/16 of the query processing time when there are n Voronoi cells or

n data points. Furthermore, as the number of data points increases, the data owner

can choose a larger number of partitions. The disadvantage of this method is that

the server learns the count of data points in each partition, but since the partition


MBRs are encrypted, that does not disclose significant information (all partitions

have roughly the same cardinality).
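The following Java sketch (illustrative only; the class names and the leaf-size threshold are hypothetical, and everything is shown in plaintext, whereas in the actual scheme the MBR boundaries and the query point are mOPE-encoded) outlines the kd-tree partitioning performed by the data owner and the partition lookup corresponding to the tests of Eq. 2.13 and 2.14.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Plaintext sketch of the hybrid kd-tree optimization of Section 2.4.1.
    public class KdPartitioner {
        static final int LEAF_THRESHOLD = 64;   // hypothetical cardinality threshold

        static class Mbr {
            final double xLo, yLo, xHi, yHi;
            Mbr(double xLo, double yLo, double xHi, double yHi) {
                this.xLo = xLo; this.yLo = yLo; this.xHi = xHi; this.yHi = yHi;
            }
            // Eq. 2.13 / 2.14 (non-strict here so boundary points still fall in a partition).
            boolean contains(double xq, double yq) {
                return xLo <= xq && xq <= xHi && yLo <= yq && yq <= yHi;
            }
        }

        // Data-owner side: split with a vertical line at even depth and a horizontal
        // line at odd depth, stopping when the partition is small enough.
        static void split(List<double[]> pts, int depth,
                          double xLo, double yLo, double xHi, double yHi, List<Mbr> leaves) {
            if (pts.size() <= LEAF_THRESHOLD) { leaves.add(new Mbr(xLo, yLo, xHi, yHi)); return; }
            int axis = depth % 2;
            pts.sort(Comparator.comparingDouble((double[] p) -> p[axis]));
            int mid = pts.size() / 2;
            double m = pts.get(mid)[axis];                     // median split value
            List<double[]> left = new ArrayList<>(pts.subList(0, mid));
            List<double[]> right = new ArrayList<>(pts.subList(mid, pts.size()));
            if (axis == 0) {
                split(left, depth + 1, xLo, yLo, m, yHi, leaves);
                split(right, depth + 1, m, yLo, xHi, yHi, leaves);
            } else {
                split(left, depth + 1, xLo, yLo, xHi, m, leaves);
                split(right, depth + 1, xLo, m, xHi, yHi, leaves);
            }
        }

        // Server side: find the partition enclosing the query point; VD-1NN or TkNN
        // is then applied only to the points of that partition.
        static int findPartition(List<Mbr> leaves, double xq, double yq) {
            for (int i = 0; i < leaves.size(); i++)
                if (leaves.get(i).contains(xq, yq)) return i;
            return -1;
        }
    }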

2.4.2 Parallel Processing

In order to reduce the query processing time, the server can use parallel processing.

Note that, the operations performed by the server for each Voronoi cell or triangle

are independent from each other. Hence, each object (or partition of objects) can be

dispatched for processing to a different processor. The algorithms for querying are

embarrassingly parallel, which can lead to very good speedup values.

Nowadays, most machines have multi-core processors, so the parallel processing

optimization can be quite effective in practice. In addition, in the case of clusters

of computers, the query processing time can be further reduced by using a parallel

programming environment such as the Message Passing Interface (MPI). We will

show the effectiveness of parallel processing on reducing query processing time in the

experimental evaluation.
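A possible structure for this optimization is sketched below in Java (the prototype uses Java threads; the interfaces and names here are hypothetical placeholders, and the sketch simply dispatches each partition, Voronoi cell, or triangle to a fixed-size thread pool).

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;

    // Sketch of the parallel-processing optimization: the per-partition work is
    // independent, so it is embarrassingly parallel.
    public class ParallelQueryProcessor {
        interface EncodedQuery { }
        interface CandidateResult { }
        interface Partition { CandidateResult process(EncodedQuery q); }

        static List<CandidateResult> processAll(List<Partition> partitions, EncodedQuery q,
                                                int threads) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            try {
                List<Callable<CandidateResult>> tasks = partitions.stream()
                        .map(p -> (Callable<CandidateResult>) () -> p.process(q))
                        .collect(Collectors.toList());
                List<CandidateResult> results = new ArrayList<>();
                for (Future<CandidateResult> f : pool.invokeAll(tasks)) results.add(f.get());
                return results;    // the server then merges the per-partition candidates
            } finally {
                pool.shutdown();
            }
        }
    }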

2.5 Incremental Updates

So far, we have considered only the case of static datasets of points. However,
in practice, datasets of locations of interest change quite frequently. Re-generating

a new encrypted dataset at the data owner each time some points change incurs a

prohibitively expensive overhead. In this context, it is important to address the issue

of incremental updates.

When data points move, it is not necessary to recalculate the entire Voronoi

diagram or Delaunay triangulation. These data structures can be updated in an

incremental manner. In addition, the topological structures of the Voronoi diagram

and the Delaunay triangulation are locally stable under sufficiently small continuous

motions of the data points [11]. For incremental updates, we consider only TkNN

and VD-1NN. BkNN is not considered since it is not based on triangulations or


Voronoi diagrams, and handling updates in the case of BkNN is straightforward,

albeit inefficient. Specifically, in the case of BkNN, when a data point moves, n

slopes are changed and must be re-encrypted.

In the case of TkNN, if a data point moves, the position of the data point is

changed, and the slopes of d edges connected to the data point are also changed. The

complexity of the update is O(d), where d is the degree of the data point. Then, the

position of the data point and the right-hand sides of Eq. 2.10 must be re-encoded

with mOPE, whereas the slopes of the edges are re-encrypted with AES encryption.

Recall that, the encoded coordinates of the MBR are among those of the data points,

so no separate re-encoding for these is required.

In the case of VD-1NN, if a data point in the triangulation moves, the slopes of

three edges corresponding to the cell vertices of that point also change. Then, the

neighbors of that cell in the tessellation may also change. In total, when a data point

moves, d Voronoi points and 2d Voronoi edges are changed and must be re-encoded.

Note that, each cell vertex has three edges. However, an edge is shared with adjacent

cell vertices.

Next, we discuss how topological changes are performed. The work in [11] shows

how changes can be characterized as swaps of adjacent triples in the triangulation.

Recall that a Voronoi diagram is the dual of a Delaunay triangulation. When a data

point Pl leaves the circle determined by three points C(Pi, Pj, Pk), an inactive triple

{Pi, Pj, Pk} becomes activated. On the other hand, when a data point Pl enters the

circle, an active triple {Pi, Pj, Pk} becomes deactivated. Figure 2.14 illustrates this

concept.

Fig. 2.14. Change of the topological structure


The structure update proceeds in two steps: a preprocessing step and an iteration

step [11]. In the preprocessing step, the data owner computes the triangulation

and calculates the potential topological events. A potential topological event is a

pair of adjacent triples, called a quadrilateral, e.g., {Pi, Pj, Pk, Pl}. The

data owner builds up a balanced SWAP-tree. In the iteration step, when there is a

topological event, the data owner processes the event and updates the SWAP-tree.

The number of pairs of adjacent triples is equal to the number of edges,
which is 3n. The preprocessing step requires O(n log n) time. Next, when there is
a swap, it causes the removal of only four quadrilaterals (e.g., {Pr, Pi, Pj, Pl})
while four other quadrilaterals are generated (e.g., {Pr, Pi, Pk, Pl}). The update time
is O(log n) [11].

There are two separate cases:

(1) The data point Pl moves within the circle. In this case, the topology

is not changed. However, the position of the data point and the slopes of the edges

connected to the point are changed. Then, for TkNN, only the point Pl and the edges

including the point Pl and MBR boundaries of the triangles including the point Pl

should be updated. The update time is O(d) where d is the degree of the data point.

In the case of VD-1NN, since the data point Pl has d edges in the triangulation,

d Voronoi vertices are changed, as well as 2d edges. The update time is O(4d), where
d is the degree of the data point.

(2) The data point Pl moves outside the circle. In this case, the topology

is changed. The time to update the triangulation is O(log n), as explained earlier. In

addition, for TkNN, the moving data point, d edges connected to it and the MBR

boundaries of the triangles containing the data point should be updated. For VD-
1NN, two Voronoi points Va and Vb are deleted and two new Voronoi points Vc and Vd

are inserted. Then, O(d) Voronoi points, O(2d) edges including the Voronoi points,

and the MBR boundaries of the Voronoi cells corresponding to the Voronoi points

should be updated. The total update time is O(log n + 4d).


In summary, the incremental update of TkNN is more efficient than that of VD-

1NN, since TkNN has O(log n + d) time complexity versus O(log n + 4d) for VD-1NN.

2.6 Experimental Evaluation

2.6.1 Experimental Setup

We developed a Java prototype which implements the data owner, the server

and the client protocols. We used the Qhull library [12] to generate order-1 Voronoi

diagrams and Delaunay triangulations. We implemented mOPE [6] using 32-bit en-

coding. The parallel computing section of our code was implemented using Java

threads. Our experimental testbed consists of an Intel i7 CPU machine with four

cores.

We used datasets of two-dimensional point coordinates ranging in cardinality from

200,000 to 1 million. We consider a uniform distribution of points in the unit space.

We emphasize that, in the case of processing on encrypted data, the actual data dis-

tribution has little or no effect on performance, since all values are treated in a similar

way in encrypted form. Therefore, we omit results obtained for other distributions.

For encryption of slopes, we used 128-bit AES. The communication bandwidth for
the wireless connection between the server and the client is set to 1 Mbps.

The main performance metrics used to evaluate the proposed techniques are query

response time and communication cost. The response time measures the duration

from the time the query is issued until the results are received at the client. It includes

the computation time at the server and the client, as well as the time required for

transfer of final and intermediate results between client and server. Communication

cost (measured in kilobytes) is important given that many wireless providers charge

customers in proportion to the amount of data transferred.

We briefly review the functionality of the proposed methods. In the setup phase,

the data owner builds the Voronoi diagram or Delaunay triangulation for the dataset,


encrypts these structures and sends them to the server. At runtime, there are two

steps for each method:

VD-kNN. 1) The client sends its encoded query point to the server which finds

the Voronoi cells whose MBRs enclose the query point. For each of these cells, the

server sends the encrypted slopes Si,j of two cell edges intersecting the vertical line

passing through the query point. 2) The client computes the left-hand sides Li,j (Eq.

2.8) and sends their ciphertexts to the server, which finds the Voronoi cell enclosing

the query point.

TkNN. 1) The client sends the encoded query square to the server, and the server

finds the data points enclosed by the square. The server sends to the client the

encrypted slopes Si,j of the perpendicular bisectors corresponding to each such data

points. 2) The client computes the encoded left sides Li,j (Eq. 2.10) and sends them

to the server which finalizes processing and returns the results to the client.

ASM-PH. The benchmark method of [68] relies on ASM-PH encryption and builds an encrypted R-tree index (shadow index) on

top of the data. The complete tree is sent to the client, who engages in a multiple-

round index traversal protocol with the server.

In Sections 2.6.2 and 2.6.3 we evaluate our techniques for 1NN and kNN queries,

respectively. Next, in Section 2.6.4 we measure the overhead incurred at the data

owner, which includes the time required to generate the Voronoi diagrams or Delaunay

triangulations on plaintexts, as well as encoding/encryption time of these structures.

Section 2.6.5 evaluates the precision of TkNN, whereas Section 2.6.6 measures the

performance of handling updates.

2.6.2 1NN

Figure 2.15 shows the query response time for all considered methods. For the

benchmark method from [68] (label ASM-PH), the cost of transferring the shadow

index is very large, as the index can grow to more than 100 megabytes for the con-

sidered dataset. The authors in [68] argue that the cost of index transfer may be


amortized over multiple queries. Even in this case, ASM-PH is at least an order of

magnitude slower than our techniques. Therefore, we omit it from subsequent results.

Fig. 2.15. 1NN Response Time

Figure 2.16 shows the communication cost for VD-1NN and TkNN. The methods

exhibit comparable costs, with VD-1NN slightly more expensive, due to the fact that

more slopes need to be sent for a Voronoi cell. The absolute values do not exceed 4

kilobytes, even for the largest dataset considered.

Fig. 2.16. 1NN Communication Cost

Figure 2.17 provides a breakdown of the response time into client CPU time,

server CPU time and communication time. Note that, for both proposed methods

the client time is a negligible fraction of the total time. This is a desirable feature, as

clients are lightweight devices without powerful computation capabilities. In the case

of VD-1NN, the server CPU time is the predominant source of overhead, whereas for


TkNN there is a balanced split between server CPU and communication time. The

higher server CPU time for VD-1NN is due to the fact that it needs to inspect four

values for the MBR of each Voronoi cell, whereas TkNN needs only two values for

each data point. Furthermore, in the mOPE tree, the height of VD-1NN is higher

than that of TkNN since VD-1NN needs to represent 2n Voronoi points, compared

to n data points for TkNN.

Fig. 2.17. 1NN Response Time Breakdown

Overall, VD-1NN is considerably costlier than TkNN. However, the absolute re-

sponse time is less than 200 msec in the worst case, which proves the practical appli-

cability of both proposed methods. The response time of TkNN is always below 70

msec.

2.6.3 kNN

As discussed in Section 2.3, the cost of VD-kNN grows as O(k^2 n log n) when k

increases, due to the need to create an order-k Voronoi diagram. Thus, VD-kNN is

not suitable for larger values of k. In this section, we consider only TkNN, and we

compare its performance against ASM-PH [68]. As TkNN is highly parallelizable, we

consider both the serial algorithm as well as a version with four CPU cores, which

also partitions the dataspace into four regions, as discussed in Section 2.4.1.


Figure 2.18 shows that the cost of ASM-PH with index transfer is prohibitively
expensive for larger values of k as well. When k increases, the gap between ASM-PH
without index transfer and TkNN gets smaller, but TkNN still outperforms it in each

case. Furthermore, parallelism increases considerably the performance of TkNN. Note

that, ASM-PH cannot be parallelized, due to its sequential nature in traversing the

encrypted index. We do not consider ASM-PH further in this section.

Fig. 2.18. kNN Response Time

Figure 2.19 presents the communication cost for TkNN as k increases. Each line

in the graph corresponds to a different dataset size. The amount of

communication grows linearly with k, which is intuitive, as a proportionally larger

number of results need to be returned to the client. Interestingly, increases in dataset

size do not determine a significant increase in the amount of communication required.

Fig. 2.19. kNN Communication Cost


Figure 2.20 provides a breakdown of the response time into client CPU time,

server CPU time and communication time. The CPU time is significantly reduced

by using parallelism. The split of the dataset into four subspaces using a kd-tree

further improves performance. The overall response time never exceeds half a second

for the considered range of k values, whereas the parallel version halves the time to

250 msec.

Fig. 2.20. kNN Response Time Breakdown

2.6.4 Data Encryption Time at the Data Owner

Figure 2.21 shows the data encryption time at the data owner for VD-1NN and

TkNN. VD-1NN generates 2 ∗ n Voronoi points, whereas TkNN has n data points.

In addition, the data owner must encrypt the right side Ri,j for each edge of every

Voronoi diagram cell and triangulation object. The total number of such edges is 3n

for both VD-1NN and TkNN. The overall data encryption overhead of VD-1NN is

proportional to 7n, whereas that of TkNN is proportional to 5n. Figure 2.21 captures

this advantage of approximately 30% that TkNN has over VD-1NN.

In the case of VD-kNN (not shown in the graph), which has kn Voronoi cells,
2kn Voronoi points and 3kn edges are present, leading to an encoding overhead that

is proportional to 7kn. This is another reason why VD-kNN is not suitable for larger

k values. So in addition to the query response time evaluated in Section 2.6.2, TkNN


also has an advantage with respect to data encoding/encryption time for larger k

values compared to VD-kNN.

Fig. 2.21. Data Encryption Time

2.6.5 Precision of TkNN

Recall from Section 2.3.3 that TkNN yields approximate results for k > 1, since

perpendicular bisectors are determined only for edges of the triangulation. When

two data points do not exist in the same triangle, TkNN may not be able to deter-

mine which data point is closer to the query point. However, in many cases we can

determine a total order from the partial orders given by individual triangles.

Next, we measure the precision of TkNN, defined as the fraction of the k returned
results that are true k nearest neighbors. In addition, we also use a

weighted precision metric which assigns a higher score to the higher-order nearest

neighbors. It is calculated as follows.

Weighted Precision = (\sum_{j \in C} 1/O_j) / (\sum_{i=1}^{k} 1/i)   (2.15)

where C is the set of correct k nearest neighbors among the returned k results

and O_j is the order of neighbor j.
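As a small worked example (numbers chosen for illustration, not taken from the experiments), suppose k = 3 and the returned results contain the true 1st and 3rd nearest neighbors but miss the 2nd. Then

    Weighted Precision = (1/1 + 1/3) / (1/1 + 1/2 + 1/3) = (4/3) / (11/6) = 8/11 \approx 0.73

whereas the plain precision is 2/3 \approx 0.67; missing the 1st nearest neighbor instead would lower the weighted score to 5/11 \approx 0.45, reflecting the higher weight given to higher-order neighbors.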

Figure 2.22 shows that the precision of TkNN reaches 88% and the weighted

precision 96%. Therefore, even though TkNN provides only approximate kNN results,


it does so with high accuracy, and in the vast majority of cases the exact NN points

are returned.

Fig. 2.22. TkNN Precision

2.6.6 Incremental Update Time

For TkNN, when a data point moves, the point and d edges connected to it

are changed and re-encoded/encrypted. For VD-1NN, when a data point moves, d

Voronoi points and 2d Voronoi edges connected to the Voronoi points are changed

and re-encoded/encrypted.

Incremental update time has two components: reconstruction time of Delau-
nay triangulations or Voronoi diagrams, and re-encoding/encryption time of changed
points and edges. The reconstruction time is short compared to re-encoding/encryption

time.

In Figure 2.23, the average per-point incremental update time of TkNN is about
three times lower than that of VD-1NN. The average incremental update time of TkNN is
about 15 ms, which is quite affordable in practice.


Fig. 2.23. Average Incremental Update Time per moving data point (1 million points dataset)


3. SECURE PROXIMITY DETECTION

An increasing number of services that provide a geo-spatial dimension to user inter-

action are present in today’s online landscape. These range from scenarios such as

snapshot queries sent to location-based services (LBS) (e.g., GoogleMaps, Yelp) to

more complex type of interactions where users report their history of movement in

return for personalized services, location-centric recommendations, etc. Most of these

applications also involve a social media component, where users interact with online

friends based on similarity in their preferences (general profiles), as well as based on

their geographical proximity.

Several location-based social networks (LBSN) (e.g., Foursquare, Facebook Places)

allow complex location-based interaction among users. However, many such providers

are not trustworthy, and loose terms of service agreements allow them to share loca-

tion data about users with various third parties, and for purposes that are often not

in the best interest of users. Access to personal location data may allow an adver-

sary to stage a broad range of attacks, ranging from physical assault and stalking to

inferring sensitive information about an individual’s health status, financial situation

or lifestyle choices. Therefore, it is necessary to build a secure framework for sharing

and processing location data, and recent research efforts found that cryptography is

the correct direction to follow to address location privacy concerns [26, 28,29,68].

Previous work [26,28] has addressed effectively the simple scenario where a client

finds securely the result to a query for a nearby point (e.g., nearest-neighbor, or NN

queries). However, in real-life scenarios, more complex types of interaction are nec-

essary. In a typical scenario, each user may want to establish a proximity zone of

interest, e.g., the downtown area of a city, or a region comprising several city

blocks. In this context, users are eager to find friends with whom they are mutually

situated in each other’s interest zones, and they can engage together in various activ-


ities. For instance, Alice may be interested in finding friends who are situated within

several city blocks from her, and who would be interested in joining Alice for dinner.

Each user will have as personal data two objects: a point location (current user

location), and a proximity zone, in the form of a polygonal region. Users encrypt

this information, and upload it to the service provider (SP), e.g., Foursquare. Users

typically organize in groups, and they can also share encryption keys within a group,

which can be distributed using a different channel (e.g., through a secure connec-

tion established directly through their cell phone provider, encrypted email, etc.).

The challenge is to design cryptographic techniques that allow the users to evaluate

securely the proximity condition stated above, i.e., whether two users are situated

mutually in each other’s proximity zone. This condition should be evaluated by the

SP at the request of a querying user, but without requiring other users to be directly

involved in the protocol (i.e., in an “offline” setting). Due to the latter requirement,

existing interactive techniques for secure polygon enclosure evaluation [30], which are

similar in goals to our problem setting, are not suitable.

Furthermore, users may not fully trust their friends, or even if they do, they may

not want friends to always know where they are. Instead, location should be disclosed

only on a mutual basis, and only when two users are nearby. Therefore, the querying

user should not be able to learn the locations or the proximity zones of other users,

unless the evaluation results show that they satisfy the proximity condition.

Our contributions are:

• We formulate the problem of secure mutual proximity zone enclosure evaluation,

and we introduce a framework for solving it using homomorphic encryption.

• We propose a secure point evaluation method (SPEM) that allows the client to

securely determine when a querying user’s location is enclosed in the encrypted

proximity zone of another user.


Fig. 3.1. System Model

• We introduce a secure line evaluation method (SLEM) which is the basis for

evaluating whether the encrypted location of a friend is located within the

encrypted proximity zone of the querying user.

• We provide performance optimizations that allow the client to filter friends

situated far from the query zone, and thus reduce the amount of rounds that

necessitate SPEM and SLEM evaluation with rather expensive cryptographic

primitives.

• We provide security and performance analyses of the proposed scheme.

• We perform an experimental evaluation that shows that our proposed technique

scales well to datasets of up to one million users.

The remainder of the chapter is organized as follows: Section 3.1 provides nec-

essary background on the system model and building-block cryptographic primitives

used. Section 3.2 introduces the proposed protocols for mutual proximity evaluation.

Sections 3.3 and 3.4 provide the security and performance analyses of our schemes,

respectively. Section 3.5 presents the experimental evaluation results of the proposed

techniques.


3.1 Preliminaries

3.1.1 System Model

Our system model is illustrated in Figure 3.1. There are three parties in our

model, and for consistency with the closely related field of data outsourcing we refer

to them as: owner, client, and outsourced service provider (SP). However, note that

in our application model the owner and client are not disjoint entities. In fact, the

data owner in our case can be seen as the set of service users, each of whom sends

his or her encrypted location data to the server. The client is also one of these users,

who fulfils the role of querying user. The SP is representing a location-centric service,

such as the Foursquare LBSN, for instance.

Our goal is to enable the client to determine securely which are the friends with

whom she is mutually situated in proximity. We need to protect both the client’s

and the friends’ location privacy from the server. In addition, we have to protect

the friends’ locations from the client, in the sense that the client should only be able

to learn about the friends that satisfy the proximity condition, but no other users.

Figure 3.2 illustrates a case when the proximity condition is satisfied (the model we

use is similar to previous work such as [31]).

The challenging aspect of our problem is that the server needs to check the proxim-

ity condition on encrypted location data. In Section 3.2, we will introduce techniques

to securely achieve this goal. The problem is similar in nature to the secure polygon

enclosure problem [1, 27, 30–32]. In the mutual inclusion case, we can run a secure

polygon enclosure scheme such as [30] two times to check whether the client’s query

point is inside the friends’ polygon and vice versa. However, existing solutions are

interactive protocols by design, hence they need all parties to be online at evaluation

time. This is not a realistic setting in practice, as users cannot participate in intensive

computations all the time. We assume an “offline”, or asynchronous model, where the

data owner party (i.e., friends) is not involved in the proximity test. The owner only


Fig. 3.2. Proximity Definition

has to periodically upload encrypted location information to the SP. The proximity

testing protocol requires interaction only between the SP and the client.

We assume that the SP is honest but curious. The SP does not tamper with

the location data received from the users, it does not drop any messages, and also

runs the proximity detection protocol as designed. However, in addition to correctly

executing the protocol, it attempts to learn the locations of the users.

3.1.2 Paillier Cryptosystem

Our solution uses as building block the Paillier cryptosystem [23] which provides

asymmetric additively homomorphic encryption. Given the public key and the en-

cryptions corresponding to plaintexts m1 and m2, one can compute the ciphertext

for m1 + m2 without the need for the private key (i.e., without decrypting messages

first). Paillier encryption works as follows:

Key Generation chooses two random large prime numbers p and q such that

gcd(pq, (p − 1)(q − 1)) = 1. Both primes must have the same length. Then, the

modulus n is computed as n = pq and λ = lcm(p − 1, q − 1). Next, a random integer
g is selected, where g ∈ Z*_{n^2}. The public key is (n, g) and the private key is (λ, µ),
where µ = (L(g^λ mod n^2))^{-1} mod n and L(u) = (u − 1)/n.


Encryption. Given a plaintext message m ∈ Z_n, select a random r ∈ Z*_n and compute the ciphertext as:

c = g^m * r^n mod n^2   (3.1)

Decryption. Let c be the ciphertext to decrypt, where c ∈ Z*_{n^2}. Plaintext m is
determined as follows:

m = L(c^λ mod n^2) * µ mod n   (3.2)

The Paillier cryptosystem has the additive homomorphic property. It also allows

multiplication with a plaintext pt under the ciphertext. Specifically:

D(E(m1, r1) * E(m2, r2) mod n^2) = (m1 + m2) mod n   (3.3)

D(E(m1, r1)^{pt} mod n^2) = (m1 * pt) mod n   (3.4)
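The following self-contained Java sketch (a minimal textbook implementation for illustration, not the code used in our prototype; it adopts the common simplification g = n + 1, draws r uniformly, and ignores padding, message encoding and side-channel issues) shows key generation, encryption, decryption, and the homomorphic operations of Eq. 3.3 and 3.4.

    import java.math.BigInteger;
    import java.security.SecureRandom;

    // Minimal Paillier sketch: additively homomorphic public-key encryption.
    public class Paillier {
        private static final SecureRandom RND = new SecureRandom();
        private final BigInteger n, nsq, g, lambda, mu;

        public Paillier(int modulusBits) {
            BigInteger p = BigInteger.probablePrime(modulusBits / 2, RND);
            BigInteger q = BigInteger.probablePrime(modulusBits / 2, RND);
            n = p.multiply(q);
            nsq = n.multiply(n);
            g = n.add(BigInteger.ONE);                        // common choice g = n + 1
            BigInteger p1 = p.subtract(BigInteger.ONE), q1 = q.subtract(BigInteger.ONE);
            lambda = p1.multiply(q1).divide(p1.gcd(q1));      // lcm(p - 1, q - 1)
            mu = L(g.modPow(lambda, nsq)).modInverse(n);      // (L(g^lambda mod n^2))^(-1) mod n
        }

        private BigInteger L(BigInteger u) { return u.subtract(BigInteger.ONE).divide(n); }

        public BigInteger encrypt(BigInteger m) {
            BigInteger r = new BigInteger(n.bitLength(), RND)
                    .mod(n.subtract(BigInteger.ONE)).add(BigInteger.ONE);  // random r in [1, n-1]
            return g.modPow(m, nsq).multiply(r.modPow(n, nsq)).mod(nsq);   // c = g^m * r^n mod n^2
        }

        public BigInteger decrypt(BigInteger c) {
            return L(c.modPow(lambda, nsq)).multiply(mu).mod(n);           // m = L(c^lambda) * mu mod n
        }

        // Eq. 3.3: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
        public BigInteger add(BigInteger c1, BigInteger c2) { return c1.multiply(c2).mod(nsq); }

        // Eq. 3.4: E(m)^pt mod n^2 decrypts to m * pt.
        public BigInteger mulPlain(BigInteger c, BigInteger pt) { return c.modPow(pt, nsq); }

        public static void main(String[] args) {
            Paillier ph = new Paillier(1024);
            BigInteger c1 = ph.encrypt(BigInteger.valueOf(15));
            BigInteger c2 = ph.encrypt(BigInteger.valueOf(27));
            System.out.println(ph.decrypt(ph.add(c1, c2)));                         // 42
            System.out.println(ph.decrypt(ph.mulPlain(c1, BigInteger.valueOf(3)))); // 45
        }
    }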

3.1.3 GT Protocol

Our solution also makes use of the secure comparison greater-than (GT) protocol,

proposed in [24]. By using the GT protocol, two parties holding one number each

can determine which one is greater between the numbers, without revealing any in-

formation to each other, except for the comparison outcome. The GT protocol uses

ElGamal encryption as a building block [25]. GT works as follows: given two values
x and y, denote the 1-encoding of x by S^1_x and the 0-encoding of y by S^0_y. Then, x
is greater than y if and only if S^1_x and S^0_y have a common element. Assume an n-bit
number s = s_n s_{n-1} ... s_1 ∈ {0, 1}^n. The 1- and 0-encodings are defined as:

S^1_s = {s_n s_{n-1} ... s_i | s_i = 1, 1 ≤ i ≤ n}   (3.5)

S^0_s = {s_n s_{n-1} ... s_{i+1} 1 | s_i = 0, 1 ≤ i ≤ n}   (3.6)

For example, let x = 6 = 110 and y = 2 = 010. We have S^1_x = {1, 11} and S^0_y = {1, 011}.
Since S^1_x ∩ S^0_y ≠ Ø, it results that x > y. On the other hand, if x = 2 = 010 and
y = 6 = 110, we have S^1_x = {01} and S^0_y = {111}. Since S^1_x ∩ S^0_y = Ø, it holds that x ≤ y.


If Alice and Bob compare the elements of S^1_x and S^0_y one by one, the protocol
will require O(n^2) comparisons. However, they only need to compare the corresponding
strings of the same length in S^1_x and S^0_y. This reduces the number of comparisons
to O(n). So Alice, who holds the number x = x_n x_{n-1} ... x_1, prepares a 2 * n table
T[i, j], i ∈ {0, 1}, 1 ≤ j ≤ n, by using ElGamal encryption:

T[x_j, j] = E(1) and T[1 − x_j, j] = E(r_j) for some random r_j

Then Bob, who holds the number y = y_n y_{n-1} ... y_1, computes the following for each
t = t_n t_{n-1} ... t_i ∈ S^0_y:

c_t = T[t_n, n] ⊙ T[t_{n-1}, n−1] ⊙ ... ⊙ T[t_i, i]

Finally, Alice decrypts each received c_t, obtaining D(c_t) = m_t, and determines that x > y if and
only if some m_t = 1. Note that, if S^1_x and S^0_y have a common element, then m_t = 1
by the properties of ElGamal encryption. Therefore, one of the parties (say, Alice)
must compute 2 * n ElGamal encryptions.

As an example, consider x = 6 = 110 and y = 2 = 010. Alice computes the 2 * n
table T[i, j] as follows (n = 3, columns listed from j = 3 down to j = 1) and sends the resulting values to Bob:

T = {{E(r), E(r), E(1)}, {E(1), E(1), E(r)}}

Next, Bob computes c_t for each t = t_n t_{n-1} ... t_i ∈ S^0_y and sends the results to Alice.
Since S^0_y = {1, 011}, then:

c_{t=1} = T[1, 3] = E(1)

c_{t=011} = T[0, 3] ⊙ T[1, 2] ⊙ T[1, 1] = E(r) ⊙ E(1) ⊙ E(r) = E(r^2)

When Alice decrypts each c_t to obtain m_t, since some m_t = 1, she determines that x > y.


Fig. 3.3. Polygon Enclosure Evaluation Example

On the other hand, consider the case when x = 2 = 010 and y = 6 = 110. Alice
first computes the table as:

T = {{E(1), E(r), E(1)}, {E(r), E(1), E(r)}}

The table is sent to Bob, who next computes c_t and sends the result back to Alice.
Since S^0_y = {111}, the result is:

c_{t=111} = T[1, 3] ⊙ T[1, 2] ⊙ T[1, 1] = E(r) ⊙ E(1) ⊙ E(r) = E(r^2)

When Alice decrypts D(c_t) = m_t, she finds that it is not equal to 1, hence she
determines that x ≤ y.
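To make the 0/1-encodings concrete, the following Java sketch computes S^1_x and S^0_y and tests them for a common element in plaintext (a sketch of the encodings only: the ElGamal table T, the randomization, and the client/server interaction of the actual GT protocol are deliberately omitted).

    import java.util.HashSet;
    import java.util.Set;

    // Plaintext 0/1-encodings underlying the GT protocol (Eq. 3.5 and 3.6).
    public class GtEncodings {

        // 1-encoding of x: every prefix (MSB first) that ends in a 1-bit.
        static Set<String> oneEncoding(int x, int nBits) {
            Set<String> s = new HashSet<>();
            String bits = toBits(x, nBits);
            for (int i = 0; i < nBits; i++)
                if (bits.charAt(i) == '1') s.add(bits.substring(0, i + 1));
            return s;
        }

        // 0-encoding of y: for every 0-bit, take the prefix strictly before it and append a 1.
        static Set<String> zeroEncoding(int y, int nBits) {
            Set<String> s = new HashSet<>();
            String bits = toBits(y, nBits);
            for (int i = 0; i < nBits; i++)
                if (bits.charAt(i) == '0') s.add(bits.substring(0, i) + "1");
            return s;
        }

        static String toBits(int v, int nBits) {
            StringBuilder sb = new StringBuilder();
            for (int i = nBits - 1; i >= 0; i--) sb.append((v >> i) & 1);
            return sb.toString();
        }

        // x > y iff the two encodings share an element.
        static boolean greaterThan(int x, int y, int nBits) {
            Set<String> sx = oneEncoding(x, nBits);
            sx.retainAll(zeroEncoding(y, nBits));
            return !sx.isEmpty();
        }

        public static void main(String[] args) {
            System.out.println(greaterThan(6, 2, 3)); // true:  {1, 11} and {1, 011} share "1"
            System.out.println(greaterThan(2, 6, 3)); // false: {01} and {111} are disjoint
        }
    }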

3.2 Secure Proximity Detection

Our objective is to devise secure methods for mutual proximity zone inclusion

testing. Specifically, the client must be able to determine which friends are in her

proximity zone, and vice-versa. This is equivalent with a double point-in-polygon

enclosure check, as shown in Figure 3.2. The proposed solution comprises of three

steps.

The first step is a filtering step which allows the SP to filter out polygons whose

MBRs do not include the location (which we refer to as query point) of the client.

In the second step, for each matching polygon (i.e., which was not filtered out in


Fig. 3.4. SPEM Protocol

the first step), the goal is to find two of its sides which intersect the vertical line

passing through the client’s query point. Since the polygon is assumed convex, there

will always be two such lines. In the third step, we determine where exactly the

query point is situated relative to the two lines. Since the main functionality of

our cryptographic protocol lies in step three, we discuss it first. The first two steps

are auxiliary processes that support step three, so we defer their presentation until

Sections 3.2.5 and 3.2.6, respectively.

3.2.1 Secure Point Evaluation Method (SPEM)

Consider the case of the querying client who wants to determine securely whether

her location (i.e., query point) is situated in the upper side of a line (we assume

geographical orientation, where “upper” stands for Northern, although this is just a

convention, and the directions could be reversed). As illustrated in Figure 3.3, if the

query point is to the lower side of line L2 and to the upper side of line L3, it is inside

the polygon. Note that, the vertical line passing through the query point intersects

the lines L2 and L3 in this example. We propose a secure point evaluation method

(SPEM) that allows a client to verify this condition.


The SPEM protocol is illustrated in Figure 3.4. Each user (i.e., friend) in the
dataset has a line (i.e., the “upper” line) with equation

y = S(x− xi) + yi (3.7)

where S is the slope of the line and Pi(xi, yi) is a point on the line, as shown in

Figure 3.3. The client has a query point Q(xq, yq). Equation (3.7) can be modified

as follows:

L = y − S ∗ x = −S ∗ xi + yi = −R (3.8)

In Equation (3.8), the right-hand side R has a fixed value (i.e., independent of the

query) and the left-hand side L is variable according to the query point. Therefore,

each user encrypts the right-hand side R for his equation using Paillier encryption,

and then encrypts the result using AES encryption as follows (the reasons for the

double encryption will be revealed later):

EA(E(R)) = EA(E(S ∗ xi − yi)) (3.9)

where E() denotes Paillier encryption and EA() denotes AES encryption. The

encrypted value is periodically uploaded to the SP as a location update. The SP has

the private key of the Paillier key pair, and the client has the public key, as well as

the secret AES key. In addition, the owner encrypts the slope S, also with double

encryption, as follows:

EA(E(S)) (3.10)

The latter encrypted item is also sent to SP. When the client sends a query to the

SP, both encrypted items are sent back to the client (i.e., only for users that passed

the filtering step). The client removes the AES layer and then, using the
homomorphic property of Paillier encryption, computes the encrypted left-hand side
of Eq. 3.8 as follows:

E(L) = E(y_q) * E(S)^{-x_q} = E(y_q − S * x_q)   (3.11)

Furthermore,

E(L+R) = E((yq − S ∗ xq) + (S ∗ xi − yi)) = E(yq − (S ∗ (xq − xi) + yi)) (3.12)


The obtained result could be sent by the client to the SP, which decrypts it since

it knows the Paillier private key. If the decrypted value L+R is greater than zero, it

signifies that the query point is on the upper side of the line, since in the equation

above S ∗ (xq − xi) + yi is the value obtained when we plug xq into the line equation.

Therefore, if yq is greater than the value, it means that the query point is in the

upper side of the line. However, doing so reveals to the SP the value L + R, which

may lead the SP to infer information about the user location. To protect against

this disclosure, we employ additive blinding, whereby the client selects a random

number k and computes:

E(L+R)E(k) = E(L+R + k) (3.13)

This value is sent instead to the SP, which obtains L + R + k. Next, the client

starts execution of the GT protocol [24], in order to learn which one is greater between

k and L+R+ k. If L+R+ k is greater than k, it means that L+R is greater than

zero. Thus, the query point must be in the upper side of the line. By using the GT

protocol, the SP does not learn the result of the comparison protocol, and does not

know L + R because of the random additive term k. In Section 3.3, we provide a

security analysis of the additive blinding. In brief, when L + R has n bits and k has
n′ bits, the domain size of L + R is m = 2^n, that of k is m′ = 2^{n′}, and the domain
size of L + R + k is m′ + m − 1. The probability that the SP learns any information
about L + R is 1/2^{n′−n}. For instance, when n = 64 and n′ = 128, the probability is
approximately 1/2^{64}.
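As an illustration of the client-side arithmetic (a sketch only, reusing the hypothetical Paillier class from the example in Section 3.1.2; in the protocol the client holds only the Paillier public key plus the AES key, coordinates are assumed to be encoded as integers, and the subsequent GT interaction is not shown), the blinded value of Eq. 3.13 can be assembled as follows.

    import java.math.BigInteger;

    // Client-side SPEM computation: build E(L + R + k) from the AES-unwrapped
    // Paillier ciphertexts E(S) and E(R) received from the SP.
    public class SpemClient {
        static BigInteger blindedValue(Paillier paillier, BigInteger encS, BigInteger encR,
                                       long xq, long yq, BigInteger k) {
            // E(L) = E(yq) * E(S)^(-xq) = E(yq - S * xq)   (Eq. 3.11)
            BigInteger encL = paillier.add(paillier.encrypt(BigInteger.valueOf(yq)),
                                           paillier.mulPlain(encS, BigInteger.valueOf(-xq)));
            // E(L + R + k) = E(L) * E(R) * E(k)            (Eq. 3.13), sent to the SP
            return paillier.add(paillier.add(encL, encR), paillier.encrypt(k));
        }
    }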

As discussed in Section 3.1.3, the GT protocol requires the client to compute 2∗n

ElGamal encryptions, where n is the number of bits of the value, and the SP computes

the GT encoding and the multiplication from the client’s ElGamal encrypted items.

At the end of the protocol, the client learns the comparison outcome, and nothing else.

Most computational overhead of the GT protocol comes from computing ElGamal

encryptions at the client. In the final step of SPEM, the client selects a random

number k and it must compute 2 ∗n ElGamal encryptions for the random number k.


Fig. 3.5. t-SPEM Protocol

Note that, this protocol significantly reduces the amount of disclosure compared

to the earlier work in [1]. In [1], the client learns the slope S of the line for each

polygon that passes the filtering step. In the protocol proposed here,
the slope S is encrypted using Paillier encryption, for which the client does not have
the decryption key. So neither the client nor the SP learns any information about
locations or slopes.

3.2.2 Secure Point Evaluation Method for Two Lines (t-SPEM)

To securely evaluate polygon enclosure, the client must be able to determine where

the query point is situated against two lines, specifically the two lines of a polygon that

intersect the vertical line passing through the query point. Using SPEM, the client

can learn its positioning relative to a single line. However, to evaluate enclosure, the

client needs the comparison outcome with respect to two lines. For instance, when

the query is point Q1 illustrated in Figure 3.3, the client learns that Q1 is to the

upper side of L2 and L3. Therefore, the client determines that the point is outside

the polygon. The method we present next, called Secure Point Evaluation Method

for Two Lines (t-SPEM), achieves this objective.

When there are two lines y = S2 ∗ (x − x2) + y2 and y = S3 ∗ (x − x3) + y3 as

in Figure 3.3, if L2 + R2 < 0 and L3 + R3 > 0, then the query point is inside the


polygon. This is equivalent to checking that their product (L2 + R2)(L3 + R3) < 0.

The client determines:

E((Li +Ri)(Lj +Rj)) = E(LiLj)E(LiRj)E(RiLj)E(RiRj) (3.14)

Furthermore, E(LiLj) can be computed as follows:

E(L_i L_j) = E(L_i)^{L_j} = E(L_i)^{y_q − S_j * x_q} = E(L_i)^{y_q} * E(S_j L_i)^{−x_q}   (3.15)

whereas E(SjLi) is given by:

E(S_j L_i) = E(S_j * (y_q − S_i * x_q)) = E(S_j)^{y_q} * E(S_i S_j)^{−x_q}   (3.16)

The item E(SiSj) is uploaded to SP by each user when her location is updated.

Next, E(LiRj) can be computed as follows:

E(L_i R_j) = E(R_j)^{L_i} = E(R_j)^{y_q − S_i * x_q} = E(R_j)^{y_q} * E(S_i R_j)^{−x_q}   (3.17)

where E(SiRj) is also uploaded with the location update. E(RiLj) can be com-

puted in a similar way and E(RiRj) is uploaded by the users. The t-SPEM protocol

is illustrated in Figure 3.5.

Every user sends periodically to the SP EA(E(RiRj)), EA(E(SiSj)), EA(E(SiRj)),

and EA(E(SjRi)) as well as EA(E(Ri)), EA(E(Si)), EA(E(Rj)), and EA(E(Sj)).

When the client requests a proximity evaluation, the values for the polygons that

pass the filter in step 1 are sent to the client by the SP. Then, the client selects a

random k and computes E((Li +Ri)(Lj +Rj) + k). The server decrypts the message

from the client and obtains (Li + Ri)(Lj + Rj) + k, and initiates the GT protocol.

Finally, the client determines whether (Li +Ri)(Lj +Rj) is less than zero. If so, the

query point must be inside the polygon.

3.2.3 Secure Line Evaluation Method (SLEM)

So far, we have seen how a client can learn securely whether another user’s polygon

(i.e., proximity zone) encloses the client’s location. Next, we show how to solve the


Fig. 3.6. SLEM Protocol

converse problem, namely, determine securely whether a friend’s location is inside

the proximity zone of the querying user. Recall that, we are solving this problem

in the more challenging case of a non-interactive protocol (i.e., “offline” case), where

only the querying user and the SP participate in the computation, but not the other

users. The SLEM and its associated t-SLEM protocol that we present next address

this problem.

For the secure line evaluation method (SLEM), each user has a data point P (xp, yp)

and the client has a line y = S(x−xi) + yi. The client must determine whether point

P is to the upper side of the line.

The SLEM protocol is illustrated in Figure 3.6. Each user sends periodically

EA(E(xp)) and EA(E(yp)) to the SP. The client receives these encrypted items from

the SP for all users that pass the filtering step, and computes the following:

E(R) = E(S ∗ xi − yi) (3.18)

E(L) = E(y_p − S * x_p) = E(y_p) * E(x_p)^{−S}   (3.19)

E(L)E(R)E(k) = E(L+R + k) (3.20)

where k is a random number. Then, the client sends E(L + R + k) to the SP which

obtains L + R + k, following which the client starts the GT protocol. Finally, the


client determines whether L + R is greater than zero. If so, the data point P is to

the upper side of the line.
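Mirroring the SPEM example, the client-side SLEM arithmetic can be sketched as follows (again reusing the hypothetical Paillier class; here the client knows its own line parameters S, x_i, y_i in plaintext and receives the friend's encrypted coordinates E(x_p), E(y_p) from the SP, while the GT step is omitted).

    import java.math.BigInteger;

    // Client-side SLEM computation: build E(L + R + k) from E(x_p), E(y_p).
    public class SlemClient {
        static BigInteger blindedValue(Paillier paillier, BigInteger encXp, BigInteger encYp,
                                       long s, long xi, long yi, BigInteger k) {
            // E(R) = E(S * xi - yi)                        (Eq. 3.18)
            BigInteger encR = paillier.encrypt(BigInteger.valueOf(s * xi - yi));
            // E(L) = E(yp) * E(xp)^(-S) = E(yp - S * xp)   (Eq. 3.19)
            BigInteger encL = paillier.add(encYp, paillier.mulPlain(encXp, BigInteger.valueOf(-s)));
            // E(L + R + k), sent to the SP before the GT protocol (Eq. 3.20)
            return paillier.add(paillier.add(encL, encR), paillier.encrypt(k));
        }
    }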

3.2.4 Secure Line Evaluation Method for Two Lines (t-SLEM)

Using SLEM, the client can determine the placement of non-filtered users relative

to the two lines of the client. However, when we use SLEM, the client learns the

individual result of the positioning test against each line. This is additional disclosure

that we want to avoid. Therefore, the client must learn only whether the data point

is between the two lines or not.

The secure line evaluation method for two lines (t-SLEM) that we present next

achieves this purpose. t-SLEM is illustrated in Figure 3.7. Similar to t-SPEM, the

client needs to compute Ri, Rj, E(Li), E(Lj) and the following expression:

E((Li +Ri)(Lj +Rj)) = E(LiLj)E(LiRj)E(RiLj)E(RiRj) (3.21)

First, E(LiLj) is computed as:

E(L_i L_j) = E((y_p − S_i * x_p)(y_p − S_j * x_p)) = E(y_p^2 − (S_i + S_j) * x_p y_p + S_i S_j * x_p^2)
= E(y_p^2) * E(x_p y_p)^{−(S_i + S_j)} * E(x_p^2)^{S_i S_j}   (3.22)

EA(E(y_p^2)), EA(E(x_p y_p)), and EA(E(x_p^2)) are uploaded to the SP periodically, as

location updates, by all users. Next, E(LiRj) is computed as follows:

E(L_i R_j) = E(y_p − S_i * x_p)^{R_j} = (E(y_p) * E(x_p)^{−S_i})^{R_j}   (3.23)

E(RiLj) can be computed in a similar way, and E(RiRj) is computed by the client.

Then, the client selects a random number k and computes E((Li +Ri)(Lj +Rj) + k)

which is sent to the SP. The SP decrypts the message using the Paillier private key

to obtain (Li +Ri)(Lj +Rj) + k, and the client starts the GT protocol. Finally, the

client learns whether (Li + Ri)(Lj + Rj) is less than zero, which signifies that the

location of the friend is inside the client’s polygon. Meanwhile, the SP does not learn

anything.


Fig. 3.7. t-SLEM Protocol

3.2.5 MBR Filtering

The protocols introduced so far can be rather expensive, due to the use of Paillier

and ElGamal encryption (the latter is used in GT). We introduce as an optimization

a Minimum Bounding Rectangle (MBR) filtering step, which executes as the first

step of the solution. Securely checking enclosure in a rectangle is much simpler and
less expensive than checking polygon inclusion. We perform filtering in two ways: first,
we consider the case of mutable order-preserving encryption (mOPE) [6], and second
we use kd-trees [22] and Paillier encryption. Using mOPE is fast and is IND-OCPA
secure, but it can reveal the relative ordering of MBRs. Using kd-trees and Paillier
encryption is slower, but is able to prevent ordering leakage.

Using Mutable Order Preserving Encryption

mOPE is an order-preserving encryption method [6] and works by building a value

tree structure. The SP builds the tree. Each node has an AES encrypted message

EA(v) where v is the plaintext value. When a user encrypts a value v (in our case

locations), it requests the root of the tree from the SP. The user decrypts EA(root)

and if the value v is less than root, it sends a request for the left subtree to the

SP, otherwise it requests the right subtree. The search continues recursively on the


Fig. 3.8. MBR Filtering

selected branch until either an equal value is found or a leaf is reached; in the latter case, the new value is inserted at that leaf. This way, the SP learns the relative ordering of values without knowing the actual values.
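The following Java sketch illustrates the interactive navigation described above; it simulates the client and the SP in one program, and the class and method names are illustrative rather than taken from the mOPE paper or our prototype.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of interactive mOPE insertion: the SP stores only AES ciphertexts arranged in a
// binary search tree, and the client (who holds the AES key) steers the search by
// decrypting one node per round. The SP ends up knowing only the relative order.
public class MopeSketch {
    static class Node { byte[] ct; Node left, right; Node(byte[] ct) { this.ct = ct; } }

    static SecretKey key;                       // held by the client only
    static Node root;                           // held by the SP only

    static byte[] aes(int mode, byte[] in) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(mode, key);
        return c.doFinal(in);
    }
    static byte[] enc(long v) throws Exception { return aes(Cipher.ENCRYPT_MODE, ByteBuffer.allocate(8).putLong(v).array()); }
    static long dec(byte[] ct) throws Exception { return ByteBuffer.wrap(aes(Cipher.DECRYPT_MODE, ct)).getLong(); }

    // Client-side insert: repeatedly fetch the current node, decrypt it (only the client
    // can do this), and tell the SP which branch to follow.
    static void insert(long v) throws Exception {
        byte[] ct = enc(v);
        if (root == null) { root = new Node(ct); return; }
        Node cur = root;
        while (true) {
            long plain = dec(cur.ct);
            if (v == plain) return;             // value already present
            if (v < plain) { if (cur.left == null)  { cur.left = new Node(ct);  return; } cur = cur.left; }
            else           { if (cur.right == null) { cur.right = new Node(ct); return; } cur = cur.right; }
        }
    }

    // In-order walk at the SP: without the key, the SP still recovers the relative order.
    static void inorder(Node t, List<byte[]> out) {
        if (t == null) return;
        inorder(t.left, out); out.add(t.ct); inorder(t.right, out);
    }

    public static void main(String[] args) throws Exception {
        key = KeyGenerator.getInstance("AES").generateKey();
        long[] bounds = {530, 120, 910, 340};   // e.g., MBR coordinates to be ordered
        for (long v : bounds) insert(v);
        List<byte[]> ordered = new ArrayList<>();
        inorder(root, ordered);
        System.out.println("SP holds " + ordered.size() + " ciphertexts in sorted order, but no plaintexts.");
    }
}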

In our scheme, each user computes the MBR of their proximity zone, encrypts the

MBR using mOPE, and sends the lower bound (xl, yl) and upper bound (xu, yu) to

the SP as follows:

E(xl), E(yl), E(xu), E(yu) (3.24)

The client also encrypts its query point (xq, yq) using mOPE:

E(xq), E(yq) (3.25)

When the SP receives the query from the client, it checks whether the MBR contains

the query point:

E(xl) < E(xq) < E(xu) (3.26)

E(yl) < E(yq) < E(yu) (3.27)

Thanks to the mOPE properties, the SP can determine ordering using ciphertexts.

MBR filtering using mOPE is fast. However, the SP learns the order of MBRs on

either the longitude or latitude dimension, or both.


Filtering with k-d-trees

The idea behind this approach is to build an index of MBR objects based on the

coordinates of the lower-left and upper-right MBR corners encrypted with Paillier en-

cryption. Figure 3.8 illustrates an example, where each MBR has two points Pl(xl, yl)

and Pu(xu, yu). Before polygon enclosure evaluation, the client determines whether

the query point is inside the MBR of the polygon. The following conditions must be

satisfied:

xl < xq < xu and yl < yq < yu

The conditions can be re-written as follows.

(xq − xl)(xq − xu) < 0 and (yq − yl)(yq − yu) < 0

To evaluate the first condition, each user sends EA(E(xl)), EA(E(xu)), and EA(E(xlxu))

to the SP. The client receives them from the SP and computes the following:

E((xq − xl)(xq − xu) + k) = E(xq^2 − (xl + xu) ∗ xq + xl ∗ xu) ∗ E(k)

= E(xq^2) ∗ E(xl + xu)^(−xq) ∗ E(xl ∗ xu) ∗ E(k)

where k is a random number. Next, by using the GT protocol, the client evaluates the

first condition. The second condition can be evaluated in a similar way. However,

in doing so, the client learns the individual result of each evaluation, whereas our

privacy model requires that the client learns only whether the query point is inside

the MBR. We address this aspect next.

The proposed MBR filtering protocol is illustrated in Figure 3.9. Each user sends

periodically EA(E(xl)), EA(E(xu)), EA(E(yl)), and EA(E(yu)) to the SP.

The client selects a random number k and computes the following:

E(xq) ∗ E(xl)^(−1) ∗ E(k) = E(xq − xl + k) = E(v1)

E(xu) ∗ E(xq)^(−1) ∗ E(k) = E(xu − xq + k) = E(v2)

E(yq) ∗ E(yl)^(−1) ∗ E(k) = E(yq − yl + k) = E(v3)

E(yu) ∗ E(yq)^(−1) ∗ E(k) = E(yu − yq + k) = E(v4)


Next, the client permutes the values and sends to the SP:

π(E(v1), E(v2), E(v3), E(v4))

The SP decrypts them to obtain π(v1, v2, v3, v4), permutes them again, and the client and the SP initiate the GT protocol. If all values vi are greater than k, i.e., all four underlying differences are positive, the query point is inside the MBR; otherwise, it is not. However, since the values are permuted, the server and the client do not know which individual conditions are satisfied.
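The following sketch traces only the arithmetic of this filtering step on plaintext values; the Paillier and AES layers and the GT protocol are elided, and all names and values are illustrative.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Plaintext trace of the MBR filtering logic: in the real protocol the four differences
// are computed homomorphically under Paillier encryption, permuted, and compared with k
// via the GT protocol; here we only check the arithmetic and the effect of permutation.
public class MbrFilterSketch {
    public static void main(String[] args) {
        long xl = 100, yl = 200, xu = 400, yu = 600;   // a user's MBR (lower and upper corners)
        long xq = 250, yq = 550;                        // the client's query point
        long k  = 1_000_003;                            // additive blinding value

        // The four blinded differences the client would compute under encryption.
        List<Long> v = Arrays.asList(xq - xl + k,       // v1
                                     xu - xq + k,       // v2
                                     yq - yl + k,       // v3
                                     yu - yq + k);      // v4

        Collections.shuffle(v);                          // permutation pi: hides which side is which
        boolean inside = v.stream().allMatch(x -> x > k);// all four differences positive <=> point in MBR
        System.out.println("query point inside MBR: " + inside);
    }
}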

When there are n polygons, in order to find polygons whose MBRs contain the

query point, we would have to run this protocol n times. However, if we use k-d-trees [22], we can reduce the complexity to O(log n). According to the position of the data

points, the space is divided into four subspaces which have the same number of data

points at the root level. In addition, the order of entries in each node is permuted,

as shown in Figure 3.10, for additional protection from the SP.

MBR filtering checks whether each MBR contains the query point. As shown in

Figure 3.11, the MBR of a polygon may be larger than the subspace in which the data

point is situated in the kd-tree. We use a slight variation of the indexing structure,

which we call the k-d*-tree, where an MBR in a node of the tree may overlap with MBRs in other nodes. In order to find the appropriate leaf node, we may need to check more than log_4 n nodes, and possibly more than a single leaf node. However, as we discuss later in the experimental evaluation, the number of leaf nodes checked in practice is close to one on average. For instance, when there are 1M data points,

the average number of leaf nodes visited in MBR filtering is 0.96.

3.2.6 Finding Two Lines for t-SPEM

In the previous section, the client finds polygons whose MBRs contain the query

point. The next step is to find two lines of the polygons which the vertical line passing

through the query point intersects. Figure 3.8 illustrates a polygon and a query point

in the MBR of each polygon. Note that, the vertical line passing through the query


Fig. 3.9. MBR Filtering Protocol

Fig. 3.10. kd tree

Fig. 3.11. A node in kd*-tree

point always intersects two sides of a convex polygon. In Figure 3.8, there are two

intervals [xl, xi] and [xi, xu] since the polygon is a triangle. For general polygons, there


Fig. 3.12. Finding Two Lines for t-SPEM

may be more intervals. In order to find two sides which the vertical line intersects,

the client should be able to determine in which interval the query point is.

The protocol is illustrated in Figure 3.12. If the query point is inside an interval

[xi, xj], it should satisfy the following.

xi < xq < xj

which is equivalent to

(xq − xi)(xq − xj) < 0

Hence, the client must compute the following.

E(Ii + k) = E((xq − xi)(xq − xj) + k)

= E(xq^2 − (xi + xj) ∗ xq + xi ∗ xj) ∗ E(k)

where k is a random number. The client sends it to the SP which decrypts it, and

the client then initiates the GT protocol. If Ii + k < k, it means that the vertical line

meets two sides in the interval.

However, in this case, the client knows in which interval the query point is. To

prevent the client from learning individual results, the SP can permute the values as

follows.

π(E(I1 + k), ..., E(Il + k))


In addition, the client can permute the values in t-SPEM with the same order. The

SP permutes them again. If there is an interval containing the query point, and the

query point is between the corresponding two sides in the interval, then the query

point is inside the polygon. However, since the values are permuted, the client and

the SP do not learn in which interval the query point is.
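A plaintext trace of this interval search is sketched below; the Paillier blinding and the permutation exchanged with the SP are elided, and the coordinates are illustrative.

// Plaintext trace of the interval test: for each consecutive pair of x-coordinates of the
// polygon's vertices, (xq - xi)(xq - xj) < 0 exactly when the vertical line x = xq crosses
// the polygon sides spanning that interval. In the real protocol each product is
// Paillier-blinded with k and the results are permuted before the GT protocol.
public class IntervalTestSketch {
    public static void main(String[] args) {
        long[] xs = {100, 250, 400, 700};   // sorted x-coordinates bounding the intervals
        long xq = 320;                      // query point's x-coordinate

        for (int i = 0; i + 1 < xs.length; i++) {
            long product = (xq - xs[i]) * (xq - xs[i + 1]);
            if (product < 0) {
                System.out.println("vertical line crosses interval [" + xs[i] + ", " + xs[i + 1] + "]");
            }
        }
    }
}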

Fig. 3.13. Finding Two Lines for t-SLEM

3.2.7 Finding Two Lines for t-SLEM

Similar to the case of t-SPEM, we determine an interval of the client’s polygon

which the vertical line intersects for t-SLEM. In this case, each user has a data point

P (xp, yp) and the client has l intervals [x′i, x′j] where 1 ≤ i ≤ l and j = i + 1. This

protocol is illustrated in Figure 3.13. The client needs to determine:

E((xp − x′i)(xp − x′j))E(k) (3.28)

based on the following values received from the SP:

EA(E(xp)) and EA(E(xp^2)) (3.29)

Through the homomorphic property of Paillier encryption, the client determines:

E(xp^2) ∗ E(xp)^(−(x'i + x'j)) ∗ E(x'i ∗ x'j) ∗ E(k) = E(I'i + k) (3.30)


Since there are l intervals, the client computes l values and permutes them as follows.

π(E(I'1 + k), ..., E(I'l + k)) (3.31)

When the SP receives the values, it decrypts and permutes them again. The client

initiates the GT protocol, and if one of the resulting values is less than k, it means

that the vertical line passing through the data point intersects the client’s polygon in

the interval.

3.2.8 Complete Protocol

Given n polygons at the server, in the first step the client finds several polygons

whose MBRs contain the query point by using MBR filtering. The second step is to

find two lines of the remaining users’ polygons that intersect the vertical line pass-

ing through the query point. This step is done along with the t-SPEM protocol.

The third step finds two lines of the client’s polygon which intersect the vertical line

passing through the data point of the users. This step is performed along with the

t-SLEM protocol. If the client knew the individual results of the second and third steps, it could learn one-sided information: in some cases, the friend's data point is inside the client's polygon but not the other way around, and the client should not learn which of these cases occurred. Therefore, when the second step and the

third step are completed, the values resulting from the two steps are permuted by the

client and the SP. Hence, the client does not learn the individual results and it only

learns the combined outcome.

3.3 Security Discussion

Privacy of Client Location. The SP must not learn the location of the client,

nor the proximity zone of the client. For polygon enclosure evaluation, there are three

steps. First, in MBR filtering, the SP learns only whether the query point is inside

the MBR of the polygon of other users. If the query point is not inside the MBR,


Fig. 3.14. Additive Blinding: c = v + k

the SP does not learn where the query point is placed. Furthermore, since the client

permutes the values E(v1),E(v2),E(v3), and E(v4) (Section 3.2.5), the SP does not

learn where the query point is placed relative to the MBR.

Second, in the process of finding two lines, when there are l intervals, the client

permutes the values E(I1 + k), ..., E(Il + k). Hence, the SP does not know in which

interval the query point is situated. The SP learns only that there is an interval in

which the query point is.

Third, when running t-SPEM, the client computes values E((Li+Ri)(Lj+Rj)+k)

for each interval, hence the SP does not learn the value of (Li +Ri)(Lj +Rj).

Privacy of Friends’ Locations. Location data of friends must be protected

against both the SP and the client. In MBR filtering, the client learns only whether the query point is inside an MBR. No MBR coordinates are disclosed. Further-

more, the client permutes the values E(v1), E(v2), E(v3), and E(v4), and the SP

permutes them again during the GT protocol. The client learns the results of the GT

protocol, but the client does not learn the outcome of individual comparisons, as the

values during GT are permuted.

In the process of finding two lines, the client permutes the values E(I1 + k), ...,

E(Il + k) and the client starts the GT protocol. The client does not learn in which


interval of the polygon the query point is situated. In t-SPEM, the client permutes

the values E((Li +Ri)(Lj +Rj) + k) for each interval with the same order as in the

second step. The client determines whether there is an interval in which the query

point is situated, but does not learn anything about the specific polygon, except the

outcome of the enclosure evaluation. In addition, the SP does not learn the outcomes

of finding the two lines and t-SPEM.

Similarly, when evaluating enclosure of the location of a user in the polygon of

the client, the SP does not learn anything about any of these two items, other than

the outcome of polygon enclosure evaluation.

Additive Blinding. The final step of SPEM is to check whether L + R > 0,

without letting the SP know the value of L+R. When the client computes E(L+R),

it selects a random number k and computes E(L + R)E(k) = E(L + R + k). Then,

the SP obtains the plaintext L+R+k and the client starts the GT protocol. Finally,

the client determines whether L+ R + k > k, which is equivalent to L+ R > 0, but

the actual value is protected by additive blinding.

We provide a brief security analysis of the additive blinding operation. Let v =

L + R; then v has n bits and the size of its domain dv = [vl, vu] is m = 2^n. Consider a number k with n' bits and value domain dk = [kl, ku] of size m' = 2^(n'). Then, the domain dv+k of v + k is [vl + kl, vu + ku], and its size is m' + m − 1 = 2^(n') + 2^n − 1.

An example of additive blinding v + k is illustrated in Figure 3.14, where dv =

[−2, 2], dk = [−3, 3] and dv+k = [−5, 5]. Denote c = v+k. When c ∈ [vu+kl, vl+ku] =

[−1, 1], c can be the result of blinding any value in dv. Hence, the SP cannot learn

anything about v given c. However, when c is outside this interval, c can only result from a strict subset of the values in dv. Hence, there may be some

information that SP learns about v. Next, we quantify this probability and show it

is negligible in practice.


In Figure 3.14, there are m ∗ m' = 2^(n+n') elements in the table. The number of elements in the interval is m ∗ (m' − m + 1) = 2^n ∗ (2^(n') − 2^n + 1), and the number of elements outside the interval is m(m − 1) = 2^n(2^n − 1). The probability that an element is outside the interval is m(m − 1)/(m ∗ m') = (m − 1)/m' = (2^n − 1)/2^(n') ≈ 1/2^(n'−n). When n = 64 and n' = 128, the probability of returning an additively blinded value which leaks information is 1/2^64, negligible in practice.
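The counts used in this analysis can be checked by brute force on the toy domains of Figure 3.14; the sketch below enumerates all (v, k) pairs and compares the empirical leak probability with (m − 1)/m'.

// Brute-force check of the additive-blinding analysis on the toy domains of Figure 3.14:
// dv = [-2, 2] (m = 5 values) and dk = [-3, 3] (m' = 7 values).
public class BlindingLeakSketch {
    public static void main(String[] args) {
        int vl = -2, vu = 2, kl = -3, ku = 3;
        int m = vu - vl + 1, mPrime = ku - kl + 1;

        int outside = 0, total = 0;
        for (int v = vl; v <= vu; v++) {
            for (int k = kl; k <= ku; k++) {
                int c = v + k;
                total++;
                if (c < vu + kl || c > vl + ku) outside++;   // c cannot come from every v in dv
            }
        }
        System.out.println("empirical leak probability: " + outside + "/" + total);      // 20/35
        System.out.println("formula (m-1)/m'          : " + (m - 1) + "/" + mPrime);     // 4/7 = 20/35
    }
}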

Fig. 3.15. Perturbation in kd*-tree

kd*-tree perturbation. The proposed scheme uses MBR filtering using Paillier

encryption and k-d*-trees. We discuss next the security properties of the perturbation

scheme for k-d*-trees. Each node has four entries, but for simplicity we perform the

analysis assuming each node has two entries (i.e., boundaries in a single dimension), as

shown in Figure 3.15. Assume that the adversary (i.e., the SP) knows a plaintext of an

entry. The entries of the subtree at depth 2 have a probability 2/n to be distinguished from other nodes, where n = 2^d, and the number of entries is n/2. Similarly, the entries of the subtree at depth i have a probability 2^(i−1)/n to be distinguished, and the


number of such entries is n/2^(i−1). The average probability of identifying a particular entry as the requested one is:

Pavg = ( Σ_{i=2}^{d} (n/2^(i−1)) ∗ (2^(i−1)/n) ) / (n − 2)

= (d − 1)/(n − 2) = (d − 1)/(2^d − 2) ≈ d/2^d

For instance, when there are one million MBRs (n = 10^6) and d = 20, the probability of identifying a single entry as the requested one is 20/2^20 ≈ 1/50,000, which is negligible in practice. In addition, suppose that the adversary knows k plaintexts (e.g., through access to background knowledge), where k = 2^(d') and d' < d. In this case, suppose that the k plaintexts are uniformly distributed; this is the worst case, in which the adversary's probability of identifying a specific value is highest. In this case, the adversary knows k subtrees at depth d − d'. The breach probability for an entry in this case is:

(d − d')/2^(d−d') (3.32)

For example, when there are one million MBRs (n = 10^6), d = 20 and d' = 5, the SP has 2^5 plaintexts, and the breach probability is 15/2^15 ≈ 1/2000. We believe that

such a breach probability is acceptable in practice.
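For concreteness, the two probabilities derived above can be evaluated numerically; the short sketch below plugs in the parameters used in the text (d = 20, d' = 5).

// Numeric check of the kd*-tree perturbation analysis: P_avg = (d - 1)/(2^d - 2) and the
// breach probability (d - d')/2^(d - d') when the adversary knows 2^(d') plaintexts.
public class KdTreeBreachSketch {
    public static void main(String[] args) {
        int d = 20, dPrime = 5;
        double n = Math.pow(2, d);

        double pAvg = (d - 1) / (n - 2);                               // roughly d / 2^d
        double pKnown = (d - dPrime) / Math.pow(2, d - dPrime);        // roughly 15 / 2^15

        System.out.printf("P_avg with one known plaintext : %.2e (~1/%,.0f)%n", pAvg, 1 / pAvg);
        System.out.printf("breach prob. with 2^%d known    : %.2e (~1/%,.0f)%n", dPrime, pKnown, 1 / pKnown);
    }
}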

3.4 Performance Analysis

3.4.1 Computation Time

The proposed secure proximity test protocol consists of three steps: (i) MBR

Filtering, (ii) finding two lines in preparation for t-SPEM and t-SLEM, and (iii) the

proper t-SPEM and t-SLEM execution. MBR filtering is done using a k-d*-tree. The

tree's depth is log_4 n, where n is the number of polygons. Each node has four entries.

For each entry, the client must perform four AES decryptions CA and four Paillier

homomorphic ciphertext operations CPC , whereas the SP must perform four Paillier

decryptions CPD, four encodings CE and four multiplications CM . Finally, when


the SP returns the four multiplication results, the client must perform four ElGamal

decryptions CED. The computational overhead is:

4 ∗ log_4 n ∗ {4 ∗ (CA + CPC) + 4 ∗ (CPD + CE + CM) + 4 ∗ CED} (3.33)

To find two lines for t-SPEM, the client must perform 3∗ l AES decryptions and l

Paillier homomorphic ciphertext operations, whereas the SP must perform l Paillier

decryptions, l encodings and l multiplications, where l is the number of intervals.

Finally, the client performs l ElGamal decryptions, for a total time of:

(3 ∗ l ∗ CA + l ∗ CPC) + (l ∗ CPD + l ∗ CE + l ∗ CM) + l ∗ CED (3.34)

Finding two lines for t-SLEM requires an additional 2 AES decryptions and l

homomorphic ciphertext operations on behalf of the client:

(2 ∗ CA + l ∗ CPC) + (l ∗ CPD + l ∗ CE + l ∗ CM) + l ∗ CED (3.35)

For the t-SPEM proper step, the client performs eight AES decryptions and one

Paillier homomorphic ciphertext operation for each interval. The SP must perform

one Paillier decryption and one encoding, as well as one multiplication for each inter-

val. Finally, the client performs an ElGamal decryption for each interval:

(8 ∗ l ∗ CA + l ∗ CPC) + (l ∗ CPD + l ∗ CE + l ∗ CM) + l ∗ CED (3.36)

For the t-SLEM proper step, the client must complete five AES decryptions and

one Paillier computation for each interval, and the SP one Paillier decryption, one

encoding and one multiplication for each interval. Finally, the client performs an

ElGamal decryption for each interval:

(5 ∗ CA + l ∗ CPC) + (l ∗ CPD + l ∗ CE + l ∗ CM) + l ∗ CED (3.37)

However, note that the Paillier computations in t-SPEM and t-SLEM are more

expensive than those in MBR filtering. In the kd*-tree, since a node's MBR may overlap with other nodes' MBRs, filtering may find more than a single leaf node. In the experiments (Section 3.5), we found that on average MBR filtering yields 0.98 leaf nodes when there are 1M polygons. For each of these nodes, finding two lines for t-SPEM and t-SLEM, as well as the proper protocols, must be completed.


3.4.2 Communication Bandwidth

We evaluate the communication cost overhead of the proposed protocols. For

simplicity, we do not include in our analysis the communication bandwidth for the GT

table.

Each entry in the kd*-tree has four sides. When a node is checked, the SP must

send to the client four AES encrypted messages MA for these bounds, and then the

client responds with four Paillier encrypted messages ME. Finally, the SP sends four

ElGamal encrypted messages MG to the client. The total communication cost is:

4 ∗ log_4 n ∗ {4 ∗ MA + 4 ∗ ME + 4 ∗ MG} (3.38)

For finding two lines for t-SPEM, the resulting communication cost is:

l ∗ 3 ∗MA + (l ∗ME) + l ∗MG (3.39)

Similarly, finding two lines for t-SLEM requires a cost of

2 ∗MA + (l ∗ME) + l ∗MG (3.40)

For the t-SPEM proper step, the communication cost is

l ∗ 8 ∗MA + (l ∗ME) + l ∗MG (3.41)

whereas the t-SLEM proper step requires communication cost of:

5 ∗MA + (l ∗ME) + l ∗MG (3.42)

3.5 Experimental Results

We implemented a prototype of the proposed techniques for secure evaluation

of mutual location proximity, which include the MBR filtering protocol, t-SPEM,

t-SLEM and their respective auxiliary protocols for finding two lines. We also im-

plemented the building block GT protocol [24]. We developed the prototype code

using Java JDK 1.6. We use as performance metrics CPU time at the SP and client,


Fig. 3.16. CPU Time with mOPE

Fig. 3.17. CPU Time with k-d*-tree filtering

and the client-SP communication bandwidth consumed. We use Paillier encryption

with 1024-bit key strength, and 128-bit key AES encryption. For the GT protocol,

we use 128-bit key strength. The domain of user locations is selected as [0, 10^6]^2. We consider a number of users (i.e., polygons) between 200,000 and 1,000,000. The experiments were executed on a 3.4GHz Intel i7 CPU machine with 16GB RAM running Windows 8. For each experiment, we average the results over 1,000 random

queries.

CPU Time. Figures 3.16 and 3.17 illustrate the CPU time measurements obtained. When using mOPE, the MBR filtering time is low, demonstrating the advantage of mOPE over the more expensive Paillier encryption. In the case of mOPE, the CPU time is proportional to the number of polygons. We found that, due to overlaps, multiple MBRs may contain the client's query point. For example, when there are 1,000,000 polygons, the number of matching MBRs is 0.96 on average.

When using Paillier encryption and the k-d*-tree structure, the cost is higher due

to the encryption overhead, even though the filtering complexity is proportional to

the depth of the tree, i.e., log_4 n. As shown in Figure 3.17, when there are one million

polygons, MBR filtering time is roughly five seconds, which corresponds to a tree

depth of 10 (each tree node has four children).


Fig. 3.18. Communication Bandwidth with mOPE

Fig. 3.19. Communication Bandwidth with k-d*-tree filtering

Communication Bandwidth. The communication bandwidth consumption is

illustrated in Figures 3.18 and 3.19. When we use mOPE, the communication band-

width for MBR filtering is relatively small. When using Paillier encryption and k-d*-

trees, the communication bandwidth for MBR filtering has complexity log_4 n, where n is the number of polygons. The communication bandwidth for SPEM and SLEM is proportional to the total number of polygons, since more leaf entries will match in the MBR filtering step. Most communication bandwidth is used for transferring the 2 ∗ m ElGamal encryption table from the client to the SP for the GT protocol.


4. AUTHENTICATED TOP-K AGGREGATION

The results of a top-k query are k objects that have the highest overall scores. Top-k

queries have attracted much interest in many different areas, such as network and system monitoring [33], information retrieval [36], and sensor networks [37, 38]. The

main reason for such interest is that they reduce the overhead by pruning away

uninteresting answers.

As network and ubiquitous environments are emerging, the objects which are sub-

jects of top-k queries are distributed across nodes in the network. This means that

the target of top-k queries is no longer a centralized database but multiple distributed databases. In such distributed environments, top-k aggregation first aggregates

the scores for each object which resides in multiple distributed databases, and then,

finds the k objects whose aggregated score (usually the sum of the scores from the distributed

databases) is ranked within top-k. An example of top-k aggregation is the top-k

query processing over a content distribution network (CDN) for large enterprises.

Large enterprises have branch offices located around the globe. The number of branch

offices ranges from a few tens to a few thousands. Due to the diverse geographical

locations of branch offices, the links between the offices may have low bandwidth and

long round trip times. Successful operation of a CDN relies on effective monitoring of the activities on the network, which means that the central management station is often asked to answer top-k queries such as "list the top-k most popular documents across the whole CDN". In this example, the score associated with a document is the number

of downloads for the document.

The existing top-k aggregation algorithms mainly address efficiency issues such

as how to find top-k objects with the least communication overhead. In the CDN

example, if the number of documents runs to millions, a naive method by which all

data are transmitted to the central manager is inefficient. Various algorithms have


been developed to reduce the network communication costs. However, there is one

more important issue in distributed top-k aggregation: the authentication issue.

In top-k aggregation, the multiple distributed databases can be autonomously

managed and sometimes outsourced. Also, the data service provider (in short, DSP)

which collects data from the databases and calculates top-k results can also be an

independent party and can be outsourced to reduce costs. In such a distributed and outsourced environment, the DSP or the databases may be malicious or subverted by an

adversary. Even if just one among the DSP and the databases is compromised, it could

return tampered results, including: 1) incomplete results, 2) altered ranking, and 3)

spurious results. If an attacker drops from the result some higher ranked objects,

the user receives incomplete information. By tampering with the ranking order, the

attacker can bias the results. In addition, the attacker may add fake information to

the result.

In this chapter we investigate algorithms that authenticate top-k aggregation re-

sults in distributed and outsourced databases. However, we address not only authen-

tication but also efficiency. Our solution is based on a well-known top-k aggregation

algorithm: the Three Phase Uniform Threshold (TPUT) algorithm [33]. We first

develop an authenticated top-k aggregation algorithm based on TPUT. We call this

algorithm A-TPUT. The main strength of A-TPUT is that 1) it provides the authen-

tication capability which is not supported in the original TPUT algorithm and 2)

it only requires a fixed number of communication rounds between the DSP and the

databases regardless of the number of objects needed to find top-k results.

To develop an authenticated version of TPUT, we carefully integrate two authen-

tication techniques. The first technique is the Merkle Hash Tree (MHT) which is a

tree-based data structure for detecting tampering over a series of values. With MHT,

we can guarantee the completeness and correctness of data communicated between

trusted parties and untrusted parties. The second technique is the Condensed-RSA

algorithm [35]. Condensed-RSA is a digital signature technique which is suitable for

combining signatures generated by a single signer into a single condensed signature.


We use this signature scheme to reduce the communication cost between trusted par-

ties and untrusted parties. By using Condensed-RSA, we can combine signatures of

multiple objects and send only one digital signature (instead of multiple signatures)

to reduce the communication cost.

Next, we develop an optimization technique for A-TPUT with regard to the com-

munication between the databases and the DSP. Here, we reduce communication costs

by increasing the threshold which is used to determine how many objects should be

transmitted from the databases to the DSP. A higher threshold means less data trans-

mission. We provide formal equations to find a higher threshold value, and then

prove that A-TPUT always finds the genuine top-k aggregation results and correctly

authenticates the results with the higher threshold value.

Through extensive experiments, we first show that our approach efficiently authen-

ticates top-k aggregation. The results show that our approach significantly reduces

not only communication costs but also response time compared to other algorithms.

Our contributions are summarized as follows:

• We develop authenticated top-k aggregation algorithms (A-TPUT and S-TPUT)

using MHT and Condensed-RSA for distributed and outsourced databases. We

also prove that A-TPUT and S-TPUT correctly authenticate the top-k results.

• We propose an optimization technique for A-TPUT and S-TPUT. This tech-

nique reduces communication costs between the databases and the DSP.

• With extensive experiments, we show the efficiency of our authentication algo-

rithms and optimization technique.

The rest of the chapter is organized as follows. Section 4.1 presents our sys-

tem model and attack model, and then briefly describes a basic top-k aggregation

algorithm (i.e., TPUT) which is the basis for our approach. Section 4.2 proposes

our authenticated top-k aggregation algorithms (A-TPUT and S-TPUT). Section 4.3

presents an optimization technique to reduce the amount of data transmission for

aggregating top-k results. Section 4.4 reports the experimental results.


Fig. 4.1. System Model for Top-k Aggregation

4.1 Preliminaries

In this section, we first introduce our system model and the attack model. Then,

we briefly discuss an unauthenticated top-k aggregation algorithm, TPUT, which is

the basis of our approaches.

4.1.1 System Model

We consider distributed environments for top-k query processing and authentica-

tion of the top-k results. In such distributed environments, our system model involves

four parties: (i) the multiple data owners who provide data collection, (ii) the dis-

tributed databases which store a set of data, (iii) the data service provider (DSP)

which processes top-k aggregation by communicating with the databases, and (iv)

the users who issue top-k queries and receive the results from the DSP. In our system

model, we assume that the databases and the DSP are not trusted since they can be

outsourced. Figure 4.1 illustrates the four parties and data flows among them.

The data owners: Data owner DOi manages a data collection D comprising

n objects: D = {O1, O2, ..., On}, n ≥ 1. For example, objects can be web pages in a web

server, inventory items, or people. Each object O is bound to a value V which is

the measure for deciding top-k results. For example, the value can be the number

of accesses for each web page, the number of inventory items, or the salary of an

individual. To compute a top-k aggregation, the data owner DOi provides a sorted


list Li defined as Li = [<O1, V1>, <O2, V2>, ..., <On, Vn>] such that: (a) for 1 ≤ j ≤ n, Li.Oj is an object in D and Li.Vj indicates the value bound to Oj; and (b) for 1 ≤ j ≤ l ≤ n, Li.Vj ≥ Li.Vl. The data owner DOi also manages authentication information which we

will discuss in the next sections. For simplicity, we assume that all the data owners

have the same data collection (but different values may be bound to the same object

by different data owners).

The databases: Each data owner transfers its own list and authentication in-

formation to its associated database for query outsourcing. The databases are dis-

tributed in the network and autonomously managed by different authorities. The

databases can be compromised, or the data stored by a database can be tampered with.

Therefore, we assume that the databases cannot be trusted. The role of databases is

to provide (parts of) the lists and authentication information to the DSP on behalf

of the data owners.

The data service provider (DSP): The DSP accepts top-k queries from users

and returns the results to users. A top-k query is forwarded to the databases associ-

ated with the different data owners and the DSP computes the result based on the

data obtained from the databases. The query result for Q returned to the user, R,

is an ordered list of k entries, R = [<O1, V1>, <O2, V2>, ..., <Ok, Vk>], in which, for 1 ≤ j ≤ k, R.Oj ∈ D is a result data item and R.Vj is its corresponding aggregate value. We

assume that the DSP can also be compromised and the top-k results can be tampered

with since the DSP can be outsourced.

The users: A user issues a query Q specifying a value for parameter k and

receives the result R from the DSP. The user needs to verify that the top-k result R is correct, since the databases and the DSP cannot be trusted.

A correct query result R should relate to the query Q and the data collection D

as follows. The aggregate value of an object O is V(O) = Σ_{j=1}^{m} O.Vj, where m is the

number of databases in the distributed system. The query result R is correct if and

only if it satisfies the following conditions:


• The result entries are ordered according to non-increasing aggregated values,

i.e., 1 ≤ j ≤ l ≤ k, R.Vj ≥ R.Vl.

• All the objects that are excluded from R have lower aggregate values than the

last entry in R, i.e., for any non-result object O ∈ D, it holds that V (O) ≤ R.Vk.

4.1.2 Attack Model

As described in Section 4.1.1, among the entities in our system model, the DSP and

the databases are the potential adversaries as they could be subverted by attackers.

The attacks can happen both in the databases and in the DSP as follows:

• In the databases, the adversaries may alter the lists. This means that values

associated with objects may be altered or some objects and values may be

omitted.

• In the DSP, the adversaries may execute the top-k aggregation query processing

algorithm incorrectly or tamper with the results. This means that the order (i.e.,

ranking) of the top-k may be changed or some top-k results may be omitted.

Example 1: Assume that a top-3 query is given and assume that the correct

result is [< O1, V1 >,< O2, V2 >,< O3, V3 >] where V1 > V2 > V3. Now assume that

a malicious DSP changes V2 to a value V2' so that V1 > V3 > V2'. In this case, even if the user

still gets the correct set of top-k objects, the ordering of these objects in the result

is not correct. In addition, a malicious DSP may drop the record < O3, V3 > from

the result and add a record < O4, V4 > where V3 > V4. In this case, the user gets an

incomplete result [< O1, V1 >,< O2, V2 >,< O4, V4 >].

The goal of this chapter is to protect top-k results against such attacks. To achieve

this goal, we will allow the users (i) to verify the correctness of the query results and

(ii) to check the completeness of the results.


4.1.3 Three Phase Uniform Threshold Algorithm

The Three Phase Uniform Threshold (TPUT) algorithm [33] is an efficient top-k

aggregation algorithm but it does not provide any authentication mechanism. We

will use TPUT as the basis of our authenticated top-k aggregation algorithm since it

is simple and has desirable features for distributed top-k aggregation such as a fixed

number of communication rounds between the DSP and the databases.

The Three Phase Uniform Threshold (TPUT) algorithm [33] consists of three

phases, each taking one round of communication:

• Phase 1: It establishes a lower bound on the true bottom. The DSP informs

all databases that it would like to start computing a top-k query. Each database

sends the top-k objects from its list. After receiving the data from all databases,

the DSP calculates the partial sums of the values of the objects. Then, it looks

at the k highest partial sums and takes the k-th one as the lower bound, denoted

as t1 and called ”phase 1 bottom”.

• Phase 2: It prunes away ineligible objects. The DSP sets a threshold T = t1/m

and sends it to all databases. Each database sends back the list of objects whose

values are greater or equal to T . The DSP then performs two tasks. First, it

calculates partial sums for the objects. Let us call the k-th highest sum the "phase-2 bottom", denoted by t2. Clearly, t1 ≤ t2. Then, it tries to prune away more

objects by calculating the upper bounds of the objects. The objects whose

upper bounds are less than t2 are eliminated. The set of remaining objects is

the candidate set S.

• Phase 3: It identifies the top-k objects. The DSP sends the set S to all

databases and each database sends back the values of the objects in S. The

DSP calculates the exact sum of the objects in S and selects the top-k objects.

Example 2: Consider the lists in Table 4.1. A top-2 query is given. In phase 1,

all databases send the data at positions 1 and 2 to the DSP. The DSP calculates the


partial sums: V (O2) = 0.97, V (O5) = 1.89, V (O6) = 0.89, and V (O0) = 1.59. The

two highest partial sums are 1.89 and 1.59 and the phase-1 bottom t1 is 1.59. Then,

the threshold T is set to 1.59/3 = 0.53.

Table 4.1 An Example Data Set with Three Lists

Position L1 L2 L3

1 (O2, 0.97) (O5, 0.97) (O5, 0.92)

2 (O6, 0.89) (O0, 0.80) (O0, 0.79)

3 (O7, 0.45) (O3, 0.70) (O3, 0.72)

4 (O5, 0.44) (O7, 0.65) (O7, 0.64)

5 (O0, 0.36) (O2, 0.52) (O6, 0.29)

6 (O1, 0.28) (O4, 0.22) (O2, 0.28)

7 (O3, 0.19) (O1, 0.12) (O1, 0.24)

8 (O4, 0.13) (O6, 0.01) (O4, 0.01)
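The following sketch reproduces phase 1 of Example 2 on the data of Table 4.1 (the data are hard-coded for illustration; this is not the system's actual implementation): the DSP forms partial sums from the local top-2 entries, takes the k-th highest as t1, and derives the threshold T = t1/m.

import java.util.HashMap;
import java.util.Map;

// Phase 1 of TPUT on the data of Table 4.1 for a top-2 query with m = 3 databases.
public class TputPhase1Sketch {
    public static void main(String[] args) {
        int k = 2, m = 3;
        // Local top-2 entries of L1, L2, L3 from Table 4.1: {object, value}.
        Object[][][] topK = {
            {{"O2", 0.97}, {"O6", 0.89}},   // L1
            {{"O5", 0.97}, {"O0", 0.80}},   // L2
            {{"O5", 0.92}, {"O0", 0.79}}    // L3
        };

        Map<String, Double> partial = new HashMap<>();
        for (Object[][] list : topK)
            for (Object[] entry : list)
                partial.merge((String) entry[0], (Double) entry[1], Double::sum);

        double t1 = partial.values().stream()
                .sorted((a, b) -> Double.compare(b, a))    // descending order
                .skip(k - 1).findFirst().orElse(0.0);      // k-th highest partial sum
        System.out.println("partial sums = " + partial);   // O2 ~0.97, O5 ~1.89, O6 ~0.89, O0 ~1.59
        System.out.println("t1 = " + t1 + ", threshold T = t1/m = " + (t1 / m));   // ~1.59 and ~0.53
    }
}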

4.2 Authenticated Top-K Aggregation

In this section, we introduce two mechanisms for supporting authentication in

TPUT. The technical challenges for authentication are twofold: (1) to allow users

to verify the completeness and the correctness of the top-k results and (2) to minimize

data transmissions between the databases and the DSP. For addressing the former

issue, we first introduce the Skewed Merkle Hash Tree (S-MHT), and then, for ad-

dressing the latter issue, we develop a mechanism to reduce data transmission that

uses S-MHT and Condensed-RSA [35] together.


Fig. 4.2. Skewed Merkle Hash Tree

4.2.1 Authenticated TPUT

Our algorithm is based on TPUT extended by the use of the Skewed Merkle

Hash Tree (S-MHT). The Merkle Hash Tree is a data structure to prove the completeness and

correctness of a series of values by detecting tampering over the values. Therefore it

is suitable for authenticating top-k query results. In the TPUT algorithm, we observe

that the entries in the lists in the databases are sorted and accessed from the front.

This means that to calculate top-k results with TPUT, we only need a partial list

which begins from the first entry of the list. Based on this observation, we modify

the original MHT structure to skew the tree from left to right (i.e., construct the tree

structure from the first entries to the last entries), as shown in Figure 4.2.

Our S-MHT scheme works as follows. We compute a hash chain over the records

in the list. We include the digest of each record in the digest computation of the

record immediately ahead of it. Finally, the digest of the first record is signed by the

private key of the data owner. This signature can be used to verify any j leading

records of the list. The details are as follows.

Let n be the number of records in a list Li.

Digesti,n = h(Oi,n|Vi,n) (4.1)

Digesti,j = h(Oi,j|Vi,j|Digesti,j+1), 1 ≤ j ≤ n − 1 (4.2)

Signaturei = Sign_ski(Digesti,1) (4.3)


These digests and the signature are computed by each trusted data owner and

sent to the corresponding database. When a database i sends data up to position

j, it sends the j-th digest and the signature of the data owner as well as the data.

When the DSP or the user receives them, they can verify that the data sent from database i was not tampered with.
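The hash-chain computation and verification described above can be sketched as follows; the hash function (SHA-256), the record encoding, and all names are illustrative choices, and the RSA signature over Digest_{i,1} (Equation 4.3) is omitted.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch of the S-MHT hash chain of Equations (4.1)-(4.2): the digest of record j folds in
// the digest of record j+1, so Digest_{i,1} commits to the whole list and any j leading
// records can be verified from Digest_{i,j+1} plus the (signed) Digest_{i,1}.
public class SmhtSketch {
    static byte[] h(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }
    static byte[] b(String s) { return s.getBytes(StandardCharsets.UTF_8); }

    // digests[j] holds Digest_{i,j+1} for records encoded as "Oj|Vj" (0-based array index).
    static byte[][] buildChain(String[] records) throws Exception {
        int n = records.length;
        byte[][] digests = new byte[n][];
        digests[n - 1] = h(b(records[n - 1]));                       // Eq. (4.1)
        for (int j = n - 2; j >= 0; j--)
            digests[j] = h(b(records[j]), digests[j + 1]);           // Eq. (4.2)
        return digests;
    }

    public static void main(String[] args) throws Exception {
        String[] list = {"O2|0.97", "O6|0.89", "O7|0.45", "O5|0.44"};
        byte[][] digests = buildChain(list);

        // A database returns the first j = 2 records plus Digest_{i,3}; the verifier
        // recomputes the chain head and compares it with the signed Digest_{i,1}.
        int j = 2;
        byte[] recomputed = digests[j];                               // Digest_{i,j+1} sent by the database
        for (int idx = j - 1; idx >= 0; idx--)
            recomputed = h(b(list[idx]), recomputed);
        System.out.println("leading records verified: " + Arrays.equals(recomputed, digests[0]));
    }
}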

We now introduce how to use S-MHT to develop A-TPUT. A-TPUT has three

phases like TPUT. Only phase 3 is modified for authentication.

• Phase 1: All databases send their local top-k objects to the DSP and the DSP computes the phase-1 bottom t1 (as in the original TPUT).

• Phase 2: The DSP sends the threshold T = t1/m to all databases and the

databases send the objects having values greater than or equal to T. Then, the

DSP computes the phase-2 bottom t2 and prunes away objects whose upper

bounds are less than t2. The remaining objects are included in set S (as in

the original TPUT).

• Phase 3: The DSP sends S to all databases and each database sends the se-

quence containing the objects corresponding to set S. In addition, for authenti-

cation, each database sends its signature and a digest corresponding to the last

object which is located in the lowest position in the sequence.

Table 4.2 formally illustrates the algorithm for A-TPUT with S-MHT. Phase 1 is

step 1, phase 2 comprises steps 2 to 5, and phase 3 comprises steps 6 and 7.

Example 3: To illustrate the algorithm, consider the lists in Table 4.1. The

first two phases of A-TPUT are the same as in TPUT. However, in phase 3, since

S = {O2, O5, O6, O0, O3, O7}, database 1 should send a sequence containing objects

up to position 7, Digest1,7, and the signature. The database 2 has to send a sequence

containing objects up to position 8, Digest2,8, and the signature. Finally the database

3 should transmit a sequence containing objects up to position 6, Digest3,6, and the

signature. When the DSP receives the sequences from the databases, it forwards

them to the user, who can then verify the top-k result.


Table 4.2 A-TPUT algorithm

1. Request the local top-k objects to all databases;

2. Compute a threshold T = t1/m where t1 is phase-1 bottom;

3. Request objects whose values ≥ T to all databases;

4. Compute phase-2 bottom t2;

5. Prune objects whose upper bounds are less than t2;

6. Request each sequence containing remaining objects, a digest,

and a signature from each database;

7. Report each sequence, each digest, and each signature

for each database to the user;

The following theorem establishes the correctness of algorithm A-TPUT with S-

MHT (i.e., the algorithm correctly authenticates top-k results).

Theorem 4.1. A-TPUT correctly authenticates the top-k objects.

Proof (sketch). We prove the theorem by induction on x, where x ranges over the integers in [1, n] up to the position determined by the threshold and by the remaining objects in phase 3. If the user accepts a sequence from database i in A-TPUT, then, conditioned upon the top-(x − 1) values being correct, Vx must be the value of the object Ox with the x-th largest value. The base case holds because the top-1 value is proved correct by Digesti,1 and Signaturei; the theorem then follows by induction on x.

Let the object with the x-th largest value be object Ox and let its value be Vx. Let Ox' and Vx' be the corresponding answers returned by the DSP; they may be forged. We prove by contradiction. Assume that Vx' ≠ Vx. The user must successfully verify Digestx' for Vx'. But when the user checks whether Verify_pk(Digest1', Signature) = 1, since Digest1' = h(. . . h(Ox'|Vx'|Digestx+1)) is different from Digest1, the verification fails.


On the other hand, an adversary may drop an object at the bottom of the sequence.

By the definition of A-TPUT, the smallest value in the sequence should be less than or equal to the local top-k value and the threshold. In addition, the sequence should contain the top-k objects, and the upper bounds of all remaining objects should be greater than or equal to the smallest top-k value. When the adversary drops an object,

these conditions may not be satisfied. If these conditions are not satisfied, it means

that the result was forged or an object was dropped.

Compared to the existing authenticated top-k aggregation algorithms for outsourced databases, TRA and TNRA [36], our algorithm may require the databases to send more data than TNRA. However, the response time of A-TPUT is much lower than that of TRA and TNRA, since our algorithm benefits from a fixed number of communication rounds in distributed environments. TRA and TNRA are based on

TA and NRA [39] and the latency of TRA and TNRA is unpredictable because the

number of rounds varies by data input. The response time consists of several round

trip times. Each round trip time contains transmission time, propagation delay, and

computation time at the DSP. In distributed environments, the propagation delay is

usually much longer than the transmission time.

For example, when the distance is 1000 km, the bandwidth is 100 Mbps, and

we send a packet of size 100 Bytes, then, the propagation delay is about 4 ms and

the transmission time is 0.008 ms. Even if databases send k records every round as

TPUT, the propagation delay is much longer than the transmission time. Moreover,

TA and NRA have many more rounds than TPUT. As the number of rounds increases,

the response time increases. So, for distributed databases, TRA and TNRA are not

desirable. We will show the advantage of our algorithm in the experimental section.

4.2.2 Signature-based TPUT

One weak point of A-TPUT is the number of data entries which have to be trans-

mitted from the databases to the DSP. This number depends on the threshold T in phase 2


and the set of remaining data S in phase 3. Here, we focus on the set S of phase 3.

We will focus on T in Section 4.3.

We note that in A-TPUT the amount of data transmission does not depend on

the number of objects in S but depends on the lowest rank in S. In our basic S-

MHT based algorithm, even though the number of remaining objects in S is small,

if the rank of an object in S is low, the databases must send a lot of data to the DSP. This is because a database must send the partial list from its first entry up to the lowest-ranked entry in S in order to authenticate the results (especially completeness). This means that we cannot omit any entry between the first entry and the lowest-ranked entry.

So, in this subsection, we exploit a signature-based technique to address this

problem (i.e., allowing us to omit useless entries in the list). In our approach, data

owners additionally sign each tuple using Condensed-RSA [35]. The Condensed-RSA

scheme is a simple extension of the standard RSA scheme. One of the well-known

features of RSA is its multiplicative homomorphic property. This property makes

RSA suitable for combining signatures generated on each data item in a set by a single

signer into a single condensed signature. Having successfully verified a condensed

signature, a user can be assured that each data item covered by the condensed signature

was signed by the data owner.

Standard-RSA: A data owner has a public key pk = (n', e) and a secret key sk = (d), where n' is a k-bit modulus computed as the product of two random k/2-bit primes p and q. The respective public and secret exponents e, d ∈ Z*_{n'} satisfy e ∗ d ≡ 1 mod φ(n'), where φ(n') = (p − 1)(q − 1). An RSA signature is computed over the hash of the input message.

Let h() denote a suitable cryptographic hash function, such as MD5 or SHA-1, which produces a fixed-length output h(m) upon a variable-length input m. A standard RSA signature on message m is computed as σ = h(m)^d (mod n'). RSA signature verification involves checking that σ^e ≡ h(m) (mod n').


Condensed-RSA: Given j input messages {m1, . . . , mj} and their corresponding signatures {s1, . . . , sj}, a Condensed-RSA signature is given by the product of the individual signatures:

s1,j = Π_{l=1..j} sl (mod n') (4.4)

The resulting signature s1,j has the same size as a standard RSA signature. When verifying a condensed signature, the verifier needs to multiply the hashes of all input data and check that:

(s1,j)^e ≡ Π_{l=1..j} h(ml) (mod n') (4.5)
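A textbook sketch of the Condensed-RSA operations summarized above is given below; the key size, the hash function, and the record encoding are illustrative, and a production implementation would use a proper RSA padding scheme rather than this plain construction.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

// Textbook Condensed-RSA: individual signatures sigma_l = h(m_l)^d mod n', their product
// is the condensed signature, and verification checks Equation (4.5).
public class CondensedRsaSketch {
    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd), q = BigInteger.probablePrime(512, rnd);
        BigInteger nPrime = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(65537);
        BigInteger d = e.modInverse(phi);

        String[] records = {"O7|0.45", "O5|0.44", "O0|0.36", "O3|0.19"};   // e.g., remaining objects of one list
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");

        BigInteger condensed = BigInteger.ONE;       // s_{1,j} = product of sigma_l mod n'
        BigInteger hashProduct = BigInteger.ONE;     // product of h(m_l) mod n', used by the verifier
        for (String r : records) {
            BigInteger hm = new BigInteger(1, sha1.digest(r.getBytes(StandardCharsets.UTF_8)));
            BigInteger sigma = hm.modPow(d, nPrime);                 // individual RSA signature
            condensed = condensed.multiply(sigma).mod(nPrime);
            hashProduct = hashProduct.multiply(hm).mod(nPrime);
        }

        boolean ok = condensed.modPow(e, nPrime).equals(hashProduct); // Eq. (4.5)
        System.out.println("condensed signature verifies: " + ok);
    }
}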

Now we explain the S-TPUT algorithm. In phase 3, when the DSP requests the data corresponding to the remaining objects, each database computes a Condensed-RSA

signature from the signatures of data corresponding to the remaining objects. Then,

each database sends the data, the digest corresponding to the last object in phase 2,

the signature of S-MHT and the Condensed-RSA signature to the DSP.

So, when a user receives the data, the digest, the signature of S-MHT and the

Condensed-RSA signature for each database, it knows which objects are remaining

objects whose upper bounds are greater than the smallest top-k value. Then, by using

the Condensed-RSA signature, the user can verify whether the data corresponding to the remaining objects have been forged or dropped.

Table 4.3 shows the S-TPUT algorithm. Compared to the algorithm in Table 4.2, steps 1 to 5 are the same, but the remaining steps include the use of Condensed-RSA. In step 6, the DSP requests only the data corresponding to the remaining objects instead of a sequence containing them.

By using S-TPUT, we can reduce the communication overhead between the databases and the DSP. However, since each record needs a signature, the databases incur more storage overhead. We assume that, since storage is inexpensive nowadays, the storage overhead is not a significant problem. In addition, since S-TPUT exploits the

Condensed-RSA signature, the databases have more computation overhead than A-

TPUT. But, the computation times overlap with disk I/O time at the databases, and


Table 4.3 S-TPUT algorithm

1. Request the local top-k objects to all databases;

2. Compute a threshold T = t1/m where t1 is phase 1 bottom;

3. Request objects whose values ≥ T to all databases;

4. Compute phase 2 bottom t2;

5. Prune objects whose upper bounds are less than t2;

6. Request data corresponding to the remaining objects;

7. Each database computes its Condensed-RSA signature
from the signatures corresponding to the remaining objects;

8. Report the data, each digest, each signature of S-MHT,

and each Condensed-RSA signature for each database to the user;

S-TPUT needs signatures for only a small number of records compared to all the records to be sent. So, S-TPUT has much less computation time than the ASB-tree, which

needs the signatures for all records [34]. In [34], the time for one random-access I/O operation is 15 ms and the cost of one modular multiplication with a 128-byte modulus for the Condensed-RSA signature is 100 µs.

Example 4: In phase 3 of A-TPUT, since S = {O2, O5, O6, O0, O3, O7}, the database

1 should send data up to position 7, Digest1,7, and the signature of S-MHT. The

database 2 has to send data up to position 8, Digest2,8, and the signature of S-MHT.

Finally the database 3 should transmit data up to position 6, Digest3,6, and the

signature of S-MHT.

However, in phase 3 of S-TPUT, the database 1 does not need to send (O1, 0.28)

since O1 is not in S. Instead, it computes a Condensed-RSA signature CS1 =

S_O7 ∗ S_O5 ∗ S_O0 ∗ S_O3 and sends O7, O5, O0, and O3 with the aggregate signature CS1, the

digest, and the signature of S-MHT. The database 2 does not need to send O4 and O1.

It sends O2 and O6 with its Condensed-RSA signature CS2. The database 3 sends


O6 and O2 with its Condensed-RSA signature CS3. So, S-TPUT sends three fewer records than A-TPUT in this example. Finally, when the user receives the data and the Condensed-RSA signatures, it multiplies the hashes of the data from each database corresponding to the remaining objects and checks whether each Condensed-RSA signature verifies against this product, as in Equation (4.5).

Theorem 4.2. S-TPUT correctly authenticates the top-k objects.

Proof (sketch). By Theorem 4.1, a sequence in phase 2 satisfies correctness and

completeness, since each database must send the data whose values are greater than or equal to the threshold. Suppose that there are x remaining objects in phase 3. An

adversary succeeds in breaking Condensed-RSA if it produces a valid aggregated

signature for the remaining objects which passes verification. There are two cases.

First, the adversary can forge the value of an object. Second, it can drop an object.

First, suppose that the adversary changes the value Vx to Vx' for the object Ox. However, since it does not know the data owner's private key, it cannot generate a valid individual signature Sx' for the forged value Vx'. Hence, it cannot generate a valid Condensed-RSA signature that passes the verification. Thus, (s1,x)^e ≠ Π_{l=1..x} h(Ol|Vl') (mod n').

Second, the adversary may drop an object. In that case, by Theorem 4.1, the user

knows which objects are remaining objects in phase 3. So, if the adversary drops

an object, it is detected by the user. Therefore, S-TPUT correctly authenticates the

top-k objects.

4.3 Optimization

In this section, we present an optimization technique for A-TPUT and S-TPUT.

Optimization can be done between the databases and the DSP. We focus on mini-

mizing the amount of data transmission.

Even though A-TPUT and S-TPUT are efficient in that they reduce the communication cost by pruning away ineligible data items, they can be inefficient, especially when the threshold T is too small. In A-TPUT and S-TPUT,


the threshold is set to T = t1/m where t1 is the phase 1 bottom and m is the number

of databases. If T is small, the databases must send large parts of their data to the DSP. This results in a large amount of data transmission between the databases and the DSP, which makes A-TPUT and S-TPUT inefficient.

In this section, we introduce an approach, called Improved TPUT (I-TPUT), to

decrease the communication overhead of A-TPUT and S-TPUT by increasing the

threshold T. We observe that, in phase 1, the data about a given object are generally not sent by all databases. This means that the local top-k objects are usually not exactly the same in all databases. We can use this observation to replace t1 by a value t1' which is greater than t1. Consequently, we can use T' = t1'/m instead of T. This increases the threshold in phase 1, since T' > T, and thus decreases the communication cost between the databases and the DSP.

We calculate T' as follows. When the DSP receives the local top-k objects from the databases, it computes the set of received objects and their partial aggregate values. In addition, for the object having the k-th largest partial sum, it counts how many databases have sent the object. The counter for this object Ok is denoted by Ck. For instance, when Ck is equal to j (1 ≤ j ≤ m), it means that the object Ok was received from j databases and (m − j) databases did not send the object Ok in phase 1.

First, we assume that the values of the object Ok in the databases that did not report it are greater than or equal to T and, under this assumption, define a new threshold:

T' = (t1 + (m − Ck) ∗ T)/m (4.6)

If the data values among the databases are correlated, the assumption is true with high probability. For the case where the assumption is not true, the threshold is recalculated later in the I-TPUT algorithm.

Next, the DSP requests the databases to send data whose values are greater than

or equal to T'. When the DSP receives the data whose values are greater than or equal to T' in phase 2, it should check whether the k-th largest partial sum is greater than or equal to m ∗ T' to see whether the assumption we used to calculate T' is true:


1) If the k-th largest value is greater than or equal to m ∗ T', the assumption is true (i.e., the objects that have not been reported so far do not have aggregate values greater than or equal to m ∗ T'). Therefore, we can safely use T' (> T) as the threshold and do not need to receive additional data from the databases.

2) On the other hand, if the k-th largest value is less than m ∗ T', the assumption is false and we need to set a new threshold value T* = t2/m. Since t2 is the k-th largest value in phase 2, it is greater than or equal to t1. Therefore, T ≤ T* ≤ T', which means that the threshold is still greater than or equal to that of the original TPUT. Now, the DSP requests the databases to send additional data whose values are greater than T*.

Table 4.4 shows the I-TPUT algorithm in detail.

Table 4.4 I-TPUT algorithm

1. Request the local top-k objects to all databases;

2. Compute a threshold T' = (t1 + (m − Ck) ∗ T)/m, where t1 is the phase-1 bottom;

3. Request objects whose values ≥ T' to all databases;

4. Compute phase-2 bottom t2;

5. Check whether the smallest top-k value is greater than or equal to m ∗ T';

6. If so, go to step 9;

7. Otherwise, request objects whose values ≥ T* (= t2/m) to all databases;

8. Recompute phase-2 bottom t2;

9. Prune objects whose upper bounds are less than t2;

10. Request data corresponding to the remaining objects;

Example 6: Suppose that there are three databases and a top-1 query is given. If (O1, 0.6), (O1, 0.6), and (O2, 0.7) are received from the three databases in phase 1, then the phase 1 bottom is t1 = 0.6 + 0.6 = 1.2 and the original TPUT threshold is T = 1.2/3 = 0.4. In I-TPUT, however, T′ = (1.2 + (3 − 2) · 0.4)/3 ≈ 0.53. So, by using T′ instead of T, we can reduce the communication overhead. In phase 2, when the DSP receives the data whose values are greater than or equal to 0.53, it should check whether the smallest top-k value is greater than or equal to 1.59 (= 0.53 · 3). If so, the algorithm can terminate. Otherwise, the DSP should receive the data whose values are greater than or equal to T*, as in the original TPUT.
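
To make the threshold computation concrete, the following is a minimal Java sketch of the I-TPUT threshold logic described above; the class and method names are ours and purely illustrative. Only the formulas T = t1/m, T′ = (t1 + (m − Ck) · T)/m, and the m · T′ check in phase 2 are taken from the description above.

    public class ITputThreshold {

        // Improved phase 1 threshold T' = (t1 + (m - ck) * T) / m  (Eq. 4.6),
        // where t1 is the phase 1 bottom (k-th largest partial sum), T = t1 / m is
        // the original TPUT threshold, and ck is the number of databases that
        // reported the object holding the k-th largest partial sum.
        static double improvedThreshold(double t1, int m, int ck) {
            double t = t1 / m;                     // original TPUT threshold
            return (t1 + (m - ck) * t) / m;
        }

        // Phase 2 check: the assumption behind T' holds iff the k-th largest
        // aggregate value t2 is at least m * T'; otherwise fall back to
        // T* = t2 / m as in the original TPUT.
        static double nextThreshold(double tPrime, double t2, int m) {
            return (t2 >= m * tPrime) ? tPrime : t2 / m;
        }

        public static void main(String[] args) {
            // Example 6: three databases, top-1 query, phase 1 bottom t1 = 1.2,
            // and O1 was reported by ck = 2 of the m = 3 databases.
            double t1 = 1.2;
            int m = 3, ck = 2;
            System.out.printf("T = %.2f, T' = %.2f%n",
                    t1 / m, improvedThreshold(t1, m, ck));   // T = 0.40, T' = 0.53
        }
    }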

Performance Analysis of I-TPUT. By using a threshold T′ greater than T, I-TPUT reduces the communication overhead corresponding to part A in Figure 4.3. Since I-TPUT has a higher threshold than A-TPUT, it sends less data than A-TPUT; the data that no longer needs to be sent corresponds to part A. However, since I-TPUT may have more remaining objects than A-TPUT in phase 3, it may incur a higher communication overhead corresponding to part B: if I-TPUT has more remaining objects than A-TPUT, the lowest position of its remaining objects is lower than that of A-TPUT. In practice, part A is usually much larger than part B, so I-TPUT has much less communication overhead than A-TPUT.

The communication overhead of A-TPUT is m · n · (1.0 − T) + Σ_{i=1}^{m} n · p_i · (T − LO_i), where LO_i is the smallest value of the remaining objects in list L_i and p_i is the probability that LO_i is less than T in list L_i. The communication overhead of I-TPUT is m · n · (1.0 − T′) + Σ_{i=1}^{m} n · p′_i · (T′ − LO′_i). Thus, the difference between A-TPUT and I-TPUT is m · n · (T′ − T) + Σ_{i=1}^{m} n · p_i · (T − LO_i) − Σ_{i=1}^{m} n · p′_i · (T′ − LO′_i).

In correlated databases, which have similar sets of top-k objects, p_i and p′_i are usually equal to 0. Thus, the difference is m · n · (T′ − T) with T′ > T. Therefore, I-TPUT has much less communication overhead than the original A-TPUT.

4.4 Experiments

4.4.1 Setup

We implemented the following algorithms: A-TPUT, S-TPUT, AI-TPUT, and SI-TPUT. To better assess our algorithms, we also implemented two existing

algorithms:


Fig. 4.3. Performance Comparison of A-TPUT and I-TPUT

• Naive: The databases send all their data to the DSP, and the DSP forwards to the user all the data received from the databases.

• TNRA: the NRA-based authenticated top-k aggregation algorithm proposed in [36]. TRA also provides authentication for top-k aggregation, but we compare our algorithms only to TNRA since [36] shows that TRA performs worse than TNRA.

We tested them over correlated synthetic data sets. Correlated data sets are data sets in which the values of the data in the lists are correlated. In real-world applications, such correlations are common [40]. In our experiments, we generate two sets of correlated data. Inspired by [40, 41], we use a correlation parameter α (0 ≤ α ≤ 1). We use the following two kinds of synthetic data sets:

• Zipf law (CZ-Data): The first set of correlated databases was generated as follows. For the first list, we randomly select the positions of the data items. Let p1 be the position of a data item in the first list; then, for each list Li (2 ≤ i ≤ m), we generate a random number r in the interval [1 . . . n · α], where n is the number of data items, and we insert the data item in the list at a position p such that its distance from p1 is r. If p is already occupied by another data item, we insert the data item at the free position closest to p. After setting the positions of all data items in all lists, we generate the values of the data items in each list in such a way that they follow the Zipf law. The Zipf law states that the value of an item in a ranked list is inversely proportional to its rank (position). Such a distribution is commonly observed in many kinds of phenomena, e.g., the frequency of words in a corpus of natural language utterances.

• Uniform distribution (CU-Data): The second set of correlated databases was generated as follows. For the first list, we randomly generate a number for each object Oj; the values follow the uniform distribution. Let p1,j be this number. Then, for each list Li (2 ≤ i ≤ m), we generate a random number r in the interval [−α, α] and set pi,j to p1,j + r. By controlling the value of α, we create databases with stronger or weaker correlations (a sketch of this generator is given below).
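
The following is a minimal Java sketch of the CU-Data generator described in the bullet above. The class name, the clamping of perturbed values to [0, 1], and the fixed seed are our own illustrative choices; only the use of a first uniform list and a perturbation r ∈ [−α, α] follows the description.

    import java.util.Random;

    public class CuDataGenerator {

        // Generates m correlated lists of n values: list 1 is uniform in [0, 1],
        // and each other list perturbs the first list's value by r in [-alpha, alpha].
        static double[][] generate(int m, int n, double alpha, long seed) {
            Random rnd = new Random(seed);
            double[][] lists = new double[m][n];
            for (int j = 0; j < n; j++) {
                lists[0][j] = rnd.nextDouble();                     // p_{1,j}
                for (int i = 1; i < m; i++) {
                    double r = (2 * rnd.nextDouble() - 1) * alpha;  // r in [-alpha, alpha]
                    double v = lists[0][j] + r;                     // p_{i,j} = p_{1,j} + r
                    lists[i][j] = Math.min(1.0, Math.max(0.0, v));  // clamp to [0, 1] (our choice)
                }
            }
            return lists;
        }

        public static void main(String[] args) {
            double[][] data = generate(4, 10, 0.01, 42L);
            System.out.printf("object O1: %.3f in L1 vs %.3f in L2%n", data[0][0], data[1][0]);
        }
    }

A smaller α keeps the values of the same object close to each other across lists and hence produces more strongly correlated databases.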

Our default settings for different experimental parameters are shown in Table 4.5.

In our tests, the number of databases, i.e., m, is a varying parameter. The default

number of databases is 128. The default number of data items in each database

is 10,000. Typically, users are interested in a small number of top answers, thus

unless specified we set k = 100. Like many previous approaches to top-k query

processing [33], we use a scoring function that computes the sum of the local values.

In addition, the default value of the correlation parameter α is 0.01.

Table 4.5: Experimental Parameters

Parameter Values

Number of databases (m) 32,64,128,256,512

Number of records (n) 2500,5000,10000,20000,40000

k in top-k 25, 50, 100, 150, 200

Correlation parameter α 0.001, 0.005, 0.01, 0.05, 0.1


To evaluate the performance of the algorithms, we measure the following metrics:

communication overhead between databases and DSP, and response time taken to get

the authenticated top-k results from the databases. Concerning the communication

overhead, it is important to note that even if the number of remaining objects in phase 3 is small, when the position of the last top-k object is low, in A-TPUT the databases would typically send a lot of data to the DSP. By measuring the number of records

transmitted by A-TPUT and S-TPUT, we can verify that S-TPUT is more efficient

than A-TPUT.

On the other hand, when the threshold is very small in A-TPUT or S-TPUT, each

database should send a lot of data to the DSP. By measuring the communication

overhead of AI-TPUT and SI-TPUT, we can verify that AI-TPUT and SI-TPUT are

more efficient than A-TPUT and S-TPUT. AI-TPUT and SI-TPUT use the I-TPUT technique introduced in the previous section.

Response time is the time that an algorithm takes to find the top-k data items. TNRA requires several rounds; by contrast, our algorithms require only three rounds. In distributed environments, the round trip time is much longer than the transmission time, so TNRA has a much longer response time than our algorithms. We compare our algorithms only to TNRA since TRA has a much larger communication overhead than TNRA [36].

For the experiments in Sections 4.4.2 ∼ 4.4.4, we use the synthetic data sets since we can fine-tune the characteristics of the data. We omit the experiments on real data due to space constraints.

4.4.2 Communication Cost of S-TPUT

In this experiment, we compare the communication overhead of our algorithms with that of the naive approach to show the effect of the S-TPUT and I-TPUT optimizations (i.e., higher threshold values). In this subsection, we evaluate the efficiency of S-TPUT, whereas in the next subsection we evaluate that of I-TPUT. The communication cost metric is the number of records transmitted from the databases to the DSP. The results with CZ-Data are shown in Figures 4.4 ∼ 4.7 and the results with CU-Data are shown in Figures 4.8 ∼ 4.11. From the results, we can see that the efficiency of our proposed algorithms largely depends on the data distribution, but our algorithms outperform the existing algorithms in most cases.

In Figures 4.4 ∼ 4.7, we can see that with CZ-Data, S-TPUT incurs about 100 times less communication overhead than A-TPUT. This is because, in phase 3, S-TPUT receives only the data corresponding to the set S of the remaining objects, instead of the sequences containing all of the objects in S as with A-TPUT. When the number of remaining objects in S in phase 3 becomes small, the advantage of S-TPUT becomes large. In addition, when the position of the last object becomes low, the communication cost of A-TPUT becomes large.

Fig. 4.4. Communication Cost by m in Zipf law

The experiments in Figures 4.8 ∼ 4.11 show that, unlike with CZ-Data, with CU-Data S-TPUT has a communication overhead similar to that of A-TPUT. With CZ-Data, S-TPUT has a small number of remaining objects in phase 3 compared to A-TPUT. But with CU-Data, S-TPUT has a small threshold and there is a lot of data, with values greater than the threshold, to be sent in phase 2. On the other hand, with CZ-Data, even if S-TPUT has a small threshold, there is not a lot of data whose values are greater than the threshold in phase 2, since the values follow the Zipf law.


Fig. 4.5. Communication Cost by n in Zipf law

Fig. 4.6. Communication Cost by k in Zipf law

Fig. 4.7. Communication Cost by α in Zipf law

For example, let us assume that the threshold is 0.1. Since the values are between 0 and 1, with CZ-Data a database sends only 10 records to the DSP in phase 2, because the values follow the Zipf law. But with CU-Data, the database has to send about 90% of its records, since the values follow the uniform distribution. Therefore, S-TPUT is efficient with CZ-Data, but not with CU-Data.
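
The following short Java sketch reproduces this back-of-the-envelope argument under the same assumptions (threshold 0.1, n = 10,000 records, Zipf value 1/j at rank j, uniform values in (0, 1]); it is illustrative only and not part of the experimental code.

    public class ThresholdCount {
        public static void main(String[] args) {
            int n = 10_000;
            double threshold = 0.1;

            // Zipf law: the value at rank j is 1/j, so only ranks with
            // 1/j >= threshold (i.e., j <= 10 here) exceed the threshold.
            int zipfCount = 0;
            for (int j = 1; j <= n; j++) {
                if (1.0 / j >= threshold) zipfCount++;
            }

            // Uniform values in (0, 1]: roughly a (1 - threshold) fraction of
            // the records, i.e., about 90% of them, exceed the threshold.
            long uniformCount = Math.round((1.0 - threshold) * n);

            System.out.println("Zipf records above threshold:    " + zipfCount);    // 10
            System.out.println("Uniform records above threshold: " + uniformCount); // 9000
        }
    }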

Fig. 4.8. Communication Cost by m in Uniform distribution

Fig. 4.9. Communication Cost by n in Uniform distribution

Fig. 4.10. Communication Cost by k in Uniform distribution


Fig. 4.11. Communication Cost by α in Uniform distribution

4.4.3 Communication Cost of I-TPUT

From the experiments in Figures 4.4 ∼ 4.7, we can see that I-TPUT is not very effective for CZ-Data. This is because, under the Zipf law, the value of an item in a ranked list is inversely proportional to its rank. Since the values are between 0 and 1, the value at the first rank is 1, the value at the second rank is 0.5, and the value at the j-th rank is 1/j. So, for example, when k = 100, a database sends the records whose values are greater than 0.01 in phase 1 by the Zipf law, since it sends its local top-100 records in phase 1. If the threshold of S-TPUT is greater than 0.01, SI-TPUT is not much more efficient: even though SI-TPUT has a higher threshold than S-TPUT, the records whose values are greater than 0.01 have already been sent in phase 1.

In contrast, in Figures 4.8 ∼ 4.11, with CU-Data, we can see that I-TPUT is much more efficient than A-TPUT and S-TPUT. AI-TPUT and SI-TPUT incur about

40% less communication overhead than A-TPUT and S-TPUT. This is due to the

fact that when the threshold is small in A-TPUT or S-TPUT, the databases have

to send a lot of data to the DSP since the values are uniformly distributed. But, in

AI-TPUT or SI-TPUT, by increasing the threshold using our I-TPUT algorithm, we

can reduce the communication overhead compared to A-TPUT and S-TPUT.

The results in Figure 4.10 show that, when k is 25, A-TPUT and S-TPUT have

more communication overhead than the other cases in which k is greater than 25.


When the values are uniformly distributed, if k is too small, we cannot find a proper threshold. As k increases, we obtain a better threshold.

4.4.4 Comparing S-TPUT with TNRA

In this experiment, we compare our S-TPUT with the existing authenticated top-

k aggregation algorithm, TNRA [36]. We measure the response time for the DSP to

receive all the data for the top-k results from the databases. As we described in Section 4.3, counting the number of transmitted records is not sufficient to compare the response time, since TNRA has an unpredictable number of round trips and the round trip time is much longer than the transmission time. By contrast, our approach requires only a fixed number of rounds. This feature significantly reduces the actual response time since, in distributed environments, the round trip time is much longer than the packet transmission time. The round trip time is proportional to the distance between a database and the DSP. We assume that the round trip time is 10 ms and that the processing time is negligible. For TNRA, each database sends k (= 100) records every round.
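
A minimal sketch of the response-time model used in this comparison is given below, assuming (as stated above) a 10 ms round trip and negligible processing and transmission time; the number of TNRA rounds is left as a free parameter since it depends on the data.

    public class ResponseTimeModel {

        static final double RTT_MS = 10.0;   // assumed round trip time

        // S-TPUT always needs a fixed number of rounds (three).
        static double stputResponseMs() {
            return 3 * RTT_MS;
        }

        // TNRA needs a data-dependent number of rounds.
        static double tnraResponseMs(int rounds) {
            return rounds * RTT_MS;
        }

        public static void main(String[] args) {
            System.out.println("S-TPUT: " + stputResponseMs() + " ms");
            for (int rounds : new int[] {10, 50, 100}) {
                System.out.println("TNRA, " + rounds + " rounds: " + tnraResponseMs(rounds) + " ms");
            }
        }
    }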

In Figures 4.12 ∼ 4.15, we can see that, in all experimental instances, S-TPUT has a constant response time, whereas TNRA has a response time much greater than that of S-TPUT. As parameters such as the number of databases, k in top-k, the number of records, and the correlation parameter increase, the response time of TNRA increases. The results show that S-TPUT, with its much lower response time than TNRA, is the most suitable algorithm for a distributed environment.


Fig. 4.12. Response Time by m in TNRA and S-TPUT

Fig. 4.13. Response Time by n in TNRA and S-TPUT

Fig. 4.14. Response Time by k in TNRA and S-TPUT


Fig. 4.15. Response Time by α in TNRA and S-TPUT


5. SECURE PROXIMITY-BASED ACCESS CONTROL

The wide deployment of wireless technology is today making it possible to remotely read and control a new generation of medical devices. While this increases the ease of use of these devices, it also makes it easier to tamper with these devices and gain unauthorized

access to personal information. Confidentiality and integrity concerns have been a

factor impeding a rapid uptake of wireless technology in the clinical field [43, 44].

Access control methodologies could reduce the risk of confidentiality and integrity

breaches and thus help the adoption of wireless technology in the clinical setting.

In this chapter, we focus on access control in the context of ubiquitous medical

applications [45]. The use case for our proposed scheme is a physician accessing some

medical device using a wireless device, such as a tablet or smartphone. The medical

device may be an implantable device such as a cardiac defibrillator, a drug delivery

system, or a neurostimulator. The medical device could also be a mobile device worn

by the patient and used to help manage diseases such as arrhythmia, diabetes, and Parkinson's disease. A variety of wearable sensors available on the market would fit

this definition [46–48].

The medical device is assumed to have wireless network interfaces so that medical

personnel can interact with the device even if it is implanted within the patient. The

wireless connectivity enables ease of access; however, at the same time, it is crucial

that unauthorized access to potentially sensitive medical data be prevented. An

example is the ”bored but curious” employee; such a person may access the record of

a celebrity undergoing treatment in the same hospital despite having no valid reason

to do so [49]. In addition, in a government or military setting, secure processing of

confidential material might require restricting such accesses to a single room or set of

rooms [49]. To prevent unnecessary exposure of medical information, we can exploit

the concept of authentication based on location i.e., even if an authorized reader is


requesting access to the medical device, the access is only granted if the reader is

located within a trusted area.

Figure 5.1 illustrates a running example of authentication based on location. Here,

an authorized reader R1 can access medical devices MD1, MD2, and MD3 as R1 is

within the trusted area; by contrast the other authorized reader R2 should not be

able to access the medical devices as R2 is located outside of the trusted area.

Fig. 5.1. Proximity-based Access Control

In order to prevent unauthorized access to private medical records, proximity-

based access control systems with distance bounding protocols have been proposed

[45, 50–54]. These protocols first estimate the location of the user and then allow access to information only to authorized medical personnel within specific physical locations. For example, an authorized physician can have access to data stored on a patient's implanted medical device only when in an ambulance, a hospital, or a doctor's office. The restriction reduces the exposure of private medical data to potential

adversaries.

To estimate the location of the user, distance bounding protocols utilize special

devices that communicate through ultrasound [45,52] or Ultra Wide Band (UWB) [53]

in order to ascertain the distance between the two parties wishing to communicate [54]. First, the verifier (i.e., a patient's medical device) sends a signal to the prover (i.e., a physician's medical device). Once the prover receives the signal, it processes it and sends a reply to the verifier. The communication latency is used to compute the distance between the prover and the verifier; therefore, it should be accurately measured.

However, since the signal travels at the speed of light, the latency is nominally on the


nanosecond scale and cannot be easily measured. Even though we can use ultrasound- or UWB-based approaches, they are not applicable in many real environments, as the devices are expensive or not widely deployed [55]; in addition, fast-processing hardware is required to estimate the location in real time [56]. The prices of the devices are usually several thousand dollars [57]. This approach relies on costly devices, namely accurate clocks and fast hardware, to process the signals in the protocol with negligible delay. Consequently, the approach has not seen wide deployment.

To alleviate the practical usability issues, we propose a secure proximity-based

access control technique using multiple distributed location-based service (LBS) devices that utilize Bluetooth [58]. Nowadays, Bluetooth is cheap and widely deployed in many devices such as smartphones, laptops, and tablets. We first strategically

deploy multiple LBS devices so as to determine whether a physician is in a trusted

area where the physician can access the information stored on a medical device. Each

such trusted area is the area obtained by the overlap of the radio transmission ranges

of multiple LBS devices. Therefore, when a physician is physically located within

such a trusted area and wishes to access information stored on a medical device, this

means that the physician's device can communicate with each of the LBS devices that form the overlapped area. In what follows, we refer to the physician's device as the reader and to the patient's medical device as the medical device. Note that the doctor's device may also be used to control the patient's device, not just to read data from it.

Our approach to guarantee that the reader is in the trusted area is as follows:

1) each LBS device forming the overlapped area (i.e., the trusted area) provides a

partial key ki and a signature si to the reader, 2) the reader computes an aggregate

signature from the received signatures, and 3) the medical device grants the reader

access only if the aggregate signature is valid.

The proposed approach has three important strengths:


• It enforces proximity-based access control to the medical device without requir-

ing the use of expensive technology such as ultrasound, UWB, or fast-processing

hardware.

• It is designed to be resilient to LBS device or wireless link failures, such as fail-

ures due to interference or malicious denial of service attacks. This means that

our protocols operate correctly as long as a subset of LBS devices is operating

correctly.

• It provides an easy and efficient key management scheme which, by exploiting a bilinear mapping technique [35, 59] and a (k, n) threshold algorithm [42], does not require updating the key in each medical device when the administrator changes the partial keys in the LBS devices. We note that the key management scheme is

an important feature of our approach since it also provides an efficient way to

address security and administrative concerns.

The rest of the chapter is organized as follows. Section 5.1 discusses necessary

background and introduces our proposed system. Section 5.2 details our secure

proximity-based access control scheme using distributed LBS devices, aggregate sig-

nature, and the (k, n) threshold algorithm. Section 5.3 presents the experimental

results.

5.1 Preliminaries

5.1.1 System Model

There are four components in our proposed system: an administrator, multiple

LBS devices, a reader, and a medical device. The administrator is a trusted entity

that produces keys which are needed for the readers to access the medical device.

The reader is a device used by a physician to read or configure a patient's medical

device. LBS devices are configured by the administrator and are used to determine

the location of the reader.


The system flow is as follows. A physician wants to gain access to a patient’s

medical device. To do so, the physician must be physically located within a predeter-

mined trusted location that is set up by an administrator. Once a physician is within a trusted location, the reader communicates with nearby LBS devices, which provide

the keys to access the medical device. Once the reader receives the keys, it sends the

keys to the medical device. If the keys are valid, then the reader is able to access or

configure the medical device.

5.1.2 Attack and Failure Model

We consider the following attack scenario. An attacker wants to access or modify

medical data stored on a patient's medical device when the patient is outside of a trusted location [45]. The motivations for this kind of attack range from medical identity theft and blackmail to causing physical harm to the patient. In section

5.2, we describe methods that utilize multiple location-based service (LBS) devices

to prevent these types of attacks.

In a more complicated attack scenario, when there are two or more colluding

readers, they can share the keys received from LBS devices in order to generate the

key for granting access to a medical device even though none of the readers is within

the trusted areas. In section 5.2.3, we provide methods to prevent the collusion attack.

Another attack scenario is a replay attack. When a reader is within a trusted

area, it receives the keys but later leaves the trusted area and then attempts to access the

medical device. A method to prevent replay attacks is presented in section 5.2.3.

In addition, we assume that some LBS devices can be faulty and thus unresponsive, possibly because of a denial of service attack or faulty hardware. It is critical that our system be resilient to such faults so as to be able to work also in the case of time-critical medical emergencies. In section 5.2.4, we give a method for tolerating LBS device faults.


5.1.3 Protocol Overview

The administrator has a key set and the key set is needed for the reader to access

the medical device. The administrator's key set comprises n keys, where n is the

number of LBS devices forming a trusted area. The administrator provides a key to

each LBS device in the setup phase. If the administrator changes the key set, it should

send each key to each LBS device again. The LBS devices are beacons emitting signals

for location detection. We assume that the LBS devices communicate via Bluetooth.

The readers can be tablets, smartphones, or laptops used by physicians in order

to acquire information from a medical device. Medical devices are any information

sources for a patient or any device that conducts a physiological function in a patient.

In our attack model, the administrator is trusted, but LBS devices and readers are

not. The LBS devices may be misbehaving and the readers may be malicious.

To enforce location-based access control, when the reader broadcasts a request

message, each nearby LBS device sends its key and its signature which verifies the

key to the reader. The reader then computes an aggregate signature from the received

signatures and then sends the keys and the aggregate signature to the medical device.

At the medical device, if the aggregate signature is valid, the reader is given access to

the stored medical information. Here, we achieve the proximity-based access control

by enforcing the reader can receive all the required keys and signature pairs only if

the reader is physically located within the trusted area. For this, the LBS devices are

strategically placed to create the trusted area where each LBS transmission range is

overlapped, therefore only a reader that can communicate with all LBS devices can

create a correct aggregate signature. On the contrary, if the reader is not within the

area, it cannot receive all the key and signature pairs from the LBS devices and will

fail to produce a valid aggregate signature. Consequently, the medical device will

reject the readers communication request. How the reader calculates the aggregate

signature as well as the verification method at the medical device are detailed in

section 5.2.


Additionally, fault tolerance is achieved in our proposed scheme by the use of

the (k, n) threshold algorithm. If a reader is within a trusted area but only k LBS

devices among n LBS devices forming a trusted area are accessible, the reader may

still have access to the medical device. In more detail, the administrator makes n

partial keys from the original key K. Each partial key ki is assigned to each LBS

device. Only if the reader is within the trusted area can it receive at least k partial keys from k LBS devices. If the reader has at least k partial keys, it can access the

medical device since the medical device will re-compute the original key K from the

k partial keys. Thus, even if there are at most n − k unresponsive LBS devices and

if the reader is within the trusted area, it can access the medical device.

We note that our scheme can be easily extended to support multiple trusted areas

by deploying adjacent LBS devices to form overlapped areas for each trusted area.

By placing these LBS devices strategically (e.g., deploying LBS devices as a mesh

structure), the administrator can reduce the number of the LBS devices compared

to the number of trusted areas. In section 5.2.5, we describe this method in detail.

5.1.4 Bilinear Mapping

To provide an efficient key management mechanism, we exploit bilinear mapping. Let G1 and G2 be two (multiplicative) cyclic groups of prime order p, with an additional group GT such that |G1| = |G2| = |GT|. A bilinear map is a map e : G1 × G2 → GT with the following properties:

1. Bilinear: for all u ∈ G1, v ∈ G2 and a, b ∈ Z, e(u^a, v^b) = e(u, v)^{ab}

2. Non-degenerate: e(g1, g2) ≠ 1, where g1 is a generator of G1 and g2 is a generator

of G2.

These properties imply two more properties:

1. for any u1, u2 ∈ G1, v ∈ G2, e(u1 · u2, v) = e(u1, v) · e(u2, v)


2. for any u, v ∈ G1, e(u, ψ(v)) = e(v, ψ(u)), where ψ is a computable isomorphism

from G1 to G2.

5.2 Secure Proximity-based Access Control

5.2.1 Simple Proximity-based Access Control Method

The goal is to grant an authentic reader access to a medical device only when the

reader is physically within a trusted area. In this section, we propose a method to

achieve this goal without using distance-bounding protocols [50, 51], which require expensive special-purpose devices and fast-processing hardware. We will use this as

a baseline protocol and build on it in subsequent sections.

Figure 5.2 illustrates an example of a simple setup for our proposed scheme with

an administrator, four LBS devices, a reader, and a medical device. Each LBS device

has its own key given by the administrator. The overlapped area of the four radio

ranges of the four LBS devices forms a trusted area. Therefore, in order to access

the medical device, the reader must have all the four keys from the LBS devices and

this is only possible when the reader is within the trusted area. The reader sends

the received keys to the medical device, and the medical device sends its medical

information to the reader only when it receives all the four keys. In Figure 5.2,

we assume that Bluetooth [58] whose radio range is approximately 10m is used for

communication between a reader and LBS devices.

5.2.2 Aggregate Signature using Bilinear Mapping

Along with the key, the LBS devices send a signature to verify the key is correct.

Then, the reader sends the key and signature pairs to the medical device, and the

medical device verifies whether the signature is correct. In our proposed scheme,

to reduce the communication overhead between the reader and medical device, the

reader sends an aggregate signature based on bilinear mapping instead of sending each


Fig. 5.2. Simple Proximity-based Access Control

individual signature received from the LBS devices. With the aggregate signature, we can reduce the reader's power consumption, which is crucial in wireless environments.

Figure 5.3 illustrates the communication flow for the aggregate signature. When there are n LBS devices, each LBS device has its own key ki and its own private key xi and generates its own signature si. The reader receives the n signatures. Then, the reader computes an aggregate signature s = ∏ si from the n signatures, so the reader does not need to send the n signatures to the medical device. Instead, it sends only the aggregate signature to the medical device, and the medical device verifies that the keys are correct with respect to the aggregate signature. Note that the medical device has the public key g_2^{x_i} of each LBS device to verify the aggregate signature.

Fig. 5.3. Aggregate Signature using Bilinear Mapping


The goal of the aggregate signature scheme is to aggregate signatures generated

by distinct signers (i.e., LBS devices) on different messages (i.e., key ki) into one short

signature based on elliptic curves and bilinear mappings.

The aggregate signature scheme [35] uses a full-domain hash function H : {0, 1}* → G1. The private key generation algorithm picks a random xi ∈ Zp and computes the public key v = g_2^{x_i}, where v ∈ G2. When an LBS device has a key ki, it computes hi = H(ki), where hi ∈ G1, and the signature is computed as follows:

s_i = H(k_i)^{x_i} = h_i^{x_i}    (5.1)

To verify the signature, one checks the following equality by bilinear mapping:

e(s_i, g_2) ?= e(H(k_i), g_2^{x_i})    (5.2)

To aggregate the n signatures, the reader computes the product of the n signatures as follows:

s = ∏_{i=1..n} s_i    (5.3)

The aggregate signature s has the same size as the signature si, i.e., 1024 bits.

The verification of an aggregate signature is executed by computing the product of the hashes of all keys ki and checking the following equality:

e(s, g_2) ?= ∏_{i=1..n} e(h_i, g_2^{x_i})  ⇔  e(∏_{i=1..n} H(k_i)^{x_i}, g_2) ?= ∏_{i=1..n} e(H(k_i), g_2^{x_i})    (5.4)
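
The correctness of this verification follows directly from bilinearity; the short derivation below is added here only to make the intermediate steps explicit and is not part of the original scheme description in [35]:

    e(s, g_2) = e(∏_{i=1..n} H(k_i)^{x_i}, g_2)
              = ∏_{i=1..n} e(H(k_i)^{x_i}, g_2)
              = ∏_{i=1..n} e(H(k_i), g_2)^{x_i}
              = ∏_{i=1..n} e(H(k_i), g_2^{x_i})

Each step uses only the bilinearity property e(u^a, v^b) = e(u, v)^{ab} and its product form e(u_1 · u_2, v) = e(u_1, v) · e(u_2, v).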

5.2.3 Preventing Attacks

In our privacy model, even if a reader is authorized to access the medical device, it should not be able to access the device when it is not in the trusted area. However, there are several possible attacks that aim to bypass the proximity-based access control. In this section, we consider three attack types: replay attacks,

collusion attacks, and distance spoofing attacks such as Bluesniping [64].


Replay Attack

In a replay attack, a reader acquires valid keys when it is in the trusted area and reuses these keys to access the medical device at a later time, when the reader is no longer in the trusted area. To mitigate such an attack, our scheme uses a timestamp whenever the LBS devices create a signature. First, the reader sends a request to the medical device and the medical device sends a timestamp ts_t to the reader. Next, the reader sends the timestamp to the LBS devices, which then generate their own signatures using the timestamp as follows:

s_i = (H(k_i) + ts_t)^{x_i} = (h_i + ts_t)^{x_i}    (5.5)

where ts_t ∈ G1.

Then, the reader computes an aggregate signature s = ∏_{i=1}^{n} s_i. The signature is verified at the medical device as follows:

e(s, g_2) ?= ∏_{i=1}^{n} e(h_i + ts_t, g_2^{x_i})    (5.6)

Note that the timestamp is valid only for a predetermined time. Therefore, once the reader receives the signatures from the LBS devices, it can access the medical device with the resulting signature only within that predetermined time window.

Collusion Attack

In a collusion attack, two or more colluding readers share the keys and signatures received from the LBS devices in order to acquire enough keys and signatures to access a medical device, even though none of the readers is within the trusted area. An example of a collusion attack is as follows. Suppose that the reader R1 is outside the

trusted area as illustrated in Figure 5.4. It can receive three keys k1, k2, k4 and three

signatures s1, s2, s4 but it cannot receive the key k3 and the signature s3. However,

a colluding reader R2 can send a request to L3 and upon receiving the key and the

signature can forward them to R1.


Fig. 5.4. Collusion Attack

To mitigate such an attack, our scheme uses a digital fingerprint to uniquely identify a

reader. Various schemes have been proposed to uniquely identify devices with wireless

network cards. For example, radiometric identification establishes the identity of

wireless network devices by analyzing idiosyncratic artifacts in transmission signals

[65]. In particular, the PARADIS technique proposed by Brik et al. is able to establish

unique identities with accuracy in excess of 99% on 130 identical 802.11 wireless

network cards.

When the LBS device receives a request from a reader, we assume that the LBS

device can compute a fingerprint of the reader. Using these fingerprints, the LBS devices can distinguish one reader from the others. The LBS device generates the signature using the fingerprint of the reader as follows:

s_i = (H(k_i) + ts_t + f_u)^{x_i} = (h_i + ts_t + f_u)^{x_i}    (5.7)

where ts_t, f_u ∈ G1 and f_u is the fingerprint of the reader Ru. The fingerprint is encrypted with the medical device's public key and is transmitted to the medical device through the reader. For example, when the LBS device receives a request from a reader R1, the LBS device computes the reader's fingerprint f1. If a reader R2 sends a request, the reader's fingerprint f2 is generated by the LBS device. So, when R1 and R2 collude as in Figure 5.4, the signatures are generated as follows:

s_1 = (h_1 + ts_t + f_1)^{x_1},  s_2 = (h_2 + ts_t + f_1)^{x_2},  s_4 = (h_4 + ts_t + f_1)^{x_4},  s_3 = (h_3 + ts_t + f_2)^{x_3}    (5.8)


Note that the signature s3 was generated using the fingerprint f2 of the reader R2.

The signatures are aggregated at the reader R1. Then, the medical device verifies

whether the aggregate signature is correct as follows:

e(s, g_2) ?= ∏_{i=1}^{n} e(h_i + ts_t + f_1, g_2^{x_i})    (5.9)

Since the signature s3 was generated by L3 using the fingerprint f2, the verification

at the medical device fails. The collusion attack is thus prevented. Note that the

medical device receives the encrypted fingerprint f1 of a reader from the LBS device

through the reader.

This solution is inspired by [50, 56] in which the reader utilizes tamper-proof

hardware such that the authentication material is not revealed to the attacker and

the device cannot be cloned [66]. Another possibility is that the LBS devices perform

device fingerprinting [67] by which they identify each device as unique. The LBS

device can identify the reader by the unique fingerprint that characterizes its signal

transmission. Cellular network companies utilize this process in order to prevent cell

phone cloning fraud [50]. A cloned phone does not have the same fingerprint as the

legal phone with the same electronic identification numbers [50].

Distance Spoofing

In a distance spoofing attack, a malicious reader that is outside the trusted area accesses the medical device by extending its communication range with external antennas. It is known that the Bluetooth transmission range can be increased via an external directional antenna, a technique known as Bluesniping [64]. By utilizing an

external antenna, even if the reader is outside the trusted area, it can send a request

to the LBS device and receive a signature from the LBS device.

For instance, in Figure 5.5, the original radio range of R1 does not cover L3. However, if R1 extends its radio range, the reader can send a request to L3 and receive

a signature.


Fig. 5.5. Bluesniping Attack

In order to prevent the Bluesniping attack, we use rejecters and trusted area identifiers, as described in [55]. When a reader sends a request to the LBS devices, the broadcast request message contains an identifier of the trusted area where the medical device is located. On the other hand, each LBS device has a set of trusted area identifiers. For example, L1, L2, L3, L4 have the trusted area identifier A1 and L1, L5, L6, L7 have the trusted area identifier A2; L1 has two trusted area identifiers, A1 and A2. In Figure 5.5, L7 is set as trusted area A1's rejecter.

Since the medical device is in A1, the request message from R1 should contain the trusted area identifier A1. However, since R1 uses an extended radio range in order to cover L3, its request message is also received at L7. Since L7 is A1's rejecter and the request message contains A1, L7 knows that R1 is not in A1. Then, it raises an alarm. The alarm is transmitted to the medical device through other LBS devices or the administrator. It may take some time to transmit the alarm to the medical device. Therefore, before the medical device sends its medical information to the reader, it

should wait for a predetermined time to check whether there is an alarm.

5.2.4 Secure and Resilient Proximity-based Access Control

As described in the previous section, to access the medical device, the reader

should receive keys and signatures from all n LBS devices. However, there can be

unresponsive LBS devices due to maintenance or denial of service attacks. In such a case, the reader will not be able to access the medical device even if it is in the trusted area. This kind of situation, in which access is unavailable, should be avoided especially in medical

applications. In this section, we provide an enhancement to our scheme for enabling

resilience to failed LBS devices.

To achieve this goal, we utilize the (k, n) threshold algorithm [42]. The idea is to

divide the original key K into n pieces such that the key K is easily reconstructed from

any k pieces. To prevent partially leaking the information about K, it is also crucial

to ensure that obtaining fewer than k pieces does not reveal any information about K. This scheme is based on polynomial interpolation. Given k points (x1, y1), . . . , (xk, yk), there is exactly one polynomial q(x) of degree k − 1 passing through them. To divide the original key K into n pieces, we select a random polynomial of degree k − 1 as follows:

q(x) = a_0 + a_1 x + . . . + a_{k−1} x^{k−1} mod p    (5.10)

Then, K = a_0 and the n partial keys k_i are as follows:

k_1 = q(1), k_2 = q(2), . . . , k_n = q(n)    (5.11)

Given any k partial keys k_{i_1}, . . . , k_{i_k}, we can find the coefficients of q(x) by Lagrange interpolation as follows:

q(x) = Σ_{u=1}^{k} k_{i_u} ∏_{j=1, j≠u}^{k} (x − i_j)/(i_u − i_j) mod p    (5.12)

Then, we can obtain the original key K from q(0).

An example of the (3, 5) threshold algorithm with p = 13 and K = 10 is given below. Suppose that

q(x) = 10 + 7x + 6x^2 mod 13    (5.13)

Then, the five partial keys are as follows:

k1 = q(1) = 10 (5.14)

k2 = q(2) = 9 (5.15)


k3 = q(3) = 7 (5.16)

k4 = q(4) = 4 (5.17)

k5 = q(5) = 0 (5.18)

When the medical device receives the three partial keys k1, k3, k5 from the reader, it can reconstruct the original key as follows:

q(x) = {10 · (x − 3)(x − 5)/((1 − 3)(1 − 5)) + 7 · (x − 1)(x − 5)/((3 − 1)(3 − 5)) + 0 · (x − 1)(x − 3)/((5 − 1)(5 − 3))} mod 13    (5.19)

q(x) = (62 + 111x + 19x^2) mod 13    (5.20)

The medical device derives K ′ = q(0) = 62 mod 13 = 10, which is equal to the

original key K.
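
For concreteness, the following is a minimal Java sketch of the (k, n) threshold scheme as used above; the class and method names are ours. It evaluates q(x) at x = 1, . . . , n to produce the partial keys and reconstructs K = q(0) by Lagrange interpolation modulo p, reproducing the p = 13, K = 10 example.

    import java.math.BigInteger;

    public class ThresholdKeySharing {

        static final BigInteger P = BigInteger.valueOf(13);   // small prime from the example

        // Evaluates q(x) = a0 + a1*x + ... + a_{k-1}*x^{k-1} mod p (Horner's rule).
        static BigInteger eval(BigInteger[] coeff, long x) {
            BigInteger bx = BigInteger.valueOf(x);
            BigInteger result = BigInteger.ZERO;
            for (int d = coeff.length - 1; d >= 0; d--) {
                result = result.multiply(bx).add(coeff[d]).mod(P);
            }
            return result;
        }

        // Reconstructs K = q(0) from k points (x_u, k_u) by Lagrange interpolation mod p.
        static BigInteger reconstruct(long[] xs, BigInteger[] ys) {
            BigInteger secret = BigInteger.ZERO;
            for (int u = 0; u < xs.length; u++) {
                BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
                for (int j = 0; j < xs.length; j++) {
                    if (j == u) continue;
                    num = num.multiply(BigInteger.valueOf(-xs[j])).mod(P);         // (0 - x_j)
                    den = den.multiply(BigInteger.valueOf(xs[u] - xs[j])).mod(P);  // (x_u - x_j)
                }
                secret = secret.add(ys[u].multiply(num).multiply(den.modInverse(P))).mod(P);
            }
            return secret;
        }

        public static void main(String[] args) {
            // q(x) = 10 + 7x + 6x^2 mod 13, so K = a0 = 10 and there are n = 5 partial keys.
            BigInteger[] coeff = { BigInteger.valueOf(10), BigInteger.valueOf(7), BigInteger.valueOf(6) };
            for (long x = 1; x <= 5; x++) {
                System.out.println("k" + x + " = " + eval(coeff, x));   // 10, 9, 7, 4, 0
            }
            // Reconstruct from the three partial keys k1, k3, k5, as in the example.
            long[] xs = { 1, 3, 5 };
            BigInteger[] ys = { eval(coeff, 1), eval(coeff, 3), eval(coeff, 5) };
            System.out.println("K = " + reconstruct(xs, ys));           // 10
        }
    }

Regenerating the partial keys with the same a0 but different higher-order coefficients, as discussed next, only requires changing the coefficient array; the reconstructed K stays the same.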

The (k, n) threshold algorithm has one more important property. It is easy to

change the n partial keys without changing the original key K. If we use the same

a0 with different coefficients ai (i ≥ 1), we can change the n partial keys while keeping the same original key K. This property is useful in our system when we update the partial keys of the LBS

devices.

The threshold algorithm makes our proposed scheme robust against faulty LBS

devices. In the setup phase, the administrator generates an original key K. The

original key is given to the medical device. Note that even if the medical device is

compromised and the original key K is given to the adversary, it does not affect our

secure access control. The purpose of the (k, n) threshold algorithm is to convince

the medical device that the reader received at least k partial keys and signatures.

The administrator divides the original key into n partial keys ki. Each partial key is

given to each LBS device such that each LBS device Li has its own partial key ki.

By the threshold algorithm, if there are at least k working LBS devices, the

reader can receive k partial keys and the k partial keys are verified at the medical

device.

Note that we assume that the parameter k of the (k, n) threshold algorithm is dynamically determined by the administrator to meet various security requirements at different trusted areas. As described earlier, the administrator can change the partial keys without changing the original key K. So, even if the administrator frequently changes the partial keys and the parameter k of the (k, n) threshold algorithm, there is no need to update the original key K at the medical device. This feature provides an easy and efficient key management mechanism.

5.2.5 Covering multiple areas using SPAC

In section 5.2.1, we described a simple configuration with four LBS devices per trusted area when fault tolerance is not considered. In this section, we provide a more complicated configuration with five or more devices per trusted area and show how we can achieve fault tolerance against unresponsive LBS devices. Figure 5.6 shows 12 LBS devices that are deployed for four trusted areas when tolerance against unresponsive LBS devices is not considered. Since the two top and the two left LBS devices can be shared with adjacent areas, we need only 8 LBS devices for the four areas. This means that we need only two LBS devices per trusted area as we increase the number of adjacent trusted areas.

Figure 5.7 shows 14 LBS devices that are placed for four trusted areas considering fault tolerance (the reader only needs to communicate with 3 of the 5 LBS devices forming a trusted area). Because the top two LBS devices and the left two devices can be shared with the adjacent areas, we need only 10 LBS devices for the four areas. Thus, we need to deploy just 2.5 LBS devices per area. Therefore, there is a cost saving

by considering adjacent areas as opposed to trusted areas that are at a distance from

each other.

When the number n of the LBS devices per trusted area is greater than 4, the

average number of LBS devices per trusted area, A_n, is as follows:

A_n = 2·⌊n/4⌋ + ⌊(n − 4·⌊n/4⌋)/2⌋ + (n − 4·⌊n/4⌋ − 2·⌊(n − 4·⌊n/4⌋)/2⌋)/2    (5.21)

If we assume that each area has four sides, and two sides (top and left) are shared with pre-existing areas, then first we can allocate ⌊n/4⌋ LBS devices per side. Second,


Fig. 5.6. Covering multiple areas (no fault tolerance)

Fig. 5.7. Covering multiple areas (fault tolerance(3,5))

we allocate half of the remaining LBS devices, i.e., ⌊(n − 4·⌊n/4⌋)/2⌋ LBS devices, to each pair of two sides (top and left, bottom and right), respectively. Note that the first pair of sides (top and left) is shared with pre-existing areas, while the other two sides (bottom and right) are newly added. Finally, we allocate half of the still remaining LBS devices, i.e., (n − 4·⌊n/4⌋ − 2·⌊(n − 4·⌊n/4⌋)/2⌋)/2 LBS devices, to each of the two pairs of sides, respectively.

For example, when each trusted area needs seven LBS devices, first an LBS device is assigned to each side. Second, an LBS device is assigned to each pair of two sides (i.e., top and left, bottom and right). Finally, there is one remaining LBS device, and it is assigned to the pair of bottom and right. So, the pair of bottom and right has four LBS devices in total and the pair of top and left has three LBS devices in total, the latter being shared with adjacent trusted areas. On average, each trusted area thus needs about four LBS devices when seven LBS devices per area are required and there are multiple adjacent trusted areas.

When an LBS device is shared by two adjacent trusted areas and the reader sends a request message to the LBS device, the request message should contain the identifier A_x of the trusted area which contains the target medical device. Then, the LBS device generates a signature as follows:

s_i = (H(k_i) + ts_t + f_u + A_x)^{x_i}    (5.22)

The signatures are aggregated at the reader and the aggregate signature s = ∏_{i=1}^{n} s_i is verified at the medical device as follows:

e(s, g_2) ?= ∏_{i=1}^{n} e(h_i + ts_t + f_u + A_x, g_2^{x_i})    (5.23)

The final protocol of SPAC is illustrated in Figure 5.8.

Fig. 5.8. SPAC protocol

5.3 Experimental Results

5.3.1 Setup

To show the computational efficiency of our approach, we measure the computation

time for the proposed SPAC protocol at the three components: 1) at the LBS device,


2) at the reader, and 3) at the medical device. The LBS device computes a signature

si for each key ki. The reader computes an aggregate signature s = ∏_{i=1..n} s_i. The

medical device verifies the aggregate signature.

In addition, to show the communication efficiency of our approach, we measure

the communication overhead between a reader and a medical device in: 1) a naive algorithm that uses 1024-bit RSA signatures and 2) our SPAC algorithm. In the naive algorithm, the reader does not compute an aggregate signature and simply forwards the

keys and the signatures to the medical device.

Our prototype is built in Java and has four parties: trusted administrator, trusted

but possibly inoperable LBS devices, reader, and medical device. The administrator

picks an original key K and divides this key into n partial keys ki. The original

key is given to the medical device and each partial key is given to each LBS device.

The aggregate signature is represented as a point on a curve. Each LBS device picks

a random private key x_i and computes its public key g_2^{x_i}, which is then given to the medical device in the setup phase. We use the Jpair package [60] for all bilinear pairing computations. The program achieves

security equivalent to 1024-bit RSA [60].

When the reader broadcasts a request message, the LBS devices send their key ki

and signature si. Then, the reader computes an aggregate signature from the received

signatures. Finally, the medical device's verification runs the bilinear pairing procedure over the aggregate signature.

We conducted computational cost experiments on an Intel i7 CPU (2.2 GHz) with 8 GB RAM running 64-bit Windows 7. Additionally, we tested our scheme on Android OS 4.0.4 [61] running on a Samsung Galaxy S3 [62], which utilizes a quad-core processor clocked at 1.4 GHz. We investigated the performance on Android devices

due to their portability and potential usage in clinical settings.


5.3.2 Computational Cost

Our experiment shows how the number of LBS devices influences the time it takes

for the LBS device to compute a signature for the partial key ki, for the reader to

compute an aggregate signature, and for the medical device to verify the aggregate

signature. The numbers of LBS devices we considered are 3, 5, 7, and 9. Further, the

number of partial keys is equal to the number of LBS devices.

First, we show the computation time of our algorithm on a high-performance PC. Figure 5.9 shows the computation time at the LBS device. The computation time at the LBS device is almost constant, since each LBS device generates a single signature.

Figure 5.10 provides the computation cost at the reader. The reader computes an

aggregate signature. Since the reader receives n signatures from n LBS devices, the

computation time to generate an aggregate signature is proportional to the number

of LBS devices.

Figure 5.11 shows the computation cost at the medical device. The medical device

verifies that the aggregate signature is correct by bilinear mapping. The computation

complexity is linear in the number of LBS devices.

The experimental results indicate that secure access to the medical information

can be completed within about 100 ms. It could be argued that this latency is reasonable for real-world applications.

Fig. 5.9. Computation Time at LBS on Desktop


Fig. 5.10. Computation Time at Reader on Desktop

Fig. 5.11. Computation Time at Medical Device on Desktop

Next, we show the performance of our algorithm on a mobile device, which has relatively constrained computational resources. Figures 5.12 ∼ 5.14 show the computation time when the protocols are executed on an Android 4.0.4 system running on the Samsung Galaxy S3. In Figure 5.14, the experimental results for the medical device on Android show that the verification time is relatively long.

When the number of LBS devices is 3 or 5, we can see that the computation times

are acceptable even with mobile devices. Nevertheless, the operating time should be improved, as an efficient response time is critical for medical emergencies. It is stated in [60] that the Stanford PBC (Pairing-Based Cryptography) library [63] is about six times faster


than Jpair in computing a pairing. In order to reduce the computation time, we can

use PBC instead of Jpair.

Fig. 5.12. Computation Time at LBS on Android

Fig. 5.13. Computation Time at Reader on Android

5.3.3 Communication Cost

In this section, we compare the communication overhead of our SPAC scheme with

that of the naive algorithm that uses 1024-bit RSA signatures. The reader in the naive algorithm does not compute the aggregate signature and simply forwards the keys and the signatures to the medical device. The communication overhead between a reader and a medical device in the naive algorithm is O(n), where


Fig. 5.14. Computation Time at Medical Device on Android

n is the number of LBS devices. However, the communication overhead in SPAC is

O(1) when we consider only the aggregate signature. Therefore, our SPAC scheme

has about three times less communication overhead than the naive algorithm when there is a large number of LBS devices, as shown in Figure 5.15.

Fig. 5.15. Communication Overhead


6. SECURE SENSOR NETWORK SUM AGGREGATION

WITH DETECTION OF MALICIOUS NODES

Wireless sensor networks are being increasingly deployed in many application domains

ranging from environment monitoring to supervising critical infrastructure systems.

Each sensor node measures a certain parameter value of the system, and the measure-

ment results are disseminated towards a central component called the network base

station (or sink) which processes the data. Each node could send its individual read-

ing to the sink via several hops of communication, from node to node. However, such

a method is inefficient, and typically data are aggregated to reduce communication

cost and conserve scarce sensor battery energy. Aggregate functions such as MIN,

MAX, COUNT, SUM, and AVG are computed using in-network aggregation. In this

model, sensor nodes are organized hierarchically into a dissemination tree, and each

node has several children nodes and one parent node. Each node receives messages from its children nodes, performs some computation (i.e., aggregation) based on

the received values and its local value, and sends a single aggregated message to its

parent node [70]. This operation mode significantly reduces the communication overhead,

but introduces concerns from a security point of view. Some sensor nodes may be

compromised by an adversary, and as a result data originating from a large number of

sensors can be lost or forged. Furthermore, in the outsourced aggregation model [71],

the aggregation function is delegated to sensor nodes operated by a third party. In

this case, even in the absence of external attackers, the outsourcing service provider

may not be fully trusted, and could alter the aggregation result.

Several protocols such as SHIA [69], SECOA [71], SIES [68] and others [72–74]

acknowledge this security threat and provide data integrity for hierarchical in-network

aggregation in the presence of malicious nodes. All these protocols deal with stealth

attacks where the malicious nodes try to modify the aggregation result without being


detected. Such techniques can verify whether the aggregation result is correct or not, and if they detect that the data have been tampered with, they raise an alarm. However, they cannot pinpoint the source of the attack. Hence, they cannot identify and

remove malicious nodes, leaving the network vulnerable to denial-of-service attacks.

More recently, several mechanisms have been proposed to find malicious nodes

which are corrupting the process of in-network aggregation [75–77]. However, some

of these protocols [75, 76] depend on SHIA to perform integrity checks, which in

turn has high communication overhead, and assumes that the base station knows the

entire network topology. Furthermore, it is assumed [76] that the nodes can group

themselves into several partitions and the partition leader can communicate with the

base station without interference from the malicious nodes. In [77], a mechanism that

calculates an approximate SUM is proposed, but it cannot handle exact results, and

it requires flooding in order for the base station to communicate with other nodes.

As a result, the communication overhead is very high.

In this chapter, we introduce a novel secure aggregation protocol for SUM which not

only detects forged results, but also localizes malicious nodes and removes them from

the aggregation process. The proposed protocol is efficient, as it uses only symmetric

key cryptography. Furthermore, in order to check the integrity of SUM aggregation,

our technique uses as a building block SIES [68], which incurs significantly lower

communication overhead than SHIA.

In order to find malicious nodes that tamper with in-network aggregation, the base

station must be able to check whether the partial sums are correct at the aggregators as well as at the base station. In addition, attackers can drop messages from the base

station to prevent it from communicating with other nodes, so we need to devise a

reliable and efficient communication method which does not use flooding.

The specific contributions of this chapter are:

• A flexible aggregation structure (FAS) which allows the base station to check

the integrity of partial sums at aggregators as well as the base station, with the

help of checksum information enclosed in each message.


• An advanced ring structure (ARS) which allows reliable and efficient communi-

cation in the presence of malicious nodes. ARS does not employ flooding, hence

is more efficient than existing techniques.

• A divide-and-conquer (DAC) algorithm that builds upon FAS and ARS to find

malicious nodes and to remove them from the aggregation structure, allowing

the network to return to normal behavior after attacks.

The rest of the chapter is organized as follows: In Section 6.1 we outline the system

and attack models. In Section 6.2, we introduce the proposed schemes for detection

of attacks, identification and removal of malicious nodes. Section 6.3 provides a theo-

retical analysis of our proposed methods. In Section 6.4, we evaluate experimentally

the proposed techniques.

6.1 Preliminaries

6.1.1 System Model

We assume a sensor network outsourced aggregation model with two types of

sensor nodes: sources and aggregators [68,71]. Each source produces a sensor reading,

and is situated at the leaf level of the aggregation tree. Aggregation is performed in

aggregator nodes, which belong to a third party service provider. In addition, there

is a base station (or a sink) which receives the final aggregation results, and is also

referred to as querier.

Typically, in sensor networks data are transported along a hierarchical communi-

cation structure called a dissemination tree. However, in the presence of malicious

nodes it is important to use multi-path communication, which is more robust [70].

Each source node has several parent aggregators, and it can select some of them as

its active parents. The reading of each source node is divided into l pieces and sent

to parent aggregators. Therefore, the network topology of the system we consider is

not a tree but a graph.


Each node has an assigned level which represents its distance in hops from the querier. The level of the querier is 1 and it is situated in ring 1 [70, 78]. When a node receives a broadcast message directly from the querier, the level of the node is 2. The nodes which have level k belong to ring k. Source nodes cannot have child nodes; only aggregators and the sink can have child nodes. When an

aggregator receives messages from its children, it computes an aggregated sum that

is subsequently transmitted to its parent aggregator or to the querier.

Our protocol for secure aggregation with detection and removal of malicious nodes

consists of four steps. In the first step, each source node creates l pieces from its own

sensor reading and sends them to its parent nodes. In the second step, aggregators

receive several messages from their child nodes and compute an aggregated SUM, then

send the result to their parent aggregators. In the third step, the querier receives the

final aggregated SUM and verifies it. In the fourth step, if the aggregated SUM

is not correct, the protocol finds malicious aggregators and removes them from the

dissemination graph, then restarts the aggregation process.

6.1.2 Attack Model

We assume that aggregators are operated by a third-party service provider which is

not fully trusted. Aggregators may forge aggregated SUM messages, or drop messages

altogether. In addition, we consider that there may be failed source nodes, but we

do not consider that source nodes can report wrong sensor readings. Note that related approaches from the literature that are orthogonal to our scheme can be used in

conjunction with our methods to achieve protection against malicious sources [68,71,

73,74].

First, we have to achieve confidentiality. A curious aggregator will attempt to

learn the value of a sensor reading. Each source node must hide its sensor reading

from the aggregators by using encryption. Second, we must preserve the integrity

of aggregated SUM. Malicious aggregators will attempt to make the querier accept


forged aggregate SUMs. When the aggregated SUM is modified by the malicious

aggregators, the querier should be able to detect it. Third, if the malicious aggregators

are not found, the querier cannot get the correct aggregate SUM in future rounds

either. When the querier detects that the aggregated SUM is manipulated, it must

be able to find malicious aggregators and remove them, before restarting the next

round of aggregation.

6.1.3 Additively Homomorphic Symmetric Encryption

We employ an additively homomorphic symmetric cryptographic func-

tion that was originally introduced in [79] to achieve confidentiality, and later ex-

tended in [68,74] to also support authentication of messages. This encryption function

is based on a combination of modulo arithmetic in a prime-order group and secret

sharing. Let p be a prime, denote by K0 a secret key known to all data source nodes

but not known to aggregators, and let ki < p be a secret key known only to node

i. These secrets are used as the seeds of pseudo-random functions (PRF), and based

on them the values Kt = HM256(K0, t) and kit = HM256(ki, t) are derived at each

round, using HMAC PRF HM256() implemented with SHA-256.

Let mit < p be the message generated by node i at time t: mit contains the sensor reading vit followed by log n zeros (up to 8 bytes) and a secret share ssit = HM1(ki, t), where HM1() is the HMAC PRF that uses SHA-1. The size of mit is 32 bytes: vit has 4 bytes and ssit has 20 bytes. Even though mit contains log n zeros, we will write mit as vit||ssit for simplicity. We present next the three operations performed using the

additively homomorphic symmetric encryption scheme:

Encryption/Decryption. Node i generates a ciphertext cit of its reading at

time t as follows:

$c_{it} = E(m_{it}, K_t, k_{it}, p) = E(v_{it}\|ss_{it}, K_t, k_{it}, p) = K_t \cdot m_{it} + k_{it} \bmod p = K_t \cdot (v_{it}\|ss_{it}) + k_{it} \bmod p$ (6.1)


The decryption of a ciphertext is performed as follows:

$m_{it} = v_{it}\|ss_{it} = D(c_{it}, K_t, k_{it}, p) = (c_{it} - k_{it}) \cdot K_t^{-1} \bmod p$ (6.2)

where $K_t^{-1}$ is the multiplicative inverse of $K_t$ modulo p.

Homomorphic Property/Aggregation. Given two ciphertexts c1t and c2t cor-

responding to plaintexts m1t and m2t, the encryption of the sum m1t +m2t is:

$c_{1t} + c_{2t} = E(m_{1t}, K_t, k_{1t}, p) + E(m_{2t}, K_t, k_{2t}, p) = K_t \cdot (m_{1t} + m_{2t}) + (k_{1t} + k_{2t}) \bmod p$ (6.3)

which can be decrypted using keys Kt and k1t+k2t as follows.

m1t +m2t = (v1t + v2t)||(ss1t + ss2t) = D(c1t + c2t, Kt, k1t + k2t, p) (6.4)

Authentication. The sum mit = vit||ssit can be extracted from cit using keys Kt

and kit in the decryption function. In addition, each node has its secret share ssit.

The base station knows the sum of all shares st = Σ ssit. In the decryption step, it obtains Σ vit || Σ ssit and verifies that indeed Σ ssit is equal to st. If it is, the result received is authentic. Otherwise, the sink concludes that the aggregated value Σ vit was altered

by malicious nodes.
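To make the arithmetic of this scheme concrete, the following Java sketch shows the three operations (encryption, decryption, and ciphertext addition) using BigInteger arithmetic modulo a prime p. It is a minimal illustration only: the class and method names are ours, the key material is hard-coded toy data, and the packing of the reading and the secret share into a 32-byte message is omitted.

    import java.math.BigInteger;
    import java.security.SecureRandom;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Minimal sketch of the additively homomorphic symmetric scheme of Section 6.1.3.
    // Names (HomCipher, prf256) are illustrative; p, K0 and the per-node keys ki are
    // assumed to be provisioned out of band.
    public class HomCipher {

        // Derive a per-round value HM256(seed, t), reduced mod p.
        static BigInteger prf256(byte[] seed, long t, BigInteger p) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(seed, "HmacSHA256"));
            byte[] digest = mac.doFinal(BigInteger.valueOf(t).toByteArray());
            return new BigInteger(1, digest).mod(p);
        }

        // c_it = K_t * m_it + k_it  (mod p)
        static BigInteger encrypt(BigInteger m, BigInteger Kt, BigInteger kit, BigInteger p) {
            return Kt.multiply(m).add(kit).mod(p);
        }

        // m_it = (c_it - k_it) * K_t^{-1}  (mod p)
        static BigInteger decrypt(BigInteger c, BigInteger Kt, BigInteger kit, BigInteger p) {
            return c.subtract(kit).multiply(Kt.modInverse(p)).mod(p);
        }

        // Aggregation is plain modular addition of ciphertexts.
        static BigInteger aggregate(BigInteger c1, BigInteger c2, BigInteger p) {
            return c1.add(c2).mod(p);
        }

        public static void main(String[] args) throws Exception {
            BigInteger p = BigInteger.probablePrime(256, new SecureRandom());
            long t = 7;                                        // current round
            BigInteger Kt  = prf256("K0".getBytes(), t, p);    // key shared by all sources
            BigInteger k1t = prf256("k1".getBytes(), t, p);    // per-node round keys
            BigInteger k2t = prf256("k2".getBytes(), t, p);

            BigInteger m1 = BigInteger.valueOf(25);            // toy plaintexts; the v||ss
            BigInteger m2 = BigInteger.valueOf(17);            // packing is omitted here
            BigInteger c  = aggregate(encrypt(m1, Kt, k1t, p), encrypt(m2, Kt, k2t, p), p);

            // Decrypting the sum with Kt and k1t + k2t recovers m1 + m2.
            System.out.println(decrypt(c, Kt, k1t.add(k2t), p));   // prints 42
        }
    }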

6.2 Proposed Approach

We present the details of the proposed approach, which consists of several distinct

procedures illustrated by the high-level overview diagram in Figure 6.1. Our solution

consists of two main techniques: flexible secure aggregation (FSA) which allows secure

computation of sums, and divide and conquer (DAC) which identifies malicious nodes

in those cases when FSA detects that the result is incorrect. Both FSA and DAC

rely on two middleware primitives, namely flexible aggregation structure (FAS) and

advanced ring structure (ARS): the former specifies a flexible manner of performing

the aggregation by splitting each value into a number of fragments that are routed on

distinct paths; the latter is an extension of the ring topology from [80] and provides


a robust routing mechanism needed in the DAC malicious node identification phase.

Fig. 6.1. Proposed Scheme Overview

Finally, at the base of FAS and ARS sits the bitmap dissemination method (BDM), a mechanism for disseminating aggregation and routing schedules, which informs the base station about the particular fragmentation and routing schedules used.

6.2.1 Bitmap Dissemination Method

In this section, we present the bitmap dissemination method (BDM) mechanism

which is used to identify the fragmentation schedule used when performing aggrega-

tion. BDM is used in the flexible aggregation structure (FAS) presented in Section

6.2.2, and in the advanced ring structure (ARS) presented in Section 6.2.3.

Generating a Counting Bitmap. The counting bitmap Fi for node i contains n bits Bi, d×n bits Ci, and a MAC which is constructed from Bi, a secret share ssi, and a unique key ki shared with the base station, where n is the number of source nodes and 2^d is the maximum number of routing multipaths. The format of Fi is as follows.

Fi = (Bi||Ci||MACi = E(Bi||ssi, K, ki, p)) (6.5)

Node i sets the i-th bit of Bi to 1. Ci is an array of counters Cik, where Cik represents how many times the k-th bit has been set to 1 when several bitmaps Bj are aggregated. Thus, Ci is equal to Ci1|| . . . ||Cin. In the source node, the i-th counter is set to 1. E() is the additively homomorphic encryption function introduced in Section 6.1.3. For example,


when n is 4, d = 2 and i = 3, then F3 = {B3 = 0010||C3 = 00, 00, 01, 00||MAC3 =

E(0010||ss3, K, k3, p)}.

Aggregation of Counting Bitmaps. When an aggregator receives two counting

bitmaps Fi = (Bi||Ci,MACi = E(Bi||ssi, K, ki, p)) and Fj = (Bj||Cj,MACj =

E(Bj||ssj, K, kj, p)), the aggregated counting bitmap Fa is defined as follows.

Fa = {Ba = Bi ∨ Bj || Ca = Ci ⊕ Cj || MACa = MACi + MACj} (6.6)

where MACi + MACj = E(Bi||ssi, K, ki, p) + E(Bj||ssj, K, kj, p) = E((Bi + Bj)||(ssi + ssj), K, ki + kj, p). The symbol ∨ signifies the bit-wise OR operation and ⊕ signifies the element-wise addition of counters. Thus, when Bi = Bi1|| . . . ||Bin and Bj = Bj1|| . . . ||Bjn, then Bi ∨ Bj = (Bi1 ∨ Bj1)|| . . . ||(Bin ∨ Bjn), and when Ci = Ci1|| . . . ||Cin and Cj = Cj1|| . . . ||Cjn, then Ci ⊕ Cj = (Ci1 + Cj1)|| . . . ||(Cin + Cjn).

Authentication of Counting Bitmaps. When the base station receives Fa, since it knows every ssi and ki, it can obtain Σ Bi from the aggregated MACa. Also, it can reconstruct Σ Bi from Ba and Ca. If there were no malicious nodes tampering with the bitmaps, these two values should be equal.

For example, suppose there are two counting bitmaps F1 = {B1 = 01110||C1 = 01110||MAC1 = E(01110||ss1, K, k1, p)} and F2 = {B2 = 01011||C2 = 01011||MAC2 = E(01011||ss2, K, k2, p)}. An aggregator computes the aggregated counting bitmap Fa = {Ba = B1 ∨ B2 = 01111||Ca = C1 ⊕ C2 = 02121||MACa = E(B1 + B2 = 11001||(ss1 + ss2), K, k1 + k2, p)}. When the base station receives it, since it knows k1 + k2 and ss1 + ss2, it can get B1 + B2 = 11001 from MACa. On the other hand, it can also get B1 + B2 = 11001 from Ba and Ca: since Ba = B1 ∨ B2 = 01111 and Ca = C1 ⊕ C2 = 02121, the base station computes B1 + B2 = 01111 + 01000 + 00010 = 11001, because the 2nd counter in Ca is 2 and the 4th counter in Ca is 2. The base station thus verifies that the aggregated counting bitmap has not been modified, so the integrity of the counting bitmaps is preserved.
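The following Java sketch illustrates the two BDM operations described above: aggregating two counting bitmaps (bit-wise OR of the bitmaps, element-wise addition of the counters, modular addition of the MACs) and recomputing, at the base station, the counter-weighted bitmap sum that the aggregated MAC should decrypt to. The class is illustrative, not our actual implementation.

    import java.math.BigInteger;

    // Sketch of counting-bitmap aggregation (BDM), Section 6.2.1. The bitmap is kept as a
    // boolean[] and the counters as an int[]; the MAC field is a ciphertext of the scheme
    // from Section 6.1.3 and is simply added mod p. Class and field names are illustrative.
    public class CountingBitmap {
        boolean[] B;       // n bits: which sources contributed
        int[] C;           // n counters: how many times each source contributed
        BigInteger mac;    // additively homomorphic MAC

        CountingBitmap(boolean[] B, int[] C, BigInteger mac) {
            this.B = B; this.C = C; this.mac = mac;
        }

        // F_a = (B_i OR B_j || C_i (+) C_j || MAC_i + MAC_j mod p)
        static CountingBitmap aggregate(CountingBitmap x, CountingBitmap y, BigInteger p) {
            int n = x.B.length;
            boolean[] B = new boolean[n];
            int[] C = new int[n];
            for (int i = 0; i < n; i++) {
                B[i] = x.B[i] || y.B[i];      // bit-wise OR
                C[i] = x.C[i] + y.C[i];       // element-wise counter addition
            }
            return new CountingBitmap(B, C, x.mac.add(y.mac).mod(p));
        }

        // At the base station: rebuild the bitmap sum that the aggregated MAC should encode,
        // i.e. each set bit weighted by its counter. With B1 = 01110 and B2 = 01011 this
        // yields 01111 + 01000 + 00010 = 11001, as in the example above.
        static BigInteger expectedBitmapSum(CountingBitmap a) {
            int n = a.B.length;
            BigInteger sum = BigInteger.ZERO;
            for (int i = 0; i < n; i++) {
                BigInteger weight = BigInteger.ONE.shiftLeft(n - 1 - i);  // bit i has weight 2^(n-1-i)
                sum = sum.add(weight.multiply(BigInteger.valueOf(a.C[i])));
            }
            return sum;   // compared against the bitmap sum recovered from MAC_a
        }
    }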


Fig. 6.2. Flexible Aggregation Structure

6.2.2 Flexible Aggregation Structure

In this section, we introduce the flexible aggregation structure (FAS) that builds

upon the bitmap dissemination method (BDM) presented in Section 6.2.1. This

structure is later used as a building block in the flexible secure aggregation (FSA)

scheme we will present in Section 6.2.4, as well as in the divide-and-conquer (DAC)

algorithm for finding malicious nodes in Section 6.2.5.

The goal of FAS is to allow the base station to verify whether the received partial

sums are correct at the aggregator level. This is an improvement compared to previous

work [68,71,73], where the base station can verify only the final aggregated results, but

not partially aggregated results. Since each aggregator has its own counting bitmap

which keeps track of which source nodes are its descendants, the base station can

perform verification after receiving the partially aggregated sums and corresponding

counting bitmaps from its aggregators.

FAS allows the base station and aggregators to determine which nodes are their

descendants. For example, since the aggregated counting bitmap FA5 of the aggre-

gator A5 is {BA5 = 11100||CA5 = 22100||MACA5} in Figure 6.2, A5 knows that its

descendants are the source nodes S1, S2, and S3, and that ss1, k1, ss2 and k2 are

added two times in the MACA5 since the 1st and 2nd counters of CA5 are 2. This

information is used in the verification of FAS at the base station.


In contrast with the aggregation tree structure that is typically used in most pre-

vious research [68, 69, 71, 77], FAS results in an aggregation graph, which improves

resilience to attacks by ensuring that a single malicious aggregator cannot completely

compromise the data from any source node. Thus, each source node can have several

parent aggregator nodes as shown in Figure 6.2. If a source node has several aggre-

gators in its transmission range, they become its parent aggregators. Assume that

the querier wants each source node to have l parent nodes. In Figure 6.2, l = 2. If

a source node has more parent nodes than l, it selects l parent nodes among them.

Otherwise, it has to send its message l times to its parent node. When there are

no malicious nodes, the base station should receive an aggregated counting bitmap whose counter field is 22222 in Figure 6.2. On the other hand, if there are malicious nodes or failed source nodes, some counters of the aggregated bitmap may be less than l after verification of the result. If a counter is less than l and greater than 0, it means that some of the immediate parent nodes are certainly malicious and are dropping messages. When a counter is equal to 0, it may be either because of malicious nodes

or because of failed source nodes.

The execution of the FAS protocol proceeds as follows:

Step 1. The base station broadcasts a FAS construction message. When an

aggregator i receives this message from another aggregator j for the first time, j be-

comes the parent aggregator of i. On the other hand, when a source node receives the

FAS construction message from several aggregators, these become its parent nodes.

Step 2. Each source node i generates a counting bitmap Fi containing n-bits Bi,

d × n-bits Ci and a MACi generated from Bi, secret share ssi and a unique key ki

and sends it to l parent aggregators. If a source node has fewer parent nodes than l, it

sends the counting bitmap for a total of l times to its parent nodes.

Fi = (Bi||Ci||MACi = E(Bi||ssi, K, ki, p)) (6.7)


Step 3. When an aggregator receives several messages from other nodes, they

are aggregated using BDM. The resulting sum is transmitted to only one parent

aggregator.

Fa = (Ba = Bi ∨ Bj || Ca = Ci ⊕ Cj || MACa = MACi + MACj) (6.8)

Step 4. The aggregated message is verified at the base station using the BDM authentication described in Section 6.2.1.

The FAS protocol is performed only once, in the setup phase, and it is not executed again unless malicious nodes are found. If malicious nodes modify the aggregated bitmap, the verification of FAS will fail at the querier.
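The verification performed by the querier can be sketched as follows in Java, assuming the HomCipher and CountingBitmap helpers shown earlier. Because the counters tell the base station how many times each source's pair (ssi, ki) was folded into the aggregate, it can rebuild the expected decryption key and the expected plaintext. The sketch models the concatenation Bi||ssi as Bi · 2^160 + ssi (ssi being 20 bytes) and assumes p is large enough that the sums do not wrap around; the actual packing in our implementation may differ.

    import java.math.BigInteger;

    // Sketch of FAS verification at the base station (Section 6.2.2). Names are illustrative.
    public class FasVerifier {

        // counters[i] : i-th counter of the aggregated counting bitmap C_a
        // ss[i], k[i] : secret share and key of source i (known to the base station)
        static boolean verify(BigInteger aggregatedMac, int[] counters,
                              BigInteger[] ss, BigInteger[] k,
                              BigInteger K, BigInteger p) {
            int n = counters.length;
            BigInteger expectedShare = BigInteger.ZERO;   // sum_i C_i * ss_i
            BigInteger keySum        = BigInteger.ZERO;   // sum_i C_i * k_i
            BigInteger expectedBits  = BigInteger.ZERO;   // sum_i C_i * 2^(n-1-i)
            for (int i = 0; i < n; i++) {
                BigInteger c = BigInteger.valueOf(counters[i]);
                expectedShare = expectedShare.add(c.multiply(ss[i]));
                keySum        = keySum.add(c.multiply(k[i]));
                expectedBits  = expectedBits.add(c.multiply(BigInteger.ONE.shiftLeft(n - 1 - i)));
            }
            // Decrypt MAC_a with (K, sum_i C_i * k_i), as in Section 6.1.3.
            BigInteger plain = aggregatedMac.subtract(keySum).multiply(K.modInverse(p)).mod(p);
            // Expected plaintext, modelling B_i || ss_i as B_i * 2^160 + ss_i.
            BigInteger expected = expectedBits.shiftLeft(160).add(expectedShare).mod(p);
            return plain.equals(expected);   // false => the aggregate was tampered with
        }
    }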

6.2.3 Advanced Ring Structure

The advanced ring structure (ARS) allows the base station to communicate using

multipath with the child nodes of an aggregator suspected of being malicious. In

our attack model, we assume that the malicious nodes may drop messages from the

base station to a node. A naive method for the base station to communicate with

child nodes of a suspected aggregator via multipath is to use flooding [77]. However,

as discussed earlier, flooding is inefficient. For this reason, a ring communication

structure was proposed in previous work [78, 80], which we extend in this chapter.

The ring structure can reduce the communication overhead compared to flooding.

However, the original ring structure still has high communication overhead. For

example, if a destination node is at level i, all nodes at levels less than i should forward the message when the base station broadcasts it. ARS reduces communication overhead

compared to the original ring structure, and also provides multipath communication

towards the base station. In ARS, even if a node has a level less than i, it does not

forward a message when the destination is not its descendant.

The ARS protocol is executed as follows:

Step 1. After the base station produces a flexible aggregation structure (FAS)

for the first time, it broadcasts an ARS construction message.


Fig. 6.3. Advanced Ring Structure

Step 2. If a source node Si receives the message, it sends a counting bitmap

Fi = Bi||Ci||MACi generated by using the bitmap dissemination method (BDM) to

its l parent aggregators.

Step 3. When an aggregator receives several counting bitmaps from other nodes,

they are aggregated. Then, the result is transmitted to l parent aggregators.

Step 4. The aggregated counting bitmap is verified at the base station.

There is an important difference between FAS and ARS. In FAS, when an aggre-

gator A generates an aggregated counting bitmap FA, it sends FA to only one parent

node as shown in Figure 6.2. In contrast, in ARS an aggregated counting bitmap

FA is sent to l parent nodes, as shown in Figure 6.3. The difference arises since the purpose of FAS is aggregation, whereas that of ARS is supporting multipath communication towards the base station. Like FAS, ARS is executed only once, in the setup phase, and it is not executed again unless malicious nodes are detected.

Suppose node A5 is malicious in Figure 6.3. In the Divide-and-Conquer (DAC)

algorithm that will be presented later in Section 6.2.5, the base station will commu-

nicate with nodes A1 and A2 which are the child nodes of the suspected node A5. So,

when the base station wants to communicate with the node A2, it knows that it can

communicate with it via node A6. Since bitmap BA2 of FA2 in the node A2 is equal

to 01100 and BA6 in the node A6 is 01111, BA6 contains BA2.


Next, the base station broadcasts a partial sum request (PSR) message whose destination is node A2, with the destination bitmap set to BA2 = 01100. The malicious node A5 may drop it, but node A6 will receive it. Since BA6 (= 01111) contains BA2 (= 01100), A2 is a child node of A6, so A6 will forward the message to node A2. On the other hand, if there is another node A7 in ring 2 which does not have node A2 as its descendant, it will drop the PSR message, since BA7 does not contain BA2. So, ARS can reduce communication overhead compared to the original ring structure. In contrast, in the original ring structure, node A7 would forward the message to its child nodes, since it does not know that node A2 is not its descendant. Therefore, ARS has significantly less communication overhead than the original ring structure.
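The forwarding decision in ARS therefore reduces to a bitmap containment test: a node relays a PSR message only if the destination's bitmap is covered by its own aggregated bitmap. A small Java sketch of this rule is shown below; the BitSet representation and the bitmap assumed for node A7 are illustrative.

    import java.util.BitSet;

    // Sketch of the ARS forwarding rule (Section 6.2.3): forward a PSR message only if the
    // destination's bitmap is contained in this node's aggregated bitmap.
    public class ArsForwarding {

        // Returns true when dest is a subset of mine (dest AND mine == dest).
        static boolean shouldForward(BitSet mine, BitSet dest) {
            BitSet uncovered = (BitSet) dest.clone();
            uncovered.andNot(mine);        // bits of dest that mine does not cover
            return uncovered.isEmpty();    // empty => destination is a descendant, so forward
        }

        public static void main(String[] args) {
            BitSet bA6 = bits("01111");    // node A6 in Figure 6.3
            BitSet bA2 = bits("01100");    // destination node A2
            BitSet bA7 = bits("00011");    // hypothetical bitmap for a node A7 that does not
                                           // have A2 among its descendants
            System.out.println(shouldForward(bA6, bA2));   // true  -> A6 forwards the PSR
            System.out.println(shouldForward(bA7, bA2));   // false -> A7 drops the PSR
        }

        static BitSet bits(String s) {
            BitSet b = new BitSet(s.length());
            for (int i = 0; i < s.length(); i++) if (s.charAt(i) == '1') b.set(i);
            return b;
        }
    }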

Moreover, ARS has one other important advantage. By using ARS, we can dis-

tinguish failed source nodes from message dropping attacks. If a node is alive, it can

send its bitmap to the base station by using ARS. So, when a bit of the FAS bitmap is equal to 0, if the corresponding bit of the ARS bitmap is also 0, the node corresponding to that bit is a dead node. Otherwise, it is a node that has been the victim of message dropping. For example, let the second bit of the FAS bitmap be 0. This means that the second source node is dead or a message drop attack occurred. But if the source node is not dead, the second bit of the ARS bitmap is not equal to 0 with high probability, as we will show later in Section 6.3.2. Then, it must be the victim of a message dropping attack.

6.2.4 Flexible Secure Aggregation

FSA consists of the following phases:

Encryption of a Sensor Reading. When a source node Si has a sensor reading vi and l parent aggregators, it randomly divides vi into l pieces vir, . . . , vis, where vi = Σ vij for r ≤ j ≤ s and l = s − r + 1, and it generates l encrypted messages cij = E(vij||ssi, K, ki, p) = K · (vij||ssi) + ki mod p, where ssi and ki are secret keys shared with the base station. Then it sends each cij to its parent node j.
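A possible implementation of this source-side step is sketched below in Java: the reading is divided into l random non-negative pieces that sum to the original value, and each piece is encrypted with the scheme of Section 6.1.3. The helper is illustrative; it places the 4-byte piece in the top bytes of the 32-byte plaintext and the 20-byte share in the low bytes, with zero padding in between, following the layout described in Section 6.1.3.

    import java.math.BigInteger;
    import java.security.SecureRandom;

    // Sketch of the FSA encryption phase (Section 6.2.4): split a reading into l random
    // pieces that sum to it, and encrypt each piece for a different parent aggregator.
    public class FsaSource {

        static BigInteger[] splitAndEncrypt(long reading, int l, BigInteger ssi,
                                            BigInteger K, BigInteger ki, BigInteger p) {
            SecureRandom rnd = new SecureRandom();
            long[] pieces = new long[l];
            long remaining = reading;
            for (int j = 0; j < l - 1; j++) {
                pieces[j] = remaining == 0 ? 0 : Math.floorMod(rnd.nextLong(), remaining + 1);
                remaining -= pieces[j];
            }
            pieces[l - 1] = remaining;                    // the l pieces now sum to the reading

            BigInteger[] ct = new BigInteger[l];
            for (int j = 0; j < l; j++) {
                // v_ij in the top 4 bytes of the 32-byte plaintext, ss_i in the low 20 bytes,
                // zero padding in between (layout of Section 6.1.3).
                BigInteger m = BigInteger.valueOf(pieces[j]).shiftLeft(224).add(ssi);
                ct[j] = K.multiply(m).add(ki).mod(p);     // c_ij = K * (v_ij || ss_i) + k_i (mod p)
            }
            return ct;                                    // one ciphertext per parent aggregator
        }
    }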


Aggregation. An aggregator j receives m messages from its child nodes (p ≤ i ≤ q and m = q − p + 1). It produces an aggregated sum cj = cpj + · · · + cqj and sends it to its parent aggregator. When there are two messages c1 and c2, the aggregation is done as follows.

c1+c2 = E(v1||ss1, K, k1, p)+E(v2||ss2, K, k2, p) = K·{(v1+v2)||(ss1+ss2)}+(k1+k2) mod p

(6.9)

Authentication. Since, thanks to FAS, the base station knows how many times each pair of secret keys ssi and ki is added into the aggregated sum, it can verify whether the sum is correct or not. So, the integrity of FSA is preserved.

For example, in Figure 6.3, node A2 receives c22 and c31 and node A3 receives c32 and c41. Node A2 sends c22 + c31 to node A5, and node A3 sends c32 + c41 to node A6. Then, node A5 receives c11 + c12 + c21 and c22 + c31, and node A6 receives c32 + c41 and c42 + c51 + c52. Next, the base station (i.e., the querier Q) receives c11 + c12 + c21 + c22 + c31 and c32 + c41 + c42 + c51 + c52. The aggregated sum at the base station is Σ ci = (c11 + c12) + (c21 + c22) + (c31 + c32) + (c41 + c42) + (c51 + c52), where the counter field Cq of the aggregated counting bitmap Fq is 22222 according to FAS. Thus, ss1, k1, ss2, k2, ss3, k3, ss4, k4, ss5, and k5 are each added two times. The base station can get the aggregated sum Σ vi from the aggregated message Σ ci. The value Σ ssi extracted from Σ ci must be equal to s, since the base station knows all ssi and computes s = Σ ssi taking the aggregated counting bitmap into account. Otherwise, the base station will detect an attack and will run the divide-and-conquer (DAC) algorithm, which will be presented in Section 6.2.5, in order to identify malicious nodes.

6.2.5 DAC Algorithm for Finding Malicious Nodes

The goal of divide-and-conquer (DAC) is to find malicious nodes when the aggre-

gated sum is detected to be incorrect at the base station. With the help of DAC, the

base station is able to check all of its child nodes in the aggregation structure. If it

receives an aggregated counting bitmap and a partial sum of an aggregator via ARS,

it can check whether the aggregator is correct. If an aggregator is correct, the base


station doesn't need to check the sub-tree rooted in the aggregator using the divide-

and-conquer approach. However, if an aggregator is reporting an incorrect sum, the

base station will check all of the child nodes of the aggregator recursively. In addition,

even if a partial sum of an aggregator is not correct, if it is equal to the addition of

the partial sums of all its child nodes, the aggregator is considered not malicious, as

explained next.

Consider the example in Figure 6.3, and assume that the node A5 is malicious

and that node A6 is honest. By checking the counting bitmap of the partial sum, the

base station verifies that node A6 is honest and that node A5 is not honest. So, the

base station doesn't need to check the nodes A3 and A4, which are child nodes of node

A6. On the other hand, it should check nodes A1 and A2 (which are child nodes of

the node A5) recursively by using ARS.
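The recursive structure of DAC can be summarized by the following Java sketch. The Node and PartialSum types are placeholders: requestPartialSum stands for fetching an aggregator's partial sum and counting bitmap over ARS, and the verified flag stands for the outcome of the FAS/FSA check at the base station described above.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the divide-and-conquer search for malicious aggregators (Section 6.2.5).
    public class DacSearch {

        interface Node {
            List<Node> children();
            PartialSum requestPartialSum();   // delivered to the base station over ARS
        }

        static class PartialSum {
            long sum;          // reported (decrypted) partial SUM
            boolean verified;  // does it pass the FAS/FSA integrity check?
        }

        // Recursively descend only into subtrees whose reported partial sum fails verification.
        static void findMalicious(Node node, List<Node> malicious) {
            PartialSum ps = node.requestPartialSum();
            if (ps.verified) return;                      // consistent subtree: prune it

            long childTotal = 0;
            List<Node> badChildren = new ArrayList<Node>();
            for (Node child : node.children()) {
                PartialSum cps = child.requestPartialSum();
                childTotal += cps.sum;
                if (!cps.verified) badChildren.add(child);
            }
            // The node's sum is wrong AND differs from what its children reported:
            // the node itself tampered with the aggregate.
            if (childTotal != ps.sum) malicious.add(node);
            // Either way, keep descending into the children that also fail verification.
            for (Node child : badChildren) findMalicious(child, malicious);
        }
    }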

The principles behind the functionality of DAC are similar to those of the ma-

licious node identification technique proposed in [76]. However, the distinguishing

characteristic is that our method does not require partitioning [76] and leader selec-

tion [81]. In DAC, when there is an aggregator which has several child nodes, the

sub-trees rooted in the child nodes are partitions and the child nodes are the leaders

of the partitions.

In addition, [76] has three more disadvantages. First, it depends on SHIA [69] to

check the integrity of the sum. But SHIA has a higher communication overhead of O(d · n · log² n), where d is the maximum node degree, whereas DAC depends on FSA, which incurs

overhead O(n). Second, it assumes that the partition leader can communicate with

the base station via multi-hop communication. However, in our attack model, mali-

cious nodes can drop messages between the base station and nodes. So, it cannot be

guaranteed that the nodes communicate with the base station unless using flooding,

which has much higher communication overhead. On the other hand, in DAC the

base station can communicate with nodes by using ARS which has significantly lower

communication overhead than flooding and the original ring structure. Third, [76]

has to divide each partition until the size of the partition is one, in order to find


malicious nodes. In contrast, in DAC when the sum of an aggregator is not correct

and all its child nodes are correct, it means that only the aggregator is malicious.

In this case, the base station doesn't need to generate more partitions as in [76].

Therefore, DAC can reduce computation and communication overhead significantly

compared to [76].

6.3 Analysis

6.3.1 Security Analysis

THEOREM 1. In the bitmap dissemination method (BDM), the probability that the base station accepts an incorrect $F = (B_s \| C_s \| MAC_s)$ is lower than $2^{-256}$.

PROOF. Suppose that the base station receives $B_s = \vee_i B_i$, $C_s = \oplus_i C_i$, and $MAC_s = \sum_i MAC_i$. By the definition of BDM, let $B_s$ be $B_{s1}\| \ldots \|B_{sn}$ and $C_s$ be $C_{s1}\| \ldots \|C_{sn}$. So, $\sum_i B_i = \sum_i C_{si} \cdot 2^{n-i}$. On the other hand, we can get $\sum_i B_i$ from $\sum_i B_i = (MAC_s - \sum_i C_{si} k_i) \cdot K^{-1}$ (for simplicity, we do not consider $ss_i$ here, but the reasoning still holds). Therefore, from these two equations, $\sum_i C_{si} \cdot 2^{n-i} = (MAC_s - \sum_i C_{si} k_i) \cdot K^{-1}$. So, $MAC_s = K \cdot \sum_i C_{si} \cdot 2^{n-i} + \sum_i C_{si} k_i$.

If an adversary wants to forge the counting bitmap, it has to modify $C_{si}$ and $MAC_s$ consistently. But, since it knows neither $k_i$ nor $K$, this can only happen with probability $2^{-256}$, which is negligible.

6.3.2 Reliability of Multipath Routing

We compare the resilience of a single aggregation tree against that of using mul-

tipath. We use p as the probability that a node is malicious and h as the maximum

number of hops from the base station. This analysis is similar to [80].

We assume that we have a complete d-ary tree of height h. The probability that a value from the base station reaches a node at level i is proportional to $(1-p)^i$. The expected number of successful transmissions is $E(success) = \sum_i (1-p)^i \cdot n_i$, where $n_i$ is the number of nodes at level i. This gives $E(success) = \sum_i ((1-p)d)^i = \frac{(d-pd)^{h+1} - 1}{d - pd - 1}$. For h = 10, d = 3 and p = 0.1 (10% malicious nodes), the expected number of successful transmissions is poor, only about 0.369n (roughly 37% of the nodes).

We assume that starting with the base station at level 0, each node at level i

has exactly d neighbors within its broadcast range at level i + 1. From these, each

node selects k ≤ d as its children and it transmits its message to all these k nodes.

Let $E_i$ denote the event that a copy of the message of the base station reached level i, conditioned on the message having reached level i − 1. So, $\Pr[E_i] = 1 - p^k$, and the probability of a message successfully reaching a node at level h is $\prod_i \Pr[E_i] = (1-p^k)^h$. The expected number of successful transmissions is

$E(success) = \sum_i (1-p^k)^i \cdot n_i = \sum_i ((1-p^k)d)^i = \frac{(d-p^k d)^{h+1} - 1}{d - p^k d - 1}$. (6.10)

For k = 2, p = 0.1 and h = 10 we get E(success) ≈ 0.9n (n is the number of

nodes). For k = 3 the bound is close to 0.99n.
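The closed-form expressions above can be checked numerically with a short Java program, shown below. It simply evaluates the formulas and prints the fraction of nodes reached; d = 3, h = 10 and p = 0.1 are assumed for all three cases (the text states d = 3 explicitly only for the single-path case).

    // Numerical check of the multipath reliability analysis in Section 6.3.2 for a complete
    // d-ary dissemination structure of height h, malicious-node probability p, and k parents.
    public class MultipathReliability {

        // sum over levels i = 0..h of ((1 - p^k) * d)^i = ((d - p^k d)^{h+1} - 1) / (d - p^k d - 1)
        static double expectedSuccess(int d, double p, int h, int k) {
            double q = (1 - Math.pow(p, k)) * d;
            return (Math.pow(q, h + 1) - 1) / (q - 1);
        }

        static double totalNodes(int d, int h) {
            return (Math.pow(d, h + 1) - 1) / (d - 1);
        }

        public static void main(String[] args) {
            int d = 3, h = 10;
            double p = 0.1;
            System.out.printf("single path (k=1): %.3f of all nodes%n",
                    expectedSuccess(d, p, h, 1) / totalNodes(d, h));   // ~0.369
            System.out.printf("k = 2 parents:     %.3f of all nodes%n",
                    expectedSuccess(d, p, h, 2) / totalNodes(d, h));   // ~0.91
            System.out.printf("k = 3 parents:     %.3f of all nodes%n",
                    expectedSuccess(d, p, h, 3) / totalNodes(d, h));   // ~0.99
        }
    }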

6.4 Experimental Results

We implemented the proposed protocols in Java, using JDK 1.6. We ran our ex-

periments on a 2.1 GHz Intel Core 2 Duo with 3 GB of RAM running Windows Vista. We also implemented SHIA [69] and SIES [68] as benchmarks. We consider a 400m × 400m sensor network, where each sensor node has a 50m transmission range [82]. We use

the network topology from the real dataset Intel Lab [83].

6.4.1 Flexible Secure Aggregation

In this section, we measure the communication overhead of SHIA, SIES, and FSA.

In FSA-1, each node has a single parent node. In FSA-2, each node has at most two

parent nodes. In FSA-4, each node has at most four parent nodes. Since in SIES each node also has a single parent node, it shows the same performance as FSA-1. In SHIA, each node has O(d · log² n) node congestion, where d is the maximum degree, since it has to broadcast the final sum and the off-path values. So, when there are n nodes in the network, its


communication overhead is O(d · n · log² n). On the other hand, SIES and FSA have O(1)

node congestion and O(n) communication overhead. In Figure 6.4, when there are

1000 nodes, SHIA incurs about 100 times higher communication overhead than SIES

and FSA. Hence, SHIA is not practical and it should not be used to check integrity

in secure aggregation. Since in FSA each source node divides its sensor readings into

l pieces and sends them to its parent nodes, it has slightly higher communication

overhead than SIES. But it is still much less expensive than SHIA. Furthermore, the functionality of FSA is far superior to that of SIES, which does not allow identification and removal of malicious nodes.

Fig. 6.4. Communication Overhead in Aggregation

6.4.2 Advanced Ring Structure

To evaluate ARS, we performed two experiments. In the first experiment, the

network size is 400m × 400m and we use 500 nodes. This experiment illustrates how

many nodes can receive a message from the base station in the presence of malicious

nodes. Figure 6.5 shows the number of nodes which receive a message from the base

station according to the percentage of malicious nodes. In the Flooding benchmark,

if a node receives a message, it forwards it to all neighbors. In the Ring benchmark,

it forwards only to nodes in the next level. In ARS, when a node has the destination

node as a descendant, it forwards the message to its children. If its counting bitmap


contains the destination's counting bitmap, it means that the destination is a descen-

dant of the node. So, it forwards the message to its child nodes. But if a malicious

node receives a message from the base station, it does not forward it to the neighbor

nodes or child nodes.

Flooding obtains 100% transmission success rate: all messages from the base

station are transmitted to all honest nodes. As expected, Ring and Advanced Ring

Structure (ARS) perform slightly worse than Flooding, but they still achieve 90%

transmission success rate. ARS-1, which is a traditional aggregation tree, obtains only a 30% success rate. Since each node has only one parent, if its parent node is malicious, it cannot receive any message from the base station. This proves the benefit obtained

by using multipath routing.

ARS-4 shows similar performance to Ring. However, the communication over-

head in ARS-4 is about 10 times less than Ring, as shown in Figure 6.6. If there are

more nodes in the network, ARS-4 and Ring will show higher transmission success

rate, since there will be more opportunities for multipath routing.

The next experiment measures the communication overhead of Flooding, Ring

and ARS versus the number of sensor nodes in Figure 6.6. We count the number of

sent packets when the base station sends a message to all nodes. In Flooding, all

nodes should send a packet. On the other hand, in Ring, when a destination node has level i, nodes whose level is greater than or equal to i do not forward the message. So, the communication overhead of Ring is lower than that of Flooding. In addition, in ARS, even if a node

has a higher level than the destination node, if the destination node is not among its

descendants, it does not need to forward the message. Therefore, the communication

overhead of ARS is significantly lower (about 10 times less) than Ring.

6.4.3 Divide and Conquer Algorithm Evaluation

Figure 6.7 shows the number of malicious nodes which are found by DAC. DAC-2

and DAC-4 perform better than DAC-1. Since each node has only one parent node


in DAC-1, if the parent node is malicious, the base station can't communicate with its child nodes in order to find malicious nodes. So, DAC-1 can't find the malicious aggregators at all.

Fig. 6.5. Number of Successful Transmissions

Fig. 6.6. Communication Overhead in ARS

When there is a small number of malicious nodes, DAC-2 and DAC-4 can find most of the malicious nodes by using ARS. In addition, in a denser network, we expect that DAC would find more malicious nodes than in the network setting we used. Sometimes, when all parent nodes of a node are malicious, the base station can't communicate with that node. In this case, even if the node is honest, it will

be regarded as malicious. Still, the base station will produce new FAS and ARS


instances excluding those nodes and will be able to perform aggregation correctly

again.

Fig. 6.7. Performance of DAC


7. SUMMARY

In this dissertation, we considered privacy-preserving query processing, authenticated query processing, and malicious node detection in outsourced cloud environments. For privacy-preserving query processing, we proposed a secure kNN query processing scheme, based on a secure polygon enclosure evaluation scheme and a secure distance comparison method built on Voronoi diagrams, Delaunay triangulation, and mutable order-preserving encryption, and we proposed a secure proximity detection scheme, based on a secure point evaluation method and a secure line evaluation method built on Paillier encryption, AES encryption, and the GT protocol. For authenticated query processing, we proposed an authenticated top-k aggregation scheme using Merkle Hash Trees and Condensed-RSA, and a secure proximity-based access control scheme using Bluetooth, bilinear mapping, and the Threshold Algorithm. For malicious node detection, we proposed a flexible secure aggregation scheme and a divide-and-conquer algorithm for finding malicious nodes, based on the bitmap dissemination method, the flexible aggregation structure, and the advanced ring structure.

For future work, query processing on encrypted data needs to support a wider variety of operations beyond the equality, range, kNN, proximity, and SUM queries addressed here. In addition, authenticated query processing needs to be considered together with privacy-preserving query processing.


LIST OF REFERENCES

[1] Sunoh Choi et al., Secure kNN Query Processing in Untrusted Cloud Environments, IEEE TKDE, 2014

[2] Sunoh Choi et al., Secure Proximity Detection in Untrusted Cloud Environments, submitted to VLDB, 2014

[3] Sunoh Choi et al., Authenticated Top-K Aggregation in Distributed and Outsourced Databases, IEEE PASSAT, 2012

[4] Sunoh Choi et al., Secure and Resilient Proximity-based Access Control, ACM DARE, 2013

[5] Sunoh Choi et al., Secure Sensor Network SUM Aggregation with Detection of Malicious Nodes, IEEE LCN, 2012

[6] Raluca Ada Popa et al., An Ideal-Security Protocol for Order-Preserving Encoding, IEEE SP, 2013

[7] Der-Tsai Lee, On k-Nearest Neighbor Voronoi Diagrams in the Plane, IEEE Transactions on Computers, 1982

[8] Pankaj K. Agarwal et al., Constructing Levels in Arrangements and Higher Order Voronoi Diagrams, SIAM J. Comput., 1998

[9] Huiqi Xu et al., Building Confidential and Efficient Query Services in the Cloud with RASP Data Perturbation, IEEE TKDE, 2012

[10] Jon Louis Bentley, Multidimensional Binary Search Trees used for Associative Searching, Communications of the ACM, 1975

[11] Thomas Roos, Voronoi diagrams over dynamic scenes, Discrete Applied Mathematics, 1993

[12] http://www.qhull.org

[13] Haibo Hu et al., Processing Private Queries over Untrusted Data Cloud through Privacy Homomorphism, IEEE ICDE, 2011

[14] Gruteser M. et al., Anonymous usage of location-based services through spatial and temporal cloaking, ACM MOBISYS, 2003

[15] Gedik B. et al., Location privacy in mobile systems: a personalized anonymization model, IEEE ICDCS, 2005

[16] Mokbel M. F. et al., The new Casper: query processing for location services without compromising privacy, VLDB, 2006


[17] P. Kalnis et al., Preserving location-based identity inference in anonymous spatial queries, IEEE TKDE, 2007

[18] Gabriel Ghinita et al., A Hybrid Technique for Private Location-Based Queries with Database Protection, SSTD, 2009

[19] W. K. Wong et al., Secure kNN Computation on Encrypted Databases, ACM SIGMOD, 2009

[20] A. Boldyreva et al., Order Preserving Symmetric Encryption, EuroCrypt, 2009

[21] A. Boldyreva et al., Order Preserving Encryption Revisited: Improved Security Analysis and Alternative Solutions, Crypto, 2011

[22] Mark de Berg et al., Computational Geometry, Springer, 3rd Edition

[23] P. Paillier, Public key cryptosystems based on composite degree residuosity classes, EUROCRYPT, 1999

[24] Hsiao-Ying Lin et al., An efficient solution to the millionaires' problem based on homomorphic encryption, ACNS, 2005

[25] ElGamal, A Public-Key Cryptosystem and a Signature Scheme based on Discrete Logarithm, IEEE TOIT, 1985

[26] Gabriel Ghinita et al., Private Queries in Location Based Services: Anonymizers are not Necessary, ACM SIGMOD, 2008

[27] Gabriel Ghinita et al., Approximate and Exact Hybrid Algorithms for Private Nearest Neighbor Queries with Database Protection, GeoInformatica, 2011

[28] Bin Yao et al., Secure Nearest Neighbor Revisited, IEEE ICDE, 2013

[29] Arvind Narayanan et al., Location Privacy via Private Proximity Testing, NDSS, 2011

[30] Mikhail J. Atallah et al., Secure Multi-Party Computational Geometry, Workshop on Algorithms and Data Structures (WADS), 2001

[31] Xin Lin et al., Private Proximity Detection and Monitoring with Vicinity Regions, ACM MobiDE, 2013

[32] Bin Mu et al., Private Proximity Detection for Convex Polygons, ACM MobiDE, 2013

[33] Pei Cao et al., Efficient Top-K Query Calculation in Distributed Networks, ACM PODC, 2004

[34] Feifei Li et al., Dynamic Authenticated Index Structures for Outsourced Databases, ACM SIGMOD, 2006

[35] Einar Mykletun et al., Authentication and Integrity in Outsourced Databases, NDSS, 2004

[36] HweeHwa Pang et al., Authenticating the Query Results of Text Search Engines, VLDB, 2008


[37] Rui Zhang et al., Verifiable Fine-Grained Top-k Queries in Tiered Sensor Networks, IEEE INFOCOM, 2010

[38] Fei Chen et al., SafeQ: Secure and Efficient Query Processing in Sensor Networks, IEEE INFOCOM, 2010

[39] Ronald Fagin et al., Optimal Aggregation Algorithms for Middleware, ACM PODS, 2001

[40] Sebastian Michel et al., KLEE: A Framework for Distributed Top-k Query Algorithms, VLDB, 2005

[41] Reza Akbarinia et al., Best Position Algorithms for Top-k Queries, VLDB, 2007

[42] Adi Shamir, How to Share a Secret, Communications of the ACM, 1979

[43] D. Halperin et al., Security and Privacy for Implantable Medical Devices, IEEE Pervasive Computing Magazine, 2008

[44] K. Venkatasubramanian et al., Security and Interoperable Medical Device Systems, IEEE Security and Privacy Magazine, 2012

[45] Kasper B. Rasmussen et al., Proximity-based Access Control for Implantable Medical Devices, ACM CCS, 2009

[46] Matthew L. Lee et al., Lifelogging memory appliance for people with episodic memory impairment, ACM UbiComp, 2008

[47] Seonguk Heo et al., Lifelog Collection Using a Smartphone for Medical History, IT Convergence and Services, 2011

[48] http://www.healthcare.philips.com/

[49] Michael S. Kirkpatrick et al., Enforcing Spatial Constraints for Mobile RBAC Systems, ACM SACMAT, 2010

[50] Srdjan Capkun et al., Secure Positioning of Wireless Devices with Application to Sensor Networks, IEEE INFOCOM, 2005

[51] Stefan Brands et al., Distance-Bounding Protocols, Workshop on the Theory and Application of Cryptographic Techniques on Advances in Cryptology, 1994

[52] Andreas Savvides et al., Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors, ACM MOBICOM, 2000

[53] Robert J. Fontana et al., Ultra-Wideband Precision Asset Location System, IEEE UWST, 2002

[54] Kasper Bonne Rasmussen et al., Realization of RF Distance Bounding, USENIX Security, 2010

[55] Adnan Vora et al., Secure Location Verification Using Radio Broadcast, IEEE TDSC, 2006

[56] Srdjan Capkun et al., Secure Positioning in Wireless Networks, IEEE JSAC, 2006


[57] http://www.zebra.com

[58] http://standards.ieee.org/findstds/standard/802.15.1-2002.html

[59] Dan Boneh et al., Aggregate and Verifiably Encrypted Signatures from Bilinear Maps, Eurocrypt, 2003

[60] https://personal.cis.strath.ac.uk/changyu.dong/jpair

[61] http://www.android.com

[62] http://www.samsung.com/global/galaxys3/

[63] http://crypto.stanford.edu/pbc/

[64] http://en.wikipedia.org/wiki/Bluesniping

[65] Vladimir Brik et al., Wireless device identification with radiometric signatures, ACM MOBICOM, 2008

[66] http://en.wikipedia.org/wiki/Trusted_Platform_Module

[67] D. Shaw et al., Multifractal Modelling of Radio Transmitter Transients for Classification, IEEE CPC, 1997

[68] S. Papadopoulos et al., Secure and Efficient In-Network Processing of Exact SUM Queries, IEEE ICDE, 2011

[69] H. Chan et al., Secure Hierarchical In-Network Aggregation in Sensor Networks, ACM CCS, 2006

[70] S. Madden et al., TAG: a Tiny AGgregation Service for Ad-Hoc Sensor Networks, OSDI, 2002

[71] Suman Nath et al., Secure Outsourced Aggregation via One-way Chains, ACM SIGMOD, 2009

[72] K. Frikken et al., An Efficient Integrity-Preserving Scheme for Hierarchical Sensor Aggregation, ACM WISEC, 2008

[73] K. Minami et al., Secure Aggregation in a Publish-Subscribe System, ACM WPES, 2008

[74] C. Castelluccia et al., Efficient and Provably Secure Aggregation of Encrypted Data in WSN, ACM TOSN, 2009

[75] Parisa Haghani et al., Efficient and Robust Secure Aggregation for Sensor Networks, IEEE NPSec, 2007

[76] Gelareh Taban et al., Efficient Handling of Adversary Attacks in Aggregation Applications, ESORICS, 2008

[77] Binbin Chen et al., Secure Aggregation with Malicious Node Revocation in Sensor Networks, IEEE ICDCS, 2011

[78] S. Nath et al., Synopsis Diffusion for Robust Aggregation in Sensor Networks, ACM Sensys, 2004


[79] C. Castelluccia et al., Efficient Aggregation of Encrypted Data in Wireless Sensor Networks, IEEE MobiQuitous, 2005

[80] J. Considine et al., Approximate Aggregation Techniques for Sensor Databases, IEEE ICDE, 2004

[81] Y. Yang et al., SDAP: A Secure Hop-by-Hop Data Aggregation Protocol for Sensor Networks, ACM Mobihoc, 2006

[82] W. He et al., PDA: Privacy-preserving Data Aggregation in Wireless Sensor Networks, IEEE INFOCOM, 2007

[83] http://db.csail.mit.edu/labdata/labdata.html


VITA

Sunoh Choi was born and raised in Kangneung, Korea. He completed his PhD

from Purdue University in 2014, where his major professor was Prof. Elisa Bertino.

He received his BS and MS from Korea University, Seoul, in 2005 and 2008, respectively.

His primary research interests include Private Query Processing on Encrypted Data

and Authenticated Query Processing in Outsourced Cloud Environments. He did an

internship with Samsung Software Research Center in 2013.