D3.1 Enablers for IoT Security and Privacy Baseline
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 779852
Title: D3.1 Enablers for IoT Security and Privacy Baseline
Document Version: 0.8
Project Number: 779852
Project Acronym: IoTCrawler
Project Title: IoTCrawler
Contractual Delivery Date: 30/04/2019
Actual Delivery Date: 30/04/2019
Deliverable Type*: R
Security**: PU
* Type: P - Prototype, R - Report, D - Demonstrator, O - Other
** Security Class: PU - Public, PP - Restricted to other programme participants (including the
Commission), RE - Restricted to a group defined by the consortium (including the Commission),
CO - Confidential, only for members of the consortium (including the Commission)
Responsible and Editor/Author: Hien Truong
Organization: NEC Laboratories Europe GmbH
Contributing WP: WP3
Authors (organizations):
Hien Truong (NEC), Felix Klaedtke (NEC), Juan A. Martinez (OdinS), Pedro Gonzalez (UMU), Antonio
Skarmeta (UMU).
Abstract:
This deliverable summarises the work done in work package 3 (T3.1, T3.2 and T3.3). The
main focus is the design of a security architecture applied orthogonally across the multi-layered
architecture of the IoTCrawler framework. The outcome of this work is a set of security, privacy and
trust-aware enablers covering various security aspects. As parts of the IoTCrawler framework,
both management planes, the data plane and the control plane, take security as a critical part of
their design. At the control plane, legitimate registration of IoT devices initially joining the
network is authorised by the secure bootstrapping enabler. Occurrences of system events are
monitored for compliance with administration policies by the policy compliance check enabler. At
the data plane, the focal point of the security architecture, authorised access to domain
IoT data resources is controlled by the authorisation enabler. Multiple domains can agree on
access and data sharing by using the blockchain network and smart contracts that regulate the
logic of policy management in a distributed manner. The data privacy enabler ensures
that IoT data (or metadata) is stored and queried in a privacy-preserving manner.
Keywords:
IoTCrawler framework, security architecture, security enabler, secure bootstrapping, distributed
access control, blockchain-based policy management, private data sharing, data-centric privacy.
Disclaimer:
The present report reflects only the authors’ view. The European Commission is not responsible
for any use that may be made of the information it contains.
Revision History
The following table describes the main changes made to the document since its creation.
Revision Date Description Author (Organization)
v0.1 21/03/2019 Initial Proposal for TOC Hien Truong (NEC)
v0.2 04/04/2019 Contribution to Section 3 Juan A. Martinez (OdinS)
v0.3 05/04/2019 Contribution to Section 4.4 Pedro Gonzalez, Antonio Skarmeta (UMU), Juan A. Martinez (OdinS)
v0.4 08/04/2019 Contribution to Section 5 Pedro Gonzalez, Antonio Skarmeta (OdinS)
v0.5 09/04/2019 Contribution to Section 4.5 Felix Klaedtke (NEC)
v0.6 15/04/2019 Contribution to Section 4.1 Hien Truong (NEC)
v0.7 19/04/2019 Contribution to Section 4.3 Hien Truong (NEC)
v0.8 22/04/2019 Contribution to Section 2 Hien Truong (NEC)
v0.9 25/04/2019 Abstract, Executive Summary, Introduction Hien Truong (NEC)
v1.0 26/04/2019 Conclusion, overall edition Hien Truong (NEC)
v1.1 28/04/2019 Internal review of the document Pavel Smirnov (AGT)
v1.2 30/04/2019 Internal review of the document Narges Pourshahrokhi (UoS)
v1.3 30/04/2019 Modification to Section 3 Juan A. Martinez (OdinS)
v1.4 30/04/2019 Contribution to Section 4.4 and 5 Pedro Gonzalez, Antonio Skarmeta (OdinS)
v1.5 30/04/2019 Document Edition Hien Truong (NEC)
Abbreviations
AAA Authentication Authorization and Accounting
ABAC Attribute-Based Access Control
ACL Access Control List
API Application Programming Interface
CP-ABE Ciphertext-Policy Attribute-Based Encryption
CM Capability Manager
DCapBAC Distributed Capability-Based Access Control
DHT Distributed Hash Table
DID Decentralized Identifier
DLT Distributed Ledger Technology
EAP Extensible Authentication Protocol
ECC Elliptic-curve cryptography
IoT Internet of Things
JSON JavaScript Object Notation
LTL Linear-time Temporal Logic
MDR Metadata Repository
MSK Master Session Key
MSP Membership Service Provider
NGSI Next Generation Service Interface
NGSI-LD NGSI Linked Data
PAP Policy Administration Point
PDP Policy Decision Point
PKI Public Key Infrastructure
RBAC Role-Based Access Control
REST Representational State Transfer
SPKI Simple Public Key Infrastructure
XACML Extensible Access Control Markup Language
XML Extensible Markup Language
Executive Summary
This deliverable focuses on the work done in work package 3 (T3.1, T3.2 and T3.3) of the
H2020 IoTCrawler project. To achieve an IoT search engine that ensures security, privacy
and trust awareness from collecting to processing and sharing IoT data, these tasks
focus on designing and developing a set of security enablers that are integrated into every
layer of the IoTCrawler framework. These enablers allow secure bootstrapping of IoT devices
initially joining the network, secure access control to data resources taking into account the
distributed nature of the IoT multi-domain system that lacks trust among domains, and
secure data sharing from data owners (domains) to data consumers (end-users, system
software/services). While access to intra-domain data at the domain layer is based on
distributed capability-based access control, access across domains at the inter-domain layer is
handled by the blockchain handler, the blockchain network and smart contracts. Security
and trust are not the only focus: data-centric privacy awareness ensures that data is visible
only to valid and relevant entities.
Disclaimer
This project has received funding from the European Union’s Horizon 2020 research and
innovation programme under grant agreement No 779852, but this document only reflects
the consortium’s view. The European Commission is not responsible for any use that may
be made of the information it contains.
Table of Contents
1 Introduction .............................................................................................................................................................. 10
2 Security Architecture for IoTCrawler Platform ...............................................................................12
2.1 IoTCrawler General Architecture......................................................................................................12
2.2 Security Architecture ................................................................................................................................ 13
3 Trust and Security Enablers for IoT Crowd-Sourced Sensing ........................................... 17
3.1 Enabler for Secure Bootstrapping .................................................................................................. 17
3.1.1 Overview .................................................................................................................................................. 17
3.1.2 Basic Concepts .................................................................................................................................... 19
3.1.3 Main Interactions ................................................................................................................................ 19
4 Enablers for a Decentralised Platform ..................................................................................................21
4.1 Enabler for Inter-Domain Policy Management ......................................................................21
4.1.1 Overview ...................................................................................................................................................21
4.1.2 Basic Concepts .................................................................................................................................... 22
4.1.3 Design ......................................................................................................................................................... 25
4.2 Decentralized Identifiers Technology ......................................................................................... 26
4.2.1 Overview .................................................................................................................................................. 26
4.2.2 Basic Concepts .................................................................................................................................... 26
4.2.3 Detailed Specifications .................................................................................................................. 27
4.3 Enabler for Private Data Sharing ..................................................................................................... 30
4.3.1 Overview ................................................................................................................................................. 30
4.3.2 Basic Concepts .................................................................................................................................... 32
4.3.3 Main Interactions ................................................................................................................................ 32
4.4 Enabler for Intra-Domain Capability-Based Access Control ......................................36
4.4.1 Overview ..................................................................................................................................................36
4.4.2 Basic Concepts .................................................................................................................................... 37
4.4.3 Main Interactions ............................................................................................................................... 40
4.5 Enabler for Policy Compliance Check ......................................................................................... 42
4.5.1 Overview .................................................................................................................................................. 42
4.5.2 Basic Concepts .................................................................................................................................... 43
4.5.3 Policy Specifications....................................................................................................................... 46
5 Data-centric Privacy Enablers for Differential Accessibility ............................................... 50
5.1 Context-Aware Privacy Policies ...................................................................................................... 50
5.1.1 Overview ................................................................................................................................................. 50
5.1.2 Sharing Policies ................................................................................................................................... 51
5.2 Privacy Enabler ............................................................................................................................................. 51
5.2.1 Overview .................................................................................................................................................. 51
5.2.2 Basic Concepts .................................................................................................................................... 52
5.2.3 Main Interactions ................................................................................................................................ 52
6 Conclusions .............................................................................................................................................................. 54
7 References ................................................................................................................................................................ 55
Table of Figures and Tables
Figure 1: IoTCrawler architecture 12
Figure 2: Overview of security enablers 14
Figure 3: Integration of security enablers 15
Figure 4: LO-CoAP-EAP exchange 19
Figure 5: Inter-domain policy management overview 22
Figure 6: Distributed smart contract 23
Figure 7: An example DID Document 27
Figure 8: Integration of IoT and blockchain 31
Figure 9: Data sharing scheme based on ACL 34
Figure 10: Data sharing scheme based on prefix encryption 35
Figure 11: Authorisation enabler placed in the architecture 37
Figure 12: Capability Token Example 39
Figure 13: PAP adds XACML policy 41
Figure 14: Authorisation granting interactions 41
Figure 15: Policy compliance check enabler overview 42
Table 1. CoAP-EAP methods 20
Table 2. DID format 28
1 Introduction
The Internet of Things (IoT) has been envisioned as a large distributed system of
devices equipped with sensors and actuators. IoT devices create and exchange vast
amounts of data, thereby bringing forth unprecedented challenges in terms of
security and scalability. An IoT search engine such as IoTCrawler that provides
distributed crawling and indexing mechanisms is expected to enable data collection
and retrieval in a secure and privacy- and trust-aware manner.
The IoTCrawler architecture consists of multiple layers, ranging from the physical layer
(micro layer), where actual sensors and actuators connect to the system, through higher
layers (domain and inter-domain layers), to the highest layer (application layer),
where applications are deployed (see deliverable D2.2 for details). Securing the
system as a whole requires the development of a number of security
enablers at each layer, as well as interactions between these enablers across layers,
so that the entire system is protected against attacks. Furthermore, from a vertical view,
the IoTCrawler architecture has two management planes, the so-called "control plane"
and "data plane". Our security enablers are designed to provide protection over
these planes as well.
The security architecture presented in this deliverable, a part of the general IoTCrawler
platform, includes the following sets of enablers: (1) trust and security enablers for
IoT and crowd-sourced sensing; (2) enablers for the decentralised IoTCrawler platform;
and (3) data-centric privacy enablers. Each enabler is designed and prototyped
using either existing technologies or our own novel solutions.
The first enabler, working at the lowest layer of the IoTCrawler architecture, is the
secure bootstrapping enabler, which ensures authentication and authorisation of IoT
devices during the initial phase of joining the network. This enabler uses the Low Overhead
CoAP-EAP (LO-CoAP-EAP) protocol, which is designed specifically for IoT, the
Extensible Authentication Protocol (EAP) and the Authentication Authorization and
Accounting (AAA) infrastructure. We design the enabler using these existing
technologies, adapted for the constrained settings of IoT systems.
At the data plane, our authorisation enablers allow secure registration of IoT
resources, as well as secure dissemination of that information to legitimate
users. These enablers are deployed at the domain layer. The core enabler is
Distributed Capability-Based Access Control (DCapBAC). DCapBAC introduces a
new component called the Capability Manager (CM), which issues an authorisation token
containing information about the granted authorisation, such as the subject, the
issuer and the token's lifetime, defined by two timestamps (beginning and end of the
validity period), among others. Access control mechanisms rely on policies which are decided
by the PDP (Policy Decision Point). We adopt distributed ledger technology (DLT) to
distribute trust among untrusted domains, so that access policies are defined,
approved, distributed and applied by all domains. Also in the data plane, we
introduce the private data sharing enabler, which allows IoT data to be shared securely
among various entities in compliance with access control policies.
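As an illustration of such a token, the sketch below models a DCapBAC-style capability token whose validity is bounded by two timestamps, together with a matching check. The field names and layout are assumptions made for this sketch, not the actual IoTCrawler token format.

```python
import time

# Illustrative DCapBAC-style capability token; the field names below are
# placeholders for this sketch, not the exact IoTCrawler token format.
def make_token(subject, issuer, resource, action, lifetime_s, now=None):
    """Issue a token whose validity is bounded by two timestamps."""
    nb = int(now if now is not None else time.time())
    return {
        "sub": subject,         # entity granted the authorisation
        "iss": issuer,          # Capability Manager that issued the token
        "res": resource,        # resource the token applies to
        "ac": action,           # permitted action, e.g. "GET"
        "nb": nb,               # not-before timestamp (beginning of period)
        "na": nb + lifetime_s,  # not-after timestamp (end of period)
    }

def token_valid(token, resource, action, now=None):
    """Check a requested access against the token's scope and lifetime."""
    t = int(now if now is not None else time.time())
    return (token["res"] == resource
            and token["ac"] == action
            and token["nb"] <= t <= token["na"])

tok = make_token("consumer-42", "cm.domain-a", "/sensors/temp1", "GET", 600, now=1000)
print(token_valid(tok, "/sensors/temp1", "GET", now=1200))   # True
print(token_valid(tok, "/sensors/temp1", "GET", now=2000))   # False: expired
```

In a real deployment the token would also be signed by the CM so that the enforcement point can verify its origin; the sketch omits this step.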
At the control plane, our policy compliance check enabler monitors system
behaviour and checks it at runtime against system policies; for
example, many system and security policies are requirements on the temporal
occurrence and non-occurrence of system events. This enabler, called POLÌMON,
supports a rich, formally defined specification language based on a real-time
extension of LTL for expressing a large variety of policies. That is, the policies handled
by this enabler are temporal requirements on system events. Furthermore,
POLÌMON monitors system components and checks their behaviour and
interactions against the given policy specifications at runtime. Noncompliant
behaviour is reported promptly, even in the presence of knowledge gaps.
The document is structured as follows. The general security architecture is
described in Section 2. Sections 3 to 5 describe each security, privacy
and trust enabler, with an overview, basic concepts and specifications.
Section 3 focuses on trust and security enablers for IoT and crowd-sourced
sensing. Section 4 describes the enablers for securing the decentralised platform,
focusing on security at the intra-domain and inter-domain levels.
Section 5 covers the data-centric privacy enabler.
2 Security Architecture for IoTCrawler Platform
2.1 IoTCrawler General Architecture
We focus on the layered-interconnected architecture of the IoTCrawler framework
(Figure 1). Details of the architecture are given in the deliverable D2.2.
The IoTCrawler architecture comprises the following layers (following a bottom-up
approach): Micro layer, Domain layer, Inter-domain layer, Internal processing layer
and Application layer. The different IoT domains are interconnected with the base
IoTCrawler platform through a federation approach using Metadata Repositories
(MDRs) at different levels, over which semantics empower user-level searches.
The separation of the platform into control plane, represented by dotted arrows,
and data plane, continuous lines, is one of the key aspects of the architecture.
Figure 1. IoTCrawler architecture
In this security-focused architecture, security components play critical roles in
ensuring secure data flows from the lowest layer (Micro layer) to the highest
layer (Application layer). At the core of the security part are various enablers for
managing secure access to IoT data, making access policies and securely
sharing data. The IoTCrawler framework is built in a distributed manner; therefore,
challenges arise when different IoT domains interact with each other while lacking
mutual trust. To overcome such challenges, we adopt distributed ledger technology
(DLT) and blockchain, together with smart contracts, to handle inter-domain policy
management at the Inter-domain layer.
At the control plane of the IoTCrawler architecture, security enablers are designed
to ensure authorised registration of new IoT devices at the Micro layer and valid
system events in compliance with administrator policies at the Internal processing
layer.
2.2 Security Architecture
To align with the separation of control plane and data plane in the IoTCrawler
architecture, we design the security architecture as a set of enablers (Figure 2).
From a horizontal view, the secure bootstrapping enabler operates at the lowest level
(the Micro layer). At this physical level, IoT devices and IoT gateways have to handle
registration in a secure way that minimises the chance of malicious
devices joining the network (see Section 3.1 for details).
The authorisation enabler is one of our core enablers, operating at the Intra-domain
layer. The enabler leverages Distributed Capability-Based Access Control
(DCapBAC) technology. This authorisation method integrates the XACML framework,
which generates an authorisation token. The XACML framework is also used for
generating the authorisation policies, as well as for issuing an authorisation verdict
through the PDP (Policy Decision Point) and the XACML policies generated by the
PAP (Policy Administration Point) (see Section 4.4 for details).
When it comes to operations between different IoT domains, the policy enabler
and the data sharing enabler, operating at the Inter-domain layer, allow different
domains to agree on global as well as local data access policies. The policy decision-making
process can be carried out autonomously via the deployment and execution of
smart contracts, and therefore does not require human intervention. This design
makes it a suitable solution for IoT environments where devices and machines
directly communicate and exchange data with each other. Policies, once made over
the blockchain network, are broadcast to and updated by all peers, i.e. domains.
These domains take those policies as the reference for the authorisation enabler to
grant access tokens to requesting entities. See Section 4.1 for details of the policy
enabler.
Data sharing is an important part of the IoTCrawler framework, especially as the
system grows large and IoT devices produce vast amounts of data, making it more
complex for IoT domains to share such data securely. Our goal is to provide a
security architecture that allows sharing data not only in a secure but also in a
transparent and auditable fashion. To achieve this goal, the data sharing enabler is
designed around the blockchain network, focusing on processes that update policies based
on exchange conditions among entities. This enabler also adopts prefix encryption to
mitigate data leakage to unauthorised parties, for example, a malicious cloud storage
provider hosting the IoT data. See Section 4.3 for more details of this enabler.
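The access pattern behind prefix encryption can be illustrated with a simple key hierarchy: a party holding the key for a path prefix can derive keys for everything beneath that prefix, but not for sibling paths. The HMAC chain below is only a stand-in for this pattern, not the actual prefix-encryption scheme used by the enabler.

```python
import hashlib
import hmac

# Stand-in for the prefix-based access pattern: a per-path key is derived by
# chaining HMACs over the path components, so holding the key for a prefix
# allows deriving keys for everything beneath it, but not for siblings.
# This only illustrates the access pattern; the enabler's actual scheme is
# prefix encryption, described in Section 4.3.
def prefix_key(parent_key, subpath):
    k = parent_key
    for part in subpath.strip("/").split("/"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    return k

root = b"domain-root-secret"   # hypothetical domain master key
# The key for "a/b" is derivable from the key for prefix "a" ...
assert prefix_key(root, "a/b") == prefix_key(prefix_key(root, "a"), "b")
# ... but not from the key of a sibling prefix.
assert prefix_key(root, "a/b") != prefix_key(prefix_key(root, "c"), "b")
```

Under this pattern, handing a consumer the key for `/domainA/sensors` would grant access to all data items stored under that prefix, while data under other prefixes stays opaque, which is the leakage-mitigation property described above.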
Figure 2. Overview of security enablers.
Inter-connections among enablers provide security guarantees against cyber-attacks
on communication channels crossing layers. In our security architecture,
the policy enabler, the data sharing enabler and the authorisation enabler depend
on one another. For example, an access decision granted by the
authorisation enabler depends on the policies made and handled by the policy
enabler. Similarly, the policy enabler depends on the data sharing enabler, where
policies are created or updated according to exchange transaction outputs.
Figure 3. Integration of security enablers.
Figure 3 presents the integration of the authorisation enabler with the policy enabler
via the Blockchain Handler component. The Blockchain Handler is introduced
to manage the message flow between the two enablers. As each component is
developed with completely different technologies and standards, to ensure
compatibility the PDP translates its queries from its own format (e.g. NGSI-10) to
a format that the blockchain network can process (e.g. a RESTful web API). The Blockchain
Handler takes input from the PDP and feeds it to the blockchain network for execution of
the respective operations, such as approving a policy. The Blockchain Handler also
returns the results from the network to the PDP in response to its queries. With this
design, the policy decision-making process is ported to the blockchain network, thus
providing a number of benefits, such as relaxing prior trust requirements and
achieving auditability and integrity of policies.
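The translation role of the Blockchain Handler might be sketched as follows. The NGSI-10-style query fields and the chaincode REST endpoint below are hypothetical placeholders for illustration, not the actual IoTCrawler interfaces.

```python
# Minimal sketch of the Blockchain Handler's translation role: mapping a
# PDP query onto a REST call descriptor that a blockchain client could
# execute. All field names and the endpoint path are assumptions made for
# this sketch.
def translate_pdp_query(ngsi_query):
    """Map a PDP query (NGSI-10-style dict) to a REST call descriptor
    for the policy chaincode."""
    op = ngsi_query["operation"]  # e.g. "queryPolicy", "approvePolicy"
    return {
        "method": "POST",
        "path": "/chaincode/policies/" + op,   # hypothetical endpoint
        "body": {
            "domain": ngsi_query["domain"],
            "args": ngsi_query.get("attributes", []),
        },
    }

req = translate_pdp_query({"operation": "approvePolicy",
                           "domain": "domain-a",
                           "attributes": ["policy-id=42"]})
print(req["path"])   # /chaincode/policies/approvePolicy
```

A real handler would then submit this descriptor over HTTP to the blockchain network's client API and relay the transaction result back to the PDP.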
Another dependency among enablers is between the policy compliance check
enabler and the secure bootstrapping enabler. System monitoring relies on the
correct input of events provided by the different devices across the entire network. This is
only ensured when devices are authorised and honest; malicious entities might
inject corrupted or synthesised events to fool the checker's outputs. One method
for the policy compliance check enabler to ensure correct event inputs from
devices is to connect to the bootstrapping enabler and query the authorisation status of
a particular device before accepting its input data.
In the next sections, we will present details of these security enablers.
3 Trust and Security Enablers for IoT Crowd-Sourced
Sensing
3.1 Enabler for Secure Bootstrapping
3.1.1 Overview
The term bootstrap refers to "pulling oneself up by one's bootstraps". In computer
science, it is associated with a self-starting process that requires no external input.
The term has evolved over time, adding more functionality or details specific
to the area where the concept is applied.
Concretely, in the context of the Internet of Things (IoT), bootstrapping is the
initial process a smart object performs to be able to join a network and
operate securely as a trustworthy entity within the IoT domain where it is deployed.
Following García-Morchon et al. [Garcia-Morchon et al. 2016], bootstrapping
is defined as the process of authenticating and authorising a device to enter a
security domain, obtaining the credentials needed to operate in that domain. In a
nutshell, bootstrapping a device provides the basis for securing the communications of
devices in IoT.
To provide bootstrapping, we propose the use of Low Overhead CoAP-EAP
(LO-CoAP-EAP) [Garcia-Carrillo and Marin-Lopez 2016, Garcia-Carrillo 2017], a protocol
designed specifically for IoT. It brings features that are expected to handle
large IoT deployments with multi-domain support, whilst keeping a reduced
footprint, not only in the program itself but also in the number of bytes
needed to perform the bootstrapping exchange, compared to current standards.
LO-CoAP-EAP arises as a redesign of CoAP-EAP [Garcia-Carrillo et al. 2016], which
provides an alternative to the Protocol for Carrying Authentication for Network
Access (PANA), a current standard commonly proposed for use in IoT. Next,
we elaborate on the basis of LO-CoAP-EAP, its differences from PANA, and why we
see it as an alternative for IoT.
When dealing with many devices, as in the case of IoT, we need to consider that the
heterogeneity of hardware, software, manufacturers, owners and managing
organisations demands flexibility in the authentication methods to be used. There
will be devices with higher computational capabilities than others, as well
as IoT networks with bandwidth constraints, which may preclude the use of
computationally expensive authentication protocols with large message exchanges.
Furthermore, the organisation managing these devices might have a
specific policy in place that requires a minimum level of security for
authentication (i.e., it may require a specific set of protocols to be used to authenticate
its devices). Therein lies the need to support different authentication
protocols that best suit the specific deployment. To supply bootstrapping with this
flexibility, the Extensible Authentication Protocol (EAP) [Aboba et al. 2008] is used.
EAP supports many authentication methods (and growing), which gives us the
possibility of choosing the authentication method depending on the needs of the
IoT deployment.
Another real possibility is that an IoT deployment is managed by one
organisation while some of the smart objects deployed in that domain are
managed or owned by a different organisation, or simply that an organisation has
several deployments (e.g., a university with several campuses). This brings the need
to provide authentication in such a way that, regardless of where the device is
deployed, we can reach the organisation managing the smart object so it can be
authenticated. To achieve this, we leverage Authentication Authorization and
Accounting (AAA) [de Laat et al. 2000] infrastructures. AAA gives the flexibility of
a multi-domain deployment where, when a foreign device is deployed, a federation
to manage the authentication can be created through pre-established bilateral
agreements among the organisations. We also point out that EAP and AAA are
compatible, enabling the functionality described above.
Lastly, the use of EAP and AAA in this context is nothing new; they have been used
for some time. However, the protocols used in the network segment between the
smart object and the Controller of the domain where the smart object is deployed
(e.g., PANA) were not designed with the constraints of IoT in mind. They were simply
reused as existing standards in IoT stacks such as ZigBee IP [ZigBee Alliance 2014].
In this case, given the heterogeneity of devices and communication technologies, we
see the need for a protocol to carry EAP (what is known as an EAP lower layer in EAP
terminology), and we therefore propose to use LO-CoAP-EAP.
3.1.2 Basic Concepts
The LO-CoAP-EAP architecture defines three entities (see Figure 4).
The Smart Object is the device that intends to join the security domain. The
Controller is the entity in charge of managing the security domain and steering
the authentication process. The AAA server performs the authentication
of the Smart Object and sends the Controller the needed information once the
authentication has completed successfully.
3.1.3 Main Interactions
The protocol flow for LO-CoAP-EAP is as follows:
Figure 4: LO-CoAP-EAP exchange (without handshake) using a generic EAP method
The first message (message 1) triggers the LO-CoAP-EAP authentication, signalling
to the Controller that the Smart Object is ready to perform the bootstrapping. In this
message the identity of the Smart Object is sent, so the Controller can forward this
information to the AAA server (message 2), starting the EAP authentication with a
method chosen by the AAA server. In the EAP method exchange
(messages 3-10), the AAA server and the Smart Object perform the authentication
according to the chosen EAP method. If the authentication is successful, the AAA
server sends the EAP success message along with authorisation information and the
cryptographic material needed by the Controller, known as the Master Session Key
(MSK) (message 11). With the MSK, the Controller starts the last exchange, at this
point only between itself and the Smart Object. In this exchange a new key
(CoAP_PSK) is derived from the MSK and used to mutually authenticate and protect
the two last messages (messages 12 and 13), using the CoAP AUTH Option.
At this point the Smart Object is recognized as a trustworthy entity by the Controller
and can start accessing the services offered by the controller or other entities of the
domain.
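For illustration, a derivation of CoAP_PSK from the MSK could look like the HKDF-based sketch below. The actual key-derivation function, salt and label are fixed by the LO-CoAP-EAP specification; the values used here are placeholders for this sketch.

```python
import hashlib
import hmac

# Illustrative derivation of a CoAP_PSK from the MSK using HKDF (RFC 5869).
# The real KDF, salt and label are defined by the LO-CoAP-EAP specification;
# the salt and label below are placeholders.
def hkdf_extract(salt, ikm):
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def derive_coap_psk(msk, length=16):
    prk = hkdf_extract(b"\x00" * 32, msk)          # placeholder salt
    return hkdf_expand(prk, b"CoAP_PSK label", length)  # placeholder label

psk = derive_coap_psk(b"\x01" * 64)   # an EAP MSK is typically 64 bytes
```

Because the derivation is deterministic, the Controller and the Smart Object, both holding the MSK, arrive at the same CoAP_PSK and can use it to protect the final two messages.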
The CoAP methods exported by LO-CoAP-EAP are described in Table 1.
Table 1. CoAP-EAP methods
Entity        Method  URI   Description
Controller    POST    /b    Service available without security, to let Smart Objects know that it supports LO-CoAP-EAP.
Smart Object  POST    /b/X  When the bootstrapping is triggered, the Smart Object acts as a CoAP server, exposing the LO-CoAP-EAP service so the Controller can interact with it to carry out the bootstrapping. The X refers to the reserved resource associated with an ongoing bootstrapping procedure; the first exchange does not have this identifier.
4 Enablers for a Decentralised Platform
4.1 Enabler for Inter-Domain Policy Management
4.1.1 Overview
In the inter-domain layer, we have presented a federation of MDRs with a
distributed approach. The relationships between MDRs must be controlled to allow
only legitimate MDRs to participate in the federation. We also assume that these MDRs,
linked to multiple domains, do not maintain established trust relationships, yet
they have to agree on common policies for accessing and sharing data resources.
We need a security mechanism that allows us to define global data sharing policies
between the different MDRs of our federation, which must be contractually agreed
by all participating domains of the federation. For example: "No data owner (domain)
may give consumers in untrusted parties, such as embargoed countries, access to its
data." At this layer, we introduce the blockchain handler to address
inter-domain policy management. Our policy model includes global policies and
domain-specific policies. Note that "domain" here refers to a virtual concept
of a data source provider; each domain is responsible for a set of sensor data it
provides to the system. In the following, we present an overview of our inter-domain
policy management enabler (see Figure 5).
Global data sharing policies: In a federation model that includes many IoT
domains, it is necessary to maintain a common set of rules that the participating
domains must comply with. We define global policies as policies that are mutually
agreed by all (or a majority of) domains and with which all domains must comply.
Data sharing with external entities must always follow these policies. Once
created, global policies are updated or removed only with the agreement of a
majority of, or all, domains in the federation.
Domain-specific data access policies: Each domain is a data source owner and has
full rights to set its own policies. These policies typically specify who can
access which data under which circumstances. Importantly, domain-specific policies
must not conflict with global policies, to ensure that the federation manages and
shares data consistently and securely.
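A compatibility check of this kind might look as follows. The policy representation (dicts with consumer, resource and effect fields) is a hypothetical simplification for illustration, not the deliverable's concrete policy format.

```python
# Illustrative policy model (not the deliverable's concrete format):
# each rule names a consumer group, a resource (or "*") and an effect.
GLOBAL_POLICIES = [
    {"consumer": "embargoed", "resource": "*", "effect": "deny"},
]

def conflicts_with_global(domain_rule: dict) -> bool:
    """A domain rule conflicts if it allows what a global rule denies."""
    for g in GLOBAL_POLICIES:
        same_consumer = g["consumer"] == domain_rule["consumer"]
        covers_resource = g["resource"] in ("*", domain_rule["resource"])
        if (same_consumer and covers_resource
                and g["effect"] == "deny" and domain_rule["effect"] == "allow"):
            return True
    return False
```

Such a check would run inside a smart contract before a domain-specific policy is accepted into the ledger.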
Thanks to this security mechanism, the system only executes agreed and validated
policies. Once a domain policy is validated, further modifications require the
approval of the other domains of the network before they take effect. Another
important point is that a domain cannot unilaterally revoke a policy, as doing so
might break services relying on it.
Figure 5. Inter-domain policy management overview
4.1.2 Basic Concepts
Distributed Ledger Technology (DLT)
DLT aims to solve the problem where we have a P2P network of untrusted entities
exchanging transactions. Additionally, this P2P network does not have a central
administration role and does not rely on a particular hierarchy because all
participants have the same capabilities.
In these networks, each new member must be agreed upon by all peers, and a single
ledger of ordered transactions is shared by all peers in real time; the ledger is
thus consistently replicated at each peer. This is why tampering with the data is
practically impossible without simultaneously compromising every peer in the
network.
Furthermore, additional restrictions can be applied so that access is granted only to
a limited set of entities.
A blockchain is one instance of DLT: an immutable distributed ledger that records
the transactions that happened in a network of mutually untrusting peers. The
ledger is replicated by the peers, which execute a consensus protocol to validate
transactions, group them into blocks and append new blocks to the ledger. Every
transaction is recorded in the ledger in its order of occurrence, and a group of
transactions is recorded in a block. Blocks are chained by including the
cryptographic hash of the previous block in the newly created block. This chaining
of linked hash values prevents transactions from being tampered with without
detection.
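The chaining technique can be sketched in a few lines. The block layout below is illustrative only, not a real blockchain implementation.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical serialisation of the block contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block embeds the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    # Tampering with any block breaks every later prev_hash link.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True
```

Modifying a transaction in an earlier block changes that block's hash, so the `prev_hash` stored in the following block no longer matches and the tampering is detected.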
Smart Contract
Smart contracts are computer programs (code) that implement the business logic
pre-agreed by the network members (Figure 6). They run on the blockchain and
provide an interface to interact with the data. This code is available for
inspection by all members of the network (i.e. orderers, peers). Blockchain
members add smart contracts to the blockchain in a similar way to adding
transactions, so smart contracts are also included in blocks. Transactions that
update a smart contract's state are recorded in the next block created after the
change has been made. This mechanism makes smart contracts immutable in the same
way as transactions.
Figure 6. Distributed smart contract
Smart contracts are enforced by the nodes of the system, so it is not possible for
a single entity to bypass the rules defined in the code, since doing so would
require the agreement of the majority of the participants. The main advantage of
smart contracts is that they can automate an organisation's business logic. In
turn, this automation eliminates the human errors and misunderstandings that may
lead to legal disputes. A legal contract or a law might be subject to personal
interpretation, but software is deterministic; there is no room for subjective
interpretation.
HyperLedger Fabric
Hyperledger Fabric (simply Fabric) is an open-source blockchain platform
[Androulaki 2018] managed by the Linux Foundation (https://www.hyperledger.org).
Fabric is widely used in prototypes, proofs-of-concept and industrial production;
its use cases span areas such as supply chain management, contract management,
data provenance and identity management.
Fabric uses a hybrid replication design which incorporates primary-backup
(passive) replication and active replication. Primary-backup replication in Fabric
means that every transaction is executed only by a subset of peers, determined by
endorsement policies. Fabric adopts active replication in that transactions are
written to the ledger once consensus on their total order is reached. This hybrid
design makes Fabric a scalable permissioned blockchain.
Fabric is a blockchain architecture that overcomes the limitations of previous
permissioned blockchain platforms with respect to flexibility, scalability and
confidentiality. To this end, Fabric is designed as a modular and extensible
permissioned blockchain. It supports the execution of distributed applications
written in general-purpose programming languages such as Go and Java, following an
execute-order-validate paradigm for untrusted code in an untrusted environment. A
distributed application consists of a
chaincode (smart contract) and endorsement policy. The chaincode implements the
application logic and runs in the execution phase. The endorsement policy is
evaluated in the validation phase and it is only modified by trusted entities e.g.
administrators. Fabric comprises the following modular building blocks:
• Ordering service that broadcasts updates to peers and establishes
consensus on transaction orders.
• Membership Service Provider (MSP) that associates peers with
cryptographic identities.
• Peer-to-peer gossip service to disseminate blocks to all peers in the
blockchain network.
• Smart Contracts that run application logic in container environments.
• Ledger that is maintained by peers in append-only form.
A Fabric permissioned blockchain consists of a set of nodes enrolled by the
modular MSP. Each node can act as a client, a peer and/or an orderer. As a client,
a node submits transaction proposals for execution, collects the results and
broadcasts them for ordering. As a peer, a node executes transaction proposals and
validates transactions; it only accepts transactions that are endorsed as
specified by the endorsement policy, and it stores transactions in an append-only
ledger. In the orderer role, a node runs the ordering service to establish a total
order over all transactions in Fabric.
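A simplified endorsement check along these lines could be sketched as follows. The threshold policy shape is an illustrative assumption, far simpler than Fabric's actual endorsement policy language.

```python
def endorsed(endorsements: dict, policy: dict) -> bool:
    """Check a transaction's endorsements against a simple threshold policy.

    `endorsements` maps organisation name -> the execution result (e.g. a
    read/write-set hash) signed by that org's peer; `policy` names the
    eligible orgs and how many matching endorsements are required.
    """
    results = [endorsements[org] for org in policy["orgs"] if org in endorsements]
    if len(results) < policy["threshold"]:
        return False
    # All endorsing peers must have produced identical execution results.
    return len(set(results)) == 1

# Hypothetical policy: any 2 of the 3 named organisations must endorse.
policy = {"orgs": ["OrgA", "OrgB", "OrgC"], "threshold": 2}
```

During validation, a peer would reject any transaction whose endorsements fail this check before it is committed to the ledger.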
4.1.3 Design
At the core of our policy management enabler is the creation and execution of
smart contracts for access control policies. We treat policies as special data
that is advertised over, and approved by, the blockchain network. Once a domain
has created a policy for a specific data item, it can advertise the policy over
the blockchain network and wait for approval. In our design, global policies can
be initiated by any node in the network and must be approved by a majority of
nodes before becoming valid. Validated policies are visible to every node, so that
nodes can later verify whether data accesses and data sharing comply with those
policies.
The smart contracts for adding, verifying and approving policies are implemented
and deployed in the initial phase of the blockchain network. These smart contracts
encode the business logic of a specific system for how policies are described,
added and validated. They can also verify the integrity and compatibility of
global and local policies; for example, that a local policy is compatible with
(does not conflict with) existing global policies. Such checks can be implemented
as smart contract functions.
Our implementation of the inter-domain policy management enabler, to be carried
out in the next phase, will use Hyperledger Fabric as the blockchain platform.
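The propose/approve logic above can be sketched as a toy Python model of the intended chaincode behaviour. This is not actual Fabric chaincode; the class and method names are assumptions, and real state would live on the ledger.

```python
class PolicyContract:
    """Toy sketch: a policy proposal becomes valid once a majority of the
    federation's domains has approved it."""

    def __init__(self, domains):
        self.domains = set(domains)
        self.proposals = {}  # policy_id -> {"text": ..., "approvals": set-of-domains}
        self.valid = {}      # policy_id -> policy text

    def propose(self, policy_id, text, proposer):
        # Proposing counts as the proposer's own approval.
        self.proposals[policy_id] = {"text": text, "approvals": {proposer}}

    def approve(self, policy_id, domain):
        p = self.proposals[policy_id]
        if domain in self.domains:
            p["approvals"].add(domain)
        # Strict majority of all domains makes the policy valid.
        if len(p["approvals"]) > len(self.domains) // 2:
            self.valid[policy_id] = p["text"]
```

Because every node executes the same contract, a policy cannot become valid, or be changed, without the required majority of approvals being recorded on the ledger.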
4.2 Decentralized Identifiers Technology
4.2.1 Overview
In deliverable D2.2 [Skarmeta 2019], we also proposed the use of Decentralized
Identifiers (DIDs) [Kortesniemi et al. 2019] as a technology to represent all the
subjects considered in the data models. This technology can be used together with
a Distributed Ledger Technology, such as a blockchain, to uniquely identify the
digital entities defined in this environment.
4.2.2 Basic Concepts
Decentralized Identifiers (DIDs) are self-sovereign identifiers for individuals,
organizations or things. They comprise two different things: a unique identifier
and an associated DID Document. The unique identifier looks as follows:
did:sov:3k9dg356wdcj5gf2k9bw8kfg7a
The DID is divided into three parts: the scheme, which is did; the method applied
to this DID, which is sov (Sovrin); and the method-specific identifier, which
corresponds to the right-most part of the DID. It can safely be thought of as the
unique ID for looking up a DID Document.
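Splitting a DID into its three parts is straightforward; a minimal parser (illustrative only) could be:

```python
def parse_did(did: str):
    """Split a DID into scheme, method and method-specific identifier."""
    scheme, method, msid = did.split(":", 2)
    if scheme != "did" or not method or not msid:
        raise ValueError("not a valid DID: " + did)
    return scheme, method, msid
```

A resolver would then use the method part to decide which registry (e.g. the Sovrin ledger for `sov`) to query for the associated DID Document.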
The DID Document is a JSON-LD object stored where it can easily be looked up,
typically in a distributed ledger. This document, represented in Figure 7, is
expected to be "persistent and immutable" and it might include the following
items:
• a timestamp of when it was created
• a cryptographic proof that the DID Document is valid
• a list of cryptographic public keys
• a list of ways that the DID can be used to authenticate
• a list of services where the DID can be used
• any number of externally defined extensions
Figure 7. An example DID Document (from the W3C DID Specification)
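Alongside the figure, a minimal DID Document carrying those items might look as follows. The field names follow the 2019 W3C draft vocabulary and every value is a placeholder, not a real key or endpoint.

```python
import json

# A minimal DID Document (illustrative values only).
did_document = {
    "@context": "https://w3id.org/did/v1",
    "id": "did:sov:3k9dg356wdcj5gf2k9bw8kfg7a",
    "created": "2019-04-30T12:00:00Z",
    "publicKey": [{
        "id": "did:sov:3k9dg356wdcj5gf2k9bw8kfg7a#keys-1",
        "type": "Ed25519VerificationKey2018",
        "publicKeyBase58": "<base58-encoded-public-key>",  # placeholder
    }],
    # Authentication lists which keys may prove control of the DID.
    "authentication": ["did:sov:3k9dg356wdcj5gf2k9bw8kfg7a#keys-1"],
    "service": [{
        "id": "did:sov:3k9dg356wdcj5gf2k9bw8kfg7a#agent",
        "type": "AgentService",
        "serviceEndpoint": "https://agent.example.com/",  # placeholder
    }],
}

print(json.dumps(did_document, indent=2))
```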
A key aspect of DIDs is that they are designed not to be dependent on a central
issuing party (identity provider or IdP) that creates and controls the identifier.
Instead, DIDs are created and managed by the identity owner. Among its features
we can highlight: decentralized, persistent and cryptographically verifiable. It can be
registered in a DLT and is created and managed by an identity controller.
4.2.3 Detailed Specifications
Regarding the format of a DID: as previously noted, it follows "did:" + <method> +
":" + <method-specific-identifier>. The method defines how and where the DID is to
be resolved. Table 2 lists the methods registered so far; the majority of them
(Bitcoin, Ethereum, Sovrin, IPFS and Veres One, among others) are based on DLTs.
Table 2. Registered DID methods

Method Name | Status | DLT or Network | Authors | Link
did:abt: | PROVISIONAL | ABT Network | ArcBlock | ABT DID Method
did:btcr: | PROVISIONAL | Bitcoin | Christopher Allen, Ryan Grant, Kim Hamilton Duffy | BTCR DID Method
did:stack: | PROVISIONAL | Bitcoin | Jude Nelson | Blockstack DID Method
did:erc725: | PROVISIONAL | Ethereum | Markus Sabadello, Fabian Vogelsteller, Peter Kolarov | erc725 DID Method
did:example: | PROVISIONAL | DID Specification | W3C Credentials Community Group | DID Specification
did:ipid: | PROVISIONAL | IPFS | TranSendX | IPID DID Method
did:life: | PROVISIONAL | RChain | lifeID Foundation | lifeID DID Method
did:sov: | PROVISIONAL | Sovrin | Mike Lodder | Sovrin DID Method
did:uport: | PROVISIONAL | Ethereum | uPort |
did:v1: | PROVISIONAL | Veres One | Digital Bazaar | Veres One DID Method
did:dom: | PROVISIONAL | Ethereum | Dominode |
did:ont: | PROVISIONAL | Ontology | Ontology Foundation | Ontology DID Method
did:vvo: | PROVISIONAL | Vivvo | Vivvo Application Studios | Vivvo DID Method
did:icon: | PROVISIONAL | ICON | ICON Foundation | ICON DID Method
did:iwt: | PROVISIONAL | InfoWallet | Raonsecure | InfoWallet DID Method
did:ockam: | PROVISIONAL | Ockam | Ockam | Ockam DID Method
did:ala: | PROVISIONAL | Alastria | Alastria National Blockchain Ecosystem | Alastria DID Method
did:op: | PROVISIONAL | Ocean Protocol | Ocean Protocol | Ocean Protocol DID Method
did:jlinc: | PROVISIONAL | JLINC Protocol | Victor Grey | JLINC Protocol DID Method
did:ion: | PROVISIONAL | Bitcoin | Various DIF contributors | ION DID Method
did:jolo: | PROVISIONAL | Ethereum | Jolocom | Jolocom DID Method
did:ethr: | PROVISIONAL | Ethereum | uPort | ETHR DID Method
did:bryk: | PROVISIONAL | bryk | Marcos Allende, Sandra Murcia, Flavia Munhoso, Ruben Cessa | bryk DID Method
did:peer: | PROVISIONAL | peer | Daniel Hardman | peer DID Method
did:selfkey: | PROVISIONAL | Ethereum | SelfKey | SelfKey DID Method
4.3 Enabler for Private Data Sharing
4.3.1 Overview
Following the trend of leveraging blockchain for IoT, a number of schemes for
secure sharing of data over blockchains have been explored by the research
community [Shafagh et al. 2017, Kokoris-Kogias et al. 2018]. Kokoris-Kogias et al.
proposed CALYPSO for auditable sharing of private data, where data is stored
on-chain and collective authorities formed over the blockchain are responsible for
enforcing access control policies. This design is not suitable for the huge
amounts of IoT data generated by large numbers of IoT devices in real-world
systems. Shafagh et al. designed a system for sharing time-series IoT data where
data owners have to issue transactions to set policies each time the data is
shared with another party, and only the owner can change a policy later.
Similarly, Laurent et al. [Laurent et al. 2018] proposed a system where a
blockchain handles transactions between parties before granting permissions;
however, how such transactions are carried out is not addressed, and again only
owners can change policies. Our design, in contrast, allows ACL updates and
decryption key distribution to be performed autonomously over the blockchain
back-end without any intervention by the data owner. Owners only specify data
offers; trading and granting are handled by the blockchain.
Beyond the specific use case of adopting DLT for inter-domain policy management,
we have opted to design and develop a generic private data sharing enabler. This
enabler can, of course, also be used to manage policies.
In our generic design, we consider the main IoT platform components, including IoT
domains (IoT stakeholders, IoT Discovery and IoT Brokers). Which components
participate in the blockchain channel depends on the specific deployment. Figure 8
shows how IoT domains are linked to the blockchain. The Router and Blockchain
Handler operate on top of an existing IoT platform, e.g. FIWARE. They are the
meeting points through which existing IoT components communicate with additional
services such as the data market and data storage.
Figure 8: Integration of IoT and blockchain
The Router plays a central role to which the remaining components connect. It
inspects incoming and outgoing data and directs it to the corresponding component.
Data received from IoT stakeholders is handled by either the cloud handler or the
Blockchain Handler, depending on the content of the incoming messages; messages
about data offers are directed to the data marketplace via the Blockchain Handler.
To serve discovery, the Router also updates the IoT Broker and IoT Discovery with
the appropriate metadata, depending on the design of the specific system.
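The Router's dispatch rule can be sketched as follows; the message types and handler names are illustrative assumptions, not IoTCrawler APIs.

```python
def route(message: dict) -> str:
    """Toy dispatch rule for the Router: marketplace-related traffic goes
    to the Blockchain Handler, everything else to the cloud handler.
    (Message types and handler names are hypothetical.)"""
    if message.get("type") in ("data_offer", "access_request"):
        return "blockchain_handler"
    return "cloud_handler"
```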
The Blockchain Handler manages all communications with the blockchain when data
market operations take place. It handles data submitted to the blockchain as well
as data queries, and it supports the IoT access control component. Data becomes
available for access or retrieval when the access policies are satisfied. The IoT
access control component keeps policies synchronized with the data market via the
Blockchain Handler.
By leveraging blockchain technology, the data sharing enabler enhances the
transparency of data exchanges in an IoT framework. It simplifies the data
management process while providing transparency and monetization. For the
proof-of-concept deployment, this enabler will be implemented on the FIWARE and
Hyperledger Fabric platforms.
When the enabler is used for inter-domain policy management, each policy is
created and validated via smart contract executions. Assuming the blockchain
network is secure, no malicious peer (domain) can alter any validated policy; any
change to an existing policy requires a new approval through the execution of new
smart contracts. IoTCrawler is represented as a special domain which also acts as
the gateway for communication with external entities. Hence, it handles data
access according to the valid policies in the blockchain channel for inter-domain
policies.
4.3.2 Basic Concepts
Our design includes five entities: IoT Domain, IoT Stakeholder (contained within a
domain), Cloud Storage, Blockchain and Key Authority (optional). We consider an
IoT Domain a virtual concept for a group of IoT stakeholders that share common
settings (e.g. devices located in the same building, or producing the same type of
context data). An IoT Domain acts as a node that can be both data producer and
data consumer. It can be regarded as an IoT middleware-layer component that
provides the capabilities to connect to further systems and applications that
low-level, hardware-based IoT components lack. Each IoT Domain maintains a router
to forward communications from IoT stakeholders to further applications. Our
extension adds two new components, which we name the Blockchain Handler and the
Router. These additions stem from the fact that the components of the integrated
system are implemented in different languages following various standards, and
thus cannot communicate with each other directly.
The heterogeneity becomes even larger when we consider the various cloud
providers, each with its own web APIs. Figure 8 gives an overview of the
integration, with the data flow between components and the standards used as
communication protocols.
4.3.3 Main Interactions
In the design of our enabler, we store IoT data off-chain but offload to the
blockchain the access control functionality, currently handled by a centralized
entity such as the IoT Broker. In particular, a smart contract handles access
control policies and evaluates access requests. Data owners push their
(file-based) data to the off-chain storage and advertise it to the smart contract
through an "offer". The offer may define the price to be paid in order to gain
access to the data. Similarly, access requests from consumers are issued to the
smart contract, which evaluates each request against the policy and makes an
access decision. The smart contract also bookkeeps trade information between
owners and consumers via IOU accounts. As such, our platform creates a "data
marketplace" where owners sell and consumers buy data.
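The marketplace logic described above can be sketched as a toy contract. The class and method names are illustrative assumptions; a real deployment would express this logic as Fabric chaincode.

```python
class Marketplace:
    """Sketch of the marketplace smart contract: offers, access requests,
    ACL updates and IOU bookkeeping (all names are illustrative)."""

    def __init__(self):
        self.offers = {}  # data_id -> {"owner": ..., "price": ...}
        self.acl = {}     # data_id -> set of authorised consumers
        self.iou = {}     # party -> balance

    def offer(self, data_id, owner, price):
        # The owner advertises data held at the off-chain storage.
        self.offers[data_id] = {"owner": owner, "price": price}
        self.acl[data_id] = set()

    def request_access(self, data_id, consumer):
        o = self.offers[data_id]
        # Access decision: here simply "pay the price"; real policies
        # could be arbitrary smart-contract logic.
        self.iou[consumer] = self.iou.get(consumer, 0) - o["price"]
        self.iou[o["owner"]] = self.iou.get(o["owner"], 0) + o["price"]
        self.acl[data_id].add(consumer)
```

The storage provider, as a blockchain node, reads the resulting ACL to decide whether to serve a given consumer, so no owner intervention is needed per request.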
As we handle access control in the blockchain, we ensure that access policies are
correctly managed and access requests are duly evaluated. We also ensure
remuneration of data owners and auditability of all operations. Since data is actually
stored off-chain, the storage provider is also a blockchain node executing the smart
contract. Once an access decision is made, the storage provider acts accordingly—
allowing or denying access to the data—thereby performing as the policy
enforcement point. Up to this point, our solution overview assumes a trusted
storage provider. Nevertheless, a malicious storage provider may abuse its
functionality as policy enforcement point and, e.g., share data with unauthorized
parties. We address this issue in our extended design where 1) data (file-based) is
encrypted before uploading it to the storage provider, and 2) key distribution to
authorized parties is handled by a cohort of key authorities. As a result, the storage
provider only handles ciphertexts and unauthorized access requires compromise of
all the parties acting as key authorities. Given the application scenario where data
(and sensors) is usually organized in a hierarchical fashion, we identify prefix
encryption as a suitable encryption scheme that allows fine-grained access control
while minimizing key requests to the (distributed) authority.
Figure 9: Data sharing scheme based on ACL
An Access Control List (ACL) is a list of permissions that governs who can access
a data resource. A data owner stores its data in the clear at the cloud storage,
which holds the latest version of the ACL (since the storage provider is also a
node of the blockchain) and enforces access control decisions. Note that data
owners maintain their own storage, compatible with IoTCrawler's NGSI-based
protocols.
depicts the sharing steps where producer and consumer advertise and accept data
offers. The data offer is described in a smart contract and the corresponding ACL is
updated by adding the identity of the consumer once its access request has been
accepted. Note that ACL updates are done over the blockchain (by executing the
marketplace smart contract) and do not require operations from data owners.
Enforcement via ACLs provides a straightforward means to regulate access to data.
Nevertheless, this scheme assumes a trusted storage provider that does not abuse
its role as a policy enforcement point to, e.g., leak data to unauthorized parties.
Figure 10: Data sharing scheme based on prefix encryption
We now introduce a scheme where access control is cryptographically enforced so
that we can refine the trust assumption on the storage provider. In particular we
assume data is encrypted by owners before it is uploaded to the storage provider
and we use the blockchain-managed ACL to reflect parties authorized to access the
corresponding decryption keys. Figure 10 provides an overview of this design. Here
the cloud is detached from the blockchain network.
We use prefix encryption as it is particularly suited to IoT scenarios where a
multitude of devices is organized in a hierarchy. For example, Alice may organize
her IoT devices in a hierarchical namespace alice/, including alice/home/ for
devices installed in her house and alice/car/ for devices installed in her car.
Alice can thus create her own master key pair (i.e., by running Setup(1)) and load
the master public key on her devices. Each device produces data encrypted under
its own "prefix". For example, the smart thermostat may encrypt its measurement m
via Encrypt(mpk, alice/home/thermostat, m) and upload the ciphertext to the
platform. Envelope encryption may be used to encrypt bulk data.
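The access structure of prefix encryption can be illustrated with a symmetric sketch in which keys are derived by chaining a KDF along the path segments, so the holder of a prefix key can derive keys for everything below that prefix. This is only an analogue of the scheme's behaviour, not the public-key construction the enabler uses.

```python
import hashlib
import hmac

def prefix_key(parent_key: bytes, prefix: str) -> bytes:
    """Derive a key for a prefix by chaining a KDF along path segments.
    Whoever holds the key for alice/home can derive the key for
    alice/home/thermostat, mirroring prefix encryption's access
    structure (symmetric sketch only)."""
    key = parent_key
    for segment in prefix.strip("/").split("/"):
        key = hmac.new(key, segment.encode(), hashlib.sha256).digest()
    return key
```

Crucially, the derivation only works downwards: knowing the key for alice/home/thermostat reveals nothing about the key for alice/home or for sibling devices.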
Distribution of secret keys to eligible parties can happen in a number of ways.
Alice may set up her own service or, alternatively, delegate this role to either a
single or a distributed authority. The party taking up this role must hold the
master secret key msk created by Alice at Setup time, and must synchronize with
the blockchain to obtain the up-to-date ACL for a given prefix and distribute
decryption keys accordingly. The key authority receives a request to access a
given prefix (e.g., alice/home/thermostat) and uses the ACL to decide whether to
grant or deny the request. If the request is granted, the authority runs Extract
to compute the secret decryption key and securely transfers that key to the
requestor. Note that requests carry the public key of the requestor so that the
authority can securely transfer keys to the intended party. Once a party obtains
the decryption key associated with a given prefix, it can request from the storage
provider the data produced under that prefix. The storage provider does not manage
cleartext data, as devices only upload encrypted data; therefore, it does not
carry out any policy enforcement but simply forwards the ciphertexts to the
requestor. Finally, the requestor holding the decryption key runs the Decrypt
algorithm and recovers the cleartext data. Note that a requestor may use the
decryption key for a prefix (e.g., alice/home/) to decrypt ciphertexts produced by
any device under that prefix (e.g., alice/home/thermostat or alice/home/doorlock).
4.4 Enabler for Intra-Domain Capability-Based Access
Control
4.4.1 Overview
Access Control is a security aspect of utmost importance that we have considered
from the beginning of this project, since our goal is to provide a mechanism that
allows for secure registration of IoT resources, as well as secure dissemination
of that information to legitimate users. For this reason, the architecture
presented in deliverable D2.2 includes an authorisation enabler at the domain
layer (Figure 11). In this way, every single action over the MDR must first be
authorised.
Figure 11: Authorisation enabler placed in the IoTCrawler architecture
There are different alternatives for implementing the access control mechanism;
they usually define a set of access control policies that specify the entity
requesting access, the resource to be accessed and the action to be performed on
it. We have selected Distributed Capability-Based Access Control
[Hernandez-Ramos et al. 2014] because it decouples the granting of access and the
enforcement of access control into two different phases, allowing a device or
service to perform the first phase once and then employ the resulting token for as
long as it is valid. The second phase basically consists of presenting the token
together with the request message.
4.4.2 Basic Concepts
The technology implemented by this enabler is called Distributed Capability-Based
Access Control (DCapBAC). This authorisation method integrates the XACML framework
[Anderson et al. 2003] with the concept of an authorisation token which is later
presented to an enforcement point. The XACML framework is used for generating the
authorisation policies, as well as for issuing an authorisation verdict thanks to
the PDP (Policy Decision Point) and the XACML policies generated by the PAP
(Policy Administration Point).
DCapBAC introduces a new component called the Capability Manager (CM) which, after
a positive verdict from the PDP, issues an authorisation token called a Capability
Token (CT) based on the authorisation request; the token is then received by the
requesting entity. The CT contains information regarding the granted
authorisation, such as the subject, the issuer and its lifetime, defined by two
timestamps (beginning and end of the validity period), among others.
Therefore, the enabler comprises the following entities:
• CM: the entity receiving access control requests, which it forwards to the
XACML PDP for validation.
• XACML PDP: the entity responsible for making access control decisions based on
the XACML policies defined by the PAP.
• XACML PAP: the entity responsible for generating the XACML access control
policies.
Capability Token
The CM, after receiving a positive verdict from the PDP, generates a CT based on a
JSON document. Compared with more traditional formats such as XML, JSON is
receiving increasing attention from academia and industry in IoT scenarios, since
it provides a simple, lightweight, efficient and expressive data representation
that is suitable for constrained networks and devices.
As shown below, this format follows a similar approach to JSON Web Tokens (JWTs)
[Jones et al. 2012], but includes the access rights that are granted to a specific
entity.
Figure 12. Capability Token Example

{
  "id": "eg3fq:fb5r23tra3",
  "ii": 1485172121,
  "is": "[email protected]",
  "su": "zNwS5FetB4rwzSKsWwSBAxm5wDa=JgLjHU8zSnmeSFQgSG9HhdsJrE8=",
  "de": "coap://sensortemp.floor1.computersciencefaculty.um.es",
  "si": "SbUudG4zuXswFBxDeHB87N6t9hR=PBQqCN3gpu7nSkuPzDk7kaR3dq1=",
  "ar": [
    {
      "ac": "GET",
      "re": "temperature"
    }
  ],
  "nb": 1485172121,
  "na": 1485174121
}

Figure 12 shows a capability token example. Below, a brief description of each
field is provided.
• Identifier (ID). Unequivocally identifies a capability token. A random or
pseudorandom technique is employed by the issuer to ensure this identifier is
unique.
• Issued-time (II). The time at which the token was issued, as the number of
seconds since 1970-01-01T00:00:00Z.
• Issuer (IS). The entity that issued the token and, therefore, its signer.
• Subject (SU). The subject to which the rights in the token are granted. A
public key is used to validate the legitimacy of the subject; specifically, it is
based on ECC, so each half of the field represents one public key coordinate of
the subject in Base64.
• Device (DE). A URI that unequivocally identifies the device to which the token
applies.
• Signature (SI). The digital signature of the token. As an ECDSA signature is
represented by two values, each half of the field represents one of these values
in Base64.
• Access Rights (AR). The set of rights that the issuer has granted to the
subject.
• Action (AC). A specific granted action. Its value can be any CoAP method (GET,
POST, PUT, DELETE), although other actions could also be considered.
• Resource (RE). The resource on the device for which the action is granted.
• Condition flag (F). States how the set of conditions in the next field is
combined: a value of 0 means AND, and a value of 1 means OR.
• Conditions (CO). The set of conditions that must be fulfilled locally on the
device to grant the corresponding action.
• Condition Type (T). The type of condition to be verified.
• Condition Value (V). The value of the condition.
• Condition Unit (U). The unit of measure of the value; it can be any of the
predefined formats.
• Not Before (NB). The time before which the token must not be accepted. Its
value cannot be earlier than the II field, and the current time must be greater
than or equal to NB.
• Not After (NA). The time after which the token must not be accepted.
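The PEP-side checks on such a token can be sketched as follows. The signature verification over the SI field is omitted here (it would be an ECDSA check with the issuer's public key), and the function name is illustrative.

```python
import time

def check_token(token: dict, action: str, resource: str, now: int = None) -> bool:
    """Check a Capability Token's validity window (NB/NA) and access
    rights (AR). The ECDSA signature check over SI is intentionally
    omitted from this sketch."""
    now = int(time.time()) if now is None else now
    # The token is valid only between Not Before and Not After.
    if not (token["nb"] <= now <= token["na"]):
        return False
    # The requested action on the resource must appear in the rights list.
    return any(r["ac"] == action and r["re"] == resource for r in token["ar"])
```

For the example token of Figure 12, a GET on "temperature" inside the validity window would pass, while a POST, or any request after NA, would be rejected.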
4.4.3 Main Interactions
The authorisation process comprises the generation of XACML policies, their
validation and their enforcement. Accordingly, we have defined three kinds of
interactions:
Definition of authorisation policies
XACML is a framework for authorisation and access control consistent with the
Attribute-Based Access Control model. XACML uses policies to express the actions
that an entity can or cannot perform.
A Policy Administration Point (PAP) generates and stores the XACML policies (in
XML format) and feeds them to the Policy Decision Point (PDP) (see Figure 13).
Figure 13: PAP adds XACML policy
Authorisation granting process
When an entity intends to register or access a specific resource of the platform,
it must first be granted access. Following the DCapBAC procedure, the entity first
requests access to that resource.
As shown in Figure 14, it issues an authorisation request to the CM specifying the
resource, the type of access, and so on. The request is analysed by the Capability
Manager, which issues an XACML authorisation request to the PDP; the PDP validates
it by checking the XACML policies generated by the PAP and responds with a
verdict.
Figure 14. Authorisation granting interactions
The CM, after receiving a positive answer, generates the corresponding CT which is
attached in the final authorisation response to the requester entity.
Authorisation enforcing
The enforcement of the authorisation policy is delegated to the PEP_Proxy entity,
which validates the CT sent by the requester entity together with the NGSI-LD
query.
The interface provided by this access control enabler comes from the CM entity,
which is the one reachable by requesting entities. It expects to receive authorisation
requests for specific resources, and it answers with a CT in case the access control
decision is positive.
Additionally, the PAP also offers a Web-based interface which allows for an easy
definition of XACML policies which are later checked by the PDP.
4.5 Enabler for Policy Compliance Check
4.5.1 Overview
Monitoring system behaviour and checking it at runtime is widely used for a
variety of systems and policies. Many system and security policies are
requirements on the temporal occurrence and non-occurrence of system events.
Linear-time temporal logics like LTL [Pnueli, 1977], or variants and extensions
thereof, are well suited to formally express such requirements; see also, e.g.,
[Basin et al., 2015].
This enabler, referred to in the following as POLÌMON, supports a rich, formally
defined specification language based on a real-time logic [Alur and Henzinger
1992, Koymans 1990] that extends LTL for expressing a large variety of policies.
That is, the policies handled by this enabler are temporal requirements on
system events. Simple policy examples are that requests must be served within a
given time period, and that a failed login must not be directly followed by
another login attempt.
Furthermore, POLÌMON monitors system components and checks their behaviour
and interactions against the given policy specifications at runtime. Noncompliant
behaviour is reported promptly, even in the presence of knowledge gaps.
Figure 15. Policy compliance check enabler overview
More concretely, POLÌMON's input comprises (1) a specification and (2) event
streams from the monitored system components (see also Figure 15). POLÌMON
processes the incoming events, which it either receives over a UDP socket or reads
from a file, iteratively and checks them against a given specification, which is a
formula in the real-time logic MTL [Koymans 1990] extended with the freeze
quantifier. POLÌMON does not require that the events are received in the order in
which they are generated. Each event is processed immediately, that is, POLÌMON
interprets the incoming message that describes the event, updates the monitor
state, and outputs the computed verdicts. The verdicts are forwarded to the system
administrator or to another system component that can take appropriate
countermeasures, e.g., by reconfiguring the system or by terminating or
isolating an ill-behaving system component.
4.5.2 Basic Concepts
POLÌMON processes the incoming events in a pipeline, where the pipeline stages
are executed as separate processes. See Figure 15. The first stage, the interpreter,
parses an event, extracts the event's data values, and determines the interpretation
of the predicate symbols at the events' time point via regular-expression matching.
The second stage, the timeliner, determines the time periods for which all events
have been received. Finally, the third stage, the monitor, computes the verdicts.
POLÌMON currently comprises two monitors, one for the propositional setting and
one that additionally handles data values.
POLÌMON makes the following system assumptions. First, a system's behaviour is
described completely through infinitely many events. Note that POLÌMON does not
require that all of them are received in the limit. Second, the monitored system
components are fixed and known to POLÌMON. Note that this assumption can be
relaxed by a mechanism that registers components before they become active and
unregisters them when they become inactive. To register components we can,
e.g., use a simple protocol where a component sends a registration request and
waits until it receives a message that confirms the registration. Third, components
do not tamper with messages and do not send bogus events. The assumption ruling
out tampering and improper delivery can be discharged in practice by adding
information to each message, such as a recipient identifier and a cryptographic hash
value, which are checked when receiving the message. Furthermore, the events’
integrity can be protected by utilizing the trusted execution environments at a
component for sending messages to POLÌMON.
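The message-protection idea sketched above (a recipient identifier plus a cryptographic check value) can be illustrated as follows. The use of a shared HMAC key is an assumption of this example; the deliverable leaves the concrete mechanism open.

```python
import hashlib
import hmac

SECRET = b"component-shared-key"  # assumption: key shared with the monitor

def protect(message, recipient, key=SECRET):
    """Attach a recipient identifier and a keyed hash so that
    tampering and misdelivery can be detected on receipt."""
    tag = hmac.new(key, recipient.encode() + message, hashlib.sha256).hexdigest()
    return {"to": recipient, "body": message, "tag": tag}

def verify(packet, me, key=SECRET):
    """Accept the message only if it is addressed to us and the tag matches."""
    if packet["to"] != me:
        return False
    expected = hmac.new(key, me.encode() + packet["body"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["tag"])
```

Any modification of the body or redirection to another recipient invalidates the tag.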
POLÌMON's underlying time model is based on wall-clock time. POLÌMON requires
that the events are linearly ordered by their timestamps. Note that this is a strong
requirement, since in particular for distributed systems, events can actually only be
partially ordered by logical clocks. If events are, however, only partially ordered, the
reasoning becomes significantly more difficult since a finite partially ordered trace
can have exponentially many interleavings. Furthermore, we argue that as long as
events are not generated at very high speed, existing protocols like the
Network Time Protocol (NTP) work well in practice to synchronize clocks between
distributed components [Mills, 1995]. This justification becomes, however,
questionable when events are generated at a very high speed, e.g., in the low
microsecond range or even in the nanosecond range.
We conclude this section by illustrating POLÌMON's usage through a simple
example. Consider a system in which agents can issue tickets. Issuing a ticket is
represented by an event ticket(𝑎,𝑛), where 𝑎 is the agent who issued the ticket
and 𝑛 is the ticket identifier. Furthermore, consider the simple specification that an
agent must wait for some time (e.g. 100 milliseconds) before issuing another ticket.
A formalization in POLÌMON's specification language is as follows.
FREEZE a[agent], n[id]. ticket(a, n)
IMPLIES
NOT EVENTUALLY(0, 100ms] FREEZE m[id]. ticket(a, m)
The FREEZE quantifiers bind the logical variables to the data values that occur in the
events. The data values are stored in predefined registers (agent and id in the above
formula) and each event determines the register values at the event's occurrence.
Note that the data values from an event bound by the outer FREEZE quantifier in the
above formula stem from an event that is different from the one for the inner
FREEZE quantifier, since the metric constraint of the temporal future-time
connective EVENTUALLY excludes 0. Furthermore, note that an event also
determines the interpretation of the predicate symbol ticket at the event's
occurrence, namely, the singleton {(𝑎, 𝑛)}. Finally, note that there is an implicit
outermost temporal connective ALWAYS. For each received event, a reported verdict
describes whether the given specification is violated or satisfied at the event's time.
POLÌMON requires that events are timestamped (in Unix time with a precision up to
microseconds). POLÌMON additionally requires that each event includes the event's
system component and a sequence number, i.e., the 𝑖th event of the component.
POLÌMON uses the sequence numbers to determine whether it has received all
events from a component within a given time period. For instance, when POLÌMON
is monitoring a single component and it has received events with the sequence
number 𝑖 and 𝑖 + 2 but no event with the sequence number 𝑖 + 1, then POLÌMON
knows that there is a knowledge gap between the two events with the sequence
numbers 𝑖 and 𝑖 + 2.
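The gap-detection idea can be sketched as follows; the helper names are ours, not POLÌMON's internals.

```python
def missing_events(received, upto):
    """Sequence numbers up to `upto` that have not been received,
    i.e. the knowledge gaps. `received` is a set of integers."""
    return [i for i in range(1, upto + 1) if i not in received]

def complete_prefix(received):
    """Largest n such that all events 1..n have arrived; verdicts
    that only depend on this prefix are sound."""
    n = 0
    while n + 1 in received:
        n += 1
    return n
```

For the example below, after receiving the events with sequence numbers 1, 2, 3, 5 and 6, event 4 is a known gap and only verdicts depending on the first three events are sound.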
For our example, assume that POLÌMON is monitoring the single component
syscomp and receiving the following events from it.
1548694551.904@[syscomp] (1): ticket(alice, 34)
1548694551.996@[syscomp] (2): ticket(bob, 8)
1548694552.059@[syscomp] (3): ticket(charlie, 52)
1548694552.084@[syscomp] (5): ticket(bob, 11)
1548694552.407@[syscomp] (6): ticket(charlie, 1)
1548694552.071@[syscomp] (4): ticket(charlie, 99)
Note that the events are received in the order of their timestamps, except the event
with the sequence number 4 is received out of order. The delay of receiving this
event might, e.g., be caused by network latencies. POLÌMON processes the events
in the order it receives them.
POLÌMON does not output any verdict after receiving the first two events. When
processing the third event, POLÌMON outputs the verdict
1548694551.904: true.
The reason is that the 100 milliseconds have elapsed without alice issuing
another ticket. This inference is sound since no events are missing up to the
third event and the time difference between the first event and the third event
is larger than 100 milliseconds. POLÌMON outputs the next verdict when
processing the second event with agent bob:
1548694551.996: false.
Note that although there is a knowledge gap (the event with the sequence number
4 is missing), outputting this verdict is sound, since no matter how this gap is filled,
the second event with agent bob causes a violation of the specification for the first
event with agent bob. When receiving the second-to-last event, POLÌMON outputs
the verdict
1548694552.084: true
but not the verdict 1548694552.059: true. Outputting the latter verdict would
not be sound because of the knowledge gap between the events with sequence
numbers 3 and 5. In fact, the late-arriving event causes a violation. Finally,
when receiving this last event, POLÌMON outputs the following two verdicts.
1548694552.071: true
1548694552.059: false
4.5.3 Policy Specifications
In this section, we describe POLÌMON’s input in more detail, namely, we describe (1)
the core of the policy specification language and (2) the message format.
1) Policy Specification Language
Policies are expressed as formulas of a temporal logic. More precisely, POLÌMON
uses a variant of the real-time logic MTL with a point-based semantics and a dense
time domain. Furthermore, a freeze quantifier accounts for data values. We refer to
Alur and Henzinger [1992] for theoretical background, in particular, for the
underlying temporal model and the semantics of the specification language.
The core of the policy specification language is given by the following
grammar.
spec ::= TRUE
| p(𝑥1, … , 𝑥𝑛)
| NOT spec
| spec OR spec
| PREVIOUS[𝑎, 𝑏] spec
| NEXT[𝑎, 𝑏] spec
| spec SINCE[𝑎, 𝑏] spec
| spec UNTIL[𝑎, 𝑏] spec
| FREEZE 𝑥1[𝑟1], … , 𝑥𝑛[𝑟𝑛]. spec
Here, p is a predicate symbol, 𝑥1, … , 𝑥𝑛 range over variables, 𝑎 and 𝑏 are
nonnegative integers, and 𝑟1, … , 𝑟𝑛 range over data registers. Intuitively, the
semantics is as follows (we omit here the formal semantics, which can be found in
Basin et al. [2017]).
TRUE specifies the trivial policy that any behaviour satisfies.
p(𝑥1, … , 𝑥𝑛) specifies the policy that a behaviour satisfies at time 𝑡 whenever
the assignment of the variables (𝑥1, … , 𝑥𝑛) is in the relation that interprets the
predicate symbol p at time 𝑡.
NOT spec specifies the policy that a behaviour satisfies whenever it does not
satisfy spec.
spec1 OR spec2 specifies the policy that a behaviour satisfies whenever it
satisfies spec1 or spec2.
PREVIOUS[𝑎, 𝑏] spec specifies the policy that a behaviour satisfies whenever it
satisfies spec at the previous time point. Additionally, the previous time point
must satisfy the metric constraint [𝑎, 𝑏] relative to the current time point.
NEXT[𝑎, 𝑏] spec specifies the policy that a behaviour satisfies whenever it
satisfies spec at the next time point. Additionally, the next time point must
satisfy the metric constraint [𝑎, 𝑏] relative to the current time point.
spec1 SINCE[𝑎, 𝑏] spec2 specifies the policy that a behaviour satisfies at time 𝑡
whenever the behaviour satisfies at some time 𝑠 ≤ 𝑡, with 𝑎 ≤ 𝑡 − 𝑠 ≤ 𝑏, the
policy spec2, and since 𝑠, the behaviour satisfies the policy spec1.
spec1 UNTIL[𝑎, 𝑏] spec2 specifies the policy that a behaviour satisfies at time 𝑡
whenever the behaviour satisfies at some time 𝑠 ≥ 𝑡, with 𝑎 ≤ 𝑠 − 𝑡 ≤ 𝑏, the
policy spec2, and until 𝑠, the behaviour satisfies the policy spec1.
FREEZE 𝑥1[𝑟1], … , 𝑥𝑛[𝑟𝑛]. spec specifies the policy that a behaviour satisfies at
time 𝑡 whenever the behaviour satisfies the policy spec, where the variables 𝑥1
to 𝑥𝑛 are assigned the values of the registers 𝑟1 to 𝑟𝑛 at time 𝑡.
Additionally, there are predefined predicate symbols for comparison, which are
written infix: =, /=, <=, >= for comparing integers and == and =/= for comparing
strings. Furthermore, various additional syntactic sugar is defined. First, the
standard Boolean connectives AND (conjunction) and IMPLIES (implication) are
defined. Second, the unary temporal connectives EVENTUALLY, ALWAYS, ONCE, and
HISTORICALLY are derived from the binary temporal connectives UNTIL and SINCE.
For example, ONCE[𝑎, 𝑏] spec abbreviates TRUE SINCE[𝑎, 𝑏] spec. A behaviour
satisfies the policy ONCE[𝑎, 𝑏] spec at time 𝑡 whenever the behaviour satisfies
spec at some time 𝑠, with 𝑎 ≤ 𝑡 − 𝑠 ≤ 𝑏. Third, register names can be omitted if
they are identical to the corresponding variable names. For example,
FREEZE 𝑥. spec abbreviates FREEZE 𝑥[𝑥]. spec. Furthermore, note that the
connectives are assigned different binding strengths, which allows one to omit
parentheses. We use the standard conventions here. For example, NOT binds
stronger than OR. By this convention, NOT spec1 OR spec2 abbreviates
((NOT spec1) OR spec2). We also allow open intervals and half-open intervals as
metric temporal constraints, like (𝑎, 𝑏) and [𝑎, 𝑏), for integers 𝑎 and 𝑏 with
0 ≤ 𝑎 < 𝑏. In these two cases 𝑏 can also be infinity (denoted by the symbol *)
to specify an unbounded interval. The interval [0,*), which does not impose any
metric temporal constraint, can be dropped. A time unit (e.g., ms, s, and m) can
be attached to the numbers. If no time unit is given, the numbers are
interpreted as seconds.
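The interval syntax just described can be illustrated with a small parser. This is a sketch of the syntax as described in the text, not POLÌMON's actual parser.

```python
import re

_UNIT = {"ms": 0.001, "s": 1.0, "m": 60.0}  # the time units named above

def parse_interval(text):
    """Parse a metric constraint such as "[0, 100ms]" or "(0, *)".
    Returns (lower, upper, lower_closed, upper_closed) with the
    bounds in seconds; '*' maps to infinity, and a number without a
    unit is interpreted as seconds."""
    m = re.fullmatch(
        r"([\[(])\s*(\d+)(ms|s|m)?\s*,\s*(\*|\d+(?:ms|s|m)?)\s*([\])])", text)
    if not m:
        raise ValueError(f"bad interval: {text}")
    lo = int(m.group(2)) * _UNIT.get(m.group(3) or "s", 1.0)
    if m.group(4) == "*":
        hi = float("inf")
    else:
        um = re.fullmatch(r"(\d+)(ms|s|m)?", m.group(4))
        hi = int(um.group(1)) * _UNIT.get(um.group(2) or "s", 1.0)
    return lo, hi, m.group(1) == "[", m.group(5) == "]"
```

For instance, "(0, 100ms]" yields an open lower bound of 0 and a closed upper bound of 0.1 seconds, matching the EVENTUALLY(0, 100ms] constraint used in the ticket example.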
An example of a policy specification is given in Section 4.5.2.
2) Message Format and Message Interpretation
The messages that are sent to POLÌMON must consist of the following fields;
examples of messages are given in Section 4.5.2. The timestamp is given in Unix
time, with a possibly fractional part, and specifies when the action was carried out.
Furthermore, the message must contain the sender and a sequence number. Recall
that the sequence number is the number of messages that the sender has sent so
far to POLÌMON. POLÌMON uses it to determine whether it has received all
messages up to a given time. Finally, the message must describe the performed
action. For its interpretation, POLÌMON additionally requires a configuration file that
specifies how to extract information from the description like data values and the
interpretation of the predicate symbols. This configuration file consists of rules with
regular-expression matching to identify the performed action and to extract
relevant data values. The first matching rule determines the interpretation of the
message. One can think of these rules as an if-then-else program.
The configuration file for the example in Section 4.5.2 is as follows.
["syscomp"]:
[event matches "ticket\((?P<agent>[a-z]+),(?P<id>[0-9]+)\)"]
==>
{ticket(<agent>, <id>)}
[ ]
==>
ignore
;;
The second rule matches everything and ignores the message. Note that it only
applies if the first rule does not match.
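The first-match semantics of such configuration rules can be illustrated as follows. The in-memory rule representation is an assumption of this sketch; only the regular expression is taken from the example above.

```python
import re

# Each rule is a (pattern, action) pair; the first pattern that matches
# the event determines its interpretation, like an if-then-else chain.
RULES = [
    (re.compile(r"ticket\((?P<agent>[a-z]+),(?P<id>[0-9]+)\)"),
     lambda m: ("ticket", m.group("agent"), int(m.group("id")))),
    (re.compile(r".*"), lambda m: None),  # catch-all: ignore the message
]

def interpret(event):
    """Return the interpretation of `event` according to the first
    matching rule, extracting data values via named groups."""
    for pattern, action in RULES:
        m = pattern.fullmatch(event)
        if m:
            return action(m)
    return None
```

An event such as ticket(alice,34) is mapped to the predicate interpretation with the agent and ticket identifier extracted, while anything else falls through to the catch-all rule and is ignored.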
5 Data-centric Privacy Enablers for Differential Accessibility
5.1 Context-Aware Privacy Policies
5.1.1 Overview
Common IoT scenarios require more flexible data sharing models between entities
while still preserving the privacy of the smart objects involved. Unlike the
current Internet, IoT interaction patterns are often based on short and volatile
associations between entities without a previously established trust link. In
this section, we review existing cryptographic schemes that can potentially be
used to implement secure information sharing based on the push model. These
mechanisms should be applied on the smart objects themselves in order to provide
end-to-end secure data dissemination.
Attribute-Based Encryption (ABE) is gaining attention because of its high level
of flexibility and expressiveness compared to previous schemes. In ABE, a piece
of information can be made accessible to a set of entities characterised by a
certain set of attributes, whose real identities may be unknown. This represents
a step forward towards realizing a privacy-preserving and secure data sharing
scheme in pervasive and ubiquitous scenarios, since consumers do not need to
reveal their true identity to obtain information, while producers can be sure
that their data are accessed only by authorized entities.
Based on ABE, two alternative approaches were proposed. In KP-ABE [Goyal et al.
2006], a ciphertext is encrypted under a set or list of attributes, while
private keys of participants are associated with combinations or policies of
attributes. In this case, a data producer has limited control over which
entities can decrypt the content, being forced to rely on the Attribute
Authority (AA) to issue appropriate keys for getting access to disseminated
information. In contrast, in a CP-ABE scheme [Bethencourt et al. 2007], a
ciphertext is encrypted under a policy of attributes, while keys of participants
are associated with sets of attributes. Thus, CP-ABE can be seen as a more
intuitive way to apply the concepts of ABE: on the one hand, a producer can
exert greater control over how the information is disseminated to other
entities; on the other hand, a user's identity is intuitively reflected by a
certain private key. In addition, CP-ABE is secure against collusion attacks,
that is, different keys from their corresponding entities cannot be combined to
create a more powerful decryption key. This feature is due to the use of
individual random factors for each key generation. Moreover, in order to enable
the application of CP-ABE in constrained environments, the scheme can be used in
combination with Symmetric Key Cryptography (SKC) [Garcia-Morchon et al. 2013].
Thus, a message would be protected with a symmetric key, which would in turn be
encrypted with CP-ABE under a specific policy.
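The hybrid SKC/CP-ABE pattern can be sketched as follows. The toy keystream cipher stands in for a real symmetric cipher such as AES, and cpabe_encrypt/cpabe_decrypt stand in for calls to an actual CP-ABE library; none of these names come from the deliverable.

```python
import hashlib
import os

def _xor_stream(key, data):
    """Toy symmetric cipher (SHA-256 keystream in counter mode);
    a real deployment would use AES. For illustration only."""
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def hybrid_encrypt(message, policy, cpabe_encrypt):
    """Protect `message` with a fresh symmetric key and wrap that key
    with CP-ABE under `policy`."""
    sym_key = os.urandom(32)
    return {"policy": policy,
            "wrapped_key": cpabe_encrypt(sym_key, policy),
            "ciphertext": _xor_stream(sym_key, message)}

def hybrid_decrypt(blob, cpabe_decrypt):
    """Unwrap the symmetric key with CP-ABE, then recover the message."""
    sym_key = cpabe_decrypt(blob["wrapped_key"], blob["policy"])
    return _xor_stream(sym_key, blob["ciphertext"])
```

Only the short symmetric key is processed by the expensive CP-ABE operation, which is what makes the combination attractive for constrained devices.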
5.1.2 Sharing Policies
These CP-ABE policies indicate the set of entities that will be able to decrypt
the information to be shared by specifying the sets of attributes that these
entities must satisfy.
The resulting CP-ABE policy is used to encrypt the information to be shared and
is subsequently sent to the CP-ABE engine subcomponent.
The following example shows a CP-ABE policy, which, as can be seen, is a logical
combination of specific attributes that can be associated with an entity:
CP-ABE policy = ”role:admin AND company:OdinS”.
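For illustration, a toy evaluator shows how such a policy string relates to an entity's attribute set. Note that in CP-ABE this check is enforced cryptographically by the keys and ciphertexts themselves, not by an explicit evaluation step.

```python
def satisfies(attributes, policy):
    """Check whether a set of attribute strings satisfies a flat
    AND/OR policy such as "role:admin AND company:OdinS".
    AND binds stronger than OR; no parentheses are supported.
    A sketch for illustration only."""
    for disjunct in policy.split(" OR "):
        if all(atom.strip() in attributes for atom in disjunct.split(" AND ")):
            return True
    return False
```

An entity holding both role:admin and company:OdinS would be able to decrypt data protected under the example policy, while an entity with only one of the two attributes would not.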
5.2 Privacy Enabler
5.2.1 Overview
According to the previous description, we have defined a PEP_Proxy component
which implements a CP-ABE engine. This component receives the queries that must
be sent to the MDR and performs two different activities: the first is to obtain
and validate the Capability Token that must be associated with each query, so as
to decide whether or not to forward the received query to the MDR. The second
activity is the application of CP-ABE privacy encryption policies to the content
that must be registered in the MDR.
The MDR provides two different ways of registering information: either by
pushing the information into the MDR or by acting as a provider for the
information registered in it. In the former case, the PEP_Proxy can receive
instructions on the CP-ABE policies that must be applied to encrypt the
attributes of the entities to be registered. Using this information as metadata,
it modifies the body of the registering or updating query and substitutes the
plaintext representing the entities' attributes with the corresponding
ciphertext encrypted under the corresponding CP-ABE policy.
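This substitution step can be sketched as follows. The metadata key name cpabe_policy and the body layout are assumptions of this sketch, not the NGSI-LD wire format, and cpabe_encrypt stands in for a real CP-ABE library call.

```python
import json

def encrypt_marked_attributes(body, cpabe_encrypt):
    """Replace the value of every attribute that carries encryption
    metadata with its CP-ABE ciphertext before the registration or
    update query is forwarded to the MDR."""
    out = {}
    for name, attr in body.items():
        if isinstance(attr, dict) and "cpabe_policy" in attr:
            policy = attr["cpabe_policy"]
            out[name] = {
                "value": cpabe_encrypt(json.dumps(attr["value"]), policy),
                "cpabe_policy": policy,
                "encrypted": True,  # marks the value as ciphertext
            }
        else:
            out[name] = attr  # attributes without metadata pass through
    return out
```

Attributes without encryption metadata are forwarded unchanged, so consumers lacking a suitable CP-ABE key still see the public parts of an entity.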
5.2.2 Basic Concepts
Regarding privacy, this PEP_Proxy, as already mentioned, allows for storing
information in the MDR in a privacy-preserving way. It comprises the following
entities:
- HTTPS proxy: Deals with secure messages by implementing TLS.
- NGSI-LD proxy: This module captures the queries aimed at the MDR and
processes the body of each query to find attributes that must be encrypted with
the CP-ABE algorithm.
- CP-ABE engine: This module is responsible for encrypting the attributes under
the CP-ABE policy.
5.2.3 Main Interactions
The PEP_Proxy interacts with the MDR essentially as a proxy. If an attribute
contains meta-information indicating that CP-ABE encryption must be performed
over it, then the PEP_Proxy interacts with the CP-ABE engine to encrypt the
information with the desired CP-ABE policy. This way, it modifies the current
representation of the information, substituting the plaintext of the attributes
with the encrypted value.
The envisioned API functions for the use of a CP-ABE scheme to realize group
communication security are:
addSharingPolicy (sharing_policy, cpabe_policy)
It adds a sharing_policy to a policy set, specifying the corresponding
cpabe_policy to be used in case such sharing_policy is successfully evaluated.
getSharingPolicy (sharing_policy_id) : SharingPolicy
It returns the SharingPolicy corresponding to a given sharing_policy_id.
updateSharingPolicy (sharing_policy_id)
It updates the sharing policy corresponding to a given sharing_policy_id.
deleteSharingPolicy (sharing_policy_id)
It deletes the sharing policy corresponding to a given sharing_policy_id.
getCPABEKey (attributes) : CPABEKeyattributes
It returns a CP-ABE key associated with a specific set of attributes. A
mechanism to prove that the entity possesses such a set of attributes is
required (e.g., X.509 certificates or anonymous credential systems).
getCPABEPolicy (sharingPolicies, data, trust&reputation, context) : CPABEPolicy
Based on the set of predefined sharingPolicies, the trust and reputation values,
and the current context in which the sharing transaction is going to take place,
it evaluates the sharing policies and returns a CP-ABE policy that will be used
by the encryptData function.
share (data)
It first obtains the context and the trust and reputation values. Then it calls
the getCPABEPolicy method to obtain a CPABEPolicy. Afterwards, it calls the
encryptData operation and finally pushes (or publishes, in a publish/subscribe
scenario) the encrypted data.
encryptData (data, CPABEPolicy) : ciphertext
It takes the data to be encrypted and a CPABEPolicy representing the subsets of
attributes that are allowed to decrypt the data. It returns a ciphertext that
embeds the CPABEPolicy.
decryptData (ciphertext, CPABEKeyattributes): data
It takes the ciphertext to be decrypted and a CPABEKeyattributes representing a
secret key with an associated set of attributes. It returns the data in case the
CPABEKeyattributes satisfies the CPABEPolicy contained in the ciphertext.
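The share flow described above can be sketched as follows; all collaborator callables are placeholders for the API functions listed in this section.

```python
# Sketch of share(): gather context and trust values, derive a CP-ABE
# policy from the sharing policies, encrypt, then publish.

def share(data, sharing_policies, get_context, get_trust,
          get_cpabe_policy, encrypt_data, publish):
    context = get_context()                  # current context
    trust = get_trust()                      # trust & reputation values
    policy = get_cpabe_policy(sharing_policies, data, trust, context)
    ciphertext = encrypt_data(data, policy)  # CP-ABE-protected payload
    publish(ciphertext)                      # push, or publish in pub/sub
    return ciphertext
```

The design keeps policy evaluation (getCPABEPolicy) separate from encryption (encryptData), so the same evaluation logic can serve both push and publish/subscribe deployments.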
6 Conclusions
This deliverable has summarized the work done within the scope of work package 3
(tasks T3.1, T3.2 and T3.3). The main focus of this work package is on designing
a security architecture applied to the IoTCrawler framework. Our overall goal is
to ensure that a large-scale, distributed search engine such as IoTCrawler
operates in a secure manner. We have tackled many security and privacy
challenges regarding secure bootstrapping, secure authorisation, private data
sharing, decentralized policy management, and system monitoring. To achieve our
goals, we have leveraged the most up-to-date technologies and carefully designed
each enabler, taking into account the characteristics and constraints of IoT
devices and systems.
Our design of the security and privacy enablers is tightly coupled with the
architecture of the IoTCrawler framework. These enablers for providing security,
trust-awareness, and privacy cover all layers of the framework, from the Micro
layer to the Application layer. This ensures that security and privacy issues
are addressed at every level of the framework. Furthermore, we integrate these
enablers to bring mutual benefits to each individual enabler as well as to
strengthen the overall security.
We have adopted distributed ledger technology (DLT), with blockchain as its
instance, together with smart contracts to enhance the cross-domain policy
management and private data sharing of our IoTCrawler framework. We went further
than state-of-the-art solutions by providing a blockchain network that offers
auditable access control policy updates. Operations over the blockchain network
allow domains without prior trust to develop, distribute and approve access
policies applied to the data resources provided by those domains.
Last but not least, we have addressed the data privacy challenge by applying
cryptographic mechanisms such as attribute-based encryption and prefix
encryption to ensure that data is shared only between relevant entities.
For the next steps, we will complete the prototypes of these enablers and
integrate them into the IoTCrawler framework. From that, we can evaluate their
performance against practical datasets and real-world use-case scenarios.
7 References
Alur and Henzinger 1992: R. Alur and T. A. Henzinger. Logics and models of real time: A survey. In Proceedings of the 1991 REX Workshop on Real Time: Theory in Practice, volume 600 of Lect. Notes Comput. Sci., pages 74–106. Springer, 1992.
Aboba et al. 2008: B. Aboba, D. Simon, and P. Eronen. Extensible Authentication Protocol (EAP) Key Management Framework. Internet Requests for Comments, RFC 5247, pages 1–79, 2008.
Androulaki et al. 2018: E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, S. Muralidharan, C. Murthy, B. Nguyen, M. Sethi, G. Singh, K. Smith, A. Sorniotti, C. Stathakopoulou, M. Vukolić, S. W. Cocco, and J. Yellick. Hyperledger Fabric: A distributed operating system for permissioned blockchains. In Proceedings of the Thirteenth EuroSys Conference, EuroSys '18, pages 30:1–30:15, New York, NY, USA, 2018. ACM.
Baier 2008: C. Baier and J.-P. Katoen. Principles of Model Checking. MIT Press, 2008.
Basin et al. 2015: D. Basin, F. Klaedtke, and E. Zălinescu. Monitoring metric first-order temporal properties. J. ACM, 62(2):15, 2015.
Basin et al. 2017: D. Basin, F. Klaedtke, and E. Zălinescu. Runtime verification of temporal properties over out-of-order data streams. In Proceedings of the 29th International Conference on Computer Aided Verification (CAV), volume 10426 of Lect. Notes Comput. Sci., pages 356–376. Springer, 2017.
Bethencourt et al. 2007: J. Bethencourt, A. Sahai, and B. Waters. Ciphertext-policy attribute-based encryption. In Proceedings of the 2007 IEEE Symposium on Security and Privacy (S&P), pages 321–334. IEEE Computer Society, 2007.
Garcia-Carrillo et al. 2017: D. Garcia-Carrillo, R. Marin-Lopez, A. Kandasamy, and A. Pelov. A CoAP-based network access authentication service for low-power wide area networks: LO-CoAP-EAP. Sensors, 17(11), 2017.
Garcia-Carrillo and Marin-Lopez 2016: D. Garcia-Carrillo and R. Marin-Lopez. Lightweight CoAP-based bootstrapping service for the Internet of Things. Sensors, 16(3):358, 2016.
Garcia-Morchon et al. 2016: O. Garcia-Morchon, S. Kumar, S. Keoh, R. Hummen, and R. Struik. Security Considerations in the IP-Based Internet of Things. Available online: https://tools.ietf.org/html/draft-garcia-core-security-06 (accessed on 7 March 2016).
Garcia-Morchon et al. 2013: O. Garcia-Morchon, S. Kumar, R. Struik, S. Keoh, and R. Hummen. Security Considerations in the IP-based Internet of Things, 2013. Available online: https://tools.ietf.org/html/draft-garcia-core-security-06.
Goyal et al. 2006: V. Goyal, O. Pandey, A. Sahai, and B. Waters. Attribute-based encryption for fine-grained access control of encrypted data. In Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS), pages 89–98. ACM, 2006.
Fei Hu 2016: Fei Hu. Security and Privacy in Internet of Things (IoTs): Models, Algorithms, and Implementations, 2016.
Kokoris-Kogias et al. 2018: E. Kokoris-Kogias, E. C. Alp, S. D. Siby, N. Gailly, L. Gasser, P. Jovanovic, E. Syta, and B. Ford. Calypso: Auditable sharing of private data over blockchains. Cryptology ePrint Archive, Report 2018/209, 2018. https://eprint.iacr.org/2018/209.
Koymans 1990: R. Koymans. Specifying real-time properties with metric temporal logic. Real-Time Systems, 2(4):255–299, 1990.
Laurent et al. 2018: M. Laurent, N. Kaaniche, C.-Y. Le, and M. V. Plaetse. An access control scheme based on blockchain technology, 2018.
De Laat et al. 2000: C. de Laat, G. Gross, L. Gommans, J. Vollbrecht, and D. Spence. Generic AAA Architecture. Internet Requests for Comments, RFC 2903, pages 1–26, 2000.
Mills 1995: D. L. Mills. Improved algorithms for synchronizing computer network clocks. IEEE/ACM Trans. Netw., 3(3):245–254, 1995.
Pnueli 1977: A. Pnueli. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS), pages 46–57. IEEE Computer Society, 1977.
Shafagh et al. 2017: H. Shafagh, L. Burkhalter, A. Hithnawi, and S. Duquennoy. Towards blockchain-based auditable storage and sharing of IoT data. In Proceedings of the 2017 Cloud Computing Security Workshop, CCSW '17, pages 45–50, New York, NY, USA, 2017.
ZigBee Alliance 2014: ZigBee IP Specification, ZigBee Document 095023r34. ZigBee Alliance, USA, 2014.