Small or medium-scale focused research project (STREP) FP7-ICT-2013-11
Water analytics and Intelligent Sensing for Demand Optimised
Management
WISDOM
Project Duration: 2014.02.01 – 2017.01.31
Grant Agreement number: 619795
Collaborative Project
WP5
D5.2 INTEL
Base technology validation
Submission Date: 03.11.2016
Due Date: 01.11.2016
Status of Deliverable: DrA WoD ReL DeL AppR
Nature of Deliverable: R P D O
Dissemination Level:
PU PP RE CO
Project Coordinator: Daniela BELZITI
Tel: +33 4 93 95 64 14
Fax: +33 4 93 95 67 33
E-mail: [email protected]
Project website: www.wisdom-project.eu
WISDOM D5.2 – Base Technology Validation 2
Small or medium-scale focused research project (STREP) FP7-ICT-2013-11 – GA: 619795
Document Information
Submission Date: 31.10.16
Version: V1.3
Author(s): Keith Ellis, Eugene Ryan (INTEL)
Contributor(s): Tom Beach (CU), Davide Carboni (INTEL), Eric Pascual (CSTB), Reza Akhava (ICL), Julie McCann (ICL), Alberto Musetti (DAPP), Antonio Annis (IDRAN), Manuel Ramiro (ADV)
Reviewer(s): Mathew Preece (CCC), Elenia Duce (DAPP), Anna Taliana (DCWW), Gareth Glennie (DCWW)
Document History
Date Version Name1 Remark
14.09.2016 V0.1 Keith Ellis (INTEL) Initial version & ToC
19.09.16 V0.2 Keith Ellis (INTEL) Initial content, value & progress against objectives
20.10.16 V0.3 Eugene Ryan (INTEL) Specific code snippet examples & content
27.10.16 V0.4 Keith Ellis (Intel) Structure edits & content additions
29.10.16 V0.5 Eugene Ryan (INTEL) Tom Beach content integration, overall edits
30.10.16 V0.6 Keith Ellis (Intel) Eugene Ryan & Davide Carboni content integration
30.10.16 V0.7 Keith Ellis (Intel) ICL content integration
31.10.16 V0.8 Keith Ellis (Intel) DAPP & IDRAN content integration
31.10.16 V0.9 Keith Ellis (Intel) Conclusions
02.11.16 V1.0 Keith Ellis (Intel) ADV content integration & final edits
03.11.16 V1.1 Keith Ellis (Intel) Edits post PSC comments
03.11.16 V1.2 Keith Ellis (Intel) Edits post PSC comments
03.11.16 V1.3 Keith Ellis (Intel) Edits post PSC comments
1 Main Contributor and Partner
Copyright
© Copyright 2014 WISDOM Consortium consisting of CSTB, DAPP, CU, CCC, ASP, SAT, INTEL, ICL, ADV, IDRAN and DCWW. This document may not be copied, reproduced, or modified in whole or in part for any purpose without written permission from the WISDOM Consortium. In addition to such written permission to copy, reproduce, or modify this document in whole or part, an acknowledgement of the authors of the document and all applicable portions of the copyright notice must be clearly referenced. All rights reserved.
TABLE OF CONTENTS
1.1 List of Figures ..................................................................................................................................... 5
1.2 List of Tables ...................................................................................................................................... 5
1.3 List of Code snippets .......................................................................................................................... 5
1.4 Abbreviations ..................................................................................................................................... 6
1. Executive Summary ............................................................................................................. 7
1.1 The proposed value of the work ....................................................................................................... 9
1.2 Analysis of Progress against Objectives ........................................................................................... 10
1.3 Development, Verification & Validation methodology ................................................................... 12
2. EDGE TIER: development, verification and validation ......................................................... 18
2.1 DAN server ....................................................................................................................................... 18
2.2 The WISDOM gateway ..................................................................................................................... 18
3. PLATFORM TIER: development, verification and validation ................................................ 19
3.1 Overview .......................................................................................................................................... 19
3.2 Rule Engine Service .......................................................................................................................... 20
3.3 Governance service ......................................................................................................................... 21
3.4 Message exchange service .............................................................................................................. 22
3.5 Event Service ................................................................................................................................... 23
3.6 Ontology service .............................................................................................................................. 23
4. SERVICES TIER: development, verification and validation ................................................... 25
4.1 Overview .......................................................................................................................................... 25
4.2 Leakage Detection – CMR ................................................................................................................ 25
4.3 Night Flow Readings ........................................................................................................................ 25
4.4 CMR / EPANET Models .................................................................................................................... 26
4.5 CSO Prediction ................................................................................................................................. 26
4.6 Optimisation Module ....................................................................................................................... 26
4.7 Telemetry data services ................................................................................................................... 26
4.8 GUI: Residential visualisation service .............................................................................................. 26
4.9 GUI: Utility Visualisation service ...................................................................................................... 27
4.10 Disaggregation Service .................................................................................................................... 29
4.11 Leakage detection & localisation experimentation ......................................................................... 30
4.12 Trust network experimentation ...................................................................................................... 30
5. Summary / Conclusion ....................................................................................................... 32
6. References ........................................................................................................................ 34
1.1 List of Figures
Figure 1: WISDOM deliverables mapped to the development life cycle .......... 7
Figure 2: WISDOM components / services relevant to the conceptual Edge, Platform and Service tiers .......... 8
Figure 3: Research & Development design science approach .......... 12
Figure 4: Design science research methodology .......... 13
Figure 5: WISDOM action research as illustrated in D1.3 .......... 13
Figure 6: WISDOM approach to requirement elicitation from D1.3 .......... 14
Figure 7: The V-Model verification and validation .......... 15
Figure 8: Clipping of the data source template provided by CCC to partners .......... 17
Figure 9: WISDOM Core Platform Components .......... 19
Figure 10: Algorithm for Computing Trust .......... 31
Figure 11: The conceptual tiers of the WISDOM SoS .......... 32
1.2 List of Tables
Table 1: Validation summary – Platform & Edge components / services .......... 9
Table 2: Validation summary – Domain / analytical services in the Cloud & at the Edge .......... 10
Table 3: Disaggregation codebase list & description .......... 29
1.3 List of Code snippets
Code snippet 1: ScalaTest Rules Engine .......... 21
Code snippet 2: ScalaTest Governance Service .......... 21
Code snippet 3: Unit test Message exchange .......... 23
Code snippet 4: Unit test Message pub/sub .......... 23
Code snippet 5: Unit test Night Flows .......... 26
Code snippet 6: testLatestData .......... 28
Code snippet 7: testOss .......... 28
1.4 Abbreviations
API – Application Programming Interface
CSO – Combined Sewer Overflow
DAN – Data acquisition and actuation
DoW – Description of Work
FDI – False Data Injection
GAD – General Anomaly Detection
GUI – Graphical User Interface
HW – Hardware
ICT – Information and Communications Technology
IIC – Industrial Internet Consortium
IoT – Internet of Things
JUnit – Java Unit
JVM – Java Virtual Machine
MQTT – Message Queuing Telemetry Transport
RTD – Research, Technology and Development
SDLC – Software Development Life Cycle
SDS – System Design Science
SoS – System-of-systems
SW – Software
WP – Work Package
1. EXECUTIVE SUMMARY
This document is the written output of task T5.2 'Base Technology Validation'; the deliverable is of type 'Report' with 'Public' dissemination. While the process of technology validation is captured in this report, the task of validation itself was a horizontal and iterative process conducted throughout the project. Figure 1 outlines the WISDOM deliverables mapped to the technology development life cycle. The technology outputs of WP2 and WP3 are all essentially software components that required 'coding'. These outputs constitute the components and services that form the WISDOM system-of-systems (Figure 2), and each 'blue' block of Figure 2 has progressed through the development, verification and validation process depicted in Figure 1, whereby the requirement elicitation and development steps stem from, and feed back into, the goal of a validated technology solution.
Figure 1: WISDOM deliverables mapped to the development Life cycle
The focus is 'base technology validation', i.e. the objective is to understand whether the technology delivered meets the requirements specified in WP1 and the system design specification defined in deliverable D2.1 [1]. It is not an assessment of the usefulness or impact of the services implemented, which is the remit of deliverable D5.3 'Pilot Demonstrator Validation'. Rather, it focuses on verifying and validating that the ICT connects the data prosumers (producers & consumers) of the physical world with the services of the cyber world.
The rest of the report is structured as follows:
- The posited 'value of the work', 'progress against objectives' and 'validation methodology' complete this introductory section.
- Sections 2 and 3 describe the validation of the 'enabling' services of WP2, primarily the 'edge' and 'platform' tiers of Figure 2, which underpin the analytical services of WP3.
- Section 4 outlines the validation of the domain-focused / analytical services deployed as part of WP3.
- Section 5 summarises and concludes the report.
Figure 2: WISDOM components / services relevant to the conceptual Edge, Platform and Service tiers
Figure 2 outlines the overall system of interest and the components/services deployed. This is an abstract description designed to aid understanding. As described in D4.3, the figure has a loosely coupled three-tier structure based on the 'Edge, Platform and Service tiers' of the Industrial Internet Consortium (IIC) [12] reference architecture (more detail in D5.5 [5]).
1.1 The proposed value of the work
The value of the overall WISDOM system-of-systems (SoS) approach was summarised in deliverable D4.3 as "the ability to address technical, syntactic and semantic interoperability linking heterogeneous domain infrastructure and analytical / ICT services". In short, the overall value is the integration of hitherto unconnected systems and services. This is achieved through the implementation of the WISDOM ontology of deliverable D2.2 [2], the edge compute (DAN & Gateway) of D2.3 [3], the cloud storage and compute of D2.4 [4], the Rule Engine of D3.1 [6] and the visualisation capability outlined in D2.5 [5].

The specific value of D5.2 rests in capturing the unit, integration and system testing completed as part of the iterative development cycle used to produce those components, and in validating the system implementation against the design of D2.1 and the requirements of WP1. It should be noted that this iterative process was conducted as part of the development tasks of WP2 and WP3, with system testing in the main conducted after the deployments of WP4. Additionally, given the division of work per the description of work (DoW), the developers of individual components and services were typically also their unit testers, with INTEL assuming oversight of integration and system testing.

Table 1 summarises the primary developer organisation, tester organisation and the approach to validation for each of the 'platform' and 'edge' components/services, while Table 2 does the same for the domain and analytical components/services. Sections 2, 3 and 4 give more detail on each component/service.
Table 1: Validation summary Platform & Edge components / services
Component / service (tier) | Developer | Tester | Nature of testing completed (methods, techniques, tools used etc.)
Event Store Data Query (cloud) | INTEL | INTEL | Unit testing: JUnit, ScalaTest. Integration testing: ScalaTest.
Rule Engine (cloud) | INTEL | INTEL | Unit testing: JUnit, ScalaTest. Integration testing: ScalaTest.
Message Exchange (cloud) | INTEL | INTEL | Unit testing: JUnit, ScalaTest. Integration testing: ScalaTest.
Ontology (cloud) | CU | CU | Ontology validated via workshops (described in D2.2). Software architecture fully tested by comparing the results of test queries against manual ontology queries.
Governance (cloud) | DAPP | DAPP | REST API tested with the cURL command line tool, with a variety of valid and invalid input.
DAN WaterCore & Severlec (edge) | CU | CU, DCWW | Black-box and white-box testing conducted using test data and real data. Results of integration tests compared against the original DCWW data sources.
DAN Concordia (edge) | ADV | ADV | Testing conducted using test data. Connectivity and data transmission tests performed.
Gateway (edge) | CSTB | INTEL, CU | Unit testing: Python unit test suites. SW integration testing: the same suites with mockups for the involved external components. User-based field testing: long runs (several weeks at least) in a test lab with a typical configuration including the associated HW (sensors).
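Table 1 repeatedly lists unit testing against mockups of external components (e.g. the gateway's Python unit test suites). The following is a minimal, hypothetical sketch of that pattern; the forward_observation function and its sensor/publisher collaborators are illustrative assumptions for this report, not the actual WISDOM gateway API.

```python
import unittest
from unittest import mock

# Hypothetical gateway routine: read one observation from a sensor driver
# and publish it upstream. Illustrative only -- NOT the WISDOM gateway API.
def forward_observation(sensor, publisher):
    reading = sensor.read()
    if reading is None:          # sensor offline or no fresh sample
        return False
    publisher.publish("observations", reading)
    return True

class ForwardObservationTest(unittest.TestCase):
    def test_valid_reading_is_published(self):
        sensor = mock.Mock()
        sensor.read.return_value = {"value": 3.2, "unit": "l/min"}
        publisher = mock.Mock()
        self.assertTrue(forward_observation(sensor, publisher))
        publisher.publish.assert_called_once_with(
            "observations", {"value": 3.2, "unit": "l/min"})

    def test_missing_reading_is_not_published(self):
        sensor = mock.Mock()
        sensor.read.return_value = None
        publisher = mock.Mock()
        self.assertFalse(forward_observation(sensor, publisher))
        publisher.publish.assert_not_called()
```

Run with `python -m unittest`. The mock objects stand in for the sensor hardware and the upstream messaging client, so only the gateway-side logic is exercised, which is precisely the point of unit testing with mockups.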
Table 2: Validation summary Domain / analytical Services in the Cloud & at the Edge
Component / service (tier) | Developer | Tester | Nature of testing completed (methods, techniques, tools used etc.)
CSO Prediction (cloud) | CU | CU | Unit testing using test and live data. Results of tests compared against manual execution of the CSO prediction software.
Pump Optimisation (cloud) | CU | CU | Unit testing using test and live data. Results of tests compared against manual execution of the pump optimisation software.
GUI Residential (cloud) | CU | CU, DCWW, CCC | Unit testing; user-based testing.
GUI Utility (cloud) | INTEL | CU, DCWW | Unit testing; user-based testing.
Leakage detection & localisation (offline edge) | ICL | ICL | Not a WISDOM-integrated service, so no code testing per se; rather validation of the basic hardware and approach. The experiment focused on algorithm testing using offline stored data, with results compared against artificial leakage in a water test rig.
Disaggregation (offline cloud) | INTEL | INTEL | Not a WISDOM-integrated service, so no code testing per se. An 'R'-based simulation exercise; validation to date is a logical validation of the algorithmic model versus ground truth data.
Trust Network (offline cloud) | ICL | ICL | Unit testing using offline time-series data sets collected from past deployments. No validation from an ICT coding perspective per se, as this is a research experiment rather than an integrated service; validation to date is a logical validation of the model versus ground truth reality.
CMR: EPANet, Night flow readings (cloud) | CU, IDR, INTEL | INTEL, CU, IDR | Unit testing: JUnit, ScalaTest. EPANET model created to simulate optimised sensor location.
Consumption profiling (edge) | CSTB | CSTB | Unit testing; user-based testing.
Leakage indicator (edge) | CSTB | CSTB | Unit testing; user-based testing.
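Several of the analytical services in Table 2 were validated by running fixed test data through the code and comparing the output with a reference obtained by manual execution. The sketch below illustrates that regression pattern under stated assumptions: the night_flow_minimum routine, the test vector and the expected value are invented for illustration and are not the project's actual night flow service.

```python
# Regression-style unit test: run an analytic routine over a fixed test
# vector and compare its output with a reference value computed manually.
# Function, data and expected value are hypothetical.

def night_flow_minimum(readings, night_hours=range(2, 5)):
    """Minimum flow observed during the night window, a common proxy
    for background leakage in a district metered area."""
    night = [flow for hour, flow in readings if hour in night_hours]
    if not night:
        raise ValueError("no readings fall inside the night window")
    return min(night)

# Fixed test vector of (hour, flow in l/s) pairs
TEST_READINGS = [(0, 9.1), (2, 4.2), (3, 3.8), (4, 4.0), (12, 15.6)]

# Reference value obtained by manual execution (the "manual" comparison
# described in Table 2)
EXPECTED_MINIMUM = 3.8

assert night_flow_minimum(TEST_READINGS) == EXPECTED_MINIMUM
```

The design choice worth noting is that the reference value is fixed independently of the code under test, so the test fails if a refactor silently changes the algorithm's behaviour.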
1.2 Analysis of Progress against Objectives
Deliverable and task text from the DoW appears in quotes below; what follows directly addresses how the objectives of the DoW have been met.

D5.2) Base Technology Validation: 'This deliverable will document all validation work of the WISDOM base technology.'

T5.2: Base technology validation and service level satisfaction: 'This task expresses the requirements that shall be tackled by other WPs in terms of the various components development in order for WP5 to assess the impact of WISDOM deployments and to provide a vision for replication.' Impact is addressed by D5.3 and replication strategy by D5.4; the goal of D5.2 is to inform those deliverables by providing a validated underpinning ICT infrastructure and by ensuring services and assets are interconnected from a technical, syntactic and semantic perspective. 'Requirements will be in terms of sensed data and software functionality that deliver the smart resource, demand management, and adaptive pricing schemes. This task will address components across WPs:'

(Obj 1) 'WP1 for the devised domestic, corporate and city water systems and components integration strategy.' The integration strategy in WP1 is essentially the event-based/DAN/edge processing architecture that was carried through into WP2. The architecture was implemented and tested, and that implementation is the primary validation of the approach taken.

(Obj 2) 'WP2 for semantic model completeness and data storage performance and scalability.' This work was primarily tackled by D2.2 (ontology), D2.3 (DAN & Gateway) and D2.4 (cloud platform). More detail on the approach to validation is given in sections 2 and 3.
(Obj 3) 'WP3 for water decision support efficiency and level of delivery of resource and demand management strategies and pricing schemes.' As per D3.4, demand pricing was not implemented; the justification is briefly discussed in section 4.1. All other aspects of decision support and demand management followed a user-based testing approach.
(Obj 4) 'WP4 for all data related to pilots' characteristics and past performances (water consumption from monitoring or invoice, building occupancy, weather data, and user behaviour). The highest priority is to ensure that enough data is available to rebuild past water data. For this purpose, an internal document, listing required data, will be delivered by the task to all pilot managers.'
The template used to ensure relevant data capture for the defined use-cases is outlined in section 1.3, while the approach to validating the service tier of Figure 2 is discussed in section 4.

(Obj 5) 'In return relevant deliverables issued from WP1, WP2, WP3, and WP4 will be thoroughly analysed, critiqued and feedback conveyed to respective authoring team.' Feedback was part of the continuous development approach undertaken with respect to the WP2 components and the rule engine of D3.1. The unit testing of WP3 components and services was undertaken primarily by their developers, with appropriate partners involved where additional testing was required.
1.3 Development, Verification & Validation methodology
The approach to development, verification and validation within the project did not strictly follow a 'waterfall' or an 'agile' approach; rather, given the nature of the project and the division of work, different methodologies were utilised at different moments, with partners adopting what made sense for their organisations at specific junctures. The sub-sections that follow briefly outline the primary approaches taken to research and development and to verification and validation.

Research & Development Methodology
As described in the introduction, the outputs of the WP2 and WP3 tasks constitute the components and services of the WISDOM SoS. Those RTD (Research, Technology & Development) tasks, as the name suggests, involved elements of research as well as technology development, the latter at different levels of technology readiness depending on the maturity of the research. Nevertheless, each output of WP2 and WP3 is the result of traversing the cycle illustrated in Figure 3.
Figure 3: Research & Development design science approach
This cycle is a simplified version of Figure 4, which illustrates the 'System Design Science' (SDS) methodology. The architectural approach adopted in the research and development of WISDOM utilises SDS and loosely follows an agile approach to delivery. System Design Science seeks to establish the usefulness or utility of an artefact: the objective is to design an artefact that has 'usefulness' in solving a stated problem, and also to provide supporting evidence to inform the design. An artefact is developed and evaluated for efficacy, then refined in an iterative or cyclical process. In this regard, Design Science is complementary to agile software development processes. The WISDOM architecture adopts the Design Science framework [13], [14]. The solution is built to address a clearly identified problem: how to integrate, manage and use sensor data from multiple systems whose sources range from computing devices to low-level sensors. The solution is evaluated as a whole and in its component parts, to ensure both a working (useful) system and optimal component elements. Each component element is worked towards an optimal design through a cyclical design-evaluate process.
This is essentially the process illustrated in deliverable D1.3 with respect to an action research methodology, i.e. research through performing actions, as per Figure 5, with requirement elicitation illustrated in Figure 6.
Figure 4: Design science research methodology
Figure 5: WISDOM action research as illustrated in D1.3
Figure 6: WISDOM approach to requirement elicitation from D1.3
Verification & Validation methodology

The approach utilised in the main for validation was the 'Verification and Validation' model of the Software Development Life Cycle (SDLC), also known as the V-model, so called because the execution of processes happens in a sequential manner typically illustrated in a V-shape (see Figure 7, which is a redrafting of Figure 1). As per Figure 7, the V-model has an associated testing phase for each development stage.
Figure 7: the V-Model verification and validation
D5.2 is tasked with 'base technology validation'; however, much of the testing done was about verification. Verification answers the question 'did we build the system right?', while validation answers the question 'did we build the right system?', i.e. did we build something as per customer / end-user requirements that is useful in practice? In the context of 'base technology validation', the user acceptance testing at which validation occurs was essentially done by the developers of services / components within the service tier of Figure 2: the task of utilising the APIs etc. validated whether the platform and edge services met requirements in terms of serving the requisite data for a given service or triggering the correct business rule.
Part of the process and remit of D5.2 was to assist partner developers in identifying the requisite data for the given use-cases; to that end, the template of Figure 8 was developed and shared amongst the partners, addressing objective 4 of section 1.2.
Figure 8: Clipping of the data source template provided by CCC to partners
Physical Equipment Lead partner Scenario Location
Flow and pressure data-loggers DCWW Clean Water Optimisation Tywyn Aberdovey
AAB Pressure Sensors SAT Leakage localisation and Pumping Optimisation SAT Pipeline, La Spezia
AAB Flow Meters SAT Leakage localisation and Pumping Optimisation SAT Pipeline, La Spezia
AAB Inverters SAT Leakage localisation and Pumping Optimisation SAT Pipeline, La Spezia
AQUA troll 200 (Temp, conductivity, Groundwater level probe) ASP Water Pollution (Well) ASP, ACAM locations, La Spezia
ISCO 5800 Sampling system ASP Water Pollution (Well) ASP, ACAM locations, La Spezia
HACH 1720E (Turbidity sensor) ASP Water Pollution (Spring) ASP, ACAM locations, La Spezia
250+ Arad allegro meters DCWW Demand management (House Type 1) Cardiff
10+ Arad ER meters DCWW Demand management (House Type 2) Cardiff
10+ ADV electrical EM24 DCWW Demand management (House Type 2) Cardiff
Temperature sensors (16) CSTB Demand management and Advanced devices (Heat Recovery) Aquasim
Multiple system control sensors DCWW Waste Water Prediction Gowerton
Other data sources/info Lead partner Scenario Location
Online/in house Portal DCWW Demand management (House Type 1, 2) Cardiff
District Management Area Data DCWW Demand management (House Type 1, 2) Wales
Water Audits (before and after) DCWW Demand management (House Type 1, 2) Cardiff
Real Time Flow data from house types DCWW Demand management (House Type 1, 2) Cardiff
Ask Cardiff Survey CCC/CU Demand management (House Type 1, 2) Cardiff
Questionnaire following trial DCWW Demand management (House Type 1, 2) Cardiff
Weather Data (Met Office link) DCWW Demand management and Waste Water Prediction? Gowerton and Cardiff
Hydraulic Model DCWW Clean Water Optimization/Demand Management Tywyn Aberdovey
Historic energy tariff cost/usage data DCWW Clean Water Optimization/Demand Management Wales
Optimisation Algorithm (simulated energy data) DCWW/CU? Clean Water Optimization/Demand Management/WW Prediction Wales
Historic six monthly domestic meter readings DCWW Demand management (House Type 1, 2) Cardiff
Historical and Current DMA Flow data DCWW Demand management (House Type 1, 2) Cardiff
CSO Event Prediction (CONFIDENTIAL) DCWW Clean Water Optimization/Demand Management/WW Prediction Wales
Network Energy Data DCWW Demand Management Wales
Unit cost of supplying drinking water to Tywyn Aberdovey DCWW Clean Water Optimization/Demand Management Tywyn Aberdovey
Energy Data of Pumping SAT Clean Water Optimization/Demand Management La Spezia
Turbidity Data ASP Water Pollution La Spezia
Conductivity Data ASP Water Pollution La Spezia
Piezometric Data ASP Water Pollution La Spezia
2. EDGE TIER: DEVELOPMENT, VERIFICATION AND VALIDATION
2.1 DAN server
Development
In the main, development of the DAN servers followed the specification and architecture defined in WP2; however, the specific implementations were completed slightly differently, depending on the in-the-field characteristics of each instantiation.
Testing / Validation
Testing of each DAN server was conducted as follows:
- Copying the WisdomTester command-line tool developed by Intel to each DAN server, along with the appropriate SSL certificates and Governance user/password information in configuration files. This proves that observation messages can be sent to the WISDOM cloud and that these messages end up in the EventServer.
- When each DAN server was initially programmed and deployed, a sample set of data observations was extracted from the initial data set that was loaded into the DAN (for DCWW Serverlec, for example). This sample set was then run through the WisdomTester.EventServer.checkObservations tester tool to verify that the observation messages went through and were processed successfully in the WISDOM cloud. The Event Server is the final and permanent destination of the data, and is therefore the natural place to check for its existence.
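The round-trip check described above can be sketched as follows. This is an illustrative Python sketch, not the actual JVM-based WisdomTester; the send and query plumbing is stubbed, and the function names are ours.

```python
# Illustrative sketch of the checkObservations idea: after sending a sample of
# observations, verify that each one can be found in the Event Server.
# `retrieve` stands in for the real Event Server REST query.

def check_observations(sent, retrieve):
    """Return the IDs of sent observations missing from the Event Server.

    sent     -- iterable of observation dicts with an 'id' field
    retrieve -- callable returning the set of IDs known to the Event Server
    """
    stored = retrieve()
    return [obs["id"] for obs in sent if obs["id"] not in stored]

# Usage with a stubbed Event Server:
sample = [{"id": "obs-1"}, {"id": "obs-2"}, {"id": "obs-3"}]
missing = check_observations(sample, lambda: {"obs-1", "obs-3"})
# missing == ["obs-2"]: obs-2 was sent but never reached the Event Server
```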
2.2 The WISDOM gateway
Development
Because the gateways chosen were Intel Edison boards running CSTBox on Debian Linux, many off-the-shelf components could be used; the basic gateway platform did not have to be developed from scratch. As described in deliverable D2.3, Shinken and ngrok were used to monitor and communicate with the gateway, with CSTBox software used for remotely managing the devices. Nevertheless, the integration and operation of these components needed to be validated with respect to the WISDOM platform.
Testing / Validation
- SSH
- Basic MQTT tests, similar to the DAN server
- User (in this case 'installer') based testing of the software deployment of Debian and CSTBox, with a focus on ensuring CSTBox services are configured and running on deployment, out of the box
- Integration testing, by verifying sample messages in the WisdomTester.EventServer.checkObservations tester tool to ensure that the observation messages went through and were processed successfully in the WISDOM cloud
- Long runs (several weeks) in the CSTB lab test environment (Gerhome)
- Operational deployment with extensive instrumentation, running 24/7 since March 2016 in a type 3 house: the Cardiff University test house, ahead of customer deployments
Both the 'leakage indicator' and 'consumption profiling' services were tested in the same way, using unit testing during development and then user-based testing when deployed on the gateway.
3. PLATFORM TIER: DEVELOPMENT, VERIFICATION AND VALIDATION
3.1 Overview
The interdependent components in this section collectively constitute the WISDOM core platform. As can be seen in Figure 9, these core services communicate to provide an enabling framework that supports analytical and domain focused services.
Figure 9: WISDOM Core Platform Components (diagram showing the Message exchange service: MQTT, Apollo broker, Governance plugin; the Governance service: CAS server, MySQL, REST API; the Ontology service: JENA, RDF triple store, ontology updater, MQTT subscriber, SPARQL query proxy, REST and Ontology APIs; the Rules Engine service: WISDOM Rules component, Drools rules engine, MQTT subscriber, REST API; and the Event logging service: event logger, Cassandra, KairosDB, event query proxy, REST API)
In light of this, unit testing was used in the initial development and testing of each component, and test artefacts were developed and used for the integration of the components. Test-driven development was used in all cases: tests were written first, and software was then developed to conform to the tests. For Scala components the tests were written using ScalaTest; for Java components, JUnit tests were written. Java and Scala classes (both running on a Java Virtual Machine (JVM)) were developed to test each core WISDOM component. These were made available as a library to test each core component, via JUnit or ScalaTest, for unit and integration testing. The same classes were then exposed via the WisdomTester command-line tool for command-line testing in remote components such as the DAN instantiations. JUnit2 is a unit testing framework for Java and Scala (run through ScalaTest3), and ScalaTest4 itself is a specification-based testing library for Scala that allows both unit tests and acceptance criteria (the criteria that allow the tests to pass) to be specified. Examples of the exposed test methods include:
- MessageService.MQTTSend
- MessageService.MQTTSubscribe
- MessageService.Parser.checkJSON
- EventServer.checkObservation
- WisdomSecurityAPIHelper.login
- WisdomSecurityAPIHelper.hasAccessRight
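The pattern of exposing the same test helpers through a command-line entry point can be sketched as follows. The real WisdomTester is JVM-based; this Python sketch only illustrates the dispatch idea, and the stub implementations are ours, not the project's.

```python
# Minimal sketch of a command-line dispatcher in the spirit of the WisdomTester
# tool: map the helper names listed above to callables and run the one named on
# the command line. The bodies below are placeholder stubs.
import sys

def mqtt_send(args):
    return "sent: " + " ".join(args)

def check_json(args):
    return "valid" if args and args[0].startswith("{") else "invalid"

COMMANDS = {
    "MessageService.MQTTSend": mqtt_send,
    "MessageService.Parser.checkJSON": check_json,
}

def dispatch(argv):
    """Look up the named test helper and run it with the remaining arguments."""
    name, args = argv[0], argv[1:]
    if name not in COMMANDS:
        return "unknown command: " + name
    return COMMANDS[name](args)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(dispatch(sys.argv[1:]))
```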
3.2 Rule Engine Service
Integration Testing of MQTT Message Service and Rules Engine
These tests prove the integration of the Message Service and the Rules Engine by sending MQTT messages and checking that a collocated Rules Engine triggers certain DCWW rules as a result. These examples use ScalaTest.

"DCWW DCWW_SRV1_HHLEVEL rule" should "not be triggered on a non-threshold message with maintenance OFF, but DCWW_SRV1_HLEVEL should instead" in {
  val ksession = DroolsHelper.getSession
  val flatOMSafe = new FlatObservationMessage("device", "a51234sc",
    TimeHelper.getNewTime.toString,
    DCWWRuleTestSpec.DCWW_SRV1_LEVEL, 1098, TimeHelper.currentTime,
    "", "", "SRV")
  val pub = new MqttPublisher
  pub.publish(flatOMSafe.toJSONString)
  action.setConfig(ActionHelper.DCWW_SRV1_MAINTENANCE, "false")
  action.setMessage("")
  val actionFM = ksession.insert(action)
  ksession.fireAllRules()
  action.getMessage should be ("DCWWRuleTestSpec.DCWW_SRV1_HLEVEL rule triggered")
}

"DCWW DCWW_SRV1_NORMAL rule" should "not be triggered again on a non-threshold message with maintenance OFF, but DCWW_SRV1_HLEVEL should" in {
  val ksession = DroolsHelper.getSession
  val flatOMSafe = new FlatObservationMessage("device", "a51234sc",
    TimeHelper.getNewTime.toString,
    DCWWRuleTestSpec.DCWW_SRV1_LEVEL, 1040, TimeHelper.currentTime,
    "", "", "SRV")
  val pub = new MqttPublisher
  pub.publish(flatOMSafe.toJSONString)
  action.setConfig(ActionHelper.DCWW_SRV1_MAINTENANCE, "false")
  action.setMessage("")
  val actionFM = ksession.insert(action)
  ksession.fireAllRules()
  action.getMessage should be ("DCWWRuleTestSpec.DCWW_SRV1_NORMAL_LEVEL rule triggered")
}
Code snippet 1: ScalaTest Rules Engine

2 http://junit.org/junit4/
3 http://www.scalatest.org/getting_started_with_junit_4_in_scala
4 http://www.scalatest.org/
3.3 Governance service
Unit testing
"A WISDOM governance system" should "not log in invalid users" in {
  // WISDOM security initialisation: https://cas.wisdom-project.eu
  val ws = new WisdomSecurityAPIHelper
  // Sign in to the CAS server and generate the session ticket (TGT)
  val didLogin = ws.login(invalidUsername, invalidPassword)
  didLogin should be (false)
}

"A WISDOM governance system" should "log in valid users" in {
  // WISDOM security initialisation: https://cas.wisdom-project.eu
  val ws = new WisdomSecurityAPIHelper
  // Sign in to the CAS server and generate the session ticket (TGT)
  val didLogin = ws.login(validUsername, validPassword)
  didLogin should be (true)
  // Invoke the https://auth.wisdom-project.eu service to get the access
  // rights on an asset for the logged-in user
  ws.obtainAccessRights(assetId)
  // Invoke the https://auth.wisdom-project.eu service to check whether the
  // logged-in user has permission on the given asset
  val hasAccessRight = ws.hasAccessRight(assetId, rightName)
  hasAccessRight should be (true)
  // Log out of CAS
  val didLogout = ws.logout()
  didLogout should be (true)
}
Code snippet 2: ScalaTest Governance Service
REST API Testing
The REST application programming interface (API) was tested with the following cURL command-line tests:
1. cURL test to validate the call for user authentication:
$> curl -X POST -H "Content-Type: application/text" -d '[email protected]&password=bbb' "https://danserver.doc.ic.ac.uk/cas/v1/tickets" -v
2. cURL test for the creation of a new service ticket, in order to access WISDOM services:
$> curl -X POST -H "Content-Type: application/text" -d 'service=https://danserver.doc.ic.ac.uk/wisdom-authorization/api/admin/users/password' "https://danserver.doc.ic.ac.uk/cas/v1/tickets/TGT-96-kAMcMfVCGQJwmTB2yHEFCtHzf9EBOKJa2Z1cURzJwYSKslEg9G-danserver.doc.ic.ac.uk"
3. cURL test to check the service ticket's validity:
$> curl -X GET "https://danserver.doc.ic.ac.uk/cas/p3/serviceValidate?service=https://danserver.doc.ic.ac.uk/wisdom-authorization/api/admin/users/password&ticket=ST-86-mz5bsc9Sbpi0Og7QviEF-danserver.doc.ic.ac.uk"
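The three-step CAS ticket flow exercised by these cURL tests can be sketched as follows. No network call is made here; the sketch only builds the URLs (with proper percent-encoding, since the serviceValidate query string contains an '&'), and the helper names are ours.

```python
# Sketch of the CAS REST ticket flow used in the cURL tests above. The host and
# paths mirror those tests; the functions are illustrative URL builders only.
from urllib.parse import urlencode

CAS_BASE = "https://danserver.doc.ic.ac.uk/cas"

def ticket_url():
    # POST credentials here to obtain a ticket-granting ticket (TGT)
    return CAS_BASE + "/v1/tickets"

def service_ticket_url(tgt):
    # POST 'service=<url>' to this endpoint to obtain a service ticket (ST)
    return ticket_url() + "/" + tgt

def validate_url(service, st):
    # GET this URL to check the service ticket's validity; urlencode
    # percent-encodes the service URL so the '&' separator is unambiguous
    return CAS_BASE + "/p3/serviceValidate?" + urlencode(
        {"service": service, "ticket": st})

url = validate_url(
    "https://danserver.doc.ic.ac.uk/wisdom-authorization/api/admin/users/password",
    "ST-86-abc")
# The service parameter comes out percent-encoded (https%3A%2F%2F...), which is
# why the raw cURL form of this URL must be quoted on the shell command line.
```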
3.4 Message exchange service
Unit Testing
The following unit tests verify that the observation messages required by the Message Service are parsed correctly.
"JSON parser" should "allow pressure metrics" in {
val request = """
{
"metrics": [
{
"tags": {
"device": [
"E124678"
],
"variable": "flow.testlocation"
},
"name": "pressure",
"aggregators": [
{
"name": "sum",
"align_sampling": true,
"sampling": {
"value": "1",
"unit": "milliseconds"
}
}
]
}
],
"cache_time": 0,
"start_absolute": 1357023600000,
"end_relative": {
"value": "2",
"unit": "hours"
}
}
"""
MessageParser.isValidRequest(request) should be (true)
}
Code snippet 3: Unit test Message exchange
Integration / end-to-end testing of Message Service
This test proves that messages can be sent and received over MQTT. It exercises the MQTT publish and subscribe functions in isolation, without the Rules Engine etc. connected, and is therefore effectively a unit test.
"MQTT publisher" should "allow receiving of the same message" in {
  val flatOMSafe = new FlatObservationMessage("device", "a51234sc",
    TimeHelper.getNewTime.toString,
    DCWWRuleTestSpec.DCWW_SRV1_LEVEL, 1098, TimeHelper.currentTime,
    "", "", "SRV")
  val pub = new MqttPublisher
  pub.publish(flatOMSafe.toJSONString)
  val rec = new MqttPublisher
  val messages = rec.receive()
  messages.size should be (1)
  val msg = messages(0)
  msg.id should be ("a51234sc")
}
Code snippet 4: Unit test Message pub/sub
3.5 Event Service
Testing of the Event service was based on verifying that data was correctly parsed and persisted within Cassandra and KairosDB.
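Such a persistence check can be sketched as follows: build a KairosDB query for a metric that was just sent through the platform, then assert that the response contains datapoints. The query shape follows KairosDB's /api/v1/datapoints/query API; the metric name and device tag are illustrative, and the response here is a stub.

```python
# Sketch of a KairosDB persistence check (illustrative names, stubbed response).
import json

def kairos_query(metric, device, start_ms):
    """Build a KairosDB /api/v1/datapoints/query request body."""
    return {
        "start_absolute": start_ms,
        "metrics": [{"name": metric, "tags": {"device": [device]}}],
    }

def has_datapoints(response):
    """True if any result group in a KairosDB query response holds values."""
    return any(r["values"]
               for q in response.get("queries", [])
               for r in q.get("results", []))

q = kairos_query("pressure", "E124678", 1357023600000)
body = json.dumps(q)  # this would be POSTed to <kairosdb>/api/v1/datapoints/query

# Stubbed response in the shape KairosDB returns:
resp = {"queries": [{"results": [
    {"name": "pressure", "values": [[1357023600000, 4.2]]}]}]}
# has_datapoints(resp) -> True: the observation was persisted
```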
3.6 Ontology service
Initial testing of the ontology service was conducted by performing a series of queries using the developed API. The results of these queries were then compared to manual queries using the ontology editor Protégé, in order to validate the functionality of the service.
Testing was also conducted on the performance of the ontology service within the cloud platform, for both retrieval and updating of data through the RESTful GET and PUT methods. These utilised the SPARQL SELECT and UPDATE functions respectively. The semantic model tested was an instantiation of the water value chain and domestic model, consisting of 1722 named individuals.

11 identical GET requests were issued to the service to retrieve the current sensor reading at an arbitrary sensor in the network, and this test was repeated 5 times, with the service restarted between each test to reset any caching that had occurred. A similar testing protocol was conducted for PUT requests to update the sensor reading, and more realistic testing was conducted by varying the GET request issued, varying the PUT request issued, and finally alternating between GET and PUT requests.

The results of the GET request testing showed that the typical response time to be expected is circa 550 ms. The PUT testing showed a very similar trend, but with approximately an additional 100 ms response time across the requests. Changing the request between subsequent requests did not result in any significant difference in response time. The performance of the inference functionality has also been tested, by observing the time taken for the inference engine to reason over the ontology. At this time the runtime is sub-second, which is deemed satisfactory.
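The timing protocol above can be sketched as a small harness: issue n identical requests, record each round-trip time, and summarise. The request function is injected, so the harness can be exercised without the live ontology service; the names are ours.

```python
# Illustrative timing harness for the GET/PUT response-time protocol.
import time
from statistics import median

def time_requests(do_get, n=11):
    """Issue do_get() n times and return the per-request latencies in ms."""
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        do_get()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return latencies

def summarise(latencies):
    return {"n": len(latencies),
            "median_ms": median(latencies),
            "max_ms": max(latencies)}

# Exercise the harness with a stub standing in for the ontology service GET;
# with a real endpoint, median_ms would sit near the circa-550 ms figure
# reported above.
stats = summarise(time_requests(lambda: None, n=11))
```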
4. SERVICES TIER: DEVELOPMENT, VERIFICATION AND VALIDATION
4.1 Overview
As per deliverable D4.3, this tier is primarily concerned with analytical and domain-focused services. Some of these services were described as being 'online', connected into the WISDOM platform, while some are 'offline' and have not been integrated with the platform. The sections that follow describe the validation effort, where appropriate, for each service/component.

First, a note on adaptive pricing. A survey of adaptive pricing was conducted in D4.1; it found that the majority of adaptive pricing deployments adopted either "Block Pricing" or "Seasonal Pricing". These schemes were not applicable for the purposes of our studies. The survey also found limited adoption of critical load pricing. In short, to the best of the consortium's knowledge there is no adaptive pricing scheme currently deployed in a real-world scenario. We expanded on this work in D3.4 to define our own models but, due to constraints within our pilots (i.e. UK regulatory restrictions), real-time pricing could not be directly implemented, and as such there is no associated validation of the underlying technology in this case. Demand-side management is addressed through other services such as the residential GUI, and its validation is addressed in section 4.8.
4.2 Leakage Detection – CMR
The leakage detection service from CMR was not tested here beyond the ability to integrate it with the platform; the 'night flow readings' service was developed specifically as a trigger for this third-party service.
4.3 Night Flow Readings
Unit Testing the Night Flow API (Specs2 test)
The following unit tests verify that the observation messages required by the Night Flow API are parsed correctly.

"JSON parser" should "allow pressure metrics" in {
val request = """
{
"metrics": [
{
"tags": {
"device": [
"Q12732"
],
"variable": "Italy"
},
"name": "pressure",
"aggregators": [
{
"name": "sum",
"align_sampling": true,
"sampling": {
"value": "1",
"unit": "milliseconds"
}
}
]
}
],
"cache_time": 0,
"start_absolute": 1357023600000,
"end_relative": {
"value": "2",
"unit": "hours"
}
}
"""
MessageParser.isValidRequest(request) should be (true)
}
Code snippet 5: Unit test Night Flows
4.4 CMR / EPANET Models
An EPANET model that simulates the SAT water distribution network was built and used to generate and simulate a wide set of leakage scenarios, in order to identify the best and most cost-effective configuration/location for flow and pressure sensors. The CMR algorithm applied to the EPANET model allows the leakage localisation along the SAT network to be optimised once a leak has been detected.
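How simulated leakage scenarios can drive sensor placement is sketched below. This is not the CMR algorithm itself, only an illustrative greedy-coverage sketch: given a matrix of pressure deviations observed at each candidate sensor under each simulated leak, pick the sensors that make the most leak scenarios detectable above a threshold. All values and names are ours.

```python
# Illustrative greedy sensor placement over simulated leakage scenarios.
def greedy_placement(deviation, n_sensors, threshold=0.5):
    """deviation[s][l] = |pressure change| at candidate sensor s under leak l."""
    n_cand, n_leaks = len(deviation), len(deviation[0])
    chosen, covered = [], set()
    for _ in range(n_sensors):
        best, best_gain = None, -1
        for s in range(n_cand):
            if s in chosen:
                continue
            # How many not-yet-covered leaks would this sensor detect?
            gain = len({l for l in range(n_leaks)
                        if deviation[s][l] >= threshold} - covered)
            if gain > best_gain:
                best, best_gain = s, gain
        chosen.append(best)
        covered |= {l for l in range(n_leaks) if deviation[best][l] >= threshold}
    return chosen, covered

# Three candidate sensor sites, four simulated leak scenarios:
dev = [[0.9, 0.0, 0.0, 0.6],
       [0.0, 0.8, 0.0, 0.0],
       [0.7, 0.0, 0.9, 0.0]]
sensors, detectable = greedy_placement(dev, n_sensors=2)
# sensors == [0, 1]; detectable == {0, 1, 3}
```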
4.5 CSO Prediction
The CSO prediction module was tested in two phases. First, the MATLAB model that underpins the CSO prediction module was tested by comparing selected results with manually calculated results. Once this testing was completed, the integration of the MATLAB module with the WISDOM system was tested by monitoring the incoming data from the water network and comparing the output of the integrated algorithm with manual results obtained using the MATLAB model.
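The comparison step in this two-phase testing can be sketched as follows: check the integrated module's output series against the manually obtained model results, point by point, within a tolerance. The values and function name are illustrative.

```python
# Illustrative output comparison for model-vs-integration testing.
def outputs_match(integrated, manual, tol=1e-6):
    """True if both series have equal length and agree within tol everywhere."""
    return (len(integrated) == len(manual)
            and all(abs(a - b) <= tol for a, b in zip(integrated, manual)))

manual_results     = [0.12, 0.47, 0.95]  # hand-checked model output (illustrative)
integrated_results = [0.12, 0.47, 0.95]  # same scenario via the integrated module
# outputs_match(...) -> True when the integration preserves the model's behaviour
```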
4.6 Optimisation Module
The optimisation module was tested in the same way as the CSO prediction module: first, the results from the optimisation models were tested against manual results to ensure their accuracy; subsequently, these results were compared to the results achieved from the integrated module.
4.7 Telemetry data services
This is covered by the standard testing of the MQTT-based messaging approach (see section 3.4, Message exchange service).
4.8 GUI: Residential visualisation service
The residential GUI has been tested in a variety of ways. The design of the interface was validated through consultation both with partners within the WISDOM consortium and with our special interest group. On a technical level, the initial version had significant testing by a set of internal testers within the developing partner (Cardiff University). These tests focused on the functionality of the system.
Subsequently, these tests were expanded to include people external to CU. They focused firstly on the functionality of the interface from the viewpoint of a water company that might release it to its customers, with testing performed by water company users. The testing also examined how the interface functions from the end-user perspective, through testing with non-water-company users who are not experts in this field.
4.9 GUI: Utility Visualisation service
The API/UI uses 'Mocha' for unit testing and mocking.
Unit tests
The JavaScript test framework 'mocha' is used to run unit tests on the GUI code. Mocha is described as "a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases. Hosted on GitHub". The unit tests are all located in the '/test/' directory and all start with the word 'test' (see list below), followed by the name of the module/file to be tested. With mocha, an individual test can be run with the command 'mocha test/testV1.js', or all tests can be run in one go simply with the command 'mocha'. This gives a report on how many of the tests pass and how many fail. The four main tests utilised are:
- testEventAlert - testLatestData - testOss - testUtilities
The code snippets below are from testLatestData and testOss.
Code snippet 6: testLatestData
Code snippet 7: testOss
4.10 Disaggregation Service
The disaggregation service, as per deliverable D4.3, is offline and not integrated into the WISDOM platform, although the hooks to receive data are in place. The disaggregation work remains an R-based exercise. The code base for the service is described in Table 3. R can of course accommodate test cases using RUnit, the R flavour of the classic xUnit family of test tools, or the 'testthat' testing framework. However, this was not adopted in WISDOM, mainly because the work needed to progress in a more explorative way: it was research-based, involving plotting graphs and tweaking code, without the overhead of a test-case/code/test/refine/deploy lifecycle.
Table 3 disaggregation codebase list & description
Code/class Description
1 util.R: some utilities used by many R scripts
2 loadType3.R: manages the low-level format of data produced by the private-home installation in France
3 washing.R: tests the hypothesis that days with washing-machine use have a higher consumption
4 supervised.R: uses the private-home (France) data to classify the occupant's own water events and to estimate learning curves and a confusion matrix
5 disaggregation.R: component to perform unsupervised disaggregation of daily consumption
6 energyWater2.R: older report testing the hypothesis that during clothes-washing or dish-washing programmes the average energy consumption is larger than usual. Hypothesis rejected.
7 energyWater3.R: shows the energy (normalised by duration of occurrence) used in water events. It does not show evidence of any variation between classes.
8 energyWaterLargeVols.R: tests the hypothesis that events with large volumes, such as washing-machine and clothes-washing use, have different energy profiles. The hypothesis was not supported.
9 k-means.R: k-means clustering on water events; it does not provide valuable results for unsupervised classification
10 largeVolumes.R: tests the hypothesis that usages like showers/washing consume far more water than tap activations like drinking, basin use, toilet, etc. The hypothesis is valid, and the script shows how to separate the two components of the mixed distribution.
11 showerVsClothes.R: unsupervised clustering showing that showers and clothes washing have completely different time patterns
12 simulate.R: simulates a water time series for one household as a Poisson rectangular pulse model, as in "Stochastic Water Demand Modelling: Hydraulics in Water Distribution Networks" by Mirjam Blokker, pp. 60-61
13 synth.R: creates synthetic data for training a classifier; used to generate a sequence for Aquasim, never really tested for training
14 toiletFixedVolCase.R: assumes the volume of a toilet flush is known beforehand (single flush, constant). This shows a bimodal distribution in which the toilet is one peak.
15 morningPatterns.R: hypothesis that most usage events in the morning hours are caused by personal cleaning -- accepted
16 repatterns.R: simple PoC of an events-mining algorithm based on regular expressions
17 ASLoader.R: module to load data from the Aquasim simulation. Not expected to be maintained; it is more a workaround, as Aquasim should comply with better data specifications.
18 ASTest1.R: PoC script that fits a classifier with Eric's data and predicts on Aquasim data.
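The kind of preprocessing these scripts rely on can be sketched as follows: segmenting a flow trace into discrete water-use events (start, duration, volume), the features used by the clustering and hypothesis tests such as largeVolumes.R. The disaggregation work itself is in R; this Python sketch, its threshold, and its data are illustrative only.

```python
# Illustrative event segmentation of a per-second flow trace (litres/minute)
# into discrete water-use events with start time, duration (s) and volume (L).
def segment_events(flow, on_threshold=0.1):
    events, start = [], None
    for t, q in enumerate(flow):
        if q > on_threshold and start is None:
            start = t                           # an event begins
        elif q <= on_threshold and start is not None:
            vol = sum(flow[start:t]) / 60.0     # L/min sampled at 1 Hz -> litres
            events.append({"start": start, "duration": t - start,
                           "volume": round(vol, 3)})
            start = None                        # the event ends
    if start is not None:                       # trace ends mid-event
        vol = sum(flow[start:]) / 60.0
        events.append({"start": start, "duration": len(flow) - start,
                       "volume": round(vol, 3)})
    return events

# A short tap burst followed by a longer shower-like event:
trace = [0, 6, 6, 0, 0, 9, 9, 9, 9, 9, 9, 0]
evts = segment_events(trace)
# evts -> [{'start': 1, 'duration': 2, 'volume': 0.2},
#          {'start': 5, 'duration': 6, 'volume': 0.9}]
```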
4.11 Leakage detection & localisation experimentation
This is an experimental research exercise and not yet a WISDOM-integrated service, so there is no code testing per se; rather, validation of the basic hardware used and, more importantly, of the approach and algorithms. The experiments focused on algorithm testing using offline stored data, with test results compared against artificial leakage in a water test rig, a description of which follows.
As explained in D3.2, the validation of the leakage detection and localisation algorithms was done using a water test rig at DCWW, instrumented with sensor nodes based on Intel Edison development boards and NEC Tokin ultra-high-sensitivity vibration sensors. On top of the test infrastructure, 7 nodes equipped with vibration sensors were deployed. Using this infrastructure, a variety of burst emulations were conducted at different but representative pressure levels, as prescribed by the DCWW expert on leakage. The leaks were recorded by the vibration sensors at 1000 samples per second, in parallel from each of the 7 sensor nodes.

The in-node anomaly detection algorithm resides in the sensor nodes, uses 16000 data points, and is designed to detect the leakage event. To test this, a water burst was simulated on a highly pressurised water network, again as recommended by the domain expert. The performance of the in-node decision-making algorithm was measured in terms of average compression rates to reduce data communication volumes, anomaly detection accuracy compared to ground truth, and communication savings in terms of the number of packets needed to transmit the compressed data. In some tests only timestamps of the event were sent across the network, both to indicate that it had occurred and the time at which it happened. This reduced the communications load, saving up to 90% in the number of packets communicated compared with traditional periodic reporting mechanisms. This also saves energy, as the cost of communications dominates battery drainage. Using only these timestamps and enhanced triangulation, the localisation can find the position of the anomaly for our particular scenario to within 0.5 ms, which is very acceptable to the water distribution community.
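The timestamp-based localisation can be sketched as a time-difference-of-arrival calculation: for a leak on a pipe section between two vibration sensors, the offset of the two detection timestamps fixes the leak position. The wave speed and geometry below are illustrative, not the pilot rig's actual values, and this is only our sketch of the general technique, not the project's enhanced triangulation algorithm.

```python
# Illustrative time-difference-of-arrival (TDOA) localisation on one pipe
# section. For a leak at distance d from sensor A on a section of length L,
# t_A - t_B = (2d - L) / v, so d = (L + v * (t_A - t_B)) / 2.
def localise(t_a, t_b, length_m, wave_speed):
    """Distance of the leak from sensor A, given arrival times at A and B."""
    d = (length_m + wave_speed * (t_a - t_b)) / 2.0
    return min(max(d, 0.0), length_m)   # clamp to the pipe section

# 100 m section, 1000 m/s wave speed; the signal reaches A 0.02 s earlier,
# so the leak is nearer sensor A:
pos = localise(t_a=0.00, t_b=0.02, length_m=100.0, wave_speed=1000.0)
# pos == 40.0 (metres from sensor A)
```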
4.12 Trust network experimentation
Again, this is an experimental research exercise and not a WISDOM-integrated service, so there is no code testing per se; rather, validation of the approach, a description of which follows. A novel unified framework for computing uncertainty based on accuracy and trust was presented in D3.2. The functionality of the approach was evaluated by applying it to data sets collected from past deployments, demonstrating its benefits for in-network processing as well as fault detection. Figure 10 shows the algorithm for computing trust. The offline data trustworthiness evaluation system establishes whether sensor data is useful. However, this earlier trustworthiness work, though very useful, tends to mark all unusual data as being of low trust, even though some of this unusual data may in fact be an accurate, if odd, reading.
Therefore, a fully distributed general anomaly detection (GAD) scheme was developed to separate out genuine anomalies that represent the phenomenon (though different) from readings that are unusual because of sensor decalibration or failure. The performance of GAD's trustworthiness algorithm was evaluated by simulating false data injection (FDI) attacks in the U.S. smart grid. More details are available in D3.2.

The trust experimentation was not validated on data coming from the real pipe system for two reasons. Firstly, there was no access to large volumes of data that would exercise the trust algorithms (i.e. the small-rig data would be too simple). Secondly, ground-truth data was required in which both the unusual valid events and the unusual untrustworthy sensor events were recorded, so that a scientific comparison could be made. Therefore, the data sets used in the validation studies came from sensor applications with behaviours similar to water distribution sensor networks; given the high levels of accuracy achieved, there is high confidence that the algorithms will operate as expected in the wild.
Figure 10: Algorithm for Computing Trust
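The trust computation depicted in Figure 10 can be sketched as follows: a node observes a neighbour's measurements, counts how many fall within an acceptance threshold of its own, and updates a Beta(alpha, beta) posterior whose expectation is the trust value, with uniform priors (alpha = beta = 1) modelling a neighbour with no observation history. The function and variable names are ours; the uncertainty relation follows D3.2.

```python
# Minimal sketch of the Beta-posterior trust update from Figure 10.
def compute_trust(own, neighbour, acceptance_threshold):
    alpha, beta = 1.0, 1.0              # uniform prior (no history)
    for mi, mj in zip(own, neighbour):
        if abs(mi - mj) > acceptance_threshold:
            beta += 1.0                 # unacceptable observation
        else:
            alpha += 1.0                # acceptable observation
    return alpha / (alpha + beta)       # E[T_ij] of the Beta posterior

def uncertainty(trust, accuracy):
    # Uncertainty(x_j) = 1 - T_ij * Accuracy(x_j), as presented in D3.2
    return 1.0 - trust * accuracy

# 8 observations, all within a 0.5 acceptance threshold, give alpha = 9,
# beta = 1, so the trust value is 9/10:
t = compute_trust([1.0] * 8, [1.1] * 8, acceptance_threshold=0.5)
```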
5
distribution. The prior values for α and β are set to 1 initially
to simulate uniform distribution of trust for sensor nodes that
do not have recent observation experiences. The expectation
of the posterior distribution (E[Ti j ]) is used as the trust value.
Algorithm 3 Compute Trust
Require: Acceptancet h r esh ol d ,M easur ement i ,M easur ement j ,Obser vat i on l en gt h
Ensure: Tr ust i j
Obser ve M easur ement j f orObser vat i on l en gt h number of measur ementsαpr i or = 1βpr i or = 1Acceptablem = 0Unacceptablem = 0Compute T r ust i j f or each obser vt i onif |M easur ement i − M easur ement j |> Acceptancet h r esh ol d then
Unacceptablem = Unacceptablem + 1else
Acceptablem = Acceptablem + 1end ifαpost er i or = αpr i or + Acceptablem
βpost er i or = βpr i or + Unacceptablem
Tr ust i j =α p o s t e r i o r
α p o s t e r i o r + βp o s t e r i o r
D. Computing Uncertainty
Given a sensed data, a data consumer might be uncertain
about whether the data is accurate or not and whether the
source of the data is trustworthy or not. Suppose the sys-
tematic error in a particular measurement is negligible and
the measuring device is trusted. In this case, the accuracy
of the measurement is determined by the precision of the
measuring device, i.e., the data consumer can be certain that
the sensed data is as accurate as the sensor’s capability permits.
Consequently, the uncertainty is estimated by the accuracy of
the measuring device [5]. On the other hand, for cases of
degraded precision, significant systematic error and less than
full trust (trust value 1), the certainty is discounted by the
accuracy as well as trustworthiness of the measuring device.
To capture this notion, we model uncertainty as follows:
Uncer tainty(x j ) = 1− Ti j ∗ Accuracy(x j )
The Uncertainty Unit uses this model and the values
obtained from the Accuracy and Trust units to compute
the uncertainty of a sensor node. It should be noted
that while the precision and accuracy values of a sen-
sor are computed by itself, trust and uncertainty are com-
puted by a neighbouring sensor that is dealing with the
sensor (e.g., aggregating data obtained from the sensor).
Also, the precision, accuracy and trust computations in-
volve parameters – M easur ementper i od, Obser vationper i od ,
Acceptancet h r eshol d respectively – whose values should be
chosen in application dependent manner. The measurement pe-
riod (M easur ementper i od) for computing precision should be
long enough to allow making enough measurements for com-
puting the statistical parameters but not too long so as to avoid
capturing actual variations of the measured quantity. The same
applies to the observation period (Obser vationper i od) for
TABLE I: Rules for combining uncertainties.
f ϵf
f (x, y) = kx + my ϵf = k2 (ϵx )2 + m2 (ϵy )2 + 2mk(ϵx y )
f (x, y) = xy
ϵf = f ( ϵxx
)2 + (ϵy
y)2 − 2(
ϵx y
x y)
computing accuracy. However, given a M easur ementper i od,
while the Precision Unit can get as many measurements as
the maximum sampling frequency supported by the sensor
would allow, the Accuracy Unit may not be able to do so
within the observation period (M easurementper i od) since its
measurements come from passive observation of neighbouring
sensor nodes.
E. Propagating Uncertainty
The measurement of a physical quantity to determine its
numerical value and estimating the associated uncertainty with
that value is an important part of experiments in physical
sciences. Consequently, the propagation of uncertainty is a
well studied concept in these fields. Since sensors are (or
can be viewed as) measuring devices, the uncertainty of data
generated by sensors can be propagated by applying the
methods used in measurement uncertainty analysis. In this
work, we adopt these methods for propagating uncertainties
in sensor networks.
In addition to computing the uncertainty of a neighbouring
node, the Uncertainty Unit also deals with combining (propa-
gating) uncertainties of sensed data if it processes data with un-
certainty values received from other sensors (e.g., in-network
processing). Consider a multi-variable function f(x, y, z, ...) with independent variables. The error in f, denoted by ϵ_f, resulting from the errors in x, y, z, ..., denoted by ϵ_x, ϵ_y, ϵ_z, ... respectively, can be approximated as follows [4]:

(ϵ_f)² = (∂f/∂x)²(ϵ_x)² + (∂f/∂y)²(ϵ_y)² + (∂f/∂z)²(ϵ_z)² + ...
Hence, once the uncertainty in each independently measured
quantity is approximated (by the Uncertainty Units of each
sensor node), the uncertainty in a combined quantity (e.g., the
difference of two measurements) can be computed using this
relation and the uncertainties of the component quantities. As
an example, Table I shows rules for combining uncertainties in simple multi-variable functions, where ϵ_xy is the covariance between x and y. These rules could be specified in the
Uncertainty Unit.
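As a hedged numerical sketch of the relation above (the function, measurement values and error values are hypothetical), each partial derivative can be approximated by a central finite difference and the error terms combined in quadrature, assuming independent errors (zero covariance):

```python
import math

def propagate(f, values, errors, h=1e-6):
    """First-order propagation of independent errors through f:
    (eps_f)^2 = sum_i (df/dx_i)^2 (eps_i)^2, with each partial
    derivative estimated by a central finite difference."""
    variance = 0.0
    for i, eps in enumerate(errors):
        hi, lo = list(values), list(values)
        hi[i] += h
        lo[i] -= h
        dfdx = (f(*hi) - f(*lo)) / (2 * h)
        variance += (dfdx * eps) ** 2
    return math.sqrt(variance)

# Example: the difference of two measurements, f(x, y) = x - y.
# With independent errors the result reduces to sqrt(eps_x^2 + eps_y^2).
eps_diff = propagate(lambda x, y: x - y, [21.4, 20.1], [0.2, 0.3])
```

For a linear combination this agrees with the kx + my rule of Table I when the covariance is zero; for correlated inputs the covariance term would have to be added explicitly.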
IV. EVALUATION
We test our scheme using two data sets from the Grand
Saint Bernard [14] and Victoria & Albert Museum (of London)
deployments.
1) Grand Saint Bernard: The Grand Saint Bernard (GSB)
deployment consisted of 17 Sensorscope stations [15] along a
900 metre path at the Grand Saint Bernard Pass (a mountain
pass located between Switzerland and Italy). The deployment
measured a number of environmental quantities every two
5. SUMMARY / CONCLUSION
This document is the written output of task T5.2 'Base Technology Validation', which is of type 'Report' and 'Public' in nature. The primary objective of the task was to validate that the base/enabling technology delivered over the course of the project meets the requirements specified in WP1 and the system design specification defined in deliverable D2.1 [1]. The overall validation methodology adopted is discussed in section 1.3, while the specific approaches taken by the various partners to component/service validation are documented in sections 2, 3 and 4 respectively. The components/services of the overall WISDOM system-of-systems are again depicted in Figure 11, illustrating the conceptual Edge, Platform and Service tiers.
Figure 11: The conceptual tiers of the WISDOM SoS
What follows summarises progress against the objectives; in considering that progress, the authors deem the 'base technology' of Figure 11 to have been validated.
(Obj 1) - WP1 for the devised domestic, corporate and city water systems and components integration strategy. The integration strategy in WP1 is essentially the event-based/DAN/Edge processing architecture that was carried through into WP2. The architecture was implemented and tested, and that implementation is the primary validation of the approach taken.
(Obj 2) - WP2 for semantic model completeness and data storage performance and scalability. This work was primarily tackled by D2.2 (ontology), D2.3 (DAN & Gateway) and D2.4 (cloud platform). More detail on the approach to validation is given in sections 2 and 3, but, at a more abstract level, the connection and integration of the data-producing and data-consuming assets of the Edge tier with the services of the Service tier via the Platform tier is deemed validation in this respect. Consuming services are served the requisite data, while data is acquired, aggregated and acted upon from cooperating assets within the domain.
(Obj 3) - WP3 for water decision support efficiency and level of delivery of resource and demand management strategies and pricing schemes. As per D3.4, demand pricing was not implemented; the justification for this is briefly discussed in section 4.1. All other aspects of decision support and demand management followed a user-based testing approach, with many services enabled via the Rule Engine, which forms part of the overall Platform tier.
(Obj 4) - WP4 for all data related to pilots' characteristics and past performance (water consumption from monitoring or invoices, building occupancy, weather data, and user behaviour). The highest priority is to ensure that enough data is available to rebuild past water data; for this purpose, an internal document listing the required data will be delivered by the task to all pilot managers. The template used to ensure relevant data capture for the defined use-cases is outlined in section 1.3, while the validation approach to the Service tier of Figure 2 / Figure 11 is discussed in section 4. Again, all requisite data was made available via the platform where required in an online fashion, while offline data was additionally made available where applicable.
(Obj 5) - In return, relevant deliverables issued from WP1, WP2, WP3 and WP4 will be thoroughly analysed and critiqued, and feedback conveyed to the respective authoring teams. Feedback was part of the continuous development approach undertaken with respect to the WP2 components and the rule engine of D3.1, while the unit testing of WP3 components and services was undertaken primarily by their developers, with appropriate partners involved where additional testing was required.
6. REFERENCES
[1] D2.1 WISDOM Software and Hardware Detailed System Design
[2] D2.2 WISDOM Water Semantics Model
[3] D2.3 WISDOM Data Acquisition, Fusion and Analytics
[4] D2.4 WISDOM Data Storage and Security
[5] D2.5 WISDOM Interface and Visualization Environment
[6] D3.1 WISDOM Water Rule-based Engine
[7] D3.2 WISDOM Sensor Data Fusion, Reliability and Trust Framework
[8] D3.3 Water Optimisation Module
[9] D4.2 WISDOM Sensing Infrastructure Enhancements Delivery
[10] D4.3 WISDOM Operational and Business Services Configuration and Deployment
[11] D5.5 Report: Project Image on Standardisation and Liaison Activities
[12] Industrial Internet Consortium, "Industrial Internet Reference Architecture." [Online]. Available: http://www.iiconsortium.org/IIRA.htm
[13] Simon, H. A., The Sciences of the Artificial (3rd, rev. ed.). Cambridge, MA: MIT Press, 1996 (orig. ed. 1969; 2nd, rev. ed. 1981)
[14] Hevner, A. R., March, S. T., Park, J. and Ram, S., "Design Science in Information Systems Research," MIS Quarterly, 28(1), 2004, pp. 75-105