

Deliverable D100.1

State of the Art Analysis Update WP 100

Grant Agreement number: NMP2-LA-2013-609143

Project acronym: ProSEco

Project title: Collaborative Environment for Eco-Design of Product-Services and Production Processes Integrating Highly Personalised Innovative Functions

Funding Scheme: IP

Date of latest version of Annex I against which the assessment will be made:

01.10.2013 (Version I)

Project co-ordinating Partner: Fundación Tecnalia Research & Innovation

Project co-ordinator contact details: Mikel Sorli, Dr +34 946 400 450 [email protected]

Project website address: www.proseco-project.eu

Start date of the project: 01.10.2013

Duration: 48 months

Responsible for the Document: UNINOVA

Document Ref.: D100.1 State of the Art Update Analysis

Version: V1.0

Issue Date: 31.03.2014


31.03.2014

© ProSEco Consortium Page I D100.1 State of the Art Analysis Update

DISSEMINATION LEVEL

PU Public X

PP Restricted to other programme participants (including the Commission Services)

RE Restricted to a group specified by the consortium (including the Commission Services)

CO Confidential, only for members of the consortium (including the Commission Services)

CHANGE HISTORY

Version Notes Date

0.1 Creation of the document 06.01.2014

0.2 Updated content: Contents Integration 03.03.2014

0.3 Updated content: Content Update 18.03.2014

0.4 Updated content: Review and comment addition 21.03.2014

0.5 Updated content: Changes according to received feedback 26.03.2014

0.6 Updated content: Fix of missing parts 27.03.2014

0.7 Updated content: Addition of Conclusion and finalization of the document 30.03.2014

1.0 Final Document 31.03.2014

This document is the property of the ProSEco Consortium.

This document may not be copied, reproduced, or modified in whole or in part for any purpose without written permission from the ProSEco coordinator with acceptance of the Project Consortium.

This publication was completed with the support of the European Commission under the 7th Framework Programme. The contents of this publication do not necessarily reflect the Commission's own position.


TABLE OF CONTENTS

1 Executive Summary...................................................................................................... 6

2 Introduction .................................................................................................................. 7

2.1 Document Purpose. .............................................................................................. 7

2.2 Approach Applied. ................................................................................................ 7

2.3 Document structure. ............................................................................................. 8

3 State of the Art topics .................................................................................................. 9

3.1 Collaborative product/process design .................................................................... 9

3.1.1 Definition ............................................................................................................................. 9

3.1.2 Relevant methods and tools ............................................................................................... 10

3.1.3 Relevant projects ............................................................................................................... 13

3.1.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 15

3.2 AmI information processing ..................................................................................18

3.2.1 Definition ........................................................................................................................... 18

3.2.2 Relevant methods and tools ............................................................................................... 18

3.2.3 Relevant projects ............................................................................................................... 20

3.2.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 22

3.3 Context Sensitive approaches ..............................................................................23

3.3.1 Definition ........................................................................................................................... 23

3.3.2 Relevant methods and tools ............................................................................................... 23

3.3.3 Relevant projects ............................................................................................................... 26

3.3.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 26

3.4 Simulation support for the product process design ...............................................28

3.4.1 Definition ........................................................................................................................... 28

3.4.2 Relevant methods and tools ............................................................................................... 28

3.4.3 Relevant projects ............................................................................................................... 32

3.4.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 33

3.5 Life cycle management for product optimization applying eco-design perspectives ........................................................................................................34

3.5.1 Definition ........................................................................................................................... 34

3.5.2 Relevant methods and tools ............................................................................................... 38

3.5.3 Relevant projects ............................................................................................................... 40

3.5.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 40

3.6 Lean design principles in the overall life cycle of the product including organizational aspects .........................................................................................42

3.6.1 Definition ........................................................................................................................... 42

3.6.2 Relevant methods and tools ............................................................................................... 43

3.6.3 Relevant projects ............................................................................................................... 47


3.6.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 49

3.7 Service Oriented Architecture (SOA) in manufacturing industry and Cloud manufacturing ......................................................................................................51

3.7.1 SOA: Definition and Basic Concepts .................................................................................. 51

3.7.2 SOA in manufacturing ........................................................................................................ 51

3.7.3 Cloud Manufacturing: Definitions and Basic Concepts ........................................................ 53

3.7.4 Relevant methods and tools ............................................................................................... 55

3.7.5 Relevant projects: Overview .............................................................................................. 57

3.7.6 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 59

3.8 Service composition .............................................................................................61

3.8.1 Definition ........................................................................................................................... 61

3.8.2 Service Orchestration ........................................................................................................ 61

3.8.3 Service Choreography ....................................................................................................... 61

3.8.4 Relevant methods and tools ............................................................................................... 61

3.8.5 Relevant projects ............................................................................................................... 63

3.8.6 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 64

3.9 Ontologies applicable for Meta Products and context modelling ...........................66

3.9.1 Definition ........................................................................................................................... 66

3.9.2 Relevant methods and tools ............................................................................................... 66

3.9.3 Relevant projects ............................................................................................................... 69

3.9.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 71

3.10 Tools for collaborative product/process design .....................................................72

3.10.1 Definition ........................................................................................................................... 72

3.10.2 Relevant methods and tools ............................................................................................... 72

3.10.3 Relevant projects ............................................................................................................... 77

3.10.4 Main gaps in the state-of-the-art and recommendations for ProSEco.................................. 80

3.11 Analysis of the relevant standards ........................................................................82

3.11.1 Definition ........................................................................................................................... 82

3.11.2 Relevant standards organisations and results .................................................................... 82

3.11.3 Relevant Platform standards .............................................................................................. 84

3.11.4 Relevant Product and entity identification standards ........................................................... 87

3.11.5 Relevant Semantic technologies standards ........................................................................ 88

3.11.6 Main considerations in the State of the Art and recommendations for ProSEco .................. 90

4 Conclusions .................................................................................................................91

5 References ...................................................................................................................96

6 Annex(es) ...................................................................................................................113

6.1 Annex 1 .............................................................................................................113

6.1.1 Platform standards........................................................................................................... 113


6.1.2 Product and entity identification standards ....................................................................... 130

6.1.3 Semantic technologies standards..................................................................................... 131

TABLE OF FIGURES

Figure 1 - ProSEco Collaborative Environment for Design and PES ......................................................... 7

Figure 2 – ManuCloud architecture [68] ................................................................................................. 14

Figure 3 – Triple bottom line .................................................................................................................. 34

Figure 4 – Life cycle of the product ........................................................................................................ 34

Figure 5 - Levels of improvement and benefits derived from the application of different eco-innovation strategies in a company .......................................................................................................... 36

Figure 6 – Schema of ecodesign methodology....................................................................................... 37

Figure 7 - Ecodesign strategies along the life cycle in order to optimize the impact of each phase.......... 38

Figure 8 - Life Cycle Management is connecting various operational concept and tools ......................... 38

Figure 9 – Plan for Every Person. Example from a manufacturing firm. [207] ......................................... 46

Figure 10 – Average Time Spent on One Occurrence of Rework [208] ................................................... 49

Figure 11 – Cloud Computing Service Delivery Model [232] ................................................................... 53

Figure 12 – Layered framework for implementing CMfg [227]................................................................. 54

Figure 13 – Cloud Computing and Cloud Manufacturing in a nutshell ..................................................... 55

Figure 14 - The two components of a meta-product: the physical part and the logical part ...................... 66

Figure 15 – Architecture of the ontology proposed in [258] ..................................................................... 67

Figure 16 - The architecture of the IoT Toolkit ........................................................................................ 69

Figure 17 - Technical architecture of the device proposed within the patent ........................................... 70

Figure 18 – The various stages in the development of the web: Web 1.0 static uni-directional information transfer, Web 2.0 dynamic multi-directional information transfer and Web 3.0 adds inferred knowledge to increase the quality of the information transfer. .................................................. 73

Figure 19 – BIM main aspects and their interrelationship ....................................................................... 74

Figure 20 – Repcon Configurator. Configuration Agent web tool ............................................................ 76

Figure 21 – Empresa 2.0 integrated information sources and content types ........................................... 76

Figure 22 – Trans-IND software architecture layers ............................................................................... 79

Figure 23 – Schema for Product Modelling XG OWL-based ontology development ................................ 80

Figure 24 – Lean and green integration [195] ......................................................................................... 91

Figure 25: Service orientation interaction ............................................................................................. 122

Figure 26: Physical bundle life-cycle .................................................................................................... 122

Figure 27: GTIN Structure ................................................................................................................... 130

Figure 28: GLN Structure ..................................................................................................................... 130

Figure 29: GIAI Structure ..................................................................................................................... 130

Figure 30 - Descriptions of relevant standards ..................................................................................... 132

Figure 31: Overview of the OWL 2 language ........................................................................................ 134

Figure 32: On the left, what browsers see – on the right, what humans see ......................................... 137


ABBREVIATIONS LIST

BC Business Case

CA Consortium Agreement

CMfg Cloud Manufacturing

DPWS Device Profile for Web Services

i.e. id est (that is to say)

IPR Intellectual Property Rights

IT Information Technology

QoS Quality of Service

RTD Research and Technological Development

S & T Scientific and Technological

SME Small and Medium-sized Enterprise

SOA Service Oriented Architecture

WP Work package

w.r.t. With respect to


1 Executive Summary

The strategic objective of the ProSEco project is to provide a novel methodology and a comprehensive ICT solution for the collaborative design of product-services (Meta Products) and production processes, using Ambient Intelligence (AmI) technology, lean and eco-design principles, and Life Cycle Assessment techniques. This will allow manufacturers in different sectors (automotive, home appliances, automation equipment etc.) to effectively extend their products, and to enhance their product-services and production processes in the direction of eco-innovation.

The ProSEco project intends to apply a Cloud Manufacturing approach to the collaborative design of product-services and their production processes, and to the effective implementation of innovative services, in order to strengthen the market competitiveness of manufacturing companies. To this end, new eco-innovative product-services, in other words Meta-Products, which integrate highly personalised innovative functions with minimal environmental footprint along the overall Life Cycle, will be provided, which in turn will open new business opportunities.

Taking into account the project's key objectives and overall ambition, the present document provides an analysis of state-of-the-art research initiatives and technologies potentially relevant for the project. The document focuses on several key RTD problems and technologies and gives a brief overview of them in the context of manufacturing production systems, in order to identify the main gaps and, most importantly, where the ProSEco project is going to deliver innovation.

From both the methodological and software platform points of view, similar gaps have been identified pointing to a lack of collaboration paradigms covering products, services and ecological aspects. Progress in ProSEco should address this by extending product-oriented ontologies, methodologies and software tools to cover product-related services and the ecological aspects involved, also encouraging reuse of previously designed elements.

On the collaborative dimension, existing software tools should be enhanced to easily and effectively involve internal and external actors in a collaborative design effort, taking into account aspects arising from new contribution sources, including recognition and IPR considerations, security and trust among the actors.

In the state-of-the-art analysis and overview provided in this document, several solutions and concepts are identified in all areas relevant for the envisaged ProSEco RTD activities, which may serve as a foundation to realise the envisioned innovative approach.

ProSEco will provide several innovative solutions to close the gaps identified in this State of the Art update, namely a new methodology for combining eco- and lean-design principles for eco-driven Meta Product and production design, addressing both technological and organisational aspects related to context sensitivity, AmI-based monitoring and collaborative building of PES within product ecosystems in the global market. The project will go a large step further by addressing lean principles for Meta Product design. Furthermore, ProSEco will focus on identifying how eco-design techniques can be made compatible with the product development process. Another innovation of the project will be a new platform for the collaborative design of Meta Products and their associated production processes within a product ecosystem, together with a set of new core services and service engineering tools, providing means to easily configure new (ICT-driven) Product Extension Services and customise them to specific customer groups (and their dynamically changing requirements).

All the above points will be realised under a context sensitivity approach to allow for efficient self-adaptation of the PES to individual customers' needs in specific situations (during product/service use). This involves innovative context modelling based upon an ontological approach; an ontology extending the collaborative aspects and the physical layer of Meta Products to also cover abstract concepts and the dynamically changing context of users of Product Extension Services; and new real-time, dynamic context monitoring services providing the data needed to extract the current user context from AmI systems. ProSEco will go beyond the state of the art by developing and demonstrating a web-based solution for simulating the lifecycles of Meta Products and processes, which combines System Dynamics with simulation.


2 Introduction

2.1 Document Purpose.

The purpose of this document is to review state-of-the-art methods, technologies and tools in the areas relevant to the project, particularly eco-design principles, collaborative product/process design, context representation and AmI-based monitoring. This review helps to understand the current situation and the main gaps, and to define the best strategy for ProSEco.

2.2 Approach Applied.

The overall aim of the ProSEco project is to implement a novel Cloud-enabled and extensible platform for the collaborative design of product-services and production processes (see Figure 1). The ProSEco platform provides the generic software infrastructure that enables the envisioned collaboration and supports the development, provisioning and deployment of novel, value-added functionalities as core services: sets of ICT-based services that enable specific applications supporting Product Extension Services (PES) around various products. PES result from the composition of these generic core services with application-specific software.
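The idea of composing a PES from generic core services plus application-specific software can be illustrated with a small sketch. All class names, service names and interfaces below are hypothetical illustrations, not part of the actual ProSEco platform:

```python
# Illustrative sketch only: the class and service names are invented
# for this example and are not ProSEco platform APIs.

class CoreService:
    """A generic, reusable platform service (e.g. context monitoring)."""

    def __init__(self, name):
        self.name = name

    def invoke(self, payload):
        # A real core service would perform domain logic;
        # here we just tag the payload with the service name.
        return f"{self.name}({payload})"


class ProductExtensionService:
    """A PES composed from core services plus application-specific logic."""

    def __init__(self, core_services):
        self.core_services = core_services

    def run(self, product_data):
        # Chain the generic core services over the input data...
        result = product_data
        for service in self.core_services:
            result = service.invoke(result)
        # ...then apply application-specific processing (stand-in here).
        return result.upper()


pes = ProductExtensionService([CoreService("monitor"), CoreService("analyse")])
print(pes.run("sensor-data"))  # ANALYSE(MONITOR(SENSOR-DATA))
```

The sketch only conveys the structural point made above: the generic core services are product-independent and reusable, while the final application layer tailors their composed output to a specific product and customer group.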

The ProSEco framework as shown in Figure 1 is composed of various components that interact to provide a set of services.

Figure 1 - ProSEco Collaborative Environment for Design and PES

As can be seen, ProSEco is a hybrid solution drawing on different disciplines, and it requires development in various areas, from technologies to methodologies and from concepts to simulation models. Manufacturers that wish to exploit the ProSEco platform use the platform components and associated services to construct applications targeting domain- and product-specific needs and requirements.

The topics of this deliverable were driven by the main innovation areas within the ProSEco framework. The table of contents for the state-of-the-art analysis was provided by UNINOVA and approved by the partners, so that contributions from all RTD partners would follow a standard style and format. Each partner contributed according to mutual agreement and its role in the project. The standard draft comprised a definition, relevant methods and tools, relevant projects and main gaps; the final part of each section contains recommendations for ProSEco, which will be the main guideline for the project in the future. Each contribution was provided by the responsible partner and reviewed by a group of experts to align it with the other contributions and the main theme of the report. All sub-reports were integrated to complete work package deliverable


D100.1. Finally, the aggregated version based on all received sections was prepared and sent to the partners for final review and confirmation.

2.3 Document structure.

To encompass all the gaps that the ProSEco project is intended to fill, the current document is structured as follows:

Chapter 2: details the document purpose as well as the approach applied for the development of the document, while underlining the role played by this document with respect to the whole project.

Chapter 3: describes the “hot topics” that will be addressed during the ProSEco project; in other words, it gives an overview of current technologies, methodologies, research initiatives and industrial applications in order to introduce the domain of application and to frame the research activity. This chapter thereby provides the theoretical foundation upon which the ProSEco project will be built, i.e. the baseline for defining the best strategy to pursue and the major challenges that the ProSEco project is intended to tackle. Moreover, it also provides a critical analysis of these technologies, methodologies and research initiatives, in order to clearly show the relevance of the project.

Chapter 4: extracts the main conclusions and final remarks.

Annex: provides an overview of several supporting technologies that can be used during the implementation activities.


3 State of the Art topics

3.1 Collaborative product/process design

3.1.1 DEFINITION

Throughout the years, people have found ways of dealing with problems to generate solutions, and collaboration has always been part of identifying problems and finding generic solutions. Collaboration can take place between individuals or between groups of individuals; what is essential is that individuals interact [1]. Increased demand from the global market for companies to provide technical innovations while maintaining cost efficiency has resulted in an increased complexity of relationships [2]. Adopting a wider perspective on collaborative product development that includes both internal and external collaboration paves the way for new approaches and insights [3]. This is a generic introduction to what collaboration means with reference to organisations and technology; a more detailed definition follows in the next section.

3.1.1.1 Collaboration and collaborative process

Montiel-Overall defines collaboration as “a trusting, working relationship between two or more equal participants involved in shared thinking, shared planning and shared creation” [4]. This definition highlights the two main characteristics of collaborative work: it is a type of group work and it aims at building up a shared understanding between its participants. These two features are examined to describe collaboration as a process.

First, collaboration can be considered a type of group work. This implies that co-workers must follow a four-phase process before forming a group and achieving efficiency towards a common goal [5]. These phases are called forming, storming, norming and performing. During these phases, co-workers gain independence over their assigned roles in the project [6]. Instead of being bound by the organisational structure, they must build strong interpersonal relationships to preserve the unity of their group [6]. The flexibility workers need in order to redefine their roles requires that leaders replace managers: in collaborative work, leaders must respect the opinions of their colleagues while guiding them, whereas in co-operation managers tend to dictate objectives and roles to their subordinates [7]. Ultimately, collaboration should allow the formation of social bonds, which appear when groups evolve into teams [7].
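The four-phase progression above can be sketched as a trivial sequential state machine. The strictly linear transitions are a simplifying assumption for illustration (real groups may regress to earlier phases), and the code is not drawn from [5]:

```python
# Toy state machine for the forming/storming/norming/performing phases.
# The strictly sequential transition rule is a simplifying assumption.

PHASES = ["forming", "storming", "norming", "performing"]


def next_phase(current):
    """Advance a group to the next development phase (stay at the last one)."""
    idx = PHASES.index(current)
    return PHASES[min(idx + 1, len(PHASES) - 1)]


# Walk a newly formed group through all phases until it performs.
history = ["forming"]
while history[-1] != "performing":
    history.append(next_phase(history[-1]))
print(history)  # ['forming', 'storming', 'norming', 'performing']
```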

Secondly, that collaboration aims at building a shared understanding implies that co-workers must build on each other’s knowledge. As suggested in social psychology, this knowledge can be represented as mental structures called schemata, which are stored in long-term memory [8]. Schemata are built through experience by modifying existing schemata; they are reinforced through use and allow people to react to new situations by identifying analogies with previous experiences. Person schemata [9] are of particular interest for group work because they correspond to the representation each worker has of his or her colleagues. They enable the prediction of others’ behaviour and a better understanding of what should be shared with colleagues.

The above description of collaboration demonstrates the importance of human interactions. Groups must evolve through conflicts [5] and towards initially loosely defined objectives [6] before reaching efficiency. Collaborators must share their fate with others [10] and be willing to communicate extensively with each other [7]. This shows the importance of co-workers’ motivation, which is the focus of the next part.

Collaboration and collaborative working can be facilitated or hindered by a number of factors amongst members of project teams. In their work to explore the concepts underlying a collaborative culture, Gautier et al. [11] introduced a theoretical model identifying four main requirements: learning, communication, trust and respect. These requirements are enablers of collaboration that can be enhanced by the management of technologies [12]. They also appear as sub-factors in the conceptual framework model of [13], which identified seven categories of factors: Context, Support, Tasks, Interaction Processes, Teams, Individuals and Overarching Factors. In this framework, each main factor has a number of sub-factors, and each sub-factor is characterised by a number of key points.


31.03.2014

© ProSEco Consortium Page 10 of 139 D100.1 State of the Art Analysis Update

3.1.1.2 Collaborative product development

Collaborative Product Development (CPD) is a technology-centred process in which two or more partners with diverse competences, experience, cultures, skills and locations join complementary resources to design and develop new, innovative or improved products in order to gain competitive advantage, innovate, explore new markets, share risks and costs, and accelerate the product development process [14].

By combining the strengths and expertise of the best diverse and geographically dispersed product development teams, better mission scenarios, designs and the corresponding products and technologies can be developed in less time [15]. Recent literature widely assumes that CPD and participation in a collaborative network of enterprises bring valuable benefits to the involved entities, including an increased ‘survival capability’ in a context of market turbulence as well as the possibility of better achieving common goals [16].

Buyukozkan and Arsenyan [14] have summarised several CPD definitions found in the literature as follows:

Cooperative relationship between firms aimed at innovation and the development of new products [17];

Means by which problematic aspects of product development, such as complexity (many different areas of skill and expertise become involved as markets and technologies converge), shorter product lifecycles, increasingly rapid technological change and pressure to reduce product development time, can be lessened [15];

Application of team-collaboration practices to an organisation’s total product development efforts [18];

Internet-based computational architecture that supports the sharing and transfer of product-lifecycle knowledge and information among geographically distributed companies, aiding sound engineering decisions in a collaborative environment [19];

Continued and parallel responsibility of different design disciplines and lifecycle functions for product and process specifications and their translation into a product that satisfies the customer but does not presuppose one single organisation [20];

Integrated framework that product companies can adopt to become competitive, innovative and leaders in their sphere of influence [21];

Cross-organisational linkage, which in addition to high levels of integration is characterised by high levels of transparency, mindfulness and synergies in participants’ interactions [22];

Virtual process where one or more activities of the product development process are performed by different enterprises or the results of one or more activities of the PD process come from different enterprises [23].

3.1.2 RELEVANT METHODS AND TOOLS

3.1.2.1 Collaborative Product development related work

System integration and collaboration can be identified as the key issues of CPD research [24]. In the review of Li et al. [25], two types of collaboration are identified: horizontal and hierarchical. The former concerns the cooperation of team members from the same discipline to carry out a complex task in either a synchronous or asynchronous way, while the latter emphasises the cooperation between upstream design and downstream manufacturing [25]. A number of methods have been developed for integrating several systems, or components within a system, namely agent-based integration, Web-based integration, and the integration of the two [26].

Information representation and sharing is fundamental to CPD as it involves the objects and their relationship in a design solution. In collaborative design, information for visualization, products, project management, etc. needs to be represented and shared [25].

Many methods and infrastructural technologies have been developed and applied to support system integration and collaboration for CPD, as identified in the literature by Wang and Zhang [24] and listed here:

Szykman et al. [27] proposed a product model to make design information accessible to users in a Web-based system;


Kim et al. [28] developed an ontology-based assembly design method that aims to make heterogeneous modelling terms semantically processable both by design collaborators and by intelligent systems;

Chu et al. [29] implemented a 3D design environment in which information with different levels of detail is transferred to different design engineers; it has been applied in intelligent manufacturing, enterprise integration and supply chain management [30];

Wang et al. [31] proposed a collaborative design system that employs several software agents interacting with each other to perform tasks such as communication and product data management;

Hao et al. [32] developed a collaborative e-Engineering environment on which an industrial case was studied;

Chao et al. [33] proposed an agent-based approach to engineering design that uses agent attributes such as proactiveness and autonomy to achieve effective integration of design tools.

To obtain improved capabilities, agent technology has also been integrated with other technologies [26]:

Wang et al. [34] developed a Web/agent-based multidisciplinary design optimization environment;

Shen et al. [35] proposed an agent-based service-oriented integration architecture for collaborative intelligent manufacturing.

Interoperability between computer-aided engineering software tools is of significant importance to achieving greatly increased benefits in a new generation of product development systems [36]. Much work has been done on system integration and collaboration technologies (e.g. the Web, agents, Web Services and the semantic Web), and open standards and commercial tools are already available [37]. The development of infrastructural technology for CPD systems has also been researched to support distributed integration and collaboration with improved effectiveness and efficiency [24].

Fan et al. [38] developed a distributed collaborative design framework using Peer to Peer (P2P) technology and grid technology;

Cheng and Fen [39] developed a Web-based distributed problem-solving environment where computational codes can be accessed and integrated to solve engineering problems.

Simulation technology is playing an increasingly important role in design validation and verification [40], which raises the need to develop infrastructure for distributed modelling and simulation in the context of CPD.

Reed et al. [41] developed a Web-based modelling and simulation system, applied it to the aircraft design process, and argued that such a system could improve the design process;

Byrne et al. [42] reviewed recent research on Web-based Simulation (WBS) and its supporting tools, and identified a number of advantages of WBS, including ease of use, collaboration features, and flexible licence and deployment models.

One of the important enabling technologies of WBS is middleware, which enables different modules in a WBS to interoperate [42]. Several middleware technologies, e.g. CORBA, Web Services and the High Level Architecture (HLA), have been used in developing collaborative simulation systems for engineering design [43]. Among these, Web Services is a very promising technology for system integration on the Web, whilst HLA is an important and heavily researched standard for distributed simulation [42]. Two ways have been identified and researched for the integrated use of Web Services and HLA: developing HLA-enabling tools using Web Services, and making HLA federations interoperate with other software applications [31], [44].

Web-based systems are able to support collaborative work in an Internet-distributed environment [24]. Web Services technology offers a flexible integration framework in which more dynamic interoperation between the modules of a system can be supported [24], while HLA is particularly powerful for applications requiring complex interaction, coordination and synchronisation. Using the Web and Web Services can thus achieve the goal of supporting collaborative work and the dynamic integration of simulations. However, new requirements have been identified for CPD systems, including integration with physical testing and validation systems for ‘‘hardware-in-the-loop’’ simulations, and semi-automated interactive systems that involve human intervention [26].
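As an illustration of the dynamic, service-oriented integration style discussed above, the following minimal Python sketch mimics how simulation modules can be published under names and invoked by a client without compile-time coupling. The registry, service names and simulation functions are all invented for illustration; a real Web Services deployment would use SOAP/WSDL or REST endpoints rather than an in-process dictionary.

```python
# Illustrative sketch (not from the cited works): a minimal service registry
# standing in for Web-Services-style discovery, so that simulation modules
# are looked up and invoked by name rather than linked statically.

class ServiceRegistry:
    """Maps service names to callables, standing in for a WSDL/UDDI lookup."""
    def __init__(self):
        self._services = {}

    def publish(self, name, fn):
        self._services[name] = fn

    def invoke(self, name, **kwargs):
        if name not in self._services:
            raise LookupError(f"No service published under '{name}'")
        return self._services[name](**kwargs)

# Two hypothetical simulation modules, each published as a named service.
def stress_sim(load_n, area_mm2):
    return {"stress_mpa": load_n / area_mm2}

def thermal_sim(power_w, resistance_k_per_w):
    return {"delta_t_k": power_w * resistance_k_per_w}

registry = ServiceRegistry()
registry.publish("simulate/stress", stress_sim)
registry.publish("simulate/thermal", thermal_sim)

# A client composes results without compile-time knowledge of the modules.
result = registry.invoke("simulate/stress", load_n=5000.0, area_mm2=250.0)
print(result["stress_mpa"])  # 20.0
```

In a distributed setting, `publish` and `invoke` would correspond to service registration and a remote call respectively; the point of the sketch is only the late binding between client and simulation module.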


With the integration of virtual reality, software agents, and Internet/Web-based technologies, collaborative virtual environments are being widely applied in almost all e-business and engineering domains for collaboration among distributed teams [37]:

Rosenman et al. [45] presented a framework for collaboration in a virtual environment, including a database containing the various models and relationships, a virtual world environment for collaboration, and an agent-based society handling communication between the users.

Aspin [46] proposed an interaction mechanism that enables a group of co-located users to interact collaboratively with a common visual environment through lightweight remote computing devices. An object-based distributed shared memory (DSM) system distributes the description of active sessions both to the collection of services forming the design/review session configuration and to the remote interface applications that support individual user interaction. This distributed system then forms a synchronised, distributed description of the session content that both informs services of the session content and provides a centralised means of managing user interaction.

Hammond et al. [47], in an interesting experimental work, used socio-technical theory as a framework to explore differences in engineering design team decision-making as a function of the communication medium. Their results indicate that design teams communicating via an electronic medium perceive an increased mental workload and interact less frequently, but for a greater total amount of time [37].

It is worth mentioning that various development and collaboration tools have been developed by research organisations, consortia and software vendors for systems integration and collaboration in the architecture, engineering, construction and facilities management (AEC/FM) sector, from which ProSEco can learn. Shen et al. [37] summarise these as follows:

Industry Foundation Classes (IFC) Toolboxes: A good number of tools are commercially available to support the development of IFC-compliant applications. As an example, ST-Developer is a commercial STEP SDK from STEP Tools Inc. that comes with pre-installed libraries for use with the AEC standards defined by STEP and others, including IFC, CIS/2 and STEP AP 225.

CORBA, COM/DCOM, and Java RMI: Most integrated systems will still be implemented using these distributed object technologies.

Agent system development tools: While a large number of academic, commercial and open-source agent system development tools are available [48], the most widely used one is JADE (Java Agent Development Framework).

Web services development tools: A wide range of tools is available for Web services development and deployment, from powerful packages like Rational Application Developer Tools to simple and practical tools like Eclipse [37].

Commercial collaboration tools: Several commercial collaboration tools are available for AEC/FM. The most popular include ArchiCAD TeamWorkTM [49], Autodesk BuzzsawTM and Bentley ProjectWise [50].

3.1.2.2 Product-Service Systems

Today’s customers have needs that must be fulfilled before they are satisfied [51]. Many of them do not want to buy a specific product; instead, they demand a solution to a problem. Companies have therefore started to offer integrated packages of “hard” tangible products and “soft” intangible services. In the literature, these packages or bundles are mostly called Product-Service Systems (PSS).

A recent literature review [52] highlights that companies continually strive to increase production, but in recent years it has become clear that providing products alone is insufficient to remain competitive [53]. Companies have thus begun to offer solutions aimed at increasing market share as well as customer satisfaction [54], [55]. The ultimate PSS objective is to increase a company’s competitiveness and profitability [56]; a further objective is to reduce the consumption of products through alternative scenarios of product use rather than product acquisition. The PSS thus has the potential to re-orient current standards of consumption and production, enabling a move towards a more sustainable society [57].


When defining the PSS, a third of the 149 articles in the literature review of Beuren et al. [52] cite Goedkoop et al. [58], who define it as a combination of products and services in a system that provides functionality for consumers and reduces environmental impact. Mont [59] highlights how the PSS offers a system of integrated products and services intended to reduce environmental impact through alternative scenarios of product use. The key elements of the PSS are (i) the product; (ii) the service, in which an activity is performed without the need for a tangible good or for the system; and (iii) the combination of products, services, and their relationships [58]. Most authors consider the PSS simply as a competitive proposal intended to satisfy consumer demand. Certain authors, however, assert that the PSS goes beyond this view and instead aims at sustainability by seeking a balance between environmental, economic and social concerns [60], [61].

The main benefits of the PSS are related to continuous improvement of the business, innovation in quality, and the satisfaction of consumer demand [62]. The results of the PSS are strengthened provider-consumer relationships as well as greater consumer loyalty. Moreover, companies can use the information obtained from their consumer relationships to develop new systems that improve product performance [63]. In this fashion, a company can improve its position in the value chain and increase its innovation potential [64]. Taken together, by generating products and services that reduce waste, companies that implement the PSS can help to minimise the consumption of scarce resources and environmental degradation [61].

When planning the implementation of a PSS, a company must change from “product thinking” to “system thinking,” or in other words, they must focus on the use of the product [65]. The PSS requires the producers and service providers to extend their responsibilities throughout the product life cycle [63], [65].

A variety of methodologies for the PSS can be found in the literature, such as [66]: PROSECCO (Product & Service Co-design), INNOPSE (Innovation Studio and Exemplary Developments for Product Service Engineering), HICS (Highly Customerised Solutions), and MEPSS (Methodology for Product Service Systems). A high-quality example of a methodology that offers a set of tools for the sustainable development of a PSS is MEPSS [67].

3.1.3 RELEVANT PROJECTS

Several relevant projects are already identified in the DoW for ProSEco. Further projects have been identified and briefly discussed in the previous section, where the tools and methodologies were presented. In addition, the following projects are worth mentioning: they are related to Cloud Manufacturing using a collaborative approach, which features strongly within the ProSEco project.

3.1.3.1 ManuCloud

The ManuCloud Project [68], funded under the European Commission’s Seventh Framework Programme for Research (FP7), is perhaps the research project most relevant to ProSEco, as it explores the application of Cloud Manufacturing. According to Meier et al. [69], the ManuCloud project is meant to enable the creation of integrated manufacturing networks spanning multiple enterprises, facilitated by service-oriented information technologies. Wu et al. [68] explain that “[the ManuCloud architecture] provides users with the ability to utilize the manufacturing capabilities of configurable, virtualized production networks, based on cloud-enabled, federated factories, supported by a set of software-as-a-service applications”. This architecture is reproduced in Figure 2.


Figure 2 – ManuCloud architecture [68]

The ManuCloud project architecture is very similar to the strategic vision of Cloud Manufacturing presented later in section 3.7 of this report, and represents a major advancement towards the realisation of Cloud Manufacturing [68].

3.1.3.2 Quirky

According to the Economist [70], Quirky offers users access to a complete product-creation enterprise. While not a pure cloud-based manufacturing environment, Quirky is enabled by manufacturing resources virtualised over the internet and available for use by distributed designers. The Quirky business model incorporates the originating designers into the wealth-sharing model and provides them with a portion of the profits that their products yield [68].

3.1.3.3 Shapeways

The Economist [70] also discusses Shapeways, a company which offers 3D printing services over the internet. In contrast to the vetting process used in the Quirky business model, Shapeways offers users immediate access to 3D printers to make any object they desire [68].

3.1.3.4 Ponoko

Chafkin [71] discusses Ponoko, a product creation website which provides designers with access to the manufacturing resources they need to realise their products. The company prices products based upon the materials they require and the amount of machine time needed to make the part [71]. A review of the Ponoko website shows that a number of 2D and 3D manufacturing services are offered to designers; the website even enables the manufacture of electronics by offering access to hundreds of electronic components which the designer can specify and create designs with [68].

In the same issue of the Economist [70], MFG.com is classed as one of the most promising cloud manufacturing companies [68]. MFG.com connects consumers with over 200,000 manufacturers in 50 states [70]. According to MFG.com, buyers request services by providing technical product specifications, which are communicated to appropriate suppliers for quoting. Suppliers are selected based upon their manufacturing capabilities, expertise and instantaneous production capacity. The MFG.com platform hosts all activities, from the creation of the Request for Quote to the shipping of the final product (MFG.com website).
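The supplier-matching step in an RFQ workflow of this kind can be sketched as a simple capability-and-capacity filter. Everything below (class names, fields, the filtering rule) is a hypothetical illustration, not a description of MFG.com’s actual implementation:

```python
# Hypothetical sketch of RFQ supplier matching: buyers post required
# capabilities and an effort estimate; suppliers are kept only if they
# cover every required capability and have enough spare capacity.

from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    capabilities: set
    free_capacity_hours: float

@dataclass
class RFQ:
    part_id: str
    required_capabilities: set
    estimated_hours: float

def match_suppliers(rfq, suppliers):
    """Return suppliers covering all required capabilities with spare capacity."""
    return [
        s for s in suppliers
        if rfq.required_capabilities <= s.capabilities
        and s.free_capacity_hours >= rfq.estimated_hours
    ]

suppliers = [
    Supplier("Alpha Machining", {"cnc_milling", "anodising"}, 120.0),
    Supplier("Beta Fab", {"cnc_milling"}, 300.0),
    Supplier("Gamma Works", {"cnc_milling", "anodising"}, 10.0),
]
rfq = RFQ("bracket-42", {"cnc_milling", "anodising"}, 40.0)

print([s.name for s in match_suppliers(rfq, suppliers)])  # ['Alpha Machining']
```

A real platform would of course add quoting, negotiation and logistics on top of this filter; the sketch only shows how a request’s technical specification can drive supplier selection.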


3.1.3.5 ECOLEAD

During the last decade, a large number of R&D projects have made substantial progress, not only in terms of developing support platforms and tools, but also in contributing to a better conceptual understanding and characterisation of the area of collaborative networks [72]. An example is ECOLEAD, an integrated project including 28 partners from 15 countries, with a duration of more than 4 years, which addressed several key issues of collaborative networks from a holistic perspective [73].

The understanding that collaborative networks require contributions from multiple ‘‘adjacent’’ disciplines and a more structured theoretical foundation is leading to the consolidation of the area as a new discipline [74]. ECOLEAD contributed to the major need for a reference model that could provide a general basis for understanding the significant concepts, entities and relationships of the domain, and therefore a ‘‘foundation’’ for the area of collaborative networks [72].

3.1.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

The following pages present gaps and future directions as identified in the literature reviewed in the areas of Collaborative Product Development, Product-Service Systems and Cloud Manufacturing. The need for a lean approach in ProSEco and the issue of knowledge integration are also briefly introduced as recommended directions for the project. Communities of practice (CoP), as well as the role of societies, will be considered as an approach in the ProSEco project.

3.1.4.1 Gaps and future direction in collaborative product development

Studies tackling collaboration within ‘product development’ are generally focused on one problem of the process [14]. Integrated studies handling Collaborative Product Development (CPD) as a whole are non-existent, whereas a large number of issues or challenges emerge from the increased complexity in product development processes involving a collaborative network of players from different organisations [75].

Buyukozkan and Arsenyan [14] note that, to support CPD at a technological level, various solutions have been suggested by researchers and practitioners [76]–[78]. They go on to highlight the following gaps that need addressing:

A guide mapping the requirements of the right tools;

A partnership formation process mapping a detailed approach;

Integrated environment for product development must enforce four dimensions of e-engineering [79]:

o Process;

o Information;

o Organisation;

o Technology.

The three different processes within CPD (collaboration, innovation and product development) should be considered separately; these separate approaches should later be integrated into CPD in order to manage and control CPD efforts [21];

Partner selection: in modern manufacturing, firms tend to overcome their traditional boundaries by creating cooperation links with partners or even competitors [80], and inter-organisational structures involve leveraging the assets and capabilities of firms located at various points along the value (supply) chain [75];

A consistent and generic model built around the partnership process, incorporating diverse approaches, is required. CPD partnership studies usually focus on the R&D phase of the process, and the existing literature lacks a general evaluation of the various potential partners, such as manufacturers, customers, suppliers, competitors, R&D departments, marketing departments and universities;

Organisational learning during collaboration is another undervalued topic, yet CPD efforts not only result in new products and services, but also increase corporate experience and knowledge;


Impact on organisational structure including knowledge management, IT management, R&D efforts, etc.

Please note that the above gaps were identified by the researchers at an early phase of the project. They are presented as pointers to what ProSEco should be addressing; as the work progresses, the researchers will carry out additional investigation to decide which of them will be taken forward in the project.

3.1.4.2 Future directions for Product-Service Systems (PSS)

Designers of the PSS require methodologies and tools for visualizing the network of stakeholders and their needs [52].

Morelli [81] points out that the methodology for the PSS should be directed to each individual situation. It should include the following:

(i) Identification of the actors involved (companies, institutions and final users) in the network on the basis of defined analytical frameworks;

(ii) Possible PSS scenarios, verified use cases, and sequences of actions and actors’ roles;

(iii) Defined requirements for the PSS and the logical and organizational structure of the PSS; and

(iv) Possible representation and management tools to represent a PSS in all of its components, i.e., physical elements, logical links and temporal sequences.

Because services are dominating the market to an ever larger extent, businesses and researchers must develop tools that consider service operation [69]. A business should also take into account environmental limits [62]. Dematerialisation may be a tactic for analysing the flow of materials and curbing waste [82]. There is also a need for new studies considering sustainability, stakeholders and property transfer [83], as well as research on consumer behaviour and on the experience of providing services [52].

ProSEco should also focus on providing solutions for extending international collaboration. Boehm and Thomas [51] note that studies within the PSS field are often conducted explicitly for one industry in one country. Although there are projects and studies, such as the EU projects SusProNet [64] and MEPSS [67], which involved companies from several countries and adopted a multidisciplinary approach, collaboration between international researchers on papers is in general limited. Only very few contributions compare two or more cases from different countries; this is, however, necessary in order to avoid results that are applicable only in one culture. Furthermore, it helps to overcome barriers such as language problems or cultural uncertainties [51].

In the future, technology will offer new opportunities for companies to offer new products and services [84]. One direction is the advent of cyber-physical systems (CPS), understood as the integration of embedded systems with global networks such as the Internet [85]. The complex interactions and dynamics of how cyber and physical subsystems interact with each other are one possible future direction of research [86].

Although the ideas of collaboration in product design and development have been around for years, there are business-model issues, listed below, that the ProSEco project could contribute to addressing.

Wu et al. [68] summarise that, for collaborations involving product development and cloud manufacturing, business model development should focus on a few main research questions:

How will equity be assured when value is delivered as a result of shared-interest, multiple-party work? How will value be maximized and distributed in accordance with value added?

Should collaborators be bound by formal operating agreements, or should they be subject to a free market style environment? Does this vary based upon the situation, and why? Perhaps a hybrid environment would be best?

How should IP be handled in collaborative environments? What about background and foreground rights?

How can we investigate the communication and interaction patterns between service providers and consumers in order to capture the implicit collaboration structure and key service providers and consumers in collaborative networks?


3.1.4.3 Lean thinking and eco-impact

One of the aims of the ProSEco project is to explore the links between collaboration and the lean paradigm, as the latter provides a platform for optimising processes and minimising waste, and thereby reducing environmental impact. We strongly believe the lean philosophy should be the overarching theme and the one that provides the theoretical grounding for the project.

A lean organisation understands what value means for a specific customer, knows how the value stream creates that value, creates the flow of value to the customer, utilises the power of the pull system, and relentlessly pursues perfection through continuous improvement. Viewing lean manufacturing as a process that aims to eliminate waste, its principles can be applied to other areas such as healthcare, education, government, the supply chain and, of specific relevance to ProSEco, new product development (NPD) [87], [88].


3.2 AmI information processing

3.2.1 DEFINITION

Ambient Intelligence (AmI) refers to electronically enabled environments that are aware of and responsive to the presence of people. In [89]–[95], the AmI vision implies the concept of “smart spaces” populated by intelligent entities [96]. Such an environment is sensitive to the presence of living creatures in it and supports their activities; it “remembers and anticipates” in its behaviour. In general terms, AmI denotes a system, based on information and communication technologies, that supports its human users in an effective and unobtrusive way.

Another definition indicates that Ambient Intelligence refers to electronic environments that are aware of and responsive to the presence of people [97].

These definitions indicate that the general objective of AmI is to promote a better integration of technology into the environment, so that people can use it freely and interactively. However, applying this concept to the manufacturing industry is not straightforward: in the ISTAG report mentioned above, the industrial scenario is only roughly introduced in a final annex (see [98], “Annex 5: Towards Industrial AmI applications”). Building on this annex and on the core concepts of the previous definitions, a new definition of AmI in the manufacturing industry is proposed: “An AmI system in manufacturing surrounds the human operator, effectively and transparently supporting his/her activity through the use of context information of such activity”. According to this definition, AmI systems in manufacturing are characterised by three features:

Human-centred,

Support to activity

Context awareness

Since context awareness is the core of AmI, special attention has to be given to context modelling and extraction (see Section 3.3). Context-aware applications look at the who, where, when and what of entities and use this information to determine why a situation is occurring.
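The who/where/when/what (and derived why) view of context can be sketched as a small data structure; the `ContextEvent` class and the toy `why` heuristic below are purely illustrative and not part of any cited framework:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ContextEvent:
    """One observation about an entity: who, where, when, what."""
    who: str        # entity identifier (operator, machine, ...)
    where: str      # location, e.g. a shop-floor cell
    when: datetime  # timestamp of the observation
    what: str       # observed activity or state

def why(events, who):
    """Toy 'why' inference: explain an entity's latest activity
    by the location it was observed in."""
    latest = max((e for e in events if e.who == who), key=lambda e: e.when)
    return f"{who} is doing '{latest.what}' because they are at {latest.where}"

log = [
    ContextEvent("operator-1", "cell-A", datetime(2014, 3, 1, 8, 0), "setup"),
    ContextEvent("operator-1", "cell-B", datetime(2014, 3, 1, 9, 0), "inspection"),
]
print(why(log, "operator-1"))
```

Real systems would of course derive the "why" from far richer reasoning; the point here is only how the four basic dimensions combine into an interpretation.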

From a technology point of view, AmI systems in industrial environments can be seen as a step beyond traditional sensor systems. With a focus on the manufacturing industry, [99] provides a definition of AmI and outlines a reference architecture for AmI-based control systems of devices/processes, aiming at a unified representation of the essential control features of these devices. This definition of AmI systems especially outlines the following elements:

Physical and contextual ambience in which human operators interact,

Process and plant containing the manufacturing related infrastructure,

AmI control system or information system, and

The human operator, who explicitly as well as implicitly interacts with the other elements.

This definition is also highly user-centric: for an AmI system to exist, it requires that certain characteristics are present, in particular:

Multimodal and easy interaction with the human operator,

Existence of knowledge on the human operator, the process and the system itself, and

Active support of the human operator, not requiring an interruption of the related activity.

The way forward for process innovation in industry lies in the radical innovation of the whole working environment to focus it on the human actor. The Ambient Intelligence (AmI) philosophy sees this human actor surrounded by environments which are sensitive and responsive to their wishes [100].

3.2.2 RELEVANT METHODS AND TOOLS

Based on the above definitions of AmI, and specifically on the above-mentioned reference architecture of AmI systems in industry [99], the methods and tools relevant for AmI application in the ProSEco project can be grouped into three main areas: AmI solutions for enhanced user interaction, context sensitivity, and active human operator support by AmI systems. Context sensitivity is analysed in Section 3.3.


3.2.2.1 User interaction

User interaction enables the user of the AmI environment to control and interact with it in a natural and personalised way.

In recent years, user interaction technology has improved: evolving from the classical “display + keyboard” approach, current trends are oriented towards mobility, taking advantage of new technologies (e.g. wireless networks, personal digital assistants, smart phones).

This trend will continue in a direction where interaction is adapted to the user of the system, through the best interaction method for that user. One clear future interest is mobility: on the shop floor, the user must be able to access services and obtain information not only from a standard personal computer, but also while moving within the environment, possibly attending to other tasks at the same time [94].

This will involve creating wireless networks on the shop floor to connect the mobile devices used by human operators to personal computers that centralise and process data.

Considering that the best user interface is not to have one at all, the trend for interface systems will be to imitate multimodal communication characteristics of human-human interaction. These multimodal interfaces address the user by means of a range of natural communication modalities: voice, eye gaze, facial expressions and gestures.

This approach must move beyond the usual human-machine interaction based on keyboard, windows, mouse, etc., as it may not be the best option in all cases (for example, when the user moves away from the computer desk). In any case, multimodal interfaces are not yet ready for industrial use, as user experience aspects are only beginning to be tackled fundamentally [89].

3.2.2.2 Active Human Operator Support

Support to the human operator activity in process/product design can be achieved at three levels:

Affecting the design process itself, i.e. taking some autonomous decisions based on available information, without user interaction;

Affecting interaction between human operator and the system, i.e. using context information to know what is the most relevant information to be presented to the user;

Affecting the ambience, i.e. changing environmental conditions due to the presence or absence of the user.
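As a sketch only, the three support levels above could be represented as follows; the `SupportLevel` enum and the example actions are invented for illustration and do not come from the deliverable:

```python
from enum import Enum

class SupportLevel(Enum):
    AUTONOMOUS_DECISION = 1   # affects the design process itself
    ADAPTED_INTERACTION = 2   # selects the most relevant information for the user
    AMBIENCE_CONTROL = 3      # changes environmental conditions

def support_action(level, context):
    """Toy dispatcher mapping a support level to an action."""
    if level is SupportLevel.AUTONOMOUS_DECISION:
        # autonomous decision based on available information, no user interaction
        return "apply default tolerance from past designs"
    if level is SupportLevel.ADAPTED_INTERACTION:
        # context decides what is most relevant to present
        return f"show energy data for {context['task']}"
    # ambience: react to presence or absence of the user
    return "keep lights on" if context["user_present"] else "dim workstation lights"

print(support_action(SupportLevel.ADAPTED_INTERACTION, {"task": "housing design"}))
```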

Due to the general definition of AmI as a human-centred technology, most research so far has focused on the last two points, with several projects addressing user-system interaction and adapting the ambience to the human operator's needs (e.g. the MIMOSA [101] and ASK-IT [102] projects).

This is now changing, due to transformations in the manufacturing industry, which considers that its success will also be directly influenced by innovations enabling energy-efficient process and product design. To achieve this, the ProSEco project has identified the use of AmI in the manufacturing process as an enabling technology for context-sensitive support.

When aiming to exploit the potential of context awareness for energy-efficient design, systems that collect and enrich information can be structured according to the following three levels for generating context awareness:

Manufacturing equipment, machine and installation related sensors that can acquire related data from the user and/or environment;

Extracting relevant data and knowledge from existing production data capturing systems that are generally used for production planning and control;

Collection and analysis of acquired data in distributed processing nodes, making the results available for improving the energy efficiency of products and processes.
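A minimal sketch of these three levels, with invented machine names and a mock planning lookup standing in for a real production data capturing system:

```python
from collections import defaultdict

# Level 1: machine/installation sensors deliver raw readings
raw = [
    {"machine": "press-1", "kwh": 1.2},
    {"machine": "press-1", "kwh": 1.5},
    {"machine": "mill-2",  "kwh": 0.9},
]

# Level 2: enrich readings with data from a (mock) production planning system
planning = {"press-1": "order-42", "mill-2": "order-43"}
enriched = [dict(r, order=planning[r["machine"]]) for r in raw]

# Level 3: a processing node aggregates energy use per production order,
# making the result available for energy-efficiency analysis
per_order = defaultdict(float)
for r in enriched:
    per_order[r["order"]] += r["kwh"]

print({k: round(v, 1) for k, v in per_order.items()})  # energy (kWh) per order
```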

It is considered that the use of AmI approaches can provide active support to the user/designer. New types of information can be combined with existing data and knowledge and correlated to their context with respect to the product, process and infrastructure. In relation to the roles of the different users in the process, this information could be better exploited. This is especially true for energy-efficiency-related information measured in later steps of the product life-cycle, which is often not available as design input in the earlier steps. By making designers aware of the consequences of their design decisions, this could support the concurrent design of Meta Products and their production systems.

Even though it is nowadays common to have systems that store user preferences, AmI systems require expanding this functionality towards “user behaviour analysis”. Currently, this is done by combining previously stored user preferences with context from past interactions.

The aim is for AmI systems to also know the operator's intention (i.e. what the user wants to do). This information, combined with information from other sensors (e.g. the current temporal and spatial location [90]), enables a better-adjusted interaction between the user and machines, cars or home appliances.

Eventually, it will be possible to abstract the user profile into more general community profiles and use these for building user-tailored services.

3.2.3 RELEVANT PROJECTS

3.2.3.1 AmI4SME

The AmI4SME project's objective was to propose a new scheme for systemic innovation of industrial working environments in SMEs by applying Ambient Intelligence (AmI) technology. The traditional approach to implementing the AmI concept was oriented towards surrounding people with electronic environments sensitive and responsive to their wishes. AmI4SME provided an essential RTD contribution by migrating this approach to manufacturing SMEs. Within this EU-funded project [103], a reference model was developed to support manufacturing SMEs in identifying human-centric aspects when specifying new methods of work based on AmI technologies. The elaborated reference model indicates where AmI technologies offer potential to optimise the interaction of human operators with their working environment, and covers the following areas:

Human Operator Input/Output;

Ambient (environment) Input/Output;

Ambient (process) Input/Output;

AmI System “Observer” Part;

AmI System “Controller” Part.

Relevance to and potential complementarity with ProSEco

It will be analysed to what extent the reference model defined in the AmI4SME project could be applied to or adapted for the needs of the ProSEco project. It is likely that at least part of this model, and the list of AmI solutions provided in the project, will be re-used for building the service engineering tool in ProSEco.

3.2.3.2 AmI-MoSES: Ambient-Intelligent Interactive Monitoring System for Energy Use Optimisation in Manufacturing SMEs

The AmI-MoSES project aims at optimising energy efficiency by realising an innovative, beyond-state-of-the-art solution for manufacturing companies. The solution is based, on the one hand, on a novel approach to energy consumption monitoring through the introduction of so-called Ambient Intelligence parameters and, on the other hand, on combining AmI data and classical energy consumption data with Knowledge Management technologies to realise decision support as an add-on to currently used Energy Management Systems.

The AmI-MoSES system comprises two subsystems: “Energy Consumption Data Monitoring and Energy Use Parameters” and “Service Platform with associated services”.

System functionalities from the point of view of end users are realised as Energy Efficiency Services, where four services are included:


Service for Condition-based Warning;

Service for On-line Diagnostics of Energy related problems;

Service for Installation and Ramp-up Support;

Service for Continuous Improvement;

Additional details can be found at http://www.ami-moses.eu/.

Relevance to and potential complementarity with ProSEco

The AmI-MoSES approach for identifying AmI and other measurement systems that can provide context-dependent information on energy use, supporting optimal energy use in the process operation phase, is planned to be reused and enhanced to emphasise the energy-efficiency aspects relevant for Meta-Product/process eco-design.

The reference model for AmI solutions (already reused in AmI-MoSES from AmI4SME) can be reused again to identify what information on energy use patterns (and on the ecological impact of the designed process) could be obtained from such AmI systems.

3.2.3.3 InLife – Integrated Ambient Intelligence and Knowledge-Based Services for Optimal Life-Cycle Impact of Complex Manufacturing and Assembly Lines

InLife explored how a combination of advanced Ambient Intelligence (AmI) and Knowledge Management (KM) technologies could be used to assure a sustainable and safe use of manufacturing and assembly lines (MAL) and their infrastructure over their life-cycle. The objective of the developed system was to assure an optimal life-cycle impact of complex MAL using ambient intelligence and knowledge-based services. InLife provided methods for the calculation and prediction of life-cycle parameters, as well as a set of services to support life-cycle management.

Relevance to and potential complementarity with ProSEco

The experience in applying AmI solutions in the manufacturing industry for life-cycle optimisation of machines and for the sustainability of manufacturing and assembly lines can be partly re-used in ProSEco. The methodology for applying AmI solutions in the machine industry will serve as a starting point for building the engineering tool in ProSEco.

3.2.3.4 InAmI - Innovative Ambient Intelligence Based Services to Support Life-Cycle Management of Flexible Assembly and Manufacturing Systems

The main focus of the project was to develop an eCollaborative platform, based on Ambient Intelligence (AmI) and Semantic Based Knowledge Management (SBKM) technologies, to create and optimise different services for management of complex Assembly and Manufacturing Systems (AMS), with special emphasis on Automation and Robotics, over the whole production-cycle. The project explored how a combination of advanced Agent, AmI and SBKM technologies can be used to assure an improved collaborative use of industrial installations over the production-cycle. InAmI has specifically contributed to optimise the whole production-cycle of AMS and its products, as it provided a platform for radically improved collaborative services (e.g. reconfiguration of AMS, problem solving, intelligent monitoring etc.). The project focus was on the paradigm of eCollaboration, defined as collaboration among individuals engaged in a common task using electronic technologies, to provide services supporting the access to relevant knowledge/information through a common interface for different agents along the production process.

Relevance to and potential complementarity with ProSEco

The experience in applying AmI solutions to support the collaborative use of AMS will be useful for building the engineering tool in ProSEco, as some of the concepts/solutions can be re-used for the different services to be built, e.g. around machines, home appliances, or in the automotive industry, i.e. in all four Business Cases in ProSEco.


3.2.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

The project will provide an important contribution to the further application of AmI solutions in industry by developing a solution to effectively use AmI to enhance product/process design from the customer satisfaction and ecological points of view, while also aiming to increase the efficiency of the design process. For example, context-sensitive measurements of processes and operator behaviour may provide better insight into how customers use the product/service and/or into material and energy use patterns. The critical issue will be to extract such patterns from AmI-based and other measurement data. The development of Product Embedded Information Devices (PEIDs), such as RFID tags, sensors and on-board computers, is expected to progress rapidly and to be widely used for advanced Product Lifecycle Management and real-time data monitoring throughout the product supply chain in the next few years. In particular, this technology will allow producers to increase their capability and capacity to offer high-quality after-sales services and product updates while, at the same time, being able to demonstrate responsibility as producers of environmentally friendly and sustainable products.

One of ProSEco's main focuses is the effective use of AmI to enhance product/process and service design from different points of view; the AmI-based monitoring concepts from the AmI-MoSES project could be reused for the ProSEco services' concepts.

The idea in ProSEco is to reuse the reference models of AmI solutions in the manufacturing industry from AmI4SME and AmI-MoSES to build the engineering tool for the selection/integration of AmI solutions in different products, processes and services. However, these lists have to be extended to cover how different AmI solutions can be used to extend products and processes, i.e. how AmI solutions can be used to build various services. The engineering tool for the selection/integration of AmI solutions will therefore be built on the existing reference models, while the aspects relevant for product extension will be investigated. The tool will specifically support the selection of AmI solutions that can provide the information needed for assessing the environmental impact of products and processes, and thereby allow for the optimisation of products/processes/services from a sustainability point of view. As indicated above, the reference model and lists of AmI solutions and information providing the energy use patterns of manufacturing processes, elaborated in the AmI-MoSES project, will be specifically re-used and extended to address product/service aspects as well as other environmental impacts of products/processes (e.g. material aspects). It should be stressed that no previous research or project has so far systematically addressed how AmI solutions can be used to extend products with innovative services, and how the information provided by AmI systems can be used to optimise products from a sustainability point of view.

The AmI-based monitoring Core Service will provide inputs to both context awareness services and environmental impact monitoring, based on information from AmI solutions integrated in the Meta Product and processes. The big advantage of such an approach is that it will allow the collection of data on product service dynamics (see Section 3.4.2) in real time. Such services have not been investigated up to now.


3.3 Context Sensitive approaches

3.3.1 DEFINITION

It is difficult to find a single definition for the notion of context, but its importance in communication, categorisation, intelligent information retrieval and knowledge representation has been recognised for many years. In the artificial intelligence domain, the concept of context is usually defined as the generalisation of a collection of assumptions [104]–[106]. A common, pragmatic definition for context-aware applications defines context as “any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves” [107], [108]. With the recent advance of context-aware computing, an increasing need arises for formal context modelling and reasoning techniques. Formal techniques would facilitate context representation, context sharing and semantic interoperability between heterogeneous systems. Context modelling for collaboration teams presents additional challenges, as these teams are highly dynamic in their constitution and objectives, reside in distributed environments and are usually knowledge-intensive [109].

The amount of information to be handled by “collaboration tools” is nowadays very significant, as a result of the number of ICT systems that act as information sources. Successful collaboration depends on timely access to the relevant information by the right collaborator. Collaboration tools therefore need semantics and ontology services to filter and contextualise information for users. The sharing of this information must be well balanced between the needs of the cooperative task under development and security and intellectual property rights (IPR) management issues. Context extraction is therefore demanding for such tools. It will provide workers with a formal means to invoke resources, instead of relying only on instinct and memory.

Context modelling solves the problem of how to represent context information. However, how to extract context from the knowledge process, and how to manipulate the information to meet the requirements of knowledge enrichment, remain open. Since it is planned to model context with ontologies, context extraction is mainly an issue of context reasoning and context provisioning: how to infer high-level context information from low-level raw context data.

3.3.2 RELEVANT METHODS AND TOOLS

Incorporating AmI systems into manufacturing environments will make it possible to increase their context-aware capabilities. In [93], Pascoe introduced a set of four context-aware capabilities that applications can support:

Contextual sensing: the most basic level of context-awareness, where the system simply detects various environmental states, and presents them to the user. Most manufacturing environments are at this level nowadays;

Contextual adaptation: applications use data from a sensory system to adapt their behaviour to integrate more seamlessly with the user’s environment;

Contextual resource discovery: a system can locate and exploit resources that share part of, or all of, its context;

Contextual augmentation: extends the previous capabilities by augmenting the environment with additional information. This is achieved by associating digital data with a particular related context (data extended with contextual information).
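Pascoe's four capability levels can be sketched as an ordered enumeration; treating the levels as strictly cumulative (a system at a higher level also supports the lower ones) is a simplifying assumption of this example, not a claim from [93]:

```python
from enum import IntEnum

class Capability(IntEnum):
    """Pascoe's four context-aware capability levels, in increasing order."""
    SENSING = 1       # detect environmental states and present them to the user
    ADAPTATION = 2    # adapt behaviour to the sensed context
    DISCOVERY = 3     # locate and exploit resources sharing (part of) the context
    AUGMENTATION = 4  # associate digital data with a particular context

def supports(system_level, required):
    """Simplifying assumption: a system at a level also covers lower levels."""
    return system_level >= required

# Most manufacturing environments are at the sensing level today (see above)
shop_floor = Capability.SENSING
print(supports(shop_floor, Capability.ADAPTATION))  # False
```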

A number of systems for handling context awareness have been proposed by the research community, many of them directed at the needs of wireless networks and mobile computing. For instance, the middleware solutions of [110] and [111] are ontology-based, concerned with the semantic representation of context and personalised service search and retrieval techniques. [112] and [113] recognise the need to go beyond context representation to context reasoning, classification and dependency. Related problems such as strategies for storing context information in a database server [114] and for processing context-aware queries [115] have also been investigated.


3.3.2.1 General Process in Context-aware Systems

The basis for context-aware applications is a well-designed context model. A context model enables applications to understand the user’s activities in relation to situational conditions. Typical context modelling techniques include key-value models, object-oriented models, and ontological methods [116].
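The difference between the first two techniques can be illustrated in a few lines (all names are invented for the example); ontological methods are discussed further below:

```python
# Key-value model: the simplest technique, flat attribute-value pairs
kv_context = {"user": "operator-1", "location": "cell-A", "noise_db": 78}

# Object-oriented model: context wrapped in objects carrying behaviour
class LocationContext:
    def __init__(self, room, building):
        self.room, self.building = room, building
    def same_building(self, other):
        # behaviour attached to the context, impossible in a flat key-value model
        return self.building == other.building

a = LocationContext("cell-A", "plant-1")
b = LocationContext("cell-B", "plant-1")
print(kv_context["location"], a.same_building(b))  # cell-A True
```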

With the maturing of semantic web technologies (W3CSW08), ontology-based context modelling is becoming more and more important in both academia and industry. In this scope, an ontology is defined as “a formal explicit specification of a shared conceptualisation” [117], [118]. Ontology-based modelling is considered the most promising approach, as it enables a formal analysis of the domain knowledge, promotes contextual knowledge sharing and reuse in a ubiquitous computing system, and supports context reasoning based on semantic web technologies [96], [119]. Furthermore, present research on context modelling mostly focuses on ontologies (see Section 3.3.2.3).

An approach for context awareness and context identification in collaborative work was developed in the EU-funded K-NET project (see section 3.3.3.1), together with context models for collaborative work.

Although there are various types of context-aware systems, in general a context-aware system follows four steps to fully enable context-awareness [120]:

Acquisition of context information: because of the diversity of context information types, context information can be acquired in various ways such as using physical sensors, virtual sensors, sensor fusion, and so on. Software modules that perform context acquisition can be seen as virtual sensors, because just as physical sensors convert physical properties into context data, virtual sensors analyse data from diverse sources and convert them into context data;

Storing acquired context information into a repository. What kind of data model is used to represent context information is very important. The closely related topic of context modelling is described below in detail;

Controlling the abstraction level of context information by interpreting or aggregating context data. This is done to convert raw data from sensors into information of a useful detail level, e.g. aggregate many low-level signals into a manageable number of high level information;

Utilizing the context information for services or applications. In general, context information can be used for two purposes – context information as triggering condition and context information as additional information.
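The four steps above can be sketched as a toy pipeline, with a virtual sensor, an in-memory repository, a threshold-based abstraction and a trigger-style use of context; all thresholds and names are illustrative:

```python
# Step 1: acquisition -- a virtual sensor converting raw samples into context data
def virtual_sensor(raw_samples):
    return {"avg_temp": sum(raw_samples) / len(raw_samples)}

# Step 2: storing -- append the context data to a simple in-memory repository
repository = []
def store(ctx):
    repository.append(ctx)

# Step 3: abstraction -- interpret raw values as high-level context
def abstract(ctx):
    return {"thermal_state": "hot" if ctx["avg_temp"] > 30 else "normal"}

# Step 4: utilisation -- context as a triggering condition for an action
def trigger(high_level_ctx):
    return "start ventilation" if high_level_ctx["thermal_state"] == "hot" else "idle"

ctx = virtual_sensor([29.5, 31.0, 33.5])
store(ctx)
print(trigger(abstract(ctx)))  # start ventilation
```

The same structure applies whether the “sensor” is physical hardware or a software module analysing data from other systems, which is exactly the point of the virtual-sensor notion in step 1.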

Although some context-aware systems have already been developed, they primarily focus on home or office environments. The design of such systems for shop-floor applications is ongoing. One trend is to adapt existing context-aware systems to manufacturing lines.

3.3.2.2 Context Modelling

The basis for context-aware applications is a well-designed context model, which enables applications to understand the user's activities in relation to situational conditions. The overall goal is to develop evolvable context-aware applications. The design of the general functions of such applications should therefore not be intertwined with the definition and evaluation of context information, which is often subject to change. A good context information modelling formalism reduces the complexity of context-aware applications and improves their maintainability and evolvability. In addition, since gathering, evaluating and maintaining context information is expensive, the re-use and sharing of context information between context-aware applications should be considered from the beginning. The existence of well-designed context information models eases the development and deployment of future applications. Moreover, a formal representation of context data within a model is necessary for consistency checking, as well as to ensure that sound reasoning is performed on context data. Existing approaches to context information modelling (or context modelling, as they are often referred to) differ in the ease with which real-world concepts can be captured by software engineers, in the expressive power of the context information models, in the support they can provide for reasoning about context information, in the computational performance of the reasoning, and in the scalability of the context information management [121].

The most common approaches to context modelling are key-value models, Object-Role Modelling, context information models, mark-up scheme models, graphical models based on UML, object-oriented models, logic-based models and ontology-based models [116]. The latest development for more complex systems is context modelling approaches that try to integrate different models and different types of reasoning in order to obtain more flexible and general systems [121].


Both informal and formal context models can be found in the literature. Informal context models are often based on proprietary representation schemes, without facilities for sharing the understanding of context between different systems. The Context Toolkit [122] represents context in the form of attribute-value tuples. Cooltown [123] uses a Web-based context model in which every object has a corresponding Web description. Existing formal context models address a certain level of context reasoning. Henricksen et al. [124] model context using both ER and UML models, allowing contexts to be managed with relational databases. Ranganathan and Campbell [125] represent context in the Gaia system as first-order predicates.

3.3.2.3 Context Modelling based on Ontologies

With the emergence and maturing of semantic web technologies (http://www.w3.org/2001/sw/), ontology-based context modelling has become a new trend in both academia and industry. Present research on context modelling mostly focuses on ontologies. For example, the EU-funded project inContext used OWL to build its context model to support collaborative working environments [126]. Sattanathan et al. [127] present an ontology-based approach for the specification and reconciliation of contexts of heterogeneous Web services.

An ontology is defined as “a formal explicit specification of a shared conceptualisation” [117], [118]. Shareable ontologies are a fundamental precondition for reusing knowledge, serving as a means for integrating problem-solving domain-representation, and knowledge-acquisition modules. Ontology-based methods offer many advantages, such as allowing context-modelling at a semantic level, establishing a common understanding of terms and meaning and enabling context sharing, reasoning and reuse [128].

A shared context is referred to as an ontology because the domain ontology provides a common understanding of the involved design concepts and of the topological relations between them. Essentially, a context ontology does not differ significantly from other knowledge-representation systems: each context contains a set of concepts that describe the basic terms used to encode knowledge in the ontology. Furthermore, each context contains a set of constraints that restrict the manner in which instances of these concepts may be created and combined. In addition to these basic functions, however, the role of context ontologies places a number of further requirements on the representation language.

Several semantic specification languages such as RDF [129] and OWL [130] provide potential solutions for context modelling (especially for the future pervasive computing environment where contextual information should be provided and consumed anywhere and anytime).

Luther et al. [131] discuss the logical foundations of OWL and examine how this modelling language can be used to express a user’s situation. All users – both human and machine – can share and understand the information available on the semantic web. RDF is a simple model which supports large-scale information management and processing in a variety of different contexts. The assertions from different sources can be combined and interlinked, providing more information together than they contain separately.
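RDF's triple model, and the way assertions from different sources can be combined and interlinked, can be illustrated without a full semantic-web stack; the plain-tuple “graph” below is a deliberately simplified stand-in for RDF, with an invented vocabulary:

```python
# Two sources assert (subject, predicate, object) triples about the same entity;
# the merged graph yields more information than either source alone.
source_a = {("operator-1", "locatedIn", "cell-A")}
source_b = {("cell-A", "partOf", "plant-1"),
            ("operator-1", "performs", "inspection")}

graph = source_a | source_b  # merging assertions is just set union of triples

def objects(graph, subject, predicate):
    """All objects o such that (subject, predicate, o) is asserted."""
    return {o for s, p, o in graph if s == subject and p == predicate}

# A tiny join across sources: which plant is the operator in?
cells = objects(graph, "operator-1", "locatedIn")
plants = {p for c in cells for p in objects(graph, c, "partOf")}
print(plants)  # {'plant-1'}
```

Neither source alone can answer the plant question; the answer only emerges from interlinking their assertions, which is the property the paragraph above attributes to RDF.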

The Context Ontology Language (CoOL) [132] uses the Aspect-Scale-Context (ASC) model. Each aspect is a set of scales defining the range of valid context information. The model includes a set of quality aspects (e.g. minError, timestamp), which may be assigned to context information using the hasQuality property [133].

3.3.2.4 Context extraction and reasoning

Based on the formal description of context information, context can be processed with contextual reasoning mechanisms. Since contextual information has some inherent features (it can be considered incomplete, temporal, and interrelated), context reasoning can exploit existing reasoning mechanisms to deduce high-level, inferred context from low-level raw contextual information. Furthermore, contextual reasoning can be used to verify, and possibly resolve, inconsistent context knowledge due to imperfect input.

For example, Luther et al. [131] show the needs for ontology support and reasoning in mobile applications; their case study is conducted with the Protégé knowledge workbench [134] for ontology modelling and OWL editing, and the RACER inference engine [135] for proof checking, ontology validation and classification.

A more flexible use of ontological reasoning is presented by Forstadius et al. [136]. Their framework utilises context awareness for service classification. Their model enables a flexible way of describing context-based rules, which can be used for constructing prioritised service lists and for recommending available services. In addition, the model provides a way of describing context-triggered actions, e.g. notifications. CONON [137] is an OWL-based context ontology that allows for logic-based contextual reasoning.


Three main categories of reasoning can be distinguished: deductive reasoning, Event-Condition-Action (ECA) reasoning, and statistical reasoning.
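A minimal sketch of the Event-Condition-Action category follows; the rule, event type and context keys are invented for illustration and are not drawn from any ProSEco model.

```python
# Event-Condition-Action (ECA) reasoning sketch. The rule, event type and
# context keys below are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EcaRule:
    event: str                             # event type that triggers the rule
    condition: Callable[[Dict], bool]      # predicate over the current context
    action: Callable[[Dict], str]          # action, here producing a message

rules = [
    EcaRule(
        event="machine_temperature_update",
        condition=lambda ctx: ctx["temperature"] > 80,
        action=lambda ctx: f"notify maintenance: {ctx['machine']} overheating",
    ),
]

def on_event(event: str, context: Dict) -> List[str]:
    """Fire every rule whose event matches and whose condition holds."""
    return [rule.action(context) for rule in rules
            if rule.event == event and rule.condition(context)]

print(on_event("machine_temperature_update",
               {"machine": "press-1", "temperature": 92}))
```

Deductive reasoning would instead derive new facts from axioms in the ontology, and statistical reasoning would infer context from data with learned models; the ECA style shown here is the most direct fit for context-triggered actions such as notifications.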

3.3.3 RELEVANT PROJECTS

3.3.3.1 K-NET

The key hypothesis of the project was that the context under which knowledge is collectively generated and managed can be used to enhance this knowledge for further utilisation within intra-enterprise collaboration. Accordingly, the project addressed topics that could enable efficient context-dependent knowledge sharing and re-use within networked enterprises. Thus, the aim of K-NET was to monitor (collaborative) knowledge generation/usage processes, to extract context from such processes and enrich the knowledge, and to apply the extracted context to support (re-)use of the knowledge in future work.

Additional details can be found at http://www.uninova.pt/i-control/k-net.

Relevance to and potential complementarity with ProSEco

The approach for context awareness and context identification in collaborative work developed in K-NET, together with its context models for collaborative work, will serve as a basis for context sensitivity in ProSEco. As indicated above, the key topic was to monitor (collaborative) knowledge generation/usage processes, to extract context from such processes and enrich the knowledge, and to apply the extracted context to support (re-)use of the knowledge in future work. These approaches for context awareness and context identification will be a basis for the context awareness approach in ProSEco, but will have to be adapted and extended to be applicable to the context of Meta-Products/Processes development.

3.3.3.2 SELF-LEARNING – Reliable self-learning production systems based on context aware services

The objective was to develop highly reliable and secure service-based self-learning solutions to enable tight integration of control and maintenance of production systems.

Approaches based on SOA principles, using distributed networked embedded services in the device space, are expected to be the most appropriate for implementing such self-learning solutions. Context awareness, which provides information about the processes and equipment and the circumstances under which the services operate, and allows them to react accordingly, is seen as a promising holistic approach to assure the self-learning adaptation needed to cope with changes in processes and equipment states. The project therefore developed context models and a self-learning context extractor, self-learning adapters for control, maintenance and parameter identification, as well as an SOA-based infrastructure including a security & trust framework and an agent-based middleware introduced in the service structure.

Additional details can be found at http://www.selflearning.eu/

Relevance to and potential complementarity with ProSEco

In particular, the SOA-based infrastructure as well as the context models and context extractor from Self-Learning could be of interest for ProSEco. These results will be extended/adapted in order to be applicable to the context information needed to make the services around the products context-sensitive.

3.3.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

One of the basic assumptions of the ProSEco project is that context monitoring services will allow for context awareness within PES and personalisation of services. Therefore a key task for ProSEco is the definition of a context model and ontologies to enable context awareness, taking into account the context of the different Meta Products’ use.

The approach for context awareness and context identification and extraction in collaborative work developed in the K-NET project could be a possible basis for the context awareness approach in ProSEco, but it would have to be adapted to the needs of enhancing product-services and their production processes.

The concepts and relations of the context model to be defined will be derived from reference models and algorithms for production systems and smart device spaces, where experience from the Self-Learning project could be reused. ProSEco will specifically focus on identifying models that characterise the use of products and processes. These models will be used to extract meta-data as extensions of an ontology to support context awareness. The ontologies applied for building context models in K-NET and Self-Learning, as well as in the current U-QASAR project (such as the K-Net Knowledge Context Ontology, the Self-Learning Ontology, the U-QASAR Team Model or the U-QASAR Activities Model), could be re-used. In addition, a number of standard ontologies (e.g. foaf, skos or doap) will be analysed for re-use within the ProSEco context model.

The context ontology shall represent the extracted information as explicit, machine-interpretable knowledge and will serve as a basis for context extraction, refinement and reuse. Technically, it is envisaged to use an OWL-based context ontology and to modify/extend it using an ontology editor such as Protégé [138]. The open-source semantic web framework Jena [139] and its SPARQL query engine [140] could be used to support persistence, updating and querying of the ontology, while reasoning could be done using existing tools such as Pellet [141] or RACER [135].

The existing context extractors from different projects (e.g. Self-Learning) could be used as a basis for the development of such solutions, but adaptations will be needed to apply them to the ProSEco objective of providing context-sensitive services around various products, as well as a knowledge-based decision support system for enhancing the process of building, maintaining and configuring Meta Products. An important aspect to be addressed is that the ProSEco solutions should be usable for decision making at various levels, i.e. at the operational level but also at the management level. Context sensitivity will be used to adapt the solution to various users, e.g. to present data to users depending on their roles and current context.

It should be stressed that context models and context extractors supporting context sensitivity of services around such a wide scope of products and services have not been investigated up to now.


3.4 Simulation support for the product process design

3.4.1 DEFINITION

Following meetings with the ProSEco industrial partners, it is apparent that their priority for support is the modelling and simulation of the digital service aspects of their innovative cloud-based web-enabled offerings. This is their priority because they all have decades of successful experience in developing physical goods and associated physical services, whereas innovative cloud-based web-enabled offerings are inevitably much more dependent upon digital services. Furthermore, their concern is with customer and end-user behaviour in cloud-based web-enabled business, because the complex behaviour of digital marketplaces is unfamiliar to them. Moreover, this is uncharted territory for many European industrial companies as they try to attract and retain new types of customers. Yet this territory must be explored and conquered, because of the increasing maturity of, and heightened competition in, their traditional industrial product-service markets. It is also important to note that cloud-based web-enabled innovative services can be complex on the supply side, because of the unpredictability of dealing with a wide variety of sometimes occasional suppliers in order to meet long-tail demand: demand comprising millions of markets of tens, rather than tens of markets of millions. Thus far, this demand has been addressed by, for example, Amazon, but industrial European companies deal in more complicated goods and related services than books.

In addition, cloud-based web-enabled innovative product-service offerings can involve prosumption, where consumers, to a greater or lesser extent, co-design, co-produce and co-create their own product-services. Accordingly, the modelling and simulation needs to encompass not only the supply side and the demand side, but also the intersection of supply and demand in prosumption. Thus, it is the business eco-system as a whole that must be considered, as well as how business eco-systems vary from territory to territory, for example from North America to China. The industrial partner Desma, for instance, aims to introduce an innovative cloud-based web-enabled product-service offering involving a shoe co-creation web-site and shop-sized shoe-making machines in selected high-volume shopping areas in the major cities of, for example, Europe, China and North America. This involves consideration of factors such as the relative effectiveness of different multi-channel retailing strategies and user take-up of a new type of co-creation website. It is not possible to model and simulate all aspects of different cloud-based web-enabled innovative product-service offerings in different territories. However, key topics for the literature review are: Web-based word-of-mouth marketing phenomena and other network effects; and differences between geographic territories arising from pervasive factors such as cultural differences.

The complex phenomena within cloud-based web-enabled innovative product-service offerings in different territories require modelling and simulation from two perspectives: top-down aggregate dynamic complexity involving broad patterns, and bottom-up emergent phenomena arising from the interactions of multiple agents such as individual consumers. Each of these perspectives has its own strengths, but neither is complete in itself. Accordingly, System Dynamics (SD) modelling/simulation (top-down) will be applied in conjunction with Agent-Based (AB) modelling/simulation (bottom-up). This involves considerable novelty: although the potential of this combined modelling/simulation has been recognised for some years, it has not been widely applied thus far. As a result, extant studies have not directly covered the specific topics outlined above. However, the literature review findings related to the SD/AB combination address research fields of broad relevance, such as the diffusion of innovations. Moreover, the extant studies provide insights into the potential and challenges of combining SD and AB.

The literature findings are reported in the remaining sub-sections as follows: combining System Dynamics and Agent-Based modelling; phenomena in consumer behaviour; and phenomena in business eco-systems. The findings from the literature will provide valuable background information as the specific key modelling and simulation foci emerge through further investigation of the business cases during the first year of the project.

3.4.2 RELEVANT METHODS AND TOOLS

3.4.2.1 System Dynamics, Agent-based modelling, and combining SD and ABM

Real-world systems are in many cases too complex for human brains to analyse. Dynamic complexity can arise from feedback loops, delays, nonlinearities, and even simple connections between agents. Humans are not able to see the outcomes of their own actions, especially when operating in a dynamic world where long delays occur between actions and their consequences. System dynamics and agent-based modelling are tools combining systems thinking and mathematical modelling which, together, can help decision makers to evaluate and understand what is actually causing problems and how to better influence system behaviour. The underlying assumption in both methodologies is that the structure of a system is more important than the individual actors in determining its behaviour.


Quite often in decision making, the feedback loops are left outside the scope and systems are examined with simple open-loop diagrams. However, when investigating the closed-loop system, i.e. the feedback system, the questions that seemed uninteresting in the open-loop view may turn out to be the most important part of the system, especially in the long term. For example, it is common to neglect the consequences of one's own actions. People also often treat closed systems as open systems and therefore completely miss the feedback view. This leads to event-oriented problem solving, where every event has a cause, which is in turn an effect of some earlier cause. This chain of cause and effect can be extended, presumably, until we find some “root cause”; however, as Sterman points out in [142], we probably lose interest before that.

Agent-based modelling is a computational approach to studying emergent phenomena arising from interactions of autonomous agents. It takes a bottom-up perspective on systems, unlike system dynamics, which is essentially a top-down approach. Basically, the idea in agent-based modelling and simulation is to study the system's macro-level behaviour, which is determined by a set of simple behaviour rules of individual agents. The behaviour rules can be, for example, information flows from agent to agent, competition, or cooperation. Recently, there has been a lot of research activity studying the characteristics of the network structures of agent interactions. The network, in this sense, determines with whom the agents interact. The structure of the network affects the system's macro-level behaviour to a great extent.

Why combine SD and ABM? There are synergies in combining these methods, because both SD and ABM have their pros and cons. For example, SD is very powerful for describing aggregate phenomena and dynamic complexity, whereas ABM is very powerful for describing emergent phenomena arising from the interactions of multiple agents.

Borshchev and Filippow [143] address in their paper how an existing system dynamics model can be built using ABM and what kind of additional value this brings. Schieritz and Größler [144] combine SD and ABM, using SD to describe the dynamics of the agents and ABM to describe the overall behaviour of the system. This way the agents have memory and act as dynamic decision makers, while the overall structure of the whole system can still reorganise itself, as is common in ABM. This kind of system restructuring is not possible in SD, and therefore combining the methods expands the applicability of SD. One way of integrating SD and ABM is building the agent-based model using system dynamics, as in the approaches of [145] and [146]. Such models can be called agent-oriented system dynamics models [145]. This approach overcomes the difficulties of combining two different software packages. Rahmandad and Sterman [147] studied the spreading of contagious diseases using SD and ABM and compared the benefits and drawbacks of both methods. Contagious diseases are closely linked to innovation adoption and other social phenomena that are analogous to processes of diffusion and social contagion.
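As a minimal illustration of the top-down SD perspective, the stock-and-flow sketch below integrates a contagion-style adoption model with explicit Euler steps; the parameter values are illustrative assumptions, not calibrated to any ProSEco business case.

```python
# Top-down SD sketch: one susceptible stock, one adopter stock, and a single
# word-of-mouth flow between them, integrated with explicit Euler steps.
# Parameter values are illustrative assumptions only.

def simulate_adoption(population=1000.0, contact_rate=0.3, steps=100, dt=1.0):
    susceptible = population - 1.0   # stock: potential adopters
    adopters = 1.0                   # stock: current adopters (one seed)
    history = []
    for _ in range(steps):
        # Flow: adoption driven by contacts between the two stocks.
        adoption_rate = contact_rate * susceptible * adopters / population
        susceptible -= adoption_rate * dt
        adopters += adoption_rate * dt
        history.append(adopters)
    return history

result = simulate_adoption()
print(round(result[-1]))   # adoption saturates near the total population
```

The classic S-shaped diffusion curve emerges: a slow start, rapid word-of-mouth growth, then saturation as the susceptible stock empties. An ABM of the same phenomenon would replace the aggregate stocks with individual agents on a network, which is exactly the trade-off Rahmandad and Sterman [147] examine.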

The combination of SD and ABM can better enable simulation of the business ecosystems that new product-services can stimulate and operate in. In particular, combining these approaches can better reveal what is missing from the ecosystem in order to support the new service.

3.4.2.2 Phenomena in consumer behaviour

One of the recent trends is increasing online sales. Recent findings imply, for example, that companies should look for geographical areas where traditional retail strategies do not succeed or do not meet the needs of consumers [148]. According to Fa et al. [148], the marketing strategy should be implemented in a way that increases the probability of word-of-mouth, and tailored to the demographics and other relevant characteristics of the given area. Another trend is the vanishing distinction between physical and online retailing: online and offline retailers are competing in novel and innovative ways due to the availability of online information independent of place [149]. The complexity of the system, and the consequent unpredictability of the success of different strategies and policies, call for cheaper and safer ways to build understanding. Simulation is an effective approach for this. Combining top-down system dynamics with bottom-up agent-based simulation as complementary approaches serves to study the complexity of changing markets.

Interest in the use of agent-based modelling in combination with traditional marketing research approaches has been growing, and at the moment there is a substantial amount of research covering the subject. Rand and Rust [150] claim that agent-based modelling is a very suitable method for understanding how complex marketing phenomena emerge from a set of simple decision rules. Traditional marketing research approaches, such as qualitative methods and a wide variety of quantitative statistical techniques, are not well suited to studying such emergent phenomena. Rand and Rust contributed by setting rigorous standards for how to conduct agent-based modelling, as the lack of standards has been one of the reasons why the acceptance of agent-based modelling has been hindered. Different seeding strategies for word-of-mouth (WoM) and viral marketing are among the areas to which ABM has been applied, for example in the work of [151] and [152]. This is probably due to the suitability of ABM for understanding different complex phenomena, for example mechanisms of innovation adoption (e.g. mechanisms of an agent's personal influence) or positive network externalities, that is, how the value of a product or service depends on the number of adopters. However, the research conducted so far on WoM marketing has produced conflicting results as to the best seeding strategy [151]. After comparing different strategies, Hinz et al. conclude in [151] that the best seeding strategy is to seed to well-connected people. According to [152], one obstacle in WoM is the difficulty of measuring actual performance, and there is therefore a need for a better understanding of how WoM creates value. They created an agent-based model which can be used to evaluate the social value created by a WoM seeding strategy.

Akkermans [146] studied strategies for the diffusion of innovations on social networks by means of agent-based modelling and simulation, finding that even though the structure of the social network cannot be known a priori, some characteristics of the network can be learned from aggregate sales data. [153] studied the effects of various network structures and relational heterogeneity on innovation diffusion within market networks, considering typical network topologies such as random networks, two-dimensional lattices, small-world networks and power-law-distribution networks.

There are several different ways of finding key players in networks [154]. Typically they are based on measures of centrality, such as degree, closeness and betweenness centralities, prestige, power and eigenvector-related measures, information centrality, flow-betweenness centrality, the rush index, influence path-transfer flow, optimal inter-centrality, and entropy [154]–[157].
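As a minimal illustration, degree centrality, the simplest of the measures above, can be computed on a tiny invented word-of-mouth network (the names and edges are hypothetical):

```python
# Normalised degree centrality on a tiny invented word-of-mouth network.

from collections import defaultdict

edges = [("ana", "ben"), ("ana", "eva"), ("ana", "dan"),
         ("ben", "eva"), ("dan", "flo")]

adjacency = defaultdict(set)
for u, v in edges:            # build an undirected adjacency structure
    adjacency[u].add(v)
    adjacency[v].add(u)

n = len(adjacency)            # number of nodes (5 here)
# Degree centrality: share of the other n-1 nodes a node is tied to.
centrality = {node: len(nbrs) / (n - 1) for node, nbrs in adjacency.items()}

# Seeding to the best-connected node, cf. the recommendation in [151]:
best = max(centrality, key=centrality.get)
print(best, round(centrality[best], 2))   # ana is tied to 3 of 4 others
```

The other centrality measures listed above differ in what they reward (short paths, brokerage positions, influential neighbours), so the "key player" they identify need not coincide with the highest-degree node.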

In addition to sets of key players, there is often an interest in finding community structures within networks, as products and services may be targeted at segments forming complex communities. There are several ways to detect and explore communities in networks [158]–[160]. However, it is a challenging problem (e.g. due to computational complexity) and thus there is no single solution suitable for every situation. Several families of methods fit different purposes: graph partitioning, hierarchical clustering, partitional clustering and spectral clustering [158]. Depending on the product-service, it is important to be aware of the relevant consumer network structures, e.g. various communities as potential target customer segments.

Building on this research, we consider aspects of marketing and the diffusion of innovations, such as word-of-mouth. However, as briefly mentioned earlier, the implications of simulations and empirical examples are somewhat contradictory [151]. Understanding word-of-mouth is important not only because of its benefits but also because of its adverse effects. It has to be examined and elaborated more thoroughly. In practice, the assumptions of the simulation models must be re-examined in different situations. Several characteristics of models are considered separately in the literature; nevertheless, a thorough study across all these situations is needed. A preliminary categorisation is set along four dimensions:

What marketing strategies should be used in the following situations?

o (1) characteristics of product service markets: SI, SIR, SIS e.g. [142]

SI (susceptible, infected): visible brand

SIR (susceptible, infected, recovered): invisible brand and immunity

SIS (susceptible, infected, susceptible)

o (2) categories of network externalities e.g. [161]

direct positive network externalities (e.g. Facebook): exposure thresholds

indirect positive network externalities (e.g. electronic invoicing, operating systems of computers and mobile devices): third party exposure thresholds

no network externalities (e.g. most consumer products): probabilities of spreading information

negative network externalities (e.g. services with limited access): over-exposure thresholds

both positive and negative externalities (e.g. fashion products): exposure threshold and over-exposure thresholds

o (3) characteristics of relevant networks and summary statistics e.g. [162]


scale free (e.g. network of acquaintances)

sparse network

dense community structures

small-world

components and their sizes

correlated authority and degree

o (4) network stability (temporal dimension in network formation)

stable (e.g. Facebook)

unstable (e.g. events of networking, workshops)

An interesting research question could be the effectiveness of WoM in different cultures, i.e. in low- or high-context cultures.
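A minimal agent-based sketch ties several of the dimensions above together: agents adopt once enough of their neighbours have adopted (an exposure threshold, as in the direct-positive-network-externalities case). The network, thresholds and agent names below are invented for illustration.

```python
# Threshold-based word-of-mouth diffusion: an agent adopts once the number of
# adopting neighbours reaches its exposure threshold. Network, thresholds and
# agent names are invented for illustration.

def diffuse(adjacency, thresholds, seeds, max_rounds=20):
    adopted = set(seeds)
    for _ in range(max_rounds):
        newly = {
            agent for agent, nbrs in adjacency.items()
            if agent not in adopted
            and len(nbrs & adopted) >= thresholds[agent]
        }
        if not newly:
            break                # diffusion has stalled
        adopted |= newly
    return adopted

adjacency = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b"}, "d": {"b", "e"}, "e": {"d"},
}
thresholds = {"a": 1, "b": 2, "c": 2, "d": 1, "e": 1}

print(sorted(diffuse(adjacency, thresholds, {"b"})))  # hub seed: full adoption
print(sorted(diffuse(adjacency, thresholds, {"e"})))  # peripheral seed stalls
```

With these thresholds, seeding the well-connected hub reaches the whole network, whereas seeding a peripheral agent stalls, echoing the seeding-strategy findings of [151]; raising the thresholds models over-exposure-sensitive (negative-externality) markets.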

3.4.2.3 Phenomena in business eco-systems

In biology, the study of ecosystems concentrates on how the living organisms and the non-living environment are interconnected and depend upon themselves and each other. This means that the living organisms compete and/or cooperate with each other, but they also depend on and shape the environment. This also works the other way: the environment sets opportunities and limits for the living organisms and, at the same time, as the environment changes it forces the living organisms to adapt to the new conditions.

The study of business ecosystems can in many ways be seen as the counterpart of its biological namesake: it is interested in the interconnections between business entities, both competition and cooperation. The environment is often analogous to a platform serving the whole ecosystem. In addition, the field borrows many analogies from ecology, e.g. niche, ecosystem service and environment. The ecosystem point of view stresses a holistic view of the world rather than a reductionist one. It addresses the feedback loops linking the entities together, as well as to the environment, i.e. they form complex networks of interaction [163]. A complex feedback system cannot be broken into smaller pieces; it needs to be studied as a whole, or otherwise the interesting behaviour is lost [164]. Such “business ecosystems” have lately been studied increasingly from many points of view [163], [165]–[171].

The need for the ecosystem point of view lies in the holistic view that traditional problem framing lacks. In ecosystems, cooperation, rather than competition alone, is often seen as a key to success, e.g. co-evolution. Co-evolution is a positive feedback loop, a reciprocal cycle, in which change in entity A affects change in entity B and vice versa [169]. Co-evolution describes how the actors in the ecosystem develop, whereas co-creation describes almost the same process but from the product's/service's point of view, i.e. how the value of the product/service is generated jointly by the company and the consumer. This has a close linkage to service and innovation research. In the open innovation approach, organisations deliberately open up the innovation process in order to accelerate their own innovation by utilising inflows and outflows of knowledge, while at the same time expanding the external market for new innovations [172]. Service research, especially service-dominant logic, emphasises the idea that value creation happens when the consumer uses the product/service; the company therefore co-creates value together with the consumer, i.e. the consumer is an endogenous part of the system [173], [174].

System dynamics and agent-based modelling are in general suitable methods for studying business ecosystems because they are holistic methods and can tackle dynamically complex problems with many interacting entities. The business ecosystem approach looks at the problem more holistically than approaches in which the ecosystem is divided into smaller segments (e.g. marketing, logistics, capacity, consumer demand, competitors, etc.). For agent-based modelling, see [167]; for system dynamics, see [165], [166].

[169] underlines that companies should be seen as part of a business ecosystem that crosses many industries, rather than as part of a single industry. As noted above, in a business ecosystem the emphasis is on cooperation as well as on competition; when a company is seen only as part of an industry, competition is overemphasised. A business ecosystem is built around a new innovation and has four phases of evolution: 1) birth, 2) expansion, 3) leadership, and 4) self-renewal/death. Managers usually focus on how to operate successfully in one of the four phases, rather than on how to move successfully from one phase to another while keeping leadership and competitive advantage.


In ProSEco, investigations suggest that the companies are in different phases of the evolution of their business ecosystems. For example, Desma is trying to set up a new ecosystem that competes with existing ecosystems. The interesting research questions are: How can Desma build a new ecosystem that supports its long-term goals (i.e. being an ecosystem leader)? What are the missing parts of the package that adds value to customers? How can Desma design co-operation holistically? How can it move from one phase to another without losing ecosystem leadership? Whereas Desma is struggling with the birth phase, Electrolux, Volkswagen and ONA are struggling with the self-renewal phase. They face totally different challenges: they are trying to keep the existing ecosystem competitive and alive in order to prevent a new ecosystem from emerging.

3.4.3 RELEVANT PROJECTS

As discussed above, the complexity of whole business ecosystems, and of consumer behaviour within them, needs to be considered in the introduction of innovative cloud-based web-enabled product-services. Business ecosystems and marketing strategies have been studied in several EU-funded projects, such as those summarised in the following paragraphs.

3.4.3.1 DBE - Digital business ecosystem

“The overall objective of the DBE was to provide Europe with a recognised advantage in innovative software application development by its SME industry, launching a disruptive technology paradigm for the creation of digital business ecosystems for SMEs and software providers, thus improving their value network.”

3.4.3.2 EBEST - Empowering business ecosystems of small service enterprises to face the economic crisis

“Addresses small companies providing services to other businesses, often by subcontracting, in different industrial sectors. The objective was to set up, experiment with and promote the adoption of new collaboration practices within each business ecosystem and across ecosystem borders, taking advantage of shared knowledge, an agreed corpus of rules and codes of practice, and a suite of ICT tools to support the intended ecosystem dynamics. The operational objectives included: presenting worldwide the offers of the single service companies and clusters, and the integrated offer of the ecosystem as a whole, by exploiting sectoral and multilingual ontologies and taxonomies.”

3.4.3.3 COIN - Collaboration and interoperability for networked enterprises

“The mission of the COIN IP was to study, design, develop and prototype an open, self-adaptive, generic ICT integrated solution to support the above 2020 vision, starting from notable existing research results in the field of Enterprise Interoperability (made available by the Enterprise Interoperability DG INFSO D4 Cluster and specifically by the projects ATHENA, INTEROP, ABILITIES, SATINE, TRUSTCOM) and Enterprise Collaboration (made available by the projects ECOLEAD, DBE, E4 and ECOSPACE).”

3.4.3.4 IMS - Marketing Strategy Implementation as Source of Firms´ Competitive Advantage

“How can innovative marketing strategies be effectively implemented? While marketing scholars agree widely on the importance of effective strategy implementation for strategies’ performance outcomes, effective implementation of innovative marketing strategies and its relationship to firm performance is still not well understood. As a result, while firms invest significant amounts of resources in the implementation of their marketing strategies, many implementation initiatives fall far behind expectations, and more often than not firms’ considerable investments in innovative strategies do not result in notable effects on their performance. Against this background, the proposed research aims at investigating the effective implementation of innovative marketing strategies and how strategy implementation efforts affect both the strategy-performance link and the sustainability of firms’ competitive advantage.”

31.03.2014

© ProSEco Consortium Page 33 of 139 D100.1 State of the Art Analysis Update

3.4.3.5 The Role of Marketing Actions in Accelerating the Time-to-Take-Off for Emerging Technologies: An Econometric Analysis

“Achieving rapid take-off for new technology-based product categories is a common managerial objective. The fast emergence of a dominant design can have a significant impact on competitive dynamics of a market. The central concern of this paper was how firms can reduce "take-off time" for an emerging technology. The intended contribution was to enrich the diffusion and technology adoption literature by improving our understanding of the strategic factors affecting marketing decisions in the context of emerging technologies and by extending the body of empirical work in the area of marketing strategy. Contribution was also made by providing managerial guidelines for achieving more rapid technology take-off.”

None of these studies has examined the complex nature of these dynamic phenomena. They have mainly used traditional qualitative and quantitative approaches, which cannot thoroughly tackle the dynamic complexity. System dynamics and agent-based modelling, as complementary approaches, can deepen and broaden the understanding of these complex phenomena.

3.4.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

The latest market trends may cause traditional marketing strategies to fail because of the changed environment. Failing marketing strategies may cause enormous losses to the companies responsible, for example due to adverse word-of-mouth effects. New strategies may be so elaborate that it is necessary to test them using simulations. Also, the contradictory implications of earlier research need to be re-examined and validated.

Most of the models presented in the literature are only conceptual and have not been tested by simulation. In ProSEco some of the relevant conceptual models can be tested and validated by simulation.

One important challenge that needs to be addressed is the difficulty of obtaining modelling/simulation data for market offerings that are new to the world. For example, if no data is available about a region’s social networks, it is very difficult to compare a model of a future market offering to reality, and therefore to formulate robust implementation strategies. To address this challenge, it is proposed to draw upon market reports and similar sources concerning the nearest equivalent existing market offerings, and to use qualitative research techniques such as focus groups to investigate potential reactions to new offerings.

One of the current challenges in combining System Dynamics and Agent Based simulation is the technical solution for connecting the modelling methods, together with possible time scale problems. The technical solution relates to the software to be used, as suitable tools are yet to be fully developed. The time scales of the two simulation methods also need to be considered, since in some cases the dynamics of individual agents may be far faster than the dynamics of the whole system. This can cause a “stiff” system problem, i.e. the time step required by the fast subsystem must be so small that the slow dynamics are corrupted by round-off errors. One way of overcoming the lack of suitable software is to use Python or another general-purpose programming language. [175]
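As an illustration of the coupling problem described above, the sketch below (plain Python; the adoption model and all parameters are invented for illustration) sub-steps the fast agent dynamics inside each slow system-dynamics step. This is one simple way to handle the two time scales without forcing the whole simulation to run at the fastest rate.

```python
import random

# Sketch (all parameters hypothetical): a system-dynamics (SD) stock,
# "market attractiveness", evolves with a large time step, while agents
# (potential adopters) are updated with a much smaller sub-step, so the
# fast agent dynamics do not impose a tiny step on the slow SD part.

DT_SD = 1.0                  # SD step, e.g. one month
SUBSTEPS = 30                # agent sub-steps per SD step, e.g. daily
DT_ABM = DT_SD / SUBSTEPS

class Agent:
    def __init__(self):
        self.adopted = False

    def step(self, attractiveness, dt):
        # adoption hazard scales with attractiveness and the sub-step size
        if not self.adopted and random.random() < attractiveness * dt:
            self.adopted = True

def simulate(n_agents=1000, horizon=24, seed=42):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    attractiveness = 0.05    # arbitrary initial SD stock value
    adopters = []
    for _ in range(horizon):
        for _ in range(SUBSTEPS):            # fast agent dynamics
            for agent in agents:
                agent.step(attractiveness, DT_ABM)
        n = sum(agent.adopted for agent in agents)
        # slow SD feedback: word-of-mouth raises attractiveness
        attractiveness += DT_SD * 0.1 * (n / n_agents) * (1 - attractiveness)
        adopters.append(n)
    return adopters
```

The adoption curve returned by `simulate()` is cumulative by construction; a real model would calibrate the hazard and feedback terms against market data.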


3.5 Life cycle management for product optimization applying eco-design perspectives

3.5.1 DEFINITION

3.5.1.1 Sustainability, life cycle thinking and life cycle management for eco-design

The concept of sustainable development was described in a 1981 White House Council on Environmental Quality report: “The key concept here is sustainable development” [176]. The concept of sustainability was first formulated in 1987 in the Brundtland Report, which states that the goal of sustainability is to “meet the needs of the present generation without compromising the ability of future generations to meet their own needs” (World Commission on Environment and Development (WCED), 1987) [177]. The Brundtland Report recognized that economic development taking place today can no longer compromise the development needs of future generations. This concept of sustainable development aimed to encourage people to reflect on the harm economic development was having on both the environment and society. Accordingly, the report highlighted three fundamental components of sustainable development: environmental protection, economic growth and social equity, the so-called triple bottom line (see Figure 3).

Figure 3 – Triple bottom line

Life cycle thinking is essential to sustainable development. It means going beyond the traditional focus on the production site and manufacturing processes to include the environmental, social and economic impact of a product over its entire life cycle (see Figure 4). The main goals of life cycle thinking are to reduce a product’s resource use and emissions to the environment as well as to improve its socio-economic performance throughout its life cycle. This may facilitate links between the economic, social and environmental dimensions within an organization and throughout its entire value chain.

Figure 4 – Life cycle of the product


Life cycle thinking expands the established concept of cleaner production to include the complete product life cycle and its sustainability.

Life cycle management (LCM) is the systematic application of life cycle thinking in business practice with the aim of providing more sustainable products and services. In the past, products were designed and developed without considering their adverse impacts on the environment. Typical factors considered in product design included function, quality, cost, ergonomics and safety. However, no consideration was given specifically to the environmental aspects of a product throughout its entire life cycle, especially at the beginning (extraction of raw materials) and end of the life cycle (use, maintenance, reuse and recycling stages). Often, adverse impacts on the environment arose from life cycle stages such as use, end-of-life, distribution and raw material acquisition. Without addressing the environmental impacts of the entire life cycle of a product, one cannot resolve all the environmental problems accruing from both the production and the consumption of the product.

To take all the above into consideration, a new concept known as Eco-design has emerged. The Ecodesign concept, developed by the World Business Council for Sustainable Development (WBCSD) at the Rio summit, is the culmination of a holistic, conscious and proactive approach. Ecodesign makes it possible to design a product or service while minimizing its impacts on the environment. It influences every stage of the product life cycle: raw material extraction, production, packaging, distribution, use, recovery, recycling, etc. [178]

Ecodesign is a key concept for sustainability and responsible consumption because it incorporates new ideas: the new system-product vision, the life cycle concept and the integration of all the actors involved in the improvement of all the environmental aspects of products.

Ecodesign is therefore a new approach to product design. It relies on identifying the environmental aspects connected with the product and including them in the design process at an early stage of product development, together with other aspects such as functionality, security, viability, facticity, etc.

Eco-design is therefore an essential way to reduce the environmental impacts and economic cost of processes and systems, as well as of products. Thus, this project proposes an eco-design method that can generate one of the best design solutions taking into account user requirements and economic and environmental impacts; however, no mathematical optimization model will be used to develop this solution, since both quantitative and qualitative aspects are considered very important.

3.5.1.2 Ecodesign definition and methodology

Ecodesign is a concept that integrates multifaceted aspects of design and environmental considerations.

Different definitions of Ecodesign can be found in the professional literature, depending on the main objective attributed to the concept; two have been selected here:

The first considers that the main objective of Ecodesign is to create sustainable solutions that satisfy human needs and desires, and arrives at the following definition:

“Sustainable solutions are products, services, hybrids or system changes that minimize negative and maximize positive sustainability impacts - economic, environmental, social and ethical - throughout and beyond the life-cycle of existing products or solutions, while fulfilling acceptable societal demands/needs” [179].

The second considers that the aim of Ecodesign is primarily to reduce the adverse impact of a product on its environmental, economic and social context, leading to the following definition:

“Through an intelligent utilization of the available resources, Ecodesign aims at a product and process design that ensures maximum benefit for all actors involved as well as consumer satisfaction, while causing only minimum environmental impacts” [180].

From the latter definition the following guiding principles for Ecodesign can be derived:

Service orientation

Resource efficiency

Use of renewable resources

Multiple use

Flexibility and adaptation abilities


Failure tolerance and risk prevention

Ensuring work, income and quality of life.

The design process differs depending on the type of company and also on the type of product or service. In practice, industries combine different approaches and tools to apply Ecodesign and develop their products.

In smaller organizations product development may be performed by just one person, who probably works more intuitively than formally; in large enterprises it is usually formalized. Therefore, when applying Ecodesign, a company (small or large) must decide which changes and tools to incorporate into its usual design methodology and which objectives to achieve with them.

Moreover, the benefits derived from the implementation of these changes will also differ and can be classified into the following four levels:

Figure 5 - Levels of improvement and benefits derived from the application of different eco-innovation strategies in a company

The implementation of the different strategies implies different improvements for the company, but also different implications and time requirements. The first two levels (Levels 1 and 2) can be understood as eco-innovation (implying a strong sustainability commitment by the company to the creation of a new, sustainably sound product/service, which may imply a new orientation of the company), while the last two (Levels 3 and 4) correspond to ecodesign proper (implying the re-design of an existing product taking sustainability issues into account).

Therefore the scope of the changes will always depend on the objectives of the company.

In the professional literature it is easy to find different methodologies and tools for ecodesign. Here we present the one explained in the eco-union.org report [181], in order to understand the whole process and the stages that differ most from the conventional design process: the prior environmental analysis (to identify the problems to be solved) and the integration of ecodesign strategies into the design process.


Figure 6 – Schema of ecodesign methodology

During the Design and Development phase, the generation and development of new concepts is necessary. These new concepts (different product lines) will be developed collaboratively in the Cloud Manufacturing environment by conceptualizing the selected strategies that answer the environmental requirements. Afterwards, the different conceptual proposals have to be analysed taking into account the environmental, technical, economic and social requirements previously identified, in order to choose the concept with the greatest potential viability. Collaboration tools will therefore be required to optimize this process.

3.5.1.3 Ecodesign Strategies

Ecodesign strategies are environmental improvement proposals that are incorporated into the design process in order to prevent environmental impact of the product.

Ecodesign strategies are concrete actions, such as using recycled material, reducing the number of production stages or implementing renewable energy sources, that respond to the environmental critical points detected in the initial environmental analysis and reflected in the eco-briefing.

These strategies are oriented towards reducing resource consumption (materials, water, energy, soil, etc.) and/or minimizing the waste generation (emissions to air, water or soil) associated with the life cycle of the product or service.

To simplify the selection of ecodesign strategies, they are grouped into eight blocks depending on the life cycle stage they mainly affect, in order to optimize or minimize the impact of that stage.


Figure 7 - Ecodesign strategies along the life cycle in order to optimize the impact of each phase

Designing products that give maximum use with minimal environmental impact is central to eco-design. To do this, a good knowledge of the product’s environmental impact from a life-cycle perspective (raw materials, manufacture, use, waste management and transport), as well as of how to work with product development, is necessary. The focus is on making sure that the product will do its job and provide its benefit in the best way.

Correctly implemented, ecodesign means that products are cheaper to manufacture, since they require fewer materials and less energy. It also means that products are more attractive to customers.

3.5.2 RELEVANT METHODS AND TOOLS

A range of life cycle tools and methods exists for guiding life cycle management (see Figure 8) and for ecodesign analysis and strategies.

Figure 8 - Life Cycle Management connecting various operational concepts and tools: life cycle thinking, the business case for sustainability, procedures and methods, and tools and techniques such as checklists, LCA, LCC, Material and Substance Flow Analysis (MFA/SFA), Input/Output Analysis, Material Input per Unit of Service (MIPS), Cumulative Energy Requirements Analysis, Cleaner Production Assessment (CPA) and the MET matrix

The EcoDesign Checklist is a checklist of questions that provides support for the analysis of a product’s impact on the environment. It provides relevant questions that need to be asked when establishing environmental bottlenecks during the product life cycle [182]. The checklists should be prepared and implemented by people with sufficient experience with the product to be analysed, and must contain:

o Analysis of needs: product functionality

o Life cycle of the product

Life Cycle Assessment: When considering the environmental implications of product and process design, one should think beyond the cost, technology and functional performance of the design and consider the broader consequences at each stage of the value chain. This fundamental principle, life cycle thinking, has motivated the development of life cycle assessment [183]. LCA is the most commonly used method to identify, quantify, evaluate and prioritize the potential environmental impacts directly attributable to products. In the ISO 14040 standards, LCA is defined as the “compilation and evaluation of the inputs, outputs and the potential environmental impacts of a product system throughout its life cycle”. When used in sustainable design, LCA is intended to incorporate environmental factors into early design phases to support the comparison of design options and the identification of improvement potentials, such as for material selection, manufacturing process methods and recycling strategies, and for revealing environmental profiles [184].
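As a minimal illustration of how LCA-style stage data can support the comparison of design options, the sketch below (Python, with invented impact figures for a single indicator such as kg CO2-eq) sums stage impacts for two hypothetical design options and picks the lower-impact one.

```python
# Invented single-indicator impact figures (e.g. kg CO2-eq) for two
# hypothetical design options, broken down by life cycle stage.
STAGES = ["raw materials", "manufacturing", "distribution", "use", "end-of-life"]

option_a = {"raw materials": 12.0, "manufacturing": 8.5,
            "distribution": 1.2, "use": 30.0, "end-of-life": 2.0}
option_b = {"raw materials": 15.0, "manufacturing": 7.0,
            "distribution": 1.2, "use": 22.0, "end-of-life": 1.5}

def total_impact(impacts):
    """Sum one option's impacts over all life cycle stages."""
    return sum(impacts[stage] for stage in STAGES)

# option B trades a heavier raw-material stage for a lighter use stage
best = min([("A", option_a), ("B", option_b)],
           key=lambda item: total_impact(item[1]))[0]
```

The per-stage breakdown is the point: option B has the worse raw-material stage but the better life-cycle total, which a production-site-only comparison would miss.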

Life Cycle Cost (LCC): The methodology of life cycle assessment (LCA) is a worldwide technique that makes it possible to identify and quantify the environmental impact of goods and services during their entire life cycle; however, the LCA methodology does not take financial aspects into account. The LCC (Life Cycle Cost) methodology therefore exists in order to analyse both environmental and economic aspects [185]. LCC is the summation of cost estimates from inception to disposal, for both equipment and projects, as determined by an analytical study and estimate of the total costs experienced in annual time increments during the project life, with consideration for the time value of money. LCC is an economic model over the project life span.
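The “time value of money” aspect of LCC can be sketched as a simple net-present-value summation. All figures below are invented for illustration:

```python
# Hypothetical cost figures: acquisition at year 0, a constant annual
# operating cost, and a disposal cost at the end of the project life,
# all discounted to present value at a chosen rate.

def life_cycle_cost(acquisition, annual_operating, disposal, years, rate):
    """Net present value of all costs over the project life span."""
    npv = acquisition
    for t in range(1, years + 1):
        npv += annual_operating / (1 + rate) ** t
    npv += disposal / (1 + rate) ** years
    return npv

lcc = life_cycle_cost(acquisition=10_000, annual_operating=1_500,
                      disposal=800, years=10, rate=0.05)
```

A higher discount rate shrinks the weight of late-life costs, which is why the choice of rate can change which design alternative looks cheapest over the life span.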

Material and Substance Flow Analysis (MFA, also referred to as substance flow analysis or SFA), according to the UNEP definition [186], is an analytical method for quantifying flows and stocks of materials or substances in a well-defined system. MFA is an important tool for assessing the physical consequences of human activities and needs in the field of Industrial Ecology, where it is used on different spatial and temporal scales. Examples are the accounting of material flows within certain industries and connected ecosystems, the determination of indicators of material use by different societies, and the development of strategies for improving material flow systems in the form of material flow management.

Material input per unit of service: The MIPS concept measures the material and energy intensity of processes, products, infrastructure and services in our economic system. It uses a resource indicator to measure the environmental performance of a cradle-to-grave business activity. Calculations are made per unit of delivered “service” or function of the product during its entire life cycle (manufacturing, transport, packaging, use, reuse, recycling, new manufacturing from recycled material, and final disposal as waste). MIPS is thereby defined for service-yielding final goods and not for raw or auxiliary materials that enter the manufacturing process of the final good [187].
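A MIPS figure is essentially the total cradle-to-grave material input divided by the number of service units delivered. A minimal sketch with invented numbers (a hypothetical washing machine measured per wash cycle):

```python
# Hypothetical cradle-to-grave material inputs (kg) for a washing
# machine, and the number of service units (wash cycles) it delivers.
material_input_kg = {
    "manufacturing": 1200.0,
    "transport": 60.0,
    "packaging": 15.0,
    "use (water, detergents)": 18000.0,
    "end-of-life": 40.0,
}
service_units = 2000          # expected wash cycles over the product life

# MIPS: material input per unit of service (kg per wash cycle)
mips = sum(material_input_kg.values()) / service_units
```

Because the denominator is the service delivered, a more durable machine (more wash cycles from the same material input) improves its MIPS even with an unchanged bill of materials.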

Cumulative energy demand (CED) of a product represents the direct and indirect energy use throughout the life cycle, including the energy consumed during the extraction, manufacturing and disposal of the raw and auxiliary materials. Different concepts for determining the primary energy requirement exist. For CED calculations one may choose the lower or the upper heating value of primary energy resources, where the latter includes the evaporation energy of the water present in the flue gas. Furthermore, one may distinguish between the energy requirements of renewable and non-renewable resources [188].

The MET matrix is a qualitative or semi-quantitative environmental analysis method applied to provide a general view of the inputs and outputs of each phase of the product life cycle and to identify the main environmental aspects and possible environmental improvement options. Prioritization of environmental aspects is based on environmental knowledge, although the MET matrix can also make use of quantitative data. The relatively simple structure of the matrix allows the ecodesign team to analyse all phases of the product life cycle (vertical analysis) and the various environmental impacts associated with each phase (horizontal analysis). This is achieved by grouping environmental aspects into three main categories: Material use, Energy use, and Toxic materials and emissions, including waste [189].
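The matrix structure described above can be sketched as a simple data structure supporting both vertical (per phase) and horizontal (per category) analysis; the phases and example entries below are illustrative only:

```python
# Life cycle phases as rows and the three MET categories as columns;
# entries are qualitative notes. Phases and example aspects are
# illustrative only.
MET_CATEGORIES = ["Material use", "Energy use", "Toxic emissions and waste"]
PHASES = ["Raw materials", "Production", "Distribution", "Use", "End-of-life"]

met_matrix = {phase: {cat: [] for cat in MET_CATEGORIES} for phase in PHASES}

# recording example environmental aspects in the matrix
met_matrix["Production"]["Energy use"].append("high electricity demand of moulding")
met_matrix["Use"]["Toxic emissions and waste"].append("solvent vapours during cleaning")

def vertical_analysis(phase):
    """All aspects recorded for one life cycle phase (one matrix row)."""
    return met_matrix[phase]

def horizontal_analysis(category):
    """One impact category across all phases (one matrix column)."""
    return {phase: met_matrix[phase][category] for phase in PHASES}
```

Scanning a row shows everything one phase contributes; scanning a column shows where one impact category concentrates along the life cycle.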


3.5.3 RELEVANT PROJECTS

3.5.3.1 e-CUSTOM Project

Today’s market demands that mass production be replaced by mass customisation. The need to satisfy individual customers’ requirements is now stronger than ever.

To that end, the e-CUSTOM project, like ProSEco, aimed at integrating the customer into the design phase of highly personalised products and at allowing the decentralised manufacturing of these products at low cost, high quality and with a reduced environmental footprint.

The project yielded four different exploitable results in close cooperation with committed end users from the automotive, CNC machine and healthcare sectors who expressed the true industrial needs.

The first set of exploitable results (User Adaptive Design System – UADS) engages the customers in the design and development of personalized products from the initial product design phase up to the after-market segment. The web-based platform allows customers to modify a set of characteristics, including materials choice as well as the modification of the standard geometry of parts belonging to a carefully chosen, personalisation-enabling, series of components of different models and variants. Augmented Reality visualisation and product personalisation tools have been developed and have also been deployed to mobile devices running Android OS.

The second set of exploitable results is the Decentralized Manufacturing Platform – DEMAP, also deployed as a mobile app, which focuses on reaching an efficient level of decentralised manufacturing. Depending on the selected customisation options, certain manufacturing processes can be carried out by the material/parts suppliers or by the local distributors and/or service providers in a coordinated manner.

The third set of exploitable results is the Environmental Assessment Module – EAM for the quantified estimation of the environmental footprint of the possible solutions in order to consider it when deciding on the most appropriate manufacturing solution.

Finally, the development of the Network Infrastructure and Systems Integration - NISI ensured the integration and interoperability between all software components through web services. The tools have been designed and developed using free and open-source software, the JAVA programming framework and non-proprietary database management systems.

3.5.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

Eco-design methodology especially applicable to SMEs. Ecodesign is the new paradigm for product design. It relies on identifying the environmental aspects connected with the product and including them in the design process at an early stage of product development, together with other aspects such as functionality, security, viability, facticity, etc. As summarized in this section, this methodology includes different stages and phases, which involve different team members with different specific knowledge. During this project we may therefore analyse the cost implications of this methodology (time, persons, technical and environmental knowledge needed, etc.) in order to simplify it as far as possible through ProSEco software tools in which most of the external knowledge may be embedded (configuration, collaboration, virtual scenarios, etc.).

Explore the links between collaboration and the lean paradigm. One of the aims of the ProSEco project will be to explore the links between collaboration and lean manufacturing, as this provides a platform to achieve process optimisation and minimise waste as well as having an impact on the environment. A lean organisation needs to understand what value means for a specific customer. Looking at the lean paradigm as a process that aims to eliminate waste, its principles can be applied to other areas such as healthcare, education, government and the supply chain, and, specifically for ProSEco, to new product development (NPD) [87]; [88].

Integration of Lean Thinking, Life Cycle Thinking and the Green Paradigm. Given that manufacturing operations, through product design and process technologies, can critically influence environmental performance [190], [191], the relentless pursuit of waste minimisation embedded in lean management practices [87] opens doors for continued efforts in reducing risks to the environment [192]. In fact, waste is the common denominator of lean and green management [193]. The continuous effort through lean management to reduce operational waste, whether from discarded materials, consumption of energy or water usage, translates into lower environmental harm, thus enhancing environmental performance (the green paradigm) [191], [194]. Dues et al. [195] describe how both the Lean and Green paradigms look into integrating product and process redesign in order to prolong product use or enable easy recycling of products, as well as making processes more efficient, i.e. less wasteful [196].


3.6 Lean design principles in the overall life cycle of the product including organizational aspects

3.6.1 DEFINITION

Lean Management can be considered a management philosophy. It focuses on maximizing the value brought by the organization to its customers and on building a smooth flow of products (e.g. by minimizing waste). The roots of lean management are closely connected with manufacturing systems. The term “lean” was coined by Krafcik [197] and elaborated in greater detail by Womack, Jones and Roos [198]. Womack and Jones [87] identified five principles of lean management: define value from the customer perspective, organize all value-adding activities in a value stream, create continuous flow, implement a pull system that reacts to current customer needs, and continuously improve your processes. The importance of the lean approach has grown over the last two decades and it has become one of the dominant approaches to managing production systems. Lean management implementation is also gaining ground outside of manufacturing [199].

The application of the lean management approach in a product development setting is known as Lean Product Development (LPD). As there is no single definition of LPD in the literature [200], the authors of this deliverable propose to identify LPD with the concept of applying the lean management approach to product development processes, where lean management is understood as the approach based on the five principles derived from Toyota and described above.

LPD is a management approach that should be interpreted as a system [201]–[204]. It should not be considered a set of tools or methods from which one can pick selected ones and expect improvements; it is a system of related rules, techniques and components that should be used together in order to achieve the best results. In lean management, process steps that do not add value from the perspective of the customer are called waste. In product development the wastes are [205]:

Overproduction of information;

Overprocessing of information;

Miscommunication of information;

Stockpiling of information;

Generating defective information;

Correcting information;

Waiting of people;

Unnecessary movement of people.

Wastes may be difficult to notice but they often lead to visible problems. According to Wheelwright and Clark [206], these problems in product development processes may include:

resource unavailability (due to multi-tasking, firefighting, unbalanced needs or not stabilized processes);

slow hand-offs;

old customs (unnecessary processing or reviews);

poor decisions in the front-end development phase;

delays in initial development phases;

too detailed design;

problems with suppliers;

shortening time for production preparation;

late design changes;

narrow engineer specialization;

fuzzy responsibility.


Additional problems in contemporary meta-product development processes may include late deliverables, poor-quality design, delays in new product development, communication barriers due to the geographical dispersion of employees working together and to background differences (IT specialists, mechanical engineers, electronic engineers, etc.), as well as difficulties in tracking current project performance at the operational level.

3.6.2 RELEVANT METHODS AND TOOLS

Even though LPD should be interpreted as a system, it comprises various tools and methods. Some of them, relevant for designing Product Service Systems, are described below.

3.6.2.1 Workload levelling – Heijunka

In product development processes, peaks of workload alternate with timeframes with little work. In the first case key company resources become overloaded; as a result deliverables are delivered late, coordination becomes difficult and firefighting becomes common.

The Heijunka Box (Table 1) is a tool used to level the workload. In a product development context it may take the form of a box or matrix with one row per engineer or developer and columns representing a timeframe, usually a day, with space corresponding to the time available for work. Tasks are placed in the available slots together with information on how long they should take and who ordered them.

Table 1 – An example of a heijunka Box

Name    Time   Monday     Tuesday    Wednesday  Thursday   Friday
John    8-12   Project A  Project A  Project D  Project D  Project F
        12-16  Project A             Project D  Project D
Steven  8-12   Project B  Project C  Project E  Holiday    Holiday
        12-16             Project C  Project E  Holiday    Holiday
Rachel  12-16  Project D  Project A  Project A  Project A  Project A

This tool enables controlling the work scheduled for an engineer or developer and avoiding overburden. When urgent tasks occur, other tasks must be physically moved from the box into an available slot, with the acceptance of the person who ordered them.
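The mechanics of the box can be sketched in a few lines of Python (all names, capacities and projects are illustrative assumptions, not taken from the deliverable):

```python
# A minimal sketch of a heijunka box as a data structure: one row per
# engineer, one column per day, each cell holding at most
# `capacity_hours` of scheduled work.

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def new_box(engineers):
    """Create an empty heijunka box."""
    return {eng: {day: [] for day in DAYS} for eng in engineers}

def schedule(box, engineer, day, task, hours, capacity_hours=8):
    """Place a task in a slot only if it does not overburden the engineer.

    Returns True when the task fits, False when the slot is full --
    mirroring the rule that urgent work may only displace existing
    tasks with the agreement of whoever ordered them.
    """
    slot = box[engineer][day]
    used = sum(h for _, h in slot)
    if used + hours > capacity_hours:
        return False  # slot full: renegotiate instead of overloading
    slot.append((task, hours))
    return True

box = new_box(["John", "Steven"])
assert schedule(box, "John", "Mon", "Project A", 8)
assert not schedule(box, "John", "Mon", "Urgent fix", 2)  # would overburden
```

The point of the sketch is that overburden is rejected structurally, rather than detected after the fact.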

3.6.2.2 Value Stream Mapping

Product development takes a long time, sometimes several months or even years. A long development lead time is a risk: it widens the gap between gathering user requirements and introducing the product to the market, so the requirements may be outdated by the time the product arrives. Also, when eliminating waste from their processes, organisations face a risk of sub-optimization.

Value Stream Mapping for product development is an analytical tool for assessing existing development processes and re-designing them based on lean management principles. By gathering knowledge on the service family to be analysed, the consecutive process steps, customer demand, etc., one can depict the current state of the process. Once this is done, one can design the future state together with an action plan and start implementing the changes [199].

Value Stream Mapping supports shortening the development lead time and eliminating waste. By applying a holistic perspective it supports global optimization of development processes and helps to avoid sub-optimization.
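The quantitative core of a current-state map can be illustrated with a short Python sketch (step names and durations are invented for illustration): each step carries value-adding processing time and non-value-adding waiting time, and the ratio of the two exposes how much of the lead time is waste.

```python
# Hypothetical value stream for a development process: (step name,
# value-added days, waiting days). Process-cycle efficiency is the
# share of the total lead time that actually adds value.

steps = [
    ("gather requirements",  5, 10),
    ("concept design",       8, 20),
    ("detailed design",     15, 30),
    ("prototype and test",  10, 12),
]

value_added = sum(va for _, va, _ in steps)
lead_time = sum(va + wait for _, va, wait in steps)
efficiency = value_added / lead_time

print(f"lead time: {lead_time} days, efficiency: {efficiency:.0%}")
```

A future-state design then attacks the waiting columns, since they dominate the lead time in this example.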

3.6.2.3 5S

Companies struggle with implementing lean management and continuous improvement. They face problems such as long times wasted searching for things, a lack of process stabilization, or carrying out processes that the organization has got used to performing but that are not really needed by its customers (so-called monuments).

A method that should serve as a base for lean management implementation is 5S. It is a set of five interrelated practices that should be treated as a system. It focuses on:

sorting the needed items from the unnecessary ones, which are disposed of;

setting in order the items that are left according to the rule “a place for everything and everything in its place”;

cleaning the kept items that are important to the process;

maintaining the cleanliness by regular review and repeating of the first three steps;

ensuring discipline in the 5S system.

5S enables the development of good work organization in both real and virtual environments. It supports the creation of stabilized processes by eliminating disruptions, unnecessary elements and monuments.

3.6.2.4 LAMDA Cycle

Product development companies struggle with employee learning and with ensuring that engineers fully understand development problems. The application of the PDCA cycle to problem solving in a product development context is also difficult, as engineers tend to downplay the plan and check phases [203].

The LAMDA cycle was introduced by A. Ward. It comprises five steps [207]:

look – go and see, make sure the best information possible to solve a problem is available;

ask – ask questions to understand the root cause of a problem (using tools like the fishbone diagram, five whys, etc.) and to identify who may know useful things about the issue;

model – utilize analysis, simulations, models, prototypes to anticipate the results and to share a common picture of a problem;

discuss – discuss models with people impacted by the solution, the experts identified in ask step and with the person who will make the final decision about the actions to be taken and make a decision;

act – take action on the decision.

LAMDA enables continuous, independent learning and in-depth understanding of the topic. It is also a tool for systematic problem solving.

3.6.2.5 Front-Loading

A tough challenge for many companies is problem identification, problem solving and, often in consequence, design changes that occur late in the project. The later in the project an issue is identified and worked on, the more time it may take to fix [208].

At Toyota the early phase of development with intense engineering activity is referred to as kentou. In this phase various product models are created (e.g. from clay). The cross-functional module development teams meet and acquire knowledge on the different technical perspectives of these models and design proposals [204].

This way of working supports early identification and elimination of problem root causes and minimizes late problem solving and late design changes. Additionally, front-loading provides a way to control the variation inherent in the product development process [204], which enables reducing queues and avoiding delays during the execution phase.

3.6.2.6 Takt Time – Flow – Pull

As noted above, product development companies struggle with unbalanced processes for introducing new products and with waste in their communication channels.


Takt time is the pace of customer demand and helps to identify the pace at which an organization should work. Flow is a way of handing work over to the next process: it should be done not in batches but in smaller pieces (ideally of constant size). In a pull system, work is conducted based on real customer (internal or external) demand; the supplier does not process anything unless the customer indicates a real need.

Thanks to these three concepts a company can start development work according to real project needs [203].
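Takt time follows from the standard formula (available work time divided by customer demand); the figures below are illustrative, not drawn from the deliverable:

```python
# Takt time: the pace at which one unit of work must be completed in
# order to meet customer demand.

def takt_time(available_time, demand):
    """Available work time per period divided by demand per period."""
    return available_time / demand

# Illustrative: 20 working days of 8 hours per month, 12 feature
# requests per month -> hours of capacity per request.
pace = takt_time(20 * 8, 12)
print(f"one request every {pace:.1f} working hours")
```

Comparing this pace with the actual completion rate shows whether the development process is ahead of or behind demand.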

Variability in product development processes (including variability in process complexity, in the difficulty of the problems being solved and in the amount of input to the process) extends project lead times. One of its causes is the fact that managers and engineers have their own ways of conducting their duties, which differ from one specialist to another.

Standardized work is the best currently known way of doing a task. It should be identified, documented and shared, and can be used for planning, executing and documenting projects.

Standardized work enables variability reduction, as every disruption can be easily noticed; it makes introducing new employees to a job easier, reduces overburden and is a base for further improvement [207].

3.6.2.7 Usable Knowledge

Unsuccessful development projects often result from a lack of sufficient knowledge available at the right time and place. Tools that help to solve this problem include trade-off curves.

Trade-off curves describe the boundary parameters for each design concept. They capture a relationship between two or more parameters, linking a design decision to a factor important to the customer.

Product development companies should use tools like trade-off curves, design guidelines and others that enable knowledge creation, capture and sharing.
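A trade-off curve can be stored as a small set of measured points and interpolated when a new design decision is evaluated. The sketch below uses invented numbers (e.g. wall thickness versus stiffness) purely for illustration:

```python
# Hypothetical trade-off curve: pairs of (design parameter, customer-
# relevant factor), e.g. (wall thickness in mm, measured stiffness).
# Linear interpolation answers "what does this design decision mean
# for the customer?" within the explored region.

curve = [(1.0, 40.0), (2.0, 65.0), (3.0, 80.0), (4.0, 88.0)]

def lookup(curve, x):
    """Linearly interpolate the curve at design parameter x."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside the explored design region")

assert lookup(curve, 2.5) == 72.5
```

Raising an error outside the measured region reflects the spirit of trade-off curves: they record proven knowledge, not extrapolated guesses.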

3.6.2.8 Quality Function Deployment

People from various disciplines working together on one product tend to have different points of view on certain aspects of the product. Amid this variety, everybody must also understand the "voice of the customer", an important task that not all product development organisations cope with.

Quality Function Deployment (QFD) is a decision-making tool used by multidisciplinary teams. It is a scheme in the form of a "house of quality". Participants of QFD sessions include multidisciplinary teams and the suppliers of critical components [207].

QFD binds multidisciplinary teams together and helps them agree on common product specifications in line with the "voice of the customer". It can therefore help to eliminate many design changes.
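The quantitative core of the house of quality can be sketched as follows (the requirements, characteristics and the usual 9/3/1 strength scale are illustrative assumptions): customer requirements are weighted by importance, related to engineering characteristics, and the column totals rank the characteristics.

```python
# Minimal QFD relationship matrix: requirement importance weights
# multiplied by relationship strengths (9 strong, 3 medium, 1 weak,
# 0 none) give a priority score per engineering characteristic.

requirements = {"easy to clean": 5, "quiet": 3, "durable": 4}
relationships = {  # requirement -> {engineering characteristic: strength}
    "easy to clean": {"surface finish": 9, "noise level": 0, "material grade": 3},
    "quiet":         {"surface finish": 0, "noise level": 9, "material grade": 1},
    "durable":       {"surface finish": 1, "noise level": 0, "material grade": 9},
}

priorities = {}
for req, weight in requirements.items():
    for char, strength in relationships[req].items():
        priorities[char] = priorities.get(char, 0) + weight * strength

# surface finish: 5*9 + 3*0 + 4*1 = 49
# noise level:    5*0 + 3*9 + 4*0 = 27
# material grade: 5*3 + 3*1 + 4*9 = 54
print(max(priorities, key=priorities.get))  # most critical characteristic
```

The ranking makes the multidisciplinary discussion concrete: the team argues over weights and strengths rather than over opinions.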

3.6.2.9 Root-Cause Analysis Diagram

A lack of root-cause identification and elimination results in rework and late design changes. This overburdens the experienced engineers who are able to solve the problems, leads to tackling the same issues over and over, and may cause late deliverables.

A root-cause analysis diagram is one of the seven basic quality tools. It enables the identification of potential factors that could cause an effect; on the diagram the causes are grouped into categories. The diagram should be developed by a multidisciplinary group in order to include complementary points of view and expertise.

Applying the diagram supports a good problem-identification process. It is a base for further problem-solving activities conducted in the early phases of the product development process.
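The diagram itself is just categorised causes around one effect, which a few lines of Python can represent (the categories and causes below are invented examples):

```python
# Hypothetical fishbone (Ishikawa) diagram as a plain data structure:
# categories of potential causes collected by a multidisciplinary
# group, rendered as indented text.

fishbone = {
    "People":   ["narrow specialization", "unclear responsibility"],
    "Process":  ["late supplier hand-off", "no design review"],
    "Tools":    ["outdated simulation model"],
    "Material": ["unstable component batch"],
}

def render(effect, causes):
    """Produce a text outline of the diagram for a given effect."""
    lines = [f"Effect: {effect}"]
    for category, items in causes.items():
        lines.append(f"  {category}:")
        lines.extend(f"    - {c}" for c in items)
    return "\n".join(lines)

print(render("late design change", fishbone))
```

Keeping the diagram as data rather than a drawing also makes it easy to share across a distributed team.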

3.6.2.10 Chief Engineer / Shusa

The person managing the development project is without a doubt very important. However, product development companies have problems with leaders. Functional departments tend to focus on their own goals (leading to sub-optimization) and lack a holistic view of the product under development, which is processed by various functions. This is due to a lack of sufficient leadership related to a specific product.

A Chief Engineer is a manager with deep technical knowledge who is fully responsible for the development of a product line. He leads a small group of dedicated people who develop the product concept and its business case, manage the development and design processes, coordinate the work of industrial engineers, sales and marketing, and manage the production preparation process. The majority of the core product development team are subordinates of the respective functional managers, not of the Chief Engineer [203].

The Chief Engineer realises his vision of a product, while the functional managers are aware of what is possible to achieve. This contrast creates a desirable tension between the Chief Engineer and department leaders and supports successful product development.

3.6.2.11 Plan for Every Person

Figure 9 – Plan for Every Person. Example from a manufacturing firm. [207]

Developing employees in a way coherent with company expectations is important for all companies, but especially for product development firms. Companies struggle with engineers' narrow specialization: when a peak of work occurs for one kind of engineering specialist, others who are available but lack the required skills cannot assist and unburden them. Narrow specialization and lack of skills are also among the barriers when transferring a product development department to other locations.

A Plan for Every Person is a schedule of training and development for employees. It contains information about the skills and abilities already acquired and about those that are required (Figure 9) [207].

With this tool one can assess the progress of employee training in the domains required by the company. It helps to broaden engineers' narrow specializations.
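At its core the plan is a skills matrix, which can be sketched as follows (names, skills and levels are hypothetical): comparing each employee's current levels with the required levels yields the training gaps to schedule.

```python
# Hypothetical Plan for Every Person as a skills matrix: required
# proficiency level per skill versus each employee's current level.

required = {"CAD": 3, "FEM simulation": 2, "supplier audits": 1}
employees = {
    "John":   {"CAD": 3, "FEM simulation": 0, "supplier audits": 1},
    "Rachel": {"CAD": 2, "FEM simulation": 2, "supplier audits": 0},
}

def training_gaps(current):
    """Skills where the employee is below the required level."""
    return {skill: need - current.get(skill, 0)
            for skill, need in required.items()
            if current.get(skill, 0) < need}

for name, skills in employees.items():
    print(name, training_gaps(skills))
```

The gap dictionary per person is exactly what the training schedule in Figure 9 tracks over time.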

3.6.2.12 Set-Based Concurrent Engineering

Companies struggle to lower the cost and shorten the lead times of development projects. At the same time, they look for the short-term benefits of point-based design, which may result in rework, unsuccessful projects and a lack of organizational learning.

Integrated Product Teams, or Concurrent Engineering, is a solution in which different departments/functions (e.g. production, quality assurance, marketing) are integrated (ideally physically collocated) and work simultaneously on the project. The set-based prefix means that the team considers sets of possible solutions in parallel and gradually eliminates those that prove not to be relevant based on the data possessed.

Set-Based Concurrent Engineering enables shortening the total PD lead time by as much as 33% while producing higher-quality designs [209].
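The set-narrowing idea can be sketched in a few lines (all design names, costs and constraints are invented for illustration): the team starts with a broad set of candidates and prunes only when data shows a constraint is violated.

```python
# Minimal sketch of set narrowing in SBCE: alternatives are eliminated
# only as knowledge (here, constraints) arrives, never picked
# point-wise up front.

candidates = [
    {"name": "A", "cost": 90, "weight": 12},
    {"name": "B", "cost": 70, "weight": 15},
    {"name": "C", "cost": 60, "weight": 20},
    {"name": "D", "cost": 85, "weight": 11},
]

def narrow(designs, predicate, reason):
    """Keep only designs satisfying the new constraint, logging why."""
    kept = [d for d in designs if predicate(d)]
    print(f"{reason}: {len(designs)} -> {len(kept)} alternatives")
    return kept

# Knowledge arrives gradually; each new constraint prunes the set.
candidates = narrow(candidates, lambda d: d["cost"] <= 85, "cost target fixed")
candidates = narrow(candidates, lambda d: d["weight"] <= 16, "weight limit from testing")
print([d["name"] for d in candidates])
```

Because every elimination is tied to a recorded reason, the process also builds the reusable knowledge (e.g. trade-off curves) that point-based design discards.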


3.6.2.13 Design-In

Product development companies struggle with supplier problems. Poor hand-offs due to communication barriers and a low level of supplier involvement are only a few of the root causes.

Design-In is a method of cooperation between customer and supplier. The customer delivers cost assumptions and functional requirements, and the supplier develops the detailed design of the component and its production process. The supplier usually delegates a resident or guest engineer to work at the customer's site.

This method enables the company to limit the total product cost and helps to ensure that the component will harmonize with the whole product [207].

3.6.2.14 Obeya

Project management in product development is difficult due to the large number of variables and the great number of people involved in the process. With various functions involved and several issues often being tackled at a time, it is difficult for everybody involved to stay focused on the company goal.

Obeya is a concept similar to the former "war rooms". It is a place (often a room) with boards and charts that depict the programme progress, milestones, long-term and short-term schedules, as well as problem-solving actions. But Obeya is not only a room; it also implies a rule of regular meetings [207].

The aims of Obeya are to assure project success, maintain focus on the main project goals and establish clear and direct communication between different levels of the organisation.

3.6.3 RELEVANT PROJECTS

In this section we describe a business case, a PhD thesis and a European project. They relate to the Toyota Product Development System explained in the book by Morgan and Liker [204].

3.6.3.1 Case Study. Body and Stamping Development at Ford

In this section we illustrate the transformation to lean product development in the Body and Stamping Engineering (B&SE) unit at Ford Motor Company in order to underline the principles of lean product development, which could represent an ideal toward which companies can strive.

It is important to note that Ford first introduced a lean manufacturing programme, the Ford Production System (FPS), in the mid-1980s, and in 2004 created the Global Product Development System (GPDS).

The GPDS was deployed in Body and Stamping Engineering in order to plan and execute a comprehensive overhaul of the way exterior bodies were engineered and tooled. The journey was challenging and progress was certainly not linear, but it resulted in a far more effective process, a stronger organization and, most importantly, greatly improved products for Ford customers.

The changes were made in the three key subsystems of Lean Product Development:

People transformation. It started with an attitude change: the B&SE team began a relentless focus on the customer and set out to become key enablers of delivering value to the customer;

Process transformation. B&SE mapping and benchmarking revealed a wasteful, uncompetitive development process. Value-stream mapping events were held to enable cross-functional and external-organization dialogue;

Tools transformation. The team realized that before they could fully leverage the capability of digital tools, they had to be certain that the tools fitted and supported the people and the process, that critical knowledge was validated and up to date, and that basic engineering disciplines were in place.

The improvements at Ford in this critical element of automotive product development over the five years of this transformation process (2004 to 2009) surpassed benchmarked levels of performance for quality, lead time and cost. Over those five years they reduced the average overall lead time by 40% and reduced internal tool and die construction timing by an average of 50%.

Most importantly, they simultaneously improved quality by more than 35%, as measured by "things gone wrong" for the body sub-system, and increased dimensional accuracy by 30%, with dramatically improved craftsmanship and body fit and finish that is now among the very best in the world.

There are a number of important lessons to be learned from this case:

1. Lean processes can be effective in driving high quality, low cost, and short lead times in product development. A lean process is driven by continuous improvement to eliminate waste, which surfaces problems, reducing the time from problem to solution. The basic principle is to shorten the loops of plan-do-check-act as much as possible.

2. The transformation requires a long-term commitment and a staging of the transformation process. In the case of Ford, the starting point was an overall transformation model applied to pieces of the development process in pilots. Those who led the pilots and learned from them were then transferred into the operational unit to lead from within. Over time the process was spread, tools were added, organizational changes were made, and progress accelerated.

3. Driven, accountable team members transform lean product development from static tools to a living high-performance system. The more recent success at Ford was due to a greatly simplified tool set that was actually used throughout the body engineering organization.

4. The main role of lean tools is to make problems visible and to provide a method of solving them at the root cause. Some of the most useful tools were very simple (the Obeya room, the A3 problem-solving process, engineering checklists and health charts).

5. Lean implementation is a social, cultural, and political transformation. Lean implementation is a leadership process, not a technical process. Leadership is much more than managing to a linear plan. Leadership requires a wide range of skills including reading political situations, understanding the culture, building relationships, understanding mass psychology, penetrating the psychology of individuals, and in the end winning over the mass of engineers to work toward the vision.

3.6.3.2 Lean PPD Project

In this section we illustrate the outcomes of the Lean Product and Process Development (Lean PPD) project in order to present the resulting tools and methods, which represent a framework that companies could work with during this project.

Firstly, the Lean PPD project started its research by analysing methods of measuring and assessing the level of adoption of lean thinking principles, that is: how can an organisation that strives to be lean in its product design and development activities actually measure and monitor its progress in the area of lean thinking? This led to the development of an assessment tool called the "LeanPPD Readiness Assessment and Transformation Tool", which measures the performance of an enterprise considering human resources, technology factors and processes, and produces reports regarding lean product design.

Secondly, the Lean PPD project provides an evaluation of a selection of the Business Process Modelling tools that are currently commercially available. Business Process Modelling plays an essential role by enabling two important functions:

(1) To capture existing processes by structurally representing their activities and related elements; and

(2) To represent new processes in order to evaluate their performance.

In addition to these functions, in Lean PPD the business process modelling method should also (3) enable process evaluation and the selection of alternatives.

The conclusion of this study is an innovative value stream mapping tool called VIWET (Value Identification and Waste Elimination Tool). At its core is a comprehensive process-modelling data structure which allows different views of the data to be presented as required for process analysis and improvement, lean implementation and so on.

Thirdly, Lean PPD provides a review and collection of the state of the art and current best practice in knowledge management (KM, covering acquisition and modelling) in order to develop knowledge-based engineering (KBE) methods and tools that will help manufacturing companies to achieve a lean development process and produce a lean design. This tool will ensure that any product development decisions taken are based on proven knowledge and experience.

Fourthly, Lean PPD provides a Set-Based Concurrent Engineering (SBCE) model and develops novel set-based lean design tools (SBLDT) that ensure the concurrent development of lean product and process design and its associated lean manufacturing system.

Finally, Lean PPD develops a "LeanPPD Model" (based on the Toyota product development system) that provides an integrated framework for all the tools explained above, with the following components:

1. A development process: Set-Based Concurrent Engineering

2. Vision, strategy and planning: Value-Focused planning and development

3. A leadership system: Chief Engineer technical project leadership

4. People, infrastructure, and other capabilities: Knowledge-Based Environment

5. The organisational culture: Continuous Improvement

3.6.3.3 Embedded software projects in a PhD thesis

J. Kato in his PhD thesis [208] analysed three embedded software development projects, conducted at American and Japanese companies based in Japan. The teams in focus consisted of 5-6 engineers and several managers, but the total number of engineers involved was over 100 per project. The projects' scope included both the investigation and the detailed design phases. Kato identified issues that arose in those projects, such as major design changes, outsourcing, sharing engineers across projects, unsynchronized processes due to multi-tasking, an expected completion time 50% shorter than in the previous project, and a lack of communication channels with users.

In his work Kato applied Value Stream Mapping and developed a process for measuring waste. He identified nine waste indicators that are effects of potential waste in the development process (effects are much easier to measure than root causes). These indicators are: overproduction, waiting, transportation, over-processing, motion, rework, re-invention, hand-off and defective information. He also applied the root-cause analysis diagram to identify typical root causes of waste.

Among the nine waste indicators, three were more significant than the others: over-processing, rework and defective information. Additionally, the time wasted per occurrence of rework increases exponentially as time spent on the project increases (Figure 10). In one of the investigated projects, 6% of the information inventoried for one month caused additional engineering work (information "rotted" at a pace of 6% per month).

Figure 10 – Average Time Spent on One Occurrence of Rework [208]
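If Kato's 6%-per-month figure is read as compound decay (our interpretation for illustration, not a model from the thesis), the usable share of stockpiled information shrinks quickly:

```python
# Sketch: treating the observed 6% "rot" of inventoried information
# per month as compound decay, estimate how much stockpiled
# information is still usable after n months.

DECAY = 0.06  # fraction spoiled per month (Kato's observed pace)

def still_fresh(months, rate=DECAY):
    """Fraction of inventoried information still usable after n months."""
    return (1 - rate) ** months

print(f"after 6 months: {still_fresh(6):.0%} of the information is still usable")
```

This is one reason lean flow favours small hand-offs over large information inventories: the longer information waits, the more of it must be re-done.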

3.6.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

LPD comprises various tools which enable solving the majority of the problems mentioned in this section. But there is also space for new developments, as new issues arise with technological advancements (meta-products, the growing role of modelling and simulation techniques) and macroeconomic processes (globalization, off-shoring, etc.). The gaps identified include:

1. People and organization transformation. One of the key aspects to be taken into account is how to facilitate lean implementation as a social, cultural and political transformation in companies. This implies understanding the lean implementation process as a transformation of people's mindset that concludes with a new shared vision of the company and a new Cloud culture.

2. Understanding of user requirements and customer involvement. It will be very important in ProSEco to bring customers' views on value into product design and development. Lean companies will therefore need to work on:

a) getting the customer intimately involved at the inception;

b) formalizing the translation of needs into engineering language and resolving any disconnects promptly;

c) confirming through continuing customer involvement.

3. Process-oriented metrics. There is a lack of process-oriented metrics for product development that would enable project progress tracking on a weekly or daily basis. Such metrics could be based on lean management principles [210]. A performance measurement methodology that would enhance decision making and problem solving should be developed.

4. Obeya for globally distributed teams. Obeya rooms used in successful development projects enable people to gain a common understanding of the status of the development process, but they require those people to be physically present in the room. A problem arises when the people involved in a project are distributed among different locations and cannot meet regularly in one place. Questions such as: what information should be posted and how should it be used? how can the Obeya be used for virtual teams? is a computer representation as effective as printed and posted representations? should be stated and answered.

5. LPD system for small companies. As described in the previous sections, many methods and tools have been tested and successfully used in different companies, especially larger ones. But small companies also need to improve their product development processes in order to avoid waste and identify problems. We therefore consider it interesting to investigate a Mini-LPD System: a system of interrelated rules, techniques and components to be used together in order to achieve the best results in an SME.


3.7 Service Oriented Architecture (SOA) in manufacturing industry and Cloud manufacturing

3.7.1 SOA: DEFINITION AND BASIC CONCEPTS

The term Service Oriented Architecture (SOA) [211] [212] [213] remains one of the most promising architectural designs for the rapid integration of data and business processes, representing an approach that addresses the requirements for loosely coupled, standards-based and protocol-independent distributed computing.

SOA is promoted as the next evolutionary step to help organisations meet the more complex challenges imposed by globalization and market fragmentation, establishing an architectural model that aims to enhance the efficiency, agility and productivity of an enterprise by positioning services as its building blocks.

SOA provides a framework or platform for rapid system development and easily modified systems, while enhancing systems integration capabilities and overall system quality. Using the definition given by Komoda [214], a Service Oriented Architecture is a design framework for the construction of systems by the combination of services, deeply using the ICT infrastructure as a communication backbone.

Web Services technology has enabled and stimulated the implementation and development of SOAs. As stated in [215], although Web Services do not necessarily translate to SOA, and not all SOA is based on Web Services, the relationship between the two technologies is important and they are mutually influential.

The World Wide Web Consortium (W3C) in [216] and (www.socrades.eu) define Web Services as: "a software system designed to support interoperable device-to-device, device-to-system and system-to-system, e.g. machine-to-machine, interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web Service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards".

Backed by a mature and universally accepted set of interoperability standards (e.g. HTTP, JSON, XML, SOAP, UDDI, WSDL, the WS-* standards) for building, describing, cataloguing and managing reusable services, service orientation is the foundational architecture for today's mash-ups, software as a service and service clouds [217]. These standards give service-oriented architectures a distinct advantage over other architectural styles, since interoperability becomes one of their intrinsic characteristics; this eases the integration of heterogeneous systems, including legacy devices and systems, and provides a major enhancement in business agility.
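The message layer described above can be made concrete with a short sketch that builds a SOAP 1.1 envelope using only the standard library. The service namespace, operation name and parameter are hypothetical; only the SOAP envelope namespace is the standard one:

```python
# Minimal SOAP 1.1 request envelope built with xml.etree.ElementTree.
# The "soap" namespace URI is the standard SOAP 1.1 envelope namespace;
# the service namespace and operation are invented for illustration.

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(operation, service_ns, params):
    """Serialize a SOAP Body invoking `operation` with `params`."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

msg = soap_request("GetMachineStatus", "http://example.com/plant", {"machineId": 42})
print(msg)
```

Such a message would typically be POSTed over HTTP to the endpoint advertised in the service's WSDL description, which is what makes the interaction protocol- and platform-independent.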

3.7.2 SOA IN MANUFACTURING

Today's manufacturing companies act and compete in a new, challenging environment characterized by frequently changing market demands, reduced time-to-market, increasing consumer demand for high-quality, customized products at low cost, and efficient energy management [218]. As stated in [219], this trend imposes a demand for ever more exclusive, efficient, effective and particularly flexible production systems in order to better meet fluctuating market needs whilst maintaining the low cost base of heavily automated mass-production techniques [220]. As a result, the key to competitiveness lies in reducing production costs over the production system lifecycle and in having systems that are able to respond quickly to market variations and to demands for new customized products.

Nowadays, manufacturing companies are completely dependent on their Information Technology (IT) backbone. As a matter of fact, as exposed in [221], IT has played a major role inside any modern enterprise since its introduction, running almost all of the processes of the enterprise, be they related to manufacturing, distribution, logistics, sales, customer management, accounting, or any other type of business process. The software infrastructure of today’s manufacturing enterprises is highly heterogeneous, making the vertical adoption of the SOA paradigm a desirable requirement leading to a homogeneous communication infrastructure based on a single communication paradigm [219]. In fact, there is a strategic demand for SOA systems capable of realising the vision of exposing, delivering and also consuming information à la carte, i.e., any content, anytime, anywhere, and on different ICT platforms.

As listed in [219], the most important requirements to be tackled by the manufacturing plants of the future include:

© ProSEco Consortium, D100.1 State of the Art Analysis Update, 31.03.2014

Inter-enterprise dynamic integration capabilities;

Cross-enterprise collaboration;

Support of heterogeneous yet interoperable hardware and software environment;

Business agility through production system flexibility, adaptability and re-configurability;

Scalability by adding or reducing resources without disrupting operations;

Fault tolerance and efficient, effective recovery from failures under real-time production conditions.

Most research in this area has been directed at two major domains: (i) e-business and inter-enterprise interactions to improve enterprise agility, and (ii) flexible and reconfigurable automation systems based on the application of SOA to the control, monitoring and management of manufacturing production processes ([222], [223]; see e.g. www.socrades.eu, www.imc-aesop.eu).

As stated in [224], the agile performance of an enterprise is strictly limited by its least agile building block, meaning that to be agile, all the enterprise IT levels, from the business level down to the device and/or shop floor level, need to be agile. The vertical application of the SOA paradigm is therefore fundamental, both for machine-to-machine communication at shop floor level and for enabling information integration between this level and the higher levels (resource planning and business levels) of a manufacturing company. The connection between shop-floor devices and enterprise services is essential to create more sophisticated high-level services and to support more reliable decision making as well as closed-loop control and management processes.

Based on the fundamental set of requirements addressed above, manufacturing companies are nowadays engaged in an innovation race to implement ever more exclusive and efficient production systems. As stated in [218], over the years researchers and practitioners have designed and implemented manufacturing processes based on the most diverse technologies, architectures, approaches and methodologies in order to satisfy mass-customization requirements. Some manufacturing processes focus on improving responsiveness, re-configurability and lead time, while others focus on improving final product quality, optimizing production activities, eliminating waste, integrating secondary processes into the main control, and improving visibility inside manufacturing companies by facilitating the information flow between all the layers of the enterprise. In this scenario, the SOA paradigm and/or approach can contribute to the design and development of control, automation and management applications at the shop floor level in three different ways, at three different levels of abstraction, namely:

1. Decision Support System/Expert System (level 3 (MES – Manufacturing Execution System) and level 4 (ERP) of the standard ISA’95 enterprise architecture (www.isa95.org));

2. High Level Supervisory Control (level 2 (SCADA) of the standard ISA’95 enterprise architecture (www.isa95.org));

3. Real-Time Control (levels 0 (sensor/actuator) and level 1 (PLC/RC/CNC) of the standard ISA’95 enterprise architecture (www.isa95.org)).
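The mapping between these three application areas and the ISA'95 levels can be summarized in a small illustrative sketch; the level descriptions follow the standard, while the mapping structure itself is only an assumption for illustration:

```python
# Sketch: the ISA'95 automation hierarchy referenced above, with the three
# SOA application areas mapped onto its levels. Purely illustrative.
ISA95_LEVELS = {
    0: "Sensors and actuators (the physical process)",
    1: "Basic control (PLC / RC / CNC)",
    2: "Supervisory control (SCADA)",
    3: "Manufacturing operations management (MES)",
    4: "Business planning and logistics (ERP)",
}

SOA_APPLICATION_AREAS = {
    "Decision Support System/Expert System": [3, 4],
    "High Level Supervisory Control": [2],
    "Real-Time Control": [0, 1],
}

def levels_for(area: str) -> list:
    """Resolve an SOA application area to its ISA'95 level descriptions."""
    return [ISA95_LEVELS[level] for level in SOA_APPLICATION_AREAS[area]]
```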

3.7.2.1 Decision Support System/Intelligent System

A decision support system/intelligent system can be defined as a computer system that behaves like a human expert, giving advice and, where necessary, explaining the logic and/or the relevant steps behind that advice [225]. Here, the adoption of SOA approaches relates to the design and development of a communication infrastructure that facilitates cross-layer interaction between the upper levels of the enterprise organization (www.socrades.eu), and to the integration of such systems with existing knowledge-based systems, improving their capabilities and pushing them to a new level where awareness, responsiveness, adaptability, control and maintenance efficiency, and learning by experience are the key aspects [218].

3.7.2.2 High Level Supervisory Control

To promote system agility, design and development tools should straightforwardly and intuitively allow the integrator to build an application more easily and faster than with older approaches while providing, at least, the same level of robustness and performance; this is the key to a wider adoption of SOA in any application domain. The data acquisition and control (SCADA) aspect within SOA approaches is mostly related to the process of making several services work together to create added value in the form of a more complex process ([226], www.imc-aesop.eu).

3.7.2.3 Real-Time Control

The recurrent requirement for real-time behaviour, stemming from the strict performance and safety constraints of processes in the industrial automation domain, is among the most common criticisms raised against the employment of SOA approaches, particularly where control aspects are addressed. Undoubtedly, the SOA paradigm has supported the design and development of modern distributed control paradigms and architectures by providing systems with the following characteristics: encapsulated self-contained functionalities, independent development of each service, environment independence, loose coupling, and distributed complexity. However, little work has been conducted to introduce real-time capabilities into SOA applications, and the majority of existing SOA deployments in this domain still consist of test benches with no genuine concern for real-time constraints.

3.7.3 CLOUD MANUFACTURING: DEFINITIONS AND BASIC CONCEPTS

3.7.3.1 Cloud Computing

Cloud Computing has emerged as the latest computing paradigm, promising flexible IT architectures, configurable software services, and QoS (Quality of Service) guaranteed service environments. As stated in [227], the main goal of cloud computing is to provide on-demand computing services with high reliability, scalability and availability in a distributed environment. Although the term cloud computing was only coined in 2007, the concept is quite old and rooted in the 1960s, relating to the delivery of computing resources over a global network [228]. A more formal definition of this concept and/or paradigm was given by the National Institute of Standards and Technology (NIST): cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [229]. Looking at this definition, it is possible to state that in cloud computing everything is treated as a service (XaaS). As exposed in [230], cloud computing refers both to the applications delivered as services over the Internet and to the hardware and system software that provide those services. The services in the cloud computing paradigm can be divided into three distinct categories according to the abstraction level of the capability provided and the service model of the providers, namely: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [231]. The service delivery model of a typical cloud-based system is shown in Figure 11.

Figure 11 – Cloud Computing Service Delivery Model [232]

According to [231], SaaS provides services and/or applications that can be accessed by users through Web portals, meaning that service consumers are shifting from locally installed computer programs to online (cloud) services that offer the same functionalities. PaaS provides developers with a platform for creating, deploying and hosting web applications. Finally, IaaS provides the fundamental infrastructure, i.e. the necessary hardware, on demand, in the sense that customers pay for the cloud resources they use.


3.7.3.2 From Cloud Computing to Cloud Manufacturing

Nowadays, Cloud Computing is emerging as a paradigm for providing computing services on demand, anywhere and everywhere, enabling the sharing and aggregation of geographically distributed resources over the network [233]. The paradigm has evolved from a relatively simple infrastructure delivering storage capabilities to an economy-based one aimed at delivering more complex services that rely on abstract resources. Cloud Computing is completely changing the way enterprises interact with each other and, above all, with their customers. It is therefore creating new solutions and opportunities for modern enterprises, including the manufacturing industry [234].

Cloud Manufacturing (CMfg) extends the concept of Cloud Computing to the manufacturing industry. The concept was created to meet the demand for globalization by transforming the manufacturing business into a new paradigm in which manufacturing capabilities and resources are componentized, integrated and optimized globally, making them accessible to users everywhere [235]. As argued in [236], there is a significant difference between the Cloud Computing and CMfg concepts. In cloud computing, the resources are primarily computational (e.g. servers, storage, network, software) and are provided in the form of services belonging to one of the three categories presented in section 3.7.3.1, namely IaaS, PaaS, and SaaS. In CMfg, the resources are manufacturing resources; in other words, physical manufacturing devices, machines and, more generally, systems are abstracted in terms of their functionalities and capabilities and provided to the user in the form of CMfg services belonging to IaaS, PaaS, and SaaS. A layered framework for implementing CMfg consisting of four layers can be considered [227] (see Figure 12).

Figure 12 – Layered framework for implementing CMfg [227]

The Manufacturing Resource layer contains the physical manufacturing resources and shop floor capabilities that are provided to the user as SaaS and/or IaaS. The Virtual Service layer is responsible for virtualizing the manufacturing resources and encapsulating them into cloud manufacturing services, which are in turn provided to the Global Service layer. The Global Service layer is responsible for managing the cloud manufacturing services. Finally, the Application layer is the entry point for manufacturing companies and gives the user the possibility to build manufacturing applications from the virtualized resources.
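A minimal sketch of this four-layer framework, with purely illustrative class and resource names (not part of the cited framework), might look as follows:

```python
# Sketch of the layered CMfg framework described above: a physical
# manufacturing resource is virtualized into a cloud service, registered
# in a global service layer, and composed by an application.
# All names and capabilities are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ManufacturingResource:          # Manufacturing Resource layer
    name: str
    capability: str                   # e.g. "milling", "assembly"

@dataclass
class CloudService:                   # Virtual Service layer
    resource: ManufacturingResource
    def invoke(self) -> str:
        return f"{self.resource.name} performs {self.resource.capability}"

@dataclass
class GlobalServiceLayer:             # manages the virtualized services
    registry: dict = field(default_factory=dict)
    def register(self, service: CloudService) -> None:
        self.registry[service.resource.capability] = service
    def find(self, capability: str) -> CloudService:
        return self.registry[capability]

def build_application(layer: GlobalServiceLayer, plan: list) -> list:
    """Application layer: assemble services into a virtual solution."""
    return [layer.find(step).invoke() for step in plan]

cloud = GlobalServiceLayer()
cloud.register(CloudService(ManufacturingResource("CNC-1", "milling")))
cloud.register(CloudService(ManufacturingResource("Robot-2", "assembly")))
result = build_application(cloud, ["milling", "assembly"])
```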

As exposed in [237], cloud manufacturing services are intended to be provided to the user so as to encompass the whole manufacturing lifecycle. Typical CMfg services can thus include: Design as a Service, Manufacturing as a Service, Experimentation as a Service, Simulation as a Service, Management as a Service, Maintain as a Service, and Integration as a Service (see Figure 13).


Figure 13 – Cloud Computing and Cloud Manufacturing in a nutshell

Thereby, the CMfg paradigm provides a collaborative network environment (the Cloud) where users can select suitable manufacturing services from the Cloud and dynamically assemble them into a virtual manufacturing solution to execute a selected manufacturing task. In this scenario, CMfg provides a new business model, moving from traditional production-oriented manufacturing to service-oriented manufacturing [227], whose key characteristics are: better intra-enterprise collaboration; higher flexibility and agility in the management of enterprise operations and supply chains; and transparency across all the layers of the manufacturing enterprise. Mirroring the definition of cloud computing given in section 3.7.3.1, CMfg can therefore be defined as: ‘‘a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable manufacturing resources (e.g., manufacturing software tools, manufacturing equipment, and manufacturing capabilities) that can be rapidly provisioned and released with minimal management effort or service provider interaction’’ [227], [238].

Since in CMfg everything is a service available on the cloud (i.e. on the network), it is fundamental to consider that any operation requires a base level of trust and security among the organizational members. Trust and security therefore play a critical role in the Cloud environment. Although it is hard to establish a trusted service-oriented grid architecture, given the lack of support for user single sign-on and dynamic transient services, many approaches attempt to solve this problem [239], [240]. However, a lack of trust between Cloud users and providers has hindered the universal acceptance of Clouds as outsourced computing services. What is fundamental to understand is that CMfg is much more than just storing and retrieving data using services.
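As a minimal illustration of the trust requirement, the sketch below authenticates a service message with an HMAC over a shared secret, so that a consumer can detect tampering; real deployments would rely on established mechanisms such as WS-Security or TLS, and the key and message shown are invented:

```python
# Sketch: verifying the integrity of a service message with an HMAC over
# a shared secret. This is only an illustration of the trust requirement;
# production systems would use WS-Security, TLS or token-based schemes.
import hmac
import hashlib

SECRET = b"shared-provider-consumer-key"   # illustrative key

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), signature)

msg = b'{"service": "milling", "status": "available"}'
tag = sign(msg)
```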

3.7.4 RELEVANT METHODS AND TOOLS

3.7.4.1 WildFly Application Server

WildFly1, formerly known as JBoss AS or simply JBoss, is an open-source Java-based application server. It is used for developing, integrating and deploying applications, portals and web services. WildFly is fully Java EE compatible, handling specifications such as JAXB, EJB and CDI. At the same time, it is compliant with OSGi, allowing developers to combine the best of both worlds.

1 http://www.wildfly.org/


3.7.4.2 Apache Tomcat Application Server

Apache Tomcat2 is an open source web server and servlet container developed by the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and JavaServer Pages (JSP) specifications from Sun Microsystems, and provides a "pure Java" HTTP web server environment for Java code to run in [241].

3.7.4.3 Apache CXF Framework

Apache CXF3 is an open source services framework. CXF helps you build and develop services using frontend programming APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, or CORBA and work over a variety of transports such as HTTP, JMS or JBI.

3.7.4.4 OSGi

The OSGi Alliance4 is a worldwide consortium of technology innovators that advances a proven and mature process to create open specifications enabling the modular assembly of software built with Java technology. Modularity reduces software complexity, and OSGi is a well-established model for modularizing Java.

The OSGi technology facilitates the componentization of software modules and applications and assures remote management and interoperability of applications and services over a broad variety of devices. Building systems from in-house and off-the-shelf OSGi modules increases development productivity and makes them much easier to modify and evolve. The OSGi technology is delivered in many Fortune Global 100 company products and services and in diverse markets including enterprise, mobile, home, telematics and consumer.

3.7.4.5 Device Profile for Web Services (DPWS)

DPWS5 [242] is a widely employed solution for offering SOA-based features at device level; it has been an OASIS standard (Web Services Discovery and Web Services Devices Profile Technical Committee) since June 2009. DPWS is a stack of web-based protocols and a profile for devices that relies on two fundamental elements: the device and its hosted services. Devices play a fundamental role in the discovery and metadata exchange protocols, while their hosted services provide the main functionalities of the devices.

Besides hosted services, the DPWS also specifies a set of infrastructure services:

Discovery Services (WS-Discovery): these services are used whenever a new device is connected to the network to publish itself and find other services.

Metadata exchange Services (WS-MetadataExchange): these services provide access to devices hosted services and to their metadata.

Events publish/subscribe services (WS-Eventing): these services are responsible for allowing other devices to subscribe to asynchronous messages produced by a given vendor-defined service.

The DPWS is built on top of the SOAP 1.2 standard, and relies on additional WS specifications, such as WS-Addressing and WS-Policy, to further constrain the SOAP messaging model. At the highest level, the messages correspond to vendor-specific actions and events. Messages are delivered using HTTP, Transmission Control Protocol (TCP) and UDP transport protocols.
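The discovery step can be illustrated by constructing a WS-Discovery Probe message; the sketch below uses the namespaces of the 2009 OASIS specification, while the searched device type is a hypothetical example:

```python
# Sketch: constructing a WS-Discovery Probe, the message a DPWS client
# multicasts (SOAP-over-UDP) to find devices on the network. The device
# type searched for ("dn:TemperatureSensor") is a hypothetical example.
import uuid
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSA = "http://www.w3.org/2005/08/addressing"
WSD = "http://docs.oasis-open.org/ws-dd/ns/discovery/2009/01"

def build_probe() -> bytes:
    """Build a WS-Discovery Probe envelope (header + body)."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    ET.SubElement(header, f"{{{WSA}}}Action").text = WSD + "/Probe"
    ET.SubElement(header, f"{{{WSA}}}MessageID").text = f"urn:uuid:{uuid.uuid4()}"
    ET.SubElement(header, f"{{{WSA}}}To").text = (
        "urn:docs-oasis-open-org:ws-dd:ns:discovery:2009:01")
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    probe = ET.SubElement(body, f"{{{WSD}}}Probe")
    ET.SubElement(probe, f"{{{WSD}}}Types").text = "dn:TemperatureSensor"
    return ET.tostring(env, encoding="utf-8")

probe = build_probe()
```

Matching devices would answer with a ProbeMatch message, after which the client retrieves their metadata via WS-MetadataExchange as described above.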

2 http://tomcat.apache.org/ 3 http://cxf.apache.org/index.html 4 http://www.osgi.org/About/HomePage 5 http://docs.oasis-open.org/ws-dd/ns/dpws/2009/01


3.7.4.6 OPC- Unified Architecture (UA)

OPC UA6 is the new version of the widely deployed OPC architecture originally designed by the OPC Foundation to connect industrial devices to control and supervision applications [243], [244].

The focus of OPC is on getting access to large amounts of real-time data while ensuring performance constraints without disrupting the normal operation of the devices. The original OPC specifications, based on Microsoft COM/DCOM, are becoming obsolete and are gradually being replaced by new interoperability standards, including web services. This has led the OPC Foundation to publish a new architecture: OPC UA.

The OPC UA architecture comprises support for secure communications and unifies several OPC models (Data Access, Alarms & Events, and Historical Data Access) into a single set of services.

The OPC UA strategy is focused on collaboration with major industry standards organizations and on how to move the information models without restrictions from these other industry standards organizations to an end-user community. These organizations include the Electronic Device Description Language Cooperation Team (ECT), FDT Future Device Integration alliance (FDI), the Machinery Information Management Open Systems Alliance (MIMOSA), GridWise, Building and Automation Controls Networks (BACnet), the Instrumentation, Systems and Automation Society’s Batch Control S88, ISA S95 and Open Modular Architecture Control (OMAC). OPC UA thus provides a homogeneous and generic meta-model and defines a set of web service interfaces to represent and access both structure information and state information in a wide range of devices.
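A drastically simplified sketch of an OPC UA-style address space, where typed nodes are linked by references, may help illustrate such a meta-model; the node identifiers and classes below are simplified assumptions, as the real OPC UA information model defines many more node classes, attributes and reference types:

```python
# Sketch: a drastically simplified OPC UA-style address space, in which
# typed nodes are linked by references and can be browsed and read.
# Node identifiers mimic the "ns=<idx>;s=<string>" style; everything
# else is a simplification for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_class: str                      # e.g. "Object", "Variable"
    value: object = None
    references: list = field(default_factory=list)   # child node ids

class AddressSpace:
    def __init__(self):
        self.nodes = {}

    def add(self, node, parent=None):
        """Add a node, optionally referencing it from a parent node."""
        self.nodes[node.node_id] = node
        if parent:
            self.nodes[parent].references.append(node.node_id)

    def browse(self, node_id):
        """Follow references, analogous to the OPC UA Browse service."""
        return [self.nodes[r] for r in self.nodes[node_id].references]

    def read(self, node_id):
        """Return a node's value, analogous to the Read service."""
        return self.nodes[node_id].value

space = AddressSpace()
space.add(Node("ns=1;s=Machine1", "Object"))
space.add(Node("ns=1;s=Machine1.Temp", "Variable", value=72.5),
          parent="ns=1;s=Machine1")
```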

3.7.4.7 MTConnect

MTConnect7 is an open, royalty-free standard intended to provide greater interoperability between devices and software applications. MTConnect aims at establishing an open and extensible channel of communication for plug-and-play interconnectivity between devices, equipment and systems, thereby allowing data exchange between them. This common communication is based on XML and HTTP technology to provide real-time data from throughout a factory, and it empowers software developers to implement applications aimed at more efficient operations, improved production optimization, and the adaptation of production systems to increase productivity, final product quality and agility.
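As a hedged sketch, an application might extract current values from an MTConnect-style XML response as follows; the sample document is invented and much simpler than real MTConnect schemas, which use versioned namespaces and a richer structure:

```python
# Sketch: parsing a simplified MTConnect-style "current" response.
# The sample document below is invented for illustration; real
# MTConnect documents use versioned namespaces and a richer schema.
import xml.etree.ElementTree as ET

SAMPLE = """<MTConnectStreams>
  <Streams>
    <DeviceStream name="Mill-1">
      <Samples>
        <SpindleSpeed timestamp="2014-03-31T10:00:00Z">7500</SpindleSpeed>
        <Temperature timestamp="2014-03-31T10:00:00Z">41.2</Temperature>
      </Samples>
    </DeviceStream>
  </Streams>
</MTConnectStreams>"""

def latest_samples(xml_text: str) -> dict:
    """Return {device name: {data item: value}} for each device stream."""
    root = ET.fromstring(xml_text)
    result = {}
    for device in root.iter("DeviceStream"):
        items = {}
        for element in device.iter():
            # Skip the structural elements; keep the data items.
            if element.tag not in ("DeviceStream", "Samples") and element.text:
                items[element.tag] = element.text.strip()
        result[device.get("name")] = items
    return result

data = latest_samples(SAMPLE)
```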

3.7.5 RELEVANT PROJECTS: OVERVIEW

Several small- and large-scale EU research projects have reported relevant results which can be exploited in the context of the ProSEco project. The cloud manufacturing research domain is strictly related to the SOA paradigm, as a means of enabling architectural design/definition, and to web services technology, for concrete implementation. The following overview therefore summarizes the most relevant projects in the context of SOA and Web Services related to the Cloud Manufacturing (CMfg) paradigm.

3.7.5.1 Detailed Research Project Description

3.7.5.1.1 CHAT8 - Control of heterogeneous automation systems: technologies for scalability, reconfigurability and security

CHAT (Control of heterogeneous automation systems: technologies for scalability, reconfigurability and security) is about developing core components (algorithms, protocols and procedures) of the next generation of distributed control systems, able to tackle the supervision and control of larger and more complex plants while drastically reducing infrastructure, maintenance and reconfiguration costs. CHAT deals with lower-level control modelling, i.e. no SOA-level integration; ProSEco targets a wider integration of devices, subsystems and functionalities based on a service approach.

6 http://www.opcfoundation.org/Default.aspx/01_about/UA.asp?MID=AboutOPC 7 http://www.mtconnect.org/ 8 http://www.ict-chat.eu/


3.7.5.1.2 DISC - Distributed supervisory control of large plants

DISC9 (Distributed supervisory control of large plants) is about the design of supervisors and fault detectors exploiting the concurrency and modularity of the plant model. It uses several techniques: modularity in the modelling and control design phases; decentralized control with communicating controllers; modular state estimation; distributed diagnosis and modular fault detection based on the design of partially decentralized observers; and fluidization of some discrete-event dynamics to reduce state-space cardinality. The expected outcomes of the project are new methodologies for modular control design and diagnosis of complex distributed plants, and new tools for modelling, simulation and supervisory control design that will be part of an integrated software platform.

3.7.5.1.3 FEEDNETBACK - Feedback design for wireless networked systems

FEEDNETBACK10 (Feedback design for wireless networked systems) aims to close the control loop over wireless networks by deriving and applying a co-design framework that allows the integration of communication, control, computation and energy management aspects. It addresses issues of complexity and of temporal and spatial uncertainty, such as communication delays and bandwidth and node availability. Whereas FEEDNETBACK focuses on efficient, robust and affordable networked control that scales and adapts to changing application demands, ProSEco focuses on a fundamental/general control and monitoring paradigm investigating event-based operation.

3.7.5.1.4 FLEXWARE - Flexible wireless automation in real-time environments

FLEXWARE11 (Flexible wireless automation in real-time environments) focuses on Wi-Fi radio control for use in real-time process monitoring and control. The middleware providing services and service integration at device, subsystem and system level addressed by ProSEco can directly make use of FLEXWARE device technology; for instance, by running an SOA stack on such devices, device integration can be realized.

3.7.5.1.5 GINSENG - Performance control in wireless sensor network

GINSENG (Performance control in wireless sensor networks) deals with QoS at the communication level. Current results indicate that devices can be given the ability to determine communication QoS, which can then be offered as a service within a global integration of devices, subsystems and functionalities.

3.7.5.1.6 HD-MPC - Hierarchical and distributed model predictive control of large-scale systems

HD-MPC12 (Hierarchical and distributed model predictive control of large-scale systems) deals with the complexity of the control task. The project proposes to use a hierarchical control set-up in which the control tasks are distributed over time and space. In such a set-up, systems of supervisory and strategic functionality reside at higher levels, while at lower levels the single units, or local agents, must guarantee specific operational objectives.

9 http://www.diee.unica.it/automatica/disc/index.html 10 http://www.feednetback.eu/ 11 http://www.flexware.at/ 12 http://www.ict-hd-mpc.eu/


3.7.5.1.7 IMC-AESOP - ArchitecturE for Service-Oriented Process - Monitoring and Control.

IMC-AESOP aims to: (i) develop a System-of-Systems (SoS) approach for monitoring and control based on a Service-oriented Architecture (SOA) for very large scale distributed systems in process control applications (up to tens of thousands of smart devices), and (ii) propose a migration path from legacy systems towards next-generation SOA-based SCADA/DCS systems, together with an infrastructure that, from the integration viewpoint, will be the perfect legacy system 20 years from now. The results of IMC-AESOP, especially in the engineering area, will be used in ProSEco.

3.7.5.1.8 PlantCockpit

PlantCockpit deals with a “Production Logistics and Sustainability Cockpit” (PLANTCockpit) as a central environment for monitoring and controlling all intra-logistics processes. Research results of the PLANTCockpit project provide production supervisors, foremen and line managers with the visibility required to make well-informed decisions to optimize plant processes.

3.7.6 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

The progress beyond the state of the art that the project aims to provide is clustered into three main groups and/or perspectives, namely:

Tool perspective

Managerial perspective

Technical perspective

These three perspectives are considered necessary and sufficient to analyse the strengths and weaknesses, as well as the opportunities and threats, of cloud manufacturing.

3.7.6.1 Tool Perspective

The ProSEco project aims to apply the CMfg paradigm to enable the collaborative design of product-services and their production processes, and the effective implementation of innovative services. State-of-the-art tools and methodologies (see section 3.7.4) will be critically investigated in order to identify the most suitable technologies for implementing a collaborative cloud-based environment in a real manufacturing enterprise, also enabling cloud manufacturing across geographically distributed factories. Currently, manufacturing companies do not have the necessary degree of transparency and visibility, since most of the information gathered during the production process is analysed at the subsystem level; in other words, the information flow inside a manufacturing company is often limited to the department level. This underlines the need for an improved strategy to increase the transparency of a manufacturing company at system level, in order to build an integrated view of processes inside the company, i.e. to provide the right information, in the right place, at the right time. This integrated view is the baseline for any decision-making process. However, creating such an integrated view requires investigating tools and technologies, so one of ProSEco's main goals is to identify tools and technologies that can potentially improve transparency and visibility across the entire enterprise. Tools such as implementations of DPWS and OPC UA will therefore be considered as possible ways to create a connection point between the management levels of a manufacturing enterprise and the process. Once this connection point is created, it is imperative to guarantee that the transmitted information is not manipulated by any external entity, meaning that relevant issues about how to improve security and trust when using services will also be investigated.
Another point to be considered is the creation of connection points between factory systems below level 3 (MES) of ISA'95, to enable collaborative networks of factories.

3.7.6.2 Managerial Perspective

The advantage of CMfg is that it transcends traditional boundaries and provides a platform that everyone can access from different locations. This means facing the new challenge of an increased level of dynamism in the problem, due to geographical variations and different populations. ProSEco approaches the implementation of CMfg as a dynamic problem that brings additional challenges, and will use different tools and methodologies to manage the situation.


Moreover, using dynamic MADM (Multi-Attribute Decision Making) could exploit the advantages of the Cloud approach to facilitate decisions at the hierarchical levels of organizations, helping to rank alternatives or select among them based on their utilities/performances. It could be utilized for online as well as offline decisions, such as the well-known supplier/vendor selection problem.
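A minimal sketch of such ranking, using simple additive weighting (one classical MADM technique), applied to the supplier selection example; the criteria, weights and scores below are invented for illustration:

```python
# Sketch: simple additive weighting (SAW), one classical MADM method,
# applied to the supplier-selection example mentioned above.
# Criteria, weights and normalized scores are invented for illustration.
def rank_alternatives(scores: dict, weights: dict) -> list:
    """Return (alternative, utility) pairs sorted best first by weighted sum."""
    utilities = {
        name: sum(weights[criterion] * value
                  for criterion, value in criteria.items())
        for name, criteria in scores.items()
    }
    return sorted(utilities.items(), key=lambda kv: kv[1], reverse=True)

weights = {"cost": 0.5, "quality": 0.3, "delivery": 0.2}
suppliers = {
    "Supplier A": {"cost": 0.9, "quality": 0.6, "delivery": 0.8},
    "Supplier B": {"cost": 0.7, "quality": 0.9, "delivery": 0.6},
}
ranking = rank_alternatives(suppliers, weights)
```

In a dynamic setting, the scores would be refreshed from cloud-published data and the ranking re-evaluated as conditions change.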

3.7.6.3 Technical Perspective

The analysis of the relevant research projects presented in section 3.7.5 provides fundamental insight into the most relevant and critical issues in implementing the CMfg paradigm. In particular, one of the current challenges is the virtualization of physical manufacturing resources, or in other words, the representation of physical manufacturing resources in the cloud. In this scenario, the results of previous research projects will be used as the baseline for enabling manufacturing resource virtualization and information sharing over the network. The virtualization of manufacturing resources raises several issues and problems, namely:

1. How to semantically represent the virtualized resource and the services that it brings to the cloud.

2. How to use these services in a composition.
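To make the two issues above concrete, the following is a minimal sketch of how a virtualized resource with semantically described services might be represented and naively chained into a composition. All class and service names here are illustrative assumptions, not the ProSEco design.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    """Hypothetical semantic descriptor for one service of a resource."""
    name: str
    inputs: list
    outputs: list

@dataclass
class VirtualResource:
    """Cloud-side representation of a physical manufacturing resource."""
    resource_id: str
    services: list = field(default_factory=list)

def compose(resource, start, goal):
    """Naively chain services whose outputs feed the next service's inputs."""
    chain, current = [], start
    while current != goal:
        step = next(s for s in resource.services if current in s.inputs)
        chain.append(step.name)
        current = step.outputs[0]
    return chain

# A drilling station virtualized with two semantically described services
station = VirtualResource("station-01", [
    ServiceDescription("drill", inputs=["part"], outputs=["drilled-part"]),
    ServiceDescription("inspect", inputs=["drilled-part"], outputs=["report"]),
])
plan = compose(station, "part", "report")
# plan == ["drill", "inspect"]
```

Real semantic matchmaking would of course rely on ontologies rather than string equality; the sketch only shows why machine-readable inputs/outputs are the precondition for automatic composition.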

Once manufacturing resources are virtualized, i.e. represented in the cloud, the next fundamental challenge is to use all the data provided over the network as the foundation for system adaptation. As a matter of fact, for every component that is connected to the cloud and is capable of providing information, there is more processing required, more rules to be evaluated and more decisions to be taken in order to adapt the system in the best way possible. In this case, the following issues and questions should be considered:

1. How to process the data available in the cloud.

2. How to adapt the system, or in other words, how to consume this data and take actions.

3. How to include humans in the process of adaptation and use human feedback for learning.

4. How to implement secure clouds (and sub-clouds) in which virtualized components are reachable.
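A minimal sketch of points 1 and 2 above, i.e. processing cloud data through rules and deriving adaptation actions, could look as follows. The rule set, field names and action labels are purely illustrative assumptions.

```python
# Minimal rule-based adaptation loop over data gathered from the cloud.
def adapt(readings, rules):
    """Evaluate every (condition, action) rule and collect triggered actions."""
    return [action for condition, action in rules if condition(readings)]

rules = [
    (lambda r: r["temperature"] > 80, "throttle-spindle"),   # overheating
    (lambda r: r["queue_length"] > 10, "reroute-parts"),     # congestion
    (lambda r: r["operator_override"], "ask-human"),         # human in the loop
]

actions = adapt({"temperature": 85, "queue_length": 3,
                 "operator_override": False}, rules)
# actions == ["throttle-spindle"]
```

Point 3 (human involvement) appears here only as one more rule; in practice, human feedback would also update the rule set itself, e.g. through learning.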

The ProSEco project will provide methodologies and tools for enabling the virtualisation of manufacturing components and their description in terms of services and skills. The semantics associated with a component is the baseline for analysing manufacturing components in a cyber space (the cloud), i.e. for enabling the extraction of data and their use in order to facilitate the decision-making process or to enable cross-factory collaboration features.

Finally, the last challenge is represented by the requirement of security and trust. As a matter of fact, a certain level of security and trust must be guaranteed in the cloud. Results from previous projects will be used and extended to improve secure communication and data exchange between virtual devices inside the cloud. It is important to stress that even if security and trust have been studied in depth in other research projects, security is a rising topic that becomes more and more important as commercial deployment approaches, meaning that further research on practical implementation is needed.


3.8 Service composition

3.8.1 DEFINITION

Services are the building blocks of a SOA; they provide simple interactions between client and service provider. However, sometimes atomic services need to be combined and/or assembled in order to generate more complex ones, raising the level of service abstraction, as referred to by [245]. In this scenario, as argued in [246], the term service composition refers to the process of developing a composite service. A composite service can be defined as a service obtained by composing the functionalities of several simpler services.

Currently, in the domain of SOA-based systems, two main approaches can be used for service composition [247]: orchestration and choreography.

3.8.2 SERVICE ORCHESTRATION

In the orchestration approach, a central node controls a workflow that interacts with a set of services following a predetermined logic during the execution of the corresponding complex process. The workflow logic involved in orchestration consists of several rules, conditions and events, i.e. it specifies how different entities should interoperate with the central node in order to carry out a predefined task. Therefore, from the point of view of an orchestration, the central node is the coordinator of the entire composition, while the services of the composition are merely components agnostic to the fact that they are taking part in a larger process [248]. Thereby, the process logic is centralized yet still extensible and composable, being at the same time a way to abstract a process into a single service.
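The orchestration pattern described above can be sketched in a few lines of Python. This is a didactic illustration with invented names: the partner services are plain callables that stay agnostic of the overall process, while the central node alone holds the workflow logic.

```python
# Sketch of a central orchestrator coordinating agnostic partner services.
class Orchestrator:
    def __init__(self, partners):
        self.partners = partners            # service name -> callable

    def run(self, workflow, payload):
        """Invoke the partners in the order fixed by the workflow logic."""
        for step in workflow:
            payload = self.partners[step](payload)
        return payload

# Hypothetical partner services, unaware of the larger process they join
partners = {
    "quote": lambda x: x + ["quoted"],
    "order": lambda x: x + ["ordered"],
    "ship":  lambda x: x + ["shipped"],
}
result = Orchestrator(partners).run(["quote", "order", "ship"], [])
# result == ["quoted", "ordered", "shipped"]
```

Because the whole composition sits behind `Orchestrator.run`, it can itself be exposed as a single service, which is the abstraction property noted above.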

A heterarchical approach is also possible by having several orchestrators at different levels of composition, i.e. one of the partners of the orchestration can itself be another orchestrator that encapsulates and orchestrates other partners in a hierarchical way. Since each orchestrator has its own process logic, it is possible to imagine an orchestrator that at some point during its process execution needs to invoke a service provided by another orchestrator (at the same composition level or any other), and vice-versa.

Several composition mechanisms are available for services in general and Web Services in particular, such as the Web Ontology Language for Services (OWL-S), the Business Process Execution Language (BPEL), and XLANG [249].

3.8.3 SERVICE CHOREOGRAPHY

Choreography defines same-level collaboration behaviour between distributed participants.

The goal is to set up an organized collaboration between different distributed services without any other entity controlling the collaboration logic, as discussed by [247]. Choreography is aimed at exposing the entire flow of interactions to all the parties involved in the composition [248]. It considers the rules that define both the messages and the interactions that must occur during the execution of a given process through a particular interface [250]. Therefore, a choreography schema assumes that there is no owner of the global collaboration logic; instead, the composition is spread among peers, in contrast to orchestration, where process execution is controlled by and incorporated in a single element.

In order to realise a given choreography, each service must know its own role within the current process, i.e. what the service supports and how to react or proactively execute in a particular context. These services are also referred to as participants. Each possible contact between two roles in a choreography is identified as a relationship. Multiple participants can assume different roles and have different relationships.
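The role-based view above can be sketched as follows: each participant only knows its own role, i.e. which messages it handles and what it emits next, and there is no central coordinator. Participant and message names are invented for illustration.

```python
# Choreography sketch: peers react according to their own roles only.
class Participant:
    def __init__(self, name, role):
        self.name, self.role = name, role   # role: message -> next message

def choreograph(participants, first_message):
    """Let messages flow peer-to-peer until no participant reacts any more."""
    log, message = [], first_message
    while message is not None:
        handler = next((p for p in participants if message in p.role), None)
        if handler is None:
            break
        log.append((handler.name, message))
        message = handler.role[message]     # emit the next message, if any
    return log

buyer = Participant("buyer", {"quote": "order"})
seller = Participant("seller", {"request": "quote", "order": None})
log = choreograph([buyer, seller], "request")
# log == [("seller", "request"), ("buyer", "quote"), ("seller", "order")]
```

Each `(message, next)` pair in a role corresponds to a relationship between two roles; the global conversation emerges from the local roles rather than from a single process definition.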

Service choreography is targeted by the Web Service Choreography Description Language (WS-CDL) [251], which can be used to specify the peer-to-peer collaboration of all the participants engaged in the choreography.

3.8.4 RELEVANT METHODS AND TOOLS

3.8.4.1.1 OWL-S

OWL-S (Semantic Markup for Web Services) is an ontology of services that pursues the automation of web service tasks, including automated service discovery, execution, interoperation, composition and execution monitoring. OWL-S supplies web service providers with a core set of markup language constructs for describing the properties and capabilities of their web services in an unambiguous, computer-processable form. The OWL-S ontology has three main parts: the service profile for advertising and discovering services; the process model, which gives a detailed description of a service's operation; and the grounding, which provides details on how to interoperate with a service. The development of the OWL-S specification took into account the following main features:

Automatic Discovery: OWL-S can provide computer-interpretable semantics for service descriptions.

Automatic Invocation: as with discovery, OWL-S can also support the automatic invocation of services by providing a generic way to describe a service and its functionalities in terms of interface and required parameters.

Automatic Composition and Interoperation: OWL-S provides declarative specifications of the prerequisites and consequences of the application of individual services, and a language for describing service compositions and data flow interactions.

3.8.4.1.2 Web Service Modelling Ontology

The Web Service Modelling Ontology13 (WSMO) provides a conceptual framework and a formal language for semantically describing all relevant aspects of web services in order to ease the discovery, composition and invocation of electronic services. Taking the Web Service Modelling Framework (WSMF) as its base, WSMO refines and extends it through a formal ontology and the Web Service Modelling Language (WSML). WSMF defines four main elements for describing semantic web services: Ontologies, which provide the terminology used by the other elements; Goals, which define the intentions that should be solved by a service; Descriptions, which define various aspects of the service; and Mediators, which resolve interoperability problems.

3.8.4.2 Open ESB

OpenESB14 is an ESB tool for building integration and SOA applications. Initially designed and developed by Sun Microsystems and SeeBeyond, it is today improved and maintained by an open community of developers spread around the world. Released under the CDDL license, the OpenESB community edition is free of charge for development and deployment. OpenESB offers a complete set of tools to design, develop, test and deploy integration applications and service-oriented applications. Relying on JBI (Java Business Integration), OpenESB proposes a development process that promotes migration to a truly service-oriented development and organisation.

3.8.4.3 jBPM

jBPM15 is a flexible Business Process Management (BPM) suite that bridges the gap between business analysts and developers. Whereas traditional BPM engines focus on non-technical people only, jBPM has a dual focus: it offers process management features in a way that suits both business users and developers.

The core of jBPM is a light-weight, extensible workflow engine written in pure Java that allows you to execute business processes using the latest BPMN 2.0 specification. It can run in any Java environment, embedded in your application or as a service.

13 http://www.wsmo.org/ 14 http://www.open-esb.net/ 15 https://www.jboss.org/jbpm


3.8.4.4 Apache ODE

The Apache ODE16 (Orchestration Director Engine) software executes business processes written following the WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long- and short-lived process executions to orchestrate all the services that are part of your application.

WS-BPEL (Web Services Business Process Execution Language) is an XML-based language defining several constructs to write business processes. It defines a set of basic control structures, like conditions or loops, as well as elements to invoke web services and receive messages from services. It relies on WSDL to express web service interfaces.
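The kinds of control structures WS-BPEL offers (sequences, conditions, loops, service invocations) can be illustrated with a toy process interpreter. This is a didactic Python sketch of the idea only, not the WS-BPEL execution model; activity tags, service names and variables are all invented.

```python
# Toy interpreter for BPEL-like control structures: sequence, if, while, invoke.
def execute(activity, env, services):
    kind = activity[0]
    if kind == "sequence":
        for child in activity[1]:
            execute(child, env, services)
    elif kind == "if":
        _, cond, then_branch, else_branch = activity
        execute(then_branch if cond(env) else else_branch, env, services)
    elif kind == "while":
        _, cond, body = activity
        while cond(env):
            execute(body, env, services)
    elif kind == "invoke":
        _, service, var = activity
        env[var] = services[service](env)   # call a partner "web service"

# A tiny ordering process: check stock, then ship or backorder
process = ("sequence", [
    ("invoke", "checkStock", "stock"),
    ("if", lambda e: e["stock"] > 0,
        ("invoke", "ship", "status"),
        ("invoke", "backorder", "status")),
])
services = {
    "checkStock": lambda e: 3,
    "ship": lambda e: "shipped",
    "backorder": lambda e: "backordered",
}
env = {}
execute(process, env, services)
# env == {"stock": 3, "status": "shipped"}
```

In real WS-BPEL these structures are XML elements (`<sequence>`, `<if>`, `<while>`, `<invoke>`) interpreted by an engine such as Apache ODE, and partner interfaces are declared via WSDL rather than Python callables.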

3.8.5 RELEVANT PROJECTS

Several research initiatives have pushed the early adoption of the SOA paradigm and of service orchestration and composition mechanisms in the context of industrial production systems. However, history shows that only a small part of the results obtained during these research projects was actually applied and used on a daily basis in real industrial production scenarios.

The results of previous small and large scale EU projects paved the way for the most recent ones. In this direction, the next sub-sections summarize the most recent projects that continue to address and lead the research community and to convince industrial partners by providing robust and innovative solutions/approaches as a means to overcome the difficulties faced by modern manufacturing industries.

3.8.5.1 Detailed Research Project Description

3.8.5.1.1 ITEA - SIRENA

The SIRENA17 project started in 2003 to leverage Service Oriented Architectures (SOA) to seamlessly interconnect (embedded) devices inside and between four distinct domains - the industrial, telecommunication, automotive and home automation domain. A framework was developed to achieve this aim as well as to assure interoperability with existing devices and extensibility of networks based on SIRENA technology. The core of the framework is the Devices Profile for Web Services (DPWS). The first DPWS stack worldwide for embedded devices developed by the SIRENA consortium is described and its operation evidenced in several demonstrators.

3.8.5.1.2 InLife

The InLife18 project explored how a combination of advanced Ambient Intelligence (AmI) and Knowledge Management (KM) technologies can be used to assure a sustainable and safe use of manufacturing and assembly lines (MAL) and their infrastructure over their life-cycle.

InLife specifically aimed to improve the whole-service-life operational costs and impact of MAL, providing new ways to monitor the life-cycle parameters of MAL on-line, together with improved services to support MAL along their whole life-cycle.

3.8.5.1.3 ITEA – SODA

The SODA19 project focused on building a complete service-oriented ecosystem that can be used throughout an application’s life cycle. It builds on the foundations laid by the ground-breaking ITEA SIRENA framework for high-level device communications that exploits service-oriented architecture (SOA).

16 http://ode.apache.org/ 17 http://www.itea2.org/project/index/view?project=57 18 http://sites.uninova.pt/i-control/software/project-inlife-fp6-nmp2-ct-2005-517018 19 http://www.itea2.org/project/index/view?project=163


3.8.5.1.4 SOCRADES

The SOCRADES20 integrated project created new methodologies, technologies and tools for the modelling, design, implementation and operation of networked systems made up of smart embedded devices. Achieving enhanced system intelligence by co-operation of smart embedded devices pursuing common goals is relevant in many types of perception and control system environments. In general, such devices with embedded intelligence and sensing/actuating capabilities are heterogeneous, yet they need to interact seamlessly and intensively over a (wired or wireless) network.

The middleware technologies developed in this project are based on the Service-Oriented Architecture (SOA) paradigm, encompass both wired and wireless networking technologies, and provide open interfaces enabling interoperability at the semantic level.

3.8.5.1.5 4CaaST

The 4CaaSt21 project aims to create an advanced PaaS Cloud platform which supports the optimized and elastic hosting of internet-scale, multi-tier applications and embeds all the necessary features easing programming of rich applications and enabling the creation of a true business ecosystem where applications coming from different providers can be tailored to different users, mashed up and traded together.

Overall, the project will bring significant benefits to the European economy via a greatly simplified design and delivery model for services and service compositions. The developed platform will lead to the establishment of new and highly dynamic and innovative service ecosystems.

3.8.6 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

The following pages present gaps and future directions as identified in the literature reviewed in the area of SOA. The need for introducing SOA concepts and associated engineering methods and tools into ProSEco is briefly introduced as a recommended direction for the project.

3.8.6.1 Tool Perspective

An overview of reported engineering tools covering different phases of the manufacturing life cycle, whether built on SOA technology or able to support SOA-based design, allows the following points to be made:

It is necessary to find a standard methodology and an associated tool to design SOA-based workflows supporting the Collaborative Environment for Eco-Design of Product-Services and service-based Production Processes. The ProSEco project will provide a user-friendly environment to enable the discovery and invocation of services in order to create and customise different types of products/processes for supporting services.

Given that products and production components are developed following SOA principles, the composition and orchestration of services need to be managed by the right orchestration or choreography tools. These tools may facilitate and build the Collaborative Environment. In this direction, the ProSEco project will provide an environment for service composition, i.e. the combination of atomic services in order to create high-level and abstract processes. To do that, languages such as BPEL and BPMN will be investigated, among others.

Strictly related to the previous two topics, the presence of a well-structured semantic description of services will be a necessary condition upon which it is possible to develop tools and mechanisms for service integration, composition, orchestration, and/or choreography. Currently, there is a lack of a coherent semantic representation of services. Hence, the ProSEco project will investigate mechanisms and tools for semantically describing services.

20 http://www.socrades.eu 21 http://www.4caast.eu/


An effective integration of orchestration and choreography tools with AmI tools would be desirable to support the workflow with context-aware information.

3.8.6.2 Managerial Perspective

A shop floor built on the basis of SOA concepts and technologies will be managed through service interaction, service composition, service orchestration and service choreography. Instead of being oriented towards traditional control and automation functions, the SOA-based shop floor (including the products) is completely managed following the concepts of business processes. SOA principles make a shop floor an asynchronous system, i.e. reactive to the events and information exposed a-la-carte by production components.
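The asynchronous, event-driven interaction style described above can be sketched with a minimal publish/subscribe bus; topic names and handlers are illustrative assumptions, not part of any ProSEco specification.

```python
# Minimal publish/subscribe sketch of an event-driven shop floor.
class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Every interested component reacts to the exposed event.
        for handler in self.subscribers.get(topic, []):
            handler(payload)

bus = EventBus()
reactions = []
# Two components subscribe to the same shop-floor event a-la-carte
bus.subscribe("pallet-arrived", lambda p: reactions.append(("route", p)))
bus.subscribe("pallet-arrived", lambda p: reactions.append(("log", p)))
bus.publish("pallet-arrived", "pallet-7")
# reactions == [("route", "pallet-7"), ("log", "pallet-7")]
```

The point of the sketch is structural: producers and consumers of events never reference each other directly, which is what makes the shop floor reactive rather than rigidly controlled.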

Service composition has mostly been applied within a single organization, supporting operational-level processes such as the shop floor. In ProSEco we are looking to expand it to a wider perspective, where there is a network of organizations, and to explore how it could support the organization at higher levels, such as strategic decision making. Extending the service composition model vertically, through different levels of the organization, and horizontally, by supporting different players in the network at the same time, could be a novelty of ProSEco.

3.8.6.3 Technical Perspective

ProSEco has an industrial use case based on a real production environment offered by DESMA. Such systems are built in a traditional fashion, i.e. with standard control and automation components and (not always) smart production components and systems. The introduction of SOA concepts, methods and tools in such industrial environments is a big challenge, from an organizational but particularly from a technical point of view. A fully SOA-based ProSEco infrastructure goes beyond the objectives of the project, but an initial feasibility study and the corresponding first steps of migrating those industrial environments to a SOA-based infrastructure would be highly recommendable.

As part of this feasibility study, possible activities covered in the R&D work packages would be:

Prototyping interfaces for SOA-based control and automation devices in laboratory premises at academic/research partners. The main focus in this topic will be the definition of clear interfaces and communication channels between the automation devices and the ProSEco solution.

Prototyping service orchestration (composition) engines to be embedded into control and automation devices and systems in laboratory premises at research and academic partners. The main focus here will be the dynamic adaptation of the service composition, i.e. service substitution, re-composition, and composition re-execution according to environmental information.

Related to the previous two topics is the "system-to-service" transformation. In this scenario, the ProSEco project will trigger this transformation by abstracting manufacturing resources in the cyber space in the form of services. Hence, it is expected that the ProSEco project will provide an important contribution in the direction of Cyber-Physical Systems (CPSs).

Prototyping the interaction between AmI tools and SOA-based control and monitoring tools in laboratory premises at research and academic partners.

Emulating parts of the ProSEco industrial demonstrators with SOA-based features in laboratory premises at research and academic partners, in order to evaluate the feasibility of using the SOA paradigm and its acceptance by the industrial partners. The main focus here will be the evaluation of the SOA paradigm in the context of real manufacturing production systems, in order to identify how novel paradigms based on software engineering techniques can enhance current monitoring and control solutions.

Generating a possible roadmap for introducing ProSEco, with its integrated SOA paradigm, into the industrial domain.


3.9 Ontologies applicable for Meta Products and context modelling

3.9.1 DEFINITION

According to Hazenberg et al. [252], meta products (the Greek “meta” means “higher or beyond”) are dedicated networks of services or products fed by the information flows made possible by the web and related frameworks.

In other words, meta products consist of physical elements and their corresponding web elements. The physical objects may have sensorial functions serving as inputs for the web elements or may have activator functions initiated by the web elements.

A related term is the "Internet of Things" (IoT), which refers to uniquely identifiable objects and their virtual representations in an Internet-like structure. The term was proposed by Kevin Ashton in 1999 [253], but the concept has been discussed in the literature since 1991 [254].

According to [255], the power of meta-products (or of the Internet of Things) lies in the possibility of transforming everyday life, if all objects were equipped with identifying devices or machine-readable identifiers. In [255], Magrassi says that a person's ability to interact with objects could be altered remotely based on immediate or present needs, in accordance with existing end-user agreements.

The right moment for meta-products is approaching: according to Gartner, there will be nearly 26 billion devices on the Internet of Things by 202022, and according to ABI Research, more than 30 billion devices will be wirelessly connected to the Internet of Things (Internet of Everything) by 202023.

In [256], Weber et al. note that the IoT facilitates the exchange of goods and services, while Kopetz says that the connection of physical things to the Internet makes it possible to access remote data and to control the physical world from a distance [257].

Cisco created a dynamic "connections counter" to track the estimated number of connected things from July 2013 until July 2020 (methodology included). This concept, where devices connect to the internet/web via low-power radio, is the most active research area in IoT.

3.9.2 RELEVANT METHODS AND TOOLS

As shown in the previous section, a meta product consists of two parts: a physical part and a logical (web-enabled) component, which communicate with each other. The logical component is a virtual copy of the real-world object. This shifts the value to the content (informational) part, rather than to the object itself. The logical component may undergo any data process: storage, computing, visualization, etc.

Figure 14 - The two components of a meta-product: the physical part and the logical part

3.9.2.1 Communication streams

22 http://www.gartner.com/newsroom/id/2636073 23 https://www.abiresearch.com/press/more-than-30-billion-devices-will-wirelessly-conne

A Meta Product may have different communication streams depending on the goals of the communication in the network (e.g. alerting the emergency room when the vital signs of a patient are outside the normal range) and on the touchpoints (sensors, web applications, RFID tags, machines and so on) that are best suited to fulfil those goals. To give an idea of what these communication streams might look like, Hazenberg et al. [252] categorize them into three basic types:

1. between a touchpoint and the actual world information (e.g. sensing your body's information, the energy in your house, the growth of a plant);

2. between touchpoint and touchpoint (e.g. machine to machine, sensors to actuators);

3. between a touchpoint and the meta data or information (e.g. smartphone connecting to the API of a web service).

3.9.2.2 Ontologies for meta-products

According to Wang et al. [258], the key concepts of meta-products are entities, resources and IoT services. The entity is the main focus of interactions by humans and/or software agents. IoT services expose resource functionality hosted on devices that provide some form of physical access to the entity. An entity is modelled to have attributes that tie it to the domain (i.e. observable or actionable features), location attributes, as well as type and identifier specifications. Also captured are optional temporal features and links to known vocabularies for specifying ownership. The resource model captures different resource types (e.g. sensor, actuator, RFID tag) and the hosting device location, as well as a link to the service model that exposes the resource capabilities. The service model exposes resource functionalities in terms of the IOPE (input, output, precondition, effect) aspects. The type of the service specifies the actual technology used to invoke the service (e.g. OWL-S, REST, etc.). Two important aspects captured in the service model are the service area and the service schedule. For sensing services, the service area is the observed area, while actuating services specify their area of operation. The possibility of specifying time constraints on service availability is captured through the service schedule feature. The IoT-A models form the basis of the model proposed in [258], which extends them to encompass service quality and testing aspects.
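The entity / resource / service triad summarised above can be sketched as plain Python data structures. The field names are illustrative paraphrases of the model described by Wang et al. [258], not the ontology's actual terms.

```python
from dataclasses import dataclass

@dataclass
class IoTService:
    """Service model: IOPE aspects plus service area and schedule."""
    inputs: tuple
    outputs: tuple
    precondition: str
    effect: str
    service_area: str          # observed or actuated area
    schedule: str              # time constraints on availability

@dataclass
class Resource:
    """Resource model: type, hosting device location, exposed service."""
    kind: str                  # e.g. "sensor", "actuator", "RFID tag"
    hosting_device_location: str
    exposes: IoTService

@dataclass
class Entity:
    """Entity model: identifier, location and domain attributes."""
    identifier: str
    location: str
    attributes: tuple          # observable or actionable features
    accessed_via: Resource

thermometer = Resource(
    "sensor", "room-12",
    IoTService((), ("temperature",), "sensor powered", "reading produced",
               "room-12", "24/7"))
room = Entity("urn:entity:room-12", "building-A", ("temperature",), thermometer)
# room.accessed_via.exposes.outputs == ("temperature",)
```

In the actual ontology these links are expressed as OWL classes and object properties rather than dataclass fields; the sketch only shows how the three models reference each other.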

That is why Wang et al. propose an ontology for representing knowledge (meta-products and their relationships) in the IoT [258]. From an architectural point of view, the ontology contains seven modules (see Figure 15):

1. IoT Service: the means of interfacing the physical world with context-intelligent business applications and services;

2. Service Test: proposed for testing the functional and non-functional capabilities of IoT services;

3. QoS and QoI: quality of service and quality of information, important concepts in many areas such as communications and networking;

4. Deployment, System and Platform: provides information about how resources are organized and deployed;

5. Observation and Measurement: contains the information collected from the physical world;

6. IoT Resources: focused on sensors and sensor networks;

7. Entity of Interest and Physical Locations: the object in the physical world which is of interest to the user.

Figure 15 – Architecture of the ontology proposed in [258]


The W3C Semantic Sensor Network incubator group developed the SSN ontology [259]. The ontology consists of 41 concepts and 39 object properties, which means 117 concepts and 142 object properties including those from DUL (DOLCE+DnS)24. The ontology can describe sensors themselves along with their accuracy and capabilities, observations and the methods used for sensing. Matters such as units of measurement and locations were left aside.

Another work [260] presents a semantic modelling and linked-data approach to create an information framework for the IoT. The paper describes a platform to publish instances of IoT-related resources and entities and to link them to existing resources on the Web. The developed platform supports the publication of extensible and interoperable descriptions in the form of linked data.

The IoT is supposed to integrate a huge number of heterogeneous entities, which is why a significant research effort has been dedicated to the interoperability of IoT. Van der Veer et al. present in [261] an overview of the approach of ETSI (the European Telecommunications Standards Institute) to ensure interoperable standards. Kotis [262] proposes semantic interoperability for interconnected and semantically coordinated smart entities in a Web of Things, i.e. a use case scenario and requirements related to the semantic registration, coordination and retrieval of smart entities. In another paper [263], the authors extend the previous work and propose an algorithm for automatically aligning semantic descriptions of smart/control entities; more specifically, they propose a configurable, multi-lingual, synthesis-based ontology alignment tool.

A toolkit is also available: the IoT Toolkit, an open-source project to develop a set of tools for building multi-protocol IoT gateways and services that enable horizontal cooperation between multiple different protocols and cloud services. The project consists of the Smart Object API, gateway services and related tools:

Smart Object API gateway service reference implementation;

HTTP-to-CoAP Semantic mapping proxy;

Gateway-as-a-Service deployment;

Application framework, embedded software Agents;

Semantic discovery and linkage, Linked Data compatibility;

Tools for multiple sensor net clients;

Raspberry Pi and cloud micro-instance deployment images (Ubuntu).

24 http://ontologydesignpatterns.org/wiki/Ontology:DOLCE+DnS_Ultralite


Figure 16 - The architecture of the IoT Toolkit25

3.9.3 RELEVANT PROJECTS

3.9.3.1 Airstrip patient monitoring

AirStrip Patient Monitoring provides clinicians with on-the-go access to detailed data and vital signs for their monitored patients. According to Milošević et al. [264], AirStrip Patient Monitoring creates a "virtual bedside" showing vital data in context, enabling clinicians to closely monitor patients from anywhere, at any time. This can dramatically reduce or eliminate time delays and miscommunication in the assessment and treatment of patients throughout the care continuum, including:

Critical care;

ED;

ICU;

Laboratory;

Pre-hospital (EMS);

Pre- and post-surgery;

Telemetry units.

With hundreds of hospitals already using AirStrip, AirStrip Technologies has an experienced point of view on how to achieve what it calls 'meaningful mobility' in healthcare. Meaningful mobility is defined by AirStrip Technologies as "native mobile technology that improves clinical decision making at the point of care through data transformation and the secure, real time, and ubiquitous delivery of visually compelling intelligence by incorporating evidence-based medicine and knowledge-based prompts."

3.9.3.2 Patent application WO2013085406 A1 - REMOTE SURVEILLANCE AND SIGNALING SYSTEM OF THE HUMAN BODY PARAMETERS

The patent [265] relates to a system that monitors body parameters, comparing them with the normal ones and remotely sends a warning signal, so that the patient or his tutors, or the special services can take

25 The source of the image, as well as the homepage of the project is http://iot-toolkit.com/


action. Obviously, the domain in which this invention can be applied is that of health: clinics, hospitals, emergency services, rescue, SMURD, social establishments, etc. The system consists of a sensory module (8) composed of sensors (9) for pulse, temperature, etc., an amplifier (10), a microcontroller (11), a wired or wireless communication interface (12), and a communication device (1) that contains a communication interface (2), an antenna (3), a telephone interface (4), a processor (6), a screen (7) and a GPS (5) (see Figure 17).

Figure 17 - Technical architecture of the device proposed within the patent

3.9.3.3 Emotiv EPOC

Another project described in [252] is Emotiv EPOC, a compact and affordable headset capable of measuring brain waves. It may be used as an interface for human-computer interaction. It can be used by developers (which is why it comes with its own API), but it can also be used to reflect facial expressions in the virtual world or to control the movement of game characters by measuring brain waves. There have been experiments linking the headset to a car, making it possible to drive the car with one's mind. This project is a very good example of an innovative way of connecting a physical object with its virtual entity.

3.9.3.4 Xively

Xively is an online database service that allows developers to create meta-products by connecting sensor-derived data to the Web and to build their own applications based on that data [266]. Following the nuclear accidents in Japan in 2011, Xively was used by volunteers to interlink Geiger counters across the country to monitor the fallout.

Xively manages millions of datapoints per day from thousands of individuals, organisations and companies around the world. Data can be pushed from the sensor or device end, with the data feeds then made available in multiple formats at no charge.
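Pushing a datapoint to such a service typically amounts to an authenticated HTTP request carrying a JSON feed update. The sketch below only builds such a request; the endpoint URL, feed ID, API-key header name and JSON layout are illustrative placeholders rather than a verified description of Xively's API.

```python
import json
import urllib.request

def build_feed_update(feed_id: str, api_key: str, channel: str, value: float):
    """Build (but do not send) an HTTP PUT that pushes one datapoint to a
    Xively-style feed. Endpoint layout and header name are illustrative
    placeholders, not a verified API reference."""
    body = json.dumps({
        "version": "1.0.0",
        "datastreams": [{"id": channel, "current_value": str(value)}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"https://api.example-iot-service.com/v2/feeds/{feed_id}.json",
        data=body,
        method="PUT",
        headers={"X-ApiKey": api_key, "Content-Type": "application/json"},
    )

req = build_feed_update("504", "SECRET_KEY", "radiation_cpm", 42.0)
print(req.get_method(), req.full_url)
```

The same pattern (device pushes, platform stores, feeds served back in several formats) is what the volunteer Geiger-counter network relied on.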

Many projects and applications have been developed on top of Xively. Mraz et al. have developed SenseMap [267], a web-based interactive visualization, monitoring and controlling tool. Yang et al. describe in [268] an automated parking management system, which might be a solution to urban parking problems. Kamilaris uses Xively to analyse the impact of remote sensing on the everyday lives of mobile users in urban areas [269]. This is one of the first major studies of how meta-products and the Internet of Things influence everyday life.


3.9.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

Although meta-products are a fairly new research topic, there are already stable commercial platforms, applications and projects that use the concepts of meta-products and the Internet of Things. A very good example is OpenIoT (www.openiot.eu), an open-source middleware for getting information from sensor clouds without worrying about which exact sensors are used. OpenIoT is pertinent to a wide range of interrelated scientific and technological areas spanning: (a) middleware for sensors and sensor networks; (b) ontologies, semantic models and annotations for representing internet-connected objects, along with semantic open-linked-data techniques; (c) cloud/utility computing, including utility-based security and privacy schemes. However, OpenIoT was not designed for collaborative work and collaborative meta-products. Within ProSEco, special care will be taken to make meta-products collaborative, because this will bring significant added value to the framework. This innovative concept of collaborative meta-products is also included in the Electrolux business case.

Another innovation to be brought by ProSEco resides in the fact that the physical layer of a meta-product may not be as tangible as a sensor or a product. We need to extend the concept to more abstract objects, such as classes of objects (washing powder, pre-cooked pizzas) or processes (e.g. a recipe). In this case the interaction between the two layers (digital and physical) becomes more abstract, and so does the communication between two meta-products.

In other words, the existing frameworks and concepts for meta-products need to be improved for:

Collaboration;

Abstraction of the physical layer of meta-products.


3.10 Tools for collaborative product/process design

3.10.1 DEFINITION

In recent years, collaborative software, which started basically as a set of tools to enhance personal group communication, mostly over the Internet, has attracted dramatically increasing interest as an effective tool in working environments. The analysis of industrial requirements poses new demands on ICT services for collaborative work in industrial environments, so a collaborative product/process design software platform should offer:

Support for Extended Enterprise (EE) environments (customers, manufacturers, providers…)

Support for different collaboration patterns, with special emphasis on temporal aspects (synchronous and asynchronous collaboration)

Support for distributed work, including:

o Identification of appropriate expertise

o Team forming

o Checking availability

Support effective knowledge management and decision making

Support rich communication services, enabling multimedia content sharing

Integration with other services and content providers

Support dynamic changes in work conditions

Such platforms will most often be based on service-oriented architecture (SOA) principles.
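The SOA principle behind such platforms can be illustrated with a minimal service registry, in which capabilities (such as the expertise identification listed above) are discovered by name rather than bound to a concrete implementation. All capability and service names below are invented for the example.

```python
# Minimal sketch of the SOA idea: services register under a capability
# name; clients discover and invoke them without knowing the concrete
# implementation. Names are illustrative.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, capability: str, service) -> None:
        self._services.setdefault(capability, []).append(service)

    def discover(self, capability: str):
        providers = self._services.get(capability)
        if not providers:
            raise LookupError(f"No service offers '{capability}'")
        return providers[0]  # a real platform would load-balance / match QoS

registry = ServiceRegistry()
# An "expertise identification" service, one of the requirements above.
registry.register("find-expert", lambda topic: f"expert-on-{topic}@example.org")

service = registry.discover("find-expert")
print(service("eco-design"))
```

The point of the indirection is that the platform can swap or add providers of a capability without any change on the client side.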

3.10.2 RELEVANT METHODS AND TOOLS

3.10.2.1 Knowledge management

Knowledge modelling and management is a key issue in any collaborative product/process design. Several relevant technologies should be mentioned in this regard:

3.10.2.1.1 Semantic Web

The Semantic Web is a collaborative movement led by the W3C which promotes common data formats on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims at converting the current web, dominated by unstructured and semi-structured documents, into a "web of data".

According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries." The term was coined by Tim Berners-Lee for a web of data that can be processed by machines.


Figure 18 – The various stages in the development of the web: Web 1.0 static uni-directional information transfer, Web 2.0 dynamic multi-directional information transfer, and Web 3.0 adds inferred knowledge to increase the quality of the information transfer.

3.10.2.1.2 OWL

The Web Ontology Language (OWL) is a family of knowledge representation languages or ontology languages for authoring ontologies or knowledge bases. The languages are characterised by formal semantics and RDF/XML-based serializations for the Semantic Web. OWL is endorsed by the World Wide Web Consortium (W3C) and has attracted academic, medical and commercial interest.

The data described by an ontology in the OWL family is interpreted as a set of "individuals" and a set of "property assertions" which relate these individuals to each other. An ontology consists of a set of axioms which place constraints on sets of individuals (called "classes") and the types of relationships permitted between them. These axioms provide semantics by allowing systems to infer additional information based on the data explicitly provided. A full introduction to the expressive power of the OWL is provided in the W3C's OWL Guide.
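The inference behaviour described above can be illustrated with a toy triple store that forward-chains over subclass axioms. This is a strong simplification of what an OWL reasoner does, and the class and individual names below are invented for the example.

```python
# Toy illustration of ontology-based inference: from explicit class
# membership and subclass axioms, additional facts are derived. Only
# rdfs:subClassOf is chained here; real OWL reasoners cover far richer
# axioms. Class/individual names are invented.

TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def infer(triples: set) -> set:
    """Saturate a triple set under two rules:
       (C subClassOf D) & (D subClassOf E)  =>  C subClassOf E
       (x type C)       & (C subClassOf D)  =>  x type D"""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s1, p1, o1) in inferred:
            for (s2, p2, o2) in inferred:
                if p1 == SUBCLASS and p2 == SUBCLASS and o1 == s2:
                    new.add((s1, SUBCLASS, o2))
                if p1 == TYPE and p2 == SUBCLASS and o1 == s2:
                    new.add((s1, TYPE, o2))
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

kb = {
    ("WashingMachine", SUBCLASS, "Appliance"),
    ("Appliance", SUBCLASS, "Product"),
    ("wm42", TYPE, "WashingMachine"),
}
closure = infer(kb)
print(("wm42", TYPE, "Product") in closure)  # inferred, never stated
```

The fact that `wm42` is a `Product` was never asserted; it follows from the axioms, which is exactly the added value an ontology brings over a plain database.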

3.10.2.1.3 Business Process Modelling (BPM)

Business process modelling (BPM) in systems engineering is the activity of representing the processes of an enterprise so that the current process may be analysed and improved. BPM is typically performed by business analysts and managers who are seeking to improve process efficiency and quality. The process improvements identified by BPM may or may not require information technology involvement, although that is a common driver for the need to model a business process by creating a process master. Business process modelling results in improvements to the way tasks are performed by the business: modellers can pick up errors or drawbacks in the way processes are currently being performed and model an improved way of carrying them out.

BPM suite software provides programming interfaces (web services, application program interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine. This component is often referenced as the engine of the BPM suite.

Programming languages that are being introduced for BPM include Business Process Modelling Notation (BPMN), Business Process Execution Language (BPEL), Web Services Choreography Description Language (WS-CDL) and XML Process Definition Language (XPDL).
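The core idea of a BPM engine, executing a process model that is itself data, can be sketched minimally as follows. The step names and process variables are invented; real suites execute BPMN or BPEL models rather than Python lists.

```python
# Minimal sketch of the BPM-engine idea: the process model is data (a
# list of named steps) and a generic engine executes it, threading a
# shared context (the "process variables") through the steps.

def run_process(model, context):
    """Execute each step of the process model in order."""
    for name, task in model:
        context = task(context)
        context.setdefault("log", []).append(name)  # audit trail
    return context

order_process = [
    ("receive order", lambda ctx: {**ctx, "status": "received"}),
    ("check stock",   lambda ctx: {**ctx, "in_stock": ctx["qty"] <= 10}),
    ("confirm",       lambda ctx: {**ctx, "status": "confirmed"
                                   if ctx["in_stock"] else "rejected"}),
]

result = run_process(order_process, {"qty": 3})
print(result["status"])
```

Separating the model (data) from the engine (generic code) is what lets BPM suites expose process definitions to analysts while the engine stays unchanged.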


3.10.2.1.4 Building Information Modelling (BIM)

Building Information Modelling (BIM) is an example of how knowledge management and collaboration techniques can be applied to building and infrastructure projects. It is becoming more and more important for managing complex collaboration and communication processes in such projects. BIM comprises two main aspects: an intelligent model and an approach for integrated collaboration, with the focus on open information sharing and integration. Furthermore, BIM encompasses both a framework and a technology. The main aspects of BIM and their interrelationships are illustrated in the following scheme.

Figure 19 – BIM main aspects and their interrelationship (by Dr. Rizal Sebastian, TNO, NL)

Organisationally, such projects involve many stakeholders, including multiple clients and end-users; multidisciplinary advisory, design and engineering teams; and numerous construction companies, specialist contractors and suppliers. The way these stakeholders collaborate is subject to decisions among many possible procurement and contractual arrangements.

3.10.2.2 Web Social Networks

Web social networks, not so long ago present mainly in private lives, are becoming increasingly important technologies in working environments, as content providers and communication enablers.

3.10.2.2.1 LinkedIn

LinkedIn is a social networking website for people in professional occupations. Founded in December 2002 and launched on May 5, 2003, it is mainly used for professional networking. In 2006, LinkedIn grew to 20 million viewers. As of June 2013, LinkedIn reported more than 259 million users in more than 200 countries and territories.

3.10.2.2.2 Facebook

Facebook is an online social networking service. Its name comes from a colloquialism for the directory given to students at some American universities. Facebook was founded in February 2004. The founders had initially limited the website's membership to Harvard students, but later expanded it to colleges in the Boston area, the Ivy League, and Stanford University. It gradually added support for students at various other universities before it opened to high-school students, and eventually to anyone aged 13 and over. Facebook now allows anyone who claims to be at least 13 years old to become a registered user of the website.



3.10.2.2.3 Twitter

Twitter is an online social networking and microblogging service that enables users to send and read "tweets", which are text messages limited to 140 characters. Registered users can read and post tweets, but unregistered users can only read them. Users access Twitter through the website interface, SMS, or mobile device app. Twitter was created in March 2006 and by July 2006, the site was launched. The service rapidly gained worldwide popularity, with 500 million registered users in 2012, who posted 340 million tweets per day.

3.10.2.3 HTML 5.0

HTML was primarily designed as a language for semantically describing scientific documents, although its general design and adaptations over the years have enabled it to be used to describe a number of other types of documents. The main area that has not been adequately addressed by HTML is a vague subject referred to as Web Applications. HTML 5.0 attempts to rectify this, while at the same time updating the HTML specifications to address issues raised in the past few years.

Its core aims have been to improve the language with support for the latest multimedia while keeping it easily readable by humans and consistently understood by computers and devices. It includes detailed processing models to encourage more interoperable implementations; it extends, improves and rationalises the markup available for documents, and introduces markup and application programming interfaces (APIs) for complex web applications. For the same reasons, HTML5 is also a potential candidate for cross-platform mobile applications. Many features of HTML5 have been built with the consideration of being able to run on low-powered devices such as smartphones and tablets.

3.10.2.4 Collaborative product commerce (CPC)

Designing products and bringing them to market quickly, predictably, and with a high degree of responsiveness are critical elements for success. Products today have ever-shorter lifecycles, unpredictable demand levels, greater SKU (stock-keeping unit) variation, and an unprecedented number of features and user-selected preferences. To cope with this reality, geographically scattered design teams and supply chain partners are collaboratively designing products on a virtual basis. Static designs are being replaced by mass customization, often using predefined modules or building blocks to rapidly configure new product platforms that can be flexibly managed throughout their lifecycle. In this dynamic environment, collaborative product commerce (CPC) has emerged as a critical capability that organizations must acquire to remain competitive.

Collaborative product commerce is an e-business strategy for exploiting new Web-based commerce opportunities across product development and product life cycle processes. CPC opportunities include both inbound (business-to-business) and outbound (business-to-consumer) commerce such as collaborative product development, customer driven design, collaborative product and component sourcing, manufacturing/supply-chain collaboration, and product maintenance self-service portals. It improves the management of a product's entire lifecycle. The benefits of CPC include faster time to market, lower overall supply chain costs, and more satisfied customers. It enables all intra- and inter-enterprise participants to work together from any geographic location with full data security and control through tailored, role-based workflow. CPC uses Web-based enterprise application integration, leverages existing data and systems, and couples with Web-based workflow to provide closed-loop business process management and execution. Effectively executed, CPC can shave days, weeks, or even months off the critical gap between concept and product launch.

3.10.2.5 Repcon Configurator

Repcon Configurator is a software suite that provides users with advanced configuration techniques which simplify the management of industrial product information and final customer requirements. Some of its features are:

Multilevel product navigation and configuration with aids in the form of images associated with the options;

Ability to perform configuration based on 2D-3D product geometry;

ERP integration module (BaaN, SAP…);


Output generation: BOM, route, code, name, documentation, drawings, custom calculations, product cost;

Creation and maintenance of quotes which include various custom products;

Web integration.

Figure 20 – Repcon Configurator. Configuration Agent web tool

3.10.2.6 Empresa 2.0

Empresa 2.0 is a social intranet and extranet. This collaborative solution provides a private social network for employees, customers, account managers and/or resellers, loyalty card holders, etc. Social networks can also be organized around a specific product, a new brand, or a professional event (congresses).

The core component for Empresa 2.0 is Visualizar 2.0, which offers the following functionalities:

See all information required from a single solution;

Internal information that can be accessed includes: folders, documents, projects, the internal portal, and information systems (ERP, CRM, …);

It also integrates with external information sources: external webs, social networks;

It visualizes different content types: documents, photos, folders, external links;

It runs on different devices: computer, smartphones, tablets.

Figure 21 – Empresa 2.0 integrated information sources and content types


Empresa 2.0 also offers the possibility to introduce comments to every piece of information (project, folder, document, photo, external news, internal blog post …). It includes blogs and forums, allows for the creation of specific communities inside the main social network, events announcements and management, among several other customizable possibilities.

3.10.3 RELEVANT PROJECTS

3.10.3.1 COINS and VISI initiatives in the Netherlands

COINS and VISI are initiatives to apply collaborative design techniques in the building industry domain.

VISI is a Dutch acronym standing for "Creating conditions for introducing ICT standardisation in the building industry"; it has the following objectives:

Parties get general-purpose agreements at their disposal with regard to the content and organization of communication;

As a result of these agreements, parties will be able to start collaborations and set up communication structures faster and more flexibly;

Parties will be able to act more verifiably towards the outside world and increase the overall quality of the resulting product;

Using these agreements will lead to better exploitation of the available information and communication technology.

Simply put, VISI aims to establish unambiguous agreements on digital communication at the information-exchange interfaces between business partners in a building project. These agreements should enable parties to find each other's information effortlessly.

COINS is another Dutch initiative to develop a flexible BIM standard which should be applicable at any lifecycle stage of a building product and which should not be too restrictive in the way business partners are able or willing to communicate in a building project. For example, 3D models are preferred; however, 2D drawings can perfectly well be fitted into the total information structure. In this way no obstacles are put up in advance to switching over to this method of information exchange.

Where VISI aims to improve the communication process in a building project, COINS aims to improve the object oriented content of the communication transfers themselves.

3.10.3.2 SEVENPRO

SEVENPRO is an FP6 project seeking to improve the process of product engineering and development in manufacturing and engineering companies by means of semantically (ontology-based) supported acquisition, formalisation and usage of knowledge. For that purpose, it developed technologies and tools supporting the annotation and deep mining of product engineering repositories, enabling semantically enhanced Virtual Reality interaction with engineering knowledge in engineering teams, and aiming to be useful in other domains as well. Repositories under focus for semantic annotation and integration include CAD designs, documentation and ERP databases. The most significant advantages brought by the project are enhanced sharing and reuse of knowledge during product development and along the whole knowledge lifecycle.

From a functional point of view, the project achieved a complete integrated environment for improving productivity and re-utilisation within and between corporate teams, with special emphasis on the engineering teams of industrial companies (SMEs and large), making the knowledge management concept a reality for effectively managing the main added value of European companies.

From a technical point of view, the project has led to an environment totally driven by semantic technologies that, apart from exploiting all the flexibility these technologies bring, also implements the technical requirements of an application deployable in real companies: performance, security and scalability.


3.10.3.3 SWOP

SWOP is another FP6 project targeting a breakthrough in manufacturing by offering a new and extensible approach for ICT support in the engineering of complex products, covering two main aspects: product development and product configuration (or customisation). There are two main business drivers behind this innovation: (1) to reduce wasted effort in terms of cost and time in re-designing and re-specifying products when for the most part the work has been done before, and (2) to configure solutions from pre-defined partial solutions ('modules') rather than design from scratch. When there are choices, product configurations are optimized in SWOP by applying optimisation techniques (Genetic Algorithms, GA) so that the resulting product is not just a valid solution but a near-optimal one under the given design constraints, end-user requirements and optimisation criteria. Thus, the main expected impact is to transform the engineering process from a low-productivity, craft-like repetitive activity into a systemised knowledge-driven business.
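The genetic-algorithm idea behind such configuration optimisation can be sketched as follows. The module costs, weights and the weight constraint are invented for illustration; SWOP's actual encoding and operators are not documented here.

```python
import random

# Toy GA in the spirit of GA-based product configuration: each gene picks
# one pre-defined module variant per slot; fitness rewards low cost and
# penalises constraint violations. Module data and the cap are invented.

random.seed(1)

MODULES = [                            # (cost, weight) per variant
    [(5, 2.0), (8, 1.0), (3, 3.5)],    # slot 0 variants
    [(4, 1.5), (6, 0.8)],              # slot 1 variants
    [(7, 2.2), (2, 4.0), (5, 1.9)],    # slot 2 variants
]
MAX_WEIGHT = 6.0                       # assumed end-user requirement

def fitness(genome):
    cost = sum(MODULES[i][g][0] for i, g in enumerate(genome))
    weight = sum(MODULES[i][g][1] for i, g in enumerate(genome))
    penalty = 100 * max(0.0, weight - MAX_WEIGHT)  # constraint violation
    return -(cost + penalty)           # higher is better

def evolve(generations=40, pop_size=20):
    pop = [[random.randrange(len(slot)) for slot in MODULES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(MODULES))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.2:              # mutation
                i = random.randrange(len(MODULES))
                child[i] = random.randrange(len(MODULES[i]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Even this toy version shows the essential property: the result is a valid configuration (the weight constraint is satisfied) found by search rather than by exhaustive enumeration.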

3.10.3.4 SODA

SODA is an ITEA project which is a good example of the SOA approach and of business process modelling techniques applied to integration with device-level application design.

SODA main objectives were:

Implementing a complete ecosystem for designing, building, deploying and running device-based applications using a service-oriented architecture (SOA) approach;

Providing a complete set of tools for the entire system lifecycle support;

Ensuring seamless integration of device-provided services with high-level business processes;

Developing elaborate experimental applications in several application domains: industrial automation, telecommunications, home networking and automotive electronics.

The SODA ecosystem features:

an infrastructure for service-oriented systems based on embedded devices;

a methodology and guidelines helping users to move from traditional architectures towards service-oriented ones;

a tooling chain supporting the design, development and operations of service-oriented applications.

3.10.3.5 Trans-IND

Trans-IND is a European Commission Seventh Framework Programme project which serves as an example of a collaborative approach and knowledge management techniques, in this case applied to bridge projects.

The Trans-IND Software Platform (TSP) is intended to support all actors involved in a bridge project along the whole lifecycle of the bridge. Potential end users of the software include but are not limited to: owner/client, designers and engineers, manufacturers, contractors, suppliers, inspectors, certification authorities.

The TSP provides broad functionalities that support the actors of the value chain in achieving their tasks more efficiently and in accessing up-to-date information about the project. All these functionalities are accessed via a web browser, without requiring the distribution/installation of dedicated software.

Hence, the TSP is a single entry point to access functionalities that can be roughly grouped as follows:

Configuration;

Manufacturing support;

Assembly support;

Knowledge Management (information and document retrieval).


Figure 22 – Trans-IND software architecture layers

3.10.3.6 SOFIAS

SOFIAS is a project financed by the Spanish Government oriented to the development of a software suite to help building professionals design ecologically sustainable buildings.

Among the goals of this software suite are:

The development of a simulation module to help reduce the energy consumption and economic costs of the building;

To provide a collaborative tool which empowers information and knowledge exchange among the different agents participating in the value chain of the project, from providers to manufacturers, engineers and builders.

3.10.3.7 Product Modelling using Semantic Web Technologies

This was an initiative hosted by a W3C Incubator Group (XG). The incubator groups' mission was to foster rapid development, on a time scale of a year or less, of new Web-related concepts. Target concepts included innovative ideas for specifications, guidelines, and applications that were not yet clear candidates as Web standards developed through the more thorough process afforded by the W3C Recommendation Track.

This XG sought to enable the use of the (Semantic) Web for Product Modelling (PM): the definition, storage, exchange and sharing of product data. Product data is information about the structure and behaviour of things that are realized in industrial processes. So principally product data is about things that are manmade, but it can also be about things in the natural world that interact with those industrial processes and/or its resulting products. Typical products would include automobiles, airplanes, buildings, infrastructures, ships and other manmade complex products.

The scope of this XGR was 'Product Modelling (PM)' with 'Semantic Web (SW)' technologies, most notably the Web Ontology Language (OWL) but potentially also specifications like SPARQL and RIF (or related specifications like UML, ODL and SysML). A product ontology can be defined from scratch in OWL, but it was found that many generic features could be modelled by a reusable, generic upper ontology for product modelling. This upper ontology or "core" could be imported and specialised for any product ontology, which in turn could be instantiated into specific product models associated with real-life (existing or planned) products. Put differently, the idea was that this PM XG would address an OWL-based 'language' for product ontology development, as shown in the next figure:


Figure 23 – Schema for Product Modelling XG OWL-based ontology development

3.10.4 MAIN GAPS IN THE STATE-OF-THE-ART AND RECOMMENDATIONS FOR PROSECO

According to the technologies, tools and projects studied, certain improvement opportunities can be considered:

3.10.4.1 Lack of services approach

Product design methodologies and tools are usually almost completely manufacturing-oriented, centred on the physical aspects of the product. Therefore, more intangible aspects, such as services related to the product, are not properly addressed.

We strongly believe that most of the principles applied to collaborative product design and configuration can easily be applied to services design and configuration too. These principles include:

Using ontologies as a powerful knowledge management tool;

Encouraging the reuse of previously designed elements;

Cloud-based approach;

SOA software design principles.

3.10.4.2 Lack of embedded ecoDesign concepts and considerations

It is not very common to find ecological considerations in the product design world. Yet it is precisely one of ProSEco's main goals to integrate lean and eco-design principles, and to apply Life Cycle Assessment techniques, in the collaborative design of product-services.

Not only is it required to assess the environmental impact of the designed products and services, but it is necessary to take into account this issue from the very beginning of the design process, allowing for optimisation of the overall environmental impact through the whole life cycle of the product/service.


3.10.4.3 Allowing for collaboration among internal and external actors

In order to maximise the possibilities for information and knowledge exchange among all actors involved in the design process, a collaborative product/process design tool should be seamlessly extensible to include extranet actors (such as clients or providers) as well as external information sources (for instance, external web sites).

For instance, it may be desirable to create mixed teams of internal employees and external collaborators, with restricted access to some of the internal product information and the possibility to organise their own blogs or forums. Usually, very technical product design tools lack this openness.

3.10.4.4 Real integration between collaboration tools and design tools

Related to the previous point, it is uncommon to find technical product design tools really prepared to enhance online user-friendly communication exchanges among those participating in the design process.


3.11 Analysis of the relevant standards

3.11.1 DEFINITION

Standards are crucial for efficient interactions whether it be between organisations such as manufacturers and services providers or with individuals who have purchased products. They are often a prerequisite for systems to be able to communicate for data interchange. They ensure uniqueness and reduce ambiguity of data, which is important for the retrieval of reliable product or context related information in an open environment and for the attribution of data to an individual product or consumer. They also are essential building blocks for providing the functionalities envisioned by ProSEco through the provision of an open and extensible framework for collaborative design of product-services and production processes.

This analysis focuses on standards that support business processes and transactions enabled by the ProSEco framework and technologies, describing the technology- and product-based standards that are expected to be exploited within ProSEco and that in some cases may be extended to support new ProSEco capabilities.

The set of standards described can be roughly grouped into platform, product and entity identification and semantic technology standards.

3.11.2 RELEVANT STANDARDS ORGANISATIONS AND RESULTS

Many standardisation processes in principle follow the same sequential steps, but they do not necessarily generate the same results. This may be due to differences between the nature of the standards developing organisations (SDO), or due to a specific approach (e.g. formal or non-formal) towards standardisation processes. It can also be a result of an organisation’s participants aiming at specific standardisation deliverables (e.g. guidelines documents or test specifications).

When considering the various candidate standards related to ProSEco we must consider the differences between types of standards bodies, as well as differences between the standardisation processes they support, and between the standardisation deliverables they produce.

3.11.2.1 Different types of standards bodies

On a European level, there are three formal standards organisations: CEN (European Committee for Standardization), CENELEC (European Committee for Electrotechnical Standardization) and ETSI (European Telecommunications Standards Institute). These are recognised by the European Commission and meet the World Trade Organisation criteria for standards setting. All three have cooperation arrangements in place with their global counterparts: ISO, IEC and ITU. In addition, there are several formal standards bodies working at a national level which also have wider impact (e.g. DIN, the German Institute for Standardization; ANSI, the American National Standards Institute; or BSI, the British Standards Institution).

Formal standardisation processes require relatively long periods for approval processes to be completed. However, many aspects of ICT standardisation are covered by industry consortia and trade organisations rather than formal standards bodies. Industry consortia do not primarily aim at producing formal standards, and often set out to address or resolve only a limited number of specific issues. Despite the less formal character of the industry standards they produce, their strong focus on specific market segments or technical challenges often proves to be an efficient way of generating the critical mass among stakeholders that is necessary for successfully completing standardisation processes.


3.11.2.2 Different types of standardisation results

The ICT standardisation environment related to ProSEco is characterised by a large number of standards bodies, generating an even larger number of standardisation activities. Even with these differences, however, the outputs resulting from these activities can be grouped as follows:

Formal standards, sometimes also referred to as de jure standards, are normative documents from formal standards bodies and have passed through a full and open consensus process. They are implemented on a national level and there is strong pressure to apply them; formal standards have a legal basis and can be made mandatory but considerable time (up to 4 years) is needed for completing the full approval process.

Technical or industry specifications are based on consensus among members of standards bodies, consortia or trade organisations and do not have a formal character or legal basis; they are recommendations and require less time to produce (1-3 years) but when widely accepted and used in practice by relevant market players they can become de facto standards.

Workshop Agreements are industry recommendations developed by interested stakeholders through a short-track process (6-12 months) often facilitated by several formal standards bodies; workshop agreements serve as industrial consensus documents between participating individuals and organisations, and can be revised relatively easily.

Conformance, test applications, reference implementations and guidelines aim to support interoperability between, and easy rollout by market players of, products and services based on formal standards or industry specifications. They have an informative character and are usually produced in a relatively short timeframe (6-12 months).

Technical reports are informative documents supporting further standardisation work, e.g. by identifying the need for additional technical clarifications in – or between – existing specifications, standards, or guideline documents.

Both formal standards and industry specifications that are developed in an open process and are publicly available under so-called Fair, Reasonable and Non-Discriminatory (FRAND) terms can be regarded as 'open standards'. Nevertheless, there can be a trade-off between the formal impact of a standard and the amount of time (and in some cases also resources) it takes to produce.

The work within ProSEco may contribute to the establishment of new standards, or the extension of existing ones, that may be needed to support the extensible framework for collaborative design of product-services and production processes that will be developed in the project. The type of standards results that will be the focus for the project will be dependent on the SDOs that are targeted.

3.11.2.3 Emerging standards

The ProSEco project is addressing state-of-the-art technologies for a Cloud based framework to enable collaborative design of product-services and production processes, which is a relatively new field of technology development. Some of the relevant technologies and specifications related to ProSEco are de facto standards that are recognised more for their widespread usage or acceptance, rather than having passed through a process of industry review and consensus or formal approvals.

These emerging standards often have associated communities of users who create momentum for industry acceptance and adoption even though the standards development process is often no more structured than a website for downloading a specification or an open source reference implementation, and a discussion board where the main contributors to the specifications and industrial and academic users can interact. Nonetheless, there are several examples of successful industry standards that were established through very similar arrangements (e.g. Linux).

A natural progression for many of these emerging standards is that alternative implementations or specifications appear which are driven by different application domain specific requirements. When this occurs and the emerging standards reach a level of maturity, efforts to converge the various alternatives are undertaken. This convergence process functions quite similarly to the consensus process utilised by member based standards bodies, consortia or trade organisations.

The ProSEco project expects that some of the specifications and technologies that will eventually be considered as established ProSEco standards will follow similar informal paths to industry acceptance and de facto standardisation. For this reason, some of the relevant emerging standards related to ProSEco research and development have been included in this analysis.


3.11.3 RELEVANT PLATFORM STANDARDS

As shown in Figure 1 in the introduction section, ProSEco is composed of various components that interact to provide a set of services. Some of these services are visible directly as user features, while others are enabling services that are used to implement higher-level user features or to ensure that certain characteristics of the ProSEco framework are maintained, such as those related to performance levels, security, interoperability and many others. These interactions, and the desire for interoperability while attaining required performance and security levels, strongly motivate the consideration of various standards when specifying the ProSEco platform.

The main technology areas where there are opportunities to exploit standards within the ProSEco platform are the following:

Application development

Integration mechanisms

Server-side components

Event processing

System and data integration

Security

The ProSEco platform components themselves must interact and operate as an integrated system to deliver the required services, and must support the exchange of information both internally and with external resources and products. Finally, all of the activities carried out in ProSEco must be built upon security technologies that provide the required levels of assurance for both manufacturers and product consumers.

The following sections describe the opportunities in each of the main platform technology areas to exploit established and emerging standards.

3.11.3.1 Application development

The ProSEco platform will need to support the development of custom applications for creating new and innovative PES. To ease the work of software developers during their implementation of applications, specific standards for development tools and for technologies hiding the complexity of the ProSEco platform will likely be needed.

The core technologies where standards exist that can be used for ProSEco application development are:

Languages – used by application developers to implement PES features in software. These are used to implement application logic, present and manipulate the interface to users, and to implement the protocols to interact with ProSEco components. Potential candidates for language standards for developing ProSEco functionalities and application software are Java, C++, C#, HTML, HTML5, CSS and JavaScript. For a brief description of each of the language standards see Annex 1.

Development environments – provide a set of supporting tools and plug-ins to assist the application developer in automating the design, testing and deployment of application-specific services (PES) implemented in one or more programming languages. The industry-recognised development environments Eclipse and NetBeans are potential candidates for developer support in creating ProSEco PES applications. They are standards of certain fora or consortia and are described in Annex 1.

The ProSEco project will likely use the Java programming language standard as the basis for implementing most of the components of the ProSEco platform. However, this does not preclude the use of other language standards for the development of PES applications that exploit the ProSEco platform.


3.11.3.2 Integration mechanisms

The application integration features of the ProSEco platform may target the ability to organise and customise product- and application-specific environments by combining parts of different ProSEco core services to present various features and capabilities to the product owner. Several tools are available that could be utilised to provide this kind of integration. Some examples are:

Apache Rave – an open source social mashup engine developed under the umbrella of the Apache Software Foundation. Published applications can be added to and freely arranged in a user’s personal workspace.

JBoss GateIn – allows a user to create a personal dashboard and populate it with several applications (so-called portlets).

Standards that are used for these types of application integration tools are primarily specifications for widgets and portlets, which are parts of applications that can be manipulated within a product- or application-specific working space or collaboration environment. Related standards of certain companies, fora or consortia in these areas are Widgets and Java Portlets, which are described in Annex 1.

3.11.3.3 Server-side components

ProSEco utilises a Service Oriented Architecture (SOA) to provide service interfaces through which product- and domain-specific applications obtain functionality based on context, knowledge stores, AmI monitoring and environmental aspects. Application-specific PES exploit these features, provided on the server side within the client-server model used for ProSEco applications. Standards exist to support ProSEco in the implementation of server-side services, which also provide the necessary interfaces, and of a web or application server which executes the implemented services. The most relevant standards for ProSEco server-side components are described in Annex 1. They are Java Servlet, JavaServer Pages and WebSockets.

The following are possible candidate server implementations that support all or some subset of the standards mentioned. These implementations have largely become de facto standards from their widespread usage:

Apache Tomcat – Tomcat is an open source application server for Java-based applications. It implements the Java Servlet and JavaServer Pages specifications directly, but can also be extended to support the whole Enterprise Java stack.

Eclipse Jetty – Jetty is an alternative application server to Tomcat. It also implements the Java Servlet and JavaServer Pages specifications, but additionally provides support for the WebSockets protocol.

Spring – Spring is one of the most popular frameworks for enterprise Java applications. It consists of a variety of different modules, which can be used in very different scenarios and has facilities for linking to social networking sites like Facebook and Twitter.

CometD Java Server – CometD framework allows the technology independent integration of server push mechanisms.

The ProSEco project will likely utilise one of these server implementations within the ProSEco platform.

3.11.3.4 Event processing

The ProSEco platform will likely need facilities for communication between two or more components providing core services or enabling collaboration. One potential interaction pattern is Publish/Subscribe, where a consumer (i.e. a platform component) registers for specific events (subscriptions), for example a change in the context of product use, and a producer pushes events to the communications channel (publish) via a Pub/Sub Event API.

Standards that are potential candidates for supporting event processing within the ProSEco platform are: JSON, JMS and DDS (see also Annex 1 for a brief description).
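The Publish/Subscribe pattern described above can be sketched as a minimal, synchronous event bus. This is illustrative only: the class and topic names are invented, and a real deployment would typically rely on a JMS or DDS implementation, with events carried as JSON payloads.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal, synchronous Publish/Subscribe sketch. A consumer registers
// for a topic (subscribe); a producer pushes events to that topic (publish).
public class EventBus {

    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String jsonEvent) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(jsonEvent);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // e.g. a platform component interested in context changes of product use
        bus.subscribe("context.changed", e -> System.out.println("received: " + e));
        bus.publish("context.changed", "{\"product\":\"42\",\"context\":\"outdoor\"}");
    }
}
```

A broker-based implementation would replace the in-memory map with a message channel, but the subscribe/publish roles remain the same.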


3.11.3.5 System and Data Integration

One of the roles of the ProSEco platform is to provide a robust and scalable infrastructure that enables integration of data from AmI monitoring, environmental parameters, context and knowledge systems associated with the products and services of manufacturers and third parties. To facilitate the implementation of web-based ProSEco applications, standardised unifying data models, data mediation tools and system integration APIs may be considered.

REST, or Representational State Transfer, based interfaces often appear within the context of system or component integration. We do not specifically identify REST as a standard within the context of this analysis, as REST is an architectural style for designing networked applications. The idea is that, rather than using complex mechanisms such as SOAP to connect machines, the standard Hypertext Transfer Protocol (HTTP) is used to provide the fundamental mechanisms to post data (create and/or update), read data (e.g. make queries) and delete data. RESTful protocols do not follow a prescribed standard except for the underlying use of HTTP, and it is very likely that the ProSEco project will create and utilise several RESTful APIs specific to ProSEco for integrating ProSEco components and other third-party systems.
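As a minimal illustration of the REST style, the sketch below (using the JDK 11 `java.net.http` API) maps the create/read/delete operations onto the standard HTTP verbs. The endpoint URL is hypothetical, and the sketch only builds the requests; sending them would require a live server.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustrative only: maps CRUD-style operations onto plain HTTP verbs,
// as a RESTful ProSEco API might do. The endpoint is hypothetical.
public class RestSketch {

    static final String BASE = "https://example.org/proseco/products";

    static HttpRequest create(String json) {            // POST = create
        return HttpRequest.newBuilder(URI.create(BASE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    static HttpRequest read(String id) {                // GET = read/query
        return HttpRequest.newBuilder(URI.create(BASE + "/" + id)).GET().build();
    }

    static HttpRequest delete(String id) {              // DELETE = remove
        return HttpRequest.newBuilder(URI.create(BASE + "/" + id)).DELETE().build();
    }

    public static void main(String[] args) {
        System.out.println(create("{\"name\":\"pump\"}").method()); // POST
        System.out.println(read("42").method());                    // GET
        System.out.println(delete("42").method());                  // DELETE
    }
}
```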

Candidate standards related to system and data integration are (see also Annex 1):

SOAP

NGSI

OSGi

Standards that will enable flexible system and data integration of ProSEco components that might be used in developing the ProSEco platform may be:

Java Dependency Injection API

JAX-RS - Java API for RESTful Web Services

JAX-WS - Java API for XML-based Web Services

These are also described in Annex 1.

3.11.3.6 Security

The ProSEco platform aims to provide secure and reliable exchange of confidential product information and to provide services through transactions that utilise secure authentication and authorisation methods meeting the required levels of security assurance. Key areas to be addressed are authentication, access control and secure communications.

Well-established standards exist in each of these areas:

for authentication:

o SAML

o OpenID

o OAuth

for access control:

o XACML

o LDAP

for secure communications:

o S/MIME

o TLS

See Annex 1 for the descriptions of these security related standards.


3.11.4 RELEVANT PRODUCT AND ENTITY IDENTIFICATION STANDARDS

Manufacturers providing new PES innovations and their associated third-party service providers will likely be concerned with the identification of real-world entities (both physical and virtual), the capture of identification and other data from physical objects (e.g. products), and the sharing of information concerning real-world entities (e.g. service providers, retailers, etc.).

Supply chains can be divided into two categories:

Open Supply Chains – An open supply chain is one in which the complete set of trading partners is not known in advance and which changes continually. This has great significance for the architecture of systems that support PES.

Closed Supply Chains – In a closed supply chain, a fixed universe of partners is known in advance, so interfaces can be negotiated in a controlled, coordinated way, and change management is simplified because all parties can agree to make changes simultaneously.

Open supply chains require that interfaces be negotiated and implemented outside the context of any particular trading relationship, and be adhered to by all parties, so that interoperability in support of new PES applications can be achieved despite the fact that the specific actors on each side of the interface are not able to negotiate in advance. This leads to the definition of broadly accepted standards, in which the emphasis is placed on interoperability, maximum applicability to a broad range of business contexts, and the minimisation of choices that require pre-coordination between interfacing parties.

Several sets of standards have been developed by the GS1 organisation to address entity identification, electronic product coding, product data capture and data exchange amongst real-world entities, which are described below. See also Annex 1 for further details.

3.11.4.1 Entity Identification

Standards in entity identification provide the means to identify real-world entities so that they may be a subject of electronic information that is stored and/or communicated. An entity may be:

Physical: A tangible object in the real world made of matter. In particular, a physical object is something to which a physical data carrier (bar code or RFID tag) may be physically affixed.

Abstract: A virtual object or process, including legal abstractions (e.g. a party), business abstractions (e.g. a class of a trade item) and so on.

An attribute is a piece of information associated with an entity. Information systems refer to a specific entity by means of a key, which serves to uniquely identify that entity within some specific domain of entities. An information system uses a key as a proxy for the entity itself.

The GS1 Identification Key is a standard identifier that refers to a specific business entity and is positioned by the GS1 organisation as being:

Unique

Non-significant

International

Secure

GS1 Identification Keys may be used in all countries and all sectors, have a defined structure and most include check digits for additional security.
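The check digit mentioned above is computed with the GS1 mod-10 algorithm, sketched below for GTIN-style keys. The sample number is a published EAN-13 example; the same algorithm applies across the different GTIN lengths.

```java
// Sketch of the GS1 mod-10 check digit calculation used by GTIN keys.
// Starting from the rightmost digit of the key body, digits are weighted
// 3, 1, 3, 1, ...; the check digit brings the weighted sum up to the
// next multiple of ten.
public class Gs1CheckDigit {

    static int checkDigit(String body) {
        int sum = 0, weight = 3;
        for (int i = body.length() - 1; i >= 0; i--) {
            sum += (body.charAt(i) - '0') * weight;
            weight = 4 - weight;            // alternate between 3 and 1
        }
        return (10 - sum % 10) % 10;
    }

    static boolean isValid(String key) {    // full key, check digit included
        String body = key.substring(0, key.length() - 1);
        return checkDigit(body) == key.charAt(key.length() - 1) - '0';
    }

    public static void main(String[] args) {
        System.out.println(checkDigit("400638133393")); // 1
        System.out.println(isValid("4006381333931"));   // true
    }
}
```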

3.11.4.2 Electronic Product Code

GS1 defines an Electronic Product Code (EPC) as a unique, individual meta key for different types of business objects (i.e. individual products, returnable transport items (RTI), shipments, etc.). The EPC is intended for use in systems like those being developed in ProSEco that need to track or otherwise refer to business objects. Large subsets of applications that use the EPC rely upon RFID tags as a data carrier. However, RFID is not necessarily needed to utilise the EPC standard.

Depending on its field of application, an EPC can be displayed in three different forms:


EPC Pure Identity (used in application systems)

EPC Tag URI (used in RFID middleware systems)

EPC binary code (used on RFID transponders)

The latter two are related to RFID only. It is not yet clear which EPC representation will be most important for ProSEco.
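To make the first two forms concrete, the simplified sketch below converts an SGTIN pure-identity URI into the corresponding tag URI. The filter value is a parameter, the identifier values are illustrative, and real conversions follow the full EPC Tag Data Standard rules, which also define the binary encoding omitted here.

```java
// Simplified sketch: SGTIN pure-identity URI -> SGTIN-96 tag URI.
// Real conversions follow the EPC Tag Data Standard, which also defines
// the binary form carried on RFID transponders (omitted here).
public class EpcSketch {

    static String toTagUri(String pureIdentity, int filter) {
        String prefix = "urn:epc:id:sgtin:";
        if (!pureIdentity.startsWith(prefix))
            throw new IllegalArgumentException("not an SGTIN pure-identity URI");
        // the tag URI adds the scheme/size and a filter value before the fields
        return "urn:epc:tag:sgtin-96:" + filter + "."
                + pureIdentity.substring(prefix.length());
    }

    public static void main(String[] args) {
        // illustrative company prefix / item reference / serial number
        System.out.println(toTagUri("urn:epc:id:sgtin:0614141.112345.400", 3));
    }
}
```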

3.11.4.3 Data capture

The capture standards in the GS1 System are standards for automatically capturing identifying information and possibly other data that is associated with a physical object (i.e. product). The industry term Automatic Identification and Data Capture (AIDC) is sometimes used to refer to the standards in this group.

GS1 Standard data elements are defined in a data carrier neutral way so that their semantics is the same regardless of what data carrier is used to affix them to a physical object (and also the same outside of a physical data carrier, such as an electronic message). GS1 data standards make the following distinctions:

Application Data - these are GS1 data elements defined in a data-carrier neutral way. A business application sees the same data regardless of which type of data carrier is used.

Transfer Encoding - this is the representation of data used in the interface between a capturing application and the hardware device that interacts with the data carrier (bar code scanner or RFID interrogator). The transfer encoding provides access to control information and carrier information and therefore is different for different data carrier types.

Carrier Internal Representation – this is the representation of data in the data carrier itself. In a bar code, this is the pattern of light and dark bars or squares. In an RFID tag, this is the binary data stored in the digital memory of the RFID chip.

The GS1 System provides several types of bar code, because different bar code types have different strengths depending on the application. The bar codes used by GS1 include EAN/UPC, GS1 DataBar, GS1-128, ITF-14, GS1 DataMatrix, Composite Component and GS1 QR Code.

In the context of internet applications these data carrier standards become increasingly important for mobile applications (e.g. reading a barcode by creating an image of the code with a smartphone camera, decoding the information and executing defined actions accordingly).

3.11.4.4 Data exchange

The use of standards in data exchange provides a predictable structure of electronic business messages, enabling business partners to communicate business data in an automated way, efficiently and accurately, irrespective of their internal hardware or software types. GS1 offers standards for the identification of (see also Annex 1):

products

locations and parties (buyer, seller, and any third parties involved in the transaction)

logistic units

The same identification keys are bar-coded on product packaging in the retail environment and on logistic units. Within the ProSEco project a distinction is made between internal and external data exchange. While internal communication in ProSEco will likely be based on Cloud technologies, classical communication technologies already in place along existing supply chains and business relations will likely need to be supported, and many of these already use the GS1 standards for data exchange.

3.11.5 RELEVANT SEMANTIC TECHNOLOGIES STANDARDS

Standards related to semantic technologies address key challenges in providing new innovations in PES applications and services:

Heterogeneity of actors – different actors along the supply chain can collect and utilise data using standard vocabularies, and only exchange data they feel comfortable with making public.


Heterogeneity of data – different actors will publish data using different vocabularies but the ability to create mapping ontologies makes the integration of disparate data feasible.

Open supply chain integration – some manufactured products are delivered through channels where there is a lack of supply chain integration. Semantic technologies are designed for data integration, and provide the glue for increased integration.

Within the product domains targeted by ProSEco data might come from multiple sources – some static and others dynamic. These might include websites, handheld devices, smartphones, sensors, and a variety of smart devices which are increasingly being used along the supply chains of manufacturers as well as data from the manufacturer’s products themselves. Data based on semantic technologies can be in structured formats following formal vocabularies/ontologies so as to be semantically correct and machine readable.

The flexibility of semantic technologies allows new data types to be added easily, without restructuring or redesigning databases and communication protocols. Thus, if there were suddenly a need to know, for example, exactly how much energy is used by a specific product or process, such an additional 'field' could be added with very little overhead.

3.11.5.1 Information exchange and management

The most prominent semantic technology standards are a series of World Wide Web Consortium (W3C) standards covering the following aspects:

Uniform Resource Identifiers (URI)

Resource Description Framework (RDF)

RDF Schema (RDFS)

Web Ontology Language (OWL) and OWL 2

Simple Knowledge Organization System (SKOS)

RDFa

SPARQL query language

These provide a rich and extensible environment in which to represent information and knowledge about products and associated services, and to exploit new opportunities for PES. The fundamental idea behind the standards associated with semantic technologies has been to create an interoperable and highly extensible 'web of data': machine-readable data that can be linked and associated with the already existing 'web of documents', and that can form the basis for new applications and services associated with PES. The format of data within the 'semantic web' follows certain standard rules and uses a variety of standard vocabularies or 'ontologies', enabling different, disparate data to be interlinked to provide new services based on connecting people, products and knowledge.
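As a small illustration of these standards working together (all names under the `ex:` prefix are hypothetical), a few RDF triples in Turtle can describe a product, including a later-added attribute such as energy use, and a SPARQL query can then retrieve it without any database redesign:

```turtle
@prefix ex:  <http://example.org/proseco#> .    # hypothetical vocabulary
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:pump42  a ex:Product ;
           ex:energyUse "12.5"^^xsd:decimal ;   # field added without schema changes
           ex:producedAt ex:plantBilbao .
```

```sparql
PREFIX ex: <http://example.org/proseco#>
SELECT ?product ?energy WHERE {
  ?product a ex:Product ;
           ex:energyUse ?energy .
}
```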

See Annex 1 for details concerning the semantic technology standards.

3.11.5.2 Vocabularies and ontologies

A number of vocabularies or ontologies exist related to the target domains and capabilities ProSEco is addressing. Many are official standards that have been produced by a standards body, while others have become de facto standards as they are widely used without having been formally ratified. These include the following:

Observations & Measurements

Sensor Model Language (SensorML)

Sensor and Sensor Network Ontology

Geography Markup Language

GeoSPARQL

FOAF


GoodRelations Ontology

Each of these is briefly described in Annex 1.

Standard ontologies are the structural frameworks for organising information on the semantic Web and within semantic enterprises. They have the potential to provide important benefits to the ProSEco framework in the area of discovery, flexible access and information integration due to their inherent connectedness. Ontologies can be layered on top of existing information assets, which means they are an enhancement and not a displacement for prior investments of manufacturers and their partners providing services. In addition, ontologies may be developed and matured incrementally such that they can be gradually introduced within manufacturing organisations in support of new PES and expanded and more widely adopted as their effectiveness and benefits become evident.
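The “layering” idea can be sketched minimally as follows: existing records stay untouched while a mapping adds shared ontology terms on top of them. The legacy field names, the `proseco:` terms and the class mapping below are invented for illustration, not taken from any standard.

```python
# Minimal sketch of layering an ontology over an existing information asset:
# the legacy records are not modified; an annotation step enriches them
# with (hypothetical) shared ontology classes.
legacy_rows = [
    {"part_no": "P-100", "kind": "sensor", "unit_price": 12.5},
    {"part_no": "P-200", "kind": "controller", "unit_price": 90.0},
]

# Mapping from legacy field values to illustrative ontology classes.
ONTOLOGY_CLASS = {
    "sensor": "proseco:SensingDevice",
    "controller": "proseco:ControlDevice",
}

def annotate(row):
    """Return a copy of the record enriched with an ontology class."""
    return {**row, "rdf_type": ONTOLOGY_CLASS.get(row["kind"], "proseco:Thing")}

annotated = [annotate(r) for r in legacy_rows]
print(annotated[0]["rdf_type"])  # proseco:SensingDevice
```

Because the annotation is additive, the mapping can be introduced for one record type first and extended incrementally, which mirrors the gradual adoption path described above.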

3.11.6 MAIN CONSIDERATIONS IN THE STATE OF THE ART AND RECOMMENDATIONS FOR PROSECO

Overall, there are standards that can be utilised in constructing the ProSEco platform that relate to the platform itself, standards that relate to how sources of product data are identified and attributed to support greater analysis of product movement and usage, and standards that govern how product data is structured and managed between systems, devices and actors so that product information can be understood and analysed. These existing standards can form the basis for the ProSEco platform development and support a wide range of new PES applications provided both by manufacturers and by third-party suppliers of applications and services.

Use of standards within ProSEco is important for a number of reasons. The main motivations for the project to adopt standards include the following:

The ProSEco platform components themselves must interact and operate as an integrated system to deliver the required services, and must support the exchange of information both internally and with external resources and products. Integrating components that utilise standardised interfaces requires less effort, and open source libraries are often available that implement standardised protocols between components and with external resources.

Manufacturers need to be able to extend and adapt the ProSEco platform to their specific product and context data structures as well as to their existing internal systems. Standards will facilitate extensibility both of the components utilised for the ProSEco platform and of the data that is captured and used to support new PES applications.

Manufacturers will require assurance that the ProSEco platform is stable and secure. Using standards with reference implementations that are already widely used for critical systems, and security standards that have passed formal security verification, reduces the risks manufacturers perceive in adopting new ProSEco technologies.

Within the industry domains targeted by ProSEco, data might come from multiple sources including websites, handheld devices, smartphones, sensors and a variety of smart devices, as well as from the manufacturer’s products themselves. Standards provide the means to bridge these disparate sources into a coherent stream of data that can be analysed and exploited for new PES applications.

Finally, the use of standards encourages the formation of an ecosystem in which providers of PES applications see greater opportunities for return on investment by exploiting ProSEco capabilities. Support for standards means that applications developed for the ProSEco platform in one industry domain can easily be adapted for use in other related industry domains.


4 Conclusions

The aim of this chapter is to address the key points from the state-of-the-art analysis and to introduce interdisciplinary recommendations based on the different sections of the report. ProSEco could bring a novel approach by combining the different concepts, methodologies and technologies introduced in these sections to create a state-of-the-art platform for the business cases.

The deliverable includes an update of the analysis of the state-of-the-art technologies and methodologies relevant for the project. In each area, a brief overview of the existing methodologies/solutions/technologies is provided, and the main gaps in the state of the art have been identified in order to comprehend the needs and guide the focus of ProSEco. This led to the description of the innovations that will be pursued by the project.

The main focus in ProSEco is Lean and Green integration. Developing and fabricating products in a green way, in addition to a lean way, will increase value delivery to customers. By introducing Green products, a company can distinguish itself from its competitors, target new customer groups and tap into new markets [195]. Extending production practices with Green features will bring additional profits to the company without requiring much investment [270].

Figure 24 – Lean and green integration [195]

To determine the best Lean and Green integration, it is necessary to understand the distinguishing attributes of the two paradigms. The overlap of the Lean and Green paradigms consists of the following common attributes: waste and waste reduction techniques, people and organisation, lead-time reduction, and supply chain relationships. The literature analysis by Dues et al. [195] identifies that Lean not only serves as a catalyst for Green but is also synergistic with it. This means that Lean is beneficial for Green practices, and the implementation of Green practices in turn also has a positive influence on existing business practices.

Shifting to the overlap of the Lean and Green paradigms should be based on knowledge sharing and integration, where the cloud can support such discontinuous innovative change. Research has shown that companies face major knowledge management challenges during collaborative product development [271], since it thrives upon the exploration of knowledge under a high degree of uncertainty [272].


Since collaborators do not have to become experts in each other’s fields, they do not have to share all their knowledge. Yet they have to be able to integrate their knowledge bases in a sensible manner. Therefore, Berends et al. [272] claimed that, during collaborative product development, knowledge management should focus on knowledge integration instead of knowledge transfer. One example could be the development of a virtual collaborative Obeya room, where team members can collaborate in the different phases of the product design process without physically sharing the same room.

Ambient Intelligence (AmI), on the other hand, refers to electronically enabled environments that are aware of and responsive to the presence of people. Such an environment is sensitive to the presence of living creatures in it and supports their activities; it “remembers and anticipates” in its behaviour. The general concept of AmI denotes a system, based on information and communication technologies, that supports its human users in an effective and unobtrusive way.

As shown in the previous sections, the general objective of AmI is to promote a better integration of technology into the environment, so that people can use it freely and interactively. The particular innovation of ProSEco in this respect will be to use AmI to optimise product/process design from the customer satisfaction and ecological points of view, as well as aiming to increase the efficiency of the collaborative design process.

The idea in ProSEco is to reuse reference models of AmI solutions in the manufacturing industry to build the engineering tool for selection/integration of AmI solutions in different products, processes and services. The AmI-based monitoring Core Service will provide inputs to both context awareness services and environmental impact monitoring, based on information from AmI solutions integrated in the Meta Product and processes. The big advantage of such an approach is that it allows collection of data on product service dynamics (see Section 3.4.2) in real time.

Since “Context Awareness” is the core of AmI, ProSEco will also pay special attention to context modelling and extraction (as pointed out in Section 3.3). Once the problem of how to represent context information is solved, ProSEco innovation will focus on how to extract context from the knowledge process and how to manipulate the information to meet the requirements of knowledge enrichment. Moreover, the approach for context awareness and context identification and extraction in collaborative work will also be studied in ProSEco, in order to include the collaborative approach in this context.
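The step from raw AmI data to a symbolic context can be sketched as follows. This is a deliberately simplified illustration: the sensor fields, thresholds and context labels below are assumptions for the sketch, not the ProSEco context model.

```python
# Hedged sketch of context extraction: one raw AmI reading (a dict of
# sensor values) is mapped to a symbolic context that services can react to.
from dataclasses import dataclass

@dataclass
class Context:
    occupied: bool        # is a person present near the machine?
    machine_state: str    # e.g. "idle", "running", "overload"

def extract_context(reading):
    """Derive a symbolic context from one raw sensor reading (illustrative rules)."""
    occupied = reading.get("motion", 0) > 0
    power = reading.get("power_kw", 0.0)
    if power == 0.0:
        state = "idle"
    elif power < 5.0:       # illustrative threshold, not a real specification
        state = "running"
    else:
        state = "overload"
    return Context(occupied=occupied, machine_state=state)

ctx = extract_context({"motion": 3, "power_kw": 2.4})
print(ctx)  # Context(occupied=True, machine_state='running')
```

In a real system the rules would come from the context model and ontologies discussed above rather than hard-coded thresholds, but the shape of the mapping (raw readings in, symbolic context out) is the same.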

The complex phenomena within cloud-based, web-enabled collaborative innovative product-service offerings in different territories require modelling and simulation from two perspectives: top-down aggregate dynamic complexity involving broad patterns, and bottom-up emerging phenomena arising from the interactions of multiple agents, such as individual consumers. Each of these perspectives has its own strengths, yet neither is complete in itself. The collaborative environment for product/service design will offer collaboration among remote actors using cloud computing. This collaborative design will provide access to simulation tools to support the development of PES and production systems, taking into account business models and selected services in a user-oriented simulation. The simulation will help to identify the eco-constraints in the use phase in order to improve the design of new products/services, as well as to reveal weak spots in terms of efficiency arising from the application of the Lean and Green paradigm.
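The contrast between the two perspectives can be illustrated with a toy adoption model: the same word-of-mouth process modelled once as a top-down aggregate (logistic) update and once bottom-up with individual agents. All parameters and the model itself are invented for illustration; this is not the ProSEco simulation.

```python
# Toy contrast of top-down aggregate vs. bottom-up agent-based modelling
# of product-service adoption. Parameters are illustrative assumptions.
import random

random.seed(7)
N, beta, steps = 200, 0.3, 40   # consumers, adoption rate, simulated periods

# Top-down: one deterministic update for the aggregate adopter fraction x.
x = 1 / N
trajectory = [x]
for _ in range(steps):
    x += beta * x * (1 - x)     # logistic growth of the adopter share
    trajectory.append(x)

# Bottom-up: each consumer independently adopts with probability
# proportional to the current adopter share (word-of-mouth effect).
adopters = {0}
for _ in range(steps):
    share = len(adopters) / N
    for i in range(N):
        if i not in adopters and random.random() < beta * share:
            adopters.add(i)

print(f"aggregate share: {trajectory[-1]:.2f}, agent share: {len(adopters) / N:.2f}")
```

The aggregate model gives the smooth broad pattern; the agent model reproduces a similar curve on average but exhibits run-to-run variability and can capture individual heterogeneity, which is why the text argues that neither perspective is complete in itself.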

ProSEco aims to bring together several concepts discussed in the literature and deliver intangible value to industries by shifting to the new Lean-Green paradigm. In theory there are many related works; the question is how to combine them into a comprehensive solution. Most applications so far operate in narrow bands, while the challenge of putting these concepts to work at a wide scale in practice is the main concern of this project.

ProSEco will also provide a bridge between information from different internal/external sources and actions/decisions by using hybrid services and tools. Hybrid here means combining different services, models and tools to satisfy the objectives and meet the needs of the project. The horizontal and vertical expansion of current solutions to support collaboration and to benefit different levels and various actors can also be recognised as a novelty. To reduce the complexity of problems/systems, simplification by imposing assumptions is usually helpful but reduces the level of accuracy. Thanks to the systems approach and related tools, we are at a new stage in building a new generation of hybrid models to face what is called the organised complex problem.

The ProSEco strategy addresses the challenge of dynamism caused by time, population and location concurrently, and will employ hybrid solutions to manage the complexity of the problem. For example, when dealing with data from different regions, it could be assumed that the “math of average” is helpful and that aggregating the information regardless of its origins can lead to a conclusion; however, although such abstraction reduces complexity, it increases the error and decreases the accuracy of the model and the related decisions. In ProSEco the effort will focus on solving these kinds of problems with fewer pre-assumptions, considering changes in basic and supportive variables at the same time. This will be possible thanks to the combination of models, methodologies and tools that will be used in ProSEco and will interact with each other. In addition, qualitative and quantitative models are used simultaneously to meet the project targets, and the results are tested using simulation tools. Validating conceptual models can guarantee better decisions and results when designing new, or updating existing, products and services.
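The “math of average” caveat can be made concrete with a small, entirely invented example: pooling survey data from two regions while discarding its origin can reverse the conclusion (Simpson’s paradox). The regions, product variants and figures below are fabricated purely for illustration.

```python
# Numeric illustration of why aggregating data regardless of origin can
# mislead: variant A wins in every region, yet the pooled numbers favour B.

# (satisfied respondents, total respondents) per region and product variant
data = {
    "north": {"A": (81, 87),   "B": (234, 270)},
    "south": {"A": (192, 263), "B": (55, 80)},
}

def rate(satisfied, total):
    """Satisfaction rate for one (satisfied, total) pair."""
    return satisfied / total

# Per-region view: variant A has the higher satisfaction rate in BOTH regions.
for region, variants in data.items():
    print(region, {v: round(rate(*variants[v]), 3) for v in variants})

# Pooled view: summing regardless of origin makes variant B look better,
# because the two variants were sampled unevenly across regions.
pooled = {
    v: rate(sum(data[r][v][0] for r in data), sum(data[r][v][1] for r in data))
    for v in ("A", "B")
}
print("pooled:", {v: round(p, 3) for v, p in pooled.items()})
```

Keeping the regional origin as a variable (rather than averaging it away) is exactly the kind of reduced pre-assumption the paragraph above argues for.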

One important issue in ProSEco is to develop a platform that can be supplied from different data sources and communication channels, while the cloud brings the opportunity of mass data that needs to be aggregated intelligently. This means the data will be used not just to produce useful information but to provide the knowledge needed to make final decisions and take action throughout the design and manufacturing process.

Thanks to the wider perspective of ProSEco compared with similar projects, the aim is to generalise current solutions to make them applicable and readily available at different levels, with SMEs as important beneficiaries.

As a conclusion of the state-of-the-art analysis and overview, in all areas relevant for the envisaged ProSEco RTD activities several solutions and concepts have been identified which may serve as a foundation to realise the envisioned innovative approach, but which will need to be enhanced and harmonised in order to meet the ProSEco project objectives. A final selection and decision on the required further investigations is to be made during the System Concept creation.

The report presents a survey of the state-of-the-art in several domains identified as the most relevant for the ProSEco project. Briefly summarized conclusions related to the areas investigated are presented in the following Table.

Table 2 Summarized Conclusions

Collaborative product/process design

Identified approach: Concept is developed and the process as well as methodologies are introduced.
Identified gaps: Mass data analysis and the use of multi-group feedback from different regions need to be considered.
ProSEco novelty: Mass data analysis will be possible, and more players could collaborate and participate in the product/process design.

AmI information processing

Identified approach: Engineering tool for selection/integration of AmI solutions in processes – reference model of AmI solutions applicable in industry from AmI4SME and AmI-MoSES.
Identified gaps: The problem of how AmI solutions can be used to extend the product, i.e. how AmI solutions can be used to build various services, is not addressed.
ProSEco novelty: Engineering tool to support selection of AmI solutions to provide the information needed for assessing the environmental impact of products and to allow for optimisation of products/processes/services.

Identified approach: AmI monitoring concept partly developed in AmI-MoSES, but only for processes.
Identified gaps: No generic service allowing for collection of data on product service dynamics based on AmI solutions integrated in Meta Products.
ProSEco novelty: AmI-based monitoring service to provide inputs to context awareness services and environmental impact monitoring, based on information from AmI solutions integrated in Meta Products.

Context sensitive approaches

Identified approach: Context modelling based on an ontology approach; OWL-based context ontologies from several previous projects and standard ontologies can be reused.
Identified gaps: No context model for design and application within Meta products.
ProSEco novelty: Extend/combine existing models and ontologies for context awareness within Meta product design (supporting decision making at various levels) and within services to identify user context and allow for service enhancement.

Identified approach: Context extractors from different projects (e.g. Self-Learning) could be used as a basis for the development of a service for context awareness.
Identified gaps: No services to extract context from Meta products, i.e. based on information from services, and from Meta product design processes.
ProSEco novelty: Context sensitive services around products and a knowledge-based decision support system for enhancement of Meta product design.

Ontologies for meta-products and context modelling

Identified approach: There are already frameworks for meta-products.
Identified gaps: The existing ontologies and frameworks are not collaborative.
ProSEco novelty: The collaborative component of the frameworks and ontologies needs to be defined and developed.

Identified approach: The ontologies for meta-products make use of a physical layer as well as a digital layer (copy) of the object.
Identified gaps: The existing frameworks make use of a very specific physical layer of a meta-product, whereas in ProSEco we need a more abstract layer.
ProSEco novelty: The physical layer of the meta-products will be extended to also match abstract concepts (such as processes and classes of objects).

Simulation support for the product/process design

Identified approach: Models are developed to represent the situation using a holistic approach.
Identified gaps: Most of the models in this area are conceptual models.
ProSEco novelty: Some of the relevant models can be tested and validated by simulation.

Identified approach: Models are based on available data.
Identified gaps: Lack of data could affect the modelling process, and there is no sufficient way to deal with this challenge.
ProSEco novelty: Use data from market reports and qualitative research techniques, such as focus groups, to overcome the problem of missing or insufficient data.

Life cycle management for product optimization applying eco-design perspectives

Identified approach: There are several methodologies for product design, and the Eco-Design process is becoming more important and clearer as a new paradigm.
Identified gaps: Complexity of the different methodologies to be applicable for Eco-Design in SMEs.
ProSEco novelty: Develop an Eco-Design methodology applicable to SMEs.

Lean design principles in the overall life cycle of the product, including organizational aspects

Identified approach: LPD enjoys various tools which are used effectively.
Identified gaps: Current tools have weaknesses in dealing with new technological advancements.
ProSEco novelty: Considering and supporting virtual collaboration, transformation at different levels, process-oriented metrics and small companies; a Lean implementation process as a people-mindset transformation that will conclude with a new shared vision of the company and a new Cloud culture; ProSEco will bring customers’ views on value into Product Design Development; a performance measurement methodology based on lean management principles that would enhance decision making and problem solving will be developed; an Obeya for globally distributed teams will be defined to get a common understanding of the status of the development process when the people involved in a project are distributed amongst different locations and cannot meet regularly in one place.

Service Oriented Architecture (SOA) in manufacturing industry and Cloud manufacturing

Identified approach: Collaborative networks of factories.
Identified gaps: A collaborative cloud-based environment in a real manufacturing enterprise to enable cloud manufacturing across geographically distributed factories.
ProSEco novelty: Results from previous projects will be improved to enable the creation of a connection point between systems in factories below level 3 (MES) of ISA-95, to enable collaborative service-based networks of factories to exchange maintenance information and production scheduling information.

Identified approach: Security and trust to be guaranteed in the cloud.
Identified gaps: No elaborated approach for security and trust in CMfg, and especially for Meta product design.
ProSEco novelty: Results from previous projects to be extended to improve secure communication and data exchange between virtual devices inside the cloud.

Identified approach: Collaborative design of product-services and their production processes.
Identified gaps: Manufacturing companies do not have the necessary degree of transparency and visibility, since most of the information gathered during the production process is analysed at the subsystem level.
ProSEco novelty: Create a connection point between the management levels of a manufacturing enterprise and the process.

Service composition

Identified approach: Effective implementation of innovative services.
Identified gaps: No tools and methodologies for an effective implementation of innovative services for cloud manufacturing approaches.
ProSEco novelty: The ProSEco project will provide methodologies and tools for enabling the virtualisation of manufacturing components and their description in terms of services and skills.

Tools for collaborative product/process design

Identified approach: Product design methodologies and tools are manufacture-oriented.
Identified gaps: Intangible aspects, such as services related to the product, are not properly addressed.
ProSEco novelty: Extend the application of the tools to service design.

Identified approach: There are several tools to support collaboration, usually among internal or external actors.
Identified gaps: It is uncommon to find technical product design tools prepared to enhance online user-friendly communications.
ProSEco novelty: Develop a collaborative product/process design tool allowing for collaboration among internal and external actors.


5 References

[1] S. Elfving, “Managing Collaborative Product Development : A Model for Identifying Key

Factors in Product Development Projects,” 2007.

[2] S. Leek, P. W. Turnbull, and P. Naudé, “How is information technology affecting business

relationships? Results from a UK survey,” Ind. Mark. Manag., vol. 32, no. 2, pp. 119–126,

Feb. 2003.

[3] J. B. QUINN, “Outsourcing Innovation: The New Engine of Growth,” Sloan Manage. Rev.,

no. 4, p. 2000, 41.

[4] P. Montiel-Overall, “Toward a Theory of Collaboration for Teachers and Librarians,” Sch.

Libr. Media Res., vol. 8, 2005.

[5] B. W. Tuckman, “Developmental sequence in small groups,” Psychol. Bull., vol. 63, no. 6,

pp. 384–399, 1965.

[6] G. Gautier, C. Piddington, and T. Fernando, “Understanding the Collaborative Work-

spaces,” in Enterprise Interoperability III, P. D.-I. K. Mertins, D. R. Ruggaber, P. K. Pop-

plewell, and P. X. Xu, Eds. Springer London, 2008, pp. 99–111.

[7] H. Hills, Team-based Learning. Gower Publishing, Ltd., 2001.

[8] R. C. Anderson, “The Notion of Schemata and the Educational Enterprise: General Discus-

sion of the Conference,” in Schooling and the Acquisition of Knowledge, Lawrence Erl-

baum, 1984, pp. 415–31.

[9] S. Fiske and S. Taylor, “Social cognition, 1991,” Soc. Cogn. 2nd Ed Xviii 717 Pp N. Y. NY

Engl. Mcgraw-Hill Book Co., 1991.

[10] V. H. Vroom, Work and motivation. Oxford, England: Wiley, 1964.

[11] G. Gautier, M. Bassanino, T. Fernando, and S. Kubaski, “Solving the Human Problem: In-

vestigation of a Collaboration Culture,” in International Conference on Interoperability for

Enterprise Software and Applications China, 2009. IESA ’09, 2009, pp. 261–267.

[12] G. Gautier, G. Kapogiannis, C. Piddington, T. Fernando, and Y. Polychronakis, “Pro-active

Project Management,” in International Conference on Interoperability for Enterprise Soft-

ware and Applications China, 2009. IESA ’09, 2009, pp. 320–326.

[13] H. Patel, M. Pettitt, and J. R. Wilson, “Factors of collaborative working: A framework for

a collaboration model,” Appl. Ergon., vol. 43, no. 1, pp. 1–26, Jan. 2012.

[14] G. Büyüközkan and J. Arsenyan, “Collaborative product development: a literature over-

view,” Prod. Plan. Control, vol. 23, no. 1, pp. 47–66, 2012.

[15] D. Littler, F. Leverick, and M. Bruce, “Factors affecting the process of collaborative product

development: A study of UK manufacturers of information and communications technology

products,” J. Prod. Innov. Manag., vol. 12, no. 1, pp. 16–32, Jan. 1995.

[16] L. M. Camarinha-Matos and A. Abreu, “Performance indicators for collaborative networks

based on collaboration benefits,” Prod. Plan. Control, vol. 18, no. 7, pp. 592–609, 2007.

[17] H. Lawton Smith, K. Dickson, and S. L. Smith, “‘There are two sides to every story’: Inno-

vation and collaboration within networks of large and small firms,” Res. Policy, vol. 20, no.

5, pp. 457–468, Oct. 1991.

[18] J. M. Davis, L. K. Keys, I. J. Chen, and P. L. Petersen, “Collaborative product development

in an R&D environment,” Natl. Aeronaut. Space Adm. E-14440, 2004.

Page 98: Deliverable D100.1 State of the Art Analysis Update€¦ · Deliverable D100.1 State of the Art Analysis Update WP 100 Grant Agreement number: NMP2 -LA -2013 -609143 Project acronym:

31.03.2014

© ProSEco Consortium Page 97 of 139 D100.1 State of the Art Analysis Update

[19] K. Rodriguez and A. Al-Ashaab, “Knowledge web-based system architecture for collabo-

rative product development,” Comput. Ind., vol. 56, no. 1, pp. 125–140, Jan. 2005.

[20] X. Mi, W. Shen, and W. Zhao, “Collaborative product development in SMEs: requirements

and a proposed solution,” in Proceedings of the Ninth International Conference on Com-

puter Supported Cooperative Work in Design, 2005, 2005, vol. 2, pp. 876–882 Vol. 2.

[21] A. Sharma, “Collaborative product innovation: integrating elements of CPI via PLM frame-

work,” Comput.-Aided Des., vol. 37, no. 13, pp. 1425–1434, Nov. 2005.

[22] Z. Emden, R. J. Calantone, and C. Droge, “Collaborating for New Product Development:

Selecting the Partner with Maximum Potential to Create Value,” J. Prod. Innov. Manag.,

vol. 23, no. 4, pp. 330–341, 2006.

[23] W. D. Li and Z. M. Qiu, “State-of-the-art technologies and methodologies for collaborative

product development systems,” Int. J. Prod. Res., vol. 44, no. 13, pp. 2525–2559, 2006.

[24] H. Wang and H. Zhang, “A distributed and interactive system to integrated design and sim-

ulation for collaborative product development,” Robot. Comput.-Integr. Manuf., vol. 26, no.

6, pp. 778–789, Dec. 2010.

[25] W. D. Li, W. F. Lu, J. Y. H. Fuh, and Y. S. Wong, “Collaborative computer-aided design—

research and development status,” Comput.-Aided Des., vol. 37, no. 9, pp. 931–940, Aug.

2005.

[26] W. Shen, Q. Hao, and W. Li, “Computer supported collaborative design: Retrospective and

perspective,” Comput. Ind., vol. 59, no. 9, pp. 855–862, Dec. 2008.

[27] S. Szykman, J. Racz, C. Bochenek, and R. D. Sriram, “A web-based system for design arti-

fact modeling,” Des. Stud., vol. 21, no. 2, pp. 145–165, Mar. 2000.

[28] K.-Y. Kim, D. G. Manley, and H. Yang, “Ontology-based assembly design and information

sharing for collaborative product development,” Comput.-Aided Des., vol. 38, no. 12, pp.

1233–1250, Dec. 2006.

[29] C.-H. Chu, P.-H. Wu, and Y.-C. Hsu, “Multi-agent collaborative 3D design with geometric

model at different levels of detail,” Robot. Comput.-Integr. Manuf., vol. 25, no. 2, pp. 334–

347, Apr. 2009.

[30] Y.-E. Nahm and H. Ishikawa, “A hybrid multi-agent system architecture for enterprise in-

tegration using computer networks,” Robot. Comput.-Integr. Manuf., vol. 21, no. 3, pp. 217–

234, Jun. 2005.

[31] J. X. Wang, M. X. Tang, L. N. Song, and S. Q. Jiang, “Design and implementation of an

agent-based collaborative product design system,” Comput. Ind., vol. 60, no. 7, pp. 520–

535, Sep. 2009.

[32] Q. Hao, W. Shen, Z. Zhang, S.-W. Park, and J.-K. Lee, “Agent-based collaborative product

design engineering: An industrial case study,” Comput. Ind., vol. 57, no. 1, pp. 26–38, Jan.

2006.

[33] K.-M. Chao, P. Norman, R. Anane, and A. James, “An agent-based approach to engineering

design,” Comput. Ind., vol. 48, no. 1, pp. 17–27, May 2002.

[34] Y. D. Wang, W. Shen, and H. Ghenniwa, “WebBlow: a Web/agent-based multidisciplinary

design optimization environment,” Comput. Ind., vol. 52, no. 1, pp. 17–28, Sep. 2003.

[35] W. Shen, Q. Hao, S. Wang, Y. Li, and H. Ghenniwa, “An agent-based service-oriented in-

tegration architecture for collaborative intelligent manufacturing,” Robot. Comput.-Integr.

Manuf., vol. 23, no. 3, pp. 315–325, Jun. 2007.

Page 99: Deliverable D100.1 State of the Art Analysis Update€¦ · Deliverable D100.1 State of the Art Analysis Update WP 100 Grant Agreement number: NMP2 -LA -2013 -609143 Project acronym:

31.03.2014

© ProSEco Consortium Page 98 of 139 D100.1 State of the Art Analysis Update

[36] S. Szykman, S. J. Fenves, W. Keirouz, and S. B. Shooter, “A foundation for interoperability

in next-generation product development systems,” Comput.-Aided Des., vol. 33, no. 7, pp.

545–559, Jun. 2001.

[37] W. Shen, Q. Hao, H. Mak, J. Neelamkavil, H. Xie, J. Dickinson, R. Thomas, A. Pardasani,

and H. Xue, “Systems integration and collaboration in architecture, engineering, construc-

tion, and facilities management: A review,” Adv. Eng. Inform., vol. 24, no. 2, pp. 196–207,

Apr. 2010.

[38] L. Q. Fan, A. Senthil Kumar, B. N. Jagdish, and S. H. Bok, “Development of a distributed collaborative design framework within peer-to-peer environment,” Comput.-Aided Des., vol. 40, no. 9, pp. 891–904, Sep. 2008.

[39] H.-C. Cheng and C.-S. Fen, “A web-based distributed problem-solving environment for engineering applications,” Adv. Eng. Softw., vol. 37, no. 2, pp. 112–128, Feb. 2006.

[40] M. S. Shephard, M. W. Beall, R. M. O’Bara, and B. E. Webster, “Toward simulation-based design,” Finite Elem. Anal. Des., vol. 40, no. 12, pp. 1575–1598, Jul. 2004.

[41] J. A. Reed, G. J. Follen, and A. A. Afjeh, “Improving the Aircraft Design Process Using Web-based Modeling and Simulation,” ACM Trans. Model. Comput. Simul., vol. 10, no. 1, pp. 58–83, Jan. 2000.

[42] J. Byrne, C. Heavey, and P. J. Byrne, “A review of Web-based simulation and supporting tools,” Simul. Model. Pract. Theory, vol. 18, no. 3, pp. 253–276, Mar. 2010.

[43] N. Senin, D. R. Wallace, and N. Borland, “Distributed Object-Based Modeling in Design Simulation Marketplace,” J. Mech. Des., vol. 125, no. 1, pp. 2–13, Mar. 2003.

[44] J. M. Pullen, R. Brunton, D. Brutzman, D. Drake, M. Hieb, K. L. Morse, and A. Tolk, “Using Web services to integrate heterogeneous simulations in a grid environment,” Future Gener. Comput. Syst., vol. 21, no. 1, pp. 97–106, Jan. 2005.

[45] M. A. Rosenman, G. Smith, M. L. Maher, L. Ding, and D. Marchant, “Multidisciplinary collaborative design in virtual environments,” Autom. Constr., vol. 16, no. 1, pp. 37–44, Jan. 2007.

[46] R. Aspin, “Supporting Collaboration, in Colocated 3D Visualization, through the Use of Remote Personal Interfaces,” J. Comput. Civ. Eng., vol. 21, no. 6, pp. 393–401, 2007.

[47] J. M. Hammond, C. M. Harvey, R. J. Koubek, W. D. Compton, and A. Darisipudi, “Distributed Collaborative Design Teams: Media Effects on Design Processes,” Int. J. Hum.-Comput. Interact., vol. 18, no. 2, pp. 145–165, 2005.

[48] W. Shen, D. H. Norrie, and J.-P. Barthes, Multi-Agent Systems for Concurrent Intelligent Design and Manufacturing. CRC Press, 2003.

[49] “About ArchiCAD – A 3D CAD software for architectural design & modeling.” [Online]. Available: http://www.graphisoft.com/archicad.

[50] “Engineering Content Management & Project Collaboration Software.” [Online]. Available: http://www.bentley.com/en-US/Products/ProjectWise+project+team+collaboration/.

[51] M. Boehm and O. Thomas, “Looking beyond the rim of one’s teacup: a multidisciplinary literature review of Product-Service Systems in Information Systems, Business Management, and Engineering & Design,” J. Clean. Prod., vol. 51, pp. 245–260, Jul. 2013.

[52] F. H. Beuren, M. G. Gomes Ferreira, and P. A. Cauchick Miguel, “Product-service systems: a literature review on integrated products and services,” J. Clean. Prod., vol. 47, pp. 222–231, May 2013.


31.03.2014

© ProSEco Consortium Page 99 of 139 D100.1 State of the Art Analysis Update

[53] M. Yu, W. Zhang, and H. Meier, “Modularization based design for innovative product-related industrial service,” in IEEE International Conference on Service Operations and Logistics, and Informatics (IEEE/SOLI 2008), 2008, vol. 1, pp. 48–53.

[54] E. Sundin, “Life-Cycle Perspectives of Product/Service-Systems: In Design Theory,” in Introduction to Product/Service-System Design, T. Sakao and M. Lindahl, Eds. Springer London, 2009, pp. 31–49.

[55] T. Sakao, G. Ö. Sandström, and D. Matzen, “Framing research for service orientation of manufacturers through PSS approaches,” J. Manuf. Technol. Manag., vol. 20, no. 5, pp. 754–778, Jun. 2009.

[56] X. Geng, X. Chu, D. Xue, and Z. Zhang, “An integrated approach for rating engineering characteristics’ final importance in product-service system development,” Comput. Ind. Eng., vol. 59, no. 4, pp. 585–594, Nov. 2010.

[57] E. Manzini, C. Vezzoli, and G. Clark, “Product-Service Systems: Using an Existing Concept as a New Approach to Sustainability,” J. Des. Res., vol. 1, no. 2, 2001.

[58] M. J. Goedkoop, Product Service Systems, Ecological and Economic Basics. Ministry of Housing, Spatial Planning and the Environment, Communications Directorate, 1999.

[59] O. K. Mont, “Clarifying the concept of product–service system,” J. Clean. Prod., vol. 10, no. 3, pp. 237–245, Jun. 2002.

[60] D. Maxwell, W. Sheate, and R. van der Vorst, “Functional and systems aspects of the sustainable product and service development approach for industry,” J. Clean. Prod., vol. 14, no. 17, pp. 1466–1479, 2006.

[61] T. S. Baines, H. W. Lightfoot, S. Evans, A. Neely, R. Greenough, J. Peppard, R. Roy, E. Shehab, A. Braganza, A. Tiwari, J. R. Alcock, J. P. Angus, M. Bastl, A. Cousens, P. Irving, M. Johnson, J. Kingston, H. Lockett, V. Martinez, P. Michele, D. Tranfield, I. M. Walton, and H. Wilson, “State-of-the-art in product-service systems,” Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf., vol. 221, no. 10, pp. 1543–1552, Oct. 2007.

[62] J. C. Aurich, C. Mannweiler, and E. Schweitzer, “How to design and offer services successfully,” CIRP J. Manuf. Sci. Technol., vol. 2, no. 3, pp. 136–143, 2010.

[63] E. Sundin, G. Ö. Sandström, M. Lindahl, and A. Ö. Rönnbäck, “Using Company–Academia Networks for Improving Product/Service Systems at Large Companies,” in Introduction to Product/Service-System Design, T. Sakao and M. Lindahl, Eds. Springer London, 2009, pp. 185–196.

[64] A. Tukker and U. Tischner, New Business for Old Europe: Product-Service Development, Competitiveness and Sustainability. Greenleaf Publishing, 2006.

[65] J. Lee and M. AbuAli, “Innovative Product Advanced Service Systems (I-PASS): methodology, tools, and applications for dominant service design,” Int. J. Adv. Manuf. Technol., vol. 52, no. 9–12, pp. 1161–1173, Feb. 2011.

[66] A. Tukker and C. van Halen, “Innovation scan for product service systems,” PricewaterhouseCoopers, London, 2003.

[67] C. van Halen, C. Vezzoli, and R. Wimmer, Methodology for Product Service System Innovation: How to Develop Clean, Clever and Competitive Strategies in Companies. Uitgeverij Van Gorcum, 2005.

[68] D. Wu, M. J. Greer, D. W. Rosen, and D. Schaefer, “Cloud manufacturing: Strategic vision and state-of-the-art,” J. Manuf. Syst., 2013.


[69] M. Meier, J. Seidelmann, and I. Mezgár, “ManuCloud: The next-generation manufacturing as a service environment,” ERCIM News, no. 83, pp. 33–34, 2010.

[70] “All together now,” The Economist, Apr. 2012.

[71] “The Future of Manufacturing,” Inc.com. [Online]. Available: http://www.inc.com/magazine/20091001/the-future-of-manufacturing.html.

[72] L. M. Camarinha-Matos, “Collaborative networked organizations: Status and trends in manufacturing,” Annu. Rev. Control, vol. 33, no. 2, pp. 199–208, Dec. 2009.

[73] L. M. Camarinha-Matos and H. Afsarmanesh, Collaborative Networks: Reference Modeling. Springer, 2008.

[74] L. M. Camarinha-Matos and H. Afsarmanesh, “Collaborative networks: a new scientific discipline,” J. Intell. Manuf., vol. 16, no. 4–5, pp. 439–452, Oct. 2005.

[75] H. Noori and W. B. Lee, “Collaborative design in a networked enterprise: the case of the telecommunications industry,” Int. J. Prod. Res., vol. 42, no. 15, pp. 3041–3054, 2004.

[76] A. Al-Ashaab, K. Rodriguez, A. Molina, M. Cardenas, J. Aca, M. Saeed, and H. Abdalla, “Internet-Based Collaborative Design for an Injection-moulding System,” Concurr. Eng., vol. 11, no. 4, pp. 289–299, Dec. 2003.

[77] B. Kayis, M. Zhou, S. Savci, Y. B. Khoo, A. Ahmed, R. Kusumo, and A. Rispler, “IRMAS – development of a risk management tool for collaborative multi-site, multi-partner new product development projects,” J. Manuf. Technol. Manag., vol. 18, no. 4, pp. 387–414, May 2007.

[78] M. Li, C. C. Wang, and S. Gao, “Real-Time Collaborative Design With Heterogeneous CAD Systems Based on Neutral Modeling Commands,” J. Comput. Inf. Sci. Eng., vol. 7, no. 2, pp. 113–125, Sep. 2006.

[79] A. Molina, J. Aca, and P. Wright, “Global collaborative engineering environment for integrated product development,” Int. J. Comput. Integr. Manuf., vol. 18, no. 8, pp. 635–651, 2005.

[80] P. Burlat and M. Benali, “A methodology to characterise cooperation links for networks of firms,” Prod. Plan. Control, vol. 18, no. 2, pp. 156–168, 2007.

[81] N. Morelli, “Developing new product service systems (PSS): methodologies and operational tools,” J. Clean. Prod., vol. 14, no. 17, pp. 1495–1501, 2006.

[82] M. Li, H. Zhang, Z. Li, and L. Tong, “Economy-wide material input/output and dematerialization analysis of Jilin Province (China),” Environ. Monit. Assess., vol. 165, no. 1–4, pp. 263–274, Jun. 2010.

[83] Y. Geum and Y. Park, “Development of technology roadmap for Product-Service System (TRPSS),” in 2010 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), 2010, pp. 410–414.

[84] J. Sztipanovits, “Cyber Physical Systems – Convergence of Physical and Information Sciences,” it – Inf. Technol., vol. 54, no. 6, pp. 257–265, 2012.

[85] M. Broy, M. V. Cengarle, and E. Geisberger, “Cyber-Physical Systems: Imminent Challenges,” in Large-Scale Complex IT Systems: Development, Operation and Management, R. Calinescu and D. Garlan, Eds. Springer Berlin Heidelberg, 2012, pp. 1–28.

[86] L. Sha, S. Gopalakrishnan, X. Liu, and Q. Wang, “Cyber-Physical Systems: A New Frontier,” in Machine Learning in Cyber Trust, Springer US, 2009, pp. 3–13.


[87] J. P. Womack and D. T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon and Schuster, 2010.

[88] T. H. Davenport and J. Glaser, “Just-in-time delivery comes to knowledge management,” Harv. Bus. Rev., vol. 80, no. 7, pp. 107–111, 2002.

[89] R. Bernhaupt, “User Experience as Missing Link in Multimodal Interfaces for Ambient Intelligence Environments,” in Proceedings of the CHI 2004 Workshop “Lost in Ambient Intelligence,” 2004.

[90] A. Dogac, G. B. Laleci, and Y. Kabak, “A context framework for ambient intelligence,” Build. Knowl. Econ. Issues Appl. Case Stud., vol. 913, 2003.

[91] M. Friedewald, O. Da Costa, et al., “Science and technology roadmapping: Ambient intelligence in everyday life (AmI@Life),” Fraunhofer Institute for Systems and Innovation Research (FhG-ISI), Karlsruhe, 2003.

[92] V. Issarny, D. Sacchetti, F. Tartanoglu, F. Sailhan, R. Chibout, N. Levy, and A. Talamona, “Developing Ambient Intelligence Systems: A Solution based on Web Services,” Autom. Softw. Eng., vol. 12, no. 1, pp. 101–137, Jan. 2005.

[93] J. Pascoe, “Adding generic contextual capabilities to wearable computers,” in Second International Symposium on Wearable Computers (Digest of Papers), 1998, pp. 92–99.

[94] S. Piva, C. Bonamico, C. Regazzoni, and F. Lavagetto, “A flexible architecture for ambient intelligence systems supporting adaptive multimodal interaction with users,” in Ambient Intelligence, 2005, pp. 97–120.

[95] IST Advisory Group, “Software technologies, embedded systems and distributed systems,” 2002.

[96] F. Paganelli, G. Bianchi, and D. Giuli, “A Context Model for Context-Aware System Design Towards the Ambient Intelligence Vision: Experiences in the eTourism Domain,” in Universal Access in Ambient Intelligence Environments, C. Stephanidis and M. Pieper, Eds. Springer Berlin Heidelberg, 2007, pp. 173–191.

[97] E. Aarts and R. Roovers, “IC Design Challenges for Ambient Intelligence,” in Proceedings of the Conference on Design, Automation and Test in Europe, Washington, DC, USA, 2003, p. 10002.

[98] IST Advisory Group, “Scenarios for Ambient Intelligence in 2010,” 2001.

[99] D. Stokic, U. Kirchhoff, and H. Sundmaeker, “Ambient Intelligence in Manufacturing Industry: Control System Point of View,” presented at Control and Applications, 2006.

[100] E. Aarts, R. Harwig, and M. Schuurmans, “Ambient Intelligence,” in The Invisible Future: The Seamless Integration of Technology into Everyday Life, P. J. Denning, Ed. New York: McGraw-Hill, 2001.

[101] “Alma Consulting Group.” [Online]. Available: http://www.mimosa-fp6.com/.

[102] “ASK-IT.” [Online]. Available: http://www.ask-it.org/.

[103] U. Kirchhoff et al., “The AmI-Book – Ambient Intelligence in Manufacturing SMEs,” 2008.

[104] P. Brézillon, “Context in Artificial Intelligence: I. A survey of the literature,” Comput. Artif. Intell., vol. 18, pp. 321–340, 1999.


[105] P. Brézillon, “Context in Artificial Intelligence: II. Key elements of contexts,” Comput. Artif. Intell., vol. 18, pp. 425–446, 1999.

[106] L. Serafini and P. Bouquet, “Comparing formal theories of context in AI,” Artif. Intell., vol. 155, no. 1–2, pp. 41–67, May 2004.

[107] A. K. Dey, “Providing architectural support for building context-aware applications,” Georgia Institute of Technology, 2000.

[108] A. K. Dey, G. D. Abowd, and D. Salber, “A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-aware Applications,” Hum.-Comput. Interact., vol. 16, no. 2, pp. 97–166, Dec. 2001.

[109] H. J. Ahn, H. J. Lee, K. Cho, and S. J. Park, “Utilizing knowledge context in virtual collaborative work,” Decis. Support Syst., vol. 39, no. 4, pp. 563–582, Jun. 2005.

[110] P. Bellavista, A. Corradi, R. Montanari, and A. Toninelli, “Context-aware semantic discovery for next generation mobile systems,” IEEE Commun. Mag., vol. 44, no. 9, pp. 62–71, Sep. 2006.

[111] A. Toninelli, A. Corradi, and R. Montanari, “Semantic-based discovery to support mobile context-aware service access,” Comput. Commun., vol. 31, no. 5, pp. 935–949, Mar. 2008.

[112] T. Gu, H. K. Pung, and D. Q. Zhang, “A service-oriented middleware for building context-aware services,” J. Netw. Comput. Appl., vol. 28, no. 1, pp. 1–18, Jan. 2005.

[113] S. Kim, E. Suh, and K. Yoo, “A study of context inference for Web-based information systems,” Electron. Commer. Res. Appl., vol. 6, no. 2, pp. 146–158, 2007.

[114] J.-W. Chang and Y.-K. Kim, “Design and Implementation of Middleware and Context Server for Context-Awareness,” in High Performance Computing and Communications, M. Gerndt and D. Kranzlmüller, Eds. Springer Berlin Heidelberg, 2006, pp. 487–494.

[115] L. Feng, P. M. G. Apers, and W. Jonker, “Towards Context-Aware Data Management for Ambient Intelligence,” in Database and Expert Systems Applications, F. Galindo, M. Takizawa, and R. Traunmüller, Eds. Springer Berlin Heidelberg, 2004, pp. 422–431.

[116] T. Strang and C. Linnhoff-Popien, “A Context Modeling Survey,” in Workshop Proceedings, Nottingham, UK, 2004.

[117] T. R. Gruber, “Toward principles for the design of ontologies used for knowledge sharing?,” Int. J. Hum.-Comput. Stud., vol. 43, no. 5–6, pp. 907–928, Nov. 1995.

[118] G. L. Zúñiga, “Ontology: Its Transformation from Philosophy to Information Systems,” in Proceedings of the International Conference on Formal Ontology in Information Systems, New York, NY, USA, 2001, pp. 187–197.

[119] R. Glassey, G. Stevenson, M. Richmond, P. Nixon, S. Terzis, F. Wang, and R. I. Ferguson, “Towards a middleware for generalised context management,” in First International Workshop on Middleware for Pervasive and Ad Hoc Computing (Middleware 2003), Rio de Janeiro, Brazil, 2003.

[120] S. Lee, J. Chang, and S. Lee, “Survey and trend analysis of context-aware systems,” Inf. Int. Interdiscip. J., vol. 14, no. 2, pp. 527–548, 2011.

[121] C. Bettini, O. Brdiczka, K. Henricksen, J. Indulska, D. Nicklas, A. Ranganathan, and D. Riboni, “A survey of context modelling and reasoning techniques,” Pervasive Mob. Comput., vol. 6, no. 2, pp. 161–180, Apr. 2010.


[122] D. Salber, A. K. Dey, and G. D. Abowd, “The Context Toolkit: Aiding the Development of Context-enabled Applications,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, pp. 434–441.

[123] T. Kindberg, J. Barton, J. Morgan, G. Becker, D. Caswell, P. Debaty, G. Gopal, M. Frid, V. Krishnan, H. Morris, J. Schettino, B. Serra, and M. Spasojevic, “People, Places, Things: Web Presence for the Real World,” Mob. Netw. Appl., vol. 7, no. 5, pp. 365–376, Oct. 2002.

[124] K. Henricksen, J. Indulska, and A. Rakotonirainy, “Modeling Context Information in Pervasive Computing Systems,” in Pervasive Computing, F. Mattern and M. Naghshineh, Eds. Springer Berlin Heidelberg, 2002, pp. 167–180.

[125] A. Ranganathan and R. H. Campbell, “A Middleware for Context-Aware Agents in Ubiquitous Computing Environments,” in Middleware 2003, M. Endler and D. Schmidt, Eds. Springer Berlin Heidelberg, 2003, pp. 143–161.

[126] H.-L. Truong, S. Dustdar, D. Baggio, S. Corlosquet, C. Dorn, G. Giuliani, R. Gombotz, Y. Hong, P. Kendal, C. Melchiorre, S. Moretzky, S. Peray, A. Polleres, S. Reiff-Marganiec, D. Schall, S. Stringa, M. Tilly, and H. Yu, “inContext: A Pervasive and Collaborative Working Environment for Emerging Team Forms,” in International Symposium on Applications and the Internet (SAINT 2008), 2008, pp. 118–125.

[127] S. Sattanathan, N. C. Narendra, and Z. Maamar, “Ontologies for Specifying and Reconciling Contexts of Web Services,” Electron. Notes Theor. Comput. Sci., vol. 146, no. 1, pp. 43–57, Jan. 2006.

[128] N. Georgalas, S. Ou, M. Azmoodeh, and K. Yang, “Towards a Model-Driven Approach for Ontology-Based Context-Aware Application Development: A Case Study,” in Fourth International Workshop on Model-Based Methodologies for Pervasive and Embedded Software (MOMPES ’07), 2007, pp. 21–32.

[129] G. Klyne and J. Carroll, “Resource Description Framework (RDF): Concepts and Abstract Syntax,” W3C Recommendation, Feb. 2004.

[130] P. F. Patel-Schneider, P. Hayes, I. Horrocks, et al., “OWL Web Ontology Language semantics and abstract syntax,” W3C Recommendation, 2004.

[131] M. Luther, B. Mrohs, M. Wagner, S. Steglich, and W. Kellerer, “Situational reasoning – a practical OWL use case,” in Proceedings of Autonomous Decentralized Systems (ISADS 2005), 2005, pp. 461–468.

[132] T. Strang, C. Linnhoff-Popien, and K. Frank, “CoOL: A Context Ontology Language to Enable Contextual Interoperability,” in Distributed Applications and Interoperable Systems, J.-B. Stefani, I. Demeure, and D. Hagimont, Eds. Springer Berlin Heidelberg, 2003, pp. 236–247.

[133] Technical University of Cluj-Napoca, “Context-Modelling in Ambient Intelligence,” North University Center of Baia Mare, Nov. 2013.

[134] N. F. Noy, M. Sintek, S. Decker, M. Crubézy, R. W. Fergerson, and M. A. Musen, “Creating Semantic Web Contents with Protégé-2000,” IEEE Intell. Syst., vol. 16, no. 2, pp. 60–71, 2001.

[135] V. Haarslev and R. Müller, “RACER System Description,” in Automated Reasoning, R. Goré, A. Leitsch, and T. Nipkow, Eds. Springer Berlin Heidelberg, 2001, pp. 701–705.

[136] J. Forstadius, O. Lassila, and T. Seppänen, “RDF-based model for context-aware reasoning in rich service environment,” in Third IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom 2005 Workshops), 2005, pp. 15–19.


[137] X. H. Wang, D. Q. Zhang, T. Gu, and H. K. Pung, “Ontology based context modeling and reasoning using OWL,” in Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, 2004, pp. 18–22.

[138] “Protégé.” [Online]. Available: http://protege.stanford.edu/.

[139] “Apache Jena - Home.” [Online]. Available: http://jena.apache.org/.

[140] E. Prud’hommeaux, A. Seaborne, et al., “SPARQL query language for RDF,” W3C Recommendation, Jan. 2008.

[141] E. Sirin, B. Parsia, B. C. Grau, A. Kalyanpur, and Y. Katz, “Pellet: A practical OWL-DL reasoner,” Web Semant. Sci. Serv. Agents World Wide Web, vol. 5, no. 2, pp. 51–53, Jun. 2007.

[142] J. D. Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin McGraw-Hill, 2000.

[143] A. Borshchev and A. Filippov, “From System Dynamics and Discrete Event to Practical Agent Based Modeling: Reasons, Techniques, Tools,” in The 22nd International Conference of the System Dynamics Society, 2004.

[144] N. Schieritz and A. Größler, “Emergent Structures in Supply Chains – A Study Integrating Agent-Based and System Dynamics Modeling,” in Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 2003.

[145] J. Duggan, “Equation-based policy optimization for agent-oriented system dynamics models,” Syst. Dyn. Rev., vol. 24, no. 1, pp. 97–118, 2008.

[146] H. Akkermans, “Emergent Supply Networks: System Dynamics Simulation of Adaptive Supply Agents,” in Proceedings of the 34th Hawaii International Conference on System Sciences, 2001, pp. 1–11.

[147] H. Rahmandad and J. Sterman, “Heterogeneity and Network Structure in the Dynamics of Diffusion: Comparing Agent-Based and Differential Equation Models,” Manag. Sci., vol. 54, no. 5, pp. 998–1014, May 2008.

[148] D. R. Bell, J. Choi, and L. Lodish, “What Matters Most in Internet Retailing,” MIT Sloan Manag. Rev., 2012.

[149] E. Brynjolfsson, Y. J. Hu, and M. S. Rahman, “Competing in the Age of Omnichannel Retailing,” MIT Sloan Manag. Rev., 2013.

[150] W. Rand and R. T. Rust, “Agent-based modeling in marketing: Guidelines for rigor,” Int. J. Res. Mark., vol. 28, no. 3, pp. 181–193, Sep. 2011.

[151] O. Hinz, B. Skiera, C. Barrot, and J. U. Becker, “Seeding Strategies for Viral Marketing: An Empirical Comparison,” J. Mark., vol. 75, pp. 55–71, Nov. 2011.

[152] B. Libai, E. Muller, and R. Peres, “Decomposing the Value of Word-of-Mouth Seeding Programs: Acceleration Versus Expansion,” J. Mark. Res., vol. 50, pp. 161–176, Apr. 2013.

[153] J. D. Bohlmann, R. J. Calantone, and M. Zhao, “The Effects of Market Network Heterogeneity on Innovation Diffusion: An Agent-Based Modeling Approach,” J. Prod. Innov. Manag., vol. 27, no. 5, pp. 741–760, 2010.


[154] D. Ortiz-Arroyo, “Discovering Sets of Key Players in Social Networks,” in Computational Social Network Analysis, A. Abraham, A.-E. Hassanien, and V. Snášel, Eds. London: Springer London, 2010.

[155] S. P. Borgatti, “Centrality and network flow,” Soc. Netw., vol. 27, no. 1, pp. 55–71, Jan. 2005.

[156] S. P. Borgatti and M. G. Everett, “A Graph-theoretic perspective on centrality,” Soc. Netw., vol. 28, no. 4, pp. 466–484, Oct. 2006.

[157] S. P. Borgatti, K. M. Carley, and D. Krackhardt, “On the robustness of centrality measures under conditions of imperfect data,” Soc. Netw., vol. 28, no. 2, pp. 124–136, May 2006.

[158] S. Fortunato, “Community detection in graphs,” Phys. Rep., vol. 486, no. 3–5, pp. 75–174, Feb. 2010.

[159] A. Lancichinetti, S. Fortunato, and J. Kertész, “Detecting the overlapping and hierarchical community structure in complex networks,” New J. Phys., vol. 11, no. 3, p. 033015, Mar. 2009.

[160] H.-W. Shen, Community Structure of Complex Networks. Springer, 2013.

[161] M. L. Katz and C. Shapiro, “Network Externalities, Competition, and Compatibility,” Am. Econ. Rev., vol. 75, no. 3, pp. 424–440, 1985.

[162] M. O. Jackson, Social and Economic Networks. Princeton, NJ: Princeton University Press, 2008.

[163] J. Liu, T. Dietz, S. R. Carpenter, C. Folke, M. Alberti, C. L. Redman, S. H. Schneider, E. Ostrom, A. N. Pell, J. Lubchenco, W. W. Taylor, Z. Ouyang, P. Deadman, T. Kratz, and W. Provencher, “Coupled human and natural systems,” AMBIO J. Hum. Environ., vol. 36, no. 8, pp. 639–649, Dec. 2007.

[164] J. W. Forrester, Principles of Systems. Portland, OR: Productivity Press, 1968.

[165] A. Mathur, “The Evolution of Business Ecosystems: Interspecies Competition in the Steel Industry,” Massachusetts Institute of Technology, 2010.

[166] T. F. Piepenbrock, “Toward a Theory of the Evolution of Business Ecosystems: Enterprise Architectures, Competitive Dynamics, Firm Performance & Industrial Co-Evolution,” Massachusetts Institute of Technology, 2009.

[167] E. den Hartigh, M. Tol, J. Wei, W. Visscher, and M. Zhao, “Modeling a business ecosystem: An agent-based simulation,” in ECCON 2005 Annual Meeting, 2005, pp. 1–18.

[168] C. Weiller and A. Neely, “Business Model Design in an Ecosystem Context.”

[169] J. F. Moore, “Predators and Prey: A New Ecology of Competition,” Harv. Bus. Rev., 1993.

[170] P. Monge, B. M. Heiss, and D. B. Margolin, “Communication Network Evolution in Organizational Communities,” Commun. Theory, vol. 18, no. 4, pp. 449–477, Oct. 2008.

[171] P. Monge, S. Lee, J. Fulk, M. Weber, C. Shen, C. Schultz, D. Margolin, J. Gould, and L. B. Frank, “Research Methods for Studying Evolutionary and Ecological Processes in Organizational Communication,” Manag. Commun. Q., vol. 25, no. 2, pp. 211–251, Apr. 2011.

[172] H. W. Chesbrough, Open Innovation: The New Imperative for Creating and Profiting from Technology. Boston, MA: Harvard Business School Press, 2003.

[173] S. L. Vargo and R. F. Lusch, “Evolving to a New Dominant Logic for Marketing,” J. Mark., vol. 68, pp. 1–17, Jan. 2004.


[174] R. F. Lusch and S. L. Vargo, “Service-dominant logic: reactions, reflections and refinements,” Mark. Theory, vol. 6, no. 3, pp. 281–288, Sep. 2006.

[175] S.-Y. Hong and T. G. Kim, “Specification of multi-resolution modeling space for multi-resolution system,” SIMULATION, vol. 89, no. 1, pp. 28–40, Jan. 2013.

[176] US EPA, Office of Research and Development, “Sustainability Information.” [Online]. Available: http://www.epa.gov/sustainability/basicinfo.htm.

[177] M. Arena, N. D. Ciceri, S. Terzi, I. Bengo, G. Azzone, and M. Garetti, “A state-of-the-art of industrial sustainability: definitions, tools and metrics,” Int. J. Prod. Lifecycle Manag., vol. 4, no. 1, pp. 207–251, Jan. 2009.

[178] J. Jeswiet and M. Hauschild, “EcoDesign and future environmental impacts,” Mater. Des., vol. 26, no. 7, pp. 629–634, 2005.

[179] R. Karlsson and C. Luttropp, “EcoDesign: what’s happening? An overview of the subject area of EcoDesign and of the papers in this special issue,” J. Clean. Prod., vol. 14, no. 15–16, pp. 1291–1298, 2006.

[180] W. Wimmer and R. Züst, ECODESIGN Pilot: Product Investigation, Learning and Optimization Tool for Sustainable Product Development. Springer, 2003.

[181] “eco-union.” [Online]. Available: http://eco-union.org/en.

[182] “Delft Design Guide: EcoDesign Checklist.” [Online]. Available: http://ocw.tudelft.nl/courses/product-design/delft-design-guide/course-home/.

[183] J. Fiksel, Design for Environment: A Guide to Sustainable Product Development: Eco-Efficient Product Development. McGraw-Hill Professional, 2009.

[184] Q. Yang, “Life cycle assessment in sustainable product design,” SIMTech Tech. Rep., vol. 8, no. 1, pp. 57–64, 2007.

[185] M. Otero, A. Pastor, J. Portela, J. Viguera, and M. Huerta, “Methods of Analysis for a Sustainable Production System,” 2011.

[186] A. Remmen, A. A. Jensen, and J. Frydendal, Life Cycle Management: A Business Guide to Sustainability. UNEP/Earthprint, 2007.

[187] M. F. Cahyandito, “The MIPS Concept (Material Input Per Unit of Service): A Measure for an Ecological Economy,” Working Papers in Business, Management and Finance 200901, Department of Management and Business, Padjadjaran University, 2009.

[188] M. A. J. Huijbregts, L. J. A. Rombouts, S. Hellweg, R. Frischknecht, A. J. Hendriks, D. van de Meent, A. M. J. Ragas, L. Reijnders, and J. Struijs, “Is Cumulative Fossil Energy Demand a Useful Indicator for the Environmental Performance of Products?,” Environ. Sci. Technol., vol. 40, no. 3, pp. 641–648, Feb. 2006.

[189] C. Rocha, I. Celades, T. R. Dosdá, D. Camocho, S. Bajouco, M. H. Arroz, M. Baroso, I. Brarens, P. G. Grais, M. Almeida, et al., “Innovation and Ecodesign in Ceramic Industry.”

[190] S. L. Hart, “A Natural-Resource-Based View of the Firm,” Acad. Manage. Rev., vol. 20, no. 4, pp. 986–1014, Oct. 1995.

[191] S. Hajmohammad, S. Vachon, R. D. Klassen, and I. Gavronski, “Lean management and supply management: their role in green practices and performance,” J. Clean. Prod., vol. 39, pp. 312–320, Jan. 2013.


[192] R. Florida, “Lean and Green: The Move to Environmentally Conscious Manufacturing,” Calif. Manage. Rev., vol. 39, no. 1, pp. 80–105, 1996.

[193] M. E. Porter and C. van der Linde, “Green and competitive: ending the stalemate,” Harv. Bus. Rev., vol. 73, no. 5, pp. 120–134, 1995.

[194] A. A. King and M. J. Lenox, “Lean and Green? An Empirical Examination of the Relationship Between Lean Production and Environmental Performance,” Prod. Oper. Manag., vol. 10, no. 3, pp. 244–256, 2001.

[195] C. M. Dües, K. H. Tan, and M. Lim, “Green as the new Lean: how to use Lean practices as a catalyst to greening your supply chain,” J. Clean. Prod., vol. 40, pp. 93–100, Feb. 2013.

[196] J. Sarkis, “A strategic decision framework for green supply chain management,” J. Clean. Prod., vol. 11, no. 4, pp. 397–409, Jun. 2003.

[197] J. F. Krafcik, “Triumph of the lean production system,” Sloan Manage. Rev., vol. 30, no. 1, pp. 41–51, 1988.

[198] J. P. Womack, D. T. Jones, and D. Roos, The Machine That Changed the World: The Story of Lean Production, 1st Harper Perennial ed. New York, 1991.

[199] D. A. Locher, Value Stream Mapping for Lean Development: A How-To Guide for Streamlining Time to Market. CRC Press, 2011.

[200] H. C. M. León and J. A. Farris, “Lean Product Development Research: Current State and Future Directions,” Eng. Manag. J., vol. 23, no. 1, pp. 29–51, Mar. 2011.

[201] C. Karlsson and P. Åhlström, “The Difficult Path to Lean Product Development,” J. Prod. Innov. Manag., vol. 13, no. 4, pp. 283–295, 1996.

[202] J. Hoppmann, E. Rebentisch, U. Dombrowski, and T. Zahn, “A Framework for Organizing Lean Product Development,” Eng. Manag. J., vol. 23, no. 1, pp. 3–15, Mar. 2011.

[203] A. C. Ward, J. Shook, and D. Sobek, Lean Product and Process Development. Lean Enterprise Institute, 2007.

[204] J. M. Morgan and J. K. Liker, The Toyota Product Development System: Integrating People, Process, and Technology. Productivity Press, 2006.

[205] J. Oehmen and E. Rebentisch, “Waste in Lean Product Development,” Lean Advancement Initiative, Technical Report, Jul. 2010.

[206] S. C. Wheelwright, Managing New Product and Process Development: Text and Cases. Simon and Schuster, 2010.

[207] Lean Lexicon: A Graphical Glossary for Lean Thinkers. Lean Enterprise Institute, 2008.

[208] J. Kato, “Development of a process for continuous creation of lean value in product development organizations,” Thesis, Massachusetts Institute of Technology, 2005.

[209] D. B. Stagney, “Organizational implications of real-time concurrent engineering: short-term challenges, long-term solutions,” 2003.

[210] C. Loch and S. Kavadias, Handbook of New Product Development. Elsevier, 2007.

[211] T. Erl, Service-Oriented Architecture: Concepts, Technology, and Design. Upper Saddle River, NJ, USA: Prentice Hall PTR, 2005.

[212] M. Papazoglou and W.-J. van den Heuvel, “Service oriented architectures: approaches, technologies and research issues,” VLDB J., vol. 16, no. 3, pp. 389–415, 2007.


[213] N. Josuttis, SOA in Practice, 1st ed. O’Reilly, 2007.

[214] N. Komoda, “Service Oriented Architecture (SOA) in Industrial Systems,” in Industrial In-

formatics, 2006 IEEE International Conference on, 2006, pp. 1 –5.

[215] Y. V. Natis, Service-oriented architecture scenario. 2003.

[216] World Wide Web Consortium (W3C), “Web Services Glossary,” 2004. Retrieved May 31, 2009.

[217] S. Karnouskos, A. W. Colombo, T. Bangemann, K. Manninen, R. Camp, M. Tilly, P. Stluka,

F. Jammes, J. Delsing, and J. Eliasson, “A SOA-based architecture for empowering future

collaborative cloud-based industrial automation,” in IECON 2012 - 38th Annual Conference

on IEEE Industrial Electronics Society, 2012, pp. 5766–5772.

[218] G. Di Orio, “Adapter module for self-learning production systems,” FCT-UNL, 2013.

[219] F. Jammes and H. Smit, “Service-oriented paradigms in industrial automation,” IEEE Trans. Ind. Inform., vol. 1, no. 1, pp. 62–70, Feb. 2005.

[220] V. Hajarnavis and K. Young, “An Assessment of PLC Software Structure Suitability for the Support of Flexible Manufacturing Processes,” IEEE Trans. Autom. Sci. Eng., vol. 5, no. 4, pp. 641–650, Oct. 2008.

[221] D. Krafzig, K. Banke, and D. Slama, Enterprise SOA: Service-Oriented Architecture Best Practices. Prentice Hall Professional, 2005.

[222] A. W. Colombo and S. Karnouskos, “Towards the factory of the future: A service-oriented cross-layer infrastructure,” in ICT Shaping the World: A Scientific View, European Telecommunications Standards Institute (ETSI), John Wiley & Sons, 2009, pp. 65–81.

[223] S. Karnouskos, D. Savio, P. Spiess, D. Guinard, V. Trifa, and O. Baecker, “Real-world

Service Interaction with Enterprise Systems in Dynamic Manufacturing Environments,” in

Artificial Intelligence Techniques for Networked Manufacturing Enterprises Management,

L. Benyoucef and B. Grabot, Eds. Springer London, 2010, pp. 423–457.

[224] G. Cândido, J. Barata, A. W. Colombo, and F. Jammes, “SOA in reconfigurable supply chains: A research roadmap,” Eng. Appl. Artif. Intell., vol. 22, no. 6, pp. 939–949, 2009.

[225] E. Turban, J. Aronson, and T.-P. Liang, Decision Support Systems and Intelligent Systems, 7th ed. Pearson Prentice Hall, 2005.

[226] S. Karnouskos and A. W. Colombo, “Architecting the next generation of service-based

SCADA/DCS system of systems,” in IECON 2011 - 37th Annual Conference on IEEE In-

dustrial Electronics Society, 2011, pp. 359–364.

[227] X. Xu, “From cloud computing to cloud manufacturing,” Robot. Comput.-Integr. Manuf.,

vol. 28, no. 1, pp. 75–86, Feb. 2012.

[228] J. C. R. Licklider, “Topics for Discussion at the Forthcoming Meeting, Memorandum for: Members and Affiliates of the Intergalactic Computer Network,” Advanced Research Projects Agency, Apr. 1963.

[229] P. Mell and T. Grance, “Perspectives on cloud computing and standards,” Natl. Inst. Stand.

Technol. NIST Inf. Technol. Lab., 2009.

[230] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patter-

son, A. Rabkin, I. Stoica, and M. Zaharia, “A View of Cloud Computing,” Commun ACM,

vol. 53, no. 4, pp. 50–58, Apr. 2010.

[231] T. Chou, Introduction to Cloud Computing. Cloudbook.

[232] A. Marinos and G. Briscoe, “Community Cloud Computing,” in Cloud Computing, M. G.

Jaatun, G. Zhao, and C. Rong, Eds. Springer Berlin Heidelberg, 2009, pp. 472–484.

[233] Q. Liu, L. Gao, and P. Lou, “Resource management based on multi-agent technology for

cloud manufacturing,” in 2011 International Conference on Electronics, Communications

and Control (ICECC), 2011, pp. 2821–2824.

[234] X. V. Wang and X. W. Xu, “ICMS: A Cloud-Based Manufacturing System,” in Cloud Man-

ufacturing, W. Li and J. Mehnen, Eds. Springer London, 2013, pp. 1–22.

[235] X. Vincent Wang and X. W. Xu, “An interoperable solution for Cloud manufacturing,” Ro-

bot. Comput.-Integr. Manuf., vol. 29, no. 4, pp. 232–247, Aug. 2013.

[236] F. Tao, L. Zhang, V. C. Venkatesh, Y. Luo, and Y. Cheng, “Cloud manufacturing: a com-

puting and service-oriented manufacturing model,” Proc. Inst. Mech. Eng. Part B J. Eng.

Manuf., vol. 225, no. 10, pp. 1969–1976, Oct. 2011.

[237] F. Tao, Y. LaiLi, L. Xu, and L. Zhang, “FC-PACO-RM: A Parallel Method for Service

Composition Optimal-Selection in Cloud Manufacturing System,” IEEE Trans. Ind. In-

form., vol. 9, no. 4, pp. 2023–2033, 2013.

[238] A. W. Colombo, T. Bangemann, S. Karnouskos, J. Delsing, P. Stluka, R. Harrison, F. Jammes, and J. L. M. Lastra, Eds., Industrial Cloud-Based Cyber-Physical Systems: The IMC-AESOP Approach. Springer, 2014.

[239] T.-Y. Li, H. Zhu, and K.-Y. Lam, “A Novel Two-Level Trust Model for Grid,” in Infor-

mation and Communications Security, S. Qing, D. Gollmann, and J. Zhou, Eds. Springer

Berlin Heidelberg, 2003, pp. 214–225.

[240] K. Hwang, Y.-K. Kwok, S. Song, M. C. Y. Chen, Y. Chen, R. Zhou, and X. Lou, “GridSec:

Trusted Grid Computing with Security Binding and Self-defense Against Network Worms

and DDoS Attacks,” in Computational Science – ICCS 2005, V. S. Sunderam, G. D. van

Albada, P. M. A. Sloot, and J. Dongarra, Eds. Springer Berlin Heidelberg, 2005, pp. 187–

195.

[241] “Apache Tomcat,” Wikipedia, the free encyclopedia. 09-Feb-2014.

[242] OASIS, Devices Profile for Web Services Version 1.1, OASIS Standard, 2009.

[243] T. Hadlich, “Providing device integration with OPC UA,” in Industrial Informatics, 2006

IEEE International Conference on, 2006, pp. 263–268.

[244] S.-H. Leitner and W. Mahnke, “OPC UA–service-oriented architecture for industrial appli-

cations,” ABB Corp. Res. Cent., 2006.

[245] G. Candido, “Service-oriented Architecture for Device Lifecycle Support in Industrial Au-

tomation,” FCT-UNL.

[246] S. Dustdar and W. Schreiner, “A survey on web services composition,” Int. J. Web Grid

Serv., vol. 1, no. 1, pp. 1–30, 2005.

[247] C. Peltz, “Web services orchestration and choreography,” Computer, vol. 36, no. 10, pp.

46–52, 2003.

[248] A. Bucchiarone, H. Melgratti, and F. Severoni, “Testing service composition,” in Proceed-

ings of the 8th Argentine Symposium on Software Engineering (ASSE’07), 2007.

[249] J. Brønsted, K. M. Hansen, and M. Ingstrup, “Service composition issues in pervasive computing,” IEEE Pervasive Comput., vol. 9, no. 1, pp. 62–70, 2010.

[250] F. Jammes, H. Smit, J. L. M. Lastra, and I. M. Delamer, “Orchestration of service-oriented

manufacturing processes,” in Emerging Technologies and Factory Automation, 2005. ETFA

2005. 10th IEEE Conference on, 2005, vol. 1, p. 8 pp. –624.

[251] N. Kavantzas, D. Burdett, G. Ritzinger, T. Fletcher, Y. Lafon, and C. Barreto, “Web services

choreography description language version 1.0,” W3C Candidate Recomm., vol. 9, 2005.

[252] W. Hazenberg and M. Huisman, Meta Products: Building the Internet of Things. BIS Publishers, 2011.

[253] K. Ashton, “That ‘internet of things’ thing,” RFiD J., vol. 22, pp. 97–114, 2009.

[254] F. Mattern and C. Floerkemeier, “From the Internet of Computers to the Internet of Things,”

in From Active Data Management to Event-Based Systems and More, K. Sachs, I. Petrov,

and P. Guerrero, Eds. Springer Berlin Heidelberg, 2010, pp. 107–121.

[255] P. Magrassi and T. Berg, “A world of smart objects,” Gartner research report R-17-2243, 12 Aug. 2002.

[256] R. H. Weber and R. Weber, Internet of Things. Springer, 2010.

[257] H. Kopetz, “Internet of Things,” in Real-Time Systems, Springer US, 2011, pp. 307–323.

[258] W. Wang, S. De, R. Toenjes, E. Reetz, and K. Moessner, “A Comprehensive Ontology for

Knowledge Representation in the Internet of Things,” in 2012 IEEE 11th International Con-

ference on Trust, Security and Privacy in Computing and Communications (TrustCom),

2012, pp. 1793–1798.

[259] M. Compton, P. Barnaghi, L. Bermudez, R. García-Castro, O. Corcho, S. Cox, J. Graybeal,

M. Hauswirth, C. Henson, A. Herzog, V. Huang, K. Janowicz, W. D. Kelsey, D. Le Phuoc,

L. Lefort, M. Leggieri, H. Neuhaus, A. Nikolov, K. Page, A. Passant, A. Sheth, and K.

Taylor, “The SSN ontology of the W3C semantic sensor network incubator group,” Web

Semant. Sci. Serv. Agents World Wide Web, vol. 17, pp. 25–32, Dec. 2012.

[260] S. De, T. Elsaleh, P. Barnaghi, and S. Meissner, “An Internet of Things Platform for Real-

World and Digital Objects,” Scalable Comput. Pract. Exp., vol. 13, no. 1, Apr. 2012.

[261] H. van der Veer and A. Wiles, “Achieving technical interoperability,” Eur. Telecommun.

Stand. Inst., 2008.

[262] K. Kotis and A. Katasonov, “Semantic Interoperability on the Web of Things: The Semantic

Smart Gateway Framework,” in 2012 Sixth International Conference on Complex, Intelli-

gent and Software Intensive Systems (CISIS), 2012, pp. 630–635.

[263] K. Kotis, A. Katasonov, and J. Leino, “Aligning Smart and Control Entities in the IoT,” in

Internet of Things, Smart Spaces, and Next Generation Networking, S. Andreev, S. Bal-

andin, and Y. Koucheryavy, Eds. Springer Berlin Heidelberg, 2012, pp. 39–50.

[264] M. Milošević, M. T. Shrove, and E. Jovanov, “Applications of Smartphones for Ubiquitous

Health Monitoring and Wellbeing Management,” JITA - J. Inf. Technol. Appl. Banja Luka

- APEIRON, vol. 1, no. 1, Jun. 2011.

[265] O. D. Matei, “Remote Surveillance and Signaling System of the Human Body Parameters,” Patent WO/2013/085406, 14-Jun-2013.

[266] C. Perera, A. Zaslavsky, C. H. Liu, M. Compton, P. Christen, and D. Georgakopoulos, “Sen-

sor Search Techniques for Sensing as a Service Architecture for the Internet of Things,”

IEEE Sens. J., vol. 14, no. 2, pp. 406–420, Feb. 2014.

[267] L. Mraz and M. Simek, “Poster Abstract: Visualization and Monitoring Tool for Sensor

Devices,” in Real-World Wireless Sensor Networks, K. Langendoen, W. Hu, F. Ferrari, M.

Zimmerling, and L. Mottola, Eds. Springer International Publishing, 2014, pp. 61–64.

[268] Kuo-pao Yang, Ghassan Alkadi, Bishwas Gautam, Arjun Sharma, Darshan Amatya, Sylvia

Charchut, and Matthew Jones, “Park-A-Lot: An Automated Parking Management System,”

Comput. Sci. Inf. Technol., vol. 1, pp. 276–279, 2013.

[269] A. Kamilaris and A. Pitsillides, “The Impact of Remote Sensing on the Everyday Lives of

Mobile Users in Urban Areas,” Proc ICMU Singap., 2014.

[270] P. Gordon, Lean and Green: Profit for Your Workplace and the Environment. Berrett-Koeh-

ler Publishers, 2001.

[271] M. Kleinsmann, J. Buijs, and R. Valkenburg, “Understanding the complexity of knowledge

integration in collaborative new product development teams: A case study,” J. Eng. Tech-

nol. Manag., vol. 27, no. 1–2, pp. 20–32, Mar. 2010.

[272] H. Berends, W. Vanhaverbeke, and R. Kirschbaum, “Knowledge management challenges

in new business development: Case study observations,” J. Eng. Technol. Manag., vol. 24,

no. 4, pp. 314–328, Dec. 2007.

[273] “Wikipedia.” [Online]. Available: http://www.wikipedia.org/.

[274] “Eclipse - The Eclipse Foundation open source community website.” [Online]. Available:

http://eclipse.org/.

[275] “Packaged Web Apps (Widgets) - Packaging and XML Configuration (Second Edition).”

[Online]. Available: http://www.w3.org/TR/widgets/.

[276] “The Java Community Process(SM) Program - JSRs: Java Specification Requests - detail

JSR# 286.” [Online]. Available: https://www.jcp.org/en/jsr/detail?id=286.

[277] “Introduction | RajeevaLochana B R.”

[278] “JSON.” [Online]. Available: http://www.json.org/.

[279] “OSGi in a nutshell.” [Online]. Available: http://gravity.sourceforge.net/service-

binder/osginutshell.html.

[280] “The Java Community Process(SM) Program - JSRs: Java Specification Requests - detail

JSR# 330.” [Online]. Available: https://jcp.org/en/jsr/detail?id=330.

[281] “The Java Community Process(SM) Program - JSRs: Java Specification Requests - detail

JSR# 311.” [Online]. Available: https://jcp.org/en/jsr/detail?id=311.

[282] “The Java Community Process(SM) Program - JSRs: Java Specification Requests - detail

JSR# 224.” [Online]. Available: https://jcp.org/en/jsr/detail?id=224.

[283] E. Hammer-Lahav, Ed., “The OAuth 1.0 Protocol,” IETF RFC 5849. [Online]. Available: http://tools.ietf.org/html/rfc5849.

[284] L. Masinter, T. Berners-Lee, and R. T. Fielding, “Uniform Resource Identifiers (URI): Ge-

neric Syntax.” [Online]. Available: http://tools.ietf.org/html/rfc2396.

[285] “Latest ‘RDF Concepts and Abstract Syntax’ versions.” [Online]. Available: http://www.w3.org/TR/rdf-concepts/.

[286] “RDF Schema 1.1.” [Online]. Available: http://www.w3.org/TR/rdf-schema/.

[287] “OWL 2 Web Ontology Language Document Overview (Second Edition).” [Online]. Avail-

able: http://www.w3.org/TR/owl2-overview/.

[288] “SKOS Simple Knowledge Organization System Reference.” [Online]. Available:

http://www.w3.org/TR/skos-reference/.

[289] “RDFa 1.1 Primer - Second Edition.” [Online]. Available: http://www.w3.org/TR/rdfa-pri-

mer/.

[290] “SPARQL 1.1 Query Language.” [Online]. Available: http://www.w3.org/TR/sparql11-

query/.

6 Annex(es)

6.1 Annex 1

6.1.1 PLATFORM STANDARDS

6.1.1.1 Languages

6.1.1.1.1 Java

“Java is a general-purpose, concurrent, class-based, object-oriented computer programming language that is specifically designed to have as few implementation dependencies as possible. It is intended to let application developers “write once, run anywhere”, meaning that code that runs on one platform does not need to be recompiled to run on another. Java applications are typically compiled to bytecode (class file) that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is, as of 2012, one of the most popular programming languages in use, particularly for client-server web applications, with a reported 10 million users ... The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them...

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any hardware/operating-system platform. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to platform-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be interpreted by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a Web browser for Java applets. Standardized libraries provide a generic way to access host-specific features such as graphics, threading, and networking. A major benefit of using bytecode is porting. However, the overhead of interpretation means that interpreted programs almost always run more slowly than programs compiled to native executables would. Just-in-Time (JIT) compilers were introduced from an early stage that compile bytecodes to machine code during runtime.” [273]
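The compile-once model described above can be sketched as follows (file and class names are illustrative, not taken from the deliverable):

```java
// Portability.java -- compile once with `javac Portability.java`,
// then run the resulting bytecode on any JVM with `java Portability`.
public class Portability {

    // A platform-independent computation: the bytecode emitted for this
    // method is identical on every operating system and CPU architecture.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // The JVM (with its JIT compiler) translates the bytecode
        // to native machine code at runtime.
        System.out.println("2 + 3 = " + add(2, 3)); // prints "2 + 3 = 5"
        System.out.println("JVM on: " + System.getProperty("os.name"));
    }
}
```

The same Portability.class file runs unchanged on Windows, Linux or OS X; only the installed JRE differs per platform.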

6.1.1.1.2 C++

“C++ ...is a statically typed, free-form, multi-paradigm, compiled, general-purpose programming language. It is regarded as an intermediate-level language, as it comprises both high-level and low-level language features ... C++ is one of the most popular programming languages and is implemented on a wide variety of hardware and operating system platforms. As an efficient compiler to native code, its application domains include systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games. Several groups provide both free and proprietary C++ compiler software, including the GNU Project, LLVM, Microsoft, Intel and Embarcadero Technologies. C++ has greatly influenced many other popular programming languages, most notably C# and Java.

C++ is also used for hardware design, where the design is initially described in C++, then analysed, architecturally constrained, and scheduled to create a register-transfer level hardware description language via high-level synthesis. The language began as enhancements to C, first adding classes, then virtual functions, operator overloading, multiple inheritance, templates and exception handling, among other features. After years of development, the C++ programming language standard was ratified in 1998 as ISO/IEC 14882:1998. The standard was amended by the 2003 technical corrigendum, ISO/IEC 14882:2003. The current standard extending C++ with new features was ratified and published by ISO in September 2011 as ISO/IEC 14882:2011 (informally known as C++11).” [273]

6.1.1.1.3 C#

“C# ... is a multi-paradigm programming language encompassing strong typing, imperative, declarative, functional, procedural, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within its .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure.

C# is intended to be a simple, modern, general-purpose, object-oriented programming language. By design, C# is the programming language ... that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework.

However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, or generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or Fortran.” [273]

6.1.1.1.4 HTML

HyperText Markup Language “(HTML) is the main markup language for creating web pages and other information that can be displayed in a web browser. HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>), within the web page content. Most commonly, HTML tags come in pairs like <h1> (opening tag) and </h1> (closing tag), although some tags (known as empty elements) are unpaired (e. g. <img>). The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening tags and closing tags). In between these tags web designers can add text, tags, comments and other types of text-based content.

The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. The browser does not display the HTML tags, but uses the tags to interpret the content of the page. HTML elements form the building blocks of all websites. HTML allows images and objects to be embedded and can be used to create interactive forms. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. It can embed scripts written in languages such as JavaScript which affect the behaviour of HTML web pages.

Web browsers can also refer to Cascading Style Sheets (CSS) to define the appearance and layout of text and other material. The W3C, maintainer of both the HTML and the CSS standards, encourages the use of CSS over explicit presentational HTML markup.” [273]
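As an illustration of the element structure described above (the page content is invented for the example):

```html
<!-- Paired tags enclose content; empty elements such as <img> are unpaired. -->
<html>
  <head>
    <title>Example page</title>
  </head>
  <body>
    <h1>A heading</h1>                    <!-- start tag, content, end tag -->
    <p>A paragraph with a <a href="http://www.w3.org/">link</a>.</p>
    <img src="logo.png" alt="A logo">     <!-- empty element, no end tag -->
  </body>
</html>
```

The browser renders only the content; the tags themselves are interpreted to build the structure of the page.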

6.1.1.1.5 HTML5

“HTML5 is a markup language for structuring and presenting content for the Web and a core technology of the Internet. It is the fifth revision of the HTML standard ...” and “its core aims have been to improve the language with support for the latest multimedia while keeping it easily readable by humans and consistently understood by computers and devices (web browsers, parsers, etc.). HTML5 is intended to subsume not only HTML 4, but also XHTML 1 and DOM Level 2 HTML.

Following its immediate predecessors HTML 4.01 and XHTML 1.1, HTML5 is a response to the observation that the HTML and XHTML in common use on the Web are a mixture of features introduced by various specifications, along with those introduced by software products such as web browsers, those established by common practice, and the many syntax errors in existing web documents. It is also an attempt to define a single markup language that can be written in either HTML or XHTML syntax. It includes detailed processing models to encourage more interoperable implementations; it extends, improves and rationalises the markup available for documents, and introduces markup and application programming interfaces (APIs) for complex web applications. For the same reasons, HTML5 is also a potential candidate for cross-platform mobile applications. Many features of HTML5 have been built with the consideration of being able to run on low-powered devices such as smartphones and tablets...

In particular, HTML5 adds many new syntactic features. These include the new <video>, <audio> and <canvas> elements, as well as the integration of scalable vector graphics (SVG) content (that replaces the uses of generic <object> tags) and MathML for mathematical formulas. These features are designed to make it easy to include and handle multimedia and graphical content on the web without having to resort to proprietary plug-ins and APIs. Other new elements, such as <section>, <article>, <header> and <nav>, are designed to enrich the semantic content of documents. New attributes have been introduced for the same purpose, while some elements and attributes have been removed. Some elements, such as <a>, <cite> and <menu> have been changed, redefined or standardized. The APIs and Document Object Model (DOM) are no longer afterthoughts, but are fundamental parts of the HTML5 specification. HTML5 also defines in some detail the required processing for invalid documents so that syntax errors will be treated uniformly by all conforming browsers and other user agents.” [273]
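The new structural and media elements mentioned above can be combined as in this sketch (file names invented):

```html
<!DOCTYPE html>  <!-- the single, simplified HTML5 doctype -->
<html>
  <body>
    <header>
      <nav><a href="index.html">Home</a></nav>  <!-- semantic navigation -->
    </header>
    <section>
      <article>
        <h1>Article title</h1>
        <!-- native multimedia and graphics, no proprietary plug-in required -->
        <video src="demo.webm" controls></video>
        <canvas id="chart" width="300" height="150"></canvas>
      </article>
    </section>
  </body>
</html>
```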

6.1.1.1.6 CSS

“Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation semantics (the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to any kind of XML document, including plain XML, SVG and XUL.

CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the layout, colours, and fonts. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design). CSS can also allow the same markup page to be presented in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. It can also be used to allow the web page to display differently depending on the screen size or device on which it is being viewed. While the author of a document typically links that document to a CSS file, readers can use a different style sheet, perhaps one on their own computer, to override the one the author has specified.

CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable.

The CSS specifications are maintained by the World Wide Web Consortium (W3C). Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998), and they also operate a free CSS validation service...The W3C has now deprecated the use of all presentational HTML markup.” [273]
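A small sketch of the cascade and of media-dependent presentation (selectors and values invented):

```css
/* Both rules match <p class="note">; in the cascade the more specific
   selector p.note carries more weight, so such paragraphs render dark red. */
p      { color: black; font-size: 1em; }
p.note { color: darkred; }

/* Same markup, different presentation per rendering method: */
@media print {
  p { font-size: 10pt; }   /* applied only when the page is printed */
}
```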

6.1.1.1.7 JavaScript

“JavaScript (JS) is an interpreted computer programming language. It was originally implemented as part of web browsers so that client-side scripts could interact with the user, control the browser, communicate asynchronously, and alter the document content that was displayed. More recently, however, it has become common in both game development and the creation of desktop applications.

JavaScript is a prototype-based scripting language that is dynamic, weakly typed, and has first-class functions. Its syntax was influenced by the C language. JavaScript copies many names and naming conventions from Java, but the two languages are otherwise unrelated and have very different semantics. The key design principles within JavaScript are taken from the Self and Scheme programming languages. It is a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles.

JavaScript's use in applications outside of web pages—for example, in PDF documents, site-specific browsers, and desktop widgets—is also significant. Newer and faster JavaScript VMs and frameworks built upon them (notably Node.js) have also increased the popularity of JavaScript for server-side web applications.

JavaScript was formalized in the ECMAScript language standard and is primarily used as part of a web browser (client-side JavaScript). This enables programmatic access to computational objects within a host environment.” [273]
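Two of the language traits noted above, first-class functions and prototype-based objects, in a minimal sketch (all identifiers invented):

```javascript
// First-class functions: functions are values that can be stored and passed.
const twice = f => x => f(f(x));
const inc = x => x + 1;
console.log(twice(inc)(3)); // 5

// Prototype-based objects: behaviour is shared via a prototype link,
// not via a class declaration as in Java.
const animal = { speak: function () { return this.name + " makes a sound"; } };
const dog = Object.create(animal); // dog delegates to animal
dog.name = "Rex";
console.log(dog.speak()); // "Rex makes a sound"
```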

6.1.1.2 Development environments

6.1.1.2.1 Eclipse

Eclipse “is a multi-language software development environment comprising a workspace and an extensible plug-in system... It can be used to develop applications in Java and, by means of various plug-ins, other programming languages... The Eclipse Platform uses plug-ins to provide all functionality within and on top of the runtime system, in contrast to some other applications, in which functionality is hard coded... With the exception of a small run-time kernel, everything in Eclipse is a plug-in” [273], so a complete ecosystem adding new functionalities has been created around the core parts of Eclipse. “Eclipse is a community for individuals and organizations who wish to collaborate on commercially-friendly open source software. Its projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle. The Eclipse Foundation is a not-for-profit, member supported corporation that hosts the Eclipse projects and helps cultivate both an open source community and an ecosystem of complementary products and services.

The Eclipse Project was originally created by IBM in November 2001 and supported by a consortium of software vendors. The Eclipse Foundation was created in January 2004 as an independent not-for-profit corporation to act as the steward of the Eclipse community. The independent not-for-profit corporation was created to allow a vendor neutral and open, transparent community to be established around Eclipse. Today, the Eclipse community consists of individuals and organizations from a cross section of the software industry.” [274]

6.1.1.2.2 NetBeans

“NetBeans is an integrated development environment (IDE) for developing primarily with Java, but also with other languages, in particular PHP, C/C++, and HTML5. It is also an application platform framework for Java desktop applications and others. The NetBeans IDE is written in Java and can run on Windows, OS X, Linux, Solaris and other platforms supporting a compatible JVM.

The NetBeans Platform allows applications to be developed from a set of modular software components called modules. Applications based on the NetBeans Platform (including the NetBeans IDE itself) can be extended by third party developers... NetBeans IDE is an open-source integrated development environment. NetBeans IDE supports development of all Java application types (Java SE (including JavaFX), Java ME, web, EJB and mobile applications) out of the box. Among other features are an Ant-based project system, Maven support, refactorings, version control (supporting CVS, Subversion, Mercurial and Clearcase)...

All the functions of the IDE are provided by modules. Each module provides a well defined function, such as support for the Java language, editing, or support for the CVS versioning system, and SVN. NetBeans contains all the modules needed for Java development in a single download, allowing the user to start working immediately. Modules also allow NetBeans to be extended. New features, such as support for other programming languages, can be added by installing additional modules. For instance, Sun Studio, Sun Java Studio Enterprise, and Sun Java Studio Creator from Sun Microsystems are all based on the NetBeans IDE.” [273]

6.1.1.3 Integration mechanisms

6.1.1.3.1 Widgets

“The specifications of the World Wide Web Consortium W3C provide a standard packaging format and metadata for a class of software known commonly as packaged apps or widgets. Unlike traditional user interface widgets (e.g., buttons, input boxes, toolbars, etc.), widgets as specified in this standard are full-fledged client-side applications that are authored using technologies such as HTML and then packaged for distribution. Examples range from simple clocks, stock tickers, news casters, games and weather forecasters, to complex applications that pull data from multiple sources to be "mashed-up" and presented to a user in some interesting and useful way...

The specification relies on PKWare's Zip specification as the archive format, XML as a configuration document format, and a series of steps that runtimes follow when processing and verifying various aspects of a package. The packaging format acts as a container for files used by a widget. The configuration document is an XML vocabulary that declares metadata and configuration parameters for a widget. The steps for processing a widget package describe the expected behaviour and means of error handling for runtimes while processing the packaging format, configuration document, and other relevant files.” [275]
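A sketch of such a configuration document, following the W3C widget XML vocabulary (the widget itself is invented):

```xml
<!-- config.xml at the root of the Zip (.wgt) package; the runtime reads
     this metadata when processing and verifying the package -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/clock"
        version="1.0">
  <name>Clock</name>
  <description>A simple clock widget.</description>
  <content src="index.html"/>   <!-- start file packaged in the archive -->
  <icon src="icon.png"/>
</widget>
```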

6.1.1.3.2 Java Portlets

This specification (see [276]) deals with Portlets as Java technology based web components. “A portlet is an application that provides a specific piece of content (information or service) to be included as part of a portal page. It is managed by a portlet container that processes requests and generates dynamic content. Portlets are used by portals as pluggable user interface components that provide a presentation layer to information systems. The content generated by a portlet is also called a fragment. A fragment is a piece of markup (e.g. HTML, XHTML, WML) adhering to certain rules and can be aggregated with other fragments to form a complete document. The content of a portlet is normally aggregated with the content of other portlets to form the portal page. The lifecycle of a portlet is managed by the portlet container.

Web clients interact with portlets via a request/response paradigm implemented by the portal. Normally, users interact with content produced by portlets, for example by following links or submitting forms, resulting in portlet actions being received by the portal, which are forwarded by it to the portlets targeted by the user's interactions.” (http://jsr286tutorial.blogspot.de/p/portlet_21.html) “The content generated by a portlet may vary from one user to another depending on the user configuration for the portlet...

A portlet container runs portlets and provides them with the required runtime environment. A portlet container contains portlets and manages their lifecycle. It also provides persistent storage for portlet preferences. A portlet container receives requests from the portal to execute requests on the portlets hosted by it. A portlet container is not responsible for aggregating the content produced by the portlets. It is the responsibility of the portal to handle the aggregation. A portal and a portlet container can be built together as a single component of an application suite or as two separate components of a portal application.” [277]

6.1.1.4 Server-side components

6.1.1.4.1 Java Servlet

“A servlet is a Java programming language class used to extend the capabilities of a server; it is typically used in combination with a servlet container such as Apache Tomcat. Although servlets can respond to any types of requests, they are commonly used to extend the applications hosted by web servers, so they can be thought of as Java applets that run on servers instead of in web browsers. These kinds of servlets are the Java counterpart to other dynamic Web content technologies such as PHP and ASP.NET...

Servlets are most often used to:

Process or store data that was submitted from an HTML form

Provide dynamic content such as the results of a database query

Manage state information that does not exist in the stateless HTTP protocol, such as filling the articles into the shopping cart of the appropriate customer ...

A "servlet" is a Java class that conforms to the Java Servlet API, a protocol by which a Java class may respond to requests. Servlets could in principle communicate over any client–server protocol, but they are most often used with the HTTP protocol. Thus "servlet" is often used as shorthand for "HTTP servlet". Thus, a software developer may use a servlet to add dynamic content to a web server using the Java platform. The generated content is commonly HTML, but may be other data such as XML. Servlets can maintain state in session variables across many server transactions by using HTTP cookies, or URL rewriting.

To deploy and run a servlet, a web container must be used. A web container (also known as a servlet container) is essentially the component of a web server that interacts with the servlets. The web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access rights.
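The URL-to-servlet mapping handled by the web container is declared in the web application's deployment descriptor; a sketch (the servlet name and class are hypothetical):

```xml
<!-- Hypothetical web.xml fragment: the container routes /hello to HelloServlet -->
<servlet>
  <servlet-name>HelloServlet</servlet-name>
  <servlet-class>org.example.HelloServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>HelloServlet</servlet-name>
  <url-pattern>/hello</url-pattern>
</servlet-mapping>
```

Since Servlet 3.0 the same mapping can also be expressed directly on the class with the @WebServlet annotation instead of XML.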

The Servlet API, contained in the Java package hierarchy javax.servlet, defines the expected interactions of the web container and a servlet. A Servlet is an object that receives a request and generates a response based on that request. The basic Servlet package defines Java objects to represent servlet requests and responses, as well as objects to reflect the servlet's configuration parameters and execution environment. The package javax.servlet.http defines HTTP-specific subclasses of the generic servlet elements, including session management objects that track multiple requests and responses between the web server and a client. Servlets may be packaged in a WAR file as a web application.

Servlets can be generated automatically from Java Server Pages (JSP) by the JavaServer Pages compiler. The difference between servlets and JSP is that servlets typically embed HTML inside Java code, while JSPs embed Java code in HTML. The direct usage of servlets to generate HTML has however become quite rare. A somewhat older usage is to use servlets in conjunction with JSPs in a pattern called "Model 2", which is a flavour of the model–view–controller pattern.

The current version of Java Servlet is 3.0.” [273]

6.1.1.4.2 JavaServer Pages

“JavaServer Pages (JSP) is a technology that helps software developers create dynamically generated web pages based on HTML, XML, or other document types... JSP is similar to PHP, but it uses the Java programming language.

To deploy and run JavaServer Pages, a compatible web server with a servlet container, such as Apache Tomcat or Jetty, is required... Architecturally, JSP may be viewed as a high-level abstraction of Java servlets. JSPs are translated into servlets at runtime; each JSP's servlet is cached and re-used until the original JSP is modified.

JSP can be used independently or as the view component of a server-side model–view–controller design, normally with JavaBeans as the model and Java servlets (or a framework such as Apache Struts) as the controller. This is a type of Model 2 architecture.

JSP allows Java code and certain pre-defined actions to be interleaved with static web markup content, with the resulting page being compiled and executed on the server to deliver a document. The compiled pages, as well as any dependent Java libraries, use Java bytecode rather than a native software format. Like any other Java program, they must be executed within a Java virtual machine (JVM) that integrates with the server's host operating system to provide an abstract platform-neutral environment.

JSPs are usually used to deliver HTML and XML documents, but through the use of OutputStream, they can deliver other types of data as well. The Web container creates JSP implicit objects like pageContext, servletContext, session, request & response.” [273]
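The interleaving described above (static markup with embedded Java, rather than HTML emitted from Java code) can be seen in a minimal page sketch (the page content is invented for illustration):

```jsp
<%-- Hypothetical JSP page: the markup is static, the expression is evaluated per request --%>
<html>
  <body>
    <p>Current time: <%= new java.util.Date() %></p>
  </body>
</html>
```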

6.1.1.4.3 WebSockets

“WebSocket is a web technology providing full-duplex communications channels over a single TCP connection. The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL is being standardized by the W3C.

WebSocket is designed to be implemented in web browsers and web servers, but it can be used by any client or server application. The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request. The WebSocket protocol makes possible more interaction between a browser and a web site, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and allowing for messages to be passed back and forth while keeping the connection open. In this way a two-way (bi-directional) ongoing conversation can take place between a browser and the server. A similar effect has been achieved in non-standardized ways using stop-gap technologies such as Comet.
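The handshake mentioned above is an ordinary HTTP Upgrade exchange; the request and response below use the sample header values from RFC 6455:

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the connection stops carrying HTTP and instead carries WebSocket frames in both directions over the same TCP connection.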

In addition, the communications are done over TCP port number 80, which is of benefit for those environments which block non-standard Internet connections using a firewall. WebSocket protocol is currently supported in several browsers including Google Chrome, Internet Explorer, Firefox, Safari and Opera. WebSocket also requires web applications on the server to support it...

Like TCP, WebSocket provides for full-duplex communication. WebSocket differs from TCP in that it enables a stream of messages instead of a stream of bytes. Before WebSocket, port 80 full-duplex communication was attainable using Comet channels; however, Comet implementation is nontrivial, and due to the TCP handshake and HTTP header overhead, it is inefficient for small messages. The WebSocket protocol aims to solve these problems without compromising the security assumptions of the web.” [273]

6.1.1.5 Event processing

6.1.1.5.1 JSON

“JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.

JSON is built on two structures:

A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.

An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.


These are universal data structures. Virtually all modern programming languages support them in one form or another. It makes sense that a data format that is interchangeable with programming languages also be based on these structures.” [278]
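Both structures can be combined in one document; a small illustrative example (the field names and values are invented):

```json
{
  "name": "John",
  "type": "person",
  "measurements": [36.5, 36.7, 37.0]
}
```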

6.1.1.5.2 JMS

“The Java Message Service (JMS) API is a Java Message Oriented Middleware (MOM) API for sending messages between two or more clients. JMS is a part of the Java Platform, Enterprise Edition, and is defined by a specification developed under the Java Community Process as JSR 914. It is a messaging standard that allows application components based on the Java Enterprise Edition (JEE) to create, send, receive, and read messages. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous...

Messaging is a form of loosely coupled distributed communication, where in this context the term 'communication' can be understood as an exchange of messages between software components. Message-oriented technologies attempt to relax tightly coupled communication (such as TCP network sockets, CORBA or RMI) by the introduction of an intermediary component. This approach allows software components to communicate 'indirectly' with each other. Benefits of this include message senders not needing to have precise knowledge of their receivers...

JMS provides a way of separating the application from the transport layer that provides the data. The same Java classes can be used to communicate with different JMS providers by using the Java Naming and Directory Interface (JNDI) information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages.” [273]
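The intermediary-based decoupling described above can be illustrated with a plain java.util.concurrent sketch (this is a conceptual stand-in, not the actual javax.jms API; the class and message names are invented):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Conceptual sketch only: a BlockingQueue stands in for the JMS provider's
// queue, so the sender and the receiver share no direct reference.
public class QueueSketch {

    // Producer and consumer both see only the intermediary queue.
    static String roundTrip() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> queue.offer("order-created"));
        producer.start();
        producer.join();

        // The consumer takes from the queue without knowing who sent the message.
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip());
    }
}
```

A real JMS provider adds persistence, transactions and asynchronous listeners on top of this basic indirection.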

6.1.1.5.3 DDS

“The Data Distribution Service for Real-Time Systems (DDS) is an Object Management Group (OMG) M2M middleware standard that aims to enable scalable, real-time, dependable, high performance and interoperable data exchanges between publishers and subscribers. DDS is designed to address the needs of mission- and business-critical applications like financial trading, air traffic control, smart grid management, and other big data applications. The standard is being increasingly used in a wide range of industries including Intelligent Systems. Among various applications, DDS is currently being used in smartphone operating systems, transportation systems and vehicles, software-defined radio, and by healthcare providers. DDS is positioned to play a large role in the Internet of Things.”

DDS simplifies complex network programming. “It implements a publish/subscribe model for sending and receiving data, events, and commands among the nodes. Nodes that are producing information (publishers) create "topics" (e.g., temperature, location, pressure) and publish "samples." DDS takes care of delivering the sample to all subscribers that declare an interest in that topic.

DDS handles all the transfer chores: message addressing, data marshalling and demarshalling (so subscribers can be on different platforms than the publisher), delivery, flow control, retries, etc. Any node can be a publisher, subscriber, or both simultaneously. The DDS publish-subscribe model virtually eliminates complex network programming for distributed applications.

DDS supports mechanisms that go beyond the basic publish-subscribe model. The key benefit is that applications that use DDS for their communications are entirely decoupled. Very little design time has to be spent on how to handle their mutual interactions. In particular, the applications never need information about the other participating applications, including their existence or locations. DDS automatically handles all aspects of message delivery, without requiring any intervention from the user applications, including:

determining who should receive the messages

where recipients are located

what happens if messages cannot be delivered

This is made possible by the fact that DDS allows the user to specify Quality of Service (QoS) parameters as a way to configure automatic-discovery mechanisms and specify the behaviour used when sending and receiving messages. The mechanisms are configured up-front and require no further effort on the user's part. By exchanging messages in a completely anonymous manner, DDS greatly simplifies distributed application design and encourages modular, well-structured programs.


DDS also automatically handles hot-swapping redundant publishers if the primary fails. Subscribers always get the sample with the highest priority whose data is still valid (that is, whose publisher-specified validity period has not expired). It also automatically switches back to the primary when it recovers.” [273]
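The topic-based fan-out at the heart of the model above can be sketched in plain Java (a conceptual stand-in only, not a DDS API; discovery, QoS and transport are exactly the parts a real DDS implementation adds, and the class and topic names are invented):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Conceptual sketch: samples published on a topic are delivered to every
// subscriber that declared an interest in that topic.
public class TopicBus {
    private final Map<String, List<Consumer<Double>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<Double> subscriber) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(subscriber);
    }

    public void publish(String topic, double sample) {
        // Deliver the sample to all subscribers of the topic; the publisher
        // never learns who (if anyone) received it.
        subscribers.getOrDefault(topic, List.of()).forEach(s -> s.accept(sample));
    }

    public static void main(String[] args) {
        TopicBus bus = new TopicBus();
        bus.subscribe("temperature", v -> System.out.println("received " + v));
        bus.publish("temperature", 21.5);   // delivered to the subscriber
        bus.publish("pressure", 1013.0);    // no subscribers, silently dropped
    }
}
```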

6.1.1.6 System and Data Integration

6.1.1.6.1 SOAP

“SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks. It relies on XML Information Set for its message format, and usually relies on other Application Layer protocols, most notably Hypertext Transfer Protocol (HTTP) or Simple Mail Transfer Protocol (SMTP), for message negotiation and transmission.

SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built. This XML based protocol consists of three parts: an envelope, which defines what is in the message and how to process it, a set of encoding rules for expressing instances of application-defined datatypes, and a convention for representing procedure calls and responses. SOAP has three major characteristics: Extensibility (security and WS-routing are among the extensions under development), Neutrality (SOAP can be used over any transport protocol such as HTTP, SMTP, TCP, or JMS) and Independence (SOAP allows for any programming model). As an example of how SOAP procedures can be used, a SOAP message could be sent to a web site that has web services enabled, such as a real-estate price database, with the parameters needed for a search. The site would then return an XML-formatted document with the resulting data, e.g., prices, location, features. With the data being returned in a standardized machine-parsable format, it can then be integrated directly into a third-party web site or application.

The SOAP architecture consists of several layers of specifications for: message format, Message Exchange Patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and the envelope/header/body from elsewhere (probably from WDDX).” [273]
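A request for the real-estate search example above could be wrapped in an envelope like the following sketch (SOAP 1.1 envelope namespace; the PriceSearch element, its namespace and the parameter values are invented for illustration):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <m:PriceSearch xmlns:m="http://example.org/realestate">
      <m:City>Bilbao</m:City>
      <m:MaxPrice>250000</m:MaxPrice>
    </m:PriceSearch>
  </soap:Body>
</soap:Envelope>
```

The response would come back as another envelope whose body carries the XML-formatted result data.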

6.1.1.6.2 NGSI

The NGSI-9 and NGSI-10 interfaces from the Open Mobile Alliance (OMA) provide facilities to manage Context Information. Through these interfaces, a Context Management component will provide its context management services to actors outside of a single network. These actors can:

provide Context Information (update operations)

consume Context Information (query and subscribe/notify operations)

discover context entities through query or notifications (register and discover operations)

The basic elements of the NGSI Context Management Information Model are the following:

Entities: The central aspect of the NGSI-9/10 information model is the concept of entities. Entities are the virtual representation of all kinds of physical objects in the real world. Examples for physical entities are tables, rooms, or persons. Virtual entities have an identifier and a type. For example, a virtual entity representing a person named “John” could have the identifier “John” and the type “person”.

32 http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FI-WARE_NGSI-9_Open_RESTful_API_Specification

33 http://www.openmobilealliance.org/Technical/release_program/docs/CopyrightClick.aspx?pck=NGSI&file=V1_0-20101207-C/OMA-TS-NGSI_Context_Management-V1_0-20100803-C.pdf and http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FI-WARE_NGSI-10_Open_RESTful_API_Specification


Attributes: Any available information about physical entities is expressed in the form of attributes of virtual entities. Attributes have a name and a type as well. For example, the body temperature of John would be represented as an attribute having the name “body_temperature” and the type “temperature”. Values of such attributes are contained in value containers. This kind of container does not only consist of the actual attribute value, but also contains a set of metadata. Metadata is data about data; in our body temperature example this metadata could represent the time of measurement, the measurement unit, and other information about the attribute value.

Attribute Domains: There also is a concept of attribute domains in OMA NGSI-9/10. An attribute domain logically groups together a set of attributes. For example, the attribute domain "health_status" could comprise the attributes "body_temperature" and "blood_pressure". The interfaces defined in NGSI-10 include operations to exchange information about arbitrary sets of entities and their attribute values, while NGSI-9 consists of functions for exchanging information about the availability of information about entities.

Context Elements: The data structure used for exchanging information about entities is the context element. A context element contains information about multiple attributes of one entity. The domain of these attributes can also be specified inside the context element; in this case all provided attribute values have to belong to that domain. Formally, a context element contains the following information:

o an entity id and type

o a list of triplets <attribute name, attribute type, attribute value> holding information about attributes of the entity

o (optionally) the name of an attribute domain

o (optionally) a list of triplets <metadata name, metadata type, metadata value> that apply to all attribute values of the given domain

OMA NGSI defines two interfaces for exchanging information based on the information model. The interface OMA NGSI-10 is used for exchanging information about entities and their attributes, i.e., attribute values and metadata. The interface OMA NGSI-9 is used for availability information about entities and their attributes. Here, instead of exchanging attribute values, information about which provider can provide certain attribute values is exchanged.
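The context element structure listed above can be pictured with a simplified fragment (illustrative only, not the normative OMA/FI-WARE XML schema; the element names and values follow the body-temperature example):

```xml
<contextElement>
  <entityId type="person">John</entityId>
  <attributeDomainName>health_status</attributeDomainName>
  <attribute>
    <name>body_temperature</name>
    <type>temperature</type>
    <value>36.8</value>
    <metadata>
      <name>unit</name>
      <type>string</type>
      <value>celsius</value>
    </metadata>
  </attribute>
</contextElement>
```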

6.1.1.6.3 OSGi

The Open Services Gateway initiative (OSGi) specification is the established standard for modularization in the Java programming language. Roughly, it consists of a runtime container and modules that are deployable in that container. Each module carries a description of its public packages and of its dependencies on other modules, and the runtime container enforces class-loader isolation between modules.

The OSGi “specifications define the OSGi Service Platform, which consists of two pieces: the OSGi framework and a set of standard service definitions. The OSGi framework, which sits on top of a Java virtual machine, is the execution environment for services. The OSGi framework was originally conceived to be used inside restricted environments, such as a set-top box. OSGi can, however, be used in other domains, for example as the infrastructure underlying release 3.0 of the Eclipse IDE...

The framework can be divided into two main elements:

A services platform

A deployment infrastructure

A services platform is defined as a software platform that supports the service orientation interaction depicted in Figure 25. This interaction involves three main actors: service providers, service requesters and a service registry, although only the service registry belongs to the services platform. In the service orientation interaction, service providers publish service descriptions, and service requesters discover services and bind to the service providers. Publication and discovery are based on a service description.

Figure 25: Service orientation interaction

In the context of OSGi, a service is described as a Java class or interface, the service interface, along with a variable number of attributes, the service properties, which are name and value pairs. Service properties allow different service providers that provide services with the same service interface to be differentiated. The service registry allows service providers to be discovered through queries formulated in an LDAP syntax. Additionally, notification mechanisms allow service requesters to receive events signalling changes in the service registry; these changes include the publication or retrieval of a particular service.

In OSGi, service providers and requesters are part of an entity called a bundle that is both a logical as well as physical entity. Service interfaces are implemented by objects created by the bundle. In standard OSGi, the bundle is responsible for run-time service dependency management activities which include publication, discovery and binding as well as adapting to changes resulting from dynamic availability (arrival or departure) of services that are bound to the bundle.

For the deployment infrastructure, a bundle corresponds to a delivery and deployment unit that is materialized by a JAR file containing code and resources (e.g., images, libraries) along with a file that contains information about the bundle, the manifest file. The OSGi framework provides mechanisms to support continuous deployment activities (for example a local console or an administration web page). These deployment activities include installation, removal, update, starting (activation) and stopping (de-activation) of a physical bundle. Once a bundle is installed in the platform, it can be activated if the deployment dependencies associated with the bundle are fulfilled.
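A bundle's manifest file might carry headers like the following sketch (the header names are standard OSGi manifest headers; the bundle and package names are invented):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.timesource
Bundle-Version: 1.0.0
Bundle-Activator: org.example.timesource.Activator
Import-Package: org.osgi.framework
Export-Package: org.example.timesource.api
```

Import-Package and Export-Package declare the deployment dependencies and public packages that the framework checks before the bundle can be activated.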

Figure 26: Physical bundle life-cycle

Deployment activities are realized according to a well-defined series of states depicted in Figure 26; these states correspond to the physical bundle life-cycle. The activation or de-activation of a physical bundle results in the creation or destruction of a unique logical bundle, materialized by an instance from a class inside the bundle called a bundle activator. When the instance is created, the execution environment calls an activation method that signals the logical bundle that it is active. When the physical bundle is de-activated, the execution environment calls a de-activation method. When the logical bundle is active, it can publish or discover services and bind with other bundles by accessing the framework's services registry. It can also be notified of changes that occur in the framework by subscribing as an event listener.” [279]

6.1.1.6.4 Dependency Injection

Dependency Injection provides a mechanism that allows greater flexibility in developing and deploying systems composed of multiple interacting components. It provides the ability to annotate a component's dependencies on other system components, or on components external to the system. This makes it easier to modify components that other components rely on, and to reconfigure which components provide which services within the system.

The Dependency Injection JSR 330 specifies a means for obtaining objects in such a way as to maximize reusability, testability and maintainability compared to traditional approaches such as constructors, factories, and service locators (e.g., JNDI). This process, known as dependency injection, is beneficial to most nontrivial applications.

Many types depend on other types. For example, a Stopwatch might depend on a TimeSource. The types on which a type depends are known as its dependencies. The process of finding an instance of a dependency to use at run time is known as resolving the dependency. If no such instance can be found, the dependency is said to be unsatisfied, and the application is broken.

In the absence of dependency injection, an object can resolve its dependencies in a few ways. It can invoke a constructor, hard-wiring an object directly to its dependency's implementation and life cycle:

class Stopwatch {
    final TimeSource timeSource;
    Stopwatch() {
        timeSource = new AtomicClock(...);
    }
    void start() { ... }
    long stop() { ... }
}

If more flexibility is needed, the object can call out to a factory or service locator:

class Stopwatch {
    final TimeSource timeSource;
    Stopwatch() {
        timeSource = DefaultTimeSource.getInstance();
    }
    void start() { ... }
    long stop() { ... }
}

In deciding between these traditional approaches to dependency resolution, a programmer must make trade-offs. Constructors are more concise but restrictive. Factories decouple the client and implementation to some extent but require boilerplate code. Service locators decouple even further but reduce compile time type safety. All three approaches inhibit unit testing. For example, if the programmer uses a factory, each test against code that depends on the factory will have to mock out the factory and remember to clean up after itself or else risk side effects:

void testStopwatch() {
    TimeSource original = DefaultTimeSource.getInstance();
    DefaultTimeSource.setInstance(new MockTimeSource());
    try {
        // Now, we can actually test Stopwatch.
        Stopwatch sw = new Stopwatch();
        ...
    } finally {
        DefaultTimeSource.setInstance(original);
    }
}

In practice, supporting this ability to mock out a factory results in even more boilerplate code. Tests that mock out and clean up after multiple dependencies quickly get out of hand. To make matters worse, a programmer must predict accurately how much flexibility will be needed in the future or else suffer the consequences. If a programmer initially elects to use a constructor but later decides that more flexibility is required, the programmer must replace every call to the constructor. If the programmer errs on the side of caution and writes factories up front, it may result in a lot of unnecessary boilerplate code, adding noise, complexity, and error-proneness.

Dependency injection addresses all of these issues. Instead of the programmer calling a constructor or factory, a tool called a dependency injector passes dependencies to objects:

class Stopwatch {
    final TimeSource timeSource;
    @Inject Stopwatch(TimeSource timeSource) {
        this.timeSource = timeSource;
    }
    void start() { ... }
    long stop() { ... }
}

The injector further passes dependencies to other dependencies until it constructs the entire object graph. For example, suppose the programmer asked an injector to create a StopwatchWidget instance:

/** GUI for a Stopwatch */
class StopwatchWidget {
    @Inject StopwatchWidget(Stopwatch sw) { ... }
    ...
}

The injector might:

1. Find a TimeSource

2. Construct a Stopwatch with the TimeSource

3. Construct a StopwatchWidget with the Stopwatch

This leaves the programmer's code clean, flexible, and relatively free of dependency-related infrastructure.
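The three steps can be hand-wired in plain Java to show exactly what the injector automates (TimeSource, Stopwatch and StopwatchWidget are the hypothetical types from the examples above; SystemTimeSource is an invented implementation):

```java
// Hand-wired sketch of the object graph an injector would build automatically.
public class Wiring {
    interface TimeSource { long nanoTime(); }

    static class SystemTimeSource implements TimeSource {
        public long nanoTime() { return System.nanoTime(); }
    }

    static class Stopwatch {
        final TimeSource timeSource;
        Stopwatch(TimeSource timeSource) { this.timeSource = timeSource; }
    }

    static class StopwatchWidget {
        final Stopwatch stopwatch;
        StopwatchWidget(Stopwatch stopwatch) { this.stopwatch = stopwatch; }
    }

    public static void main(String[] args) {
        // 1. Find a TimeSource
        TimeSource timeSource = new SystemTimeSource();
        // 2. Construct a Stopwatch with the TimeSource
        Stopwatch stopwatch = new Stopwatch(timeSource);
        // 3. Construct a StopwatchWidget with the Stopwatch
        StopwatchWidget widget = new StopwatchWidget(stopwatch);
        System.out.println(widget.stopwatch.timeSource != null);
    }
}
```

An injector performs this composition from the @Inject annotations instead of hand-written wiring code, which is what keeps the classes themselves free of construction logic.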

In unit tests, the programmer can now construct objects directly (without an injector) and pass in mock dependencies. The programmer no longer needs to set up and tear down factories or service locators in each test. This greatly simplifies our unit test:

void testStopwatch() {
    Stopwatch sw = new Stopwatch(new MockTimeSource());
    ...
}

The total decrease in unit-test complexity is proportional to the product of the number of unit tests and the number of dependencies.

Programmers annotate constructors, methods, and fields to advertise their injectability (constructor injection is demonstrated in the examples above). A dependency injector identifies a class's dependencies by inspecting these annotations, and injects the dependencies at runtime. Moreover, the injector can verify that all dependencies have been satisfied at build time. A service locator, by contrast, cannot detect unsatisfied dependencies until run time.

A programmer configures a dependency injector so it knows what to inject. Different configuration approaches make sense in different contexts. One approach is to search the classpath for dependency implementations, avoiding the need for the programmer to write explicit code. This approach could be useful in quick-and-dirty prototypes. A programmer working on a large, long-lived application might prefer a more explicit, compartmentalized approach. For example, the programmer could write XML that tells the injector to inject an EJB client proxy named "NetworkTimeSource" when a class needs a TimeSource:

<binding type="TimeSource" ejb="NetworkTimeSource"/>

It often makes sense to use more than one configuration mechanism in the same application. For example, a quick-and-dirty prototype might grow into a real application, and the programmer could incrementally migrate to a more maintainable configuration. As another example, a program might configure some resources explicitly, while others are configured automatically based on an external XML file or database.

This JSR standardizes a low-level kernel API that can be used directly by a user or as an integration point for higher level configuration approaches. This approach enables portable Java applications without quashing innovation in dependency injector configuration. [280]
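The kernel idea, an injector that discovers a class's dependencies from its annotated constructor and builds the object graph recursively, can be sketched in plain Java. This is a toy sketch, not JSR-330 itself: the Inject annotation below is a local stand-in for javax.inject.Inject, and ToyInjector, TimeSource and SystemTimeSource are illustrative names based on the examples above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// Local stand-in for javax.inject.Inject (assumed absent from the classpath).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.CONSTRUCTOR)
@interface Inject {}

interface TimeSource { long nanoTime(); }

class SystemTimeSource implements TimeSource {
    public long nanoTime() { return System.nanoTime(); }
}

class Stopwatch {
    final TimeSource timeSource;
    @Inject Stopwatch(TimeSource timeSource) { this.timeSource = timeSource; }
}

class ToyInjector {
    private final Map<Class<?>, Class<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> type, Class<? extends T> impl) { bindings.put(type, impl); }

    // Build an instance: prefer the @Inject constructor, resolving each
    // parameter recursively so dependencies of dependencies are injected too.
    @SuppressWarnings("unchecked")
    <T> T getInstance(Class<T> type) {
        Class<?> impl = bindings.getOrDefault(type, type);
        try {
            for (Constructor<?> c : impl.getDeclaredConstructors()) {
                if (c.isAnnotationPresent(Inject.class)) {
                    Class<?>[] params = c.getParameterTypes();
                    Object[] args = new Object[params.length];
                    for (int i = 0; i < params.length; i++) {
                        args[i] = getInstance(params[i]);
                    }
                    return (T) c.newInstance(args);
                }
            }
            // No @Inject constructor: fall back to the default constructor.
            return (T) impl.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot inject " + type, e);
        }
    }

    public static void main(String[] args) {
        ToyInjector injector = new ToyInjector();
        injector.bind(TimeSource.class, SystemTimeSource.class);
        Stopwatch sw = injector.getInstance(Stopwatch.class);
        System.out.println("Injected: " + sw.timeSource.getClass().getSimpleName());
    }
}
```

A real JSR-330 injector adds scopes, qualifiers and build-time validation on top of this basic reflective resolution.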

6.1.1.6.5 JAX-RS

The JAX-RS API provides developers with the ability to quickly develop Java based Web applications and components that utilise RESTful APIs. The JAX-RS API provides a high level easy-to-use API for developers to write RESTful web services independent of the underlying technology. The API reduces the development effort in utilising RESTful Web services using the Java Platform by reducing or eliminating the requirement of using low-level APIs like Java Servlets.

The Java JSR 311 specification provides an API that enables developers to rapidly build Web applications in Java that are characteristic of the best designed parts of the Web. This JSR provides an API for REST (Representational State Transfer) support in the Java Platform. Lightweight, RESTful approaches are emerging as a popular alternative to SOAP-based technologies for the deployment of services on the internet. Prior to this API, building RESTful Web services using the Java Platform was significantly more complex than building SOAP-based services and required using low-level APIs like Servlets or the dynamic JAX-WS APIs; correct implementation required a high level of HTTP knowledge on the developer's part.

This JSR aims to provide a high-level, easy-to-use API for developers to write RESTful web services independent of the underlying technology, and allows these services to run on top of the Java EE or Java SE platforms. The goal of this JSR is to provide an easy-to-use, declarative style of programming using annotations for developers to write RESTful Web Services, while also enabling low-level access in cases where needed by the application. [281]
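As a sketch of this declarative, annotation-driven style, a resource class might look like the following; this assumes the JAX-RS API (the javax.ws.rs package) on the classpath, and the resource name and paths are illustrative:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/orders")
public class OrderResource {

    // The JAX-RS runtime routes GET /orders/{id} to this method; no
    // servlet code or manual HTTP parsing is required.
    @GET
    @Path("/{id}")
    @Produces("application/json")
    public String getOrder(@PathParam("id") String id) {
        return "{\"id\": \"" + id + "\"}";
    }
}
```

The annotations alone describe the URI template, HTTP method and media type, which is what the JSR means by reducing the need for low-level APIs like Servlets.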

6.1.1.6.6 JAX-WS

The JAX-WS API is the de facto standard API for building Web Services in Java. It provides a standardised facility for web service publication and invocation using XML as the basis for communication between Web-based services and clients. SOAP is an example of an XML-based message format, and WSDL uses an XML format for describing network services. The JAX-WS API enables developers to easily utilise XML-based Web Services frameworks.

XML is a platform-independent means of representing structured information. XML Web Services use XML as the basis for communication between Web-based services and clients of those services and inherit XML's platform independence. SOAP describes one such XML-based message format and “defines, using XML technologies, an extensible messaging framework containing a message construct that can be exchanged over a variety of underlying protocols.” WSDL is “an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information.” WSDL can be considered the de facto service description language for XML Web Services.

JAX-RPC defined APIs and conventions for supporting RPC oriented XML Web Services in the Java platform and interoperability between JAX-RPC implementations and with services implemented using other technologies. JAX-WS 2.0 is a follow-on to JAX-RPC 1.1 providing:

- Support for SOAP 1.2 while maintaining support for SOAP 1.1
- Support for WSDL 2.0 while maintaining support for WSDL 1.1
- Use of JAXB for XML-to-Java mappings
- Java annotations to simplify the most common development scenarios for both clients and servers
- Support for client-side asynchronous operations
- Simplified client and service access to the messages underlying an exchange
- Support for message-based session management
- New mechanisms to produce fully portable clients
- Alignment with other updated JSRs related to Web Services Metadata (JSR 181), Java to WSDL mapping (JSR 109), Security (JSR 183) and others

JAX-WS provides the standard mechanisms for supporting Web Services for Java based systems. [282]
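The annotation-driven development scenario listed above can be sketched as follows; this assumes the javax.jws annotations standardised alongside JAX-WS (bundled with Java SE 6 through 8), and the service name is illustrative:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class TimeService {

    // JAX-WS derives the WSDL portType/operation and the SOAP binding
    // for this method from the annotations alone.
    @WebMethod
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
```

On Java SE such an endpoint can be published without an application server via javax.xml.ws.Endpoint.publish, passing a URL and a TimeService instance.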

6.1.1.7 Security

6.1.1.7.1 Authentication

6.1.1.7.1.1 SAML

“Security Assertion Markup Language (SAML) is an XML-based open standard data format for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. SAML is a product of the OASIS Security Services Technical Committee. SAML dates from 2001; the most recent update of SAML is from 2005.

The single most important problem that SAML addresses is the web browser single sign-on (SSO) problem. Single sign-on solutions are abundant at the intranet level (using cookies, for example) but extending these solutions beyond the intranet has been problematic and has led to the proliferation of non-interoperable proprietary technologies. (Another more recent approach to addressing the browser SSO problem is the OpenID protocol.)

The SAML specification defines three roles: the principal (typically a user), the identity provider (aka IdP), and the service provider (aka SP). In the use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an identity assertion from the identity provider. On the basis of this assertion, the service provider can make an access control decision - in other words it can decide whether to perform some service for the connected principal.

Before delivering the identity assertion to the SP, the IdP may request some information from the principal - such as a user name and password - in order to authenticate the principal. SAML specifies the assertions between the three parties: in particular, the messages that assert identity that are passed from the IdP to the SP. In SAML, one identity provider may provide SAML assertions to many service providers. Conversely, one SP may rely on and trust assertions from many independent IdPs.

SAML does not specify the method of authentication at the identity provider; it may use a username/password, multifactor authentication, etc. A directory service, which allows users to login with a user name and password, is a typical source of authentication tokens (i.e., passwords) at an identity provider. Any of the popular common internet social services also provide identity services that in theory could be used to support SAML exchanges.” [273]
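The identity assertion passed from the IdP to the SP is an XML document in the SAML 2.0 assertion namespace. The following is a schematic sketch: the issuer, subject and timestamps are illustrative, and a real assertion additionally carries an XML digital signature and validity conditions:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_example-assertion-id" Version="2.0"
                IssueInstant="2014-03-31T12:00:00Z">
  <saml:Issuer>https://idp.example.org/saml</saml:Issuer>
  <saml:Subject>
    <saml:NameID>[email protected]</saml:NameID>
  </saml:Subject>
  <!-- The IdP states how and when the principal was authenticated -->
  <saml:AuthnStatement AuthnInstant="2014-03-31T12:00:00Z"/>
</saml:Assertion>
```

On receiving such an assertion, the SP verifies the issuer and signature and then makes its access control decision for the named subject.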


6.1.1.7.1.2 OpenID

“OpenID34 is an open standard of the consortium OpenID Foundation that allows users to be authenticated by certain co-operating sites (known as Relying Parties or RP) using a third party service, eliminating the need for webmasters to provide their own ad hoc systems and allowing users to consolidate their digital identities.

Users may create accounts with their preferred OpenID identity providers, and then use those accounts as the basis for signing on to any website which accepts OpenID authentication. The OpenID standard provides a framework for the communication that must take place between the identity provider and the OpenID acceptor (the "relying party"). An extension to the standard (the OpenID Attribute Exchange) facilitates the transfer of user attributes, such as name and gender, from the OpenID identity provider to the relying party (each relying party may request a different set of attributes, depending on its requirements).

The OpenID protocol does not rely on a central authority to authenticate a user's identity. Moreover, neither services nor the OpenID standard may mandate a specific means by which to authenticate users, allowing for approaches ranging from the common (such as passwords) to the novel (such as smart cards or biometrics). The term OpenID may also refer to an identifier as specified in the OpenID standard; these identifiers take the form of a unique URI, and are managed by some 'OpenID provider' that handles authentication.

OpenID authentication is now used and provided by several large websites. Providers include Google, Yahoo!, PayPal, BBC, AOL, LiveJournal, MySpace, IBM, Steam, Sherdog, Orange and VeriSign.” [273]

6.1.1.7.1.3 OAuth

“OAuth35 is an open standard for authorization. OAuth provides a method for clients to access server resources on behalf of a resource owner (such as a different client or an end-user). It also provides a process for end-users to authorize third-party access to their server resources without sharing their credentials (typically, a username and password pair), using user-agent redirections.” [273]

“The OAuth protocol was originally created by a small community of web developers from a variety of websites and other Internet services who wanted to solve the common problem of enabling delegated access to protected resources. The resulting OAuth protocol was stabilized at version 1.0 in October 2007, and revised in June 2009 (Revision A)...

In the traditional client-server authentication model, the client uses its credentials to access its resources hosted by the server. With the increasing use of distributed web services and cloud computing, third-party applications require access to these server-hosted resources.

OAuth introduces a third role to the traditional client-server authentication model: the resource owner. In the OAuth model, the client (which is not the resource owner, but is acting on its behalf) requests access to resources controlled by the resource owner, but hosted by the server. In addition, OAuth allows the server to verify not only the resource owner authorization, but also the identity of the client making the request.

OAuth provides a method for clients to access server resources on behalf of a resource owner (such as a different client or an end-user). It also provides a process for end-users to authorize third-party access to their server resources without sharing their credentials (typically, a username and password pair), using user-agent redirections.

For example, a web user (resource owner) can grant a printing service (client) access to her private photos stored at a photo sharing service (server), without sharing her username and password with the printing service. Instead, she authenticates directly with the photo sharing service which issues the printing service delegation-specific credentials.

In order for the client to access resources, it first has to obtain permission from the resource owner. This permission is expressed in the form of a token and matching shared-secret. The purpose of the token is to make it unnecessary for the resource owner to share its credentials with the client. Unlike the resource owner credentials, tokens can be issued with a restricted scope and limited lifetime, and revoked independently.

34 http://openid.net/specs/openid-authentication-2_0.html
35 http://tools.ietf.org/html/rfc6749

This specification consists of two parts. The first part defines a redirection-based user-agent process for end-users to authorize client access to their resources, by authenticating directly with the server and provisioning tokens to the client for use with the authentication method. The second part defines a method for making authenticated HTTP requests using two sets of credentials, one identifying the client making the request, and a second identifying the resource owner on whose behalf the request is being made.” [283]
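Mapping the printing-service example onto the redirection-based flow of the OAuth 2.0 framework (RFC 6749, cited in the footnote above), the two steps look roughly like this; the hostnames, client identifier and code value are illustrative:

```http
GET /authorize?response_type=code&client_id=printing-service&redirect_uri=https%3A%2F%2Fprinter.example%2Fcallback HTTP/1.1
Host: photos.example.net

POST /token HTTP/1.1
Host: photos.example.net
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=CODE-FROM-REDIRECT&redirect_uri=https%3A%2F%2Fprinter.example%2Fcallback
```

The user authenticates at photos.example.net during the first request and is redirected back with a short-lived code; the printing service then exchanges that code for an access token, so the user's password never reaches it.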

6.1.1.7.2 Access control

6.1.1.7.2.1 XACML

“XACML stands for eXtensible Access Control Markup Language. The standard defines a declarative access control policy language implemented in XML and a processing model describing how to evaluate authorization requests according to the rules defined in policies.

As a published standard specification, one of the goals of XACML is to promote common terminology and interoperability between authorization implementations by multiple vendors. XACML is primarily an Attribute Based Access Control system (ABAC), where attributes (bits of data) associated with a user or action or resource are inputs into the decision of whether a given user may access a given resource in a particular way. Role-based access control (RBAC) can also be implemented in XACML as a specialization of ABAC.

The XACML model supports and encourages the separation of the authorization decision from the point of use. When authorization decisions are baked into client applications (or based on local machine userids and Access Control Lists (ACLs)), it is very difficult to update the decision criteria when the governing policy changes. When the client is decoupled from the authorization decision, authorization policies can be updated on the fly and affect all clients immediately.

Version 2.0 was ratified by OASIS standards organization on February 1, 2005. The first committee specification of XACML 3.0 was released August 10, 2010. The latest version, XACML 3.0, was standardized in January 2013.” [273]

6.1.1.7.2.2 LDAP

“The Lightweight Directory Access Protocol (LDAP) is an application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network.

Directory services may provide any organized set of records, often with a hierarchical structure, such as a corporate email directory. Similarly, a telephone directory is a list of subscribers with an address and a phone number.

LDAP is specified in a series of Internet Engineering Task Force (IETF) Standard Track Request for Comments (RFCs), using the description language ASN.1. The latest specification is Version 3, published as RFC 4511.

For example, here's an LDAP search translated into plain English: "Search in the company email directory for all people located in Boston whose name contains 'Jesse' that have an email address. Please return their full name, email, title, and description."
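The plain-English search above corresponds to an RFC 4515 search filter plus a requested-attribute list; the attribute names (`l` for locality, `cn` for common name) assume a standard inetOrgPerson-style directory schema:

```text
Filter:     (&(l=Boston)(cn=*Jesse*)(mail=*))
Attributes: cn, mail, title, description
```

The `&` combines the three conditions, `*Jesse*` is a substring match on the name, and `mail=*` simply requires that an email attribute is present.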

A common usage of LDAP is to provide a "single sign-on" where one password for a user is shared between many services, such as applying a company login code to web pages (so that staff log in only once to company computers, and then are automatically logged in to the company intranet).” [273]

LDAP also provides the mechanisms for maintaining the attributes associated with users and devices, which can be used in determining access privileges to secure resources.

6.1.1.7.3 Secure communications

6.1.1.7.3.1 S/MIME

“S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption and signing of MIME data... S/MIME provides the following cryptographic security services for electronic messaging applications: authentication, message integrity, non-repudiation of origin (using digital signatures), privacy and data security (using encryption). S/MIME specifies the MIME type application/pkcs7-mime (smime-type "enveloped-data") for data enveloping (encrypting) where the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object which subsequently is inserted into an application/pkcs7-mime MIME entity.

Before S/MIME can be used in any of the above applications, one must obtain and install an individual key/certificate either from one's in-house certificate authority (CA) or from a public CA. The accepted best practice is to use separate private keys (and associated certificates) for signature and for encryption, as this permits escrow of the encryption key without compromise to the non-repudiation property of the signature key. Encryption requires having the destination party's certificate on store (which is typically automatic upon receiving a message from the party with a valid signing certificate). While it is technically possible to send a message encrypted (using the destination party certificate) without having one's own certificate to digitally sign, in practice, the S/MIME clients will require you to install your own certificate before they allow encrypting to others.

A typical basic ("class 1") personal certificate verifies the owner's "identity" only insofar as it declares that the sender is the owner of the "From:" email address in the sense that the sender can receive email sent to that address, and so merely proves that an email received really did come from the "From:" address given. It does not verify the person's name or business name. If a sender wishes to enable email recipients to verify the sender's identity in the sense that a received certificate name carries the sender's name or an organization's name, the sender needs to obtain a certificate ("class 2") from a CA who carries out a more in-depth identity verification process, and this involves making inquiries about the would-be certificate holder...

Depending on the policy of the CA, your certificate and all its contents may be posted publicly for reference and verification. This makes your name and email address available for all to see and possibly search for. Other CAs only post serial numbers and revocation status, which does not include any of the personal information. The latter, at a minimum, is mandatory to uphold the integrity of the public key infrastructure.” [273]

6.1.1.7.3.2 TLS

“Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide communication security over the Internet. They use asymmetric cryptography for authentication of key exchange, symmetric encryption for confidentiality and message authentication codes for message integrity. Several versions of the protocols are in widespread use in applications such as web browsing, electronic mail, Internet faxing, instant messaging and voice-over-IP (VoIP).

In the TCP/IP model view, TLS and SSL encrypt the data of network connections at a lower sublayer of its application layer. In OSI model equivalences, TLS/SSL is initialized at layer 5 (the session layer) then works at layer 6 (the presentation layer): first the session layer has a handshake using an asymmetric cipher in order to establish cipher settings and a shared key for that session; then the presentation layer encrypts the rest of the communication using a symmetric cipher and that session key. In both models, TLS and SSL work on behalf of the underlying transport layer, whose segments carry encrypted data.

TLS is an IETF standards track protocol, first defined in 1999 and last updated in RFC 5246 (August 2008) and RFC 6176 (March 2011). It is based on the earlier SSL specifications (1994, 1995, 1996)... The TLS protocol allows client-server applications to communicate across a network in a way designed to prevent eavesdropping and tampering.

Since protocols can operate either with or without TLS (or SSL), it is necessary for the client to indicate to the server whether it wants to set up a TLS connection or not. There are two main ways of achieving this; one option is to use a different port number for TLS connections (for example port 443 for HTTPS). The other is to use the regular port number and have the client request that the server switch the connection to TLS using a protocol specific mechanism (for example STARTTLS for mail and news protocols).

Once the client and server have decided to use TLS, they negotiate a stateful connection by using a handshaking procedure. During this handshake, the client and server agree on various parameters used to establish the connection's security... If any one of the steps fails during the negotiation process, the TLS handshake fails and the connection is not created.” [273]
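The parameters offered during the handshake can be inspected from Java: the JSSE API bundled with the JDK reports which protocol versions and cipher suites the default client context would propose to a server. The class name below is illustrative:

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

class TlsDefaults {
    /** Returns the TLS protocol versions the default JSSE context would offer. */
    static String[] defaultProtocols() {
        try {
            return SSLContext.getDefault().getDefaultSSLParameters().getProtocols();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Prints something like [TLSv1.3, TLSv1.2], depending on the JDK.
        System.out.println("Enabled protocols: " + Arrays.toString(defaultProtocols()));
    }
}
```

These are the versions the client lists in its ClientHello; the server then selects one during the negotiation described above.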


6.1.2 PRODUCT AND ENTITY IDENTIFICATION STANDARDS

6.1.2.1 Entity Identification

A prerequisite for the allocation of GS1 Identification Keys is a GS1 Global Company Prefix (GCP). The GCP is allocated by GS1; it is part of every GS1 Identification Key and guarantees its uniqueness. The Global Trade Item Number (GTIN) identifies each product or service by a unique number that is generated based on the GCP of the brand owner, brand co-operative or producer. The GTIN structure is 14 digits long, with N1 and N14 being the indicator and check digit respectively. The remaining digits are used by the GS1 Company Prefix and the item reference (with a variable number of digits assigned to each). Depending on the application, the leading digits N1 up to N6 can assume the value 0.

Figure 27: GTIN Structure

The Global Location Number (GLN) is the worldwide unique identification of each company or physical location within a company.

Figure 28: GLN Structure

The Global Individual Asset Identifier (GIAI) is one of the two GS1 keys for asset identification. GIAI is used to identify fixed assets. This could be a computer, a desk or a component part of an aircraft. It enables assets to be individually recorded as part of a fixed asset inventory control system. In simple terms this means any individual asset within a company of any value that needs to be identified uniquely.

The GIAI is a unique identification key that can be used globally to identify the asset. Detailed information regarding the asset will be recorded in a database and the GIAI is the key that provides the link to that information. It may be produced as a GS1-128 bar code, or held in a GS1 EPC tag or used in a database.

The function of a GIAI is to provide an identification point which can be used to retrieve information held in a database associated with that particular asset.

Figure 29: GIAI Structure

The above is an extract from: http://www.gs1.com/barcodes/technical/id_keys.
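The check digit N14 mentioned above is computed with the standard GS1 mod-10 algorithm: weight the digits of the body 3, 1, 3, 1, ... starting from the rightmost, sum them, and take the difference to the next multiple of ten. A small helper (the class name is illustrative, and the algorithm is restated here rather than quoted from the extract):

```java
class Gs1CheckDigit {
    /**
     * Computes the GS1 check digit for the given body (the key without
     * its final check digit), e.g. the first 13 digits of a GTIN-14.
     */
    static int checkDigit(String body) {
        int sum = 0;
        for (int i = 0; i < body.length(); i++) {
            int digit = body.charAt(body.length() - 1 - i) - '0';
            sum += digit * (i % 2 == 0 ? 3 : 1); // weights 3,1,3,1,... from the right
        }
        return (10 - sum % 10) % 10;
    }

    public static void main(String[] args) {
        // GTIN-13 body 400638133393 yields check digit 1, i.e. 4006381333931.
        System.out.println(checkDigit("400638133393"));
    }
}
```

The same routine covers GTIN-8/12/13/14, GLN and the numeric part of other GS1 keys, since they all share this check-digit scheme.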


6.1.3 SEMANTIC TECHNOLOGIES STANDARDS

6.1.3.1 Information exchange and management

6.1.3.1.1 Uniform Resource Identifiers (URI)

“Uniform Resource Identifiers (URI) provide a simple and extensible means for identifying a resource. This specification of URI syntax and semantics is derived from concepts introduced by the World Wide Web global information initiative, whose use of such objects dates from 1990.

A URI is a compact string of characters for identifying an abstract or physical resource. The specification standard defines the generic syntax of URI, including both absolute and relative forms, and guidelines for their use. It also defines a grammar that is a superset of all valid URI such that an implementation can parse the common components of a URI reference without knowing the scheme-specific requirements of every possible identifier type.

A resource can be anything that has identity. Familiar examples include an electronic document, an image, a service (e.g. "today's weather report for Los Angeles"), and a collection of other resources. The resource is the conceptual mapping to an entity or set of entities, not necessarily the entity which corresponds to that mapping at any particular instance in time. Thus, a resource can remain constant even when its content – the entities to which it currently corresponds – changes over time, provided that the conceptual mapping is not changed in the process.

The URI syntax is dependent upon the scheme. In general, absolute URI are written as follows:

<scheme>:<scheme-specific-part>

An absolute URI contains the name of the scheme being used (<scheme>) followed by a colon (":") and then a string (the <scheme-specific-part>) whose interpretation depends on the scheme. The URI syntax does not require that the scheme-specific-part have any general structure or set of semantics which is common among all URI. However, a subset of URI do share a common syntax for representing hierarchical relationships within the namespace. This "generic URI" syntax consists of a sequence of four main components:

<scheme>://<authority><path>?<query>

each of which, except <scheme>, may be absent from a particular URI. For example, some URI schemes do not allow an <authority> component, and others do not use a <query> component.

URI that are hierarchical in nature use the slash "/" character for separating hierarchical components. For some file systems, a "/" character (used to denote the hierarchical structure of a URI) is the delimiter used to construct a file name hierarchy, and thus the URI path will look similar to a file pathname. This does not imply that the resource is a file or that the URI maps to an actual filesystem pathname.” [284]
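The four generic components can be illustrated with java.net.URI from the Java standard library, which parses a URI reference into exactly these parts; the example URI is illustrative:

```java
import java.net.URI;

class UriParts {
    public static void main(String[] args) {
        // Decompose a hierarchical URI into the generic components.
        URI uri = URI.create("http://www.example.org/catalog/item?id=42");
        System.out.println("scheme    = " + uri.getScheme());    // http
        System.out.println("authority = " + uri.getAuthority()); // www.example.org
        System.out.println("path      = " + uri.getPath());      // /catalog/item
        System.out.println("query     = " + uri.getQuery());     // id=42
    }
}
```

Note that the parser needs no knowledge of the http scheme to split out these components, which is precisely the point of the generic syntax.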

6.1.3.1.2 Resource Description Framework (RDF)

“The Resource Description Framework (RDF) is a framework for representing information in the Web. The standard defines an abstract syntax on which RDF is based, and which serves to link its concrete syntax to its formal semantics. The development of RDF has been motivated by the following uses, among others:

- Web metadata: providing information about Web resources and the systems that use them (e.g. content rating, capability descriptions, privacy preferences, etc.)
- Applications that require open rather than constrained information models (e.g. scheduling activities, describing organizational processes, annotation of Web resources, etc.)
- To do for machine processable information (application data) what the World Wide Web has done for hypertext: to allow data to be processed outside the particular environment in which it was created, in a fashion that can work at Internet scale.
- Interworking among applications: combining data from several applications to arrive at new information.
- Automated processing of Web information by software agents: the Web is moving from having just human-readable information to being a world-wide network of cooperating processes. RDF provides a world-wide lingua franca for these processes.

RDF is designed to represent information in a minimally constraining, flexible way. It can be used in isolated applications, where individually designed formats might be more direct and easily understood, but RDF's generality offers greater value from sharing. The value of information thus increases as it becomes accessible to more applications across the entire Internet.

The underlying structure of any expression in RDF is a collection of triples, each consisting of a subject, a predicate and an object. A set of such triples is called an RDF graph. This can be illustrated by a node and directed-arc diagram, in which each triple is represented as a node-arc-node link (hence the term "graph").

Figure 30 - Descriptions of relevant standards

Each triple represents a statement of a relationship between the things denoted by the nodes that it links. Each triple has three parts:

- a subject,
- an object, and
- a predicate (also called a property) that denotes a relationship.

The direction of the arc is significant: it always points toward the object.

The nodes of an RDF graph are its subjects and objects.

The assertion of an RDF triple says that some relationship, indicated by the predicate, holds between the things denoted by subject and object of the triple. The assertion of an RDF graph amounts to asserting all the triples in it, so the meaning of an RDF graph is the conjunction (logical AND) of the statements corresponding to all the triples it contains.
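In a concrete syntax such as N-Triples, one such statement is written as subject, predicate, object, terminated by a full stop; the example resources below are illustrative, with the predicate taken from the FOAF vocabulary:

```text
<http://example.org/people/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/people/bob> .
```

Asserting a graph containing this triple asserts that the thing denoted by the first URI stands in the foaf:knows relationship to the thing denoted by the third.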

URI-based Vocabulary and Node Identification

A node may be a URI with optional fragment identifier (URI reference, or URIref), a literal, or blank (having no separate form of identification). Properties are URI references. A URI reference or literal used as a node identifies what that node represents. A URI reference used as a predicate identifies a relationship between the things represented by the nodes it connects. A predicate URI reference may also be a node in the graph.

A blank node is a node that is not a URI reference or a literal. In the RDF abstract syntax, a blank node is just a unique node that can be used in one or more RDF statements, but has no intrinsic name.

A convention used by some linear representations of an RDF graph to allow several statements to reference the same unidentified resource is to use a blank node identifier, which is a local identifier that can be distinguished from all URIs and literals. When graphs are merged, their blank nodes must be kept distinct if meaning is to be preserved; this may call for re-allocation of blank node identifiers. Note that such blank node identifiers are not part of the RDF abstract syntax, and the representation of triples containing blank nodes is entirely dependent on the particular concrete syntax used.

Datatypes

Datatypes are used by RDF in the representation of values such as integers, floating point numbers and dates. A datatype consists of a lexical space, a value space and a lexical-to-value mapping. For example, the lexical-to-value mapping for the XML Schema datatype xsd:boolean, where each member of the value space (represented here as 'T' and 'F') has two lexical representations, is as follows:

Value Space: {T, F}
Lexical Space: {"0", "1", "true", "false"}
Lexical-to-Value Mapping: {<"true", T>, <"1", T>, <"0", F>, <"false", F>}


RDF predefines just one datatype rdf:XMLLiteral, used for embedding XML in RDF. There is no built-in concept of numbers or dates or other common values. Rather, RDF defers to datatypes that are defined separately, and identified with URI references. The predefined XML Schema datatypes are expected to be widely used for this purpose.

RDF provides no mechanism for defining new datatypes. XML Schema Datatypes provides an extensibility framework suitable for defining new datatypes for use in RDF.

RDF Expression of Simple Facts

Some simple facts indicate a relationship between two things. Such a fact may be represented as an RDF triple in which the predicate names the relationship, and the subject and object denote the two things. A familiar representation of such a fact might be as a row in a table in a relational database. The table has two columns, corresponding to the subject and the object of the RDF triple. The name of the table corresponds to the predicate of the RDF triple. A further familiar representation may be as a two place predicate in first order logic.

Relational databases permit a table to have an arbitrary number of columns, a row of which expresses information corresponding to a predicate in first order logic with an arbitrary number of places. Such a row, or predicate, has to be decomposed for representation as RDF triples. A simple form of decomposition introduces a new blank node, corresponding to the row, and a new triple is introduced for each cell in the row. The subject of each triple is the new blank node, the predicate corresponds to the column name, and object corresponds to the value in the cell. The new blank node may also have an rdf:type property whose value corresponds to the table name.” [285]
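The decomposition just described can be sketched as a hypothetical Python helper (the function name and the plain-string triple representation are ours, for illustration only; real RDF would use URI references):

```python
import itertools

# Fresh blank node identifiers for each decomposed row.
_counter = itertools.count(1)

def row_to_triples(table_name, row):
    """Decompose one relational row into RDF-style triples.

    Introduces a new blank node corresponding to the row, one triple per
    cell (predicate = column name, object = cell value), plus an rdf:type
    triple whose object corresponds to the table name.
    """
    bnode = f"_:row{next(_counter)}"
    triples = [(bnode, "rdf:type", table_name)]
    for column, value in row.items():
        triples.append((bnode, column, value))
    return triples
```

For example, a two-column row in an `eg:Employee` table becomes three triples sharing the same blank node subject.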

6.1.3.1.3 RDF Schema (RDFS)

“The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web, and the RDF Schema (RDFS) specification introduces RDF's vocabulary description language, RDF Schema. It is complemented by several companion documents which describe RDF's XML encoding and mathematical foundations.

RDF properties may be thought of as attributes of resources and in this sense correspond to traditional attribute-value pairs. RDF properties also represent relationships between resources. RDF, however, provides no mechanisms for describing these properties, nor does it provide any mechanisms for describing the relationships between these properties and other resources. That is the role of the RDF vocabulary description language, RDF Schema. RDF Schema defines classes and properties that may be used to describe classes, properties and other resources.

RDF's vocabulary description language, RDF Schema, is a semantic extension of RDF. It provides mechanisms for describing groups of related resources and the relationships between these resources. RDF Schema vocabulary descriptions are written in RDF using the terms described in the RDFS specification. These resources are used to determine characteristics of other resources, such as the domains and ranges of properties.

The RDF vocabulary description language class and property system is similar to the type systems of object-oriented programming languages such as Java. RDF differs from many such systems in that instead of defining a class in terms of the properties its instances may have, the RDF vocabulary description language describes properties in terms of the classes of resource to which they apply. This is the role of the domain and range mechanisms described in this specification. For example, we could define the eg:author property to have a domain of eg:Document and a range of eg:Person, whereas a classical object oriented system might typically define a class eg:Book with an attribute called eg:author of type eg:Person. Using the RDF approach, it is easy for others to subsequently define additional properties with a domain of eg:Document or a range of eg:Person. This can be done without the need to re-define the original description of these classes. One benefit of the RDF property-centric approach is that it allows anyone to extend the description of existing resources, which is one of the architectural principles of the Web.
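The eg:author example above can be written as a short Turtle fragment (the eg: names are the specification's illustrative examples, not a real vocabulary):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix eg:   <http://example.org/> .

# The property is described in terms of the classes it applies to,
# rather than being declared inside a class definition.
eg:author  a rdf:Property ;
    rdfs:domain eg:Document ;
    rdfs:range  eg:Person .
```

Anyone can later add further properties with domain eg:Document or range eg:Person without touching this description, which is the property-centric extensibility the text describes.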

The RDF Schema specification does not attempt to enumerate all the possible forms of vocabulary description that are useful for representing the meaning of RDF classes and properties. Instead, the RDF vocabulary description strategy is to acknowledge that there are many techniques through which the meaning of classes and properties can be described. Richer vocabulary or 'ontology' languages such as DAML+OIL, W3C's OWL language, inference rule languages and other formalisms (for example temporal logics) will each contribute to the ability to capture meaningful generalizations about data in the Web. RDF vocabulary designers can create and deploy Semantic Web applications using the RDF vocabulary description language 1.0 facilities, while exploring richer vocabulary description languages that share this general approach.

The language defined in the standard consists of a collection of RDF resources that can be used to describe properties of other RDF resources (including properties) in application-specific RDF vocabularies.” [286]

6.1.3.1.4 Web Ontology Language (OWL) and OWL 2

“OWL, and its successor OWL 2, is a Web Ontology Language for the Semantic Web with formally defined meaning. OWL 2 ontologies provide classes, properties, individuals, and data values and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents.

Ontologies are formalized vocabularies of terms, often covering a specific domain and shared by a community of users. They specify the definitions of terms by describing their relationships with other terms in the ontology. OWL 2 is an extension and revision of the OWL Web Ontology Language developed by the W3C Web Ontology Working Group and published in 2004 (referred to as “OWL 1”). OWL 2 is designed to facilitate ontology development and sharing via the Web, with the ultimate goal of making Web content more accessible to machines.

Figure 31 gives an overview of the OWL 2 language, showing its main building blocks and how they relate to each other. The ellipse in the centre represents the abstract notion of an ontology, which can be thought of either as an abstract structure or as an RDF graph. At the top are various concrete syntaxes that can be used to serialize and exchange ontologies. At the bottom are the two semantic specifications that define the meaning of OWL 2 ontologies.

Figure 31: Overview of the OWL 2 language

Most users of OWL 2 will need only one syntax and one semantics; for them, this diagram would be much simpler, with only their one syntax at the top, their one semantics at the bottom, and rarely a need to see what's inside the ellipse in the center.


6.1.3.1.4.1 Ontologies

The conceptual structure of OWL 2 ontologies is defined in the OWL 2 Structural Specification document. This document uses UML to define the structural elements available in OWL 2, explaining their roles and functionalities in abstract terms and without reference to any particular syntax. It also defines the functional-style syntax, which closely follows the structural specification and allows OWL 2 ontologies to be written in a compact form.
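A small functional-style syntax fragment illustrates the compact form mentioned above (the ontology IRI and class names are ours, purely for illustration):

```
Prefix(eg:=<http://example.org/>)
Ontology(<http://example.org/ontology>
  Declaration(Class(eg:Document))
  Declaration(Class(eg:Report))
  SubClassOf(eg:Report eg:Document)
)
```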

Any OWL 2 ontology can also be viewed as an RDF graph. The relationship between these two views is specified by the Mapping to RDF Graphs document, which defines a mapping from the structural form to the RDF graph form, and vice versa.

6.1.3.1.4.2 Syntaxes

In practice, a concrete syntax is needed in order to store OWL 2 ontologies and to exchange them among tools and applications. The primary exchange syntax for OWL 2 is RDF/XML; this is indeed the only syntax that must be supported by all OWL 2 tools.

While RDF/XML provides for interoperability among OWL 2 tools, other concrete syntaxes may also be used. These include alternative RDF serializations, such as Turtle; an XML serialization; and a more "readable" syntax, called the Manchester Syntax, that is used in several ontology editing tools. Finally, the functional-style syntax can also be used for serialization, although its main purpose is specifying the structure of the language.

6.1.3.1.4.3 Semantics

The OWL 2 Structural Specification document defines the abstract structure of OWL 2 ontologies, but it does not define their meaning. The Direct Semantics and the RDF-Based Semantics provide two alternative ways of assigning meaning to OWL 2 ontologies, with a correspondence theorem providing a link between the two. These two semantics are used by reasoners and other tools, e.g., to answer class consistency, subsumption and instance retrieval queries.

The Direct Semantics assigns meaning directly to ontology structures, resulting in a semantics compatible with the model theoretic semantics of the SROIQ description logic—a fragment of first order logic with useful computational properties. The advantage of this close connection is that the extensive description logic literature and implementation experience can be directly exploited by OWL 2 tools. However, some conditions must be placed on ontology structures in order to ensure that they can be translated into a SROIQ knowledge base; for example, transitive properties cannot be used in number restrictions. Ontologies that satisfy these syntactic conditions are called OWL 2 DL ontologies. "OWL 2 DL" is used informally to refer to OWL 2 DL ontologies interpreted using the Direct Semantics.

The RDF-Based Semantics assigns meaning directly to RDF graphs and so indirectly to ontology structures via the Mapping to RDF graphs. The RDF-Based Semantics is fully compatible with the RDF Semantics, and extends the semantic conditions defined for RDF. The RDF-Based Semantics can be applied to any OWL 2 Ontology, without restrictions, as any OWL 2 Ontology can be mapped to RDF. "OWL 2 Full" is used informally to refer to RDF graphs considered as OWL 2 ontologies and interpreted using the RDF-Based Semantics.

The correspondence theorem of the RDF-Based Semantics Document defines a precise, close relationship between the Direct and RDF-Based Semantics. This theorem states, in essence, that given an OWL 2 DL ontology, inferences drawn using the Direct Semantics will still be valid if the ontology is mapped into an RDF graph and interpreted using the RDF-Based Semantics.” [287]

6.1.3.1.5 Simple Knowledge Organization System (SKOS)

“The Simple Knowledge Organization System (SKOS) specifies a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications.

The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL).

The SKOS data model is formally defined as an OWL Full ontology. SKOS data are expressed as RDF triples, and may be encoded using any concrete RDF syntax (such as RDF/XML or Turtle). The SKOS data model views a knowledge organization system as a concept scheme comprising a set of concepts. These SKOS concept schemes and SKOS concepts are identified by URIs, enabling anyone to refer to them unambiguously from any context, and making them a part of the World Wide Web.

SKOS concepts can be labelled with any number of lexical (UNICODE) strings, such as "romantic love" or "れんあい", in any given natural language, such as English or Japanese (written here in hiragana). One of these labels in any given language can be indicated as the preferred label for that language, and the others as alternative labels. Labels may also be "hidden", which is useful where a knowledge organization system is being queried via a text index.

SKOS concepts can be assigned one or more notations, which are lexical codes used to uniquely identify the concept within the scope of a given concept scheme. While URIs are the preferred means of identifying SKOS concepts within computer systems, notations provide a bridge to other systems of identification already in use such as classification codes used in library catalogues.

SKOS concepts can be documented with notes of various types. The SKOS data model provides a basic set of documentation properties, supporting scope notes, definitions and editorial notes, among others. This set is not meant to be exhaustive, but rather to provide a framework that can be extended by third parties to provide support for more specific types of note. SKOS concepts can be linked to other SKOS concepts via semantic relation properties. The SKOS data model provides support for hierarchical and associative links between SKOS concepts. Again, as with any part of the SKOS data model, these can be extended by third parties to provide support for more specific needs.
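The labelling, documentation and semantic-relation features described above can be illustrated in a hypothetical Turtle fragment (the eg: concepts and labels are ours, not from the SKOS specification):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix eg:   <http://example.org/> .

eg:love  a skos:Concept ;
    skos:inScheme  eg:emotionScheme ;
    skos:prefLabel "romantic love"@en , "れんあい"@ja ;  # one preferred label per language
    skos:altLabel  "romance"@en ;                         # alternative label
    skos:broader   eg:affection ;                         # hierarchical link
    skos:scopeNote "Used only for interpersonal affection."@en .
```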

SKOS concepts can be grouped into collections, which can be labelled and/or ordered. This feature of the SKOS data model is intended to provide support for node labels within thesauri, and for situations where the ordering of a set of concepts is meaningful or provides some useful information.

SKOS concepts can be mapped to other SKOS concepts in different concept schemes. The SKOS data model provides support for four basic types of mapping link: hierarchical, associative, close equivalent and exact equivalent.

Finally, an optional extension to SKOS is defined: SKOS eXtension for Labels (SKOS-XL), which provides more support for identifying, describing and linking lexical entities.” [288]

6.1.3.1.6 RDFa

“The web is a rich, distributed repository of interconnected information. Until recently, it was organized primarily for human consumption. On a typical web page, an HTML author might specify a headline, then a smaller sub-headline, a block of italicized text, a few paragraphs of average-size text, and, finally, a few single-word links. Web browsers will follow these presentation instructions faithfully. However, only the human mind understands what the headline expresses: a blog post title. The sub-headline indicates the author, the italicized text is the article's publication date, and the single-word links are subject categories. Computers do not understand these nuances; the gap between what programs and humans understand is large.


Figure 32: On the left, what browsers see – on the right, what humans see

What if the browser, or any machine consumer such as a Web crawler, received information on the meaning of a web page's visual elements? A dinner party announced on a blog could be copied to the user's calendar, an author's complete contact information to the user's address book. Users could automatically recall previously browsed articles according to categorization labels (i.e., tags). A photo copied and pasted from a web site to a school report would carry with it a link back to the photographer, giving him proper credit. A link shared by a user to his social network contacts would automatically carry additional data pulled from the original web page: a thumbnail, an author, and a specific title. When web data meant for humans is augmented with hints meant for computer programs, these programs become significantly more helpful, because they begin to understand the data's structure.

RDFa allows HTML authors to do just that. Using a few simple HTML attributes, authors can mark up human-readable data with machine-readable indicators for browsers and other programs to interpret. A web page can include markup for items as simple as the title of an article, or as complex as a user's complete social network.

Historically, RDFa 1.0 was specified only for XHTML. RDFa 1.1, the newer version, is specified for both XHTML and HTML5. In fact, RDFa 1.1 also works for other XML-based languages such as SVG.” [289]
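The blog-post example above can be marked up with a few RDFa 1.1 attributes; a hypothetical HTML5 fragment (the schema.org vocabulary and the content values are our illustrative assumptions):

```html
<article vocab="http://schema.org/" typeof="BlogPosting">
  <h1 property="headline">Why RDFa Matters</h1>
  <p>By <span property="author">A. Author</span>,
     <time property="datePublished" datetime="2014-03-31">31 March 2014</time></p>
  <a property="articleSection" href="/category/semantic-web">Semantic Web</a>
</article>
```

A crawler processing this page extracts RDF triples stating the post's headline, author, publication date and category, rather than seeing only presentation markup.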

6.1.3.1.7 SPARQL query language

“RDF is a directed, labelled graph data format for representing information in the Web and SPARQL is a query language for RDF. SPARQL can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions. SPARQL also supports aggregation, subqueries, negation, creating values by expressions, extensible value testing, and constraining queries by source RDF graph. The results of SPARQL queries can be result sets or RDF graphs.
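A short query in the spirit of the SPARQL specification's examples shows a required and an optional graph pattern (the FOAF terms are standard; the variable names are illustrative):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?name ?mbox
WHERE {
  ?person foaf:name ?name .            # required pattern
  OPTIONAL { ?person foaf:mbox ?mbox } # bound only when a mailbox exists
}
```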

Most forms of SPARQL query contain a set of triple patterns called a basic graph pattern. Triple patterns are like RDF triples except that each of the subject, predicate and object may be a variable. A basic graph pattern matches a subgraph of the RDF data when RDF terms from that subgraph may be substituted for the variables and the result is RDF graph equivalent to the subgraph.

The result of a query is a solution sequence, corresponding to the ways in which the query's graph pattern matches the data. There may be zero, one or multiple solutions to a query. Each solution gives one way in which the selected variables can be bound to RDF terms so that the query pattern matches the data. The result set gives all the possible solutions.” [290]
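The matching process described above can be sketched as a naive backtracking matcher; an illustrative Python sketch, not an optimised SPARQL engine (the `?`-prefixed variable convention and plain-string terms are our own simplification):

```python
def match_bgp(pattern, data):
    """Match a basic graph pattern against a set of triples.

    Variables are strings starting with '?'. Returns the solution
    sequence: one binding dict per way the pattern matches the data.
    """
    def match_triple(tp, triple, binding):
        # Try to extend `binding` so that pattern `tp` equals `triple`.
        b = dict(binding)
        for p, t in zip(tp, triple):
            if isinstance(p, str) and p.startswith("?"):
                if p in b and b[p] != t:
                    return None      # variable already bound differently
                b[p] = t
            elif p != t:
                return None          # constant term does not match
        return b

    solutions = [{}]
    for tp in pattern:
        next_solutions = []
        for binding in solutions:
            for triple in data:
                b = match_triple(tp, triple, binding)
                if b is not None:
                    next_solutions.append(b)
        solutions = next_solutions
    return solutions
```

Running a two-triple pattern such as `(?p foaf:knows ?q), (?q foaf:knows ?r)` over a small graph yields one binding per matching subgraph, mirroring the solution sequence described above.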


6.1.3.2 Vocabularies and ontologies

6.1.3.2.1 Observations & Measurements

The Observations & Measurements (O&M) standard provides general models and schema for supporting the packaging of observations from sensor systems and sensor-related processing. The model supports metadata about the observation, as well as the ability to link to the procedure (i.e. sensors plus processing) that created the observation, thus providing an indication of the lineage of the measurements. The O&M standard has been encoded in XML and is also available in an experimental OWL format.

6.1.3.2.2 Sensor Model Language (SensorML)

This provides standard models and an XML encoding for describing any process, including the process of measurement by sensors and instructions for deriving higher-level information from observations. Processes described in SensorML are discoverable and executable. All processes define their inputs, outputs, parameters, and method, as well as provide relevant metadata. SensorML models detectors and sensors as processes that convert real phenomena to data.

6.1.3.2.3 Sensor and Sensor Network ontology

This specification “answers the need for a domain-independent and end-to-end model for sensing applications by merging sensor-focused (e.g. SensorML), observation-focused (e.g. Observation & Measurement) and system-focused views. It covers the sub-domains which are sensor-specific such as the sensing principles and capabilities and can be used to define how a sensor will perform in a particular context to help characterise the quality of sensed data or to better task sensors in unpredictable environments. Although the ontology leaves the observed domain unspecified, domain semantics, units of measurement, time and time series, and location and mobility ontologies can be easily attached when instantiating the ontology for any particular sensors in a domain. The alignment between the SSN ontology and the DOLCE Ultra Lite upper ontology has helped to normalise the structure of the ontology to assist its use in conjunction with ontologies or linked data resources developed elsewhere.” The SSN ontology can thus be seen as encompassing the OGC standards and provides a framework for application development.

6.1.3.2.4 Geography Markup Language

The OpenGIS® Geography Markup Language (GML) Encoding Standard is an XML grammar for expressing geographical features. GML serves as a modelling language for geographic systems as well as an open interchange format for geographic transactions on the Internet. As with most XML-based grammars, there are two parts to the grammar: the schema that describes the document and the instance document that contains the actual data. A GML document is described using a GML Schema. This allows users and developers to describe generic geographic data sets that contain points, lines and polygons.
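A minimal GML instance fragment might encode a single point feature as follows (the identifier and coordinate values are illustrative):

```xml
<gml:Point xmlns:gml="http://www.opengis.net/gml/3.2"
           gml:id="p1" srsName="urn:ogc:def:crs:EPSG::4326">
  <gml:pos>51.5074 -0.1278</gml:pos>
</gml:Point>
```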

6.1.3.2.5 GeoSPARQL

GeoSPARQL is a standard built upon GML that provides for the representation and querying of linked geospatial data. It defines a small ontology using GML and WKT, as well as a SPARQL query interface (for more details on SPARQL, consult the sections above). GeoSPARQL is expected to replace the vocabulary that the UK Ordnance Survey has used until now to encode its geospatial data as linked data.
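A sketch of a GeoSPARQL query that finds features whose WKT geometry lies within a polygon (the polygon coordinates and variable names are illustrative assumptions):

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

SELECT ?feature
WHERE {
  ?feature geo:hasGeometry ?geom .
  ?geom    geo:asWKT       ?wkt .
  FILTER ( geof:sfWithin(?wkt,
           "POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))"^^geo:wktLiteral) )
}
```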

6.1.3.2.6 FOAF

“FOAF is a project devoted to linking people and information using the Web. Regardless of whether information is in people's heads, in physical or digital documents, or in the form of factual data, it can be linked. FOAF integrates three kinds of network: social networks of human collaboration, friendship and association; representational networks that describe a simplified view of a cartoon universe in factual terms; and information networks that use Web-based linking to share independently published descriptions of this inter-connected world. FOAF does not compete with socially-oriented Web sites; rather it provides an approach in which different sites can tell different parts of the larger story, and by which users can retain some control over their information in a non-proprietary format.” (http://xmlns.com/foaf/spec/). FOAF is a widely used standard for describing people and their relationships.
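A typical FOAF description in Turtle looks like the following (the names and mailbox are hypothetical; the foaf: terms are from the FOAF vocabulary):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#me>  a foaf:Person ;
    foaf:name  "Alice Example" ;
    foaf:mbox  <mailto:[email protected]> ;
    foaf:knows [ a foaf:Person ; foaf:name "Bob Example" ] .
```

Published at a stable URI, such a document lets other sites link to <#me> and contribute further statements about the same person.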

6.1.3.2.7 GoodRelations Ontology

GoodRelations is a widely used ontology for the markup of products and of organisations offering products on the internet. Its prime focus is e-commerce. The standard approach is to use GoodRelations in RDFa to mark up information in HTML web pages, thereby allowing web crawlers to collect RDF-format data for indexing or aggregation purposes. GoodRelations has had considerable success and is used in a number of major websites around the world. The best known example is its adoption by the US retailer BestBuy.com, which saw a 30% increase in their search results and a 15% increase in click-throughs (Sheldon, 2011); over 600,000 products were marked up on their website with this ontology in RDFa. Other adopters include O'Reilly books in the US, both Renault and Volkswagen in the UK, and Arzneimittel.de in Germany. One of the more interesting projects has involved the markup of nearly all the retail outlets in the German town of Ravensburg and the creation of a corresponding smartphone app with information on addresses and opening hours. The most common current use of the GoodRelations ontology is to ensure the best possible search engine optimisation, but its design and intent were much broader from the start. The GoodRelations ontology has become a de facto standard, and all retail and consumer-facing publishing of data would be well advised to use it as the basis for markup and metadata.
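A sketch of a GoodRelations product offering in Turtle (the offering, product name and price are hypothetical; the gr: terms are from the GoodRelations v1 vocabulary):

```turtle
@prefix gr:  <http://purl.org/goodrelations/v1#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<#offering>  a gr:Offering ;
    gr:name "Example product" ;
    gr:hasPriceSpecification [
        a gr:UnitPriceSpecification ;
        gr:hasCurrency      "EUR" ;
        gr:hasCurrencyValue "99.00"^^xsd:float
    ] .
```

In practice such statements are usually embedded as RDFa in the product's HTML page, so crawlers can harvest the offer data directly from the shop front.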