
Self-Stabilizing Ecosystem for Service-based Mobile Computing

Hyun Jung La, Jeong Ran Jang, and Soo Dong Kim Department of Computer Science

Soongsil University 511 Sangdo-Dong, Dongjak-Ku, Seoul, Korea 156-743

{hjla80, jrjang10, sdkim777}@gmail.com

Abstract— Mobile devices are widely used as convenient devices which provide both cell phone capability and computing power. However, their resources, such as CPU and memory, are limited. As a common form of remedying the resource constraints, service-based mobile computing is emerging. However, there still exist problems in this paradigm: a lack of stability and performance, and limited manageability. In this paper, we present a novel approach to resolving these problems and providing an effective computing environment, called the Service-based Mobile Ecosystem (SME). It is a computing environment which continuously monitors the quality of services and performs the necessary service migrations and replications as a means of providing self-stabilization of services for service-based mobile computing. A key feature of SME is to utilize the ever-increasing computing power of mobile devices by dynamically deploying requested services on the devices. By adopting the computing model of SME, we believe that super mobile computing without the manageability problem can be realized.

Keywords: Mobile Computing, Service-based Computing, Self-stabilization, Ecosystem, QoS

I. INTRODUCTION

Mobile devices in the form of smartphones and tablet PCs are widely accepted as convergence machines which provide both cell phone capability and lightweight computing capability. The potential of utilizing mobile devices goes beyond that of conventional personal computers due to their support for mobility and context-sensing capability.

However, mobile devices have a major drawback of limited computing power and resources, such as main memory, secondary memory, screen size, and battery life, mainly due to their small form factor [1][2]. Consequently, large-scale applications consuming large amounts of resources cannot be deployed on the devices. To overcome this limitation and to maximize the utilization of mobile devices, service-based mobile applications are emerging [3][4][5]. In service-based mobile applications, services deployed on the server side are invoked by client applications, yielding a number of benefits over standalone mobile apps.

A new trend in mobile computing is to enable enterprise computing with mobile devices, called super mobile computing [6]. This becomes more feasible with the advent of more powerful mobile devices equipped with dual-core processors, larger memory, and larger screens.

However, in enabling service-based super mobile computing, there exists a challenging problem concerning the quality of services, i.e., a lack of stability and performance. Services can potentially be subscribed to by a large number of service consumers, and the volume of service invocations is not known in advance. Unexpectedly high volumes of service invocations, degradation of network bandwidth, and faults in services all contribute to the problem of service instability. Also, the fact that services are invoked mainly over the network results in potentially low performance, i.e., long response times. This becomes even more evident with mobile computing, since the network bandwidth available to mobile devices, such as 3G, is considerably low.

We also identify a challenging problem in managing services, i.e., limited manageability. Since services are developed in a black-box form by service providers and offer limited visibility and controllability, it is quite a challenge to resolve dynamically occurring faults and quality-related problems of services. These two problems, lack of stability and performance, and limited manageability, motivate our research question: how can we provide a computing environment which overcomes the two problems while enabling service-based mobile computing? Our answer to this research question is to realize a self-stabilizing ecosystem for service-based mobile computing.

In this paper, we propose a novel approach: the self-stabilizing service-based mobile ecosystem (SME). It is a computing environment which continuously monitors the quality of services and performs the necessary service migrations and replications as a means of providing self-stabilization of services for service-based mobile computing. A key feature of SME is to utilize the ever-increasing computing power of mobile devices by dynamically deploying requested services on the devices. For the organization of the paper, Section II surveys related works. We then present the key concepts and architecture of SME in Section III, define proactive quality management of services in Section IV, and propose a process for autonomous management of elements in SME in Section V. The experiment results are given in Section VI. By adopting the computing model of SME, we believe that super mobile computing without the manageability problem can be realized.

II. RELATED WORKS

Malek’s work presents an architecture-driven framework supporting the entire life-cycle of a mobile software system [7]. It consists of tools for designing, assessing, implementing, deploying, and even migrating mobile software systems at runtime. The framework enables architects to design architectures by assessing their qualities through various simulation methods. This work focuses on elaborating the functionality of the tools in the framework; issues of assessing the overall quality of a mobile computing environment as a whole and the self-manageability of services are not dealt with.

Chun’s work presents an approach to dynamically partitioning applications between resource-constrained mobile devices and clouds, to adapt to different environments and workloads dynamically [8]. They consider the costs of processing functionality on both mobile devices and service servers, and the communication cost between them, in partitioning the functionality. The proposed solutions remain largely at a conceptual level.

Han’s work presents an adaptive software architecture supporting component migration and redeployment at runtime [9]. Information about components and connectors is classified into two types: mobility-relevant and mobility-independent. The work proposes four types of connectors for the adaptive architecture: link, pull, copy, and stamp. However, implementation issues of the connectors are not treated.

Ennai’s work presents a service-oriented framework supporting dynamic service discovery and binding, context-aware service partitioning, and asynchronous push service invocation [3]. Tergujeff’s work proposes a service-oriented architecture for lightweight mobile devices by surveying enabling technologies, programming interfaces, and supporting devices [4]. Natchetoi’s work presents a lightweight service-based architecture for business applications running on J2ME-enabled devices [5]; it presents methods for minimizing data transferred to and stored on the device, proactive data loading, and security. Wang’s work [10] and Thanh’s work [11] propose frameworks supporting mobile services.

All of these works focus on architectures and methods for delivering services to users of mobile devices. Our work presents an innovative and dynamic architecture for a service-based mobile computing environment, together with novel features of the model, including autonomous service management and dynamic configuration of services.

III. SME ARCHITECTURE

A. Functionality of SME

SME is an ecosystem where services are deployed on nodes, mobile applications run by subscribing to the services, the overall quality of the ecosystem is monitored, and service management tasks are performed in an autonomous way. This is much like typical ecosystems where living animals and plants exist and interact, and where an ideal level of the ecosystem's quality is self-regulated and maintained. Hence, the key features of SME are summarized as follows:

- Quality of services in SME is continuously monitored.
- When quality degradation is severe, a plan to remedy the problems is generated automatically.
- Service migration and replication are performed according to the plan in an autonomous manner.
- Services can be dynamically deployed on mobile devices as well as on conventional station nodes. This takes advantage of the ever-increasing computing power and resources of mobile devices such as tablet PCs.
- The entire life-cycle of monitoring quality, making a remedy plan, and implementing the plan is performed in an autonomous way.

The resulting benefit of the self-stabilization feature of SME is to overcome the problems of service instability, low performance, and limited manageability, while providing a consistent level of quality to mobile applications.

B. Key Elements of SME

SME consists of several key elements, as shown in Figure 1: Cloud Service as a reusable functionality, Station Node deploying services, Mobile Node deploying mobile applications and services, Mobile Application subscribing to services, SME Manager coordinating self-stabilization at the global level, SME Agent managing services at the node level, and SME Repository.

Figure 1. Key Elements of SME

Service: A service is a reusable unit that provides cohesive functionality to service consumers. Let Ti be a type of service, from which a number of service instances can be created. We distinguish types of services from instances of a service type, because a number of service instances can be created from one service type and deployed on possibly different servers. Let numServiceTypes be the number of service types, and setServiceTypes be the set of service types, i.e.,

setServiceTypes = {T1, T2, T3, …, TnumServiceTypes}

Let Si^p be a service instance of the service type Ti. That is, for a service type Ti, its service instances are Si^1, Si^2, Si^3, …, Si^n. Let numServiceInstances be the number of service instances in an SME, and setServiceInstances be the set of all service instances.

setServiceInstances = {Si^1, Si^2, …, Si^numServiceInstances}


In Figure 1, numServiceTypes is 4 and setServiceTypes is {T1, T2, T6, T9}. And, numServiceInstances is 10 and setServiceInstances is {S1^1, S1^2, S2^1, S2^2, S2^3, S2^4, S2^5, S6^1, S9^1, S9^2}. There are two service instances, S1^1 and S1^2, for the service type T1, five service instances, S2^1 through S2^5, for T2, one service instance, S6^1, for T6, and two service instances, S9^1 and S9^2, for T9. Note that service instances of the same service type can be deployed on different nodes, as in the case of T2: S2^4 is on SNode1, S2^3 is on SNode2, S2^1 and S2^2 are on SNode3, and S2^5 is on MNode2.

Station Node: A node indicates a computer where services and applications are deployed. We consider two types of nodes: Station Node for a conventional server, and Mobile Node for a mobile device deploying services as well as applications. Let setNodes be the union of station nodes and mobile nodes:

setNodes = setSNodes ∪ setMNodes

Let SNodei be a station node deploying services. Let numSNodes be the number of station nodes, and setSNodes be the set of all station nodes.

setSNodes = {SNode1, SNode2, …, SNodenumSNodes}

In Figure 1, numSNodes is 3, and setSNodes is {SNode1,SNode2, SNode3}.

Mobile Node: Mobile node is a device which runs applications and possibly deploys services. Let MNodei be a mobile device which deploys mobile applications and cloud services. Let numMNodes be the total number of mobile devices and setMNodes be a set of the active devices, i.e.;

setMNodes = {MNode1, MNode2, …, MNodenumMNodes}

In Figure 1, numMNodes is 2, and setMNodes is {MNode1, MNode2}. Note that all the mobile nodes deploy mobile applications, while only MNode2 additionally operates S2^5.

Mobile Application: A mobile application is an application running on a mobile device. Mobile applications often run by subscribing to services; deployed on a mobile node MNodej, a mobile application interacts with service instances. Let Appi be a mobile application which is deployed on MNodej. And, let numApps be the total number of mobile applications and setApps be the set of mobile applications, i.e.,

setApps = {App1, App2, …, AppnumApps}

In Figure 1, numApps is 3, and setApps is {App1, App2, App3}. App1 and App2 are deployed on MNode1, and App3 is deployed on MNode2. Note that Appi is only deployed on an MNode, never on an SNode.

SME Manager & SME Agent: One of the key features of SME is the capability of self-stabilizing its ecosystem [12]. The SME Manager plays the role of coordinating all the activities in SME by initiating the actions of monitoring the overall QoS, making a quality remedy plan, and executing the plan in an autonomous way.

The SME Agent is a software agent running on each node, station or mobile, and is responsible for carrying out actions requested by the SME Manager, such as monitoring local quality and performing dynamic service deployment. Let Agenti be the SME agent working for a node. And let numAgents be the number of SME agents in an SME, and setAgents be the set of all agents.

setAgents = {Agent1, Agent2, …, AgentnumAgents}

In Figure 1, numAgents is 5, which is the same as the sum of numMNodes and numSNodes. And, setAgents is {Agent1, Agent2, Agent3, Agent4, Agent5}.

The roles of SME Manager and SME Agent are summarized in Table I.

TABLE I. TASKS OF SME MANAGER AND SME AGENTS

SME Manager: Detect quality degradation; Make quality remedy plan; Initiate service migration and replication; Reroute service invocations.
SME Agent: Monitor local quality; Report status of nodes; Dynamically deploy services.

The interactions between SME Manager and SME Agent are illustrated in Figure 2.

Figure 2. Interactions between SME Manager and SME Agents

First, an SME Agent gathers quality information from the mobile or station node that it is in charge of (#1), and sends the information to the SME Manager (#2). With the information, the SME Manager decides whether the overall quality of SME is being degraded or not. If degradation is detected, the SME Manager makes a quality improvement plan (#3), decides the actions to take (#4), and commands the actions to the SME Agents (#5). Following the SME Manager's decision, each SME Agent performs the specific action on its node (#6). This flow is performed autonomously and is described in detail in Section V.
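To make the message flow concrete, the following illustrative Python sketch (ours, not the paper's implementation; all class, field, and method names are assumptions) models steps #1 through #6 as plain method calls.

```python
# Minimal sketch of the SME Manager / SME Agent interaction cycle (#1-#6).
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class QosReport:                    # steps #1-#2: quality data gathered on one node
    node_id: str
    response_times: Dict[str, List[float]]   # service instance id -> observed response times


@dataclass
class Action:                       # step #5: decision sent back to an agent
    kind: str                       # "migrate" or "replicate"
    instance_id: str
    target_node: str


class SMEManager:
    def __init__(self) -> None:
        self.reports: List[QosReport] = []

    def receive(self, report: QosReport) -> None:          # step #2
        self.reports.append(report)

    def plan(self) -> Dict[str, Action]:                   # steps #3-#4
        # Placeholder: a real plan is derived from the metrics of Section IV
        # and the planning activity of Section V.
        return {}


class SMEAgent:
    def __init__(self, node_id: str, manager: SMEManager) -> None:
        self.node_id, self.manager = node_id, manager

    def report(self, response_times: Dict[str, List[float]]) -> None:   # step #1
        self.manager.receive(QosReport(self.node_id, response_times))

    def take_action(self, action: Action) -> None:         # step #6
        print(f"{self.node_id}: {action.kind} {action.instance_id} to {action.target_node}")


manager = SMEManager()
agent = SMEAgent("SNode2", manager)
agent.report({"S1^2": [1.06, 3.68]})          # illustrative numbers
for node, action in manager.plan().items():   # empty plan in this stub
    agent.take_action(action)
```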

SME Repository: The SME Manager maintains a repository to store the configurations of SME elements, and the SME Repository maintains the following information types:
- List of all the elements, i.e., service types, service instances, station nodes, and mobile nodes
- List of service types required by mobile applications
- Available resources of each mobile or station node
- List of service instances deployed on mobile nodes and station nodes, including their QoS


After each cycle of autonomous management of services, SME repository is updated with the new configuration.
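As an illustration only, since the paper does not prescribe a schema, the repository contents could be modeled as follows; all type and field names are our assumptions.

```python
# Illustrative data model for the SME Repository (not the paper's implementation).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class NodeRecord:
    node_id: str                                                          # e.g. "SNode1" or "MNode2"
    is_mobile: bool
    available_resources: Dict[str, float] = field(default_factory=dict)  # e.g. {"cpu": 0.4, "mem_mb": 512}
    deployed_instances: List[str] = field(default_factory=list)          # e.g. ["S2^4"]


@dataclass
class SMERepository:
    service_types: List[str] = field(default_factory=list)               # setServiceTypes
    service_instances: Dict[str, str] = field(default_factory=dict)      # instance id -> service type (realizeRel)
    nodes: Dict[str, NodeRecord] = field(default_factory=dict)           # setSNodes and setMNodes
    required_types: Dict[str, List[str]] = field(default_factory=dict)   # app id -> service types it requires
    instance_qos: Dict[str, float] = field(default_factory=dict)         # instance id -> latest AIRT value

    def update_configuration(self, node_id: str, deployed: List[str]) -> None:
        """Record the new deployment of a node after a cycle of autonomous management."""
        self.nodes[node_id].deployed_instances = list(deployed)
```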

C. Key Relationships among Elements

Elements of SME interact closely, and their interactions and relationships are specified here. Figure 3 shows four types of relationships: «manages», «deploys», «invokes», and «realizes».

Figure 3. Relationships among Key Elements of SME

A relationship between nodes and SME agents, «manages», indicates that an SME agent manages a node, whether it is a station node or a mobile node. As explained earlier, each node, regardless of whether it is a station or mobile node, is managed by an SME agent that monitors its current quality and takes quality improvement actions. Let manageRel be a relation between the two sets setAgents and setNodes:

manageRel ⊆ setAgents × setNodes = {(x, y) | x ∈ setAgents and (y ∈ setMNodes or y ∈ setSNodes)}

A relationship between nodes and service instances and between mobile nodes and mobile applications, «deploys», indicates that nodes deploy service instances and that mobile nodes deploy mobile applications. Note that mobile applications are linked only with mobile nodes, whereas service instances can be related to any kind of node. Let deployRel be a relation between setNodes and setServiceInstances and between setMNodes and setApps:

deployRel ⊆ setNodes × setServiceInstances = {(x, y) | x ∈ setNodes and y ∈ setServiceInstances}

deployRel ⊆ setMNodes × setApps = {(x, y) | x ∈ setMNodes and y ∈ setApps}

A relationship between mobile applications and service instances, «invokes», indicates that a mobile application invokes a service instance. This relationship is required since public cloud services can be subscribed to by any mobile application. Let invokeRel be a relation between the two sets setApps and setServiceInstances:

invokeRel ⊆ setApps × setServiceInstances = {(x, y) | x ∈ setApps and y ∈ setServiceInstances}

Let invServiceList(Appi) be a function which returns the list of service instances invoked by the mobile application, and appList(Sj^k) be a function which returns the list of mobile applications which invoke the service instance. And, let numInvokingApps(Appi, Sj^k) be the total number of mobile applications invoking the service instance.

Finally, a relationship between service types and service instances, «realizes», indicates that a service instance realizes the functionality described in a service type. Let realizeRel be a relation between the two sets setServiceInstances and setServiceTypes:

realizeRel ⊆ setServiceInstances × setServiceTypes = {(x, y) | x ∈ setServiceInstances and y ∈ setServiceTypes}

Let serviceInstanceList(Ti) be a function which returns the list of service instances which realize the service type Ti.
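For illustration, the four relationships and the helper functions named above can be sketched as plain sets and functions; the sample tuples are hypothetical and only loosely follow Figure 1.

```python
# Sketch of the SME relationships as sets of pairs, with the helper functions named in the text.
manageRel  = {("Agent1", "SNode1"), ("Agent2", "MNode1")}                  # (agent, node)
deployRel  = {("SNode1", "S1^1"), ("SNode2", "S1^2"), ("MNode1", "App1")}  # (node, instance or app)
invokeRel  = {("App1", "S1^1"), ("App2", "S1^1"), ("App3", "S1^2")}        # (application, instance)
realizeRel = {("S1^1", "T1"), ("S1^2", "T1")}                              # (instance, type)


def invServiceList(app):
    """Service instances invoked by the mobile application."""
    return sorted(s for (a, s) in invokeRel if a == app)


def appList(instance):
    """Mobile applications that invoke the service instance."""
    return sorted(a for (a, s) in invokeRel if s == instance)


def numInvokingApps(instance):
    """Number of mobile applications invoking the service instance
    (the text writes numInvokingApps(Appi, Sj^k); only the instance is needed here)."""
    return len(appList(instance))


def serviceInstanceList(service_type):
    """Service instances realizing the service type."""
    return sorted(s for (s, t) in realizeRel if t == service_type)


print(appList("S1^1"))              # ['App1', 'App2']
print(serviceInstanceList("T1"))    # ['S1^1', 'S1^2']
```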

IV. MEASURING QUALITY OF SME

One of the key features of SME is the ability to measure the overall quality of the ecosystem, make a remedy plan if the quality falls below the preset threshold value, and perform the remedy actions in the plan. Hence, we need to define a quality model and metrics for evaluating SME.

Quality models are typically defined with a number of quality attributes and their metrics. However, in this paper, we only consider efficiency, as defined in ISO/IEC 9126 [13], among the several quality attributes of the SME quality model, to present how the overall quality of SME can be synthesized from more specific quality attributes. Other quality attributes can be treated in the same way as efficiency is handled.

Efficiency is typically measured with Response Time, Throughput, and/or Turnaround Time, but we opt to use Response Time to calculate efficiency.

To measure the overall quality of SME, we propose four levels of performance: Application Level, Service Instance Level, Service Type Level, and SME Level. When measuring response time at each level, we calculate an average value and a standard deviation value.

A. Measuring Performance at Application Level

The first level of performance, Response Time for Mobile Application, measures the time between when Appi invokes Sj^k and when Sj^k returns its result to Appi, called Application Response Time (ART). ART(Appi, Sj^k) indicates how fast the mobile application can receive a result from a service instance, as shown in Figure 4. Here, (Appi, Sj^k) is a member of invokeRel.

Figure 4. Application Level of Performance, ART

Typically, a response time covers the duration between requesting functionality and returning the results [13]. By adopting this concept, ART(Appi, Sj^k) is evaluated by considering three parts: a request time from Appi to Sj^k, indicating the time to send the invocation to Sj^k; the processing time of Sj^k; and a response time from Sj^k to Appi, indicating the time to receive the results from Sj^k.

ART(Appi, Sj^k) = TransmissionCost(Appi, Sj^k) + ComputationCost(Sj^k) + TransmissionCost(Sj^k, Appi)

Here, TransmissionCost(A, B) is measured as the time from when A invokes B until B receives the message from A, and ComputationCost(A) is measured as the time from when A starts performing its functionality until A finishes performing it.

The value range of ART(Appi, Sj^k) is 0…∞. A larger value of ART(Appi, Sj^k) means that Appi waits for a long time to receive results from Sj^k. In SME, a lower value of ART(Appi, Sj^k) is preferred.

For example, let us consider the situation in Table II. Table II shows the required values to calculate ART.

TABLE II. EXAMPLE OF ART

Invocation | TransmissionCost(Appi, Sj^k) | ComputationCost(Sj^k) | TransmissionCost(Sj^k, Appi)
(App1, S1^1) | 0.12 | 0.55 | 0.13
(App3, S1^1) | 0.21 | 0.55 | 0.28

By using the equation for ART, we obtain that ART(App1, S1^1) is 0.8s and ART(App3, S1^1) is 1.04s.

B. Measuring Performance at Service Instance Level

The second level of performance, Performance for Service Instance, measures the overall performance of a service instance. For this, we consider two metrics, AIRT (Average Instance Response Time) and SDIRT (Standard Deviation of Instance Response Time). A human administrator decides the period for evaluating SME in advance; the period may range from seconds to hours. Within the given period, the same application can invoke the service instance multiple times. Hence, AIRT and SDIRT should consider the number of invocations to the service instance.

Figure 5. Service Instance Level of Performance, AIRT and SDIRT

AIRT(Sj^k) is the average over all incoming invocations to a service instance during a specific period, which indicates how fast the service instance returns its results, as shown in Figure 5. AIRT(Sj^k) is evaluated by taking the average of all ART(Appi, Sj^k) values, as follows:

AIRT(Sj^k) = ( Σ_{i=1..num} Σ_{a=1..numInvocations} ART_a(Appi, Sj^k) ) / n

Here, ART_a denotes the ART of the a-th invocation, num is the return value of the function numInvokingApps(Appi, Sj^k), numInvocations is the number of invocations to the service instance by each application for the given period, and n is obtained by adding numInvocations over all invoking applications. Note that this metric considers all the mobile applications which invoke Sj^k; that is, Appi is a member of appList(Sj^k). And, for the given period, we should consider numInvocations in this metric since the same mobile application can invoke the service instance multiple times.

The value range of AIRT(Sj^k) is 0…∞. A larger value of AIRT(Sj^k) means that Sj^k spends a longer period of time returning its results.

As the second metric, SDIRT(Sj^k) is the standard deviation over all incoming invocations to a service instance for the given period, which indicates how much the response times of the service instance fluctuate, as shown in Figure 5. It is evaluated by using the following equation:

SDIRT(Sj^k) = sqrt( Σ_{i=1..n} ( ART(Appi, Sj^k) − AIRT(Sj^k) )^2 )

The value range of SDIRT(Sj^k) is 0…∞. A lower value of SDIRT(Sj^k) means that Sj^k returns its results within response times close to the AIRT. Like AIRT, this metric also considers multiple invocations to the service instance for the given period.

In summary, as AIRT and SDIRT take lower values, the overall quality of SME increases.

TABLE III. EXAMPLE OF AIRT AND SDIRT

Value of ART | AIRT | SDIRT
ART(App1, S1^1) = 0.8s, ART(App2, S1^1) = 1.13s | AIRT(S1^1) = 0.97 | SDIRT(S1^1) = 0.23
ART(App3, S1^2) = 0.5s, ART(App4, S1^2) = 1.47s | AIRT(S1^2) = 0.99 | SDIRT(S1^2) = 0.69

For example, let us consider the situation in Table III. For five seconds, S1^1 is invoked by App1 and App2, and S1^2 is invoked by App3 and App4. By using the two equations, we acquire AIRT(S1^1), AIRT(S1^2), SDIRT(S1^1), and SDIRT(S1^2). With the results, we can conclude that S1^1 returns its results quickly and within quite similar response times. Hence, the SME Manager can decide that the quality of S1^1 is better than that of S1^2.

C. Measuring Performance at Service Type Level

The third level of performance, Performance for Service Type, measures the overall performance of a service type. Like the second level, we consider two metrics: ASRT (Average Service Type Response Time) and SDSRT (Standard Deviation of Service Type Response Time).

ASRT(Tj) is the average over all incoming invocations to all service instances of a service type during a specific period, which indicates how fast all the service instances of the service type return their results, as shown in Figure 6.


Figure 6. Service Type Level of Performance, ASRT and SDSRT

ASRT(Tj) is evaluated by taking the average of all AIRT(Sj^k) values, as follows. Note that Sj^k is a member of serviceInstanceList(Tj).

ASRT(Tj) = ( Σ_{k=1..n} AIRT(Sj^k) ) / n

Here, n is the number of members in serviceInstanceList(Tj).

As the second metric, SDSRT(Tj) is the standard deviation over all incoming invocations to all service instances realizing a service type Tj for the given period, which indicates how much the response times of all service instances of the service type fluctuate, as shown in Figure 6. It is evaluated by using the following equation:

SDSRT(Tj) = sqrt( Σ_{k=1..n} ( AIRT(Sj^k) − ASRT(Tj) )^2 )

The value range of SDSRT(Tj) is 0…∞. A lower value of SDSRT(Tj) means that all the instances of Tj return their results within response times close to the ASRT.

For example, let us consider the situation observed for 10 seconds in Table IV. Using the formulas, we acquire ASRT and SDSRT. With the results, we can conclude that T1 returns its results more quickly than T2, while T2 shows a lower deviation than T1.

TABLE IV. EXAMPLE OF ASRT AND SDSRT

Value of AIRT | ASRT | SDSRT
AIRT(S1^1) = 0.92s, AIRT(S1^2) = 1.47s | ASRT(T1) = 1.195 | SDSRT(T1) = 0.3889
AIRT(S2^1) = 1.18s, AIRT(S2^2) = 1.53s | ASRT(T2) = 1.355 | SDSRT(T2) = 0.247

D. Measuring Performance at Ecosystem Level

Performance for Ecosystem measures the overall performance of SME. We consider two metrics: AERT (Average SME Response Time) and SDERT (Standard Deviation of SME Response Time).

AERT(SME) is the average over all incoming invocations to all service instances of all service types in the current configuration of SME during a specific period, which indicates how fast all types of services are provided to mobile applications, as shown in Figure 7.

AERT(SME) is evaluated by taking the average of all ASRT(Tj) values, as follows:

AERT(SME) = ( Σ_{j=1..n} ASRT(Tj) ) / n

Here, n is the number of service types in the current configuration of SME.

Figure 7. Ecosystem Level of Performance, AERT and SDERT

SDERT(SME) is the standard deviation over all incoming invocations to all service instances of all service types for the given period. It indicates how much the response times delivered by all service types to mobile applications fluctuate, as shown in Figure 7. It is evaluated with this equation:

SDERT(SME) = sqrt( Σ_{j=1..n} ( ASRT(Tj) − AERT(SME) )^2 )

The value range of SDERT(SME) is 0…∞. A lower value of SDERT(SME) means that all the services in the current SME return their results within response times close to the AERT.
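To illustrate how the instance-level values roll up to the type and ecosystem levels, the following sketch (ours; names are assumptions) applies the same averaging and deviation pattern to the Table IV figures.

```python
# Sketch rolling AIRT values up to the service-type and ecosystem levels.
from math import sqrt


def mean(xs):
    return sum(xs) / len(xs)


def spread(xs):
    """Square root of the summed squared deviations from the mean."""
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs))


airt_by_type = {"T1": [0.92, 1.47], "T2": [1.18, 1.53]}     # AIRT values per service type (Table IV)

asrt = {t: mean(v) for t, v in airt_by_type.items()}        # ASRT(Tj)
sdsrt = {t: spread(v) for t, v in airt_by_type.items()}     # SDSRT(Tj)
print(asrt, sdsrt)    # ASRT(T1) ≈ 1.195, SDSRT(T1) ≈ 0.389; ASRT(T2) ≈ 1.355, SDSRT(T2) ≈ 0.247

aert = mean(list(asrt.values()))                            # AERT(SME)
sdert = spread(list(asrt.values()))                         # SDERT(SME)
print(aert, sdert)
```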

V. AUTONOMOUS MANAGEMENT PROCESS OF SME

We define a four-activity process model to manage SME by applying a dynamic architecture, as shown in Figure 8. The process is applied continuously to maintain a consistent level of overall quality.

Figure 8. Main Activities in SME Process (Loading SME Initial Configuration, Evaluating Quality of SME, Defining Quality Improvement Plan, Taking Quality Improvement Actions, Reconfiguring SME)

A. Evaluating Quality of SME

This activity evaluates the quality of SME by monitoring the services in the ecosystem. To detect abnormal states, including degradation of the overall quality of the SME, it is necessary to continuously monitor the current state of SME.

SME Agents first monitor the quality of the services deployed on their nodes, especially response time, and deliver the information to the SME Manager. The SME Manager collects the quality data from multiple SME Agents to evaluate the overall quality of SME. After collecting the data for a given period of time, the SME Manager evaluates the overall quality of SME in terms of ERT(SME) and SDERT(SME) to determine whether the quality of SME is deteriorating or is excessively high.

To decide whether the current quality of SME is acceptable or not, we utilize two pre-defined threshold values: low_threshold, the lower-bound threshold value, and high_threshold, the upper-bound threshold value. These values are determined by using historical data accumulated in the SME Repository.

If ERT(SME) and/or SDERT(SME) is greater than high_threshold, the current state of SME is poor. And, if ERT(SME) and SDERT(SME) are lower than low_threshold, the current state of SME shows excessively high efficiency, indicating wasted resources; that is, there is a high possibility that unnecessary service instances are deployed. In these cases, the SME Manager identifies the problematic or unnecessary instances by analyzing AIRT, SDIRT, ASRT, and SDSRT, and performs the next activity to autonomously improve the quality.

Otherwise, the SME Agents continue to monitor services and nodes, and the SME Manager keeps evaluating the overall quality of the current SME.
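A minimal sketch of this acceptability check is shown below; the threshold constants and the SDERT inputs are illustrative assumptions, while the 1.5 threshold and the ERT values come from the experiment in Section VI.

```python
# Sketch of the quality evaluation decision; values are illustrative.
LOW_THRESHOLD = 0.5    # lower bound: values below this suggest over-provisioning
HIGH_THRESHOLD = 1.5   # upper bound: values above this mean quality is poor


def evaluate_sme(ert, sdert):
    """Classify the current SME state from ERT(SME) and SDERT(SME)."""
    if ert > HIGH_THRESHOLD or sdert > HIGH_THRESHOLD:
        return "poor"                   # trigger the planning activity (Section V.B)
    if ert < LOW_THRESHOLD and sdert < LOW_THRESHOLD:
        return "excessively efficient"  # unnecessary instances may be removed
    return "acceptable"                 # keep monitoring


print(evaluate_sme(1.59, 0.40))   # 'poor', as at t2 in the experiment of Section VI
print(evaluate_sme(1.24, 0.30))   # 'acceptable', as at t3
```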

B. Defining Quality Improvement Plan

This activity makes a plan for improving quality by defining a new architectural configuration.

Figure 9. Making Quality Improvement Plan (quality improvement actions: service migration, service replication, service rerouting, releasing resources)

The remedy plan includes tasks of migrating services, replicating services, or removing resources as shown in Figure 9.

If the current state of SME is determined to show excessively high efficiency (i.e., ERT(SME) and SDERT(SME) are lower than low_threshold), the remedy plan is quite simple: the unnecessary instances are just removed from the configuration of SME.

However, if the current state is in low efficiency, making a remedy plan is more complicated. The remedy plan can be formally defined by applying Directed Acyclic Graph (DAG) algorithms. The SME Manager first expresses the current configuration as a DAG, where vertices in the first level come from the members of setApps, vertices in the other levels come from the members of setNodes, and edges between vertices come from deployRel and invokeRel. Edge weights are defined using the weight values of the vertices, and the weight value of a vertex is determined by the capacity of its available resources and the number of services deployed on the node. By applying a shortest-path algorithm, the SME Manager finds nodes to which services should be migrated or replicated.

Then, the SME Manager determines the need for service migration or replication. This determination depends on the amount of available resources in the problematic nodes or in the overall SME. If the resources in the nodes or in the SME are not sufficient, service migration, where the service instance is moved to another node and removed from the original node, is preferred. Otherwise, service replication, where the service instance is copied to another node while being maintained on the original node, is chosen.
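The following sketch illustrates this planning step under our own assumptions: the configuration is reduced to a weighted graph, a shortest-path search (Dijkstra here, standing in for whichever shortest-path variant is applied to the DAG) selects a candidate node, and the migrate-versus-replicate rule follows the paragraph above. The node names and weights are hypothetical.

```python
# Sketch of the planning step: shortest-path candidate selection plus the
# migrate-vs-replicate rule. Edge weights stand in for the vertex weights
# (available resources, number of deployed services) described in the text.
import heapq


def cheapest_target(edges, source, candidates):
    """Dijkstra over edges: {node: [(neighbor, weight)]}, returning the best candidate."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return min(candidates, key=lambda n: dist.get(n, float("inf")))


def choose_action(resources_sufficient):
    """Replicate when the problematic node / SME still has enough resources,
    otherwise migrate (move the instance and remove the original copy)."""
    return "replicate" if resources_sufficient else "migrate"


# Hypothetical configuration: App4 can reach the overloaded SNode2 and a fresh SNode4.
edges = {"App4": [("SNode2", 3.0), ("SNode4", 1.0)], "SNode2": [("SNode4", 1.0)]}
print(cheapest_target(edges, "App4", ["SNode2", "SNode4"]))   # SNode4
print(choose_action(resources_sufficient=False))              # migrate
```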

C. Taking Quality Improvement Actions

This activity executes the quality remedy plan. Service migration and replication methods are already available, and they can be utilized here.

D. Reconfiguring SME

This activity updates the SME Repository with the new configuration. The SME Manager stores the modified information on service instances and nodes to the SME Repository and establishes the «invokes» relationships.

VI. EXPERIMENT

To validate our SME architecture, we perform an experiment with the configuration as shown in Figure 10. In the SME configuration at t1, there are two mobile nodes, two station nodes, four mobile applications, and five service instances.

Figure 10. Configuration for the Experiment

And, there are the following «invokes» relations among the elements:

invokeRel = {(App1, S1^1), (App1, S2^2), (App1, S4^1), (App2, S1^2), (App2, S2^2), (App3, S1^1), (App3, S2^2), (App3, S4^1), (App4, S1^2), (App4, S1^3)}

With this configuration, we measure AIRT, ASRT, and ERT at t1, as shown in Table V.

TABLE V. AIRT, ASRT, AND ERT AT t1

Value of AIRT | ASRT | ERT
AIRT(S1^1) = 0.92s, AIRT(S1^2) = 1.06s, AIRT(S1^3) = 1.27s | ASRT(T1) = 1.08 | ERT(SME) = 1.25
AIRT(S2^2) = 1.04s | ASRT(T2) = 1.04 |
AIRT(S4^1) = 1.84s | ASRT(T4) = 1.84 |


At t2, the total ERT(SME) is 1.59, which is greater than the pre-defined threshold value of 1.5, as shown in Figure 11.

Figure 11. SRT and ERT at t1, t2, and t3 (ERT(SME) is 1.25 at t1, 1.59 at t2, and 1.24 at t3, against the threshold value of 1.5)

The SME Manager analyzes this problem and realizes that it is caused by the suddenly increased value of IRT(S1^2), which changed from 1.06 to 3.68, as shown in Figure 12. Hence, the SME Manager decides that S1^2 should migrate from SNode2 to another node, SNode4, and the SME Agent performs the migration.

Figure 12. IRT and SRT at t1, t2, and t3 (IRT(S1^2), the main cause, jumps from 1.06 at t1 to 3.68 at t2)

After the migration, the SME Manager checks that ERT(SME) at t3 is 1.24, which is lower than the threshold value.

VII. CONCLUSION

As a common form of remedying the resource constraints of mobile devices, service-based mobile computing is emerging. However, there still exist problems in this paradigm: a lack of stability and performance, and limited manageability. As a solution, we presented a novel approach to resolving these problems and providing an effective computing environment, called the Service-based Mobile Ecosystem (SME).

In this paper, we showed that SME is an ecosystem which exhibits self-stabilizing behavior by continuously monitoring the quality of services, making a quality enhancement plan, and performing the necessary service migrations and replications. In SME, mobile devices are utilized as nodes on which services are dynamically deployed on demand.

We also defined quality attributes and metrics for evaluating the overall quality of SME, and performed experiments with the proposed process for self-stabilization and with the quality model. From the experiments, we concluded that the claimed benefits of SME are valid. In particular, it was shown that the service management process of SME maintains a consistent level of quality. By adopting the computing model of SME, we believe that super mobile computing without the manageability problem can be realized.

ACKNOWLEDGMENT

This research was supported by the National IT Industry Promotion Agency (NIPA) under the program of Software Engineering Technologies Development. And, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2009-0076392).

REFERENCES

[1] König-Ries, B. and Jena, F., “Challenges in Mobile Application Development,” it-Information Technology, Vol. 52, No. 2, pp. 69-71, 2009.

[2] Forman, G.H and Zahorjan, J, “The Challenges of Mobile Computing,” Computer, Vol. 27, No. 4, pp. 38-47, 1994.

[3] Ennai A and Bose S, “MobileSOA: A Service Oriented Web 2.0 Framework for Context-Aware, Lightweight and Flexible Mobile Applications,” In Proceedings of the 2009 12th Enterprise Distributed Object Computing Conference Workshop (EDOCW 2008), pp. 348-382, 2008.

[4] Tergujeff, R., Haajanen, J., Leppanen, J., and Toivonen, S., “Mobile SOA: Service Orientation on Lightweight Mobile Devices,” In Proceedings of 2007 IEEE International Conference on Web Services (ICWS 2007), pp. 1224-1225, 2007.

[5] Natchetoi, Y., Kaufman, V., and Shapiro, A., “Service-Oriented Architecture for Mobile Applications,” In Proceedings of the 1st international workshop on Software architectures and mobility (SAM ’08), pp. 27-32, 2008.

[6] Unhelkar, B. and Murugesan, S., “The Enterprise Mobile Application Development Framework,” IT Professional, Vol. 12, No. 3, pp. 33-39, 2010.

[7] Malek, S., et al., “An Architecture-driven Software Mobility Framework,” The Journal of Systems and Software, Vol. 83, pp. 972-989, 2010.

[8] Chun, B.G. and Maniatis, P., “Dynamically Partitioning Applications between Weak Devices and Clouds,” In Proceedings of the 1st ACM Workshop on Mobile Cloud Computing & Services: Social Networks and Beyond (MCS 2010), Article No. 7, 2010.

[9] Han, S., Zhang, S., Zhang, Y., and Fan, C., “An Adaptable Software Architecture based on Mobile Components in Pervasive Computing,” In Proceedings of the 6th International Conference on Parallel and Distributed Computing, Applications, and Techniques (PDCAT 2005), pp. 309-311, 2005.

[10] Wang Q and Deters R, “SOA’s Last Mile - Connecting Smartphones to the Service Cloud,” In Proceedings of 2009 IEEE International Conference on Cloud Computing (CLOUD 2009), pp. 80-87, 2009.

[11] Thanh, D. and Jorstad, I., “A Service-Oriented Architecture Framework for Mobile Services,” In Proceedings of the Advanced Industrial Conference on Telecommunications/Service Assurance with Partial and Intermittent Resources Conference / E-Learning on Telecommunications Workshop (AICT/SAPIR/ELETE'05), pp. 65-70, 2005.

[12] Salehie, M. and Tahvildari, L., “Self-Adaptive Software: Landscape and Research Challenges,” ACM Transactions on Autonomous and Adaptive Systems, Vol. 4, No. 2, Article 14, 2009.

[13] ISO/IEC, ISO-IEC 9126-1 Software Engineering – Product Quality – Part 1: Quality Model, 2001.
