Research Article
PRESENCE: Monitoring and Modelling the Performance Metrics of Mobile Cloud SaaS Web Services

Abdallah A. Z. A. Ibrahim,1 Muhammad Umer Wasim,1,2 Sebastien Varrette,1 and Pascal Bouvry1,2

1 FSTC-CSC/ILIAS, Parallel Computing and Optimization Group (PCOG), University of Luxembourg, 2 Avenue de l'Université, L-4365 Esch-sur-Alzette, Luxembourg
2 Interdisciplinary Centre for Security, Reliability and Trust (SnT), Luxembourg City, Luxembourg

Correspondence should be addressed to Abdallah A. Z. A. Ibrahim; abdallah.ibrahim@uni.lu

Received 18 January 2018; Revised 4 May 2018; Accepted 21 May 2018; Published 14 August 2018

Academic Editor: Andrea Gaglione

Copyright © 2018 Abdallah A. Z. A. Ibrahim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Service Level Agreements (SLAs) define the quality of the services delivered from the Cloud Services Providers (CSPs) to the cloud customers. The services are delivered on a pay-per-use model. The quality of the provided services is not guaranteed by the SLA because it is just a contract. The developments around mobile cloud computing and the advent of edge computing technologies are contributing to the diffusion of cloud services and the multiplication of offers. Although the cloud services market is expected to keep growing in the coming years, there is unfortunately no standard mechanism to verify and assure, in an automatic way, that delivered services satisfy the signed SLA. Accurate monitoring and modelling of the provided Quality of Service (QoS) is also missing. In this context, we aim at offering an automatic framework, named PRESENCE, to evaluate the QoS and SLA compliance of Web Services (WSs) offered across several CSPs. Unlike other approaches, PRESENCE aims at quantifying in a fair and stealth way the performance and scalability of the delivered WS. This article focuses on the first experimental results obtained on the accurate modelling of each individual performance metric. Indeed, 19 generated models are provided, out of which 78.9% accurately represent the WS performance metrics for the two representative SaaS web services used for the validation of the PRESENCE approach. This opens novel perspectives for assessing the SLA compliance of cloud providers using the PRESENCE framework.

1. Introduction

As per the NIST definition [1], Cloud Computing (CC) is a recent computing paradigm for "enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." These resources are operated by a Cloud Services Provider (CSP), which typically delivers its services using one of three traditional models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS) [2, 3]. This classification has since evolved to take into account the federation of more and more diverse computing resources. For instance, recent developments around Fog and Edge computing have enlarged the scope of CC towards Mobile CC, which offers new types of services and facilities to mobile users [4-7]. This leads to stronger business perspectives, bringing more and more actors into the competition as CSPs.

In this article, we focus on the performance evaluation of Web Services (WSs) deployed in the context of the SaaS model by these actors acting as CSPs. These services can be used through cloud users' mobile devices or regular computers [8, 9]. In practice, WSs are delivered to the cloud customers on a pay-per-use model, while the performance and Quality of Service (QoS) of the provided services are defined using service contracts or Service Level Agreements (SLAs) [10, 11]. In particular, SLAs define the conditions and characteristics of the provided WS, its costs, and the penalties encountered when the expected QoS is not met [12, 13]. Unfortunately, there is no standard mechanism to verify and assure that delivered services satisfy the signed SLA. Accurate measures of the provided QoS are also missing most of the time, which makes it even more difficult to evaluate different CSPs on a fair basis. The ambition of the proposed PRESENCE framework (PeRformance Evaluation of SErvices on the Cloud) is to fill this gap by offering an automated approach to evaluate and classify, in a fair and stealth way, the performance and scalability of the delivered WS across multiple CSPs.

In this context, the contributions of this paper are fourfold:

(1) The presentation of the PRESENCE framework, the reasoning behind its design, and its organization.

(2) The definition of the different modules composing the framework, based on a Multi-Agent System (MAS) acting behind the PRESENCE client, which aims at tracking and modeling the WS performance using a predefined set of common performance metrics.

(3) The validation and first experimental results of this module over two representative WSs relying on several reference backend services at the heart of most WSs: Apache HTTP, Redis, Memcached, MongoDB, and PostgreSQL.

(4) The cloud Web Service (WS) performance metric models, such as throughput, transfer rate, and latency (read and write), for HTTP, Redis, Memcached, MongoDB, and PostgreSQL.

The appropriate performance modeling depicted in this paper is crucial for the accuracy and dynamic adaptation expected within the stealth module of PRESENCE, which will be the object of another article. This article is an extended version of our paper presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018 [14], which received the best paper award. Compared to this initial paper, the present article details the modelling of the Web Service performance metrics and brings 19 generated models, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs.

This paper is organized as follows. Section 2 details the background of this work and reviews related works. The PRESENCE framework is described in Section 3, together with some implementation details. We then focus on the validation of the monitoring module on several reference backend services at the heart of most WSs; details and experimental results are discussed in Section 4. Finally, Section 5 concludes the paper and provides some future directions and perspectives opened by this study.

2. Context and Motivations

As mentioned before, an SLA defines the conditions and characteristics of a given WS, its costs, and the penalties encountered when the expected QoS is not met. Measuring the performance of a given WS is therefore key to evaluating whether or not the corresponding SLA is satisfied, especially from a user point of view, since the user can then request penalties from the CSP. However, accurate measures of the provided Quality of Service (QoS) are missing most of the time, as performance evaluation is challenging in a cloud context, considering that end users do not have full control of the system running the service. In this context, Stantchev in [15] provides a generic methodology for the performance evaluation of cloud computing configurations based on the Non-Functional Properties (NFPs), such as response time, of individual services. Yet none of the steps were clearly detailed, and the evaluation is based on a single benchmark measuring a single metric. Lee et al. in [16] propose a comprehensive model for evaluating the quality of SaaS by defining the key features of SaaS, deriving the quality attributes from those features, and defining the metrics for the quality attributes. This model serves as a guideline for SaaS providers to characterize and improve the provided QoS but does not address user-based evaluation. We nevertheless used part of the proposed ontology to classify our own performance metrics. Gao et al. [17] propose a Testing-as-a-Service (TaaS) infrastructure and report a cloud-based TaaS environment with tools (known as CTaaS) developed to meet the needs in SaaS testing, performance, and scalability evaluation. One drawback of this approach is that its deployment cannot be hidden from the CSP, which might in return maliciously increase the capacity of the allocated resources to artificially bias the evaluation in favor of its own offering. Wen and Dong in [18] propose quality characteristics and standards for the security and the QoS of SaaS services. Unfortunately, the authors did not propose any practical steps to evaluate the cloud services but only a set of recommendations.

Beyond pure performance evaluation, and to the best of our knowledge, the literature around SLA assurance and violation monitoring is relatively sparse. Cicotti et al. in [19, 20] propose a quality of service monitoring approach and SLA violation reporter based on API queries and events. Called QoSMONaaS, this approach measures the Key Performance Indicators (KPIs) of the services provided by the CSPs to the cloud customers. Ibrahim et al. in [10, 21] provide a framework to assure the SLA and evaluate the performance of cloud applications; they use simulation and local scenarios to test the cloud applications and services. Hammadi and Hussain in [22] propose an SLA monitoring framework operated by a third party, which assesses the QoS and assures the performance and the absence of SLA violations. Nevertheless, none of the abovementioned approaches feature the dynamic adaptation of the evaluation campaign as foreseen within the PRESENCE proposal.

Finally, as regards CSP ranking and classification, Wagle et al. in [23, 24] provide a ranking based on the estimation and prediction of the quality of the cloud provider services. The performance estimation model is based on prediction methods such as ARIMA and ETS.

Motivated by recent scandals in the automotive sector, which demonstrate the capacity of solution providers to adapt the behaviour of their product when submitted to an evaluation campaign in order to improve the performance results, this article presents PRESENCE, which aims at covering a large set of real benchmarks contributing to all aspects of the performance analysis while hiding the true nature of the evaluation from the CSP. Our proposal is detailed in the next section.

3. PRESENCE: Performance Evaluation of Services on the Cloud

An overview of the PRESENCE framework is proposed in Figure 1 and is now depicted. It is basically composed of five main components (a minimal code sketch of this organization is given after the list):

(1) A set of agents, each of them responsible for a specific performance metric measuring a specific aspect of the WS QoS. Those metrics have been designed to reflect scalability and performance in a representative cloud environment. In practice, several reference benchmarks have been considered and evaluated to quantify this behaviour.

(2) The monitoring and modeling module, responsible for collecting the data from the agents, which is used together with an application performance model [25] to assess the performance metric model.

(3) The stealth module, responsible for dynamically adapting and balancing the workload pattern of the combined metric agents to make the resulting traffic indistinguishable from regular user traffic from the CSP point of view.

(4) The virtual QoS aggregator and SLA checker module, which takes care of evaluating the QoS and SLA compliance of the WS offered across the considered CSPs. This ranking will help decision makers determine which provider is better to use for the analyzed WS.

(5) Finally, the PRESENCE client (or Auditor), responsible for interacting with the selected CSPs and evaluating the QoS and SLA compliance of Web Services. It is meant to behave as a regular client of the WS and can eventually be distributed across several parallel instances, even if our first implementation operates a single sequential client.
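To make the organization above concrete, the following minimal Python sketch illustrates one possible structure for the agents and the client. The class and method names (MetricAgent, run_once, PresenceClient) are illustrative assumptions and not part of the actual PRESENCE implementation.

# Illustrative structural sketch only; hypothetical names, not the PRESENCE code base.
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class MetricAgent:
    """One agent wraps one benchmark and reports one performance metric."""
    name: str                                    # e.g. "ycsb-throughput"
    benchmark: Callable[[dict], float]           # runs one benchmark iteration, returns a measurement
    workload: dict = field(default_factory=dict) # input knobs (records, threads, ...)
    samples: List[float] = field(default_factory=list)

    def run_once(self) -> float:
        value = self.benchmark(self.workload)
        self.samples.append(value)
        return value

class PresenceClient:
    """The auditor: schedules agents against a CSP and feeds the monitoring module."""
    def __init__(self, agents: List[MetricAgent]):
        self.agents = agents

    def evaluate(self, runs: int = 100) -> Dict[str, float]:
        # Average each metric over `runs` executions for statistical significance.
        for _ in range(runs):
            for agent in self.agents:
                agent.run_once()
        return {a.name: mean(a.samples) for a in self.agents}

In the full framework, the stealth module would additionally reshuffle the agents' workload parameters between iterations so that the aggregate traffic resembles that of a regular client.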

3.1. The PRESENCE Agents. The PRESENCE approach is used to collect data which represent the behaviour of the CSP and reflect the performance of the delivered services. In this context, the PRESENCE agents are responsible for a set of performance metrics measuring specific aspects of the WS QoS. Those metrics have been designed to reflect scalability and performance in a representative cloud environment, covering the different criteria summarized in Table 1. The implementation status and coverage of these metrics within PRESENCE at the time of writing are also detailed there.

Most of these metrics are measured through a set of reference benchmarks, and each agent is responsible for a specific instance of one of these benchmarks. Then a multiobjective optimization heuristic is applied to evaluate the audited WS according to the different performance domains raised by the agents. In practice, a low-level hybrid approach combining Machine Learning (for deep and reinforcement learning) and evolutionary-based metaheuristics compensates the weaknesses of one method with the strengths of the other. More specifically, a Genetic Programming Hyper-heuristic (GPHH) approach will be used to automatically generate heuristics using building blocks extracted from the problem definition and the benchmark domains. Such a strategy has been employed with success in [26, 27], and we are confident it can be efficiently applied to the context of this work. It is worth noting that the metrics marked as not yet implemented within PRESENCE at the time of writing are linked to the cost and the availability of the checked service. The current paper validates the approach against a set of classical WSs deployed in a local environment and introduces a detailed modelling and evaluation of the performance metrics.

As regards the stealth module, PRESENCE aims at relying on a Genetic Algorithm (GA) [28] approach to mitigate and adapt the concurrent executions of the different agents by evolving their respective parameters, and thus the load pattern visible to the CSP. An Oracle is checked upon each iteration of the GA, based on a statistical correlation of the resulting pattern against a reference model corresponding to a regular usage. When this oracle is unable to statistically distinguish the outgoing modeled pattern of the client from a regular client, we consider that one of the found solutions can be applied to a real evaluation campaign of the checked CSP. Finally, the virtual QoS aggregator and SLA checker rely on the CSP ranking and classification proposed by Wagle et al. in [23, 24].
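The oracle step described above can be approximated with standard two-sample statistics. The sketch below is one plausible realization, assuming a request-rate trace as the compared quantity; the reference trace, the candidate trace, and the acceptance threshold are illustrative assumptions, not the actual PRESENCE oracle.

# Hypothetical oracle check: is the candidate load pattern statistically
# indistinguishable from a reference "regular user" pattern?
import numpy as np
from scipy import stats

def oracle_accepts(reference: np.ndarray, candidate: np.ndarray,
                   alpha: float = 0.05) -> bool:
    """Return True when the two request-rate traces cannot be told apart."""
    # Two-sample Kolmogorov-Smirnov test on the distribution of request rates.
    _, p_value = stats.ks_2samp(reference, candidate)
    return p_value > alpha  # cannot reject "same distribution" => stealthy enough

# Toy usage with synthetic traces (requests per second, sampled once per second).
rng = np.random.default_rng(42)
regular_user = rng.poisson(lam=20, size=600)   # reference model of a regular client
ga_candidate = rng.poisson(lam=21, size=600)   # pattern produced by one GA individual
print(oracle_accepts(regular_user, ga_candidate))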

3.2. Monitoring and Modeling for the Dynamic Adaptation of the Agents. The first step to ensure the dynamic adaptation of the workload linked to the evaluation process resides in the capacity to model this workload accurately, based on the configuration of a given agent. Modelling the performance metrics will also help other researchers generate data representing the CSP's behaviour under high load and under normal usage in just a couple of minutes, without running any experiments. In this context, the multiple runs of the agents are stored and analyzed in a Machine Learning process. During the training part, the infrastructure model representing the CSP side, which hosts the SaaS services, is first virtualized locally to initiate the first collection of data samples and to set up the PRESENCE client (i.e., the auditor) in a representative environment. The second phase of the training involves "real" runs permitting the modeling of each metric. In practice, PRESENCE relies on a simulation software called Arena [29] to analyse the data returned from the agents and obtain the model for each individual performance metric. Arena is a simulation software by Rockwell Corporation. It is used in different application domains, from manufacturing to supply chains and from customer service and strategies to internal business processes. It includes three modules, respectively called the Arena Input Analyser, Output Analyser, and Process Analyser. Among the three, the Input Analyser is useful for determining an appropriate distribution for collected data. It allows the user to take a sample set of raw data (e.g., the latency of a cloud-based WS) and fit it to a statistical distribution.


This distribution can then be incorporated directly into a model to develop and understand the corresponding system performance.
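PRESENCE uses Arena's Input Analyser for this fitting step. As a rough open-source analogue (an assumption for illustration, not what the authors used), the same idea can be sketched with scipy by fitting several candidate distribution families to the collected samples and keeping the best one according to a goodness-of-fit criterion.

# Sketch of an Input-Analyser-like fitting step with scipy (illustrative only).
import numpy as np
from scipy import stats

def fit_best_distribution(samples: np.ndarray):
    """Fit a few candidate families and return the best (name, frozen dist, KS p-value)."""
    candidates = {
        "beta": stats.beta,
        "gamma": stats.gamma,      # an Erlang distribution is a gamma with integer shape
        "lognorm": stats.lognorm,
        "norm": stats.norm,
    }
    best = None
    for name, dist in candidates.items():
        try:
            params = dist.fit(samples)               # maximum-likelihood fit
        except Exception:
            continue                                 # skip families that fail to converge
        frozen = dist(*params)
        _, p_value = stats.kstest(samples, frozen.cdf)   # goodness of fit
        if best is None or p_value > best[2]:
            best = (name, frozen, p_value)
    return best

# Toy usage on synthetic latency samples (arbitrary units).
rng = np.random.default_rng(0)
latency = rng.gamma(shape=2.39, scale=0.0846, size=500)
name, model, p = fit_best_distribution(latency)
print(name, p)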

Then, assuming such a model is available for each considered metric, PRESENCE aims at adapting its client c′ (i.e., the auditor) to ensure the evaluation process is performed in a stealth way. In this paper, 19 models have been generated from the agents' data; they are listed in the next section. Of course, once a model is provided, we should validate it, that is, ensure that the model is an accurate representation of the actual metric evolution and behaves in the same way.

There are many tests that could be used to validate the generated models. These tests check the accuracy of the models by verifying the null hypothesis; examples include the t-test, the Wilcoxon–Mann–Whitney test, and the ANOVA test. In PRESENCE, this is achieved using a t-test comparing the means of the raw data and of the statistical distribution generated by the agent analysis. The use of the t-test is based on the assumption that the variable in question (e.g., throughput) is normally distributed. When this assumption is in doubt, the nonparametric Wilcoxon–Mann–Whitney test is used as an alternative to the t-test [30]. As we will see, 78.9% of the generated models are proved to be an accurate representation of the WS performance metrics exhibited by the PRESENCE agents. In the next section, the modelling of the performance metrics is detailed together with the experimental results of PRESENCE.
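This validation step can be reproduced with standard statistical libraries. The snippet below sketches it under the assumption that a fitted model from the previous step is available, falling back to the Wilcoxon–Mann–Whitney test when normality is doubtful; the normality check and the sample sizes are illustrative choices, not those of the paper.

# Sketch: validate a fitted model by comparing raw measurements with samples
# drawn from the fitted distribution (illustrative, not the authors' exact script).
import numpy as np
from scipy import stats

def validate_model(raw: np.ndarray, fitted_dist, alpha: float = 0.05,
                   n_synthetic: int = 1000, seed: int = 1) -> bool:
    synthetic = fitted_dist.rvs(size=n_synthetic, random_state=seed)
    # Use the t-test when the raw data look normal, otherwise Wilcoxon-Mann-Whitney.
    _, p_norm = stats.shapiro(raw)
    if p_norm > alpha:
        _, p_value = stats.ttest_ind(raw, synthetic, equal_var=False)
    else:
        _, p_value = stats.mannwhitneyu(raw, synthetic, alternative="two-sided")
    # A p value above the significance level means the model is accepted as accurate.
    return p_value > alpha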

4. Validation and First Experimental Results

This section presents the results obtained from the PRESENCE approach within the monitoring and modeling module, as a prelude to the validation of the stealth module and of the virtual QoS aggregator and SLA checker, which are left for the sequel of this work.

Figure 1: Overview of the PRESENCE framework. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Table 1: Performance metrics used in PRESENCE.

Domain       | Metric                          | Implemented | Metric type
Scalability  | Number of transactions          | yes         | Workload/performance indicator
Scalability  | Number of requests              | yes         | Workload/performance indicator
Scalability  | Number of operations            | yes         | Workload/performance indicator
Scalability  | Number of records               | yes         | Workload/performance indicator
Scalability  | Number of fetches               | yes         | Workload/performance indicator
Reliability  | Parallel connections (clients)  | yes         | Workload
Reliability  | Number of pipes                 | yes         | Workload
Reliability  | Number of threads               | yes         | Workload
Reliability  | Workload size                   | yes         | Workload
Availability | Response time                   | no          | Performance indicator
Availability | Up time                         | no          | Performance indicator
Availability | Down time                       | no          | Performance indicator
Availability | Load balancing                  | no          | Performance indicator
Performance  | Latency                         | yes         | Performance indicator
Performance  | Throughput                      | yes         | Performance indicator
Performance  | Transfer rate                   | yes         | Performance indicator
Performance  | Miss/hit rate                   | yes         | Performance indicator
Costs        | Installing costs                | no          | Quality indicator
Costs        | Running costs                   | no          | Quality indicator
Security     | Authentication                  | yes         | Security indicator
Security     | Encryption                      | yes         | Security indicator
Security     | Auditability                    | yes         | Security indicator


4.1. Experimental Setup. In an attempt to validate the approach on realistic workflows, we tested PRESENCE against two traditional and core web services:

(1) A multi-DataBase (DB) WS, which offers both SQL (i.e., PostgreSQL 9.4) and NoSQL DBs. For the latter case, we deployed two reference in-memory data structure stores used as database caches and message brokers, that is, Redis 2.8.17 (redis.io) and Memcached 1.5.0 (memcached.org), as well as MongoDB 3.4, an open-source document database that provides high performance, high availability, and automatic scaling.

(2) The reference Apache HTTP server (version 2.2.22-13), which is used for testing a traditional HTTP WS on the cloud.

Building such a CSP environment was performed on top of two physical servers running Ubuntu 14.04 LTS (Trusty 64) connected over a 1 GbE network.

On the PRESENCE client side, 8 agents are deployed as KVM guests, that is, virtual machines running CentOS 7.3, over 4 physical servers. Each agent runs one of the benchmarking tools listed in Table 2 to evaluate the WS performance and collect data about the CSP behaviour. Each PRESENCE agent thus measures a specific subset of performance metrics and attributes and also deals with specific kinds of cloud servers. Each measurement consists of an average over 100 runs collected by the PRESENCE agent to make the data statistically significant. The tools used for performance evaluation are several reference benchmarks such as the Yahoo Cloud Serving Benchmark (YCSB) [31], Memtier [32], Redis benchmark [33], Twitter RPC-Perf [34], Pgbench [35], HTTP Load [36], and Apache AB [37]. In addition, iperf [38] (a tool for active measurements of the maximum achievable bandwidth on IP networks) is used in the closed environment for the validation of PRESENCE, as it provides an easy baseline for the WS access capacity. A general overview of the deployed infrastructure is provided in Figure 2.
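As an illustration of how such an agent can drive a benchmarking tool and average its measurements over 100 runs, the following sketch wraps an external benchmark command with Python's subprocess module. The command line and the output-parsing rule are placeholders, since each tool (YCSB, memtier_benchmark, pgbench, etc.) has its own invocation and report format.

# Illustrative agent wrapper (hypothetical command and parsing, not PRESENCE code).
import re
import statistics
import subprocess

def run_benchmark_once(cmd: list[str], metric_pattern: str) -> float:
    """Run one benchmark process and extract a single metric from its stdout."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    match = re.search(metric_pattern, out)
    if match is None:
        raise RuntimeError("metric not found in benchmark output")
    return float(match.group(1))

def collect_metric(cmd: list[str], metric_pattern: str, runs: int = 100) -> dict:
    """Average the metric over `runs` executions, as done for each PRESENCE measurement."""
    samples = [run_benchmark_once(cmd, metric_pattern) for _ in range(runs)]
    return {"mean": statistics.mean(samples),
            "stdev": statistics.stdev(samples),
            "samples": samples}

# Hypothetical usage: the flags and the regular expression depend on the chosen tool.
# stats = collect_metric(["redis-benchmark", "-n", "10000", "-q"],
#                        r"GET:\s+([\d.]+) requests per second")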

4.2. PRESENCE Agents Evaluation Results. The targeted WS of each deployed agent is specified in Table 2. The PRESENCE approach is used to collect data which represent the behaviour of the CSP, and these data can also indicate and evaluate the performance of the services. As mentioned in the previous section, many metrics can represent the performance. PRESENCE uses some of these metrics as a workload applied to the CSP's servers and the others as results of the experiments. For example, the numbers of requests, operations, records, transactions, and fetches are metrics representing the scalability of the CSP and are used by PRESENCE to increase the workload and observe the behaviour of the servers under load. Other metrics, such as parallel or concurrent connections, the number of pipes, the number of threads, and the workload size, represent the CSP reliability and are also used by PRESENCE to increase the workload during the test. Finally, metrics such as response time, up time, down time, transfer rate, latency (read and update), and throughput indicate the CSP performance and availability, and PRESENCE uses them to evaluate the services' performance. Because PRESENCE uses many tools to evaluate and benchmark the services, it can deal with most of these metrics, but only a few common metrics are modelled and reported in the results, such as latency, throughput, and transfer rate. Other metrics represent the security and the costs of the CSPs; all these metrics are summarized in Table 1 in the previous section. The different parameters (both input and output) used for the PRESENCE validation are provided in Table 3. We now provide some of the numerous traces produced by the execution of the PRESENCE agents when checking the performance of the DB and HTTP WSs.

Figure 3 shows the Redis, Memcached, and MongoDB measured WS performance under the statistically significant stress produced by the PRESENCE agent running the YCSB benchmarking tool. Figure 3(a) shows the throughput of the three backends and demonstrates that the Redis WS is the best for this metric when compared to the other two WSs, as it has the highest throughput. This trend is confirmed when the latency metric is analysed in Figures 3(b) and 3(c).

Figure 4 shows the Redis and Memcached measured WS performance under the stress produced by the PRESENCE agent running the Memtier benchmarking tool. The previous trend is again confirmed in our runs, that is, the Redis-based WS performs better than the Memcached backend for all metrics, for example, throughput, latency, and transfer rate. As noticed in the three plots from the Memtier agents, the latency, throughput, and transfer rate of the Memcached WS increased suddenly at the end. Such behaviour was consistent across all runs and was linked to the memory saturation reached by the server process before being cleared.

Table 2: Benchmarking tools used by PRESENCE agents.

Benchmark tool   | Version   | Targeted WS
YCSB             | 0.12.0    | Redis, MongoDB, Memcached, DynamoDB, etc.
Memtier-Bench    | 1.2.8     | Redis, Memcached
Redis-Bench      | 2.4.2     | Redis
Twitter RPC-Perf | 2.0.3-pre | Redis, Memcached, Apache
PgBench          | 9.4.12    | PostgreSQL
Apache AB        | 2.3       | Apache
HTTP Load        | 1         | Apache
Iperf            | v1, v3    | Iperf server

Figure 2: Infrastructure deployed to validate the PRESENCE framework (cloud data center hosting the SaaS web services: Redis, Memcached, MongoDB, DynamoDB, PostgreSQL, Apache HTTP, and iperf). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


Still upon DB WS performance evaluation, Figure 5 details the performance of the PostgreSQL WS under the stress produced by the PRESENCE agent executing the PgBench benchmarking tool. The figure shows the normalized response time of the server and the normalized (standardized) number of Transactions per Second (TPS). The response time is the latency of the service, while the TPS corresponds to its throughput. The performance of the WS is affected by the increased workload, which is represented by the increasing number of transactions and parallel clients. The growing workload made the response time increase even while the TPS was going down; after the memory filled up, the TPS decreased again and the response time went back to decreasing. This behaviour was consistent across the runs of the PgBench agent. Finally, Figure 6 shows the average runs of the PRESENCE agent executing the HTTP Load benchmarking tool when assessing the performance of the HTTP WS. Each subfigure exhibits the behaviour of both the latency and the throughput against an increasing number of fetches and parallel clients, which increases the workload of the WS.
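The normalization used for Figure 5 (plotting TPS and response time on a common [0, 1] scale) can be expressed as a simple min-max rescaling; the sketch below is an assumption about how such curves are typically normalized, not the authors' plotting script.

# Min-max normalization of two metrics onto [0, 1] for joint plotting (illustrative).
import numpy as np

def min_max_normalize(values: np.ndarray) -> np.ndarray:
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values, dtype=float)

# Toy usage: raw TPS and latency series collected by the PgBench agent.
tps = np.array([420.0, 510.0, 390.0, 280.0, 250.0])
latency_ms = np.array([12.0, 18.0, 35.0, 60.0, 48.0])
norm_tps, norm_latency = min_max_normalize(tps), min_max_normalize(latency_ms)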

Many more traces of all considered agent runs are available but are not displayed in this article for the sake of conciseness. Overall, to conclude on the traces collected from the many agent runs depicted in this section, we were able to reflect several complementary aspects of the two WSs considered in the scenario of this experimental validation. Yet, as part of the contributions of this article, the generation of accurate models for these evaluations is crucial. They are detailed in the next section.

4.3. WS Performance Metrics Modeling. Beyond the description of the PRESENCE framework, the main contribution of this article resides more in the modeling of the measured WS performance metrics from the data collected by the PRESENCE agents than in the runs themselves depicted in the previous section. Once these models are available, they can be used to estimate and dynamically adapt the behaviour of the PRESENCE client (which combines and schedules the execution of all considered agents in parallel) so as to hide the true nature of the evaluation by making it indistinguishable from regular client traffic. But this corresponds to the next step of our work. In this paper, we wish to illustrate the models developed from the PRESENCE agent evaluations reported in the previous section. The main objective of the developed models is to understand the system performance behaviour relative to the various assumptions and input parameters discussed in previous sections. As mentioned in Section 3.2, we rely on the simulation software called Arena to analyse the data returned from the agents and obtain the model for each individual performance metric. We have performed this analysis for each agent, and the resulting models are presented in the tables below. Of course, such a contribution is pertinent only if the generated model is validated, a process consisting in ensuring an accurate representation of the actual system and its behaviour. This is achieved by using a set of statistical t-tests comparing the means of the raw data and of the statistical distribution generated by the Input Analyzer of the Arena system. If the result shows that both samples are analytically similar, then the model developed from the statistical distribution is an accurate representation of the actual system and behaves in the same way. For each model detailed in the tables, the outcome of the t-test, in the form of the computed p value (against the common significance level of 0.05), is provided and demonstrates, when applicable, the accuracy of the proposed models.

Table 3: Input/output metrics for each PRESENCE agent (YCSB, Memtier-Bench, Redis-Bench, Twitter RPC-Perf, PgBench, Apache AB, HTTP Load, Iperf).

Input parameters (workload): Transactions | Requests | Operations | Records | Fetches | Parallel clients | Pipes | Threads | Workload size
Output parameters (measurements): Throughput | Latency | Read latency | Update latency | CleanUp latency | Transfer rate | Response time | Miss | Hits


Figure 3: YCSB Agent Evaluation of the NoSQL DB WS: (a) throughput, (b) update latency, and (c) read latency. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Tables 4-6 show the models of the performance metrics, such as latency (read and update) and throughput, for the Redis, Memcached, and MongoDB services, using the data collected with the YCSB PRESENCE agent. In particular, Table 4 shows that the Redis throughput is Beta increasing, the Redis latency (read) is Gamma increasing, and the Redis latency (update) is Erlang increasing with respect to the configuration of the experimental setup discussed in the previous sections. Moreover, as the p values (0.757, 0.394, and 0.503) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted as compared to the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, -0.001 + 1 * BETA(3.63, 3.09), latency (read), that is, -0.001 + GAMM(0.0846, 2.39), and latency (update), that is, -0.001 + ERLA(0.0733, 3), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 3.

Similarly, Table 5 shows that the MongoDB throughput and latency (read) are Beta increasing and the latency (update) is Erlang increasing with respect to the configuration setup. Moreover, as the p values (0.388, 0.473, and 0.146) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted as compared to the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, -0.001 + 1 * BETA(3.65, 2.11), latency (read), that is, -0.001 + 1 * BETA(1.6, 2.48), and latency (update), that is, -0.001 + ERLA(0.0902, 2), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Finally, Table 6 shows that the Memcached throughput and latency (read) are Beta increasing and the latency (update) is Normal increasing with respect to the configuration setup. Again, as the p values (0.106, 0.832, and 0.794) are greater than 0.05, the null hypothesis is accepted. Hence, the models for throughput, that is, -0.001 + 1 * BETA(4.41, 2.48), latency (read), that is, -0.001 + 1 * BETA(1.64, 3.12), and latency (update), that is, NORM(0.311, 0.161), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.
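Because these fitted models are closed-form distributions, they can be used directly to generate synthetic metric samples, which is the use case mentioned in Section 3.2 (producing representative CSP behaviour without rerunning the experiments). The sketch below reproduces the three Redis models of Table 4 with scipy, following the parameterizations given in the table expressions (BETA(β, α) with shape parameters β and α; GAMM(β, α) with scale β and shape α; ERLA(β, k) with scale β and k stages); the snippet itself is an illustration, not the authors' tooling.

# Sampling synthetic Redis metrics from the fitted Arena models of Table 4 (sketch).
from scipy import stats

OFFSET = -0.001  # offset term of the Arena expressions

# Throughput ~ -0.001 + 1 * BETA(3.63, 3.09): Beta with shape parameters beta, alpha.
throughput = stats.beta(a=3.63, b=3.09, loc=OFFSET)

# Read latency ~ -0.001 + GAMM(0.0846, 2.39): Gamma with scale beta and shape alpha.
read_latency = stats.gamma(a=2.39, scale=0.0846, loc=OFFSET)

# Update latency ~ -0.001 + ERLA(0.0733, 3): Erlang = Gamma with integer shape k.
update_latency = stats.erlang(3, scale=0.0733, loc=OFFSET)

# Generate a synthetic trace of 10,000 normalized samples per metric.
samples = {name: dist.rvs(size=10_000, random_state=0)
           for name, dist in [("throughput", throughput),
                              ("read_latency", read_latency),
                              ("update_latency", update_latency)]}
print({k: float(v.mean()) for k, v in samples.items()})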

Figure 4: Memtier Agent Evaluation of the NoSQL DB WS: (a) throughput, (b) latency, and (c) transfer rate. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

The above analysis is repeated for the Memtier PRESENCE agent; the corresponding models are provided in Tables 7 and 8, which summarize the computed models of the WS performance metrics, such as latency, throughput, and transfer rate, for the Redis and Memcached services, using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput is Erlang increasing with respect to time and the assumptions in the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted as compared to the alternate hypothesis (the two samples are different). Hence, the Erlang distribution model of throughput, that is, -0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a). The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, latency and transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) for a multimodal distribution of the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency is Beta increasing, and as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, -0.001 + 1 * BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

Figure 5: PgBench Agent on the SQL DB WS. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 6: HTTP Load Agent Evaluation on the HTTP WS. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the model generated for the performance metrics from the PgBench PRESENCE agent. The first row of the table shows the approximation (blue curve line) for a multimodal distribution of the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric is lognormal increasing, and as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted as compared to the alternate hypothesis (i.e., the two samples are different). Hence, the model for latency, that is, -0.001 + LOGN(0.212, 0.202), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 5.

Table 4: Modelling DB WS performance metrics from the YCSB PRESENCE agent (Redis).

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(3.63, 3.09) | 0.757 (>0.05)
Latency (read)   | Gamma        | -0.001 + GAMM(0.0846, 2.39)   | 0.394 (>0.05)
Latency (update) | Erlang       | -0.001 + ERLA(0.0733, 3)      | 0.503 (>0.05)

Expressions: BETA(β, α) has pdf f(x) = x^(β-1) (1-x)^(α-1) / B(β, α) for 0 < x < 1 (0 otherwise), where B(β, α) = ∫₀¹ t^(β-1) (1-t)^(α-1) dt is the complete beta function; GAMM(β, α) has pdf f(x) = β^(-α) x^(α-1) e^(-x/β) / Γ(α) for x > 0 (0 otherwise), where Γ(α) = ∫₀^∞ t^(α-1) e^(-t) dt is the complete gamma function; ERLA(β, k) has pdf f(x) = β^(-k) x^(k-1) e^(-x/β) / (k-1)! for x > 0 (0 otherwise). All models include an offset of -0.001.

Table 5: Modelling DB WS performance metrics from the YCSB PRESENCE agent (MongoDB).

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(3.65, 2.11) | 0.388 (>0.05)
Latency (read)   | Beta         | -0.001 + 1 * BETA(1.6, 2.48)  | 0.473 (>0.05)
Latency (update) | Erlang       | -0.001 + ERLA(0.0902, 2)      | 0.146 (>0.05)

The BETA and ERLA expressions are as defined in Table 4; all models include an offset of -0.001.


As regards the HTTP WS performance analysis using PRESENCE, we report in Table 10 the model generated from the performance data of the HTTP Load PRESENCE agent. Again, we can see that we fail to model the throughput metric with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time is Beta increasing, and as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, -0.001 + 1 * BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.

Summary of the obtained models. In this paper, 19 models were generated which represent the performance metrics of the SaaS Web Services evaluated with the PRESENCE approach. Out of these 19 models, 15 models, that is, 78.9% of the analyzed models, are proved to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by the reference statistical t-tests performed against the common significance level of 0.05.

5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrate the capacity of solution providers to adapt the behaviour of their product when submitted to an evaluation campaign in order to improve the performance results), this paper presents PRESENCE, an automatic framework which aims at evaluating, monitoring, and benchmarking Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of WSs in a stealth way, that is, by rendering the performance evaluation as close as possible to a regular yet heavy usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.

Table 7: Modelling DB WS performance metrics from the Memtier PRESENCE agent (Redis).

Metric        | Distribution | Model                          | p value (t-test)
Throughput    | Erlang       | -0.001 + ERLA(0.0155, 7)       | 0.767 (>0.05)
Latency       | Beta         | -0.001 + 1 * BETA(0.648, 1.72) | 0.902 (>0.05)
Transfer rate | Erlang       | -0.001 + ERLA(0.0155, 7)       | 0.287 (>0.05)

The BETA and ERLA expressions are as defined in Table 4; all models include an offset of -0.001.

Table 6: Modelling DB WS performance metrics from the YCSB PRESENCE agent (Memcached).

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(4.41, 2.48) | 0.106 (>0.05)
Latency (read)   | Beta         | -0.001 + 1 * BETA(1.64, 3.12) | 0.832 (>0.05)
Latency (update) | Normal       | NORM(0.311, 0.161)            | 0.794 (>0.05)

The BETA expression is as defined in Table 4; NORM(μ, σ) has pdf f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²)) for all real x, with mean μ = 0.311 and standard deviation σ = 0.161. The Beta models include an offset of -0.001.


The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to model this workload accurately, based on the configuration of the agents responsible for the performance evaluation.

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE agent.

Metric     | Distribution | Model                         | p value (t-test)
Throughput | —            | —                             | —
Latency    | Beta         | -0.001 + 1 * BETA(1.55, 3.46) | 0.165 (>0.05)

The BETA expression is as defined in Table 4; the model includes an offset of -0.001.

Table 9: Modelling DB WS performance metrics from the PgBench PRESENCE agent (PostgreSQL).

Metric     | Distribution | Model                       | p value (t-test)
Throughput | —            | —                           | —
Latency    | Lognormal    | -0.001 + LOGN(0.212, 0.202) | 0.682 (>0.05)

LOGN(LogMean μ, LogStd σ) has pdf f(x) = (1/(σx√(2π))) e^(-(ln(x)-μ)²/(2σ²)) for x > 0 (0 otherwise), with μ = 0.212 and σ = 0.202; the model includes an offset of -0.001.

Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE agent (Memcached).

Metric           | Distribution | Model                        | p value (t-test)
Throughput       | —            | —                            | —
Latency (read)   | Beta         | -0.001 + 1 * BETA(0.99, 2.1) | 0.625 (>0.05)
Latency (update) | —            | —                            | —

The BETA expression is as defined in Table 4; the model includes an offset of -0.001.


In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. The data collected from the execution of each agent within the PRESENCE client can then be aggregated within a dedicated module and treated to exhibit a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a GA, the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1-6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145, The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.

[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.

[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684–700, 2016.

[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.

[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications, vol. 19, no. 2, pp. 133–143, 2014.

[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.

[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications, vol. 80, no. 4, pp. 1607–1623, 2015.

[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems, vol. 2017, Article ID 2536040, 20 pages, 2017.

[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.

[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.

[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.

[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.

[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.

[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.

[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences, ADVCOMP 2009, pp. 187–192, Sliema, Malta, October 2009.

[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.

[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of CloudCom 2012, the 4th IEEE International Conference on Cloud Computing Technology and Science, pp. 464–471, Taipei, Taiwan, December 2012.

[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies, EIDWT 2013, pp. 83–87, Xi'an, China, September 2013.

[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.

[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.

[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2016, pp. 588–591, Cartagena, Colombia, May 2016.

[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2012, pp. 393–398, Fukuoka, Japan, March 2012.

[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.

[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science, CloudCom, pp. 26–34, Hong Kong, China, December 2017.

[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.

[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.

[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.

[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.

[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.

[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.

[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC '10, Indianapolis, IN, USA, June 2010.

[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.

[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.

[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.

[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.

[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.

[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.

[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.

[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.

14 Mobile Information Systems


penalties encountered when the expected QoS is not met [12, 13]. Unfortunately, no standard mechanism exists to verify and assure that the delivered services satisfy the signed SLA agreement. Accurate measures of the provided Quality of Service (QoS) are also missing most of the time, which makes it even more difficult to evaluate different CSPs on a fair basis. The ambition of the proposed PRESENCE framework (PeRformance Evaluation of SErvices on the Cloud) is to fill this gap by offering an automated approach to evaluate and classify, in a fair and stealthy way, the performance and scalability of the delivered WS across multiple CSPs.

In this context, the contributions of this paper are fourfold:

(1) The presentation of the PRESENCE framework, the reasoning behind its design, and its organization.

(2) The definition of the different modules composing the framework, based on a Multi-Agent System (MAS) acting behind the PRESENCE client, which aim at tracking and modeling the WS performance using a predefined set of common performance metrics.

(3) The validation and first experimental results of this module over two representative WSs relying on several reference backend services at the heart of most WSs: Apache HTTP, Redis, Memcached, MongoDB, and PostgreSQL.

(4) The cloud Web Service (WS) performance metric models, such as throughput, transfer rate, and latency (read and write), for HTTP, Redis, Memcached, MongoDB, and PostgreSQL.

The appropriate performance modeling depicted in this paper is crucial for the accuracy and dynamic adaptation expected within the stealth module of PRESENCE, which will be the object of another article. This article is an extended version of our paper presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018 [14], which received the best paper award. Compared to this initial paper, the present article details the modelling of the Web Services performance metrics and brings 19 generated models, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs.

This paper is organized as follows. Section 2 details the background of this work and reviews related works. The PRESENCE framework is described in Section 3, together with some implementation details. We then focus on the validation of the monitoring module on several reference backend services at the heart of most WSs; details and experimental results are discussed in Section 4. Finally, Section 5 concludes the paper and provides some future directions and perspectives opened by this study.

2. Context and Motivations

As mentioned before, a SLA defines the conditions and characteristics of a given WS, their costs, and the penalties encountered when the expected QoS is not met. Measuring the performance of a given WS is therefore key to evaluate whether or not the corresponding SLA is satisfied, especially from a user point of view, since the user can then request penalties from the CSP. However, accurate measures of the provided Quality of Service (QoS) are missing most of the time, as performance evaluation is challenging in a cloud context considering that end-users do not have full control of the system running the service. In this context, Stantchev in [15] provides a generic methodology for the performance evaluation of cloud computing configurations based on the Non-Functional Properties (NFP) (such as response time) of individual services. Yet none of the steps were clearly detailed, and the evaluation is based on a single benchmark measuring a single metric. Lee et al. in [16] propose a comprehensive model for evaluating the quality of SaaS after defining the key features of SaaS, deriving the quality attributes from the key features, and defining the metrics for the quality attributes. This model serves as a guideline for SaaS providers to characterize and improve the provided QoS but obviously does not address user-based evaluation. However, we used part of the proposed ontology to classify our own performance metrics. Gao et al. [17] propose a Testing-as-a-Service (TaaS) infrastructure and report a cloud-based TaaS environment with tools (known as CTaaS) developed to meet the needs in SaaS testing, performance, and scalability evaluation. One drawback of this approach is that its deployment cannot be hidden from the CSP, which might in return maliciously increase the capacity of the allocated resources to artificially bias the evaluation in favor of its own offering. Wen and Dong in [18] propose quality characteristics and standards for the security and the QoS of SaaS services. Unfortunately, the authors did not propose any practical steps to evaluate the cloud services but only a set of recommendations.

Beyond pure performance evaluation, and to the best of our knowledge, the literature around SLA assurance and violation monitoring is relatively sparse. Cicotti et al. in [19, 20] propose a quality-of-service monitoring approach and SLA violation reporter based on API queries and events. Called QoSMONaaS, this approach measures the Key Performance Indicators (KPIs) of the services provided by the CSPs to the cloud customers. Ibrahim et al. in [10, 21] provide a framework to assure the SLA and evaluate the performance of cloud applications; they use simulations and local scenarios to test the cloud applications and services. Hammadi and Hussain in [22] propose an SLA monitoring framework operated by a third party; the third party assesses the QoS and assures the performance and the absence of SLA violations. Nevertheless, none of the abovementioned approaches feature the dynamic adaptation of the evaluation campaign as foreseen within the PRESENCE proposal.

Finally, as regards CSP ranking and classification, Wagle et al. in [23, 24] provide a ranking based on the estimation and prediction of the quality of the cloud provider services. The model estimating the performance is based on prediction methods such as ARIMA and ETS.

Motivated by recent scandals in the automotive sector, which demonstrate the capacity of solution providers to adapt the behaviour of their product when submitted to an evaluation campaign in order to improve the performance results, this article presents PRESENCE, which aims at covering a large set of real benchmarks contributing to all aspects of the performance analysis while hiding the true nature of the evaluation from the CSP. Our proposal is detailed in the next section.

3. PRESENCE: Performance Evaluation of Services on the Cloud

An overview of the PRESENCE framework is proposed in Figure 1 and is now described. It is basically composed of five main components:

(1) A set of agents, each of them responsible for a specific performance metric measuring a specific aspect of the WS QoS. Those metrics have been designed to reflect scalability and performance in a representative cloud environment. In practice, several reference benchmarks have been considered and evaluated to quantify this behaviour.

(2) The monitoring and modeling module, responsible for collecting the data from the agents, which is used together with an application performance model [25] to assess the performance metric model.

(3) The stealth module, responsible for dynamically adapting and balancing the workload pattern of the combined metric agents to make the resulting traffic indistinguishable from regular user traffic from the CSP point of view.

(4) The virtual QoS aggregator and SLA checker module, which takes care of evaluating the QoS and SLA compliance of the WS offered across the considered CSPs. This ranking will help decision makers to determine which provider is better to use for the analyzed WS.

(5) Finally, the PRESENCE client (or Auditor), responsible for interacting with the selected CSPs and evaluating the QoS and SLA compliance of Web Services. It is meant to behave as a regular client of the WS and can eventually be distributed across several parallel instances, even if our first implementation operates a single sequential client.

3.1. The PRESENCE Agents. The PRESENCE approach is used to collect the data which represent the behaviour of the CSP and reflect the performance of the delivered services. In this context, the PRESENCE agents are responsible for a set of performance metrics, each measuring a specific aspect of the WS QoS. Those metrics have been designed to reflect scalability and performance in a representative cloud environment, covering the different criteria summarized in Table 1. The implementation status and coverage of these metrics within PRESENCE at the time of writing is also detailed there.

Most of these metrics are measured through a set of reference benchmarks, and each agent is responsible for a specific instance of one of these benchmarks. Then a multiobjective optimization heuristic is applied to evaluate the audited WS according to the different performance domains raised by the agents. In practice, a low-level hybrid approach combining Machine Learning (for deep and reinforcement learning) and evolutionary-based metaheuristics compensates the weaknesses of one method with the strengths of the other. More specifically, a Genetic Programming Hyper-heuristic (GPHH) approach will be used to automatically generate heuristics using building blocks extracted from the problem definition and the benchmark domains. Such a strategy has been employed with success in [26, 27], and we are confident it could be efficiently applied to fit the context of this work. It is worth noting that the metrics marked as not yet implemented within PRESENCE at the time of writing are linked to the cost and the availability of the checked service. The current paper validates the approach against a set of classical WSs deployed in a local environment and introduces a complex modelling and evaluation of the performance metrics.

As regards the stealth module, PRESENCE aims at relying on a GA [28] approach to mitigate and adapt the concurrent executions of the different agents by evolving their respective parameters, and thus the load pattern visible to the CSP. An Oracle is checked upon each iteration of the GA, based on a statistical correlation of the resulting pattern against a reference model corresponding to a regular usage. When this oracle is unable to statistically distinguish the modeled outgoing pattern of the client from that of a regular client, we consider that one of the found solutions can be applied to a real evaluation campaign of the checked CSP. Finally, the virtual QoS aggregator and SLA checker rely on the CSP ranking and classification proposed by Wagle et al. in [23, 24].
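To make the oracle step more concrete, the following minimal sketch (an illustrative assumption, not the actual PRESENCE implementation; the names oracle_accepts and reference_trace are hypothetical) checks whether the load pattern produced by a GA individual is statistically indistinguishable from a reference trace of regular usage, using a two-sample Kolmogorov-Smirnov test at the 0.05 level.

```python
# Illustrative sketch of the stealth oracle: accept a GA individual only when
# its resulting load pattern cannot be told apart from regular-usage traffic.
from scipy.stats import ks_2samp

def oracle_accepts(candidate_trace, reference_trace, alpha=0.05):
    """candidate_trace / reference_trace: request-rate samples over time."""
    _, p_value = ks_2samp(candidate_trace, reference_trace)
    # A p-value above alpha means the two samples cannot be statistically
    # distinguished, so the stealth requirement is met for this individual.
    return p_value > alpha
```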

3.2. Monitoring and Modeling for the Dynamic Adaptation of the Agents. The first step to ensure the dynamic adaptation of the workload linked to the evaluation process resides in the capacity to model this workload accurately based on the configuration of a given agent. Modelling the performance metrics will also help other researchers to generate data representing the CSP's behaviour, under a high load as well as under normal usage, in just a couple of minutes and without any experiments. In this context, the multiple runs of the agents are stored and analyzed in a Machine Learning process. During the training part, the infrastructure model representing the CSP side, which contains the SaaS services, is first virtualized locally to initiate the first collection of data samples and to set up the PRESENCE client (i.e., the auditor) based on a representative environment. The second phase of the training involves "real" runs permitting the modeling of each metric. In practice, PRESENCE relies on a simulation software called Arena [29] to analyse the data returned from the agents and to obtain the model for each individual performance metric. Arena is a simulation software by Rockwell Corporation. It is used in different application domains, from manufacturing to supply chain and from customer service and strategies to internal business processes. It includes three modules, respectively called the Arena Input Analyser, Output Analyser, and Process Analyser. Among the three, the Input Analyser is useful for determining an appropriate distribution for collected data: it allows the user to take a sample set of raw data (e.g., the latency of a cloud-based WS) and fit it to a statistical distribution. This distribution can then be incorporated directly into a model to develop and understand the corresponding system performance.
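For readers without access to Arena, the distribution-fitting step performed by the Input Analyser can be approximated with open-source tools; the sketch below is only an assumed analogue (using SciPy, which is not what the paper uses) that fits several candidate distributions to a sample of measured latencies and keeps the best one according to the Kolmogorov-Smirnov statistic.

```python
# Hedged, SciPy-based analogue of the Arena Input Analyser fitting step.
import numpy as np
from scipy import stats

def fit_best_distribution(samples, candidates=("beta", "gamma", "erlang", "lognorm", "norm")):
    """Fit each candidate distribution by maximum likelihood and return the
    (name, parameters, ks_statistic) of the best fit (smallest KS statistic)."""
    best = None
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(samples)                    # shape(s), loc, scale
        ks_stat, _ = stats.kstest(samples, name, args=params)
        if best is None or ks_stat < best[2]:
            best = (name, params, ks_stat)
    return best

# Synthetic stand-in for latencies collected by an agent (placeholder data):
latencies = np.random.default_rng(0).gamma(shape=2.39, scale=0.0846, size=500)
print(fit_best_distribution(latencies))
```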

Then, assuming such a model is available for each considered metric, PRESENCE aims at adapting its client c′ (i.e., the auditor) to ensure the evaluation process is performed in a stealth way. In this paper, 19 models have been generated from the agents' measurements; they are listed in the next section. Of course, once a model is provided, we should validate it, that is, ensure that the model is an accurate representation of the actual metric evolution and behaves in the same way.

There are many tests that could be used to validate the generated models. These tests check the accuracy of the models by testing the null hypothesis; examples include the t-test, the Wilcoxon-Mann-Whitney test, and the ANOVA test. In PRESENCE, this is achieved by using the t-test, comparing the means of the raw data and of the statistical distribution generated by the agent analysis. The use of the t-test relies on the assumption that the variable in question (e.g., throughput) is normally distributed. When this assumption is in doubt, the nonparametric Wilcoxon-Mann-Whitney test is used as an alternative to the t-test [30]. As we will see, 78.9% of the generated models are proved to be an accurate representation of the WS performance metrics exhibited by the PRESENCE agents. In the next section, the modelling of the performance metrics is detailed, together with the experimental results of PRESENCE.
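As a minimal sketch of this validation logic (assuming the raw measurements and a sample drawn from the fitted distribution are available as arrays; the helper name validate_model is illustrative and not part of PRESENCE):

```python
# Validation sketch: t-test when normality is plausible, otherwise the
# nonparametric Wilcoxon-Mann-Whitney test, at the 0.05 significance level.
from scipy.stats import mannwhitneyu, shapiro, ttest_ind

def validate_model(raw_samples, model_samples, alpha=0.05):
    """Return True when the fitted model is statistically indistinguishable
    from the raw data, i.e., the null hypothesis 'same samples' is accepted."""
    normal = shapiro(raw_samples)[1] > alpha and shapiro(model_samples)[1] > alpha
    if normal:
        _, p_value = ttest_ind(raw_samples, model_samples, equal_var=False)
    else:
        _, p_value = mannwhitneyu(raw_samples, model_samples, alternative="two-sided")
    return p_value > alpha
```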

4. Validation and First Experimental Results

This section presents the results obtained from the PRESENCE approach within the monitoring and modeling module, as a prelude to the validation of the stealth module and of the virtual QoS aggregator and SLA checker, which is left for the sequel of this work.

Figure 1: Overview of the PRESENCE framework (the [distributed] PRESENCE client, or auditor, runs the metric agents, the monitoring/modeling module, the stealth module for dynamic load adaptation, and the virtual QoS aggregator/SLA checker against the web services of the evaluated cloud providers; example backend services: Redis, Memcached, MongoDB, PostgreSQL, etc.). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Table 1: Performance metrics used in PRESENCE (× marks metrics not yet implemented at the time of writing).

- Scalability (workload/performance indicators): number of transactions, number of requests, number of operations, number of records, number of fetches.
- Reliability (workload): parallel connections (clients), number of pipes, number of threads, workload size.
- Availability (performance indicators): response time (×), up time (×), down time (×), load balancing (×).
- Performance (performance indicators): latency, throughput, transfer rate, miss/hit rate.
- Costs (quality indicators): installing costs (×), running costs (×).
- Security (security indicators): authentication, encryption, auditability.


4.1. Experimental Setup. In an attempt to validate the approach on realistic workflows, we tested PRESENCE against two traditional and core web services:

(1) A multi-DataBase (DB) WS which offers both SQL (i.e., PostgreSQL 9.4) and NoSQL DBs. For the latter case, we deployed two reference in-memory data structure stores used as database caches and message brokers, that is, Redis 2.8.17 (redis.io) and Memcached 1.5.0 (memcached.org), as well as MongoDB 3.4, an open-source document database that provides high performance, high availability, and automatic scaling.

(2) The reference HTTP Apache server (version 2.2.22-13), which is used for testing a traditional HTTP WS on the cloud.

Building such a CSP environment was performed on top of two physical servers running Ubuntu 14.04 LTS (Trusty 64), connected over a 1 GbE network.

On the PRESENCE client side, 8 agents are deployed as KVM guests, that is, virtual machines running CentOS 7.3 over 4 physical servers. Each agent runs one of the benchmarking tools listed in Table 2 to evaluate the WS performance and collect data about the CSP behaviour. Each PRESENCE agent thus measures a specific subset of performance metrics and attributes and also deals with specific kinds of cloud servers. Each measurement consists of an average over 100 runs collected by the PRESENCE agent to make the data statistically significant. The tools used for performance evaluation are several reference benchmarks such as the Yahoo Cloud Serving Benchmark (YCSB) [31], Memtier [32], Redis benchmark [33], Twitter RPC-Perf [34], Pgbench [35], HTTP Load [36], and Apache AB [37]. In addition, iperf [38] (a tool for active measurements of the maximum achievable bandwidth on IP networks) is used in the closed environment for the validation of PRESENCE, as it provides an easy testimonial of the WS access capacity. The general overview of the deployed infrastructure is provided in Figure 2.
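As an illustration of how an agent turns repeated benchmark runs into a statistically significant measurement, the following generic sketch (the command line and the parsing lambda are placeholders, not the actual PRESENCE agent code) runs a benchmark a configurable number of times and aggregates the extracted metric.

```python
# Generic measurement-collection sketch: repeat a benchmark and aggregate one
# metric per run, as each PRESENCE agent does over 100 runs per measurement.
import statistics
import subprocess

def collect_metric(command, parse_metric, runs=100):
    """Run `command` `runs` times; `parse_metric(stdout) -> float` extracts the
    metric of interest. Returns (mean, standard deviation)."""
    values = []
    for _ in range(runs):
        out = subprocess.run(command, capture_output=True, text=True, check=True).stdout
        values.append(parse_metric(out))
    return statistics.mean(values), statistics.stdev(values)

# Hypothetical usage with Apache AB (URL and parsing are illustrative only):
# mean_rps, std_rps = collect_metric(
#     ["ab", "-n", "10000", "-c", "100", "http://web-service.example/"],
#     parse_metric=lambda out: float(
#         next(line for line in out.splitlines()
#              if line.startswith("Requests per second")).split()[3]))
```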

4.2. PRESENCE Agents Evaluation Results. The targeted WS of each deployed agent is specified in Table 2. The PRESENCE approach is used to collect the data which represent the behaviour of the CSP; these data can also indicate and evaluate the performance of the services. As mentioned in the previous section, there are many metrics that can represent the performance. PRESENCE uses some of these metrics as a workload on the CSP's servers and the others as results from the experiments. For example, the number of requests, operations, records, transactions, and fetches are metrics representing the scalability of the CSP and are used by PRESENCE to increase the workload in order to observe the behaviour of the servers under load. Other metrics like parallel or concurrent connections, number of pipes, number of threads, and workload size represent the CSP reliability and are also used by PRESENCE to increase the workload during the test. Other metrics like response time, up time, down time, transfer rate, latency (read and update), and throughput indicate the CSP performance and availability, and PRESENCE uses them to evaluate the service performance. Because PRESENCE uses many tools to evaluate and benchmark the services, it can deal with most of the metrics, but there are two or three common metrics that we model and present in the results, such as latency, throughput, and transfer rate. There are other metrics that represent the security and the costs of the CSPs; all these metrics are summarized in Table 1 in the previous section. The different parameters (both input and output) which are used for the PRESENCE validation are provided in Table 3. We now provide some of the numerous traces produced by the execution of the PRESENCE agents when checking the performance of the DB and HTTP WSs.

Figure 3 shows the Redis, Memcached, and MongoDB measured WS performance under the statistically significant stress produced by the PRESENCE agent running the YCSB benchmarking tool. Figure 3(a) shows the throughput of the three backends and demonstrates that the Redis WS is the best for this metric when compared to the other two WSs, as it has the highest throughput. This trend is confirmed when the latency metric is analysed in Figures 3(b) and 3(c).

Table 2: Benchmarking tools used by PRESENCE agents (tool, version, targeted WS).
- YCSB, 0.12.0: Redis, MongoDB, Memcached, DynamoDB, etc.
- Memtier-Bench, 1.2.8: Redis, Memcached.
- Redis-Bench, 2.4.2: Redis.
- Twitter RPC-Perf, 2.0.3-pre: Redis, Memcached, Apache.
- PgBench, 9.4.12: PostgreSQL.
- Apache AB, 2.3: Apache.
- HTTP Load, 1: Apache.
- Iperf, v1/v3: Iperf server.

Figure 2: Infrastructure deployed to validate the PRESENCE framework (a cloud data center whose web and database servers host the SaaS web services: Redis, Memcached, MongoDB, DynamoDB, PostgreSQL, Apache HTTP, and iperf). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 4 shows the Redis and Memcached measured WS performance under the stress produced by the PRESENCE agent running the Memtier benchmarking tool. The previous trend is again confirmed in our runs, that is, the Redis-based WS performs better than the Memcached backend for all metrics, for example, throughput, latency, and transfer rate. As noticed in the three plots from the Memtier agents, the latency, throughput, and transfer rate of the Memcached WS increase suddenly at the end. Such behaviour was consistent across all runs and was linked to the memory saturation reached by the server process before being cleared. Still on the DB WS performance evaluation, Figure 5 details the performance of the PostgreSQL WS under the stress produced by the PRESENCE agent executing the PgBench benchmarking tool. The figure shows the normalized response time of the server and the normalized (standardized) number of Transactions per Second (TPS). The response time is the latency of the service, while the TPS corresponds to its throughput. The performance of the WS is affected by the increased workload, which is represented by the increasing number of transactions and parallel clients. The increasing workload causes the response time to keep increasing even while the TPS goes down; after the memory fills up, the TPS decreases again and the response time returns to a decrease. This behaviour was consistent across the runs of the PgBench agent. Finally, Figure 6 shows the average runs of the PRESENCE agent executing the HTTP Load benchmark tool when assessing the performance of the HTTP WS. We exhibit on each subfigure the behaviour of both the latency and the throughput against an increasing number of fetches and parallel clients, which increases the workload of the WS.

Many more traces of all considered agent runs are available but are not displayed in this article for the sake of conciseness. Overall, to conclude on the traces collected from the many agent runs depicted in this section, we were able to reflect several complementary aspects of the two WSs considered in the scenario of this experimental validation. Yet, as part of the contributions of this article, the generation of accurate models for these evaluations is crucial. They are detailed in the next section.

4.3. WS Performance Metrics Modeling. Beyond the description of the PRESENCE framework, the main contribution of this article resides more in the modeling of the measured WS performance metrics from the data collected by the PRESENCE agents than in the runs themselves depicted in the previous section. Once these models are available, they can be used to estimate and dynamically adapt the behaviour of the PRESENCE client (which combines and schedules the execution of all considered agents in parallel) so as to hide the true nature of the evaluation by making it indistinguishable from regular client traffic. But this corresponds to the next step of our work. In this paper, we wish to illustrate the models developed from the PRESENCE agent evaluations reported in the previous section. The main objective of the developed models is to understand the system performance behaviour relative to the various assumptions and input parameters discussed in the previous sections. As mentioned in Section 3.2, we rely on the simulation software called Arena to analyse the data returned from the agents and to obtain the model for each individual performance metric. We have performed this analysis for each agent, and the resulting models are presented in the tables below. Of course, such a contribution is pertinent only if the generated model is validated, a process consisting in ensuring an accurate representation of the actual system and its behaviour. This is achieved by using a set of statistical t-tests comparing the means of the raw data and of the statistical distribution generated by the Input Analyzer of the Arena system. If the result shows that both samples are statistically similar, then the model developed from the statistical distribution is an accurate representation of the actual system and behaves in the same way. For each model detailed in the tables, the outcome of the t-test, in the form of the computed p value (against the common significance level of 0.05), is provided and demonstrates, where applicable, the accuracy of the proposed models.

Table 3: Input/output metrics for each PRESENCE agent (YCSB, Memtier-Bench, Redis-Bench, Twitter RPC-Perf, PgBench, Apache AB, HTTP Load, and Iperf).
Input parameters: transactions, requests, operations, records, fetches, parallel clients, pipes, threads, and workload size.
Output parameters: throughput, latency, read latency, update latency, cleanup latency, transfer rate, response time, misses, and hits.


Figure 3: YCSB agent evaluation of the NoSQL DB WS: (a) throughput, (b) update latency, and (c) read latency. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Tables 4–6 show the models of the performance metrics, such as latency (read and update) and throughput, for the Redis, Memcached, and MongoDB services, using the data collected with the YCSB PRESENCE agents. In particular, Table 4 shows that the Redis throughput is Beta increasing, the Redis latency (read) is Gamma increasing, and the Redis latency (update) is Erlang increasing with respect to the configuration of the experimental setup discussed in the previous sections. Moreover, as the p values (0.757, 0.394, and 0.503) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted over the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, -0.001 + 1*BETA(3.63, 3.09), latency (read), that is, -0.001 + GAMM(0.0846, 2.39), and latency (update), that is, -0.001 + ERLA(0.0733, 3), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 3.

Similarly, Table 5 shows that the MongoDB throughput and latency (read) are Beta increasing and the latency (update) is Erlang increasing with respect to the configuration setup. Moreover, as the p values (0.388, 0.473, and 0.146) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted over the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, -0.001 + 1*BETA(3.65, 2.11), latency (read), that is, -0.001 + 1*BETA(1.6, 2.48), and latency (update), that is, -0.001 + ERLA(0.0902, 2), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Finally, Table 6 shows that the Memcached throughput and latency (read) are Beta increasing and the latency (update) is Normal increasing with respect to the configuration setup. Again, as the p values (0.106, 0.832, and 0.794) are greater than 0.05, the null hypothesis is accepted. Hence, the models for throughput, that is, -0.001 + 1*BETA(4.41, 2.48), latency (read), that is, -0.001 + 1*BETA(1.64, 3.12), and latency (update), that is, NORM(0.311, 0.161), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.
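To illustrate one stated benefit of these models, namely regenerating representative metric samples without rerunning any experiment, the sketch below draws synthetic Redis samples from the Table 4 models; the mapping of the Arena BETA/GAMM/ERLA notation onto SciPy distributions (shape/scale convention) is our assumption, not code from the paper.

```python
# Hedged sketch: synthetic Redis metric samples from the fitted Table 4 models.
# Assumed reading: BETA(beta, alpha) is a Beta law on (0, 1); GAMM(beta, alpha)
# is a Gamma law with scale beta and shape alpha; ERLA(beta, k) is an Erlang
# law (Gamma with integer shape k); every model adds the -0.001 offset.
from scipy import stats

OFFSET = -0.001
N = 1000

throughput     = OFFSET + 1 * stats.beta.rvs(a=3.63, b=3.09, size=N)      # -0.001 + 1*BETA(3.63, 3.09)
latency_read   = OFFSET + stats.gamma.rvs(a=2.39, scale=0.0846, size=N)   # -0.001 + GAMM(0.0846, 2.39)
latency_update = OFFSET + stats.erlang.rvs(3, scale=0.0733, size=N)       # -0.001 + ERLA(0.0733, 3)

print(throughput.mean(), latency_read.mean(), latency_update.mean())
```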

Figure 4: Memtier agent evaluation of the NoSQL DB WS: (a) throughput, (b) latency, and (c) transfer rate (annotations mark points where the server was restarted). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 5: PgBench agent on the SQL DB WS (normalized TPS and normalized response time versus the number of transactions per client and the number of parallel clients). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 6: HTTP Load agent evaluation on the HTTP WS: (a) normalized throughput and latency versus the number of fetches and parallel clients; (b) average latency and average connection latency versus the number of parallel clients and fetches. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

The above analysis is repeated for the Memtier PRESENCE agent; the corresponding models, provided in Tables 7 and 8, summarize the computed models for the WS performance metrics, such as latency, throughput, and transfer rate, for the Redis and Memcached services, using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput is Erlang increasing with respect to time and the assumptions of the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted over the alternate hypothesis (the two samples are different). Hence, the Erlang distribution model of throughput, that is, -0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a). The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, latency and transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) for a multimodal distribution of the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency is Beta increasing, and as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, -0.001 + 1*BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the generated models for the performance metrics from the Pgbench PRESENCE agents. The first row in the table shows the approximation (blue curve line) for a multimodal distribution of the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric is log-normal increasing, and as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted over the alternate hypothesis, that is, that the two samples are different. Hence, the model for latency, that is, -0.001 + LOGN(0.212, 0.202), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 5.

Table 4: Modelling DB WS performance metrics from the YCSB PRESENCE agent (Redis).

- Throughput: Beta distribution; model -0.001 + 1*BETA(3.63, 3.09), where BETA(β, α) has β = 3.63, α = 3.09, and offset -0.001; density f(x) = x^(β-1)(1-x)^(α-1)/B(β, α) for 0 < x < 1 and 0 otherwise, with the complete beta function B(β, α) = ∫_0^1 t^(β-1)(1-t)^(α-1) dt; p value (t-test) = 0.757 (> 0.05).
- Latency (read): Gamma distribution; model -0.001 + GAMM(0.0846, 2.39), where GAMM(β, α) has β = 0.0846, α = 2.39, and offset -0.001; density f(x) = β^(-α) x^(α-1) e^(-x/β)/Γ(α) for x > 0 and 0 otherwise, with the complete gamma function Γ(α) = ∫_0^∞ t^(α-1) e^(-t) dt; p value = 0.394 (> 0.05).
- Latency (update): Erlang distribution; model -0.001 + ERLA(0.0733, 3), where ERLA(β, k) has k = 3, β = 0.0733, and offset -0.001; density f(x) = β^(-k) x^(k-1) e^(-x/β)/(k-1)! for x > 0 and 0 otherwise; p value = 0.503 (> 0.05).

Table 5: Modelling DB WS performance metrics from the YCSB PRESENCE agent (MongoDB).

- Throughput: Beta distribution; model -0.001 + 1*BETA(3.65, 2.11) (β = 3.65, α = 2.11, offset -0.001); beta density as in Table 4; p value = 0.388 (> 0.05).
- Latency (read): Beta distribution; model -0.001 + 1*BETA(1.6, 2.48) (β = 1.6, α = 2.48, offset -0.001); beta density as in Table 4; p value = 0.473 (> 0.05).
- Latency (update): Erlang distribution; model -0.001 + ERLA(0.0902, 2) (k = 2, β = 0.0902, offset -0.001); Erlang density as in Table 4; p value = 0.146 (> 0.05).


As regards the HTTP WS performance analysis using PRESENCE, we report in Table 10 the model generated from the data collected by the HTTP Load PRESENCE agents. Again, we fail to model the throughput metric with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time is Beta increasing, and as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, -0.001 + 1*BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.

Summary of the obtained models. In this paper, 19 models were generated which represent the performance metrics of the SaaS Web Services evaluated with the PRESENCE approach. Out of the 19 models, 15 models, that is, 78.9% of the analyzed models, are proved to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by the reference statistical t-tests performed against the common significance level of 0.05.

5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrate the capacity of solution providers to adapt the behaviour of their product when submitted to an evaluation campaign in order to improve the performance results), this paper presented PRESENCE, an automatic framework which aims at evaluating, monitoring, and benchmarking Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of Web Services (WSs) in a stealthy way, that is, by rendering the performance evaluation as close as possible to a regular yet heavy usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.

Table 7: Modelling DB WS performance metrics from the Memtier PRESENCE agent (Redis).

- Throughput: Erlang distribution; model -0.001 + ERLA(0.0155, 7) (k = 7, β = 0.0155, offset -0.001); Erlang density as in Table 4; p value = 0.767 (> 0.05).
- Latency: Beta distribution; model -0.001 + 1*BETA(0.648, 1.72) (β = 0.648, α = 1.72, offset -0.001); beta density as in Table 4; p value = 0.902 (> 0.05).
- Transfer rate: Erlang distribution; model -0.001 + ERLA(0.0155, 7) (k = 7, β = 0.0155, offset -0.001); Erlang density as in Table 4; p value = 0.287 (> 0.05).

Table 6: Modelling DB WS performance metrics from the YCSB PRESENCE agent (Memcached).

- Throughput: Beta distribution; model -0.001 + 1*BETA(4.41, 2.48) (β = 4.41, α = 2.48, offset -0.001); beta density as in Table 4; p value = 0.106 (> 0.05).
- Latency (read): Beta distribution; model -0.001 + 1*BETA(1.64, 3.12) (β = 1.64, α = 3.12, offset -0.001); beta density as in Table 4; p value = 0.832 (> 0.05).
- Latency (update): Normal distribution; model NORM(0.311, 0.161) (mean μ = 0.311, standard deviation σ = 0.161); density f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²)) for all real x; p value = 0.794 (> 0.05).


The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to model this workload accurately, based on the configuration of the agents responsible for the performance evaluation.

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE agent.

- Throughput: no accurate single-distribution model (see text).
- Latency (response time): Beta distribution; model -0.001 + 1*BETA(1.55, 3.46) (β = 1.55, α = 3.46, offset -0.001); beta density as in Table 4; p value = 0.165 (> 0.05).

Table 9: Modelling DB WS performance metrics from the Pgbench PRESENCE agent.

- Throughput: no accurate single-distribution model (see text).
- Latency: Log-normal distribution; model -0.001 + LOGN(0.212, 0.202) (log mean μ = 0.212, log standard deviation σ = 0.202, offset -0.001); density f(x) = (1/(σx√(2π))) e^(-(ln x - μ)²/(2σ²)) for x > 0 and 0 otherwise; p value = 0.682 (> 0.05).

Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE agent (Memcached).

- Throughput: no accurate single-distribution model (see text).
- Latency: Beta distribution; model -0.001 + 1*BETA(0.99, 2.1) (β = 0.99, α = 2.1, offset -0.001); beta density as in Table 4; p value = 0.625 (> 0.05).
- Transfer rate: no accurate single-distribution model (see text).



In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. The data collected from the execution of each agent within the PRESENCE client can then be aggregated within a dedicated module and processed to exhibit a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a GA, the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1-6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145. The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.

[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.

[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems Journal, vol. 56, pp. 684–700, 2016.

[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems Journal, vol. 29, no. 1, pp. 84–106, 2013.

[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications Journal, vol. 19, no. 2, pp. 133–143, 2014.

[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.

[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications Journal, vol. 80, no. 4, pp. 1607–1623, 2015.

[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems Journal, vol. 2017, Article ID 2536040, 20 pages, 2017.

[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.

[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.

[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.

[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.

[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.

[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.

[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences, ADVCOMP 2009, pp. 187–192, Sliema, Malta, October 2009.

[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.

[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the CloudCom 2012 – 4th IEEE International Conference on Cloud Computing Technology and Science, pp. 464–471, Taipei, Taiwan, December 2012.

[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies, EIDWT 2013, pp. 83–87, Xi'an, China, September 2013.

[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.

[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.

[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings – 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2016, pp. 588–591, Cartagena, Colombia, May 2016.

[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings – 26th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2012, pp. 393–398, Fukuoka, Japan, March 2012.

[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.

[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science, CloudCom, pp. 26–34, Hong Kong, China, December 2017.

[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.

[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.

[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.

[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.

[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.

[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.

[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC '10, Indianapolis, IN, USA, June 2010.

[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.

[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.

[34] Twitter, rpc-perf – RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.

[35] PostgreSQL, "pgbench—run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.

[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.

[37] Apache, "ab—Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.

[38] The Regents of the University of California, "iPerf—the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.

[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 3: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

this article presents PRESENCE which aims at coveringa large set of real benchmarks contributing to all aspects ofthe performance analysis while hiding the true nature of theevaluation to the CSP Our proposal is detailed in the nextsection

3 PRESENCE Performance Evaluation ofServices on the Cloud

An overview of the PRESENCE framework is proposed inFigure 1 and is now depicted It is basically composed of fivemain components

(1) A set of agents each of them responsible for a specificperformance metric measuring a specific aspect ofthe WS QoS ose metrics have been designed toreflect scalability and performance in a representa-tive cloud environment In practice several referencebenchmarks have been considered and evaluated toquantify this behaviour

(2) e monitoring and modeling module responsiblefor collecting the data from the agent which is usedtogether with an application performancemodel [25]to assess the performance metric model

(3) e stealthmodule responsible to dynamically adapt andbalance the workload pattern of the combined metricagents to make the resulting traffic indistinguishablefrom a regular user traffic from the CSP point of view

(4) e virtual QoS aggregator and SLA checker modulewhich takes care of evaluating the QoS and SLAcompliance of the WS offered across the consideredCSPs is ranking will help decision maker todetermine which provider is better to use for theanalyzed WS

(5) Finally the PRESEnCE client (or Auditor) is re-sponsible for interacting with the selected CSPs andevaluating the QoS and SLA compliance of WebServices It is meant to behave as a regular client ofthe WS and can eventually be distributed acrossseveral parallel instances even if our first imple-mentation operates a single sequential client

319ePRESENCEAgents ePRESENCE approach is usedto collect the data which represent the behaviour of the CSPand reflect the performance of the delivered services In thiscontext the PRESENCE agents are responsible for a set ofperformance metrics measuring a specific aspect of the WSQoS ose metrics have been designed to reflect scalabilityand performance in a representative cloud environmentcovering different criteria summarized in Table 1 eimplementation status and coverage of these metrics withinPRESENCE at the time of writing is also detailed

Most of these metrics are measured through a set ofreference benchmarks and each agent is responsible fora specific instance of one of these benchmarks ena multiobjective optimization heuristic is applied to evaluatethe audited WS according to the different performance

domains raised by the agents In practice a low-levelhybrid approach combining Machine Learning (for deepand reinforcement learning) and evolutionary-based met-aheuristics compensate the weaknesses of one method withthe strengths of the other More specifically a GeneticProgramming Hyper-heuristic (GPHH) approach will beused to automatically generate heuristics using buildingblocks extracted from the problem definition and thebenchmarks domains Such a strategy has been employedwith success in [26 27] and we are confident it could beefficiently applied to fit the context of this work It is worth tonote that the metrics marked as not yet implemented withinPRESENCE at the time of writing are linked to the cost andthe availability of the checked service e current papervalidates the approach against a set of classicalWSs deployedin a local environment and introduces a complex modellingand evaluation for the performance metrics

As regards the stealth module, PRESENCE aims at relying on a GA [28] approach to mitigate and adapt the concurrent executions of the different agents by evolving their respective parameters, and thus the load pattern visible to the CSP. An Oracle is checked upon each iteration of the GA, based on a statistical correlation of the resulting pattern against a reference model corresponding to a regular usage. When this oracle is unable to statistically distinguish the outgoing modeled pattern of the client from a regular client, we consider that we can apply one of the found solutions for a real evaluation campaign of the checked CSP. Finally, the virtual QoS aggregator and SLA checker relies on the CSP ranking and classification proposed by Wagle et al. in [23, 24]. A minimal sketch of this stealth loop is given below.
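The following Python sketch illustrates the stealth loop under our own simplifying assumptions: a plain mutation-based GA, a synthetic load-pattern generator, and a two-sample Kolmogorov-Smirnov test standing in for the statistical Oracle. It is not the PRESENCE implementation; every function and parameter here is hypothetical.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def load_pattern(agent_params, size=500):
    # Hypothetical stand-in: synthesize inter-arrival times of the combined
    # agent traffic for a given parameter vector.
    rate = max(float(np.sum(agent_params)), 1e-3)
    return rng.exponential(1.0 / rate, size=size)

def oracle_accepts(candidate, reference, alpha=0.05):
    # Oracle stand-in: accept when a two-sample test cannot statistically
    # distinguish the candidate traffic from regular-client traffic.
    return ks_2samp(candidate, reference).pvalue > alpha

def evolve_stealth_parameters(reference, n_params=4, population=20, generations=50):
    # Toy GA: keep the half of the population whose pattern is closest to the
    # reference, mutate it, and stop as soon as the oracle accepts the best one.
    pop = rng.uniform(0.1, 5.0, size=(population, n_params))
    for _ in range(generations):
        pop = sorted(pop, key=lambda ind: ks_2samp(load_pattern(ind), reference).statistic)
        best = np.array(pop[: population // 2])
        children = np.clip(best + rng.normal(0.0, 0.2, size=best.shape), 0.01, None)
        pop = np.vstack([best, children])
        if oracle_accepts(load_pattern(pop[0]), reference):
            break
    return pop[0]

# Usage: reference traffic recorded from a regular client (here synthetic).
reference_traffic = rng.exponential(0.25, size=500)
stealth_params = evolve_stealth_parameters(reference_traffic)
print("selected agent parameters:", np.round(stealth_params, 3))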

3.2. Monitoring and Modeling for the Dynamic Adaptation of the Agents. The first step to ensure the dynamic adaptation of the workload linked to the evaluation process resides in the capacity to model this workload accurately based on the configuration of a given agent. Modelling the performance metrics will help other researchers to generate data representing the CSP's behaviour under a high load and under normal usage in just a couple of minutes, without any experiments. In this context, the multiple runs of the agents are stored and analyzed in a Machine Learning process. During the training part, the infrastructure model representing the CSP side, which contains the SaaS services, is first virtualized locally to initiate the first collection of data samples and set up the PRESENCE client (i.e., the auditor) based on a representative environment. The second phase of the training involves "real" runs permitting the modeling of each metric. In practice, PRESENCE relies on a simulation software called Arena [29] to analyse the data returned from the agents and obtain the model for each individual performance metric. Arena is a simulation software by Rockwell Corporation. It is used in different application domains, from manufacturing to supply chain and from customer service and strategies to internal business processes. It includes three modules, respectively called the Arena Input Analyser, Output Analyser, and Process Analyser. Among the three, the Input Analyser is useful for determining an appropriate distribution for collected data. It allows the user to take a sample set of raw data (e.g., latency of Cloud-based


WS) and fit it into a statistical distribution. This distribution can then be incorporated directly into a model to develop and understand the corresponding system performance.

Then, assuming such a model is available for each considered metric, PRESENCE aims at adapting its client c′ (i.e., the auditor) to ensure the evaluation process is performed in a stealth way. In this paper, 19 models have been generated across the agents; they are listed in the next section. Of course, once a model is provided, we should validate it, that is, ensure that the model is an accurate representation of the actual metric evolution and behaves in the same way.

There are many tests that could be used to validate the generated models. These tests are used to check the accuracy of the models by verifying the null hypothesis; examples are the t-test, the Wilcoxon–Mann–Whitney test, and the ANOVA test. In PRESENCE, this is achieved by using a t-test comparing the means of the raw data and of the statistical distribution generated by the agent analysis. The use of the t-test is based on the assumption that the variable in question (e.g., throughput) is normally distributed. When this assumption is in doubt, the nonparametric Wilcoxon–Mann–Whitney test is used as an alternative to the t-test [30]. As we will see, 78.9% of the generated models are proved to be an accurate representation of the WS performance metrics exhibited by the PRESENCE agents. In the next section, the modelling of the performance metrics is detailed, besides the experimental results of PRESENCE.
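As an illustration of this fit-and-validate step outside Arena, the sketch below uses SciPy as a stand-in for the Input Analyser: it fits a candidate distribution to raw latency samples and then compares the raw data with samples drawn from the fitted model, using a t-test or, when normality is doubtful, the Wilcoxon–Mann–Whitney test. The synthetic data and the chosen Gamma candidate are assumptions for the example, not PRESENCE outputs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw_latency = rng.gamma(shape=2.39, scale=0.0846, size=1000)  # stand-in for agent data

# 1) Fit a candidate distribution (here Gamma), as the Input Analyser would propose.
shape, loc, scale = stats.gamma.fit(raw_latency, floc=0.0)
fitted_samples = stats.gamma.rvs(shape, loc=loc, scale=scale, size=1000, random_state=rng)

# 2) Validate: compare the raw data against samples drawn from the fitted model.
_, p_normality = stats.shapiro(raw_latency[:500])  # rough normality check
if p_normality > 0.05:
    _, p_value = stats.ttest_ind(raw_latency, fitted_samples, equal_var=False)
else:
    _, p_value = stats.mannwhitneyu(raw_latency, fitted_samples)

# Null hypothesis: the two samples are the same; accept the model when p > 0.05.
print(f"p = {p_value:.3f} -> model {'accepted' if p_value > 0.05 else 'rejected'}")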

4. Validation and First Experimental Results

This section presents the results obtained from the PRESENCE approach within the monitoring and modeling module, as a prelude to the validation of the stealth module and the virtual QoS aggregator and SLA checker left for the sequel of this work.

FIGURE 1: Overview of the PRESENCE framework. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

TABLE 1: Performance metrics used in PRESENCE (domain, metric type, implementation status, and metrics).
Scalability (workload/performance indicators, implemented): number of transactions, number of requests, number of operations, number of records, number of fetches.
Reliability (workload, implemented): parallel connections (clients), number of pipes, number of threads, workload size.
Availability (performance indicators, not yet implemented): response time, up time, down time, load balancing.
Performance (performance indicators, implemented): latency, throughput, transfer rate, miss/hit rate.
Costs (quality indicators, not yet implemented): installing costs, running costs.
Security (security indicators, implemented): authentication, encryption, auditability.


4.1. Experimental Setup. In an attempt to validate the approach on realistic workflows, we tested PRESENCE against two traditional and core web services:

(1) A multi-DataBase (DB) WS, which offers both SQL (i.e., PostgreSQL 9.4) and NoSQL DBs. For the latter case, we deployed two reference in-memory data structure stores used as database caches and message brokers, that is, Redis 2.8.17 (redis.io) and Memcached 1.5.0 (memcached.org), as well as MongoDB 3.4, an open-source document database that provides high performance, high availability, and automatic scaling.

(2) The reference HTTP Apache server (version 2.2.22-13), which is used for testing a traditional HTTP WS on the cloud.

Building such a CSP environment was performed on top of two physical servers running Ubuntu 14.04 LTS (Trusty64), connected over a 1 GbE network.

On the PRESENCE client side, 8 agents are deployed as KVM guests, that is, virtual machines running CentOS 7.3, over 4 physical servers. Each agent runs one of the benchmarking tools listed in Table 2 to evaluate the WS performance and collect data about the CSP behaviour. Each PRESENCE agent thus measures a specific subset of performance metrics and attributes and also deals with specific kinds of cloud servers. Each measurement consists of an average over 100 runs collected by the PRESENCE agent to make the data statistically significant. The tools used for performance evaluation are several reference benchmarks such as the Yahoo Cloud Serving Benchmark (YCSB) [31], Memtier [32], Redis benchmark [33], Twitter RPC [34], Pgbench [35], HTTP Load [36], and Apache AB [37]. In addition, iperf [38] (a tool for active measurements of the maximum achievable bandwidth on IP networks) is used in the closed environment for the validation of PRESENCE, as it provides an easy testimonial of the WS access capacity. The general overview of the deployed infrastructure is provided in Figure 2.
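For illustration, a PRESENCE-style agent could drive one of these tools as in the sketch below, which invokes redis-benchmark repeatedly and averages the reported throughput. The host, flags, and run counts are assumptions for this example, not the exact PRESENCE configuration.

import csv
import io
import statistics
import subprocess

def run_redis_benchmark(host="127.0.0.1", requests=10000):
    # Run one redis-benchmark pass (GET only, CSV output) and return the
    # reported requests per second.
    result = subprocess.run(
        ["redis-benchmark", "-h", host, "-n", str(requests), "-t", "get", "--csv"],
        capture_output=True, text=True, check=True,
    )
    rows = list(csv.reader(io.StringIO(result.stdout)))  # e.g. "GET","98039.22"
    return float(rows[-1][1])

def averaged_throughput(runs=100):
    # Average over repeated runs, as done for each PRESENCE measurement.
    return statistics.mean(run_redis_benchmark() for _ in range(runs))

if __name__ == "__main__":
    print(f"mean GET throughput: {averaged_throughput(runs=5):.1f} ops/sec")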

4.2. PRESENCE Agents Evaluation Results. The targeted WS of each deployed agent is specified in Table 2. The PRESENCE approach is used to collect the data which represent the behaviour of the CSP, and these data can also indicate and evaluate the performance of the services. As mentioned in the previous section, there are many metrics that can represent the performance; PRESENCE uses some of these metrics as a workload for the CSP's servers and the others as results from the experiments. For example, the number of requests, operations, records, transactions, and fetches are metrics which represent the scalability of the CSP and are used by PRESENCE to increase the workload and observe the behaviour of the servers under it. Other metrics, like parallel or concurrent connections, number of pipes, number of threads, and workload size, represent the CSP reliability and are also used by PRESENCE to increase the workload during the test. Further metrics, like response time, up time, down time, transfer rate, latency (read and update), and throughput, indicate the CSP performance and availability, and PRESENCE uses them to evaluate the performance of the services. Because PRESENCE uses many tools to evaluate and benchmark the services, it can deal with most of these metrics, but only a few common ones, such as latency, throughput, and transfer rate, are modelled and represented in the results. Other metrics represent the security and the costs of the CSPs; all these metrics are summarized in Table 1 in the previous section. The different parameters (both input and output) which are used for the PRESENCE validation are provided in Table 3. We now provide some of the numerous traces produced by the execution of the PRESENCE agents when checking the performance of the DB and HTTP WSs.

Figure 3 shows the Redis, Memcached, and MongoDB measured WS performance under the statistically significant stress produced by the PRESENCE agent running the YCSB benchmarking tool. Figure 3(a) shows the throughput of the three backends and demonstrates that the Redis WS is the best in this metric when compared to the other two WSs, as it has the highest throughput. This trend is confirmed when the latency metric is analysed in Figures 3(b) and 3(c).

Figure 4 shows the Redis and Memcached measured WS performance under the stress produced by the PRESENCE agent running the Memtier benchmarking tool. The previous trend is again confirmed in our runs, that is, the Redis-based WS performs better than the Memcached backend for

TABLE 2: Benchmarking tools used by PRESENCE agents (tool, version, targeted WS).
YCSB 0.12.0: Redis, MongoDB, Memcached, DynamoDB, etc.
Memtier-Bench 1.2.8: Redis, Memcached.
Redis-Bench 2.4.2: Redis.
Twitter RPC-Perf 2.0.3-pre: Redis, Memcached, Apache.
PgBench 9.4.12: PostgreSQL.
Apache AB 2.3: Apache.
HTTP Load 1: Apache.
Iperf v1/v3: Iperf server.

FIGURE 2: Infrastructure deployed to validate the PRESENCE framework (cloud data center hosting the SaaS web services: Redis, Memcached, MongoDB, DynamoDB, PostgreSQL, Apache HTTP, and Iperf). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


all metrics, for example, throughput, latency, and transfer rate. As noticed in the three plots from the Memtier agents, the latency, throughput, and transfer rate of the Memcached WS increased suddenly at the end. Such behaviour was consistent across all runs and was linked to the memory saturation reached by the server process before being cleared. Still on DB WS performance evaluation, Figure 5 details the performance of the PostgreSQL WS under the stress produced by the PRESENCE agent executing the PgBench benchmarking tool. The figure shows the normalized response time of the server and the normalized (standardized) number of Transactions per Second (TPS). The response time is the latency of the service, whereas the TPS corresponds to its throughput. The performance of the WS is affected by the increased workload, which is represented by the increasing number of transactions and parallel clients. The increasing workload caused the response time to increase even while the TPS was going down; after the memory filled up, the TPS decreased again and the response time returned to a decrease. This behaviour was consistent across the runs of the PgBench agent. Finally, Figure 6 shows the average runs of the PRESENCE agent executing the HTTP Load benchmark tool when assessing the performance of the HTTP WS. We exhibit on each subfigure the behaviour of both the latency and the throughput against an increasing number of fetches and parallel clients, which increases the workload of the WS.

Many more traces of all considered agent runs are available but were not displayed in this article for the sake of conciseness. Overall, and to conclude on the collected traces from the many agent runs depicted in this section, we were able to reflect several complementary aspects of the two WSs considered in the scenario of this experimental validation. Yet, as part of the contributions of this article, the generation of accurate models for these evaluations is crucial. They are detailed in the next section.

4.3. WS Performance Metrics Modeling. Beyond the description of the PRESENCE framework, the main contribution of this article resides more in the modeling of the measured WS performance metrics from the data collected by the PRESENCE agents than in the runs themselves depicted in the previous section. Once these models are available, they can be used to estimate and dynamically adapt the behaviour of the PRESENCE client (which combines and schedules the execution of all considered agents in parallel) so as to hide the true nature of the evaluation by making it indistinguishable from regular client traffic. But this corresponds to the next step of our work. In this paper, we wish to illustrate the models developed from the PRESENCE agent evaluations reported in the previous section. The main objective of the developed models is to understand the system performance behaviour relative to the various assumptions and input parameters discussed in previous sections. As mentioned in Section 3.2, we rely on the simulation software called Arena to analyse the data returned from the agents and obtain the model for each individual performance metric. We have performed this analysis for each agent, and the results of the models are presented in the tables below. Of course, such a contribution is pertinent only if the generated model is validated, a process consisting in ensuring the accurate representation of the actual system and its behaviour. This is achieved by using a set of statistical t-tests comparing the means of the raw data and of the statistical distribution generated by the Input Analyzer of the Arena system. If the result shows that both samples are analytically similar, then the model developed from the statistical distribution is an accurate representation of the actual system and behaves in the same way. For each model detailed in the tables, the outcome of the t-test, in the form of the computed p value (against the common significance level of 0.05), is provided and demonstrates, where applicable, the accuracy of the proposed models.

TABLE 3: Input/output metrics for each PRESENCE agent.
Agents: YCSB, Memtier-Bench, Redis-Bench, Twitter RPC-Perf, PgBench, Apache AB, HTTP Load, Iperf.
Input parameters: transactions, requests, operations, records, fetches, parallel clients, pipes, threads, workload size.
Output parameters: throughput, latency, read latency, update latency, cleanup latency, transfer rate, response time, misses, hits.


Tables 4–6 show the models of the performance metrics, such as latency (read and update) and throughput, for the Redis, Memcached, and MongoDB services by using the data collected with the YCSB PRESENCE agents. In particular, Table 4 shows that the Redis throughput is Beta increasing, the Redis latency (read) is Gamma increasing, and the Redis latency (update) is Erlang increasing with respect to the configuration of the experimental setup discussed in the previous sections. Moreover, as the p values (0.757, 0.394, and 0.503) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted as compared to the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, −0.001 + 1*BETA(3.63, 3.09), latency (read), that is,

FIGURE 3: YCSB Agent Evaluation of the NoSQL DB WS: (a) throughput, (b) update latency, and (c) read latency. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


−0.001 + GAMM(0.0846, 2.39), and latency (update), that is, −0.001 + ERLA(0.0733, 3), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 3.

Similarly, Table 5 shows that the MongoDB throughput and latency (read) are Beta increasing and the latency (update) is Erlang increasing with respect to the configuration setup. Moreover, as the p values (0.388, 0.473, and 0.146) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted as compared to the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, −0.001 + 1*BETA(3.65, 2.11), latency (read), that is, −0.001 + 1*BETA(1.6, 2.48), and latency (update), that is, −0.001 + ERLA(0.0902, 2), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Finally, Table 6 shows that the Memcached throughput and latency (read) are Beta increasing and the latency (update) is Normal increasing with respect to the configuration setup. Again, as the p values (0.106, 0.832, and 0.794) are greater than 0.05, the null hypothesis is accepted. Hence, the models for throughput, that is, −0.001 + 1*BETA(4.41, 2.48), latency (read), that is, −0.001 + 1*BETA(1.64, 3.12), and latency (update), that is, NORM(0.311, 0.161), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.
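To show how such Arena expressions map onto standard distributions, the sketch below (our own mapping, using SciPy) draws synthetic samples from two of the reported Redis models: the Beta throughput model −0.001 + 1*BETA(3.63, 3.09), whose samples are presumably normalized, and the Erlang update-latency model −0.001 + ERLA(0.0733, 3), an Erlang being a Gamma distribution with integer shape k = 3 and scale β = 0.0733.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
offset = -0.001

# Arena-style BETA(beta, alpha): beta maps to SciPy's first shape parameter.
redis_throughput = offset + 1.0 * stats.beta.rvs(3.63, 3.09, size=10_000, random_state=rng)

# Arena-style ERLA(beta, k): an Erlang is a Gamma with integer shape k and scale beta.
redis_update_latency = offset + stats.erlang.rvs(3, scale=0.0733, size=10_000, random_state=rng)

print(f"synthetic throughput sample mean     : {redis_throughput.mean():.3f}")
print(f"synthetic update-latency sample mean : {redis_update_latency.mean():.3f}")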

FIGURE 4: Memtier Agent Evaluation of the NoSQL DB WS: (a) throughput, (b) latency, and (c) transfer rate. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


The above analysis is repeated for the Memtier PRESENCE agent; the corresponding models are provided in Tables 7 and 8, which summarize the computed models for the WS performance metrics such as latency, throughput, and transfer rate for the Redis and Memcached services by using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput is Erlang increasing with respect to time and the assumptions in the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted as compared to the alternate hypothesis (the two samples are different). Hence, the Erlang distribution

FIGURE 5: PgBench Agent on the SQL DB WS: normalized TPS and normalized response time (latency) against the number of transactions per client and the number of parallel clients. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

FIGURE 6: HTTP Load Agent Evaluation on the HTTP WS: (a) normalized throughput and normalized latency against the number of fetches and parallel clients; (b) average latency and average connection latency. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


model of throughput, that is, −0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a). The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, latency and transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) for a multimodal distribution of the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency is Beta increasing, and as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, −0.001 + 1*BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the model generated from the Pgbench PRESENCE agent. The first row in the table shows the approximation (blue curve line) for a multimodal distribution of the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric is Log normal increasing, and as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted as compared to the alternate hypothesis, that is, the two samples are different. Hence, the model for latency, that is, −0.001 + LOGN(0.212, 0.202), is an accurate representation

TABLE 4: Modelling DB WS performance metrics from the YCSB PRESENCE Agent: Redis.
Throughput: Beta distribution; model −0.001 + 1*BETA(3.63, 3.09) with β = 3.63, α = 3.09, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.757 (>0.05).
Latency (read): Gamma distribution; model −0.001 + GAMM(0.0846, 2.39) with β = 0.0846, α = 2.39, offset = −0.001; pdf f(x) = β^(−α) x^(α−1) e^(−x/β)/Γ(α) for x > 0 and 0 otherwise, where Γ(α) = ∫₀^∞ t^(α−1) e^(−t) dt is the complete gamma function; p value (t-test) = 0.394 (>0.05).
Latency (update): Erlang distribution; model −0.001 + ERLA(0.0733, 3) with k = 3, β = 0.0733, offset = −0.001; pdf f(x) = β^(−k) x^(k−1) e^(−x/β)/(k−1)! for x > 0 and 0 otherwise; p value (t-test) = 0.503 (>0.05).

TABLE 5: Modelling DB WS performance metrics from the YCSB PRESENCE Agent: MongoDB.
Throughput: Beta distribution; model −0.001 + 1*BETA(3.65, 2.11) with β = 3.65, α = 2.11, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.388 (>0.05).
Latency (read): Beta distribution; model −0.001 + 1*BETA(1.6, 2.48) with β = 1.6, α = 2.48, offset = −0.001; same pdf form as above; p value (t-test) = 0.473 (>0.05).
Latency (update): Erlang distribution; model −0.001 + ERLA(0.0902, 2) with k = 2, β = 0.0902, offset = −0.001; pdf f(x) = β^(−k) x^(k−1) e^(−x/β)/(k−1)! for x > 0 and 0 otherwise; p value (t-test) = 0.146 (>0.05).


of the WS performance metrics exhibited by the PRESENCE agents in Figure 5. As regards the HTTP WS performance analysis using PRESENCE, we report in Table 10 the model generated from the performance data of the HTTP Load PRESENCE agent. Again, we can see that we fail to model the throughput metric with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time is Beta increasing, and as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, −0.001 + 1*BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.

Summary of the obtained models. In this paper, 19 models were generated which represent the performance metrics of the SaaS Web Services by using the PRESENCE approach. Out of the 19 models, 15 models, that is, 78.9% of the analyzed models, are proved to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by the reference statistical t-tests performed against the common significance level of 0.05.

5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrate the capacity of solution providers to adapt the behaviour of their product when submitted to an evaluation campaign in order to improve the performance results), this paper presents PRESENCE, an automatic framework

TABLE 7: Modelling DB WS performance metrics from the Memtier PRESENCE Agent: Redis.
Throughput: Erlang distribution; model −0.001 + ERLA(0.0155, 7) with k = 7, β = 0.0155, offset = −0.001; pdf f(x) = β^(−k) x^(k−1) e^(−x/β)/(k−1)! for x > 0 and 0 otherwise; p value (t-test) = 0.767 (>0.05).
Latency: Beta distribution; model −0.001 + 1*BETA(0.648, 1.72) with β = 0.648, α = 1.72, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.902 (>0.05).
Transfer rate: Erlang distribution; model −0.001 + ERLA(0.0155, 7) with k = 7, β = 0.0155, offset = −0.001; same pdf form as the throughput model; p value (t-test) = 0.287 (>0.05).

TABLE 6: Modelling DB WS performance metrics from the YCSB PRESENCE Agent: Memcached.
Throughput: Beta distribution; model −0.001 + 1*BETA(4.41, 2.48) with β = 4.41, α = 2.48, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.106 (>0.05).
Latency (read): Beta distribution; model −0.001 + 1*BETA(1.64, 3.12) with β = 1.64, α = 3.12, offset = −0.001; same pdf form as above; p value (t-test) = 0.832 (>0.05).
Latency (update): Normal distribution; model NORM(0.311, 0.161) with mean μ = 0.311 and standard deviation σ = 0.161; pdf f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)) for all real x; p value (t-test) = 0.794 (>0.05).


which aims at evaluating, monitoring, and benchmarking Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of WSs in a stealth way, that is, by rendering the performance evaluation as close as possible to a regular yet heavy usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.

The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to model this workload accurately based on the

TABLE 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE Agent.
Throughput: no single-distribution model could be fitted (see text).
Latency: Beta distribution; model −0.001 + 1*BETA(1.55, 3.46) with β = 1.55, α = 3.46, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.165 (>0.05).

TABLE 9: Modelling DB WS performance metrics from the Pgbench PRESENCE Agent.
Throughput: no single-distribution model could be fitted (see text).
Latency: Log normal distribution; model −0.001 + LOGN(0.212, 0.202) with log mean μ = 0.212, log standard deviation σ = 0.202, offset = −0.001; pdf f(x) = (1/(σx√(2π))) e^(−(ln x − μ)²/(2σ²)) for x > 0 and 0 otherwise; p value (t-test) = 0.682 (>0.05).

TABLE 8: Modelling DB WS performance metrics from the Memtier PRESENCE Agent: Memcached.
Throughput: no single-distribution model could be fitted (see text).
Latency: Beta distribution; model −0.001 + 1*BETA(0.99, 2.1) with β = 0.99, α = 2.1, offset = −0.001; pdf f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 and 0 otherwise, where B(β, α) = ∫₀¹ t^(β−1)(1−t)^(α−1) dt is the complete beta function; p value (t-test) = 0.625 (>0.05).
Transfer rate: no single-distribution model could be fitted (see text).


configuration of the agents responsible for the performance evaluation.

In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. Then, the data collected from the execution of each agent within the PRESENCE client can be aggregated within a dedicated module and treated to exhibit a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a GA, the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1–6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145. The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.
[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.
[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684–700, 2016.
[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications, vol. 19, no. 2, pp. 133–143, 2014.
[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.
[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications, vol. 80, no. 4, pp. 1607–1623, 2015.
[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems, vol. 2017, Article ID 2536040, 20 pages, 2017.
[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.
[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.
[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.
[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.
[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.
[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.
[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009), pp. 187–192, Sliema, Malta, October 2009.
[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.
[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 464–471, Taipei, Taiwan, December 2012.
[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies (EIDWT 2013), pp. 83–87, Xi'an, China, September 2013.
[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.
[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.
[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2016), pp. 588–591, Cartagena, Colombia, May 2016.
[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA 2012), pp. 393–398, Fukuoka, Japan, March 2012.
[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.
[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science (CloudCom), pp. 26–34, Hong Kong, China, December 2017.
[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.
[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.
[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.
[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.
[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.
[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.
[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10), Indianapolis, IN, USA, June 2010.
[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.
[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.
[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.
[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.
[36] A. Labs, "HTTP_LOAD: multiprocessing HTTP test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.
[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.
[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.
[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.



Page 4: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

WS) and fit it into a statistical distribution is distributionthen can be incorporated directly into a model to developand understand the corresponding system performance

en assuming such a model is available for eachconsidered metric PRESENCE aims at adapting its client cprime

(ie the auditor) to ensure the evaluation process is per-formed in a stealth way In this paper 19 models have beengenerated for each agentmdashthey are listed in the next sectionOf course once a model is provided we should validate itthat is ensure that the model is an accurate representation ofthe actual metric evolution and behaves in the same way

ere are many tests that could be used to validate on themodels generated ese tests are used to check the accuracyof the models by verifying on the null hypothesis e testsare such as t-test WilcoxonndashMannndashWhitney test andAnova test In PRESENCE this is achieved by using t-test bycomparing means of raw data and statistical distributiongenerated by the agent analysis e use of t-test is based onthe fact that the variable(s) in question (eg roughput)is normally distributed When this assumption is in doubtthe nonparametric WilcoxonndashMannndashWhitney test is usedas an alternative to the t-test [30] As we will see 789 ofthe generated models are proved as an accurate represen-tation of the WS performance metrics exhibited by thePRESENCE agents In the next section the modelling of theperformance metrics are detailed besides the experimentresults of PRESENCE

4 Validation and First Experimental Results

is section presents the results obtained from the PRES-ENCE approach within themonitoring and modelingmoduleas a prelude to the validation of the stealth module and thevirtual QoS aggregator and SLA checker left for the sequel ofthis work

Example redis memcachedmongoDB postgreSQL etc

Cloud provider n

Web service A Web service A

Cloud provider 1

Client cA1

Client cA2

Client cAn

Client cB1

[Distributed] PRESEnCE client crsquo (auditor)

Client cB2

Client cBm

WS performanceevaluation

Stealth moduledynamic load adaptation

Monitoringmodeling

WorkloadSLA analysis

Performance evaluation

On-demand evaluation ofSaaS web services

across multi-cloud providersbased on

PRESEnCE

Agentmetric 1 Agentmetric 2 Agentmetric k

Virtual QoSaggregatorSLA checker

SLA

QoS

Val

idat

or

Predictiveanalytics

Analyze Ranking

FIGURE 1 Overview of the PRESENCE framework e figure is reproduced from Ibrahim et al [14] (under the Creative CommonsAttribution Licensepublic domain)

TABLE 1 Performance metrics used in PRESENCE

Domain MetricImplementation status Metric type

Scalability

Number of transactions Workloadperformanceindicator

Number of requests Number of operations Number of records Number of fetches

Reliability

Parallel connections (clients)

WorkloadNumber of pipes Number of threads Workload size

Availability

Response time times

Performanceindicator

Up time times

Down time times

Load balancing times

Performance

Latency Performanceindicator

roughput Transfer rate Misshit rate

Costs Installing costs times QualityindicatorRunning costs times

SecurityAuthentication Security

indicatorEncryption Auditability

4 Mobile Information Systems

41 Experimental Setup In an attempt to validate the ap-proach on realistic workflows we tested PRESENCE againsttwo traditional and core web services

(1) A multi-DataBase (DB) WS which offers both SQL(ie PostgreSQL 94) and NoSQL DBs For the latercase we deployed two reference in-memory datastructure stores used as a database cache andmessagebroker that is Redis 2817 (redisio) and Memcached150 (memcachedorg) as well as MongoDB 34 anopen-source document database that provides highperformance high availability and automatic scaling

(2) e reference HTTP Apache server (version 2222-13) which is used for testing a traditional HTTP WSon the cloud

Building such a CSP environment was performed on topof two physical servers running Ubuntu 1404 LTS (Trusty64) connected over a 1 GbE network

On the PRESENCE client side 8 agents are deployed asKVM guests that is virtual machines running CentOS 73over 4 physical servers Each agent is running one of thebenchmarking tool listed in Table 2 to evaluate the WSperformance and collecting data about the CSP behaviourEach PRESENCE agent thus measures a specific subset ofperformance metrics and attributes and also deals with spe-cific kinds of cloud servers Eachmeasurement is consisting ofan average over 100 runs collected by the PRESENCE agent tomake the data statistically significant e tools used forperformance evaluation are several reference benchmarkingsuch as Yahoo Cloud Serving (YCSB) [31] Memtire [32]Redis benchmark [33] Twitter RPC [34] Pgbench [35] HTTPload [36] and Apache AB [37] In addition iperf [38] (a toolfor active measurements of the maximum achievable band-width on IP networks) is used in the closed environment forthe validation of PRESENCE as it provides an easy testimonialfor the WS access capacity e general overview of thedeployed infrastructure is provided in Figure 2

42 PRESENCE Agents Evaluation Results e targeted WSof each deployed agent is precised in Table 2e PRESENCEapproach is used to collect the data which represent thebehaviour of the CSP and these data also can indicate andevaluate the performance of the services As mentioned inthe previous section there are many metrics that can rep-resent the performance PRESENCE uses some of thesemetrics as a workload to the CSPrsquos servers and the others asresults from the experiments For example the number ofrequests operations records transactions and fetches aremetrics which representing the scalability of the CSP and areused by PRESENCE to increase the workload to see thebehaviour of the servers under the workload Other metricslike parallel or concurrency connections number of pipesnumber of threads and the workload size are representingthe CSP reliability are also used by PRESENCE to increasethe workload during the test Other metrics like responsetime up time down time transfer rate latency (read andupdate) and throughput are indicating the CSP perfor-mance and availability and PRESENCE used them to

evaluate the services performance Because PRESENCE usesmany tools to evaluate and benchmark the services it candeal with most of the metrics But there are two or threecommon metrics we will model and represent them in theresults such as latency throughput and transfer rate ereare other metrics that represent the security and the costs ofthe CSPs and all those metrics are summarized in Table 1 inthe previous section e different parameters (both inputand output) which are used for the PRESENCE validation areprovided in Table 3 We now provide some of the numeroustraces produced by the execution of the PRESENCE agentswhen checking the performance of the DB and HTTP WSs

Figure 3 shows the Redis Memcached and Mongomeasured WS performance under the statistically significantstress produced by the PRESENCE agent running the YCSBbenchmarking tool Figure 3(a) shows the throughput of thethree backends and demonstrates that the Redis WS is thebest in this metric when compared to the other two WSswhere it has the highest throughput is trend is confirmedwhen the latency metric is analysed in Figures 3(b) and 3(c)

Figure 4 shows the Redis and Memcached measured WSperformance under the stress produced by the PRESENCEagent running the Memtier benchmarking tool e pre-vious trend is again confirmed in our runs that is the Redis-based WS performs better than the Memcached backend for

Table 2 Benchmarking tools used by PRESENCE agents

Benchmark tool Version Targeted WS

YCSB 0120 Redis MongoDB memcachedDynamoDB etc

Memtire-Bench 128 Redis memcachedRedis-Bench 242 RedisTwitter RPC-Perf 203-pre Redis memcached ApachePgBench 9412 PostgresqlApache AB 23 ApacheHTTP Load 1 ApacheIperf v1 v3 Iperf server

Cloud data center

Web serverDatabase server

SaaS web services

- Redis- Memcached- MongoDB- DynamoDB- Postgresql- Apache HTTP- Iperf

Cloudservers

FIGURE 2 Infrastructure deployed to validate the PRESENCE frame-work e figure is reproduced from Ibrahim et al [14] (under theCreative Commons Attribution Licensepublic domain)

Mobile Information Systems 5

all metrics for example throughput latency and transferrate As noticed in the three plots from the Memtier agentsthe latency throughput and transfer rate of the MemcachedWS have increased suddenly in the end Such behaviour wasconsistent across all runs and was linked to the memorysaturation reached by the server process before being clearerStill upon DB WS performance evaluation Figure 5 detailsthe performance of the PostgreSQL WS under the stressproduced by the PRESENCE agent executing the PgBenchbenchmarking tool e figure shows the normalized re-sponse time of the server and the normalized (standardized)number of Transactions per Second (TPS) e responsetime is the latency of the service when the TPS correspondsto its throughput e performance of the WS is affected bythe increased workload which is represented by the in-creasing number of TPSs and parallel clients e increasingof the TPS let the response time increasing even if the TPSwas going down and after filling in the memory the TPSdecreased again and response time returned back again toa decrease is behaviour was consistent across the runs ofthe PgBench agent Finally Figure 6 shows the average runsof the PRESENCE agent executing the HTTP Load bench-mark tool when assessing the performance of the HTTPWSWe exhibit on each subfigure the behaviour of both the la-tency and throughput against an increasing number of fetchesand parallel clients which increases the workload of the WS

Many more traces of all considered agent runs areavailable but were not displayed in this article for the sake ofconciseness Overall and to conclude on the collected tracesfrom the many agent runs depicted in this section we wereable to reflect several complementary aspects of the twoWSsconsidered in the scenario of this experimental validationYet as part of the contributions of this article the generationof accurate models for these evaluations is crucial ey aredetailed in the next section

43 WS Performance Metrics Modeling Outside the de-scription of the PRESENCE framework the main contri-bution of this article resides more on the modeling of themeasured WS performance metrics from the data collectedby the PRESENCE agents rather than the runs in themselvesdepicted in the previous section Once these models areavailable they can be used to estimate and dynamically adaptthe behaviour of the PRESENCE client (which combine andschedule the execution of all considered agents in parallel) soas to hide the true nature of the evaluation by making itindistinguishable from a regular client traffic But this cor-responds to the next step of our work In this paper we wishto illustrate the developed model from the PRESENCE agentsevaluations reported in the previous section e main ob-jective of the developed model is to understand the systemperformance behaviour relative to various assumptions andinput parameters discussed in previous sections As men-tioned in Section 32 we rely on the simulation software calledArena to analyse the data returned from the agents and get themodel for each individual performance metric We haveperformed this analysis for each agents and the results of themodels are presented in the below tables Of course sucha contribution is pertinent only if the generated model isvalidatedmdasha process consisting in ensuring the accuraterepresentation of the actual system and its behaviour is isachieved by using a set of statistical t-tests by comparing themeans of raw data and statistical distribution generated by theInput Analyzer of the Arena system If the result shows thatboth samples are analytically similar then the model de-veloped from statistical distribution is an accurate repre-sentation of the actual system and behaves in the same wayFor each model detailed in the tables the outcomes of the t-tests in the form of the computed p value (against thecommon significance level of 005) is provided and dem-onstrate if present the accuracy of the proposed models

Table 3 Inputoutput metrics for each PRESENCE Agent

PRESENCE agentInput parameters

Transactions Requests Operations Records Fetches Parallelclients Pipes reads Workload

sizeYCSB Memtire-Bench Redis-Bench Twitter RPC-Perf PgBench Apache AB HTTP Load Iperf

PRESENCE agentOutput parameters

roughput Latency Readlatency

Updatelatency

CleanUplatency

Transferrate

Responsetime Miss Hits

YCSB Memtire-Bench Redis-Bench Twitter RPC-Perf PgBench Apache AB HTTP Load Iperf

6 Mobile Information Systems

Tables 4ndash6 show the model of the performance metricssuch as latency (read and update) and throughput for theRedis Memcached and MongoDB services by using thedata collected with the YCSB PRESENCE agents Inparticular Table 4 shows that the Redis throughput is Betaincreasing Redis latency (read) is Gamma increasingand Redis latency (update) is Erlang increasing with

respect to the configuration of the experimental setupdiscussed in the previous sections Moreover as p values(0757 0394 and 0503) are greater than 005 the nullhypothesis (the two samples are the same) is accepted ascompared to alternate hypothesis (the two samples aredifferent) Hence the models for throughput that isminus0001 + 1lowastBETA(363 309) latency (read) that is

0 2000 4000 6000 8000 100000

2000

4000

6000

8000

10000

Number of operations

Thro

ughp

ut (o

pss

ec)

Servers (throughput)RedisMongoDBMemcached

0 2000 4000 6000 8000 10000Number of records

(a)

RedisMongoDBMemcached

0 2000 4000 6000 8000 10000

0

10000

20000

30000

40000

50000

Number of operations

Upd

ate l

aten

cy (μ

s)

Servers (update latency)

0 2000 4000 6000 8000 10000Number of records

(b)

RedisMongoDBMemcached

Servers (read latency)

0 2000 4000 6000 8000 10000

0

20000

40000

60000

Number of operations

Read

late

ncy

(μs)

0 2000 4000 6000 8000 10000Number of records

(c)

FIGURE 3 YCSB Agent Evaluation of the NoSQL DBWS (a) roughput (b) update latency and (c) read latency e figure is reproducedfrom Ibrahim et al [14] (under the Creative Commons Attribution Licensepublic domain)

Mobile Information Systems 7

minus0001 + GAMM(00846 239) and latency (update) thatis minus0001 + ERLA(00733 3) are an accurate represen-tation of the WS performance metrics exhibited by thePRESENCE agent in Figure 3

Similarly Table 5 shows that MongoDB throughput andlatency (read) are Beta increasing and latency (update) isErlang increasing with respect to the configuration setupMoreover as p values (0388 0473 and 0146) are greaterthan 005 the null hypothesis (the two samples are the same)is accepted as compared to alternate hypothesis (the twosamples are different) Hence the models for throughputthat is minus0001 + 1lowastBETA(365 211) latency (read) that isminus0001 + 1lowastBETA(16 248) and latency (update) that is

minus0001 + ERLA(00902 2) are an accurate representation ofthe WS performance metrics exhibited by the PRESENCEagents in Figure 3

Finally Table 6 shows that Memcached throughputand latency (read) is Beta increasing and latency (update) isNormal increasing with respect to configuration setupAgain as p values (0106 0832 and 0794) are greater than005 the null hypothesis is accepted Hence the models forthroughput that is minus0001 + 1lowastBETA(441 248) latency(read) that is minus0001 + 1lowastBETA(164 312) and latency(update) that is NORM(0311 0161) are an accuraterepresentation of the WS performance metrics exhibited bythe PRESENCE agents in Figure 3

Figure 4: Memtier Agent Evaluation of the NoSQL DB WS: (a) throughput (ops/sec), (b) latency (s), and (c) transfer rate (KB/sec) for Redis and Memcached, plotted against the number of requests (0–10000); the "Server restarted" annotation marks the point where the server was restarted. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


The above analysis is repeated for the Memtier PRESENCE agent; the corresponding models are provided in Tables 7 and 8, which summarize the computed models for WS performance metrics such as latency, throughput, and transfer rate for the Redis and Memcached services, using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput increases and follows an Erlang distribution with respect to time and the assumptions of the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted against the alternative hypothesis (the two samples are different).

Figure 5: PgBench Agent on the SQL DB WS: normalized TPS and normalized response time (latency) plotted against the number of transactions per client (0–10000) and the number of parallel clients (20, 50, 80). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 6: HTTP Load Agent Evaluation on the HTTP WS: (a) normalized throughput (fetches/sec) and normalized latency plotted against the number of fetches (0–200000) and the number of parallel clients (300, 650, 1000); (b) average latency (msec) and average connection latency (msec) plotted against the number of parallel clients (200–1000) and the number of fetches (30000–190000). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


Hence, the Erlang model of throughput, −0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a). The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, latency and transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) for a multimodal distribution of the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency increases and follows a Beta distribution and, as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, −0.001 + 1*BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the model generated for the PgBench PRESENCE agents. The first row of the table shows the approximation (blue curve line) for a multimodal distribution of the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric increases and follows a lognormal distribution and, as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted against the alternative hypothesis, that is, the two samples are different. Hence, the model for latency, −0.001 + LOGN(0.212, 0.202), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 5.
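For completeness, the sketch below shows how the validated PgBench latency model of Table 9 could be stored and re-checked against freshly collected samples. Interpreting μ = 0.212 and σ = 0.202 as the parameters of the underlying normal (matching the density reconstructed in Table 9), the SciPy mapping lognorm(s=σ, scale=exp(μ)) with the −0.001 offset as a location shift, and the re-validation by a two-sample t-test are assumptions of this sketch.

```python
# Sketch: the PgBench latency model -0.001 + LOGN(0.212, 0.202) as a frozen SciPy
# distribution (mu/sigma assumed to parameterize the underlying normal), plus a
# re-validation check in the spirit of the t-tests used above.
import numpy as np
from scipy import stats

MU, SIGMA, OFFSET = 0.212, 0.202, -0.001
pgbench_latency = stats.lognorm(s=SIGMA, scale=np.exp(MU), loc=OFFSET)

reference = pgbench_latency.rvs(size=100, random_state=3)  # samples from the stored model
fresh = pgbench_latency.rvs(size=100, random_state=4)      # stand-in for newly collected data
_, p = stats.ttest_ind(fresh, reference, equal_var=False)
print(f"p = {p:.3f} -> model still considered valid: {p > 0.05}")
```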

Table 4: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Redis.
Throughput: Beta distribution; model −0.001 + 1*BETA(3.63, 3.09), where BETA(β, α) has β = 3.63, α = 3.09, offset −0.001, and f(x) = x^(β−1) (1−x)^(α−1) / B(β, α) for 0 < x < 1 and 0 otherwise, with the complete beta function B(β, α) = ∫_0^1 t^(β−1) (1−t)^(α−1) dt; p value (t-test) 0.757 (> 0.05).
Latency (read): Gamma distribution; model −0.001 + GAMM(0.0846, 2.39), where GAMM(β, α) has β = 0.0846, α = 2.39, offset −0.001, and f(x) = β^(−α) x^(α−1) e^(−x/β) / Γ(α) for x > 0 and 0 otherwise, with the complete gamma function Γ(α) = ∫_0^∞ t^(α−1) e^(−t) dt; p value (t-test) 0.394 (> 0.05).
Latency (update): Erlang distribution; model −0.001 + ERLA(0.0733, 3), where ERLA(β, k) has k = 3, β = 0.0733, offset −0.001, and f(x) = β^(−k) x^(k−1) e^(−x/β) / (k−1)! for x > 0 and 0 otherwise; p value (t-test) 0.503 (> 0.05).

Table 5: Modelling DB WS performance metrics from the YCSB PRESENCE agent: MongoDB.
Throughput: Beta distribution; model −0.001 + 1*BETA(3.65, 2.11) with β = 3.65, α = 2.11, offset −0.001, and f(x) and B(β, α) as given in Table 4; p value (t-test) 0.388 (> 0.05).
Latency (read): Beta distribution; model −0.001 + 1*BETA(1.6, 2.48) with β = 1.6, α = 2.48, offset −0.001; p value (t-test) 0.473 (> 0.05).
Latency (update): Erlang distribution; model −0.001 + ERLA(0.0902, 2) with k = 2, β = 0.0902, offset −0.001, and f(x) as given in Table 4; p value (t-test) 0.146 (> 0.05).


As regards the HTTP WS performance analysis using PRESENCE, we report in Table 10 the model generated from the HTTP Load PRESENCE agents. Again, we fail to model the throughput metric with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time increases and follows a Beta distribution and, as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, −0.001 + 1*BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.

Summary of the obtained models. In this paper, 19 models were generated to represent the performance metrics of the SaaS Web Services evaluated with the PRESENCE approach. Out of these 19 models, 15, that is, 78.9% of the analyzed models, are shown to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by the reference statistical t-tests performed against the common significance level of 0.05.
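The 78.9% figure can be re-derived directly from the p values reported in Tables 4–10, as in the short tally below. The dictionary layout is an assumption of this illustration, and the metrics left without a single-distribution fit (the empty rows of Tables 8–10) are counted as models that are not validated.

```python
# Sketch: tally of validated models using the p values of Tables 4-10 (None marks the
# metrics for which no single statistical distribution was obtained).
p_values = {
    "YCSB/Redis":        {"throughput": 0.757, "latency_read": 0.394, "latency_update": 0.503},
    "YCSB/MongoDB":      {"throughput": 0.388, "latency_read": 0.473, "latency_update": 0.146},
    "YCSB/Memcached":    {"throughput": 0.106, "latency_read": 0.832, "latency_update": 0.794},
    "Memtier/Redis":     {"throughput": 0.767, "latency": 0.902, "transfer_rate": 0.287},
    "Memtier/Memcached": {"throughput": None,  "latency": 0.625, "transfer_rate": None},
    "PgBench":           {"throughput": None,  "latency": 0.682},
    "HTTP Load":         {"throughput": None,  "latency": 0.165},
}
all_models = [p for per_agent in p_values.values() for p in per_agent.values()]
validated = [p for p in all_models if p is not None and p > 0.05]
print(len(all_models), len(validated), f"{100 * len(validated) / len(all_models):.1f}%")  # 19 15 78.9%
```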

5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrated the capacity of solution providers to adapt the behaviour of their products when submitted to an evaluation campaign in order to improve the performance results), this paper presents PRESENCE, an automatic framework

Table 7: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Redis.
Throughput: Erlang distribution; model −0.001 + ERLA(0.0155, 7) with k = 7, β = 0.0155, offset −0.001, and f(x) as given in Table 4; p value (t-test) 0.767 (> 0.05).
Latency: Beta distribution; model −0.001 + 1*BETA(0.648, 1.72) with β = 0.648, α = 1.72, offset −0.001, and f(x) and B(β, α) as given in Table 4; p value (t-test) 0.902 (> 0.05).
Transfer rate: Erlang distribution; model −0.001 + ERLA(0.0155, 7) with k = 7, β = 0.0155, offset −0.001; p value (t-test) 0.287 (> 0.05).

Table 6: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Memcached.
Throughput: Beta distribution; model −0.001 + 1*BETA(4.41, 2.48) with β = 4.41, α = 2.48, offset −0.001, and f(x) and B(β, α) as given in Table 4; p value (t-test) 0.106 (> 0.05).
Latency (read): Beta distribution; model −0.001 + 1*BETA(1.64, 3.12) with β = 1.64, α = 3.12, offset −0.001; p value (t-test) 0.832 (> 0.05).
Latency (update): Normal distribution; model NORM(0.311, 0.161) with mean μ = 0.311 and standard deviation σ = 0.161, where f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)) for all real x; p value (t-test) 0.794 (> 0.05).


which aims at evaluating, monitoring, and benchmarking Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of WSs in a stealthy way, that is, by rendering the performance evaluation as close as possible to a regular, yet heavy, usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.

The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to accurately model this workload based on the

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE agent.
Throughput: no single-distribution model was obtained (see text).
Latency: Beta distribution; model −0.001 + 1*BETA(1.55, 3.46) with β = 1.55, α = 3.46, offset −0.001, and f(x) and B(β, α) as given in Table 4; p value (t-test) 0.165 (> 0.05).

Table 9: Modelling DB WS performance metrics from the PgBench PRESENCE agent.
Throughput: no single-distribution model was obtained (see text).
Latency: Lognormal distribution; model −0.001 + LOGN(0.212, 0.202) with LogMean μ = 0.212, LogStd σ = 0.202, offset −0.001, where f(x) = (1/(σx√(2π))) e^(−(ln x − μ)²/(2σ²)) for x > 0 and 0 otherwise; p value (t-test) 0.682 (> 0.05).

Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Memcached.
Throughput: no single-distribution model was obtained (see text).
Latency: Beta distribution; model −0.001 + 1*BETA(0.99, 2.1) with β = 0.99, α = 2.1, offset −0.001, and f(x) and B(β, α) as given in Table 4; p value (t-test) 0.625 (> 0.05).
Transfer rate: no single-distribution model was obtained (see text).


configuration of the agents responsible for the performance evaluation.

In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. The data collected from the execution of each agent within the PRESENCE client can then be aggregated within a dedicated module and processed to produce a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a GA, the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1–6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145: The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.
[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.
[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684–700, 2016.
[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications, vol. 19, no. 2, pp. 133–143, 2014.
[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.
[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications, vol. 80, no. 4, pp. 1607–1623, 2015.
[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems, vol. 2017, Article ID 2536040, 20 pages, 2017.
[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.
[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.
[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.
[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.
[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.
[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.
[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009), pp. 187–192, Sliema, Malta, October 2009.
[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.
[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 464–471, Taipei, Taiwan, December 2012.
[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies (EIDWT 2013), pp. 83–87, Xi'an, China, September 2013.
[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.
[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.
[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2016), pp. 588–591, Cartagena, Colombia, May 2016.
[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA 2012), pp. 393–398, Fukuoka, Japan, March 2012.
[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.
[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science (CloudCom), pp. 26–34, Hong Kong, China, December 2017.
[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.
[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.
[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.
[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.
[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.
[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.
[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10), Indianapolis, IN, USA, June 2010.
[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.
[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.
[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.
[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.
[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.
[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.
[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.
[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.




43 WS Performance Metrics Modeling Outside the de-scription of the PRESENCE framework the main contri-bution of this article resides more on the modeling of themeasured WS performance metrics from the data collectedby the PRESENCE agents rather than the runs in themselvesdepicted in the previous section Once these models areavailable they can be used to estimate and dynamically adaptthe behaviour of the PRESENCE client (which combine andschedule the execution of all considered agents in parallel) soas to hide the true nature of the evaluation by making itindistinguishable from a regular client traffic But this cor-responds to the next step of our work In this paper we wishto illustrate the developed model from the PRESENCE agentsevaluations reported in the previous section e main ob-jective of the developed model is to understand the systemperformance behaviour relative to various assumptions andinput parameters discussed in previous sections As men-tioned in Section 32 we rely on the simulation software calledArena to analyse the data returned from the agents and get themodel for each individual performance metric We haveperformed this analysis for each agents and the results of themodels are presented in the below tables Of course sucha contribution is pertinent only if the generated model isvalidatedmdasha process consisting in ensuring the accuraterepresentation of the actual system and its behaviour is isachieved by using a set of statistical t-tests by comparing themeans of raw data and statistical distribution generated by theInput Analyzer of the Arena system If the result shows thatboth samples are analytically similar then the model de-veloped from statistical distribution is an accurate repre-sentation of the actual system and behaves in the same wayFor each model detailed in the tables the outcomes of the t-tests in the form of the computed p value (against thecommon significance level of 005) is provided and dem-onstrate if present the accuracy of the proposed models

Table 3 Inputoutput metrics for each PRESENCE Agent

PRESENCE agentInput parameters

Transactions Requests Operations Records Fetches Parallelclients Pipes reads Workload

sizeYCSB Memtire-Bench Redis-Bench Twitter RPC-Perf PgBench Apache AB HTTP Load Iperf

PRESENCE agentOutput parameters

roughput Latency Readlatency

Updatelatency

CleanUplatency

Transferrate

Responsetime Miss Hits

YCSB Memtire-Bench Redis-Bench Twitter RPC-Perf PgBench Apache AB HTTP Load Iperf

6 Mobile Information Systems

Tables 4ndash6 show the model of the performance metricssuch as latency (read and update) and throughput for theRedis Memcached and MongoDB services by using thedata collected with the YCSB PRESENCE agents Inparticular Table 4 shows that the Redis throughput is Betaincreasing Redis latency (read) is Gamma increasingand Redis latency (update) is Erlang increasing with

respect to the configuration of the experimental setupdiscussed in the previous sections Moreover as p values(0757 0394 and 0503) are greater than 005 the nullhypothesis (the two samples are the same) is accepted ascompared to alternate hypothesis (the two samples aredifferent) Hence the models for throughput that isminus0001 + 1lowastBETA(363 309) latency (read) that is

0 2000 4000 6000 8000 100000

2000

4000

6000

8000

10000

Number of operations

Thro

ughp

ut (o

pss

ec)

Servers (throughput)RedisMongoDBMemcached

0 2000 4000 6000 8000 10000Number of records

(a)

RedisMongoDBMemcached

0 2000 4000 6000 8000 10000

0

10000

20000

30000

40000

50000

Number of operations

Upd

ate l

aten

cy (μ

s)

Servers (update latency)

0 2000 4000 6000 8000 10000Number of records

(b)

RedisMongoDBMemcached

Servers (read latency)

0 2000 4000 6000 8000 10000

0

20000

40000

60000

Number of operations

Read

late

ncy

(μs)

0 2000 4000 6000 8000 10000Number of records

(c)

FIGURE 3 YCSB Agent Evaluation of the NoSQL DBWS (a) roughput (b) update latency and (c) read latency e figure is reproducedfrom Ibrahim et al [14] (under the Creative Commons Attribution Licensepublic domain)

Mobile Information Systems 7

minus0001 + GAMM(00846 239) and latency (update) thatis minus0001 + ERLA(00733 3) are an accurate represen-tation of the WS performance metrics exhibited by thePRESENCE agent in Figure 3

Similarly Table 5 shows that MongoDB throughput andlatency (read) are Beta increasing and latency (update) isErlang increasing with respect to the configuration setupMoreover as p values (0388 0473 and 0146) are greaterthan 005 the null hypothesis (the two samples are the same)is accepted as compared to alternate hypothesis (the twosamples are different) Hence the models for throughputthat is minus0001 + 1lowastBETA(365 211) latency (read) that isminus0001 + 1lowastBETA(16 248) and latency (update) that is

minus0001 + ERLA(00902 2) are an accurate representation ofthe WS performance metrics exhibited by the PRESENCEagents in Figure 3

Finally Table 6 shows that Memcached throughputand latency (read) is Beta increasing and latency (update) isNormal increasing with respect to configuration setupAgain as p values (0106 0832 and 0794) are greater than005 the null hypothesis is accepted Hence the models forthroughput that is minus0001 + 1lowastBETA(441 248) latency(read) that is minus0001 + 1lowastBETA(164 312) and latency(update) that is NORM(0311 0161) are an accuraterepresentation of the WS performance metrics exhibited bythe PRESENCE agents in Figure 3

0 2000 4000 6000 8000 10000

0e + 00

1e + 05

2e + 05

3e + 05

4e + 05

5e + 05

6e + 05

Number of requests

Thro

ughp

ut (o

pss

ec)

Server restarted

MemtierminusBenchRedisMemcached

(a)

MemtierminusBenchRedisMemcached

0 2000 4000 6000 8000 10000

0

10

20

30

40

Number of requests

Late

ncy

(s)

Server restarted

(b)

MemtierminusBenchRedisMemcached

0 2000 4000 6000 8000 10000

0

5000

10000

15000

20000

Number of requests

Tran

sfer r

ate (

Kbs

ec)

Server restarted

(c)

Figure 4 Memtier Agent Evaluation of the NoSQL DBWS (a)roughput (b) latency and (c) transfer ratee figure is reproduced fromIbrahim et al [14] (under the Creative Commons Attribution Licensepublic domain)

8 Mobile Information Systems

e above analysis is repeated for the MemtierPRESENCE agentmdashthe corresponding models are pro-vided in Tables 7 and 8 and summarize the computedmodel for the WS performance metrics such as latencythroughput and transfer rate for the Redis and Memc-ached services by using the data collected from PRESENCE

Memtier agents For instance it can be seen in Table 7 thatthe Redis throughput is Erlang increasing with respect totime and assumptions in previous section Moreover as p

value (0902) is greater than 005 the null hypothesis isagain accepted as compared to alternate hypothesis (thetwo samples are different) Hence the Erlang distribution

0 2000 4000 6000 8000 10000

00

02

04

06

08

10

Number of transactions per client

Nor

mal

ized

TPS

00

02

04

06

08

10

Nor

mal

ized

resp

onse

tim

e (la

tenc

y)

20 50 80

Number of parallel clients

PgbenchTPSResponse Time

FIGURE 5 PgBench Agent on the SQL DB WS e figure is reproduced from Ibrahim et al [14] (under the Creative Commons AttributionLicensepublic domain)

0 50000 100000 150000 200000

00

02

04

06

08

10

Number of fetches

Nor

mal

ized

thro

ughp

ut (f

etch

ess

ec)

00

02

04

06

08

10300 650 1000

Number of parallel clients

Nor

mal

ized

late

ncy

HTTP loadThroughputLatency

(a)

200 400 600 800 1000

20

40

60

80

100

120

Number of parallel clients

Ave

rage

late

ncy

(mse

c)

20

40

60

80

100

12030000 120000 190000

Number of fetches

Ave

rage

late

ncy

conn

ectio

n (m

sec)

HTTP loadLatency (avg (msec))Connection latency (avg (ms))

(b)

Figure 6 HTTP Load Agent Evaluation on the HTTPWSe figure is reproduced from Ibrahim et al [14] (under the Creative CommonsAttribution Licensepublic domain)

Mobile Information Systems 9

model of roughput that is minus0001 + ERLA(00155 7)is an accurate representation of the WS performancemetrics exhibited by the PRESENCE agents in Figure 4(a)e same conclusions on the generated models canbe triggered for the other metrics collected by the PRES-ENCE Memtier agents that is latency and transferrates Interestingly Table 8 reports an approximation(blue curve line) for multimodel distribution of memc-ached throughput and transfer rate is shows a failureof a single model to capture the system behaviour withrespect to the configuration setup However for the samesetup memcached latency is Beta increasing and as itsp value (0625) is greater than 005 the null hypoth-esis is accepted Hence the model for latency that isminus0001 + 1lowastBETA(099 21) is an accurate representation

of theWS performance metrics exhibited by the PRESENCEagents in Figure 4(b)

To finish on the DB WS performance analysis usingPRESENCE Table 9 exhibits the generated model for theperformance model from the Pgbench PRESENCE agentsefirst row in the table shows the approximation (blue curve line)for multimodel distribution of the throughput metric thusdemonstrating the failure of a single model to capture thesystem behaviour with respect to the configuration setupHowever for the same setup the latency metric is log normalincreasing and as its p value (0682) is greater than 005 thenull hypothesis (ie the two samples are the same) is acceptedas compared to alternate hypothesis that is the two samplesare different Hence the model for latency that isminus0001 + LOGN(0212 0202) is an accurate representation

TABLE 4 Modelling DB WS performance metrics from YCSB PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(363 309)

whereBETA(β α)

β 363α 309

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0lt xlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0757(gt005)

Latency read Gamma

minus0001 + GAMM(00846 239)

whereGAMM(β α)

β 00846α 239

offset minus0001

f(x) (βminusαxαminus1eminus(xβ))Γ(α) for xgt 00 otherwise

1113896

where Γ is the complete gamma function given byΓ(α) 1113938

inf0 tαminus1eminus1 dt

0394(gt005)

Latency update Erlang

minus0001 + ERLA(00733 3)

whereERLA(β k)

k 3β 00733

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0503(gt005)

Table 5 Modelling DB WS performance metrics from YCSB PRESENCE Agent MongoDB

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(365 211)

whereBETA(β α)

β 365α 211

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0388(gt005)

Latency read Beta

minus0001 + 1lowastBETA(16 248)

whereBETA(β α)

β 16α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0473(gt005)

Latency update Erlang

minus0001 + ERLA(00902 2)

whereERLA(β k)

k 2β 00902

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0146(gt005)

10 Mobile Information Systems

of the WS performance metrics exhibited by the PRESENCEagents in Figure 6 As regards the HTTP WS performanceanalysis using PRESENCE we decided to report in Table 10the model generated from the performance model from theHTTP Load PRESENCE agents Again we can see that we failto model the throughput metric with respect to the config-uration setup discussed in the previous section However forthe same setup the response time is Beta increasing and as itsp value (0165) is greater than 005 the null hypothesis isaccepted Hence the model for latency that is minus0001 + 1lowastBETA(155 346) is an accurate representation of the WSperformance metrics exhibited by the PRESENCE agents inFigure 6

Summary of the obtained Models In this paper 19models were generated which represent the performancemetrics for the SaaS Web Service by using the PRESENCE

approach Out of the 19 models 15 models that is 789 ofthe analyzed models are proved to have accurately representthe performance metrics collected by the PRESENCE agentssuch as throughput latency transfer rate and response timein different contexts depending on the considered WS eaccuracy of the proposed models is assessed by the referencestatistical t-tests performed against the common signifi-cance level of 005

5 Conclusion

Motivated by recent scandals in the automotive sector(which demonstrate the capacity of solution providers toadapt the behaviour of their product when submitted to anevaluation campaign to improve the performance results)this paper presents PRESENCE an automatic framework

Table 7 Modelling DB WS performance metrics from Memtier PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0767(gt005)

Latency Beta

minus0001 + 1lowastBETA(0648 172)

whereBETA(β α)

β 0648α 172

offset minus0001

f(x) (xβminus1(1minus x)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0902(gt005)

Transfer rate Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0287(gt005)

Table 6 Modelling DB WS performance metrics from YCSB PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(441 248)

whereBETA(β α)

β 441α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0106(gt005)

Latency read Beta

minus0001 + 1lowastBETA(164 312)

whereBETA(β α)

β 164α 312

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0832(gt005)

Latency update Normal

NORM(0311 0161)

whereNORM(meanμ stdDevσ)

μ 0311σ 0161

f(x) (1σ2π

radic)eminus(xminusμ)22σ2 for all real x 0794(gt005)

Mobile Information Systems 11

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10 Modelling HTTP WS performance metrics from HTTP load PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + 1lowastBETA(155 346)

whereBETA(β α)

β 155α 346

offset minus0001

f(x) 1(σx

radic)eminus(ln(x)minus μ)22σ2 for xgt 0

0 otherwise1113896 0165(gt005)

Table 9 Modelling DB WS performance metrics from Pgbench PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + LOGN(0212 0202)

whereLOGN(log Meanμ LogStdσ)

μ 0212σ 0202

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0682(gt005)

Table 8 Modelling DB WS performance metrics from Memtier PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency read Beta

minus0001 + 1lowastBETA(099 21)

whereBETA(β α)

β 099α 21

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0625(gt0005)

Latency update mdash mdash mdash

12 Mobile Information Systems

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013

[19] G Cicotti S DrsquoAntonio R Cristaldi and A Sergio ldquoHow tomonitor QoS in cloud infrastructures the QoSMONaaS ap-proachrdquo Studies in Computational Intelligence vol 446pp 253ndash262 2013



As noticed in the three plots from the Memtier agents, which cover all reported metrics (for example, throughput, latency, and transfer rate), the latency, throughput, and transfer rate of the Memcached WS increased suddenly towards the end of the runs. Such behaviour was consistent across all runs and was linked to the memory saturation reached by the server process before being cleared. Still on the DB WS performance evaluation, Figure 5 details the performance of the PostgreSQL WS under the stress produced by the PRESENCE agent executing the PgBench benchmarking tool. The figure shows the normalized response time of the server and the normalized (standardized) number of Transactions per Second (TPS): the response time is the latency of the service, while the TPS corresponds to its throughput. The performance of the WS is affected by the increased workload, which is represented by the increasing number of transactions and parallel clients. As the workload grew, the response time kept increasing even when the TPS dropped; once the memory was filled, the TPS decreased again and the response time returned to a decrease. This behaviour was consistent across the runs of the PgBench agent. Finally, Figure 6 shows the average runs of the PRESENCE agent executing the HTTP Load benchmark tool when assessing the performance of the HTTP WS. Each subfigure exhibits the behaviour of both the latency and the throughput against an increasing number of fetches and parallel clients, which increases the workload of the WS.
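For clarity, the normalized curves mentioned above (and shown in Figures 5 and 6) can be obtained with a simple min-max scaling of each raw metric. The helper below is an illustrative sketch in Python; the exact normalization procedure used for the plots is assumed here, not stated in the article.

```python
import numpy as np

def minmax_normalize(values):
    """Scale a series of raw metric samples (e.g., TPS or response time)
    into [0, 1] so that differently scaled metrics can share one axis.
    Assumed to mirror the normalisation used for Figures 5 and 6."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    if span == 0:           # constant series: map everything to 0
        return np.zeros_like(v)
    return (v - v.min()) / span

# Example: normalise PgBench transactions-per-second samples (hypothetical values)
tps_samples = [220.0, 340.0, 410.0, 380.0, 150.0]
print(minmax_normalize(tps_samples))
```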

Many more traces of all considered agent runs are available but are not displayed in this article for the sake of conciseness. Overall, the collected traces from the many agent runs depicted in this section reflect several complementary aspects of the two WSs considered in the scenario of this experimental validation. Yet, as part of the contributions of this article, the generation of accurate models from these evaluations is crucial. They are detailed in the next section.

4.3. WS Performance Metrics Modeling

Beyond the description of the PRESENCE framework, the main contribution of this article resides in the modeling of the measured WS performance metrics from the data collected by the PRESENCE agents, rather than in the runs themselves depicted in the previous section. Once these models are available, they can be used to estimate and dynamically adapt the behaviour of the PRESENCE client (which combines and schedules the execution of all considered agents in parallel) so as to hide the true nature of the evaluation by making it indistinguishable from regular client traffic. This, however, corresponds to the next step of our work. In this paper, we wish to illustrate the models developed from the PRESENCE agent evaluations reported in the previous section. The main objective of the developed models is to understand the system performance behaviour relative to the various assumptions and input parameters discussed in the previous sections. As mentioned in Section 3.2, we rely on the simulation software Arena to analyse the data returned from the agents and to derive a model for each individual performance metric. We performed this analysis for each agent, and the resulting models are presented in the tables below. Of course, such a contribution is pertinent only if the generated model is validated, a process consisting in ensuring an accurate representation of the actual system and its behaviour. This is achieved by a set of statistical t-tests comparing the means of the raw data with samples from the statistical distribution generated by the Input Analyzer of the Arena system. If the result shows that both samples are statistically similar, then the model derived from the fitted distribution is an accurate representation of the actual system and behaves in the same way. For each model detailed in the tables, the outcome of the t-test, in the form of the computed p value (against the common significance level of 0.05), is provided and demonstrates, where applicable, the accuracy of the proposed models.
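The validation workflow just described relies on Arena and its Input Analyzer. As an illustrative stand-in only (not the authors' tooling), the following Python sketch reproduces the same idea with SciPy: fit a candidate distribution to the collected samples, generate a synthetic sample from the fitted model, and accept the model when a two-sample t-test on the means yields p > 0.05. The function name and the sample data are hypothetical.

```python
import numpy as np
from scipy import stats

def fit_and_validate_beta(raw, alpha_level=0.05, seed=0):
    """Illustrative stand-in for the Arena Input Analyzer workflow:
    fit a Beta distribution to (already normalised) raw metric samples,
    draw a synthetic sample from the fitted model, and compare the two
    sample means with a two-sample t-test. The model is deemed an
    accurate representation when p > alpha_level."""
    a, b, loc, scale = stats.beta.fit(raw)                 # fitted shape and offset parameters
    synthetic = stats.beta.rvs(a, b, loc=loc, scale=scale,
                               size=len(raw), random_state=seed)
    t_stat, p_value = stats.ttest_ind(raw, synthetic, equal_var=False)
    return (a, b, loc, scale), p_value, p_value > alpha_level

# Hypothetical normalised throughput samples in (0, 1)
raw_throughput = np.clip(np.random.default_rng(1).beta(3.6, 3.1, size=200), 1e-6, 1 - 1e-6)
params, p, accepted = fit_and_validate_beta(raw_throughput)
print(p, accepted)
```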

Table 3: Input/output metrics for each PRESENCE agent.

Input parameters (per agent, as applicable): transactions, requests, operations, records, fetches, parallel clients, pipes, reads, workload size.
Output parameters (per agent, as applicable): throughput, latency, read latency, update latency, clean-up latency, transfer rate, response time, misses, hits.
PRESENCE agents: YCSB, Memtier-Bench, Redis-Bench, Twitter RPC-Perf, PgBench, Apache AB, HTTP Load, and Iperf.
Each agent uses the subset of these input parameters and output metrics exposed by its benchmarking tool.
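For aggregation purposes, the content of Table 3 can be encoded programmatically. The sketch below is purely illustrative: the agent names come from the table, but the exact parameter and metric subsets assigned to each agent are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Illustrative description of one PRESENCE agent run configuration."""
    name: str
    input_params: list = field(default_factory=list)   # knobs driving the workload
    output_metrics: list = field(default_factory=list) # metrics collected per run

# Hypothetical encoding of two rows of Table 3 (subsets chosen for the example)
AGENTS = [
    AgentSpec("YCSB",
              input_params=["operations", "records", "parallel clients"],
              output_metrics=["throughput", "read latency", "update latency", "clean-up latency"]),
    AgentSpec("PgBench",
              input_params=["transactions", "parallel clients"],
              output_metrics=["throughput", "response time"]),
]

for spec in AGENTS:
    print(spec.name, "->", ", ".join(spec.output_metrics))
```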


Tables 4-6 show the models of the performance metrics, such as latency (read and update) and throughput, for the Redis, MongoDB, and Memcached services, using the data collected with the YCSB PRESENCE agents. In particular, Table 4 shows that the Redis throughput follows a Beta distribution, the Redis latency (read) follows a Gamma distribution, and the Redis latency (update) follows an Erlang distribution with respect to the configuration of the experimental setup discussed in the previous sections. Moreover, as the p values (0.757, 0.394, and 0.503) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted over the alternative hypothesis (the two samples are different). Hence, the models for the throughput, that is, -0.001 + 1 * BETA(3.63, 3.09), the latency (read), that is, -0.001 + GAMM(0.0846, 2.39), and the latency (update), that is, -0.001 + ERLA(0.0733, 3), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 3.

Figure 3: YCSB agent evaluation of the NoSQL DB WS: (a) throughput (ops/sec), (b) update latency (microseconds), and (c) read latency (microseconds), against the number of operations and records (0 to 10,000) for the Redis, MongoDB, and Memcached servers. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


Similarly, Table 5 shows that the MongoDB throughput and latency (read) follow Beta distributions and that the latency (update) follows an Erlang distribution with respect to the configuration setup. Moreover, as the p values (0.388, 0.473, and 0.146) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted over the alternative hypothesis (the two samples are different). Hence, the models for the throughput, that is, -0.001 + 1 * BETA(3.65, 2.11), the latency (read), that is, -0.001 + 1 * BETA(1.6, 2.48), and the latency (update), that is, -0.001 + ERLA(0.0902, 2), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Finally, Table 6 shows that the Memcached throughput and latency (read) follow Beta distributions and that the latency (update) follows a Normal distribution with respect to the configuration setup. Again, as the p values (0.106, 0.832, and 0.794) are greater than 0.05, the null hypothesis is accepted. Hence, the models for the throughput, that is, -0.001 + 1 * BETA(4.41, 2.48), the latency (read), that is, -0.001 + 1 * BETA(1.64, 3.12), and the latency (update), that is, NORM(0.311, 0.161), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Figure 4: Memtier agent evaluation of the NoSQL DB WS: (a) throughput (ops/sec), (b) latency (s), and (c) transfer rate (Kb/sec), against the number of requests (0 to 10,000) for the Redis and Memcached servers; the points where the server was restarted are marked in the original plots. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


The above analysis is repeated for the Memtier PRESENCE agent. The corresponding models are provided in Tables 7 and 8, which summarize the computed models for the WS performance metrics, such as latency, throughput, and transfer rate, for the Redis and Memcached services, using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput follows an Erlang distribution with respect to time and the assumptions of the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted over the alternative hypothesis (the two samples are different). Hence, the Erlang model of the throughput, that is, -0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a).
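As a quick sanity check on how such an Erlang model can be read, the mean and standard deviation implied by ERLA(0.0155, 7) are derived below; interpreting the result in normalised metric units is our assumption.

```latex
% Mean and standard deviation of the fitted Erlang model ERLA(beta, k)
% with scale beta = 0.0155 and shape k = 7 (the offset -0.001 shifts the location only).
\[
  \mathbb{E}[X] = k\beta = 7 \times 0.0155 = 0.1085, \qquad
  \sigma_X = \sqrt{k}\,\beta = \sqrt{7} \times 0.0155 \approx 0.0410,
\]
\[
  \text{so the modelled (normalised) throughput is centred around } 0.1085 - 0.001 \approx 0.1075 .
\]
```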

Figure 5: PgBench agent on the SQL DB WS: normalized TPS and normalized response time (latency) against the number of transactions per client (0 to 10,000) and the number of parallel clients (20, 50, 80). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 6: HTTP Load agent evaluation on the HTTP WS: (a) normalized throughput (fetches/sec) and normalized latency against the number of fetches (0 to 200,000) and the number of parallel clients (300, 650, 1000); (b) average latency and average connection latency (msec) against the number of parallel clients (200 to 1000) and the number of fetches (30,000 to 190,000). The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, the latency and the transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) of a multimodal distribution for the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency follows a Beta distribution, and as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for the latency, that is, -0.001 + 1 * BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the model generated for the Pgbench PRESENCE agents. The first row in the table shows the approximation (blue curve line) of a multimodal distribution for the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric follows a log-normal distribution, and as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted over the alternative hypothesis that the two samples are different. Hence, the model for the latency, that is, -0.001 + LOGN(0.212, 0.202), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 5.
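For the metrics where a single distribution fails (the Memcached throughput and transfer rate above, and the Pgbench throughput), one possible follow-up, which is not what this article does, is to fit a small finite mixture instead. The sketch below uses a two-component Gaussian mixture from scikit-learn purely as an illustration on hypothetical data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_bimodal(samples, n_components=2, seed=0):
    """Fit a simple Gaussian mixture to a 1-D metric series whose histogram
    is multimodal (e.g., throughput before/after memory saturation).
    Returns the per-component weights, means, and standard deviations."""
    x = np.asarray(samples, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(x)
    return gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel())

# Hypothetical normalised throughput with two regimes
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.7, 0.08, 300)])
print(fit_bimodal(samples))
```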

Table 4: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Redis.

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(3.63, 3.09) | 0.757 (>0.05)
Latency (read)   | Gamma        | -0.001 + GAMM(0.0846, 2.39)   | 0.394 (>0.05)
Latency (update) | Erlang       | -0.001 + ERLA(0.0733, 3)      | 0.503 (>0.05)

The Arena expressions above use BETA(beta, alpha), GAMM(beta, alpha), and ERLA(beta, k), all shifted by the location offset -0.001, with densities
\[ \text{Beta: } f(x) = \frac{x^{\beta-1}(1-x)^{\alpha-1}}{B(\beta,\alpha)} \text{ for } 0 < x < 1, \quad B(\beta,\alpha) = \int_0^1 t^{\beta-1}(1-t)^{\alpha-1}\,dt, \]
\[ \text{Gamma: } f(x) = \frac{\beta^{-\alpha}\, x^{\alpha-1}\, e^{-x/\beta}}{\Gamma(\alpha)} \text{ for } x > 0, \quad \Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1} e^{-t}\,dt, \]
\[ \text{Erlang: } f(x) = \frac{\beta^{-k}\, x^{k-1}\, e^{-x/\beta}}{(k-1)!} \text{ for } x > 0, \]
and f(x) = 0 otherwise.
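As a small worked reading of the Redis throughput model in Table 4, the expected value implied by the fitted Beta expression is derived below; interpreting it as a position within the normalised throughput range is our assumption.

```latex
% Expected value of the fitted model -0.001 + 1 * BETA(beta = 3.63, alpha = 3.09)
\[
  \mathbb{E}[X] = -0.001 + \frac{\beta}{\beta + \alpha}
               = -0.001 + \frac{3.63}{3.63 + 3.09}
               = -0.001 + 0.540 \approx 0.539 ,
\]
% i.e., the modelled (normalised) Redis throughput is centred slightly above half of its observed range.
```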

Table 5: Modelling DB WS performance metrics from the YCSB PRESENCE agent: MongoDB.

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(3.65, 2.11) | 0.388 (>0.05)
Latency (read)   | Beta         | -0.001 + 1 * BETA(1.6, 2.48)  | 0.473 (>0.05)
Latency (update) | Erlang       | -0.001 + ERLA(0.0902, 2)      | 0.146 (>0.05)

(Beta and Erlang densities as defined for Table 4.)


As regards the HTTP WS performance analysis using PRESENCE, we report in Table 10 the model generated from the HTTP Load PRESENCE agents. Again, we fail to model the throughput metric with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time follows a Beta distribution, and as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for the latency, that is, -0.001 + 1 * BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.

Summary of the obtained models. In this paper, 19 models were generated, which represent the performance metrics of the SaaS Web Services evaluated with the PRESENCE approach. Out of these 19 models, 15 models, that is, 78.9% of the analyzed models, are shown to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by the reference statistical t-tests performed against the common significance level of 0.05.


Table 6: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Memcached.

Metric           | Distribution | Model                         | p value (t-test)
Throughput       | Beta         | -0.001 + 1 * BETA(4.41, 2.48) | 0.106 (>0.05)
Latency (read)   | Beta         | -0.001 + 1 * BETA(1.64, 3.12) | 0.832 (>0.05)
Latency (update) | Normal       | NORM(0.311, 0.161)            | 0.794 (>0.05)

Here NORM(mu, sigma) denotes the Arena normal expression with density
\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)} \text{ for all real } x; \]
the Beta density is as defined for Table 4.

Table 7: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Redis.

Metric        | Distribution | Model                          | p value (t-test)
Throughput    | Erlang       | -0.001 + ERLA(0.0155, 7)       | 0.767 (>0.05)
Latency       | Beta         | -0.001 + 1 * BETA(0.648, 1.72) | 0.902 (>0.05)
Transfer rate | Erlang       | -0.001 + ERLA(0.0155, 7)       | 0.287 (>0.05)

(Beta and Erlang densities as defined for Table 4.)



Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Memcached.

Metric        | Distribution          | Model                        | p value (t-test)
Throughput    | (no single-model fit) | n/a                          | n/a
Latency       | Beta                  | -0.001 + 1 * BETA(0.99, 2.1) | 0.625 (>0.05)
Transfer rate | (no single-model fit) | n/a                          | n/a

(Beta density as defined for Table 4; for the throughput and the transfer rate, only a multimodal approximation could be obtained, as discussed in the text.)

Table 9: Modelling DB WS performance metrics from the Pgbench PRESENCE agent.

Metric     | Distribution          | Model                       | p value (t-test)
Throughput | (no single-model fit) | n/a                         | n/a
Latency    | Log-normal            | -0.001 + LOGN(0.212, 0.202) | 0.682 (>0.05)

Here LOGN(mu, sigma) denotes the Arena log-normal expression (log mean mu, log standard deviation sigma), shifted by the offset -0.001, with density
\[ f(x) = \frac{1}{\sigma x \sqrt{2\pi}}\, e^{-(\ln x - \mu)^2/(2\sigma^2)} \text{ for } x > 0, \qquad f(x) = 0 \text{ otherwise.} \]

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE agent.

Metric                  | Distribution          | Model                         | p value (t-test)
Throughput              | (no single-model fit) | n/a                           | n/a
Latency (response time) | Beta                  | -0.001 + 1 * BETA(1.55, 3.46) | 0.165 (>0.05)

(Beta density as defined for Table 4; the response time corresponds to the latency of the HTTP WS.)


5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrated the capacity of solution providers to adapt the behaviour of their products when submitted to an evaluation campaign in order to improve the performance results), this paper presents PRESENCE, an automatic framework which aims at evaluating, monitoring, and benchmarking Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of WSs in a stealth way, that is, by rendering the performance evaluation as close as possible to a regular, yet heavy, usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.

The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to model this workload accurately, based on the configuration of the agents responsible for the performance evaluation.

In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. The data collected from the execution of each agent within the PRESENCE client can then be aggregated within a dedicated module and treated to exhibit a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a GA, the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against real WSs while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1-6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145: The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.
[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.
[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684–700, 2016.
[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications, vol. 19, no. 2, pp. 133–143, 2014.
[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.
[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications, vol. 80, no. 4, pp. 1607–1623, 2015.
[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems, vol. 2017, Article ID 2536040, 20 pages, 2017.
[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.
[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.
[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.
[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.
[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.
[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.
[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009), pp. 187–192, Sliema, Malta, October 2009.
[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.
[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 464–471, Taipei, Taiwan, December 2012.
[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies (EIDWT 2013), pp. 83–87, Xi'an, China, September 2013.
[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.
[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.
[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2016), pp. 588–591, Cartagena, Colombia, May 2016.
[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA 2012), pp. 393–398, Fukuoka, Japan, March 2012.
[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.
[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science (CloudCom), pp. 26–34, Hong Kong, China, December 2017.
[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.
[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.
[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.
[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.
[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.
[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.
[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10), Indianapolis, IN, USA, June 2010.
[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.
[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.
[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.
[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.
[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.
[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.
[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.
[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.


Tables 4ndash6 show the model of the performance metricssuch as latency (read and update) and throughput for theRedis Memcached and MongoDB services by using thedata collected with the YCSB PRESENCE agents Inparticular Table 4 shows that the Redis throughput is Betaincreasing Redis latency (read) is Gamma increasingand Redis latency (update) is Erlang increasing with

respect to the configuration of the experimental setupdiscussed in the previous sections Moreover as p values(0757 0394 and 0503) are greater than 005 the nullhypothesis (the two samples are the same) is accepted ascompared to alternate hypothesis (the two samples aredifferent) Hence the models for throughput that isminus0001 + 1lowastBETA(363 309) latency (read) that is

0 2000 4000 6000 8000 100000

2000

4000

6000

8000

10000

Number of operations

Thro

ughp

ut (o

pss

ec)

Servers (throughput)RedisMongoDBMemcached

0 2000 4000 6000 8000 10000Number of records

(a)

RedisMongoDBMemcached

0 2000 4000 6000 8000 10000

0

10000

20000

30000

40000

50000

Number of operations

Upd

ate l

aten

cy (μ

s)

Servers (update latency)

0 2000 4000 6000 8000 10000Number of records

(b)

RedisMongoDBMemcached

Servers (read latency)

0 2000 4000 6000 8000 10000

0

20000

40000

60000

Number of operations

Read

late

ncy

(μs)

0 2000 4000 6000 8000 10000Number of records

(c)

FIGURE 3 YCSB Agent Evaluation of the NoSQL DBWS (a) roughput (b) update latency and (c) read latency e figure is reproducedfrom Ibrahim et al [14] (under the Creative Commons Attribution Licensepublic domain)

Mobile Information Systems 7

minus0001 + GAMM(00846 239) and latency (update) thatis minus0001 + ERLA(00733 3) are an accurate represen-tation of the WS performance metrics exhibited by thePRESENCE agent in Figure 3

Similarly Table 5 shows that MongoDB throughput andlatency (read) are Beta increasing and latency (update) isErlang increasing with respect to the configuration setupMoreover as p values (0388 0473 and 0146) are greaterthan 005 the null hypothesis (the two samples are the same)is accepted as compared to alternate hypothesis (the twosamples are different) Hence the models for throughputthat is minus0001 + 1lowastBETA(365 211) latency (read) that isminus0001 + 1lowastBETA(16 248) and latency (update) that is

minus0001 + ERLA(00902 2) are an accurate representation ofthe WS performance metrics exhibited by the PRESENCEagents in Figure 3

Finally Table 6 shows that Memcached throughputand latency (read) is Beta increasing and latency (update) isNormal increasing with respect to configuration setupAgain as p values (0106 0832 and 0794) are greater than005 the null hypothesis is accepted Hence the models forthroughput that is minus0001 + 1lowastBETA(441 248) latency(read) that is minus0001 + 1lowastBETA(164 312) and latency(update) that is NORM(0311 0161) are an accuraterepresentation of the WS performance metrics exhibited bythe PRESENCE agents in Figure 3

0 2000 4000 6000 8000 10000

0e + 00

1e + 05

2e + 05

3e + 05

4e + 05

5e + 05

6e + 05

Number of requests

Thro

ughp

ut (o

pss

ec)

Server restarted

MemtierminusBenchRedisMemcached

(a)

MemtierminusBenchRedisMemcached

0 2000 4000 6000 8000 10000

0

10

20

30

40

Number of requests

Late

ncy

(s)

Server restarted

(b)

MemtierminusBenchRedisMemcached

0 2000 4000 6000 8000 10000

0

5000

10000

15000

20000

Number of requests

Tran

sfer r

ate (

Kbs

ec)

Server restarted

(c)

Figure 4 Memtier Agent Evaluation of the NoSQL DBWS (a)roughput (b) latency and (c) transfer ratee figure is reproduced fromIbrahim et al [14] (under the Creative Commons Attribution Licensepublic domain)

8 Mobile Information Systems

e above analysis is repeated for the MemtierPRESENCE agentmdashthe corresponding models are pro-vided in Tables 7 and 8 and summarize the computedmodel for the WS performance metrics such as latencythroughput and transfer rate for the Redis and Memc-ached services by using the data collected from PRESENCE

Memtier agents For instance it can be seen in Table 7 thatthe Redis throughput is Erlang increasing with respect totime and assumptions in previous section Moreover as p

value (0902) is greater than 005 the null hypothesis isagain accepted as compared to alternate hypothesis (thetwo samples are different) Hence the Erlang distribution

0 2000 4000 6000 8000 10000

00

02

04

06

08

10

Number of transactions per client

Nor

mal

ized

TPS

00

02

04

06

08

10

Nor

mal

ized

resp

onse

tim

e (la

tenc

y)

20 50 80

Number of parallel clients

PgbenchTPSResponse Time

FIGURE 5 PgBench Agent on the SQL DB WS e figure is reproduced from Ibrahim et al [14] (under the Creative Commons AttributionLicensepublic domain)

0 50000 100000 150000 200000

00

02

04

06

08

10

Number of fetches

Nor

mal

ized

thro

ughp

ut (f

etch

ess

ec)

00

02

04

06

08

10300 650 1000

Number of parallel clients

Nor

mal

ized

late

ncy

HTTP loadThroughputLatency

(a)

200 400 600 800 1000

20

40

60

80

100

120

Number of parallel clients

Ave

rage

late

ncy

(mse

c)

20

40

60

80

100

12030000 120000 190000

Number of fetches

Ave

rage

late

ncy

conn

ectio

n (m

sec)

HTTP loadLatency (avg (msec))Connection latency (avg (ms))

(b)

Figure 6 HTTP Load Agent Evaluation on the HTTPWSe figure is reproduced from Ibrahim et al [14] (under the Creative CommonsAttribution Licensepublic domain)

Mobile Information Systems 9

model of roughput that is minus0001 + ERLA(00155 7)is an accurate representation of the WS performancemetrics exhibited by the PRESENCE agents in Figure 4(a)e same conclusions on the generated models canbe triggered for the other metrics collected by the PRES-ENCE Memtier agents that is latency and transferrates Interestingly Table 8 reports an approximation(blue curve line) for multimodel distribution of memc-ached throughput and transfer rate is shows a failureof a single model to capture the system behaviour withrespect to the configuration setup However for the samesetup memcached latency is Beta increasing and as itsp value (0625) is greater than 005 the null hypoth-esis is accepted Hence the model for latency that isminus0001 + 1lowastBETA(099 21) is an accurate representation

of theWS performance metrics exhibited by the PRESENCEagents in Figure 4(b)

To finish on the DB WS performance analysis usingPRESENCE Table 9 exhibits the generated model for theperformance model from the Pgbench PRESENCE agentsefirst row in the table shows the approximation (blue curve line)for multimodel distribution of the throughput metric thusdemonstrating the failure of a single model to capture thesystem behaviour with respect to the configuration setupHowever for the same setup the latency metric is log normalincreasing and as its p value (0682) is greater than 005 thenull hypothesis (ie the two samples are the same) is acceptedas compared to alternate hypothesis that is the two samplesare different Hence the model for latency that isminus0001 + LOGN(0212 0202) is an accurate representation

TABLE 4 Modelling DB WS performance metrics from YCSB PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(363 309)

whereBETA(β α)

β 363α 309

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0lt xlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0757(gt005)

Latency read Gamma

minus0001 + GAMM(00846 239)

whereGAMM(β α)

β 00846α 239

offset minus0001

f(x) (βminusαxαminus1eminus(xβ))Γ(α) for xgt 00 otherwise

1113896

where Γ is the complete gamma function given byΓ(α) 1113938

inf0 tαminus1eminus1 dt

0394(gt005)

Latency update Erlang

minus0001 + ERLA(00733 3)

whereERLA(β k)

k 3β 00733

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0503(gt005)

Table 5 Modelling DB WS performance metrics from YCSB PRESENCE Agent MongoDB

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(365 211)

whereBETA(β α)

β 365α 211

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0388(gt005)

Latency read Beta

minus0001 + 1lowastBETA(16 248)

whereBETA(β α)

β 16α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0473(gt005)

Latency update Erlang

minus0001 + ERLA(00902 2)

whereERLA(β k)

k 2β 00902

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0146(gt005)

10 Mobile Information Systems

of the WS performance metrics exhibited by the PRESENCEagents in Figure 6 As regards the HTTP WS performanceanalysis using PRESENCE we decided to report in Table 10the model generated from the performance model from theHTTP Load PRESENCE agents Again we can see that we failto model the throughput metric with respect to the config-uration setup discussed in the previous section However forthe same setup the response time is Beta increasing and as itsp value (0165) is greater than 005 the null hypothesis isaccepted Hence the model for latency that is minus0001 + 1lowastBETA(155 346) is an accurate representation of the WSperformance metrics exhibited by the PRESENCE agents inFigure 6

Summary of the obtained Models In this paper 19models were generated which represent the performancemetrics for the SaaS Web Service by using the PRESENCE

approach Out of the 19 models 15 models that is 789 ofthe analyzed models are proved to have accurately representthe performance metrics collected by the PRESENCE agentssuch as throughput latency transfer rate and response timein different contexts depending on the considered WS eaccuracy of the proposed models is assessed by the referencestatistical t-tests performed against the common signifi-cance level of 005

5 Conclusion

Motivated by recent scandals in the automotive sector(which demonstrate the capacity of solution providers toadapt the behaviour of their product when submitted to anevaluation campaign to improve the performance results)this paper presents PRESENCE an automatic framework

Table 7 Modelling DB WS performance metrics from Memtier PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0767(gt005)

Latency Beta

minus0001 + 1lowastBETA(0648 172)

whereBETA(β α)

β 0648α 172

offset minus0001

f(x) (xβminus1(1minus x)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0902(gt005)

Transfer rate Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0287(gt005)

Table 6 Modelling DB WS performance metrics from YCSB PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(441 248)

whereBETA(β α)

β 441α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0106(gt005)

Latency read Beta

minus0001 + 1lowastBETA(164 312)

whereBETA(β α)

β 164α 312

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0832(gt005)

Latency update Normal

NORM(0311 0161)

whereNORM(meanμ stdDevσ)

μ 0311σ 0161

f(x) (1σ2π

radic)eminus(xminusμ)22σ2 for all real x 0794(gt005)

Mobile Information Systems 11

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10 Modelling HTTP WS performance metrics from HTTP load PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + 1lowastBETA(155 346)

whereBETA(β α)

β 155α 346

offset minus0001

f(x) 1(σx

radic)eminus(ln(x)minus μ)22σ2 for xgt 0

0 otherwise1113896 0165(gt005)

Table 9 Modelling DB WS performance metrics from Pgbench PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + LOGN(0212 0202)

whereLOGN(log Meanμ LogStdσ)

μ 0212σ 0202

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0682(gt005)

Table 8 Modelling DB WS performance metrics from Memtier PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency read Beta

minus0001 + 1lowastBETA(099 21)

whereBETA(β α)

β 099α 21

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0625(gt0005)

Latency update mdash mdash mdash

12 Mobile Information Systems

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013



−0.001 + GAMM(0.0846, 2.39), and latency (update), that is, −0.001 + ERLA(0.0733, 3), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agent in Figure 3.

Similarly, Table 5 shows that the MongoDB throughput and latency (read) are Beta increasing and the latency (update) is Erlang increasing with respect to the configuration setup. Moreover, as the p values (0.388, 0.473, and 0.146) are greater than 0.05, the null hypothesis (the two samples are the same) is accepted over the alternate hypothesis (the two samples are different). Hence, the models for throughput, that is, −0.001 + 1*BETA(3.65, 2.11), latency (read), that is, −0.001 + 1*BETA(1.6, 2.48), and latency (update), that is, −0.001 + ERLA(0.0902, 2), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.

Finally, Table 6 shows that the Memcached throughput and latency (read) are Beta increasing and the latency (update) is Normal increasing with respect to the configuration setup. Again, as the p values (0.106, 0.832, and 0.794) are greater than 0.05, the null hypothesis is accepted. Hence, the models for throughput, that is, −0.001 + 1*BETA(4.41, 2.48), latency (read), that is, −0.001 + 1*BETA(1.64, 3.12), and latency (update), that is, NORM(0.311, 0.161), are an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 3.
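The same fit-and-test procedure can be reproduced outside a dedicated input-modelling tool such as Arena [29]. The following minimal Python sketch is an illustration under our own assumptions (it is not part of the PRESENCE tooling): it fits a Gamma model with a fixed offset to hypothetical latency samples and accepts or rejects it with a two-sample t-test at the 0.05 significance level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical read-latency samples (seconds) collected by a PRESENCE agent.
observed = rng.gamma(shape=2.39, scale=0.0846, size=5000) - 0.001

# Fit a Gamma model with the location (offset) fixed at -0.001, mirroring the
# "-0.001 + GAMM(beta, alpha)" form of the expressions reported in Tables 4-10.
alpha, loc, beta = stats.gamma.fit(observed, floc=-0.001)

# Draw a synthetic sample from the fitted model and compare both samples with a
# two-sample t-test: p > 0.05 keeps the null hypothesis (same population).
synthetic = stats.gamma.rvs(alpha, loc=loc, scale=beta,
                            size=observed.size, random_state=rng)
t_stat, p_value = stats.ttest_ind(observed, synthetic, equal_var=False)

print(f"fitted model: {loc:.3f} + GAMM({beta:.4f}, {alpha:.2f})")
print("accepted" if p_value > 0.05 else "rejected", f"(p = {p_value:.3f})")
```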

Figure 4: Memtier Agent evaluation of the NoSQL DB WS: (a) throughput (ops/sec), (b) latency (s), and (c) transfer rate (KB/sec) versus the number of requests, for Redis and Memcached; server restart points are marked. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


The above analysis is repeated for the Memtier PRESENCE agent; the corresponding models are provided in Tables 7 and 8, which summarize the computed models for the WS performance metrics (latency, throughput, and transfer rate) of the Redis and Memcached services, using the data collected from the PRESENCE Memtier agents. For instance, it can be seen in Table 7 that the Redis throughput is Erlang increasing with respect to time and to the assumptions of the previous section. Moreover, as the p value (0.902) is greater than 0.05, the null hypothesis is again accepted over the alternate hypothesis (the two samples are different).

Figure 5: PgBench Agent on the SQL DB WS: normalized TPS and normalized response time (latency) versus the number of transactions per client, for 20, 50, and 80 parallel clients. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).

Figure 6: HTTP Load Agent evaluation on the HTTP WS: (a) normalized throughput (fetches/sec) and normalized latency versus the number of fetches, for 300, 650, and 1000 parallel clients; (b) average latency and average connection latency (msec) versus the number of parallel clients and the number of fetches. The figure is reproduced from Ibrahim et al. [14] (under the Creative Commons Attribution License/public domain).


Hence, the Erlang distribution model of throughput, that is, −0.001 + ERLA(0.0155, 7), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(a). The same conclusions on the generated models can be drawn for the other metrics collected by the PRESENCE Memtier agents, that is, latency and transfer rate. Interestingly, Table 8 reports an approximation (blue curve line) for a multimodel distribution of the Memcached throughput and transfer rate. This shows the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the Memcached latency is Beta increasing and, as its p value (0.625) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, −0.001 + 1*BETA(0.99, 2.1), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 4(b).

To finish on the DB WS performance analysis using PRESENCE, Table 9 exhibits the model generated from the Pgbench PRESENCE agents. The first row in the table shows the approximation (blue curve line) for a multimodel distribution of the throughput metric, thus demonstrating the failure of a single model to capture the system behaviour with respect to the configuration setup. However, for the same setup, the latency metric is lognormal increasing and, as its p value (0.682) is greater than 0.05, the null hypothesis (i.e., the two samples are the same) is accepted over the alternate hypothesis (the two samples are different). Hence, the model for latency, that is, −0.001 + LOGN(0.212, 0.202), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 5.

Table 4: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Redis.

Metric           | Distribution | Model                        | p value (t-test)
Throughput       | Beta         | −0.001 + 1*BETA(3.63, 3.09)  | 0.757 (>0.05)
Latency (read)   | Gamma        | −0.001 + GAMM(0.0846, 2.39)  | 0.394 (>0.05)
Latency (update) | Erlang       | −0.001 + ERLA(0.0733, 3)     | 0.503 (>0.05)

Here, BETA(β, α) denotes the Beta distribution with density f(x) = x^(β−1)(1−x)^(α−1)/B(β, α) for 0 < x < 1 (0 otherwise), where B(β, α) = ∫_0^1 t^(β−1)(1−t)^(α−1) dt is the complete Beta function; GAMM(β, α) denotes the Gamma distribution with scale β and shape α, f(x) = β^(−α) x^(α−1) e^(−x/β)/Γ(α) for x > 0 (0 otherwise), where Γ(α) = ∫_0^∞ t^(α−1) e^(−t) dt is the complete Gamma function; and ERLA(β, k) denotes the Erlang distribution with scale β and integer shape k, f(x) = β^(−k) x^(k−1) e^(−x/β)/(k−1)! for x > 0 (0 otherwise).

Table 5: Modelling DB WS performance metrics from the YCSB PRESENCE agent: MongoDB.

Metric           | Distribution | Model                        | p value (t-test)
Throughput       | Beta         | −0.001 + 1*BETA(3.65, 2.11)  | 0.388 (>0.05)
Latency (read)   | Beta         | −0.001 + 1*BETA(1.6, 2.48)   | 0.473 (>0.05)
Latency (update) | Erlang       | −0.001 + ERLA(0.0902, 2)     | 0.146 (>0.05)

(Distribution densities as defined for Table 4.)
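As an illustration only (the mapping of these expressions onto scipy distributions is our assumption and not part of the paper), the fitted models of Tables 4 and 5 can be replayed to generate synthetic metric samples, for example when the agents need to re-emit a workload that follows the modelled behaviour:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Redis throughput (Table 4): -0.001 + 1*BETA(3.63, 3.09); Beta draws lie in
# (0, 1), consistent with normalized metric values.
redis_tput = -0.001 + stats.beta.rvs(3.63, 3.09, size=1000, random_state=rng)

# MongoDB update latency (Table 5): -0.001 + ERLA(0.0902, 2); ERLA(beta, k) is a
# Gamma distribution with integer shape k and scale beta.
mongo_lat = -0.001 + stats.gamma.rvs(a=2, scale=0.0902, size=1000, random_state=rng)

print(f"synthetic Redis throughput: mean = {redis_tput.mean():.3f}")
print(f"synthetic MongoDB update latency: mean = {mongo_lat.mean():.3f}")
```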


As regards the HTTP WS performance analysis using PRESENCE, Table 10 reports the model generated from the HTTP Load PRESENCE agents. Again, the throughput metric cannot be modelled by a single distribution with respect to the configuration setup discussed in the previous section. However, for the same setup, the response time is Beta increasing and, as its p value (0.165) is greater than 0.05, the null hypothesis is accepted. Hence, the model for latency, that is, −0.001 + 1*BETA(1.55, 3.46), is an accurate representation of the WS performance metrics exhibited by the PRESENCE agents in Figure 6.
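The metrics for which no single model is reported (the dash entries in Tables 8–10 below) typically correspond to multimodal traces, for example around the server restarts visible in Figure 4. The following sketch is only an illustration under our own assumptions: it checks a single-distribution fit against a synthetic bimodal trace with a Kolmogorov–Smirnov goodness-of-fit test (not the two-sample t-test used in the paper) and shows the rejection one would expect in such a case.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical bimodal throughput trace (e.g., before and after a server
# restart): two operating regimes that no single unimodal distribution fits.
trace = np.concatenate([rng.normal(2.0e5, 1.5e4, 4000),
                        rng.normal(4.5e5, 2.0e4, 6000)])

# Fit a single lognormal model and test the fit with a Kolmogorov-Smirnov test.
s, loc, scale = stats.lognorm.fit(trace)
ks_stat, p_value = stats.kstest(trace, 'lognorm', args=(s, loc, scale))

# A p value far below 0.05 rejects the single-distribution model, mirroring the
# unmodelled (dash) entries of Tables 8-10.
print(f"KS statistic = {ks_stat:.3f}, p value = {p_value:.2e}")
```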

Summary of the obtained models. In this paper, 19 models were generated to represent the performance metrics of the SaaS Web Services evaluated with the PRESENCE approach. Out of these 19 models, 15 (i.e., 78.9% of the analyzed models) were shown to accurately represent the performance metrics collected by the PRESENCE agents, such as throughput, latency, transfer rate, and response time, in different contexts depending on the considered WS. The accuracy of the proposed models is assessed by reference statistical t-tests performed against the common significance level of 0.05.

5. Conclusion

Motivated by recent scandals in the automotive sector (which demonstrated the capacity of solution providers to adapt the behaviour of their products when submitted to an evaluation campaign in order to improve the performance results), this paper presents PRESENCE, an automatic framework

Table 6: Modelling DB WS performance metrics from the YCSB PRESENCE agent: Memcached.

Metric           | Distribution | Model                        | p value (t-test)
Throughput       | Beta         | −0.001 + 1*BETA(4.41, 2.48)  | 0.106 (>0.05)
Latency (read)   | Beta         | −0.001 + 1*BETA(1.64, 3.12)  | 0.832 (>0.05)
Latency (update) | Normal       | NORM(0.311, 0.161)           | 0.794 (>0.05)

Here, NORM(μ, σ) denotes the Normal distribution with mean μ, standard deviation σ, and density f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)) for all real x; the other densities are as defined for Table 4.

Table 7: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Redis.

Metric        | Distribution | Model                         | p value (t-test)
Throughput    | Erlang       | −0.001 + ERLA(0.0155, 7)      | 0.767 (>0.05)
Latency       | Beta         | −0.001 + 1*BETA(0.648, 1.72)  | 0.902 (>0.05)
Transfer rate | Erlang       | −0.001 + ERLA(0.0155, 7)      | 0.287 (>0.05)

(Distribution densities as defined for Table 4.)


which aims at evaluating, monitoring, and benchmarking the Web Services (WSs) offered across several Cloud Services Providers (CSPs) for all types of Cloud Computing (CC) and Mobile CC platforms. More precisely, PRESENCE aims at evaluating the QoS and SLA compliance of WSs in a stealth way, that is, by rendering the performance evaluation as close as possible to a regular, yet heavy, usage of the considered service. Our framework relies on a Multi-Agent System (MAS) and a carefully designed client (called the Auditor) responsible for interacting with the set of CSPs being evaluated.
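The interaction loop between the Auditor and the benchmarking agents can be pictured with the purely conceptual Python sketch below; every class, function, and endpoint name in it is hypothetical and does not correspond to the actual PRESENCE implementation.

```python
# Conceptual sketch only: an "Auditor" that runs each benchmarking agent against
# each CSP endpoint and collects the raw metrics for later aggregation/ranking.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AgentResult:
    agent: str
    metrics: Dict[str, float]      # e.g. {"throughput": ..., "latency": ...}

class Auditor:
    def __init__(self, agents: Dict[str, Callable[[str], Dict[str, float]]]):
        self.agents = agents       # agent name -> callable running it on an endpoint

    def evaluate(self, csp_endpoints: List[str]) -> Dict[str, List[AgentResult]]:
        report: Dict[str, List[AgentResult]] = {}
        for endpoint in csp_endpoints:
            report[endpoint] = [AgentResult(name, run(endpoint))
                                for name, run in self.agents.items()]
        return report

# Hypothetical usage:
#   auditor = Auditor({"ycsb": run_ycsb, "memtier": run_memtier})
#   report = auditor.evaluate(["https://csp-a.example", "https://csp-b.example"])
```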


Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE agent: Memcached.

Metric           | Distribution | Model                       | p value (t-test)
Throughput       | —            | —                           | —
Latency (read)   | Beta         | −0.001 + 1*BETA(0.99, 2.1)  | 0.625 (>0.05)
Latency (update) | —            | —                           | —

Table 9: Modelling DB WS performance metrics from the Pgbench PRESENCE agent.

Metric     | Distribution | Model                        | p value (t-test)
Throughput | —            | —                            | —
Latency    | Lognormal    | −0.001 + LOGN(0.212, 0.202)  | 0.682 (>0.05)

Here, LOGN(μ, σ) denotes the Lognormal distribution with log-mean μ, log-standard-deviation σ, and density f(x) = (1/(σx√(2π))) e^(−(ln x − μ)²/(2σ²)) for x > 0 (0 otherwise).

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE agent.

Metric     | Distribution | Model                        | p value (t-test)
Throughput | —            | —                            | —
Latency    | Beta         | −0.001 + 1*BETA(1.55, 3.46)  | 0.165 (>0.05)

(Beta densities as defined for Table 4; dashes indicate metrics for which no single-distribution model could be fitted.)


The first step to ensure the dynamic adaptation of the workload that hides the evaluation process resides in the capacity to accurately model this workload based on the configuration of the agents responsible for the performance evaluation.

In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. Then, the data collected from the execution of each agent within the PRESENCE client can be aggregated within a dedicated module and treated to exhibit a ranking and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a Genetic Algorithm (GA), the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage, as sketched below. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.
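As a rough illustration of this GA-driven stealth search (the fitness function, parameter ranges, and operators below are our assumptions and not the configuration used by PRESENCE), one can minimize the distance between the traffic pattern produced by a candidate parameter set and a reference trace of regular usage:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reference = rng.gamma(2.0, 0.1, 2000)        # hypothetical "regular usage" trace

def simulate_load(rate, burst):
    """Stand-in for running an agent with the given (hypothetical) parameters."""
    return rng.gamma(burst, rate, 2000)

def fitness(params):
    rate, burst = params
    ks, _ = stats.ks_2samp(simulate_load(rate, burst), reference)
    return ks                                # lower = harder to distinguish

# Toy evolutionary loop (selection + mutation only, no crossover, for brevity).
population = rng.uniform([0.01, 1.0], [0.5, 5.0], size=(20, 2))
for generation in range(30):
    scores = np.array([fitness(p) for p in population])
    parents = population[np.argsort(scores)[:10]]
    children = parents[rng.integers(0, 10, 10)] * rng.normal(1.0, 0.05, (10, 2))
    population = np.vstack([parents, children])

scores = np.array([fitness(p) for p in population])
best = population[np.argmin(scores)]
print("best (rate, burst):", best, "KS distance:", scores.min())
```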

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1–6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145: The NIST definition of cloud computing," Technical Report, National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA, 2011.
[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.
[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684–700, 2016.
[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications, vol. 19, no. 2, pp. 133–143, 2014.
[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.
[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications, vol. 80, no. 4, pp. 1607–1623, 2015.
[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems, vol. 2017, Article ID 2536040, 20 pages, 2017.
[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.
[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.
[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.
[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.
[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.
[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.
[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009), pp. 187–192, Sliema, Malta, October 2009.
[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.
[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 464–471, Taipei, Taiwan, December 2012.
[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies (EIDWT 2013), pp. 83–87, Xi'an, China, September 2013.
[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.
[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.
[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2016), pp. 588–591, Cartagena, Colombia, May 2016.
[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA 2012), pp. 393–398, Fukuoka, Japan, March 2012.
[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.
[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science (CloudCom), pp. 26–34, Hong Kong, China, December 2017.
[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.
[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.
[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.
[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.
[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.
[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.
[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10), Indianapolis, IN, USA, June 2010.
[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.
[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.
[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.
[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.
[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.
[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.
[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.
[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 9: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

e above analysis is repeated for the MemtierPRESENCE agentmdashthe corresponding models are pro-vided in Tables 7 and 8 and summarize the computedmodel for the WS performance metrics such as latencythroughput and transfer rate for the Redis and Memc-ached services by using the data collected from PRESENCE

Memtier agents For instance it can be seen in Table 7 thatthe Redis throughput is Erlang increasing with respect totime and assumptions in previous section Moreover as p

value (0902) is greater than 005 the null hypothesis isagain accepted as compared to alternate hypothesis (thetwo samples are different) Hence the Erlang distribution

0 2000 4000 6000 8000 10000

00

02

04

06

08

10

Number of transactions per client

Nor

mal

ized

TPS

00

02

04

06

08

10

Nor

mal

ized

resp

onse

tim

e (la

tenc

y)

20 50 80

Number of parallel clients

PgbenchTPSResponse Time

FIGURE 5 PgBench Agent on the SQL DB WS e figure is reproduced from Ibrahim et al [14] (under the Creative Commons AttributionLicensepublic domain)

0 50000 100000 150000 200000

00

02

04

06

08

10

Number of fetches

Nor

mal

ized

thro

ughp

ut (f

etch

ess

ec)

00

02

04

06

08

10300 650 1000

Number of parallel clients

Nor

mal

ized

late

ncy

HTTP loadThroughputLatency

(a)

200 400 600 800 1000

20

40

60

80

100

120

Number of parallel clients

Ave

rage

late

ncy

(mse

c)

20

40

60

80

100

12030000 120000 190000

Number of fetches

Ave

rage

late

ncy

conn

ectio

n (m

sec)

HTTP loadLatency (avg (msec))Connection latency (avg (ms))

(b)

Figure 6 HTTP Load Agent Evaluation on the HTTPWSe figure is reproduced from Ibrahim et al [14] (under the Creative CommonsAttribution Licensepublic domain)

Mobile Information Systems 9

model of roughput that is minus0001 + ERLA(00155 7)is an accurate representation of the WS performancemetrics exhibited by the PRESENCE agents in Figure 4(a)e same conclusions on the generated models canbe triggered for the other metrics collected by the PRES-ENCE Memtier agents that is latency and transferrates Interestingly Table 8 reports an approximation(blue curve line) for multimodel distribution of memc-ached throughput and transfer rate is shows a failureof a single model to capture the system behaviour withrespect to the configuration setup However for the samesetup memcached latency is Beta increasing and as itsp value (0625) is greater than 005 the null hypoth-esis is accepted Hence the model for latency that isminus0001 + 1lowastBETA(099 21) is an accurate representation

of theWS performance metrics exhibited by the PRESENCEagents in Figure 4(b)

To finish on the DB WS performance analysis usingPRESENCE Table 9 exhibits the generated model for theperformance model from the Pgbench PRESENCE agentsefirst row in the table shows the approximation (blue curve line)for multimodel distribution of the throughput metric thusdemonstrating the failure of a single model to capture thesystem behaviour with respect to the configuration setupHowever for the same setup the latency metric is log normalincreasing and as its p value (0682) is greater than 005 thenull hypothesis (ie the two samples are the same) is acceptedas compared to alternate hypothesis that is the two samplesare different Hence the model for latency that isminus0001 + LOGN(0212 0202) is an accurate representation

TABLE 4 Modelling DB WS performance metrics from YCSB PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(363 309)

whereBETA(β α)

β 363α 309

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0lt xlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0757(gt005)

Latency read Gamma

minus0001 + GAMM(00846 239)

whereGAMM(β α)

β 00846α 239

offset minus0001

f(x) (βminusαxαminus1eminus(xβ))Γ(α) for xgt 00 otherwise

1113896

where Γ is the complete gamma function given byΓ(α) 1113938

inf0 tαminus1eminus1 dt

0394(gt005)

Latency update Erlang

minus0001 + ERLA(00733 3)

whereERLA(β k)

k 3β 00733

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0503(gt005)

Table 5 Modelling DB WS performance metrics from YCSB PRESENCE Agent MongoDB

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(365 211)

whereBETA(β α)

β 365α 211

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0388(gt005)

Latency read Beta

minus0001 + 1lowastBETA(16 248)

whereBETA(β α)

β 16α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0473(gt005)

Latency update Erlang

minus0001 + ERLA(00902 2)

whereERLA(β k)

k 2β 00902

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0146(gt005)

10 Mobile Information Systems

of the WS performance metrics exhibited by the PRESENCEagents in Figure 6 As regards the HTTP WS performanceanalysis using PRESENCE we decided to report in Table 10the model generated from the performance model from theHTTP Load PRESENCE agents Again we can see that we failto model the throughput metric with respect to the config-uration setup discussed in the previous section However forthe same setup the response time is Beta increasing and as itsp value (0165) is greater than 005 the null hypothesis isaccepted Hence the model for latency that is minus0001 + 1lowastBETA(155 346) is an accurate representation of the WSperformance metrics exhibited by the PRESENCE agents inFigure 6

Summary of the obtained Models In this paper 19models were generated which represent the performancemetrics for the SaaS Web Service by using the PRESENCE

approach Out of the 19 models 15 models that is 789 ofthe analyzed models are proved to have accurately representthe performance metrics collected by the PRESENCE agentssuch as throughput latency transfer rate and response timein different contexts depending on the considered WS eaccuracy of the proposed models is assessed by the referencestatistical t-tests performed against the common signifi-cance level of 005

5 Conclusion

Motivated by recent scandals in the automotive sector(which demonstrate the capacity of solution providers toadapt the behaviour of their product when submitted to anevaluation campaign to improve the performance results)this paper presents PRESENCE an automatic framework

Table 7 Modelling DB WS performance metrics from Memtier PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0767(gt005)

Latency Beta

minus0001 + 1lowastBETA(0648 172)

whereBETA(β α)

β 0648α 172

offset minus0001

f(x) (xβminus1(1minus x)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0902(gt005)

Transfer rate Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0287(gt005)

Table 6 Modelling DB WS performance metrics from YCSB PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(441 248)

whereBETA(β α)

β 441α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0106(gt005)

Latency read Beta

minus0001 + 1lowastBETA(164 312)

whereBETA(β α)

β 164α 312

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0832(gt005)

Latency update Normal

NORM(0311 0161)

whereNORM(meanμ stdDevσ)

μ 0311σ 0161

f(x) (1σ2π

radic)eminus(xminusμ)22σ2 for all real x 0794(gt005)

Mobile Information Systems 11

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10 Modelling HTTP WS performance metrics from HTTP load PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + 1lowastBETA(155 346)

whereBETA(β α)

β 155α 346

offset minus0001

f(x) 1(σx

radic)eminus(ln(x)minus μ)22σ2 for xgt 0

0 otherwise1113896 0165(gt005)

Table 9 Modelling DB WS performance metrics from Pgbench PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + LOGN(0212 0202)

whereLOGN(log Meanμ LogStdσ)

μ 0212σ 0202

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0682(gt005)

Table 8 Modelling DB WS performance metrics from Memtier PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency read Beta

minus0001 + 1lowastBETA(099 21)

whereBETA(β α)

β 099α 21

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0625(gt0005)

Latency update mdash mdash mdash

12 Mobile Information Systems

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013

[19] G Cicotti S DrsquoAntonio R Cristaldi and A Sergio ldquoHow tomonitor QoS in cloud infrastructures the QoSMONaaS ap-proachrdquo Studies in Computational Intelligence vol 446pp 253ndash262 2013

[20] G Cicotti L Coppolino S DrsquoAntonio and L Romano ldquoHowto monitor QoS in cloud infrastructures the QoSMONaaSapproachrdquo International Journal of Computational Scienceand Engineering vol 11 no 1 pp 29ndash45 2015

[21] A A Z A Ibrahim D Kliazovich and P Bouvry ldquoServicelevel agreement assurance between cloud services providersand cloud customersrdquo in Proceedingsndash2016 16th IEEEACMInternational Symposium on Cluster Cloud and Grid Com-puting CCGrid 2016 pp 588ndash591 Cartagena Colombia May2016

[22] A M Hammadi and O Hussain ldquoA framework for SLAassurance in cloud computingrdquo in Proceedingsndash26th IEEEInternational Conference on Advanced Information Net-working and Applications Workshops WAINA 2012pp 393ndash398 Fukuoka Japan March 2012

[23] S S Wagle M Guzek and P Bouvry ldquoCloud service pro-viders ranking based on service delivery and consumer ex-periencerdquo in Proceedings of the 2015 IEEE 4th InternationalConference on Cloud Networking (CloudNet) pp 209ndash212Niagara Falls ON Canada October 2015

[24] S S Wagle M Guzek and P Bouvry ldquoService performancepattern analysis and prediction of commercially availablecloud providersrdquo in Proceedings of the International Con-ference on Cloud Computing Technology and Science Cloud-Com pp 26ndash34 Hong Kong China December 2017

[25] M Guzek S Varrette V Plugaru J E Pecero and P BouvryldquoA holistic model of the performance and the energy-efficiency of hypervisors in an HPC environmentrdquo Concur-rency and Computation Practice and Experience vol 26no 15 pp 2569ndash2590 2014

[26] M Bader-El-Den and R Poli ldquoGenerating sat local-searchheuristics using a gp hyper-heuristic frameworkrdquo in ArtificialEvolution NMonmarche E-G Talbi P Collet M Schoenauerand E Lutton Eds pp 37ndash49 Springer Berlin HeidelbergGermany 2008

[27] J H Drake E Ozcan and E K Burke ldquoA case study ofcontrolling crossover in a selection hyper-heuristic frame-work using the multidimensional knapsack problemrdquo Evo-lutionary Computation vol 24 no 1 pp 113ndash141 2016

[28] A Shrestha and A Mahmood ldquoImproving genetic algorithmwith fine-tuned crossover and scaled architecturerdquo Journal ofMathematics vol 2016 Article ID 4015845 10 pages 2016

[29] T T Allen Introduction to ARENA Software SpringerLondon UK 2011

[30] R C Blair and J J Higgins ldquoA comparison of the power ofWilcoxonrsquos rank-sum statistic to that of Studentrsquos t statisticunder various nonnormal distributionsrdquo Journal of Educa-tional Statistics vol 5 no 4 pp 309ndash335 1980

[31] B F Cooper A Silberstein E Tam R Ramakrishnan andR Sears ldquoBenchmarking cloud serving systems with YCSBrdquo

in Proceedings of the 1st ACM symposium on Cloud computingSoCCrsquo 10 Indianapolis IN USA June 2010

[32] Redis Labs memtier_benchmark A High-9roughputBenchmarking Tool for Redis amp Memcached Redis LabsMountain View CA USA httpsredislabscomblogmemtier_benchmark-a-high-throughput-benchmarking-tool-for-redis- memcached 2013

[33] Redis Labs How Fast is Redis Redis Labs Mountain ViewCA USA httpsredisiotopicsbenchmarks 2018

[34] Twitter rpc-perfndashRPC Performance Testing Twitter San FranciscoCA USA httpsgithubcomAbdallahCoptanrpc-perf 2018

[35] Postgresql ldquopgbenchmdashrun a benchmark test on PostgreSQLrdquohttpswwwpostgresqlorgdocs94staticpgbenchhtml 2018

[36] A Labs ldquoHTTP_LOAD multiprocessing http test clientrdquohttpsgithubcomAbdallahCoptanHTTP_LOAD 2018

[37] Apache ldquoabmdashApache HTTP server benchmarking toolrdquohttpshttpdapacheorgdocs24programsabhtml 2018

[38] e Regents of the University of California ldquoiPerfmdashthe ulti-mate speed test tool for TCP UDP and SCTPrdquo httpsiperffr2018

[39] S Varrette P Bouvry H Cartiaux and F GeorgatosldquoManagement of an academic HPC cluster the UL experi-encerdquo in Proceedings of the 2014 International Conference onHigh Performance Computing amp Simulation (HPCS 2014)pp 959ndash967 Bologna Italy July 2014

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 10: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

model of roughput that is minus0001 + ERLA(00155 7)is an accurate representation of the WS performancemetrics exhibited by the PRESENCE agents in Figure 4(a)e same conclusions on the generated models canbe triggered for the other metrics collected by the PRES-ENCE Memtier agents that is latency and transferrates Interestingly Table 8 reports an approximation(blue curve line) for multimodel distribution of memc-ached throughput and transfer rate is shows a failureof a single model to capture the system behaviour withrespect to the configuration setup However for the samesetup memcached latency is Beta increasing and as itsp value (0625) is greater than 005 the null hypoth-esis is accepted Hence the model for latency that isminus0001 + 1lowastBETA(099 21) is an accurate representation

of theWS performance metrics exhibited by the PRESENCEagents in Figure 4(b)

To finish on the DB WS performance analysis usingPRESENCE Table 9 exhibits the generated model for theperformance model from the Pgbench PRESENCE agentsefirst row in the table shows the approximation (blue curve line)for multimodel distribution of the throughput metric thusdemonstrating the failure of a single model to capture thesystem behaviour with respect to the configuration setupHowever for the same setup the latency metric is log normalincreasing and as its p value (0682) is greater than 005 thenull hypothesis (ie the two samples are the same) is acceptedas compared to alternate hypothesis that is the two samplesare different Hence the model for latency that isminus0001 + LOGN(0212 0202) is an accurate representation

TABLE 4 Modelling DB WS performance metrics from YCSB PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(363 309)

whereBETA(β α)

β 363α 309

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0lt xlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0757(gt005)

Latency read Gamma

minus0001 + GAMM(00846 239)

whereGAMM(β α)

β 00846α 239

offset minus0001

f(x) (βminusαxαminus1eminus(xβ))Γ(α) for xgt 00 otherwise

1113896

where Γ is the complete gamma function given byΓ(α) 1113938

inf0 tαminus1eminus1 dt

0394(gt005)

Latency update Erlang

minus0001 + ERLA(00733 3)

whereERLA(β k)

k 3β 00733

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0503(gt005)

Table 5 Modelling DB WS performance metrics from YCSB PRESENCE Agent MongoDB

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(365 211)

whereBETA(β α)

β 365α 211

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0388(gt005)

Latency read Beta

minus0001 + 1lowastBETA(16 248)

whereBETA(β α)

β 16α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0473(gt005)

Latency update Erlang

minus0001 + ERLA(00902 2)

whereERLA(β k)

k 2β 00902

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0146(gt005)

10 Mobile Information Systems

of the WS performance metrics exhibited by the PRESENCEagents in Figure 6 As regards the HTTP WS performanceanalysis using PRESENCE we decided to report in Table 10the model generated from the performance model from theHTTP Load PRESENCE agents Again we can see that we failto model the throughput metric with respect to the config-uration setup discussed in the previous section However forthe same setup the response time is Beta increasing and as itsp value (0165) is greater than 005 the null hypothesis isaccepted Hence the model for latency that is minus0001 + 1lowastBETA(155 346) is an accurate representation of the WSperformance metrics exhibited by the PRESENCE agents inFigure 6

Summary of the obtained Models In this paper 19models were generated which represent the performancemetrics for the SaaS Web Service by using the PRESENCE

approach Out of the 19 models 15 models that is 789 ofthe analyzed models are proved to have accurately representthe performance metrics collected by the PRESENCE agentssuch as throughput latency transfer rate and response timein different contexts depending on the considered WS eaccuracy of the proposed models is assessed by the referencestatistical t-tests performed against the common signifi-cance level of 005

5 Conclusion

Motivated by recent scandals in the automotive sector(which demonstrate the capacity of solution providers toadapt the behaviour of their product when submitted to anevaluation campaign to improve the performance results)this paper presents PRESENCE an automatic framework

Table 7 Modelling DB WS performance metrics from Memtier PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0767(gt005)

Latency Beta

minus0001 + 1lowastBETA(0648 172)

whereBETA(β α)

β 0648α 172

offset minus0001

f(x) (xβminus1(1minus x)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0902(gt005)

Transfer rate Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0287(gt005)

Table 6 Modelling DB WS performance metrics from YCSB PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(441 248)

whereBETA(β α)

β 441α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0106(gt005)

Latency read Beta

minus0001 + 1lowastBETA(164 312)

whereBETA(β α)

β 164α 312

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0832(gt005)

Latency update Normal

NORM(0311 0161)

whereNORM(meanμ stdDevσ)

μ 0311σ 0161

f(x) (1σ2π

radic)eminus(xminusμ)22σ2 for all real x 0794(gt005)

Mobile Information Systems 11

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10: Modelling HTTP WS performance metrics from the HTTP Load PRESENCE Agent.

Metric | Distribution | Model expression | p value (t-test)
Throughput | — | — | —
Latency | Beta | -0.001 + 1*BETA(1.55, 3.46), where BETA(β, α): β = 1.55, α = 3.46, offset = -0.001, f(x) = x^(β-1) (1-x)^(α-1) / B(β, α) for 0 < x < 1; 0 otherwise, with B(β, α) = ∫_0^1 t^(β-1) (1-t)^(α-1) dt the complete beta function | 0.165 (>0.05)

Table 9: Modelling DB WS performance metrics from the Pgbench PRESENCE Agent.

Metric | Distribution | Model expression | p value (t-test)
Throughput | — | — | —
Latency | Log normal | -0.001 + LOGN(0.212, 0.202), where LOGN(logMean μ, logStd σ): μ = 0.212, σ = 0.202, offset = -0.001, f(x) = (1/(σx√(2π))) e^(-(ln x - μ)²/(2σ²)) for x > 0; 0 otherwise | 0.682 (>0.05)

Table 8: Modelling DB WS performance metrics from the Memtier PRESENCE Agent (Memcached).

Metric | Distribution | Model expression | p value (t-test)
Throughput | — | — | —
Latency (read) | Beta | -0.001 + 1*BETA(0.99, 2.1), where BETA(β, α): β = 0.99, α = 2.1, offset = -0.001, f(x) = x^(β-1) (1-x)^(α-1) / B(β, α) for 0 < x < 1; 0 otherwise, with B(β, α) = ∫_0^1 t^(β-1) (1-t)^(α-1) dt the complete beta function | 0.625 (>0.05)
Latency (update) | — | — | —


This modelling is based on the configuration of the agents responsible for the performance evaluation.

In this paper, a nonexhaustive list of 22 metrics was suggested to reflect all facets of the QoS and SLA compliance of the WSs offered by a given CSP. Then the data collected from the execution of each agent within the PRESENCE client can be aggregated within a dedicated module and processed to exhibit a rank and classification of the involved CSPs. From the preliminary modelling of the load pattern and performance metrics of each agent, a stealth module takes care of finding, through a Genetic Algorithm (GA), the best set of parameters for each agent such that the resulting pattern of the PRESENCE client is indistinguishable from a regular usage. While the complete framework is described in the seminal paper for PRESENCE [14], the first experimental results presented in this work focus on the performance and networking metrics between cloud providers and cloud customers.
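As an illustration of the kind of search the stealth module performs, the following minimal genetic-algorithm sketch evolves a set of agent parameters so as to minimise the distance between the generated load pattern and a reference "regular usage" pattern; the parameter names, bounds, and fitness function are hypothetical stand-ins rather than the actual PRESENCE implementation.

```python
# Minimal GA sketch (illustrative only; parameters, bounds, and fitness are hypothetical).
import random

PARAM_BOUNDS = [(1, 64), (1, 1000), (0.0, 1.0)]   # e.g. threads, requests/s, read ratio

def reference_pattern():
    """Stand-in for the modelled 'regular usage' load pattern."""
    return [32, 500, 0.7]

def fitness(params):
    """Lower is better: distance between the agent-generated pattern and regular usage."""
    ref = reference_pattern()
    return sum((p - r) ** 2 for p, r in zip(params, ref))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, PARAM_BOUNDS)]

def evolve(pop_size=30, generations=50):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=fitness)

best = evolve()
print("best agent parameters:", best, "fitness:", fitness(best))
```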

In this context, 19 generated models were provided, out of which 78.9% accurately represent the WS performance metrics for the two SaaS WSs deployed in the experimental setup. The claimed accuracy is confirmed by the outcome of reference statistical t-tests and the associated p values computed for each model against the common significance level of 0.05.

This opens novel perspectives for assessing the SLA compliance of Cloud providers using the PRESENCE framework. The future work induced by this study includes the modelling and validation of the other modules defined within PRESENCE, based on the monitoring and modelling of the performance metrics proposed in this article. Of course, the ambition remains to test our framework against a real WS while performing further experimentation on a larger set of applications and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This article is an extended version of [14], presented at the 32nd IEEE International Conference on Information Networking (ICOIN) 2018. In particular, Figures 1–6 are reproduced from [14].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the funding of the joint ILNAS-UL Programme on Digital Trust for Smart-ICT. The experiments presented in this paper were carried out using the HPC facilities of the University of Luxembourg [39] (see http://hpc.uni.lu).

References

[1] P. M. Mell and T. Grance, "SP 800-145: The NIST definition of cloud computing," Technical Report, National Institute of Standards & Technology (NIST), Gaithersburg, MD, USA, 2011.

[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.

[3] A. Botta, W. De Donato, V. Persico, and A. Pescape, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems Journal, vol. 56, pp. 684–700, 2016.

[4] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems Journal, vol. 29, no. 1, pp. 84–106, 2013.

[5] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and N. Venkatasubramanian, "Mobile cloud computing: a survey, state of art and future directions," Mobile Networks and Applications Journal, vol. 19, no. 2, pp. 133–143, 2014.

[6] Y. Xu and S. Mao, "A survey of mobile cloud computing for rich media applications," IEEE Wireless Communications, vol. 20, no. 3, pp. 46–53, 2013.

[7] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: perspectives and challenges," Wireless Personal Communications Journal, vol. 80, no. 4, pp. 1607–1623, 2015.

[8] P. R. Palos-Sanchez, F. J. Arenas-Marquez, and M. Aguayo-Camacho, "Cloud computing (SaaS) adoption as a strategic technology: results of an empirical study," Mobile Information Systems Journal, vol. 2017, Article ID 2536040, 20 pages, 2017.

[9] M. N. Sadiku, S. M. Musa, and O. D. Momoh, "Cloud computing: opportunities and challenges," IEEE Potentials, vol. 33, no. 1, pp. 34–36, 2014.

[10] A. A. Ibrahim, D. Kliazovich, and P. Bouvry, "On service level agreement assurance in cloud computing data centers," in Proceedings of the 2016 IEEE 9th International Conference on Cloud Computing, pp. 921–926, San Francisco, CA, USA, June-July 2016.

[11] S. A. Baset, "Cloud SLAs: present and future," ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.

[12] L. Sun, J. Singh, and O. K. Hussain, "Service level agreement (SLA) assurance for cloud services: a survey from a transactional risk perspective," in Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia, pp. 263–266, Bali, Indonesia, December 2012.

[13] C. Di Martino, S. Sarkar, R. Ganesan, Z. T. Kalbarczyk, and R. K. Iyer, "Analysis and diagnosis of SLA violations in a production SaaS cloud," IEEE Transactions on Reliability, vol. 66, no. 1, pp. 54–75, 2017.

[14] A. A. Ibrahim, S. Varrette, and P. Bouvry, "PRESENCE: toward a novel approach for performance evaluation of mobile cloud SaaS web services," in Proceedings of the 32nd IEEE International Conference on Information Networking (ICOIN 2018), Chiang Mai, Thailand, January 2018.

[15] V. Stantchev, "Performance evaluation of cloud computing offerings," in Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences, ADVCOMP 2009, pp. 187–192, Sliema, Malta, October 2009.

[16] J. Y. Lee, J. W. Lee, D. W. Cheun, and S. D. Kim, "A quality model for evaluating software-as-a-service in cloud computing," in Proceedings of the 2009 Seventh ACIS International Conference on Software Engineering Research, Management and Applications, pp. 261–266, Haikou, China, 2009.

[17] C. J. Gao, K. Manjula, P. Roopa et al., "A cloud-based TaaS infrastructure with tools for SaaS validation, performance and scalability evaluation," in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 464–471, Taipei, Taiwan, December 2012.

[18] P. X. Wen and L. Dong, "Quality model for evaluating SaaS service," in Proceedings of the 4th International Conference on Emerging Intelligent Data and Web Technologies, EIDWT 2013, pp. 83–87, Xi'an, China, September 2013.

[19] G. Cicotti, S. D'Antonio, R. Cristaldi, and A. Sergio, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," Studies in Computational Intelligence, vol. 446, pp. 253–262, 2013.

[20] G. Cicotti, L. Coppolino, S. D'Antonio, and L. Romano, "How to monitor QoS in cloud infrastructures: the QoSMONaaS approach," International Journal of Computational Science and Engineering, vol. 11, no. 1, pp. 29–45, 2015.

[21] A. A. Z. A. Ibrahim, D. Kliazovich, and P. Bouvry, "Service level agreement assurance between cloud services providers and cloud customers," in Proceedings of the 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2016, pp. 588–591, Cartagena, Colombia, May 2016.

[22] A. M. Hammadi and O. Hussain, "A framework for SLA assurance in cloud computing," in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2012, pp. 393–398, Fukuoka, Japan, March 2012.

[23] S. S. Wagle, M. Guzek, and P. Bouvry, "Cloud service providers ranking based on service delivery and consumer experience," in Proceedings of the 2015 IEEE 4th International Conference on Cloud Networking (CloudNet), pp. 209–212, Niagara Falls, ON, Canada, October 2015.

[24] S. S. Wagle, M. Guzek, and P. Bouvry, "Service performance pattern analysis and prediction of commercially available cloud providers," in Proceedings of the International Conference on Cloud Computing Technology and Science, CloudCom, pp. 26–34, Hong Kong, China, December 2017.

[25] M. Guzek, S. Varrette, V. Plugaru, J. E. Pecero, and P. Bouvry, "A holistic model of the performance and the energy-efficiency of hypervisors in an HPC environment," Concurrency and Computation: Practice and Experience, vol. 26, no. 15, pp. 2569–2590, 2014.

[26] M. Bader-El-Den and R. Poli, "Generating SAT local-search heuristics using a GP hyper-heuristic framework," in Artificial Evolution, N. Monmarche, E.-G. Talbi, P. Collet, M. Schoenauer, and E. Lutton, Eds., pp. 37–49, Springer, Berlin, Heidelberg, Germany, 2008.

[27] J. H. Drake, E. Ozcan, and E. K. Burke, "A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem," Evolutionary Computation, vol. 24, no. 1, pp. 113–141, 2016.

[28] A. Shrestha and A. Mahmood, "Improving genetic algorithm with fine-tuned crossover and scaled architecture," Journal of Mathematics, vol. 2016, Article ID 4015845, 10 pages, 2016.

[29] T. T. Allen, Introduction to ARENA Software, Springer, London, UK, 2011.

[30] R. C. Blair and J. J. Higgins, "A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions," Journal of Educational Statistics, vol. 5, no. 4, pp. 309–335, 1980.

[31] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC '10, Indianapolis, IN, USA, June 2010.

[32] Redis Labs, memtier_benchmark: A High-Throughput Benchmarking Tool for Redis & Memcached, Redis Labs, Mountain View, CA, USA, https://redislabs.com/blog/memtier_benchmark-a-high-throughput-benchmarking-tool-for-redis-memcached, 2013.

[33] Redis Labs, How Fast is Redis?, Redis Labs, Mountain View, CA, USA, https://redis.io/topics/benchmarks, 2018.

[34] Twitter, rpc-perf: RPC Performance Testing, Twitter, San Francisco, CA, USA, https://github.com/AbdallahCoptan/rpc-perf, 2018.

[35] PostgreSQL, "pgbench: run a benchmark test on PostgreSQL," https://www.postgresql.org/docs/9.4/static/pgbench.html, 2018.

[36] A. Labs, "HTTP_LOAD: multiprocessing http test client," https://github.com/AbdallahCoptan/HTTP_LOAD, 2018.

[37] Apache, "ab: Apache HTTP server benchmarking tool," https://httpd.apache.org/docs/2.4/programs/ab.html, 2018.

[38] The Regents of the University of California, "iPerf: the ultimate speed test tool for TCP, UDP and SCTP," https://iperf.fr, 2018.

[39] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos, "Management of an academic HPC cluster: the UL experience," in Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967, Bologna, Italy, July 2014.

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 11: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

of the WS performance metrics exhibited by the PRESENCEagents in Figure 6 As regards the HTTP WS performanceanalysis using PRESENCE we decided to report in Table 10the model generated from the performance model from theHTTP Load PRESENCE agents Again we can see that we failto model the throughput metric with respect to the config-uration setup discussed in the previous section However forthe same setup the response time is Beta increasing and as itsp value (0165) is greater than 005 the null hypothesis isaccepted Hence the model for latency that is minus0001 + 1lowastBETA(155 346) is an accurate representation of the WSperformance metrics exhibited by the PRESENCE agents inFigure 6

Summary of the obtained Models In this paper 19models were generated which represent the performancemetrics for the SaaS Web Service by using the PRESENCE

approach Out of the 19 models 15 models that is 789 ofthe analyzed models are proved to have accurately representthe performance metrics collected by the PRESENCE agentssuch as throughput latency transfer rate and response timein different contexts depending on the considered WS eaccuracy of the proposed models is assessed by the referencestatistical t-tests performed against the common signifi-cance level of 005

5 Conclusion

Motivated by recent scandals in the automotive sector(which demonstrate the capacity of solution providers toadapt the behaviour of their product when submitted to anevaluation campaign to improve the performance results)this paper presents PRESENCE an automatic framework

Table 7 Modelling DB WS performance metrics from Memtier PRESENCE Agent Redis

Metric Distribution Model Expression p value (t-test)

roughput Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0767(gt005)

Latency Beta

minus0001 + 1lowastBETA(0648 172)

whereBETA(β α)

β 0648α 172

offset minus0001

f(x) (xβminus1(1minus x)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0902(gt005)

Transfer rate Erlang

minus0001 + ERLA(00155 7)

whereERLA(β k)

k 7β 00155

offset minus0001

f(x) (βminuskxkminus1eminus(xβ))(kminus 1) for xgt 00 otherwise

1113896 0287(gt005)

Table 6 Modelling DB WS performance metrics from YCSB PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput Beta

minus0001 + 1lowastBETA(441 248)

whereBETA(β α)

β 441α 248

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0106(gt005)

Latency read Beta

minus0001 + 1lowastBETA(164 312)

whereBETA(β α)

β 164α 312

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0832(gt005)

Latency update Normal

NORM(0311 0161)

whereNORM(meanμ stdDevσ)

μ 0311σ 0161

f(x) (1σ2π

radic)eminus(xminusμ)22σ2 for all real x 0794(gt005)

Mobile Information Systems 11

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10 Modelling HTTP WS performance metrics from HTTP load PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + 1lowastBETA(155 346)

whereBETA(β α)

β 155α 346

offset minus0001

f(x) 1(σx

radic)eminus(ln(x)minus μ)22σ2 for xgt 0

0 otherwise1113896 0165(gt005)

Table 9 Modelling DB WS performance metrics from Pgbench PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + LOGN(0212 0202)

whereLOGN(log Meanμ LogStdσ)

μ 0212σ 0202

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0682(gt005)

Table 8 Modelling DB WS performance metrics from Memtier PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency read Beta

minus0001 + 1lowastBETA(099 21)

whereBETA(β α)

β 099α 21

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0625(gt0005)

Latency update mdash mdash mdash

12 Mobile Information Systems

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013

[19] G Cicotti S DrsquoAntonio R Cristaldi and A Sergio ldquoHow tomonitor QoS in cloud infrastructures the QoSMONaaS ap-proachrdquo Studies in Computational Intelligence vol 446pp 253ndash262 2013

[20] G Cicotti L Coppolino S DrsquoAntonio and L Romano ldquoHowto monitor QoS in cloud infrastructures the QoSMONaaSapproachrdquo International Journal of Computational Scienceand Engineering vol 11 no 1 pp 29ndash45 2015

[21] A A Z A Ibrahim D Kliazovich and P Bouvry ldquoServicelevel agreement assurance between cloud services providersand cloud customersrdquo in Proceedingsndash2016 16th IEEEACMInternational Symposium on Cluster Cloud and Grid Com-puting CCGrid 2016 pp 588ndash591 Cartagena Colombia May2016

[22] A M Hammadi and O Hussain ldquoA framework for SLAassurance in cloud computingrdquo in Proceedingsndash26th IEEEInternational Conference on Advanced Information Net-working and Applications Workshops WAINA 2012pp 393ndash398 Fukuoka Japan March 2012

[23] S S Wagle M Guzek and P Bouvry ldquoCloud service pro-viders ranking based on service delivery and consumer ex-periencerdquo in Proceedings of the 2015 IEEE 4th InternationalConference on Cloud Networking (CloudNet) pp 209ndash212Niagara Falls ON Canada October 2015

[24] S S Wagle M Guzek and P Bouvry ldquoService performancepattern analysis and prediction of commercially availablecloud providersrdquo in Proceedings of the International Con-ference on Cloud Computing Technology and Science Cloud-Com pp 26ndash34 Hong Kong China December 2017

[25] M Guzek S Varrette V Plugaru J E Pecero and P BouvryldquoA holistic model of the performance and the energy-efficiency of hypervisors in an HPC environmentrdquo Concur-rency and Computation Practice and Experience vol 26no 15 pp 2569ndash2590 2014

[26] M Bader-El-Den and R Poli ldquoGenerating sat local-searchheuristics using a gp hyper-heuristic frameworkrdquo in ArtificialEvolution NMonmarche E-G Talbi P Collet M Schoenauerand E Lutton Eds pp 37ndash49 Springer Berlin HeidelbergGermany 2008

[27] J H Drake E Ozcan and E K Burke ldquoA case study ofcontrolling crossover in a selection hyper-heuristic frame-work using the multidimensional knapsack problemrdquo Evo-lutionary Computation vol 24 no 1 pp 113ndash141 2016

[28] A Shrestha and A Mahmood ldquoImproving genetic algorithmwith fine-tuned crossover and scaled architecturerdquo Journal ofMathematics vol 2016 Article ID 4015845 10 pages 2016

[29] T T Allen Introduction to ARENA Software SpringerLondon UK 2011

[30] R C Blair and J J Higgins ldquoA comparison of the power ofWilcoxonrsquos rank-sum statistic to that of Studentrsquos t statisticunder various nonnormal distributionsrdquo Journal of Educa-tional Statistics vol 5 no 4 pp 309ndash335 1980

[31] B F Cooper A Silberstein E Tam R Ramakrishnan andR Sears ldquoBenchmarking cloud serving systems with YCSBrdquo

in Proceedings of the 1st ACM symposium on Cloud computingSoCCrsquo 10 Indianapolis IN USA June 2010

[32] Redis Labs memtier_benchmark A High-9roughputBenchmarking Tool for Redis amp Memcached Redis LabsMountain View CA USA httpsredislabscomblogmemtier_benchmark-a-high-throughput-benchmarking-tool-for-redis- memcached 2013

[33] Redis Labs How Fast is Redis Redis Labs Mountain ViewCA USA httpsredisiotopicsbenchmarks 2018

[34] Twitter rpc-perfndashRPC Performance Testing Twitter San FranciscoCA USA httpsgithubcomAbdallahCoptanrpc-perf 2018

[35] Postgresql ldquopgbenchmdashrun a benchmark test on PostgreSQLrdquohttpswwwpostgresqlorgdocs94staticpgbenchhtml 2018

[36] A Labs ldquoHTTP_LOAD multiprocessing http test clientrdquohttpsgithubcomAbdallahCoptanHTTP_LOAD 2018

[37] Apache ldquoabmdashApache HTTP server benchmarking toolrdquohttpshttpdapacheorgdocs24programsabhtml 2018

[38] e Regents of the University of California ldquoiPerfmdashthe ulti-mate speed test tool for TCP UDP and SCTPrdquo httpsiperffr2018

[39] S Varrette P Bouvry H Cartiaux and F GeorgatosldquoManagement of an academic HPC cluster the UL experi-encerdquo in Proceedings of the 2014 International Conference onHigh Performance Computing amp Simulation (HPCS 2014)pp 959ndash967 Bologna Italy July 2014

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 12: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

which aims at evaluating monitoring and benchmarkingWeb Services (WSs) offered across several Cloud ServicesProviders (CSPs) for all types of Cloud Computing (CC) andMobile CC platforms More precisely PRESENCE aims atevaluating the QoS and SLA compliance of Web Services(WSs) by stealth way that is by rendering the performanceevaluation as close as possible from a regular yet heavy usage

of the considered service Our framework is relying ona Multi-Agent System (MAS) and a carefully designed client(called the Auditor) responsible to interact with the set ofCSPs being evaluated

e first step to ensure the dynamic adaptation of theworkload to hide the evaluation process resides in the ca-pacity to model accurately this workload based on the

Table 10 Modelling HTTP WS performance metrics from HTTP load PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + 1lowastBETA(155 346)

whereBETA(β α)

β 155α 346

offset minus0001

f(x) 1(σx

radic)eminus(ln(x)minus μ)22σ2 for xgt 0

0 otherwise1113896 0165(gt005)

Table 9 Modelling DB WS performance metrics from Pgbench PRESENCE Agent

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency Log normal

minus0001 + LOGN(0212 0202)

whereLOGN(log Meanμ LogStdσ)

μ 0212σ 0202

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0682(gt005)

Table 8 Modelling DB WS performance metrics from Memtier PRESENCE Agent Memcached

Metric Distribution Model Expression p value (t-test)

roughput mdash mdash mdash

Latency read Beta

minus0001 + 1lowastBETA(099 21)

whereBETA(β α)

β 099α 21

offset minus0001

f(x) (xβminus1(1minusx)αminus1)B(β α) for 0ltxlt 10 otherwise

1113896

where β is the complete beta function given byB(β α) 1113938

10 tβminus1(1minus t)αminus1 dt

0625(gt0005)

Latency update mdash mdash mdash

12 Mobile Information Systems

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013

[19] G Cicotti S DrsquoAntonio R Cristaldi and A Sergio ldquoHow tomonitor QoS in cloud infrastructures the QoSMONaaS ap-proachrdquo Studies in Computational Intelligence vol 446pp 253ndash262 2013

[20] G Cicotti L Coppolino S DrsquoAntonio and L Romano ldquoHowto monitor QoS in cloud infrastructures the QoSMONaaSapproachrdquo International Journal of Computational Scienceand Engineering vol 11 no 1 pp 29ndash45 2015

[21] A A Z A Ibrahim D Kliazovich and P Bouvry ldquoServicelevel agreement assurance between cloud services providersand cloud customersrdquo in Proceedingsndash2016 16th IEEEACMInternational Symposium on Cluster Cloud and Grid Com-puting CCGrid 2016 pp 588ndash591 Cartagena Colombia May2016

[22] A M Hammadi and O Hussain ldquoA framework for SLAassurance in cloud computingrdquo in Proceedingsndash26th IEEEInternational Conference on Advanced Information Net-working and Applications Workshops WAINA 2012pp 393ndash398 Fukuoka Japan March 2012

[23] S S Wagle M Guzek and P Bouvry ldquoCloud service pro-viders ranking based on service delivery and consumer ex-periencerdquo in Proceedings of the 2015 IEEE 4th InternationalConference on Cloud Networking (CloudNet) pp 209ndash212Niagara Falls ON Canada October 2015

[24] S S Wagle M Guzek and P Bouvry ldquoService performancepattern analysis and prediction of commercially availablecloud providersrdquo in Proceedings of the International Con-ference on Cloud Computing Technology and Science Cloud-Com pp 26ndash34 Hong Kong China December 2017

[25] M Guzek S Varrette V Plugaru J E Pecero and P BouvryldquoA holistic model of the performance and the energy-efficiency of hypervisors in an HPC environmentrdquo Concur-rency and Computation Practice and Experience vol 26no 15 pp 2569ndash2590 2014

[26] M Bader-El-Den and R Poli ldquoGenerating sat local-searchheuristics using a gp hyper-heuristic frameworkrdquo in ArtificialEvolution NMonmarche E-G Talbi P Collet M Schoenauerand E Lutton Eds pp 37ndash49 Springer Berlin HeidelbergGermany 2008

[27] J H Drake E Ozcan and E K Burke ldquoA case study ofcontrolling crossover in a selection hyper-heuristic frame-work using the multidimensional knapsack problemrdquo Evo-lutionary Computation vol 24 no 1 pp 113ndash141 2016

[28] A Shrestha and A Mahmood ldquoImproving genetic algorithmwith fine-tuned crossover and scaled architecturerdquo Journal ofMathematics vol 2016 Article ID 4015845 10 pages 2016

[29] T T Allen Introduction to ARENA Software SpringerLondon UK 2011

[30] R C Blair and J J Higgins ldquoA comparison of the power ofWilcoxonrsquos rank-sum statistic to that of Studentrsquos t statisticunder various nonnormal distributionsrdquo Journal of Educa-tional Statistics vol 5 no 4 pp 309ndash335 1980

[31] B F Cooper A Silberstein E Tam R Ramakrishnan andR Sears ldquoBenchmarking cloud serving systems with YCSBrdquo

in Proceedings of the 1st ACM symposium on Cloud computingSoCCrsquo 10 Indianapolis IN USA June 2010

[32] Redis Labs memtier_benchmark A High-9roughputBenchmarking Tool for Redis amp Memcached Redis LabsMountain View CA USA httpsredislabscomblogmemtier_benchmark-a-high-throughput-benchmarking-tool-for-redis- memcached 2013

[33] Redis Labs How Fast is Redis Redis Labs Mountain ViewCA USA httpsredisiotopicsbenchmarks 2018

[34] Twitter rpc-perfndashRPC Performance Testing Twitter San FranciscoCA USA httpsgithubcomAbdallahCoptanrpc-perf 2018

[35] Postgresql ldquopgbenchmdashrun a benchmark test on PostgreSQLrdquohttpswwwpostgresqlorgdocs94staticpgbenchhtml 2018

[36] A Labs ldquoHTTP_LOAD multiprocessing http test clientrdquohttpsgithubcomAbdallahCoptanHTTP_LOAD 2018

[37] Apache ldquoabmdashApache HTTP server benchmarking toolrdquohttpshttpdapacheorgdocs24programsabhtml 2018

[38] e Regents of the University of California ldquoiPerfmdashthe ulti-mate speed test tool for TCP UDP and SCTPrdquo httpsiperffr2018

[39] S Varrette P Bouvry H Cartiaux and F GeorgatosldquoManagement of an academic HPC cluster the UL experi-encerdquo in Proceedings of the 2014 International Conference onHigh Performance Computing amp Simulation (HPCS 2014)pp 959ndash967 Bologna Italy July 2014

14 Mobile Information Systems

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 13: CE: Monitoring and Modelling the Performance Metrics of ...downloads.hindawi.com/journals/misy/2018/1351386.pdf · ResearchArticle PRESENCE: Monitoring and Modelling the Performance

configuration of the agents responsible for the performanceevaluation

In this paper a nonexhaustive list of 22 metrics wassuggested to reflect all facets of the QoS and SLA complianceof a WSs offered by a given CSP en the data collectedfrom the execution of each agent within the PRESENCEclient can be then aggregated within a dedicated module andtreated to exhibit a rank and classification of the involvedCSPs From the preliminary modelling of the load patternand performance metrics of each agent a stealth moduletakes care of finding through a GA the best set of parametersfor each agent such that the resulting pattern of thePRESENCE client is indistinguishable from a regular usageWhile the complete framework is described in the seminalpaper for PRESENCE [14] the first experimental resultspresented in this work focus on the performance and net-working metrics between cloud providers and cloudcustomers

In this context 19 generated models were provided outof which 789 accurately represent the WS performancemetrics for the two SaaS WSs deployed in the experimentalsetup e claimed accuracy is confirmed by the outcome ofreference statistical t-tests and the associated p valuescomputed for each model against the common significancelevel of 005

is opens novel perspectives for assessing the SLAcompliance of Cloud providers using the PRESENCEframework e future work induced by this study includesthe modelling and validation of the other modules definedwithin PRESENCE and based on the monitoring and mod-elling of the performance metrics proposed in this article Ofcourse the ambition remains to test our framework againsta real WS while performing further experimentation ona larger set of applications and machines

Data Availability

e data used to support the findings of this study areavailable from the corresponding author upon request

Disclosure

is article is an extended version of [14] presented at the32nd IEEE International Conference of Information Net-works (ICOIN) 2018 In particular Figures 1ndash6 arereproduced from [14]

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors would like to acknowledge the funding of thejoint ILNAS-UL Programme onDigital Trust for Smart-ICTe experiments presented in this paper were carried outusing the HPC facilities of the University of Luxembourg[39] (see httphpcunilu)

References

[1] P MMell and T Grance ldquoSP 800ndash145eNISTdefinition ofcloud computingrdquo Technical Report National Institute ofStandards amp Technology (NIST) Gaithersburg MD USA2011

[2] Q Zhang L Cheng and R Boutaba ldquoCloud computingstate-of-the-art and research challengesrdquo Journal of InternetServices and Applications vol 1 no 1 pp 7ndash18 2010

[3] A BottaW DeDonato V Persico and A Pescape ldquoIntegrationof cloud computing and internet of things a surveyrdquo FutureGeneration Computer Systems Journal vol 56 pp 684ndash700 2016

[4] N Fernando S W Loke and W Rahayu ldquoMobile cloudcomputing a surveyrdquo Future generation computer systemsJournal vol 29 no 1 pp 84ndash106 2013

[5] M R Rahimi J Ren C H Liu A V Vasilakos andN Venkatasubramanian ldquoMobile cloud computing a surveystate of art and future directionsrdquo Mobile Networks andApplications Journal vol 19 no 2 pp 133ndash143 2014

[6] Y Xu and S Mao ldquoA survey of mobile cloud computing forrich media applicationsrdquo IEEE Wireless Communicationsvol 20 no 3 pp 46ndash53 2013

[7] Y Wang R Chen and D-C Wang ldquoA survey of mobilecloud computing applications perspectives and challengesrdquoWireless Personal Communications Journal vol 80 no 4pp 1607ndash1623 2015

[8] P R Palos-Sanchez F J Arenas-Marquez and M Aguayo-Camacho ldquoCloud computing (SaaS) adoption as a strategictechnology results of an empirical studyrdquoMobile InformationSystems Journal vol 2017 Article ID 2536040 20 pages 2017

[9] M N Sadiku S M Musa and O D Momoh ldquoCloudcomputing opportunities and challengesrdquo IEEE Potentialsvol 33 no 1 pp 34ndash36 2014

[10] A A Ibrahim D Kliazovich and P Bouvry ldquoOn service levelagreement assurance in cloud computing data centersrdquo inProceedings of the 2016 IEEE 9th International Conference onCloud Computing pp 921ndash926 San Francisco CA USAJune-July 2016

[11] S A Baset ldquoCloud SLAs present and futurerdquo ACM SIGOPSOperating Systems Review vol 46 no 2 pp 57ndash66 2012

[12] L Sun J Singh and O K Hussain ldquoService level agreement(SLA) assurance for cloud services a survey from a trans-actional risk perspectiverdquo in Proceedings of the 10th In-ternational Conference on Advances in Mobile Computing ampMultimedia pp 263ndash266 Bali Indonesia December 2012

[13] C Di Martino S Sarkar R Ganesan Z T Kalbarczyk andR K Iyer ldquoAnalysis and diagnosis of SLA violations ina production SaaS cloudrdquo IEEE Transactions on Reliabilityvol 66 no 1 pp 54ndash75 2017

[14] A A Ibrahim S Varrette and P Bouvry ldquoPRESENCE to-ward a novel approach for performance evaluation of mobilecloud SaaS web servicesrdquo in Proceedings of the 32nd IEEEInternational Conference on Information Networking (ICOIN2018) Chiang Mai ailand January 2018

[15] V Stantchev ldquoPerformance evaluation of cloud computingofferingsrdquo in Proceedings of the 3rd International Conferenceon Advanced Engineering Computing and Applications inSciences ADVCOMP 2009 pp 187ndash192 Sliema MaltaOctober 2009

[16] J Y Lee J W Lee D W Cheun and S D Kim ldquoA qualitymodel for evaluating software-as-a-service in cloud com-putingrdquo in Proceedings of the 2009 Seventh ACIS InternationalConference on Software Engineering Research Managementand Applications pp 261ndash266 Haikou China 2009

Mobile Information Systems 13

[17] C J Gao K Manjula P Roopa et al ldquoA cloud-based TaaSinfrastructure with tools for SaaS validation performance andscalability evaluationrdquo in Proceedings of the CloudCom 2012-Proceedings 2012 4th IEEE International Conference on CloudComputing Technology and Science pp 464ndash471 TaipeiTaiwan December 2012

[18] P X Wen and L Dong ldquoQuality model for evaluating SaaSservicerdquo in Proceedings of the 4th International Conference onEmerging Intelligent Data and Web Technologies EIDWT2013 pp 83ndash87 Xirsquoan China September 2013

[19] G Cicotti S DrsquoAntonio R Cristaldi and A Sergio ldquoHow tomonitor QoS in cloud infrastructures the QoSMONaaS ap-proachrdquo Studies in Computational Intelligence vol 446pp 253ndash262 2013

[20] G Cicotti L Coppolino S DrsquoAntonio and L Romano ldquoHowto monitor QoS in cloud infrastructures the QoSMONaaSapproachrdquo International Journal of Computational Scienceand Engineering vol 11 no 1 pp 29ndash45 2015

[21] A A Z A Ibrahim D Kliazovich and P Bouvry ldquoServicelevel agreement assurance between cloud services providersand cloud customersrdquo in Proceedingsndash2016 16th IEEEACMInternational Symposium on Cluster Cloud and Grid Com-puting CCGrid 2016 pp 588ndash591 Cartagena Colombia May2016

[22] A M Hammadi and O Hussain ldquoA framework for SLAassurance in cloud computingrdquo in Proceedingsndash26th IEEEInternational Conference on Advanced Information Net-working and Applications Workshops WAINA 2012pp 393ndash398 Fukuoka Japan March 2012

[23] S S Wagle M Guzek and P Bouvry ldquoCloud service pro-viders ranking based on service delivery and consumer ex-periencerdquo in Proceedings of the 2015 IEEE 4th InternationalConference on Cloud Networking (CloudNet) pp 209ndash212Niagara Falls ON Canada October 2015

[24] S S Wagle M Guzek and P Bouvry ldquoService performancepattern analysis and prediction of commercially availablecloud providersrdquo in Proceedings of the International Con-ference on Cloud Computing Technology and Science Cloud-Com pp 26ndash34 Hong Kong China December 2017

[25] M Guzek S Varrette V Plugaru J E Pecero and P BouvryldquoA holistic model of the performance and the energy-efficiency of hypervisors in an HPC environmentrdquo Concur-rency and Computation Practice and Experience vol 26no 15 pp 2569ndash2590 2014

[26] M Bader-El-Den and R Poli ldquoGenerating sat local-searchheuristics using a gp hyper-heuristic frameworkrdquo in ArtificialEvolution NMonmarche E-G Talbi P Collet M Schoenauerand E Lutton Eds pp 37ndash49 Springer Berlin HeidelbergGermany 2008

[27] J H Drake E Ozcan and E K Burke ldquoA case study ofcontrolling crossover in a selection hyper-heuristic frame-work using the multidimensional knapsack problemrdquo Evo-lutionary Computation vol 24 no 1 pp 113ndash141 2016

[28] A Shrestha and A Mahmood ldquoImproving genetic algorithmwith fine-tuned crossover and scaled architecturerdquo Journal ofMathematics vol 2016 Article ID 4015845 10 pages 2016

[29] T T Allen Introduction to ARENA Software SpringerLondon UK 2011

[30] R C Blair and J J Higgins ldquoA comparison of the power ofWilcoxonrsquos rank-sum statistic to that of Studentrsquos t statisticunder various nonnormal distributionsrdquo Journal of Educa-tional Statistics vol 5 no 4 pp 309ndash335 1980

[31] B F Cooper A Silberstein E Tam R Ramakrishnan andR Sears ldquoBenchmarking cloud serving systems with YCSBrdquo

in Proceedings of the 1st ACM symposium on Cloud computingSoCCrsquo 10 Indianapolis IN USA June 2010

[32] Redis Labs memtier_benchmark A High-9roughputBenchmarking Tool for Redis amp Memcached Redis LabsMountain View CA USA httpsredislabscomblogmemtier_benchmark-a-high-throughput-benchmarking-tool-for-redis- memcached 2013

[33] Redis Labs How Fast is Redis Redis Labs Mountain ViewCA USA httpsredisiotopicsbenchmarks 2018

[34] Twitter rpc-perfndashRPC Performance Testing Twitter San FranciscoCA USA httpsgithubcomAbdallahCoptanrpc-perf 2018

[35] Postgresql ldquopgbenchmdashrun a benchmark test on PostgreSQLrdquohttpswwwpostgresqlorgdocs94staticpgbenchhtml 2018

[36] A Labs ldquoHTTP_LOAD multiprocessing http test clientrdquohttpsgithubcomAbdallahCoptanHTTP_LOAD 2018

[37] Apache ldquoabmdashApache HTTP server benchmarking toolrdquohttpshttpdapacheorgdocs24programsabhtml 2018

[38] e Regents of the University of California ldquoiPerfmdashthe ulti-mate speed test tool for TCP UDP and SCTPrdquo httpsiperffr2018

[39] S Varrette P Bouvry H Cartiaux and F GeorgatosldquoManagement of an academic HPC cluster the UL experi-encerdquo in Proceedings of the 2014 International Conference onHigh Performance Computing amp Simulation (HPCS 2014)pp 959ndash967 Bologna Italy July 2014
