

Drawing the Cloud Map: Virtual Network Provisioning in Distributed Cloud Computing Data Centers

Khaled Alhazmi, Student Member, IEEE, Mohamed Abu Sharkh, Student Member, IEEE, and Abdallah Shami, Senior Member, IEEE

Abstract—Efficient virtualization methodologies constitute the core of cloud computing data center implementation. Clients are attracted to the cloud model by its ability to scale the resources dynamically and the flexibility in payment options that it offers. However, performance hiccups may push them to go back to the buy-and-maintain model. Virtualization plays a key role in the synchronous management of the thousands of servers along with clients' data living on them. To achieve seamless virtualization, cloud providers require a system that performs the function of virtual network provisioning. This includes receiving the cloud client requests and allocating their computational and network resources in a way that guarantees the quality-of-service conditions for clients while maximizing the data center resource utilization and providers' revenue. We introduce a comprehensive system to solve the problem of virtual network mapping for a set of connection requests sent by cloud clients. Connections are collected in time intervals called windows. Consequently, node and link provisioning is performed. Different window size selection schemes are introduced and evaluated. Three schemes to prioritize connections are used, and their effect is assessed. Moreover, a technique dealing with connections spanning over more than a window is introduced. The proposed algorithm is compared with previous work well known in the literature. Simulation results show that the dynamic window size algorithm achieves cloud service providers' objectives in terms of generated revenue, served-connection ratio, resource utilization, and computational overhead. In addition, experimental results show that handling spanning connections independently improves the performance of the system.

Index Terms—Cloud computing, cloud data centers, node and link mapping, resource allocation, resource provisioning, virtual network embedding, virtualization.

Manuscript received April 1, 2015; revised August 13, 2015 and September 15, 2015; accepted September 19, 2015. This work was funded in part by King Abdulaziz City for Science and Technology (KACST) through the Cultural Bureau of Saudi Arabia in Canada.

K. Alhazmi is with the Department of Electrical and Computer Engineering, Western University, London, ON N6A 5B9, Canada, and also with the Computer Research Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh, Saudi Arabia (e-mail: [email protected]; [email protected]).

M. Abu Sharkh and A. Shami are with the Department of Electrical and Computer Engineering, Western University, London, ON N6A 5B9, Canada (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/JSYST.2015.2484298

I. INTRODUCTION

CLOUD clients are attracted to the cloud model by the ability to scale the number and capabilities of their rented machines dynamically as their business demands require. The flexibility in payment options offered by providers is another attracting point. However, performance hiccups may push clients to either switch to other cloud providers or even go back to the buy-and-maintain model [1].

To maintain the performance in the data center at the level required by clients, cloud providers need a thorough data center management process. This management process includes synchronously handling the diverse resources of thousands of servers located in the data center along with clients' data living on them [2]. Virtualization's role is key in this process. Through virtualization, cloud service providers can offer computing power, storage, platforms, and services in a commodity-based design without the clients needing to worry about low-level implementation details. Clients can compute and connect without the overhead of resource management or network routing and control [3]. Both interdata center and intradata center networks are wholly managed by the cloud provider.

To achieve seamless virtualization, cloud providers require an efficient system to comprehensively perform the functions of resource virtualization, allocation, and scheduling in their geographically distributed network of data centers (public clouds). Cloud data centers contain thousands of servers that store, process, and exchange clients' data. Such systems will receive clients' network and computational resource reservation requests and perform the mapping and scheduling of these requests. In the virtual network model adopted by many providers, clients are able to reserve virtual machines (VMs) of multiple types that have different resource configurations [4]–[6]. Clients can also make connection requests. These connections will facilitate data exchange between the client VMs or between a client VM and that client's private cloud. The VMs can represent the vertices of a virtual network where each client expects to maintain the agreed-upon quality-of-service (QoS) conditions regardless of how many other clients are sharing the data center resources at the same time. A key condition here is the system's ability to allocate network resources to VMs dynamically at any moment. This virtual network mapping (VNM) scenario raises questions such as the following: What is the optimal VM placement policy/method to serve client requests? Which connection assignment/mapping and scheduling policy should be used? How often are arriving requests processed and mapped? How are these requests prioritized for service?

In this paper, we tackle the problem of VNM in a cloud computing data center environment. VNM in this context means finding the optimal technique/policy to serve/handle requests received continuously from clients. This is done by constructing virtual networks that contain multiple VM instances running on servers in multiple geographically distributed data centers. VM instances can be connected through virtual network links or edges that are mapped onto physical (substrate) network paths.



TABLE I: COMPARISON OF THE EXISTING VNM APPROACHES

In cloud computing environments, central network controllers have to deal with the task of mapping numerous VNM requests within short periods [7], [8]. An example of this scenario can be seen with the increasingly popular software-defined networking (SDN) technology [9]. SDN controllers are responsible for centrally mapping clients' connection requests/flows. To put this in perspective, a typical SDN controller can support up to 10^5 flows/s in the optimal case [10]. This sheer volume of requests arriving at the SDN controller can cause a substantial performance handicap. The computational overhead will increase significantly if these requests are processed individually as they come. Therefore, the need arises for connection request aggregation. Considering aggregation as a solution to computational issues brings to the forefront many design questions. A complete methodology needs to be constructed. The most important decisions include the aggregation technique, aggregation factor, window size, and request prioritizing. We endeavor to answer these questions and others in this paper.

In this paper, we introduce a new VNM methodology for cloud computing data centers. Our contribution can be summarized as follows:

1) Introduce a comprehensive model that covers computational and network resource requests and supports performing node mapping and link provisioning.

2) Aggregate the connection requests into virtual network requests (VNRs), and process these requests in a time window-based manner. This decreases the computational load on the central controller.

3) Investigate the effect of fixed and dynamic window sizes and the aggregation factor combined with VNM in a networked cloud environment. The objective is to determine the optimal window size for a specific VNM problem.

4) Investigate the effect of connections' order on the performance of the system by testing multiple methods of prioritizing connections before processing.

5) Investigate the effect of adding the spanning connection technique, and show its effect on revenue and performance.

6) Study the effect of permissible waiting time of cloud service requests on the performance metrics.

7) Compare the proposed method to the prominent methods in the literature, and analyze the results for multiple metrics.

The remaining sections are organized as follows. In Section II, a brief review of the related work is provided. A detailed problem description is given in Section III. Section IV presents the VNM, the different time window selection techniques, the spanning connection technique, and the revenue calculation method. The simulation environment and results are discussed in Section V. Finally, Section VI concludes this paper.

II. RELATED WORK

The increasing attention that cloud computing has been attracting recently has resulted in a greater research focus on network virtualization techniques for VNM. The problem of efficient VNM is currently modeled as a question of mathematical optimization.

The problem of VNM appears in the literature in different forms, as in [11]–[17], each with the goal of developing the most efficient mapping (embedding or assignment in some sources) technique. Table I shows a comprehensive comparison of the most recent and commonly used virtual network provisioning approaches. Inherent problem constraints, including node and link resource constraints, limited substrate resources, the dynamic nature of the VNRs' arrival, and the VNRs' diverse topologies, impose challenges on the process of VNM and make an optimal mapping difficult to reach. Each of the previous research efforts chose to relax one or more of these factors to reduce the complexity/search space of the problem.

The VNM process is divided into node mapping and link mapping. Some of the proposed approaches tend to separate the node and link mapping into two separate stages, as in [13]–[16], to reduce the complexity, while others, including this work, use coordination between the two stages (e.g., [11] and [17]). In [11], the proposed virtual network embedding algorithm called ViNEYard coordinates the node and link mapping stages so as to enhance the embedding efficiency. Node and link separation during the mapping process means that the node mapping stage is performed independently of the link mapping stage. The lack of coordination between the two stages can lead to a high link mapping cost, which results in decreasing the number of accepted virtual networks and the generated revenue. With regard to the second challenge, the authors of [13] and [16] assume infinite capacity of the physical nodes; however, recent works (e.g., [11] and [17]), including the proposed work, assume finite node and link resources as constraints on the problem. Moreover, contrary to [11], [14], and [15], where only computing power in terms of CPU capacity and network resources in terms of bandwidth are considered, the proposed approach, as in [12], studies other physical resources such as memory, storage, and bandwidth. The limitation of the infinite capacity assumption is that the applicability of the relaxed technique is bounded; thus, the acceptance rate is not evaluated precisely.

An efficient and on-demand virtual network embedding methodology for VNRs is needed without sacrificing any of the aforementioned factors. Recent works, including our work, tend to use the online approach, which is the real-world practical approach, of mapping the incoming VNRs upon their arrival (e.g., [14], [15], and [17]). On the other hand, the authors of [13] and [16] follow the offline variant, where the mapping of the requests begins after the complete set of requests arrives. Although the former approach is more challenging, since the mapping algorithm has no observation of future mapping requests, it is a better reflection of the practical scenario, and hence, it is used in this paper.

All the previously mentioned efforts considered only traditional VNM with deterministic computational and network resources. Virtual network provisioning is the main resource allocation problem in network virtualization. In cloud computing environments, applications with elastic resources for different clients may be hosted and run on VMs in geodistributed data centers. Recently, VNM and resource allocation in the context of cloud computing have been considered in [12], [15], [17], and [18]; however, they have not been widely explored. The clients and their VMs can be abstracted as VNRs. To the best of our knowledge, we are the first to consider online VNR provisioning in a cloud environment such that the virtual network topologies are diverse and tailored based on the cloud connection requests.

In [17], optimal networked cloud mapping is formulated as a mixed-integer-programming problem, with the objective focusing on cost efficiency. A method is subsequently proposed for the efficient mapping of resource requests onto a shared substrate network connecting various islands of computing resources. A heuristic algorithm is adopted to address the problem. The authors of [17] proposed an augmented graph as in [11]. The augmented graph extends the substrate graph by connecting the virtual node to each physical node of the same type (server and router), where the end user determines the location of each virtual router and server. A challenge here is that, while the authors of [11] and [17] try to coordinate node and link mapping using the augmented graph, determining the location of each virtual router by the end user is not practically applicable, and if the location were ignored, that would increase the size of the network, which increases the run time significantly. Moreover, the real-world cloud infrastructure is not considered in [17].

In [12], a resource scheduling model for cloud computing data centers is presented. In this model, requests arrive from clients either to reserve a VM, connect two VMs together, or connect a VM to a private cloud. VM placement techniques and connection request scheduling techniques are evaluated. Both computational and network resources are independently considered. Connection requests are processed one by one without employing any aggregation policy. Moreover, VM placement and connection request scheduling are performed separately, which affects the efficiency negatively. Unlike in [15] and [17], the real-world cloud-based infrastructure with distributed virtualized data centers is considered in [12]; however, the virtualized network resources are not considered. Both data center and network virtualization are considered in this paper.

Unlike in [12], [13], [15]–[17], the authors of [11] and [14] propose an embedding algorithm that accumulates multiple VNRs during a fixed active window and then processes them according to their specific requirements. A VNR that cannot be addressed in a particular time window is inserted in a queue and then assigned accordingly in the subsequent windows. The request is dropped only when its maximum waiting time in the queue has passed without being processed. Path splitting and migration features are also considered in [14]. The authors in [11] propose a window-based virtual network algorithm called WiNE. The simulation results show that combining the VNRs and processing them in groups at the end of a time interval called a window is effective in terms of resource cost. As VNRs come in, the WiNE algorithm collects them in batches for a given time period (window), calculates the potential revenue of each request, and then assigns higher priority to requests with higher potential revenue. Every VNR is active during a limited time frame. If this time frame (request lifetime) is over before the end of the window, the request is dropped or ignored. The optimal window size analysis for a set of requests was not addressed in that work.

In [19], the authors propose traffic grooming, a multiplexing mechanism for wavelength-division-multiplexing networks. This mechanism aggregates applications that have lower bandwidth requirements onto shared wavelength channels in order to maximize network resource utilization. A sliding traffic scheduling model is also proposed. Scheduling does not depend on the connection lifetime in this model. Moreover, the authors present a time window-based technique in which the network bandwidth requests are divided into multiple time windows. Spanning requests (requests that are long enough to span over two time windows or more) can be scheduled in an alternative window if no network resources are available to serve them in the current window. In this paper, we consider fixed window sizes as in [11] and [14] and propose a dynamic technique to find the most fitting window size while maintaining constraints stemming from admission control, online VNR arrival, node and link provisioning, and the VNRs' diverse topologies.

III. MODEL DESCRIPTION

When creating an efficient resource allocation methodology, it is critical that the resources comprising the cloud infrastructure are accurately modeled. Another key factor is that any cloud management system should be continuously aware of the infrastructure's operational status. In the scenario that we are investigating, clients reserve VMs for a fixed or open amount of time. The specifications of connection requests such as the source, destination, and lifetime are not known in advance. The substrate network consists of data centers and client nodes. Each data center has a number of servers that can host multiple VMs without exceeding the server capacity. Multiple types of VMs with different resource configurations are available. After reserving the VM, a client may request connections between the VM and a client node. In a connection request, the client typically defines the source, destination, requested (preferred) start time, connection lifetime (duration), requested capacity units (VM specifications), and required bandwidth. Table II shows a sample of the input data for the problem in the form of connection requests. To start solving this problem, client requests are aggregated into VNRs based on a configurable aggregation factor. Next, serving connection requests is abstracted as a VNM problem where nodes represent sources and destinations and edges are virtual links between these nodes. Each virtual link represents a physical network path from the source to the destination. Moreover, each VNR is assigned to a time window based on the requested start time. Therefore, we can abstract a single window as a set of VNM requests during the time period that this window represents. In case a VNR lifetime is long enough to span over more than one window, the VNR is assigned to all of these windows. We call these requests spanning requests. After that, the system performs node and link mapping for the VNRs in a specific window on the substrate network.

TABLE II: EXAMPLE OF A SET OF CONNECTION REQUESTS
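To make the request format and the aggregation step concrete, the sketch below shows one way the connection requests of Table II and their grouping into VNRs could be represented in C++. The type and field names are illustrative assumptions, not the data structures of the authors' simulator.

#include <cstddef>
#include <vector>

// Illustrative request record following the fields described above
// (source, destination, preferred start time, duration, VM specs, bandwidth).
struct ConnectionRequest {
    int source;                   // client node ID
    int destination;              // VM number
    double startTime;             // requested (preferred) start time
    double duration;              // connection lifetime
    double cpu, memory, storage;  // requested VM capacity units
    double bandwidth;             // requested link bandwidth
    double allowedTardiness;      // maximum waiting time before blocking
};

// A virtual network request (VNR) groups several connections.
struct VirtualNetworkRequest {
    std::vector<ConnectionRequest> connections;
};

// Group consecutive connections into VNRs of size `aggregationFactor`
// (the paper uses a configurable factor; three in the experiments).
std::vector<VirtualNetworkRequest>
aggregate(const std::vector<ConnectionRequest>& requests, std::size_t aggregationFactor) {
    std::vector<VirtualNetworkRequest> vnrs;
    for (std::size_t i = 0; i < requests.size(); i += aggregationFactor) {
        VirtualNetworkRequest vnr;
        for (std::size_t j = i; j < i + aggregationFactor && j < requests.size(); ++j)
            vnr.connections.push_back(requests[j]);
        vnrs.push_back(vnr);
    }
    return vnrs;
}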

A key decision here is choosing the window size that yields the best performance and revenue values. In the following section, we discuss the techniques that we use to decide the window size. The window size selection affects performance measures such as the request acceptance ratio, allocation computational overhead, resource utilization, and cloud provider revenue.

IV. NETWORKED CLOUD PROVISIONING SOLUTION

A. VNM

1) Node Mapping: Before mapping the VNR on the substrate network, the VM needs to be allocated the required computational resources. The VM will be placed on a server with sufficient resources. The node mapping algorithm used is a variation of the node distance algorithm used in [12]. The algorithm adds the advantage of ensuring that the VMs are distributed widely, and this leads to fewer connection request collisions.
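A minimal sketch of the placement step is given below. It captures only the spreading idea described above (place the VM on a feasible server that currently hosts the fewest VMs); it is not the exact node distance algorithm of [12], and all names are illustrative.

#include <cstddef>
#include <vector>

struct Server {
    double cpuFree, memFree, storFree;  // remaining capacity
    int vmCount = 0;                    // VMs already hosted
};

struct VmDemand { double cpu, mem, stor; };

// Pick a server that can fit the VM, preferring lightly loaded servers so
// that VMs are spread widely across the data centers; returns -1 when no
// server has sufficient resources.
int placeVm(std::vector<Server>& servers, const VmDemand& d) {
    int best = -1;
    for (std::size_t i = 0; i < servers.size(); ++i) {
        const Server& s = servers[i];
        if (s.cpuFree >= d.cpu && s.memFree >= d.mem && s.storFree >= d.stor)
            if (best < 0 || s.vmCount < servers[best].vmCount)
                best = static_cast<int>(i);
    }
    if (best >= 0) {
        servers[best].cpuFree  -= d.cpu;    // reserve the computational resources
        servers[best].memFree  -= d.mem;
        servers[best].storFree -= d.stor;
        ++servers[best].vmCount;
    }
    return best;
}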

2) Link Mapping: The next step is to map the virtual link onto a physical network path. This path has to be a valid path between the source VM and the client node or private cloud. In addition, the mapping has to satisfy the bandwidth requirements of the virtual link on all the physical links that construct the path. The algorithm used is a greedy algorithm that maps the virtual link onto the shortest path, provided that the path has the requested bandwidth available. The advantage of using this algorithm is that it takes less time to calculate; the computational overhead is decreased.
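The greedy link mapping step can be sketched as follows, assuming the candidate substrate paths for each source/destination pair are precomputed and ordered by length (as in the simulation setup, which defines three alternate paths per node pair). The function and type names are illustrative.

#include <cstddef>
#include <vector>

// Residual bandwidth per physical link, indexed by link ID.
using LinkBandwidth = std::vector<double>;

// A candidate substrate path is a list of physical link IDs; candidates are
// assumed to be precomputed per source/destination pair and ordered by length.
using Path = std::vector<int>;

// Greedy link mapping: take the first (shortest) candidate path on which every
// link still has the requested bandwidth, reserve it, and return its index;
// return -1 if no candidate fits.
int mapVirtualLink(const std::vector<Path>& candidates, LinkBandwidth& bw, double demand) {
    for (std::size_t p = 0; p < candidates.size(); ++p) {
        bool fits = true;
        for (int link : candidates[p])
            if (bw[link] < demand) { fits = false; break; }
        if (fits) {
            for (int link : candidates[p]) bw[link] -= demand;  // reserve bandwidth
            return static_cast<int>(p);
        }
    }
    return -1;
}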

B. Time Window Selection Techniques

1) Fixed Window Technique: In this technique, fixed time periods are chosen, and windows are defined based on them. The connection requests are aggregated based on a predefined aggregation factor. This factor basically specifies how many connection requests are in one VNR. For any given fixed window, the connection requests are analyzed, and those with the highest revenue potential are prioritized. The processed requests are then aggregated and mapped to the substrate network. Requests that cannot be mapped are rejected. A maximum waiting time for every request is also considered. Along with the connection request details, the client defines the maximum tardiness allowed per connection. If the connection is not served within this period, it is considered blocked [18].
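A minimal sketch of the fixed window assignment follows, assuming windows are aligned at multiples of the chosen window size; this bucketing rule is an assumption consistent with, but not stated in, the text.

#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

// Bucket requested start times into fixed windows of `windowSize` time units.
// The key is the window index; requests in the same bucket are processed
// together at the end of that window.
std::map<int, std::vector<int>> assignToFixedWindows(
        const std::vector<double>& startTimes, double windowSize) {
    std::map<int, std::vector<int>> windows;
    for (std::size_t i = 0; i < startTimes.size(); ++i) {
        int w = static_cast<int>(std::floor(startTimes[i] / windowSize));
        windows[w].push_back(static_cast<int>(i));  // store the request index
    }
    return windows;
}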

2) Dynamic Window Technique: In this technique, client requests are still distributed over multiple time windows. However, the sizes of these windows might differ. The connections are divided into sets, and the time window size is specified based on the number and lifetimes of the requests in this set. We implement this step using the maximum independent set algorithm [19], [20]. A variation of this algorithm is used in the context of optical networks in [19]. The input of this algorithm is the set of connections as in Table II. Afterward, an intersection table, as in Table III, is constructed containing binary fields indicating whether two nodes intersect (conflict) in time or not. Next, the interval graph shown in Fig. 1 is constructed. Each connection request is represented by a node, and if two connections conflict in time, a link is drawn to connect them. Fig. 1 represents an interval graph of four connection requests (C1 to C4). The algorithm then divides these connections into independent sets so that, if two connections are in the same set, then they are in the same dynamic time window. Given the interval graph, the algorithm finds the largest set of connection requests such that no two nodes in the set are connected by a link. This set of nodes is called a maximum independent set of the interval graph. The requested start times for connections in each set form the boundaries of the dynamic time windows. In Fig. 2, the client connection requests are shown divided into different time windows in which they all overlap in time. Within a single dynamic window, the time requirements for different client requests could also overlap. In other words, two different requests accommodated within a single time window might request resources in the same period of time.

TABLE III: INTERSECTION TABLE FOR CONNECTIONS IN TABLE II

Fig. 1. Interval graph for four connections.

Fig. 2. Dynamic windows.
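For interval graphs, a maximum independent set can be computed greedily by scanning connections in order of earliest finishing time, which is the basis of the sketch below; the start times of the kept connections are then used as dynamic window boundaries, as described above. This is an illustrative reading of the technique, not the exact procedure of [19], [20].

#include <algorithm>
#include <vector>

struct Interval { double start, end; };  // end = start + duration

// Two connections conflict (intersect in time) when their intervals overlap;
// this is the binary relation captured by the intersection table (Table III).
bool conflict(const Interval& a, const Interval& b) {
    return a.start < b.end && b.start < a.end;
}

// Greedy maximum independent set for interval graphs: keep every interval,
// in order of earliest end time, that does not overlap the last kept one.
// The requested start times of the kept intervals become window boundaries.
std::vector<double> dynamicWindowBoundaries(std::vector<Interval> items) {
    std::sort(items.begin(), items.end(),
              [](const Interval& a, const Interval& b) { return a.end < b.end; });
    std::vector<double> boundaries;
    double lastEnd = -1e300;
    for (const Interval& it : items) {
        if (it.start >= lastEnd) {           // independent of everything kept so far
            boundaries.push_back(it.start);  // window boundary at its start time
            lastEnd = it.end;
        }
    }
    std::sort(boundaries.begin(), boundaries.end());
    return boundaries;
}

// Assign a connection to the window whose boundaries bracket its start time.
int windowIndex(const std::vector<double>& boundaries, double startTime) {
    int w = 0;
    while (w + 1 < static_cast<int>(boundaries.size()) && startTime >= boundaries[w + 1]) ++w;
    return w;
}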

As mentioned earlier, VNM involves node mapping and link mapping, where connection requests are mapped to the substrate layer. This process is shown at a high level in Fig. 3. Once the number of windows and the window sizes have been decided, the requests are allocated to their respective assigned windows, and mapping is performed. The steps taken are detailed in Algorithm 1. After deciding the number of windows (NW) and each window size (WS), these values are used to assign connections to windows.

Algorithm 1 VNM using dynamic window technique

1: INPUTS: Connection request set CR; server set S; virtual machines VMs; substrate network G(N, L, P) with node set N, link set L, and path set P, where Pijk is path number k between nodes i and j and NP is the number of paths between i and j
2: OUTPUT: Mapping of the virtual network requests from all the dynamic windows onto the substrate network
3: Interval graph for CR is generated
4: ConnectionsInWindow[w]: set of connections assigned to window w
5: VN_request[w]: set of virtual network requests in window w
6: NW = number of windows calculated by running the maximum independent set algorithm
7: WS = array of window sizes calculated by running the maximum independent set algorithm
8: w = 0
9: for w < NW do
10:   ConnectionsInWindow[w] = AssignConToWindow(WS[w])
11:   w++
12: end for
13: w = 0
14: for w < NW do
15:   for all Con ∈ ConnectionsInWindow[w] do
16:     if Con.StartTime + Con.Duration + Con.AllowedTardiness >= window[w].size then
17:       Con.StartTime = windowSizeSet[w]
18:     else NumBlockedConnections = NumBlockedConnections + 1
19:     end if
20:   end for
21:   w++
22: end for
23: w = 0
24: for w < NW do
25:   sort ConnectionsInWindow[w] in descending order based on the revenue
26:   w++
27: end for
28: w = 0
29: for w < NW do
30:   k = 0
31:   while k < ConnectionsInWindow[w].size do
32:     addNewVNR(k, aggregationFactor, VN_request[w])
33:     k++
34:   end while
35:   w++
36: end for
37: w = 0
38: for w < NW do
39:   y = 0
40:   for y < VN_request[w].size do    // number of VNRs in the window
41:     if CheckNodeMappingGreedy(VN_request[w][y]) then
42:       if CheckPathBW(VN_request[w][y]) then
43:         NodeMappingGreedy(VN_request[w][y])
44:         LinkMapping(VN_request[w][y])
45:         Accepted_VN_requests++
46:         NumServedConnections = NumServedConnections + VN_request[w][y].size
47:       end if
48:     end if
49:     y++
50:   end for
51:   w++
52: end for


Fig. 3. Node and link mapping.

Fig. 4. VNM process.

Next, connections that expire before the end of their assigned window are filtered out and removed. Then, the connections of each window are aggregated into VNRs and sorted based on the potential generated revenue. The system checks whether there are enough computational and network resources to map these requests. Finally, node mapping and link mapping are performed. A summary of the whole process is provided in Fig. 4.

3) Spanning Connection Technique: After the time window division is done, some connection requests might be long enough to span over two or more windows. These connections are termed spanning connections. Handling spanning connections can go down one of two roads. The spanning connection request can either be assigned to one of the windows that it spans over or to more than one (or even all) of the covered windows. We start our investigation by choosing the first option; then, we investigate the second option and show its effect on performance.

The analysis and aggregation of spanning connections will be performed next, and this will mainly depend on the start window and the duration of the spanning connections. If the spanning connections have the same start window, then they will be prioritized based on the generated revenue. This aggregation step will be performed only for connections that span over the same number of windows. The process will be repeated until all the connections with the same start window are served.
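Building on the window-boundary representation sketched earlier, the helper below lists the windows a connection covers; the spanning connection technique then inserts the request either into the first covered window only or into all of them. The function is an illustrative assumption, not code from the authors' system.

#include <vector>

// Given sorted window start boundaries (at least one), return the indices of
// every window a connection covers, from the window containing its start time
// to the window containing its end time.
std::vector<int> coveredWindows(const std::vector<double>& boundaries,
                                double startTime, double duration) {
    auto indexOf = [&](double t) {
        int w = 0;
        while (w + 1 < static_cast<int>(boundaries.size()) && t >= boundaries[w + 1]) ++w;
        return w;
    };
    std::vector<int> windows;
    for (int w = indexOf(startTime); w <= indexOf(startTime + duration); ++w)
        windows.push_back(w);  // a result longer than one entry marks a spanning connection
    return windows;
}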

4) Revenue Objective Function: One of the objectives that cloud providers focus on when designing their VNM systems is revenue. Revenue mainly depends on the number of accepted VNRs and the amount of resources requested by each. The revenue of a single accepted VNR is defined as follows [11], [13], [14], [21]:

R(\mathrm{VNR}_i) = \sum_{c \in \mathrm{VNR}_i} \left( \alpha \sum_{t=0}^{T_c} \mathrm{CPU}_c + \beta \sum_{t=0}^{T_c} \mathrm{Memory}_c + \gamma \sum_{t=0}^{T_c} \mathrm{Storage}_c + \delta \sum_{t=0}^{T_c} \mathrm{BW}_c \right)    (1)

As shown in the equation, the revenue of VNR_i is the total of the revenue amounts coming from the connections aggregated to form this VNR. Consequently, the revenue from a single request is calculated linearly based on the amount of resources that it requests. We chose to design the revenue calculation in a generic way that adapts to any cloud provider with any set of resource offerings. The variables α, β, and γ refer to the prices per unit for computational resources, while the variable δ refers to the unit price for bandwidth (BW). CPU, memory, and storage are the main computational resources. Bandwidth is the only network resource used in this paper.
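A direct reading of (1) for constant per-time-unit demands is sketched below; the field names are illustrative and the unit prices are placeholders, since the paper does not report the price values used.

#include <vector>

struct ConnectionDemand {
    double lifetime;                  // T_c, in time units
    double cpu, memory, storage, bw;  // per-time-unit demands
};

// Revenue of one VNR following (1): each connection contributes its demands
// summed over its lifetime, weighted by the unit prices alpha, beta, gamma
// (computational resources) and delta (bandwidth).
double vnrRevenue(const std::vector<ConnectionDemand>& vnr,
                  double alpha, double beta, double gamma, double delta) {
    double revenue = 0.0;
    for (const ConnectionDemand& c : vnr) {
        revenue += alpha * c.lifetime * c.cpu
                 + beta  * c.lifetime * c.memory
                 + gamma * c.lifetime * c.storage
                 + delta * c.lifetime * c.bw;
    }
    return revenue;
}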

V. PERFORMANCE EVALUATION

In this section, we explain the simulation environment in detail and then present the evaluation results. Our evaluation is intended to illustrate the effectiveness of the proposed algorithm in virtual network provisioning in a distributed cloud environment. This is performed by comparing the proposed algorithm with well-known approaches in the literature such as the greedy-multicommodity flow problem (G-MCF) [14] and ViNEYard [11]. Several performance metrics were used to evaluate efficiency.

A. Simulation Environment

To evaluate the proposed techniques, a discrete event simulator was developed using C++. With regard to the substrate network, the National Science Foundation Network (NSFNET), as in Fig. 5, is used in the simulation. Following similar setups to the ones in [12], the network is composed of 14 nodes of which three are data center nodes and the rest are client nodes. One hundred thirty-two servers were used in the simulation. Each of the three data centers contains 44 servers and is seen as a single node. Twenty-one links were set up to connect the nodes with 546 paths defined. Three alternate routing paths were defined for each pair of nodes. The input contained data corresponding to 200 VM instances.

Fig. 5. NSFNET network of 14 nodes.

For the connection requests coming from the clients, as in [11], [13]–[15], and [17], their arrival rates were set according to a Poisson process varying from one to five in steps of 0.5 per 100 time units. The connection request lifetime follows an exponential distribution with an average of 1000 time units. We run each experiment using 3000 connection requests. The maximum waiting time (maximum allowed tardiness) for each request was set to half of its lifetime. A connection request input line includes the source, destination, start time, duration, VM specifications, and requested bandwidth information. The source nodes are uniformly distributed with a client ID ranging from 0 to 10, and the destination nodes, which represent a VM number, follow a uniform distribution over 1–200, given the fact that 200 VM instances were used in the simulation.

As for the resource configuration, the CPU resources are uniformly distributed in the range of 50–100, and the memory and storage resources are uniformly distributed in a range of 50–100 of their respective units. The available BW is set at 200 for all the links. When looking at the requested VM capacities, CPU resources are uniformly distributed for every VM instance request in the range of 0–20. For memory and storage, a uniform distribution is defined with a range of 0–200. Similarly, with regard to BW, a uniform distribution within a range of 0–50 is defined.
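The workload model can be sketched as follows, assuming exponential inter-arrival times for the Poisson process (with the rate expressed per 100 time units) and the uniform ranges listed above; the generator is illustrative, not the authors' input generator.

#include <cstddef>
#include <random>
#include <vector>

struct Request {
    double arrival, lifetime, bandwidth;
    int source, destination;
};

// Generate synthetic connection requests with the distributions described
// above: Poisson arrivals, exponentially distributed lifetimes with mean 1000,
// uniform client sources 0-10, uniform destination VMs 1-200, and uniform
// bandwidth 0-50.
std::vector<Request> generateWorkload(std::size_t count, double ratePer100, unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::exponential_distribution<double> interArrival(ratePer100 / 100.0);
    std::exponential_distribution<double> lifetime(1.0 / 1000.0);
    std::uniform_int_distribution<int> source(0, 10), destination(1, 200);
    std::uniform_real_distribution<double> bandwidth(0.0, 50.0);

    std::vector<Request> requests;
    double clock = 0.0;
    for (std::size_t i = 0; i < count; ++i) {
        clock += interArrival(rng);  // cumulative arrival time
        requests.push_back({clock, lifetime(rng), bandwidth(rng), source(rng), destination(rng)});
    }
    return requests;
}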

B. Results and Analysis

1) Comparison With Previous Work: To evaluate the proposed algorithm, we compare it with well-known approaches applied in the literature. These approaches are greedy node mapping followed by solving a multicommodity flow problem for link mapping (G-MCF) [14] and ViNEYard [11]. Fig. 6 shows the behavior of our algorithm compared with the performance of ViNEYard and G-MCF. At low arrival rates, where fewer requests are embedded into the system, our algorithm tends to have a better ratio by around 10% compared with ViNEYard and 25% compared with the other algorithms. Also, at high arrival rates, where the substrate network is loaded with more VNRs, the proposed networked cloud provisioning (NCP) with dynamic windowing still outperforms the other algorithms with regard to the ratio of served connections as a function of increasing VNR arrival rate. It succeeds in increasing the ratio of served connections by around 30%. The proposed NCP with the dynamic window demonstrates a better ratio of served connections among G-MCF, NCP with fixed window, and ViNEYard at low and high arrival rates.

Fig. 6. Results showing the ratio of served connections for different mapping algorithms.

Fig. 7. Results showing the average number of VNRs per window for different window decision techniques.

2) Effect of Window Size Decision: Multiple metrics were measured during the experiments. Our main objective was to evaluate the effect of the window size decision on the different performance metrics. We have evaluated a no-window scheme along with the dynamic window scheme based on the maximum independent set algorithm. In addition, we have evaluated a fixed window scheme with multiple choices for the window size ranging from a small size (50 time units) up to a very large window size (2000 time units).

Fig. 7 shows the average number of VNRs per window. This number becomes very large as the chosen window size grows larger. This has a direct effect on the ratio of served connections.


Fig. 8. Results showing the ratio of served connections for different window decision techniques.

Fig. 9. Results showing the ratio of blocked connections for different window decision techniques.

As the number of VNRs in a specific window grows, it becomes harder to find enough resources to establish a virtual network by mapping the requested nodes and links. We can also see that the dynamic window scheme, although it does not produce the lowest number of VNRs per window, still yields a low number close to the numbers yielded by fixed windows of a very small size. Therefore, we notice (as shown in Fig. 8) that the served-connection ratio for the dynamic window stands among the best. It was surpassed only by the results from fixed windows of a very small size (50 to 150 time units).

Fig. 9 shows the average number of blocked connections. As we explained in the previous sections, all connections are aggregated with a factor of three, collected during the window, and then processed at the end of the window. This means that, if a connection's lifetime expires before its respective window is over, this connection will be blocked or rejected before the beginning of the mapping process regardless of the availability of the resources. The main factor that affects the ratio of blocked connections, apart from the connections' lifetime, is the window size. As the figure shows, the number of blocked connections is very high for large window sizes, while it stays at an acceptable level for small and dynamic window sizes. Again, the dynamic window size scheme performs acceptably for this metric.

Fig. 10. Results showing the number of VNRs for different window decision techniques.

We now examine the final metric, which relates to the computational overhead expected at the network controller that performs the mapping process. The amount of computational overhead needed to map a certain number of requests is affected by two factors in this problem. The first factor is the number of windows. This is based on the window size when using the fixed window size scheme. As the windows become smaller, the number of windows needed is larger, and the total computational overhead grows. The second factor is the total number of VNRs in the problem. Fig. 10 shows that, for a specific amount of requested connections, using a small fixed window size tends to produce a much higher number of VNRs than using large fixed window sizes or dynamic windows. This corresponds to a higher overhead, which, of course, is not favorable for the cloud service provider. Comparing the performance metrics, including acceptance, expired connections, and computational overhead, we find that the dynamic window size scheme is the technique that shows good performance across all these metrics. Hence, using this scheme would be the best option for cloud service providers when performing VNM.

3) Effect of Connections' Order: After dividing the connections into sets and assigning them to time windows, a decision needs to be made regarding the order in which these connections will be processed or served. There are multiple methods to prioritize connection requests over each other. We have evaluated the effect of multiple connection ordering schemes on the performance metrics. For this set of experiments, multiple choices for the arrival rate ranging from 1 to 5 with a step of 0.5 have been used. CPU, memory, link utilization, and generated revenue have been added as metrics. Three prioritizing methods were evaluated for both fixed and dynamic time window schemes. These methods were used in ordering the requests: the high-to-low, low-to-high, and same-order methods. In the high-to-low method, analyzed connection requests with higher potential revenue are prioritized for aggregation and mapping. The low-to-high method gives priority to the other side of the scale, while the same-order method deals with connections based on their original arrival order. For this round of experiments, the fixed window size was set to 50 time units.

Fig. 11. Results showing VNR acceptance ratio for different connections' order schemes.

Fig. 12. Results showing the ratio of served connections for different connections' order schemes.

Figs. 11 and 12 show the behavior of the three connection ordering methods regarding the metrics of acceptance ratio and the ratio of served connections as a function of increasing request arrival rate. As we can observe in these figures, the low-to-high method surpasses the other two schemes with regard to the acceptance ratio and the ratio of successfully served connections. This can be explained by the close dependence between the amount of revenue generated from a request and the lifetime of the request. Low-revenue requests tend to end very quickly, which gives the chance to use the network resources to schedule more requests. This reflects positively on the ratio of served connections and the acceptance ratio metrics.
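The three ordering methods reduce to different comparators over the requests' potential revenue, as in the small sketch below (names are illustrative; the revenue values come from (1)).

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

enum class Ordering { HighToLow, LowToHigh, SameOrder };

// Order request indices according to one of the three prioritization methods;
// "same-order" keeps the original arrival order.
std::vector<std::size_t> prioritize(const std::vector<double>& revenue, Ordering method) {
    std::vector<std::size_t> order(revenue.size());
    std::iota(order.begin(), order.end(), 0);  // arrival order: 0, 1, 2, ...
    if (method == Ordering::HighToLow)
        std::stable_sort(order.begin(), order.end(),
                         [&](std::size_t a, std::size_t b) { return revenue[a] > revenue[b]; });
    else if (method == Ordering::LowToHigh)
        std::stable_sort(order.begin(), order.end(),
                         [&](std::size_t a, std::size_t b) { return revenue[a] < revenue[b]; });
    return order;  // SameOrder falls through unchanged
}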

For the resource utilization metrics, the picture looks a little different. Figs. 13–16 compare the resource utilization for the three methods, covering fixed and dynamic window sizes for each. It is noted that the high-to-low method shows the best resource utilization readings for computational and network resources, including CPU power, memory, storage, and bandwidth. This can be attributed to the effect of the method on the served connections. The high-to-low method tends to prioritize fewer requests, but these requests reserve resources for long periods of time. This leads to less resource fragmentation and higher utilization ratios for these resources.

Fig. 13. Results showing CPU utilization for different connections' order schemes.

Fig. 14. Results showing memory utilization for different connections' order schemes.

Fig. 15. Results showing storage utilization for different connections' order schemes.

Fig. 16. Results showing link utilization for different connections' order schemes.

Fig. 17. Results showing the ratio of blocked connections for different allowed tardiness.

4) Effect of Maximum Allowed Tardiness on the VNM Performance: The main goal here is to assess the effect of the connection's maximum allowed tardiness on the performance metrics. For this experiment, the arrival rate was set to 2.5. We kept the window size either fixed at 500 and 1000 or dynamic while varying the maximum allowed tardiness of a connection request from 10% of the connection's lifetime up to 100%.

Figs. 17 and 18 show that choosing an optimal allowed tardiness value is a tradeoff. Fig. 17 shows the effect of gradually increasing the allowed tardiness limit per connection on the ratio of blocked connections (connections rejected in the phase before the mapping starts). This is shown for three different window size techniques. It is clear that the ratio of blocked connections decreases as the allowed tardiness per connection becomes higher. This is due to the fact that, when connections have higher tolerance for tardiness, they are able to wait until the end of the window when the scheduling happens; therefore, fewer connections are blocked.

Fig. 18. Results showing the ratio of served connections for different allowed tardiness.

Fig. 18 shows the effect of increasing the allowed tardiness level on the ratio of served connections. Regardless of the window size technique used, high allowed tardiness levels lead to improvements in the ratio of served connections. A higher tardiness tolerance gives the scheduler more options in terms of connection mapping, so it leads to more connections being scheduled. These improvements range from around 3% in the case of dynamic window sizing to 8% in the case of large window sizes. The dynamic window size technique shows stable performance regardless of the degree of tardiness allowed. Hence, this technique is more suitable for highly demanding environments where the connection's allowed tardiness level either keeps fluctuating or is tight to begin with.

5) Effect of Spanning Connection Technique: To conclude the set of experiments, an experiment was conducted to evaluate the result of implementing the spanning connection technique on the performance metrics. We have used arrival rates ranging from 1 to 5 with a step of 0.5. In this analysis, we focused on the generated revenue as the main metric.

The analysis of connection requests was carried out using both the fixed window and dynamic window size techniques. Requests are then prioritized based on the highest revenue (using the high-to-low method). Aggregation is performed for the prioritized requests, and the mapping is done onto the substrate network. Requests are rejected if mapping cannot be performed on the substrate network.

Fig. 19 shows the effect of enabling the spanning connections on the ratio of served connections when using dynamic window and fixed window sizes of 500 time units along with either the high-to-low or low-to-high ordering method. The comparison between the three methods is not as straightforward as it was before using the spanning connections. Although the fixed window size with the high-to-low method still yields the lowest served-connection ratio compared to the other two methods, the low-to-high method does not guarantee the best performance in all cases as before. Instead, using the fixed window size with the low-to-high method performs better in the case of high connection load, while using the high-to-low method along with the dynamic window size technique produces better results when the load is low or regular. This is due to the fact that this method prioritizes the spanning connections inherited from earlier windows to be served first. This allows for more connections from this category to be served compared to the low-to-high method, which orders the spanning connections along with the current window connections based on their potential revenue. As the connections that arrived in the current window will have more tolerance for tardiness, this, in total, will lead to more connections served. However, as the load increases, the effect of the spike in the service ratio starts to go down, and the original trend starts to show again. This is also supported by the fact that the spanning connection percentage gets marginalized when the number of connections per window becomes higher.

Fig. 19. Results showing the ratio of served connections for different order schemes using the spanning connection technique.

In the experiment shown in Fig. 20, we try to find the combination of techniques that achieves the highest revenue when using the spanning connection technique. Using dynamic window size selection along with the high-to-low connection ordering technique achieves the highest revenue for high and low loads. This is despite the number of connections served through dynamic window size selection at high arrival rates being lower in the high-to-low method than with the low-to-high ordering method. This is because of the focus on higher revenue requests, which guarantees more revenue per request.

The evaluation results demonstrate the benefits of the proposed spanning connection technique and show that the best results are obtained in the case of dynamic windowing.

Fig. 20. Results showing the generated revenue for different mapping schemes using the spanning connection technique.

VI. CONCLUSION

Implementing virtualization in a smooth and cost-effective way is crucial to cloud service acceptance and market penetration. A challenge faced by cloud service providers is designing the resource allocation techniques that will tackle the problem of VNM. Clients send numerous requests to reserve computational and network resources and expect their QoS conditions to be maintained throughout the request lifetime. One of the main features that define a VNM policy is the window size selection scheme. Multiple window size selection schemes were presented and evaluated in this paper. The dynamic window selection scheme was introduced in the context of VNM for the cloud computing data center. After evaluating the possible window size selection techniques, simulation results showed that the dynamic window size scheme achieved all cloud service provider objectives in terms of served-connection ratio, resource utilization, and computational overhead. Moreover, three connection ordering schemes were investigated. The low-to-high technique achieved the best performance in terms of the ratio of served connections, while the high-to-low method had the advantage in terms of resource utilization. In addition, the effect of adding features such as the maximum allowed tardiness and the spanning connection technique was studied. The proposed algorithm shows better performance than commonly used schemes such as G-MCF [14] and ViNEYard [11]. As a future step, we will further investigate the impact of the aggregation factor, different pricing options, and the distributed cloud network topology on the performance and revenue of the VNM requests.

ACKNOWLEDGMENT

The authors would like to thank Dr. D. Ban from Samsung for his feedback and supportive insights, which were very valuable while implementing this research project. They would also like to thank King Abdulaziz City for Science and Technology (KACST) for its continuing support.

REFERENCES

[1] M. Jammal, A. Kanso, and A. Shami, "CHASE: Component high availability-aware scheduler in cloud computing environment," in Proc. IEEE 8th Int. Conf. Cloud Comput., Jun. 2015, pp. 477–484.
[2] B. Rimal, E. Choi, and I. Lumb, "A taxonomy and survey of cloud computing systems," in Proc. 5th Int. Joint Conf. INC, IMS, IDC (NCM), Aug. 2009, pp. 44–51.
[3] M. A. Sharkh, M. Jammal, A. Shami, and A. Ouda, "Resource allocation in a network-based cloud computing environment: Design challenges," IEEE Commun. Mag., vol. 51, no. 11, pp. 46–52, Nov. 2013.
[4] Amazon Web Services, Amazon Elastic Compute Cloud (Amazon EC2), accessed Jun. 2014. [Online]. Available: http://aws.amazon.com/fr/ec2/
[5] Microsoft Azure, accessed Jun. 2014. [Online]. Available: http://azure.microsoft.com
[6] Google Cloud Platform Home Page, accessed Jun. 2014. [Online]. Available: https://cloud.google.com/products/app-engine
[7] Z. Wang, J. Wu, Y. Wang, N. Qi, and J. Lan, "Survivable virtual network mapping using optimal backup topology in virtualized SDN," China Commun., vol. 11, no. 2, pp. 26–37, Feb. 2014.
[8] P. Lin, J. Bi, and H. Hu, "VCP: A virtualization cloud platform for SDN intra-domain production network," in Proc. IEEE 20th ICNP, Oct. 2012, pp. 1–2.
[9] C. Papagianni, G. Androulidakis, and S. Papavassiliou, "Virtual topology mapping in SDN-enabled clouds," in Proc. IEEE 3rd Symp. NCCA, Feb. 2014, pp. 62–67.
[10] T. Sunay, "Of controllers and why Nicira had to do a deal, Part III: SDN and OpenFlow enabling network virtualization in the cloud," 2012, accessed Jun. 2014. [Online]. Available: http://pluribusnetworks.com/blog
[11] M. Chowdhury, M. Rahman, and R. Boutaba, "ViNEYard: Virtual network embedding algorithms with coordinated node and link mapping," IEEE/ACM Trans. Netw., vol. 20, no. 1, pp. 206–219, Feb. 2012.
[12] M. A. Sharkh, A. Ouda, and A. Shami, "A resource scheduling model for cloud computing data centers," in Proc. 9th IWCMC, Jul. 2013, pp. 213–218.
[13] Y. Zhu and M. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proc. IEEE 25th INFOCOM, Apr. 2006, pp. 1–12.
[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: Substrate support for path splitting and migration," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 17–29, Mar. 2008.
[15] G. Sun, V. Anand, H. Yu, D. Liao, and L. Li, "Optimal provisioning for elastic service oriented virtual network request in cloud computing," in Proc. IEEE GLOBECOM, Dec. 2012, pp. 2517–2522.
[16] J. Lu and J. Turner, "Efficient mapping of virtual networks onto a shared substrate," Washington Univ. in St. Louis, St. Louis, MO, USA, Tech. Rep. WUCSE-2006-35, 2006.
[17] C. Papagianni et al., "On the optimal allocation of virtual resources in cloud computing networks," IEEE Trans. Comput., vol. 62, no. 6, pp. 1060–1071, Jun. 2013.
[18] K. Alhazmi, M. Abusharkh, D. Ban, and A. Shami, "A map of the clouds: Virtual network mapping in cloud computing data centers," in Proc. IEEE 27th CCECE, May 2014, pp. 1–6.
[19] B. Wang, T. Li, X. Luo, and Y. Fan, "Traffic grooming under a sliding scheduled traffic model in WDM optical networks," in Proc. IEEE Workshop Traffic Grooming WDM Netw., Oct. 2004, pp. 1–10.
[20] A. Dharwadker, The Independent Set Algorithm. Seattle, WA, USA: CreateSpace Independent Publishing Platform, Oct. 2011.
[21] C. Wang, S. Shanbhag, and T. Wolf, "Virtual network mapping with traffic matrices," in Proc. IEEE ICC, Jun. 2012, pp. 2717–2722.

Khaled Alhazmi (S'14) received the B.Sc. degree in computer engineering from King Saud University, Riyadh, Saudi Arabia, in 2007 and the M.E.Sc. degree in electrical and computer engineering from Western University, London, ON, Canada, in 2014, where he has been working toward the Ph.D. degree since September 2014.

Since 2007, he has been working as a Senior R&D Researcher, Project Manager, and Senior Engineer at King Abdulaziz City for Science and Technology. Since September 2012, he has been with the Optimized Communication and Computations (OC2Lab) group at Western University. His current research interests are in the areas of cloud computing, virtualization, cloud computing optimization management and service provisioning, SDN, and software engineering.

Mohamed Abu Sharkh (S'13) received the B.Sc. degree in computer science from the Faculty of Science, Kuwait University, Kuwait City, Kuwait, in 2005 and the M.Sc. degree in computer engineering from the Faculty of Engineering and Petroleum, Kuwait University, in 2009. Since January 2012, he has been working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Western University, London, ON, Canada.

He has five years of professional experience as a software engineer, a business analyst, and then as an Enterprise Resource Planning (ERP) software consultant. His current research interests include cloud computing data center management, high availability in the cloud, and natural language processing.

Abdallah Shami (SM'09) received the B.E. degree in electrical and computer engineering from the Lebanese University, Beirut, Lebanon, in 1997 and the Ph.D. degree in electrical engineering from the Graduate School and University Center, City University of New York, New York, NY, USA, in 2002.

In September 2002, he joined the Department of Electrical Engineering at Lakehead University, Thunder Bay, ON, Canada, as an Assistant Professor. Since July 2004, he has been with Western University, London, ON, Canada, where he is currently a Professor in the Department of Electrical and Computer Engineering. His current research interests are in the areas of network optimization, cloud computing, and wireless networks.