


Journal of Computing and Information Technology - CIT 19, 2011, 1, 25–55, doi:10.2498/cit.1001864


A Review on Cloud Computing: Design Challenges in Architecture and Security

Fei Hu1, Meikang Qiu2, Jiayin Li2, Travis Grant1, Draw Tylor1, Seth McCaleb1, Lee Butler1 and Richard Hamner1

1 Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, USA
2 Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, USA

Cloud computing is becoming a powerful network architecture for performing large-scale and complex computing. In this paper, we comprehensively survey the concepts and architecture of cloud computing, as well as its security and privacy issues. We compare different cloud models, trust/reputation models, and privacy-preservation schemes. Their pros and cons are discussed for each cloud computing security and architecture strategy.

Keywords: cloud computing, security, privacy, computer networks, models

1. Introduction

Cloud computing is quickly becoming one of the most popular and trendy phrases being tossed around in today's technology world. "It's becoming the phrase du jour", says Gartner's Ben Pring [1]. It is the big new idea that will supposedly reshape the information technology (IT) services landscape. According to a 2008 article in The Economist, it will have huge impacts on the information technology industry and also profoundly change the way people use computers [2]. What exactly is cloud computing, then, and how will it have such a big impact on people and the companies they work for?

In order to define cloud computing, it is first necessary to explain what is referenced by the phrase "The Cloud". The first reference to "The Cloud" originated in the telephone industry in the early 1990s, when Virtual Private Network (VPN) service was first offered.

Rather than hard-wire data circuits between the provider and customers, telephone companies began using VPN-based services to transmit data. This allowed providers to offer the same amount of bandwidth at a lower cost by rerouting network traffic in real time to accommodate ever-changing network utilization. Thus, it was not possible to accurately predict which path data would take between the provider and customer. As a result, the service provider's network responsibilities were represented by a cloud symbol, signifying a black box of sorts from the end-users' perspective. It is in this sense that the term "cloud" in the phrase cloud computing metaphorically refers to the Internet and its underlying infrastructure.

Cloud computing is in many ways a conglomerate of several different computing technologies and concepts such as grid computing, virtualization, autonomic computing [40], Service-Oriented Architecture (SOA) [43], peer-to-peer (P2P) computing [42], and ubiquitous computing [41]. As such, cloud computing has inherited many of these technologies' benefits and drawbacks. One of the main driving forces behind the development of cloud computing was to fully harness the already existing, but under-utilized, computing resources in data centers. Cloud computing is, in a general sense, on-demand utility computing for anyone with access to the cloud. It offers a plethora of IT services ranging from software to storage to security, all available anytime, anywhere, and from any device connected to the cloud.

More formally, the National Institute of Standards and Technology (NIST) defines cloud computing as a model for convenient, on-demand network access to computing resources such as networks, servers, storage, applications, and services that can be quickly deployed and released with very little management by the cloud-provider [4]. Although the word "cloud" does refer to the Internet in a broad sense, in reality clouds can be public, private, or hybrid (a combination of both public and private clouds). Public clouds provide IT services to anyone in the general public with an Internet connection and are owned and maintained by the company selling and distributing these services. In contrast, private clouds provide IT services through a privately-owned network to a limited number of people within a specific organization.

As two examples of public consumer-level clouds, consider Yahoo!® Mail and YouTube. Through these two sites, users access data in the form of e-mails, attachments, and videos from any device that has an Internet connection. When users download e-mails or upload videos, they do not know where exactly the data came from or went. Instead, they simply know that their data is located somewhere inside the cloud. In reality, cloud computing involves much more complexity than the preceding two examples illustrate, and it is this behind-the-scenes complexity that makes cloud computing so appealing and beneficial to individual consumers and large businesses alike.

Cloud computing is an emerging technology from which many different industries and individuals can greatly benefit. The concept is simple: the cloud (the Internet) can be utilized to provide services that would otherwise have to be installed on a personal computer. For example, service providers may sell a service to customers that provides storage of customer information. Or perhaps a service could allow customers to access and use some software that would otherwise be very costly in terms of money and memory. Typically, cloud computing services are sold on an "as needed" basis, thus giving customers more control than ever before. Cloud computing services certainly have the potential to benefit both providers and users. However, in order for cloud computing to be practical and reliable, many existing issues must be resolved.

With ongoing advances in technology, the use of cloud computing is certainly on the rise. Cloud computing is a fairly recent technology that relies on the Internet to deliver services to paying customers. Services most often used with cloud computing include data storage and access to software applications (Figure 1). The use of cloud computing is particularly appealing to users for two reasons: it is rather inexpensive and it is very convenient. Users can access data or use applications with only a personal computer and Internet access. Another convenient aspect of cloud computing is that software applications do not have to be installed on a user's computer; they can simply be accessed through the Internet. However, as with anything else that seems too good to be true, there is one central concern with the use of cloud computing technology: security [44].

Figure 1. Cloud computing [5].


On the Internet, people like to use email for communication because of its convenience, efficiency, and reliability. Most email content carries no special meaning; it is ordinary daily communication, and such content is therefore less attractive to a hacker. However, when two people, two groups, or even two countries want to communicate secretly over the Internet, the communicating partners need some kind of encryption to ensure their privacy and confidentiality. Because such situations occur rarely, users do not want to slow down their processors by encrypting every email or document. There must be a way for users to easily encrypt their documents only when they require such a service.

In the rest of the paper, we first survey typical architectures from a network-layers viewpoint. Then we discuss cloud models. Next, we move to security issues such as encryption-on-demand. The privacy-preservation models are then described. In each part where multiple schemes are presented, we also compare their pros and cons.

2. Cloud Computing Architecture

2.1. Abstract Layers

To begin understanding cloud computing in some sort of detail, it is necessary to examine it in abstraction layers, beginning at the bottom and working upwards. Figure 2 illustrates the five layers that constitute cloud computing [44]. A particular layer is classified above another if that layer's services can be composed of services provided by the layer beneath it.

Figure 2. The five abstraction layers of cloud computing [5].

The bottom layer is the physical hardware, namely the cloud-provider-owned servers and switches that serve as the cloud's backbone. Customers who use this layer of the cloud are usually big corporations that require an extremely large amount of subleased Hardware as a Service (HaaS). As a result, the cloud-provider runs, oversees, and upgrades its subleased hardware for its customers. Cloud-providers must overcome many issues related to the efficient, smooth, and quick allocation of HaaS to their customers, and one solution that allows providers to address some of these issues involves using remote scriptable boot-loaders. Remote scriptable boot-loaders allow the cloud-provider to specify the initial set of operations executed by the servers during the boot process, meaning that complete stacks of software can be quickly implemented by remotely located data center servers.

The next layer consists of the cloud's software kernel. This layer acts as a bridge between the data processing performed in the cloud's hardware layer and the software infrastructure layer which operates the hardware. It is the lowest level of abstraction implemented by the cloud's software, and its main job is to manage the server's hardware resources while at the same time allowing other programs to run and utilize these same resources. Several different implementations of software kernels include operating system (OS) kernels, hypervisors, and clustering middleware. Hypervisors allow multiple OSs to run on a server at the same time, while clustering middleware consists of software located on groups of servers, which allows multiple processes running on one or multiple servers to interact with one another as if all were concurrently operating on a single server.

The abstraction layer above the software kernel is called software infrastructure. This layer renders basic network resources to the two layers above it in order to facilitate new cloud software environments and applications that can be delivered to end-users in the form of IT services. The services offered in the software infrastructure layer can be separated into three different subcategories: computational resources, data storage, and communication.

Computational resources, also called Infrastructure as a Service (IaaS), are available to cloud customers in the form of virtual machines (VMs). Virtualization technologies such as para-virtualization [44], hardware-assisted virtualization, live-migration, and pause-resume enable a single, large server to act as multiple virtual servers, or VMs, in order to better utilize limited and costly hardware resources such as its central processing unit (CPU) and memory. Thus, each discrete server appears to cloud users as multiple virtual servers ready to execute the users' software stacks. Through virtualization, the cloud-provider who owns and maintains a large number of servers is able to benefit from gains in resource utilization and efficiency and claim the economies of scale that arise as a result. IaaS also automatically allocates network resources among the various virtual servers rapidly and transparently. The resource utilization benefits provided by virtualization would be almost entirely negated if not for the layer's ability to allocate servers' resources in real time. This gives the cloud-provider the ability to dynamically redistribute processing power in order to supply users with the amount of computing resources they require without making any changes to the physical infrastructure of their data centers. Several current examples of clouds that offer flexible amounts of computational resources to their customers include the Amazon Elastic Compute Cloud (EC2) [9], Enomaly's Elastic Computing Platform (ECP) [10], and the RESERVOIR architecture [6].

Data storage, also referred to as Data-Storage as a Service (DaaS), allows users of a cloud to store their data on servers located in remote locations and have instant access to their information from any site that has an Internet connection. This technology allows software platforms and applications to extend beyond the physical servers on which they reside. Data storage is evaluated based on standards related to categories like performance, scalability, reliability, and accessibility. These standards cannot all be achieved at the same time, and cloud-providers must choose which design criteria they will focus on when creating their data storage systems. The RESERVOIR architecture allows providers of cloud infrastructure to dynamically partner with each other to create a seemingly infinite pool of IT resources while fully preserving the autonomy of technological and business management decisions [6]. In the RESERVOIR architecture, each infrastructure provider is an autonomous business with its own business goals. A provider federates with other providers (i.e., other RESERVOIR sites) based on its own local preferences. The IT management at a specific RESERVOIR site is fully autonomous and governed by policies that are aligned with the site's business goals. To optimize this alignment, once initially provisioned, resources composing a service may be moved to other RESERVOIR sites based on economical, performance, or availability considerations. This research addresses those issues and seeks to minimize the barriers to delivering services as utilities with guaranteed levels of service and proper risk mitigation.
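The resource-utilization argument above can be made concrete with a small capacity-accounting sketch. All class names, VM sizes, and server capacities here are invented for illustration; this is not any hypervisor's actual API:

```python
# Minimal sketch of virtualization capacity accounting: one physical
# server's CPU and memory are carved into multiple VMs. Figures invented.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpus: int      # virtual CPUs requested
    mem_gb: int    # memory requested, in GB

@dataclass
class PhysicalServer:
    cpus: int
    mem_gb: int
    vms: list = field(default_factory=list)

    def can_host(self, vm: VM) -> bool:
        # A VM fits only if both CPU and memory headroom remain.
        used_cpu = sum(v.cpus for v in self.vms)
        used_mem = sum(v.mem_gb for v in self.vms)
        return used_cpu + vm.cpus <= self.cpus and used_mem + vm.mem_gb <= self.mem_gb

    def place(self, vm: VM) -> bool:
        if self.can_host(vm):
            self.vms.append(vm)
            return True
        return False

server = PhysicalServer(cpus=16, mem_gb=64)
server.place(VM("web", 4, 8))
server.place(VM("db", 8, 32))
print(server.place(VM("batch", 8, 16)))  # False: only 4 CPUs remain
```

A real IaaS layer adds live migration and over-commitment on top of this bookkeeping, but the same capacity check is the core of why one large server can safely appear as several smaller ones.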

Two examples of data storage systems are the Amazon Simple Storage Service (Amazon S3) [11] and Zecter ZumoDrive [12]. The communication subcategory of the software infrastructure layer, dubbed Communication as a Service (CaaS), provides communication that is reliable, schedulable, configurable, and (if necessary) encrypted. This communication enables CaaS to perform services like network security, real-time adjustment of virtual overlays to provide better networking bandwidth or traffic flow, and network monitoring. Through network monitoring, cloud-providers can track the portion of network resources being used by each customer. This "pay for what you use" concept is analogous to traditional public utilities like water and electricity provided by utility companies and is referred to as utility computing. Voice over Internet Protocol (VoIP) telephony, instant messaging, and audio and video conferencing are all possible services which could be offered by CaaS in the future. As a result of all three software infrastructure subcomponents, cloud customers can rent virtual server time (and thus storage space and processing power) to host web and online gaming servers, to store data, or to provide any other service that the customer desires.
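The "pay for what you use" model amounts to a simple metering calculation; the meter names, rates, and usage figures below are invented stand-ins, not any provider's price list:

```python
# Toy utility-computing bill: each metered quantity times its rate.
# Rates and usage figures are made up for illustration.
RATES = {"cpu_hours": 0.10, "gb_stored": 0.05, "gb_transferred": 0.02}

def monthly_bill(usage: dict) -> float:
    """usage maps a metered quantity name to the amount consumed."""
    return round(sum(RATES[meter] * amount for meter, amount in usage.items()), 2)

bill = monthly_bill({"cpu_hours": 120, "gb_stored": 50, "gb_transferred": 200})
print(bill)  # 18.5  (12.00 CPU + 2.50 storage + 4.00 transfer)
```

The point of the analogy to water and electricity is precisely this: the customer's cost is a linear function of metered consumption rather than of provisioned capacity.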

There are several Virtual Infrastructure Managers in IaaS, such as CLEVER [25], OpenQRM [8], OpenNebula [16], and Nimbus [19]. CLEVER aims to provide Virtual Infrastructure Management services and suitable interfaces at the High-level Management layer to enable the integration of high-level features such as Public Cloud Interfaces, Contextualization, Security, and Dynamic Resources provisioning. OpenQRM is an open-source platform for enabling flexible management of computing infrastructures. It is able to implement a cloud with several features that allow the automatic deployment of services. It supports different virtualization technologies and format conversion during migration. OpenNebula is an open and flexible tool that fits into existing data center environments to build a cloud computing environment. OpenNebula can be primarily used as a virtualization tool to manage virtual infrastructures in the data center or cluster, which is usually referred to as a Private Cloud. Only the more recent versions of OpenNebula support Hybrid Clouds, combining local infrastructure with public cloud-based infrastructure to enable highly scalable hosting environments. OpenNebula also supports Public Clouds by providing cloud interfaces to expose its functionality for virtual machine, storage, and network management. Nimbus provides two levels of guarantees: 1) quality of life: users get exactly the (software) environment they need; and 2) quality of service: provision and guarantee all the resources the workspace needs to function correctly (CPU, memory, disk, bandwidth, availability), allowing for dynamic renegotiation to reflect changing requirements and conditions. In addition, Nimbus can also provision a virtual cluster for Grid applications (e.g., a batch scheduler or a workflow system), which is also dynamically configurable, a growing trend in Grid computing.

The next layer is called the software environment, or platform, layer and for this reason is commonly referred to as Platform as a Service (PaaS). The primary users of this abstraction layer are the cloud application developers who use it as a means to implement and distribute their programs via the cloud. Typically, cloud application developers are provided with a programming-language-level environment and a pre-defined set of application programming interfaces (APIs) to allow their software to properly interact with the software environment contained in the cloud. Two examples of current software environments available to cloud software developers are the Google App Engine and Salesforce.com's Apex code. Google App Engine provides both Java and Python runtime environments as well as several API libraries to cloud application developers [13], while Salesforce.com's Apex code is an object-oriented programming language that allows developers to run and customize programs that interact with Force.com platform servers [14]. When developers design their cloud software for a specific cloud environment, their applications are able to utilize dynamic scaling and load balancing, as well as easily access other services provided by the cloud software environment provider, such as authentication and e-mail. This makes designing a cloud application a much easier, faster, and more manageable task. Another example of PaaS is Microsoft's Azure [46]. Applications for Microsoft's Azure are written using the .NET libraries and compiled to the Common Language Runtime, a language-independent managed environment. The framework is significantly more flexible than App Engine's, but still constrains the user's choice of storage model and application structure. Thus, Azure is intermediate between application frameworks like App Engine and hardware virtual machines like EC2 [47].

The top layer of cloud computing, above the software environment, is the application layer. Dubbed Software as a Service (SaaS), this layer acts as an interface between cloud applications and end-users to offer them on-demand, and many times fee-based, access to web-based software through their web browsers. Because cloud users run programs by utilizing the computational power of the cloud-provider's servers, the hardware requirements of cloud users' machines can often be significantly reduced. Cloud-providers are able to seamlessly update their cloud applications without requiring users to install any sort of upgrade or patch, since all cloud software resides on servers located in the provider's data centers. Google Apps and ZOHO® are two examples of offerings currently available. In many cases, the software layer of cloud computing completely eliminates the need for users to install and run software on their own computers. This, in turn, moves the support and maintenance of applications from the end-user to the company or organization that owns and maintains the servers that distribute the software to customers.

There are several other layered architectures in cloud computing. For example, Sotomayor et al. proposed a three-layer model [37]: cloud management, virtual infrastructure (VI) management, and VM managers. Cloud management provides remote and secure interfaces for creating, controlling, and monitoring virtualized resources on an infrastructure-as-a-service cloud. VI management provides primitives to schedule and manage VMs across multiple physical hosts. VM managers provide simple primitives (start, stop, suspend) to manage VMs on a single host. Wang et al. presented a cloud computing architecture, the Cumulus architecture [45]. In this architecture, there are also three layers: the virtual network domain, the physical network domain, and the Oracle file system.

2.2. Management Strategies for Multiple Clouds [17]

In order to mitigate the risks associated with unreliable cloud services, businesses can choose to use multiple clouds in order to achieve adequate fault-tolerance, a system's ability to continue functioning properly in the event that single or multiple failures occur in some of its components. The multiple clouds utilized can either be a combination of 1) all public clouds or 2) public and private clouds that together form a single hybrid cloud. A lack of well-defined cloud computing standards means that businesses wishing to utilize multiple clouds must manage multiple, often unique, cloud interfaces to achieve their desired level of fault-tolerance and computational power. This can be addressed through the formulation of cloud interface standards, so that any public cloud available to customers utilizes at least one well-defined, standardized interface. As long as all cloud-providers use a standardized interface, the actual technology and infrastructure implemented by the cloud is not relevant to the end-user. Also, some type of resource management architecture is needed to monitor, maximize, and distribute the computational resources available from each cloud to the business.
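A standardized interface of the kind argued for above can be sketched as a common abstract contract that each provider-specific cloud is adapted to. All class and method names here are hypothetical, chosen to mirror the identify/start/stop/query operations described in this section:

```python
# Sketch of a standardized cloud interface: a resource manager codes
# against CloudInterface and never sees provider-specific details.
from abc import ABC, abstractmethod

class CloudInterface(ABC):
    @abstractmethod
    def start(self, resource_id: str) -> None: ...
    @abstractmethod
    def stop(self, resource_id: str) -> None: ...
    @abstractmethod
    def query_state(self, resource_id: str) -> str: ...

class PublicCloudAdapter(CloudInterface):
    """Wraps one provider's proprietary API behind the standard interface.
    Here the 'provider' is simulated with an in-memory state table."""
    def __init__(self):
        self._state = {}
    def start(self, resource_id):
        self._state[resource_id] = "running"
    def stop(self, resource_id):
        self._state[resource_id] = "stopped"
    def query_state(self, resource_id):
        return self._state.get(resource_id, "unknown")

cloud: CloudInterface = PublicCloudAdapter()
cloud.start("vm-1")
print(cloud.query_state("vm-1"))  # running
```

With one adapter per provider, the resource manager's logic is written once against `CloudInterface`, which is exactly why the underlying technology becomes irrelevant to the end-user.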

There are several multiple-cloud management approaches, including the Intercloud Protocols [20] and several methods of cloud federation [5, 7, 48]. The Intercloud Protocols consist of six layers: the actual physical layer, physical metaphor layer, platform metaphor layer, communication layer, management layer, and endpoints layer. The possibility of a worldwide federation of clouds has been studied recently [3, 5, 48]. Some of the challenges ahead for clouds, such as monitoring, storage, QoS, and the federation of different organizations, have previously been addressed by grids. Clouds present, however, specific elements that call for standardization too, e.g., virtual image formats or instantiation/migration APIs [5]. Enhancing existing standards is therefore warranted to ensure the required interoperability.

Figure 3. Architecture for resource management that allows fault-tolerant cloud computing.

Resource management architecture is realized by using abstract computational resources as its most basic building block. Next, a set of operations is defined so that each computational resource can be identified, started, stopped, or queried by the resource manager as to its processing state. Finally, the resource manager should provide some sort of interface to the IT employees of the company utilizing it, to allow them to view, as well as manually modify, the amount of cloud computational resources being used by the company. Upon proper set-up and configuration of IT services from multiple cloud-providers, the resource manager should map the actual resources of each connected cloud to a generic, abstract representation of its computational resources. Ideally, once the actual resources of a cloud from a particular cloud-provider have been mapped, any other cloud from that same cloud-provider could also be ported over to the resource manager as a generic computational resource representation using the same mapping scheme. Using the newly created generic representations of computational resources, the resource manager's algorithm could systematically identify each available resource, query its amount of utilized processing power, and then start or stop the resource accordingly in such a way as to maximize the total amount of cloud resources available to the business for a targeted price level. This process is depicted in Figure 3. Depending on each business's scenario and individual preferences, the development of a suitable algorithm for the cloud manager to implement could be a seemingly complex task.
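One minimal sketch of such an algorithm, maximizing the number of running resources under an hourly budget, is shown below. The greedy cheapest-first policy, the resource names, and the prices are all invented; a real manager would weigh utilization, QoS, and business preferences as the text notes:

```python
# Sketch of a resource manager's rebalancing loop over generic resources.
# Policy: start the cheapest resources first while the budget allows.
def rebalance(resources, budget_per_hour):
    """resources: list of dicts with 'name', 'cost', 'running' (mutated in place).
    Returns the total hourly spend after rebalancing."""
    for r in resources:
        r["running"] = False          # stop everything, then restart within budget
    spend = 0.0
    for r in sorted(resources, key=lambda r: r["cost"]):
        if spend + r["cost"] <= budget_per_hour:
            r["running"] = True
            spend += r["cost"]
    return spend

pool = [
    {"name": "public-large", "cost": 0.40, "running": False},
    {"name": "private-1",    "cost": 0.10, "running": True},
    {"name": "public-small", "cost": 0.15, "running": False},
]
spent = rebalance(pool, budget_per_hour=0.30)
print([r["name"] for r in pool if r["running"]])  # ['private-1', 'public-small']
```

Even this toy version shows why the generic resource representation matters: the loop never needs to know which provider each entry came from.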

One such instance of a hybrid cloud resource manager has been realized through the collaboration of Dodda, Moorsel, and Smith [15]. Their cloud management architecture managed in tandem the Amazon EC2, via Query and Simple Object Access Protocol (SOAP) interfaces, as well as their own private cloud, via a Representational State Transfer (REST) interface. The Query interface of EC2 uses a query string placed in the Uniform Resource Locator (URL) to implement the management operations of the resource manager. Amazon, as the cloud-provider, provides a list of defined parameters and their corresponding values to be included in a query string by the resource manager. These query strings are sent by the cloud resource manager to a URL called out by the cloud-provider via HTTP GET messages in order to perform the necessary management operations. In this way, EC2's interface is mapped to a generic interface that can be manipulated by the resource manager. The SOAP interface of EC2 operates very similarly to the Query interface. The same operations are available to resource managers using the SOAP interface as are available when using the Query interface, and HTTP GET messages are sent to a URL specified by the cloud-provider in order to perform cloud management operations. The only difference between the two interfaces is the actual set of parameters needed to perform each operation.

The REST interface of the private cloud assigns a global identifier, in this case a Uniform Resource Identifier (URI), to each local resource. Each local resource is then manipulated via HTTP and mapped to its generic interface in much the same way as EC2's SOAP and Query interfaces. After each cloud-specific interface had been successfully mapped to its generic interface, the resource manager was able to effectively oversee both the EC2 and the private cloud at the same time by using a single, common set of interface commands.
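The query-string mechanism can be illustrated with Python's standard library. The endpoint, action name, and parameter below follow the general shape of such APIs but are placeholders, not Amazon's actual parameter list:

```python
# Building a management command as an HTTP GET query string, in the style
# of a Query interface. Endpoint and parameter names are placeholders.
from urllib.parse import urlencode

def build_query_url(endpoint: str, action: str, **params) -> str:
    """Encode the operation and its provider-defined parameters into the URL."""
    query = urlencode({"Action": action, **params})
    return f"{endpoint}?{query}"

url = build_query_url("https://cloud.example.com/api",
                      "StartResource", ResourceId="vm-42")
print(url)  # https://cloud.example.com/api?Action=StartResource&ResourceId=vm-42
```

Issuing an HTTP GET to such a URL is the entire wire protocol, which is part of why query-style interfaces are simple to map onto a generic resource-manager interface.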

However, one might wonder whether the choice of interface that the resource manager uses to oversee a cloud has a significant influence on the cloud's performance. To answer this question, a series of 1,000 identical management operation commands were issued by the resource manager to EC2 via the Query interface. Each command was given a timestamp upon transmission from and arrival at the resource manager. Then, the same process was repeated for EC2 using the SOAP interface. A comparison of both interfaces' response times yielded a mean response time of 508 milliseconds (ms) for the Query interface and 905 ms for the SOAP interface. Thus, the SOAP interface's response time was nearly double that of the Query interface. In addition, the variance of the SOAP interface's response times was much larger than the variance exhibited by the response times of the Query interface. So, the Query interface's response time is not only faster on average, but also more consistent than that of the SOAP interface. Based on these results, it should come as no surprise that most businesses use EC2's Query interface instead of its SOAP interface. This comparison clearly shows that, when using a particular cloud, businesses must be careful to choose the most responsive interface to connect to their resource manager so as to maximize performance and quality of service (QoS). The less time required to process commands issued by the resource manager, the more responsive the resource manager can be at maximizing and distributing the computational resources available from each cloud to the business.
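The comparison above, mean and variance of timestamped round trips, amounts to the following computation. The samples here are synthetic stand-ins; the study timestamped 1,000 real commands per interface:

```python
# Comparing two interfaces by mean and variance of response times (ms).
# The sample values are synthetic, chosen only to mimic the pattern
# reported in the study: Query faster and tighter, SOAP slower and noisier.
from statistics import mean, pvariance

query_ms = [500, 505, 512, 498, 520]   # tightly clustered
soap_ms  = [850, 990, 870, 1100, 815]  # slower and more spread out

for name, samples in [("Query", query_ms), ("SOAP", soap_ms)]:
    print(f"{name}: mean={mean(samples):.0f} ms, variance={pvariance(samples):.0f}")
```

Both statistics matter for the conclusion drawn in the text: the mean determines average responsiveness, while the variance determines how predictable each management operation's latency is.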

Another example of hybrid cloud management, described by H. Chen et al., utilizes a technique called intelligent workload factoring (IWF) [17]. As with the cloud resource manager approach described previously, businesses can utilize IWF to satisfy highly dynamic computing needs. The IWF management scheme splits the overall computational workload into two separate loads, a base load and a trespassing load, when extreme spikes in the workload occur. The base load is the smaller, steady workload constantly demanded by users of cloud computing services, while the trespassing load is the larger, transient workload demanded on an unpredictable basis. The two loads are managed independently in separate resource zones. The resources necessary to handle the base load can be obtained from a company's private cloud, which is thus referred to as the base load resource zone, while the elastic computational power needed to meet the trespassing load can be accessed on demand via a public cloud supplied by a cloud provider, known as the trespassing load resource zone. Base load management should proactively seek to use base zone resources efficiently by using predictive algorithms to anticipate the future base load conditions present at all times in the company's network. In contrast, trespassing load management should be able to devote trespassing zone resources to sudden, unpredictable changes in the trespassing load in a systematic yet agile fashion. Part of the IWF model's task is to ensure that the base load remains within the amount planned for by the company's IT department; in most cases, this is the computing capacity of the business's private cloud (i.e., on-site servers in a data center). By doing this, the computational capacity that the business's on-site data center must be over-provisioned for is considerably decreased, resulting in less capital expenditure and less server underutilization. The other part of the IWF model's job is to ensure that a minimal amount of data replication is required to process the services obtained from the public cloud.
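The core factoring decision can be sketched in a few lines. This is a deliberately minimal illustration of the base/trespassing split, not the algorithm of [17]; the function name and capacity units are assumptions.

```python
# Minimal sketch of workload factoring: requests up to the planned base
# capacity go to the private cloud's base zone; the transient excess is
# redirected to the public cloud's trespassing zone.

def factor_workload(request_rate, base_capacity):
    """Split an instantaneous request rate into (base_load, trespassing_load)."""
    base = min(request_rate, base_capacity)
    trespassing = max(0, request_rate - base_capacity)
    return base, trespassing

# Example: with a private cloud planned for 1000 requests/s, a spike of
# 1600 requests/s sends 1000 to the base zone and 600 to the public cloud.
```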

One instance of such an IWF management scheme was created by H. Chen and his colleagues [14]. Their system employed a private cloud in the form of a local cloud cluster as its base load resource zone and Amazon EC2 as its trespassing load resource zone. The architecture used in this IWF hybrid cloud management system is depicted in Figure 4.

The IWF module, located at the very front of the management architecture, has a two-fold job: first, it should send a smooth, constant workload to the base zone while avoiding overloading either zone through excessive redirection of load traffic; second, it should effectively decompose the overall load initiated by users of the network, both by type and by amount of requested data. This load decomposition helps to avoid data redundancy in the management scheme. The base zone of the IWF system is on 100% of the time and, as mentioned previously, the volume of the base load does not vary much with time. As a result, the private cloud resources that handle the base load can be run very efficiently. Nonetheless, a small amount of over-provisioning is still necessary for the local cloud cluster to handle small fluctuations in the base load volume and to guarantee an acceptable QoS to the business's employees. The trespassing zone of the system is only on for X% of the time, where X is typically a small single-digit number. Because the

Figure 4. The IWF hybrid cloud management system [17].


trespassing zone must be able to adequately handle large spikes in processing demand, its computational capacity must be over-provisioned by a considerably large amount.

The procedure for implementing IWF can be modeled as a hypergraph partitioning problem consisting of vertices and nets. Data objects in the load traffic, such as streaming videos or tables in a database, are represented by vertices, while the requests to access these data objects are modeled as nets.

The link between a vertex and a net depicts therelationship that exists between a data objectand its access request. Formally, a hypergraphis defined as

H = (V, N)        (1)

where V is the vertex set and N is the net set. Each vertex v_i ∈ V is assigned weights w_i and s_i that represent, respectively, the portion of the workload caused by the i-th data object and the data size of the i-th data object. The weight w_i is computed as the average workload per access request of the i-th data object multiplied by its popularity. Each net n_j ∈ N is assigned a cost c_j that represents the expected network overhead required to perform an access request of type j; c_j is calculated as the expected number of type-j access requests multiplied by the total data size of all neighboring data objects. The design problem of the IWF management system is to assign all data objects, or vertices, to two separate locations without causing either location to exceed its workload capacity, while at the same time achieving a minimum partition cost. This is given by

Min( Σ_{n_j ∈ N_expected} c_j  +  α · Σ_{v_i ∈ V_trespassing} s_i )        (2)

where the first summation is the total cost of all nets that extend across multiple locations (i.e., the data consistency overhead), the second summation is the total data size of the objects located in the trespassing zone (i.e., the data replication overhead), and α is a weighting factor between the two summations.
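Evaluating this partition cost for a candidate assignment can be sketched directly from the definitions above. The dictionary-based hypergraph encoding and the name `alpha` for the weighting factor are illustrative choices, not the paper's data structures.

```python
# Sketch: evaluate the IWF partition cost of Equation (2) for a given
# assignment of data objects (vertices) to the two zones.

def partition_cost(nets, net_cost, obj_size, zone_of, alpha=1.0):
    """nets: {net_id: set of vertex ids touched by that access-request type}
    net_cost: {net_id: c_j}   obj_size: {vertex_id: s_i}
    zone_of: {vertex_id: 'base' or 'trespassing'}
    Returns crossing-net cost plus alpha times trespassing-object size."""
    crossing = sum(
        c for n, c in net_cost.items()
        if len({zone_of[v] for v in nets[n]}) > 1   # net spans both zones
    )
    replication = sum(s for v, s in obj_size.items()
                      if zone_of[v] == "trespassing")
    return crossing + alpha * replication
```

A partitioner would search over `zone_of` assignments, subject to each zone's workload capacity, to minimize this value.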

The IWF management scheme consists of three main components: workload profiling, fast factoring, and the base load threshold. A flow diagram of how these three components interact is shown in Figure 5. The workload profiling component continually updates the system load as changes occur in the number and nature of computing requests initiated by users of the network. By comparing the current system load with the base load threshold (the maximum load that the base zone can adequately handle), the workload profiling component places the system in one of two states: normal mode if the system load is less than or equal to the base load threshold, or panic mode if the system load is greater than the base load threshold. The base load threshold can either be set manually by those in charge of the system or automatically from the past history of the base load. When the system is in normal mode, the fast factoring component forwards the current system load to the base zone. However, when the system is in panic mode, the fast factoring component runs a fast frequent data object detection algorithm to determine whether a computing request is seeking a data object that is in high demand.

Figure 5. Flow diagram of IWF management scheme components [17].

If the object being requested is not in high demand, the fast factoring component forwards that portion of the system load to the base zone; if the requested object is in high demand, that portion of the system load is immediately forwarded to the trespassing zone. In essence, the fast factoring component removes the sporadic surges in computing demand from the base zone (which is not equipped to handle such requests) and shifts them over to the trespassing zone (which is over-provisioned to handle such spikes in the computing load) to be processed.
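The routing decision just described reduces to a short rule. This is a sketch of the described logic only; the function signature and mode names are assumptions for illustration.

```python
# Sketch of the fast-factoring routing decision: normal mode sends
# everything to the base zone; panic mode diverts only requests for
# high-demand data objects to the trespassing zone.

def route_request(system_load, base_threshold, is_high_demand):
    """Return the zone ('base' or 'trespassing') a request is forwarded to.
    is_high_demand: verdict of the fast frequent data object detection."""
    if system_load <= base_threshold:
        return "base"        # normal mode
    # panic mode: only hot objects go to the trespassing zone
    return "trespassing" if is_high_demand else "base"
```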

The fast frequent data object detection algorithm features a First In, First Out (FIFO) queue of the past c computing requests as well as two lists of the up-to-date k most popular data objects and the historical k most popular data objects, based on the frequency of access requests for each data object. The algorithm uses its queue and popular data object lists to accurately pinpoint data objects that are in high demand by the system. The level of accuracy achieved by the algorithm depends largely on how correct the counters are that measure the number of access requests for each data object. For a data object T, its access request rate, p(T), is defined as

p(T) = requests_T / requests_total        (3)

where requests_T is the number of access requests for data object T and requests_total is the total number of access requests for all data objects combined. The algorithm can approximate p(T) by an estimate p̂(T) (where the hat indicates an approximation instead of an exact result) such that

p̂(T) ∈ ( p(T) · (1 − ε/2),  p(T) · (1 + ε/2) )        (4)

where ε is a relatively small number that represents the percent error in the request-rate approximation for a data object T. Obviously, a lower ε (and thus a lower percent error in approximating access request rates) allows the algorithm to more accurately determine which data objects are in high demand, so that frequent requests for such objects can be passed on to the trespassing zone of the IWF management system and processed accordingly. Through the use of the fast frequent data object detection algorithm, the performance of the IWF hybrid cloud management system is therefore greatly improved.
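A detector in the spirit of the one described above can be sketched with a FIFO window plus per-object counters, from which both p(T) and a current top-k list fall out. This is an illustrative implementation under assumed details, not the exact algorithm of [17].

```python
# Sketch: a FIFO window of the last c requests with counters; rates are
# computed over the window and "high demand" means membership in the
# current top-k most requested objects.
from collections import Counter, deque

class FrequentObjectDetector:
    def __init__(self, window_size, k):
        self.window = deque()            # FIFO queue of the past c requests
        self.counts = Counter()
        self.window_size = window_size
        self.k = k

    def record(self, obj):
        self.window.append(obj)
        self.counts[obj] += 1
        if len(self.window) > self.window_size:
            old = self.window.popleft()  # expire the oldest request
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def rate(self, obj):
        """p(T) = requests_T / requests_total over the current window."""
        total = len(self.window)
        return self.counts.get(obj, 0) / total if total else 0.0

    def is_high_demand(self, obj):
        top_k = {o for o, _ in self.counts.most_common(self.k)}
        return obj in top_k
```

An exact windowed count is used here for clarity; the point of the scheme above is that an ε-approximate counter can make the same top-k decision much more cheaply.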

As a way to compare the costs of different cloud computing configurations, H. Chen and his colleagues [17] used the hourly workload data from Yahoo! Video's web service, measured over a 46-day period, as a sample workload. The workload (measured in requested video streams per hour) contained a total of over 32 million stream requests. The three cloud computing configurations considered were: 1) a private cloud composed of a locally operated data center, 2) the Amazon EC2 public cloud, and 3) a hybrid cloud containing both a local cloud cluster to handle the bottom 95th percentile of the workload and the Amazon EC2 public cloud provisioned on demand to handle the top 5th percentile. For Amazon EC2, the price of a single machine hour was assumed to be $0.10. The cost results of the study are shown in Table 1, where, for simplicity, only the costs associated with running the servers themselves are considered.

From Table 1, implementing a private cloud that could handle Yahoo! Video's typical video streaming workload would require the yearly cost of maintaining 790 servers in a local data center. Handling the same workload using only a public cloud like Amazon EC2 would cost a staggering $1.38 million per year; although $0.10 per machine hour does not seem like a considerable cost, it adds up very quickly for a load of this size. Finally, the hybrid cloud configuration implementing IWF still maintains the ability to handle large fluctuations in load demand and requires a yearly cost of approximately $60,000 plus the cost of operating 99 servers in a local data center. This is by far the most attractive of the three cloud computing configurations and should warrant strong consideration by any small or midsize business.
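The public-cloud figure is consistent with simple machine-hour arithmetic. Note that the 1,580-server count below is inferred here from the $1,384,000 total and the $0.10 rate; it is an assumption for illustration, not a number stated in the study.

```python
# Back-of-the-envelope check of the Table 1 public-cloud figure:
# annual cost = servers x hours per year x price per machine hour.
PRICE_PER_MACHINE_HOUR = 0.10   # dollars, as assumed in the study
HOURS_PER_YEAR = 365 * 24       # 8760

def annual_public_cloud_cost(servers):
    return servers * HOURS_PER_YEAR * PRICE_PER_MACHINE_HOUR

# annual_public_cloud_cost(1580) gives about $1,384,080, close to the
# $1,384,000 reported in Table 1 (assuming ~1,580 always-on machines).
```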

Cloud Computing Configuration    Annual Cost
-----------------------------    ------------------------------------------------------
Private Cloud                    Cost of running a data center composed of 790 servers
Public Cloud                     $1,384,000
Hybrid Cloud with IWF            $58,960 plus the cost of running a data center
                                 composed of 99 servers

Table 1. Cost comparison of three different cloud computing configurations [17].

2.3. Integrating Sensor Networks and Cloud Computing [18]

In addition to cloud computing, wireless sensor networks (WSNs) are quickly becoming a technological area of expansion. The ability of their autonomous sensors to collect, store, and send data pertaining to physical or environmental conditions while being remotely deployed in the field has made WSNs a very attractive solution for many problems in the areas of industrial monitoring, environmental monitoring, building automation, asset management, and health monitoring. WSNs have enabled a whole new wave of information to be tapped that has never before been available. By supplying this immense amount of real-time sensor data to people via software, the applications are limitless: up-to-date traffic information could be made available to commuters, the vital signs of soldiers on the battlefield could be monitored by medics, and environmental data pertaining to earthquakes, hurricanes, and similar events could be analyzed by scientists to help predict further disasters. All of the data made available by WSNs will need to be processed in some fashion, and cloud computing is a very real option for satisfying the computing needs that arise from processing and analyzing all of the collected data.

A content-based publish/subscribe model for integrating WSNs and cloud computing is described in [18]. Publish/subscribe models do not require publishers (sensor nodes) to send their messages to any specific receiver (subscriber). Instead, sent messages are classified into classes, and those who want to view messages, or sensor data, pertaining to a particular class simply subscribe to that class. By decoupling the sensor nodes from their subscribers, a WSN's ability to scale in size, and its nodes' ability to dynamically change position, are greatly increased. This is necessary because nodes constantly reconfigure themselves in an ad-hoc manner to react to suddenly inactive or damaged nodes as well as to newly deployed and/or repositioned nodes. In order to deliver both real-time and archived sensor information to subscribers, an algorithm that quickly and efficiently matches sensor data to its corresponding subscriptions is necessary. In addition, the computing power needed to process and deliver extremely large amounts of real-time, streaming sensor data to subscribers may (depending on the transient spikes of the load) leave a single cloud provider unable to maintain a certain QoS level. Therefore, a Virtual Organization (VO) of cloud providers might be necessary to ensure that this scenario does not occur. In a VO, a dynamic partnership is formed with a certain understanding regarding the sharing of resources to accomplish individual but broadly related goals.

In the model described by M. M. Hassan et al. [18], sensor data is received by the gateway nodes of each individual WSN. These gateway nodes serve as the intermediary between the sensor nodes and a publish/subscribe broker. The broker's job is to deliver sensor information to the SaaS cloud computing applications under dynamically changing conditions. In this way, sensor data from each WSN need only be sent to one fixed location, the publish/subscribe broker, instead of directly to multiple SaaS applications simultaneously; this reduces the complexity required in the sensor and gateway nodes and allows the cost of the nodes that comprise WSNs to remain minimal. The publish/subscribe broker consists of four main components: the stream monitoring and processing component (SMPC), the registry component (RC), the analyzer component (AC), and the disseminator component (DC). The SMPC receives the various streams of sensor data and initiates the proper method of analysis for each stream. The AC ascertains which SaaS applications the sensor data streams should be sent to and whether they should be delivered on a periodic or an emergency basis. After making this determination, the AC passes the information on to the DC for delivery to subscribers via SaaS applications. The DC utilizes an algorithm designed to match each set of sensor data to its corresponding subscriptions so that it can ultimately distribute sensor data to users of SaaS applications. SaaS applications are implemented using cloud resources and thus can be run from any machine connected to the cloud; they package the sensor data produced by WSNs into a usable graphical interface that can be examined and sorted by users.
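A minimal content-based publish/subscribe broker in the spirit of this design can be sketched as follows. For brevity, the SMPC/AC/DC responsibilities are collapsed into one class; all names here are illustrative assumptions, not the components' real interfaces.

```python
# Sketch: gateway nodes publish classified sensor readings to a broker;
# SaaS applications subscribe to classes of interest and the broker
# matches and disseminates each reading (the DC's role).

class PubSubBroker:
    def __init__(self):
        self.subscriptions = {}   # data class -> list of subscriber callbacks

    def subscribe(self, data_class, callback):
        """A SaaS application registers interest in a class of sensor data."""
        self.subscriptions.setdefault(data_class, []).append(callback)

    def publish(self, data_class, reading):
        """A gateway node publishes a reading; deliver it to every matching
        subscription and return how many subscribers received it."""
        delivered = 0
        for callback in self.subscriptions.get(data_class, []):
            callback(reading)
            delivered += 1
        return delivered

broker = PubSubBroker()
received = []
broker.subscribe("temperature", received.append)          # a SaaS app's handler
broker.publish("temperature", {"node": 12, "celsius": 21.5})
broker.publish("humidity", {"node": 12, "percent": 40})   # no subscribers yet
```

Note that the publishing gateway never names a receiver; adding or removing SaaS subscribers requires no change on the sensor side, which is exactly the decoupling the model relies on.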

2.4. Typical Applications of Cloud Computing

In describing the benefits of cloud computing, one cannot help being bombarded by reasons related to economics. The two main reasons why cloud computing can be viewed as beneficial and worthwhile to individual consumers and companies alike are its low cost and its efficient utilization of computing resources. Because cloud computing users only purchase the on-demand virtual server time and applications they need, at little to no upfront cost, the capital expenditure associated with purchasing physical network infrastructure is no longer required. Moreover, the physical space needed to house the servers and switches, the IT staff needed to operate, maintain, and upgrade the data center, and the energy costs of operating the servers all become unnecessary.

For small businesses, barriers to entering markets that require significant amounts of computing power are substantially lowered. This means that small companies gain access to computing power that they otherwise would never have been able to acquire. Because computing costs are calculated based on usage, small businesses do not have to take on the risk associated with committing a large amount of capital to purchasing network infrastructure. The ability of a company to grow or shrink its workforce based on business demand, without having to scale its network infrastructure capacity accordingly, is another benefit of on-demand pricing and supply of computing power [21].

While operating completely without any on-site physical servers might be an option for some small companies, many colleges and large companies might not consider this option feasible. However, they could still benefit from cloud computing by constructing a private cloud. Although a private cloud would still require on-site server maintenance and support, and its relatively small size would prevent benefits related to economies of scale from being realized, the efficiency gains from server virtualization would in most cases be large enough to justify the effort. The IT research firm Infotech estimates that distributed physical servers "generally use only 20 percent of their capacity, and that, by virtualizing those server environments, enterprises can boost hardware utilization to between 60 percent and 80 percent" [22]. In addition, the private cloud could be connected to a public cloud as a source of on-demand computing power, which would help handle instances where spikes in computational demand exceed the capacity of the private cloud's virtualized servers. Part of Section V, entitled 'Management Strategies for Multiple Clouds', explains in considerable detail how multiple clouds, whether public or private, can be usefully utilized together in this manner.

2.5. Comparisons of Cloud Computing Models

From the preceding discussion it might seem as if cloud computing has virtually no downsides as long as the cloud is large enough to realize economies of scale; yet cloud computing does have several drawbacks. Because cloud computing is still a new concept in its infant stages of development, there is currently a lack of well-defined market standards. This also means that a constant flow of new cloud providers is entering the cloud computing market, each attempting to secure a large and loyal customer base.

As is the case with any developing technology, a constantly changing marketplace makes choosing the right cloud provider and the proper cloud computing standards difficult. Customers face the risk of choosing a provider that soon goes out of business and takes the customer's data with it, or of backing a cloud computing standard that eventually becomes obsolete. Once an individual consumer or business has chosen a particular provider and its associated cloud, it is currently difficult (but not altogether impossible) to transfer data between clouds if the customer wants to switch providers, and the customer will most likely have to pay for the move in the form of switching costs.

Although virtualization has improved resource efficiency in servers, performance interference effects still occur in virtual environments where multiple VMs share the same CPU cache and translation lookaside buffer (TLB) hierarchy. The increasing number of servers with multi-core processors further compounds this issue. As a result of these interference effects, cloud providers are currently unable to offer their customers an outright guarantee of a specific level of computing performance.

The physical location of a cloud provider's servers determines the extent to which data stored on those servers is confidential. Due to laws such as the Patriot Act, files stored on servers that physically reside in the United States are subject to possible intense examination. For this reason, cloud providers such as Amazon.com allow their customers to choose between using servers located in the United States or in Europe.

Another negative aspect of cloud computing involves data transfer and its associated costs. Currently, because customers access a cloud over the Internet, which has limited bandwidth, cloud providers recommend that users transfer large amounts of information by shipping their external storage devices to the cloud provider; upon receipt of the storage device, the cloud provider uploads the customer's data to the cloud's servers. For instance, Amazon S3 recommends that customers whose data would take a week or more to upload use Amazon Web Services (AWS) Import/Export, in which the customer ships an external storage device to AWS via a mail courier such as United Parcel Service (UPS). For this service, the user is charged $80 per storage device handled and $2.49 per data-loading hour (where partial data-loading hours are billed as full hours) [23]. Until the Internet infrastructure is improved several times over, transferring large amounts of data will remain a significant obstacle that prevents some companies from adopting cloud computing.

There are several different cloud pricing models available, depending on the provider; the main three are tiered, per-unit, and subscription-based pricing. Amazon.com's clouds offer tiered pricing that corresponds to varying levels of offered computational resources and service level agreements (SLAs). SLAs are part of a cloud provider's service contract and define the specific levels of service that will be provided to its customers. Many cloud providers use per-unit pricing for data transfers (as in the preceding paragraph's AWS Import/Export example) and storage space. GoGrid Cloud Hosting, however, measures the server computational resources used in random access memory (RAM) hours [24]. Subscription-based pricing models are typically used for SaaS. Rather than charging users for what they actually use, cloud providers let customers know in advance what they will be charged, so they can accurately predict future expenses.
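The three pricing models can be contrasted with a small sketch. All tier boundaries, rates, and fees below are made-up example numbers, not any provider's actual prices.

```python
# Sketch contrasting the three pricing models named above, with
# illustrative (invented) rates.

def tiered_price(hours):
    """Tiered: the rate depends on which resource/SLA tier you fall in."""
    if hours <= 100:
        return hours * 0.12    # small tier
    if hours <= 1000:
        return hours * 0.10    # medium tier
    return hours * 0.08        # large tier

def per_unit_price(gb_transferred, rate_per_gb=0.15):
    """Per-unit: pay exactly for the units consumed (e.g. GB transferred)."""
    return gb_transferred * rate_per_gb

def subscription_price(months, monthly_fee=49.0):
    """Subscription: a known, predictable charge regardless of usage."""
    return months * monthly_fee
```

The subscription model's appeal is visible in the signatures alone: its cost depends only on time, so future expenses are predictable, whereas the other two vary with actual consumption.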

Because transitioning between clouds is difficult and costly, end-users are somewhat locked in to whichever cloud provider they choose. This presents reliability issues associated with the chosen cloud. Although rare, and in most cases lasting only a matter of hours, some cloud access failures have already occurred with Google's and Amazon.com's services. One such outage occurred with Amazon EC2 as a result of a power failure at both its Virginia data center and its back-up data center in December 2009.

Public Cloud
  Pros: simplest to implement and use; minimal upfront costs; utilization efficiency gains through server virtualization; widespread accessibility; requires no space dedicated to a data center; suited for handling large spikes in workload.
  Cons: most expensive long-term; susceptible to prolonged service outages.

Private Cloud
  Pros: allows complete control of server software updates, patches, etc.; minimal long-term costs; utilization efficiency gains through server virtualization.
  Cons: large upfront costs; susceptible to prolonged service outages; limited accessibility; requires the largest amount of space dedicated to a data center; not suited for handling large spikes in workload.

Hybrid Cloud
  Pros: most cost-efficient through utilization flexibility of public and private clouds; less susceptible to prolonged service outages; utilization efficiency gains through server virtualization; suited for handling large spikes in workload.
  Cons: difficult to implement due to complex management schemes and assorted cloud centers; requires a moderate amount of space dedicated to a data center.

Table 2. Summary of pros and cons for public, private, and hybrid clouds.


This outage caused many East Coast Amazon AWS users to be without service for approximately five hours. While such access issues are inevitable with any data center (including on-site ones), loss of access to a business's cloud could occur at very inopportune moments, effectively shutting down the business's computing capability for the duration of the service outage. This issue has the potential to nullify the hassle-free benefit that many cloud providers tout as a selling point of cloud computing.

2.6. Summary of Cloud Computing Architecture

A common misconception among many of those who would consider themselves "tech-savvy" technology users is that cloud computing is just a fancy term for grid computing. Grid computing is a form of parallel computing in which a group of networked computers forms a single, large virtual supercomputer capable of performing enormous numbers of computational operations that would take normal computers many years to complete. Although cloud computing is in some ways a descendant of grid computing, it is by no means an alternative title. Instead, cloud computing offers software, storage, and/or computing power as a service to its users, whether they are individuals or employees of a company. Due to its efficient utilization of computing resources and its ability to scale with workload demand, cloud computing is an enticing new technology for businesses of all sizes. However, the decision each business faces is not simply whether cloud computing could prove beneficial; each business must also decide what type of cloud it will utilize: public, private, or hybrid. Each cloud type has its own benefits and drawbacks, which are listed in Table 2. For example, the public cloud has the minimum upfront cost because users do not need to invest in infrastructure, but it has the maximum long-term cost since users must pay to lease resources over the long term, while a private cloud user pays less in the long term due to ownership of the cloud.

In recent years, more and more companies have begun to adopt cloud computing as a way to cut costs without sacrificing productivity. These cost savings stem from the little or no upkeep required of their in-house IT departments as well as their ability to forego the huge capital expenditures associated with purchasing servers for a local data center. Even for companies that adopt hybrid clouds, which require some local servers, the number of servers is dramatically reduced because over-provisioning of the data center is no longer necessary. In addition to cost savings, cloud computing allows companies to remain agile and more dynamically responsive to changes in forecasted business. If a company needs to quickly scale the computing power of its network infrastructure up or down, it can simply pay for more on-demand computing resources from cloud providers. This level of scalability is simply not achievable with a local data center alone. As cloud computing technology continues to mature, an increasing number of businesses will be willing to adopt it. For this reason, cloud computing is much more than the latest technological buzzword: it is here to stay.

3. Cloud Computing Security

3.1. Why Security in Cloud Computing?

By offloading data and computation to the cloud, many companies can greatly reduce their IT costs. However, despite the many merits of cloud computing, company owners have begun to worry about security threats, because in a cloud-based computing environment, the provider's employees can easily access, falsify, or divulge the data. Such behavior can be a disaster for a large and well-known company.

Encryption appears to be an ideal way to solve this problem, but customers of a cloud computing system cannot compute directly on encrypted data: the original data must be available in the host's memory, or the host VM cannot run applications on demand. For this reason, good security is very hard to achieve in today's cloud services. For example, Amazon EC2 is one service in which the provider is privileged to read and tamper with customer data, leaving no real security guarantee for the customers who use the service.


Some service providers have developed technical methods aimed at avoiding security threats from the inside. For instance, some providers limit the authority to access and manage the hardware, monitor procedures, and minimize the number of staff who have privileged access to the vital parts of the infrastructure. However, at the provider's back end, an administrator can still access the customer's VM.

Security within cloud computing is an especially worrisome issue because the devices used to provide services do not belong to the users themselves. The users have no control over, nor any knowledge of, what could happen to their data. This is a great concern when users have valuable and personal information stored in a cloud computing service. Personal information could be leaked, leaving an individual user or business vulnerable to attack. Users will not compromise their privacy, so cloud computing service providers must ensure that customer information is safe. This, however, is becoming increasingly challenging because as security developments are made, there always seems to be someone who figures out a way to disable the protection and take advantage of user information. Given these security concerns, cloud computing service providers could possibly face extinction if the problems hindering fail-proof security are not resolved.

Designing and creating error-proof and fail-proof security for cloud computing services falls on the shoulders of engineers, who face many challenges. The services must eliminate the risk of information theft while meeting government specifications; some governments have laws limiting where personal information can be stored and sent. If too much security detail is relayed to users, they may become concerned and decide against cloud computing altogether; if too little is relayed, customers will certainly take their business elsewhere. If a cloud computing service provider's security is breached and valuable user information is gathered and used against its owners, the user could, and likely would, sue the provider, and the company could lose everything. Given these scenarios, engineers certainly have to be flawless in their security mechanism designs.

Engineers cannot fixate only on present issues with cloud computing security; they must also prepare for the future. As cloud computing services become increasingly popular, new technologies are destined to arise. For example, in the future user information will likely be transferred among different service providers. This increases the complexity of the security mechanism, because the information must be tracked and protected wherever it goes. As businesses begin to rely on cloud computing, the speed of services will need to be as high as possible. However, speed cannot come at the cost of lowered security; after all, security is the top concern among cloud computing users. Another aspect of future cloud computing that engineers must be aware of is the growing population of users. As cloud computing services become more popular, the cloud will have to accommodate more users. This raises even more concern about information security and further increases the complexity of security mechanisms. Security designers must track every bit of data, protect information from other users, and still provide a fast and efficient service.

The risk of information theft differs for every cloud computing service and thus should be addressed differently when designing security schemes. For example, services which process and store public information that could also be found in, say, a newspaper need very little security. On the other hand, services which process personalized data about an individual or a business must keep it confidential and secure. This type of data is most often at risk when there is unauthorized access to the service account, lost copies of data throughout the cloud, or security faults. Because of these concerns, a security scheme could be implemented in which all data carry a "tracking device" that limits them to certain locations. Care must be taken, however, to ensure that the data is not transferred or processed in any way without the consent and knowledge of the user.

When preparing to design a cloud computing service, and in particular a security scheme, several things must be kept in mind in order to effectively protect user information. First and foremost, protection of users' personal information is the top priority; keeping this in mind will certainly lead to a better service for customers. The amount of valuable personal information given to a provider should be kept to a minimum. For example, if the service being provided does not require a telephone number, then the user should not have to offer that information to the provider. Where possible, personal information could be encrypted or kept anonymous. Mechanisms which destroy personal data after use, and mechanisms which prevent data copying, could also be implemented. The amount of control that users have over their information should be maximized. For example, if a service requires that some personal information be sent out through the cloud and processed, then the user should be able to control where and how that information is sent. The user should also be able to view their data and decide whether certain information can be sent out through the cloud. Before any data is processed or stored, the user should be asked for consent to carry out the operation. Maximizing customer control and influence will earn user trust and establish a wider consumer base.

Some organizations have been focusing on security issues in cloud computing. The Cloud Security Alliance is a non-profit organization formed to promote the use of best practices for providing security assurance within cloud computing, and to provide education on the uses of cloud computing to help secure all other forms of computing [37]. The Open Security Architecture (OSA) is another organization focusing on security issues. It proposes the OSA pattern [38], an attempt to illustrate core cloud functions, the key roles for oversight and risk mitigation, collaboration across various internal organizations, and the controls that require additional emphasis. For example, the Certification, Accreditation, and Security Assessments series increases in importance to ensure oversight and assurance, given that operations are being "outsourced" to another provider. System and Services Acquisition is crucial to ensure that the acquisition of services is managed correctly. Contingency Planning helps to ensure a clear understanding of how to respond in the event of interruptions to service delivery. The Risk Assessment controls are important for understanding the risks associated with services in a business context.

Regarding practical cloud computing products, Microsoft's Azure cloud computing platform plans to offer a new security structure for its multi-tenant cloud environments as well as private cloud software. Google has a multi-layered security process protocol to secure cloud data; this process has been independently verified in a successful third-party SAS 70 Type II audit. Google is also able to efficiently manage security updates across its nearly homogeneous global cloud computing infrastructure. Intel uses the SOA Expressway Cloud Gateway to enable cloud security; Intel's Cloud Gateway acts as a secure broker that provides security cloud connectors to other companies' cloud products.

3.2. Encryption-on-Demand [26]

The basic idea of encryption-on-demand is to use an encryption-on-demand server that provides an encryption service. For example, when the server receives a request from a user, the website or server generates a unique encryption program, the so-called 'client' program, and sends a package containing that program to the user. How, then, is a client program identified and assigned to a particular client? The scheme uses a tailoring identification (TID): the user forwards the TID to the server, and the server uses the TID to send back the tailor-made copy. How do the users or communication partners agree on the TID? There are many ways to do so: by cell phone, text message, or other communication tools, or by using reference information, such as birth city or birthday, known only to the communication partners. Once both partners have received the client package containing the client program, they can easily encrypt their messages or email. After communicating, both users should discard the client program they used; every time they want to communicate with each other privately, they download the package again.
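The workflow just described can be sketched in code. Everything below is an illustrative assumption of ours, not a detail of [26]: the PBKDF2 key derivation, the hash-counter keystream, and the example TID string are all hypothetical, and the XOR stream is a toy cipher rather than production cryptography.

```python
import hashlib
import secrets

def derive_key(tid: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the shared TID (e.g. a birth-city string)."""
    return hashlib.pbkdf2_hmac("sha256", tid.encode(), salt, 100_000)

def keystream(key: bytes, length: int) -> bytes:
    """Expand the key into a keystream by counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(tid: str, salt: bytes, message: bytes) -> bytes:
    """XOR the message with a TID-derived keystream (toy stream cipher)."""
    ks = keystream(derive_key(tid, salt), len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

# Both partners hold the same TID, so decryption is the same XOR operation.
salt = secrets.token_bytes(16)
ct = encrypt("birth-city:Tuscaloosa", salt, b"meet at noon")
pt = encrypt("birth-city:Tuscaloosa", salt, ct)
```

Because XOR is its own inverse, the same call both encrypts and decrypts; only a partner holding the same TID recovers the plaintext.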

In Figure 6, the clients (both communicating partners) send the UID (the same UID for both partners) to the server, and the server assigns a package that bundles the encryption system and chosen key to the client. The client can then use this encryption-on-demand package to


Figure 6. The client sending the UID to the server [26].

encrypt the message and communicate with the partner.

3.3. Security for the Cloud Infrastructure: Trusted Virtual Data Center Implementation [27, 28]

To manage the hardware and resources within the same physical system, the VMM (virtual machine monitor) can create multiple VMs (virtual machines). For the server, this infrastructure provides a very convenient way to create, migrate, and delete VMs, and it lets the cloud computing concept achieve large scale and cheap services. However, using one physical machine to execute the workloads of all users raises serious security issues. Because this infrastructure requires many similar components, it easily leads to misconfiguration problems. The so-called trusted virtual data center (TVDc) groups VMs with their associated hardware resources, so that each VM knows which resources it may access. TVDc can separate each customer's workloads into different associated virtual machines. The advantages are obvious: 1) no workload can leak to other customers; 2) malicious programs such as viruses cannot spread to other nodes; 3) misconfiguration problems are prevented.

TVDc uses a so-called isolation policy to separate both the hardware resources and users' workloads and data. The isolation policy governs management of the data center, access to the VMs, and switching from one VM to another. A TVDc consists of a set of VMs and the associated resources used for one user's programs. Within a TVD, the VMs carry labels that identify them uniquely; for example, one label corresponds to one customer or that customer's data. The TVDc isolation policy involves two major tasks: (1) labels are used to identify the VMs assigned to each customer, and (2) all of a customer's VMs are allowed to run in the same TVD. Based on the security label, the control management assigns access authority to users. Figure 7 illustrates this principle.
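The label-based isolation check can be sketched as a small policy class. The class and method names below are our own illustration, not TVDc's actual interfaces; the point is simply that a VM may touch a resource only when both carry the same TVD label.

```python
class IsolationPolicy:
    """Grant a VM access to a resource only when both carry the same TVD label."""

    def __init__(self):
        self.vm_label = {}    # VM id -> TVD label (e.g. one label per customer)
        self.res_label = {}   # resource id -> TVD label

    def assign_vm(self, vm: str, label: str):
        self.vm_label[vm] = label

    def assign_resource(self, res: str, label: str):
        self.res_label[res] = label

    def can_access(self, vm: str, res: str) -> bool:
        # Unknown VMs or resources are denied by default.
        return (vm in self.vm_label and res in self.res_label
                and self.vm_label[vm] == self.res_label[res])

policy = IsolationPolicy()
policy.assign_vm("vm-alice", "customer-A")
policy.assign_vm("vm-bob", "customer-B")
policy.assign_resource("disk-1", "customer-A")
# vm-alice may use disk-1; vm-bob is isolated from it.
```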

Figure 8 shows the three basic components in the network. The client (a laptop, desktop, or PDA) can be seen as the data source; its data needs to be processed by the cloud servers. The cloud storage server (CSS) has huge space to store the data. The third-party auditor (TPA) is responsible for data verification. In Figure 8, the client delivers the data to the cloud server, and the admin processes the data before storage. Since the data is not processed on the local computer, the customer needs to verify whether the data is correct. Because the user cannot monitor the data directly, a delegate such as the TPA is needed to monitor it.

The client needs to periodically check the data on the server and make sure that none of the data in the cloud has been modified. In reality, however, we assume the adversary can freely access the storage on the server. For example,


Figure 7. Enabling public verifiability and data dynamics for storage security [27].

Figure 8. The architecture of cloud data storage [27].

the adversary can play the role of a monitor or TPA. In this way, the adversary can easily cheat the user during verification of the data's correctness. Generally, the adversary can use various methods to generate valid acknowledgments and deliver the verification, and such an attack is hard for either the server or the user to detect.

3.4. Towards Trusted Cloud Computing [28]

Figure 9. The simple architecture of Eucalyptus [28].

A traditional trusted computing architecture can provide some degree of security for customers. Such a system can forbid the owner of a host from interfering with the computation. The customer can also run a remote attestation program that reveals whether the host's procedures are secure. If users or customers detect any kind of abnormal behavior from the host, they can immediately terminate their VMs. Unfortunately, such apparently perfect platforms also have some fatal flaws. For example, service providers always present a list of available machines to customers, after which the customer is automatically assigned a machine. Such dynamic machine assignment from the provider backend can introduce security threats that this kind of system cannot solve.

Many cloud providers allow customers to access virtual machines hosted by the service providers. The user or customer can thus be seen as the data source for the software running in the VM at the lower layer, while at the higher layer the server side provides all the applications on demand.

The difficulty of providing an effective security environment for the user stems from the fact that all the data to be processed is executed directly at the higher layers. Briefly, we focus on the lower layer on the customer side, because it is more easily managed.

In Eucalyptus (Figure 9), the system manages multiple clusters in which every node runs a virtual machine monitor that hosts the users' VMs. The CM is the cloud manager, which is responsible for a set of nodes in one cluster. On the user side, Eucalyptus provides a set of interfaces to use the VM assigned to the customer. Every VM needs a virtual machine image (VMI) to launch; before launching a VM, the VMI must be loaded. The CM provides services to add or remove VMIs and users.

The sys admin in the cloud has privileges that make it possible to use the backend to perform various kinds of attacks, such as accessing the memory of a user's VM. As long as the sys admin has root privileges on the machine, he or she can use special software to take malicious actions. Xen supports live migration, switching a VM's physical host; because Xen runs at the provider backend, XenAccess enables the sys admin to run a customer-level process that directly accesses the data in a VM's memory. In this way, the sys admin can mount even more serious attacks, such as cold-boot attacks. Currently this vulnerability is less worrying, because most IaaS (Infrastructure as a Service) providers impose strict limitations to prevent a single person from accumulating all authority. Such a policy can efficiently prevent physical-access attacks.

We assume the sys admins can log in to any machine using root authority. To access a customer's machine, a sys admin can switch a node that is already running a customer's VM to one under his or her control. The TCCP therefore limits the execution of VMs to the IaaS perimeter, so that a sys admin cannot access the memory of a host machine running a VM.

Trusted cloud computing (Figure 10) is based on a design that uses a so-called trusted platform module (TPM) chip holding an endorsement private key (EK); each TPM is bundled with one node in the perimeter. The service provider can use the corresponding public key to verify both the chip and the private key.

At boot time, the host machine generates a measurement list (ML) consisting of hashes of the BIOS, the boot loader, and the software that begins to run on the platform. The ML is stored in the host's TPM. To attest the platform, the remote party asks the TPM in the host machine to produce a message containing the ML and a random nonce K, signed with the private EK. The host sends this message to the remote party, which uses the public key to decrypt the message and authenticate the

Figure 10. The trusted cloud computing platform TC (trusted coordinator) [28].


corresponding host.

Table 3 compares the above three security schemes.
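The boot-time measurement and nonce challenge described above can be sketched as follows. This is a simulation under stated assumptions: HMAC with a shared key stands in for the EK's asymmetric signature, and the class names and the known-good ML value are our own illustrative choices, not the actual TCCP implementation.

```python
import hashlib
import hmac
import secrets

# The verifier's reference value for an untampered software stack (hypothetical).
KNOWN_GOOD_ML = hashlib.sha256(b"bios|bootloader|vmm-v1").hexdigest()

class TPMNode:
    """A node whose TPM holds an endorsement key (EK) and a measurement list (ML)."""

    def __init__(self, software: bytes, ek: bytes):
        self.ml = hashlib.sha256(software).hexdigest()  # measured at boot time
        self._ek = ek                                   # private EK, never leaves the TPM

    def quote(self, nonce: bytes):
        # HMAC over (ML, nonce) stands in for the EK signature.
        tag = hmac.new(self._ek, self.ml.encode() + nonce, "sha256").digest()
        return self.ml, tag

def verify(node: TPMNode, ek_public: bytes) -> bool:
    """Remote attestation: challenge with a fresh nonce, check the ML and the tag."""
    nonce = secrets.token_bytes(16)  # fresh nonce K defeats replayed quotes
    ml, tag = node.quote(nonce)
    expected = hmac.new(ek_public, ml.encode() + nonce, "sha256").digest()
    return hmac.compare_digest(tag, expected) and ml == KNOWN_GOOD_ML
```

A node running modified software produces a different ML hash, so verification fails even though its signature is valid.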

3.5. Privacy Model [29]

Privacy is a fundamental human right, and security schemes in cloud computing services must meet many government regulations and abide by many laws [29]. These laws and regulations are primarily aimed at protecting information which can be used to identify a person (such as a bank account number or social security number). Of course, these stipulations make cloud computing security even more difficult to implement. Many security schemes have been proposed, but the factor of accountability must be included in all systems, regardless of their specific components.

Reasonable solutions to cloud computing security issues involve several elements. First, users must constantly be informed of how and where their data is being sent. Likewise, it is just as important to inform users of how and where their incoming information is received. Second, service providers should give users contractual agreements that assure privacy protection; this type of accountability provides a control mechanism that allows users to gain trust in their providers. Third, service providers must accept a responsibility to their customers to establish security and privacy standards. Having standards will certainly help bring some organization to all security schemes. Lastly, service providers should work with their customers to gather feedback in order to continuously update and improve their security schemes.

It is obvious that cloud computing will greatly benefit consumers as well as Internet providers. For cloud computing applications to be successful, however, customer security must be attained. Presently, the security of cloud computing users is far from certain, and it must be greatly improved in order to earn the trust of more potential customers. One tempting solution, which has been the focus of past work, is to hide user-identifiable information and provide means of anonymous data transfer between the user and a cloud computing service. However, this is not the best way to provide cloud computing security, because users often need to communicate identifiable information to the provider in order to receive the requested service. A concept will now be presented in the context of a group or business participating in a cloud computing service [29].

One suggestion for information privacy in cloud computing services is not to hide identifiable data tracks, but rather to encrypt them. This particular method is based on a vector space model which is used to represent group profiles as well as group member profiles. Suppose that a group, consisting

Security method: Encryption-on-Demand
Major idea: Both sender and receiver share the same TID in order to obtain the client package for encryption.
Pros: Using a randomly generated encryption system is good for security.
Cons: Needs a local machine to encrypt the data.

Security method: Security for the cloud infrastructure: trusted virtual data center implementation
Major idea: Using TVDc, the server side can easily assign different components to different users, preventing data from leaking to other users.
Pros: Reduces the load on any one physical machine and avoids misconfiguration problems; good for users' data security.
Cons: The user should check the data frequently to make sure it has not been changed.

Security method: Towards Trusted Cloud Computing
Major idea: The server generates a random number K and a private key sent to the user for authentication.
Pros: Good security through the use of private and public keys.
Cons: Can still have problems if attackers use a playback attack.

Table 3. The major ideas and differences for 3 security methods [29].


of several members, is participating in information exchange via a cloud computing service. When the members are logged on to the service, any information that each member transfers is continuously tracked. While all information is being tracked, false information is continuously generated, and the actual information and generated false information are randomly mixed together. This mixture disrupts the ability of any potential hacker to obtain user-identifiable information. The false information generated is not completely random; rather, it is constructed to be similar to the actual information in order to further deceive any potential hacker. Furthermore, the generated false data is cleverly constructed so that it appears to have been generated by the users. This method also allows users to adjust the level of desired privacy. Different levels of security, of course, have desirable and undesirable trade-offs (less privacy allows for faster data transfer, etc.).

The vector space model used for this privacy protection method represents user profiles. All information within the vector space model is represented as a vector of weighted terms, where the terms are weighted by level of importance. Suppose some data, d, is represented as an n-dimensional vector d = (w_1, w_2, w_3, \ldots, w_i, \ldots, w_n), where w_i represents the weight of the ith term. There are many ways to calculate the weight of a term, but often the term's frequency and importance are the main factors. The vector space is updated with every website visit or any other information transfer, based upon user preference. For the group-participation case above, the vector space associated with the group profile is a weighted average of all member vectors. Any new data that could possibly be transferred to a user is analyzed in vector form, and if the vector representing this data closely matches the vector representing the user profile, then permission for data transfer is granted. The similarity between some arbitrary data and a user profile, or between two profiles, can be realized by the following relation:

S(u_j, u_k) = \frac{\sum_{i=1}^{n} tu_{ij} \cdot tu_{ik}}{\sqrt{\sum_{i=1}^{n} tu_{ij}^2 \cdot \sum_{i=1}^{n} tu_{ik}^2}} \qquad (4)

where u_j and u_k represent the vectors themselves, tu_{ij} and tu_{ik} represent the ith terms in each vector, and n represents the number of terms contained within each vector.
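A minimal sketch of the cosine similarity in Eq. (4), using hypothetical profile weights; the threshold value is our own illustrative choice.

```python
from math import sqrt

def similarity(u: list, v: list) -> float:
    """Cosine similarity between two weighted-term profile vectors (Eq. 4)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

profile = [0.9, 0.1, 0.4]    # group profile weights (hypothetical)
incoming = [0.8, 0.2, 0.5]   # candidate data in vector form (hypothetical)
# Transfer is permitted when similarity exceeds a chosen threshold.
allowed = similarity(profile, incoming) > 0.8
```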

The generator used to create false information is a very important part of this non-anonymous privacy protection method. This generator, called the Transaction Generator, references a database of terms closely related to the terms associated with group interests and the service being provided. Whenever information needs to be protected, the Transaction Generator enters terms from the database into a search engine and then randomly activates one of the thousands of web sites resulting from the search, generating a complete transfer of false data (false only relative to the information the user actually requested or sent). The number of false transactions generated can be controlled by the user (or group); in the method being described, it is represented by Tr. The generator does not, however, produce exactly Tr falsified transactions for each user information exchange; rather, it cleverly generates Tr falsified transactions on average per exchange. Thus any predictability is eliminated and security is maximized. To guarantee that potential hackers are distracted, the majority of falsified transactions should consist of web pages which are not directly related to the actual information, but rather to its general idea; otherwise, hackers could accidentally be given a route to identifiable information. Each time the generator produces a falsified transaction, a vector of weighted terms is built up which can be used to strengthen the user or group profile. An illustration of the transaction generation process is shown in Figure 11.
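The "Tr on average, not exactly Tr" behavior can be sketched as follows. The uniform count distribution and the term database are our own assumptions, not details of [29]; any distribution with mean Tr would serve.

```python
import random

def fake_transactions(term_db, tr_mean, rng=random):
    """Emit a random number of decoy transactions averaging tr_mean per real one."""
    # Uniform on [0, 2*Tr] has mean Tr, so the per-exchange count is unpredictable
    # while the long-run average stays at Tr.
    count = rng.randint(0, int(2 * tr_mean))
    return [rng.choice(term_db) for _ in range(count)]

# Hypothetical decoy-term database loosely related to the group's interests.
TERM_DB = ["cloud storage", "market news", "travel deals", "sports scores"]
decoys = fake_transactions(TERM_DB, 3)
```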

Three calculated parameters are used to update and improve privacy protection each time a user exchanges information and a falsified transaction is made: the Internal Group Profile, the Faked Group Profile, and the External Group Profile. A mechanism called the Group Profile Meter is implemented in this method of non-anonymous privacy protection for a group of users. The Group Profile Meter receives a vector of weighted terms, V^U_{t_U},


Figure 11. Falsified transaction generation process for privacy protection [29].

from each user information exchange. Using these vectors, a parameter called the Internal Group Profile (IGP) is computed. The Internal Group Profile at any time t_U is computed using the following relation:

IGP(t_U) = \sum_{i=0}^{P_r - 1} V^U_{t_U - i} \qquad (5)

where P_r represents the number of previous vectors in the profile. A similar mechanism called the Faked Group Profile (FGP) is also employed in this privacy protection method. Like the Internal Group Profile, the Faked Group Profile is calculated using information provided by the Group Profile Meter. The Faked Group Profile at any time t_T is calculated using the following relation:

FGP(t_T) = \sum_{i=0}^{P_r \times T_r - 1} V^T_{t_T - i} \qquad (6)

where P_r and T_r have previously been defined, and V^T_{t_T} is a vector of weighted terms received from the Group Profile Meter each time a falsified transaction is made by the Transaction Generator. The Group Profile Meter assembles a parameter called the External Group Profile each time a user or false-data vector is received. The External Group Profile (EGP) at any time t is simply calculated as follows:

EGP(t) = IGP(t_U) + FGP(t_T) \qquad (7)

where EGP(t) is updated any time the Internal or Faked Group Profiles change. Additionally, the Group Profile Meter can compare the Internal and External Group Profiles by calculating a similarity parameter. The similarity between IGP and EGP is calculated using the following relation:

S(IGP, EGP) = \frac{\sum_{i=1}^{n} tigp_i \cdot tegp_i}{\sqrt{\sum_{i=1}^{n} tigp_i^2 \cdot \sum_{i=1}^{n} tegp_i^2}} \qquad (8)

where tigp_i is the ith term contained within the IGP vector, tegp_i is the ith term contained within the EGP vector, and n is the number of terms contained within each vector. All of the parameters described allow the non-anonymous privacy protection method to be continuously updated and improved in order to provide maximum security for user-identifiable information.
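The profile bookkeeping of Eqs. (5)-(7) can be sketched as sliding-window sums over the most recent vectors. The function names and windowing details are our own illustration of the equations, not the paper's implementation.

```python
def vec_sum(vectors):
    """Element-wise sum of equal-length weighted-term vectors."""
    return [sum(col) for col in zip(*vectors)]

def igp(user_vectors, pr):
    """Internal Group Profile: sum of the last Pr user vectors (Eq. 5)."""
    return vec_sum(user_vectors[-pr:])

def fgp(fake_vectors, pr, tr):
    """Faked Group Profile: sum of the last Pr*Tr decoy vectors (Eq. 6)."""
    return vec_sum(fake_vectors[-pr * tr:])

def egp(igp_vec, fgp_vec):
    """External Group Profile: what an outside observer sees (Eq. 7)."""
    return [a + b for a, b in zip(igp_vec, fgp_vec)]
```

Because the EGP is the element-wise sum of real and decoy profiles, an observer sees only the mixture, never the IGP alone.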

3.6. Intrusion Detection Strategy [30]

By now it has become very apparent that cloud computing services can be very beneficial to users, allowing better economic efficiency for users as well as service providers. However, maximum efficiency can only be achieved when the occurrence of problems within cloud computing services is at a minimum, and most of these problems take the form of security issues. Cloud computing services will not increase in popularity until security issues are resolved. Presently, security within cloud computing is far from perfect and certainly not fail-proof, and cloud services will only be practical when users feel safe. One possible solution for safely transferring information is the development of an intrusion detection system [30].

Current intrusion detection systems are quite complicated and have much room for improvement. Their complexity arises from the variety of intrusions, which has caused the systems to be built up over time. Cloud computing service providers rely heavily on these intrusion detection systems, but at the same time they face the issue of trading good security against efficient performance of the service. Due to the ongoing efforts of hackers, intrusion detection systems must often be updated and reworked. One proposal for improving intrusion detection systems will now be presented; it is beneficial for detecting intrusions within cloud computing services and easily adapts to ever-changing attacks.

The proposed intrusion detection system is based on a strategy implementing a statistical model for security, designed to be flexible and adaptable to the efforts of attackers. Characteristics of the network involved in the cloud computing service are expressed as random variables whose value ranges are limited so that attacks can be identified. When a characteristic, and thus a random variable, changes, an attack is being attempted; when the values of variables fall outside their intervals, an intrusion has occurred. If the change point in time can also be identified, then the intrusion can be successfully detected. The system is organized through simulations using trace data. All attacks are arranged in a sample space, Ω, which is then divided into smaller, mutually exclusive subsets; these subsets make the system more easily manageable and allow the intrusion detection algorithm to be much more efficient. An index for a series of characteristics can be represented by X_i, where X_i is a vector with p components and p is always greater than 1. Intrusion detection can then be realized by the following change-point relation:

X_i \approx \begin{cases} N_p(\mu_0, \Sigma_0), & i \le \nu \\ N_p(\mu_1, \Sigma_1), & i > \nu \end{cases} \qquad (9)

where \nu is the position of the attack, \mu is the mean value, and \Sigma is the nonsingular covariance. Each time the system changes (i.e., an attack is made), the deviation can be calculated by the following relation:

Y_k = \left[ \frac{k(n-k)}{n} \right]^{1/2} \left( \bar{X}_{0,k} - \bar{X}_{k,n} \right) \qquad (10)

where

\bar{X}_{0,k} = \frac{1}{k} \sum_{i=1}^{k} X_i \qquad (11)

and k is the position of the change. The test statistic is represented by T^2, which is given by the following relation:

T_k^2 = Y_k' W_k^{-1} Y_k, \quad k = 1, 2, \ldots, n-1 \qquad (12)

where W_k is defined by the following relation:

W_k = \frac{ \sum_{i=1}^{k} (X_i - \bar{X}_{0,k})(X_i - \bar{X}_{0,k})' + \sum_{i=k+1}^{n} (X_i - \bar{X}_{k,n})(X_i - \bar{X}_{k,n})' }{ n-2 } \qquad (13)

The overall test statistic is the maximum deviation over all candidate change points:

T_f^2 = \max_{k} T_k^2, \quad k = 1, 2, \ldots, n-1 \qquad (14)

To test for a change in the covariance as well as the mean value, the hypothesis H_0: \Sigma_0 = \Sigma_1 is used. The test statistic then becomes:

G^* = -2 \log(\lambda) = \sum_i (n_i - 1) \log \frac{|\hat{\Sigma}_i|}{|\hat{\Sigma}_{pooled}|} \qquad (15)

where \lambda is calculated by the following relation:

\lambda = \frac{ |\hat{\Sigma}_0|^{k/2} \cdot |\hat{\Sigma}_1|^{(n-k)/2} }{ |\hat{\Sigma}|^{n/2} } \qquad (16)


where \hat{\Sigma} is the estimate of \Sigma. If the expected value as well as the variance are both found to change, then \mu_0 \ne \mu_1 and \Sigma_0 \ne \Sigma_1. Although this approach has been found to perform well in detecting intrusions under experimental analysis, its reliable performance on the market is yet to be confirmed.
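In the univariate case (p = 1), the change-point statistic of Eqs. (10)-(14) reduces to scalar arithmetic and can be sketched directly; the function names and the example traffic series are our own illustrative assumptions.

```python
def mean(xs):
    return sum(xs) / len(xs)

def t2_statistic(x, k):
    """Change statistic T^2_k at split point k (Eqs. 10-13, univariate p = 1)."""
    n = len(x)
    left, right = x[:k], x[k:]
    m_l, m_r = mean(left), mean(right)
    # Eq. (10): scaled difference of the two segment means.
    y = (k * (n - k) / n) ** 0.5 * (m_l - m_r)
    # Eq. (13): pooled within-segment variance (scalar W_k).
    w = (sum((v - m_l) ** 2 for v in left)
         + sum((v - m_r) ** 2 for v in right)) / (n - 2)
    # Eq. (12): T^2_k = Y_k' W_k^{-1} Y_k, here just y^2 / w.
    return y * y / w if w else float("inf")

def detect(x):
    """T^2_f = max over k (Eq. 14); the arg max is the suspected attack point."""
    stats = {k: t2_statistic(x, k) for k in range(1, len(x))}
    return max(stats, key=stats.get)

traffic = [1.0, 1.1, 0.9, 1.0, 5.2, 5.0, 5.1]  # hypothetical shift at index 4
```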

3.7. Dirichlet Reputation Model [31]

As cloud computing becomes more popular, more uses for the service are going to be developed. One newly found use is in services which require users to transfer information among one another. The fact that users of most of these services do not know the user to whom they are sending information gives rise to many more security concerns than before, and these concerns turn into fear when users must transfer valuable information such as social security numbers or bank account numbers. In order to make cloud computing services appeal more to users, trust must be established between users and service providers [31].

Figure 12. Dirichlet reputation model [32].

One possible solution to these important security issues is the integration of a system known as Dirichlet reputation, shown in Figure 12. This system is implemented to observe the behavior of all users participating in a particular service. Basically, users establish trust by interacting with other users in an acceptable manner. If the Dirichlet reputation system discovers that a user is behaving suspiciously or unacceptably, then the user will be punished and possibly barred from participating in the cloud computing service. A numerical value representing reputation is assigned to each user. This value is calculated from several factors which are weighted differently and which, of course, can change with user behavior. The reputation value consists of a first-hand reputation and a second-hand reputation. The first-hand reputation comes from direct observation of the user, whereas the second-hand reputation is acquired by sharing first-hand reputations among other users. Each user participating in the service must report a trust value for each other user with whom they communicate when a second-hand reputation is acquired.

To illustrate the Dirichlet reputation, consider two users, 1 and 2, participating in a cloud computing service. Suppose that user 1 observes some mischievous behavior by user 2, such as the spreading of a virus. The behavior of user 2 is assumed to follow the Dirichlet distribution with a probability of F12. Dir(α) denotes the Dirichlet distribution, where α represents a vector of real numbers greater than 0. This probability distribution function is used to calculate the first-hand reputation. Bayes' theorem is used to compute this reputation, and the typical representation is shown below:

P(θi | D) = P(D | θi) P(θi) / P(D) = Dir(θi | αi1 + Ni1, . . . , αiri + Niri)    (17)

In the above relation, D represents the new data provided by user 1 and N represents the instantaneous occurrences of new data. The Dirichlet reputation system recognizes only three possible types of behavior: friendly, selfish, and malicious, denoted θ1, θ2, and θ3 respectively. Observed sent or received packets can be expressed mathematically by the following two relations:

X = (X1, . . . , Xk) ∼ Dir(α),    α0 = α1 + · · · + αk    (18)

where k represents the number of parameters under examination within the distribution function Dir(α) which, in this particular example, is always equal to 3.
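Relations (17) and (18) amount to a simple counting update: the posterior Dirichlet parameters are the priors plus the observed behavior counts, and dividing by α0 gives the expected reputation. The sketch below illustrates this; the variable names and the uninformative prior are our assumptions, not details from [31].

```python
# Sketch of the Dirichlet first-hand reputation update of Eq. (17):
# the posterior parameters are the prior parameters alpha_i plus the
# observed counts N_i. Here k = 3 behavior classes: friendly, selfish,
# malicious. Names and the prior are illustrative, not taken from [31].

def dirichlet_update(alpha, counts):
    """Posterior Dirichlet parameters: alpha_i + N_i for each behavior class."""
    return [a + n for a, n in zip(alpha, counts)]

def expected_reputation(alpha):
    """Expected share of each behavior: alpha_i / alpha_0 (Eq. 18)."""
    alpha0 = sum(alpha)  # alpha_0 = alpha_1 + ... + alpha_k
    return [a / alpha0 for a in alpha]

prior = [1.0, 1.0, 1.0]      # uninformative starting point
observed = [8, 1, 1]         # 8 friendly, 1 selfish, 1 malicious observations
posterior = dirichlet_update(prior, observed)
print(posterior)                       # [9.0, 2.0, 2.0]
print(expected_reputation(posterior))  # friendly share 9/13, about 0.69
```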

Page 25: A Review on Cloud Computing: Design Challenges in Architecture

A Review on Cloud Computing: Design Challenges in Architecture and Security 49

Figure 13. General procedure for calculation of reputation and trust values [32].

In addition to the reputation values calculated using the relations above, the Dirichlet reputation system also implements a trust value (Figure 13). The trust value reflects how trustworthy a user's reports about other users are. Like the reputation values, the trust values are computed using Bayes' theorem, but only two possible outcomes can result: trustworthy or not trustworthy. The probability distribution function used to calculate trust values is a special case of the Dirichlet distribution known as the Beta distribution. The trust that user 1 has for user 2 is denoted T12 ∼ Beta(α, β), where α represents trustworthy and β represents not trustworthy. When users start out in a cloud computing service, α = 1 and β = 1 to establish a starting point. Each time a trustworthy report is made for a user, the trust value for that user changes as α = α + 1, and each time a non-trustworthy report is made for a user, it changes as β = β + 1. The trust value that user 1 establishes for user 2 is given by the following relation:

T12 = Expectation(Beta(α, β)) = α / (α + β)    (19)

The overall reputation value used for evaluation is calculated by blending reports from all users in an environment. Figure 13 illustrates the general procedure by which the Dirichlet reputation system computes reputation and trust values.
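The Beta trust update can be instantiated directly from relation (19). Only the α/β counting and the expectation α/(α + β) come from the text; the class and method names below are our illustrative assumptions.

```python
# Sketch of the Beta trust value of Eq. (19): alpha counts trustworthy
# reports, beta counts non-trustworthy reports, both starting at 1, and
# the trust value is the Beta expectation alpha / (alpha + beta).
# The class and method names are illustrative, not taken from [31].

class TrustValue:
    def __init__(self):
        self.alpha = 1  # trustworthy reports (plus the starting point)
        self.beta = 1   # non-trustworthy reports (plus the starting point)

    def report(self, trustworthy):
        """Record one report about the rated user."""
        if trustworthy:
            self.alpha += 1
        else:
            self.beta += 1

    def value(self):
        """T12 = Expectation(Beta(alpha, beta)) = alpha / (alpha + beta)."""
        return self.alpha / (self.alpha + self.beta)

t12 = TrustValue()
print(t12.value())       # 0.5 -- neutral starting point (alpha = beta = 1)
for _ in range(3):
    t12.report(True)     # three trustworthy reports about user 2
t12.report(False)        # one non-trustworthy report
print(t12.value())       # 4 / 6, about 0.67
```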

3.8. Anonymous Bonus Point System [32]

In today's fast-paced world, and with cloud computing technologies on the rise, integrating these services with mobile devices has become a necessity [32]. Users now demand mobile services with which they can conduct business, make purchases, trade in the stock market, or participate in other services that require valuable information to be transferred. These services operate using complex communication schemes and involve multiple nodes for data transfer. Often, while participating in cloud computing services, users must transfer information to and from individuals or groups whom they do not know. This, of course, presents a situation that is difficult to account for when designing cloud computing security. These types of service systems are especially susceptible to attacks simply because it is easy for attackers to gain access and very difficult for them to be detected and identified. Nonetheless, if the designers of these mobile cloud computing services are clever with their security schemes, attackers can be deterred. A cloud computing service with mobile users, along with a proposed security scheme, will now be examined in the context of personal digital assistants.

Consider a network of three mobile devices (personal digital assistants) capable of communicating with each other over a distance on the order of three hundred to four hundred feet. Denote these devices UserA, UserB, and UserC. Assuming that all of these devices have access to the cloud, data can be transferred from one user to another on a multi-hop basis (passing information along one user at a time). This method conserves much energy and prevents the cloud from becoming cluttered. Since users must donate energy from their devices in order to serve other users, each donation earns the donating user credit which may later be exchanged for something of monetary value. With the basic idea clear, consider a more complex and more practical situation involving six users of a particular cloud computing service capable of communicating with each other over a distance of several miles. Denote these users u1, u2, u3, u4, u5, and u6. Consider an example situation where user six, u6, has requested information from the cloud. Figure 13 shows how this information is transferred to u6 using


a multi-hop communication system. The figure shows all possible paths in this particular network from the cloud to u6. The numbers between nodes represent the credit values passed along to each intermediate user. However, in this particular system, credits can only be awarded if the information reaches the intended recipient.
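The credit rule can be sketched as a small ledger update: intermediate users on the delivery path earn their per-hop credit only when the data reaches the recipient. The path, credit values, and function name below are illustrative assumptions, not details from [32].

```python
# Sketch of the multi-hop credit scheme: credits for the intermediate
# users on a path are awarded only if the information actually reaches
# the intended recipient. Paths and credit values are illustrative.

def award_credits(path, credits, delivered, ledger):
    """Credit each intermediate user on the path, but only on delivery."""
    if not delivered:
        return ledger  # no delivery, no credit
    for user, credit in zip(path, credits):
        ledger[user] = ledger.get(user, 0) + credit
    return ledger

ledger = {}
# Successful path cloud -> u2 -> u4 -> u6, with per-hop credit values:
award_credits(["u2", "u4"], [2, 3], delivered=True, ledger=ledger)
# A failed attempt via u1 and u3 earns nothing:
award_credits(["u1", "u3"], [1, 1], delivered=False, ledger=ledger)
print(ledger)  # {'u2': 2, 'u4': 3}
```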

This system certainly has flaws and drawbacks, including speed and efficiency, but the most concerning drawback is security. Ordinary cloud services provide means to examine where, and from whom, information came. However, due to its high susceptibility to attacks, this particular system keeps that type of information confidential. Two primary methods are used to achieve confidentiality. In the first method, the sender of information is kept anonymous to every user except the recipient; even then, the recipient must request identification from the sender. In the second method, the sender uses an alias so that all parties involved can know where the information came from, but cannot view the true identity of the sender. Both of these methods prevent others from knowing who is sending information, and thus where to look to find valuable information. Another important security issue involves users' saved data. Cloud computing services often require users to construct profiles in which personal information is stored, which obviously attracts attackers. This system prevents these kinds of attacks on devices (personal digital assistants in this case) by allocating IP and MAC addresses to each device, which would at least prevent unauthorized access to information. To compete in the cloud computing market, service providers must put every effort into ensuring a safe and secure network.

3.9. Network Slicing [33]

Although a recent endeavor, cloud computing has proved to be a very useful and convenient means of providing services. However, as with most other new products, there are many aspects of cloud computing which need to be improved. One particular concern is that a single consumer, depending on their service, may be able to occupy more than their share of the useful bandwidth. This is obviously a major problem, since cloud computing service providers are trying to make a profit. In developing a scheme to essentially multiplex customers, security becomes more complex and more important. Basically, the task at hand is to conserve resources, which in this case means the bandwidth of a cloud computing network. A possible solution proposed here is to utilize virtual machines in order to "slice" the network for bandwidth conservation. This is a relatively new method, and little study and testing have been conducted in this effort. Each user would install a group of virtual machines on their computer, while being able to use a wide range of operating systems. A virtual machine is used for each specific cloud computing service, so the number of virtual machines a user must install depends on the number of services they request. The setup of this virtual machine usage is fairly simple and may be thought of as an information relay. An illustration of this setup is shown in Figure 14.

Figure 14. Setup of virtual machine network slicing [33].

The hypervisor serves as a control mechanism that interfaces the operating system with the hardware. The domains represent different users; information is passed from one virtual machine to the next. The host driver then communicates with the hardware to perform the requested tasks. It is also suggested to implement a transmission control protocol mechanism. Within the hypervisor, bridges can be used to route information to outside sources. Rather than slicing the network by prioritizing packets, this system slices the network by implementing a schedule. The scheduling scheme


seems to provide a more balanced network. As traffic in the network fluctuates, the scheduler makes adjustments to accommodate these changes. Suppose that a particular set of "n" host users owns a number of virtual machines. Let the following relation represent a vector containing these users:

Gi = {A1, A2, A3, A4, . . . , An} (20)

Furthermore, suppose that each user has "m" virtual machines installed on their computer. Let the following equation represent a vector containing these machines:

Ai = {V1, V2, V3, V4, . . . , Vm} (21)

If information is transferred to or from one of the users, the information would likely travel through (m − 1) ∗ n virtual machines. For this reason, security in this system must be near flawless. For this network slicing system to be effective, every possible measure must be taken to prevent malicious attacks. Much work remains to improve this system, but the payoff would certainly be worth the effort.
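Relations (20) and (21), together with the (m − 1) ∗ n traversal count, can be made concrete with a small sketch (the host and VM naming is ours, not from [33]):

```python
# Sketch of the slicing vectors of Eqs. (20)-(21): n host users, each with
# m virtual machines (one per subscribed service), and the (m - 1) * n
# virtual machines a transfer may traverse. Naming is illustrative.

n, m = 4, 3  # 4 host users, 3 VMs each

G = [f"A{i}" for i in range(1, n + 1)]                               # Eq. (20)
A = {host: [f"{host}-V{j}" for j in range(1, m + 1)] for host in G}  # Eq. (21)

traversed = (m - 1) * n  # VMs a transfer would likely travel through
print(G)           # ['A1', 'A2', 'A3', 'A4']
print(A["A1"])     # ['A1-V1', 'A1-V2', 'A1-V3']
print(traversed)   # 8
```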

3.10. Discussions on Cloud Computing Privacy [29-36]

In the above discussions, five security proposals have been examined and discussed in detail, including the effects that each would have on the cloud computing industry. Perhaps these concepts could be used to derive a new method that implements the strong aspects of each of the five schemes. Security within cloud computing is far from perfect and has recently become a very puzzling issue to resolve. The complexity arises from the constant improvement of attackers' knowledge and accessibility. Before cloud computing services become desirable, customers must feel safe with their information transfer. Each of the proposals examined has beneficial properties in its own respect. For example, the first model described (the privacy model) implements an economically efficient method, while the CP intrusion detection system focuses more effort toward attack prevention. When designing a security scheme for cloud computing services, there is an underlying dilemma: security cannot come at the cost of other desirable aspects such as data speed or affordability. To counter this dilemma, some security schemes, like the Dirichlet reputation system, give the user a great deal of control over the level of security. Although some have more than others, all five of the aforementioned security schemes have very important and valuable concepts. Table 4 lists the main favorable aspects, or pros, of each of the five security methods examined.

To combine the benefits these security proposals provide, we propose a hybrid security solution as follows. First, we group users into different domains based on their demands and the services they need. A network slicing hypervisor serves as a control mechanism. We use a vector space model to represent domain profiles as well as domain member profiles. After logging on, members transfer information, which is tracked. Meanwhile, false information is continuously generated and randomly mixed with the tracked information. Security parameters can be customized by the members.

Privacy Model
• Provides very strong encryption of information
• Users can easily customize their security parameters
• Provides an organized method which can be implemented easily

CP Intrusion Detection
• Protects against a wide variety of intrusion schemes
• Provides excellent prevention of attacks

Dirichlet Reputation
• Provides a sophisticated system of checks and balances
• Denies attackers the ability to adapt
• Provides a great deal of user control

Anonymous Bonus Point
• Best suited for small distances, thus users are well hidden from attackers
• Credit rewards provide incentive for users to participate

Network Slicing
• Provides attacker confusion
• Conserves network bandwidth
• Fast data rates are easily attainable

Table 4. Pros of cloud computing security strategies.


Privacy Model
• Errors and bugs are difficult to find and correct
• Services can become bogged down by distracting information
• System is only preventative, thus it does not protect against aggressive attackers

CP Intrusion Detection
• Must be updated frequently to confuse attackers
• May erroneously detect and terminate non-intrusive information

Dirichlet Reputation
• Relies on a complicated strategy that is difficult to implement
• User trust yields susceptibility to deceptive customers
• Performance is solely dependent upon user participation

Anonymous Bonus Point
• Data speed is drastically reduced
• Provides little intrusion protection
• Can only be implemented in wireless applications

Network Slicing
• Due to the relay structure, protection is unreliable
• Can become expensive when implemented in large networks

Table 5. Cons of cloud computing security strategies.

During information transfer, the system also enhances its ability to counter attacks with an intrusion detection system based on a statistical security model that incorporates characteristics of the network. A Dirichlet reputation index is associated with each member. Using the receiver's index, every sender can determine whether or not to transfer the data if the receiver behaves suspiciously. We will study further details when implementing this hybrid security proposal in our future work.
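The reputation-gated transfer at the heart of this hybrid proposal can be sketched as follows. The threshold, the index computation, and all names are our assumptions; the proposal above fixes only the idea that a sender consults the receiver's Dirichlet reputation index before transferring.

```python
# Sketch of the reputation-gated transfer in the hybrid proposal: every
# member carries a Dirichlet reputation index, and a sender transfers
# data only if the receiver's index clears a threshold. The threshold
# and the index computation are illustrative assumptions.

def reputation_index(alpha):
    """Expected 'friendly' share of a member's Dirichlet parameters."""
    return alpha[0] / sum(alpha)

def transfer(data, receiver_alpha, threshold=0.5):
    """Withhold the data if the receiver behaves suspiciously."""
    if reputation_index(receiver_alpha) < threshold:
        return None  # receiver's reputation index too low: do not transfer
    return data

print(transfer("record", [9.0, 2.0, 2.0]))  # 9/13 > 0.5 -> 'record' is sent
print(transfer("record", [2.0, 5.0, 6.0]))  # 2/13 < 0.5 -> None (withheld)
```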

Just as each security scheme has its own favorable parameters, each scheme also has its own unfavorable parameters, or downfalls. Because no cloud computing security method will ever be free of flaws, an extremely important consideration when designing security schemes is to balance the favorable aspects against the unfavorable ones. Yet this is certainly more complicated than it may seem at first glance. A number of factors must be considered when attempting to balance strong points and downfalls. For example, a system such as the CP intrusion detection strategy, which provides very strong protection against attackers, must also allow the service to be efficient both in performance and in cost. Any security system developed will inevitably exhibit flaws and downfalls, but this does not mean that the system is unusable. Just as users desire certain parameters in their security scheme, they are also unconcerned about some of the downfalls. For example, a large corporation is likely not to be concerned about the cost of an expensive service if excellent security is included, whereas an individual is less likely to spend as much money on a comparable system. To compare the strong points and downfalls of the aforementioned cloud computing security schemes, Table 5 displays a list of the major flaws, or cons, of these systems.

The aforementioned security proposals for cloud computing are among the best-performing methods in experimental analysis. Although not perfect, and in some cases not even thoroughly tested, these security schemes certainly encompass good solutions to the major issues within cloud computing security. As mentioned before, none of the five security methods examined solves all problems, but future work may combine sub-schemes from each of these methods. Of course, one central "template" for developing cloud computing security schemes is highly unlikely, so research must include many different proposals such as the five discussed previously. Cloud computing is a fairly recent endeavor and will thus require time to develop reasonable and efficient security systems. Security methods within cloud computing will inevitably have to be updated frequently, more so than security implementations for other applications, due to the ongoing efforts of attackers. The future of the entire cloud computing industry essentially depends on security schemes that provide privacy and protection for users.

4. Conclusions

Cloud computing will be a major force in large-scale and complex computing in the future. In this paper, we presented a comprehensive survey of the concepts, architectures, and challenges of cloud computing. We provided a detailed introduction to cloud computing architectures at every level, followed by a summary of the challenges in cloud computing, in the


aspects of security, virtualization, and cost efficiency. Among them, security issues are the most important challenge in cloud computing. We comprehensively surveyed the security issues and the current methods addressing the security challenges. This survey provides a useful introduction to cloud computing for researchers interested in the field.

References

[1] G. GRUMAN, E. KNORR, What cloud computing really means. InfoWorld, (May, 2009). [Online]. Available: http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031

[2] L. SIEGELE, Let it rise: A survey of corporate IT.The Economist, (Oct., 2008).

[3] P. WATSON, P. LORD, F. GIBSON, P. PERIORELLIS, G. PITSILIS, Cloud computing for e-science with CARMEN, (2008), pp. 1–5.

[4] R. M. SAVOLA, A. JUHOLA, I. UUSITALO, Towards wider cloud service applicability by security, privacy and trust measurements. International Conference on Application of Information and Communication Technologies (AICT), (Oct., 2010), pp. 1–6.

[5] M.-E. BEGIN, An EGEE comparative study: Grids and clouds – evolution or revolution. EGEE III Project Report, vol. 30 (2008).

[6] B. ROCHWERGER, D. BREITGAND, E. LEVY, A. GALIS, K. NAGIN, I. M. LLORENTE, R. MONTERO, Y. WOLFSTHAL, E. ELMROTH, J. CACERES, M. BEN-YEHUDA, W. EMMERICH, F. GALAN, The Reservoir model and architecture for open federated cloud computing. IBM Journal of Research and Development, vol. 53, no. 4 (July, 2009), pp. 1–11.

[7] L. M. VAQUERO, L. RODERO-MERINO, J. CACERES, M. LINDNER, A break in the clouds: towards a cloud definition. SIGCOMM Comput. Commun. Rev., 39, 1 (December 2008).

[8] OPENQRM. [Online]: http://www.openqrm.com

[9] AMAZON ELASTIC COMPUTE CLOUD (AMAZON EC2) [Online]. Available: http://aws.amazon.com/ec2/

[10] L. WANG, G. VON LASZEWSKI, A. YOUNGE, X. HE, M. KUNZE, J. TAO, C. FU, Cloud Computing: A Perspective Study. New Generation Computing, vol. 28, no. 2 (2010), pp. 137–146.

[11] J. MURTY, Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB, O'Reilly Media, 2008.

[12] E. Y. CHEN, M. ITOH, Virtual smartphone over IP. In Proceedings of IEEE International Symposium on A World of Wireless, Mobile and Multimedia Networks, (2010), pp. 1–6.

[13] E. CIURANA, Developing with Google App Engine,Spring, 2010.

[14] C. D. WEISSMAN, S. BOBROWSKI, The design of the force.com multitenant internet application development platform. In Proceedings of the 35th SIGMOD international conference on Management of data, (SIGMOD '09), (2009), pp. 889–896.

[15] R. T. DODDA, A. MOORSEL, C. SMITH, An Ar-chitecture for Cross-Cloud System Management,unpublished.

[16] B. SOTOMAYOR, R. MONTERO, I. LLORENTE, I. FOSTER, Resource Leasing and the Art of Suspending Virtual Machines. In IEEE International Conference on High Performance Computing and Communications (HPCC09), (June 2009), pp. 59–68.

[17] H. CHEN, G. JIANG, A. SAXENA, K. YOSHIHIRA,H. ZHANG, Intelligent Workload Factoring for AHybrid Cloud Computing Model, unpublished.

[18] M. M. HASSAN, E. HUH, B. SONG, A Framework of Sensor-Cloud Integration Opportunities and Challenges. Presented at the International Conference on Ubiquitous Information Management and Communication (ICUIMC), (January 15-16, 2009), Suwon, South Korea.

[19] K. KEAHEY, T. FREEMAN, Contextualization: Pro-viding One-Click Virtual Clusters. IEEE Interna-tional Conference on eScience, (2008), pp. 301–308.

[20] D. BERNSTEIN, E. LUDVIGSON, K. SANKAR, S. DIAMOND, M. MORROW, Blueprint for the Intercloud – Protocols and Formats for Cloud Computing Interoperability. In Proceedings of the 2009 Fourth International Conference on Internet and Web Applications and Services (ICIW '09), Washington, DC, USA, pp. 328–336.

[21] J. POWELL, Cloud computing – what is it and whatdoes it mean for education?, unpublished.

[22] SERVER VIRTUALIZATION FAQ [Online]. Available: http://www.itmanagement.com/faq/server-virtualization/

[23] K. KEAHEY, Cloud Computing for Science. LectureNotes in Computer Science, vol. 5566 (2009),pp. 478.

[24] A. LENK, M. KLEMS, J. NIMIS, T. SANDHOLM, What's inside the Cloud? An architectural map of the Cloud landscape. ICSE Workshop on Software Engineering Challenges of Cloud Computing, (2009), pp. 23–31.

[25] F. TUSA, M. PAONE, M. VILLARI, A. PULIAFITO, CLEVER: A cloud-enabled virtual environment. In IEEE Symposium on Computers and Communications (ISCC), (22-25 June 2010), pp. 477–482.

[26] G. SAMID, Encryption-On-Demand: Practical and Theoretical Considerations. School of Engineering, Case Western Reserve University, 2008.


[27] N. SANTOS, K. P. GUMMADI, R. RODRIGUES, Towards Trusted Cloud Computing.

[28] S. BERGER, R. CACERES, K. GOLDMAN, D. PENDARAKIS, R. PEREZ, J. R. RAO, E. ROM, R. SAILER, W. SCHILDHAUER, D. SRINIVASAN, S. TAL, E. VALDEZ, Security for the cloud infrastructure: Trusted virtual data center implementation. 2009.

[29] Y. ELOVICI, B. SHAPIRA, A. MASCHIACH, A New Privacy Model for Hiding Group Interests while Accessing the Web. In Proceedings of the 2002 ACM workshop on Privacy in the Electronic Society (WPES '02), ACM, New York, NY, USA, pp. 63–70.

[30] Y. GUAN, J. BAO, A CP Intrusion Detection Strategy on Cloud Computing. In Proceedings of the 2009 International Symposium on Web Information Systems and Applications (WISA '09), (May 22-24, 2009), Nanchang, P. R. China, pp. 84–87.

[31] L. YANG, A. CEMERLIC, Integrating Dirichlet reputation into usage control. In Proceedings of the 5th Annual Workshop on Cyber Security and Information Intelligence Research: Cyber Security and Information Intelligence Challenges and Strategies, (2009), pp. 1–4.

[32] T. STRAUB, A. HEINEMANN, An anonymous bonus point system for mobile commerce based on word-of-mouth recommendation. In Proceedings of the 2004 ACM symposium on Applied computing (SAC '04), (2004), New York, NY, USA, pp. 766–773.

[33] A. NAYAK, B. ANWER, P. PATIL, Network Slicing inVirtual Machines for Cloud Computing. [Online]:http://www.cc.gatech.edu/projects/disl/

[34] B. THOMPSON, D. YAO, The Union-Split Algorithmand Cluster-Based Anonymization of Social Net-works, 2009.

[35] Q. WANG, C. WANG, J. LI, K. REN, W. LOU, Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing, 2009.

[36] C. WANG, Q. WANG, K. REN, Ensuring Data Storage Security in Cloud Computing.

[37] SECURITY GUIDANCE FOR CRITICAL AREAS OF FOCUS IN CLOUD COMPUTING [Online]. Available: http://www.cloudsecurityalliance.org/guidance/csaguide.pdf

[38] T. MATHER, S. KUMARASWAMY, S. LATIF, Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. O'Reilly Media.

[39] B. SOTOMAYOR, R. S. MONTERO, I. M. LLORENTE, I. FOSTER, Virtual Infrastructure Management in Private and Hybrid Clouds. IEEE Internet Computing, vol. 13, no. 5 (Sept.-Oct. 2009), pp. 14–22.

[40] J. O. KEPHART, D. M. CHESS, The vision of auto-nomic computing. Computer, 36(1) (2003),pp. 41–50.

[41] L. KLEINROCK, A vision for the internet. ST Journalof Research, 2(1) (November 2005), pp. 4–5.

[42] M. MILENKOVIC, S. H. ROBINSON, R. C. KNAUERHASE, D. BARKAI, S. GARG, A. TEWARI, T. A. ANDERSON, M. BOWMAN, Toward internet distributed computing. Computer, 36(5) (May 2003), pp. 38–46.

[43] L.-J. ZHANG, EIC Editorial: Introduction to theBody of Knowledge Areas of Services Computing.IEEE Transactions on Services Computing, 1(2)(April-June 2008), pp. 62–74.

[44] L. YOUSEFF, M. BUTRICO, D. DA SILVA, Toward a unified ontology of cloud computing. In Grid Computing Environments Workshop, (Nov. 2008), pp. 1–10.

[45] L. WANG, J. TAO, M. KUNZE, A. C. CASTELLANOS, D. KRAMER, W. KARL, Scientific Cloud Computing: Early Definition and Experience. In HPCC '08, pp. 825–830.

[46] D. CHAPPELL, Introducing the Azure Services Plat-form. [Online].http://download.microsoft.com/download/e/4/3/e43bb484-3b52-4fa8-a9f9-ec60a32954bc/Azure Services

[47] M. ARMBRUST, A. FOX, R. GRIFFITH, A. D. JOSEPH,R. KATZ, A. KONWINSKI, G. LEE, D. PATTERSON, A.RABKIN, I. STOICA, M. ZAHARIA, A view of cloudcomputing. Commun. ACM, 53(4) (April 2010),pp. 50–58.

Received: August, 2010
Revised: February, 2011
Accepted: March, 2011

Contact addresses:

Fei Hu
Department of Electrical and Computer Engineering
University of Alabama
Tuscaloosa, AL, USA
e-mail: [email protected]

Meikang Qiu
Department of Electrical and Computer Engineering
University of Kentucky
Lexington, KY, USA
e-mail: [email protected]

Jiayin Li
Department of Electrical and Computer Engineering
University of Kentucky
Lexington, KY, USA
e-mail: [email protected]

Travis Grant, Draw Tylor, Seth McCaleb, Lee Butler, Richard Hamner
Department of Electrical and Computer Engineering
University of Alabama
Tuscaloosa, AL, USA
e-mail: {tggrant, labutler, rahamner1}@crimson.ua.edu; [email protected]


FEI HU is currently an associate professor in the Department of Electrical and Computer Engineering at the University of Alabama, Tuscaloosa, AL, USA. His research interests are wireless networks, wireless security and their applications in biomedicine. His research has been supported by NSF, Cisco, Sprint, and other sources. He obtained his first Ph.D. degree at Shanghai Tongji University, China, in Signal Processing (1999), and his second Ph.D. degree at Clarkson University (New York State) in Electrical and Computer Engineering (2002).

MEIKANG QIU received the B.E. and M.E. degrees from Shanghai Jiao Tong University, China, and the M.S. and Ph.D. degrees in Computer Science from the University of Texas at Dallas in 2003 and 2007, respectively. He has worked at the Chinese Helicopter R&D Institute and IBM. Currently, he is an assistant professor of ECE at the University of Kentucky. He is an IEEE Senior Member and has published 100 papers. He has served in various chair roles and on TPCs for many international conferences, including as Program Chair of IEEE EmbeddCom'09 and EM-Com'09. He received an Air Force Summer Faculty Award in 2009 and won the best paper award at IEEE Embedded and Ubiquitous Computing (EUC) 2009. His research interests include embedded systems, computer security, and wireless sensor networks.

JIAYIN LI received the B.E. and M.E. degrees from Huazhong University of Science and Technology (HUST), China, in 2002 and 2006, respectively. He is now pursuing his Ph.D. degree in the Department of Electrical and Computer Engineering (ECE), University of Kentucky. His research interests include software/hardware co-design for embedded systems and high performance computing.

TRAVIS GRANT, DRAW TYLOR, SETH MCCALEB, LEE BUTLER, AND RICHARD HAMNER are students in the Department of Electrical and Computer Engineering at the University of Alabama, Tuscaloosa, AL, USA.