
Quality of Service Provision in Cloud-Based Storage System for Multimedia Delivery

Yen-Ming Chu, Nen-Fu Huang, Senior Member, IEEE, and Sheng-Hsiung Lin

Abstract—With the emergence of various multimedia applications, services, and devices, multimedia delivery is expected to become the major traffic of the Internet and to keep increasing rapidly. To serve such large-scale multimedia applications, more and more service providers, YouTube for example, store their video assets in the cloud and deliver streaming to their consumers across the cloud. Along with the growth of users and the amount of media content constantly being produced, traditional cloud-based storage has two drawbacks. First, many servers and storage devices are needed, which can easily become the performance bottleneck of the whole system. Second, to provide differential classes of service at large scale, the system tends to need many additional devices. This article proposes a robust, scalable, highly available, and service-level-provisioning cloud-based storage system designed specifically for distributing multimedia content. The proposed system contains a proven Adaptive Quality of Service (AQoS) algorithm to provide differential service levels. The system can also be used flexibly in large, medium, and small-scale environments. In addition, several algorithms are developed to increase overall system performance and fault tolerance. Implementation and experimental results show that the proposed system can meet the requirements both in the laboratory and in a practical commercial service environment.

Index Terms—Cloud computing, content delivery system, multimedia application.

I. Introduction

CLOUD COMPUTING is a fast-growing, emerging technology that provides elasticity, scalability, ubiquitous availability, and cost-effectiveness. There have been numerous studies on the definition and categories of cloud computing [1]–[5]; in fact, the "cloud" is most often used as a metaphor for the Internet, where "cloud-based" means network-centric [5]. More and more new topics from prior research fields are being studied in combination with the cloud concept. The multimedia cloud (or media cloud) aims to leverage cloud computing technologies for multimedia applications, services, and systems.

Manuscript received August 1, 2012; revised November 29, 2012; accepted April 2, 2013. Date of publication August 15, 2013; date of current version February 5, 2014. This work was supported by the National Science Council (NSC) of Taiwan under grant number NSC-101-2221-E-007-065.

Y. M. Chu is with the Department of Computer Science and Information Engineering, De Lin Institute of Technology, New Taipei City, Taiwan, R.O.C. (e-mail: [email protected]).

N. F. Huang and S. H. Lin are with the Department of Computer Science and Information Engineering, National Tsing Hua University, Hsin-Chu, Taiwan, R.O.C. (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/JSYST.2013.2257338

Researchers have proposed various kinds of media or multimedia cloud from different orientations. In [6]–[8], the multimedia cloud is proposed as an emerging computing paradigm that can effectively process multimedia applications and provide novel multimedia services for consumers.

Moreover, according to [9], multimedia-related traffic has been predicted to account for around 90% of global Internet Protocol (IP) traffic, which will reach 1.3 ZB per year in 2016. Therefore, an important research issue is how to deliver such large amounts of multimedia content that is stored in and crosses over the cloud. One key challenge is effectively transferring the multimedia on the clouds while providing quality of service (QoS) provision [7]. In particular, QoS provision needs to be considered in the cloud-based storage system that is responsible for storing and fetching data for other applications and services in cloud computing systems. Regarding the delivery of multimedia from/to the cloud, the most challenging task is how the cloud storage can provide distributed parallel access to media assets for millions of users with different service levels.

Therefore, this paper proposes a QoS-provisioning cloud storage system, which particularly aims at distributed parallel access to media assets for millions of users with different service levels. The rest of this article is organized as follows. Related work is presented in Section II. The proposed cloud storage system and related algorithms are introduced in Section III. The implementation framework and experimental analysis are presented in Sections IV and V, respectively. Finally, conclusions and future work are given in Section VI.

II. Background and Related Works

A. Multimedia Cloud Computing

Much work has been carried out in the area of multimedia and cloud computing [6]–[8], [10]–[12]. Zhu et al. [7] introduced the principal concepts of multimedia cloud computing from the perspectives of multimedia cloud and cloud multimedia. In addition, they proposed a media-edge cloud (MEC) architecture. The MEC can reduce the delay and jitter of media streaming and provides better QoS for multimedia services. Moreover, the authors consider QoS-related issues to be very important for both the multimedia cloud and cloud media.

Fig. 1. System model of a typical cloud media.

Certainly, it is most important to provide QoS awareness and provision for multimedia delivery, no matter which delivery infrastructure is adopted, for example, conventional client/server, content delivery networks (CDN), or peer-to-peer (P2P) [13]. For media cloud and delivery, that is, transmissions from/to the media cloud to/from the outside, the QoS requirements are just as important [14]. "Over-the-top" (OTT) service providers such as YouTube [15], Netflix [16], and Hulu [17] serve video streaming from a private or public cloud to their consumers over the open Internet, a service not offered by the network operator itself. Fig. 1 shows the typical design of a cloud-based OTT service architecture, which often includes the following parts.

1) Encoding System: the OTT service provider transcodes the media content into specific formats with various qualities and codecs. When an HD video is uploaded to YouTube, for example, it is transcoded into at least six different formats for storage.

2) Storage System: the transcoded media assets are stored in the storage system. The hardware infrastructure and software strategy adopted by the storage system are chosen with respect to many factors, such as cost, performance requirements, expected capacity, co-location on a public or private cloud, and delivery networks. The storage system plays a significant role in providing QoS provisions.

3) Streaming Servers: although OTT service providers also supply on-demand video streaming service, the streaming servers deployed for OTT are different from the conventional streaming servers in telco-IPTV. Unlike the RTSP/UDP used by telco-IPTV, OTT services generally use HTTP-based protocols.

4) End Users: as shown in Fig. 1, the OTT media streaming is delivered over different network environments and arrives at various end-user devices, such as personal computers (PC), set-top boxes (STB), and mobile Internet devices (MID).

The OTT audience watches videos of different quality, depending on their devices and network bandwidth. For instance, a paying consumer can watch high-definition (HD) video streaming on a smart TV in the living room, while a non-paying consumer can only watch low-quality video on a tablet PC. To meet such service requirements, the OTT service system needs to guarantee its QoS provision on the streaming servers. When a YouTube user can watch a full-HD video smoothly, for example, it means that the YouTube system can sustain over 4 Mbps of throughput between the servers and the client; moreover, the servers need to be able to fetch a sufficient amount of data from the storage.

B. Cloud Storage

Cloud storage is a concept developed from cloud computing as a new paradigm. It is a system that uses application software to make large numbers of storage devices work collaboratively, and it provides business access services through hosted applications building on grid technology. Input and output (I/O) and storage have always been important issues in computer architecture, and it is very difficult to balance speed, capacity, and cost [18]. Even more so, designing and deploying a storage system in the cloud faces much more severe challenges.

Besides scalability in terms of the number of devices and clients, an ideal storage system should provide data sharing across platforms (i.e., operating systems), data security, and high performance. Three storage hardware architectures are in common use: direct-attached storage (DAS), storage area networks (SAN), and network-attached storage (NAS). In the traditional storage architecture, block-based storage devices are directly connected to the I/O bus of a host machine (e.g., via SCSI or ATA/IDE) as DAS. DAS provides high performance and minimal security concerns, but its connectivity is limited. To address the connectivity limits of DAS and allow the sharing of storage devices, the SAN was introduced as a switched fabric. A SAN offers a fast, scalable interconnection for large numbers of storage devices and hosts [19], [20].

DAS and SAN are both block-based. Data structures such as files and directories are mapped onto blocks on the storage devices by the storage application (e.g., a file system), which is responsible for this mapping. Doing the mapping requires extra data, commonly referred to as metadata. To be able to share data blocks, multiple hosts must share metadata as well, and they must also guarantee metadata consistency among the hosts. Because of the complexity of this process, block sharing has been limited to tightly coupled, performance-sensitive storage applications, such as clustered databases and file systems. With NAS, hosts are only allowed to share data indirectly through files.

NAS was introduced to enable data sharing across platforms and is just another name for file serving. With NAS, the metadata describing how files are stored on the devices is managed completely on the file server. Enabling cross-platform data sharing at this level, however, causes all I/O to be directed through a single file server. NAS can be implemented based on a SAN (often referred to as a NAS gateway) or with DAS, but in either case there are limits on the performance of the file server, and the aggregate performance of the storage devices is rarely seen by the clients [19].

Due to the connectivity limits of DAS, it has rarely been adopted in cloud storage, which needs flexibility, except for a few cloud-based backup applications [21]. On the other hand, the NFS file systems provided by common operating systems are unable to deliver the high throughput required by a cloud computing system that has to serve a great number of requests. Although NAS offers the convenience of file sharing, most cloud infrastructure and service providers do not employ NAS as their storage architecture, with a few vendors such as NetApp Inc. as exceptions [22]. For these reasons, SAN could be the most suitable architecture for cloud computing, especially now that various IP-based SAN technologies have become available and popular. IP-based SAN is cost-saving, interconnected, flexible, and scalable in the deployment of a media cloud.

However, both requirements, the high performance of parallel fetching and easy sharing for distributed access, need to be considered, and a new architecture, often called a distributed file system, has recently been introduced in an attempt to capture the features of both NAS and SAN. Examples include the General Parallel File System (GPFS) [23], Sun Network File System (SNFS) [24], Hadoop Distributed File System (HDFS) [25], Kosmos File System (KFS) [26], Ceph, Panasas [27], Parallel Virtual File System (PVFS2) [28], and Red Hat Global File System (GFS, RGFS) [29], [30].

GFS is a cluster file system originally developed at the University of Minnesota and now maintained by Red Hat; it is available with the Red Hat Cluster Suite, whose configuration and management tools are used to configure and manage GFS nodes. GFS provides data sharing among the nodes in a cluster and provides a single, consistent view of the file-system name space across those nodes. It allows applications to be installed and run without much knowledge of the underlying storage infrastructure and is fully compliant with the IEEE POSIX interface, allowing applications to perform file operations as if they were running on a local file system.

C. Storage with QoS

As mentioned in the prior sections, when the multimedia cloud concept is discussed, QoS is one of the problems that deserves attention. Especially as cloud computing infrastructure matures with many different kinds of services, many studies show that storage becomes the performance bottleneck. How to ensure performance in a QoS-guarded storage system therefore becomes an important issue, especially when the cloud storage must be able to serve a very large number of user accesses at the same time.

As stated in prior research [7], [8], there are two usual ways to provide QoS provisioning for multimedia in the cloud. The first is adding QoS to the current cloud computing infrastructure within the cloud. The other is adding QoS middleware between the cloud infrastructure and the applications. The former is usually achieved through hardware support within the existing operating system, for resources such as network traffic and computing power, and usually starts from a very mature standard that can be followed. In the latter approach, QoS provision is reached through the private middleware and proprietary APIs of the various cloud vendors.

Previous work on storage resource management can largely be classified into two classes. One is guaranteeing each client's storage QoS requirements as set by a system administrator. Systems such as Facade [31], Chameleon [32], and Triage [33] aim to achieve the required response-time objectives by regulating the rate at which other clients' workloads enter the storage system. Facade uses Earliest Deadline First (EDF) scheduling to meet the response-time objectives, but this fails when an unexpected workload burst is generated by other clients. Chameleon uses a leaky bucket with feedback control, but a leaky-bucket system does not use the storage system efficiently because it is not work-conserving. Triage adopts control theory to predict system performance and correspondingly adjusts its system model for performance isolation and differentiation. Its system model is not sensitive to the performance dynamics perceived by concurrent clients due to different physical data positions.

D. System Design Goals

As outlined in the prior background and related works, it is necessary to design and deploy a storage system with QoS provision for a multiple-class-aware multimedia delivery service. The design goals and requirements for such a storage system include that the system can be deployed in the cloud, can be used flexibly in large, medium, and small-size environments, and has scalability, considerable fault tolerance, security, and other basic features. Furthermore, the system needs to provide differential service between different users and dispatch resources properly in heavy-load situations.

1) Multiclass Service Awareness: Multiclass service awareness is an important requirement of a cloud system for multimedia applications. The users in the system are divided into several classes. The basic requirement is that the users are divided into two levels: high class and low class. High-class users can always use the service, while low-class users need a mechanism to determine whether the service can be used. With a large number of users, how to schedule the resources finely in the content delivery system is a serious issue. Therefore, an algorithm is developed that achieves the goal of using minimal storage space with the best resource scheduling among different users.

2) Scalability: Scalability is another goal: the system must be able to scale up and scale down in environments of different sizes. Developing a system architecture that can be used in small and large circumstances simultaneously is a challenge. The design idea is that each component in the system needs to be loosely coupled, and every component can also be scaled up and down by itself.

Another consideration is that not only the components but also the packages used in the system must be scalable. A more powerful third-party package could replace some part of the current system but might not scale well in the large-scale situation. For example, our system uses many simple, standard XML formats for exchanging information. These messages can be composed by a powerful third-party library or by string concatenation. If this action needs to be performed a million times per second, string concatenation is the fastest way: it does not need to deal with the relationships between XML nodes, create an XML tree, or load the XML modules. A sketch contrasting the two approaches is given below.
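To make this design choice concrete, the following minimal sketch contrasts composing one of these small fixed-format messages with a general-purpose XML library versus plain string concatenation. The element and field names are illustrative only and are not taken from the paper's actual protocol.

```python
# Minimal sketch, assuming a flat status message; the tag and field
# names below are illustrative, not the system's actual protocol.
import xml.etree.ElementTree as ET
import timeit

def build_with_library(content_id, throughput):
    # General-purpose approach: build an element tree, then serialize it.
    root = ET.Element("status")
    ET.SubElement(root, "content").text = str(content_id)
    ET.SubElement(root, "throughput").text = str(throughput)
    return ET.tostring(root, encoding="unicode")

def build_with_concat(content_id, throughput):
    # Fixed-format approach: plain string concatenation, no XML tree.
    return ("<status><content>" + str(content_id) + "</content>"
            "<throughput>" + str(throughput) + "</throughput></status>")

if __name__ == "__main__":
    assert build_with_library(42, 8) == build_with_concat(42, 8)
    for fn in (build_with_library, build_with_concat):
        t = timeit.timeit(lambda: fn(42, 8), number=100_000)
        print(f"{fn.__name__}: {t:.3f} s for 100k messages")
```

On typical hardware the concatenation variant is several times faster, which is the reason the system prefers it for messages that must be produced at very high rates.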


Fig. 2. Proposed system block and flow diagram.

3) Fault Tolerance: To achieve this goal, a distributed system must be designed that is not affected when some servers malfunction. If a server failure does not affect the whole system, it means that the server can be removed easily. The proposed system also tries to make server installation easy. In the system, different kinds of metadata are required to be held redundantly by the corresponding technologies. The actual implementation is shown in Section IV.

4) Security: Security is a serious issue in cloud-based services today. Especially in a system providing differential services, security vulnerabilities can easily lead to the corruption of the entire system. In the system proposed in this paper, hash algorithms are used to provide verification in the communication protocol and to reject malicious requests. These checks also help to detect errors that occur due to transmission or network accidents. In addition, the system has separate development and production environments for security reasons, since error messages of different detail levels must be produced in different environments. A sketch of such hash-based verification is given below.
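The paper does not specify which hash algorithm is used or which fields are covered, so the sketch below is an assumption: it verifies requests with an HMAC-SHA256 over the request path and a timestamp, which is one common way to realize this kind of protocol check.

```python
# Minimal sketch of request verification, assuming an HMAC-SHA256 over the
# request path plus a timestamp. The field layout and shared-secret handling
# are assumptions; the paper only states that hash-based verification is used.
import hmac
import hashlib
import time

SHARED_SECRET = b"replace-with-a-real-secret"
MAX_SKEW_SECONDS = 300  # reject stale or replayed requests

def sign_request(path: str, timestamp: int) -> str:
    message = f"{path}|{timestamp}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(path: str, timestamp: int, signature: str) -> bool:
    # Reject requests whose timestamp is too old or too far in the future.
    if abs(time.time() - timestamp) > MAX_SKEW_SECONDS:
        return False
    expected = sign_request(path, timestamp)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    ts = int(time.time())
    sig = sign_request("/content/1234", ts)
    print(verify_request("/content/1234", ts, sig))   # True
    print(verify_request("/content/9999", ts, sig))   # False: tampered path
```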

5) Standards-Based: The proposed system designs its protocol on top of today's most popular protocol, the Hypertext Transfer Protocol (HTTP). With careful design, this approach has the advantage of using proxy caches to reduce bandwidth and improve overall performance. Moreover, many campus and enterprise network firewalls block most protocols other than HTTP. Therefore, the HTTP specification must be respected when developing the protocol. The designed protocol cannot merely resemble HTTP while failing to fully comply with the basic requirements of the Request for Comments (RFC) standards; an incorrect HTTP implementation is likely to negate the advantages of using this protocol.

III. Design of Cloud Storage System

According to the above five system design goals, we propose a system architecture that includes the five subsystems shown in Fig. 2.

A. Service Specification Unit

This is the first process whenever a user makes a service request. The system needs to decide which service the user requires and then prepares the necessary configuration for that service. To reduce management complexity and protect the integrity of the system, a layered architecture of the configuration is proposed. Rarely changed and important parameters can be compiled for fast execution after adjustment, while frequently changed parameters are stored separately to reduce management complexity and retain the flexibility of the system. Meanwhile, in this step, the system performs some basic verification of the user-supplied data. If it fails, the system refuses to provide the service and an error message is generated to notify the user. A sketch of such a layered configuration is given below.
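The following sketch illustrates the layered idea under two assumptions: rarely changed parameters are "compiled" as module constants shipped with the service, and frequently changed parameters are loaded from a small runtime file. The parameter names are illustrative, not the system's actual configuration keys.

```python
# Minimal sketch of a layered configuration: compiled defaults plus runtime
# overrides. The keys shown are illustrative assumptions.
import json

# Layer 1: rarely changed, "compiled" defaults shipped with the service.
COMPILED_DEFAULTS = {
    "max_calc_interval_s": 30,   # Mc in the admission control algorithm
    "random_drop_factor": 1.0,   # Rf
    "storage_factor": 0.08,      # Sf
}

def load_config(runtime_path: str) -> dict:
    """Merge runtime overrides (layer 2) on top of the compiled defaults."""
    config = dict(COMPILED_DEFAULTS)
    try:
        with open(runtime_path) as f:
            config.update(json.load(f))
    except FileNotFoundError:
        pass  # no overrides present: fall back to compiled defaults only
    return config

if __name__ == "__main__":
    cfg = load_config("runtime_overrides.json")
    print(cfg["storage_factor"])
```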

B. Resource Mapping Unit

In this step, the system finds the needed resources and prepares the metadata and dynamic statistical data required for the next step. Logical, detailed, and protocol-specific verification is also performed here. If a check fails or resources are not available, the system refuses to provide the service and an error message is generated to notify the user. The tricky part is that, since different resources are scattered in various places, the system may not be able to obtain the needed resources under different environments and conditions. An important indicator here is how to handle the different situations properly in order to provide robustness and achieve the goal of fault tolerance.

C. Admission Control Unit

According to the information from the first few steps, the system can determine whether the resources required in the service step need admission control. For multimedia content delivery, the size and number of data items are usually very large; therefore, storage throughput is usually the bottleneck of the system. To provide differential service, controlling the usage of storage is a prerequisite. In this implementation, other resources in the system use many newer technologies to solve performance issues under high demand, while the proposed algorithm is mainly used for scheduling storage resources. Every storage device keeps its own statistical data, and these statistics can be accessed throughout the whole system.

Our proposed system applies the algorithm to the targeted storage that was decided in the previous step, using that storage's own statistical data; thus, each storage has its own statistics for the calculation. The constants used in the algorithm are set in the configuration file and can be changed at runtime. A storage I/O benchmark tool (IOzone) is used to run read/write patterns close to our services in order to decide the constant values [34]. High accuracy is not necessary, since storage I/O throughput is very dynamic and the system only needs rough values. If a new storage device is the same as the one currently in use, its constants can be adopted from the current values. If size is the only difference between the new and current storage, only some constants need to be modified to fit the new storage size.

The process of the algorithm shown in Fig. 3 is explained next. The system has a maximum calculation interval constant, Mc, which determines how often the system resets the statistical data. If Mc is too large, the statistical data will not reflect the current throughput of the storage. The Mc constant is set to 30 s in our proposed system.

Fig. 3. Admission control algorithm.

The algorithm first determines whether the user is a high-class user. If so, the system passes the user to the next step directly. If the user is low class, the system calculates another variable, Taj, which stands for the calculated available throughput of the targeted storage. The calculation formula is as follows:

Taj = min(Tsj − Haj − Hrj − Laj, Ltj − Laj) (1)

To facilitate the explanation, Fig. 4 is used to illustrate the above formula. It must be clear that the calculated Taj value is not the truly accurate available throughput of the targeted storage. The Taj value is produced by the algorithm, and its meaning is assigned by the system in order to make the algorithm easier to understand.

Fig. 4. Illustration of Formula (1).

If the Taj value is smaller than the throughput needed by the user (Ut), the system rejects the request. In the algorithm (line 11 in Fig. 3), there is a more complicated expression, which is explained by the yellow area in Fig. 5(a) indicating the usable throughput. When the calculated Taj value is smaller than Rq, the random early drop threshold, the random early drop mechanism is activated. The basic idea is that the system starts to reject some requests randomly when the available throughput of the targeted storage falls below a certain value.

Rf, the random early drop factor, controls how aggressively the algorithm rejects service. If Rf = 1, the system rejects service linearly, as shown in Fig. 5(b).

Fig. 5(c) shows another situation: if Rf > 1, the system rejects service more aggressively. This means the system is more likely to reject requests in order to avoid throughput exhaustion, which can lead to a significant degradation of system performance. In a large-scale environment, users are more likely to request content on the same storage simultaneously, and that storage can easily become the hotspot of the whole system. Therefore, it is a good choice to set Rf larger than 1 in this circumstance.

If Rf < 1, the system rejects service less aggressively. This means the system will not reject requests entirely even if there is no available throughput according to our algorithm's calculation. In a small-scale environment, it is appropriate to set the Rf constant to less than 1. Fig. 5(d) depicts this situation. As said before, the Taj value is not the truly accurate available throughput of the targeted storage; in fact, the storage may still have some spare throughput even when our calculation indicates that none is available.

The admission control algorithm of Fig. 3 finally rejects or passes the request. If admission control passes, the user can use the service at this time and continues into the next service step. If admission control rejects, the system declines to serve and provides a reason. The algorithm in Fig. 3 is good enough in small-scale and medium-scale environments, but it cannot schedule resources properly in large-scale environments and heavy-load situations; it is relatively rigid and does not properly reflect the dynamics of the storage throughput. To solve this issue, the proposed AQoS algorithm, presented in the following subsections, is introduced in this paper. A sketch of the admission control flow is given below.
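To make the admission control flow concrete, the following sketch implements Formula (1) together with a random early drop stage. The exact rejection expression on line 11 of Fig. 3 is not reproduced in the text, so the drop probability used here, a linear function of (Rq − Taj) scaled by Rf, is an assumption chosen only to match the qualitative behavior of Fig. 5(b)–(d); the variable names follow the paper's symbols.

```python
# Sketch of the admission control unit (Fig. 3). Formula (1) follows the
# paper; the random early drop probability is an assumed form, since the
# exact expression on line 11 of Fig. 3 is not given in the text.
import random
from dataclasses import dataclass

@dataclass
class StorageStats:
    Ts: float   # total throughput of the storage (Mbps)
    Ha: float   # throughput currently used by high-class users
    Hr: float   # throughput reserved for high-class users
    La: float   # throughput currently used by low-class users
    Lt: float   # throughput threshold for low-class users

def admit(user_is_high_class: bool, Ut: float, s: StorageStats,
          Rq: float = 50.0, Rf: float = 1.0) -> bool:
    """Return True if the request is admitted."""
    if user_is_high_class:
        return True                          # high class always passes here
    # Formula (1): calculated available throughput for low-class users.
    Ta = min(s.Ts - s.Ha - s.Hr - s.La, s.Lt - s.La)
    if Ta < Ut:
        return False                         # not enough throughput left
    if Ta < Rq:
        # Random early drop (assumed form): linear in (Rq - Ta), scaled by Rf,
        # so Rf > 1 rejects more aggressively and Rf < 1 never reaches 100%.
        Ta_clamped = max(0.0, min(Ta, Rq))
        p_drop = min(1.0, Rf * (Rq - Ta_clamped) / Rq)
        if random.random() < p_drop:
            return False
    return True

if __name__ == "__main__":
    stats = StorageStats(Ts=200, Ha=50, Hr=8, La=60, Lt=100)
    print(admit(False, Ut=4, s=stats, Rq=50, Rf=1.5))
```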

D. Service Unit

This unit does the real job (e.g., fetching data from storage, verifying data, etc.) on the server side, without any response yet to the user. Different services have different jobs, some for end users and some for system managers. Security is the major consideration in end-user services, while strictness is the most important point in manager services. The manager services include many content-management tasks that need very rigorous implementation to avoid content errors. A single content error in a system that manages tens of millions of content items is hard to find, so strictness is really the crux when implementing the proposed system.

Fig. 5. Illustrations for the admission control algorithm. (a) Random early drop. (b) Linear rejection of service (Rf = 1). (c) More aggressive rejection of service (Rf > 1). (d) Less aggressive rejection of service (Rf < 1).

E. Adaptive QoS (AQoS) Unit

Fig. 6. AQoS algorithm.

As said before, the algorithm used in admission control is not enough for heavy-load situations. Another mechanism after the service step is necessary to reflect the dynamics of the storage throughput immediately. AQoS uses several metrics to feed information back and help the admission control dynamically regulate storage throughput. Storage throughput changes constantly, and many factors can affect it.

First, the amount of used storage space affects the total throughput. The more space the storage uses, the more widely distributed the storage accesses are. If the used space is a small fraction of the total space, the storage throughput will be relatively larger than when the storage is full. Also, small content can fit into memory, and the system can cache it for faster access.

Second, the physical location where the data is stored is another factor that affects storage performance. When the physical locations of the data are more scattered over the storage, the actuator arm moves farther and the data access is more random. Accessing data sequentially is much faster than accessing it randomly because of the way the disk hardware works. The seek operation, which occurs when the disk head positions itself at the right disk cylinder to access the requested data, takes more time than any other part of the I/O process. Because random reading involves a higher number of seek operations than sequential reading, random reads deliver a lower rate of throughput. The same goes for random writing.

Exact performance depends on lifetime, disk type, controllers, stripe size, implementation details, and a dozen other factors.

In a large-scale environment, the content delivery system could have more than 10 000 users simultaneously, with an overwhelming crowd of low-class users and some high-class users online at the same time. Scheduling resources between different users in such a severe situation is what AQoS aims to solve. Fig. 6 shows the algorithm used in AQoS.

The algorithm is simple but critical. Similar to the admission control algorithm, the system determines whether the algorithm shown in Fig. 6 is needed according to the information from the service specification step. The concept of the whole algorithm is that when the measured service time exceeds a threshold value, the system needs some mechanism to throttle resources in order to avoid resource exhaustion; a measured service time that keeps growing is a sign of resource depletion. Next, the detailed process of the algorithm is explained.

St, the service time, is the time spent in the service step. Sf, the storage factor, is a threshold value that will be explained later. When St is greater than twice the storage factor, the algorithm feeds some information back into the statistical data according to our presented formula, which then affects the subsequent behavior of the admission control unit. It is worth mentioning that the algorithm acts on high-class and low-class users simultaneously. It does nothing, of course, if St is less than twice the storage factor.

Sf, the storage factor, is the most important factor in the system and controls how sensitively AQoS reacts to the busy status of the storage. Different storage technologies may have different ideal value ranges. In our experiments, for example, the ideal value of Sf for a DAS system with hardware-based RAID 5 and 16 SATA-II disks (6.5 TB) is [0.05, 0.3].

If the Sf factor is set too large, the system will be insensitive and unable to dispatch resources properly in heavy-load situations. In our experiments, if the system does not throttle resources when they start to be exhausted, it misses the best time to schedule them. In such a case of poor resource dispatching under heavy load, new high-class users cannot be served properly: a new high-class user can get the service, but the quality of service will be bad. The user experience is adversely affected because the user does not get the deserved service. In other words, if the storage factor is set too large, the algorithm simply does not work at all.

If the Sf factor is set too small, the system will be very sensitive; it may reject more users than necessary and reserve too much resource for the high-class users. When too much resource is reserved, low-class users are not served even when the storage still has plenty of available throughput. This is a lose-lose situation, since it wastes many resources and negatively affects the experience of low-class users. In our practical experience, more than half of the low-class users will be rejected if the storage factor is set too small. After the AQoS step, the system returns the service result to the user. A sketch of the AQoS feedback step is given below.
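The paper's feedback formula is not reproduced here, so the sketch below only takes the trigger condition St > 2·Sf from the text; the feedback itself, inflating the recorded throughput statistics in proportion to how far the service time exceeds the threshold so that admission control sees a busier storage, is an assumed form.

```python
# Sketch of the AQoS unit (Fig. 6). The trigger condition St > 2*Sf comes
# from the paper; the penalty form applied to the statistics is an assumption.
from dataclasses import dataclass

@dataclass
class ThroughputStats:
    Ha: float = 0.0   # measured high-class throughput (Mbps)
    La: float = 0.0   # measured low-class throughput (Mbps)

def aqos_feedback(stats: ThroughputStats, high_class: bool,
                  service_time_s: float, bytes_served: int,
                  Sf: float = 0.08) -> None:
    """Record a finished request; penalize the statistics if it was slow."""
    delivered_mbps = (bytes_served * 8 / 1e6) / max(service_time_s, 1e-6)
    penalty = 1.0
    if service_time_s > 2 * Sf:
        # Storage looks congested: over-report the consumed throughput so the
        # admission control unit immediately becomes more conservative.
        penalty = service_time_s / (2 * Sf)
    if high_class:
        stats.Ha += delivered_mbps * penalty
    else:
        stats.La += delivered_mbps * penalty

if __name__ == "__main__":
    s = ThroughputStats()
    aqos_feedback(s, high_class=False, service_time_s=0.3, bytes_served=2_000_000)
    print(round(s.La, 2))
```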

IV. Framework

In this section, we propose an applied framework evolved from the abstract system architecture and theoretical algorithms presented in Section III. This framework is suitable for deploying a cloud-based OTT system like the one shown in Fig. 1. It follows the requirements of a collaborating vendor that is planning to provide an OTT-based service; this service provider has its own private cloud infrastructure in a public data center. Fig. 7 shows a cloud-based OTT system reference design framework according to the service provider's requirements. The whole framework is composed of many components, called subsystems, and Fig. 7 also shows the relationships between them. In addition to the components responsible for storage and delivery, an OTT service system often includes related functional components such as a portal, content management, logging, and authentication, authorization, and accounting (AAA), or even billing. In Fig. 7 these related functional components are named the Harmony System (HS); they support service and business operations. In order to provide more flexibility, the proposed system is designed as an underlying system, meaning that it provides content delivery services to users while the operation administrator controls the policies and behaviors of the system through the API System (APIS). In other words, our proposed system is responsible for providing highly efficient, reliable, robust, failover services and does not deal with the operational policies. The HS provides a rich portal site that may implement many operational policies, some for users and some for managers. The gray area in Fig. 7 is the HS, and our system is inside the red box.

A. Streaming Delivery System (SDS)

As implied by its name, the streaming delivery system is responsible for streaming delivery. This subsystem is a core service that provides delivery services and may need many servers in a large-scale environment. Since this subsystem provides the streaming delivery service, it retrieves multimedia content and manages the information, physical location, and status of that content. Managing the storage is also the job of this subsystem, as is managing the delivery information produced by the service.

The Admission Control Unit and AQoS Unit mentioned in Section III are implemented in the SDS. The SDS needs very high performance to service many users concurrently, so many servers are deployed, and these SDS servers can be load-balanced using the round-robin Domain Name System (DNS) technique. When the SDS servers are deployed in individual hosts and an SDS fetches data from a storage device, it needs to know the related information of that storage device and then update it, for example Hqj and Taj in the algorithms of Figs. 3 and 6. In order to synchronize this information, achieve high performance, and keep the system robust, we forsake centralized maintenance and instead make changes using a memcached architecture.

Memcached is a general-purpose distributed memory caching system. It is used to speed up database-driven data by caching data and objects in memory to reduce the number of times an external data source (such as a database or API) must be read. In our system, memcached serves as an in-memory key-value store for small chunks of arbitrary data derived from the results of database calls. A sketch of how the shared statistics can be kept in memcached is given below.
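The following sketch shows one way the per-storage statistics could be shared through memcached, assuming the python-memcached client; the key names and counter granularity are illustrative assumptions, since the paper does not describe its key layout.

```python
# Minimal sketch of sharing per-storage statistics through memcached,
# assuming the python-memcached client (pip install python-memcached).
# Key names and counter granularity are illustrative assumptions.
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def init_storage_stats(storage_id: str) -> None:
    # Counters are stored in kbps as integers so that incr/decr can be used.
    mc.add(f"storage:{storage_id}:Ha_kbps", 0)
    mc.add(f"storage:{storage_id}:La_kbps", 0)

def record_session(storage_id: str, high_class: bool, kbps: int) -> None:
    """Atomically add a session's throughput to the shared statistic."""
    field = "Ha_kbps" if high_class else "La_kbps"
    mc.incr(f"storage:{storage_id}:{field}", kbps)

def read_stats(storage_id: str) -> tuple[int, int]:
    ha = mc.get(f"storage:{storage_id}:Ha_kbps") or 0
    la = mc.get(f"storage:{storage_id}:La_kbps") or 0
    return ha, la

if __name__ == "__main__":
    init_storage_stats("store-01")
    record_session("store-01", high_class=False, kbps=4000)  # a 4 Mbps session
    print(read_stats("store-01"))
```

Using memcached's atomic increment avoids the race conditions that per-server local counters would suffer when hundreds of requests update the same statistic simultaneously.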

B. Storage System (SS)

The storage system, SS, may consist of storage servers and storage devices. The storage servers can be connected to some or all of the storage devices in order to export them to the SDS.

Fig. 7. Reference design for a cloud-based OTT system.

As mentioned before, three storage hardware architectures can generally be used: DAS, NAS, and SAN. In the DAS infrastructure, there is no extra storage server in this subsystem; there is only one server, directly attached to some storage, in the whole system. In the NAS infrastructure, there are many possible forms of storage servers. Storage servers run NFS or SMB programs in order to export the network file system to the SDSs. If managers do not want to deploy dedicated storage servers for cost saving, it is also possible to build the storage servers into the SDS. Considering the service scale and performance requirements, GFS, a distributed file system, is adopted in the proposed storage system.

Moreover, since this subsystem is the real physical location of all content, the storage servers need to handle privileges carefully in order to prevent unexpected or malicious attempts. Finally, this subsystem is not accessed directly by users but only by other subsystems, such as the SDS and AS.

C. Admin System (AS)

The AS is responsible for system administration and management. It provides admin tools for managing the system automatically and manually, such as removing unwanted content, eliminating expired information, resetting the statistical data, etc.

Another function of the AS is system monitoring, which was not intended to be implemented at the beginning of the development phase. As the commercial system developed, this became more and more urgent: the system needs to know its current status in order to discover performance bottlenecks, and it is almost impossible to monitor and manage tens of servers manually in a large-scale environment.

D. API System (APIS)

The role of the APIS is to provide APIs for the HS so that it can control the behavior of the proposed system according to its policies. To some extent, the managers of the HS can control our proposed system, in a limited way, through the APIS. It is recommended to isolate the APIS from the other subsystems in order to have better security and to hide the details of the system.

Fig. 8. Experimental environment.

V. Experimental Setup

In this section, we show experimental results in order to prove the feasibility of the proposed system.

A. Lab Testing

As said previously, there are three main infrastructures that can be used to build our system. Experimenting with the NAS infrastructure is a good choice, because it is cheaper than a SAN-based infrastructure and more representative than a DAS infrastructure. The SAN-based infrastructure with GFS has already been deployed in the online system, and that part is presented in the next section.

The whole experimental environment is built on a single physical server with 8 GB RAM and 8 CPUs. The experimental server runs VMware vSphere Hypervisor (ESXi) 4.1 Build 260247 in order to simulate multiple virtual servers simultaneously and form the whole system.

Fig. 8 illustrates the experimental environment, which contains three virtual servers (cds1, cds2, and cdsadm). Each virtual server has 1 GB RAM, 20 GB disk space, 1 CPU core, and 2 NICs (eth0 for the extranet, eth1 for the intranet). A virtual switch with 120 ports is also emulated.

In Fig. 8, cds1 and cds2 contain subsystems such as the SDS, so cds1 and cds2 provide the content delivery service.

Since the NAS infrastructure is chosen, storage servers are needed to export the file system. The cdsadm server plays this role and exports the NFS file system to cds1 and cds2 over the intranet. The admin system (AS) and API system (APIS) are also built on cdsadm. Moreover, since this environment is built for experimental use, there is no redundant server for cdsadm. Another important function, the monitoring service, is also built on cdsadm and gathers the system information that provides the experimental results.


Fig. 9. (a) Storage instantaneous throughput. (b) Storage throughput.

For the algorithm proposed in Section III, the system needs to roughly know the storage's available throughput first, so a benchmark test is performed before the experiment. The available throughput of the storage used in this experiment is approximately 200 Mbps, so Ts, the total throughput of the storage, is set to 200 Mbps, and Lt, the throughput threshold for the low class, is set to 100 Mbps. Hr, the throughput reserved for the high class of the storage, is set to 8 Mbps. Also, in this experiment the storage factor Sf is set to 0.08. The content to be delivered totals more than 1 GB in order to avoid the memory cache of the operating system.

To facilitate the analysis, the system simulates the load of high-class users on cds1 and the load of low-class users on cds2. The load is simulated by issuing real service requests to cds1 or cds2 from cdsadm. The simulation program uses the protocols provided and designed by our proposed system to communicate with the front-end servers (cds1, cds2) and download real content from cds1 or cds2.

Fig. 9 illustrates the experimental results; panels (a) and (b) have different meanings. In Fig. 9(a), the data source is the statistical data of our system. Section III has already presented the algorithms used to provide differential service, so the data sources of Fig. 9(a) are Ha and La from Fig. 3, i.e., the throughput at that moment. More precisely, Fig. 9(a) shows the average throughput of the targeted storage within an Nc interval (Nc is dynamic). Nc is less than Mc, and Mc is set to 30 s in our experiment. Since this statistical data is reset after Mc seconds, the calculated throughput is more dynamic than the throughput in Fig. 9(b). The data in Fig. 9(b) is gathered from the throughput of the NIC interface; the data-gathering interval is fixed (300 s), and the calculations are also performed over that interval.

The whole scenario of the experiment is as follows: 50 Mbps of download throughput from high-class users is generated on cds1 at the very beginning of the experiment. After 10 min, the system starts to simulate very heavy throughput from low-class users on cds2.

From 21:50 to 22:30, low-class users only use approximately 100 Mbps due to the limitation of the constant Lt.

From 22:30 to 23:00, the system starts to generate another 50 Mbps of throughput on cds1. During this period, high-class and low-class users each use approximately 100 Mbps, but the throughput of low-class users is slightly less than 100 Mbps due to the effect of the constant Hr. Fig. 9 presents this circumstance; a worked instance of Formula (1) for this period is given below.
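As a plausibility check of Formula (1) under the experimental constants (Ts = 200, Hr = 8, Lt = 100 Mbps), suppose that during this period high-class users consume Ha = 100 Mbps and low-class users already consume La = 92 Mbps; these two instantaneous values are assumed for illustration, not measured data from the paper.

```latex
% Worked instance of Formula (1); Ha = 100 and La = 92 are illustrative.
\begin{align*}
T_{a} &= \min\bigl(T_{s} - H_{a} - H_{r} - L_{a},\; L_{t} - L_{a}\bigr) \\
      &= \min\bigl(200 - 100 - 8 - 92,\; 100 - 92\bigr) \\
      &= \min(0,\; 8) = 0 \text{ Mbps}.
\end{align*}
```

With no calculated throughput left, further low-class requests are rejected, so the low-class share settles slightly below the 100 Mbps threshold, which matches the behavior seen in Fig. 9.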

From 23:00 to 23:50, the system generates another 50 Mbps of throughput on cds1. During this time, Fig. 9(a) looks a little more unstable than Fig. 9(b); this difference is explained later. Overall, high-class users use approximately 150 Mbps and low-class users share the remaining throughput, which is less than 50 Mbps.

From 23:50 to 00:30, another 50 Mbps of throughput is generated on cds1. In this situation, high-class users need approximately 200 Mbps, which is the total throughput derived from the previous benchmark test. At this moment, the system still produces heavy throughput from low-class users on cds2. The result is that the total throughput exceeds 200 Mbps, and high-class users use more than 95% of it. Meanwhile, low-class users can still use a little of the resource.

From 00:30 to 00:50, 100 Mbps of throughput is canceled on cds1, and Fig. 9 shows that low-class users regain approximately 100 Mbps of throughput during this period.

After 00:50, the system cancels another 50 Mbps on cds1, but low-class users still use 100 Mbps due to the limitation of the constant Lt.

The above is the entire process of the experiment. Additional figures illustrate further characteristics.

Fig. 10(a) shows the full CPU usage of cds1, which only serves high-class users, throughout the whole period of the experiment. This figure is quite typical and intuitive: the system consumes more CPU resources when it serves more high-class users. During the heavy-load period (00:00–00:20), cds1 spends more time in "System", the red area in Fig. 10(a), due to the NFS service. Since the NFS client runs in kernel mode, cds1 spends more time in "System" when it needs more storage resources to serve an increasing number of high-class users. Needless to say, cds1 spends most of its time in "IOWait", which is caused by accessing the NFS storage, and a little time in "User", which is caused by our content delivery service.

Fig. 10(b) shows the full CPU usage of cds2, which only serves low-class users. This figure shows the different characteristics of cds1 and cds2. The most notable point is that cds2 spends less time in "IOWait" in the heavy-load situation. The reason is that cds2 rejects many more service requests from low-class users in this situation, so it consumes less storage resource in order to yield resources to high-class users. Since cds2 needs to reject a huge number of low-class users during the heavy-load period, it spends more time in "User" instead. It is worth mentioning that our proposed system consumes CPU resources stably when serving low-class users, regardless of the load situation.

Fig. 10(c) shows the full CPU usage of cdsadm, which exports the NFS file system to cds1 and cds2. In our practical experiments, the system cannot continue to scale once there are twenty servers and the used bandwidth exceeds 1 Gbps. This phenomenon is caused by the overhead and performance issues of the network sharing program. Therefore, this infrastructure cannot be used in a large-scale environment but is appropriate for a system with fewer than tens of thousands of users; the NAS infrastructure has limited scalability. Fig. 10(c) also illustrates that NFS consumes a lot of CPU resources even when the whole system is not very busy. There is only one disk storage appliance in the experiment, yet the CPU resources are consumed severely. If there were more storage devices in the system, one can imagine that the overall performance would degrade dramatically.

Fig. 10. Full CPU usage of cds1, cds2, and cdsadm. (a) Full CPU usage of cds1. (b) Full CPU usage of cds2. (c) Full CPU usage of cdsadm.

Fig. 11. Load average of cds1, cds2, and cdsadm. (a) Load average of cds1. (b) Load average of cds2. (c) Load average of cdsadm.

In our practical experience, a deployment of the proposed system that serves more than 15 000 client users and is built on the NAS infrastructure has severe performance issues. Even though the whole system can serve 15 000 users concurrently, every front-end server and storage server is ultimately overwhelmed.

In Linux, there is a load value calculated by the operating system to represent the load of the server. Fig. 11 illustrates the load averages of cds1, cds2, and cdsadm during the experimental period. These figures clearly support the same conclusion.

As said before, the proposed system has been deployed on an online site that has approximately 15 000 concurrent users and is built on the NAS infrastructure. Fig. 12 illustrates the load average of the storage server. Note that the time span of the horizontal axis in this figure is approximately one month, not just one day. Since our monitoring service compacts the statistical data periodically (daily, weekly, monthly, yearly), the real load average of the storage server during that period is higher than the value shown in Fig. 12.


Fig. 12. Load average of the storage server in a heavy-load situation.

The maximum number of requests handled by each memcached server in 1 s is approximately 1000. If the proposed system did not use this technology, these requests would be performed on the database, and few databases can really handle more than one thousand requests per second. Another possible solution is storing the metadata in local memory. One drawback is that the same metadata may be cached redundantly on different servers; another is that the servers cannot synchronize the statistical data with each other, which is the key to providing differential service in our system. Some custom synchronization methods could be implemented to solve this issue, but things are not that simple. In a heavy-load situation, the statistical data may be updated by hundreds of users simultaneously, and ensuring that the synchronized data is changed atomically is a really important issue. Otherwise, the synchronized data becomes useless, especially in our system, which has a small calculation interval.

B. Practical Deploying and Testing

The proposed system has been deployed in a commercial content-delivery business that has more than tens of thousands of concurrent users. In this environment, our system is built on the GFS infrastructure shown in Fig. 7. The service was formally launched on May 11, 2010. After two months of service, there were more than 60 000 concurrent online users in total, and 300 TB of multimedia assets were stored and served by this system. Moreover, more than 30 million accumulated multimedia content deliveries have been made, and more than half a million of these content items were downloaded only once during one month.

VI. Conclusion

In this paper, a cloud storage system was proposed to provide robust, scalable, highly available, and load-balanced service, while also providing quality of service provision for multimedia applications and services. A scheduling algorithm was proposed that can dispatch resources properly between end users of different service levels, and it has been proved both in theory and in practice.

Experimental results showed that the load contributed by serving only low-class end users remains stable even when many users are served and the storage throughput is used up. This is an important characteristic, especially in a large-scale environment, which generally has a few high-class users and a large crowd of low-class users. Practical results showed that the proposed system has passed the severe examination of a business environment, which must not only serve many users concurrently but also provide enterprise features such as robustness, scalability, high availability, and load balancing to ensure 24-hour operation. Moreover, it was shown that the proposed system achieves the three functions of a multimedia-aware cloud [7]: 1) QoS support and provisioning; 2) parallel processing in a distributed environment; and 3) QoS adaptation. These functions make the proposed system especially suitable for video-on-demand service in OTT systems, which often provide different service quality to users with various types of devices and network bandwidth.

Several directions remain for future work. Since the system currently places content on storage servers randomly, a content-scheduling algorithm could be developed to balance the access load across all storage servers as far as possible and to mitigate hot-spot storage. Another open issue is that some storage servers may become so popular that they cannot serve even high-class users properly; a content-piece caching algorithm could be developed to address this issue and improve overall system performance.

Acknowledgment

The authors would like to thank the editor and the anonymous reviewers for constructive comments.

References

[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “Above the clouds: A Berkeley view of cloud computing,” University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28, Feb. 2009.

[2] P. Mell and T. Grance, “The NIST definition of cloud computing,” ver. 15, National Institute of Standards and Technology, Information Technology Laboratory, Oct. 7, 2009.

[3] C.-H. R. Lin, H.-J. Liao, K.-Y. Tung, Y.-C. Lin, and S.-L. Wu, “Network traffic analysis with cloud platform,” J. Internet Technol., vol. 13, no. 6, pp. 953–961, Dec. 2012.

[4] Q. Huang, C. Yang, D. Nebert, K. Liu, and H. Wu, “Cloud computing for geosciences: Deployment of GEOSS clearinghouse on Amazon’s EC2,” in Proc. ACM SIGSPATIAL Int. Workshop on High Performance and Distributed Geographic Information Systems, Nov. 2010, pp. 35–38.

[5] K.-H. Kim, S.-J. Lee, and P. Congdon, “On cloud-centric network architecture for multi-dimensional mobility,” ACM SIGCOMM Comput. Commun. Rev., vol. 42, no. 4, pp. 1–6, Oct. 2012.

[6] S. Poehlein, V. Saxena, G. T. Willis, J. Fedders, and M. Guttmann. (2010, Aug. 15). Moving to the Media Cloud [Online].

[7] W. Zhu, C. Luo, J. Wang, and S. Li, “Multimedia cloud computing,” IEEE Signal Process. Mag., vol. 28, no. 3, pp. 59–69, May 2011.

[8] C.-F. Lai, H. Wang, H.-C. Chao, and G. Nan, “A network and device aware QoS approach for cloud-based mobile streaming,” IEEE Trans. Multimedia, vol. 15, no. 4, pp. 747–757, Jun. 2013.

[9] Cisco Visual Networking Index: Forecast and Methodology, 2011–2016 [retrieved: May 30, 2012]. [Online].

[10] J. Jiang, Y. Wu, X. Huang, G. Yang, and W. Zheng, “Online video playing on smartphones: A context-aware approach based on cloud computing,” J. Internet Technol., vol. 11, no. 6, pp. 821–828, Nov. 2010.


[11] Y.-X. Lai, C.-F. Lai, C.-C. Hu, H.-C. Chao, and Y.-M. Huang, “A personalized mobile IPTV system with seamless video reconstruction algorithm in cloud networks,” Int. J. Commun. Syst., vol. 24, no. 10, pp. 1375–1387, Oct. 2011.

[12] V. Aggarwal, X. Chen, V. Gopalakrishnan, R. Jana, K. Ramakrishnan, and V. Vaishampayan, “Exploiting virtualization for delivering cloud-based IPTV services,” in Proc. IEEE INFOCOM Workshop Cloud Comput., 2011, pp. 637–641.

[13] C.-F. Lai, Y.-M. Huang, and H.-C. Chao, “DLNA-based multimedia sharing system over OSGI framework with extension to P2P network,” IEEE Syst. J., vol. 4, no. 2, pp. 262–270, Jun. 2010.

[14] M. Tan and X. Su, “Media cloud: When media revolution meets rise of cloud computing,” in Proc. IEEE 6th Int. Symp. Service Oriented Syst. Eng., 2011, pp. 251–261.

[15] YouTube. YouTube web site [Online].

[16] Netflix. Netflix web site [Online].

[17] Hulu. Hulu web site [Online].

[18] J. Hennessy and D. Patterson, Computer Architecture: A Quantitative Approach, 4th ed. San Francisco, CA: Morgan Kaufmann Publishers Inc., 2011.

[19] D. Sacks, “Demystifying storage networking,” IBM, Tech. Rep., 2001.

[20] M. Mesnier, G. R. Ganger, and E. Riedel, “Object-based storage,” IEEE Commun. Mag., vol. 41, no. 8, pp. 84–90, Aug. 2003.

[21] Easiest Online Backup Service. Backblaze web site [Online]. Available: http://www.backblaze.com.

[22] Network Appliance, Inc. NetApp web site [Online].

[23] F. Schmuck and R. Haskin, “GPFS: A shared-disk file system for large computing clusters,” in Proc. 1st USENIX Conf. File Storage Technol., USENIX Association, 2002, p. 19.

[24] R. Sandberg, “Design and implementation of the Sun Network Filesystem,” Sun Microsystems.

[25] K. Shvachko. The Hadoop Distributed File System. Yahoo-Inc.com.

[26] Kosmix. Kosmos distributed file system (KFS) [Online].

[27] D. Nagle, D. Serenyi, and A. Matthews, “The Panasas ActiveScale storage cluster: Delivering scalable high bandwidth storage,” in Proc. ACM/IEEE Conf. Supercomput., 2004, p. 53.

[28] W. Yu, S. Liang, and D. K. Panda, “High performance support of parallel virtual file system (PVFS2) over Quadrics,” in Proc. 19th Ann. Int. Conf. Supercomput., 2005, pp. 323–331.

[29] S. R. Soltis, T. M. Ruwart, G. M. Erickson, K. W. Preslan, and M. T. O’Keefe, “The global file system,” in High Performance Mass Storage and Parallel I/O: Technologies and Applications, H. Jin, T. Cortes, and R. Buyya, Eds. New York, NY: IEEE Computer Society Press and Wiley, 2002, ch. 23, pp. 344–363.

[30] Red Hat Global File System. White Paper [Online].

[31] C. R. Lumb, A. Merchant, and G. A. Alvarez, “Facade: Virtual storage devices with performance guarantees,” in Proc. USENIX Conf. File Storage Technol., 2003, pp. 131–144.

[32] S. Uttamchandani, L. Yin, G. A. Alvarez, J. Palmer, and G. Agha, “CHAMELEON: A self-evolving, fully-adaptive resource arbitrator for storage systems,” in Proc. USENIX Ann. Tech. Conf., 2005, pp. 75–88.

[33] M. Karlsson, C. Karamanolis, and X. Zhu, “Triage: Performance differentiation for storage systems using adaptive control,” ACM Trans. Storage, vol. 1, no. 4, pp. 457–480, Nov. 2005.

[34] Iozone Filesystem Benchmark. Iozone web site [Online]. Available: http://www.iozone.org/.

Yen-Ming Chu received the Ph.D. degree in communications engineering from National Tsing-Hua University, Taiwan, in 2010.

Since March 2011, he has been with the Department of Computer Science and Information Engineering, De-Lin Institute of Technology, Taipei, Taiwan, where he is an Assistant Professor. From 2001 to 2005, he served as an Assistant Researcher in the Telecommunication Laboratories of Chunghwa Telecom Co., Ltd., and he worked at NetXtream Corp. from 2008 to 2010. His current research interests include network security, multimedia networking, and network science.

Nen-Fu Huang (SM’06) received the B.S.E.E. degree from National Cheng Kung University, Taiwan, in 1981 and the M.S. and Ph.D. degrees in computer science from National Tsing-Hua University, Taiwan, in 1983 and 1986, respectively.

Since 2008, he has been a Distinguished Professor at National Tsing-Hua University, Taiwan. He has published more than 200 journal and conference papers, developed many pioneering, world-class high-speed network and security systems, and established strong cooperation with industry, including technology transfer and joint development projects. His current research interests include network security, high-speed switches/routers, mobile networks, IPv6, and cloud/P2P-based video streaming technology.

Dr. Huang is one of the Guest Editors of the Special Issue on Bandwidth Management on High-Speed Networks of Computer Communications.

Sheng-Hsiung Lin received the B.S. degree in computer science from National Taipei University of Education, Taiwan, in 2009 and the M.S. degree in computer science from National Tsing-Hua University, Taiwan, in 2011.

Since 2011, he has been a Software Engineer with Gigabyte Technology Co., Taiwan. His major focus is the development of the middleware framework for IPTV systems. His current research interests include distributed systems and streaming over P2P networks.