Cloud Computing (Principles and Paradigms): Introduction to Cloud Computing

PART I






When plugging an electric appliance into an outlet, we care neither how electric power is generated nor how it gets to that outlet. This is possible because electricity is virtualized; that is, it is readily available from a wall socket that hides power generation stations and a huge distribution grid. When extended to information technologies, this concept means delivering useful functions while hiding how their internals work. Computing itself, to be considered fully virtualized, must allow computers to be built from distributed components such as processing, storage, data, and software resources [1].

Technologies such as cluster, grid, and now cloud computing have all aimed at allowing access to large amounts of computing power in a fully virtualized manner, by aggregating resources and offering a single system view. In addition, an important aim of these technologies has been delivering computing as a utility. Utility computing describes a business model for on-demand delivery of computing power; consumers pay providers based on usage (pay-as-you-go), similar to the way in which we currently obtain services from traditional public utilities such as water, electricity, gas, and telephony.
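The economics of the pay-as-you-go model can be shown with a back-of-the-envelope calculation. The sketch below is illustrative only; the hourly rate, server price, and lifetime are hypothetical figures, not any provider's actual pricing.

```python
# Compare pay-as-you-go utility pricing with owning fixed capacity.
# All rates below are hypothetical, chosen only to illustrate the model.

HOURLY_RATE = 0.10             # $ per server-hour when renting from a utility
OWNED_COST = 5000.0            # $ up-front price of one owned server
LIFETIME_HOURS = 3 * 365 * 24  # assume a 3-year useful life for owned hardware

def utility_cost(hours_used: float) -> float:
    """Utility model: pay only for the hours actually consumed."""
    return hours_used * HOURLY_RATE

def owned_cost_per_used_hour(hours_used: float) -> float:
    """Ownership model: the full price is paid regardless of actual use."""
    return OWNED_COST / hours_used

# A light workload that runs 2 hours per day for a year:
hours = 2 * 365
print(f"utility: ${utility_cost(hours):,.2f} total")
print(f"owned:   ${OWNED_COST:,.2f} total "
      f"(${owned_cost_per_used_hour(hours):.2f} per used hour)")
```

Under these assumed numbers, intermittent use strongly favors the utility model, while a workload that keeps a server busy around the clock for its whole lifetime can favor ownership; this is the same trade-off consumers face with any metered public utility.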

Cloud computing has been coined as an umbrella term to describe a category of sophisticated on-demand computing services initially offered by commercial providers such as Amazon, Google, and Microsoft. It denotes a model in which a computing infrastructure is viewed as a "cloud," from which businesses and individuals access applications from anywhere in the world on demand [2]. The main principle behind this model is offering computing, storage, and software "as a service."

Cloud Computing: Principles and Paradigms, Edited by Rajkumar Buyya, James Broberg, and Andrzej Goscinski. Copyright © 2011 John Wiley & Sons, Inc.


Many practitioners in the commercial and academic spheres have attempted to define exactly what cloud computing is and what unique characteristics it presents. Buyya et al. [2] have defined it as follows: "Cloud is a parallel and distributed computing system consisting of a collection of inter-connected and virtualised computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements (SLA) established through negotiation between the service provider and consumers."

Vaquero et al. [3] have stated that "clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized Service Level Agreements."

A recent McKinsey and Co. report [4] claims that "Clouds are hardware-based services offering compute, network, and storage capacity where: hardware management is highly abstracted from the buyer, buyers incur infrastructure costs as variable OPEX, and infrastructure capacity is highly elastic."

A report from the University of California, Berkeley [5] summarized the key characteristics of cloud computing as: (1) the illusion of infinite computing resources; (2) the elimination of an up-front commitment by cloud users; and (3) the ability to pay for use . . . as needed . . .

The National Institute of Standards and Technology (NIST) [6] characterizes cloud computing as ". . . a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

In a more generic definition, Armbrust et al. [5] define cloud as the "data center hardware and software that provide services." Similarly, Sotomayor et al. [7] point out that "cloud" is more often used to refer to the IT infrastructure deployed on an Infrastructure as a Service provider data center.

While there are countless other definitions, there seem to be common characteristics among the most notable ones listed above, which a cloud should have: (i) pay-per-use (no ongoing commitment, utility prices); (ii) elastic capacity and the illusion of infinite resources; (iii) a self-service interface; and (iv) resources that are abstracted or virtualised.

In addition to raw computing and storage, cloud computing providers usually offer a broad range of software services. They also include APIs and development tools that allow developers to build seamlessly scalable applications upon their services. The ultimate goal is allowing customers to run their everyday IT infrastructure in the cloud.
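The self-service, metered character of such interfaces can be sketched in miniature. The class below is a toy model written for this introduction, not any real provider's API; the per-hour rate and method names are invented for illustration.

```python
import itertools

class ToyCloud:
    """Toy model of a self-service cloud: resources are provisioned on
    demand, released when done, and billed strictly by use."""

    RATE = 0.05  # hypothetical $ per instance-hour

    def __init__(self):
        self._ids = itertools.count(1)
        self._running = {}       # instance id -> hours accrued so far
        self.usage_hours = 0.0   # total metered usage across all instances

    def provision(self) -> int:
        """Self-service: no human negotiation; a resource appears on request."""
        iid = next(self._ids)
        self._running[iid] = 0.0
        return iid

    def run(self, iid: int, hours: float) -> None:
        """Accrue metered usage on a running instance."""
        self._running[iid] += hours

    def release(self, iid: int) -> float:
        """Releasing stops the meter and returns the pay-per-use bill."""
        hours = self._running.pop(iid)
        self.usage_hours += hours
        return hours * self.RATE

cloud = ToyCloud()
vm = cloud.provision()
cloud.run(vm, 10)        # ten instance-hours of work
bill = cloud.release(vm)
print(f"bill: ${bill:.2f}")
```

The point of the sketch is the shape of the interface, not the numbers: provision and release are programmatic calls a developer's code can make at any time, which is what lets applications scale themselves against the illusion of infinite resources.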

A lot of hype surrounded cloud computing in its infancy, and it has often been considered the most significant switch in the IT world since the advent of the Internet [8]. In the midst of such hype, a great deal of confusion arises when trying to define what cloud computing is and which computing infrastructures can be termed clouds.

Indeed, the long-held dream of delivering computing as a utility has been realized with the advent of cloud computing [5]. However, over the years, several technologies have matured and significantly contributed to making cloud computing viable. In this direction, this introduction tracks the roots of cloud computing by surveying the main technological advancements that significantly contributed to the advent of this emerging field. It also explains concepts and developments by categorizing and comparing the most relevant R&D efforts in cloud computing, especially public clouds, management tools, and development frameworks. The most significant practical cloud computing realizations are listed, with special focus on architectural aspects and innovative technical features.


1.2 Roots of Cloud Computing

We can track the roots of cloud computing by observing the advancement of several technologies, especially in hardware (virtualization, multi-core chips), Internet technologies (Web services, service-oriented architectures, Web 2.0), distributed computing (clusters, grids), and systems management (autonomic computing, data center automation). Figure 1.1 shows the convergence of technology fields that significantly advanced and contributed to the advent of cloud computing.

Some of these technologies were tagged as hype in their early stages of development; however, they later received significant attention from academia and were sanctioned by major industry players. Consequently, a specification and standardization process followed, leading to maturity and wide adoption. The emergence of cloud computing itself is closely linked to the maturity of such technologies. We present a closer look at the technologies that form the base of cloud computing, with the aim of providing a clearer picture of the cloud ecosystem as a whole.

    1.2.1 From Mainframes to Clouds

We are currently experiencing a switch in the IT world, from in-house generated computing power to utility-supplied computing resources delivered over the Internet as Web services. This trend is similar to what occurred about a century ago when factories, which used to generate their own electric power, realized that it was cheaper just to plug their machines into the newly formed electric power grid [8].

Computing delivered as a utility can be defined as "on demand delivery of infrastructure, applications, and business processes in a security-rich, shared, scalable, and standards-based computer environment over the Internet for a fee" [9].


This model brings benefits to both consumers and providers of IT services. Consumers can attain reductions in IT-related costs by choosing to obtain cheaper services from external providers as opposed to heavily investing in IT infrastructure and personnel hiring. The "on-demand" component of this model allows consumers to adapt their IT usage to rapidly increasing or unpredictable computing needs.

Providers of IT services achieve better operational costs; hardware and software infrastructures are built to provide multiple solutions and serve many users, thus increasing efficiency and ultimately leading to faster return on investment (ROI) as well as lower total cost of ownership (TCO) [10].

Several technologies have in some way aimed at turning the utility computing concept into reality. In the 1970s, companies that offered common data processing tasks, such as payroll automation, operated time-shared mainframes as utilities, which could serve dozens of applications and often operated close to 100% of their capacity. In fact, mainframes had to operate at very high utilization rates simply because they were very expensive and their cost had to be justified by efficient usage [8].
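The economics behind those utilization rates are easy to see: the fixed cost of the machine is spread over however many hours it actually works. The figures below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Why time-shared mainframes had to run near full capacity: the effective
# cost of a delivered CPU-hour is the fixed monthly cost divided by the
# hours actually used. The monthly cost figure is hypothetical.

MONTHLY_COST = 100_000.0    # $ to own and operate the machine per month
HOURS_PER_MONTH = 30 * 24   # 720 machine-hours available in a month

def cost_per_used_hour(utilization: float) -> float:
    """Effective $ per CPU-hour delivered at a given utilization (0..1]."""
    return MONTHLY_COST / (HOURS_PER_MONTH * utilization)

for u in (0.25, 0.50, 0.95):
    print(f"{u:>4.0%} utilization -> ${cost_per_used_hour(u):,.2f} per used hour")
```

Halving utilization doubles the effective price of every delivered hour, which is why operators packed dozens of time-shared applications onto one machine rather than letting expensive capacity sit idle.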

The mainframe era collapsed with the advent of fast and inexpensive microprocessors, and IT data centers moved to collections of commodity servers. Apart from its clear advantages, this new model inevitably led to isolation of workload into dedicated servers, mainly due to incompatibilities

FIGURE 1.1. Convergence of various advances leading to the advent of cloud computing. (The figure shows four converging fields: hardware, with hardware virtualization and multi-core chips; Internet technologies, with SOA, Web services, Web 2.0, and mashups; distributed computing, with utility and grid computing; and systems management, with autonomic computing and data center automation.)



