

Scalable Cloud Computing

Keijo Heljanko

Department of Information and Computer Science
School of Science
Aalto University

[email protected]

6.6.2013


Business Drivers of Cloud Computing

- Large data centers allow for economies of scale:
  - Cheaper hardware purchases
  - Cheaper cooling of hardware
    - Example: Google paid 40 MEur for a Summa paper mill site in Hamina, Finland; the data center is cooled with sea water from the Baltic Sea
  - Cheaper electricity
  - Cheaper network capacity
  - Smaller number of administrators per computer
- Unreliable commodity hardware is used
  - Reliability is obtained by replicating hardware components, combined with a fault-tolerant software stack


Cloud Computing Technologies

A collection of technologies aimed at providing elastic "pay as you go" computing:

- Virtualization of computing resources: Amazon EC2, Eucalyptus, OpenNebula, OpenStack Compute, ...
- Scalable file storage: Amazon S3, GFS, HDFS, ...
- Scalable batch processing: Google MapReduce, Apache Hadoop, PACT, Microsoft Dryad, Google Pregel, Berkeley Spark, ...
- Scalable datastores: Amazon Dynamo, Apache Cassandra, Google Bigtable, HBase, ...
- Distributed consensus: Google Chubby, Apache ZooKeeper, ...
- Scalable Web application hosting: Google App Engine, Microsoft Azure, Heroku, ...


Clock Speed of Processors

- Herb Sutter: The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software. Dr. Dobb's Journal, 30(3), March 2005 (updated graph in August 2009).


Implications of the End of Free Lunch

- The clock speeds of microprocessors are not going to improve much in the foreseeable future
- The efficiency gains in single-threaded performance are going to be only moderate
- The number of transistors in a microprocessor is still growing at a high rate
- One of the main uses of these transistors has been to increase the number of computing cores in a processor
- The number of cores in a low-end workstation (such as those employed in large-scale datacenters) is going to keep steadily growing
- Programming models need to change to efficiently exploit all the available concurrency; scalability to a high number of cores/processors will need to be a major focus


Warehouse-scale Computing (WSC)

- The smallest unit of computation at Google scale is: a warehouse full of computers
- [WSC]: Luiz André Barroso, Urs Hölzle: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Morgan & Claypool Publishers, 2009. http://dx.doi.org/10.2200/S00193ED1V01Y200905CAC006
- The WSC book says: ". . . we must treat the datacenter itself as one massive warehouse-scale computer (WSC)."


Jeffrey Dean (Google): LADIS 2009 Keynote Failure Numbers

- LADIS 2009 keynote: "Designs, Lessons and Advice from Building Large Distributed Systems" http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf
- At warehouse scale, failures will be the norm, and thus fault-tolerant software is inevitable
- Hypothetically assume that the server mean time between failures (MTBF) were 30 years (a highly optimistic and unrealistic number!). In a WSC with 10,000 nodes, roughly one node will then fail per day (10,000 / (30 × 365) ≈ 0.9 failures per day)
- Typical yearly flakiness metrics from J. Dean (Google, 2009):
  - 1-5% of your disk drives will die
  - Servers will crash at least twice (2-4% failure rate)


Big Data

- As of May 2009, the amount of digital content in the world was estimated to be 500 exabytes (500 million TB)
- An EMC-sponsored study by IDC in 2007 estimated the amount of information created in 2010 to be 988 EB
- Worldwide estimated hard disk sales in 2010: ≈ 675 million units
- Data comes from: video, digital images, sensor data, biological data, Internet sites, social media, ...
- The problem of such large data masses, termed Big Data, calls for new approaches to the storage and processing of data


Example: Simple Word Search

- Example: Suppose you need to search, using a compute cluster, for a word that occurs only once in 2 TB worth of text
- Assuming a 100 MB/s read speed, in the worst case reading all the data from a single 2 TB disk takes ≈ 5.5 hours
- If 100 hard disks can be used in parallel, the same task takes less than four minutes (see the sketch below)
- Scaling using hard disk parallelism is one of the design goals of scalable batch processing in the cloud
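
A back-of-the-envelope sketch of this arithmetic in Java (the 2 TB size, 100 MB/s read speed, and 100-disk count are the slide's assumptions):

    // ScanTime.java: worst-case sequential scan times for the word search example.
    public class ScanTime {
        public static void main(String[] args) {
            double dataMB = 2_000_000.0;   // 2 TB of text, expressed in MB
            double readMBps = 100.0;       // assumed sequential read speed per disk
            double oneDisk = dataMB / readMBps;      // seconds with a single disk
            double hundredDisks = oneDisk / 100.0;   // seconds with 100 disks in parallel
            System.out.printf("1 disk:    %.1f hours%n", oneDisk / 3600.0);      // ~5.6 hours
            System.out.printf("100 disks: %.1f minutes%n", hundredDisks / 60.0); // ~3.3 minutes
        }
    }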


Google MapReduce

- A scalable batch processing framework developed at Google for computing the Web index
- When dealing with Big Data (a substantial portion of the Internet in the case of Google!), the only viable option is to use hard disks in parallel to store and process it
- Some of the challenges for storage come from Web services that store and distribute pictures and videos
- We need a system that can effectively utilize hard disk parallelism and hide hard disk and other component failures from the programmer


Google MapReduce (cnt.)

- MapReduce is tailored for batch processing with hundreds to thousands of machines running in parallel; typical job runtimes range from minutes to hours
- As an added bonus, we would like to get increased programmer productivity compared to each programmer developing their own tools for utilizing hard disk parallelism


Google MapReduce (cnt.)

- The MapReduce framework takes care of all issues related to parallelization, synchronization, load balancing, and fault tolerance. All these details are hidden from the programmer
- The system needs to be linearly scalable to thousands of nodes working in parallel. The only way to do this is to have a very restricted programming model where the communication between nodes happens in a carefully controlled fashion
- Apache Hadoop is an open source MapReduce implementation used by Yahoo!, Facebook, and Twitter


MapReduce and Functional Programming

- Based on functional programming in the large:
  - The user is only allowed to write side-effect-free "Map" and "Reduce" functions
  - Re-execution is used for fault tolerance: if a node executing a Map or a Reduce task fails to produce a result due to a hardware failure, the task is re-executed on another node
  - Side effects in the functions would make this impossible, as one could not re-create the environment in which the original task executed
  - One just needs fault-tolerant storage of the task inputs
- The functions themselves are usually written in a standard imperative programming language, usually Java


Why No Side-Effects?

- Side-effect-free programs produce the same output regardless of the number of computing nodes used by MapReduce
- Running the code on one machine for debugging purposes produces the same results as running the same code in parallel
- It is easy to introduce side effects into MapReduce programs, as the framework does not enforce a strict programming methodology. However, the behavior of such programs is undefined by the framework, and side effects should therefore be avoided


Yahoo! MapReduce Tutorial

- We use figures from the excellent Yahoo! MapReduce tutorial [YDN-MR], available from: http://developer.yahoo.com/hadoop/tutorial/module4.html
- In functional programming, two list processing concepts are used:
  - Mapping a list with a function
  - Reducing a list with a function


Mapping a List

Mapping a list applies the mapping function to each list element (in parallel) and outputs the list of mapped list elements:

Figure: Mapping a List with a Map Function, Figure 4.1 from [YDN-MR]
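
Outside any MapReduce framework, the list-mapping concept can be sketched with the standard Java streams API (the toUpperCase mapping function is just an illustrative side-effect-free function):

    import java.util.List;
    import java.util.stream.Collectors;

    public class MapExample {
        public static void main(String[] args) {
            List<String> input = List.of("hello", "world", "bye");
            // Apply a side-effect-free function to every element independently.
            List<String> mapped = input.stream()
                                       .map(String::toUpperCase)
                                       .collect(Collectors.toList());
            System.out.println(mapped); // [HELLO, WORLD, BYE]
        }
    }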


Reducing a List

Reducing a list iterates over the list sequentially and produces an output created by the reduce function:

Figure: Reducing a List with a Reduce Function, Figure 4.2 from [YDN-MR]
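
Correspondingly, reducing a list can be sketched with the streams reduce operation (summation is just an illustrative reduce function):

    import java.util.List;

    public class ReduceExample {
        public static void main(String[] args) {
            List<Integer> values = List.of(1, 1, 2, 3);
            // Fold the whole list into a single output value with a binary reduce function.
            int sum = values.stream().reduce(0, Integer::sum);
            System.out.println(sum); // 7
        }
    }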


Grouping Map Output by Key to Reducer

In MapReduce the map function outputs (key, value) pairs. The MapReduce framework groups the map outputs by key, and gives each reduce function instance a (key, (..., list of values, ...)) pair as input. Note: each list of values having the same key is processed independently:

Figure: Keys Divide Map Output to Reducers, Figure 4.3 from [YDN-MR]
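
The grouping step itself can be sketched in plain Java by collecting (key, value) pairs into per-key value lists, each of which would then be reduced independently (the word-count-style pairs are illustrative):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class GroupByKey {
        public static void main(String[] args) {
            List<Map.Entry<String, Integer>> mapOutput = List.of(
                Map.entry("Hello", 1), Map.entry("World", 1),
                Map.entry("Hello", 1), Map.entry("Hadoop", 1));
            // Group the values by key; each per-key list is processed independently.
            Map<String, List<Integer>> grouped = mapOutput.stream()
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                         Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
            System.out.println(grouped); // e.g. {Hello=[1, 1], World=[1], Hadoop=[1]}
        }
    }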


MapReduce Data Flow

Practical MapReduce systems split the input data into large (64 MB+) blocks that are fed to user-defined map functions:

Figure: High Level MapReduce Dataflow, Figure 4.4 from [YDN-MR]


Recap: Map and Reduce Functions

- The framework only allows the user to write two functions: a "Map" function and a "Reduce" function
- The Map function is fed blocks of data (block size 64-128 MB), and it produces (key, value) pairs
- The framework groups all values with the same key into the (key, (..., list of values, ...)) format, and these are then fed to the Reduce function
- A special Master node takes care of scheduling and of fault tolerance by re-executing Mappers or Reducers


MapReduce Diagram

[Figure omitted: the MapReduce execution overview. A user program forks the Master and the Worker processes; the Master assigns map and reduce tasks to the Workers. Map workers read the input splits (split 0 ... split 4), write intermediate files to their local disks, and reduce workers read these files remotely and write the output files, one per reducer.]

Figure: J. Dean and S. Ghemawat: MapReduce: Simplified Data Processing on Large Clusters, OSDI 2004


Example: Word Count

- Classic word count example from the Hadoop MapReduce tutorial: http://hadoop.apache.org/common/docs/current/mapred_tutorial.html
- Consider doing a word count of the following file using MapReduce:

Hello World Bye World
Hello Hadoop Goodbye Hadoop


Example: Word Count (cnt.)

- Consider a Map function that reads in words one at a time and outputs (word, 1) for each parsed input word
- The Map function output is:

(Hello, 1)

(World, 1)

(Bye, 1)

(World, 1)

(Hello, 1)

(Hadoop, 1)

(Goodbye, 1)

(Hadoop, 1)


Example: Word Count (cnt.)

- The Shuffle phase between the Map and Reduce phases creates a list of values associated with each key
- The Reduce function input is:

(Bye, (1))

(Goodbye, (1))

(Hadoop, (1, 1))

(Hello, (1, 1))

(World, (1, 1))


Example: Word Count (cnt.)

- Consider a Reduce function that sums the numbers in the list for each key and outputs (word, count) pairs (a Java sketch of these functions follows below). The output of the Reduce function is the output of the MapReduce job:

(Bye, 1)

(Goodbye, 1)

(Hadoop, 2)

(Hello, 2)

(World, 2)
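
These Map and Reduce functions correspond closely to the WordCount example in the Hadoop MapReduce tutorial cited earlier; a sketch of the two functions against the org.apache.hadoop.mapreduce API (the job driver is omitted here) looks roughly like this:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {
        // Map: emit (word, 1) for every token of the input split fed to this mapper.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: sum the list of counts grouped under each word, emit (word, count).
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }
    }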


Phases of MapReduce

1. A Master (in Hadoop terminology: Job Tracker) is started that coordinates the execution of a MapReduce job. Note: the Master is a single point of failure
2. The Master creates a predefined number M of Map workers, and assigns each one an input split to work on. It also later starts a predefined number R of Reduce workers
3. Input is assigned to a free Map worker one 64-128 MB split at a time, and each user-defined Map function is fed (key, value) pairs as input and also produces (key, value) pairs


Phases of MapReduce (cnt.)

4. Periodically the Map workers flush their (key, value) pairs to the local hard disks, partitioning them by key into R partitions (default: by hashing), one per Reduce worker
5. When all the input splits have been processed, a Shuffle phase starts, in which M × R file transfers are used to send all of the mapper outputs to the reducer handling each key partition. After a reducer receives its input files, it sorts (and groups) the (key, value) pairs by key
6. User-defined Reduce functions iterate over the (key, (..., list of values, ...)) lists, generating output (key, value) pairs in files, one per reducer (a driver sketch configuring the job follows below)
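
A hedged sketch of the corresponding job driver, showing where the input splits and the number R of reduce workers are configured (the class names refer to the WordCount sketch shown earlier; the input and output paths are taken from the command line):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCount.TokenizerMapper.class);
            job.setReducerClass(WordCount.IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            job.setNumReduceTasks(4);                                // R reduce workers
            FileInputFormat.addInputPath(job, new Path(args[0]));    // input, split into blocks for the mappers
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // one output file per reducer
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }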


Google MapReduce (cnt.)

- The user just supplies the Map and Reduce functions, nothing more
- The only means of communication between nodes is through the shuffle from a mapper to a reducer
- The framework can be used to implement a distributed sorting algorithm by using a custom partitioning function (see the sketch below)
- The framework does automatic parallelization and fault tolerance by using a centralized Job Tracker (Master) and a distributed filesystem that stores all data redundantly on the compute nodes
- It uses the functional programming paradigm to guarantee correctness of parallelization and to implement fault tolerance by re-execution
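
As an illustration of such a custom partitioning function, a Partitioner can route key ranges to specific reducers so that the sorted reducer output files, read in partition order, form a globally sorted result (a minimal sketch; the fixed alphabet split points are illustrative, real jobs would sample the key distribution):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Send key ranges to fixed reducers so that the sorted reducer outputs,
    // concatenated in partition order, yield a globally sorted result.
    public class RangePartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            String s = key.toString();
            char first = s.isEmpty() ? 'a' : Character.toLowerCase(s.charAt(0));
            // Illustrative fixed split points over the alphabet.
            int bucket = first < 'i' ? 0 : (first < 'q' ? 1 : 2);
            return Math.min(bucket, numPartitions - 1);
        }
    }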


Apache Hadoop

- An open source implementation of the MapReduce framework, originally developed by Doug Cutting and heavily used by e.g. Yahoo! and Facebook
- "Moving Computation is Cheaper than Moving Data": ship code to data, not data to code
- Map and Reduce workers are also storage nodes for the underlying distributed filesystem: job allocation is first tried to a node holding a copy of the data, and if that fails, to a node in the same rack (to maximize network bandwidth)
- Project Web page: http://hadoop.apache.org/


Apache Hadoop (cnt.)

- When deciding whether MapReduce is the correct fit for an algorithm, one has to remember the fixed data-flow pattern of MapReduce. The algorithm has to be efficiently mapped to this data-flow pattern in order to efficiently use the underlying computing hardware
- Builds reliable systems out of unreliable commodity hardware by replicating most components (exceptions: the Master/Job Tracker and the NameNode in the Hadoop Distributed File System)


Apache Hadoop (cnt.)

- Tuned for large (gigabytes of data) files
- Designed for very large, 1 PB+ data sets
- Designed for streaming data access in batch processing; designed for high bandwidth instead of low latency
- For scalability: NOT a POSIX filesystem
- Written in Java, runs as a set of user-space daemons


Hadoop Distributed Filesystem (HDFS)

- A distributed replicated filesystem: all data is replicated by default on three different DataNodes
- Inspired by the Google Filesystem
- Each node is usually a Linux compute node with a small number of hard disks (4-12)
- A single NameNode maintains the file locations; there are many DataNodes (1000+)


Hadoop Distributed Filesystem (cnt.)

- Any piece of data is available if at least one DataNode replica is up and running
- Rack optimized: by default one replica is written locally, a second in the same rack, and a third replica in another rack (to combat rack failures, e.g., of the rack switch or the rack power feed)
- Uses a large block size, 128 MB is a common default; designed for batch processing
- For scalability: a write-once, read-many filesystem


Implications of Write Once

- All applications need to be re-engineered to only do sequential writes (see the sketch after this list). Example systems working on top of HDFS:
  - HBase (Hadoop Database), a database system with only sequential writes, a Google Bigtable clone
  - The MapReduce batch processing system
  - Apache Pig and Hive data mining tools
  - Mahout machine learning libraries
  - Lucene and Solr full text search
  - Nutch web crawling
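
Concretely, an HDFS client writes a file sequentially once, closes it, and from then on only reads it; a minimal sketch with the HDFS Java API (the path and file contents are illustrative):

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteOnceExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/data/example.txt");  // illustrative path

            // Write the file once, sequentially, then close it.
            byte[] data = "Hello World Bye World\n".getBytes(StandardCharsets.UTF_8);
            try (FSDataOutputStream out = fs.create(path)) {
                out.write(data);
            }

            // From now on the file is only read, possibly many times and by many readers.
            byte[] buffer = new byte[data.length];
            try (FSDataInputStream in = fs.open(path)) {
                in.readFully(buffer);
            }
            System.out.print(new String(buffer, StandardCharsets.UTF_8));
        }
    }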


Two Large Hadoop Installations

- Yahoo! (2009): 4000 nodes, 16 PB raw disk, 64 TB RAM, 32K cores
- Facebook (2010): 2000 nodes, 21 PB storage, 64 TB RAM, 22.4K cores
  - 12 TB of (compressed) data added per day, 800 TB of (compressed) data scanned per day
  - A. Thusoo, Z. Shao, S. Anthony, D. Borthakur, N. Jain, J. Sen Sarma, R. Murthy, H. Liu: Data warehousing and analytics infrastructure at Facebook. SIGMOD Conference 2010: 1013-1020. http://doi.acm.org/10.1145/1807167.1807278


Cloud Software Project

- ICT SHOK Program Cloud Software (2010-2013)
- A large Finnish consortium, see: http://www.cloudsoftwareprogram.org/
- Case study at Aalto: CSC Genome Browser Cloud Backend
- Co-authors: Matti Niemenmaa, André Schumacher (Aalto University, Department of Information and Computer Science), Aleksi Kallio, Eija Korpelainen, Taavi Hupponen, and Petri Klemelä (CSC — IT Center for Science)


CSC Genome Browser

- CSC provides tools and infrastructure for bioinformatics
- Bioinformatics is the largest customer group of CSC (in user numbers)
- Next-Generation Sequencing (NGS) produces large data sets (TB+)
- Cloud computing can be harnessed for analyzing these data sets
- 1000 Genomes project (http://www.1000genomes.org): a freely available 50 TB+ data set of human genomes


CSC Genome Browser

- Cloud computing technologies will enable scalable NGS data analysis
- There is prior research into sequence alignment and assembly in the cloud
- Visualization is needed in order to understand large data masses
- Interactive visualization for 100 GB+ datasets can only be achieved with preprocessing in the cloud


Genome Browser Requirements

- Interactive browsing with zooming in and out, "Google Earth" style
- Single datasets of 100 GB-1 TB+ with interactive visualization at different zoom levels
- Preprocessing is used to compute summary data for the higher zoom levels
- The dataset is too large to compute the summary data in real time from the real dataset
- The scalable cloud data processing paradigm MapReduce, implemented in Hadoop, is used to compute the summary data in preprocessing (currently up to 15 × 12 = 180 cores)


Genome Browser GUI


Experiments

- One 50 GB (compressed) BAM input file from the 1000 Genomes project
- Run on the Triton cluster on 1-15 compute nodes with 12 cores each
- Four repetitions for each number of worker nodes
- Two types of runs: sorting according to starting position ("sorted") and read aggregation using five summary-block sizes at once ("summarized")


Mean speedup

[Figure omitted: two mean-speedup plots against the number of workers (1, 2, 4, 8, 15). Left panel: "50 GB sorted"; right panel: "50 GB summarized for B=2,4,8,16,32". Each panel shows curves for input file import, sorting/summarizing, output file export, and total elapsed time, compared against the ideal linear speedup.]


Results

- An updated CSC Genome Browser GUI
- Chipster: tools and a framework for producing visualizable data
- An implementation of preprocessing for visualization using Hadoop
- Scalability studies for running Hadoop on 50 GB+ datasets on the Triton cloud testbed
- Software released, with more than 1200 downloads: http://sourceforge.net/projects/hadoop-bam/


Current Research Topics

- High-level programming frameworks for ad-hoc queries: Apache Pig integration of Hadoop-BAM in the SeqPig tool (http://sourceforge.net/projects/seqpig/), and SQL integration in Apache Hive
- Real-time SQL query evaluation on Big Data: Cloudera Impala and Berkeley Shark
- Scalable datastores: HBase (Hadoop Database) for real-time read/write database workloads
- Energy efficiency through better main memory and SSD utilization