Cloud Computing and MapReduce. Used slides from the RAD Lab at UC Berkeley about the cloud (http://abovetheclouds.cs.berkeley.edu/) and from Jimmy Lin's slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html) (licensed under the Creative Commons Attribution 3.0 License)


TRANSCRIPT

Page 1

Cloud Computing and MapReduce

Used slides from the RAD Lab at UC Berkeley about the cloud (http://abovetheclouds.cs.berkeley.edu/) and from Jimmy Lin's slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html) (licensed under the Creative Commons Attribution 3.0 License)

Page 2

Cloud computing

• What is the “cloud”?
  – Many answers. Easier to explain with examples:
    • Gmail is in the cloud
    • Amazon (AWS) EC2 and S3 are the cloud
    • Google AppEngine is the cloud
    • Windows Azure is the cloud
    • SimpleDB is in the cloud
    • The “network” (cloud) is the computer

Page 3

Cloud Computing

What about Wikipedia?

“Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). “

Page 4

Cloud properties

• Cloud offers:
  – Scalability: scale out vs scale up (also scale back)
  – Reliability (hopefully!)
  – Availability (24x7)
  – Elasticity: pay-as-you-go depending on your demand
  – Multi-tenancy

Page 5

more

• Scalability means that you (can) have effectively infinite resources and can handle an unlimited number of users

• Multi-tenancy enables sharing of resources and costs across a large pool of users. Lower cost, higher utilization… but other issues: e.g. security.

• Elasticity: you can add or remove compute nodes; the end user is not disrupted by removals and sees the benefit of additions quickly.

• Utility computing (similar to electrical grid)

Page 6

CLOUD COMPUTING ECONOMICS AND ELASTICITY

Page 7

Cloud Application Demand

• Many cloud applications have cyclical demand curves
  – Daily, weekly, monthly, …

• Workload spikes are more frequent and significant
  – Death of Michael Jackson: 22% of tweets, 20% of Wikipedia traffic; Google thought they were under attack
  – Obama inauguration day: 5x increase in tweets

[Figure: a spiky application demand curve over time]

Page 8

Economics of Cloud Users

• Pay by use instead of provisioning for peak

• Recall: a data center costs >$150M and takes 24+ months to design and build

[Figure: demand vs. capacity over time for a static data center (capacity fixed at peak, leaving unused resources) and for a data center in the cloud (capacity tracks demand)]

How do you pick a capacity level? (A toy cost comparison with made-up numbers follows below.)
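A tiny illustration of the pay-by-use point, with made-up numbers: compare provisioning a static data center for peak demand against paying per server-hour for a cyclical daily load. The demand curve and the price of $1 per server-hour below are hypothetical assumptions, not figures from the slides.

# Hypothetical hourly demand (servers needed) over one day: quiet at night, peak midday.
demand = [2, 2, 2, 2, 3, 5, 8, 12, 15, 18, 20, 20,
          19, 18, 16, 14, 12, 10, 8, 6, 5, 4, 3, 2]
PRICE_PER_SERVER_HOUR = 1.0  # made-up price

# Static data center: provision for the peak and pay for it around the clock.
static_cost = max(demand) * len(demand) * PRICE_PER_SERVER_HOUR

# Cloud: pay only for the server-hours actually used.
cloud_cost = sum(demand) * PRICE_PER_SERVER_HOUR

utilization = sum(demand) / (max(demand) * len(demand))
print(f"static (peak-provisioned) cost per day: ${static_cost:.0f}")  # $480
print(f"pay-as-you-go cost per day:             ${cloud_cost:.0f}")   # $226
print(f"utilization of the static capacity:     {utilization:.0%}")   # ~47%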

Page 9

Economics of Cloud Users

• Risk of over-provisioning: underutilization

• Huge sunk cost in infrastructure

[Figure: a static data center's fixed capacity sits above a fluctuating demand curve, leaving unused resources over time (days 1–3)]

Page 10

Utility Computing Arrives

[Figure: Northern VA cluster]

Page 11

Utility Storage Arrives

• Amazon S3 and Elastic Block Storage offer low-cost, contract-less storage

Page 12

Cloud Computing Infrastructure

• Computation model: MapReduce*

• Storage model: HDFS*

• Other computation models: HPC/Grid Computing

• Network structure

*Some material adapted from slides by Jimmy Lin, Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007 (licensed under the Creative Commons Attribution 3.0 License)

Page 13

Cloud Computing Computation Models

• Finding the right level of abstraction
  – von Neumann architecture vs cloud environment

• Hide system-level details from the developers
  – No more race conditions, lock contention, etc.

• Separating the what from the how
  – Developer specifies the computation that needs to be performed
  – Execution framework (“runtime”) handles actual execution

Page 14

“Big Ideas”

• Scale “out”, not “up”
  – Limits of SMP and large shared-memory machines

• Idempotent operations
  – Simplifies redo in the presence of failures

• Move processing to the data
  – Cluster has limited bandwidth

• Process data sequentially, avoid random access
  – Seeks are expensive, disk throughput is reasonable (see the sketch after this list)

• Seamless scalability for ordinary programmers
  – From the mythical man-month to the tradable machine-hour
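To put rough numbers on the seek-versus-throughput point above, here is a back-of-the-envelope sketch; the 10 ms seek time and 100 MB/s sequential throughput are illustrative assumptions, not measurements from the slides.

SEEK_TIME_S = 0.010            # assumed seek + rotational latency: 10 ms
SEQ_THROUGHPUT_BPS = 100e6     # assumed sequential throughput: 100 MB/s
TOTAL_BYTES = 10**9            # read 1 GB in total
RECORD_BYTES = 1_000           # 1 KB records for the random-access case

# Sequential: one seek, then stream the whole gigabyte.
sequential_s = SEEK_TIME_S + TOTAL_BYTES / SEQ_THROUGHPUT_BPS

# Random access: one seek per 1 KB record.
records = TOTAL_BYTES // RECORD_BYTES
random_s = records * (SEEK_TIME_S + RECORD_BYTES / SEQ_THROUGHPUT_BPS)

print(f"sequential read of 1 GB:   {sequential_s:.0f} s")   # ~10 s
print(f"1 GB as random 1 KB reads: {random_s:.0f} s")        # ~10,000 s (hours)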

Page 15

Typical Large-Data Problem

• Iterate over a large number of records

• Extract something of interest from each

• Shuffle and sort intermediate results

• Aggregate intermediate results

• Generate final output

Key idea: provide a functional abstraction for these two operations – Map (extract something of interest from each record) and Reduce (aggregate intermediate results) – MapReduce

(Dean and Ghemawat, OSDI 2004)

Page 16

MapReduce

• Programmers specify two functions:
  map (k, v) → <k’, v’>*
  reduce (k’, v’) → <k’, v’>*
  – All values with the same key are sent to the same reducer

• The execution framework handles everything else…

Page 17

MapReduce

[Figure: four map tasks turn input key-value pairs into intermediate pairs; “Shuffle and Sort: aggregate values by keys” groups all values for each key; three reduce tasks consume the grouped values and produce the results. A minimal Python sketch of this flow follows below.]
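The flow in the figure can be modeled in a few lines of plain Python. This is only a minimal in-memory sketch of the programming model, not the Hadoop or Google API; the function names are made up for illustration.

from collections import defaultdict

def map_reduce(records, mapper, reducer):
    # Map phase: apply the mapper to every (key, value) input record.
    intermediate = []
    for k, v in records:
        intermediate.extend(mapper(k, v))

    # Shuffle and sort: aggregate values by key.
    groups = defaultdict(list)
    for k, v in intermediate:
        groups[k].append(v)

    # Reduce phase: each key, with all of its values, goes to one reducer call.
    output = []
    for k in sorted(groups):
        output.extend(reducer(k, groups[k]))
    return output

A word-count mapper and reducer that plug into this kind of sketch appear with the “Hello World” example on Page 24.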

Page 18

MapReduce

• Programmers specify two functions:
  map (k, v) → <k’, v’>*
  reduce (k’, v’) → <k’, v’>*
  – All values with the same key are sent to the same reducer

• The execution framework handles everything else…

What’s “everything else”?

Page 19

MapReduce “Runtime”

• Handles scheduling
  – Assigns workers to map and reduce tasks

• Handles “data distribution”
  – Moves processes to data

• Handles synchronization
  – Gathers, sorts, and shuffles intermediate data

• Handles errors and faults
  – Detects worker failures and automatically restarts

• Handles speculative execution
  – Detects “slow” workers and re-executes work

• Everything happens on top of a distributed FS (later)

Sounds simple, but many challenges!

Page 20

MapReduce

• Programmers specify two functions:
  map (k, v) → <k’, v’>*
  reduce (k’, v’) → <k’, v’>*
  – All values with the same key are reduced together

• The execution framework handles everything else…

• Not quite… usually, programmers also specify:
  partition (k’, number of partitions) → partition for k’
  – Often a simple hash of the key, e.g., hash(k’) mod R
  – Divides up the key space for parallel reduce operations
  combine (k’, v’) → <k’, v’>*
  – Mini-reducers that run in memory after the map phase
  – Used as an optimization to reduce network traffic (a sketch follows below)
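A minimal sketch of how a hash partitioner and a combiner fit on the map side; the function names, the counting combiner, and the choice of R = 3 reduce partitions are illustrative assumptions, not part of the slides.

from collections import defaultdict

R = 3  # number of reduce partitions (illustrative choice)

def partition(key, num_partitions=R):
    # A simple hash of the key: hash(k') mod R.
    return hash(key) % num_partitions

def combine(key, values):
    # Mini-reducer run in memory on one mapper's output (here: sum the counts).
    yield (key, sum(values))

def map_side(mapper_output):
    # Group this mapper's pairs locally, combine them, then bucket by partition.
    local = defaultdict(list)
    for k, v in mapper_output:
        local[k].append(v)
    buckets = defaultdict(list)   # partition id -> pairs shipped to that reducer
    for k, vs in local.items():
        for ck, cv in combine(k, vs):
            buckets[partition(ck)].append((ck, cv))
    return buckets

# One mapper emitted ('a', 1) twice; the combiner collapses them to ('a', 2)
# before anything crosses the network.
print(dict(map_side([("a", 1), ("b", 1), ("a", 1)])))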

Page 21

[Figure: the same data flow as before, now with a combine step after each map task (local aggregation of that mapper's values) and a partition step that assigns each key to a reduce task before the shuffle and sort]

Page 22

Two more details…

• Barrier between map and reduce phases
  – But we can begin copying intermediate data earlier

• Keys arrive at each reducer in sorted order
  – No enforced ordering across reducers

Page 23

MapReduce Overall Architecture

[Figure: the user program submits the job to the master (1); the master schedules map and reduce tasks onto workers (2); map workers read the input splits (3) and write intermediate files to their local disks (4); reduce workers read the intermediate data remotely (5) and write the output files (6)]

Adapted from (Dean and Ghemawat, OSDI 2004)

Page 24

“Hello World” Example: Word Count

Map(String docid, String text):
    for each word w in text:
        Emit(w, 1)

Reduce(String term, Iterator<Int> values):
    int sum = 0
    for each v in values:
        sum += v
    Emit(term, sum)
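A runnable Python version of the same word count; the driver below reimplements the map / shuffle / reduce steps inline so the example is self-contained (the function names are illustrative, not the Hadoop API).

from collections import defaultdict

def word_count_map(docid, text):
    # Emit (word, 1) for each word in the document.
    for word in text.split():
        yield (word, 1)

def word_count_reduce(term, values):
    # Sum the counts for one term and emit (term, sum).
    yield (term, sum(values))

def run(documents):
    intermediate = []
    for docid, text in documents.items():          # map phase
        intermediate.extend(word_count_map(docid, text))
    groups = defaultdict(list)
    for k, v in intermediate:                      # shuffle and sort
        groups[k].append(v)
    return [kv for k in sorted(groups)             # reduce phase
            for kv in word_count_reduce(k, groups[k])]

print(run({"d1": "the cloud is the computer", "d2": "the cloud"}))
# [('cloud', 2), ('computer', 1), ('is', 1), ('the', 3)]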

Page 25

MapReduce can refer to…

• The programming model

• The execution framework (aka “runtime”)

• The specific implementation

Usage is usually clear from context!

Page 26

MapReduce Implementations

• Google has a proprietary implementation in C++
  – Bindings in Java, Python

• Hadoop is an open-source implementation in Java
  – Development led by Yahoo, used in production
  – Now an Apache project
  – Rapidly expanding software ecosystem, but still lots of room for improvement

• Lots of custom research implementations
  – For GPUs, cell processors, etc.

Page 27

Cloud Computing Storage, or how do we get data to the workers?

[Figure: compute nodes accessing shared storage over the network (NAS / SAN)]

What’s the problem here?

Page 28

Distributed File System

• Don’t move data to workers… move workers to the data!
  – Store data on the local disks of nodes in the cluster
  – Start up the workers on the node that has the data local

• Why?
  – Network bisection bandwidth is limited
  – Not enough RAM to hold all the data in memory
  – Disk access is slow, but disk throughput is reasonable

• A distributed file system is the answer
  – GFS (Google File System) for Google’s MapReduce
  – HDFS (Hadoop Distributed File System) for Hadoop

Page 29

GFS: Assumptions

• Choose commodity hardware over “exotic” hardware
  – Scale “out”, not “up”

• High component failure rates
  – Inexpensive commodity components fail all the time

• “Modest” number of huge files
  – Multi-gigabyte files are common, if not encouraged

• Files are write-once, mostly appended to
  – Perhaps concurrently

• Large streaming reads over random access
  – High sustained throughput over low latency

GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

Page 30

GFS: Design Decisions

• Files stored as chunks
  – Fixed size (64MB)

• Reliability through replication
  – Each chunk replicated across 3+ chunkservers (a toy sketch follows below)

• Single master to coordinate access, keep metadata
  – Simple centralized management

• No data caching
  – Little benefit due to large datasets, streaming reads

• Simplify the API
  – Push some of the issues onto the client (e.g., data layout)

HDFS = GFS clone (same basic ideas implemented in Java)
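A toy sketch of the chunking and replication idea: split a file into fixed 64MB chunks and place each chunk on three distinct chunkservers. The server names, random placement policy, and function names are illustrative assumptions; real GFS placement also considers factors such as rack locality and disk utilization.

import random

CHUNK_SIZE = 64 * 1024 * 1024   # fixed 64MB chunks
REPLICAS = 3                    # each chunk stored on 3+ chunkservers

CHUNKSERVERS = [f"chunkserver-{i}" for i in range(8)]   # hypothetical cluster

def place_chunks(file_size_bytes):
    # Return, for each chunk of the file, the chunkservers holding a replica.
    num_chunks = -(-file_size_bytes // CHUNK_SIZE)       # ceiling division
    return [random.sample(CHUNKSERVERS, REPLICAS) for _ in range(num_chunks)]

# A 200MB file needs ceil(200/64) = 4 chunks, each replicated 3 ways.
for i, servers in enumerate(place_chunks(200 * 1024 * 1024)):
    print(f"chunk {i}: {servers}")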

Page 31

From GFS to HDFS

• Terminology differences:
  – GFS master = Hadoop namenode
  – GFS chunkservers = Hadoop datanodes

• Functional differences:
  – No file appends in HDFS (planned feature)
  – HDFS performance is (likely) slower

Page 32

HDFS Architecture

[Figure: the application talks to an HDFS client; the client sends (file name, block id) to the HDFS namenode and gets back (block id, block location); it then sends (block id, byte range) to an HDFS datanode and receives the block data; the namenode keeps the file namespace (e.g., /foo/bar → block 3df2) and exchanges instructions and datanode state with the datanodes, each of which stores blocks on a local Linux file system. A toy sketch of this read path follows below.]

Adapted from (Ghemawat et al., SOSP 2003)
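A toy model of the read path in the figure: the client asks the namenode where a file's blocks live, then fetches the bytes from a datanode, so no data flows through the namenode. The dictionaries and names below are invented for illustration; this is not the real HDFS RPC interface.

namenode = {
    # file name -> ordered list of block ids
    "namespace": {"/foo/bar": ["blk_3df2", "blk_91a0"]},
    # block id -> datanodes holding a replica
    "locations": {"blk_3df2": ["datanode-1", "datanode-2"],
                  "blk_91a0": ["datanode-2", "datanode-3"]},
}

datanodes = {
    # datanode -> block id -> bytes stored on its local Linux file system
    "datanode-1": {"blk_3df2": b"hello "},
    "datanode-2": {"blk_3df2": b"hello ", "blk_91a0": b"world"},
    "datanode-3": {"blk_91a0": b"world"},
}

def read_file(path):
    # Client: get block locations from the namenode, then read each block
    # directly from a datanode; the namenode never touches the data itself.
    data = b""
    for block_id in namenode["namespace"][path]:
        location = namenode["locations"][block_id][0]    # pick one replica
        data += datanodes[location][block_id]
    return data

print(read_file("/foo/bar"))   # b'hello world'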

Page 33

Namenode Responsibilities

• Managing the file system namespace:
  – Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc.

• Coordinating file operations:
  – Directs clients to datanodes for reads and writes
  – No data is moved through the namenode

• Maintaining overall health:
  – Periodic communication with the datanodes
  – Block re-replication and rebalancing
  – Garbage collection

Page 34

Putting everything together…

[Figure: each slave node runs a datanode daemon (on top of the local Linux file system) and a tasktracker; the namenode machine runs the namenode daemon; the job submission node runs the jobtracker]

Page 35

MapReduce/GFS Summary

• Simple, but powerful programming model

• Scales to handle petabyte+ workloads
  – Google: six hours and two minutes to sort 1PB (10 trillion 100-byte records) on 4,000 computers (a rough throughput check follows at the end of this page)
  – Yahoo!: 16.25 hours to sort 1PB on 3,800 computers

• Incremental performance improvement with more nodes

• Seamlessly handles failures, but possibly with performance penalties
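A rough sanity check on the Google sort numbers quoted above (a back-of-the-envelope calculation added here for illustration, not from the slides):

PETABYTE = 10**15                  # 1 PB = 10 trillion 100-byte records
seconds = 6 * 3600 + 2 * 60        # six hours and two minutes
machines = 4000

aggregate_bytes_per_s = PETABYTE / seconds          # ~46 GB/s across the cluster
per_machine_mb_per_s = aggregate_bytes_per_s / machines / 1e6

print(f"aggregate sort throughput: {aggregate_bytes_per_s / 1e9:.0f} GB/s")
print(f"per machine:               {per_machine_mb_per_s:.1f} MB/s")   # ~11.5 MB/s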