Data-Intensive Computing: From Clouds to GPUs
Gagan Agrawal
March 22, 2022


Page 1: Data-Intensive Computing: From Clouds to GPUs

Data-Intensive Computing: From Clouds to GPUs

Gagan Agrawal


Page 2: Data-Intensive Computing: From Clouds to GPUs


Motivation

• Growing need for analysis of large-scale data
  – Scientific
  – Commercial
• Data-Intensive Supercomputing (DISC)
• Map-Reduce has received a lot of attention
  – Database and data mining communities
  – High-performance computing community

Page 3: Data-Intensive Computing: From Clouds to GPUs

Motivation (2)

• Processor architecture trends
  – Clock speeds are not increasing
  – Trend towards multi-core architectures
  – Accelerators like GPUs are very popular
  – Clusters of multi-cores / GPUs are becoming common
• Trend towards the cloud
  – Use storage/computing/services from a provider
  – How many of you prefer gmail over a cse/osu account for e-mail?
  – Utility model of computation
  – Need high-level APIs and adaptation

Page 4: Data-Intensive Computing: From Clouds to GPUs

My Research Group

• Data-intensive theme at multiple levels
  – Parallel programming models
  – Multi-cores and accelerators
  – Adaptive middleware
  – Scientific data management / workflows
  – Deep Web integration and analysis

Page 5: Data-Intensive Computing: From Clouds to GPUs

Personnel

• Currently
  – 10 PhD students
  – 4 MS thesis students
• Graduated PhDs
  – 7 graduated PhDs between 2005 and 2008

Page 6: Data-Intensive Computing: From Clouds to GPUs

This Talk

• Parallel programming API for data-intensive computing
  – An alternative API and system for Google's Map-Reduce
  – Showing an actual comparison
• Data-intensive computing on accelerators
  – Compilation for GPUs
• Overview of other topics
  – Scientific data management
  – Adaptive and streaming middleware
  – Deep Web

Page 7: Data-Intensive Computing: From Clouds to GPUs

Map-Reduce

• Simple API for (data-intensive) parallel programming
• Computation is:
  – Apply map on each input data element
  – Produce (key, value) pair(s)
  – Sort the pairs by key
  – Apply reduce to the set of values associated with each distinct key
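The four steps above can be sketched in plain Python (a toy, single-process sketch, not Hadoop's or Google's actual API; `map_phase` and `reduce_phase` are illustrative names):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records, map_fn):
    # Apply map to each input element; collect (key, value) pairs.
    pairs = []
    for r in records:
        pairs.extend(map_fn(r))
    return pairs

def reduce_phase(pairs, reduce_fn):
    # Sort by key, then apply reduce to each group of values.
    pairs.sort(key=itemgetter(0))
    return {k: reduce_fn(k, [v for _, v in grp])
            for k, grp in groupby(pairs, key=itemgetter(0))}

# Word count: map emits (word, 1); reduce sums the counts.
lines = ["the quick brown fox", "the lazy dog"]
pairs = map_phase(lines, lambda line: [(w, 1) for w in line.split()])
counts = reduce_phase(pairs, lambda k, vals: sum(vals))
# counts["the"] == 2
```

Note that the sort between the two phases is inherent to the model; it is exactly the overhead the later slides measure.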

Page 8: Data-Intensive Computing: From Clouds to GPUs


Map-Reduce Execution

Page 9: Data-Intensive Computing: From Clouds to GPUs

Map-Reduce: Positives and Questions

• Positives:
  – Simple API
    • Functional-language based
    • Very easy to learn
  – Support for fault tolerance
    • Important for very large-scale clusters
• Questions:
  – Performance?
    • Comparison with other approaches
  – Suitability for different classes of applications?

Page 10: Data-Intensive Computing: From Clouds to GPUs

Class of Data-Intensive Applications

• Many different types of applications
  – Data-center kind of applications
    • Data scans, sorting, indexing
  – More ``compute-intensive'' data-intensive applications
    • Machine learning, data mining, NLP
    • Map-Reduce / Hadoop being widely used for this class
  – Standard database operations
    • SIGMOD 2009 paper compares Hadoop with databases and OLAP systems
• What is Map-Reduce suitable for? What are the alternatives?
  – MPI/OpenMP/Pthreads – too low level?

Page 11: Data-Intensive Computing: From Clouds to GPUs

Hadoop Implementation

• HDFS
  – Almost GFS, but no file update
  – Cannot be directly mounted by an existing operating system
  – Fault tolerance
• Name Node
• Job Tracker
• Task Tracker

Page 12: Data-Intensive Computing: From Clouds to GPUs

FREERIDE: Goals

• Framework for Rapid Implementation of Data Mining Engines
• The ability to rapidly prototype a high-performance mining implementation
  – Distributed-memory parallelization
  – Shared-memory parallelization
  – Ability to process disk-resident datasets
  – Only modest modifications to a sequential implementation for the above three
• Developed 2001-2003 at Ohio State

Page 13: Data-Intensive Computing: From Clouds to GPUs

FREERIDE – Technical Basis

• Popular data mining algorithms have a common canonical loop
• Generalized Reduction
  – Can be used as the basis for supporting a common API
• Demonstrated for popular data mining and scientific data processing applications

While( ) {
  forall (data instances d) {
    I = process(d)
    R(I) = R(I) op f(d)
  }
  …
}
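As a toy instance of this canonical loop, consider one k-means iteration in plain Python (a 1-D sketch with illustrative names, not the FREERIDE API): `process` picks the nearest center, and the reduction object accumulates per-center sums and counts.

```python
def kmeans_iteration(points, centers):
    k = len(centers)
    R = [(0.0, 0) for _ in range(k)]   # reduction object: (sum, count) per center
    for d in points:                   # forall (data instances d)
        # i = process(d): index of the nearest center
        i = min(range(k), key=lambda c: abs(d - centers[c]))
        s, n = R[i]
        R[i] = (s + d, n + 1)          # R(i) = R(i) op f(d)
    # New centers: mean of the points assigned to each center.
    return [s / n if n else centers[i] for i, (s, n) in enumerate(R)]

centers = kmeans_iteration([1.0, 2.0, 9.0, 11.0], [0.0, 10.0])
# points 1, 2 go to center 0 → 1.5; points 9, 11 go to center 1 → 10.0
```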

Page 14: Data-Intensive Computing: From Clouds to GPUs

Comparing Processing Structure

• Similar, but with subtle differences

Page 15: Data-Intensive Computing: From Clouds to GPUs

Observations on Processing Structure

• Map-Reduce is based on a functional idea
  – Does not maintain state
  – This can lead to sorting overheads
• FREERIDE API is based on a programmer-managed reduction object
  – Not as 'clean'
  – But avoids sorting
  – Can also help shared-memory parallelization
  – Helps better fault recovery
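The contrast above can be sketched in plain Python (a toy word count, not either system's real API; both function names are illustrative): the Map-Reduce version materializes and sorts an intermediate pair list, while the reduction-object version updates shared state in place.

```python
from collections import defaultdict

# Map-Reduce style: emit (key, value) pairs, sort/group by key, then reduce.
def wordcount_mapreduce(lines):
    pairs = [(w, 1) for line in lines for w in line.split()]  # map: emit (word, 1)
    pairs.sort(key=lambda kv: kv[0])                          # the sorting overhead
    counts = {}
    for k, v in pairs:                                        # reduce each key group
        counts[k] = counts.get(k, 0) + v
    return counts

# FREERIDE style: a programmer-managed reduction object updated in place;
# no intermediate (key, value) list and no sorting step.
def wordcount_reduction_object(lines):
    R = defaultdict(int)                                      # reduction object
    for line in lines:
        for w in line.split():
            R[w] += 1                                         # R(i) = R(i) op val
    return dict(R)

lines = ["a rose is a rose"]
assert wordcount_mapreduce(lines) == wordcount_reduction_object(lines)
```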

Page 16: Data-Intensive Computing: From Clouds to GPUs

Experiment Design

• Tuning parameters in Hadoop
  – Input split size
  – Max number of concurrent map tasks per node
  – Number of reduce tasks
• For comparison, we used four applications
  – Data mining: k-means, kNN, Apriori
  – Simple data scan application: word count
• Experiments on a multi-core cluster
  – 8 cores per node (8 map tasks)

Page 17: Data-Intensive Computing: From Clouds to GPUs

Results – Data Mining

[Chart: k-means, varying # of nodes (4, 8, 16); avg. time per iteration (sec), Hadoop vs FREERIDE. Dataset: 6.4 GB, K: 1000, Dim: 3]

Page 18: Data-Intensive Computing: From Clouds to GPUs

Results – Data Mining (II)

[Chart: Apriori, varying # of nodes (4, 8, 16); avg. time per iteration (sec), Hadoop vs FREERIDE. Dataset: 900 MB, support level: 3%, confidence level: 9%]

Page 19: Data-Intensive Computing: From Clouds to GPUs

Results – Data Mining (III)

[Chart: kNN, varying # of nodes (4, 8, 16); avg. time per iteration (sec), Hadoop vs FREERIDE. Dataset: 6.4 GB, K: 1000, Dim: 3]

Page 20: Data-Intensive Computing: From Clouds to GPUs

Results – Datacenter-like Application

[Chart: word count, varying # of nodes (4, 8, 16); total time (sec), Hadoop vs FREERIDE. Dataset: 6.4 GB]

Page 21: Data-Intensive Computing: From Clouds to GPUs

Scalability Comparison

[Chart: k-means, varying dataset size (800 MB, 1.6 GB, 3.2 GB, 6.4 GB, 12.8 GB); avg. time per iteration (sec), Hadoop vs FREERIDE. K: 100, Dim: 3, on 8 nodes]

Page 22: Data-Intensive Computing: From Clouds to GPUs

Scalability – Word Count

[Chart: word count, varying dataset size (800 MB, 1.6 GB, 3.2 GB, 6.4 GB, 12.8 GB); total time (sec), Hadoop vs FREERIDE. On 8 nodes]

Page 23: Data-Intensive Computing: From Clouds to GPUs

Overhead Breakdown

• Four components affecting Hadoop performance
  – Initialization cost
  – I/O time
  – Sorting/grouping/shuffling
  – Computation time
• What is the relative impact of each?
  – An experiment with k-means

Page 24: Data-Intensive Computing: From Clouds to GPUs

Analysis with K-means

[Chart: varying the number of k-means clusters k (50, 200, 1000); avg. time per iteration (sec), Hadoop vs FREERIDE. Dataset: 6.4 GB, Dim: 3, on 16 nodes]

Page 25: Data-Intensive Computing: From Clouds to GPUs

Analysis with K-means (II)

[Chart: varying the number of dimensions (3, 6, 48, 96, 192); avg. time per iteration (sec), Hadoop vs FREERIDE. Dataset: 6.4 GB, K: 1000, on 16 nodes]

Page 26: Data-Intensive Computing: From Clouds to GPUs

Observations

• Initialization costs and limited I/O bandwidth of HDFS are significant in Hadoop
• Sorting is also an important limiting factor for Hadoop's performance

Page 27: Data-Intensive Computing: From Clouds to GPUs

This Talk

• Parallel programming API for data-intensive computing
  – An alternative API and system for Google's Map-Reduce
  – Showing an actual comparison
• Data-intensive computing on accelerators
  – Compilation for GPUs
• Overview of other topics
  – Scientific data management
  – Adaptive and streaming middleware
  – Deep Web

Page 28: Data-Intensive Computing: From Clouds to GPUs

Background - GPU Computing

• Many-core architectures/Accelerators are becoming more popular

• GPUs are inexpensive and fast

• CUDA is a high-level language for GPU programming

Page 29: Data-Intensive Computing: From Clouds to GPUs

CUDA Programming

• Significant improvement over use of graphics libraries
• But…
  – Need detailed knowledge of the GPU architecture and a new language
  – Must specify the grid configuration
  – Deal with memory allocation and movement
  – Explicit management of the memory hierarchy
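The grid configuration, for instance, is arithmetic the programmer must get right by hand; a minimal sketch in Python (illustrative numbers and names, not a CUDA API):

```python
# Given N data elements and a block size, compute how many thread
# blocks to launch so that every element gets a thread (ceiling division).
def grid_config(n_elements, threads_per_block=256):
    blocks = (n_elements + threads_per_block - 1) // threads_per_block
    return blocks, threads_per_block

blocks, tpb = grid_config(10_000)
# 10,000 elements with 256 threads per block → 40 blocks (40 * 256 = 10,240 threads)
```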

Page 30: Data-Intensive Computing: From Clouds to GPUs

Parallel Data Mining

• Common structure of data mining applications (FREERIDE)

/* outer sequential loop */
while() {
  /* Reduction loop */
  Foreach (element e) {
    (i, val) = process(e);
    Reduc(i) = Reduc(i) op val;
  }
}

Page 31: Data-Intensive Computing: From Clouds to GPUs

Porting on GPUs

• High-level parallelization is straightforward
• Details of data movement
• Impact of thread count on reduction time
• Use of shared memory

Page 32: Data-Intensive Computing: From Clouds to GPUs

Architecture of the System

[Diagram: user input (variable information, reduction functions, optional functions) feeds a code analyzer (in LLVM) containing a variable analyzer and a code generator; the analysis yields variable access patterns and combination operations; the generator emits the host program (grid configuration and kernel invocation) and the kernel functions, which are compiled into the executable]

Page 33: Data-Intensive Computing: From Clouds to GPUs

User Input

• A sequential reduction function
• Optional functions (initialization function, combination function, …)
• Values of each variable or size of each array
• Variables to be used in the reduction function

Page 34: Data-Intensive Computing: From Clouds to GPUs

Analysis of Sequential Code

• Get the access features of each variable
• Determine the data to be replicated
• Get the operator for global combination
• Variables for shared memory

Page 35: Data-Intensive Computing: From Clouds to GPUs

Memory Allocation and Copy

[Diagram: the reduction object is replicated so that each thread T0 … T63 updates its own copy]

• Need a copy for each thread
• Copy the updates back to host memory after the kernel reduction function returns
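The per-thread replication and final combination can be sketched in plain Python (a toy model with illustrative names, not the generated CUDA code): each simulated thread owns a private copy of the reduction object, and a global combination step merges the copies.

```python
def parallel_reduction(data, num_threads, process, op, identity):
    # Replication step: one private reduction object per "thread".
    copies = [dict() for _ in range(num_threads)]
    for t in range(num_threads):
        for e in data[t::num_threads]:       # thread t's share of the data
            i, val = process(e)
            copies[t][i] = op(copies[t].get(i, identity), val)
    # Global combination: merge the per-thread copies into one result
    # (in the real system this happens after copying updates back to the host).
    result = {}
    for c in copies:
        for i, val in c.items():
            result[i] = op(result.get(i, identity), val)
    return result

# Example: histogram of values mod 3, computed by 4 "threads".
hist = parallel_reduction(list(range(10)), 4,
                          process=lambda e: (e % 3, 1),
                          op=lambda a, b: a + b, identity=0)
# hist == {0: 4, 1: 3, 2: 3}
```

Replicating the reduction object trades memory for the absence of synchronization between threads, which is why the slide notes the impact of thread count on reduction time.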

Page 36: Data-Intensive Computing: From Clouds to GPUs

Generating CUDA Code and C/C++ Code Invoking the Kernel Function

• Memory allocation and copy
• Thread grid configuration (block number and thread number)
• Global function
• Kernel reduction function
• Global combination

Page 37: Data-Intensive Computing: From Clouds to GPUs

Optimizations

• Using shared memory
• Providing user-specified initialization functions and combination functions
• Specifying variables that are allocated once

Page 38: Data-Intensive Computing: From Clouds to GPUs

Applications

• K-means clustering
• EM clustering
• PCA

Page 39: Data-Intensive Computing: From Clouds to GPUs

Experimental Results

Speedup of k-means

Page 40: Data-Intensive Computing: From Clouds to GPUs

Speedup of EM

Page 41: Data-Intensive Computing: From Clouds to GPUs

Speedup of PCA

Page 42: Data-Intensive Computing: From Clouds to GPUs

[email protected]

Deep Web Data Integration

• The emergence of the deep web
  – The deep web is huge
  – Different from the surface web
• Challenges for integration
  – Not accessible through search engines
  – Inter-dependences among deep web sources

Page 43: Data-Intensive Computing: From Clouds to GPUs

Our Contributions

• Structured query processing on the deep web
• Schema mining/matching
• Analysis of deep web data sources

Page 44: Data-Intensive Computing: From Clouds to GPUs

Adaptive Middleware

• To enable time-critical event handling to achieve the maximum benefit, while satisfying time/budget constraints
• To be compatible with Grid and Web Services
• To enable easy deployment and management with minimum human intervention
• To be used in a heterogeneous distributed environment

ICAC 2008

Page 45: Data-Intensive Computing: From Clouds to GPUs

HASTE Middleware Design (ICAC 2008)

[Diagram: layered architecture. An application layer (application code, configuration file, benefit function, time-critical events) sits above a service layer of autonomic service components (application services 1–5, each with an agent/controller) plus an application deployment service, a resource allocation service (scheduling efficiency, value estimation), a resource monitoring service (CPU, memory, bandwidth), and an autonomic adaptation service (system model, estimator), all on an OGSA infrastructure (Globus Toolkit 4.0)]

Page 46: Data-Intensive Computing: From Clouds to GPUs


Workflow Composition System

Page 47: Data-Intensive Computing: From Clouds to GPUs

Summary

• Growing data is creating new challenges in HPC, grid, and cloud environments
• A number of topics being addressed
• Many opportunities for involvement
  – 888 meets Thursdays 5:00 – 6:00, DL 280