Parallel Processing Comparative Study


TRANSCRIPT

1

PARALLEL PROCESSING COMPARATIVE STUDY

2

CONTEXT

How to finish a work in a short time? Solution: use a quicker worker. Drawback: the speed of a worker has a limit, so this is inadequate for long works.

3

CONTEXT

How to finish a calculation in a short time? Solution: use a quicker calculator (processor) [1960-2000]. Drawback: the speed of processors has reached a limit, so this is inadequate for long calculations.


6

CONTEXT

How to finish a work in a short time? Solution:

1. Use a quicker worker (inadequate for long works)
2. Use more than one worker concurrently


10

CONTEXT

How to finish a calculation in a short time? Solution:

1. Use a quicker processor (inadequate for long calculations)
2. Use more than one processor concurrently

Parallelism

11

CONTEXT

Definition

Parallelism is the concurrent use of more than one processing unit (CPUs, processor cores, GPUs, or combinations of them) in order to carry out calculations more quickly.

12

PROJECT GOAL

Parallelism needs

1. A parallel computer (more than one processor)

2. Accommodating the calculation to the parallel computer


14

THE GOAL

Parallel computers

There are several parallel computers on the hardware market; they differ in their architecture. Several classifications exist:

Based on the instruction and data streams (Flynn classification)

Based on the memory sharing degree ...

15

THE GOAL
Flynn Classification

A. Single Instruction, Single Data stream (SISD)

16

THE GOAL
Flynn Classification

B. Single Instruction, Multiple Data stream (SIMD)

17

THE GOAL
Flynn Classification

C. Multiple Instruction, Single Data stream (MISD)

18

THE GOAL
Flynn Classification

D. Multiple Instruction, Multiple Data stream (MIMD)

19

THE GOAL
Memory Sharing Degree Classification

A. Shared memory    B. Distributed memory

20

THE GOAL
Memory Sharing Degree Classification

C. Hybrid Distributed-Shared Memory


23

THE GOAL

Parallelism needs

1. A parallel computer (more than one processor)

2. Accommodating the calculation to the parallel computer: dividing the calculation and the data between the processors; defining the execution scenario (how the processors cooperate)

24

THE GOAL

The accommodation of a calculation to a parallel computer is called parallel processing. It depends closely on the architecture.

25

THE GOAL

Goal: a comparative study between

1. Shared Memory Parallel Processing approach

2. Distributed Memory Parallel Processing approach

26

PLAN

1. Distributed Memory Parallel Processing approach

2. Shared Memory Parallel Processing approach

3. Case study problems

4. Comparison results and discussion

5. Conclusion

27

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

28

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

Distributed-Memory Computers (DMC)

= Distributed Memory System (DMS)

= Massively Parallel Processor (MPP)

29

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

• Distributed-memory computer architecture

30

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

• Architecture of nodes

Nodes can be: identical processors (pure DMC); different types of processors (hybrid DMC); different types of nodes with different architectures (heterogeneous DMC).

31

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

• Architecture of the interconnection network

There is no shared memory space between the nodes; the network is the only means of node communication, so network performance directly influences the performance of a parallel program on a DMC. Network performance depends on:

1. Topology
2. Physical connectors (wires, ...)
3. Routing technique

The evolution of DMCs depends closely on networking evolutions.

32

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

The DMC used in our comparative study

• Heterogeneous DMC
• A modest cluster of workstations
• Three nodes:
  • Sony laptop: i3 processor
  • HP laptop: i3 processor
  • HP laptop: Core 2 Duo processor
• Communication network: 100 Mbit/s Ethernet

33

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

Parallel Software Development for DMC

The designer's main tasks:

1. Global calculation decomposition and task assignment
2. Data decomposition
3. Definition of the communication scheme
4. Synchronization study

34

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

Parallel Software Development for DMC

Important considerations for efficiency:

1. Minimize communication
2. Avoid barrier synchronization

35

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

Implementation environments

Several implementation environments exist: PVM (Parallel Virtual Machine) and MPI (Message Passing Interface).

DISTRIBUTED MEMORY PARALLEL PROCESSING APPROACH

MPI application anatomy: all the nodes execute the same code, yet the nodes do not all do the same work.

This apparent contradiction is resolved by the SPMD (Single Program, Multiple Data) application form: the single program branches on each process's rank, so the processes can be organized into one controller and several workers.
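Below is a minimal sketch of that controller/worker SPMD structure, assuming MPI in C; the slides show no code, so the integer "task", the squaring "work" and all names here are illustrative placeholders only.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* every node runs this same program       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ...but each process learns its own rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* controller: distributes tasks and collects results */
            for (int w = 1; w < size; w++) {
                int task = w;                  /* hypothetical task descriptor */
                MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            }
            for (int w = 1; w < size; w++) {
                int result;
                MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("result from worker %d: %d\n", w, result);
            }
        } else {
            /* worker: receives a task, computes, sends the result back */
            int task, result;
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            result = task * task;              /* placeholder computation */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }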

37

SHARED MEMORY PARALLEL PROCESSING APPROACH

Several shared-memory parallel computers (SMPC) are on the market, e.g. multi-core PCs (Intel i3/i5/i7, AMD).

Which SMPC do we use?
- The GPU, originally designed for image processing
- The GPU now: a domestic supercomputer

Characteristics:
• The cheapest and fastest shared-memory parallel computer
• Hard parallel design

38

SHARED MEMORY PARALLEL PROCESSING APPROACH

The GPU architecture
The implementation environment

39

SHARED MEMORY PARALLEL PROCESSING APPROACH

GPU Architecture

Like a classical processing unit, the Graphics Processing Unit is composed of two main components:

A. Calculation units    B. Storage units

40

SHARED MEMORY PARALLEL PROCESSING APPROACH

41

SHARED MEMORY PARALLEL PROCESSING APPROACH

42

SHARED MEMORY PARALLEL PROCESSING

The GPU architecture
The implementation environment:

1. CUDA: for GPUs manufactured by NVIDIA
2. OpenCL: independent of the GPU architecture

43

SHARED MEMORY PARALLEL PROCESSING

CUDA Program Anatomy

44

SHARED MEMORY PARALLEL PROCESSING

Q: How do we execute the code fragments to be parallelized on the GPU?
A: By calling a kernel.

Q: What is a kernel?
A: A kernel is a function callable from the host and executed on the device simultaneously by many threads in parallel.
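A minimal sketch of what a kernel and its launch look like in CUDA C; this is not code from the study, and the vector-addition kernel, the 256-thread block size and all names are illustrative assumptions.

    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   /* one thread per element */
        if (i < n)
            c[i] = a[i] + b[i];
    }

    /* host side: the kernel is called from the CPU and executed on the GPU */
    void launch_add(const float *d_a, const float *d_b, float *d_c, int n) {
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        add<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);  /* many threads run add() in parallel */
        cudaDeviceSynchronize();                              /* wait for the device to finish      */
    }

Here d_a, d_b and d_c are assumed to be device pointers already allocated with cudaMalloc and filled with cudaMemcpy.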

45

KERNEL LAUNCH

SHARED MEMORY PARALLEL PROCESSING


48

SHARED MEMORY PARALLEL PROCESSING

Design recommendations

• Utilize shared memory to reduce the time spent accessing global memory.

• Reduce the number of idle threads (control divergence) to fully utilize the GPU resources.
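As a hedged illustration of the first recommendation (not code from the study), the sketch below applies it to the matrix-multiplication case study: a tiled CUDA kernel stages TILE x TILE blocks of the inputs in shared memory, so each global-memory element is loaded once per tile instead of once per multiply-add. The tile size of 16 and every name are assumptions.

    #define TILE 16

    __global__ void matmul_tiled(const float *A, const float *B, float *C, int n) {
        __shared__ float As[TILE][TILE];   /* tiles staged in fast shared memory */
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float sum = 0.0f;

        for (int t = 0; t < (n + TILE - 1) / TILE; t++) {
            /* each thread loads one element of each tile (0 when out of range) */
            As[threadIdx.y][threadIdx.x] = (row < n && t * TILE + threadIdx.x < n)
                                         ? A[row * n + t * TILE + threadIdx.x] : 0.0f;
            Bs[threadIdx.y][threadIdx.x] = (col < n && t * TILE + threadIdx.y < n)
                                         ? B[(t * TILE + threadIdx.y) * n + col] : 0.0f;
            __syncthreads();                               /* tile fully loaded       */

            for (int k = 0; k < TILE; k++)
                sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();                               /* done before next reload */
        }
        if (row < n && col < n)
            C[row * n + col] = sum;
    }

Because all threads of a block follow essentially the same path (the bounds checks only pad with zeros), the kernel also keeps control divergence low, which addresses the second recommendation.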

49

CASE STUDY PROBLEM

Square matrix multiplication problem

• ALGORITHM: MatMul(A, B)
// Input: two n x n matrices A and B
// Output: matrix C = A x B
for i <- 1 to n do
    for j <- 1 to n do
        C[i, j] <- 0
        for k <- 1 to n do
            C[i, j] <- C[i, j] + A[i, k] * B[k, j]
return C

• Complexity: in big-O notation, the algorithm is O(n^3).
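The slides do not show how the cluster version divides this calculation; the sketch below is one common decomposition, offered only as an assumption: row blocks of A are scattered over the MPI processes, B is broadcast to all of them, and the partial results are gathered back. It assumes n is divisible by the number of processes and that A and C need to exist only on rank 0.

    #include <mpi.h>
    #include <stdlib.h>

    /* C = A * B for n x n matrices, distributed by row blocks of A */
    void matmul_mpi(float *A, float *B, float *C, int n) {
        int rank, p;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        int rows = n / p;                                   /* rows owned by each process */
        float *Aloc = malloc((size_t)rows * n * sizeof(float));
        float *Cloc = malloc((size_t)rows * n * sizeof(float));

        MPI_Scatter(A, rows * n, MPI_FLOAT, Aloc, rows * n, MPI_FLOAT, 0, MPI_COMM_WORLD);
        MPI_Bcast(B, n * n, MPI_FLOAT, 0, MPI_COMM_WORLD);

        for (int i = 0; i < rows; i++)                      /* local triple loop */
            for (int j = 0; j < n; j++) {
                float s = 0.0f;
                for (int k = 0; k < n; k++)
                    s += Aloc[i * n + k] * B[k * n + j];
                Cloc[i * n + j] = s;
            }

        MPI_Gather(Cloc, rows * n, MPI_FLOAT, C, rows * n, MPI_FLOAT, 0, MPI_COMM_WORLD);
        free(Aloc);
        free(Cloc);
    }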

50

CASE STUDY PROBLEM

Pi approximation

• ALGORITHM: PiApprox(n)
// Input: n, the number of bins
// Output: an approximation of pi
for i <- 1 to n do
    accumulate the contribution of bin i
return the accumulated approximation of pi

• Complexity: in big-O notation, the algorithm is O(n).
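The transcript loses the body of this algorithm. One common "bins" formulation, shown here only as an assumption about what the slide contained, approximates pi by midpoint-rule integration of 4 / (1 + x^2) over [0, 1]:

    /* midpoint-rule approximation of pi with n bins (illustrative) */
    double pi_approx(long n) {
        double h = 1.0 / n, sum = 0.0;
        for (long i = 0; i < n; i++) {
            double x = h * (i + 0.5);      /* midpoint of bin i */
            sum += 4.0 / (1.0 + x * x);
        }
        return h * sum;
    }

Each bin contributes independently, which is what makes the loop easy to split across GPU threads or MPI processes.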

51

COMPARISON

• Comparison criteria

• Analysis and conclusion

52

COMPARISON
Criterion 1: Time-Cost Factor

TCF = PET x HC
PET: parallel execution time (in milliseconds)
HC: hardware cost (in Saudi riyals)

The hardware costs (HC):
HC(GPU): 5000 SAR
HC(cluster of workstations): 9630 SAR
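As an illustration with hypothetical execution times (not measurements from the study): if PET were 1000 ms on the GPU and 2000 ms on the cluster, then TCF(GPU) = 1000 x 5000 = 5,000,000 and TCF(cluster) = 2000 x 9630 = 19,260,000, so the platform with the lower TCF (here the GPU) wins on this criterion.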

53

COMPARISON

[Charts] Time-Cost Factor for the matrix multiplication problem (GPU vs. cluster; x-axis: matrix size, y-axis: TCF) and for the Pi approximation problem (GPU vs. cluster; x-axis: number of bins, y-axis: TCF).

54

COMPARISON

Conclusion:

The GPU is better when we need to perform a large number of calculations that each involve a small number of iterations.

However, when the need is a calculation with a very large number of iterations, the cluster of workstations is the better choice.

55

COMPARISON
Criterion 2: required memory
Matrix multiplication problem

Graphics Processing Unit:
The global-memory-based method requires: total required memory = 6 x n x n x sizeof(float)
The shared-memory-based method requires: total required memory = 8 x n x n x sizeof(float)

Cluster of workstations (the used cluster contains three nodes):
Total required memory = 19/3 x n x n x sizeof(float)
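As an illustration with made-up numbers: for n = 1000 and sizeof(float) = 4 bytes, the global-memory method needs 6 x 10^6 x 4 = 24 MB, the shared-memory method needs 8 x 10^6 x 4 = 32 MB, and the three-node cluster needs 19/3 x 10^6 x 4, roughly 25.3 MB in total.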

56

COMPARISON
Criterion 2: required memory
Pi approximation problem

• Graphics Processing Unit: the size of the work arrays depends on the number of threads used.
Required memory = 2 x (number of threads) x sizeof(double)

• Cluster of workstations: a small amount of memory is used on each node, roughly 15 x sizeof(double).

57

COMPARISON

Criterion 2: required memory

Conclusion: we cannot judge which parallel approach is better with respect to the required-memory criterion. This criterion depends on the intrinsic characteristics of the problem at hand.

58

COMPARISON
Criterion 3: the gap between the theoretical complexity and the effective complexity

• The gap between the theoretical and the effective complexity is calculated by:

Gap = ((EPT / TPT) - 1) x 100
EPT: experimental parallel time
TPT: theoretical parallel time

TPT = ST / N
ST: sequential time
N: number of processing units
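As an illustration with hypothetical times: with a sequential time ST = 9000 ms on N = 3 nodes, TPT = 9000 / 3 = 3000 ms; if the measured EPT were 4500 ms, then Gap = ((4500 / 3000) - 1) x 100 = 50%, meaning the run was 50% slower than the theoretical ideal, while a negative gap would mean it ran faster than the ideal.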

59

COMPARISON
Criterion 3: the gap between the theoretical complexity and the effective complexity (cluster of workstations)

[Charts] The gap for the matrix multiplication problem (x-axis: matrix size, y-axis: gap) and for the Pi approximation problem (x-axis: number of bins, y-axis: gap), both on the cluster of workstations.

60

COMPARISON
Criterion 3: the gap between the theoretical complexity and the effective complexity (Graphics Processing Unit)

[Charts] The gap for the matrix multiplication problem (x-axis: matrix size, y-axis: gap) and for the Pi approximation problem (x-axis: number of bins, y-axis: gap), both on the GPU; the plotted gap values are negative.

61

COMPARISON

Conclusion

On the GPU, the resulting execution time of the parallel program can be smaller than the theoretically expected time. That is impossible to achieve with a cluster of workstations because of the communication overhead.

To minimize the gap, or keep it constant, on the cluster of workstations, the designer has to keep the number and sizes of the communicated messages as constant as possible when the problem size increases.

Criterion 3: the gap between the theoretical complexity and the effective complexity

62

COMPARISON
Criterion 4: efficiency

Efficiency = ST / (N x PT)
ST: sequential time
PT: parallel time
N: number of processing units
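As an illustration with hypothetical times: on the three-node cluster, ST = 9000 ms and PT = 3600 ms give a speedup of 9000 / 3600 = 2.5 and an efficiency of 2.5 / 3, roughly 0.83.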

63

COMPARISON
CRITERION 4: EFFICIENCY

[Charts] Efficiency for the matrix multiplication problem (cluster vs. GPU; x-axis: matrix size, y-axis: efficiency) and for the Pi approximation problem (cluster vs. GPU; x-axis: number of bins, y-axis: efficiency).

64

COMPARISON
Criterion 4: efficiency

• Conclusion: the efficiency (speedup) is much better on the GPU than on the cluster of workstations.

65

COMPARISON
IMPORTANT REMARK

[Charts] Sequential solution times (in ms) obtained with one CPU process vs. one GPU thread: matrix multiplication for sizes 32x32, 128x128, 512x512, 1000x1000 and 1805x1805, and Pi approximation for 100, 1000 and 10000 bins.

67

COMPARISON

• Criterion 5: difficulty of development

• CUDA

• MPI

68

COMPARISON
• Criterion 6: necessary hardware and software materials
• GPU (NVIDIA GT 525M)

• Cluster of workstations (3 PCs, a switch, an Internet modem and cables)

69

70

CONCLUSION

Parallel Processing Comparative Study

Shared Memory Parallel Processing Approach vs. Distributed Memory Parallel Processing Approach
Graphics Processing Unit (GPU) vs. cluster of workstations

GPUs and clusters are the two main components of the world's fastest computers (such as Shahin).

To compare them we used two different problems (matrix multiplication and Pi approximation) and six comparison criteria.

GPU: more adequate for the data-level parallelism form | Cluster: more adequate for the task-level parallelism form
GPU: a big number of small calculations | Cluster: one big calculation
GPU: memory requirement depends on the problem characteristics | Cluster: memory requirement depends on the problem characteristics
GPU: can run faster than the theoretically expected time | Cluster: a null or negative gap is impossible
GPU: complicated design and programming | Cluster: less complicated
GPU: very practical implementation environment | Cluster: complicated implementation environment

72
