TRANSCRIPT
Introduction to Parallel Computing Using CUDA
Ken Domino, Domem Technologies
May 2, 2011
IEEE Boston Continuing Education Program
Time and Location: 6:00 - 8:00 PM, Mondays, May 2, 9, 16, 23
Course Website: http://domemtech.com/ieee-pp
Instructor: Ken Domino, [email protected]
About this course
Recommended Textbooks:
CUDA by Example: An Introduction to General-Purpose GPU Programming, by J. Sanders and E. Kandrot, © 2010, ISBN 9780131387683
Programming Massively Parallel Processors: A Hands-on Approach, by D. Kirk and Wen-mei W. Hwu, © 2010, ISBN 9780123814722
Principles of Parallel Programming, by Calvin Lin and Larry Snyder, © 2008, ISBN 9780321487902
Introduction to Parallel Algorithms, by C. Xavier and S. Iyengar, © 1998, ISBN 9780471251828
Patterns for Parallel Programming, by Timothy G. Mattson, Beverly A. Sanders, and Berna L. Massingill, © 2004, ISBN 9780321228116
Other material:
Original research papers (see reference list)
Uzi Vishkin, class notes on Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 2010, http://www.umiacs.umd.edu/~vishkin/PUBLICATIONS/classnotes.pdf (homepage: http://www.umiacs.umd.edu/~vishkin/index.shtml)
Why is Parallel Computing Important?
CPUs have been getting faster… but that stopped in the mid-2000s. Why?
[Pollack FJ. New microarchitecture challenges in the coming generations of CMOS process technologies (keynote address, abstract only). Proceedings of the 32nd Annual ACM/IEEE International Symposium on Microarchitecture. Haifa, Israel: IEEE Computer Society; 1999: 2.]
Problems can be solved in much less time.
Predictive protein binding: “Meet Tanuki, a 10,000-core Supercomputer in the Cloud”
Based on the Amazon EC2 cloud service; the client was Genentech.
Compute time was reduced from about a month to 8 hours.
http://www.bio-itworld.com/news/04/25/2011/Meet-Tanuki-ten-thousand-core-supercomputer-in-cloud.html
Computer vision
OpenVIDIA: Parallel GPU Computer Vision
Solves problems in segmentation, stereo vision, optical flow, and feature tracking.
http://openvidia.sourceforge.net/index.php/OpenVIDIA
http://psychology.wikia.com/wiki/Computer_vision
Army: “Want computers to work like the human brain”
http://www.wired.com/dangerroom/2011/04/army-wants-a-computer-that-acts-like-a-brain/
Where are we going?
NVIDIA’s Fermi-based GeForce GTX 590 (2011) has 1024 CUDA cores, is programmable using CUDA, and delivers ~2500 GFLOPS.
In 2005, Intel started manufacturing dual-core CPUs.
In 2010, Intel and AMD were manufacturing six-core CPUs, ~11 GFLOPS (non-SSE).
In 2012, Intel will introduce Knights Corner, a 50-core processor.
CPU vs GPU
The largest supercomputer is the Tianhe-1A (Nov 2010, http://www.top500.org/):
7168 Xeon X5670 six-core processors
7168 NVIDIA M2050 GPUs, each with 448 CUDA cores
[Wikipedia.org. Tianhe-I, 2010.]
Why CUDA and GPUs?
One of the “seven” up-and-coming languages [Wayner 2010].
Brings parallel computing to the common man.
For one GPU, a speedup of 100 times or more over a serial CPU solution is common.
Used in many different applications.
Coming to mobile devices.
Task
A task is a sequence of instructions that executes as a group.
Tasks continue until they halt, exit, or return.
Computers do not directly execute tasks. Computers execute instructions, which are used to model a task.
Sequential
Execution of tasks is neither concurrent nor simultaneous.
A sequence of tasks is called a thread.
[Figure: one thread executing Step 1, Step 2, Step 3 in order.]
Concurrent
Execution of tasks of multiple threads is concurrent but not necessarily simultaneous.
[Figure: two threads, each executing Step 1, Step 2, Step 3; their steps interleave on one machine.]
Parallel
Execution of tasks of multiple threads is concurrent and simultaneous, on multiple machines.
The goal is to minimize time and work.
[Figure: two threads, each executing Step 1, Step 2, Step 3 at the same time on separate machines.]
Concurrent vs Parallel
Nowadays, many people use the terms interchangeably [Lin and Snyder 2009]. Why?
Since the tasks of the threads can occur in any order, behavior is unpredictable.
Thread 1: Read x; Set x = y; Set y = x + 1
Thread 2: Read x; Set x = y; Set y = x + 2
Thread 3: Read x; Set x = y; Set y = x + 3
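A minimal CUDA sketch of this unpredictability (the kernel name and launch configuration are illustrative, not from the slides): many threads read and update the same variable x with no synchronization, so updates can be lost depending on the interleaving.

#include <cstdio>

// Every thread reads x, then writes x + 1. Reads and writes from
// different threads interleave unpredictably, so updates can be lost
// and the final value of x can be anywhere from 1 to 256.
__global__ void racyIncrement(int *x)
{
    int old = *x;    // Read x
    *x = old + 1;    // Set x = old + 1 (may overwrite another thread's write)
}

int main()
{
    int h = 0, *d;
    cudaMalloc((void **)&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    racyIncrement<<<1, 256>>>(d);    // 256 concurrent tasks, no ordering guarantees
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    printf("x = %d (x = 256 only if no update was lost)\n", h);
    cudaFree(d);
    return 0;
}

An atomicAdd(x, 1) in the kernel would make the updates deterministic; the point here is what happens without it.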
CPU – Central Processing Unit
Ld %r1, 1
Ld %r2, mem
St [%r2], %r1
Instruction pipeline
An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time).
[Figure: basic five-stage pipeline in a RISC machine (IF = Instruction Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB = Register write back). In the fourth clock cycle, the earliest instruction is in the MEM stage, and the latest instruction has not yet entered the pipeline.]
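A back-of-the-envelope sketch of the throughput gain, under the usual idealized assumptions (one instruction enters the pipeline per cycle, no stalls): n instructions need 5n cycles without pipelining but only n + 4 cycles with the five-stage pipeline.

#include <stdio.h>

int main(void)
{
    const int stages = 5;
    for (int n = 1; n <= 1000; n *= 10) {
        int unpipelined = stages * n;   // each instruction runs to completion alone
        int pipelined = n + stages - 1; // after the fill, one instruction completes per cycle
        printf("n = %4d: %5d cycles unpipelined, %4d pipelined (%.2fx)\n",
               n, unpipelined, pipelined, (double)unpipelined / pipelined);
    }
    return 0;
}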
Unfortunately, not all instructions are independent!
Instruction level parallelism
Initially: f = 1, e = 2, a = b = c = d = 3
s1. e = a + b
s2. f = c + d
s3. g = e * f
Executed in order, e = 6 and f = 6, so:
Result: g = 36
Initially: f = 1, e = 2, a = b = c = d = 3
s1. e = a + b
s2. f = c + d
s3. g = e * f
If s3 executes before s2 completes, it reads the old value f = 1, so g = 6 * 1:
Result: g = 6
Initially: f = 1, e = 2, a = b = c = d = 3
s1. e = a + b
s2. f = c + d
s3. g = e * f
s1 and s2 can execute in parallel; s3 must wait: “s3 is flow dependent on s1.”
There are other types of dependencies.
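The slide’s three statements in C (a minimal sketch; variable names as on the slide). s1 and s2 have no operands in common, so the hardware may overlap them; s3 reads e and f, so it must wait for both.

#include <stdio.h>

int main(void)
{
    int a = 3, b = 3, c = 3, d = 3, e = 2, f = 1, g;

    e = a + b;   // s1: independent of s2
    f = c + d;   // s2: independent of s1 -- may issue in the same cycle as s1
    g = e * f;   // s3: flow dependent on s1 and s2, must wait for both

    printf("g = %d\n", g);   // prints g = 36
    return 0;
}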
Where else to find parallelism?
Thread-level parallelism = task parallelism. Example: recalculation of a spreadsheet.
Process-level parallelism. Example: two independent programs (Freecell and Email).
Granularity is the size of the unit of parallelism (e.g., instruction vs. thread vs. process).
What is speed up?
Let p = number of processors, T1 = running time on one processor, and Tp = running time on p processors.
Speed up: Sp = T1 / Tp. For example, if a serial run takes 100 s and the parallel run takes 10 s, Sp = 10.
Speed up - Example
Serial computation: Ta, Tb, Tc, Td, Te run one after another.
What is the time of computation if b, c, d, e are tasks that can be run in parallel on four processors?
If each task takes one unit of time, the serial time is 5 units; in parallel, Ta runs first and then Tb through Te run simultaneously, for 2 units total, so Sp = 5/2 = 2.5.
Amdahl’s law
Let f = fraction of time that must be serially executed.
Maximum speed up: Sp = 1 / (f + (1 - f) / p)
[Figure: Ta must run serially; Tb through Te run in parallel.]
[Plot: speedup Sp versus number of processors p (0 to 2000), for f = 0.1, 0.05, 0.01, 0.001, and 0. Each curve levels off near 1/f as p grows.]
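The curves in the plot can be reproduced with a few lines of C (a sketch; the f values are taken from the plot legend):

#include <stdio.h>

// Amdahl's law: maximum speedup on p processors when a fraction f
// of the execution must remain serial.
static double amdahl(double f, int p)
{
    return 1.0 / (f + (1.0 - f) / p);
}

int main(void)
{
    const double fs[] = { 0.1, 0.05, 0.01, 0.001, 0.0 };
    for (int i = 0; i < 5; i++) {
        printf("f = %.3f:", fs[i]);
        for (int p = 1; p <= 2048; p *= 8)
            printf("  S(%4d) = %7.1f", p, amdahl(fs[i], p));
        printf("\n");
    }
    return 0;
}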
Paradox
Question: If a problem is not parallelizable by even only a small fraction, throwing more processors at it will not help speed it up. So why try for a parallel solution?
Gustafson? (Gustafson’s Law is equivalent to Amdahl’s…)
Answer: A prerequisite to applying Amdahl’s or Gustafson’s formulation is that the serial and parallel programs take the same number of total calculation steps for the same input.
Breaking Amdahl’s Law
Use a resource-constrained serial execution as the base for the speedup calculation; and
Use a parallel implementation that can bypass a large number of calculation steps while yielding the same output as the corresponding serial algorithm.
This applies to any algorithm in which the complexity of verification is lower than the complexity of the solution [Shi 1995] => most algorithms!
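As a concrete instance of “verification cheaper than solution” (my example, not from the slides): sorting n items costs O(n log n) comparisons, but verifying that the output is sorted costs only O(n).

#include <stdio.h>
#include <stdlib.h>

// Comparison used by the O(n log n) "solution", qsort.
static int cmp(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

// O(n) "verification": one pass over the output.
static int is_sorted(const int *v, int n)
{
    for (int i = 1; i < n; i++)
        if (v[i - 1] > v[i])
            return 0;
    return 1;
}

int main(void)
{
    int v[] = { 4, 1, 3, 2 };
    qsort(v, 4, sizeof v[0], cmp);            // solving: O(n log n)
    printf("sorted: %d\n", is_sorted(v, 4));  // checking: O(n)
    return 0;
}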
CPU vs GPU
CPU = “Central Processing Unit”. GPU = “Graphics Processing Unit”. What’s the difference?
Hardware Classification
Why do we classify hardware? In order to program a parallel computer, you have to understand the hardware very well.
The basic classification is Flynn’s taxonomy (1966): SISD, SIMD, MIMD, MISD.
SISD – Single Instruction, Single Data
Examples: MOS Technology 6502, Motorola 68000, Intel 8086
SIMD – Single Instruction, Multiple Data
Examples: ILLIAC IV, CM-1, CM-2, Intel Core, Atom, NVIDIA GPUs
MISD – Multiple Instruction, Single Data
Examples: Space Shuttle computer
MIMD – Multiple Instruction, Multiple Data
Examples: BBN Butterfly, Cedar, CM-5, IBM RP3, Intel Cube, nCUBE, NYU Ultracomputer
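To connect the taxonomy to the course topic, here is a minimal CUDA sketch of the SIMD idea (kernel and array names are illustrative): a single instruction stream, the kernel body, is applied to many data elements, one per thread.

#include <cstdio>

// One instruction stream, many data elements: every thread executes
// the same code on its own element of the array.
__global__ void scale(float *v, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        v[i] *= s;    // same instruction, different data
}

int main()
{
    const int n = 8;
    float h[n], *d;
    for (int i = 0; i < n; i++) h[i] = (float)i;

    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<1, n>>>(d, 2.0f, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    for (int i = 0; i < n; i++) printf("%g ", h[i]);  // 0 2 4 6 8 10 12 14
    printf("\n");
    return 0;
}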
PRAM
Parallel Random Access Machine (PRAM): an idealized SIMD parallel computing model.
Unlimited number of RAMs, called Processing Units (PUs).
The RAMs operate with the same instructions, synchronously.
Shared Memory is unlimited and is accessed in one unit of time.
Shared Memory access is one of CREW, CRCW, or EREW.
Communication between RAMs is only through Shared Memory.
Why is PRAM important?
PRAM is used for specifying an algorithm and analyzing its complexity.
PRAM-based algorithms can be adapted to SIMD architectures.
PRAM algorithms can be converted into CUDA implementations relatively easily.
PRAM pseudo code
Parallel for loop:
for Pi, 1 ≤ i ≤ n, in parallel do
  …
end
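A sketch of how that pseudocode maps to CUDA (the loop body is chosen for illustration): each PRAM processing unit Pi becomes a CUDA thread, the parallel-for becomes a kernel launch with n threads, and the PRAM Shared Memory becomes GPU global memory.

#include <cstdio>

// "for Pi, 1 <= i <= n, in parallel do  A[i] = A[i] + 1  end"
// Each thread plays the role of one PRAM processing unit Pi.
__global__ void parallelFor(int *A, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's Pi
    if (i < n)
        A[i] = A[i] + 1;   // the loop body, executed by all Pi at once
}

int main()
{
    const int n = 16;
    int h[n], *d;
    for (int i = 0; i < n; i++) h[i] = i;

    cudaMalloc((void **)&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    parallelFor<<<(n + 255) / 256, 256>>>(d, n);   // one thread per Pi
    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);

    for (int i = 0; i < n; i++) printf("%d ", h[i]);  // 1 2 3 ... 16
    printf("\n");
    return 0;
}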