A Perspective on the Future of Massively Parallel Computing
Presented by: Cerise Wuthrich, June 23, 2005


Page 1

A Perspective on the Future of Massively Parallel Computing
Presented by: Cerise Wuthrich
June 23, 2005

Page 2

A Perspective on the Future of Massively Parallel Computing: Fine-Grain vs. Coarse-Grain Parallel Models

Predrag T. Tosic
Proceedings of the 1st Conference on Computing Frontiers, April 2004
[email protected]

Page 3

Outline
Intro & Background of Current Models
– Limits of Sequential Models
– Tightly Coupled MP
– Loosely Coupled DS
Fine-Grain Parallel Models
– ANN
– Cellular Automata
Fine-Grain vs. Coarse-Grain
– Architecture
– Functions
– Potential Advantages
Summary and Conclusions

Page 4

Introduction and Background of Current Models
Hardware limitations: there are physical limits to how fast we can compute
– Limits to increasing the density and decreasing the size of basic microcomponents
– No signal can propagate faster than the speed of light

Page 5

Introduction and Background of Current Models
Limitations of the Von Neumann Model
– There is a clear distinction (physical and logical) between where data and programs are stored (memory) and where the computation is executed (processor)
– Sequential

Page 6

Parallel Processing
Realization that parallel processing was a necessity
Classical Models
– Multiprocessing Supercomputers
– Networked Distributed Systems
– Both models are actually coarse-grain
Proposal
– “Truly fine-grain connectionist massively parallel model”

Page 7

Characteristics of Multiprocessing Supercomputers
Communication Medium
– Shared, Distributed, Hybrid
Nature of Memory Access
– Uniform vs. NUMA
Granularity
Instruction Streams (single or multiple)
Data Streams (single or multiple)

Page 8

Characteristics of Distributed Systems
Loosely coupled
Heterogeneous collection
Networked by middleware
Scalable
Flexible
Energy dissipation not an issue
Harder to program, control, and detect errors and failures

Page 9

The Model We Should Really Consider
Current supercomputers use thousands of processors
Current distributed systems (like the WWW) can use hundreds of millions of computers
We shouldn’t base future parallel computing solely on current CS and engineering practice
Instead, look at the most sophisticated information-processing device known – the human brain

Page 10

Human Brain
Tens of billions (10^10) of processors (neurons)
Highly interconnected, with 10^15 interconnections
Each neuron is a very simple basic information-processing unit

Page 11

Artificial Neural Networks
Best known and most studied class of connectionist models
1942 – Linear Perceptron
Multi-Layer Perceptron
Radial Basis Function NN
Hopfield NN

Page 13

Artificial Neural Networks
Each neuron (processor) computes a single predetermined function of its inputs
A neuron is similar to a logic gate (see the sketch below)
Neurons are connected by synapses
Each synapse stores about 10 bits of information
Each synapse fires about 10 times per second
Receptors are input devices
Effectors are output devices
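To make the “neuron as a logic gate” point concrete, here is a minimal Python sketch of a single threshold unit computing one fixed, predetermined function of its inputs. The weights and thresholds are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of one artificial neuron: a fixed weighted sum of its
# inputs followed by a threshold, behaving much like a logic gate.

def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2 the unit acts as a logical AND gate;
# lowering the threshold to 1 turns the very same unit into an OR gate.
print(neuron([1, 1], [1, 1], threshold=2))  # 1  (AND)
print(neuron([1, 0], [1, 1], threshold=2))  # 0  (AND)
print(neuron([1, 0], [1, 1], threshold=1))  # 1  (OR)
```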

Page 14

Artificial Neural Networks
Just as the brain grows, changes, and adapts, ANNs allow for
– creation of new synapses
– dynamic modification of already existing synapses
ANNs – Memory
– No separate place for memory
– All information is stored in the nodes and edges
– Dynamic changes in edge weights (see the sketch below)
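Because there is no separate memory, “storing” information in an ANN amounts to dynamically changing edge weights. The sketch below illustrates this with a simple Hebbian-style update rule; the rule and the learning rate are standard textbook assumptions used only for illustration, not something specified in the paper.

```python
# Illustrative sketch: memory as dynamic changes in edge weights.
# A Hebbian-style rule strengthens the weight on an edge whose two
# endpoints are active at the same time.

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """One update step: each weight grows by rate * pre_activity * post_activity."""
    return [w + learning_rate * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]   # the network "knows" nothing yet
pattern = [1, 0, 1]         # an input pattern to be memorized
for _ in range(5):          # repeated co-activation with an active output unit
    weights = hebbian_update(weights, pattern, post=1)

print([round(w, 2) for w in weights])  # [0.5, 0.0, 0.5] -- the pattern now lives in the edges
```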

Page 15

Cellular Automata – Another Connectionist Model
The state of a cell at a given time depends only on its own state one time step previously and the states of its nearby neighbors at the previous time step. All cells on the lattice are updated synchronously.

Page 16

Cellular Automata
A model inspired by physics
A grid where each node is a Finite State Machine (see the sketch below)
– Edge-labeled directed graphs
– Each vertex represents one of n states
– Each edge is a transition from one state to another on receipt of the alphabet symbol that labels the edge
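A minimal sketch of one such node’s finite state machine, represented as an edge-labeled directed graph via a Python transition table. The two states and the input alphabet (“wake”, “rest”) are hypothetical, chosen only to show the structure.

```python
# A finite state machine as an edge-labeled directed graph:
# vertices are states, and each (state, symbol) -> next_state entry
# is one labeled edge of the graph.

transitions = {
    ("quiescent", "wake"): "active",
    ("quiescent", "rest"): "quiescent",
    ("active", "wake"): "active",
    ("active", "rest"): "quiescent",
}

def run(start, symbols):
    """Follow the labeled edge for each input symbol in turn."""
    state = start
    for symbol in symbols:
        state = transitions[(state, symbol)]
    return state

print(run("quiescent", ["wake", "wake", "rest"]))  # quiescent
```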

Page 17

Cellular Automata
Only 2 possible states
– 0 is quiescent
– 1 is active
The only input is the current states of its neighbors
All nodes execute in unison (see the sketch below)
A one-dimensional infinite CA is a “countably infinite set of nodes capable of universal computation”
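A minimal sketch of such a two-state CA with synchronous updates, run here on a small finite ring rather than the infinite lattice the quote refers to. Rule 110 is used because that two-state, nearest-neighbor rule is known to be computationally universal; the ring size, initial pattern, and number of steps are arbitrary illustrative choices.

```python
# One-dimensional, two-state cellular automaton on a finite ring.
# Each cell's next state depends only on its own previous state and the
# previous states of its two nearest neighbors; all cells update in unison.

RULE = 110  # the bits of 110 give the next state for each 3-cell neighborhood

def step(cells):
    """Apply one synchronous update to every cell (periodic boundary)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 20 + [1]      # a single active cell on a 21-cell ring
for _ in range(5):          # print a few generations of the evolving pattern
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```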

Page 18

Connectionist Models
Appear to be a legitimate model of a universal massively parallel computer
ANNs are suitable for learning, but Cellular Automata are not
CA find most of their applications in studying the dynamics of complex systems

Page 19

Coarse-Grain vs. Fine-Grain Architectures

                     Coarse                          Fine
# of processors      Thousands                       Billions
Type of processor    Powerful, expensive,            Simple, cheap,
                     dissipate energy                energy-efficient
Capabilities         Complex                         Single, predefined function
Memory               Separated from processor        Virtually no distinction between
                                                     memory and processor

Page 20

Coarse-Grain vs. Fine-Grain Functions
At the very core level, connectionist models are different in how they:
– Receive information
– Process information
– Store information

Page 21

Limitations of ANNs
ANNs aren’t necessary in all domains
– An ANN can’t compute more or faster than the human brain
– The power of the human brain is an asymptotic upper bound on a connectionist ANN model
– Not needed for:
• Computation tasks
• Searching large databases

Page 22

Suitable Domains for ANNs
Pattern Recognition
– “No computer can get anywhere close to the speed and accuracy with which humans recognize and distinguish between, for example, different human faces or other similar context-sensitive, highly structured visual images.”
Problem domains where the computing agent has ongoing, dynamic interaction with its environment, or where computations may have fuzzy components

Page 23

Potential Advantages of Connectionist Fine-Grain Models
Scalability
Avoid the slow-storage bottleneck, since there is no physically separated memory
Flexibility (not necessary to re-wire or re-program with additional components)
Graceful degradation – neurons keep dying in our brains and yet we continue to function reasonably well

Page 24

Potential Advantages of Connectionist Fine-Grain Models
Robustness – if one component of a tightly coupled supercomputer fails, the whole system can fall apart
Energy consumption – connectionist models dissipate much less heat

Page 25

Summary and Conclusions
Connectionist models such as ANNs or CA are capable of massively parallel information processing
They are legitimate candidates for an alternative approach to the design of the highly parallel computers of the future
These models are conceptually, architecturally, and functionally very different from traditional models

Page 26

Summary and Conclusions
Connectionist models are:
– Very fine-grained
– Basic operations are much simpler
– Several orders of magnitude more processors
– The memory concept is totally different

Page 27

Summary and Conclusions
Connectionist models are:
– Yet to be built
– An idea still in its infancy
– Currently still too far-fetched an endeavor
– A promising candidate for the underlying abstract model of the general-purpose massively parallel computers of tomorrow

Page 28

Questions