
Page 1

CS 258 Parallel Computer Architecture

Lecture 8

Network Interface Design

February 20, 2008
Prof. John D. Kubiatowicz

http://www.cs.berkeley.edu/~kubitron/cs258

Page 2

Network Transaction Primitive

• One-way transfer of information from a source output buffer to a destination input buffer
– causes some action at the destination
– occurrence is not directly visible at the source

• deposit data, state change, reply

[Figure: a serialized message travels from the source node's output buffer, across the communication network, to the destination node's input buffer.]
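To make the primitive concrete, here is a minimal C sketch (not from the slides — the net_xact_t layout and ni_output_push are hypothetical) of a source depositing a one-way transaction into its NI output buffer and returning without seeing the effect at the destination:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical serialized message: destination, an action tag
 * (deposit data, state change, reply, ...), and a small payload. */
typedef struct {
    uint16_t dest_node;
    uint16_t action;
    uint32_t len;
    uint8_t  payload[64];
} net_xact_t;

/* Stand-in for the NI output buffer; a real assist would serialize
 * the message onto the link from here. */
static int ni_output_push(const net_xact_t *x) {
    printf("queued %u-byte transaction for node %u\n",
           (unsigned)x->len, (unsigned)x->dest_node);
    return 0;
}

/* One-way send: deposit into the output buffer and return.  Whatever
 * action this causes at the destination is not visible at the source. */
int net_xact_send(uint16_t dest, uint16_t action, const void *data, uint32_t len) {
    net_xact_t x = { .dest_node = dest, .action = action, .len = len };
    if (len > sizeof x.payload) return -1;
    memcpy(x.payload, data, len);
    return ni_output_push(&x);
}

int main(void) {
    uint64_t value = 42;
    return net_xact_send(3, /*action: deposit data*/ 1, &value, sizeof value);
}
```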

Page 3

Programming Models Realized by Protocols

[Figure: layered view. Parallel applications (CAD, database, scientific modeling) are written to programming models (multiprogramming, shared address, message passing, data parallel). The communication abstraction marks the user/system boundary; beneath it sit compilation or library, operating systems support, and — across the hardware/software boundary — communication hardware and the physical communication medium. Network transactions are the primitive realized at this level.]

Page 4

Shared Address Space Abstraction

• Fundamentally a two-way request/response protocol
– writes have an acknowledgement
• Issues
– fixed or variable length (bulk) transfers
– remote virtual or physical address; where is the action performed?
– deadlock avoidance and input buffer full

• coherent? consistent?

[Figure: timeline of a remote read. Source: (1) initiate memory access (load r ← [global address]), (2) address translation, (3) local/remote check, (4) request transaction (read request sent into the network), then wait. Destination: (5) remote memory access, (6) reply transaction (read response). Source: (7) complete memory access.]
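A toy C sketch of how a load to a global address decomposes into the steps above (everything here — the two-node "system", the address encoding, the helper names — is an illustrative assumption, not the slides' own code):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy two-node system: the top bit of a "global address" names the home
 * node ((2)/(3) translation and local/remote check), the low bits index
 * that node's memory. */
enum { NODES = 2, WORDS = 16 };
static uint64_t memory[NODES][WORDS];
static int my_node = 0;

static int      home_node(uint64_t ga) { return (int)(ga >> 63); }
static uint64_t offset_of(uint64_t ga) { return ga & (WORDS - 1); }

/* (4)-(6): in a real machine these are request and reply transactions
 * through the CA; here the remote memory access is just an array read. */
static uint64_t remote_read(uint64_t ga) {
    return memory[home_node(ga)][offset_of(ga)];   /* (5) + (6) */
}

uint64_t shared_load(uint64_t ga) {                /* (1) initiate access  */
    if (home_node(ga) == my_node)                  /* (3) local/remote     */
        return memory[my_node][offset_of(ga)];
    return remote_read(ga);                        /* (4)-(7) round trip   */
}

int main(void) {
    memory[1][5] = 1234;                           /* data lives on node 1 */
    uint64_t ga = ((uint64_t)1 << 63) | 5;         /* global addr: node 1, word 5 */
    printf("loaded %llu\n", (unsigned long long)shared_load(ga));
    return 0;
}
```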

Page 5

Consistency

• Write atomicity violated without caching
– no way to enforce serialization

• Solution? Acknowledge write of A before writing Flag…

[Figure: P1 executes A=1; flag=1; while P3 spins on while (flag==0); and then prints A. With memory modules distributed across the interconnection network (A initially 0, flag going 0->1), the write of flag (2) can overtake the write of A (1) if A's write is delayed on a congested path, so P3's load of A (3) may return the stale value.]
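The example in the figure, written out in C. The wait_for_write_ack helper is a hypothetical stand-in for the slide's fix (acknowledge the write of A before writing flag); without it, nothing in an uncached system stops the two writes from being observed out of order:

```c
#include <stdio.h>

volatile int A = 0, flag = 0;

/* Hypothetical: block until A's home memory has acknowledged our last
 * write.  Without caches to enforce write serialization, this is what
 * keeps the write of flag from overtaking the write of A. */
static void wait_for_write_ack(volatile int *addr) { (void)addr; }

void p1_producer(void) {
    A = 1;
    wait_for_write_ack(&A);   /* the slide's fix: ack A before writing flag */
    flag = 1;
}

void p3_consumer(void) {
    while (flag == 0)
        ;                      /* spin until P1 sets flag */
    printf("%d\n", A);         /* with the ack in place, this prints 1 */
}

int main(void) {               /* sequential stand-in for the two processors */
    p1_producer();
    p3_consumer();
    return 0;
}
```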

Page 6

Properties of Shared Address Abstraction

• Source and destination data addresses are specified by the source of the request
– a degree of logical coupling and trust
• No storage logically "outside the address space"
» may employ temporary buffers for transport
• Operations are fundamentally request/response
• Remote operations can be performed on remote memory
– logically does not require intervention of the remote processor

Page 7

Message passing

• Bulk transfers
• Complex synchronization semantics
– more complex protocols
– more complex action
• Synchronous
– send completes after matching recv and source data sent
– receive completes after data transfer complete from matching send
• Asynchronous
– send completes after send buffer may be reused
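As a concrete library-level illustration (MPI is our example here, not something the slide names): MPI_Ssend has the synchronous semantics above — it cannot complete until the matching receive has started — while MPI_Isend returns immediately and its buffer is reusable once MPI_Wait reports completion.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Synchronous send: completes only after the matching receive
         * on rank 1 has been posted and begun. */
        MPI_Ssend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* Asynchronous (non-blocking) send: returns at once; the send
         * buffer may be reused only after MPI_Wait says it is done. */
        MPI_Request req;
        MPI_Isend(&buf, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        int x, y;
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&y, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d and %d\n", x, y);
    }
    MPI_Finalize();
    return 0;
}
```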

Page 8

Synchronous Message Passing

• Constrained programming model
• Deterministic! What happens when threads are added?
• Destination contention very limited
• User/system boundary?

[Figure: timeline. Source: (1) initiate send (Send Pdest, local VA, len), (2) address translation on Psrc, (3) local/remote check, (4) send-ready request, then wait. Destination: Recv Psrc, local VA, len; tag check; (5) remote check for posted receive (assume success); (6) reply transaction (recv-ready reply). (7) Bulk data transfer (data-xfer request) from source VA to destination VA or ID. Processor action?]

Page 9

Asynch. Message Passing: Optimistic

• More powerful programming model
• Wildcard receive => non-deterministic
• Storage required within the msg layer?

[Figure: timeline. Source: (1) initiate send (Send(Pdest, local VA, len)), (2) address translation, (3) local/remote check, (4) send data immediately (data-xfer request). Destination: (5) remote check for posted receive; on failure, allocate a data buffer; when Recv Psrc, local VA, len is later posted, a tag match pulls the message from the buffer.]

Page 10

Asynch. Msg Passing: Conservative

• Where is the buffering?
• Contention control? Receiver-initiated protocol?
• Short-message optimizations

[Figure: timeline. Source: (1) initiate send (Send Pdest, local VA, len), (2) address translation on Pdest, (3) local/remote check, (4) send-ready request; the source then returns and computes. Destination: tag check; (5) remote check for posted receive (assume fail); record the send-ready. When Recv Psrc, local VA, len is posted: (6) receive-ready request (recv-rdy req). Source: (7) bulk data reply (data-xfer reply) from source VA to destination VA or ID.]
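A schematic C sketch of the three-phase rendezvous above. The "network transactions" are reduced to ordinary function calls (all names hypothetical) so the send-ready / receive-ready / bulk-transfer order is visible:

```c
#include <stdio.h>
#include <string.h>

static char src_data[] = "bulk payload";   /* source VA      */
static char dst_buf[64];                   /* destination VA */
static int  send_ready_recorded = 0;       /* destination-side state */

/* Phase 1: Send() issues a send-ready request; no receive is posted yet
 * (the "assume fail" case), so the destination just records it and the
 * source returns and computes. */
static void phase1_send_rdy_req(void) {
    send_ready_recorded = 1;
}

/* Phase 2: the matching Recv() is posted; the destination replies with a
 * receive-ready request telling the source it may transfer. */
static int phase2_recv_rdy_req(void) {
    return send_ready_recorded;
}

/* Phase 3: the source performs the bulk data reply into the dest VA. */
static void phase3_data_xfer_reply(void) {
    memcpy(dst_buf, src_data, sizeof src_data);
}

int main(void) {
    phase1_send_rdy_req();                 /* source: Send(Pdest, VA, len) */
    if (phase2_recv_rdy_req())             /* dest:   Recv(Psrc, VA, len)  */
        phase3_data_xfer_reply();          /* source: bulk transfer        */
    printf("destination received: %s\n", dst_buf);
    return 0;
}
```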

Page 11

Features of Msg Passing Abstraction

• Source knows the send data address, destination knows the receive data address
– after the handshake they both know both
• Arbitrary storage "outside the local address spaces"
– may post many sends before any receives
– non-blocking asynchronous sends reduce the requirement to an arbitrary number of descriptors
» fine print says these are limited too
• Optimistically, can be a 1-phase transaction
– compare to 2-phase for shared address space
– need some sort of flow control
» credit scheme?
• More conservative: 3-phase transaction
– includes a request/response
• Essential point: combined synchronization and communication in a single package!

Page 12

Active Messages

• User-level analog of a network transaction
– transfer a data packet and invoke a handler to extract it from the network and integrate it with the on-going computation
• Request/reply
• Event notification: interrupts, polling, events?
• May also perform memory-to-memory transfer

[Figure: a request message invokes a handler at the destination; that handler may send a reply, which in turn invokes a handler back at the source.]
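A sketch of the active-message idea in C. This is not the actual Active Messages library interface; am_request, the handler signature, and the toy "network" are illustrative assumptions. The point is that the packet carries the address of a user-level handler that the destination runs to pull the data out of the network:

```c
#include <stdint.h>
#include <stdio.h>

/* An active message: a handler pointer plus a small argument payload. */
typedef void (*am_handler_t)(int src_node, uint64_t arg);

typedef struct {
    int          dest_node;
    am_handler_t handler;
    uint64_t     arg;
} am_packet_t;

/* Toy "network": delivering a packet just runs its handler at the
 * destination -- the user-level analog of a network transaction. */
static void network_deliver(const am_packet_t *p, int src_node) {
    p->handler(src_node, p->arg);
}

static void am_request(int dest, am_handler_t h, uint64_t arg) {
    am_packet_t p = { dest, h, arg };
    network_deliver(&p, /*src_node=*/0);
}

/* Request handler: integrate the value into the on-going computation
 * (here, just accumulate it); a real handler might also send a reply. */
static uint64_t total = 0;
static void add_handler(int src_node, uint64_t arg) {
    total += arg;
    printf("handler for node %d: total = %llu\n",
           src_node, (unsigned long long)total);
}

int main(void) {
    am_request(/*dest=*/1, add_handler, 5);
    am_request(/*dest=*/1, add_handler, 7);
    return 0;
}
```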

Page 13

Common Challenges

• Input buffer overflow
– N-1 queue over-commitment => must slow sources
• Options:
– reserve space per source (credit)
» when is it available for reuse? Ack or higher level
– refuse input when full
» backpressure in a reliable network
» tree saturation
» deadlock free
» what happens to traffic not bound for the congested dest?
– reserve an ack back channel
– drop packets
– utilize higher-level semantics of the programming model
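A minimal sketch of the per-source credit option (all names and numbers are illustrative): the source holds one credit for each input-buffer slot the destination has reserved for it, spends a credit per message, and gets it back when the destination acknowledges that the slot has been drained.

```c
#include <stdio.h>

#define CREDITS_PER_DEST 4     /* input-buffer slots reserved for us at the dest */

static int credits = CREDITS_PER_DEST;

/* Returns 0 if the message had to be held back (the source is slowed). */
static int send_with_credit(int dest, const void *msg) {
    (void)dest; (void)msg;
    if (credits == 0)
        return 0;              /* no reserved slot free: back off */
    credits--;                 /* consume one reserved slot       */
    /* ... hand the message to the NI here ...                    */
    return 1;
}

/* Called when the destination acks that one of our slots was drained --
 * the "when available for reuse?" question from the slide. */
static void credit_return_ack(void) {
    credits++;
}

int main(void) {
    for (int i = 0; i < 6; i++)
        printf("msg %d %s\n", i, send_with_credit(1, "hi") ? "sent" : "held");
    credit_return_ack();       /* one slot freed at the destination */
    printf("retry: %s\n", send_with_credit(1, "hi") ? "sent" : "held");
    return 0;
}
```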

Page 14

The Fetch Deadlock Problem

• Even if a node cannot issue a request, it must sink network transactions!
– an incoming transaction may be a request that generates a response
– closed system (finite buffering)
• Deadlock can occur even if the network itself is deadlock free!


Page 15

Solutions to Fetch Deadlock?

• Logically independent request/reply networks
– physical networks
– virtual channels with separate input/output queues
• Bound requests and reserve input buffer space
– K(P-1) requests + K responses per node
– service discipline to avoid fetch deadlock?
• NACK on input buffer full
– NACK delivery?
• Alewife solution:
– dynamically increase buffer space into memory when necessary
– argument: this is an uncommon case, so use software to fix it
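The bounded-requests option sizes the reservation directly: with at most K outstanding requests per source and P nodes, each node reserves room for K(P-1) incoming requests plus K responses to its own requests. A quick arithmetic sketch with illustrative numbers:

```c
#include <stdio.h>

int main(void) {
    int P = 64;                  /* nodes (illustrative)               */
    int K = 4;                   /* outstanding requests per source    */
    int entry_bytes = 64;        /* bytes per reserved buffer entry    */

    int entries = K * (P - 1)    /* worst-case incoming requests       */
                + K;             /* plus responses to our own requests */
    printf("reserve %d entries = %d bytes per node\n",
           entries, entries * entry_bytes);
    return 0;
}
```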

Page 16

Network Transaction Processing

• Key design issues:
– How much interpretation of the message?
– How much dedicated processing in the communication assist?

[Figure: node architecture. Each node (processor + memory) attaches to the scalable network through a communication assist (CA). Output processing: checks, translation, formatting, scheduling. Input processing: checks, translation, buffering, action.]

Page 17

Spectrum of Designs

• None: physical bit stream
– blind, physical DMA: nCUBE, iPSC, ...
• User/system
– user-level port: CM-5, *T, Alewife
– user-level handler: J-Machine, Monsoon, ...
• Remote virtual address
– processing, translation: Paragon, Meiko CS-2
• Global physical address
– processor + memory controller: RP3, BBN, T3D
• Cache-to-cache
– cache controller: Dash, Alewife, KSR, Flash

Increasing HW support, specialization, intrusiveness, performance (???)

Page 18

Net Transactions: Physical DMA

• DMA controlled by registers, generates interrupts
• Physical addresses => OS initiates transfers
• Send side
– construct a system "envelope" around the user data in kernel area
• Receive side
– must receive into a system buffer, since there is no interpretation in the CA

[Figure: sending and receiving nodes, each a processor + memory with DMA channels. The sender's command names the destination and data; the DMA registers hold addr, length, and rdy. The receiver's DMA channels deposit into memory via their own addr/length/rdy registers and raise a status/interrupt; the envelope carries sender authentication and the destination address.]
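What the send side might look like to the OS, as a sketch: a descriptor of memory-mapped channel registers mirroring the figure's cmd/dest/addr/length/rdy labels. The layout and names are assumptions, not any particular machine's registers.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-mapped DMA channel registers, as the kernel sees them. */
typedef volatile struct {
    uint64_t addr;      /* physical address of the outgoing envelope      */
    uint32_t length;    /* bytes to transfer                              */
    uint32_t dest;      /* destination node                               */
    uint32_t cmd;       /* command / sender-authentication bits           */
    uint32_t rdy;       /* set by software to start; completion interrupts */
} dma_channel_t;

/* OS-level send: the system "envelope" has already been built around the
 * user data in a kernel buffer; program the channel and kick it off. */
static void dma_send(dma_channel_t *ch, uint64_t envelope_pa,
                     uint32_t len, uint32_t dest_node) {
    ch->addr   = envelope_pa;
    ch->length = len;
    ch->dest   = dest_node;
    ch->rdy    = 1;
}

int main(void) {
    dma_channel_t fake = {0};                  /* stand-in for the real device */
    dma_send(&fake, 0x1000, 256, 7);
    printf("programmed channel: dest=%u len=%u rdy=%u\n",
           (unsigned)fake.dest, (unsigned)fake.length, (unsigned)fake.rdy);
    return 0;
}
```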

Page 19

nCUBE Network Interface

• Independent DMA channel per link direction
– leave input buffers always open
– segmented messages
• Routing interprets the envelope
– dimension-order routing on the hypercube
– bit-serial with 36-bit cut-through

[Figure: nCUBE network interface. Processor and memory sit on the memory bus with a set of DMA channels (addr/length registers per channel) feeding the input and output ports of the hypercube switch.]

Overheads: Os = 16 instructions, 260 cycles (~13 µs); Or = 18 instructions, 200 cycles (~15 µs), including the interrupt.

Page 20

Conventional LAN NI

[Figure: conventional LAN NIC. The NIC controller (DMA addr, len, trncv, TX, RX) sits on the I/O bus, which bridges to the memory bus with the processor and host memory; host memory holds rings of descriptors (addr, len, status, next) pointing at data buffers.]

• Costs: Marshalling, OS calls, interrupts

Page 21

User Level Ports

• Initiate transaction at user level
• Deliver to user without OS intervention
• Network port in user space
– may use virtual memory to map physical I/O to user mode
• User/system flag in the envelope
– protection check, translation, routing, media access in the source CA
– user/system check in the destination CA, interrupt on system

[Figure: the network output port, network input port, and a status register appear in the user virtual address space alongside the processor's program counter and registers; the destination node raises status/interrupt for system-flagged messages (dest, data, user/system).]
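From the application's point of view, a user-level port can look like a few loads and stores: the NI's FIFOs and status register are mapped into the user virtual address space, so no OS call is needed per message. The register names and status-bit layout below are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical NI registers mapped into the user virtual address space. */
typedef volatile struct {
    uint32_t status;        /* bit 0: output ready, bit 1: input available */
    uint32_t out_dest;      /* destination node for the next message       */
    uint64_t out_data;      /* store pushes a word into the output FIFO    */
    uint64_t in_data;       /* load pops a word from the input FIFO        */
} user_port_t;

#define OUT_READY 0x1u
#define IN_AVAIL  0x2u

static void port_send(user_port_t *p, uint32_t dest, uint64_t word) {
    while (!(p->status & OUT_READY))
        ;                                  /* spin until the FIFO has room  */
    p->out_dest = dest;
    p->out_data = word;                    /* this store completes the send */
}

static uint64_t port_recv(user_port_t *p) {
    while (!(p->status & IN_AVAIL))
        ;                                  /* spin until a message arrives  */
    return p->in_data;
}

int main(void) {                           /* fake port, just to exercise the code */
    user_port_t fake = { .status = OUT_READY | IN_AVAIL, .in_data = 99 };
    port_send(&fake, 3, 42);
    printf("sent %llu to node %u, received %llu\n",
           (unsigned long long)fake.out_data, (unsigned)fake.out_dest,
           (unsigned long long)port_recv(&fake));
    return 0;
}
```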

Page 22

Example: CM-5

• Input and output FIFO for each network
• 2 data networks
• Tag per message
– indexes NI mapping table
• Context switching?
• Alewife: integrated the NI on chip
• *T and iWARP also

[Figure: CM-5 organization. Processing partitions, control processors, and an I/O partition hang off the data network, control network, and diagnostics network. Each node: SPARC processor on the MBUS with FPU, cache controller + SRAM, DRAM controllers with DRAM, optional vector units, and the NI connecting to the data and control networks.]

Overheads: Os = 50 cycles (~1.5 µs), Or = 53 cycles (~1.6 µs); interrupt ~10 µs.

Page 23

User Level Handlers

• Hardware support to vector to an address specified in the message
– on arrival, hardware fetches the handler address and starts execution
• Active Messages: two options
– computation in background threads
» handler never blocks: it integrates the message into the computation
– computation in handlers (message-driven processing)
» handler does the work; may need to send messages or block

[Figure: as in the user-level port figure, but the message carries dest, data, and a handler address; the destination checks user/system and vectors directly to that handler.]
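On the receive side, "vector to the address specified in the message" amounts to treating the head of the message as a function pointer. A hedged sketch with a hypothetical message layout:

```c
#include <stdint.h>
#include <stdio.h>

/* Incoming message: the handler PC first, then the arguments. */
typedef void (*handler_pc_t)(const uint64_t *args, int nargs);

typedef struct {
    handler_pc_t pc;        /* address the hardware vectors to */
    uint64_t     args[4];
    int          nargs;
} message_t;

/* What the arrival logic does: fetch the handler address from the head
 * of the message and start executing it. */
static void dispatch_on_arrival(const message_t *m) {
    m->pc(m->args, m->nargs);
}

static void put_handler(const uint64_t *args, int nargs) {
    (void)nargs;
    printf("PUT of value %llu\n", (unsigned long long)args[0]);
}

int main(void) {
    message_t m = { put_handler, { 17 }, 1 };
    dispatch_on_arrival(&m);   /* the "vector to address in message" step */
    return 0;
}
```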

Page 24

J-Machine

• Each node is a small message-driven processor
• HW support to queue messages and dispatch to a message-handler task

Page 25

Alewife Messaging

• Send message
– write words to special network interface registers
– execute atomic launch instruction
• Receive
– generate interrupt / launch user-level thread context
– examine the message by reading from special network interface registers
– execute dispose message
– exit atomic section
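The send/receive sequence can be pictured as below. This is only a paraphrase of the slide in C: the register block, the descriptor layout, and the launch/dispose writes are hypothetical stand-ins for Alewife's actual network-interface registers and its atomic launch instruction.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical view of the special network-interface registers. */
typedef volatile struct {
    uint64_t out[8];      /* output descriptor words (header + operands)  */
    uint64_t launch;      /* write = atomic launch of the message         */
    uint64_t in[8];       /* head of the incoming message                 */
    uint64_t dispose;     /* write = dispose message / exit atomic section */
} ni_regs_t;

static void send_message(ni_regs_t *ni, uint64_t header,
                         const uint64_t *words, int n) {
    ni->out[0] = header;                  /* write words to NI registers */
    for (int i = 0; i < n && i < 7; i++)
        ni->out[i + 1] = words[i];
    ni->launch = 1;                       /* atomic launch               */
}

static void receive_message(ni_regs_t *ni, uint64_t *words, int n) {
    for (int i = 0; i < n && i < 8; i++)  /* examine message via registers */
        words[i] = ni->in[i];
    ni->dispose = 1;                      /* dispose message              */
}

int main(void) {                          /* fake NI with loopback, to exercise it */
    ni_regs_t fake = {0};
    uint64_t payload[1] = { 7 }, rx[2];
    send_message(&fake, 0xAB, payload, 1);
    fake.in[0] = fake.out[0];
    fake.in[1] = fake.out[1];
    receive_message(&fake, rx, 2);
    printf("received header %llx, word %llu\n",
           (unsigned long long)rx[0], (unsigned long long)rx[1]);
    return 0;
}
```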

Page 26

iWARP

• Nodes integrate communication with computation on a systolic basis
• Message data can be delivered directly into a register of the neighbor
• Or streamed into memory

[Figure: iWARP node — interface unit connecting the host to the network.]

Page 27

Sharing of Network Interface

• What if the user is in the middle of constructing a message and must context switch???
– Need an atomic send operation!
» message is either completely in the network or not at all
» can save/restore the user's work if necessary (think about a single set of network interface registers)
– J-Machine mistake: after starting to send a message, must let the sender finish
» flits start entering the network with the first SEND instruction
» only a SENDE instruction constructs the tail of the message
• Receive atomicity
– If we want to allow user-level interrupts or polling, we must give the user control over network reception
» the closer the user is to the network, the easier it is to screw it up: refuse to empty the network, etc.
» however, must allow atomicity: a way for a well-behaved user to select when their message handlers get interrupted
– Polling: ultimate receive atomicity – never interrupted
» fine as long as the user keeps absorbing messages
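The polling case from the last bullet, sketched in C (the NI helpers are hypothetical): handlers run only at the poll point the user chooses, which gives receive atomicity for free as long as the loop keeps absorbing messages.

```c
#include <stdio.h>

/* Hypothetical NI helpers: is a message waiting, and pull the next one. */
static int msgs_pending = 3;
static int ni_message_available(void) { return msgs_pending > 0; }
static int ni_next_message(void)      { return --msgs_pending; }

static void handle(int msg) { printf("handled message %d\n", msg); }

/* Handlers only ever run here, so the rest of the user's code is never
 * interrupted (receive atomicity) -- provided this is called often enough
 * to keep draining the network. */
static void poll_network(void) {
    while (ni_message_available())
        handle(ni_next_message());
}

int main(void) {
    /* ... user computation ... */
    poll_network();            /* drain everything that has arrived so far */
    /* ... more computation, more polls ... */
    return 0;
}
```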

Page 28

Dedicated processing without dedicated hardware design

Page 29

Dedicated Message Processor

• General-purpose processor performs arbitrary output processing (at system level)
• General-purpose processor interprets incoming network transactions (at system level)
• User processor <–> msg processor: share memory
• Msg processor <–> msg processor: via system network transaction

[Figure: each node has memory shared by a user processor (P) and a message processor (MP), with an NI onto the network; the MP handles the system side of each transaction, the P the user side.]

Page 30

Levels of Network Transaction

• User processor stores cmd / msg / data into a shared output queue
– must still check for output queue full (or make it elastic)
• Communication assists make the transaction happen
– checking, translation, scheduling, transport, interpretation
• Effect observed on the destination address space and/or events
• Protocol divided between the two layers

[Figure: same node organization as the previous slide — user processor and message processor per node sharing memory, NI onto the network.]

Page 31

Example: Intel Paragon

[Figure: Intel Paragon node. A compute processor and a message processor (i860 XP, 50 MHz, 16 KB 4-way cache, 32 B blocks, MESI) share memory over a 64-bit, 400 MB/s bus; the NI provides send and receive DMA (sDMA/rDMA) onto the network, whose links run at 175 MB/s duplex (16-bit). Messages carry route, MP handler, variable data, and EOP, in 2048 B units. I/O nodes, service nodes, and devices attach to the same network.]

Page 32

User Level Abstraction (Lok Liu)

• Any user process can post a transaction for any other in the protection domain
– communication layer moves OQsrc –> IQdest
– may involve indirection: VASsrc –> VASdest
• See, for instance:
– "Remote Queues: Exposing Message Queues for Optimization and Atomicity," Eric A. Brewer, Fredrik T. Chong, Lok T. Liu, Shamik D. Sharma, John D. Kubiatowicz, SPAA 1995

[Figure: four processes, each with an output queue (OQ), input queue (IQ), and virtual address space (VAS); a transaction moves from one process's OQ to another process's IQ.]

Page 33

Msg Processor Events

[Figure: the message processor's dispatcher reacts to events: user output queues non-empty, send FIFO ~empty, receive FIFO ~full, send DMA, receive DMA, DMA done, compute-processor kernel requests, and system events.]

Page 34

Basic Implementation Costs: Scalar

• Cache-to-cache transfer (two 32 B lines, quad-word ops)
– producer: read(miss,S), chk, write(S,WT), write(I,WT), write(S,WT)
– consumer: read(miss,S), chk, read(H), read(miss,S), read(H), write(S,WT)
• To NI FIFO: read status, chk, write, ...
• From NI FIFO: read status, chk, dispatch, read, read, ...

[Figure: message path from the compute processor (CP) through the user output queue, the message processor's (MP) registers and cache, and the net FIFO, across the network, and into the destination's user input queue; 7-word messages. Annotated costs include per-stage figures of 2 / 1.5 / 2 and 2 / 2 / 2 for MP / CP / Net, segment latencies of 4.4 µs and 5.4 µs, a 10.5 µs total, and memory access at 250 ns + H*40 ns.]

Page 35

Virtual DMA -> Virtual DMA

• Send MP segments the transfer into 8 KB pages and does VA –> PA translation
• Recv MP reassembles, does dispatch and VA –> PA translation per page

[Figure: same path as the previous slide, but bulk data moves by DMA — send DMA (sDMA) out of memory at 400 MB/s, across the network at 175 MB/s, and receive DMA (rDMA) into memory at 400 MB/s, in 2048 B units; the MP handles the header and the 7-word control message.]

Page 36

Single Page Transfer Rate

[Plot: bandwidth (MB/s, 0-400) versus transfer size (B, 0-8000) for a single page transfer, with two curves: total MB/s and burst MB/s. Actual buffer size: 2048; effective buffer size: 3232.]

Page 37

Msg Processor Assessment

• Concurrency intensive
– need to keep inbound flows moving while outbound flows are stalled
– large transfers segmented
• Reduces overhead but adds latency

[Figure: the message-processor dispatcher as before, now also showing the user input queues and the VAS alongside the send/receive FIFOs, DMA engines, compute-processor kernel, and system events.]

Page 38

Conclusion

• Shared address space
– request/response protocol
– global names for memory locations specify nodes
• Many different message-passing styles
– global address space: 2-way
– optimistic message passing: 1-way
– conservative transfer: 3-way
• "Fetch deadlock"
– request-response introduces a cycle through the network
– fix with:
» 2 networks
» dynamic increase in buffer space
• Network interfaces
– user-level access
– DMA
– atomicity