
Page 1:

© Michel Dubois, Murali Annavaram, Per Stenström All rights reserved Modified version by C Andras Moritz

UNIVERSITY OF MASSACHUSETTS, Dept. of Electrical & Computer Engineering

Computer Architecture ECE 668

Multiprocessor Systems IV

Chip Multiprocessors

Csaba Andras Moritz

Papers: handed out in class. Textbook: Chapter 8. Slides adapted from the authors' groups.

Page 2:

Outline

• CORE MULTITHREADING

• HOMOGENEOUS CMP ARCHITECTURES

• HETEROGENEOUS/CONJOINED CORES

Page 3:

NEW OPPORTUNITIES

• Parallel architectures have been around since the 1970s
• But in the past they were reserved for high-end servers
• Today, out of technological pressures, parallel architectures have become pervasive
• PCs, workstations, even mobile devices
• Mass-market production
• Today's microprocessors are CMPs (chip multiprocessors, or multiprocessors on a chip)
• Multi-core vs. many-core
• Core multithreading
• Exploit TLP (thread-level parallelism)
• Mostly multiprogramming
• Renewed interest in parallel programming
• New programming paradigms are being explored
• Compared to traditional shared memory, communication overheads are drastically reduced
• This enables new architectures and new programming paradigms
• Systems with huge numbers of threads can be built by exploiting parallelism at all levels: ILP, processors, and cores

Page 4:

WHY CORE MULTITHREADING?

Cache miss in the 5-stage pipeline

• On a miss, the processor clock is stopped, the miss is handled, and then the clock is restarted
• Example: 20-cycle compute runs / 20-cycle L1-miss latency / an L2 miss every 200 compute cycles, with a 200-cycle L2-miss latency
• CPI without cache misses (CPI0) is 1
• L1 MPI (misses per instruction) is 0.05; L2 MPI is 0.005
• Time to execute 200 instructions: 20x10 (compute) + 20x10 (L1 misses) + 200 (L2 miss) = 600 cycles (checked in the sketch below)

Conclusion: the CPI is multiplied by 3 because of cache misses!

[Figure: 5-stage pipeline (IF, ID, EX, MEM, WB) with I-cache, D-cache, PC and register file; execution timeline alternating 20-cycle compute runs with 20-cycle L1 misses, plus one 200-cycle L2 miss]
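The CPI arithmetic above can be checked directly. A minimal sketch in Python, using only the slide's numbers (CPI0 = 1, L1 MPI = 0.05 with a 20-cycle penalty, L2 MPI = 0.005 with a 200-cycle penalty):

    # Effective CPI with stall-on-miss, using the slide's numbers
    cpi0 = 1.0                       # base CPI without cache misses
    l1_mpi, l1_penalty = 0.05, 20    # L1 misses per instruction, miss latency
    l2_mpi, l2_penalty = 0.005, 200  # L2 misses per instruction, miss latency

    cpi = cpi0 + l1_mpi * l1_penalty + l2_mpi * l2_penalty
    print(cpi)  # 3.0: cache misses triple the CPI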

Page 5:

PRIMITIVE (OLD) FORM: SOFTWARE MULTITHREADING

• Used since the 1960s to hide the latency of I/O operations
• Multiple processes or threads are active
• Virtual memory space allocated
• Process control block allocated
• On an I/O operation (sketched below)
• The process is preempted and removed from the ready list
• The I/O operation is started
• Another active process is picked from the ready list and run
• When the I/O completes, the preempted process is put back in the ready list
• Context switch
• Trap the processor; flush the pipeline
• Save the process state in the process control block
• Includes register file(s), PC, interrupt vector, page table base register, etc.
• Restore the process state of a different process
• Start execution; fill the pipeline
• Also triggered on
• Shared resource conflicts (e.g., semaphores)
• Timer interrupts (fairness)

Very high switching overhead (OK, since the wait is very long)
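A minimal sketch of the preempt-on-I/O policy in Python (illustrative only; start_io is a hypothetical hook, not a real OS call):

    from collections import deque

    ready = deque()        # ready list of process control blocks
    blocked_on_io = set()  # processes waiting for an I/O completion

    def on_io_request(current):
        # The process is preempted and removed from the ready list,
        # the I/O operation is started, and another active process
        # is picked from the ready list and run
        blocked_on_io.add(current)
        start_io(current)       # hypothetical: kick off the I/O operation
        return ready.popleft()  # next process to run

    def on_io_complete(proc):
        # When the I/O completes, put the preempted process back
        # in the ready list
        blocked_on_io.discard(proc)
        ready.append(proc)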

Page 6:

HARDWARE MULTITHREADING

• Run multiple threads on the same core concurrently
• Run another thread when a thread is blocked on
• Unsuccessful synchronization
• L1 or L2 cache misses
• TLB misses
• Exceptions
• Even while waiting for operands (latency of operation)
• Minimum hardware support: replicate the architectural state (sketched below)
• All running threads must have their own thread context
• Multiple register sets in the processor
• Multiple state registers (CCs, PC, PTBR, IV)
• Three types of hardware multithreading:
• Block multithreading, aka coarse-grain multithreading
• Interleaved multithreading, aka fine-grain multithreading
• Simultaneous multithreading
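The replicated state can be pictured as one record per hardware thread. A sketch in Python (field names are illustrative, not tied to a specific ISA):

    from dataclasses import dataclass, field

    @dataclass
    class ThreadContext:
        # Minimum per-thread state a multithreaded core must replicate
        regs: list = field(default_factory=lambda: [0] * 32)  # register set
        pc: int = 0            # program counter
        ccs: int = 0           # condition codes
        ptbr: int = 0          # page table base register
        iv: int = 0            # interrupt vector
        blocked: bool = False  # set while waiting on a long-latency event

    # e.g., a 4-way multithreaded core keeps four such contexts
    contexts = [ThreadContext() for _ in range(4)]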

Page 7:

BLOCK MULTITHREADING

• Each running thread executes in turn until a long-latency event
• Similar to software multithreading, but at a different scale
• Five-stage pipeline
• In the example, each context switch due to an L1 miss causes a 25% overhead to flush the pipeline (see the check below)
• The major cost is flushing the pipeline
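A quick check of the 25% figure, under the assumption that a flush wastes about five cycles (one per pipeline stage) for every 20-cycle compute run of the earlier example:

    # Overhead of flushing the 5-stage pipeline on every context switch
    flush_cycles = 5   # assumed: roughly one cycle per pipeline stage
    run_cycles = 20    # compute cycles between L1 misses in the example
    print(flush_cycles / run_cycles)  # 0.25, i.e., a 25% switch overhead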

Page 8:

BLOCK MULTITHREADING (FIVE-STAGE PIPELINE)

• Both L1 and L2 must be lockup-free
• Must handle two cache accesses (one hit and one miss, or two misses)
• Use more threads to cover idle times
• More state replication
• More complex thread selection
• Scale up TLB and cache sizes
• Diminishing returns
• Idealized timeline
• Cache misses happen at highly variable times
• Latencies are variable
• The overlap is never as perfect as in the example

Page 9:

BLOCK MULTITHREADING IN OOO CORES• OoO CORES

• The cost of switching threads is much higher• All instructions must leave the pipeline before the switch

• Example

• • Thread 1 experiences a long latency event at instruction x5

• Thread switch is triggered when x5 reaches the top of rob

[Figure: data-dependence graphs for Thread 1 (X1-X6) and Thread 2 (Y1-Y6), with instruction latencies annotated on each node; X5 has latency 1 on a hit plus 20 on a miss]

Page 10:

BLOCK MULTITHREADING

• OoO processors
• 2-way dispatch, 2-way issue (one issue queue of size 4), 2 CDBs, 2-way retire
• The second thread is executed in the shadow of the miss of the first thread, as in the 5-stage pipeline
• Here only two instructions are flushed (X5 and X6)

More state duplication is useful: e.g., TLB, branch prediction

Instruction(latency)  Dispatch(issue Q)  Issue  Reg fetch  Exec start  Exec complete  CDB  Retire(T1)  Retire(T2)
X1(2)                 1(1)               2      3          4           5              6    7           -
X2(2)                 1(2)               4      5          6           7              8    9           -
X3(4)                 2(3)               4      5          6           9              10   11          -
X4(2)                 2(4)               6      7          10          11             12   13          -
X5(1,20)              9(3)               10     11         12*        12             13   flushed     -
X6(1)                 9(4)               11     12         13          13             14   flushed     -
Y1(2)                 15(1)              16     17         18          19             20   -           21
Y2(3)                 15(2)              18     19         20          22             23   -           24
Y3(2)                 16(3)              21     22         23          24             25   -           26
Y4(2)                 16(4)              23     24         25          26             27   -           28
Y5(3)                 24(3)              25     26         27          29             30   -           31
Y6(1)                 24(4)              28     29         30          30             31   -           32
X5(1,20)              32(3)              33     34         35          35             36   37          -
X6(1)                 32(3)              34     35         36          36             37   38          -

(* marks the cycle where X5's long-latency miss is detected; X5 and X6 are flushed and re-dispatched at cycle 32)

Page 11:

BLOCK MULTITHREADING: EXAMPLES

• IBM iSeries SStar
• Called HMT (hardware multithreading)
• 4-way superscalar in-order processor with a 5-stage pipeline
• Designed for commercial workloads
• Two threads: foreground and background
• Switch threads on cache misses, plus a time-out mechanism
• Intel's Montecito
• Two cores with two threads per core
• IA-64 (Itanium)
• L3 cache misses: off-chip accesses
• Events: L3 cache misses/data return, expiration of quantum, thread switch hint provided by software (an instruction that forces the thread to yield the core)
• Thread urgency level based on the occurrence of events
• Thread switching occurs when the urgency level of the suspended thread is higher than that of the running thread

NO EXAMPLE OF BLOCK MULTITHREADING IN OoO PROCESSORS

Page 12:

INTERLEAVED MULTITHREADING

• Dispatch instructions from different threads/processes in each cycle
• Different ready threads dispatch in turn in every cycle
• Takes advantage of small latencies, such as instruction latencies
• Two threads: the penalty of a taken branch is one clock and the penalty of an exception is three clocks
• Five threads: the penalty of a taken branch is zero and the penalty of an exception is one clock (a rough model reproducing these numbers is sketched below)
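A rough model that reproduces these numbers (an assumption; the slide gives only the resulting penalties): with N-way round-robin interleaving, a B-cycle branch bubble shrinks to max(0, B - (N - 1)) because the other threads' instructions fill the intervening slots, and an exception that flushes F in-flight instructions costs about ceil(F / N) cycles of the faulting thread:

    from math import ceil

    def branch_penalty(bubble, n_threads):
        # Other threads fill all but the excess bubble slots
        return max(0, bubble - (n_threads - 1))

    def exception_penalty(flushed, n_threads):
        # Only every n_threads-th flushed slot belonged to the faulting thread
        return ceil(flushed / n_threads)

    # With a 2-cycle branch bubble and 5 flushed instructions (5-stage pipe):
    print(branch_penalty(2, 2), exception_penalty(5, 2))  # 1 3 (two threads)
    print(branch_penalty(2, 5), exception_penalty(5, 5))  # 0 1 (five threads)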

Page 13:

INTERLEAVED MULTITHREADING

• Same architecture as for block multithreading, except that:
• Data forwarding must be thread-aware
• A context ID is carried by forwarded values
• Stage flushing must be thread-aware
• On a miss exception, IF, ID, EX and MEM cannot be flushed indiscriminately
• Same for taken branches and regular software exceptions
• The thread selection algorithm is different
• A different thread is selected in each cycle (round-robin)
• On a long-latency event, the selector puts the thread aside and removes it from selection

Page 14:

INTERLEAVED MULTITHREADING

• SPARC T1 and T2
• Avoids taking exceptions
• The IFQ tolerates I-cache misses
• Thread selection stage; store buffers
• The thread selector selects the thread to fetch and decode in every cycle (a selector sketch follows below)
• Typically round-robin
• On a long-latency event, the selection of the thread is suspended
• Static branch prediction
• Flushing and forwarding are thread-aware
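A sketch of such a selector (illustrative Python, not the actual T1 logic): round-robin over the threads, skipping any thread suspended on a long-latency event:

    def select_thread(ready, last):
        # ready[t] is False while thread t is suspended on a long-latency event
        n = len(ready)
        for step in range(1, n + 1):  # rotate, starting after the last pick
            t = (last + step) % n
            if ready[t]:
                return t
        return None                   # every thread suspended: pipeline idles

    # Example: thread 1 is waiting on a miss, so the selector skips it
    print(select_thread([True, False, True, True], last=0))  # 2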

Page 15:

BARREL PROCESSORS

• Enough threads so that the pipeline is filled with instructions from different threads
• There is no need to forward or to detect hazards
• There can be so many ready threads that there is no need for a cache
• Or the cache can be very large, with a high hit latency
• No context switch
• Control hazards are also solved by multithreading
• High throughput, but low single-thread performance

Page 16:

EXAMPLES OF BARREL PROCESSORS

• CDC 6600 I/O processors (1960s)
• Denelcor HEP (early 1980s)
• Up to 16 processors
• 8-stage pipeline
• Different threads in the pipeline (needs at least 8 threads)
• No forwarding, no hazard detection unit, no stalling and no flushing
• No cache
• Throughput for eight threads: 10 MIPS
• Tera
• MP with up to 256 processors
• 128 i-streams per processor
• 128 PCs (program counters) and 4096 registers
• No hardware support for data hazards
• An instruction in an i-stream can issue if it has no dependencies with previous instructions
• A lookahead field is added to every instruction
• It indicates the number of following instructions that have no dependency with it (see the sketch below)
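The lookahead field amounts to a simple issue rule. A sketch (illustrative Python, not the actual Tera encoding):

    def can_issue(next_pos, in_flight):
        # in_flight: (position, lookahead) pairs for instructions of this
        # i-stream still executing. The instruction at next_pos may issue
        # only if every in-flight predecessor declared, via its lookahead
        # field, that at least the next (next_pos - pos) instructions are
        # independent of it.
        return all(next_pos - pos <= lookahead for pos, lookahead in in_flight)

    # Instruction 5 can issue while 3 (lookahead 2) and 4 (lookahead 1)
    # are still in flight:
    print(can_issue(5, [(3, 2), (4, 1)]))  # True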

Page 17:

SIMULTANEOUS MULTITHREADING IN OoO PROCESSORS

• Dispatch instructions from different threads in consecutive cycles
• If superscalar, may dispatch from different threads in the same cycle (simplification of i-fetch, decode and dispatch)
• Dispatching, scheduling, flushing and forwarding must be thread-aware
• Branch prediction separate; TLB separate; ROB separate
• In the example below, one instruction from each thread is dispatched in each cycle

Instruction(latency)  Dispatch  Issue  Reg fetch  Exec start  Exec complete  CDB  Retire(T1)  Retire(T2)
X1(2)                 1(1)      2      3          4           5              6    7           -
Y1(2)                 1(2)      2      3          4           5              6    -           7
X2(2)                 2(3)      4      5          6           7              8    9           -
Y2(3)                 2(4)      4      5          6           8              9    -           10
X3(4)                 7(3)      8      9          10          13             14   15          -
Y3(2)                 7(4)      8      9          10          11             12   -           13
X4(2)                 10(3)     12     13         14          15             16   17          -
Y4(2)                 10(4)     11     12         13          14             15   -           16
X5(1,20)              15(2)     16     17         18*        37             38   39          -
Y5(3)                 15(3)     16     17         18          20             21   -           22
X6(1)                 17(2)     36     37         38          38             39   40          -
Y6(1)                 17(3)     19     20         21          21             22   -           23

(* marks the cycle where X5's long-latency miss is detected; thread 2 keeps issuing in its shadow)

Page 18:

NIAGARA CORE

• Simple interleaved multithreading

Page 19:

CHIP MULTIPROCESSORS (CMPs)

• CMPs have several desirable properties
• Design simplicity
• Improved power scalability
• Low-latency communication
• Modularity and customization
• CMPs can be homogeneous or heterogeneous
• Depending on whether the cores are identical or not

Page 20:

BUS-BASED CMPs

• Cores share L2 cache banks through a shared bus
• Coherence is maintained between L1s by bus snooping
• Similar to UMA or SMP, except that memory is replaced by L2
• Inclusion is enforced between L2 and L1s
• The main difference is that a miss can happen in L2
• In this case, the on-chip protocol must be able to deal with variable latencies
• Example: Pentium 4 dual-core processor

[Figure: four cores (Core0-Core3) with private L1 caches on a shared bus interconnect to four L2 cache banks (Bank0-Bank3); snoop coherence on the bus; a router and memory controller attach to the bus]

Page 21:

RING-BASED CMPs

• Nodes (core + L2 bank) are connected through a ring
• Multiple requests are in progress on different links
• Packets are routed by additional logic in each node (routers)

[Figure: four nodes, each a core with a private L1 cache plus an L2 bank (Bank0-Bank3), connected in a ring by per-node routers; directory coherence; memory controller and QPI/HT interconnect attached]

Page 22:

RING-BASED CMPs

• Rings can be clocked much faster than buses
• Point-to-point instead of global
• Snooping protocols
• Coherence requests visit (snoop) every node, including cores and L2 banks
• Requests "hop" around the ring, from link to link, to broadcast to all nodes
• Responses (e.g., missing blocks, acks) are inserted in the ring
• The owner (L2 or dirty node) replies with the block on a miss
• A coherence transaction takes one trip around the ring
• Directory-based protocols are also possible
• Each node (core plus L2 cache bank) is responsible for a range of addresses (see the mapping sketch below)
• This is where the global state of the block is stored (presence bits, dirty bit)
• Requests go first to the home node
• If the home node is not the owner, the request is forwarded to the dirty node
• If the dirty node is between the requester and the home, one more round trip is needed
• As in bus-based CMPs, the L2 bank may miss
• Example: Intel Core i7
• Note: integration of external controllers on chip
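One common way to carve up the address space is to interleave cache blocks across the nodes. A minimal sketch (the modulo mapping and the sizes are assumptions; the slide only says each node owns a range):

    BLOCK_SIZE = 64  # bytes per cache block (assumed)
    NUM_NODES = 8    # nodes (core + L2 bank) on the ring (assumed)

    def home_node(addr):
        # The home node holds the directory entry for this block
        # (presence bits, dirty bit) and is the first stop for requests
        block = addr // BLOCK_SIZE
        return block % NUM_NODES

    print(home_node(0x12340))  # block 0x48D maps to node 5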

Page 23:

CMPs WITH HETEROGENEOUS CORES

• Workloads have different characteristics
• Large number of small cores (applications with high thread counts)
• Small number of large cores (applications with a single thread or limited thread count)
• Mix of workloads
• Most parallel applications have parallel and serial sections (Amdahl's law)
• Hence, heterogeneity
• Temporal: e.g., EPI throttling
• Spatial: each core can differ in either performance or functionality
• Performance asymmetry
• Using homogeneous cores and DVFS, or a processor with mixed cores
• Variable resources: e.g., adapt the cache size by gating off power to cache banks (up to 50%)
• Speculation control (for code with low branch-prediction accuracy): throttle the number of in-flight instructions (reduces the activity factor)

Method               EPI Range     Time to vary EPI
DVFS                 1:2 to 1:4    100 us, ramp Vcc
Variable Resources   1:1 to 1:2    1 us, fill L1
Speculation Control  1:1 to 1:1.4  10 ns, pipe flush
Mixed Cores          1:6 to 1:11   10 us, migrate L2

Page 24:

CMPs WITH HETEROGENEOUS CORES

• Functional asymmetry
• Use heterogeneous cores
• E.g., GP cores, graphics processors, cryptography engines, vector cores, floating-point co-processors
• Heterogeneous cores may be programmed differently
• Mechanisms must exist to transfer activity from one core to another
• Fine-grain: in the case of a floating-point co-processor, use the ISA
• Coarse-grain: transfer the computation from one core to another using APIs
• Examples:
• Cores with different ISAs
• Cores with different cache sizes, different issue widths, different branch predictors
• Cores with different micro-architectures (e.g., static and dynamic)
• Different types of cores (e.g., GP and SIMD)
• Goals:
• Save area (more cores!)
• Save power by using cores with different power/performance characteristics for different phases of execution

Page 25:

CMPs WITH HETEROGENEOUS CORES

• Different applications may have better performance/power characteristics on some types of core (static)
• The same application goes through different phases that can use different cores more efficiently (dynamic)
• Execution moves from core to core dynamically
• The most interesting case (dynamic)
• Cost of switching cores (must be infrequent: e.g., at the O/S time-slice)
• Assume cores with the same ISA but different performance/energy ratios
• Need the ability to track performance and energy to make decisions
• Goal: minimize the energy-delay product
• Sample performance and energy spent periodically
• To sample, run the application on one or multiple cores in small intervals (sketched below)
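A sketch of the sampling loop (illustrative Python; measure_interval is a hypothetical hook returning the energy and time of one short interval run on a given core):

    def pick_core(cores):
        # Sample the application briefly on each core type, then pick
        # the core that minimizes the energy-delay product E x D
        best_core, best_edp = None, float("inf")
        for core in cores:
            energy, delay = measure_interval(core)  # hypothetical hook
            if energy * delay < best_edp:
                best_core, best_edp = core, energy * delay
        return best_core  # run here until the next sampling period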

Page 26:

IBM CELL PROCESSOR

• One PowerPC Processing Element (PPE)
• 2-way SMT Power core
• Plus 8 Synergistic Processing Elements (SPEs)
• An SPE is a 2-issue in-order processor
• Two SIMD instructions can be issued in each cycle (vectors)
• No coherence support between SPEs and PPE (data transfers are explicitly programmed)

[Figure: PowerPC ISA-based PPE with L2 cache and 8 SPE cores, all connected by the ring-based Element Interconnect Bus]

Page 27:

CONJOINED CORES

• In simultaneous multithreading, all threads share all the resources of the core (caches, FUs, network interface, ...)
• In CMPs, threads run on different, disjoint cores
• An intermediate solution is conjoined cores, in which cores share some (but not all) physical resources, such as
• L1 I-cache ports
• L1 D-cache ports
• Interconnect interface
• Co-processors
• Example: Sun SPARC T1
• 8 cores on a chip
• Sharing a single FPU
• A core sends the opcode, operands and its core ID (see the sketch after the figure)
• The FPU sends back the result
• The thread requesting the FPU is suspended

[Figure: two cores (IF, ID, EX, WB, CT pipelines) sharing a single FPU through an arbiter and a request buffer]
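The request/response handshake on the shared FPU can be sketched as follows (illustrative Python; the helpers suspend_thread, execute_fp and deliver_result are assumed, not the actual T1 interface):

    from collections import deque

    req_buffer = deque()  # requests queued at the arbiter

    def request_fpu(core_id, opcode, operands):
        # The core sends the opcode, the operands and its core ID;
        # the requesting thread is suspended until the result returns
        req_buffer.append((core_id, opcode, operands))
        suspend_thread(core_id)                    # hypothetical hook

    def fpu_cycle():
        # The arbiter grants one buffered request per FPU slot
        if req_buffer:
            core_id, opcode, operands = req_buffer.popleft()
            result = execute_fp(opcode, operands)  # hypothetical FP unit
            deliver_result(core_id, result)        # wakes the waiting thread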