EECS 252 Graduate Computer Architecture
Lec 16 – Advanced Memory Hierarchy
David Patterson
Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~pattrsn
http://vlsi.cs.berkeley.edu/cs252-s06



• Outline
  - 11 Advanced Cache Optimizations
  - Administrivia
  - Memory Technology and DRAM Optimizations
  - Virtual Machines
  - Xen VM: Design and Performance
  - Conclusion

• Why More on Memory Hierarchy?
  - The processor-memory performance gap keeps growing.

  [Chart: Processor vs. DRAM performance, 1980-2010, normalized to 1980 = 1. Processor performance grows roughly 25%/year (1980-1986), 52%/year (1986-2004), and 20%/year thereafter, while DRAM performance grows roughly 7%/year. Axes: Year vs. Performance.]

• Review: 6 Basic Cache Optimizations
  Reducing hit time:
  1. Giving reads priority over writes (e.g., the read completes before earlier writes still in the write buffer)
  2. Avoiding address translation during cache indexing
  Reducing miss penalty:
  3. Multilevel caches
  Reducing miss rate:
  4. Larger block size (compulsory misses)
  5. Larger cache size (capacity misses)
  6. Higher associativity (conflict misses)

• 11 Advanced Cache Optimizations
  Reducing hit time:
  1. Small and simple caches
  2. Way prediction
  3. Trace caches
  Increasing cache bandwidth:
  4. Pipelined caches
  5. Nonblocking caches
  6. Multibanked caches
  Reducing miss penalty:
  7. Critical word first
  8. Merging write buffers
  Reducing miss rate:
  9. Compiler optimizations
  Reducing miss penalty or miss rate via parallelism:
  10. Hardware prefetching
  11. Compiler prefetching

• 1. Fast Hit Times via Small and Simple Caches
  - Indexing the tag memory and then comparing tags takes time.
  - A small cache helps hit time, since a smaller memory takes less time to index; e.g., the L1 caches stayed the same size across three generations of AMD microprocessors: K6, Athlon, and Opteron.
  - An L2 cache small enough to fit on chip with the processor also avoids the time penalty of going off chip.
  - Simple direct mapping allows overlapping the tag check with data transmission, since there is no block to select.
  - Access time estimates for 90 nm using the CACTI 4.0 model (figure below): median access-time ratios relative to direct-mapped caches are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches.

  [Figure: CACTI 4.0 access-time estimates, 64-byte blocks, 90 nm. Access time (ns) by cache size and associativity:

    Cache size   1-way   2-way   4-way   8-way
    16 KB        0.55    0.73    0.82    0.81
    32 KB        0.59    0.75    0.80    0.84
    64 KB        0.67    0.86    0.89    0.96
    128 KB       0.76    1.01    1.03    1.01
    256 KB       0.87    1.23    1.20    1.21
    512 KB       1.02    1.64    1.64    1.59
    1 MB         1.32    2.12    2.10    2.06

  Axes: Cache size vs. Access time (ns).]

• 2. Fast Hit Times via Way Prediction
  - How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
  - Way prediction: keep extra bits in the cache that predict the way (the block within the set) of the next cache access. The multiplexor is set early to select the desired block, so only one tag comparison is performed that clock cycle, in parallel with reading the cache data. On a mispredict, the other blocks are checked for matches in the next clock cycle (see the sketch below).
  - Accuracy is about 85%.
  - Drawback: the CPU pipeline is harder to design if a hit can take 1 or 2 cycles, so way prediction is used more for instruction caches than for data caches.
  - Timing: hit time on a correct prediction, way-miss hit time on a mispredicted way, miss penalty on a real miss.
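
  To make that timing concrete, here is a toy model in C. It is a sketch only: the 2-way geometry, 64-byte blocks, per-set prediction bits, and the fill policy are assumptions for illustration, not details from the slide.

    /* Hypothetical model of way-prediction timing: a predicted-way hit costs 1 cycle,
       a way-miss that still hits costs 2, and a real miss pays the miss penalty. */
    #include <stdint.h>
    #include <stdbool.h>

    #define SETS 256
    #define WAYS 2

    typedef struct {
        uint32_t tag[WAYS];
        bool     valid[WAYS];
        int      predicted_way;   /* extra prediction bits kept per set */
    } CacheSet;

    static CacheSet cache[SETS];

    int access_cycles(uint32_t addr, int miss_penalty) {
        uint32_t set = (addr >> 6) % SETS;     /* 64-byte blocks assumed */
        uint32_t tag = addr >> 14;             /* remaining upper bits   */
        CacheSet *s = &cache[set];

        int w = s->predicted_way;
        if (s->valid[w] && s->tag[w] == tag)
            return 1;                          /* fast hit: only one tag compare */

        for (int i = 0; i < WAYS; i++) {       /* next cycle: check the other way(s) */
            if (i != w && s->valid[i] && s->tag[i] == tag) {
                s->predicted_way = i;          /* update the prediction */
                return 2;                      /* way-miss, but still a cache hit */
            }
        }
        s->tag[w] = tag;                       /* simplistic fill into the predicted way */
        s->valid[w] = true;
        return 2 + miss_penalty;               /* real miss */
    }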

• 3. Fast Hit Times via Trace Cache (Pentium 4 only; and for the last time?)
  - Goals: find more instruction-level parallelism and avoid repeatedly translating from x86 instructions to micro-ops.
  - The Pentium 4 trace cache holds dynamic traces of the executed instructions rather than static sequences of instructions as determined by their layout in memory, with a built-in branch predictor. It caches the micro-ops rather than x86 instructions; decoding/translating from x86 to micro-ops happens only on a trace cache miss.
  + Better utilizes long blocks (execution doesn't exit in the middle of a block or enter at a label in the middle of a block).
  - Complicated address mapping, since addresses are no longer aligned to power-of-2 multiples of the word size.
  - Instructions may appear multiple times in multiple dynamic traces due to different branch outcomes.

• 4. Increasing Cache Bandwidth by Pipelining
  - Pipeline the cache access to maintain bandwidth, at the cost of higher latency.
  - Instruction cache access pipeline stages: 1 for the Pentium, 2 for the Pentium Pro through Pentium III, 4 for the Pentium 4.
  - Downsides: a greater penalty on mispredicted branches, and more clock cycles between the issue of a load and the use of its data.

• 5. Increasing Cache Bandwidth: Non-Blocking Caches
  - A non-blocking (lockup-free) cache allows the data cache to continue supplying cache hits during a miss. It requires full/empty bits on registers or out-of-order execution, and multi-banked memories.
  - "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests.
  - "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses, but significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses; it also requires multiple memory banks.
  - The Pentium Pro allows 4 outstanding memory misses.

• Value of Hit Under Miss for SPEC (old data)
  - FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
  - Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
  - Configuration: 8 KB data cache, direct mapped, 32-byte blocks, 16-cycle miss penalty, SPEC92.
  - (Figure legend: going from the base blocking cache to hit under 1 miss, 2 misses, and 64 misses.)
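
  For reference, this effect can be expressed with the usual average-memory-access-time formula. The C sketch below uses hypothetical numbers (not the SPEC92 data above) only to show how overlapping part of each miss shrinks the effective miss penalty.

    /* AMAT = hit time + miss rate * effective miss penalty; a non-blocking cache
       lowers the *effective* penalty by overlapping part of each miss with work. */
    #include <stdio.h>

    double amat(double hit_time, double miss_rate,
                double miss_penalty, double overlap_fraction) {
        double effective_penalty = miss_penalty * (1.0 - overlap_fraction);
        return hit_time + miss_rate * effective_penalty;
    }

    int main(void) {
        /* hypothetical cache: 1-cycle hit, 5% miss rate, 16-cycle miss penalty */
        printf("blocking cache:      %.2f cycles\n", amat(1.0, 0.05, 16.0, 0.0));
        printf("hit under 1 miss:    %.2f cycles\n", amat(1.0, 0.05, 16.0, 0.3));
        printf("hit under 64 misses: %.2f cycles\n", amat(1.0, 0.05, 16.0, 0.6));
        return 0;
    }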

• 6. Increasing Cache Bandwidth via Multiple Banks
  - Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses; e.g., the T1 (Niagara) L2 has 4 banks.
  - Banking works best when accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system.
  - A simple mapping that works well is sequential interleaving: spread block addresses sequentially across banks. E.g., with 4 banks, bank 0 has all blocks whose address modulo 4 is 0, bank 1 has all blocks whose address modulo 4 is 1, and so on (see the sketch below).
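
  A minimal sketch of that mapping, assuming 64-byte blocks and 4 banks (both values are assumptions for illustration):

    #include <stdint.h>

    #define BLOCK_SIZE 64   /* bytes per cache block (assumption) */
    #define NUM_BANKS   4   /* e.g., four L2 banks as in the Sun T1 */

    /* Sequential interleaving: consecutive block addresses go to consecutive banks. */
    static inline unsigned bank_of(uint64_t byte_addr) {
        uint64_t block_addr = byte_addr / BLOCK_SIZE;
        return (unsigned)(block_addr % NUM_BANKS);
    }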

• 7. Reduce Miss Penalty: Early Restart and Critical Word First
  - Don't wait for the full block before restarting the CPU.
  - Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution. Because of spatial locality the CPU tends to want the next sequential word soon, so the benefit of early restart alone is unclear.
  - Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block are filled in.
  - With long blocks more popular today, critical word first is widely used.

• 8. Merging Write Buffer to Reduce Miss Penalty
  - A write buffer allows the processor to continue while waiting to write to memory.
  - If the buffer contains modified blocks, the addresses can be checked to see whether the address of the new data matches the address of a valid write-buffer entry; if so, the new data are merged into that entry (see the sketch below).
  - This effectively increases the block size of writes for a write-through cache when writes go to sequential words or bytes, since multiword writes are more efficient for memory.
  - The Sun T1 (Niagara) processor, among many others, uses write merging.
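
  A minimal sketch of the merge check, not taken from the slides: the buffer geometry and byte-valid bookkeeping are assumptions, and writes are assumed to be at most 8 bytes and not to cross a block boundary.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define BLOCK   64
    #define ENTRIES  4

    typedef struct {
        bool     valid;
        uint64_t block_addr;          /* which 64-byte block this entry covers */
        uint8_t  data[BLOCK];
        uint64_t byte_valid;          /* one bit per byte present in the block */
    } WBEntry;

    static WBEntry wb[ENTRIES];

    /* Returns true if the write was merged or buffered, false if the buffer is full. */
    bool write_buffer_insert(uint64_t addr, const uint8_t *src, int len) {
        uint64_t blk = addr / BLOCK, off = addr % BLOCK;
        for (int i = 0; i < ENTRIES; i++)
            if (wb[i].valid && wb[i].block_addr == blk) {      /* merge into existing entry */
                memcpy(&wb[i].data[off], src, (size_t)len);
                wb[i].byte_valid |= ((1ULL << len) - 1) << off;
                return true;
            }
        for (int i = 0; i < ENTRIES; i++)
            if (!wb[i].valid) {                                /* allocate a new entry */
                wb[i].valid = true;
                wb[i].block_addr = blk;
                memcpy(&wb[i].data[off], src, (size_t)len);
                wb[i].byte_valid = ((1ULL << len) - 1) << off;
                return true;
            }
        return false;                                          /* buffer full: processor stalls */
    }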

• CS252: Administrivia
  - Welcome back!
  - Quiz: average, median, standard deviation; graded quizzes returned at the end of lecture.
  - Advanced Memory Hierarchy (Ch 5) this week; handout: Chapter 5.
  - Guest lecturer on the Sun T1/Niagara Monday 4/10.
  - Storage (Ch 6) Wed 4/12 and Mon 4/17.
  - Project update meetings Wednesday 4/19.
  - Quiz 2 Monday 4/24, 5-8 PM.
  - Bad Career / Bad Talk advice; project presentations 5/1 (all day); project posters 5/3; final papers due 5/5.

• Read and Answer for Wednesday
  - "Virtual Machine Monitors: Current Technology and Future Trends," Mendel Rosenblum and Tal Garfinkel, IEEE Computer, May 2005.
  - Submit a write-up of the paper Tuesday evening; be sure to comment on:
    - How old are VMs? Why did the idea lie fallow for so long?
    - Why do the authors say this old technology got hot again?
    - What are the 3 main implementation issues, and their key challenges, techniques, and future support opportunities?
    - What important role do you think VMs will play in future information technology? Why?

• 9. Reducing Misses by Compiler Optimizations
  - McFarling [1989] reduced cache misses by 75% in software on an 8 KB direct-mapped cache with 4-byte blocks.
  - Instructions: reorder procedures in memory so as to reduce conflict misses, using profiling to look at conflicts (with tools they developed).
  - Data:
    - Merging arrays: improve spatial locality with a single array of compound elements instead of 2 separate arrays.
    - Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
    - Loop fusion: combine 2 independent loops that have the same looping structure and some overlapping variables.
    - Blocking: improve temporal locality by accessing blocks of data repeatedly instead of walking down whole columns or rows.

• Merging Arrays Example

    /* Before: 2 sequential arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: 1 array of structures */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

  Reduces conflicts between val and key and improves spatial locality.

• Loop Interchange Example

    /* Before */
    for (k = 0; k < 100; k = k+1)
        for (j = 0; j < 100; j = j+1)
            for (i = 0; i < 5000; i = i+1)
                x[i][j] = 2 * x[i][j];

    /* After */
    for (k = 0; k < 100; k = k+1)
        for (i = 0; i < 5000; i = i+1)
            for (j = 0; j < 100; j = j+1)
                x[i][j] = 2 * x[i][j];

  Sequential accesses instead of striding through memory every 100 words; improved spatial locality.

• Loop Fusion Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            d[i][j] = a[i][j] + c[i][j];

    /* After */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

  Two misses per access to a and c become one miss per access; improved temporal locality.

• Blocking Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            r = 0;
            for (k = 0; k < N; k = k+1)
                r = r + y[i][k]*z[k][j];
            x[i][j] = r;
        }

  - Two inner loops: read all N x N elements of z[], read N elements of one row of y[] repeatedly, write N elements of one row of x[].
  - Capacity misses are a function of N and the cache size: 2N^3 + N^2 words accessed (assuming no conflict misses; otherwise much worse).
  - Idea: compute on a B x B submatrix that fits in the cache.

• Blocking Example (continued)

    /* After */
    for (jj = 0; jj < N; jj = jj+B)
        for (kk = 0; kk < N; kk = kk+B)
            for (i = 0; i < N; i = i+1)
                for (j = jj; j < min(jj+B-1,N); j = j+1) {
                    r = 0;
                    for (k = kk; k < min(kk+B-1,N); k = k+1)
                        r = r + y[i][k]*z[k][j];
                    x[i][j] = x[i][j] + r;
                }

  - B is called the blocking factor.
  - Capacity misses drop from 2N^3 + N^2 to roughly 2N^3/B + N^2 words accessed.
  - Conflict misses too? (See the self-contained version below and the next slide.)
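
  For completeness, here is a self-contained, compilable version of the blocked loop above. N, B, the MIN macro, and the choice of MIN(jj+B, N) as the bound (so each block covers a full B columns) are assumptions added so the fragment stands alone; they are not text from the slide.

    /* Blocked (tiled) matrix multiply: x += y * z, computed B x B blocks at a time. */
    #define N 512
    #define B 32                          /* blocking factor: a B x B tile fits in cache */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    double x[N][N], y[N][N], z[N][N];

    void blocked_matmul(void) {
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = 0; i < N; i++)
                    for (int j = jj; j < MIN(jj + B, N); j++) {
                        double r = 0.0;
                        for (int k = kk; k < MIN(kk + B, N); k++)
                            r += y[i][k] * z[k][j];
                        x[i][j] += r;     /* accumulate partial products across kk blocks */
                    }
    }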

• Reducing Conflict Misses by Blocking
  - Conflict misses in caches that are not fully associative, as a function of blocking size.
  - Lam et al. [1991]: a blocking factor of 24 had one-fifth the misses of a blocking factor of 48, even though both blocks fit in the cache.

• Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

• 10. Reducing Misses by Hardware Prefetching of Instructions and Data
  - Prefetching relies on having extra memory bandwidth that can be used without penalty.
  - Instruction prefetching: typically the CPU fetches 2 blocks on a miss, the requested block and the next consecutive block. The requested block is placed in the instruction cache when it returns; the prefetched block is placed in an instruction stream buffer.
  - Data prefetching: the Pentium 4 can prefetch data into the L2 cache from up to 8 streams in 8 different 4 KB pages. Prefetching is invoked on 2 successive L2 cache misses to a page if the distance between those cache blocks is less than 256 bytes.

  [Figure: performance improvement from hardware prefetching on the Pentium 4. Source: Ronak Singhal (Intel), "Intel Pentium 4 Processor on 90nm Technology," HOT CHIPS 16, Memorial Auditorium, Stanford University, August 22-24, 2004.

    gap (SPECint2000)      1.16
    mcf (SPECint2000)      1.45
    fma3d (SPECfp2000)     1.18
    wupwise (SPECfp2000)   1.20
    galgel (SPECfp2000)    1.21
    facerec (SPECfp2000)   1.26
    swim (SPECfp2000)      1.29
    applu (SPECfp2000)     1.32
    lucas (SPECfp2000)     1.40
    mgrid (SPECfp2000)     1.49
    equake (SPECfp2000)    1.97

  Y-axis: Performance Improvement.]

• 11. Reducing Misses by Software Prefetching of Data
  - Data prefetch options: load the data into a register (HP PA-RISC loads) or cache prefetch, loading into the cache (MIPS IV, PowerPC, SPARC V9).
  - Special prefetch instructions cannot cause faults; they are a form of speculative execution.
  - Issuing prefetch instructions takes time: is the cost of issuing prefetches less than the savings from reduced misses? Wider superscalar issue reduces the difficulty of finding issue bandwidth for them (see the sketch below).
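
  As a concrete, minimal illustration of compiler-inserted cache prefetching, the sketch below uses the GCC/Clang __builtin_prefetch intrinsic. The prefetch distance PF_DIST and the array-summing kernel are arbitrary choices for illustration, not taken from the slide.

    /* Software prefetching sketch: fetch a[i + PF_DIST] while working on a[i]. */
    #define PF_DIST 16   /* prefetch this many iterations ahead (tuning parameter, assumed) */

    double sum_with_prefetch(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            if (i + PF_DIST < n)
                __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 1 /* low temporal reuse */);
            s += a[i];   /* prefetches cannot fault, so a dropped prefetch only costs time */
        }
        return s;
    }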

• Compiler Optimization vs. Memory Hierarchy Search
  - Compilers try to figure out memory hierarchy optimizations statically.
  - New approach: auto-tuners first run variations of the program on the target computer to find the best combination of optimizations (blocking, padding, ...) and algorithms, then produce C code to be compiled for that computer.
  - Auto-tuners are typically targeted to a numerical method, e.g., PHiPAC (BLAS), Atlas (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW.

• Sparse Matrix: Search for Blocking (for a finite element problem) [Im, Yelick, Vuduc, 2005]

• Best Sparse Blocking for 8 Computers
  - All possible column block sizes were selected by at least one of the 8 computers; how could a compiler know which to pick?
  [Figure: chosen register block size (row block size r x column block size c, each in {1, 2, 4, 8}) per machine: Intel Pentium M; Sun Ultra 2, Sun Ultra 3, AMD Opteron; IBM Power 4, Intel/HP Itanium; Intel/HP Itanium 2; IBM Power 3.]

• Summary of the 11 Advanced Cache Optimizations

    Technique                                    Improves                   HW cost/complexity     Comment
    Small and simple caches                      Hit time                   0                      Trivial; widely used
    Way-predicting caches                        Hit time                   1                      Used in Pentium 4
    Trace caches                                 Hit time                   3                      Used in Pentium 4
    Pipelined cache access                       Bandwidth                  1                      Widely used
    Nonblocking caches                           Bandwidth, miss penalty    3                      Widely used
    Banked caches                                Bandwidth                  1                      Used in L2 of Opteron and Niagara
    Critical word first and early restart        Miss penalty               2                      Widely used
    Merging write buffer                         Miss penalty               1                      Widely used with write through
    Compiler techniques to reduce cache misses   Miss rate                  0                      Software is a challenge; some computers have a compiler option
    Hardware prefetching of instructions/data    Miss penalty, miss rate    2 instr., 3 data       Many prefetch instructions; AMD Opteron prefetches data
    Compiler-controlled prefetching              Miss penalty, miss rate    3                      Needs a nonblocking cache; in many CPUs

• Main Memory Background
  - Performance of main memory:
    - Latency determines the cache miss penalty. Access time: time between a request and the word arriving. Cycle time: minimum time between requests.
    - Bandwidth matters for I/O and for large-block (L2) miss penalties.
  - Main memory is DRAM (dynamic random access memory): "dynamic" because it must be refreshed periodically (every 8 ms, about 1% of the time). Addresses are divided into 2 halves (memory as a 2-D matrix): RAS (row access strobe) and CAS (column access strobe).
  - Caches use SRAM (static random access memory): no refresh (6 transistors/bit vs. 1 transistor/bit).
  - Size: DRAM/SRAM is roughly 4-8x; cost and cycle time: SRAM/DRAM is roughly 8-16x.

• Main Memory Deep Background
  - Where do "out-of-core," "in-core," and "core dump" come from? Core memory.
  - Core memory was non-volatile and magnetic, with an access time of 750 ns and a cycle time of 1500-3000 ns.
  - It lost out to the 4 Kbit DRAM (today we use 512 Mbit DRAMs).

• DRAM Logical Organization (4 Mbit)
  [Figure: the square root of the number of bits is addressed per RAS/CAS; address lines A0-A10 (11 bits) feed a column decoder, sense amps & I/O, and a 2,048 x 2,048 memory array with word lines, storage cells, and data pins D and Q.]

• Quest for DRAM Performance
  1. Fast page mode: add timing signals that allow repeated accesses to the row buffer without another row access time. Such a buffer comes naturally, since each array buffers 1,024 to 2,048 bits per access.
  2. Synchronous DRAM (SDRAM): add a clock signal to the DRAM interface so that repeated transfers do not bear the overhead of synchronizing with the DRAM controller.
  3. Double data rate (DDR SDRAM): transfer data on both the rising and falling edges of the DRAM clock signal, doubling the peak data rate. DDR2 lowers power by dropping the voltage from 2.5 V to 1.8 V and offers higher clock rates, up to 400 MHz; DDR3 drops to 1.5 V with clock rates up to 800 MHz.
  - These are improvements in bandwidth, not latency.

• DRAM Names Are Based on Peak Chip Transfers/Second; DIMM Names on Peak DIMM MBytes/Second

    Standard   Clock rate (MHz)   M transfers/s   DRAM name   MB/s per DIMM   DIMM name
    DDR        133                266             DDR266      2128            PC2100
    DDR        150                300             DDR300      2400            PC2400
    DDR        200                400             DDR400      3200            PC3200
    DDR2       266                533             DDR2-533    4264            PC4300
    DDR2       333                667             DDR2-667    5336            PC5300
    DDR2       400                800             DDR2-800    6400            PC6400
    DDR3       533                1066            DDR3-1066   8528            PC8500
    DDR3       666                1333            DDR3-1333   10664           PC10700
    DDR3       800                1600            DDR3-1600   12800           PC12800
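
  The naming rules amount to two multiplications. The short C sketch below assumes a double-data-rate part (2 transfers per clock) and an 8-byte-wide DIMM data path, and works through the first table row.

    #include <stdio.h>

    int main(void) {
        int clock_mhz = 133;                                /* DDR266 example row        */
        int transfers_per_sec_M = clock_mhz * 2;            /* both clock edges -> 266   */
        int dimm_mbytes_per_sec = transfers_per_sec_M * 8;  /* 8 bytes/transfer -> 2128  */
        printf("DDR%d chip -> PC%d DIMM (marketing rounds 2128 to PC2100)\n",
               transfers_per_sec_M, dimm_mbytes_per_sec);
        return 0;
    }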

• Need for Error Correction!
  - Motivation: failures per unit time are proportional to the number of bits, and as DRAM cells shrink they become more vulnerable. We went through a period in which the failure rate was low enough that people didn't use error correction, but DRAM banks are too large now; servers have always had corrected memory systems.
  - Basic idea: add redundancy through parity bits. A common configuration for random error correction is SEC-DED (single error correct, double error detect); one example is 64 data bits plus 8 check bits, an 11% overhead (see the sketch below).
  - We really want to handle failures of physical components as well: the organization is multiple DRAMs per DIMM and multiple DIMMs, and we want to recover from a failed DRAM and a failed DIMM. "Chipkill" handles failures the width of a single DRAM chip.
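
  As a sanity check on the 64 + 8 configuration, here is a minimal sketch of the standard Hamming-bound calculation (an illustration, not code from the lecture): 2^k >= m + k + 1 gives k = 7 check bits for m = 64 data bits, and one extra overall parity bit adds double-error detection, for 8 total.

    #include <stdio.h>

    int sec_ded_check_bits(int data_bits) {
        int k = 0;
        while ((1 << k) < data_bits + k + 1)   /* smallest k with 2^k >= m + k + 1 */
            k++;
        return k + 1;                          /* +1 overall parity bit for DED */
    }

    int main(void) {
        int m = 64, c = sec_ded_check_bits(m);
        printf("%d data bits -> %d check bits (%.0f%% overhead)\n",
               m, c, 100.0 * c / (m + c));     /* prints: 64 data bits -> 8 check bits (11% overhead) */
        return 0;
    }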

• Introduction to Virtual Machines
  - VMs were developed in the late 1960s and remained important in mainframe computing over the years, but were largely ignored in the single-user computers of the 1980s and 1990s.
  - They recently regained popularity due to: the increasing importance of isolation and security in modern systems; failures in the security and reliability of standard operating systems; the sharing of a single computer among many unrelated users; and dramatic increases in the raw speed of processors, which make the overhead of VMs more acceptable.

• What Is a Virtual Machine (VM)?
  - The broadest definition includes all emulation methods that provide a standard software interface, such as the Java VM.
  - (Operating) system virtual machines provide a complete system-level environment at the binary ISA level; here we assume the ISA always matches the native hardware ISA. E.g., IBM VM/370, VMware ESX Server, and Xen.
  - They present the illusion that VM users have an entire computer to themselves, including a copy of the OS. A single computer runs multiple VMs and can support multiple, different OSes: on a conventional platform a single OS owns all the HW resources, but with VMs multiple OSes share the HW resources.
  - The underlying HW platform is called the host, and its resources are shared among the guest VMs.

• Virtual Machine Monitors (VMMs)
  - A virtual machine monitor (VMM), or hypervisor, is the software that supports VMs.
  - The VMM determines how to map virtual resources to physical resources; a physical resource may be time-shared, partitioned, or emulated in software.
  - A VMM is much smaller than a traditional OS: the isolation portion of a VMM is on the order of 10,000 lines of code.

• VMM Overhead?
  - It depends on the workload.
  - User-level, processor-bound programs (e.g., SPEC) have essentially zero virtualization overhead: they run at native speed since the OS is rarely invoked.
  - I/O-intensive workloads are OS-intensive: they execute many system calls and privileged instructions, which can result in high virtualization overhead. For system VMs, the goal of the architecture and the VMM is to run almost all instructions directly on the native hardware.
  - If an I/O-intensive workload is also I/O-bound, processor utilization is low (the program is mostly waiting for I/O), so the cost of processor virtualization can be hidden and virtualization overhead is low.

• Other Uses of VMs
  - The focus here is on protection; there are 2 other commercially important uses of VMs.
  - Managing software: VMs provide an abstraction that can run a complete SW stack, even including old OSes like DOS. A typical deployment has some VMs running legacy OSes, many running the current stable OS release, and a few testing the next OS release.
  - Managing hardware: VMs allow separate SW stacks to run independently yet share HW, thereby consolidating the number of servers. (Some sites run each application with a compatible version of the OS on a separate computer, since the separation helps dependability.) A running VM can also be migrated to a different computer, either to balance load or to evacuate failing HW.

• Requirements of a Virtual Machine Monitor
  - A VM monitor presents a SW interface to guest software, isolates the state of guests from each other, and protects itself from guest software (including guest OSes).
  - Guest software should behave on a VM exactly as if it were running on the native HW, except for performance-related behavior or limitations of fixed resources shared by multiple VMs.
  - Guest software should not be able to change the allocation of real system resources directly. Hence the VMM must control everything, even though the currently running guest VM and OS are temporarily using it: access to privileged state, address translation, I/O, exceptions and interrupts, and so on.

• Requirements of a Virtual Machine Monitor (continued)
  - The VMM must run at a higher privilege level than the guest VMs, which generally run in user mode; execution of privileged instructions is handled by the VMM.
  - E.g., on a timer interrupt the VMM suspends the currently running guest VM, saves its state, handles the interrupt, determines which guest VM to run next, and then loads that VM's state. Guest VMs that rely on a timer interrupt are provided with a virtual timer and an emulated timer interrupt by the VMM (see the sketch below).
  - The requirements of system virtual machines are the same as those of paged virtual memory: at least 2 processor modes (system and user); a privileged subset of instructions available only in system mode, which trap if executed in user mode; and all system resources controllable only via these instructions.
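
  The timer example can be sketched in C. Everything here is hypothetical (the types, field names, and division of labor are invented for illustration, not a real VMM API); the point is only the trap-and-emulate flow the slide describes.

    #include <stdint.h>

    typedef struct {
        uint64_t saved_regs[32];     /* guest state saved when it traps         */
        uint64_t virtual_timer;      /* emulated timer value the guest OS sees  */
        int      pending_timer_irq;  /* emulated interrupt to deliver to guest  */
    } GuestVM;

    /* Called by the (hypothetical) VMM when the real timer interrupt fires. */
    void vmm_on_timer_interrupt(GuestVM *current, GuestVM *next) {
        /* 1. suspend the running guest VM and save its state (elided)          */
        /* 2. handle the real interrupt inside the VMM                          */
        current->virtual_timer++;        /* advance the guest's virtual timer      */
        current->pending_timer_irq = 1;  /* emulate a timer interrupt for the guest */
        /* 3. decide which guest VM to run next and load its state (elided)     */
        (void)next;
    }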

• ISA Support for Virtual Machines
  - If VMs are planned for during the design of an ISA, it is easy to reduce both the number of instructions that must be executed by a VMM and how long it takes to emulate them. Since VMs have been considered for desktop and PC-server applications only recently, most ISAs were created without virtualization in mind, including the 80x86 and most RISC architectures.
  - The VMM must ensure that the guest system only interacts with virtual resources, so a conventional guest OS runs as a user-mode program on top of the VMM.
  - If the guest OS attempts to access or modify information related to HW resources via a privileged instruction (for example, reading or writing the page table pointer), it will trap to the VMM. If not, the VMM must intercept the instruction and support a virtual version of the sensitive information as the guest OS expects (examples follow).

• Impact of VMs on Virtual Memory
  - How is virtual memory virtualized when each guest OS in every VM manages its own set of page tables?
  - The VMM separates real and physical memory, making real memory a separate, intermediate level between virtual memory and physical memory (some use the terms virtual memory, physical memory, and machine memory for the 3 levels). The guest OS maps virtual memory to real memory via its page tables, and the VMM's page tables map real memory to physical memory.
  - Rather than pay an extra level of indirection on every memory access, the VMM maintains a shadow page table that maps directly from the guest virtual address space to the physical address space of the HW (see the sketch below). The VMM must therefore trap any attempt by the guest OS to change its page table or to access the page table pointer.
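
  A conceptual sketch of the three levels and the shadow table, using flat arrays as stand-in "page tables" (a deliberate oversimplification; real page tables are multi-level and the trapping is hardware-assisted):

    #include <stdint.h>
    #include <stddef.h>

    #define NPAGES 1024
    typedef uint32_t pfn_t;                /* page frame number */

    pfn_t guest_pt[NPAGES];                /* guest OS:  virtual page -> real page     */
    pfn_t vmm_pt[NPAGES];                  /* VMM:       real page    -> physical page */
    pfn_t shadow_pt[NPAGES];               /* VMM:       virtual page -> physical page */

    /* Slow path: two levels of indirection on every translation. */
    pfn_t translate_two_level(size_t vpage) {
        return vmm_pt[guest_pt[vpage]];
    }

    /* The VMM rebuilds the shadow entry whenever it traps a guest page-table write,
       so the hardware can walk shadow_pt directly. */
    void vmm_trap_guest_pt_write(size_t vpage, pfn_t real_page) {
        guest_pt[vpage]  = real_page;            /* apply the guest's update   */
        shadow_pt[vpage] = vmm_pt[real_page];    /* keep the shadow consistent */
    }

    pfn_t translate_shadow(size_t vpage) {
        return shadow_pt[vpage];                 /* single lookup used by HW   */
    }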

• ISA Support for VMs and Virtual Memory
  - The IBM 370 architecture added an additional level of indirection that is managed by the VMM; the guest OS keeps its page tables as before, so shadow page tables are unnecessary.
  - To virtualize a software-managed TLB, the VMM manages the real TLB and keeps a copy of the contents of each guest VM's TLB; any instruction that accesses the TLB must trap. TLBs with process ID tags can hold a mix of entries from different VMs and the VMM, thereby avoiding a TLB flush on a VM switch.

• Impact of I/O on Virtual Machines
  - I/O is the most difficult part of virtualization: there is an increasing number of I/O devices attached to the computer and increasing diversity of device types; a real device must be shared among multiple VMs; and the myriad device drivers that are required must be supported, especially if different guest OSes run on the same VM system.
  - Approach: give each VM generic versions of each type of I/O device driver, and let the VMM handle the real I/O.
  - The method for mapping a virtual to a physical I/O device depends on the type of device: disks are partitioned by the VMM to create virtual disks for the guest VMs; network interfaces are shared between VMs in short time slices, with the VMM tracking messages for virtual network addresses to ensure that guest VMs receive only their own messages.

• Example: Xen VM
  - Xen is an open-source system VMM for the 80x86 ISA; the project started at the University of Cambridge under a GNU license model.
  - The original vision of a VM is to run an unmodified OS, but significant effort is wasted just keeping the guest OS happy. Paravirtualization instead makes small modifications to the guest OS to simplify virtualization.
  - Three examples of paravirtualization in Xen:
    1. To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of the address space of each VM.
    2. The guest OS is allowed to allocate pages itself; Xen just checks that it doesn't violate the protection restrictions.
    3. To protect the guest OS from user programs in the VM, Xen takes advantage of the 4 protection levels available in the 80x86. Most 80x86 OSes keep everything at privilege level 0 or 3; the Xen VMM runs at the highest privilege level (0), the guest OS at the next level (1), and applications at the lowest privilege level (3).

• Xen Changes for Paravirtualization
  - The port of Linux to Xen changed about 3,000 lines, or roughly 1% of the 80x86-specific code; it does not affect the application binary interface of the guest OS.
  - OSes supported in Xen 2.0 (http://wiki.xensource.com/xenwiki/OSCompatibility):

    OS           Runs as host OS   Runs as guest OS
    Linux 2.4    Yes               Yes
    Linux 2.6    Yes               Yes
    NetBSD 2.0   No                Yes
    NetBSD 3.0   Yes               Yes
    Plan 9       No                Yes
    FreeBSD 5    No                Yes

• Xen and I/O
  - To simplify I/O, privileged VMs called driver domains are assigned to each hardware I/O device (Xen jargon: domains = virtual machines).
  - Driver domains run the physical device drivers, although interrupts are still handled by the VMM before being sent to the appropriate driver domain.
  - Regular VMs (guest domains) run simple virtual device drivers that communicate with the physical device drivers in the driver domains over a channel to access physical I/O hardware.
  - Data are sent between guest and driver domains by page remapping.

• Xen Performance
  - Performance relative to native Linux for Xen on 6 benchmarks, as measured by the Xen developers (data below).
  - (Recall the VMM overhead slide: are these user-level processor-bound programs, I/O-intensive workloads, or I/O-bound and I/O-intensive workloads?)

  [Figure data: performance relative to native Linux (native = 1.00).

    Benchmark              Linux (native)   Xen (VM)   VMware Workstation 3.2   User Mode Linux
    SPEC INT2000 (score)   1.00             1.00       0.98                     0.97
    Linux build time       1.00             0.97       0.79                     0.49
    OSDB-IR (tup/s)        1.00             0.92       0.47                     0.38
    OSDB-OLTP (tup/s)      1.00             0.95       0.12                     0.18
    dbench (score)         1.00             0.96       0.74                     0.27
    SPEC WEB99 (score)     1.00             0.99       0.29                     0.33
  ]

• Xen Performance, Part II
  - A subsequent study noticed that the Xen experiments were based on a single Ethernet network interface card (NIC), and that the single NIC was a performance bottleneck.

  [Figure data: web-server receive throughput (Mbits/sec) vs. number of network interface cards.

    NICs   Linux   Xen privileged driver VM ("driver domain")   Xen guest VM + driver VM
    1       942     942                                          849
    2      1882    1878                                          849
    3      2462    1539                                          849
    4      2446    1593                                          849
  ]


• Xen Performance, Part III
  - More than 2x the instructions for the guest VM + driver VM configuration.
  - More than 4x the L2 cache misses.
  - 12x-24x the data TLB misses.
  - (Relative event counts below.)

  [Figure data: hardware event counts relative to the Xen privileged driver VM (= 1.00), web server workload.

    Event          Linux (1 CPU)   Xen privileged driver VM (1 CPU)   Xen guest VM + driver VM (1 CPU)   Xen guest VM + driver VM (2 CPUs)
    Instructions   0.86            1.00                               2.26                               2.30
    L2 misses      1.00            1.00                               4.32                               3.27
    I-TLB misses   0.62            1.00                               1.74                               1.77
    D-TLB misses   0.09            1.00                               1.88                               2.08
  ]

• Xen Performance, Part IV
  1. More than 2x the instructions: page remapping and page transfer between the driver and guest VMs, plus communication between the 2 VMs over a channel.
  2. 4x the L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory. Since Xen does not support gather DMA in its virtual network interface, it cannot do true zero-copy in the guest VM.
  3. 12x-24x the data TLB misses, due to 2 Linux optimizations that Xen lacks:
     - Superpages for part of the Linux kernel space: a 4 MB page lowers TLB misses versus using 1,024 4 KB pages. Not in Xen.
     - PTEs marked global are not flushed on a context switch, and Linux uses them for its kernel space. Not in Xen.
  - Future Xen may address 2 and 3, but is 1 inherent?

• And in Conclusion [1/2]
  - The memory wall inspires optimizations, since so much performance is lost there.
  - Reducing hit time: small and simple caches, way prediction, trace caches.
  - Increasing cache bandwidth: pipelined caches, multibanked caches, nonblocking caches.
  - Reducing miss penalty: critical word first, merging write buffers.
  - Reducing miss rate: compiler optimizations.
  - Reducing miss penalty or miss rate via parallelism: hardware prefetching, compiler prefetching.
  - Auto-tuners: will search replace static compilation for exploring the optimization space?
  - DRAM: continuing bandwidth innovations: fast page mode, synchronous DRAM, double data rate.

• And in Conclusion [2/2]
  - A VM monitor presents a SW interface to guest software, isolates the state of guests, and protects itself from guest software (including guest OSes).
  - Virtual machine revival: overcome the security flaws of large OSes; manage software and manage hardware; processor performance is no longer the highest priority.
  - Virtualization poses challenges for the processor, virtual memory, and I/O; paravirtualization copes with those difficulties, with Xen as an example VMM using paravirtualization.
  - 2005 performance on non-I/O-bound, I/O-intensive apps: 80% of native Linux without a driver VM, 34% with a driver VM.

Speaker notes:
  - Processor-DRAM gap chart: the y-axis is performance, the x-axis is time (latency). Note that the x86 didn't have an on-chip cache until 1989.
  - Pipeline question (from an earlier lecture): the answer is 3 stages between a branch and the new instruction fetch and 2 stages between a load and its use (even though, looking at the inserted stages, it would appear to be 3 for the load and 2 for the branch). Reasons: (1) for a load, the TC stage just does the tag check and the data are available after DS, so the data are supplied and forwarded, restarting the pipeline on a data cache miss; (2) the EX phase still does the address calculation even though a phase was added, presumably because, wanting a fast clock cycle, the designers didn't want to burden the RF phase with both reading registers and testing for zero, so the test moved back one phase.
  - The administrivia slide is for the 3-minute class administrative matters; make sure Handout #1 is updated so it is consistent with that slide.
  - Auto-tuners, a longer list: PHiPAC (Portable High Performance ANSI C), dense linear algebra (Krste, Jim, Jeff Bilmes, and others at UCB); FFTW, Fastest Fourier Transform in the West (Matteo Frigo and Steve Johnson at MIT; Matteo is now at IBM); Atlas, dense linear algebra, now the "standard" for many BLAS implementations, used in Matlab for example (Jack Dongarra, Clint Whaley, et al.); Sparsity, sparse linear algebra (Eun-Jin Im and Kathy at UCB); Spiral, DSP algorithms including FFTs and other transforms (Markus Pueschel, Jose M. F. Moura, et al.); OSKI, sparse linear algebra (Rich Vuduc, Jim and Kathy, from the Bebop project at UCB). In addition, groups at Rice, USC, UIUC, Cornell, UT Austin, UCB (Titanium), LLNL, and others are working on compilers that include an auto-tuning (search-based) optimization phase. Both the Bebop group and the Atlas group have done work on automatic tuning of collective communication routines for supercomputers/clusters, but this is ongoing.
  - Sparse matrix blocking slide (has animation): consider an experiment implementing SpMV in BCSR format for the matrix on the previous slide at all block sizes that divide 8x8, 16 implementations in all. These implementations fully unroll the innermost loop and use scalar replacement for the source and destination vectors. You might reasonably expect performance to increase relatively smoothly as r and c increase, but this is clearly not the case! Platform: 900 MHz Itanium 2, 3.6 Gflop/s peak, Intel v8.0 compiler. Good speedups (4x), but at an unexpected block size (4x2). Figure taken from the Im, Yelick, Vuduc IJHPCA 2005 paper.