Computer Architecture: Memory Hierarchy

Lynn Choi
Korea University


Page 1: Computer Architecture: Memory Hierarchy

Lynn Choi, Korea University

Page 2: Memory Hierarchy

Motivated by

Principle of locality

Speed vs. size vs. cost tradeoff

Locality principle

Temporal locality: a reference to the same location is likely to occur again soon

Example: loops, reuse of variables

Keep recently accessed data/instructions closer to the processor

Spatial Locality: nearby references are likely

Example: arrays, program code

Access a block of contiguous bytes

Speed vs. size tradeoff

Bigger memory is slower but cheaper per byte: SRAM - DRAM - Disk - Tape

Smaller memory is faster but more expensive per byte
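A minimal C sketch (illustrative, not from the slides) of both kinds of locality: the accumulator sum is reused on every iteration (temporal locality), while the row-major traversal touches contiguous bytes, so each fetched cache block is fully used (spatial locality).

    #include <stdio.h>

    #define N 1024

    /* Summing a matrix in row-major order: sum is reused every
     * iteration (temporal locality); a[i][j] walks contiguous
     * bytes, so each cache block brought in is fully consumed
     * before moving on (spatial locality). */
    int main(void) {
        static int a[N][N];
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)   /* stride-1 accesses */
                sum += a[i][j];
        printf("%ld\n", sum);
        return 0;
    }

Swapping the loop order to column-major would touch one word per cache block before moving on, losing the spatial locality benefit.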

Page 3: Memory Wall

[Figure: CPU vs. DRAM performance on a log scale (1 to 100,000), years 1980 to 2000]

CPU performance: 2x every 18 months

DRAM performance: 7% per year

Page 4: Levels of Memory Hierarchy

Level         Capacity     Unit Moved             Moved By (unit size)
Registers     100s of B    Instruction operands   Program/compiler (1 - 16B)
Cache         KBs - MBs    Cache line             H/W (16 - 512B)
Main Memory   GBs          Page                   OS (512B - 64MB)
Disk          100s of GB   File                   User (any size)
Network       Infinite     -                      -

Faster/smaller toward the registers; slower/larger toward the network.

Page 5: Cache

A small but fast memory located between the processor and main memory

Benefits

Reduce load latency

Reduce store latency

Reduce bus traffic (on-chip caches)

Cache Block Placement (Where to place)

Fully-associative cache

Direct-mapped cache

Set-associative cache

Page 6: Fully Associative Cache

32KB cache (SRAM): 32b words, 4-word (16B) cache blocks (cache lines), so 2^11 cache blocks numbered 0 to 2^11 - 1

Physical address space: 32-bit PA = 4GB (DRAM), i.e. 2^28 memory blocks of 16B, numbered 0 to 2^28 - 1

A memory block can be placed into any cache block location!

Page 7: Fully Associative Cache

[Diagram: 32KB data RAM and tag RAM, each with 2^11 entries (0 to 2^11 - 1). The 32-bit address splits into a tag (bits 31-4) and an offset (bits 3-0). The tag is compared in parallel (=) against every valid (V) tag RAM entry; on a match, Cache Hit is asserted and word & byte select drives the data out to the CPU.]

Advantages: 1. High hit rate 2. Fast

Disadvantages: 1. Very expensive
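A minimal C sketch of this lookup under the slide's parameters (2^11 lines, 16B blocks, 28-bit tags); the names and structure are illustrative, not from the slides. Hardware compares all tags in parallel; the loop below can only simulate that sequentially.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_LINES   2048   /* 2^11 cache blocks    */
    #define OFFSET_BITS 4      /* 16B block: bits 3..0 */

    typedef struct {
        bool     valid;        /* V bit                  */
        uint32_t tag;          /* PA bits 31..4          */
        uint8_t  data[16];     /* one 4-word cache block */
    } Line;

    static Line cache[NUM_LINES];

    /* Fully associative lookup: the block may live in any line,
     * so every tag is checked (parallel comparators in HW). */
    bool fa_lookup(uint32_t pa, uint8_t *out_byte) {
        uint32_t tag    = pa >> OFFSET_BITS;
        uint32_t offset = pa & ((1u << OFFSET_BITS) - 1);
        for (int i = 0; i < NUM_LINES; i++) {
            if (cache[i].valid && cache[i].tag == tag) {
                *out_byte = cache[i].data[offset];  /* word & byte select */
                return true;                        /* cache hit          */
            }
        }
        return false;                               /* miss               */
    }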

Page 8: Direct Mapped Cache

32KB cache (SRAM): 2^11 cache blocks numbered 0 to 2^11 - 1

Physical address space: 32-bit PA = 4GB (DRAM), memory blocks 0 to 2^28 - 1

A memory block can be placed into only a single cache block!

Memory blocks 0, 2^11, 2*2^11, ..., (2^17 - 1)*2^11 all map to cache block 0.

Page 9: Direct Mapped Cache

[Diagram: 32KB data RAM and tag RAM, each with 2^11 entries (0 to 2^11 - 1). The 32-bit address splits into a tag (bits 31-15), an index (bits 14-4), and an offset (bits 3-0). A decoder selects the indexed entry; its tag is compared (=) with the address tag and the valid (V) bit is checked. On a match, Cache Hit is asserted and word & byte select drives the data out to the CPU.]

Advantages: 1. Simple HW implementation 2. Fast

Disadvantages: 1. Low hit rate
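A small sketch of the address split above (17-bit tag, 11-bit index, 4-bit offset for this 32KB direct-mapped cache); the constants follow the slide, the rest is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 4    /* bits  3..0: byte within a 16B block */
    #define INDEX_BITS  11   /* bits 14..4: which of the 2^11 blocks */

    /* Split a 32-bit physical address for the slide's 32KB
     * direct-mapped cache; the remaining 17 bits form the tag. */
    int main(void) {
        uint32_t pa     = 0xDEADBEEF;   /* example address */
        uint32_t offset = pa & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (pa >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = pa >> (OFFSET_BITS + INDEX_BITS);
        printf("tag=%x index=%x offset=%x\n", tag, index, offset);
        return 0;
    }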

Page 10: Set Associative Cache

32KB cache (SRAM): 2^11 cache blocks, organized as 2 ways (Way 0, Way 1) of 2^10 sets each (sets 0 to 2^10 - 1)

Physical address space: 32-bit PA = 4GB (DRAM), memory blocks 0 to 2^28 - 1

In an M-way set associative cache, a memory block can be placed into any of M cache blocks!

Memory blocks 0, 2^10, 2*2^10, ..., (2^18 - 1)*2^10 all map to set 0.

Page 11: Set Associative Cache

[Diagram: 32KB data RAM and tag RAM, each organized as 2 ways of 2^10 entries (0 to 2^10 - 1). The 32-bit address splits into a tag (bits 31-14), an index (bits 13-4), and an offset (bits 3-0). A decoder selects the indexed set; the tags of both ways are compared (=) with the address tag and the valid (V) bits are checked. On a match, Cache Hit is asserted and a way mux (Wmux) plus word & byte select drive the data out to the CPU.]

Most caches are implemented as set-associative caches!
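A sketch of the 2-way lookup in the diagram (18-bit tag, 10-bit index, 4-bit offset); illustrative, and again sequential where the hardware compares both ways in parallel.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SETS    1024  /* 2^10 sets       */
    #define NUM_WAYS    2     /* Way 0 and Way 1 */
    #define OFFSET_BITS 4     /* bits  3..0      */
    #define INDEX_BITS  10    /* bits 13..4      */

    typedef struct { bool valid; uint32_t tag; uint8_t data[16]; } Line;
    static Line cache[NUM_SETS][NUM_WAYS];

    /* Set-associative lookup: the index picks one set, and only the
     * tags within that set are compared (parallel comparators in HW;
     * the way mux then selects the hitting way). */
    bool sa_lookup(uint32_t pa, uint8_t *out_byte) {
        uint32_t offset = pa & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (pa >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = pa >> (OFFSET_BITS + INDEX_BITS);
        for (int way = 0; way < NUM_WAYS; way++) {
            Line *l = &cache[index][way];
            if (l->valid && l->tag == tag) {
                *out_byte = l->data[offset];
                return true;
            }
        }
        return false;
    }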

Page 12: Block Allocation and Replacement

Block Allocation (When to place)

On a read miss, always allocate

On a write miss

Write-allocate: allocate a cache block on a write miss

No-write-allocate: write only to the next level, without allocating

Replacement Policy

LRU (least recently used)

Need to keep a timestamp per block

Expensive due to the global compare

Pseudo-LRU: approximate LRU using bit tags

Random

Just pick one and replace it

Pseudo-random: use a simple hash of the address

The replacement policy is critical for small caches
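A sketch of the timestamp approach to true LRU within one set, assuming an 8-way set; the global compare in lru_victim is exactly what makes true LRU expensive, as noted above. Names are illustrative.

    #include <stdint.h>

    #define NUM_WAYS 8

    static uint64_t last_used[NUM_WAYS];  /* timestamp per way   */
    static uint64_t now;                  /* global access clock */

    /* Record an access to `way`: stamp it with the current time. */
    void lru_touch(int way) { last_used[way] = ++now; }

    /* Pick a victim: the compare across all ways is the "global
     * compare" that grows costly with associativity. */
    int lru_victim(void) {
        int victim = 0;
        for (int way = 1; way < NUM_WAYS; way++)
            if (last_used[way] < last_used[victim])
                victim = way;
        return victim;
    }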

Page 13: Write Policy

Write-through

Write to the cache and to the next level of the memory hierarchy

Simple to design; memory stays consistent

Generates more write traffic

Usually paired with a no-write-allocate policy

Write-back

Only write to the cache (not to the lower level)

Update memory when a dirty block is replaced

Less write traffic; writes proceed independently of main memory speed

Complex to design; memory can be inconsistent

Usually paired with a write-allocate policy
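An illustrative sketch of the write-back, write-allocate store path, showing where the dirty bit comes in; the stubs stand in for traffic to the next level, and a write-through cache would instead forward every store downward and need no dirty bit.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { bool valid, dirty; uint32_t tag; uint8_t data[16]; } Line;

    /* Stubs standing in for bus traffic to the next level. */
    static void writeback_to_memory(Line *l) { (void)l; }
    static void fill_from_memory(Line *l, uint32_t tag) {
        l->valid = true; l->dirty = false; l->tag = tag;
    }

    /* Write-back + write-allocate store into an already-indexed line. */
    void store_byte(Line *l, uint32_t tag, uint32_t offset, uint8_t val) {
        if (!(l->valid && l->tag == tag)) {   /* write miss        */
            if (l->valid && l->dirty)
                writeback_to_memory(l);       /* evict dirty block */
            fill_from_memory(l, tag);         /* write-allocate    */
        }
        l->data[offset] = val;
        l->dirty = true;   /* memory is stale until this block is evicted */
    }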

Page 14: Review: 4 Questions for Cache Design

Q1: Where can a block be placed? (Block Placement)

Fully-associative, direct-mapped, set-associative

Q2: How is a block found in the cache? (Block Identification)

Tag/Index/Offset

Q3: Which block should be replaced on a miss? (Block Replacement)

Random, LRU

Q4: What happens on a write? (Write Policy)

Write-through vs. write-back, write-allocate vs. no-write-allocate

Page 15: 3+1 Types of Cache Misses

Cold-start misses (or compulsory misses)

The first access to a block can never hit in the cache

Occur even in an infinite cache

Capacity misses

If the memory blocks needed by a program exceed the cache size, capacity misses will occur due to cache block replacement

Occur even in a fully associative cache

Conflict misses (or collision misses)

In a direct-mapped or set-associative cache, too many blocks can map to the same set

Invalidation misses (or sharing misses)

Cache blocks can be invalidated due to coherence traffic

Page 16: Miss Rates (SPEC92)

[Figure: miss rate per type (0 to 0.14) vs. cache size (1KB to 128KB) for 1-way, 2-way, 4-way, and 8-way associativity, with the capacity and compulsory components marked]

Page 17: Cache Performance

Average access time = hit time + miss rate * miss penalty

Improving Cache Performance

Reduce hit time

Reduce miss rate

Reduce miss penalty

For an L1-only organization:

AMAT = Hit_Time + Miss_Rate * Miss_Penalty

For an L1/L2 organization:

AMAT = Hit_Time_L1 + Miss_Rate_L1 * (Hit_Time_L2 + Miss_Rate_L2 * Miss_Penalty_L2)

Design Issues

Size(L2) >> Size(L1)

Usually, Block_size(L2) > Block_size(L1)
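As a worked check of the two-level formula, with made-up but plausible numbers (1-cycle L1 hit, 5% L1 miss rate, 10-cycle L2 hit, 20% L2 local miss rate, 100-cycle memory penalty):

    AMAT = 1 + 0.05 * (10 + 0.20 * 100)
         = 1 + 0.05 * 30
         = 2.5 cycles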

Page 18: Cache Performance vs. Block Size

[Figure: three curves plotted against block size. Miss penalty (access time + transfer time) grows with block size; miss rate first falls and then rises for large blocks; the resulting average access time has a sweet spot at an intermediate block size.]

Page 19: Random Access Memory

Static vs. Dynamic Memory

Static RAM (at least 6 transistors per cell)

State is retained while power is supplied

Uses latched storage

Speed: access time 8-16x faster than DRAM

Used for registers, buffers, and on-chip and off-chip caches

Dynamic RAM (usually 1 transistor per cell)

State discharges over time

Uses dynamic storage of charge on a capacitor

Requires a refresh of each cell every few milliseconds

Density: 16x the density of SRAM at the same feature size

Multiplexed address lines: RAS, CAS

Complex interface logic due to refresh and precharge

Used for main memory

Page 20: SRAM Cell versus DRAM Cell

[Figure: SRAM cell and DRAM cell circuit diagrams]

Page 21: DRAM Refresh

Typical devices require each cell to be refreshed once every 4 to 64 ms.

During "suspended" operation, notebook computers use power mainly for DRAM refresh.

Page 22: RAM Structure

[Diagram: a memory array of 2^N words. The N-bit address is split: N-K row address bits drive the row decoder, which selects one row, and K column address bits drive the column decoder + multiplexer, which selects the M-bit data output.]

Page 23: DRAM Chip Internal Organization

[Figure: internal organization of a 64K x 1 bit DRAM]

Page 24: RAS/CAS Operation

Row Address Strobe (RAS), Column Address Strobe (CAS)

The n address bits are provided in two steps using n/2 pins, referenced to the falling edges of RAS_L and CAS_L

The traditional method of DRAM operation for 20 years

[Figure: DRAM read timing]

Page 25: DRAM Packaging

Typically, 8 or 16 memory chips are mounted on a tiny printed circuit board

For compatibility and easier upgrades

SIMM (Single Inline Memory Module)

Connectors on one side

32 pins for an 8b data bus

72 pins for a 32b data bus

DIMM (Dual Inline Memory Module)

For a 64b data bus (64, 72, 80)

84 pins on each side, 168 pins in total

Ex) Sixteen 16M x 4b DRAM chips constitute a 128MB DRAM module with a 64b data bus

SO-DIMM (Small Outline DIMM) for notebooks

72 pins for a 32b data bus, 144 pins for a 64b data bus

Page 26: Memory Performance Parameters

Access Time

The time elapsed from asserting an address to when the data is available on the output

Row access time: the time elapsed from asserting RAS to when the row is available in the row buffer

Column access time: the time elapsed from asserting CAS to when valid data is present on the output pins

Cycle Time

The minimum time between two different requests to memory

Latency

The time to access the first word of a block

Bandwidth

The transmission rate (bytes per second)

Page 27: Memory Organization

Assume 1 cycle to send the address, 15 cycles for each DRAM access, and 1 cycle to return a word of data. Fetching a 4-word cache block:

One-word-wide memory: 1 + 4 x (15 + 1) = 65 cycles

Wider memory: 1 + 2 x (15 + 1) = 33 cycles (2-word wide); 1 + 1 x (15 + 1) = 17 cycles (4-word wide)

Interleaved memory (4 banks accessed in parallel, one word returned per cycle): 1 + 15 + 4 = 20 cycles

Page 28: Pentium III Example

[Diagram: Pentium III processor containing the core pipeline, a 16KB I-cache, a 16KB D-cache, and a 256KB 8-way 2nd-level cache (800 MHz, 256b data path). The processor connects over the system bus FSB (133 MHz, 64b data, 32b address) to a host-to-PCI bridge, which links main memory over a multiplexed (RAS/CAS) memory bus and graphics over AGP.]

DIMMs: sixteen 16M x 4b 133 MHz SDRAM chips constitute a 128MB DRAM module with a 64b data bus

Page 29: Intel i7 System Architecture

Integrated memory controller

3 channels, 3.2GHz clock, 25.6 GB/s memory bandwidth (up to 24GB DDR3 SDRAM), 36-bit physical address

QuickPath Interconnect (QPI)

Point-to-point processor interconnect, replacing the front-side bus (FSB)

64b of data every two clock cycles, up to 25.6GB/s, which doubles the theoretical bandwidth of a 1600MHz FSB

Direct Media Interface (DMI)

The link between the Intel Northbridge and Intel Southbridge, sharing many characteristics with PCI Express

IOH (Northbridge)

ICH (Southbridge)

Page 30: Virtual Memory

Virtual memory

The programmer's view of memory (the virtual address space)

Physical memory (main memory)

The machine's physical memory (the physical address space)

Objectives

Large address spaces -> easy programming

Provide the illusion of an infinite amount of memory

Program code/data can exceed the main memory size

Processes are only partially resident in memory

Improve software portability

Increased CPU utilization: more programs can run at the same time

Support protection of code and data

Privilege levels

Access rights: read/modify/execute permission

Support sharing of code and data

Page 31: Virtual Memory

Requires the following functions

Memory allocation (Placement)

Memory deallocation (Replacement)

Memory mapping (Translation)

Memory management

Automatic movement of data between main memory and secondary storage

Done by the operating system with the help of processor HW (the exception handling mechanism)

Main memory contains only the most frequently used portions of a process's address space

Illusion of infinite memory (the size of secondary storage) but with access time equal to that of main memory

Usually implemented by demand paging

Bring in a page on demand, upon a page miss

Exploits spatial locality

Page 32: Paging

Divide the address space into fixed-size page frames

A VA consists of (VPN, offset)

A PA consists of (PPN, offset)

Map a virtual page to a physical page at runtime

The page table contains the VA-to-PA mapping information

A page table entry (PTE) contains

VPN

PPN

Presence bit: 1 if this page is in main memory

Reference bits: reference statistics used for page replacement

Dirty bit: 1 if this page has been modified

Access control: read/write/execute permissions

Privilege level: user-level page versus system-level page

Disk address

Internal fragmentation
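An illustrative C bit-field rendering of such a PTE; the field widths are arbitrary choices for a 32-bit example and do not match any real processor's format.

    #include <stdint.h>

    /* Hypothetical 32-bit PTE holding the fields listed above.
     * Widths are made up for illustration; real formats differ. */
    typedef struct {
        uint32_t present    : 1;   /* 1 if the page is in main memory */
        uint32_t dirty      : 1;   /* 1 if the page has been modified */
        uint32_t referenced : 2;   /* reference stats for replacement */
        uint32_t access     : 3;   /* read/write/execute permissions  */
        uint32_t privilege  : 1;   /* user vs. system page            */
        uint32_t ppn        : 20;  /* physical page number (4KB pages)*/
        uint32_t unused     : 4;
    } PTE;
    /* If present == 0, the entry instead holds the page's disk address. */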

Page 33: Process

Def: A process is an instance of a program in execution.

One of the most profound ideas in computer science

Not the same as "program" or "processor"

A process provides each program with two key abstractions:

Logical control flow

Each program seems to have exclusive use of the CPU

Private address space

Each program seems to have exclusive use of main memory

How are these illusions maintained?

Multitasking: process executions are interleaved

In reality, many other programs are running on the system

Processes take turns using the processor

Each time period in which a process executes a portion of its flow is called a time slice

Virtual memory: a private space for each process

The private space is also called the virtual address space: a linear array of bytes, addressed by an n-bit virtual address (0, 1, 2, 3, ..., 2^n - 1)

Page 34: Paging

Page table organization

Linear: one PTE per virtual page

Hierarchical: tree-structured page table

The page table itself can be paged due to its size

For example, a 32b VA, 4KB pages, and 16B PTEs require a 16MB page table

Page directory tables

Each PTE contains a descriptor (i.e. index) for a page table page

Page tables: only the leaf nodes

Each PTE contains a descriptor for a page

Page table entries are dynamically allocated as needed

Different virtual memory faults

TLB miss: PTE not in the TLB

PTE miss: PTE not in main memory

Page miss: page not in main memory

Access violation

Privilege violation

Page 35: Multi-Level Page Tables

Given:

4KB (2^12) page size

32-bit address space

4-byte PTE

Problem:

Would need a 4MB page table!

2^20 * 4 bytes

Common solution

Multi-level page tables

e.g., a 2-level table (P6)

Level 1 table: 1024 entries, each of which points to a Level 2 page table

This is called the page directory

Level 2 tables: 1024 entries, each of which points to a page

[Diagram: one Level 1 table pointing to multiple Level 2 tables]
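A sketch of the corresponding two-level walk (10 + 10 + 12 address split); memory accesses are reduced to plain array indexing, and the entry format (present bit in bit 0, PPN in the upper bits) is made up for illustration.

    #include <stdint.h>

    /* Hypothetical 2-level translation for a 32-bit VA:
     * 10-bit directory index | 10-bit table index | 12-bit offset. */
    #define PRESENT 0x1u

    uint32_t translate(uint32_t *page_dir,       /* 1024 entries      */
                       uint32_t **page_tables,   /* one per dir entry */
                       uint32_t va) {
        uint32_t dir_idx = (va >> 22) & 0x3FF;   /* bits 31..22 */
        uint32_t tbl_idx = (va >> 12) & 0x3FF;   /* bits 21..12 */
        uint32_t offset  =  va        & 0xFFF;   /* bits 11..0  */

        uint32_t dir_e = page_dir[dir_idx];
        if (!(dir_e & PRESENT)) return 0;  /* page-table page not resident */

        uint32_t pte = page_tables[dir_idx][tbl_idx];
        if (!(pte & PRESENT)) return 0;    /* page miss -> page fault      */

        uint32_t ppn = pte >> 12;          /* physical page number */
        return (ppn << 12) | offset;       /* physical address     */
    }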

Page 36: TLB

TLB (Translation Lookaside Buffer)

A cache of page table entries (PTEs)

On a TLB hit, virtual-to-physical translation is done without accessing the page table

On a TLB miss, the page table must be searched for the missing entry

TLB configuration

~100 entries, usually a fully associative cache

Sometimes multi-level TLBs; TLB shootdown issue

Usually separate I-TLB and D-TLB, accessed every cycle

Miss handling

On a TLB miss, an exception handler (with the help of the operating system) searches the page table for the missed entry and inserts it into the TLB

Software-managed TLBs: TLB insert/delete instructions

Flexible but slow: a TLB miss handler takes ~100 instructions

Sometimes handled by HW: a HW page walker
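A minimal fully associative TLB sketch matching this description, with illustrative sizes (64 entries, 4KB pages):

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64   /* real TLBs hold ~100 entries */

    typedef struct { bool valid; uint32_t vpn, ppn; } TLBEntry;
    static TLBEntry tlb[TLB_ENTRIES];

    /* Fully associative TLB lookup for 4KB pages: on a hit, the PPN
     * is returned without touching the page table; on a miss, the
     * handler (software or a HW page walker) must refill the entry. */
    bool tlb_lookup(uint32_t va, uint32_t *pa) {
        uint32_t vpn = va >> 12;
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pa = (tlb[i].ppn << 12) | (va & 0xFFF);
                return true;                /* TLB hit  */
            }
        }
        return false;                       /* TLB miss */
    }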

Page 37: TLB and Cache Implementation of the DECstation 3100

[Figure: TLB and cache datapath of the DECstation 3100]

Page 38: Virtually-Indexed Physically-Tagged Cache

Virtually-indexed physically-tagged cache

A commonly used scheme to bypass translation: use the low-order bits (the page offset) of the VA to index the L1 cache

With an 8KB page size, the 13 low-order bits can index 8KB direct-mapped, 16KB 2-way, or 32KB 4-way set-associative caches

Access the TLB and L1 in parallel using the VA, and do the tag comparison after fetching the PPN from the TLB
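The constraint behind these numbers, worked out: the cache index plus block offset must fit within the page offset (13 bits for 8KB pages), i.e. cache size / associativity <= page size:

    8KB direct-mapped:  8KB / 1 way  = 2^13 B per way -> 13 index+offset bits
    16KB 2-way:        16KB / 2 ways = 2^13 B per way -> 13 bits
    32KB 4-way:        32KB / 4 ways = 2^13 B per way -> 13 bits

In each case the index uses only untranslated page-offset bits, so the cache read can start while the TLB translates.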

Page 39: Exercises and Discussion

Which one is the fastest among the 3 cache organizations?

Which one is the slowest among the 3 cache organizations?

Which one is the largest among the 3 cache organizations?

Which one is the smallest among the 3 cache organizations?

What will happen in terms of cache/TLB/page misses right after a context switch?

Page 40: Homework 6

Read Chapter 9 of the Computer Systems textbook

Exercises

5.1

5.2

5.5

5.6

5.8

5.11