
Overview: Memory

• Memory Organization: General Issues (Hardware)
  – Objectives in Memory Design
  – Memory Types
  – Memory Hierarchies
• Memory Management (Software - OS)
  – Cache
  – Virtual Memory & Paging
  – Segmentation
• Memory Issues in Multiprogramming (Software - OS)
  – Memory allocation for each process
  – Swapping
  – Security

Memory Organization: Objectives

• High Capacity
• Fast Access
• Small Size to fit on chip
• Cheap
• Reusable
• Non-volatile
• Reliable

Memory Reality

• Fast memory is very expensive
• The bigger the memory, the slower it is
• The need for memory keeps growing
• Parkinson's Law: "Programs expand to fill the memory available to hold them"
• You can't run Windows XP on a 5-year-old machine with 32 MB of memory

Storage Devices

• Register
• Cache
• Main Memory (RAM)
• Hard disk
• Floppy
• Tape
• CD-ROM
• …

Memory Types I

• Electric
  – Static RAM (SRAM, Random Access Memory)
    • D-latch, holds information as long as the circuit has power
    • Fast, expensive, many transistors per bit
  – Dynamic RAM (DRAM)
    • One transistor and one capacitor per bit
    • Slower, needs regular refreshing, cheap, small
• Magnetic
  – Hard drive, floppy
    • Cheap, slower, stable, error prone

Hard Drive

Memory Types II

• Optical/Mechanical: CD-ROM, DVD
  – ROM (Read-Only Memory)
  – PROM (Programmable ROM)
  – EPROM (Erasable PROM) – read/write!
• Cheap, large volume, slow access, stable, high reliability

Type Comparison

Type       Category     Erasure     Volatile  Use
SRAM       Read/Write   Electrical  Yes       Register, Cache
DRAM       Read/Write   Electrical  Yes       Main Memory
ROM, PROM  Read only    No          No        Large Volume Appliances
EPROM      Read mostly  UV light    No        Device Prototyping
Magnetic   Read/Write   Magnetic    No        Hard Drive, Backup Tapes

• Question: How can you improve performance without higher cost?

• Answer: Build a memory hierarchy

Memory Hierarchy

[Diagram: the CPU at the top, followed by successive levels of memory. Moving down the hierarchy, speed goes from fastest to slowest, cost ($/bit) from highest to lowest, and size from smallest to biggest.]

Memory Management: OS

• Obvious question: Which memory content should reside at which level of the hierarchy?
• How can the OS give all processes uniform access to memory content, independent of its physical location?

Principle of Locality

• Locality:
  – Temporal: you are likely to use again things you used recently
  – Spatial: items near recently used ones are likely to be used next (e.g., arrays)
• The idea of a hierarchy works only because of locality
• Why does code have locality? (see the sketch below)
  – Local variable space
  – Limited set of variables that will be used again
  – Arrays
  – Machine language is stored sequentially
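
To make the two kinds of locality concrete, here is a small illustrative C sketch (not from the lecture). Both loops add up the same matrix, but the first walks consecutive addresses row by row (good spatial locality), while the second strides across whole rows on every step.

/* Sketch: the same sum computed twice, with different locality. */
#include <stdio.h>
#define N 1024

static int a[N][N];          /* zero-initialized global matrix */

int main(void) {
    long sum = 0;
    /* Good spatial locality: the inner loop touches consecutive addresses. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    /* Poor spatial locality: the inner loop jumps N elements at a time. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    printf("%ld\n", sum);
    return 0;
}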

Cache

• "A safe place for hiding or storing things" (Webster's Dictionary)
• A small and very fast temporary memory between the CPU and main memory that holds the items (variables and instructions) the CPU has used most recently

Cache: Simple Example

• Example: memory has 8 lines, the cache has 4
  – LOAD R1, x5
  – Since x5 is not in the cache, it is fetched from main memory and replaces one current entry

  Cache before the request: x3, x1, x4, x0
  Cache after the request:  x3, x5, x4, x0
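
A minimal C sketch of this example, assuming a fully associative 4-entry cache and an arbitrarily chosen victim entry (the slide does not specify how the replaced entry is picked):

#include <stdio.h>

#define CACHE_LINES 4

static int cache[CACHE_LINES] = {3, 1, 4, 0};  /* holds x3, x1, x4, x0 */
static int next_victim = 1;                    /* entry chosen for replacement */

/* Returns the requested memory line, loading it into the cache on a miss. */
int cache_read(int line) {
    for (int i = 0; i < CACHE_LINES; i++)
        if (cache[i] == line) {
            printf("hit: x%d\n", line);
            return line;
        }
    printf("miss: fetching x%d, replacing x%d\n", line, cache[next_victim]);
    cache[next_victim] = line;   /* afterwards the cache holds x3, x5, x4, x0 */
    return line;
}

int main(void) {
    cache_read(5);   /* LOAD R1, x5: miss, x1 is replaced by x5 */
    return 0;
}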

Terminology

• Hit: the requested data is found at the current level
• Miss: the data is not found and has to be searched for at the next lower level
• Miss rate: percentage of memory accesses that result in a miss
• Block: minimum amount of data transferred between levels (1 in the example above)
• Miss penalty: additional time it takes to find the item after a miss

Cache Parameters

• Block size: how many items do you replace?
  – Big if the degree of spatial locality is high
  – Small if the degree of spatial locality is low
• Which block will be discarded in case of a miss?
  – Replacement strategy: pick one that will hopefully not be used again
• Separation of data and instructions
  – Either one cache for both or two caches, one for each

Unified Data and Instruction Cache

Separate Data and Instruction Caches

Performance

• Access time = hit time + miss rate × miss penalty (worked example below)
  – Miss rate reduction:
    • Bigger cache
    • Bigger blocks
    • Good replacement strategy
  – Miss penalty reduction:
    • Smaller blocks
    • Simple replacement strategy
  – Hit time reduction:
    • Small and simple cache

Replacement Strategy

• Not recently used
  – Pick the first block that has not been used for a certain time
• First in, first out (FIFO)
  – Replace the oldest of all blocks in the cache
• Least recently used (LRU)
  – Compare the last-use time of all blocks and pick the least recently used one (sketch below)
  – Optimal but very expensive
• In practice, cheaper approximations of LRU are used
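
A minimal C sketch of LRU victim selection, assuming each block records the time of its last use (as on the Cache Entry slide below); the timestamps are made-up example values:

#include <stdio.h>

#define CACHE_LINES 4

static unsigned long last_use[CACHE_LINES] = {40, 12, 37, 25};

/* Returns the index of the block whose last use is oldest. */
int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < CACHE_LINES; i++)
        if (last_use[i] < last_use[victim])
            victim = i;
    return victim;
}

int main(void) {
    printf("replace block %d\n", lru_victim());  /* block 1, last used at t=12 */
    return 0;
}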

Cache Entry

• What information do you need for each item/block?
  – Associated address
  – Valid bit (0 during initialization)
  – Accounting information for replacement:
    • time of last use
• Resulting entry layout: Address | Valid | Time | Value (see the struct sketch below)
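
One possible C representation of such an entry; the field names follow the slide, while the field widths are illustrative assumptions rather than a concrete hardware design:

#include <stdint.h>
#include <stdio.h>

struct cache_entry {
    uint32_t address;   /* address (tag) the block belongs to */
    uint8_t  valid;     /* valid bit, 0 after initialization  */
    uint32_t last_use;  /* accounting: time of last use       */
    uint32_t value;     /* the cached data itself             */
};

int main(void) {
    struct cache_entry e = {0};              /* valid bit starts at 0 */
    printf("%zu bytes per entry\n", sizeof e);
    return 0;
}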

Hit vs. Miss

• Read hit
  – The optimal case
• Read miss
  – Stall the CPU, fetch the new block from memory, identify the oldest block, replace it, restart the CPU
• Write hit
  – Update the cache; depending on the strategy, also update memory
• Write miss
  – Similar to a read miss: first update memory, then fetch the block, replace, and restart the CPU (see the sketch below)
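
The four cases can be sketched in C. This is a self-contained toy model, not the lecture's exact scheme: it assumes a direct-mapped 4-line cache over an 8-line main memory and, for simplicity, write-through on a write hit.

#include <stdio.h>

#define MEM_LINES   8
#define CACHE_LINES 4

static int memory[MEM_LINES] = {10, 11, 12, 13, 14, 15, 16, 17};

static struct { int addr; int valid; int value; } cache[CACHE_LINES];

/* Read: return the value, refilling the cache on a miss. */
int read_access(int addr) {
    int line = addr % CACHE_LINES;
    if (cache[line].valid && cache[line].addr == addr)
        return cache[line].value;            /* read hit: the optimal case */
    /* read miss: (stall the CPU,) fetch the block and replace the entry */
    cache[line].addr  = addr;
    cache[line].valid = 1;
    cache[line].value = memory[addr];
    return cache[line].value;
}

/* Write: update the cache; on a miss, update memory first, then fetch. */
void write_access(int addr, int value) {
    int line = addr % CACHE_LINES;
    if (cache[line].valid && cache[line].addr == addr) {
        cache[line].value = value;           /* write hit */
        memory[addr] = value;                /* write-through keeps memory current */
    } else {
        memory[addr] = value;                /* write miss: update memory first */
        read_access(addr);                   /* then fetch the block into the cache */
    }
}

int main(void) {
    printf("%d\n", read_access(5));   /* miss, returns 15 */
    write_access(5, 99);              /* write hit */
    printf("%d\n", read_access(5));   /* hit, returns 99 */
    return 0;
}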

Write Strategies

• Write-through
  – Update memory whenever the cache is changed
• Copy back (write-back)
  – Update memory only when the block is evicted from the cache
  – Keep track of changes with a dirty bit (see the sketch below)
  – Makes a read miss more expensive but improves write time significantly
• Buffered write-through
  – Keep track of updates in a special buffer; the buffer gradually transfers the data to memory
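
A minimal C sketch of the copy-back idea with a dirty bit, where a single cached block stands in for the whole cache (an illustrative simplification):

#include <stdio.h>

static int memory[8] = {10, 11, 12, 13, 14, 15, 16, 17};

static int cached_addr  = 2;    /* which memory line the cache holds */
static int cached_value = 12;
static int dirty        = 0;    /* set when the cached copy differs from memory */

/* Write hit: update only the cache and remember that it is dirty. */
void cache_write(int value) {
    cached_value = value;
    dirty = 1;
}

/* Eviction: copy the block back to memory only if it was modified. */
void cache_evict(void) {
    if (dirty) {
        memory[cached_addr] = cached_value;
        dirty = 0;
    }
}

int main(void) {
    cache_write(99);             /* memory[2] still holds the old value 12 */
    cache_evict();               /* now memory[2] == 99 */
    printf("%d\n", memory[2]);
    return 0;
}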