
Chapter 7
Based on Digital Design and Computer Architecture, 2nd Edition, by David Money Harris and Sarah L. Harris

Chapter 7 :: Topics
Introduction
Memory System Performance Analysis
Caches
Virtual Memory
Memory-Mapped I/O
Summary

Introduction
Computer performance depends on:
Processor performance
Memory system performance

Memory Interface
Computer system performance depends on the memory system as well as the processor microarchitecture. Early processors were relatively slow, so memory was able to keep up. But processor speed has increased at a faster rate than memory speed; DRAM memories are currently 10 to 100 times slower than processors. The processor communicates with the memory system over a memory interface. The processor sends an address over the Address bus to the memory system. For a read, MemWrite is 0 and the memory returns the data on the ReadData bus. For a write, MemWrite is 1 and the processor sends data to memory on the WriteData bus. (Figure: the simple memory interface used in our multicycle MIPS processor.)

Processor-Memory Gap
DRAM speed has improved by only about 7% per year, whereas processor performance has improved at a rate of 25 to 50% per year. (Figure: diverging processor and memory performance.)

Memory System Challenge
Make the memory system appear as fast as the processor
Use a hierarchy of memories
Ideal memory: fast, cheap (inexpensive), large (capacity)
But you can only choose two!

Memory Hierarchy
Computers store the most commonly used instructions and data in a faster but smaller memory, called a cache. The cache is usually built out of SRAM on the same chip as the processor. SRAM is inherently faster than DRAM, and keeping the memory on chip eliminates the lengthy delays caused by traveling to and from a separate chip. If the processor requests data that is available in the cache, it is returned quickly; this is called a cache hit. Otherwise, the processor retrieves the data from main memory (DRAM); this is called a cache miss. The third level in the memory hierarchy is the hard drive, which provides the illusion of more capacity than actually exists in main memory; it is thus called virtual memory. Main memory, also called physical memory, holds a subset of the virtual memory.

Locality
Exploit locality to make memory accesses fast:
Temporal locality (locality in time): if data was used recently, it is likely to be used again soon. How to exploit: keep recently accessed data in the higher levels of the memory hierarchy.
Spatial locality (locality in space): if data was used recently, nearby data is likely to be used soon. How to exploit: when accessing data, bring nearby data into the higher levels of the memory hierarchy too.
Memory Performance
Hit: data found in that level of the memory hierarchy
Miss: data not found (must go to the next level)
Hit Rate = # hits / # memory accesses = 1 - Miss Rate
Miss Rate = # misses / # memory accesses = 1 - Hit Rate
Average memory access time (AMAT): the average time the processor must wait for memory per load or store instruction:

    AMAT = t_cache + MR_cache × (t_MM + MR_MM × t_VM)

Designers (and computer buyers) need quantitative ways to measure the performance of memory systems to evaluate the cost-benefit trade-offs of various alternatives. The standard memory system performance metrics are miss rate (or hit rate) and average memory access time.
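As a sanity check on the formula, here is a minimal C sketch; the function and parameter names (t_cache, mr_cache, and so on) are illustrative, not from the slides.

    #include <stdio.h>

    /* AMAT = t_cache + MR_cache * (t_MM + MR_MM * t_VM) */
    double amat(double t_cache, double mr_cache,
                double t_mm, double mr_mm, double t_vm) {
        return t_cache + mr_cache * (t_mm + mr_mm * t_vm);
    }

    int main(void) {
        /* 1-cycle cache, 100-cycle main memory, 37.5% cache miss rate,
           and no main-memory misses (no page faults) */
        printf("AMAT = %.1f cycles\n", amat(1.0, 0.375, 100.0, 0.0, 0.0));
        return 0;
    }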
Memory Performance Example 1
A program has 2,000 loads and stores; 1,250 of these data values are found in the cache, and the rest are supplied by other levels of the memory hierarchy. What are the hit and miss rates for the cache?
Hit Rate = 1250/2000 = 0.625
Miss Rate = 750/2000 = 0.375 = 1 - Hit Rate
Memory Performance Example 2
Suppose the processor has two levels of hierarchy, cache and main memory, with t_cache = 1 cycle and t_MM = 100 cycles. What is the AMAT of the program from Example 1?
AMAT = t_cache + MR_cache × t_MM = [1 + 0.375 × 100] cycles = 38.5 cycles

Gene Amdahl, 1922-
Amdahl's Law: the effort spent increasing the performance of a subsystem is wasted unless the subsystem affects a large percentage of overall performance. Co-founded three companies, including one called Amdahl Corporation, in 1970.
Performance improvements might not always be as good as they sound. For example, making the memory system ten times faster will not necessarily make a computer program run ten times as fast. If 50% of a program's instructions are loads and stores, a tenfold memory system improvement only means a 1.82-fold improvement in program performance: speedup = 1 / [(1 - 0.5) + 0.5/10] = 1/0.55 ≈ 1.82. This general principle is called Amdahl's Law, which says that the effort spent on increasing the performance of a subsystem is worthwhile only if the subsystem affects a large percentage of the overall performance.
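A minimal C sketch of the arithmetic behind the 1.82x figure; the function name and parameters are illustrative.

    #include <stdio.h>

    /* Amdahl's Law: overall speedup when a fraction f of the run time
       is sped up by a factor k */
    double amdahl_speedup(double f, double k) {
        return 1.0 / ((1.0 - f) + f / k);
    }

    int main(void) {
        /* 50% of the time goes to memory accesses; memory made 10x faster */
        printf("speedup = %.2f\n", amdahl_speedup(0.5, 10.0)); /* ~1.82 */
        return 0;
    }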
Cache
Highest level in the memory hierarchy
Fast (typically ~1 cycle access time)
Ideally supplies most data to the processor
Usually holds the most recently accessed data
A cache holds commonly used memory data. The number of data words that it can hold is called the capacity, C. Because the capacity of the cache is smaller than that of main memory, the computer system designer must choose what subset of main memory is kept in the cache. To accommodate new data, the cache must replace old data.
Cache Terminology
Capacity (C): number of data bytes in the cache
Block size (b): bytes of data brought into the cache at once
Number of blocks (B = C/b): number of blocks in the cache
Degree of associativity (N): number of blocks in a set
Number of sets (S = B/N): each memory address maps to exactly one cache set
Cache Design Questions
What data is held in the cache?
How is data found?
What data is replaced?
We focus on data loads, but stores follow the same principles.
What data is held in the cache?
Ideally, the cache anticipates the data the processor needs and puts it in the cache ahead of time. But since it is impossible to predict the future, the cache uses the past to predict the future via locality:
Temporal locality: copy newly accessed data into the cache
Spatial locality: copy neighboring data into the cache too
An ideal cache would anticipate all of the data needed by the processor and fetch it from main memory ahead of time so that the cache has a zero miss rate. Because it is impossible to predict the future with perfect accuracy, the cache must guess what data will be needed based on the past pattern of memory accesses. In particular, the cache exploits temporal and spatial locality to achieve a low miss rate.
How is data found?
The cache is organized into S sets
Each memory address maps to exactly one set
Caches are categorized by the number of blocks in a set:
Direct mapped: 1 block per set (S = B)
N-way set associative: N blocks per set
Fully associative: all cache blocks in 1 set
A cache is organized into S sets, each of which holds one or more blocks of data. The relationship between the address of data in main memory and the location of that data in the cache is called the mapping. Each memory address maps to exactly one set in the cache. Some of the address bits are used to determine which cache set contains the data. If the set contains more than one block, the data may be kept in any of the blocks in the set.
Example Cache Parameters
Examine each organization for a cache with:
Capacity C = 8 words
Block size b = 1 word
So the number of blocks B = 8
Ridiculously small, but it will illustrate the organizations.

Direct Mapped Cache
A direct mapped cache has one block in each set, so it is organized into S = B sets. To understand the mapping of memory addresses onto cache blocks, imagine main memory as being mapped into b-word blocks, just as the cache is. An address in block 0 of main memory maps to set 0 of the cache. An address in block 1 of main memory maps to set 1 of the cache, and so forth until an address in block B - 1 of main memory maps to block B - 1 of the cache. There are no more blocks of the cache, so the mapping wraps around, such that block B of main memory maps to block 0 of the cache.
Direct Mapped Cache Hardware
Because many addresses map to a single set, the cache must also keep track of the address of the data actually contained in each set. The least significant bits of the address specify which set holds the data. The remaining most significant bits are called the tag and indicate which of the many possible addresses is held in that set. The figure shows the hardware for the direct mapped cache of the previous figure. The cache is constructed as an eight-entry SRAM. Each entry, or set, contains one line consisting of 32 bits of data, 27 bits of tag, and 1 valid bit. The cache is accessed using the 32-bit address. The two least significant bits, the byte offset bits, are ignored for word accesses. The next three bits, the set bits, specify the entry or set in the cache. A load instruction reads the specified entry from the cache and checks the tag and valid bits. If the tag matches the most significant 27 bits of the address and the valid bit is 1, the cache hits and the data is returned to the processor. Otherwise, the cache misses and the memory system must fetch the data from main memory.
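The field extraction described above can be sketched in a few lines of C. The constants match the 8-set, one-word-block cache (2 byte-offset bits, 3 set bits, 27 tag bits); the example address is arbitrary.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t addr = 0x24;                      /* example address */
        uint32_t byte_offset = addr & 0x3;         /* bits 1:0, ignored for words */
        uint32_t set         = (addr >> 2) & 0x7;  /* bits 4:2 select 1 of 8 sets */
        uint32_t tag         = addr >> 5;          /* bits 31:5 */
        printf("addr 0x%X -> set %u, tag 0x%X, byte offset %u\n",
               addr, set, tag, byte_offset);
        return 0;
    }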
Direct Mapped Cache Performance

    # MIPS assembly code
          addi $t0, $0, 5
    loop: beq  $t0, $0, done
          lw   $t1, 0x4($0)
          lw   $t2, 0xC($0)
          lw   $t3, 0x8($0)
          addi $t0, $t0, -1
          j    loop
    done:

Miss Rate = 3/15 = 20%
The first pass through the loop misses on all three loads (compulsory misses); temporal locality makes the remaining four passes hit.
Direct Mapped Cache: Conflict

    # MIPS assembly code
          addi $t0, $0, 5
    loop: beq  $t0, $0, done
          lw   $t1, 0x4($0)
          lw   $t2, 0x24($0)
          addi $t0, $t0, -1
          j    loop
    done:

Miss Rate = 10/10 = 100%
Addresses 0x4 and 0x24 map to the same set, so each load evicts the other: these are conflict misses.
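The 100% miss rate can be reproduced with a toy direct mapped cache model in C; this sketch replays only the two conflicting loads per iteration and is not from the slides.

    #include <stdio.h>
    #include <stdint.h>

    #define SETS 8   /* C = 8 words, b = 1 word, direct mapped */

    int main(void) {
        uint32_t tag[SETS];
        int valid[SETS] = {0};
        int misses = 0, accesses = 0;

        for (int i = 0; i < 5; i++) {               /* 5 loop iterations */
            uint32_t loads[] = {0x4, 0x24};         /* both map to set 1 */
            for (int j = 0; j < 2; j++, accesses++) {
                uint32_t set = (loads[j] >> 2) & (SETS - 1);
                uint32_t t   = loads[j] >> 5;
                if (!valid[set] || tag[set] != t) { /* miss: fill the block */
                    misses++;
                    valid[set] = 1;
                    tag[set] = t;
                }
            }
        }
        printf("miss rate = %d/%d\n", misses, accesses);  /* 10/10 */
        return 0;
    }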
N-Way Set Associative Cache
An N-way set associative cache reduces conflicts by providing N blocks in each set where data mapping to that set might be found. Each memory address still maps to a specific set, but it can map to any one of the N blocks in the set. Hence, a direct mapped cache is another name for a one-way set associative cache. N is also called the degree of associativity of the cache. The figure shows the hardware for a C = 8-word, N = 2-way set associative cache. The cache now has only S = 4 sets rather than 8. Thus, only log2(4) = 2 set bits rather than 3 are used to select the set. The tag increases from 27 to 28 bits. Each set contains two ways or degrees of associativity. Each way consists of a data block and the valid and tag bits. The cache reads blocks from both ways in the selected set and checks the tags and valid bits for a hit. If a hit occurs in one of the ways, a multiplexer selects data from that way. Set associative caches generally have lower miss rates than direct mapped caches of the same capacity, because they have fewer conflicts. However, set associative caches are usually slower and somewhat more expensive to build because of the output multiplexer and additional comparators.
N-Way Set Associative Performance

    # MIPS assembly code
          addi $t0, $0, 5
    loop: beq  $t0, $0, done
          lw   $t1, 0x4($0)
          lw   $t2, 0x24($0)
          addi $t0, $t0, -1
          j    loop
    done:

Miss Rate = 2/10 = 20%
Associativity reduces conflict misses: 0x4 and 0x24 now fit in the two ways of the same set, so only the two compulsory misses remain.
Fully Associative Cache
Reduces conflict misses
Expensive to build
A fully associative cache contains a single set with B ways, where B is the number of blocks. A memory address can map to a block in any of these ways. A fully associative cache is another name for a B-way set associative cache with one set. The figure shows the SRAM array of a fully associative cache with eight blocks. Upon a data request, eight tag comparisons (not shown) must be made, because the data could be in any block. Fully associative caches tend to have the fewest conflict misses for a given cache capacity, but they require more hardware for the additional tag comparisons. They are best suited to relatively small caches because of the large number of comparators.
Spatial Locality? Increase the block size:
Block size b = 4 words
C = 8 words
Direct mapped (1 block per set)
Number of blocks B = C/b = 8/4 = 2
(Figure: cache with larger block size.)
Direct Mapped Cache Performance (b = 4 words)

    # MIPS assembly code
          addi $t0, $0, 5
    loop: beq  $t0, $0, done
          lw   $t1, 0x4($0)
          lw   $t2, 0xC($0)
          lw   $t3, 0x8($0)
          addi $t0, $t0, -1
          j    loop
    done:

Miss Rate = 1/15 = 6.67%
Larger blocks reduce compulsory misses through spatial locality: the first load brings in the whole block containing 0x4, 0x8, and 0xC.
Cache Organization Recap
Capacity: C
Block size: b
Number of blocks in the cache: B = C/b
Number of blocks in a set: N
Number of sets: S = B/N

    Organization             Number of Ways (N)   Number of Sets (S = B/N)
    Direct Mapped            1                    B
    N-Way Set Associative    1 < N < B            B/N
    Fully Associative        B                    1

Caches are characterized by capacity C, block size b (and number of blocks B = C/b), and the number of blocks in a set (N). Cache capacity, associativity, set size, and block size are typically powers of 2, which makes the cache fields (tag, set, and block offset bits) subsets of the address bits. Increasing the associativity N usually reduces the miss rate caused by conflicts, but higher associativity requires more tag comparators. Increasing the block size b takes advantage of spatial locality to reduce the miss rate; however, it decreases the number of sets in a fixed-size cache and therefore could lead to more conflicts. It also increases the miss penalty.

Capacity Misses
The cache is too small to hold all the data of interest at once. If the cache is full, the program accesses data X and evicts data Y; a capacity miss occurs when Y is accessed again. How do we choose Y to minimize the chance of needing it again? The principle of temporal locality suggests that the best choice is to evict the least recently used block, because it is least likely to be used again soon. Hence, most associative caches have a least recently used (LRU) replacement policy.
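The relationships in the recap can be captured in a few lines of C; the helper log2i and the parameter values are illustrative.

    #include <stdio.h>

    /* integer log base 2 for power-of-2 inputs */
    static int log2i(int x) { int n = 0; while (x >>= 1) n++; return n; }

    int main(void) {
        int C = 8, b = 1, N = 1;        /* capacity (words), block size, ways */
        int B = C / b;                  /* number of blocks */
        int S = B / N;                  /* number of sets */
        int offset_bits = log2i(4 * b); /* byte offset within a block */
        int set_bits    = log2i(S);
        int tag_bits    = 32 - set_bits - offset_bits;
        printf("B = %d, S = %d; %d offset + %d set + %d tag bits\n",
               B, S, offset_bits, set_bits, tag_bits);
        return 0;
    }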
Types of Misses
Compulsory: the first time the data is accessed
Capacity: the cache is too small to hold all the data of interest
Conflict: the data of interest maps to the same location in the cache
Miss penalty: the time it takes to retrieve a block from the lower level of the hierarchy
Cache misses can be reduced by changing capacity, block size, and/or associativity. The first request to a cache block is called a compulsory miss, because the block must be read from memory regardless of the cache design. Capacity misses occur when the cache is too small to hold all concurrently used data. Conflict misses are caused when several addresses map to the same set and evict blocks that are still needed.

Miss Rate Trends
Bigger caches reduce capacity misses
Greater associativity reduces conflict misses
(Adapted from Hennessy & Patterson, Computer Architecture: A Quantitative Approach, 2011.)

Miss Rate Trends
Bigger blocks reduce compulsory misses
Bigger blocks increase conflict misses

LRU Replacement

    # MIPS assembly
    lw $t0, 0x04($0)
    lw $t1, 0x24($0)

(Figure: cache contents after the two loads, showing which way in the set is least recently used and therefore replaced next.)
Cache Summary
What data is held in the cache? Recently used data (temporal locality) and nearby data (spatial locality).
How is data found? The set is determined by the address of the data; the word within the block is also determined by the address. In associative caches, the data could be in one of several ways.
What data is replaced? The least recently used way in the set.

Multilevel Caches
Larger caches have lower miss rates but longer access times
Expand the memory hierarchy to multiple levels of caches:
Level 1: small and fast (e.g., 16 KB, 1 cycle)
Level 2: larger and slower (e.g., 256 KB, 2-6 cycles)
Most modern PCs have L1, L2, and L3 caches. (Figure: Intel Pentium III die.)
Virtual Memory
Gives the illusion of a bigger memory
Main memory (DRAM) acts as a cache for the hard disk
Most modern computer systems use a hard drive made of magnetic or solid-state storage as the lowest level in the memory hierarchy. The objective of adding a hard drive to the memory hierarchy is to inexpensively give the illusion of a very large memory while still providing the speed of faster memory for most accesses. A computer with only 128 MB of DRAM, for example, could effectively provide 2 GB of memory using the hard drive. This larger 2-GB memory is called virtual memory, and the smaller 128-MB main memory is called physical memory. We will use the term physical memory to refer to main memory throughout this section.
Memory Hierarchy
Physical memory: DRAM (main memory)
Virtual memory: hard drive (slow, large, cheap)

Hard Disk
Takes milliseconds to seek the correct location on the disk
Virtual Memory
Virtual addresses:
Programs use virtual addresses
The entire virtual address space is stored on the hard drive
A subset of the virtual address data is in DRAM
The CPU translates virtual addresses into physical addresses (DRAM addresses)
Data not in DRAM is fetched from the hard drive
Memory protection:
Each program has its own virtual-to-physical mapping
Two programs can use the same virtual address for different data
Programs don't need to be aware that others are running
One program (or virus) can't corrupt the memory used by another
Programs can access data anywhere in virtual memory, so they must use virtual addresses that specify the location in virtual memory. The physical memory holds a subset of the most recently accessed virtual memory. In this way, physical memory acts as a cache for virtual memory. Thus, most accesses hit in physical memory at the speed of DRAM, yet the program enjoys the capacity of the larger virtual memory.
Cache/Virtual Memory Analogues

    Cache           Virtual Memory
    Block           Page
    Block Size      Page Size
    Block Offset    Page Offset
    Miss            Page Fault
    Tag             Virtual Page Number

Virtual memory systems use different terminology for the same caching principles discussed in the previous section. Physical memory acts as a cache for virtual memory.
Virtual Memory Definitions
Page size: the amount of memory transferred from the hard disk to DRAM at once
Address translation: determining the physical address from the virtual address
Page table: the lookup table used to translate virtual addresses to physical addresses
Virtual memory is divided into virtual pages, typically 4 KB in size. Physical memory is likewise divided into physical pages of the same size. A virtual page may be located in physical memory (DRAM) or on the hard drive: some virtual pages are present in physical memory, and some are located on the hard drive. The process of determining the physical address from the virtual address is called address translation. If the processor attempts to access a virtual address that is not in physical memory, a page fault occurs, and the operating system loads the page from the hard drive into physical memory. Each load or store instruction requires a page table access followed by a physical memory access. The page table access translates the virtual address used by the program to a physical address, and the physical address is then used to actually read or write the data. Hence, each load or store involves two physical memory accesses: a page table access and a data access. To speed up address translation, a translation lookaside buffer (TLB) caches the most commonly used page table entries.
Virtual & Physical Addresses
Most accesses hit in physical memory
But programs have the large capacity of virtual memory
(Figure: address translation.)
Virtual Memory Example
System:
Virtual memory size: 2 GB = 2^31 bytes
Physical memory size: 128 MB = 2^27 bytes
Page size: 4 KB = 2^12 bytes
Organization:
Virtual address: 31 bits
Physical address: 27 bits
Page offset: 12 bits
# Virtual pages = 2^31 / 2^12 = 2^19 (VPN = 19 bits)
# Physical pages = 2^27 / 2^12 = 2^15 (PPN = 15 bits)
Virtual Memory Example
19-bit virtual page numbers
15-bit physical page numbers
Virtual Memory Example
What is the physical address of virtual address 0x247C?
VPN = 0x2
VPN 0x2 maps to PPN 0x7FFF
12-bit page offset: 0x47C
Physical address = 0x7FFF47C
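The worked translation above is just bit slicing; here is a C sketch with the VPN-to-PPN mapping hard-coded as a stand-in for a real page table lookup.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t va     = 0x247C;
        uint32_t vpn    = va >> 12;                  /* 12-bit page offset */
        uint32_t offset = va & 0xFFF;
        uint32_t ppn    = (vpn == 0x2) ? 0x7FFF : 0; /* stand-in for the table */
        uint32_t pa     = (ppn << 12) | offset;
        printf("VA 0x%X -> PA 0x%X\n", va, pa);      /* 0x7FFF47C */
        return 0;
    }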
How to perform translation?
Use a page table, with an entry for each virtual page. Entry fields:
Valid bit: 1 if the page is in physical memory
Physical page number: where the page is located
Recall that virtual memory and physical memory are divided into pages. The most significant bits of the virtual or physical address specify the virtual or physical page number. The least significant bits specify the word within the page and are called the page offset.

Page Table Example
The VPN is the index into the page table.

Page Table Example 1
What is the physical address of virtual address 0x5F20?
VPN = 5, so read entry 5 of the page table
VPN 5 maps to physical page 1
Physical address: 0x1F20

Page Table Example 2
What is the physical address of virtual address 0x73E0?
VPN = 7
Entry 7 is invalid, so the virtual page must be paged into physical memory from disk
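A C sketch of the lookup in Examples 1 and 2, using a tiny illustrative page table (entry 5 valid and mapped to physical page 1, entry 7 invalid):

    #include <stdio.h>
    #include <stdint.h>

    typedef struct { int valid; uint32_t ppn; } pte_t;

    int main(void) {
        pte_t page_table[8] = {0};       /* all entries invalid... */
        page_table[5].valid = 1;         /* ...except VPN 5 -> PPN 1 */
        page_table[5].ppn   = 0x1;

        uint32_t va  = 0x5F20;           /* try 0x73E0 for a page fault */
        pte_t    pte = page_table[va >> 12];
        if (pte.valid)
            printf("PA = 0x%X\n", (pte.ppn << 12) | (va & 0xFFF)); /* 0x1F20 */
        else
            printf("page fault: OS loads the page from disk\n");
        return 0;
    }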
Page Table Challenges
The page table is large and is usually located in physical memory
Each load/store requires two main memory accesses: one for translation (the page table read) and one to access the data (after translation)
This cuts memory performance in half, unless we get clever
Because the page table is so large, it is stored in physical memory. The page table can be stored anywhere in physical memory, at the discretion of the OS. The processor typically uses a dedicated register, called the page table register, to store the base address of the page table in physical memory.
Translation Lookaside Buffer (TLB)
A small cache of the most recent translations
Reduces the number of memory accesses for most loads/stores from 2 to 1

TLB
Page table accesses have high temporal locality:
The page size is large, so consecutive loads/stores are likely to access the same page
The TLB is:
Small: accessed in < 1 cycle
Typically 16-512 entries
Fully associative
> 99% hit rates are typical
Reduces the number of memory accesses for most loads/stores from 2 to 1
(Figure: example 2-entry TLB.)
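A C sketch of a two-entry fully associative TLB lookup; the entry contents are illustrative, and real hardware compares all entries in parallel rather than in a loop.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct { int valid; uint32_t vpn, ppn; } tlb_entry_t;

    int main(void) {
        tlb_entry_t tlb[2] = { {1, 0x2, 0x7FFF}, {0, 0, 0} };
        uint32_t va = 0x247C, vpn = va >> 12;

        for (int i = 0; i < 2; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                printf("TLB hit: PA = 0x%X\n",
                       (tlb[i].ppn << 12) | (va & 0xFFF));
                return 0;
            }
        }
        printf("TLB miss: walk the page table, then refill the TLB\n");
        return 0;
    }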
Memory Protection
Multiple processes (programs) run at once
Each process has its own page table
Each process can use its entire virtual address space
A process can only access the physical pages mapped in its own page table
As you probably know, modern computers typically run several programs or processes at the same time. All of the programs are simultaneously present in physical memory. In a well-designed computer system, the programs should be protected from each other, so that no program can crash or hijack another program. Specifically, no program should be able to access another program's memory without permission. This is called memory protection. Virtual memory systems provide memory protection by giving each program its own virtual address space.
Virtual Memory Summary
Virtual memory increases capacity
A subset of the virtual pages resides in physical memory
A page table maps virtual pages to physical pages (address translation)
A TLB speeds up address translation
Separate page tables for different programs provide memory protection

Memory-Mapped I/O
The processor accesses I/O devices (such as keyboards, monitors, and printers) just like memory
Each I/O device is assigned one or more addresses
When such an address is detected, data is read from or written to the I/O device instead of memory
A portion of the address space is dedicated to I/O devices
Input/output (I/O) systems are used to connect a computer with external devices called peripherals. A processor accesses an I/O device using the address and data buses in the same way that it accesses memory.
Memory-Mapped I/O Hardware
Address decoder: looks at the address to determine which device or memory communicates with the processor
I/O registers: hold the values written to the I/O devices
ReadData multiplexer: selects between memory and the I/O devices as the source of data sent to the processor
(Figure: the memory interface.)
Memory-Mapped I/O Hardware
The figure shows the hardware needed to support two memory-mapped I/O devices. An address decoder determines which device communicates with the processor. It uses the Address and MemWrite signals to generate control signals for the rest of the hardware. The ReadData multiplexer selects between memory and the various I/O devices. Write-enabled registers hold the values written to the I/O devices. Software that communicates with an I/O device is called a device driver.
Memory-Mapped I/O Code
Suppose I/O Device 1 is assigned the address 0xFFFFFFF4
Write the value 42 to I/O Device 1
Read the value from I/O Device 1 and place it in $t3
Memory-Mapped I/O Code
Write the value 42 to I/O Device 1 (0xFFFFFFF4):

    addi $t0, $0, 42
    sw   $t0, 0xFFF4($0)

(The 16-bit offset 0xFFF4 is sign-extended to 0xFFFFFFF4.)
Memory-Mapped I/O Code
Read the value from I/O Device 1 and place it in $t3:

    lw $t3, 0xFFF4($0)
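In C, the same memory-mapped accesses are usually written through a pointer to a volatile object so the compiler cannot reorder or optimize away the device accesses; the macro name below is illustrative.

    #include <stdint.h>

    #define DEV1 (*(volatile uint32_t *)0xFFFFFFF4u)  /* I/O Device 1 */

    void demo(void) {
        DEV1 = 42;            /* write 42 to the device register */
        uint32_t t3 = DEV1;   /* read the device register back */
        (void)t3;             /* silence unused-variable warnings */
    }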
Input/Output (I/O) Systems
Embedded I/O systems: toasters, LEDs, etc.
PC I/O systems
Embedded systems use a processor to control interactions with the physical environment. They are typically built around microcontroller units (MCUs), which combine a microprocessor with a set of easy-to-use peripherals such as general-purpose digital and analog I/O pins, serial ports, timers, etc. Microcontrollers are generally inexpensive and are designed to minimize system cost and size by integrating most of the necessary components onto a single chip.
Embedded I/O Systems
Example microcontroller: PIC32
32-bit MIPS processor
Low-level peripherals include: serial ports, timers, A/D converters

Digital I/O

    // C Code
    #include <p32xxxx.h>   // PIC32 register definitions

    int main(void) {
      int switches;

      TRISD = 0xFF00;                    // set RD[7:0] to output,
                                         // RD[11:8] to input
      while (1) {
        switches = (PORTD >> 8) & 0xF;   // read & mask switches, RD[11:8]
        PORTD = switches;                // display on LEDs
      }
    }

General-purpose I/O (GPIO) pins are used to read or write digital signals. For example, the figure shows eight light-emitting diodes (LEDs) and four switches connected to a 12-bit GPIO port.
Serial I/O
Example serial protocols:
SPI: Serial Peripheral Interface
UART: Universal Asynchronous Receiver/Transmitter
Also: I2C, USB, Ethernet, etc.
If a microcontroller needs to send more bits than the number of free GPIO pins, it must break the message into multiple smaller transmissions. In each step, it can send either one bit or several bits: the former is called serial I/O and the latter parallel I/O. Serial I/O is popular because it uses few wires and is fast enough for many applications.
SPI: Serial Peripheral Interface
The master initiates communication with the slave by sending pulses on SCK
The master sends data to the slave on SDO (Serial Data Out), msb first
The slave may send data back to the master on SDI, msb first
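A bit-banged C sketch of the master behavior described above (SPI mode 0, msb first). The pin helpers are hypothetical stand-ins for real GPIO register accesses; production code would normally use the MCU's hardware SPI peripheral instead.

    #include <stdint.h>

    /* Hypothetical pin helpers: wire these to real GPIO registers. */
    static int sdi_pin;                        /* last value seen on SDI */
    static void sck(int v) { (void)v; }        /* drive SCK high/low */
    static void sdo(int v) { (void)v; }        /* drive SDO */
    static int  sdi(void)  { return sdi_pin; } /* sample SDI */

    uint8_t spi_transfer(uint8_t out) {
        uint8_t in = 0;
        for (int i = 7; i >= 0; i--) {             /* msb first */
            sdo((out >> i) & 1);                   /* present next data bit */
            sck(1);                                /* rising edge: slave samples SDO */
            in = (uint8_t)((in << 1) | sdi());     /* sample the slave's bit */
            sck(0);                                /* falling edge ends the pulse */
        }
        return in;
    }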
UART: Universal Asynchronous Rx/Tx
Configuration: a start bit (0), 7-8 data bits, an optional parity bit, and 1 or more stop bits (1)
Data rates: 300, 1200, 2400, 9600, ... baud
The line idles HIGH (1)
A common configuration: 8 data bits, no parity, 1 stop bit, 9600 baud

Timers

    // Create specified ms/us of delay using the built-in timer
    #include <p32xxxx.h>   // PIC32 register definitions

    void delaymicros(int micros) {
      if (micros > 1000) {            // avoid timer overflow
        delaymicros(1000);
        delaymicros(micros - 1000);
      } else if (micros > 6) {
        TMR1 = 0;                     // reset timer to 0
        T1CONbits.ON = 1;             // turn timer on
        PR1 = (micros - 6) * 20;      // 20 clocks per microsecond;
                                      // the function has ~6 us of overhead
        IFS0bits.T1IF = 0;            // clear overflow flag
        while (!IFS0bits.T1IF);       // wait until overflow flag is set
      }
    }

    void delaymillis(int millis) {
      while (millis--) delaymicros(1000);  // repeatedly delay 1 ms until done
    }
Analog I/O
Needed to interface with the outside world
Analog input: analog-to-digital (A/D) conversion
Often included in the microcontroller
N-bit: converts an analog input in the range Vref- to Vref+ to a digital value between 0 and 2^N - 1
Analog output: digital-to-analog (D/A) conversion
Typically needs an external chip (e.g., AD558 or LTC1257)
N-bit: converts a digital value between 0 and 2^N - 1 to an analog output in the range Vref- to Vref+
Pulse-width modulation
Pulse-Width Modulation (PWM)
The average value is proportional to the duty cycle
Add a low-pass filter on the output to deliver the average value
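A software PWM sketch built on the delaymicros function from the Timers example; the 1 ms period and the choice of pin RD0 are illustrative assumptions, and a real design would typically use the PIC32's hardware output compare (PWM) module instead.

    #include <p32xxxx.h>   // PIC32 register definitions

    void delaymicros(int micros);  // from the Timers example above

    // Generate one 1 ms PWM period on RD0 with the given duty cycle (0-100%).
    void pwm_period(int duty_percent) {
        PORTDbits.RD0 = 1;                      // output high for duty% of period
        delaymicros(10 * duty_percent);
        PORTDbits.RD0 = 0;                      // output low for the remainder
        delaymicros(10 * (100 - duty_percent));
    }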
Other Microcontroller Peripherals
Examples:
Character LCD
VGA monitor
Bluetooth wireless
Motors
Personal Computer (PC) I/O Systems
USB: Universal Serial Bus
USB 1.0, released in 1996, standardized cables and software for peripherals
PCI/PCIe: Peripheral Component Interconnect / PCI Express
Developed by Intel, widespread around 1994
A 32-bit parallel bus used for expansion cards (e.g., sound cards, video cards)
DDR: double-data-rate memory
Personal Computer (PC) I/O Systems
TCP/IP: Transmission Control Protocol and Internet Protocol
Physical connection: Ethernet cable or Wi-Fi
SATA: hard drive interface
Input/output (sensors, actuators, microcontrollers, etc.)
Data acquisition systems (DAQs)
USB links