Computer Bus Architecture, Pipelining and Memory Management

Bus Architecture:-
Bus architecture is a system that defines how data signaling is shared among the multiple devices inside a computer when they need to communicate or transfer data between them. A "BUS" is a circuit that connects the electrical components of the computer and carries electronic impulses from one component to the other. The architecture defines how the bus is to be used by multiple system processes with as much efficiency as possible.

History of the Bus:-
In the early years, computer devices were connected with parallel electrical connections derived from the busbar. Modern computers can use both parallel and bit-serial connections. The early buses could transfer 1-bit signals; widths later grew to 2, 4, 8, 16, 32 and 64 bits, and the signals could be multiplexed.

First Generation Bus:-
The first generation of computer buses were simply wires connected to memory and the other components of the computer.
The first generation saw the development of 8-bit parallel processors.

This was also when interrupts were first used: a sequence of tasks could be paused so that another task could run, and the original task resumed once the other completed, managed by interrupt prioritization.
All communication was controlled by the CPU and timed by a central clock that determined the speed of the CPU.
This created a drawback: all the peripherals had to communicate at the same speed.

Second Generation Bus:-
Memory and the CPU were separated from the other devices.
A bus controller transferred data from the CPU side to the other devices, removing that burden from the CPU.
This allowed the development of buses separate from the CPU and the memory.
With the CPU isolated from the other system devices, it could be designed for more speed and efficiency.

Third Generation Bus:-
The third generation introduced technologies such as HyperTransport and InfiniBand.
Buses became physically more flexible, which allowed them to be used both as internal and as external buses.

Bus Arbitration:-
A bus may be controlled by more than one module, e.g. the CPU and a DMA (Direct Memory Access) controller, but it can be used by only one module at a time. Arbitration may be centralized or distributed.

Types of Bus:-
There are three buses according to the method of operation.

Figure 1 - Processor Schematic Architecture

Control Bus:-
Directs and monitors activity by signaling the functional parts of the computer system.
Carries signals from the CPU to read or write data or instructions from memory or I/O devices.
Carries signals such as read, write, acknowledge and interrupt to coordinate the operations of the system.
Synchronizes the subsystems: memory and the I/O system.

Address Bus:-
Transfers the address of the memory location or register where data is to be read or written.
An address must be transmitted over the address lines before the CPU can read or write data or instructions in memory.
The width of a microprocessor's address lines determines how many individual memory locations the microprocessor can address.
A processor with a 32-bit address bus can address 2^32 (4,294,967,296) memory locations, i.e. 4 GB of addressable space at a time.
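As a rough illustration (a minimal sketch, not tied to any particular processor), the addressable space follows directly from the address-bus width:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Addressable locations = 2^(address bus width).
           A 32-bit address bus gives 2^32 = 4,294,967,296 byte locations (4 GB). */
        unsigned width = 32;
        uint64_t locations = (uint64_t)1 << width;
        printf("%u-bit address bus -> %llu locations (%llu GB)\n",
               width, (unsigned long long)locations,
               (unsigned long long)(locations >> 30));
        return 0;
    }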

Data Bus:-
Data lines transfer data or instructions to or from the memory.
It is bi-directional, but transmits data in only one direction at a time.
It also transmits data between the memory and I/O sections during input or output operations.
The wider the data bus, the more data can flow through it per transfer.
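A hedged sketch of why width matters: peak bandwidth is roughly the bus width in bytes times the clock rate times the transfers per clock. The 64-bit, 100 MHz and one-transfer-per-clock figures below are illustrative assumptions, not values from the text.

    #include <stdio.h>

    int main(void) {
        /* Peak bus bandwidth = (width in bytes) x (clock rate) x (transfers per clock).
           All three figures here are illustrative assumptions. */
        unsigned width_bits = 64;
        double clock_hz = 100e6;
        double transfers_per_clock = 1.0;
        double bytes_per_sec = (width_bits / 8.0) * clock_hz * transfers_per_clock;
        printf("Peak bandwidth: %.0f MB/s\n", bytes_per_sec / 1e6);
        return 0;
    }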

External Bus:-
Kinds of external bus:
ISA (Industry Standard Architecture)
PCI (Peripheral Component Interconnect)
PCI-e (PCI Express)
USB (Universal Serial Bus)
AGP (Accelerated Graphics Port)

ISA (Industry Standard Architecture)
It was developed by IBM and evolved from 8 bit to 16 bit, and EISA was an attempt at 32 bit. It is used in industrial and legacy PCs and is rarely found in modern PCs. Connections are made directly to the FSB (Front Side Bus). It operates at 8 MHz.

PCI Bus Architecture (32/64 Bit)

PCI was created as an industry standard released by Intel in 1992, based on the ISA (Industry Standard Architecture) and VL (VESA Local) buses.
It provides devices connected directly to the bus with direct access to memory.
It offers higher performance without slowing down the processor.
It initially ran at 33 MHz and was later upgraded to 66 MHz.
The PCI bus concept introduced "Plug and Play".
It has a burst mode which allows multiple data sets to be sent.
Devices on the PCI bus can transfer data directly.
The latest version of the architecture is called PCI Express.

USB (Universal Serial Bus)
USB is today one of the most popular and easiest to use connectivity interface standards.
It created low-cost, hot-swappable, plug-and-play support for many computer peripherals.
USB can support high-bandwidth computer peripherals.
USB 3.0 supports bandwidth of up to 5 Gbit/s.

USB can distribute power to low-power connected devices and supports a suspend/resume power-saving mode.

Pipeline Architecture:-
Pipelining is a mechanism used in computer devices to increase instruction throughput by issuing instructions in succession without waiting for the previous instruction to complete.
Pipelining is generally divided into sub-operations:-
Instruction Fetch (IF)
Instruction Decode (ID)
Operand Fetch (OF)
Execute (EXEC)
Write Back (WB)

Mechanism:-
Pipelining segments an instruction into sub-instructions.
A task is broken down into multiple independent sub-steps, and separate processing units run those sub-steps. When a sub-step completes, another task can take its place while the first task moves on to the next sub-step.
Pipelining does not decrease the execution time of a single instruction; it increases instruction throughput.
Instruction throughput is the number of tasks a pipeline can complete per unit time (see the sketch after this list).
The pipeline clock period is controlled by the stage with the maximum delay, and unless the stage delays are balanced, a slow stage can slow down the whole pipeline.
Pipelining is the backbone of vector supercomputers.
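A minimal sketch of the throughput gain, assuming an ideal k-stage pipeline where every stage takes one clock and there are no stalls (the stage count and instruction count below are illustrative):

    #include <stdio.h>

    int main(void) {
        /* Ideal k-stage pipeline, one clock per stage, no stalls:
           sequential time = n * k cycles
           pipelined time  = k + (n - 1) cycles (fill the pipe, then one result per cycle) */
        unsigned k = 5, n = 100;
        unsigned sequential = n * k;
        unsigned pipelined  = k + (n - 1);
        printf("sequential: %u cycles, pipelined: %u cycles, speedup: %.2fx\n",
               sequential, pipelined, (double)sequential / pipelined);
        return 0;
    }

For large n the speedup approaches k, which is why the single-instruction latency is unchanged while throughput improves.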

Time-space diagram depicting the overlapped pipeline operations.

Types of pipelining architecture:-

RISC (Reduced Instruction Set Computer):-
RISC was developed in the 1970s to increase the clock rate of the CPU. It can process instructions at the rate of one instruction per machine cycle. It uses single-word instructions with fixed-field decoding.

CISC (Complex Instruction Set Computer):-
CISC is a design in which a single instruction can carry out low-level operations such as loading from and storing to memory. It uses variable-length instructions with variable formats.

Issues with Pipelining:-
Hazards with pipelining and solutions:
Hazards are conditions that prevent an instruction in the instruction stream from being executed.

Structural Hazard:
When functional units are not fully pipelined, instructions that need those units cannot be processed: a resource required by multiple instructions has not been freed, so the instructions cannot execute. The solution is to stall the pipeline for one clock cycle when the memory access occurs, reserving the resources for that instruction slot.

Data Hazard:
Occurs when pipelining changes the order of reads and writes of operands as instructions overlap.
Multiple instructions are in flight and reference the same data.
The system must ensure that a later instruction does not try to access the data before the earlier instruction has finished with it; otherwise the instructions produce incorrect results.

Internal forwarding is used to deal with data hazards.
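A minimal C-level illustration of the dependence pattern behind a data hazard; in a real pipeline the hazard arises between machine instructions, and this sketch only mirrors that read-after-write relationship:

    /* Read-after-write (RAW) dependence, sketched at the C level:
       the second statement reads r1 before the first has written it back
       unless the pipeline stalls or forwards the result. */
    int example(int a, int b, int c) {
        int r1 = a + b;   /* instruction 1: writes r1 */
        int r2 = r1 + c;  /* instruction 2: reads r1 -> RAW hazard in an unprotected pipeline */
        return r2;
    }

Internal forwarding passes the result of the first operation directly to the second without waiting for it to be written back.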

Control Hazard:
Caused by uncertainty about the execution path, when a decision must be made before the condition is evaluated. The solution to a control hazard is to stall the pipeline until the execution path is known.

Limitations and issues with pipelining arise from:

Pipeline Latency:
The fact that the execution time of each individual instruction does not get reduced places restrictions on pipeline depth.

Imbalance among pipeline stages:
Imbalanced pipeline stages reduce the rate at which instructions complete, because the clock cannot run faster than the time required by the slowest pipeline stage.

Pipeline Overhead:
This arises from pipeline register delay (setup time plus propagation delay) and clock skew (the difference in arrival time of supposedly simultaneous clock events), as illustrated below.
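A hedged sketch of how the slowest stage plus register overhead and clock skew set the minimum clock period; all the delay values are illustrative assumptions, not figures from the text:

    #include <stdio.h>

    int main(void) {
        /* Clock period = slowest stage delay + register overhead (setup + propagation) + clock skew.
           All delays below are illustrative assumptions, in nanoseconds. */
        double stage_delays[] = {2.0, 3.0, 4.0, 3.0, 2.0};
        double register_overhead = 0.5, clock_skew = 0.3;
        double max_stage = 0.0;
        for (int i = 0; i < 5; i++)
            if (stage_delays[i] > max_stage) max_stage = stage_delays[i];
        printf("Minimum clock period: %.1f ns\n", max_stage + register_overhead + clock_skew);
        return 0;
    }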

Interrupts:
Interrupts insert new instructions while an instruction is still moving through the pipeline.
Interrupts should take effect between instructions, when one instruction is complete and the next has not started, but with pipelining the next instruction usually begins before the preceding instruction completes.

Advantages of Pipelining over Non-Pipelining:-
A non-pipelined system executes a single instruction at a time, while a pipelined system can overlap instruction execution to increase system performance.
Better utilization of CPU resources.
Achieves high instruction throughput even though individual instruction latency is not reduced.
Pipelining uses combinational logic to generate control signals.
Overall improvement in the speed of the computer.


Hardware Support for Memory Management:-
Memory management is required to improve the performance of the system by increasing the efficiency of reading and writing instructions and data from memory.
The Memory Management Unit (MMU) is a hardware device that controls access between the system memory and the CPU. The primary goal of the MMU is to maximize the number of runnable processes in memory. The functions it supports can be listed as:
hardware memory management
operating system memory management
application memory management

Memory Hierarchy:-
A ranking given to the devices used for memory storage according to their speed and capacity.
The main point of the memory hierarchy is to allow fast access to a large amount of memory, trading off speed against cost.
Registers are the smallest and provide the fastest access to data.
Level 1 cache is much larger than the registers, with typical sizes ranging from a few kilobytes to tens of kilobytes.
Level 2 cache can be internal (on-chip) as well as external.
Main memory is RAM: SDRAM, etc.

Virtual Memory:-
Virtual memory is what a program uses so that it does not conflict with other programs trying to access the same location. Modern operating systems run multiple programs concurrently without them interfering with each other, so virtual memory gives each process its own address space. If one program tries to access the same memory location that another is loading into, the locations they actually use are physically different.

Figure - The memory hierarchy (smaller, faster and costlier per byte at the top; larger, slower and cheaper per byte at the bottom):
L0: registers (CPU registers hold words retrieved from the L1 cache)
L1: on-chip L1 cache, SRAM (holds cache lines retrieved from the L2 cache)
L2: off-chip L2 cache, SRAM (holds cache lines retrieved from main memory)
L3: main memory, DRAM (holds disk blocks retrieved from local disks)
L4: local secondary storage, local disks (hold files retrieved from disks on remote network servers)
L5: remote secondary storage (distributed file systems, Web servers)

This is achieved by using paging to map virtual addresses onto physical memory.
Paging breaks memory up into blocks called pages. After breaking memory into blocks, the system uses a lookup table to translate the high-order (H.O.) bits of the virtual address, which select a page, while the low-order (L.O.) bits select a location within that page.

For example, with a 4,096-byte page you would use the low-order 12 bits of the virtual address as the offset within the page in physical memory. The upper 20 bits of the address are used as an index into a lookup table, which returns the upper 20 bits of the physical address.
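A minimal sketch of that 4 KB-page split (the example address and the page_table mentioned in the comment are hypothetical, for illustration only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 4,096-byte pages: low 12 bits are the offset within the page,
           upper 20 bits select an entry in the page (lookup) table. */
        uint32_t virtual_address = 0x12345678;
        uint32_t offset      = virtual_address & 0xFFF;   /* low 12 bits  */
        uint32_t page_number = virtual_address >> 12;     /* upper 20 bits */
        printf("page number: 0x%05X, offset: 0x%03X\n",
               (unsigned)page_number, (unsigned)offset);
        /* The physical address would then be:
           (page_table[page_number] << 12) | offset  -- page_table is hypothetical here. */
        return 0;
    }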

Segmentation is another way to achieve memory protection; it works by sealing off parts of memory from running processes. An element is identified by its offset from the beginning of its segment. During address translation, a mapping is required to convert the logical address into a physical address. The logical address space and the system memory are divided into segments, which may vary in size.
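A hedged sketch of segment-based translation: a logical address is treated as a (segment, offset) pair, and translation adds the offset to the segment's base after a limit check. The segment table contents below are made up for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical segment table entry: base and limit of one segment. */
    struct segment { uint32_t base; uint32_t limit; };

    int main(void) {
        struct segment table[] = { {0x00010000, 0x4000}, {0x00080000, 0x1000} };
        uint32_t seg = 1, offset = 0x0200;        /* logical address = (segment, offset) */
        if (offset < table[seg].limit) {
            uint32_t physical = table[seg].base + offset;
            printf("physical address: 0x%08X\n", (unsigned)physical);
        } else {
            printf("segmentation fault: offset outside segment\n");
        }
        return 0;
    }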

Memory Interleaving:-
Interleaving is a sophisticated method used by high-end motherboards/chipsets to further improve overall memory performance.
It increases memory bandwidth by allowing simultaneous access to different portions of memory, so that the CPU can transfer much more data to or from memory.

It helps reduce the memory/CPU bottleneck that decreases system performance.
Interleaving breaks memory down into blocks (banks) that are accessed by different sets of control lines working together within the memory.
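A minimal sketch of low-order interleaving, assuming four banks (the bank count is an illustrative assumption): consecutive word addresses land in different banks, so their accesses can overlap.

    #include <stdio.h>

    int main(void) {
        /* Low-order interleaving with 4 banks (illustrative):
           consecutive words land in different banks, so accesses can overlap. */
        unsigned banks = 4;
        for (unsigned word_addr = 0; word_addr < 8; word_addr++) {
            unsigned bank   = word_addr % banks;   /* which bank holds this word */
            unsigned offset = word_addr / banks;   /* position within that bank  */
            printf("word %u -> bank %u, offset %u\n", word_addr, bank, offset);
        }
        return 0;
    }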

Cache Memory:-
Cache memory is a method used to speed up access to memory that programs access frequently.
Some replacement algorithms introduced to handle data access and retrieval are as follows:
1. FIFO (First In First Out)
2. Random Replacement
3. Least Recently Used (LRU)
Memory access time is significantly improved because the memory addresses of regularly accessed programs are kept at higher priority and less frequently used programs at lower priority.
A cache hit is the term for when the CPU looks for a main memory address and finds it in the cache (see the sketch below).
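A minimal sketch of how a hit or miss can be decided in a direct-mapped cache, where the address is split into a tag, an index and a block offset. The cache size, block size and addresses are illustrative assumptions, not from the text.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical direct-mapped cache: 64 lines of 64-byte blocks. */
    #define LINES      64
    #define BLOCK_BITS 6      /* 64-byte blocks */
    #define INDEX_BITS 6      /* 64 lines       */

    struct line { bool valid; uint32_t tag; };
    static struct line cache[LINES];

    /* Returns true on a cache hit, false on a miss (and then fills the line). */
    bool access_cache(uint32_t addr) {
        uint32_t index = (addr >> BLOCK_BITS) & (LINES - 1);
        uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);
        if (cache[index].valid && cache[index].tag == tag)
            return true;                      /* cache hit */
        cache[index].valid = true;            /* cache miss: bring the block in */
        cache[index].tag   = tag;
        return false;
    }

    int main(void) {
        printf("%s\n", access_cache(0x1040) ? "hit" : "miss");  /* miss: cold cache */
        printf("%s\n", access_cache(0x1040) ? "hit" : "miss");  /* hit: same block  */
        return 0;
    }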