TRANSCRIPT
OVERVIEW OF MASS STORAGE STRUCTURE
Magnetic disks provide the bulk of secondary storage in modern computers. Drives rotate 60 to 250 times per second.
Transfer rate is the rate at which data flow between the drive and the computer.
Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency).
A head crash results from the disk head making contact with the disk surface -- that's bad.
Disks can be removable.
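The access-time components above can be put into numbers. This is a minimal sketch; the 7200 RPM spin rate and 4 ms average seek time are illustrative assumptions, not values from the slides:

```python
def avg_rotational_latency_ms(rpm):
    """On average the desired sector is half a revolution away."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

def avg_access_time_ms(seek_ms, rpm):
    """Access time = seek time + rotational latency (transfer time ignored)."""
    return seek_ms + avg_rotational_latency_ms(rpm)

# A 7200 RPM drive spins 120 times per second, i.e. about 8.33 ms per
# revolution, so the average rotational latency is about 4.17 ms.
print(round(avg_rotational_latency_ms(7200), 2))   # -> 4.17
print(round(avg_access_time_ms(4.0, 7200), 2))     # -> 8.17
```

Note how rotational latency alone puts a floor on random-access time regardless of how fast the arm seeks.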
MOVING-HEAD DISK MECHANISM
DISK STRUCTURE
Disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer.
Low-level formatting creates logical blocks on the physical media.
The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially:
Sector 0 is the first sector of the first track on the outermost cylinder.
The mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.
Translating a logical address to a physical address should therefore be easy, except for bad sectors and for drives whose number of sectors per track is not constant across the disk.
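The sequential mapping just described can be sketched as a short function. This assumes the simplified case of a constant number of sectors per track; the geometry values in the example are made up for illustration:

```python
def lba_to_chs(lba, heads_per_cylinder, sectors_per_track):
    """Map a logical block number to (cylinder, head, sector).

    Block 0 is sector 0 of track 0 on the outermost cylinder; numbering
    runs through a track, then the tracks of a cylinder, then cylinders.
    """
    sectors_per_cylinder = heads_per_cylinder * sectors_per_track
    cylinder = lba // sectors_per_cylinder
    head = (lba % sectors_per_cylinder) // sectors_per_track
    sector = lba % sectors_per_track
    return cylinder, head, sector

# With 4 heads and 8 sectors per track, each cylinder holds 32 blocks,
# so block 35 lands on cylinder 1, head 0, sector 3.
print(lba_to_chs(35, heads_per_cylinder=4, sectors_per_track=8))   # -> (1, 0, 3)
```

Real drives with zoned recording have more sectors on outer tracks, which is exactly why the slide says the translation is only "easy" in the constant-geometry case.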
DISK SCHEDULING
The operating system is responsible for using hardware efficiently; for the disk drives, this means a fast access time and large disk bandwidth.
Access time has two major components: seek time and rotational latency.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
DISK SCHEDULING
When a read/write job is requested, the disk may currently be busy. All pending jobs are placed in a disk queue, which can be scheduled to improve utilization.
Disk scheduling increases the disk's bandwidth (the amount of information that can be transferred in a set amount of time).
DISK SCHEDULING
Several algorithms exist to schedule the servicing of disk I/O requests. We illustrate them with a request queue of cylinder numbers in the range 0-199:
98, 183, 37, 122, 14, 124, 65, 67
Head pointer: 53
FCFS SCHEDULING
The illustration shows total head movement of 640 cylinders:
45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640
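The FCFS total above is just the sum of absolute head movements in arrival order, which a short sketch can verify:

```python
def fcfs_total_movement(start, requests):
    """Serve requests in arrival order; sum the absolute head movements."""
    total, head = 0, start
    for r in requests:
        total += abs(r - head)
        head = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_total_movement(53, queue))   # -> 640
```

The wild swings (183 down to 37, 122 down to 14) are what the smarter algorithms below try to avoid.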
SSTF SCHEDULING
Shortest Seek Time First (SSTF) selects the request with the minimum seek time from the current head position.
SSTF scheduling is a form of SJF scheduling and may cause starvation of some requests.
The illustration shows total head movement of 236 cylinders.
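SSTF's greedy choice of the nearest pending request can be sketched directly:

```python
def sstf_total_movement(start, requests):
    """Repeatedly serve the pending request closest to the current head."""
    pending, head, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf_total_movement(53, queue))   # -> 236
```

The service order here is 65, 67, 37, 14, 98, 122, 124, 183; a steady stream of new requests near the head could keep the distant cylinder 183 waiting indefinitely, which is the starvation risk noted above.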
SCAN SCHEDULING
The disk arm starts at one end of the disk and moves toward the other end, servicing requests as it goes; when it reaches the other end, the head movement is reversed and servicing continues.
The SCAN algorithm is sometimes called the elevator algorithm.
The illustration shows total head movement of 208 cylinders (the illustrated arm reverses at the last request, cylinder 14, rather than continuing on to cylinder 0; a full sweep to the edge of the disk would give 236).
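A sketch of the SCAN total, with the head initially moving toward cylinder 0. The `sweep_to_edge` flag is an assumption added here to show both conventions: sweeping all the way to cylinder 0 before reversing gives 236 for this queue, while reversing at the last pending request (LOOK-style, as in the illustration) gives 208:

```python
def scan_total_movement(start, requests, sweep_to_edge=True, low_edge=0):
    """Head moves toward low cylinders first, then reverses.

    sweep_to_edge=True:  go all the way to low_edge before reversing (SCAN).
    sweep_to_edge=False: reverse at the lowest pending request (LOOK-style).
    """
    down = sorted(r for r in requests if r <= start)
    up = sorted(r for r in requests if r > start)
    turn = low_edge if (sweep_to_edge and down) else (down[0] if down else start)
    # Travel down to the turning point, then up to the farthest request.
    return (start - turn) + ((up[-1] - turn) if up else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_total_movement(53, queue, sweep_to_edge=True))    # -> 236
print(scan_total_movement(53, queue, sweep_to_edge=False))   # -> 208
```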
C-SCAN SCHEDULING
Circular SCAN provides a more uniform wait time than SCAN. The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.
C-SCAN treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-LOOK SCHEDULING
C-LOOK is a version of C-SCAN in which the arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
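The two circular variants can be sketched side by side for the example queue, with the head moving toward higher cylinder numbers. An assumption made here: the C-SCAN return jump is counted as head travel, which is one common convention:

```python
def cscan_total_movement(start, requests, max_cyl=199):
    """C-SCAN: sweep up to the disk edge, jump back to cylinder 0, sweep on."""
    low = sorted(r for r in requests if r < start)
    total = max_cyl - start              # sweep up to the edge
    if low:
        total += max_cyl                 # return jump to 0, counted as travel
        total += low[-1]                 # sweep up to the last low request
    return total

def clook_total_movement(start, requests):
    """C-LOOK: only go as far as the last request in each direction."""
    up = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    total = (up[-1] - start) if up else 0    # up to the highest request only
    if low:
        total += up[-1] - low[0]             # jump back to the lowest request
        total += low[-1] - low[0]            # then sweep up through the rest
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(cscan_total_movement(53, queue))   # -> 382
print(clook_total_movement(53, queue))   # -> 322
```

The gap between 382 and 322 is exactly the wasted travel from the last request to the disk edge and back, which is what C-LOOK eliminates.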
EXAMPLE
A disk queue has requests for I/O to blocks on cylinders 23, 89, 132, 42, 187, with the disk head initially at cylinder 100.
EXAMPLE
A disk system has 100 cylinders, numbered 0 to 99. Assume that the read/write head is at cylinder 50, and determine the order of head movement for each algorithm to satisfy the following stream of requests, listed in the order received:
40, 12, 22, 66, 67, 33, 80, 75, 85, 65, 8
DISK MANAGEMENT
Disk formatting, the boot block, and bad blocks.
A system's peripheral devices fall into three categories: dedicated devices, shared devices, and virtual devices. Every device is different; the most important differences among them are speed and degree of shareability.
Storage media are divided into two groups: sequential access media and direct access storage devices (DASD).
DEVICE MANAGEMENT
Dedicated devices are assigned to only one job at a time, and they serve that job for the entire time it is active. Some devices, such as tape drives, printers, and plotters, demand this kind of allocation scheme: a shared plotter might produce half of one user's graph and half of another's.
The disadvantage of dedicated devices is that they must be allocated to a single user for the duration of a job's execution, which can be quite inefficient, especially when the device isn't used 100 percent of the time.
DEDICATED DEVICES
Shared devices can be assigned to several processes. For instance, a disk pack, or any other direct access storage device, can be shared by several processes at the same time by interleaving their requests, but this interleaving must be carefully controlled by the Device Manager.
All conflicts, such as when process A and process B each need to read from the same disk pack, must be resolved based on predetermined policies that decide which request will be handled first.
SHARED DEVICES
Virtual devices are combinations of dedicated and shared devices. For example, printers (dedicated devices) are converted into sharable devices through a spooling program that reroutes all print requests to a disk.
Only when all of a job's output is complete, and the printer is ready to print the entire document, is the output sent to the printer for printing.
Because disks are sharable devices, this technique can convert one printer into several virtual printers, improving both its performance and its use.
Spooling is a technique that is often used to speed up slow dedicated devices.
VIRTUAL DEVICES
I/O HARDWARE
Devices fall into three categories:
Human readable (display, keyboard)
Machine readable (disk)
Communication (modem, network)
Devices also differ in other ways:
Data rate: keyboard about 80 b/s, CD-ROM about 6 Mb/s, video display about 1 Gb/s
Application: a disk requires file system support
Complexity of control: a keyboard driver is simpler than a disk driver
Unit of transfer (byte, block)
Data representation (character set, parity)
Error conditions
I/O TECHNIQUES
Programmed I/O: the processor issues an I/O command and waits for the operation to complete, often handling the I/O transfer details itself.
Interrupt-driven I/O: the processor issues an I/O command, then proceeds to another process or thread; the device interrupts the CPU when the data is ready to be moved to memory.
Direct memory access (DMA): the processor issues an I/O command; the device transfers data to/from memory directly (the CPU may have to wait for the memory bus); the device interrupts the CPU when the I/O transfer is completed.
PROGRAMMED I/O
[Flowchart: a programmed I/O read]
1. CPU issues a read command to the I/O module (CPU -> I/O).
2. CPU reads the status of the I/O module (I/O -> CPU).
3. If not ready, check the status again; on an error condition, handle the error.
4. When ready, CPU reads a word from the I/O module (I/O -> CPU).
5. CPU writes the word into memory (CPU -> memory).
6. If not done, go back to step 1; when done, continue to the next instruction.
PROGRAMMED I/O
In programmed I/O, the I/O device does not have direct access to memory. A data transfer from an I/O device to memory requires the CPU to execute a program, or at least several instructions, which is why this method is called programmed I/O.
In this method, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process for the CPU, so programmed I/O is particularly suited to low-speed systems.
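The busy-wait loop above can be sketched with a toy device model. MockDevice and its methods are invented stand-ins for a device's status and data registers, not a real API:

```python
class MockDevice:
    """Invented stand-in for a device with status and data registers."""
    def __init__(self, words):
        self._words = list(words)

    def status_ready(self):
        return bool(self._words)

    def read_word(self):
        return self._words.pop(0)

def programmed_io_read(device, count):
    """The CPU polls the status flag and moves every word itself."""
    memory = []
    while len(memory) < count:
        while not device.status_ready():   # busy-wait: CPU does nothing else
            pass
        memory.append(device.read_word())
    return memory

dev = MockDevice([0x10, 0x20, 0x30])
print(programmed_io_read(dev, 3))   # -> [16, 32, 48]
```

The inner `while` loop is the cost of the technique: every cycle spent polling is a cycle the CPU cannot spend on other work.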
INTERRUPT-DRIVEN I/O
[Flowchart: an interrupt-driven I/O read]
1. CPU issues a read command to the I/O device (CPU -> I/O), then does something else.
2. On the interrupt, CPU reads the status of the I/O device (I/O -> CPU).
3. On an error condition, handle the error.
4. When ready, CPU reads a word from the I/O device (I/O -> CPU).
5. CPU writes the word into memory (CPU -> memory).
6. If not done, go back to step 1; when done, continue to the next instruction.
INTERRUPT-DRIVEN I/O
In programmed I/O, the CPU has to wait for the ready signal from the I/O device. In interrupt-driven I/O, the CPU issues a read command to the I/O device and then goes off to do some other task.
When the I/O device is ready, it sends an interrupt signal to the processor.
When the CPU receives the interrupt signal, it checks the status; if the status is ready, the CPU reads the word from the I/O device and writes the word into main memory.
If the operation completed successfully, the processor goes on to the next instruction.
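The sequence above can be sketched as a toy: the CPU registers a handler, issues the read, and is free until the simulated device "interrupts" by invoking the handler. All class and function names here are invented for illustration:

```python
class InterruptingDevice:
    """Invented toy device that delivers data via a registered handler."""
    def __init__(self, word):
        self._word = word
        self._handler = None

    def register_handler(self, handler):
        self._handler = handler

    def issue_read(self):
        # A real device raises the interrupt asynchronously when ready;
        # this toy delivers it immediately for simplicity.
        self._handler(self._word)

memory = []

def on_interrupt(word):
    """Interrupt service routine: move the ready word into memory."""
    memory.append(word)

dev = InterruptingDevice(0x2A)
dev.register_handler(on_interrupt)
dev.issue_read()          # between issue and interrupt the CPU is free
print(memory)             # -> [42]
```

The key difference from the polling sketch: no loop spins on a status flag; the transfer work happens only inside the handler.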
DMA
Interrupt-initiated I/O is better than programmed I/O, but the CPU is still on the data path at the time of transfer, so it is not suitable for large transfers. An alternative approach is DMA: remove the CPU from the path and let the DMA controller manage the memory buses directly, which improves the speed of transfer.
The DMA controller takes over the buses to manage the transfer directly between the I/O device and memory.
The DMA controller is first initialized by the CPU, which sends it:
The starting address of the memory block where data are available for reading, or where the data are to be stored.
The word count, which is the number of words in the block.
The mode of transfer, such as read or write.
A control signal to start the DMA transfer.
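The CPU's side of a DMA transfer, as listed above, can be sketched as programming a controller with an address, a word count, and a mode, then starting it. DMAController and its methods are invented for illustration:

```python
class DMAController:
    """Invented toy controller: the CPU programs it, then stays off the path."""
    def __init__(self, device_words, memory):
        self._device_words = device_words
        self._memory = memory

    def program(self, start_addr, word_count, mode):
        # The CPU supplies: starting address, word count, transfer mode.
        self._addr, self._count, self._mode = start_addr, word_count, mode

    def start(self, on_complete):
        assert self._mode == "read"
        # The controller, not the CPU, copies each word into memory.
        for i in range(self._count):
            self._memory[self._addr + i] = self._device_words[i]
        on_complete()       # completion interrupt back to the CPU

memory = [0] * 8
done = []
dma = DMAController(device_words=[7, 8, 9], memory=memory)
dma.program(start_addr=2, word_count=3, mode="read")
dma.start(on_complete=lambda: done.append(True))
print(memory)   # -> [0, 0, 7, 8, 9, 0, 0, 0]
```

The CPU's only involvement is the initial programming and the final completion interrupt; the per-word copying belongs entirely to the controller.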
I/O BUFFERING
A “buffer” is a temporary storage area. A buffer holds data while they are transferred between two devices, or between a device and an application.
For example, when a user issues a print command, the program's output is temporarily stored in the printer buffer, and the printer consumes the data from that buffer.
This type of mechanism is called buffering.
I/O BUFFERING
Buffering is done for three reasons:
If the producer device is fast and the consumer device is slow, buffering is required. Example: hard disk (producer) and printer (consumer).
If two devices have different data transfer sizes (block sizes), buffering is required. In networks, a large message is divided into a number of small packets; the packets are sent over the network, and the receiving side places them into a buffer to reassemble them into the sizes used by the source device.
A third use of buffering is to support copy semantics for application I/O. Example: copying data between kernel buffers and application data space.
Buffers are of three types: single buffering, double buffering, and circular buffering.
SINGLE BUFFERING
In single buffering, only one buffer is used to transfer the data between two devices. The producer produces one block of data into the buffer; after that the consumer consumes the buffer, and only when the buffer is empty does the producer produce data again.
The main drawback of this method is that the data transfer rate is very low, because the producer has to wait while the consumer is consuming the data.
[Figure: a single buffer inside the operating system, between the producer (input device) and the consumer (output device)]
DOUBLE BUFFERING
In this scheme, two buffers are used in place of one. The producer fills one buffer while, at the same time, the consumer consumes the other, so the producer need not wait for its buffer to be emptied.
[Figure: two buffers, Buffer-1 and Buffer-2, inside the operating system, between the producer (input device) and the consumer (output device)]
CIRCULAR BUFFERING
When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual buffer being one unit in the circular buffer.
The data transfer rate increases with a circular buffer compared with double buffering.
[Figure: N buffers, Buffer-1 through Buffer-N, inside the operating system, arranged as a circular buffer between the producer (input device) and the consumer (output device)]
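A minimal sketch of the circular arrangement: the producer writes at a head index, the consumer reads at a tail index, and both wrap modulo the number of buffers, so with N buffers neither side waits unless the ring is completely full or completely empty:

```python
class CircularBuffer:
    """Fixed-size ring of buffer slots with wrapping head/tail indices."""
    def __init__(self, size):
        self._slots = [None] * size
        self._size = size
        self._head = self._tail = self._count = 0

    def put(self, item):
        if self._count == self._size:
            raise BufferError("producer must wait: buffer full")
        self._slots[self._head] = item
        self._head = (self._head + 1) % self._size   # wrap around
        self._count += 1

    def get(self):
        if self._count == 0:
            raise BufferError("consumer must wait: buffer empty")
        item = self._slots[self._tail]
        self._tail = (self._tail + 1) % self._size   # wrap around
        self._count -= 1
        return item

buf = CircularBuffer(3)
for block in ["b1", "b2", "b3"]:
    buf.put(block)              # producer fills all three slots
print(buf.get(), buf.get())    # -> b1 b2 (consumer drains in FIFO order)
buf.put("b4")                  # wraps into the slot freed by the consumer
print(buf.get(), buf.get())    # -> b3 b4
```

Double buffering is just this structure with size 2; adding slots smooths out longer bursts of speed mismatch between producer and consumer.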