
Operating System

Allen C.-H. Wu

Department of Computer Science

Tsing Hua University


Part I: Overview
Ch. 1 Introduction

• Operating system: a program that acts as an intermediary between a user and the computer hardware. The goals are to make the computer system convenient to use and to run it efficiently.

• Why, what and how?
• DOS, Windows, UNIX, Linux
• Single-user, multi-user


1.1 What Is an Operating System

• OS = government: resource allocation => CPU, memory, IO, storage

• OS: a control program that controls the execution of user programs to prevent errors and improper use of the computer.

• Convenience for the user and efficient operation of the computer system

[Figure: layered view of a computer system: users at the top, system and application programs, the operating system, and the hardware at the bottom]


1.2 Mainframe Systems

• Batch systems

• Multiprogrammed systems

• Time-sharing systems


Batch Systems

• In the early days (before the PC era), computers were extremely expensive; only a few institutions could afford one.

• The common IO devices include card readers, tape drives, and line printers.

• To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group.

• The OS was simple: it needed only to transfer control automatically from one job to the next.


Batch Systems

• speed(CPU) >> speed(IO: card readers) => the CPU is frequently idle.

• With the introduction of disk technology, the OS can keep all jobs on a disk instead of reading them serially from a card reader, and can perform job scheduling (Ch. 6) to run tasks more efficiently.


Multiprogrammed Systems

• Multiprogramming: the OS keeps several jobs in memory simultaneously, interleaving CPU and IO operations among the jobs to maximize CPU utilization.

• Real-life example: a lawyer handling multiple cases for many clients.

• Multiprogramming is the first instance where OS must make decisions for the users: job scheduling and CPU scheduling.


Time-Sharing Systems

• Time sharing or multitasking: the CPU executes multiple jobs by switching among them, but the switches are so quick and so frequent that users can interact with each program while it is running (each user thinks that he/she is the only user).

• A time-sharing OS uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.

• Process: a program that has been loaded into memory and is executing.


Time-Sharing Systems

• Need memory management and protection methods (Ch. 9)

• Virtual memory (Ch. 10)
• File systems (Ch. 11)
• Disk management (Ch. 13)
• CPU scheduling (Ch. 6)
• Synchronization and communication (Ch. 7)


1.3 Desktop Systems

• MS-DOS, Microsoft Windows, Linux, IBM OS/2, Macintosh OS
• Mainframe (MULTICS: MIT) => minicomputers (DEC: VMS, Bell Labs: UNIX) => microcomputers => network computers
• Personal workstation: a large PC (SUN, HP, IBM: Windows NT, UNIX)
• PCs are mainly single-user systems: no resource sharing is needed; but with Internet access, security and protection are needed

• Worm or virus


1.4 Multiprocessor Systems

• Multiprocessor systems: tightly coupled systems
• Why? 1) improved throughput, 2) cost savings from resource sharing (peripherals, storage, and power), and 3) increased reliability (graceful degradation, fault tolerance)

• Symmetric multiprocessing: each processor runs an identical OS, needs communication between processors

• Asymmetric multiprocessing: one master control processor, master-slave


Multiprocessor Systems

• Back ends
• => microprocessors have become inexpensive
• => additional microprocessors can off-load some OS functions (e.g., using a microprocessor system to control disk management)
• A kind of master-slave multiprocessing


1.5 Distributed Systems

• Network, TCP/IP, ATM protocols

• Local-area network (LAN)

• Wide-area network (WAN)

• Metropolitan-area network (MAN)

• Client-server systems (computer-server, file server)

• Peer-to-peer systems (WWW)

• Network operating systems


1.6 Clustered Systems

• High availability: one machine can monitor one or more of the others (over the LAN). If the monitored machine fails, the monitoring machine takes ownership of its storage and restarts the applications that were running on the failed machine.

• Asymmetric and symmetric modes


1.7 Real-Time Systems

• There are rigid time requirements on the operation of a processor or control/data flow

• Hard real-time systems: the critical tasks must be guaranteed to be completed on time

• Soft real-time systems: a critical real-time task gets priority over other tasks


1.8 Handheld Systems

• PDAs (personal digital assistants), e.g., Palm Pilots and cellular phones.

• Considerations: small memory size, slow processor speed, and low power consumption.

• Web clipping


1.9 Feature Migration

• MULTICS (MULTIplexed Information and Computing Services) operating system: MIT -> GE645

• UNIX: Bell Lab -> PDP11

• Microsoft Windows NT, IBM OS/2, Macintosh OS


1.10 Computing Environments

• Traditional computing: network, firewalls

• Web-based computing

• Embedded computing


Ch. 2 Computer-System Structures

[Figure: computer-system structure: the CPU, a disk controller (disks), a printer controller (printers), a tape-drive controller (tape drives), and a memory controller (memory), all attached to a common system bus]


2.1 Computer-System Operation

• Bootstrap program
• Modern OSs are interrupt driven
• Interrupt vector: interrupting device address, interrupt request, and other info
• System call (e.g., performing an I/O operation)
• Trap


2.2 I/O Structure

• SCSI (small computer-systems interface): can attach seven or more devices

• Synchronous I/O: I/O requested => I/O started => I/O completed => returned control to user program

• Asynchronous I/O: I/O requested => I/O started => control returned to the user program without waiting for the completion of the I/O operation

• Device-status table: indicates the device’s type, address, and state (busy, idle, not functioning)


I/O Structure

• DMA (Direct Memory Access)
• Data transfer between high-speed I/O devices and main memory
• Block transfer with one interrupt (no per-byte/word CPU intervention)
• Cycle stealing
• A back-end microprocessor?


2.3 Storage Structure

• Main memory: RAM (SRAM and DRAM)
• von Neumann architecture: instruction register
• Memory-mapped I/O, programmed I/O (PIO)
• Secondary memory
• Magnetic disks, floppy disks
• Magnetic tapes


2.4 Storage Hierarchy

• Bridging the speed gap
• registers => cache => main memory => electronic disk => magnetic disk => optical disk => magnetic tapes
• Volatile storage: data lost when power is off
• Nonvolatile storage: storage systems below electronic disk are nonvolatile
• Cache: small but fast (cache management: hit and miss)
• Coherency and consistency


2.5 Hardware Protection

• Resource sharing (multiprogramming) improves utilization but also increases problems

• Many programming errors are detected by the hardware and reported to OS (e.g., memory fault)

• Dual-mode operation: user mode and monitor mode (also called supervisor, system or privileged mode: privileged instructions): indicated by a mode bit.

• Whenever a trap occurs, the hardware switches from user mode to monitor mode


Hardware Protection

• I/O protection: all I/O instructions should be privileged instructions. The user can only perform I/O operation through the OS.

• Memory protection: protect the OS from access by user programs, and protect user programs from each other: base and limit registers.

• CPU protection: A timer to prevent a user program from getting stuck in an infinite loop.


2.6 Network Structure

• LAN: covers a small geographical area, twisted-pair and fiber-optic cabling, high speed, Ethernet.

• WAN: ARPAnet (academic research), routers, modems.


Ch. 3 OS Structure

• Examining the services that an OS provides
• Examining the interface between the OS and users
• Disassembling the system into components and their interconnections
• OS components:
=> Process management
=> Main-memory management
=> File management
=> I/O-system management
=> Secondary-storage management
=> Networking
=> Protection system
=> Command interpreter


3.1 System Components
Process Management

• Process: a program in execution (e.g., a compiler, a word-processing program)

• A process needs certain resources (e.g., CPU, memory, files and I/O devices) to complete its task. When the process terminates, the OS will reclaim any reusable resources.

• OS processes and user processes: The execution of each process must be sequential. All the processes can potentially execute concurrently, by multiplexing the CPU among them.


Process Management

The OS should perform the following tasks:
• Creating and deleting processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
• => Ch. 4 - Ch. 7


Main-Memory Management

• Main memory is a repository of quickly accessible data shared by the CPU and I/O devices (Store data as well as program)

• Using absolute address to access data in the main memory

• Each memory-management scheme requires its own hardware support

• The OS should be responsible for the following tasks:
=> Tracking which parts of memory are currently used and by whom
=> Deciding which processes should be loaded into memory
=> Allocating and deallocating memory as needed


File Management

• Different I/O devices have different characteristics (e.g., access speed, capacity, access method) - physical properties

• File: a collection of related information defined by its creator. The OS provides a logical view of information storage (the file) regardless of its physical properties

• Directories => files (organizer) => access right for multiple users


File Management

The OS should be responsible for:
• Creating and deleting files
• Creating and deleting directories
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on nonvolatile storage
• => Ch. 11


I/O-System Management

• An OS should hide the peculiarities of specific hardware devices from the user

• The I/O subsystem consists of:
• A memory-management component including buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices


Secondary-Storage Management

• Most modern computer systems use disks as the principal on-line storage medium, for both programs and data

• Most programs are stored on disk and loaded into main memory whenever needed

• The OS should be responsible for:
=> Free-space management
=> Storage allocation
=> Disk scheduling
=> Ch. 13


Networking

• Distributed system: a collection of independent processors that are connected through a communication network

• FTP: file transfer protocol
• NFS: network file system protocol
• WWW: http
• => Ch. 14 - Ch. 17


Protection System

• For a multi-user/multi-process system: process execution must be protected

• Any mechanism for controlling access to programs, data, and resources

• Authorized and unauthorized access and usage


Command-Interpreter System

• OS (kernel) <=> command interpreter (shell) <=> user

• Control statements
• A mouse-based window OS:
• Clicking an icon: depending on the mouse pointer's location, the OS can invoke a program, or select a file or a directory (folder).


3.2 OS Services

• Program execution
• I/O operation
• File-system manipulation
• Communications
• Error detection
• Resource allocation
• Accounting
• Protection


3.3 System Calls

• System calls: the interface between a process and the OS

• Mainly in assembly-language instructions
• Can also be invoked from a higher-level language program (C, C++ for UNIX; Java + C/C++)
• Ex. Copying one file to another: how can system calls perform this task? (See the sketch below.)
• Three common ways to pass parameters to the OS: registers, a memory block, or the stack (push/pop)


System Calls

Five major categories:
• Process control
• File manipulation
• Device manipulation
• Information maintenance
• Communications


Process Control

• End, abort:
=> Halt the execution normally (end) or abnormally (abort)
=> Core dump file: debugger
=> Error level and possible recovery
• Load, execute:
=> When to load/execute? Where to return control after it's done?
• Create/terminate process:
=> When? (wait time/event)


Process Control

• Get/set process attributes
=> Core dump file for debugging
=> A time profile of a program
• Wait for time; wait event, signal event
• Allocate and free memory
• MS-DOS: a single-tasking system
• Berkeley UNIX: a multitasking system (using fork to start a new process)


File Management

• Create/delete file
• Open, close
• Read, write, reposition (e.g., to the end of the file)
• Get/set file attributes


Device Management

• Request/release device
• Read, write, reposition
• Get/set device attributes
• Logically attach and detach devices


Information Maintenance

• Get/set time or date
• Get/set system data (e.g., OS version, free memory space)
• Get/set process, file, or device attributes (e.g., current users and processes)


Communications

• Create, delete communication connection: message-passing and shared-memory model

• Send, receive messages: host name (IP name), process name

• Daemons: source (client) <-> connection <-> the receiving daemon (server)
• Transfer status information
• Attach or detach remote devices


3.4 System Programs

• OS: a collection of system programs including file management, status information, file modification, programming-language support, program loading and execution, and communications

• The OS is supplied with system utilities or application programs (e.g., web browsers, compilers, word processors)

• Command interpreter: the most important system program
=> contains code to execute the command
=> UNIX: a command refers to a file; load the file into memory and execute it
rm G => search for the file rm => load the file => execute it with the parameter G


3.5 System Structure (Simple Structure)

• MS-DOS: application programs are able to directly access the basic I/O routines (the 8088 has no dual mode and no hardware protection) => errant programs may crash the entire system

• UNIX: the kernel and the system programs
• System calls define the application programmer interface (API) to UNIX

FIG3.6

FIG3.7


Layered Approach

• Layer 0 (the bottom one): the hardware, layer N (the top one): the user interface

• The main advantage of the layered approach: modularity
Pro: simplifies the design and implementation
Con: not easy to appropriately define the layers; less efficient
• Windows NT: a highly layer-oriented organization
=> lower performance compared to Windows 95
=> Windows NT 4.0: moved layers from user space to kernel space to improve performance


Microkernels

• Carnegie Mellon Univ. (1980s): Mach
• Idea: remove all nonessential components from the kernel and implement them as system- and user-level programs

• Main function: microkernel provides a communication facility (message passing) between the client program and various services (running in user space)

• Ease of extending the OS: new services are added in user space, with no change to the kernel


Microkernels

• Easy to port, more security and reliability (most services run as user processes; if a service fails, the rest of the OS remains intact)
• Digital UNIX
• Apple MacOS Server OS
• Windows NT: a hybrid structure

FIG 3.10


Virtual Machines

• VM: IBM
• Each process is provided with a (virtual) copy of the underlying computer
• Major difficulty: disk systems => minidisks
Implementation:
• Difficult to implement: must switch between a virtual user mode and a virtual monitor mode
• Less efficient at run time

FIG 3.11


Virtual Machines

Benefits:
• The environment provides complete protection of the various system resources (but no direct sharing of resources)
• A perfect vehicle for OS research and development
• No system-development time is needed: system programmers can work on their own virtual machines to develop their systems
• MS-DOS (Intel) <=> UNIX (SUN)
• Apple Macintosh (68000) <=> Mac (old 68000)
• Java


Java

• Java: a technology rather than just a programming language: SUN, late 1995

• Three essential components:

=> Programming-language specification

=> Application-programming interface (API)

=> Virtual-machine specification


Java

Programming language
• Object-oriented, architecture-neutral, distributed and multithreaded programming language
• Applets: programs with limited resource access that run within a web browser
• A secure language (running on distributed networks)
• Performs automatic garbage collection


Java

API
• Basic language: support for graphics, I/O, utilities and networking
• Extended language: support for enterprise, commerce, security and media
Virtual machine
• JVM: a class loader and a Java interpreter
• Just-in-time compiler: turns the architecture-neutral bytecodes into native machine language for the host computer


Java

• The Java platforms: JVM and Java API => make it possible to develop programs that are architecture neutral and portable

• Java development environment: a compile-time and a run-time environment


3.8 System Design and Implementation

• Define the goals and specification
• User goals (wish list) and system goals (implementation concerns)
• The separation of policy (what should be done) and mechanism (how to do it)
• Microkernel: implements a basic set of policy-free primitive building blocks
• Traditionally, an OS is implemented in assembly language (better performance, but portability is the problem)


System Design and Implementation

High-level language implementation
• Easier porting but slower speed and more storage
• Needs better data structures and algorithms
• MULTICS (ALGOL); UNIX, OS/2, Windows (C)
• Noncritical parts (HLL), critical parts (assembly language)
• System generation (SYSGEN): creating an OS for a particular machine configuration (e.g., CPU? Memory? Devices? Options?)


Part II: Process Management
Ch. 4 Processes

4.1 Process Concept
• Process (job): a program in execution
• Ex. On a single-user system (PC), the user can run multiple processes (jobs), such as a web browser, a word processor, and a CD player, simultaneously

• Two processes may be associated with the same program. Ex. You can invoke an editor twice to edit two files (two processes) simultaneously


Process Concept

Process state:
• Each process may be in one of 5 states: new, running, waiting, ready, and terminated

[Figure: state diagram: new -(admitted)-> ready -(scheduler dispatch)-> running; running -(interrupt)-> ready; running -(I/O or event wait)-> waiting -(I/O or event completion)-> ready; running -(exit)-> terminated]


Process Concept

Process Control Block (PCB): represents a process
• Process state: new, ready, running, waiting or exit
• Program counter: points to the next instruction to be executed for the process
• CPU registers: when an interrupt occurs, this data needs to be stored to allow the process to be continued correctly

• CPU-scheduling information: process priority (Ch.6)

• Memory-management information: the values of base and limit registers, the page tables...

FIG 4.2


Process Concept

• Accounting information: account number, process number, time limits…

• IO status information: a list of IO devices allocated to the process, a list of open files….

Threads
• Single thread: a process is executed with one control/data flow
• Multithread: a process is executed with multiple control/data flows (e.g., while running an editor, a process can execute "type in" and spelling check at the same time)

FIG 4.3


4.2 Process Scheduling

• The objective of multiprogramming: maximize the CPU utilization (keep the CPU running all the time)

Scheduling queues
• Ready queue (usually a linked list): the processes that are in main memory and ready to be executed

• Device queue: the list of processes waiting for a particular IO device

FIG 4.4


Process Scheduling

• Queueing diagram: [the ready queue feeds the CPU; a running process leaves the CPU on an I/O request (entering an I/O queue until the I/O completes), when its time slice expires, when it forks a child, or while waiting for an interrupt, and in each case eventually returns to the ready queue]


Process Scheduling

Scheduler
• Long-term scheduler (job scheduler): selects processes from a pool and loads them into main memory for execution (runs less frequently and has more time to make a careful selection)
• Short-term scheduler (CPU scheduler): selects among ready processes for execution (runs more frequently and must be fast)

• The long-term scheduler controls the degree of multiprogramming (the # of processes in memory)


Process Scheduling

• IO-bound process
• CPU-bound process
• If all processes are IO-bound => the ready queue will almost always be empty => the short-term scheduler has nothing to do
• If all processes are CPU-bound => the IO-waiting queue will almost always be empty => devices will be unused
• Balanced system performance = a good mix of IO-bound and CPU-bound processes


Process Scheduling

• The medium-term scheduler: using swapping to improve the process mix

• Context switching: switching the CPU to a new process => saving the state of the suspended process AND loading the saved state for the new process

• Context-switching time is pure overhead and is heavily dependent on hardware support

FIG 4.6


4.3 Operations on Processes

Process creation
• A process may create several new processes: parent process => children processes (a tree)
• Subprocesses may obtain resources from their parent (which may overload it) or from the OS
When a process creates a new one, in terms of execution:
1. The parent and the child run concurrently
2. The parent waits until all of its children have terminated


4.3 Operations on Processes

In terms of the address space of the new process:
1. The child process is a duplicate of the parent process
2. The child process has a new program loaded into it
• In UNIX, each process has a process identifier. The "fork" system call creates a new process (consisting of a copy of the address space of the original process). Advantage? Easy communication between the parent and child processes.


4.3 Operations on Processes

• “execlp” system call (after “fork”): replace the process’ memory space with a new program

pid = fork();
if (pid < 0)
    exit(1);                        /* fork failed */
else if (pid == 0)
    execlp("/bin/ls", "ls", NULL);  /* overlay the child with UNIX "ls" */
else {
    wait(NULL);                     /* wait for the child to complete */
    printf("Child Complete");
    exit(0);
}


4.3 Operations on Processes

Process termination
• "exit": system call to terminate a process
• Cascading termination: when a process terminates, all its children must also be terminated


4.4 Cooperating Processes

• Independent and cooperating processes
• Any process that shares data with other processes is a cooperating process
WHY is process cooperation needed?
• Information sharing
• Computation speedup (e.g., parallel execution of CPU and IO)
• Modularity: dividing the system functions into separate processes


4.4 Cooperating Processes

• Convenience: for a single-user, many tasks can be executed at the same time

• Producer-consumer
• Unbounded/bounded buffer
• The shared buffer: implemented as a circular array (see the sketch below)
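A minimal sketch (an illustration, not from the slides) of the bounded buffer as a circular array with in/out indices, in C; it leaves one slot empty to distinguish full from empty, and the busy-wait loops are safe only for a single producer and a single consumer:

#define BUFFER_SIZE 10
typedef int item;                      /* hypothetical item type */

item buffer[BUFFER_SIZE];              /* the shared circular array */
int in = 0;                            /* next free slot (producer) */
int out = 0;                           /* next full slot (consumer) */

void produce(item it) {
    while ((in + 1) % BUFFER_SIZE == out)
        ;                              /* buffer full: busy-wait */
    buffer[in] = it;
    in = (in + 1) % BUFFER_SIZE;
}

item consume(void) {
    while (in == out)
        ;                              /* buffer empty: busy-wait */
    item it = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return it;
}

Ch. 7 replaces these busy-wait loops with proper synchronization (semaphores).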


4.5 Interprocess Communication (IPC)

Message-passing system
• "send" and "receive"
• Fixed or variable size of messages
Communication link
• Direct/indirect communication
• Symmetric/asymmetric communication
• Automatic or explicit buffering
• Send by copy or by reference
• Fixed or variable-sized messages


4.5 Interprocess Communication (IPC)

Naming
Direct communication (a link between two processes)
• Symmetric addressing: send(p, message), receive(q, message): explicit names of the recipient and sender
• Asymmetric addressing: send(p, message), receive(id, message): the variable id is set to the sender's name

• Disadvantage: limited modularity of the process definition (all the old names need to be found before it can be modified; not suitable for separate compilation)


4.5 Interprocess Communication (IPC)

Indirect communication
• Using mailboxes or ports
• Supports links among multiple processes
• A mailbox may be owned by a process (when the process terminates, the mailbox disappears), or
• If the mailbox is owned by the OS, the OS must allow the process to: create a new mailbox, send/receive messages via the mailbox, and delete the mailbox


4.5 Interprocess Communication (IPC)

Synchronization
• Blocking/nonblocking send and receive
• Blocking (synchronous), nonblocking (asynchronous)
• A rendezvous between the sender and receiver occurs when both send and receive are blocking

Buffering
• Zero/bounded/unbounded capacity


Mach

• Message based: using ports
• When a task is created, two mailboxes are created: the Kernel port (kernel communication) and the Notify port (notification of event occurrences)
• Three system calls are needed for message transfer: msg_send, msg_receive, and msg_rpc (Remote Procedure Call)
• Mailbox: initially an empty queue: FIFO order
• Message: fixed-length header, variable-length data


Mach

• If the mailbox is full, the sender has 4 options:
1. Wait indefinitely until there is free room
2. Wait for N ms
3. Do not wait, just return immediately
4. Temporarily cache a message
• The receiver must specify the mailbox or the mailbox set
• Mach was designed for distributed systems


Windows NT

• Employs modularity to increase functionality and decrease the implementation time for adding new features

• NT supports multiple OS subsystems, which communicate by message passing (the local procedure-call facility, LPC)

• Uses ports for communication: connection ports (created by the client) and communication ports (created by the server)

• 3 types of message-passing techniques:
1. 256-byte message queue
2. Large messages via shared memory
3. Quick LPC (64k)


4.6 Communication in Client-Server Systems

• Socket: made up of an IP address concatenated with a port number

• Remote procedure calls (RPC)


Ch. 5 Threads
5.1 Overview

• A lightweight process: a basic unit of CPU utilization

• A heavyweight process: a single thread of control
• Multithreading is common practice: e.g., a web browser has one thread displaying text/images and another retrieving data from the network

• When a single application requires to perform several similar tasks (e.g., web server accepts many clients’ requests), using threads is more efficient than using processes.

FIG 5.1


Benefits

4 main benefits:
• Responsiveness: allows a program to continue running even if part of it is blocked or performing a lengthy operation
• Resource sharing: memory and code
• Economy: allocating memory and resources for a process is more expensive (in Solaris, creating a process is about 30 times slower than creating a thread, and context switching is about 5 times slower)
• Utilization of multiprocessor architectures (on a single processor, only one thread runs at a time)


User and Kernel Threads

User threads
• Implemented by a thread library at the user level that supports thread creation, scheduling and management with no kernel support
• Advantage: fast
• Disadvantage: if the kernel is single-threaded, any user-level thread making a blocking system call blocks the entire process

• POSIX Pthreads, Mach C-threads, Solaris threads


User and Kernel Threads

Kernel threads
• Supported by the OS
• Slower than user threads
• If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution
• Windows NT, Solaris, Digital UNIX


5.2 Multithreading Models

Many-to-one model: many user-level threads to one kernel thread
• Only one user thread can access the kernel thread at a time => can't run in parallel on multiprocessors
One-to-one model
• More concurrency (allows parallel execution)
• Overhead: one kernel thread for each user thread
Many-to-many model
• The # of kernel threads => specific to a particular application or machine
• Does not suffer the drawbacks of the other two models


5.3 Threading Issues

• The fork and exec system calls

• Cancellation: asynchronous and deferred

• Signal handling: default and user-defined

• Thread pools

• Thread-specific data


5.4 Pthreads

• POSIX standard (IEEE 1003.1c): an API for thread creation and synchronization

• A specification for thread behavior not an implementation
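A minimal Pthreads sketch in C (an illustration of the API, not from the slides; the function name worker is hypothetical):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {              /* runs in its own thread of control */
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);   /* create the thread */
    pthread_join(tid, NULL);                   /* wait for it to finish */
    return 0;
}

Compile with -lpthread on most systems.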


5.5 Solaris Threads

• Until 1992, Solaris supported only a single thread of control

• Now, it supports kernel/user-level, symmetric multiprocessing, and real-time scheduling

• Intermediate-level of threads: user-level <=>lightweight processes (LWP)<=>kernel-level

• Many-to-many model
• User-level threads: bound (permanently attached to an LWP) or unbound (multiplexed onto the pool of available LWPs)

FIG 5.6


Solaris Threads

• Each LWP is connected to one kernel-level thread, whereas each user-level thread is independent of the kernel


5.6-8 Other Threads

• Windows 2000
• Linux
• Java


Ch. 6 CPU Scheduling
6.1 Basic Concepts

• The objective of multiprogramming: maximize the CPU utilization

• Scheduling: the center of the OS
• CPU-IO burst cycle: an IO-bound program has many short CPU bursts; a CPU-bound program has a few very long CPU bursts
• CPU scheduler: the short-term scheduler
• Queue: FIFO, priority, tree, or a linked list
Preemptive scheduling
CPU scheduling decisions depend on:


Basic Concepts

1. A process switches from running to waiting state
2. A process switches from running to ready state
3. A process switches from waiting to ready state
4. A process terminates
• When 1 or 4 occurs, a new process must be selected for execution, but not necessarily for 2 and 3
• A scheduling scheme that acts only on 1 and 4 is called nonpreemptive or cooperative (once the CPU is allocated to a process, the process keeps the CPU until it terminates or moves to the waiting state)


Basic Concepts

• The preemptive scheduling scheme needs to consider how to swap the process execution and maintain the correct execution (Context switching)

Dispatcher: gives control of the CPU to a newly selected process

• Switching context
• Switching to user mode
• Jumping to the proper location in the user program and starting it
• Dispatch latency: the time between stopping the old process and starting the new one


6.2 Scheduling Criteria

• CPU utilization
• Throughput: the # of processes completed per unit time
• Turnaround time: from submission of a process to its completion
• Waiting time: the sum of the periods spent waiting in the ready queue
• Response time: for interactive systems (minimizing the variance of the response time is more important than minimizing the average response time)


6.3 Scheduling Algorithms

• Comparison of the average waiting time
FCFS (first come, first served)
• Convoy effect: all other processes wait for one big process to get off the CPU (see the worked example below)
• The FCFS scheduling algorithm is nonpreemptive
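A worked mini-example (hypothetical burst times): let P1, P2, P3 arrive at time 0 with CPU bursts of 24, 3, and 3 ms. FCFS in the order P1, P2, P3 gives waiting times 0, 24, and 27 ms, an average of (0 + 24 + 27) / 3 = 17 ms. Running the short jobs first (order P2, P3, P1) gives waiting times 6, 0, and 3 ms, an average of only 3 ms: the convoy effect in numbers.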


Scheduling Algorithms

SJF (shortest-job-first scheduling)
• Provably optimal
• Difficulty: how to know the length of the next CPU burst???
• Used frequently in long-term scheduling


Scheduling Algorithms

• Predict: exponential average

• Preemptive SJF: shortest-remaining-time-first
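The exponential average is the standard predictor: with t_n the length of the nth measured CPU burst, tau_n the prediction for it, and 0 <= alpha <= 1,

    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n

alpha = 0 ignores recent history, alpha = 1 uses only the last burst, and alpha = 1/2 weights the most recent burst and the past history equally.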


Scheduling Algorithms

Priority scheduling
• Priorities can be defined internally (some measure such as time or memory size) or externally (specified by the users)
• Either preemptive or nonpreemptive
• Problem: starvation (a low-priority process may never be executed)
• Solution: aging (increase priority over time)


Scheduling Algorithms

Round-robin (RR) scheduling
• Suitable for time-sharing systems
• Time quantum: circular queue of processes
• The average waiting time is often long
• The RR scheduling algorithm is preemptive


Scheduling Algorithms

• Performance => size of the time quantum
=> extremely large (= FCFS); extremely small (processor sharing)

• Rule of thumb => 80% of CPU bursts should be shorter than the time quantum

• Performance => context switch effect => time quantum > time(context switching)

• Turnaround time => size of the time quantum


Scheduling Algorithms

Multilevel queue scheduling
• Priority: foreground (interactive) processes > background (batch) processes
• Partitions the ready queue into several separate queues
• The processes are permanently assigned to a queue based on some properties of the process (e.g., process type, memory size...)
• Each queue has its own scheduling algorithm
• Scheduling between queues: 1) fixed-priority preemptive scheduling, 2) time slices between queues

FIG 6.6


Scheduling Algorithms

Multilevel feedback-queue scheduling
• Allows a process to move between queues
• The idea is to separate processes with different CPU-burst characteristics (e.g., move a process using too much CPU time to a lower-priority queue)

• What are considerations for such decisions?


6.4 Multiple-Processor Scheduling

• Homogeneous: all processors are identical
• Load sharing among processors
• Symmetric multiprocessing (SMP): each processor is self-scheduling; it examines a common ready queue and selects a process to execute (what are the main concerns?)
• Asymmetric multiprocessing: a master server handles all scheduling decisions


6.5 Real-Time Scheduling

• Hard real-time: resource reservation (impossible using a secondary memory or virtual memory)

• It requires a special-purpose software running on hardware dedicated to the critical process to satisfy the hard real-time constraints

• Soft real-time: guarantee critical processes having higher priorities

• The system must have priority scheduling and the real-time processes must have the highest priority, and will not degrade with time

• The dispatch latency must be short. HOW?


Real-Time Scheduling

• Preemption points in long-duration system calls
• Making the entire kernel preemptible
• What if a high-priority process needs to read/modify kernel data that is currently being used by a low-priority process? (Priority inversion)

• Priority-inheritance protocol: the processes that are accessing resources the high-priority process needs inherit the high priority and continue running until they all complete


6.6 Algorithm Evaluation

• Deterministic modeling: analytic evaluation (given predetermined workloads and based on that to define the performance of each algorithm)

• Queueing models: limited theoretical analysis
• Simulations: random-number generator; may be inaccurate due to the assumed distribution (defined empirically or mathematically). Solution: trace tapes (monitoring the real system)

• Implementation: most accurate but with high cost.


Ch. 7 Process Synchronization
7.1 Background

• Why?
• Threads: share a logical address space
• Processes: share data and code
• They have to wait in line until their turns
• Race condition


7.2 Critical-Section Problem

• Critical section: a thread has a segment of code in which the thread may change the common data

A solution to the critical-section problem must satisfy:

• Mutual exclusion• Progress• Bounded waiting


Two-Tasks Solutions

Two tasks T0 and T1 share a critical section: HOW do we coordinate them?

Alg 1: using a shared variable "turn"
T0: loop: wait until turn == 0; enter the CS; on exit set turn = 1
T1: loop: wait until turn == 1; enter the CS; on exit set turn = 0

What's the problem? What if turn == 0 while T0 stays in its non-critical section and T1 needs to enter the critical section? T1 is blocked by a task that is not in its CS: the progress requirement is violated.


Two-Tasks Solutions
Alg 1 (variant): using "turn" and "yield()"
T0: loop: wait until turn == 0; if T0 does not need to enter the CS, yield() and set turn = 1; otherwise enter the CS and set turn = 1 on exit
T1: symmetric, with the turn values 0 and 1 swapped

What's the problem? It does not retain sufficient info about the state of each thread (only which thread is allowed to enter the CS). How to solve this problem?


Two-Tasks Solutions
Alg 2: using a flag array a0, a1 to replace "turn" (ai = 1 indicates that Ti is ready to enter the CS)
T0: set a0 = 1; wait until a1 == 0; enter the CS; on exit set a0 = 0
T1: set a1 = 1; wait until a0 == 0; enter the CS; on exit set a1 = 0

Is mutual exclusion satisfied? Yes
Is progress satisfied? No
What if both T0 and T1 set their flags a0 and a1 to "1" at the same time? Both loop forever!!!


Two-Tasks Solutions
Alg 3: satisfying all three requirements (combining the flag array of Alg 2 with "turn"); see the C sketch below
T0: set a0 = 1; set turn = 1; wait while (a1 == 1 && turn == 1); enter the CS; on exit set a0 = 0
T1: set a1 = 1; set turn = 0; wait while (a0 == 1 && turn == 0); enter the CS; on exit set a1 = 0
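Alg 3 is Peterson's algorithm. A minimal C sketch (an illustration; it assumes the reads and writes of flag and turn are atomic and not reordered, which real hardware guarantees only with Sec. 7.3's primitives or C11 atomics):

int flag[2] = {0, 0};     /* flag[i] == 1: Ti wants to enter the CS */
int turn = 0;

void enter_cs(int i) {    /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = 1;          /* announce intent */
    turn = other;         /* defer to the other task */
    while (flag[other] && turn == other)
        ;                 /* busy-wait */
}

void exit_cs(int i) {
    flag[i] = 0;          /* leave the critical section */
}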


7.3 Synchronization Hardware

• Test-and-set: indivisible instructions. If two Test-and-Set instructions are executed simultaneously, they will be executed sequentially in some arbitrary order (flag and turn)

• Swap instruction (yield())
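A minimal spinlock sketch built on an atomic test-and-set; this version (an assumption, since the slides leave the hardware instruction abstract) uses C11's atomic_flag as the indivisible primitive:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                        /* old value was 1: lock is held, spin */
}

void release(void) {
    atomic_flag_clear(&lock);    /* reset the flag to 0 */
}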


7.4 Semaphores

• A general method to handle binary or multi-party synchronization

• Two operations, P (test) and V (increment), which must be executed indivisibly

• P(S) { while (S <= 0) ; S--; }
• V(S) { S++; }
• Binary semaphore: 0 and 1
• Counting semaphore: resource allocation
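A small usage sketch with POSIX semaphores (an illustration; the slides' P and V correspond to sem_wait and sem_post):

#include <semaphore.h>

sem_t mutex;                       /* binary semaphore guarding a CS */

void init(void) {
    sem_init(&mutex, 0, 1);        /* initial value 1: CS is free */
}

void task(void) {
    sem_wait(&mutex);              /* P: blocks until the value is positive */
    /* ... critical section ... */
    sem_post(&mutex);              /* V: increments, waking a waiter if any */
}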


Semaphores

• Busy waiting: wastes CPU resources
• Spinlock (semaphore): no context switch is required while a process is waiting on a lock
• One solution: when a process executes a P operation and the semaphore value becomes negative, it blocks itself rather than busy-waiting
• Wakeup operation: wait state => ready state
• P(S) { value--; if (value < 0) { add this process to the list; block(); } }
• V(S) { value++; if (value <= 0) { remove a process P from the list; wakeup(P); } }


Semaphores

• If the semaphore value is negative, the value indicates the # of processes waiting on the semaphore

• The waiting list can be implemented as a linked list or a FIFO queue (to ensure bounded waiting), or ???

• The semaphore should be treated as a critical section:

1. Uniprocessor: inhibit interrupts
2. Multiprocessor: Alg 3 (software) or hardware instructions


Semaphores

• Deadlock
• Indefinite blocking or starvation

P0: P(S); P(Q); ... V(S); V(Q)
P1: P(Q); P(S); ... V(Q); V(S)
P0 waits for V(Q) from P1 while P1 waits for V(S) from P0 => deadlock


7.5 Classical Synchronization Problems

• The bounded-buffer problem
• The readers-writers problem: read-write conflicts in databases
• The dining-philosophers problem

Homework exercises!!!


7.6 Critical Regions

• signal(mutex); ... CS ... wait(mutex)?

• wait(mutex); ... CS ... wait(mutex)?

• V: shared T; (V is a shared variable of type T)

• region V when B do S; => while statement S is being executed, no other process can access the variable V


7.7 Monitors

• Programming mistakes will cause a semaphore to malfunction:

mutex.V(); CS(); mutex.P(); ==> several processes may be executing in their CSs simultaneously!
mutex.P(); CS(); mutex.P(); ==> a deadlock will occur

If a process misses P(), V() or both, mutual exclusion is violated or a deadlock will occur


Monitors

• A monitor: a set of programmer-defined operations that are provided mutual exclusion within the monitor (the monitor construct prohibits concurrent access to all procedures defined within the monitor)

• Condition type: x.wait and x.signal
• Signal-and-wait: the signaling process P waits until Q leaves the monitor or waits for another condition
• Signal-and-continue: the resumed process Q waits until P leaves the monitor or waits for another condition

[Figure: process P executes x.signal, resuming a process Q suspended on condition x]


Ch. 8 Deadlocks
8.1 System Model

• Resources: types (e.g., printers, memory), instances (e.g., 5 printers)

• A process: must request a resource before using it and must release it after using it (i.e., request => use => release)

• request/release device, open/close file, allocate/free memory

• What causes deadlock?


8.2 Deadlock Characterization

• Necessary conditions:
1. Mutual exclusion
2. Hold-and-wait
3. No preemption
4. Circular wait
Resource-allocation graph
• Request edge: P -> R
• Assignment edge: R -> P

[Figure: a resource-allocation graph with processes P1, P2, P3 and resources R1, R2, R3]


Deadlock Characterization

• If each resource has only one instance, then a cycle implies that a deadlock has occurred

• If each resource has several instances, a cycle may not imply a deadlock (a cycle is a necessary but not a sufficient condition)

Example: with single-instance resources, the cycle P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1 means P1, P2, P3 are deadlocked.

Example: the cycle P1 -> R1 -> P3 -> R2 -> P1 where R1 and R2 have multiple instances (also held by P2 and P4): no deadlock. Why? A process outside the cycle can finish and release an instance, breaking the cycle.


8.3 Methods for Handling Deadlocks

• Deadlock prevention
• Deadlock avoidance (deadlock detection)
• Deadlock recovery
• Do nothing: UNIX, JVM (left to the programmer)
• Deadlocks occur very infrequently (once a year?). It's cheaper to do nothing than to implement deadlock prevention, avoidance, and recovery


8.4 Deadlock Prevention

• Make sure the four conditions cannot hold simultaneously
• Mutual exclusion: must hold for nonsharable resources
• Hold-and-wait: guarantee that when a process requests a resource, it does not hold any other resources (low resource utilization; starvation is possible)
• No preemption: preempt the resources of a process that requests a resource but can't get it
• Circular wait: impose a total ordering on all resource types, and require processes to request resources in increasing order. WHY???


8.5 Deadlock Avoidance

• Claim edge: a process declares the number of resources it may need before requesting them

• The OS grants a requested resource only IF doing so leaves no potential deadlock (the system stays in a safe state)

[Figure: resource-allocation graphs for P1, P2 with resources R1, R2 and claim edges to R2; assigning R2 -> P2 would create a cycle: an unsafe state]


8.6 Deadlock Detection

• Wait-for graph
• Detecting a cycle: O(n^2) => expensive

[Figure: a resource-allocation graph on P1, P2, P3 with R1, R2, R3 and its corresponding wait-for graph]


8.7 Recovery from Deadlock

Process termination:
• Abort all deadlocked processes (a great expense)
• Abort one process at a time until the deadlock cycle is eliminated

Resource preemption:
• Selection of a victim
• Rollback
• Starvation


Ch. 9 Memory Management
9.1 Background

• Address binding: map logical address to physical address

• Compile time
• Load time
• Execution time

FIG 9.1


Background

• Virtual address: logical address space
• Memory-management unit (MMU): a hardware unit that performs run-time mapping from virtual to physical addresses
• Relocation register -- FIG 9.2
• Dynamic loading: a routine is not loaded until it is called (efficient memory usage)
• Static linking and dynamic linking (shared libraries)


Background

• Overlays: keep in memory only the instructions and data that are needed at any given time

• Assume 1) only 150k memory 2) pass1 and pass2 don’t need to be in the memory at the same time

1. Pass 1: 70k
2. Pass 2: 80k
3. Symbol table: 20k
4. Common routines: 30k
5. Overlay driver: 10k
1+2+3+4+5 = 210k > 150k
Overlay 1: 1+3+4+5 = 130k; Overlay 2: 2+3+4+5 = 140k; both < 150k

(FIG9.3)


9.2 Swapping

• Swapping: memory<=>backing store (fast disks) (FIG9.4)

• The main part of swap time is transfer time: proportional to the amount of memory swapped (1M ~ 200ms)

• Constraint on swapping: the process must be completely idle, especially with no pending IO

• Swap time is too long: the standard swapping method is used in few systems


9.3 Contiguous Memory Allocation

• Memory: 2 partitions: system (OS) and users’ processes

• Memory protection: the OS from user processes, and user processes from one another (FIG9.5)

• Simplest method: divide the memory into a number of fixed-sized partitions. The OS keeps a table indicating which parts of memory are available and which parts are occupied

• Dynamic storage allocation: first fit (generally fast), best fit, and worst fit (a first-fit sketch follows)
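A minimal first-fit sketch in C over a linked list of holes (an illustration; the structure and names are hypothetical, not from the slides):

#include <stddef.h>

struct hole {                       /* one free block of memory */
    size_t start, size;
    struct hole *next;
};

/* Scan the free list and carve the request out of the first hole
   that is big enough; the leftover stays in the list as a smaller hole. */
size_t first_fit(struct hole **list, size_t request)
{
    for (struct hole **h = list; *h != NULL; h = &(*h)->next) {
        if ((*h)->size >= request) {
            size_t addr = (*h)->start;
            (*h)->start += request;
            (*h)->size -= request;
            if ((*h)->size == 0)
                *h = (*h)->next;    /* hole fully consumed: unlink it */
            return addr;
        }
    }
    return (size_t)-1;              /* no hole large enough */
}

Best fit would instead scan the whole list for the smallest adequate hole; worst fit for the largest.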


Contiguous Memory Allocation

• External fragmentation: statistical analysis on first fit shows that given N blocks, 0.5N blocks will be lost due to fragmentation (50-percent rule)

• Internal fragmentation: unused space within the partition

• Compaction: one way to solve external fragmentation but only possible if relocation is dynamic (WHY?)

• Other methods: paging and segmentation


9.4 Paging

• Paging: permits a noncontiguous logical address space for a process

• Frames: divide the physical memory into fixed-sized blocks

• Pages: divide the logical memory into fixed-sized blocks

• Address = page number + page offset: the page number is an index into the page table

• The page and frame sizes are determined by hardware.

• FIG9.6, FIG9.7, FIG9.8
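A tiny worked sketch of the address split (hypothetical sizes: 32-bit logical addresses and 4 KB pages, giving a 12-bit offset and a 20-bit page number):

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12   /* 4 KB page => 2^12 byte offsets */

int main(void) {
    uint32_t addr = 0x12345678;
    uint32_t page = addr >> OFFSET_BITS;              /* index into the page table */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    printf("page 0x%x, offset 0x%x\n", page, offset); /* page 0x12345, offset 0x678 */
    return 0;
}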


Paging

• No external fragmentation but internal fragmentation still exists

• To reduce internal fragmentation: smaller pages, but this increases the page-table overhead
• What about on-the-fly page-size support?
• With paging: user view <=> address-translation hardware <=> actual physical memory
• Frame table: the OS needs to know the allocation details of physical memory (FIG9.9)


Paging

Structure of the page table
• Registers: fast but expensive: suitable for small page tables (256 entries)
• Page-table base register (PTBR): points to the page table (which resides in main memory): suitable for large tables (1M entries) but needs two memory accesses to access one byte
• Using associative registers or translation look-aside buffers (TLBs) to speed this up

• Hit ratio: effective memory-access time (FIG9.10)


Paging

Protection
• Protection bits: one bit to indicate whether a page is read-write or read-only
• Valid-invalid bit: indicates whether the page is in the process's logical address space (FIG9.11)
• Page-table length register (PTLR): indicates the size of the page table; a process usually uses only a small fraction of the address space available to it


Paging

Multilevel paging
• Supports a large logical address space
• The page table itself may be extremely large (32-bit addresses: page size 4k = 2^12 => page table of 1M = 2^20 entries; at 4 bytes/entry => 4 Mbytes)
• FIG9.12, FIG9.13
• How does multilevel paging affect system performance? 4-level paging => 4 page-table memory accesses


Paging

Hashed page tables
• The virtual page number is hashed into a table of chained entries (Fig. 9.14)
• Clustered page tables: useful for sparse address spaces


Paging

Inverted page table
• A per-page entry => millions of entries => consumes a large amount of physical memory
• Inverted page table: one entry per frame; the entry's index is fixed to its frame of physical memory
• May need to search the whole table sequentially
• Use a hash table to speed up this search
• FIG9.15


Paging

Shared pages
• Reentrant code: non-self-modifying code; it never changes during execution
• If the code is reentrant, it can be shared
• FIG9.16
• Inverted page tables have difficulty implementing shared pages. WHY?
• Sharing requires two virtual addresses mapped to one physical address, but an inverted table has only one entry per physical frame


9.5 Segmentation

• Segment: variable-sized (a page is fixed-sized)
• Each segment has a name and a length
• Segment table: base (starting physical address) and limit (length of the segment)
• FIG9.18, 9.19
• Advantages:
1. Association with protection (HOW?): the memory-mapping hardware can check the protection bits associated with each segment-table entry

2. Permits the sharing of code or data (FIG9.19). Need to search the shared segment’s number


Segmentation

Fragmentation
• May cause external fragmentation: all blocks of free memory may be too small to accommodate a segment
• What's a suitable segment size?
• A spectrum: one segment per process <=> one segment per byte


9.6 Segmentation with Paging

• Local descriptor table (LDT): private to the process

• Global descriptor table (GDT): shared among all processes

• Linear address• FIG9.21


Ch. 10 Virtual Memory
10.1 Background

• Virtual memory: execution of processes that may not be completely in memory

• Program size may exceed physical memory size
• Virtual space: programmers can assume they have unlimited memory for their programs
• Increased memory utilization and throughput: many programs can reside in memory and run at the same time

• Less IO would be needed to swap users’ programs into memory => run faster

• Demand paging and demand segmentation (more complex due to varied sizes)

FIG10.1


10.2 Demand Paging

• Lazy swapper: never swaps a page into memory unless it is needed

• Valid/invalid bit: indicates whether the page is in memory or not (FIG10.3)

• Handling a page fault (FIG10.4)
• Pure demand paging: never bring a page into memory until it is required (execution proceeds one page at a time)

• One instruction may cause multiple page faults (1 page for instruction and several for data) : not so bad because Locality of reference!


Demand Paging

• EX: three-address instruction C=A+B: 1) fetch instruction, 2) fetch A, 3) fetch B, 4) add A, B, and 5) store to C. The worst case: 4 page-faults

• The hardware for supporting demand paging: page table and secondary memory (disks)

• Page-fault service: 1) interrupt, 2) read the page, and 3) restart the process

• Effective access time (EAT):
ma: memory access time (10-200 ns)
p: the probability of a page fault (0 <= p <= 1)
EAT = (1-p) * ma + p * page-fault time
With ma = 100 ns and page-fault time = 25 ms:
EAT = 100 + 24,999,900 * p (ns)
=> for less than 10% degradation, p < 0.0000004 (fewer than 1 fault per 2,500,000 memory accesses)

(Disk: 8 ms average latency + 15 ms seek + 1 ms transfer, plus overhead, gives the ~25 ms page-fault time)


10.3 Page Replacement

• Over-allocating: increases the degree of multiprogramming

• Page replacement: 1) find the desired page on disk, 2) find a free frame - if there is one, use it; otherwise select a victim with a page-replacement algorithm, write the victim page to disk, and update the page/frame tables, 3) load the desired page into the free frame, 4) restart the process

• Modify (dirty) bit: reduces the overhead: only if the page is dirty (i.e., it has been changed) must it be written back to the disk


Page Replacement

• Need a frame-allocation and a page-replacement algorithm: lowest page-fault rate

• Reference string: page faults vs # frames analysis

FIFO page replacement:
• Simple but not always good (FIG10.8)
• Belady's anomaly: the number of page faults may increase as the # of frames increases!!! (FIG10.9)


Page Replacement

Optimal page replacement:
• Replace the page that will not be used for the longest period of time (FIG10.10)
• Has the lowest page-fault rate for a fixed number of frames (the optimum solution)
• Difficult to implement: WHY? => need to predict the future usage of the pages!
• Can be used as a reference point!


Page Replacement

LRU page replacement:
• Replace the page that has not been used for the longest period of time (FIG10.11)
• The results are usually good
• How to implement it? 1) counters or 2) a stack (FIG10.12); a counter-based sketch follows
• Stack algorithms (such as LRU) do not suffer from Belady's anomaly
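A minimal counter-based LRU sketch in C (an illustration under simplifying assumptions: a small fixed set of frames, a global logical clock, frame_page[] initialized to -1 and last_used[] to 0; the names are hypothetical):

#define NFRAMES 4

int frame_page[NFRAMES];            /* which page occupies each frame, -1 = free */
unsigned long last_used[NFRAMES];   /* logical time of each frame's last use */
unsigned long now = 0;

int reference(int page) {           /* returns the frame holding the page */
    int victim = 0;
    now++;
    for (int f = 0; f < NFRAMES; f++) {
        if (frame_page[f] == page) {
            last_used[f] = now;     /* hit: refresh the timestamp */
            return f;
        }
        if (last_used[f] < last_used[victim])
            victim = f;             /* track the least-recently-used frame */
    }
    frame_page[victim] = page;      /* fault: replace the LRU (or a free) frame */
    last_used[victim] = now;
    return victim;
}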


Page Replacement

LRU approximation page replacement
• Reference bit: set by hardware: indicates whether the page has been referenced
• Additional-reference-bits algorithm: at regular intervals, the OS shifts the reference bit into the MSB of an 8-bit byte (11000000 has been used more recently than 01011111)
• Second-chance algorithm: if ref-bit = 1, give the page a second chance and reset the ref-bit; implemented with a circular queue (FIG10.13)


Page Replacement

Enhanced second-chance algorithm
• (0,0): neither recently used nor modified - the best one to replace
• (0,1): not recently used but modified - needs to be written back
• (1,0): recently used but clean - probably will be used again
• (1,1): recently used and modified
• We may have to scan the circular queue several times before we find the page to replace


Page Replacement

Counting-based page replacement
• The least frequently used (LFU) page-replacement algorithm
• The most frequently used (MFU) page-replacement algorithm

Page-buffering algorithm
• Keep a pool of free frames: the desired page can be read into a free frame before the victim page is written out


10.4 Allocation of Frames

• How many free frames should each process get?
Minimum number of frames
• It depends on the instruction-set architecture: we must have enough frames to hold all the pages that any single instruction can reference
• It also depends on the computer architecture: ex. PDP-11: some instructions are more than 1 word (an instruction may straddle 2 pages) and 2 operands may be indirect references (4 more pages) => needs 6 frames
• Indirect addressing may cause problems (we can limit the levels of indirection, e.g., to 16)


Allocation of Frames

Allocation Algorithms
• Equal allocation
• Proportional allocation: allocate memory to each process according to its size
• Global allocation: allows high-priority processes to select frames from low-priority processes (problem? a process cannot control its own page-fault rate)
• Local allocation: each process selects only from its own set of frames
• Which one is better? Global allocation: higher throughput


10.5 Thrashing

• Thrashing: high paging activity (a severe performance problem)

• A process is thrashing if it is spending more time paging than executing

• The CPU scheduler sees decreasing CPU utilization => increases the degree of multiprogramming => more page faults => things get worse and worse (FIG10.14)

• Preventing thrashing: we must provide a process with as many frames as it needs

• Locality: a process executes from locality to locality


Thrashing

• Suppose we allocate enough frames to a process to accommodate its current locality. It will not fault until it changes its localities

Working-set model => locality
• Working set: the most actively used pages within the working-set window (a period) (FIG10.16)
• The accuracy of the working set depends on the selection of the working-set window (too small: will not encompass the entire locality; too large: will overlap several localities)


Thrashing

WSS_i: the working-set size of process i
D: the total demand for frames, D = sum of WSS_i
• If the total demand is greater than the total number of available frames (D > m), thrashing will occur
• If D <= m, allocate frames to the processes; otherwise the OS suspends some processes
• Difficulty: how to keep track of the working set (it's a moving window)


Thrashing

• Page-fault frequency (PFF): establishing lower- and upper-bound on the desired page-fault rate (FIG10.17)

• Below lower-bound: the process may have too many frames (remove from it)

• Over upper-bound: the process may not have enough frames (add to it)
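A sketch of the PFF policy; the bounds and the measured fault rate are assumed inputs:

```c
/* Page-fault-frequency control: keep a process's fault rate between
 * a lower and an upper bound by adjusting its frame allocation.
 * Returns the change to apply to the process's frame count. */
int pff_adjust(double fault_rate, double lower, double upper)
{
    if (fault_rate < lower)
        return -1;   /* too many frames: take one away */
    if (fault_rate > upper)
        return +1;   /* too few frames: give it another */
    return 0;        /* within bounds: leave the allocation alone */
}
```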


10.6 OS Examples

• NT: demand paging with clustering, working-set minimum/maximum, automatic working-set trimming

• Solaris 2: minfree/lotsfree thresholds, pageout starts when free memory falls below lotsfree, two-handed-clock algorithm


10.7 Other Considerations

Prepaging
• Bring into memory at one time all the pages that will be needed
• s: the number of pages prepaged; alpha: the fraction of s actually used (0 <= alpha <= 1)
• Prepaging wins if the cost of the s*alpha avoided page faults outweighs the cost of prepaging the s*(1-alpha) unnecessary pages: if alpha -> 0 prepaging loses, if alpha -> 1 prepaging wins

Page size
• How to determine page size?
• Size of the page table: large page size => small page table


Other Considerations

• Smaller page size => smaller internal fragmentation => better memory utilization

• Page read/write time: large page size to minimize IO time

• Smaller page size => better locality (resolution) => less IO time

• The historical trend is toward larger page sizes: WHY? CPU and memory speeds have grown much faster than disk speed, so larger pages amortize the cost of each fault, at the price of more internal fragmentation


Other Considerations

• Inverted page table: reduces the memory needed to store the virtual-to-physical translation table
• Program structure: increasing locality => lower page-fault rate (e.g., a stack has good locality, hashing does not)
• The compiler and loader also affect paging: e.g., separating code (which is never modified) from data
• Frequent use of pointers (C, C++) tends to randomize memory access: not good; Java is better in this respect: no pointers


Other Considerations

IO interlock
• Allow some pages to be locked in memory (e.g., buffers involved in IO operations)
• A lock bit can be dangerous: it may get turned on but never turned off


Ch. 11 File Systems
11.1 File Concept

• File: a named collection of related information that is recorded on secondary storage

• Types: text (source), object (executable) files
• Attributes: name, type, location, size, protection, time/date/user id
• Operations: creating, writing, reading, repositioning, deleting, truncating (deletes the content only), appending, renaming, copying

• Info associated with an open file: file pointer, file open count, disk location of the file


File Concept

• Memory mapping: multiple processes may be allowed to map the same file into their virtual memory, to allow sharing of data

• File types: file name => a name + an extension
• File structure: the more file structures there are, the more the OS must support => support a minimum number of file structures (UNIX, MS-DOS: a file is a sequence of 8-bit bytes, no interpretation): each application program provides its own code to interpret an input file into the appropriate structure


File Concept

• Internal file structure: packing a number of logical records into physical blocks (all file systems suffer from internal fragmentation)

• Consistency semantics: specify when modifications of data by one user become observable by other users

• UNIX: writes to an open file by one user are visible immediately to other users; one sharing mode lets users share the pointer to the current location in the file


11.2 Access Methods

• Sequential access (in order)
• Direct access (no particular order, random): relative block number - an index relative to the beginning of the file
• Index file: contains pointers to the various blocks
• Index of an index file (if the index is too big)


11.3 Directory Structure

• The file system is divided into partitions (IBM: minidisks, PC/Macintosh: volumes)

• Partitions can be thought of as virtual disks
• Device directory or volume table of contents
• Operations on a directory: search for a file, create a file, delete a file, list a directory, rename a file, traverse the file system


Directory Structure

Single-level directory
• Simple, but not good when there are too many files or too many users
• All files are in the same directory and need unique names

Two-level directory
• Each user has his/her own user file directory (UFD) under the system's master file directory (MFD)
• Solves the name-collision problem but isolates users from each other (not good for cooperation)
• Path name, search path


Directory Structure

Tree-structured directory
• A directory contains a set of files or subdirectories
• Each user has a current directory
• Path names: absolute - begins at the root (root/spell/mail); relative - from the current directory (prt/first => root/spell/mail/prt/first)
• How to delete a directory? The directory must be empty: what if it contains several subdirectories?
• UNIX "rm -r": removes everything, but dangerous
• Users can access their own and all other users' files


Directory Structure

Acyclic-graph directories
• Can a tree structure share files and directories? NO
• A graph with no cycles allows directories to have shared subdirectories and files (FIG11.9)
• UNIX link: a pointer to another file or subdirectory
• Symbolic link: a link implemented as an absolute or relative path name
• Alternative: duplicate all info in both sharing directories: consistency issue


Directory Structure

• Concerns for implementing an acyclic-graph directory:
1. A file may have multiple absolute path names
2. When can the space allocated to a shared file be deleted and reused?
• These are easy to handle by using symbolic links
• Need a mechanism to indicate when all the references to a file have been deleted:
1. File-reference list: potentially large size
2. Counter: 0 = no more references: cheaper (UNIX hard link)


Directory Structure

• Acyclic-graph structures are complicated; some systems do not allow shared directories or links (MS-DOS: tree)

General graph directory
• A problem with the acyclic graph is that it is difficult to ensure there are no cycles. WHY?
• Adding links to a tree-structured directory yields a general graph directory (FIG11.10)
• Cycle problem: one solution is to limit the depth (number of directories) of a search
• Garbage collection: a 1st pass identifies files/directories that can be freed, a 2nd pass frees the space: time consuming


11.4-5 File-System Mounting & Sharing

• A file must be opened before use; a file system must be mounted before it is available to processes

• Sharing among multiple users: security and consistency issues
• Remote file systems


11.6 Protection

• Reliability: guarding against physical damage (duplicate copies of files)

• Protection: guarding against improper access
• Controlled access: read, write, execute, append, delete, list
• Access lists and groups: owner, group, universe (UNIX: rwx)
• Other protection approaches: one password for every file (who can remember so many passwords?), or one password for a set of files


Ch. 12 File-System Implementation

12.1 File-System Structure
• To improve IO efficiency, transfers between disk and memory are done in units of blocks
• File systems: allow data to be stored, located, and retrieved easily
• Two design issues: 1) how the file system should look to the user, and 2) the algorithms and data structures needed to implement it
• A layered design: application programs => logical file system (directories) => file-organization module => basic file system => IO control => devices


12.2 File-System Implementation

• Open-file table: files must be opened before they can be used for IO operations

• File descriptor, file handle (NT), file control block
• Mounting: a file system must be mounted before it can be available to processes on the system (/home/jane = /user/jane)


12.3 Directory Implementation

• Linear list• Hash table


12.4 Allocation Methods

• Three methods: contiguous, linked, and indexed

Contiguous allocation
• Each file occupies a set of contiguous blocks on the disk (FIG12.5)
• The number of disk seeks is minimal
• It is easy to access a file allocated to a set of contiguous blocks
• Supports both sequential and direct access
• Difficult to find space for a new file (dynamic storage-allocation problem: best-fit, first-fit, worst-fit)


Allocation Methods

• Causes external fragmentation
• Run a repacking routine: disk -> memory -> disk: effective but time consuming (needs down time)
• How much space is needed for a file?
1. Too little: the file cannot be extended
2. Too much: waste
• Re-allocation: find a larger hole, copy the contents, and repeat the process: slow!!!
• Alternative: preallocate, then add extents (linked in) as the file grows


Allocation Methods

Linked allocation
• Each block contains a pointer to the next block (FIG12.6)
• No external fragmentation: no need to compact space
• Disadvantages:
1. Can be used effectively only for sequential-access files; inefficient for direct-access files
2. Space overhead for the pointers


Allocation Methods

• One solution to these problems: collect blocks into clusters

• Reliability is another problem: what if a link is missing?

• File-allocation table (FAT): MS-DOS and OS/2 (FIG12.7); a sketch of the lookup follows below

• The FAT scheme may need a significant number of disk-head seeks, unless the FAT is cached
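A sketch of following a FAT chain to reach the i-th block of a file; the in-memory fat[] array and the end-of-chain marker are assumptions standing in for the on-disk table:

```c
#include <stdint.h>

#define FAT_EOC 0xFFFF   /* assumed end-of-chain marker */

/* Walk the FAT from a file's first block to its i-th block.
 * Each FAT entry holds the number of the next block in the file,
 * so direct access costs i table lookups (cheap if the FAT is cached). */
int32_t fat_nth_block(const uint16_t fat[], uint16_t first, int i)
{
    uint16_t b = first;
    while (i-- > 0) {
        if (fat[b] == FAT_EOC)
            return -1;       /* the file is shorter than i+1 blocks */
        b = fat[b];
    }
    return b;
}
```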


Allocation Methods

Indexed allocation
• Index block: brings all the pointers into one place
• Each file has its own index block, which contains an array of disk-block addresses (FIG12.8)
• Supports direct access, no external fragmentation
• Overhead of the index block > pointer overhead of linked allocation
• How large should the index block be?
1. Linked scheme
2. Multilevel index
3. Combined scheme (FIG12.9)


Allocation Methods

Performance
• Contiguous allocation: needs only one access to get a disk block
• Linked allocation: needs i accesses to get the i-th block: no good for direct-access applications
• Some systems use contiguous allocation for direct-access files and linked allocation for sequential-access files
• Indexed allocation: depends on the index structure, file sizes, etc. (What if the index block is too big to stay in memory all the time? Swap the index block in and out???)


12.5 Free-Space Management

• Free-space list
1. Bit vector: each bit represents one block: simple, but only effective if the entire bit vector stays in memory (1.3GB disk with 512-byte blocks => 332KB bit vector); see the sketch after this list
2. Linked list: traversal needs substantial IO time, but luckily it is not a frequent action
3. Grouping: store the addresses of n free blocks in the first block (n-1 of them actual free blocks, the last pointing to the next group): a large number of free blocks can be found quickly
4. Counting: instead of keeping all the addresses, store only the address of the 1st block and the count n of contiguous free blocks
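A sketch of the bit-vector search for the first free block (the convention that 1 = free is an assumption; systems differ):

```c
#include <stdint.h>

/* Scan a free-space bit vector for the first free block.
 * Skip whole 32-bit words equal to 0 (no free block there),
 * then test the bits of the first nonzero word. */
long first_free_block(const uint32_t bitmap[], long nwords)
{
    for (long w = 0; w < nwords; w++) {
        if (bitmap[w] == 0)
            continue;                 /* every block in this word is in use */
        for (int b = 0; b < 32; b++)
            if (bitmap[w] & (1u << b))
                return w * 32 + b;    /* block number of the free block */
    }
    return -1;                        /* disk full */
}
```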


12.6 Recovery

• Consistency checking: comparing the data in the directory structure with the data blocks on disk

• The loss of a directory entry on an indexed allocation system could be disastrous

• Backup and restore


Ch. 13 IO Systems
13.1 Overview

• IO devices vary widely in their function and speed. Hence we need a variety of methods to control them

• Two conflicting trends: 1) increasing standardization of hardware and software interfaces and 2) an increasingly broad variety of IO devices

• Device-driver modules: kernel, encapsulation


13.2 IO Hardware

• Many types of devices: storage devices (disks, tapes), transmission devices (modems, network cards), human-interface devices (mouse, screen, keyboard)

• Port, bus, daisy chain (serially connected devices)
• Controller: e.g., a serial-port controller
• Host adapter: contains a processor, microcode, and memory for complex protocols (SCSI)
• PC bus structure: FIG13.1


IO Hardware

• How can the processor give commands and data to a controller to accomplish an IO transfer?

• Special IO instructions
• Memory-mapped IO: faster when a large amount of data must be transferred (e.g., screen display). Disadvantage? Vulnerable to software faults (a stray write can corrupt device registers)!

• An IO port typically has 4 registers: status, control, data-in and data-out

• Some controllers use a FIFO buffer


IO Hardware

Polling
• Handshaking: in the first step, the host repeatedly monitors the busy bit: busy-waiting or polling (like going to the door every minute to check whether someone is there); a sketch follows below
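A sketch of polled output with simulated device registers standing in for real port IO (the register names and busy-bit convention are assumptions):

```c
#include <stdint.h>
#include <stdio.h>

/* Simulated device registers (stand-ins for real port IO). */
static uint8_t status_reg;      /* bit 0 = busy */
static uint8_t data_out_reg;

#define STATUS_BUSY 0x01

/* Polled (busy-waiting) output of one byte:
 * 1. spin until the controller clears its busy bit,
 * 2. place the byte in the data-out register,
 * 3. the controller then becomes busy doing the transfer.
 * The CPU burns cycles in step 1 - the cost that interrupts avoid. */
static void polled_write_byte(uint8_t byte)
{
    while (status_reg & STATUS_BUSY)
        ;                        /* busy-wait: "check the door" repeatedly */
    data_out_reg = byte;
    status_reg |= STATUS_BUSY;   /* transfer in progress; hardware clears it */
}

int main(void)
{
    polled_write_byte('x');
    printf("wrote %c\n", data_out_reg);
    return 0;
}
```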

Interrupts
• Interrupt-request line: like someone ringing a doorbell to indicate he/she is at the door
• Interrupt-driven IO cycle: FIG13.3


IO Hardware

• Interrupt-request lines: nonmaskable interrupt-reserved for events such as nonrecoverable errors; maskable interrupt- can be turned off by CPU

• Interrupt vector: the memory addresses of specialized interrupt handlers

• Interrupt priority levels
• Interrupts in an OS: at boot time, during IO, on exceptions
• Other uses of interrupts: page faults (virtual memory), system calls (software interrupt or trap), managing control flow (yield a low-priority job to a high-priority one)


IO Hardware

• A threaded kernel architecture is well-suited to implementing multiple interrupt priorities and to enforcing the precedence of interrupt handling over background processing in kernel and application routines

Direct memory access (FIG13.5)
• Programmed IO (PIO): transfers 1 byte at a time
• A DMA controller operates on the memory bus directly
• DMA seizes the memory bus => the CPU cannot access main memory => it can still access data in the primary and secondary caches => CYCLE STEALING


13.3 Application IO Interface

• A kernel IO structure (FIG13.6): encapsulation
• Device characteristics (FIG13.7):
1. Character-stream or block
2. Sequential or random-access
3. Synchronous or asynchronous
4. Sharable (can be used by concurrent threads) or dedicated
5. Speed of operation
6. Read/write, read-only, or write-only
• Escape or back-door system call (UNIX ioctl)


Application IO Interface

Block and character devices
• Block device: read, write, seek (memory-mapped file access can be layered on top of block-device drivers)
• Character stream: keyboard, mouse, modems

Network devices
• Network socket interface (UNIX, NT)

Clocks and timers
• Give the current time or the elapsed time; set a timer to trigger operation X at time T (programmable interval timer)


Application IO Interface

Blocking and nonblocking IO
• Blocking IO system call: the execution of the application is suspended (run queue -> wait queue)
• Nonblocking IO: e.g., interacting with the mouse and keyboard while processing and displaying data on the screen:
1. One solution: overlap execution with IO using a multithreaded application
2. Asynchronous system call: returns immediately, without waiting for the IO to complete


13.4 Kernel IO Subsystem

IO scheduling
• To improve overall performance

Buffering
• Copes with speed mismatches between producer and consumer
• Copes with different data-transfer sizes
• Supports copy semantics for application IO
• Double buffering: write into one buffer while reading from the other (sketched below)
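A sketch of double buffering; synchronization between producer and consumer is omitted for brevity (a real version needs it):

```c
#define BUF_SIZE 4096

static char buf[2][BUF_SIZE];   /* the two halves of the double buffer */
static int  fill = 0;           /* index of the buffer being filled */

/* The producer (e.g., a device) writes into buf[fill] while the
 * consumer (the application) drains buf[1 - fill]; when the fill
 * buffer is full and the other is empty, the roles swap. */
void swap_buffers(void)
{
    fill = 1 - fill;
}

char       *producer_buffer(void) { return buf[fill]; }
const char *consumer_buffer(void) { return buf[1 - fill]; }
```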


Kernel IO Subsystem

Caching
• Use a fast memory to hold copies of data

Spooling and device reservation
• A spool is a buffer that holds output for a device, such as a printer, that can serve only one job at a time
• Each application's output is spooled to a separate disk file

Error handling
• An OS that uses protected memory can guard against many kinds of errors
• Sense key (SCSI protocol) to identify failures


Kernel IO Subsystem

Kernel data structures
• UNIX IO kernel structure (FIG13.9)
• The IO subsystem supervises:
1. Management of the name space for files/devices
2. Access control to files/devices
3. Operation control
4. File-system space allocation
5. Device allocation
6. Buffering, caching, and spooling
7. IO scheduling
8. Device-status monitoring, error handling and failure recovery
9. Device-driver configuration and initialization


13.5 IO Requests Handling

• How is the connection made from the file name to the disk controller?

• Device table (MS-DOS): "c:"
• Mount table (UNIX)
• Stream (UNIX System V): a full-duplex connection between a device driver and a user-level process
• The life cycle of an IO request (FIG13.10)


13.6-7 STREAMS & Performance

• IO is a major factor in system performance
• Context switching is a main cost
• Interrupts are relatively expensive: save state => execute the interrupt handler => restore the state
• Network traffic can also cause a high context-switch rate (FIG13.11)
• Telnet daemon (Solaris): uses in-kernel threads to eliminate context switches
• Front-end processor, terminal concentrator (multiplexing many remote terminals to one port), IO channel


Performance

Several principles to improve the efficiency of IO:
• Reduce the # of context switches
• Reduce the # of times data must be copied in memory while passing between device and application
• Reduce the frequency of interrupts
• Increase the use of DMA
• Move processing primitives into hardware (allowing concurrent CPU and bus operation)
• Balance the load among CPU, memory subsystem, and IO


Performance

• Device-functionality progression (FIG13.12)
• Where should IO functionality be implemented???
• Application level: easy, flexible, but inefficient (high context-switch overhead)
• Kernel level: difficult but efficient
• Hardware level: inflexible and expensive


Ch. 14 Mass-Storage Structure
14.1 Disk Structure

• Magnetic tape (slower than disk): used for backup
• Converting a logical block number to a disk address: two problems: 1) most disks have some defective sectors and 2) the # of sectors per track is not constant

• Cylinder, track, sector


14.2 Disk Scheduling

• Seek time: the time for the disk arm to move the heads to the cylinder containing the desired sector

• Rotational latency: the time waiting for the disk to rotate the desired sector to the head

• Bandwidth: the total # of bytes transferred, divided by the total time from request to completion

FCFS scheduling (first-come-first-served)
• Simple, but does not generally provide the fastest service (FIG14.1)


Disk Scheduling

SSTF scheduling (shortest-seek-time-first)
• Selects the request with the minimum seek time from the current head position (FIG14.2); compared against FCFS in the sketch after this list
• It may cause starvation of some requests
• It is much better than FCFS but not optimal

SCAN scheduling
• The head continuously scans back and forth across the disk (FIG14.3): also called the elevator algorithm

C-SCAN scheduling
• On reaching one end, immediately return to the other end (FIG14.4)
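A sketch comparing total head movement under FCFS and SSTF; the request queue and starting cylinder below are illustrative values:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Total head movement (in cylinders) serving requests in FCFS order. */
long fcfs_movement(const int req[], int n, int head)
{
    long total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

/* Total head movement if we always serve the closest pending request. */
long sstf_movement(const int req[], int n, int head)
{
    int pending[64];
    long total = 0;
    memcpy(pending, req, n * sizeof *req);
    for (int left = n; left > 0; left--) {
        int best = 0;
        for (int i = 1; i < left; i++)
            if (abs(pending[i] - head) < abs(pending[best] - head))
                best = i;
        total += abs(pending[best] - head);
        head = pending[best];
        pending[best] = pending[left - 1];   /* drop the served request */
    }
    return total;
}

int main(void)
{
    int q[] = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* request queue */
    printf("FCFS: %ld  SSTF: %ld\n",
           fcfs_movement(q, 8, 53), sstf_movement(q, 8, 53));
    /* prints "FCFS: 640  SSTF: 236" for a head starting at cylinder 53 */
    return 0;
}
```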


Disk Scheduling

LOOK scheduling
• Go only as far as the final request in each direction, then reverse (without going all the way to the end of the disk) (FIG14.5)

Selection of a disk-scheduling algorithm
• Heavy load: SCAN and C-SCAN perform better because they avoid the starvation problem
• Find an optimal schedule? The computational expense may not justify the savings over SSTF or SCAN
• Request performance may be affected by the file-allocation method (contiguous vs linked/indexed files)


Disk Scheduling

• Request performance is also affected by the location of directories and index blocks (e.g., if the index block is on the 1st cylinder and the data on the last one)

• Caching the directories and index blocks in main memory will help

• OS should have a module that includes a set of scheduling algorithms for different applications

• What if the rotational latency is nearly as large as the average seek time?


14.3 Disk Management

• Disk formatting: low-level (physical) formatting divides the disk into sectors that the disk controller can read and write

• Error-correcting code (ECC): on a write, the controller computes and stores the ECC; on a read, it recomputes the ECC => if they match, ok => if not, an error occurred and may be corrected

• To use a disk to hold files, the OS needs to record its own data structures on the disk:

1. Partition the disk into one or more groups of cylinders
2. Logical formatting (making a file system)


Disk Management

• Boot block: a tiny bootstrap loader stored in ROM brings in the full bootstrap program from the boot block of the boot disk (system disk)

• Bad blocks: why do disks get defects so easily? (moving parts)

• What if format finds a bad block? Mark the FAT entry as bad, or:

1. Sector sparing (forwarding): some spare sectors are preserved to replace bad blocks
2. Sector slipping: if 17 goes bad, sectors 17-100 move to 18-101
• Can the replacement be fully automatic? No: users may need to restore the lost data manually


14.4 Swap-Space Management

• Main goal: provide the best throughput for the virtual-memory system

• Swap space: ranges from a few megabytes to hundreds of megabytes; it is safer to overestimate than to underestimate. WHY? Running out of swap space may force the system to abort processes

• Swap-space location: in the file system (easy but inefficient) or in a separate disk partition (efficient, but internal fragmentation may increase)

• In 4.3BSD, swap space is allocated to a process when it is started: text segment (FIG14.7) and data segment (FIG14.8) = swap map


14.5 RAID

• Disks used to be the least reliable component of a system (disk crashes)

• Disk striping (interleaving): uses a group of disks as one storage unit: improves performance and reliability

• Redundant array of independent disks (RAID)
• Mirroring or shadowing: keep a duplicate copy of each disk
• Block-interleaved parity: a small fraction of the disk space is used to hold parity blocks (see the sketch below)
• What are the overheads?
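A sketch of how block-interleaved parity recovers a lost block: the parity block is the XOR of the data blocks, so any single missing block equals the XOR of the parity with the survivors (the disk count and block size here are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define NDATA 4        /* number of data disks (illustrative) */
#define BLK   512      /* block size in bytes (illustrative) */

/* Compute the parity block as the byte-wise XOR of the data blocks. */
void compute_parity(const uint8_t data[NDATA][BLK], uint8_t parity[BLK])
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t p = 0;
        for (int d = 0; d < NDATA; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Rebuild one failed data block: XOR the parity with the surviving blocks. */
void rebuild_block(const uint8_t data[NDATA][BLK], const uint8_t parity[BLK],
                   int failed, uint8_t out[BLK])
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t p = parity[i];
        for (int d = 0; d < NDATA; d++)
            if (d != failed)
                p ^= data[d][i];     /* skip the lost disk */
        out[i] = p;
    }
}
```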


14.7 Stable-Storage Implementation

• Information residing in stable storage is never lost

• Whenever a failure occurs while writing a block, the system recovers and restores the block

1. Write the info to the first physical block
2. If it completes successfully, write the same info to the second physical block
3. Declare the operation complete only after the second write completes successfully
• Data is safe unless all copies are destroyed (see the sketch below)
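A sketch of the two-block protocol; the simulated disk and write_block stand in for the real driver (which would detect write failures via the controller/ECC):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simulated disk: two physical copies of one logical block. */
static char disk[2][512];

/* Stand-in for the real driver: write one physical copy and report
 * success (a real driver detects failed writes via ECC on read-back). */
static bool write_block(int copy, const void *data, size_t len)
{
    if (len > sizeof disk[copy])
        return false;
    memcpy(disk[copy], data, len);
    return true;
}

/* Stable-storage write: replicate the info on two physical blocks,
 * starting the second write only after the first has succeeded.
 * After a crash, recovery can always find at least one good copy. */
bool stable_write(const void *data, size_t len)
{
    if (!write_block(0, data, len))
        return false;      /* recovery retries or restores this copy */
    if (!write_block(1, data, len))
        return false;
    return true;           /* declared complete only after both writes */
}
```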


14.8 Tertiary-Storage Structure

• Removable media: removable disks, tapes
• What are the considerations for the OS?
• Application interface
• File naming