What is an Operating System? A program that sits between computer users and computer hardware, and manages the hardware so it can be used efficiently. Operating system goals: to provide an environment in which users can execute programs in a convenient and efficient manner, and to ensure the correct operation of the computer system. The OS is a resource allocator: it manages all resources and decides between conflicting requests for efficient and fair resource use. The OS is also a control program: it controls the execution of programs to prevent errors and improper use of the computer.

An operating system (OS) is software that manages computer hardware and software resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, [1][2] although the application code is usually executed directly by the hardware and frequently makes a system call to an OS function or is interrupted by it. Operating systems can be found on many devices that contain a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, Linux, OS X, QNX, Microsoft Windows, [3] Windows Phone, and IBM z/OS. All these examples, except Windows, Windows Phone and z/OS, share roots in UNIX.

Operating System Components: kernel, program execution, interrupts, modes, memory management, virtual memory, multitasking, disk access and file systems, device drivers, networking, security, user interface, and graphical user interfaces. For detail, refer to http://en.wikipedia.org/wiki/Operating_system

Processes. A program is a passive thing -- just a file on the disk with code that is potentially runnable. A process is one instance of a program in execution; at any instant, there may be many processes running copies of a single program (e.g., an editor): each is a separate, independent process.

Processor: provides a set of instructions along with the capability of automatically executing a series of those instructions. Thread: a minimal software processor in whose context a series of instructions can be executed; saving a thread context implies stopping the current execution and saving all the data needed to continue the execution at a later stage. Process: a software processor in whose context one or more threads may be executed; executing a thread means executing a series of instructions in the context of that thread.

Introduction to threads. A thread is a lightweight process. The analogy: a thread is to a process as a process is to a machine. Each thread runs strictly sequentially and has its own program counter and stack to keep track of where it is. Threads share the CPU just as processes do: first one thread runs, then another does. Threads can create child threads and can block waiting for system calls to complete. All threads of a process have exactly the same address space: they share the code section, data section, and OS resources (open files and signals), and they share the same global variables. One thread can read, write, or even completely wipe out another thread's stack. Threads can be in any one of several states: running, blocked, ready, or terminated. There is no protection between threads: (1) it is impossible, and (2) it should not be necessary, since a process is always owned by a single user, who has created multiple threads so that they can cooperate, not fight.
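To make the shared address space concrete, here is a minimal sketch in C using POSIX threads (an assumption; the notes do not name a thread library). Both threads update the same global counter, showing that threads share the data section; the mutex is cooperation, not protection.

    /* two threads incrementing a shared global: compile with -pthread */
    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                              /* shared: lives in the data section */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);            /* voluntary cooperation */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);        /* 200000: one address space */
        return 0;
    }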

Process Scheduling. In a multiprogramming system, all the processes that run on a particular system compete for CPU time. Hence there is a need for a CPU scheduling algorithm that minimizes, as far as possible: average waiting time, average response time, and average turnaround time.
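As a worked example of these metrics, the sketch below computes average waiting and turnaround time for first-come-first-served (FCFS) order; the burst times (24, 3, 3 ms) are illustrative values, not from the notes.

    /* FCFS metrics, assuming all processes arrive at time 0 */
    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};             /* CPU burst times in ms (example data) */
        int n = 3, wait = 0, turnaround = 0, clock = 0;

        for (int i = 0; i < n; i++) {
            wait += clock;                    /* time spent waiting before running */
            clock += burst[i];
            turnaround += clock;              /* arrival 0, so completion = turnaround */
        }
        printf("avg waiting = %.2f ms, avg turnaround = %.2f ms\n",
               (double)wait / n, (double)turnaround / n);
        return 0;                             /* prints 17.00 ms and 27.00 ms */
    }

Running the same bursts in the order (3, 3, 24) drops the average waiting time from 17 ms to 3 ms, which is why scheduling order matters.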

CPU/Process Scheduling. The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling. When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm. The process scheduler is the component of the operating system that is responsible for deciding whether the currently running process should continue running and, if not, which process should run next. There are four events where the scheduler needs to step in and make this decision:
1. The current process goes from the running to the waiting state because it issues an I/O request or some operating system request that cannot be satisfied immediately.
2. The current process terminates.
3. A timer interrupt causes the scheduler to run and decide that a process has run for its allotted interval of time and it is time to move it from the running to the ready state.
4. An I/O operation completes for a process that requested it, and the process now moves from the waiting to the ready state. The scheduler may then decide to preempt the currently running process and move this newly-ready process into the running state.

Goals of Scheduling (Objectives). In this section we try to answer the following question: what does the scheduler try to achieve? Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, interactive system, or real-time system, but there are also some goals that are desirable in all systems.

General Goals:
Fairness. Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and no process suffers indefinite postponement. Note that giving equivalent or equal time is not fair; think of safety control and payroll at a nuclear plant.
Policy Enforcement. The scheduler has to make sure that the system's policy is enforced. For example, if the local policy is safety, then the safety control processes must be able to run whenever they want to, even if it means a delay in payroll processes.
Efficiency. The scheduler should keep the system (in particular, the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.
Response Time. A scheduler should minimize the response time for interactive users.
Turnaround. A scheduler should minimize the time batch users must wait for output.
Throughput. A scheduler should maximize the number of jobs processed per unit time.

A little thought will show that some of these goals are contradictory. It can be shown that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all.

Preemptive vs. Nonpreemptive Scheduling. Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

Nonpreemptive Scheduling. A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. Some characteristics of nonpreemptive scheduling:
1. In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
2. In a nonpreemptive system, response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.
3. In nonpreemptive scheduling, a scheduler executes jobs in the following two situations: (a) when a process switches from the running state to the waiting state, and (b) when a process terminates.

Preemptive Scheduling. A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, in contrast to the "run to completion" method.

Interprocess Communication. Interprocess communication (IPC) is communication between two processes that reside on the same or different systems, e.g. communication between a client and a server: a client process communicates with a server process for a specific purpose.

IPC - Cooperating Processes. An independent process cannot affect or be affected by the execution of another process.
A cooperating process can affect or be affected by the execution of another process. Advantages of process cooperation (communication): information sharing, computation speed-up, modularity, and convenience. Dangers of process cooperation (communication): data corruption, deadlocks, and increased complexity; cooperation also requires processes to synchronize their processing. Purposes of IPC: data transfer, sharing data, event notification, resource sharing and synchronization, and process control.
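A minimal sketch of data transfer between cooperating processes, using a Unix pipe (one concrete message-passing mechanism; the notes do not prescribe one):

    /* parent sends a message to its child through a pipe */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                             /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                    /* child: the receiver */
            close(fd[1]);
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);                         /* parent: the sender */
        write(fd[1], "hello", 5);
        close(fd[1]);
        wait(NULL);
        return 0;
    }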

Mechanisms used for communication and synchronization, i.e. the different ways of communication between processes: message passing, shared memory, and RPC.

Remote Procedure Call (RPC). RPC is an interaction between a client and a server: the client invokes a procedure on the server, and the server executes the procedure and passes the result back to the client. The calling process is suspended (blocked) and proceeds only after getting the result from the server.

RPC Motivation. Transport-layer message passing consists of two types of primitives, send and receive, which may be implemented in the OS or through add-on libraries. Messages are composed in user space and sent via a send() primitive. When processes are expecting a message they execute a receive() primitive; receives are often blocking. Messages lack access transparency: there are differences in data representation, a need to understand the message-passing process, etc. Programming is simplified if processes can exchange information using techniques that are similar to those used in a shared memory environment.

The Remote Procedure Call (RPC) Model. RPC is a high-level network communication interface based on the single-process procedure call model. A client request is formulated as a procedure call to a function on the server; the server's reply is formulated as a function return.

Conventional Procedure Calls:
1. Initiated when a process calls a function or procedure.
2. The caller is suspended until the called function completes.
3. Arguments and the return address are pushed onto the process stack.
4. Variables local to the called function are pushed on the stack.
5. Control passes to the called function.
6. The called function executes and returns value(s) either through parameters or in registers.
7. The stack is popped.
8. The calling function resumes executing.
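Before looking at modern RPC below, here is a sketch of what a client stub's marshaling step might look like for a hypothetical remote add(int, int); the procedure number and wire layout are invented for illustration.

    /* hypothetical client-stub marshaling for remote add(a, b) */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>                    /* htonl: network byte order */

    #define PROC_ADD 1                        /* invented procedure number */

    /* Pack procedure number and arguments into a message buffer;
       returns the number of bytes to hand to send(). */
    size_t marshal_add(uint8_t *msg, int32_t a, int32_t b)
    {
        uint32_t proc = htonl(PROC_ADD);      /* byte order hides representation */
        uint32_t na = htonl((uint32_t)a);
        uint32_t nb = htonl((uint32_t)b);
        memcpy(msg,     &proc, 4);
        memcpy(msg + 4, &na,   4);
        memcpy(msg + 8, &nb,   4);
        return 12;
    }

The server stub performs the inverse (unmarshaling) before calling the real procedure, which is what restores the illusion of an ordinary function call.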

RPC and Client-Server. RPC forms the basis of most client-server systems: clients formulate requests to servers as procedure calls, and servers' replies are formulated as function returns. The problem with conventional RPC built directly on message passing is that access transparency is not maintained.

Modern RPC:
1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message including the parameters, the name or number of the procedure to be called, etc., and calls the local operating system. The packaging of arguments into a network message is called marshaling.
3. The client's OS sends the message to the remote OS via a system call to the local kernel. To transfer the message, some protocol (either connectionless or connection-oriented) is used.
4. The remote OS gives the message to the server stub.
5. The server stub unpacks the parameters and calls the server.
6. The server does the work and returns the result to the stub.
7. The server stub packs the result in a message and calls its local OS.
8. The server's OS sends the message to the client's OS.
9. The client's OS gives the message to the client stub.
10. The stub unpacks the result and returns it to the waiting client procedure.

RPC Issues: Binding. A local kernel calls a remote kernel in RPC, along with some parameters, through a particular port. The local kernel must know the remote port through which it is communicating. The process of finding out or assigning the port and the corresponding system (client or server) is called binding. Binding determines the remote procedure and the machine on which it will be executed, and checks the compatibility of the parameters passed. Dynamic binding uses a binding server.

Virtual machine. In computing, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer, and their implementations may involve specialized hardware, software, or a combination of both.

Classification of virtual machines can be based on the degree to which they implement functionality of targeted real machines. That way, system virtual machines (also known as full virtualization VMs) provide a complete substitute for the targeted real machine and a level of functionality required for the execution of a complete operating system. On the other hand, process virtual machines are designed to execute a single computer program by providing an abstracted and platform-independent program execution environment.

Different virtualization techniques are used, based on the desired usage. Native execution is based on direct virtualization of the underlying raw hardware; thus it provides multiple "instances" of the same architecture a real machine is based on, capable of running complete operating systems. Some virtual machines can also emulate different architectures and allow execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel's support for multiple isolated user space instances, which are usually called containers and may look and feel like real machines from the end users' point of view. Some computer architectures are capable of hardware-assisted virtualization, which enables efficient full virtualization by using virtualization-specific hardware capabilities, primarily from the host CPUs.

Definitions. A virtual machine (VM) is a software implementation of a machine (for example, a computer) that executes programs like a physical machine.
Virtual machines are separated into two major classes, based on their use and degree of correspondence to any real machine:
A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). [1] These usually emulate an existing architecture, and are built with the purpose of either providing a platform to run programs where the real hardware is not available for use (for example, executing on otherwise obsolete platforms), or of having multiple instances of virtual machines leading to more efficient use of computing resources, both in terms of energy consumption and cost effectiveness (known as hardware virtualization, the key to a cloud computing environment), or both.
A process virtual machine (also, language virtual machine) is designed to run a single program, which means that it supports a single process. Such virtual machines are usually closely suited to one or more programming languages and built with the purpose of providing program portability and flexibility (amongst other things).
An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine; it cannot break out of its virtual environment. A VM was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Current use includes virtual machines which have no direct correspondence to any real hardware. [2]

System virtual machines. System virtual machine advantages:
Multiple OS environments can co-exist on the same primary hard drive, with a virtual partition that allows sharing of files generated in either the "host" operating system or "guest" virtual environment. Adjunct software installations, wireless connectivity, and remote replication, such as printing and faxing, can be generated in any of the guest or host operating systems. Regardless of the system, all files are stored on the hard drive of the host OS.
Application provisioning, maintenance, high availability and disaster recovery are inherent in the virtual machine software selected.
VMs can provide emulated hardware environments different from the host's instruction set architecture (ISA), through emulation or by using just-in-time compilation.

The main disadvantages of VMs are:
A virtual machine is less efficient than an actual machine when it accesses the host hard drive indirectly.
When multiple VMs are concurrently running on the hard drive of the actual host, adjunct virtual machines may exhibit varying and/or unstable performance (speed of execution and malware protection), depending on the data load imposed on the system by other VMs, unless the selected VM software provides temporal isolation among virtual machines.
Malware protection for VMs is not necessarily compatible with the host's, and may require separate software.

Multiple VMs running their own guest operating systems are frequently engaged for server consolidation in order to avoid interference from separate VMs on the same actual machine platform. The desire to run multiple operating systems was the initial motivation for virtual machines, so as to allow time-sharing among several single-tasking operating systems. In some respects, a system virtual machine can be considered a generalization of the concept of virtual memory that historically preceded it.
IBM's CP/CMS, the first systems to allow full virtualization, implemented time sharing by providing each user with a single-user operating system, the CMS. Unlike virtual memory, a system virtual machine entitled the user to write privileged instructions in their code. This approach had certain advantages, such as adding input/output devices not allowed by the standard system. [3]

As technology evolves virtual memory for purposes of virtualization, new systems of memory overcommitment may be applied to manage memory sharing among multiple virtual machines on one actual computer operating system. It may be possible to share "memory pages" that have identical contents among multiple VMs that run on the same actual machine, which may result in mapping them to the same physical page, by a technique known as Kernel SamePage Merging. This is particularly useful for read-only pages, such as those that contain code segments; in particular, this would be the case for multiple virtual machines running the same or similar software, software libraries, web servers, middleware components, etc. The guest operating systems do not need to be compliant with the host hardware, thereby making it possible to run different operating systems on the same computer (e.g., Microsoft Windows, Linux, or previous versions of an operating system) to support future software.

The use of virtual machines to support separate guest operating systems is popular in regard to embedded systems. A typical use would be to run a real-time operating system simultaneously with a preferred complex operating system, such as Linux or Windows. Another use would be for novel and unproven software still in the developmental stage, which can be run inside a sandbox. Virtual machines have other advantages for operating system development, and may include improved debugging access and faster reboots. [4]

Process virtual machines. A process VM, sometimes called an application virtual machine, or Managed Runtime Environment (MRE), runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.

A process VM provides a high-level abstraction, that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages is achieved by the use of just-in-time compilation. This type of VM has become popular with the Java programming language, which is implemented using the Java virtual machine. Other examples include the Parrot virtual machine and the .NET Framework, which runs on a VM called the Common Language Runtime. All of them can serve as an abstraction layer for any computer language.

A special case of process VMs are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but one process per physical machine in the cluster. They are designed to ease the task of programming concurrent applications by letting the programmer focus on algorithms rather than the communication mechanisms provided by the interconnect and the OS.
They do not hide the fact that communication takes place, and as such do not attempt to present the cluster as a single machine. Unlike other process VMs, these systems do not provide a specific programming language, but are embedded in an existing language; typically such a system provides bindings for several languages (e.g., C and FORTRAN). Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They are not strictly virtual machines, as the applications running on top still have access to all OS services, and are therefore not confined to the system model.

Memory management. In operating systems, memory management is the function responsible for managing the computer's primary memory. [1] The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding who gets memory, when they receive it, and how much they are allowed. When memory is allocated it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status.

Memory management techniques

Single contiguous allocation. Single allocation is the simplest memory management technique. All the computer's memory, usually with the exception of a small portion reserved for the operating system, is available to the single application. MS-DOS is an example of a system which allocates memory in this way. An embedded system running a single application might also use this technique. A system using single contiguous allocation may still multitask by swapping the contents of memory to switch among users. Early versions of the Music operating system used this technique.

Partitioned allocation. Partitioned allocation divides primary memory into multiple memory partitions, usually contiguous areas of memory. Each partition might contain all the information for a specific job or task. Memory management consists of allocating a partition to a job when it starts and unallocating it when the job ends. Partitioned allocation usually requires some hardware support to prevent the jobs from interfering with one another or with the operating system. The IBM System/360 used a lock-and-key technique. Other systems used base and bounds registers which contained the limits of the partition and flagged invalid accesses. The UNIVAC 1108 Storage Limits Register had separate base/bound sets for instructions and data. The system took advantage of memory interleaving to place what were called the i-bank and d-bank in separate memory modules. [2]

Partitions may be either static, that is, defined at Initial Program Load (IPL) or boot time or by the computer operator, or dynamic, that is, automatically created for a specific job. IBM System/360 Operating System Multiprogramming with a Fixed Number of Tasks (MFT) is an example of static partitioning, and Multiprogramming with a Variable Number of Tasks (MVT) is an example of dynamic partitioning. MVT and successors use the term region to distinguish dynamic partitions from static ones in other systems. [3]

Partitions may be relocatable using hardware typed memory, like the Burroughs Corporation B5500, or base and bounds registers like the PDP-10 or GE-635. Relocatable partitions are able to be compacted to provide larger chunks of contiguous physical memory. Compaction moves "in-use" areas of memory to eliminate "holes" or unused areas of memory caused by process termination, in order to create larger contiguous free areas. [4] Some systems allow partitions to be swapped out to secondary storage to free additional memory. Early versions of IBM's Time Sharing Option (TSO) swapped users in and out of a single time-sharing partition. [5]

Paged memory management. Paged allocation divides the computer's primary memory into fixed-size units called page frames, and the program's virtual address space into pages of the same size. The hardware memory management unit maps pages to frames.
The physical memory can be allocated on a page basis while the address space appears contiguous. Usually, with paged memory management, each job runs in its own address space. However, there are some single address space operating systems that run all processes within a single address space, such as IBM i, which runs all processes within a large address space, and IBM OS/VS2 SVS, which ran all jobs in a single 16 MiB virtual address space. Paged memory can be demand-paged when the system can move pages as required between primary and secondary memory.

Segmented memory management. Segmented memory is the only memory management technique that does not provide the user's program with a "linear and contiguous address space." [1] Segments are areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of a segment table, which usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.). Segmentation allows better access protection than other schemes because memory references are relative to a specific segment and the hardware will not permit the application to reference memory not defined for that segment.

It is possible to implement segmentation with or without paging. Without paging support, the segment is the physical unit swapped in and out of memory when required. With paging support, the pages are usually the unit of swapping, and segmentation only adds an additional level of security. Addresses in a segmented system usually consist of the segment id and an offset relative to the segment base address, defined to be offset zero.

The Intel IA-32 (x86) architecture allows a process to have up to 16,383 segments of up to 4 GiB each. IA-32 segments are subdivisions of the computer's linear address space, the virtual address space provided by the paging hardware. [6] The Multics operating system is probably the best known system implementing segmented memory. Multics segments are subdivisions of the computer's physical memory of up to 256 pages, each page being 1K 36-bit words in size, resulting in a maximum segment size of 1 MiB (with 9-bit bytes, as used in Multics). A process could have up to 4046 segments.
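The dynamic partitioned allocation described earlier in this section needs a policy for choosing among the free areas ("holes"). A minimal first-fit sketch, under the assumption of a simple singly linked free list (the data structure is illustrative, not from the text):

    /* first-fit search over a free list of holes */
    #include <stddef.h>

    struct hole { size_t base, size; struct hole *next; };

    /* returns base address of the allocation, or (size_t)-1 on failure */
    size_t first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next) {
            if (h->size >= request) {         /* first hole big enough wins */
                size_t base = h->base;
                h->base += request;           /* carve the request off the front */
                h->size -= request;
                return base;
            }
        }
        return (size_t)-1;                    /* no hole fits: external fragmentation */
    }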

Memory management (continued). Memory management is the functionality of an operating system which handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It checks how much memory is to be allocated to processes and decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly updates the status.

Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range. For example, if the base register holds 300000 and the limit register holds 120900, then the program can legally access all addresses from 300000 through 420899.
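A sketch of the base/limit check in C, using the numbers above; in real hardware this comparison and addition happen on every memory reference:

    /* base/limit protection check */
    #include <stdio.h>

    #define BASE  300000UL
    #define LIMIT 120900UL                    /* size of the legal range */

    long translate(unsigned long logical)     /* returns -1 to model a trap */
    {
        if (logical >= LIMIT)
            return -1;                        /* hardware traps to the OS */
        return (long)(BASE + logical);
    }

    int main(void)
    {
        printf("%ld\n", translate(100));      /* 300100 */
        printf("%ld\n", translate(500000));   /* -1: protection violation */
        return 0;
    }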

Binding of instructions and data to memory addresses can be done in the following ways:
Compile time -- When it is known at compile time where the process will reside, compile-time binding is used to generate absolute code.
Load time -- When it is not known at compile time where the process will reside in memory, the compiler generates relocatable code.
Execution time -- If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Dynamic Loading. In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better memory-space utilization, and unused routines are never loaded (a dlopen() sketch appears at the end of this section).

Dynamic Linking. Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. The operating system can link system-level libraries into a program. When it combines the libraries at load time, the linking is called static linking, and when this linking is done at the time of execution, it is called dynamic linking. In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.

Logical versus Physical Address Space. An address generated by the CPU is a logical address, whereas an address actually available on the memory unit is a physical address. A logical address is also known as a virtual address. Virtual and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme. The set of all logical addresses generated by a program is referred to as a logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as a physical address space. The run-time mapping from virtual to physical address is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address: the value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100. The user program deals with virtual addresses; it never sees the real physical addresses.

Swapping. Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users; it must be capable of providing direct access to these memory images. The major time-consuming part of swapping is transfer time, and total transfer time is directly proportional to the amount of memory swapped. Let us assume that the user process is of size 100 KB and the backing store is a standard hard disk with a transfer rate of 1 MB (1000 KB) per second. The actual transfer of the 100 KB process to or from memory will take 100 KB / 1000 KB per second = 1/10 second = 100 milliseconds.
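The dlopen() sketch promised above: on Unix-like systems, dynamic loading can be driven explicitly through the real dlopen/dlsym interface. The choice of libm.so.6 and cos here is just a convenient example; link with -ldl where required.

    /* load a routine at run time instead of program-load time */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* loaded on demand */
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        printf("cos(0) = %f\n", cosine(0.0));            /* 1.000000 */

        dlclose(handle);                                 /* may be unloaded when unused */
        return 0;
    }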

Memory Allocation. Main memory usually has two partitions:
Low memory -- the operating system resides in this memory.
High memory -- user processes are held in high memory.

The operating system uses the following memory allocation mechanisms:

1. Single-partition allocation: In this type of allocation, a relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses; each logical address must be less than the limit register.

2. Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

Fragmentation. As processes are loaded and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated memory blocks because the available blocks are too small, and these memory blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:

1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation: The memory block assigned to a process is bigger than requested; some portion of the memory is left unused, as it cannot be used by another process.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.

Paging. External fragmentation is avoided by using the paging technique. Paging is a technique in which physical memory is broken into fixed-size blocks called frames and logical memory into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded into any available memory frames. The logical address space of a process can therefore be non-contiguous, and a process is allocated physical memory whenever a free memory frame is available. The operating system keeps track of all free frames; it needs n free frames to run a program of size n pages. An address generated by the CPU is divided into:
Page number (p) -- the page number is used as an index into a page table which contains the base address of each page in physical memory.
Page offset (d) -- the page offset is combined with the base address to define the physical memory address.
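A sketch of the (p, d) split, assuming a 4096-byte page size (a power of 2, as required above):

    /* split a logical address into page number and offset */
    #include <stdio.h>

    #define PAGE_SIZE   4096UL                /* 2^12 bytes */
    #define OFFSET_BITS 12                    /* log2(PAGE_SIZE) */

    int main(void)
    {
        unsigned long addr = 20500;                   /* logical address from the CPU */
        unsigned long p = addr >> OFFSET_BITS;        /* index into the page table */
        unsigned long d = addr & (PAGE_SIZE - 1);     /* offset within the page */
        /* physical address = frame(page_table[p]) * PAGE_SIZE + d */
        printf("page %lu, offset %lu\n", p, d);       /* page 5, offset 20 */
        return 0;
    }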

[Figure: page table architecture]

Segmentation. Segmentation is a technique to break memory into logical pieces where each piece represents a group of related information: for example, data segments or a code segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging. Unlike paging, segments have varying sizes, which eliminates internal fragmentation; external fragmentation still exists, but to a lesser extent.
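A sketch of the (s, o) translation described next, with an invented two-entry segment table; unlike the paging split, the offset is checked against a per-segment limit:

    /* segment-table lookup with limit check */
    #include <stdio.h>

    struct segment { unsigned long base, limit; };

    struct segment seg_table[] = {
        {1400, 1000},                         /* segment 0, e.g. code  */
        {6300,  400},                         /* segment 1, e.g. stack */
    };

    long translate(unsigned s, unsigned long o)
    {
        if (o >= seg_table[s].limit)          /* offset checked against limit first */
            return -1;                        /* trap: reference beyond the segment */
        return (long)(seg_table[s].base + o);
    }

    int main(void)
    {
        printf("%ld\n", translate(1, 100));   /* 6400 */
        printf("%ld\n", translate(1, 500));   /* -1: beyond segment 1's limit */
        return 0;
    }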

An address generated by the CPU is divided into:
Segment number (s) -- the segment number is used as an index into a segment table which contains the base address of each segment in physical memory and a limit for the segment.
Segment offset (o) -- the segment offset is first checked against the limit and then combined with the base address to define the physical memory address.

Deadlock. In an operating system, a deadlock is a situation which occurs when a process or thread enters a waiting state because a resource it requested is being held by another waiting process, which in turn is waiting for another resource. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.

Necessary conditions (deadlock characterisation). ALL four of these must hold simultaneously for a deadlock to occur:
Mutual exclusion: one or more resources must be held by a process in a non-sharable (exclusive) mode.
Hold and wait: a process holds a resource while waiting for another resource.
No preemption: there is only voluntary release of a resource; nobody else can make a process give up a resource.
Circular wait: process A waits for process B, which waits for process C, ..., which waits for process A.

Prevention

Do not allow one of the four conditions to occur.

Mutual exclusion:
a) Automatically holds for printers and other non-sharables.
b) Shared entities (read-only files) don't need mutual exclusion (and aren't susceptible to deadlock).
c) Prevention is not possible here, since some devices are intrinsically non-sharable.

Hold and wait:
a) Collect all resources before execution.
b) A particular resource can only be requested when no others are being held. A sequence of resources is always collected beginning with the same one.
c) Utilization is low; starvation is possible.

No preemption:
a) Release any resource already being held if the process can't get an additional resource.
b) Allow preemption: if a needed resource is held by another process, which is also waiting on some resource, steal it. Otherwise wait.

Circular wait:
a) Number resources and only request them in ascending order (a minimal sketch of this rule appears below).
b) EACH of these prevention techniques may cause a decrease in utilization and/or resources. For this reason, prevention isn't necessarily the best technique.
c) Prevention is generally the easiest to implement.

Examples. Any deadlock situation can be compared to the classic "chicken or egg" problem. [4] It can also be considered a paradoxical "Catch-22" situation. [5] A real-world example would be an illogical statute passed by the Kansas legislature in the early 20th century, which stated: [1][6] "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone."
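The ascending-order rule from (a) above, sketched with POSIX mutexes; ordering here is by address, one common way to number resources when no natural hierarchy exists:

    /* always acquire the lower-ordered lock first: no cycle can form */
    #include <pthread.h>
    #include <stdint.h>

    pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    void lock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
    {
        if ((uintptr_t)x > (uintptr_t)y) {            /* order by address */
            pthread_mutex_t *t = x; x = y; y = t;
        }
        pthread_mutex_lock(x);
        pthread_mutex_lock(y);
    }

    void unlock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
    {
        pthread_mutex_unlock(x);
        pthread_mutex_unlock(y);
    }

Whether one thread works "A then B" and another "B then A", both end up locking in the same global order, so circular wait is impossible.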

A simple computer-based example is as follows. Suppose a computer has three CD drives and three processes. Each of the three processes holds one of the drives. If each process now requests another drive, the three processes will be in a deadlock. Each process will be waiting for the "CD drive released" event, which can only be caused by one of the other waiting processes. Thus, it results in a circular chain.

Moving on to the source-code level, a deadlock can occur even in the case of a single thread and one resource (protected by a mutex). Assume there is a function func1() which does some work on the resource, locking the mutex at the beginning and releasing it after it's done. Next, somebody creates a different function func2() following that pattern on the same resource (lock, do work, release) but decides to include a call to func1() to delegate a part of the job. What will happen is that the mutex will be locked once when entering func2() and then again at the call to func1(), resulting in a deadlock if the mutex is not reentrant (i.e. the plain "fast mutex" variety); a sketch of this scenario appears at the end of this passage.

Necessary conditions. A deadlock situation can arise if all of the following conditions hold simultaneously in a system: [1]
1. Mutual exclusion: at least one resource must be held in a non-shareable mode. [1] Only one process can use the resource at any given instant of time.
2. Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it.
4. Circular wait: a process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, and so on until PN is waiting for a resource held by P1. [1][7]
These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. [7] Unfulfillment of any of these conditions is enough to preclude a deadlock from occurring.

Avoiding database deadlocks. An effective way to avoid database deadlocks is to follow this approach from the Oracle Locking Survival Guide: "Application developers can eliminate all risk of enqueue deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order." [8] This single sentence needs much explanation to understand the recommended solution. First, it highlights the fact that processes must be inside a transaction for deadlocks to happen. Note that some database systems can be configured to cascade deletes, which creates an implicit transaction that can then cause deadlocks. Also, some DBMS vendors offer row-level locking, a type of record locking which greatly reduces the chance of deadlocks, as opposed to page-level locking, which creates many times more locks. Second, by "multiple resources" this means more than one row in one or more tables. An example of locking in the same order would be to process all INSERTs first, all UPDATEs second, and all DELETEs last, and within the processing of each of these to handle all parent-table changes before child-table changes; and to process table changes in the same order, such as alphabetically or ordered by an ID or account number.
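The single-thread func1()/func2() scenario above, sketched in C; with a plain non-reentrant mutex (the default type on most systems), the second lock blocks forever:

    /* self-deadlock on a non-reentrant mutex */
    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;    /* plain, non-reentrant */

    void func1(void)
    {
        pthread_mutex_lock(&m);               /* blocks forever if caller holds m */
        /* ... work on the shared resource ... */
        pthread_mutex_unlock(&m);
    }

    void func2(void)
    {
        pthread_mutex_lock(&m);
        func1();                              /* second lock of m: never returns */
        pthread_mutex_unlock(&m);             /* never reached */
    }

A recursive (reentrant) mutex type would avoid this particular deadlock, at the cost of hiding the double-lock bug.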
Third, eliminating all risk of deadlocks is difficult to achieve, as the DBMS has automatic lock escalation features that raise row-level locks into page locks, which can be escalated to table locks. Although the risk or chance of experiencing a deadlock will not go to zero, as deadlocks tend to happen more on large, high-volume, complex systems, it can be greatly reduced, and when required the software can be enhanced to retry transactions when a deadlock is detected. Fourth, deadlocks can result in data loss if the software is not developed to use transactions on every interaction with a DBMS, and the data loss is difficult to locate and creates unexpected errors and problems. Deadlocks are a challenging problem to correct, as they result in data loss, are difficult to isolate, create unexpected problems, and are time-consuming to fix. Modifying every section of software code in a large system that accesses the database to always lock resources in the same order when the order is inconsistent takes significant resources and testing to implement. That, and the use of the strong word "dead" in front of "lock", are some of the reasons why deadlocks have a "this is a big problem" reputation.

Deadlock handling. Most current operating systems cannot prevent a deadlock from occurring. [1] When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. [9] The major approaches are as follows.

Ignoring deadlock. In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. [9][10] This approach was initially used by MINIX and UNIX. [7] It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable.

Detection. Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred, and subsequently it is corrected. An algorithm is employed that tracks resource allocation and process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system. [10] Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite state model on which it performs a progress analysis and finds all possible terminal sets in the model; these then each represent a deadlock. After a deadlock is detected, it can be corrected by using one of the following methods:
1. Process termination: one or more processes involved in the deadlock may be aborted. We can choose to abort all processes involved in the deadlock; this ensures that the deadlock is resolved with certainty and speed, but the expense is high, as partial computations will be lost. Or we can choose to abort one process at a time until the deadlock is resolved; this approach has high overhead, because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as the priority and age of the process.
2. Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken.

Prevention. Deadlock prevention works by preventing one of the four Coffman conditions from occurring.
Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled, but even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The hold and wait or resource holding condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: first they must release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods; also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. [1] (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
The no preemption condition may also be difficult or impossible to avoid, as a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process holding some resources requests another resource that cannot be immediately allocated to it, this condition may be removed by releasing all the resources that process currently holds.
The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering, and resources are requested in increasing order of the enumeration. [1] Dijkstra's solution can also be used.

Avoidance.

Deadlock can be avoided if certain information about processes is available to the operating system before allocation of resources, such as which resources a process will consume in its lifetime. For every resource request, the system sees whether granting the request will mean that the system will enter an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that will lead to safe states. [1] In order for the system to be able to determine whether the next state will be safe or unsafe, it must know in advance at any time: the resources currently available, the resources currently allocated to each process, and the resources that will be required and released by these processes in the future.

It is possible for a process to be in an unsafe state without this resulting in a deadlock. The notion of safe/unsafe states only refers to the ability of the system to enter a deadlock state or not. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock.

One known algorithm that is used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. [1] However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.

Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking technique. In both of these algorithms there exists an older process (O) and a younger process (Y). Process age can be determined by a timestamp at process creation time: smaller timestamps are older processes, while larger timestamps represent younger processes.

                                    Wait/Die    Wound/Wait
    O needs a resource held by Y    O waits     Y dies
    Y needs a resource held by O    Y dies      Y waits
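A compact sketch of the safety check at the heart of the Banker's algorithm mentioned above, for N processes and M resource types; Need = Max - Allocation, and the state is safe if every process can eventually finish:

    /* Banker's algorithm safety check */
    #define N 5                               /* processes (example sizes) */
    #define M 3                               /* resource types */

    int is_safe(int avail[M], int alloc[N][M], int need[N][M])
    {
        int work[M], finished[N] = {0};
        for (int j = 0; j < M; j++) work[j] = avail[j];

        for (int pass = 0; pass < N; pass++) {        /* at most N rounds */
            for (int i = 0; i < N; i++) {
                if (finished[i]) continue;
                int ok = 1;                           /* can process i finish now? */
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { ok = 0; break; }
                if (ok) {                             /* it runs, then releases all */
                    for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                    finished[i] = 1;
                }
            }
        }
        for (int i = 0; i < N; i++)
            if (!finished[i]) return 0;               /* unsafe: someone is stuck */
        return 1;                                     /* safe state */
    }

A request is granted only if, after provisionally granting it, is_safe() still returns 1.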

Another way to avoid deadlock is to avoid blocking, for example by using non-blocking synchronization or read-copy-update.

File systems and operating systems. Many operating systems include support for more than one file system. Sometimes the OS and the file system are so tightly interwoven that it is difficult to separate out file system functions. There needs to be an interface provided by the operating system software between the user and the file system. This interface can be textual (such as that provided by a command line interface like the Unix shell or OpenVMS DCL) or graphical (such as that provided by a graphical user interface, for example file browsers). If graphical, the metaphor of the folder, containing documents, other files, and nested folders, is often used (see also: directory and folder).

Unix and Unix-like operating systems. Unix-like operating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means that, in those systems, there is one root directory, and every file existing on the system is located under it somewhere. Unix-like systems can use a RAM disk or network shared resource as their root directory.

Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is called mounting a file system. For example, to access the files on a CD-ROM, one must tell the operating system "take the file system from this CD-ROM and make it appear under such-and-such directory". The directory given to the operating system is called the mount point; it might, for example, be /media. The /media directory exists on many Unix systems (as specified in the Filesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only the administrator (i.e. root user) may authorize the mounting of file systems.

Unix-like operating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose. In many situations, file systems other than the root need to be available as soon as the operating system has booted. All Unix-like systems therefore provide a facility for mounting file systems at boot time. System administrators define these file systems in the configuration file fstab (vfstab in Solaris), which also indicates options and mount points. In some situations, there is no need to mount certain file systems at boot time, although their use may be desired thereafter. There are some utilities for Unix-like systems that allow the mounting of predefined file systems upon demand.

Removable media have become very common with microcomputer platforms. They allow programs and data to be transferred between machines without a physical connection. Common examples include USB flash drives, CD-ROMs, and DVDs. Utilities have therefore been developed to detect the presence and availability of a medium and then mount that medium without any user intervention. Progressive Unix-like systems have also introduced a concept called supermounting; see, for example, the Linux supermount-ng project. For example, a floppy disk that has been supermounted can be physically removed from the system. Under normal circumstances, the disk should have been synchronized and then unmounted before its removal.
Provided synchronization has occurred, a different disk can be inserted into the drive. The system automatically notices that the disk has changed and updates the mount point contents to reflect the new medium. An automounter will automatically mount a file system when a reference is made to the directory atop which it should be mounted. This is usually used for file systems on network servers, rather than relying on events such as the insertion of media, as would be appropriate for removable media.

Linux. Linux supports many different file systems, but common choices for the system disk on a block device include the ext* family (such as ext2, ext3 and ext4), XFS, JFS, ReiserFS and btrfs. For raw flash without a flash translation layer (FTL) or Memory Technology Device (MTD), there are UBIFS, JFFS2, and YAFFS, among others. SquashFS is a common compressed read-only file system.

Solaris. The Sun Microsystems Solaris operating system in earlier releases defaulted to (non-journaled or non-logging) UFS for bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS. Support for other file systems and significant enhancements were added over time, including Veritas Software Corp. (journaling) VxFS, Sun Microsystems (clustering) QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128-bit compressible, and error-correcting) ZFS. Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or journaling was added to UFS in Sun's Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of the Solaris operating system later supported bootable ZFS. Logical volume management allows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may use Solaris Volume Manager (formerly known as Solstice DiskSuite). Multiple operating systems (including Solaris) may use Veritas Volume Manager. Modern Solaris-based operating systems eclipse the need for volume management through leveraging virtual storage pools in ZFS.

OS X. OS X uses a file system inherited from classic Mac OS called HFS Plus; Apple also uses the term "Mac OS Extended". [15][16] HFS Plus is a metadata-rich and case-preserving but (usually) case-insensitive file system. Due to the Unix roots of OS X, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure, and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter. Filenames can be up to 255 characters, and HFS Plus uses Unicode to store filenames. On OS X, the filetype can come from the type code, stored in the file's metadata, or from the filename extension. HFS Plus has three kinds of links: Unix-style hard links, Unix-style symbolic links, and aliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code in userland. OS X also supported the UFS file system, derived from the BSD Unix Fast File System via NeXTSTEP. However, as of Mac OS X Leopard, OS X could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard. [17] As of Mac OS X Lion, UFS support was completely dropped. Newer versions of OS X are capable of reading and writing to the legacy FAT file systems (16 and 32) common on Windows.
OS X also supported the UFS file system, derived from the BSD Unix Fast File System via NeXTSTEP. However, as of Mac OS X Leopard, OS X could no longer be installed on a UFS volume, nor could a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[17] As of Mac OS X Lion, UFS support was completely dropped.

Newer versions of OS X are capable of reading and writing the legacy FAT file systems (FAT16 and FAT32) common on Windows. They are also capable of reading the newer NTFS file systems for Windows. In order to write to NTFS file systems on OS X versions prior to 10.6 (Snow Leopard), third-party software is necessary. Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[18] Finally, OS X has supported reading and writing the exFAT file system since Mac OS X Snow Leopard, starting from version 10.6.5.[19]

PC-BSD

PC-BSD is a desktop version of FreeBSD, which inherits FreeBSD's ZFS support, similarly to FreeNAS. The graphical installer of PC-BSD can handle / (root) on ZFS and RAID-Z pool installs and disk encryption using Geli right from the start in an easy, convenient (GUI) way. The current PC-BSD 9.0+ "Isotope Edition" has ZFS file system version 5 and ZFS storage pool version 28.

Plan 9

Plan 9 from Bell Labs treats everything as a file, and everything is accessed as a file would be (i.e., no ioctl or mmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations on file descriptors. The 9P protocol removes the difference between local and remote files. These file systems are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system. The Inferno operating system shares these concepts with Plan 9.
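As a sketch of what treating everything as a file means in practice, the fragment below shows the conventional Plan 9 idiom for creating a TCP connection purely through file I/O on the /net hierarchy; it is written in portable C style for readability (native Plan 9 code would typically use the dial() library routine), and the address is an assumed example.

    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char dir[64];
        int n;

        /* Opening the clone file allocates a new TCP connection; reading it
           back yields the connection's number in the /net/tcp hierarchy. */
        int ctl = open("/net/tcp/clone", O_RDWR);
        if (ctl < 0)
            return 1;
        n = read(ctl, dir, sizeof dir - 1);
        if (n <= 0)
            return 1;
        dir[n] = '\0';

        /* Dialing is just writing a control message; the address is an
           assumed example. There is no socket() call -- only file I/O. */
        const char *msg = "connect 192.0.2.1!80";
        write(ctl, msg, strlen(msg));
        close(ctl);
        return 0;
    }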

Microsoft Windows

[Figure: directory listing in a Windows command shell]

Windows makes use of the FAT, NTFS, exFAT, and ReFS file systems (the last of these is only supported and usable in Windows Server 2012; Windows cannot boot from it).

Windows uses a drive letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. Drive C: is most commonly used for the primary hard disk partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which assume that the drive the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk partition, can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, and ultimately from IBM's CP/CMS of 1967.

FAT

Main article: File Allocation Table

The family of FAT file systems is supported by almost all operating systems for personal computers, including all versions of Windows, MS-DOS/PC DOS, and DR-DOS. (PC DOS is an OEM version of MS-DOS; MS-DOS was originally based on SCP's 86-DOS. DR-DOS was based on Digital Research's Concurrent DOS, a successor of CP/M-86.) The FAT file systems are therefore well suited as a universal exchange format between computers and devices of almost any type and age.

The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor in Standalone Disk BASIC and the short-lived MDOS/MIDAS project. Over the years, the file system has been expanded from FAT12 to FAT16 and FAT32. Various features have been added to the file system, including subdirectories, codepage support, extended attributes, and long filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking and for volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows.

The FAT12 and FAT16 file systems had a limit on the number of entries in the root directory of the file system and had restrictions on the maximum size of FAT-formatted disks or partitions. FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS.

FAT12, FAT16, and FAT32 also have a limit of eight characters for the file name and three characters for the extension (such as .exe). This is commonly referred to as the 8.3 filename limit. VFAT, an optional extension to FAT12, FAT16, and FAT32, introduced in Windows 95 and Windows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards-compatible fashion.

NTFS

Main article: NTFS

NTFS, introduced with the Windows NT operating system in 1993, allowed ACL-based permission control. Other features also supported by NTFS include hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount points for other file systems, symlinks, junctions, remote storage links).
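As a brief illustration of two of these features, the sketch below (file names assumed) uses the Win32 API on an NTFS volume: CreateHardLinkA gives an existing file a second name, and the "file:stream" syntax writes to one of the file's multiple (alternate) data streams.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Second directory entry for an existing file (names are assumed;
           both must live on the same NTFS volume). */
        if (!CreateHardLinkA("copy.txt", "notes.txt", NULL))
            printf("CreateHardLinkA failed: %lu\n", GetLastError());

        /* The "file:stream" syntax addresses one of NTFS's multiple file
           streams; the data lands in notes.txt's "draft" stream, not in a
           separate file. */
        FILE *s = fopen("notes.txt:draft", "w");
        if (s != NULL) {
            fputs("hidden in an alternate data stream\n", s);
            fclose(s);
        }
        return 0;
    }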
exFAT

Main article: exFAT

exFAT is a proprietary and patent-protected file system with certain advantages over NTFS with regard to file system overhead. exFAT is not backward compatible with FAT file systems such as FAT12, FAT16, or FAT32. The file system is supported in newer Windows systems, such as Windows Server 2003, Windows Vista, Windows 2008, Windows 7, and Windows 8, and, more recently, support has been added for Windows XP.[20] exFAT is supported in Mac OS X starting with version 10.6.5 (Snow Leopard).[19] Support in other operating systems is sparse, since Microsoft has not published the specifications of the file system and implementing support for exFAT requires a license.

OpenVMS

Main article: Files-11

MVS (IBM mainframe)

Main article: MVS (MVS filesystem)

Other file systems

- The Prospero File System is a file system based on the Virtual System Model.[21] The system was created by Dr. B. Clifford Neuman of the Information Sciences Institute at the University of Southern California.[22]
- RSRE FLEX file system, written in ALGOL 68.
- The file system of the Michigan Terminal System (MTS) is interesting because: (i) it provides "line files", in which record lengths and line numbers are associated as metadata with each record in the file; lines can be added, replaced, updated with records of the same or different length, and deleted anywhere in the file without the need to read and rewrite the entire file; (ii) using program keys, files may be shared with, or permitted to, commands and programs in addition to users and groups; and (iii) there is a comprehensive file locking mechanism that protects both the file's data and its metadata.[23][24] A purely illustrative sketch of the line-file record idea follows below.
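Here is that sketch: a hypothetical C model (not MTS code, which predates this notation) of the per-record metadata a line file keeps, which is what allows one line to be replaced by a record of a different length without rewriting the rest of the file.

    #include <stdint.h>

    /* Hypothetical model of one line-file record's metadata: each record
       carries its own line number and length, so lines can be inserted,
       replaced with longer or shorter data, or deleted independently. */
    struct line_record {
        int32_t  line_number;   /* e.g. 10, 20, 30 ... */
        uint16_t length;        /* length of this record's data in bytes */
        uint32_t data_offset;   /* where the record's bytes live in the file */
    };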