Serial Computing

Upload: divya-nigam

Post on 10-Apr-2018


  • 8/8/2019 Serial Computing

    1/22

    WHAT IS SERIAL COMPUTING?

    Traditionally, software has been written for serial computation:

    To be run on a single computer having a single Central Processing Unit (CPU);

    A problem is broken into a discrete series of instructions;

    Instructions are executed one after another;

    Only one instruction may execute at any moment in time.


    SERIAL COMPUTING


    WHAT IS PARALLEL COMPUTING?

    Parallel computing is defined as the simultaneous use of more than one processor to execute a program (divide a large task into smaller tasks).

    To be run using multiple CPUs;

    A problem is broken into discrete parts that can be solved concurrently;

    Each part is further broken down into a series of instructions;

    Instructions from each part execute simultaneously on different CPUs.

    Several operations can be performed simultaneously, so the total computation time is reduced; ideally, a parallel version running on three processors has the potential to be three times as fast as the sequential machine.
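The divide-and-combine idea above can be sketched in Python with the standard multiprocessing module (the function name parallel_sum and the chunking scheme are illustrative, not from the slides): a large summation is split into smaller tasks whose partial results are computed concurrently and then combined.

```python
from multiprocessing import Pool

def parallel_sum(data, n_parts=4):
    """Split data into n_parts chunks and sum them in separate processes."""
    step = (len(data) + n_parts - 1) // n_parts      # chunk size, rounded up
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(len(chunks)) as pool:
        partials = pool.map(sum, chunks)             # the "smaller tasks", run concurrently
    return sum(partials)                             # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))           # 499500, same as sum(range(1000))
```

The answer is identical to the serial sum; only the work is divided across processors.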


    RESOURCES

    The compute resources can include:

    A single computer with multiple processors;

    An arbitrary number of computers connected by a network;

    A combination of both.

    The computational problems are broken apart into discrete pieces of work that can be solved simultaneously.

    It is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.

    Dependencies are important to parallel programming because they are one of the primary inhibitors to parallelism, e.g. a write to x followed by a read of x, as in a loop of the form:

    for (i = 1; i < n; i++)
        a[i] = a[i - 1] + 1;   /* iteration i reads the value iteration i-1 wrote */
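The contrast can be sketched in Python (function names illustrative): squaring each element has no cross-iteration dependency, so the iterations can be handed to a process pool, while a running sum cannot be parallelized this way because step i needs the result of step i-1.

```python
from multiprocessing import Pool

def square(x):
    return x * x

def independent(data):
    # No iteration depends on another, so the work can be farmed out safely.
    with Pool(2) as pool:
        return pool.map(square, data)

def dependent(data):
    # Each step reads the value written by the previous step (write x, read x),
    # so the iterations must run one after another.
    out = [0] * len(data)
    out[0] = data[0]
    for i in range(1, len(data)):
        out[i] = out[i - 1] + data[i]
    return out

if __name__ == "__main__":
    print(independent([1, 2, 3, 4]))   # [1, 4, 9, 16]
    print(dependent([1, 2, 3, 4]))     # [1, 3, 6, 10]
```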


    ARCHITECTURE FOR PARALLEL COMPUTING

    SISD: a single processing unit receives a single stream of instructions that operate on a single stream of data.

    MISD: the same input is subjected to several different operations.

    SIMD: all processors execute the same instruction, each on different data.

    MIMD: all processors execute different instructions, each on different data.


    TWO FORMS OF INTERPROCESSOR COMMUNICATION

    With N processors, each with its own individual data stream (i.e. SIMD and MIMD), it is usually necessary to communicate data/results between processors.

    Two types:

    Using a shared-memory parallel computer;

    Using a distributed-memory parallel computer (requires distributed-memory software).


    SHARED MEMORY

    X is a shared variable:

    1. First P1 reads, then P2.

    2. P2 reads, then P1.

    3. If both read simultaneously but do not execute simultaneously, the value left in memory is the one written by whichever executes last.

    4. If both execute simultaneously, the problem of non-determinacy arises, caused by a race condition (two statements in concurrent tasks access the same memory location).

    5. Solved by synchronising the use of the shared data (i.e. x = x + 1 and x = x + 2 cannot be executed at the same time).

    Multiple processors are connected to multiple memory modules such that each memory location has a single address space throughout the system.

    This solves the interprocessor communication problem but introduces the problem of simultaneous access to the same location in memory.
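Point 5 can be sketched with Python's threading module (names illustrative): two threads increment a shared counter, and a lock synchronises the read-modify-write so the final value is deterministic.

```python
import threading

counter = 0                       # the shared variable "x"
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:                # synchronise the use of the shared data
            counter += 1          # read-modify-write, now atomic w.r.t. the other thread

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # 200000; without the lock the result is unpredictable
```

Removing the `with lock:` line reintroduces exactly the race condition described in point 4.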


    DISTRIBUTED MEMORY

    Independent computers are connected via an interconnection network.

    Each computer has its own memory address space; a processor can only access its own local memory.

    To access a value residing in a different computer, a message is sent to the desired processor via MPI (Message Passing Interface):

    P1: receive(x, P2)        P2: send(x, P1)

    The value of x is explicitly passed from P2 to P1. This is known as message passing.
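The receive/send exchange can be sketched with Python's multiprocessing.Pipe standing in for an MPI channel (function names p1/p2 are illustrative; a real MPI program would use an MPI library instead): the value is copied from P2's memory to P1's, not shared.

```python
from multiprocessing import Process, Pipe

def p2(conn):
    x = 42                        # a value living in P2's own local memory
    conn.send(x)                  # send(x, P1)
    conn.close()

def p1():
    parent_end, child_end = Pipe()
    worker = Process(target=p2, args=(child_end,))
    worker.start()
    x = parent_end.recv()         # receive(x, P2): the value is copied across
    worker.join()
    return x

if __name__ == "__main__":
    print(p1())                   # 42
```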


    ADVANTAGES OF PARALLEL COMPUTING

    Solves problems that require more memory space than a single CPU can provide (the combined machines supply a very large memory).

    Much faster than the fastest serial computer, whose speed can only be improved by increasing the number of transistors on a single chip.

    Much cheaper than the fastest serial computer.


    WHAT IS THE DIFFERENCE BETWEEN THREAD & PROCESS?

    The ability of a program to do multiple things simultaneously is implemented through threads (a thread is a basic unit of execution).

    A thread is scheduled by the operating system and executed by the CPU.

    A thread is a portion of a program that the operating system tells the CPU to run: a stream of instructions.

    A thread can be defined as a semi-process with a definite starting point, an execution sequence and a terminating point.

    A process has its own memory area and data, but a thread shares memory and data with the other threads within the program's memory. A process/program, therefore, consists of many such threads, each running at the same time within the program and performing a unique task.
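The memory distinction can be sketched in Python (names illustrative): a thread appends to the very list the parent owns, while a separate process appends only to its own copy, leaving the parent's list untouched.

```python
import threading
from multiprocessing import Process

def append_to(lst):
    lst.append("seen")

def demo():
    shared = []
    # A thread runs in the same address space, so its append is visible here.
    t = threading.Thread(target=append_to, args=(shared,))
    t.start(); t.join()
    # A process has its own memory area: the child appends to a copy of the
    # list, and the parent's list is unchanged.
    p = Process(target=append_to, args=(shared,))
    p.start(); p.join()
    return shared

if __name__ == "__main__":
    print(demo())                 # ['seen'] -- only the thread's append shows up
```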


    MULTITHREADING

    Multithreading is a program's ability to break itself down into multiple concurrent threads that can be executed separately by the computer.

    Software architects began writing operating systems that supported running pieces of programs, called threads.

    Threads are organized into processes, which are composed of one or more threads.

    Multithreading operating systems made it possible for one thread to run while another was waiting for something to happen.

    Rather than being developed as one long sequence of instructions, programs are broken into logical operating sections.


    CONTINUED

    If the application performs operations that run independently of each other, those operations can be broken up into threads whose execution is scheduled and controlled by the operating system.

    On single-processor systems, these threads are executed sequentially, not concurrently, but the time-slicing of multitasking gives the illusion that the threads are executing simultaneously.

    Large programs that use multithreading often run many more than two threads.


    TYPES OF MULTITHREADING

    Functionally decomposed multithreading: the processor switches back and forth between the threads quickly enough that all of them appear to run simultaneously. Threaded by functionality concern (applications).

    Data-decomposed multithreading: multithreaded programs can also be written to execute the same task on parallel threads, with the threads differing only in the data that each one processes. Threaded for throughput performance.
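Data decomposition can be sketched with Python's concurrent.futures (the word-counting task and function names are illustrative): every thread runs the identical function, and only the slice of data differs.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(lines):
    # The task is identical for every thread; only the data differs.
    return sum(len(line.split()) for line in lines)

def data_decomposed_count(lines, n_threads=2):
    mid = len(lines) // 2
    parts = [lines[:mid], lines[mid:]]               # decompose the data
    with ThreadPoolExecutor(n_threads) as pool:
        return sum(pool.map(word_count, parts))      # same task, parallel threads

text = ["one two three", "four five", "six", "seven eight nine ten"]
print(data_decomposed_count(text))                   # 10 words in total
```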


    ADVANTAGES OF MULTITHREADING

    Improved performance and concurrency.

    Multithreading allows you to achieve multitasking in a program. Multitasking is the ability to execute more than one task at the same time.

    Minimized system resource usage; simultaneous access to multiple applications.

    Program structure simplification.

    Better responsiveness to the user: operations that can take a long time can be put in a separate thread.


    DISADVANTAGES OF MULTITHREADING

    Creating overhead for the processor: each time the CPU finishes with a thread it must record the point it has reached (in a per-thread stack in which the thread's state is stored), because the next time the processor starts that thread it must know where it finished and where to resume.

    The code becomes more complex: using threads makes the code harder to read and debug.

    Sharing resources among threads can lead to deadlocks (P1 holds R1 and P2 holds R2, while P1 needs R2 and P2 needs R1 to complete its task, i.e. deadlock).

    Difficulty of writing code.

    Difficulty of debugging.
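The standard remedy for the P1/R1, P2/R2 deadlock is to make every thread acquire the shared resources in the same order, which the following Python sketch illustrates (lock and thread names are illustrative):

```python
import threading

r1 = threading.Lock()
r2 = threading.Lock()
done = []

def worker(name):
    # Every thread acquires the locks in the SAME order (r1, then r2), which
    # prevents the circular wait where P1 holds R1 wanting R2 while P2 holds
    # R2 wanting R1.
    with r1:
        with r2:
            done.append(name)

t1 = threading.Thread(target=worker, args=("P1",))
t2 = threading.Thread(target=worker, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))               # ['P1', 'P2'] -- both threads finish; no deadlock
```

If one thread instead took r2 before r1, the two threads could each grab one lock and then wait forever for the other.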


    HYPER-THREADING TECHNOLOGY

    Hyper-Threading Technology (an Intel technology used in the Pentium 4 processor family) boosts performance by allowing multiple threads of software applications to run on a single processor at one time, sharing the same core processor resources.

    Hyper-Threading Technology is a form of simultaneous multithreading (SMT): multiple threads execute on a single processor without switching.

    A processor with Hyper-Threading Technology appears as two logical processors, each of which has its own copy of the processor architectural state.

    The logical processors share a single set of physical execution resources.


    Each logical processor can respond to interrupts independently.

    The first logical processor can track one software thread while the second logical processor tracks another software thread simultaneously.

    Because the two threads share one set of execution resources, the second thread can use resources that would be idle if only one thread were executing.


    HYPER-THREADING

    When we put a regular processor under 100% load, we never fully utilize 100% of the execution units.

    With a Hyper-Threading-enabled processor, those spare execution units can be put to use computing other things.


    RESOURCE UTILISATION

    In a superscalar processor, half the processor remains unused.

    In the multiprocessing portion of the demonstration we see a dual-CPU system working on two separate threads.

    Finally, in the Hyper-Threading-enabled processor, both threads are simultaneously being computed, and the CPU's efficiency has increased from around 50% to over 90%.

    Dual Hyper-Threading-enabled processors can work on four independent threads at the same time.


    ROLE OF OPERATING SYSTEM IN HYPER-THREADING

    Operating systems (including Microsoft Windows and Linux*) divide their workload up into processes and threads that can be independently scheduled and dispatched to run on a processor.

    The operating system also plays a key role in how well Hyper-Threading works, since it schedules the logical processors as if they were in a multiprocessing system.

    The OS assigns operations to the independent logical processors, and if it determines that one of the logical CPUs is to remain idle, it will issue a HALT command to the free logical processor, thus devoting all of the system's resources to the working logical processor.

    The OS allows you to use your computer without any knowledge of coding; without an operating system, your hardware would not work at all until you wrote your own code for it.


    ADVANTAGES OF HYPER-THREADING

    Hyper-Threading has the potential to significantly boost system performance under certain circumstances:

    Improved reaction and response time;

    Multiple threads can run simultaneously;

    No performance loss if only one thread is active; increased performance with multiple threads.

    Disadvantages:

    Increases the complexity of the application.

    Sharing of resources, such as global data, can introduce common parallel programming errors such as storage conflicts and other race conditions. Debugging such problems is difficult, as they are non-deterministic.

    To take advantage of hyper-threading performance, serial execution cannot be used.

    Threads are non-deterministic and involve extra design effort.

    Threads have increased overhead.