Unit II Lecture Plan



Option 1 is no option at all: we wait and see what happens. This leads to unhappy users and to unwise purchases.

Option 2 sounds more promising. The analyst may take the position that it is impossible to project future demand with any degree of certainty, and that it is therefore pointless to attempt some exact modeling procedure. Rather, a rough-and-ready projection will provide ballpark estimates. The problem with this approach is that the behavior of most systems under a changing load is not what one would intuitively expect. If there is an environment in which there is a shared facility (e.g., a network, a transmission line, a time-sharing system), then the performance of that system typically responds in an exponential way to increases in demand.

Figure 1 is a typical example. The upper line shows what happens to user response time on a shared facility as the load on that facility increases. The load is expressed as a fraction of capacity. Thus, if we are dealing with a disk that is capable of transferring 1000 blocks per second, then a load of 0.5 represents a transfer of 500 blocks per second, and the response time is the amount of time it takes to retransmit any incoming block. The lower line is a simple projection based on knowledge of the system's behavior at low load.


SINGLE SERVER QUEUE:

The simplest queuing system is depicted in Figure 2. The central element of the system is a server, which provides some service to items. Items from some population of items arrive at the system to be served. If the server is idle, an item is served immediately. Otherwise, an arriving item joins a waiting line. When the server has completed serving an item, the item departs. If there are items waiting in the queue, one is immediately dispatched to the server. The server in this model can represent anything that performs some function or service for a collection of items.

QUEUE PARAMETERS:

Figure 2 also illustrates some important parameters associated with a queuing model. Items arrive at the facility at some average rate (items arriving per second) λ. At any given time, a certain number of items will be waiting in the queue (zero or more); the average number waiting


is w, and the mean time that an item must wait is Tw. Tw is averaged over all incoming items, including those that do not wait at all. The server handles incoming items with an average service time Ts; this is the time interval between the dispatching of an item to the server and the departure of that item from the server. Utilization, ρ, is the fraction of time that the server is busy, measured over some interval of time. Finally, two parameters apply to the system as a whole. The average number of items resident in the system, including the item being served (if any) and the items waiting (if any), is r; and the average time that an item spends in the system, waiting and being served, is Tr; we refer to this as the mean residence time.

If we assume that the capacity of the queue is infinite, then no items are ever lost from the system; they are just delayed until they can be served. Under these circumstances, the departure rate equals the arrival rate. As the arrival rate, which is the rate of traffic passing through the system, increases, the utilization increases and with it, congestion. The queue becomes longer, increasing waiting time. At ρ = 1, the server becomes saturated, working 100% of the time. Thus, the theoretical maximum input rate that can be handled by the system is λmax = 1/Ts. However, queues become very large near system saturation, growing without bound when ρ = 1. Practical considerations, such as response time requirements or buffer sizes, usually limit the input rate for a single server to 70-90% of the theoretical maximum.
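As a quick numeric illustration of these definitions, the sketch below computes the utilization and the saturation limit. The relation ρ = λ·Ts is not written out in the text above, but it follows directly from the definitions (the server is busy a fraction λ·Ts of the time); the function name and example values are ours, for illustration only.

```python
def single_server_stats(arrival_rate, service_time):
    """Illustrative helper (hypothetical name): basic single-server relations.

    arrival_rate: lambda, items arriving per second
    service_time: Ts, average seconds of service per item
    """
    rho = arrival_rate * service_time   # utilization: fraction of time busy
    lam_max = 1.0 / service_time        # theoretical maximum input rate
    return rho, lam_max

# Example: Ts = 2 ms gives lambda_max = 500 items/s; at lambda = 400
# items/s the utilization is 0.8, inside the practical 70-90% limit.
print(single_server_stats(400, 0.002))  # (0.8, 500.0)
```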

To proceed, we need to make some assumptions about this model:

Item population: Typically, we assume an infinite population. This means that the arrival rate is not altered by the loss of population. If the population is finite, then the population available for arrival is reduced by the number of items currently in the system; this would typically reduce the arrival rate proportionally.

Queue size: Typically, we assume an infinite queue size. Thus, the waiting line can grow without bound. With a finite queue, it is possible for items to be lost from the system. In practice, any queue is finite. In many cases, this will make no substantive difference to the analysis. We address this issue briefly, below.


Dispatching discipline: When the server becomes free, and if there is more than one item waiting, a decision must be made as to which item to dispatch next. The simplest approach is first-in, first-out; this discipline is what is normally implied when the term queue is used. Another possibility is last-in, first-out. One that you might encounter in practice is a dispatching discipline based on service time. For example, a packet-switching node may choose to dispatch packets on the basis of shortest first (to generate the most outgoing packets) or longest first (to minimize processing time relative to transmission time). Unfortunately, a discipline based on service time is very difficult to model analytically.

TOPIC III: CONGESTION CONTROL

Definition:

A situation is called congestion if performance degrades in a subnet because too many data packets are present, i.e., the traffic load temporarily exceeds the offered resources. Normally, the number of packets delivered is proportional to the number of packets sent. But if traffic increases too much, routers are no longer able to handle all the traffic and packets will get lost. With further growing traffic the subnet will collapse and no more packets are delivered.

CONGESTION CONTROL:

Congestion control involves all hosts, routers, store-and-forward processes and other factors that have something in common with the subnet's capacity. Flow control, by contrast, should slow down a sender that is trying to send more than the receiver can deal with. Some congestion control algorithms also implement some kind of slow-down message, so flow control and congestion control are unfortunately intermixed.


HOST CENTRIC ALGORITHMS:

Open Loop:
Open loop algorithms try to avoid congestion without making any corrections once the system is up. Essential points for open loop solutions are, e.g., deciding when to accept new traffic, when to discard which packets, and making scheduling decisions. All of these decisions are based on a sensible system design, so they do not depend on the current network state.

OPEN LOOP AND SOURCE DRIVEN:

Traffic Shaping:
TRAFFIC SHAPING is a generic term for a family of algorithms that avoid congestion on the sender's side without feedback messages. Therefore, an essential decision, the data rate, is either negotiated at connection set-up or statically built into the implementation used.

Leaky Bucket:
The LEAKY BUCKET algorithm generates a constant output flow. The name describes the way it works: like a bucket of water with a leak at the bottom.


This metaphor reflects typical network behavior: the drops of water are data packets, and the bucket is a finite internal queue sending one packet per clock tick.
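A minimal sketch of this behavior in Python (the class and method names are ours, not from any particular library): packets join a bounded queue, the queue overflows like the bucket, and each clock tick releases exactly one packet.

```python
from collections import deque

class LeakyBucket:
    """Sketch: a finite queue drained at a constant rate of one packet per tick."""

    def __init__(self, capacity):
        self.capacity = capacity   # bucket size: maximum queued packets
        self.queue = deque()

    def arrive(self, packet):
        """Offer a packet; drop it if the bucket is already full."""
        if len(self.queue) >= self.capacity:
            return False           # bucket overflows: packet is lost
        self.queue.append(packet)
        return True

    def tick(self):
        """One clock tick: emit exactly one packet, if any are queued."""
        return self.queue.popleft() if self.queue else None
```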

Token Bucket:

The TOKEN BUCKET algorithm is a variation of the aforementioned LEAKY BUCKET algorithm. The intention is to allow temporarily high output bursts if the source normally does not generate heavy traffic. One possible implementation uses credit points, or tokens, which are provided at a fixed time interval. These credit points can be accumulated in the bucket up to a limited number (= the bucket size). When submitting data, these credits have to be taken from the bucket, i.e., one credit is consumed per data unit (e.g., one byte or one frame) that is injected into the network. If the credit points are used up (the bucket is empty), the sender has to wait until it gathers new tokens in the next time interval.
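A corresponding sketch, again with hypothetical names; for simplicity it refills tokens continuously by elapsed time rather than in discrete intervals, which is a common variation:

```python
import time

class TokenBucket:
    """Sketch: tokens accrue at a fixed rate, capped by the bucket size."""

    def __init__(self, rate, bucket_size):
        self.rate = rate                  # tokens added per second
        self.bucket_size = bucket_size    # maximum accumulated tokens
        self.tokens = bucket_size         # start full: an initial burst is allowed
        self.last = time.monotonic()

    def send(self, units):
        """Consume one token per data unit; return False if the sender must wait."""
        now = time.monotonic()
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < units:
            return False                  # bucket empty: wait for new tokens
        self.tokens -= units
        return True
```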


OPEN LOOP AND DESTINATION DRIVEN:
Algorithms belonging to this group can be identified by their static behavior: once these implementations are running, they work regardless of how the network state changes. That means congestion is avoided on the receiver's side by means of a well-formulated specification. The question is what such algorithms may look like. They use the receiver's capabilities to influence the initial sender's behavior without any explicit indication.

CLOSED LOOP:

Closed loop solutions are the network implementation of a typical control circuit. Algorithms in this class depend on a feedback loop with three parts:
1. monitor the system to detect where and when congestion occurs,
2. pass this information to an action point, and
3. adjust system operation to deal with the congestion.


To detect congestion it is useful to monitor network values such as the percentage of packets discarded for lack of memory, the number of timed-out and therefore retransmitted packets, and average queue lengths, as well as packet delays such as round-trip times.

CLOSED LOOP AND IMPLICIT FEEDBACK:

Slow Start:

The Slow Start algorithm tries to avoid congestion by sending data packets defensively. To that end, two special variables named congestion window (cwnd) and Slow Start threshold (ssthresh) are kept on the sender's side.
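The text stops at the two variables; as an illustration of how they are typically used in classic TCP Slow Start (the function names and MSS-based byte counting below are our assumptions, not from the text): cwnd grows by one segment per acknowledgement while below ssthresh, so it doubles every round trip, and a loss halves the threshold and restarts the window.

```python
def on_ack(cwnd, ssthresh, mss):
    """Grow the congestion window on each acknowledgement."""
    if cwnd < ssthresh:
        return cwnd + mss              # Slow Start: doubles roughly every RTT
    return cwnd + mss * mss // cwnd    # congestion avoidance: ~1 MSS per RTT

def on_timeout(cwnd, mss):
    """React to a lost packet: halve the threshold, restart Slow Start."""
    ssthresh = max(cwnd // 2, 2 * mss)
    cwnd = mss
    return cwnd, ssthresh
```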

TOPIC IV: TRAFFIC MANAGEMENT:

There are a number of issues related to congestion control that might be included under the general category of traffic management. Congestion control is concerned with the operation of a network at high load. When a node is saturated it must discard packets; typically, the most recently arrived packets will be discarded. Here are some areas that can be used to refine the application of congestion control techniques and discard policy.

FAIRNESS:
As congestion develops, packet flows between sources and destinations will experience increased delays and also packet losses. Simply discarding packets on a last-in, first-out basis to control congestion would not be fair.

EXAMPLE: The following technique might promote fairness. A node can maintain a separate queue for each logical connection or source-destination pair.


If all of the queue buffers are of equal length, then the queues with the highest traffic load will suffer discards more frequently, allowing lower-traffic connections a fair share of the capacity. A sketch of this per-connection queuing appears below.
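A minimal sketch of that idea (the class name and buffer limit are illustrative assumptions): each source-destination pair gets its own bounded queue, so a heavy flow overflows its own buffer rather than crowding out light flows.

```python
from collections import deque

class FairQueues:
    """Sketch: one bounded queue per source-destination pair."""

    def __init__(self, per_queue_limit):
        self.limit = per_queue_limit
        self.queues = {}

    def enqueue(self, src, dst, packet):
        """Accept a packet into its connection's queue, or discard it."""
        q = self.queues.setdefault((src, dst), deque())
        if len(q) >= self.limit:
            return False   # this connection's buffer is full: discard
        q.append(packet)
        return True
```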

QUALITY OF SERVICE (QoS):

Different types of traffic flow in different ways.

APPLICATION TRAFFIC:
1. Voice & video: delay sensitive but loss insensitive
2. File transfer and email: delay insensitive but loss sensitive
3. Interactive graphics & interactive computing applications: delay sensitive but also loss sensitive

NETWORK MANAGEMENT TRAFFIC:

During times of congestion or failure, network management traffic is more important than application traffic.

It is important during periods of congestion that traffic flows with different requirements be treated differently and provided different qualities of service.

For example, a node might transmit higher-priority packets ahead of lower-priority packets in the same queue, or a node might maintain different queues for different QoS levels and give preferential treatment to the higher levels.

RESERVATIONS:

One way to avoid congestion and also to provide assured service to applications is to use a reservation scheme.

EXAMPLE: ATM networks use a reservation scheme.


When a logical connection is established, the network and the user enter into a traffic contract, which specifies a data rate and other characteristics of the traffic flow.

The network agrees to give a defined QoS so long as the traffic flow is within the contract parameters; excess traffic is either discarded or handled on a best-effort basis.

If the current outstanding reservations are such that the network resources are inadequate to meet a new reservation, then the new reservation is denied.

The main element of a reservation scheme is traffic policing. A node in the network to which the end system attaches monitors the traffic flow and compares it to the traffic contract.

Excess traffic is either discarded or marked to indicate that it is liable to discard or delay.

TOPIC V: CONGESTION CONTROL IN PACKET-SWITCHING NETWORKS:

There are many congestion control techniques in packet-switching networks. Some examples are shown below.

1. Sending control packets from a congested node to the source node (sketched below).
2. Relying on routing information.
3. Using end-to-end probe packets.
4. Adding congestion information to packets en route to the destination, by the intermediate nodes.
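Technique 1 is often called a choke packet. A minimal sketch under our own assumptions (the threshold value, names, and the send_choke callback are illustrative, not from the text):

```python
def check_congestion(queue_len, capacity, source, send_choke, threshold=0.8):
    """Technique 1: when this node's queue crosses a threshold,
    send a control (choke) packet back to the traffic source."""
    if queue_len > threshold * capacity:
        send_choke(source)   # ask the source to reduce its sending rate
```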

TOPIC VI: FRAME RELAY CONGESTION CONTROL:
• Congestion avoidance vs. recovery
• Discard control
• Explicit forward/backward congestion notification


• Implicit notification

FRAME RELAY CONGESTION TECHNIQUES:
• Discard Control (DE bit)
• Backward Explicit Congestion Notification (BECN)
• Forward Explicit Congestion Notification (FECN)
• Implicit congestion notification (sequence numbers in higher-layer PDUs)

DISCARD CONTROL:
• Committed Information Rate (CIR)
• Committed Burst Size (Bc): measured over a measurement interval T, where T = Bc / CIR
• Excess Burst Size (Be)
• Between Bc and Bc + Be: mark the DE bit
• Beyond Bc + Be: discard
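A sketch of the per-interval decision (the function and parameter names are ours): count the bits accepted during the current interval T and classify each new frame accordingly.

```python
def classify_frame(bits_in_interval, frame_bits, bc, be):
    """Frame Relay discard control over one interval T = Bc/CIR.

    bits_in_interval: bits already accepted in the current interval.
    Returns how the network should treat this frame.
    """
    total = bits_in_interval + frame_bits
    if total <= bc:
        return "forward"    # within the committed burst: carry normally
    if total <= bc + be:
        return "mark_DE"    # excess burst: forward with the DE bit set
    return "discard"        # beyond Bc + Be: discard
```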


FECN:
A frame carrying the FECN (Forward Explicit Congestion Notification) bit tells the receiving end system that congestion was experienced in the direction the frame was traveling.


BECN:
A frame carrying the BECN (Backward Explicit Congestion Notification) bit tells the sender that congestion exists in the direction opposite to the frame's travel, i.e., on the path its own traffic is taking.


IMPLICIT CONGESTION CONTROL:
With implicit congestion notification, the network sends no explicit signal; the end systems infer congestion from higher-layer information, such as missing sequence numbers in higher-layer PDUs or timed-out acknowledgements, and reduce their sending rate accordingly.
