Data Communications
Congestion in Data Networks
What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically
A data network is a network of queues
Generally 80% utilization is critical
Finite queues mean data may be lost
Figure 23.5 Incoming packet
Interaction of Queues
Effects of Congestion
Packets arriving are stored at input buffers (not ATM)
Routing decision made
Packet moves to output buffer
Packets queued for output are transmitted as fast as possible (statistical time division multiplexing)
If packets arrive too fast to be routed, or to be output, buffers will fill
Can discard packets
Can use flow control
Can propagate congestion through network
Ideal Performance
Top: As load increases, throughput directly increases
Middle: As load increases, delay increases
Bottom: Power is ratio of throughput to delay
Practical Performance
Ideal assumes infinite buffers and no overhead
Unfortunately, buffers are finite
Additional overhead occurs in exchanging congestion control messages
Effects of Congestion - No Control
Congestion Control Objectives
In general, we want to:
Minimize discards
Maintain agreed QoS (if applicable)
Minimize the probability that one end user can monopolize resources at the expense of other end users
Simple to implement
Little overhead on network or user
Create minimal additional traffic
Distribute resources fairly
Limit spread of congestion
Operate effectively regardless of traffic flow
Minimum impact on other systems
Minimize variance in QoS
Basic Mechanisms for Congestion Control
Open-Loop Congestion Control (rely on other layers for feedback and control)
Retransmission policy - a good policy can reduce congestion
Window policy - selective-reject is better than go-back-N; use a bigger window size
Acknowledgment policy - don't ack each packet individually
Discard policy - a good policy by routers may prevent congestion and at the same time may not harm the integrity of the transmission
Admission policy - QoS mechanism
Basic Mechanisms for Congestion Control
Closed-Loop Congestion Control
Backpressure - when a router is congested, it informs the previous upstream router to reduce the rate of outgoing packets
Choke packet or choke point - sent by router to source, similar to ICMP's source quench packet
Implicit signaling - look for delay in some other action
Explicit signaling - router sends an explicit signal
Backward signaling - bit is set in a packet moving in the direction opposite to the congestion
Forward signaling - bit is set in a packet moving in the direction of the congestion. Receiver can use a policy such as slowing down acks to alleviate congestion
Basic Mechanisms for Congestion Control (visual examples)
Backpressure
If node becomes congested it can slow down or halt flow of packets from other nodes
May mean that other nodes have to apply control on incoming packet rates
Propagates back to source
Can restrict to logical connections generating most traffic
Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)
Not used in ATM or frame relay
Only recently developed for IPv6 (PRI field)
Choke Packet
Control packet
Generated at congested node
Sent to source node, e.g. ICMP source quench
From router or destination
Source cuts back until no more source quench messages arrive
Sent for every discarded packet, or in anticipation of discard
Rather crude mechanism
Implicit Congestion Signaling
Transmission delay may increase with congestion
Packets may be discarded
Source can detect these as implicit indications of congestion (source is responsible, not network)
Useful on connectionless (datagram) networks, e.g. IP based (TCP includes congestion and flow control)
Used in frame relay LAPF
Explicit Congestion Signaling
Network alerts end systems of increasing congestion
Used on connection-oriented networks
End systems take steps to reduce offered load
Backwards
Congestion avoidance info sent in opposite direction of packet travel
Forwards
Congestion avoidance info sent in same direction as packet travel - when the end system receives the info, it either sends it back to the source or hands it to a higher layer to take action
Categories of Explicit Signaling
Binary
A bit set in a packet indicates congestion
Credit based
Indicates how many packets the source may send
Common for end-to-end flow control
Rate based
The source may transmit data at a rate up to the set limit
Any node along the path of the connection can reduce the data rate limit in a control message to the source, e.g. ATM
How Does TCP Handle / Avoid Congestion? (details in TDC 365 and TDC 463)
To Handle: TCP has a sender window size. The sender window size is the minimum of the receiver window size and the network congestion window size.
To Avoid: TCP can use Slow Start and Additive Increase - at the beginning, TCP sets the congestion window size to one maximum segment size, then increases the window size with each ack.
Can also use Multiplicative Decrease - after a timeout, the threshold is set to 1/2 the window size at the time of the timeout, and the congestion window size is reset to one maximum segment size (then slow start)
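The slow start / additive increase / multiplicative decrease behavior above can be sketched in a few lines. This is a simplified simulation with window sizes in units of MSS and one ack per round trip; the function names are illustrative, not the actual TCP implementation (RFC 5681 adds many details):

```python
# Minimal sketch of TCP slow start, additive increase, and
# multiplicative decrease. Window sizes are in units of MSS and
# each call to on_ack stands for one round trip, simplified.

def on_ack(cwnd, ssthresh):
    """Grow the congestion window when acks arrive."""
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh      # slow start: exponential growth per RTT
    return cwnd + 1, ssthresh          # congestion avoidance: additive increase

def on_timeout(cwnd, ssthresh):
    """Shrink after a timeout: halve the threshold, restart slow start."""
    return 1, max(cwnd // 2, 2)        # multiplicative decrease

cwnd, ssthresh = 1, 16
for _ in range(5):                     # five round trips of successful acks
    cwnd, ssthresh = on_ack(cwnd, ssthresh)
print(cwnd)                            # 17: grew 1 -> 2 -> 4 -> 8 -> 16 -> 17
cwnd, ssthresh = on_timeout(cwnd, ssthresh)
print(cwnd, ssthresh)                  # 1 8: window reset, threshold halved
```

The exponential phase doubles the window each round trip until it reaches the threshold, after which growth becomes linear, matching Figure 23.8's sawtooth shape.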
Figure 23.8 Multiplicative decrease
How Does Frame Relay Handle Congestion?
Connection management, coupled with
Discard strategy
Explicit signaling
Implicit signaling
In more detail:
Connection Management
Before a frame relay network allows a user to transmit data, they agree on a connection
Some call this an SLA (service level agreement)
Frame relay calls it the CIR (committed information rate)
Committed burst size - max amount of data the network agrees to transfer, under normal conditions
Excess burst size - max amount of data in excess of committed burst size
Different frame relay companies have different agreements
Connection Management
What happens if you exceed your CIR and the network experiences congestion?
Frame relay may start discarding your frames (Discard Eligible bit = 1) (Discard Strategy)
Does frame relay tell you that your frames are being tossed?
No. Frame relay assumes a higher layer protocol (such as TCP) will monitor lost or missing frames
Frame relay could discard arbitrarily with no regard for source, but then there is no reward for restraint, so end systems would transmit as fast as possible
CIR is not 100% guaranteed, but the network tries hard
Aggregate CIR should not exceed the physical data rate
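The committed/excess burst mechanics above can be sketched as a per-interval byte counter: traffic within the committed burst size (Bc) goes through normally, traffic within the excess burst size (Be) is marked Discard Eligible, and anything beyond is dropped at the edge. The function and its names are illustrative, not a real switch API:

```python
# Sketch of frame relay's CIR-based handling over one measurement
# interval: compare cumulative bytes against the committed burst
# size (bc) and excess burst size (be) to decide each frame's fate.

def classify_frame(bytes_so_far, frame_len, bc, be):
    """Return 'send', 'send_de' (DE bit = 1), or 'discard'."""
    total = bytes_so_far + frame_len
    if total <= bc:
        return "send"          # within committed burst: deliver normally
    if total <= bc + be:
        return "send_de"       # excess burst: deliver, but discard-eligible
    return "discard"           # beyond Bc + Be: drop at the network edge

# With Bc = 1000 and Be = 500 bytes per interval:
print(classify_frame(0, 800, 1000, 500))     # send
print(classify_frame(800, 400, 1000, 500))   # send_de (1200 exceeds Bc)
print(classify_frame(1200, 400, 1000, 500))  # discard (1600 exceeds Bc + Be)
```

This is why exceeding the CIR does not immediately lose data: the network only targets DE-marked frames first when congestion forces discards.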
Figure 23.1 Traffic descriptors
Relationship Among Congestion Parameters
Explicit Signaling
Network alerts end systems of growing congestion
Backward explicit congestion notification
Notifies the user that congestion avoidance procedures should be initiated for traffic in the opposite direction; simpler
Forward explicit congestion notification
Notification goes forward, so the end user has to somehow get a signal back to the other end to tell them to slow down
Frame handler monitors its queues
May notify some or all logical connections
User response
Reduce rate
Figure 23.9 BECN
Figure 23.10 FECN
Figure 23.11 Four cases of congestion
Implicit Signaling
Implicit Congestion Notification Telecom Definition
In frame relay, the inference by user equipment that congestion has occurred in the network. The inference is triggered when the receiving frame relay access device (FRAD) observes transmission delays. Based on block, frame, or packet sequence numbers, another protocol may recognize that one or more frames have been lost in transit. Control mechanisms at the upper protocol layers of the end devices then deal with frame loss by requesting retransmissions.
From: YourDictionary.com
What About ATM?
High speed, small cell size, limited overhead bits
Requirements (difficult)
Majority of traffic not amenable to flow control
Feedback slow due to reduced transmission time compared with propagation delay
Wide range of application demands
Different traffic patterns
Different network services
High speed switching and transmission increases volatility
Latency/Speed Effects
Consider a typical ATM transmission speed of 150 Mbps
~2.8x10^-6 seconds to insert a single cell
Time to traverse network depends on propagation delay, switching delay
Assume propagation at two-thirds speed of light
If source and destination on opposite sides of USA, propagation time ~ 48x10^-3 seconds
Given implicit congestion control, by the time dropped-cell notification has reached the source, 7.2x10^6 bits have been transmitted
So, this is not a good strategy for ATM
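The figures above can be checked directly, assuming a 53-byte (424-bit) cell, a rough coast-to-coast distance of 4800 km, and reading the 48 ms as a round trip at two-thirds the speed of light:

```python
# Verify the latency arithmetic behind the slide's numbers.

rate = 150e6                 # 150 Mbps link rate
cell_bits = 53 * 8           # one ATM cell = 53 bytes = 424 bits
print(cell_bits / rate)      # ~2.8e-6 s to insert one cell

rtt = 2 * 4800e3 / 2e8       # ~4800 km each way at two-thirds c (~2e8 m/s)
print(rtt)                   # ~48e-3 s round trip

print(rate * rtt)            # ~7.2e6 bits in flight before feedback arrives
```

At these speeds the loss feedback is hopelessly stale, which is the slide's point: implicit signaling alone cannot control congestion in ATM.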
Cell Delay Variation
For ATM voice/video, data is a stream of cells
Delay across network must be short
AND rate of delivery must be constant
There will always be some variation in transit
Delay cell delivery to the application so that a constant bit rate can be maintained to the application
Time Re-assembly of CBR Cells
D(i) = end-to-end delay of the ith cell
V(0) = estimate of the amount of cell delay variation that an application can tolerate
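The role of V(0) can be sketched as a playout buffer: hold the first cell for V(0), then deliver cells at the constant source interval, absorbing jitter up to V(0). This is an illustrative sketch of the idea, not the exact reassembly procedure from the ATM specifications:

```python
# Playout-buffer sketch for time reassembly of CBR cells: delay the
# first cell by v0, then schedule every later cell at a fixed
# inter-cell interval, smoothing out delay variation up to v0.

def playout_times(arrivals, interval, v0):
    """arrivals: cell arrival times; returns constant-rate delivery times."""
    start = arrivals[0] + v0                 # hold the first cell for V(0)
    return [start + i * interval for i in range(len(arrivals))]

# Cells sent every 1.0 ms arrive with jitter; V(0) = 0.5 ms of slack.
arrivals = [0.0, 1.2, 1.9, 3.3]
print(playout_times(arrivals, 1.0, 0.5))     # [0.5, 1.5, 2.5, 3.5]
```

Each cell in the example arrives before its scheduled delivery time, so the application sees a perfectly constant bit rate despite the jitter; a cell arriving later than its slot would be lost to the application.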
Various Network Contributions to Cell Delay Variation
Packet switched networks
Queuing delays
Routing decision time
Frame relay
As above, but to a lesser extent
ATM
Less than frame relay
ATM protocol designed to minimize processing overheads at switches
ATM switches have very high throughput
Only noticeable delay is from congestion
Must not accept load that causes congestion
Cell Delay Variation At The UNI in ATM
Even if the application produces data at a fixed rate, processing at (potentially) three layers of ATM causes delay
Interleaving cells from different connections
Operation and maintenance signals need to be interleaved
If using synchronous digital hierarchy frames, potential delays are inserted at the physical layer
Cannot predict these delays
(See figure next slide)
Origins of Cell Delay Variation
Traffic and Congestion Control Objectives for ATM
ATM layer traffic and congestion control should support QoS classes for all foreseeable network services
ATM layer traffic and congestion control should not rely on AAL protocols that are network specific, nor on higher level application specific protocols
Any traffic and congestion controls should minimize network and end to end system complexity
Traffic Management and Congestion Control Techniques
ITU-T and the ATM Forum have defined a range of traffic management functions to maintain the QoS of ATM connections:
Resource management using virtual paths - separate traffic flow according to service characteristics (1)
Connection admission control (2)
Usage parameter control (3)
Traffic shaping (4)
Let’s examine these in more detail
Resource Management Using Virtual Paths (1)
ATM network can use the virtual path to group similar virtual channels
Connection Admission Control (2)
Good first line of defense
User specifies traffic characteristics for a new connection (VCC or VPC) by selecting a QoS
Network accepts the connection only if it can meet the demand
Traffic contract
Peak cell rate - upper bound, CBR and VBR
Cell delay variation - CBR and VBR
Sustainable cell rate - average rate, VBR
Burst tolerance - VBR
Usage Parameter Control (3)
Monitor established connection to ensure traffic conforms to contract
Protection of network resources from overload by one connection
Done on VCC and VPC
Control of peak cell rate and cell delay variation, or
Control of sustainable cell rate and burst tolerance
Discard cells that do not conform to traffic contract
Called traffic policing
Traffic Shaping (4)
Smooth out traffic flow and reduce cell clumping
Token bucket and leaky bucket are examples of traffic shaping
Token bucket allows bursts, while leaky bucket maintains an even flow
(See figures next slides)
Figure 23.18 Token bucket
Token Bucket
Figure 23.16 Leaky bucket
Figure 23.17 Leaky bucket implementation
Leaky bucket keeps an average flow moving. Queue overflows? Discard packets. Unlike the token bucket, no credit accumulates during idle periods.
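The contrast between the two shapers can be sketched as follows. This is a minimal sketch in whole-packet units with one `tick` per time unit; real shapers run on timers and operate on cells or bytes:

```python
# Token bucket vs. leaky bucket, side by side. The token bucket
# banks credit while idle (up to its capacity) and so permits
# bursts; the leaky bucket drains its queue at a fixed rate and
# gives no credit for idle periods.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity, self.tokens = rate, capacity, capacity

    def tick(self):                       # add tokens once per time unit
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, n):
        """Try to send a burst of n packets; allowed if tokens cover it."""
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False                      # burst exceeds saved-up credit

class LeakyBucket:
    def __init__(self, rate, queue_size):
        self.rate, self.queue_size, self.queued = rate, queue_size, 0

    def arrive(self, n):
        """Queue n packets; overflow is discarded."""
        accepted = min(n, self.queue_size - self.queued)
        self.queued += accepted
        return n - accepted               # number of packets dropped

    def tick(self):                       # emit a fixed amount per time unit
        out = min(self.rate, self.queued)
        self.queued -= out
        return out

tb = TokenBucket(rate=1, capacity=5)
print(tb.send(4))        # True: idle credit allows a burst of 4 at once
lb = LeakyBucket(rate=1, queue_size=3)
print(lb.arrive(4))      # 1: one packet dropped, queue holds only 3
print(lb.tick())         # 1: output stays at one packet per tick
```

The design difference matters for bursty sources: the token bucket lets a host that has been quiet send a burst at full speed, while the leaky bucket forces the same even output rate no matter what arrives.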
ATM’s Real-time Traffic Management
QoS provided for CBR and rt-VBR is based on a traffic contract (connection admission control) and UPC (usage parameter control)
There is no feedback in these systems. Cells are simply discarded. This is called open-loop control.
This is not used for ABR or UBR traffic.
Non-real-time Traffic Management
Some applications (Web, file transfer) do not have well defined traffic characteristics
Best effort
Allow these applications to share unused capacity
If congestion builds, cells are dropped, e.g. UBR
Closed-loop control
ABR connections share available capacity
Share varies between minimum cell rate (MCR) and peak cell rate (PCR)
ABR flow limited to available capacity by feedback
Buffers absorb excess traffic during feedback delay
Low cell loss
Feedback Mechanisms
Transmission rate characteristics:
Allowed cell rate (ACR)
Minimum cell rate (MCR)
Peak cell rate (PCR)
Initial cell rate (ICR)
Start with ACR = ICR
Adjust ACR based on feedback from network
Resource management cells
Congestion indication (CI) bit
No increase (NI) bit
Explicit cell rate (ER) field
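The feedback loop above can be sketched roughly as follows. The update rule and the rate increase/decrease factors (RIF, RDF) are simplified from the ATM Forum TM 4.0 source behavior; treat the specific values and names as illustrative:

```python
# Rough sketch of an ABR source adjusting its allowed cell rate
# (ACR) from a returned resource-management cell: back off
# multiplicatively on congestion (CI=1), ramp up additively when
# permitted (CI=0, NI=0), and clamp to [MCR, min(PCR, ER)].

MCR, PCR = 1_000, 100_000        # minimum and peak cell rates (cells/s)
RIF, RDF = 1 / 16, 1 / 16        # rate increase / decrease factors

def update_acr(acr, ci, ni, er):
    if ci:                        # congestion indication bit set: back off
        acr = acr - acr * RDF
    elif not ni:                  # no-increase bit clear: additive ramp-up
        acr = acr + RIF * PCR
    return max(MCR, min(acr, PCR, er))   # stay inside the traffic contract

acr = 10_000.0
acr = update_acr(acr, ci=False, ni=False, er=PCR)
print(acr)                        # 16250.0: additive step of RIF * PCR
acr = update_acr(acr, ci=True, ni=False, er=PCR)
print(acr)                        # 15234.375: multiplicative decrease
```

This additive-increase/multiplicative-decrease shape is what produces the sawtooth "Variations in Allowed Cell Rate" curve on the following slide.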
Variations in Allowed Cell Rate
Cell Flow (RM = resource management)
23.7 Integrated Services
Integrated Services (IntServ) is a model used to provide QoS in the Internet at the IP layer.
IntServ is a flow-based model, in that a user needs to create a flow or virtual circuit between source and destination.
But IP is connectionless. How do you create a connection? Use RSVP.
Figure 23.19 Path messages
Path messages are sent from sender (S1) to all receivers (multiple if multicast). This establishes the path.
Figure 23.20 Resv messages
Once the path is set, receivers return Resv messages. Note how reservations are merged (next slide).
Figure 23.21 Reservation merging
R3 takes the larger of the two reservations and sends that upstream.
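The merging rule amounts to forwarding upstream the maximum of the downstream requests. A one-line sketch (the bandwidth values are illustrative):

```python
# Sketch of RSVP reservation merging: a router forwards upstream a
# single Resv request equal to the largest request it received
# from its downstream branches.

def merge_reservations(downstream_requests_kbps):
    """Return the single bandwidth (kbps) to request upstream."""
    return max(downstream_requests_kbps)

# Two receivers behind R3 ask for 2 Mbps and 1 Mbps:
print(merge_reservations([2000, 1000]))   # 2000: the larger goes upstream
```

Merging is what keeps multicast reservations scalable: each router carries one reservation per link, no matter how many receivers sit downstream.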
Figure 23.22 Reservation styles
Wild card filter - the router creates a single reservation for all the senders, based on the largest request.
Fixed filter - the router creates a distinct reservation for each flow.
Shared explicit - the router creates a single reservation which can be shared by a set of flows.
23.8 Differentiated Services
An alternative to Integrated Services.
Produced by the IETF to create a class-based QoS model for IP.
Beyond the scope of this class.
Figure 23.24 Traffic conditioner
Meter checks to see if the incoming flow matches the negotiated traffic profile.
Marker can re-mark a packet that is using best-effort delivery, or down-mark a packet based on info received from the meter.
Shaper uses the info received from the meter to reshape the traffic.
Dropper works like a shaper with no buffer, discarding packets if the flow severely violates the negotiated profile.