


Quality of Service Analysis for IPTV Provisioning

T. Janevski* and Z. Vanevski**

* University “Kiril i Metodij”, Faculty of Electrical Engineering and Information Technologies, Skopje, Macedonia
** Makedonski telekom, IP Networks Department, Skopje, Macedonia
[email protected]; [email protected]

Abstract - IPTV is one of the killer Internet applications today. Due to the real-time nature of this service, Quality of Service is essential for its provisioning. In this paper we perform QoS analysis for IPTV traffic using measurements from a real pre-operational commercial IPTV network. We have performed several experiments regarding the IPTV traffic, the background best-effort traffic, the video buffer at the receiver, and the scheduling algorithm towards the user access links.

I. INTRODUCTION

Today IP is the common networking technology for all telecommunication services, including IPTV as one of the killer services, due to the transition from analogue to digital television. The most important challenges for IPTV providers are customer satisfaction and quality of experience [1]. A prerequisite for activating an IPTV service is implementing end-to-end QoS in the IP network. Currently, IPTV providers usually use DSL technology as the access network, which has bandwidth limitations. Their customers usually use a triple play service over DSL [2], including Internet connectivity, Voice over IP (VoIP) and IPTV. Since the capacity of the DSL link is limited, Internet traffic directly influences the IPTV traffic. Here, we analyze IPTV traffic measurements obtained from a live pre-operational network regarding the characteristics of IPTV traffic as well as its quality parameters, both objective and subjective. For measuring IPTV quality, the Media Delivery Index [3-4] has been proposed, which is explained further in the paper. Also, an overview of Quality of Experience (QoE) is given in [5]. Results on the analysis of IPTV transmission using multicast and unicast techniques, as well as standardization efforts on certain QoE parameters for IPTV, are given in [6-19].

This paper is organized as follows. The next section discusses Quality of Experience. Challenges for the transport of IPTV are outlined in Section 3. Section 4 covers QoS mechanisms. The network setup and measured IPTV traffic are given in Section 5. Section 6 presents the results from the analysis of the IPTV traffic measurements. Finally, Section 7 concludes the paper.

II. QOE (QUALITY OF EXPERIENCE)

A. QoE (Quality of Experience)

Quality of Experience (QoE) is defined by ITU [1] as a common measure of the quality of a given service as perceived by the end user. QoE consists of the subjective quality as experienced by the end user, known as the Mean Opinion Score (MOS), and the Quality of Service (QoS), defined by ITU (ITU-T E.800) as the overall objective effect of the network on the performance of a given service (in this particular case, IPTV). The components of QoE are shown in Fig. 1.

Figure 1. QoE components

Generally, there is a correlation between the subjective and objective measures. For Internet services, the usual QoS parameters are packet loss, packet delay and jitter, as well as throughput or link utilization. IPTV belongs to the family of real-time Internet services, and hence we will refer to its QoS via such parameters.

B. IPTV measurements with MDI (Media Delivery Index)

The Media Delivery Index (MDI) is a set of measurements used for monitoring and troubleshooting networks carrying IPTV traffic [3]. The video component of the triple play offering places unique demands on the network because of its high bandwidth requirements and low tolerance to jitter and packet loss. The MDI measurement gives an indication of the expected video quality, i.e. QoE, based on network level measurements. It is independent of the video encoding scheme and examines the video transport itself. MDI is a set of two parameters. One is the Delay Factor (DF), which is an indicator of the needed buffer size derived from the interarrival times of IP packets. The other is a media loss measure (MPEG packet losses in the IPTV case) called the Media Loss Rate (MLR), which refers to the number of lost packets of a given transport flow in a given time period (usually one second).

The Media Delivery Index for IPTV networks predicts the expected video quality based on IP network layer measurements and is independent of the encoder type. MDI, in fact, is a combination of the media Delay Factor (DF) and the Media Loss Rate (MLR), which counts the number of lost MPEG packets in one second. DF refers to the time for which the IPTV flow must be buffered on the receiving side, at the nominal bit rate, when there are no packet losses. MDI is usually given in a table with two columns DF : MLR, or as a graph

Authorized licensed use limited to: Qualcomm. Downloaded on March 17,2010 at 17:00:54 EDT from IEEE Xplore. Restrictions apply.


“window” in which the y-axis is used for DF and the x-axis for MLR.

MDI is defined by the IETF in RFC 4445 [3], which describes the influence of network jitter on video streams. Although MDI indirectly provides measures which influence the QoS of the video, it is not itself a QoS measure for video traffic. MDI-DF can be used to determine the network nodes and links which experience congestion, at any time and in any part of the network. Such results are very useful for network providers, because they can easily determine whether the buffer settings on different devices can sustain the required MPEG TS bit rate.

Good MDI results do not by themselves mean that the picture quality is good, because they do not depend on the quality of the video signal itself. Rather, MDI values are a realistic expression of the problems of transmitting a video signal over any type of network.

ETSI technical report TR 101 290 [2] defines the group of standards and recommendations for digital video systems and their minimal recommended values. Therefore, before the commercial start of an IPTV network, it is necessary to determine the bottlenecks and other possible problems in the network. Errors in the video signal at the receiving end can be detected using MOS techniques as well, but such methods are subjective and they do not locate the problem in the network.

The DF component of the MDI is a time value which indicates for how many milliseconds packets should be buffered to absorb jitter. It is calculated in the following way: after the arrival of each packet, calculate the difference between received and sent bytes. This is referred to as the MDI (virtual) buffer [4]:

∆ = bytes received − bytes sent (1)

Then, in a given time interval, calculate the difference between the minimal and maximal values of the MDI buffer and divide it by the bit rate:

DF = (max(∆) − min(∆)) / bitrate (2)

As an example, the bit rate for IPTV over ADSL access links is usually 2.55 Mbps of MPEG per video flow. Let us assume that in a time interval of one second the maximal volume of data in the virtual buffer is 2.555 Mbit and the minimal volume is 2.540 Mbit. Then the delay factor (DF) can be calculated as follows:

DF = (2.555 Mbit − 2.540 Mbit) / (2.55 Mbit/s) = 15 kbit / (2.55 Mbit/s) ≈ 6 ms (3)

Hence, to avoid packet losses in the above example, the receiving buffer should be 15 kbit, which will introduce 6 ms of delay.
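The DF calculation of equations (1)-(3) can be reproduced with a short sketch. The function name and the intermediate buffer sample are illustrative assumptions; the minimal and maximal buffer levels and the bit rate are the example values from the text.

```python
# Sketch of the MDI Delay Factor from equations (1)-(2); the helper name
# and the middle sample are illustrative, while min/max and the bit rate
# reproduce the example of equation (3).

def delay_factor_ms(buffer_levels_bits, bitrate_bps):
    """DF = (max(delta) - min(delta)) / bitrate, in milliseconds."""
    spread = max(buffer_levels_bits) - min(buffer_levels_bits)
    return 1000.0 * spread / bitrate_bps

# Virtual buffer levels observed within one second (bits):
levels = [2.540e6, 2.548e6, 2.555e6]   # min 2.540 Mbit, max 2.555 Mbit
print(round(delay_factor_ms(levels, 2.55e6), 1))   # ~5.9 ms, i.e. the ~6 ms of eq. (3)
```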

The MDI MLR is usually expressed in media packets per second. QoE standards for IPTV are still in the preparation phase, but a current recommendation considered by IPTV providers is WT-126 from the DSL Forum [5], which states that maximal losses should be 5 packets in 30 minutes for a standard definition TV (SDTV) video flow, while for high definition TV (HDTV) the corresponding interval is 4 hours. Mathematically, this means that the MLR value is 0.019 (here we consider the worst-case scenario of 7 MPEG packets in every IP packet).
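The 0.019 figure follows directly from those numbers; a minimal sketch (the helper name is an assumption, the inputs are the WT-126 values quoted above):

```python
# Worst-case MLR for SDTV from the WT-126 figures cited above:
# 5 lost IP packets per 30 minutes, 7 MPEG TS packets per IP packet.

def mlr_mpeg_per_s(lost_ip_packets, interval_s, mpeg_per_ip=7):
    """Media Loss Rate in lost MPEG packets per second."""
    return lost_ip_packets * mpeg_per_ip / interval_s

print(round(mlr_mpeg_per_s(5, 30 * 60), 3))   # 0.019
```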

C. MOS (Mean Opinion Score)

The quality of a transmitted video signal is subjective, because clarity and sharpness are perceived differently by each TV viewer. Different encoding schemes for IPTV video also result in different quality experience. The common grading classes used to mark the quality of the transmitted video are referred to as MOS (Mean Opinion Score). Using MOS, viewers grade the video quality from 1 (the worst) to 5 (the best), as shown in Table I. The MOS value is determined as the average of the grades from a large number of viewers. MOS values are also correlated to QoS parameters (refer to Table I).

TABLE I. MOS VALUES CORRELATED TO QOS PARAMETERS

Packet loss [%] | MPEG packet loss | Description of quality | MOS
0 - 3 | < 20 | The best | 5
4 - 13 | 20 - 100 | Very good (periodical freezing) | 4
14 - 23 | 100 - 160 | Good (loss of video frames) | 3
24 - 33 | 160 - 230 | Bad (unclear picture) | 2
34 - 43 | > 230 | Worst (frozen picture or black screen) | 1

III. CHALLENGES FOR TRANSPORT OF REAL-TIME TRAFFIC

Elastic content and traffic, such as www, file transfer, electronic mail etc., i.e. so-called non-real-time traffic, are not sensitive to delay or to limited packet loss. On the other side, real-time traffic is delay sensitive and less tolerant to packet losses. IPTV traffic is real-time traffic, where the IGMP control messages are even more sensitive to packet delays. The challenge in IP networks is to provide simultaneous transmission of elastic and IPTV traffic without any significant packet delays or losses.

Common to all traffic types is the buffer memory where packets are scheduled prior to transmission over outgoing links. On their way from the encoder to the end user, the packets of a video stream travel via many heterogeneous network nodes and devices, each of which has its own network, application or server buffers. Some of them have resource management utilities to provide lower waiting times in the buffer memory. Hence, the first recommendation is to minimize the number of network nodes on the path of the video flow from the source to the destination. Most critical are the buffers in the backbone network and the access network, because they are used for heterogeneous traffic. In this part of the paper we focus on the mutual dependence between elastic Internet traffic on one side and video traffic on the other. It is well known that TCP accounts for most of the Internet traffic today, with www in the first place. However, TCP uses larger buffers due to the requirement of error-free delivery at the application layer, which results in retransmissions of all lost or damaged TCP segments.


Contrary to this, video streaming traffic uses UDP, because there is no real sense in retransmitting lost data from a real-time stream; it is useless after the moment when the content should be presented to the end user. Hence, all streaming traffic is based on UDP, which causes less delay and requires smaller buffers. Such different buffer requirements of TCP and UDP traffic are addressed either by using smaller FIFO buffers, which give better utilization of the link capacities, or by using dedicated buffers with different sizes (dependent upon the traffic type) for each traffic class (the latter case assumes that traffic is classified into a number of traffic classes). However, the capacity of each link is limited, and therefore each link will always add packet delay and packet loss to the delay budget and loss budget, respectively. Motivated by this discussion, we have measured the video traffic with different types of background Internet traffic in different scenarios, and then analyzed the packet delay and packet loss.

The challenge remains in the field of resource reservations and granularity of flows. Integrated Services (IntServ) uses dynamic resource reservations per flow, thus guaranteeing the Quality of Service (QoS) per flow. In such a case the application must use end-to-end signaling to reserve the resources (similar to No.7 signaling in PSTN) before the start of data transmission. The signaling for IntServ is done with RSVP (Resource Reservation Protocol). However, today IPTV providers usually use DiffServ (Differentiated Services), which has less refined QoS support, per traffic class rather than per flow as is the case with IntServ. With DiffServ, each packet is marked with a Type of Service (or DiffServ Code Point) associated with one of a limited number of traffic classes in the network. All packets belonging to the same traffic class in a DiffServ domain are served with the same “priority” in the network nodes, which use FIFO scheduling among packets of the given traffic class.

A. Jitter

Jitter is the variation of the end-to-end packet delay with respect to the average delay. Packets arriving at a destination at a constant rate exhibit zero jitter; packets with an irregular arrival rate exhibit non-zero jitter. After traversing the network separating the data source and the destination, and being queued, routed and switched by various network elements, packets are likely to arrive at the destination with some rate variation over time. If the instantaneous data arrival rate does not match the rate at which the destination consumes data, the packets must be buffered upon arrival. A jitter buffer should be implemented, whose main function is to buffer IPTV packets until they can be played out smoothly. However, this type of buffer increases the end-to-end transport latency; hence it is not recommended to use values larger than 500 ms.
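The definition above can be sketched as a computation over packet arrival timestamps (a minimal illustration; the timestamps are invented for the example, and jitter is taken here as the mean deviation of interarrival times from their average):

```python
# Mean jitter as the average deviation of packet interarrival times from
# their mean, following the definition above. Timestamps (ms) are
# illustrative only.

def mean_jitter_ms(arrivals_ms):
    gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

print(mean_jitter_ms([0, 10, 20, 30]))   # 0.0 -> constant arrival rate, zero jitter
print(mean_jitter_ms([0, 8, 22, 30]))    # > 0 -> irregular arrivals, non-zero jitter
```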

B. Packet Loss

Packet losses occur when packets do not reach the destination, due to any cause on their path. In IP networks today, all packets which carry video information are treated as data traffic, i.e. in the case of congestion in network nodes these packets will be dropped like any other (non-real-time) data packet (e.g. www, e-mail etc.). But non-real-time traffic has no strict end-to-end delay requirements such as real-time video traffic has. A dropped video packet cannot be retransmitted by the source, because it is useless by then, as discussed before.

Loss of MPEG packets which are part of an “I” or “P” frame decreases the video quality more than loss from a “B” frame. A video IP packet, which contains at most 7 MPEG packets, can affect any frame type, but loss of an “I” frame affects the whole GOP. Such a loss also lasts as long as the GOP, i.e. usually 0.5 - 1 second.

DSL access technology with interleaving exhibits uncorrected burst loss events of typically 8 ms and 16 ms. The “ripple effect” is the result of rounding to an integer number of lost/corrupted IP packets. A typical DSL burst loss has a duration of 8 ms. Consider an MPEG-4 transport stream at a bit rate of 3 Mbps:

Total MPEG packets/s = 3 Mbps / (188 B * 8 bits) = 1994.7 MPEG packets/s

Total IP packets/s = 1994.7 / 7 = 285 IP packets/s

Using the above results, a loss of 8 ms corresponds to:

IP packets lost = 285 IP packets/s * 0.008 s = 2.28 IP packets

An IP packet is lost if any part of it is lost, so 2.28 is rounded up to the next integer, 3 IP packets; since the burst is not necessarily aligned to IP packet boundaries, this would be further rounded to 4 IP packets. In the worst-case scenario, if all IP packets carry the maximum of 7 MPEG packets, the loss will be 28 MPEG packets.
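The arithmetic above can be verified with a short script (the values are those stated in the text; the variable names are ours):

```python
import math

# Reproduces the burst-loss arithmetic above: an 8 ms DSL burst hitting
# a 3 Mbps MPEG-4 transport stream with 188-byte TS packets and 7 TS
# packets per IP packet (worst case).

BITRATE_BPS = 3e6
TS_PACKET_BYTES = 188
TS_PER_IP = 7
BURST_S = 0.008

ts_per_s = BITRATE_BPS / (TS_PACKET_BYTES * 8)   # ~1994.7 TS packets/s
ip_per_s = ts_per_s / TS_PER_IP                  # ~285 IP packets/s
ip_hit = ip_per_s * BURST_S                      # ~2.28 IP packets per burst

ip_lost = math.ceil(ip_hit)       # a partially hit IP packet is fully lost -> 3
ip_lost_worst = ip_lost + 1       # burst not aligned to IP boundaries -> 4
ts_lost_worst = ip_lost_worst * TS_PER_IP        # worst case: 28 MPEG packets

print(round(ts_per_s, 1), round(ip_per_s), ip_lost_worst, ts_lost_worst)
# 1994.7 285 4 28
```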

Figure 2. Unsatisfied quality example, panels a) and b) (captured with Visual MPEG Analyzer)

IPTV providers should take into consideration that the following encoding parameters play the main role in customer QoE:

Length of GOP (Group of Pictures): the advantage of a longer GOP for IPTV is a lower bit rate of the video stream, but from the customer's perspective it causes longer scenes of bad video quality when losses occur.

Video frame frequency: typical video frame rates are 30-60 fps. If an IPTV provider uses lower frame rates, the quality of dynamic scenes will suffer, as shown in Fig. 2.

GOP structure: if a channel is encoded with a “BBBP” GOP structure, then lost packets are more likely to belong to the “B” frame type. This type of GOP structure is recommended to all future IPTV providers.


IV. QOS MECHANISMS

Implementing QoS in all parts of the IP network increases the QoE for all IPTV customers. There are several QoS mechanisms which are recommended for implementation on every segment of an IPTV network. Each mechanism uses its own parameters, standards and technology. We recommend modular and cyclic QoS management of the network infrastructure, presented in Fig. 3. This type of QoS management consists of four types of QoS mechanisms:

Bandwidth Allocation

Bandwidth Management

Traffic Prioritization

Traffic Provisioning

Network management is the main approach behind all four mechanisms. When implementing QoS management, IPTV providers should follow the hierarchy given above (starting from Bandwidth Allocation). If a provider implements Traffic Provisioning and QoS is still poor in its network, then the provider should start again with Bandwidth Allocation, and so on.

Figure 3. Modular and cyclic QoS management on network infrastructure

A. Traffic Prioritization

Traffic prioritization is useful for IPTV when IPTV traffic is mixed with non-real-time background traffic and at the same time overall link capacity is higher than peak bit rate of the IPTV stream (e.g. 3 Mbps peak bit rate IPTV stream over 8 Mbps ADSL access link).

If we use classification based on the Type of Service (ToS) field [6], which contains 8 bits (shown in Fig. 4), then the three p-bits (i.e. precedence bits) of the ToS field in the IP header should be used to classify the packets carrying IPTV content in their payload.

Figure 4. ToS field in IP header

Priority queuing and routing are dedicated to scheduling packets from different traffic classes with different priorities (e.g., to serve first all VoIP packets, then all IPTV packets, and then to continue with all other IP packets in the buffers). However, caution is needed, because in such a case higher priority classes may monopolize the bandwidth. Therefore, classification should be considered mainly on the user access links, not in the backbone network.

B. Traffic Provisioning

Traffic provisioning provides QoS through a technique which serves applications based on specified bit rates that are guaranteed to the end subscribers. In the case of IPTV provisioning over ADSL, it is convenient to limit the bandwidth between the DSLAM and the STB (Set-Top Box) separately for elastic Internet traffic and for IPTV traffic. In particular, the bandwidth that is not dedicated to IPTV traffic should be redirected to a separate buffer, as shown in Fig. 5. Then we have one buffer for IPTV and a separate buffer for other traffic types.

Figure 5. IPTV traffic provisioning in separate buffer

C. Buffering

Regarding the buffering and scheduling of packets, and their practical realization with off-the-shelf products, there are several possibilities [7]:

First-in, first-out (FIFO) queuing

Priority queuing (PQ)

Guaranteed throughput

WFQ (Weighted Fair Queuing), which can be:

- Flow based

- Class based

The simplest mechanism is FIFO. It is the default scheduling mechanism if nothing else is specified.

We have already discussed traffic prioritization at the beginning of this section. It is a potential solution, but bandwidth monopolization by higher priority classes is a problem.

Guaranteed throughput means that constant bit rate is allocated to a given IPTV flow and it can not be used by other traffic classes. However, such scheme provides low utilization of network links and does not allow statistical multiplexing which is a common concept for heterogeneous Internet traffic.

D. WFQ (Weighted Fair Queuing)

The value of the WFQ algorithm comes from the use of the three precedence bits of the ToS field to achieve better service, by classifying IP packets and queuing and scheduling them according to the precedence bits (p-bits). The p-bits can have values from 0 to 7 (6 and 7 are reserved).

WFQ is efficient because it can utilize the whole available bandwidth, starting from the higher priority traffic flows and going to the lower priority ones. In practice WFQ works with two mechanisms - IP precedence and the Resource Reservation Protocol (RSVP), which provide QoS and guaranteed service, respectively.

WFQ allocates a weight coefficient to each flow, which determines the scheduling of each buffered packet. Under this scheme, smaller weights mean better service. For example, traffic with IP Precedence value 5 receives a smaller weight than traffic with IP Precedence value 3, which results in priority of the former packets over the latter.

The weight coefficient, i.e. the “weight”, is a number calculated from the value set in the IP precedence field of the IP packet header. These values are used by the WFQ algorithm to determine when a given packet should be sent.

The capacity distribution calculation for WFQ goes as follows: if, for instance, there is one flow for each precedence level on a given network interface (that is, 8 flows, each with a different precedence mark), then each flow is allocated (precedence + 1) parts of the link bandwidth, out of 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36 parts in total. The flows accordingly get 8/36, 7/36, 6/36, 5/36, etc. of the link capacity.
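The (precedence + 1)/36 split described above can be sketched as follows (the helper name is illustrative):

```python
from fractions import Fraction

# Capacity shares under WFQ with one flow per IP precedence level (0-7),
# as described above: each flow gets (precedence + 1) parts out of
# 1 + 2 + ... + 8 = 36.

def wfq_shares(levels=8):
    total = sum(p + 1 for p in range(levels))       # 36 for 8 levels
    return {p: Fraction(p + 1, total) for p in range(levels)}

shares = wfq_shares()
print(shares[7], shares[6], shares[0])   # 2/9 7/36 1/36 (i.e. 8/36, 7/36, 1/36)
```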

V. IPTV TRAFFIC

IPTV providers provide mainly two types of traffic: 1) multicast for delivering video channels, and 2) unicast dedicated to channel changing, video on demand and other applications. Here, we capture traffic traces from a live network. For example, in Fig. 6 we illustrate a subscriber performing a channel change (stream switch) in a typical GOP-based IPTV system. In this example the subscriber is synchronized to channel 1. At a particular time, the subscriber issues a switch command to channel 2, which triggers an IGMP leave for multicast stream group 1 and a join to multicast stream group 2. Then the subscriber starts to receive multicast stream 2.

The network setup used for the IPTV measurements is shown in Fig. 7. It is used for both unicast and multicast delivery of IPTV streams from the IPTV stream generator to the set-top box (i.e. the end user).

Instant Channel Change (ICC) uses a buffering technique on the channel changing servers. This method creates multiple unicast streams that are sent to the customer along with the broadcast multicast, and the content is buffered for the anticipated multicast establishment time. When the user requests a channel swap, the STB immediately switches to the buffered content while it proceeds with the new multicast request.

Channel changing servers maintain sliding bursts of live TV service streams for some period of time. The exact time depends on the bit rate of the stream, the structure of the key “I” frames in the GOP (Group of Pictures), and the delay characteristics of the stream. Overhead is the difference (in percentage) between the multicast stream and the unicast burst during channel changing.

Figure 6. Channel changing and IGMP delay

Figure 7. Network setup for IPTV measurements

An IPTV provider should consider the tuning of the overhead burst parameter as a crucial function. Overhead has a direct influence on two points in the network: the limited bandwidth at the customer premises, and the utilization of the backbone links. The first point concerns the worst-case scenario in which a customer has two STBs that change channels at the same time, so the overhead directly depends on the maximum bandwidth at the customer device. The second point is that the unicast traffic generated during channel changing must be transported over the IP backbone links. The capacity of a Digital Subscriber Line (DSL) channel is limited. Engineering a network to support channel change as described above requires several Mbps of reserved bandwidth. Such a configuration will either reduce the DSL serving area, reduce the number of video streams that can be delivered, and/or compromise other services during channel changing periods.

If an IPTV provider uses DSL technology, it should set a very low overhead; but the distributed unicast traffic is then still high and will load the backbone links. IPTV providers should make calculations and measurements on the access network, and the first input for the overhead calculation should be the DSL bandwidth. A reasonable value for the overhead is 20%, with an average bit rate of 3.2 Mbps and a burst time of around 10 seconds for a standard definition TV stream, as can be seen from the captured IPTV traffic shown in Fig. 8.

Dimensioning of the bandwidth demand is based on the measurements in Table II, for different percentages of ICC overhead. We measured a multicast stream with an average bit rate of 2.37 Mbps and a maximum peak of 2.72 Mbps, from which we can conclude that the overhead is a percentage of the peak-to-average bit rate difference of the IPTV stream. The multicast stream consisted of 85% H.264 video, 8% MPEG-1 audio and 4% teletext, as shown in Fig. 9.


Figure 8. Measured unicast bursts for different values of overhead (bit rate [Mbps] vs. time [sec], for channel change overhead of 30%, 20% and 10%)

TABLE II. ICC BURSTS BIT RATE AND TIME

Ch. Overhead | 30% | 20% | 10%
Bit rate [Mbps] | 3.46 | 3.19 | 2.92
Unicast Bytes [MB] | 3.84 | 4.78 | 5.11
Time [s] | 10 | 12 | 14

Figure 9. Histogram of IPTV captured traffic per program id (H.264 video, audio, teletext, ECM, PAT, PMT)

We captured traces of live IPTV traffic on an edge router network interface towards the clients' side. Measurements and analyses were made per traffic type: multicast from the most popular channels, unicast from instant changes of popular channels, as well as the aggregate traffic (segments of the IPTV traces are shown in Fig. 10). The autocorrelation functions of the mentioned IPTV traffic types are shown in Fig. 11. The autocorrelation of the unicast and aggregate traffic decays hyperbolically rather than exponentially. This shows that IPTV traffic is self-similar, i.e. bursty. Therefore, higher link utilization leads to a lower bit rate of the IPTV stream (Fig. 12).
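The curves of Fig. 11 are standard sample autocorrelations of the binned traces; a minimal sketch over a synthetic binned series (not the measured data) could look like this:

```python
# Sample autocorrelation of a binned traffic trace (e.g. bytes per 100 ms),
# the kind of computation behind Fig. 11. The input series is synthetic,
# for illustration only.

def autocorr(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag)) / n
    return cov / var

trace = [100, 140, 90, 150, 110, 160, 95, 155] * 50   # synthetic bursty trace
print(round(autocorr(trace, 0), 3))                   # 1.0 at lag 0, by definition
```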

VI. ANALYSES OF IPTV TRAFFIC MEASUREMENTS

After defining all QoS network parameters and performing the IPTV traffic measurements, we analyze the measurement data. We present four measurement scenarios, with the aim to analyze the influence of data traffic on the IPTV traffic. The link capacity is limited, and therefore the goal is to determine the influence of the background Internet traffic on the IPTV traffic when there is no traffic classification.

Figure 10. Measured IPTV traffic traces (traffic intensity [bytes/100 ms] over time [ms] for multicast, unicast and aggregate traffic)

Figure 11. Autocorrelation function (correlation coefficient versus lag k) for multicast, unicast and aggregate IPTV traffic

Figure 12. Bit rate of the IPTV stream for different link utilizations and background packet sizes (1500, 1000 and 500 bytes)

In Fig. 13 we show the packet loss ratio as a function of link utilization. As one would expect, packet loss increases as link utilization increases, which is usually the case in packet networks with bursty traffic such as IPTV. Significant packet losses of IPTV traffic start after link utilization reaches 55-60%. However, this also means that there are losses in the background TCP-based traffic, which triggers TCP congestion avoidance or slow start (depending on the TCP version and the number of lost segments within a congestion window). The resulting back-off of the TCP streams leaves more room for UDP-based traffic such as the IPTV stream in our analysis. Therefore, the loss ratio decreases after reaching a first peak at 65% link utilization, with the lower value of the packet loss ratio occurring near 85% link utilization. Of course, if link utilization rises further (over 85%), the packet loss ratio again increases exponentially.
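The back-off dynamic that produces this dip can be illustrated with a toy AIMD model. This is a sketch under simplifying assumptions (one aggregated TCP flow, synchronized loss above a fixed threshold, constant-rate IPTV stream); it is not the measurement setup, only the qualitative mechanism:

```python
def aimd_offered_load(capacity_mbps, iptv_mbps, rounds, loss_threshold=0.9):
    """Toy AIMD trace: the TCP rate grows additively by 0.5 Mbps per
    round and halves whenever total load exceeds
    loss_threshold * capacity (a loss event)."""
    tcp = 1.0
    history = []
    for _ in range(rounds):
        total = tcp + iptv_mbps
        if total > loss_threshold * capacity_mbps:
            tcp /= 2.0          # multiplicative decrease after loss
        else:
            tcp += 0.5          # additive increase while loss-free
        history.append(tcp + iptv_mbps)
    return history

# 8 Mbps link carrying a 2.37 Mbps IPTV stream plus elastic TCP traffic:
loads = aimd_offered_load(capacity_mbps=8.0, iptv_mbps=2.37, rounds=40)
```

The resulting sawtooth shows total load repeatedly backing off below the loss threshold, which is the mechanism that temporarily relieves the UDP-based IPTV stream of losses.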

Further, we made experimental measurements using different IP packet sizes and different flow data rates. Using the WAN Killer packet generator we generated data packets with sizes of 50, 200, 1000 and 1500 bytes. We then made 9 measurements with precisely defined throughput, going from 0 up to 8 Mbps in steps of 1 Mbps. The MDI measurement results are given in Table III. They show that at low link utilization the packet size does not influence MDI. At higher link utilization, however, smaller packet sizes cause higher MDI while larger packets lead to smaller MDI. The results show the dependence of MDI on the background packet sizes and their influence on IPTV traffic as link utilization varies. One may conclude that larger background packets multiplexed on the same link with a given IPTV flow have a smaller influence on the IPTV traffic than smaller background IP packets. For large IP packets of 1500 bytes the influence on IPTV traffic was insignificant. On the other hand, background IP packets of 50 bytes have a significant impact on the MPEG packets (IPTV traffic) even at 12% link utilization, as can be seen in Fig. 14 and Fig. 15.
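The MDI Delay Factor can also be approximated offline from a packet trace. The sketch below follows the virtual-buffer idea of RFC 4445: over a measurement interval, DF is the spread between the maximum and minimum virtual-buffer fill, expressed in milliseconds of media time at the nominal stream rate. The function name and the constant-drain model are our simplifying assumptions, not the vendor tooling used for the measurements:

```python
def mdi_delay_factor(arrivals, media_rate_bps):
    """Approximate MDI DF (ms) over one interval, per the RFC 4445 idea.

    arrivals: list of (timestamp_s, packet_bytes) within the interval.
    media_rate_bps: nominal stream rate, used as a constant drain rate.
    """
    if not arrivals:
        return 0.0
    drain_Bps = media_rate_bps / 8.0   # constant drain in bytes/s
    vb = 0.0                           # virtual buffer fill in bytes
    vb_min = vb_max = 0.0
    last_t = arrivals[0][0]
    for t, size in arrivals:
        vb = max(0.0, vb - drain_Bps * (t - last_t))  # drain since last arrival
        vb += size                                    # arrival fills the buffer
        vb_min = min(vb_min, vb)
        vb_max = max(vb_max, vb)
        last_t = t
    # DF: buffer spread converted to milliseconds of media time.
    return (vb_max - vb_min) / drain_Bps * 1000.0

# A perfectly paced 2.37 Mbps stream of 1400-byte packets (the IPTV
# packet size used in our measurements) yields a DF of roughly one
# packet time.
rate = 2.37e6
pkt = 1400
gap = pkt * 8 / rate
paced = [(i * gap, pkt) for i in range(100)]
df_ms = mdi_delay_factor(paced, rate)
```

Jittered arrivals let the virtual buffer swing further and drive DF up, which is exactly the degradation Table III records for heavily loaded links.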

If we analyze this behavior from the buffer point of view, we may conclude that the reason lies in the number of transmitted IP packets: with WFQ applied at the buffer, that number is much higher for smaller packets than for larger ones at the same load. Larger packets have larger serving times, thus producing higher values of packet delay and jitter. In all these measurements the IPTV traffic uses a packet size of 1400 bytes. Larger IPTV packet sizes increase the probability of congestion of the IPTV traffic when it is mixed with background traffic.
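The serving-time argument is simple arithmetic, sketched below. The 8 Mbps link rate matches the measurement sweep; the 50% background load is an assumed example value:

```python
def serving_time_ms(packet_bytes, link_rate_bps):
    """Transmission (serving) time of one packet on the output link."""
    return packet_bytes * 8 / link_rate_bps * 1000.0

LINK = 8e6  # 8 Mbps access link, as in the measurement sweep
LOAD = 4e6  # assumed 4 Mbps of background traffic (50% utilization)

for size in (50, 500, 1500):
    pkts_per_s = LOAD / (size * 8)          # packets/s at the same load
    t_serve = serving_time_ms(size, LINK)   # per-packet link occupancy
    print(f"{size:5d} B: {pkts_per_s:8.0f} pkt/s, {t_serve:.3f} ms each")
```

At equal load, 50-byte packets arrive at the scheduler thirty times more often than 1500-byte packets, while each 1500-byte packet occupies the link thirty times longer; the first effect dominates the MDI results for small background packets.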

Also, the parameters which influence the quality of IPTV flows worsen as link utilization increases; at 75% link utilization we obtained very "bad" MDI DF values.

Comparing the results for packet sizes of 50 and 1500 bytes, one can easily notice the difference. If there are users in the network generating packets of 50 bytes, the performance and quality of the IPTV traffic will be significantly degraded. On the other hand, packets of 500 or 1500 bytes (usually originating from WWW or FTP traffic) give the opposite result. In practice, however, there is no non-real-time traffic with packets as small as 50 bytes; such traffic is usually an anomaly produced by Denial of Service (DoS) attacks, something that should be expected in IPTV networks as well. These conclusions can be drawn from Fig. 16.

Using the MDI DF results obtained from the user access link, we may calculate the maximum retransmission time which causes IPTV quality degradation. The retransmission takes place during the unicast burst. From the figures this value is 150 ms at 75% link utilization. In this way we may calculate the real maximum value for the video buffer, i.e.: time for exhausting the buffer = (maximal retransmission time * 100) / (percentage of unicast burst).
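The buffer-dimensioning rule is easy to evaluate numerically. A minimal sketch (the 30% unicast-burst share is an assumed example, consistent with the channel-overhead values of Table II):

```python
def buffer_exhaust_time_ms(max_retransmission_ms, unicast_burst_percent):
    """Time to exhaust the video buffer, per the rule in the text:
    exhaust time = max retransmission time * 100 / unicast burst share [%].
    """
    return max_retransmission_ms * 100.0 / unicast_burst_percent

# 150 ms maximum retransmission time (measured at 75% link utilization)
# with an assumed 30% unicast burst share:
t_ms = buffer_exhaust_time_ms(150, 30)  # 500.0 ms
```

A 500 ms worst-case exhaustion time leaves a comfortable margin against the 1000 ms STB buffers that providers typically configure for SD flows.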

Figure 13. Packet loss ratio of the IPTV flow versus link utilization

Figure 14. MDI DF values for different link utilizations and different sizes of background IP packets (1500, 1000 and 500 bytes)

Figure 15. MDI DF values for different link utilizations and different sizes of IP packets, including 50-byte packets (DoS case; 1500, 1000, 500 and 50 bytes)

However, in order to provide a stable video buffer for SD flows, IPTV providers usually set the video buffers in STBs to 1000 ms.

The results in Table IV show significant quality degradation at 75% link utilization, and worse at higher utilizations. The same behavior can be seen in Fig. 16. Again, the exception is the case of very small IP packets (here 50 bytes), for which the MPEG video packet losses follow a different pattern: they appear even at low link utilization but do not grow as steeply at high utilization.

Regarding the MOS values given in Table I, after link utilization reaches 75% the MOS value decreases from 4 to 2 when no QoS mechanism is used on the link. The way to achieve higher MOS values is to use the WFQ mechanism, which resulted in MOS values in the range 4-5 at 87.5% link utilization.



VII. CONCLUSION

In this paper we have performed analyses of the QoS parameters for IPTV traffic. As a merit we have used the Media Delivery Index (MDI), a standardized and unified merit for QoS in IPTV networks. We have also analyzed scheduling mechanisms for IPTV traffic.

The results showed that IPTV can be used with satisfactory quality even at higher loads, up to 85% link utilization. Packet losses are even lower at 85% link utilization than at 65%.

Also, the packet sizes of the background traffic influence the IPTV quality: smaller IP packets (of background elastic traffic) cause higher degradation of IPTV traffic, and vice versa. Hence, IPTV traffic can be efficiently multiplexed with non-real-time traffic such as WWW, email etc., using different scheduling schemes. However, DoS attacks with small IP packets can cause significant degradation of the IPTV stream.

Separation of IPTV traffic from other traffic types on the same link can be efficiently achieved with WFQ, using the precedence bits of the Type of Service field in the IP header. However, with WFQ the user will still experience noticeable quality degradation after link utilization reaches 75%. The results showed that bad MDI DF values can be compensated by using larger buffers for video packets.

Regarding the losses of video packets and the delay parameters, IPTV packets should be classified; for that purpose, class marking of IPTV packets should be done closer to the stream source (i.e. the IPTV platform) in order to achieve end-to-end QoS.

REFERENCES

[1] T. Rahrer (Nortel), R. Fiandra (FastWeb), S. Wright (BellSouth), "TR-126: Triple-play Services Quality of Experience (QoE) Requirements and Mechanisms For Architecture & Transport", DSL Forum, February 21, 2006.
[2] "Migration to Ethernet Based DSL Aggregation", Architecture and Transport Working Group, TR-101, DSL Forum, May 2004.
[3] J. Welch and J. Clark, "A Proposed Media Delivery Index (MDI)", IETF RFC 4445, IneoQuest Technologies / Cisco Systems, April 2006.
[4] "IPTV QoE: Understanding and interpreting MDI values", Agilent Technologies, Inc., 2006.
[5] T. Rahrer (Nortel), R. Fiandra (FastWeb), S. Wright (BellSouth), "Triple-play Services Quality of Experience (QoE) Requirements and Mechanisms", WT-126, DSL Forum, 2006.
[6] P. Almquist, "Type of Service in the Internet Protocol Suite", IETF RFC 1349, July 1992.
[7] "QoS Solutions for PPPoE and DSL Environments", Document ID: 23706, Cisco Systems, Aug. 2005.
[8] ITU-T Focus Group on IPTV, "IPTV Focus Group Proceedings - Architecture and Requirements", ITU-T Handbook, 2008.
[9] ITU-T Focus Group on IPTV - SG2, "Operational Aspects of Service Provision, Networks and Performance", ITU-T Handbook, 2008.
[10] B. Williamson, "Developing IP Multicast Networks", Cisco Press, 2004.
[11] M. Cha, W. A. Chaovalitwongse, Z. Ge, J. Yates, and S. Moon, "Path Protection Routing with SRLG Constraints to Support IPTV in WDM Mesh Networks", in Proc. IEEE Global Internet Symposium, Barcelona, Spain, April 2006.
[12] M. Cha, G. Choudhury, J. Yates, A. Shaikh, and S. Moon, "Case Study: Resilient Backbone Network Design for IPTV Services", in Proc. International Workshop on IPTV Services over World Wide Web, May 2006.

Figure 16. MPEG packet losses - MLR values for different link utilizations and different packet sizes (1500, 1000, 500 and 50 bytes)

TABLE III. MDI DF PARAMETERS OF THE IPTV FLOW FOR DIFFERENT PACKET SIZES AND LINK UTILIZATIONS

  Size [B] \ Utilization [%]      1       25       50       75      100
  1500                          73.69    58.59    83.09    96.26   286.05
  1000                          76.81    98.03    68.87   132.52   290.63
  500                           65.76    80.68    76.58   150.11   261.23
  50                            72.06   884.03   1430.8  1455.54   1469.4

TABLE IV. MDI MLR MEASUREMENTS: INFLUENCE OF DIFFERENT TYPES OF DATA TRAFFIC ON IPTV TRAFFIC AT DIFFERENT LINK UTILIZATIONS

  Size [B] \ Utilization [%]      1       25       50       75      100
  1500                            0        0        0     41.50   197.20
  1000                            0        0        0    148.40   328.60
  500                             0        0        0    203.36   483.60
  50                             4.00    64.56    37.33   123.00   155

[13] P. Osterberg, "Fair Treatment of Multicast Sessions and Their Receivers - Incentives for More Efficient Bandwidth Utilization", Doctoral Thesis, Department of Information Technology and Media, Mid Sweden University, Sundsvall, Sweden, 2007.
[14] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC Video Coding Standard", IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, July 2003.
[15] G. Van der Auwera, P. T. David, and M. Reisslein, "Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension", IEEE Transactions on Broadcasting, issue 3, part 2, pp. 698-718, Sept. 2008.
[16] F. Wan, "Traffic Modeling and Performance Analysis for IPTV Systems", PhD Thesis, Dept. of Electrical and Computer Engineering, University of Victoria, British Columbia, Canada, Aug. 2008.
[17] A. Lie, "Enhancing Rate Adaptive IP Streaming Media Performance with the Use of Active Queue Management", Doctoral Thesis, Trondheim, April 2008.
[18] D. E. Smith, "IPTV Bandwidth Demand: Multicast and Channel Surfing", in Proc. IEEE INFOCOM, Anchorage, Alaska, USA, May 2007.
[19] M. Cha, P. Rodriguez, S. Moon, and J. Crowcroft, "On Next-Generation Telco-Managed P2P TV Architectures", in Proc. International Workshop on Peer-to-Peer Systems (IPTPS), February 2008.
