
Multicast Instant Channel Change in IPTV Systems Damodar Banodkar1, K.K. Ramakrishnan2, Shivkumar Kalyanaraman1, Alexandre Gerber2, Oliver Spatscheck2

1Rensselaer Polytechnic Institute (RPI), 2AT&T Labs Research

Abstract— IPTV delivers television content over an IP infrastructure with the potential to enrich the viewing experience of users by integrating data applications with video delivery. From an engineering perspective, IPTV places both significant steady state and transient demands on network bandwidth. Typical IPTV streaming techniques incur delays to fill the play-out buffer. But, when viewers switch or surf channels, it is important to minimize this user-perceived latency. Traditional Instant Channel Change (ICC) techniques reduce this latency by having a separate unicast assist channel for every user changing channels. Instead, we propose a multicast-based approach using a secondary "channel change stream" in association with the multicast of the regular quality stream for the channel requested. During channel change events, the user does a multicast join to this new stream and experiences smaller display latency. In the background, the play-out buffer of the new full-quality multicast stream is filled. Then, the transition to the new channel is complete. We show that this approach has several performance benefits including lower bandwidth consumption even during flash crowds of channel changes, lower display latency (50% lower), and lower variability of network & server load. The tradeoff is a lower quality video during the play-out buffering period of a few seconds. Our results are based upon both synthetic channel change arrival patterns as well as traces collected from an operational IPTV environment.

Keywords: Display Latency, Channel Switching Latency, Bandwidth, IPTV.

I. INTRODUCTION

With the rapid growth of broadband deployment to homes, IP-based distribution of real-time television (IPTV) [1] is now a reality. The term IPTV refers to an array of video services (including “linear-TV”, i.e., real-time programming, and video-on-demand) where the video is compressed, and transmitted to subscribers over an IP packet data network.

A recent white paper [2] projects that consumer IP traffic (both Internet and non-Internet IP) will grow annually at 57%, driven and dominated by video traffic. Therefore engineering IP networks to support the long-term growth of video traffic is very important. Since a HD-quality channel can consume 6-15 Mbps [3], and there may be hundreds of channels for a large number of clients, IPTV places significant bandwidth demands on the infrastructure. There are both steady state and transient traffic demands. One of the sources of transient bandwidth demand is from clients switching channels. The transient traffic demand can be significant in terms of network bandwidth and video server I/O capacity, to meet the quality of experience goals of clients.

To understand this, consider how IPTV is delivered. For the purposes of this paper, we only address the use of IPTV for the delivery of linear broadcast TV. IPTV typically uses IP multicast to deliver the content, using a distinct multicast group for each TV channel. There are typically several hundred channels. The consumer’s set-top box “tunes” to a particular TV “channel” by joining the multicast group for that channel. There is a channel switching latency due to the time taken to join the multicast group and (more significantly) a time to wait for an I-frame and then fill up the play-out buffer for the new channel (to prevent underflow and the resulting jerky/stalled playout). The total channel switching latency can be several seconds, causing unacceptable quality of experience to users who may want to surf or browse channels. In contrast, with traditional over-the-air broadcasting, or cable-based analog systems that do not utilize a packetized video distribution, the set top box (STB) receives all the channels simultaneously. When the user changes channels, the STB immediately “tunes” to the new channel and begins displaying it on the screen. Hence the channel change time in such a system is minimal. But such systems are dramatically wasteful in the use of network resources.
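
To make the “tuning” operation concrete, the sketch below shows a receiver joining and leaving per-channel multicast groups using standard socket options; the join is what triggers an IGMP membership report toward the network. The channel-to-group mapping, addresses, and port are hypothetical, and a real STB would layer RTP/MPEG-TS handling on top of this.

```python
import socket
import struct

# Hypothetical channel line-up: each TV channel maps to its own multicast group.
CHANNEL_GROUPS = {101: "239.1.1.101", 102: "239.1.1.102"}
PORT = 5004  # assumed RTP port

def tune(channel: int) -> socket.socket:
    """Join the multicast group for `channel` (this is the IGMP 'join')."""
    group = CHANNEL_GROUPS[channel]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # IP_ADD_MEMBERSHIP sends an IGMP membership report upstream.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def change_channel(sock: socket.socket, old: int, new: int) -> socket.socket:
    """Leave the old group and join the new one, as an STB does on channel change."""
    mreq = struct.pack("4s4s", socket.inet_aton(CHANNEL_GROUPS[old]),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()
    return tune(new)
```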

To minimize this delay, techniques called “Instant Channel Change” (ICC) have been proposed. Changing channels on IPTV systems, while minimizing the perceived delay to the user, goes by a variety of names in the industry (e.g., rapid/fast/instant channel change). In this paper we will refer to this feature generically, and independent of platform or provider, as “Instant Channel Change” (ICC). One approach first sends a full-quality unicast stream of the new channel’s data at a higher bit rate [9]. The higher bit rate allows the play-out buffer to be filled faster and the unicast also masks the multicast join latency. But, each such unicast stream adds a transient demand to the steady-state multicast traffic. Flash crowds of channel changes place significant demands on access links and video server resources. These unicast channel change mechanisms are built on the expectation that ICC events are mostly uncorrelated, and the impulse load on the system does not grow to unmanageable levels. However, when they are correlated (e.g., during prime-time viewing where subscribers switch at the completion of a program), there is a burst load that imposes a significant I/O demand on video servers sending the unicast stream and bandwidth on the links in the network. This paper is focused on these situations where the channel change events are correlated and occur close to each other from a number of STBs in a large-scale IPTV deployment. We seek an approach where the bandwidth requirements are relatively independent of the population of users switching to a channel or the arrival process of the channel change events.

We propose a multicast-based approach using a secondary "channel change stream" associated with each channel. This secondary stream is a lower bit rate stream carrying only I-frames and associated audio. During a channel change event, the client does a multicast join to this secondary stream, which results in a low “channel display latency” (between A and B in Figure 1). In the background, the play-out buffer of the new full-quality multicast stream is filled. Once the playout point is reached, the full-quality video is displayed and the transition is complete (at point C in Figure 1). The time interval from when the user requests the channel change to this completion point C is the “channel switching latency”.

Figure 1: Display Latency vs. Channel Switching Latency

The secondary channel change stream carrying I-frames is likely to use about 50% of the bandwidth compared to the full-quality video (we present an example later in this paper). The drawback therefore is the 50% additional capacity required during the channel switching period on the access network to the home. However, as we describe below, there is likely to be sufficient capacity on the access link to the home to carry this channel change stream. Our proposed scheme also requires additional I/O capacity from the video server (up to 50% for each channel that has at least one channel change in progress). However, unlike the unicast ICC which requires a full-rate stream at an accelerated rate for every single user (of which there may be thousands if there is a correlated channel change event), the multicast approach uses a lower rate I-frame stream and is relatively independent of the user population requesting a channel change. Moreover, we will show through traces that the maximum simultaneous number of channels experiencing a channel change event at any time in an operational environment is small compared to the total number of active channels.

On the other hand, our multicast approach has several performance benefits including lower bandwidth consumption during a burst (flash crowds) of channel changes (the obvious advantage of using multicast), lower display latency (50% lower), and lower variability of network & server load. For a low intensity of channel change events, the reduction in the display latency with the multicast approach may not be significant. But, even with a modest rate of channel changes, the need to use separate unicast assist sessions leads to consistently higher bandwidth requirements than our multicast scheme. The predictable load on the links and video servers with the multicast scheme makes provisioning decisions much simpler. The tradeoff in our proposed multicast scheme is the use of a lower quality (just I-frames) video displayed during the play-out buffering period (between point B and point C in Figure 1). We believe that this is a tradeoff worth making to deliver the quick response to clients (they don’t have to see a blank/black image till point C in Figure 1) and the benefit of reduced bandwidth requirements on the infrastructure.

The rest of the paper is organized as follows: Section II overviews related unicast instant channel change (ICC) schemes in IPTV systems. Section III provides an overview of the network infrastructure. Section IV outlines the unicast ICC scheme and develops our Multicast ICC scheme. Section V provides performance analysis including trace-driven NS-2 simulations to validate the bandwidth requirements and channel change latencies for the two ICC schemes. Finally, Section VI concludes the paper.

II. RELATED WORK

There have been different approaches to reduce the delay associated with each part of the channel change process. Cho et al. [4] propose an approach based on the idea that adjacent channels are going to be watched more frequently by viewers. Their scheme maintains a table that keeps track of adjacent channel requests, adds currently inactive adjacent channels to the table, and joins them while a join is in progress for the newly requested channel. One vendor’s approach [8] improves on this scheme by adding a statistical selection of channels to join instead of choosing the adjacent channels. These approaches address the issue of channel switching latency by issuing joins to different multicast groups, but do not address playout buffering latencies. An analytical study [5] shows that user channel change requests are not a steady flow and tend to double at commercial breaks, disrupting the steady state of the system.

Another commercial approach [9] is to create a unicast stream sent at an accelerated rate so that the play-out buffer threshold is reached quickly. When the user requests a channel change, the STB waits for the unicast stream to fill up to a higher threshold in the playout buffer, and then begins playout. After the buffer is filled to the higher threshold, a join request is issued to the multicast stream. By the time the buffer drains to the normal playout threshold, the multicast join will complete and the multicast stream continues to deliver the video to the client. The unicast session starts the content staggered in time, so that the handoff to the new multicast session is seamless. As discussed earlier, such unicast streams add a transient demand to the steady-state multicast traffic, proportional to the number of users concurrently initiating a channel change event. Flash crowds of channel changes potentially place excessive demands on access links and video server resources.

To summarize, there are two key issues with the existing proposals: 1) the schemes may not scale well when there are many channel change requests over short periods (e.g., during commercial breaks in popular TV events/shows), and 2) they do not help when users surf between non-contiguous channels. Our multicast ICC scheme addresses both scalability and stability of the system even during flash crowds, while meeting the user expectation for rapid channel change. Our scheme achieves lower channel switching latency, and is independent of the increase in the number of channel changes issued by users.

III. NETWORK ARCHITECTURE FOR IPTV SERVICE

Figure 2 shows the network architecture and system components for IPTV systems. The distribution network consists of video servers for each metropolitan area connected through the metro network of IP routers and optical networks to the access network. The access network typically includes a tree-like distribution network (whether it is a DSL or cable environment) with copper (existing environments) or fiber (newer environments) connectivity to the home. In an example nationwide distribution environment, content is received from the point where it is acquired over an IP backbone to the IPTV head-end for the metropolitan area at a Video Hub Office (VHO). From this point, the video is distributed to subscriber homes in the metro area. The architecture comprises:

• Content sources & D-Server: Video content is received from content providers either live or from storage (for Video on Demand). The content is buffered at Distribution Servers (D-Server) in the Video Hub Office (VHO). A separate D-server could be used for every channel; all D-servers share the link to the VHO, and on average a single D-server’s I/O capacity is of the order of ~100s of Mbps.

• Metro Network: The metro network connects the VHO to a number of central offices (CO). The metro network is usually an optical network with significant capacity. Downstream of the CO are several Digital Subscriber Line Access Multiplexers (DSLAMs).

Figure 2: IPTV Network Architecture & Components

• Customer access link: Delivery of IPTV over the “last mile” may be provided over existing loop plant to homes using higher-speed DSL technologies such as VDSL2. Service providers may use a combination of Fiber-to-the-Node (FTTN) and DSL technologies or implement direct Fiber-to-the-Home (FTTH) access.

• Customer premises equipment (CPE): CPE includes the broadband network termination (B-NT) and a Residential gateway (RG). The RG is typically an IP router and serves as the demarcation point between the service provider and the home network.

• IPTV Client: The IPTV client (e.g., Set Top Box (STB)) terminates IPTV traffic at the customer premises.

Link capacities vary significantly in the architecture. While capacity between the Internet routers may be 10 Gbps, the capacity from a CO to the DSLAM may be around 1 Gbps. The DSL link itself may also be a bottleneck, which the service provider avoids by provisioning and arranging with the subscriber so that the total demand (e.g., number of TVs viewing distinct channels in the home) does not exceed the DSL link capacity. The link from the CO to the DSLAM (a DSLAM may support 100-200 homes) may also be a bottleneck in certain situations. Finally, the number of parallel streams carried out of a D-Server may be limited by its I/O capacity. We will evaluate the impact of high utilization due to channel change events on these potential bottleneck links and their effects on user quality of experience (QoE) as evaluated by the channel display and switching latencies.
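
To see why the CO-DSLAM link is a natural bottleneck, consider a rough provisioning check. All numbers below are assumptions in the spirit of the capacities quoted above, not measurements: delivering a unicast copy per viewer can approach the link capacity, while multicast delivery scales only with the number of distinct channels in use.

```python
# Hypothetical provisioning check for a CO-DSLAM link (all numbers assumed).
LINK_MBPS = 1000          # ~1 Gbps CO-to-DSLAM link
HOMES = 150               # a DSLAM may serve 100-200 homes
STREAMS_PER_HOME = 2      # concurrent TVs per home, bounded by the provider
RATE_MBPS = 3.2           # full-quality stream rate used later in the paper

unicast_demand = HOMES * STREAMS_PER_HOME * RATE_MBPS   # one copy per viewer
distinct_channels = 60                                  # assumed channels in use
multicast_demand = distinct_channels * RATE_MBPS        # one copy per channel

print(f"unicast delivery:   {unicast_demand:6.1f} Mbps")   # 960.0 Mbps, near saturation
print(f"multicast delivery: {multicast_demand:6.1f} Mbps") # 192.0 Mbps
```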

IV. INSTANT CHANNEL CHANGE (ICC) IN IPTV SYSTEMS

A. Current Approaches for ICC

We refer to the existing approaches for fast channel change (ICC) as “Unicast Instant Channel Change” or “Unicast ICC” and focus on one baseline approach. As shown in Figure 3, when a client connects to the D-server, the D-server begins unicasting data to the client, starting from an I-frame in its buffer. The D-Server bursts the data at a higher rate than the nominal video bit rate.


Figure 3: Unicast ICC scheme: uses accelerated unicast streams to fill the playout buffer, thus reducing the wait at the STB.

Given this higher unicast rate, the set-top box (STB) buffer fills up faster than the nominal rate at which the multicast stream is transmitted. A multicast “join” is issued by the client after sufficient data is buffered in the playout buffer of the STB so that the buffer does not under-run by the time the multicast join completes and the subsequent multicast stream is received. The unicast stream is stopped once the playout buffer is filled to the desired level. Once the multicast join is successful, the client can start displaying video received from the multicast stream.

Figure 5 shows the timeline of the sequence of events and buffer dynamics with the Unicast ICC scheme. When a channel change is issued, a unicast join is issued to the D-Server (1). At point 2, the client starts receiving packets from the unicast stream, and buffers the packets up to the play-out point. At point 3, the full quality video is displayed on the screen starting with the first I-frame. At point 4, the client issues a join to the multicast group corresponding to the channel. At point 5, the client starts receiving packets from the multicast stream. Any gaps between the two streams are reconciled by having the D-Server retransmit packets corresponding to the gap.

Figure 5: Timeline of events in Unicast ICC.
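
A minimal, idealized model of this timeline is sketched below; it computes when each numbered event occurs. The acceleration factor, buffer thresholds, and delays are illustrative assumptions, not values from the deployed system.

```python
# Idealized unicast ICC timeline; all constants are illustrative assumptions.
NOMINAL = 3.2e6 / 8              # bytes/sec, full-quality stream (~3.2 Mbps)
ACCEL = 1.5                      # assumed acceleration of the unicast burst
PLAYOUT = 1.0 * NOMINAL          # bytes buffered before playout starts
HIGH = 2.0 * NOMINAL             # buffer level at which the multicast join is issued
START_DELAY = 0.1                # sec until the first unicast packet (step 2)
JOIN_DELAY = 0.2                 # sec for the multicast join to complete (steps 4->5)

burst = ACCEL * NOMINAL
t_display = START_DELAY + PLAYOUT / burst                  # step 3: first frame shown
# After playout begins, the buffer gains at (burst - NOMINAL) bytes/sec.
t_join = t_display + (HIGH - PLAYOUT) / (burst - NOMINAL)  # step 4: join issued
t_handoff = t_join + JOIN_DELAY                            # step 5: multicast arrives
# Sanity check: the buffer must not under-run while the join is in flight.
assert HIGH - JOIN_DELAY * NOMINAL > 0

print(f"display latency : {t_display:.2f} s")
print(f"switch to multicast completes at t = {t_handoff:.2f} s")
```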

B. Our Proposed Approach: Multicast ICC


The Unicast ICC method indeed reduces the channel change time. However, it is designed with the expectation that the number of concurrent ICC requests is small. When there are a number of concurrent ICC requests (e.g., during prime time TV viewing periods, when requests from multiple users are correlated), this approach can impose a substantial load on the network, as we have observed in practice. Moreover, the D-servers with limited I/O capacity also have to support this large number of unicast streams simultaneously, potentially requiring the service provider to deploy additional servers to meet the peak demand.

We propose a multicast-based Instant Channel Change (Multicast ICC) mechanism that reduces the channel switching latency, especially under high loads of channel change events. The approach makes the system scale better as the user population grows. The bandwidth demand is more predictable.

1) Motivation:

The network topology from the D-Server to the DSLAM is shared by all the users. Unicasting the same stream for a given channel is wasteful use of network resources. When a user “channel surfs” or changes channels rapidly, she intends to catch a glimpse of the channel that is being played and then decides whether to continue to watch the channel being displayed currently or to move on to another channel. We believe it is sufficient for the user to briefly (for 1-2 seconds) see a lower quality, less-than-full-motion video with good quality sound, and that it is better than displaying a blank screen for the corresponding period (as in some satellite TV systems) when a channel change occurs.

In the underlying network, there are bandwidth constraints on the links from the DSLAM to the CO; hence it is desirable to limit the number of concurrent streams delivered to a particular DSLAM. Similar limits also exist for the number of parallel streams delivered out of the D-Server. Multicast is a natural fit to address these performance bottlenecks. As we noted earlier, we examined the number of channels receiving concurrent channel change events in an operational IPTV system. We find that a relatively small proportion (less than 25%) of the total number of channels receive change requests when measured in 1 second bins, where the request was initiated a few seconds earlier, as shown in Figure 4.

Figure 4: Characteristics of concurrent channel change events (measured over 1 second intervals) from a 7-day trace.

We examined the trace of channel change events over a 7-day time period in a portion of the metro network that supported over 5000 users in an operational IPTV system. Figure 4 shows the cumulative distribution of the number of ICC requests and the number of unique channels that were delivered by the ICC, measured over 1 second intervals (the request may have been initiated a couple of seconds earlier). The number of distinct channels experiencing channel change, measured in the 1 second bins, saw a peak of 175, which was less than 25% of the number of distinct channels observed over the trace period. On the other hand, the number of ICC events measured in the 1 second intervals saw a peak of 326 requests. Thus, even in this small sample set, we see that the unicast ICC has to support an additional 326 unicast (faster than full bit rate) streams, while the multicast ICC has to support only an additional 175 I-frame (at about 50% bit rate) channel change streams. This is more than a factor of two improvement compared to unicast ICC.
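
As a back-of-envelope illustration of this gap, the sketch below compares the extra load the two schemes would add in that peak second, using the example encoding rates reported later in Section V (~3.2 Mbps full quality, ~1 Mbps I-frame-only) and an assumed 1.3x acceleration for the unicast burst.

```python
# Back-of-envelope comparison for the peak second in the 7-day trace
# (326 concurrent ICC requests across 175 distinct channels).
FULL_RATE_MBPS = 3.2        # full-quality stream (example encoding, Section V)
IFRAME_RATE_MBPS = 1.0      # I-frame-only stream (example encoding, Section V)
ACCEL_FACTOR = 1.3          # assumed acceleration of the unicast burst

requests, channels = 326, 175
unicast_extra = requests * FULL_RATE_MBPS * ACCEL_FACTOR   # one stream per user
multicast_extra = channels * IFRAME_RATE_MBPS              # one stream per channel
print(f"unicast ICC extra load:   {unicast_extra:7.1f} Mbps")
print(f"multicast ICC extra load: {multicast_extra:7.1f} Mbps")
```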

2) Multicast ICC

We introduce a secondary lower-bandwidth channel-change stream corresponding to each channel at the D-server. This stream will consist of I-frames only. Therefore, each channel will now add another IP multicast group, called the secondary ICC multicast group, that transmits only the I-frames for each channel. The secondary ICC channel change stream is offset by a short time interval (approximately 2.6 seconds in our experiments) so that the secondary stream is delayed in time relative to the primary high quality multicast stream. This enables the client to display the (less than full motion) video of the new channel that the user switches to, while allowing the play-out buffer to be filled with the primary multicast stream.
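
Functionally, the D-server-side transformation is just a filter plus a fixed delay. The sketch below expresses it over a simplified frame representation; a real implementation would operate on MPEG transport stream packets, and the Frame tuple and generator interface are our assumptions.

```python
from typing import Iterator, NamedTuple

class Frame(NamedTuple):
    pts: float        # presentation timestamp (seconds)
    kind: str         # 'I', 'P', or 'B'
    payload: bytes

ICC_OFFSET = 2.6      # secondary stream lags the primary by ~2.6 s (per the text)

def secondary_icc_stream(primary: Iterator[Frame]) -> Iterator[Frame]:
    """Yield the I-frame-only channel change stream, delayed by ICC_OFFSET.

    The delay ensures that when the client's playout buffer (fed by the
    primary stream) reaches its threshold, it already holds the content the
    viewer is currently watching on the secondary stream, so the switch is
    seamless.
    """
    for frame in primary:
        if frame.kind == "I":
            yield Frame(frame.pts + ICC_OFFSET, frame.kind, frame.payload)
```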

Figure 6 shows the mechanism of the Multicast Instant Channel Change process:

A. The channel change request for a particular channel results in a multicast join request being issued by the client. A join is issued for both the primary multicast stream as well as the secondary ICC multicast group for the channel change stream obtained by extracting the I-frames from the primary stream. With IP multicast, the join progresses as far up the distribution tree as necessary, until it hits an “on-tree” node.

B. The D-server transmits both the secondary channel change stream as well as the primary multicast channel if there is an outstanding channel change event for that channel.


C. The first frame from the secondary ICC channel change stream is displayed on the screen by the client, without buffering, along with the audio, resulting in what we believe is an acceptable viewing experience, even though it is not necessarily full-motion.

D. In the meantime, the higher quality primary multicast stream fills the buffer.


E. Once the buffer is filled up to the playout point, the client can start displaying the high quality picture from the playout buffer. Because of the time-offset between the secondary ICC channel change stream and the primary stream, when the buffer meets its threshold, the client (STB) can seamlessly switch to the buffered (primary) input without a user-perceived time-shift of the video.

Figure 6: Steps of the Multicast ICC Scheme
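
Steps A-E above map onto a small piece of client-side control flow, sketched below. The net and decoder objects and their methods (join_group, leave_group, poll, display, and so on) are hypothetical placeholders standing in for the STB's real group-management and decoding APIs.

```python
# Client-side control flow for Multicast ICC (steps A-E); all helper objects
# and methods below are hypothetical placeholders.
PLAYOUT_THRESHOLD = int(2.6 * 3.2e6 / 8)   # bytes; ~2.6 s of full-rate video

def multicast_icc_change(channel, net, decoder):
    primary = net.join_group(channel.primary_group)    # step A: join primary...
    secondary = net.join_group(channel.icc_group)      # ...and secondary ICC group
    buffered = 0
    while buffered < PLAYOUT_THRESHOLD:                # step D runs in background
        for frame in secondary.poll():                 # steps B/C: I-frames arrive
            decoder.display(frame)                     # shown immediately, no buffering
        for packet in primary.poll():                  # primary stream fills the buffer
            decoder.buffer(packet)
            buffered += len(packet)
    net.leave_group(channel.icc_group)                 # step E: leave the ICC group
    decoder.play_from_buffer()                         # seamless switch to full quality
```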

3) Advantages & Tradeoffs with Multicast ICC

We observe that our proposed multicast ICC approach requires approximately 50% additional capacity for each channel that has a channel change event, to carry the secondary channel change stream. This requirement is both on the D-server I/O and the network links. However, the capacity requirement is relatively independent of, and does NOT grow with, the user population requesting a channel change. Moreover, we showed from traces of channel change events that the maximum simultaneous number of channels experiencing a channel change event at any time in an operational environment is small.

In comparison to the unicast ICC, with our multicast ICC scheme both the D-server and the network require lower capacity during periods of high channel change rates. The channel change processing is more scalable, and the D-server I/O capacity requirements are more predictable even with bursts of channel change requests. Unlike the unicast ICC approach, our proposed multicast ICC scheme can more easily deal with correlated channel change events from users.

The tradeoff in our proposed multicast scheme is a lower quality (less than full motion) video that is displayed during the period that the playout buffer has to be filled (~2-3 seconds). We believe that the STB can play out I-frames as they come, without any buffering to prevent under-runs. On the other hand, the full quality MPEG-4 video stream is much more susceptible to buffer under-runs, as players have difficulty in adjusting to gaps in the stream. The player will have to wait for the next I-frame and only then re-start buffering. Subsequently, when the playout buffer reaches the playout threshold, it can start playing the video. In contrast, an I-frame only stream can play out an I-frame once it is received in its entirety. If the buffer then runs out, the current I-frame is displayed as a freeze frame until the next I-frame arrives.
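
The two playout behaviors can be contrasted in code. In the sketch below, None models an interval with no data (an under-run); the screen and buffer objects and their methods are hypothetical.

```python
def play_iframe_stream(frames, screen):
    """I-frame-only playout: an under-run degrades to a freeze frame."""
    last = None
    for frame in frames:          # `frames` yields an I-frame, or None on under-run
        if frame is not None:
            last = frame
        if last is not None:
            screen.show(last)     # keep showing the last complete I-frame

def play_full_stream(frames, screen, buffer, threshold):
    """Full-quality playout: a gap forces a resync from the next I-frame."""
    for frame in frames:
        if frame is None:              # under-run: the decoder loses its references
            buffer.clear()
            buffer.wait_for_iframe()   # hypothetical: skip ahead to the next I-frame
            buffer.fill_to(threshold)  # re-buffer to the playout threshold
            continue
        screen.show(frame)
```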

V. SIMULATION RESULTS

We built an NS-2 [10] simulation of the metro/access network and the VHO servers. Our objective is to evaluate the unicast and multicast instant channel change (ICC) schemes in terms of the following metrics: bandwidth consumption at the D-servers and the bottleneck link, the display latency, and the channel switching latency. For the scenarios we considered, the link between the central office (CO) and the DSLAM (see Figure 2) and the D-server I/O were the bottlenecks. We present results based on both traces collected from an operational IPTV environment and synthetic channel change arrival patterns.

Our trace-driven simulation uses data from channel change logs in an operational IPTV system during the prime time viewing period (8pm-9pm) of a given day. First, we use a trace of the channel change requests initiated by users served by a single CO for a time period of 150 sec to simulate the effects on the link between the CO and the DSLAM. Second, we select the most popular channel from the actual trace and simulate the channel change requests handled by a particular D-Server for a period of 1600 sec. Third, we investigate the behavior of unicast and multicast ICC under low loads with respect to the channel change events. Fourth, we show a stress test based upon a synthetic workload that has a higher intensity of channel change events.

The video for our trace-driven simulations was from a library of traces of encoded video [6][7]. These traces were generated by encoding ~60 minute long videos. The video codec used for the streams was H.264/MPEG-4 AVC (Advanced Video Coding), from the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), a.k.a. the Joint Video Team (JVT).

Figure 7: Comparison of I-frame (secondary, multicast) stream vs. Full-rate stream (with I, P, B frames). I-frame stream has a lower mean (~1Mbps) and lower variance.

Some sample statistics of the video frame sizes (mean, max, min) in bytes are: I-frames (16011, 36027, 1162) bytes, P-frames (6587, 32754, 60) bytes, B-frames (2133, 19013, 16) bytes. The frame statistics indicate a larger per-frame size of I-frames, but lower variance in sizes, unlike P- and B-frames which display high variance. Moreover, the I-frame stream averages 1 Mbps, compared to 3.2 Mbps for the full-quality stream. The time-dynamics of the I-frame secondary stream vs. the full-rate stream are shown in Figure 7. Clearly, the I-frame stream has substantially lower mean and variance compared to the full-rate stream. The accelerated unicast stream would thus require a higher average rate (i.e., potentially even greater than 3-4 times the I-frame rate). Also recall that this accelerated stream is unicast to each receiver, compared to the multicast of the I-frame stream in our proposal.

Figure 8 shows the time series for the channel change events from the trace of an operational IPTV system collected over a 24-hour period. This period was chosen to include the peak concurrent channel change events observed over the 7-day trace whose statistics were shown in Figure 4. This figure shows the maximum number of ICC requests per second, examined over 5 minute intervals. There is considerable variability and burstiness in the channel change requests (almost an order of magnitude). A unicast ICC mechanism will result in a corresponding burstiness in the traffic emanating from the D-server and flowing over the metro and access networks.

Figure 8: Channel Change events (in requests/sec) examined over 5 minute intervals for an entire day

A. Bottleneck Link from Central Office (CO) to DSLAM

In the actual channel change request trace, there were a few hundred distinct channels being transmitted. However, on average there were far fewer channels active below a given DSLAM. To deal with NS-2 simulation constraints, we restricted the number of channels at the DSLAM to 10, and scaled down the link capacity from the DSLAM to the CO to 200 Mbps. The trace-driven simulation was run for 150 seconds. The empirical distribution of the channel change requests across all channels initiated from all users below the CO is shown in Figure 9.

Figure 10 shows the bandwidth consumption of the unicast and multicast ICC schemes as a function of time. Observe that the unicast ICC scheme has a higher average bandwidth requirement, and has higher volatility (especially during the peak periods of channel change, between 50-90 seconds). The bandwidth requirement for our multicast ICC scheme is lower and shows less variability even with the increase in the number of channel changes (for all channels).

Figure 9: Empirical channel change request distribution for a single Central Office (CO): 150 second trace.

Figure 10: Bandwidth consumption (CO-DSLAM link): Unicast ICC requires more bandwidth especially at peak loads.

Figure 11 shows the display latencies for this trace-driven simulation. With Unicast ICC, the CO-DSLAM link (with reduced capacity to reflect the limited scale of our simulation) saturates during peak channel change periods.

Figure 11: Display Latency: Multicast ICC vs. Unicast ICC for channel changes under a single Central Office (CO)

As a result, the display latency deteriorates sharply (we assume that network buffering is sufficient to deal with the increased traffic and there is no loss). At a low intensity of channel changes, the Unicast ICC display latency stays constant at around 400 msec. On the other hand, the Multicast ICC scheme in general has a lower display latency that varies between 0 and 480 msec (due to variability in the arrival time of the I-frame, relative to the channel change event).

Recall the two measures of latency that we examine: the “display latency” and the “switching latency” (Figure 1). In terms of the channel switching latency (i.e., when the switchover to the multicast video stream occurs), the unicast ICC scheme performs a little better (see Figure 12). This is a tradeoff with our multicast ICC approach, but the average difference is only 0.5 seconds, which we believe is tolerable since the user will be seeing a lower quality video in the interim. The spike in the unicast scheme’s channel switching latency again reflects the degradation that occurs during peak periods of channel change requests.

Figure 12: Channel Switching Latency (i.e. final switchover to the multicast video session).

B. Bottleneck: D-Server I/O

In this scenario, the channel change requests for the most popular channel at a D-Server are collected (Figure 13).

Figure 13: The number of incoming channel change events for the most popular channel at the D-Server

The key bottleneck we examine here is the D-server I/O capacity. The popularity of a channel is defined by the largest number of users changing to the particular channel during the entire trace period of one hour. Our simulation used the channel change event trace for the time period from 800 to 2400 sec (see Figure 13). The channel change rate has a peak at around 1900 sec.

We set the nominal capacity of the D-server I/O to 50 Mbps, based on the constraints of the NS-2 simulation as well as the consideration that only a portion of the physical D-server capacity would be available for supporting one of the channels (across a line-up of hundreds of channels).

Figure 14: D-server I/O capacity requirements: Unicast ICC again requires much more capacity than Multicast ICC.

Figure 14 shows the I/O capacity requirements at the D-server. The Unicast ICC scheme consumes significantly more capacity than Multicast ICC both in the steady state and during peak times. Multicast ICC requirements are also stable even during peak channel change times.

Figure 15 shows consistent performance benefits of Multicast ICC over Unicast ICC in terms of display latency. Moreover, this metric is not affected by the aggregate number of channel changes initiated by the total user population.

Figure 15: Display latency (popular channel case): Multicast ICC consistently outperforms Unicast ICC

The channel switching latency (recall Figure 1) is plotted in Figure 16. The behavior is similar to the prior results, where the unicast scheme performs better, subject to spikes during overloads. As mentioned earlier, since a lower quality, less-than-full-motion video is played out in the multicast ICC scheme, we think this additional channel switching latency will be acceptable. The additional latency for the multicast scheme is primarily to fill the playout buffer, as the multicast of the full-quality stream is transmitted at the regular playout rate.


Figure 16: Channel Switching Latency: Unicast ICC is better than Multicast ICC, subject to overload spikes.

The trace-driven simulation results demonstrate the bandwidth requirements and the latencies involved in the Unicast and Multicast ICC schemes. To examine the behavior of the systems under extreme conditions, we subject the mechanisms to synthetic channel change workloads of varying intensity and show that the results based upon synthetic workloads are consistent with the trace-driven simulations.

C. Synthetic Workload: Rare Channel Changes

In this scenario, we simulate a channel change process in which the network is almost free of transient channel changes and hence is not excessively loaded. Our first topology has 150 nodes per DSLAM, with 5 channels involved in relatively rare channel changes (an average of a single request every 12 seconds per DSLAM).
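
A workload of this kind can be modeled as a Poisson arrival process. The sketch below draws exponential inter-arrival gaps with a 12-second mean and picks a target channel uniformly at random; the uniform channel popularity is our assumption, since the text does not specify it.

```python
import random

def rare_change_workload(duration_s=150.0, mean_gap_s=12.0, channels=5, seed=1):
    """Poisson channel change arrivals: ~1 request / 12 s per DSLAM (assumed model)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_gap_s)   # exponential inter-arrival gaps
        if t > duration_s:
            return events
        events.append((t, rng.randrange(channels)))   # (time, target channel)

for t, ch in rare_change_workload()[:5]:
    print(f"t={t:7.2f}s -> channel {ch}")
```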

Figure 19: Bandwidth requirement on the DSLAM-CO link. Despite the small number of channel changes, multicast ICC has bandwidth costs comparable to unicast ICC.

Figure 19 shows that despite rare channel changes (involving less than half the number of channels at any time), the additional demand on bandwidth from the secondary channel change multicast stream is no more than the unicast ICC bandwidth requirement. This is because the secondary stream operates at a lower rate, while the unicast ICC scheme uses a rate accelerated beyond the full-quality bit rate of the video.

Figure 17: Rare channel changes: Distribution of Requests

Table I: Summary statistics across 10 runs for the rare channel case with different random seeds. Note that in all cases the standard deviation is small compared to the mean, and hence the confidence intervals are tight.

                Display Latency (sec)                CO-DSLAM link bandwidth (Mbps)
Scheme          Peak   Avg.   Std. Dev   95% C.I.    Peak   Avg.   Std. Dev   95% C.I.
Unicast ICC     0.8    0.43   0.01       0.42-0.44   100    52.5   1.2        51.46-53.35
Multicast ICC   0.7    0.26   0.01       0.25-0.27   95     51.2   0.1        51.1-51.2

Figure 17 shows the distribution of the channel change requests: observe that the number of concurrent channel changes (<= 3 changes) tends to be smaller than the number of channels (5 channels). Figure 18 shows that the display latency of multicast ICC is still better than that of unicast ICC.

Figure 18: Display Latency with rare channel change requests

Table I summarizes the statistics for the display latency and the bandwidth comparison for the two schemes, across 10 different simulation runs. The multicast ICC has bandwidth requirements comparable to the unicast ICC, even with a small number of channel changes.

This synthetic workload simulation shows that even though the multicast ICC scheme uses a secondary, lower quality channel change stream per channel that is multicast (the bit rate of this channel change stream is less than half that of the regular high-quality stream), the overall bandwidth requirements for the multicast ICC scheme (both in terms of the D-server load and the metro access network from the CO down to the DSLAMs) do not exceed those of the unicast ICC scheme, even for low channel change workloads.

D. Synthetic Workload: Popular Channel, Stress Test Case

The workload in this simulation is a more intense version of the corresponding trace-driven simulation for the most popular channel, as discussed in subsection B. It helps us to understand the impact of rapid channel changes when users switch to a given popular channel (e.g., switching to a channel at the start of a live event or to an adjacent channel during an advertisement). We assume that about 25% of the viewers served by a DSLAM tune to this popular channel within a time period of 120 seconds. We first focus on the D-Server I/O capacity requirement.

Figure 20 shows a result similar to the one shown earlier (Figure 14): the unicast ICC imposes significant requirements on the average I/O capacity of the D-server, and also results in increased variability of the load on the D-server (compared to Figure 14). Figure 21 shows the display latency during peak loads increasing significantly, to about 6-7 seconds. The multicast ICC scheme displays remarkable stability both in terms of display latency and bandwidth requirements.

Table II: Summary statistics across 10 simulations for the stress test case (different random seeds). The standard deviation is small compared to the mean, but higher for the Unicast ICC.

                Display Latency (sec)                D-Server output B/w (Mbps)
Scheme          Peak   Avg.   Std. Dev   95% C.I.    Peak   Avg.    Std. Dev   95% C.I.
Unicast ICC     6.79   0.6    0.04       0.58-0.62   50     23.64   4.3        20.57-26.71
Multicast ICC   1.25   0.26   0.02       0.25-0.27   16     7.76    0.4        7.48-8.04

This second synthetic workload simulation shows that as the system is stressed, the unicast ICC scheme incurs much higher display latency, and places much higher D-Server I/O capacity demands during times of peak load, compared to the multicast ICC scheme. One effect seen more clearly for the unicast ICC scheme here (going beyond the results seen with the trace-driven simulations) is that until bandwidth saturation occurs, the user-experienced display latency is not severely affected; after saturation, however, the display latency can degrade severely. Further, there is a significant bandwidth demand with the unicast ICC scheme.

Table II provides the summary statistics for 10 different simulation runs of the two schemes with the synthetic workload. With the increased rate of channel changes, the Unicast ICC consumes about 3 times the D-server output bandwidth of the Multicast ICC. The peak display latency with unicast ICC is also much higher, just as we observed earlier. The variability of both bandwidth demand and display latency with Multicast ICC is lower than with Unicast ICC.

Figure 20: D-server I/O Capacity Requirement: Popular Channel Stress Test. Unicast ICC is more volatile and saturates the link at peak loads.

Figure 21: Display Latency (popular channel stress test)

VI. SUMMARY AND CONCLUSIONS

As IPTV usage grows, viewers will have novel interfaces to interact with TV. One result of this is the continued propensity of viewers to switch or surf channels. With IP multicast, and especially due to the jitter arising from protocols that have to recover from packet loss and the possible delay variation in the IP network, there is a need for a playout buffer at the receiver (STB) to prevent underflows. This leads to latencies with channel switching and a consequent impact on the user quality of experience. Previous instant channel change (ICC) schemes use unicast assist streams that transmit at an accelerated rate to reduce the channel switching latency. However, they come at the cost of additional bandwidth load on the server I/O subsystems and the network. Our analysis shows that even for average channel switching loads in prime time, the transient bandwidth requirement from such unicast-based channel change schemes can be significant. Bursts in channel switching load (from correlated channel changes across users) increase the utilization of the infrastructure further, and the queuing effects can lead to longer display latencies.

In this paper, we introduced a pure multicast ICC approach that involves the use of a secondary lower quality (I-frame) channel change stream for each channel. Users issue multicast joins to this stream on a channel change event. This aspect of the scheme does not require pre-buffering, since I-frames can be displayed without the concern related to buffer under-runs. Concurrently, a join to the full-rate multicast stream is also issued. With a modest number of channel changes, this scheme outperforms the unicast scheme on all the key metrics of display latency and bandwidth requirements. The tradeoff is a lower quality (less than full-motion) video being displayed during the play-out buffering period, which lasts a small number of seconds (between point B and point C in Figure 1). We believe that this is a tradeoff worth making to deliver a quick response to clients: they do not have to see a blank/black image until the playout buffer is filled to the threshold (point C in Figure 1). The scheme results in a significant reduction in the bandwidth requirement for the infrastructure.

ACKNOWLEDGEMENTS

We gratefully acknowledge the help, input and suggestions we received from Amy Reibman and Chris Chase of AT&T and John Woods of RPI.

REFERENCES

[1] IPTV, Wikipedia. URL: http://en.wikipedia.org/wiki/IPTV
[2] Cisco Systems, “The Exabyte Era,” White Paper, 2007. URL: http://www.cisco.com/application/pdf/en/us/guest/netsol/ns537/c654/cdccont_0900aecd806a81a7.pdf
[3] Cisco Systems, “Optimizing Video Transport in Your IP Triple Play Network,” White Paper, 2006. URL: http://www.webtorials.com/abstracts/Cisco62.htm
[4] Chunglae Cho, Intak Han, Yongil Jun, and Hyeongho Lee, “Improvement of Channel Zapping Time in IPTV Services using the Adjacent Groups Join Leave Method,” International Conference on Advanced Communication Technology, 2004.
[5] Donald E. Smith, “IPTV Bandwidth Demand: Multicast and Channel Surfing,” Proceedings of IEEE INFOCOM 2007, Anchorage, AK, May 2007.
[6] Geert Van der Auwera, Prasanth T. David, and Martin Reisslein, “Traffic Characteristics of H.264/AVC Variable Bit Rate Video,” Technical Report, Arizona State University, March 2007.
[7] Geert Van der Auwera, Prasanth T. David, and Martin Reisslein, “Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension,” Technical Report, Arizona State University, July 2007.
[8] Lucent Technologies, “Optimization of IPTV Multicast Traffic Transport over Next Generation Metro Networks,” Telecommunications Network Strategy and Planning Symposium, 2006.
[9] Microsoft Corporation, Microsoft TV: IPTV Edition. URL: http://www.microsoft.com/tv/IPTVEdition.mspx
[10] NS-2 Simulator. URL: http://www.isi.edu/nsnam/ns/