Proxy Caching For Multimedia Objects
Project of Multimedia Course
Amir Nayyeri
Overview
- Introduction
- Video Caching
  - Introduction
  - Overview of Current Algorithms
- My Work
- Other Interesting Subjects
  - Prefetching
  - Distributed Proxies
Introduction
What can a Proxy Do?
- Reduce the delay for users
- Reduce the load on the backbone
How?
- Prefetching
- Caching popular requests
[Figure: clients reach the video server through proxies over the backbone network.]
Video Stream Caching
Video Objects vs. Web Objects
- High data rate, yet adaptive
- Huge volume: one hour of MPEG-1 is about 675 MB
- Long playback duration
- Various interactions: random access, early termination
Video Stream Caching
Video Objects vs. Web Objects
- Caching the entire object is not feasible
- Media streams are not required to be delivered all at once
- We can reduce the bandwidth of a media stream, at the cost of lower quality
Video Stream Caching
Three Primary Ideas: Sliding Interval Caching Prefix Caching Segment Caching
Video Stream Caching
Sliding Interval Caching (Dan et al., 1996)
- Maintain the blocks after servicing a request
- Tries to benefit from similar requests arriving within a short period of time
[Figure: requests r1 and r2 read blocks 1-9; the interval of blocks between the two playback points slides forward with them and stays cached.]
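The sliding-interval idea can be sketched as a toy in Python (an illustration of the mechanism, not code from the paper): a leading request r1 fetches blocks from the server and leaves them cached, and a trailing request r2 consumes them from the cache, so the shared interval slides forward.

```python
from collections import deque

class SlidingIntervalCache:
    """Illustrative sketch: keep each block cached after the leading request
    reads it, and release it once the trailing request has consumed it, so
    closely spaced requests for the same video share one sliding interval."""

    def __init__(self):
        self.interval = deque()   # blocks currently held in the cache
        self.server_reads = 0     # blocks fetched from the video server

    def leading_read(self, block):
        # Leading request r1 fetches the block from the server,
        # then leaves it in the cache for followers.
        self.server_reads += 1
        self.interval.append(block)
        return block

    def trailing_read(self):
        # Trailing request r2 is served from the cache; once consumed,
        # the block is dropped and the interval slides forward.
        return self.interval.popleft()

cache = SlidingIntervalCache()
served_r1 = [cache.leading_read(b) for b in range(1, 10)]   # r1 reads blocks 1..9
served_r2 = [cache.trailing_read() for _ in range(9)]       # r2 follows shortly after
```

Only r1 touches the server; r2's nine reads are all absorbed by the cached interval, which is empty again once r2 catches up.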
Video Stream Caching
Prefix Caching (Sen et al., 1999)
- Cache the initial frames of the stream; many users terminate the video before reaching its end
- The size of the video prefix depends on the path between the server and the proxy, and on the client playback delay
- Aids bandwidth smoothing
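A toy timeline model (my own simplification, not from Sen et al.) shows why the prefix size is tied to the server-to-proxy path delay: while the cached prefix plays out, the suffix has time to arrive from the server.

```python
def count_stalls(prefix_blocks, server_latency, total_blocks):
    """Toy model, one block per time slot: the client needs block t at slot t.
    Blocks 0..prefix_blocks-1 sit in the proxy's cached prefix and are
    available immediately; the suffix is requested at slot 0 and suffix block
    t arrives from the server at slot server_latency + (t - prefix_blocks)."""
    stalls = 0
    for t in range(prefix_blocks, total_blocks):
        arrival = server_latency + (t - prefix_blocks)
        if arrival > t:        # block not yet at the proxy when it is needed
            stalls += 1
    return stalls
```

In this model playback is stall-free exactly when the prefix covers the path latency: `count_stalls(5, 5, 100)` is 0, while `count_stalls(2, 5, 100)` stalls on every suffix block.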
Video Stream Caching
Segment Caching (Wu et al., 2001)
- Blocks of a media object are grouped into variable-sized, distance-sensitive segments
- Two LRU stacks are maintained: one for initial segments, one for later segments
- Provides better facilities for the replacement mechanism
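The two-LRU-stack idea can be sketched as follows; this is an illustration of the principle (the capacities and the fixed split are assumptions of the sketch, not the paper's policy). Keeping initial segments in their own stack stops the larger, colder tail segments from evicting them.

```python
from collections import OrderedDict

class TwoStackSegmentCache:
    """Illustrative sketch: initial segments of a video get their own LRU
    stack so they are not evicted by the churn of later segments.
    Capacities are in segment counts, an assumption for the sketch."""

    def __init__(self, init_cap, later_cap):
        self.initial = OrderedDict()   # LRU stack for initial segments
        self.later = OrderedDict()     # LRU stack for later segments
        self.init_cap, self.later_cap = init_cap, later_cap

    def access(self, video, seg_no, is_initial):
        stack, cap = ((self.initial, self.init_cap) if is_initial
                      else (self.later, self.later_cap))
        key = (video, seg_no)
        if key in stack:
            stack.move_to_end(key)     # hit: move to the MRU position
            return True
        stack[key] = True              # miss: insert ...
        if len(stack) > cap:
            stack.popitem(last=False)  # ... evicting the LRU victim
        return False

cache = TwoStackSegmentCache(init_cap=2, later_cap=2)
cache.access("v1", 0, True); cache.access("v2", 0, True)
cache.access("v1", 5, False); cache.access("v1", 6, False); cache.access("v1", 7, False)
hit = cache.access("v1", 0, True)   # initial segment survived the later-segment churn
```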
Video Stream Caching
Common Assumptions So Far
- Continuous playback: no interaction with the users
- Homogeneous clients: identical access bandwidth
- Time partitioning only: non-adaptive caching
Video Stream Caching
Heterogeneous Environments
- Users with different request types, from cell phones to PCs
- Different formats of the same object may be requested
- Can benefit from transcoding or layered coding
Video Stream Caching
Rejaie et al. (2000) proposed the following:
- On the first request of an object, cache it as it is received from the server
- On later requests, try to refine the cached copy
Video Stream Caching
Rejaie et al. (2000) also proposed a replacement strategy for when the cache is full:
- Once a victim video is identified, its cached segments are flushed
[Figure: flushing order of the victim's cached segments.]
Video Stream Caching
Tang et al. (2002) proposed benefiting from transcoding:
- Simplicity: transcode only from the full version
- FVO (Full Version Only): fetch only the full version and transcode whenever other formats are required; high CPU load
- TVO (Transcoded Version Only): fetch all the transcoded formats from the server; high network load
- Tradeoff: use FVO with probability p and TVO with probability 1 - p, adjusting p according to the history of the requests
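The probabilistic tradeoff can be sketched as follows. The random choice between FVO and TVO is from the slide; the concrete adjustment rule for p below is my own toy assumption, not the formula from Tang et al.

```python
import random

def choose_policy(p, rng=random):
    """Pick FVO with probability p, TVO with probability 1 - p."""
    return "FVO" if rng.random() < p else "TVO"

def adjust_p(p, transcoding_share, step=0.05):
    """Toy adjustment rule (an assumption, not the paper's): when recent
    history shows most requests need a transcoded format, CPU-heavy FVO
    gets expensive, so shift probability mass toward TVO; otherwise drift
    back toward FVO."""
    if transcoding_share > 0.5:
        p = max(0.0, p - step)
    else:
        p = min(1.0, p + step)
    return p

p = 0.5
for share in [0.9, 0.9, 0.9]:   # a burst of transcoded-format requests
    p = adjust_p(p, share)       # p drifts from 0.5 toward TVO: 0.45, 0.40, 0.35
```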
My Work
- Supporting user interactions (random access)
- Considering segments, rather than entire objects, as the caching blocks
- Simple version: trying to get maximum utility under a limited initial tolerable delay D and proxy-to-server bandwidth B

Utility function:
U = (L - r1)² + (L - r2)² + … + (L - rn)²
where ri is the number of layers of the segment sent to the client for request i.
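Read literally, and assuming L is the full number of layers (an assumption; the slide leaves L undefined), the function can be evaluated directly:

```python
def utility(L, layers_sent):
    """Literal evaluation of the slide's function
    U = (L - r1)^2 + ... + (L - rn)^2, where ri is the number of layers
    delivered for request i and L is assumed to be the full layer count."""
    return sum((L - r) ** 2 for r in layers_sent)

u = utility(5, [5, 3, 4])   # (0)^2 + (2)^2 + (1)^2 = 5
```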
My Work
- The caching system is split into two main components: the Request Manager (RM) and the Queue Manager (QM)
- The Request Manager forwards requests destined for the main server to the Queue Manager
- The Queue Manager tries to fetch the more vital requests from the server sooner
Overall System Design
[Figure: users send requests to the Request Manager (RM); the RM hands them to the Queue Manager (QM), which fetches from the main server.]
Request Manager
- Maintains a vector of recently referenced segments
- The segments are sorted by an estimate of the probability of future accesses
- The vector is divided into N parts, P1, …, PN; Pi contains segments cached up to the ith layer
[Figure: the vector partitioned into P1 | P2 | … | PN-1 | PN.]
Request Manager
Upon a request for segment Seg from a client:
1. Update the access record of Seg and reposition it in the vector
2. Send requests for the layers that are not cached to the QM
3. Wait D
4. Send the ready video to the client
5. Start prefetching the future segments, if they are not already in the cache
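The steps above can be sketched as follows; every name is hypothetical and the streaming and delay-D wait are omitted, so this is only a skeleton of the control flow, not the actual implementation.

```python
def handle_request(seg, cached_layers, needed_layers, vector, qm_queue, prefetch):
    """Illustrative sketch of the Request Manager's steps; all names here
    are hypothetical.  `vector` holds recently referenced segment numbers,
    most recent first; `qm_queue` collects (segment, layer) requests for
    the Queue Manager."""
    # 1. Update the access record of seg and reposition it in the vector.
    if seg in vector:
        vector.remove(seg)
    vector.insert(0, seg)
    # 2. Send requests for the layers that are not cached yet to the QM.
    qm_queue.extend((seg, layer) for layer in range(cached_layers, needed_layers))
    # 3./4. Wait up to the tolerable delay D, then stream to the client
    #       (omitted in this sketch).
    # 5. Prefetch the next segment if it is not already referenced.
    if seg + 1 not in vector:
        prefetch.append(seg + 1)

vector, qm_queue, prefetch = [3], [], []
handle_request(7, cached_layers=1, needed_layers=3,
               vector=vector, qm_queue=qm_queue, prefetch=prefetch)
```

After the call, segment 7 heads the vector, layers 1 and 2 of segment 7 are queued for the QM, and segment 8 is scheduled for prefetching.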
Queue Manager
Three types of requests:
A. Time limit: must be answered within a limited time; extra layers to cache.
B. Time limit, not needed afterwards: like A, but no longer needed once the time limit has passed; extra layers to show.
C. Vital time limit: must be answered on time or the system will encounter serious problems; base layer to show.
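One way to realize these three classes is a deadline-ordered priority queue; the sketch below is my own interpretation (in particular, ranking A and B equally below C is an assumption the slides do not spell out).

```python
import heapq

# Vital class-C requests always come first (an assumed ranking); within a
# class, earlier deadlines are served first.
PRIORITY = {"C": 0, "A": 1, "B": 1}

class QueueManager:
    def __init__(self):
        self._heap = []
        self._seq = 0                  # tie-breaker for equal (priority, deadline)

    def submit(self, kind, deadline, request):
        heapq.heappush(self._heap,
                       (PRIORITY[kind], deadline, self._seq, kind, request))
        self._seq += 1

    def next_to_fetch(self, now):
        """Pop the most vital pending request; class-B requests whose time
        limit has already passed are useless and are simply dropped."""
        while self._heap:
            _, deadline, _, kind, request = heapq.heappop(self._heap)
            if kind == "B" and deadline < now:
                continue               # expired, no longer needed
            return request
        return None

qm = QueueManager()
qm.submit("A", deadline=10, request="extra layers to cache")
qm.submit("B", deadline=2, request="extra layers to show")
qm.submit("C", deadline=5, request="base layer to show")
first = qm.next_to_fetch(now=3)    # the vital class-C request is served first
second = qm.next_to_fetch(now=3)   # the expired class-B request is skipped
```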
Existing Challenges
- The replacement function: so far only primary assumptions are made, such as using well-known functions like LRU
- How much of the vector should be assigned to each layer's storage?
- How can the method be extended to heterogeneous environments?
- Theoretical analysis of the algorithms' complexity, and of the exactness of the solution, would be really valuable
- Maybe other things later
Other Interesting Subjects
Distributed Proxies Prefetching
Distributed Proxies
- Several proxies working together; they can be very far from each other
- Two main models: a centralized manager, or completely distributed
- Each client sends its request to the nearest proxy, which serves it directly or fetches the required object from the main server or from other proxies
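The fully distributed lookup path can be sketched as a three-step fallback; this is only an illustration of the idea in the slide, with plain dicts standing in for caches (an assumption of the sketch).

```python
def serve(request, local_cache, peer_caches, origin):
    """Illustrative sketch: the nearest proxy serves from its own cache,
    then asks peer proxies, and only then falls back to the main server.
    Objects fetched remotely are kept locally for future requests."""
    if request in local_cache:
        return local_cache[request], "local"
    for peer in peer_caches:
        if request in peer:
            obj = peer[request]
            local_cache[request] = obj   # keep a copy for next time
            return obj, "peer"
    obj = origin[request]
    local_cache[request] = obj
    return obj, "origin"

local = {}
peers = [{"clip1": b"\x01"}]
origin = {"clip1": b"\x01", "clip2": b"\x02"}
obj, src = serve("clip1", local, peers, origin)   # first hit comes from a peer
```

A second request for the same clip is then answered locally, and an object no proxy holds falls through to the main server.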
Prefetching
- Just a brief overview: trying to anticipate the future requests from the previous logs
- The work of Zhong Su et al.: "A Prediction System for Multimedia Prefetching in Internet", 2000
Prefetching (Cont)
- Keep logs of the users' requests
- Predict the probability of a particular request appearing among the next m requests, based on the previous n requests; call it the m-step n-gram model
- IDEA: different AI prediction methods can be used to achieve better results
Prefetching (Cont)
Example (3-Gram, 2-Step). Log files:
A, B, C, J, E
A, B, C, E, F
A, B, C, E, F
B, C, D, K, A
B, C, D, K, B
B, C, D, F, L
Prefetching (Cont)
N-Gram Prediction. Probability model derived from the log files:
A, B, C -> E (100%)
B, C, D -> K (66%)
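Building this model is a simple counting exercise; the sketch below (my own reconstruction of the m-step n-gram idea, not Su et al.'s code) reproduces the two predictions from the slide: after the context A, B, C, item E appears within the next 2 steps in all three matching logs, and after B, C, D, item K appears in two of three.

```python
from collections import defaultdict

def ngram_model(logs, n=3, m=2):
    """m-step n-gram table: for every length-n context seen in the logs,
    estimate the probability that each item appears among the next m
    requests following that context."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for log in logs:
        for i in range(len(log) - n):
            context = tuple(log[i:i + n])
            totals[context] += 1
            # Count each item at most once per occurrence of the context.
            for item in set(log[i + n:i + n + m]):
                counts[context][item] += 1
    return {ctx: {item: c / totals[ctx] for item, c in items.items()}
            for ctx, items in counts.items()}

logs = [
    ["A", "B", "C", "J", "E"],
    ["A", "B", "C", "E", "F"],
    ["A", "B", "C", "E", "F"],
    ["B", "C", "D", "K", "A"],
    ["B", "C", "D", "K", "B"],
    ["B", "C", "D", "F", "L"],
]
model = ngram_model(logs)   # model[("A","B","C")]["E"] == 1.0, K after B,C,D is 2/3
```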
Any Questions?
Thanks
References: [1] J. Liu, and J. Xu, “A Survey of Streaming Media Caching”, Department of Computer
Science The Chinese University of Hong Kong. [2] S. Sen, J. Rexford, and D. Towsley, “Proxy prefix caching for multimedia streams,” in Proc. IEEE
INFOCOM’99, New York, NY, Mar. 1999. [3] S. Chen, B. Shen, S. Wee, and X. Zhang, “Adaptive and lazy segmentation based proxy caching for
streaming media delivery,” Proc. NOSSDAV’03, Monterey, CA, June 2003. [4] R. Tewari, H. M. Vin, A. Dan, and D. Sitaram, “Resource-based caching for Web servers,” in Proc.
SPIE/ACM Conf. on Multimedia Computing and Networking (MMCN'98), San Jose, CA, Jan. 1998. [5] J. M. Almeida, D. L. Eager, and M. K. Vernon, “A hybrid caching strategy for streaming media files,” in
Proc. Multimedia Computing and Networking (MMCN’01), San Jose, CA., Jan. 2001. [6] J. Kangasharju, F. Hartanto, M. Reisslein, and K. W. Ross, “Distributing layered encoded video
through caches,” IEEE Trans. Computers, 51(6), pp. 622-636, June 2002. [7] R. Rejaie, H. Yu, M. Handley, and D. Estrin, “Multimedia proxy caching mechanism for quality adaptive
streaming applications in the Internet,” in Proc. IEEE INFOCOM’00, Tel Aviv, Israel, Mar. 2000. [8] J. Liu, X. Chu, and J. Xu, “Proxy Cache Management for Fine-Grained Scalable Video Streaming,”
Proc. IEEE INFOCOM'04, Hong Kong, Mar. 2004. [9] S. Podlipnig, L. Boszormenyi, “A Survey of Web Cache Replacement Strategies”, ACM Computer
Science Surveys, December 2003. [10] X. Tang, F. Zhang, S. T. Chanson, “Streaming Media Caching Algorithms for Transcoding Proxies”,
ACM Proceedings of the International Conference on Parallel Processing, 2002. [11] S. Acharya and B. C. Smith, “Middleman: A video caching proxy server,” in Proc. NOSSDAV’00, June
2000. [12] K. C. Tsui, J. Liu, M. J. Kaiser, “Self-Organized Load Balancing in Proxy Servers: Algorithms and
Performance”, ACM Journal of Intelligent Information Systems, Volume 20, Issue 1, January 2003. [13] Brian D. Davison. (2004) Learning Web request patterns. [14] Z. Su, Q. Yang, H. Zhang, “A Prediction System for Multimedia Pre-fetching in Internet”, Proceedings
of the eighth ACM international conference on Multimedia, 2000.