
A Wavelet based Lossless Video Compression using Adaptive Prediction and Motion Approximation

S.Jeyakumar 1 and S.Sundaravadivelu 2

1 Assistant Professor in IT, Dr. Sivanthi Aditanar College of Engineering, Tiruchendur, India
2 Professor in ECE, SSN College of Engineering, Chennai, India

[email protected], [email protected]

Abstract
There has been great interest in developing efficient video compression techniques due to the increased number of multimedia applications in today's era of digital technology. In this paper, we present a technique for lossless video sequence compression that uses predictive coding to improve compression efficiency. The proposed algorithm is an adaptive method that switches between prediction in the spatial domain and prediction in the wavelet domain by estimating inter-frame motion similarity using correlation. Since motion compensation in the wavelet domain is very inefficient for high motion video sequences, owing to the shift-variant property of the wavelet coefficients, motion compensation is applied directly in the spatial domain for high motion sequences. Frames that are highly similar to their predecessors, with very few motion changes, are coded using temporal prediction in the integer wavelet domain. Experimental results show that the proposed method has better compression performance and is well suited for lossless compression of high quality video frames.

Keywords: Lossless compression, Motion vector, Temporal redundancy, Prediction residual, Entropy coding.

1. Introduction
Compression of video is ubiquitous in today's multimedia and Internet world. The increasing demand to incorporate video data into telecommunications services, multimedia applications, the corporate environment, the entertainment industry, and even the home has made digital video technology a necessity. Video images are moving pictures sampled at regular intervals, usually 25 frames per second, and stored as a sequence of frames. A problem, however, is that digital video data rates are very large, typically in the range of 150 Megabits/second. Data rates of this magnitude would consume much of the available bandwidth, storage and computing resources of a typical personal computer. Hence, to overcome these issues, video compression standards have been developed and intensive research is going on to derive effective techniques that eliminate picture redundancy, allowing video information to be transmitted and stored in a compact and efficient manner [6]. A video image consists of a time-ordered sequence of frames of still images, as in Figure 1. Generally, two types of image frames are defined: Intra-frames (I-frames) and Inter-frames (P-frames). I-frames are treated as independent key images and P-frames are treated as predicted frames. An obvious solution to video compression would be predictive coding of P-frames based on previous frames, where compression is achieved by coding the residual error [3], [13], [17].

Figure 1 Structure of video image

Temporal redundancy removal is included in P-frame coding, whereas I-frame coding performs only spatial redundancy removal [3]. The rest of the paper is organized as follows: a survey of related work is presented in Section 2; Section 3 describes the proposed adaptive method; Section 4 presents the algorithm of the proposed technique; experimental results and discussion are given in Section 5; and finally, conclusions are drawn in Section 6.

2. Previous Related Works
Lossless video coding is useful for many applications, such as telemedicine and high quality multimedia systems [9]. Most video compression techniques that exploit temporal redundancy are lossy. Lossless image compression algorithms allow the exact original image to be reconstructed from the compressed one. This can be contrasted with lossy data compression, which does not allow the exact original data to be reconstructed from the compressed data. Lossy methods accept some loss of data in order to achieve higher compression, but lossless compression is used when it is important that the original and the decompressed data be identical; medical images, for example, are frequently transmitted using lossless compression [17]. In [11], Memon and Sayood presented a lossless compression scheme that adaptively exploits spatial, temporal and spectral correlations. The authors also investigated a hybrid prediction scheme that switches between temporal prediction based on block motion compensation and spectral prediction, which used the best predictor from one spectral plane on another. However, this method has a high computational cost. Brunello et al. [1] introduced a temporal prediction technique based on block motion compensation and an optimal 3-dimensional linear prediction algorithm. In their scheme, based on motion information, the pixel to be coded is predicted by a linear combination of neighboring pixels in the current and reference frames. Ming et al. [13] developed an adaptive combination of a spatial predictor, as in CALIC [12], and a temporal predictor based on block motion compensation. Gong et al. [5] proposed a wavelet-based lossless video coding algorithm. Noting that compression in the wavelet domain is efficient, but motion compensation in the wavelet domain can be very inefficient for high motion video sequences, block motion compensation was first performed in the spatial domain, and then the wavelet coefficients of the prediction residuals were coded and transmitted. Ying et al. [18] proposed an enhanced adaptive pixel-based predictor which exploits the motion information among adjacent frames using extremely low side information. However, since the prediction is pixel based, the computational complexity is very high, which may not be suitable for online real time applications. In this work, the motion vectors are adaptively estimated in the spatial and wavelet domains based on inter-frame similarity measured by correlation. The motion estimation is block based with a reduced search. Also, as in [10], only the motion vector residuals are encoded using neighboring information, and hence the side information is reduced significantly.

3. The Proposed Work
The objective of this work is to study the relationship between the operational domains for prediction, according to the temporal redundancy between the frames to be encoded. Based on the motion characteristics of the inter frames, the system adaptively selects the spatial or wavelet domain for prediction. The work also develops a temporal predictor which exploits the motion information among adjacent frames using extremely low side information. The proposed temporal predictor works without requiring transmission of the complete motion vector set, so much of the overhead is removed by the omission of the motion vectors. The block diagram of the proposed method is shown in Figure 2.

Figure 2 Block diagram of proposed method (Source Frames -> Correlate Frame i and Frame i-1 -> Adaptive Prediction Selection -> Prediction in spatial domain or Prediction in wavelet domain -> Code the Prediction Residuals -> Compressed Bit Stream)

3.1. Adaptive Domain Selection
This step determines the operational mode of video sequence compression according to its motion characteristics. The candidate operational modes are the spatial domain and the wavelet domain. The wavelet domain is extensively used for compression due to its excellent energy compaction. However, Gong et al. [5] pointed out that motion estimation in the wavelet domain can be inefficient due to the shift-variant property of the wavelet transform. Hence, it is unwise to predict all kinds of video sequences in the spatial domain alone or in the wavelet domain alone, and a method is introduced to determine the prediction mode of a video sequence adaptively according to its temporal redundancy. The amount of temporal redundancy is estimated by the inter-frame correlation coefficients of the test video sequence [18]. The inter-frame correlation coefficient between frames is calculated by (1). If the inter-frame correlation coefficient is smaller than a predefined threshold, the sequence is likely to be a high motion video sequence; in this case, motion compensation and coding of the temporal prediction residuals in the wavelet domain would be inefficient, so it is better to operate on the sequence in the spatial mode. Sequences with smaller inter-frame correlation coefficients are therefore predicted directly in the spatial domain, while frames that are highly similar to their predecessors, with very few motion changes, are coded using temporal prediction in the integer wavelet domain.

C_{i,i-1} = \frac{\sum_x \sum_y \left(f_i(x,y) - \bar{f}_i\right)\left(f_{i-1}(x,y) - \bar{f}_{i-1}\right)}{\sqrt{\sum_x \sum_y \left(f_i(x,y) - \bar{f}_i\right)^2 \, \sum_x \sum_y \left(f_{i-1}(x,y) - \bar{f}_{i-1}\right)^2}}    (1)

where f_i(x,y) is the pixel value of frame i at position (x,y) and \bar{f}_i is the mean pixel value of frame i.

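To make the selection rule concrete, the following Python sketch (an illustration, not the authors' IDL implementation) computes the inter-frame correlation coefficient of equation (1) and selects the prediction domain; it assumes the frames are 8-bit grayscale numpy arrays and uses the 0.99 threshold reported in Section 5. The names interframe_correlation and select_prediction_domain are illustrative.

import numpy as np

def interframe_correlation(frame_a, frame_b):
    # Correlation coefficient between two frames, as in equation (1).
    a = frame_a.astype(np.float64).ravel()
    b = frame_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # Two constant frames are treated as fully correlated.
    return float((a * b).sum() / denom) if denom > 0 else 1.0

def select_prediction_domain(prev_frame, curr_frame, threshold=0.99):
    # High correlation -> low motion -> predict in the wavelet domain;
    # low correlation -> high motion -> predict in the spatial domain.
    c = interframe_correlation(prev_frame, curr_frame)
    return "wavelet" if c >= threshold else "spatial"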


3.2 Discrete Wavelet Transform
The Discrete Wavelet Transform (DWT) is the most popular transform for image-based applications [14], [16], [18]. A 2-dimensional wavelet transform is applied to the original image in order to decompose it into a series of filtered sub band images. The top left of the decomposition is a low-pass filtered version of the original and, moving towards the bottom right, each component contains progressively higher-frequency information that adds detail to the image. The higher-frequency components are relatively sparse, i.e., many of the coefficients in these components are zero or insignificant. The wavelet transform is thus an efficient way of decorrelating the image and concentrating the important information into a few significant coefficients. The wavelet transform is particularly effective for still image compression and has been adopted as part of the JPEG 2000 standard [8] and for still image texture coding in the MPEG-4 standard. Figure 3 shows the DWT sub bands of a three level multi-resolution decomposition.

Figure 3 DWT Sub bands (three level decomposition: LL3, HL3, LH3, HH3 at the coarsest level; HL2, LH2, HH2 at the intermediate level; HL1, LH1, HH1 at the finest level)

The Haar wavelet is the first known wavelet and was proposed in 1909 by Alfréd Haar. It is the simplest possible wavelet, with filter coefficients [0.707, 0.707]. The S transform is the integer version of the Haar transform [2]; it has the lowest computational complexity and performs reasonably well for both lossy and lossless compression. The forward S transform equations are given in (2).

h(i) = x(2i+1) - x(2i)
l(i) = x(2i) + \lfloor h(i)/2 \rfloor    (2)

where x(i) is the input signal, h(i) is the high-frequency sub-band signal and l(i) is the low-frequency sub-band signal.
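As an illustration of equation (2), the following Python sketch implements the forward and inverse S transform and its separable application to one decomposition level of a frame. It assumes even-length rows and columns and numpy integer arrays; the function names are illustrative, not taken from the paper.

import numpy as np

def s_forward(x):
    # 1-D forward S transform: h(i) = x(2i+1) - x(2i), l(i) = x(2i) + floor(h(i)/2)
    x = x.astype(np.int64)
    h = x[1::2] - x[0::2]
    l = x[0::2] + np.floor_divide(h, 2)
    return l, h

def s_inverse(l, h):
    # Exact integer reconstruction: x(2i) = l(i) - floor(h(i)/2), x(2i+1) = x(2i) + h(i)
    x = np.empty(2 * l.size, dtype=np.int64)
    x[0::2] = l - np.floor_divide(h, 2)
    x[1::2] = x[0::2] + h
    return x

def s_transform_2d(frame):
    # One separable decomposition level: rows first, then columns of each row band.
    f = frame.astype(np.int64)
    hr = f[:, 1::2] - f[:, 0::2]                 # row-wise high band
    lr = f[:, 0::2] + np.floor_divide(hr, 2)     # row-wise low band
    def columns(band):
        h = band[1::2, :] - band[0::2, :]
        l = band[0::2, :] + np.floor_divide(h, 2)
        return l, h
    ll, lh = columns(lr)                         # approximation and vertical detail
    hl, hh = columns(hr)                         # horizontal and diagonal detail
    return ll, hl, lh, hh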

3.3 Temporal Residual Prediction
Motion estimation obtains the motion information by finding the motion field between the reference frame and the current frame. It exploits the temporal redundancy of the video sequence and, as a result, the required storage or transmission bandwidth is reduced by a factor of four. Block matching is one of the most popular, and most time-consuming, methods of motion estimation [15]. This method compares blocks of each frame with the blocks of its next frame to compute a motion vector for each block; the next frame can therefore be generated using the current frame and the motion vectors of its blocks. The block matching algorithm is one of the simplest motion estimation techniques: it compares one block of the current frame with all of the blocks of the next frame to decide where the matching block is located [4]. Considering the number of computations that have to be done for each motion vector, each frame of the video is partitioned into search windows of size H*W pixels. Each search window is then divided into smaller macro blocks of size 8*8 or 16*16 pixels. To calculate the motion vectors, each block of the current frame is compared with all of the blocks of the reference frame within the search range, and the Mean Absolute Difference (MAD) for each candidate block is calculated using equation (3):

MAD(m,n) = \frac{1}{N^2} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \left| x(i,j) - y(i+m, j+n) \right|    (3)

where N*N is the block size, x(i,j) is the pixel value of the current frame at position (i,j) and y(i+m,j+n) is the pixel value of the reference frame at position (i+m,j+n).

Figure 4 Inter frame target window mapping

The block with the minimum value of the Mean Absolute Difference (MAD) is the preferred matching block, and the location of that block gives the motion displacement vector for that block of the current frame. The best motion vector for the target window, with the minimum MAD, is determined by

(m_0, n_0) = \arg\min_{(m,n)} MAD(m,n)    (4)

where (m_0, n_0) indicates the motion displacement of the target window T_w. The temporal predictor of pixel p_i(x,y) can then be obtained by

\hat{p}_T(x,y) = p_{i-1}(x + m_0, y + n_0)    (5)

and the temporal prediction residual is

e(x,y) = p_i(x,y) - \hat{p}_T(x,y)    (6)
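The following Python sketch illustrates this step: a full search over a +/-W window using the MAD criterion of equation (3), followed by the temporal prediction residual of equations (5) and (6). The block size, search range and function names are assumptions made for the sketch, not values prescribed by the paper.

import numpy as np

def mad(block, candidate):
    # Mean absolute difference between two equally sized blocks, equation (3).
    return float(np.mean(np.abs(block.astype(np.int64) - candidate.astype(np.int64))))

def best_motion_vector(curr, ref, top, left, N=16, W=7):
    # Full search for the (m0, n0) minimising MAD for the N x N block at (top, left).
    block = curr[top:top + N, left:left + N]
    best, best_mv = np.inf, (0, 0)
    for m in range(-W, W + 1):
        for n in range(-W, W + 1):
            r, c = top + m, left + n
            if r < 0 or c < 0 or r + N > ref.shape[0] or c + N > ref.shape[1]:
                continue                       # skip candidates outside the frame
            d = mad(block, ref[r:r + N, c:c + N])
            if d < best:
                best, best_mv = d, (m, n)
    return best_mv

def temporal_residual(curr, ref, top, left, mv, N=16):
    # Residual e = p_i(x, y) - p_{i-1}(x + m0, y + n0), equations (5)-(6).
    m0, n0 = mv
    pred = ref[top + m0:top + m0 + N, left + n0:left + n0 + N]
    return curr[top:top + N, left:left + N].astype(np.int64) - pred.astype(np.int64)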

The motion activities of the neighboring pixels of a specific frame are different but highly correlated, since they usually characterize very similar motion structures [18]. Therefore, the motion information of pixel p_i(x,y) can be approximated from the neighboring pixels in the same frame. The initial motion vector (Vx, Vy) of the current pixel is approximated by the motion activity of the upper-left neighboring pixels in the same frame, as in Figure 5.

Figure 5 Template of motion approximation (causal neighbors m(x-1,y-1), m(x,y-1), m(x+1,y-1) and m(x-1,y) surrounding the current position (Vx,Vy))

(Vx,Vy) is the average motion vector of the four past neighboring symbols, calculated by

V_x = \left\lfloor \frac{m_o(x-1,y) + m_o(x,y-1) + m_o(x-1,y-1) + m_o(x+1,y-1)}{4} \right\rfloor

V_y = \left\lfloor \frac{n_o(x-1,y) + n_o(x,y-1) + n_o(x-1,y-1) + n_o(x+1,y-1)}{4} \right\rfloor    (7)

where [m_o(x-1,y), n_o(x-1,y)] is the motion vector of symbol p_i(x-1,y), and similarly for [m_o(x,y-1), n_o(x,y-1)], [m_o(x-1,y-1), n_o(x-1,y-1)] and [m_o(x+1,y-1), n_o(x+1,y-1)]. The computation uses only past information, which is available at both the encoder side and the decoder side. Since the motion activity of the target window between frame[i] and frame[i-1] can be perfectly reconstructed at the decoder using this motion approximation, there is no requirement to transmit the complete set of motion vectors (m_o, n_o) to the decoder.
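A small Python sketch of this approximation is given below: the motion vector of the current position is predicted as the floored average of its four causal neighbours, as in equation (7), and only the residual against the actual vector needs to be encoded. Representing the motion field as a dictionary keyed by block position is an illustrative choice, not something specified in the paper.

def approximate_mv(mv_field, x, y):
    # Floored average of the four causal neighbours, equation (7).
    neighbours = [(x - 1, y), (x, y - 1), (x - 1, y - 1), (x + 1, y - 1)]
    ms = [mv_field.get(p, (0, 0))[0] for p in neighbours]   # missing neighbours default to (0, 0)
    ns = [mv_field.get(p, (0, 0))[1] for p in neighbours]
    return sum(ms) // 4, sum(ns) // 4                       # (Vx, Vy)

def mv_residual(actual_mv, mv_field, x, y):
    # Only this residual is transmitted; the decoder repeats approximate_mv.
    vx, vy = approximate_mv(mv_field, x, y)
    return actual_mv[0] - vx, actual_mv[1] - vy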

3.4 Coding the Prediction Residual
The temporal prediction residuals from the adaptive prediction are encoded using Huffman codes. Huffman coding is a data compression method that uses a variable length code instead of a fixed length code, with fewer bits to store the common symbols and more bits to store the rare ones: frequently occurring symbols are assigned short codes and symbols with low frequency are coded using more bits. The Huffman code can be constructed using a tree. The probability of each intensity level is computed and a column of intensity levels with descending probabilities is created; the intensities of this column constitute the leaves of the Huffman code tree. At each step the two tree nodes having minimal probabilities are connected to form an intermediate node, and the probability assigned to this node is the sum of the probabilities of the two branches. The procedure is repeated until all branches are used and the probability sum is 1. Each edge in the binary tree represents either 0 or 1, and each leaf corresponds to the sequence of 0s and 1s traversed to reach it. Since no codeword is the prefix of another, all legal codes are at the leaves, and decoding a string means following edges, according to the sequence of 0s and 1s in the string, until a leaf is reached. The code words are constructed by traversing the tree from the root to its leaves, assigning 0 to the top branch and 1 to the bottom branch at each level. This procedure is repeated until all the tree leaves are reached. Each leaf corresponds to a unique intensity level, and the codeword for each intensity level consists of the 0s and 1s on the path from the root to that leaf.
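A minimal Python sketch of this construction, using a heap of partial trees, is shown below; it is a generic Huffman coder for the residual symbols, not the authors' implementation, and the final two lines are a small usage example on a hypothetical residual block.

import heapq
from collections import Counter

def huffman_code(symbols):
    # Build a prefix-free table symbol -> bit string from symbol frequencies.
    freq = Counter(symbols)
    if len(freq) == 1:                         # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie breaker, partial code table for that subtree).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)        # the two least probable subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

residuals = [0, 0, 1, -1, 0, 2, 0, -1]
table = huffman_code(residuals)
bitstream = "".join(table[r] for r in residuals)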

4. Algorithm for Adaptive Prediction
Begin
  Read image Frame i and Frame i-1
  Correlate Frame i and Frame i-1
  If Correlation Coefficient > 0.99 then
    Apply S wavelet transform to each frame
  Divide the frames into macro blocks of equal size m x n
  For each block in Frame i
    Set search range as -W to +W in Frame i-1
    Map the block of Frame i with the respective non-overlapping blocks of Frame i-1
    Calculate the minimum absolute difference (MAD)
    Motion vector = (x,y) position of the block whose MAD is minimum
    Predict the motion vector of (x,y) using the motion vectors of the left-upper
      neighboring pixels (x-1,y), (x-1,y-1), (x,y-1), (x+1,y-1)
    Calculate MV residual = Actual MV - Predicted MV
    Calculate Frame residual = Actual pixel value - Predicted pixel value
    Encode the error residuals
  End for
End

5. Experimental Results and Discussion
The proposed algorithm has been implemented in Interactive Data Language (IDL) for the standard test video sequences Jane and Tennis. IDL is well suited to implementing image processing algorithms as it is an array oriented language.

5.1 Experimental Output
Figure 6 shows a snapshot of the execution of the software developed.

Figure 6 Snapshot of software developed

Figure 7 shows a result sample of the multi-resolution wavelet transform for frame 2 of the Jane sequence. The prediction results for frames 1 and 5 of the Jane sequence are shown in Figures 8 and 9 respectively. Since frame 0 is the start frame, it is treated as an independent I-frame and compressed using JPEG-LS. Frame 1 has high motion changes relative to frame 0 and hence it is compressed using the temporal predictor in the spatial domain. Frame 5 has more similarity with frame 4, and hence its motion estimation is done in the wavelet domain.

Figure 7 Result sample of 5/3 Wavelet


Figure 8 Prediction for frame 1 in spatial domain (a. Frame 0 and b. Reconstructed Frame 1)


Figure 9 Prediction for frame 5 in wavelet domain (a. Frame 4 and b. Reconstructed Frame 5)

The prediction results for frames 1 and 3 of the Tennis sequence are shown in Figures 10 and 11 respectively. Frame 1 has high motion characteristics relative to frame 0 and hence its motion vectors are predicted and the residuals are compressed in the spatial domain. Frame 3 has more similarity with frame 2 and its motion estimation is done in the wavelet domain.

Figure 10 Prediction for Tennis frame 1 in spatial domain (a. Frame 0 and b. Reconstructed frame 1)

Figure 11 Prediction for Tennis frame 3 in wavelet domain (a. Frame 2 and b. Reconstructed frame 3)

5.2 Compression Ratio
To analyze the results of our proposed work, the Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR) parameters are used. The Compression Ratio (CR) is the ratio between the number of bits required to store the image before compression (I) and the number of bits required to store it after compression (O).

CR = I / O    (8)

Table 1 lists the motion characteristics of each frame relative to its previous frame, its prediction mode and the compression ratio for the Jane frame sequence (frames 1-9). The prediction threshold is fixed at 0.99. Frames with a correlation coefficient of 0.99 and above are considered to have low motion features and their prediction domain is the wavelet domain. Frames with a correlation coefficient below 0.99 are high motion frames and hence their prediction is done directly in the spatial domain. Similarly, Table 2 lists the motion characteristics of each frame relative to its previous frame, its prediction mode and the compression ratio for the Tennis frame sequence (frames 1-9), with the prediction threshold again fixed at 0.99.

Table 1 Compression ratio for test video sequence (Jane frames 1-9)

Frame   Correlation coefficient   Prediction domain   Compression ratio
1       0.956559                  Spatial             7.23
2       0.996758                  Wavelet             7.72
3       0.996435                  Wavelet             7.74
4       0.953795                  Spatial             7.21
5       0.996265                  Wavelet             7.63
6       0.963063                  Spatial             7.26
7       0.992512                  Wavelet             7.75
8       0.991062                  Wavelet             7.73
9       0.954371                  Spatial             7.25

Table 2 Compression ratio for test video sequence (Tennis frames 1-9)

Frame   Correlation coefficient   Prediction domain   Compression ratio
1       0.983613                  Spatial             6.86
2       0.985742                  Spatial             6.73
3       0.994431                  Wavelet             7.21
4       0.997359                  Wavelet             7.25
5       0.995662                  Wavelet             7.33
6       0.992310                  Wavelet             7.36
7       0.971542                  Spatial             6.85
8       0.981276                  Spatial             6.62
9       0.975463                  Spatial             6.32

Figure 12 shows the compression ratio of each frame of the Jane and Tennis sequences (frames 1-9). It is evident that temporal prediction in the wavelet domain gives a higher compression ratio than spatial prediction. However, spatial prediction is faster, and motion estimation in the wavelet domain is inefficient for high motion frames due to the shift-variant property of the wavelet transform.


Figure 12 Compression rate for adaptive prediction (compression ratio, from 6.0 to 8.0, plotted against frame number 1-9 for the Jane and Tennis sequences)

5.3 Peak Signal to Noise Ratio
To analyze the quality of the proposed adaptive prediction method, the Peak Signal to Noise Ratio (PSNR) between the original and reconstructed frames is calculated by

PSNR = 10 \log_{10}\left(\frac{255^2}{mse}\right)    (9)

where the Mean Square Error (mse) is

mse = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left(y_{i,j} - x_{i,j}\right)^2    (10)

In (10), m and n denote the number of rows and columns in the image, y_{i,j} is the decompressed image at location (i,j) and x_{i,j} is the original image at location (i,j).
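The two metrics can be computed directly from equations (9) and (10); the short Python sketch below assumes 8-bit frames stored as numpy arrays and is provided only as an illustration.

import numpy as np

def mse(original, reconstructed):
    # Mean square error of equation (10).
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed):
    # PSNR of equation (9) for 8-bit data; identical frames give infinite PSNR.
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * np.log10(255.0 ** 2 / e)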

Table 3 shows the possible combinations of spatial and wavelet domains for motion estimation in the case of low motion frames and frames with high motion characteristics.

Table 3 Combination of domains for prediction

Possible combination   Low motion frame   High motion frame
C1 Spatial             Spatial            Spatial
C2 Wavelet             Wavelet            Wavelet
C3 Adaptive            Wavelet            Spatial

Table 4 PSNR values of reconstructed Jane frames

Frame   C1      C2      Proposed C3
1       33.75   33.40   33.75
2       33.14   33.65   33.65
3       32.69   33.91   33.91
4       32.82   32.54   32.82
5       29.57   31.62   31.62
6       31.65   31.29   31.65
7       29.05   33.48   33.48
8       31.16   32.64   32.64
9       33.24   32.21   33.24

Table 5 PSNR values of reconstructed Tennis frames

Frame   C1      C2      Proposed C3
1       33.34   32.73   33.34
2       33.38   33.25   33.38
3       32.16   33.85   33.85
4       32.08   32.84   32.84
5       31.97   33.26   33.26
6       32.16   33.12   33.12
7       33.05   32.34   33.05
8       33.26   32.72   33.26
9       33.49   32.81   33.49

Table 4 and Table 5 give a performance comparison, in terms of the quality parameter PSNR, of the proposed adaptive method with the existing spatial-only and wavelet-only approaches. The PSNR values in the C3 column of Table 4 and Table 5 for our various input samples show that temporal prediction using the proposed adaptive method is robust and can be well adopted for lossless video sequence compression.

6. Conclusion
An adaptive method for lossless coding of video sequences is implemented in this work. It exploits the amount of temporal redundancy and adaptively selects the best prediction mode based on motion information. Successive frames are first correlated; temporal prediction in the wavelet domain is applied to frames that have few motion changes (high inter-frame correlation), while frames with low correlation (high motion) are predicted in the spatial domain. The motion vectors estimated in this way are further approximated from neighboring information, and only the residuals of the motion vector predictor are transmitted as side information for reconstruction. Hence this method gives good prediction and uses little side information. Experimental results show that the frames reconstructed by adaptive prediction are of good quality. Hence the proposed algorithm has better compression performance and is well suited to lossless compression of high quality video sequences.

7. References
[1] D. Brunello, G. Calvagno, G. A. Mian, and R. Rinaldo, Lossless compression of video using temporal information, IEEE Trans. Image Process., vol. 12, no. 2, pp. 132–139, Feb. 2003.

[2] A. R. Calderbank, I. Daubechies, W. Sweldens, and B. L. Yeo, Wavelet transforms that map integers to integers, Appl. Comput. Harmon. Anal., vol. 5, no. 3, pp. 332–369, Jul. 1998.

[3] E. S. G. Carotti, J. C. De Martin, and A. R. Meo, Low complexity lossless video coding via spatio-temporal prediction, in Proc. Int. Conf. Image Processing, Sep. 2003, vol. 2, pp. 197–200.

[4] Chitra A. Dhawale and Sanjeev Jain, Motion Compensated Video Shot Detection using Multiple Feature Experts, ICGST-GVIP Journal, ISSN 1687-398X, Volume (8), Issue (V), January 2009, pp. 1-11.

[5] Y. Gong, S. Pullalarevu, and S. Sheikh, A wavelet-based lossless video coding scheme, in Proc. Int. Conf. Signal Processing, 2004, pp. 1123–1126.

[6] Iain E. G. Richardson, H.264 and MPEG-4 Video Compression, John Wiley & Sons, September 2003.


[7] Y.-L. Lee and H. W. Park, Loop filtering and post-filtering for low-bit rates moving picture coding, Signal Processing: Image Communication, vol. 16, pp. 871–890, 2001.

[8] Marc Antonini, Michel Barlaud, Pierre Mathieu and Ingrid Daubechies, Image coding using wavelet transform, IEEE Transactions on Image Processing, vol. 1, no. 2, April 1992.

[9] B. Martins and S. Forchhammer, Lossless compression of video using motion compensation, in Proc. IEEE DCC, 1998, pp. 560–589.

[10] S. A. Martucci, Reversible Compression of HDTV Images using Median Adaptive Prediction and Arithmetic Coding, in Proc. IEEE Int. Symposium on Circuits and Systems, pp. 1310-1313, IEEE Press, New York, 1990.

[11] N. D. Memon and K. Sayood, Lossless compression of video sequences, IEEE Trans. Commun., vol. 44, no. 10, pp. 1340–1345, Oct. 1996.

[12] N. Memon and X. Wu, Context-based, adaptive, lossless image coding, IEEE Trans. Commun., vol. 45, no. 4, pp. 437–444, Apr. 1997.

[13] Z. Ming-Feng, H. Jia, and Z. Li-Ming, Lossless video compression using combination of temporal and spatial prediction, in Proc. IEEE Int. Conf. Neural Networks and Signal Processing, Dec. 2003, pp. 1193–1196.

[14] S.-G. Park, E. J. Delp, and H. Yu, Adaptive lossless video compression using an integer wavelet transform, in Proc. Int. Conf. Image Processing, 2004, pp. 2251–2254.

[15] G. Sreelekha, T. Rajesh, P. S. Sathidevi, A New Perceptual Video Coder Incorporating Human Visual System Model, ICGST-GVIP Journal, ISSN 1687-398X, Volume (8), Issue (III), October 2008.

[16] R. Sudhakar, R. Karthiga, S. Jayaraman, Image Compression using Coding of Wavelet Coefficients – A Survey, ICGST-GVIP Journal, Volume (5), Issue (6), June 2005, pp. 25-38.

[17] K. H. Yang and A. F. Faryar, A context-based predictive coder for lossless and near lossless compression of video, in Proc. Int. Conf. Image Processing, Sep. 2000, vol. 1, pp. 144–147.

[18] Ying Li and Khalid Sayood, Lossless Video Sequence Compression Using Adaptive Prediction, vol. 16, no. 4, Apr. 2007.

S. Jeyakumar is working as Assistant Professor in Information Technology at Dr. Sivanthi Aditanar College of Engineering, Tiruchendur, India. He is presently pursuing a PhD programme at Anna University, Chennai. His research areas include video compression, optical image

processing and parallel image processing.

Dr. S. Sundaravadivelu is presently working as Professor in ECE at SSN College of Engineering, Chennai, India. Earlier, he worked at Thiagarajar College of Engineering, Madurai. He has over 25 years of teaching and research experience. His research interests include optical signal processing, digital image processing and parallel computing.
