Impact analysis of digital watermarking on perceptual quality using HVS models

Qi, Pei
Graduate student in Electrical & Computer Engineering, University of Wisconsin – Madison
[email protected]

Abstract

In this project, two improved perceptual quality measures are proposed. One is the weighted PSNR (WPSNR). Based on the fact that the human eye is less sensitive to modifications in textured areas than in smooth areas, WPSNR uses an additional parameter called the Noise Visibility Function (NVF), a texture masking function, as a penalization factor. The other measure is based on Watson's HVS models. In his work, Watson defines the Just Noticeable Difference (JND) as linear multiples of a noise pattern that produces a JND distortion measure. A perceptually lossless quantization matrix for the DCT transform is generated; each entry of this matrix represents the amount of quantization the corresponding coefficient can withstand without affecting the visual quality of the image.

Introduction

Digital watermarking algorithms have grown greatly in number over the past twenty years. However, besides designing these approaches, a very important and often neglected issue is how to effectively and precisely measure the perceptual quality of an image that has been watermarked. In other words, we need quality metrics to analyze the image degradation introduced by embedding watermarks. The most popular measures in the field of image coding and compression, such as MSE, SNR and PSNR, are objective and independent of subjective factors like the human visual system (HVS). This is a problem in digital watermarking, since sophisticated watermarking methods exploit the HVS in one way or another. Using the above measures to quantify the distortion caused by a watermarking process can therefore yield misleading distortion measurements.
In particular, in digital watermarking the difference between the original and watermarked images is very small, and the evaluation of image quality is significantly affected by image content. Therefore, objective measures that do not consider the HVS do not always provide reliable quality assessments. In this project, Section I briefly introduces some fundamental knowledge about the human visual system and the visual models proposed by Watson [1993]. Section II discusses the two current types of perceptual quality measures, subjective and objective assessments, and proposes two improved measures that take the HVS into account. In Section III, we evaluate the perceptual quality of different images through several simulations using three perceptual measures (PSNR, WPSNR and JND-based).

Source: homepages.cae.wisc.edu/~ece738/projs05/qi_rpt.pdf


Section I - Human Visual System Models

Overview - Human Visual System
Much work over the years has gone into understanding the human visual system and applying this knowledge to image and video applications.

A. Just Noticeable Difference
Weber's law [1]: the Difference Threshold (or "Just Noticeable Difference") is the minimum amount by which stimulus intensity must be changed in order to produce a noticeable variation in sensory experience.

Ernst Weber, a 19th-century experimental psychologist, observed that the size of the difference threshold appeared to be lawfully related to the initial stimulus magnitude. This relationship, since known as Weber's law, can be expressed as:

ΔI / I = k

where ΔI represents the difference threshold, I represents the initial stimulus intensity and k signifies that the proportion on the left side of the equation remains constant despite variations in I. More simply stated, Weber's law says that the size of the JND (i.e., ΔI) is a constant proportion of the original stimulus value. Weber's law can be applied to a variety of sensory modalities; the size of the Weber fraction varies across modalities, but within a specific modality it tends to be constant. An empirical value is k = 0.02 for a wide range of luminances. However, there are now better descriptions of the JND, and it is clear that the ratio ΔI / I is not in fact constant but depends on the adaptation level; Weber's law is a good approximation only at certain adaptation levels.

[2] The mapping function proposed by Greg Ward uses Blackwell's measurements of a briefly flashing dot on a uniform background to establish the relationship between the adaptation luminance Ia and the just noticeable difference in luminance ΔI(Ia) as:

ΔI(Ia) = 0.0594 (1.219 + Ia^0.4)^2.5
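For a quick numerical check, the two threshold rules above can be evaluated directly (a Python sketch; the function names are illustrative, with k = 0.02 as quoted above):

```python
# Quick numerical check of the two threshold rules (illustrative names).
def weber_jnd(I, k=0.02):
    """Weber's law: delta_I = k * I."""
    return k * I

def ward_jnd(Ia):
    """Ward's mapping: delta_I(Ia) = 0.0594 * (1.219 + Ia**0.4) ** 2.5."""
    return 0.0594 * (1.219 + Ia ** 0.4) ** 2.5

# Under Weber's law the ratio delta_I / I stays constant across intensities,
# while Ward's mapping lets the threshold grow with the adaptation luminance.
for I in (10.0, 100.0, 1000.0):
    assert abs(weber_jnd(I) / I - 0.02) < 1e-12
```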


This means that a patch of luminance Ia + ΔI(Ia) on a background of luminance Ia will be noticeable, but a patch of luminance Ia + ΔI', where ΔI' < ΔI(Ia), will not. The minimal perceptible difference thus depends on the background luminance; this phenomenon is referred to as luminance or contrast sensitivity. To be precise, the Weber-Ferwerda law states that "if the luminance of a test stimulus is just noticeable against the surrounding luminance, then the ratio of the luminance difference to the surrounding luminance is approximately constant". Thus, the visibility threshold for noise is larger in bright areas than in dark ones [3]. [4] Most of the early work on perceptually based image coding used the frequency sensitivity of the human visual system. As far as frequency is concerned, JND thresholds are such that changes in a particular frequency band of an image are not noticeable as long as they remain below the threshold for that band. To determine these thresholds, extensive psychovisual measurements have been performed on sinusoidal gratings with various spatial frequencies and orientations under given viewing conditions; the goal is to determine the contrast thresholds of gratings at a given frequency and orientation. Contrast, a measure of the relative variation of luminance for a periodic pattern such as a sinusoidal grating, is given by:

C = (Lmax - Lmin) / (Lmax + Lmin)

where Lmax and Lmin are the maximal and minimal luminance of the grating. The reciprocals of the contrast thresholds express the contrast sensitivity (CS), and contrast sensitivity as a function of spatial frequency defines the contrast sensitivity function (CSF), given by:

CSF(f) = 2.6 (0.0192 + 0.114 f) exp( -(0.114 f)^1.1 )

where f is the spatial frequency in cycles/degree of visual angle. The CSF curve in Fig. 1 indicates that the HVS is most sensitive to spatial frequencies between 5 and 10 cycles/degree and less sensitive to very low and very high frequencies. This fact can be used to develop a simple image-independent HVS model.
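The CSF formula can be checked numerically; the sketch below (NumPy, illustrative names) locates the peak of the curve, which falls in the 5-10 cycles/degree band mentioned above:

```python
import numpy as np

# Evaluate the CSF above and locate its peak sensitivity
# (frequencies in cycles/degree of visual angle).
def csf(f):
    """CSF(f) = 2.6 * (0.0192 + 0.114 f) * exp(-(0.114 f)^1.1)"""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

freqs = np.linspace(0.1, 60.0, 2000)
peak = freqs[np.argmax(csf(freqs))]   # falls in the 5-10 cycles/degree band
```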


Common HVS models are built from image-dependent or image-independent Just Noticeable Difference (JND) thresholds. Most models incorporate three basic properties of the HVS: frequency sensitivity (F_{u,v}), luminance sensitivity (L_{u,v,b}) and contrast masking (C_{u,v,b}).

HVS Models in the DCT domain
The choice of the frequency decomposition used in an encoder affects not only the performance of the compression system, but also how effectively visual masking can be exploited. To use visual properties effectively, the frequency decomposition should allow control over the spatial-frequency location of the quantization distortion: ideally, adding quantization distortion to one frequency coefficient should not show up in coefficients that are not adjacent to the perturbed one. The decomposition should also mimic the structure of the human visual system in order to gain the most from masking [5]. The discrete cosine transform (DCT) satisfies the criterion of controlling the frequency location of the quantization distortion, although it does not provide a good model of the human visual system's structure. Nevertheless, it is worth studying how visual models can be used in a DCT framework, since the DCT is the current building block for still-image and video encoding (compression, embedding, and so on). [6] The DCT coefficients F(u,v) of an NxN block of pixels x(i,j) are given by the following equation:

F(u,v) = C(u) C(v) Σ_{i=0..N-1} Σ_{j=0..N-1} x(i,j) cos[ (2i+1)uπ / 2N ] cos[ (2j+1)vπ / 2N ]

C(u), C(v) = sqrt(1/N)  for u, v = 0
C(u), C(v) = sqrt(2/N)  for u, v = 1, 2, 3, ..., N-1
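The transform above is separable, so it can be sketched as a matrix product F = D x D^T, where D is the N-point DCT basis matrix (a NumPy sketch; function names are illustrative):

```python
import numpy as np

def dct_matrix(N=8):
    """D[u, i] = C(u) * cos((2i+1) u pi / (2N)), with C(u) as defined above."""
    D = np.empty((N, N))
    for u in range(N):
        Cu = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        for i in range(N):
            D[u, i] = Cu * np.cos((2 * i + 1) * u * np.pi / (2 * N))
    return D

def dct2(block):
    """2-D DCT of an N x N pixel block via the separable form F = D @ x @ D.T."""
    D = dct_matrix(block.shape[0])
    return D @ block @ D.T

# A constant block has all its energy in the DC coefficient F(0, 0).
F = dct2(np.ones((8, 8)))
```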

Watson's HVS models
Visual models were first applied in the area of source coding and compression. Image compression algorithms based on the DCT need a good quantization matrix that provides optimal quality of the compressed images at a high compression ratio. The design of a quantization matrix that gives optimal quality for a given bit rate depends on the visibility of the DCT basis functions, since the error caused by quantizing a particular DCT coefficient will not be visible if it is smaller than the corresponding JND threshold.

JPEG DCT quantization - visual models applied in image compression
[7] The JPEG image compression standard provides a mechanism by which images may be compressed effectively while preserving perceptual quality. The image is first divided into blocks of size 8x8. Each block is transformed into its DCT coefficients X_{u,v,b}, where u,v index the DCT coefficient and b denotes the block in the image. Each block is then quantized by dividing it elementwise by a quantization matrix Q_{u,v} and rounding to the nearest integer.


Yu,v,b = Round [Xu,v,b / Qu,v ]
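The rounding step can be sketched as follows (illustrative names; a flat quantization matrix and random values stand in for real DCT coefficients). Writing the reconstruction error as X - Q*Y, in coefficient units, shows it is bounded by Q/2, which is what the threshold design below relies on:

```python
import numpy as np

# Sketch of the JPEG-style quantize/round step.
def quantize(X, Q):
    """Y_{u,v,b} = Round[X_{u,v,b} / Q_{u,v}]"""
    return np.round(X / Q)

rng = np.random.default_rng(0)
Q = np.full((8, 8), 16.0)                  # placeholder quantization matrix
X = rng.uniform(-200.0, 200.0, (8, 8))     # mock DCT coefficients of one block
Y = quantize(X, Q)
E = X - Q * Y                              # reconstruction error, coefficient units
assert np.all(np.abs(E) <= Q / 2)          # rounding bounds the error by Q/2
```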

The quantization error Eu,v,b in the DCT domain is then

Eu,v,b = Xu,v,b - Qu,v Yu,v,b

However, the JPEG quantization matrix is not defined by the standard. The principle that should guide its design is that it provide optimal visual quality for a given bit rate; quantization matrix design thus depends on the visibility of quantization errors at the various DCT frequencies. A straightforward design is therefore to make the maximum quantization error equal to the JND threshold (Tu,v) of the corresponding DCT coefficient. From the equations above, it is clear that the maximum possible quantization error is Qu,v / 2. Thus, to ensure that all errors are invisible (below threshold), we set

Qu,v = 2 * Tu,v

In order to compute the JND thresholds, Watson [8] first modeled three different properties of the human visual system: frequency sensitivity, luminance sensitivity and contrast masking. The corresponding JND thresholds for each DCT coefficient are T^F_{u,v}, T^L_{u,v,b} and T^C_{u,v,b}, respectively.

where f_{0,v} and f_{u,0} are the horizontal and vertical spatial frequencies respectively, the minimum threshold F_min occurs at spatial frequency f_min, K determines the steepness of the parabola, and r is a model parameter. These thresholds determine the frequency sensitivity of the human visual system, which describes the eye's sensitivity to sine-wave gratings at various frequencies. The thresholds are image-independent and represent the basic HVS model, which depends only on viewing conditions: they are measured at a constant viewing distance and background luminance, and if the background luminance changes then the thresholds should change too. A more complete perceptual model can be achieved by considering luminance sensitivity when finding the JND for each coefficient. [9] Luminance sensitivity measures the effect of the background luminance on the detectability threshold of noise. It is a nonlinear function of the local image characteristics and can be estimated as:

T^L_{u,v,b} = T^F_{u,v} * ( x(0,0,b) / X_{0,0} )^α

where x(0,0,b) is the DC coefficient for block b, X_{0,0} is the DC coefficient corresponding to the mean luminance of the display, and α is a parameter which controls the degree of luminance sensitivity.
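A sketch of how the frequency and luminance terms combine per block, together with the self-contrast-masking rule T^C = max(T^L, |X|^w (T^L)^(1-w)) discussed next, assuming Watson's suggested α = 0.649 and a constant masking exponent of 0.7 (the function name and the placeholder T^F matrix are illustrative):

```python
import numpy as np

# Sketch of Watson's per-block JND thresholds; TF is a placeholder frequency-
# sensitivity matrix (a real one would come from the half-JPEG-table
# construction used in the simulation section).
def watson_thresholds(X, TF, X00_mean, alpha=0.649, w=0.7):
    """X: 8x8 DCT block (DC assumed positive); returns (TL, TC)."""
    # Luminance sensitivity: scale TF by the block DC relative to the mean DC.
    TL = TF * (X[0, 0] / X00_mean) ** alpha
    # Self-contrast masking: TC = max(TL, |X|^w * TL^(1-w)).
    TC = np.maximum(TL, np.abs(X) ** w * TL ** (1 - w))
    return TL, TC

rng = np.random.default_rng(1)
X = rng.uniform(-50.0, 50.0, (8, 8))
X[0, 0] = 900.0                      # a positive DC coefficient
TF = np.full((8, 8), 8.0)            # placeholder frequency thresholds
TL, TC = watson_thresholds(X, TF, X00_mean=1024.0)
```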


A still more precise perceptual model can be achieved by considering contrast masking when determining the JND thresholds for each coefficient. Contrast masking refers to the decrease in visibility of one signal in the presence of another signal, called the masker. Self-contrast masking is the special case in which the masking signal has the same spatial frequency, orientation and location as the masked signal. Considering only self-contrast masking, the JND thresholds for each DCT coefficient can be evaluated as:

T^C_{u,v,b} = max( T^L_{u,v,b}, |x(u,v,b)|^{w_{u,v}} * (T^L_{u,v,b})^{1 - w_{u,v}} )

where x(u,v,b) is the value of the DCT coefficient in block b and w_{u,v} is a number between zero and one. Watermarking techniques can take full advantage of the research on visual models for image compression; in particular, perceptual coders based on the JND paradigm are ideally suited to the watermarking problem. Section III (simulation) gives a concrete example that derives JND thresholds from the existing JPEG quantization table using these HVS models and applies them as a visual threshold matrix against which the difference between the original and watermarked images is compared.

Section II - Perceptual Quality Measures
The perceptual quality of a watermarked image can be evaluated either with subjective techniques involving human observers, or with some kind of distortion or distance measure.

Subjective assessment
The most accurate tests of quality are subjective tests involving human observers. These tests were developed in psychophysics, a scientific discipline whose goal is to determine the relationship between the physical world and people's subjective experience of it. An accepted measure of the level of distortion is the Just Noticeable Difference (JND): one JND is the level of distortion that is perceived in 50% of experimental trials, i.e. the minimum distortion that is generally perceptible.

A. Two Alternative Forced Choice test (2AFC)
In this test, human observers are presented with a pair of images, one original and one watermarked, and must decide which one has higher quality. Statistical analysis of the responses indicates whether the watermark is perceptible.
For example, if the difference between the original and watermarked images is imperceptible, the responses will be random: approximately 50% of observers will select the original image as the higher-quality image, and 50% will select the watermarked image. This result can be interpreted as zero JND [10].

B. Five-Scale Rating System (ITU-R Rec. 500)


[11] Another, more general approach gives human observers more options in their answers. Instead of selecting the higher-quality image, observers are asked to rate the quality of the watermarked image on a quality scale. One example that can be used to evaluate the perceptibility of an embedded watermark is the five-scale impairment rating system recommended by ITU-R Rec. 500 and used by Bell Labs.

A quality rating depends on the level of impairment a distortion creates. The recommended scale runs from "excellent" to "bad", with the quality grades corresponding to impairment descriptions from "imperceptible" to "very annoying". Using this system, human observers rank watermarked images on these five scales, and the ranking results give a quantitative measurement of the subjective perception of the images.

Objective assessment
Subjective tests can provide a very accurate measure of the perceptibility of an embedded watermark. However, they are expensive, not easily repeatable, and cannot be automated. Ideally, therefore, some automated mechanism for assigning a numerical value to the perceived quality of an image is desired.

A. Present objective quality measures
The distortion caused by embedding watermarks can be represented as a measure of difference or distance between the original and the watermarked signal. Letting the original image be I(x,y) and the watermarked image be I'(x,y), an error function is defined as

e(x,y) = I(x,y) - I'(x,y)

The error e(x,y) shows how close the watermarked image is to the original. If e(x,y) is zero everywhere, no distortion was introduced by embedding the watermark; the larger e(x,y) is, the more perceptible the distortion. One of the simplest distortion measures is the Mean Square Error (MSE, or E_ms):

MSE = E_ms = (1 / MN) Σ_{x=1..M} Σ_{y=1..N} e(x,y)^2

where the image size is M by N and the sum runs over all locations in the image; the MSE is thus the mean of the squared error values across the entire image. The signal-to-noise ratio (SNR) is another popular objective measure; it has units of decibels (dB). The larger the SNR, the better the quality of the watermarked image, i.e. the closer the watermarked image I'(x,y) is to the original image I(x,y). It is defined as

SNR(dB) = 10 log10( [ Σ_{x=1..M} Σ_{y=1..N} I'(x,y)^2 ] / (MN * E_ms) )
This is a ratio between the signal power, measured as the sum of squared intensities of the image, and the noise power, measured as the MSE of the error. A more widely used variant of the SNR is the peak SNR (PSNR), essentially a modified version of the SNR, defined as

PSNR(dB) = 10 log10( L_max^2 / MSE )

where L_max is the maximum luminance level (255 for 8-bit images). The advantage of the above objective measures is that they do not depend on subjective evaluation. Their disadvantage is that they are not correlated with human vision: first, they do not consider viewing conditions; second, they ignore image-content factors, such as the various masking effects, that influence judged image quality.

B. Objective quality measures based on HVS
An alternative approach is to develop an automated quality measure based on a model that tries to predict human observers' responses. The requirements for such an improved measure are: being an objective assessment, it should be repeatable, fast and easy to implement; being based on the HVS, its assessment of visual quality should agree closely with subjective assessment. The ideal method thus combines the merits of the two previous types of assessment, providing a more effective and precise way to measure the perceptual quality of a watermarked image.

Weighted Peak Signal-to-Noise Ratio (WPSNR)
The WPSNR is a modified quality measure suggested in [12]. It uses an additional parameter called the Noise Visibility Function (NVF), a texture masking function. The NVF uses a Gaussian model to estimate how much texture exists in any area of an image, and the WPSNR uses the value of the NVF as a penalization factor.


WPSNR(dB) = 10 log10( L_max^2 / (MSE * NVF^2) )

For flat regions the NVF is close to 1, while for edge or textured regions it is close to 0. Consequently, for a smooth image WPSNR approximately equals PSNR, while for a textured image WPSNR is somewhat higher than PSNR. The NVF is given as

NVF(i,j) = 1 / ( 1 + θ σ_x^2(i,j) )

where σ_x^2(i,j) denotes the local variance of the image in a window centered on the pixel with coordinates (i,j), and θ is a tuning parameter corresponding to the particular image.

The local variance is computed over a window of size (2L+1)x(2L+1):

σ_x^2(i,j) = (1 / (2L+1)^2) Σ_{k=-L..L} Σ_{l=-L..L} ( x(i+k, j+l) - x̄(i,j) )^2

where x̄(i,j) is the local mean over the same window. The image-dependent tuning parameter is given as

θ = D / σ_{x,max}^2

where σ_{x,max}^2 is the maximum local variance for the given image and D is an experimental value ranging from 50 to 100.
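Putting the NVF pieces together, here is a minimal sketch of PSNR versus NVF-weighted PSNR (illustrative names; the NVF is applied here as a per-pixel weight on the error, one common reading of the penalization factor, with L = 1 and D = 75 from the range above):

```python
import numpy as np

def local_variance(img, L=1):
    """sigma_x^2(i, j) over a (2L+1)x(2L+1) window centered on each pixel (edge-padded)."""
    pad = np.pad(img.astype(float), L, mode='edge')
    H, W = img.shape
    var = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            var[i, j] = pad[i:i + 2 * L + 1, j:j + 2 * L + 1].var()
    return var

def psnr(orig, marked, Lmax=255.0):
    mse = np.mean((orig.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(Lmax ** 2 / mse)

def wpsnr(orig, marked, D=75.0, L=1, Lmax=255.0):
    var = local_variance(orig, L)
    theta = D / var.max()                  # image-dependent tuning parameter
    nvf = 1.0 / (1.0 + theta * var)        # ~1 in flat areas, ~0 in texture
    e = orig.astype(float) - marked.astype(float)
    wmse = np.mean((nvf * e) ** 2)         # NVF penalizes errors in flat areas
    return 10 * np.log10(Lmax ** 2 / wmse)
```

Since the NVF never exceeds 1, the weighted MSE can only shrink relative to the plain MSE, so WPSNR is never below PSNR; the gap widens for heavily textured images, exactly the behavior discussed in Section III.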

Just Noticeable Difference (JND) - Watson's HVS models
In Section I we discussed how to generate a perceptual quantization matrix for the DCT transform. The entries of this matrix represent the amount of quantization each coefficient can withstand without affecting the visual quality of the image. This quantization matrix, referred to as a perceptual threshold matrix, takes into account three main properties of the HVS: frequency sensitivity, luminance sensitivity and contrast masking. We compare 8x8 blocks of the watermarked image with the corresponding blocks of the original image to determine whether a block has been modified to an extent perceptible to human observers. There are two options for estimating the perceptual error against the JND thresholds: computing the average error over all blocks, or using the per-block error matrix, which is generally more accurate than the former.

Section III - Simulation
All test images are 8-bit grayscale images of size 256x256 pixels. One is a heavily textured image, 'baboon.bmp'; the other is a less textured image, 'lena.bmp'.


Baboon.bmp    Lena.bmp

Fig. 1 Perceptual threshold matrix

In the experiment, the JPEG (Pennebaker and Mitchell) quantization table was used for frequency sensitivity; note that each entry of T^F_{u,v} is set to half the value of the quantization table. For T^L_{u,v,b} and T^C_{u,v,b}, I used the values α = 0.649 and w = 0.7 suggested by Watson.

Fig. 2 Perceptual threshold matrix

Cox Watermarking Scheme
This algorithm is an early frequency-domain watermarking approach, based on the fact that the watermark strength can depend on the magnitude of the DCT coefficients of the original image. The watermark signal can be quite strong in DCT coefficients with large magnitudes and is attenuated in areas with small DCT coefficients. Inserting the watermark into the perceptually significant components and adapting its strength therefore yields a watermark that is both robust and transparent. The key steps of the algorithm are:
1. Perform the DCT of the entire original image.
2. Choose the 1000 largest AC coefficients for watermark insertion; the watermark strength is adjusted by a scaling factor α (empirical value 0.1):

v'_i = v_i + α v_i w_i = v_i (1 + α w_i)
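The two steps above can be sketched end-to-end (a NumPy sketch assuming a square grayscale image and a Gaussian watermark, as in Cox's scheme; the function names are illustrative):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal N-point DCT-II matrix."""
    u = np.arange(N)[:, None]
    i = np.arange(N)[None, :]
    D = np.sqrt(2.0 / N) * np.cos((2 * i + 1) * u * np.pi / (2 * N))
    D[0, :] = np.sqrt(1.0 / N)
    return D

def cox_embed(img, alpha=0.1, n=1000, seed=0):
    """Embed a Gaussian watermark into the n largest-magnitude AC coefficients
    of the full-image DCT, using v'_i = v_i * (1 + alpha * w_i)."""
    N = img.shape[0]                       # assumes a square image
    D = dct_matrix(N)
    F = D @ img.astype(float) @ D.T        # DCT of the entire image
    flat = F.ravel().copy()
    order = np.argsort(-np.abs(flat))      # indices by decreasing magnitude
    idx = order[order != 0][:n]            # skip the DC term (flat index 0)
    w = np.random.default_rng(seed).standard_normal(n)
    flat[idx] *= 1 + alpha * w             # v'_i = v_i (1 + alpha w_i)
    return D.T @ flat.reshape(N, N) @ D    # inverse DCT
```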


Results

Baboon:  α = 0.10   PSNR = 35.0 dB   WPSNR = 38.3 dB
         α = 0.15   PSNR = 32.5 dB   WPSNR = 35.7 dB
Lena:    α = 0.10   PSNR = 33.7 dB   WPSNR = 35.2 dB
         α = 0.15   PSNR = 30.5 dB   WPSNR = 31.9 dB

Fig. 3

When the scaling factor α = 0.15, the PSNR of the watermarked 'lena' is 30.5 dB, below the generally acceptable range of 35-40 dB, and the distortion in 'lena' is clearly visible. For 'baboon' at α = 0.15, no perceptible distortion can be found even though its PSNR is only 32.5 dB. This is because the human eye is less sensitive to textured areas than to smooth areas. Therefore, for different images, ignoring the effect of the HVS can produce misleading results. WPSNR takes the effect of image content (texture) on the human eye into account, so it reflects the perceived difference more precisely.


Fig. 4 PSNR and WPSNR (dB) of 'baboon' versus scaling factor (0 to 0.5)

Fig. 5 PSNR and WPSNR (dB) of 'lena' versus scaling factor (0 to 0.5)

The graphs above show the advantage of WPSNR. For the same scaling factor, WPSNR is significantly higher than PSNR for 'baboon', indicating that for textured images more watermark energy can be embedded without sacrificing perceptual quality. For 'lena', which is much smoother than 'baboon', WPSNR stays closer to PSNR.


JND-based analysis

Fig. 6 JND-based perceptual error of 'baboon' and 'lena' versus scaling factor (0 to 0.5)

We can obtain a similar result using the JND-based analysis (Fig. 6). We observe that: 1) the textured image 'baboon' has a much smaller perceptual error than 'lena'; 2) in other words, 'baboon' stays farther from the JND threshold, while 'lena' approaches it more quickly; 3) therefore, 'baboon' has a higher perceptual capacity.

Conclusions
In this project we compared several measures used to evaluate the perceptual quality of images. The simulation results show that, across different images, the improved measures (WPSNR and JND-based) that take the HVS into account or are built on HVS models outperform conventional measures such as PSNR. However, due to the complexity of the human visual system, current visual models still have many limitations in practice. In addition, the overhead of creating the models (i.e., fully computing the JND thresholds) may be a bottleneck to using them.

References:
[1] Weber's Law of Just Noticeable Differences. USD Internet Sensation & Perception Laboratory.
[2] K. Matković, "Tone Mapping Techniques and Color Image Difference in Global Illumination," Dissertation.
[3] J.F. Delaigle, C. Devleeschouwer, B. Macq, and I. Langendijk, "Human Visual System Features Enabling Watermarking," Laboratoire de Télécommunications, Université catholique de Louvain, Proc. of the IEEE.
[4] D. Levický and P. Foriš, "Human Visual System Models in Digital Image Watermarking."


[5] C.I. Podilchuk and W. Zeng, "Image-Adaptive Watermarking Using Visual Models."
[6] J.A. Solomon and A.B. Watson, "Visibility of DCT Basis Functions."
[7] W.B. Pennebaker and J.L. Mitchell, "JPEG: Still Image Data Compression Standard," 1993.
[8] A.B. Watson, "An Improved Detection Model for DCT Coefficient Quantization," in Proc. SPIE Conf. Human Vision.
[9] A.B. Watson, "DCT Quantization Matrices Visually Optimized for Individual Images."
[10] I.J. Cox, M.L. Miller, and J.A. Bloom, "Digital Watermarking," Morgan Kaufmann.
[11] CCIR Recommendation 500-3, "Method for the subjective assessment of the quality of television pictures," Recommendations and Reports of the CCIR, 1986.
[12] S. Voloshynovskiy et al., "A Stochastic Approach to Content Adaptive Digital Image Watermarking."
