
CFA Interpolation Detection

Leszek Swirski

November 30, 2009

1 Introduction

1.1 Colour Image Sensors

To capture a still photograph, digital cameras have to employ a technology converting incoming photons into digital signals. The image sensor used for this is most often a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) chip; however, regardless of the technology, the effect is the same: light strikes an array of photosensors, where energy proportional to that of the light is converted into per-pixel voltages, and these voltages can be interpreted as digital pixel information.

The problem when faced with colour imaging using the majority of such image sensors is that the photosensors in the array can only detect the intensity of light, and not its wavelength; hence, the output image can only be greyscale. Three major methods exist to create colour images:

∙ Colour filter arrays (CFA), which use an array of filters to restrict the wavelengths of light that can reach each sensor.

∙ Foveon X3 sensors, which use layered CMOS chips and rely on the wavelength-dependent absorption of silicon.

∙ 3CCD, which uses three separate CCD sensors, and separates the incoming light into three beams of various wavelengths, each directed onto a separate CCD.

Figure 1: A Bayer pattern CFA overlaying a photosensor array. (Image by Colin Burnett)

Of these, the CFA is by far the most popular in still digital cameras, and is therefore the only colour separation technique considered in this essay. The most common CFA pattern is the Bayer pattern,


Figure 2: An example image we want to receive by taking a photograph

Figure 3: The colour filter array that overlays the photosensor array


Figure 4: The data received from the image sensor

Figure 5: A coloured version of the sensor data, coloured by the corresponding CFA filter colour


which consists of repeating four-pixel squares. Each square is coloured with two green pixels on the diagonal, and one each of red and blue pixels (figure 1). The ratio of two green pixels to one red and one blue was chosen because the human eye is much more sensitive to green light.

The problem with such a filter array is that we cannot interpret the sensor data as received. If we did, each pixel in the resulting image would only be one of red, green or blue, whereas for a full colour image we want each pixel to be a combination of these colours. This problem is demonstrated in the following figures: figure 2 shows what we want the image taken to be, figure 3 shows the filter applied to the incoming light, and figures 4 and 5 demonstrate what the output of the image sensor actually is.
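As a concrete illustration of this sampling process, the following NumPy sketch simulates what the sensor records. It is my own construction, not code from the essay: `bayer_masks` and `sample_cfa` are hypothetical helper names, and the arrays are 0-indexed (red at the top-left of each 2 × 2 block, green on the anti-diagonal, blue at the bottom-right), matching the Matlab masks of appendix A.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for the Bayer layout used in this essay (0-indexed):
    red at (even, even), blue at (odd, odd), green elsewhere."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    b = (y % 2 == 1) & (x % 2 == 1)
    g = ~(r | b)
    return r, g, b

def sample_cfa(rgb):
    """Simulate the sensor output S: one colour sample per pixel."""
    h, w, _ = rgb.shape
    r, g, b = bayer_masks(h, w)
    return np.where(r, rgb[..., 0], np.where(g, rgb[..., 1], rgb[..., 2]))

rgb = np.random.rand(4, 4, 3)   # a stand-in for the scene of figure 2
S = sample_cfa(rgb)             # the mosaic of figures 4 and 5
```

Every pixel of `S` holds exactly one of the three colour samples, which is the data the interpolation algorithms below must work from.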

1.2 Interpolation

Since our target is a full-colour, full-size image, digital cameras have to interpolate the pixel values across each of the three colour channels. In this section, I describe several such interpolation algorithms. There are many different interpolation models used in commercial digital cameras, and an exhaustive list would be unfeasible; however, most are similar in effect or principle to the following (as described in Popescu and Farid[3]). When describing the algorithms, let S_{x,y} be the CFA image¹,

and let \hat{R}_{x,y}, \hat{G}_{x,y}, \hat{B}_{x,y} be the red, green and blue channels constructed from S_{x,y}, where

\hat{R}_{x,y} = \begin{cases} S_{x,y} & \text{if } (x \equiv 1 \bmod 2) \wedge (y \equiv 1 \bmod 2) \\ 0 & \text{otherwise} \end{cases} \quad (1)

\hat{G}_{x,y} = \begin{cases} S_{x,y} & \text{if } (x \equiv 1 \bmod 2) \oplus (y \equiv 1 \bmod 2) \\ 0 & \text{otherwise} \end{cases} \quad (2)

\hat{B}_{x,y} = \begin{cases} S_{x,y} & \text{if } (x \equiv 0 \bmod 2) \wedge (y \equiv 0 \bmod 2) \\ 0 & \text{otherwise} \end{cases} \quad (3)

Then, let R_{x,y}, G_{x,y}, B_{x,y} be the colour channels of the target, interpolated image. It is these three that we want to calculate. In all the cases below, my Matlab implementation can be found in appendix A, and example outputs of the Matlab implementations can be found in appendix B.

1.2.1 Nearest-Neighbour Interpolation

In nearest-neighbour interpolation, the empty pixel values are filled in with their nearest neighbour’s value. An example of such an interpolation is:

Figure 6: Nearest-neighbour interpolation of the red and green channels

¹ Note that x increases in the vertical direction, and y in the horizontal. This is because these are cells in matrix notation, rather than image co-ordinates.


R_{2i,2j} = R_{2i+1,2j} = R_{2i,2j+1} = R_{2i+1,2j+1} = \hat{R}_{2i+1,2j+1} \quad (4)

G_{2i,2j} = G_{2i,2j+1} = \hat{G}_{2i,2j+1} \quad (5)

G_{2i+1,2j} = G_{2i+1,2j+1} = \hat{G}_{2i+1,2j} \quad (6)

B_{2i,2j} = B_{2i+1,2j} = B_{2i,2j+1} = B_{2i+1,2j+1} = \hat{B}_{2i,2j} \quad (7)

Figure 6 demonstrates the above algorithm visually.

Such an interpolation causes unpleasant blocky artefacts, and is not generally used unless a very high-speed implementation is necessary.
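The equations above can be sketched directly in NumPy (my own transcription, 0-indexed so the red sample sits at the top-left and the blue at the bottom-right of each 2 × 2 block):

```python
import numpy as np

def nearest_neighbour(s):
    """Nearest-neighbour demosaic of a Bayer mosaic s (red top-left,
    green on the anti-diagonal, blue bottom-right of each 2x2 block).
    Each block copies its single red/blue sample everywhere; green is
    copied along each row from that row's green sample."""
    h, w = s.shape
    R = np.zeros((h, w)); G = np.zeros((h, w)); B = np.zeros((h, w))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            R[i:i+2, j:j+2] = s[i, j]          # red sample
            B[i:i+2, j:j+2] = s[i+1, j+1]      # blue sample
            G[i, j:j+2] = s[i, j+1]            # green sample, even row
            G[i+1, j:j+2] = s[i+1, j]          # green sample, odd row
    return R, G, B
```

On the single block [[1, 2], [3, 4]], this yields a constant red channel of 1, a constant blue channel of 4, and green rows of 2 and 3, matching equations (4)-(7).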

1.2.2 Bilinear Interpolation

Bilinear interpolation estimates the value of an empty pixel as the average of the values of its non-empty neighbours: more precisely, it performs a convolution of the matrix representing each channel with a bilinear interpolation matrix. Therefore, we have:

R = \hat{R} * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \quad (8)

G = \hat{G} * \frac{1}{4} \begin{bmatrix} 0 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \quad (9)

B = \hat{B} * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \quad (10)

Figure 7 demonstrates the above algorithm visually.

Figure 7: Bilinear interpolation of the red and green channels

Other forms of linear interpolation using convolutions also exist, including bicubic and biquadratic; however, bilinear is the most common.
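In Python, the same convolutions can be sketched with SciPy (an assumption on my part: `scipy.signal.convolve2d` with zero padding behaves like the Matlab `conv2(..., 'same')` used in appendix A.2):

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear(Rh, Gh, Bh):
    """Bilinear demosaic: convolve each sparse channel with the kernels
    of equations (8)-(10)."""
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    R = convolve2d(Rh, k_rb, mode='same')
    G = convolve2d(Gh, k_g,  mode='same')
    B = convolve2d(Bh, k_rb, mode='same')
    return R, G, B
```

For a constant scene, interior pixels are reconstructed exactly: a missing red pixel surrounded by four diagonal red samples receives their average.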

1.2.3 Smooth Hue Transition

In bilinear interpolation, there are visible chrominance aliases in areas with detail, where the interpolation of colour information has caused the detail to be slightly shifted between colour channels. To attempt to eliminate such chrominance distortions, one can make the assumption that in natural images colour hue varies smoothly. If we define two ‘hues’ as the ratios of each chrominance component to the luminance, and assuming we have interpolated the luminance already, we can interpolate these hue values rather than the chrominance values to reduce colour aliasing.


In the smooth hue transition algorithm, we approximate luminance with the green channel, with the red and blue as chrominance components. This is good enough, as half the pixels captured are green, and green is the major component in luminance (approximately 0.6). The green channel is bilinearly interpolated, and then used in the hue interpolation. The red and blue channels have their ratio with the green channel interpolated, and are then pointwise multiplied by the green channel.

G = \hat{G} * \frac{1}{4} \begin{bmatrix} 0 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \quad (11)

R = G \odot \left( (\hat{R} \oslash G) * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \right) \quad (12)

B = G \odot \left( (\hat{B} \oslash G) * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \right) \quad (13)

where \odot and \oslash denote pointwise multiplication and division.
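A NumPy sketch of equations (11)-(13) follows. The `eps` guard against division by zero at pixels where the interpolated green is zero is my own implementation detail, not part of the equations:

```python
import numpy as np
from scipy.signal import convolve2d

def smooth_hue(Rh, Gh, Bh, eps=1e-8):
    """Smooth hue transition: interpolate green bilinearly, interpolate
    the red/green and blue/green ratios, then multiply back by green.
    eps guards the pointwise division where green is zero."""
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    G = convolve2d(Gh, k_g, mode='same')
    R = G * convolve2d(Rh / (G + eps), k_rb, mode='same')
    B = G * convolve2d(Bh / (G + eps), k_rb, mode='same')
    return R, G, B
```

As with bilinear interpolation, a constant scene is reconstructed exactly in the interior; the difference appears where hue is smooth but luminance is not.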

1.2.4 Median Filtering

All of the above methods produce ‘speckles’, or noise, near edges. This is caused, as above, by shifted edge information; however, discontinuities in hue mean that the smooth hue transition does not reduce these artefacts. Since this noise is caused not by the individual channels themselves, but by differences between them, it should be possible to decrease this noise by applying a despeckling filter to the pairwise differences of the colour channels, and incorporating this despeckled difference back into the original. This is what median filtering does.

Firstly, all the channels are interpolated using another interpolation scheme, usually simply bilinear. Then, their pairwise differences are median filtered. The resulting colour channels are constructed pixel by pixel from the CFA: for each pixel in the CFA image, the corresponding median-filtered difference pixel is added or subtracted. Say M_{rg}, M_{rb}, M_{gb} are the median-filtered pairwise differences. Then, take the example of S_{1,0}, which is the green pixel \hat{G}_{1,0}. In this case:

R_{1,0} = S_{1,0} + (M_{rg})_{1,0} \quad (14)

G_{1,0} = S_{1,0} \quad (15)

B_{1,0} = S_{1,0} - (M_{gb})_{1,0} \quad (16)

The resulting image has fewer speckles on edges, and the image could be median filtered again to further remove speckles. Indeed, dcraw (an open-source program dealing with, among other things, CFA interpolation) has an option for how many times to median filter its interpolated result.
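The scheme can be sketched in Python as follows. The 3 × 3 window of `scipy.ndimage.median_filter` is an assumed despeckling filter size (the window size is a free parameter of the method):

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import median_filter

def median_demosaic(s, Rcfa, Gcfa, Bcfa):
    """Bilinear pass, median filter the pairwise channel differences,
    then rebuild every channel from the CFA samples, generalising
    equations (14)-(16) to all pixel types as in appendix A.4."""
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    R = convolve2d(s * Rcfa, k_rb, mode='same')
    G = convolve2d(s * Gcfa, k_g,  mode='same')
    B = convolve2d(s * Bcfa, k_rb, mode='same')
    Mrg = median_filter(R - G, size=3)   # despeckled pairwise differences
    Mrb = median_filter(R - B, size=3)
    Mgb = median_filter(G - B, size=3)
    R = s + Mrg * Gcfa + Mrb * Bcfa      # rebuild from the CFA samples
    G = s - Mrg * Rcfa + Mgb * Bcfa
    B = s - Mrb * Rcfa - Mgb * Gcfa
    return R, G, B
```

Because the reconstruction starts from `s` at every pixel, the sensor samples themselves are never altered; only the inter-channel differences are smoothed.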

1.2.5 Gradient-Based Interpolation

A problem with all the algorithms described above is that they all interpolate across edges: this means that edges are less defined in the final image, and also causes artefacts along edges. The gradient-based interpolation scheme aims to avoid interpolating across edges by adaptively interpolating the green channel based on local horizontal and vertical gradients. A second derivative can be taken in both directions, as a second derivative near zero is indicative of a locally linear gradient, whereas a non-zero second derivative is indicative of local gradient change. Hence, we can compare the absolute values of horizontal and vertical second derivatives. Where there is local linearity in a certain direction but not in the other, it is a safe assumption that we should interpolate linearly only in the locally linear direction. This means that, for example, if the absolute horizontal second derivative is smaller than the vertical, we should only interpolate horizontally.

In gradient-based interpolation, the horizontal and vertical second derivatives are approximated using central differences. Hence, we get H and V as the derivative estimators using:

H_{x,y} = \left| \frac{S_{x,y-2} + S_{x,y+2}}{2} - S_{x,y} \right| \quad (17)

V_{x,y} = \left| \frac{S_{x-2,y} + S_{x+2,y}}{2} - S_{x,y} \right| \quad (18)

Notice that, for each possible (x, y), all three S values will come from pixels of the same colour. The green channel is then adaptively interpolated as:

G_{x,y} = \begin{cases} \hat{G}_{x,y} & \text{if } S_{x,y} \text{ is green} \\ \frac{\hat{G}_{x,y-1} + \hat{G}_{x,y+1}}{2} & \text{if } H_{x,y} < V_{x,y} \\ \frac{\hat{G}_{x-1,y} + \hat{G}_{x+1,y}}{2} & \text{if } H_{x,y} > V_{x,y} \\ \frac{\hat{G}_{x,y-1} + \hat{G}_{x,y+1} + \hat{G}_{x-1,y} + \hat{G}_{x+1,y}}{4} & \text{if } H_{x,y} = V_{x,y} \end{cases} \quad (19)

To interpolate the red and blue channels, we take advantage of the fact that human perception notices loss of edge definition in chrominance much less than in luminance. If we once again assume that G is the luminance, then we can define the two chrominance channels as R − G and B − G, not unlike the definition of the YUV colour space. These chrominance channels can now be interpolated in the usual way, without worrying about gradients, as the interpolation across edges will be imperceptible. The resulting interpolated chrominance can be added back to G to recover the colour channels. Hence:

R = G + \left( (\hat{R} - G) * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \right) \quad (20)

B = G + \left( (\hat{B} - G) * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \right) \quad (21)

where the differences \hat{R} - G and \hat{B} - G are taken only at the red and blue sample locations respectively (and are zero elsewhere), as in the implementation in appendix A.5.
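The adaptive green step can be sketched in NumPy as below. This is a simplification of the appendix A.5 code: `np.roll` gives periodic boundaries, whereas the Matlab code replicates edge pixels.

```python
import numpy as np

def gradient_green(s, Gh, Rcfa, Bcfa):
    """Adaptive green interpolation of equation (19). H and V are the
    second-derivative estimators of equations (17)-(18); green is then
    averaged only along the locally linear direction."""
    H = np.abs((np.roll(s, 2, axis=1) + np.roll(s, -2, axis=1)) / 2 - s)
    V = np.abs((np.roll(s, 2, axis=0) + np.roll(s, -2, axis=0)) / 2 - s)
    gh = (np.roll(Gh, 1, axis=1) + np.roll(Gh, -1, axis=1)) / 2  # horizontal avg
    gv = (np.roll(Gh, 1, axis=0) + np.roll(Gh, -1, axis=0)) / 2  # vertical avg
    adapt = np.where(H < V, gh, np.where(H > V, gv, (gh + gv) / 2))
    return Gh + (Rcfa + Bcfa) * adapt   # keep green samples, fill the rest
```

On a constant image, H = V everywhere, so the last branch of equation (19) applies and the scene is reconstructed exactly.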

1.2.6 Other Methods

There are many more methods of interpolation than those described above; however, most share the same techniques and principles: identifying edges/gradients in the image, operating on the green channel as if it were luminance, and using linear interpolation where the data can be assumed to be linear. Popescu and Farid[3] describe two more algorithms, however both are elaborations of the gradient-based method, attempting to achieve better treatment of edges in the image.

2 Interpolation Detection

The real aim of both Popescu and Farid[3] and Gallagher and Chen[1] is to detect whether an image has undergone an interpolation similar to one of the above. This is useful for several reasons. Firstly, it can prove an image to have been digitally tampered with, if it does not exhibit these interpolations, and offers evidence to verify its authenticity. If one could identify the exact interpolation used, this could provide evidence about the camera used to take the photograph. Images could also be analysed locally, to identify forged regions in an otherwise untampered photograph. In addition to these forensic applications, such analysis could be used to automatically distinguish photorealistic computer generated (PRCG) images from real images.

2.1 Methods

The principle behind interpolation detection is that interpolation will create a correlation between pixels. The majority of CFA interpolation algorithms are regular and approximately linear, in particular for the green channel, which is often first interpolated linearly as a basis for the remainder of the algorithm. Therefore, if an algorithm can determine some regular correlation between pixels, then it is likely that the image is the result of an interpolation.

2.1.1 EM Algorithm

Popescu and Farid[3] use the Expectation-Maximisation (EM) algorithm to attempt to simultaneously find which pixels are correlated, and the parameters of their correlation, that is, what type of interpolation is used. The EM algorithm is a two-step iterative algorithm. In each iteration, it estimates the probability of each pixel being correlated by a model M, and then estimates the parameters of M using minimisation based on these probabilities.

A very strong advantage of the EM algorithm is that it creates a pairwise separable 24-D parameter space for the model, and this separability allows one to determine the interpolation algorithm used. However, its convergence can be slow.
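The flavour of this approach can be seen in a deliberately simplified 1-D sketch, which is my own construction rather than the 2-D, 5 × 5-neighbourhood model of [3]: each sample is either a linear combination of its two neighbours plus Gaussian noise (the correlated model M) or an outlier; the E-step computes per-sample posterior weights, and the M-step refits the coefficients by weighted least squares. The noise level sigma is held fixed here, whereas [3] also re-estimates it in the M-step.

```python
import numpy as np

def em_linear_predictor(x, n_iter=20, sigma=0.05, p0=1.0):
    """Simplified 1-D EM: model M says x[k] = a0*x[k-1] + a1*x[k+1] plus
    N(0, sigma^2) noise; the alternative is a uniform outlier density p0."""
    A = np.stack([x[:-2], x[2:]], axis=1)      # left/right neighbours
    y = x[1:-1]                                # centre samples
    a = np.array([0.5, 0.5])                   # initial coefficients
    for _ in range(n_iter):
        r = y - A @ a                          # residuals under model M
        pm = np.exp(-r**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        w = pm / (pm + p0)                     # E-step: P(sample follows M)
        a = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))  # M-step
    return a, w

rng = np.random.default_rng(0)
x = np.empty(101)
x[::2] = rng.random(51)                        # 'sensor' samples
x[1::2] = (x[0:-1:2] + x[2::2]) / 2            # interpolated samples
a, w = em_linear_predictor(x)
```

On this signal, `a` converges towards the interpolation coefficients [0.5, 0.5], and the interpolated samples receive the high posterior weights, which is exactly the separation the detector relies on.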

2.1.2 Gallagher and Chen Algorithm

Gallagher and Chen[1] propose a different method of detection. They notice that correlation causes a decrease in variance: indeed, for a bilinear interpolation, the non-interpolated green pixels have four times the variance of the interpolated green pixels (figure 8). Therefore one can detect regular correlation by finding periodicity in the variance.
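This variance reduction is easy to verify numerically. The sketch below is an idealised full-grid version of the effect: every pixel of a white-noise plane is re-estimated as the mean of its four neighbours, standing in for the quincunx averaging of real demosaicing.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, (400, 400))   # white-noise stand-in for green samples

# Bilinear-style estimate: each pixel as the mean of its four neighbours
# (np.roll wraps at the borders, which is harmless for this statistic).
interp = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
          np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4

ratio = interp.var() / s.var()   # close to 1/4, as figure 8 illustrates
```

The mean of four independent samples of variance σ² has variance σ²/4, which is the periodic variance signature the detector looks for.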

Figure 8: The variance of interpolated green pixels is a quarter of the variance of the non-interpolated pixels

In their algorithm, Gallagher and Chen only look at the green channel, though they state that other channels could also be considered. First, they convolve the image with a high-pass filter to remove low-frequency information (which is irrelevant). The high-pass filter has the interesting property that it will remove any pixels which have been bilinearly interpolated using the convolution described in section 1.2.2.

To calculate variance, Maximum Likelihood Estimation (MLE) is used on the anti-diagonals of the image, assuming that each pixel on a diagonal is drawn from a normal distribution, and that variance along the anti-diagonals is constant. Diagonals are used because it is clear from the pattern that the variance across diagonals varies periodically. It is not clear why the authors choose to use the anti-diagonals rather than the diagonals; however, it is likely that this is only to make the mathematical representation simpler.

Instead of actually computing the variance of each diagonal, the authors use a computationally simpler estimate, which is the mean of the absolute values of the diagonal. This calculates the mean absolute deviation, which is a measure of dispersion similar to the standard deviation. Therefore, a one-dimensional signal of ‘variances’ is formed:

m(d) = \frac{1}{N_d} \sum_{x+y=d} \left| (h * i)_{x,y} \right| \quad (22)

where N_d is the number of elements along the diagonal, h is the high-pass filter, and i is the image.

For an interpolated image, we expect to find periodicity in m(d). To find such periodicity, the authors use a DFT, calculating \left| M(e^{i\omega}) \right|. This M will exhibit peaks at any frequency corresponding to periodicity in the variance; for CFA interpolation, which is by a factor of two, we expect a peak at \omega = \pi. The authors quantify this peak as:

s = \frac{\left| M(e^{i\omega}) \right|_{\omega=\pi}}{\operatorname{median}_{\omega} \left\{ \left| M(e^{i\omega}) \right| \right\}} \quad (23)

where normalising by the median was found to be important in differentiating between interpolation and signals with large energy across the spectrum.

I implemented this algorithm in Matlab, with the following code:

I = im2double(imread(...));
G = I(:,:,2);                   % Extract green channel
h = [0 1 0; 1 -4 1; 0 1 0];
o = conv2(G, h, 'valid');       % High-pass filter
fo = fliplr(o);                 % Flip to find anti-diagonals using diag
m = [];
for d = (size(fo,2)-1):-1:(-size(fo,1)+1)
    m(size(fo,2)-d) = mean(abs(diag(fo, d)));
end
M = abs(fft(m));
k = median(M(2:end));
mid = M(floor(size(M,2)/2)+1);
s = mid/k;

I ran this algorithm on my bilinearly interpolated image, as well as on a PRCG image, and on 256 × 256 sections of the bilinear and gradient-based images, to demonstrate localised interpolation detection on both linear and non-linear interpolation algorithms. In each case, the ‘real’ image showed a significant peak in the DFT (and a significant s value), whereas the PRCG image exhibited no such peak. The images used, and their m(d) and \left| M(e^{i\omega}) \right| plots, can be found in appendix C.

2.1.3 Identifying Forged Regions

Since the above statistics apply locally as well as over the whole image, we can adapt the above algorithm to perform a local analysis (within a radius n) of every pixel, using local diagonals and a local DFT, and returning a per-pixel peak s_{x,y}. The m function would change to a local pixel variance estimation:

m(x, y) = \frac{1}{2n+1} \sum_{i=-n}^{n} \left| (h * i)_{x+i,y+i} \right| \quad (24)

Any pixels with a low value for s_{x,y} would be likely to be parts of a forged region. With appropriate thresholding, these forged areas can be segmented, as is the case at the end of [1].
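A block-based sketch of this localised detector follows. It is a simplification of the per-pixel scheme described above: `local_peak_map` is my own name, and it evaluates s over n × n tiles rather than in a sliding window around every pixel.

```python
import numpy as np
from scipy.signal import convolve2d

def local_peak_map(green, n=32):
    """For each n x n tile of the high-passed green channel, form the
    mean-absolute-deviation signal m(d) along anti-diagonals, take its
    DFT, and return the peak-to-median statistic s per tile. Low values
    of s flag candidate forged (non-interpolated) regions."""
    h = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
    o = np.abs(convolve2d(green, h, mode='valid'))
    H, W = o.shape
    s_map = np.zeros((H // n, W // n))
    for bi in range(H // n):
        for bj in range(W // n):
            blk = o[bi * n:(bi + 1) * n, bj * n:(bj + 1) * n]
            m = np.array([np.mean(np.diag(np.fliplr(blk), d))
                          for d in range(n - 1, -n, -1)])
            M = np.abs(np.fft.fft(m))
            s_map[bi, bj] = M[len(M) // 2] / np.median(M[1:])
    return s_map
```

Tiles covering interpolated content produce large values of s, while tiles covering pasted-in (non-interpolated) content produce small ones; thresholding the map segments the forged region.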

2.2 Possible Problems

There are several possible problems with the above techniques; however, these prove not to be as problematic as one might expect. Although most interpolation techniques used in modern cameras are non-linear, the linearity assumptions used in the detection are still ‘good enough’, as the non-linear algorithms end up being sufficiently linear locally to leave traces of interpolation. Also, JPEG compression will interfere with the interpolation watermark, due to its own compression, which includes low-pass filtering. However, the authors of both methods described above found that the techniques degrade gracefully with increased JPEG compression, and that high-quality JPEGs have little effect on the results.

The largest problem is that CFA interpolation patterns can be synthesised, so these methods cannot verify authenticity. However, they can verify forgery, and they add a new method of forgery detection which makes convincing forgery more difficult.

3 CFA Pattern Synthesis

In contrast to the other two papers, Kirchner and Böhme[2] describe a method to circumvent the forgery detection described above by restoring CFA-like correlations. The naïve method would be to simply sample the forged image with a CFA filter, and reinterpolate it. However, this form of sub-sampling the image could introduce aliasing if the forged area’s spectrum were too wide. A much better method would be to find the additive tamper error (the difference between tampered and original), low-pass filter it (e.g. with a Gaussian blur), and add it to the image before sampling the result with a CFA filter. This would work; however, it is not guaranteed to give the minimal error from the original.

To guarantee a minimum error, the authors consider this as an algebraic problem. A linear interpolation can be represented as a matrix operation on vectorised forms of the images, y = Hx, and the manipulation can be treated as an additive error:

y = Hx + \varepsilon \quad (25)

For a given manipulated image y, Kirchner and Böhme want to find a CFA sensor value x such that \|y - Hx\| = \|\varepsilon\| is minimal. This is a least squares problem, with a known solution using the Moore–Penrose pseudoinverse:

x = (H^T H)^{-1} H^T y \quad (26)

However, calculating the Moore–Penrose pseudoinverse using standard methods (e.g. singular value decomposition) quickly becomes intractable. The remainder of the paper describes an implementation which takes advantage of the sparseness of H; however, I will not deal with the implementation in this essay.
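The least-squares formulation can be sketched on a toy 1-D analogue. This is my own example, not the paper's implementation: H keeps samples at even output positions and linearly interpolates the odd ones, and `scipy.sparse.linalg.lsqr` exploits the sparsity of H instead of forming a dense pseudoinverse.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

n = 50
H = lil_matrix((2 * n - 1, n))
for i in range(n):
    H[2 * i, i] = 1.0            # originals kept at even outputs
for i in range(n - 1):
    H[2 * i + 1, i] = 0.5        # odd outputs: average of neighbours
    H[2 * i + 1, i + 1] = 0.5
H = H.tocsr()

rng = np.random.default_rng(3)
x_true = rng.random(n)                           # 'sensor' values
y = H @ x_true                                   # interpolated signal
y_tampered = y + 0.05 * rng.standard_normal(2 * n - 1)  # additive tamper error

# x = argmin ||y - Hx||: the sparse least-squares solution, equivalent
# to applying the Moore-Penrose pseudoinverse of equation (26).
x = lsqr(H, y_tampered)[0]
```

Resampling `H @ x` then carries consistent CFA-style correlations while staying as close as possible to the tampered signal.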

References

[1] Andrew C. Gallagher and Tsuhan Chen. Image authentication by detecting traces of demosaicing. In Proc. CVPR WVU Workshop, 2008.

[2] Matthias Kirchner and Rainer Böhme. Synthesis of color filter array pattern in digital images. In Proceedings of SPIE, volume 7254, page 72540K, 2009.

[3] Alin C. Popescu and Hany Farid. Exposing digital forgeries in color filter array interpolated images. IEEE Transactions on Signal Processing, 53(10):3948–3959, 2005.


A Interpolation Implementation

In each of the following implementations, the following values are predefined:

% Load sensor data
S = im2double(imread('BoatsSensor.png'));

% Create CFA filter for each of the three colours
Rcfa = repmat([1 0; 0 0], size(S)/2);
Gcfa = repmat([0 1; 1 0], size(S)/2);
Bcfa = repmat([0 0; 0 1], size(S)/2);

% Split data into 'hat' variables
Rh = S .* Rcfa;
Gh = S .* Gcfa;
Bh = S .* Bcfa;

A.1 Nearest-Neighbour

R = Rh(floor([0:end-1]/2)*2+1, floor([0:end-1]/2)*2+1);
G(floor([0:end-1]/2)*2+1, :) = ...
    Gh(floor([0:end-1]/2)*2+1, floor([0:end-1]/2)*2+2);
G(floor([0:end-1]/2)*2+2, :) = ...
    Gh(floor([0:end-1]/2)*2+2, floor([0:end-1]/2)*2+1);
B = Bh(floor([0:end-1]/2)*2+2, floor([0:end-1]/2)*2+2);

A.2 Bilinear

R = conv2(Rh, [1 2 1; 2 4 2; 1 2 1]/4, 'same');
G = conv2(Gh, [0 1 0; 1 4 1; 0 1 0]/4, 'same');
B = conv2(Bh, [1 2 1; 2 4 2; 1 2 1]/4, 'same');

A.3 Smooth Hue Transition

G = conv2(Gh, [0 1 0; 1 4 1; 0 1 0]/4, 'same');
R = G .* conv2(Rh./G, [1 2 1; 2 4 2; 1 2 1]/4, 'same');
B = G .* conv2(Bh./G, [1 2 1; 2 4 2; 1 2 1]/4, 'same');

A.4 Median Filter

R = conv2(Rh, [1 2 1; 2 4 2; 1 2 1]/4, 'same');
G = conv2(Gh, [0 1 0; 1 4 1; 0 1 0]/4, 'same');
B = conv2(Bh, [1 2 1; 2 4 2; 1 2 1]/4, 'same');

% Median filter the pairwise channel differences (section 1.2.4)
Mrg = medfilt2(R - G);
Mrb = medfilt2(R - B);
Mgb = medfilt2(G - B);

R = S + Mrg.*Gcfa + Mrb.*Bcfa;
G = S - Mrg.*Rcfa + Mgb.*Bcfa;
B = S - Mrb.*Rcfa - Mgb.*Gcfa;

A.5 Gradient-Based

H = abs((S(:, [1 1 1:end-2]) + S(:, [3:end end end]))/2 - S);
V = abs((S([1 1 1:end-2], :) + S([3:end end end], :))/2 - S);
G = Gh + (Rcfa+Bcfa).*( ...
    (H < V)  .* ( (Gh(:, [1 1:end-1]) + Gh(:, [2:end end]))/2 ) + ...
    (H > V)  .* ( (Gh([1 1:end-1], :) + Gh([2:end end], :))/2 ) + ...
    (H == V) .* ( (Gh(:, [1 1:end-1]) + Gh(:, [2:end end]) ...
                 + Gh([1 1:end-1], :) + Gh([2:end end], :))/4 ) ...
    );
R = G + conv2(Rh - Rcfa.*G, [1 2 1; 2 4 2; 1 2 1]/4, 'same');
B = G + conv2(Bh - Bcfa.*G, [1 2 1; 2 4 2; 1 2 1]/4, 'same');

B Interpolation Examples

Figure 9: A nearest neighbour interpolation of the data from figure 5


Figure 10: A bilinear interpolation of the data from figure 5

Figure 11: A smooth hue transition interpolation of the data from figure 5


Figure 12: A median filtered bilinear interpolation of the data from figure 5

Figure 13: A gradient-based interpolation of the data from figure 5


C Detection Examples

This section contains example executions of the CFA interpolation detection algorithm.

Panels: original image; high-pass of green channel; plots of m(d) and \left| M(e^{i\omega}) \right|. The detector gives s = \left| M(e^{i\omega}) \right|_{\omega=\pi} / \operatorname{median}_{\omega}\{\left| M(e^{i\omega}) \right|\} = 79.2337 / 0.1191 = 665.0172.

Figure 14: A bilinearly interpolated image passing through the interpolation detector


Panels: original image; high-pass of green channel; plots of m(d) and \left| M(e^{i\omega}) \right|. The detector gives s = \left| M(e^{i\omega}) \right|_{\omega=\pi} / \operatorname{median}_{\omega}\{\left| M(e^{i\omega}) \right|\} = 0.2546 / 0.1688 = 1.5081.

Figure 15: A PRCG image passing through the interpolation detector


Panels: original image; high-pass of green channel; plots of m(d) and \left| M(e^{i\omega}) \right|. The detector gives s = \left| M(e^{i\omega}) \right|_{\omega=\pi} / \operatorname{median}_{\omega}\{\left| M(e^{i\omega}) \right|\} = 6.1377 / 0.0574 = 106.8595.

Figure 16: A 256 × 256 section of a bilinearly interpolated image passing through the interpolation detector


Panels: original image; high-pass of green channel; plots of m(d) and \left| M(e^{i\omega}) \right|. The detector gives s = \left| M(e^{i\omega}) \right|_{\omega=\pi} / \operatorname{median}_{\omega}\{\left| M(e^{i\omega}) \right|\} = 2.8713 / 0.0637 = 45.1039.

Figure 17: A 256 × 256 section of a gradient-interpolated image passing through the interpolation detector
