

Page 1: Texture sensitive image inpainting after object morphing ...worldcomp-proceedings.com/proc/p2012/IPC2193.pdf · Texture Sensitive Image Inpainting after Object ... characteristic

Texture Sensitive Image Inpainting after Object Morphing

Yin Chieh Liu and Yi-Leh Wu

Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan

e-mail: [email protected]

Abstract - This paper develops an object morphing algorithm and an image inpainting algorithm. Cameras today offer many image editing functions. This paper proposes an object morphing method that increases the height of a person in an image. The morphing method is based on minimizing the change in gradients after adding rows to the object, so as to preserve texture detail. To fill the pixels left missing after object morphing, an efficient and accurate inpainting method is presented, based on a new patch classification that determines which edge direction passes through a patch; missing pixels are then recovered according to that edge direction. A hybrid inpainting algorithm driven by automatic texture complexity detection is also presented. Experiments demonstrate that the proposed texture-sensitive inpainting method and the hybrid inpainting method outperform previous inpainting methods.

Keywords: image inpainting; texture complexity; object morphing

1. Introduction

Image inpainting fills missing pixels using the valid information remaining in the image. It is widely applied to repairing medical images, removing scratches from images, changing image foregrounds, etc.

Many image inpainting methods have been proposed in recent years. The first automatic image inpainting method for still images was proposed by Bertalmio et al. [1]; it is known as the BSCB method, after the authors' initials. Different kinds of models followed. Guo et al. [2] proposed a structure-based inpainting algorithm built on the Fast Marching Method (FMM) [3]. Since the FMM is iterative, they introduced a weight calculation method to reduce its time complexity. Farid et al. [4] presented an inpainting method using dynamic weighted kernels. This method used traditional blur kernels of variable sizes and weights; edge pixels in the neighborhood of a missing pixel were weighted more than non-edge pixels to preserve the edges in the missing region. However, the blur kernel was applicable only to restoring small missing regions. Sun et al. [5] introduced an inpainting algorithm based on a multi-scale Markov Random Field (MRF) model. The image to be inpainted was divided into multiple scales; the coarsest scale was inpainted with the MRF model, and the final result was then refined from the coarsest resolution to the finest one using the belief propagation (BP) algorithm [6]. Xu et al. [7] presented an inpainting algorithm that investigates the sparsity of the similarities between an image patch and its neighboring patches. The patch to be inpainted is repaired by a linear combination of candidate patches from the source region, iterating until no missing pixels are left. Most of these methods run iteratively and thus incur substantial computational overhead. To repair missing regions rapidly, Huang et al. [8] proposed an efficient inpainting approach that keeps the structure of the source and target regions consistent through the priority of the filling order in the target region. All of the above methods operate on a single image. Wu et al. [9] instead used 3D information obtained from a sequence of images for inpainting, introducing homography and image rectification to reduce the guesswork in filling missing pixels.

After object morphing, some pixels are covered by the new object while others lose their original values, leaving missing pixels in the image. This paper applies an efficient and effective image inpainting algorithm after object morphing. The proposed inpainting method is based on patch priority and a new patch classification; we inpaint the missing pixels according to the edge direction decided by the patch classification. Since our inpainting method focuses on edge points, we can produce better inpainting results on


the images with complex backgrounds.

The paper is organized as follows. Section 2 presents the object morphing algorithm. Section 3 details the proposed inpainting method. Section 4 compares individual inpainting performance. Section 5 presents a hybrid inpainting method. Finally, Section 6 reports experiments and Section 7 concludes.

2. Object morphing

We now present the technical details of object morphing. The idea is similar to seam carving: seams are vertical or horizontal chains of pixels that are successively removed from, or added to, an image to change its width or height. Grundmann et al. [10] introduced a seam-carving algorithm that minimizes the change in gradients when adding chains of pixels. Following this idea, we propose an object morphing method based on minimizing the change in gradients in order to preserve texture detail. Figure 1 shows the flow chart of the morphing procedure. Objects to be edited are cut out from the original image as the input of the system. We regard the middle row of the object as the beginning of the morphing process and its last row as the end. Next, we apply a Sobel mask to each pixel from the beginning row to the end row to measure the gradients of each chain of pixels. The Sobel operator is widely used in edge detection; it convolves the image with small, separable, integer-valued filters in the horizontal and vertical directions and is therefore relatively inexpensive computationally. After applying the Sobel mask, we choose, in every group of n rows, the row with the minimum sum of gradients as the optimal seam, so as to preserve texture detail. We then shift all rows above the optimal seam upward and fill the vacated row with the average color of the optimal seam and its previous row. Here we use a 3 x 3 Sobel mask and set n to 5 based on experimental experience.

We now have the morphing result. Aligning the last row of the object to its original position in the source image and pasting the object back, we obtain an image with the new object but with some background information missing. We therefore present an image inpainting method in the next section.
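The row-insertion step above can be sketched as follows. This is a simplified illustration rather than the authors' implementation: the helper names (`sobel_row_energy`, `heighten`) and the grayscale-only input are assumptions, and the seam bookkeeping is reduced to growing the image by one averaged row per group of n rows.

```python
import numpy as np

def sobel_row_energy(gray):
    """Sum of 3 x 3 Sobel gradient magnitudes for each row."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    energy = np.zeros(h)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            energy[i] += abs((win * kx).sum()) + abs((win * ky).sum())
    return energy

def heighten(gray, n=5):
    """Grow the object: in every group of n rows between the middle row
    and the last row, pick the minimum-gradient row as the optimal seam
    and insert the average of that row and the one above it."""
    energy = sobel_row_energy(gray)
    h = gray.shape[0]
    out = [gray[r] for r in range(h)]
    inserted = 0
    for g0 in range(h // 2, h, n):            # morphing starts mid-object
        group = range(g0, min(g0 + n, h))
        seam = min(group, key=lambda r: energy[r])
        avg = (gray[seam].astype(float) + gray[max(seam - 1, 0)].astype(float)) / 2
        out.insert(seam + inserted, avg.astype(gray.dtype))
        inserted += 1
    return np.stack(out)
```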

3. The proposed image inpainting method

First, we define I to be the original image, which includes a target region to be inpainted.

Figure 1. Flow chart of object morphing.

To attain an efficient and accurate image inpainting approach, we adopt Huang et al.'s [8] priority function to keep the structure between the target region and the source region consistent, and we present a new patch classification for the inpainting process. The patch classification determines which edge direction passes through the patch. We define the horizontal variation $Var_h(p)$ and the vertical variation $Var_v(p)$ in Eqs. (1)-(2), where $I(s,t)$ denotes the middle point of the patch rather than its starting point. The corresponding diagram of the patch classification is shown in Figure 2: Figure 2(a) illustrates the estimation of $Var_h(p)$ and Figure 2(b) the estimation of $Var_v(p)$.

$$Var_h(p) = \sum_{i=-1}^{1}\sum_{j=-1}^{0}\left|I(i+s,\,j+t) - I(i+s,\,j+t+1)\right| \qquad (1)$$

$$Var_v(p) = \sum_{j=-1}^{1}\sum_{i=-1}^{0}\left|I(i+s,\,j+t) - I(i+1+s,\,j+t)\right| \qquad (2)$$
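A direct transcription of Eqs. (1)-(2) can be sketched as follows; the function name `variations` and the indexing convention `I[first, second]` matching the arguments of I(·,·) are assumptions for illustration.

```python
import numpy as np

def variations(I, s, t):
    """Horizontal and vertical variation of the 3 x 3 patch centred
    at (s, t), per Eqs. (1)-(2); the patch must lie inside I."""
    # Eq. (1): absolute differences along the second coordinate.
    var_h = sum(abs(float(I[i + s, j + t]) - float(I[i + s, j + t + 1]))
                for i in range(-1, 2) for j in range(-1, 1))
    # Eq. (2): absolute differences along the first coordinate.
    var_v = sum(abs(float(I[i + s, j + t]) - float(I[i + 1 + s, j + t]))
                for j in range(-1, 2) for i in range(-1, 1))
    return var_h, var_v
```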


Under the order of patch priority, we inpaint the missing points according to Eq. (3), where α is a positive constant whose value is set in the experiments.

(a) (b)
Figure 2. Patch classification.

If the horizontal variation $Var_h(p)$ is larger than the vertical variation $Var_v(p)$ plus α, the horizontal line is much more significant than the vertical line in the patch; in this case we fill the missing point $I(s,t)$ with the average of $I(s-1,t)$ and $I(s+1,t)$. If the vertical variation $Var_v(p)$ is larger than the horizontal variation $Var_h(p)$ plus α, the vertical line is much more significant than the horizontal line in the patch; in this case we fill $I(s,t)$ with the average of $I(s,t-1)$ and $I(s,t+1)$. Otherwise, we regard the patch as a smooth region and fill the missing point with a weighted average of the pixels in the patch. The weight $w_{ij}$ of each pixel $I(i,j)$ is defined in Eq. (4), where $I(s,t)$ is the middle point of the patch and Z is the sum of all weights in the patch, used for normalization. For example, the weights for a 3 x 3 patch are shown in Figure 3.

$$p \in \begin{cases} \text{horizontal lines}, & \text{if } Var_h > Var_v + \alpha \\ \text{vertical lines}, & \text{if } Var_v > Var_h + \alpha \\ \text{smooth region}, & \text{otherwise} \end{cases} \qquad (3)$$

$$w_{ij} = \frac{1}{Z} \times \frac{1}{(i-s)^2 + (j-t)^2} \qquad (4)$$

Figure 3. Weight for a 3 x 3 patch.
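Eqs. (3)-(4) together give the fill rule for one missing pixel. A sketch under the assumption of a 3 x 3 patch, with the hypothetical name `fill_missing`; the normalizer Z appears as the accumulated denominator.

```python
import numpy as np

def fill_missing(I, s, t, var_h, var_v, alpha=25.0):
    """Fill missing pixel I[s, t] per Eq. (3): average across the
    dominant edge direction, or an inverse-square-distance weighted
    mean of the 3 x 3 patch (Eq. (4)) in smooth regions."""
    if var_h > var_v + alpha:            # horizontal line dominates
        return (float(I[s - 1, t]) + float(I[s + 1, t])) / 2
    if var_v > var_h + alpha:            # vertical line dominates
        return (float(I[s, t - 1]) + float(I[s, t + 1])) / 2
    # Smooth region: weight 1 / ((i-s)^2 + (j-t)^2), centre excluded.
    num = den = 0.0
    for i in range(s - 1, s + 2):
        for j in range(t - 1, t + 2):
            if (i, j) == (s, t):
                continue
            w = 1.0 / ((i - s) ** 2 + (j - t) ** 2)
            num += w * float(I[i, j])
            den += w                      # den plays the role of Z
    return num / den
```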

The flow chart of the proposed algorithm is shown in Figure 4. Our algorithm considers the different edge directions of the patch to be inpainted and fills the missing pixels according to the patch classification. This is the key difference from the previous methods. In the next section, we show the advantages of the proposed method.

Figure 4. Flow chart of object morphing and image inpainting.

4. Individual inpainting performance comparison

In this section, we evaluate the proposed method on a range of full-color photos with smooth and complex backgrounds. Based on these test photos, we compare our method with two previous inpainting algorithms. Based on experimental experience, the patch size is set to 3 x 3, and α, which determines the edge direction of a patch, is set to 25 for the following test photos. The implementation ran on an Intel Pentium 4 CPU at 3.40 GHz with 1.24 GB of RAM.

In our implementation, objects to be edited in the image are cut out with the GNU Image Manipulation Program (GIMP). Figure 5 presents an example of object morphing, including the original photo, the object cut out with GIMP, and the morphing result. The edited object clearly preserves the texture of the jeans in


Figure 5(c). Figure 6 presents seven 520 x 390 test photos for object morphing and background pixel inpainting. Figure 7(a-g) show the objects to be edited in the seven test photos, and Figure 7(h-n) show the morphing results.

Figure 5. Example of object morphing.

(a) (b) (c)

(d) (e) (f) (g)

Figure 6. Test photos.

(a) (b) (c) (d) (e) (f) (g)

(h) (i) (j) (k) (l) (m) (n)

Figure 7. Objects to be edited.

We now compare the proposed inpainting method with Huang et al.'s [8] and Xu et al.'s exemplar-based [7] inpainting methods. For Huang et al.'s algorithm [8], the patch size is set to 3 x 3. For Xu et al.'s algorithm [7], the patch size and the neighborhood size for computing patch similarity are set separately for each test photo to obtain the highest image quality. The peak signal-to-noise ratio (PSNR) [11] between the inpainted and original images is measured for comparison, since it is a widely accepted and commonly used standard for quantitatively measuring image quality.
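A sketch of the metric, restricted to the inpainted pixels as in the comparison tables; the function name `psnr` and the boolean mask argument are assumptions for illustration.

```python
import numpy as np

def psnr(original, inpainted, mask, peak=255.0):
    """PSNR computed only over the pixels that were inpainted
    (mask == True); inputs are 8-bit images and peak is the
    maximum possible intensity."""
    diff = original.astype(float)[mask] - inpainted.astype(float)[mask]
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")   # identical on the masked pixels
    return 10.0 * np.log10(peak ** 2 / mse)
```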

To measure the PSNR between the inpainted and original images, we also require the backgrounds of the seven test photos, shown in Figure 8, which differ from the original photos only in that the foreground is removed. In this way, we can measure the PSNR on the inpainted pixels.

(a) (b) (c)

(d) (e) (f) (g)

Figure 8. Background of test photos.

The pixels to be inpainted are colored green in Figure 9(a), and Figure 9(b-d) show the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Since we consider the edge direction of each patch to be inpainted, our method yields better visual quality at edge points. By contrast, because the backgrounds in Figure 6(a-b) are much smoother than in the other test photos, our inpainting algorithm has less of an advantage on those photos. As Table 1 shows, the more complex the background, the higher the PSNR our inpainting results obtain relative to the other two methods.


(a) (b) (c) (d) Figure 9. Individual inpainting performance comparison.

Figure 10 shows magnified views of the inpainting results on the seven test photos, which allow us to inspect the texture detail after inpainting. The background photo is shown in Figure 10(a), and Figure 10(b-d) show the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Figure 10(d) clearly preserves texture better than Figure 10(b-c), since we give more consideration to the edge direction; we thus preserve more precise detail than the two compared methods on complex backgrounds. In addition to visual quality, we are also concerned with computational overhead. Here we consider the execution time of the morphing process and the missing-pixel inpainting process. As shown in Table 2, our proposed method is approximately as fast as Huang et al.'s method [8] and overwhelmingly faster than Xu et al.'s method [7] across the different numbers of missing pixels in the seven test photos.

Besides object morphing, the proposed inpainting method can also be applied to other image processing tasks. Figure 11 shows an application example that replaces the foreground of Figure 6(g) with another object. After the replacement, the missing pixels are colored green in Figure 11(a), and Figure 11(b-d) show the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Table 3 shows that our inpainting approach performs much better than the previous methods.

Table 1. PSNR (dB).

              Huang et al. [8]   Xu et al. [7]   Our method
Figure 6(a)        18.57             15.56          17.94
Figure 6(b)        12.06             11.80          11.72
Figure 6(c)        21.47             19.99          22.68
Figure 6(d)        22.40             19.66          23.32
Figure 6(e)        22.68             19.32          23.40
Figure 6(f)        15.60             14.34          16.17
Figure 6(g)        16.38             12.83          18.79

(a) (b) (c) (d)

Figure 10. Magnified views of inpainting results.

(a) (b) (c) (d)

Figure 11. Object replacement.

Table 2. Execution Time (s).

                                    Huang et al. [8]   Xu et al. [7]   Our method
Figure 6(a): 2564 missing pixels         0.04              2.81           0.04
Figure 6(b): 2194 missing pixels         0.04              1.44           0.04
Figure 6(c): 1293 missing pixels         0.03              8.89           0.03
Figure 6(d): 4281 missing pixels         0.05             51.47           0.05
Figure 6(e): 4349 missing pixels         0.04              9.58           0.05
Figure 6(f): 4860 missing pixels         0.07              5.76           0.07
Figure 6(g): 3676 missing pixels         0.08              6.85           0.08


Table 3. Inpainting comparison of the application.

                   PSNR (dB)   Execution Time (s)
Huang et al. [8]     16.60            0.07
Xu et al. [7]        14.35           10.71
Our method           17.42            0.08

5. Hybrid inpainting method

As shown in Table 1, our method produces the highest PSNR for images with complex backgrounds, while Huang et al.'s method produces the highest PSNR for images with smooth backgrounds. We therefore present a hybrid inpainting method that uses texture complexity detection to select the more appropriate inpainting method.

We design a texture complexity detector as follows. We place a 5 x 5 mask on the source region around the missing pixels of the image to be inpainted and estimate the variance of the pixel intensities inside the mask, as in Eq. (5), where N is the number of pixels inside the mask, $x_i$ is the intensity of pixel i, and $\bar{x}$ is the average intensity in the mask.

$$variance = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2 \qquad (5)$$

We then set a threshold of 150, determined experimentally, and classify as edge points those pixels whose variance exceeds it. Table 4 shows that, except for Figure 6(c), whose missing pixels are too few, the proposed variance-based texture complexity detection reliably indicates a complex background when the percentage of edge points exceeds 20% and a smooth background when it falls below 20%. The flow chart of the proposed hybrid inpainting method is shown in Figure 12.
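The detector and the 20% decision rule can be sketched together as follows. The name `is_complex_background` and the boolean mask marking missing pixels are assumptions for illustration; the variance threshold of 150 and the 20% edge-point ratio come from the text.

```python
import numpy as np

def is_complex_background(gray, missing, var_thresh=150.0, edge_ratio=0.20):
    """For each missing pixel, take the variance (Eq. (5)) of the known
    intensities inside the surrounding 5 x 5 window; the pixel counts as
    an edge point when that variance exceeds var_thresh. The region is
    'complex' when more than edge_ratio of missing pixels are edge points."""
    h, w = gray.shape
    edge = total = 0
    for s, t in zip(*np.nonzero(missing)):
        i0, i1 = max(s - 2, 0), min(s + 3, h)
        j0, j1 = max(t - 2, 0), min(t + 3, w)
        win = gray[i0:i1, j0:j1].astype(float)
        known = win[~missing[i0:i1, j0:j1]]   # source pixels only
        if known.size == 0:
            continue
        total += 1
        if known.var() > var_thresh:          # population variance, as Eq. (5)
            edge += 1
    return total > 0 and edge / total > edge_ratio
```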

Table 4. The texture complexity detection by variance.

              Percentage of edge points
Figure 6(a)          14.55%
Figure 6(b)          19.07%
Figure 6(c)           9.40%
Figure 6(d)          28.23%
Figure 6(e)          27.67%
Figure 6(f)          24.49%
Figure 6(g)          24.61%

Figure 12. Flow chart of hybrid inpainting method.

6. Experiments

In this section, we compare the proposed hybrid inpainting method with the previous image inpainting methods. Figure 13 shows test photos with different texture complexities around the missing area. The texture complexities in Table 5 show that the pixels around the missing area in Figure 13(b) belong to a smooth background, while those in Figure 13(c) belong to a complex background; Figure 13(d) therefore contains both background conditions. According to the texture complexity, the proposed hybrid inpainting algorithm chooses the better of Huang et al.'s method and our method: it inpaints smooth areas with Huang et al.'s method and complex areas with ours.

(a) (b) (c) (d)

Figure 13. Test photos with different texture complexity.

Table 5. The texture complexity detection by variance.

               Percentage of edge points
Figure 13(b)           9.79%
Figure 13(c)          21.17%

The comparison of inpainting results is shown in Figure 14. Figure 14(a) shows the missing pixels after object morphing, colored green, and Figure 14(b-d) show the inpainting results of Huang et al.'s method, our method, and the proposed hybrid method. Table 6 shows that the hybrid inpainting method performs the best


inpainting result for both high and low texture complexity. For Figure 13(b), the hybrid method chooses Huang et al.'s method to fill the missing area, since the percentage of edge points is below 20%; its result is therefore identical to Huang et al.'s. For Figure 13(c), the hybrid method chooses our method, since the percentage of edge points is above 20%; its result is therefore identical to ours. For Figure 13(d), the hybrid method fills the left missing area with Huang et al.'s method and the right missing area with ours. Because the proposed hybrid inpainting method selects the inpainting algorithm separately for each missing area according to its background complexity, it achieves the highest PSNR.

(a) (b) (c) (d)

Figure 14. Comparison of inpainting results.

Table 6. Hybrid inpainting comparison, PSNR (dB).

               Huang et al. [8]   Our method   Hybrid method
Figure 13(b)        14.64            14.21          14.64
Figure 13(c)        17.17            17.56          17.56
Figure 13(d)        15.41            15.16          15.45

7. Conclusions

In this study, a texture-sensitive image inpainting method for use after object morphing is proposed. The object is edited by adding rows to it; to preserve texture detail, the approach minimizes the change in gradients caused by the added rows. Visually, objects keep their texture detail after morphing. The image inpainting method is based on patch priority and a new patch classification, recovering damaged patches according to the edge direction. We also present a hybrid inpainting algorithm driven by automatic texture complexity detection. Experiments demonstrate that the proposed texture-sensitive inpainting method and the hybrid inpainting method not only produce better repair results but also have a speed advantage.

References

[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of SIGGRAPH 2000, New Orleans, LA, 2000.

[2] B. Guo, C. B. Xian, Q. C. Sun, L. Liu, and F. Su, “A fast image inpainting algorithm based on structure,” 2009 Fourth International Conference on Innovative Computing, Information and Control, pp. 310-314, 2009.

[3] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of Graphics Tools, vol. 9, no. 1, pp. 25-36, 2004.

[4] M. S. Farid, and H. Khan, “Image inpainting using dynamic weighted kernels,” 2010 3rd IEEE International Conference on Computer Science and Information Technology, vol. 8, pp. 252-255, 2010.

[5] J. X. Sun, D. F. Hao, L. F. Hao, H. M. Yang, and D. B. Gu, “A digital image inpainting method based on multiscale Markov random field,” 2010 IEEE International Conference on Information and Automation, pp. 1118-1122, 2010.

[6] P. F. Felzenszwalb, and D. P. Huttenlocher, “Efficient belief propagation for early vision,” 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I-261-I-268, 2004.

[7] Z. B. Xu, and J. Sun, “Image inpainting by patch propagation using patch sparsity,” IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1153-1165, 2010.

[8] H. Y. Huang, and C. N. Hsiao, “An image inpainting technique based on illumination variation and structure consistency,” 2010 3rd International Conference on Information Sciences and Interaction Sciences, pp. 415, 2010.

[9] Y. L. Wu, C. Y. Tang, M. K. Hor, and C. T. Liu, "Automatic Image Interpolation Using Homography," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 307546, 12 pages, 2010.

[10] M. Grundmann, V. Kwatra, M. Han, and I. Essa, “Discontinuous seam-carving for video retargeting,” 2010 IEEE Conference on Computer Vision and Pattern Recognition, pp. 569-576, 2010.

[11] S. K. Mitra, and G. L. Sicuranza, “Nonlinear image processing,” San Diego, CA: Academic Press, 2001.