
Page 1: Exploitation of 3D Video Technologies

Takashi Matsuyama, Graduate School of Informatics, Kyoto University

12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS'04)

Page 2: Outline

• Introduction
• 3D Video Generation
• Deformable Mesh Model
• Texture Mapping Algorithm
• Editing and Visualization System
• Conclusion

Page 3: Introduction

• PC cluster for real-time active 3D object shape reconstruction

Page 4: 3D Video Generation

• Synchronized Multi-View Image Acquisition
• Silhouette Extraction
• Silhouette Volume Intersection
• Surface Shape Computation
• Texture Mapping

Page 5: Silhouette Volume Intersection

Page 6: Silhouette Volume Intersection

• Plane-to-Plane Perspective Projection
  – The 3D voxel space is partitioned into a group of parallel planes

Page 7: Plane-to-Plane Perspective Projection

1. Project the object silhouette observed by each camera onto a common base plane

2. Project each base plane silhouette onto the other parallel planes

3. Compute 2D intersection of all silhouettes projected on each plane
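Read procedurally, these three steps amount to warping each silhouette onto each plane and intersecting. Below is a minimal Python/OpenCV sketch, assuming each camera comes with a precomputed homography onto the common base plane (H_base) and base-plane-to-plane-k homographies (H_plane); these helper arrays and names are illustrative, not the system's actual interface, and the real implementation uses the linearized LPPPP algorithm of the next slide rather than this naive form.

```python
import numpy as np
import cv2  # used only for the homography warps


def plane_based_intersection(silhouettes, H_base, H_plane, plane_shape):
    """Plane-based silhouette volume intersection (illustrative sketch).

    silhouettes : list of 8-bit binary silhouette images (0 = background,
                  255 = object), one per camera
    H_base      : H_base[j] maps camera j's image onto the common base plane
    H_plane     : H_plane[k][j] maps camera j's base-plane silhouette onto
                  the k-th parallel plane
    plane_shape : (rows, cols) of each sampled plane
    Returns a binary volume: volume[k] is the object's cross-section on plane k.
    """
    # Step 1: project each camera's silhouette onto the common base plane.
    base = [cv2.warpPerspective(s, Hb, plane_shape[::-1], flags=cv2.INTER_NEAREST)
            for s, Hb in zip(silhouettes, H_base)]

    num_planes = len(H_plane)
    volume = np.zeros((num_planes, *plane_shape), dtype=np.uint8)
    for k in range(num_planes):
        # Step 2: duplicate each base-plane silhouette onto the k-th parallel plane.
        slabs = [cv2.warpPerspective(b, H_plane[k][j], plane_shape[::-1],
                                     flags=cv2.INTER_NEAREST)
                 for j, b in enumerate(base)]
        # Step 3: the cross-section on plane k is the 2D intersection of all silhouettes.
        cross = np.ones(plane_shape, dtype=np.uint8)
        for s in slabs:
            cross &= (s > 0).astype(np.uint8)
        volume[k] = cross
    return volume
```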

Page 8: Linearized Plane-to-Plane Perspective Projection (LPPPP) algorithm

Page 9: Parallel Pipeline Processing on a PC cluster system

Pipeline stages (the original figure shows the stage assignment for PC nodes with and without a camera attached):
1. Silhouette Extraction
2. Projection to the Base-Plane
3. Base-Plane Silhouette Duplication
4. Object Cross-Section Computation
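The real system distributes these stages over a PC cluster, with one capturing node per camera; the sketch below only illustrates the pipelining idea on a single machine, chaining stage functions with multiprocessing queues so that successive frames overlap in the pipeline. The stage function names in the comment are placeholders.

```python
from multiprocessing import Process, Queue


def stage(worker, q_in, q_out):
    # Generic pipeline stage: pull a frame, process it, push it downstream.
    while True:
        item = q_in.get()
        if item is None:          # poison pill: shut the whole pipeline down
            q_out.put(None)
            break
        q_out.put(worker(item))


def run_pipeline(frames, workers):
    # workers: ordered list of per-stage functions (must be module-level so
    # they can be pickled), e.g.
    # [extract_silhouette, project_to_base_plane, duplicate_silhouette, intersect]
    queues = [Queue() for _ in range(len(workers) + 1)]
    procs = [Process(target=stage, args=(w, queues[i], queues[i + 1]))
             for i, w in enumerate(workers)]
    for p in procs:
        p.start()
    for f in frames:              # feed frames; later frames overlap earlier stages
        queues[0].put(f)
    queues[0].put(None)
    results = []
    while (r := queues[-1].get()) is not None:
        results.append(r)
    for p in procs:
        p.join()
    return results
```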

Page 10: Parallel Pipeline Processing on a PC cluster system

• average computation time for each pipeline stage

Page 11: 3D Video Generation

• Synchronized Multi-View Image Acquisition
• Silhouette Extraction
• Silhouette Volume Intersection
• Surface Shape Computation
• Texture Mapping

Page 12: Deformable Mesh Model

• Dynamic 3D shape reconstruction
  – Reconstruct the 3D shape for each frame (intra-frame deformation)
  – Estimate 3D motion by establishing correspondences between frames t and t+1 (inter-frame deformation)
• Constraints
  – Photometric constraint
  – Silhouette constraint
  – Smoothness constraint
  – 3D motion flow constraint
  – Inertia constraint

Page 13: Intra-frame deformation

• Step 1: Convert the voxel representation into a triangle mesh [1].
• Step 2: Deform the mesh iteratively:
  – Step 2.1: Compute the force acting on each vertex.
  – Step 2.2: Move each vertex according to the force.
  – Step 2.3: Terminate if all vertex motions are small enough; otherwise go back to Step 2.1.

[1] Y. Kenmochi, K. Kotani, and A. Imiya. Marching cubes method with connectivity. In Proc. of 1999 International Conference on Image Processing, pages 361–365, Kobe, Japan, Oct. 1999.

Page 14: Intra-frame deformation

• External Force F_e(v): satisfies the photometric constraint

Page 15: Intra-frame deformation

• Internal Force F_i(v): satisfies the smoothness constraint
• Silhouette Preserving Force F_s(v): satisfies the silhouette constraint
• Overall Vertex Force: F(v) = F_i(v) + F_e(v) + F_s(v)
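Putting the three forces together, the intra-frame deformation of Page 13 becomes the following loop in sketch form; the force terms are assumed to be supplied as callables, and the step size and convergence threshold are illustrative values, not the paper's.

```python
import numpy as np


def deform_mesh(vertices, force_terms, step=0.1, tol=1e-3, max_iters=200):
    """Iteratively move each vertex along its combined force (illustrative sketch).

    vertices    : (N, 3) array of mesh vertex positions
    force_terms : callables, each mapping the (N, 3) vertices to (N, 3) forces,
                  e.g. [internal_force, external_force, silhouette_force]
    """
    v = vertices.copy()
    for _ in range(max_iters):
        # Overall vertex force F(v) = F_i(v) + F_e(v) + F_s(v)   (step 2.1)
        force = sum(f(v) for f in force_terms)
        v += step * force                                  # step 2.2: move each vertex
        if np.max(np.linalg.norm(step * force, axis=1)) < tol:
            break                                          # step 2.3: motions small enough
    return v
```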

Page 16: Performance Evaluation

Page 17: Dynamic Shape Recovery

• Inter-frame deformation
  – A model at time t deforms its shape to satisfy the constraints at time t+1, so the shape at t+1 and the motion from t to t+1 are obtained simultaneously.

Page 18: Dynamic Shape Recovery

• Define F_e(v_{t+1}), F_i(v_{t+1}), and F_s(v_{t+1}) as in the intra-frame deformation.
• Drift Force F_d(v_{t+1}): satisfies the 3D motion flow constraint.
• Inertia Force F_n(v_{t+1}): satisfies the inertia constraint.
• Overall Vertex Force: F(v_{t+1}) = F_i(v_{t+1}) + F_e(v_{t+1}) + F_s(v_{t+1}) + F_d(v_{t+1}) + F_n(v_{t+1})
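In sketch form, the inter-frame case reuses the intra-frame loop: the mesh recovered at frame t serves as the initial guess, and two extra force terms are added. The drift_force and inertia_force callables below are placeholders for the 3D motion-flow and inertia constraints, the other force callables are placeholders evaluated against the frame t+1 data, and deform_mesh is the intra-frame sketch above.

```python
# Inter-frame deformation (sketch): start from the mesh recovered at frame t
# and deform it under the frame t+1 constraints plus two extra force terms:
#   F(v_{t+1}) = F_i + F_e + F_s + F_d (drift) + F_n (inertia)
mesh_t1 = deform_mesh(
    mesh_t,                                    # shape at frame t as the initial guess
    force_terms=[internal_force, external_force, silhouette_force,
                 drift_force, inertia_force],  # all w.r.t. frame t+1 observations
)
# The per-vertex motion from t to t+1 falls out of the same deformation:
motion = mesh_t1 - mesh_t
```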

Page 19: Dynamic Shape Recovery

Page 20: 3D Video Generation

• Synchronized Multi-View Image Acquisition
• Silhouette Extraction
• Silhouette Volume Intersection
• Surface Shape Computation
• Texture Mapping

Page 21: Viewpoint Independent Patch-Based Method

• Select the most "appropriate" camera for each patch:
  1. For each patch p_i, do 2 to 5.
  2. Compute the locally averaged normal vector V_lmn using the normals of p_i and its neighboring patches.
  3. For each camera c_j, compute the viewline vector V_cj directed toward the centroid of p_i.
  4. Select the camera c* for which the angle between V_lmn and V_cj is maximum.
  5. Extract the texture of p_i from the image captured by camera c*.
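A minimal sketch of this per-patch camera selection, assuming patch centroids, unit normals, adjacency lists, and camera centres are available as arrays; all names below are illustrative.

```python
import numpy as np


def select_camera_per_patch(centroids, normals, neighbor_lists, camera_centers):
    """Pick the 'most appropriate' camera for each patch (VIPBM sketch).

    centroids      : (P, 3) patch centroids
    normals        : (P, 3) unit patch normals
    neighbor_lists : neighbor_lists[i] = indices of patches adjacent to patch i
    camera_centers : (C, 3) camera positions
    Returns an array of length P with the selected camera index per patch.
    """
    selected = np.empty(len(centroids), dtype=int)
    for i, (c, nbrs) in enumerate(zip(centroids, neighbor_lists)):
        # Locally averaged normal V_lmn over the patch and its neighbours.
        v_lmn = normals[[i, *nbrs]].mean(axis=0)
        v_lmn /= np.linalg.norm(v_lmn)
        # Viewline vectors V_cj from each camera toward the patch centroid.
        v_c = c - camera_centers
        v_c /= np.linalg.norm(v_c, axis=1, keepdims=True)
        # Maximum angle between V_lmn and V_cj <=> most negative dot product.
        selected[i] = int(np.argmin(v_c @ v_lmn))
    return selected
```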

Page 22: Viewpoint Dependent Vertex-Based Texture Mapping Algorithm

• Parameters
  – c: camera
  – p: patch
  – n: normal vector
  – I: RGB value

Page 23: Viewpoint Dependent Vertex-Based Texture Mapping Algorithm

• A depth buffer B_cj for each camera c_j
  – Records, for each pixel, the patch ID and the distance to that patch from c_j

Page 24: Viewpoint Dependent Vertex-Based Texture Mapping Algorithm

• Visible vertex v^k_{p_i,c_j} (vertex k of patch p_i seen from camera c_j):
  – The face of p_i can be observed from camera c_j when n_{p_i} · v_{c_j→p_i} < 0, where v_{c_j→p_i} is the viewing direction from c_j toward p_i
  – Project the visible patches onto B_cj
  – Check the visibility of each vertex using the buffer
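A much simplified sketch of such a buffer, splatting projected patch vertices instead of rasterising whole faces as a real renderer would; the project helper and the array layouts are assumptions of the sketch.

```python
import numpy as np


def build_depth_buffer(patch_vertices, patch_normals, cam_center, project, size):
    """Simplified depth buffer B_cj for one camera (illustrative point-splatting).

    patch_vertices : (P, 3, 3) the three 3D vertices of each triangular patch
    patch_normals  : (P, 3) outward unit patch normals
    cam_center     : (3,) camera centre of c_j
    project        : maps (N, 3) points to (N, 2) integer (u, v) pixel coords
    size           : (rows, cols) of the buffer
    Returns (patch_id_buffer, depth_buffer); unfilled pixels hold -1 / +inf.
    """
    ids = np.full(size, -1, dtype=int)
    depth = np.full(size, np.inf)
    for pid, (tri, n) in enumerate(zip(patch_vertices, patch_normals)):
        view = tri.mean(axis=0) - cam_center
        if n @ view >= 0:            # back-facing: the patch cannot be observed
            continue
        for p3d, (u, v) in zip(tri, project(tri)):
            d = np.linalg.norm(p3d - cam_center)
            if 0 <= v < size[0] and 0 <= u < size[1] and d < depth[v, u]:
                depth[v, u] = d      # record patch ID and distance, as on the slide
                ids[v, u] = pid
    return ids, depth
```

A vertex of patch p_i would then be taken as visible from c_j when the buffer at its projected pixel records p_i (or a depth no closer than the vertex itself).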

Page 25: Viewpoint Dependent Vertex-Based Texture Mapping Algorithm

1. Compute the RGB values of all vertices visible from each camera.
2. Specify the viewpoint eye.
3. For each patch p_i, do 4 to 9.
4. If p_i is visible from eye (n_{p_i} · v_{eye→p_i} < 0), do 5 to 9.
5. Compute a weight for each camera: w_{c_j} = v_{c_j→p_i} · v_{eye→p_i}.
6. For each vertex v^k_{p_i} of patch p_i, do 7 to 8.
7. Compute the normalized weight: w̄_{c_j} = w_{c_j} / Σ_l w_{c_l}.
8. Compute the RGB value: I(v^k_{p_i}) = Σ_j w̄_{c_j} · I(v^k_{p_i,c_j}), summing over the cameras c_j from which vertex k is visible.
9. Generate the texture of patch p_i by linearly interpolating the RGB values of its vertices.
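A minimal sketch of steps 5 to 8 for a single patch, assuming the per-camera vertex colours (step 1) and per-camera vertex visibility (Page 24) have already been computed; the clipping of negative weights is a safeguard added in the sketch, not something stated on the slide.

```python
import numpy as np


def blend_vertex_colors(v_cam_to_patch, v_eye_to_patch, vertex_colors, visible):
    """Viewpoint-dependent blending of one patch's vertex colours (sketch).

    v_cam_to_patch : (C, 3) unit vectors from each camera c_j toward the patch
    v_eye_to_patch : (3,) unit vector from the viewpoint 'eye' toward the patch
    vertex_colors  : (C, 3, 3) RGB of the patch's 3 vertices as seen by each camera
    visible        : (C, 3) bool, True if vertex k is visible from camera c_j
    Returns a (3, 3) array: blended RGB for the patch's three vertices.
    """
    # Step 5: per-camera weight w_cj = v_{cj->pi} . v_{eye->pi}
    w = v_cam_to_patch @ v_eye_to_patch
    w = np.clip(w, 0.0, None)                  # ignore cameras facing away (safeguard)
    out = np.zeros((3, 3))
    for k in range(3):                         # steps 6-8, per vertex
        wk = w * visible[:, k]                 # only cameras that see this vertex
        if wk.sum() > 0:
            wk = wk / wk.sum()                 # step 7: normalized weights
            out[k] = wk @ vertex_colors[:, k]  # step 8: weighted average RGB
    return out
```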

Page 26: Performance

• Viewpoint Independent Patch-Based Method (VIPBM)
• Viewpoint Dependent Vertex-Based Texture Mapping Algorithm (VDVBM)
  – VDVBM-1: including the real image captured by camera c_j itself
  – VDVBM-2: excluding the real images
• Mesh: converted from voxel data
• D-Mesh: after deformation

Page 27: Performance

Page 28: Performance

(Figure: results plotted against frame number.)

Page 29: Editing and Visualization System

• Methods to generate camera-works
  – Key Frame Method
  – Automatic Camera-Work Generation Method
• Virtual Scene Setup
  – Virtual camera
  – Background
  – Object

Page 30: Key Frame Method

• Specify the parameters (position and rotation of the virtual camera, the object, etc.) for arbitrary key frames; intermediate frames are interpolated, as sketched below.
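A minimal sketch of the key-frame idea for one parameter, the virtual camera position, linearly interpolated between user-specified key frames; the real editor also handles rotations and the object and background parameters.

```python
import numpy as np


def interpolate_camera_positions(key_frames, num_frames):
    """Linear key-frame interpolation of the virtual-camera position (sketch).

    key_frames : dict {frame_index: (x, y, z)} specified by the user
    num_frames : total number of frames to generate
    Returns an (num_frames, 3) array of camera positions.
    """
    idx = np.array(sorted(key_frames))
    pos = np.array([key_frames[i] for i in idx], dtype=float)
    t = np.arange(num_frames)
    # Interpolate each coordinate independently between neighbouring key frames
    # (frames outside the first/last key frame are clamped by np.interp).
    return np.stack([np.interp(t, idx, pos[:, d]) for d in range(3)], axis=1)
```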

Page 31: Automatic Camera-Work Generation Method

• Object's parameters (standing human): position, height, direction
• The user has only to specify:
  1) Framing of the picture
  2) Appearance of the object from the virtual camera
• From these, the virtual camera parameters are computed:
  – Distance d between the virtual camera and the object
  – Position of the virtual camera (x_c, y_c, z_c)

Page 32: Automatic Camera-Work Generation Method

• Distance between the virtual camera and the object: d = h / (2 r tan(θ/2)), where h is the object's height, r the specified framing ratio (the fraction of the image height the object should occupy), and θ the virtual camera's vertical viewing angle
• Position of the virtual camera: (x_c, y_c, z_c)
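Under the reconstruction above, placing the virtual camera reduces to a few lines. Both the distance formula and the choice to place the camera along the object's facing direction are assumptions of this sketch rather than details taken from the slide.

```python
import numpy as np


def place_virtual_camera(obj_pos, obj_height, obj_dir, framing_ratio, fov_deg):
    """Place the virtual camera in front of a standing object (sketch).

    obj_pos       : (3,) object position (e.g. its mid-height point)
    obj_height    : object height h
    obj_dir       : (3,) unit vector the object is facing
    framing_ratio : r, fraction of the image height the object should occupy
    fov_deg       : virtual camera's vertical viewing angle theta, in degrees
    """
    theta = np.radians(fov_deg)
    # Distance so that an object of height h fills a fraction r of the frame
    # (assumed formula: d = h / (2 r tan(theta/2))).
    d = obj_height / (2.0 * framing_ratio * np.tan(theta / 2.0))
    # Assumption: put the camera at distance d along the object's facing
    # direction, looking back at the object.
    cam_pos = obj_pos + d * obj_dir
    return cam_pos, d
```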

Page 33: Conclusion

• A PC cluster system with distributed active cameras for real-time 3D shape reconstruction
  – Plane-based volume intersection method
  – Plane-to-Plane Perspective Projection algorithm
  – Parallel pipeline processing
• A dynamic 3D mesh deformation method for obtaining accurate 3D object shape
• A texture mapping algorithm for high-fidelity visualization
• A user-friendly 3D video editing system

Page 34: References

1) T. Matsuyama and T. Takai, "Generation, visualization, and editing of 3D video," in Proc. of Symposium on 3D Data Processing Visualization and Transmission, pages 234–245, Padova, Italy, June 2002.

2) T. Matsuyama, X. Wu, T. Takai, and T. Wada, "Real-time dynamic 3D object shape reconstruction and high-fidelity texture mapping for 3D video," IEEE Trans. on Circuits and Systems for Video Technology, pages 357–369, 2004.

3) T. Wada, X. Wu, S. Tokai, and T. Matsuyama, "Homography based parallel volume intersection: Toward real-time reconstruction using active camera," in Proc. of International Workshop on Computer Architectures for Machine Perception, pages 331–339, Padova, Italy, Sept. 2000.