Recovering Geometric, Photometric and Kinematic Properties from Images
Jitendra Malik, Computer Science Division, University of California at Berkeley
Work supported by ONR, Interval Research, Rockwell, MICRO, NSF, JSEP
Physics of Image Formation
• Lighting
• BRDFs
• Shape and spatial layout
• Internal DOFs
[Diagram: these scene properties map to Images]
Solving inverse problems requires models
• Define suitable parametric models for geometry, lighting, BRDFs, and kinematics.
• Recover parameters using optimization techniques.
• Humans are better at selecting models; computers are better at recovering parameters (a toy sketch follows below)
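A toy illustration of this recipe (my own example, not from the talk): assume a Lambertian model I = max(0, n·L) with known surface normals, where the light vector L folds in albedo to sidestep a scale ambiguity, and recover L from observed intensities by nonlinear least squares.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    normals = rng.normal(size=(200, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Synthetic "observations" from a known light vector (albedo folded in).
    true_L = 0.7 * np.array([0.3, 0.5, 0.81])
    intensity = np.clip(normals @ true_L, 0.0, None)

    def residuals(L):
        # Lambertian forward model: I = max(0, n . L)
        return np.clip(normals @ L, 0.0, None) - intensity

    fit = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0]))
    print("recovered light vector:", fit.x)   # approaches true_L

The parametric model here has only three parameters, so the data constrain it well, which is exactly the slide's point: choose models small enough that the data can pin them down.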
But there will always be unmodeled detail…
• Models are always approximate.
• Adding more parameters doesn't help; data will be insufficient to recover these parameters.
Hybrid Approaches are best!
• ANALYSIS
– Use images to recover a subset of object parameters, chosen judiciously so that they can be recovered robustly.
• SYNTHESIS
– Render using appropriately selected images or subimages, transformed using the model.
Talk Outline
• Geometry
– Debevec, Taylor and Malik, SIGGRAPH 96
• Photometry
– Yu and Malik, SIGGRAPH 98
– Debevec and Malik, SIGGRAPH 97
• Kinematics
– Bregler and Malik, CVPR 98
Modeling and Rendering Architecture from Photographs
Paul Debevec, Camillo Taylor, Jitendra Malik
Computer Vision Group, Computer Science Division
University of California at Berkeley
George Borshukov
Yizhou Yu
Overview
• Photogrammetric Modeling
– Allows the user to construct a parametric model of the scene directly from photographs
• Model-Based Stereo
– Recovers additional geometric detail through stereo correspondence
• View-Dependent Texture-Mapping
– Renders each polygon of the recovered model using a linear combination of the three nearest views
Our Modeling Method
• The user represents the scene as a collection of blocks
• The computer solves for the sizes and positions of the blocks according to user-supplied edge correspondences (see the sketch below)
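A minimal sketch of that solve, under simplifying assumptions: a single box block, known camera intrinsics, and user-marked corner points standing in for the edge-distance objective the actual system minimizes. One box dimension is held fixed, since a single view determines the model only up to scale; all numbers and names are illustrative.

    import numpy as np
    from scipy.optimize import least_squares

    K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    H_KNOWN = 3.0   # one known dimension, fixing the single-view scale

    def box_corners(p):
        # p = (width, depth, tx, ty, tz): block size and position.
        w, d, tx, ty, tz = p
        signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                          for sy in (-1, 1) for sz in (-1, 1)], float)
        return signs * [w / 2, H_KNOWN / 2, d / 2] + [tx, ty, tz]

    def project(X):
        x = X @ K.T                 # camera at the origin, looking down +z
        return x[:, :2] / x[:, 2:]

    true_p = np.array([2.0, 1.5, 0.5, -0.2, 10.0])
    marked = project(box_corners(true_p))   # stand-in for user-marked points

    fit = least_squares(
        lambda p: (project(box_corners(p)) - marked).ravel(),
        x0=np.array([1.0, 1.0, 0.0, 0.0, 8.0]))
    print("recovered block parameters:", fit.x)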
[Figures: Block Model, User-Marked Edges, Recovered Model]
Arc de Triomphe
Modeled from five photographs by George Borshukov
Surfaces of Revolution
Taj Mahal, modeled from one photograph by G. Borshukov
[Figures: Recovered Model; Photograph vs. Synthetic View]
Recovering Additional Detail with Model-Based Stereo
• Scenes will have geometric detail not captured in the model
• This detail can be recovered automatically through model-based stereo
[Figures: Scene with Geometric Detail; Approximate Block Model]
Model-Based Stereo
• Given a key and an offset image:
– Project the offset image onto the model
– View the model through the key camera, producing the warped offset image (a warp sketch follows the figure below)
• Stereo becomes feasible between the key and warped offset images because:
– Disparities are small
– Foreshortening is greatly reduced
[Figures: Key Image, Warped Offset Image, Offset Image, Disparity Map]
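A minimal sketch of the warp step, assuming the model facet seen by both cameras is a single plane n·X = d in the key camera's frame, shared intrinsics K, key camera K[I|0] and offset camera K[R|t]; the mapping is then the standard plane-induced homography. Function names are illustrative.

    import numpy as np

    def plane_homography(K, R, t, n, d):
        # Maps key-image pixels to offset-image pixels for points on the
        # model plane n.X = d (key frame); offset camera is K[R|t].
        return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

    def warp_offset_to_key(offset_img, H, key_shape):
        # Inverse warp with nearest-neighbor sampling: for each key pixel,
        # look up where the corresponding plane point falls in the offset
        # image. Stereo on (key, warped offset) then has small disparities.
        h, w = key_shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        q = H @ pix
        u = np.rint(q[0] / q[2]).astype(int)
        v = np.rint(q[1] / q[2]).astype(int)
        oh, ow = offset_img.shape[:2]
        ok = (u >= 0) & (u < ow) & (v >= 0) & (v < oh)
        out = np.zeros((h * w,) + offset_img.shape[2:], offset_img.dtype)
        out[ok] = offset_img[v[ok], u[ok]]
        return out.reshape((h, w) + offset_img.shape[2:])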
Synthetic Views of Refined Model
Four images composited with View-Dependent Texture Mapping
Rendering with View-Dependent Texture Mapping
• Triangulate the view hemisphere
• For each polygon, determine which images viewed it from which angles
• Label each triangle vertex according to the best-viewing image
[Diagram: view hemisphere triangulated over camera positions 1-5]
• To render, determine to which triangle the current viewpoint belongs
• Compute barycentric weights for the triangle vertices
• Render the polygon with a weighted average of the three vertex images (see the sketch below)
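A minimal sketch of the blend, assuming the hemisphere triangle containing the current view direction has already been found and its vertices are given as 2-D (projected) directions; names are illustrative.

    import numpy as np

    def barycentric(p, a, b, c):
        # Solve p = wa*a + wb*b + wc*c with wa + wb + wc = 1 (2-D points).
        T = np.column_stack([b - a, c - a])
        wb, wc = np.linalg.solve(T, p - a)
        return np.array([1.0 - wb - wc, wb, wc])

    def blend_views(p, tri, images):
        # tri: the triangle's three 2-D vertices; images: the photographs
        # labeled at those vertices, resampled onto the polygon's texture.
        w = barycentric(p, *tri)
        return sum(wi * img.astype(float) for wi, img in zip(w, images))

As the viewpoint crosses a triangle edge, one weight falls to zero, so the rendered texture changes smoothly between views.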
The Campanile (Debevec et al.)
• 20 photographs used
• Approx. 1-2 weeks of modeling time
• Real-time rendering
Recovered Campus Model
Campanile + 40 Buildings