TRANSCRIPT
Incorporating Dynamic Real Objects
into Immersive Virtual Environments
Benjamin Lok
University of North Carolina at Charlotte
Samir Naik
Disney VR Studios
Mary Whitton, Frederick P. Brooks Jr.
University of North Carolina at Chapel Hill
April 28th, 2003
Outline
• Motivation
• Managing Collisions Between Virtual and Dynamic Real Objects
• NASA Case Study
• Conclusion
Why we need dynamic real objects in VEs
How we get dynamic real objects in VEs
Applying the system to a driving real world problem
Assembly Verification
• Given a model, we would like to explore:
– Can it be readily assembled?
– Can repairers service it?
• Example:
– Changing an oil filter
– Attaching a cable to a payload
Current Immersive VE Approaches
• Most objects are purely virtual:
– User
– Tools
– Parts
• Most virtual objects are not registered with a corresponding real object.
• System has limited shape and motion information of real objects.
Ideally
• Would like:
– Accurate virtual representations, or avatars, of real objects
– Virtual objects responding to real objects
– Haptic feedback
– Correct affordances
– Constrained motion
• Example: Unscrewing a virtual oil filter from a car engine model
Dynamic Real Objects
• Tracking and modeling dynamic objects (which change shape and appearance) would:
– Improve interactivity
– Enable visually faithful virtual representations
Previous Work: Incorporating Real Objects into VEs
• Non-Real Time– Virtualized Reality (Kanade, et al.)
• Real Time– Image Based Visual Hulls [Matusik00, 01]– 3D Tele-Immersion [Daniilidis00]
• How important is it to get real objects into a virtual environment?
Previous Work: Interaction and Collision Detection
• Commercial Interaction Solutions– Tracked mice, gloves, joysticks
• Augment specific objects for interaction– Doll’s head [Hinkley1994]
– Plate [Hoffman1998]
• Virtual object collision detection
– Traditional packages [Ehmann2000]
– Hardware accelerated [Hoff2001]
• Virtual object – real object
– A priori modeling and tracking [Breen1996]
Real-time Object Reconstruction System
• Handle dynamic objects (generate a virtual representation)
• Interactive rates
• Bypass an explicit 3D modeling stage
• Inputs: outside-looking-in camera images
• Generate an approximation of the real objects (visual hull)
Reconstruction Algorithm
1. Start with live camera images
2. Image subtraction
3. Use images to calculate the volume intersection (visual hull)
4. Composite with the VE
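Step 3 above, the volume intersection, can be sketched in software with a toy voxel grid and axis-aligned orthographic silhouettes. This is only an illustration of the idea: the actual system evaluated the intersection per-pixel on graphics hardware, with calibrated projective cameras and no voxel grid. All names here are made up for the sketch.

```python
import numpy as np

def visual_hull(silhouettes):
    """Carve a voxel grid: keep only voxels whose projections land on
    object pixels in every silhouette.

    silhouettes: dict mapping a projection axis (0, 1, or 2) to a 2D
    boolean mask (True = object pixel). Returns a 3D boolean grid."""
    n = next(iter(silhouettes.values())).shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    for axis, mask in silhouettes.items():
        # Broadcast the 2D silhouette along its projection axis and intersect.
        hull &= np.expand_dims(mask, axis=axis)
    return hull

n = 8
sil_x = np.zeros((n, n), dtype=bool); sil_x[2:5, 2:5] = True  # y-z silhouette
sil_z = np.zeros((n, n), dtype=bool); sil_z[2:5, 2:5] = True  # x-y silhouette
hull = visual_hull({0: sil_x, 2: sil_z})
print(hull.sum())  # 27: a 3x3x3 block survives both silhouettes
```

The intersection is conservative by construction: the hull always contains the real object, which is what makes it safe to use for collision queries.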
Visual Hull Computation
• Visual hull - tightest volume given a set of object silhouettes
• Intersection of the projection of object pixels
Volume Querying in Hardware
A point P inside the visual hull of the real objects (VH_real) projects onto an object pixel in each camera image:

P ∈ VH_real iff ∀i ∃j : P ∈ C_i^-1(O_i,j)

where C_i is the projection of camera i and O_i,j is the j-th object pixel in camera i's image.
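The membership test can be sketched in software with toy orthographic "cameras" (the actual system performed this query per-pixel in graphics hardware with projective cameras); every name and grid size here is illustrative.

```python
import numpy as np

def make_camera(axes):
    """Toy orthographic camera: keeps the two world axes in `axes`.
    Its image is a 10x10 boolean mask of object pixels."""
    return {"axes": axes, "mask": np.zeros((10, 10), dtype=bool)}

def project(cam, p):
    u, v = (int(p[a]) for a in cam["axes"])
    return u, v

def in_visual_hull(p, cameras):
    """P is inside the visual hull iff every camera sees an
    object pixel at P's projection."""
    for cam in cameras:
        u, v = project(cam, p)
        if not (0 <= u < 10 and 0 <= v < 10) or not cam["mask"][u, v]:
            return False
    return True

cam_z = make_camera((0, 1))   # views the x-y plane
cam_x = make_camera((1, 2))   # views the y-z plane
# A small cube of "object" in [4,6)^3 produces square silhouettes:
cam_z["mask"][4:6, 4:6] = True
cam_x["mask"][4:6, 4:6] = True

print(in_visual_hull((5, 5, 5), [cam_z, cam_x]))  # True: inside both silhouettes
print(in_visual_hull((1, 1, 1), [cam_z, cam_x]))  # False: outside
```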
Implementation
• 1 HMD-mounted and 3 wall-mounted cameras
• SGI Reality Monster – handles up to 7 video feeds
• 15-18 fps
• Estimated error: 1 cm
• Performance will increase as graphics hardware continues to improve
Managing Collisions Between Virtual and Dynamic Real Objects
Approach
• We want virtual objects to respond to real object avatars
• This requires detecting when real and virtual objects intersect
• If intersections exist, determine plausible responses
• Only virtual objects can move or deform at collision.
• Both real and virtual objects are assumed stationary at collision.
Detecting Collisions Approach
For each virtual object i:
1. Volume-query each triangle of object i.
2. Are there real-virtual collisions?
– No: done with object i.
– Yes: determine the points on the virtual object in collision, then calculate a plausible collision response.
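The per-object loop above can be sketched as follows, assuming an `inside_hull` predicate such as the per-camera projection test. The object names and the spherical stand-in for the real-object hull are illustrative; the actual system volume-queried triangles on graphics hardware.

```python
def detect_collisions(virtual_objects, inside_hull):
    """For each virtual object, volume-query its triangle points against the
    real-object visual hull; return, per object, the points found in collision."""
    collisions = {}
    for name, triangles in virtual_objects.items():
        points_in_collision = [
            p for tri in triangles for p in tri if inside_hull(p)
        ]
        if points_in_collision:        # otherwise: done with this object
            collisions[name] = points_in_collision
    return collisions

# Toy hull: the unit ball around the origin stands in for the real-object hull.
inside = lambda p: sum(c * c for c in p) <= 1.0

objs = {
    "cube": [((0.1, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))],
    "far_plate": [((5.0, 5.0, 0.0), (6.0, 5.0, 0.0), (5.0, 6.0, 0.0))],
}
print(detect_collisions(objs, inside))  # only "cube" has a point inside the hull
```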
Resolving Collisions Approach
1. Estimate the point of deepest virtual object penetration, CPobj.
2. Define a plausible recovery vector: Vrec = RPobj - CPobj.
3. Back out the virtual object along Vrec until CPobj = CPhull.
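The three recovery steps can be sketched as below, assuming CPhull has already been found along the recovery vector. The names (`cp_obj`, `rp_obj`, `cp_hull`) mirror the slide's symbols, but the values and the simple rigid translation are illustrative stand-ins.

```python
import numpy as np

def resolve_collision(vertices, cp_obj, rp_obj, cp_hull):
    """Translate the virtual object so that CPobj moves onto CPhull along Vrec.
    cp_obj:  estimated point of deepest penetration on the virtual object
    rp_obj:  a reference point of the virtual object (e.g., its center)
    cp_hull: the point on the visual hull surface along the recovery vector"""
    v_rec = rp_obj - cp_obj                      # step 2: plausible recovery vector
    v_rec = v_rec / np.linalg.norm(v_rec)
    distance = np.linalg.norm(cp_hull - cp_obj)  # how far CPobj sits inside the hull
    return vertices + v_rec * distance           # step 3: back out the object

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
moved = resolve_collision(verts,
                          cp_obj=np.array([0.0, 0.0, 0.0]),
                          rp_obj=np.array([1.0, 0.0, 0.0]),
                          cp_hull=np.array([0.5, 0.0, 0.0]))
print(moved)  # both vertices shifted +0.5 along x
```

Only the virtual object moves, matching the slide's assumption that real objects cannot be displaced by the simulation.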
Results
Collision Detection / Response Performance
• Volume-query about 5000 triangles per second
• Error of collision points is ~0.75 cm
– Depends on average size of virtual object triangles
– Tradeoff between accuracy and time
– Plenty of room for optimizations
Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task
NASA Driving Problems
• Given payload models, designers and engineers want to evaluate:
– Assembly feasibility
– Assembly training
– Repairability
• Current approaches:
– Measurements
– Design drawings
– Step-by-step assembly instruction list
– Low-fidelity mock-ups
Task
• Wanted a plausible task given common assembly jobs.
• Abstracted a payload layout task:
– Screw in tube
– Attach power cable
Task Goal
• Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A
Videos of Task
Results
(Pre-experience) How much space is necessary?
  #1: 14 cm    #2: 14.2 cm    #3: 15-16 cm    #4: 15 cm
(Pre-experience) How much space would you actually allocate?
  #1: 21 cm    #2: 16 cm    #3: 20 cm    #4: 15 cm
Actual space required in VE
  #1: 15 cm    #2: 22.5 cm    #3: 22.3 cm    #4: 23 cm
(Post-experience) How much space would you actually allocate?
  #1: 18 cm    #2: 16 cm (modify tool)    #3: 25 cm    #4: 23 cm

The tube was 14 cm long, 4 cm in diameter.
Results
• Late discovery of similar problems is not uncommon.
Time cost of the spacing error
  #1: days to months    #2: 30 days    #3: days to months    #4: months
Financial cost of the spacing error
  #1: $100,000s - $1,000,000+    #2: largest cost is huge hit in schedule    #3: $100,000s - $1,000,000+    #4: $100,000s
Case Study Conclusions
• Object reconstruction VE benefits:
– Specialized tools and parts require no modeling
– Short development time to try multiple designs
– Allows early testing of subassembly integration from multiple suppliers
• Possible to identify assembly, design, and integration issues early, resulting in considerable savings in time and money.
Conclusions
Innovations
• Presented algorithms for
– Incorporation of real objects into VEs
– Handling interactions between real and virtual objects
• Applied to real-world task
Future Work
• Improved model fidelity
• Improved collision detection and response
• Apply system to upcoming NASA payload projects.
Thanks
Collaborators
Dr. Larry F. Hodges
Danette Allen (NASA LaRC)
UNC-CH Effective Virtual Environments
UNC-C Virtual Environments Group
For more information: http://www.cs.uncc.edu/~bclok
(I3D2001, VR2003)
Email: [email protected]
Funding Agencies
The LINK Foundation
NIH (Grant P41 RR02170)
National Science Foundation
Office of Naval Research
Object Pixels
• Identify new objects
• Perform image subtraction
• Separate the object pixels from background pixels
current image - background image = object pixels
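A minimal sketch of the subtraction step above, assuming grayscale images and an illustrative fixed threshold; a real system would calibrate the threshold per camera and cope with noise, shadows, and lighting drift.

```python
import numpy as np

def object_pixels(current, background, threshold=25):
    """Return a boolean mask of object pixels: current image minus the stored
    background image, thresholded. Inputs are grayscale uint8 arrays."""
    # Promote to a signed type so the subtraction cannot wrap around.
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

background = np.full((4, 4), 100, dtype=np.uint8)
current = background.copy()
current[1:3, 1:3] = 200            # a "new object" enters the scene

mask = object_pixels(current, background)
print(mask.sum())  # 4 object pixels
```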
Current Projects at UNC-Charlotte with Dr. Larry Hodges
• Digitizing Humanity
– Basic research into virtual characters
• What is important?
• How does personality affect interaction?
– Applications:
• Social situations
• Human Virtual-Human Interaction
• Virtual Reality
– Basic research:
• Incorporating avatars
• Locomotion effect on cognitive performance
– Applications:
• Balance disorders (w/ Univ. of Pittsburgh)
Research Interests
• Computer Graphics – computer scientists are toolsmiths
– Applying graphics hardware to:
• 3D reconstruction
• Simulation
– Visualization
– Interactive graphics
• Virtual Reality
– What makes a virtual environment effective?
– Applying to assembly verification & clinical psychology
• Human Computer Interaction
– 3D interaction
– Virtual humans
• Assistive Technology
– Computer vision and mobile technology to help the disabled
Future Directions
• Long-term goals:
– Help build the department into a leader in using graphics for visualization, simulation, and training
– Effective Virtual Environments (Graphics, Virtual Reality, and Psychology)
– Digital Characters (Graphics & HCI)
• Additional benefit of having nearby companies (Disney) and military
– Assistive Technology (Graphics, VR, and Computer Vision)
Occlusion