
Ray Space Factorization for From-Region Visibility

Authors: Tommer Leyvand, Olga Sorkine, Daniel Cohen-Or

Presenter: Alexandre Mattos

Goal

Real-time walkthroughs of large 3D scenes
Server contains all world geometry and needs to transmit it to clients
Only send geometry that is visible to the client, reducing network traffic
Client needs to compute what geometry to render

Strategy

Point-wise visibility: what the user can see from their exact current location
Needs to be recalculated every time the player moves
Will not work due to network latency
Might not work on the client either

Strategy

Divide the scene into view cells
As the user moves around, calculate the visible geometry for that view cell and adjacent cells

From-Region Visibility

Given a view cell, compute what is visible from that view cell

An object is visible if there is at least one ray exiting the view cell that intersects that object

From-Region Visibility

Assumptions

Scenes are largely 2.5D + ε: not much vertical complexity. Example: skyscrapers of varying heights

The algorithm is first explained assuming 2.5D

Dividing Up the Problem

Paper splits the problem into easier-to-solve vertical and horizontal components

Determine if objects occlude each other vertically or horizontally and then combine the results

Build a K-d tree over the entire scene, allowing front-to-back traversal of the scene

View Cell Parameterization

Define two concentric squares: one is the view cell, one lies outside the view cell
Parameterize the inner and outer squares with S and T respectively

Rays (S,T)

All rays (view directions) leaving the view cell can be defined by a pair (S, T)
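As a concrete illustration, a ray leaving the view cell can be assigned its (S, T) pair by finding where it exits the inner and outer squares and mapping each exit point to a perimeter parameter. This is a minimal 2D sketch under assumed conventions (axis-aligned squares centered at the origin, counter-clockwise perimeter walk); it is not the paper's exact parameterization.

```python
import math

def exit_point(origin, direction, half):
    """Exit point of a ray (origin inside) from an axis-aligned
    square of half-size `half` centered at the origin."""
    ox, oy = origin
    dx, dy = direction
    t = math.inf
    if dx > 0: t = min(t, (half - ox) / dx)
    if dx < 0: t = min(t, (-half - ox) / dx)
    if dy > 0: t = min(t, (half - oy) / dy)
    if dy < 0: t = min(t, (-half - oy) / dy)
    return (ox + t * dx, oy + t * dy)

def perimeter_param(point, half):
    """Map a boundary point of the square to a scalar in [0, 4),
    walking counter-clockwise from the corner (half, -half)."""
    x, y = point
    eps = 1e-9
    if abs(x - half) < eps:             # right edge
        return (y + half) / (2 * half)
    if abs(y - half) < eps:             # top edge
        return 1 + (half - x) / (2 * half)
    if abs(x + half) < eps:             # left edge
        return 2 + (half - y) / (2 * half)
    return 3 + (x + half) / (2 * half)  # bottom edge

def ray_to_st(origin, direction, inner_half, outer_half):
    """(S, T): where the ray crosses the inner (view cell) and outer squares."""
    s = perimeter_param(exit_point(origin, direction, inner_half), inner_half)
    t = perimeter_param(exit_point(origin, direction, outer_half), outer_half)
    return s, t
```

For example, a ray from the cell center straight along +x crosses the middle of both right edges, giving (s, t) = (0.5, 0.5).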

Horizontal Component

Orthographically project all geometry onto the ground

The projected geometry has a mapping to (S, T) parameter space

Key Insight

Render geometry in parameter space front to back

If the geometry's footprint in parameter space has already been rendered, the geometry is occluded

Vertical Component

(S, T) defines a plane. The intersection of the plane with a triangle defines a vertical line and casts a directional umbra

Vertical Component

An object is occluded if it is contained within the vertical umbra

Vertical Component

One way to solve the problem: traverse the scene front to back and maintain an aggregated umbra

Video

In the 2.5D case only: objects are visible if the slope of their umbra is larger than that of the current aggregated umbra
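In 2.5D this reduces to keeping, per (S, T) cell, the steepest umbra slope seen so far. A minimal sketch, assuming a discretized parameter space and a single scalar slope per object:

```python
import numpy as np

RES = 256
# Aggregated umbra: for each (S, T) cell, the steepest occluder slope
# seen so far in front-to-back order. -inf means nothing rendered yet.
agg_slope = np.full((RES, RES), -np.inf)

def process_2_5d(footprint, slope):
    """Front-to-back processing of one object in 2.5D.
    Visible iff its umbra slope exceeds the aggregated slope somewhere
    in its footprint; the object then raises the umbra where it covers."""
    visible = bool(np.any(footprint & (slope > agg_slope)))
    agg_slope[footprint] = np.maximum(agg_slope[footprint], slope)
    return visible
```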

Putting It Together

Render all geometry in 3D as (S, T, α), where α is the angle of the umbra at that point

Using graphics hardware, we can do occlusion tests for all geometry

Hardware Implementation

Disable Z-buffer updates and render the geometry; if a single pixel passes, the geometry is visible

To update the occlusion map, render the geometry with Z-buffer updates enabled
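The two-pass pattern above can be emulated on the CPU with an explicit Z-buffer. This is a sketch of the idea; a real implementation uses hardware occlusion queries rather than reading pixels back:

```python
import numpy as np

W = H = 256
zbuffer = np.full((H, W), np.inf)  # standard depth test: smaller = closer

def occlusion_test(depth_img):
    """Pass 1, Z-writes disabled: visible iff at least one fragment
    passes the depth test. depth_img holds np.inf wherever the
    geometry covers no pixel."""
    return bool(np.any(depth_img < zbuffer))

def update_occlusion(depth_img):
    """Pass 2, Z-writes enabled: merge the occluder into the buffer."""
    np.minimum(zbuffer, depth_img, out=zbuffer)
```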

Algorithm

Traverse K-d tree
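The front-to-back order can be sketched by always descending first into the child on the view cell's side of each split. The node layout and the use of the cell center to order children are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KdNode:
    axis: int = 0                      # split axis (0 = x, 1 = y)
    split: float = 0.0                 # split coordinate
    left: Optional["KdNode"] = None    # child on the low side of the split
    right: Optional["KdNode"] = None   # child on the high side
    name: str = ""                     # leaf payload (stands in for geometry)

    @property
    def leaf(self):
        return self.left is None and self.right is None

def traverse_front_to_back(node, cell_center, visit):
    """Visit leaves nearest-first relative to the view-cell center."""
    if node is None:
        return
    if node.leaf:
        visit(node)
        return
    near, far = ((node.left, node.right)
                 if cell_center[node.axis] <= node.split
                 else (node.right, node.left))
    traverse_front_to_back(near, cell_center, visit)
    traverse_front_to_back(far, cell_center, visit)
```

In the full algorithm, a node whose entire footprint fails the occlusion test can be skipped, culling its whole subtree.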

Extension from 2.5D to 3D

Comparing only α is no longer valid

3D Umbra

Need to keep four angles (α1, α2, α3, α4) to represent an umbra uniquely

Merging Umbra

There are many cases to handle; umbrae may be disjoint

Hardware Implementation

For all geometry render it in 3D as (S, T, V) where V = (α1, α2, α3, α4)

Use a pixel/fragment shader that checks whether a pixel is visible based on V

Pass the V values to the graphics card in a buffer; render (S, T, X), where X is an index into the buffer of V values

Hardware Limitations

Can only maintain one aggregated umbra per vertical slice
Pack 16-bit floats into 32-bit floats to allow two aggregated umbrae
Use multiple buffers to store more umbrae

The paper claims that one umbra is sufficient because umbrae merge rapidly
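The packing trick can be sketched with NumPy: two 16-bit floats share one 32-bit word, so a single 32-bit channel carries two aggregated umbra values. This is a sketch of the idea, not the paper's shader code:

```python
import numpy as np

def pack_two(a, b):
    """Pack two float16 angles into one uint32 word."""
    ha = np.float16(a).view(np.uint16)  # raw 16-bit pattern of a
    hb = np.float16(b).view(np.uint16)  # raw 16-bit pattern of b
    return (np.uint32(ha) << np.uint32(16)) | np.uint32(hb)

def unpack_two(word):
    """Recover the two float16 angles from the packed word."""
    hi = np.uint16(word >> np.uint32(16))
    lo = np.uint16(word & np.uint32(0xFFFF))
    return float(hi.view(np.float16)), float(lo.view(np.float16))
```

The cost is precision: float16 carries only about three decimal digits, which is part of why the vertical test stays conservative.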

Results

Buildings are 9-12 units and rotated at most 30 degrees

The box model contains random boxes in random orientations

Vienna model

Results

City model: half-umbra vs. full-umbra

Box model: effect of resolution

The VS is 10,072 triangles

Vienna Model

Discussion

The algorithm is sensitive to how much it has to render

Works well for dense scenes because the occlusion map quickly covers the entire scene
For the minor cost of rendering one extra K-d tree node, they can double the model size
If the VS is large, then a lot of geometry is rendered and the algorithm slows down
The PVS can be calculated over several frames

Discussion

No tests were done for sparse models, where umbrae will not converge rapidly and many K-d tree nodes need to be tested

The algorithm prefers horizontal occlusion over vertical occlusion: horizontal occlusion is exact up to rendering resolution, while vertical occlusion is conservative, based on how many umbrae are used