01/26/05 © 2005 University of Wisconsin
Last Time
• Raytracing and PBRT Structure
• Radiometric quantities
Today
• Improving Efficiency with Monte Carlo Integration
Monte Carlo Efficiency
• We can get an estimate faster (do less work per sample)
• Or we can get an estimate with lower variance as a function of N
• Either improves the efficiency of an estimator F:

  ε[F] = 1 / (V[F] · T[F])

  where V[F] is the variance of F and T[F] is the time taken to compute one estimate
Ways to Improve Efficiency (PBR Chap. 15)
• Less work per sample:
  – Russian Roulette
  – Splitting
• Careful sample placement:
  – Stratified Sampling
  – Low-discrepancy sequences
  – Best-Candidate samplers
• Introducing Bias
• Importance Sampling
Russian Roulette (PBR 15.1)
• Say the integrand, f(x), is expensive to compute
  – It may require tracing rays and evaluating reflectance functions, or it may even require an infinite amount of work
• For each sample, choose some termination probability q
• Sample a uniform random number ξ
• If ξ < q, then use a constant c instead of f(x)
• Otherwise, evaluate the integrand, but weight it by 1/(1 − q)
• Why does it work …
Roulette Math
• Never decreases variance
• But can reduce time without increasing variance if only samples with low F are terminated

  F' = (F − qc) / (1 − q)   if ξ ≥ q
  F' = c                    otherwise

  E[F'] = (1 − q) · (E[F] − qc) / (1 − q) + q·c = E[F]
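The weighted roulette estimator can be sketched in a few lines of Python (a toy illustration, not PBRT's implementation; the function name and defaults are invented):

```python
import random

def roulette_estimate(f, x, q, c=0.0):
    """Russian roulette: with probability q, skip the expensive
    evaluation of f and return the constant c; otherwise evaluate
    f and reweight so the expected value is unchanged."""
    if random.random() < q:
        return c
    return (f(x) - q * c) / (1.0 - q)
```

Averaged over many samples, the reweighting by 1/(1 − q) exactly cancels the contribution lost to terminated samples, so the expected value is unchanged while the expensive evaluation of f is skipped a fraction q of the time.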
Choosing q and c
• Try to base q on the anticipated value of F
• Integrating the direct contribution of lights requires a shadow ray test
  – If the light is far away or the test ray will hit a low-contribution part of the light, then the termination probability q should be high
• The contribution of reflection rays goes down as the ray tree gets deeper
  – Base q on the depth of the ray tree (the number of reflections so far)
• Choose c = 0 in these cases
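As a concrete instance of the depth heuristic, one might ramp the termination probability with ray-tree depth (the function name and constants here are entirely made up for illustration):

```python
def termination_probability(depth, base=0.25, growth=0.25, q_max=0.95):
    """Hypothetical roulette schedule: terminate more aggressively
    as the ray tree gets deeper, capped below 1 so that deep but
    important paths still have some chance of surviving."""
    return min(q_max, base + growth * depth)
```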
Splitting (PBR 15.1.1)
• Say you need to compute a multi-dimensional integral
  – Such as the integral over the area seen by a pixel and the directions to an area light source
• Say you expect the integral to vary more slowly in one dimension than the other
  – Incoming illumination is going to vary more rapidly, due to occluders, than over positions within the pixel
• Choose one value for the slowly varying component, and many values for the fast-varying component
  – One ray through the pixel to find the surface point, then many rays to the light
Effects of Splitting
• Reduces time with little increase in variance
• The example on the previous slide is extremely common
• Extreme contrived case:

  ∫0^5 ∫0^5 x^2 dx dy

• Less contrived case:

  ∫0^5 ∫0^5 x^3 y dx dy
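Splitting can be written as a simple nested loop (a toy sketch with invented names, estimating an integral over the [0, 5] × [0, 5] square by taking many samples of the fast-varying variable x for each sample of the slow one, y):

```python
import random

def split_estimate(f, n_slow, n_fast, lo=0.0, hi=5.0):
    """Splitting: for each of n_slow samples of the slowly varying
    variable y, take n_fast samples of the fast-varying variable x.
    Returns a Monte Carlo estimate of the integral of f over the
    square [lo, hi] x [lo, hi]."""
    area = (hi - lo) ** 2
    total = 0.0
    for _ in range(n_slow):
        y = random.uniform(lo, hi)
        for _ in range(n_fast):
            x = random.uniform(lo, hi)
            total += f(x, y)
    return area * total / (n_slow * n_fast)
```

When f depends only weakly on y, a single y sample paired with many x samples buys nearly the same variance as a full n_slow × n_fast set of independent 2-D samples, at a fraction of the cost of generating the slow component.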
Stratified Sampling (PBR Sect 7.3 and 15.2.1)
• Consider uniformly sampling inside a square
• Truly at random (choose random x, random y) will give clumps of points
  – "Uniformly distributed" refers to the probability density, not to the intuitive sense of uniform (evenly spread)
• Instead, break the domain into little regions, or strata, and put one sample in each piece
  – Choosing uniformly at random within each stratum is called jittered sampling
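Jittered sampling of the unit square is only a few lines (a minimal sketch; the function name is invented):

```python
import random

def jittered_samples(nx, ny):
    """Stratified (jittered) sampling of the unit square: one
    uniform random sample inside each of the nx * ny strata."""
    samples = []
    for i in range(nx):
        for j in range(ny):
            x = (i + random.random()) / nx
            y = (j + random.random()) / ny
            samples.append((x, y))
    return samples
```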
Stratified Example
Effect on Images
Stratification Comments
• Stratification reduces variance
• But what if the number of samples required is not a product of two reasonable numbers, N = Nx · Ny?
• What do you do in multiple dimensions?
  – The curse of dimensionality gets to you: k strata per axis in d dimensions means k^d samples
• It isn't always a big win
Latin Hypercube Sampling
• Say you want 5 samples in the square
• Use a 5×5 grid, and place samples in cells such that there is only one sample per row and only one sample per column
  – Can be done by permuting the rows or columns of the identity matrix
• Performance degrades for increasing numbers of samples
  – Can get large empty areas
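A Latin hypercube pattern can be generated by independently permuting the strata along each axis (a sketch; the function name is invented):

```python
import random

def latin_hypercube(n, dims=2):
    """Latin hypercube sampling: n samples in [0, 1)^dims with
    exactly one sample in each of the n strata along every axis.
    A random permutation per axis plays the role of permuting the
    rows/columns of the identity matrix."""
    axes = []
    for _ in range(dims):
        perm = list(range(n))
        random.shuffle(perm)
        # jitter each sample within its assigned stratum
        axes.append([(p + random.random()) / n for p in perm])
    return list(zip(*axes))
```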
Stratifying Multiple Dimensions
• Do not attempt to put a sample in every possible multi-dimensional stratum
• For each dimension, stratify independently
• For multi-dimensional samples, permute the strata within each dimension, then choose the 1st value from every sequence, the 2nd value, etc.
Stratification can Fail
• Can get unlucky, typically by clumping in one dimension
Low-Discrepancy Samplers
• Deterministic sequences that look random in the more natural sense – noisy, without clumping
  – Also come with guarantees on their arrangement (low discrepancy)
• Generation is beyond the scope of this class
• Quasi-Monte Carlo: instead of random samples, use low-discrepancy sequences
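Although general constructions are out of scope, the simplest such sequence, the van der Corput sequence (the 1-D building block of Halton sequences), is short enough to show as a sketch:

```python
def radical_inverse(i, base=2):
    """Van der Corput radical inverse: mirror the base-b digits of
    the integer i about the radix point, yielding a deterministic,
    well-spread point in [0, 1)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv
```

Successive values fill the unit interval without clumping: 1/2, 1/4, 3/4, 1/8, 5/8, …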
Best-Candidate Sampling
• Poisson disk distribution: uniform distribution with the condition that no two points are closer than a minimum distance
  – Excellent distribution to use, but hard to generate
• Best-candidate patterns approximate Poisson disk distributions
• New methods along these lines are always being invented
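One simple best-candidate strategy is: for each new point, try k random candidates and keep the one whose nearest existing sample is farthest away. A minimal sketch (the names and the choice k = 10 are invented):

```python
import random

def best_candidate_samples(n, k=10):
    """Best-candidate sampling of the unit square: greedily keep,
    out of k random candidates, the one farthest from all existing
    samples (squared distance suffices for the comparison).
    Approximates a Poisson disk distribution."""
    pts = [(random.random(), random.random())]
    for _ in range(n - 1):
        best, best_d = None, -1.0
        for _ in range(k):
            c = (random.random(), random.random())
            d = min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 for p in pts)
            if d > best_d:
                best, best_d = c, d
        pts.append(best)
    return pts
```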
Biased Samplers
• A sampler is biased if the expected value of the estimator is not the correct one
  – Bias is the difference between the expected estimate and the desired value: β = E[F'] − F
• Two estimators of the mean of uniform [0, 1] samples X1, …, XN:

  F' = (1/N) · (X1 + … + XN)     (unbiased)

  F' = (1/2) · max(X1, …, XN)    (biased, with β = −0.5 / (N + 1))
Bias can be Good
• Even with low sample counts, bias can be good
  – Variance can be lower, e.g. O(N^-2) instead of the O(N^-1) of unbiased Monte Carlo
• Most image reconstruction filters give biased estimates
  – Reduces variance and hence apparent noise in the image
• The standard photon map estimator (covered later) is biased
  – But the result is less noisy
Importance Sampling
• The function p is called the importance function (a probability density)
• A wise choice of p, as close as possible in shape to f, can dramatically reduce variance
• A poor choice can dramatically increase variance
• Choosing importance functions is a well-established art in physically based rendering

  E[ (1/N) · Σi f(Xi) / p(Xi) ] = ∫ f(y) dy,   with the Xi drawn from p
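The estimator can be written directly (a sketch with made-up names; the example integrates f(x) = x² on [0, 1] with the density p(x) = 2x, sampled by inverting its CDF, i.e. x = √u):

```python
import math
import random

def importance_estimate(f, sample_p, pdf_p, n):
    """Importance sampling: draw Xi ~ p and average f(Xi) / p(Xi).
    The expected value of the average is the integral of f."""
    total = 0.0
    for _ in range(n):
        x = sample_p()
        total += f(x) / pdf_p(x)
    return total / n

# Example: integrate x^2 on [0, 1]; the exact answer is 1/3.
est = importance_estimate(lambda x: x * x,
                          lambda: math.sqrt(random.random()),
                          lambda x: 2.0 * x,
                          50000)
```

Here f/p = x/2 varies far less than f itself, so the estimate converges quickly; a p that vanished where f is large would instead blow the variance up.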
Importance Function Generalities
• If you are integrating a product like f(x)·g(x), it can be helpful to choose p proportional to f or to g
• Multiple importance sampling lets you combine results from several importance samplers
  – Generate some samples according to p
  – Generate some according to q
  – Form a weighted sum – details in PBR – good when no one importance function handles all cases
Next Time
• Cameras and Film