CS348b Lecture 11 Pat Hanrahan / Matt Pharr, Spring 2017
Overview
Earlier lectures
■ Monte Carlo I: Integration
■ Signal processing and sampling
Today: Monte Carlo II
■ Noise and variance
■ Efficiency
■ Stratified sampling
■ Importance sampling
Review: Basic MC Estimator
$$X_i \in [0,1)^n$$
$$E\left[\frac{1}{N}\sum_{i=1}^{N} f(X_i)\right] = \int f(x)\,dx$$
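A minimal NumPy sketch of this estimator; the integrand and sample count in the example are arbitrary choices, not from the slides:

```python
import numpy as np

def mc_estimate(f, n_dims, n_samples, seed=0):
    """Basic Monte Carlo estimator: average f over uniform samples in [0,1)^n."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_samples, n_dims))   # X_i ~ uniform on [0, 1)^n
    return np.mean([f(x) for x in X])     # (1/N) * sum_i f(X_i)

# Example: integrate f(x, y) = x * y over the unit square (exact value 1/4).
print(mc_estimate(lambda x: x[0] * x[1], n_dims=2, n_samples=10_000))
```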
High-Dimensional Integration
Complete set of samples:
$$N = \underbrace{n \times n \times \cdots \times n}_{d} = n^d$$
■ 'The curse of dimensionality'
Random sampling error:
$$E \sim V^{1/2} \sim \frac{1}{\sqrt{N}}$$
Numerical integration error:
$$E \sim \frac{1}{n} = \frac{1}{N^{1/d}}$$
In high dimensions, Monte Carlo requires fewer samples than quadrature-based numerical integration for the same error.
A Tale of Two Functions
[Plots on [0,1]: a narrow Gaussian, $f(x) = e^{-5000(x-\frac{1}{2})^2}$, and a wide one, $f(x) = e^{-20(x-\frac{1}{2})^2}$.]
Monte Carlo Estimates, n=32
For $f(x) = e^{-20(x-\frac{1}{2})^2}$, $\int f(x)\,dx = 0.39571\ldots$; eight 32-sample estimates:
0.419, 0.379, 0.377, 0.361, 0.332, 0.380, 0.476, 0.463
For $f(x) = e^{-5000(x-\frac{1}{2})^2}$, $\int f(x)\,dx = 0.025067\ldots$; eight 32-sample estimates:
0.0100, 0.0520, 0.0236, 0.0001, 0.0078, 0.0356, 0.0296, 0.0270
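A sketch that reproduces this experiment; the estimates vary from run to run, which is the point:

```python
import numpy as np

rng = np.random.default_rng()
wide   = lambda x: np.exp(-20.0   * (x - 0.5) ** 2)   # integral 0.39571...
narrow = lambda x: np.exp(-5000.0 * (x - 0.5) ** 2)   # integral 0.025067...

for name, f in [("wide", wide), ("narrow", narrow)]:
    # Eight independent 32-sample estimates, as on the slide.
    estimates = [np.mean(f(rng.random(32))) for _ in range(8)]
    print(name, np.round(estimates, 4))
```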
Random Sampling Introduces Noise
[Images: 1 shadow ray per pixel, center sample vs. random sample.]
Less Noise with More Rays
[Images: 1 shadow ray vs. 16 shadow rays.]
Variance
Definition:
$$V[Y] \equiv E[(Y - E[Y])^2] = E[Y^2] - E[Y]^2$$
Since $V[aY] = a^2 V[Y]$, variance decreases linearly with sample size:
$$V\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\sum_{i=1}^{N} V[Y_i] = \frac{1}{N^2}\,N\,V[Y] = \frac{1}{N}\,V[Y]$$
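A quick empirical check of the 1/N falloff, using the wide Gaussian as an arbitrary example integrand:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-20.0 * (x - 0.5) ** 2)

for N in (16, 64, 256):
    # Variance of the N-sample mean, measured over many independent trials.
    estimates = [np.mean(f(rng.random(N))) for _ in range(2000)]
    print(N, np.var(estimates))   # drops roughly 4x for each 4x increase in N
```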
Comparing Different Techniques
Efficiency measure:
$$\text{Efficiency} \propto \frac{1}{\text{Variance} \cdot \text{Cost}}$$
Comparing two sampling techniques A and B:
■ If A has twice the variance of B, it takes twice as many samples from A to achieve the same variance as B
■ If A has twice the cost per sample of B, it takes twice as much time to reach a given variance using A as using B
Recall that variance goes as 1/N and time goes as C·N, so the product of variance and cost is a constant, independent of the number of samples.
(Live Demo of Variance)
High Variance From Narrow Function
[Plot: uniform random samples of the narrow Gaussian.]
Estimate of integral: $6.1418 \times 10^{-12}$
$$\int f(x)\,dx = 0.025067\ldots$$
Stratified Sampling
Allocate samples per region; estimate each region separately:
$$F_N = \frac{1}{N}\sum_{i=1}^{N} F_i$$
New variance:
$$V[F_N] = \frac{1}{N^2}\sum_{i=1}^{N} V[F_i]$$
If the variance in each region is the same, the total variance goes as $1/N$.
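A minimal sketch of 1D stratified sampling with one sample per stratum:

```python
import numpy as np

def stratified_estimate(f, n, seed=2):
    """Estimate the integral of f over [0,1) with one sample per stratum."""
    rng = np.random.default_rng(seed)
    x = (np.arange(n) + rng.random(n)) / n   # one uniform sample in each [i/n, (i+1)/n)
    return np.mean(f(x))                     # F_N = (1/N) * sum_i F_i

f = lambda x: np.exp(-5000.0 * (x - 0.5) ** 2)
print(stratified_estimate(f, 32))            # exact value is 0.025067...
```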
Stratified Sampling
Sample a polygon: if the variance in some regions is smaller, the overall variance is reduced, e.g.
$$V[F_N] = \frac{1}{N^2}\sum_{i} V[F_i] = \frac{1}{N^{1.5}}\,V[F]$$
Stratified Samples, n=32
[Plot: 32 stratified samples of the narrow Gaussian.]
$$\int f(x)\,dx = 0.025067\ldots$$
Variance:
Uniform     3.99E-04
Stratified  2.42E-04
Jittered Sampling
Add uniform random jitter to each sample
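A sketch of generating an n×n jittered pattern:

```python
import numpy as np

def jittered_2d(n, seed=3):
    """n x n jittered pattern in [0,1)^2: one uniform random sample per grid cell."""
    rng = np.random.default_rng(seed)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    x = (i + rng.random((n, n))) / n
    y = (j + rng.random((n, n))) / n
    return np.stack([x.ravel(), y.ravel()], axis=1)

samples = jittered_2d(4)   # 16 samples, one per cell of a 4x4 grid
```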
Jittered vs. Uniform Supersampling
[Images: 4×4 jittered sampling vs. 4×4 uniform supersampling.]
Theory: Analysis of Jitter
Jittered sample positions form a randomized impulse train:
$$s(x) = \sum_{n=-\infty}^{\infty} \delta(x - x_n), \qquad x_n = nT + j_n, \qquad j_n \sim j(x)$$
with unit-box jitter and its Fourier transform:
$$j(x) = \begin{cases} 1 & |x| \le 1/2 \\ 0 & |x| > 1/2 \end{cases} \qquad J(\omega) = \operatorname{sinc}\omega$$
The resulting power spectrum replaces the regular comb's spikes with broadband noise:
$$S(\omega) = \frac{1}{T}\left(1 - J^2(\omega)\right) + \frac{2\pi}{T^2}\,J^2(\omega) \sum_{n=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi n}{T}\right) = \frac{1}{T}\left(1 - \operatorname{sinc}^2\omega\right) + \frac{2\pi}{T^2} \sum_{n=-\infty}^{\infty} \operatorname{sinc}^2\omega\;\delta\!\left(\omega - \frac{2\pi n}{T}\right)$$
[Plots: spectra of non-uniform sampling vs. jittered sampling.]
Poisson Disk Sampling
Dart-throwing algorithm; the resulting power spectrum has less energy near the origin.
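A naive dart-throwing sketch; it tests each candidate against all accepted points, whereas practical implementations use a grid-accelerated method such as Bridson [2007]:

```python
import numpy as np

def dart_throwing(radius, n_attempts=10_000, seed=4):
    """Naive Poisson disk sampling in [0,1)^2 by rejection ('dart throwing')."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_attempts):
        candidate = rng.random(2)
        # Accept the dart only if it is at least `radius` from every accepted point.
        if all(np.linalg.norm(candidate - p) >= radius for p in points):
            points.append(candidate)
    return np.array(points)

pts = dart_throwing(radius=0.05)
```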
Distribution of Extrafoveal Cones
[Images: monkey eye cone distribution and its Fourier transform.]
Yellott [1983]
Fourier Radial Power Spectrum
[Plots: radial power (0 to 2) vs. frequency (0 to 4) for jitter and for Poisson disk.]
Pilleboue et al. [2015]
Theoretical Convergence Rates in 2D
Sampler        Worst Case      Best Case
Random         $O(N^{-1})$     $O(N^{-1})$
Poisson Disk   $O(N^{-1})$     $O(N^{-1})$
Jitter         $O(N^{-1.5})$   $O(N^{-2})$
Measuring Power at a Pixel
[Figure: a pixel of area $A_{pixel}$ on the film plane and the lens aperture $A_{lens}$; $p$ is a point on the film, $p'$ a point on the lens, $r^2$ the squared distance between them, and $\theta$, $\theta'$ the angles to the respective normals.]
Irradiance at a film point $p$:
$$E(p) = \int_{A_{lens}} L(p' \to p)\,\frac{\cos\theta\,\cos\theta'}{\|p' - p\|^2}\,dA'$$
Total power over the pixel:
$$W = \int_{A_{pixel}}\int_{A_{lens}} L(p' \to p)\,\frac{\cos\theta\,\cos\theta'}{\|p' - p\|^2}\,dA'\,dA$$
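A hedged sketch of estimating $W$ by uniformly sampling both areas; the radiance function and the sampling callables here are hypothetical placeholders, not part of the slides:

```python
import numpy as np

def estimate_pixel_power(L, sample_pixel, sample_lens, A_pixel, A_lens,
                         n_samples, seed=5):
    """MC estimate of W: jointly sample the pixel and lens areas uniformly.

    `sample_pixel` / `sample_lens` return a (point, unit normal) pair drawn
    uniformly over the respective area; `L(p_lens, p_film)` is the radiance
    from the lens point toward the film point. All are placeholder callables.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        p, n = sample_pixel(rng)       # p  on the film plane, normal n
        p2, n2 = sample_lens(rng)      # p' on the lens aperture, normal n'
        d = p2 - p
        r2 = float(np.dot(d, d))       # ||p' - p||^2
        w = d / np.sqrt(r2)
        cos_t = abs(np.dot(n, w))      # cos(theta)  at the film
        cos_t2 = abs(np.dot(n2, -w))   # cos(theta') at the lens
        # Joint pdf of uniform area sampling is 1 / (A_pixel * A_lens).
        total += L(p2, p) * cos_t * cos_t2 / r2 * (A_pixel * A_lens)
    return total / n_samples
```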
Lens Exit Pupils
[Images: lens exit pupils, in focus vs. out of focus.]
How to Stratify?
n strata in d dimensions: $O(n^d)$ samples
■ 8 strata in 4 dimensions: 4096 samples!
Solution: padding
■ Generate stratified lower-dimensional point sets and randomly associate pairs of samples: two stratified 2D sets combine into 4D samples $(p_0, p_1, p_2, p_3)$ (see the sketch below)
[Plots: two stratified 2D point sets.]
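A sketch of padding for the 4D case above, pairing two stratified 2D patterns:

```python
import numpy as np

def padded_4d(n, seed=6):
    """Pad two stratified 2D patterns into n*n stratified 4D samples."""
    rng = np.random.default_rng(seed)

    def jittered_2d(n):
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        xy = np.stack([(i + rng.random((n, n))) / n,
                       (j + rng.random((n, n))) / n], axis=-1)
        return xy.reshape(-1, 2)

    a, b = jittered_2d(n), jittered_2d(n)
    rng.shuffle(b)                 # random association between the two sets
    return np.hstack([a, b])       # each row is a 4D sample (p0, p1, p2, p3)

samples = padded_4d(8)             # 64 samples instead of 8^4 = 4096
```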
(Live Demo of Stratification)
“Biased” Sampling is “Unbiased”
Probability: $X_i \sim p(x)$
Estimator:
$$Y_i = \frac{f(X_i)}{p(X_i)}$$
Proof:
$$E[Y_i] = E\left[\frac{f(x)}{p(x)}\right] = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx$$
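The estimator in code form, a minimal sketch (the example f and p are arbitrary choices):

```python
import numpy as np

def is_estimate(f, sample_p, pdf_p, n, seed=7):
    """Importance-sampled estimate of the integral of f: mean of f(X_i)/p(X_i), X_i ~ p."""
    rng = np.random.default_rng(seed)
    x = sample_p(rng, n)
    return np.mean(f(x) / pdf_p(x))

# Example: f(x) = 3x^2 on [0,1), sampled with p(x) = 2x via X = sqrt(U); exact value 1.
f = lambda x: 3.0 * x ** 2
print(is_estimate(f, lambda rng, n: np.sqrt(rng.random(n)), lambda x: 2.0 * x, 1000))
```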
Importance Sampling
Sample according to $f$:
$$p(x) = \frac{f(x)}{E[f]}, \qquad \tilde f(x) = \frac{f(x)}{p(x)} = E[f]$$
Variance:
$$V[\tilde f] = E[\tilde f^2] - E[\tilde f]^2$$
$$E[\tilde f^2] = \int \left(\frac{f(x)}{p(x)}\right)^2 p(x)\,dx = \int \frac{f(x)\,f(x)}{f(x)/E[f]}\,dx = E[f]\int f(x)\,dx = E[f]^2$$
so
$$V[\tilde f] = E[\tilde f^2] - E[\tilde f]^2 = 0$$
Zero variance!
Gotcha?
■ The perfect pdf $p(x) = f(x)/E[f]$ requires knowing $E[f] = \int f(x)\,dx$ up front, which is exactly the integral we are trying to estimate
Importance Sampling
Sample distribution for $f(x) = e^{-5000(x-\frac{1}{2})^2}$:
[Plots: $f$ and the sample distribution $p$.]
MC Estimates, n=16
0.02502, 0.02490, 0.02534, 0.02229, 0.02443, 0.02592, 0.02678, 0.02612
$$\int f(x)\,dx = 0.025067\ldots$$
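A sketch reproducing these estimates; the slides only plot the sample distribution, so this assumes it is a Gaussian matched to $f$ and truncated to [0,1]:

```python
import numpy as np
from scipy.stats import truncnorm

# f is a Gaussian bump: sigma = 1/sqrt(2 * 5000) = 0.01, centered at 0.5.
f = lambda x: np.exp(-5000.0 * (x - 0.5) ** 2)
mu, sigma = 0.5, 0.01
p = truncnorm((0.0 - mu) / sigma, (1.0 - mu) / sigma, loc=mu, scale=sigma)

rng = np.random.default_rng(8)
for _ in range(8):
    x = p.rvs(size=16, random_state=rng)   # X_i ~ p, concentrated around 0.5
    print(np.mean(f(x) / p.pdf(x)))        # each estimate lands near 0.025067
```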
Variance:
Uniform      3.99E-04
Stratified   2.42E-04
Importance   3.75E-07
Importance Sampling
Sample distribution?
$$f(x) = e^{-5000(x-\frac{1}{2})^2}$$
[Plots: $f$ and a candidate sample distribution.]
Importance Sampling: Area
[Images: solid-angle sampling vs. area sampling, 100 shadow rays each.]
Ambient Occlusion
$$\int_\Omega V(\omega)\,\cos\theta\,d\omega$$
Landis, McGaugh, Koch @ ILM
Importance Sampling
Uniform hemisphere sampling:
$$p(\omega) = \frac{1}{2\pi}$$
$$(\xi_1, \xi_2) \to \left(\sqrt{1-\xi_1^2}\,\cos(2\pi\xi_2),\; \sqrt{1-\xi_1^2}\,\sin(2\pi\xi_2),\; \xi_1\right)$$
$$f(\omega) = L_i(\omega)\,\cos\theta$$
$$\int_\Omega f(\omega)\,d\omega \approx \frac{1}{N}\sum_{i=1}^{N} \frac{f(\omega_i)}{p(\omega_i)} = \frac{1}{N}\sum_{i=1}^{N} \frac{L_i(\omega_i)\cos\theta_i}{1/2\pi} = \frac{2\pi}{N}\sum_{i=1}^{N} L_i(\omega_i)\cos\theta_i$$
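A sketch of the uniform mapping and estimator; the incident-radiance callable `Li` is a hypothetical stand-in:

```python
import numpy as np

def uniform_hemisphere(xi1, xi2):
    """Map (xi1, xi2) in [0,1)^2 to directions uniform over the hemisphere."""
    s = np.sqrt(1.0 - xi1 ** 2)
    return np.stack([s * np.cos(2.0 * np.pi * xi2),
                     s * np.sin(2.0 * np.pi * xi2),
                     xi1], axis=-1)                    # z = cos(theta) = xi1

def estimate_uniform(Li, n, seed=9):
    rng = np.random.default_rng(seed)
    w = uniform_hemisphere(rng.random(n), rng.random(n))
    return 2.0 * np.pi / n * np.sum(Li(w) * w[:, 2])   # divide by p = 1/(2*pi)
```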
Importance Sampling
Cosine-weighted hemisphere sampling:
$$p(\omega) = \frac{\cos\theta}{\pi}$$
$$\int_\Omega f(\omega)\,d\omega \approx \frac{1}{N}\sum_{i=1}^{N} \frac{f(\omega_i)}{p(\omega_i)} = \frac{1}{N}\sum_{i=1}^{N} \frac{L_i(\omega_i)\cos\theta_i}{\cos\theta_i/\pi} = \frac{\pi}{N}\sum_{i=1}^{N} L_i(\omega_i)$$
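A sketch of the cosine-weighted estimator; the directions are generated with Malley's method (equi-areal disk samples projected up to the hemisphere), one standard construction for $p(\omega) = \cos\theta/\pi$, though the slide does not specify one:

```python
import numpy as np

def cosine_hemisphere(xi1, xi2):
    """Cosine-weighted directions via Malley's method: sample a disk, project up."""
    r, phi = np.sqrt(xi1), 2.0 * np.pi * xi2           # equi-areal disk sample
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(np.maximum(0.0, 1.0 - x * x - y * y))  # z = cos(theta)
    return np.stack([x, y, z], axis=-1)

def estimate_cosine(Li, n, seed=10):
    rng = np.random.default_rng(seed)
    w = cosine_hemisphere(rng.random(n), rng.random(n))
    return np.pi / n * np.sum(Li(w))                   # cos(theta) cancels against p
```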
Ambient Occlusion
$$\int_\Omega V(\omega)\,\cos\theta\,d\omega$$
Reference: 4096 samples
Uniform, 32 samples: variance 0.0160
Cosine, 32 samples: variance 0.0103
Uniform, 72 samples: variance 0.00931
Sampling a Circle
$$\theta = 2\pi U_1, \qquad r = \sqrt{U_2}$$
Equi-areal.
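A sketch of the equi-areal mapping; note the square root on the radius, without which samples cluster near the center:

```python
import numpy as np

def sample_disk(u1, u2):
    """Equi-areal unit-disk sample: theta = 2*pi*U1, r = sqrt(U2)."""
    theta, r = 2.0 * np.pi * u1, np.sqrt(u2)
    return r * np.cos(theta), r * np.sin(theta)
```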
Shirley’s Mapping: Better Strata
$$r = U_1, \qquad \theta = \frac{\pi}{4}\,\frac{U_2}{U_1}$$
(for one wedge of the square; the other wedges map by symmetry)
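A sketch of the full concentric map; the slide gives the formula for one wedge, and the remaining wedges follow by symmetry (after Shirley & Chiu [1997]):

```python
import numpy as np

def concentric_disk(u1, u2):
    """Shirley's concentric map from [0,1)^2 to the unit disk."""
    a, b = 2.0 * u1 - 1.0, 2.0 * u2 - 1.0    # remap to [-1,1]^2
    if a == 0.0 and b == 0.0:
        return 0.0, 0.0
    if abs(a) > abs(b):                       # wedge from the slide: r = U1
        r, theta = a, (np.pi / 4.0) * (b / a)
    else:                                     # remaining wedges, by symmetry
        r, theta = b, (np.pi / 2.0) - (np.pi / 4.0) * (a / b)
    return r * np.cos(theta), r * np.sin(theta)
```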
Views of Sampling
1. Numerical integration
■ Quadrature/integration rules
■ Efficient for smooth functions
■ "Curse of dimensionality"
2. Statistical sampling (Monte Carlo integration)
■ Unbiased estimate of integral
■ High-dimensional sampling: error falls as $1/N^{1/2}$
3. Signal processing
■ Sampling and reconstruction
■ Aliasing and antialiasing
■ Blue noise is good
4. Quasi Monte Carlo
■ Bound error using discrepancy
■ Asymptotic efficiency in high dimensions