Overview - Stanford University
TRANSCRIPT
CS348b Lecture 9 Pat Hanrahan, Spring 2016
Overview
Earlier lectures
■ Monte Carlo I: Integration
■ Signal processing and sampling
Today: Monte Carlo II
■ Noise and variance reduction
■ Importance sampling
■ Stratified sampling
■ Multidimensional sampling patterns
Undersampling: Aliasing
[Figure: a band-limited signal's spectrum convolved (⊗) with the sampling spike train in frequency space]
Aliasing occurs if the sampling rate is less than twice the maximum frequency in the signal
Note that high frequencies can appear as low frequencies
Sampling a “Zone Plate”
Zone plate: $\sin(x^2 + y^2)$
Left rings: signal. Right rings: aliases. Middle rings: sum of the signal and the aliases.
Jittered Sampling
Add uniform random jitter to each sample
Jittered vs. Uniform Supersampling
[Figure: 4×4 jittered sampling vs. 4×4 uniform supersampling]
Regular pattern of aliases replaced by noise
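The jittered pattern above is generated by placing one uniform random sample inside each cell of a regular grid. A minimal sketch in Python (NumPy assumed; `jittered_samples` is a hypothetical helper, not course code):

```python
import numpy as np

def jittered_samples(n, rng=None):
    """Generate an n x n jittered (stratified) sample pattern in [0,1)^2.

    One sample is placed uniformly at random inside each grid cell,
    so the regular structure of uniform supersampling is broken up
    and coherent aliases turn into noise.
    """
    rng = np.random.default_rng(rng)
    # Integer corners of the n x n grid of cells
    ix, iy = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # Uniform random jitter within each cell
    jitter = rng.random((n, n, 2))
    pts = (np.stack([ix, iy], axis=-1) + jitter) / n
    return pts.reshape(-1, 2)

pts = jittered_samples(4, rng=0)
# 16 samples, exactly one inside each cell of the 4x4 grid
```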
Theory: Analysis of Jitter
Non-uniform sampling:

$$s(x) = \sum_{n=-\infty}^{\infty} \delta(x - x_n)$$

Jittered sampling:

$$x_n = nT + j_n, \qquad j_n \sim j(x)$$

$$j(x) = \begin{cases} 1 & |x| \le 1/2 \\ 0 & |x| > 1/2 \end{cases} \qquad J(\omega) = \operatorname{sinc}(\omega)$$

$$S(\omega) = \frac{1}{T}\left[1 - |J(\omega)|^2\right] + \frac{2\pi}{T^2}\,|J(\omega)|^2 \sum_{n=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi n}{T}\right) = \frac{1}{T}\left[1 - \operatorname{sinc}^2(\omega)\right] + \frac{2\pi}{T^2}\,\operatorname{sinc}^2(\omega) \sum_{n=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi n}{T}\right)$$
Poisson Disk Sampling
Dart-throwing algorithm; the spectrum has less energy near the origin
Distribution of Extrafoveal Cones
Monkey eye cone distribution and its Fourier transform [Yellott]
Intuition: Uniform Sampling
Uniform sampling
■ The spectrum of uniformly spaced samples is also a set of uniformly spaced spikes
■ Multiplying the signal by the sampling pattern in the spatial domain corresponds to placing a copy of the spectrum at each spike in the frequency domain
■ Aliases are coherent, and very noticeable
Intuition: Non-uniform Sampling
Non-uniform sampling
■ Samples at non-uniform locations have a different spectrum: a single spike at the center plus noise
■ Sampling a signal in this way converts aliases into broadband noise
■Noise is incoherent, and much less objectionable
Random Sampling Introduces Noise
[Figure: 1 shadow ray per pixel; sampling the light center vs. a random point on the light]
Less Noise with More Rays
[Figure: 1 shadow ray vs. 16 shadow rays]
Variance
Definition
Variance decreases linearly with sample size
$$V[Y] \equiv E\bigl[(Y - E[Y])^2\bigr] = E[Y^2] - E[Y]^2$$

$$V[aY] = a^2\,V[Y]$$

$$V\!\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\sum_{i=1}^{N} V[Y_i] = \frac{1}{N^2}\,N\,V[Y] = \frac{1}{N}\,V[Y]$$
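The 1/N law is easy to check numerically. A quick sketch (NumPy assumed; function name and trial counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_of_mean(n_samples, n_trials=20_000):
    """Empirical variance of the mean of n_samples uniform draws.

    Each Y_i ~ U[0,1) has V[Y] = 1/12, so the mean of N draws
    should have variance (1/12)/N.
    """
    y = rng.random((n_trials, n_samples))
    return y.mean(axis=1).var()

v1 = variance_of_mean(1)   # close to 1/12
v4 = variance_of_mean(4)   # close to 1/48: 4x the samples, 1/4 the variance
```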
Why is Area Better than Solid Angle?
[Figure: sampling the light by solid angle vs. by area, 100 shadow rays each]
Uniform Random Sampling Works
Properties:

$$E[aY] = a\,E[Y] \qquad E\Bigl[\sum_i Y_i\Bigr] = \sum_i E[Y_i]$$

Assume a uniform probability distribution for now:

$$\begin{aligned}
E[F_N] &= E\!\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N}\sum_{i=1}^{N} E[Y_i] = \frac{1}{N}\sum_{i=1}^{N} E[f(X_i)] \\
&= \frac{1}{N}\sum_{i=1}^{N}\int_0^1 f(x)\,p(x)\,dx = \frac{1}{N}\sum_{i=1}^{N}\int_0^1 f(x)\,dx = \int_0^1 f(x)\,dx
\end{aligned}$$

$$E[F_N] = I[f]$$
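As a concrete instance of the estimator above, a minimal sketch (NumPy assumed; the integrand and sample count are illustrative):

```python
import numpy as np

def mc_integrate(f, n, rng=None):
    """Uniform Monte Carlo estimate of the integral of f over [0, 1]:
    F_N = (1/N) * sum_i f(X_i) with X_i ~ U[0, 1), so E[F_N] = I[f]."""
    rng = np.random.default_rng(rng)
    return f(rng.random(n)).mean()

est = mc_integrate(lambda x: x**2, 100_000, rng=0)
# est approximates the exact integral 1/3
```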
“Biased” Sampling is “Unbiased”
Probability: $X_i \sim p(x)$

Estimator: $Y_i = \dfrac{f(X_i)}{p(X_i)}$

Proof:

$$E[Y_i] = E\!\left[\frac{f(x)}{p(x)}\right] = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx$$
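The proof translates directly into code: draw from any pdf p and weight each sample by 1/p. A sketch (NumPy assumed; the pdf p(x) = 2x and its inverse-CDF sampler are illustrative choices, not from the slides):

```python
import numpy as np

def mc_integrate_weighted(f, sample_p, pdf_p, n, rng=None):
    """Monte Carlo estimate of the integral of f with samples X_i ~ p.

    Each sample is weighted by 1/p(X_i), which keeps the estimator
    unbiased for any p that is nonzero wherever f is nonzero.
    """
    rng = np.random.default_rng(rng)
    x = sample_p(rng, n)
    return np.mean(f(x) / pdf_p(x))

# Example: p(x) = 2x on [0,1], sampled by inverting its CDF x^2
est = mc_integrate_weighted(
    f=lambda x: x**2,
    sample_p=lambda rng, n: np.sqrt(rng.random(n)),
    pdf_p=lambda x: 2.0 * x,
    n=100_000,
    rng=0,
)
# est is close to 1/3 even though the samples are not uniform
```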
Importance Sampling
Sample according to f:

$$\tilde{p}(x) = \frac{f(x)}{E[f]} \qquad \tilde{f}(x) = \frac{f(x)}{\tilde{p}(x)}$$
Importance Sampling
Variance:

$$V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2$$
Importance Sampling

$$E[\tilde{f}^2] = \int \left(\frac{f(x)}{\tilde{p}(x)}\right)^2 \tilde{p}(x)\,dx = \int f(x)\,\frac{f(x)}{\tilde{p}(x)}\,dx = E[f]\int f(x)\,dx = E[f]^2$$
Importance Sampling

$$V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2 = E[f]^2 - E[f]^2 = 0$$

Zero variance!
Importance Sampling

Gotcha? Achieving zero variance requires the normalized pdf $\tilde{p}(x) = f(x)/E[f]$, and the normalization constant $E[f] = \int f(x)\,dx$ is the very integral we set out to compute. In practice we importance sample with a pdf that is only approximately proportional to $f$.
Importance Sampling: Area
[Figure: solid-angle vs. area sampling of the light, 100 shadow rays each]
Comparing Different Techniques
Efficiency measure
Comparing two sampling techniques A and B:
If A has twice the variance of B, then it takes twice as many samples from A to achieve the same variance as B
If A has twice the cost of B, then it takes twice as much time to reduce the variance using A as using B
The product of variance and cost is a constant, independent of the number of samples
Recall: variance goes as 1/N, time goes as C·N
$$\text{Efficiency} \propto \frac{1}{\text{Variance} \cdot \text{Cost}}$$
Jittered = Stratified Sampling
Allocate samples per region
Estimate each region separately
$$F_N = \frac{1}{N}\sum_{i=1}^{N} F_i$$

New variance:

$$V[F_N] = \frac{1}{N^2}\sum_{i=1}^{N} V[F_i]$$

If the variance in each region is the same, then the total variance goes as $1/N$
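Numerically, stratification can cut variance dramatically for a smooth integrand. A sketch comparing the two estimators (NumPy assumed; integrand and counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x * x          # integrand on [0, 1]; exact integral 1/3

def uniform_estimate(n):
    """Plain Monte Carlo: n independent uniform samples."""
    return f(rng.random(n)).mean()

def stratified_estimate(n):
    """One uniform sample inside each of n equal-width strata."""
    x = (np.arange(n) + rng.random(n)) / n
    return f(x).mean()

trials = 2000
v_uniform = np.var([uniform_estimate(16) for _ in range(trials)])
v_strat = np.var([stratified_estimate(16) for _ in range(trials)])
# v_strat is far smaller than v_uniform for the same 16 samples
```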
Stratified Sampling
Sample a polygon
If the variance in some regions is smaller, then the overall variance will be reduced
In this example, the variance in the regions entirely inside or outside the polygon is 0. The variance in the regions along the edge is greater, but there are fewer edge regions
$$V[F_N] = \frac{1}{N^2}\sum_{j} V[F_j] = \frac{1}{N^{1.5}}\,V[F_E]$$
Sampling a Circle
$$\theta = 2\pi\,U_1 \qquad r = \sqrt{U_2}$$
Equi-Areal
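The equi-areal mapping can be sketched as (NumPy assumed; `sample_disk` is a hypothetical helper name):

```python
import numpy as np

def sample_disk(u1, u2):
    """Map two uniform numbers in [0,1) to a point on the unit disk.

    theta = 2*pi*u1, r = sqrt(u2); the square root makes the mapping
    equi-areal (uniform density over the disk's area).
    """
    theta = 2.0 * np.pi * u1
    r = np.sqrt(u2)
    return r * np.cos(theta), r * np.sin(theta)

rng = np.random.default_rng(0)
x, y = sample_disk(rng.random(100_000), rng.random(100_000))
# All points lie inside the unit disk; mean distance from origin is ~2/3
```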
Shirley’s Mapping: Better Strata
$$r = U_1 \qquad \theta = \frac{\pi}{4}\,\frac{U_2}{U_1}$$
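The formula above covers one wedge of the square; a sketch of the full concentric mapping over all wedges (NumPy assumed; scalar version for clarity, not the slides' code):

```python
import numpy as np

def concentric_sample_disk(u1, u2):
    """Shirley-style concentric map from [0,1)^2 to the unit disk.

    Maps concentric squares to concentric circles, which preserves
    stratification much better than the (2*pi*u1, sqrt(u2)) mapping.
    """
    # Remap to [-1, 1]^2
    ox, oy = 2.0 * u1 - 1.0, 2.0 * u2 - 1.0
    if ox == 0.0 and oy == 0.0:
        return 0.0, 0.0
    if abs(ox) > abs(oy):
        r, theta = ox, (np.pi / 4.0) * (oy / ox)
    else:
        r, theta = oy, (np.pi / 2.0) - (np.pi / 4.0) * (ox / oy)
    return r * np.cos(theta), r * np.sin(theta)

x, y = concentric_sample_disk(0.75, 0.5)
# (0.75, 0.5) maps to (0.5, 0.0)
```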
Block Design
[Figure: 4×4 Latin squares over the alphabet {a, b, c, d}]
Alphabet of size n
Each symbol appears exactly once in each row and column
Rows and columns are stratified
Latin Square
Incomplete Block Design
[Figure: 4×4 N-rook pattern]
N-Rook Pattern
Replace n² samples with n samples
Permutations: $(\pi_1(i),\ \pi_2(i),\ \ldots,\ \pi_d(i))$, e.g. $(\pi_x = \{1,2,3,4\},\ \pi_y = \{4,2,3,1\})$
Generalizations: N-queens
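An N-rook pattern in d dimensions can be sketched as follows (NumPy assumed; this construction is also known as Latin hypercube sampling):

```python
import numpy as np

def n_rook_samples(n, d=2, rng=None):
    """N-rook pattern: n samples in [0,1)^d.

    Sample i occupies cell pi_k(i) along each axis k, where each pi_k
    is an independent random permutation of {0, ..., n-1}, so every
    row and every column of the grid holds exactly one sample.
    """
    rng = np.random.default_rng(rng)
    perms = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    jitter = rng.random((n, d))          # jitter within each chosen cell
    return (perms + jitter) / n

pts = n_rook_samples(4, rng=0)
# 4 samples: each row and each column of the 4x4 grid holds exactly one
```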
Path Tracing
[Figure: complete (4 eye rays per pixel, 16 shadow rays per eye ray) vs. incomplete (64 eye rays per pixel, 1 shadow ray per eye ray)]
Space-Time Patterns
Distribute samples in time
■ Complete in space
■ Incomplete in time
■ Decorrelate space and time
■Nearby samples in space should differ greatly in time
[Figure: a pan-diagonal magic square and the Cook pattern, each assigning sample times 0-15 within a 4×4 pixel block]
High-dimensional Sampling
Complete set of samples: $N = n^d$

Random sampling error (variance):

$$E \sim V^{1/2} \sim \frac{1}{N^{1/2}}$$

Numerical integration error:

$$E \sim \frac{1}{n} = \frac{1}{N^{1/d}}$$

In high dimensional space, for the same error, Monte Carlo integration requires fewer samples than numerical integration
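The $N^{-1/2}$ behavior of the Monte Carlo error is easy to observe directly. A sketch (NumPy assumed; the integrand is an illustrative choice with a known integral):

```python
import numpy as np

rng = np.random.default_rng(0)

# Integrand on [0,1]^5 with known integral: prod(x_i) integrates to (1/2)^5
d = 5
f = lambda x: np.prod(x, axis=-1)
exact = 0.5 ** d

def mc_error(n, trials=200):
    """Mean absolute error of the n-sample Monte Carlo estimate."""
    errs = [abs(f(rng.random((n, d))).mean() - exact) for _ in range(trials)]
    return np.mean(errs)

e1 = mc_error(100)
e2 = mc_error(10_000)
# 100x more samples shrinks the error by roughly 10x (N^(-1/2) scaling),
# independent of the dimension d
```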
1. Numerical integration
■ Quadrature/Integration rules
■ Efficient for smooth functions
2. Statistical sampling (Monte Carlo integration)
■ Unbiased estimate of integral
■ Variance reduction techniques
■ High dimensional sampling: error $\sim 1/N^{1/2}$
3. Signal processing
■ Sampling and reconstruction
■ Aliasing and antialiasing
■ Blue noise is good
4. Quasi Monte Carlo
■ Bound error using discrepancy
■ Asymptotic efficiency in high dimensions
Views of Sampling