Lecture 7: Monte Carlo Rendering
CS 6620, Spring 2009, Kavita Bala
Computer Science, Cornell University
MC Advantages
• Convergence rate of O(1/√N)
• Simple
  – Sampling
  – Point evaluation
  – Can use black boxes
• General
  – Works for high dimensions
  – Deals with discontinuities, crazy functions, …
Importance Sampling

[Figure: a function f(x) plotted over [0, 1]]

• Why do we sample by p(x)?
• Why not just uniformly?
• Better use of samples by taking more samples in ‘important’ regions, i.e. where the function is large
Non-Uniform Samples
• 1) Choose a normalized probability density function p(x)
• 2) Integrate to get a cumulative distribution function P(x):
$$P(x) = \int_0^x p(t)\,dt$$
• 3) Invert P:
$$x_i = P^{-1}(\xi_i)$$
Note this is similar to going from the y axis to the x axis in the discrete case!
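A minimal sketch of this three-step recipe, assuming a toy density p(x) = 2x on [0, 1] (chosen here for illustration, not from the slides):

```python
import random

# Inversion method for the toy density p(x) = 2x on [0, 1].
# Step 2: P(x) = x^2.  Step 3: x = P^{-1}(xi) = sqrt(xi).
def sample_p():
    xi = random.random()      # uniform xi_i in [0, 1)
    return xi ** 0.5          # x_i = P^{-1}(xi_i)

samples = [sample_p() for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~2/3, the mean of p(x) = 2x
```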
Importance Sampling
• General principle: the closer the shape of p(x) is to the shape of f(x), the lower the variance
• Variance can also increase if p(x) is chosen badly
$$p(x) = \frac{f(x)}{\int_D f(x)\,dx}$$
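A small numerical illustration of this principle (the integrand and pdfs here are hypothetical, not from the slides): estimating $\int_0^1 x^2\,dx = 1/3$ with a uniform pdf versus a pdf shaped like the integrand.

```python
import random, statistics

# Hypothetical demo: estimate I = \int_0^1 x^2 dx = 1/3 two ways.
N = 100_000

# Uniform pdf p(x) = 1: the estimator is just f(x) = x^2.
uni = [random.random() ** 2 for _ in range(N)]

# Importance pdf p(x) = 2x (shaped like f), sampled by inversion
# (x = sqrt(xi)): the estimator f(x)/p(x) = x/2 varies far less.
imp = [random.random() ** 0.5 / 2 for _ in range(N)]

print(statistics.fmean(uni), statistics.pvariance(uni))  # ~0.333, ~0.089
print(statistics.fmean(imp), statistics.pvariance(imp))  # ~0.333, ~0.014
```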
Cosine distribution

$$f(\theta,\phi) = \frac{1}{\pi}\cos\theta\,\sin\theta$$

$$p(\theta,\phi) = \frac{1}{\pi}\cos\theta\,\sin\theta$$

$$\mathrm{CDF}(\theta,\phi) = \int_0^{\phi}\int_0^{\theta}\frac{\cos\theta'\sin\theta'}{\pi}\,d\theta'\,d\phi' = \frac{\phi}{2\pi}\cdot\frac{1-\cos 2\theta}{2}$$

$$F_\theta(\theta) = \frac{1-\cos 2\theta}{2}\qquad F_\phi(\phi) = \frac{\phi}{2\pi}$$

$$\theta_i = \tfrac{1}{2}\cos^{-1}(1-2u_1)\qquad \phi_i = 2\pi u_2$$
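A minimal sketch of sampling this distribution using the inverted marginal CDFs above:

```python
import math, random

# Cosine-weighted hemisphere sampling via the inverted CDFs:
# theta = 0.5 * acos(1 - 2*u1), phi = 2*pi*u2.
def sample_cosine_hemisphere():
    u1, u2 = random.random(), random.random()
    theta = 0.5 * math.acos(1.0 - 2.0 * u1)
    phi = 2.0 * math.pi * u2
    sin_t = math.sin(theta)
    # Unit direction in the local frame where z is the surface normal.
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))
```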
Rejection Methods
• Pick ξ₁, ξ₂
• If ξ₂ < f(ξ₁), accept ξ₁
• Is this efficient? What determines efficiency? A(f)/A(rectangle)

[Figure: f(x) inside a bounding rectangle over [a, b]]

$$I = \int_a^b f(x)\,dx$$
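A sketch of the rejection loop, assuming a black-box f with a known bound f_max (the names f, a, b, f_max are placeholders):

```python
import random

# Rejection sampling over [a, b]: throw darts into the bounding
# rectangle and keep the x-coordinates that land under the curve.
def rejection_sample(f, a, b, f_max):
    while True:
        x = a + (b - a) * random.random()   # xi_1, scaled to [a, b]
        y = f_max * random.random()         # xi_2, scaled to [0, f_max]
        if y < f(x):                        # under f: accept x
            return x

# Efficiency is A(f)/A(rectangle): a loose bound f_max wastes samples.
x = rejection_sample(lambda t: t * t, 0.0, 1.0, 1.0)
```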
MC applied to RE

$$L(x\to\Theta) = L_e(x\to\Theta) + \int_{\Omega_x} f_r(x,\Psi\leftrightarrow\Theta)\,L(x\leftarrow\Psi)\cos(\Psi,n_x)\,d\omega_\Psi$$

[Figure: a point x with incoming radiance L(x←Ψ), emission L_e(x→Θ), and outgoing radiance L(x→Θ)]
Radiance Evaluation
• Many different light paths contribute to a single radiance value
  – many paths are unimportant
• Tools we need:
  – generate the light paths
  – sum the contributions of all light paths
  – clever techniques to select important paths
Assumptions: black boxes
• Can query the scene geometry and materials:
  – surface points: $n_x = ?$, $f_r(x,\Theta\leftrightarrow\Psi) = ?$
  – light sources: $L_e(x\to\Theta) = ?$
  – visibility checks: $V(x,z) = 0$ or $1$
  – tracing rays
Rendering Equation

$$\underbrace{L(x\to\Theta)}_{\text{value we want}} = L_e(x\to\Theta) + \underbrace{\int_{\Omega_x} f_r(x,\Psi\leftrightarrow\Theta)\,L(x\leftarrow\Psi)\cos(\Psi,n_x)\,d\omega_\Psi}_{\text{function to integrate over all incoming directions on the hemisphere around }x}$$
How to compute?

• L(x→Θ) = ?
• Check for L_e(x→Θ)
• Now add L_r(x→Θ):

$$L_r(x\to\Theta) = \int_{\Omega_x} f_r(x,\Psi\leftrightarrow\Theta)\,L(x\leftarrow\Psi)\cos(\Psi,n_x)\,d\omega_\Psi$$
How to compute?
• Monte Carlo!
• Generate random directions on the hemisphere Ω_x, using pdf p(Ψ):

$$L_r(x\to\Theta) = \int_{\Omega_x} f_r(x,\Psi\leftrightarrow\Theta)\,L(x\leftarrow\Psi)\cos(\Psi,n_x)\,d\omega_\Psi$$

$$\langle L_r(x\to\Theta)\rangle = \frac{1}{N}\sum_{i=1}^{N}\frac{f_r(x,\Psi_i\leftrightarrow\Theta)\,L(x\leftarrow\Psi_i)\cos(\Psi_i,n_x)}{p(\Psi_i)}$$
How to compute?

Generate random directions Ψ_i:
– evaluate the BRDF
– evaluate the cosine term
– evaluate L(x←Ψ_i)

$$\langle L_r\rangle = \frac{1}{N}\sum_{i=1}^{N}\frac{f_r(\ldots)\,L(x\leftarrow\Psi_i)\cos(\ldots)}{p(\Psi_i)}$$
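A self-contained sketch of this estimator in a deliberately simple setting: diffuse BRDF, constant incoming radiance, uniform hemisphere pdf. All of these values are assumptions for illustration, standing in for the black boxes above.

```python
import math, random

# Estimator sketch: fr = albedo/pi (diffuse), L(x <- psi) = 1 everywhere,
# p(psi) = 1/(2*pi) (uniform hemisphere).  True answer: albedo * 1.
def sample_uniform_hemisphere():
    u1, u2 = random.random(), random.random()
    z = u1                                   # cos(theta)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_Lr(albedo=0.8, L_in=1.0, N=100_000):
    fr = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(N):
        psi = sample_uniform_hemisphere()
        cos_term = psi[2]                    # n_x = +z in the local frame
        total += fr * L_in * cos_term / pdf
    return total / N

print(estimate_Lr())                         # ~0.8 = albedo * L_in
```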
How to compute?
• evaluate L(x←Ψi)?
• Radiance is invariant along straight paths
• vp(x, Ψi) = first visible point
• L(x←Ψi) = L(vp(x, Ψi) → −Ψi)
How to compute? Recursion …

• Each additional bounce adds one more level of indirect light
• Handles ALL light transport
• “Stochastic Ray Tracing”
When to end recursion?
• Contributions of further light bounces become less significant
• If we just ignore them, estimators will be biased!
Russian Roulette

[Figure: f(x) over [0, 1], with the roulette threshold P marked]

• Integral:
$$I = \int_0^1 f(x)\,dx = \int_0^P \frac{f(y/P)}{P}\,dy$$

• Estimator:
$$\langle I_{\text{roulette}}\rangle = \begin{cases}\dfrac{f(x_i/P)}{P} & \text{if } x_i \le P\\[4pt] 0 & \text{if } x_i > P\end{cases}$$

• Variance:
$$\sigma_{\text{roulette}} > \sigma$$
Russian Roulette
• Pick some ‘absorption probability’ α
  – probability 1−α that the ray will bounce
  – estimated radiance becomes L / (1−α)
• E.g. α = 0.9
  – only 1 chance in 10 that the ray is reflected
  – the estimated radiance of that ray is multiplied by 10
  – instead of shooting 10 rays, we shoot only 1, but count the contribution of this one 10 times
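A minimal sketch of this termination rule; the reweighting by 1/(1−α) is what keeps the estimator unbiased:

```python
import random

# Russian roulette with absorption probability alpha = 0.9, as above.
def roulette(contribution, alpha=0.9):
    if random.random() < alpha:
        return 0.0                          # absorbed: recursion ends
    return contribution / (1.0 - alpha)     # survivor counted 10 times

# Unbiasedness check: the mean over many trials approaches 0.5.
trials = [roulette(0.5) for _ in range(100_000)]
print(sum(trials) / len(trials))
```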
Algorithm so far …
• Shoot a viewing ray through each pixel
• Shoot # indirect rays, sampled over the hemisphere
• Terminate the recursion using Russian roulette
Algorithm

[Figure: recursive evaluation along a path; at each surface point the algorithm asks L = ?, checks the emission (L_e = 0 at most points, e.g. L_e = 1.234 at a light source), and estimates L_r from sampled incoming radiance L_in]
Pixel Anti-Aliasing

• Compute radiance only at the center of the pixel: jaggies
• Simple box filter:
$$L = \int_{\text{pixel}} L(x)\,dx$$
• … evaluate using MC
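A sketch of the MC evaluation: average radiance at jittered positions inside the pixel. The function radiance(u, v) is a placeholder for shooting a viewing ray through that image-plane point.

```python
import random

# Box-filtered pixel value by Monte Carlo: jitter N sample positions
# uniformly inside the pixel and average the returned radiance.
def box_filtered_pixel(radiance, px, py, N=16):
    total = 0.0
    for _ in range(N):
        dx, dy = random.random(), random.random()
        total += radiance(px + dx, py + dy)   # trace a ray through (u, v)
    return total / N
```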
Stochastic Ray Tracing
• Parameters?
  – # starting rays per pixel
  – # random rays for each surface point (branching factor)
• Path Tracing
  – branching factor == 1
Path tracing

[Images: 1 ray / pixel, 10 rays / pixel, 100 rays / pixel]

• Pixel sampling + light source sampling folded into one method
Comparison

[Images: left, 1 centered viewing ray with 100 random shadow rays per viewing ray; right, 100 random viewing rays with 1 random shadow ray per viewing ray]
Performance/Error
• Want better quality with a smaller number of samples
  – fewer samples / better performance
  – stratified sampling
  – quasi-Monte Carlo: well-distributed samples
• Faster convergence
  – importance sampling: next-event estimation
Stratified Sampling

[Figure: f(x) over [0, 1], split into strata]

• Samples could be arbitrarily close
• Split the integral into subparts:
$$I = \int_{X_1} f(x)\,dx + \ldots + \int_{X_N} f(x)\,dx$$
• Estimator:
$$\langle I_{\text{strat}}\rangle = \frac{1}{N}\sum_{i=1}^{N}\frac{f(x_i)}{p(x_i)}$$
• Variance:
$$\sigma_{\text{strat}} \le \sigma$$
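A one-dimensional sketch with p(x) = 1: one jittered sample per stratum.

```python
import random

# Stratified estimate of \int_0^1 f(x) dx: one jittered sample in each
# of N equal subintervals, so samples can no longer clump together.
def stratified_estimate(f, N):
    total = 0.0
    for i in range(N):
        x = (i + random.random()) / N    # jitter within stratum i
        total += f(x)
    return total / N

print(stratified_estimate(lambda x: x * x, 100))   # ~1/3
```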
Numerical example

[Slide content (worked numbers) lost in transcription]
Stratified Sampling

[Images: 9 shadow rays, not stratified vs. 9 shadow rays, stratified]
Stratified Sampling

[Images: 36 shadow rays, not stratified vs. 36 shadow rays, stratified]
Stratified Sampling

[Images: 100 shadow rays, not stratified vs. 100 shadow rays, stratified]
Stratified and Importance?
2 Dimensions

[Figure: a 4×4 stratified grid of sample points → N² samples]

• Problem for higher dimensions
• Sample points can still be arbitrarily close to each other
Higher Dimensions
• Stratified grid sampling: → N^d samples
• N-rooks sampling: → N samples

[Figure: a 4×4 stratified grid vs. 4 N-rooks samples, one per row and one per column]
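A sketch of N-rooks (Latin hypercube) sampling in 2-D: a random permutation pairs each row with a unique column, like non-attacking rooks.

```python
import random

# N-rooks sampling: N points, stratified in x and in y separately.
def n_rooks(N):
    cols = list(range(N))
    random.shuffle(cols)                 # unique column for each row
    return [((i + random.random()) / N,
             (cols[i] + random.random()) / N) for i in range(N)]

points = n_rooks(9)                      # e.g. the 9-ray case below
```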
N-Rooks Sampling - 9 rays

[Images: not stratified / stratified / N-rooks]
N-Rooks Sampling - 36 rays

[Images: not stratified / stratified / N-rooks]
Other types of Sampling

• How does it relate to regular sampling?

[Images: random sampling vs. regular sampling]
Quasi Monte Carlo
• Eliminates randomness to find well-distributed samples
  – Why? Avoid clumping
  – Why? Better convergence properties

[Images: Random, Stratified, Sobol, Halton sample patterns]
Quasi-Monte Carlo (QMC)
• Notions of variance and expected value don’t apply: why?
• Introduce the notion of discrepancy
  – Discrepancy mimics variance
  – Need a low-discrepancy sequence
  – E.g., for a subset [0, x] of the unit interval: of N samples, n fall in the subset. Discrepancy: |x − n/N|
  – Mainly: “it looks random”
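A sketch of this 1-D discrepancy measure; the worst case over all prefixes [0, x] is attained at the sample points themselves, which is what the loop checks.

```python
# 1-D discrepancy: worst-case gap between the interval length x and the
# fraction n/N of samples falling in [0, x].
def discrepancy_1d(samples):
    xs = sorted(samples)
    N = len(xs)
    worst = 0.0
    for n, x in enumerate(xs, start=1):
        # Just below x the count is n-1; at and above x it is n.
        worst = max(worst, abs(x - (n - 1) / N), abs(x - n / N))
    return worst

print(discrepancy_1d([0.125, 0.375, 0.625, 0.875]))   # 0.125, very even
```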
Example: Halton
• Radical inverse φ_p(i) for primes p
• Reflect the digits of i (base p) about the decimal point:
  φ₂(i): 111010₂ → 0.010111₂
• Radical inverse function:
$$i = \sum_j a_j(i)\,b^j \qquad \Phi_b(i) = \sum_j a_j(i)\,b^{-j-1}$$
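A direct sketch of the radical inverse, peeling off base-b digits and mirroring them across the radix point:

```python
# Radical inverse Phi_b(i): reflect the base-b digits of i.
def radical_inverse(i, b):
    result, f = 0.0, 1.0 / b
    while i > 0:
        result += (i % b) * f     # digit a_j(i) lands at weight b^{-j-1}
        i //= b
        f /= b
    return result

print(radical_inverse(0b111010, 2))   # 0.359375 = 0.010111 in binary
```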
Halton
• Sample:
$$x_i = \left(\Phi_{b_1}(i),\ \Phi_{b_2}(i),\ \Phi_{b_3}(i),\ \ldots\right)$$
  – where b₁, b₂, b₃, … are primes: x_i = (φ₂(i), φ₃(i), φ₅(i), φ₇(i), φ₁₁(i), …)
• Discrepancy:
$$O\!\left(\frac{(\log N)^d}{N}\right)$$
Example: Hammersley
• Say we know what N is ahead of time
• For N samples, a Hammersley point is (i/N, φ₂(i))
• For more dimensions:
  – x_i = (i/N, φ₂(i), φ₃(i), φ₅(i), φ₇(i), φ₁₁(i), …)
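A sketch of both constructions, reusing radical_inverse from the earlier snippet:

```python
# Halton and Hammersley points built on radical_inverse (defined above).
PRIMES = [2, 3, 5, 7, 11]

def halton(i, dims):
    return tuple(radical_inverse(i, PRIMES[d]) for d in range(dims))

def hammersley(i, N, dims):
    # First coordinate is i/N, so N must be fixed ahead of time.
    return (i / N,) + halton(i, dims - 1)

points = [hammersley(i, 64, 2) for i in range(64)]
```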
Quasi Monte Carlo
• Converges as fast as stratified sampling
  – Does not require knowing in advance how many samples will be used
• Using QMC, directions are evenly spaced no matter how many samples are used
• Samples are properly stratified → better than pure MC