TRANSCRIPT
Coarse-to-Fine Image Reconstruction
Rebecca Willett
In collaboration with
Robert Nowak and Rui Castro
[Figure: Poisson data (~14 photons/pixel), MSE = 0.0169; Haar tree pruning, O(n), MSE = 0.0033; wedgelet tree pruning, O(n^{11/6}), MSE = 0.0015]
Iterative reconstruction
E-Step: Compute the conditional expectation of a new noisy image estimate given the data and the current image estimate.
Traditional Shepp-Vardi M-Step: Maximum Likelihood Estimation
Improved M-Step: Complexity-Regularized Multiscale Poisson Denoising (Willett & Nowak, IEEE-TMI '03)
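For reference, the classical Shepp-Vardi EM (MLEM) update that the traditional M-step implements, written in notation I am assuming here ($A$ the system matrix, $y$ the measured Poisson counts, $x^{(k)}$ the current image iterate):

$$ x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij}\, \frac{y_i}{(A x^{(k)})_i} $$

The improved M-step replaces this straight maximum-likelihood update with the complexity-regularized multiscale Poisson denoiser applied to the E-step output.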
[Figure: Shepp-Logan phantom reconstructions: MLE, Jeff Fessler's PWLS, and wedgelet-based reconstruction]
Wedgelet-based tomography
Tomography
A simple image model
Piecewise constant 2-D function with "smooth" edges.
Measurement model
Access only to n noisy "pixels".
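The displayed model is missing from the transcript; a minimal sketch of the standard setting in this line of work (the notation $f^*$, $x_i$, $w_i$, $\sigma^2$ is mine):

$$ y_i \;=\; f^*(x_i) + w_i, \qquad w_i \overset{iid}{\sim} \mathcal{N}(0, \sigma^2), \qquad i = 1, \dots, n, $$

with the sample locations $x_i$ on a uniform $\sqrt{n} \times \sqrt{n}$ grid over $[0,1]^2$.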
Goal: find an estimate $\hat{f}_n$ of the original image $f^*$ such that the risk $E\|f^* - \hat{f}_n\|^2$ is small.
Image space
Kolmogorov metric entropy (Dudley '74)
Risk bound: approximation error + estimation error
Minimax lower bound (Korostelev & Tsybakov '93)
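The bound itself does not survive in the transcript; a sketch of the standard metric-entropy form it presumably took, in my notation, with $N(\epsilon)$ the $\epsilon$-covering number of the image space (so $\log N(\epsilon)$ is the Kolmogorov metric entropy):

$$ E\big\|f^* - \hat{f}_n\big\|^2 \;\lesssim\; \inf_{\epsilon > 0} \Big\{ \underbrace{\epsilon^2}_{\text{approx. err.}} \;+\; \underbrace{\frac{\log N(\epsilon)}{n}}_{\text{estimation err.}} \Big\} $$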
Adaptively pruned partitions
Tree pruning estimation
Partitions and Estimators
Sum-of-squared errors empirical risk:
Complexity penalized estimator:
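Both displays are missing from the transcript; a sketch of their usual forms, assuming a least-squares fit over pruned partitions with a penalty proportional to partition size (the weight $\lambda$ is my notation, typically of order $\sigma^2 \log n$):

$$ \hat{R}_n(f_P) \;=\; \frac{1}{n} \sum_{i=1}^{n} \big(y_i - f_P(x_i)\big)^2 $$

$$ \hat{f}_n \;=\; \arg\min_{P \in \mathcal{P}} \Big\{ \hat{R}_n(f_P) + \lambda\,|P| \Big\} $$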
Complexity Regularization and the Bias-Variance Trade-off
Here 𝒫 is the set of all possible tree prunings and |P| the number of cells in pruning P; the empirical-risk term measures fidelity to the data, while the penalty measures complexity.
The Li-Barron bound
approximation error (bias) + estimation error (variance)
Li & Barron '00; Nowak & Kolaczyk '01
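One common statement of the bound for such penalized estimators over a countable model class $\Gamma$ (the constants $C_1$, $C_2$ and this exact form are assumptions of mine; see Li & Barron '00 and Nowak & Kolaczyk '01 for precise conditions):

$$ E\big\|f^* - \hat{f}_n\big\|^2 \;\le\; \min_{f \in \Gamma} \Big\{ \underbrace{C_1 \|f^* - f\|^2}_{\text{approximation error (bias)}} \;+\; \underbrace{C_2\, \frac{\ell(f)\log 2}{n}}_{\text{estimation error (variance)}} \Big\}, $$

where $\ell(f)$ is the codelength in bits assigned to model $f$ by a prefix code.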
The Kraft inequality
[Figure: a pruned quadtree encoded by a binary prefix code, one bit per node (e.g., 1 = split, 0 = leaf), giving codewords such as 1, 1110, ...]
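The inequality the code must satisfy, with $\ell(P)$ the length in bits of the codeword describing pruning $P$:

$$ \sum_{P \in \mathcal{P}} 2^{-\ell(P)} \;\le\; 1 $$

Any prefix code (such as the one-bit-per-node tree code pictured) satisfies this, and so supplies valid complexity penalties $\ell(P)$ for the Li-Barron bound.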
Decorate each partition set with a constant: the squared approximation error decays too slowly, so this class of models is not well-matched to the class of images.
Estimating smooth contours - Haar
Donoho ‘99
Approximating smooth contours - wedgelets
Approximating smoother contours
[Figure: original image | Haar wavelet partition | wedgelet partition]
Wedgelet
> 850 terms vs. < 370 terms (Donoho '99)
Use wedges and decorate each partition set with a constant: the squared approximation error now decays at the best achievable rate!
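To put rates on the two decorations, assuming the standard cartoon-image analysis for a $C^2$ boundary (my summary, consistent with the term counts quoted above):

$$ \text{Haar (constants only): } \|f - f_m\|^2 = O(m^{-1}), \qquad \text{wedgelets: } \|f - f_m\|^2 = O(m^{-2}), $$

and $O(m^{-2})$ matches the best achievable rate for this class.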
Estimating smoother contours - wedgelets
The problem with estimating smooth contours
Haar-based estimation: simple computation, poor approximation.
Wedgelet estimation: complex computation, good approximation.
Computational implications
The space of all signal models from which one is selected is very large.
A solution: coarse-to-fine model selection
A two-step process: first search over a coarse model space, then search over a small subset of fine models.
Coarse-to-fine model selection
Start with a uniform partition
C2F wedgelets: two-stage optimization
Stage 1: Adapt the partition to the data by pruning.
Stage 2: Apply wedges only in the small boxes that remain (a sketch follows below).
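A minimal Python sketch of the two-stage search (my own illustrative implementation, not the authors' code; the names const_err, wedge_err, prune, c2f_wedgelet and the penalty weight lam are mine, and the brute-force edge dictionary is a simplification):

```python
import numpy as np

def const_err(block):
    """Squared error of the best constant fit to a block."""
    return float(((block - block.mean()) ** 2).sum())

def wedge_err(block, n_angles=8):
    """Squared error of the best wedge fit: split the block by a straight
    edge and fit a separate constant on each side. Brute force over a
    small dictionary of edges (illustration only)."""
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best = const_err(block)  # a degenerate wedge is just a constant
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = xx * np.cos(theta) + yy * np.sin(theta)
        for t in np.unique(proj)[:-1]:
            mask = proj <= t
            best = min(best, const_err(block[mask]) + const_err(block[~mask]))
    return best

def prune(img, x, y, size, lam):
    """Stage 1: CART-style bottom-up pruning of a recursive dyadic
    partition using constant fits. Returns (penalized cost, leaves)."""
    block = img[y:y + size, x:x + size]
    leaf_cost = const_err(block) + lam          # cost of keeping one cell
    if size > 1:
        half = size // 2
        sub = [prune(img, x + dx, y + dy, half, lam)
               for dx in (0, half) for dy in (0, half)]
        split_cost = sum(c for c, _ in sub)
        if split_cost < leaf_cost:              # splitting pays for itself
            return split_cost, [leaf for _, ls in sub for leaf in ls]
    return leaf_cost, [(x, y, size)]

def c2f_wedgelet(img, lam=1.0, max_wedge_size=8):
    """Two-stage C2F estimate on a square image with power-of-two side:
    prune first with cheap constant fits, then spend the expensive wedge
    search only on the small surviving cells (the likely edge cells)."""
    _, leaves = prune(img, 0, 0, img.shape[0], lam)
    total = 0.0
    for (x, y, size) in leaves:
        block = img[y:y + size, x:x + size]
        if size <= max_wedge_size:   # Stage 2: wedges only in small boxes
            total += wedge_err(block)
        else:                        # large boxes keep their constant fit
            total += const_err(block)
    return leaves, total
```

The point of the structure is the cost split: stage 1 touches every pixel but only computes means, while the brute-force wedge search runs only over the few small cells left unpruned.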
Error analysis of two-stage approach:
(Castro, Willett, Nowak, ICASSP ‘04)
Controlling variance in the preview stage
Start with a coarse partition in the first stage:
• lowers the variance of the coarse-resolution estimate
• with high probability, the pruned coarse partition is close to the optimal coarse partition
• unpruned boxes at this stage indicate edges or boundaries
Controlling bias in the preview stage
Bias becomes large if a square containing a boundary fragment is pruned in the first stage (this may happen if a boundary is close to the side of the squares)
Solution:
• Compute TWO coarse partitions: one normal, one shifted.
• Refine any region left unpruned in either or both shifts.
[Figure: a boundary running close to a cell edge is a potential problem area in the original partition, but not a problem after the shift]
Computational implications
[Figure: noisy data, MSE = 0.0052; stage 1 result, MSE = 0.1214; stage 2 result, O(n^{7/6}) computation, MSE = 0.00046]
Main result in action
Compare with standard wedgelet denoising: O(n^{11/6}), MSE = 0.00073.
Significant computational savings and a better result!
[Figure: low-resolution vs. high-resolution views]
C2F limitations: The "ribbon"
C2F and other greedy methods:
Matching pursuit
20 Questions (Geman & Blanchard, ‘03)
Boosting
More general image models
Platelets: planar fits
(Willett & Nowak, IEEE-TMI '03; Willett & Nowak, Wavelets X; Nowak, Mitra, & Willett, JSAC '03)
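In my notation, a platelet decorates each cell (or each piece of a wedge-split cell) with an affine rather than constant surface:

$$ f(u, v) \;=\; a u + b v + c \quad \text{on each cell or wedge piece,} $$

so slowly varying intensities and smooth boundaries can both be captured at once.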
Platelet Approximation Theory
m-term approximation error decay rates, for twice continuously differentiable image surfaces with twice continuously differentiable boundaries:
• Fourier: O(m^{-1/2})
• Wavelets: O(m^{-1})
• Wedgelets: O(m^{-1})
• Platelets: O(m^{-2})
• Curvelets: O(m^{-2})
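A quick Taylor-expansion sanity check (my own sketch) of why planar fits gain an order over constants on $C^2$ surfaces: on a cell of side $\ell$, the best constant leaves a residual of order $\ell$ (gradient term) and the best plane a residual of order $\ell^2$ (Hessian term), so

$$ \text{per-cell squared error: } O(\ell^2)\cdot\ell^2 = O(\ell^4) \quad \text{vs.} \quad O(\ell^4)\cdot\ell^2 = O(\ell^6); $$

with $m$ cells of side $\ell \sim m^{-1/2}$, the totals are $m \cdot O(m^{-2}) = O(m^{-1})$ and $m \cdot O(m^{-3}) = O(m^{-2})$, matching the table.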
Confocal microscopy simulation
[Figure: noisy image | Haar estimate | platelet estimate]
C2F limitations: complex images
• "Images are edges": many images consist almost entirely of edges.
• The C2F model is still appropriate for many applications:
  – nuclear medicine
  – feature classification
  – temperature field estimation
C2F in multiple dimensions
Final remarks and ongoing work
• Careful greedy methods can perform as well as exhaustive searches, both in theory and in practice.
• Coarse-to-fine estimation dramatically reduces computational complexity.
• Similar ideas can be used in other scenarios:
  – Reduce the amount of data required (e.g., active learning and adaptive sampling)
  – Reduce the number of bits required to encode model locations in compression schemes