
High Dynamic Range Video Using Split Aperture Camera

Abstract

We present a new approach to display High Dynamic Range (HDR) video using gradient-based high dynamic range compression. To obtain HDR video, we use a split aperture camera. We apply a spatio-temporal gradient-based video integration algorithm for fast and accurate integration of the three differently exposed input videos into a low dynamic range video suitable for display. The spatio-temporal video integration produces videos with temporal coherency and without artifacts. To improve computational speed, we propose using a diagonal multigrid algorithm to solve the Poisson equation. We show experimental results on a variety of dynamic scenes.

1 Introduction

A conventional digital camera typically provides a dynamic range of two orders of magnitude through the CCD's analog-to-digital converter (the ratio between the brightest and the darkest pixel intensities is usually referred to as the dynamic range of a digital image). However, many real-world scenes exhibit a larger brightness variation. Thus, some areas of images captured by digital cameras are undersaturated or oversaturated.

Tonemapping (also called tone reproduction) aims to faithfully reproduce high dynamic range radiance in a low dynamic range image for display. Display of HDR video is the main problem we address in this paper. To capture a high dynamic range image with conventional cameras, several images with different exposures are usually taken to cover the whole range of a real scene. These images are combined into a single high dynamic range image: a radiance map is recovered from them [1]. Tonemapping methods are then applied to the radiance map to reduce the dynamic range, and the resulting low dynamic range (LDR) image can be viewed on conventional display devices.

Capturing high dynamic range video involves dealing with motion in the scene, and hence it is not possible to capture the radiance map via a single camera with successive multi-exposures. In addition, for tonemapping HDR video, we cannot trivially tonemap successive frames: this naive approach lacks temporal coherence, resulting in flicker. Thus, capture and compression of HDR video has remained a challenging problem. As described later, some commercial and research hardware approaches to this problem have been proposed. In one of the few software solutions, Kang et al. [2] describe an approach that varies the exposure of alternate frames. It requires a burdensome registration of features in successive frames to compensate for motion; given the feature correspondence problem, rapid movements and significant occlusions cannot be handled easily. In addition, the two different exposures may not capture the full radiance map of a scene, and more exposures would make the feature registration problem even more difficult.

In this paper, we propose a new approach for displaying HDR video using gradient-based HDR compression. We use a camera rig with three built-in CCD sensors that share the same view along a common optical axis; hence, we can capture truly dynamic scenes without frame registration. Our major contribution is a new 3D integration algorithm for HDR video compression. We use diagonally oriented grids to obtain fast and accurate solutions of the resulting Poisson equation in three-dimensional space.

2 Related Work

Though capturing high dynamic range video is not our main contribution, a specially designed camera is used in our HDR video display framework to overcome the disadvantages of the work by Kang et al. [2]. Therefore, we first present a brief review of HDR capture, followed by the related work in tonemapping.

2.1 Capture

To capture HDR video, sequential exposure change [3, 4, 5] is not an option. Some researchers have proposed using specially designed single sensors [6, 7, 8] or multiple image sensors [9, 10, 11], as well as sensors with spatially varying [4] or spatio-temporally varying [12] pixel exposures. Some CCD sensors are designed with each pixel having two elements of different sensitivity [7, 8]. In [6], the authors describe a sorting computational sensor in which each pixel measures the time to reach full potential well capacity; all pixels of the input image are sorted according to their intensities. Many of these techniques trade spatial resolution for dynamic range while maintaining the video frame rate.

Several methods have been proposed that do not entail the above tradeoff. Nayar and Branzoi [12] adapt the exposure of each pixel on the image detector using a controllable attenuator, based on the radiance of the corresponding scene point.

Multiple image sensors [9, 10, 11] are often used to capture video-rate HDR video while keeping the spatial resolution of the original sensors. For example, Aggarwal and Ahuja [9, 10] use a mirror-based beam splitter to split the light refracted by the lens into three beams, which reach three different sensors. The camera captures at video rate with a controlled exposure time for each sensor. We use a similar three-channel camera in our prototype.

2.2 Display (Tonemapping)

Many tonemapping algorithms for compressing and displaying HDR images have been proposed [13, 14, 15, 16]. Durand et al. [13] propose using bilateral filtering to decompose an HDR image into a base layer and a detail layer, and then compress the contrast of the base layer. Reinhard et al. [16] achieve local luminance adaptation using the photographic technique of dodging and burning. Tumblin and Turk [15] propose the low curvature image simplifier (LCIS), applying anisotropic diffusion to prevent halo artifacts. Fattal et al. [14] propose a method that attenuates high-intensity gradients while magnifying low-intensity ones; the luminance is recovered from the compressed gradients by solving a Poisson equation.

In spite of the great efforts on HDR image display, robust algorithms for tonemapping HDR video are not yet common. Kang et al. [2] propose a solution to prevent flickering in the mapping due to temporal inconsistency, but only a global mapping is applied, which does not adapt to coarse temporal intensity variations.

In this paper, we propose a gradient domain technique to compress high dynamic range videos. Our work is inspired by that of Fattal et al. [14]; the resulting images are free of halos and other artifacts. However, we cannot apply the dynamic range compression method directly in a frame-by-frame manner: temporal consistency would be violated, and undesired flickering and color shifts would result from the shift and scale ambiguity in image integration and the tonemapping exponents in color assignment.

Figure 1. Video after frame-by-frame 2D integration (left) and 3D integration of the video cube (right).

3 Gradient Domain Video HDR Compression

Gradient domain techniques have been widely used in computer vision and computer graphics. The idea is to minimize the gradient difference between the source and target images when the gradient field of the source image is modified to obtain the target one. This technique is inspired by the retinex theory originally proposed by Land and McCann in 1971 [17]. Since then, a number of applications based on this technique have been proposed, such as image editing [18], shadow removal [19], multispectral image fusion [20], image and video fusion for context enhancement [21], and HDR image compression [14]. This paper extends the gradient-based technique to three dimensions by considering both spatial and temporal gradients.

In the following, we present our 3D video integrationalgorithm in detail.

3.1 Video as 3D Cube

The gradient domain method proposed by Fattal et al. [14] can be considered 2D integration of a modified 2D gradient field. As mentioned earlier, the integration involves a scale and shift ambiguity in luminance, plus an image-dependent exponent when assigning colors. Hence, a straightforward application to video will lack temporal coherency in luminance and flicker in color. We instead treat the video as a 3D block of pixels and solve the problem via 3D integration of a modified 3D gradient field.

Consider an extreme example to test both approaches: we deliberately set to zero all gradients in a video that are smaller than some threshold. The video obtained via 2D or 3D integration then has a flattened-texture (cartoon-like) effect. As seen in the accompanying video, the frame-by-frame 2D integration approach results in noticeable flicker, while the video from 3D integration shows near-constant, large flat colored regions. This is illustrated in the snapshot in Fig. 1 (see the accompanying video).


Algorithm 1: General algorithm for HDR video display
Data: LDR videos I1, I2, . . . , In
Result: HDR video I (suitable for display)
1. Recover the radiance map;
2. Attenuate large gradients and magnify small ones (Sec. 3.3);
3. Reconstruct the new video I by solving a Poisson equation.
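As a minimal sketch only, Algorithm 1 might be organized as below in Python. The three helper functions are sketched in the sections that follow (Secs. 3.2-3.4); all names, the module collecting them, and the parameter defaults are illustrative assumptions of this sketch, not the authors' actual (Matlab) implementation.

```python
import numpy as np

# Hypothetical module collecting the helper sketches from Secs. 3.2-3.4 below.
from hdr3d_sketch import recover_radiance_map, attenuate_gradients, solve_poisson_3d

def hdr_video_display(ldr_videos, exposures, beta=0.15):
    """Sketch of Algorithm 1: n synchronized LDR streams -> displayable video.

    ldr_videos: list of (T, H, W) float arrays in [0, 1], one per exposure.
    exposures:  relative exposures of the streams, e.g. [1.0, 0.5, 0.25].
    """
    # Step 1: recover the scene radiance map (Sec. 3.2, cf. [1]).
    radiance = recover_radiance_map(ldr_videos, exposures)

    # Work on log luminance, following Fattal et al. [14].
    log_lum = np.log(radiance + 1e-6)

    # Step 2: attenuate large spatial gradients, magnify small ones (Sec. 3.3).
    G = attenuate_gradients(log_lum, beta=beta)

    # Step 3: reconstruct by solving the 3D Poisson equation (Sec. 3.4).
    I = solve_poisson_3d(G)
    return np.exp(I)
```

With the three streams from the split aperture camera (filter transmittances 1, 0.5 and 0.25), the call would be hdr_video_display([v1, v2, v3], [1.0, 0.5, 0.25]).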

3.2 3D Video Integration

Our video HDR compression problem is stated as follows: given n synchronized LDR videos I1, I2, . . . , In with different exposures, find an HDR video I which is suitable for typical displays. First, the radiance map is computed from corresponding frames of the input videos using a method such as [1] (we do not discuss the details of radiance-map recovery here). Our task is then to generate a new video I whose gradient field is closest to the gradient field G of the HDR radiance-map video. The general algorithm for HDR video display is described in Algorithm 1.
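Purely as an illustrative assumption, a simple weighted merge with a linear camera response (a simplification of [1], which additionally estimates the response curve) could look like this:

```python
import numpy as np

def recover_radiance_map(ldr_videos, exposures):
    """Merge n differently exposed LDR streams into one radiance stream.

    Simplified sketch: assumes a linear camera response, so radiance is
    pixel value divided by relative exposure. A hat weighting trusts
    mid-range pixels most, in the spirit of [1].
    ldr_videos: list of (T, H, W) arrays in [0, 1].
    """
    num = np.zeros_like(ldr_videos[0])
    den = np.zeros_like(ldr_videos[0])
    for vid, exposure in zip(ldr_videos, exposures):
        w = 1.0 - np.abs(2.0 * vid - 1.0)   # hat weight: 0 at the extremes
        num += w * vid / exposure           # back-project to radiance
        den += w
    return num / np.maximum(den, 1e-6)
```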

One natural way to generate such a video is to solve the equation

∇I = G    (1)

However, since the original gradient field has been modified in some way (high gradients attenuated and low gradients magnified, in our case), the gradient field G is not necessarily integrable. Some part of the modified gradient may violate

∇ × G = 0    (2)

(i.e., the curl of a gradient is 0). This is a special case of the formulation by Kimmel et al. [22], in the sense that only the gradient field is considered here. Kimmel et al. proposed minimizing a penalty function of gradient and intensity in a variational framework, solved with a projected normalized steepest descent algorithm. Since we consider only the gradient field, we use a formulation similar to that of Fattal et al. [14] and extend it to 3D space by considering both spatial and temporal gradients. Our task is then to find a potential function I whose gradients are closest to G in the least-squares sense, by searching the space of all 3D potential functions; that is, to minimize the following integral in 3D space (hence the reference to 3D video integration in the sequel):

∫∫∫ F(∇I, G) dx dy dt    (3)

where

F(∇I, G) = ‖∇I − G‖² = (∂I/∂x − Gx)² + (∂I/∂y − Gy)² + (∂I/∂t − Gt)²

According to the Variational Principle, the function I that minimizes integral (3) must satisfy the Euler-Lagrange equation:

∂F/∂I − (d/dx)(∂F/∂Ix) − (d/dy)(∂F/∂Iy) − (d/dt)(∂F/∂It) = 0

We can then derive the 3D Poisson equation:

∇²I = ∇ · G    (4)

where ∇² is the Laplacian operator,

∇²I = ∂²I/∂x² + ∂²I/∂y² + ∂²I/∂t²

and ∇ · G is the divergence of the vector field G, defined as

∇ · G = ∂Gx/∂x + ∂Gy/∂y + ∂Gt/∂t

3.3 Gradient Attenuation

Our goal is to compress the high dynamic range by attenuating large gradients and magnifying low gradients. If we attenuated the 3D log-gradients in a straightforward way, artifacts would result, since the temporal gradients would be attenuated and the motion smoothed. To see this, imagine a ball moving through a scene: if we compress the temporal gradient of the sequence, the reconstruction of the scene will be blurred. Therefore, we choose to attenuate only the spatial gradients. We use a gradient attenuation function similar to that in [14]; the modified gradient is defined by

G′ = (α / ‖∇I′‖)^β · ∇I′    (5)

where G′ and I′ are defined in the log domain; α is 0.1 times the average gradient norm of ∇I′; and β is a constant with a value between 0 and 1. To reduce halo artifacts due to the modified gradients, a Gaussian pyramid technique is used in a top-down manner. Readers are encouraged to refer to [14] for more details.
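A single-scale sketch of Eq. (5) follows. Note that the actual method applies the attenuation through a top-down Gaussian pyramid as in [14]; this illustrative version omits the pyramid, and the (T, H, W) cube layout and forward differences are assumptions consistent with Sec. 3.4.

```python
import numpy as np

def attenuate_gradients(log_lum, beta=0.15):
    """Single-scale sketch of Eq. (5) on a (T, H, W) log-luminance cube.

    Returns the modified gradient field (Gx, Gy, Gt). Only the spatial
    gradients are rescaled; temporal gradients pass through unchanged
    to avoid smoothing motion (Sec. 3.3).
    """
    # Forward differences, as in Sec. 3.4 (h = 1); last slice gets 0.
    Gx = np.diff(log_lum, axis=2, append=log_lum[:, :, -1:])
    Gy = np.diff(log_lum, axis=1, append=log_lum[:, -1:, :])
    Gt = np.diff(log_lum, axis=0, append=log_lum[-1:, :, :])

    norm = np.sqrt(Gx**2 + Gy**2) + 1e-6     # spatial gradient magnitude
    alpha = 0.1 * norm.mean()                # alpha = 0.1 x average norm
    scale = (alpha / norm) ** beta           # <1 for large, >1 for small

    return Gx * scale, Gy * scale, Gt
```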

3.4 Discretization and Implementation

In order to solve the 3D Poisson equation (Equation 4), we use Neumann boundary conditions ∇I · n = 0, where n is the boundary normal vector. For 2D image integration, one can simply use a 4-neighbor grid to compute the Laplacian and divergence with the discrete approximations in [14]. For 3D video integration, the larger data size and computational complexity call for a fast algorithm. For this purpose, we use the diagonal multigrid algorithm originally proposed by Roberts [23] to solve the 3D Poisson equation. Unlike conventional multigrid algorithms, this algorithm uses diagonally oriented grids, which make the solution of the 3D Poisson equation converge quickly.

In this case, the intensity gradients are approximated by forward differences:

∇I = [ I(x+1, y, t) − I(x, y, t),  I(x, y+1, t) − I(x, y, t),  I(x, y, t+1) − I(x, y, t) ] / h

We represent the Laplacian as:

∇²I = [ −6·I(x, y, t) + I(x−1, y, t) + I(x+1, y, t) + I(x, y−1, t) + I(x, y+1, t) + I(x, y, t−1) + I(x, y, t+1) ] / h²

The divergence of the gradient field is approximated by backward differences:

∇ · G = [ Gx(x, y, t) − Gx(x−1, y, t) + Gy(x, y, t) − Gy(x, y−1, t) + Gt(x, y, t) − Gt(x, y, t−1) ] / h

where h is the grid spacing. This results in a large system of linear equations. We use the fast and accurate 3D multigrid algorithm of [23] to iteratively solve Equation 4, and thereby minimize integral (3). Owing to the diagonally oriented grids, this algorithm needs no interpolation when prolongating from a coarse grid onto a finer grid: a 'red-black' Jacobi iteration on the residual between the intensity Laplacian and the divergence of the gradient field avoids interpolation entirely. Most importantly, its speed of convergence is much better than that of the usual multigrid scheme.
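The actual solver is Roberts' diagonal multigrid [23], which we do not reproduce here. As a hedged stand-in that only demonstrates the discretization above (with h = 1), a plain Jacobi iteration can be written as follows; it converges far more slowly than the multigrid scheme and is meant purely to show the structure of the linear system.

```python
import numpy as np

def solve_poisson_3d(G, n_iter=500):
    """Solve the discrete 3D Poisson equation lap(I) = div(G) (Eq. 4).

    G is the modified gradient field (Gx, Gy, Gt) on a (T, H, W) grid.
    Edge replication approximates the Neumann boundary condition.
    """
    Gx, Gy, Gt = G

    # Backward-difference divergence matching the forward-difference
    # gradients; the first slice along each axis gets a zero difference.
    div = (np.diff(Gx, axis=2, prepend=Gx[:, :, :1])
           + np.diff(Gy, axis=1, prepend=Gy[:, :1, :])
           + np.diff(Gt, axis=0, prepend=Gt[:1, :, :]))

    I = np.zeros_like(div)
    for _ in range(n_iter):
        # Sum of the 6 neighbors, with edges replicated (Neumann-style).
        p = np.pad(I, 1, mode="edge")
        nbr = (p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
               + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
               + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
        # Jacobi step for  -6*I + nbr = div  =>  I = (nbr - div) / 6.
        I = (nbr - div) / 6.0
    return I
```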

One implementation detail concerns the boundary conditions. Though Neumann boundary conditions are specified, artifacts may still result from high gradients at the image boundary. We use a simplified padding technique, as in [21], to avoid artifacts near the boundary: the source video is padded with zeros in the first and last several frames, and in the first and last several rows and columns.
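A minimal version of this padding, assuming the (T, H, W) cube layout of the sketches above and the 5-pixel margin reported in Sec. 4.2:

```python
import numpy as np

def pad_video_cube(video, pad=5):
    """Zero-pad a (T, H, W) video cube on all six faces.

    Black margins (first/last frames, rows and columns) keep strong
    boundary gradients away from the solver's Neumann boundary,
    following the simplified padding technique of [21].
    """
    return np.pad(video, pad, mode="constant", constant_values=0.0)
```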

4 Experimental Results

4.1 HDR Video Capture

We use a split aperture camera [9] to capture HDR video. The camera uses a corner of a cube as a 3-faced pyramid, together with three CCD sensors. Three thin-film neutral density filters with transmittances of 1, 0.5 and 0.25 are placed in front of the respective sensors. We use a Matrox multichannel board capable of synchronizing and capturing the three channels simultaneously. The three sensors and the pyramid were carefully calibrated to ensure that all sensors are normal to their optical axes. The setup of our HDR video capture device is shown in Fig. 2.

Figure 2. The camera used to capture HDR video.

4.2 Results

We test our 3D video integration algorithm for video HDR compression on a variety of scenes. To maintain the Neumann boundary conditions, we pad the video cube with 5 pixels in each direction during preprocessing: the first and last 5 frames, and the first and last 5 rows/columns of each frame input to the algorithm, are all black. The attenuation parameter β in Equation 5 is set to 0.15 in all experiments.

Fig. 3 shows an example of three videos captured using our camera. Due to the shadow of the trees and the strong sunlight, none of the individual sensors can capture the whole range of this dynamic scene. For example, the trees and the back of the walking person are too dark in (a) and (b) but too bright in (c); the light bar in (a) is almost totally dark, and the ground is overexposed in (b) and (c). However, the video obtained using our 3D video integration algorithm captures almost everything in the scene clearly. The detailed motion of the tree leaves is also visible.

Figure 3. Experimental results on high dynamic range video. Rows (a)-(c): the three video sequences obtained by the split aperture camera; the brightnesses of the three videos are in the ratios 1:2:4. Row (d): the video obtained using our 3D video integration algorithm. The size of each video is 256 × 256 × 35.

Fig. 4 shows a challenging example with large movement in the scene: a walking person and a car in motion on the road. The shadow of the tree on the car is clear in (a) but washed out in (b) and (c). The details of the tree are lost in (a) and (b). The background buildings are overexposed in (b) and (c). The cars, shadow, person and background are all captured in our reconstructed video using the 3D video integration algorithm, while temporal coherence in luminance is maintained. The motion blur of the moving car is preserved. We believe our results are superior to other HDR hardware or software solutions demonstrated on scenes with large motion.

Figure 4. Experimental results on high dynamic range video. Rows (a)-(c): the three video sequences obtained by the split aperture camera; the brightnesses of the three videos are in the ratios 1:2:4. Row (d): the video obtained using our 3D video integration algorithm. The size of each video is 256 × 256 × 35.

In our experience, the diagonal multigrid 3D Poisson solver speeds up the integration step considerably, though the step remains computationally intensive: the diagonal multigrid algorithm runs up to twice as fast as a comparable simple multigrid algorithm. Currently, our Matlab implementation takes approximately 900 seconds for a 256 × 256 video with 35 frames. A C/C++ implementation would further improve the speed.

5 Conclusions and Future Work

In this paper, we have presented a new approach to capture and display high dynamic range videos. Using a split aperture camera, we capture high dynamic range real-world scenes. Using a gradient-based 3D integration algorithm applied to video, we compress the high dynamic range of the video for display on low dynamic range devices.

Achieving integrability of the gradient field is still an open problem. To apply this method to high-resolution videos, we need to avoid the minor but perceptible spatial smoothing of intensities. We are investigating the theoretical aspects and applications of a set of techniques for image reconstruction from mixed gradient fields. In addition, some other methods may also yield temporally consistent video; in the future, we will explore such methods for comparison with our own.

References

[1] P. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," SIGGRAPH 97, Aug. 1997.

[2] S. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, "High dynamic range video," ACM Transactions on Graphics (Proc. SIGGRAPH 2003), vol. 22, no. 3, pp. 319–325, 2003.

[3] T. Mitsunaga and S. Nayar, "Radiometric self calibration," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), vol. 1, pp. 374–380, 1999.

[4] ——, "High dynamic range imaging: Spatially varying pixel exposures," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'00), vol. 1, pp. 472–479, 2000.

[5] S. Mann, C. Manders, and J. Fung, "Painting with looks: Photographic images from video using quantimetric processing," ACM Multimedia, pp. 117–126, 2002.

[6] V. Brajovic and T. Kanade, "A sorting image sensor: An example of massively parallel intensity-to-time processing for low latency computational sensors," IEEE Conf. on Robotics and Automation, pp. 1638–1643, Apr. 1996.

[7] R. Street, "High dynamic range segmented pixel sensor array," U.S. Patent 5,638,118, June 1997.

[8] M. Konishi, M. Tsugita, M. Inuiya, and K. Masukane, "Video camera, imaging method using video camera, method of operating video camera, image processing apparatus and method, and solid-state electronic imaging device," U.S. Patent 5,420,635, May 1995.

[9] M. Aggarwal and N. Ahuja, "Split aperture imaging for high dynamic range," International Journal of Computer Vision, vol. 58, no. 1, pp. 7–17, June 2004.

[10] ——, "High dynamic range panoramic imaging," Proc. International Conference on Computer Vision (ICCV), pp. 2–9, July 2001.

[11] K. Saito, "Electronic image pickup device," Japanese Patent 07-254965, Feb. 1995.

[12] S. K. Nayar and V. Branzoi, "Adaptive dynamic range imaging: Optical control of pixel exposures over space and time," ICCV, 2003.

[13] F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," ACM Transactions on Graphics (TOG), vol. 21, no. 3, pp. 257–266, 2002.

[14] R. Fattal, D. Lischinski, and M. Werman, "Gradient domain high dynamic range compression," ACM Transactions on Graphics (TOG), vol. 21, no. 3, pp. 249–256, July 2002.

[15] J. Tumblin and G. Turk, "LCIS: A boundary hierarchy for detail-preserving contrast reduction," Proc. ACM SIGGRAPH, pp. 83–90, 1999.

[16] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, "Photographic tone reproduction for digital images," SIGGRAPH, pp. 267–276, 2002.

[17] E. Land and J. McCann, "Lightness and the retinex theory," J. Opt. Soc. Am., vol. 61, pp. 1–11, 1971.

[18] P. Pérez, M. Gangnet, and A. Blake, "Poisson image editing," SIGGRAPH, pp. 313–318, 2003.

[19] G. Finlayson, S. Hordley, and M. Drew, "Removing shadows from images," ECCV, pp. 823–836, 2002.

[20] D. Socolinsky and L. Wolff, "A new visualization paradigm for multispectral imagery and data fusion," CVPR, vol. 1, June 1999.

[21] R. Raskar, A. Ilie, and J. Yu, "Image fusion for context enhancement," NPAR '04, 2004.

[22] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel, "A variational framework for retinex," HP Labs Technical Report HPL-1999-151R1, 1999.

[23] A. Roberts, "Fast and accurate multigrid solution of Poisson's equation using diagonally oriented grids," Numerical Analysis, July 1999.