
High Dynamic Range (HDR) Video Image Processing For Digital Glass

Raymond Chun Hing Lo, Steve Mann, Jason Huang, Valmiki Rampersad, Tao Ai
University of Toronto

Department of Electrical and Computer Engineering
http://hdr.glogger.mobi and http://www.eyetap.org/publications

ABSTRACT
We present highly parallelizable and computationally efficient High Dynamic Range (HDR) image compositing, reconstruction, and spatiotonal mapping algorithms for processing HDR video. We implemented our algorithms in the EyeTap Digital Glass electric seeing aid, for use in everyday life. We also tested the algorithms in extreme dynamic range situations, such as electric arc welding. Our system runs in real time, requires no user intervention, and needs no fine-tuning of parameters after a one-time calibration, even under a wide variety of very difficult lighting conditions (e.g. electric arc welding, including detailed inspection of the arc, weld puddle, and shielding gas in TIG welding). Our approach can render video at 1920x1080 pixel resolution at interactive frame rates that vary from 24 to 60 frames per second with GPU acceleration. We also implemented our system on FPGAs (Field Programmable Gate Arrays) so that it can be miniaturized and built into eyeglass frames.

Categories and Subject Descriptors
C.1.4 [Computer Systems Organization]: Processor Architectures—Parallel Architectures; I.4.9 [Computing Methodologies]: Image Processing and Computer Vision—Applications; I.3.1 [Computing Methodologies]: Computer Graphics—Hardware architecture, Graphics processors

General Terms
Performance, Algorithms, Experimentation, Human Factors

Keywords
Extreme High Dynamic Range, Spatiotonal mapping, Graphics Processing Unit (GPU), Field Programmable Gate Arrays (FPGAs), HDR video, TIG Welding, Digital Eye Glass

1. INTRODUCTION
The EyeTap Digital Eye Glass project began about 34 years ago, as a seeing aid to help with specific tasks such as electric arc welding, as well as in general day-to-day life, by way of a general-purpose computer in the form of eye glass [12].

The EyeTap causes the eye itself to, in effect, function as if it were both a camera and a display. This design gives the wearer the appearance of having a glass eye (see Fig. 1 and 2), so the phenomenon has become known as the "Glass Eye" effect [9] (see also Presence Connect, MIT Press, Teleoperators and Virtual Environments, 2002 August 6, http://wearcam.org/presenceconnect/).

Figure 1: The MannVis WeldGlass™ in a welding helmet that uses the EyeTap Principle for dynamic range management. This stereo rig allows the wearer to see clearly in extremely high dynamic range environments.

Thus EyeTap is sometimes called the "Glass Eye", as well as the "Eye Glass", or simply "Glass". Note that the term "Glass", singular, rather than "Glasses", plural, has been widely used to describe this invention (e.g. "EyeTap Digital Eye Glass", figure caption, Aaron Harris/Canadian Press, Monday Dec. 22, 2003).

Example applications running on the Glass included the Visual Memory Prosthetic [10], and various other wayfinding aids that go beyond what is possible with optical glass.

Some have noted the similarity of Google's Glass to our work, both in its function and in its minimalist design ("Project Glass and the epic history of wearable computers...", by Paul Miller, The Verge, 2012 June 26, 2:42 pm). See Fig. 2.

This apparatus helps the wearer see better in everyday life, while also functioning as an interface to a general-purpose wearable computer. See Chapter 23 of the Encyclopedia of Interaction Design: http://www.interaction-design.org

In this paper, we present a novel application of the Digital Eye Glass in dynamic range management, to help people see better in high-contrast scenes, and we have tested our system in the most extreme dynamic range scene: TIG welding.



Figure 2: Leftmost: Mann's Glass design done in collaboration with designer Chris Aimone. Our minimalist design of Digital Glass for everyday life has an aluminium strip that runs across the forehead, and is supported by two silicone nose pads attached to the aluminium strip itself (i.e. no eyeglass lenses). Over the right eye is the Glass (EyeTap). Rightmost: Google's design (rightmost image adapted from Antonio Zugaldia's image in Wikimedia Commons, used under Creative Commons License).

1.1 High Dynamic Range Imaging
Despite recent advances in camera sensing technology, state-of-the-art digital cameras can only sense a limited dynamic range, much less than the human eye. This limitation is particularly pronounced when viewing an extreme dynamic range scene, such as when looking into oncoming automobile headlights to read a license plate number on a dark road, or when doing electric arc welding.

In our previous work over the last 25 years or so, we overcame this limitation by combining differently exposed images of the same subject matter, to generate HDR (High Dynamic Range) images [8]. As Robertson et al. state [16]:

"The first report of digitally combining multiple pictures of the same scene to improve dynamic range appears to be Mann [8]".

Over the last decade, HDR imaging has gained major interest, and numerous solutions have been proposed to create high quality HDR images [8, 11, 16, 2] and videos [7, 1, 6, 17]. However, very little attention has been paid to real-time algorithms that allow real-time interaction with the world. The ability to run at interactive frame rates is particularly important for the development of HDR seeing aids [13], which can allow people, especially the elderly or those with mild visual impairment, to see in extreme dynamic range conditions where the naked eye cannot.

To address these issues, we present novel hardware-accelerated algorithms for constructing HDR video from a sequence of alternating exposures, using GPUs (or FPGAs) for real-time HDR processing. Our hardware-accelerated results are useful for seeing aids, as well as for shooting video while observing the HDR result in a viewfinder, display monitor, or other similar output. Thus, a videographer can compose the shot more effectively while the final result is rendered in real time.

Together with our spatial-tonal mapping algorithm, which is based on a GPU implementation of the edge-preserving recursive filter in [5], we achieved real-time results (see Table 1) that are comparable to or better than some of the state-of-the-art tone mapping algorithms [15, 14, 4], especially under the extreme lighting conditions that occur, for example, in TIG welding (see Fig. 3).

2. HDR COMPOSITING

Tone Mapping Operator                  FPS      Speed-up of our method
Mantiuk, R. et al. [14]                0.58     143.42x
Fattal, R. et al. [4]                  0.58     142.5x
Reinhard, E. et al. [15]               6.49     12.83x
Implemented Edge-Preserving Method     83.33    1x

Table 1: Run-time of different tone mapping operators and the speed-up obtained with our implemented method. Our implementation achieves approximately 83 frames per second, roughly 13 to 143 times faster than the other tone mapping operators.

In this section, we first discuss an HDR image composition method that is optimized for GPU or FPGA hardware implementation of electric eyeglasses [8, 16, 1]. Then, we present our approach to creating real-time HDR video with our pairwise HDR composition and spatial-tonal mapping method based on the edge-aware recursive filter (RF) [5]. Together, our proposed algorithm runs in real time (30 fps or higher on commodity graphics hardware such as an NVIDIA 460GTX at 1280x720 resolution) and requires no user intervention in the HDR creation process.

2.1 Direct Lookup Method for Combining Exposures
For the case of compositing two images with three color (RGB) channels, one inverse comparametric lookup table (iCLUT) can be derived for each channel for a specific camera sensor. Each entry of an inverse comparametric lookup table yields a jointly estimated photometric quantity, q, from a pair of images captured with different exposures. Each iCLUT is composed of 256 x 256 entries of 8-bit outputs. The iCLUTs need only be calibrated once for every camera.

Each entry of an iCLUT is estimated by the following equation:

q = f^{-1}_{\Delta EV}(f_1, f_2) = \frac{f^{-1}(f_1)\, w_1(f_1) + f^{-1}(f_2)\, w_2(f_2) / 2^{\Delta EV}}{w_1(f_1) + w_2(f_2)}    (1)

where f_1 and f_2 are the pixel values from a pair of images taken under different exposure settings; \Delta EV is the exposure difference between f_1 and f_2; f^{-1} is the inverse camera response; and w is the certainty function proposed by Mann [8].
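To make the direct lookup concrete, the following NumPy sketch (our illustrative reconstruction, not the authors' GPU/FPGA code) builds one 256 x 256 iCLUT per channel from an assumed inverse response f_inv and certainty function, and composites a dark/bright exposure pair by table lookup. The gamma-style response, the bell-shaped certainty, and the float-valued table (the paper quantizes entries to 8 bits) are placeholder assumptions.

import numpy as np

def f_inv(v):
    # Assumed gamma-like inverse camera response: 8-bit pixel value -> relative photoquantity.
    return (v / 255.0) ** 2.2

def certainty(v):
    # Assumed bell-shaped certainty: mid-range pixel values are trusted most.
    return np.exp(-((v - 127.5) ** 2) / (2 * 60.0 ** 2))

def build_iclut(delta_ev, bits=8):
    """Precompute Eq. (1) for every (f1, f2) pair of 8-bit pixel values."""
    levels = 2 ** bits
    f1, f2 = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    w1, w2 = certainty(f1), certainty(f2)
    q1 = f_inv(f1)
    q2 = f_inv(f2) / (2.0 ** delta_ev)   # bring the brighter exposure onto the same scale
    q = (q1 * w1 + q2 * w2) / (w1 + w2 + 1e-12)
    return q.astype(np.float32)          # one 256 x 256 table per color channel

def composite_pair(img_dark, img_bright, iclut):
    """Per-pixel direct lookup; img_dark and img_bright are uint8 single-channel frames."""
    return iclut[img_dark, img_bright]

A possible usage: iclut = build_iclut(delta_ev=4), then q = composite_pair(dark, bright, iclut) for each channel.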

2.2 HDR Composition for 3 or More Images
For the case of constructing HDR images from N images (N ≥ 3), we compute log(N) levels of pairwise estimates of the photometric quantities q_{h,i}, read as the i-th pairwise estimate of the photometric quantity at level h. The estimates at the first level are computed directly from pairs of input exposures using the iCLUT generated by Eq. 1. Each succeeding level is then estimated by

q_{h,i} = \frac{q_{h-1,j}\, w_{h-1,j} + q_{h-1,j+1}\, w_{h-1,j+1} / 2^{\Delta EV}}{w_{h-1,j} + w_{h-1,j+1}}    (2)

where

w_{h,i} = \max(w_{h-1,j},\, w_{h-1,j+1})    (3)
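A minimal sketch of the pairwise hierarchy of Eqs. 2 and 3 follows, under the same assumed camera model as in the previous sketch. It reads the hierarchy as iCLUT lookups on adjacent exposure pairs at the first level, followed by Eq. 2 merges at the remaining levels, which is one plausible reading of the text; the names merge_pair and composite_n are ours, and whole frames are combined at once rather than per pixel on the GPU/FPGA.

import numpy as np

def merge_pair(q_a, w_a, q_b, w_b, delta_ev):
    """One node of the pairwise tree: Eq. (2) for the photoquantity, Eq. (3) for the certainty."""
    q = (q_a * w_a + (q_b / (2.0 ** delta_ev)) * w_b) / (w_a + w_b + 1e-12)
    w = np.maximum(w_a, w_b)
    return q, w

def composite_n(frames, delta_ev, iclut, certainty):
    """frames: list of N uint8 frames ordered dark to bright, with N a power of two."""
    # First level: direct iCLUT lookups on adjacent exposure pairs (Eq. 1),
    # carrying the larger of the two certainties forward (Eq. 3).
    qs = [iclut[frames[i], frames[i + 1]] for i in range(0, len(frames), 2)]
    ws = [np.maximum(certainty(frames[i]), certainty(frames[i + 1]))
          for i in range(0, len(frames), 2)]
    # Remaining levels: combine pairwise estimates until a single estimate is left.
    while len(qs) > 1:
        merged = [merge_pair(qs[i], ws[i], qs[i + 1], ws[i + 1], delta_ev)
                  for i in range(0, len(qs), 2)]
        qs = [m[0] for m in merged]
        ws = [m[1] for m in merged]
    return qs[0]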

By capturing 4 images with ∆EV = 4, the dynamic range of the HDR image spans a total of 20 stops (2^20 = 1,048,576), approximately a million-to-one contrast ratio. To display such a wide range on an LDR display, the photoquantity q will be compressed with a multi-scale spatial-tonal mapping algorithm.



Figure 3: Results for HDR video of extreme dynamic range. (a) Raw frames captured in exposure bracketing mode (-5, 0, +5 ∆EV). (b) Our result. (c) Logarithmic compression. (d) Mantiuk, R. et al. [14]. (e) Fattal, R. et al. [4]. (f) Reinhard, E. et al. [15]. Notice that the tip of the tungsten electrode is visible only in the darkest exposure, and the background only in the lightest exposure. Our result, shown in (b), is the only one of the spatiotonal mapping algorithms that renders both the very tip of the tungsten electrode and the background clearly. (d-f) Tone mapping results from [14, 4, 15] using the Luminance HDR program. In extreme cases, the HDR composition method of [2] introduced artifacts in the shadow areas, and the tone mapping algorithms further amplified these defects in the final images. Our result is not only better than these tone mapping algorithms (and the only one to faithfully show the tip of the tungsten electrode), but it is also capable of running in real time.


3. MULTI-SCALE SPATIAL TONAL MAPPING

Our approach to real-time tone mapping and composition, which is based on [5] and [3], provides natural-looking and stable results across a large variety of conditions (see Fig. 3 for a comparison). To compress the high dynamic range images, we first estimate the log luminance, L_c, of the image based on the final estimate of the photoquantities of the RGB channels, q_r, q_g, q_b, from Section 2.2.

L_c = \log(0.2989\, q_r + 0.5870\, q_g + 0.1140\, q_b + 1)    (4)

Then, L_c is normalized and compressed to the range [0, 1]. We can then adjust the range, which later affects the detail extraction, with the s and d parameters (typically set to s = 8.0, d = 5):

L = s \cdot L_c + d    (5)
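As a small illustration of Eqs. 4 and 5, the base log-luminance image can be computed as in the NumPy sketch below; min-max normalization to [0, 1] is our assumption, and s and d default to the typical values quoted above.

import numpy as np

def base_log_luminance(q_r, q_g, q_b, s=8.0, d=5.0):
    """Eqs. (4)-(5): log-luminance base image from the composited photoquantities."""
    Lc = np.log(0.2989 * q_r + 0.5870 * q_g + 0.1140 * q_b + 1.0)   # Eq. (4)
    Lc = (Lc - Lc.min()) / (Lc.max() - Lc.min() + 1e-12)            # normalize to [0, 1] (assumed min-max)
    return s * Lc + d                                                # Eq. (5)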

The global tone mapping from the last step, such as logarithmic compression, provides a base image which often lacks contrast. We can enhance the local contrast (spatial-tonal mapping) with the multi-scale edge-preserving decomposition method. This method is found to be effective for compressing HDR images in extreme cases (see the clear rendition of the tip of the tungsten electrode with our proposed method in Fig. 3b). To extract details from the image and enhance their local contrast, we first create the multi-scale edge-preserved smoothed images J_i, where i ∈ {1, ..., M}, using the recursive filter (RF) discussed in [5].

J_i = \mathrm{RF}(L, \sigma_s, \sigma_r, k)    (6)

where σ_s and σ_r are the filter's spatial and range standard deviations, and k is the number of iterations over which the filter smooths the image. In particular, we use σ_s = 20, σ_r = 0.0825 for J_1; σ_s = 50, σ_r = 0.165 for J_2; σ_s = 100, σ_r = 0.335 for J_3; and k = 1. We obtain the detail layers D_i by taking the difference between J_{i-1} and J_i, with J_0 = L, where each successive detail layer contains coarser details than the previous one. The detail layers are weighted and summed into a single contrast mask L_f as follows:

L_f = 0.9\, J_M + \sum_{i=0}^{M-1} a_i D_i    (7)

where a_i is the desired weight for each layer. In our setup, we have used a_0 = 0.4, a_1 = 0.3, a_2 = 0.3, which emphasizes the texture from the layer containing the finest details. In our observation, this setting allows greater local contrast of tonal values in the extremely bright and dark areas of the scene. To obtain a displayable output image, we compress the photoquantity q using the contrast mask of Eq. 7 and quantize the results to standard 24-bit RGB pixel values:

f = \mathrm{round}\left(255.0 \cdot (q / 10^{L})^{\gamma} \cdot L_f\right)    (8)

where the parameter γ controls the saturation in the final image; we typically set it between 0.5 and 0.7. Overall, there are two main advantages to using the edge-preserving filter proposed in [5]. First, the parameters σ_s and σ_r empower users to refine their emphasis of detail enhancement. Second, fine-tuning the σ parameters helps minimize halo artifacts when compared to traditional image decomposition based on a Laplacian pyramid. Qualitatively, the output of our HDR rendition is comparable to many other approaches, as shown in Fig. 3.
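The following sketch assembles the decomposition and output stages of Eqs. 6-8 under stated assumptions: for self-containment it substitutes a plain Gaussian blur for the domain-transform recursive filter of [5] (so it does not preserve edges the way the actual pipeline does, and σ_r plays no role in the stand-in), it reads the denominator in Eq. 8 as 10^L, and it reuses the quoted σ_s values and layer weights purely illustratively.

import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in smoother, not the RF of [5]

def spatial_tonal_map(q_rgb, L, gamma=0.6,
                      sigmas=(20, 50, 100), weights=(0.4, 0.3, 0.3)):
    """q_rgb: HxWx3 photoquantities; L: base log-luminance image from Eq. (5)."""
    # Eq. (6): multi-scale smoothed images J_1..J_M (edge-preserving in the paper,
    # approximated here by Gaussian smoothing of increasing spatial extent).
    J = [L] + [gaussian_filter(L, sigma=s) for s in sigmas]
    # Detail layers D_i = J_{i-1} - J_i, finest first.
    D = [J[i - 1] - J[i] for i in range(1, len(J))]
    # Eq. (7): weighted recombination into a single contrast mask L_f.
    Lf = 0.9 * J[-1] + sum(a * d for a, d in zip(weights, D))
    # Eq. (8): compress the photoquantity with the mask and quantize to 8 bits per channel.
    out = 255.0 * ((q_rgb / (10.0 ** L[..., None])) ** gamma) * Lf[..., None]
    return np.clip(np.round(out), 0, 255).astype(np.uint8)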

4. PERFORMANCE

4.1 GPU
GPU computation provides the greatest acceleration for algorithms that operate on large matrices with independent per-element operations. Our proposed HDR algorithm relies heavily on per-pixel operations with few cross-dependencies between neighbouring pixels. In the following discussion, we lay out the runtime cost of each operation in milliseconds, benchmarked on an NVIDIA 460GTX over 150 videos at a resolution of 1280x720.



The pixel-to-photoquantity conversion and logarithmic compression consume on average 1.5 ms. Normalization of the images is done using the efficient min/max finder from NVIDIA's CUBLAS library, which costs around 0.8 ms per call. The spatial tone mapping is the least parallelizable part of the process, consuming up to 12 ms. The edge-preserving filtering for three layers, executed in parallel, takes up 11 ms of runtime and is the main contributor to the overall latency of our proposed HDR pipeline. This is due to under-utilization of the available thread pool of the GPU architecture: per iteration, the RF only requires as many threads as the size of one dimension of the image multiplied by the number of color channels. In our implementation, we launch the filter on a single monotone channel, L, to reduce the total amount of computation required. Other optimization techniques, such as a matrix transpose, are applied to ensure coalesced memory access patterns. The remaining 1 ms of runtime is contributed by the contrast enhancement stage after obtaining the layers using the RF. Overall, the HDR composition and tonal range compression algorithm costs 17 ms (approximately 60 fps) on average, which is suitable for a real-time system.

4.2 FPGA
The algorithm using the direct lookup table is implemented on a Spartan-6 LX45 FPGA device, chosen for its low power consumption and portability. The measured supply power of the system is 1.448 W, which allows the application to run for around 20 hours on a typical rechargeable battery with a capacity of 5800 mAh. The board contains High Definition Multimedia Interface (HDMI) input ports used to receive 720 x 480 video at 60 frames per second, and HDMI output ports used to transmit HDR video frames. Two differently exposed video frames of the same subject matter are supplied via the HDMI input in rapid succession, in alternating order. The frames are stored into memory and read out concurrently for composition. The 128 MB of DDR SDRAM (Micron MT47H64M16-25E, 16-bit data width) on the board is configured to run at a 625 MHz data rate to store video frames. The board also contains 2.1 Mbits (116 x 18432 bits) of Block RAM (BRAM) in total, which are used for line buffers and to store pre-computed LUT results.

The post-processing starts by compressing the produced HDR images with a square root. It then converts the color space from RGB to YCrCb in order to save the resources needed for the implementation. The converted luma channel is convolved with a multi-stage 5-by-5 Gaussian kernel to extract two layers of edges from the original HDR image. The edges are then scaled and added back to the original image to bring back the detailed textures.
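A rough NumPy sketch of this post-processing path follows (our illustrative reconstruction, not the FPGA logic; the Gaussian sigmas, the edge-layer gains, and the BT.601-style luma weights are assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter

def fpga_style_postprocess(q_rgb, gains=(1.5, 1.0)):
    """q_rgb: HxWx3 linear HDR photoquantities, scaled roughly to [0, 1]."""
    # Square-root compression of the composited HDR image.
    rgb = np.sqrt(np.clip(q_rgb, 0.0, None))
    # Work on the luma channel only (standing in for the RGB -> YCrCb conversion).
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Two small-kernel smoothing stages; each difference is one edge/detail layer.
    blur1 = gaussian_filter(y, sigma=1.0)   # stand-in for the first 5x5 Gaussian stage
    blur2 = gaussian_filter(blur1, sigma=1.0)
    edges = [y - blur1, blur1 - blur2]
    # Scale the edge layers and add them back to restore the detailed textures.
    y_enh = y + sum(g * e for g, e in zip(gains, edges))
    # Reapply the enhanced luma to the color image.
    ratio = y_enh / (y + 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)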

The 4-up display mode was created to provide a real-time visualization of multiple exposures, enabling the user to simply point the camera and see the frames that comprise the HDR composition. This allows for easier testing and calibration of the input frames for HDR video processing. The maximum latency of the implemented logic is 223 ns, whereas DDR2 memory access used up 80 percent of the processing time.

5. CONCLUSIONS AND FUTURE WORK
We demonstrated the feasibility of using hardware to construct HDR video in real time. Our approach is parallelizable, and suitable for implementation on GPUs or FPGAs. To test our system, we applied it to TIG welding, which presents an extreme dynamic range scene, and we have also worn our seeing devices in daily life.

As with other HDR systems that use alternating exposures, image misalignment between adjacent frames produces unpleasant ghosting artifacts. Numerous solutions have been proposed to address image alignment between consecutive frames; however, these solutions are computationally intensive and not yet suitable for real-time use. Alternatively, we can address this issue with optics that allow the differently exposed frames to be captured at the same instant, and our work in HDR video would benefit fully from such hardware.

6. REFERENCES
[1] M. A. Ali and S. Mann. Comparametric image compositing: Computationally efficient high dynamic range imaging. In Proc. Int. Conf. Acoust., Speech, and Signal Processing (ICASSP) (to appear). IEEE, March 2012.
[2] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 classes, page 31. ACM, 2008.
[3] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski. Edge-preserving decompositions for multi-scale tone and detail manipulation. In ACM Transactions on Graphics (TOG), volume 27, page 67. ACM, 2008.
[4] R. Fattal, D. Lischinski, and M. Werman. Gradient domain high dynamic range compression. ACM Transactions on Graphics, 21(3):249–256, 2002.
[5] E. S. L. Gastal and M. M. Oliveira. Domain transform for edge-aware image and video processing. ACM TOG, 30(4):69:1–69:12, 2011. Proceedings of SIGGRAPH 2011.
[6] P. Irawan, J. Ferwerda, and S. Marschner. Perceptually based tone mapping of high dynamic range image streams. In Proceedings of the Eurographics Symposium on Rendering, pages 231–242, 2005.
[7] S. Kang, M. Uyttendaele, S. Winder, and R. Szeliski. High dynamic range video. ACM Transactions on Graphics, 22(3):319–325, 2003.
[8] S. Mann. Compositing multiple pictures of the same scene. In Proceedings of the 46th Annual IS&T Conference, pages 50–52, Cambridge, Massachusetts, May 9–14 1993. The Society of Imaging Science and Technology. ISBN: 0-89208-171-6.
[9] S. Mann. 'Mediated reality'. TR 260, M.I.T. M.L. vismod, Cambridge, Massachusetts, http://wearcam.org/mr.htm, 1994.
[10] S. Mann. Wearable, tetherless computer-mediated reality: WearCam as a wearable face-recognizer, and other applications for the disabled. TR 361, M.I.T. Media Lab Perceptual Computing Section; also appears in AAAI Fall Symposium on Developing Assistive Technology for People with Disabilities, 9–11 November 1996, MIT; http://wearcam.org/vmp.htm, Cambridge, Massachusetts, February 2 1996.
[11] S. Mann. Comparametric equations with practical applications in quantigraphic image processing. IEEE Trans. Image Proc., 9(8):1389–1406, August 2000. ISSN 1057-7149.
[12] S. Mann. Intelligent Image Processing. John Wiley and Sons, November 2 2001. ISBN: 0-471-40637-6.
[13] S. Mann. Continuous lifelong capture of personal experience with EyeTap. In Proceedings of the 1st ACM Workshop on Continuous Archival and Retrieval of Personal Experiences, pages 1–21. ACM, 2004.
[14] R. Mantiuk, K. Myszkowski, and H. Seidel. A perceptual framework for contrast processing of high dynamic range images. ACM Transactions on Applied Perception (TAP), 3(3):286–308, 2006.
[15] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda. Photographic tone reproduction for digital images. ACM Transactions on Graphics, 21(3):267–276, 2002.
[16] M. Robertson, S. Borman, and R. Stevenson. Estimation-theoretic approach to dynamic range enhancement using multiple exposures. Journal of Electronic Imaging, 12:219, 2003.
[17] M. D. Tocci, C. Kiser, N. Tocci, and P. Sen. A versatile HDR video production system. ACM Transactions on Graphics (TOG) (Proceedings of SIGGRAPH 2011), 30(4), 2011.
