
Computational Visual Media DOI 10.1007/s41095-017-0090-8 Vol. 3, No. 3, September 2017, 263–271

Research Article

Adaptive stray-light compensation in dynamic multi-projection mapping

Christian Siegl1 (✉), Matteo Colaianni1, Marc Stamminger1, and Frank Bauer1

© The Author(s) 2017. This article is published with open access at Springerlink.com

Abstract Projection-based mixed reality is an effective tool to create immersive visualizations on real-world objects. Its wide range of applications includes art installations, education, stage shows, and advertising. In this work, we enhance a multi-projector system for dynamic projection mapping by handling various physical stray-light effects: interreflection, projector black-level, and environmental light in real time for dynamic scenes. We show how all these effects can be efficiently simulated and accounted for at runtime, resulting in significantly improved projection mapping results. By adding a global optimization step, we can further increase the dynamic range of the projection.

Keywords mixed reality; projection mapping; multi-projector; real time; interreflection; environmental light

1 Introduction

Projection mapping setups are a popular way to alter the appearances of real-world objects, and are used in a wide range of applications. The system presented in this paper is based on the work by Siegl et al. [1]. Their multi-projection system tracks the target object, and the blending between projectors is continuously adapted to the current object position and orientation. To compute the correct blending between projectors, a non-linear system, incorporating multiple projection quality terms, is solved on the GPU. With this system, a very high quality projection mapping is achieved at real-time rates on arbitrary white Lambertian geometry.

1 Computer Graphics Group, University of Erlangen-Nuremberg, Cauerstrasse 11, 91058 Erlangen, Germany. E-mail: [email protected] (✉).

Manuscript received: 2017-03-03; accepted: 2017-04-29


However, Siegl et al.'s system ignores three key physical lighting effects that can have significant impact on projection quality (see Fig. 1):
• Interreflection: the indirect light produced by projecting on concave areas of the white Lambertian target geometry.
• Black-level: the light a projector emits when set to present pure black. In particular when using LCD projectors, this light is very noticeable.
• Environmental light: low-frequency light that is cast by other external sources.

All these effects result in over-bright regions. In this paper we show how to simulate all these stray-light effects and compensate for them, by reducing the projected light accordingly (see Fig. 1). This requires the real-time simulation of interreflections and environmental lighting, for which we apply techniques from real-time rendering. When reducing the amount of projected light, we face the problem of losing dynamic range for dark scenes and bright environments. By introducing an additional global optimization step, we can counteract this effect. Our adaptive algorithm noticeably improves the visual quality without a significant impact on performance.

2 Previous work

The basis for understanding the interaction of light with diffuse surfaces was presented by Goral et al. [2]. Building on this, Sloan et al. [3] presented work on real-time precomputed radiance transfer, which works very well for low-frequency lighting. While we use their basic idea for compensating for environmental light, our setup is quite different.


Fig. 1 Left: correcting for interreflection in the corner of a box, where the sides are colored using projection mapping, along with an image of the indirect light. Middle: correcting for the black-level of the projectors. Note how the brightness discontinuity on the neck disappears. Right: correcting for daylight. For reference, projection without environmental light is shown below. Note the corresponding light probes. All images in this paper are pictures of real projections, captured with a DSLR.

With light from multiple projectors, our lighting is dominated by very bright spot lights, invalidating the low-frequency assumption.

The impact of scattered light from projection mapping was first discussed by Raskar et al. [4]. They argued that scattered light contributes to the realism of the projection as it generates global illumination effects. For diffuse real-world lighting as well as diffuse lighting of virtual materials, this assumption is true. As a result they chose to ignore the scattered light in their projection. However, since we want to simulate different types of materials (glass, etc.), we need to eliminate the diffuse scattered light first.

The first compensation method for scattered light was given by Bimber et al. [5]. They generated a precomputed irradiance map for a background scene. In contrast to our work, this map is restricted to static scenes, including the projected content.

Closer to our approach is later work by Bimber et al. [6]. However, they showed results only for planar and other trivial developable target surfaces. In addition, they used an expensive iteration scheme. We show that, with a simplifying assumption, this is not required. As in our light transport computations, Bermano et al. [7] solved the contribution from multiple projectors, while also accounting for subsurface scattering and projector defocus. However, their system does not compute the results in real time. Sheng et al. [8] presented a method for correcting artifacts from interreflection based on perception. While they showed promising results, their optimization scheme also does not run in real time.

Another related field of research concerns the rendering of synthetic objects into real scenes; for an overview see Ref. [9]. Here, the main task is to estimate the environmental lighting of the real scene from a plain RGB image, to simulate the interaction with the rendered objects correctly. Since we change the appearance of the target object with the projection, we can no longer estimate the environmental light directly from an image of the target geometry. Therefore, we use a light probe as a proxy to gather the environmental light.

3 Base system

We use the system presented by Siegl et al. [1] as a basis for this work. Their system is able to solve the complex problem of blending multiple projectors on an arbitrary target geometry in real time. The target object is tracked (without markers) using a depth camera. Using the extrinsic and intrinsic information of the pre-calibrated system, the target object is then rendered from the viewpoint of the projectors. During blending, the system takes into account the target geometry and the expected projection quality. The resulting heuristic is based on the fact that incident rays will give a sharper projection if they hit the target surface at a more perpendicular angle.

Fig. 2 An example setup with two projectors, a depth camera for tracking, and a diffuse white target object.


Their system is based on a non-linear optimization problem that incorporates the physical properties of light, the expected projection quality, and a regularization term. The entire problem (represented as a transport matrix) is solved in real time on the GPU, optimizing a base luminance p_i for every projector ray i (which addresses the pixel coordinates of all projectors sequentially). Projecting the base luminances for all projectors' pixels results in uniform illumination. To generate a target image on the object, these luminances are modulated by the target color c_j, resulting in the required pixel color q_i (for now, we assume a projector's luminance to be linear, which in practice is not the case):

$$ q_i = p_i \cdot c_j \tag{1} $$
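As a concrete illustration of Eq. (1), the following minimal numpy sketch applies the solver's base luminance to a target color per pixel. This is our own sketch, not the paper's GPU implementation; the array shapes and names are assumptions.

```python
import numpy as np

def modulate(base_luminance, target_color):
    """Eq. (1): q_i = p_i * c_j, applied per projector pixel.
    base_luminance: (H, W) solver output p_i for one projector.
    target_color:   (H, W, 3) target color c_j seen by each pixel.
    Shapes and names are illustrative assumptions."""
    return base_luminance[..., None] * target_color

# Hypothetical 2x2 projector patch at 60% base luminance, orange target color.
p = np.full((2, 2), 0.6)
c = np.broadcast_to(np.array([1.0, 0.5, 0.25]), (2, 2, 3))
q = modulate(p, c)  # pixel colors sent to the projector
```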

4 Real-time interreflection correction

With multiple high-powered projectors pointed at a white Lambertian target, surface points in non-convex regions receive light not only from the projectors, but also from their surroundings. Not accounting for this light results in regions which are too bright, as can be seen for example in Fig. 8. It would be possible to add this indirect illumination to the transport matrix of the previously described optimization problem by introducing additional matrix entries containing the indirect contributions of each projector ray. However, the greatly increased number of non-zero entries makes solving the system much more expensive. In the following we describe a cheaper but equally powerful solution, leaving the transport matrix unchanged, and hence also the performance of the per-pixel luminance solver.

Since we wish to examine the propagation of light between surface areas of the target object, we first need a parameterization of this object.

Fig. 3 Light scattering b between two surface points c_i, illuminated by two projectors in a concave surface area.

We determine this parameterization by applying standard texture unwrapping algorithms commonly used in 3D-modeling applications. Every texel i of the resulting texture corresponds to a surface point c_i with associated normal n_i.

To approximate the indirect irradiance I_i of a surface point, corresponding to texel i, we employ a standard technique from ray-tracing: we cast N rays from c_i in sample directions ω_j, which are cosine distributed on the hemisphere around n_i:

$$ I_i = \frac{1}{N} \sum_{j=1}^{N} C(x_i, \omega_j) \tag{2} $$

where C(x_i, ω_j) is the target color at the surface point hit by the ray starting at c_i in direction ω_j. If the ray does not hit the object, C(x_i, ω_j) is black.
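For illustration, the sketch below generates cosine-distributed sample directions with the standard disk-projection (Malley's) method, as commonly used for such estimators; the paper does not specify its sampling routine, so this is a generic stand-in under our own assumptions.

```python
import numpy as np

def cosine_sample_hemisphere(n, rng):
    """n directions, cosine-distributed around +z (Malley's method):
    sample a unit disk uniformly, then project up onto the hemisphere."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(np.maximum(0.0, 1.0 - u1))  # z = cos(theta)
    return np.stack([x, y, z], axis=-1)

rng = np.random.default_rng(42)
dirs = cosine_sample_hemisphere(64, rng)  # in the local frame around n_i
```

With this distribution, the cosine weighting is absorbed into the sample density, which is why Eq. (2) is a plain average over the N rays.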

Since sampling the hemisphere during runtime would contradict our real-time requirements, we precompute the invariant locations of the surface intersections in texture space. In this preprocessing step, the hit points of the cosine-weighted hemisphere samples are gathered for every texel. We then save the UV-coordinates of every hit point in a position lookup table. The indirect lighting computation is thus reduced to

$$ \bar{I}_i = \frac{1}{N} \sum_{j \in N_i} c_j \tag{3} $$

where N_i is the list of texels hit by the sample rays of texel i and c_j is the target color at texel j.

In convex regions, these lists are empty, while in concave regions, usually only a few sample rays hit the object. Therefore, N_i generally contains few elements, except in extreme cases. We further restrict the lists to the 64 rays with the greatest contribution. As a heuristic, we use Lambert's cosine law to determine the expected stray-light contribution. Moreover, the texture resolution does not need to be very high, resulting in moderate precomputation time and memory consumption.
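A sketch of how this precomputation and the runtime gather of Eq. (3) might look, reusing `cosine_sample_hemisphere` from the previous sketch. `cast_ray` and `to_world` are hypothetical stand-ins for a ray tracer and a tangent-frame rotation, which the paper does not specify, and ranking by the Lambert weight is our reading of the heuristic described above.

```python
import numpy as np

def precompute_hit_lists(texel_pos, texel_nrm, n_samples=64, keep=64):
    """Offline: for every texel, store the indices of the texels hit by its
    cosine-weighted hemisphere samples (rays that miss are simply dropped)."""
    rng = np.random.default_rng(0)
    table = []
    for ci, ni in zip(texel_pos, texel_nrm):
        hits = []
        for w in cosine_sample_hemisphere(n_samples, rng):
            d = to_world(w, ni)    # hypothetical: rotate sample into frame of n_i
            t = cast_ray(ci, d)    # hypothetical: index of hit texel, or None
            if t is not None:
                hits.append((float(np.dot(d, ni)), t))  # Lambert weight (our guess
                                                        # at the ranking criterion)
        hits.sort(key=lambda h: -h[0])  # keep rays with greatest contribution
        table.append([t for _, t in hits[:keep]])
    return table

def indirect_light(table, tcm, i, n_samples=64):
    """Runtime, Eq. (3): average the target-color-map entries of the texels
    hit from texel i; dividing by N (not len(N_i)) counts misses as black."""
    return sum(tcm[t] for t in table[i]) / n_samples
```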

In projection mapping, our main objective is to reproduce the desired target colors c_j at each surface point on the target object. We utilize the fact that, in contrast to general image generation, the exact target color and illumination at every surface point of the object are known. Given the assumption that we can obtain the exact illumination on the target object, we may assume the illumination Ī_j to be already present at every surface location j. Using Eq. (3), the indirect light Ī_j at every texel j can quickly be determined. This indirect light is then subtracted from each target color c_j:



$$ q_i = p_i \cdot (c_j - \bar{I}_j) \tag{4} $$

and sent to the projector.

An important additional implication of this approach is that iteration is not needed.

4.1 Self shadowing and limited range

The algorithm described so far produces artefacts due to:
• self shadowing of the target geometry,
• Lambertian and distance effects, and
• limited range of the projectors.

All these effects cause certain areas of the illuminated object to fail to obtain the full target color. However, if we incorrectly assume that these areas contribute to interreflection with their full target color, we compensate for interreflected light that does not exist. This is demonstrated for surface point c_0 in Fig. 4: the opposing surface is not lit and therefore should not contribute to the interreflected light at c_0.

To compensate for this, we compute a target-color map: by gathering the contributions from all projectors at each surface point, we can compute the surface color that results in the real world.
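A sketch of how such a target-color map could be assembled. The projector interface (`sees` for the shadow test, `direction_to` for the incident direction) is hypothetical; the paper only states that contributions from all projectors are gathered and attenuated by self shadowing and Lambert's law.

```python
import numpy as np

def target_color_map(texel_pos, texel_nrm, target_colors, projectors):
    """Per texel: estimate the color actually reachable in the real world by
    accumulating cosine-weighted, shadow-tested projector contributions."""
    out = np.zeros_like(target_colors)
    for j, (pos, n, c) in enumerate(zip(texel_pos, texel_nrm, target_colors)):
        w = 0.0
        for proj in projectors:
            if proj.sees(pos):                      # self-shadow test
                L = proj.direction_to(pos)          # unit vector texel -> projector
                w += max(0.0, float(np.dot(n, L)))  # Lambert attenuation
        out[j] = c * min(1.0, w)  # attenuate colors the system cannot reach
    return out
```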

4.2 Implementation

In addition to sampling the hemisphere at every surface point and saving the resulting UV-coordinates in the preprocessing step, we also save the position and normal per surface point. This information is needed to compute the target-color map.

Fig. 4 Self shadowing and limited range of the projectors can lead to artifacts. c_0 can only receive interreflecting light from the opposed surface if this surface is actually lit. Furthermore, the surface geometry influences the reachable surface illumination. While c_1 is fully lit, c_2 is attenuated due to Lambert's law.

The live system generates the following information:
• Target-color map: before we gather interreflected light using precomputed UV-coordinates (see Eq. (3)), we store the target colors in a target-color map. In the cases where the projection system cannot achieve the desired illumination due to physical limitations (self shadowing or Lambert's law), the stored colors are attenuated accordingly.
• Interreflection map: using the target-color map and the precomputed UV-coordinates, a second texture, containing the interreflected light (see Eq. (3)), is computed.

The final color sent to the projector is dimmed with the values from the interreflection map (see Eq. (4)). For a schematic of the implementation, see Fig. 5; a per-frame sketch follows below.
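Putting the stages together, one live frame might look like the sketch below, reusing the hypothetical helpers from the earlier sketches (so it assumes those definitions are in scope); the real system runs these stages as GPU passes over textures.

```python
import numpy as np

def run_frame(p, target_colors, texel_pos, texel_nrm, projectors, hit_table):
    """One frame of interreflection correction (names are ours):
    1) target-color map with physical attenuation (Section 4.1),
    2) interreflection map gathered from precomputed UVs (Eq. (3)),
    3) final colors dimmed by the gathered indirect light (Eq. (4))."""
    tcm = target_color_map(texel_pos, texel_nrm, target_colors, projectors)
    interref = np.array([indirect_light(hit_table, tcm, i)
                         for i in range(len(target_colors))])
    q = p[:, None] * (target_colors - interref)  # Eq. (4), in linear color space
    return np.clip(q, 0.0, 1.0)                  # a projector cannot go negative
```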

4.3 Linear color space

In general, addition and subtraction of colors are only valid in a linear color space. The colors in our processing pipeline are not linear but already have gamma applied. The same problem also affects the luminance values from the per-pixel solver, which determines blend values in linear space. Furthermore, the sum of color contributions and subtraction from the final projected color are only valid in a linear color space.

This means that all incoming colors (from the renderer and the light probe) have to be linearized (using an inverse gamma transformation). All computations are then performed in a linear color space (see Fig. 5). Before sending the final color to the projector, we de-linearize the colors by applying gamma correction. For a more detailed discussion we refer the reader to Siegl et al. [1].
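A minimal sketch of this round trip, assuming a simple power-law gamma of 2.2; the actual system would use the projector response handling discussed in Siegl et al. [1].

```python
import numpy as np

GAMMA = 2.2  # assumption: a simple power-law transfer function

def linearize(c):
    """Display space -> linear space (inverse gamma transformation)."""
    return np.power(np.clip(c, 0.0, 1.0), GAMMA)

def delinearize(c):
    """Linear space -> display space (gamma correction)."""
    return np.power(np.clip(c, 0.0, 1.0), 1.0 / GAMMA)

# All additions and subtractions happen between the two conversions, e.g.:
# q_display = delinearize(p * (linearize(c) - I_bar - B - E))
```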

Fig. 5 An overview of the system: the renderer, light probe, and solver produce the pixel colors, radiance map, interreflection map, black-level map, and environmental light; all corrections are computed in linear color space (using precomputed SHs and UVs, plus geometry, calibration, and tracking information) before the corrected colors are sent to the projector.


5 Projector black-level

Another effect that impairs the quality of a projection mapping system, especially for dark scenes, is the projector's black-level. Even when projecting pitch black, the affected surface is brighter than when not projecting on it. We utilize LCD projectors for the benefit of reduced flickering when capturing the projections with a video camera. While DLP projectors in general offer a slightly better black-level, the problem is still very visible.

The previously introduced pipeline for interreflection correction is easily extensible to include black-level correction. When computing the target-color map, every texel of the surface texture is already reprojected into every projector and their contributions are gathered. At this stage a black-level map is computed, gathering the cosine-weighted incident black-level B_j from all projectors at every surface point. This incident black-level from all the projectors can, in an analogous way to interreflected light, be interpreted as being already present on the target surface. As a result, B_j also has to be subtracted from the target surface color in the final rendering pass. Applying this correction to Eq. (4) yields:

$$ q_i = p_i \cdot (c_j - \bar{I}_j - B_j) \tag{5} $$

Since the exact black-level of the projectors is unknown, a calibration step is required. To estimate the value, a uniform grey illumination of 0.2 on the object is generated. Without black-level compensation, areas that are illuminated by only a single projector are noticeably darker than those to which multiple projectors contribute (see Fig. 1, middle). For calibration, the user adjusts the black-level such that this difference disappears.
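The black-level gather might look as follows, using the same hypothetical projector interface as in the earlier sketches; `b` is the scalar black-level found by the interactive calibration just described.

```python
import numpy as np

def black_level_map(texel_pos, texel_nrm, projectors, b):
    """Gather the cosine-weighted incident black-level B_j (Section 5).
    In the real system this is computed alongside the target-color map,
    since every texel is already reprojected into every projector there."""
    B = np.zeros(len(texel_pos))
    for j, (pos, n) in enumerate(zip(texel_pos, texel_nrm)):
        for proj in projectors:
            if proj.sees(pos):
                L = proj.direction_to(pos)          # unit vector texel -> projector
                B[j] += b * max(0.0, float(np.dot(n, L)))
    return B  # subtracted from the target color in Eq. (5)
```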

6 Environmental light

Another influence causing unwanted light, affecting the projection quality, is environmental light. Many projection mapping systems assume the environment to be perfectly dark. Of course, in a real setup this is generally not the case.

To counteract the influence of environmental light in our dynamic real-time setup, we capture the surface of a mirrored hemisphere in real time. The camera is intentionally defocused and only a low-resolution image is acquired.

Fig. 6 Environmental light compensation.

Using this low-frequency input, 9 spherical harmonic coefficients (per color channel) are computed from the light-probe image (the SH vector). To apply this information to the target color, an additional precomputation step is required: all hemisphere samples (see Section 4) missing the target object are projected into the space of the spherical harmonic basis functions. The resulting transfer vector allows the incident environmental light E_j to be determined from the inner product of the SH vector and the transfer vector (per color channel; for further details see Ref. [3]).
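For illustration, the sketch below projects light-probe samples onto the first 9 real SH basis functions and evaluates E_j as the inner product with a per-texel transfer vector, following Ref. [3]. The uniform Monte Carlo weight assumes probe sample directions distributed uniformly over the sphere; that detail, like the function names, is our assumption.

```python
import numpy as np

def sh9(d):
    """First 9 real spherical-harmonic basis functions at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,                                    # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l = 1
        1.092548 * x * y, 1.092548 * y * z,          # l = 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def probe_to_sh(directions, colors):
    """Project probe samples onto SH, one 9-vector per color channel."""
    coeffs = np.zeros((9, 3))
    for d, c in zip(directions, colors):
        coeffs += np.outer(sh9(d), c)
    return coeffs * (4.0 * np.pi / len(directions))  # uniform-sphere MC weight

# Per texel, the transfer vector is the SH projection of the hemisphere
# samples that miss the object (precomputed, see Section 4). At runtime:
#   E_j = transfer[j] @ coeffs   # -> one RGB value per texel
```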

We interpret this environmental light E_j as an illumination that is already present on the target surface (as in Sections 4 and 5). As a result, the projected color q_i has to be reduced by E_j, extending Eq. (5) to

$$ q_i = p_i \cdot (c_j - \bar{I}_j - B_j - E_j) \tag{6} $$

7 Global optimization

Using the presented method of offsetting every target surface color by an amount of light that can be interpreted as being already present on the surface, we very efficiently correct for artifacts from stray-light in projection mapping. However, one problem remains: if ambient, black-level, or interreflected light exceeds the target illumination at a surface point, negative light would be required to achieve the target color. Obviously, this is impossible.


Solving a global non-linear optimization problem as discussed in Grundhöfer [10] is a solution to this problem. However, for our real-time system the performance of this approach is prohibitive. We propose a simpler and equally effective approach. By performing a scan over the final target colors c_i, we find the smallest target color component c_s. This information is then used to offset the final projection as a whole, using a global correction factor C:

$$ C = \begin{cases} C + (-C - c_s)\,\alpha, & \text{if } c_s \leqslant 0 \\ 0, & \text{otherwise} \end{cases} \tag{7} $$

Since C is computed for every frame, it can potentially change very quickly based on the lighting and target projection colors. This raises the need for a damping factor α, set to 0.1 in our examples. Its value is dependent on framerate and the expected variation in lighting and projected colors. Applying this factor ensures that the user will not notice the adaptions required to achieve the best projection quality possible.

The scalar global correction factor C is applied to Eq. (6):

$$ q_i = p_i \cdot (c_j - \bar{I}_j - B_j - E_j + C) \tag{8} $$

By adding this scalar correction factor to all color channels, we prevent any shift in color. To prevent the system from failing under extreme lighting conditions, we restrict C to a maximum value of 0.25.
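A sketch of the per-frame update of Eqs. (7) and (8) with the damping and clamp described above; `colors` stands for the final per-texel color components after all subtractions, and all names are ours.

```python
import numpy as np

ALPHA = 0.1   # damping factor from the paper
C_MAX = 0.25  # maximum correction, as stated above

def update_global_correction(C, colors):
    """Eq. (7): track the smallest (most negative) target color component
    c_s with damping; the result is applied in the *next* frame (Eq. (8))."""
    c_s = float(np.min(colors))
    C = C + (-C - c_s) * ALPHA if c_s <= 0.0 else 0.0
    return min(C, C_MAX)

def final_color(p, c, I_bar, B, E, C):
    """Eq. (8): q_i = p_i * (c_j - I_bar_j - B_j - E_j + C), clipped since
    projectors cannot emit negative light."""
    return np.clip(p[..., None] * (c - I_bar - B - E + C), 0.0, 1.0)
```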

The effect of the global optimization step can be seen in Fig. 7. On the right, the border between two projectors can no longer be compensated. While black-level compensation works, in this especially dark area the final color would need to be negative. Using the global optimization on the left, the discontinuity disappears and the overall projection regains detail in the dark areas.

One additional problem is introduced with this approach: the amount of interreflected light changes due to the adapted colors. To correctly compensate for this, an iterative scheme would be required. However, given that the lighting situation in general does not change rapidly, we supply the correction value to the next frame. Therefore, the projection will become correct within a few frames (also depending on α), which is not perceivable to the user.

Fig. 7 Results of our global optimization step.

This is further helped as changes in C only occur when the projection or lighting conditions change noticeably. These draw more attention from the viewer than our small adjustments for improved projection quality.

8 Results and discussion

Figure 1 shows results for all three presented compensation methods. The leftmost image shows projection into the corner of a box, as well as an intensified image of the indirect light Ī we subtract when performing our correction (see Eq. (3)). Figures 8 and 9 show this compensation on a more complex surface. Note how bright and discolored areas around the hair, eyes, and mouth noticeably disappear.

Fig. 8 Results of interreflection correction. Note the more even luminance distribution.


Fig. 9 Results for our interreflection correction. Notice the reduced color spill and increased contrast around the eye.

The middle image in Fig. 1 demonstrates our black-level compensation. Note how the projection is too bright where two projectors illuminate the object. With our correction, the artefact disappears and the overall contrast of the projection improves.

The rightmost image in Fig. 1 demonstrates the projection without (top) and with (middle) environmental light compensation, along with the captured light probes. For reference, the ground truth (a projection without any environmental light) is shown at the bottom of the depicted bust. The difference between the corrected and uncorrected image in a room lit by daylight is immediately noticeable. Even with the addition of a large amount of environmental light, our corrected result is comparable to the ground truth. Only very dark regions are not completely compensated, since it is not possible to project negative light. This effect is best observed in our video (https://vimeo.com/188121147), where we gradually open the shades and thereby increase the amount of environmental light. The perceived dynamic range, contrast, and color of the projection are constant due to our correction.

Enabling our global compensation mechanism in general is of benefit, and improves the perceived color correctness of the projection. However, this global step (increasing brightness) counteracts the dynamic environmental compensation (decreasing brightness) under certain circumstances. Here, we want to demonstrate the isolated effect of the environmental compensation, so we omit the global compensation in this section.

On the other hand, Fig. 6 shows a physical limitation of the presented system without a global step. Correction for a given surface effect is only possible when the projectors physically have enough headroom left in terms of their dynamic range. For dark target colors, the incident light from interreflection, environmental light, and black-level may no longer be compensatable, since it would require the projection of negative light. This also applies to a single color channel (regions with a pure color, or one close to a pure red, green, or blue target color). The effect is noticeable on the door when comparing the compensated result (middle) with the ground truth (bottom). The applied car paint is a very saturated, dark red. Thus, we cannot compensate for green and blue contributions to the added environmental light. In the region of the blinker light, where the target color is brighter and less saturated, the compensation has the desired effect.

Figure 10 shows a vectorscope of the environmental light compensation depicted in Fig. 1. Black is the vectorscope of the bust for the original projection without environmental light. When opening the shades, warm, yellow environmental light is added. The orange segment (dashed outline) in the scope represents our projection without any compensation.


Fig. 10 The vectorscope for a projection without (black), uncompensated (orange, dashed), and compensated (blue) environmental light.


This results in a strong peak on the red side of the segment as well as a general increase in brightness (radial scale of the segment). The blue segment depicts results with our compensation turned on. The general intensity and color distribution closely resemble the ground truth (black segment). It is also noticeable that the shapes of the orange and blue segments towards the center of the scope are broader due to added environmental light on the background of the image.

8.1 Setup

Our hardware setup consists of an off-the-shelf workstation with an Intel Core i7 4771 (3.5 GHz) and an NVidia GeForce GTX 980 graphics card. As projectors, two NEC NP-P451WG units with a resolution of 1280 × 800 pixels were calibrated to an ASUS Xtion PRO Live depth sensor for object tracking. A possible setup is depicted in Fig. 2.

8.2 Performance

Given a good parametrization of the object, a texture resolution of 1024 × 1024 for the precomputed data, target-color, interreflection, and black-level maps proved to suffice in our experiments. For the hemisphere, 64 samples showed good results. These numbers depend on the target object and the quality of the surface parametrization. The texture resolution mainly depends on the size of the target object and the unwrapping quality. The object complexity is the defining factor for the number of samples. Since the treated effects are rather low in frequency, these values can be chosen rather conservatively.

For detailed performance when generating the maps, see Table 1.

Table 1 Per-frame performance for all parts of our algorithm. Everything runs on the GPU, except the spherical harmonic calculation, which runs in a concurrent CPU thread

                            Augustus (25k faces)   Truck (300k faces)
Tracking                    1.1 ms                 1.3 ms
Rendering                   9.4 ms                 12.9 ms
Target-C. / Black-L. map    1.1 ms                 0.9 ms
Interreflection map         0.9 ms                 0.7 ms
Per-pixel solver            8.5 ms                 9.0 ms
Overall GPU time            21.0 ms                24.8 ms
Spherical harmonics (CPU)   17 ms                  17 ms
Frame rate                  ~40 fps                ~37 fps

Applying the correction to the final projected color has no measurable performance impact. Computing the spherical harmonic coefficients is performed in a concurrent CPU thread in real time and does not impact performance or latency. Since the preprocessing step runs offline, its performance is non-critical.

Given the tight time constraint, certain data is precomputed (UV-coordinates for interreflections, transfer vectors for spherical harmonics). As a result we can only project on non-deformable geometry. However, due to the runtime calculations, the target object can be moved and the projected content can change dynamically. This is especially important for animations on the target objects or a live painting system such as the one in Lange et al. [11].

9 Conclusions

In this paper we have presented a fast and reliable system for correcting projection mapping artefacts in dynamic scenes arising from interreflection, projector black-level, and environmental light. To meet the tight time constraints of a low-latency projection mapping system, some data is precomputed. Paired with the important assumption of knowing the exact color of every surface point, correcting artifacts from unwanted lighting is performed efficiently during runtime. With these extensions, the perceived quality of any projection mapping system can be improved significantly.

References

[1] Siegl, C.; Colaianni, M.; Thies, L.; Thies, J.; Zollhöfer, M.; Izadi, S.; Stamminger, M.; Bauer, F. Real-time pixel luminance optimization for dynamic multi-projection mapping. ACM Transactions on Graphics Vol. 34, No. 6, Article No. 237, 2015.

[2] Goral, C. M.; Torrance, K. E.; Greenberg, D. P.; Battaile, B. Modeling the interaction of light between diffuse surfaces. In: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, 213–222, 1984.

[3] Sloan, P.-P.; Kautz, J.; Snyder, J. Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, 527–536, 2002.

[4] Raskar, R.; Welch, G.; Low, K. L.; Bandyopadhyay, D. Shader lamps: Animating real objects with image-based illumination. In: Rendering Techniques 2001. Gortler, S. J.; Myszkowski, K. Eds. Springer Vienna, 89–102, 2001.

[5] Bimber, O.; Grundhöfer, A.; Wetzstein, G.; Knödel, S. Consistent illumination within optical see-through augmented environments. In: Proceedings of the 2nd IEEE and ACM International Symposium on Mixed and Augmented Reality, 198–207, 2003.

[6] Bimber, O.; Grundhöfer, A.; Zeidler, T.; Danch, D.; Kapakos, P. Compensating indirect scattering for immersive and semi-immersive projection displays. In: Proceedings of the IEEE Virtual Reality Conference, 151–158, 2006.

[7] Bermano, A.; Brüschweiler, P.; Grundhöfer, A.; Iwai, D.; Bickel, B.; Gross, M. Augmenting physical avatars using projector-based illumination. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 189, 2013.

[8] Sheng, Y.; Cutler, B.; Chen, C.; Nasman, J. Perceptual global illumination cancellation in complex projection environments. Computer Graphics Forum Vol. 30, No. 4, 1261–1268, 2011.

[9] Debevec, P. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In: Proceedings of the ACM SIGGRAPH 2008 Classes, Article No. 32, 2008.

[10] Grundhöfer, A. Practical non-linear photometric projector compensation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 924–929, 2013.

[11] Lange, V.; Siegl, C.; Colaianni, M.; Kurth, P.; Stamminger, M.; Bauer, F. Interactive painting and lighting in dynamic multi-projection mapping. In: Augmented Reality, Virtual Reality, and Computer Graphics. De Paolis, L.; Mongelli, A. Eds. Springer Cham, 113–125, 2016.

Christian Siegl is a Ph.D. candidate at the Computer Graphics Group of the University of Erlangen-Nuremberg. His research is focused on mixed reality using projection mapping, medical image processing, and the virtual creation of apparel.

Matteo Colaianni is a Ph.D. candidate at the Computer Graphics Group of the University of Erlangen-Nuremberg. His focus of research is geometry processing in the field of apparel development as well as statistical shape analysis.

Marc Stamminger has been a professor for visual computing at the Computer Graphics Group of the University of Erlangen-Nuremberg since 2002. His research is focused on real-time visual computing, in particular rendering, visualization, interactive computer vision systems, and augmented and mixed reality.

Frank Bauer is a research fellow at the Computer Graphics Group of the University of Erlangen-Nuremberg. His research is focused on 3D scene reconstruction, augmented, mixed, and virtual-reality applications, as well as accessible human–machine interactions.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.