Course 3: Computational Photography. Jack Tumblin, Northwestern University. 3: Improvements to Film-like Photography


  • Slide 1
  • Course 3: Computational Photography. Jack Tumblin, Northwestern University. 3: Improvements to Film-like Photography
  • Slide 2
  • Improving FILM-LIKE Camera Performance. What would make it perfect? Dynamic range; vary focus point-by-point; field of view vs. resolution; exposure time and frame rate.
  • Slide 3
  • The Photographic Signal Path. Computing can improve every component: light sources, scene, rays, optics, sensors, processing, storage, display.
  • Slide 4
  • Computational Photography: novel illumination (light sources), the scene as an 8D ray modulator, novel cameras (generalized optics, generalized sensor), processing that recreates the 4D lightfield, display.
  • Slide 5
  • Computable Improvements (just a partial list):
    Sensors: parallel sensor arrays; high dynamic range, frame rate, resolution; 2-, 3-, 4-D sensing; super-resolution.
    Sensor optics: image stabilization, steerable subpixels (Nayar), reconstructive sensing, pushbroom, omnidirectional, multilinear.
    Lights: dynamic range ("The best tone map is no tone map", P. Debevec); 2-, 3-, 4-D lightfield creation.
    Light optics: steered beam, structured light, ad-hoc/handheld, 2-, 3-, 4-D control.
    Processing: selective merging; graph-cut/gradient boundaries; synthetic aperture; virtual viewpoint, virtual camera; quantization; optic-flaw removal.
    Display: ambient light control, light-sensitive display, true 4-D static white-light holography (Zebra Imaging).
  • Slide 6
  • Common Thread: Existing film-like camera quality is VERY HIGH, despite low cost. Want even better results? Make MANY MEASUREMENTS; merge the best of them computationally. *Exceptions* to this may prove most valuable? Or will we learn to make arrays practical?
  • Slide 7
  • Virtual Optics: MIT Dynamic Light Field Camera. Multiple dynamic virtual viewpoints; efficient bandwidth usage: send only what you see. Yang et al. 2002: 64 tightly packed commodity CMOS webcams; 30 Hz, scalable, real-time.
  • Slide 8
  • Camera, Display Array: MERL 3D-TV System. 16-camera horizontal array; auto-stereoscopic display; efficient coding and transmission. Anthony Vetro, Wojciech Matusik, Hanspeter Pfister, and Jun Xin.
  • Slide 9
  • Stanford Camera Array (SIGG 2005)
  • Slide 10
  • Assorted Pixels Sensor. [Figure: Bayer grid of interleaved R, G, B color filters.] Interleaved color filters: let's interleave OTHER assorted measures too. De-mosaicking helps preserve resolution.
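Since de-mosaicking is what recovers full-resolution color from the interleaved Bayer samples, the idea can be sketched with a minimal bilinear demosaic in Python/NumPy. This assumes an RGGB layout, and the function names are illustrative, not from any camera SDK:

```python
import numpy as np

def bayer_masks(h, w):
    # Boolean sampling masks for an assumed RGGB Bayer layout
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    g = (y + x) % 2 == 1
    b = (y % 2 == 1) & (x % 2 == 1)
    return r, g, b

def box3(a):
    # Sum over each pixel's 3x3 neighborhood (zero-padded at the borders)
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    # Keep sampled values; fill each missing color sample with the average
    # of the sampled neighbors of that color in a 3x3 window.
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    for c, mask in enumerate(bayer_masks(h, w)):
        plane = np.where(mask, raw, 0.0)
        counts = box3(mask.astype(float))
        out[:, :, c] = np.where(mask, raw, box3(plane) / np.maximum(counts, 1.0))
    return out
```

On a uniform gray raw frame this reproduces the same gray in all three channels; real demosaicking algorithms add edge-aware interpolation to avoid color fringing.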
  • Slide 11
  • Improving FILM-LIKE Camera Performance. What would make it perfect? Dynamic Range
  • Slide 12
  • Film-Style Camera: Dynamic Range Limits. Under-exposure: highlight details captured, shadow details lost. Over-exposure: highlight details lost, shadow details captured.
  • Slide 13
  • Problem: Map Scene to Display. Domain of human vision: from ~10^-6 to ~10^+8 cd/m^2. Range of typical displays: from ~1 to ~100 cd/m^2. [Figure: luminance scale (cd/m^2) from 10^-6 to 10^+8, marking starlight, moonlight, office light, daylight, flashbulb, against an 8-bit (0-255) display range.]
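The mapping problem on this slide can be made concrete with the simplest possible global operator: compress log-luminance linearly into the 8-bit display range. This is only a sketch of the problem setup; real tone-mapping operators are far more sophisticated, and the function name is illustrative:

```python
import numpy as np

def log_tonemap(luminance, l_min=1e-6, l_max=1e8):
    # Compress ~14 orders of magnitude of scene luminance (cd/m^2)
    # into 8-bit display codes by mapping log10-luminance linearly.
    logl = np.log10(np.clip(luminance, l_min, l_max))
    t = (logl - np.log10(l_min)) / (np.log10(l_max) - np.log10(l_min))
    return np.round(255 * t).astype(np.uint8)
```

Starlight (~10^-6 cd/m^2) lands at code 0 and a flashbulb-level 10^8 cd/m^2 at code 255; every factor of 10 in luminance gets the same number of display codes, which is why such naive global maps wash out local contrast.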
  • Slide 14
  • Slide 15
  • Slide 16
  • Marc Levoy: High dynamic range (HDR) capture overcomes one of photography's key limitations. Negative film = 250:1 (8 stops); paper prints = 50:1; [Debevec97] = 250,000:1 (18 stops). A hot topic at recent SIGGRAPHs.
  • Slide 17
  • Debevec97 (see www.HDRshop.com). STEP 1: number the images i; pick fixed spots (x_j, y_j), j = 0...6, that sample the scene's radiance values logL_i well. [Figure: pixel value Z vs. logL; the response curve f(logL).]
  • Slide 18
  • Debevec97 (see www.HDRshop.com). STEP 2: collect pixel values Z_ij (from image i, location j); all of them sample the response curve f(logL). [Figure: pixel value Z vs. logL_i for j = 0...6.]
  • Slide 19
  • Debevec97 (see www.HDRshop.com). Use the multiple samples to reconstruct the response curve; then use the inverse response curve to reconstruct the intensities that caused the responses.
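Once the response curve is known, the final merge (invert the response, then combine all exposures) can be sketched as below. For brevity this assumes an already-linearized 8-bit response, Z = 255·E·dt clipped to [0, 255]; Debevec97 instead applies the recovered inverse response g = f^-1 at this step. The name merge_hdr is illustrative:

```python
import numpy as np

def weight(z):
    # Hat weighting: trust mid-range pixel values, distrust 0 and 255,
    # where the sensor is noisy or saturated
    return np.minimum(z, 255.0 - z)

def merge_hdr(images, exposure_times):
    # Weighted average of per-exposure log-radiance estimates:
    # log E = log(Z/255) - log(dt) under the assumed linear response.
    num = np.zeros(np.shape(images[0]), dtype=np.float64)
    den = np.zeros_like(num)
    for z, dt in zip(images, exposure_times):
        z = np.asarray(z, dtype=np.float64)
        w = weight(z)
        log_e = np.log(np.clip(z, 1.0, 255.0) / 255.0) - np.log(dt)
        num += w * log_e
        den += w
    return np.exp(num / np.maximum(den, 1e-9))
```

A pixel saturated in the long exposure gets weight zero there, so its radiance comes entirely from the shorter exposures; that is exactly how the multiple measurements extend dynamic range.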
  • Slide 20
  • HDR Direct Sensing? An open problem! (esp. for video...) A direct (and expensive) solution: the flying-spot radiometer, a brute-force instrument; costly, slow, delicate. Some other novel image sensors: line-scan cameras (e.g. Spheron: multi-detector); logarithmic CMOS circuits (e.g. Fraunhofer Inst.); self-resetting pixels (e.g. SMaL/Cypress Semi); gradient detectors (CVPR 2005, Tumblin, Raskar et al.)
  • Slide 21
  • HDR From Multiple Measurements. Captured images, computed image (courtesy Shree Nayar, Tomoo Mitsunaga 99). Ginosar et al. 92, Burt & Kolczynski 93, Madden 93, Tsai 94, Saito 95, Mann & Picard 95, Debevec & Malik 97, Mitsunaga & Nayar 99, Robertson et al. 99, Kang et al. 03.
  • Slide 22
  • MANY ways to make multiple exposure measurements. Sequential exposure change: Ginosar et al. 92, Burt & Kolczynski 93, Madden 93, Tsai 94, Saito 95, Mann 95, Debevec & Malik 97, Mitsunaga & Nayar 99, Robertson et al. 99, Kang et al. 03. Mosaicing with spatially varying filter: Schechner and Nayar 01, Aggarwal and Ahuja 01. Multiple image detectors: Doi et al. 86, Saito 95, Saito 96, Kimura 98, Ikeda 98, Aggarwal & Ahuja 01.
  • Slide 23
  • MANY ways to make multiple exposure measurements. Multiple sensor elements in a pixel: Handy 86, Wen 89, Murakoshi 94, Konishi et al. 95, Hamazaki 96, Street 98. Assorted pixels: Nayar and Mitsunaga 00, Nayar and Narasimhan 02. Generalized Bayer grid: trade resolution for multiple exposures and color. [Figure: generalized Bayer grid of interleaved filters.]
  • Slide 24
  • Assorted-Pixel Camera Prototype. Digital still camera; camera with assorted pixels. (Courtesy: Sony Kihara Research Lab)
  • Slide 25
  • Another Approach: Locally Adjusted Sensor Sensitivity. Computational pixels: Brajovic & Kanade 96, Ginosar & Gnusin 97, Serafini & Sodini 00 (pixel sensitivity set by its illumination).
  • Slide 26
  • LCD Light Attenuator limits image intensity reaching the 8-bit sensor. Unprotected 8-bit sensor output vs. attenuator-protected 8-bit sensor output. [Figure: light passes an LCD adaptive light attenuator element (transmittance T_{t+1}) before the detector element; a controller sets the attenuator from the measured intensity I_t.]
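The controller in this diagram can be sketched as one feedback step per frame: the transmittance T for frame t+1 is nudged toward the value that would bring the protected sensor reading to mid-range. This is a toy control law for illustration, not the actual hardware's:

```python
def update_attenuation(T, sensor_value, target=128.0, gain=0.5, t_min=1e-3):
    # One feedback step: if the 8-bit sensor reading is above `target`,
    # reduce LCD transmittance T for the next frame; if below, raise it.
    # `gain` damps the correction; T is clamped to a physical range.
    error = target / max(float(sensor_value), 1.0)
    T_next = T * (1.0 - gain + gain * error)
    return min(max(T_next, t_min), 1.0)
```

Driven by a scene far brighter than the sensor's range (say 10000 in sensor units), the loop walks T down until light x T sits near mid-range, so the 8-bit detector never saturates.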
  • Slide 27
  • Improving FILM-LIKE Camera Performance: Vary Focus Point-by-Point
  • Slide 28
  • High depth-of-field: adjacent views use different focus settings; for each pixel, select the sharpest view. [Figure: close focus, distant focus, composite.] [Haeberli90]; Levoy et al., SIGG2005.
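The select-the-sharpest-view rule can be sketched per pixel using Laplacian magnitude as a sharpness proxy (an assumed measure; [Haeberli90] and Levoy et al. describe the real pipelines, and these function names are illustrative):

```python
import numpy as np

def sharpness(img):
    # Local contrast via the magnitude of a 4-neighbor Laplacian
    p = np.pad(img, 1, mode='edge')
    lap = 4 * img - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
    return np.abs(lap)

def focus_stack(images):
    # For each pixel, keep the value from the view where it is sharpest
    stack = np.stack(images)                      # (n, h, w)
    scores = np.stack([sharpness(im) for im in images])
    best = np.argmax(scores, axis=0)              # (h, w) index of sharpest view
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real focus stacking adds smoothing of the selection map so that per-pixel winners do not produce speckle at depth boundaries.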
  • Slide 29
  • Single-Axis Multi-Parameter Camera (SAMP), 2005: Morgan McGuire (Brown), Wojciech Matusik (MERL), Hanspeter Pfister (MERL), Fredo Durand (MIT), John Hughes (Brown), Shree Nayar (Columbia). Idea: cameras + beamsplitters; place MANY (8) cameras at the same virtual location.
  • Slide 30
  • SAMP Prototype System (Layout)
  • Slide 31
  • Multiple Simultaneous Focus Depths. [Figure: co-located lenses with fore and back focal planes z_F, z_B.] Strongly desired in microscopy, too: see http://www.micrographia.com/articlz/artmicgr/mscspec/mscs0100.htm
  • Slide 32
  • Synthetic aperture photography. [Figure: green = in-focus point, red = out-of-focus point.] Long history in RADAR, new to photography. See Confocal Photography, Levoy et al., SIGG2004.
  • Slide 33
  • Synthetic aperture photography. Smaller aperture: less blur, smaller circle of confusion.
  • Slide 34
  • Synthetic aperture photography. Merge MANY cameras to act as ONE BIG LENS. Small items are so blurry they seem to disappear.
  • Slide 35
  • Synthetic aperture photography Huge lens ray bundle is now summed COMPUTATIONALLY:
  • Slide 36
  • Synthetic aperture photography. Computed image: large-lens ray bundle summed for each pixel.
  • Slide 37
  • Synthetic aperture photography. Computed image: large-lens ray bundle summed for each pixel.
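The computational summing on these slides amounts to shift-and-add: translate each camera's image by the parallax of the chosen focal plane, then average, so points on that plane align and stay sharp while everything else blurs away. A toy sketch with integer-pixel shifts and a simplified parallax model (disparity = offset / depth; names are illustrative):

```python
import numpy as np

def synthetic_aperture(images, offsets, focus_depth):
    # Shift each camera's image by its parallax at focus_depth, then average.
    # np.roll wraps at the borders, which is fine for this toy example.
    h, w = images[0].shape
    acc = np.zeros((h, w))
    for img, (ox, oy) in zip(images, offsets):
        dx = int(round(ox / focus_depth))   # horizontal parallax in pixels
        dy = int(round(oy / focus_depth))   # vertical parallax in pixels
        acc += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return acc / len(images)
```

Refocusing is then just re-running the sum with a different focus_depth over the same captured images, which is exactly what a single physical lens cannot do after the shutter fires.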
  • Slide 38
  • Long-range synthetic aperture photography
  • Slide 39
  • Focus Adjustment: Sum of Bundles
  • Slide 40
  • Improving FILM-LIKE Camera Performance: Field of View vs. Resolution. Are we done? Almost EVERY digital camera has panoramic stitching. No; much more is possible:
  • Slide 41
  • A tiled camera array: 12 x 8 array of VGA cameras. Abutted: 7680 x 3840 pixels; overlapped 50%: half of this. Total field of view = 29° wide. Seamless mosaicing isn't hard: cameras are individually metered and share approximately the same center-of-projection.
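The slide's pixel counts can be checked directly. This assumes VGA = 640 x 480 and reads "overlapped 50%: half of this" as halving the unique pixel total, which is one possible interpretation of the slide:

```python
# 12 x 8 array of VGA (640 x 480) cameras, edge-to-edge
cols, rows = 12, 8
abutted = (cols * 640, rows * 480)      # tiled sensor dimensions in pixels
total_pixels = abutted[0] * abutted[1]
# 50% overlap between neighboring tiles: half the pixels are unique
overlapped_pixels = total_pixels // 2
```

So abutting gives the quoted 7680 x 3840 (about 29.5 megapixels), and the 50%-overlapped configuration trades half of that for calibration and blending headroom.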
  • Slide 42
  • Tiled panoramic image (before geometric or color calibration)
  • Slide 43
  • Tiled panoramic image (after calibration and blending)
  • Slide 44
  • [Figure: per-tile shutter speeds (1/30, 1/60, 1/120 s): same exposure in all cameras vs. cameras individually metered, overlapped 50%.]
  • Slide 45
  • Improving FILM-LIKE Camera Performance: Exposure Time and Frame Rate
  • Slide 46
  • High Speed Video. Say you want 120 frames per second (fps) video. You could get one camera that runs at 120 fps. Or...
  • Slide 47
  • High Speed Video. Say you want 120 frames per second (fps) video. You could get one camera that runs at 120 fps. Or get 4 cameras running at 30 fps.
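The four-camera version works by staggering the trigger times by 1/120 s and interleaving the frames round-robin into one stream. A sketch (the function name is illustrative):

```python
def interleave_streams(streams):
    # Four 30 fps cameras triggered 1/120 s apart: taking one frame from
    # each camera per 30 fps tick, in trigger order, yields 120 fps.
    out = []
    for frames in zip(*streams):
        out.extend(frames)
    return out
```

The catch, which multi-camera rigs must calibrate away, is that the interleaved frames come from slightly different viewpoints, so the merged video needs geometric and color alignment across cameras.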
  • Slide 48
  • 52 Camera Cluster, 1560 FPS Levoy et al., SIGG2005
  • Slide 49
  • Conclusions. Multiple measurements: multi-camera, multi-sensor, multi-optics, multi-lighting. Intrinsic limits seem to require it: lens diffraction limits, noise, available light power. Are we eligible for Moore's law? Or will lens making and mechanics limit us?