

Sensor Modeling and Demonstration of a Multi-Object Spectrometer for Performance-Driven Sensing

John P. Kerekes^a, Michael D. Presnar^{a,b}, Kenneth D. Fourspring^a, Zoran Ninkov^a,

David R. Pogorzala^a, Alan D. Raisanen^a, Andrew C. Rice^c, Juan R. Vasquez^c, Jeffrey P. Patel^a, Robert T. MacIntyre^a, and Scott D. Brown^a

a Chester F. Carlson Center for Imaging Science Rochester Institute of Technology

54 Lomb Memorial Drive Rochester, NY 14623-5604

b Air Force Institute of Technology

2950 Hobson Way Wright-Patterson AFB, OH 45433-7765

c Numerica Corporation

2661 Commons Blvd., Suite 210 Beavercreek, OH, USA 45431-3704

ABSTRACT

A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

Keywords: Hyperspectral scene simulation, physics-based modeling and simulation, DIRSIG, micromirror, DMD, multi-object spectrometer, adaptive multimodal sensor, performance-driven sensor, feature-aided target tracking

1. INTRODUCTION

1.1 Motivation and objectives

The U.S. Air Force Office of Scientific Research (AFOSR) is the sponsor for a Discovery Challenge Thrust (DCT) in the area of integrated multimodal sensing, processing, and exploitation1. AFOSR is interested in basic research to conceive adaptive multimodal electro-optical/radio-frequency (EO/RF) sensor concepts in a “performance-driven” context in order to address problems of detecting, tracking, and identifying targets in highly cluttered, dynamic scenes. A performance-driven integrated approach is a coupling of adaptive multimodal EO/RF sensing hardware with physics-based modeling of target scene phenomenology, environmental interactions, data processing, and exploitation algorithms. Modeling and simulation of a staring imager system should demonstrate the ability to capture multiple

Further author information: (Send correspondence to J.P.K. or J.R.V.) J.P.K.: Email: [email protected], Telephone: 1 585 475-6996 J.R.V.: Email: [email protected], Telephone: 1 937 427-9725

Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV,edited by Sylvia S. Shen, Paul E. Lewis, Proc. of SPIE Vol. 7334, 73340J · © 2009 SPIE

CCC code: 0277-786X/09/$18 · doi: 10.1117/12.819265

Proc. of SPIE Vol. 7334 73340J-1



electromagnetic observables using a variety of sensing modalities, including spatial, spectral, polarimetric, radiometric, and temporal, within a broad wavelength region from the ultraviolet (UV) to the RF.

A fielded staring imaging sensor should be able to find and track individuals of interest in populated urban areas, detect activity and materials indicative of improvised explosive device (IED) placement, and detect and identify threatening space objects at long ranges. Thus, a novel multimodal detector design should utilize hyperspectral exploitation and multimode fusing to enhance deeply-hidden, high-clutter target recognition by optimally exploiting the phenomenology of multimodal target scene signatures. Innovation and development of a tunable, multimode, vertically integrated (common sensor package), large-format staring focal plane array are required to accommodate the dynamic sensing requirements dictated by the dynamic target scene.

1.2 Approach

The approach taken by the authors includes three major research veins, as shown in Figure 1. First, modeling of dynamic scene phenomenology and incorporation of realistic moving target characteristics is required as a simulated input to test a model of a performance-driven sensor. Second, basic research and integration of micro-electromechanical systems (MEMS) devices, refractive/reflective optics, and focal plane arrays is required to achieve co-registered EO imagery, video, polarization, and spectral sensing in a performance-driven adaptive optical sensor model. Third, a performance-driven algorithm is necessary to exploit multiple modalities of the sensor to track moving targets within the scene. The joint performance effects of simultaneously varying parameters within each of these research veins can be evaluated using various measures of effectiveness.

Figure 1. Performance-Driven Sensor Approach Flowchart by RIT-Numerica Team

1.3 Overview of paper

We first provide an introduction to the dynamic scene modeling tools used as inputs in the simulation of a performance-driven sensor. We describe the Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, a scene-building modeling and simulation tool that incorporates physics-based target scene phenomenology. The incorporation of vehicular traffic into MegaScene to generate frames of video that include moving targets is also discussed. Next, we present the modeling efforts of a micromirror array-based astronomical multi-object spectrometer (MOS) for the purpose of designing a downward-looking MOS as a primary component of a performance-driven adaptive sensor. Characterization of scattering when operating various micromirror array types within the adaptive sensor is discussed. The performance-driven algorithm, used for tracking moving vehicles and selecting sensor modalities of the micromirror-based adaptive sensor, is also presented. Finally, details of the future work required for an end-to-end simulation of a performance-driven sensor model are provided.



2. DYNAMIC SCENE MODELING

2.1 Hyperspectral image modeling in DIRSIG

The object tracking algorithms for this study were tested on a data set consisting of a series of synthetically-generated image frames encoded into a video stream. These individual frames were generated using the DIRSIG model, a first-principles, physics-based synthetic image simulation software package2. The model has the ability to produce imagery in a variety of modalities, including multispectral, hyperspectral, polarimetric, and LIDAR in the visible through the thermal infrared regions of the electromagnetic spectrum.

The video data set consisted of a number of vehicles moving about the scene through time. The base scene used for this was MegaScene 1, a high-fidelity recreation of a region of northern Rochester, NY3. MegaScene 1 depicts a largely residential, suburban-style neighborhood featuring a large number of houses, trees, and a middle school. Within the scene there is a main thoroughfare running in a north-south direction with several side streets and cul-de-sacs branching off. An RGB rendering of the portion of MegaScene 1 used here is depicted in Figure 2. (for color image see electronic version)

Figure 2. Example RGB frame of the test scene area

The DIRSIG model has the ability to utilize a specific radiometry solver on a per-material basis. This enables the incorporation of bi-directional reflectance distribution functions (BRDFs) on appropriate materials. Although they were not used for the simulation shown in Figure 2, BRDF models will be used in conjunction with the spectral reflectance curves for the vehicle paints for this study. Since the vehicles are changing not just their location in the scene but also their orientation with respect to the sun and sensor, the use of BRDF models will enable DIRSIG to reproduce solar glint and other real-world phenomenology that would strain the vehicle tracking algorithm.

2.2 Vehicular traffic and video simulation

When simulating a video sequence with DIRSIG, the user has the ability to manually relocate scene content as a function of time. However, with regard to vehicle motion, this approach is prohibitive because the user would be unable to generate a realistic flow of traffic on a large scale. For this reason, vehicle motion was introduced into MegaScene 1



through the use of the Simulation of Urban MObility (SUMO) traffic model, a “microscopic, space continuous and time discrete traffic simulation” tool. The SUMO package allows the user to explicitly define any number of routes, on which any number of vehicles can travel.

A road network of the MegaScene 1 area was created in order to integrate SUMO simulations with DIRSIG. Maps from Google™ Maps were imported into Inkscape, a vector-based graphics package, and assigned to a background layer. The network edges (lanes) and nodes (intersections) were then drawn atop the map and exported to SUMO in an Extensible Markup Language (XML) format. Once the network had been generated, a series of routes were created for the vehicles to travel along. In an attempt to create a simulation that resembled real-world traffic flow, a large number of routes, most of which traveled along the main thoroughfare for at least a portion of their journey, were generated. Once the network and the routes were defined in SUMO, the simulation was performed and the output was reformatted into a series of DIRSIG inputs. A screen capture of the SUMO software simulating the traffic flow for this data set is shown in Figure 3.

Figure 3. Screen capture of SUMO simulating traffic flow
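The reformatting step from SUMO output to per-vehicle motion inputs can be sketched as follows. The paper does not publish its converter, so this is only a sketch: it assumes SUMO's standard `--fcd-output` XML (timestep/vehicle records), uses an illustrative embedded sample rather than a real simulation file, and stops short of DIRSIG's actual motion-input format, which is not reproduced here.

```python
# Minimal sketch of grouping SUMO floating-car-data (FCD) records into
# per-vehicle tracks, as a stand-in for the SUMO-to-DIRSIG reformatting step.
# The sample XML below is illustrative, not output from the paper's simulation.
import xml.etree.ElementTree as ET
from collections import defaultdict

FCD_SAMPLE = """<fcd-export>
  <timestep time="0.00">
    <vehicle id="veh0" x="10.0" y="5.0" angle="90.0" speed="8.3"/>
  </timestep>
  <timestep time="0.10">
    <vehicle id="veh0" x="10.8" y="5.0" angle="90.0" speed="8.3"/>
  </timestep>
</fcd-export>"""

def fcd_to_tracks(fcd_xml):
    """Group SUMO FCD records into per-vehicle lists of (t, x, y) samples."""
    tracks = defaultdict(list)
    root = ET.fromstring(fcd_xml)
    for step in root.iter("timestep"):
        t = float(step.get("time"))
        for veh in step.iter("vehicle"):
            tracks[veh.get("id")].append((t, float(veh.get("x")), float(veh.get("y"))))
    return dict(tracks)

tracks = fcd_to_tracks(FCD_SAMPLE)
```

Each per-vehicle track could then be resampled to the sensor frame rate before being written out in whatever motion format the renderer expects.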

The video data set was designed to simulate a sensor operating at 10 Hz for a duration of 120 seconds. Each frame was rendered as a hyperspectral cube of 61 bands spanning 0.4–1.0 μm at a resolution of 0.01 μm. The spatial extent of the frames was approximately 700×400 m at a resolution of 0.75 m. These spectral and spatial parameters result in an image file for each frame that is approximately 2 GB in size, for a total of 2.5 TB of data for the two-minute simulation of 1200 frames. The computing load was spread across a large number of CPU cores to parallelize the process, which enabled the entire video to be simulated in approximately 2–3 days.
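The stated acquisition parameters can be sanity-checked with a few lines of arithmetic, using only the numbers given in the text (band count and spacing, frame rate, duration, and the quoted ~2 GB per frame):

```python
# Sanity check of the simulation parameters quoted above.
import numpy as np

band_centers_um = np.round(np.arange(0.40, 1.00 + 1e-9, 0.01), 2)  # 0.4-1.0 um grid
n_bands = len(band_centers_um)       # 61 bands at 0.01 um spacing
n_frames = 10 * 120                  # 10 Hz for 120 s -> 1200 frames
total_tb = n_frames * 2.0 / 1000.0   # ~2 GB/frame -> 2.4 TB, consistent with ~2.5 TB
```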

3. ADAPTIVE SENSOR MODELING

The system being studied for use as an adaptive performance-driven sensor is a previously constructed micromirror array-based MOS that acquires panchromatic imagery of a scene as well as the full spectrum of selected objects of interest.

We present the ongoing efforts to fully model the optical channels of a MOS, the input-output characteristics of a commercial micromirror array, and novel micromirror design concepts.




3.1 Multi-object spectrometer modeling

An adaptive multimodal optical sensor at the heart of a performance-driven sensor system is an imaging device with the capability of interacting with a performance-driven algorithm to detect, identify, and track targets in real-time using multiple modalities. A sensor that adaptively collects spatial, spectral, or polarization modalities was required for a high-rate target tracker to exploit from frame to frame. Thus, a MOS was used as the starting point for this sensor design.

Various designs of multi-object spectrometers are present in the literature. The Near-Infrared Camera MOS4 of the Hubble Space Telescope is a slitless design. The Gemini MOS5 requires custom manufacture of conventional laser-milled slit masks. The Hydra MOS6 on the WIYN telescope utilizes a robotic fiber-bundle positioner. The Infrared MOS7 and the RITMOS8 both utilize a micromirror array for slit formation. A micromirror array is the most suitable for slit creation in a performance-driven sensor due to its compactness and speed of commanded updates.

The RITMOS was chosen as the system to initiate modeling of an adaptive sensor. It was originally designed in 2003 as an astronomical spectrometer and imager connected to a telescope for the purpose of Morgan-Keenan (M-K) spectral classification of stars. It is sensitive within the M-K classical blue wavelength regime from 3900-4900 Å. Its optical bench section is shown in Figure 4.

Figure 4. Top View of the RITMOS Optical Bench

The RITMOS utilizes an 848×600 Texas Instruments Digital Micromirror Device (DMD) array at the focal point of the 6-element foreoptics assembly on its primary optical axis. Each of its individual 16 µm square mirrors is controlled to deflect incident light into one of two output paths: an imaging channel or a spectrometry channel. Thus, the DMD array acts as a “light switch” to create on-the-fly slits within a 2-D pupil function, p[x,y]. The imaging channel consists of an Offner relay and two fold mirrors that reimage the DMD onto a cooled 512×512 charge-coupled device (CCD) detector. A 5-position motorized filter wheel allows the collection of grayscale images that are either panchromatic (clear aperture) or the output of red, green, or blue passband filters. The spectrometry channel consists of a 3-mirror reflective collimator, a 1200 lines/mm transmission grating, a 5-element reimager, and a cooled 4097×4130 CCD detector. Baffles are arranged in both channels to reduce stray light.
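The DMD's “light switch” role can be sketched as a binary state array over the 848×600 mirrors. The two-state abstraction below ignores mirror transition dynamics and the optical paths themselves; the function name is illustrative, not from the RITMOS control software.

```python
# Illustrative model of the DMD as an array of two-state optical switches:
# 0 routes a mirror's light to the imaging channel, 1 to the spectrometry channel.
import numpy as np

IMAGING, SPECTROMETRY = 0, 1
dmd = np.full((600, 848), IMAGING, dtype=np.uint8)  # rows x cols of mirrors

def open_slit(dmd, row, col, size=2):
    """Tilt a size x size block of mirrors toward the spectrometry channel,
    forming an on-the-fly slit on a target of interest."""
    dmd[row:row + size, col:col + size] = SPECTROMETRY

open_slit(dmd, 299, 423)   # a 2x2 slit near the array center
n_spec = int(dmd.sum())    # mirrors currently feeding the spectrometer
```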

Acquisition of target spectra within a scene consists of five main steps. First, all incoming light is deflected by the DMD array towards the imaging channel, which reimages the DMD through the Offner relay onto the imaging detector. Second, the locations of targets of interest are selected using a software tool, and the corresponding micromirrors are




deflected towards the spectroscopy channel. Third, the light is sent through the spectrometry channel collimator, dispersed by the transmission grating, reimaged, exposed onto the spectrometry CCD detector, and differenced from a dark frame. Since the dispersion is simply the convolution of the pupil function p[x,y] with the target’s spectral intensity I[λ]|λ=y, replicas of p[x,y] are formed along the y-axis of the CCD centered on the row corresponding to the wavelength of each Lorentzian-shaped emission line or merged across a continuous spectrum. Fourth, the incoming light path is cut off, and light from a Krypton rare-gas calibration lamp is injected with a light-shaping diffuser into the optical axis through the same micromirror slits and exposed onto the spectrometry channel’s CCD detector for wavelength-to-pixel matching and extrapolation using the locations of seven known9 high-intensity Krypton emission lines between 4200-4600 Å. Finally, each row of CCD pixels corresponding to each of the multiple slits formed on targets of interest in the scene is matched to the same row showing peaks of emission lines from the Krypton light, thus producing plots of normalized spectral intensity vs. wavelength for each selected target.

A useful slit formed by the RITMOS for point source (e.g., star) spectrum collection is a 2×2 group of micromirrors. If a linear relationship between wavelength (λ) and pixel number (x) is assumed across a detector row for λ ∈ [3900, 4900] Å, then linear regression of the seven known Krypton emission line locations yields the slope-intercept relationship λ = (0.7347x + 2918) Å for a 2×2 group of micromirrors at the exact center of the array. Here, the dispersion of 0.7347 Å/pixel is valid near the center wavelength of 4400 Å, while the intercept of 2918 Å corresponds to the wavelength at the detector row’s edge. Figure 5 shows the post-calibration results of the measured Krypton spectrum in image (a), and the application of the extrapolated pixel-to-wavelength linear relationship to the spectra collected from four types of paper colors using a Tungsten light source in image (b). Additional measurements of compact fluorescent lamps (CFLs) and black lamps have shown Mercury emission line matches, and outdoor measurements have shown Fraunhofer absorption line matches of elements in the upper layers of the sun. The point spread function (PSF) of the refractive components of the imaging channel was also measured using a 2×2 group of micromirrors (i.e., an inverse slit) to simulate a point source δ[x-x0,y-y0] directed through the Offner relay towards the imaging CCD.
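The pixel-to-wavelength calibration is a straight-line fit; the sketch below applies the published relationship λ = (0.7347x + 2918) Å and shows how such a fit is recovered by linear regression. The calibration points used here are illustrative values generated from the published line, not the measured Krypton line locations.

```python
# Sketch of the linear pixel-to-wavelength calibration described above.
import numpy as np

def pixel_to_wavelength(x):
    """Published fit for a 2x2 slit at the array center (Angstroms)."""
    return 0.7347 * x + 2918.0

# Illustrative calibration points lying on the published line (not the
# actual measured Krypton emission line pixel locations):
pixels = np.array([1745, 1907, 2093, 2156])
waves = pixel_to_wavelength(pixels)
slope, intercept = np.polyfit(pixels, waves, 1)  # recovers ~0.7347 and ~2918
```

As a check, the fit places the row's central pixels near the 4400 Å center wavelength quoted in the text.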

Figure 5. (a) Measured Krypton lamp spectrum showing seven emission lines used for calibration. (b) Measured Tungsten lamp spectrum reflected from four different paper colors. (for color image see electronic version)

Optical modeling10,11 of the RITMOS subcomponents (foreoptics lens, Offner relay and fold mirrors, filter wheel, collimator, transmission grating, and reimager lens) was accomplished using Lambda Research Corporation’s OSLO Premium Edition. Work is in progress to integrate these models with the micromirror array model discussed in the next section, with an end goal to create a modulation transfer function (MTF) model that can generate the outputs of both imaging and spectrometry CCDs given a simulated input scene.

Radiometric modeling12 of the RITMOS provides insight into the efficiency of every subcomponent of the system. The baseline parameters of exoatmospheric irradiance, atmospheric transmission, and target reflectivity can be used to derive the sensor-reaching radiance. Transmissions and positions of the optically modeled RITMOS subcomponents were used to determine the irradiance onto each detector pixel. Consideration of each CCD’s integration time, readout time,





detector pixel area, and quantum efficiency yielded estimates of the respective signal-to-noise ratio. This model will be used during iteration of the design concepts.
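The radiometric chain just described (irradiance on a pixel, integration time, pixel area, and quantum efficiency yielding an SNR estimate) can be sketched under simple shot-noise-plus-dark-plus-read-noise assumptions. All numerical values below are placeholders, not RITMOS parameters.

```python
# Minimal radiometric SNR sketch: irradiance on a pixel -> photons ->
# photoelectrons, then SNR = S / sqrt(S + dark + read^2). Placeholder numbers.
import math

def snr(irradiance_w_m2, pixel_area_m2, t_int_s, qe, wavelength_m,
        read_noise_e=10.0, dark_e_per_s=1.0):
    """Signal-to-noise ratio of a single detector pixel."""
    h, c = 6.626e-34, 2.998e8
    photon_energy = h * c / wavelength_m                       # J per photon
    photons = irradiance_w_m2 * pixel_area_m2 * t_int_s / photon_energy
    signal = qe * photons                                      # photoelectrons
    dark = dark_e_per_s * t_int_s
    return signal / math.sqrt(signal + dark + read_noise_e ** 2)

# Placeholder case: 1 mW/m^2 at 440 nm on a 13 um pixel, 0.1 s integration, QE 0.6
ratio = snr(1e-3, (13e-6) ** 2, 0.1, 0.6, 440e-9)
```

Doubling the integration time raises the signal faster than the noise, so the SNR increases, which is the kind of trade the model described above is meant to expose during design iteration.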

3.2 Micromirror array modeling

A micromirror is either an electrostatically driven or thermally actuated microelectromechanical system (MEMS) device. Texas Instruments (TI) has continuously revised its Digital Micromirror Device (DMD) for over two decades as the technological advancements of the IC manufacturing industry have evolved. The DMD has been used in a multitude of optical designs13. The original application for the DMD was as a spatial light modulator (SLM) for Digital Light Processing (DLP®) televisions and projectors14. Each individual micromirror tilts along its diagonal to direct the light from a metal-halide or mercury arc lamp source to either a light trap or a projection lens, as demonstrated in images (a) and (b) of Figure 6, respectively. Black, white, and intermediate grey levels across the imaging field are produced by temporally modulating the length of time that each mirror is in its “on” state, with the resulting grey levels integrated by the human visual system. There is also a finite stabilization time on the order of 18 µs for both on and off transitions. In most projector applications the incoming wavefront is collimated. (for color image see electronic version of paper)

Figure 6. DMD Modeling Showing a Collimated Incoming Wavefront (Green Rays) and Specularly Reflected Output (Red Rays)

(a) The “Off” state in the DMD projector application directing the light towards a light trap. (b) The “On” state in the DMD projector application directing the light towards the projection optics. (c) The “imaging” switch position in the DMD MOS application directing the light towards the imaging channel. (d) The “spectrometry” switch position in the DMD MOS application directing the light towards the spectrometer channel.
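The temporal grey-level modulation described above amounts to pulse-width modulation of each mirror. A minimal sketch follows, with the ~18 µs stabilization time taken as a coarse bound on the shortest usable pulse; the frame times used are illustrative assumptions, since the text does not state projector frame timing.

```python
# Sketch of DMD grey-level generation by pulse-width modulation.
SETTLE_S = 18e-6  # mirror stabilization time from the text

def grey_level(on_time_s, frame_time_s):
    """Perceived grey level is the mirror's duty cycle over one frame."""
    return max(0.0, min(1.0, on_time_s / frame_time_s))

def min_grey_step(frame_time_s, settle_s=SETTLE_S):
    """Coarse lower bound on grey resolution: a pulse cannot be much
    shorter than the mirror stabilization time (simplifying assumption)."""
    return settle_s / frame_time_s

g = grey_level(1 / 120, 1 / 60)   # mirror on for half of a 60 Hz frame
step = min_grey_step(1 / 60)      # smallest representable grey increment
```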

The DMD array is located at the imaging plane of the foreoptics in the RITMOS system. The DMD is at the heart of the RITMOS system and is in many respects the limiting factor. One of the primary differences in the MOS switching application is a converging beam impinging on the micromirror array rather than a collimated wavefront. The incoming cone of rays is limited by the angle in which the mirrors can steer the wavefront to the imaging or spectrometry channel of the MOS, as demonstrated in images (c) and (d) of Figure 6. This sets a practical limit for the speed of the foreoptics because the reflected wavefront cannot overlap the incoming wavefront. However, the greater the angle the mirrors tilt,





the lower the fill factor of the micromirror array. The fill factor becomes very important once scattering is considered because as the space between the mirrors increases, more light will enter the region below the mirrors leaving the possibility of more stray light in the system. Therefore, a more complete model of the micromirrors was generated to fully model all of the phenomena associated with the stray light in the micromirror optical switch. Finally, in a non-ideal mirror such as the TI DMD, the central post or “via” that supports the mirror above the hinge acts as a severe scattering center. Thus, TI has reduced the via’s size in subsequent mirror designs to improve contrast ratios in projection systems.

The optical software package used to perform stray light modeling was Photon Engineering's FRED. FRED is a powerful optical design prototyping software package that utilizes non-sequential ray tracing. Non-sequential ray tracing is essential to modeling the scattering and diffraction effects of the multiple surfaces and materials in a DMD. Figure 7 shows a detailed 3-D model of a Texas Instruments micromirror array with the reflected wavefront from a converging beam. Detailed scatter measurements will be collected to obtain a baseline scatter model for use within the optical simulation software package to verify the accuracy of the current model. These scattering measurements will be made in a novel optical setup to extract the diffuse and specular components of reflection as a function of position within the micromirror. The most important outcome of this detailed modeling is the methodology that will be used to effectively model future micromirror designs optimized for MOS applications. Obtaining the spectral/polarized transfer function of this micromirror device is the gateway to predicting accurate performance of future MOS systems. (for color image see electronic version of paper)

Figure 7. A DMD array model used for scattering simulations. The incoming converging wavefront (not plotted) is focused onto a 3×3 group of micromirrors. A single mirror is pointed towards the spectrometry channel (green output rays). All other mirrors are pointed towards the imaging channel (red output rays).

3.3 Micromirror design concepts

The RIT Semiconductor and Microsystems Fabrication Laboratory (SMFL) developed a mechanical and optical model for the TI DLP® DMD to be used in validation of our full-system optical throughput model. A collection of new complementary metal oxide semiconductor (CMOS) process compatible micromirror devices have been designed and mechanically modeled in order to investigate the behavior of complete optical systems incorporating these devices. Imaging modes can be investigated, and image defects induced by the mirror plane can be determined before difficult and expensive device fabrication processes are begun. In Figure 8, the structure color of image (e) indicates the temperature of a thermally-driven micromirror device, while the color in all other images indicates total translation from a rest position. The micromirror device in image (d) is a candidate for mixed beam steering / Fabry-Perot etalon filtering applications. A micromirror device with very large angular, vertical, and lateral deflection capabilities is shown in image (f). Designs shown in images (b), (c), and (e) have been successfully prototyped at RIT, while those shown in images (d) and (f) are purely design concepts at this time. (for color image see electronic version of paper)



Figure 8. (a) Model of Texas Instruments DLP®-technology DMD array section

(b) Model of simple 1-axis electrostatically driven torsion spring mirror
(c) Model of 2-axis electrostatically driven torsion spring mirror supported at all four corners
(d) Model of a vertically oriented thermally or electrostatically driven spring device supported at all four corners
(e) Model of a single hinge 1-axis micromirror driven by differential thermal expansion
(f) Model of a vertically-oriented thermally driven spring supported micromirror device

4. PERFORMANCE-DRIVEN ALGORITHM

Performance-driven sensing is a process that conditions the design, employment, and, of particular interest to this study, adaptation of an instrument on exploitation results. Tracking moving vehicles within challenging environments through remote, persistent, hyperspectral imagery (HSI) data is an emerging field of research. Thus, an experiment has been conceived which applies performance-driven sensing techniques to a synthesized, adaptive, multimodal DMD-based instrument15. The goal is to maximize overall track-level performance by carefully choosing which pixels should collect HSI data at which times. This section briefly describes the motivation and fundamentals behind feature-aided tracking, and discusses modality selection as a means of real-time instrument adaptation.

4.1 Tracking techniques

In this context, tracking is the process of estimating the kinematic state of multiple, agile ground vehicles in the presence of clutter, dropped measurements, confusable vehicles, environmental occlusion, and ambiguous movement. A critical phase in the tracking process is the association of new measurements with existing tracks. The tracking system under test employs various high-level association constructs to allow for statistics-based gating, multidimensional assignment, and deferred decision-making. However, the fundamental cost ,i jC to associate a track i with a measurement j is the key, as shown in Equation 1:

$$C_{i,j} = \mu_K \tilde{C}^{K}_{i,j} + \sum_n \mu_{F_n} \tilde{C}^{F_n}_{i,j} \qquad (1)$$

Here, $\tilde{C}^{K}_{i,j}$ is a normalized kinematic cost based on the Mahalanobis distance, and the $\tilde{C}^{F_n}_{i,j}$ are likewise normalized costs based on statistical distances in an $n$-dimensional feature space. The weighting terms $\mu$ establish the relative importance of the kinematic and feature association costs. It is well known that HSI instruments provide high-saliency feature measurements for many classes of ground vehicles. Hence, an HSI feature-aided tracking system has the potential to associate measurements with tracks more accurately, and subsequently to achieve longer overall track life and higher track purity. These benefits are predicated on the availability of feature data, i.e., full spectral information, for both measurements and the track state. While some realizable instruments collect full HSI information throughout their fields of view, doing so generally incurs a design-time tradeoff, such as coarser ground sample distance or slower scan rate, that makes tracking difficult. This experiment focuses on adaptive modality selection for an instrument that collects high-rate panchromatic data for the sake of tracking but allows selected pixels to collect HSI data as required.
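As a concrete illustration, the cost of Equation 1 can be sketched in Python. The particular normalizations below, a squashed Mahalanobis distance for the kinematic term and a spectral-angle distance for the feature term, are illustrative assumptions; the paper does not specify which normalized distances the tracker actually uses.

```python
import numpy as np

def kinematic_cost(z, z_pred, S):
    """Normalized kinematic cost from the squared Mahalanobis distance
    between measurement z and a track's predicted measurement z_pred,
    with innovation covariance S. The d2/(1+d2) squashing to [0, 1) is
    an illustrative normalization, not the paper's."""
    nu = z - z_pred  # innovation (residual)
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 / (1.0 + d2)

def spectral_angle_cost(a, b):
    """Feature cost from the spectral angle between two spectra,
    normalized to [0, 1]. The spectral angle mapper is one common HSI
    feature distance; the paper leaves the choice open."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / (np.pi / 2)

def association_cost(c_kin, feat_costs, mu_kin, mu_feats):
    """Equation 1: C_ij = mu_K * C~K_ij + sum_n mu_Fn * C~Fn_ij."""
    return mu_kin * c_kin + sum(m * c for m, c in zip(mu_feats, feat_costs))

# Identical spectra contribute zero feature cost, so the total reduces
# to the weighted kinematic term:
s = np.array([0.2, 0.5, 0.7, 0.4])
c_k = kinematic_cost(np.array([10.5, 9.8]), np.array([10.0, 10.0]), np.eye(2))
print(association_cost(c_k, [spectral_angle_cost(s, s)], 0.6, [0.4]))
```

With normalized component costs, the weights $\mu$ directly control how much a kinematic miss can be offset by a strong spectral match, which is the mechanism that lets HSI features disambiguate closely spaced vehicles.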

4.2 Modality selection

The goal of spatial sampling is to determine which pixels will collect HSI data. The utility function, a linear combination of heuristic values, assigns a value to the usefulness of collecting HSI data at each pixel. Let $U_{ij}(t)$ represent the utility of obtaining HSI data at the $ij$th pixel at time $t$, as given by Equation 2:

$$U_{ij}(t) = C^{D} U^{D}_{ij}(t) + C^{N} U^{N}_{ij}(t) + C^{A} U^{A}_{ij}(t) + C^{M} U^{M}_{ij}(t) + C^{\Im} U^{\Im}_{ij}(t) \qquad (2)$$

$$\text{s.t.}\quad U^{\Phi}_{ij}(t) \in [0,1],\quad \Phi \in \{D, N, A, M, \Im\},\quad \sum_{\Phi} C^{\Phi} = 1,\quad C^{\Phi} \geq 0 \;\; \forall\, \Phi$$

The values of $C^{\Phi}$ are the relative importance, or weighting, of the different utility components, which are defined as:

$U^{D}_{ij}(t)$: Default value that every target of interest receives, which gradually decreases toward 0 as we consider pixels farther from the predicted location of the target track.

$U^{N}_{ij}(t)$: New-model utility, a function of the appearance of new or reacquired targets that need to be sampled in order to build a target feature model.

$U^{A}_{ij}(t)$: Association utility, defined for closely spaced targets, where the track state and the related uncertainty provide a measure of association ambiguity.

$U^{M}_{ij}(t)$: Missed-measurement utility, a function of the number of missed detections for the kinematic tracker due to occlusion or shadow.

$U^{\Im}_{ij}(t)$: Model age, a function of the time since the last spectral model measurement was incorporated.

Additional constraints are applied to the modality selection algorithm to accommodate instrument limitations, for instance, preventing spectral/spatial overlap on the spectroscopy array due to conflicting pixel HSI requests. As target tracking scenarios increase in complexity and sensor resources are stretched thin (e.g., there are far fewer opportunities to collect HSI data than requirements to do so), it becomes increasingly important to have an optimal, real-time approximation to the utility function $U_{ij}(t)$. A genetic algorithm approach has been applied [16], recovering values for the weights $C^{\Phi}$ from representative training data.
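A minimal sketch of Equation 2 and a pixel-selection step follows. The weight values are hypothetical, and the greedy top-K rule stands in for the genetic-algorithm-trained approximation described above; both are assumptions for illustration only.

```python
import numpy as np

# Hypothetical weights C^Phi: nonnegative and summing to 1 (Eq. 2).
# "T" stands in for the model-age term written with the script-I symbol.
WEIGHTS = {"D": 0.3, "N": 0.25, "A": 0.2, "M": 0.15, "T": 0.1}

def pixel_utility(components):
    """U_ij(t) = sum over Phi of C^Phi * U^Phi_ij(t), each U^Phi in [0, 1]."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

def select_pixels(utility_map, budget):
    """Greedy stand-in for modality selection: flag the `budget`
    highest-utility pixels for HSI collection this frame."""
    flat = np.argsort(utility_map, axis=None)[::-1][:budget]
    return [tuple(np.unravel_index(i, utility_map.shape)) for i in flat]

# Two pixels request HSI on a small 4x4 detector section:
u = np.zeros((4, 4))
u[1, 2] = pixel_utility({"D": 1, "N": 1, "A": 0, "M": 0, "T": 0})  # new target near prediction
u[3, 0] = pixel_utility({"D": 1, "N": 0, "A": 0, "M": 0, "T": 1})  # stale spectral model
print(select_pixels(u, budget=2))  # the two nonzero-utility pixels rank first
```

In a real implementation the instrument constraints noted above (e.g., no spectral/spatial overlap on the spectroscopy array) would be enforced as a filter on the selected pixel set before micromirrors are tilted.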

5. SUMMARY AND FUTURE WORK

5.1 Work accomplished to date

This paper summarizes the basic research performed by the authors in the first year of a three-year AFOSR DCT grant. Vehicle movement has been added to a hyperspectral DIRSIG scene, producing video frames as seen by a static sensor platform. Imagery and spectra were collected on the RITMOS to support its optical and radiometric modeling efforts. Initial scattering models of micromirror arrays were completed, and novel micromirror design concepts have been studied and modeled. Initial tests of a performance-driven algorithm on direct DIRSIG video frames have demonstrated HSI feature-aided tracking performance.


5.2 Dynamic scene modeling future work

Future iterations of DIRSIG-generated video data sets will have a number of enhancements. Near-term considerations include, as alluded to in Section 2.1, the addition of BRDF parameters to the vehicle paint spectra, the introduction of a moving sensor platform, and the incorporation of video artifacts, such as dropped frames and MPEG compression, that often arise in a real-world downlinked video scenario.
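One of the simpler artifacts mentioned above, downlink frame drops, could be injected into a simulated video sequence along these lines (a sketch under the assumption of independent, identically distributed drops; real downlinks tend to drop frames in bursts):

```python
import random

def drop_frames(frames, p_drop=0.05, seed=None):
    """Simulate a lossy downlink by independently removing each frame
    with probability p_drop; surviving frames keep their order."""
    rng = random.Random(seed)
    return [f for f in frames if rng.random() >= p_drop]

# About 5% of a 300-frame sequence is discarded on average:
survivors = drop_frames(list(range(300)), p_drop=0.05, seed=1)
print(len(survivors))
```

A burst-loss model (e.g., a two-state Gilbert-Elliott channel) would be a natural refinement for testing tracker robustness to longer gaps.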

A long-term goal is to simulate a video product generated by a suite of polarization-sensitive sensors. Image products generated from such a sensor, such as Degree and Angle of Polarization, can add a new dimension of discriminability to the tracking algorithm by keying on vehicle characteristics such as surface roughness and orientation. A framework for simulating polarimetric data has been established for DIRSIG [17], and the model has been shown to accurately reproduce a nominal polarized scene [18]. However, before such a data set can be generated, polarized BRDF (pBRDF) models must be attributed to both the static and dynamic scene content. Such models are currently being attributed to the static elements of MegaScene 1, and nominal data sets are being generated. Once the entire scene has been attributed, and once an appropriately diverse array of models has been identified and attributed to the vehicles, polarized video products will be generated and tested.
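For reference, the two image products named above follow directly from the first three Stokes parameters via their standard definitions (the function and variable names below are ours, not the paper's):

```python
import numpy as np

def dolp(s0, s1, s2):
    """Degree of Linear Polarization: sqrt(S1^2 + S2^2) / S0."""
    return np.sqrt(s1**2 + s2**2) / s0

def aop(s1, s2):
    """Angle of Polarization in radians: 0.5 * atan2(S2, S1)."""
    return 0.5 * np.arctan2(s2, s1)

# Fully horizontally polarized light, S = (1, 1, 0):
print(dolp(1.0, 1.0, 0.0))            # 1.0
print(np.degrees(aop(1.0, 0.0)))      # 0.0 degrees
```

Applied per-pixel to simulated Stokes imagery, these products yield the surface-roughness and orientation cues that the tracking algorithm would exploit.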

5.3 Adaptive sensor modeling future work

The fidelity of the adaptive sensor model will be vastly increased over the baseline RITMOS design. Modifications to the foreoptics model will allow enhanced field-of-view and zooming capabilities. Multiple transmission gratings (e.g., on a selectable turret) and calibration lamps with appropriate spectral emission lines will be added to extend the wavelength region to 0.4-2.5 µm. Arrays of novel micromirror devices will be simulated within the adaptive sensor model to reduce scattering and increase contrast. Polarization filters will contribute another sensing modality.

5.4 Performance-driven algorithm future work

Future performance-driven algorithm work will include refinement of the track feature model, particularly as it relates to the utility function element $U^{N}_{ij}(t)$. Integration with the adaptive sensor model will allow for an end-to-end demonstration of multimodal target tracking using a dynamic video scene simulated by DIRSIG and SUMO.

6. ACKNOWLEDGEMENTS

This material is based on research sponsored by the Air Force Office of Scientific Research (AFOSR) under agreement number FA9550-08-1-0028 (AFOSR-BAA-2007-08). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

Scene simulation was accomplished using DIRSIG version 4.2.2, developed at RIT (http://www.dirsig.org). Traffic simulation was accomplished using SUMO version 0.10.1, developed by the German Aerospace Centre (http://sumo.sourceforge.net/). Optical modeling of RITMOS components was accomplished using OSLO Premium Edition revision 6.4.6, donated by Lambda Research Corporation (http://www.lambdares.com) for thesis research. FRED Optimum version 7.10, produced by Photon Engineering LLC (http://www.photonengr.com) was used to model the micromirror arrays and simulate scattering. COMSOL Multiphysics®, developed by COMSOL AB (http://www.comsol.com) was used to model the various micromirror design concepts.

DISCLAIMER

The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, the Department of Defense, or the U.S. Government.


REFERENCES

[1] Air Force Office of Scientific Research, “Broad Agency Announcement (BAA), Discovery Challenge Thrusts (DCTs),” AFOSR-BAA-2007-08, Arlington, Virginia (2007).

[2] Schott, J. R., Brown, S. D., Raqueño, R. V., Gross, H. N. and Robinson, G., “An advanced synthetic image generation model and its application to multi/hyperspectral algorithm development,” Canadian Journal of Remote Sensing 25(2), 99-111 (1999).

[3] Ientilucci, E. J. and Brown, S. D., “Advances in wide area hyperspectral image simulation,” Proc. SPIE 5075, 110–121 (2003).

[4] Skinner, C. J., Bergeron, L. E., Schultz, A. B., MacKenty, J. W., Storrs, A., Freudling, W., Axon, D., Bushouse, H., Caizetti, D., Colina, L., Daou, D., Gilmore, D., Holfeltz, S. T., Najita, J., Noll, K., Ritchie, C., Sparks, W. B. and Suchkov, A., “On-orbit properties of the NICMOS detectors on HST,” Proc. SPIE 3354, 2-13 (1998).

[5] Szeto, K., Stilburn, J. R., Bond, T., Roberts, S. C., Sebesta, J. and Saddlemyer, L. K., “Fabrication of Narrow-Slit Masks for the Gemini Multi-Object Spectrograph,” Proc. SPIE 2871, 1262-1271 (1997).

[6] Barden, S. C., Armandroff, T., Muller, G., Rudeen, A. C., Lewis, J. and Groves, L., “Modifying Hydra for the WIYN telescope – an optimum telescope, fiber MOS combination,” Proc. SPIE 2198, 87-97 (1994).

[7] MacKenty, J. W., Greenhouse, M. A., Green, R. F., Sparr, L. M., Ohl, R. G. and Winsor, R. S., “IRMOS: An Infrared Multi-Object Spectrometer using a MEMS micro-mirror array,” Proc. SPIE 4841, 953-961 (2003).

[8] Meyer, R. D., Kearney, K. J., Ninkov, Z., Cotton, C. T., Hammond, P. and Statt, B. D., “RITMOS: a micromirror-based multi-object spectrometer,” Proc. SPIE 5492, 200-219 (2004).

[9] Lide, D. R., [Handbook of Chemistry and Physics], CRC Press, Boca Raton, Florida, 73 ed. (1992).

[10] Hecht, E., [Optics], Addison-Wesley, San Francisco, 4 ed. (2002).

[11] Smith, W. J., [Modern Lens Design], McGraw Hill, New York, 2 ed. (2005).

[12] Schott, J. R., [Remote Sensing: The Image Chain Approach], Oxford University Press, New York, 2 ed. (2007).

[13] Dudley, D., Duncan, W. M. and Slaughter, J., “Emerging digital micromirror device (DMD) applications,” Proc. SPIE 4985, 14-25 (2003).

[14] Hornbeck, L., “Projection displays and MEMS: timely convergence for a bright future,” Proc. SPIE 2639, 2 (1995).

[15] Rice, A. C., Vasquez, J. R., Kerekes, J. P. and Mendenhall, M. J., “Persistent Hyperspectral Adaptive Multi-modal Feature-Aided Tracking,” Proc. SPIE 7334 (this proceedings) (2009).

[16] Secrest, B. R. and Vasquez, J. R., “A genetic algorithm approach to optimal spatial sampling of hyperspectral data for target tracking,” Proc. SPIE 6964, 69640I (2008).

[17] Devaraj, C., Brown, S., Messinger, D., Goodenough, A. and Pogorzala, D., “A framework for polarized radiance signature prediction for natural scenes,” Proc. SPIE 6565, 65650Y (2007).

[18] Pogorzala, D., Brown, S., Messinger, D. and Devaraj, C., “Recreation of a nominal polarimetric scene using synthetic modeling tools,” Proc. SPIE 6565, 65650Z (2007).
