

TEXTURECAM: AUTONOMOUS IMAGE ANALYSIS FOR ASTROBIOLOGY SURVEY
David R. Thompson1,2, Abigail Allwood2, Dmitriy Bekker2, Nathalie A. Cabrol3, Tara Estlin2, Thomas Fuchs4, Kiri L. Wagstaff2; [email protected]; 2Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109, USA; 3SETI Institute, Mountain View, CA 94043, USA; 4California Institute of Technology, Pasadena, CA 91125

Introduction: The TextureCam concept is a “smart camera” that aims to improve scientific return by increasing science autonomy and observation capabilities both when the spacecraft is stationary and when it is traversing. This basic onboard science understanding can summarize terrain encountered during travel [1] and direct autonomous instrument deployment to targets of opportunity [2]. Such technology would assist Mars sample caching or return missions, where the spacecraft must survey vast areas to identify potential sampling sites and habitat indicators.

The instrument will use texture channels to differentiate and map habitat-relevant surfaces. Here we use the term texture not in the geologic sense of formal physical properties, but rather in the computer vision sense, to mean statistical patterns of image pixels. These numerical signatures can in turn distinguish geologically relevant elements such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics, and differential outcrop weathering. Similar algorithms can perform microstructure analysis and sedimentology using microscopic images. On the scale of meters, surfaces and features can be recognized to summarize a rover traverse and draft surficial maps for downlink.
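As a minimal illustration (not part of the abstract) of what "statistical patterns of image pixels" can mean in code, the following Python sketch derives simple per-pixel texture channels from a grayscale image; the function name, window size, and choice of statistics are our own assumptions, not the TextureCam feature set.

import numpy as np
from scipy.ndimage import uniform_filter

def texture_channels(img, window=9):
    """Return per-pixel local mean, local variance, and gradient magnitude."""
    mean = uniform_filter(img, size=window)           # local average intensity
    mean_sq = uniform_filter(img * img, size=window)
    variance = mean_sq - mean * mean                  # local contrast / roughness
    gy, gx = np.gradient(img)                         # intensity gradients
    grad_mag = np.sqrt(gx * gx + gy * gy)             # edge / lamination strength
    return np.dstack([mean, variance, grad_mag])      # H x W x 3 feature stack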

Reliable image classification in field conditions is difficult due to factors like variable lighting, surface conditions, and sediment deposition. Future development aims for reliable recognition of basic geological elements. Rad-hardened FPGA processing will instantiate these general-purpose texture analysis algorithms as part of an integrated, flight-relevant prototype.

Experimental setup: An initial test demonstrates automatic recognition of stromatoform structures in outcrop. These surfaces have distinctive laminar textures that may indicate previous biogenic activity [3] and would be valuable targets for followup investigation and human review. While such features are unlikely to be found exposed on the Martian surface, they provide a controlled test to guide initial development and evaluation. We use a machine learning approach, training the system on example pixel labels supplied by the designer. This yields a statistical model of pixel relationships that can classify new surfaces (Figure 1).

Stromatolite images were acquired from the Pilbara formation [3]. We consider three outcrops of laminar structures with conical and wavy morphologies. We also include a scene containing much finer-scale localized egg carton laminations. These all show a common layering structure that we aim to distinguish automatically from the surrounding surface. We hypothesize that the statistical properties of the image textures generalize across the three stromatolite types.

Figure 1: Top: Original image. Bottom: Automatic surface map showing the probability of belonging to the stromatolite class. Green, yellow, and orange indicate uncertain classifications.

Method: All image pixels were labeled by hand as belonging to one of three classes: (1) stromatoform, (2) non-stromatoform, or (3) ambiguous/unclassified. A high-pass filter mitigates most differences in shadow and illumination and reveals small-scale surface texture (Figure 2, right). We use a single color channel, and rotate the training image and labels by 15-degree increments to provide generality and rotational invariance.
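A minimal sketch of this preprocessing, assuming a grayscale image as a 2-D NumPy float array; the Gaussian width and function names are illustrative choices, not the authors' implementation:

import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def high_pass(img, sigma=5.0):
    """Subtract a Gaussian-blurred copy, leaving small-scale surface texture."""
    return img - gaussian_filter(img, sigma)

def augment_rotations(img, labels, step_deg=15):
    """Yield (image, label) pairs rotated in 15-degree increments."""
    for angle in range(0, 360, step_deg):
        # order=0 keeps label values discrete; reshape=False preserves size
        yield (rotate(img, angle, reshape=False),
               rotate(labels, angle, reshape=False, order=0))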

We use this training data to construct a random forest pixel classification system [4]. It combines the results of multiple independent decision tree classifiers. Each tree is a hierarchical sequence of tests applied to local image values in the neighborhood of the classified pixel. Each test considers a numerical attribute such as (1) the absolute intensity of a nearby location, at some offset relative to the target pixel; (2) the difference in intensity between two nearby locations; or (3) the absolute difference in intensity between two nearby locations. A threshold on the attribute determines whether the pixel will propagate left or right. Such tree sequences can form complex decisions based on local edges, discontinuities, and other characteristic microtexture. Figure 3 illustrates the concept for classification of a pixel marked X: it propagates down a simple tree of binary decisions, each a test on values within a local neighborhood represented by the red rectangle. The population of pixels arriving at each final node of the decision tree gives a distribution over classes for pixels that follow this path.

Figure 2: Left: Training image, a conical stromatolite. Right: A high-pass filter reduces variability due to shadows and illumination.

Figure 3: Decision tree classification. We aim to classify the pixel marked with an 'X'; it propagates down a tree of left/right decisions to arrive at a classification label. The decision at each intermediate node applies a threshold test to one or more pixel values from its local neighborhood.
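The three attribute types admit a compact implementation. Below is a hypothetical Python sketch of a single node test; the function name, offset encoding, and omitted border handling are our own assumptions, not the authors' code.

import numpy as np

def node_test(img, r, c, kind, off1, off2=(0, 0), thresh=0.0):
    """Decide whether pixel (r, c) propagates left (True) or right (False).

    kind selects one of the three attribute types described above; offsets
    are (row, col) displacements within the pixel's local neighborhood.
    A real implementation would also handle image borders.
    """
    a = float(img[r + off1[0], c + off1[1]])          # intensity at offset 1
    if kind == "intensity":                           # (1) absolute intensity
        value = a
    else:
        b = float(img[r + off2[0], c + off2[1]])      # intensity at offset 2
        if kind == "difference":                      # (2) signed difference
            value = a - b
        else:                                         # (3) absolute difference
            value = abs(a - b)
    return value < thresh

A tree is then just a chain of such tests, and each leaf stores the empirical class distribution of the training pixels that reached it.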

The training procedure grows each decision tree from a single root node. At each new decision node, it searches a random subset of possible tests to find the most informative one as determined by an expected information gain metric [5]. This score captures the uncertainty in classes of the pixels in the resulting left and right subpopulations, weighted by size. We grow all trees for a fixed number of iterations. The process takes just a few minutes on a modern consumer-grade CPU.

Figure 4: Test scene ROC performance (true positive rate versus false positive rate for Scenes A and B).
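For concreteness, here is a hypothetical sketch of such a split score in Python: parent entropy minus the size-weighted entropy of the left and right subpopulations. The function names are ours; the abstract does not specify the exact formulation.

import numpy as np

def entropy(labels):
    """Shannon entropy of a 1-D array of integer class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, goes_left):
    """Score a candidate test from the boolean left/right split it induces."""
    left, right = labels[goes_left], labels[~goes_left]
    if len(left) == 0 or len(right) == 0:
        return 0.0                                    # degenerate split
    w_left = len(left) / len(labels)
    return entropy(labels) - (w_left * entropy(left)
                              + (1 - w_left) * entropy(right))

During training, each node evaluates a random subset of candidate tests and keeps the one with the highest gain.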

Results: We evaluate performance on each image by training on the two other images (leave-one-out cross-validation). Figure 4 shows Receiver Operating Characteristic (ROC) curves for the pixels in two scenes: scene A, with wavy morphology, and scene B, which introduces weathering and variable depth of field. The image of a conical stromatolite (Figure 2) has insufficient stromatoform pixels for a confident ROC estimate, so we omit it from the test set. ROC performance suggests that the laminar structures are recognizable across images, with reasonable performance given the limited training data.
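As an illustration of how such curves can be computed from the classifier output (not the authors' evaluation code), the following sketch sweeps a threshold over the per-pixel stromatolite probabilities, assuming flat NumPy arrays of probabilities and boolean ground-truth labels:

import numpy as np

def roc_curve(probs, truth, n_thresholds=100):
    """Return (false positive rates, true positive rates) over a threshold sweep."""
    tpr, fpr = [], []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        predicted = probs >= t                        # pixels called stromatoform
        tpr.append(np.sum(predicted & truth) / np.sum(truth))
        fpr.append(np.sum(predicted & ~truth) / np.sum(~truth))
    return np.array(fpr), np.array(tpr)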

Figure 1 shows the original image for scene B along with the final surficial classification probabilities. Values near P = 0.5 indicate ambiguous classifications. The classifier learns to favor areas with multiple narrow layers; aligned fractures in the non-stromatolite surface are a main source of errors. One solution could be a larger training set that includes non-stromatolite laminae. A complementary solution is a subsequent segmentation step to clean the map into contiguous areas and eliminate small clusters of outlier pixels without support from neighboring areas [5].
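A minimal sketch of that cleanup step, assuming SciPy's connected-component labeling and an illustrative minimum region size:

import numpy as np
from scipy.ndimage import label

def clean_map(prob_map, thresh=0.5, min_pixels=200):
    """Keep only contiguous stromatolite regions of at least min_pixels."""
    mask = prob_map > thresh
    components, n = label(mask)                       # connected regions
    for i in range(1, n + 1):
        region = components == i
        if region.sum() < min_pixels:
            mask[region] = False                      # drop outlier clusters
    return mask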

Acknowledgements: A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012 California Institute of Technology. All Rights Reserved; U.S. Government Support Acknowledged. The TextureCam project is supported by the NASA Astrobiology Science and Technology Instrument Development program (NNH10ZDA001N-ASTID).

References: [1] D. Thompson, et al. (2011) Journal of Field Robotics 28(4):542. [2] T. Estlin, et al. (to appear) ACM Trans. on Intelligent Systems and Technology. [3] A. Allwood, et al. (2006) Nature 441(7094):714. [4] L. Breiman (2001) Machine Learning 45(1):5. [5] J. Shotton, et al. (2008) IEEE Conf. on Computer Vision and Pattern Recognition, 1–8.
