
Spatial Resolution Imagery Requirements for Identifying Structure Damage in a Hurricane Disaster: A Cognitive Approach

Sarah E. Battersby, Michael E. Hodgson, and Jiayu Wang

Abstract

In disaster response, timely collection and exploitation of remotely sensed imagery is of increasing importance. Image exploitation approaches during the immediate (first few days) aftermath of a disaster are predominantly through visual analysis rather than automated classification methods. While the temporal needs for obtaining the imagery are fairly clear (within a one- to three-day window), there have only been educated guesses about the spatial resolution requirements necessary for the imagery for visual analysis. In this paper, we report results from an empirical study to identify the coarsest spatial resolution that is adequate for tasks required immediately following a major disaster. The study was conducted using cognitive science experimental methods and evaluated the performance of individuals with varying image interpretation skills in the task of mapping hurricane-related residential structural damage. Through this study, we found 1.5 m to be the threshold for the coarsest spatial resolution imagery that can successfully be used for this task. The results of the study are discussed in terms of the likelihood of collecting this type of imagery within the temporal window required for emergency management operations.

Introduction

The collection and utilization of airborne/satellite imagery for disaster response has become expected and necessary. The response phase of the disaster cycle is the time period between impact (e.g., hurricane landfall) and when human lives can be saved (e.g., those with serious injuries or without water). This period is typically ~72 hours after the event. Remotely sensed imagery is an important asset for mapping disaster extent and damaged lifelines, buildings, or flood conditions. The management of the disaster event then moves into the recovery phase, where shelter, infrastructure, and lifelines are rebuilt. Numerous airborne and satellite-based remote sensing collections occur after US disaster events. Whether the imagery is collected by an airborne or satellite platform is not particularly relevant in disaster response. The critical issues are: can the imagery be collected quickly, processed quickly, and delivered to the managers/responders for information extraction? Each airborne collection is either flown for an agency's or state's specific needs or, in some instances, flown in the hope that a consumer will purchase the imagery (Hodgson et al., 2010). The temporal requirements for post-disaster image collections are understood by the disaster response community and have been documented in national and event-specific surveys (Hodgson et al., 2005 and 2010; Langhelm and Davis, 2002; Thomas et al., 2003). Emergency responders need such imagery within three days of the disaster event for general use in the response phase (Hodgson et al., 2005; Zhang and Kerle, 2008). For some tasks, such as identification of infrastructure damage, it has been suggested that this imagery is needed within a 24-hour window (Hodgson et al., 2010). While the temporal needs are fairly clear, spatial resolution requirements are not as well defined.

While the finest spatial resolution imagery with full color spectral bands (e.g., natural color composite or color-infrared composite) might seem the appropriate choice, a tradeoff exists between image spatial and spectral resolution and coverage area collection time. The finer the spatial resolution, the more detail in the imagery. However, finer resolution imagery also requires considerably more time to collect and process for use. For instance, an aerial platform with a fixed lens would need to fly at lower altitudes for finer spatial resolution, thus covering a smaller geographic area in each frame (or swath). The result is that a larger number of flightlines (more flight time) at closer separations would be required to cover the same geographic area. Following a major disaster, the time necessary to collect and process fine resolution imagery may also be compounded by the challenges presented by the event's impact (e.g., closed airports for airborne collections), pushing delivery of imagery well beyond the one- to three-day window necessary for use in emergency response. For example, following Hurricane Katrina, approximately two weeks were required to obtain complete color aerial image coverage of the Louisiana-Mississippi coastal areas (Hodgson et al., 2010; Adams et al., 2006), although a local aeroservice company flew the entire Mississippi coastline with black-and-white film at approximately 0.3 m (1.0 foot) spatial resolution the day after Katrina's landfall. In this two-week time span, no fewer than five airborne image collections took place with spatial resolutions between 0.15 m and 2 m. Slightly coarser resolution (e.g., 0.5 m to 4.0 m) imagery may be collected more expediently (assuming the orbital path for the time period is acceptable) using commercial satellites. However, it is questionable whether the coarser resolution satellite imagery is sufficient to meet the immediate needs of an emergency response team (Bolus and Bruzewicz, 2002).


University of South Carolina, Department of Geography, Callcott Building, 709 Bull Street, Columbia, SC 29208 ([email protected]).

Photogrammetric Engineering & Remote Sensing, Vol. 78, No. 6, June 2012, pp. 625–635.
0099-1112/12/7806–625/$3.00/0
© 2012 American Society for Photogrammetry and Remote Sensing


Hazard practitioners have suggested that for disaster emergency response, imagery with a resolution between 0.25 and 2 meters should be sufficient for most interpretation tasks (Schweitzer and McLeod, 1997), i.e., a large resolution spread. Remote sensing researchers have suggested that for quantitative assessment of damage to housing, transportation routes, or utilities, the acceptable range of resolution is from 0.3 to 1 meter (Cowen and Jensen, 1998). Bolus and Bruzewicz (2002) have suggested that for identification of roof damage, an optimal pixel size for true-color imagery is 0.2 m (0.67 feet) to 0.3 m (1.0 feet). These guidelines have been established based on an evaluation of the remote sensing literature and a few authors' practical experience (Cowen and Jensen, 1998), although not through empirical testing of the imagery interpretation process. Furthermore, beyond the type of disaster impact of interest (e.g., damage to buildings, infrastructure, or forests), the appropriate spatial resolution requirement is likely sensitive to the information extraction approach (i.e., visual analysis versus automated classification). For immediate use in the state or regional emergency joint field offices (JFOs) that manage the disaster, the visual analysis approach prevails: human visual interpretation is more reliable for high spatial resolution imagery, can be easily explained, and the staff available in the JFO are more familiar with visual interpretation than with automated information extraction approaches. Controlled, empirical cognitive research to evaluate the limits of human interpretation of imagery within each of these suggested guidelines is almost nonexistent. The spatial resolution that is just "fine enough" to be adequate for the tasks, thereby minimizing collection time immediately following a major disaster, is not known.

Determination of this "fine enough" spatial resolution is best answered through a cognitive study evaluating individuals' success with interpreting imagery at different levels of resolution. It is not sufficient to simply suggest guidelines for the required resolution of imagery without testing human subjects to identify whether the imagery can really be used successfully. Numerous classical cognitive works (e.g., Lewicki et al., 1988) have demonstrated that verbal articulation of an answer may be substantially different from the answer resulting from cognitive processing, and that the cognitive process is context dependent (Wilson, 2002). For example, in the context of image interpretation, a visual analyst may say "I need 0.25 m spatial resolution" when in fact the analyst could do just as well if tasked to physically interpret an image with 1.0 m resolution. The relevant questions in this research are:

• Is there a threshold of image resolution beyond which finer resolution imagery ceases to improve image interpretation accuracy?

• What is the coarsest resolution imagery that is sufficient for emergency management tasks that are performed post-disaster?

• Does the coarsest acceptable resolution vary depending on the skill and experience level of the interpreter (e.g., with respect to image interpretation or emergency management experience, practical experience, or academic training)?

Answers to these research questions might also depend on the radiometric resolution and spectral resolution (i.e., natural color, color-IR, panchromatic, or even single bands) of the imagery; however, we confine our study to panchromatic imagery for hurricane wind impacts. Panchromatic airborne imagery has been the predominant imagery for hurricane disasters and is still the finest resolution satellite imagery available. Also, other types of disasters (e.g., earthquake, wildfire, flooding) might damage structures in different forms (e.g., water damage, foundation-only damage).

If the intent is to ensure that useful imagery is in the hands of emergency management teams as quickly as possible, we need to find the balance between the interpretation benefit of fine spatial resolution imagery and the speed/cost of collection for coarser spatial resolution imagery. Imagery that is just "fine enough" is ideal, i.e., resulting in faster collection, smaller file sizes, and faster distribution through the Internet. In this paper we report results from a cognitive study examining the effect of spatial resolution on post-disaster image interpretation accuracy. In consultation with an experienced disaster response official, we defined target categorizations for residential building damage (i.e., no damage, partial damage, total damage). Through this research, we identified the threshold at which the use of finer resolution imagery no longer improves the interpretation process and, conversely, identified the threshold for the coarsest acceptable resolution. This is the first cognitive research study to determine image analysis accuracy with varying spatial resolution for disaster images.

Cognitive Science in Remote Sensing Research

Cognitive science research is focused on improving our understanding of the workings of the human mind, including the study of perception, learning, thinking, and remembering. Work in this area is largely interdisciplinary, drawing methods from fields such as psychology, neuroscience, and computer science, with application to a wide variety of studies of human mental processes. While cognitive science research methods are not common in the remote sensing literature, they have been widely recognized as useful for improving our understanding of the image interpretation process in remote sensing (e.g., Estes et al., 1983; Hoffman and Conway, 1989; Hodgson, 1998; Lloyd et al., 2002; Lloyd and Hodgson, 2002), and have been used extensively in related fields such as geography and cartography (Montello, 2002) and visual perception (Cavanagh, 2011). Any image interpretation by humans will necessarily involve the processes of perception, reasoning, and categorization in order to identify, extract, characterize, and use the relevant information in the image. Evaluating the challenges in the human interpretation process requires careful study of the cognitive processes involved in interpretation.

For the task of identifying residential structural damage, our focus in this article, interpretation of the imagery depends largely on a top-down process in which the interpreter relies on knowledge about the shape and form of structures and the nature of storm damage, which, in many cases, may have been recently obtained through instructions at an emergency operations center. A guided search process, such as Wolfe's (1994) model of visual search, is implicitly assumed (Hodgson, 1998; Lloyd and Hodgson, 2002) to be used to extract the relevant information from the remotely sensed imagery (e.g., the elements of image interpretation of Estes et al., 1983). Practically no research with remotely sensed imagery exists to substantiate (or refute) this assumption. Information retrieved from this search of the imagery is then characterized and categorized according to knowledge of damage categories to interpret the patterns seen in the imagery.

Success with this process of characterization and categorization for interpreting imagery relies on a number of factors related to the image and to the knowledge, experience, and goals of the interpreter. With respect to remotely sensed images, cognitive methods have been used to identify the relationship between digital texture and visual perception of texture (Hsu and Burright, 1980; Hodgson and Lloyd, 1986), the minimum window size necessary for classifying Level II urban land use (Hodgson, 1998), and to examine the nature of the visual search process (Lloyd and Hodgson, 2002), but not to quantify the specific impact of resolution on human interpretation. In the following section, we incorporate cognitive science research methods in an examination of the image interpretation process to identify the effect of spatial resolution, as well as the knowledge and experience of the interpreter, on the interpretation of structural damage.

Cognitive Design for Examining the Impact of Spatial Resolution

In this study, participants were asked to identify the level of hurricane-related damage to individual residential structures using remotely sensed images. Structural damage was classified into one of three categories: no damage, partial damage, and total damage. Written and graphical descriptions of the three damage categories (Figure 1) were provided to familiarize the participants with the nature of the imagery and task. As has been common in disasters, only vertical imagery was used, so participants were instructed to categorize the target based on damage to the roof of the structure. In addition to estimating structural damage, participants also provided a confidence rating for their categorization of the structure marked in each image. As a more subtle measure of participants' confidence, the response time of each categorization was also recorded. Response time has been demonstrated to be a sensitive measure of task difficulty (Rosch, 1978; Greene and Oliva, 2009). Easy tasks are typically accomplished quickly. As the task becomes more difficult, the response time increases. Very difficult tasks typically take a long time. As task difficulty increases to a point where the individual considers it impossible, reaction time may then decrease sharply as the individual quickly guesses. Using a random sampling design without replacement, each participant was shown an image at a random spatial resolution and damage level and asked to categorize the damage level and specify their confidence in their answer.

Methods

Participants
Previous work has found that experienced remote sensing analysts are seldom found in state-level emergency operation centers (Hodgson et al., 2010). However, most states have at least one full-time geographic information system (GIS) staff member. During the response/recovery phase of a disaster, a diverse mix of staff are typically drawn from the Federal Emergency Management Agency's (FEMA's) Disaster Assistance Employee (DAE) program, the GIS Corps, or other local/state agencies. To simulate the levels of experience and background that might be found on a typical response team assembled following a large-scale disaster, we recruited participants from three different experience groups: Novice, Hazards Management (referred to as "Hazards Experts"), and Image Interpretation ("Image Experts"). Due to the specialized nature of the last two groups, we were limited in the number of participants that we were able to recruit. Each of the categories of participants is described in more detail below.

Novice
Novice participants were identified as members of the group that may be recruited as volunteers for general GIS and image analysis tasks following a major disaster. These participants were recruited from university-level introductory/advanced GIS courses and introductory remote sensing courses. A total of 47 participants from undergraduate and graduate courses took part in the study; 12 of these participants were excluded from analysis as they did not fully complete the study. Geography (eight participants) and Civil Engineering (23) were the most common majors. At the time of the survey, all participants had completed a minimum of ten weeks of GIScience coursework. In the survey, 11 participants (31.4 percent) indicated that they had taken some coursework (e.g., one or two courses) in air photo interpretation or remote sensing; over half of the participants (54.3 percent) indicated that they had some experience viewing aerial imagery taken over a disaster area. As a whole, the Novice group averaged 0.7 years of photo interpretation experience (range: 0 to 5 years; σ = 1.2). As an incentive, participants received a small amount of course credit for their participation.


Figure 1. Example images provided to participants to reflect the three damage categories used in evaluating structures. Written descriptions of the damage criteria were also provided: (a) No damage: 25 percent or less of the rooftop area is visibly damaged; (b) Partial damage: 26 to 75 percent of the rooftop area is visibly damaged; and (c) Total damage: 76 percent or more of the rooftop area is visibly damaged.


Hazards Experts
The members of the Hazards Expert group have the practical skills that would typically be required of senior personnel in emergency management situations. A total of 14 Hazards Expert participants took part in the study. Fifty-seven percent of the participants had experience with aerial imagery taken over a disaster area, and the group averaged 3.8 years of image interpretation experience (range: 0 to 14 years; σ = 4.2). Individuals in this group were recruited personally by the authors and did not receive any compensation for participation.

Image Experts
Participants identified as Image Experts have high technical skill in the interpretation of remotely sensed imagery, though they may not have the practical hazards management skill of the Hazards Expert group. A total of 15 Image Expert participants took part in the study. Eighty-seven percent of the participants reported experience with aerial imagery taken over a disaster area, and the group averaged more than twelve years of image interpretation experience (a conservative figure, since several of the participants selected the survey option of "more than 20 years" of experience). Individuals in this group were recruited personally by the authors and did not receive any compensation for participation.

Materials
This survey was conducted using a web-based approach. The rationale for using a web survey, rather than conducting the survey in a laboratory setting, was to reach a larger audience of experienced image interpreters and hazards managers. In the survey, each participant was shown forty-five unique remotely sensed images of residential structures in a disaster environment. In the center of each image, a single structure was unobtrusively but clearly marked as the target structure for which participants were to categorize the damage level and their confidence in the assessment of damage (Figure 2).

The images used for this study each covered a ground area of 174 m × 174 m. They were created by extracting subsets of a larger dataset of one-foot resolution panchromatic images collected over the Mississippi coast on the day following Hurricane Katrina's landfall in Southern Mississippi. The radiometric resolution for this imagery was 8-bit, based on the dynamic range of the digital data as a continuous range of brightness values. This window size was selected because (a) it is larger than the minimum window size for accurately classifying fundamental Level II urban land cover classes from panchromatic imagery (Hodgson, 1998), and (b) it was the largest window possible to guarantee that participants would not have to do any vertical or horizontal scrolling to examine the image (based on a conservative estimate of participants' monitor resolution). The images were selected to contain single- or duplex-family residential structures of similar sizes and to cover the range of damage typically seen in hurricane-related imagery. This strategy minimizes potential bias from any specific level of damage being easier to interpret than others, or from the damage levels being easier or more challenging to interpret with certain types or sizes of structure.
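As a rough, back-of-the-envelope illustration (ours, not a calculation reported in the paper) of why this window size avoids scrolling at the native resolution:

```python
# Approximate on-screen footprint of a 174 m x 174 m window at ~0.3 m GSD.
ground_extent_m = 174.0   # window side length on the ground
native_gsd_m = 0.3        # approximately one-foot ground sample distance

pixels_per_side = ground_extent_m / native_gsd_m
print(f"Window size: {pixels_per_side:.0f} x {pixels_per_side:.0f} pixels")
# ~580 x 580 pixels, which fits without scrolling on a conservatively
# estimated participant monitor (e.g., 1024 x 768) with room for survey controls.
```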

Images shown to participants were divided equally among the three damage levels (i.e., 15 images in each of the no damage, partial damage, and total damage categories). Quantification of damage is often based on the cost of rebuilding a structure to its pre-damage condition (FEMA, 2003). However, with only vertical imagery and no information about repair cost to use for assessment, we could not use this method to classify damage in our images. Such an approach would require specific housing value information for each structure with ground investigation of sustained damage. For the emergency response phase of the disaster cycle, this type of specific ground-based information is generally not available. In the response phase, a good estimate of damage for geographic areas is generally sufficient. For our analysis, we focused on the ratio of damaged to undamaged roof area for each structure. The condition of the roof is a good indicator of overall damage to a structure, as confirmed by Womble et al. (2008); discussions with a joint field office (JFO) image analyst in Mississippi after Hurricane Katrina also confirm this. To quantify damage for participants, we set the parameters for damage as follows (a simple mapping of these thresholds is sketched after the list):

• No damage: 25 percent or less of the rooftop area is visibly damaged;

• Partial damage: 26 to 75 percent of the rooftop area is visibly damaged;

• Total damage: 76 percent or more of the rooftop area is visibly damaged.
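To make the thresholds concrete, a minimal sketch (ours, not the authors' survey code) of mapping a visually estimated damaged-roof fraction to the three categories:

```python
def damage_category(damaged_fraction: float) -> str:
    """Map the visually estimated fraction of damaged rooftop area
    (0.0 to 1.0) to the study's three damage categories."""
    if not 0.0 <= damaged_fraction <= 1.0:
        raise ValueError("damaged_fraction must be between 0 and 1")
    if damaged_fraction <= 0.25:
        return "No damage"
    if damaged_fraction <= 0.75:
        return "Partial damage"
    return "Total damage"

# Example: a roof judged to be about 40 percent damaged falls in the middle class.
print(damage_category(0.40))  # -> "Partial damage"
```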


Figure 2. Example of a "no damage" structure in the panchromatic imagery used in the survey at three different spatial resolutions: (a) 0.35 m, (b) 2.0 m, and (c) 4.0 m. In each image participants were asked to identify the damage level (i.e., no damage, partial damage, total damage) of the structure marked in the center of the image.


The categories are consistent with FEMA's damage classification guidelines and roughly coincide with the classifications of Limited Damage, Moderate Damage, and Extensive or Catastrophic Damage in the new GeoCONOPS guidance manual (Department of Homeland Security, 2010). To divide our images into categories based on damage level, a senior expert remote sensing scientist, with a joint appointment at the Department of Homeland Security and NASA and with more than 23 years of imagery interpretation experience in hurricane disaster applications (e.g., Hurricanes Andrew, Katrina, and Ike), was recruited to examine the images and provide the definitive categorization against which the participants' categorizations were compared. This scientist also lived in the Hurricane Katrina disaster impact area for which the imagery was collected and is thus intimately familiar with the impact signature for this geography (i.e., his neighborhood).

Spatial Resolution
Since we were primarily interested in the impact of spatial resolution on an individual's ability to accurately classify damage in remotely sensed images, we re-sampled each of the study images (from an original 0.3 m) into seven different levels of resolution: 0.35 m, 0.5 m, 0.75 m, 1.0 m, 1.5 m, 2.0 m, and 4.0 m. These seven levels were selected based on suggestions for minimum spatial resolution requirements for identifying housing damage (Cowen and Jensen, 1998), for use in emergency management following disasters (Schweitzer and McLeod, 1997), and on the native resolution of common types of satellite imagery that may be collected post-disaster. The re-sampling was modeled in such a way that the resulting image approximated imagery collected at each level of resolution. To conduct the resampling, we used a model of the modulation transfer function (MTF) that approximated imagery that would have been collected at different spatial resolutions (e.g., flying heights or orbital altitudes) with the same sensor. Our resampling model uses a 2D Gaussian form (Haralick and Shapiro, 1992) with σ = 0.495 to compute a reflectance value for a coarser resolution at each pixel G(x,y):

G(x,y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)).

We did not attempt to model the additional atmospheric attenuation and somewhat lower contrast associated with higher flying altitudes. Thus, this model approximates, albeit somewhat imperfectly, the ground-projected instantaneous field of view from the same imaging sensor (e.g., a pixel in an aerial image) at higher altitudes.
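As an illustration of this kind of resampling (a simplified sketch, not the authors' exact procedure; the relationship between σ and the resampling factor here is an assumption), one might approximate a coarser collection by convolving with a Gaussian point-spread function and then decimating:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_coarser_resolution(image: np.ndarray, native_gsd: float,
                                target_gsd: float, sigma_pixels: float = 0.495) -> np.ndarray:
    """Approximate imagery collected at a coarser ground sample distance (GSD)
    by Gaussian (MTF-like) smoothing followed by decimation.

    image        : 2D array of brightness values at the native GSD
    native_gsd   : native ground sample distance in meters (e.g., 0.3)
    target_gsd   : simulated coarser GSD in meters (e.g., 1.5)
    sigma_pixels : Gaussian sigma expressed in output pixels (assumed scaling)
    """
    scale = target_gsd / native_gsd                  # e.g., 1.5 / 0.3 = 5x coarser
    # Smooth with a Gaussian whose width is scaled to the coarser pixel size.
    blurred = gaussian_filter(image.astype(float), sigma=sigma_pixels * scale)
    # Decimate by keeping every `scale`-th native pixel.
    step = int(round(scale))
    return blurred[::step, ::step]

# Example: degrade a synthetic 580 x 580 "0.3 m" image to roughly 1.5 m.
rng = np.random.default_rng(0)
fine = rng.integers(0, 256, size=(580, 580)).astype(float)
coarse = simulate_coarser_resolution(fine, native_gsd=0.3, target_gsd=1.5)
print(coarse.shape)  # ~ (116, 116)
```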

Survey Procedure
Recognizing that interpretation ability is related to the knowledge, other abilities, and education of the interpreter, we started the survey by collecting background information about each participant. This included questions about the participant's academic major, age, years of imagery interpretation experience, experience with post-disaster aerial imagery, related coursework, and practical experience with GIS, air photo interpretation, remote sensing, hazards, and/or emergency management. Following the collection of background information, participants were provided written and graphical instructions explaining the study task and damage levels. A short training session to practice categorizing similar imagery into the three damage categories was given. Feedback on the participant's categorization accuracy for each training image was provided throughout the training. The questions used in the training module were formatted in the same way as the questions in the study to help familiarize participants with the task and the interface.

Following the training, participants were again presented with the instructions. In the instructions, participants were informed that both speed and accuracy were important and that they should work as quickly as possible without sacrificing accuracy. Participants were presented with forty-five unique images and asked to assess the damage level for the single residential structure (i.e., the "target house") outlined in each image.

Images were presented one at a time, and all participants saw the same target houses and surrounding geography in each of the forty-five unique images. For each image, participants indicated the damage level by clicking on the button for the appropriate damage level (Figure 3). The response time between the initial view of the target house and the participant's selection of a damage level was recorded. Following the damage categorization, participants were asked to rate their confidence in their assessment of damage for that house. Confidence was rated on a five-point Likert scale ranging from 1 ("not confident") to 5 ("completely confident").

The order of images and the resolution at which each image was presented were randomized. While each participant saw all forty-five target houses, no two participants saw exactly the same combination of houses at the same resolutions. For example, participant '1' may have seen house "A" at the 0.35 m resolution while participant '2' saw house "A" at the 1.0 m resolution. Also, no participant saw the same house at different spatial resolutions, to avoid learning bias. To ensure that we had a sufficient sample of images at each resolution to conduct analyses, we constrained the selection of resolution for each image so that each participant would see the same number of images at each resolution level; thus, the selection of imagery was pseudo-random. Additionally, since we anticipated, based on previous research and JFO analysts, that the threshold of image interpretation accuracy would lie in one of the middle resolution levels, participants saw more images at the middle levels of resolution than at the extreme fine or coarse resolutions (Table 1).
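A minimal sketch of this constrained, pseudo-random assignment (our reconstruction under stated assumptions, not the authors' survey code), using the per-resolution counts from Table 1 within one damage level:

```python
import random

RESOLUTIONS = [0.35, 0.5, 0.75, 1.0, 1.5, 2.0, 4.0]   # meters
IMAGES_PER_RESOLUTION = [1, 2, 3, 3, 3, 2, 1]          # counts per damage level (Table 1)

def assign_resolutions(house_ids, seed=None):
    """Assign one resolution to each of the 15 houses in a damage level so that
    every participant sees the Table 1 counts, while the house-to-resolution
    pairings differ between participants (hypothetical reconstruction)."""
    assert len(house_ids) == sum(IMAGES_PER_RESOLUTION)
    pool = [res for res, n in zip(RESOLUTIONS, IMAGES_PER_RESOLUTION) for _ in range(n)]
    rng = random.Random(seed)
    rng.shuffle(pool)
    return dict(zip(house_ids, pool))

# Example: two participants get identical counts but different pairings.
houses = [f"no_damage_{i:02d}" for i in range(15)]
print(assign_resolutions(houses, seed=1))
print(assign_resolutions(houses, seed=2))
```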


Figure 3. Sample image with interpretation questions.


Results

Accuracy of Interpretation
We first assessed performance for each image individually to identify whether there were any specific images, or types of image, that were particularly confusing or difficult to interpret. We anticipated that the damage level for images with spatial resolutions finer than 1.0 m would be relatively easy to identify. To determine whether any images in the dataset presented a particular challenge to interpretation, we examined mean performance across groups for all images. Assuming that the coarsest resolution images would be more likely to be confusing or difficult to interpret, and thus prone to participants guessing in their interpretation, we first evaluated image performance using only the four finest resolutions (0.35 m through 1.0 m). This process allowed us to better identify problematic images, i.e., those that were too challenging to interpret correctly even with fine resolution imagery.

The mean accuracy for all images and damage levels in the 1.0 m or finer range was 90.8 percent (σ = 0.29). The mean accuracy for any single image ranged from 48.6 percent to 100 percent across the three experience groups. Based on accuracy levels for individual images, we decided to eliminate the one image with particularly low classification accuracy (48.6 percent) from further analyses. This image appears to be close to the 25 percent damage threshold that differentiates the No Damage and Partial Damage categories, which seems to have caused interpretation difficulty. No other image was problematic.

The mean overall accuracy for each experience group (Novice, Hazards Experts, and Image Experts) across the remaining 44 images and all resolution levels in the survey ranged from 82 percent to 86 percent (Table 2). We expected to find a gradual and then rapid drop-off in classification accuracy as spatial resolution becomes coarser, and we did find this functional relationship (Figure 4). Classification accuracy gradually decreased as resolution became coarser than 1.0 m and then decreased more noticeably after 1.5 m. Of all the resolutions tested, the mean accuracy for all three experience groups was particularly poor for the 4.0 m spatial resolution, ranging from 38 percent to 56 percent. Not surprisingly, mean classification accuracy for the Novice participants was lower than for the two more experienced groups at almost all resolution levels.

To examine these factors for statistical differences, a three-factor mixed ANOVA with repeated measures was computed. Damage (3 levels) and Resolution (7 levels) served as the within-subjects variables and Experience Group (3 levels) as the between-subjects variable. Significant main effects were found for Experience Group, F(2, 61) = 3.66, p < 0.05, Resolution, F(6, 366) = 58.39, p < 0.0001, and Damage, F(2, 122) = 13.53, p < 0.0001. A post-hoc pairwise comparison using a correction for alpha inflation revealed significant differences between the Novice group and both the Hazards Expert and Image Expert groups, t(2794) = −2.98, p < 0.01 and t(2794) = −2.26, p < 0.05, respectively; the Novice group was significantly less accurate than the two specialized groups (Table 2). There was no significant difference in accuracy between the Hazards Expert and Image Expert groups.
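For readers who want to run this style of analysis on their own data, a rough sketch (not the authors' code; it substitutes a linear mixed model for the classical repeated-measures ANOVA, and the file name and column names are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x image trial, with
# columns: participant, group (Novice/Hazards/Image), resolution (m),
# damage (No/Partial/Total), correct (0 or 1).
trials = pd.read_csv("trials.csv")  # assumed file layout, not from the paper

# Approximate the three-factor design (Group between subjects; Resolution and
# Damage within subjects) with a random intercept per participant.
model = smf.mixedlm(
    "correct ~ C(group) * C(resolution) * C(damage)",
    data=trials,
    groups=trials["participant"],
)
result = model.fit()
print(result.summary())
```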

The main effects of Damage Level and Resolution were examined further because of a significant interaction between the two, F(12, 721) = 6.25, p < 0.0001 (Figure 5). This interaction implies that the degradation of resolution was not an independent explanatory factor for the changes in accuracy. For images showing No Damage and Total Damage, there was a clear drop in accuracy for images at a resolution coarser than 1.5 m. For the images showing Partial Damage, however, the performance of participants exhibited a less substantial drop in accuracy with coarser resolutions and, oddly, an increase in accuracy at the coarsest resolution. The significant interaction effect appears to be largely driven by this unexpected increase in accuracy for the coarse resolution Partial Damage images.


TABLE 1. NUMBER OF IMAGES SHOWN TO PARTICIPANTS AT EACH LEVEL OF RESOLUTION

Resolution (m)   No Damage   Partial Damage   Total Damage
0.35             1           1                1
0.5              2           2                2
0.75             3           3                3
1.0              3           3                3
1.5              3           3                3
2.0              2           2                2
4.0              1           1                1

TABLE 2. MEAN ACCURACY BY EXPERIENCE GROUP

Experience Group   Mean Accuracy
Novice             81.9%
Hazards Expert     86.4%
Image Expert       86.2%

Figure 4. Mean accuracy by Resolution and Experience Group.

Figure 5. The interaction between Damage and Resolution.


Examination of a classification matrix showing how participants classified each image at the coarsest resolution indicated that the higher level of accuracy afforded to the Partial Damage images at this resolution may have arisen because participants simply guessed the Partial Damage category when they were unsure in their assessment of damage, which would naturally result in the appearance of greater accuracy for Partial Damage images at the coarsest resolution (Table 3). In fact, in after-experiment conversations, several of the participants informed us of their "guessing" for these problematic images.

To further examine the overall effect of spatial resolution, without the confounding factor of the interaction with Damage Level, a pairwise comparison of Resolution levels with correction for alpha inflation was conducted. For the Resolution effect, we found a statistically significant breaking point in classification accuracy between the 1.0 m and 1.5 m resolution images. That is to say, the participants' accuracy level started to decrease significantly once the images reached a resolution of 1.5 m; differences in accuracy between images at resolution levels of 1.0 m or finer were not significant. While the decrease in interpretation success first becomes statistically significant between 1.0 m and 1.5 m, participants still did quite well with the 1.5 m resolution, with a mean accuracy across groups of 83.4 percent (Figure 4). Noting this, and that the difference in interpretation accuracy between the 1.5 m and 2.0 m resolutions is both substantial (a decrease of 14.2 percent) and statistically significant (t(366) = 9.78, p < 0.0001), we believe that 1.5 m is a threshold indicating the coarsest acceptable resolution for categorizing structural damage in remotely sensed imagery.
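A sketch of this kind of pairwise comparison with a correction for alpha inflation (here Holm's method; the specific correction used in the paper is not stated, and the data layout is assumed):

```python
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format data with columns: participant, resolution, correct.
trials = pd.read_csv("trials.csv")

# Mean accuracy per participant at each resolution, then all pairwise paired t-tests.
acc = trials.groupby(["participant", "resolution"])["correct"].mean().unstack()
pairs = list(combinations(acc.columns, 2))
pvals = [stats.ttest_rel(acc[a], acc[b], nan_policy="omit").pvalue for a, b in pairs]

# Holm correction controls the family-wise error rate across all pairs.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} m vs {b} m: adjusted p = {p:.4f} ({'significant' if r else 'n.s.'})")
```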

Confidence in Interpretation
In addition to measuring categorization accuracy, we also asked participants to rate their confidence in their interpretation of each image (Table 4). For image interpretation tasks, it is important not only to be accurate but also to have confidence in the assessment. Mean confidence for the three experience groups, on a 1 ("not confident") to 5 ("completely confident") scale, ranged from 3.58 to 3.75. As expected, confidence ratings dropped markedly with coarser resolution imagery (Figure 6). Confidence ratings were also lower for Partial Damage images, as would be expected based on the accuracy results (Table 5).

To examine the confidence ratings for statistical differences, we computed a three-factor mixed ANOVA with repeated measures. Damage Level (3 levels) and Resolution (7 levels) served as the within-subjects variables and Experience Group (3 levels) as the between-subjects variable. The participants' confidence ratings served as the dependent variable. Significant main effects were found for Damage Level, F(2, 2742) = 60.05, p < 0.0001, and Resolution, F(6, 2742) = 187.76, p < 0.0001. Again, these effects must be considered in light of a significant interaction between Damage Level and Resolution, F(12, 2742) = 2.29, p < 0.01. There was no significant effect of Experience Group; all participants were equally confident regardless of group.

The significant interaction between Damage Level and Resolution indicates that the confidence ratings of participants did not change in the same way for each Damage Level as the resolution of the images became coarser. Further analysis shows that the confidence ratings were significantly lower for Partial Damage images than for No Damage, t(2742) = 8.30, p < 0.0001, or for Total Damage, t(2742) = −10.43, p < 0.0001. There was no significant difference between the confidence ratings for Total Damage and No Damage images.

Reaction Time
We also examined the impact of Damage Level and Resolution on participants' reaction times. Reaction times can serve as a proxy measure of the cognitive difficulty of a task: longer reaction times suggest the task was more difficult, and conversely, shorter reaction times suggest easier tasks.


TABLE 3. RESULTS OF CLASSIFICATION OF 4.0 M RESOLUTION IMAGES WITH A CONFIDENCE RATING OF 1 OR 2 (THE LOWEST CONFIDENCE LEVELS); WHEN PARTICIPANTS WERE UNSURE OF THEIR CLASSIFICATION, THEY APPEAR MORE LIKELY TO IDENTIFY THE IMAGE (E.G., TOTAL DAMAGE) AS PARTIAL DAMAGE

                      Participant Answer
Actual Damage     No Damage   Partial Damage   Total Damage   Number of Images
No Damage         15          32               7              54
Partial Damage    13          34               2              49
Total Damage      7           27               15             49
TOTAL             35          93               24             152

Figure 6. Mean confidence ratings by Resolution and Experience Group.

TABLE 4. MEAN CONFIDENCE RATING BY EXPERIENCE GROUP ACROSS ALL IMAGES (ON A 1 THROUGH 5 SCALE, WITH 1 BEING "NOT CONFIDENT" AND 5 BEING "COMPLETELY CONFIDENT")

Experience Group   Mean Confidence
Novice             3.69
Hazards Expert     3.58
Image Expert       3.75

TABLE 5. MEAN CONFIDENCE RATINGS BY DAMAGE LEVEL

Damage Level     Mean Confidence
No Damage        3.81
Partial Damage   3.24
Total Damage     3.95


For the image classification problem, we expected long reaction times for coarser levels of imagery because the interpretation task is more challenging. Thus, reaction times would follow a pattern opposite that seen with accuracy and confidence levels. This was, in fact, the case (Figure 7); while accuracy and confidence tended to decrease as resolution became coarser, reaction time tended to increase (Figure 8). With the Partial Damage images, the reaction time was moderately high for all resolutions, with a general trend of increasing reaction time as resolution became coarser. For the No Damage and Total Damage images, reaction times were rather low at the finer resolutions, with notable increases for images at a resolution coarser than 1.0 m.

To examine the statistical significance of these differences, we used a three-factor mixed ANOVA with repeated measures, with Damage Level (3 levels) and Resolution (7 levels) as the within-subjects variables and Experience Group (3 levels) as the between-subjects variable. The participants' reaction time for identifying the damage level served as the dependent variable. In this analysis we found significant main effects of Resolution, F(6, 2745) = 16.30, p < 0.0001, Damage Level, F(2, 2745) = 15.24, p < 0.0001, and Experience Group, F(2, 2745) = 82.62, p < 0.0001. We also found significant interactions between Resolution and Damage Level, F(12, 2745) = 2.47, p < 0.01, and between Resolution and Experience Group, F(12, 2745) = 2.73, p < 0.01. These results confirm that the increase in reaction time is, in fact, directly tied to the coarseness of the resolution, and that both damage level and participant experience level are also of importance. Partial Damage imagery consistently showed longer reaction times for interpretation. Additionally, the Image Expert group tended to have longer reaction times than the other two participant groups. This may be due to the trained image interpreters using a more systematic evaluation process for each image, or to the fact that the Image Expert group tended to be older and, perhaps, slower.

The Role of Knowledge and Experience
Success in the interpretation process relies on a number of factors related both to the imagery and to the knowledge, experience, and goals of the interpreter. The goals of the participants in this study were the same, as the task was defined in the same way for all participants. However, we have not yet discussed how the individual knowledge and experience of the participants impacted their performance, other than the basic group differences reported above (typically that the Hazards Expert and Image Expert groups performed better than the Novice group). To examine further whether, and how, individual knowledge and experience impacted the categorization task, we examined answers to the background questions (i.e., major, age, coursework, experience) from all participants.

Of the participants who answered the background question on relevant coursework, 39.2 percent indicated that they had taken at least one course in interpretation of aerial imagery. The participants who had training in image interpretation had a mean accuracy of 85.8 percent, while those who did not had a mean accuracy of 83.0 percent. To examine the role of training in this image interpretation task, we used a simple mixed ANOVA with Resolution as a between-subjects variable and training in image interpretation (Coursework) as a within-subjects variable. In this analysis we found a significant main effect of Coursework, F(1, 2801) = 4.5, p < 0.05, and an expected main effect of Resolution, F(6, 2801) = 65.01, p < 0.0001. There was no interaction between Coursework and Resolution, indicating that while formal training in image interpretation provided a slight benefit in overall accuracy (across all resolution levels), it did not provide a specific benefit for interpretation of the more challenging coarse resolution imagery. In other words, at coarse resolutions, classification was equally difficult regardless of training.

In terms of situational familiarity, 66.3 percent of participants indicated that they had previously viewed aerial imagery taken over a disaster area. On average, participants with this experience were more accurate (84.6 percent) than those who indicated no experience looking at post-disaster aerial imagery (81.3 percent). A simple mixed ANOVA was used with Resolution as the between-subjects variable and viewing of aerial imagery taken over a disaster (Disaster Imagery) as the within-subjects variable. In this analysis we found significant main effects of Disaster Imagery, F(1, 2757) = 6.37, p < 0.05, and Resolution, F(6, 2757) = 57.38, p < 0.0001. Again, there was no significant interaction, indicating that while some familiarity with aerial imagery collected over a disaster area is beneficial, it does not specifically help with interpretation of coarser resolution imagery.

We expected that performance would increase with years of experience and would probably reach a point at which additional training no longer provided significant improvement in interpretation skill. To test for the relationship between experience and interpretation ability, we used a simple mixed ANOVA with Resolution as the between-subjects variable and years of experience (Experience) as the within-subjects variable.


Figure 7. Mean reaction time by Resolution and Experience Group.

Figure 8. The relationship of Accuracy, Reaction Time, and Confidence with Resolution, aggregated across all Experience Groups.


For this analysis, participants were grouped into three categories of experience: 0 to 4 years, 5 to 9 years, and 10+ years. We found significant main effects of Experience, F(2, 2838) = 7.62, p < 0.001, and of Resolution, F(6, 2838) = 42.84, p < 0.0001. There was no interaction between these effects. Post-hoc analyses show significant differences between 0 to 4 years and 5 to 9 years, t(2838) = −3.36, p < 0.001, and between 0 to 4 years and 10+ years, t(2838) = −2.56, p < 0.05; however, there was no significant difference between participants with 5 to 9 years and those with 10+ years of experience, t(2838) = −0.99, p > 0.1. Thus, a few years of image interpretation experience is, not surprisingly, beneficial for this interpretation task; however, many years of experience are not necessary. We might suggest, however, that for other, more difficult interpretation tasks experience may play a greater role.

Summary

This research examined the effects of image spatial resolution on the accuracy of categorizing residential building damage in panchromatic imagery by human interpreters: a typical key application of remotely sensed imagery in a disaster context. Our goals centered on three questions:

• Is there a threshold of image resolution beyond which finer resolution imagery ceases to improve image interpretation accuracy?

• What is the coarsest resolution imagery that is sufficient for emergency management tasks that are performed post-disaster?

• Does the coarsest acceptable resolution vary depending on the skill and experience level of the interpreter (e.g., with respect to image interpretation or emergency management experience, practical experience, or academic training)?

In identifying the threshold beyond which finer resolution imagery ceases to improve interpretation ability, the results from our study show that this threshold for panchromatic imagery is in the 1.0 m to 1.5 m spatial resolution range. While there were apparent marginal improvements (~3 percent) in interpretation accuracy for resolutions finer than 1.0 m, these differences were not statistically significant.

In terms of identifying the coarsest resolution imagery that is sufficient for this type of task, the findings indicate a substantial loss of information to guide the interpretation process in the 1.0 m to 1.5 m spatial resolution range, with image resolutions coarser than 2.0 m clearly unacceptable for fundamental damage assessment. This threshold of 1.5 m is a resolution coarser than the range recommended by Cowen and Jensen (1998) or by Bolus and Bruzewicz (2002). It is within the range suggested by Schweitzer and McLeod (1997), though it indicates disagreement with their suggestion that resolutions as coarse as 2.0 m are also acceptable. We note, however, that our study focused on residential building damage with panchromatic imagery, while the previous statements were broad generalizations about remotely sensed imagery for disasters.

With respect to the skill and experience level of the interpreter, we found, not surprisingly, that experts in hazards management and experts in image interpretation are more accurate at categorizing damage than individuals with minimal training. However, while their performance is better than that of novice image interpreters, the several different measures of skill and experience level that we collected proved to be irrelevant in establishing a threshold for the coarsest resolution imagery sufficient for structural damage identification; the threshold remained the same, 1.5 m, for all participant groups.

Conclusion and Discussion

Remotely sensed imagery plays a critical part in the response phase immediately following a major disaster. To be of use, the imagery must be obtained as quickly as possible (within one to three days, at most) and must be of sufficient resolution to be interpretable by the emergency management team. Until now, the definition of "sufficient resolution" has been fairly vague, typically ranging from 0.25 m to 2.0 m (Jensen and Cowen, 1999; Schweitzer and McLeod, 1997), or coarser in some instances, and without tying the resolution requirements to a specific damage type (Jensen, 2000). This study has empirically identified, using a cognitive science approach, a specific threshold of 1.5 m resolution beyond which coarser imagery ceases to be useful for identifying structural damage (i.e., as observed for rooftops) after a disaster. This threshold was consistent across individuals at all levels of experience. Identification of this threshold provides an empirically derived guide for human image interpretation that can be used to prioritize potential sources of imagery for disaster response.

Coarser resolution imagery (1.5 m) than was previously thought acceptable has now been shown to be sufficient for post-disaster interpretation tasks. What does this spatial resolution requirement imply for the use of satellite-borne imaging sensors for post-disaster imagery collection? Currently, there are seven commercial US satellites (GeoEye-1, IKONOS, OrbView-3, QuickBird-1, QuickBird-2, WorldView-1, and WorldView-2) carrying sensors with spatial resolutions of 1.5 m or better. There are an additional three non-US satellites (Cartosat-2A, Cartosat-2AT, Kompsat-2) with imaging sensors of 1.5 m or better. If the US Federal or state agencies leading the response to a disaster can draw on imagery from at least three of these satellite-sensor combinations, then the likelihood of obtaining imagery within three days approaches 100 percent (Hodgson et al., 2010). This capability should be sufficient for meeting the temporal and resolution requirements for imagery needed by emergency response teams, at least for structural damage.
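As an illustration of why drawing on several satellites raises the odds of a timely collection (a sketch with purely hypothetical per-satellite probabilities; the actual likelihoods come from the orbital analysis in Hodgson et al., 2010):

```python
# Hypothetical probability that a single satellite obtains a usable (cloud-free,
# acceptable off-nadir) collection of the impact area on any given day.
p_per_satellite_per_day = [0.35, 0.30, 0.25]   # three satellites, assumed values
days = 3                                        # response-phase window

# Probability that at least one satellite succeeds within the window,
# assuming independent attempts on each day.
p_miss_all = 1.0
for p in p_per_satellite_per_day:
    p_miss_all *= (1.0 - p) ** days
print(f"P(at least one collection in {days} days) = {1.0 - p_miss_all:.3f}")
```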

The effect of spatial resolution on the performance of automated information extraction methods was not examined. Because of the expertise required and the rather weak performance of automated approaches for extracting specific details (e.g., damaged residential structure versus no damage) from high spatial resolution imagery, visual interpretation of imagery after a disaster is commonly used in emergency response offices (e.g., a joint field office). State-level emergency operations centers generally lack ample geospatial staff (Parrish et al., 2007). Our colleagues in academic remote sensing would likely argue that they (or their students) could do a good job of mapping damage information with automated approaches using ancillary data and pre- and post-event imagery (as one reviewer did). However, (a) these scientists are not part of the response plan, (b) they would not likely be available in the first 24 hours after a disaster event, and (c) such pre-event imagery and ancillary information, if they exist, are seldom readily available (Hodgson et al., 2010). Based on our experience, we suggest, however, that any automated approach would be no more accurate (and likely less accurate) than a human interpreter using panchromatic imagery. This research also did not examine the use of multi-band imagery, such as color, color infrared, or other band combinations. However, even with color imagery, we do not expect, based on our experience, that the coarser spatial resolutions of commercial satellite sensors (e.g., 4.0 m × 4.0 m) would yield a dramatic improvement over the 44 percent classification accuracy (across all experience groups) observed at the 4.0 m spatial resolution with panchromatic imagery.

Clearly, and ironically, the imaging sensors in the Disaster Monitoring Constellation (DMC), with 32 m × 32 m spatial resolutions (or the 4 m panchromatic sensor on Beijing-1), are not appropriate for mapping structural damage categories.

While categorization of residential structures is very important for allocating emergency response resources (e.g., food, water, shelter), other essential elements of information (e.g., road, bridge, utility, electrical, water, and sewer damage) may also be derived from imagery. FEMA and the Interagency Remote Sensing Coordinating Committee (IRSCC) have continually attempted to define essential elements of information (EEIs) and match these EEIs with the spatial and spectral resolution requirements for disaster response. When information extraction uses human image interpreters, we have suggested an experimental design based on cognitive principles as an appropriate and quantitative approach for defining the spatial resolution requirements. If desired, other factors, such as spectral features, radiometric resolution, or ancillary information, could be introduced when creating stimuli (i.e., test images) in the experimental design. Imagery collected in disaster response may be justified based on one agency's need; however, the same imagery may be used for other mapping purposes. What is the threshold for accurately mapping damage levels for these other EEIs? Future cognitive research could be designed to probe this question for other features.
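As one illustration of how such stimuli might be prepared, the sketch below degrades a fine-resolution panchromatic array to coarser nominal ground sample distances by simple block averaging. This is an assumed, simplified procedure for demonstration; it is not the resampling method used to create the stimuli in this study.

```python
# Hypothetical sketch of generating coarser-resolution test stimuli from a
# fine-resolution panchromatic image by block averaging. The array, ground
# sample distances, and function name are illustrative assumptions.
import numpy as np

def degrade_resolution(image, native_gsd_m, target_gsd_m):
    """Aggregate a 2D array from its native ground sample distance to a coarser one."""
    factor = int(round(target_gsd_m / native_gsd_m))
    rows = (image.shape[0] // factor) * factor
    cols = (image.shape[1] // factor) * factor
    trimmed = image[:rows, :cols]
    # Average each factor-by-factor block to simulate a coarser sensor footprint.
    return trimmed.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))

# Example: simulate 0.5 m, 1.0 m, 1.5 m, and 2.0 m stimuli from a 0.25 m scene.
fine = np.random.rand(2000, 2000)  # placeholder for a 0.25 m panchromatic image
stimuli = {gsd: degrade_resolution(fine, 0.25, gsd) for gsd in (0.5, 1.0, 1.5, 2.0)}
```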

The role of ancillary information, such as building footprints, parcel boundaries, or even pre-event imagery, is considered very important in the interpretation of post-event imagery. However, such ancillary information may not be readily accessible in the few days following a disaster event, due to licensing restrictions, even if the data exist. Building footprints and parcel boundaries are often subject to licensing restrictions, and memorandums of agreement (or licenses) would be required before they are available to a state- or FEMA-directed GIS unit within an emergency operations center. Some states and/or counties are working to establish such agreements prior to disaster events. Future studies could examine the relative value of such ancillary information in the interpretation of imagery, varied by spatial resolution.

As a final note, we believe the key role in rapidly assessing disaster-related damage information for anthropogenic features (e.g., residential, commercial, and industrial structures) lies with the human image interpreter. As such, it is important to evaluate the role of imagery characteristics, ancillary information, and anecdotal information from the human interpreter's perspective. Cognitive studies that probe remote sensing-related image interpretation questions are essential in defining empirically-driven requirements for Federal and State emergency response plans.

Acknowledgments
Funding for this research was provided by the University of South Carolina Research Opportunities Program. The authors would like to acknowledge the long-term experience and subsequent advice that Dr. Bruce A. Davis provided in this research. Three anonymous reviewers also provided insightful comments.

References

Adams, B.J., J.A. Womble, S. Ghosh, and C. Friedland, 2006. Deployment of remote sensing technology for multi-hazard post-Katrina damage assessment within a spatially-tiered reconnaissance framework, Paper presented at the 4th International Workshop on Remote Sensing for Post-Disaster Response, 25–26 September, Cripps Court, Magdalene College, Cambridge, UK.

Bolus, R., and A. Bruzewicz, 2002. Evaluation of new sensors for emergency management, Technical Report ERDC/CRREL TR-02-11, US Army Corps of Engineers, Engineer Research and Development Center.

Cavanagh, P., 2011. Visual cognition, Vision Research, 51(13):1538–1551.

Cowen, D.C., and J.R. Jensen, 1998. Extracting and modeling of urban attributes using remote sensing technology, People and Pixels: Linking Remote Sensing and Social Science (D. Liverman, E.F. Moran, R.R. Rindfuss, and P.C. Stern, editors), Washington, D.C.: National Academy Press.

Department of Homeland Security, 2010. Federal Interagency Geospatial Concept of Operations (GeoCONOPS).

Estes, J., E. Hajik, and L. Tinney, 1983. Manual and digital analysis in the visible and infrared regions, Manual of Remote Sensing, Second Edition, Falls Church, Virginia: American Society for Photogrammetry and Remote Sensing, pp. 258–277.

FEMA, 2003. RSDE Field Workbook: Preparing Structure Inventories Using Residential Substantial Damage Estimator Software Program RSDE 2.0, URL: www.floods.org/PDF/FEMA_RSDE_311WB.pdf (last date accessed: 29 February 2012).

Greene, M.R., and A. Oliva, 2009. Recognition of natural scenes from global properties: Seeing the forest without representing the trees, Cognitive Psychology, 58:137–176.

Haralick, R., and L. Shapiro, 1992. Computer and Robot Vision, Addison-Wesley Publishing Company.

Hodgson, M.E., 1998. What size window for image classification? A cognitive perspective, Photogrammetric Engineering & Remote Sensing, 64(8):797–807.

Hodgson, M.E., B.A. Davis, and J. Kotelenska, 2005. State-Level Hazard Offices: Their Spatial Data Needs and Use of Geospatial Technology, Final Research Report to NASA-Stennis Space Center (University of South Carolina).

———, 2010. Remote sensing and GIS data/information in the emergency response/recovery phase, Geospatial Technologies in Urban Hazard and Disaster Analysis (P.S. Showalter and Y. Lu, editors), Dordrecht: Springer, pp. 327–354.

Hodgson, M.E., and R.E. Lloyd, 1986. Cognitive and statistical approaches to texture, Proceedings of the 1986 ASPRS-ACSM Annual Convention, 4:407–416.

Hoffman, R.R., and J. Conway, 1989. Psychological factors in remote sensing: A review of some recent research, Geocarto International, 4(4):3–21.

Hsu, S., and R.G. Burright, 1980. Texture perception and the RADC/Hsu texture feature extractor, Photogrammetric Engineering & Remote Sensing, 46(8):1051–1058.

Jensen, J.R., 2000. Remote Sensing of the Environment, Upper Saddle River, New Jersey: Prentice Hall.

Jensen, J.R., and D.C. Cowen, 1999. Remote sensing of urban/suburban infrastructure and socio-economic attributes, Photogrammetric Engineering & Remote Sensing, 65(5):611–622.

Langhelm, R.J., and B.A. Davis, 2002. Remote sensing coordination for improved emergency response, Proceedings of Pecora 15/Land Satellite Information IV/ISPRS Commission I/FIEOS 2002, 10–15 November, Denver, Colorado.

Lewicki, P., T. Hill, and E. Bizot, 1988. Acquisition of procedural knowledge about a pattern of stimuli that cannot be articulated, Cognitive Psychology, 20:24–37.

Lloyd, R.E., and M.E. Hodgson, 2002. Visual search for land use objects in aerial photographs, Cartography and Geographic Information Science, 29(1):3–16.

Lloyd, R.E., M.E. Hodgson, and A. Stokes, 2002. Visual categorization with aerial photographs, Annals of the Association of American Geographers, 92(2):241–266.

Montello, D.R., 2002. Cognitive map-design research in the twentieth century: Theoretical and empirical approaches, Cartography and Geographic Information Science, 29(3):283–304.

Parrish, D.R., J.J. Breen, and S. Dornan, 2007. Survey Development Workshop for the Southeast Region Research Initiative (SERRI) Project, Gulfport, Mississippi, Mississippi State University Coastal Research and Extension Center, 29 p.

Rosch, E., 1978. Principles of categorization, Cognition and Categorization (E. Rosch and B. Lloyd, editors), Hillsdale, New Jersey: Lawrence Erlbaum.

Schweitzer, B., and B. McLeod, 1997. Marketing technology that is changing at the speed of light, Earth Observation Magazine, 6(7):22–24.

Thomas, D.S.K., S.L. Cutter, M.E. Hodgson, M. Gutenkunst, and S. Jones, 2003. Use of spatial data and geographic technologies in response to the September 11th terrorist attack on the World Trade Center, Beyond September 11th: An Account of Post-Disaster Research, University of Colorado, Boulder: NHRAIC, pp. 147–162.

Wilson, M., 2002. Six views of embodied cognition, Psychonomic Bulletin & Review, 9(4):625–636.

Wolfe, J.M., 1994. Guided search 2.0: A revised model of visual search, Psychonomic Bulletin & Review, 1(2):202–238.

Womble, J.A., D.A. Smith, and B.J. Adams, 2008. Remote sensing of hurricane damage, Part 2: Identification of neighborhood wind/water damage patterns, Paper presented at the American Association for Wind Engineering 2008 Workshop.

Zhang, Y., and N. Kerle, 2008. Satellite remote sensing for near-real time data collection, Geospatial Information Technology for Emergency Response (S. Zlatanova and J. Li, editors), London: Taylor & Francis.

(Received 29 July 2011; accepted October 2011; final version 19 December 2011)
