
Effect of computer graphics enhancements on spatial performance using perspective displays

Woodrow Barfield

Virtual Environment Laboratory, Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA (Tel.: +1-540-231-2547; Fax: +1-540-231-3322)

Received 18 August 1997; received in revised form 18 March 1998; accepted 18 March 1998

Abstract

Two studies were performed to investigate the effect of providing computer graphics enhancements to a perspective display on the observer's ability to estimate the azimuth and elevation separating two computer-generated images. The enhancements included the ability to rotate the perspective scene, the use of Lambertian shading, and the use of Lambertian shading with computer-generated shadows. The results indicated that the computer graphics enhancements of shadows and Lambertian shading did not aid subjects in judgments of azimuth or elevation. However, the ability to rotate the scene significantly improved judgments of elevation. Furthermore, this effect was most pronounced for larger elevation separations between images. The results also showed that the geometric field of view used to design the perspective display influenced the magnitude of azimuth errors. Implications of the results for the design of spatial displays are discussed. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Computer graphics enhancements; Spatial performance; Lambertian shading; Perspective display

1. Introduction

There is a growing trend among designers to enhance two-dimensional spatial displays by using techniques in computer graphics. One motivation for this trend is the limitations associated with current two-dimensional display formats, the most obvious being the lack of an intuitive format in which to present three-dimensional spatial data. Two closely related application areas where the use of three-dimensional spatial displays is important include displays for flight decks and displays for air traffic control. In the first case, Ellis et al. [7] and Smith et al. [12] have shown that the use of a plan view display for flight control may result in more frequent horizontal (x and z axes) than vertical (y axis) avoidance maneuvers. Other application areas where three-dimensional display formats, and specifically enhancements provided by computer graphics techniques, could aid in display design are in-vehicle navigation displays for automobiles; and displays for telerobotics, telemedicine, submarines, and surface ships.

One computer graphics technique which has been used to present three-dimensional spatial information is the perspective display format. While the perspective format shows great promise for displaying three-dimensional data, studies have shown that, owing to distortions associated with the parameters of perspective, operators make characteristic errors when making directional judgments using perspective displays. Many studies have been conducted to help identify these response biases and, in turn, to determine what display conditions, or parameters of perspective used in designing a perspective display, will minimize these biases so as to provide the viewer with the best performance in spatial judgments [1, 8, 11]. McGreevy and Ellis [11], for example, examined the effect of varying the geometric field of view (GFOV) while maintaining constant local scaling effects of perspective. Their display consisted of two cubes above a horizontal grid plane with droplines connecting each cube to the surface. The task consisted of judging azimuth and elevation angles of the target cube with respect to the reference cube. The results of this study showed that target elevation was consistently overestimated by the viewer, especially in 'telephoto' images. Target azimuth, on the other hand, varied sinusoidally with respect to the target cube location within a particular quadrant. In a similar study, Ellis et al. [8] examined the effect of varying both the GFOV and the station point distance on performance in exocentric spatial judgments. It was observed that errors in azimuth decreased when the target cube was located near the major meridians of the horizontal grid plane. If the grid plane was replaced with a simple plane void of grid lines, azimuth errors were minimized when the target cube was located along the viewing vector.

Other studies focusing on the design of three-dimensional display formats have evaluated performance as a function of the elevation of the computer graphics virtual camera above the computer-generated scene [9]. For example, in a study by Yeh and Silverstein [15], subjects judged which of two objects was closer to the viewer, and which of two objects was higher above the ground plane, as a function of eyepoint elevation angle (EPEA) and perspective versus stereoscopic viewing conditions. Depth judgments were found to be slightly faster at a 45° EPEA compared with a 15° EPEA. Altitude judgments, however, were much slower at the 45° EPEA compared with the 15° EPEA. In a study by Kim et al. [10], eyepoint elevation was shown to influence manual tracking performance. Their results indicated that as the EPEA approached the extremes, 0° or 90°, the rms error in tracking increased. This is because as the eyepoint elevation approaches one of the two extreme viewing angles, the perspective display gradually loses one axis of position information, either the depth or the vertical dimension. Thus, it is not surprising that the rms tracking errors were found to be at a minimum for the 45° eyepoint elevation angle.
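The loss of one axis of position information at extreme eyepoint elevations can be illustrated with a simple projection argument. The following sketch (not from the studies cited above; the orthographic approximation and the function name are assumptions made here for illustration) shows that the screen displacement produced by a unit step in depth vanishes as the EPEA approaches 0°, while the displacement produced by a unit step in altitude vanishes as the EPEA approaches 90°:

```python
import math

def projected_sensitivity(epea_deg: float) -> tuple[float, float]:
    """Approximate screen displacement (orthographic view) produced by a unit
    world-space step in depth and in altitude, for a camera looking down at
    the scene with eyepoint elevation angle `epea_deg`."""
    theta = math.radians(epea_deg)
    depth_gain = math.sin(theta)     # depth information collapses as EPEA -> 0 deg
    altitude_gain = math.cos(theta)  # altitude information collapses as EPEA -> 90 deg
    return depth_gain, altitude_gain

for epea in (0, 15, 45, 90):
    d, a = projected_sensitivity(epea)
    print(f"EPEA {epea:2d} deg: depth gain {d:.2f}, altitude gain {a:.2f}")
```

At a 45° EPEA both axes retain comparable sensitivity, which is consistent with the minimum rms tracking error reported by Kim et al. [10].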

Another display condition which has received considerable attention is that of a stereoscopic presentation of three-dimensional information [3, 10, 14, 15]. Yeh and Silverstein [15], in the study mentioned above, found that response times and accuracy for depth and altitude judgments were superior using a stereoscopic display compared with performance using a perspective display. Furthermore, Kim et al. [10] showed that rms error for a tracking task was smaller using a stereoscopic display compared with a perspective display, especially when monoscopic depth cues were missing. In a different study, Barfield and Rosenberg [3] compared the use of a perspective display versus a stereoscopic display for judgments of azimuth and elevation. They found that the stereoscopic display was superior for judgments of elevation, but not for judgments of azimuth. The results of these studies seem to indicate that stereoscopic viewing provides enhanced performance over perspective displays when the monoscopic depth cues used to design the scene are sparse or degraded [9].

As noted, previous studies have shown that use of a perspective display may aid operators in judgments of spatial information [4, 5]. However, in many of the previous studies, the perspective displays consisted of wireframe images [2, 9], representing a relatively low level of sophistication in rendering techniques. Furthermore, wireframe displays lack many of the depth cues which are necessary for accurate spatial performance [3, 9]. These cues include texture gradients, arrangements of light and shadows, motion parallax, and binocular disparity. In the current studies it was of interest to determine whether pictorial depth cues added to a perspective display would result in an improvement in spatial judgments compared with performance using a wireframe perspective display. The depth cues/graphics enhancements added to the wireframe perspective display included: (1) the ability to rotate the scene, providing the depth cue of motion parallax; (2) Lambertian shading, providing the depth cue of interposition; and (3) computer-generated shadows, providing the depth cues associated with information about the shape of an object and the direction of a light source in the scene [13]. The major objectives of the studies were to determine whether directional judgments would improve as a function of the type of display (wireframe, Lambertian shaded, Lambertian shaded with shadows) and, specifically, the type of depth cue provided to the display.
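For readers unfamiliar with the shading model named above, the following minimal sketch illustrates Lambertian (diffuse) shading; the vectors, albedo, and ambient term are illustrative assumptions, not the parameters used in the experiments:

```python
import math

def lambertian_intensity(normal, light_dir, albedo=0.6, ambient=0.1):
    """Diffuse intensity at a surface point: an ambient term plus
    albedo * max(0, N . L), with N and L supplied as unit vectors."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + albedo * max(0.0, n_dot_l)

# A face tilted 60 degrees away from the light receives half the diffuse
# contribution of a face pointing straight at it.
light = (0.0, 0.0, 1.0)
print(lambertian_intensity((0.0, 0.0, 1.0), light))
print(lambertian_intensity((math.sin(math.radians(60)), 0.0,
                            math.cos(math.radians(60))), light))
```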

2. Experiment 1

The purpose of Experiment 1 was to determine whether the depth cues of motion parallax and interposition would lead to an improvement in the accuracy of azimuth and elevation judgments compared with performance using a perspective wireframe display. Fourteen university students, five female and nine male, participated in the study (mean age 22.1 years). All subjects reported having normal or corrected-to-normal visual acuity.

To obtain a wide range of viewing parameters in which to evaluate the effect of monocular depth cues, four geometric fields of view (GFOV) (30°, 45°, 60°, and 75°) were used to design the perspective display. Fig. 1 shows a schematic drawing of the computer graphics variables that relate to the design of a perspective display. As shown in Fig. 1, an arbitrary view into a three-dimensional environment is specified by a viewing frustum, sometimes called a perspective viewing volume. Of particular interest are the center of projection (COP) and the GFOV. The GFOV is the field of view pertaining to the vertical and horizontal angle from the computer's virtual eye to the viewport, which is determined by the clipping planes. The effect of changing the GFOV is to proportionally increase (magnification effect, say a 10° GFOV) or decrease (minification effect, say a 120° GFOV) the size of the image displayed in the viewport. Another important term in computer graphics is the COP, which is the point in three-dimensional space through which all imaged light rays pass. Furthermore, the station point is the location representing the viewer during image generation. For monoscopic displays, the station point is coincident with the COP.

Fig. 1. Essential components of a perspective display showing the center of projection, geometric field of view (GFOV), and near and far clipping planes.

Fig. 2. Photographic representation of three display formats: (a) a wireframe image, (b) a Lambertian shaded image, and (c) a Lambertian shaded image with shadows. In each condition the reference cube is located at the center of the display; the other object is the target cube with dropline.
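To make the magnification/minification effect of the GFOV described above concrete, the following worked sketch uses standard pinhole-projection geometry; the object size and viewing distance are arbitrary illustrative values, not parameters from the experiments:

```python
import math

def viewport_fraction(object_size, distance, gfov_deg):
    """Fraction of the viewport height occupied by an object of a given world
    size at a given distance from the center of projection, for a symmetric
    perspective frustum with vertical GFOV `gfov_deg`."""
    visible_height = 2.0 * distance * math.tan(math.radians(gfov_deg) / 2.0)
    return object_size / visible_height

# Narrow (telephoto) GFOVs magnify the scene; wide GFOVs minify it.
for gfov in (10, 30, 75, 120):
    print(f"GFOV {gfov:3d} deg -> {viewport_fraction(1.0, 10.0, gfov):.3f} of viewport")
```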

The perspective displays consisted of a grid plane and two objects which were modeled after scenes used previously by McGreevy and Ellis [11]. The two objects consisted of a target cube and reference cube with droplines connecting the cubes to a grid which overlaid a flat surface (Fig. 2). In relation to the reference cube, for each GFOV, the target cube appeared in one of eight azimuth directions (10°, 55°, 100°, 145°, 190°, 235°, 280°, 325°) and eight elevation directions (±10°, ±20°, ±30°, ±40°). For each of the four main conditions, subjects viewed 64 perspective scenes which were created by combining the azimuth, elevation, and GFOV viewing parameters. The 64 perspective scenes were randomly selected from the set of 256 images which resulted from fully crossing the perspective parameters (4 GFOVs × 8 azimuths × 8 elevations). Half of the 256 images were randomly assigned to the rotation condition, the other half to the nonrotation condition.
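A sketch of how such a stimulus set could be generated appears below. The variable names, the shuffling procedure, and the final per-condition sampling are illustrative assumptions; the paper does not state the exact assignment algorithm used:

```python
import itertools
import random

GFOVS = [30, 45, 60, 75]                               # degrees
AZIMUTHS = [10, 55, 100, 145, 190, 235, 280, 325]      # degrees
ELEVATIONS = [-40, -30, -20, -10, 10, 20, 30, 40]      # degrees

# Full crossing of the perspective parameters: 4 x 8 x 8 = 256 candidate scenes.
scenes = list(itertools.product(GFOVS, AZIMUTHS, ELEVATIONS))
random.shuffle(scenes)

# Half of the 256 scenes assigned to the rotation condition, half to the
# static condition; 64 scenes are then drawn for a given main condition.
rotation_scenes, static_scenes = scenes[:128], scenes[128:]
trials_for_one_condition = random.sample(rotation_scenes, 64)
```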

The perspective scenes were created using a Silicon Graphics workstation with a screen resolution of 1024 × 1280 pixels and a 60 Hz refresh rate. The images were viewed from an eyepoint located at a bearing of 202° and an elevation of 22° above the reference cube. This location gave the observer a perspective of the scene as viewed from the third quadrant and was located in a similar position to the eyepoint chosen by McGreevy and Ellis [11], and Ellis et al. [7], in prior studies. Each target cube was positioned at a standard radial distance from the reference cube as determined by the GFOV. The reference cube was always displayed at the center of the screen and was 1 cm² in size for the 75° GFOV. In total, there were four radial distances, each uniquely paired with a GFOV. The radial distance for the 30° GFOV condition was 1.75 cm; 45°, 2.70 cm; 60°, 3.75 cm; 75°, 5.00 cm. These values were based on the scaling factor for the target and reference cubes obtained for a given GFOV. Stimulus images were also scaled across each GFOV so that the reference cube maintained the same visual angle across all conditions. The scaling of the grid was determined by the particular GFOV. In addition, the dropline was always perpendicular to the grid. To distinguish the target cube from the reference cube, the dropline of the reference cube contained a small cone at its base. The background of the scene was illuminated at 9.0 fL, the surface grid at 6.3 fL, the gridlines at 0.56 fL, and the solid shaded cubes at 4.9 fL.
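As an aside, the reported radial distances are consistent with the on-screen separation growing in proportion to tan(GFOV/2), which is what holding the world-space separation constant while rescaling the scene to keep the reference cube at a constant visual angle would produce. This check is an observation about the published numbers, not a procedure stated in the paper:

```python
import math

radial_cm = {30: 1.75, 45: 2.70, 60: 3.75, 75: 5.00}  # GFOV (deg) -> distance (cm)

# Anchor the proportionality constant on the 75 deg condition and predict the rest.
reference = radial_cm[75] / math.tan(math.radians(75) / 2.0)
for gfov, measured in radial_cm.items():
    predicted = reference * math.tan(math.radians(gfov) / 2.0)
    print(f"GFOV {gfov:2d} deg: reported {measured:.2f} cm, tan-scaled {predicted:.2f} cm")
```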

During the rotation condition subjects used a mouse to interactively rotate the entire scene while maintaining the same radial and angular eyepoint position relative to the reference cube. The perspective scene was rotated about the vertical axis of the reference cube, which was always located in the center of the display. Four absolute azimuth values (0°, 90°, 180°, 270°) were labeled on the edges of the grid surface so that a constant frame of reference was provided for the rotation and static image conditions.
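The kind of transformation this implies is rotation of every scene point about the vertical axis through the reference cube while the eyepoint stays fixed. The following minimal sketch is an assumption about how such a rotation can be computed, not the experiment software:

```python
import math

def rotate_about_vertical(point, pivot, angle_deg):
    """Rotate a 3-D point (x, y, z) about the vertical (y) axis passing
    through `pivot`, leaving the eyepoint untouched."""
    x, y, z = (p - q for p, q in zip(point, pivot))
    a = math.radians(angle_deg)
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    return (xr + pivot[0], y + pivot[1], zr + pivot[2])

# A point 1 unit east of the pivot swings to 1 unit south after a 90 deg turn.
print(rotate_about_vertical((1.0, 0.5, 0.0), (0.0, 0.0, 0.0), 90.0))
```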

The experimental task was to judge the relative elevation and azimuth which separated the reference cube from the target cube. The dependent variables were collected using software which allowed the subject to rotate a pointer around a 360° circle to indicate azimuth estimates and a 180° semicircle to indicate elevation estimates. Training was provided by allowing the subject to practice several trials, each representative of the experimental task, with graphical feedback on azimuth and elevation judgments provided.
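Because azimuth responses lie on a 360° circle, scoring them requires handling wraparound (an estimate of 355° for an actual azimuth of 10° is 15° off, not 345°). The sketch below shows one way such errors, as defined in the Results section, might be computed; the function names and the wraparound handling are assumptions rather than details given in the paper:

```python
def azimuth_error(estimated_deg: float, actual_deg: float) -> float:
    """Absolute angular difference on a 360-degree circle (wraparound aware)."""
    diff = (estimated_deg - actual_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

def elevation_error(estimated_deg: float, actual_deg: float) -> float:
    """Absolute difference for elevations reported on a 180-degree semicircle."""
    return abs(estimated_deg - actual_deg)

print(azimuth_error(355.0, 10.0))     # 15.0, not 345.0
print(elevation_error(-25.0, -30.0))  # 5.0
```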

3. Results

The dependent variable consisted of the error in directional judgment, defined as the absolute value of the difference between the estimated and actual azimuth or elevation. The dependent variables were analyzed using a repeated measures analysis of variance (ANOVA) procedure.
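For readers wishing to run this kind of repeated measures analysis with modern tooling, a sketch using the statsmodels AnovaRM class follows. This is not the software used in the original study, and the column names, factor labels, and randomly generated error values are placeholders for illustration only:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Long-format table: one row per subject x display x rotation cell,
# holding that subject's mean azimuth error in degrees (placeholder data).
cells = [(s, d, r) for s in range(1, 15)
         for d in ("wireframe", "shaded")
         for r in ("static", "rotating")]
data = pd.DataFrame(cells, columns=["subject", "display", "rotation"])
data["error"] = 6.2 + rng.normal(0.0, 0.8, size=len(data))

model = AnovaRM(data, depvar="error", subject="subject",
                within=["display", "rotation"])
print(model.fit())
```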

The results of the ANOVA indicated that the magnitude of the azimuth error did not significantly change as a function of the type of display (wireframe, Lambertian shaded) (F(1,13) = 1.63, p > 0.05) (mean azimuth error for the wireframe scene, 6.35°; mean azimuth error for the Lambertian shaded scene, 6.12°). Furthermore, the addition of monocular motion parallax (scene rotation condition) to the perspective display did not lead to more accurate judgments of azimuth (F(1,13) = 0.00, p > 0.05). Mean azimuth errors for the static and rotation conditions were remarkably similar (static scene, 6.23°; rotation scene, 6.24°). However, the particular GFOV used to design the perspective display was shown to affect the magnitude of the azimuth error (F(3,39) = 5.43, p < 0.003). The mean azimuth error was largest at the 30° GFOV (6.73°), followed by the 60° (6.25°), 45° (6.08°), and 75° (5.88°) GFOVs. The Scheffé test indicated that the only significant difference in performance was between the 30° and 75° GFOVs, i.e., the GFOV with the most pronounced scene magnification resulted in the least accurate azimuth estimates.

For azimuth errors, the two-way interaction between the GFOV used to design the display and the type of display (wireframe, Lambertian shaded) was statistically significant (F(3,39) = 2.96, p < 0.04). The interaction revealed that, in comparison with the wireframe perspective display, mean azimuth errors were lowest using the Lambertian shaded scheme in combination with the 30°, 45°, and 60° GFOV displays. However, with the 75° GFOV display, the wireframe scene resulted in more accurate estimates of azimuth (mean azimuth error: wireframe scene, 5.7°; Lambertian shaded scene, 6.1°). Finally, for azimuth errors, the two-way interaction between display type and scene rotation was not statistically significant (F(1,13) = 1.69, p > 0.05).

Regarding elevation judgments, the ANOVA procedure indicated that subjects were more accurate in estimating elevation when they were allowed to rotate the scene (F(1,13) = 6.82, p < 0.02) (mean elevation error: static scene, 8.88°; rotation scene, 7.79°). However, whether the perspective display was presented as a wireframe (mean elevation error, 8.25°) or Lambertian shaded scene (mean elevation error, 8.40°) did not significantly affect the accuracy of elevation judgments (F(1,13) = 0.69, p > 0.05).

For elevation judgments, the interaction between display type (wireframe or Lambertian shaded) and the ability to rotate the scene was statistically significant (F(1,13) = 5.01, p < 0.04). Interestingly, the interaction revealed that the ability to rotate the scene was more beneficial for Lambertian shaded images than for wireframe scenes. For elevation judgments, the interaction between scene rotation and actual azimuth was also significant (F(7,91) = 5.74, p < 0.0001). When displays were rotated such that the reference and target cubes (with droplines) were parallel to the viewing vector, elevation judgments were more accurate than for images coincident with the viewing vector.

For elevation estimates, the interaction between scene rotation and actual elevation was also significant (F(7,91) = 2.93, p < 0.008). The two-way interaction revealed that scene rotations were more beneficial for larger elevation separations between the reference and target cubes (±40°) than when the target and reference cubes were closer in elevation (±10°). Finally, the interaction between image type and elevation was not significant (F(7,91) = 1.10, p > 0.05).

4. Experiment 2

Experiment 2 was designed to determine whether the use of computer-generated shadows, in combination with a Lambertian shaded scene, was an effective depth cue for improving spatial judgments using a perspective display. The length of a shadow provides cues as to the size and depth of an object in a scene, as well as the direction of a light source. Ten university students, two female and eight male, participated in the study (mean age 21.1 years). All subjects had normal or corrected-to-normal visual acuity and none of the subjects had participated in Experiment 1.
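As a concrete illustration of the shadow-length claim (this worked example is not from the paper; the light elevation angles are arbitrary), an object of height h lit by a distant source at elevation angle α casts a ground-plane shadow of length h / tan(α), so a longer shadow implies either a taller object or a lower light source:

```python
import math

def shadow_length(object_height: float, light_elevation_deg: float) -> float:
    """Length of the shadow cast on the ground plane by an object of the
    given height, assuming a distant (directional) light source."""
    return object_height / math.tan(math.radians(light_elevation_deg))

for elevation in (15, 30, 45, 60):
    print(f"light at {elevation:2d} deg -> shadow "
          f"{shadow_length(1.0, elevation):.2f} x object height")
```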

The perspective scenes were created with the same equipment and software discussed above. Furthermore, the same perspective viewing parameters were used. There were three main conditions studied: a wireframe scene, a Lambertian shaded scene, and a Lambertian shaded scene with shadows. Forty-eight perspective scenes were created for each of the three main conditions. These images were randomly selected from the set of 192 images which resulted from crossing the perspective viewing parameters (4 GFOVs × 8 azimuths × 6 elevations). Each target cube was positioned at a standard radial distance from the reference cube, which was always displayed at the center of the screen. The experimental task was the same as in Experiment 1. Training was provided by allowing the subject to practice several trials, each representative of the experimental task, with graphical feedback on azimuth and elevation judgments provided.

5. Results

Regarding azimuth judgments, the ANOVA procedure indicated that the main effects for GFOV (F(3,27) = 1.22, p > 0.05) (mean azimuth error: 30° GFOV, 7.50°; 45° GFOV, 7.60°; 60° GFOV, 6.60°; 75° GFOV, 6.0°), display type (mean azimuth error: wireframe, 5.90°; Lambertian shaded, 7.30°; shadows, 7.50°) (F(2,18) = 1.72, p > 0.05), actual azimuth (F(7,63) = 1.80, p > 0.05), and actual elevation (F(5,45) = 1.54, p > 0.05) were all not statistically significant. In addition, for azimuth judgments, the two-way interactions and the three-way interaction between GFOV, azimuth, and elevation were not statistically significant.

For estimates of elevation, the main effects for GFOV (F(3,27) = 0.26, p > 0.05) and type of display (mean elevation error: wireframe, 7.91°; Lambertian shaded, 7.74°; shadows, 7.44°) (F(3,27) = 0.90, p > 0.05) were also not statistically significant. However, the results indicated that estimates of elevation varied as a function of the actual elevation which separated the two images (F(5,45) = 10.50, p < 0.0001). For both positive (above) and negative (below) elevations, errors increased as the elevation separating the reference and target cubes increased. The main effect for actual azimuth was statistically significant (F(7,63) = 4.63, p < 0.0003). Elevation errors decreased when the position of the target cube was approximately orthogonal to the reference cube. Finally, the two-way interaction between GFOV and type of display (F(6,54) = 0.97, p > 0.05) was not significant.

6. Discussion

In both studies, compared with performance using a wireframe perspective display, the depth cues provided by Lambertian shading and computer-generated shadows were not effective in reducing the magnitude of azimuth or elevation errors. However, the results from Experiment 1 indicated that the ability to interactively rotate the scene aided elevation judgments. Interestingly, the effect of scene rotation was most pronounced for larger elevation separations between images. The 'rotation effect' is an interesting finding, since one of the benefits associated with using a perspective display is to give the observer increased access to vertical information. Moreover, since the rotation of a wireframe scene requires relatively few computational resources, it may be beneficial to allow users to interactively rotate the scene if information about vertical separation is a component of the spatial task (e.g., air traffic control). Thus, for scenes that are complex in terms of the number of polygons, computational resources used for rendering could potentially be reallocated to allow the subject to rotate the scene with an update rate sufficient to produce smooth motion. The 'rotation effect', in relation to the update rate of a simulation, should be investigated in more detail in future studies.

Consistent with results from previous studies [2, 3, 6], the geometric parameters of perspective used to design the perspective display were shown to influence spatial performance. For example, azimuth judgments were influenced by the particular GFOV used to design the perspective display, with the magnification case (the 30° GFOV) resulting in the least accurate estimates of azimuth. One effect of using a telephoto lens is to compress the depth information in the display. In effect, with a 30° GFOV, the gridlines are highly compressed in the direction towards the COP. In previous studies, Hendrix and Barfield [9] and Barfield and Rosenberg [3] showed that subjects used the intersection between the dropline and the surface for target and reference images as essential information to judge azimuth. Thus, any geometric parameter of perspective which distorts information on the display surface can be expected to decrease the accuracy of azimuth judgments. In such cases, symbolic enhancements added to the display may be necessary to assist the user in recovering azimuth information [9, 11].

Also of interest was that observers were more accurate in making elevation and azimuth judgments when the target cube was positioned close to the horizontal and vertical meridians. This pattern of performance, which is a function of the physical position of the target cube (with dropline) in relation to the grid reference, supports previous findings by McGreevy and Ellis [11] and Barfield et al. [2]. This finding also explains why there was no significant difference in performance between wireframe and Lambertian shaded images. We postulate that information provided by the gridlines, a display feature common to both display types, is essential in judging azimuth. In particular, as shown in our previous studies, the intersection of the dropline with the grid surface, relative to the horizontal and vertical meridians, is essential information to support azimuth judgments [3].

In summary, given the increased usage of perspective displays for the presentation of three-dimensional data, basic research on the parameters which may influence the design of these displays is critical. Furthermore, given the recent advances made in virtual interface technology, design information on GFOV and visual enhancements to computer-generated scenes is especially timely and important. This research, part of a continuing series of studies on this topic from our laboratory, represents a step towards providing design guidelines for perspective displays.

Acknowledgements

This research was partially funded by a grant from the National Science Foundation (DMC-857851).

References

[1] W. Barfield, R. Lim, C. Rosenberg, Visual enhancements and geometric field of view as factors in the design of a three-dimensional perspective display, in: Proceedings of the Human Factors Society 34th Annual Meeting, Human Factors Society, Orlando, FL, 1990, pp. 1470–1473.

[2] W. Barfield, C. Hendrix, O. Bjorneseth, Spatial performance with stereoscopic displays as a function of computer graphics eyepoint elevation and geometric field of view, Applied Ergonomics 26 (1995) 307–314.

[3] W. Barfield, C. Rosenberg, Judgments of azimuth and elevation as a function of monoscopic and binocular depth cues using a perspective display, Human Factors 37 (1995) 173–181.

[4] S.V. Bemis, J.L. Leeds, E.A. Winer, Operator performance as a function of type of display: Conventional versus perspective, Human Factors 30 (1988) 163–169.

[5] M. Burnett, W. Barfield, An evaluation of a plan-view versus perspective display for an air traffic controller task, in: Proceedings of the Human Factors Society 35th Annual Meeting, San Francisco, 1991.

[6] S.R. Ellis, M.W. McGreevy, Influence of a perspective display format on pilot avoidance maneuvers, in: Proceedings of the Human Factors Society 27th Annual Meeting, 1983, pp. 762–766.

[7] S.R. Ellis, M.W. McGreevy, R.J. Hitchcock, Perspective traffic display format and airline pilot traffic avoidance, Human Factors 29 (1987) 371–382.

[8] S. Ellis, G. Tharp, A. Grunwald, S. Smith, Exocentric judgments in real environments and stereoscopic displays, in: Proceedings of the Human Factors Society 35th Annual Meeting, Human Factors Society, San Francisco, CA, 1991, pp. 1442–1446.

[9] C. Hendrix, W. Barfield, Relationship between monocular and binocular depth cues for judgments of spatial information and spatial instrument design, Displays, Technology and Applications 16 (1995) 103–113.

[10] W.S. Kim, S.R. Ellis, M.E. Tyler, B. Hannaford, L.W. Stark, Quantitative evaluation of perspective and stereoscopic displays in three-axis manual tracking tasks, IEEE Transactions on Systems, Man, and Cybernetics 17 (1987) 61–71.

[11] M.W. McGreevy, S.R. Ellis, The effect of perspective geometry on judged direction in spatial information instruments, Human Factors 28 (1986) 439–456.

[12] J.D. Smith, S.R. Ellis, E.C. Lee, Perceived threat and avoidance maneuvers in response to cockpit traffic displays, Human Factors 26 (1984) 33–48.

[13] C. Wickens, Engineering Psychology and Human Performance, 2nd ed., Harper Collins, New York, 1992.

[14] C.D. Wickens, S. Todd, K. Seidler, Three dimensional displays: Perception, implementation, and applications (CSERIAC SOAR-89-01), Armstrong Aerospace Medical Research Laboratory, Wright-Patterson AFB, OH, 1989.

[15] Y. Yeh, L.D. Silverstein, Spatial judgments with monoscopic and stereoscopic presentation of perspective displays, Human Factors 34 (1992) 583–600.
