VIRTUAL REALITY FOR SCIENTIFIC VISUALIZATION
AN EXPLORATORY ANALYSIS OF
PRESENTATION METHODS
Presented to the Graduate Council of the
University of North Texas in Partial
Fulfillment of the Requirements
For the Degree of
Doctor of Philosophy
by
Gene A. Hetsel, A.A.S., B.A., M.S.
Denton, Texas
August, 1997
Hetsel, Gene A., Virtual Reality for Scientific Visualization, an Exploratory
Analysis of Presentation Methods. Doctor of Philosophy (Information Science), August
1997, 107 pp., 12 tables, 15 illustrations, references, 72 titles.
Humans are very effective at evaluating information visually. Scientific
visualization is concerned with the process of presenting complex data in visual form to
exploit this capability. A large array of tools is currently available for visual
presentation. This research attempts to evaluate the effectiveness of three different
presentation models that could be used for scientific visualization. The presentation
models studied were: two-dimensional perspective rendering, field-sequential
stereoscopic three-dimensional rendering, and immersive virtual reality rendering.
A large section of a three-dimensional subsurface seismic survey was modeled as
four-dimensional data by including a value for seismic reflectivity at each point in the
survey. An artificial structure was randomly inserted into this data model, and subjects
were asked to locate and identify the structure.
A group of seventeen volunteers from the University of Houston student body
served as subjects for the study. Detection time, discrimination time and discrimination
accuracy were recorded. The results showed large inter-subject variation in presentation
model preference. In addition, the data suggest a possible gender effect. Female subjects
had better overall performance on the task as well as better task acquisition.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF ILLUSTRATIONS

Chapter
1. INTRODUCTION
   The Presentation Models
   Scientific Visualization
   Large Holistically Relevant Data Sets
   Virtual Reality Technology
   Statement of the Problem
   Purpose of the Study
   Significance of the Study
2. LITERATURE REVIEW
   Scientific Visualization
   Virtual Reality
   Cognitive Science
3. RESEARCH METHODOLOGY
   Research Questions
   Research Hypotheses
   Research Participants
   Experimental Design
   Procedures
4. RESULTS AND ANALYSIS
   Accuracy
   Detection
   Discrimination
   Performance
5. CONCLUSIONS
   Summary
   Discussion of Results
   Implications of the Findings
   Further Research
   Limitations and Delimitations
APPENDICES
   A. Data Collection Forms
   B. Sample Subject Data
   C. Presentation Model Examples
WORKS CITED
WORKS REFERENCED BUT NOT CITED
LIST OF TABLES

Table
1. Accuracy, All Subjects
2. Accuracy, By Gender
3. Accuracy, By Video Game Experience
4. Detection, All Subjects
5. Detection, By Gender
6. Detection, By Video Game Experience
7. Discrimination, All Subjects
8. Discrimination, By Gender
9. Discrimination, By Video Game Experience
10. Performance, All Subjects
11. Performance, By Gender
12. Performance, By Video Game Experience
LIST OF ILLUSTRATIONS

Figure
1. Stimulus Data Set
2. Stimulus Data Set with Structure
3. Training Data Set
4. Chair
5. 3-D Accuracy, All Subjects, All Trials
6. 3-D Detection, All Subjects, All Trials
7. 3-D Discrimination, All Subjects, All Trials
8. 3-D Performance, All Subjects, All Trials
CHAPTER I
INTRODUCTION
Humans are very effective at evaluating information visually. Scientific
visualization is concerned with the process of presenting complex data in visual form to
exploit this capability. This research attempts to evaluate the effectiveness of three
presentation models that can be used for scientific visualization. While scientific
visualization refers to any type of data visualization, this study focused on the
visualization of large, complex data sets.
The Presentation Models
The presentation models evaluated in this study are: two-dimensional
perspective rendering of four-dimensional data, field-sequential stereoscopic three-
dimensional rendering of four-dimensional data, and immersive virtual reality rendering
of four-dimensional data. The same test data were rendered for all three presentation
techniques using conventional isosurface generation to construct a polygonal model that
approximates a contour surface. The fourth independent variable was represented by
color. The study looked at the relative effectiveness of the three presentation models
using a pattern detection and discrimination task. Three dependent variables (time on
task for detection, time on task for discrimination and response correctness) were used as
performance measures of presentation effectiveness. The first presentation
method, two-dimensional perspective rendering, is now widely used in the visualization
and analysis of three-dimensional data. Many techniques have been developed to expand
this presentation mode to four dimensions. The last two presentation methods fall into
the category of virtual reality. The idea of virtual reality was first introduced by
Sutherland over twenty-five years ago (Sutherland, 1968). It became popular about two
decades later with the availability of production systems (Brooks, 1986: Fisher et al.
1986). The idea behind virtual reality is to present a computer generated environment
with which a user can interact naturally. The human user can then benefit from the many
skills developed over an entire lifetime of interacting with the three dimensional world
that surrounds us.
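As a rough illustration of the isosurface step described above, the sketch below classifies voxels of a 3-D scalar field by where the field crosses a contour level, and maps the fourth variable to a colour index. The grid size, contour level, and colour bins are illustrative assumptions, not parameters of the study; a production renderer would run marching cubes here to emit actual polygons.

```python
import numpy as np

# Hypothetical 4-D data set: a seismic-reflectivity value at every point
# of a 3-D grid.  Grid size and contour level are invented for illustration.
rng = np.random.default_rng(0)
reflectivity = rng.normal(size=(20, 20, 20))
level = 1.0

# Isosurface sketch: flag grid cells where the field crosses the contour
# level between a voxel and its +x neighbour.
crosses = (reflectivity[:-1] < level) != (reflectivity[1:] < level)
surface_voxels = np.argwhere(crosses)

# The fourth independent variable (here, reflectivity itself) is mapped
# to a small set of colour indices, one per surface voxel.
colors = np.digitize(reflectivity[:-1][crosses], bins=[-1.0, 0.0, 1.0])

assert crosses.shape == (19, 20, 20)
assert len(surface_voxels) > 0
assert len(colors) == len(surface_voxels)
```

The same polygonal model produced by this step could then be fed to any of the three presentation pipelines; only the display stage differs.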
Scientific Visualization
Scientific visualization is the process of displaying data in a form that humans
can assimilate visually. This is useful because humans are very good at visual image
processing. We can recognize complex patterns and trends in large data sets when
presented with a visual representation of that data. Current techniques for visualization
of large multidimensional data sets involve the use of different types of presentations to
assist the user in assimilating a three-dimensional image (Tang 1993). The most
common today is the use of three-dimensional perspective display in a two dimensional
plane. Research by S. Pinker and others indicates that humans maintain three-dimensional
mental models of three-dimensional space (Pinker 1980). While the use of perspective
rendering of three dimensional data will assist the user in assimilating three dimensional
data, some amount of cognitive process is required for the user to develop a three
dimensional image of that data. Uttal (1993) has developed a taxonomy of processes
involved in the processing of visual information. Uttal identified six levels of
psychoneural processing, levels 0 through 5. Uttal states:
The main property of responses affected by processes of Level 4 is that they are characteristically multidimensionally determined; aspects or dimensions of change as a function of the joint influence of multiple aspects or dimensions of the stimulus (Uttal, W. L. 1983, pg. 19).
This study attempts to measure the effects of different presentation models on level 4 and
level 5 processes, the highest levels of psychoneural processing as described by Uttal.
When the data set is expanded to four dimensions, with the addition of an amplitude
value for each x, y, z triplet, for example, the problem of visualization becomes even
more difficult. It would appear that presentation models that assist the user in
maintaining the three-dimensional mental model would in effect reduce the cognitive
overhead required and in turn improve the user's ability to perform other cognitive tasks
such as pattern recognition. The three presentation models being studied offer different
types of information to the user. The three-dimensional rendering on a two-dimensional
display offers the highest pixel density of any of the models. The field-sequential
stereoscopic display offers true binocular disparity for depth cueing, but it does so at the
cost of one half the pixel density. True immersive virtual reality using a head-mounted
display offers head-motion parallax, true binocular disparity, and look-behind capability,
again at the cost of reduced pixel density. The information bandwidth cannot be readily
compared between the models.
Large Holistically Relevant Data Sets
Large data sets can be divided into two major classes. The first class comprises
data sets that, while large, can be analyzed in small sections. Many problems lend themselves to
solution by dividing the large data set into several smaller sections and analyzing them
one at a time. The second class requires that the whole data set be analyzed at once. The
problem or data pattern that is sought must be viewed in relation to the whole data set. I
call this second class of data sets large holistically relevant data sets. A large, holistically
relevant data set is one that is large enough that the subject cannot remember the overall
structure, and that requires visualization of the entire data set, or at least a very large
portion of it, at once. Examples of this type of data include geophysical data, weather
system model data, and many other types of real-world data.
In addition to being large, many real world problems require the representation of
at least four independent variables. For the purpose of this study, a data set that contains
four independent variables is considered a four-dimensional data set. Many examples
exist that relate some fourth independent measure to a three dimensional space. A data
set that contains surface temperature of a three dimensional surface would be considered
a four dimensional data set. Weather models often show temperature or wind velocity as
a function of location in space. The supercomputer models of thunderstorm development
represent this type of data. This is also a classical cartographic problem. Many maps
display several different characteristics as a function of location on some surface.
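A four-dimensional data set in the sense used here can be sketched as one dependent value stored at every point of a 3-D grid. The grid dimensions and the amplitude field below are invented for illustration, not taken from the study's seismic survey:

```python
import numpy as np

# Illustrative grid: 32 x 32 x 16 sample points in (x, y, z).
nx, ny, nz = 32, 32, 16
x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")

# The fourth independent variable: one amplitude value per (x, y, z) triplet.
# Any scalar field would do; this formula is a stand-in.
amplitude = np.sin(0.3 * x) * np.cos(0.2 * y) + 0.05 * z

assert amplitude.shape == (nx, ny, nz)
```

Surface temperature over terrain, wind velocity in a weather model, or seismic reflectivity in a survey volume all fit this same array shape.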
Virtual Reality Technology
Virtual reality has been defined in many different ways. For the purpose of this
research, virtual reality is defined as a three-dimensional, real-time, interactive, computer-
generated visual simulation. Systems in current use may not fully realize this definition:
they are not fully three-dimensional and may not operate in exact real time. Clearly,
virtual reality technology has many applications in areas such as
training, molecular modeling, astronomy, architecture, medicine, and entertainment
(Durlach and Mavor, 1995).
Statement of the Problem
While researchers have discussed the usefulness of VR, both immersive and non-
immersive, in scientific visualization applications (Feiner 1992, Bryson 1993), I have
found no studies that seek to measure, quantitatively, the effectiveness of these
techniques. The difference in cost of the three approaches, in dollars, programming
development time and computational overhead, is substantial, making it worthwhile to
learn the comparative effectiveness of the presentation models before continuing
development in scientific visualization tools for each. Research indicates that humans
maintain three-dimensional mental models internally (Pinker 1980), and that the ability
to develop these models from perspective rendering is an acquired skill (Epstein et al
1992). This might suggest that substantial bias could be developed by skilled users of
current visualization technology. This bias could easily mask any inherent advantage of
new technology. This study, by using novice users and abstract data construction, will
attempt to quantify the effectiveness of the presentation models independent of factors
such as familiarity with the presentation model and developed skill in imaging data sets
that would be found with more experienced users. In addition, less experienced subjects,
not having acquired as much skill, might incur more cognitive overhead in performing the
task, resulting in larger time-on-task values. The longer average times would allow larger
differences between individual time-on-task values, making those differences easier to
detect.
Purpose of the Study
The purpose of this study is to extend the basic knowledge relating to visual
presentation models and their comparative effectiveness in rendering large, holistically
relevant, four-dimensional data sets. This study is designed to look at the comparative
effectiveness of: (a) two dimensional perspective representation; (b) field sequential
stereoscopic presentation; and (c) immersive virtual reality in representing large, four
dimensional, holistically relevant data sets. The ability of subjects to assimilate a useful
image of the data and to recognize structural features, in four dimensions, is the tool used
to measure the presentation model effectiveness. The subject's time on task, to detect and
discriminate test structures, was the measured metric.
Significance of the Study
The main purpose of scientific visualization is to present data in a form that is
more useful for the researcher in discovering underlying phenomena. Bryson states:
Virtual environment interfaces are very striking, but the advantages of this interface are apparent only for those using the virtual environment system. For this reason, virtual environment interfaces are not particularly useful for the presentation of scientific results. They are useful, however, for the exploration and hopefully discovery of phenomena that can then be presented in more conventional ways. (Bryson 1993, pg. 679)
While much research has been conducted in the area of perception, the problem of
evaluating visualization presentation models is quite different. The comparative effects
of resolution, head-motion parallax, and binocular disparity in removing some of the
ambiguities that result from two-dimensional perspective rendering of multi-dimensional
data sets are still unknown.
A great deal of research is being conducted in an effort to improve display
capability. Many different types of presentation hardware solutions are currently under
development (Deering 1993). While a qualitative analysis of the presentation models
may be appropriate for evaluating display technology when used for simulation or
modeling, a more quantitative approach is needed when looking at scientific visualization
applications.
This study will provide a starting point for quantified study of presentation
models as they apply to the problems of scientific visualization of large, holistically
relevant multi-dimensional data sets.
CHAPTER II
LITERATURE REVIEW
Related Research
The literature relating to the current study can be found in four general areas.
Much of scientific visualization research has focused on the problems of visualization
itself. Most of this work looks at ways to improve current technology, both by increasing
the size of the data sets that can be visualized and by extending capability into
multi-dimensional data sets.
Virtual reality research tends to look at the capability of the technology to be
applied to the visualization problem. The major interest of VR researchers is in the area
of simulation, but some work that looks at emerging technologies, especially in the area
of visual presentation, is relevant to the current study. A major problem related to
research in this area is that much of the VR work is being done by industry and is kept
confidential for competitive reasons. This includes the major
energy exploration companies as well as companies like Lehman Brothers, a major Wall
Street investment firm. A spokesman for Lehman Brothers, quoted in the Virtual Reality
Report (1994), said the firm is "definitely using virtual reality" but declined to expand
on that for competitive reasons. This spokesman went on to say "all major investment
firms are using virtual reality because the technology is valuable in making information
overload less of a strain." The cartography literature was reviewed in two areas. The
basic problem of displaying multi-dimensional data sets is a classical cartographic
problem. This literature was also searched for an evaluation method that would be
appropriate for the current study. Cartographers are interested in the effectiveness of
maps, which form a subset of visualization techniques.
The perception literature was also reviewed, specifically, research that would help
understand the underlying process of three-dimensional model development by the
subject. This literature also helped with the research design.
Scientific Visualization
The process of data analysis can be divided into two major methods. The
confirmatory or inductive approach is (adapted from Tukey, 1977):
1. Formulate hypotheses to be tested.
2. Collect data according to an appropriate sampling scheme and select an
appropriate probability model.
3. Use the data and probability model to test the hypotheses.
This process then assumes that you have formulated ideas or questions and wish to
confirm or validate those ideas by testing them against observation.
The second major method is the exploratory approach (adapted from Tukey, 1977):
1. Gather available data.
2. Look for patterns or structure within the data.
3. Develop hypotheses based on the structure or patterns.
This process then may be used to help formulate ideas or develop hypotheses. While a
great deal of energy is expended studying the process of confirmatory data analysis and
the statistical tools have become very sophisticated, the exploratory area has received
much less attention. Many of the techniques of exploratory analysis can be grouped
together as graphical exploratory techniques. These techniques can be very useful in the
early stages of data analysis: they can help identify appropriate statistical tests or assist
in the development of new hypotheses.
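The two workflows can be contrasted in a minimal, self-contained sketch; the data and group labels below are invented for illustration, not from any study:

```python
import random
import statistics as st

random.seed(1)
# Hypothetical measurements for two conditions.
group_a = [random.gauss(10.0, 2.0) for _ in range(30)]
group_b = [random.gauss(15.0, 2.0) for _ in range(30)]

# Exploratory step (Tukey): gather the data and look for structure.
# A simple summary suggests group_b runs higher, prompting a hypothesis.
print("means:", st.mean(group_a), st.mean(group_b))

# Confirmatory step: test the hypothesis that the means differ, using a
# two-sample t statistic with a pooled standard deviation.
pooled_sd = ((st.variance(group_a) + st.variance(group_b)) / 2) ** 0.5
t_stat = (st.mean(group_b) - st.mean(group_a)) / (pooled_sd * (2 / 30) ** 0.5)
assert t_stat > 3.0  # a large |t| is strong evidence against equal means
```

The exploratory step here is deliberately visual and informal; in a real analysis it would be a plot rather than a printed summary, which is exactly where visualization tools enter.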
Recent research has focused on the extension of exploratory analysis using
computational techniques and complex tools to deal with large multivariate data sets
(Tang 1993). Many techniques including multiple presentations, geographic brushing,
scatterplot brushing and brightness mapping have been developed (Tang 1993). In his
master's thesis, Tang gives examples of cluster analysis, data integration, filtering
techniques and geo-temporal trend analysis.
While many applications of scientific visualization focus on data that represent
natural phenomena, the technique is not limited to the natural sciences. Koike states:
Information has no shape or color. Thus, one of the advantages of using diagrams is that virtual shape can be given to information to aid our comprehension and understanding.... Once such mental models are created we begin to think with these figures. Therefore, it is essential to create and portray diagrams which are designed from a human cognitive perspective. (Koike 1993)
Koike develops a three-dimensional graphical representation of a complex computer
software application. In the conclusion Koike states:
... using another spatial dimension (the 3rd dimension which is perpendicular to the 2D plane) makes it possible to represent separately two (or more) relations in one 3D diagram in a way that makes it easy for users to quickly create their own mental models. Since only one representation is given to users, they have less difficulty in constructing mental models. As we discussed before, visualization systems must be designed from a user's cognitive perspective. In particular, efforts must be made to reduce the user's cognitive load, thereby improving performance. (Koike 1993)
While Koike clearly identified the problem of assisting the user in developing mental
models and the idea of cognitive load, he stopped short of using virtual reality to both
improve the capability of the system to display more complex models and also to assist
the user in constructing and maintaining mental models.
Virtual Reality
The majority of work in the area of virtual reality is focused in areas such as
simulation. The November 1994 listing of over 160 VR articles maintained on-line by
Toni Emerson of the Human Interface Technology Laboratory at the University of Washington
contained only two non-scholarly and four scholarly articles related to visualization
(Emerson 1994). Some researchers have looked at the potential for scientific
visualization. It is clear that current technology makes VR a powerful tool
for exploratory data analysis.
Virtual environment interfaces are very striking, but the advantages of this interface are apparent only for those using the virtual environment system. For this reason, virtual environment interfaces are not particularly useful for the presentation of scientific results. They are useful, however, for the exploration and hopefully discovery of phenomena that can then be presented in more conventional ways. (Bryson 1993)
Bryson is speaking of immersive virtual environments. Some of the limitations do not
apply to non-immersive technologies.
Interest is not limited to scientific applications. A recent application that allows
users to explore the effects of market variables on the value of financial instruments,
n-vision, is described by Feiner (Feiner 1992).
Head mounted displays are considered the standard visual output device for
virtual reality systems. The real issue is not the particular output device but the visual
cues that a particular system can produce. All systems provide a number of visual cues
that are common in western perspective rendering including shading, texture, occlusion
and shape. Head mounted displays also provide head-motion cueing; that is, the view
changes as the head position changes. They provide binocular stereo images and a field
of view wide enough to produce "presence": at least 65 degrees (Deering 1993).
Several alternatives are currently available including virtual workstations and the virtual
portal. Qualitative evaluation of these has been reported (Deering 1993). A stereo head
tracking system was developed by Digital Image Design Inc. They used a molecular
model to show the system to a group of practicing chemists at the ACS (American
Chemical Society) meeting in 1991. Most felt that the VR model improved their ability to
understand the three-dimensional nature of the molecules (Paley 1992). No attempt was
made to measure the performance of the system or compare it to conventional
perspective rendering.
Several display technologies other than head mounted displays are being
considered including a range of autostereoscopic display devices. Lipton (1992) states:
I am not optimistic about the near-term availability of such products but it is, in concept, the ultimate means for displaying stereoscopic images. Meanwhile there are excellent stereoscopic, as opposed to autostereoscopic, products for viewing electronic images.
While little has been reported on quantitative performance measures of VR
display systems, a study evaluating 3D-task performance in virtual worlds was reported
by Arthur, Booth and Ware (1993). Three experiments were performed. The first rated
subjective impression of 3D. This experiment included questions such as:
Is head coupling as important, more important, or less important than stereo?
Is the combination of head coupling and stereo better than either alone?
Is head coupling alone worthwhile?
Is head coupling with stereo worthwhile?
The results of this section showed striking preference for head coupling. All users stated
that they would use it for object visualization if it were available (Arthur et al 1993).
The second experiment was a quantitative study employing a graph-tracing task.
Pairs of overlapping tree structures were presented to each subject. An end leaf was
identified and the subject was asked to identify which tree it belonged to. Error rates of
21.8% were noted for the conventional planar display, while the error rate dropped to 1.3%
for the head-coupled stereo display. Time on task was also lower in the VR than in the
non-VR environment (Arthur et al 1993). The third experiment related the effects of lag
and frame rate and is not relevant to the current study.
Brill (1994) has identified a taxonomy of VR systems, stating:
VR is a technology that is represented by a series of computer platforms that provide different ways to access a virtual environment. These different platforms are distinguished by the relation of the user to the virtual environment and by the types of equipment required to interact within a virtual environment.
Brill describes four levels or classes of VR with some subclasses. The first is the stage
world; this would include the head-mounted display (HMD) type system used in this
research. This model is a complete immersion model in which the participant is visually
surrounded by a virtual world.
The second level is the mirror world. In this type of system the user sees himself
within the world. The participant can then direct the movement of the model of
themselves. I did not find this approach appropriate for scientific visualization work.
The third level is the desktop world. This is a partial immersion system. The
taxonomy model includes both head tracked and non-head tracked systems in which the
user gets a view of the VR world from a monitor as though looking through a window
into the world. The version used in this research was full 3D stereoscopic, but without
head tracking. Head tracking would allow some look-around capability.
The fourth level of the taxonomy is called the chamber world, now commonly
called the CAVE environment. In this model the user can move about a small room, the
walls of which are rear-projection screens. Wearing special 3D glasses like the ones used
in desktop VR, the user is able to achieve complete visual immersion. One unique feature
of this type of presentation is that several people can experience the same virtual world
simultaneously. This is the most costly model, both in computing overhead and in
facilities requirements.
VR Health
Linda Jacobson, author of Garage Virtual Reality, believes that there are at least
two major areas of health concern in reference to the use of VR. The first is the health
risk of placing video monitors very near the eyes. This is of course the largest concern
with HMDs. The HMD in most cases mounts a small video monitor within a few inches
of each eye. With desktop VR the exposure is the same as with a standard computer monitor.
The second concern is the effect of prolonged viewing of 3D images.
Researchers at the University of Edinburgh in the UK and at Waseda University in
Tokyo claim that prolonged use of 3D devices can disrupt the participant's ability to
refocus after emerging from the virtual world (VR Report, Dec. 1993).
Manufacturers, including Scott Redmond of RPI Advanced Technology Group, feel
the problem is with the older technology. He states, "If you do it wrong, you have all
these problems, nausea, headaches, and blurred vision. But with the right technology, the
right software, and the right deployment, you don't have any problems."
A large portion of the research relating to VR side effects has been done by the
military and is reported as simulator sickness research. Kennedy, a major researcher in
the area of simulator sickness, states that the first documentation of simulator sickness
was by Havron and Butler (1957) for Navy training in a helicopter.
In "Prediction of Simulator Sickness in a Virtual Environment", a Ph.D.
dissertation, Kolasinski (1996) looks at several aspects of the VR simulator sickness
problem. Kolasinski identifies several factors that may affect susceptibility to simulator
sickness, including age, ethnicity, level of adaptation (VR experience), gender, mental
rotation ability, perceptual style, and postural stability. Of the factors identified by
Kolasinski, age, gender, VR experience, and ethnicity were measured. While the current
study does not involve the prediction or control of simulator sickness, it was of concern.
Simulator sickness could adversely affect the subject's ability to function immediately
following the experiment and could also influence the data. A simulator sickness
analysis instrument developed by Kennedy was used before and after immersion to ensure
that subjects were safe to return to the natural environment. A more detailed discussion
of simulator sickness can be found in Kolasinski (1995).
Cognitive Science, Perception
Concepts of visual perception in 3 dimensions
The complexity of the problem is seen in work such as that of Steven Pinker.
Pinker (1980) looks at the problem of internal representation of three-dimensional
spaces. Pinker states:
... people can form mental images that preserve the metric three-dimensional distances between objects in a scene. Therefore, mental images can be neither simple two-dimensional snapshots of visual scenes nor like Marr and Nishihara's 'primal sketch.' Second, people can form images that preserve two-dimensional metric interpoint distances as they would appear from the original viewing angle. Therefore mental images are neither like simple three-dimensional scale models, in which no particular angle of view is defined, nor like Marr and Nishihara's 'object-centered' representation.
Many factors affect the subject's ability to recognize patterns in three-dimensional
space. Dodwell (1970) states:
The generation of depth impressions purely in terms of disparities between the pattern elements at the two eyes is called "stereoptic depth perception." These disparities, also called binocular parallax, are not the only cues to depth; some of the others are perspective, interposition of objects, motion parallax (relative apparent motion of near and far objects as the observer moves his head), and shadows.
This is of interest as the three different models studied provide some or all of the above
cues. The standard perspective rendering cannot provide stereoptic depth perception but
can provide cueing from relative size and shadows. If the system performance is such
that real-time motion is available, then motion parallax is available in a limited form
when the user changes his viewing position. The stereoscopic three-dimensional or non-
immersive VR presentation model provides stereoptic depth cues and motion parallax
cueing in addition to shadow cueing. Motion parallax cueing is limited in that while
some head motion is usable, the point of view does not change and this introduces some
positional uncertainty. The immersive VR model provides all of the above described
depth cues. It is interesting to note that the study of binocular vision is rather old.
Wheatstone described the fundamental concepts of stereopsis in his description of the
stereoscope in 1838. As early as 1942 Boring had published a history of the ideas
surrounding binocular vision. (Boring, E. G. 1942)
A number of studies (e.g. Kosslyn et al. 1978) have looked at the idea of metric
space in visual images. The literature indicates that metric space is preserved in three-
dimensional visual images. In addition, a relation exists between distance scanned and
response time. Kosslyn et al. also reported that subjectively large images required more
time to scan. This does complicate the current study as it indicates that response time
may vary based on the amount of information within test images. The subjective
information content can also vary during the study. Subjects may become more
discerning over time, resulting in apparent increases in information content.
Cognitive overhead and perception
The problem with most of this work is that it is psychophysics. Psychophysics is
a laboratory discipline interested in the manner in which a subject detects or
discriminates stimuli. The limitation of this approach is that laboratory stimuli do not
contain information. When studying the ability of a system to visualize complex data
sets, I am interested in the ability of the system to convey large and complex sets of information.
Gibson (1979) speaks to this aspect when he talks of the psychophysics of space and
form perception. I think this is a major underlying characteristic of large holistically
relevant data sets. Large holistically relevant data sets contain very large amounts of
information. The problem may be one of information synthesis and analysis rather than
one of visual perception.
Gibson also describes an experiment in which animals are reared with translucent
eye-caps over their eyes from birth. When the eye-caps are removed, the animals are
functionally blind. The eyes have not degenerated, because visual stimulation was
present; rather, no visual information was present. Without visual information
present during development, the animals did not develop methods of processing that
information and could not then convert the visual information they received into mental
models of the space around them. This would support both the idea that visualization is a
learned skill and that cognitive overhead is required to perform it.
Studies such as the one by Epstein, Babler and Bowinds (1992) clearly indicate a
significant cognitive component in three-dimensional image processing. They report that
two processes are involved in the development of a three-dimensional spatial model from
a two-dimensional representation:
The early operations, culminating in representation of projective shape and representation of orientation in 3D space, are preattentive; the later operations, integrating projective shape and orientation to generate an object-centered representation of shape at a slant, are attentional.
In the process of developing a measurable task, several aspects of the process
have been reported in the psychophysics literature. In the book "New Directions in
Psychophysics" Galanter (1962) devotes a full chapter to each of two problems, the
detection problem and the recognition problem. Psychophysics is concerned with
measuring a behavioral result of physical stimuli on a subject. Two measures, in theory,
could be made for both detection and recognition. The first is the threshold of detection
or recognition; the second is the time required for that detection or recognition to occur.
Galanter developed a method for the measurement of a threshold. This is an interesting
problem. Unlike a physical system, a biological system such as a human subject does not
have an absolute threshold but rather a threshold function. That function represents a
probability of detection that increases with increasing stimuli strength. Galanter (1962)
states:
What people have done is to select some arbitrary value of the response strength and define the stimulus associated with this arbitrary value as the absolute threshold. The arbitrary value usually selected for the absolute threshold is the value at which the probability of saying "Yes" equals the probability of saying "No"; that is, where they both equal 0.50.
It should be noted that this is always an estimate of the actual threshold because the
sample size is finite. Larger sample sizes result in less error. Unfortunately, the error is
reduced by the square root of the sample size. This means that to reduce the error by a
factor of two the sample size must be four times as large.
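The square-root relation between sample size and estimation error can be made concrete. The following Python sketch is illustrative only (the function name is mine, not Galanter's); it computes the standard error of the estimated "Yes" proportion at the conventional 0.50 threshold point:

```python
import math

def proportion_se(p, n):
    """Standard error of a proportion estimated from n yes/no trials."""
    return math.sqrt(p * (1 - p) / n)

# At the conventional absolute threshold, P("Yes") = 0.50.
print(proportion_se(0.50, 100))  # 0.05
print(proportion_se(0.50, 400))  # 0.025 -- quadrupling n halves the error
```

Doubling precision thus costs a fourfold increase in trials, which is why threshold estimates from small subject pools remain rough.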
Galanter differentiates the recognition problem from that of detection. In the
recognition problem the stimulus is clearly above the threshold of detection. In the current study, the
subject must not only see the structure, but must also determine what it is. Galanter
states:
Many of the perceptual tasks that we perform are dependent upon stimulus presentations that are clearly above the detection threshold, but that can take a variety of forms. Our problem in these cases is to "recognize" which of the various possibilities has occurred.
The problem of recognition, reported in the current study as discrimination, can be
divided into two classes: one in which the stimulus is clearly above the threshold of
detection, and one of simultaneous detection and recognition. The current
study is only interested in the first. No discrimination data were recorded unless the
subject first reported detection of a structure.
A more subtle problem with the use of classical psychophysical methods is that
the stimuli in these experiments have a clearly scalable physical attribute. The strength
of the stimuli can be measured in physical units that are ratio or interval data. How this
problem relates to the current research is discussed in more detail in chapter three.
Uttal (1983) describes a taxonomy of visual processing levels. It is important to
note that "This is a taxonomy of processes and not of phenomena." The taxonomy
describes six levels from level 0 to level 5. Uttal states:
Except for level 5, this taxonomy deals with immediate responses mediated by preattentive, passive, mentally effortless neural processes of various degrees of complexity. Level 5 requires cognitive effort, attentive focusing, and active processing.
While the general ideas presented are useful in gaining an understanding of the processes
involved, the research reported uses stimuli with very limited information content: line
drawings or sets of dots.
A general discussion of the issues of visual cues in computer generated images is
provided by Wanger, Ferwerda and Greenberg (1992). The authors divide the visual
cues into primary and pictorial cues. Convergence and accommodation have been
studied for some time. Convergence is the angle between the two eyes when focusing on
an object. This cue would be useful for only a limited range because at distances over a
few meters the change in angle is too small to be detected. Accommodation is the adjustment
of the eye's lens to focus on objects. Skavenski and Steinman (1970) report that researchers
have been unable to demonstrate the feedback systems needed from the muscles of the
eye to the brain required for either convergence or accommodation.
The pictorial cues described by Wanger, Ferwerda, and Greenberg (1992) include
perspective, texture, shadow and motion. A complicating aspect of this research is the
inability to control some aspects of visual cueing. A detailed discussion of the
limitations this aspect imposes on the current research is provided in chapter five.
The final area reviewed in the perception literature was gender effects. Work by
Ernest and Paivio (1971) shows significant gender effects in imaging tasks under some
circumstances. Ernest (1968) reports that high-imager females performed significantly better
than high-imager males, while low-imagers showed no gender difference.
In a second study Ernest reports, "Again, high-imagery females excelled. They recalled
significantly more items than their male counterparts." (Ernest & Paivio, 1971) In
addition to the information on gender difference, Ernest describes the many forms of
imaging skills as well as the current range of imaging theories.
CHAPTER III
RESEARCH METHODOLOGY
Research Questions
1. Can a large, holistically relevant, four-dimensional data set be modeled using three-
dimensional perspective rendering with a two-dimensional display device?
2. Can a large, holistically relevant, four-dimensional data set be modeled using field
sequential stereoscopic display technology?
3. Can a large, holistically relevant, four-dimensional data set be modeled using
immersive virtual reality technology?
4. Can human subjects detect small relief structures in a large, holistically relevant,
four-dimensional data set when presented with a perspective representation using a two-
dimensional display device?
5. Can human subjects detect small relief structures in a large, holistically relevant,
four-dimensional data set when presented with a field sequential stereoscopic 3-D
representation?
6. Can human subjects detect small relief structures in a large, holistically relevant,
four-dimensional data set when presented with an immersive virtual reality representation
of that data set?
7. Will the time required by subjects to detect small relief structures be the same in all
three presentations?
8. Will the time required by subjects to discriminate the structure shape be the same
in all three presentations?
9. Will the performance of the two gender groups differ in the detection task?
10. Will the performance of the two gender groups differ in the discrimination task?
Research Hypotheses
1. Large, holistically relevant, four-dimensional data sets cannot be represented using
three-dimensional perspective rendering with a two-dimensional display device.
2. Large, holistically relevant, four-dimensional data sets cannot be represented in a
non-immersive virtual reality environment.
3. Large, holistically relevant, four-dimensional data sets cannot be represented in an
immersive virtual reality environment.
4. Subjects will not be able to detect small relief structures in a large holistically
relevant, four-dimensional data set when presented with a perspective representation using
a two-dimensional display device.
5. Subjects will not be able to detect small relief structures in a large holistically
relevant, four-dimensional data set when presented with a field sequential stereoscopic 3-
D representation.
6. Subjects will not be able to detect small relief structures in a large holistically
relevant, four-dimensional data set when presented with an immersive virtual reality
representation of that data set.
7. Subjects will perform simple pattern recognition tasks in both VR and IVR
environments with the same time on task for discrimination and detection as in the two-
dimensional representation.
8. No gender difference will be detected in the performance of simple pattern
recognition tasks in the VR and IVR environments.
Research Participants
Research participants were volunteers from the University of Houston student
body. The subjects were all students in introductory psychology classes and were given
extra credit in that class for participating in the study. Twenty-seven subjects were tested
resulting in seventeen usable data sets. Seven subjects' data were unusable because of hardware
failures. Two subjects were unable to complete the task and their data were discarded.
One was unable to complete the task because of simulator sickness and the other was
unable to successfully complete the training session. Six male and eleven female subjects
provided usable data. Subject age ranged from eighteen to forty years with a mean age of
25.5 years. The six male subjects ranged in age from twenty-two to thirty-six years with a
mean of 25.2 years while the eleven female subjects ranged in age from eighteen to forty
years with a mean of 25.6 years. The subjects represented fourteen different declared
majors. The subjects were required to report normal or normal corrected vision in both
eyes and normal color vision. The subjects had no previous experience with the VR
interface used in the study.
Experimental Design
This study was conducted in the Virtual Environment Technology Laboratory at
the University of Houston with the cooperation of University of Houston and NASA
personnel. The literature indicated that a significant gender effect might occur. Some
sources also indicated that large between subject variation on visualization tasks was
common. In order to control for both of these problems a two by five, repeated measures,
within subject design was used. The subjects were trained on the use of the navigation
interface and at the same time pre-exposed to the test structures they would be asked to
detect in each data-rendering mode. The subjects were also pre-exposed to the complete
data set without artificial structures. The subjects were then presented a series of pattern
recognition tasks in each of the three rendering modes.
Independent Variables
Subject gender, male or female, was recorded by the researcher during the
interview and is reported in the results section.
The presentation model, 2-D, non-immersive or immersive, was the second
independent variable.
Dependent Variables
Detection Time on Task
Detection time on task is defined as the time from stimulus presentation until the
subject reported detection of a test structure. It was measured by the data collection
computer and recorded when the subject pulled a trigger on the control apparatus.
Discrimination Time on Task
Discrimination time on task is defined as the time from stimulus presentation until
the subject reported discrimination of the structure shape. It was also measured by the
data collection computer and recorded when the subject pushed a button on the control
apparatus. The subject was required to push the button stopping the discrimination time
on task clock before reporting the structure shape. The subjects were reminded if they
failed to press the button before reporting the shape. Some subjects were observed failing to
press the button, which resulted in some error in the discrimination time on task
measurements in early sections of the study. This was minimized by use of the training
sessions and the pre-conditioning session before each series of tests.
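The two timing measures can be sketched as a pair of event-stamped clocks. The following is a hypothetical reconstruction in Python; the actual data collection code ran on the SGI workstation and is not reproduced in this document, and all class and method names here are mine:

```python
import time

class TrialClock:
    """Records detection and discrimination times from a common start.
    Illustrative sketch only; not the study's actual collection code."""
    def __init__(self):
        self.start = time.monotonic()
        self.detection_time = None        # set when the trigger is pulled
        self.discrimination_time = None   # set when the button is pushed

    def trigger_pulled(self):
        """Subject reports detection of a structure."""
        if self.detection_time is None:
            self.detection_time = time.monotonic() - self.start

    def button_pushed(self):
        """Subject reports discrimination of the structure shape."""
        if self.discrimination_time is None:
            self.discrimination_time = time.monotonic() - self.start
```

Ignoring repeated trigger pulls after the first mirrors the procedure above: only the initial detection report stops the detection clock.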
Accuracy
The subjects reported verbally what they thought the shape was. The correctness
of this response was recorded manually by the experimenter. The subject was advised if
the response was correct. These data were later integrated with the data collected
automatically during the session.
Equipment
Computer equipment used at the VETL consisted of a Silicon Graphics Iris reality
engine(tm) for image rendering, a Silicon Graphics Indy(tm) workstation for experimental
control and a Silicon Graphics Indigo(tm) workstation for model storage and data
collection. The final data models were constructed using SGI Inventor(tm) with the
rendering done on the Iris reality engine using SGI Performer(tm). All support code was
written in C or C++ on the SGI Indy(tm) workstation.
Display of the data was done using a SGI high resolution, twenty-inch, monitor for
both the 2-D and field sequential stereographic display modes. The field sequential
stereographic presentation utilized a Stereographics Corporation CrystalEyes(tm) system.
The immersive VR was presented using a VPL Research EyePhone(tm) HMD with
headtracking provided by a Polhemus FastTrack(tm) magnetic position tracking system.
The navigation interface used in all presentation modes was the NASA hcc chair.
This unit provides a full six degrees of freedom navigation interface. The control system
was developed for the control of spacecraft, starting with the Apollo manned missions.
This unit was built for simulation training by NASA's Johnson Space Center.
Modeler software
Silicon Graphics Corp. Open Inventor(tm), an object-oriented toolkit for
developing interactive 3D graphics applications, was used for model development.
The same models were used in all three presentation modes.
VR software
Silicon Graphics Corp. Performer 2.0(tm) Virtual Reality rendering software was
used to present the data sets.
Study Data Set
The basic study data set is comprised of an actual subsurface geophysical structure
imaged with 3-D seismic techniques. The original seismic survey data were provided by
GeoTrace of Dallas, Texas. The data represent about one square mile of survey located in
North Texas. The data were processed and a horizon pick (a horizontal slice through the
earth) performed by geoscientists at GeoTrace. The horizon pick file was supplied in
industry standard SEG-Y format. A program was written to convert these data into a
surface array that could be processed by the SGI modeling software. The original three-
dimensional conversion software was written by JSC NASA personnel. I later modified
the three-dimensional conversion software to accommodate four-dimensional data.
The fourth data dimension mapped onto the structure surface was seismic
amplitude. The seismic amplitude was represented at each point by color. The amplitude
values were mapped into four intervals with a color assigned to each. The modeling
software was then used to provide smooth color transitions over the surface. The test
presentations were produced by adding an array containing test object images to the base
data array. This results in changing only the Z value of each point within the structure
area. The Z value offset of a test object could be varied to present different levels of
relief. The software treated all variables as integer data. The study data set is a sixty by
seventy-five point array containing 4,500 (x,y,z,a) data points.
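The amplitude-to-color binning described above can be sketched briefly. This Python fragment is illustrative only: the four-color palette and the function name are my assumptions, and the original conversion code (written in C or C++) is not reproduced here.

```python
def amplitude_to_color(a, a_min, a_max):
    """Bin a seismic amplitude value into one of four color intervals.
    The palette below is an assumed example, not the study's actual colors."""
    colors = ["blue", "green", "yellow", "red"]
    span = (a_max - a_min) / 4.0
    index = min(int((a - a_min) / span), 3)  # clamp the top edge into bin 3
    return colors[index]

print(amplitude_to_color(10, 0, 100))  # blue (first interval)
print(amplitude_to_color(99, 0, 100))  # red (fourth interval)
```

The modeling software then interpolated smoothly between these interval colors over the surface, as described above.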
Figure 1. Stimulus Data Set
Several different structure shapes were considered. After viewing the test data
with different structure shapes imposed, two were selected. The final test structures were
squares and triangles. Each shape represented an area six units by six units. The software
that randomly located the test structures limited the edge distance so that the structures
would not overlap the edge of the data set.
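The placement and relief-offset logic might look like the following Python sketch. This is a hypothetical reconstruction (the study software was written in C/C++): it places a six-by-six square structure in the sixty-by-seventy-five array so that it cannot overlap the edge, and raises only the Z values within the structure area.

```python
import random

GRID_W, GRID_H, SHAPE = 60, 75, 6   # data set and structure dimensions from the text

def place_structure(z, relief, rng=random):
    """Randomly place a SHAPE x SHAPE square of the given relief into the
    z array (a list of GRID_H rows), keeping it clear of the edges."""
    x0 = rng.randrange(0, GRID_W - SHAPE + 1)
    y0 = rng.randrange(0, GRID_H - SHAPE + 1)
    for y in range(y0, y0 + SHAPE):
        for x in range(x0, x0 + SHAPE):
            z[y][x] += relief          # only the Z value of each point changes
    return x0, y0

z = [[0] * GRID_W for _ in range(GRID_H)]
x0, y0 = place_structure(z, relief=4)
```

A triangular structure would differ only in which cells of the six-by-six footprint receive the offset.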
Figure 2. Stimulus Data Set with Structure
A pilot study was conducted to determine the range of stimuli to be used. A total
of twenty-seven pilot subjects were used; some were not exposed to all three presentation
modes. The lowest relief value of two units was almost never detected and the highest
relief value of six units was always detected. Two sets of five stimulus relief values were
used, five each for the triangle and the square shapes. The range was limited by the unit
increment of the presentation system. Software was developed to create the presentation
data sets. In total, two hundred study data sets were created. During the pilot study, it
was determined that only twenty presentation sets would be used. The pilot study
demonstrated that considerable differences in difficulty existed between the presentation
task sets. It was originally thought that large relief structures would be easy to detect and
discriminate but that the relief of the structure would determine the task difficulty. This
was not the case. Because of the interference effect of the fourth variable, color, location
as well as the relief affected task difficulty. In addition, while it was true that large relief
structures were very easy to detect, they were not necessarily easy to discriminate. Very
tall structures required more navigation time in order to get a viewing point that would
allow discrimination.
A within subject repeated measure design allowed subjects to be exposed to the
same data sets in all three presentation modes. The complexity of the data set coupled
with long session times resulted in little learning effect. This approach allowed comparing
the same presentation set in each mode, thereby eliminating the task scaling problem.
Procedures
Test subjects were given an informed consent form (see appendix) that outlined the
study when they arrived. After reading the form, any general questions they had about the
study were answered. Subjects were then placed in the experimental setting and asked
general information questions. Answers were recorded on test data record forms. Data
included the subject's age, gender, major area of study, and whether they regularly played video
games.
Training
After completion of the information form the first training session was started.
Subjects were presented a training data set consisting of a flat surface with color mapping
of the actual data set (y=0, z=actual). The first training set had a square structure. The
general task was explained while the subject viewed the first training data set.
Figure 3. Training Data Set
The subjects were asked to perform two tasks. The first was to determine if one
of the artificial structures was present. The subjects were told that not more than one
structure would be present in any one presentation. The subjects were also advised that
some data sets might have no structure at all, but in no case would more than one
structure be present. The second task was to discriminate the shape of the structure, if
present. The subjects were shown the operation of the controls needed to navigate the
data space. I also explained that this experiment was designed to test the effectiveness of
the presentation modes and not the subject's performance in relation to other subjects.
Subjects were asked not to guess on any of the test sessions. I also explained that
guessing could provide invalid data about the presentation modes and asked that, if they
were unsure, they report that rather than guess.
Navigation interface
The navigation interface is a complete six degrees of freedom interface that allows
a subject to fly to any point in the data set and also to look in any direction. The interface
consists of a hand controller used with the left hand and a control stick used with the right
hand. The left hand controller controls translation in three dimensions. This control
allows the subject to move up and down, right and left, as well as forward and backward.
With the left hand control, a subject's viewing point could be moved to any location in or
around the data set. The right hand control stick allowed subjects to change the viewing
angle. This controller allowed pitch forward or backwards, rotation left or right as well as
roll left or right. From a given point, the subject could look in any direction by using the
right hand controller. From previous experience with the interface at NASA, it was
determined that limiting the pitch to 60 degrees and roll to 30 degrees would reduce the
chances of an inexperienced user becoming disoriented. Pitch limitation also prevented
subjects from getting directly overhead and looking down on the data set. The task
becomes rather trivial from directly overhead, and I was interested in the pattern
recognition elements and not the cognitive problem solving issue of orientation. The right
hand controller also contained several buttons and a trigger switch similar to an aircraft
control stick. Subjects were instructed to signal detection of a structure by squeezing the
trigger. This action was recorded by the data acquisition system, but no feedback was
given the subject. Subjects were advised to press a button on the control stick when they
had determined the shape of the structure. This stopped the presentation and recorded the
time. Subjects then verbally reported the shape and the experimenter advised if the
response was correct.
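The pitch and roll limits imposed on the right hand controller amount to a simple clamp. A minimal sketch in Python (the actual interface code was written in C/C++ and is not reproduced here; the function and parameter names are mine):

```python
def clamp_attitude(pitch, roll, pitch_limit=60.0, roll_limit=30.0):
    """Clamp a requested pitch and roll (in degrees) to the study's limits."""
    clamp = lambda value, limit: max(-limit, min(limit, value))
    return clamp(pitch, pitch_limit), clamp(roll, roll_limit)

print(clamp_attitude(85.0, -45.0))  # (60.0, -30.0): both limits applied
print(clamp_attitude(10.0, 5.0))    # (10.0, 5.0): within limits, unchanged
```

Capping pitch at 60 degrees is what kept subjects from hovering directly overhead, where the task would have become trivial.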
Figure 4. Navigation Chair
Subjects were told that it would be useful to be able to circle (fly-around) the
structures in order to identify them. Each subject was taught how to perform this
maneuver. This required the coordinated use of both controls and some level of spatial
awareness. Sessions did not proceed until subjects were able to perform this maneuver.
Training time was recorded for each subject. After successfully circumnavigating a square
structure subjects terminated the session by pressing a button on the control stick, as they
would in test sessions. Subjects were then presented a triangular structure. They were
then asked to circumnavigate this structure. Subjects were advised that the test structures
would be of the same size and same orientation as the training sample they had just
encountered. They were also told that test structures would only vary in height (relief)
and location. They could be any height and could occur anywhere in the data set. If
subjects had trouble seeing the shapes, the experimenter described or even pointed out the
characteristics that made up the shape.
Two-dimensional presentation
Each subject was tested first in the traditional model (2-D computer monitor). The
initial training was done in this mode so no additional pre-exposure was given. Subjects
were reminded before the start of the first data set to squeeze the trigger when they saw a
structure and to press the button on the control stick when they had identified its shape. If
it appeared that subjects were detecting structures and waiting until discrimination to
squeeze the trigger they were reminded of the procedure. Ten data sets were presented in
the normal two-dimensional presentation mode.
Field sequential stereoscopic three-dimensional presentation
Following completion of the initial presentation mode, the use of the Crystal Eyes
stereographic system was explained. The sample training set was again presented, and
subjects were asked to circumnavigate the training structure while wearing the Crystal
Eyes glasses. The subjects were asked if they saw an apparent three-dimensional image
before proceeding. An idiosyncrasy of the VR system at VETL results in a brief
presentation of the last view seen when the system was stopped before a new presentation
is shown. Subjects were reminded of this problem and advised to wait until they saw the
new data set to start. This was not a major problem as all data sets started from the same
location well outside the data, and in almost every case subjects terminated the last
session rather close to the structure they were discriminating. After demonstrating this
characteristic and allowing the subjects to fly around the training set to get comfortable
with the three-dimensional environment, they were asked to terminate the training session
by pressing the button on the control stick as they had earlier. Again ten data sets were
presented in the same manner as before. Subjects' verbal responses were recorded by the
experimenter.
Immersive virtual reality, HMD
After completing the three-dimensional session, subjects were given a pre-immersion
questionnaire (see appendix) to collect general information for an immersive
VR physiologic side effects study. After recording the general information, the
physiologic effects instrument was administered. The experimenter reviewed this form for
any indication that the session should be terminated. If any of the indications on the
questionnaire exceeded "mild" the session was terminated. This occurred in only one
case. The operation of the head-mounted display (HMD) was then explained. Subjects
were shown how to adjust focus and inter-ocular distance. The same training data set
used in the previous two sessions was started in HMD mode and the HMD was placed on
the subject's head. The subjects were asked to adjust the focus and the inter-ocular
distance and then to confirm that they did see a single three-dimensional image. I
explained that the controls worked the same as before but now the subjects could change
the direction of view by moving their head. Subjects were asked to move their head to
each side as well as up and down to demonstrate the available motion. The subjects were
then allowed some time to fly around a sample data set in the immersive mode. When
subjects said that they were comfortable with the HMD, they were asked to terminate the
presentation by pressing the button on the control stick as they had in the previous
sessions.
Following the training session, ten data sets were presented in the same manner as
the two-dimensional and three-dimensional tests. A subject's time on task and search path
were recorded by the system in a subject data file; the subject's discrimination responses
were reported verbally and recorded by the experimenter. After completing ten data sets,
the HMD was removed and the subject completed a second physiologic effects
questionnaire. Subjects were given the chance to take a break at this time. Removing the
HMD disrupts its focus and inter-ocular distance settings. When the subjects returned
after a short break, the training data set was again presented, and the subjects were
allowed to re-adjust the HMD. When the subjects advised they were comfortable with the
HMD adjustment ten more data sets were presented. At the completion of the second ten
data sets the subjects were again asked to complete the physiology effects questionnaire.
If any subject complained of discomfort during the immersive portion, the session was
terminated; this occurred in only one case.
Replication of field sequential stereoscopic three-dimensional presentation
Following the physiology survey, the 3-D mode was tested a second time. The
training data set was presented and subjects were allowed to re-familiarize themselves with the 3-D mode.
They were also reminded of the short flashback problem associated with the 3-D
presentation before starting. Again ten data sets were presented with subjects' verbal
discrimination responses recorded by the experimenter.
Replication of two-dimensional presentation
The final session was a repeat of the very first session using the standard 2-D
presentation mode. Again, the training set was presented to verify the system operation
and familiarize subjects with this mode before starting. Ten data sets were presented in
the final session. After completing the final ten data sets, subjects were asked if they had
any general questions about the experiment. They were then asked if they had a preference
for any one of the presentation modes or if they experienced any trouble with any one of
the modes.
All of the presentations viewed by the subjects were recorded on videotape.
Search positions and view directions were also recorded digitally each second.
Procedures for Analyzing the Data
While the original research concept called for measuring the detection threshold
for each subject in each presentation mode, the pilot study demonstrated that this would
prove impossible. The number of presentations possible in a reasonable time period was
simply too small to allow any type of reliable data on threshold. The time on task, on the
other hand, was easily acquired. These data were recorded, and the actual time on task,
the difference in performance by each subject across the presentation models, and the
difference in performance by subject between repeated sessions within each presentation model
were computed.
CHAPTER IV
RESULTS AND ANALYSIS
Data Analysis Method
Detection and discrimination time data were recovered by the data collection
computer. A sample file output is provided in Appendix B. Detection and discrimination
times were extracted from the raw data and transferred to a Microsoft Excel spreadsheet.
The first trial of each presentation set was treated as a conditioning trial. The time for the
first trial was collected but was not considered during analysis. A subject's response
correctness was then transferred from the subject's data form and entered into the
spreadsheet. Spreadsheet data were then checked against data collection forms for error
in either the response or the presentation data set. The presentation data set
identifications were recorded on each subject's data sheet and also recorded by the data
collection computer with the other data. A second spreadsheet containing demographic
data for all subjects was constructed and checked against the data collection forms.
In addition to the raw measures, I also analyzed one computed measure that I have
labeled performance. Performance is ten times the average accuracy divided by average
discrimination time. This gives an index of overall performance by combining the
accuracy with the time the subject took to get that accuracy.
While artificial, it does provide an overall view of the task. Accuracy can range from 0 to
1. The average discrimination times ranged from 16.74 seconds to 90.57 seconds. One
over the average discrimination time is therefore less than one, giving this index a range
from 0.0478 to 0.7867.
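The computed index described above amounts to a one-line formula. The following sketch (with hypothetical input values, not data from the study) illustrates the computation.

```python
def performance(avg_accuracy, avg_discrimination_time):
    """Performance index: ten times the average accuracy divided by the
    average discrimination time in seconds."""
    return 10 * avg_accuracy / avg_discrimination_time

# Hypothetical example: 0.8 accuracy with a 20-second average
# discrimination time yields an index of 0.4.
print(performance(0.8, 20.0))
```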
Two factors that I thought might be of concern were gender and video game
playing experience. Some early literature suggests that a difference in image processing
occurs between the genders. It is unclear what the underlying process is, but several
experiments indicate that males and females process image information differently. More
recently it has been reported that both gender and cultural background may have an effect
on susceptibility to simulator sickness (Kolasinski 1995). Both Kennedy and Kolasinski
have reported that as groups, women and Asians are more susceptible to simulator
sickness. While the number of non-westerners in the study was far too small to provide
any insight into the cultural question, the only subject discarded because of simulator
sickness was a non-western woman.
The second factor that I thought might have an impact was video game playing
experience. Many college-age people have considerable experience using different types
of image-system interfaces via video games. These games represent both two-dimensional
and three-dimensional perspective rendering. Subjects were asked if they were regular
video game players. Six subjects reported some video game playing while two reported
regularly playing video games. While the interface used in this experiment was
significantly different from any that one would encounter in video games, I still felt that
some positive transfer would occur. I will discuss both gender and video game effects for
each of the three direct measures and the one computed measure.
Accuracy
Accuracy is the percent of correct responses for a given presentation trial. In this
measure, a higher score represents better task performance. Table 1 summarizes accuracy
data for all subjects. The average value for each test set, standard (std), stereo (str) and
HMD is listed along with overall session average score for each subject.
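As defined above, accuracy is simply the proportion of correct responses in a presentation set. A minimal sketch with hypothetical responses (not data from any actual subject):

```python
# Hypothetical responses for one ten-trial presentation set
# (True = structure identified correctly)
responses = [True, True, False, True, True, False, True, True, True, False]

# Accuracy is the fraction of correct responses, ranging from 0 to 1
accuracy = sum(responses) / len(responses)
print(accuracy)  # 0.7
```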
Table 1.-- Average Accuracy Score by Subject

Sub #  Sex  Age   std   str   hmd   hmd   str   std   average
3      m    25    0.4   0.5   0.7   0.5   0.4   0.4   0.4833
5      f    40    0.7   0.9   0.9   0.7   0.7   0.8   0.7833
6      f    21    0.6   0.7   0.4   0.4   0.7   0.7   0.5833
8      f    25    0.4   0.7   0.8   0.8   0.8   0.5   0.6667
9      m    22    0.9   0.8   0.5   0.7   0.7   0.8   0.7333
10     f    22    0.6   0.7   0.6   0.8   0.9   0.9   0.7500
11     f    31    0.6   0.7   0.7   0.8   0.8   0.8   0.7333
13     f    34    0.9   0.8   0.5   0.7   0.9   0.8   0.7667
14     f    24    0.9   0.8   0.7   0.9   0.9   0.9   0.8500
15     m    36    0.5   0.6   0.8   0.9   0.9   0.7   0.7333
16     f    24    0.8   0.9   0.9   0.9   0.8   0.6   0.8167
17     m    22    0.9   0.4   0.6   0.6   0.8   0.6   0.6500
18     f    18    0.8   0.7   0.8   0.8   0.9   1.0   0.8333
19     f    21    0.5   0.6   0.6   0.9   0.9   0.8   0.7167
20     f    22    0.6   0.7   0.6   0.6   0.9   0.8   0.7000
21     m    22    0.9   0.7   0.8   0.9   0.8   0.8   0.8167
22     m    24    0.6   0.7   0.6   0.5   0.5   0.5   0.5667
n 17
Mean        25.5  0.68  0.70  0.68  0.73  0.78  0.73  0.7167
Max         40    0.90  0.90  0.90  0.90  0.90  1.00  0.8500
Min         18    0.40  0.40  0.40  0.40  0.40  0.40  0.4833
It is clear that large inter-subject variation exists. Subject eight, for example,
shows improved accuracy in the non-immersive VR environment and even greater
accuracy in the immersive VR environment. Subject nine shows just the opposite with the
best accuracy in the traditional presentation, slightly lower accuracy scores in the non-
immersive and the lowest scores in the immersive VR environment.
The gender effect was the most obvious. While still not significant at the .05 level
(actual one-tailed probability .056), the overall accuracy scores of the females were
higher than those of the males. Not only was the overall average accuracy score higher
for females, but five of the six presentation model score averages were higher. Only the
first presentation score was lower for females, and that by only 3 percent.
Table 2.-- Average Accuracy Scores Grouped by Gender

Females
Sub #  Sex  Age   std   str   hmd   hmd   str   std   average
5      f    40    0.7   0.9   0.9   0.7   0.7   0.8   0.7833
6      f    21    0.6   0.7   0.4   0.4   0.7   0.7   0.5833
8      f    25    0.4   0.7   0.8   0.8   0.8   0.5   0.6667
10     f    22    0.6   0.7   0.6   0.8   0.9   0.9   0.7500
11     f    31    0.6   0.7   0.7   0.8   0.8   0.8   0.7333
13     f    34    0.9   0.8   0.5   0.7   0.9   0.8   0.7667
14     f    24    0.9   0.8   0.7   0.9   0.9   0.9   0.8500
16     f    24    0.8   0.9   0.9   0.9   0.8   0.6   0.8167
18     f    18    0.8   0.7   0.8   0.8   0.9   1.0   0.8333
19     f    21    0.5   0.6   0.6   0.9   0.9   0.8   0.7167
20     f    22    0.6   0.7   0.6   0.6   0.9   0.8   0.7000
n 11
Mean        25.6  0.67  0.75  0.68  0.75  0.84  0.78  0.7455
Max         40    0.90  0.90  0.90  0.90  0.90  1.00  0.8500
Min         18    0.40  0.60  0.40  0.40  0.70  0.50  0.5833

Males
3      m    25    0.4   0.5   0.7   0.5   0.4   0.4   0.4833
9      m    22    0.9   0.8   0.5   0.7   0.7   0.8   0.7333
15     m    36    0.5   0.6   0.8   0.9   0.9   0.7   0.7333
17     m    22    0.9   0.4   0.6   0.6   0.8   0.6   0.6500
21     m    22    0.9   0.7   0.8   0.9   0.8   0.8   0.8167
22     m    24    0.6   0.7   0.6   0.5   0.5   0.5   0.5667
n 6
Mean        25.2  0.70  0.62  0.67  0.68  0.68  0.63  0.6639
Max         36    0.90  0.80  0.80  0.90  0.90  0.80  0.8167
Min         22    0.40  0.40  0.50  0.50  0.40  0.40  0.4833

All Subjects
n 17
Mean        25.5  0.68  0.70  0.68  0.73  0.78  0.73  0.7167
Max         40    0.90  0.90  0.90  0.90  0.90  1.00  0.8500
Min         18    0.40  0.40  0.40  0.40  0.40  0.40  0.4833
Table 3.-- Average Accuracy Scores Grouped by Video Game Playing Experience

Regularly Play Video Games
Sub #  Sex  Age    std   str   hmd   hmd   str   std   average
3      m    25     0.4   0.5   0.7   0.5   0.4   0.4   0.4833
21     m    22     0.9   0.7   0.8   0.9   0.8   0.8   0.8167
n 2
Mean        23.50  0.65  0.60  0.75  0.70  0.60  0.60  0.6500
Max         25.00  0.90  0.70  0.80  0.90  0.80  0.80  0.8167
Min         22.00  0.40  0.50  0.70  0.50  0.40  0.40  0.4833

Occasionally Play Video Games
6      f    21     0.6   0.7   0.4   0.4   0.7   0.7   0.5833
9      m    22     0.9   0.8   0.5   0.7   0.7   0.8   0.7333
10     f    22     0.6   0.7   0.6   0.8   0.9   0.9   0.7500
15     m    36     0.5   0.6   0.8   0.9   0.9   0.7   0.7333
16     f    24     0.8   0.9   0.9   0.9   0.8   0.6   0.8167
22     m    24     0.6   0.7   0.6   0.5   0.5   0.5   0.5667
n 6
Mean        24.83  0.67  0.73  0.63  0.70  0.75  0.70  0.6972
Max         36.00  0.90  0.90  0.90  0.90  0.90  0.90  0.8167
Min         21.00  0.50  0.60  0.40  0.40  0.50  0.50  0.5667

Rarely or Never Play Video Games
5      f    40     0.7   0.9   0.9   0.7   0.7   0.8   0.7833
8      f    25     0.4   0.7   0.8   0.8   0.8   0.5   0.6667
11     f    31     0.6   0.7   0.7   0.8   0.8   0.8   0.7333
13     f    34     0.9   0.8   0.5   0.7   0.9   0.8   0.7667
14     f    24     0.9   0.8   0.7   0.9   0.9   0.9   0.8500
18     f    18     0.8   0.7   0.8   0.8   0.9   1.0   0.8333
19     f    21     0.5   0.6   0.6   0.9   0.9   0.8   0.7167
20     f    22     0.6   0.7   0.6   0.6   0.9   0.8   0.7000
17     m    22     0.9   0.4   0.6   0.6   0.8   0.6   0.6500
n 9
Mean        26.33  0.70  0.70  0.69  0.76  0.84  0.78  0.7444
Max         40.00  0.90  0.90  0.90  0.90  0.90  1.00  0.8500
Min         18.00  0.40  0.40  0.50  0.60  0.70  0.50  0.6500

All Subjects
n 17
Mean        25.47  0.68  0.70  0.68  0.73  0.78  0.73  0.7167
Max         40.00  0.90  0.90  0.90  0.90  0.90  1.00  0.8500
Min         18.00  0.40  0.40  0.40  0.40  0.40  0.40  0.4833
While the accuracy data seem to support the gender effect, the same is not true of the
possible effect of experience with video games. The very small number of video game
players makes it impossible to make any generalizations, but for the two regular players it
appears that some negative transfer occurred. The video game players scored below the
average of all subjects in all trials except the first immersive VR trial. The video game
players seemed to perform slightly better in the first immersive experience but did not
continue to perform above average in the second. The occasional players also had lower
accuracy, on average, in all but one category. The differences are very small and the
number of observations was far too small to draw any conclusions, but the result was
certainly not what I had expected.
[Figure 5. 3-D Accuracy: three-dimensional chart of average accuracy (Y-axis, 0.0 to 1.0) by subject and presentation trial]
Figure 5, a three-dimensional graph, gives an overall view of the subjects'
accuracy. This visualization provides an overall view of the entire experiment for one
dependent variable, in this case accuracy. This graph, like the three others to follow, will
provide an overview of the average performance in each presentation trial set for all trials
and all subjects. The Y-axis displays the average accuracy, the X-axis represents the
subject number from 1 to 23 and Z-axis the presentation trial number. The subject
numbers are not consecutive because some subjects did not complete the study and are
therefore not shown. Since this axis is a subject listing and not scaled data, the missing
values in the X-axis do not introduce any distortion or other type of error. The Z-axis is
reversed for better viewing. The first trial set is at the back and the last trial set is at the
front.
Several aspects of the study can be seen in this visualization. The subjects could of
course be listed in any order. Since this display lists subjects in the order that the sessions
actually occurred, any effects that occurred across the sessions would appear as a general
slope of the data surface from left to right. The lack of any such slope indicates that no
longitudinal effects occurred over the term of the experiment. Remember that this graph
is actually a histogram and the points are average values for different trials. The
connecting surface is to aid visualization only. The values between the points represented
by lines and surface shading are not real.
The experimental trials, Standard, Stereo and HMD, were repeated in the reverse
order HMD, Stereo and Standard. Thus, the standard presentation data is the very front
and very back of the graph for each subject. Both of the HMD trials are together in the
middle and the stereo trials are on either side of the HMD trials. A subject that has better
performance in the HMD mode would therefore have a hump in the middle of this line.
An example of this would be subject number eight. Note that for subject eight both the
front and back values are lower than the middle. While the number of observations was
far too small for any meaningful statistical analysis, this visualization can yield a sense of
the data collected. This same type of presentation will be provided for detection,
discrimination and overall performance.
Detection
Detection is the time required for a subject to recognize that a structure exists in
the data set. It is a time measure reported in seconds and a lower score represents better
task performance.
Detection time represents the time required for the subject to recognize a structure
in the data set and press a button. Only detection times associated with correct responses
are used. Detection was considered correct if the subject correctly identified the structure
at the end of the discrimination section. While it is possible that a subject detects a
structure but cannot discriminate it, I had no other way to confirm correct detection. This
is discussed in more detail in Chapter V.
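The filtering rule described above (keep detection times only from trials whose final identification was correct, and drop the first, conditioning trial of each set) can be sketched as follows; the record layout here is hypothetical, not the actual file format shown in Appendix B.

```python
# Hypothetical trial records: (trial_number, detection_time_s, correct)
trials = [
    (1, 12.4, True),   # first trial of the set: conditioning, excluded
    (2, 8.1, True),
    (3, 30.0, False),  # wrong identification, so detection is unconfirmed
    (4, 5.2, True),
]

# Keep detection times only for correct, non-conditioning trials
valid = [t for (n, t, ok) in trials if ok and n > 1]
avg_detection = sum(valid) / len(valid)
print(round(avg_detection, 2))  # 6.65
```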
Again, large variations between subjects occurred. Eight of the seventeen subjects
had higher detection times in the immersive environment than any other mode. It is
interesting to note that a large drop in detection times occurred for most subjects after the
immersive experience. This is very obvious in the three-dimensional chart (figure 6).
Two subjects, subjects 20 and 21, did show a clear decrease in detection time in
the immersive environment with that time going back up in the non-immersive
environment and going even higher in the traditional two-dimensional presentation.
Gender again appears as a possible factor. The female subjects' overall average
detection times were lower than the males and again the females performed better than the
males in all but one presentation mode.
Table 4.-- Average Detection Times by Subject

Sub #  Sex  Age  std    str    hmd    hmd    str    std    average
3 m 25 55.29 37.45 46.86 24.83 03.62 07.90 29.3252
5 f 40 04.92 05.85 08.73 11.46 01.64 02.94 5.9228
6 f 21 05.71 06.91 10.66 12.26 03.99 02.22 6.9577
8 f 25 27.68 03.48 02.98 01.56 01.82 01.62 6.5213
9 m 22 23.20 07.46 08.62 07.97 05.32 02.93 9.2506
10 f 22 07.03 02.65 11.95 08.82 04.70 01.69 6.1428
11 f 31 02.34 01.18 13.25 07.19 01.67 03.06 4.7824
13 f 34 15.45 02.73 02.94 05.10 03.06 03.39 5.4437
14 f 24 03.13 02.80 07.81 02.07 03.36 02.27 3.5745
15 m 36 17.40 20.73 27.13 13.93 04.45 00.96 14.1016
16 f 24 17.62 13.32 16.55 04.93 03.37 05.91 10.2844
17 m 22 13.80 04.90 15.22 13.17 03.15 02.17 8.7359
18 f 18 05.94 04.31 14.91 24.95 03.60 08.90 10.4363
19 f 21 11.65 02.71 20.05 18.76 07.22 14.32 12.4518
20 f 22 26.02 23.47 17.56 20.95 09.55 08.38 17.6557
21 m 22 10.57 08.64 03.52 01.83 06.02 05.74 6.0540
22 m 24 05.86 04.02 03.32 06.09 03.62 01.87 4.1293
n 17
Mean 25.5 14.92 08.98 13.65 10.93 04.13 04.49 9.5159
Max. 40 55.29 37.45 46.86 24.95 09.55 14.32 29.3252
Min. 18 02.34 01.18 02.94 01.56 01.64 00.96 3.5745
Table 5.-- Average Detection Time Grouped by Gender

Females
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
5      f    40     04.92  05.85  08.73  11.46  01.64  02.94  5.9228
6      f    21     05.71  06.91  10.66  12.26  03.99  02.22  6.9577
8      f    25     27.68  03.48  02.98  01.56  01.82  01.62  6.5213
10     f    22     07.03  02.65  11.95  08.82  04.70  01.69  6.1428
11     f    31     02.34  01.18  13.25  07.19  01.67  03.06  4.7824
13     f    34     15.45  02.73  02.94  05.10  03.06  03.39  5.4437
14     f    24     03.13  02.80  07.81  02.07  03.36  02.27  3.5745
16     f    24     17.62  13.32  16.55  04.93  03.37  05.91  10.2844
18     f    18     05.94  04.31  14.91  24.95  03.60  08.90  10.4363
19     f    21     11.65  02.71  20.05  18.76  07.22  14.32  12.4518
20     f    22     26.02  23.47  17.56  20.95  09.55  08.38  17.6557
n 11
Mean        25.64  11.59  6.31   11.58  10.73  4.00   4.97   8.1976
Max         40.00  27.68  23.47  20.05  24.95  9.55   14.32  17.6557
Min         18.00  2.34   1.18   2.94   1.56   1.64   1.62   3.5745

Males
3      m    25     55.29  37.45  46.86  24.83  03.62  07.90  29.3252
9      m    22     23.20  07.46  08.62  07.97  05.32  02.93  9.2506
15     m    36     17.40  20.73  27.13  13.93  04.45  00.96  14.1016
17     m    22     13.80  04.90  15.22  13.17  03.15  02.17  8.7359
21     m    22     10.57  08.64  03.52  01.83  06.02  05.74  6.0540
22     m    24     05.86  04.02  03.32  06.09  03.62  01.87  4.1293
n 6
Mean        25.17  21.02  13.87  17.44  11.30  4.37   3.59   11.9328
Max         36.00  55.29  37.45  46.86  24.83  6.02   7.90   29.3252
Min         22.00  5.86   4.02   3.32   1.83   3.15   0.96   4.1293

All Subjects
n 17
Mean        25.47  14.92  8.98   13.65  10.93  4.13   4.49   9.5159
Max         40.00  55.29  37.45  46.86  24.95  9.55   14.32  29.3252
Min         18.00  2.34   1.18   2.94   1.56   1.64   0.96   3.5745
Table 6.-- Average Detection Times Grouped by Video Game Playing Experience

Regularly Play Video Games
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
3      m    25     55.29  37.45  46.86  24.83  03.62  07.90  29.3252
21     m    22     10.57  08.64  03.52  01.83  06.02  05.74  6.0540
n 2
Mean        23.50  32.93  23.05  25.19  13.33  4.82   6.82   17.6896
Max         25.00  55.29  37.45  46.86  24.83  6.02   7.90   29.3252
Min         22.00  10.57  8.64   3.52   1.83   3.62   5.74   6.0540

Occasionally Play Video Games
6      f    21     05.71  06.91  10.66  12.26  03.99  02.22  6.9577
9      m    22     23.20  07.46  08.62  07.97  05.32  02.93  9.2506
10     f    22     07.03  02.65  11.95  08.82  04.70  01.69  6.1428
15     m    36     17.40  20.73  27.13  13.93  04.45  00.96  14.1016
16     f    24     17.62  13.32  16.55  04.93  03.37  05.91  10.2844
22     m    24     05.86  04.02  03.32  06.09  03.62  01.87  4.1293
n 6
Mean        24.83  12.81  9.18   13.04  9.00   4.24   2.60   8.4777
Max         36.00  23.20  20.73  27.13  13.93  5.32   5.91   14.1016
Min         21.00  5.71   2.65   3.32   4.93   3.37   0.96   4.1293

Rarely or Never Play Video Games
5      f    40     04.92  05.85  08.73  11.46  01.64  02.94  5.9228
8      f    25     27.68  03.48  02.98  01.56  01.82  01.62  6.5213
11     f    31     02.34  01.18  13.25  07.19  01.67  03.06  4.7824
13     f    34     15.45  02.73  02.94  05.10  03.06  03.39  5.4437
14     f    24     03.13  02.80  07.81  02.07  03.36  02.27  3.5745
17     m    22     13.80  04.90  15.22  13.17  03.15  02.17  8.7359
18     f    18     05.94  04.31  14.91  24.95  03.60  08.90  10.4363
19     f    21     11.65  02.71  20.05  18.76  07.22  14.32  12.4518
20     f    22     26.02  23.47  17.56  20.95  09.55  08.38  17.6557
n 9
Mean        26.33  12.32  5.71   11.49  11.69  3.90   4.62   8.3916
Max         40.00  27.68  23.47  20.05  24.95  9.55   14.32  17.6557
Min         18.00  2.34   1.18   2.94   1.56   1.64   1.62   3.5745
The effect of video game playing experience is much more difficult to analyze in
the detection time data. The average performance still shows a lower performance for the
video game playing subjects. The problem is that only two subjects reported regular video
game playing, and one of them did very poorly on the detection task overall. The
second, subject 21, clearly performed the detection task better in the immersive
environment, and his average times of 3.5 and 1.8 seconds are well below the overall
average for the HMD modes of 11.5 and 11.7. This would appear to indicate that some
positive effect might be possible. While the scores are well below average, it should be
noted that they are not the lowest detection time scores. The lowest detection time scores
were reported by a non-video game-playing subject.
[Figure 6. 3-D Detection: three-dimensional chart of average detection time in seconds (Y-axis, 0 to 60) by subject and presentation trial]
This three-dimensional graph (figure 6) gives an overall view of the subjects'
detection times. The Y-axis displays the average detection time, the X-axis represents the
subject number and Z-axis the presentation trial number. (Note: the Z-axis is reversed for
better viewing.) The data for detection are times; unlike the accuracy data shown before,
higher numbers represent poorer performance. Substantial inter-subject variability is very
visible in this presentation. The humps in the middle for several subjects represent higher
detection times in the HMD model. The significant downward slope for almost all
subjects after trial four represents a substantial reduction in detection times for the other
modes after subjects experienced the immersive environment. The slight forward slope for
all the data represents a slight overall improvement across trials, probably a learning effect.
Discrimination
Discrimination is the time required for a subject to correctly identify structure
shape. This is a time measure and lower scores represent better task performance. The
task was terminated and the response considered incorrect if times exceeded 120 seconds.
This was necessary to keep total session time under 2 hours 30 minutes.
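The 120-second cut-off described above can be applied mechanically when scoring each trial. This sketch uses a hypothetical scoring function to show the rule, not the actual data collection software.

```python
TIMEOUT_S = 120.0  # discrimination cut-off from the session protocol

def score_trial(discrimination_time, identified_correctly):
    """Return (time, correct); a trial running past the cut-off is
    terminated and counted as incorrect."""
    if discrimination_time >= TIMEOUT_S:
        return (TIMEOUT_S, False)
    return (discrimination_time, identified_correctly)

print(score_trial(45.2, True))   # (45.2, True)
print(score_trial(130.0, True))  # (120.0, False)
```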
Table 7 summarizes the average discrimination time data for all subjects.
Table 7.-- Average Discrimination Time by Subject

Sub #  Sex  Age   std    str    hmd    hmd    str    std    average
3      m    25    57.99  51.82  51.24  38.70  42.09  46.83  48.1111
5      f    40    65.98  54.87  61.32  34.42  27.87  27.00  45.2430
6      f    21    86.15  84.58  77.80  62.57  37.50  34.02  63.7715
8      f    25    67.51  65.52  42.57  36.76  20.22  20.84  42.2361
9      m    22    58.62  32.16  39.98  33.88  29.21  22.26  36.0163
10     f    22    58.85  34.11  45.65  49.31  27.62  23.44  39.8290
11     f    31    37.24  31.66  29.60  34.49  18.61  20.10  28.6160
13     f    34    82.15  42.26  23.66  30.43  22.94  21.61  37.1755
14     f    24    58.72  38.26  32.46  17.23  19.35  16.74  30.4593
15     m    36    80.05  80.81  90.57  63.75  44.41  34.83  65.7379
16     f    24    43.82  37.53  33.44  28.72  34.11  34.25  35.3100
17     m    22    37.86  41.11  54.46  43.65  42.08  51.98  45.1899
18     f    18    58.15  74.70  56.70  56.04  29.38  38.06  52.1725
19     f    21    46.10  84.91  74.76  38.98  32.54  26.82  50.6846
20     f    22    37.12  34.28  27.87  42.58  34.72  31.29  34.6453
21     m    22    35.30  52.39  55.90  39.57  27.24  26.41  39.4683
22     m    24    29.04  27.93  33.49  34.57  40.41  34.22  33.2766
n 17
Mean        25.5  55.33  51.11  48.91  40.33  31.19  30.04  42.8202
Max         40    86.15  84.91  90.57  63.75  44.41  51.98  65.7379
Discrimination times show a learning effect for almost all users. This is very
apparent in the three-dimensional chart (figure 7). The surface slopes down toward the
front for all users. The Z-axis is reversed so that the front is the last trial and the back is
the first trial. Clearly, the subjects improve their ability to discriminate structures quickly
with practice. It is interesting to note the rather large drop in discrimination time after the
immersive presentation for all subjects. Again this is very clear in the three-dimensional
presentation in figure 7. It would appear that while subjects improved their ability to
discriminate structures over time, they learned to identify the structures very well in the
immersive presentation mode and that had positive transfer to the other modes in the later
trials.
Table 8.-- Average Discrimination Time Grouped by Gender

Females
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
5      f    40     65.98  54.87  61.32  34.42  27.87  27.00  45.2430
6      f    21     86.15  84.58  77.80  62.57  37.50  34.02  63.7715
8      f    25     67.51  65.52  42.57  36.76  20.22  20.84  42.2361
10     f    22     58.85  34.11  45.65  49.31  27.62  23.44  39.8290
11     f    31     37.24  31.66  29.60  34.49  18.61  20.10  28.6160
13     f    34     82.15  42.26  23.66  30.43  22.94  21.61  37.1755
14     f    24     58.72  38.26  32.46  17.23  19.35  16.74  30.4593
16     f    24     43.82  37.53  33.44  28.72  34.11  34.25  35.3100
18     f    18     58.15  74.70  56.70  56.04  29.38  38.06  52.1725
19     f    21     46.10  84.91  74.76  38.98  32.54  26.82  50.6846
20     f    22     37.12  34.28  27.87  42.58  34.72  31.29  34.6453
n 11
Mean        25.64  58.34  52.97  45.98  39.23  27.71  26.74  41.8312
Max         40.00  86.15  84.91  77.80  62.57  37.50  38.06  63.7715
Min         18.00  37.12  31.66  23.66  17.23  18.61  16.74  28.6160

Males
3      m    25     57.99  51.82  51.24  38.70  42.09  46.83  48.1111
9      m    22     58.62  32.16  39.98  33.88  29.21  22.26  36.0163
15     m    36     80.05  80.81  90.57  63.75  44.41  34.83  65.7379
17     m    22     37.86  41.11  54.46  43.65  42.08  51.98  45.1899
21     m    22     35.30  52.39  55.90  39.57  27.24  26.41  39.4683
22     m    24     29.04  27.93  33.49  34.57  40.41  34.22  33.2766
n 6
Mean        25.17  49.81  47.70  54.27  42.35  37.57  36.09  44.6334
Max         36.00  80.05  80.81  90.57  63.75  44.41  51.98  65.7379
Min         22.00  29.04  27.93  33.49  33.88  27.24  22.26  33.2766

All Subjects
n 17
Mean        25.47  55.33  51.11  48.91  40.33  31.19  30.04  42.8202
Max         40.00  86.15  84.91  90.57  63.75  44.41  51.98  65.7379
Min         18.00  29.04  27.93  23.66  17.23  18.61  16.74  28.6160
Table 9.-- Average Discrimination Times Grouped by Video Game Playing Experience

Regularly Play Video Games
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
3      m    25     57.99  51.82  51.24  38.70  42.09  46.83  48.1111
21     m    22     35.30  52.39  55.90  39.57  27.24  26.41  39.4683
n 2
Mean        23.50  46.64  52.10  53.57  39.13  34.66  36.62  43.7897
Max         25.00  57.99  52.39  55.90  39.57  42.09  46.83  48.1111
Min         22.00  35.30  51.82  51.24  38.70  27.24  26.41  39.4683

Occasionally Play Video Games
6      f    21     86.15  84.58  77.80  62.57  37.50  34.02  63.7715
9      m    22     58.62  32.16  39.98  33.88  29.21  22.26  36.0163
10     f    22     58.85  34.11  45.65  49.31  27.62  23.44  39.8290
15     m    36     80.05  80.81  90.57  63.75  44.41  34.83  65.7379
16     f    24     43.82  37.53  33.44  28.72  34.11  34.25  35.3100
22     m    24     29.04  27.93  33.49  34.57  40.41  34.22  33.2766
n 6
Mean        24.83  59.42  49.52  53.49  45.47  35.54  30.50  45.6569
Max         36.00  86.15  84.58  90.57  63.75  44.41  34.83  65.7379
Min         21.00  29.04  27.93  33.44  28.72  27.62  22.26  33.2766

Rarely or Never Play Video Games
5      f    40     65.98  54.87  61.32  34.42  27.87  27.00  45.2430
8      f    25     67.51  65.52  42.57  36.76  20.22  20.84  42.2361
11     f    31     37.24  31.66  29.60  34.49  18.61  20.10  28.6160
13     f    34     82.15  42.26  23.66  30.43  22.94  21.61  37.1755
14     f    24     58.72  38.26  32.46  17.23  19.35  16.74  30.4593
17     m    22     37.86  41.11  54.46  43.65  42.08  51.98  45.1899
18     f    18     58.15  74.70  56.70  56.04  29.38  38.06  52.1725
19     f    21     46.10  84.91  74.76  38.98  32.54  26.82  50.6846
20     f    22     37.12  34.28  27.87  42.58  34.72  31.29  34.6453
n 9
Mean        26.33  54.54  51.95  44.82  37.17  27.52  28.27  40.7136
Max         40.00  82.15  84.91  74.76  56.04  42.08  51.98  52.1725
Min         18.00  37.12  31.66  23.66  17.23  18.61  16.74  28.6160

All Subjects
n 17
Mean        25.41  55.66  51.12  54.59  43.13  34.98  32.04  45.4054
Max         40.00  86.15  84.58  90.57  63.75  44.41  46.83  65.7379
Min         21.00  29.04  27.93  33.44  28.72  27.24  22.26  33.2766
[Figure 7. 3-D Discrimination: three-dimensional chart of average discrimination time in seconds (Y-axis, 0 to 100) by subject and presentation trial]
This three-dimensional graph (fig 7) gives an overall view of the subjects'
discrimination times. The Y-axis displays average discrimination time, the X-axis
represents subject number and the Z-axis indicates presentation trial number. (Note: the
Z-axis is reversed for better viewing.) The overall impression of the discrimination data is
similar to that of the detection data. A large number of the subjects had a general
improvement in performance after the immersive experience. It appears that the learning
that occurred during the immersive experience did transfer to both other modes that were
tested afterwards. The large change in slope, or the steepness of the surface after trial
four, implies an effect from the immersion and not just a general learning effect. A general
learning effect would produce a more uniform downward slope towards the front rather
than the steep slope seen after trial four.
Performance
Performance is an artificial index computed by combining the accuracy and the
discrimination time to give an overall indication of task performance. In this index a
higher score represents better task performance.
Table 10 summarizes the data for performance scores for all subjects.
Table 10.-- Average Performance Scores by Subject

Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
3      m    25     .0789  .0899  .1549  .0915  .0789  .1160  .1017
5      f    40     .1085  .1954  .1525  .1553  .3072  .3976  .2194
6      f    21     .0768  .0828  .0712  .0478  .2057  .2391  .1206
8      f    25     .1148  .0971  .2015  .2791  .3683  .2639  .2208
9      m    22     .1896  .2791  .1222  .2431  .2515  .4393  .2541
10     f    22     .1164  .2239  .1588  .1768  .3455  .4087  .2384
11     f    31     .3125  .4199  .3099  .3411  .7867  .6687  .4731
13     f    34     .1111  .2043  .1861  .2297  .3313  .3241  .2311
14     f    24     .1762  .2396  .2920  .7156  .5010  .5565  .4135
15     m    36     .0673  .0950  .0903  .1466  .2419  .2836  .1541
16     f    24     .2436  .2327  .2732  .3384  .2660  .2091  .2605
17     m    22     .3245  .1573  .2178  .1652  .2553  .2200  .2234
18     f    18     .1580  .1638  .1471  .2315  .3138  .3085  .2205
19     f    21     .1067  .1092  .1388  .3293  .3485  .3898  .2370
20     f    22     .1933  .2279  .2536  .2795  .2940  .2928  .2568
21     m    22     .3060  .2096  .1561  .2943  .3003  .3303  .2661
22     m    24     .2263  .2786  .1485  .1627  .1238  .1077  .1746
n 17
Mean        25.47  .1712  .1945  .1809  .2487  .3129  .3268  .2392
Max         40.00  .3245  .4199  .3099  .7156  .7867  .6687  .4731
Min         18.00  .0673  .0828  .0712  .0478  .0789  .1077  .1017
These data showed a general improvement in performance over the test session.
Overall average performance increases from 0.1712 to 0.3268. Again the improvement
seems to be larger after the immersive experience. While the average improvement from
standard two-dimensional to non-immersive VR was 0.02, the improvement from first
immersion to second immersion was 0.06 and from second immersion to second non-
immersive 0.07. It would appear that the immersive sessions do have the potential for
very good task acquisition. Skills learned or developed in the immersive setting appeared
to have good positive transfer to the non-immersive and even the conventional
presentation models.
The performance data showed some interesting trends when analyzed by gender.
Males show a decline in overall performance for the first three test sessions going from
0.20 to 0.18 and finally down to 0.15. They only returned to initial performance levels
after the second non-immersive presentation. The final presentation mode performance of
0.25 was only slightly above the initial performance of 0.20.
Females on the other hand showed steady improvement over the test session.
They showed substantial improvement after the second immersive session and continued
to improve over the remaining section of the test.
These data showed a slight negative transfer effect for video game playing. Again
the number of observations is far too small to draw any conclusions.
Figure 8 gives an overall view of the subjects' performance scores. The Y-axis
displays average performance score, the X-axis represents subject number and Z-axis is
the presentation trial number. (Note: the Z-axis is reversed for better viewing.) When I
combined accuracy and discrimination, I did see a general overall learning effect. This is
represented by a general rising slope of the data in Fig 8. These data showed several
subjects that had better overall performance in the middle HMD than in the other models.
The substantial inter-subject variability is still very obvious.
Table 11.-- Average Performance Scores Grouped by Gender

Females
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
5      f    40     .1085  .1954  .1525  .1553  .3072  .3976  .2194
6      f    21     .0768  .0828  .0712  .0478  .2057  .2391  .1206
8      f    25     .1148  .0971  .2015  .2791  .3683  .2639  .2208
10     f    22     .1164  .2239  .1588  .1768  .3455  .4087  .2384
11     f    31     .3125  .4199  .3099  .3411  .7867  .6687  .4731
13     f    34     .1111  .2043  .1861  .2297  .3313  .3241  .2311
14     f    24     .1762  .2396  .2920  .7156  .5010  .5565  .4135
16     f    24     .2436  .2327  .2732  .3384  .2660  .2091  .2605
18     f    18     .1580  .1638  .1471  .2315  .3138  .3085  .2205
19     f    21     .1067  .1092  .1388  .3293  .3485  .3898  .2370
20     f    22     .1933  .2279  .2536  .2795  .2940  .2928  .2568
n 11
Mean        25.64  .1562  .1997  .1986  .2840  .3698  .3690  .2629
Max         40.00  .3125  .4199  .3099  .7156  .7867  .6687  .4731
Min         18.00  .0768  .0828  .0712  .0478  .2057  .2091  .1206

Males
3      m    25     .0789  .0899  .1549  .0915  .0789  .1160  .1017
9      m    22     .1896  .2791  .1222  .2431  .2515  .4393  .2541
15     m    36     .0673  .0950  .0903  .1466  .2419  .2836  .1541
17     m    22     .3245  .1573  .2178  .1652  .2553  .2200  .2234
21     m    22     .3060  .2096  .1561  .2943  .3003  .3303  .2661
22     m    24     .2263  .2786  .1485  .1627  .1238  .1077  .1746
n 6
Mean        25.17  .1988  .1849  .1483  .1839  .2086  .2495  .1957
Max         36.00  .3245  .2791  .2178  .2943  .3003  .4393  .2661
Min         22.00  .0673  .0899  .0903  .0915  .0789  .1077  .1017

All Subjects
n 17
Mean        25.47  .1712  .1945  .1809  .2487  .3129  .3268  .2392
Max         40.00  .3245  .4199  .3099  .7156  .7867  .6687  .4731
Min         18.00  .0673  .0828  .0712  .0478  .0789  .1077  .1017
Table 12.-- Average Performance Scores Grouped by Video Game Playing Experience

Regularly Play Video Games
Sub #  Sex  Age    std    str    hmd    hmd    str    std    average
3      m    25     .0789  .0899  .1549  .0915  .0789  .1160  .1017
21     m    22     .3060  .2096  .1561  .2943  .3003  .3303  .2661
n 2
Mean        23.50  .1925  .1497  .1555  .1929  .1896  .2231  .1839
Max         25.00  .3060  .2096  .1561  .2943  .3003  .3303  .2661
Min         22.00  .0789  .0899  .1549  .0915  .0789  .1160  .1017

Occasionally Play Video Games
6      f    21     .0768  .0828  .0712  .0478  .2057  .2391  .1206
9      m    22     .1896  .2791  .1222  .2431  .2515  .4393  .2541
10     f    22     .1164  .2239  .1588  .1768  .3455  .4087  .2384
15     m    36     .0673  .0950  .0903  .1466  .2419  .2836  .1541
16     f    24     .2436  .2327  .2732  .3384  .2660  .2091  .2605
22     m    24     .2263  .2786  .1485  .1627  .1238  .1077  .1746
n 6
Mean        24.83  .1533  .1987  .1440  .1859  .2391  .2813  .2004
Max         36.00  .2436  .2791  .2732  .3384  .3455  .4393  .2605
Min         21.00  .0673  .0828  .0712  .0478  .1238  .1077  .1206

Rarely or Never Play Video Games
5      f    40     .1085  .1954  .1525  .1553  .3072  .3976  .2194
8      f    25     .1148  .0971  .2015  .2791  .3683  .2639  .2208
11     f    31     .3125  .4199  .3099  .3411  .7867  .6687  .4731
13     f    34     .1111  .2043  .1861  .2297  .3313  .3241  .2311
14     f    24     .1762  .2396  .2920  .7156  .5010  .5565  .4135
17     m    22     .3245  .1573  .2178  .1652  .2553  .2200  .2234
18     f    18     .1580  .1638  .1471  .2315  .3138  .3085  .2205
19     f    21     .1067  .1092  .1388  .3293  .3485  .3898  .2370
20     f    22     .1933  .2279  .2536  .2795  .2940  .2928  .2568
n 9
Mean        26.33  .1784  .2016  .2110  .3029  .3896  .3802  .2773
Max         40.00  .3245  .4199  .3099  .7156  .7867  .6687  .4731
Min         18.00  .1067  .0971  .1388  .1553  .2553  .2200  .2194

All Subjects
n 17
Mean        25.47  .1712  .1945  .1809  .2487  .3129  .3268  .2392
Max         40.00  .3245  .4199  .3099  .7156  .7867  .6687  .4731
Min         18.00  .0673  .0828  .0712  .0478  .0789  .1077  .1017
[Figure 8. 3-D Performance: three-dimensional chart of average performance score (Y-axis, 0.00 to 0.80) by subject and presentation trial]
CHAPTER V
CONCLUSIONS
Summary
Seventeen subjects completed test sessions. It was determined that large
holistically relevant data sets could be presented using both immersive and non-immersive
display technologies. All subjects that completed the test sessions were able to complete
the pattern recognition task in all three presentation modes to some degree. Only one
subject was unable to master the navigation interface and was therefore unable to perform
the pattern recognition task. It was also demonstrated that inexperienced users could, in
most cases, adapt to a six-degree of freedom interface system. Most of the subjects were
able to adapt to the interface and use that interface to implement their search strategies in
solving the pattern recognition task. Some subjects did indicate after the sessions that
they had altered their search strategy because of difficulty with the interface. Two
subjects indicated that they preferred the HMD because they were having trouble getting
to a position they wanted using the interface but with the HMD they could just get close
and move their head to reach the position.
While the automated data retrieval system recorded the location and view angle
every second, I did not recover head-position data independent of view angle.
In observing the sessions, it appeared that most subjects made very limited use of the head
cueing available in the HMD environment. Even though they were able to look in any
direction, most looked almost directly ahead while in the HMD setting. The one subject
that was removed due to simulator sickness did make extensive use of head motion. Some
of the studies by Kennedy indicate that extensive use of head motion in VR environments
can increase the potential for simulator sickness.
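The automated data retrieval described above, which sampled the viewer's location and view angle once per second, can be sketched as a simple polling loop. This is an illustration only: `read_pose`, `PoseSample`, and the loop are my own names, standing in for whatever API the actual apparatus exposed.

```python
import time
from dataclasses import dataclass

@dataclass
class PoseSample:
    t: int        # seconds since the start of the presentation
    x: float
    y: float
    z: float
    roll: float
    yaw: float
    pitch: float

def log_session(read_pose, seconds, interval=1.0):
    """Record the viewer's location and view angle once per interval.

    `read_pose` is a caller-supplied callable returning the current
    (x, y, z, roll, yaw, pitch) tuple from the tracking system.
    """
    samples = []
    for t in range(seconds):
        samples.append(PoseSample(t, *read_pose()))
        time.sleep(interval)  # 1 Hz sampling, as in the study's logs
    return samples
```

A log produced this way contains exactly the fields shown in the sample data of Appendix B.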
At the end of the session each subject was asked for general comments and if
he/she had a preference for one of the presentation methods. Seven preferred the HMD,
four preferred the stereo, two preferred the standard, and four had no preference.
Several limitations of the current technology, as well as several aspects of the
experimental design, were identified that would affect the usefulness of this
technology for scientific visualization.
Discussions of Results
With a small number of subjects, generalizations are inappropriate; however, some
trends do appear. The argument that VR presentation models would reduce cognitive
overhead and thereby improve task performance is clearly not supported across this
subject pool. Looking at individual subjects, however, does imply that substantial changes
in task performance based on presentation model do occur.
Any attempt to measure the performance of visual presentation systems must deal
with a large number of variables. The subject's performance is affected by many factors
such as navigation ability (adaptation to the interface), spatial awareness, and search
strategy, in addition to visual image or pattern recognition. While the general idea that
performance would be improved by the use of VR was not supported across this subject
population, it is clear that a subset of the population did perform better in the VR setting.
The gender effect reported in the perception literature is consistent with the results
of this study. The perception work of Ernest separated high imagers from low imagers.
The low imaging group had no performance difference by gender while the high imagers
had a significant difference in performance based on gender. The research reported here
did not make any attempt to measure imaging ability. The task identified by the
perception literature as imaging is substantially different from the pattern recognition task
used in this experiment.
The apparent negative transfer from video game playing to this task would be very
interesting if it continued with a larger sample. While some skill in the use of VR type
interface devices may be acquired in the use of video games and certainly some degree of
spatial awareness is required in most, these skills may not actually improve complex
pattern recognition tasks. Subjects may develop strategies for game playing that would
not be useful in general exploration of three-dimensional space. The fast paced nature of
most video games may in fact reduce the subjects' tendency to carefully evaluate their
surroundings.
While these data show that performance varies across presentation model and task,
user preference may also vary by task across presentation models. In this study I asked
subjects to report any preferences at the completion of the test session. The subjects were
not asked to separate their preferences based on task. Subjects may be unable to report
their preferences accurately by task across presentation models. In looking at these data, it was
apparent that subjects do use cues provided by the different presentation models
differently and that in some cases a presentation model may improve performance on one
task while decreasing performance on another. Subjects may not be able to describe why
or how this occurs. It is also not clear that subjects actually perform better in the mode
they report as preferred. Subject five, for example, reported preference for the stereo
mode but performed detection best in the standard model and performed discrimination
better in the HMD. Only on the accuracy task did this subject perform best in the model
that was reported as preferred.
Implications of the Findings
The most important implication is the need for further research. It is clear that the
problem is not a simple perception experiment. Simply increasing the information
bandwidth did not result in improved performance for all subjects.
The research reported here indicates that the two tasks, that of detection and
discrimination of structure shape, are actually two different tasks. Performance on one
task does not indicate similar performance on the second. Subject five, for example, had
steadily increasing detection times from standard to stereo to HMD. This subject had the
best detection performance in the standard presentation model with the poorest
performance in the HMD environment. Subject five had steadily decreasing discrimination
times. Subject five showed the best discrimination performance in HMD and the poorest
in standard. This subject's accuracy scores were best in the stereo presentation model.
This subject performed each of the three tasks, accuracy, detection and discrimination,
best in three different presentation models. Subject eight, on the other hand, had the same
relative performance in all tasks, steadily increasing accuracy with steadily decreasing time
on task for both detection and discrimination. Subject eight's performance improved
moving from the standard presentation model to stereoscopic presentation and improved
even more when in the immersive virtual environment model.
While observing the test sessions, it became clear to me that scaling of the task
was a problem. As discussed earlier, relief did not scale directly to task difficulty. In
addition, it appears that a given stimulus may have different difficulties for detection and
discrimination. For example, the very large structures with relief of five or six were very
easy to detect. Most subjects detected the presence of these large structures immediately,
but many of the subjects found it difficult to determine the shape of the structure. For this
study I was able to control for this by using the same stimulus set for all subjects and using the
same stimuli presented in different orders for each of the presentation models. The
inability to scale the task was one factor in being unable to measure the detection and
discrimination threshold. While the study of time on task provides some insight into the
effectiveness of these tools, in scientific visualization most tasks are not time dependent.
In scientific visualization we want to be able to find more subtle or more complex
patterns. The ultimate test of visualization should be measurement of detection and
discrimination threshold values over a range of subjects with differing imaging styles. This
would require the development of a standard presentation data set and a standard task. In
the pilot study I discovered that many aspects of this problem are task dependent. While
simply looking at the data set size will tell one if a data set is large or not, it is necessary to
look at the information retrieval task to determine if it is holistically relevant. In many
cases the need to analyze a data set as a whole is not a characteristic of the data set but of
the question being asked. If a standard presentation data set with corresponding standard
pattern recognition tasks were developed, then it would be possible to compare data from
different studies on any number of presentation models.
Despite the small number of observations, this research does indicate that
substantial differences exist in the subjects' use of the additional imaging cues provided. I
think that this indicates a substantial difference in the subjects' spatial information
processing. Some of the perception literature has long argued that males and females
process visual information differently, but this study would indicate that perhaps within the
gender groups visual information processing may be different. While some subjects did
perform better in the VR setting, some subjects clearly performed better in the non-VR
setting, with scores going down during immersion and going back up in the post
immersion traditional presentation.
Further Research
Further research is needed in several areas including gender effects, individual
preferences, effect of training on model preference, effect of search strategy and the effect
of discrimination strategy. The data from this study strongly suggest a significant gender
effect. Increasing the number of subjects and controlling some of the factors, such as video
game playing experience would be very useful.
The indications of strong individual preferences for presentation type are very
interesting and could have significant impact on the development of scientific visualization
systems. The indication that both preference and actual performance of the different
presentation models could be task dependent is also very interesting. While engineers
attempt to improve the information bandwidth, it may be more useful to provide a range
of visualization tools that the user can select from, thus enabling better matching of
cognitive style and query type with system presentation style.
The data collected in this study include data on search path. These data could be
investigated to develop ideas about the difference in both search and discrimination
strategy.
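One simple starting point for comparing search strategies from the recorded paths is the total distance each subject travelled through the volume. A minimal sketch, assuming the per-second log entries are reduced to (x, y, z) position triples:

```python
import math

def path_length(samples):
    """Total Euclidean distance travelled along a recorded search path.

    `samples` is a sequence of (x, y, z) positions taken at regular
    intervals, as in the per-second session logs.
    """
    return sum(math.dist(a, b)  # distance between consecutive samples
               for a, b in zip(samples, samples[1:]))
```

Two subjects with similar times on task but very different path lengths would suggest different search strategies.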
The research reported here used a four-dimensional data set for the test
presentations. The fourth variable, color, was totally independent. The pattern recognition
task was, however, only a three-dimensional task, with the fourth variable being a distractor.
It would be useful to develop a four-dimensional task that would involve both structure
relief and color recognition.
It would also be interesting to investigate the use of head-motion cueing. The VR
literature has talked a great deal about the impact of head-motion cueing. It is certainly
very important in providing a sense of immersion, but it may be that it is not as useful in
the area of scientific visualization or it may be that specific training is needed to utilize this
aspect of VR effectively for visualization.
Limitations and Delimitations
Several aspects of the presentation system limited the results of the study. While
the SGI system is capable of displaying more than 16,000 colors, only four amplitude ranges were
used to convert the fourth variable into color. This was necessary in order to get
reasonable processing times. A second limitation is the treatment of the data as integer
data. Again, the system is capable of representing real values, but the complexity of the
software for conversion would be much greater. It was felt that for this study the
improvement would not be sufficient to justify the significant increase in software
development effort.
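The four-range conversion of amplitude to color amounts to a simple quantization step. The sketch below is illustrative only; the amplitude bounds and bin boundaries are assumptions, not the values used in the study.

```python
def amplitude_to_color_index(amplitude, lo=-128, hi=127, ranges=4):
    """Map an integer seismic amplitude into one of `ranges` color bins.

    Collapsing the amplitude scale into a few bins keeps the per-point
    color lookup cheap, at the cost of color resolution.
    """
    span = (hi - lo + 1) / ranges          # width of one bin
    index = int((amplitude - lo) // span)
    return min(max(index, 0), ranges - 1)  # clamp out-of-range values
```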
The test sessions were controlled manually. The data set selection was done by
the experimenter during each presentation session. A more complex interface design
could automate this process and reduce the session time. It might be possible to get more
presentation sets for each subject if the entire process were automated. This could include
autoloading the next presentation immediately after a subject's response. The current
system requires a substantial time between presentations because of experimenter
intervention and system load time. After completing the testing, a method was discovered
that would remove the flash-back, or previous-view, problem in the three-dimensional
presentation. If the entire process were automated, it would be possible to eliminate this
problem.
The study could benefit from an improved interface to the video recording
equipment. The videotape system is unable to record the entire scene rendered by the high
resolution SGI system in the two-dimensional and three-dimensional presentations. The
immersive system uses standard VGA resolution and that can be recorded without any
trouble. An improved interface for the high-resolution models would provide better data
for future studies.
The study population, while representing a broad range of college students, is not
at all representative of many scientific visualization user groups. It appears that large
between-subject variation occurs, and it might be expected that even larger variations
would occur in trained users. It would not be prudent to generalize this study to other
subject populations. It is very possible that some aspects including the gender effect may
be affected by training in visualization intensive disciplines. Data from this study that
indicate substantial improvement for most subjects in the traditional presentation after the
immersion tasks might also have application in training people in visualization intensive
disciplines such as engineering and science.
In the energy exploration field geologists use complex pattern recognition skills
developed over years of experience to locate prospective oil and gas reserves. Individual
differences are very large and represent the difference between a successful oil company
and one that is not. Many tools have been developed over the past decade to aid in this
pursuit but many would still say it is as much art as science. This research would indicate
that some of that art may be visualization style. This research then points out three areas
of interest for further study: visualization aptitude, visualization style, and methods of
teaching visualization.
The large individual differences suggest that a large range of visualization aptitudes
exists. It is unclear if this represents an underlying ability issue or is indicative of a range of
visualization styles. If, as I suspect, it represents a number of visualization styles then
visualization systems could be developed to take advantage of the different visualization
styles of users.
Performance improvement in the traditional setting following immersion also
indicates that some of this technology may be appropriate for the development of teaching
methods to improve visualization skills.
APPENDIX A
DATA COLLECTION FORMS
Informed Consent Form: Scientific Visualization Methods
You are invited to participate in a research project investigating the use of virtual reality for visualizing 4-dimensional (4-D) data. We will compare task performance using virtual reality, a traditional 2-D computer display, and a stereoscopic 3-D computer display (i.e., a 3-dimensional display seen on a 2-dimensional computer screen). I am Gene A. Hetsel, a doctoral candidate in Information Science at the University of North Texas. I am studying the use of virtual reality for scientific visualization. The purpose of this study is to evaluate several different currently available methods of presenting scientific data, specifically seismic data, visually to researchers. This research will be conducted at the Virtual Environments Technology Laboratory at the University of Houston.
By participating in this research project, you will have the opportunity to observe firsthand the use of virtual reality to display 4-D data. You will also observe the representation of 4-D data using a traditional 2-D display and a stereoscopic 3-D display. You will learn about how different types of displays can be used to display scientific data.
As a participant, you will be asked to view three different types of presentations of four-dimensional data and asked to identify patterns, i.e., shapes, in the data. You will view first a normal 2-D computer display and be asked to locate a series of shapes in a map representing geographic features. Then, you will try to find a series of shapes in a map shown using a 3-D display on a computer screen and finally in a map shown using a virtual reality system. You will then repeat the task for the 3-D display on the computer screen and the normal 2-D computer display. Thus, you will perform five 20-minute trials, trying to find a series of shapes on maps shown using each of these three types of displays. This will require about two hours of your time.
To participate in the study you must have normal vision in both eyes, including normal color vision. You should have no vision problems that you are aware of that would interfere with your participation.
A small portion of people experience some side effects from using virtual reality systems. The effects or symptoms are similar to motion sickness and can include disorientation, vertigo, blurred vision, and nausea. To minimize these risks, the total time spent using virtual reality will be limited to a maximum of 40 minutes. Previous research has shown that these side effects usually don't occur in trial periods lasting less than 1 hour. However, you can stop the experiment at any time you experience any of these side effects. You will be provided with an area to rest and recover if needed. Also, you will be asked to complete a checklist of these symptoms at the start and end of the experiment, and you will be asked periodically during the experiment whether you are experiencing any of these symptoms. Also, these symptoms are temporary. Finally, in the unlikely event injury occurs, you will be entitled to receive the medical treatment and compensation provided to any member of the general population who becomes injured while at the University of Houston.
If your instructor accepts extra credit points, you have the potential to earn 2 hours' worth of extra credit points that will be applied to your course grade. You should also understand that you will only earn the 2 hours' worth of extra credit if you complete the session. No partial credit will be given. Also, no money will be received. Your participation is completely voluntary, and you may withdraw at any time without penalty. The only loss of benefits associated with withdrawing from the experiment is the loss of the extra credit points. All data collected will be kept confidential. No individuals will be identified by name. All data will be identified solely by a subject number. Only aggregate data will be reported in any resulting reports or publications.
If you have further questions connected with your participation in this project, you should contact the principal investigator, Dr. James P. Bliss, Adjunct Professor, Department of Psychology, University of Houston, (713) 967-6214, or Gene A. Hetsel, graduate student, Information Science, University of North Texas, (903) 429-6856.
THIS PROJECT HAS BEEN REVIEWED BY THE UNIVERSITY OF NORTH TEXAS COMMITTEE FOR THE PROTECTION OF HUMAN SUBJECTS (817) 565-3940
THIS PROJECT HAS BEEN REVIEWED BY THE UNIVERSITY OF HOUSTON COMMITTEE FOR THE PROTECTION OF HUMAN SUBJECTS (713) 743-9222
Informed Consent Form: Scientific Visualization Methods
I, ________________________, agree to participate in a study of scientific visualization presentation methods. I understand that my participation in this study is completely voluntary and that I may leave the study at any time without prejudice or penalty.
Signature of Participant                              Date

Signature of Experimenter                             Date

THIS PROJECT HAS BEEN REVIEWED BY THE UNIVERSITY OF NORTH TEXAS COMMITTEE FOR THE PROTECTION OF HUMAN SUBJECTS (817) 565-3940
THIS PROJECT HAS BEEN REVIEWED BY THE UNIVERSITY OF HOUSTON COMMITTEE FOR THE PROTECTION OF HUMAN SUBJECTS (713) 743-9222
Participant #______  Time______  Date______
PRE-VE BACKGROUND INFORMATION
Instructions: Please fill this page out BEFORE you enter the
virtual environment. Fill in the blanks or
circle the appropriate item.
1. Your AGE HEIGHT WEIGHT
2. Are you in your usual state of fitness? YES NO
If not, please indicate the reason:___________
3. Have you been ill in the past week? YES NO
4. If "Yes", please indicate:
a) The nature of the illness (flu, cold, etc .) :
b) Severity of the illness:  Very Mild . . . Very Severe
c) Length of illness: Hours/Days
d) Major symptoms:
e) How much work did you miss? Hours/Days
f) Are you fully recovered? YES NO
5. How much alcohol have you consumed during the past 24
hours?
____ beers    ____ ozs. wine    ____ ozs. hard liquor
6. Please indicate all medication that you have used in the
past 24 hours: (a) NONE
(b) Sedatives or tranquilizers
(c) Aspirin, Tylenol, etc.
(d) Antihistamines
(e) Decongestants
(f) Other (specify):
7. How many hours sleep did you get last night? ______ (Hours)
8. Was this amount sufficient? YES NO
9. Please list any other comments regarding your present physical state which might affect your performance on our test battery.
Participant #______  Time______  Date______
PRE-VE SYMPTOM CHECKLIST
Instructions: Please fill this out BEFORE you enter the
virtual environment. Circle below if any symptoms apply
to you right now. (After your exposure to the virtual
environment, you will be asked these questions again.)
1. General discomfort            None  Slight  Moderate  Severe
2. Fatigue                       None  Slight  Moderate  Severe
3. Boredom                       None  Slight  Moderate  Severe
4. Drowsiness                    None  Slight  Moderate  Severe
5. Headache                      None  Slight  Moderate  Severe
6. Eye strain                    None  Slight  Moderate  Severe
7. Difficulty focusing           None  Slight  Moderate  Severe
8. a. Salivation increased       None  Slight  Moderate  Severe
   b. Salivation decreased       None  Slight  Moderate  Severe
9. Sweating                      None  Slight  Moderate  Severe
10. Nausea                       None  Slight  Moderate  Severe
11. Difficulty concentrating     None  Slight  Moderate  Severe
12. Mental depression            None  Slight  Moderate  Severe
13. "Fullness of the head"       None  Slight  Moderate  Severe
14. Blurred vision               None  Slight  Moderate  Severe
15. a. Dizziness (eyes open)     None  Slight  Moderate  Severe
    b. Dizziness (eyes closed)   None  Slight  Moderate  Severe
16. *Vertigo                     None  Slight  Moderate  Severe
17. **Visual flashbacks          None  Slight  Moderate  Severe
18. Faintness                    None  Slight  Moderate  Severe
19. Aware of breathing           None  Slight  Moderate  Severe
20. ***Stomach awareness         None  Slight  Moderate  Severe
21. Loss of appetite             None  Slight  Moderate  Severe
22. Increased appetite           None  Slight  Moderate  Severe
23. Desire to move bowels        None  Slight  Moderate  Severe
24. Confusion                    None  Slight  Moderate  Severe
25. Burping                      No    Yes     No. of times ____
26. Vomiting                     No    Yes     No. of times ____
27. Other ________________
* Vertigo is experienced as loss of orientation with respect to vertical upright.
** Visual illusion of movement or false sensations similar to movement in the virtual world, when not in the virtual world.
*** Stomach awareness is usually used to indicate a feeling of discomfort which is just short of nausea.
Participant #______  Time______  Date______
MID-VE SYMPTOM CHECKLIST
B. Circle below if any symptoms apply to you right now.
(After your exposure to the virtual world, you will be
asked these questions again.)
1. General discomfort            None  Slight  Moderate  Severe
2. Fatigue                       None  Slight  Moderate  Severe
3. Boredom                       None  Slight  Moderate  Severe
4. Drowsiness                    None  Slight  Moderate  Severe
5. Headache                      None  Slight  Moderate  Severe
6. Eye strain                    None  Slight  Moderate  Severe
7. Difficulty focusing           None  Slight  Moderate  Severe
8. a. Salivation increased       None  Slight  Moderate  Severe
   b. Salivation decreased       None  Slight  Moderate  Severe
9. Sweating                      None  Slight  Moderate  Severe
10. Nausea                       None  Slight  Moderate  Severe
11. Difficulty concentrating     None  Slight  Moderate  Severe
12. Mental depression            None  Slight  Moderate  Severe
13. "Fullness of the head"       None  Slight  Moderate  Severe
14. Blurred vision               None  Slight  Moderate  Severe
15. a. Dizziness (eyes open)     None  Slight  Moderate  Severe
    b. Dizziness (eyes closed)   None  Slight  Moderate  Severe
16. *Vertigo                     None  Slight  Moderate  Severe
17. **Visual flashbacks          None  Slight  Moderate  Severe
18. Faintness                    None  Slight  Moderate  Severe
19. Aware of breathing           None  Slight  Moderate  Severe
20. ***Stomach awareness         None  Slight  Moderate  Severe
21. Loss of appetite             None  Slight  Moderate  Severe
22. Increased appetite           None  Slight  Moderate  Severe
23. Desire to move bowels        None  Slight  Moderate  Severe
24. Confusion                    None  Slight  Moderate  Severe
25. Burping                      No    Yes     No. of times ____
26. Vomiting                     No    Yes     No. of times ____
27. Other ________________
* Vertigo is experienced as loss of orientation with respect to vertical upright.
** Visual illusion of movement or false sensations similar to movement in the virtual world, when not in the virtual world.
*** Stomach awareness is usually used to indicate a feeling of discomfort which is just short of nausea.
Participant #______  Time______  Date______
POST-VE SYMPTOM CHECKLIST
B. Circle below if any symptoms apply to you right now.
1. General discomfort            None  Slight  Moderate  Severe
2. Fatigue                       None  Slight  Moderate  Severe
3. Boredom                       None  Slight  Moderate  Severe
4. Drowsiness                    None  Slight  Moderate  Severe
5. Headache                      None  Slight  Moderate  Severe
6. Eye strain                    None  Slight  Moderate  Severe
7. Difficulty focusing           None  Slight  Moderate  Severe
8. a. Salivation increased       None  Slight  Moderate  Severe
   b. Salivation decreased       None  Slight  Moderate  Severe
9. Sweating                      None  Slight  Moderate  Severe
10. Nausea                       None  Slight  Moderate  Severe
11. Difficulty concentrating     None  Slight  Moderate  Severe
12. Mental depression            None  Slight  Moderate  Severe
13. "Fullness of the head"       None  Slight  Moderate  Severe
14. Blurred vision               None  Slight  Moderate  Severe
15. a. Dizziness (eyes open)     None  Slight  Moderate  Severe
    b. Dizziness (eyes closed)   None  Slight  Moderate  Severe
16. *Vertigo                     None  Slight  Moderate  Severe
17. **Visual flashbacks          None  Slight  Moderate  Severe
18. Faintness                    None  Slight  Moderate  Severe
19. Aware of breathing           None  Slight  Moderate  Severe
20. ***Stomach awareness         None  Slight  Moderate  Severe
21. Loss of appetite             None  Slight  Moderate  Severe
22. Increased appetite           None  Slight  Moderate  Severe
23. Desire to move bowels        None  Slight  Moderate  Severe
24. Confusion                    None  Slight  Moderate  Severe
25. Burping                      No    Yes     No. of times ____
26. Vomiting                     No    Yes     No. of times ____
27. Other ________________
* Vertigo is experienced as loss of orientation with respect to vertical upright.
** Visual illusion of movement or false sensations similar to movement in the virtual world, when not in the virtual world.
*** Stomach awareness is usually used to indicate a feeling of discomfort which is just short of nausea.
APPENDIX B
SAMPLE SUBJECT DATA
Subject 10.19 Data Set Is2.iv_10.19.dat Started at 835470247.481960
Time X Y Z Roll Yaw Pitch
0 8.5 -38 -60 0 0 -16
1 8.5 -38 -60 0 0 -16
2 8.5 -37.970985 -59.8988 0 0 -16
3 8.5 -37.434223 -58.0269 0 0 -16
4 8.5 -39.705643 -65.9484 0 0 -16
5 8.5 -41.083809 -70.7547 0 0 -16
6 13.068628 -41.083809 -70.7547 0 0 -16
7 13.068628 -41.57893 -72.4814 0 0 -16
8 13.068628 -44.192616 -73.6557 0 0 -16
9 13.068628 -41.748264 -72.9548 0 0 -16
10 13.068628 -25.073137 -67.1054 0 0 -16
11 13.068628 -6.688249 -61.4467 0 0 -16
12 13.068628 13.450062 -49.2335 0 0 -16
13 13.068628 28.616003 -40.4427 0 0 -16
14 13.068628 45.22295 -35.6808 0 0 -16
15 13.068628 59.394699 -31.6172 0 0 -16
16 13.068628 60.936531 -28.2034 0 0 -16
17 13.068628 61.81263 -27.8438 0 0 -16
18 13.068628 72.578796 -24.7567 0 0 -16
19 13.068628 73.932785 -20.0347 0 0 -16
20 13.068628 73.932785 -20.0347 0 0 -16
21 19.872551 73.932785 -20.0347 0 0 -16
22 21.813726 73.932785 -20.0347 0 -4.84091 -16
23 25.462328 74.772919 -20.0347 0 -15.5909 -16
24 37.006115 79.983131 -20.0347 0 -34.75 -16
25 43.397091 85.98539 -20.0347 0 -54.75 -16
26 40.16441 90.011139 -18.7091 0 -65.7045 -16
27 43.335064 97.034492 -18.7091 0 -65.8636 -16
28 43.823589 99.074516 -18.7091 0 -78.6136 -16
29 43.54987 110.31443 -18.4151 0 -94.9546 -16
30 38.863441 114.033318 -17.281 0 -103 -16
31 33.489494 121.262619 -16.6064 0 -112.546 -16
32 30.293133 125.917809 -16.3045 0 -129.977 -16
33 30.293133 125.917809 -16.3045 0 -136.182 -16
34 25.732386 129.632751 -16.3045 0 -141.341 -16
35 22.121483 132.495148 -16.3045 0 -149.614 -16
36 22.121483 132.495148 -16.3045 0 -154.977 -16
37 20.308241 133.203537 -16.0515 0 -161.659 -16
38 12.602054 137.229065 -9.98039 0 -165.886 -16
39 12.293607 137.286377 -9.98039 0 -169.477 -16
40 -0.60346 139.68187 -9.98039 0 -174.5 -16
41 -0.60346 139.68187 -9.98039 0 -194.5 -16
42 -4.05183 138.171295 -9.98039 0 -203.659 -16
43 -5.309049 137.620575 -9.98039 0 -203.659 -16
44 -10.31996 135.425568 -9.98039 0 -203.659 -16
45 -10.822847 135.205276 -9.98039 0 -212.727 -16
46 -10.822847 134.475357 -12.526 0 -218.114 -16
47 -9.041863 130.878799 -16.6615 0 -218.114 -16
48 -15.783749 139.141022 -19.6724 0 -218.114 -16
49 -25.288198 150.788818 -23.9172 0 -218.114 -16
50 -25.252934 150.74559 -23.9015 0 -218.114 -16
51 -25.141254 150.608734 -23.8516 0 -223.523 -16
52 -25.90011 149.81514 -23.8516 0 -226.909 -16
53 -33.823643 140.372604 -23.8516 0 -234.432 -16
54 -39.096619 130.154938 -23.8516 0 -252.727 -16
55 -40.414326 123.205963 -23.8516 0 -268.795 -16
56 -40.602749 114.051056 -23.8516 0 -269.432 -16
57 -40.606594 112.756981 -23.8516 0 -282.773 -16
58 -37.81776 101.555977 -23.8516 0 -289.818 -16
59 -33.464725 93.712891 -23.8516 0 -310.773 -16
60 -24.804848 83.860039 -23.8516 0 -311.318 -16
61 -20.364883 78.808411 -23.8516 0 -316.545 -16
62 -20.364883 78.808411 -23.8516 0 -321.204 -16
Det Time = 21.787114 ID Time = 66.203530
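Records in the format above can be parsed mechanically; the field names follow the log header (Time X Y Z Roll Yaw Pitch), while the code itself is my own sketch rather than the software used in the study.

```python
def parse_record(line):
    """Parse one 'Time X Y Z Roll Yaw Pitch' record from a session log."""
    fields = line.split()
    t = int(fields[0])
    x, y, z, roll, yaw, pitch = (float(v) for v in fields[1:])
    return {"t": t, "x": x, "y": y, "z": z,
            "roll": roll, "yaw": yaw, "pitch": pitch}
```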
Test Cond  Data Set          Start Time   Day from Jun 1  Time   Det Time   Desc Time  Correct  C/Det  C/Des
835468166.9
10.11      2t5.iv_10.11.dat  835469396    22              15.83  11.758849  50.486011  0
10.12      1s5.iv_10.12.dat  835469480.8  22              15.86  5.341395   65.195707  0
10.13      2s6.iv_10.13.dat  835469704.4  22              15.92  2.349826   57.746438  1        2.35   57.75
10.14      2s5.iv_10.14.dat  835469792.6  22              15.94  2.2603     85.071594  1        2.26   85.07
10.15      2t5.iv_10.15.dat  835469910.2  22              15.98  1.963694   64.707373  1        1.964  64.71
10.16      1s4.iv_10.16.dat  835470009.6  22              16.00  2.252849   49.929572  1        2.253  49.93
10.17      2s4.iv_10.17.dat  835470096.9  22              16.03  2.407198   47.322442  1        2.407  47.32
10.18      1t3.iv_10.18.dat  835470173.4  22              16.05  2.843843   38.958823  1        2.844  38.96
10.19      1s2.iv_10.19.dat  835470247.5  22              16.07  21.787114  66.20353
Accuracy by Subject

Sub #  Sex  Age   VG    Edu           std  str  hmd  hmd  str  std   Mean
5      f    40    0.0   jurn          0.7  0.9  0.9  0.7  0.7  0.8   0.7833
6      f    21    0.5   4 soc         0.6  0.7  0.4  0.4  0.7  0.7   0.5833
8      f    25    0.0   4 Crim Just   0.4  0.7  0.8  0.8  0.8  0.5   0.6667
10     f    22    0.5   jurn          0.6  0.7  0.6  0.8  0.9  0.9   0.7500
11     f    31    0.0   5 math        0.6  0.7  0.7  0.8  0.8  0.8   0.7333
13     f    34    0.0   4 polsci      0.9  0.8  0.5  0.7  0.9  0.8   0.7667
14     f    24    0.0   4 hot mgt     0.9  0.8  0.7  0.9  0.9  0.9   0.8500
16     f    24    0.5   3 polsci      0.8  0.9  0.9  0.9  0.8  0.6   0.8167
18     f    18    0.0   1 theatre     0.8  0.7  0.8  0.8  0.9  1     0.8333
19     f    21    0.0   3 music       0.5  0.6  0.6  0.9  0.9  0.8   0.7167
20     f    22    0.0   3 soc         0.6  0.7  0.6  0.6  0.9  0.8   0.7000
3      m    25    1.0   3 finance     0.4  0.5  0.7  0.5  0.4  0.4   0.4833
9      m    22    0.5   4 ME          0.9  0.8  0.5  0.7  0.7  0.8   0.7333
15     m    36    0.5   4 econ        0.5  0.6  0.8  0.9  0.9  0.7   0.7333
17     m    22    0.0   1 actg        0.9  0.4  0.6  0.6  0.8  0.6   0.6500
21     m    22    1.0   4 kenis       0.9  0.7  0.8  0.9  0.8  0.8   0.8167
22     m    24    0.5   4 psych       0.6  0.7  0.6  0.5  0.5  0.5   0.5667

Avg         25.5  0.29  3.40                                         0.7167
Max         40    1.00  5.00                                         0.8500
Min         18    0     1                                            0.4833
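The per-subject means and the Avg/Max/Min summary in the accuracy table can be reproduced directly from the six per-condition scores; a minimal sketch:

```python
def subject_mean(scores):
    """Mean accuracy across the six presentation trials for one subject."""
    return sum(scores) / len(scores)

def summarize(all_means):
    """Average, maximum and minimum of the per-subject mean accuracies."""
    return sum(all_means) / len(all_means), max(all_means), min(all_means)
```

For example, subject 5's scores (0.7, 0.9, 0.9, 0.7, 0.7, 0.8) give a mean of 0.7833, matching the table.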
APPENDIX C
PRESENTATION MODEL EXAMPLES
Original Seismic Data
Training Set Square
Training Set Triangle
Test Set Square
relief 6
Test Set Square
relief 4
Test Set Triangle
Test Set Triangle
relief 4
WORKS CITED
Arthur, K. W., K. S. Booth, and C. Ware. Evaluating 3D task performance for fishtank
VR. ACM Transactions on Information Systems 11 (1993): 239-265.
Boring, E. G. 1942. Sensation and perception in the history of experimental
psychology. New York: Irvington Publishers.
Brill, L. M. Virtual reality: from simple beginnings. Virtual Reality Report 4 (1994): 5-
7.
Brooks, F. P., Jr. Walkthrough: A dynamic graphics system for simulating virtual
buildings. Computers & Graphics 10 (1986): 35-36.
Bryson, S. Virtual reality in scientific visualization. Computers & Graphics 17
(1993): 679-685.
Deering, M. F., 1993. Explorations of display interfaces for virtual reality. In
Proceedings of the Virtual Reality Annual International Symposium 18-22
September 1993. Seattle, Washington: IEEE.
Dodwell, P. C. 1970. Visual pattern recognition. Ontario: Queen's University.
Durlach, N. and A. S. Mavor, eds. 1995. Virtual Reality: Scientific and Technological
Challenges. Washington, D.C.: National Academy Press.
Emerson, T. Virtual reality source list available via FTP at hitl.washington.edu
Ernest, C. H. 1968. Individual differences in imagery ability and compound stimulus
effects in paired associated learning. Master's thesis, University of Western
Ontario.
. 1983. Spatial imagery ability, sex differences and hemispheric functioning.
In Imagery, Memory and Cognition, ed. J. C. Yuille.
. and A. Paivio. Imagery and sex differences in incidental recall. British
Journal of Psychology 62 (1971): 67-72.
Epstein, W., T. Babler, and S. Bownds. Attention demands of processing shape in three-
dimensional space: evidence from visual search and precuing paradigms. Journal
of Experimental Psychology: Human Perception and Performance 18 (1992):
503-511.
Feiner, S. 1992. Virtual worlds for visualizing information. In a brochure published by
IEEE.
Fisher, S. S., M. McGreevy, J. Humphries, and W. Robinett. 1986. Virtual environment
display system. In Proceedings of the 1986 ACM Workshop on Interactive 3D
Graphics. Published in Computer Graphics, 9-21.
Galanter, E. 1962. Contemporary psychophysics. New Directions in Psychology Series.
New York: Holt, Rinehart and Winston.
Gallelli, D. M. 1992. An exploratory environment for geographic data analysis.
Master's thesis, State University of New York.
Gibson, J. J. 1979. Experimental evidence for direct perception: persisting layout. In
The Ecological Approach to Visual Perception, 147-168. Boston: Houghton
Mifflin Company.
Havron and Butler. Evaluation of Training Effectiveness of the 2FH2 Helicopter Flight
Trainer Research Tool (NAVTRADEVCEN 1915-00-1). Port Washington, NY:
Naval Training Device Center, 1957.
Jacobson, Linda. 1994. Garage Virtual Reality. Carmel, IN: Sams.
Koike, H. The role of another spatial dimension in software visualization. ACM
Transactions on Information Systems 11, no. 3 (1993): 266-286.
Kolasinski, E. M. 1996. Prediction of simulator sickness in a virtual environment. Ph.D.
diss., University of Central Florida.
———. 1995. Simulator Sickness in Virtual Environments. Technical Report 1027.
United States Army Research Institute for the Behavioral and Social Sciences.
Kosslyn, S. M., T. M. Ball, and B. J. Reiser. Visual images preserve metric spatial
information. Journal of Experimental Psychology: Human Perception and
Performance 4 (1978): 47-60.
Paley, W. B. 1992. Head-tracking stereo display: experiments and applications. In
Proceedings of Stereoscopic Displays and Applications III, SPIE vol. 1669.
New York: SPIE.
Pinker, S. Mental imagery and the third dimension. Journal of Experimental
Psychology: General 109 (1980): 354-371.
Sutherland, I. 1968. A head-mounted three dimensional display. In Proceedings of the
Fall Joint Computer Conference, AFIPS Conference Proceedings, 1968, by
AFIPS. Arlington, Virginia, 757-764.
Skavenski, A., and R. Steinman. "Control of eye position in the dark." Vision Research
10 (January 1970): 193-203.
Tang, Q. 1993. A dynamic visualization approach to the exploration of area-based,
spatial-temporal data. Ph.D. diss., Ohio State University.
Tukey, J. W. 1977. Exploratory data analysis. Reading, Mass: Addison-Wesley
Publishing.
Uttal, W. R. 1983. Visual form detection in 3-dimensional space. Hillsdale, New
Jersey: Lawrence Erlbaum Associates.
Wanger, L. R., J. A. Ferwerda, and D. P. Greenberg. Perceiving spatial relationships in
computer-generated images. IEEE Computer Graphics & Applications 12 (May
1992): 44-58.
Wheatstone, C. H. 1838. See Binocular Vision and Pattern Coding.
WORKS REFERENCED BUT NOT CITED
Related Research
Scientific Visualization
Spring, M. B. "Virtual reality and abstract data: virtualizing information."
Virtual Reality World 1 (Spring 1993): 1.
Bly, S. 1992. Multivariate data mappings. In Proceedings of ICAD 1992:
The First International Conference on Auditory Display. Reading,
Massachusetts: ICAD.
Bryson, S. 1992. Virtual environments in scientific visualization. In
Proceedings of COMPCON Spring 1992, the 37th Annual IEEE
International Computer Conference, by IEEE COMPCON, 460-461.
———. 1994. Virtual environments in scientific visualization. In
Proceedings of VRST 1994: Virtual Reality Software and Technology, by
VRST, 201-222.
Cruz-Neira, C. Scientists in wonderland: A report on visualization applications in
the CAVE virtual reality environment. In Proceedings of the IEEE
Symposium on Research Frontiers in Virtual Reality, by IEEE. Los
Alamitos, California: IEEE Press, 59-66.
Deering, M.F. 1993. Data complexity for virtual reality: Where do all the
triangles go? In Proceedings of IEEE Virtual Reality Annual
International Symposium, by VRAIS. Washington, D.C.: IEEE Press,
357-363.
Hirota, K. 1993. Development of surface display. In Proceedings of IEEE
Virtual Reality Annual International Symposium, by VRAIS. Washington,
D.C.: IEEE Press, 256-262.
Keller, P. R. 1993. Visual cues: practical data visualization. Washington, D.C.:
IEEE Computer Society Press.
Merwin, D. H. Visual analysis of scientific data: comparison of 3D topographic,
color, and gray-scale displays in a feature detection task. In Proceedings
of the Human Factors and Ergonomics Society 38th Annual Meeting, by
HFES, 240-244.
Virtual Reality
Arthur, E. J. 1993. Spatial orientation in real and virtual worlds. In
Proceedings of the Human Factors and Ergonomics Society 37th Annual
Meeting, by HFES. Santa Monica, CA: Human Factors Society, 328-332.
McNeill, M. 1992. Virtual reality research at the National Center for
Supercomputing Applications. In Proceedings of Virtual Reality 1992:
VR becomes a business, 115-117.
Side Effects
Regan, E. D. "Some side effects of immersion virtual reality." VR News 2
(July 1993): 10-12.
Biocca, F. "Will simulation sickness slow down the diffusion of virtual
environment technology?" Presence: Teleoperators and Virtual
Environments 1 (1992): 334-343.
DiZio, P. "Spatial orientation, adaptation, and motion sickness in real and virtual
environments." Presence: Teleoperators and Virtual Environments 1 (1992):
319-328.
Ebenholtz, S. M. "Motion sickness and oculomotor systems in virtual
environments." Presence: Teleoperators and Virtual Environments 1 (1992):
302-305.
Hettinger, L. J. "Visually induced motion sickness in virtual environments."
Presence: Teleoperators and Virtual Environments 1 (1992): 306-310.
———. 1992. Difference in simulator sickness symptom profiles in different
simulators. In 1992 IMAGE Conference VI, 29-40. IMAGE Society.
———. Measurement and control of motion sickness aftereffects from
immersion in VR. In Virtual Reality and Medicine: The Cutting Edge.
SIG Advanced Applications, Inc.
. 1993. Posteffects of flight simulators: Should virtual reality systems
have learning labels? In 1993 Proceedings of the Conference on
Intelligent Computer Aided Training and Virtual Environment
Technology by National Aeronautics and Space Administration.
Houston, Texas: NASA, 407.
———. Profile analysis of simulator sickness symptoms: application to virtual
environment systems. Presence: Teleoperators and Virtual Environments
1, no. 3: 295-301.
Cartography
Jacobson, R. "Applying the virtual worlds paradigm to mapping and surveying
data." Virtual Reality World 2 (1994): 60-69.
———. "The ultimate user interface." Byte (April 1992): 175-181.
———. "Virtual worlds: a new type of design environment." Virtual
Reality World 2 (1994): 46-52.
Lang, L. "Terrain modeling." Computer Graphics World 16 (September 1993):
22-28.
Cognitive Science
Performance Analysis
Lampton, D. R., and J. Bliss. 1993. Assessing human performance in virtual
environments. In 1993 Proceedings of the Conference on Intelligent
Computer-Aided Training and Virtual Environment Technology, by
National Aeronautics and Space Administration. Houston, Texas: NASA,
270.
Lampton, D. R., and J. Bliss. The virtual environment performance assessment
battery (VEPAB): development and evaluation. Presence: Teleoperators
and Virtual Environments 3, no. 2: 145-157.
Cognitive Processing and Imaging
Carr, K. Stereo visual information and mental rotation tasks. 1993 Annual
Meeting of the Applied Vision Association.
Psotka, J. 1993. Cognitive factors associated with immersion in virtual
environments. In Proceedings of the 1993 Conference on Intelligent
Computer-Aided Training and Virtual Environments, by National
Aeronautics and Space Administration. Houston, Texas, 277-284.
Gender Effects in Imaging
Conor, J. A. "Sex-related differences in responses to practice on a visual spatial
test and generalization to a related test." Child Development 49 (1978):
24-29.
Ernest, C. H. 1972. Spatial imagery ability and the recognition of verbal and
nonverbal stimuli. Ph.D. diss., University of Western Ontario.
———. "Imagery and verbal associative latencies as a function of imagery
ability." Canadian Journal of Psychology 25 (1971): 83-90.
Fairweather, H. "Sex differences in cognition." Cognition 4 (1976): 231-280.
Freedman, R. J. "Ocular dominance, cognitive strategy and sex differences in
spatial ability." Perceptual and Motor Skills (1981).
Hannay, H. J. "Real or imagined incomplete lateralization of function in
females." Perception and Psychophysics 71 (1976): 99-104.
Harshman, R. Sex, language and the brain: adult sex differences in
lateralization. Paper presented at the UCLA Conference on Human Brain
Function, Los Angeles, 1974.
Kail, R. The locus of sex differences in spatial ability. Perception and
Psychophysics 26 (1979): 182-186.
McGee, M. G. "Human spatial abilities: psychometric studies and
environmental, genetic, hormonal, and neurological influences."
Psychological Bulletin 86 (1979): 889-918.
McGlone, J. "Sexual variation in behavior during spatial and verbal tasks."
Canadian Journal of Psychology 35 (1981): 277-282.
Tapley, C. M. "An investigation of sex differences in spatial ability: mental
rotation of three-dimensional objects." Canadian Journal of Psychology
31 (1977): 122-130.
Waber, D. P. "Sex differences in cognition: a function of maturation rate."
Science 192 (1976): 572-574.
Witelson, S. F. "Sex and the single hemisphere: specialization of the right
hemisphere for spatial processing." Science 193 (1976): 425-427.