
Visualisation of Tsunami Simulations Adam Wilde

BSc (Hons) Computing (Ind) (2005/2006)

The candidate confirms that the work submitted is their own and the appropriate credit has been given where reference has been made to the work of others. I understand that failure to attribute material which is obtained from another source may be considered as plagiarism. (Signature of student) _______________________________


Summary

This project focuses on the use of 3D visualisation and interaction techniques with the aim of

developing informative, flexible, efficient and attractive approaches to displaying water wave

simulations. The water wave simulations will be generated in binary data form, using the existing 2D

numerical wave model OTT-2D but will be transformed into a full screen 3D visualisation that can be

used to explore areas of interest on the water surface and bed topography, whilst simultaneously

displaying water velocity and producing animations of the wave run-up.


Acknowledgements To my supervisor, Matthew Hubbard, for explaining the structure of the OTT-2D data files (on more

than one occasion) and also for offering alternative approaches when I felt out of my ‘depth’. To

Leroy, for his time and C++ expertise. My parents, for desperately trying to understand the technical

aspects of the project and to my girlfriend, for keeping me alive when I didn’t have time to cook for

myself.


Table of Contents

Chapter 1 Project Introduction
  1.1 Problem Statement
  1.2 Objectives
  1.3 Minimum Requirements
  1.4 Other Areas to Consider
  1.5 Deliverables
  1.6 Relevance to Degree Programme
  1.7 Background Reading
    1.7.1 Existing 3D Graphic Languages and Visualisation Environments
    1.7.2 Spatial Data Structures
    1.7.3 Adaptive Mesh Refinement
    1.7.4 Existing AMR Visualisation Techniques
    1.7.5 Human-Computer Interactions within 2D/3D Environments
  1.8 OTT-2D Overview
Chapter 2 Design
  2.1 Requirements
    2.1.1 Ability to Recognise and Handle OTT-2D Output Files
    2.1.2 Facilitate Internal Storage of Simulation Data Values
    2.1.3 Generate Accurate Representations of Wave Data in Mesh Form
    2.1.4 Control Simulation Timing
    2.1.5 Allow Simulation Manipulation
    2.1.6 Support Cross Platform Operation
  2.2 Preliminary Design
    2.2.1 Vectors as Dynamic Storage Elements
    2.2.2 Using a Quadtree for Mesh Joining at Different Levels
    2.2.3 Lighting and Normals
    2.2.4 Creating Timeframes
    2.2.5 Using the GLUI Toolkit for Creating Interactive Controls
  2.3 Hardware Requirements
  2.4 Software Requirements
Chapter 3 Implementation
  3.1 Methodology
  3.2 Development Plan
  3.3 Development Phases
  3.4 Description of Methods
  3.5 Solutions to Problems Encountered
    3.5.1 Vertex Arrays
    3.5.2 Vertex Buffer Objects
    3.5.3 Unstructured vs. Structured Output Files
    3.5.4 Handling the Unstructured Data Format
    3.5.5 Using GLUT for Timing
Chapter 4 Evaluation
  4.1 Introduction
  4.2 Evaluation Against Minimum Requirements
  4.3 Heuristic Evaluation
  4.4 Formative Evaluation
  4.5 End User Evaluation
  4.6 Functionality versus Efficiency
Chapter 5 Conclusion
  5.1 Successes
  5.2 Failures
  5.3 Future Enhancement Possibilities
  5.4 Evaluation of Decisions Made
  5.5 Overall Methodology Evaluation
References
Accompanying Material
Appendices
  Appendix A Personal Reflection
  Appendix B Initial and Final Project Plan
  Appendix C Test Plan
  Appendix D Visualisation User Manual
  Appendix E GLUI Feature List


1 Project Introduction

1.1 Problem Statement

The aim of the project is to construct and use suitable data structures and algorithms to generate 3D

visualisations of the output from the existing numerical water wave model OTT-2D, which is written in Fortran 90. The existing code generates simple but adaptive simulations of water waves in the form of numeric data output files; the task of this project is to turn that output into an efficient and clear 3D representation of the flow of water alongside the corresponding bed topography. The visualisations should be informative, flexible, efficient and attractive, and able to run on various platforms. They should make good use of HCI techniques to provide the user with simple, straightforward, yet powerful methods of manipulating the viewpoint and progress of the simulation.

1.2 Objectives

The current 2D numerical model provides accurate simulations of wave transformation, run-up,

overtopping and regeneration. The result of the simulation is a series of data files, containing arrays of

water depth, height of sea bed and flow velocity at each grid point throughout the period of the

simulation. The simulation can run across grids of various sizes and generate several finer scale grids

of varying range using an adaptive mesh refinement technique (AMR). The AMR allows for a high

level of efficiency, while providing greater detail (more points of recordable data) and increased

model adaptability, as the overlaid finer meshes are only applied to parts of the simulation that require

higher levels of resolution (such as a region of moving shoreline). While the numerical wave model

provides very accurate output data, it requires a suitable form of visualisation for the end user. The objectives of the project are therefore to investigate appropriate techniques for visualising the

numerical wave model data in a 3D environment and allow several methods for manipulating and

controlling the 3D representation. The objectives of this project are not to explore the means by which

AMR techniques should be developed, but rather how AMR data scenarios can and should be

effectively visualised.


1.3 Minimum Requirements

The requirements below define the minimum that must be produced in order to deliver a solution to

the problem:

• Produce a method of processing and storing wave run-up data from external data files
• Produce an algorithm to generate efficient and clear step-by-step animations of the wave run-up
• Employ AMR within the 3D visualisation of the wave and bed topography
• Employ techniques to manipulate the 3D wave visualisation, such as zoom/pan/rotate/colouring

Regular feedback from the author of the numerical wave model will result in a 3D visualisation that is

tailored exactly to the author’s requirements and any further enhancements or modifications can be

made as appropriate.

1.4 Other Areas to Consider

Once the minimum requirements have been met, there will be further scope to add:

• The ability to display flow characteristic calculations (velocity, height, etc.) at any given point on the water surface
• The functionality to load a selection of bed topography files, to change/enhance the animation of the wave run-up
• The ability to control animation speed, as well as support for pause, fast forward and rewind
• The ability to control the appearance of the mesh, with choices of wireframe surface, monotone or multicoloured (dependent on wave height/velocity)
• The incorporation of NVIDIA Cg rendering techniques to speed up the simulation/visualisation (only available for recent, DirectX 9.0c compatible graphics cards)


1.5 Deliverables

Final project deliverables will be:

• The final solution, in the form of one or more pieces of code and all necessary libraries required to run the code and visualisation
• Setup and operating instructions for the visualisation

1.6 Relevance to Degree Programme

This project will span a wide selection of topics that have been covered in university modules

including theories studied in lectures and practical skills learnt during coursework assignments. The

project will look mainly towards the computer graphics modules studied in the second and third years

but will also cover human computer interaction techniques from the GI11 and GI32 modules, software

methodology and management from the SE22 module and mathematical/programming experience

gained from the MA12, SO13 and SE21 modules.


1.7 Background Reading

1.7.1 Existing 3D Graphical Languages and Visualisation Environments

The choices of appropriate graphical languages and visualisation packages for this project are heavily

dependent on the platform and applications already available across the Leeds SoC computer network.

With this in mind, viable options are the MathWorks Inc. Matlab package, Iris Explorer and OpenGL.

Matlab is a data manipulation environment and a core programming language that includes a suite of

analytical tools and visualisation techniques, and allows for the use of existing functions and user-defined programs. It provides an interface that supports matrix manipulation, algorithm implementation

and user interfaces as well as many other functions and can run across a wide range of operating

systems. It is a proprietary piece of software costing as much as $2000 per license for commercial use

[MathWorks, 2006].

Iris Explorer is currently developed and released through the software house Numerical Algorithms

Group (NAG). It was originally designed and developed by SGI, the people responsible for OpenGL

and runs on a collection of OpenGL, Open Inventor and ImageVision libraries as well as NAG’s own

numerical libraries. The aim of Iris Explorer is to provide a powerful visual programming

environment for 3D data visualisation, animation and manipulation. Visualisations are built up

through the selection of existing modules within the application. This removes the requirement for

any coding, although custom modules can be built in the FORTRAN or C languages. It is currently

available across a large number of platforms, such as Microsoft Windows and Linux; however, the cost

of purchasing the software is approximately £900 (for academic purposes) [NAG, 2006].

OpenGL is an open specification 3D graphics and modelling library that is extremely portable and

very fast to implement [Wright, 2000]. It offers an application program interface (API) for defining

2D and 3D objects rather than being an actual programming language and must therefore be used in

conjunction with a programming language such as C or C++. Initially developed by Silicon Graphics,

it allows any computer user to download the required libraries and create an application that will run

cross-platform, provided the user has a graphics card that supports the OpenGL specification. The majority of off-the-shelf hardware has been OpenGL compliant for a number of years, as end users have strived for

business applications and games that make use of 3D graphics [Wright, 2000]. It is widely used in

CAD, virtual reality, scientific visualisation, information visualisation and video game development

[SGI, 2006].

A comparison of the Matlab, Iris Explorer and OpenGL software follows:


Available in SOC
Matlab: Yes
Iris Explorer: Yes
OpenGL: Yes

Prior Experience
Matlab: No, therefore a steep learning curve.
Iris Explorer: No, therefore a steep learning curve, although Iris Explorer does implement OpenGL so some features will be familiar.
OpenGL: Yes, from the GI21 and GI31 computing modules.

Cost to install outside of the SOC
Matlab: $100 academic licence, $2000 for commercial use [MathWorks, 2006], therefore costly to install outside of the SOC.
Iris Explorer: Approximately £900 for academic purposes [NAG, 2006].
OpenGL: End users are free from licensing [SGI, 2006]; easily available to anyone on any platform.

Cross Platform
Matlab: Supported on Linux, Mac OS X, Solaris and Windows [MathWorks, 2006].
Iris Explorer: Supported on Apple Mac, UNIX, IBM RISC, Linux, Windows, Solaris and Silicon Graphics IRIX, although it is only available on the SOC Linux system [NAG, 2006].
OpenGL: Standard on Windows 95/98/2000/NT and Mac OS PCs; runs on every major operating system including Mac OS, OS/2, UNIX, Windows 95/98, Windows 2000, Windows NT, Linux, OPENStep and BeOS. It is possible to implement OpenGL in Ada, C, C++, Fortran, Python, Perl and Java [SGI, 2006].

Visualisation Features
Matlab: Colour control, scaling, rotation and timing.
Iris Explorer: Colour control, scaling, rotation and timing.
OpenGL: Almost endless: colour control, transparency, texture mapping, lighting properties, material properties, perspective/orthographic projections, co-ordinate (matrix) manipulation (rotation, scaling, etc.) and timing functions.

Interaction Features
Matlab: Point and click GUI interface; the work area has standard drop-down menus.
Iris Explorer: Point and click interface; the render window has a standard drop-down menu to which items can be added/removed. Decoration can also be enabled to provide thumbwheels, sliders and push buttons accessible via keyboard and mouse, and widgets provide a GUI during development.
OpenGL: Mouse and keyboard functionality is offered by the GLUT libraries, and 3rd-party toolkits offer screen decoration in the form of buttons, sliders, arcballs, checkboxes, etc.

Figure 1.7.1. Table of comparison for Matlab, Iris Explorer and OpenGL applications.


In a comparison with Iris Explorer and Matlab, OpenGL stands out as offering the most in terms of

visualisation features and the best support for cross platform operations. Compared to Matlab alone, it

offers more interactive features and is on a par with Iris Explorer in this respect. When comparing

the advantages and disadvantages of the three software alternatives, OpenGL makes the most viable

argument as the platform on which to base the visualisation project, not only in terms of

compatibility, availability and features but also in the fact that prior knowledge and experience of

using it already exists. On a C++ foundation, the OpenGL graphical API and GLUT libraries make a

sound proposal.

GLUT and MESA

GLUT (OpenGL Utility Toolkit) is a window-system-independent library of graphics functions that implements simple window management for use with OpenGL. As GLUT handles window management, menu management, call-back registration, colour variation, object drawing and initialisation, it makes learning OpenGL considerably easier [Skillsoft, 2000]. The latest version allows the user to create a single application in OpenGL that will work on Windows, Mac and Linux machines without any need to write operating-system-specific code. That version, developed by Mark J. Kilgard, is not in the public domain, but is freely distributable without licensing fees [SGI, 2006]. However, Kilgard does state that “GLUT is not a full-featured toolkit so large

applications requiring sophisticated user interfaces are better off using native window system

toolkits.” [SGI, 2006]. This is unlikely to cause a problem as it is intended that a 3rd party interface

library will be used to create an appropriate user interface.
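To illustrate how little scaffolding GLUT requires, the sketch below opens a window and registers display and keyboard callbacks. It is a minimal illustration only; the function names and window title are chosen for the example and are not taken from the project code.

#include <GL/glut.h>
#include <cstdlib>

/* Redraw callback: clear the buffers; the wave mesh would be drawn here. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

/* Keyboard callback: quit when 'q' is pressed. */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 'q') exit(0);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    glutCreateWindow("OTT-2D Visualisation");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMainLoop();   /* hands control of the event loop to GLUT */
    return 0;
}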

Mesa provides an open-source implementation of the OpenGL specification [Paul, 2006]. It is similar

to GLUT in that it provides an API sitting directly above the OpenGL syntax, and offers portability

for OpenGL applications to run on systems that have no other OpenGL solution.

On the face of it, GLUT and MESA are both well written and popular OpenGL toolkits; however the

availability of GLUT is more widespread, both in the university and at home, making it the choice for this project.


1.7.2 Spatial Data Structures.

The organisation of spatial data structures is usually hierarchical and as such is nested and recursive in nature. The advantage of using a hierarchical structure is that queries generally become much faster,

with an improvement from O(n) to O(log n) [Moller, 2002].

Quadtree

The quadtree is a hierarchical data model that recursively partitions a two dimensional space into four

quadrants. The root node of the tree represents the complete domain and each node of the tree

represents a uniform subdivision of that domain. The quadtree gets its name from the fact that it always divides a parent node into four smaller but equal child nodes, each representing a quarter of the space of

their parent node [Figure 1.7.2.1]. As division is applied, child nodes become parent nodes, receiving

4 more children and they in turn become parents until the desired depth/level of partition is obtained.

There are several different forms of quadtree in use in a variety of situations. The region quadtree is

typically used to recursively subdivide a collection of objects into quadrants until, with enough levels of recursion, each object is located within its own cell (tree node) and no two objects share a node.

Figure 1.7.2.1. Quadtree example with 5 levels of refinement [Lawlor, 2001].

Quadtrees are regular in nature, meaning that they deal with partitioning data in a uniform manner.

While this can be more restrictive it can also be far more efficient due to the predictable way in which

they handle data. The quadtree data structure was originally proposed by Raphael Finkel and J.L.

Bentley [Finkel, 1974] and has become a very popular method for image representation, spatial

indexing, efficient collision detection in two dimensions, view frustum culling of terrain data and

storage of sparse data [Tremblay, 2004].
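As a concrete illustration of the recursive quartering described above, a minimal C++ sketch of a region quadtree node follows. The structure and field names are illustrative only and are not taken from any particular library.

#include <cstddef>

/* One node of a region quadtree covering the square from (x, y) with side 'size'. */
struct QuadNode
{
    float x, y, size;      /* extent of the region this node covers          */
    QuadNode *child[4];    /* NW, NE, SW, SE quadrants; all null for a leaf  */

    QuadNode(float px, float py, float s) : x(px), y(py), size(s)
    {
        for (int i = 0; i < 4; ++i) child[i] = NULL;
    }

    /* Split this node into four equal child quadrants, each a quarter of the parent. */
    void subdivide()
    {
        float h = size / 2.0f;
        child[0] = new QuadNode(x,     y,     h);   /* NW */
        child[1] = new QuadNode(x + h, y,     h);   /* NE */
        child[2] = new QuadNode(x,     y + h, h);   /* SW */
        child[3] = new QuadNode(x + h, y + h, h);   /* SE */
    }

    bool isLeaf() const { return child[0] == NULL; }
};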


Octree

An octree is fundamentally the same as a quadtree but works in a three dimensional environment.

Each parent node has eight child nodes that represent a refinement in space of the x, y and z

dimensions. Each child node therefore becomes an octant of its parent node and occupies a cuboid region in physical space [Philips, 2002]. In an ideal dataset of equally distributed objects, an octree would provide a balanced data structure. In reality, clustering of data will produce a very unbalanced tree, as uniform partitions would require many levels of recursion to divide the dataset up. A k-d tree attempts to solve this problem by dividing space unevenly [Tremblay, 2004].

Figure 1.7.2.2. Octree hierarchy. Adapted from [Lachance, 2005].

BSP Tree

A binary space partitioning (BSP) tree uses arbitrary splitting planes to recursively subdivide regions of the domain. BSP trees exist in two noticeably different forms, axis-aligned and polygon-aligned: the former uses bounding boxes within the scene to divide space (equally or not, depending upon the algorithm involved), while the latter uses edges from the shapes within the scene to generate the splitting planes [Haines, 2002]. Both approaches are

undertaken in a recursive manner, gradually dividing up the scene into smaller and smaller

compartments until the desired level of detail is achieved.

To store the data on each side of the binary tree, the partitioning algorithm assigns the left-hand leaf

nodes as negative and the right-hand leaf node as positive. Once an object within the scene has been

chosen as the basis of the splitting plane, all objects to the positive side of the plane are placed in the

positive side of the tree (the right-hand side).


Figure 1.7.2.3. BSP splitting planes and associated tree structure [Philips, 2002].

All the items appearing on the negative side of the plane are placed in the negative side of the tree (the

left-hand side). The positive and negative sides of the plane are dictated by the positioning of a

control within the domain, with the positive side being closest to the control and the negative side,

further away [Figure 1.7.2.3]. In computer games the control would be the camera (player). Where a

non-uniform item is chosen as the splitting plane, the direction and location of the plane is determined

by finding the shortest vector from plane to item boundary [Figure 1.7.2.4]. The item can then either

be split in two, with one half belonging to the positive side of the tree and the other to the negative, or

for simplicity (at the expense of performance), the entire item can be placed in both the positive and

negative side of the tree.

Figure 1.7.2.4. Positioning the splitting boundary [Philips, 2002].

Binary Space Partitioning Trees can be employed in more than one dimension of space if required but

usually quadtrees and octrees are preferred for two dimensional and three dimensional subdivision

due to the complexity of keeping track of splitting planes and their vectors in three dimensions

[Tremblay, 2004].


k-d tree

A k-d tree is a form of BSP tree that uses splitting planes perpendicular to the x, y or z axis. The ‘k’ in

k-d tree refers to the number of dimensions that the data structure is deployed in. A 1d-tree would deal

with splitting the area along just one axis. A k-d tree provides a balanced alternative to a quadtree or

octree, as it divides space unevenly in such a way that one half of the tree holds as many objects as the

other half [Tremblay, 2004].

Conclusion

The construction of most spatial data structures is expensive, and is usually done as a pre-process

although incremental updates are possible in real time [Haines, 2002]. This is not such a disadvantage

if loading times aren’t crucial, as once the tree has been set up, the performance gains can be very

impressive. The major use of spatial data structures is in the games industry, where their use in culling

and collision detection allows for higher FPS speeds and finer detailing [Tremblay, 2004].

Of the four data structures discussed, the quadtree displays the best attributes for use in an adaptive mesh refinement simulation and for storing and processing the OTT-2D data. The reason for this choice is that the quadtree firstly matches the 2D nature of the OTT-2D model and secondly is very similar to

the way in which the model stores wave data for each mesh refinement. Constructed from recursively

quartering cells into equally sized squares, it is ideally suited to the Cartesian mesh structure of the

OTT-2D model where each mesh cell is presented as a quad.

1.7.3 Adaptive Mesh Refinement

Adaptive mesh refinement (AMR) is a computational technique for improving the processing and

storage efficiency of numerical simulations. Refinements in both space and time to selected regions of the computational domain allow for higher detailing of important and interesting features, while less interesting parts are left at lower resolution. This approach is more efficient than computing the entire domain at high resolution, as the processor is required to do less work. So, for a small increase in processing power over the requirements of a coarse simulation, highly refined portions of the domain are possible.


Figure 1.7.2.5 AMR with 3 levels of refinement [Quirk, 1994]

Adaptive mesh refinement was first discussed in a series of papers [Berger, 1984; Berger, 1989] in which an algorithm for local adaptive mesh refinement was developed to achieve dynamic gridding. The algorithm starts with “a coarsely resolved base-level regular Cartesian grid” and “progressively calculates individual grid cells that require refinement” [Berger, 1984]. The criteria for choosing higher refinement can either be user-supplied (refine cells where value x is greater than y) or based

on a numerical analysis method such as Richardson Extrapolation [Israel, 2002], that gives acceptable

approximations as to where high detail will occur. Cells which are marked for refinement are then

processed in more detail and the resulting higher resolution mesh is overlaid onto the base grid. A

correction procedure is implemented to check that no two meshes of similar refinement level overlap

or that no two different levels of refinement share boundaries if they differ by more than 1 level of

refinement. This process is repeated, so that the finer mesh from the previous progression now

becomes the base grid on which refinement requirements are calculated. Where more detail is

required, an extra refinement will overlay the previous resolution of mesh.
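A user-supplied refinement criterion of the kind mentioned above can be very simple. The sketch below flags coarse cells whose water depth differs from a horizontal neighbour by more than a threshold; the criterion and names are illustrative assumptions made here, not the tagging rule actually used by OTT-2D.

#include <cmath>
#include <vector>

/* Flag cells whose depth jumps by more than 'threshold' relative to the next
   cell in the row; flagged cells would then be clustered and overlaid with a
   finer mesh. Illustrative criterion only. */
std::vector<bool> flagForRefinement(const std::vector<std::vector<float> > &depth,
                                    float threshold)
{
    int rows = (int)depth.size();
    int cols = (int)depth[0].size();
    std::vector<bool> flags(rows * cols, false);

    for (int i = 0; i < rows; ++i)
        for (int j = 0; j + 1 < cols; ++j)
            if (std::fabs(depth[i][j + 1] - depth[i][j]) > threshold)
                flags[i * cols + j] = true;

    return flags;
}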

The Berger–Oliger Method

The Berger–Oliger method of adaptive mesh refinement differs from the usual unstructured adaptive methods in the way it picks and handles cell refinement. Instead of replacing single cells with finer ones, the algorithm proposed by Berger and Oliger utilises a patch-wise approach, whereby cells flagged for refinement are clustered into rectangular sections ready for processing [Berger, 1984].


Once the finer cell values have been calculated, interpolation functions are used to transfer the values

to the coarse grid. Cell values on the coarse grid are covered by the finer mesh and overwritten with

averaged fine grid values. The resulting extra work is “usually negligible compared to the

computational costs for integrating the superimposed refinement grids” [Berger, 1984]

One of the areas in which AMR is predominantly utilised is in astrophysical simulations and image manipulation, an example of which can be seen in the collapsing giant molecular cloud core simulations of Klein et al., where AMR techniques allowed a reduction to 131,072 cells, compared to roughly 10^15 cells in an equivalent uniform grid [Klein, 2004].

The advantages of AMR can be summarised as the following:

• Increased computational savings compared to the majority of static grid approaches.

• Increased storage savings over a static grid approach, therefore reducing processing

requirements.

• Control of grid refinement and resolution when compared to the fixed resolution of a static

grid approach.

AMR has the disadvantage of being significantly more complicated and therefore more time

consuming to develop in applications when compared to the algorithms and data structures used to

create standard uniform meshes. The intricate data structures required to implement AMR, such as quadtrees and octrees, are not available as standard classes in all object-oriented languages and as such must be coded from scratch (the Java SDK does supply such classes but they are not available in

C++).

1.7.4 Existing AMR Visualisation Techniques.

The following projects are examples of existing visualisations that make use of AMR techniques:

DAGH (Distributed Adaptive Grid Hierarchy)

DAGH is a data-management infrastructure that provides a framework for solving systems of partial

differential equations with adaptive methods [Browne, 2006]. Currently in use at the Universities of

Texas and Rutgers (USA), it has been used to implement reservoir simulations, numerical relativity

and geophysical modelling. It is currently possible to interact with the DAGH toolkit using the

FORTRAN 77, FORTRAN 90 and C++ languages. The DAGH tutorial [Browne, 2006] lists the following features provided by the software:

• Distributed dynamic data-structures for Parallel Hierarchical AMR


• Transparent access to scalable distributed dynamic Arrays/Grids/Grid-Hierarchies

• Multigrid/Line-Multigrid support within AMR

• Shadow grid hierarchy for on-the-fly error estimation

• Automatic dynamic partitioning and Load distribution

• Scalability, Portability, Performance

It is clear that the system incorporates AMR techniques similar to those in the OTT-2D model, with the aim of providing high-resolution modelling while keeping processing power to a

minimum. It is interesting to see that the model uses parallel processing for extra performance boosts

and shares some of the requirements expected of the OTT-2D visualisation such as portability and

scalability. While a nice feature, it is considered out of scope to add parallel processing functionality

to the OTT-2D visualisation due to the complexities involved and the developer’s lack of experience

in this area.

ENZO

Enzo provides cosmological simulation code that utilises a structured Adaptive Mesh Refinement

algorithm to solve a wide range of astrophysical and cosmological problems. It is freely available to

download from the Enzo Resource webpages [Bryan, 2006] and has various achievements, including

responsibility for the world's largest simulation of the early universe [Bryan, 2006] and several front

page articles in National Geographic and Physics World magazine. While Enzo provides purely the

AMR code, there are several visualisation modules available to use. One of the most popular is

Jacques [Figure 1.7.4.1] which provides several levels of control over the simulation data.


Figure 1.7.4.1. Enzo simulation running on Jacques visualisation interface [Abel, 2006]

Jacques [Abel, 2006] presents a clean and uncluttered user interface that is built upon the IDL toolkit

but is very similar to some of the OpenGL interface packages available to download [Section 1.7.5].

The Jacques interface helps to demonstrate the kind of manipulation tools that users of AMR

simulations are likely to require and makes a good basis on which to design the OTT-2D visualisation.

MATLAB

The author of the OTT-2D code [Hubbard, 2002] has currently implemented a single refinement mesh

visualisation within Matlab. At present the system only utilises one of the OTT-2D output files and

must be re-loaded each time to create a step-through of the simulation. The visualisation displays only the water surface behaviour and the coarsest level of refinement, but it is a good starting point on which to base the new OTT-2D visualisation.


1.7.5 Human-Computer Interactions within 2D/3D Environments

Throughout the project, the aim will be to create and provide visualisation techniques that deliver the

most appropriate and useful solutions to the end user. Feedback from the project supervisor and

author of the numerical wave model will help to guide the direction of design and implementation, in

areas such as efficiency and display requirements.

Initial thoughts from the early design phase include a heads-up display of animation controls,

allowing the visualisation speed and time parameters to be adjusted. With the provision of

pause/resume and rewind, the user will be able to revisit key scenes in the visualisation without

having to wait for completion of the sequence and performing a restart. The user should also have the

opportunity to explore the visualisation in a truly three dimensional aspect. To provide this, the heads

up display of controls could include options for zooming in/out/panning and clockwise/anti-clockwise

rotation. Finally the visualisation should provide some means of controlling lighting and colours, as

well as surface textures for the water, land and sea bed.

All aspects of lighting can be controlled within the visualisation using various OpenGL commands.

Basic flat surface colours can be manipulated to appear three dimensional through the use of ambient,

diffuse and specular highlighting. These properties typically control how light is reflected and

refracted onto and around a scene. Material shininess can be controlled and the type and position of

lighting can be altered. Multiple light sources are possible. Bilinear/trilinear and anisotropic filtering

can be used as required to smooth and accurately shade the 3D scene. Texture mapping is also a

possibility but will probably not be used due to performance implications and the likelihood of detail

masking.
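The following sketch shows the kind of OpenGL calls involved in setting up one positional light and simple ambient/diffuse/specular material properties. The numeric values are illustrative defaults only, not settings chosen for the final visualisation.

#include <GL/gl.h>

void setupLighting(void)
{
    GLfloat lightPos[] = { 0.0f, 50.0f, 50.0f, 1.0f };   /* positional light */
    GLfloat ambient[]  = { 0.2f, 0.2f, 0.3f, 1.0f };
    GLfloat diffuse[]  = { 0.3f, 0.5f, 0.8f, 1.0f };
    GLfloat specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

    /* Material properties control how the surface reflects the light. */
    glMaterialfv(GL_FRONT, GL_AMBIENT,  ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE,  diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
    glMaterialf(GL_FRONT, GL_SHININESS, 50.0f);
}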

Timing controls within the visualisation will be more difficult to implement. One approach could be

to count FPS (frames per second) and implement logic into the generation algorithm to control how

many iterations of the mesh are produced each second. Raising or lowering this value would speed

up or slow down the visualisation and could produce a pause effect. Using an FPS calculator will also be

a good solution to unify display generation (time to process and display each mesh implementation)

across different platforms and architectures.

GLUI

The GLUI toolkit offers an open source GUI library in the C++ language that can be used to quickly

build graphical user interfaces with interactive controls [Rademacher, 2006]. GLUI is built on top of, and links directly to, the GLUT libraries, which means it is platform independent and will operate on any machine that supports GLUT. It relies on GLUT to handle all system-dependent issues, such as window and mouse management, so the GLUT libraries must be present on the system


in order to make use of GLUI. User interface features within the toolkit include the ability to add a

number of interface windows, the addition of interactive radio buttons, checkboxes, buttons and

spinners and ‘live variables’, where values linked to the interface controls are automatically updated

when the state of the control changes [Rademacher, 2006].

Figure 1.7.5.1. Example GLUI interface [Rademacher, 2006]

GLUI is known to be supported by the SOC Windows machines and can be installed on the Linux computers without the need for administration privileges. A complete listing of the features supported by GLUI is available in [Appendix E].

1.8 OTT-2D Overview

The OTT-2D numerical model is part of ANEMONE, a suite of numerical models developed for HR Wallingford and funded by the UK Ministry of Agriculture, Fisheries and Food. It is intended for use in coastal engineering, for predicting hazardous wave run-up and overtopping on

coastal structures. The model can “accurately model complex water motions in the nearshore region”,

something that existing linear models do poorly. OTT-2D can also overcome the simple wave and

beach profile selection limitations of earlier empirical tools, such as the flume and basin experiments

[De Waal, 1992] and the Manual on the Use of Rock on Coastal and Shoreline Engineering [MAN,

1991], by modelling complex wave behaviour without the need for flume tests. The 2D model (strictly 2D+1, in that it models two directions of propagation, cross-shore and alongshore, plus the element of time) shares with existing 1D models [Dodd, 1998; Titov, 1995] the fact that it is predominantly based on the nonlinear shallow water (NLSW) equations. However, it improves on 1D models by

offering simulations of “overtopping and regeneration by obliquely incident and multi-directional

waves over alongshore-inhomogeneous sea walls and complex, submerged or surface-piercing

features” [Hubbard, 2002]. To tackle the increased computational requirements of 2D over 1D

simulations, the model employs finite volume techniques and an adaptive Cartesian AMR algorithm.


Finite volume techniques allow the domain to be divided into a number of cells, where, in the case of

OTT-2D, the values of interest are typically located at the centre of the cell. Over time, AMR is then

employed to adaptively control the number of cells, using finer resolutions of grids where necessary

and as a result creating large computational savings. Within OTT-2D, ten-fold increases in speed are obtainable [Hubbard, 2002] without any loss of accuracy when compared to a uniform mesh displaying the same level of detail. The adaptive mesh refinement algorithm used in OTT-2D was developed by [Quirk, 1991] and works by refining, at higher resolution, the areas of the domain that require more accuracy. It then calculates data values for the entire domain using the values stored in the finest mesh levels. The model has been extensively tested and in the majority of cases has accurately reproduced numerical solutions [Hubbard, 2002].

Feature List

The following list describes the inner workings of the OTT-2D numerical model and is taken directly

from the ANEMONE OTT-2D User Manual [Hubbard, 2000]:

• OTT-2D replicates processes usually studied in a laboratory using a scale physical model, but

is quicker to set up, requires fewer staff and costs less than a wave flume basin.

• A wide variety of wave conditions can be studied; either regular or random waves can be

specified together with associated ‘set-down’ or long waves created by groups of primary

waves.

• The model absorbs reflected waves at its seaward end, allowing long wave sequences to be

studied.

• The principal uses of the model are to predict the varying position of the shoreline (i.e. wave set-

up) and overtopping rates.

• OTT-2D can also predict the water surface variations, wave induced velocities and

accelerations at any required location. These locations can be surf-zone, on a beach, on a

coastal structure or landward of it.

• The model will automatically store a wide variety of results, and can produce computer

animations of wave propagation for use in reports.


2 Design

2.1 Requirements

For the simulation to work effectively and accurately it will need to incorporate a number of

interdependent requirements.

2.1.1 Ability to Recognise and Handle OTT-2D Output Files

During a wave simulation run, OTT-2D will generate a number of files including data files of wave

calculations, as requested in the system parfiles. Each data file represents a slice in time during the

simulation, supplying the water depth, bed level and velocity in x and y directions for each point of

each mesh in the domain. The visualisation application will only be concerned with the data files

output from the OTT-2D simulator and will ignore any parfiles. The files of output data produced by OTT-2D have the extension .dat and by default take the form tpl000$.dat

where $ denotes a time slice in the simulation. The data output files are created in sequence so that

output file tpl0002.dat represents the timeframe of the simulation that occurs immediately before the

timeframe represented by tpl0003.dat and immediately after the timeframe represented by tpl0001.dat.

The visualisation application must be prepared to read in a small or large range of data files as

dictated by the input options of the OTT-2D code and be ready to accept the naming conventions used

in OTT-2D. This includes the naming of data files greater than 9 in sequence where a zero is dropped

before the number is appended (tpl0009.dat becomes tpl0010.dat and the filename remains only 7

characters in length) and similarly at sequence number 100 (tpl0099.dat becomes tpl0100.dat).
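Because the sequence number is simply zero-padded to four digits, the filename for any time slice can be generated directly. The helper below is a small illustration of this convention; the function name is chosen for this example and is not part of the project code.

#include <cstdio>
#include <string>

/* Build the OTT-2D output filename for time slice n,
   e.g. 1 -> "tpl0001.dat", 10 -> "tpl0010.dat", 100 -> "tpl0100.dat". */
std::string outputFilename(int n)
{
    char buffer[16];
    std::sprintf(buffer, "tpl%04d.dat", n);
    return std::string(buffer);
}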

Within each data output file the underlying structure and layout of the data will remain the same. This

will include a header section of 5 lines holding the following information:

• Overall X and Y dimension values for the domain (base grid)

• Level of refinements used in that timeframe

• X and Y factoring in logical coordinates (quads of size X by Y)

• X and Y factoring in grid coordinates (quads of size X by Y)

• Total number of mesh in timeframe

These properties control the domain for the underlying mesh in one particular timeframe. Under the

header section, the data output file lists the data values corresponding to each vertex of each mesh.

Each mesh includes its own header section with the following information:

• Mesh Refinement Level (Is the mesh at the coarsest level or at a higher resolution)

• Mesh start position (Position X1 ,Y1 in logical coordinates)


• Mesh end position (Position X2 ,Y2 in logical coordinates)

• Mesh start position (Position X1 ,Y1 in grid coordinates)

• Mesh end position (Position X2 ,Y2 in grid coordinates )

• Mesh dimensions

For each vertex of mesh, there are 4 associated data values. These fall under the column headings:

• d (Total water depth)

• u (Instantaneous velocity in X direction)

• v (Instantaneous velocity in Y direction)

• h (Still water depth)

The output files provide a structured representation of the mesh simulation data, sequentially storing the data for each mesh vertex in ascending order, whereby coordinate (x,y) on the mesh appears as the first entry in the output file and coordinate (x+1,y) appears second. It is also structured

in the sense that a complete set of data is offered for all mesh refinements, even where vertex points

are made redundant because they are overlaid in another finer resolution mesh. This structure is an

important feature that will allow refinement levels to be controlled via the application, allowing the

removal or addition of the refinement levels from display as required.
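To make the header layout concrete, the records below sketch the values listed above as they might be held after parsing. The field names are shorthand invented for this illustration; they are not identifiers taken from OTT-2D or from the project code.

/* Header of one data output file (one timeframe of the simulation). */
struct TimeframeHeader
{
    int domainX, domainY;                /* overall X and Y dimensions of the base grid */
    int refinementLevels;                /* levels of refinement used in this timeframe */
    int logicalFactorX, logicalFactorY;  /* quad size (X by Y) in logical coordinates   */
    int gridFactorX, gridFactorY;        /* quad size (X by Y) in grid coordinates      */
    int meshCount;                       /* total number of meshes in the timeframe     */
};

/* Header of one mesh within a timeframe. */
struct MeshHeader
{
    int level;                                       /* refinement level of the mesh   */
    int logicalX1, logicalY1, logicalX2, logicalY2;  /* start/end, logical coordinates */
    int gridX1, gridY1, gridX2, gridY2;              /* start/end, grid coordinates    */
    int width, height;                               /* mesh dimensions                */
};

/* The four data values recorded for each mesh vertex. */
struct VertexData
{
    float d;   /* total water depth                         */
    float u;   /* instantaneous velocity in the X direction */
    float v;   /* instantaneous velocity in the Y direction */
    float h;   /* still water depth                         */
};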

2.1.2 Facilitate Internal Storage of Simulation Data Values

The visualisation application must be able to open and load each output file from OTT-2D and

accurately store each value into system memory ready for use. The layout and structure of each data

file will remain the same, however each file will differ in a number of ways. Each file within one

simulation sequence will vary in size and as input parameters are changed, there will be a variation in

the number and size of files created in successive simulation runs. Due to the nature of AMR, the

location and amount of refinement are dictated through the implementation of the tagging algorithm.

This means that the number and size of mesh will change as the algorithms decide which areas of the

domain require higher and lower resolutions. If the problem domain requires a large amount of high

resolution mesh to display the wave simulation, then the number of vertex points will increase.

Similarly, if a high number of refinement levels are chosen, then the number of vertex points is also

likely to increase. Finally, the overall number of vertex points is dictated by the size and shape of the

domain.

The application must be prepared for these changes in dimension, refinement levels and resolution

requirements but also must be prepared to accept a change in the number of data files produced. These


variations are unpredictable and can only be determined once the OTT-2D algorithm has generated all

the data output files. The storage techniques employed by the visualisation must therefore be

completely dynamic:

• In the range of OTT-2D output data files (the operator of OTT-2D determines how many

steps each simulation will contain. If time remains constant, then longer simulations will

contain more steps and therefore more data output files).

• In the size of the domain (a 100x100 grid requires more data values than a 10x10 grid).

• In the level of refinement required. In a 10x10 domain with one level of refinement there will

be a maximum of one grid at the base resolution and one grid at twice the resolution (a 10x10

grid overlaid with a 20x20 grid in a worst case scenario). With 4 levels of refinement on the

same domain there will be 1 grid at base resolution, one at twice the resolution, one at 4 times

resolution and one at 8 times resolution (a 10x10 grid overlaid with a 20x20 grid, a 40x40

grid and an 80x80 grid in a worst case scenario).

• In the number of mesh required at each resolution (a complicated scenario may require a lot

of refinement in different locations within the domain, less complication requires less mesh

and therefore fewer data values).

2.1.3 Generate Accurate Representations of Wave Data in Mesh Form

The visualisation will use the data stored in the dynamic data structures to create a single mesh that

displays all the levels of refinement. The mesh will be built up as a series of quads that will

seamlessly link across areas of similar and different resolution without creating cracks or gaps in the

mesh. The mesh will appear to link between quad edges (except at boundaries of resolution change)

but will actually link between quad centres as the finite volume technique employed by the OTT-2D

model places all data values at the centre of the quads [Hubbard, 2002]. Using a quadtree structure,

the user will be able to add and remove levels of refinement from showing just one mesh (the base

and coarsest mesh) to the maximum number of mesh as defined in the OTT-2D output files, and still

visualise a solution without cracks and gaps. With a timing structure implemented, the user will be

able to run a step by step simulation of the wave run up with user defined levels of refinement.

As coordinates for the individual quad vertex points are not given in the structured data files, the mesh

will be rendered using the start and end points as defined in each mesh header block. The rendering

method will use a nested for-loop and a count variable to iterate through each row and column of quad

vertices, gradually building up the mesh. To calculate the level of the surface water, the renderer will use the depth value minus the bed value. For displaying the velocity vectors, the renderer will use the cross product of the x and y vector components.
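A simplified version of that nested loop is sketched below, drawing a single mesh as a set of quads whose height is the pre-computed water surface level (depth minus bed). The data layout and names are assumptions made for the illustration, not the project's actual structures, and the quad-centre offsets described above are omitted for brevity.

#include <GL/gl.h>
#include <vector>

/* Draw one mesh as GL_QUADS. 'surface' holds the water surface level at each
   vertex and 'cell' is the spacing between vertices. Illustrative only. */
void drawMesh(const std::vector<std::vector<float> > &surface, float cell)
{
    int rows = (int)surface.size();
    int cols = (int)surface[0].size();

    glBegin(GL_QUADS);
    for (int i = 0; i + 1 < rows; ++i)
    {
        for (int j = 0; j + 1 < cols; ++j)
        {
            glVertex3f(i * cell,       surface[i][j],         j * cell);
            glVertex3f((i + 1) * cell, surface[i + 1][j],     j * cell);
            glVertex3f((i + 1) * cell, surface[i + 1][j + 1], (j + 1) * cell);
            glVertex3f(i * cell,       surface[i][j + 1],     (j + 1) * cell);
        }
    }
    glEnd();
}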


2.1.4 Control Simulation Timing

Timing is a key component to creating an effective visualisation. It is fundamental that the application

allows step by step generation of the simulation in a timed and regular fashion while still offering the

user control over speed and direction. The user should be able to replay the simulation step by step, as

many times as required without having to reload data from disk or output files and without having to

exit and re-run the application as this would create unacceptable periods of user inactivity. To create

the periodic steps between each stage of the simulation, the application must have access to some

form of accurate and regular time-keeping device that is independent of processor clock cycles or performance capabilities (this is only necessary provided that the processor in question falls into the categories discussed in Section 2.3, Hardware Requirements). Controlling the run of the simulation through the

use of clock cycles would make it impossible to predict the timing on different machines, with slower

processors running the simulation over larger time periods than faster processors. In a similar manner,

using a Frame per Second (FPS) calculator could control the simulation, but run times would vary

across different platforms with more recent and powerful graphics cards having the capacity to

generate and display each mesh cycle in less time than older hardware. The only viable solution to

creating a standardised run time across different platforms and hardware is to use calls to the

Operating System time keeping operations. All operating systems based on the X86 architecture

facilitate the means to query time measured in the SI unit of seconds. Many of the applications that

run on the OS require the OS time keeping operations but it is most obvious to the user in the form of

the clock, displayed in the bottom right-hand corner of the screen in the Windows and Linux

Operating Systems. The OS keeps an accurate and orderly track of universal time by querying the

battery powered CMOS chip on the computer motherboard during start up. The OS will then keep a

uniform track of universal time regardless of the processor speed or graphics card on which the OS is

displayed. While methods are available to access the OS time keeping operations, these methods will

appear and behave differently across different platforms. This means that to allow the visualisation to

run across the Windows and Linux environments, there will have to exist separate time methods for

each operating system and there will be the requirement for different libraries/header files to be

loaded. The amount of extra work required to implement this across the two platforms, and how similarly the visualisation timing performs under Windows and Linux, will become clear during the implementation and evaluation stages.
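A sketch of what such per-platform time methods might look like is given below, wrapping GetTickCount on Windows and gettimeofday on Linux behind one function. This is an assumption about how the two OS calls could be unified, not the project's final implementation; the returned value is only meaningful when two readings are subtracted.

#ifdef _WIN32
#include <windows.h>

/* Milliseconds since the system started, via the Windows API. */
unsigned long elapsedMillis(void)
{
    return (unsigned long)GetTickCount();
}
#else
#include <sys/time.h>

/* Milliseconds derived from gettimeofday on Linux/UNIX systems. */
unsigned long elapsedMillis(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (unsigned long)((unsigned long long)tv.tv_sec * 1000ULL
                           + (unsigned long long)(tv.tv_usec / 1000));
}
#endif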

2.1.5 Allow Simulation Manipulation

Using interaction techniques, the end user should be able to manipulate features of the simulation such as the viewpoint (scaling, panning and rotating), control lighting intensity and position, control the speed at which each timeframe is played, and choose any timeframe to revisit.


2.1.6 Support Cross Platform Operation

Through the use of OpenGL and C++, it should be possible to compile the project on any Windows or

Linux system that is setup with C++ and OpenGL compilers and necessary libraries. This includes the

vast majority of workstations within the SOC. Once compiled it should be possible to run the

visualisation application on any Windows or Linux machine without the need to install new files, add

libraries or alter existing system files. Using the Windows-based C++ compiler provided in Microsoft

Visual Studio will create an executable of the visualisation application that will run under the

Windows environment. Using the C++ compiler g++ on the SOC Linux machines will create an

executable that will run under the Linux environment.

2.2 Preliminary Design

To meet the minimum requirements of the project [Section 1.3] and achieve the requirements

discussed in the design requirements [Section 2.1], the application will have to make use of the

following functionality:

2.2.1 Vectors as Dynamic Storage Elements

The C++ language includes a library which can be used for dynamic data storage. The vector.h library

allows the user to incorporate a container template within their application that behaves in a similar

manner to the array structure but is dynamic in the sense that the length of the structure does not have

to be declared before use. With an array, the maximum size of the contents must be declared using a

constant value, before the array can be used to store any data. This is a severe limitation, as the length

and number of wave data values will change unpredictably between different runs of the OTT-2D

simulation. If the application cannot adjust to take the extra values that may occur in the OTT-2D

output files, then the visualisation will not be displayed correctly and could be considered as invalid.

A vector does not require its length to be declared before use and instead incorporates memory

management techniques to expand the structure as new elements are added. Vectors will allow any

number of values to be added without the need for change in the structure that is apparent to the

programmer or user, and will accept all types of data including user defined data types like ‘struct’

and class. Appending items to the structure is done simply with the ‘push_back’ command and any

item within the vector can be accessed using the same methods that apply to the array structures

[Jenkins, 2003].

MeshVectors.push_back(any_item);
Temp = MeshVectors[x];


Accessing members of a vector can be done in constant time and appending elements in amortised constant time, whereas locating a specific value or inserting elements into the middle of the vector takes linear time [Jenkins, 2003]. While it is easy

to create a 2D or even 3D array within C++, the vector container is only a 1D data structure. To create

a 2D vector, a vector structure that holds a list of vectors must be created (i.e. a vector of vectors).

typedef vector<float, allocator<float> > MESH;
typedef vector<MESH, allocator<MESH> > MESH_LIST;
typedef vector<MESH_LIST, allocator<MESH_LIST> > MESH_TOTAL;

As a number of 2D vectors and possibly 3D vectors will be required to correctly handle the OTT-2D

output files, several structures, as demonstrated above, will have to be created.
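For illustration, a minimal sketch of how such a nested structure might be filled is given below; the variable names are hypothetical and the explicit allocator arguments are omitted for brevity:

#include <vector>
using namespace std;

typedef vector<float> MESH;
typedef vector<MESH> MESH_LIST;

MESH_LIST depthFrames;              // one MESH per timeframe
MESH frame;                         // depth values for a single timeframe
frame.push_back(0.5f);              // append values as they are read from file
frame.push_back(0.75f);
depthFrames.push_back(frame);       // append the completed frame to the 2D structure
float d = depthFrames[0][1];        // element access works exactly as with arrays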

2.2.2 Using a Quadtree for Mesh Joining at Different Levels

Using the structured OTT-2D output files, the visualisation will need to implement a data structure

that can store and allow retrieval of each mesh refinement in such a way as to prevent overlap and

gaps/cracks where the refinements meet. The rules for refinement as used in the OTT-2D numerical

model state that two meshes at the same refinement level can never overlap, and that no two meshes can share a boundary if they differ by more than one level of refinement [Hubbard, 2002].

The proposed solution to the overlap and gap problem is the implementation of a quadtree structure as

discussed in [Section 1.5.2]. A quadtree will be created for each quad on the coarsest level of the mesh

(the base mesh) and in its root store the depth, bed and velocity components associated with that quad.

As levels of refinement are added to the mesh, it will become apparent which quads will be refined

and replaced. When this occurs, the values stored at the root of the quadtree will not be deleted but

instead 4 new nodes will be added to the tree, branching from the root node. The new leaf nodes will

hold the depth, bed and velocity components for the finer mesh quads that overlay the coarser quad.

This process will repeat until all refinement levels have been added to the quadtree structures. It will

then simply be the case of reading from the ends of each quadtree branch (the leaf nodes) to get the

finest values for each area of the mesh and the depth of the leaf node to determine the scale at which

the vertices should be plotted [Figure 2.2.1].


Figure 2.2.1. Skipping leaf nodes to create refinement joins (base mesh, 1st and 2nd levels of refinement).

The quadtree approach has the advantage of allowing refinement levels to be removed and re-added at

will from within the visualisation. If the quadtree stores 4 levels of refinement but only 2 levels are

required then traversal of the tree can occur down each branch to a maximum depth of 2 [Figure

2.2.1]. The disadvantage of the quadtree approach is the additional processing power and storage required to build and traverse a large number of trees. If a mesh of 40x40 is simulated, then 1600 quadtrees would be required, and this would cover only one timeframe of the simulation.
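A minimal sketch of the kind of node such a quadtree might use is given below; the field and function names are illustrative rather than taken from an actual implementation:

struct QuadNode {
    float depth, bed, velX, velY;            // values stored for this quad
    QuadNode* children[4];                   // all null for a leaf node
    QuadNode(float d, float b, float vx, float vy)
        : depth(d), bed(b), velX(vx), velY(vy) {
        for (int i = 0; i < 4; ++i) children[i] = 0;
    }
};

// Visit the finest stored values, but never deeper than maxLevel, so that
// refinement levels can be stripped away as described above.
void visitFinest(QuadNode* node, int level, int maxLevel) {
    bool isLeaf = (node->children[0] == 0);
    if (isLeaf || level == maxLevel) {
        // plot node->depth, node->bed etc. at a scale determined by 'level'
        return;
    }
    for (int i = 0; i < 4; ++i)
        visitFinest(node->children[i], level + 1, maxLevel);
}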

2.2.3 Lighting and Normals

Several different forms of lighting can be used within an OpenGL scene to add characteristics such as

highlighting and reflection, and with the aid of the stencil buffer, shadowing. Two forms of lighting

will be used within the application and could be controlled in terms of positioning and intensity via

the user interface. Ambient lighting will be used to add backlighting to the entire scene so that no area of

the mesh falls into permanent shadow. Ambient illumination describes light that has been scattered

around the environment to such a large degree that it appears to travel from all directions and usually

occurs because the light has reflected off a large number of surfaces before reaching the eye. A

specular component will be added to scene lighting so that a stronger directional influence can be

used to pick out, from surrounding areas, particular areas of interest on the mesh surface. The specular

light is preferred to the use of diffuse lighting, as it doesn’t scatter on impact with the surface and if

not too intense, can give nice highlighting to the area of interest. The strength, colouring and position

of the lighting in the scene will be controlled using the standard OpenGL lighting functions [Wright,

2000].

GLfloat LightAmbient[]  = {0.4, 0.4, 0.4, 1.0};
GLfloat LightSpecular[] = {0.0, 0.0, 0.0, 1.0};
GLfloat lightPos[]      = {0, 100, 0, 0};
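These arrays would then be handed to OpenGL using the standard lighting calls; the lines below are a sketch of that usage rather than the exact code used:

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT,  LightAmbient);    // backlighting for the whole scene
glLightfv(GL_LIGHT0, GL_SPECULAR, LightSpecular);   // directional highlight component
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);        // w = 0 makes the light directional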


Using averaged normals to control lighting within the scene will give the impression of a smooth

rather than faceted surface to the water and bed topography mesh. To create a smooth impression the

normals are averaged between neighbouring quads so that hard edges are smoothed out. The normal

of each quad is the vector perpendicular to the quad face [Figure 2.2.2] and is calculated by taking the cross product of two edge vectors formed from three of the quad's vertices (so long as the three vertices do not lie in a straight

line) [Redbook, 1994].

Figure 2.2.2. Calculating a normal [Redbook, 1994].
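For illustration, the cross product calculation might be written as follows; the function and variable names are hypothetical:

#include <cmath>

void quadNormal(const float v0[3], const float v1[3], const float v2[3], float n[3]) {
    float a[3], b[3];
    for (int i = 0; i < 3; ++i) {            // two edge vectors of the quad
        a[i] = v1[i] - v0[i];
        b[i] = v2[i] - v0[i];
    }
    n[0] = a[1] * b[2] - a[2] * b[1];        // cross product a x b
    n[1] = a[2] * b[0] - a[0] * b[2];
    n[2] = a[0] * b[1] - a[1] * b[0];
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }   // normalise to unit length
}

Averaging these per-quad normals over the quads that share a vertex then gives the smooth appearance described above.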

It is not yet known how effective the implementation of normals or lighting will be in the scene,

especially as the mesh will ideally be drawn using a wireframe superimposed onto the solid fill of the

quad. As normals serve to smooth a faceted surface, the appearance of the wireframe will just add

back a definition of the quad edges that the normals removed. It can also be assumed that the extra code and algorithms needed to calculate the normal for each quad in the mesh will require more

processing power from the CPU and could therefore reduce rendering speed.

2.2.4 Creating Timeframes

The use of timing functions will differ across the Linux and Windows platforms. This is due to

different libraries and routines that are used to keep track of time in the operating systems.

Unfortunately, developing the timing functions will take twice the amount of effort and time to fulfil

the requirements of both systems. Where the Windows system is concerned, the windows.h header will provide access to the method timeGetTime(), which retrieves the current system time in

milliseconds. Using current time as the starting point, the application will need to create and keep

track of the time intervals between each timeframe, either through the use of the high-resolution performance counter or, if that is not available, the multimedia timer. The Linux variant will operate under similar

principles but will use time retrieval and time counting methods that are available to Linux C++


compilers. As windows.h is a Windows-only header file, the application will need to inform the compiler to include it when compiling on a Windows machine and to ignore it on a Linux machine.
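A sketch of how this conditional inclusion might look is given below; the Linux branch uses gettimeofday() as one possible equivalent, which is an assumption rather than the routine actually chosen:

#ifdef _WIN32
    #include <windows.h>                       // provides timeGetTime() (links against winmm)
    unsigned long currentTimeMs() { return timeGetTime(); }
#else
    #include <sys/time.h>                      // provides gettimeofday()
    unsigned long currentTimeMs() {
        timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }
#endif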

2.2.5 Using the GLUI Toolkit for Creating Interactive Controls

Version 2 of the GLUI toolkit will be used to create the interactive user interface as it is the most

recently supported release and offers the most functionality to date. The toolkit has been designed

with platform independence in mind and as such will run on, and appear identical across, any operating

system that supports the OpenGL framework and GLUT libraries. The toolkit offers a number of user

controls that can be placed onto control panels, including rotation and linear transformation widgets, checkboxes, radio buttons, text input boxes and command-style buttons. The

implementation of the GLUI interface occurs in the main method of the application and uses separate

methods to perform actions when each interface tool is used.

void main() {
    glui = GLUI_Master.create_glui_subwindow(main_window, GLUI_SUBWINDOW_RIGHT);
    glui->add_button("Run", RUN, control);
    ...
}

void control(int control_id) {
    if (control_id == RUN) { runscenerio(); }
    ...
}

The above code demonstrates how the GLUI windows are created and placed into the scene at the desired locations (first line of the main method) before each control is added. Each control is given a style that determines how it appears on screen (second line of the main method); in this scenario the control is a command-style button, as found in the majority of graphical user interfaces. When the control is activated via the

user interface it calls an appropriate action in the control method which in turn initiates the desired

transaction. The GLUI interface will be used in the visualisation to create a side panel of tools for

controlling the simulation run, rotation, panning and scaling functionality.

2.3 Hardware Requirements

Machine specifications will influence the visualisation loading times and overall usefulness of the

interaction techniques employed. For the user to experience and manipulate the visualisation in real-

time, a speed of 25 FPS or higher must be achievable. With vertex arrays and Vertex Buffer Objects

employed where available, unnecessary load from the CPU should be switched to the GPU. With this

in mind, a reasonable 3D accelerator will be required and a CPU capable of performing

transformation and lighting matrix manipulation. The machines available in the laboratories of the

SOC will be more than adequate, with specifications of around 2.8 GHz for the CPUs and GeForce


series 4 and 5 graphics cards. The absolute minimum specification for a mesh of maximum size 50x50

and maximum level of refinement 4 would be a Pentium III 1 GHz processor and a GeForce 4 series graphics card [Section 5.4]. As mesh sizes and refinements increase, so will the specifications required

to run the simulations.

2.4 Software Requirements

No additional files or setup will be required to run the executable of the simulation provided that the

system has the GLUT library present. If this is not the case then the library can be made available

simply by placing the file GLUT32.DLL inside the windows\system folder. GLUT is available

to download from the Nate Robins OpenGL GLUT for Win32 website and is open source.


3 Implementation

3.1 Methodology

In terms of software development lifecycles, the project will only have a short time to produce the

final solution. For this reason it is important to use techniques that track and counteract time scale

slippage throughout the development phase. Communication and understanding between team

members will not be a factor, as there will only be a single developer working on the system, and therefore

a rapid prototyping approach can be used to get a working demo of the visualisation up and running in

a short time. When this is complete the project can move onto an iterative development process,

allowing the developer to test and re-design each step as is required. The iterative process will allow

the visualisation to be built in stages starting with the basic framework. When this has been tested and

proven to be stable, it will allow further functionality to be added which in turn can be tested and then

altered if required. When a new component operates correctly and meets its requirements,

development can shift to the next piece of functionality where the build, test and re-design cycle will

be repeated. Finally, a third phase will allow for the re-design of particular sections of the project

without placing the project time scales in jeopardy.

A suitable methodology to use in this scenario would be the Rapid Application Development process,

developed and published by James Martin while at IBM in the 1980s. The RAD process is a direct

response to the non-agile development processes offered by models such as the Waterfall lifecycle

and has the key goal of combating long project development lifecycles. RAD allows applications to

be developed quickly, thus avoiding complications such as last-minute requirement changes that can render finished applications unusable, and can also improve quality, as shorter projects incur lower costs. However, the downside of RAD is that the fast prototyping approach can reduce scalability and

features.

3.2 Development Plan

A Gantt chart displaying the project deliverables is shown in Appendix B in both initial and final

forms. The Gantt chart is an invaluable asset in project planning as it provides a very clear visual of

the project steps, how they interrelate and how long they will take. Using the Gantt as a comparison

with the current progress of a project will quickly show any slippage and how this will have an effect

on other elements of the project.

The following table demonstrates the key milestones and dates for successful completion of the

project, meeting all the minimum requirements and delivering the solution on time:


Date | Deliverable | Description

Requirements Gathering (End Nov '05) | Project aim and requirements discussed | Project aims, minimum requirements and possible future enhancements discussed.

Phase 1 (End Dec '05) | Completion of input/processing algorithms; rough single layer mesh displayed on screen | Completion of algorithms for reading in external data files, storing and manipulating the data ready to generate mesh simulations. Start of on-screen mesh visualisations.

Phase 2 (Mid Feb '06) | Implementation of single layer mesh with some manipulation techniques | On-screen visualisation of 3D waves using a single layer mesh, with user manipulation features such as zoom/pan.

Phase 3 (Mid Mar '06) | Addition of multiple layer adaptive mesh refinement | Addition of algorithms to deal with the adaptive mesh refinement.

Phase 4 (End Mar '06) | Further work on manipulation techniques | Work on mesh layer colouring and bed topography settings.

Phase 5 (Mid Apr '06) | Incorporation of possible enhancements |

3.3 Development Phases

Phase 1

Following the methodologies of the RAD process, the user requirements collected were used to

quickly build a simple, working visualisation solution. This initial phase concentrated on developing

the methods required to read in one level of mesh data and display the mesh on the screen and, importantly, provided the basic framework on which the rest of the application was based. The initial

file reading and mesh drawing methods were tested for stability using a selection of data files of

various sizes and values and then used to demonstrate the underpinnings of the visualisation. The end

user was able to appraise the basic functionality and provide critical feedback on expected behaviour.

When it was agreed that the application was correctly accepting the OTT-2D files and drawing the

anticipated mesh, development moved onto the next phase.


Figure 3.4.1. Initial plot of water surface.

Phase 2

At this stage it was decided to switch from applying manipulation techniques outlined in the

development plan, to concentrating on the development of a multi-layered mesh. Due to longer than

planned development in phase 1, manipulation functionality was pushed back to phase 3 and attention

in phase 2 switched to applying the adaptive mesh refinement with the quadtree implementation.

Adhering to the time boxing element of the RAD development methodology, it was decided that the

building of manipulation techniques should be switched to phase 3 allowing development to focus on

incorporating minimum requirements. While the interactive element was still seen as important, it was

assumed that these techniques would be far easier and quicker to implement than the adaptive gridding

functionality. Unfortunately, it soon became apparent that the design and implementation of the

quadtree structure would not be possible under the project time constraints. Attention switched to

alternative solutions [as described in 3.5.3 and 3.5.4] and an adaptive mesh visualisation without

overlap was achieved using unstructured mesh data. The adaptive mesh was tested for stability using

different input files and improved to show bed topography and surface water simultaneously.


Figure 3.4.2. Plot of water surface and bed topography using unstructured data

Phase 3

Phase 3 concerned the implementation of basic manipulation techniques. Keyboard control for

panning and zoom was implemented alongside functionality allowing mouse movement to control

rotation about the x and y axes within the scene. A numbering system based on the keyboard number

keys was developed to control the stages of the wave run-up simulation. At this stage, minimum

requirements had been met, if not in the way originally intended. After a series of tests to confirm the

rotation and linear transformations were operating correctly, development switched to phase 4.

Phase 4

A fully interactive user interface was incorporated using the GLUI toolkit [Section 1.7.5]. The features available in the toolkit provide a fully windowed, system-independent means for the user to control the visualisation. Rotation and translation controls were added to give

complete 16 point rotation, scaling and positioning of the scene and lighting. Keyboard control of the

simulation timeframe was removed and replaced with a slider allowing any step of wave flow to be

revisited regardless of the number of steps within the simulation (the keyboard is limited to a

maximum of 10 steps as there are only 10 number keys). Fine tuning and re-testing of the control

sensitivity was carried out to make sure that scene rotation and simulation timings were of practical

speed.


Figure 3.4.3. Plot of water surface and bed topography using unstructured data and user interface.

Phase 5

From initial testing it was clear that the visualisation speed was too slow to allow seamless human

computer interaction. The inefficient calls to the OpenGL drawing primitives [Section 3.5.1] were

replaced with vertex array [Section 3.5.1] functionality, which had the added bonus of reducing the

data file loading times [Section 5.4]. As a result of implementing vertex arrays, vertex buffer objects

[Section 3.5.2] were incorporated, delivering huge reductions in rendering times and allowing extra

features such as transparency and anti-aliasing to become viable. To aid in cross platform operability,

it was decided that a move to the GLUT timing properties would be beneficial. This replaced the

current timing methods that had to exist in two instances, one for the Windows environment and one

for Linux and as a result was far easier to implement. Checkboxes were added to the GLUI user

interface to allow the user to control display options including viewing the animation in wireframe

mode or solid fill, showing vectors of the wave velocity for each time frame, displaying the wave and

bed topography independently or simultaneously, and displaying and adjusting the level of water

transparency. Also incorporated into the user interface were buttons to start an unaided simulation run

and reset buttons to place the camera into predetermined viewpoints. To adjust the speed of the


simulation, a slider was incorporated to control the time intervals between the display of each

timeframe. To round off the user interface, an FPS counter was created and used to display the speed of the running visualisation over each second.

Figure 3.4.4. Plot of transparent water surface, bed topography and user interface.

3.4 Description of Methods

FileParse

The FileParse class is responsible for opening the OTT-2D output files, reading in the data values and

storing them internally within the application so that they can be accessed by the DrawMesh method.

Using C++ input streams (fstream), the application is able to locate and then open the relevant OTT-

2D files. If the correct output files cannot be located or for some reason cannot be opened, the FileParse

method has error handling techniques to alert the user.


file1 = "unstr" + ostr.str() + ".lat";
ifstream lat(file1.c_str());
if (lat.is_open()) { ... }
if (!lat) {
    cerr << "Error: File " << file1.c_str() << " cannot be opened" << endl;
}

The FileParse method handles the reading of data values from both the unstr.lat and unstr.pyr files,

first reading in all the entries from the unstr.lat file and storing the x and y coordinates, depth value, bed value, and x and y components of velocity in a 1D vector. Using the values in the unstr.pyr file, the

DrawMesh method can now cross reference with the values in the vector abstracted from the unstr.lat

file to fill another 1D vector with a sequential list of all the mesh coordinates, depth and vector

components needed to recreate the mesh from the OTT-2D algorithm. To add the time component of

the visualisation, the FileParse method simply loops through the required number of frames (specified

by the user on the command line), reading in and cross referencing the values from the relevant

unstr.lat and unstr.pyr files so that a 2D vector containing a sequential list of each 1D mesh is created

[Figure 3.4.2].

DrawMesh

The drawMesh method is responsible for recreating a graphical mesh representation of the data values

collected from the FileParse method. To achieve satisfactory display times using OpenGL, the

DrawMesh function utilises vertex arrays and vertex buffer objects [Sections 3.5.1 and 3.5.2

respectively] but is also responsible for applying viewpoint and model transformations, mesh

colouring and scene lighting. Calls to the OpenGL colour command glColor4f are used to give the

surface water areas of the mesh transparency while ambient, diffuse and specular lighting components

are used to control the appearance of light and materials within the scene. The DrawMesh method

applies the viewpoint and model transformations by calling the Camera function which in turn

receives the model transformation matrix values from the move and MouseMove methods. The

drawMesh method is repeatedly called from the Main method to create a dynamic rather than static

visualisation.

Init

On loading the visualisation, the initialisation method is called and occurs immediately after the

GLUT display and windows are created by the main method. The Init function is only ever called

once, as it is responsible for setting up the perspective in the scene.


Main

The main method is responsible for controlling the order of all the other method calls. It is also where

the GLUT window creation occurs and where the GLUI interaction tools are built. The

glutMainLoop() command is used to initiate the continuous cycling of the DrawMesh function.

RunScenerio

The RunScenerio method is used to increment through each mesh of the simulation when the user

selects the ‘Run’ option on the GUI. It takes a time interval chosen by the user and, using the GLUT command glutGet(GLUT_ELAPSED_TIME), produces a constantly moving simulation that steps between successive meshes.

float fps;
char sfps[10];
frame++;
timesp = glutGet(GLUT_ELAPSED_TIME);
if (timesp - timebase > 1000) {
    fps = frame * 1000.0 / (timesp - timebase);
    sprintf(sfps, "%5.2f", fps);
    glui_fps->set_text(sfps);
    timebase = timesp;
    frame = 0;
}

FPS

The FPS (Frames per Second) method is responsible for calculating the speed of the simulation. Using

the GLUT command GLUT_ELAPSED_TIME, the FPS method creates a variable that calculates the time

elapsed, in milliseconds since the Init method was called. [The GLUT_ELAPSED_TIME command is

explained in more detail in 3.5.5].

Camera

The camera method is supplied with the transformation and rotation values calculated by the Move

and MouseMove methods. The method then uses these to apply the transformations and rotations

using the OpenGL commands glRotatef and glTranslated.
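A minimal sketch of how the Camera method might apply these values is shown below; the parameter names and ordering are illustrative only:

void Camera(float angleX, float angleY, float panX, float panY, float zoom) {
    glLoadIdentity();
    glTranslated(panX, panY, -zoom);        // panning and scaling (moving the eye)
    glRotatef(angleX, 1.0f, 0.0f, 0.0f);    // rotation about the x axis
    glRotatef(angleY, 0.0f, 1.0f, 0.0f);    // rotation about the y axis
}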

MouseMovement

The MouseMovement method, called in the Main method of the application, uses the movement of

the mouse to control the rotation of the scene about the x and y axes. The current position of the

mouse is constantly tracked and compared with the last known mouse position. If the current mouse

location value is greater than the previous position value, then rotation is applied in a clockwise

direction and if the current mouse location is less than the previous position value, then rotation is

applied in the anti-clockwise direction. The angle of rotation is then supplied to the camera function

either as a positive value, for clockwise rotation, or as a negative value, for anti-clockwise rotation.


Using the GLUT command glutMotionFunc(), the rotation of the scene is only activated while a mouse

button is held in the down position.
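A sketch of a motion callback of this kind is given below; the variable names and the rotation scale factor are illustrative:

int lastX = 0, lastY = 0;
float angleX = 0.0f, angleY = 0.0f;

void MouseMovement(int x, int y) {          // registered with glutMotionFunc()
    angleY += (x - lastX) * 0.5f;           // horizontal drag rotates about the y axis
    angleX += (y - lastY) * 0.5f;           // vertical drag rotates about the x axis
    lastX = x;
    lastY = y;
    glutPostRedisplay();
}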

Move

The Move method is responsible for controlling the linear transformations within the visualisation,

mainly the zoom (scaling) factor and panning properties. It tracks and adjusts the current panning and

scaling properties as dictated by the Keyboard method, incrementing and decrementing the scaling

and panning properties as required.

Keyboard

In the keyboard method, commands within the visualisation are assigned to individual keyboard keys.

The user is able to use these keys to control visualisation zoom, panning and rotation as well as

initiating a simulation run. When one of the control keys responsible for scene panning or scaling is

activated, it calls the Move method and passes a value that is used to determine whether the scene is

to be positively or negatively scaled or the direction the scene should be panned. In the case of

initiating the simulation run, the control key accesses the RunScenerio method.
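For illustration, and assuming the standard GLUT keyboard callback, the key bindings might be handled as follows (the specific keys and constants shown are hypothetical):

enum { ZOOM_IN, ZOOM_OUT, PAN_LEFT, PAN_RIGHT };

void Keyboard(unsigned char key, int x, int y) {   // registered with glutKeyboardFunc()
    switch (key) {
        case '+': Move(ZOOM_IN);   break;          // positive scaling
        case '-': Move(ZOOM_OUT);  break;          // negative scaling
        case 'a': Move(PAN_LEFT);  break;
        case 'd': Move(PAN_RIGHT); break;
        case 'r': RunScenerio();   break;          // start the simulation run
    }
    glutPostRedisplay();
}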

3.5 Solutions to Problems Encountered

3.5.1 Vertex Arrays

For drawing large quantities of primitives such as triangles and quads, OpenGL offers a faster and

more efficient method. “The best way to provide a modern graphics accelerator with model data is by

using what OpenGL calls vertex arrays”. Vertex arrays replace the need to repeatedly call and supply

the vertex function with values to create each vertex of each shape:

renderDevice->beginPrimitive(RenderDevice::TRIANGLES);
renderDevice->sendVertex(Vector3(0, 0, 0));
renderDevice->sendVertex(Vector3(1, 0, 0));
renderDevice->sendVertex(Vector3(1, 1, 0));
... (other vertices)
renderDevice->endPrimitive();

Sending a vertex to the GPU on every frame will create a bottleneck as the GPU waits for the data to

arrive across the system bus from the CPU. It also demonstrates how inefficient repeatedly calling and

supplying values to a vertex function is, as in large models, the same values will be sent on multiple

occasions where the shapes that make up a mesh share their vertices. Vertex arrays overcome these shortfalls by placing all model data into a contiguous chunk of memory. Akenine-Möller et al. [Moller,

2002] point out that “Copying memory is expensive in and of itself, and it also pollutes the CPU

memory cache” so allowing the GPU to use pointers to navigate the array of model data, picking out


the relevant vertex data values in the correct order will limit the requirement for vertex data to be swapped in and out of memory. As the application issues fewer OpenGL calls and reduces the amount of data

that needs to be copied between the application and OpenGL, the user can expect to see a

considerable performance gain. As well as storing vertex coordinates, vertex arrays offer the

advantage of being able to store vertex colours, textures and normals. Vertex arrays were

implemented in the final phase of development in an attempt to increase rendering performance.
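A minimal sketch of the standard vertex array calls (not the project's exact code) is shown below:

// 'vertices' is a contiguous array of x,y,z triples built once from the mesh data;
// 'indices' lists shared vertices by index so no coordinate is sent twice.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawElements(GL_QUADS, indexCount, GL_UNSIGNED_INT, indices);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);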

3.5.2 Vertex Buffer Objects

Vertex Buffer Objects, are a recent extension to the OpenGL language that offer increased rendering

speeds for large quantities of triangles, quads and lines, including triangle strips, quad strips and line

strips. Vertex Buffer functionality is obtained through the use of the ARB_vertex_buffer_object

which was approved by the Architecture Review Board (ARB) on 12th February 2003. As Vertex

Buffer Objects are a fairly recent addition to the OpenGL language, not all graphics accelerators will

support them, however this is a minority case as updating to a current graphics accelerator driver will

usually add missing functionality. Vertex Buffer Object functionality is available on all cards of

specification NVIDIA GeForce2 series or later, and it is not expected that the 3D visualisation would

run at adequate speed on any lower specification than this.

Vertex Buffer Objects are an extension of Vertex Arrays, improving throughput by placing the Vertex

Array data into the graphics accelerator memory. As the idea behind Vertex Arrays is to store model

data in a contiguous chunk of system memory, it is relatively simple to swap the data chunk to the

graphics accelerator memory. Storing model data in the accelerator’s memory can vastly reduce rendering times in a number of ways. Graphics accelerator memory is usually much faster than system

memory, with tighter timings and higher clock speeds (the very latest top-of-the-range accelerators sport DDRIII memory with clock speeds in excess of 1200 MHz and in quantities up to 512 MB, compared to standard DDR system memory running at between 266 MHz and 400 MHz). The onboard graphics accelerator memory is also responsible only for storing model data, whereas system RAM is responsible for all the operating system tasks. This leads to system RAM often

being fragmented and polluted, making it difficult to store large chunks of graphics data without

splitting. Calling data from the graphics accelerator memory to the GPU for rendering is significantly

faster than requesting it over the AGP and system bus. Swapping the chunk of model data from

system memory to graphics accelerator memory in one operation before rendering proceeds, reduces

the overhead required to request each piece of model data across the system bus and is no longer

limited by the slow transfer bottleneck of the AGP.

Vertex buffer objects were implemented in the final phase of development in an attempt to increase

rendering performance.
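A sketch of the ARB_vertex_buffer_object usage described above is given below (entry points as exposed by the extension mechanism; not the project's exact code):

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
// Copy the vertex array into graphics accelerator memory in a single operation.
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * 3 * sizeof(GLfloat),
                vertices, GL_STATIC_DRAW_ARB);

// Rendering then uses the normal vertex array calls with a null pointer offset.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);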


3.5.3 Unstructured vs. Structured Output Files

Using an existing conversion script, unstructured data files can be created as an alternative method to

the structured OTT-2D output files. These scripts, written in Fortran 90, were created by the author of OTT-2D as a means of converting the structured output data files of OTT-2D into a series of

unstructured data files for the purpose of displaying the simulation in the Matlab software utility. The

advantage of the unstructured data files is that they present a single uniform and contiguous mesh that

incorporates all the levels of refinements without repetition of vertex coordinates. This makes it

efficient for visualisation purposes and removes the need for a complex tree structure that could prove

extremely difficult and time-consuming to implement and would therefore significantly increase the chance of

project failure. What the unstructured files do not hold, and the structured do, is the redundant data

held under the finer levels of resolution. The structured files hold the data values for all mesh levels,

whether overlap occurs or not, which would allow functionality to strip away the finer meshes right

down to the coarse mesh in the visualisation, if so desired.

3.5.4 Handling the Unstructured Data Format.

The unstructured data files, as described in 3.5.3, provide an alternative to the structured, but difficult to process, output files produced by the OTT-2D numerical model. If the unstructured files do not

exist, then they must be created using the OTT-2D conversion script and the original structured output

files. The script takes one structured output file as an argument and produces two unstructured files

that can be cross referenced to provide one complete mesh that incorporates all the mesh vertex coordinates over all the levels of refinement. The structures of the two files differ: one provides a comprehensive list of all quads used to construct the mesh, while the other file provides the

coordinates of each vertex of each quad as well as the water height, bed depth and velocity

components for that quad.

For example, the first quad of the mesh will be constructed from 4 vertices which will appear as the

first 4 values in the .pyr file; however, these values are not the coordinates of the vertices in question but pointers to the lines within the .lat file that contain the vertex

coordinates. The process must be repeated for every timeframe of the simulation, ending up with 2

unstructured files for every structured file.

The application must first open the .pyr file and store the first 4 values. It must then continue to

collect and store the next 4 successive values of the .pyr file, recognising them to be pointers to

coordinates within the .lat file that construct each quad, until the end of the file is found. When

reading is complete, the .pyr file must be closed and the associated .lat file opened. Using the stored

pointers, the application can read out all the values and coordinates that make up each quad and store

them ready to be rendered. The whole process is repeated for each timeframe in the simulation.
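A simplified sketch of this cross-referencing step is given below; the file name, the exact record layout of the .lat file and the assumption that the .pyr indices are 1-based (Fortran style) are all illustrative rather than confirmed:

#include <fstream>
#include <vector>
using namespace std;

void loadFrame() {                                 // hypothetical helper for one timeframe
    ifstream pyr("unstr0001.pyr");                 // four vertex indices per quad
    vector<int> quadIndex;
    int idx;
    while (pyr >> idx) quadIndex.push_back(idx);
    pyr.close();

    ifstream lat("unstr0001.lat");                 // assumed layout: x y depth bed u v per line
    vector< vector<float> > vertexData;
    float x, y, depth, bed, u, v;
    while (lat >> x >> y >> depth >> bed >> u >> v) {
        vector<float> row;
        row.push_back(x);     row.push_back(y);
        row.push_back(depth); row.push_back(bed);
        row.push_back(u);     row.push_back(v);
        vertexData.push_back(row);
    }
    lat.close();

    // Each group of four entries in quadIndex points at rows of vertexData
    // (subtract 1 if the indices are 1-based) that form one quad of the mesh.
}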


3.5.5 Using GLUT for Timing

Implementing a timing function within C++ has proved inefficient and problematic. To use time

functions under Windows the header file ‘windows.h’ must be available but must be added to the code

in such a way that it is ignored when compiled on the Linux operating system because the library is

not available under Linux. Providing the necessary separate functions to control time under windows

and Linux does work but increases code size and coding times dramatically. A simple and relatively

easy to build solution is available in the GLUT command glutGet(GLUT_ELAPSED_TIME)

within the OpenGL libraries. As it is part of the OpenGL framework [SGI, 2006] it is platform

independent and will not require two implementations to compile under Windows and Linux. Calling

GLUT_ELAPSED_TIME within the application will create a variable that stores the time elapsed, in

milliseconds since the Init method was called. If a comparison is made between calling the method

before and after one loop of the rendering method, then the time taken to render one frame, and hence the frame rate (FPS), can be

determined.
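For illustration, the comparison described here might be coded as:

int before = glutGet(GLUT_ELAPSED_TIME);   // milliseconds since GLUT was initialised
DrawMesh();                                // one loop of the rendering method
int after  = glutGet(GLUT_ELAPSED_TIME);
int frameTimeMs = after - before;          // time taken to render one frame
float fps = (frameTimeMs > 0) ? 1000.0f / frameTimeMs : 0.0f;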


4 Evaluation

4.1 Introduction

Throughout the design process an iterative evaluation process was present, using both heuristic [Nielsen, 1994] and formative [Dix et al, 1998] methods [details are presented in Appendix C].

To gather a better understanding of the system and obtain a representative evaluation from various

perspectives, the final application will be appraised using the following approaches. The system will

first be evaluated against the minimum requirements stated at the start of the project to determine

whether the solution adequately fulfils its objective. A heuristic evaluation, as defined by Nielsen in Usability Engineering [Nielsen, 1994], will be conducted to test the final usability of the visualisation

and the appropriateness of the user interface and will be followed with a final formative evaluation

[Bowman et al, 2004]. There will also be a short end-user evaluation and a comparison between the

initial solution and the final solution, in terms of functionality and efficiency. As each method of

evaluation provides a different approach to testing system qualities, there should be a better chance of

uncovering all the strong and weak system elements and reducing the risk that serious problems might

fail to surface.

4.2 Evaluation Against Minimum Requirements

The minimum requirements were developed through an understanding of the user’s visualisation

requirements. In meeting the minimum requirements set forth in Section 1.3 of this report, the

visualisation application will provide a more efficient and meaningful depiction of the OTT-2D

numerical wave model output. As well as improving upon the current output of large binary files it

will also provide a better solution than the existing models the user has built in the Matlab software. An

evaluation of the minimum requirements and any extra features that have been incorporated, follows:

Produce a Method of Processing and Storing Wave Run-up Data from External Data Files

The system meets and exceeds the minimum requirements for reading and storing the wave model

data. In the final solution the application is able to find the relevant data files (as long as they are

stored in the same directory as the application) and load them sequentially so that the timeframes of

the simulation are not out of sequence. The solution also makes almost exclusive use of dynamic data structures, which allow it to handle any size of data file, whether the size is in terms of overall mesh

dimensions or in terms of the number, shape, position and levels of refinement. Through the use of

custom built vector functions, the application is able to build a 2-Dimensional dynamic structure that

allows any range of simulation in terms of timeframes and data files (each time frame requires one

data file) as requested by the user at the command line. The application incorporates error handling at

the command line to prevent the user from entering incorrect simulation ranges (the user cannot load

21 files if only 20 exist) which would unexpectedly terminate the program.


Produce an Algorithm to Generate Efficient Step by Step Animations of the Wave Run Up

The application uses the data stored in the 2D vector structure to create a sequential animation of the

wave run up. The animation method meets the minimum requirements by allowing the user to initiate

the visualisation of the correct sequence of wave run up, from start to finish. It surpasses the minimum

requirements by adding extra control of the simulation timing. Using the slider control presented on

the user interface, the operator can select and view any timeframe of the visualisation either pre or

post simulation run. Using a second slider, the user can adjust the speed of the simulation, choosing

the length of time each timeframe is presented on screen, from a minimum of 1 second to a maximum of 10 seconds. The time element of the simulation is controlled using the GLUT command:

glutGet(GLUT_ELAPSED_TIME), which has the added advantage of being platform independent

and independent of system specification. For example, when set at a constant value, the simulation

will run at the same speed across any hardware, irrespective of the processing power of the graphics

card or CPU.

Employ AMR Within the 3D Visualisation of the Wave and Bed Topography

While the visualisation is not responsible for creating Adaptive mesh refinement or understanding

why certain parts of the mesh should be refined, it is expected that the system will accept and

correctly display data, held in an AMR format. The visualisation does achieve an AMR representation

of the sea topography and surface water, however not as first intended. Instead of using a quadtree

implementation, the application simply reads and stores an unstructured mesh of each timeframe into

a 2D dynamic structure and using grid coordinates, plots them to screen. Importantly, this method

displays a contiguous mesh of all refinement levels without creating overlap or gaps between the

different resolutions and correctly joins the cell centres where the depth and velocity components are

actually stored. It does however rely on the data being presented in unstructured format using existing

scripts developed by the OTT-2D author. Where the data is unstructured there is no clear

identification of where the mesh resolution changes, except by analysing the data for the proximity of the coordinates of each mesh point, and it also means that where a finer resolution mesh is incorporated all

underlying data is lost. With a quadtree implementation, the user would be able to skip between the

mesh refinement levels, choosing to display mesh from the single coarse mesh, up to the maximum

mesh refinement but this is not possible using the unstructured data as the coarser level values are

replaced with finer ones before being loaded into the application data structures.

Employ Techniques to Manipulate the Wave Visualisation such as zoom/pan/rotate/colouring

The visualisation meets the minimum requirements for manipulating the scene by offering controls for

changing the level of zoom, panning and rotation. The scene is also coloured, although this colouring is static and cannot be altered by the user; however, there is functionality to create transparent surface

water and a means to control the level of transparency via the user interface. Further functionality,


above and beyond the minimum requirements is offered to the user in the shape of scene lighting and

inclusion of velocity vectors, both of which can be disabled or enabled via the control panel.

Checkboxes for hiding the surface water or bed topography are provided as well as the ability to

change between a wireframe view and a solid fill for the entire scene.

4.3 Heuristic Evaluation

Heuristic evaluation is a method of evaluating 2D User Interfaces using a set of ten heuristics laid out

by [Nielsen, 1994]. In simple terms, it is a method of looking at a system, passing judgement on its usability as one perceives it and taking note of any problems that inhibit that usability. Heuristic evaluation is usually carried out by a group of between 5 and 15 evaluators, as studies have shown that

groups of 5 or more generally find 75% or more of usability defects [Nielsen, 1990].

Heuristic evaluations offer several advantages. They are usually cheap, require only small amounts of

planning, can be used throughout the design process and are generally intuitive in the sense that they

are based on an existing list of tried and tested rules (heuristics). The disadvantage of using such a

system, is that the focus of the evaluation is usually on the problems and not how to solve them

[Nielsen, 1990].

The following heuristic evaluation was carried out with a small test group of candidates and is based

on the 10 evaluation heuristics proposed by Jakob Nielsen [Nielsen, 1994]. The range of candidates included relatively inexperienced computer users with no experience of 3D modelling, relatively

experienced computer users who had reasonable knowledge of 3D graphics and OpenGL and an end

user with understanding of the simulation and OTT-2D underpinnings. Several heuristic evaluations

were used as part of the iterative design process. The statements given in this section are a final

heuristic evaluation of the system as it presently stands:

Simple and Natural Dialogue that the User can Understand

The system only enters dialogue with the user during initial loading. The dialogue occurs on the

command line in a textual format and serves to gather the relevant information on input files and to

provide information to the user while the input files are loaded. In the first instance of dialogue the

system asks the user to enter the number of files required to run the simulation. It is anticipated that

the user will know how many files they want to load and why the question is being asked of them; however, during the group testing phase, it became clear that not all users were expecting, or understood why, they had to choose the number of files to be loaded. The dialogue that follows the input request repeats the value entered by the user so that they are made aware of an incorrect

entry and then continues to list the files as they are loaded into the application memory space. The


user will receive dialogue in the form of a command line error message if a certain file requested to be

loaded, cannot be found. This would be the case if the user specified a number of files larger than the

number of files actually available.

During testing it was agreed that the use of plain and simple language is consistent throughout the

application and in no way requires in-depth knowledge of computing related terms. Although users of

the system could not always answer the dialogue on system load they did understand what was being

asked of them. Within the user interface, labelling of controls uses simple English that can be

understood without knowledge of the underlying OpenGL techniques at work.

Recognition Rather than Recall (Minimizing the User’s Memory Load)

The only piece of information required to be remembered by the user, is the number of files to be

loaded into the simulation. The user must first know the amount of files required for the simulation

but can improve efficiency once the visualisation has loaded if they remember the number of files

used. The slider control for selecting the timeframe of simulation to be displayed to the user,

automatically calculates the first and last frame based on the number of files loaded. Therefore the

user cannot select an invalid timeframe. However, using the slider to go from the first to the last frame

of a large simulation is very time consuming (for instance 100 frames requires 100 clicks) so the user

can enter a timeframe to visit if they remember the number of files loaded. At present, there are no

instructions or user manual that can be accessed through the visualisation. Some of the test candidates

noted that this would be useful functionality and as such it has been noted for future improvement.

Consistency and Standards

For the user interface the visualisation uses the basic visual appearance of the GLUI toolkit. The

controls on the interface are very similar to those that are found in the win32 environment and most

other window-based operating systems. All the test candidates showed an immediate understanding of how to interact with the controls and how to operate the mouse correctly to control the rotation.

Visibility of System Status

On launch, the user is made aware of the fact that the application is reading in the files to be used by

the simulation. As each file is opened for loading, the filename is immediately printed to the console.

Once inside the visualisation, the user is made aware of the current timeframe, if and only if they have

chosen to manually select it using the slider control on the user interface. If the user has entered the

timeframe via the slider control text input box, then the timeframe number is displayed in the text input

box. If the user selects the timeframe using the slider control increment/decrement buttons then the

current timeframe number will be shown to increment/decrement accordingly within the text input

area. However, the timeframe value within the slider control text input box is not incremented if the


‘Run’ simulation button is used, instead remaining on 0 as the simulation runs through each

timeframe.

User Control and Freedom

The system comprises a single window, in which the visualisation scene and user interface toolbar are presented. There are no options to load up dialogue windows or message windows, so the layout and

appearance is very simple to comprehend. With just a single window pane, the user should never find

that they have entered a chain of command that they cannot terminate. The use of checkboxes within

the application is an excellent example of the control the user has to undo and redo some of the

functions as all the user must do to reach the previous state is to perform the opposite of the last

command (i.e. uncheck the box if it is checked). The placement of several reset buttons allows the

rotation of the simulation to be changed to a predefined position, however the user does not have the

opportunity to stop the simulation while it is in progress. To exit the application the user only needs

to press the command button labelled ‘Quit’. During the final stage of testing it became apparent that

on lower screen resolutions, the ‘Quit’ command button would actually be pushed off the bottom of

the screen. At present the solution is to minimise the other toolbars so that they take up less space and

do not push the ‘Quit’ button off the screen.

Flexibility and Efficiency of Use (Shortcuts)

With an uncomplicated system there are very few occasions that warrant the inclusion of shortcuts,

the only instances being the input text boxes attached to the scrollers on the user interface. With these

input boxes, the user is able to specify an exact simulation timeframe, scale or lighting position

without having to scroll through a large selection of values. All the testers showed understanding of

the connection between the scrollers and the text boxes with some preferring to stick with the mouse

for interaction and others preferring to use the keyboard to input precise values.

Error Prevention and Error Messages

As stated in Heuristic evaluation [Nielsen, 1994], ‘even better than good error messages is a careful

design which prevents a problem from occurring in the first place’. The system almost reaches this

idealism as at present there is only one requirement for error checking and therefore one error

message. The error message ‘Cannot load file: filename’ occurs if the user enters more files to be

loaded than actually exist on the system. The error message is passive and requires no interaction

from the user, displaying the filenames of the files that cannot be loaded and simply ignoring the fact that they were requested. Some users described this as confusing, noting that they are familiar with having confirmation options when presented with error messages.

Several solutions to this user input error are discussed in section [5.3]. The fact that the system only

required one instance of user input validation is testament to the work involved in creating a closed and


error free system. With a simple and intuitive user interface it is impossible for users to enter invalid

commands.

Help and Documentation

At present there are no help files, either attached to the visualisation or in paper form, and only a

short, concise user manual that describes how to run and control the application, and how to use the

conversion scripts to convert from structured to unstructured files. It was noted by several testers that

a help section, attached to the application, would benefit users.

Aesthetic and Minimalist Design (Visual Continuity)

The user interface is built around the 2nd release of the GLUI toolkit. As such, the appearance of the

rotation and linear transformation controls cannot be changed. Aesthetically they are simple yet

convey their purpose very well with none of the test candidates noting down any problems with the

appearance of the toolbar. The visualisation is loaded up as full screen so there are no distractions

from the Operating system in terms of external windows, menus or taskbars. There are elements of

minimalism present in the toolbar, as each set of controls is grouped together under heading sections

that can be minimised to hide the controls grouped under it.

Late in the development cycle it was discovered that the reset buttons on the user interface did not

offer good visual continuity as, although they reset the position of the visualisation back to plan and

cross section view, when the mouse is used to change the rotation, the visualisation immediately flicks

back to its previous position before the reset was performed. At first users noted this as over-

sensitivity, but on successive tests and closer inspection it became clear that it was a problem with the

code responsible for the rotation matrices.

4.4 Formative Evaluation

Formative evaluation is described as, “...an observational empirical evaluation method applied during

the evolving stages of design” [Bowman et al, 2004]. In simple terms, “…the method of judging the

worth of a program while the program activities are forming or happening” [Bhola, 1990].

Throughout the iterative design steps of the project, a formative evaluation was conducted to check

that each newly added feature worked as expected. The majority of the formative evaluation was

undertaken by the developer using the test plan in [Appendix C].

4.5 End User Evaluation

A short end user evaluation was conducted near the end of product development to try to highlight any major deficiencies and possible future enhancements to the system. The idea behind end user evaluation is to allow the client to get a feel for the product without any intervention from the developer. This should help to obtain a clear and substantiated opinion of the application's behaviour as it appears to the customer rather than to the developer. The end user was given free rein of the software, asked to answer a series of questions and encouraged to give their opinion of the system. The outcome is presented in the following figure:

Question: Are you able to control the position of the visualisation as you would like?
Answer: Yes, provided mouse functionality for rotation is switched on and position reset buttons are added to the toolbar.
Outcome: Mouse rotation from the previous build was re-instated to allow rotation via the mouse as well as the toolbar controls. Reset buttons were added to allow the position of the visualisation to be instantly placed into plan and cross-section view.

Question: Are the loading times satisfactory?
Answer: For large datasets the loading times are too long.
Outcome: A large improvement to undertake, therefore added to future enhancements. However, the solution still meets the minimum requirements.

Question: Is the speed of visualisation adequate?
Answer: Yes.
Outcome: None required.

Question: What aspects do you find useful?
Answer: The ability to experience the scene in 3 dimensions and to rotate, pan and scale around it, and the ability to simulate the wave run-up and to step forwards and backwards through it using the step control.
Outcome: None required.

Question: What aspects are not so useful?
Answer: Lighting does not cast any shadows within the scene. Velocity vectors require arrowheads to demonstrate direction.
Outcome: Large improvements to undertake, therefore added to future enhancements. However, the solution still meets the minimum requirements.

Question: What improvements would you suggest?
Answer: The ability to move between mesh refinement levels.
Outcome: Not possible using the unstructured files, as the data for the underlying layers are not present.

Question: Others.
Answer: Time increments used to control the simulation speed are not in a standard format.
Outcome: Time method adjusted to increment time values in seconds.

Figure 5.3: End user evaluation.

As a result of the end user evaluation, the developer was made aware of a few improvements that could be implemented in the remaining timescales. Features that could not be added have been pushed into the future enhancements scope. The advantage of this kind of evaluation method is that it has presented the developer with a list of improvements that the customer, and not the developer, feels are necessary. It has also provided expert feedback from somebody who is very knowledgeable in the underlying principles of the model and as such has the ability and authority to spot mistakes in the visualisation's behaviour.

4.6 Functionality versus Efficiency

As a result of end user testing and the formative evaluation it became apparent that there were several trade-offs between system functionality and efficiency. [Figure 5.4.1] and [Figure 5.4.2] show the respective loading times and visualisation speeds for the initial structured solution and the final unstructured solution.

Sample Set                                          Loading times (min:sec)        Minimum FPS
                                                    Sys 1    Sys 2    Sys 3        Sys 1   Sys 2   Sys 3
40x40 base grid, 2 refinement levels, 21 timeframes 2:10     2:28     9:30         12      6       No run
40x40 base grid, 3 refinement levels, 26 timeframes 6:58     7:43     No load      5       2       No run
30x30 base grid, 2 refinement levels, 101 timeframes 11:34   13:00    No load      12      6       No run

Figure 5.4.1: Test results for the initial structured solution.

Sample Set                                          Loading times (min:sec)        Minimum FPS
                                                    Sys 1    Sys 2    Sys 3        Sys 1   Sys 2   Sys 3
40x40 base grid, 2 refinement levels, 21 timeframes 0:40     0:55     3:40         60      20      5
40x40 base grid, 3 refinement levels, 26 timeframes 2:48     3:13     8:23         25      15      No run
30x30 base grid, 2 refinement levels, 101 timeframes 5:24    6:17     17:30        60      20      5

Figure 5.4.2: Test results for the final unstructured solution.

System 1: Pentium IV 3GHz with Hyper-Threading, 1GB RAM, GeForce4 128MB, 240GB RAID 0 IDE hard disks.
System 2: Athlon 64 3000 (1.8GHz), 512MB RAM, GeForce6 (shared system memory), 80GB IDE hard disk.
System 3: Pentium III 700MHz, 256MB RAM, TNT2 64MB, 9GB IDE hard disk.


60 FPS was the maximum frame rate the graphics cards would reach. Even when all the drawing methods were switched off and only the toolbar was visible, the card could achieve a maximum of only 60 FPS, so it can be assumed that OpenGL or the hardware drivers cap the card at this rate (most likely through synchronisation with the display refresh) even when faster speeds are possible. A 'No run' occurred when the hardware refused to display anything of any use; in both solutions this meant that rotation and placement manipulation proved so slow and jerky as to be completely unusable. A 'No load' occurred when loading the files took so long (over 25 minutes) that it was not worth continuing the test.

The results from the tests show clearly that there is a correlation between the number of mesh refinement levels and the speed at which the graphics card is capable of regenerating the display. As refinement levels are added, the number of vertices plotted rises by a factor of four per level (in the worst case, where every cell is refined). Any simulation beyond four levels of refinement becomes very large in terms of data and difficult to visualise on standard resolution monitors, with vertex points and line edges merging into a sea of solid black. [Moller, 2002] states that for a completely smooth and jerk-free visualisation a system should perform at between 65 and 80 FPS, but that anything above 20 FPS is sufficient.
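
The factor-of-four growth is easy to quantify for the 40x40 base grid used in the tests. The small program below (illustrative only) prints the worst-case cell count at each refinement level, reaching over 400,000 cells by level four.

    #include <cstdio>

    // Worst case: every cell of the 40x40 base grid is refined at every
    // level, quadrupling the cell count each time.
    int main()
    {
        long cells = 40L * 40L;
        for (int level = 0; level <= 4; ++level) {
            std::printf("level %d: %ld cells\n", level, cells);
            cells *= 4;
        }
        return 0;
    }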

The tests also highlighted the importance of the CPU, especially in lowering loading times: with a storage system at least twice as fast as that of System 2 but a CPU of similar speed, System 1 was only able to load the output files marginally quicker. The slower results for the GeForce6 card compared with the GeForce4 card are explained by the fact that the GeForce6 is an onboard graphics chip built into the motherboard with no dedicated memory, and therefore has to use the slower system memory. This would also explain why the card did not take advantage of the vertex buffer object (VBO) functionality.
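
For context, the sketch below shows the usual pattern for placing mesh vertices in a vertex buffer object so that a card with dedicated memory can keep them on the GPU. It is a generic example rather than the project's code, and it assumes an OpenGL 1.5 (or GL_ARB_vertex_buffer_object) context whose entry points have been loaded, for example via GLEW.

    #include <GL/glew.h>   // assumption: GLEW (or similar) provides the VBO entry points
    #include <vector>

    // Copy the mesh vertices (x,y,z triples) into a buffer object once,
    // so each frame only has to reference graphics memory.
    GLuint uploadMesh(const std::vector<float> &vertices)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(float),
                     &vertices[0], GL_STATIC_DRAW);
        return vbo;
    }

    // Draw the buffered mesh as triangles.
    void drawMesh(GLuint vbo, GLsizei vertexCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);   // 3 floats per vertex, tightly packed
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

On an onboard chip sharing system memory, the buffer still resides in the same RAM, which is consistent with the lack of speed-up observed on System 2.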


5. Conclusion

5.1 Successes

All the minimum requirements were met, if not in the way originally intended [Section 4.2], and the final solution offered an improvement in visualisation and interaction techniques over the preliminary work done in Matlab [MathWorks, 2006]. A good choice of methodology and a sturdy development plan were fundamental in producing an appropriate and valuable solution within the timescales of the project.

5.2 Failures

The project failed to implement a working quadtree structure; however, this did not prevent the final solution from meeting all the minimum requirements. Ideally a quadtree would be used to allow control over the levels of mesh refinement, but this feature is now considered a future enhancement. The solution also fails to use Vertex Buffer Objects under the SOC Linux environment, although it does work on other versions of Linux outside the university. The probable cause within the SOC is missing library files or an older OpenGL release that does not fully support the VBO functionality. Both of these problems are 'out of the hands' of the developer; however, the use of VBOs under the Linux environment is not a top priority as it was not originally part of the project scope.
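
Whether a given OpenGL installation supports VBOs can be checked at run time rather than discovered through missing symbols. The check below is a generic sketch (not part of the submitted code) and must be called with a current OpenGL context.

    #include <cstdlib>
    #include <cstring>
    #include <GL/gl.h>

    // Returns true if vertex buffer objects are usable: either the ARB
    // extension is advertised or the core version is at least 1.5.
    bool vboSupported()
    {
        const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
        const char *ver = reinterpret_cast<const char *>(glGetString(GL_VERSION));

        if (ext && std::strstr(ext, "GL_ARB_vertex_buffer_object"))
            return true;
        return ver && std::atof(ver) >= 1.5;   // VBOs became core in OpenGL 1.5
    }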

5.3 Future Enhancement Possibilities

The main extension to the project would be the implementation of a working quadtree. Ideally the tree would store the structured OTT-2D output data efficiently while displaying a correct mesh of multiple resolutions. The tree would provide a mesh of multiple refinement levels with no overlaps or gaps, and would give the user control over the number of refinement levels visible, with the option to add and remove them as required.
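
A minimal node layout for such a tree might look like the sketch below. The field names are illustrative rather than taken from any project code, and the stored quantities mirror the per-cell data described earlier (water depth and the two velocity components).

    // Minimal quadtree node for the structured AMR output (illustrative only).
    struct QuadNode
    {
        float xMin, yMin, xMax, yMax;   // cell extent in base-grid coordinates
        float depth, u, v;              // water depth and velocity components
        QuadNode *child[4];             // NW, NE, SW, SE; all null for a leaf

        bool isLeaf() const { return child[0] == 0; }
    };

    // Draw leaves down to 'maxLevel', giving the user control over how many
    // refinement levels are visible with no overlaps or gaps.
    void drawNode(const QuadNode *node, int level, int maxLevel)
    {
        if (!node)
            return;
        if (node->isLeaf() || level == maxLevel) {
            // plot this cell (rendering code omitted)
            return;
        }
        for (int i = 0; i < 4; ++i)
            drawNode(node->child[i], level + 1, maxLevel);
    }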

Another important consideration would be to redesign the initial OTT-2D output file reading and storage routines, both to dramatically increase the speed at which they perform and to add functionality that allows the application to query the user for the location of the files to be processed. It would also be possible for the application to scan the input file directory and obtain the number of files to be read in, without the need for the user to enter the required amount. As an addition to the file loading functionality, the application could be rewritten to include commands for loading new sets of data files without the need to exit and restart the entire application.
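
On the Linux side, counting the available output files is a small amount of code. The sketch below uses the POSIX dirent interface and a hypothetical filename prefix; the real OTT-2D naming convention would replace it.

    #include <dirent.h>
    #include <cstring>
    #include <string>

    // Count the files in 'dir' whose names start with 'prefix', so the user
    // no longer has to type in how many output files should be loaded.
    int countOutputFiles(const std::string &dir, const std::string &prefix)
    {
        int count = 0;
        if (DIR *d = opendir(dir.c_str())) {
            while (dirent *entry = readdir(d)) {
                if (std::strncmp(entry->d_name, prefix.c_str(), prefix.size()) == 0)
                    ++count;
            }
            closedir(d);
        }
        return count;
    }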


The project could also be enhanced by completing the current list of unfinished functionality, which comprises adding directional arrowheads to the velocity vectors and enabling the scene to cast its own shadows via the functionality available in the stencil buffer.

As a result of the heuristic evaluation, it became clear that users would benefit from a help library, directly linked to and accessible from the visualisation toolbar, and from the inclusion of a screen dump utility that could be used to take and save snapshots of the simulation.
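
A basic screen dump needs little more than glReadPixels. The sketch below was written for this report rather than delivered with the application; it writes the current frame buffer to a binary PPM file, which avoids any image-library dependency, and the width and height would come from the GLUT window.

    #include <cstdio>
    #include <vector>
    #include <GL/gl.h>

    // Save the current frame buffer as a binary PPM image.
    void saveScreenshot(const char *filename, int width, int height)
    {
        std::vector<unsigned char> pixels(width * height * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

        if (FILE *out = std::fopen(filename, "wb")) {
            std::fprintf(out, "P6\n%d %d\n255\n", width, height);
            // OpenGL rows start at the bottom of the screen, PPM at the top.
            for (int y = height - 1; y >= 0; --y)
                std::fwrite(&pixels[y * width * 3], 1, width * 3, out);
            std::fclose(out);
        }
    }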

5.4 Evaluation of Decisions Made

Towards the end of the project it became apparent that, after several failed attempts and several weeks of research and coding, a working quadtree implementation was very unlikely to be finished. Rather than risk the minimum requirements not being met in at least one area, it was decided that focus should switch to finding a different approach. The alternative presented was the opportunity to utilise the unstructured files from the OTT-2D conversion scripts, which ultimately led to the construction of a solution that met and exceeded the minimum requirements [Section 1.3]. While credit for the conversion scripts is entirely due to their author [Hubbard, 2002], the switch to the unstructured output allowed the project to progress without stagnating and losing too much momentum. The switch in direction also gave the opportunity to evaluate the old and new solutions side by side and to compare the functionality and efficiency offered by the structured and unstructured files.

It is difficult to determine whether the OpenGL route was a better choice than the Matlab or Iris Explorer options, but there were certainly far more features to be utilised in the OpenGL libraries than was ever envisaged at the start of the project. The choice of the GLUI toolkit proved to be a good decision, as the user interface scored highly in the usability and heuristic evaluations.

5.5 Overall Methodology Evaluation

In choosing the RAD (Rapid Application Development) methodology, there was the inherent risk that required features would be pushed out of scope and that, with rapid prototyping, scalability would be reduced. With the RAD approach the developed application started as a prototype and evolved into the finished application, potentially making it difficult to incorporate additional features not captured by the minimum requirements. However, this turned out not to be the case; instead, the fast nature of RAD helped to get the project back on track when the initial solution reached a premature end [3.4 Phase 2]. For this reason the methodology turned out to be arguably the best option: compared with the waterfall and spiral methods, focus was kept on satisfying the core requirements first rather than getting bogged down in endless iterative design, build and test cycles.


References

Abel, T., (2006). Jacques: Enzo's best friend. http://www.astro.psu.edu/users/tabel/Jacques/ [28th April 2006].

Berger, M.J., Colella, P., (1989). Local adaptive mesh refinement for hyperbolic partial differential

equations. J. Comput. Phys. 82, 67-83.

Berger, M.J., Oliger, J., (1984). Adaptive mesh refinement for hyperbolic partial differential

equations. J. Comput. Phys. 53, 484-512.

Bhola, H. S., (1990). Evaluating "Literacy for development" projects, programs and

campaigns: Evaluation planning, design and implementation, and utilization of evaluation

results. UNESCO Institute for Education. xii, 306 pages.

Bowman, D., Kruijff, E., LaViola, Jr, J. J., and Poupyrev, I. (2004) 3D User Interfaces: theory

and practice. Addison Wesley.

Browne, J.C., Parashar, M., (2006). DAGH: Data-Management for Parallel Adaptive Mesh-

Refinement Techniques. CAIP Center & Department of Electrical and Computer Engineering.

Bryan, G., et al., (2006). Enzo, Cosmological Simulation Code, General Information.

http://cosmos.ucsd.edu/enzo/ [28th April 2006].

De Waal, J.P., Van Der Meer, J.W., (1992). Wave run-up and overtopping on coastal structures. Proc.

23rd Int.Conf. Coastal Eng. A.S.C.E., Venice, pp. 1758-1771.

Dodd, N., (1998). A numerical model of wave run-up, overtopping and regeneration. A.S.C.E J. Waterw. Port Coast. Ocean Eng. 124 (2), 73-81.

Dix et al., (2004). Human Computer Interaction 2nd Ed. Prentice Hall.

Hubbard, M.E., Dodd, N., (2002). A 2D numerical model of wave run-up and overtopping. Elsevier.

Coastal Engineering 47 (2002) 1-26.

Hubbard, M.E., Dodd, N., (2000). ANEMONE : OTT-2D, A User Manual (Version 1). HR

Wallingford. Report TR 65.


Finkel, R.A., Bentley, J.L., (1974). Quad Trees: A Data Structure for Retrieval on Composite Keys. Acta Informatica, 4:1-9.

Haines, E., Akenine-Moller, T., (2002). Real Time Rendering 2nd Ed. Ak Peters.

Israel, R., (2002). Richardson Extrapolation. http://www.math.ubc.ca/~israel/m215/rich/rich.html

[12th April 2006].

Jenkins, T., (2003). How to Program Using C++. Palgrave Macmillan.

Klein, R.I., Fisher, R., McKee, C.F., (2004). Resolution Issues in the Collapse and Fragmentation of Turbulent Molecular Cloud Cores. RevMexAA (Serie de Conferencias), 22, 3-7.

Lachance, B., (2005). Développement d'une structure topologique de données 3D pour l'analyse de

modèles géologiques. Maître ès sciences (M.Sc.). Université Laval.

Lawlor, O.S., Kalé, L.V., (2001). Supporting Dynamic Parallel Object Arrays. ISCOPE, Stanford,

CA.

Manual on the Use of Rock on Coastal and Shoreline Engineering (MAN), 1991. Construction

Industry Research and Information Association and Centre for Civil Engineering Research and Codes.

MathWorks, (2006). MATLAB® - The Language of Technical Computing.

http://www.mathworks.com/products/matlab/ [24th April, 2006].

NAG, (2006). Iris Explorer. http://www.nag.co.uk/welcome_iec.asp [24th April, 2006].

Nielsen, J., (1994). Enhancing the explanatory power of usability heuristics. CHI '94:

Proceedings of the SIGCHI conference on Human factors in computing systems, ACM

Press, pp 152-158.

Nielsen, J., Molich, R., (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI'90

Conf. (Seattle, WA, 1-5 April), 249-256.

Paul, B., (2006). The Mesa 3D Graphics Library. http://www.mesa3d.org/ [24th April, 2006].


Philips, L., (2002). Trick 14 - Space Partitioning with Octrees, Game Programming Tricks of the

Trade. Premier Press.

Quirk, J., (1991). An adaptive grid algorithm for computational shock hydrodynamics. PhD thesis,

College of Aeronautics. Cranfield Institute of Technology.

Quirk, J.J., (1994). A Cartesian Grid Approach with Hierarchical Refinement for Compressible

Flows. Institute for Computer Applications in Science and Engineering, ICASE Report No. 94-51.

Rademacher, P., (1999). GLUI. A GLUT-Based User Interface Library. User manual.

http://www.cs.unc.edu/~rademach/glui/src/release/glui_manual_v2_beta.pdf [24th April 2006].

Rademacher, P., (2006). GLUI interface library. http://www.cs.unc.edu/~rademach/glui/ [24th April

2006].

Redbook, (1994). Appendix F Calculating Normal Vectors. Addison-Wesley Publishing.

SGI. (2006). OpenGL Overview. http://www.opengl.org/about/overview/ [24th April 2006].

Skillsoft, (2003). OpenGL Code InstantCode: Advanced Graphics. Skillsoft.

Titov, V.V., Synolakis, C.E., (1995). Modelling of breaking and non-breaking long-wave evolution

and run-up using VTCS-2. A.S.C.E J. Waterw. Port Coast. Ocean Eng. 121 (6), 308-316.

Titov, V.V., Synolakis, C.E., (1998). Numerical modelling of 3D long wave run-up. ASCE J. Waterw.

Port Coast. Ocean Eng. 124 (4), 157-171.

Tremblay, C., (2004). Chapter 16 - Space Partitioning: Cleaning Your Room, Mathematics for Game

Developers. Course Technology.

Wright R.S., Sweet, M., (2000). OpenGL SuperBible, Second Edition. Waite Group Press.


Accompanying Material

A CD-ROM containing the visualisation user manual, several test run output files and the project executables, libraries and source code is available from Matthew Hubbard (School of Computing, University of Leeds, UK).

Online images and videos of the simulations are available at:

www.personal.leeds.ac.uk/~scs2atw/fyp/


Appendices

Appendix A Personal Reflection

Throughout the course of this project I have had the chance to show intuition and demonstrate initiative in tasks that proved to be both academically challenging and rewarding. It has offered me the opportunity to undertake an individual project on a scale which I will probably never again be able to attempt, and it represents the culmination of four hard years of study for a Computing degree at Leeds University. Researching the topics and building the final solution for this report has been an enjoyable, although at times frustrating, experience that I hope I can look back on for inspiration and support in my future career.

Over the course of the project I feel that many lessons have been learnt, two of the most important being the lack of a diary for making notes at convenient times, and spending too much time on the programming element of the project while neglecting the written report early on. I would therefore suggest:

Keep a diary

Unfortunately this was something that I didn't think to use until well into the implementation stages of the project. Many a flash of inspiration was forgotten as my memory tried to cope with confusing theories and formulae. My advice would be to buy a small diary or pad and carry it around with you at all times. There is nothing worse than knowing you had the answer but now not quite being able to remember it.

Do not get carried away with the programming side of the project.

Everybody wants to build the latest and greatest but remember: no matter how good it is, it will only get you a maximum of 20 out of 100 for the entire report. I fell into this trap, hurrying to cram in more and more improvements when really I should have been concentrating on the report.

Other helpful suggestions that I think people should consider before undertaking a project are:

Pick a subject you will enjoy.

If you don’t pick a subject you have a personal interest or feel particularly attracted to, then it is

unlikely that you will show enough commitment. When the deadlines start to approach and you

suddenly realise how much work is left, will you still have enough interest to stay up late working to

get it done?


Respect the people that offer you help.

Do not go looking for help and expect answers straight away. Nobody is going to sit and code your project for you, and people will be unwilling to help at all if you don't show that you have at least tried to solve or understand the problem. Leaving it late in the day to seek help with problems is also a bad idea. Remember that if people offer to help, you will have to be patient. If they are spending their free time to help you, then it is courteous to allow them to do it in their own time and not to be rushed by you.

Behave professionally when dealing with the client

If you undertake a project that requires regular communication with a client, then remember to be

professional and polite at all times. This is an excellent opportunity to experience the interactions

between client and consultant, similar to that of a real company. Remember that the client will know

you are a student but won’t necessarily need to be reminded with scruffy clothes or lack of

punctuality. In the real world of work, you won’t keep clients for long if you show disrespect by

turning up late or not delivering work to schedule.


Appendix B Initial project plan:


Final project plan:


Appendix C Test Plan (Abridged).

A short section of the test plan:

Test case: Correct output file is found?
Outcome: Application finds the first OTT-2D file and loads it correctly.
Result: Pass
Fix: Fixed - extra 0 character found in input stream.

Test case: Correct sequence of files is found?
Outcome: Application does not load OTT-2D files in correct sequence.
Result: Fail - loads first 9 files only.

Test case: Only loads files that are available, no matter how many are requested?
Outcome: Application only loads the files that are available. User is alerted that requested files do not exist if they aren't available in the immediate directory.
Result: Pass

Test case: Displays error message when incorrect number of files are loaded?
Outcome: Correctly displays an error message informing the user that the number of files they have entered is incorrect.
Result: Pass

Test case: Application loads full screen?
Outcome: Application loads full screen.
Result: Pass

Test case: 'Exit' button terminates application on single press?
Outcome: Application terminated on pressing 'Exit' button.
Result: Pass

Test case: Scene rotation occurs when mouse button held down?
Outcome: Scene only rotates when mouse button is not pressed.
Result: Fail
Fix: glutPassiveFunc replaced with glutMotionFunc.

Test case: Scene is scaled appropriately on a widescreen display?
Outcome: No, scene is stretched horizontally.
Result: Fail
Fix: No fix implemented.

Test case: Scene rotation is not limited to 360 degrees?
Outcome: Scene rotation goes beyond 360 degrees.
Result: Pass
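
The fix recorded for the rotation test case comes down to which GLUT callback is registered: glutMotionFunc fires while a mouse button is held down, whereas the passive variant (glutPassiveMotionFunc) fires when no button is pressed, which produced the failed behaviour above. A generic sketch of the corrected registration follows; the handler body is illustrative only.

    #include <GL/glut.h>

    // Drag handler: receives mouse coordinates only while a button is held.
    void motion(int x, int y)
    {
        // update the scene rotation from the mouse movement (omitted)
        glutPostRedisplay();
    }

    void registerMouseHandlers()
    {
        glutMotionFunc(motion);   // previously registered as a passive motion callback
    }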


Appendix D Visualisation User Manual (Windows)

Requirements:

• The GLUT libraries must be installed on your system. If not, copy the file glui32.dll from the CD-ROM and place it in your system directory $:\windows\system\ (where $ is your primary drive). GLUT is already installed on the SOC Linux and Windows machines.
• The unstructured output files must be present in the same directory as the mesh.exe executable.

To Run:

• Simply double-click the mesh.exe file and enter the number of output files to be processed when prompted. After loading the files, the visualisation will open full screen.

Functionality:

• To run the simulation press the ‘Run’ button at the top right of the toolbar.

• To slow down or speed up the simulation, use the increment/decrement slider (this only works before and after the simulation has been run, not during it).

• To rotate the scene, press and hold the left mouse button and then move the mouse to control

the rotation. Optionally you can use the arcball rotation control on the toolbar.

• Scale the visualisation by clicking and dragging over the scale button.
• Position the visualisation by clicking and dragging over the position button.
• Pan the visualisation by clicking and dragging over the pan button.
• Add or remove the seabed, water surface, transparent water and velocity vectors by using the checkboxes.

• Convert to wireframe and anti-aliased wireframe by using the checkboxes provided.

Quit

• The visualisation can be terminated at any time by selecting the ‘Quit’ button.


Appendix E GLUI feature list

The following is a complete list of the GLUI features, taken directly from the GLUI website [Rademacher, 2006].

• Complete integration with GLUT toolkit

• Simple creation of a new user interface window with a single line of code

• Support for multiple user interface windows

• Standard user interface controls such as:

Buttons

Checkboxes for Boolean variables

Radio Buttons for mutually-exclusive options

Editable text boxes for inputting text, integers, and floating-point values

Spinners for interactively manipulating integer and floating-point values

Static text fields

Panels for grouping sets of controls

Separator lines to help visually organize groups of controls

• Controls can generate call-backs when their values change

• Variables can be linked to controls and automatically updated when the value of the control changes ("live variables") - see the sketch after this list

• Controls can be automatically synchronized to reflect changes in live variables

• Controls can trigger GLUT redisplay events when their values change

• Layout and sizing of controls is automatic

• Controls can be grouped into columns

• User can cycle through controls using Tab key
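
As an illustration of the "live variable" mechanism referred to above, the sketch below links an int to a checkbox so that GLUI keeps the two in sync and directs redisplay events to the main graphics window when the value changes. It is a generic GLUI 2 example, not an excerpt from the project toolbar, and the header location may vary between installations.

    #include <GL/glui.h>   // header path is an assumption; some installs use "glui.h"

    int wireframe = 0;     // live variable: GLUI updates it when the checkbox changes

    void buildToolbar(int mainWindow)
    {
        GLUI *glui = GLUI_Master.create_glui("Controls");
        glui->add_checkbox("Wireframe", &wireframe);   // linked live variable
        glui->set_main_gfx_window(mainWindow);         // redisplay events go to the scene
    }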
