
FAKULTÄT FÜR INFORMATIK DER TECHNISCHEN UNIVERSITÄT MÜNCHEN

Bachelorarbeit in Informatik

Analyzing the Velocity Fields in Cosmological Simulations using the ParticleEngine

Dimitar Dimitrov


FAKULTÄT FÜR INFORMATIK DER TECHNISCHEN UNIVERSITÄT MÜNCHEN

Bachelorarbeit in Informatik

Analyzing the Velocity Fields in Cosmological Simulations using the ParticleEngine

Analyse von Vektorfeldern kosmologischer Simulationen mit Hilfe der ParticleEngine

Author: Dimitar Dimitrov
Supervisor: Prof. Dr. Rüdiger Westermann
Advisor: Kai Bürger
Date: August 16, 2010


I assure the single-handed composition of this bachelor thesis, only supported by declared resources.

München, den 16. August 2010    Dimitar Dimitrov


Abstract

The cosmology group led by Prof. Avishai Dekel at the Hebrew University of Jerusalem (HU), Israel, in collaboration with German scientists at MPE, MPA and LSU in Munich, is running state-of-the-art cosmological, gravo-hydrodynamical simulations to study galaxy formation within the new standard cosmological model ΛCDM.

Their goal is to try to understand the complex, three-dimensional flow pattern of their simulations, using visualization tools developed by the visualization group at TUM led by Prof. Westermann. In particular, they have been using the ParticleEngine to visualize the velocity field in individual simulation snapshots.

This thesis presents implemented upgrades to the ParticleEngine tool, which directly address the wishes of the astrophysicists at MPA and in Jerusalem, and try to further enhance the usability of the tool for their research.


Contents

Abstract

I. Introduction and Background

1. Introduction
   1.1. Experimental Flow Visualization
   1.2. Computational Fluid Dynamics and Velocity Fields
   1.3. Particle Tracing for 3-D Flow Visualization
   1.4. Volume Rendering

2. The ParticleEngine
   2.1. Basic Principles behind the ParticleEngine
        2.1.1. Vector Field Data
        2.1.2. GPU Particle Tracing
        2.1.3. Additional Features

3. Thesis Goals

II. Analyzing Velocity Fields in Cosmological Simulations

4. Multiple Sources of Particles
   4.1. Initial Architecture
   4.2. The Probe
        4.2.1. Source of Particles as a Stand-Alone Entity
        4.2.2. Particles' Type
        4.2.3. Adjustable Options and 'Change Detectors'
        4.2.4. 4th Component Aware Operations
        4.2.5. Shared Shader Variables and Effect Pools
   4.3. Probe Management
        4.3.1. The ParticleProbeContainer Class
        4.3.2. The ParticleProbeController Class
        4.3.3. Multi-probe Management Architecture
   4.4. User Interface
        4.4.1. Tracer Parameters UI Reference
        4.4.2. Probe Parameters UI Reference
        4.4.3. User Input Modes UI Reference
   4.5. The Lense
   4.6. Lense Management
   4.7. Lense UI Reference

5. Transfer Function Editor
   5.1. The Raycaster
   5.2. The Editor
        5.2.1. The Transfer Function Control
        5.2.2. The TFEditor Class
   5.3. The RaycastController Class
   5.4. User Interface
        5.4.1. Raycast Controller UI Reference
        5.4.2. Raycaster UI Reference
        5.4.3. Transfer Function Editor UI Reference

6. Fourth-component Recalculation
   6.1. ParticleTracer3D class upgraded
   6.2. Updating the Volume Texture
   6.3. Recalculation Fragment Shaders
   6.4. User Interface Reference
        6.4.1. Supported Functions

7. Physical Units Display
   7.1. Introduction
   7.2. Implementation
   7.3. Projecting the Physical Units on the Screen
   7.4. User Interface Reference

8. Exporting Particles
   8.1. Implementation
   8.2. ParticleProbe's ExportParticles Method
        8.2.1. Geometry Shader for Export
   8.3. Multi-probe Export
        8.3.1. Example Export

III. Results and Conclusion

9. Visualizations
   9.1. Multi-probe Configurations
   9.2. Direct Volume Rendering and Fourth-component Recalculation

10. Conclusion

Bibliography

List of Figures

1.1. Different methods for flow visualization
1.2. Images of CFD simulation visualizations [?].
1.3. A snapshot of a two-dimensional fluid with some of the velocity vectors shown [?].
1.4. On recent GPUs, textures can be accessed in the vertex units, and rendering can be directed into textures and vertex arrays. This and other features enable these GPUs to advect and render large amounts of particles [?].
1.5. Hurricane Isabel is visualized with transparent point sprites [?].
1.6. Different particle-based strategies are used to visualize 3D flow fields by the ParticleEngine. (Left) Focus+context visualization using an importance measure based on helicity density and a user-defined region of interest. (Middle) Particles seeded in the vicinity of anchor lines show the extent and speed at which particles separate over time. (Right) Cluster arrows are used to show regions of coherent motion [?].
1.7. Volume ray casting is a direct volume rendering technique to visualize volume data. A 2D image is produced by shooting a ray from the eye position into the volume and accumulating the sampled values along this ray onto the 2D image plane [?].
1.8. Images of isosurfaces rendered from a volume data set.

2.1. The ParticleEngine displaying a rough approximation of its underlying vector field.
2.2. The advection-rendering cycle of the ParticleEngine
2.3. User-defined probe injecting particles into the field. Partially transparent point primitives are used for rendering.
2.4. The ParticleEngine in Clearview mode. Two isosurfaces blended together using a user-defined 'lense'.

4.1. Example of an experiment utilizing a multi-probe configuration
4.2. Initial application architecture. The ParticleTracerBase class hosts all variables holding particle parameters. The advection and rendering steps are performed by methods of the class. The ParticleTracer3D class is responsible for managing the vector field data.
4.3. ParticleProbe class and its ParticleProbeOptions member
4.4. Some change detectors of the ParticleProbeOptions class
4.5. Adjustable options, controlling the 4th component aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data
4.6. Shared shader variables and effect in the ParticleTracerBase, and the child advection effect in ParticleProbe
4.7. The container class, responsible for probe management.
4.8. The ParticleProbeController class, methods for registering and unregistering a probe.
4.9. Multi-probe management architecture.
4.10. Tracer Parameters UI, found in the lower right corner of the application window.
4.11. Probe Parameters UI
4.12. User Input Mode UI
4.13. The new lense is a special kind of probe. The lense's parameters are also contained by the ParticleProbeOptions class.
4.14. Experiment demonstrating the clip-plane functionality of the new lense. The particles from each probe get projected onto the lense's plane.
4.15. The ParticleProbeContainer is used also to manage lenses.
4.16. Probe Parameters UI - Lense selected

5.1. Direct volume rendering by the ParticleEngine, visualizing the vector length as a 4th component.
5.2. The Raycaster class, represented in the ParticleTracerBase.
5.3. The transfer function control element, displaying a transfer function
5.4. The TFEditor class
5.5. RaycastController organizes all volume rendering options in a new UI
5.6. Raycast Controller UI, and the Transfer Function Editor UI displayed below it.
5.7. Raycast Controller UI
5.8. The Raycaster UI
5.9. The Transfer Function Editor UI

6.1. ParticleTracer3D is upgraded to support the 4th component recalculation.
6.2. The 4th Component Recalculation UI.

7.1. Vector field domain, rendered with its physical coordinates and dimensions, projected onto its bounding box.
7.2. Upgrade for physical units
7.3. Physical Units display UI

8.1. Multi-probe configuration classes extended to support particles exporting.
8.2. Export file, created by the ParticleEngine, opened in Microsoft® Excel®

9.1. Multiple small probes displaying streamlines.
9.2. Two-probe configurations, using 4th component aware injection. Below, the lifetime of the particles is set to one to show the modulated injection density.
9.3. Multi-probe configurations. Above, using 4th component aware injection. Below, using 4th component aware modulation.
9.4. Three-probe configuration, using 4th component aware modulation. The two big probes are displaying sprites (above), and points (below). The third probe is displaying streamlines.
9.5. Direct volume renderings of the temperature values, encoded in the 4th component, using different transfer functions.
9.6. Direct volume rendering of the curl (above) and divergence (below), calculated with the 4th component recalculation feature.
9.7. Direct volume rendering of the vector length (above) and divergence (below), focused in the selected probe's boundaries.


Part I.

Introduction and Background


1. Introduction

1.1. Experimental Flow Visualization

Fluid mechanics is the branch of the physical sciences concerned with how fluids behave at rest or in motion. Its uses are very broad: fluid mechanics examines the behavior of everything that is not solid, including liquids, gases, and plasma. This makes it one of the most important physical sciences in engineering [?].

As fluid mechanics is an active field of research with many unsolved or partly solved problems [?], methods of gaining insight into flow patterns play a major role in the pursuit of further understanding of the topic.

Flow visualization is the study of methods to display dynamic behavior in liquids and gases. Most fluids (air, water, etc.) are transparent, making their flow patterns unrecognizable to the naked eye. Thus, techniques for flow visualization must be applied to enable observation [?, ?].

In experimental fluid dynamics, three approaches are most commonly used for this task [?, ?]:

• Surface flow visualization: This method reveals the flow streamlines in the limit as the flow approaches a solid surface. For example, applying colored oil to the surface of a wind tunnel model forms patterns as the oil responds to the surface shear stress (fig. 1.1(b)).

• Particle tracing: Particles, such as smoke or water bubbles, can be added to a flow to trace its motion. The particles can then be illuminated with a sheet of laser light in order to visualize a slice of a complicated fluid flow pattern. Assuming that the particles faithfully follow the streamlines of the flow, measuring its velocity using the particle image velocimetry or particle tracking velocimetry methods is also possible (fig. 1.1(a)).

• Optical methods: Some flows reveal their patterns by way of changes in their optical refractive index. These are visualized by optical methods known as the shadowgraph, schlieren photography, and interferometry. More directly, dyes can be added to (usually liquid) flows to measure concentrations, typically employing the light attenuation or laser-induced fluorescence techniques (fig. 1.1(c)).


(a) Using air bubbles, generated by electrolysis of water, to trace water flows [?].
(b) Surface oil flow visualization [?].
(c) Shadowgram of the turbulent plume of hot air rising from a home-barbecue gas grill [?].

Figure 1.1.: Different methods for flow visualization

1.2. Computational Fluid Dynamics and Velocity Fields

Fluid mechanics problems can be mathematically complex. Most of the time, they are best solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach [?]. It extends the abilities of scientists to study flow by creating simulations of fluids under a wide range of conditions [?, ?].

(a) A computer simulation of high velocity airflow around the Space Shuttle during re-entry.
(b) A simulation of the Hyper-X scramjet vehicle in operation at Mach 7.

Figure 1.2.: Images of CFD simulation visualizations [?].

The fundamental basis of almost all CFD problems is the Navier-Stokes equations, which define any single-phase fluid flow. Removing terms describing viscosity yields the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations. Then, the Euler equations were also implemented. Ultimately, the Navier-Stokes equations were incorporated in a number of commercial packages [?]. In many cases, however, the complexity of the problems is larger than even the most powerful computer systems of today can model.

The most fundamental consideration in CFD is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion. Such a mesh can be either irregular (for instance consisting of triangles in 2D, or pyramidal solids in 3D) or regular. There are also a number of alternatives that are not mesh-based, such as smoothed particle hydrodynamics, spectral methods, and lattice Boltzmann methods [?].

Several variations of field data can be generated by CFD experiments, based on their time-dependency. A static field is one in which there is only a single, unchanging velocity field. Time-varying fields may either have fixed positions with changing vector values, or both changing positions and changing vectors. These latter types are referred to as unsteady [?].

Throughout this thesis, only steady flow fields are considered. A regular 3-D grid of velocity vectors is assumed as the format in which vector field data is present. The primary reasons to choose such a format are its simplicity and the ability to process the data stored in it in parallel. This velocity vector field describes mathematically the motion of a fluid. The length of the flow velocity vector at a particular position in the field corresponds to the flow speed at that position.

Figure 1.3.: A snapshot of a two-dimensional fluid with some of the velocity vectors shown [?].


1.3. Particle Tracing for 3-D Flow Visualization

Advances in experimental and CFD flow analysis are generating unprecedented amounts of fluid flow data from physical phenomena. The ever increasing computational power and the dedicated graphics hardware solutions now available are enabling new, advanced ways of visualizing this data in digital 3-D environments.

In experimental flow analysis, particle tracing has been established as a powerful technique to show the dynamics of fluid flows. Its main principles can be easily adapted for simulation in a digital environment, making it a potent technique for computer-aided fluid flow visualization.

Presented with experimental data, discretized as a finite grid of vector quantities describing the flow speed and direction at given coordinates, a particle system can be numerically advected to approximate a real-world experiment. Then, graphical primitives, such as arrows, motion particles, particle lines, stream ribbons, and stream tubes, can be produced to emphasize flow properties and to act as depth cues to assist in the exploration of complex spatial fields.

Such a system is able to deal with large amounts of vector-valued information at interactive rates. When implemented to exploit the functionality of recent graphics hardware, millions of particles can be traced through the flow at interactive frame rates. This makes the exploration of complex fluid flows on consumer hardware possible and greatly extends its applicability [?].

Figure 1.4.: On recent GPUs, textures can be accessed in the vertex units, and rendering can be directed into textures and vertex arrays. This and other features enable these GPUs to advect and render large amounts of particles [?].


Figure 1.5.: Hurricane Isabel is visualized with transparent point sprites [?].

Importance-driven particle visualization

The capability to handle large systems of particles, however, quickly overwhelms the viewer due to the massive amount of visual information produced by this technique. With the help of importance-driven strategies, interesting structures in the flow can be revealed by reducing the visual information and allowing the viewer to concentrate on important regions.

In [?] a number of importance-driven visualization techniques are proposed. They make experiment exploration less prone to perceptual artifacts and minimize the visual clutter produced by frequent positional changes of large amounts of particles. Relevant structures in the flow are emphasized by integrating user-controlled and feature-based importance measures. These measures are used to control the shape, the appearance, and the density of particles in such a way that the user can focus on the dynamics in important regions and at the same time preserve context information.

Improvements for particle-based 3D flow visualization proposed in [?]:

• Automatically adapt the shape, the appearance, and the density of particle primitives with respect to user-defined and feature-based regions of interest.

• Use vorticity, helicity density, and the finite-time Lyapunov exponent as importance measures. The finite-time Lyapunov exponent is particularly useful for the selection of characteristic trajectories in the flow, called anchor lines, and for visualizing only those particles that leave an anchor.

• A clustering approach is applied to determine regions of coherent motion in the flow. A sparse set of static cluster arrows emphasizes these regions. Cluster arrows are geometric primitives that represent regions of constant motion in the flow.

• Focus+context visualization. This means that, within the focus region, the flow field is visualized at the highest resolution level, and contextual information is preserved by visualizing a sparse set of primitives outside this region.


Figure 1.6.: Different particle-based strategies are used to visualize 3D flow fields by the ParticleEngine. (Left) Focus+context visualization using an importance measure based on helicity density and a user-defined region of interest. (Middle) Particles seeded in the vicinity of anchor lines show the extent and speed at which particles separate over time. (Right) Cluster arrows are used to show regions of coherent motion [?].

Streamlines, streaklines, and pathlines

Streamlines, streaklines and pathlines are field lines resulting from a given vector field description of a flow. They can serve as additional visual cues for flow patterns [?].

The different types of lines differ only when the flow changes with time: that is, whenthe flow is not steady.

• Streamlines are a family of curves that are instantaneously tangent to the velocity vector of the flow. These show the direction a fluid element will travel in at any point in time.

• Streaklines are the locus of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point extends along a streakline.

• Pathlines are the trajectories that individual fluid particles follow. These can be thought of as a 'recording' of the path a fluid element in the flow takes over a certain period. The direction the path takes is determined by the streamlines of the fluid at each moment in time.

• Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move.
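
For a steady field all of these line types coincide, and a streamline can be traced simply by stepping along the interpolated velocity. The following minimal C++ sketch illustrates this with forward Euler steps; sampleVelocity() is a toy analytic stand-in for the grid lookup performed by the ParticleEngine, and all names are illustrative, not taken from the actual code base.

    #include <array>
    #include <vector>

    using Vec3 = std::array<float, 3>;

    // Toy analytic field (a vortex around the z-axis), standing in for the
    // trilinearly interpolated grid lookup used by the real tool.
    Vec3 sampleVelocity(const Vec3& p) { return { -p[1], p[0], 0.0f }; }

    // Trace a streamline: start at 'seed' and repeatedly step along the
    // instantaneous velocity direction. In a steady flow this curve also
    // coincides with the pathline and streakline through the same point.
    std::vector<Vec3> traceStreamline(Vec3 seed, float dt, int maxSteps) {
        std::vector<Vec3> line{seed};
        Vec3 p = seed;
        for (int i = 0; i < maxSteps; ++i) {
            Vec3 v = sampleVelocity(p);                     // tangent of the curve
            for (int c = 0; c < 3; ++c) p[c] += dt * v[c];  // forward Euler step
            line.push_back(p);
        }
        return line;
    }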

1.4. Volume Rendering

Here, a short introduction to volume rendering is made. The ParticleEngine tool, described in chapter 2, applies this powerful technique for visualizing additional spatial properties of a flow field, and this is one of the aspects which are to be addressed throughout this thesis.


Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set [?]. A typical 3D data set is a group of 2D slice images acquired by a CT, MRI, or MicroCT scanner. These are usually acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This makes the technique very suitable for the case of the 3-D regular vector field grids used by the ParticleEngine.

To render a 2D projection of the 3D data set, the opacity and color of every voxel (a volumetric pixel in a 3-D texture) must be defined. This is usually done using an RGBA (red, green, blue, alpha) transfer function that maps an RGBA value to every possible voxel value. This transfer function is then applied with, for example, the volume ray casting technique to obtain the final 2D image. This way of visualizing volume data is called direct volume rendering.
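
The mapping and compositing steps can be condensed into a few lines. The following C++ sketch assumes a simple 1-D lookup table as the transfer function and composites one ray front to back; the table resolution, the names and the early-termination threshold are assumptions made for illustration.

    #include <algorithm>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // 1-D transfer function: a lookup table mapping a normalized scalar
    // voxel value in [0, 1] to a color and an opacity.
    RGBA applyTransferFunction(const std::vector<RGBA>& table, float value) {
        value = std::clamp(value, 0.0f, 1.0f);
        size_t i = static_cast<size_t>(value * (table.size() - 1));
        return table[i];
    }

    // Front-to-back compositing of one ray; 'samples' are the scalar
    // values read from the volume along the ray.
    RGBA compositeRay(const std::vector<RGBA>& table,
                      const std::vector<float>& samples) {
        RGBA out{0.0f, 0.0f, 0.0f, 0.0f};
        for (float s : samples) {
            RGBA c = applyTransferFunction(table, s);
            float w = (1.0f - out.a) * c.a;   // remaining transparency
            out.r += w * c.r;
            out.g += w * c.g;
            out.b += w * c.b;
            out.a += w;
            if (out.a > 0.99f) break;         // early ray termination
        }
        return out;
    }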

Figure 1.7.: Volume ray casting is a direct volume rendering technique to visualize volume data. A 2D image is produced by shooting a ray from the eye position into the volume and accumulating the sampled values along this ray onto the 2D image plane [?].

A volume may also be viewed by extracting surfaces of equal values from the volume and rendering them as polygonal meshes. Such a surface is called an isosurface. Isosurfaces are used as a data visualization method in CFD, allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings [?].


(a) A (smoothed) rendering of a data set of voxels for a macromolecule [?].

(b) An isosurface, rendered by the ParticleEngine.

Figure 1.8.: Images of isosurfaces rendered from a volume data set.


2. The ParticleEngine

The ParticleEngine is a particle system for interactive visualization of 3D flow fields on uniform grids. It exploits features of recent graphics hardware to advect particles on the graphics processing unit (GPU), save the new positions in the graphics memory, and send them back through the GPU to obtain images in the frame buffer. This approach allows for interactive streaming and rendering of millions of particles and enables virtual exploration of high resolution fields in a way similar to real-world experiments. To provide additional visual cues, the GPU constructs and displays visualization geometry like particle lines and stream ribbons.

2.1. Basic Principles behind the ParticleEngine

2.1.1. Vector Field Data

The ParticleEngine operates on a 3-D uniform Cartesian grid, with each cell containing three to four floating point components. The discretized velocity field data for a particular experiment is loaded from a file into this grid. The first three components of every grid cell contain the speed (magnitude) and the direction of the flow at this cell's position. The 4th component of the grid can be utilized to store some scalar physical characteristic of the flow field, such as density or temperature.

Figure 2.1.: The ParticleEngine displaying a rough approximation of its underlying vector field.


2.1.2. GPU Particle Tracing

The ParticleEngine traces massless particles in a flow field over time, computing their trajectories by solving the ordinary differential equation of the field

∂x/∂t = v̄(x(t), t)    (2.1)

with the initial condition x(t₀) = x₀. Here, x(t) is the time-varying particle position, ∂x/∂t is the tangent to the particle trajectory, and v̄ is an approximation to the real vector field v. As v̄ is sampled on a discrete lattice, interpolation must be performed to reconstruct particle velocities along their characteristic lines.
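
A common reconstruction choice on uniform grids is trilinear interpolation, which blends the eight lattice samples surrounding the query position. The sketch below shows this in C++ for a linearly stored grid; the engine itself obtains this interpolation from the GPU's 3-D texture sampler, so the structure and names here are only illustrative.

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <vector>

    using Vec3 = std::array<float, 3>;

    // Uniform 3-D grid of velocity vectors (nx * ny * nz cells), stored linearly.
    struct VectorGrid {
        int nx, ny, nz;
        std::vector<Vec3> data;
        const Vec3& at(int x, int y, int z) const {
            return data[(static_cast<size_t>(z) * ny + y) * nx + x];
        }
    };

    // Trilinear reconstruction: blend the 8 lattice samples surrounding the
    // query position (given in grid coordinates, assumed inside the domain).
    Vec3 sampleTrilinear(const VectorGrid& g, float x, float y, float z) {
        int x0 = static_cast<int>(std::floor(x));
        int y0 = static_cast<int>(std::floor(y));
        int z0 = static_cast<int>(std::floor(z));
        float fx = x - x0, fy = y - y0, fz = z - z0;
        Vec3 v{0.0f, 0.0f, 0.0f};
        for (int k = 0; k < 2; ++k)
            for (int j = 0; j < 2; ++j)
                for (int i = 0; i < 2; ++i) {
                    float w = (i ? fx : 1 - fx) * (j ? fy : 1 - fy) * (k ? fz : 1 - fz);
                    const Vec3& s = g.at(std::min(x0 + i, g.nx - 1),
                                         std::min(y0 + j, g.ny - 1),
                                         std::min(z0 + k, g.nz - 1));
                    for (int c = 0; c < 3; ++c) v[c] += w * s[c];
                }
        return v;
    }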

Modern GPUs expose capabilities such as the possibility to access texture maps in the vertex units (see figure 1.4), programmable geometry and fragment shaders, and the ability to stream vertex data from the geometry-shader stage (or the vertex-shader stage if the geometry-shader stage is inactive) to one or more buffers in memory.

The ParticleEngine computes intermediate results on the GPU, saves these results in graphics memory, and uses them again as input to the geometry units to render images in the frame buffer. This process requires application control over the allocation and use of graphics memory; intermediate results are 'drawn' into invisible buffers, and these buffers are subsequently used to present vertex data or textures to the GPU.

Figure 2.2.: The advection-rendering cycle of the ParticleEngine

Initial particle positions are stored in the RGB color components of a floating point texture of size M×N. These positions are distributed regularly or randomly in the unit cube. In the alpha component, each particle carries a random floating point value that is uniformly distributed over a predefined range. This value is multiplied by a user defined global lifetime to give each particle an individual lifetime. By letting particles die, and thus reincarnate, after different numbers of time steps, particle distributions very similar to those generated in real-world experiments can be simulated.
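
Filling such a start-position texture is straightforward on the CPU. In this hedged C++ sketch, the lifetime factor range [0.5, 1.0) and all names are assumptions; only the layout (positions in RGB, lifetime factor in alpha) follows the text.

    #include <cstdlib>
    #include <vector>

    // One texel of the M x N start texture: RGB hold a start position inside
    // the unit cube, A holds the per-particle random lifetime factor.
    struct StartTexel { float r, g, b, a; };

    std::vector<StartTexel> makeStartTexture(int m, int n) {
        auto rnd = [] { return std::rand() / static_cast<float>(RAND_MAX); };
        std::vector<StartTexel> tex(static_cast<size_t>(m) * n);
        for (StartTexel& t : tex) {
            t.r = rnd(); t.g = rnd(); t.b = rnd();  // random position in the unit cube
            t.a = 0.5f + 0.5f * rnd();              // individual lifetime factor
        }
        return tex;
    }
    // A particle's individual lifetime is then t.a multiplied by the
    // user-defined global lifetime.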


Injection

Particles are initially uploaded to the GPU in a particle buffer. The elements of this buffer are structures containing all the attributes needed to define a particle in the flow. Some of these parameters are: initial position, current position, direction for the next advection step, and lifetime. Another, empty particle buffer is also created. These two buffers form the ping-pong buffer system used for the advection step.
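
The exact element layout is not given here, but a plausible C++ sketch of such a buffer element, using only the attributes named above, could look as follows:

    struct Particle {
        float startPos[3];  // initial position the particle was born at
        float pos[3];       // current position in the field
        float dir[3];       // direction for the next advection step
        float lifetime;     // remaining lifetime (timer)
    };
    // Two GPU buffers of such elements form the ping-pong pair: one is read
    // during advection while the other receives the streamed-out results.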

The user can interactively position and resize a 3-D probe that injects particles into the flow. All particles are initially born, and subsequently reincarnated, within this region. The birth of a particle consists of reading its starting position from the M×N texture described above and initializing its lifetime timer.

Advection

The advection step is performed by a geometry shader, using the RK3(2) integration scheme. The geometry shader updates the positions and the timer of each particle in the source ping-pong buffer and streams them out to the receiving ping-pong buffer, using the stream-output pipeline stage. This effectively moves all particles one time step further along the field. To complete the advection step, the two buffers are exchanged: the target becomes the source and vice versa.

Additionally, the advection step is also responsible for performing a 'death test' on each particle. This test checks the lifetime of a particle and whether it is still within the vector field domain boundaries. According to the test results, the particle can be reinjected, following the steps specified in the 'Injection' section above.
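
The whole cycle can be mirrored on the CPU for illustration. The sketch below uses the third-order Bogacki-Shampine coefficients (the RK3 part of an RK3(2) scheme, omitting the embedded second-order error estimate), a unit-cube domain, and a toy velocity field; none of these details are taken from the actual shader code.

    #include <array>
    #include <utility>
    #include <vector>

    using Vec3 = std::array<float, 3>;

    struct Particle { Vec3 start, pos; float lifetime; };

    // Toy stand-in for the trilinear grid lookup of the velocity field.
    Vec3 sampleVelocity(const Vec3& p) { return { -p[1], p[0], 0.0f }; }

    static Vec3 offset(Vec3 a, const Vec3& d, float s) {
        for (int c = 0; c < 3; ++c) a[c] += s * d[c];
        return a;
    }

    void advectionStep(std::vector<Particle>& drawFrom,
                       std::vector<Particle>& streamTo, float dt, float maxLife) {
        streamTo.clear();
        for (Particle p : drawFrom) {
            // Third-order Runge-Kutta step (Bogacki-Shampine coefficients).
            Vec3 k1 = sampleVelocity(p.pos);
            Vec3 k2 = sampleVelocity(offset(p.pos, k1, 0.5f * dt));
            Vec3 k3 = sampleVelocity(offset(p.pos, k2, 0.75f * dt));
            for (int c = 0; c < 3; ++c)
                p.pos[c] += dt * (2.0f * k1[c] + 3.0f * k2[c] + 4.0f * k3[c]) / 9.0f;
            p.lifetime -= dt;

            // 'Death test': expired or out-of-domain particles are reinjected
            // at their start position (a unit-cube domain is assumed here).
            bool inside = true;
            for (int c = 0; c < 3; ++c)
                inside = inside && p.pos[c] >= 0.0f && p.pos[c] <= 1.0f;
            if (p.lifetime <= 0.0f || !inside) {
                p.pos = p.start;
                p.lifetime = maxLife;
            }
            streamTo.push_back(p);
        }
        std::swap(drawFrom, streamTo);  // the target becomes the source
    }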

Rendering

The particle buffer containing the new positions of the particles is then used to render them. This buffer is bound to the pipeline as a vertex buffer containing a list of points. Then, each particle's current position gets transformed with the momentary model-view-projection matrix and rendered onto the frame buffer.

Additionally, a number of user-adjustable options control which particles get displayed and how. Different display modes are available to aid the understanding of the field. Rendering one pixel for every particle can be used to simulate smoke-like matter distribution. To add more geometry to the scene, sprites can be rendered for each particle, using the geometry shader unit. Oriented sprites, for example, are very useful for visualizing the directional properties of the field.


Figure 2.3.: User-defined probe injecting particles into the field. Partially transparent point primitives are used for rendering.

2.1.3. Additional Features

Importance-driven Particle Visualization

The ParticleEngine also incorporates all the importance-driven particle visualization techniques described in section 1.3. This includes the importance measures, the user-defined focus+context regions, the anchor lines, the cluster arrows, and the different kinds of streamlines.

Volume Rendering

As already mentioned in the Vector Field Data section (2.1.1), the grid on which the ParticleEngine operates has an additional 4th component in every one of its cells. Ignoring the three other components, this can be interpreted as a volume data set, subjecting it to volume rendering techniques.

This makes possible the visualization of any flow characteristic loaded into this 4th component. Direct volume rendering and isosurfaces are both supported (see section 1.4). Additionally, the so-called Clearview mode is also available within the ParticleEngine. The Clearview mode enables the user to compare two isosurfaces by rendering them and then blending them according to user-specified options.


Figure 2.4.: The ParticleEngine in Clearview mode. Two isosurfaces blended together using a user-defined 'lense'.


3. Thesis Goals

After using the ParticleEngine tool on individual snapshots of velocity field data of interest, the research group members acknowledged its potential, but also noticed that, in order for it to be seriously considered for the project, upgrades must be realized.

The primary goal of this project is to address the wishes of the astrophysicists and upgrade the tool to best suit their needs. In the process, the application's existing functionality should be optimized and partly redesigned to work seamlessly with the new features.

The following outlines the stated upgrade wishes and gives more details about each one:

Implement a way to start particles from more than one box and color them separately.

Currently, the user can inject particles into the flow field by positioning a cuboid-shaped source of particles, called a probe, within the field's spatial domain. The particles' start positions are distributed inside the probe's boundaries according to a predefined scheme, either randomly or uniformly. These positions are then used to initially inject the particles, or to reincarnate those which left the domain or reached the end of their lifetime.

The user can move and resize the probe, and observe how the particles move through the flow. Different parameters can be changed, such as the number of particles, their color or their size.

Having just one probe places several limitations on the user:

• Only one region for particle injection can be defined. The user is not able to simultaneously observe two or more features at different locations within the field, for example two particular streams within the flow. Making the probe large enough is not a solution, because the particles are distributed in the whole probe and will eventually have too much space in between to adequately represent the movement of the mass. Increasing the number of particles will distract the user from the important regions.

The ParticleEngine's feature injection capability can be useful in this case. It populates the probe according to the field's 4th component, given upper and lower thresholds. It is, however, computationally expensive, making movement and resizing of the probe problematic. Also, useful values for the 4th component must be preloaded, making interactive experimentation impossible.

• There cannot be more than one type of particles at a time in the flow. A 'type' of particle is defined as a combination of the different parameters controlling how the particles are injected, advected and displayed. These include injection mode, display mode, color, and lifetime, to name a few.

• All particles start their advection simultaneously.

These limitations should be addressed and alleviated. Chapter 4 is devoted to this topic.


Populate the starting positions of particles weighted by the local 4th-component value of the field

In addition to enabling multiple probes, the feature injection mode of the probe, already mentioned above, should be adapted and optimized. The present implementation depends on upper and lower thresholds to decide where the particles must be born. This introduces sharp borders between regions meeting the threshold condition and the others.

Being able to modulate the density of the particles according to the 4th component will allow regions of interest to have more particles than others, and will make for a smooth transition between them.

Furthermore, the particle density should update in response to probe movement and resizing. To avoid hampering the exploration, this update should happen at interactive rates.

This is addressed in Chapter 4, in particular in section 4.2.4.

Improve/generalize ways in which to define the transfer function of the 4th-component inside the ParticleEngine

The ray casting volume rendering technique integrated in the ParticleEngine provides high quality images of the field's 4th component. This makes it a powerful tool for gaining insight into the field's characteristics, thus aiding the understanding of flow patterns.

In particular, the direct volume rendering capability should be addressed here. For an introduction to volume rendering, see section 1.4.

At the moment, the ParticleEngine has no general way of defining a transfer function for direct volume rendering. The main interface allows the user to load a shader code fragment from a file, which must include particular functions. The integrated ray caster then uses these functions to map values to color.

This has several shortcomings:

• It does not support interactive exploration. The user must guess which transfer function will yield usable results. For every different transfer function, there must be a code fragment saved as a file.

• It requires programming skills, and also knowledge of the names and arguments of the functions called by the ray caster.

• Loading files with an inappropriate format may produce unexpected results. This requires extra care when creating the code fragments.

To unleash the full potential of the built-in volume renderer, a new way of defining a transfer function should be introduced. It should improve the exploration of the 4th component by supplying an intuitive user interface.

Chapter 5 is dedicated to this goal.


Allow snapshots of particles to be exported to file. For each particle, store the original and final location; flag particles that have left the domain.

The use of multiple specialized tools for different purposes, to produce results not obtainable by any of the tools alone, is always a very important consideration, and sometimes even unavoidable in large projects.

To make the ParticleEngine more applicable in such a synergetic environment, a way of exporting a snapshot of the particles currently in the flow should be implemented. For further analysis or more advanced visualization, the particle start positions and current positions should be written out to a file on the disk.

To widen the interoperability, a standardized and simple format for the file should be chosen.

This is discussed in chapter 8.

Add physical coordinates to the display on screen. Do the same for the location and size of the box that acts as source of particles.

Without some way of specifying real physical coordinates and scale, the usefulness of the visualized data for scientific research quickly reaches its limit.

The user should be able to set the coordinates and the dimensions of the field's domain in physical units specified by him. A user interface should be presented to facilitate this task. Additionally, visual cues should be displayed on-screen, hinting at the physical dimensions and scale of the data visualized.

Furthermore, the user should be presented with a way to define the exact position and dimensions of each probe currently in the scene, in the physical coordinates specified. The functionality for exporting particles, described above, should also be made aware of these units.

The solution to this requirement is discussed in chapter 7.

Allow calculation of properties of velocity fields and use it as "fourth component"

As the power of the built-in volume renderer becomes more accessible with the introduction of a user-definable transfer function, the 4th component visualization gains even more relevance. As of now, if the user wants to visualize some field characteristic, such as temperature or density, it must be encoded as the 4th component and loaded with the vector field.

A flow field can have many characteristics, delivered by the experimental data. This requires managing many data sets which differ only in their 4th components. As a consequence, frequent reloads of vector field data must be taken into consideration. Furthermore, some important characteristics are derivable from the vector field itself, or through some function of the present ones.

Realizing a way to dynamically recalculate the data stored in the 4th component at run-time will make the ParticleEngine much more versatile and reduce the burden of managing many data sets. As a first step, the calculation of local properties of the field, such as divergence and curl, should be implemented. Then, additional functions, like applying some function to the present 4th component values, or loading a 4th component from an external field and using it to change the present values in some way, can be considered.

This requirement is addressed in chapter 6.


Part II.

Analyzing Velocity Fields in Cosmological Simulations


4. Multiple Sources of Particles

This chapter addresses the first two upgrade requirements described in the 'Thesis Goals' section (3). In order to alleviate the drawbacks pointed out there, changes to the application's architecture were realized. User-friendly interaction with multi-probe configurations has been ensured by a new user interface. Also, already present functionality which is considered useful for the project has been seamlessly integrated into the new environment.

First, an introduction to the current architecture is made. This is intended to make clear what a source of particles is from the application's standpoint. Then, a new definition for the probe is given, and technical details about the architecture changes enabling the support for multi-probe configurations are presented. Finally, the new user interface is discussed in detail.

Figure 4.1.: Example of an experiment utilizing a multi-probe configuration


4.1. Initial Architecture

Here, the application's architecture, as it was at the beginning of this project, is introduced. As it served as the base for the development, insight into it will help to fully understand the reasons behind the performed application changes, described in the next sections.

The ParticleTracer Object

The ParticleTracer is the main internal object of the ParticleEngine. It consists of the ParticleTracerBase class and an extending class. In this project, only the ParticleTracer3D is considered. Figure 4.2 shows its structure as a class diagram. Only some members and methods are displayed, for readability.

Figure 4.2.: Initial application architecture. The ParticleTracerBase class hosts all variables holding particle parameters. The advection and rendering steps are performed by methods of the class. The ParticleTracer3D class is responsible for managing the vector field data.

The ParticleTracerBase class hosts all the variables holding particle parameters, such as start positions (m_pParticleStartposTex), injection mode (m_StartPosMode), initial count (m_iStartParticles), lifetime (m_iMaxLifetime) and color (m_vPartColor). The methods performing the advection (AdvanceParticles()) and rendering (RenderParticles()), and the ping-pong buffers (m_pParticleDrawFrom and m_pParticleStreamTo) used by them, are also integral parts of the class.

The ParticleTracer3D class extends the ParticleTracerBase class by adding functionality for loading and manipulating the 3D vector field data. The data is loaded into the m_pTexture3D variable by the CreateVolumeTexture() method.


Next, an extension of this architecture is presented. The driving idea behind the new design is to make it more flexible by allowing easier extensibility. To achieve this, the architecture must be modularized. As a first step, the particle source is defined as a stand-alone entity. Then, additional structures are introduced to simplify the management of multiple particle sources.

4.2. The Probe

A source of particles within the new architecture is referred to as a probe. This name is adopted from the current implementation, which also calls the source of particles a probe. However, this term is to be extended and more clearly separated from other ParticleTracer functionality.

4.2.1. Source of Particles as a Stand-Alone Entity.

The basic features and visual appearance of the 'old' probe are to be kept. These are as follows:

• a cuboid-shaped region which defines possible start positions for particle injection.
• it can change its position and dimensions along the main axes. No rotation is supported.
• all the particles injected from a probe are of the same 'type'. The type of particles is the collection of all adjustable particle parameters and is defined in the next section.

A new class called ParticleProbe (see figure 4.3) is created to represent the probe entity. It encapsulates all variables and defines all the functionality of a particle source. This includes, most notably, the advection and rendering, and the shader management functions.

Figure 4.3.: ParticleProbe class and its ParticleProbeOptions member


The ParticleProbe class has all its parameters encapsulated by the m_ppOptions member of the ParticleProbeOptions class (this class is explained in detail in the next section). The probe contains the particles' start positions (m_v4aParticlesStartParams) and the ping-pong buffers (m_pParticlesContainer and m_pParticlesContainerNew), and it is responsible for the advection and rendering of its own particles with the methods AdvectParticles() and RenderParticles().¹

4.2.2. Particles’ Type

As already mentioned above, a probe can inject only particles of the same type. This section defines what a particle type is.

A particle type is the collection of all parameters which can be adjusted to get different particle behavior or visual appearance. To separate the variables representing such parameters from those controlling internal processes, such as the advection and rendering, the class ParticleProbeOptions is introduced (see figure 4.3). Encapsulating all adjustable options in their own class enables the probe to expose only these for outside manipulation, ensuring its internal integrity.

Adjustable particles' options

The following list summarizes all the adjustable options¹:

• Injection mode (InjectionMode);
• Particle color / opacity (vProbeColor and fParticlesAlpha);
• Particle count (iParticlesCount);
• Particle lifetime (iParticlesLifetime);
• Particle size (for sprites) (fSpriteSize);
• Particle display mode (PVisMode);
• 4th component aware operation options (see section 4.2.4).

Along with the other adjustable particle options, the ParticleProbeOptions class also exposes the matrix used to save the position and dimensions of the probe (mProbeWorldMatrix), thus allowing outside manipulation of it.
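
Put together, a rough C++ sketch of the options class could look as follows. The member names follow the text; the concrete types and enum values are assumptions made for illustration.

    enum class InjectionMode { Random, Uniform };
    enum class PVisMode { Points, Sprites, OrientedSprites };

    struct Float4    { float x, y, z, w; };
    struct Matrix4x4 { float m[4][4]; };

    class ParticleProbeOptions {
    public:
        InjectionMode injectionMode;       // injection mode (InjectionMode)
        Float4        vProbeColor;         // particle color
        float         fParticlesAlpha;     // particle opacity
        int           iParticlesCount;     // particle count
        int           iParticlesLifetime;  // particle lifetime
        float         fSpriteSize;         // particle size (for sprites)
        PVisMode      visMode;             // display mode (PVisMode)
        Matrix4x4     mProbeWorldMatrix;   // probe position and dimensions
        // ... plus the 4th component aware options of section 4.2.4 ...
    };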

4.2.3. Adjustable Options and ’Change Detectors’

The ParticleProbeOptions class also introduces the concept of change detectors. The change detectors are functions which indicate a change of an option since it was previously checked. This allows the probe to keep track of the adjustable options and to react to changes which require adjusting internal structures.

¹ Additionally, there are variables and methods for trajectories, which are the new version of streamlines and are no longer understood by the application as a different display mode, but can coexist with the particles within a probe. However, for simplicity, these are not going to be further considered.


Figure 4.4.: Some change detectors of the ParticleProbeOptions class

Figure 4.4 shows some of the available detectors. These are triggered if the respective property's set accessor was used. After reading the detectors, the probe is responsible for resetting them by invoking the ResetDetectors() method.
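
The pattern reduces to a dirty flag per option, set by the accessor and cleared by ResetDetectors(). A minimal, hypothetical C++ sketch (one option shown; the real class tracks many):

    class ProbeOptionsWithDetectors {
    public:
        void SetParticlesCount(int n) { m_iParticlesCount = n; m_bCountChanged = true; }
        int  GetParticlesCount() const { return m_iParticlesCount; }

        bool ParticlesCountChanged() const { return m_bCountChanged; } // change detector
        void ResetDetectors() { m_bCountChanged = false; }

    private:
        int  m_iParticlesCount = 0;
        bool m_bCountChanged   = false;
    };

    // The probe polls once per frame, for example:
    //   if (opts.ParticlesCountChanged()) RecreateParticleBuffers();
    //   opts.ResetDetectors();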

4.2.4. 4th Component Aware Operations

This section focuses on the second upgrade requirement from the 'Thesis Goals' section (3), for enabling modulation of particle density according to a particular field characteristic. Moreover, it describes the integration of some of the existing importance-driven visualization techniques into the new probe concept. This is referred to as the 4th component aware parameter modulation.

The use of the 4th component of the vector field data to procedurally adjust a particle's parameters is referred to as a 4th component aware operation. These operations are applied on a per-particle basis, but their threshold parameters are defined per probe. There are currently two types of 4th component aware operations: the 4th component aware injection and the 4th component aware modulation.

4th component aware particle injection.

When this mode is enabled, only particles whose positions satisfy the injection requirement are born and subsequently advected. This injection requirement is defined by three adjustable options, scale (f4thCompAwareInjectionScale), min (f4thCompAwareInjectionMin) and max (f4thCompAwareInjectionMax), together with the 4th component of the field at the particle's position.

The scale, min and max options are used to adjust the interval of 4th component values which is of interest for a particular experiment. This is done according to the following formula:

min ∗ scale < 4th component < max ∗ scale    (4.1)

For all particles whose 4th component values are bigger than max ∗ scale, the chance of birth is 100%. If the value is lower than min ∗ scale, the chance is zero. The chance for particles lying in the specified range is calculated with HLSL's smoothstep function, which uses Hermite interpolation to return a number between the specified lower and upper bound.
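
Expressed in code, the birth-chance computation amounts to a single smoothstep evaluation. The C++ sketch below re-implements HLSL's smoothstep intrinsic so it is self-contained; the function names are illustrative, not taken from the shader source.

    #include <algorithm>

    // Hermite interpolation, matching HLSL's smoothstep() intrinsic.
    float smoothstep(float lo, float hi, float x) {
        float t = std::clamp((x - lo) / (hi - lo), 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t);
    }

    // Birth chance according to formula (4.1): zero below min*scale,
    // one above max*scale, smoothly interpolated in between.
    float birthChance(float fourthComponent, float scale, float minV, float maxV) {
        return smoothstep(minV * scale, maxV * scale, fourthComponent);
    }

    // A particle is born if a uniform random number in [0, 1) falls
    // below its birth chance.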


Figure 4.5.: Adjustable options controlling the 4th component aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data.

The effect of this method of injection is that it allows increasing the particle density in regions of interest.

The 4th component aware injection options act in the advection phase. The geometry shader responsible for particle birth ignores all particles which failed the described test. Thus, only born particles are considered by subsequent advection steps.

This method of particle injection density modulation is very fast, because the decisions are taken on the GPU. This allows for real-time dynamic density adjustment as the probe changes its position and dimensions. However, in most cases many particles must be transferred to the GPU, while only a handful are used in the advection process.

For an example, see figure 9.2.

4th component aware parameter modulation.

In this mode, some of the parameters of the particles are changed according to the 4th component value at each particle's momentary position. Which parameters get modulated depends on the particle's display type: for points, the 4th component modulates the particle's opacity; for sprites, the size and the opacity.

The modulation exposes the same adjustable options and conforms to the same formula as described for the 4th component aware injection. If the 4th component value lies below scale ∗ min, the opacity / size is set to zero. If it is higher than scale ∗ max, the opacity / size is set to the maximal value given in the ParticleProbeOptions of the respective probe.

The 4th component aware parameter modulation happens during the rendering phase. This means it does not require the advection to be activated to see its effects. Also, no internal changes are caused by its options, making it very fast and interactive.
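
Using the same smoothstep ramp, the render-time modulation can be sketched as below; birthChance() is reused from the previous sketch, and maxAlpha / maxSize stand for the respective maxima in ParticleProbeOptions (names assumed).

    // Declared in the previous sketch.
    float birthChance(float fourthComponent, float scale, float minV, float maxV);

    struct ModulatedParams { float alpha, size; };

    ModulatedParams modulate(float fourthComponent, float scale, float minV,
                             float maxV, float maxAlpha, float maxSize, bool isSprite) {
        float f = birthChance(fourthComponent, scale, minV, maxV); // 0..1 ramp
        ModulatedParams p;
        p.alpha = f * maxAlpha;                   // points and sprites: opacity
        p.size  = isSprite ? f * maxSize : 0.0f;  // sprites only: size
        return p;
    }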

For an example, see figure 9.4.


4.2.5. Shared Shader Variables and Effect Pools

Every probe advects and renders its own particles. This requires it to manage the needed effects and shader variables itself.

Figure 4.6.: Shared shader variables and effect in the ParticleTracerBase, and the child ad-vection effect in ParticleProbe

The advection phase depends on many probe-specific parameters, and it also needs access to the vector field data. This data, and other resources, must be shared between probes; otherwise the inappropriate resource usage would defy the multiple-probe concept.

This problem is solved with the help of effect pools and shared shader variables. Figure 4.6 shows the employed structure. The ParticleTracer manages an effect pool (m_pProbeParticlesAdvectionEffectPool), and the vector field data (m_pAEP_VolumeTex_shared), along with other shared resources, is a shared variable managed by this pool. Every probe then extends this pool with its own effect file (m_pParticlesAdvectionChildEffect), which manages the probe-specific variables.

The rendering effect (m_pProbeParticlesRenderingEffect), on the other hand, is the same for all probes and is hosted by the ParticleTracer. The probe gets a pointer to the respective effect to create internal pointers to the technique and shader variables it needs to set.

4.3. Probe Management

Multiple probes in the ParticleEngine are now possible by creating and maintaining an array of ParticleProbe instances. For this purpose, the vector type from C++'s Standard Template Library can be used. Utilizing a vector to store the probe instances has the advantage of being flexible and extensible. This vector will then be a member of the ParticleTracer.

However, as the probe concept grows more complex, handling this vector becomes in-creasingly difficult. This was the reason to incorporate the vector in a container class,devoted to the task of managing the probes.

Another aspect of probe management is exposing the available adjustable options of all currently instantiated probes for manipulation by the user. Also, supplying a user-friendly and intuitive interface to these options is vital to making the concept practical. This is done by a controller class, which is responsible for ensuring the accessibility and easy manipulation of multi-probe configurations.

4.3.1. The ParticleProbeContainer Class

Figure 4.7 depicts a simplified diagram of the ParticleProbeContainer class.

Figure 4.7.: The container class, responsible for probe management.

The ParticleProbeContainer is a member of the ParticleTracerBase. It hosts the vector with probe instances as a private member (m_vParticleProbes), and defines an interface for accessing it. The adding (AddProbe()) and removing (RemoveProbe()) of probes are the basic methods of the class. Additionally, methods for saving and loading probe configurations, for exporting probe particles, and for resetting them (ResetProbesParticles()) are supported.
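A simplified sketch of this interface, following figure 4.7 (the signatures are assumptions):

#include <string>
#include <vector>

class ParticleProbe;
class ParticleProbeController;

class ParticleProbeContainer
{
public:
    void AddProbe();                    // create a probe and register it
    void RemoveProbe(int iProbe);       // unregister and destroy a probe
    void ResetProbesParticles();        // restart advection for all probes
    void ExportProbesLayout(const std::wstring& sFileName);
    void ImportProbesLayout(const std::wstring& sFileName);

private:
    std::vector<ParticleProbe*> m_vParticleProbes; // the managed probes
    ParticleProbeController*    m_pPPC;            // set at initialization
};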

Saving and Loading Probe Configurations

The methods ExportProbesLayout() and ImportProbesLayout() implement saving and loading of probe configurations. Every probe exposes two methods allowing the import (ImportProbeOptions()) and export (ExportProbeOptions()) of its options. The container then wraps these methods to facilitate the process for multiple probes.

The SaveProbeLayout() and LoadProbeLayout() methods of the ParticleTracerBase are triggered by the main user interface (see section 4.4.1). They display a dialog for file selection, and subsequently call the container's methods to perform the operations needed to save or recreate a probe configuration.


4.3.2. The ParticleProbeController Class

The ParticleProbeController class concentrates on presenting an intuitive user interface giving access to the adjustable options of all the probes currently instantiated. Drawing bounding boxes around the probes is also the job of the controller.

Figure 4.8.: The ParticleProbeController class, methods for registering and unregistering a probe.

Internally, the controller maintains a vector of ParticleProbeOptions instances (m_vPPOptions). This implies that all options which should be changeable by the user must be present in the ParticleProbeOptions class.

To use the controller to manage a probe, the probe must be registered first. The registration process adds a pointer to the ParticleProbeOptions instance of the probe to the controller's vector, thus allowing the controller to display the interface for adjusting its options.

The probe's method RegisterToController() is used to register a probe. This method takes a pointer to the controller class as an argument. It then uses this pointer to call the RegisterParticleProbe() method of the controller class, supplying it with a pointer to its ParticleProbeOptions instance. This ensures that only a ParticleProbeController instance can get a pointer to the adjustable options of the probe.
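The handshake can be sketched as follows (an assumed implementation; only the method and member names are taken from figure 4.8 and the surrounding text):

void ParticleProbe::RegisterToController(ParticleProbeController* pController)
{
    // Hand the controller a pointer to this probe's adjustable options.
    pController->RegisterParticleProbe(&m_ppOptions);
}

void ParticleProbeController::RegisterParticleProbe(ParticleProbeOptions* pOptions)
{
    // The options become editable through the controller's UI.
    m_vPPOptions.push_back(pOptions);
}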

Unregistering a probe is the reverse process. The controller identifies the probe which has requested to unregister, and removes it from the options vector.


4.3.3. Multi-probe Management Architecture

The processes of registering and unregistering a probe are abstracted by the container class. When it is initialized, it gets a pointer to the controller class (m_pPPC). Afterwards, the methods AddProbe() and RemoveProbe() are responsible for creating a new probe and registering it to the controller, respectively unregistering and destroying it (figure 4.9). A sketch of this pair follows the figure.

Figure 4.9.: Multi-probe management architecture.
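A sketch of the add/remove pair, under the assumption that probes are heap-allocated and removed by index (UnregisterFromController() is a hypothetical counterpart to the registration method):

void ParticleProbeContainer::AddProbe()
{
    ParticleProbe* pProbe = new ParticleProbe();
    pProbe->RegisterToController(m_pPPC); // expose its options in the UI
    m_vParticleProbes.push_back(pProbe);
}

void ParticleProbeContainer::RemoveProbe(int i)
{
    m_vParticleProbes[i]->UnregisterFromController(m_pPPC); // hypothetical name
    delete m_vParticleProbes[i];
    m_vParticleProbes.erase(m_vParticleProbes.begin() + i);
}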


4.4. User Interface

The user interface is a very important part of every software solution. The usability of a tool is characterized by the power of its user interface. To make the process of manipulating multiple probes and creating probe configurations a pleasant experience, a new, redesigned interface is proposed. It improves on the previous solution by being more compact and intuitive.

This section explains the new user interface's control elements in more detail.

4.4.1. Tracer Parameters UI Reference

The main UI, or the Tracer Parameters UI, is found in the bottom-right corner of the main application window. It is created and maintained within the ParticleTracer object.

Figure 4.10.: Tracer Parameters UI, found in the lower right corner of the application window.

Reference

Bounding Box (checkbox) Turns on/off the bounding box for the domain containing the vector field data.

Advect (checkbox) Turns on/off the advection of particles.
If the advection is off, the advection phase is omitted, thus pausing the particles at their current positions. In this case the adjustable probe options, such as injection mode and lifetime, which act in the advection phase, will not have any effect. Some will reset the particle buffer, causing all of a probe's particles to disappear. All options acting in the rendering phase, such as probe color, display type and sprite size, will have their usual effect.

For a thorough explanation of the different particle options refer to section 4.4.2.

Step Scale (slider) Increases / decreases the step length for each advection step.
The advection phase is repeated continuously. Each time, every particle is moved a small amount in the direction of the field vector at the particle's position. How small this movement is is controlled by this slider. Increasing its value causes faster advection, but lower precision for the next particle position.


Probes ’+’ (button) Adds a new probe to the current probe configuration.

Probes '-' (button) Removes a probe from the current probe configuration.
The probe must be selected in the probe interface first; otherwise nothing happens. The probe interface is described in section 4.4.2.

Probes 'Sv' (button) Saves the current probe configuration to a file.
This saves the positions and all adjustable options of all probes currently in the scene. The current particles' positions are not saved.

Probes 'Ld' (button) Loads a probe configuration from a file.
Previously saved probe configurations can be loaded from file. Each probe's position and dimensions, and also all the adjustable options, are loaded. The particle buffers are recreated from scratch for each probe and the advection restarts from the probe.

Lense '+' (button) Adds a new lense to the current probe configuration. For detailed information about lenses, see section 4.5.

Lense '-' (button) Removes a lense from the current probe configuration. For detailed information about lenses, see section 4.5.

Particles Reset (button) Forces the particles of all probes to be reborn and restart advection from their initial positions within their probe.

Particles Export (button) Exports the positions of the selected probe's particles currently in the field to a file. To export all particles at once, deselect all probes (for detailed information about the export functionality, see chapter 8).

Sprites 'Load 1' (button) Loads a sprite from an image file or geometry definition file. This sprite is then used to display particles in 'Sprites' and 'Oriented sprites' display modes.

Sprites 'Load 2' (button) Loads a second sprite from an image file or geometry definition file.

depth info (checkbox) Indicates whether depth information is to be generated when loading sprites from a geometry definition file.

Render Volume (checkbox) Turns on volume rendering.
Volume rendering is discussed in more detail in section 2.1.3.

Show UI (checkbox) Turns on the volume rendering user interface. It gives access to all of the ray caster settings.

The new volume rendering UI of the ParticleEngine is discussed in section 5.4.


4.4.2. Probe Parameters UI Reference

The ParticleProbeController's UI, or the Probe Parameters UI, is displayed in the upper-left corner of the application window. It is maintained by the ParticleProbeController object.

The probe interface presents the user with control elements to modify all available adjustable probe options. Additionally, it allows the user to select a probe, turn on and off the bounding boxes, and displays information about the selected probe's current position and dimensions. The interface is divided into a static and a dynamic part. The dynamic part changes with the selection of a probe. Thus, to be able to see all the elements described below, a probe must first be selected.

Reference

Figure 4.11.: Probe Parameters UI

Bounding Box (checkbox) Turns on/off the probes' bounding boxes. Apart from presenting the user with a visual cue of each probe's position and dimensions, the bounding boxes allow mouse manipulations. For detailed information about mouse control and user input modes, refer to section 4.4.3.

Only selected probe (checkbox) When active, only the bounding box of the selected probe is displayed. This allows easier probe manipulation in a complex multi-probe configuration.

Select probe (drop-down menu) This menu contains all the probes in the current probe configuration. It is used to select a probe for editing its options. Selecting a probe makes all probe-specific control elements visible, and changes its bounding box color to yellow. Selecting 'None' will deselect all probes.

Probe position and dimensions (label) Displays the current probe position and dimensions within the domain's [0,1] range and in the given physical units (see chapter 7). Three sliders represent the three axes along which the probe can be moved and resized. The radio buttons at the top choose what these sliders control - position or dimensions.

Injection Mode (drop-down menu) Defines how the particles are initially placed within the probe. Random will distribute the particles pseudo-randomly inside the probe; the uniform placement will place them at a uniform distance from each other.

R G B (Particles color) (sliders) Sets the color of the probe, and of the injected particles.

4th comp. aware inj. (checkbox and sliders) The checkbox enables the 4th component aware injection. The sliders control the chance of particles to be born according to the 4th component of the field. In particular: scale scales the field down (dividing its values by the scale value), and the min/max sliders define the range for respectively 0 (0%) and 1 (100%) chance of birth. For the exact formula, see 4.1.

Display particles (checkbox) Turns on/off the particles. This allows detailed control over which probes insert particles into the field. When turned off, the advection phase for the probe is omitted, improving the performance of the ParticleEngine.

Particles type (no name on the UI) (drop-down menu) Selects which particle type is displayed.

There are currently three particle types supported - points, sprites and oriented sprites. When choosing one of the two sprite types, a performance hit should be expected, because additional geometry is introduced.

Count (slider) How many particles will be initially injected into the flow.
Activating 4th component aware injection will reduce this number upon injection, as some particles may not meet the birth condition.

Lifetime (slider) The number of advection steps before the particle gets reborn. Exiting the domain restarts the particle regardless of this setting.

Opacity (slider) Controls the transparency of the particles. Different particle types are rendered with different blending modes, so this setting behaves differently according to the chosen particle type.

Sprite size (slider) Meaningful only when displaying sprites. Gives the size of each sprite.

4th comp. parameter mod. (checkbox and sliders) Activates the 4th component parameter modulation. The sliders have the same function as described for the 4th component aware injection above.

Here, however, not the chance of birth is calculated, but the value of the size and opacity parameters. As size is meaningless for the Points particle type, only opacity is modulated for this type. The minimum value of a parameter is 0, and the maximum is the value currently set by the respective slider.


Display trajectories (checkbox) Turns on/off the display of trajectories.
Trajectories are particles preemptively advected a given number of steps. Lines are then used to connect the particles from one step to the next, creating a trajectory in the vector field.

Trajectory type (no name on the UI) (drop-down menu) Currently, only streamlines are supported, as only steady flows are considered in this project.

Trajectory Opacity (slider) The transparency of the trajectory lines.

Trajectories Count (slider) How many particles are traced simultaneously.

Trajectory Lifetime (slider) How many preemptive advection steps are used to construct the trajectories.


4.4.3. User Input Modes UI Reference

The different user input modes control the camera and mouse behavior. There are currently three different modes, which assign different functionality to the mouse and the keyboard.

Figure 4.12.: User Input Mode UI

View This mode uses the model-view camera. The right and left mouse buttons are assigned to rotate the camera around the field domain. The scroll wheel zooms in or out. This mode can be quickly selected by pressing the F5 key.

Probe Edit The same as the view mode. The difference is that the left mouse button is used for probe selection and manipulation, rather than camera rotation.

In this mode the left mouse button events are handled by the probe controller. The bounding boxes of the probes are used to catch the cursor. Thus, to enable this functionality, the bounding boxes must be turned on.

Upon hovering the mouse over a bounding box, its sides turn yellow to indicate which side is currently catching the cursor. A single click selects a probe and allows further manipulation. To change the position of the probe, hover the mouse over a side, and then drag. The probe will be repositioned in the plane containing the picked side.

To resize the probe, hover over a side and Ctrl+drag. The probe's dimensions will be changed along the plane containing the picked side.

The two operations can be seamlessly combined. By pressing Ctrl, the dragging operation changes over to Ctrl+dragging without the need to release the mouse button, and vice versa.

First Person This mode uses the first-person camera. The keys W, A, S, D are used to move the camera around. By dragging the mouse, the user can look around.


4.5. The Lense

The lense concept is taken from the current version of the ParticleEngine and is a part of the importance-driven visualization techniques. A lense allows the user to concentrate on important regions within the vector field domain in a complex probe configuration. This is achieved by manipulating the opacity/size of the particles based on their distance to a user-defined lense center.

The old lense definition consists of a center point and a radius. The particles fade away with increasing distance to the lense's center. The position of the lense center is set by the user using the mouse and the scroll wheel to adjust the depth in view space. The radius is adjusted through the user interface.

In the new architecture, the lense has been extended and abstracted as a stand-alone entity. It is now defined as a special kind of probe (fig. 4.13). In that case, all the controls used to manipulate the probes, including the mouse, can be directly used on the lense, too. This allows for a consistent and simpler user interface, and for more precise placement and resizing.

Figure 4.13.: The new lense is a special kind of probe. The lense's parameters are also contained by the ParticleProbeOptions class.

A new set of adjustable options is defined specially for lenses. These options can be used to simulate the old lense's spherical shape behavior, and the clip-plane functionality.

Lense Redefined

• As a special kind of probe, it encloses a cuboid-shaped region within the vector field domain.


• Fading of the particles is controlled for each axis separately. It depends on the particle's distance from the lense's edge along a particular axis.

• Clip-plane functionality is simulated by the lense's projection capabilities. The projection can be turned on for one of the three main planes. Then, the particles are projected onto the respective plane going through the lense's center.

(a) Multi-probe configuration with a lense turned off.

(b) Multi-probe configuration with a lense turned on.

Figure 4.14.: Experiment demonstrating the clip-plane functionality of the new lense. The particles from each probe get projected onto the lense's plane.


4.6. Lense Management

The same method for managing probes is applied also to lenses. There is no limit placed on how many lenses can be created in a probe configuration.

Figure 4.15.: The ParticleProbeContainer is used also to manage lenses.

The probe container maintains a separate vector only for lenses (m_vLenses). The reason for this is that the rendering phase in the presence of lenses requires special care. The container also has special functions for adding (AddLense()) and removing (RemoveLense()) lenses.

The probe controller, on the other hand, doesn't differentiate between lenses and probes when registering. It checks dynamically upon selection whether the selected object is a lense or a probe, by calling the IsLense() method of the ParticleProbe class. Then, it presents the user with the appropriate interface.

Rendering phase in the presence of lenses

The rendering phase is managed by the particle container. It calls each probe's OnRender() method sequentially. If a lense is introduced into the configuration, its OnRender() method must be called first. That is because the lense uses it to set the shader variables which then manipulate the appearance of the particles. For every additional lense, all the probes must be rendered once more, because each lense can set different rendering parameters.
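The loop described above might look as follows (a sketch of the described behavior, not the actual implementation):

void ParticleProbeContainer::OnRender()
{
    if (m_vLenses.empty())
    {
        for (size_t p = 0; p < m_vParticleProbes.size(); ++p)
            m_vParticleProbes[p]->OnRender();
        return;
    }

    // One full pass over all probes per lense: each lense first sets the
    // shader variables that modulate the particles' appearance.
    for (size_t l = 0; l < m_vLenses.size(); ++l)
    {
        m_vLenses[l]->OnRender();
        for (size_t p = 0; p < m_vParticleProbes.size(); ++p)
            m_vParticleProbes[p]->OnRender();
    }
}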


4.7. Lense UI Reference

Figure 4.16.: Probe Parameters UI - Lense selected

The lense UI is also displayed by the ParticleProbeController. Only the dynamic part, below the position/dimensions sliders, is changed when a lense is selected. Thus, only this part will be discussed here.

Reference

Turn on/off (checkbox) Allows the lense to be turned on or off.

Fading ranges (sliders) For every axis there are two sliders, controlling the minimal, respectively maximal, distance from the lense's edge along this axis. If a particle's position is at a distance bigger than the maximum, its opacity/size is set to 0. If this distance is smaller than the minimum, its opacity/size is taken from the probe parameters. In-between, an interpolated value is calculated using Hermite interpolation (a sketch of this ramp follows this reference).

Projection (radio buttons) Enables projection onto the specified plane, going through the lense's center.

The projection enables the lense to act as a clip-plane (as included in the old version of the ParticleEngine). The fading ranges also act in this mode. Thus, modifying them will have an effect on how many particles get projected onto the specified plane.

Clip plane presets To further facilitate the setup of a clip-plane lense, these three buttons automatically set the projection, and the position and dimensions of the lense. The dimensions along the two selected axes are maximized, and along the third, the lense is made very thin.
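The Hermite blend mentioned for the fading ranges corresponds to the classic smoothstep ramp; a minimal sketch of the per-axis fade factor (assuming this common variant; the names are illustrative):

// d is the particle's distance from the lense's edge along one axis.
float FadeFactor(float d, float dMin, float dMax)
{
    if (d <= dMin) return 1.0f; // opacity/size taken from probe parameters
    if (d >= dMax) return 0.0f; // fully faded out
    float t = 1.0f - (d - dMin) / (dMax - dMin); // 1 at dMin, 0 at dMax
    return t * t * (3.0f - 2.0f * t);            // cubic Hermite (smoothstep)
}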


5. Transfer Function Editor

This chapter describes the improvements made to the ParticleEngine in the ways of defining a transfer function for its direct volume rendering capability. The transfer function dictates how the integrated ray caster should map values sampled from the field to colors for the frame buffer. This corresponds to the third project goal in chapter 3 'Thesis Goals'.

Currently, the only way to assign colors to the sampled values is to load a shader code fragment from file, which then gets injected directly into the ray caster effect. In the next sections, a new user interface component is introduced, with the primary purpose of the precise, yet interactive setup of a transfer function. This greatly enhances the usability and the user experience, as it enables the user to try out many combinations, keeping track of the results, as the feedback is seen immediately.

Figure 5.1.: Direct volume rendering by the ParticleEngine, visualizing the vector length as a 4th component.


5.1. The Raycaster

The volume rendering capabilities of the ParticleEngine are provided by the Raycaster object. It is responsible for generating the images from the data loaded in the 4th component. The Raycaster supports three different modes of rendering - direct volume rendering (DVR), isosurfaces and Clearview (see section 2.1.3). Only DVR is discussed in this chapter, as only it needs a transfer function.

Figure 5.2.: The Raycaster class, represented in the ParticleTracerBase.

The Raycaster is a member of the ParticleTracerBase. The m_bDoRaycasting flag controls whether the Raycaster is on or off, that is, whether it renders an image or not. The default is off.

When the vector field data is initially loaded, it is transferred to the Raycaster by means of the m_Volume variable. This variable is responsible for interpreting the vector field data as a volume texture, to be used for volume rendering.

When it is activated, the Raycaster creates images by casting rays through the field domain and sampling the volume texture (m_Volume) along these rays at discrete steps. The sampled values are the 4th component values of the vector field. In DVR mode, these sampled values are then mapped to colors by the transfer function, and blended together to produce the final color for the frame buffer.

The actual mapping of values to color happens in the fragment shader of the Raycaster. The transfer function is internally represented by a 1-D texture. To get the particular color, the sampled value is first adjusted to the range from 0 to 1, and then used to sample the texture. Each element of this texture has the DirectX DXGI_FORMAT_R32G32B32A32_FLOAT format. Thus, the returned value corresponds directly to the RGBA color components. The transfer function is maintained by the TFEditor class (see section 5.2.2), and is fed to the Raycaster as an ID3D10ShaderResourceView pointer by the SetTransferFunctionSRV() method.
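For illustration, a 1-D texture in this format could be created as follows (a sketch; TF_TEXTURE_SIZE, the variable names, and the device pointer are placeholders, not the TFEditor's actual code):

D3D10_TEXTURE1D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width          = TF_TEXTURE_SIZE;                // e.g. 256 texels
desc.MipLevels      = 1;
desc.ArraySize      = 1;
desc.Format         = DXGI_FORMAT_R32G32B32A32_FLOAT; // one RGBA color per texel
desc.Usage          = D3D10_USAGE_DYNAMIC;            // rewritten on user edits
desc.BindFlags      = D3D10_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;

ID3D10Texture1D* pTexture = NULL;
pd3dDevice->CreateTexture1D(&desc, NULL, &pTexture);

ID3D10ShaderResourceView* pSRV = NULL;
pd3dDevice->CreateShaderResourceView(pTexture, NULL, &pSRV); // handed to the Raycaster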


To save time and resources, the cast rays are only sampled within the vector field domain. To determine the entry and exit positions, the Raycaster runs invisible rendering passes, used to render the bounding box of the domain (m_Box), and saves the entry and exit depth values of each pixel covering the domain.

Probe Volume Rendering

An additional feature of the Raycaster, introduced with the new probe concept, is the probe volume rendering. The probe volume rendering casts rays only through the region of the field domain covered by the selected probe. This greatly speeds the process up, and is very useful for large data sets or slower machines. Also, this feature can be used to concentrate on particular regions within the field, and perhaps define different transfer functions for different regions. Examples can be seen in section 'Visualizations' (Figure 9.7).

5.2. The Editor

The Transfer Function Editor is the user interface component which is responsible for displaying the currently defined transfer function and providing a way to adjust it. It is a stand-alone class, hosted within the ParticleTracerBase class. After it handles a user action, the Transfer Function Editor updates the texture containing the transfer function. The updated texture is then set to the Raycaster.

5.2.1. The Transfer Function Control

The transfer function control is a user interface control. It is maintained by the Transfer Function Editor, and it is incorporated within its user interface (UI). It displays a piecewise linear function for each color channel. The Y coordinates of the points represent the color channel's value at this position in the texture. The value in-between points is a linear interpolation of the two neighboring points' Y coordinates. Each of the four color channels can be divided by the user into as many linear parts as needed for a good enough approximation of the mapping.

Figure 5.3.: The transfer function control element, displaying a transfer function

This control element handles all mouse messages when the mouse is within its area. Different user actions are supported. For example, the user can drag a control point to change its position. A double click will add or remove a point, depending on the cursor's position. For an extensive reference, see 5.4.3.


5.2.2. The TFEditor Class

The TFEditor is responsible for wrapping the transfer function control inside a UI, and providing extended transfer function editing capabilities. Also, it defines a method for making the resulting transfer function available as a texture. This texture is then supplied to the Raycaster.

Figure 5.4.: The TFEditor class

Figure 5.4 depicts a simplified diagram of the TFEditor class. First, there are the four LineStripe members. They store the user-defined control points for each color channel of the transfer function. When the user changes the control point configuration, the method updateTexture() is called to update the m_texTransFunc texture. It uses the LineStripe class's methods to reconstruct the values of the transfer function in the space between two points with linear interpolation of their Y coordinates.
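The reconstruction between two control points can be sketched like this (illustrative types; not the actual LineStripe internals):

#include <vector>

struct ControlPoint { float x, y; }; // x in [0,1], y = channel value

// Evaluate one channel's piecewise linear curve at position x; the points
// are assumed sorted by x, with fixed endpoints at x = 0 and x = 1.
float EvaluateChannel(const std::vector<ControlPoint>& pts, float x)
{
    for (size_t i = 0; i + 1 < pts.size(); ++i)
    {
        if (x <= pts[i + 1].x)
        {
            float t = (x - pts[i].x) / (pts[i + 1].x - pts[i].x);
            return pts[i].y + t * (pts[i + 1].y - pts[i].y); // linear blend
        }
    }
    return pts.back().y;
}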

The drawTransferFunction() method handles the actual drawing of the transfer function control. The TFEditor uses this method to insert it in its UI (m_transferFuncEditorUI). This UI also contains additional interface elements for facilitating the editing of the transfer function.

The method getTransferFunctionResource() returns the produced texture, which is then set to the Raycaster's m_pTransferFuncSRV variable.


5.3. The RaycastController Class

The RaycastController class has similar functionality to the ParticleProbeController, described in section 4.3.2. It generalizes and simplifies the management of the Raycaster options and the Transfer Function Editor.

Currently, the ParticleEngine exposes the Raycaster options directly on the main UI. However, adding all the new options there would have caused too much clutter. To prevent that, the RaycastController is introduced. It organizes all the UIs responsible for the control over the volume rendering in one place. This requires not only the DVR options, but also all other options to be moved to the new UI.

Figure 5.5.: RaycastController organizes all volume rendering options in a new UI

Figure 5.5 depicts the new architecture involving the Raycaster and the Transfer Function Editor. The RaycastController exists parallel to the Raycaster as a member of the ParticleTracerBase. Unlike the probe management structure, where the probes expose their options by means of another class, the Raycaster instance is controlled through its get/set accessor methods, which are available for all the Raycaster parameter variables (m_eRendermode, m_fBorder, etc.).

To enable control over it, the Raycaster instance must first be registered to the controller. This is done by the RegisterRaycaster() method. The registration supplies a pointer to the Raycaster (m_pRaycaster), which is then used by the UI to control its parameters.

The Transfer Function Editor (m_pTFEditor) is hosted by the RaycastController as a private member. This allows the controller to integrate its UI with the other volume rendering options, building a system of UIs. There are three UIs maintained by the controller - the Raycast Controller UI (m_RaycastControllerUI), the Raycaster UI (m_RaycasterUI) and the Transfer Function Editor UI. The RaycastControllerUI is the main UI in this system. It provides the means to select between the two other sub-UIs. The RaycasterUI contains all the controls for the Raycaster parameters.

Save and load functionality is also exposed by the RaycastController. The two methods ExportRaycastControllerSettings() and ImportRaycastControllerSettings() are linked to the Raycast Controller UI. Currently, these functions only support export and import of the transfer function.


5.4. User Interface

As mentioned in the previous section, the RaycastController builds a system of three UIs. The Raycast Controller UI is the main one, providing the means to select which of the other two sub-UIs is displayed. The main UI can't be displayed alone - one of the sub-UIs is always visible just below it (see figure 5.6).

Figure 5.6.: Raycast Controller UI, and the Transfer Function Editor UI displayed below it.

The Raycast Controller UIs are hidden by default. The main ParticleEngine interface's checkbox 'Show UI' is used to turn them on or off (see section 4.4.1).

Each of the three UIs will be discussed in detail in the next sections.

5.4.1. Raycast Controller UI Reference

The Raycast Controller UI is rendered in the bottom middle of the application window, just above the currently visible sub-UI.

Figure 5.7.: Raycast Controller UI

Reference

Raycaster (radio button) Selects the Raycaster sub-UI.

Transfer function (radio button) Selects the Transfer Function Editor UI.

Save (button) Allows the user to save the current transfer function to a file.

Load (button) Loads a transfer function from a file into the Editor and updates the display.

5.4.2. Raycaster UI Reference

The Raycaster UI is displayed just below the Raycast Controller UI, if the 'Raycaster' radio button is selected.


Figure 5.8.: The Raycaster UI

Reference

Mode (drop-down) Selects the render mode for the volume renderer. There are three render modes available - ISO (Isosurfaces), DVR (Direct Volume Rendering) and Clearview. For more information refer to section 2.1.3.

Step size (slider) Controls the quality of the image produced by the volume renderer. A smaller step corresponds to higher quality, and a lower display update rate (i.e. the performance of the ParticleEngine when volume rendering is activated).

Internally, this controls the length of the sample step along a ray shot by the Raycaster. A longer step means fewer samples, and consequently higher performance and lower image quality.

Load Fragment (button) Allows the user to load a custom shader code fragment. This is the old way to set up a transfer function, and it is deprecated.

Custom (slider) The custom slider controls a shader variable which is normally unused. It is meant to be incorporated in custom fragment code, to control some arbitrary option.

Controls for ISO render mode

ISO Value 1 (slider) Sets the ISO value for the isosurface. The volume renderer builds an isosurface of all values higher than this setting.

Controls for DVR render mode

TF Scale (slider) Scales the transfer function range. Labels on the Transfer Function Editor UI show the actual range covered by the transfer function.

TF Offset (slider) Offsets the transfer function range. Labels on the Transfer Function Editor UI show the actual range covered by the transfer function.


Controls for Clearview render mode

ISO Value 2 (slider) Sets the ISO value for the second isosurface.

Context scale (slider) Controls the blending of the two isosurfaces in the Clearview lense. Increasing this value makes the second isosurface more visible; decreasing it makes the first isosurface more visible.

Size (slider) Controls the size of the Clearview lense.

Border (slider) Controls the border width of the Clearview lense.

Edge (slider) Sharpens the contours of the isosurfaces.

5.4.3. Transfer Function Editor UI Reference

The Transfer Function Editor UI is displayed just below the Raycast Controller UI, if the 'Transfer function' radio button is selected. The UI wraps the transfer function control, whose editing functions are discussed below.

Figure 5.9.: The Transfer Function Editor UI

Reference

R (button) Selects the red channel in the transfer function control, and brings it on top of the others.

G (button) Selects the green channel in the transfer function control, and brings it on top of the others.

B (button) Selects the blue channel in the transfer function control, and brings it on top of the others.


Alpha (button) Selects the alpha channel in the transfer function control, and brings it on top of the others.

These four buttons are made for easier access to the channels. Selecting channels with the mouse is also possible (see 'Transfer function control' below).

Alpha Scale (slider) Scales the alpha channel's Y axis down.
This is needed to be able to fine-tune the alpha component. As the alpha components of the sampled values along a ray are accumulated, using many samples will require a very small alpha value to be assigned to each sample.

Reset (button) Resets the channels in the transfer function control to their default configuration. In this state, every channel has two points, in the bottom left and top right corners of the transfer function control area. These two points cannot be removed.

Axis labels just below the transfer function control area (labels) Show the range of 4th component values covered by the transfer function. This range is set up by the 'TF Scale' and 'TF Offset' sliders in the Raycaster UI.

Cursor (labels) Show the current position of the cursor within the transfer function control area. The X coordinate respects the range of the transfer function, shown by the labels described above. The Y coordinate respects the alpha slider value for the alpha channel. The other channels' Y coordinates lie in the [0, 1] range, and they are assumed to be known.

Selected point coordinates (no name on the UI) (textboxes) These are the two text boxes next to the 'Cursor' labels. They show the exact position of the currently selected control point in the transfer function control (the white point on figure 5.9). Like the cursor labels, they respect the range of the transfer function. Additionally, they allow the user to input exact coordinates of the selected point manually.

Transfer function control

The area which encloses the transfer function is referred to as the transfer function control. It provides the user with additional editing capabilities.

Select a channel for editing Click anywhere on the channel's line or control points. This will bring the channel on top of the others and allow further manipulations.

Select and move a control point Click on a control point to select it, and then drag the mouse to move it around. The exact coordinates are displayed below in the respective Transfer Function Editor UI controls.


Add and remove control points Double-clicking in any empty space within the transfer function control's area will add a new control point to the currently selected channel. Double-clicking on an existing point will remove it.


6. Fourth-component Recalculation

As the power of the built-in volume renderer becomes more accessible with the introduction of a user-definable transfer function, the 4th component visualization gains even more relevance. The fourth-component recalculation feature creates the possibility of experimenting with the 4th component at run-time. This, in combination with the Transfer Function Editor, presents new ways to explore flow field characteristics.

This chapter presents the upgrades made to the ParticleEngine which enable the 4th component to be recalculated at run-time. Three distinct recalculation techniques are introduced:

• Calculating entirely new values for the 4th component, using the underlying vector information. All field characteristics which can be derived for every cell by some function of the vector at the cell's position and its neighbors can be calculated using this technique.

• Updating every 4th component value by means of a mathematical function accepting one argument. This technique replaces each value with the result of the specified function, given this value as its argument.

• Updating every 4th component value by means of a mathematical function accepting two arguments. The second argument of the function for each grid position is taken from the 4th components of another vector field, loaded additionally from a file.

Furthermore, arbitrary concatenations of those three techniques are possible. This can be very useful in the case when a function of two already present field characteristics yields a new interesting characteristic.


6.1. ParticleTracer3D class upgraded

The new feature is directly integrated into the ParticleTracer3D class. The recalculation system consists of two parts - the method which actually updates the 3-D texture used to store the vector field, and the effect file defining all possible functions on the 4th component.

Figure 6.1.: ParticleTracer3D is upgraded to support the 4th component recalculation.

The ParticleTracer3D class manages the 3-D texture which stores the vector field (m_pTexture3D). For updating the 4th component of this texture, the method Calc4thComp_VolumeTexture() was developed.

This method starts by creating a render target (m_p4thCompRT, cf. listing 6.1) and binding it to the GPU pipeline. The 3-D texture is also bound, via the effect variable m_pC4CE_VolumeTex. The method processes the volume a single slice at a time. Which slice is currently to be processed is given by m_pC4CE_iSliceDepth. A pixel shader for the previously chosen function from the m_pCalc4thComponentEffect is applied to the current slice, rendering the new values for the slice onto the render target.

After the slice has been processed, the render target contains all the cells from this slice with the updated 4th component values. Then, a staging texture with the usage flag set to D3D10_USAGE_STAGING and CPU access set to D3D10_CPU_ACCESS_READ is used to access these values, and subsequently update the respective slice of the 3-D texture.

Before executing the Calc4thComp_VolumeTexture() method, the user must choose which function should be applied. This is done through a UI hosted in the ParticleTracerBase, but managed by the ParticleTracer3D, to display the 4th component recalculation functionality (m_TracerUI).

Normalizing the 4th component

Normalization of the 4th component is necessary to ensure that all the values stay in the range of the user interface controls presented in previous chapters. The controls for the 4th component aware operations, and for volume rendering, are only able to handle values in the range -1 to 1.

The normalization is done by the Norm4thComp_VolumeTexture() method. It is realized as a special case of the Calc4thComp_VolumeTexture() method, so it functions the same way.

This method also saves the factor used for normalization. This is needed to denormalize the field before running other functions on it, as some functions, like the logarithm, would otherwise produce incorrect results.
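The pair can be sketched as follows (the exact factor is an assumption - e.g. the maximum absolute 4th component value - since the thesis does not spell it out here):

// The factor is remembered by the tracer so the field can be restored to
// its original range before functions such as the logarithm are applied.
float Normalize(float v, float factor)   { return v / factor; } // into [-1,1]
float Denormalize(float v, float factor) { return v * factor; } // original range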

Combining 4th components

Combining the 4th components of the current field and an external field, loaded on demand, is performed by the Combine4thCompWithExternal_VolumeTexture() method. This method is realized the same way as Calc4thComp_VolumeTexture(), differing only in minor aspects - the method loads a new field from a file and binds it to the pipeline as the variable m_pC4CE_VolumeTex_compose.

6.2. Updating the Volume Texture

This section goes into more detail on how Calc4thComp_VolumeTexture() functions. It shows the most important code snippets to outline its algorithm.

First, the render target is created with the X and Y dimensions of the vector field. A staging texture to read the data from the render target is also set up.

Listing 6.1: Calc4thComp_VolumeTexture() method

m_p4thCompRT->Bind(false, false);
pRenderTargetTex = m_p4thCompRT->GetTex();

D3D10_TEXTURE2D_DESC desc;
pRenderTargetTex->GetDesc(&desc);
desc.Usage = D3D10_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;

m_pd3dDevice->CreateTexture2D( &desc, NULL, &pStagingTexture );

For each mip level and depth level (corresponding to a slice), the chosen pass is executed.


Listing 6.2: Calc4thComp_VolumeTexture() method

float ClearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_p4thCompRT->Clear(m_pd3dDevice, ClearColor);

m_pC4CE_iMipLevel->SetInt( iMIPLevel );
m_pC4CE_iSliceDepth->SetInt( iDepthLevel );

m_pRenderSliceTq->GetPassByName( sPassName.c_str() )->Apply(0);
m_pd3dDevice->Draw( 3, 0 );

Then, the render target is copied to the staging texture, and subsequently mapped.

Listing 6.3: Calc4thComp_VolumeTexture() method

m_pd3dDevice->CopyResource(pStagingTexture, pRenderTargetTex);

D3D10_MAPPED_TEXTURE2D mappedTex;
V_RETURN( pStagingTexture->Map( D3D10CalcSubresource(0,0,1), D3D10_MAP_READ,
                                NULL, &mappedTex ) );

The mapped data is then uploaded to the 3-D texture.

Listing 6.4: Calc4thComp_VolumeTexture() method

D3D10_BOX box;
ZeroMemory(&box, sizeof(D3D10_BOX));
box.left   = 0;
box.right  = iSize.x;
box.top    = 0;
box.bottom = iSize.y;
box.front  = iDepthLevel;
box.back   = iDepthLevel+1;

m_pd3dDevice->UpdateSubresource(m_pTexture3D,
    D3D10CalcSubresource(iMIPLevel,0,1), &box,
    mappedTex.pData, mappedTex.RowPitch, 0);


6.3. Recalculation Fragment Shaders

Here, the internal structure of the m_pCalc4thComponentEffect will be shown. This effect contains all the functions from which the user can choose. The chosen function is then translated into an effect technique pass name, and supplied to the Calc4thComp_VolumeTexture() method. It then runs the appropriate pixel shader on the 3-D texture values.

The vertex shader generates a triangle covering the whole screen. This causes the fragment shader to be called for every pixel of the render target.

Listing 6.5: The vertex shader used by the calculation of the new 4th components

float4 VS_FullScreenTri( uint id : SV_VertexID ) : SV_POSITION
{
    return float4( ((id << 1) & 2) *  2.0f - 1.0f,  // x (-1, 3,-1)
                   ( id        & 2) * -2.0f + 1.0f, // y ( 1, 1,-3)
                   0.0f,
                   1.0f );
}

Then, the chosen fragment shader is run on each pixel, producing the new 4th components.

Listing 6.6: The fragment shader which saves the vector length at a particular position in the 4th component

float4 PS_Calc4thComp_Length( float4 pos : SV_POSITION ) : SV_Target
{
    float4 val = g_txVolume.Load( int4(pos.xy, g_iSliceDepth, g_iMipLevel) );
    return float4( val.xyz, length(val.xyz) );
}

It first loads the current values from the 3-D texture for the particular slice and mip level. Then, it returns the first three components unchanged, and the fourth equal to the length of the vector represented by them.


6.4. User Interface Reference

The UI providing the 4th component recalculation functionality is found at the top middle of the application window.

Figure 6.2.: The 4th Component Recalculation UI.

Reference

Select submenu (drop-down menu) Selects a sub-menu of the Particle 3D Tracer Settings UI. However, at the moment only the fourth component functionality is supported.

Select a recalculation function (drop-down menu) This is the first of two drop-downs, allowing the user to select from a number of functions. These functions are one-argument mathematical functions, or vector data dependent functions.

Calculate (button) Initiates the selected recalculation function. After the process is finished, the new 4th component is available for visualization.

Select a combination function (drop-down menu) This drop-down selects a combination function. All combination functions need an external vector field to be loaded. They use the current 4th components as the first argument of the selected function, and the external 4th components as its second parameter.

6.4.1. Supported Functions

Recalculation functions

• Normalize. Normalizes the 4th component to a range of [-1,1].
• Denormalize. Denormalizes the 4th component.
• Log (Natural). 4th component = ln(4th component).
• Log (base-2). 4th component = log2(4th component).
• Log (base-10). 4th component = log10(4th component).
• Pow (Natural). 4th component = e^(4th component).
• Pow (base-2). 4th component = 2^(4th component).
• Pow (base-10). 4th component = 10^(4th component).
• Square Root. 4th component = sqrt(4th component).
• Reciprocal Square Root. 4th component = 1/sqrt(4th component).


• Square. 4th component = (4th component)^2.
• Reciprocal Square. 4th component = 1/(4th component)^2.
• Vector length. The length of the vector at a particular cell is encoded in the cell's 4th component.
• Divergence. The divergence of the vector field $\vec{v} = v_x \vec{e}_x + v_y \vec{e}_y + v_z \vec{e}_z$, where the $\vec{e}_{axis}$ are the basis vectors, at a particular point is a measure of that field's tendency to converge on or repel from that point. It is calculated by the following formula

$$\operatorname{div} \vec{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} \tag{6.1}$$

• Curl. The curl of the vector field $\vec{v} = v_x \vec{e}_x + v_y \vec{e}_y + v_z \vec{e}_z$, where the $\vec{e}_{axis}$ are the basis vectors, represents the circulation density of the fluid, represented by this vector field, at a point. It is calculated by the following formula

$$\operatorname{curl} \vec{v} = \left(\frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z}\right)\vec{e}_x + \left(\frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x}\right)\vec{e}_y + \left(\frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}\right)\vec{e}_z \tag{6.2}$$

As the curl is a vector, only its magnitude is written to the 4th component.

Combination functions The 4th component of the second field, loaded on-demand by the combination functionality, is referred to in the following as the external 4th component.

• Replace. Replaces the current 4th component with the external one.
• Add. 4th component = 4th component + external 4th component.
• Multiply. 4th component = 4th component × external 4th component.
• Subtract. 4th component = 4th component − external 4th component.
• Divide. 4th component = 4th component / external 4th component.


7. Physical Units Display

7.1. Introduction

This chapter focuses on making the ParticleEngine more robust in terms of how it displays positions and dimensions. Without plausible units and coordinates assigned to the origin and axes of the vector field domain, the created visualizations' use as a part of a scientific research project is very limited, as the data has no real meaning in physical terms.

To solve this problem, a new UI element is introduced. This UI element gives the possibility to set the actual position of the origin, and the dimensions of the vector field domain, in user-specified units. Also, projecting these numbers directly onto the axes of the domain has been added as a feature.

The structures responsible for multi-probe configurations are also adjusted to work with physical units. This enables the precise setup of multi-probe configurations according to the units specified. The particle export functionality, described in chapter 8, is also made aware of physical units, incorporating them into the export file.

Figure 7.1.: Vector field domain, rendered with its physical coordinates and dimensions projected onto its bounding box.


7.2. Implementation

During the system upgrade to support physical units, the ParticleTracerBase, the ParticleProbeController and the ParticleProbeContainer were adjusted.

Figure 7.2.: Upgrade for physical units

A new UI, m_TracerVolumeInfoUI, has been added to the ParticleTracerBase. This UI displays all the controls for setting up the origin position and domain dimensions, and for turning on or off the projection of labels onto the bounding box of the domain. Additional variables to hold these values are not needed, as the UI controls hold them themselves. The other classes get the needed information by querying the current control values.

The RenderVolumeInfo() method is responsible for creating the labels for the specified units, and projecting them to the screen. This method is discussed in detail in the next section.

The ParticleProbeController is made aware of the set units by means of its SetDimensionalUnits() method. It is called every time the user adjusts the values in the Info UI. SetDimensionalUnits() saves this information internally, and uses it to display the probes' positions and dimensions accordingly.

In the ParticleProbeContainer, the method for exporting particles, ExportProbesParticles(), utilizes the GetDimensionalUnitsLabel(), GetDimensionalUnitsScale(), and GetOriginDimensionalUnitsPos() methods of its ParticleProbeController reference to output particles' positions also in the set physical units. For an in-depth explanation of the export functionality, see chapter 8.
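The conversion implied by these methods can be sketched as follows (an illustrative helper; the real code queries the UI controls instead of taking parameters):

// Maps a position from the domain's [0,1] range to physical units:
// physical = origin + normalized position * (cells per axis * cell size).
D3DXVECTOR3 ToPhysicalUnits(const D3DXVECTOR3& posInDomain, // in [0,1]^3
                            const D3DXVECTOR3& origin,      // Origin X/Y/Z
                            const D3DXVECTOR3& cells,       // grid cells per axis
                            float cellSize)                 // 'Scale' textbox
{
    return D3DXVECTOR3(origin.x + posInDomain.x * cells.x * cellSize,
                       origin.y + posInDomain.y * cells.y * cellSize,
                       origin.z + posInDomain.z * cells.z * cellSize);
}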


7.3. Projecting the Physical Units on the Screen

The projection of the physical units onto the vector field domain's axes enhances the perception of scale in the scene. Furthermore, it aids orientation by labeling the origin and the axes with meaningful names.

In the following, code snippets from the method RenderVolumeInfo() are shown to illustrate how the label creation and subsequent projection is handled.

The displaying of the axis names and the origin is done by supplying the function ProjectPointAndRenderText() with the world-view-projection matrix, then the position of the label in object space, and then the text which should be rendered. The domain coordinates in object space always extend from 0 to 1, making it very easy to place the labels.

Listing 7.1: Projecting the axes names and the origin labels

ProjectPointAndRenderText( m_mWorldViewProj, D3DXVECTOR4(0.0f, 0.0f, 0.0f, 1.0f), L"Origin" );
ProjectPointAndRenderText( m_mWorldViewProj, D3DXVECTOR4(1.0f, 0.0f, 0.0f, 1.0f), L"x" );
ProjectPointAndRenderText( m_mWorldViewProj, D3DXVECTOR4(0.0f, 1.0f, 0.0f, 1.0f), L"y" );
ProjectPointAndRenderText( m_mWorldViewProj, D3DXVECTOR4(0.0f, 0.0f, 1.0f, 1.0f), L"z" );

The function ProjectPointAndRenderText(), used to do the actual projection, is shown below in full.

Listing 7.2: Projecting the axes names and the origin labels

void ProjectPointAndRenderText( D3DXMATRIX mWorldViewProj, D3DXVECTOR4 vec,
                                const WCHAR* strMsg )
{
    UINT h = DXUTGetDXGIBackBufferSurfaceDesc()->Height;
    UINT w = DXUTGetDXGIBackBufferSurfaceDesc()->Width;

    D3DXVECTOR4 projected = D3DXVECTOR4(0.0f, 0.0f, 0.0f, 0.0f);
    // project
    D3DXVec4Transform( &projected, &vec, &mWorldViewProj );
    projected.x /= projected.w;
    projected.y /= projected.w;
    projected.z /= projected.w;

    if( projected.x >= -1.0f && projected.x <= 1.0f &&
        projected.y >= -1.0f && projected.y <= 1.0f &&
        projected.z >=  0.0f && projected.z <= 1.0f )
    {
        // screen space
        D3DXVECTOR2 ssv = D3DXVECTOR2( ((projected.x + 1.0f)/2.0f) * w + 2,
                                       (1-(projected.y + 1.0f)/2.0f) * h + 2 );

        g_pTxtHelper->Begin();
        g_pTxtHelper->SetInsertionPos( ssv.x, ssv.y );
        g_pTxtHelper->SetForegroundColor( D3DXCOLOR( 1.0f, 1.0f, 0.0f, 1.0f ) );
        g_pTxtHelper->DrawTextLine( strMsg );
        g_pTxtHelper->End();
    }
}

It first transforms the given vector argument with the world-view-projection matrix. Then, after it extracts the projection space coordinates, a test is done to ensure the projected point is visible. If this test succeeds, the back buffer surface dimensions are used to transform the projection space coordinates to screen space, and render the respective label.


7.4. User Interface Reference

The user interface for setting up the physical units is found on the right side, below the main controls.

Figure 7.3.: Physical Units display UI

Display labels (checkbox) Turns on or off the display of axis labels on the bounding box of the vector field domain.

Origin X (textbox) Specifies the exact X position of the origin of the domain in the units specified below.

Origin Y (textbox) Specifies the exact Y position of the origin of the domain in the units specified below.

Origin Z (textbox) Specifies the exact Z position of the origin of the domain in the units specified below.

Display position (checkbox) Turns on or off the display of the axis endpoints' positions on the bounding box of the vector field domain. This also includes the display of the position of the origin.

Scale (textbox) Gives the dimensions of a grid cell of the grid in which the vector field is loaded. Consequently, the actual size of the domain is its size in number of grid cells, multiplied by the constant supplied in this textbox.

Unit (textbox) Sets a label for the physical units. No restrictions are applied here. However, too long labels will eventually overflow the user interface.

Display dimensions (checkbox) Turns on or off the display of axis dimensions on the bounding box of the vector field domain. These are displayed next to the middle of each axis.

Dimensions: (label) Shows the actual dimensions of the vector field domain according to the scale and unit set above.


8. Exporting Particles

In this chapter, the new capability of exporting snapshots of particles from the current experiment is discussed. Introducing this feature into the new version of the ParticleEngine makes it much more versatile and allows for interfacing it with other tools for further analysis or more sophisticated visualization of the particles.

The export functionality also works with respect to the defined physical units of the scene (see chapter 7).

8.1. Implementation

The class diagram below displays all the methods involved in the export process.

Figure 8.1.: Multi-probe configuration classes extended to support particle exporting.

Every probe implements an ExportParticles() method. When called, this method streams out all particles' start and current positions, additionally transformed to domain coordinates and adjusted by the specified physical units, to a wostringstream variable. The actual physical unit numbers are taken from the GetDimensionalUnitsLabel(), GetDimensionalUnitsScale(), and GetOriginDimensionalUnitsPos() methods of the controller. As the probe has no direct access to the controller, this functionality is implemented in the ExportProbesParticles() method of the probe container.


To be able to stream multiple probes' exports into the same file, the file header must be defined in the ExportProbesParticles() method, and not by each probe. Additional header information, such as the dimensions of the field domain and the physical units, is also written to the header of the file by this method.

The user can trigger the export through the main application interface's 'Export' button (4.4.1). This invokes the ExportProbesParticles() method described above. During the export, the method checks if a probe is currently selected in the controller. If so, only particles from this probe are exported. Otherwise, all particles from all probes in the current scene are exported.

8.2. ParticleProbe’s ExportParticles Method

This section explains the code used by a probe to stream out its particles. The actual export is realized as a special kind of advection: the particles from the source ping-pong buffer are streamed through a geometry shader specially designed for this purpose.

First, the GPU pipeline is set up to use the Stream Output stage. The destination ping-pong buffer m_pParticlesContainerNew is set as the target. Then, pass 2 of the advection effect is invoked. This pass applies to each particle the specially designed geometry shader outlined in the next section.

During the stream out, an ID3D10Query object is employed to track how many particles are actually output by the Stream Output stage. This is required in order to export only particles currently in advection, and not those which have not yet been born.
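The creation of the query object itself is not shown in the listings. A minimal sketch of how such a stream-output statistics query is created in Direct3D 10 follows; the member names mirror those used in the listings.

// Create a query that counts the primitives written by the Stream Output stage.
D3D10_QUERY_DESC queryDesc;
queryDesc.Query     = D3D10_QUERY_SO_STATISTICS;
queryDesc.MiscFlags = 0;

ID3D10Query* pQuery = NULL;
m_pd3dDevice->CreateQuery( &queryDesc, &pQuery );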

Listing 8.1: ExportParticles method - initialization and drawing

m_pd3dDevice->IASetInputLayout( m_pParticlesContainerVL );

UINT stride[1] = { sizeof(PARTICLE_VERTEX) };
UINT offset[1] = { 0 };
m_pd3dDevice->IASetVertexBuffers( 0, 1, &m_pParticlesContainer, stride, offset );
m_pd3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_POINTLIST );

m_pd3dDevice->SOSetTargets( 1, &m_pParticlesContainerNew, offset );

m_pParticlesAdvectionTq->GetPassByIndex( 2 )->Apply(0); // 2 = export pass

pQuery->Begin(); // run the query to receive SO statistics
if( m_bIsFirstAdvectionStep )
    m_pd3dDevice->Draw( m_ppOptions.GetParticlesCount(), 0 );
else
    m_pd3dDevice->DrawAuto();

pQuery->End();


After the drawing is complete, the ID3D10Query is consulted to obtain the number of primitives written.

Listing 8.2: ExportParticles method - querying the SO stage

D3D10_QUERY_DATA_SO_STATISTICS pQueryData;
ZeroMemory( &pQueryData, sizeof(D3D10_QUERY_DATA_SO_STATISTICS) );

HRESULT hr = S_FALSE;
while( hr == S_FALSE ) // wait, because the query is asynchronous
    hr = pQuery->GetData( &pQueryData, pQuery->GetDataSize(), 0 );

By this time, all particles are written to the m_pParticlesContainerNew buffer. Its data is then copied to a staging buffer, mapped, and read on the CPU.

Listing 8.3: ExportParticles method - staging buffer and mapping of the target buffer

D3D10_BUFFER_DESC vbDesc;
ZeroMemory( &vbDesc, sizeof(D3D10_BUFFER_DESC) );
vbDesc.ByteWidth      = MAX_VB_SIZE * sizeof(PARTICLE_VERTEX);
vbDesc.Usage          = D3D10_USAGE_STAGING;
vbDesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;

ID3D10Buffer* pExportedParticles = NULL;
if( FAILED(m_pd3dDevice->CreateBuffer( &vbDesc, NULL, &pExportedParticles )) )
    return ss;

m_pd3dDevice->CopyResource( pExportedParticles, m_pParticlesContainerNew );

The data is then streamed out to the wostringstream object and returned to the ParticleContainer.
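The mapping and read-back itself is only summarized above. The following is a minimal sketch of this step, assuming the PARTICLE_VERTEX members pos and dir described in section 8.2.1; the exact columns written per particle in the actual code follow the header of Listing 8.5.

// Map the staging buffer for CPU read access and write one CSV line per particle.
PARTICLE_VERTEX* pData = NULL;
if( SUCCEEDED( pExportedParticles->Map( D3D10_MAP_READ, 0, (void**)&pData ) ) )
{
    for( UINT64 i = 0; i < pQueryData.NumPrimitivesWritten; ++i )
    {
        // pos holds the object-space position; dir was overwritten by the
        // export geometry shader with the domain-space position.
        ss << pData[i].pos.x << L"," << pData[i].pos.y << L"," << pData[i].pos.z << L","
           << pData[i].dir.x << L"," << pData[i].dir.y << L"," << pData[i].dir.z << L"\n";
    }
    pExportedParticles->Unmap();
}
pExportedParticles->Release();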

8.2.1. Geometry Shader for Export

A simple geometry shader is responsible for preparing the data for export. Two types of coordinates must be provided: the object space coordinates, ranging from 0 to 1, and the world (domain) space coordinates.

Listing 8.4: GSExportParticles geometry shader

[maxvertexcount(1)]
void GSExportParticles( point VSParticleIn input[1],
                        inout PointStream<VSParticleIn> ParticleOutputStream )
{
    float4 position_in_domain_space = mul( float4(input[0].pos, 1.0), g_mDomain );

    input[0].dir = position_in_domain_space.xyz;

    ParticleOutputStream.Append( input[0] );
}

The world space coordinates are calculated by the first line, using the previously bound domain matrix. These coordinates are saved in the dir member of the particle structure (the direction is not needed at this point). The pos member contains the current position, and the start member (not shown) holds the coordinates of the start position in the texture containing the particles' start parameters. This texture is also present on the CPU and can be indexed there.

8.3. Multi-probe Export

The method ExportParticles() creates the final export file. This is a text file, formatted as comma-separated values.

The method first generates the header.

Listing 8.5: ExportParticles() method - header

wfs << L"Domain dimensions: " << m_pPPC->GetDomainSize().x << " "
    << m_pPPC->GetDomainSize().y << " " << m_pPPC->GetDomainSize().z << "\n";
wfs << L"Domain dimensions in units:"
    << " " << m_pPPC->GetDomainSize().x * m_pPPC->GetDimensionalUnitsScale()
    << " " << m_pPPC->GetDomainSize().y * m_pPPC->GetDimensionalUnitsScale()
    << " " << m_pPPC->GetDomainSize().z * m_pPPC->GetDimensionalUnitsScale() << "\n";
wfs << L"UNIT SCALE: " << m_pPPC->GetDimensionalUnitsScale() << "\n";
wfs << L"UNIT NAME: " << m_pPPC->GetDimensionalUnitsLabel() << "\n";
// table header
wfs << L"\"ID\",\"START_X\",\"START_Y\",\"START_Z\",";
wfs << L"\"CURR_X\",\"CURR_Y\",\"CURR_Z\",";
wfs << L"\"START_X_DOMAIN\",\"START_Y_DOMAIN\",\"START_Z_DOMAIN\",";
wfs << L"\"CURR_X_DOMAIN\",\"CURR_Y_DOMAIN\",\"CURR_Z_DOMAIN\",";
wfs << L"\"START_X_UNITS\",\"START_Y_UNITS\",\"START_Z_UNITS\",";
wfs << L"\"CURR_X_UNITS\",\"CURR_Y_UNITS\",\"CURR_Z_UNITS\"\n";

The header contains the domain dimensions. These are effectively the dimensions of the grid in which the field was loaded. Next come these dimensions in physical units, followed by the unit scale and name. The scale represents the dimensions of a single grid cell; consequently, the dimensions in physical units are equal to the grid dimensions multiplied by the scale. The unit label is output just before the actual column headers.
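As an illustration: for a hypothetical 256³ grid with a scale of 0.5 and the unit label Mpc, the code of Listing 8.5 would produce a header beginning as follows (256 · 0.5 = 128 units per axis):

Domain dimensions: 256 256 256
Domain dimensions in units: 128 128 128
UNIT SCALE: 0.5
UNIT NAME: Mpc
"ID","START_X","START_Y","START_Z",...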

To do the actual export, the vector m_vParticleProbes is traversed.

Listing 8.6: ExportParticles() method - traversing the probes' vector

const void* pvSelectedProbeId = m_pPPC->GetSelectedProbeId();

for( int ind = 0; ind < m_vParticleProbes.size(); ind++ )
{
    if( pvSelectedProbeId == NULL || // NULL means no probe selected = export all
        m_vParticleProbes[ind]->IsItMe( pvSelectedProbeId ) )
    {
        wfs << m_vParticleProbes[ind]->ExportParticles(
                   m_pPPC->GetDimensionalUnitsScale() ).str();
    }
}


For every probe, a check is made whether it is the selected one. The controller method GetSelectedProbeId() returns an ID, or NULL if no probe is selected. This ensures that, when a probe is selected, only its particles get exported.

8.3.1. Example Export

Figure 8.2 shows a screenshot of an export file, opened in Microsoft® Excel®.

Figure 8.2.: Export file created by the ParticleEngine, opened in Microsoft® Excel®


Part III.

Results and Conclusion


9. Visualizations

This chapter presents some of the images generated in experiments with the ParticleEngine's new features. All images were generated with the vector field data set provided by the research groups at MPA and Jerusalem.

9.1. Multi-probe Configurations

Figure 9.1.: Multiple small probes displaying streamlines.


Figure 9.2.: Two-probe configurations, using 4th component aware injection. Below, the lifetime of the particles is set to one to show the modulated injection density.


Figure 9.3.: Multi-probe configurations. Above, using 4th component aware injection. Below, using 4th component aware modulation.


Figure 9.4.: Three-probe configuration, using 4th component aware modulation. The two big probes are displaying sprites (above) and points (below). The third probe is displaying streamlines.


9.2. Direct Volume Rendering and Fourth-component Recalculation

Figure 9.5.: Direct volume renderings of the temperature values, encoded in the 4th component, using different transfer functions.


Figure 9.6.: Direct volume rendering of the curl (above) and divergence (below), calculated with the 4th component recalculation feature.


Figure 9.7.: Direct volume rendering of the vector length (above) and divergence (below), focused within the selected probe's boundaries.


10. Conclusion

This thesis has presented a number of upgrades to the ParticleEngine tool. These upgrades have mainly extended the tool's functionality to meet the needs of the astrophysicists at MPA and Jerusalem, but have also improved the tool's user interface.

First, the base architecture of the application has been reorganized to provide support for multiple sources of particles. In the new version, a single experiment can include an arbitrary number of probes. Not only do the particles of each probe have different entry positions, they can also be of completely different types. If the particles are rendered as point primitives, the blending of different colors can produce interesting visual cues. For example, the image obtained by lowering the opacity hints at the density of particular regions in the field. If the particles are rendered as sprites, the perception of depth, and also of direction, is enhanced. Moreover, the 4th component of the field is now extensively used to control the particle injection density, and to modulate particle opacities and sizes.

The possibility of saving and loading multi-probe configurations greatly facilitates experimenting with them. All the adjustable options of probes and lenses are saved, which allows the exact same configuration to be reused not only in the same experiment, but in others too. The new probe interface allows the user to precisely specify the position and dimensions of every member of the multi-probe configuration.

Future developments could provide means for dynamically adjusting, at run time, the maximum number of particles that can be injected by a probe. As of now, only a limited, even if very large, number, set at compile time, is possible. Another possibly useful feature would be backward advection, or even loop advection over predefined time steps.

The introduced lens concept extends the previously defined functionality for focus+context, importance-driven particle visualization. The new lenses also incorporate the clip-plane functionality from the old implementation. This way, a more consistent and uncluttered user interface is achieved.

There is also no limitation on the number of lenses in an experiment. Here, however, the user must be careful not to overload the GPU, as all particles are rendered once for each lens currently active. Future research could optimize this concept with the help of modern geometry shader programming.

The volume rendering capabilities of the ParticleEngine can now be fully utilized with the Transfer Function Editor. It greatly simplifies the handling of the transfer function needed for direct volume rendering, thus allowing for easy exploration of the 4th component of the vector field. All four color channels are adjustable for achieving high-quality results. Additionally, the save and load functionality greatly increases the usability of the Editor, as complex transfer function configurations can be reused. The range covered by the transfer function can also be controlled precisely with the newly introduced scale and offset sliders. This enables the user to explore the field on a global scale, or in small detail.

To allow the tool to convey important physical information, the possibility has been added for the user to specify the position and dimensions of the field in user-specified units. To facilitate this, a comprehensive UI was created. It is also possible to display the physical units directly on the axes of the domain's bounding box.

All the new functions of the application are aware of these units. The controller of probe configurations uses them to display the exact position and dimensions of a selected probe in the field domain. The probe container uses them in its methods for particle export.

In future versions, it would be useful if these unit settings could be saved and reloaded, either as a separate file, or as part of an extended vector field format.

Not only the user interface was addressed, but also the program interface to other tools. The first step in this process is allowing the user to export the particles living in the current simulation. This supports further analysis of the particle positions in a given probe configuration, at a given time step.

As a future development, an import of particle positions could be considered, as well as loading the exact advection step and step size from a file.

One of the brand new additions to the ParticleEngine is the ability to manipulate the 4th component of the field at run time. The user can now use the provided UI to choose mathematical functions to apply to the 4th component values of the field. Moreover, the 4th components of two or more fields can be combined. In addition to the extended DVR capabilities, this allows for interesting experiments to be performed.

This feature is arbitrarily extensible, as it only requires a new drop-down menu item and a new fragment shader to be added. All mathematical functions supported by the underlying GPU are possible.

In the future, new formats for 4th component data could be developed, enabling the user to select a file containing such data. Extending this data to more components would present even more possibilities to explore.

In this work, only steady vector fields have been considered. One very interesting future development would be to implement a way to load multiple snapshots of a vector field at close time steps, and then advect particles according to the time-evolving 4th component. Here, a special method of interpolation must be developed, for example one dependent on the vectors in the field itself, to reconstruct the information for missing time steps.

This would require an extension to the ParticleTracer3D class, which currently manages the vector field. The other components of the system introduced in this work would remain the same.
