Volume visualization and volume rendering techniques

See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/228557366. Volume visualization and volume rendering techniques. Article, January 2000. Authors include Hanspeter Pfister (Harvard University) and Craig M. Wittenbrink (NVIDIA). This copy was uploaded by Craig M. Wittenbrink on 12 June 2014.



EUROGRAPHICS’2000 Tutorial

Volume Visualization and Volume Rendering Techniques

M. Meißner†, H. Pfister‡, R. Westermann§, and C. M. Wittenbrink¶

Abstract

There is a wide range of devices and scientific simulations generating volumetric data. Visualizing such data, ranging from regular data sets to scattered data, is a challenging task. This course gives an introduction to the volume rendering transport theory and the issues involved, such as interpolation, illumination, classification, and others. Different volume rendering techniques are presented, illustrating their fundamental features and differences as well as their limitations. Furthermore, acceleration techniques are presented, including pure software optimizations as well as approaches utilizing special purpose hardware such as VolumePro and dedicated hardware such as polygon graphics subsystems.

1. Introduction

Volume rendering is a key technology with increasing importance for the visualization of 3D sampled, computed, or modeled datasets. The task is to display volumetric data as a meaningful two-dimensional image which reveals insights to the user. In contrast to conventional computer graphics, where one has to deal with surfaces, volume visualization takes structured or unstructured 3D data which is then rendered into a two-dimensional image. Depending on the structure and type of data, different rendering algorithms can be applied, and a variety of optimization techniques are available. Within these algorithms, several rendering stages can be used to achieve a variety of different visualization results at different cost. These stages might change their order from algorithm to algorithm or might even not be used by certain approaches.

In the following section, we will give a general introduction to volume rendering and the involved issues. Section 3 then presents a scheme to classify different approaches to

† WSI/GRIS, University of Tübingen, Auf der Morgenstelle 10/C9, Germany, e-mail: [email protected]

‡ MERL - A Mitsubishi Electric Research Laboratory, 201 Broadway, Cambridge, MA 02139, USA, e-mail: [email protected]

§ Computer Graphics Group, University of Stuttgart, something road 4711, 74711 Stuttgart, Germany, e-mail: [email protected]

¶ Hewlett-Packard Laboratories, Palo Alto, CA 94304-1126, USA, e-mail: [email protected]

volume rendering into categories. Acceleration techniques to speed up the rendering process are presented in section 4. Sections 3 and 4 are a modified version of tutorial notes from R. Yagel, which we gratefully acknowledge. A side by side comparison of the four most common volume rendering algorithms is given in section 5. Special purpose hardware achieving interactive or real-time frame rates is presented in section 6, while section 7 focuses on applications based on 3D texture mapping. Finally, we present rendering techniques and approaches for volume data not represented on rectilinear cartesian grids but on curvilinear and unstructured grids.

2. Volume rendering

Volume rendering differs from conventional computer graphics in many ways but also shares rendering techniques such as shading or blending. Within this section, we will give a short introduction to the types of data and where they originate from. Furthermore, we present the principle of volume rendering, the different rendering stages, and the issues involved when interpolating data or color.

2.1. Volume data acquisition

Volumetric data can be computed, sampled, or modeled, and there are many different areas where volumetric data is available. Medical imaging is one area where volumetric data is frequently generated. Using different scanning techniques, internals of the human body can be acquired using MRI, CT, PET, or ultrasound. Volume rendering can

© The Eurographics Association 2000.


Meißner et al. / Volume Rendering

be applied to color the usually scalar data and visualize different structures as transparent, semi-transparent, or opaque, and hence can give useful insights. Different applications have evolved within this area, such as cancer detection, visualization of aneurysms, surgical planning, and even real-time monitoring during surgery.

Nondestructive material testing and rapid prototyping are other examples where volumetric data is frequently generated. Here, the structure of an object is of interest, either to verify the quality or to reproduce the object. Industrial CT scanners and ultrasound are mainly used for these applications.

The disadvantage of the above described acquisition devices is the missing color information, which needs to be added during the visualization process, since each acquisition technique generates scalar values representing density (CT), oscillation (MRI), echoes (ultrasound), and others. For educational purposes where destroying the original object is acceptable, one can slice the material and take images of each layer. This reveals color information which so far cannot be captured by other acquisition devices. A well-known example is the visible human project, where this technique has been applied to a male and a female cadaver.

Microscopic analysis is yet another application field of volume rendering. With confocal microscopes, it is possible to get high-resolution optical slices of a microscopic object without having to disturb the specimen.

Geoseismic data is probably one of the sources that generates the largest chunk of data. Usually, at least 1024³ voxels (1 GByte and more) are generated and need to be visualized. The most common application field is oil exploration, where the costs can be tremendously reduced by finding the right location where to drill the hole.

Another large source of volumetric data is physical simulations where fluid dynamics are simulated. This is often done using particles or sample points which move around following physical laws, resulting in unstructured points. These points can either be visualized directly or resampled into a grid structure, possibly sacrificing quality.

Besides all the above mentioned areas, there are many others. For further reading we recommend [61].

2.2. Grid structures

Depending on the source where volumetric data comes from, it might be given as a cartesian rectilinear grid, as a curvilinear grid, or maybe even completely unstructured. While scanning devices mostly generate rectilinear grids (isotropic or anisotropic), physical simulations mostly generate unstructured data. Figure 1 illustrates these different grid types for the 2D case. For the different grid structures, different algorithms can be used to visualize the volumetric data. Within the next sections, we will focus on rectilinear grids before presenting approaches for the other grid types in section 8.

Figure 1: Different grid structures: rectilinear (a), curvilinear (b), and unstructured (c).

2.3. Absorption and emission

In contrast to conventional computer graphics, where objects are represented as surfaces with material properties, volume rendering does not directly deal with surfaces, even though surfaces can be extracted from volumetric data in a pre-processing step.

Each element of the volumetric data (voxel) can emit light as well as absorb light. The emission of light can be quite different depending on the model used. For example, one can implement models where voxels simply emit their own light or where they additionally realize single scattering or even multiple scattering. Depending on the model used, different visualization effects can be realized. Generally, scattering is much more costly to realize than a simple emission and absorption model, which is one of the reasons why it is hardly used in interactive or real-time applications. While the emission determines the color and intensity a voxel is emitting, the absorption can be expressed as the opacity of a voxel. Only a certain amount of light will be passed through a voxel, which can be expressed by 1 − opacity and is usually referred to as the transparency of the voxel.

The parameters of the emission (color and intensity) as well as the parameters of the absorption (opacity/transparency) can be specified on a per voxel-value basis using classification. This is described in more detail in the following section. For different optical models for volume rendering, refer to [66].

2.4. Classification

Classification enables the user to find structures within volume data without explicitly defining the shape and extent of that structure. It allows the user to see inside an object and explore its inside structure instead of only visualizing the surface of that structure, as done in conventional computer graphics.

In the classification stage, certain properties are assigned to a sample, such as color, opacity, and other material properties. Also, shading parameters indicating how shiny a structure should appear can be assigned. The assignment of opacity to a sample can be a very complex operation and has a major impact on the final 2D image generated. In order to assign these material properties, it is usually helpful to use


histograms illustrating the distribution of voxel values across the dataset.

The actual assignment of color, opacity, and other properties can be based on the sample value only, but other values can be taken as input parameters as well. Using the gradient magnitude as a further input parameter, samples within homogeneous regions can be interpreted differently than those within heterogeneous regions. This is a powerful technique in geoseismic data, where the scalar values only change noticeably in between different layers in the ground.
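As an illustration, such an assignment can be sketched as a small transfer function taking the sample value and its gradient magnitude as inputs; the function name, value range, and ramp shapes below are assumptions of the sketch, not the tutorial's actual tables:

```python
# Sketch of a classification stage: map a scalar voxel value (0..255)
# and its gradient magnitude to (R, G, B, opacity). Scaling the opacity
# by the gradient magnitude makes homogeneous regions more transparent
# and emphasizes boundaries, as described in the text.
# All ramps and parameter names here are illustrative assumptions.

def transfer_function(value, grad_mag, max_grad=100.0):
    t = value / 255.0
    r, g, b = t, 0.8 * t, 1.0 - t                # simple color ramp
    opacity = t * min(grad_mag / max_grad, 1.0)  # boundary emphasis
    return (r, g, b, opacity)
```

A sample in a homogeneous region (gradient magnitude near zero) is rendered almost fully transparent regardless of its value, which matches the geoseismic use case above.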

2.5. Segmentation

Empowering the user to see a certain structure using classification is not always possible. A structure can be some organ or tissue but is represented as a simple scalar value. When looking at volumetric data acquired with a CT scanner, different types of tissue will result in the same density values due to the nature of CT. Therefore, no classification of density values can be found such that structures which similarly absorb X-rays could be separated. To separate such structures, they need to be labeled or segmented such that they can be differentiated from each other. Depending on the acquisition method and the scanned object, it can be relatively easy, hard, or even impossible to segment some of the structures automatically. Most algorithms are semi-automatic or optimized for segmenting a specific structure. Once a volumetric dataset is segmented, a certain classification can be assigned to each segment and applied during rendering.

2.6. Shading

Shading or illumination refers to a well-known technique used in conventional computer graphics to greatly enhance the appearance of a geometric object that is being rendered. Shading tries to model effects like shadows, light scattering, and absorption that occur in the real world when light falls on an object. Shading can be classified into global methods, direct methods, and local methods. While global illumination computes the light being exchanged between all objects, direct illumination only accounts for the light that directly falls onto an object. Unfortunately, both methods depend on the complexity of the objects to be rendered and are usually not interactive. Therefore, the local illumination method has been widely used. Figure 2 shows a skull rendered with local and with direct illumination. While direct illumination takes into account how much light is present at each sample (figure 2(b)), local illumination is much cheaper to compute but still achieves reasonable image quality (figure 2(a)). Local illumination consists of an ambient, a diffuse, and a specular component. While the ambient component is available everywhere, the diffuse component can be computed using the angle between the normal vector at the given location and the vector to the light. The specular component depends on the

Figure 2: Comparison of shading: local illumination (a) and direct illumination (b).

angle to the light and the angle to the eye position. All three components can be combined by weighting each of them differently using material properties. While tissue is less likely to have specular components, teeth might reflect more light. Figure 3 shows a skull without and with shading.

Figure 3: Comparison of shading: no shading (a) and local shading (b).

For further reading refer to [61, 66].
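The weighted combination of the three components can be sketched as a minimal Phong-style function; the material weights ka, kd, ks, the shininess exponent, and the helper names are illustrative assumptions, not the tutorial's implementation:

```python
import math

# Minimal sketch of local illumination: ambient + diffuse + specular,
# each weighted by a material coefficient. The normal would come from
# gradient computation (next section); all parameter values are
# illustrative assumptions.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v) if n > 0 else (0.0, 0.0, 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def local_illumination(normal, to_light, to_eye,
                       ka=0.1, kd=0.6, ks=0.3, shininess=20):
    n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # reflection of the light vector about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, e), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return ka + kd * diffuse + ks * specular
```

For shiny materials such as teeth one would raise ks; for tissue one would lower it, matching the material discussion above.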

2.7. Gradient computation

As mentioned in the previous section, a normal is required to be able to integrate shading effects. However, volumetric data itself does not explicitly consist of surfaces with associated normals but of sampled data available at grid positions. This grid of scalar values can be considered as a grey-level volume, and several techniques have been investigated in the past to compute grey-level gradients from volumetric data.

A frequently used gradient operator is the central difference operator. For each dimension of the volume, the central difference of the two neighbouring voxels is computed, which gives an approximation of the local change of the grey value. It can be written as a convolution with the kernel (−1, 0, 1) along each axis, i.e. Gradient_x(x, y, z) ≈ f(x+1, y, z) − f(x−1, y, z). Generally, the central difference operator is not necessarily the best one, but it is very cheap to compute since it requires only


six voxels and three subtractions. A disadvantage of the central difference operator is that it produces anisotropic gradients.

The intermediate difference operator is similar to the central difference operator but has a smaller kernel. It can be written as the kernel (−1, 1), i.e. Gradient_x(x, y, z) ≈ f(x+1, y, z) − f(x, y, z). The advantage of this operator is that it detects high frequencies which can be lost when using the central difference operator. However, when flipping the orientation, a different gradient is computed for the identical voxel position, which can cause undesired effects.

A much better gradient operator is the Sobel operator, which uses all 26 voxels that surround one voxel. This gradient operator was developed for 2D imaging, but volume rendering borrows many techniques from image processing, and the Sobel operator can easily be extended to 3D. A nice property of this operator is that it produces nearly isotropic gradients, but it is fairly complex to compute [61].
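A minimal sketch of the central difference operator on a voxel grid follows; the nested-list layout volume[z][y][x] and the function name are assumptions of the sketch:

```python
# Central-difference gradient at an interior voxel: six voxel fetches
# and three subtractions, as stated in the text (the halving is often
# folded into the later normalization).

def central_difference(volume, x, y, z):
    gx = (volume[z][y][x + 1] - volume[z][y][x - 1]) / 2.0
    gy = (volume[z][y + 1][x] - volume[z][y - 1][x]) / 2.0
    gz = (volume[z + 1][y][x] - volume[z - 1][y][x]) / 2.0
    return (gx, gy, gz)
```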

2.8. Compositing

All samples taken during rendering need to be combined into a final image, which means that for each pixel of the image we need to combine the color of the contributing samples. This can be done in random order if only opaque samples are involved, but since we deal with semi-transparent data, the blending needs to be performed in sorted order, which can be accomplished in two ways: front to back or back to front. For front to back, the discrete ray casting integral can then be written as:

Trans = 1.0;                  /* accumulated transparency */
Inten = I[0];                 /* initial value: nearest sample */
for (i = 1; i < n; i++) {
    Trans *= T[i-1];
    Inten += Trans * I[i];
}

The advantage is that the computation can be terminated once the accumulated transparency falls below a certain threshold where no further contribution will be noticeable, e.g. 0.01.

For back to front, compositing is much less work since we do not need to keep track of the remaining transparency. However, all samples need to be processed, and no early termination criterion can be exploited:

Inten = I[n-1];               /* initial value: farthest sample */
for (i = n-2; i >= 0; i--) {
    Inten = I[i] + T[i] * Inten;
}
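Both compositing orders can be written as small runnable functions; the sketch below assumes I[i] and T[i] hold the intensity and transparency of sample i, ordered front to back, and the function names are illustrative:

```python
# Front-to-back compositing with early ray termination, and the
# equivalent back-to-front loop.

def front_to_back(I, T, threshold=0.01):
    inten, trans = I[0], T[0]
    for i in range(1, len(I)):
        inten += trans * I[i]
        trans *= T[i]
        if trans < threshold:        # early ray termination
            break
    return inten

def back_to_front(I, T):
    inten = I[-1]                    # farthest sample
    for i in range(len(I) - 2, -1, -1):
        inten = I[i] + T[i] * inten
    return inten
```

With the termination threshold set to zero, both functions return the same pixel intensity; a threshold such as 0.01 lets front-to-back skip samples that can no longer contribute noticeably.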

Instead of accumulating the color for each pixel over all samples using the above described blending operations, one can choose other operators. Another popular operator simply takes the maximum density value of all samples of a pixel, known as maximum intensity projection (MIP). This is mostly used in medical applications dealing with MRI data

(magnetic resonance angiography), visualizing arteries that have been acquired using contrast agents.

Figure 4: Compositing operators: blending shaded samples of skull (a) and arteries (c), and maximum intensity projection of skull (b) and arteries (d).
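For comparison, the MIP operator replaces the blending loop entirely; a minimal sketch, assuming samples holds the interpolated values along one viewing ray:

```python
# Maximum intensity projection for one pixel: keep the largest sample
# along the ray instead of blending.

def mip(samples):
    return max(samples, default=0.0)
```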

2.9. Filtering

Many volume rendering algorithms resample the volumetric data in a certain way using rays, planes, or random sample points. These sample points seldom coincide with the actual grid positions and require the interpolation of a value based on the neighbouring values at grid positions.

There are numerous different interpolation methods, each of them controlled by an interpolation kernel. The shape of the interpolation kernel provides the coefficients for the weighted interpolation sum. Interpolation kernels can be thought of as overlays. When a value needs to be interpolated, the kernel is placed on top of the neighbouring values. The kernel is centered at the interpolation point, and everywhere the interpolation kernel intersects with the voxels, the values are multiplied. One-dimensional interpolation kernels can be applied to interpolate in two, three, and even more dimensions if the kernel is separable. All of the following interpolation kernels are separable.

The nearest neighbour interpolation method is the simplest and crudest method. The value of the closest of all neighbouring voxel values is assigned to the sample. Hence, it is more a selection than a real interpolation. Therefore, when using nearest neighbour interpolation, the image


quality is fairly low, and when using magnification, a blobby structure appears.

Trilinear interpolation assumes a linear relation between neighbouring voxels and is separable. Therefore, it can be decomposed into seven linear interpolations. The achievable image quality is much higher than with nearest neighbour interpolation. However, when using large magnification factors, three-dimensional diamond structures or crosses appear due to the nature of the trilinear kernel.

Better quality can be achieved using even higher order interpolation methods such as cubic convolution or B-spline interpolation. However, there is a trade-off between quality and computational cost as well as memory bandwidth. These filters require a neighbourhood of 64 voxels and a significantly larger amount of computations than trilinear interpolation. It depends on the application and the requirements which interpolation scheme should be used.
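The decomposition of trilinear interpolation into seven linear interpolations can be sketched directly; the corner naming cXYZ and the fractional offsets fx, fy, fz in [0, 1] are assumptions of the sketch:

```python
# Trilinear interpolation as seven linear interpolations:
# four along x, two along y, one along z.

def lerp(a, b, t):
    return a + (b - a) * t

def trilinear(c000, c100, c010, c110, c001, c101, c011, c111, fx, fy, fz):
    x00 = lerp(c000, c100, fx)
    x10 = lerp(c010, c110, fx)
    x01 = lerp(c001, c101, fx)
    x11 = lerp(c011, c111, fx)
    y0 = lerp(x00, x10, fy)
    y1 = lerp(x01, x11, fy)
    return lerp(y0, y1, fz)
```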

2.10. Color filtering

The previously mentioned techniques can be performed in different order, resulting in different image quality as well as being prone to certain artifacts. Interpolation of scalar values is usually prone to aliasing since, depending on the classification used, high frequency details might be missed. On the other side, color interpolation, i.e. classification and shading of the available voxel values and interpolation of the resulting color values, is prone to color bleeding when interpolating color and α-value independently from each other [119]. A simple example of this is bone surrounded by flesh, where the bone is classified opaque white and the flesh transparent but red. When sampling this color volume, one needs to choose the appropriate interpolation scheme. Simply interpolating the neighbouring color and opacity values results in color bleeding, as illustrated in figure 5. To obtain the correct

Figure 5: Color bleeding: independent interpolation of color and opacity values (left) and opacity-weighted color interpolation (right).

color and opacity, one needs to multiply each color with the corresponding opacity value before interpolating the color. While the effect can easily be noticed in figure 5 due to the chosen color scheme, it is less obvious in monochrome images.

Figure 6 illustrates another example where darkening artifacts can be noticed. This example is a volumetric dataset

Figure 6: Darkening and aliasing: independent interpolation of color and opacity values (left) and opacity-weighted color interpolation (right).

from image based rendering that originates as color (red, green, and blue) at each grid position. Therefore, it illustrates what would happen when visualizing the visible human with and without opacity-weighted color interpolation. The artifacts are quite severe and disturbing.
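The difference between the two interpolation orders can be sketched in 1D between two classified samples; the function name and the single-channel simplification are assumptions of the sketch:

```python
# Opacity-weighted color interpolation: premultiply each color by its
# opacity before interpolating, then divide by the interpolated opacity.
# This avoids the color bleeding of interpolating color and opacity
# independently.

def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_rgba(c0, a0, c1, a1, t):
    alpha = lerp(a0, a1, t)
    premult = lerp(c0 * a0, c1 * a1, t)
    color = premult / alpha if alpha > 0 else 0.0
    return color, alpha
```

For the bone/flesh example above (an opaque sample next to a fully transparent one), the midpoint keeps the visible sample's color at full strength instead of fading toward the invisible neighbour's color.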

2.11. Summary

Within this section, we provided an overview of the different types of grids as well as sources of volume data. Using classification, gradient estimation, shading, and compositing, extremely different visualization results can be achieved. Also, the selection of the filter used to interpolate data or color has a strong influence on the resulting image. Therefore, depending on the application requirements, one has to carefully choose which of the described techniques should be integrated and which interpolation scheme used.

3. Volume Viewing Algorithms

The task of the rendering process is to display the primitives used to represent the 3D volumetric scene onto a 2D screen. Rendering is composed of a viewing process, which is the subject of this section, and the shading process. The projection process determines, for each screen pixel, which objects are seen by the sight ray cast from this pixel into the scene. The viewing algorithm is heavily dependent on the display primitives used to represent the volume and on whether volume rendering or surface rendering is employed. Conventional viewing algorithms and graphics engines can be utilized to display geometric primitives, typically employing surface rendering. However, when volume primitives are displayed directly, a special volume viewing algorithm should be employed. This algorithm should capture the contents of the voxels on the surface as well as the inside of the volumetric object being visualized. This section surveys and compares previous work in the field of direct volume viewing.


3.1. Introduction

The simplest way to implement viewing is to traverse all the volume, regarding each voxel as a 3D point that is transformed by the viewing matrix and then projected onto a Z-buffer and drawn onto the screen. Some methods have been suggested to reduce the amount of computations needed in the transformation by exploiting the spatial coherency between voxels. These methods are described in more detail in Section 4.1.

The back-to-front (BTF) algorithm is essentially the same as the Z-buffer method, with one exception that is based on the observation that the voxel array is presorted in a fashion that allows scanning of its components in order of decreasing or increasing distance from the observer. Exploiting this presortedness of the voxel arrays, traversal of the volume in the BTF algorithm is done in order of decreasing distance to the observer. This avoids the need for a Z-buffer for hidden voxel removal by applying the painter's algorithm: simply drawing the current voxel on top of previously drawn voxels, or compositing the current voxel with the screen value [24].

The front-to-back (FTB) algorithm is essentially the same as BTF, only that now the voxels are traversed in increasing distance order. Front-to-back has the potential of a more efficient implementation by employing a dynamic data structure for screen representation [87] in which only un-lit pixels are processed and newly-lit pixels are efficiently removed from the data structure. It should be observed that in the basic Z-buffer method it is impossible to support the rendition of semi-transparent materials, since voxels are mapped to the screen in arbitrary order. Compositing is based on a computation that simulates the passage of light through several materials, and in this computation the order of materials is crucial. Therefore, translucency can easily be realized in both BTF and FTB, in which objects are mapped to the screen in the order in which the light traverses the scene.

Another method of volumetric projection is based on first transforming each slice from voxel-space to pixel-space using a 3D affine transformation (shearing) [33, 90, 52] and then projecting it to the screen in a FTB fashion, blending it with the projection formed by previous slices [22]. Shear-warp rendering [52] is currently the fastest software algorithm. It achieves 1.1 Hz on a single 150 MHz R4400 processor for a 256 × 256 × 225 volume, with 65 seconds of pre-processing time [51]. However, the 2D interpolation may lead to aliasing artifacts if the voxel values or opacities contain high frequency components [50].

Westover [107, 108] has introduced the splatting technique, in which each voxel is transformed into screen space and then shaded. Blurring, based on 2D lookup tables, is performed to obtain a set of points (the footprint) that spreads the voxel's energy across multiple pixels. These are then composited with the image array. Sobierajski et al. [94] have described a simplified splatting for interactive volume viewing in which only

voxels comprising the object's surface are maintained. Rendering is based on the usage of a powerful transformation engine that is fed with multiple points per voxel. Additional speedup is gained by culling voxels that have a normal pointing away from the observer and by adaptive refinement of image quality.

The ray casting algorithm casts a ray from each pixel on the screen into the volume data along the viewing vector until it accumulates an opaque value [4, 99, 100, 56]. Levoy [59, 57] has used the term ray tracing of volume data to refer to ray casting and compositing of evenly-spaced samples along the primary viewing rays. However, more recently, ray tracing refers to the process where reflected and transmitted rays are traced, while ray casting solely considers primary rays and hence does not aim for "photorealistic" imaging. Rays can be traced through a volume of color as well as data. Ray casting has been applied to volumetric datasets, such as those arising in biomedical imaging and scientific visualization applications (e.g., [22, 100]).

We now turn to classify and compare existing volume viewing algorithms. In section 4 we survey recent advances in acceleration techniques for forward viewing (section 4.1), backward viewing (section 4.2), and hybrid viewing (section 4.3).

3.2. Classification of Volume Viewing Methods

Projection methods differ in several aspects which can be used for their classification in various ways. First, we have to observe whether the algorithm traverses the volume and projects its components onto the screen (also called forward, object-order, or voxel-space projection) [24, 107, 29], traverses the pixels and solves the visibility problem for each one by shooting a sight ray into the scene (also called backward, image-order, or pixel-space projection) [46, 54, 88, 99, 100, 120, 124], or performs some kind of hybrid traversal [40, 100, 52, 50].

Volume rendering algorithms can also be classified according to the partial voxel occupancy they support. Some algorithms [86, 35, 87, 99, 125, 124] assume uniform (binary) occupancy, that is, a voxel is either fully occupied by some object or it is devoid of any object presence. In contrast to uniform voxel occupancy, methods based on partial voxel occupancy utilize intermediate voxel values to represent partial occupancy by objects of homogeneous material. This provides a mechanism for the display of objects that are smaller than the acquisition grid or that are not aligned with it. Partial volume occupancy can be used to estimate occupancy fractions for each of a set of materials that might be present in a voxel [22]. Partial volume occupancy is also assumed whenever the gray-level gradient [36] is used as a measure for the surface inclination. That is, voxel values in the neighborhood of a surface voxel are assumed to reflect the relative average of the various surface types in them.

Volume rendering methods also differ in the way they regard the material of the voxels. Some methods regard all materials as opaque [27, 29, 37, 87, 98, 99], while others allow each voxel to have an opacity attribute [22, 54, 88, 100, 107, 120, 124, 52]. Supporting variable opacities models the appearance of semi-transparent jello and requires composition of multiple voxels along each sight ray.

Yet another aspect of distinction between rendering methods is the number of materials they support. Early methods supported scenes consisting of binary-valued voxels, while more recent methods usually support multi-valued voxels. In the first case, objects are represented by occupied voxels while the background is represented by void voxels [24, 35, 87, 99]. In the latter approach, multi-valued voxels are used to represent objects of non-homogeneous material [27, 36, 98]. It should be observed that, given a set of voxels having multiple values, we can either regard them as fully occupied voxels of various materials (i.e., each value represents a different material) or we can regard the voxel value as an indicator of partial occupancy by a single material; however, we cannot have both. In order to overcome this limitation, some researchers adopt the multiple-material approach as a basis for a classification process that attaches a material label to each voxel. Once each voxel has a material label, these researchers regard the original voxel values as partial occupancy indicators for the labeled material [22].

Finally, volume rendering algorithms can also be classified according to whether they assume a constant value across the voxel extent [46] or a (trilinear) variation of the voxel value [54].

A severe problem in voxel-space projection is that at some viewing points, holes might appear in the scene. To solve this problem, one can regard each voxel as a group of points (depending on the viewpoint) [95] or maintain a ratio of 1 : √3 between a voxel and a pixel [13]. Another solution is based on a hybrid of voxel-space and pixel-space projections that traverses the volume in a BTF fashion but computes pixel colors by intersecting the voxel with a scanline (plane) and then integrating the colors in the resulting polygon [100]. Since this computation is relatively time consuming, it is more suitable for small datasets. It is also possible to apply to each voxel a blurring function to obtain a 2D footprint that spreads the sample's energy onto multiple image pixels, which are later composited into the image [108]. A major disadvantage of the splatting approach is that it tends to blur the edges of objects and reduce the image contrast. Another deficiency of the voxel-space projection method is that it must traverse and project all the voxels in the scene. Sobierajski et al. have suggested the use of normal-based culling in order to reduce (possibly by half) the amount of processed voxels [95]. On the other hand, since voxel-space projection operates in object space, it is most suitable for various parallelization schemes based on object-space subdivision [28, 79, 107].

The main disadvantages of the pixel-space projection scheme are aliasing (especially when assuming uniform value across the voxel extent) and the difficulty to parallelize it. While the computation involved in tracing rays can be performed in parallel, memory becomes the bottleneck. Since rays traverse the volume in arbitrary directions, there seems to be no way to distribute voxels between memory modules to guarantee contention-free access [55].

Before presenting a side by side comparison of the four most popular volume rendering algorithms, we will introduce general acceleration techniques that can be applied to forward and backward viewing algorithms.

4. Acceleration Techniques

Either forward projection or backward projection requires the scanning of the volume buffer, which is a large buffer of size proportional to the cube of the resolution. Consequently, volume rendering algorithms can be very time-consuming. This section focuses on techniques for expediting these algorithms.

4.1. Expediting Forward Viewing

The Z-buffer projection algorithm, although surprisingly simple, is inherently very inefficient and, when naively implemented, produces low quality images. The inefficiency of this method is rooted in the N³ vector-by-matrix multiplications it calculates and the N³ accesses to the Z-buffer it requires. Inferior image quality is caused by this method's inability to support compositing of semi-transparent materials, due to the arbitrary order in which voxels are transformed. In addition, transforming a set of discrete points is a source of various sampling artifacts such as holes and jaggies.

Some methods have been suggested to reduce the amount of computation needed for the transformation by exploiting the spatial coherency between voxels. These methods are: recursive "divide and conquer" [27, 69], pre-calculated tables [24], and incremental transformation [44, 65].

The first method exploits coherency in voxel space by representing the 3D volume by an octree. A group of neighboring voxels having the same value (or similar values, up to a threshold) may, under some restrictions, be grouped into a uniform cubic subvolume. This aggregate of voxels can be transformed and rendered as a uniform unit instead of processing each of its voxels. In addition, since each octree node has eight equally-sized octants, given the transformation of the parent node, the transformation of its sub-octants can be computed efficiently. This method requires, in 3D, three divisions and six additions per coordinate transformation.

The table-driven transformation method [24] is based on the observation that volume transformation involves the multiplication of the matrix elements with integer values which

© The Eurographics Association 2000.


Meißner et al. / Volume Rendering

are always in the range [1, N], where N is the volume resolution. Therefore, in a short preprocessing stage each matrix element t_ij is stored in a table tab_ij[N] such that tab_ij[k] = t_ij · k, 1 ≤ k ≤ N. During the transformation stage, coordinate-by-matrix multiplication is replaced by table lookup. This method requires, in 3D, nine table lookup operations and nine additions per coordinate transformation.
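As a concrete illustration of the table-driven scheme, the following sketch (hypothetical helper names, NumPy assumed) precomputes tab_ij[k] = t_ij · k for all integer coordinates and then transforms a grid point with nine lookups and nine additions:

```python
import numpy as np

def build_tables(T, N):
    """Precompute tab[i][j][k] = T[i][j] * k for k = 0..N.

    T is the 3x3 linear part of the transform; N is the volume
    resolution, so integer voxel coordinates lie in [1, N].
    """
    k = np.arange(N + 1)                     # index k directly by coordinate
    return T[:, :, None] * k[None, None, :]  # shape (3, 3, N + 1)

def transform_point(tab, t, p):
    """Transform integer voxel coordinate p = (x, y, z).

    Nine table lookups and nine additions replace the usual
    matrix-vector multiply; t is the translation part.
    """
    x, y, z = p
    return np.array([tab[i, 0, x] + tab[i, 1, y] + tab[i, 2, z] + t[i]
                     for i in range(3)])
```

The lookup result agrees with the direct affine transform T·p + t, but avoids per-voxel multiplications.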

Finally, the incremental transformation method is based on the observation that the transformation of a voxel can be computed incrementally, given the transformed position of a neighboring grid point. To begin the incremental process we need one matrix-by-vector multiplication to compute the updated position of the first grid point. The remaining grid points are incrementally transformed, requiring three additions per coordinate. However, to employ this approach, all volume elements, including the empty ones, have to be transformed. This approach is therefore more suitable for parallel architectures where it is desired to keep the computation pipeline busy [65].
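A minimal sketch of the incremental idea (hypothetical function name, NumPy assumed): along a grid row, consecutive points differ by a unit step in x, so their transformed positions differ by the first column of the matrix.

```python
import numpy as np

def incremental_row(T, t, y, z, n):
    """Transform the n grid points (0, y, z) .. (n-1, y, z) incrementally.

    One full matrix-vector product for the first point, then a single
    vector addition (three scalar additions) for each subsequent point.
    """
    dx = T[:, 0]                        # delta for a unit step in x
    p = T @ np.array([0.0, y, z]) + t   # first point: full multiply
    out = []
    for _ in range(n):
        out.append(p.copy())
        p += dx                         # incremental update
    return np.array(out)
```

Each row result matches the direct transform of every grid point, at a fraction of the arithmetic cost.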

So far we have been looking at methods that ease the computational burden associated with the transformation. However, consulting the Z-buffer N³ times is also a source of significant slowdown. The back-to-front (BTF) algorithm is essentially the same as the Z-buffer method with one exception: the order in which voxels are scanned. It is based on the observation that the voxel array is spatially presorted. This attribute allows the renderer to scan the volume in order of decreasing distance from the observer. By exploiting this presortedness of the voxel array, one can draw the volume in back-to-front order, that is, in order of decreasing distance to the observer. This avoids the need for a Z-buffer for hidden-voxel removal by applying the painter's algorithm: the current voxel is simply drawn on top of previously drawn voxels. If compositing is performed, the current voxel is composited with the screen value [23, 24]. The front-to-back (FTB) algorithm is essentially the same as BTF, only that now the voxels are traversed in order of increasing distance.

As mentioned above, in the basic Z-buffer method it is impossible to support the rendition of semitransparent materials because voxels are mapped to the screen in an arbitrary order. In contrast, translucency can easily be realized in both BTF and FTB because in these methods objects are mapped to the screen in viewing order.

Another approach to forward projection is based on first transforming the volume from voxel space to pixel space by employing a decomposition of the 3D affine transformation into five 1D shearing transformations [33]. The transformed voxels are then projected onto the screen in FTB order, which supports the blending of voxels with the projection formed by previous (farther) voxels [22]. The major advantage of this approach is its ability (using simple averaging techniques) to overcome some of the sampling problems that cause the production of low-quality images. In addition, this approach replaces the 3D transformation by five 1D transformations which require only one floating-point addition each.

Another solution to the image quality problem mentioned above is splatting [108], in which each voxel is transformed into screen space and then shaded. Blurring, based on 2D lookup tables, is performed to obtain a set of points (a cloud) that spreads the voxel's energy across multiple pixels, called a footprint. These are then composited with the image array. However, this algorithm requires extensive filtering and is therefore time consuming.

Sobierajski et al. have described [94] a simplified approximation to the splatting method for interactive volume viewing in which only voxels comprising the object's surface are maintained. Each voxel is represented by several 3D points (a 3D footprint). Rendering is based on the usage of a contemporary geometry engine that is fed with those multiple points per voxel. Additional speedup is gained by culling voxels that have a normal pointing away from the observer. Finally, adaptive refinement of image quality is also supported: when the volume is manipulated, only one point per voxel is rendered, interactively producing a low-quality image. When the volume remains stationary and unchanged for some short period, the rendering system renders the rest of the points to increase image quality.

Another efficient implementation of the splatting algorithm, called hierarchical splatting [53], uses a pyramid data structure to hold a multiresolution representation of the volume. For a volume of N³ resolution the pyramid data structure consists of a sequence of log N volumes. The first volume contains the original dataset; each following volume in the sequence is half the resolution of the previous one, and each of its voxels contains the average of eight voxels in the higher-resolution volume. According to the desired image quality, this algorithm scans the appropriate level of the pyramid in BTF order. Each element is splatted using the appropriately sized splat. The splats themselves are approximated by polygons which can be rendered efficiently by graphics hardware.
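The pyramid construction described above can be sketched as follows (hypothetical function name, NumPy assumed; cubic, power-of-two volumes for simplicity): each coarser level averages 2×2×2 blocks of the finer level.

```python
import numpy as np

def build_pyramid(vol):
    """Multiresolution pyramid for hierarchical splatting.

    Each level halves the resolution; every coarse voxel holds the
    average of the eight finer voxels below it. vol must be cubic with
    a power-of-two side length.
    """
    levels = [vol.astype(float)]
    while levels[-1].shape[0] > 1:
        v = levels[-1]
        n = v.shape[0] // 2
        # group into 2x2x2 blocks and average them
        v = v.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5))
        levels.append(v)
    return levels
```

A renderer would then pick the pyramid level matching the desired image quality and splat its (fewer, larger) elements.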

4.2. Expediting Backward Viewing

Backward viewing of volumes, based on casting rays, has three major variations: parallel (orthographic) ray casting, perspective ray casting, and ray tracing. The first two are variations of ray casting, in which only primary rays, that is, rays from the eye through the screen, are followed. These two methods have been widely applied to volumetric datasets, such as those arising in biomedical imaging and scientific visualization applications (e.g., [22, 100]). Levoy [57, 58] has used the term ray tracing of volume data to refer to ray casting and compositing of evenly-spaced samples along the primary viewing rays.

Ray casting can further be divided into methods that support only parallel viewing, that is, when the eye is at infinity and all rays are parallel to one viewing vector. This


viewing scheme is used in applications that could not benefit from perspective distortion, such as biomedicine. Alternatively, ray casting can be implemented to also support perspective viewing.

Since ray casting follows only primary rays, it does not directly support the simulation of light phenomena such as reflection, shadows, and refraction. As an alternative, Yagel et al. have developed the 3D raster ray tracer (RRT) [122] that recursively considers both primary and secondary rays and thus can create "photorealistic" images. It exploits the voxel representation for the uniform representation and ray tracing of sampled and computed volumetric datasets, traditional geometric scenes, or intermixings thereof.

The examination of existing methods for speeding up the process of ray casting reveals that most of them rely on one or more of the following principles: (1) pixel-space coherency, (2) object-space coherency, (3) inter-ray coherency, and (4) space-leaping.

We now turn to describe each of those in more detail.

1. Pixel-space coherency: There is high coherency between pixels in image space. That is, it is highly probable that between two pixels having identical or similar colors we will find another pixel having the same (or a similar) color. It is therefore observed that casting rays for such obviously identical pixels can be avoided.

2. Object-space coherency: The extension of pixel-space coherency to 3D states that there is coherency between voxels in object space. Therefore, it is observed that it should be possible to avoid sampling in 3D regions having uniform or similar values.

3. Inter-ray coherency: There is a great deal of coherency between rays in parallel viewing; that is, all rays, although having different origins, have the same slope. Therefore, the sets of steps these rays take when traversing the volume are similar. We exploit this coherency so as to avoid the computation involved in navigating the ray through voxel space.

4. Space-leaping: The passage of a ray through the volume is two-phased. In the first phase the ray advances through the empty space searching for an object. In the second phase the ray integrates colors and opacities as it penetrates the object (in the case of multiple or concave objects these two phases can repeat). Commonly, the second phase involves one or a few steps, depending on the object's opacity. Since the passage of empty space does not contribute to the final image, it is observed that skipping the empty space can provide significant speedup without affecting image quality.

Adaptive image supersampling exploits pixel-space coherency. It was originally developed for traditional ray tracing [7] and later adapted to volume rendering [57, 60]. First, rays are cast from only a subset of the screen pixels (e.g., every other pixel). "Empty pixels" residing between

pixels with similar values are assigned an interpolated value. In areas of high image gradient, additional rays are cast to resolve ambiguities.
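The two-pass scheme can be sketched as follows (hypothetical function names; `cast_ray(x, y)` is an assumed callback returning a scalar intensity; odd image dimensions assumed so the coarse grid covers the borders; cell-corner averaging stands in for proper interpolation):

```python
import numpy as np

def adaptive_supersample(cast_ray, w, h, threshold=0.1):
    """Adaptive image supersampling sketch.

    Pass 1 casts rays at every other pixel; pass 2 fills the remaining
    pixels by averaging the surrounding coarse samples, re-casting a
    real ray only where those samples differ strongly.
    """
    img = np.zeros((h, w))
    for y in range(0, h, 2):              # pass 1: coarse grid of rays
        for x in range(0, w, 2):
            img[y, x] = cast_ray(x, y)
    for y in range(h):                    # pass 2: fill in-between pixels
        for x in range(w):
            if x % 2 == 0 and y % 2 == 0:
                continue                  # already sampled
            x0, x1 = x - x % 2, min(x - x % 2 + 2, w - 1)
            y0, y1 = y - y % 2, min(y - y % 2 + 2, h - 1)
            corners = [img[y0, x0], img[y0, x1], img[y1, x0], img[y1, x1]]
            if max(corners) - min(corners) > threshold:
                img[y, x] = cast_ray(x, y)          # high gradient: cast
            else:
                img[y, x] = sum(corners) / 4.0      # cheap interpolation
    return img
```

In smooth image regions only a quarter of the pixels trigger a full ray traversal.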

Van Walsum et al. [104] have used voxel-space coherency. In their method the ray starts sampling the volume at low frequency (i.e., with large steps between sample points). If a large value difference is encountered between two adjacent samples, additional samples are taken between them to resolve ambiguities in these high-frequency regions. Recently, this basic idea was extended to efficiently lower the sampling rate either in areas where only small contributions of opacity are made, or in regions where the volume is homogeneous [20]. This method efficiently detects regions of low presence or low variation by employing a pyramid of volumes that encode the minimum and maximum voxel values in a small neighborhood, as well as the distance between these measures.
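A minimal sketch of this coarse-to-fine sampling along one ray (hypothetical names and thresholds; `f(s)` is the scalar field sampled along the ray):

```python
def adaptive_ray_samples(f, length, coarse=4.0, threshold=0.2, min_step=0.5):
    """Van Walsum-style adaptive ray sampling sketch.

    Take large steps through f(s); recursively bisect any interval
    whose endpoint values differ by more than threshold, down to a
    minimum step size.
    """
    def refine(s0, s1, v0, v1, out):
        if abs(v1 - v0) > threshold and (s1 - s0) > min_step:
            sm = 0.5 * (s0 + s1)
            vm = f(sm)
            refine(s0, sm, v0, vm, out)
            out.append((sm, vm))     # keeps samples in increasing order
            refine(sm, s1, vm, v1, out)

    samples = [(0.0, f(0.0))]
    s = 0.0
    while s < length:
        s2 = min(s + coarse, length)
        v0, v1 = samples[-1][1], f(s2)
        refine(s, s2, v0, v1, samples)
        samples.append((s2, v1))
        s = s2
    return samples
```

Homogeneous stretches of the ray cost only the coarse samples; extra work concentrates around value transitions.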

The template-based method [120, 124] utilizes inter-ray coherency. Observing that, in parallel viewing, all rays have the same form, it was realized that there is no need to reactivate the discrete line algorithm for each ray. Instead, we can compute the form of the ray once and store it in a data structure called a ray template. All rays can then be generated by following the ray template. The rays, however, differ in the exact positioning of the appropriate portion of the template, an operation that has to be performed very carefully. For this purpose a plane that is parallel to one of the volume faces is chosen to serve as a base plane for the template placement. The image is generated by sliding the template along that plane, emitting a ray at each of its pixels. This placement guarantees complete and uniform tessellation of the volume. The regularity and simplicity of this efficient algorithm make it very attractive for hardware implementation [121].

So far we have seen methods that exploit some type of coherency to expedite volumetric ray casting. However, the most prolific and effective branch of volume rendering acceleration techniques involves the utilization of the fourth principle mentioned above: speeding up ray casting by providing efficient means to traverse the empty space.

The hierarchical representation (e.g., octree) decomposes the volume into uniform regions that can be represented by nodes in a hierarchical data structure. An adjusted ray traversal algorithm skips the (uniform) empty space by maneuvering through the hierarchical data structure [57, 89]. It was also observed that traversing the hierarchical data structure is inefficient compared to the traversal of regular grids. A combination of the advantages of both representations is the uniform buffer. The "uniformity information" encoded by the octree can be stored in the empty space of a regular 3D raster. That is, voxels in the uniform buffer contain either a data value or information indicating to which size empty octant they belong. Rays cast into the volume encounter either a data voxel, or a voxel containing "uniformity information" which instructs the ray to perform a leap forward


that brings it to the first voxel beyond the uniform region [16]. This approach saves the need to perform a tree search for the appropriate neighbor, an operation that is the most time consuming and the major disadvantage of the hierarchical data structure.

When a volume consists of one object surrounded by empty space, a common and simple method to skip most of this empty space uses the well-known technique of bounding boxes. The object is surrounded by a tightly fitting box (or another easy-to-intersect object such as a sphere). Rays are intersected with the bounding object and start their actual volume traversal from this intersection point, as opposed to starting from the volume boundary. The PARC (Polygon Assisted Ray Casting) approach [3] strives for a better fit by allowing a convex polyhedral envelope to be constructed around the object. PARC utilizes available graphics hardware to render the front faces of the envelope (to determine, for each pixel, the ray entry point) and the back faces (to determine the ray exit point). The ray is then traversed from entry to exit point. A ray that does not hit any object is not traversed at all.

It is obvious that the empty space does not have to be sampled; it has only to be crossed as fast as possible. Therefore, Yagel et al. have proposed [123, 122] to utilize one fast and crude line algorithm in the empty space (e.g., a 3D integer-based 26-connected line algorithm) and another, slower but more accurate one (e.g., a 6-connected integer or 3D DDA floating-point line algorithm), in the vicinity and interior of objects. The effectiveness of this approach depends on its ability to efficiently switch back and forth between the two line algorithms, and on its ability to efficiently detect the proximity of occupied voxels. This is achieved by surrounding the occupied voxels with a one-voxel-deep "cloud" of flag voxels; that is, all empty voxels neighboring an occupied voxel are assigned, in a preprocessing stage, a special "vicinity flag". A crude ray algorithm is employed to rapidly traverse the empty space until it encounters a vicinity voxel. This flags the need to switch to a more accurate ray traversal algorithm. Encountering later an empty voxel (i.e., one unoccupied and not carrying the vicinity flag) can signal a switch back to the rapid traversal of empty space.

The proximity-clouds method [16, 128] extends this idea even further. Instead of having a one-voxel-deep vicinity cloud, this method computes, in a preprocessing stage, for each empty voxel the distance to the closest occupied voxel. When a ray is sent into the volume it can either encounter an occupied voxel, to be handled as usual, or a "proximity voxel" carrying a value n. This tells the ray that it can take an n-step leap forward, being assured that there is no object in the skipped span of voxels. The effectiveness of this algorithm obviously depends on the ability of the line traversal algorithm to efficiently jump an arbitrary number of steps [16].
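The preprocessing step can be sketched as a distance transform (hypothetical function name; a city-block metric via multi-source BFS is used here for simplicity, whereas proximity-clouds implementations may use other metrics):

```python
import numpy as np
from collections import deque

def proximity_cloud(occupied):
    """City-block distance from every voxel to the nearest occupied one.

    Multi-source BFS over the 6-neighborhood: all occupied voxels start
    at distance 0, and the frontier grows outward. A ray standing on a
    voxel with distance d can conservatively leap toward the data
    without sampling the skipped empty voxels.
    """
    dist = np.full(occupied.shape, -1, dtype=int)
    q = deque()
    for idx in np.argwhere(occupied):
        dist[tuple(idx)] = 0
        q.append(tuple(idx))
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < occupied.shape[i] for i in range(3)) \
                    and dist[n] < 0:
                dist[n] = dist[x, y, z] + 1
                q.append(n)
    return dist
```

The resulting distance volume is computed once per dataset and consulted at every empty voxel the ray visits.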

Yagel and Shi [127] have reported on a method for speeding up the process of volume rendering a sequence of images. It is based on exploiting coherency between consecutive images to shorten the path rays take through the volume. This is achieved by providing each ray with the information needed to leap over the empty space and commence volume traversal in the vicinity of meaningful data. The algorithm starts by projecting the volume into a C-buffer (coordinate buffer) which stores, at each pixel location, the object-space coordinates of the first nonempty voxel visible from that pixel. For each change in the viewing parameters, the C-buffer is transformed accordingly. In the case of rotation, the transformed C-buffer goes through a process of eliminating coordinates that possibly became hidden [30]. The remaining values in the C-buffer serve as estimates of the points where the new rays should start their volume traversal.

4.3. Hybrid Viewing

The most efficient rendering algorithm uses a ray-casting technique with hybrid object/image-order data traversal based on the shear-warp factorization of the viewing matrix [124, 91, 52]. The volume data is defined in object coordinates (u, v, w), which are first transformed to isotropic object coordinates by a scale and shear matrix L. This automatically handles anisotropic datasets, in which the spacing between voxels differs in the three dimensions, and gantry-tilted datasets, in which the slices are sheared, by adjusting the warp matrix. A permutation matrix P transforms the isotropic object to permuted coordinates (x, y, z). The origin of permuted coordinates is the vertex of the volume nearest to the image plane, and the z axis is the edge of the volume most parallel to the view direction. A shear matrix S represents the rendering operation that projects points in the permuted volume space onto points on the base plane, which is the face of the volume data that is most parallel to the viewing plane.

In the shear-warp implementation by Lacroute and Levoy [52], the volume is stored three times, run-length encoded along the major viewing direction. The projection is performed using bilinear interpolation and back-to-front compositing of volume slices parallel to the base plane. Pfister et al. [83] perform the projection using ray casting. This prevents view-dependent artifacts when switching base planes and accommodates supersampling of the volume data. Instead of casting rays from image space, rays are sent into the dataset from the base plane. This approach guarantees that there is a one-to-one mapping of sample points to voxels [124, 91].

The base plane image is transformed to the image plane using the warp matrix W = M · L⁻¹ · P⁻¹ · S⁻¹. To resample the image, one can use 2D texture mapping with bilinear interpolation on a companion graphics card. The additional 2D image resampling results in a slight degradation of image quality. It enables, however, an easy mapping to an arbitrary user-specified image size.


The main advantage of the shear-warp factorization is that voxels can be read and processed in planes of voxels, called slices, that are parallel to the base plane. Slices are processed in the positive z direction. Within a slice, scanlines of voxels (called voxel beams) are read from memory in top-to-bottom order. This leads to regular, object-order data access. In addition, it allows parallelism by having multiple rendering pipelines work on several voxels in a beam at the same time.

4.4. Progressive Refinement

One practical solution to the rendering time problem is the generation of partial images that are progressively refined as the user interacts with the crude image. Both the forward and backward approaches can support progressive refinement. In the case of forward viewing this technique is based on a pyramid data structure. First, the smallest volume in the pyramid is rendered using large-footprint splats. Later, higher-resolution components of the pyramid are rendered [53].

Providing progressive refinement in backward viewing is achieved by first sampling the screen at low resolution. The regions of the screen from which no rays were emitted receive a value interpolated from close pixels that were assigned rays. Later, more rays are cast and the interpolated values are replaced by the more accurate results [60]. Additionally, rays that are intended to cover large screen areas can be traced in the lower-resolution components of a pyramid [57].

Not only screen-space resolution can be progressively increased; the sampling rate and stopping criteria can also be refined. An efficient implementation of this technique was reported by Danskin and Hanrahan [20].

5. The Four Most Popular Approaches

As we have seen in the previous sections, there are numerous approaches that can be taken in volume visualization. A side-by-side comparison of all these approaches would cover many pages and would probably not give many insights due to the overwhelming amount of information and the large parameter set. Generally, there are two avenues that can be taken:

1. The volumetric data are first converted into a set of polygonal iso-surfaces (e.g., via Marching Cubes [63]) and subsequently rendered with polygon rendering hardware. This is referred to as indirect volume rendering (IVR).

2. The volumetric data are directly rendered without the intermediate conversion step. This is referred to as direct volume rendering (DVR) [20, 88, 100].

The former assumes (i) that a set of extractable iso-surfaces exists, and (ii) that with its infinitely thin surface the polygon mesh models the true object structures at reasonable fidelity. Neither is always the case, as illustrative examples may show: (i) amorphous cloud-like phenomena, (ii) smoothly varying flow fields, or (iii) structures of varying

depth (and varying transparencies of an isosurface) that attenuate traversing light in correspondence with the material thickness. But even if both of these assumptions are met, the complexity of the extracted polygonal mesh can overwhelm the capabilities of the polygon subsystem, and a direct volume rendering may prove more efficient [81], especially when the object is complex or large, or when the isosurface is interactively varied and the repeated polygon extraction overhead must be figured into the rendering cost [5].

Within this section, we concern ourselves solely with the direct volume rendering approach, in which four techniques have emerged as the most popular: raycasting [99, 54], splatting [108], shear-warp [52], and 3D texture-mapping hardware-based approaches [9].

5.1. Introduction

Over the years, many researchers have worked independently on refining these four methods, and due to this multifarious effort, all methods have now reached a high level of maturity. Most of this development, however, has evolved along separate paths (although some fundamental scientific progress has benefited all methods, such as advances in filter design [10, 6, 73] or efficient shading [103, 105]). A number of frequently used and publicly available datasets exist (e.g., the UNC CT / MRI heads or the CT lobster); however, due to the large number of parameters that were not controlled across presented research, it has so far been difficult to assess the benefits and shortcomings of each method in a decisive manner. The generally uncontrolled parameters include (apart from hardware architecture, available cache, and CPU clock speed): shading model, viewing geometry, scene illumination, transfer functions, image sizes, and magnification factors. Further, so far, no common set of evaluation criteria exists that enables fair comparisons of proposed methods with existing ones. Within this section, we will address this problem and present an appropriate setup for benchmarking and evaluating different direct volume rendering algorithms. Some work in this direction has already been done in the past: Bartz [5] has compared DVR using raycasting with IVR using marching cubes for iso-surface extraction, while Tiede [97] has compared gradient filters for raycasting and marching cubes. However, a clear answer as to which algorithm is best cannot be provided for the general case, but the results presented here are aimed at providing guidelines to determine under what conditions and premises each volume rendering algorithm is most adequately chosen and applied.

5.2. Common Theoretical Framework

We can write all four investigated volume rendering methods as approximations of the well-known low-albedo volume rendering integral, VRI [8, 48, 41, 66]. The VRI analytically computes I_λ(x, r), the amount of light of wavelength λ coming from ray direction r that is received at location x on the


image plane:

\[
I_\lambda(x, r) \;=\; \int_0^L C_\lambda(s)\,\mu(s)\,e^{-\int_0^s \mu(t)\,dt}\;ds \tag{1}
\]

Here, L is the length of ray r. If we think of the volume as being composed of particles with certain densities (or light extinction coefficients [66]) μ, then these particles receive light from all surrounding light sources and reflect this light towards the observer according to their specular and diffuse material properties. In addition, the particles may also emit light on their own. Thus, in (1), C_λ is the light of wavelength λ reflected and/or emitted at location s in the direction of r. To account for the higher reflectance of particles with larger densities, we must weigh the reflected color by the particle density. The light scattered at s is then attenuated by the densities of the particles between s and the eye according to the exponential attenuation function.

At least in the general case, the VRI cannot be computed analytically [66]. Hence, practical volume rendering algorithms discretize the VRI into a series of sequential intervals i of width Δs:

\[
I_\lambda(x, r) \;=\; \sum_{i=0}^{L/\Delta s} C_\lambda(s_i)\,\mu(s_i)\,\Delta s \prod_{j=0}^{i-1} e^{-\mu(s_j)\,\Delta s} \tag{2}
\]

Using a Taylor series approximation of the exponential term and dropping all but the first two terms, we get the familiar compositing equation [57]:

\[
I_\lambda(x, r) \;=\; \sum_{i=0}^{L/\Delta s} C_\lambda(s_i)\,\alpha(s_i) \prod_{j=0}^{i-1} \bigl(1 - \alpha(s_j)\bigr) \tag{3}
\]

We denote this expression as the discretized VRI (DVRI), where α = 1.0 − transparency. Expression 3 represents a common theoretical framework for all surveyed volume rendering algorithms. All algorithms obtain colors and opacities in discrete intervals along a linear path and composite them in front-to-back or back-to-front order, see Section 2.8. However, the algorithms can be distinguished by the process in which the colors C(s_i) and opacities α(s_i) are calculated in each interval i, and by how wide the interval width Δs is chosen. The position of the shading operator in the volume rendering pipeline also affects C(s_i) and α(s_i). For this purpose, we distinguish the pre-shaded from the post-shaded volume rendering pipeline. In the pre-shaded pipeline, the grid samples are classified and shaded before the ray sample interpolation takes place. We denote this as Pre-DVRI (pre-shaded DVRI), and its mathematical expression is identical to formula 3. Pre-DVRI generally leads to blurry images, especially in zoomed viewing, where fine object detail is often lost [39, 77].
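Formula 3 maps directly onto a front-to-back compositing loop with a running transparency product; a minimal scalar sketch (hypothetical function name, one color channel), which also includes the early ray termination mentioned later for raycasting:

```python
def composite_front_to_back(samples, early_termination=0.95):
    """Evaluate the discretized VRI (formula 3) front to back.

    samples: list of (color, opacity) pairs along the ray, nearest
    first. The accumulated transparency (1 - A_acc) plays the role of
    the product term; the loop stops early once the accumulated
    opacity approaches unity.
    """
    c_acc, a_acc = 0.0, 0.0
    for c_i, a_i in samples:
        c_acc += (1.0 - a_acc) * c_i * a_i   # C(s_i) * alpha(s_i) * prod(1 - alpha)
        a_acc += (1.0 - a_acc) * a_i
        if a_acc >= early_termination:       # early ray termination
            break
    return c_acc, a_acc
```

Back-to-front compositing computes the same sum but cannot stop early, since the nearest (and most visible) samples arrive last.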

The blurriness is eliminated by switching the order of classification/shading and ray sample interpolation. Then the original density volume f is interpolated, and the resulting sample values f(s_i) are classified, via transfer functions, to yield material, opacity, and color. All blurry parts of the

edge image can be clipped away using the appropriate classification function [77]. Shading follows immediately after classification and requires the computation of gradients from the density grid. The resulting expression is termed Post-DVRI (post-shaded DVRI) and is written as follows:

\[
I_\lambda(x, r) \;=\; \sum_{i=0}^{L/\Delta s} C_\lambda\bigl(f(s_i)\bigr)\,\alpha\bigl(f(s_i)\bigr) \prod_{j=0}^{i-1} \Bigl(1 - \alpha\bigl(f(s_j)\bigr)\Bigr) \tag{4}
\]

C and α are now transfer functions, commonly implemented as lookup tables. Since in Post-DVRI the raw volume densities are interpolated and used to index the transfer functions for color and opacity, fine detail in these transfer functions is readily expressed in the final image. One should note, however, that Post-DVRI is not without problems: due to the partial volume effect, a density might be interpolated that is classified as a material not actually present at the sample location, which can lead to false colors in the final image. This can be avoided by prior segmentation, which, however, can add severe staircasing artifacts due to the introduced high frequencies. Based on formulas 3 and 4, we will now present the four surveyed algorithms in detail.

5.3. Distinguishing Features of the Different Algorithms

Our comparison will focus on the conceptual differences between the algorithms, and not so much on ingenious measures that speed runtime. Since numerous implementations exist for each algorithm, mainly providing acceleration, we will select the most general implementation for each, employing the most popular components and parameter settings. More specific implementations can then use the benchmarks introduced later to compare the impact of their improvements. We have summarized the conceptual differences of the four algorithms in Table 1.

5.3.1. Raycasting

Of all volume rendering algorithms, raycasting has seen the largest body of publications over the years. Researchers have used Pre-DVRI [57, 54] as well as Post-DVRI [2, 38, 97]. The density and gradient (Post-DVRI), or color and opacity (Pre-DVRI), in each DVRI interval are generated via point sampling, most commonly by means of a trilinear filter from neighboring voxels (grid points) to maintain computational efficiency, and are subsequently composited. Most authors space the ray samples apart at equal distances Δs, but some approaches exist that jitter the sampling positions to eliminate patterned sampling artifacts, or apply space-leaping [20, 127] for accelerated traversal of empty regions. For strict iso-surface rendering, recent research analytically computes the location of the iso-surface when the ray steps into a voxel that is traversed by one [81]. But in the general case, the Nyquist theorem needs to be followed, which states that we should choose Δs ≤ 1.0 (i.e., one voxel length) if we do not know anything about the frequency content in the sample's local


neighborhood. Then, for Pre-DVRI and Post-DVRI raycasting, the C(s_i), α(s_i), and f(s_i) terms in equations 3 and 4, respectively, are written as:

\[
C_\lambda(s_i) = C_\lambda(i\,\Delta s), \qquad \alpha(s_i) = \alpha(i\,\Delta s), \qquad f(s_i) = f(i\,\Delta s) \tag{5}
\]

Note that α needs to be normalized for Δs ≠ 1.0 [62]. In the implementation used here, we employ early ray termination, a powerful acceleration method of raycasting in which rays are terminated once the accumulated opacity has reached a value close to unity. Furthermore, all samples and corresponding gradient components are computed by trilinear interpolation of the respective grid data.

5.3.2. Splatting

Splatting was proposed by Westover [108], and it works by representing the volume as an array of overlapping basis functions, commonly Gaussian kernels with amplitudes scaled by the voxel values. An image is then generated by projecting these basis functions onto the screen. The screen projection of these radially symmetric basis functions can be efficiently achieved by the rasterization of a precomputed footprint lookup table. Here, each footprint table entry stores the analytically integrated kernel function along a traversing ray. A major advantage of splatting is that only voxels relevant to the image must be projected and rasterized. This can tremendously reduce the volume data that needs to be both processed and stored [78]. However, depending on the zooming factor, each splat can cover up to hundreds of pixels which need to be processed.

The original splatting approach [108] summed the voxel kernels within the volume slices most parallel to the image plane. This was prone to severe brightness variations in animated viewing and also did not allow the variation of the DVRI interval distance Δs. Mueller [76] eliminated these drawbacks by processing the voxel kernels within slabs of width Δs, aligned parallel to the image plane; hence the approach was termed image-aligned splatting: all voxel kernels that overlap a slab are clipped to the slab and summed into a sheet buffer, which is then composited with the sheets before it. Efficient kernel-slice projection is achieved by analytical pre-integration of an array of kernel slices and by fast slice footprint rasterization methods [78]. Both Pre-DVRI and Post-DVRI [77] are possible, and the C(s_i), α(s_i), and f(s_i) terms in equations 3 and 4 are now written as:

\[
C_\lambda(s_i) = \frac{\int_{i\,\Delta s}^{(i+1)\,\Delta s} C_\lambda(s)\,ds}{\Delta s}, \qquad \alpha(s_i) = \frac{\int_{i\,\Delta s}^{(i+1)\,\Delta s} \alpha(s)\,ds}{\Delta s}, \qquad f(s_i) = \frac{\int_{i\,\Delta s}^{(i+1)\,\Delta s} f(s)\,ds}{\Delta s} \tag{6}
\]

We observe that splatting replaces the point sample of raycasting by a sample average across Δs. This introduces an additional low-pass filtering step that helps to reduce aliasing, especially in isosurface renderings and when Δs > 1.0. Splatting also typically uses rotationally symmetric Gaussian kernels, which have better anti-aliasing characteristics than linear filters, with the side effect of performing some signal smoothing. However, when splatting data values and not colors, classification is an unsolved problem since the original data value, i.e., the density, is smoothed, accumulated, and then classified. This aggravates the so-called partial volume effect, and solving this remains future research.
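The footprint mechanism can be sketched as follows (hypothetical function names, NumPy assumed). For an orthographic view, the line integral of a 3D isotropic Gaussian along the ray is itself a 2D Gaussian up to a constant factor, so the sketch samples a 2D Gaussian directly instead of integrating the kernel analytically as in the original method:

```python
import numpy as np

def gaussian_footprint(radius=2, sigma=1.0, res=5):
    """Precompute a splat footprint table on a small pixel grid.

    Normalized so that a fully visible splat deposits exactly its
    voxel value into the image.
    """
    ax = np.linspace(-radius, radius, res)
    xx, yy = np.meshgrid(ax, ax)
    fp = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return fp / fp.sum()

def splat(image, footprint, cx, cy, value):
    """Rasterize one voxel's footprint, centered at integer pixel (cx, cy)."""
    r = footprint.shape[0] // 2
    h, w = image.shape
    for j in range(footprint.shape[0]):
        for i in range(footprint.shape[1]):
            x, y = cx + i - r, cy + j - r
            if 0 <= x < w and 0 <= y < h:
                image[y, x] += value * footprint[j, i]
```

The footprint table is computed once; per voxel, only the cheap rasterization loop runs, which is what makes the table-based approach fast.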

Splatting can use a concept similar to early ray termination: early splat elimination, based on a dynamically computed screen occlusion map, which (conservatively) culls invisible splats early from the rendering pipeline [78]. The main operations of splatting are the transformation of each relevant voxel center into screen space, followed by an index into the occlusion map to test for visibility, and, if it is visible, the rasterization of the voxel footprint into the sheet buffer. The dynamic construction of the occlusion map requires a convolution operation after each sheet-buffer composite, which, however, can be limited to buffer tiles that have received splat contributions in the current slab [78]. It should be noted that, although early splat elimination saves the cost of footprint rasterization for invisible voxels, their transformation must still be performed to determine their occlusion. This is different from early ray termination, where the ray can be stopped and subsequent voxels are not processed.

5.3.3. Shear-Warp

Shear-warp was proposed by Lacroute and Levoy 52 and has been recognized as the fastest software renderer to date. It achieves this by employing a clever volume and image encoding scheme, coupled with a simultaneous traversal of volume and image that skips opaque image regions and transparent voxels. In a pre-processing step, voxel runs are RLE-encoded based on pre-classified opacities. This requires the construction of a separate encoded volume for each of the three major viewing directions. The rendering is performed using a raycasting-like scheme, which is simplified by shearing the appropriate encoded volume such that the rays are perpendicular to the volume slices. The rays obtain their sample values via bilinear interpolation within the traversed volume slices. A final warping step transforms the volume-parallel base plane image into the screen image. The DVRI interval distance ∆s is view-dependent, since the interpolation of sample values only occurs in sheared volume slices. It varies from 1.0 for axis-aligned views to 1.41 for edge-on views to 1.73 for corner-on views, and it cannot be varied to allow for supersampling along the ray. Thus the Nyquist theorem is potentially violated for all but the axis-aligned views.
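The opacity-based run-length encoding can be sketched as below. This is a simplified illustration of the idea, not Lacroute and Levoy's exact data structure: one scanline is encoded as alternating transparent-run lengths and lists of non-transparent voxels, so the traversal can skip transparent runs without touching individual voxels.

```python
# Sketch of shear-warp-style run-length encoding of a pre-classified
# voxel scanline: transparent runs are stored only as lengths.

def rle_encode(opacities, threshold=0.0):
    """Encode a scanline as (transparent_run_length, [opaque voxels]) pairs."""
    runs, i = [], 0
    while i < len(opacities):
        start = i
        while i < len(opacities) and opacities[i] <= threshold:
            i += 1
        skip = i - start          # voxels the renderer may skip outright
        opaque = []
        while i < len(opacities) and opacities[i] > threshold:
            opaque.append(opacities[i])
            i += 1
        runs.append((skip, opaque))
    return runs

scanline = [0.0, 0.0, 0.0, 0.8, 0.9, 0.0, 0.0, 0.5]
runs = rle_encode(scanline)
# runs == [(3, [0.8, 0.9]), (2, [0.5])]
```

Because the runs depend on the classified opacities, changing the transfer function invalidates the encoding; this is the root of the interactive-classification limitation discussed later in the summary.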

© The Eurographics Association 2000.

                       Raycasting              Splatting                 Shear-warp             3D texture mapping
Sampling rate          freely selectable       freely selectable         fixed (1.0 - 1.73)     freely selectable
Interpolation kernel   trilinear               Gaussian                  bilinear               trilinear
Acceleration           early ray termination   early splat elimination   RLE opacity encoding   graphics hardware
Voxels considered      all                     relevant                  mostly relevant        all

Table 1: Distinguishing features and commonly used parameters of the four different algorithms.

The Volpack distribution from Stanford (a volume rendering package that uses the shear-warp algorithm) only provides for Pre-DVRI (with opacity-weighted colors), but conceptually Post-DVRI is also feasible, however without opacity classification if shear-warp's fast opacity-based encoding is used. The C(s_i), α(s_i), and f(s_i) terms in equations 3 and 4 are written similarly to raycasting, but with the added constraint that ∆s is dependent on the view direction:

\[ C_\lambda(s_i) = C_\lambda(i\Delta s), \qquad \alpha(s_i) = \alpha(i\Delta s) \tag{7} \]

\[ f(s_i) = f(i\Delta s), \qquad \Delta s = \sqrt{\left(\frac{dx}{dz}\right)^2 + \left(\frac{dy}{dz}\right)^2 + 1} \tag{8} \]

where (dx, dy, dz)^T is the normalized viewing vector, reordered such that dz is the major viewing direction. In Volpack, the number of rays sent through the volume is limited to the number of pixels in the base plane (i.e., the resolution of the volume slices in view direction). Larger viewports are achieved by bilinear interpolation of the resulting image (after back-warping of the base plane), resulting in a very low image quality if the size of the viewport is significantly larger than the volume resolution. This can be fixed by using a scaled volume with a higher volume resolution.
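Equation (8) directly yields the three characteristic ∆s values quoted above. A small sketch (function name is illustrative):

```python
import math

def shear_warp_delta_s(view):
    # Delta-s from equation (8): the normalized viewing vector
    # (dx, dy, dz) is reordered so that dz is the major
    # (largest-magnitude) component, since rays step one sheared
    # volume slice at a time along that axis.
    dx, dy, dz = sorted(view, key=abs)
    return math.sqrt((dx / dz) ** 2 + (dy / dz) ** 2 + 1.0)

axis_aligned = shear_warp_delta_s((0.0, 0.0, 1.0))  # 1.0
edge_on = shear_warp_delta_s((0.0, 1.0, 1.0))       # sqrt(2), about 1.41
corner_on = shear_warp_delta_s((1.0, 1.0, 1.0))     # sqrt(3), about 1.73
```

Since ∆s > 1.0 for every off-axis view and cannot be reduced, the sampling rate along the ray falls below one sample per voxel, which is where the potential Nyquist violation comes from.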

5.3.4. 3D Texture-Mapping Hardware

This is a short introduction to 3D texture mapping; more details are disclosed in section 7. The use of 3D texture mapping was popularized by Cabral 9 for non-shaded volume rendering. The volume is loaded into texture memory and the hardware rasterizes polygonal slices parallel to the view plane. The slices are then blended back to front, due to the missing accumulation buffer for α. The interpolation filter is a trilinear function (on SGI's RE2 and IR architectures, quadlinear interpolation is also available, where it additionally interpolates between two mipmap levels), and the slice distance ∆s can be chosen freely. A number of researchers have added shading capabilities 19 70 26 106, and both Pre-DVRI 26 and Post-DVRI 19 70 106 are possible. Usually, the rendering is brute-force, without any opacity-based termination acceleration, but some researchers have done this 19. The drawback of 3D texture mapping is that larger volumes require the swapping of volume bricks in and out of the limited-sized texture memory (usually a few MBytes for smaller machines). Fortunately, 3D texture mapping recently became popular in PC based graphics hardware. Texture-mapping hardware interpolates samples in similar ways to raycasting and hence the C(s_i), α(s_i), and f(s_i) terms in equations 3 and 4 are written as:

\[ C_\lambda(s_i) = C_\lambda(i\Delta s), \qquad \alpha(s_i) = \alpha(i\Delta s), \qquad f(s_i) = f(i\Delta s) \tag{9} \]

5.4. Comparison

For a fair comparison of the presented four algorithms, one needs to define identical viewing and rendering parameters. The first can be accomplished using a common camera model, e.g. that of OpenGL, while the latter includes multiple parameters such as filtering method, classification, shading, blending, and so forth. Some of these properties are illustrated in table 1. Others such as shading, classification, and blending need to be adopted across the algorithms such that all use the same operations.

It is difficult to evaluate rendering quality in a quantitative manner. Often, researchers simply put images of competing algorithms side by side, appointing the human visual system (HVS) to be the judge. It is well known that the HVS is less sensitive to some errors (stochastic noise) and more to others (regular patterns), and interestingly, sometimes images with larger numerical errors, e.g., RMS, are judged similar by a human observer to images with lower numerical errors. So it seems that the visual comparison is more appropriate than the numerical, since after all we produce images for the human observer and not for error functions. In that respect, an error model that involves the HVS characteristics would be more appropriate than a purely numerical one. But nevertheless, to perform such a comparison we still need the true volume rendered image, obtained by analytically integrating the volume via equation (1) (neglecting the prior reduction of the volume rendering task to the low-albedo case). As was pointed out by Max 66, analytical integration can be done when assuming that C(s) and µ(s) are piecewise linear. This is, however, somewhat restrictive on our transfer functions, so in the following we employ visual quality assessment only.


[Figure 7 column labels: raycasting and splatting use post-shaded DVRI (raycasting with, splatting without opacity-weighted interpolation); shear-warp and 3D texture mapping use pre-shaded DVRI with color interpolation.]

Figure 7: Comparison of the four algorithms. Columns from left to right: raycasting, splatting, shear-warp, and 3D texture mapping. Rows from top to bottom: CT scan of a human skull, zoomed view of the teeth, CT scan of brain arteries showing an aneurysm, Marschner-Lobb function containing high frequencies, and simulation of the potential distribution of electrons around atoms.


5.5. Results

All presented results were generated on the same platform (SGI Octane). The graphics hardware was only used by the 3D texture mapping approach. Figure 7 shows representative still frames of different datasets that we rendered. We observe that the image quality achieved with texture mapping hardware shows severe color-bleeding artifacts due to interpolation of colors independent from the α-value 119 (see section 2.10), as well as staircasing. Furthermore, highly transparent classification results in darker images, due to limited precision of the RGBA-channels of the hardware (8 bit).

Volpack shear-warp performs much better, with quality similar to raycasting and splatting whenever the resolution of the image matches the resolution of the base plane. For the other images, the rendered base plane image was of lower resolution than the screen image and had to be magnified using bilinear interpolation in the warping step. This leads to excessive blurring, especially for the Marschner-Lobb dataset, where the magnification is very high. A more fundamental drawback can be observed in the 45 degree neghip view in Figure 7, where – in addition to the blurring – significant aliasing in the form of staircasing is present. This is due to the ray sampling rate being less than 1.0, and can be disturbing in animated viewing of some datasets but is less noticeable in still images. The Marschner-Lobb dataset renderings for raycasting and splatting demonstrate the differences of point sampling (raycasting) and sample averaging (splatting). While raycasting's point sampling misses some detail of the function at the crests of the sinusoidal waves, splatting averages across the waves and renders them as blobby rims. For the other datasets the averaging effect is more subtle, but still visible. For example, raycasting renders the skull and the magnified blood with crisper detail than splatting does, but can suffer from aliasing artifacts if the sampling rate is not chosen appropriately (Nyquist rate). However, the quality is quite comparable for all practical purposes.

5.6. Summary

Generally, 3D texture mapping and shear-warp have subsecond rendering times for moderately-sized datasets. While the quality of the display obtained with our mainstream texture mapping approach is limited and can be improved as demonstrated in section 7, the quality of shear-warp rivals that of the much more expensive raycasting and splatting when the object magnification is about unity. Handling higher magnifications is possible by relaxing the condition that the number of rays must match the resolution of the volume. Although higher interpolation costs will be the result, the rendering frame rate will most likely still be high (especially if view frustum culling is applied). A more serious concern is the degradation of image quality at off-axis views. In these cases, one could use a volume with extra interpolated slices, which is Volpack's standard solution for higher image resolutions. But the fact that shear-warp requires an opacity-encoded volume makes interactive transfer function variation a challenge. In applications where these limitations do not apply, shear-warp proves to be a very useful algorithm for volume rendering. The side-by-side comparison of splatting and raycasting yielded interesting results as well: We saw that image-aligned splatting offers a rendering quality similar to that of raycasting. It, however, produces smoother images due to the z-averaged kernel and the anti-aliasing effect of the larger Gaussian filter. It is hence less likely to miss high-frequency detail. Raycasting is faster than splatting for datasets with a low number of non-contributing samples. On the other hand, splatting is better for datasets with a small number of relevant voxels and sheet buffers. Since the quality is so similar and the same transfer functions yield similar rendering results, one could build a renderer that applies either raycasting or splatting, depending on the number of relevant voxels and the level of compactness of the dataset. One could even use different renderers in different portions of the volume, or for the rendering of disconnected objects of different compactness.

More details on this comparison can be found in 71.

6. The VolumePro Real-Time Ray-Casting System

Software based volume rendering approaches can be accelerated such that interactive frame-rates can be achieved. However, this requires certain trade-offs in quality or parameters that can be changed interactively. In order to achieve interactive or even real-time frame-rates at highest quality and full flexibility, dedicated hardware is necessary. Within this section we will have a closer look at special purpose hardware, i.e. the VolumePro system.

6.1. Introduction

Special purpose hardware for volume rendering has been proposed by various researchers, but only a few machines have been implemented. VIRIM was built at the University of Mannheim, Germany 31. The hardware consists of four VME boards and implements ray-casting. VIRIM achieves 2.5 frames/sec for 256³ volumes.

The first PCI based volume rendering accelerator has been built by the University of Tübingen, Germany. Their VIZARD system implements true perspective ray-casting and consists of two PCI accelerator cards 47. An FPGA-based system achieves up to 10 frames/sec for 256³ volumes. To circumvent the lossy data compression and the limitations of changes in classification or shading parameters due to the pre-processing, a follow-up system is currently under development 72 21. This VIZARD II system will be capable of up to 20 frames per second for datasets of 256³ voxels. The strength of this system is its ray traversal engine with an optimized memory interface that allows for fly-throughs, which is mandatory for immersive applications. The system uses a single ray processing pipeline (RPU) and exploits algorithmic optimizations such as early ray termination and space leaping.

Within this section, we will describe VolumePro, the first single-chip real-time volume rendering system for consumer PCs 83. The first VolumePro board was operational in April 1999 (see Figure 8).

Figure 8: The VolumePro PCI card.

The VolumePro system is based on the Cube-4 volume rendering architecture developed at SUNY Stony Brook 84. Mitsubishi Electric licensed the Cube-4 technology and developed the Enhanced Memory Cube-4 (EM-Cube) architecture 80. The VolumePro system, an improved commercial version of EM-Cube, is commercially available since May 1999 at a price comparable to high-end PC graphics cards. Figure 9 shows several images rendered on the VolumePro hardware at 30 frames/sec.

6.2. Rendering Algorithm

VolumePro implements ray-casting 54, one of the most commonly used volume rendering algorithms. Ray-casting offers high image quality and is easy to parallelize. The current version of VolumePro supports parallel projections of isotropic and anisotropic rectilinear volumes with scalar voxels.

VolumePro is a highly parallel architecture based on the hybrid ray-casting algorithm shown in Figure 10 124 91 52. Rays are sent into the dataset from each pixel on a base plane, which is co-planar to the face of the volume data that is most parallel and nearest to the image plane. Because the image plane is typically at some angle to the base plane, the resulting base-plane image is warped onto the image plane.

[Figure 10 labels: image plane, warp, base plane, volume data, voxel slice.]

Figure 10: Template-based ray-casting.

The main advantage of this algorithm is that voxels can be read and processed in planes of voxels (so called slices) that are parallel to the base plane. Within a slice, voxels are read from memory a scanline of voxels at a time, in top to bottom order. This leads to regular, object-order data access.

In contrast to the shear-warp implementation by Lacroute and Levoy 52, VolumePro performs tri-linear interpolation and allows rays to start at sub-pixel locations. This prevents view-dependent artifacts when switching base planes and accommodates supersampling of the volume data.
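Tri-linear interpolation is the reconstruction filter at the heart of this pipeline (VolumePro implements it in fixed-function hardware; the software sketch below is only illustrative, with a nested-list volume indexed volume[z][y][x]):

```python
def trilinear(volume, x, y, z):
    # Tri-linear interpolation of a scalar volume at a sub-voxel
    # position: interpolate along x, then y, then z.
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0

    def v(i, j, k):
        return volume[z0 + k][y0 + j][x0 + i]

    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# 2x2x2 volume ramping from 0 to 1 along x:
vol = [[[0.0, 1.0], [0.0, 1.0]], [[0.0, 1.0], [0.0, 1.0]]]
center = trilinear(vol, 0.5, 0.5, 0.5)   # 0.5
```

Because the eight corner voxels of each cell are needed, sub-pixel ray origins cost nothing extra once the surrounding slices are cached, which is what the slice buffers described below provide.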

6.3. VolumePro System Architecture

VolumePro is implemented as a PCI card for PC class computers. The card contains one volume rendering ASIC (called the vg500) and 128 or 256 MBytes of volume memory. The warping and display of the final image is done on an off-the-shelf 3D graphics card with 2D texture mapping. The vg500 volume rendering ASIC, shown in Figure 11, contains four identical rendering pipelines, arranged side by side, running at 125 MHz each. It is an application specific integrated circuit (ASIC) with approximately 3.2 million random logic transistors and 2 Mbits of on-chip SRAM. The vg500 also contains interfaces to voxel memory, pixel memory, and the PCI bus.

[Figure 11 block diagram: four identical pipelines (0-3), each with interpolation, gradient estimation, shading & classification, and compositing stages; they share slice buffers and connect to the voxel memory interface and pixel memory interface (each backed by SDRAM), the PCI interface, and PLL/glue logic.]

Figure 11: The vg500 volume rendering ASIC with four identical ray-casting pipelines.

Each pipeline communicates with voxel and pixel memory and two neighboring pipelines. Pipelines on the far left and right are connected to each other in a wrap-around fashion (indicated by grey arrows in Figure 11). A main characteristic of VolumePro is that each voxel is read from volume memory exactly once per frame. Voxels and intermediate results are cached in so called slice buffers so that they become available for calculations precisely when needed.

Each rendering pipeline implements ray-casting, and sample values along rays are calculated using tri-linear interpolation. A 3D gradient is computed using central differences between tri-linear samples. The gradient is used in the shader stage, which computes the sample intensity according to the Phong illumination model. Lookup tables in the classification stage assign color and opacity to each sample point. Finally, the illuminated samples are accumulated into base plane pixels using front-to-back compositing.

Figure 9: Several volumes rendered on the VolumePro hardware at 30 frames per second.

Volume memory uses 16-bit wide synchronous DRAMs (SDRAMs) for up to 256 MBytes of volume storage. 2×2×2 cells of neighboring voxels, so called miniblocks, are stored linearly in volume memory. Miniblocks are read and written in bursts of eight voxels using the fast burst mode of SDRAMs. In addition, VolumePro uses a linear skewing of miniblocks 45. Skewing guarantees that the rendering pipelines always have access to four adjacent miniblocks in any of the three slice orientations. A miniblock with position (x, y, z) in the volume is assigned to the memory module k as follows:

\[ k = \left(\left\lfloor \frac{x}{2}\right\rfloor + \left\lfloor \frac{y}{2}\right\rfloor + \left\lfloor \frac{z}{2}\right\rfloor\right) \bmod 4 \tag{10} \]
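Equation (10) is easy to check directly. The sketch below assumes (x, y, z) are voxel coordinates of a miniblock origin, so adjacent miniblocks differ by 2; the skewing then spreads any run of four adjacent miniblocks across all four SDRAM modules, which is what lets the four pipelines fetch in parallel along any axis.

```python
def memory_module(x, y, z):
    # Equation (10): linear skewing of miniblocks across 4 modules.
    return (x // 2 + y // 2 + z // 2) % 4

along_x = [memory_module(x, 0, 0) for x in (0, 2, 4, 6)]
along_z = [memory_module(0, 0, z) for z in (0, 2, 4, 6)]
# Each run covers all four modules: [0, 1, 2, 3]
```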

6.4. VolumePro PCI Card

The VolumePro board is a PCI Short Card with a 32-bit 66 MHz PCI interface (see Figure 8). The board contains a single vg500 rendering ASIC, twenty 64 Mbit SDRAMs with 16-bit datapaths, clock generation logic, and a voltage converter to make it 3.3 volt or 5 volt compliant. Figure 12 shows a block diagram of the components on the board and the busses connecting them.

[Figure 12 diagram: the vg500 connects to the PCI bus and, via the V-, S-, and P-busses, to voxel memory (four SDRAM modules 0-3), section memory, and pixel memory; an EEPROM and clock generator complete the board.]

Figure 12: VolumePro PCI board diagram.

The vg500 ASIC interfaces directly to the system PCI bus. Access to the vg500's internal registers and to the off-chip memories is accomplished through the 32-bit 66 MHz PCI bus interface. The peak burst data rate of this interface is 264 MB/sec. Some of this bandwidth is consumed by image upload, some of it by other PCI system traffic.

6.5. Supersampling

Supersampling 32 improves the quality of the rendered image by sampling the volume dataset at a higher frequency than the voxel spacing. In the case of supersampling in the x and y directions, this would result in more samples per beam and more beams per slice, respectively. In the z direction, it results in more sample slices per volume.

VolumePro supports supersampling in hardware only in the z direction. Additional slices of samples are interpolated between existing slices of voxels. The software automatically corrects the opacity according to the viewing angle and sample spacing by reloading the opacity table.
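The text does not spell out the correction formula, but the standard opacity correction for a changed sample spacing is α' = 1 - (1 - α)^ratio, which keeps the accumulated opacity of a homogeneous region independent of the sampling rate. A hedged sketch of that assumption:

```python
# Assumed alpha-correction applied when the sample spacing changes,
# e.g. supersampling in z by a factor of 3 (ratio = 1/3).

def correct_alpha(alpha, ratio):
    # ratio = new sample spacing / original voxel spacing
    return 1.0 - (1.0 - alpha) ** ratio

def composite(alphas):
    # front-to-back alpha accumulation
    acc = 0.0
    for a in alphas:
        acc += (1.0 - acc) * a
    return acc

# One voxel of opacity 0.6 vs. three supersamples with corrected opacity:
single = composite([0.6])
triple = composite([correct_alpha(0.6, 1.0 / 3.0)] * 3)
# single and triple agree, so supersampling does not darken or
# brighten homogeneous regions.
```

Since VolumePro classifies through a lookup table, this correction can be applied once per frame by reloading the opacity table rather than per sample.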

Figure 13 shows the CT scan of a foot (152 x 261 x 200) rendered with no supersampling (left) and supersampling in z by 3 (right). The artifacts in the left image stem from the insufficient sampling rate to capture the high frequencies of the foot surface. Notice the reduced artifacts in the supersampled image. VolumePro supports up to eight times supersampling.

Figure 13: No supersampling (left) and supersampling in z (right).

6.6. Supervolumes and Subvolumes

Volumes of arbitrary dimensions can be stored in voxel memory without padding. Because of limited on-chip buffers, however, the VolumePro hardware can only render volumes with a maximum of 256 voxels in each dimension in one pass. In order to render a larger volume (called a supervolume), software must first partition the volume into smaller blocks. Each block is rendered independently, and their resulting images are combined in software.

The VolumePro software automatically partitions supervolumes, takes care of the data duplication between blocks, and blends intermediate base planes into the final image. Blocks are automatically swapped to and from host memory if a supervolume does not fit into the 128 MB of volume memory on the VolumePro PCI card. There is no limit to the size of a supervolume, although, of course, rendering time increases due to the limited PCI download bandwidth.
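A minimal sketch of such a partitioning along one axis. The text only says the software handles "data duplication between blocks"; the 1-voxel overlap here is an assumption, chosen so tri-linear interpolation has neighbors at block borders, and all names are illustrative.

```python
MAX_DIM = 256  # hardware limit: 256 voxels per dimension per pass

def partition_axis(size, max_dim=MAX_DIM):
    """Return (start, end) voxel ranges covering one axis of a
    supervolume, with 1-voxel overlap between consecutive blocks."""
    blocks, start = [], 0
    while start < size:
        end = min(start + max_dim, size)
        blocks.append((start, end))
        if end == size:
            break
        start = end - 1   # duplicate one voxel for interpolation
    return blocks

blocks = partition_axis(600)
# [(0, 256), (255, 511), (510, 600)] -- each block at most 256 wide
```

The full 3D partition is the cross product of the per-axis ranges; each block is rendered independently and its base plane blended into the final image.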

Volumes with less than 256 voxels in each dimension are called subvolumes. VolumePro's memory controller allows reading and writing single voxels, slices, or any rectangular slab to and from voxel memory. Multiple subvolumes can be pre-loaded into volume memory. Subvolumes can be updated in-between frames. This allows dynamic and partial updates of volume data to achieve 4D animation effects. It also enables loading sections of a larger volume in pieces, allowing the user to effectively pan through a volume. Subvolumes increase rendering speed to the point where the frame rate is limited by the base plane pixel transfer and driver overhead, which is currently at 30 frames/sec.

6.7. Cropping and Cut Planes

VolumePro provides two features for clipping the volume dataset called cropping and cut planes. These make it possible to visualize slices, cross-sections, or other portions of the volume, thus providing the user an opportunity to see inside in creative ways. Figure 14(a) shows an example of cropping on the CT foot of the visible man. Figure 14(b) shows a cut plane through the engine data.

Figure 14: (a) Cropping. (b) Cut plane.

6.8. VolumePro Performance

Each of the four SDRAMs provides burst-mode access at up to 125 MHz, for a sustained memory bandwidth of 4 × 125 × 10^6 = 500 million 16-bit voxels per second. Each rendering pipeline operates at 125 MHz and can accept a new voxel from its SDRAM memory every cycle. 500 million tri-linear samples per second is sufficient to render 256³ volumes at 30 frames per second.
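The throughput arithmetic above, spelled out (assuming each voxel is read exactly once per frame, as stated earlier):

```python
# Four SDRAM modules at 125 MHz, one 16-bit voxel per cycle each:
voxels_per_second = 4 * 125 * 10**6              # 500 million voxels/sec
volume = 256 ** 3                                 # 16,777,216 voxels
frames_per_second = voxels_per_second / volume    # just under 30 frames/sec
```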

6.9. VLI - The Volume Library Interface

Figure 15 shows the software infrastructure of the VolumePro system. The VLI API is a set of C++ classes that provide full access to the vg500 chip features. VLI does not replace an existing graphics API. Rather, VLI works cooperatively with a 3D graphics library, such as OpenGL, to manage the rendering of volumes and displaying the results in a 3D scene. Higher level toolkits (such as vtk – The Visualization Toolkit) and scene graphs on top of the VLI will likely become the primary interface layer to applications. The VLI classes can be grouped as follows:

- Volume data handling. VLIVolume manages voxel data storage, voxel data format, and transformations of the volume data such as shearing, scaling, and positioning in world space.
- Rendering elements. There are several VLI classes that provide access to the VolumePro features, such as color and opacity lookup tables, cameras, lights, cut planes, clipping, and more.
- Rendering context. The VLI class VLIContext is a container object for all attributes needed to render the volume. It is used to specify the volume data set and all rendering parameters (such as classification, illumination, and blending) for the current frame.

Figure 15: Software infrastructure of the VolumePro system.

The VLI automatically computes reflectance maps based on light placement, sets up α-correction based on viewing angle and sample spacing, supports anisotropic and gantry-tilted datasets by correcting the viewing and image warp matrices, and manages supervolumes, supersampling, and partial updates of volume data. In addition, there are VLI functions that provide initialization, configuration, and termination for the VolumePro hardware.

6.10. Summary

This section describes the algorithm, architecture, and features of VolumePro, the world's first single-chip real-time volume rendering system. The rendering capabilities of VolumePro – 500 million tri-linear, Phong illuminated, composited samples per second – set a new standard for volume rendering on consumer PCs. Its core features, such as on-the-fly gradient estimation, per-sample Phong illumination with an arbitrary number of light sources, 4K RGBA classification tables, α-blending with 12-bit precision, and gradient magnitude modulation, put it ahead of any other hardware solution for volume rendering. Additional features, such as supersampling, supervolumes, cropping and cut planes, enable the development of feature-rich, high-performance volume visualization applications.

Some important limitations of VolumePro are the restriction to rectilinear scalar volumes, the lack of perspective projections, and no support for intermixing of polygons and volume data. Mixing of opaque polygons and volume data can be achieved by first rendering geometry, transferring z buffer values from the polygon card to the volume renderer, and then rendering the volume starting from these z values. Future versions of the system will support perspective projections and several voxel formats, including pre-classified material volumes and RGBA volumes. The limitation to rectilinear grids is more fundamental and hard to overcome.

7. 3D Texture Mapping

So far, we have seen different volume rendering techniques and algorithmic optimizations that can be exploited to achieve interactive frame-rates. Real-time frame-rates can be accomplished by special purpose hardware such as presented in the previous section (VolumePro). Another avenue that can be taken is based on 2D and 3D texture mapping hardware, which currently migrates into the commodity PC graphics hardware and allows for mixing polygons and volumes.

7.1. Introduction

With fast 3D graphics hardware becoming more and more available even on low end platforms, the focus in developing new algorithms is beginning to shift towards higher quality rendering and additional functionality instead of simply higher performance implementations of the traditional graphics pipeline.

Graphics libraries like OpenGL and its extensions provide access to advanced graphics operations in the geometry and the rasterization stage and therefore allow for the design and implementation of completely new classes of rendering algorithms. Prominent examples can be found in realistic image synthesis (shading, bump/environment mapping, reflections) and scientific visualization applications (volume rendering, vector field visualization, data analysis).

In this respect, the goal of this session is twofold: To give both a state-of-the-art overview of volume rendering algorithms using the extended OpenGL graphics library and to present a number of selected and advanced volume rendering algorithms in which access to dedicated graphics hardware is paramount. The first part of this section summarizes the most important fundamentals and features of the graphics library OpenGL with respect to the practical and efficient design of volume rendering algorithms. The second part is dedicated to the efficient use of multi-texture register combiners as available on low-cost PCs in volume rendering applications. In each of both parts hardware accelerated graphics operations are used, thus allowing interactive, high quality rendering and analysis of large-scale volume datasets.

7.2. Advanced volume rendering techniques

OpenGL and its extensions provide access to advanced per-pixel operations available in the rasterization stage and in the frame buffer hardware of modern graphics workstations. With these mechanisms, completely new rendering algorithms can be designed and implemented in a very particular way.

Over the past few years workstations with hardware support for the interactive rendering of complex 3D polygonal scenes consisting of directly lit and shaded triangles have become widely available. The last two generations of high-end graphics workstations 1 75, however, besides providing impressive rates of geometry processing, also introduced new functionality in the rasterization and frame buffer hardware, like texture and environment mapping, fragment tests and manipulation as well as auxiliary buffers. The ability to exploit these features through OpenGL and its extensions allows completely new classes of rendering algorithms to be developed. Anticipating similar trends for the more advanced imaging functionality of today's high-end machines, graphics researchers are actively investigating possibilities to accelerate expensive visualization algorithms by using these extensions.

In this session we will summarize various approaches that make extensive use of graphics hardware for the rendering of volumetric datasets. In particular, the goal of this session is to provide participants with dedicated knowledge concerning the application of 3D textures in volume rendering applications and to demonstrate how to exploit the processing power and functionality of the rasterization and texture subsystem of advanced graphics hardware. Although at this time hardware accelerated 3D texture mapping is only supported on a few particular architectures, we expect the same functionality to be available on low-end architectures like PCs in the near future, thus leading to an increasing need for hardware accelerated algorithms as will be presented. Nonetheless, we will also demonstrate how to efficiently exploit existing PC graphics hardware on which only 2D textures are available in order to achieve high image quality at interactive frame rates.

Hereafter we will first describe the basic concepts of volume rendering via 3D textures, thereby focusing on the potential benefits and advantages compared to software based solutions. We will further outline extensions that enable flexible and interactive editing and manipulation of large scale volume data. We will introduce the concept of clipping geometries by means of stencil buffer operations, and we will review the use of 3D textures for the rendering of lighted and shaded iso-surfaces in real-time without extracting any polygonal representation. Additionally, we will describe novel approaches for the rendering of scalar volume data using 2D textures and multi-texture register combiners as available on Nvidia's GeForce 256 PC graphics processor. The intention here is to streamline general directions how to bring high quality volume rendering to the consumer market by exploiting dedicated but affordable graphics hardware.

Our major concern in this session is to outline techniques for the efficient generation of a visual representation of the information present in volumetric datasets. For scalar-valued volume data two standard techniques, the rendering of iso-surfaces and the direct volume rendering, have been developed to a high degree of sophistication. However, due to the huge number of volume cells which have to be processed and to the variety of different cell types only a few approaches allow parameter modifications and navigation at interactive rates for realistically sized datasets. To overcome these limitations a basis for hardware accelerated interactive visualization of both iso-surfaces and direct volume rendering has been provided in 106.

Direct volume rendering tries to convey a visual impression of the complete 3D dataset by taking into account the emission and absorption effects as seen by an outside viewer. The underlying theory of the physics of light transport is simplified to the well known volume rendering integral when scattering and frequency effects are neglected 41 49 68 112. A few standard algorithms exist for computing the intensity contribution along a ray of sight, enhanced by a wide variety of optimization strategies 57 68 53 20 52. But only recently, since hardware supported 3D texture mapping is available, has direct volume rendering become interactively feasible on graphics workstations 9 18 115. This approach has been extended further on with respect to flexible editing options and advanced mapping and rendering techniques.

The major goal is the manipulation and rendering of large-scale volumetric data sets at interactive rates within one application on standard graphics architectures. In this session we focus on scalar-valued volumes and show how to accelerate the rendering process by exploiting features of advanced graphics hardware implementations through standard APIs like OpenGL. The presented approach is pixel oriented, takes advantage of rasterization functionality such as color interpolation, texture mapping, color manipulation in the pixel transfer path, various fragment and stencil tests, and blending operations. In this way it is possible to

- extend volume rendering via 3D textures with respect to arbitrary clipping geometries
- render shaded iso-surfaces at interactive rates combining 3D textures and fragment operations, thus avoiding any polygonal representation
- extend volume rendering via 2D textures with respect to view independent texture resampling and advanced shading and lighting models

7.3. Volume rendering via 3D textures

When 3D textures became available on graphics workstations their benefit in volume rendering applications was soon recognized 18 9. The basic idea is to interpret the 3D scalar voxel array as a 3D texture defined over [0, 1]³ and to understand 3D texture mapping as the trilinear interpolation of the volume dataset at an arbitrary point within this domain. The data is re-sampled on clipping planes that are oriented orthogonal to the viewing plane, with the plane pixels trilinearly interpolated from the 3D scalar texture. This operation is successively performed for multiple planes that have to be clipped against the parametric texture domain (see Figure 16). These polygons are rendered from front-to-back or back-to-front and the resulting texture slices are blended appropriately into the frame buffer, thereby approximating the continuous volume rendering integral.
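For the back-to-front case, the blending performed per pixel as each slice polygon is rasterized is the standard over operator. A minimal single-pixel sketch of what the frame buffer hardware computes:

```python
# Back-to-front blending of texture slices for one pixel:
# C_dst = C_src * a_src + C_dst * (1 - a_src), approximating the
# volume rendering integral as slices are composited.

def blend_back_to_front(slices):
    # slices: (color, alpha) per slice, ordered back to front
    color = 0.0  # black background
    for c, a in slices:
        color = c * a + color * (1.0 - a)
    return color

# A fully opaque front slice hides everything behind it:
pixel = blend_back_to_front([(0.2, 0.5), (0.7, 0.3), (1.0, 1.0)])
```

In OpenGL terms this corresponds to blending with source factor alpha and destination factor one-minus-source-alpha; the approximation improves as the slice distance shrinks.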

Figure 16: Volume rendering by 3D texture slicing.

Dedicated graphics hardware is exploited for trilinearly interpolating within the texture and for blending the generated fragments on a per-pixel basis. However, the real potential of volume rendering via 3D textures only became apparent after texture lookup tables became available. Scalar samples that are reconstructed from the 3D texture are converted into RGBα pixels by a lookup table prior to their drawing. The possibility to directly manipulate the transfer functions necessary to perform the mapping from scalar values to RGBα values, without the need for reloading the entire texture, allows the user to interactively find meaningful mappings of material values to visual quantities. In this way arbitrary parts of the data can be highlighted or suppressed and visualized using different colors and transparencies.
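This transfer-function mechanism can be illustrated with a small sketch (the function names are ours, not part of any API); changing only the table re-classifies the data without touching the stored scalar texture:

```python
def windowed_lut(lo, hi, rgba):
    """Build a 256-entry RGBA lookup table that highlights scalar values
    in [lo, hi] with `rgba` and suppresses all others (opacity 0)."""
    return [rgba if lo <= s <= hi else (0.0, 0.0, 0.0, 0.0)
            for s in range(256)]

def classify(scalars, lut):
    """Map reconstructed 8-bit scalar samples through the lookup table,
    as the hardware does prior to blending."""
    return [lut[s] for s in scalars]
```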

Nevertheless, besides interactive frame rates, in many practical applications editing the data in a free and easy way is of particular interest. Although texture lookup tables can be modified in order to extract portions of the data, the use of additional clipping geometries often allows separating the relevant structures in a much more convenient and intuitive way. Planar clipping planes available as core OpenGL mechanisms may be utilized, but from the user's point of view more complex geometries are necessary.

7.4. Clipping geometries and Stenciling

A straightforward approach which is implemented quite often is the use of multiple clipping planes to construct more

© The Eurographics Association 2000.


Meißner et al. / Volume Rendering

complex geometries. However, notice that even the simple task of clipping an arbitrarily scaled box cannot be realized in this way. More flexibility and ease of manipulation can be achieved by taking advantage of the per-pixel operations provided in the rasterization stage. As will be outlined, as long as the object against which the volume is to be clipped is a closed surface represented by a list of triangles, it can be efficiently used as the clipping geometry.

The basic idea is to determine for all slicing planes those pixels which are covered by the cross-section between the object and this plane (see Figure 17). Then, these pixels are locked, thus preventing the textured polygon from getting drawn to these locations. The locking mechanism is implemented by exploiting the OpenGL stencil test. It allows pixel updates to be accepted or rejected based on the outcome of a comparison between a user defined reference value and the value of the corresponding entry in the stencil buffer. Before the textured polygon gets rendered the stencil buffer has to be initialized in such a way that all color values written to pixels inside the cross-section will be rejected.

Figure 17: The use of arbitrary clipping geometries is demonstrated for the case of a sphere. In regions where the object intersects the actual slice the stencil buffer is locked. The intuitive approach of rendering only the back faces might result in the patterned erroneous region.

In order to determine for a certain plane whether a pixel is covered by a cross-section or not, the clipping object is rendered in polygon mode. However, since one is only interested in setting the stencil buffer, none of the framebuffer values are altered. At first, an additional clipping plane is enabled which has the same orientation and position as the slicing plane. All back faces with respect to the actual viewing direction are drawn, and everything in front of the plane is clipped. Wherever a pixel would have been drawn the stencil buffer is set. Finally, by changing the stencil test appropriately, rendering the textured polygon now only affects those pixels where the stencil buffer is unchanged.

In general, however, depending on the clipping geometry this procedure fails in determining the cross-section exactly (see rightmost image in Figure 17). Therefore, before the textured polygon is rendered all stencil buffer entries which are set improperly have to be updated. Notice that in front of a back face which was written erroneously there is always a front face due to the topology of the clipping object. The front faces are thus rendered into those pixels where the stencil buffer is set, and the stencil buffer is cleared where a pixel also passes the depth test. Now the stencil buffer is correctly initialized and all further drawing operations are restricted to those pixels where it is set, or vice versa. Clearing the stencil buffer each time a new slice is to be rendered can be avoided by using different stencil planes. Then the number of slices that can be processed without clearing the buffer depends on the number of stencil bits provided by the current visual.

Since this approach is independent of the used geometry it allows arbitrary shapes to be specified. In particular it turns out that transformations of the geometry can be handled without any additional overhead, thus providing a flexible tool for carving portions out of the data in an intuitive way.

In Figure 18 two images are shown which demonstrate the extended functionality of 3D texture based volume rendering. In the first image a simple box was used to mask the interior of an MRI scan by means of the stencil buffer approach. The second image was generated by explicitly clipping the slicing planes against the box and by tesselating the resulting contours. Note that only the region of interest needs to be textured in this way.

(a) Box clipping with the stencil buffer.

(b) Inverse box clipping with OGL tesselation.

Figure 18: Box clipping using the stencil test (left) and the OGL tesselation (right).

7.5. Rendering iso-surfaces via 3D textures

So far we described extensions to texture mapped direct volume rendering that have been introduced in order to define a general hardware accelerated framework for adaptive exploration of volumetric data sets. In practice, however, the display of shaded iso-surfaces has been shown to be one of the most dominant visualization options, which is particularly useful to enhance the spatial relationship between structures. Moreover, this kind of representation often meets the physical characteristics of the real object in a more natural way.


Different algorithms have been proposed for efficiently reconstructing polygonal representations of iso-surfaces from scalar volume data 63 74 92 111, but none of these approaches can effectively be used in interactive applications. This is due to the effort that has to be made to fit the surface and also to the enormous amount of triangles produced. For realistically sized data sets interactively manipulating the iso-value seems to be quite impossible, and also rendering the surface at acceptable frame rates can hardly be achieved. In contrast to these polygonal approaches, in 106 an algorithm was designed that completely avoids any polygonal representation by combining 3D texture mapping and advanced pixel transfer operations in a way that allows the iso-surface to be rendered on a per-pixel basis.

Recently, first approaches for combining hardware accelerated volume rendering via 3D texture maps with lighting and shading were presented. In 102 the sum of pre-computed ambient and reflected light components is stored in the texture volume and standard 3D texture composition is performed. In contrast, in 34 the orientation of voxel gradients is stored together with the volume density as the 3D texture map. Lighting is achieved by indexing into an appropriately replicated color table. The inherent drawbacks of these techniques are the need for reloading the texture memory each time any of the lighting parameters change (including changes in the orientation of the object) 102, and the difficulty to achieve smoothly shaded surfaces due to the limited quantization of the normal orientation and the intrinsic hardware interpolation problems 34.

Basically, the non-polygonal 3D texture based approach is similar to the one used in traditional volume ray-casting for the display of shaded iso-surfaces. Let us consider that the surface is hit where the material values along the ray of sight exceed the iso-value for the first time. At this location the material gradient is computed, which is then used in the lighting calculations.

By recognizing that texture interpolation is already exploited to re-sample the data, all that needs to be evaluated is how to capture those texture samples above the iso-value that are nearest to the image plane. Therefore the OpenGL alpha test can be employed, which is used to reject pixels based on the outcome of a comparison between their alpha component and a reference value.

Each element of the 3D texture gets assigned the material value as its alpha component. Then, texture mapped volume rendering is performed as usual, but pixel values are only drawn if they pass the depth test and if the alpha value is larger than or equal to the selected iso-value. In any of the affected pixels in the frame buffer, the color present at the first surface point is now being displayed.
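Per pixel, the combined alpha and depth tests keep the sample nearest the image plane whose material value reaches the iso-value. A small emulation of that selection logic (not actual OpenGL code; fragment tuples are illustrative):

```python
def first_hit(fragments, iso):
    """fragments: list of (depth, alpha, rgb) generated for one pixel.
    A fragment is kept only if alpha >= iso (alpha test) and it is
    nearer than the stored depth (depth test). Returns the rgb that
    finally remains in the framebuffer, or None if nothing passed."""
    stored_depth = float("inf")
    stored_rgb = None
    for depth, alpha, rgb in fragments:
        if alpha >= iso and depth < stored_depth:
            stored_depth = depth
            stored_rgb = rgb
    return stored_rgb
```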

In order to obtain the shaded iso-surface from the pixel values already drawn into the frame buffer, two different approaches should be outlined:

- Gradient shading: A four-component 3D texture is stored which holds in each element the material gradient as well as the material value. Shading is performed in image space by means of matrix multiplication using an appropriately initialized color matrix.
- Gradientless shading: Shading is simulated by simple frame buffer arithmetic computing forward differences with respect to the light source direction. Pixel texturing is exploited to encompass multiple rendering passes.

Both approaches account for diffuse shading with respect to a parallel light source positioned at infinity. Then the diffuse term reduces to the scalar product between the surface normal, N, and the direction of the light source, L, scaled by the material diffuse reflectivity, kd.

The texture elements in gradient shading each consist of an RGBα quadruple which holds the gradient components in the color channels and the material value in the alpha channel. Before the texture is stored and internally clamped to the range [0,1], the gradient components are scaled and translated by a factor of 0.5.
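The scale-and-bias of signed gradients into the unsigned texture range, and its inversion during shading, can be sketched as follows (helper names are ours):

```python
def encode_gradient(n):
    """Scale and bias a gradient from [-1,1] into [0,1] per component,
    so it survives the clamp when stored in a texture channel."""
    return tuple(0.5 * c + 0.5 for c in n)

def decode_and_shade(rgb, light, kd):
    """Undo the bias, then evaluate the diffuse term kd * max(N.L, 0)
    for a parallel light source (unit vectors assumed)."""
    n = tuple(2.0 * c - 1.0 for c in rgb)
    ndotl = sum(a * b for a, b in zip(n, light))
    return kd * max(ndotl, 0.0)
```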

By slicing the texture, thereby exploiting the alpha test as described, the transformed gradients at the surface points are finally displayed in the RGB frame buffer components (see left image in Figure 19). For the surface shading to proceed properly, pixel values have to be scaled and translated back to the range [-1,1]. In order to account for changes in the orientation of the object the normal vectors have to be transformed by the model rotation matrix. Finally, the diffuse shading term is calculated by computing the scalar product between the light source direction and the transformed normals.

Figure 19: On the left, for an iso-surface the gradient components are displayed in the RGB pixel values. On the right, for the same iso-surface the coordinates in texture space are displayed in the RGB components.

All three transformations can be applied simultaneously using one 4x4 matrix. It is stored in the currently selected color matrix which post-multiplies each of the four-component pixel values if pixel data is copied within the active frame buffer. For the color matrix to accomplish the


transformations it has to be initialized as follows:

\[
CM =
\begin{pmatrix}
L_x & L_y & L_z & 0 \\
L_x & L_y & L_z & 0 \\
L_x & L_y & L_z & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\cdot M_{rot} \cdot
\begin{pmatrix}
2 & 0 & 0 & -1 \\
0 & 2 & 0 & -1 \\
0 & 0 & 2 & -1 \\
0 & 0 & 0 & 1
\end{pmatrix}
\]

By just copying the frame buffer contents onto itself, each pixel gets multiplied by the color matrix. In addition, it is scaled and biased in order to account for the material diffuse reflectivity and the ambient term. The resulting pixel values are

\[
\begin{pmatrix} I_a \\ I_a \\ I_a \\ 0 \end{pmatrix}
+
\begin{pmatrix} k_d \\ k_d \\ k_d \\ 1 \end{pmatrix}
\circ
CM \begin{pmatrix} R \\ G \\ B \\ \alpha \end{pmatrix}
=
\begin{pmatrix}
k_d \, (L \cdot N_{rot}) + I_a \\
k_d \, (L \cdot N_{rot}) + I_a \\
k_d \, (L \cdot N_{rot}) + I_a \\
\alpha
\end{pmatrix}
\]

(with \(\circ\) denoting the component-wise product), where obviously different ambient terms and reflectivities can be specified for each color component.
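The color-matrix shading path can be checked numerically: build CM from the light vector, the model rotation, and the [0,1] to [-1,1] scale-and-bias, then apply the diffuse scale kd and ambient bias Ia. For simplicity this sketch feeds a homogeneous 1 through the fourth pixel component so the bias is exact; all names are ours:

```python
def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def shade_pixel(pixel, light, m_rot, kd, ia):
    """pixel = (R, G, B, 1) holding an encoded gradient in RGB.
    Returns kd * (L . N_rot) + Ia in each color channel."""
    lx, ly, lz = light
    l_mat = [[lx, ly, lz, 0], [lx, ly, lz, 0], [lx, ly, lz, 0], [0, 0, 0, 1]]
    scale_bias = [[2, 0, 0, -1], [0, 2, 0, -1], [0, 0, 2, -1], [0, 0, 0, 1]]
    cm = matmul(matmul(l_mat, m_rot), scale_bias)
    v = [sum(cm[i][j] * pixel[j] for j in range(4)) for i in range(4)]
    return [kd * v[0] + ia, kd * v[1] + ia, kd * v[2] + ia, v[3]]
```

For a normal (0, 0, 1) stored as (0.5, 0.5, 1.0), light along +z, identity rotation, kd = 0.8 and Ia = 0.1, each color channel comes out as 0.8 * 1 + 0.1 = 0.9.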

Figure 20 illustrates the quality of the described rendering technique for shaded iso-surfaces. The surface in the left image was rendered in roughly 9 seconds using a software based ray-caster. 3D texture based gradient shading ran at about 6 frames per second for the next image. The distance between successive slices was chosen to be equal to the sampling intervals used in the software approach. The surface on the right appears somewhat brighter with a little less contrast due to the limited frame buffer precision, but basically there can hardly be seen any differences.

Figure 20: Iso-surface rendering by direct ray-casting (left) and by using a gradient texture (right).

To circumvent the additional amount of memory that is needed to store the gradient texture, a second technique can be employed which applies concepts borrowed from 82 but in an essentially different scenario. The diffuse shading term can be simulated by simple frame buffer arithmetic if the surface is assumed to be locally orthogonal to the surface normal and the normal as well as the light source direction are orthonormal vectors.

Notice that the diffuse shading term is then proportional to the directional derivative towards the light source. Thus, it can be simulated by taking forward differences toward the light source with respect to the material values:

\[
I_d \sim \frac{\partial X}{\partial L} \approx X(\vec{p}_0) - X(\vec{p}_0 + \Delta \cdot \vec{L})
\]

By rendering the scalar material values twice, once those that correspond to the original surface points and then those that correspond to the surface points shifted towards the light source, OpenGL blending operations can be exploited to compute the forward differences.
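The forward difference can be emulated with any scalar field sampler (the sampler and the step size below are illustrative, not from the tutorial's implementation):

```python
def gradientless_diffuse(field, p, light, delta):
    """Approximate the diffuse term by a forward difference of the
    scalar field toward the light source: X(p) - X(p + delta * L).
    `field` is any callable over 3D points (hypothetical sampler)."""
    q = tuple(pi + delta * li for pi, li in zip(p, light))
    return field(p) - field(q)
```

A density that falls off toward the light yields a positive (lit) response.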

In order to obtain the coordinates of the surface points, advantage is taken of the alpha test as proposed, and pixel textures are applied to re-sample the material values. Therefore it is important to know that each vertex comes with a texture coordinate as well as a color value. Usually the color values provide a base color and opacity in order to modulate the interpolated texture samples.

Consider that to each vertex the computed texture coordinate (u, v, w) is assigned as its RGB color value. Texture coordinates are supposed to be within the range [0,1] since they are computed in parametric texture space. Moreover, the color values interpolated during rasterization correspond to the texture space coordinates of points on the slicing plane. As a consequence we now have the position of surface points available in the frame buffer rather than the material gradients.

In order to display the correct color values they must not be modulated by the texture samples. However, remember that in gradientless shading the same texture format is used as in traditional texture slicing. Each element comprises a single-valued color entry which is mapped via a RGBα lookup table. This allows one to temporarily set all RGB values in the lookup table to one, thus avoiding any modulation of color values.

At this point, the real strength of pixel textures can be exploited. The RGB entries of the texture lookup table are reset in order to produce the original scalar values. Then, the pixel data is read into main memory and it is drawn twice into the frame buffer with enabled pixel texture. In the second pass pixel values are shifted towards the light source by means of the OpenGL pixel bias. By changing the blending equation appropriately all values get subtracted from those already in the frame buffer, thus yielding the approximated diffuse lighting.

Figure 21 illustrates the difference between gradient shading and gradientless shading. Obviously, surfaces rendered by the latter exhibit low contrast, and even incorrect results are produced, especially in regions where the variation of the gradient magnitude across the surface is high. Although the material distribution in the example data is almost iso-metric, at some points the differences can be easily recognized. At these surface points the step size used to compute the forward difference has to be increased, which, of course, cannot be realized by the presented approach.


However, only one fourth of the memory needed in gradient shading is used in gradientless shading, while the rendering times, on the other hand, only differ insignificantly. The only difference lies in the way the shading is finally computed. In gradient shading the whole frame buffer is copied once. In gradientless shading the pixel data has to be read and written twice with enabled pixel texturing. On the other hand, since the overhead does not depend on the data resolution but on the size of the viewport, its relative contribution to the overall rendering time can be expected to decrease rapidly with increasing data size.

Figure 21: Comparison of iso-surface rendering using a gradient texture (left) and frame buffer arithmetic (right).

7.6. Volume rendering via 2D textures using register combiners

In order to exploit 2D textures for volume rendering, the volume data set is represented by three object-aligned texture stacks. The stack to be used for rendering has to be chosen according to the view direction in order to avoid degenerated projected polygons (see Figure 22). For a particular stack, the textured polygons are rendered in back-to-front order, and the generated fragments are composited with the pixels already in the frame buffer by α-blending. Due to the varying view direction the distance between consecutive sample points that are blended into a certain pixel varies accordingly. Consequently, the opacity of slices should be adapted with respect to this distance in order to account for the thickness these samples represent. In addition, only bilinear interpolation within the original slices is performed, thus leading to visually less pleasant results. Both drawbacks, however, can be avoided quite efficiently using multi-texture register combiners as available in the NVidia GeForce 256 graphics processor.
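One standard way to adapt slice opacity to the actual inter-sample distance along the view ray (a common correction; the exact form used by the systems discussed here may differ) is:

```python
def corrected_alpha(alpha, spacing, reference_spacing):
    """Opacity correction for non-reference sample spacing:
    alpha' = 1 - (1 - alpha) ** (spacing / reference_spacing),
    so thicker spacing yields proportionally higher opacity."""
    return 1.0 - (1.0 - alpha) ** (spacing / reference_spacing)
```

Doubling the spacing turns an opacity of 0.5 into 0.75, matching what two samples of 0.5 would have accumulated.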

Multi-texturing is an optional extension available in OpenGL 1.2, which allows one polygon to be texture mapped using color and opacity information from multiple textures. OpenGL 1.2 specifies multi-texturing as a sequence of stages, in which the result of one stage can be combined with the result of the previous one.

Unfortunately, in general this concept turns out to be too static for many desired applications. Therefore, recent PC graphics boards support multi-stage rasterization in order to explicitly control how color-, opacity- and texture-components are combined to form the resulting fragment. By means of this extension rather complex calculations can be performed in a single rendering pass.

Although multiple rasterization stages are supported by PC graphics boards from different vendors, until now these features are optional extensions to the OpenGL standard and thus hardware-dependent. Since every manufacturer of graphics hardware defines its own extensions, we will restrict our description to graphics boards with NVidia's GeForce 256 processor. In this respect we strictly focus on the work of 85, where most of the ideas that will be presented hereafter have been introduced.

As an OpenGL extension that enables one to gain explicit control over per-fragment information, NVidia has provided the NV_register_combiners extension 96 which completely bypasses the standard OpenGL texturing units (see Fig. 23). It consists of two flexible general rasterization stages and one final combiner stage. The general combiner stage is divided into an RGB-portion and a separate Alpha-portion as displayed in Figure 24.

A number of input registers can be programmed and combined with each other quite flexibly (see Fig. 24). The output registers of the first combiner stage are then used as the input registers for the next stage. Fixed point color components that are usually clamped to a range of [0,1] can internally be expanded to a signed range of [-1,1]. Vector components, for example, can be handled in this way, which significantly simplifies the computation of local diffuse illumination for the methods described below.

The output registers of the second general stage are directed into a final combiner stage with restricted functionality. Once multi-stage rasterization is performed in the described way the standard OpenGL per-fragment operations

Figure 23: Since the multi-texture model of OpenGL 1.2 turns out to be too limiting, NVidia's GeForce 256 processor provides multi-stage register combiners that completely bypass the standard texturing unit.


Figure 22: Viewport-aligned slices (left) in comparison to object-aligned slices (right) for a spinning volume object.


Figure 24: The RGB-portion of the general combiner stage supports arbitrary register mappings and complex computations like dot products and component-wise weighted sums.

Figure 25: The final combiner stage is used to compute the resulting fragment output for RGB and Alpha.

are performed. We should note here that, in contrast to the SGI texture color tables, the GeForce architecture supports paletted textures that allow for interpolation of pre-shaded texture values.

7.7. Multi-Texture Interpolation

In order to achieve view independent distances between sample points, intermediate slices are trilinearly interpolated on the fly from the original slices. The missing third interpolation step is performed within the rasterization hardware using multi-textures, as outlined in Figure 26.

Any intermediate slice \(S_{i+\alpha}\) can be obtained by blending between adjacent slices \(S_i\) and \(S_{i+1}\) from the original stack:

\[
S_{i+\alpha} = (1 - \alpha) \cdot S_i + \alpha \cdot S_{i+1} \qquad (11)
\]

With each slice image stored in a separate 2D-texture, the final interpolation between bilinearly interpolated samples is computed by blending the results. As displayed in Figure 26, blending is computed by a single general combiner stage, where the original slices \(S_i\) and \(S_{i+1}\) are specified as multi-textures texture 0 and texture 1. The combiner is set up to compute a component-wise weighted sum AB + CD with the interpolation factor α stored in one of the constant color registers. The contents of this register is mapped to input variable A, and at the same time it is inverted and mapped to variable C. In the RGB-portion, variables B and D are assigned the RGB components of texture 0 and texture 1, respectively. Analogously, the Alpha-portion interpolates between the alpha-components. Now, the output of this first combiner stage is combined with the pixel values already in the frame buffer using α-blending.
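Per texel, the combiner's weighted sum reduces to the linear blend of Eq. (11); a plain sketch:

```python
def interpolate_slice(slice_i, slice_i1, alpha):
    """Combiner-style weighted sum per component: one weight comes from
    a constant color register, the other from its inverted mapping.
    Reconstructs the intermediate slice (1 - alpha)*S_i + alpha*S_i+1."""
    return [(1.0 - alpha) * a + alpha * b
            for a, b in zip(slice_i, slice_i1)]
```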

All that remains to be done is to choose the location of intermediate slices depending on the actual view direction in order to guarantee equidistant sample distances. Thus, we completely avoid adapting the opacity for every view by selecting the number of slices to be reconstructed appropriately.

7.8. Shaded iso-surfaces

As mentioned in the first part of this section, in 106 an efficient algorithm was introduced that exploits rasterization hardware to display shaded iso-surfaces using a 3D gradient map. Using multi-stage rasterization, this method can be efficiently adapted to the GeForce graphics processor. The voxel gradient is computed as before and written into the RGB components of a set of 2D-textures that represent the volume. Analogously, the material is coded in the alpha-component. The register combiners are then programmed as illustrated in Fig. 27. The first general combiner stage is applied as described in Section 7.7 to interpolate intermediate slices. The second general combiner now computes the dot product A · B, where variable A is mapped to the RGB output of the first combiner stage (the interpolated gradient n) and variable B is mapped to the second constant


Figure 26: Combiner setup for interpolation of intermediate slices.

Figure 27: Combiner setup for fast rendering of shaded iso-surfaces.

color register, which contains the light vector l. The alpha-component is not modified by the second combiner stage. Note that the general combiner stages support signed fixed point values, so there is no need to scale and bias the vector components to a positive range.

Since the final combiner is capable of computing A·B + (1−A)·C + D, when storing the color of diffuse and ambient light in the registers for primary and secondary color, the final combiner can be used for compositing all involved terms. Therefore variable A is assigned to the primary color (Id) and is multiplied with variable B, which is mapped to the dot product computed by the RGB-portion of the second general combiner. Variable C is set to zero and variable D is mapped to the secondary color (Ia).
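The final combiner wiring just described amounts to the following per-channel arithmetic (a numeric sketch with our own names):

```python
def final_combiner(a, b, c, d):
    """NV final combiner RGB equation: A*B + (1-A)*C + D, per channel."""
    return tuple(ai * bi + (1.0 - ai) * ci + di
                 for ai, bi, ci, di in zip(a, b, c, d))

def shaded_fragment(diffuse_color, n_dot_l, ambient_color):
    """Wire-up from the text: A = primary color (Id), B = the dot
    product from combiner 1, C = zero, D = secondary color (Ia)."""
    return final_combiner(diffuse_color, (n_dot_l,) * 3,
                          (0.0, 0.0, 0.0), ambient_color)
```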

Note that this particular implementation is a single-pass rendering technique, since all computations are performed in the register combiners before fragments are going to be combined. In this way, the copy operation in order to multiply pixel values with the properly initialized color matrix can be avoided.

Moreover, by using the same combiner setup multiple transparent iso-surfaces can be displayed. Therefore, the α-lookup-table is set up in such a way that the material ranges to be displayed are represented by α-values larger than zero. For all other materials the alpha value is set to zero. Now, the alpha test discards everything equal to zero, the depth test is disabled and α-blending is performed. Rendering the slices in back-to-front order yields the semi-transparent iso-surfaces correctly accumulated with respect to the selected attenuation.

8. Unstructured Volume Rendering

Figure 28 shows four types of volume data sets: regular, rectilinear, curvilinear, and unstructured. Interactive volume rendering has always been a challenging problem. So far, we have seen how a combination of algorithm developments and hardware advances can create interactive rendering solutions for regular rectilinear datasets; see the previous sections and 117. These solutions provide changing viewpoints with interactivity of 1-30 frames/second. In this portion of the tutorial, we focus on unstructured data. Such datasets are computed in computational fluid dynamics and finite element analysis.

8.1. Introduction

Providing interactivity requires applying optimally tuned graphics hardware and software to accelerate the features of interest. Many researchers in the past 93 have commented that creation of special purpose hardware would solve the


Figure 28: Example grid types. Structured grids: regular rectilinear (fixed grid spacings dx, dy, dz; e.g. MRI, CT, ultrasound), rectilinear (arbitrary height locations; e.g. atmospheric circulation), and curvilinear (world space of grid warped; e.g. fluid flow). Unstructured grids: tetrahedral (e.g. finite element simulation).

slowness of handling unstructured grids. And, general purpose hardware is not adequate, because the operations per second requirement is simply too high. Within the last year, special purpose graphics hardware has become fast enough to make the existing algorithms interactive – if only the proper optimizations are made. Current graphics hardware is interactive, using our optimizations, for moderate sized datasets of thousands to millions of cells. Special purpose volume hardware or reprogrammable graphics hardware are additional ways to further accelerate unstructured volume rendering algorithms.

In order to achieve the highest possible performance for interactive rendering of unstructured data, many software optimizations are necessary. In this section of the tutorial, we explore software optimizations using OpenGL triangle fans, a customized quicksort, memory organization for cache efficiency, display lists, tetrahedral culling, and multithreading. The optimizations can vastly improve the performance of projected tetrahedra rendering to provide interactive rendering on datasets of hundreds of thousands of tetrahedra. These results are an order of magnitude faster than the best in the literature for unstructured volume rendering 126 101. The sorting is also an order of magnitude faster than the fastest sorting timing reported for Williams' MPVONC 17.

Momentum for unstructured rendering, cellular based rendering, and ray tracing of irregular grids was created at the Volume Visualization Symposium 67 93 25. Peter Williams developed extensions to Shirley et al.'s 93 tetrahedral renderer, and provided simplified cell representations to give greater interactivity 113. Ray tracing has been used for rendering of irregular data as well 25. Most recently there has been work on multiresolution irregular volume representations 11, distributed volume rendering algorithms on massively parallel computers 64, and optimization using texture mapping hardware 126. Software scan conversion implementations are also being researched 110.

The rendering of unstructured grids can be done by supporting cell primitives, at a minimum tetrahedra, which, when drawn in the appropriate order, can be accumulated into the frame buffer for volume visualization. There is a disadvantage in using only tetrahedral cells to support hexahedral and other cells: mainly a 5x explosion in data, as a hexahedral cube requires at a minimum 5 tetrahedra to be represented, and the tiling of adjacent cells when subdividing may cause problems in matching edges, including cracking, where a small crack may appear between hexahedral cells that have been subdivided into tetrahedra 93.

For proper compositing either the cells' viewpoint ordering can be determined from the grid or a depth sorting is performed. For regular grids the viewpoint ordering is straightforward. For unstructured data, the viewpoint ordering must be determined for each viewpoint by comparing the depth of cells. Schemes that use cell adjacency data structures can help to reduce the run time complexity of the sort 114 93.
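As a baseline (far simpler than adjacency-based sorts such as MPVONC, and not correct for all meshes), a centroid depth sort can be sketched as:

```python
def depth_sort_back_to_front(tets, eye):
    """Simplistic visibility ordering sketch: sort cells by squared
    distance of their centroid to the eye, farthest first. A centroid
    sort can fail for general unstructured meshes; proper visibility
    sorting exploits cell adjacency."""
    def centroid(tet):
        return [sum(v[i] for v in tet) / len(tet) for i in range(3)]
    def dist2(p):
        return sum((pi - ei) ** 2 for pi, ei in zip(p, eye))
    return sorted(tets, key=lambda t: dist2(centroid(t)), reverse=True)
```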

The basic approach to compute cells' contributions is three dimensional scan conversion. Several variants were investigated in Wilhelms and Van Gelder 109. The first variant averages opacity and color of front and back faces, then uses a nonlinear approximation to the exponent. The second variant averages opacity and color of front and back faces, then exactly computes the exponential. The third variant uses numerical linear integration of color and opacity between front and back faces, and exactly computes the exponentials. Gouraud shading hardware is used by interpolating colors and opacities across faces, which is possible if the exponential approximation is evaluated at the vertices 93.

8.2. Projected Tetrahedra

This tutorial shows how to make unstructured rendering as interactive as possible on available hardware platforms. The projected tetrahedra algorithm of Shirley and Tuchman 93 uses the geometry acceleration and rasterization of polygon rendering to maximum advantage. The majority of the work, the scan conversion and compositing, is accelerated by existing graphics hardware. In earlier work, we investigated hardware acceleration with parallel compositing on PixelFlow 116. OpenGL hardware acceleration is now widely available in desktop systems, and using the earlier implementation as a starting point we investigated further optimizations to improve desktop rendering performance. Figure 29 provides pseudo-code of Shirley and Tuchman's 93 projected tetrahedra algorithm, where tetrahedra are projected onto the screen plane and subdivided into triangles. Figure 31 shows the four classes of projections that result in one to four triangles.

I.  Preprocess data set for a new viewpoint
II. Visibility sort cells
for every cell (sorted back-to-front):
    III. Test plane equations to determine class (1, 2, 3, 4)
    IV.  Project and split cell into unique front/back faces
    V.   Compute color and opacity for thick vertex
    VI.  (H) Scan convert new triangles

Figure 29: Projected tetrahedra pseudo-code

Preprocessing is necessary to calculate colors and opacities from the input data, to set up the visibility sorting of primitives, and to create the plane equations used for determining the class a tetrahedron belongs to for a viewpoint. Cells are visibility sorted (step II) for proper compositing for each viewpoint [114]. Figure 31 shows that new vertices are introduced during steps III and IV. Each new vertex requires a color and opacity to be calculated in step V. Then the triangles are drawn and scan converted in step VI. Figure 30 shows the output of our renderer for the Phoenix, NASA Langley Fighter, and F117 data sets.

8.3. Acceleration Techniques

8.3.1. Triangle strips.

Triangle strips can greatly improve the efficiency of an application since they are a more compact way of describing, storing, and transferring a set of neighbouring triangles. Creating triangle strips from triangular meshes of data can be done through greedy algorithms. But in projected tetrahedra, the split cases illustrated in Figure 31 directly create triangle fans. Each new vertex given with glVertex specifies a new triangle, instead of redundantly passing vertices. Figure 31 gives triangle strips for the four classes of projections.

Figure 31: Number of vertices for fan versus independent triangles, per projection class. Class 1: 4 versus 9 vertices; Class 2: 5 versus 12 vertices; Class 3: 4 versus 6 vertices; Class 4: same.

In experiments on the NASA Langley fighter data set, with 70,125 tetrahedra, there are an average of 3.4 triangles per tetrahedron (a sum weighted by the percentage of each class: Class 1, 60%, 3 triangles; Class 2, 40%, 4 triangles; Classes 3 and 4). Fewer vertices are transmitted for the same geometry, and many fewer procedure calls are made to OpenGL. For n tetrahedra there are 10.8n procedure calls with fans versus 20.4n procedure calls without fans, a factor of 2 reduction. Williams also investigated triangle stripping using Iris-GL.
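The call-count arithmetic above can be reproduced in a few lines. This sketch counts only the per-vertex calls (one glColor plus one glVertex per vertex), which is our reading of the figures; a fan over k triangles has k + 2 vertices, while independent triangles need 3k:

```python
# Average OpenGL procedure calls per tetrahedron, with and without
# triangle fans, for a given mix of projection classes. Each vertex
# costs two calls (glColor + glVertex).
def calls_per_tet(class_mix):
    """class_mix: list of (fraction, triangles_per_tet) pairs."""
    avg_tris = sum(f * k for f, k in class_mix)
    with_fans = 2 * (avg_tris + 2)   # fan over k triangles: k + 2 vertices
    without = 2 * (3 * avg_tris)     # independent triangles: 3k vertices
    return avg_tris, with_fans, without
```

With the Langley fighter class mix of 60% three-triangle and 40% four-triangle projections, this yields the 3.4 triangles and 10.8 versus 20.4 calls per tetrahedron quoted above.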

8.3.2. Display Lists For Static Geometry.

Display lists allow the graphics drivers to optimize vertices, colors, and triangle strips for the hardware. We converted all static geometry into display lists. Figure 32 shows examples of vertices, surfaces, and a background mesh. The primary impact of these changes is to eliminate any slowdown when these auxiliary data are also rendered. Unfortunately, the projected tetrahedra algorithm recomputes the thick vertex for every new viewpoint, so the volume data cannot be placed in display lists. Also, because the thick vertex is recalculated for each new view, vertex arrays cannot be used.

8.3.3. Visibility Sorting—Customized Quicksort.

Visibility sorting of cells is done with Cignoni et al.'s and Karasick et al.'s [12, 42] technique of sorting the tangential distances to circumscribing spheres. We have compared a custom-coded quicksort to the C library utility qsort(), an improvement of 75% to 89%. A generic sorting routine cannot handle the data structures to be sorted as efficiently. Only two values are moved: the tangential squared distance (float) and the tetrahedron index.

Figure 30: Unstructured volume rendering of Phoenix (left), NASA Langley Fighter (middle), and F117 (right).

Figure 32: Points (left), surfaces (middle), and mesh (right) stored in display lists.
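Because the sort touches only a (distance, index) pair per cell, a specialized sort avoids the indirection a generic qsort() comparator incurs. A minimal sketch of the tangential-distance criterion follows (our own simplified formulation; names are illustrative):

```python
# Sort cells back-to-front by the squared tangential distance to each
# cell's circumscribing sphere: |eye - center|^2 - r^2. Only a
# (squared-distance, index) pair per cell is moved during the sort.
def visibility_order(spheres, eye):
    """spheres: list of (center, radius); returns cell indices, farthest first."""
    keys = []
    for i, (c, r) in enumerate(spheres):
        d2 = sum((e - ci) ** 2 for e, ci in zip(eye, c))
        keys.append((d2 - r * r, i))   # squared tangential distance
    keys.sort(reverse=True)            # back-to-front
    return [i for _, i in keys]
```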

8.3.4. Taking advantage of view coherence.

The proper choice of pivots gives an efficient sorting of sorted and nearly sorted lists. We previously re-sorted the same input from scratch for each view; using the sorted values from the previous view speeds up the sorting. The program was also modified to use smaller viewpoint changes, and run times improved by an additional 18%. Both sorts are much faster on sorted data, and this property is exploited for view coherence. The rate for the custom quicksort varies between 600,000 and 2 million cells per second, depending on how sorted the list is. For comparison, recently reported results for MPVONC [114] are 185,000 to 266,000 cells per second [17]. In our earlier work, we showed that quicksort achieved 109,570 cells/second on a PA-RISC 7200 (120 MHz) [116], without the view coherence and data structure optimizations discussed here.

8.3.5. Cache coherency.

Because the tetrahedral data structures are randomly accessed, a high percentage of time is spent in the first fetch of each tetrahedron's data. Tetrahedra are accessed in their view order and not in memory storage order. Reordering the tetrahedra when performing the view sort eliminated cache stalls when rendering data, but the sorting routine was slowed down by moving more data. There was an overall slowdown, so future work is needed to find effective caching strategies.

8.3.6. Culling.

Tetrahedra whose opacity is zero are removed from sorting and rendering. This is classification dependent, but yields a 20% and 36% reduction in tetrahedra for the Langley Fighter and F117 data sets, respectively. The run time decreases accordingly.
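This classification-dependent culling can be sketched as a simple filter applied before sorting. The per-vertex opacity lookup below is our own placeholder for whatever the transfer function produces:

```python
# Remove tetrahedra that are fully transparent under the current
# classification; they contribute nothing to the image, so they can
# be skipped in both sorting and rendering.
def cull_transparent(tets, vertex_opacity):
    """Keep a tetrahedron only if any of its vertices has opacity > 0."""
    return [t for t in tets if any(vertex_opacity[v] > 0.0 for v in t)]
```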

8.3.7. Multithreading.

Many desktop systems now have two CPUs. We multithreaded the renderer on Windows NT to explore the additional speedup available. Profiling of the code showed that spare CPU cycles were available and that the graphics performance was not yet saturated. Two threads were used for the compute-intensive floating point work of testing and splitting cells, and the sorting and OpenGL calls were separated into their own threads. The sorting was done for the next frame, giving pipeline parallelism. The partitioning for the cell sorting and splitting was done on a screen-based partition using the centroid of the visible, or non-culled, tetrahedra. Such a dynamic partitioning provided a fast decision, yet gave good load balancing in most cases. The multithreading provided an additional 14% to 25% speedup.
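The pipeline parallelism described above, sorting frame n+1 while frame n is rendered, can be sketched with two threads. This is an illustrative sketch using Python threads rather than the original Windows NT implementation; the function names are ours:

```python
import threading
import queue

# Two-stage pipeline: a sorter thread prepares the visibility order
# for the next frame while the main thread "renders" the current one.
def run_pipeline(frames, sort_fn, render_fn):
    sorted_frames = queue.Queue(maxsize=1)   # hand-off between stages

    def sorter():
        for frame in frames:
            sorted_frames.put(sort_fn(frame))
        sorted_frames.put(None)              # end-of-stream marker

    t = threading.Thread(target=sorter)
    t.start()
    results = []
    while (frame := sorted_frames.get()) is not None:
        results.append(render_fn(frame))
    t.join()
    return results
```

The bounded queue keeps the sorter at most one frame ahead, which is exactly the one-frame pipeline latency the tutorial's scheme accepts in exchange for the speedup.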

8.4. Summary

Unstructured volume rendering is now truly interactive due to a combination of advances in software algorithms for sorting cells in depth and graphics hardware improvements. Implementing an interactive unstructured renderer requires understanding how to exploit the special-purpose graphics hardware and the CPUs available in today's desktop machines. We have shown the key elements needed to optimize a renderer, including the use of triangle fans to reduce the bandwidth bottleneck, culling of unseen data, multithreading for low-degree parallelism, and making OpenGL state changes as infrequently as possible. Display lists are possible only for the non-volume data, because the nature of the projected tetrahedra algorithm requires recomputing different triangles from the tetrahedra for every new viewpoint. We did show that placing surfaces, points, and auxiliary visualization planes in display lists adds these capabilities essentially for free. For rendering performance results, please see our papers [116, 117, 118]. The next step in reaching even higher visualization rates beyond what commodity graphics hardware and desktop systems provide will be to add important extensions and capabilities to the hardware to directly support volume primitives.

9. Acknowledgments

Data sets are thankfully acknowledged: for the skull, Siemens, Germany; for the blood vessels, Philips Research, Hamburg; for the NASA Langley Fighter, Neely and Batina; for the Super Phoenix nuclear reactor, Bruno Nitrosso, Electricité de France; for the F117, Robert Haimes, MIT.

We also would like to thank Bruce Culbertson, Tom Malzbender, Greg Slabaugh, Jian Huang, Klaus Mueller, Roger Crawfis, Dirk Bartz, Dean Brederson, Claudio Silva, Vivek Verma, and Peter Williams for help in getting images, data, and classifications.

References

1. K. Akeley. RealityEngine graphics. In Computer Graphics, Proceedings of SIGGRAPH 93, pages 109–116, August 1993.

2. R. Avila, T. He, L. Hong, A. Kaufman, H. Pfister, C. Silva, L. Sobierajski, and S. Wang. VolVis: A diversified system for volume visualization research and development. In Proceedings of Visualization 94, pages 31–38, Washington, DC, October 1994.

3. R. Avila, L. Sobierajski, and A. Kaufman. Towards a comprehensive volume visualization system. In Proceedings of Visualization 92, pages 13–20, Boston, MA, October 1992.

4. R. Bakalash and A. Kaufman. 3D visualization of biomedical data. In Proc. 6th Annual International Electronic Imaging Conference, pages 1034–1036, Boston, MA, October 1989.

5. D. Bartz and M. Meißner. Voxels versus polygons: A comparative approach for volume graphics. Proc. of 1st Workshop on Volume Graphics, March 1999.

6. M. Bentum. Frequency analysis of gradient estimators in volume rendering. IEEE Transactions on Visualization and Computer Graphics, 2(3):242–254, 1996.

7. L. Bergman, H. Fuchs, E. Grant, and S. Spach. Image rendering by adaptive refinement. In Computer Graphics, pages 29–37. Proceedings of SIGGRAPH 86, November 1986.

8. J. F. Blinn. Light reflection functions for simulation of clouds and dusty surfaces. Computer Graphics, 16(3):21–29, July 1982.

9. B. Cabral, N. Cam, and J. Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In 1994 Workshop on Volume Visualization, pages 91–98, Washington, DC, October 1994.

10. I. Carlbom. Optimal filter design for volume reconstruction. In Proc. of IEEE Visualization, San José, CA, USA, October 1993. IEEE Computer Society Press.

11. P. Cignoni, L. De Floriani, C. Montani, E. Puppo, and R. Scopigno. Multiresolution modeling and visualization of volume data based on simplicial complexes. In Proceedings of the Symposium on Volume Visualization, pages 19–26, 1994.

12. P. Cignoni, C. Montani, D. Sarti, and R. Scopigno. On the optimization of projective volume rendering. In R. Scaneni, J. van Wijk, and P. Zanarini, editors, Proceedings of the Eurographics Workshop, Visualization in Scientific Computing '95, pages 59–71, Chia, Italy, May 1995.

13. H. E. Cline, W. E. Lorensen, S. Ludke, C. R. Crawford, and B. C. Teeter. Two algorithms for the three-dimensional reconstruction of tomograms. Medical Physics, 15(3):320–327, May 1988.

14. D. Cohen. 3D scan conversion of geometric objects. Doctoral Dissertation, Department of Computer Science, SUNY at Stony Brook, December 1991.

15. D. Cohen and A. Kaufman. Scan conversion algorithms for linear and quadratic objects. In A. Kaufman, editor, Volume Visualization, pages 280–301. IEEE Computer Society Press, Los Alamitos, CA, 1990.

16. D. Cohen and Z. Shefer. Proximity clouds - an acceleration technique for 3D grid traversal. Technical Report FC 93-01, Department of Mathematics and Computer Science, Ben Gurion University of the Negev, February 1993.

17. J. Comba, J. T. Klosowski, N. Max, J. S. B. Mitchell, C. T. Silva, and P. L. Williams. Fast polyhedral cell sorting for interactive rendering of unstructured grids. In Proceedings of Eurographics '99, Milan, Italy, September 1999.

18. T. J. Cullip and U. Neumann. Accelerating volume reconstruction with 3D texture hardware. Technical Report TR93-027, University of North Carolina, Chapel Hill, NC, 1993.

19. F. Dachille, K. Kreeger, B. Chen, I. Bitter, and A. Kaufman. High-quality volume rendering using texture mapping hardware. In Proc. of Eurographics/SIGGRAPH Workshop on Graphics Hardware, pages 69–76, Lisboa, Portugal, August 1998.

20. J. Danskin and P. Hanrahan. Fast algorithms for volume ray tracing. In A. Kaufman and W. L. Lorensen, editors, Workshop on Volume Visualization, pages 91–98, Boston, MA, October 1992.

21. M. C. Doggett, M. Meißner, and U. Kanus. A low-cost memory architecture for volume rendering. In Proc. of Eurographics/SIGGRAPH Workshop on Graphics Hardware, pages 7–14, August 1999.

22. R. A. Drebin, L. Carpenter, and P. Hanrahan. Volume rendering. Computer Graphics, 22(4):65–74, August 1988.

23. E. J. Farrell and R. A. Zappulla. Three-dimensional data visualization and biomedical applications. CRC Critical Reviews in Biomedical Engineering, 16(4):323–363, 1989.

24. G. Frieder, D. Gordon, and R. A. Reynolds. Back-to-front display of voxel-based objects. IEEE Computer Graphics & Applications, 5(1):52–60, January 1985.

25. M. P. Garrity. Raytracing irregular volume data. In Symposium on Volume Visualization, San Diego, CA, November 1990.

26. A. Van Gelder and K. Kim. Direct volume rendering with shading via three-dimensional textures. In Symposium on Volume Visualization, pages 23–30, San Francisco, CA, USA, October 1996.

27. S. M. Goldwasser and R. A. Reynolds. Techniques for the rapid display and manipulation of 3D biomedical data. Dept. of Computer and Info. Science, Univ. of Pennsylvania, Report MS-CIS-86-14, GRASP LAB 60, Philadelphia, July 1986.

28. S. M. Goldwasser, R. A. Reynolds, D. A. Talton, and E. S. Walsh. High performance graphics processors for medical imaging applications. In Proc. International Conference on Parallel Processing for Computer Vision and Display, Leeds, UK, January 1988.

29. D. Gordon and R. A. Reynolds. Image space shading of 3-dimensional objects. Computer Vision, Graphics, and Image Processing, 29:361–376, 1985.

30. B. Gudmundsson and M. Randen. Incremental generation of projections of CT-volumes. In Proceedings of the First Conference on Visualization in Biomedical Computing, pages 27–34, Atlanta, GA, May 1990. IEEE Computer Society Press, Los Alamitos, California.

31. T. Guenther, C. Poliwoda, C. Reinhard, J. Hesser, R. Maenner, H.-P. Meinzer, and H.-J. Baur. VIRIM: A massively parallel processor for real-time volume visualization in medicine. In Proceedings of the 9th Eurographics Workshop on Graphics Hardware, pages 103–108, Oslo, Norway, September 1994.

32. P. Haeberli and K. Akeley. The accumulation buffer: hardware support for high-quality rendering. In Computer Graphics, volume 24 of Proceedings of SIGGRAPH 90, pages 309–318, Dallas, TX, August 1990.

33. P. Hanrahan. Three-pass affine transforms for volume rendering. Computer Graphics, 24(5):71–78, November 1990.

34. M. Haubner, Ch. Krapichler, A. Lösch, K.-H. Englmeier, and W. van Eimeren. Virtual reality in medicine - computer graphics and interaction techniques. IEEE Transactions on Information Technology in Biomedicine, 1996.

35. G. T. Herman and H. K. Liu. Three-dimensional display of human organs from computed tomograms. Computer Graphics and Image Processing, 9:1–21, 1979.

36. K. H. Höhne and R. Bernstein. Shading 3D-images from CT using gray-level gradients. IEEE Transactions on Medical Imaging, MI-5(1):45–47, March 1986.

37. K. H. Höhne, M. Bomans, A. Pommert, M. Riemer, C. Schiers, U. Tiede, and G. Wiebecke. 3D-visualization of tomographic volume data using the generalized voxel model. The Visual Computer, 6(1):28–37, February 1990.

38. K. H. Höhne, B. Pfiesser, A. Pommert, M. Riemer, T. Schiemann, R. Schubert, and U. Tiede. A virtual body model for surgical education and rehearsal. IEEE Computer, 29(1):45–47, 1996.

39. J. Huang, K. Mueller, N. Shareef, and R. Crawfis. Edge preservation in volume rendering using splatting. pages 63–69, Research Triangle Park, NC, USA, October 1998.

40. D. Jackel and W. Straßer. Reconstructing solids from tomographic scans - the PARCUM II system. In A. A. M. Kuijk and W. Straßer, editors, Advances in Computer Graphics Hardware II, pages 209–227. Springer-Verlag, Berlin, 1988.

41. J. T. Kajiya and T. Von Herzen. Ray tracing volume densities. Computer Graphics, 18(3):165–173, July 1984.

42. M. S. Karasick, D. Lieber, L. R. Nackman, and V. T. Rajan. Visualization of three-dimensional Delaunay meshes. Algorithmica, 19(1-2):114–128, Sept.-Oct. 1997.

43. A. Kaufman. An algorithm for 3D scan-conversion of polygons. Proc. EUROGRAPHICS '87, pages 197–208, August 1987.


44. A. Kaufman. The voxblt engine: A voxel frame buffer processor. In A. A. M. Kuijk and W. Straßer, editors, Advances in Graphics Hardware III. Springer-Verlag, Berlin, 1989.

45. A. Kaufman and R. Bakalash. Memory and processing architecture for 3D voxel-based imagery. IEEE Computer Graphics & Applications, 8(6):10–23, November 1988.

46. A. Kaufman and R. Bakalash. Parallel processing for 3D voxel-based graphics. In Proc. International Conference on Parallel Processing for Computer Vision and Display, Leeds, UK, January 1988.

47. G. Knittel and W. Straßer. Vizard - visualization accelerator for real-time display. In Proceedings of the Siggraph/Eurographics Workshop on Graphics Hardware, pages 139–146, Los Angeles, CA, August 1997.

48. W. Krueger. The application of transport theory to the visualization of 3D scalar fields. Computers in Physics 5, pages 397–406, 1991.

49. W. Krüger. The application of transport theory to the visualization of 3-D scalar data fields. In IEEE Visualization '90, pages 273–280, 1990.

50. P. Lacroute. Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transform. PhD thesis, Stanford University, Computer Systems Laboratory, Departments of Electrical Engineering and Computer Science, Stanford, CA 94305-4055, 1995. CSL-TR-95-678.

51. P. Lacroute. Analysis of a parallel volume rendering system based on the shear-warp factorization. IEEE Transactions on Visualization and Computer Graphics, 2(3):218–231, September 1996.

52. P. Lacroute and M. Levoy. Fast volume rendering using a shear-warp factorization of the viewing transform. In Computer Graphics, Proceedings of SIGGRAPH 94, pages 451–457, July 1994.

53. D. Laur and P. Hanrahan. Hierarchical splatting: A progressive refinement algorithm for volume rendering. Computer Graphics, 25(4):285–288, July 1991.

54. M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics & Applications, 8(5):29–37, May 1988.

55. M. Levoy. Design for real-time high-quality volume rendering workstation. In C. Upson, editor, Proceedings of the Chapel Hill Workshop on Volume Visualization, pages 85–92, Chapel Hill, NC, May 1989. University of North Carolina at Chapel Hill.

56. M. Levoy. Display of surfaces from volume data. Ph.D. Dissertation, Department of Computer Science, The University of North Carolina at Chapel Hill, May 1989.

57. M. Levoy. Efficient ray tracing of volume data. ACM Transactions on Graphics, 9(3):245–261, July 1990.

58. M. Levoy. A hybrid ray tracer for rendering polygon and volume data. IEEE Computer Graphics & Applications, 10(2):33–40, March 1990.

59. M. Levoy. A taxonomy of volume visualization algorithms. In SIGGRAPH 90 Course Notes, pages 6–12, 1990.

60. M. Levoy. Volume rendering by adaptive refinement. The Visual Computer, pages 2–7, July 1990.

61. B. Lichtenbelt. Design of a high performance volume visualization system. In Proc. of the Eurographics/SIGGRAPH Workshop on Graphics Hardware, pages 111–119, Los Angeles, CA, USA, July 1997.

62. B. Lichtenbelt, R. Crane, and S. Naqvi. Introduction to Volume Rendering. Hewlett-Packard Professional Books, Prentice-Hall, Los Angeles, USA, 1998.

63. W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Computer Graphics, Proceedings of SIGGRAPH 87, pages 163–169, 1987.

64. K.-L. Ma. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures. In Proceedings of the Parallel Rendering Symposium, pages 23–30, Atlanta, GA, October 1995. IEEE.

65. R. Machiraju, R. Yagel, and L. Schwiebert. Parallel algorithms for volume rendering. OSU-CISRC-10/92-R29, Department of Computer and Information Science, The Ohio State University, October 1992.

66. N. Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, June 1995.

67. N. Max, P. Hanrahan, and R. Crawfis. Area and volume coherence for efficient visualization of 3D scalar functions. ACM Computer Graphics (Proceedings of the 1990 Workshop on Volume Visualization), 24(5):27–33, 1990.

68. N. Max, P. Hanrahan, and R. Crawfis. Area and volume coherence for efficient visualization of 3D scalar functions. In ACM Workshop on Volume Visualization '91, pages 27–33, 1991.

69. D. Meagher. Efficient synthetic image generation of arbitrary 3D objects. In Proceedings of IEEE Computer Society Conference on Pattern Recognition and Image Processing, June 1982.

70. M. Meißner, U. Hoffmann, and W. Straßer. Enabling classification and shading for 3D texture mapping based volume rendering using OpenGL and extensions. In Proc. of IEEE Visualization, pages 207–214, San Francisco, CA, USA, October 1999. IEEE Computer Society Press.

71. M. Meißner, J. Huang, D. Bartz, K. Mueller, and R. Crawfis. A practical evaluation of four popular volume rendering algorithms. In Symposium on Volume Visualization, Salt Lake City, UT, USA, October 2000.

72. M. Meißner, U. Kanus, and W. Straßer. VIZARD II, a PCI-card for real-time volume rendering. In Proc. of Eurographics/SIGGRAPH Workshop on Graphics Hardware, pages 61–68, Lisboa, Portugal, August 1998.

73. T. Moeller, R. Machiraju, K. Mueller, and R. Yagel. Evaluation and design of filters using a Taylor series expansion. IEEE Transactions on Visualization and Computer Graphics, 3(2):184–199, 1997.

74. C. Montani, R. Scateni, and R. Scopigno. Discretized marching cubes. In IEEE Visualization '94, pages 281–287, 1994.

75. J. Montrym, D. Baum, D. Dignam, and C. Migdal. InfiniteReality: A real-time graphics system. Computer Graphics, Proc. SIGGRAPH '97, pages 293–303, July 1997.

76. K. Mueller and R. Crawfis. Eliminating popping artifacts in sheet buffer-based splatting. In Proc. of IEEE Visualization, pages 239–246, Research Triangle Park, NC, USA, October 1998. IEEE Computer Society Press.

77. K. Mueller, T. Moeller, and R. Crawfis. Splatting without the blur. In Proc. of IEEE Visualization, pages 363–371, San Francisco, CA, USA, October 1999. IEEE Computer Society Press.

78. H. Müller, N. Shareef, J. Huang, and R. Crawfis. High-quality splatting on rectilinear grids with efficient culling of occluded voxels. IEEE Transactions on Visualization and Computer Graphics, 5(2):116–134, 1999.

79. T. Ohashi, T. Uchiki, and M. Tokoro. A three-dimensional shaded display method for voxel-based representation. In Proceedings EUROGRAPHICS '85, pages 221–232, Nice, France, September 1985.

80. R. Osborne, H. Pfister, H. Lauer, N. McKenzie, S. Gibson, W. Hiatt, and T. Ohkami. EM-Cube: An architecture for low-cost real-time volume rendering. In Proceedings of the Siggraph/Eurographics Workshop on Graphics Hardware, pages 131–138, Los Angeles, CA, August 1997.

81. S. Parker, P. Shirley, Y. Livnat, C. Hansen, and P. Sloan. Interactive ray tracing for isosurface rendering. In Proc. of IEEE Visualization, pages 233–238, Research Triangle Park, NC, USA, October 1998. IEEE Computer Society Press.

82. M. Peercy, J. Airey, and B. Cabral. Efficient bump mapping hardware. In Computer Graphics, Proceedings of SIGGRAPH '97, pages 303–306, August 1997.

83. H. Pfister, J. Hardenbergh, J. Knittel, H. Lauer, and L. Seiler. The VolumePro real-time ray-casting system. In Computer Graphics, SIGGRAPH 99 Proceedings, pages 251–260, Los Angeles, CA, August 1999.

84. H. Pfister and A. Kaufman. Cube-4 - a scalable architecture for real-time volume rendering. In 1996 ACM/IEEE Symposium on Volume Visualization, pages 47–54, San Francisco, CA, October 1996.

85. C. Rezk-Salama, K. Engel, T. Bauer, T. Ertl, and G. Greiner. Interactive volume rendering on standard PC graphics hardware using multi-textures and multi-stage rasterization. EG Hardware Workshop 2000, 2000.

86. R. A. Reynolds, G. Frieder, and D. Gordon. Back-to-front display of voxel-based objects. IEEE Computer Graphics & Applications, pages 52–59, January 1985.

87. R. A. Reynolds, D. Gordon, and L.-S. Chen. A dynamic screen technique for shaded graphics display of slice-represented objects. Computer Vision, Graphics, and Image Processing, 38(3):275–298, 1987.

88. P. Sabella. A rendering algorithm for visualizing 3D scalar fields. Computer Graphics, 22(4):59–64, August 1988.

89. H. Samet. The quadtree and related hierarchical data structures. Computing Surveys, 16(2):187–260, June 1984.

90. P. Schröder and J. B. Salem. Fast rotation of volume data on data parallel architectures. In Proceedings of Visualization '91, pages 50–57, San Diego, CA, October 1991. IEEE CS Press.

91. P. Schröder and G. Stoll. Data parallel volume rendering as line drawing. In 1992 Workshop on Volume Visualization, pages 25–31, Boston, MA, October 1992.

92. H. Shen and C. Johnson. Sweeping simplices: A fast iso-surface extraction algorithm for unstructured grids. In IEEE Visualization '95, pages 143–150, 1995.

93. P. Shirley and A. Tuchman. A polygonal approximation to direct scalar volume rendering. In 1990 Workshop on Volume Visualization, pages 63–70, San Diego, CA, December 1990.

94. L. Sobierajski, D. Cohen, A. Kaufman, R. Yagel, and D. Acker. Trimmed voxel lists for interactive surgical planning. TR 90.05.22, SUNY Stony Brook, May 1990.

95. L. Sobierajski, D. Cohen, A. Kaufman, R. Yagel, and D. Acker. Fast display method for interactive volume rendering. The Visual Computer, 1993.


96. J. Spitzer. GeForce 256 and RIVA TNT combiners. http://www.nvidia.com/Developer.

97. U. Tiede, K. H. Höhne, M. Bomans, A. Pommert, M. Riemer, and G. Wiebecke. Investigation of medical 3D-rendering algorithms. IEEE Computer Graphics & Applications, 10(2):41–53, March 1990.

98. Y. Trousset and F. Schmitt. Active-ray tracing for 3D medical imaging. In G. Marechal, editor, Proceedings of EUROGRAPHICS '87, pages 139–149. Elsevier Science Publishers, 1987.

99. H. K. Tuy and L. T. Tuy. Direct 2-D display of 3D objects. IEEE Computer Graphics & Applications, 4(10):29–33, November 1984.

100. C. Upson and M. Keeler. V-BUFFER: Visible volume rendering. Computer Graphics, 22(4):59–64, August 1988.

101. A. Van Gelder, V. Verma, and J. Wilhelms. Volume decimation of irregular tetrahedral grids. In Proceedings of Computer Graphics International, pages 222–230, 247, Canmore, Alta., Canada, June 1999.

102. A. Van Gelder and K. Kim. Direct volume rendering with shading via three-dimensional textures. In R. Crawfis and Ch. Hansen, editors, ACM Symposium on Volume Visualization '96, pages 23–30, 1996.

103. J. Terwisscha van Scheltinga, J. Smit, and M. Bosma. Design of an on-chip reflectance map. In Proc. of the 10th EG Workshop on Graphics Hardware, pages 51–55, Maastricht, The Netherlands, August 1995.

104. T. van Walsum, A. J. S. Hin, J. Versloot, and F. H. Post. Efficient hybrid rendering of volume data and polygons. Second EUROGRAPHICS Workshop on Visualization in Scientific Computing, April 1991.

105. D. Voorhies and J. Foran. State of the art in data visualization. pages 163–166, July 1994.

106. R. Westermann and T. Ertl. Efficiently using graphics hardware in volume rendering applications. In Computer Graphics, Proc. of ACM SIGGRAPH, pages 169–177, Orlando, FL, USA, August 1998.

107. L. Westover. Interactive volume rendering. In C. Upson, editor, Proceedings of the Chapel Hill Workshop on Volume Visualization, pages 9–16, Chapel Hill, NC, May 1989. University of North Carolina at Chapel Hill.

108. L. Westover. Footprint evaluation for volume rendering. In Computer Graphics, Proceedings of SIGGRAPH 90, pages 367–376, August 1990.

109. J. Wilhelms. Decisions in volume rendering. In State of the Art in Volume Visualization, volume 8, pages I.1–I.11. ACM SIGGRAPH, Las Vegas, NV, Jul./Aug. 1991.

110. J. Wilhelms, A. Van Gelder, P. Tarantino, and J. Gibbs. Hierarchical and parallelizable direct volume rendering for irregular and multiple grids. In Proceedings of Visualization, pages 57–64, San Francisco, CA, October 1996.

111. J. Wilhelms and A. Van Gelder. Octrees for faster iso-surface generation. ACM Transactions on Graphics, 11(3):201–297, July 1992.

112. P. Williams and N. Max. A volume density optical model. In ACM Workshop on Volume Visualization '92, pages 61–69, 1992.

113. P. L. Williams. Interactive splatting of nonrectilinear volumes. In Proceedings Visualization, pages 37–44, Boston, October 1992.

114. P. L. Williams. Visibility ordering meshed polyhedra. ACM Transactions on Graphics, 11(2):103–126, 1992.

115. O. Wilson, A. Van Gelder, and J. Wilhelms. Direct volume rendering via 3D textures. Technical Report UCSC-CRL-94-19, University of California, Santa Cruz, 1994.

116. C. M. Wittenbrink. Irregular grid volume rendering with composition networks. In Proceedings of IS&T/SPIE Visual Data Exploration and Analysis V, volume 3298, pages 250–260, San Jose, CA, January 1998. SPIE. Available as Hewlett-Packard Laboratories Technical Report HPL-97-51-R1.

117. C. M. Wittenbrink. CellFast: Interactive unstructured volume rendering. Technical Report HPL-1999-81(R1), Hewlett-Packard Laboratories, July 1999. Appeared in 1999 IEEE Visualization Late Breaking Hot Topics.

118. C. M. Wittenbrink, M. E. Goss, H. Wolters, and T. Malzbender. Interactive unstructured volume rendering and classification. Technical Report HPL-2000-13, Hewlett-Packard Laboratories, January 2000.

119. C. M. Wittenbrink, T. Malzbender, and M. E. Goss. Opacity-weighted color interpolation for volume sampling. In Symposium on Volume Visualization, pages 135–142, Research Triangle Park, NC, USA, October 1998.

120. R. Yagel. Efficient Methods for Volumetric Graphics. PhD thesis, State University of New York at Stony Brook, December 1991.

121. R. Yagel. The flipping CUBE: A device for rotating 3D rasters. 6th Eurographics Hardware Workshop, September 1991.

122. R. Yagel, D. Cohen, and A. Kaufman. Discrete ray tracing. IEEE Computer Graphics & Applications, pages 19–28, September 1992.


123. R. Yagel, D. Cohen, A. Kaufman, and Q. Zhang. Volumetric ray tracing. TR 91.01.09, Computer Science, SUNY at Stony Brook, 1991.

124. R. Yagel and A. Kaufman. Template-based volume viewing. Computer Graphics Forum, Proceedings Eurographics, 11(3):153–167, September 1992.

125. R. Yagel, A. Kaufman, and Q. Zhang. Realistic volume imaging. In Proceedings Visualization '90, pages 226–231, San Diego, CA, October 1991. IEEE Computer Society Press.

126. R. Yagel, D. M. Reed, A. Law, P.-W. Shih, and N. Shareef. Hardware assisted volume rendering of unstructured grids by incremental slicing. In ACM/IEEE Symposium on Volume Visualization, pages 55–62, San Francisco, CA, October 1996.

127. R. Yagel and Z. Shi. Accelerating volume animation by space-leaping. OSU-CISRC-3/93-R10, Department of Computer and Information Science, The Ohio State University, March 1993.

128. K. Z. Zuiderveld, A. H. J. Koning, and M. A. Viergever. Acceleration of ray casting using 3D distance transform. In R. A. Robb, editor, Proceedings of Visualization in Biomedical Computing, pages 324–335, Chapel Hill, NC, October 1992. SPIE, Vol. 1808.
