Image guided interactive volume visualization for confocal microscopy data exploration

Tom Biddlecome^a, Shiaofen Fang^a, Ken Dunn^b and Mihran Tuceryan^a

^a Department of Computer and Information Science, Indiana University Purdue University Indianapolis, 723 W. Michigan St. SL280, Indianapolis, IN 46202

^b Department of Medicine, Indiana University Medical Center, 1120 South Drive, Fesler Hall, Room 115, Indianapolis, IN 46202

ABSTRACT

3D microscopy visualization has the potential of playing a significant role in the study of 3D cellular structures in biomedical research. Such potential, however, has not been fully realized due to the difficulties of current visualization methods in coping with the unique nature of microscopy image volumes, such as low image contrast, noise and unknown transfer functions. In this paper, we present a new 3D microscopy imaging approach that integrates volume visualization and 3D image processing techniques for interactive 3D data exploration and analysis. By embedding 3D image enhancement procedures into the volume visualization pipeline, we are able to automatically generate image-dependent transfer functions to reveal subtle features that are otherwise difficult to visualize. The approach allows users to interactively manipulate a small number of parameters to achieve desired visualization effects. Other 3D image processing techniques, such as quantification and segmentation, may also be integrated within the data exploration process for interactive image analysis.

Keywords: 3D microscopy imaging, volume visualization, image enhancement, image processing.

1. INTRODUCTION

Although microscopy has long played a significant role in the study of cell biology,^{3,2} it has only been in recent years that an appreciation for the 3D nature of cells resulted in the development of methods for acquiring 3D image volumes from microscopic samples. For many cell types, such as columnar epithelial cells and neurons, traditional 2D representations are no longer considered sufficient. Much of the development of confocal microscopy has been driven by researchers attempting to study these elaborate three dimensional systems. Although advances in confocal microscopy and image deconvolution have made it feasible to collect high resolution, 3D image volumes of thick samples such as epithelial cells, application of this technology to 3D imaging is still in its infancy. In particular, the proliferation of confocal microscopy in cell biology has generated vast amounts of image data that need to be explored and analyzed. This paper addresses such data exploration problems using advanced volume visualization and image processing techniques.

Volume visualization^9 is a new 3D computer graphics technique concerned with the abstraction, interpretation, rendering and manipulation of large volume datasets. Volume rendering algorithms,^{12,10,21} for instance, can directly display the entire volume dataset through semi-transparent images, and allow the viewer to peer inside the internal structures of the image volume in ways that surface-based graphics techniques cannot. Although volume visualization has been extensively used in many scientific and medical applications, such as scientific simulation and CT/MRI imaging, its application in 3D microscopy imaging has not been very effective. Although there exist many academic and commercial volume visualization and imaging systems, such as XCOSM by Washington University, 3D-VIEWNIX by the University of Pennsylvania, VolVis by SUNY at Stony Brook and Analyze by the Mayo Clinic, and initial research efforts have also been made recently,^{19,16} these existing approaches are primarily directed at visualizing regular volume datasets with pre-defined rendering parameters, offering inadequate data exploration capabilities. The visualization techniques of these applications appear to have been largely borrowed from those used for MRI and CT scanning, as they show few concessions to the special characteristics of microscopy images.

Part of the SPIE Conference on Image Display, San Diego, California, February 1998. SPIE Vol. 3335, 0277-786X/98/$10.00

First, fluorescently-labeled samples characteristically have low signal levels, sometimes consisting of a single photon, so that microscopy images are typically much noisier than CT or MRI images. Furthermore, since excitation of fluorescence also destroys fluorophores through photobleaching, the signal-to-noise ratio decreases with the collection of each focal plane of an image volume. The resulting low contrast and small intensity gradients make these image volumes sensitive to small changes in rendering parameters, such as the optical transfer functions that map image intensity values to colors, opacity or shading parameters. Consequently, ordinary volume visualization algorithms frequently fail to capture the delicate structures present in many cellular objects. Secondly, structures at the microscopic scale typically show higher complexity than those of the anatomic organs in CT or MRI images. This is particularly true in multi-parameter images, in which several different proteins are imaged simultaneously, each in a specific color of fluorescence. A third problem is that the structures of the objects to be examined are often partially or entirely unknown. Without prior knowledge of the structures in an image, it is difficult to determine an appropriate transfer function to generate a useful and informative rendering. As a result of noise and inappropriate transfer functions, rendering artifacts may be created and cause misleading interpretations of the dataset.

To overcome these problems, we developed a new approach that integrates 3D image enhancement tools and volume visualization techniques to automatically identify and enhance the desired levels of feature detail in a given 3D image. Instead of creating new image volumes, the image enhancement process is embedded into the visualization pipeline as part of the automatic transfer function generation step. Traditionally, the search for a transfer function is primarily a trial-and-error process; for complex microscopic structures, this is very time-consuming, inefficient and inaccurate. Our new approach allows the users to adjust only a few parameters to visualize various levels of automatically enhanced feature detail, and therefore provides an efficient and image-guided data exploration environment. As an important component of data exploration, other 3D image processing tools, such as segmentation and quantification, have also been developed and implemented in a 3D image processing package, currently being used for image analysis by the Department of Medicine at Indiana University Medical Center.

In the following, we first describe, in section 2, the microscopy image collection process and related issues. Section 3 presents the image-guided volume visualization approach, with emphasis on the integration of 3D image enhancement and volume rendering. A 3D image processing package is described in section 4. We conclude the paper in section 5 with further discussion and future work.

2. IMAGE COLLECTION

Collecting a 3D image requires that a series of 2D images be collected, each of a particular focal plane. Images may be collected by confocal microscopy, in which case the optics ensure that an image of a given focal plane has minimal "contamination" from fluorescence in adjacent focal planes, thus providing adequate vertical discrimination. Optical sections may also be collected by wide-field microscopy, with out-of-focus information removed by image deconvolution, a technique which mathematically reverses the image "smearing" created by the finite aperture of an objective lens. Figures 1(a) and (b) show a comparison of fluorescence images of the microtubule cytoskeleton of canine kidney cells collected by conventional wide-field microscopy and confocal fluorescence microscopy, respectively. In either case, however, the sample is repeatedly illuminated with intense light, which can be toxic to living samples, and more generally decreases the image signal-to-noise ratio by photobleaching fluorescent labels. This problem is compounded in 3D microscopy. In conventional fluorescence microscopy, the entire depth of the sample is illuminated with light that both excites and destroys fluorophores through photo-oxidation. When one attempts to collect serial optical sections of a sample volume, the images are characterized by an increase in the amount of photobleaching with each sequential plane, such that the later planes have a much worse signal-to-noise ratio than the initially collected planes. As a consequence, the ability to extract information from microscopy images is primarily limited by the signal-to-noise ratio of the images. Ironically, for most researchers these limitations have restricted confocal microscopy to the study of thin cells.

In the microscopy laboratory of the Department of Medicine at Indiana University Medical Center, a Bio-Rad MRC-1024 confocal microscope is used with a Krypton-Argon laser, whose emission lines at 488, 568 and 647 nm permit simultaneous efficient excitation of three different fluorophores. Up to three fluorescence images may be collected simultaneously, with the color distribution between the detectors determined by user-replaceable barrier and dichroic filters. Various image-scale parameters may be adjusted to achieve the best signal-to-noise ratio. The adjustable parameters include the spatial sampling density, pixel size, and vertical sampling frequency, which may be varied using an adjustable micro-stepping focus motor with a minimum increment as small as 0.1 micron.

Signal-to-noise ratios may also be manipulated by changing signal level by varying illumination, PMT gain and size of the confocal pinhole, by averaging several frames using the PMTs in analog mode, or by accumulating the fluorescence of the image in a photon-counting mode. Aside from affecting signal-to-noise ratios, each of these variables has specific consequences, including changes in phototoxicity, photochemistry, image acquisition speed and optical properties.

For low-level fluorescence microscopy, a wide-field microscope equipped with a cooled CCD detector is more efficient than a confocal microscope equipped with a PMT.^{18} When combined with digital deconvolution, this approach is capable of generating optical sections in samples with weak fluorescence, but can also be exploited to minimize illumination, and thus phototoxicity and photobleaching, or to acquire images more rapidly than is possible with a confocal microscope. The wide-field microscope we are using is a Nikon Diaphot 300 equipped for epifluorescence imaging via a Princeton Instruments Pentamax cooled CCD detector. This system is equipped with a single, 4-way dichroic mirror and excitation and emission filters suitable for detecting blue, green, red and near-infrared dyes. The role of image scale may be explored with various objectives and by pixel-binning, a process in which groups of adjacent pixels behave as a single pixel. Binning has the effect of multiplying pixel sensitivity while reducing the pixel resolution. Vertical sampling frequency can be determined by means of a micro-stepping focus controller with a repeatable accuracy of 100 nm. Signal-to-noise ratio may be manipulated by changing signal levels through pixel-binning, modulating illumination and modulating the duration of image collection.

Digital deconvolution, using the Applied Precision Deltavision software, can also be applied to image volumes collected with either the confocal microscope or with the wide-field CCD imaging system. It is a way of reclaiming resolution and of increasing the signal-to-noise ratio of collected images. Since deconvolution requires characterization of the point spread function for each experimental situation, a vertical series of fluorescence images of sub-resolution sized fluorescent microspheres, whose fluorescent properties match those of the fluorophore in the sample, must be collected for each experiment.

3. IMAGE-GUIDED VOLUME VISUALIZATION

3.1. Image-enhanced volume rendering

Two commonly used techniques in volume visualization are surface rendering and volume rendering. In surface rendering,^{13} iso-surfaces are extracted from the volume data based on an intensity threshold, and then rendered using conventional surface graphics techniques. Since the intensity threshold is pre-defined, the optical properties (the intensity values) of the surfaces to be displayed need to be known in advance. Although surface boundary information is often available for CT and MRI scans (e.g. human organs), this is often not the case for microscopy

Figure 1. (a) image from wide-field microscope; (b) image from confocal microscope

images. The problem is even more difficult for volume rendering,^{12,21,20} where voxels are first mapped into semi-transparent blocks with different opacity and color values, and then rendered into a semi-transparent image. A crucial component in this process is the transfer function that maps the image intensity values into color and opacity values to composite the final image. Inappropriate transfer functions may generate confusing and noisy images that do not provide useful structural information for data exploration purposes. The common trial-and-error approach to transfer function searching is an extremely time-consuming task for microscopy data exploration, where we often know very little about the optical properties of the underlying cellular objects. In addition, the transfer function is intrinsically image-dependent, i.e. such trial-and-error searching has to be done for every new image volume. One previous effort on transfer function searching^7 uses a stochastic search technique to generate many image samples with different transfer functions, and lets the users select the proper ones based on visual examination of the sample images. But since the transfer function model (population) is pre-determined without image analysis, and the representativeness of the sample images is also in question, the searching process is still fairly arbitrary. Our goal in this work is to develop an image-dependent, automatic and interactive method, as an integrated component of the visualization pipeline, for generating transfer functions that maximally enhance the structural features at the desired level of detail. Since surface rendering can also be achieved by volume rendering using a transfer function that highlights boundary voxels,^{11} we will focus mainly on the transfer function searching problem in volume rendering in our discussion.

This problem, however, is not entirely new. Intensive research efforts have been made over the last four decades in two-dimensional (2D) image processing.^{15} Many of the techniques developed in image processing, such as image enhancement and edge detection, serve a similar purpose in a 2D image domain. Although some of the image processing techniques have been applied to 3D applications for CT and MRI images,^{8,17,5,6,22} very little has been done in integrating 3D image processing tools into the visualization pipeline for more intelligent 3D rendering.

Combining 3D image processing and volume visualization, a natural solution is to apply a sequence of 3D image enhancement tools to a microscopy image volume prior to rendering. The resulting image volume can then be directly composited and/or shaded during the post-enhancement rendering. Such a two-step process, however, requires the reconstruction of the usually huge image-enhanced volumes, which is not only memory intensive, but also inefficient, since not all voxels in the image-enhanced volume will be needed for rendering. Moreover, data exploration is an interactive process in which users need to experiment with various rendering parameters for their subjective visualization goals; reconstructing large image volumes in an interactive application is not practical. In our approach, as an effort to integrate 3D image enhancement and visualization, we developed a new volume rendering method that embeds the image enhancement process into the rendering pipeline, and therefore avoids the volume reconstruction process. Various 3D extensions of 2D image enhancement methods and their integration with the volume rendering pipeline are described in the following subsections.

3.2. Voxel-based enhancement operations

Voxel-based image enhancement operations apply some function to each voxel's intensity value, individually, to generate a new value. They are essentially the 3D extensions of the point operations in 2D image enhancement.^{15} As in the 2D case, a voxel-based image enhancement process generates an intensity look-up table that maps the intensity values of the original volume to new values. This look-up table can be naturally embedded into the volume rendering pipeline as follows:

Let the intensity mapping function represented by the look-up table be f: R → R, which is pre-computed by an image enhancement procedure. The volume rendering algorithm (e.g. a raycasting algorithm^{12}) is then directly applied to the original volume with two modifications:

1. Using f as the opacity transfer function of the original volume. This is equivalent to applying a linear opacity transfer function to the image-enhanced volume. It also implies that we consider the image-enhanced volume a more accurate representation of the materials we want to see at the given level of detail (determined by the image enhancement parameters).

2. The shading computation also needs to be modified, since the gradient function (for shading) should now be computed from the image-enhanced volume. Let g: R^3 → R be the intensity field of the original volume. Then the intensity of the enhanced volume at any point P is h(P) = f(g(P)). The gradient function for the enhanced volume can be computed by

∇h(P) = f'(g(P)) ∇g(P)

Figure 2. (a) Intensity modification; (b) Histogram equalization

where ∇g(P) = (∂g/∂x, ∂g/∂y, ∂g/∂z) is the gradient of function g computed directly from the original volume, and f' is the derivative of function f, which may be estimated from the look-up table by a simple central difference formula.
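The chain-rule embedding above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the authors' implementation; the names (enhanced_gradient, lut) are ours, and it assumes an 8-bit volume with a 256-entry look-up table.

```python
import numpy as np

def enhanced_gradient(volume, lut):
    """Gradient of the enhanced volume h = f(g) via the chain rule,
    grad h(P) = f'(g(P)) * grad g(P), without materializing h."""
    # f' estimated from the look-up table by central differences
    f_prime = np.gradient(lut.astype(np.float64))
    # grad g computed directly from the original volume (z, y, x axis order)
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    scale = f_prime[volume]          # f'(g(P)) evaluated at every voxel
    return scale * gz, scale * gy, scale * gx

# Sanity check: with an identity look-up table (f(v) = v, so f' = 1),
# the enhanced gradient must equal the gradient of the original volume.
vol = np.tile(np.arange(8, dtype=np.uint8), (8, 8, 1))   # intensity ramp along x
lut = np.arange(256, dtype=np.float64)
gz, gy, gx = enhanced_gradient(vol, lut)
```

Because only the 256-entry table is differentiated, the per-voxel cost is a single table lookup and multiply on top of the ordinary gradient computation.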

It is easy to see that almost all point operations in 2D image enhancement^{15} can be easily applied to 3D images. Among them, two commonly used operations are intensity modification and histogram modification.

• Intensity modification: One or more intensity intervals may be highlighted to generate enhanced exposure within the given intensity ranges. An example is shown in Figure 2(a), where the intensity interval [t1, t2] is stretched to display more details within this intensity range. Here t1, t2 and r are input parameters, but t1 and t2 normally come from the output of some other image processing procedure (e.g. boundary detection), while r is often used as a user-adjustable parameter.

• Histogram modification: An image volume's histogram curve represents the relative frequency with which each intensity value occurs in the image. Modification operations can be applied to the histogram curve of the image volume to redistribute its intensity values to match another histogram (user-defined or computed from another image). One particularly useful modification operation is histogram equalization, in which the histogram curve is made as flat as possible while its overall curve shape is maintained,^{15} as shown in Figure 2(b). This normally increases the contrast in areas with a large number of low intensity voxels.

3.3. Spatial enhancement operations

Unlike the voxel-based operations, where a single intensity-to-intensity mapping is generated, a spatial enhancement method derives the new intensity value of a voxel from its neighborhood voxels, i.e. the result is region-dependent. The 3D versions of these methods can be derived by straightforward 3D extensions of the spatial operations in 2D image enhancement. As in the 2D case, spatial operations can be classified into sharpening and smoothing operations.

• Smoothing operations: We use smoothing operations primarily to remove image noise. We sometimes also want to remove very small feature details in order to better present the larger features. A simple method is the lowpass filtering operation, which uses a 3D lowpass mask to smooth out high frequency components from the image. By carefully choosing the mask, certain noise or small unwanted details can be removed. A drawback of this operation, however, is that it tends to blur edges and surface boundaries as well. A median filtering operation can largely avoid this problem while still achieving effective noise reduction. For a given voxel, the median filtering operation takes all the intensity values in a neighborhood, sorts them by magnitude, and then uses the middle-ranked (median) value as the new value. This is particularly effective in removing the so-called "salt-and-pepper" type of noise.

• Sharpening operations: Sharpening operations aim to increase the exposure of geometric features by emphasizing the high frequency components of the images.^{15} This can be achieved by applying a highpass filter

to each voxel to generate the new value. A simple but very useful operation is the subtraction of a Laplacian operator:

f(x,y,z) = g(x,y,z) - ∇²g(x,y,z)

Its discrete form can be represented by an n × n × n mask. For instance, a 3 × 3 × 3 mask represents:

f(i,j,k) = 7g(i,j,k) - [g(i+1,j,k) + g(i-1,j,k) + g(i,j+1,k) + g(i,j-1,k) + g(i,j,k+1) + g(i,j,k-1)]

Most other 2D highpass filters^{15} can also be extended to 3D with similar 3D masks. Nonlinear highpass filters also exist for image sharpening purposes.^{15} Since sharpening operations tend to enhance voxels with a high frequency intensity field, including noise, which usually exhibits strong high frequency signal characteristics, a smoothing operation may need to be applied first to remove or reduce the noise before a sharpening operation is applied.

In both smoothing and sharpening operations, the size of the mask (i.e. the size of the neighborhood) can be an adjustable parameter for users to experiment with to reach their subjective visualization goals. Parameters may also be used to change filters and masks for different visualization effects, and to select different levels of feature detail.
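The smooth-then-sharpen sequence can be sketched with SciPy's ndimage filters (our illustration, assuming SciPy is available; the enhance function is hypothetical). The 3×3×3 mask below is exactly the discrete "subtract the Laplacian" operator above: 7 at the center voxel and -1 at each of the six face neighbors.

```python
import numpy as np
from scipy import ndimage

# 3x3x3 sharpening mask: f = 7*g(center) - sum of the six face neighbors
mask = np.zeros((3, 3, 3))
mask[1, 1, 1] = 7.0
for dz, dy, dx in [(0, 1, 1), (2, 1, 1), (1, 0, 1),
                   (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
    mask[dz, dy, dx] = -1.0

def enhance(volume, median_size=3):
    """Median smoothing first (to suppress salt-and-pepper noise),
    then Laplacian-subtraction sharpening."""
    smoothed = ndimage.median_filter(volume.astype(np.float64),
                                     size=median_size)
    return ndimage.convolve(smoothed, mask, mode='nearest')
```

Note that the mask coefficients sum to 1, so a constant (featureless) region passes through unchanged; only intensity variations are amplified.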

The spatial enhancement operations can also be integrated into a volume rendering process, where the new voxel values are linearly mapped to opacity and color values for rendering. Since the result of spatial enhancement is not a simple intensity mapping (it depends on the intensity distribution in the neighborhood of the voxel being computed), the new voxel values have to be dynamically computed within the visualization process. This leads to a higher cost for each reference to a new voxel value, and thus may prevent us from frequently accessing the new voxel values during rendering. Consequently, shading based on the enhanced volume becomes very expensive and inefficient due to the repeated computation of new voxel values in computing the gradient function of the enhanced volume. Although with a linear filter, such as the Laplacian operator, a Jacobian matrix may be computed and then multiplied with the gradient of the original volume to generate the gradient of the enhanced volume, the same cannot easily be done for nonlinear filters (e.g. the median filter). There are, however, two alternatives that avoid this problem:

• Shading may not be necessary for many cellular images where lighting is not considered important. In this case, the new intensity values are directly mapped, using a simple linear transfer function, to opacity and color values that are then composited and blended to form the final image. In other words, the image enhancement operation is directly used as the transfer function for both color and opacity values. One advantage of this approach is that the final color and opacity values of each voxel do not depend on the viewing angle, and therefore need to be computed only once for all viewing directions. Consequently, the computed color and opacity volume (RGBA volume) can be used as a 3D texture. This enables the use of 3D texture mapping hardware, available in some high-end graphics workstations, for very fast (potentially real-time) hardware-assisted volume rendering.^4 For interactive data exploration, such rendering speed is crucial.

• If lighting is important (e.g. for the rendering of surface boundaries), another alternative is to simply use the gradient function of the original volume for shading, but apply the image enhancement operation as the opacity transfer function. Because the contribution of each voxel to the image volume is mainly determined by its material opacity (or density), this will still serve the enhancement purpose. It may, however, generate potentially unnatural lighting effects. All the volume rendering images shown in this paper are generated using this approach.
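The first, unshaded alternative amounts to precomputing a view-independent RGBA volume. A minimal sketch (our own; make_rgba_volume is a hypothetical name, and a single tint color stands in for a full color transfer function):

```python
import numpy as np

def make_rgba_volume(volume, lut, color=(1.0, 1.0, 1.0)):
    """Use the enhancement look-up table as the transfer function for
    both opacity and color. The result is view-independent, so it can
    be uploaded once as a 3D texture for hardware-assisted rendering."""
    alpha = lut[volume] / max(lut.max(), 1e-12)   # enhanced value -> opacity in [0, 1]
    rgba = np.empty(volume.shape + (4,), dtype=np.float32)
    rgba[..., 0] = color[0] * alpha               # opacity-weighted (associated) colors
    rgba[..., 1] = color[1] * alpha
    rgba[..., 2] = color[2] * alpha
    rgba[..., 3] = alpha
    return rgba

vol = np.array([[[0, 128, 255]]], dtype=np.uint8)
rgba = make_rgba_volume(vol, np.arange(256, dtype=np.float64))
```

Storing opacity-weighted colors matches the compositing convention used by 3D texture hardware, and the table only needs to be reapplied when the enhancement parameters change, not per view.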

An example of using 3D spatial image enhancement in the rendering of a microtubule volume is shown in Figure 4. Microtubules are the molecular tracks of cells. Intracellular organelles or compartments are spatially distributed inside the cell, but this spatial distribution is dynamic. The organelles can move along these tracks via molecular motors. In addition, these tracks are dynamic structures that can be reorganized quickly to suit metabolic and cell cycle demands. Figure 4(a) is rendered with a linear opacity transfer function and without any image enhancement. Figure 4(b) is rendered with a median filter applied; the result shows much clearer individual cell and nuclear structures. In Figure 4(c), an additional highpass filter, the Laplacian filter, is applied after the median filter; the result shows the tubule structures that are not clearly visible in the other renderings.


Downloaded from SPIE Digital Library on 31 May 2011 to 134.68.140.125. Terms of Use: http://spiedl.org/terms



Figure 3. The intensity curve modification for surface boundary rendering

3.4. Boundary detection

Surface rendering13 displays the surface boundaries of the 3D objects within a volume dataset by first extracting the boundaries as polygonal surfaces, which can then be rendered using conventional graphics techniques. Similar effects can also be achieved through volume rendering by emphasizing the part of the intensity curve within the boundary intensity ranges, and at the same time de-emphasizing the intensities in other regions. This can be easily done using the intensity modification operation described earlier. In either case, the algorithm needs to know, beforehand, the intensity thresholds/ranges of the object boundaries for surface extraction or intensity modification. Unfortunately, such object boundary information is often not available in microscopy images, where sometimes very little is known about the geometric structures and optical properties of the underlying 3D objects. A trial-and-error approach is clearly too time-consuming and inaccurate in this case.

Using the integrated approach, we can first apply a 3D boundary detection operation to automatically find the intensity ranges of all object boundaries before applying the intensity modification operation. Figure 3 shows an example where the entire intensity curve is first compressed to a very low range (curve c1), and the boundary intensity ranges, [t1, t2] and [t3, t4], are then lifted and stretched (to curve c2) to highlight the boundary surfaces. Since the boundary detection is a pre-processing step used only to compute the boundary intensity ranges, the rendering process is essentially the same as with any voxel-based enhancement operation, i.e. the boundary voxels receive relatively high opacities, and other voxels are rendered with very low opacities. Figure 5 shows a rendering example of a Golgi complex using this approach. The Golgi complex is an intracellular organelle that participates in protein export. Inside this compartment, newly synthesized proteins have sugar molecules added and have a chance to fold into a mature conformation. Researchers at Indiana University Medical Center (IUMC) have been using volume visualization and 3D image analysis techniques to study the effects of cellular injury on the protein maturation pathway, and have therefore examined the effect of injury on the Golgi compartment.
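The compress-then-lift curve modification of Figure 3 amounts to a piecewise opacity transfer function. A minimal sketch, with the low and high opacity levels chosen arbitrarily for illustration:

```python
import numpy as np

def boundary_transfer(intensities, boundary_ranges, low=0.05, high=0.9):
    """Compress the whole intensity curve to a low opacity (curve c1),
    then lift the detected boundary ranges (curve c2)."""
    opacity = np.full(intensities.shape, low, dtype=np.float32)
    for t_lo, t_hi in boundary_ranges:
        mask = (intensities >= t_lo) & (intensities <= t_hi)
        opacity[mask] = high          # boundary voxels get high opacity
    return opacity

# Example: two detected boundary ranges, analogous to [t1, t2] and [t3, t4]
vol = (np.arange(512) % 256).reshape(8, 8, 8)
op = boundary_transfer(vol, [(40, 60), (180, 210)])
```

A real implementation would likely ramp the curve smoothly at the range edges rather than using a hard step, but the principle is the same.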

3D boundary detection is essentially the 3D extension of the edge detection problem in 2D image processing. Since our purpose is only to find the intensity values of the boundary voxels, many of the edge tracking and connectivity problems of edge detection do not arise here. In general, we consider voxels with high gradient magnitudes to be candidate boundary voxels. A threshold determines when a voxel's gradient magnitude is large enough to qualify it as a boundary voxel. This threshold can also serve as an adjustable system parameter to control the level of detail in the surface rendering. Other edge detection techniques14,1 can also be used for 3D boundary detection.
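The gradient-magnitude criterion can be sketched as below. Here the adjustable threshold is expressed as a quantile of the gradient magnitudes, which is one plausible way to expose a single level-of-detail parameter; the paper does not specify how its threshold is chosen.

```python
import numpy as np

def boundary_intensity_range(volume, grad_fraction=0.95):
    """Return the intensity range of likely boundary voxels: voxels
    whose gradient magnitude exceeds a threshold (here a quantile,
    acting as the adjustable level-of-detail parameter)."""
    gz, gy, gx = np.gradient(volume.astype(np.float32))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    thresh = np.quantile(mag, grad_fraction)
    boundary = volume[mag >= thresh]      # intensities of candidate boundary voxels
    return float(boundary.min()), float(boundary.max())

# Synthetic test volume: a bright cube in a dark background
z, y, x = np.indices((16, 16, 16))
cube = (z > 4) & (z < 11) & (y > 4) & (y < 11) & (x > 4) & (x < 11)
vol = np.where(cube, 200.0, 10.0)
lo, hi = boundary_intensity_range(vol)
```

The returned `[lo, hi]` range is exactly what the intensity modification step of Figure 3 needs as input.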

One major problem with this intensity-thresholding-based surface rendering is that it assumes that all voxels with intensity values within the detected range are boundary voxels (and therefore renders them with high opacities). This is largely true for CT images (e.g. bone surfaces clearly have different intensity ranges than tissues), but it may not hold in microscopy images, where the intensity distribution is much more complicated and diverse. The algorithm is thus likely to display many non-boundary voxels, which may interfere with and destroy the coherence of the true boundary surfaces. A solution to this problem is to dynamically compute the boundary voxels during rendering based on a thresholding of the gradient magnitudes. In other words, the boundary detection is no longer a pre-processing step, but an integrated part of the volume rendering pipeline. Since boundary detection involves




Figure 5. (a) Rendering with a linear opacity transfer function; (b) highlighting the surface boundaries with boundary detection and intensity modification.


Figure 4. (a) Volume rendering of the original microtubule volume with a linear opacity transfer function; (b) median filtering; (c) median filtering followed by highpass filtering with a Laplacian mask.




gradient computation and thresholding, which is based on a neighborhood of voxel values, it can be considered another spatial enhancement operation. Consequently, volume rendering integrated with boundary detection is more costly than the intensity-thresholding-based surface rendering (a voxel-based enhancement operation).

4. 3D IMAGE PROCESSING AND ANALYSIS

Image analysis refers to any type of processing of the input 3D microscopy image used to extract useful information from the data. This includes preprocessing of the images for 3D visualization algorithms, segmentation and boundary detection (i.e., object detection), computing various statistics about the detected objects, performing object classification, and analyzing the various 3D structures existing in the cells. An experimental package for 3D microscopy image analysis, CM-Sieve (Confocal Microscopy — Special Interactive Environment for Volume Enhancement), has been developed and is currently being used for microscopy volume quantification and segmentation in the Department of Medicine at Indiana University Medical Center. Aside from providing 3D image enhancement tools for the integrated volume rendering approach described in the last section, it also performs many other important image analysis and processing tasks.

CM-Sieve was specifically designed for analyzing 3D data sets from confocal microscopy. These volumes present unique challenges for image analysis, and confocal data sets pose specialized problems for generic segmentation algorithms, arising from both image collection and fluorescent tagging techniques. The structure being studied may be below the resolution of a single pixel, so that one or a few pixels identify a single subresolution object, or the fluorescent probe may be so strong that the image appears noisy.

Interactive exploration of confocal data sets often requires that voxel data be processed to identify regions which represent objects or parts of objects. 3D image segmentation is the process by which these objects are extracted from regions of a three-dimensional volume of voxels. An image may be a 2D array of pixel data, a 3D array of voxel data, or a 4D array of tixel data, all containing objects of interest to the investigator. Segmentation is commonly used for removal of background noise and for enhancement of all or part of the image. The segmentation of the rastered image may be based on the intensity of voxel data, gradient curves, grouping of a band of similar intensity values, or other criteria unique to the particular data set.

Segmentation: CM-Sieve allows the extraction and enhancement of 3D objects from volumetric data based on high and low thresholding, intensity clustering, gradient information, histogram equalization, and an array of other techniques. The program is also capable of segmenting out objects from one image while taking into account data in a related file. Occasionally it is necessary to segment objects from a volume that has been treated by other filters. This may be required when data sets are treated by a median filter for the explicit purpose of background subtraction or noise reduction, which modifies the pixel values and thus results in lossy data. For example, Bio-Rad's confocal laser scanning microscope (CLSM) limits users to 8-bit resolution (or 256 possible intensity values). After background subtraction, values at or near the upper bound are substantially reduced. This may lead to erroneous data, because a pixel value of 255 in the original image is saturated and possibly not the real value, since the upper intensity threshold was reached. CM-Sieve has the ability to process a data set while analyzing the source image. Therefore, if the investigator chooses to delete objects with original pixels at or near the saturation point, the data from the original volumetric data set is available for comparison and evaluation.
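The idea of segmenting a filtered volume while keeping the original source available for saturation checks can be sketched as follows. This is an illustrative threshold-plus-connected-components sketch, not CM-Sieve's actual algorithm; the threshold values and the 8-bit saturation level of 255 are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_and_check_saturation(filtered, original, t_lo, t_hi, sat=255):
    """Threshold the filtered volume, label connected 3D objects,
    and flag objects whose ORIGINAL voxels reach the saturation
    value (e.g. 255 for 8-bit confocal data)."""
    mask = (filtered >= t_lo) & (filtered <= t_hi)
    labels, n = ndimage.label(mask)           # connected-component labeling
    flagged = [i for i in range(1, n + 1)
               if original[labels == i].max() >= sat]
    return labels, n, flagged

# Two synthetic objects; one contains a saturated voxel in the source data
filtered = np.zeros((8, 8, 8), dtype=np.float32)
filtered[1:3, 1:3, 1:3] = 100
filtered[5:7, 5:7, 5:7] = 100
original = filtered.copy()
original[1, 1, 1] = 255                       # saturated voxel in the first object
labels, n, flagged = segment_and_check_saturation(filtered, original, 50, 200)
```

The investigator could then delete or down-weight the flagged objects while the untouched source volume remains available for comparison.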

CM-Sieve can also perform edge detection and volume isolation of objects found within the 3D data sets. The advantage of the edge detection algorithm is that the pixel intensity ranges at the edges of objects can be determined. This enables us to accurately perform ray-cast surface renderings from edge intensity values. If the program is run with more aggressive parameters, the edge intensity ranges inside the objects are calculated as well. Thus not only can the outside edge of an object be rendered, but the internal structures within objects can also be located and volume rendered. Using this method, CM-Sieve allows parameters for grayscale modification to be determined. Since correct intensity values of sub-objects are determined, CM-Sieve can statistically analyze the intensity ranges, intensity values, and volumes of these sub-objects. A watershed-type algorithm has also been implemented in CM-Sieve. It allows us to reduce objects to their maximal or near-maximal pixel intensities; the objects are then re-grown based on heuristic methods and intensity gradients. This method allows a more accurate segmentation of objects where gradient information alone is not enough.

Image processing in parallel channels: In confocal microscopy, the ability to collect multiple parallel channels of data results in multiple data sets of diverse information from the same physical space. CM-Sieve uses a multiple

Downloaded from SPIE Digital Library on 31 May 2011 to 134.68.140.125. Terms of Use: http://spiedl.org/terms

Page 10: Image guided interactive volume visualization for confocal ...tuceryan/research/Microscopy/Biddlecome1998.pdfKeywords 3D microscopy imaging, volume visualization, image enhancement,

window graphical user interface to display separate channels from the CLSM using object-wise colocalization and object-wise differentiation. Researchers now have the ability and the tools necessary for object-wise, in addition to pixel-wise, examination of the 3D data. Therefore, separate channels from confocal data sets can be merged with the explicit purpose of identifying colocal as well as non-colocal objects and their relationships. This methodology is already being used in endocytosis studies to identify the transport mechanisms of transferrin and LDL.
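Object-wise (rather than pixel-wise) colocalization can be sketched by labeling each channel and testing per-object overlap. This is an illustrative sketch under the assumption that each channel has already been reduced to a binary mask; CM-Sieve's actual criteria are not specified in this detail.

```python
import numpy as np
from scipy import ndimage

def objectwise_colocalization(ch1_mask, ch2_mask):
    """Label objects in two channels and report, per channel-1
    object, whether it overlaps any channel-2 object."""
    lab1, n1 = ndimage.label(ch1_mask)
    lab2, _ = ndimage.label(ch2_mask)
    return {i: bool((lab2[lab1 == i] > 0).any()) for i in range(1, n1 + 1)}

# Channel 1 has two objects; channel 2 overlaps only the first
ch1 = np.zeros((6, 6, 6), dtype=bool)
ch1[0:2, 0:2, 0:2] = True
ch1[4:6, 4:6, 4:6] = True
ch2 = np.zeros((6, 6, 6), dtype=bool)
ch2[1:3, 1:3, 1:3] = True
colocal = objectwise_colocalization(ch1, ch2)
```

The result is a per-object verdict, which is what distinguishes object-wise analysis from simple per-voxel channel overlap statistics.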

Quantification: CM-Sieve segments and quantifies objects from 3D data sets. The quantification determines the volume; minimum, maximum, total, and relative intensities; gradient information; centroid; and object location (coordinates). This gives the investigator the unique ability to quantitatively analyze and compare the objects within the data set.
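A subset of these per-object measurements can be sketched with standard labeled-array operations. This is a minimal sketch of the quantification step, not CM-Sieve's code, and it covers only volume, intensity extremes/total, and centroid.

```python
import numpy as np
from scipy import ndimage

def quantify_objects(volume, labels, n):
    """Per-object voxel count, min/max/total intensity, and
    intensity-weighted centroid."""
    stats = []
    for i in range(1, n + 1):
        voxels = volume[labels == i]
        stats.append({
            "volume": int(voxels.size),
            "min": float(voxels.min()),
            "max": float(voxels.max()),
            "total": float(voxels.sum()),
            "centroid": ndimage.center_of_mass(volume, labels, i),
        })
    return stats

# A single 2x2x2 object of uniform intensity 10
volume = np.zeros((8, 8, 8), dtype=np.float32)
volume[2:4, 2:4, 2:4] = 10.0
labels, n = ndimage.label(volume > 0)
stats = quantify_objects(volume, labels, n)
```

Relative intensities and gradient information would be computed analogously, normalizing against whole-volume statistics or reusing the gradient magnitudes from the boundary detection step.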

5. CONCLUSIONS

We have presented a new volume visualization approach for 3D microscopy data exploration. Because of the unique nature of microscopy image volumes, such as noise and unknown transfer functions, 3D image enhancement procedures are applied before the final rendering. To avoid constructing new image volumes and to enable interactive visualization, the image enhancement procedures are integrated into the visualization pipeline and serve as an automatic transfer function searching process. Both voxel-based and spatial image enhancement operations are discussed, and combinations of these operations may be used to achieve desired visualization goals. One important issue in this integration is that the volume rendering algorithm needs to be modified to accommodate the dynamic image enhancement computation. Since most enhancement operations carry a number of parameters, users are able to adjust these parameters to obtain desired visualization effects and to select different levels of feature detail. It is our belief that static or pre-defined sequences of renderings cannot provide sufficient insight into a complicated image volume; it is the dynamic and interactive exploration process, with guided user control, that provides the most comprehensive perspective on the dataset. We are currently developing such an interactive environment for microscopy data exploration based on the approach described in this paper. We also plan to embed many other 3D image processing tools, such as segmentation, registration and morphing, into volume visualization to accomplish more complicated image analysis tasks, such as object manipulation and quantification.

REFERENCES

1. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.
2. K. Dunn and F. Maxfield. Ratio imaging instrumentation. Methods in Cell Biology, 56:217–236, 1998.
3. K. Dunn, S. Mayor, J. Meyer, and F. Maxfield. Applications of ratio fluorescence microscopy in the study of cell physiology. FASEB J., 8:573–582, 1994.
4. Shiaofen Fang, Rajagopalan Srinivasan, Su Huang, and Raghu Raghavan. Deformable volume rendering by 3D texture mapping and octree encoding. In Proc. of IEEE Visualization'96, San Francisco, pages 73–80, October 1996.
5. G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz. Nonlinear anisotropic filtering of MRI data. IEEE Transactions on Medical Imaging, 11(2):221–232, June 1992.
6. G. Gerig, J. Martin, R. Kikinis, O. Kübler, M. Shenton, and F. A. Jolesz. Unsupervised tissue type segmentation of 3D dual-echo MR head data. Image and Vision Computing, 10(6):349–360, July 1992. IPMI 1991 special issue.
7. T. He, Lichan Hong, A. Kaufman, and H. Pfister. Generation of transfer functions with stochastic search techniques. In IEEE Visualization'96, pages 227–234, October 1996.
8. K. H. Höhne and W. Hanson. Interactive 3D segmentation of MRI and CT volumes using morphological operations. Journal of Computer Assisted Tomography, 16(2):285–294, 1992.
9. Arie Kaufman. Volume Visualization. IEEE Computer Society Press, 1991.
10. P. Lacroute and M. Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. Computer Graphics, SIGGRAPH'94, pages 451–458, 1994.
11. Marc Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29–37, May 1988.
12. Marc Levoy. Efficient ray tracing of volume data. ACM Transactions on Graphics, 9(3):245–261, July 1990.
13. W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Computer Graphics, SIGGRAPH'87, 21(4):163–169, July 1987.
14. D. Marr and E. Hildreth. Theory of edge detection. Proceedings of the Royal Society of London, B 207:187–217, 1980.
15. Azriel Rosenfeld and Avinash Kak. Digital Picture Processing. Academic Press, 1982.
16. Georgios Sakas, M. Vicker, and P. Plath. Case study: Visualization of laser confocal microscopy datasets. In IEEE Visualization'96, pages 375–379, 1996.
17. T. Schiemann, M. Bomans, U. Tiede, and K. H. Höhne. Interactive 3D segmentation. In R. A. Robb, editor, Proceedings of SPIE: Visualization in Biomedical Computing II, volume 1808, Chapel Hill, NC, 1992.
18. P. J. Shaw. Comparison of wide-field/deconvolution and confocal microscopy for 3D imaging. In Handbook of Biological Confocal Microscopy, 2nd Edition, pages 373–387, 1995.
19. Lisa Sobierajski, R. Avila, D. O'Malley, S. Wang, and A. Kaufman. Visualization of calcium activity in nerve cells. IEEE Computer Graphics and Applications, 15(4):55–61, 1995.
20. Rajagopalan Srinivasan, Shiaofen Fang, and Su Huang. Volume rendering by template-based octree projection. In 8th Eurographics Workshop on Visualization in Scientific Computing, April 1997.
21. Craig Upson and Michael Keeler. V-buffer: Visible volume rendering. Computer Graphics, SIGGRAPH'88, 22(4):59–64, August 1988.
22. Ross T. Whitaker and Stephen M. Pizer. A multi-scale approach to nonuniform diffusion. Computer Vision, Graphics, and Image Processing: Image Understanding, 57(1):99–110, January 1993.

