SCIENTIFIC DISCOVERY
A progress report on the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) Program




Cover: When the cores of massive stars collapse to form black holes and accretion disks, relativistic jets known as gamma-ray bursts pass through and then explode the star. SciDAC’s Supernova Science Center used simulations like the one on the cover to test and confirm theories about the origin of gamma-ray bursts.

This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.



EXECUTIVE SUMMARY


Greetings:

Five years ago, the U.S. Department of Energy Office of Science launched an innovative software development program with a straightforward name — Scientific Discovery through Advanced Computing, or SciDAC. The goal was to develop scientific applications to effectively take advantage of the terascale supercomputers (capable of performing trillions of calculations per second) then becoming widely available.

From the most massive explosions in our universe to our planet’s changing climate, from developing future energy sources to understanding the behavior of the tiniest particle, the SciDAC program has lived up to its name by helping scientists make important discoveries in many scientific areas.

Here are some examples of the achievements resulting from SciDAC:

• For the first time, astrophysicists created fully resolved simulations of the turbulent nuclear combustion in Type Ia supernovae, exploding stars which are critical to better understanding the nature of our universe.

• Using climate modeling tools developed and improved under SciDAC, climate change scientists in the United States are making the largest contribution of global climate modeling data to the world’s leading body on climate studies, the Intergovernmental Panel on Climate Change.

• To better understand combustion, which provides 80 percent of the energy used in the United States, scientists created the first laboratory-scale flame simulation in three dimensions, an achievement which will likely help improve efficiency and reduce pollution.

• Looking toward future energy resources, magnetic fusion researchers, applied mathematicians and computer scientists have worked together to successfully carry out advanced simulations on the most powerful modern supercomputing platforms. They discovered the favorable result that for the larger reactor-scale plasmas of the future such as ITER (the planned international fusion experiment), heat losses caused by plasma turbulence do not continue to follow the empirical trend of increasing with the size of the system.

• To make the most of existing particle accelerators and reduce the cost of building future accelerators, teams developed new methods for simulating design improvements. In addition to helping us understand the most basic building blocks of matter, accelerators make possible nuclear medicine.

• Physicists studying the Standard Model of particle interactions have, after 30 years of trying, been able to model the full spectrum of particles known as hadrons at the highest level of accuracy ever. The results could help lead to a deeper understanding of the fundamental laws of physics.

These and other impressive scientific breakthroughs are the results of hundreds of researchers working in multidisciplinary teams at national laboratories and universities across the country.

The SciDAC Program consisted of three research components:

• Creating a new generation of scientific simulation codes to take full advantage of the powerful computing capabilities of terascale supercomputers

• Creating the mathematical and computing systems software to enable these scientific simulation codes to effectively and efficiently use terascale computers

• Creating a distributed science software infrastructure to enable scientists to effectively collaborate in managing, disseminating and analyzing large datasets from large-scale computer simulations and scientific experiments and observations.

Not only did the researchers work in teams to complete their objectives, but the teams also collaborated with one another to help each other succeed. In addition to advancing the overall field of scientific computing by developing applications which can be used by other researchers, these teams of experts have set the stage for even more accomplishments.

This report will describe many of the successes of the first five years of SciDAC, but the full story is probably too extensive to capture — many of the projects have already spurred researchers in related fields to use the applications and methods developed under SciDAC to accelerate their own research, triggering even more breakthroughs.

While SciDAC may have started out as a specific program, Scientific Discovery through Advanced Computing has become a powerful concept for addressing some of the biggest challenges facing our nation and our world.

Michael Strayer

SciDAC Program Director
U.S. Department of Energy
Office of Science



TABLE OF CONTENTS


Introduction: Scientific Discovery through Advanced Computing

Scientific Discovery
  Supernovae Science: Earth-Based Combustion Simulation Tools Yield Insights into Supernovae
  From Soundwaves to Supernova: SciDAC Provides Time, Resources to Help Astrophysicists Simulate a New Model
  Insight into Supernovae: SciDAC’s Terascale Supernova Initiative Discovers New Models for Stellar Explosions, Spinning Pulsars
  CCSM: Advanced Simulation Models for Studying Global Climate Change
  Plasma Physics: Terascale Simulations Predict Favorable Confinement Trends for Reactor-Scale Plasmas
  Fueling the Future: Simulations Examine the Behavior of Frozen Fuel Pellets in Fusion Reactors
  Burning Questions: New 3D Simulations Are Closing in on the Holy Grail of Combustion Science
  Pushing the Frontiers of Chemistry: Dirac’s Vision Realized
  Accelerators: Extraordinary Tools for Extraordinary Science
  Lattice QCD: Improved Formulations, Algorithms and Computers Result in Accurate Predictions of Strong Interaction Physics

Scientific Challenge Codes: Designing Scientific Software for Next-Generation Supercomputers
  Basic Energy Sciences: Understanding Energy at the Atomic Level
  Biological and Environmental Research: Advanced Software for Studying Global Climate Change
  Fueling the Future: Plasma Physics and Fusion Energy
  High Energy and Nuclear Physics: Accelerating Discovery from Subatomic Particles to Supernovae

Building a Better Software Infrastructure for Scientific Computing
  Applied Mathematics ISICs
  Computer Science ISICs

Creating an Advanced Infrastructure for Increased Collaboration
  National Collaboratory Projects
  Middleware Projects
  Networking Projects

List of Publications



INTRODUCTION

Scientific Discovery through Advanced Computing

Over the past 50 years, computers have transformed nearly every aspect of our lives, from entertainment to education, medicine to manufacturing, research to recreation. Many of these innovations are made possible by relentless efforts by scientists and engineers to improve the capabilities of high performance computers, then develop computational tools to take full advantage of those capabilities.

As these computer systems become larger and more powerful, they allow researchers to create more detailed models and simulations, enabling the design of more complex products, ranging from specialized medicines to fuel-efficient airplanes. Critical to the success of these efforts is an ever-increasing understanding of the complex scientific processes and principles in areas such as physics, chemistry and biology.

Over the past five years, teams of scientists and engineers at national laboratories operated by the U.S. Department of Energy, along with researchers at universities around the country, have been part of a concerted program to accelerate both the performance of high performance computers and the computing applications used to advance our scientific knowledge in areas of critical importance to the nation and the world.

Launched in 2001, this broad-based program is called Scientific Discovery through Advanced Computing, or SciDAC for short. The $57 million-per-year program was designed to accelerate the development of a new generation of tools and technologies for scientific computing. SciDAC projects were selected to capitalize on the proven success of multidisciplinary scientific teams. In all, 51 projects were announced involving collaborations among 13 DOE national laboratories and more than 50 colleges and universities.

SciDAC is an integrated program that has created a new generation of scientific simulation codes. The codes are being created to take full advantage of the extraordinary computing capabilities of today’s terascale computers (computers capable of doing trillions of calculations per second) to address ever larger, more complex problems. The program also includes research on improved mathematical and computing systems software that will allow these codes to use modern parallel computers effectively and efficiently. Additionally, the program is developing “collaboratory” software to enable geographically separated scientists to effectively work together as a team, to control scientific instruments remotely and to share data more readily.

Today, almost five years after it was announced, the SciDAC program has lived up to its name and goal of advancing scientific discovery through advanced computing.

Teaming Up for Better Science

In the 1930s, physicist Ernest Orlando Lawrence pioneered a new approach to scientific research. Rather than working alone in their labs, physicists, engineers, chemists and other scientists were brought together to form multidisciplinary teams which could bring a range of knowledge and expertise to bear on solving scientific challenges. This “big science” approach formed the basis for the national laboratories operated by the Department of Energy.

Since the 1950s, scientists at the DOE national labs have increasingly employed high performance computers in their research. As a result, computational science has joined experimental science and theoretical science as one of the key methods of scientific discovery.

With SciDAC, this approach has come full circle, with multidisciplinary teams from various research institutions working together to develop new methods for scientific discovery, while also developing technologies to advance collaborative science.

And as with other research accomplishments, the results are being shared through articles published in scientific and technical journals. Additionally, the tools and techniques developed under the SciDAC program are freely available to other scientists working on similar challenges.

A Full Spectrum of Scientific Discovery

The SciDAC program was created to advance scientific research in all mission areas of the Department of Energy’s Office of Science. From looking back into the origins of our universe to predicting global climate change, from researching how to burn fuels more efficiently and cleanly to developing environmentally and economically sustainable energy sources, from investigating the behavior of the smallest particles to learning how to combine atoms to create new nanotechnologies, computational science advanced under SciDAC affects our lives on many levels.

Today, almost five years after it was announced, the SciDAC program has lived up to its name and goal of advancing scientific discovery through advanced computing. This progress report highlights a number of the scientific achievements made possible by SciDAC and also looks at the computing methods and infrastructure created under the program which are now driving computational science advances at research institutions around the world.

The achievements range from understanding massive exploding stars known as supernovae to studying the tiniest particles which combine to form all the matter in the universe. Other major accomplishments include better methods for studying global climate, developing new sources of energy, improving combustion to reduce pollution and designing future facilities for scientific research.

Computing then and now

While the achievements of the SciDAC teams are impressive on their own, they also demonstrate just how far scientific computing has come in the past 20 years.

Back in 1986, a state-of-the-art supercomputer was a Cray-2, which had a peak speed of 2 gigaflops (2 billion calculations per second) and a then-phenomenal 2 gigabytes of memory. Such machines were rare, located only at a limited number of research institutions, and access was strictly controlled. Scientific programs were typically written by individuals, who coded everything from scratch. Tuning the code to improve performance was done by hand — there were no tools freely available, no means of creating visualizations. The lucky scientists were able to submit jobs remotely using a 9600 baud modem.

Today, inexpensive highly parallel commodity clusters provide teraflops-level performance (trillions of calculations per second) and run on open source software. Extensive libraries of tools for high performance scientific computing are widely available — and expanding, thanks to SciDAC. Computing grids offer remote access at 10 gigabits per second, allowing hundreds or thousands of researchers to use a single supercomputer. Scientists can do code development on desktop computers which have processors faster than the Cray-2 and offer more memory. Once an application runs, powerful visualization tools provide detailed images which can be manipulated, studied and compared to experimental results.

SCIENTIFIC DISCOVERY

In the far reaches of space, a small star about the size of Earth simmers away, building up the heat and pressure necessary to trigger a massive thermonuclear explosion.

Supernovae Science: Earth-Based Combustion Simulation Tools Yield Insights into Supernovae


The star, known as a white dwarf, consists of oxygen and carbon. At its center, the star is about two billion times denser than water. Already 40 percent more massive than our Sun, the star continues to gain mass as a companion star dumps material onto the surface of the white dwarf.

As this weight is added, the interior of the star compresses, further heating the star. This causes the carbon nuclei to fuse together, creating heavier nuclei. The heating process, much like a pot of water simmering on the stove, continues for more than 100 years, with plumes of hot material moving through the star by convection and distributing the energy.

Eventually, the nuclear burning occurs at such a rate that convection cannot carry away all the energy that is generated and the heat builds dramatically. At this point, a burning thermonuclear flame front moves quickly out from the core of the star, burning all the fuel in a matter of seconds. The result is a Type Ia supernova, an exploding star which is one of the brightest objects in the universe. Because of their brightness, supernovae are of great interest to astrophysicists studying the origin, age and size of our universe.

Although astronomers have been observing and recording supernovae for more than 1,000 years, they still don’t understand the exact mechanisms which cause a white dwarf to explode as a supernova. Under SciDAC, teams of researchers worked to develop computational simulations to try to understand the processes. It turns out that an algorithm developed to simulate a flame from a Bunsen burner in a laboratory is well suited — with some adaptation — to studying the flame front moving through a white dwarf.

The SciDAC program brought together members of the Supernova Science Center project and LBNL mathematicians and computational scientists associated with the Applied Partial Differential Equations Integrated Software Infrastructure Center (ISIC). “We had two groups in very different fields,” said Michael Zingale, a professor at the State University of New York–Stony Brook and member of the Supernova Science Center. “We met with these mathematicians who had developed codes to study combustion and through SciDAC, we started using their tools to study astrophysics.”

The adaptive mesh refinement (AMR) combustion codes developed at the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Laboratory are very effective for modeling slow-moving burning fronts. The behavior of the flame front leading up to a supernova is very similar, and the scientists worked together to adapt the codes to astrophysical environments.

Computational scientists tackle such problems by dividing them into small cells, each with different values for density, pressure, velocity and other conditions. The conditions are then evolved one time step at a time, using equations which say how things like mass, momentum and energy change over time, until a final time is reached. The astrophysicists had been using a code that included modeling the effects of sound waves on the flame front, which limited them to very small time steps. So, modeling a problem over a long period of time exceeded the availability of computing resources. The CCSE code allowed the astrophysicists to filter out the sound waves, which aren’t important in this case, and take much bigger time steps, making more efficient use of their supercomputing allocations.
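The gain from filtering out the sound waves can be illustrated with a toy calculation. The sketch below is not the CCSE code; it simply compares, for assumed, hypothetical white-dwarf-like numbers, the time step allowed when an explicit scheme must track acoustic signals against the time step allowed when only the much slower flame and convective motions need to be resolved.

    # Illustrative sketch with assumed values (not from the SciDAC codes):
    # why removing sound waves permits much larger time steps.
    dx = 1.0e3        # cell size in cm (assumed resolution)
    c_sound = 5.0e8   # sound speed in cm/s (assumed interior value)
    u_flow = 5.0e5    # flame/convective speed in cm/s (assumed)
    cfl = 0.5         # CFL safety factor

    # An explicit compressible scheme must resolve acoustic signals:
    dt_compressible = cfl * dx / (c_sound + u_flow)
    # A low Mach number scheme only needs to resolve the advective motions:
    dt_low_mach = cfl * dx / u_flow

    print(f"compressible dt ~ {dt_compressible:.1e} s")
    print(f"low Mach dt     ~ {dt_low_mach:.1e} s")
    print(f"time step gain  ~ {dt_low_mach / dt_compressible:.0f}x")

With these assumed numbers the allowable time step grows by roughly the ratio of the sound speed to the flow speed, about a factor of a thousand, which is why the filtered formulation made long-duration flame studies affordable.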

The net result, according to Zingale, is that the project team was able to carry out studies which had never been possible before. Their findings have been reported in a series of papers published in scientific journals such as the Astrophysical Journal.

While observational astronomers are familiar with the massive explosion which characterizes a Type Ia supernova, one of the unknowns is where the flame starts, or if it starts in more than one location. When the flame front starts out, it is moving at about one one-thousandth of the speed of sound. As it burns more fuel, the flame front speeds up. By the time the star explodes, the front must be moving at half the speed of sound. The trick is, how does it accelerate to that speed?

FIGURE 1. This series shows the development of a Rayleigh-Taylor unstable flame. The interface between the fuel and ash is visualized. The buoyant ash rises upward as the fuel falls downward in the strong gravitational field, wrinkling the flame front. This increases the surface area of the flame and accelerates the burning. At late times, the flame has become fully turbulent.

One answer explored by the SciDAC team is the effect of a wrinkled flame. As the flame becomes wrinkled, it has more surface area, which means it burns more fuel, which in turn accelerates the flame front. The AMR codes were used to model small areas of the flame front in very high resolution.

As the flame burns, it leaves in its wake “ash,” which through intense heat and pressure is fused into nickel. This hot ash — measuring several billion degrees Kelvin — is less dense than the fuel. The fuel ahead of the front is relatively cooler, at about 100 million degrees Kelvin, and denser than the ash. These conditions make the flame front unstable. This instability, known as the Rayleigh-Taylor instability, causes the flame to wrinkle.
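For a quantitative handle on why the wrinkling grows, the classical linear result for the Rayleigh-Taylor instability (a textbook estimate, not a number taken from the SciDAC simulations) gives the growth rate of a small ripple of wavenumber k at an interface where dense fuel of density \rho_{\mathrm{fuel}} sits above light ash of density \rho_{\mathrm{ash}} in an effective gravitational field g:

    \sigma = \sqrt{A\,g\,k}, \qquad A = \frac{\rho_{\mathrm{fuel}} - \rho_{\mathrm{ash}}}{\rho_{\mathrm{fuel}} + \rho_{\mathrm{ash}}}

Here A is the Atwood number: the larger the density contrast between cold fuel and hot ash, and the stronger the gravity, the faster small wrinkles grow, eventually feeding the turbulent burning seen in the simulations.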

As the flame starts out from the center and burns through most of the star, the front between the fuel and the ash is sharp and the front is called a “flamelet.”

Astrophysicists had predicted that as the flame front burns further out from the center, the lower density of the star would cause it to burn less vigorously and become more unstable due to increased turbulence and mixing of fuel and ash. This represents a different mode of combustion known as a “distributed burning regime.” Scientists believe that such combustion occurs in the late stage of a supernova explosion.

Using the AMR combustion codes and running simulations for 300,000 processor hours at DOE’s National Energy Research Scientific Computing Center, the Supernova Science Center team was able to create the first-ever three-dimensional simulations of such an event. The results are feeding into the astrophysics community’s knowledge base.

“While other astrophysicists are modeling entire stars at a different scale, they need to know what is going on at scales their models can’t resolve — and we’re providing that model for them,” Zingale said. “Although it takes a large amount of computing time, it is possible now thanks to these codes. Before our SciDAC partnership, it was impossible.”

FIGURE 2. This series of images shows the growth of a buoyant reactive bubble. At the density simulated, the rise velocity is greater than the laminar flame velocity, and the bubble distorts significantly. The vortical motions transform it into a torus. Calculations of reactive rising bubbles can be used to gain more insight into the early stages of flame propagation in Type Ia supernovae.

From Soundwaves to Supernova: SciDAC Provides Time, Resources to Help Astrophysicists Simulate a New Model

Once every 30 to 50 years — and then just for a few milliseconds — an exploding star known as a core-collapse supernova is the brightest object in the universe, brighter than the optical light of all other stars combined.

Supernovae have been documented for 1,000 years and astrophysicists know a lot about how they form, what happens during the explosion and what’s left afterward. But for the past 40 years, one problem has dogged astrophysicists — what is the mechanism that actually triggers the massive explosion? Under SciDAC, a number of computational projects were established to simulate plausible explanations. One group created simulations showing a new mechanism for core-collapse supernovae. This model indicates that the core of the star generates sound waves which in turn become shock waves powerful enough to trigger the explosion.

Understanding these explosions and their signals is important, not only because of their central role in astronomy and nucleosynthesis, but because a full understanding of supernovae may lead to a better understanding of basic physics.

The massive stars which become supernovae feature a number of conditions which are also found on Earth — winds, fluid flows, heating and cooling. But whereas a hurricane may blast along at 150 kilometers an hour on Earth, the winds on these stars rage at tens of kilometers per second. The temperature of the star is 100 million times that of Earth, and the density of the object can be 14 orders of magnitude more dense than lead.

In short, studying supernovae is a very complicated problem. Key to finding an answer is understanding the physics of all the interactions in the star.

One thing that is known is that supernovae produce neutrinos, particles with very little mass which travel through space and everything in their path. Neutrinos carry energy from the deep interior of the star, which is being shaken around like a jar of supersonic salad dressing, and deposit the energy on the outer region. One theory holds that if the neutrinos deposit enough energy throughout the star, this may trigger the explosion.

To study this, a group led by Adam Burrows, professor of astronomy at the University of Arizona and a member of the SciDAC Supernova Science Center, developed codes for simulating the behavior of a supernova core in two dimensions. While a 3D version of the code would be optimum, it would take at least five more years to develop and would require up to 300 times as much computing time. As it was, the group ran 1.5 million hours of calculations at DOE’s National Energy Research Scientific Computing Center.

But the two-dimensional model is suitable for Burrows’ work, and the instabilities his group is interested in studying can be seen in 2D. What they found was that there is a big overturning motion in the core, which leads to wobbling, which in turn creates sound waves. These waves then carry energy away from the core, depositing it farther out near the mantle.

According to Burrows, these oscillations could provide the “power source” which puts the star over the edge and causes it to explode. To imagine what such a scenario would look like, think of a pond into which rocks are thrown, causing waves to ripple out. Now think of the pond as a sphere, with the waves moving throughout the sphere. As the waves move from the denser core to the less dense mantle, they speed up. According to the model, they begin to crack like a bullwhip, which creates shockwaves. It is these shockwaves, Burrows believes, which could trigger the explosion.
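A standard acoustics argument (not spelled out in the work summarized here, but consistent with the bullwhip analogy) explains why the waves strengthen as they climb outward. If a sound wave of velocity amplitude v carries a roughly constant energy flux F \approx \rho\,c\,v^{2} through material of density \rho and sound speed c, then

    v \propto \frac{1}{\sqrt{\rho\,c}}

so the amplitude grows as the wave passes from the dense core into the far less dense mantle; once v becomes comparable to the local sound speed, the wave steepens nonlinearly into a shock.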

So, what led the team to this model? They came up with the idea by following the pulsar — the neutron star which is the remains of a supernova. They wanted to explore the origin of the high speed which pulsars seem to be born with, and this led them to create a code that allowed the core to move. However, when they implemented this code, the core not only recoiled, but oscillated, and generated sound waves.

The possible explosive effect of the oscillations had not been considered before because previous simulations of the conditions inside the core used smaller time steps, which consumed more computing resources. With this limitation, the simulations ran their course before the onset of oscillations. With SciDAC support, however, Burrows’ team was able to develop new codes with larger time steps, allowing them to model the oscillations for the first time.

Calling the simulation a “real numerical challenge,” Burrows said the resulting approach “liberated the inner core to allow it to execute its natural multidimensional motion.” This motion led to the excitation of the core, causing the oscillations at a distinct frequency.

FIGURE 1. A 2D rendition of the entropy field of the early blast in the inner 500 km of an exploding supernova. Velocity vectors depict the direction and magnitude of the local flow. The bunching of the arrows indicates the crests of the sound waves that are escalating into shock waves. These waves are propagating outward, carrying energy from the core to the mantle and helping it to explode. The purple dot is the protoneutron star, and the purple streams crashing in on it are the accretion funnels. (All images courtesy of Adam Burrows, University of Arizona)

SciDAC made the work possible, he said, by providing support over five years — enough time to develop and test the code — and the computing resources to run the simulations. The results look promising, but as is often the case, more research is needed before a definitive mechanism for triggering a supernova is determined.

The group published a paper on their research in the Astrophysical Journal. Neutrino transfer is included as a central theme in a 2D multi-group, multi-neutrino, flux-limited transport scheme. It is approximate but has the important components, and is the only truly 2D neutrino code with results published in the archival literature.

“The problem isn’t solved,” Burrows said. “In fact, it’s just beginning.”

FIGURE 2. These shells are isodensity contours, colored according to entropy values. The red (blast area) indicates regions of high entropy, and the orange outer regions have low entropy. In the image on the left, matter is accreting from the top onto the protoneutron star in the center (the orange dot that looks like the uvula in the throat). The image is oriented so that the anisotropic explosion is emerging down and towards the viewer. The scale is roughly 5000 km. The image on the right shows the same explosion later in time and at a different orientation with respect to the viewer.

FIGURE 3. Another isodensity shell colored with entropy, showing simultaneous accretion on the top and explosion on the bottom. The inner green region is the blast, and the outer orange region is the unshocked material that is falling in. The purple dot is the newly formed neutron star, which is accumulating mass through the accretion funnels (in orange).


Insight into Supernovae: SciDAC’s Terascale Supernova Initiative Discovers New Models for Stellar Explosions, Spinning Pulsars

The massive stellar explosions known as core-collapse supernovae are not only some of the brightest and most powerful phenomena in the universe, but are also the source of many of the elements which make up our universe — from the iron in our blood cells to the planet we live on to the solar systems visible in the night sky.

Supernova science has advanced dramatically with the advent of powerful telescopes and other instruments for observing and measuring the deaths of these stars, which are 10 times more massive than our sun, or more. At the same time, increasingly accurate applications for simulating supernovae have been developed to take advantage of the growing power of bigger and faster supercomputers.

FIGURE 1. This image provides a snapshot of the angular momentum of the layered fluid flow in the stellar core below the supernova shock wave (the outer surface) during a core-collapse supernova explosion. Pink depicts a significant flow rotating in one direction directly below the shock wave. Gold depicts a deeper flow directly above the proto-neutron star surface, moving in the opposite direction. The supernova shock wave instability (SASI) leads to such counter-rotating flows and is important in powering the supernova, and may be responsible for the spin of pulsars (rotating neutron stars). The inner flow (in green) spins up the proto-neutron star. These three-dimensional simulations were performed by John Blondin (NCSU) under the auspices of the Terascale Supernova Initiative, led by Tony Mezzacappa (ORNL). The visualization was performed by Kwan-Liu Ma (University of California, Davis).

Scientists now know that these massive stars are layered like onions. Around the iron core are layer after layer of lighter elements. As the stellar core becomes more massive, gravity causes the core to collapse to a certain point at which it rebounds like a compressed ball. This results in a shock wave which propagates from the core to the outermost layers, causing the star to explode.

Despite these gains in understanding, however, a key question remains unanswered: What triggers the forces which lead a star to explode? Under SciDAC, the Terascale Supernova Initiative (TSI) was launched, consisting of astrophysicists, nuclear physicists, computer scientists, mathematicians and networking engineers at national laboratories and universities around the country. Their singular focus was to understand how these massive stars die.

“When these stars die in stellar explosions known as core-collapse supernovae, they produce many of the elements in the universe. In fact, they are arguably the single most important source of elements,” said Tony Mezzacappa, an astrophysicist at Oak Ridge National Laboratory and principal investigator for the TSI. “Learning what triggers supernovae is tantamount to understanding how we came to be in the universe.”

Ascertaining the explosion mechanism is one of the most important questions in physics. It is a complex, multi-physics problem involving fluid flow, instability and turbulence, stellar rotation, nuclear physics, particle physics, radiation transport and magnetic fields.

“Much of modern physics comes to play a role in a core-collapse supernova,” Mezzacappa says. “The cosmos is very much a physics laboratory.”

Once the explosion mechanism is accurately predicted, scientists will be able to predict all the other associated byproducts and phenomena, such as the synthesis of elements. The key to solving this question depends on developing more detailed computer models, such as those advanced under SciDAC, and detailed observations of core-collapse supernovae, which only occur about twice every century in our galaxy (although many such supernovae are observed each year from outside of our galaxy). As models are developed and improved, their accuracy can be tested by comparing the results with the observed data. By adjusting any parameters of the model (of course, the goal in developing sophisticated models is to minimize the number of free parameters), scientists can generate results which are closer and closer to the actual data.

Because the simulations are computationally intensive, Mezzacappa and John Blondin, a professor at North Carolina State University, started by developing codes to look at the fluid flow at the core of a supernova. To focus on this one area, they removed the other physics components from their code, knowing they would have to add them again later. They were interested in learning how the core flow behaves during the explosion as one goes from two spatial dimensions to three. They first ran the code in two dimensions, then in 3D. When they ran the simulations, they started to see instability in the shock wave. But, Mezzacappa said, they initially believed that the physics in their code should not have led to that behavior. Their first reaction was that the code contained an error, but further study led them into a new area of supernova research.

Despite decades of supernova theory focused on the formation and propagation of the shock wave, apparently no one considered whether the shock wave was stable, according to Mezzacappa. The TSI team’s research led them to conclude that the shock wave is unstable; and if it is perturbed, it grows in an unbounded, nonlinear fashion. As it spreads, it becomes more and more distorted. The TSI group found that the instability of the shock wave could help contribute to the explosion mechanism. For example, a distorted shock wave could explain why supernovae explode asymmetrically.

Under further study, the TSI researchers found that as they added more physics back into their code, the instability did not go away, giving them more confidence in their findings. Calling this new discovery the Stationary Accretion Shock Instability, or SASI, the team published their findings, which were then corroborated by other supernova researchers. As a result, Mezzacappa said, the work has fundamentally changed scientists’ thinking about the explosion phenomenon. And, he adds, it probably would not have happened without the multidisciplinary approach fostered by SciDAC.

For starters, SciDAC provided the team with access to some of the world’s fastest supercomputers — a resource otherwise unavailable to researchers like Blondin. This access to the Cray X1E and Cray XT3 systems at Oak Ridge gave Blondin the computing horsepower he needed to run his simulations in 3D.

But scaling up from two to three dimensions is not just a matter of running on a larger computer. It is also a matter of developing more detailed codes.

In this case, neutrinos are central to the dynamics of core-collapse supernovae. The explosion releases 10⁵³ ergs of radiation, almost all of it in the form of neutrinos. The explosion energy (the kinetic energy of the ejected material) is only 10⁵¹ ergs. This intense emission of radiation plays a key role in powering the supernova, heating the interior material below the shock wave and adding energy to drive the shock wave outward. But this radiation transport — the production, movement and interaction of the neutrinos — is one of the most difficult things to simulate computationally. It requires that the supercomputer solve a huge number of algebraic equations. Applied mathematicians on the TSI team developed methods for solving the equations on terascale supercomputers.
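To give a feel for the kind of computation involved, here is a generic illustration (not the TSI team's actual solver or discretization): an implicit treatment of a transport equation turns each time step into a large sparse linear system, which is typically attacked with iterative methods.

    # Generic sketch: a backward-Euler step for a 1D diffusion-like transport
    # equation becomes a sparse linear solve. Sizes and coefficients are
    # assumed for illustration only.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 100_000                      # number of cells (illustrative)
    r = 0.5                          # kappa * dt / dx**2 (assumed)

    # (I + r*L) u_new = u_old, with L the standard tridiagonal Laplacian.
    main = np.full(n, 1.0 + 2.0 * r)
    off = np.full(n - 1, -r)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

    u_old = np.ones(n)
    u_new, info = cg(A, u_old)       # conjugate gradient iteration
    assert info == 0                 # info == 0 means the solver converged

A real multi-group, multi-angle neutrino transport step couples energy groups and angles as well as space, so the actual systems are far larger than this toy example, hence the need for solution methods that scale to terascale machines.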

“This is another example of how SciDAC makes a world of difference — we could not have done these simulations without the help of the applied mathematicians,” Mezzacappa said. “SciDAC really enabled us to think about this problem in all of its complexity in earnest for the first time.”

But this also led to a new challenge. Suddenly, the team was faced with managing massive amounts of data with no infrastructure for analyzing or visualizing these terabytes of data. “We didn’t know what we had,” Mezzacappa said.

The TSI team partnered with another SciDAC project led by Micah Beck of the University of Tennessee, which developed a new data transfer solution known as logistical networking. Logistical networking software tools allow users to create local storage “depots” or utilize shared storage depots deployed worldwide to easily accomplish long-haul data transfers, temporary storage of large datasets (on the order of terabytes) and pre-positioning of data for fast on-demand delivery.

The group provided the hardware and software needed to create a private network linking Oak Ridge and North Carolina State, allowing the data to be moved to the university. There, Blondin used a visualization cluster to analyze and visualize the data, allowing the team to pursue the 3D science. The TSI network now spans the U.S., linking the two original sites with facilities ranging from the National Energy Research Scientific Computing Center in California to the State University of New York at Stony Brook. As the TSI project demonstrated the benefits of logistical networking, researchers in the fusion and combustion communities also built their own logistical networks.

Once the data from the 3D simulations could be analyzed, the team made another discovery that was not possible in the two-dimensional simulations. Once a core-collapse supernova explodes, what remains is known as a neutron star. In some cases, this neutron star spins and emits radiation as light and is called a pulsar. What is not known, however, is what causes a neutron star to get spun up.

A previous theory held that neutron star spin is generated when the stellar core spins up as it collapses during the supernova, much like an ice skater who increases his rotational speed by pulling his arms close to his body. But this simple model cannot explain all the observed characteristics of pulsars and at the same time explain the predicted characteristics of stars at the onset of collapse.

In TSI’s 3D models, the SASI produces two counter-rotating flows between the proto-neutron star and the propagating shock wave. As the innermost flow settles onto the proto-neutron star, it imparts angular momentum and causes it to spin, just as you can spin a bicycle tire by swiping your hand across the tread. The team computed their spin predictions and found that the results were within the observed range of pulsar spins.

“This was another breakthrough made possible by SciDAC,” Mezzacappa said. “It gives us new possibilities for explaining the spins of pulsars.”

The key to their results in both cases, Mezzacappa noted, was scaling their application to simulate complex phenomena in 3D, rather than two dimensions. “In other scientific articles, supernova researchers have written that they don’t see much difference between simulations in 2D and 3D, but now we know they are very different, and have shown how this difference is related to the supernova mechanism and byproducts,” Mezzacappa said. “Mother Nature clearly works in a 3D universe.”


One of the most widely discussed scientific questions of the past 20 years is the issue of global climate change. But it’s not just a matter of science. The question of global climate change also involves economic, environmental, social and political implications. In short, it’s a complex question with no easy answers. But detailed climate modeling tools are helping researchers gain a better understanding of how human activities are influencing our climate.

CCSM: Advanced Simulation Models for Studying Global Climate Change

FIGURE 1. Surface temperature (in °C) averaged over December of year 9 of the fully coupled chemistry simulation. Vectors represent the surface wind.


To try to ensure that decision-makers around the world have access to the most accurate information available, the Intergovernmental Panel on Climate Change (IPCC) was established in 1988 under the auspices of the World Meteorological Organization and the United Nations Environment Program. The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.

In accordance with its mandate, the major activity of the IPCC is to prepare at regular intervals comprehensive and up-to-date assessments of information relevant for the understanding of human-induced climate change, potential impacts of climate change and options for mitigation and adaptation. The First Assessment Report was completed in 1990, the Second Assessment Report in 1995 and the Third Assessment Report in 2001. The Fourth Assessment Report is scheduled to be completed in 2007. SciDAC-sponsored research has enabled the United States climate modeling community to make significant contributions to this report. In fact, the United States is the largest contributor of climate modeling data to the report, as compared to the 2001 report in which no U.S.-generated data was included.

Two large multi-laboratory SciDAC projects are directly relevant to the activities of the IPCC. The first, entitled “Collaborative Design and Development of the Community Climate System Model for Terascale Computers,” has made important software contributions to the recently released third version of the Community Climate System Model (CCSM3.0), developed by the National Center for Atmospheric Research, DOE laboratories and the academic community. The second project, entitled “Earth System Grid II: Turning Climate Datasets into Community Resources,” aims to facilitate the distribution of the copious amounts of data produced by coupled climate model integrations to the general scientific community.

A key chapter in the Fourth Assessment Report will look at Global Climate Projections, which will examine climate change to the year 2100 and beyond. Gerald Meehl, one of two lead authors of the chapter and a scientist at the National Center for Atmospheric Research in Colorado, ran extensive simulations using the CCSM and another climate modeling application, PCM, to project climate change through the end of the 21st century.

The results, reported in the March 18, 2005 issue of Science magazine, indicate that “even if the concentrations of greenhouse gases in the atmosphere had been stabilized in the year 2000, we are already committed to further global warming of about another half degree and an additional 320 percent sea level rise caused by thermal expansion by the end of the 21st century. … At any given point in time, even if concentrations are stabilized, there is a commitment to future climate changes that will be greater than those we have already observed.”

As one of many research organizations from around the world providing input for the IPCC report, the DOE Office of Science is committed to providing data that is as scientifically accurate as possible. This commitment to scientific validity is a key factor behind the SciDAC program to refine and improve climate models.

In order to bring the best science to bear on future climate prediction, modelers must first try to accurately simulate the past. The record of historical climate change is well documented by weather observations and measurements starting around 1890. So, after what is known about the physics and mathematics of atmospheric and ocean flows is incorporated in a simulation model running on one of DOE’s supercomputers, the model is tested by “predicting” the climate from 1890 to the present.

The input to such a model is not the answer, but rather a set of data describing known climate conditions — what the atmospheric concentrations of greenhouse gases were, what solar fluctuations occurred, and what volcanic eruptions took place. While past climate models were not able to replicate the historical record with its natural and induced variability, the present generation of coupled ocean, atmosphere, sea ice, and land components do a remarkably good job of predicting the past century’s climate. This accuracy gives confidence that when the same model is used to generate predictions into the future based on various emission scenarios, the results will have a strong scientific basis and are more than scientific opinions of what might happen.

FIGURE 2. Annual precipitation (mm/day) and March snow water (m) from an IPCC simulation using the subgrid orography scheme in Ghan and Shippert (2005). Panels compare observed fields with values simulated for 1980-2000 and 1980-2100.

The Community Climate System Model, CCSM3, is centered at NCAR, arguably a world leader in the field of coupled climate models. This model continues to be developed jointly by the National Science Foundation and DOE to capture the best scientific understanding of climate. In addition to being used for climate change studies, the CCSM is being used to study climate-related questions ranging from ancient climates to the physics of clouds. However, the DOE national laboratories and the National Center for Atmospheric Research periodically use the model for assessment purposes such as the IPCC project.

The SciDAC CCSM Consortium developing the model has also made sure that it runs effectively on the most powerful supercomputers at DOE’s leading computing centers — the Leadership Class Facility (LCF) in Tennessee and the National Energy Research Scientific Computing Center (NERSC) in California. DOE contributed significant allocations of supercomputer time to the international efforts (along with NCAR and the Japanese Earth Simulator Center) toward the completion of the IPCC runs with CCSM3.

The fruit of these labors is substantial. Enabled by these sizable allocations, the CCSM3 is the largest single contributor to the IPCC Assessment Report 4 database. Not only is the model represented in more transient scenarios than any other model, more statistically independent realizations of each of these scenarios have been integrated than with any other model. Also, the resolution of the atmosphere component is exceeded by only one other model (which has not been ensemble-integrated).

This increase in production is scientifically important in many ways. The need for large numbers of individual simulations is especially important when attempting to characterize the uncertainty of climate change. For instance, in looking at recent climate change (over the last century), running more statistically independent simulations of the 20th century allows researchers to produce a much better defined pattern of climate changes. To model future climate change, scientists will run multiple simulations with a given level of greenhouse gas emissions to quantify the range of possible outcomes. And when researchers want to study possible outcomes for a range of emissions, they need to run even more simulations.

Running simulations for extended times is also important to help researchers detect and take into account conditions which result from flaws in the climate model. Such flaws cause “drift,” or an unsubstantiated change in temperature. While drift-free control runs covering several hundred years are useful for studying climate change over a few decades, examining climate change of the entire 20th century requires a control run of at least 1,000 years to avoid misinterpretation of the resulting climate model. In summary, large ensembles and long control runs are necessary to better understand the clear patterns of climate change.
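A toy calculation shows what these two requirements buy. The sketch below uses synthetic, made-up numbers (it is not CCSM output): a long control run is used to estimate drift as a simple linear trend, and an ensemble of historical runs is averaged to separate the forced signal from internal variability.

    # Illustrative only: synthetic data standing in for model output.
    import numpy as np

    rng = np.random.default_rng(0)

    # A hypothetical 1,000-year control run of global-mean temperature anomaly.
    years = np.arange(1000)
    control = 0.0002 * years + rng.normal(0.0, 0.1, size=years.size)

    # Drift = linear trend of the control run.
    drift_per_year = np.polyfit(years, control, 1)[0]
    print(f"control-run drift ~ {100 * drift_per_year:.2f} degrees per century")

    # A hypothetical five-member ensemble of 20th-century simulations.
    n_members, n_years = 5, 101
    forced_signal = np.linspace(0.0, 0.8, n_years)             # assumed warming
    noise = rng.normal(0.0, 0.15, size=(n_members, n_years))   # internal variability
    ensemble = forced_signal + noise

    # The ensemble mean sharpens the forced pattern; the spread measures
    # the uncertainty from internal variability.
    mean, spread = ensemble.mean(axis=0), ensemble.std(axis=0)
    print(f"end-of-century warming ~ {mean[-1]:.2f} +/- {spread[-1]:.2f} degrees")

With real model output the same logic applies: the longer the control run, the better the drift estimate that must be accounted for, and the more ensemble members, the better defined the pattern of forced change.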

Through SciDAC, DOE also sponsors the technology being used to make the results available to scientists worldwide. The Earth System Grid (ESG) project collects all the data from all the runs of models in the U.S., as well as from other countries, and makes the data accessible to scientists throughout the world. This allows scientists in the international climate research community to analyze more extensive datasets, compare results and document differences in model predictions. In fact, over 200 papers are in press referencing this data. (You can see a list at http://www-pcmdi.llnl.gov/ipcc/diagnostic_subprojects.php.)

When researchers encounter disagreement among models, scientific debates and discussions ensue about the physical mechanisms governing the dynamics of climate. This debate is typically vigorous and points in directions that models can be improved. To take advantage of such discussions and to improve the validity of climate models, DOE sponsors the Program for Climate Model Diagnosis and Intercomparison (PCMDI), which places the scientific basis of climate modeling front and center in its work. Such rigorous peer review and discussion is critical to ensuring that government officials have access to the most scientifically valid information when making decisions regarding climate change.

This SciDAC project is a collaboration between six DOE national laboratories (ANL, LANL, LBNL, LLNL, ORNL, PNNL) and NCAR. The lead investigators representing these institutions are R. Jacob, P. Jones, C. Ding, P. Cameron-Smith, J. Drake, S. Ghan, W. Collins, W. Washington and P. Gent.


Most people do not think about turbulence very often, except when they are flying and the captain turns on the “Fasten Seat Belts” sign. The kind of turbulence that may cause problems for airplane passengers involves swirls and eddies that are a great deal larger than the aircraft. But in fusion plasmas, much smaller-scale turbulence, called microturbulence, can cause serious problems — specifically, instabilities and heat loss that could stop the fusion reaction.

Plasma Physics: Terascale Simulations Predict Favorable Confinement Trend for Reactor-Scale Plasmas

In fusion research, all of the conditions necessary to keep a plasma dense and hot long enough to undergo fusion are referred to as confinement. The retention of heat, called energy confinement, can be threatened by microturbulence, which can make particles drift across, rather than along with, the plasma flow. At the core of a fusion reactor such as a tokamak, the temperatures and densities are higher than at the outside edges. As with weather, when there are two regions with different temperatures and densities, the area between is subject to turbulence. In a tokamak, turbulence can allow charged particles in the plasma to move toward the outer edges of the reactor rather than fusing with other particles in the core. If enough particles drift away, the plasma loses temperature and the fusion reaction cannot be sustained.

One troublesome result of tokamak experiments to date is that as the size of the plasma increases, the relative level of heat loss from turbulent transport also increases. Because the size (and therefore cost) of a fusion experiment is determined largely by the balance between fusion self-heating and turbulent transport losses, understanding this process is of utmost importance for the design and operation of fusion devices such as the multi-billion-dollar ITER project. ITER (Latin for “the way”), a multinational tokamak experiment to be built at Cadarache in southern France, is one of the highest strategic priorities of the DOE Office of Science.

Underlining America’s commitment to ITER, U.S. Energy Secretary Samuel Bodman stated on June 28, 2005, “Plentiful, reliable energy is critical to worldwide economic development. Fusion technologies have the potential to transform how energy is produced and provide significant amounts of safe, environmentally friendly power in the future. The ITER project will make this vision a reality.”

ITER is expected to produce 500 million thermal watts of fusion power — 10 times more power than is needed to heat the plasma — when it reaches full operation around the year 2016. As the world’s first production-scale fusion reactor (Figure 1), ITER will help answer questions about the most efficient ways to configure and operate future commercial reactors.

The growth of the microinstabilities that lead to turbulent transport has been extensively studied over the years, not only because understanding this process is an important practical problem, but also because it is a true scientific grand challenge which is particularly well suited to be addressed by modern terascale computational resources. And the latest news, confirmed by multiple simulations using different codes, is good: in reactors the size of ITER, heat losses caused by plasma turbulence no longer follow the empirical trend of increasing with the size of the plasma. Instead, the rate of heat loss levels off and stabilizes.

Progress in understanding plasma ion dynamics has been impressive. For example, studies show that electrostatic ion temperature gradient (ITG) driven turbulence can be suppressed by self-generated zonal flows within the plasma.

FIGURE 1. Cutaway illustration of the ITER tokamak.


The suppression of turbulence is caused by a shearing action that destroys the finger-like density contours which promote thermal transport. This dynamic process is depicted by the sequences shown in Figure 2, obtained using the Gyrokinetic Toroidal Code (GTC) developed by the SciDAC-funded Gyrokinetic Particle Simulation Center (GPSC). The lower panels show the nonlinear evolution of the turbulence in the absence of flow, while the upper panels illustrate the turbulence decorrelation caused by the self-generated "E×B" flow, which arises from crossed electric and magnetic fields.

This simulation is a good example of the effective use of powerful supercomputers (in this case, the 10 teraflop/s Seaborg IBM SP at NERSC). Typical global particle-in-cell simulations of this type have used one billion particles with 125 million grid points over 7,000 time steps to produce significant physics results. Simulations of this size would not be feasible on smaller computers.
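The particle-in-cell method behind these simulations alternates between depositing particle charge on a grid, solving for the fields, and pushing the particles in those fields. The Python sketch below is a minimal one-dimensional electrostatic illustration of that cycle, written purely for exposition; GTC's gyrokinetic, toroidal formulation is far more elaborate, and all names and parameters here are hypothetical.

import numpy as np

def pic_step(x, v, q_m, n_grid, dx, dt, length):
    """One illustrative electrostatic particle-in-cell step (1D, periodic)."""
    # 1. Deposit particle charge onto the grid (nearest-grid-point weighting).
    rho = np.zeros(n_grid)
    idx = (x / dx).astype(int) % n_grid
    np.add.at(rho, idx, 1.0)
    rho -= rho.mean()                          # neutralizing background
    # 2. Solve Poisson's equation for the electric field with an FFT.
    k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    k[0] = 1.0                                 # avoid division by zero (rho_hat[0] is zero anyway)
    phi_hat = np.fft.fft(rho) / k**2
    E = np.real(np.fft.ifft(-1j * k * phi_hat))
    # 3. Gather the field at each particle and advance velocities and positions.
    v = v + q_m * E[idx] * dt
    x = (x + v * dt) % length
    return x, v

In a production gyrokinetic code the "particles" are gyro-averaged markers in five-dimensional phase space and the field solve is performed on a toroidal mesh, which is why billions of particles and hundreds of millions of grid points are needed.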

Large-scale simulations have also explored key consequences of scaling up from present-day experimental fusion devices (around 3 meters radius for the largest existing machines) to ITER-sized reactors (about 6 meters).

For the reactor-scale plasmas of the future, these simulations suggest that the relative level of heat loss driven by electrostatic ITG turbulence does not increase with plasma size (Figure 3). This transition from "Bohm" (linear) scaling to "gyro-Bohm" (quadratic) scaling is a positive trend, because simple empirical extrapolation of the smaller-system findings would produce much more pessimistic predictions for energy confinement. Since neither experiments nor theory and simulations have previously been able to explore such trends in an ITER-sized plasma, these results represent a significant scientific discovery enabled by the SciDAC program.
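In the notation usually used for such scaling studies (the definitions below are the conventional ones, not taken from the report), the ion heat diffusivity is compared against the Bohm and gyro-Bohm values,

\chi_{\mathrm{Bohm}} \sim \frac{c\,T}{eB}, \qquad \chi_{\mathrm{GB}} = \rho_*\,\chi_{\mathrm{Bohm}}, \qquad \rho_* = \frac{\rho_i}{a},

where ρi is the ion gyroradius and a the plasma minor radius. Bohm-like transport means that the normalized diffusivity χi/χGB grows roughly in proportion to the system size a/ρi, while gyro-Bohm transport means χi/χGB approaches a constant. Read this way, the leveling-off of the normalized heat loss in Figure 3 beyond roughly 400 gyroradii is the transition to gyro-Bohm behavior described above.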

Exploration of the underlying causes for the transition in the rate of heat loss that simulations show around the 400 gyroradii range has inspired the development of new nonlinear theoretical models based on the spreading of turbulence. Although this predicted trend is a very favorable one, the fidelity of the analysis needs to be further examined by investigating additional physics effects, such as kinetic electromagnetic dynamics, which might alter the present predictions.

The excellent scaling of the GTC code provides strong encouragement that future simulations will be able to capture the additional complexity and lead to greater scientific discoveries. GTC is currently involved in benchmark tests on a variety of supercomputing platforms — including the Earth Simulator in Japan, the Cray X1E and XT line at Oak Ridge National Laboratory, the IBM Power SP line, and the IBM Blue Gene line — and exhibits scaling properties which look to be readily extensible to the petascale regime. GTC's performance and scaling on a variety of leading platforms is illustrated in Figure 4.

The GS2 and GYRO codes developed by the Plasma Microturbulence Project (the SciDAC predecessor to the current GPSC) have also contributed productively to the interpretation of turbulence-driven transport trends observed in experiments.

FIGURE 2. Turbulence reduction via sheared plasma flow (upper panels, "with flow") compared to the case with flow suppressed (lower panels, "without flow").

FIGURE 3. Full torus particle-in-cell gyrokinetic simulations (GTC) of turbulent transport scaling. (Left) The granular structures represent the scales of the turbulence in a typical plasma which need to be included in realistic plasma simulations. (Right) The horizontal axis expresses the plasma size as a/ρi (minor radius in ion gyroradii), and the point at 1000 represents ITER's size. The vertical axis represents the thermal diffusion, or heat loss, normalized as χi/χGB, with simulation results compared against a fitting formula.


These two SciDAC projects have involved researchers from Lawrence Livermore National Laboratory; Princeton Plasma Physics Laboratory (PPPL); Columbia University; the University of California at Los Angeles, Irvine, and Davis; the University of Colorado; the University of Maryland; the University of Tennessee at Knoxville; Oak Ridge National Laboratory; and General Atomics.

Bill Tang, Chief Scientist at PPPL, is encouraged by the progress in computational fusion research. "SciDAC has contributed strongly to the accelerated development of computational tools and techniques needed to develop predictive models for the analysis and design of magnetically confined plasmas," Tang commented. "Unraveling the complex behavior of strongly nonlinear plasma systems under realistic conditions is a key component of the next frontier of computational plasma physics in general and fusion research in particular. Accelerated progress in the development of the needed codes with higher physics fidelity has been greatly aided by the interdisciplinary alliances championed by the SciDAC Program, together with necessary access to the tremendous increase in compute cycles enabled by the rapid advances in supercomputer technology."

Ray Orbach, Director of the DOE Office of Science, expressed his hope for high-end computing's future contributions to fusion energy in an interview in SciDAC Review (Number 1, Spring 2006, p. 8): "At the speeds that we are talking about [50 teraflop/s], the electrons in a fusion device can be considered as real point particles and do not have to be treated in mean field approximations. So for the first time at 50 TF, one will be able to do simulations of high-density, high-temperature plasmas that were never possible before. This will have a significant impact on the treatment of these highly nonlinear systems. These systems are not subject to analytic examination and there may be instabilities that no one has thought about. This has already been found to be of profound importance for ITER [and] for fusion science in general."

FIGURE 4. Scaling study of the GTC code on multiple high performance computing platforms. The plot shows compute power, defined as the number of particles (in millions) moved one step in one second, versus the number of processors (64 to 16,384) for the Cray X1E, NEC SX-8, Earth Simulator (2005), Cray X1, Cray XT3 (Jaguar), Jacquard, Thunder, Blue Gene/L, and Seaborg (MPI and MPI + OpenMP); the latest vector optimizations were not tested on the Earth Simulator. (S. Ethier, PPPL, Nov. 2005)


What happens when you shoot one of the coldest materials into the hottest environment on earth? The answer may help solve the world's energy crisis.

Fueling the Future
HFS Pellet Injection: R. Samtaney

Simulations Examine the Behavior of Frozen Fuel Pellets in Fusion Reactors

Imagine that there is a large chamber in hell that's shaped like a doughnut, and that you'd like to shoot a series of hailstones into that chamber so that as they melt, the water vapor penetrates as deeply as possible and disperses as evenly as possible throughout the chamber. Setting aside for a moment the question of why you would want to do that, consider the multitude of how questions: Should you shoot in the hailstones from outside the ring of the doughnut or from inside the hole? What size hailstones should you use? At what speed and angle should they enter the chamber?

This strange scenario is actually an analogy for one of the questions facing fusion energy researchers: how to refuel a tokamak. A tokamak is a machine that produces a toroidal (doughnut-shaped) magnetic field. In that field, two isotopes of hydrogen — deuterium and tritium — are heated to about 100 million degrees Celsius (more than six times hotter than the interior of the sun), stripping the electrons from the nuclei. The magnetic field makes the electrically charged particles follow spiral paths around the magnetic field lines, so that they spin around the torus in a fairly uniform flow and interact with each other, not with the walls of the tokamak.



When the hydrogen ions (nuclei) collide at high speeds, they fuse, releasing energy. If the fusion reaction can be sustained long enough that the amount of energy released exceeds the amount needed to heat the plasma, researchers will have reached their goal: a viable energy source from abundant fuel that produces no greenhouse gases and no long-lived radioactive byproducts.

High-speed injection of frozen hydrogen pellets is an experimentally proven method of refueling a tokamak. These pellets are about the size of small hailstones (3–6 mm) and have a temperature of about 10 degrees Celsius above absolute zero. The goal is to have these pellets penetrate as deeply as possible into the plasma so that the fuel disperses evenly.

Pellet injection will be the primary fueling method used in ITER (Latin for "the way"), a multinational tokamak experiment to be built at Cadarache in southern France. ITER, one of the highest strategic priorities of the DOE Office of Science, is expected to produce 500 million thermal watts of fusion power — 10 times more power than is needed to heat the plasma — when it reaches full operation around the year 2016. As the world's first production-scale fusion reactor, ITER will help answer questions about the most efficient ways to configure and operate future commercial reactors.

However, designing a pellet injection system that can effectively deliver fuel to the interior of ITER represents a special challenge because of its unprecedented large size and high temperatures. Experiments have shown that a pellet's penetration distance into the plasma depends strongly on how the injector is oriented in relation to the torus. For example, an "inside launch" (from inside the torus ring) results in better fuel distribution than an "outside launch" (from outside the ring).

In the past, progress in developing an efficient refueling strategy for ITER has required lengthy and expensive experiments. But thanks to a three-year, SciDAC-funded collaboration between the Computational Plasma Physics Theory Group at Princeton Plasma Physics Laboratory and the Advanced Numerical Algorithms Group at Lawrence Berkeley National Laboratory, computer codes have now reproduced some key experimental findings, resulting in significant progress toward the scientific goal of using simulations to predict the results of pellet injection in tokamaks.

"To understand refueling by pellet injection, we need to understand two phases of the physical process," said Ravi Samtaney, the Princeton researcher who is leading the code development effort. "The first phase is the transition from a frozen pellet to gaseous hydrogen, and the second phase is the distribution of that gas in the existing plasma."

FIGURE 1. These simulations show the results of pellet injection into a tokamak fusion reactor from the low-field side (LFS) and high-field side (HFS). The top row shows a time sequence of the density in LFS injection (at T = 2000, 32000, and 82000), while the bottom row shows the density evolution in HFS injection at the same times. The dominant motion of the ablated pellet mass is along field lines, accompanied by transport of material across flux surfaces toward the low-field side. This observation is qualitatively consistent with experimental observations, leading to the conclusion that HFS pellet injection is a more efficient refueling technique than LFS injection. The MHD instabilities which cause the pellet material to move toward the low-field side are currently under investigation.

The first phase is fairly well understood from experiments and theoretical studies. In this phase, called ablation, the outer layer of the frozen hydrogen pellet is quickly heated, transforming it from a solid into an expanding cloud of dense hydrogen gas surrounding the pellet. This gas quickly heats up, is ionized, and merges into the plasma. As ablation continues, the pellet shrinks until all of it has been gasified and ionized.

The second phase — the distribution of the hydrogen gas in the plasma — is less well understood. Ideally, the injected fuel would simply follow the magnetic field lines and the "flux surfaces" that they define, maintaining a stable and uniform plasma pressure. But experiments have shown that the high-density region around the pellet quickly heats up to form a local region of high pressure, higher than can be stably confined by the local magnetic field. A form of "local instability" (like a mini-tornado) then develops, causing the high-density region to rapidly move across, rather than along, the field lines and flux surfaces — a motion referred to as "anomalous" because it deviates from the large-scale motion of the plasma.

Fortunately, researchers have discovered that they can use this instability to their advantage by injecting the pellet from inside the torus ring, because from this starting point, the anomalous motion brings the fuel pellet closer to the center of the plasma, where it does the most good. This anomalous motion is one of the phenomena that Samtaney and his colleagues want to quantify and examine in detail.

Figure 3 shows the fuel distribution phase as simulated by Samtaney and his colleagues in the first detailed 3D calculations of pellet injection. The inside launch (top row) distributes the fuel in the central region of the plasma, as desired, while the outside launch (bottom row) disperses the fuel near the plasma boundary, as shown in experiments.

Simulating pellet injection in 3D is difficult because the physical processes span several decades of time and space scales. The large disparity between pellet size and tokamak size, the large density differences between the pellet ablation cloud and the ambient plasma, and the long-distance effects of electron heat transport all pose severe numerical challenges.


FIGURE 2. This illustration shows how computational problems are solved by dividing them into smaller pieces by covering them with a mesh. In this case, the fuel pellet for a fusion reactor is buried within the finest mesh, which occupies less than 0.015 percent of the volume of the coarsest mesh. Using a technique known as adaptive mesh refinement, scientists can focus the power of a supercomputer on the most interesting part of a problem, such as the fuel pellet in a hot plasma.


To overcome these difficulties, Samtaney and his collaborators used an algorithmic method called adaptive mesh refinement (AMR), which incorporates a range of scales that change dynamically as the calculation progresses. AMR allowed this simulation to run more than a hundred times faster than a uniform-mesh simulation.
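The idea behind adaptive mesh refinement is simple even though production implementations are not: cells are flagged wherever the solution varies sharply, and only those regions are re-solved on finer grids. The short Python sketch below illustrates just the flagging step under that assumption; it is not the PPPL/Berkeley Lab code, and the array names and threshold are invented for illustration.

import numpy as np

def flag_cells_for_refinement(density, threshold):
    """Mark cells whose density gradient magnitude exceeds a threshold.

    In a real block-structured AMR code the flagged cells are grouped into
    rectangular patches, solved on a finer grid with smaller time steps, and
    synchronized with the coarse solution; here we only show the flagging.
    """
    gx, gy, gz = np.gradient(density)
    return np.sqrt(gx**2 + gy**2 + gz**2) > threshold

# Hypothetical use: only the steep edge of the pellet's ablation cloud is refined.
density = np.ones((64, 64, 64))
density[30:34, 30:34, 30:34] = 100.0       # stand-in for the dense pellet cloud
flags = flag_cells_for_refinement(density, threshold=5.0)
print("fraction of cells flagged for refinement:", flags.mean())

Because only a tiny fraction of the domain ends up refined (compare the 0.015 percent figure quoted for the finest mesh in Figure 2), the work saved relative to a uniformly fine grid can easily reach the factor of one hundred reported here.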

While these first calculations represent an important advance in methodology, Samtaney's work on pellet injection is only beginning. "The results presented in this paper did not include all the detailed physical processes which we're starting to incorporate, along with more realistic physical parameters," he said. "For example, we plan to develop models that incorporate the perpendicular transport of the ablated mass. We also want to investigate other launch locations. And, of course, we'll have to validate all those results against existing experiments."

This pellet injection model will eventually become part of a comprehensive predictive capability for ITER, which its supporters hope will bring fusion energy within reach as a commercial source of electrical power.


FIGURE 3. Time sequence of 2D slices from a 3D simulation of the injection of a fuel pellet into a tokamak plasma. Injection from outside the torus (bottom row, injection from right) results in the pellet stalling and fuel being dispersed near the plasma boundary. Injection from inside the torus (top row, injection from left) achieves fuel distribution in the hot central region, as desired.


Controlling fire to provide heat and light was one of humankind's first great achievements, and the basic chemistry of combustion — what goes in and what comes out — was established long ago. But a complete quantitative understanding of what happens during the combustion process has remained as elusive as the ever-changing shape of a flame.

Burning Questions
New 3D Simulations Are Closing in on the Holy Grail of Combustion Science: Turbulence–Chemistry Interactions


Even a simple fuel like methane (CH4), the principal component of natural gas, burns in a complex sequence of steps. Oxygen atoms gradually replace other atoms in the hydrocarbon molecules, ultimately leaving carbon dioxide and water. Methane oxidation involves about 20 chemical species for releasing energy, as well as many minor species that can become pollutants. Turbulence can distort the distribution of species and the redistribution of thermal energy which are required to keep the flame burning. These turbulence–chemistry interactions can cause the flame to burn faster or slower and to create more or less pollution.

The holy grail of combustion science has been to observe these turbulence–chemistry effects. Over the past few decades amazing progress has been made in observational techniques, including the use of lasers to excite molecules of a given species and produce a picture of the chemical distribution. But laser imaging is limited in the species and concentrations that can be reliably observed, and it is difficult to obtain simultaneous images to correlate different chemical species.

Because observing the details of combustion is so difficult, progress in combustion science has largely coincided with advances in scientific computing. For example, while basic concepts for solving one-dimensional flat flames originated in the 1950s, it only became possible to solve the 1D flame equations some 30 years later using Cray-1 supercomputers. Those calculations, which are routine on personal computers today, enabled a renaissance in combustion science by allowing chemists to observe the interrelationships among the many hypothesized reaction processes in the flame.

Simulating three-dimensional turbulent flames took 20 more years of advances in applied mathematics, computer science, and computer hardware, particularly massively parallel systems. But the effort has been worth it. New 3D simulations are beginning to provide the kind of detailed information about the structure and dynamics of turbulent flames that will be needed to design new low-emission, fuel-efficient combustion systems.

The first 3D simulation of a laboratory-scale turbulent flame from first principles — the result of a SciDAC-funded collaboration between computational and experimental scientists at Berkeley Lab — was featured on the cover of the July 19, 2005 Proceedings of the National Academy of Sciences (Figure 1). The article, written by John Bell, Marc Day, Ian Shepherd, Matthew Johnson, Robert Cheng, Joseph Grcar, Vincent Beckner, and Michael Lijewski, describes the simulation of "a laboratory-scale turbulent rod-stabilized premixed methane V-flame." This simulation was unprecedented in several aspects — the number of chemical species included, the number of chemical reactions modeled, and the overall size of the flame.

This simulation employed a different mathematical approach than has typically been used for combustion. Most combustion simulations designed for basic research use compressible flow equations that include sound waves, and are calculated with small time steps on very fine, uniform spatial grids — all of which makes them very computationally expensive.

FIGURE 1. The calculated surface of a turbulent premixed laboratory methane flame.


Because of limited computer time, such simulations often have been restricted to only two dimensions, to scales less than a centimeter, or to just a few carbon species and reactions.

In contrast, the Center for Computational Sciences and Engineering (CCSE), under Bell's leadership, has developed an algorithmic approach that combines low Mach-number equations, which remove sound waves from the computation, with adaptive mesh refinement, which bridges the wide range of spatial scales relevant to a laboratory experiment. This combined methodology strips away relatively unimportant aspects of the simulation and focuses computing resources on the most important processes, thus slashing the computational cost of combustion simulations by a factor of 10,000.
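One way to see where much of the savings comes from is the time-step limit of an explicit compressible solver, which must resolve sound waves. The estimates below use the standard CFL reasoning and are a sketch, not figures taken from the CCSE papers:

\Delta t_{\mathrm{compressible}} \lesssim \frac{\Delta x}{|u| + c},
\qquad
\Delta t_{\mathrm{low\,Mach}} \lesssim \frac{\Delta x}{|u|},
\qquad
\frac{\Delta t_{\mathrm{low\,Mach}}}{\Delta t_{\mathrm{compressible}}} \approx \frac{|u| + c}{|u|} \sim \frac{1}{M},

where u is the flow speed, c the sound speed, and M the Mach number, which is small for a laboratory flame. Combining this much longer allowable time step with adaptive mesh refinement, which concentrates fine cells near the thin flame surface, is what yields the overall cost reduction of roughly four orders of magnitude quoted above.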

Using this approach, the CCSE team has modeled turbulence and turbulence–chemistry interactions for a three-dimensional flame about 12 cm (4.7 in.) high, including 20 chemical species and 84 fundamental chemical reactions. The simulation was realistic enough to be compared directly with experimental diagnostics.

The simulation captured with remarkable fidelity some major features of the experimental data, such as flame-generated outward deflection in the unburned gases, inward flow convergence, and a centerline flow acceleration in the burned gases (Figure 2). The simulation results were found to match the experimental results within a few percent. This agreement directly validated both the computational method and the chemical model of hydrocarbon reaction and transport kinetics in a turbulent flame.

The results demonstrate that it is possible to simulate a laboratory-scale flame in three dimensions without having to sacrifice a realistic representation of chemical and transport processes. This advance has the potential to greatly increase our understanding of how fuels behave in the complicated environments inside turbulent flames.


FIGURE 2. Left: A typical centerline slice of the methane concentration obtained from the simulation. Right: Experimentally, the instantaneous flame location is determined by using the large differences in Mie scattering intensities from the reactants and products to clearly outline the flame. The wrinkling of the flame in the computation and the experiment is of similar size and structure.


In most first-year chemistry classes at universities, students begin by learning how atoms of carbon and other elements form bonds with other atoms to form molecules, which in turn combine to form ourselves, our planet and our universe.

Understanding the properties and behavior of molecules, or better yet, being able to predict and control that behavior, is the driving force behind modern chemistry. The development of the theory of quantum mechanics in the 1920s made it possible, in principle, to predict all the properties of molecules. One of its founders, Paul Dirac, wrote in 1929 that "the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble."

Almost 80 years later this is still true, even using the most powerful supercomputers. For example, a scientist could solve for the behavior of a molecule with one electron moving in one dimension on a grid of roughly 100 points spread along that line. Since electrons in free space move in three dimensions, not one, the scientist would need 100³ (one million) calculations. Adding each extra electron then entails about a million times more work, a condition known as exponential scaling, so it is no wonder that the largest exact calculations still involve no more than about 15 electrons. Even with anticipated supercomputing advances over the next decade, only one or maybe two more electrons could be treated this way.
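The arithmetic behind that statement can be written down in a few lines; the Python snippet below simply repeats the rough numbers used in the text (100 grid points per dimension, three dimensions per electron).

points_per_dimension = 100
dims_per_electron = 3

for electrons in range(1, 4):
    grid_points = points_per_dimension ** (dims_per_electron * electrons)
    print(f"{electrons} electron(s): about {grid_points:.0e} grid points")

# 1 electron : about 1e+06 grid points
# 2 electrons: about 1e+12 grid points
# 3 electrons: about 1e+18 grid points -- each added electron multiplies the
# work by roughly a million, which is why exact calculations stop near 15 electrons.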

Quantum chemistry, however, provides scientists with the models to approximate these values without doing all the calculations. But there is a tradeoff, since researchers need to balance accuracy against feasibility. A very simple model can be applied to a giant molecular system, but the results will be uselessly inaccurate. As noted, a very accurate model, due to its complexity, may only be feasible for systems of up to 15 electrons.

Pushing the Frontiers of Chemistry
Dirac's Vision Realized

However, when scientists are studying molecular systems in a field such as nanotechnology, they are interested in systems with at least a few hundred electrons.

Realizing that a brute-force approach would not succeed, the Advanced Methods for Electronic Structure: Local Coupled Cluster Theory project was designed to develop new methods which strike novel compromises between accuracy and feasibility. The goal was to extend coupled cluster electronic structure theory to larger molecules. This was achieved by developing both new theory and new high-performance algorithms. The resulting method scales much better (at the 3rd power of molecular size) than the old codes (at the 6th power), and can therefore be

Paul Dirac, one of the founders of quantum mechanics.


applied to much larger molecules using existing supercomputers or even commodity computers.
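A back-of-the-envelope comparison shows what that change in scaling buys; the sizes and ratios below are arbitrary illustrations, not benchmarks from the project.

def relative_cost(size_ratio, power):
    """Cost growth when the molecule becomes size_ratio times larger."""
    return size_ratio ** power

for ratio in (2, 5, 10):
    old = relative_cost(ratio, 6)   # conventional coupled cluster, ~N^6
    new = relative_cost(ratio, 3)   # the new local method, ~N^3
    print(f"{ratio:2d}x larger molecule: old cost x{old:,}, new cost x{new:,}")

# 2x larger:  old cost x64,        new cost x8
# 5x larger:  old cost x15,625,    new cost x125
# 10x larger: old cost x1,000,000, new cost x1,000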

One advantage the new method has over older methods is that it can be applied to systems where the electrons are highly correlated. In other words, the motions of the electrons are closely tied to each other, much like two ballroom dancers or figure skaters who go through their motions almost like a single unit. Such a condition is mathematically much harder to describe than systems with weaker correlation, which can be imagined as two friends dancing separately at a rock concert. Being able to treat highly correlated electrons reliably allows scientists to study interesting molecules which would otherwise be difficult.

In particular, scientists have long been interested in studying molecules in which the chemical bonds are breaking, as opposed to the more common states where the molecule is either stable, with the bonds intact, or unstable, in which the bonds are broken. In bond-breaking, the electrons are strongly correlated. Scientists are interested in studying molecules as they are on their way to breaking into fragments because this may give insights that allow the design of useful materials in areas ranging from molecular electronics and spintronics to self-assembly of functional nanostructures.

The project team studied a boron-phosphorus based material that caused excitement when it was reported as the first indefinitely stable singlet diradical with the characteristics of broken bonds. The species was found computationally to have around 17 percent broken bond character, rather than the much higher fraction believed by experimentalists. This explained the stability of the material, and the origin of the 17 percent result could be understood from the role of neighboring groups, which shows the ability to tune the reactivity of the molecule by chemical design.

Project leader Martin Head-Gordon notes that the algorithms are currently not suitable for all problems, but that the team is looking to develop them into general purpose codes, and is working on improved successors. He predicts that the project's work will make its way into the mainstream within five years, bringing new computational chemistry capabilities to many of the 40,000 chemists who use such applications. Already, project members are getting inquiries from experimental chemists asking if they can use their codes to study specific molecules.

This has led to collaborations investigating the diradical character of what may be the world's longest carbon-carbon bonds, as well as work in progress on triple bonds between heavier elements such as tin and germanium. Chemists are interested in whether these heavy elements can form triple bonds the way carbon does. Initial results indicate they do, but the behavior of the resulting molecules is very different.

Computation plays a key part in interpreting what experimentalists find and also in predicting what their experiments will produce, according to Head-Gordon, who has a joint appointment as a chemistry professor at the University of California, Berkeley, and DOE's Lawrence Berkeley National Laboratory. "High performance computing, together with new theory and algorithms, allows us to push the frontiers of chemistry and go where previous generations of chemists have not been able to go," Head-Gordon said.

FIGURE 1. Graphical representation of the highest occupied molecular orbital (HOMO, 0.83 electrons) and the lowest unoccupied molecular orbital (LUMO) for the boron-phosphorus material. Instead of being fully empty, the LUMO contains 0.17 electrons, corresponding to 17 percent broken bond character.


Particle accelerators are some of the most powerful experimental tools available for basic science, providing researchers with insight into the basic building blocks of matter and enabling some of the most remarkable discoveries of the 20th century. They have led to substantial advances in applied science and technology, such as nuclear medicine, and are being studied for potential application to problems related to energy and the environment.

Accelerators: Extraordinary Tools for Extraordinary Science

Given the importance of particle accelerators to the United States' scientific, industrial and economic competitiveness, bringing the most advanced high performance computing tools to bear on accelerator design, optimization and operation is in the national interest.

Within the DOE Office of Science, particle accelerators have enabled remarkable scientific discoveries and important technological advances that span several programs. In the High Energy Physics and Nuclear Physics programs, experiments associated with high-energy accelerators have led to important discoveries about elementary particles and the fundamental forces of nature, quark dynamics, and nuclear structure. In the Basic Energy Sciences and the Biological and Environmental Research programs, experiments with synchrotron light sources and spallation neutron sources have been crucial to advances in the materials, chemical and biological sciences. In the Fusion Energy Sciences program, great strides have been made in developing heavy-ion particle accelerators as drivers for high energy density physics research and ultimately inertial fusion energy. The importance of accelerators to the Office of Science mission is evident from an examination of the DOE "Facilities for the Future of Science: A Twenty-Year Outlook." Of the 28 facilities listed, 14 involve new or upgraded accelerator facilities.

The SciDAC Accelerator Science and Technology (AST) modeling project was a national research and development effort aimed at establishing a comprehensive terascale simulation environment needed to solve the most challenging problems in 21st century accelerator science and technology. The AST project had three focus areas: computational beam dynamics, computational electromagnetics, and modeling advanced accelerator concepts. The tools developed under this program are now being used by accelerator physicists and engineers across the country to solve the most challenging problems in accelerator design, analysis, and optimization. The following examples of scientific and engineering accomplishments achieved by the AST project illustrate how the project software is already improving these extraordinary tools for extraordinary science.

Driving Discovery in Existing Accelerators

DOE operates some of the world's most productive particle accelerators, and one component of the AST project focused on developing scientific codes to help researchers get even more science out of these facilities.

As part of the SciDAC Accelerator Science and Technology project, the Fermilab Computational Accelerator Physics group has developed the Synergia framework. Integrating and extending existing accelerator physics codes in combination with new codes developed by the group, Synergia is designed to be a general-purpose framework with an interface that is accessible to accelerator physicists who are not experts in simulation.

In recent years, accurate modeling of beam dynamics in high-current, low-energy accelerators has become necessary because of new machines under consideration for future applications, such as the High Energy Physics neutrino program, and the need to optimize the performance of existing machines, such as the Spallation Neutron Source and the Fermilab Booster. These machines are characterized by high currents and require excellent control of beam losses. A common problem in accelerators is that particles can stray from the beam core, creating what is known as a "halo," which can lead to beam loss. Understanding how the accelerator design and various physical phenomena (such as the space-charge effect) affect this halo formation is an essential component of accelerator modeling.

Several computer simulations of space-charge effects in circular accelerators using particle-in-cell techniques have been developed. Synergia is a package for state-of-the-art simulation of linear and circular accelerators with a fully three-dimensional treatment of space charge. Space-charge calculations are computationally intensive, typically requiring the use of parallel computers.

Synergia was designed to be distributable to the particle accelerator community. Since compiling hybrid code can be a complicated task which is further complicated by the diverse set of existing parallel computing environments, Synergia includes a "build" system that allows it to be compiled and run on various platforms without requiring the user to modify the code.

When Beams Collide

While some accelerator research involves firing a beam at a stationary target, other accelerators are used to generate particle beams which are targeted to collide with each other, resulting in millions of particles flying out from the collision. High intensity, tightly focused beams result in high "luminosity," a parameter which is proportional to the number of events seen in an accelerator's detectors. But high luminosity also leads to increased beam disruption due to "beam-beam" effects, which consequently lowers the luminosity.
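For head-on collisions of Gaussian bunches, a commonly used textbook estimate of the luminosity (a general formula, not specific to any one of the machines discussed here) is

L = \frac{f\, n_b\, N_1 N_2}{4\pi\, \sigma_x \sigma_y},

where f is the revolution frequency, n_b the number of bunches per beam, N_1 and N_2 the numbers of particles per bunch, and σx, σy the transverse beam sizes at the interaction point. The beam-beam force of one bunch acting on the other tends to blow up σx and σy and can drive instabilities, which is how pushing for higher luminosity can end up reducing it.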

To better study these effects, a parallel three-dimensional particle-in-cell code called BeamBeam3D was created to model beam-beam effects of colliding beams at high energy ring colliders. The resulting information can now be used to help accelerator operators determine the optimum parameters for colliders to maximize luminosity and hence maximize scientific output. Under SciDAC, BeamBeam3D was used to model colliding beams in several DOE accelerators including the Fermilab Tevatron, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, the PEP-II B-factory at Stanford Linear Accelerator Center, and the soon-to-be-operating Large Hadron Collider (LHC) at CERN. In the future, BeamBeam3D is expected to be used to model beam-beam effects in the proposed International Linear Collider.

BeamBeam3D simulations of RHIC are relevant to both RHIC operations and to the future operation of the LHC (due to the similar beam-beam conditions). In regard to RHIC, the code was used to study beam-beam effects in three areas. The first is coherent beam-beam effects. One problem is that beam-beam effects in hadron colliders, such as the one at RHIC, can create an oscillation under which the beams become unstable; this instability represents a roadblock to increasing the luminosity. Using BeamBeam3D, SciDAC researchers modeled the coherent beam-beam effects first during collisions of single bunches of particles, then during multiple bunch collisions. In the latter case, each beam has three bunches that are coupled to three bunches in the opposite beam at four interaction points. Using the BeamBeam3D code running on high performance computers, project members were able to calculate at which point the beams would become unstable. However, the group also found that by appropriately arranging the accelerator machine lattice parameters of the two rings at RHIC and generating sufficient tune separation between the two beams, the oscillation could be controlled to the extent that there is no risk of instability.

Second, using BeamBeam3D, the team studied emittance growth (a reduction in beam quality that affects the luminosity) in beams which are offset from one another. The team found that, while static offsets do not cause significant emittance growth over short time periods, the impact of offsets is much larger when they are time-modulated (due, for example, to mechanical vibrations in the final-focus magnets). This finding provides a potential mechanism to account for the extra emittance growth observed during machine operation.


Lastly, the team used BeamBeam3D to study long-range beam-beam effects at RHIC. Simulations performed at the energy level at which particles are injected into the accelerator showed a strong sensitivity of the long-range beam-beam effects to the machine tunes; this sensitivity was also observed in the experiments. The team subsequently modeled the long-range beam-beam effects at the higher energy levels at which the particles collide. Such simulations are providing insight and a means to numerically explore the parameter space for the future wire beam-beam compensation experiment planned at RHIC.

Shaping the Future of Accelerator Design

Although their scientific value is substantial, accelerators are very expensive and difficult to design, build and operate. As a result, the scientific community is now looking at international collaboration to develop the next generation of accelerators. One prime example is the International Linear Collider (ILC), the highest priority future project in the worldwide high energy physics community for probing into the fundamental nature of matter.

Presently, a large team of accelerator physicists and engineers from Europe, Asia and North America is working on the design of this tera-electronvolt-scale particle accelerator under a unified framework, called the Global Design Effort (GDE), to make the most of limited R&D resources. Under the AST project, modeling tools were developed and used to study the effectiveness of proposed designs and look for better solutions.

In the ILC design, two facing linear accelerators (linacs), each 20 kilometers long, hurl beams of electrons and positrons toward each other at nearly the speed of light. Each nanometer-scale beam containing ten billion electrons or positrons is accelerated down the linac by superconducting accelerating cavities, which give them more and more energy till they meet in an intense crossfire of collisions at the interaction point. At these energies, researchers anticipate significant discoveries that will lead to a radically new understanding of what the universe is made of and how it works. The energy of the ILC beams can be adjusted to home in on elementary particle processes of interest.

A critical component of the ILC linac is the superconducting radio frequency (SRF) accelerating cavity. The design, called TESLA, was created in the early 1990s and has been the focus of more than a decade of experimental R&D effort. Recently, a new cavity design has been proposed which has a higher accelerating gradient and 20 percent less cryogenics loss than the TESLA design – an important consideration, as the cavities must be cooled to extremely low temperatures (2 degrees Kelvin, or about –456 degrees Fahrenheit) to maintain superconductivity.

Accompanying intense experimental efforts at the KEK accelerator center in Japan and Jefferson Lab in Virginia, the development of this low-loss (LL) cavity has been greatly facilitated by the new parallel electromagnetic codes developed at the Stanford Linear Accelerator Center (SLAC) under the AST project. In collaboration with the SciDAC Terascale Optimal PDE Solvers Integrated Software Infrastructure Center (TOPS ISIC), the SLAC team created a nonlinear 3D parallel finite element eigensolver with which, for the first time, one can directly solve for the damped dipole modes in the 3D LL cavity, complete with higher-order mode (HOM) couplers. HOM damping is essential for stable transport of "long bunch trains" of particles as they race down the ILC linacs and for preserving the low emittance of the beams, since emittance growth reduces beam quality.

Previously, evaluating HOM damping in an SRF cavity took years to complete either experimentally or numerically. Today, using the tools developed under SciDAC, HOM calculations can be done in a matter of weeks with the new parallel solver running on NERSC's IBM SP3 and NCCS's Cray X1E supercomputers. In addition, the use of tetrahedral grids and higher-order basis functions has enabled solutions in the complex cavity geometry to be obtained with unprecedented accuracy. As a result, costly and time-consuming prototyping by the trial-and-error approach is avoided, as the LL cavity can be computationally designed as close to optimal as possible.
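At its mathematical core, a finite-element cavity mode calculation is a large generalized eigenvalue problem, K x = λ M x, with λ related to the square of a mode frequency. The toy Python example below only shows that mathematical shape on a small one-dimensional surrogate; the actual SLAC solver is parallel, handles damped (complex) modes, and works on unstructured tetrahedral meshes with millions of elements.

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

n = 200                                   # hypothetical number of degrees of freedom
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # "stiffness" matrix
M = identity(n, format="csc")                                          # "mass" matrix

# The few smallest eigenvalues correspond to the lowest-frequency modes.
eigenvalues, eigenvectors = eigsh(K, k=5, M=M, sigma=0.0, which="LM")
print("lowest eigenvalues:", np.sort(eigenvalues))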

Advanced Accelerators: Smaller, Faster, Futuristic

DOE is also looking at future-generation accelerators, in which lasers and plasmas would be used instead of electromagnetic cavities to accelerate particles.

FIGURE 1. Using software developed under SciDAC, accelerator scientists developed this model of the new low-loss cavity for the proposed International Linear Collider. The different colors illustrate how various sections were modeled in parallel on multiple processors. The end perspectives show the geometry details in the couplers, which could not previously be modeled because of the disparate length scales.


Such an approach could result in accelerators with 1,000 times the performance of current technology. The challenge is to control these high-gradient systems and then to string them together. These technologies would enable the development of ultra-compact accelerators, which would be measured in meters, not kilometers. However, experiments to date have yielded acceleration over very short distances (millimeters to centimeters), resulting in beams of modest quality.

One approach being explored is a plasma-wakefield accelerator, in which a drive beam — either an intense particle beam or a laser pulse — is sent through a uniform plasma. This creates a space-charge wake on which a trailing beam of particles can surf. SciDAC investigators in the Accelerator Science and Technology (AST) project have developed a suite of particle-in-cell (PIC) codes to model such devices.

SciDAC support has enabled the development of four independent, high-fidelity, particle-in-cell (PIC) codes: OSIRIS (fully explicit PIC), VORPAL (fully explicit PIC plus ponderomotive guiding center), QuickPIC (quasi-static PIC plus ponderomotive guiding center), and UPIC (a framework for rapid construction of new codes such as QuickPIC). Results from these codes were included in three articles in Nature and eight in Physical Review Letters. For example, members of the SciDAC AST project have performed large-scale simulations using the codes OSIRIS and VORPAL to help interpret laser- and plasma-wakefield accelerator experiments and to gain insight into the acceleration mechanism.

Researchers at Lawrence Berkeley National Laboratory took a giant step toward realizing the promise of laser wakefield acceleration by guiding and controlling extremely intense laser beams over greater distances than ever before to produce high-quality, energetic electron beams. The experimental results were analyzed by running the VORPAL plasma simulation code, developed with SciDAC support, on supercomputers at DOE's NERSC. This allowed scientists to see details of the evolution of the experiment, including the laser pulse breakup and the injection of particles into the laser-plasma accelerator, and to understand how the injection and acceleration occur in detail so that the experiment's designers can figure out how to optimize the process.

The results of the experiment and simulations were published as the cover article in the Sept. 30, 2004 issue of Nature (Figure 2).

Great progress has also been made in the development of reduced description models for laser/plasma simulation. One such example, QuickPIC, has been shown for some problems to provide answers as accurate as OSIRIS but with two to three orders of magnitude less computation time. Before the SciDAC effort, a full-scale simulation of a 1 TeV afterburner would have required 5 million node hours on a supercomputer (and thus was not done); after SciDAC, using QuickPIC, the simulation was done in 5,000 node hours on a cluster system.
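For reference, the quoted node-hour figures correspond to a speedup of three orders of magnitude:

\frac{5 \times 10^{6}\ \text{node-hours}}{5 \times 10^{3}\ \text{node-hours}} = 10^{3}.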

FIGURE 2. SciDAC codes were used to analyze the experiments in two of three featured articles on compact particle accelerators in the September 30, 2004 issue of Nature. The cover image was created using the VORPAL code developed by Tech-X Corp. partly with SciDAC support.


The long-term goals of high energy and nuclear physicists are to identify the fundamental building blocks of matter and to determine the interactions among them that give rise to the physical world we observe. Major progress towards these goals has been made through the development of the Standard Model of high energy physics. The Standard Model consists of two quantum field theories: the Weinberg-Salam theory of the electromagnetic and weak interactions, and quantum chromodynamics (QCD), the theory of the strong interactions.

Lattice QCD
Improved Formulations, Algorithms and Computers Result in Accurate Predictions of Strong Interaction Physics

The Standard Model has been enormously successful in explaining a wealth of data produced in accelerator and cosmic ray experiments over the past twenty-five years. However, our knowledge of the Standard Model is incomplete because it has been difficult to extract many of the most interesting predictions of QCD, those that depend on the strong coupling domain of the theory. The only way to extract these predictions from first principles and with controlled errors is through large-scale numerical simulations. These simulations are needed to obtain a quantitative understanding of the physical phenomena controlled by the strong interactions, to determine a number of the basic parameters of the Standard Model, and to make precise tests of the Standard Model's range of validity.

Despite the many successes of the Standard Model, it is believed that to understand physics at the shortest distances or highest energies, a more general theory, which unifies all four of the fundamental forces of nature, will be required. However, to determine where the Standard Model breaks down and new physics is required, one must first know what the Standard Model predicts.

Numerical simulations of QCD address problems that are at the heart of the Department of Energy's large experimental programs in high energy and nuclear physics. Among the major goals are (1) to calculate the effects of strong interactions on weak interaction processes to an accuracy needed to make precise tests of the Standard Model; (2) to determine the properties of strongly interacting matter under extreme conditions, such as those that existed in the very early development of the universe and are created today in relativistic heavy ion collisions; and (3) to calculate the masses of strongly interacting particles and obtain a quantitative understanding of their internal structure.

According to QCD, the fundamental building blocks of strongly interacting matter are quarks. Interactions among quarks are mediated by gluons, in a manner analogous to the way photons mediate the electromagnetic interactions. QCD interactions are so strong that one does not directly observe quarks and gluons under ordinary laboratory conditions. Instead, one observes strongly interacting particles, such as protons and neutrons, which are bound states of quarks and gluons. QCD was initially formulated in the four-dimensional space-time continuum; however, to carry out numerical simulations, one must reformulate the theory on a discrete four-dimensional lattice — hence the name lattice QCD.

Major progress has been made in the numerical study of QCD during the course of the SciDAC program through the introduction of improved formulations of QCD on the lattice, coupled with improved algorithms and major advances in the capabilities of high performance computers. These developments have allowed physicists to fully include the effects arising from the polarization of the QCD ground state due to the creation and annihilation of quark-antiquark pairs.

The inclusion of vacuum polarization effects has been the greatest challenge to performing accurate numerical calculations of QCD. Their importance is illustrated in Figure 1, where lattice QCD calculations of the masses of a number of strongly interacting particles, and the decay constants of the π and K mesons, are compared with their experimental values. In each case, agreement with experiment was found within statistical and systematic errors of 3% or less when vacuum polarization effects were included (right panel), but this was not the case when the uncontrolled approximation of ignoring vacuum polarization was made (left panel). This work was described in a "News and Views" article in Nature, as well as in a "News Focus" article in Science. Within the high energy physics community, it has been featured in Fermi News Today, Physics Today, and in a cover article in the CERN Courier.

A number of other important validations of lattice QCD methods have also been enabled by the SciDAC Program. They include the determination of the strong interaction coupling constant, in agreement with but with somewhat smaller errors than the world average from a number of different experimental determinations; the determination of the nucleon axial charge, which governs the β decay of a free neutron into a proton, electron, and neutrino; and the calculation of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element Vus to an accuracy comparable with experiment. (The CKM matrix describes how quarks couple to the weak interactions; its elements are fundamental parameters of the Standard Model.)

During the course of the SciDAC Program, the lattice QCD community moved from the validation of techniques, through the calculation of quantities that are well known experimentally, to the successful prediction of quantities that had not yet been measured. One important example was the prediction of the mass of the Bc meson. This exotic particle, which consists of a bottom quark and a charmed anti-quark, was first observed in 1998, but its mass was only poorly measured. With the aid of SciDAC-funded prototype clusters at Fermilab, the mass of the Bc meson was calculated to be 6304 ± 20 MeV, a dramatic improvement in accuracy over previous lattice calculations. Soon after this result was made public, the CDF experiment at Fermilab's Tevatron finished a new, precise measurement of the mass: 6287 ± 5 MeV, confirming the prediction from lattice QCD. The fine agreement was covered in the New Scientist, The Scotsman newspaper, and a "News and Views" article in Nature. The success of the lattice calculation was named one of the top physics stories of 2005 by Physics News Update, which described it as "the best-yet prediction of hadron masses using lattice QCD."
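Treating the quoted uncertainties as independent, the level of agreement is easy to check:

|6304 - 6287| = 17\ \text{MeV}, \qquad \sqrt{20^{2} + 5^{2}} \approx 21\ \text{MeV},

so the lattice prediction and the CDF measurement agree to well within one combined standard deviation.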

One of the major objectives of the field of lattice QCD is to determine the decay properties of pseudoscalar mesons with one light and one heavy quark. Strong interaction effects in leptonic decays are characterized by decay constants, while in semileptonic decays they are characterized by various form factors. The decay constants and form factors for B and Bs mesons, which contain heavy b quarks, play a critical role in tests of the Standard Model that are currently a major focus of the experimental program in high energy physics.

FIGURE 1. The ratio of several quantities calculated in lattice QCD to their experimental values (LatticeQCD/Exp't, plotted from 0.8 to 1.4, with 1% error bands indicated). The quantities shown include the mass combinations 3MΞ – MN and 2MBs – MΥ and the quarkonium splittings ψ(1P–1S), Υ(1D–1S), Υ(2P–1S), Υ(3S–1S), and Υ(1P–1S). The panel on the left shows results from the quenched approximation, and that on the right from full QCD.


These quantities are very difficult to measure experimentally, so accurate lattice calculations of them would be of great importance. On the other hand, the decay constants and form factors of D and Ds mesons, which contain heavy c quarks, are being measured to high accuracy by the CLEO-c Collaboration. Since the lattice techniques for studying mesons with c and b quarks are identical, these experiments provide an excellent opportunity to validate the lattice approach being used in the study of D and B decays.

The first lattice QCD results for the leptonic decay constants of the D and Ds mesons that fully took into account vacuum polarization effects were announced in June 2005. Within a few days of these results being made public, the CLEO-c Collaboration announced its experimental result for the decay constant of the D meson; and in April 2006 the BaBar Collaboration announced its determination of the decay constant of the Ds meson. In both cases the experiments confirmed the lattice QCD calculations with comparable errors. The lattice and experimental results for the decay of the D meson were the subjects of cover articles in the CERN Courier and the New Scientist.

Finally, the form factors that characterize the decay of a D meson into a K meson and leptons were predicted with lattice QCD in work that was also enabled by the SciDAC Program. The results were subsequently confirmed in experiments by the Focus and Belle collaborations. The lattice results are compared with experimental ones from Belle in Figure 2. The excellent agreement between theory and experiment provides one more piece of evidence that lattice QCD calculations are able to make accurate predictions of strong interaction physics. This work too was featured in Fermi News Today.

FIGURE 2. The semileptonic form factor f+(q²) for the decay of a D meson into a K meson, a lepton, and a neutrino (D → Klν), as a function of the momentum transfer to the leptons q², plotted relative to m²Ds*. The orange curve is the lattice result (Fermilab/MILC, hep-ph/0408306), and the blue points are the experimental results of the Belle Collaboration (Belle, hep-ex/0510003).


SOFTWARE FOR SCIENCE

Throughout the Office of Science research program are major scientific challenges that can best be addressed through advances in scientific supercomputing. These challenges include designing materials with selected properties, understanding and predicting global climate change, understanding and controlling plasma turbulence for fusion energy, designing new particle accelerators, and exploring basic questions in astrophysics.

A key component of the SciDAC program is the development of scientific challenge codes — new applications aimed at addressing key research areas and designed to take advantage of the capabilities of the most powerful supercomputers, and to run as efficiently as possible to make the most effective use of those systems.

This is a daunting problem.Current advances in computingtechnology are typicaly driven bymarket forces in the commercialsector, resulting in systems designedfor commerce, not scientific com-puting. Harnessing commercialcomputing technology for scientificresearch poses problems unlike thoseencountered in previous supercom-puters, both in magnitude as well asin kind. This problem will only besolved by increased investments incomputer software — in researchand development on scientific simu-

simulation codes as well as on the mathematical and computing systems software that underlie these codes.

In the following pages, descriptions of many of the software projects will illustrate how SciDAC has supported software development to benefit the offices of Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, and High Energy and Nuclear Physics.

Basic Energy Sciences: Understanding Energy at the Atomic Level

As one of the world’s leading sponsors of basic scientific research, the Department of Energy has long supported investigations into how atoms interact, how they form molecules and how groups of atoms and molecules react with one another. Understanding the forces at work among the most basic building blocks of matter can provide greater insight into how energy is released, how waste products are generated during chemical processes and how new materials can be developed.

Within the Office of Science, the Basic Energy Sciences (BES) program supports such fundamental research in materials sciences and engineering, chemistry, geosciences and molecular biosciences. Basic research supported by the BES program touches virtually every aspect of energy resources, production, conversion, efficiency, and waste mitigation. Under the SciDAC program, a number of projects were funded to support computational chemistry.

Research in chemistry leads to advances such as efficient combustion systems with reduced emissions of pollutants, new processes for converting solar energy, improved catalysts for producing fuels and chemicals, and better methods for


Scientific Challenge Codes: Designing Scientific Software for Next-Generation Supercomputers

For more than 40 years, the power of computer processors has doubled about every 18 months, following a pattern known as Moore’s Law. This constant increase has resulted in desktop computers which today are more powerful than the supercomputers of yesteryear. Today’s supercomputers, at the same time, offer computing power which enables researchers to develop increasingly complex and accurate simulations to address the most challenging scientific problems. However, the software applications for studying these problems have not kept pace with the advances in computational hardware.


environmental remediation and waste management.

Over the past 50 years, molecular theory and modeling have advanced from merely helping explain the properties of molecules to the point where they provide exact predictive tools for describing the chemical reactions of three- and four-atom systems, the starting point for many of these chemical processes. This is increasingly important as researchers seek to understand more complex molecules and processes such as combustion, which involves complex interactions of chemistry with fluid dynamics.

The advances in computational chemistry in recent years in providing accurate descriptions of increasingly complex systems have come as much from improvements in theory and software as from improved computational hardware. However, the computational requirements often outweigh the scientific gains. For example, electronic structure theory provides the framework for modeling complex molecular-level systems (from atoms to thousands of atoms) with increasing accuracy. But accurate electronic structure theories have computational costs that rise with the 6th power (or higher) of molecular size, so that 10 times the computing resources translates into treating a system less than 1.5 times bigger. Predictive modeling of such processes is currently beyond the capabilities of existing computational resources and computational methods.
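The arithmetic behind that statement is straightforward. If the cost of a calculation grows as the sixth power of the system size N, then a tenfold increase in computing resources buys only a size increase of

\[ \frac{t(\alpha N)}{t(N)} = \alpha^{6} = 10 \quad\Longrightarrow\quad \alpha = 10^{1/6} \approx 1.47, \]

that is, a system less than 1.5 times larger, as stated above.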

Under the SciDAC program, BES supported a number of projects aimed at developing computational approaches to solving problems in the modeling of chemical processes that exceed current computational capabilities. These projects were selected to increase the accuracy of models, increase the size of systems which could be simulated and expand the capabilities of key applications already being used in the research community.

Two of the projects focused on improving computational modeling of combustion, which is key to 80 percent of the energy production in the United States. Developing new methods to make combustion more efficient and cleaner will benefit the economy, the environment and the quality of life. With such a prominent role in energy production and use, combustion has long been an important research focus for the DOE.

But fully understanding combustion is an extremely complex problem involving turbulent flows, many chemical species and a continuing series of interdependent chemical processes and reactions. As computers have become more powerful, more detailed combustion simulations can be created to help scientists better understand the process.

The Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability that can take advantage of the most powerful supercomputers available. The approach is based on direct numerical simulation

(DNS) to enable the highest accuracy and allow scientists to study the fine-scale physics found in turbulent reacting flows.

Under SciDAC, the simulation code named S3D has been enhanced with many new numerical algorithms and physical models to provide predictive capabilities for many of the detailed processes which occur during combustion, including thermal radiation, soot dynamics, spray injection and evaporation, and flame-wall interaction. The S3D code was used to perform detailed three-dimensional combustion simulations of flames in which fuel and oxygen are not premixed. This research could have applications in such areas as jet aircraft engines, where fuel and oxidizers are not premixed for safety reasons, and in direct-injection internal combustion engines. Under certain conditions, this type of combustion can suddenly and unexpectedly become extinguished, and this project is expected to lead to a better understanding of this problem, as well as re-ignition of extinguished flames. The team demonstrated the advanced DNS capability by undertaking several laboratory-scale simulations to


FIGURE 1: Simulations of turbulent nonpremixed ethylene-air flames reveal the detailed structure and dynamics of the soot formation process. Shown from left to right are instantaneous images of the vorticity, temperature, and soot volume fraction.


highlight fundamental aspects of turbulence-chemistry interaction occurring in many practical combustion systems.

Principal Investigators: Hong G. Im, University of Michigan; Arnaud Trouvé, University of Maryland; Christopher J. Rutland, University of Wisconsin; and Jacqueline H. Chen, Sandia National Laboratories

The Computational Facility for Reacting Flow Science project is aimed at advancing the state of the art in the understanding and prediction of chemical reaction processes, such as combustion, and their interactions with fluid flow. Such detailed reacting flow computations are computationally intensive, yet they provide information that is

necessary for the understanding and prediction of turbulent combustion. The project’s approach towards achieving this goal includes two broad focus areas. The first involves developing high-accuracy adaptive mesh refinement (AMR) algorithms and implementing them in a flexible software toolkit for reacting flow computations. The second involves the development of advanced

chemical analysis algorithms and software that enable both the extraction of enhanced physical understanding from reacting flow computations, and the development of chemical models of reduced complexity. The overall construction is being implemented in the context of the common component architecture (CCA) framework. The assembled software will be applied to targeted reacting flow problems, in two and three dimensions, and validated with respect to reacting flow databases at the Combustion Research Facility of Sandia National Laboratories.

Principal Investigator: Habib Najm, Sandia National Laboratories

Another set of projects focused on improving applications for modeling electronic structure. Electronic structure calculations, followed by dynamical and other types of molecular simulations, are recognized as a cornerstone for the understanding and successful modeling of chemical processes and properties that are relevant to combustion, catalysis, photochemistry, and photobiology.

The Advanced Methods for Electronic Structure project advanced the capabilities of quantum chemical methods to describe efficiently and with controllable accuracy the electronic structure, statistical mechanics, and dynamics of atoms, molecules and clusters. The project, which had goals of improving the

speed, scalability and accuracy of electronic structure applications, had two thrusts.

(1) The Multiresolution Quantum Chemistry — Guaranteed Precision and Speed component worked to increase the accuracy of chemistry codes as they are scaled up to run on larger systems, while also reducing the amount of processing time needed to run the calculations. Additionally, computational chemistry is expected to benefit from the resulting computational framework, which is substantially simpler than conventional atomic orbital methods, has robust guarantees of both speed and precision, and is applicable to large systems. In contrast, current conventional methods are severely limited in both the attainable precision and the size of system that may be studied. The project achieved its goals for effective one-electron theories and is studying many-body theories. The electronic structure methods should be valuable in many disciplines, and the underlying numerical methods are broadly applicable.

Principal Investigator: Robert Harrison, Oak Ridge National Laboratory

(2) The component to develop Advanced Methods for Electronic Structure: Local Coupled Cluster Theory was designed to extend coupled cluster (CC) electronic structure theory to larger molecules (“coupled cluster” is a numerical technique used for describing many-body systems). This was achieved by developing novel “fast” coupled cluster algorithms that are much more scalable with respect to system size, to more fully realize the potential of high performance computing for treating larger molecular systems. CC methods are well established as the wave-function-based electronic structure method of choice. However, even the simplest CC method, incorporating just correlations between pairs of electrons (double substitutions), still requires computational costs that

FIGURE 3. Molecular orbital of the benzene dimer with the adaptive grid and an isosurface.

FIGURE 2. Reaction-diffusion high-order AMR computations of the propagation of random premixed hydrogen-oxygen ignition kernels in two dimensions. The temperature field is shown, indicating cold reactants (blue) and hot combustion products (red), separated by the propagating flame fronts.



scale with the 6th power of the size of the molecule. This research focused on defining new and powerful “local correlation” models that reduce the scaling of coupled cluster calculations, and on developing effective algorithms for their implementation.
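To make the scaling concrete (a back-of-the-envelope illustration, not a statement about any particular code): if the cost of a canonical pair-correlation calculation grows as

\[ t_{\mathrm{CC}} \;\propto\; N^{6}, \]

then simply doubling the size of the molecule multiplies the cost by 2^6 = 64. Local correlation models exploit the short-range character of electron correlation so that, for sufficiently large molecules, the cost grows far more slowly with N, which is what puts larger systems within reach.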

Principal Investigator: Martin Head-Gordon, Lawrence Berkeley National Laboratory

The project for Advancing Multi-Reference Methods in Electronic Structure Theory researched and developed new models for investigating detailed mechanisms of chemical reactions for complex systems, including biomolecular processes such as the interaction of protein molecules. Specifically, the project worked on the development of highly scalable electronic structure codes that are capable of predicting potential energy surfaces of very high accuracy, which is important since potential energy surfaces determine how atoms and chemical bonds rearrange during chemical reactions. The ability to study extended systems containing tens to thousands of atoms with reasonable accuracy is of paramount importance. The development of methods to adequately treat such systems requires highly scalable, highly correlated electronic structure methods interfaced with classical methods and mesoscale codes.

Principal Investigator: Mark S. Gordon, Ames Laboratory

The Advanced Software for the Calculation of Thermochemistry, Kinetics and Dynamics project consisted of two integrated programs to develop both scalable kinetics/dynamics software and infrastructure software. The overall thrust is to develop software to efficiently provide reliable thermochemistry (heat-related chemical reactions), kinetics, and dynamics for large molecular systems. Kinetics/dynamics studies compute how molecules move over a potential energy

surface. The first goal is to provide better thermochemistry for larger systems and improve the performance of related components in the Columbus computational chemistry software system. The second goal is to develop highly parallelized quantum dynamics and quantum kinetics software. Lastly, common component architecture techniques will be used to integrate kinetics and electronic structure software into a package that will allow users to compute kinetics information just by specifying the reactants. The infrastructure program’s focus is on preconditioners (tools which improve convergence rates of certain mathematical methods) and on potential energy surface fitting schemes that reduce the number of electronic structure calculations necessary for accurate kinetics and dynamics. The infrastructure software has benefited from Integrated Software Infrastructure Centers (ISIC) support.

Principal Investigators: Albert F. Wagner (2001-03), Ron Shepard (2004-05), Argonne National Laboratory

The Theoretical Chemical Dynamics Studies of Elementary Combustion Reactions project involved modeling the dynamics of chemical reactions of large polyatomic molecules and radicals important in combustion research, using a combination of theories and methods. These computationally challenging studies will test the accuracy of statistical theories for predicting reaction rates, which essentially neglect dynamical effects. This research is expected to help extend theoretical chemical dynamics to the treatment of complex chemical reactions of large polyatomic molecules.

Principal Investigator: Donald L. Thompson, University of Missouri

Biological and Environmental Research: Advanced Software for Studying Global Climate Change

In many fields of scientific research, scientists conduct experiments to test theories, then analyze the results and use the information to continue refining their theories and/or experiments as they gain additional insight. In the study of global climate change, however, the “experiment” takes place day after day, year after year, century after century, with the entire planet and its atmosphere as the “laboratory.” Climate change literally affects each and every person on Earth, and because any measures taken to address the issue will have far-reaching social, economic and political implications, it’s critical that we have confidence that the decisions being made are based on the best possible information.

Scientists have thought for more than 100 years that increasing atmospheric concentrations of carbon dioxide (CO2) and other greenhouse gases from human activity would cause the atmospheric layer closest to the Earth’s surface to warm by several degrees. Because of the complex feedbacks within the Earth system, precisely estimating the magnitude and rate of the warming, or understanding its effects on other aspects of climate, such as precipitation, is a daunting scientific challenge. Thanks to a wealth of new observational data and advances in computing technology, current climate models are able to reproduce the global average temperature trends observed over the twentieth century and provide evidence of the effect of human activity on today’s climate.

The Climate Change Research



Division in the DOE Office of Science was established “… to advance climate change science and improve climate change projections using state-of-the-science coupled climate models, on time scales of decades to centuries and space scales of regional to global.”

Fortunately, the growing power and capabilities of high-performance computers are giving climate researchers more accurate tools for analyzing and predicting climate change. Climate, whether in our neighborhood, our region or continent, is determined by many factors, some local, others global. Geography, air temperature, wind patterns, ocean temperature and currents and sea ice are among the forces that shape our climate. As a result, comprehensive climate models which couple together simulations of different processes are among the most complex and sophisticated computer codes in existence. While they can simulate most of the

continental-scale features of the observed climate, the models still cannot simulate, and therefore cannot predict, climate changes with the level of regional spatial accuracy desired for a complete understanding of the causes and effects of climate change.

As climate science advances and new knowledge is gained, there is a demand to incorporate even more factors into the models and to improve acknowledged shortcomings in existing models. These demands, which make the models more accurate, will unfortunately overwhelm even the most optimistic projections of computer power increases. So, a balance must be struck between the costs, benefits and tradeoffs required to allocate scarce computer and human resources to determine which improvements to include in the next generation of climate models.

Under SciDAC, teams of climate researchers worked to develop improved computational tools and

techniques for studying and modeling climate change, focusing on the numerical and computational aspects of climate modeling. In particular, SciDAC funding has enabled DOE researchers to participate with NSF and NASA researchers over an extended period in the design and implementation of new climate modeling capabilities. There are three major pieces, two of which pursued research in the academic arena that was longer term and more basic.

A National Consortium to Advance Global Climate Modeling

The major SciDAC effort in climate modeling was a multi-disciplinary project to accelerate development of the Community Climate System Model (CCSM), a computer model of the Earth’s climate that combines component models for


FIGURE 4. As supercomputers have become more powerful, climate researchers have developed codes with finer and finer resolution, allowing the models to incorporate more details which affect climate. The resulting climate models are more accurate. The panels show successively finer grid resolutions: R15, T42, T63, T85, T106 and T170. Images by Gary Strand, NCAR


the atmosphere, ocean, land and sea ice. The CCSM is developed by researchers from NSF, DOE, NASA and NOAA laboratories, as well as universities. As one of the leading climate models in the United States, CCSM has a large user base of several hundred climate scientists. The CCSM community contributed a large number of climate change simulation results for the periodic Intergovernmental Panel on Climate Change (IPCC) climate assessment report. The Collaborative Design and Development of the Community Climate System Model for Terascale Computers project had two goals.

The first goal was to improve software design and engineering of the CCSM and its component models and improve performance

portability across the wide variety of computer architectures required for climate assessment simulations. The second goal was to accelerate the introduction of new numerical algorithms and new physical processes within CCSM models.

Under SciDAC, a consortium of six national laboratories and researchers from the National Center for Atmospheric Research (NCAR) and NOAA worked together on a variety of improvements to the CCSM. In the early years of the five-year project, attention was focused on improving the performance and portability of CCSM on the wide variety of vector and scalar computers available to the community.

The atmosphere and ocean models were improved with the introduction of new flexible data

decomposition schemes to enable tuning for each computational platform. The sea ice and land models were restructured to improve performance, particularly for vector computers, and new software was developed to improve the coupling of the four components into a fully coupled model.
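The flavor of such a flexible decomposition can be illustrated with a minimal sketch. The code below is purely illustrative (the function and parameter names are hypothetical, not CCSM's actual interfaces); it shows the basic idea of splitting a latitude-longitude grid into blocks whose count and shape are run-time choices, then dealing them out to processors.

import math

# Illustrative only: split an nlat x nlon grid into tunable blocks and
# assign them round-robin to processors. (Hypothetical names, not CCSM.)
def decompose(nlat, nlon, nproc, blocks_per_proc=1):
    nblocks = nproc * blocks_per_proc
    # choose a roughly square factorization of the block count
    nby = max(b for b in range(1, int(math.isqrt(nblocks)) + 1) if nblocks % b == 0)
    nbx = nblocks // nby
    blocks = []
    for j in range(nby):
        for i in range(nbx):
            lat0, lat1 = j * nlat // nby, (j + 1) * nlat // nby
            lon0, lon1 = i * nlon // nbx, (i + 1) * nlon // nbx
            owner = (j * nbx + i) % nproc          # round-robin assignment
            blocks.append({"lats": (lat0, lat1),
                           "lons": (lon0, lon1),
                           "proc": owner})
    return blocks

# Example: a T85-like 128 x 256 grid on 64 processors, 2 blocks each.
layout = decompose(128, 256, nproc=64, blocks_per_proc=2)
print(len(layout), layout[0])

Making the block count and shape a run-time parameter is what allows the same field to be laid out one way on a vector machine (fewer, longer blocks) and another way on a cache-based scalar machine (more, smaller blocks).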

These changes enabled the largest ensemble of simulations — 10,000 years worth — of any modeling group in the world for the recent IPCC assessment. These simulations were performed at relatively high resolution, generating more than 110 terabytes of climate model data, which were distributed internationally via the SciDAC-funded Earth System Grid. This important contribution to the international climate research effort was made



FIGURE 5. Using the Community Climate System Model, climate scientists ran a fully coupled simulation of the global carbon cycle to study how CO2 is either taken up or released by the ocean. Understanding this transfer is important because a significant fraction of anthropogenic CO2 emissions is currently absorbed by the ocean, thus slowing down the accumulation of CO2 in the atmosphere. This image shows the average exchange (in tonnes of carbon per square kilometer per year) in the last month (December) of a nine-year simulation. Areas with positive values (shown ranging from yellow to pink) indicate where CO2 is being taken up by the ocean, and negative values (shown in green ranging to purple) indicate where CO2 is being released. When the CO2 in the ocean is out of equilibrium with that in the atmosphere, CO2 is exchanged between the two. Nonequilibrium can occur due to the temperature of the water, with the Northern Hemisphere (colder water in December) able to absorb more CO2 than the relatively warm Southern Ocean. Biological activity can also alter the equilibrium when microscopic plants use CO2 in the water for photosynthesis, which is then replaced by absorption from the atmosphere. The yellow and orange areas in the Southern Ocean demonstrate this phenomenon, showing uptake resulting from a seasonal phytoplankton bloom.


possible by the critical software engineering work performed by the SciDAC team members on all the components of CCSM.

In addition to software expertise, the SciDAC CCSM consortium also contributed new model algorithms and new scientific capabilities. The consortium contributed to the introduction of a new finite-volume method for simulating atmospheric dynamics and developed new alternative schemes for ocean models as well.

Details of the consortium’s software engineering of the CCSM were reported in articles featured in a special issue on climate modeling of the International Journal of High Performance Computing and Applications (Vol. 19, No. 3, 2005).

In the last years of the project, the focus of the SciDAC consortium was the development of new capabilities for simulating the carbon and sulfur cycles as part of the climate system. Until recently, most climate change scenarios specified a concentration of greenhouse gases and atmospheric aerosols. By adding the biological and chemical processes that govern the absorption and emission of greenhouse gases, better simulations of how the Earth system responds to human-caused emissions are possible (see Figure 5). A prototype carbon-climate-biogeochemistry model was assembled as a demonstration of the readiness to undertake coupled Earth system simulation at this dramatically new level of complexity.

The new model included a comprehensive atmospheric chemistry formulation as well as land and ocean ecosystem models. In this first step towards a comprehensive Earth system model, CO2 fluxes were exchanged between components (see Figure 5) and the oceanic flux of dimethyl sulfide (DMS) was used in the atmospheric model to create sulfate aerosols. These aerosols then interacted with and affected the physical climate system, the oceanic carbon cycle and terrestrial

ecosystems. The prototype model developed under SciDAC will form the basis for future work on a comprehensive Earth system model and help the climate community enter a new phase of climate change research.

Principal Investigators: John Drake, Oak Ridge National Laboratory; Phil Jones, Los Alamos National Laboratory

A Brand New Model

Under SciDAC, DOE’s Office of

Biological and Environmental Research pursued the novel concept of a five-year cooperative agreement with one or two universities to build the prototype climate model of the future. The concept was to develop

a new climate model without the legacies of past approaches, which may be scientifically or computationally outdated. Only one qualified proposal was submitted, that from a group centered at Colorado State University, which is developing a model using a radically different grid design, alternative formulations for both the atmosphere and

ocean components and alternative numerical methods.

The project, A Geodesic Climate Model with Quasi-Lagrangian Vertical Coordinates, was created to develop a new, comprehensive model of the Earth's climate system that includes model components for the atmosphere, ocean, sea ice, and land surface, along with a model coupler to physically link the components. A multi-institutional collaboration of universities and government laboratories in an integrated program of climate science, applied mathematics and computational science, this project built upon capabilities in climate modeling, advanced mathematical

research, and high-end computer architectures to provide useful projections of climate variability and change at regional to global scales. The project team developed components for modeling the atmosphere, oceans and sea ice, and a coupler component to link the physical model components. For example, the coupler computes the


FIGURE 6. By coupling different climate modeling components, climate researchers can take advantage of advancements in the separate codes to improve overall accuracy.


latent heat flux exchanged between the ocean surface and atmosphere. This physical coupling enables the coupled system to evolve in a coherent manner. As the pieces come together, the integrated system is being thoroughly tested.
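For readers unfamiliar with the quantity being exchanged: couplers typically compute this flux from the model states on either side of the air-sea interface. A standard bulk aerodynamic form (quoted here only for orientation; the exact formulation used in this model's coupler may differ) is

\[ Q_E \;=\; \rho_a\, L_v\, C_E\, U\,\bigl( q_s(T_s) - q_a \bigr), \]

where \rho_a is the air density, L_v the latent heat of vaporization, C_E an exchange coefficient, U the near-surface wind speed, q_s(T_s) the saturation specific humidity at the sea surface temperature, and q_a the near-surface atmospheric specific humidity.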

Principal Investigator: David A. Randall, Colorado State University

University-Led Climate Modeling Projects

SciDAC supported two rounds of university grants to individual researchers to address the 5- to 15-year needs and opportunities to advance climate modeling. These grants supported research into new methodologies and numerical methods. It is this basic research that explores new ideas and concepts that tie climate science and computational science together as both fields advance.

A major factor in climate change between decades is the effect of winds on ocean circulation. The project on Predictive Understanding of the Oceans: Wind-Driven Circulation on Interdecadal Time Scales was aimed at developing and

applying advanced computational and statistical methods to help climate models better predict these effects. The computational aspect of the project was aimed at developing efficient multi-level methods to simulate these ocean flows and study their dependence on physically relevant parameters. The oceanographic and climate work consists of applying these methods to study the bifurcations in the wind-driven circulation and their relevance to the flows observed at present and those that might occur in a warmer climate. Both aspects of the work are crucial for the efficient treatment of large-scale, eddy-resolving numerical simulations of the oceans and an increased understanding of climate change.

Principal Investigators: Michael Ghil, UCLA; and Roger Temam, Indiana University

With the advent of more powerful supercomputers and modeling applications, climate models’ resolution of the globe has become increasingly precise — to the extent that some models can focus on an area as small as 10 kilometers square. While this is useful for large-scale climate

conditions, understanding smaller phenomena such as tropical storms requires 1-km resolution. The Continuous Dynamic Grid Adaptation in a Global Atmospheric Model project is aimed at providing a capability to break down the larger grid structure to target areas of interest. While climate models are not expected to provide a uniform 1-km resolution within the next 15 years, this grid adaptation capability is a promising approach to providing such resolution for specific conditions.

Principal Investigators: J.M. Prusa and W.J. Gutowski, Iowa State University

While much discussion of climate change focuses on the global scale, there is also increasing demand for climate modeling on the regional or mesoscale, covering areas from 50 to several hundred miles in size. The project to develop Decadal Climate Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques focused on using developed and evolving state-of-the-art general circulation models (GCMs) with enhanced variable and uniform resolution to run on parallel-processing terascale supercomputers. The major accomplishment of the project was completion of the international SGMIP-1 (Stretched-Grid Model Intercomparison Project, phase-1), which allows smaller regional models to be computed as part of global modeling systems. The “stretched grid” approach provides an efficient down-scaling to mesoscales over the area of interest and computes the interactions of this area with the larger model. This capability is expected to improve our understanding of climate effects on floods, droughts, and monsoons.

Collaboration with J. Côté of MSC/RPN and his group is a strong, integral part of the joint effort. The companion study with the members of another SciDAC group, F. Baer and J. Tribbia, is devoted to developing a stretched-grid GCM using the advanced spectral-element

FIGURE 7. Adapted grid with block-data structure.



technique with variable resolution, and the NCAR CAM physics.

Principal Investigator: Michael S. Fox-Rabinovitz, University of Maryland

The project for Development of an Atmospheric Climate Model with Self-Adapting Grid and Physics was funded to develop adaptive grid techniques for future climate models and weather predictions. This approach will lead to new insights into small-scale and large-scale flow interactions that cannot be modeled by current uniform-grid simulations. This project will result in a climate model that self-adjusts the horizontal grid resolution and the complexity of the physics module to the atmospheric flow conditions. Using an approach called adaptive mesh refinement to model atmospheric motion will improve horizontal resolution in a limited region without requiring a fine grid resolution throughout the entire model domain. Therefore, the model domain to be resolved with higher resolution is kept at a minimum, greatly reducing computer memory and speed requirements. Several tests show that these modeling procedures are stable and accurate.
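The refinement idea itself is simple to sketch. The code below is a generic, illustrative example (not this project's model or code): cells are flagged where a chosen field varies sharply, and only those cells are subdivided.

import numpy as np

# Illustrative adaptive-refinement sketch: flag coarse cells with sharp
# gradients and refine only those, leaving the rest of the domain coarse.
def flag_cells(field, threshold):
    """Flag coarse cells whose local gradient magnitude exceeds threshold."""
    gy, gx = np.gradient(field)
    return np.hypot(gx, gy) > threshold

def refine(flags, ratio=2):
    """Return the fine-cell indices generated from flagged coarse cells."""
    fine = []
    for j, i in zip(*np.nonzero(flags)):
        fine.extend((ratio * j + dj, ratio * i + di)
                    for dj in range(ratio) for di in range(ratio))
    return fine

coarse = np.random.rand(32, 32)          # stand-in for, e.g., a vorticity field
flags = flag_cells(coarse, threshold=0.4)
print(f"refining {flags.sum()} of {flags.size} coarse cells "
      f"-> {len(refine(flags))} fine cells")

In an atmospheric model the flagging criterion would be a physically motivated quantity (for example, vorticity or moisture gradients near a developing storm), and the refined patches would typically carry their own, smaller time steps.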

Principal Investigator: Joyce E. Penner, University of Michigan

The credibility of ocean models for climate research depends on their ability to simulate the observed state and natural variations of the oceans as indicated by actual measurements. Existing models, which can be used for simulating the long time scales of climate change of the order of centuries, still do not provide a very satisfactory treatment of key climatic processes such as water mass formation in the subpolar oceans. Ocean models based on Cartesian coordinates have been well tested and their drawbacks are well known. Models based on a moving vertical coordinate have the potential to provide a much more accurate simulation, but are not “mature” enough at present to gain widespread

acceptance in the climate modeling community. The project to develop A Vertical Structure Module for Isopycnal Ocean Circulation Models aimed at providing a module for representing key processes in such a model and organizing the vertical structure. The module can then be inserted in the ‘dynamic core’ of existing models and used by the modeling community.

Principal Investigator: Kirk Bryan, Princeton University

The advent of terascale supercomputers has advanced climate modeling by allowing the use of higher-resolution atmospheric and ocean models. However, more dramatic improvements are likely through development of improved descriptions of physical processes treated by the models, especially those that can be observationally characterized in much finer detail than is conventionally included in the climate models. The project for Improving the Processes of Land-Atmosphere Interaction in CCSM 2.0 at High Resolution has contributed to this goal by advancing the treatment of land with much improved details of the climatically most important processes, using primarily the Community Land Model (CLM).

Principal Investigator: Robert E. Dickinson, Georgia Institute of Technology

As part of the effort to develop climate models for making reliable climate predictions, one need is for an efficient and accurate method for producing regional climate predictions, along with a computing methodology that uses the latest in computing hardware (massively parallel processing, or MPP) most efficiently and economically to produce the best prediction results with minimal expenditure of resources. To meet this goal, the Multi-Resolution Climate Modeling project developed a Spectral Element Atmospheric Model (SEAM), a fairly recent concept using spectral

elements (a method of approximating mathematical solutions). The Earth’s spherical domain is tiled with spectral elements that can be arbitrarily sized to meet local scaling requirements, allowing the model to create predictions over a range of scales on the entire global domain without user involvement in the computational process. The method also takes optimum advantage of state-of-the-art MPP by minimizing communication among elements and thereby amongst processors. This procedure has yielded dramatic speedup, making the production of multiple realizations more feasible. In addition to serving as a research and training tool, the model is expected to provide more accurate climate predictions.

Principal Investigator: Ferdinand Baer, University of Maryland

The Decadal Variability in the Coupled Ocean-Atmosphere System project researched the slowly changing circulations in the oceans and their influence on long-term global climate variability. There were two main themes in the project. The first concerned the decadal variability of upper ocean circulation and its role in the long period fluctuations of the coupled ocean/atmosphere system. The second was the role of mesoscale eddies — the oceanic flows on scales of 10 to 100 km — in the large-scale heat and energy budget of the ocean. The model was developed and tested on a smaller computer and will be adapted for use on larger massively parallel supercomputers.

Principal Investigator: Paola Cessi, Scripps Institution of Oceanography – University of California, San Diego

A project called Towards the Prediction of Decadal to Multi-Century Processes in a High-Throughput Climate System Model pursued interdisciplinary research to simulate decadal to multi-century global variability and change in an earth system model that couples climate to the



terrestrial ecosystem. The primary tool will be a high-throughput climate-ecosystem model called FOAM that enables rapid turnaround on long climate runs. FOAM is used to continue the study of the mechanisms of decadal climate variability, the dynamics of global warming, and the interaction of climate and the land biosphere. The project also linked climate scientists with ecosystem modelers in building a fully coupled ocean-atmosphere-land biosphere model, which enabled the study of the interaction of land vegetation changes and climate changes for past, present, and future scenarios.

Principal Investigator: Zhengyu Liu, University of Wisconsin-Madison

Vegetation is an important component of the global climate system through its control of the fluxes of

energy, water, carbon dioxide and nitrogen over land surfaces. The aim of the project for Modeling Dynamic Vegetation for Decadal to Century Climate Change Studies is to develop, evaluate, utilize and make available a model of vegetation/soil dynamics to improve the ability of general circulation models (GCMs) to make predictions of future climate change. The model is being developed by combining treatments of carbon and nitrogen fluxes, and vegetation community dynamics, as a standalone module within the NASA Goddard Institute for Space Studies GCM. This model will be a tool for answering questions about past climate and vegetation distributions, as well as for predicting global changes due to rising atmospheric CO2 in the coming decades and century. In one

scenario, in which the concentration of CO2 in the atmosphere was doubled, the model showed that vegetation increased the uptake of CO2 by 48 percent and that surface temperatures in some regions increased by up to 2° C due to stomatal closure.

Principal Investigator: Nancy Y. Kiang, Columbia University

Gaining a better understanding of the relationship between climate change and the transport of water vapor and chemicals in the atmosphere was the aim of the Modeling and Analysis of the Earth’s Hydrologic Cycle project. This research addresses fundamental issues underlying the understanding and modeling of hydrologic processes, including surface evaporation, long-range transport of water, and the release of latent heating through evaporation. These processes play a central role in climate. The goals of the project are to advance climate change modeling by developing a hybrid isentropic model (in which the flow variables change gradually) for global and regional climate simulations, to advance the understanding of physical processes involving water substances and the transport of trace constituents, and to examine the limits of global and regional climate predictability.

Principal Investigator: Donald R. Johnson, University of Wisconsin – Madison

Fueling the Future: Plasma Physics and Fusion Energy

Plasmas, or very hot ionized gases, make up more than 99 percent of the visible universe. In fact, the earth’s sun and the stars we see at night are masses of plasma burning at millions of degrees Celsius. Inside these stars, the incredibly high temperatures and pressures result in atoms fusing together and


[FIGURE 8 diagram labels: N, CO2, H2O; gap dynamics, mortality, fire (months/years; Moorcroft et al., 2001); growth (Friend, in prep.); albedo (Ni-Meister et al., 1999); soil carbon; soil nitrogen; soil moisture; conductance/photosynthesis (minutes; Friend & Kiang, in prep.; Kull & Kruijt, 1999); leaf area, nitrogen; radiation; soil biochemistry; hydrology; atmosphere; GCM.]

FIGURE 8. Diagram of the dynamic vegetation embedded within the atmospheric and land surface components of a GCM.


releasing more energy. For much of the 20th century, scientists have investigated whether similar conditions could be created and used on Earth as a source of energy.

Over the last 30 years, computing has evolved as a powerful tool in scientific research, including fusion. High performance computers are now routinely being used to advance our understanding of fusion energy, with the goal of harnessing the same power source as the sun in specially built reactors to provide clean and almost unlimited energy. But just as fusion energy is filled with huge potential, high performance computing also presents daunting scientific challenges.

Most of DOE’s current research, both in experimental facilities and in computational science, focuses on magnetic fusion research. One approach centers on reactors with doughnut-shaped chambers called tokamaks, which would be used to heat ionized gas to about 100 million degrees centigrade, then use powerful magnetic fields to confine and compress the plasma until hydrogen atoms fuse together and release helium and energy. This sustained fusion reaction is known as a “burning plasma.” As envisioned, such reactors would generate more energy than they consume. Sustained fusion power generation requires that we understand the detailed physics of the fluid of ions and electrons, or plasma, in a fusion reactor.

With recent advances in supercomputer hardware and numerical algorithm efficiency, large-scale computational modeling can play an important role in the design and analysis of fusion devices. Among the nations engaged in developing magnetic confinement fusion, the U.S. remains the world leader in plasma simulation.

Computational techniques for simulating the growth and saturation of turbulent instabilities in this plasma are critical for developing this understanding of how to control

the plasma and hence to develop a successful fusion reactor. Achieving such a goal is a long-term objective, and many steps are needed before efficient fusion reactors can be designed and built. A number of smaller experimental reactors have been built in the U.S. and around the world. These facilities are expensive to build and operate, but give scientists valuable information for future designs.

Computational physics research has also helped give fusion scientists important insights into many aspects of plasma confinement, reactor fueling and the effects of turbulence on plasmas. In fact, turbulence has emerged as one of the biggest challenges in magnetic fusion — as the plasmas are heated and confined, turbulence occurs and introduces instabilities, which disrupt the confinement. If the plasmas cannot be confined, the plasmas come into contact with the reactor walls, losing temperature and making fusion impossible.

Under the SciDAC program, computational scientists, applied mathematicians and fusion researchers worked in teams to develop new tools and techniques for advancing fusion research, then used these methods to run more detailed simulations. The result is ever increasing knowledge and understanding about the forces at work within a fusion reactor. This information is being directly applied to the design and construction of ITER, a multinational facility to be built in France.

Since construction costs for ITER have been estimated at $12 billion and, once completed, operating costs could be up to $1 million per day, it’s critical that the newest, most detailed research findings be applied before the system is completed. Not only will this help minimize the number of costly modifications, but it will also mean a greater likelihood of success.

The results can also be applied on a broader scale to increase our

understanding of the many complex physical phenomena in the universe, which we are only starting to understand. Being able to capture and reproduce the phenomenon of fusion on Earth would solve the world’s energy problems in an economically and environmentally sustainable manner.

Like other fields, fusion science has developed subfields such as plasma microturbulence theory, magnetohydrodynamics (MHD) and transport theory to study the phenomena which occur, and computational science has greatly advanced research in these areas. Under SciDAC, multi-institution projects were created to advance understanding in specific areas to help advance fusion research around the world.

Center for Extended Magnetohydrodynamic Modeling

The Center for Extended Magnetohydrodynamic Modeling project was established to enable a realistic assessment of the mechanisms leading to disruptive and other stability limits in the present and next generation of fusion devices. Rather than starting from scratch, the project built on the work of fusion research teams which developed the NIMROD and M3D codes for magnetohydrodynamic (MHD) modeling. This discipline studies the dynamics of electrically conducting fluids such as plasmas. The main work involves extending and improving the realism of the leading 3D nonlinear magneto-fluid based models of hot, magnetized fusion plasmas, increasing their efficiency, and using this improved capability



to pioneer new terascale simulations of unprecedented realism and resolution. As a result, scientists gained new insights into low frequency, long-wavelength nonlinear dynamics in hot magnetized plasmas, some of the most critical and complex phenomena in plasma and fusion science. The underlying models are validated through comparisons with experimental results and other fusion codes.

An example of an important fusion problem that is being addressed by the center is the onset and nonlinear evolution of “edge localized modes” (ELMs) and their effect on plasma confinement and the reactor walls. By using the two MHD codes and data from the DIII-D fusion device operated by General Atomics in California, the center

was able to achieve a series of scientific milestones in 2005 and 2006. These modes shed thermal energy from the edge of the confinement region and, in their most virulent form, could overheat the walls near the plasma, and may also affect the core plasma. Using the codes advanced under this project, the center team was able to create more extensive simulations of the effects of ELMs than previously possible. The evolution of the temperature during the nonlinear evolution of an ELM in shot 113207 is shown in Figure 9, illustrating the formation of finger-like structures near the plasma edge with increasingly fine structure as the calculation progresses.

The project also made a number of modifications to the M3D code to improve the accurate representation of a number of conditions in tokamaks.

Principal Investigator: Steve Jardin, Princeton Plasma Physics Laboratory

The Plasma Microturbulence Project

A key goal of magnetic fusion programs worldwide is the construction and operation of a burning plasma experiment. The performance of such an experiment is determined by the rate at which energy is transported out of the hot core (where fusion reactions take place) to the colder edge plasma (which is in contact with material surfaces). The dominant mechanism for this transport of thermal energy is plasma microturbulence.

The development of terascale supercomputers and of efficient simulation algorithms provides a new means of studying plasma microturbulence — direct numerical simulation. Direct numerical simulation complements analytic theory by extending its reach beyond simplified limits. Simulation complements experiment because non-perturbative diagnostics measuring quantities of immediate

theoretical interest are easily implemented in simulations, while similar measurements in the laboratory are difficult or prohibitively expensive.

The development of tools in this project will significantly advance the interpretation of experimental confinement data and will be used to test theoretical ideas about electrostatic and electromagnetic turbulence. By fulfilling the objective of enabling direct comparisons between theory and experiment, direct numerical simulation will lead to improvements in confinement theory and increased confidence in theoretical confinement predictions. The Plasma Microturbulence Project (PMP) is addressing this opportunity through a program of code development, code validation, and expansion of the user community.

The project has implemented a number of code improvements for modeling different geometries and multi-species plasma models to simulate both the turbulent electric and magnetic fields in a realistic equilibrium geometry (see Figure 10). The project’s global codes are able to simulate plasmas as large as those in present experiments, and the even larger plasmas foreseen in burning plasma experiments.

The project’s algorithms scale nearly linearly with processor number to ~1000 processors, which is important as fusion codes are significant users of supercomputing resources at DOE centers.
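As a reminder of what “nearly linear” means here (a general definition, not a benchmark specific to this project), parallel speedup and efficiency on p processors are

\[ S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}, \]

where T(p) is the time to solution on p processors; near-linear scaling to ~1000 processors means S(p) stays close to p, i.e., E(p) remains close to 1, over that range.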



FIGURE 9. Evolution of the temperature during the nonlinear evolution of an edge localized mode in shot 113207.

FIGURE 10. The (turbulent) electrostatic potential from a GYRO simulation of plasma microturbulence in the DIII-D tokamak.


To help the fusion research community benefit from this investment in the GS2 and GYRO codes, the project has engaged researchers in helping to validate the codes against experimental results. Additionally, project members have conducted training workshops and presented talks on the codes at key fusion research meetings.

Principal Investigator: Bill Nevins, Lawrence Livermore National Laboratory

Center for Gyrokinetic Particle Simulations of Turbulent Transport in Burning Plasmas

The Center for Gyrokinetic Particle Simulations of Turbulent Transport in Burning Plasmas consortium was formed in 2004 to develop codes to simulate turbulent transport of particles and energy, and improve the confinement of burning plasmas in fusion reactors. In particular, the project is aimed at developing the capabilities for simulating burning plasma experiments at the scale of ITER, the international thermonuclear experimental reactor which is expected to be the first fusion reactor capable of sustaining a burning plasma when the machine goes on line in about 10 years.

Although fusion scientists have been creating simulation codes to study fusion for more than 30 years, ITER presents two unique challenges, according to Wei-li Lee, head of the center.

First, the size of the reactor and

the necessary simulations are large. ITER will have a major radius of 6.2 meters and a minor radius of two meters, and a correspondingly large confined plasma must be simulated. Second, the temperature inside ITER will be higher than in any other fusion reactor. This higher temperature means that new types of physics will be encountered. As a result, new numerical simulation models must be developed.

The new codes must also be created to perform as efficiently as possible so that the larger-scale simulations can be run on the nation’s most powerful supercomputers.

The major challenge for the project is to use simulations to better understand and minimize the problem of turbulence in the reactor. At the core of the reactor, the temperatures are at their highest. At the outside edges, the temperatures are lower. As with weather, when there are two regions with different temperatures, the area between is subject to turbulence. This turbulence provides a means for the charged particles in the plasma to move toward the outer edges of the reactor rather than fusing with other particles. If enough particles (and their energy) come into contact with the reactor wall, the particles lose temperature and the fusion reaction cannot be sustained.

Scientists understand that the difference in temperatures as well as densities is what causes the turbulence. But what is still not fully understood is the rate at which

particles are transported through the plasma by the turbulence. Experiments show that the particles are transported quite differently than theory suggests. One of the objectives of the center’s simulations is to bridge this gap between experiment and theory.

In the simulations, particles will fly around in the plasma according to Newton’s laws of motion, although in a much smaller number than will actually be present in the ITER plasma. Algorithms will be developed to follow the path of each simulated particle as it interacts with other particles and is affected by turbulence transport. The results of the simulations will then be compared with experimental results.

To model the particles, the group is using the gyrokinetic particle-in-cell (PIC) method, first developed in the early 1980s and widely adopted in the 1990s. Gyrokinetic refers to the motion of the particles in a strong magnetic field. In such a field, particles spiral along, with electrons spinning in one direction and ions in the opposite direction. By simplifying the spiraling motion as a charged ring, researchers are able to increase the time steps by several orders of magnitude without sacrificing scientific validity.
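The payoff of averaging over the fast gyration can be illustrated with a toy sketch. The code below is not the center's code; it keeps only a single drift term (the E x B drift, v_d = E x B / |B|^2) in uniform fields, assumptions made purely for illustration. It shows the kind of guiding-center update that lets the time step greatly exceed the gyro-period.

import numpy as np

# Illustrative sketch (not a production gyrokinetic code): advance the
# guiding centers of a few marker particles with the E x B drift,
# v_d = (E x B) / |B|^2, instead of resolving each gyro-orbit. Because
# the fast gyration is averaged out, dt can far exceed the gyro-period.
B = np.array([0.0, 0.0, 2.0])            # tesla, uniform field (assumption)
E = np.array([0.0, 1.0e3, 0.0])          # V/m, uniform field (assumption)

def exb_drift(E, B):
    return np.cross(E, B) / np.dot(B, B)

def push_guiding_centers(positions, dt, nsteps):
    v_d = exb_drift(E, B)                # constant here; field-dependent in general
    for _ in range(nsteps):
        positions = positions + v_d * dt  # simple explicit update
    return positions

# A handful of marker particles; real simulations follow billions.
gc = np.zeros((5, 3))
gc = push_guiding_centers(gc, dt=1.0e-6, nsteps=1000)
print("E x B drift velocity (m/s):", exb_drift(E, B))
print("final guiding-center positions:\n", gc)

A full gyrokinetic code adds the other drifts, gathers fields from a grid at each particle's position (the "particle-in-cell" part), and deposits charge back onto the grid to solve for the self-consistent fields.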

The project team also collaborated with the Terascale Optimal PDE Simulations (TOPS) Integrated Software Infrastructure Center (ISIC) established under SciDAC. By integrating other software libraries for solving the equations describing the interaction between the particles, the project team was able to solve a number of physics problems and integrate more realistic physics modules in their simulations. The result is that the simulations are expected to be able to model ITER-scale plasmas by solving a system of equations with millions of unknowns and following billions of particles in the computer.

Principal Investigator: W.W. Lee, Princeton Plasma Physics Laboratory

FIGURE 11. Potential contours of microturbulence for a magnetically confined plasma. The finger-like perturbations (streamers) stretch along the weak field side of the poloidal plane as they follow the magnetic field lines around the torus.



Understanding Magnetic Reconnection in Plasmas

The Center for Magnetic Reconnection Studies (CMRS) is a multi-university consortium dedicated to physical problems involving magnetic reconnection in fusion, space, and astrophysical plasmas. Understanding magnetic reconnection is one of the principal challenges in plasma physics. Reconnection is a process by which magnetic fields reconfigure themselves, releasing energy that can be converted to particle energies.

The goal of the Magnetic Reconnection project was to produce a unique high performance code to study magnetic reconnection in astrophysical plasmas, in smaller-scale laboratory experiments and in fusion devices. The principal computational product of the CMRS — the Magnetic Reconnection Code (MRC) — is a state-of-the-art, large-scale MHD code for carrying out magnetic reconnection research with accuracy and completeness in 2D and 3D. The MRC is massively parallel and modular, has the flexibility to change algorithms when necessary, and uses adaptive mesh refinement (AMR) (Figure 12). The

project team has applied the MRC to a wide spectrum of physical problems: internal disruption (Figure 13) and error-field induced islands in tokamaks, storms in the magnetosphere, solar/stellar flares, and vortex singularity formation in fluids.

Principal Investigator: Amitava Bhattacharjee, University of New Hampshire

Terascale Computational Atomic Physics for the Edge Region in Controlled Fusion Plasmas

Atomic physics plays a central role in many of the high temperature and high density plasmas found in magnetic and inertial confinement fusion experiments, which are crucial to our national energy and defense interests, as well as in technological plasmas important to the U.S. economic base. In turn, the development of the necessary atomic physics knowledge depends on advances in both experimental and computational approaches. The Terascale Computational Atomic Physics for the Edge Region in Controlled Fusion Plasmas project was established to develop a new generation of scientific simulation codes, through a broad ranging collaboration of atomic physicists and

computer scientists, which will take full advantage of national terascale computational facilities to address the present and future needs for atomic-scale information for fusion and other plasma environments.

Especially in regard to the needs of fusion, atomic and molecular collisions significantly influence the transport and energy balance in regions crucial for the next step of magnetic confinement fusion development, the divertor and edge of tokamaks. For example, the microscopic modeling of turbulence and transport in magnetic fusion edge plasmas relies heavily on an accurate knowledge of the underlying atomic processes, such as elastic scattering, vibrational energy transfer, mutual neutralization, and dissociative recombination. The transport and atomic conversion of radiation is also at the heart of inertial fusion experiments, where a vast array of electron and photon collision processes, as well as of radiative and electron cascade relaxations, is required to model, diagnose, guide, and understand experiments. In addition, most magnetic fusion diagnostics involve interpretation of observations of the conversion of

FIGURE 12. AMR allows the MRC code toplace refined numerical grids automaticallywhere the fine spatial structures requirethem, as shown in the figure where sharpspatial gradients at the center are resolvedby placing refined grids (see inset for zoom).The MRC code has the capability to refinenot only in space but also in time, which ismore efficient since one does not need totake unnecessarily small time steps in regionswhere the fields are smooth.

FIGURE 13. A simulation of the so-called kink-tearing instability in tokamaks that can causeinternal current disruption and core tempera-ture collapse. The two intertwining helicalflux tubes are formed as a result of recon-nection, and their 2D projections show mag-netic island structure.

FIGURE 14. Accurate modeling of plasmasrequires accurate knowledge of atomicprocesses, such as scattering. This series isfrom a simulation capturing all scatteringprocesses for the scattering of a wavepackfrom a helium atom.

56

Page 59: Progress report on the U.S. Department of Energy's Scientific

SOFTWARE FOR SCIENCE

fast electron energy to atomic ionlight emission, such as in the wellknown charge exchange recombina-tion spectroscopy. Development ofnew scientific simulation codes toaddress these needs will also benefitresearch involving a huge range oftechnical, astrophysical, and atmos-pheric plasmas that also depend onaccurate and large scale databases ofatomic processes.

Many of the project's atomic collision codes are being implemented on supercomputers at DOE computing centers in California and Tennessee. Results of these code developments will be disseminated openly to the relevant plasma science and atomic physics communities, and will enable new regimes of computation by taking advantage of terascale and successor facilities. All of the atomic collision codes have important applications in a variety of research areas outside controlled fusion plasma science.

Principal Investigator: Michael Pindzola, Auburn University

Numerical Computation of Wave-Plasma Interactions in Multi-Dimensional Systems

In order to achieve the extremely high temperatures — six times hotter than the core of the sun — needed to drive fusion reactions in a tokamak, scientists are studying different approaches to heating. One method involves radio waves, similar to the microwaves used to heat food. As the waves propagate through the plasma, they interact with the ions spinning in the plasma. When the ions' gyro-frequency resonates at the same frequency as the wave, the ions spin up to a very high energy, further heating the plasma.

The goal of Numerical Computation of Wave-Plasma Interactions in Multi-Dimensional Systems is to create two-dimensional and three-dimensional simulations of the interactions between the plasma and the waves to make accurate predictions of the processes. The results of these simulations will be used to design and build fusion reactors, including ITER. During the first three years of the project, the main focus was on wave interaction and conversion. Researchers discovered that, under certain conditions, the relatively long waves propagated from an antenna inside the tokamak could change character and be converted to much shorter waves. This process is known as "mode conversion." One form of the shorter wave, known as an "ion cyclotron wave," interacts with both the ions and the electrons in the plasma. The project is using two plasma modeling codes — AORSA and TORIC — to study the effects of the ion cyclotron wave. One effect appears to be that the wave drives flows in the plasma which help break up turbulence, thereby improving confinement of the plasma.

FIGURE 15. Under SciDAC, fusion energy researchers were allocated time on the Cray XT-3 massively parallel supercomputer "Jaguar" at Oak Ridge National Laboratory. By using thousands of processors, the team was able for the first time to run this calculation of how non-thermal ions are distributed in a heated fusion plasma. (The plot shows the thermal, injected beam, and RF-accelerated beam ion populations as functions of Vperp, Vpar and minor radius.)

In summer 2005, the AORSA global-wave solver was ported to the new Cray XT3 supercomputer at Oak Ridge National Laboratory. The code demonstrated excellent scaling to thousands of processors. Preliminary calculations using 4,096 processors have allowed the first simulations of mode conversion in ITER. Mode conversion from the fast wave to the ion cyclotron wave has been identified in ITER using mixtures of deuterium, tritium and helium-3 at 53 MHz.

A second focus of the project has been on how energies are distributed in the plasma. The simplest type of distribution is known as a Maxwellian distribution, which is used to calculate how energy is distributed in all gases, such as the air in a room. Previously, this was the only distribution function with which fusion researchers could calculate wave propagation and absorption. But when plasmas are heated to very high temperatures, a long tail of energetic particles can form, which represents a non-Maxwellian distribution. Using another code called CQL3D, project team members were able to calculate the new energy distributions. The code was also installed on the Cray XT3 supercomputer along with the AORSA code, and the two were programmed to iterate with one another automatically. Once the new energy distributions were calculated, this data was fed back into the AORSA code to recalculate the heating of the plasma. Once the reheating was simulated, the CQL3D code then recalculated the energy distribution. The iterations were repeated until the simulation reached a steady state.
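The alternating AORSA-CQL3D cycle described above is, in essence, a fixed-point coupling between a wave solver and a distribution-function solver. The short sketch below illustrates only that coupling pattern; the function names, the toy physics inside them, and the convergence test are illustrative stand-ins, not the actual AORSA or CQL3D interfaces.

    import numpy as np

    def run_wave_solver(distribution):
        # Stand-in for a wave code (the role AORSA plays in the project):
        # given an ion energy distribution, return an RF power deposition.
        return 0.5 * distribution + 0.1

    def run_fokker_planck(deposition):
        # Stand-in for a Fokker-Planck code (the role CQL3D plays): given
        # the RF deposition, return the updated, possibly non-Maxwellian,
        # energy distribution.
        return np.tanh(deposition) + 1.0

    # Start from a Maxwellian-like initial guess and iterate to steady state.
    distribution = np.ones(64)
    for iteration in range(100):
        deposition = run_wave_solver(distribution)
        new_distribution = run_fokker_planck(deposition)
        change = np.max(np.abs(new_distribution - distribution))
        distribution = new_distribution
        if change < 1e-8:  # heating and distribution are now self-consistent
            print(f"steady state reached after {iteration + 1} iterations")
            break

In the real workflow each of these calls is itself a large parallel calculation, which is why automating the iteration on the Cray XT3 mattered.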

The project results, which make key contributions to understanding processes in hot plasmas, were achieved through the scaling of simulation codes and the availability of terascale computing systems.

Principal Investigator: Don Batchelor, Oak Ridge National Laboratory (2001-05); Paul Bonoli, Massachusetts Institute of Technology (2006-present)

High Energy and Nuclear Physics: Accelerating Discovery from Subatomic Particles to Supernovae

The Office of Science supports a program of research into the fundamental nature of matter and energy through the offices of High Energy Physics and Nuclear Physics. In carrying out this mission, the Office of Science:

• builds and operates large, world-class charged-particle accelerator facilities for the nation and for the international scientific research community;

• builds detectors and instruments designed to answer fundamental questions about the nature of matter and energy; and

• carries out a program of scientific research based on experimental data, theoretical studies, and scientific simulation.

The scale of research ranges from the search for subatomic particles, which are among the most basic building blocks of matter, to studying supernovae, massive exploding stars which also serve as astrophysical laboratories in which unique conditions exist that are not achievable on Earth. Under the SciDAC program, projects were launched to advance research in the areas of accelerator science and simulation, quantum chromodynamics and supernova science.

Advanced Computing for 21st Century Accelerator Science & Technology

Accelerators underpin many of the research efforts of the Office of Science and physics research around the world. From biology to medicine, from materials to metallurgy, from elementary particles to the cosmos, accelerators provide the microscopic information that forms the basis for scientific understanding and applications. Though tremendous progress has been made, our present theory of the physical world is not complete. By tapping the capabilities of the different types of accelerators, including colliders, spallation neutron sources and light sources, scientists are making advances in many areas of fundamental science.

Much of our knowledge of the fundamental nature of matter results from probing it with directed beams of particles such as electrons, protons, neutrons, heavy ions and photons. The resulting ability to "see" the building blocks of matter has had an immense impact on society and our standard of living. Over the last century, particle accelerators have changed the way we look at nature and the universe we live in and have become an integral part of the nation's technical infrastructure. Today, particle accelerators are essential tools of modern science and technology.

For example, about 10,000 cancer patients are treated every day in the United States with electron beams from linear accelerators. Accelerators produce short-lived radioisotopes that are used in over 10 million diagnostic medical procedures and 100 million laboratory tests every year in the United States.

The SciDAC Accelerator Science and Technology (AST) modeling project was a national research and development effort aimed at establishing a comprehensive terascale simulation environment needed to solve the most challenging problems in 21st century accelerator science and technology. The AST project had three focus areas: computational beam dynamics, computational electromagnetics, and modeling advanced accelerator concepts. The newly developed tools are now being used by accelerator physicists and engineers across the country to solve the most challenging problems in accelerator design, analysis, and optimization.

An accelerator generates and collects billions of elementary particles, such as electrons, positrons or protons, into a limited space, then accelerates these particles in a beam toward a target — another beam of particles or devices for producing radiation or other particles. In the process of acceleration, the energy of every particle in the beam is increased tremendously. In order to advance elementary particle physics into regions beyond our present knowledge, accelerators with larger final beam energies approaching the tera-electron-volt (TeV) scale are required.

The Department of Energy operates some of the world's most powerful accelerators, including a three-kilometer-long linear accelerator at the Stanford Linear Accelerator Center (SLAC) in California, the Tandem Linac Accelerator System at Argonne National Laboratory (ANL) in Illinois, the Tevatron at Fermilab in Illinois, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) in New York, the Continuous Electron Beam Accelerator Facility in Virginia and the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory (ORNL) in Tennessee. DOE-operated light sources are the Advanced Light Source at Lawrence Berkeley National Laboratory in California, the Advanced Photon Source at ANL, the National Synchrotron Light Source at BNL and the Stanford Synchrotron Radiation Laboratory at SLAC. Neutron sources include the Spallation Neutron Source at ORNL and the Los Alamos Neutron Science Center at Los Alamos National Laboratory in New Mexico. While accelerators can be of different configurations with different capabilities, most use radio frequencies (RF) to accelerate the particles. In order to experimentally pursue the quest for the grand unified theory, ever more powerful accelerators are needed. Such massive facilities are costly to build and require specialized infrastructure.

The long-term future of experimental high-energy physics research using accelerators depends on the successful development of new acceleration methods which can impart more energy to particles per unit of acceleration length (a quantity known as the accelerating gradient). In the past, higher energies were achieved by building longer accelerators, an approach which faces practical limits in space and cost.

Since building such accelerators is time-consuming and expensive, scientists are using computer simulations to increase our understanding of the physics involved and to help improve the design for more efficient acceleration. High-resolution, system-scale simulation utilizing terascale computers, such as the IBM SP supercomputer at NERSC, has been made possible by SciDAC-supported code development efforts and collaborations with the SciDAC ISICs. This team-oriented approach to computing is beginning to make a qualitative difference in the R&D of major DOE accelerators, existing or planned.

Beam Dynamics: Maximizing Scientific Productivity

The AST project has had a major impact on computational beam dynamics and the design of particle accelerators. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. Here are some highlights from the AST project Beam Dynamics focus area in regard to algorithm development, software development, and applications.

FIGURE 16. This simulation of the Fermi Booster, a synchrotron accelerator ring, was created using the Synergia software developed under SciDAC. (The original figure shows the simulated beam distribution plotted in x, y and z.)

Particle simulation methods have been among the most successful and widely used methods in computational beam physics, plasma physics and astrophysics. Under the AST project, a comprehensive, state-of-the-art set of parallel particle-in-cell (PIC) capabilities has been developed, including the following (a generic sketch of the basic PIC update appears after this list):

• IMPACT: An integrated suite of codes originally developed to model high-intensity ion linacs, IMPACT has been greatly enhanced so that it is now able to model high-brightness electron beam dynamics, ion beam dynamics and multi-species transport through a wide variety of systems.

• BeamBeam3D: A code for modeling beam-beam effects in colliders. This code contains multiple models and multiple collision geometries and has been used to model the Tevatron, Positron-Electron Project (PEP)-II, Relativistic Heavy Ion Collider (RHIC), and Large Hadron Collider (LHC) accelerators.

• MaryLie/IMPACT: A code that combines the high-order optics modeling capabilities of the MaryLie Lie-algebraic beam transport code with the parallel PIC capabilities of IMPACT. It is used to model space-charge effects in large circular machines such as the ILC damping rings.

• Synergia: A parallel beam dynamics simulation framework, Synergia combines multiple functionalities, such as the space-charge capabilities of IMPACT and the high-order optics capabilities of MXYZPLT, along with a "humane" user interface and a standard problem description.

Crucial to the development of the AST project's codes has been the collaboration with the SciDAC ISICs and with other DOE-supported activities. In some cases, the use of advanced algorithms has led to applications running up to 100 times faster.
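For readers unfamiliar with the particle-in-cell method that underlies the codes listed above, the sketch below shows the core of one electrostatic PIC step in one dimension: deposit charge on a grid, solve for the field, then gather the field and push the particles. It is a generic, textbook-style illustration, not code from IMPACT, BeamBeam3D, MaryLie/IMPACT or Synergia.

    import numpy as np

    def pic_step(x, v, q_over_m, grid_n, length, dt):
        # One electrostatic particle-in-cell step on a 1D periodic domain.
        # x, v are particle positions and velocities (arrays of equal length).
        dx = length / grid_n

        # 1. Deposit particle charge onto the grid (nearest-grid-point weighting).
        cells = (x / dx).astype(int) % grid_n
        rho = np.bincount(cells, minlength=grid_n).astype(float)
        rho -= rho.mean()                      # neutralizing background

        # 2. Solve Poisson's equation for the potential with an FFT, then
        #    difference the potential to get the electric field on the grid.
        k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
        k[0] = 1.0                             # avoid division by zero for the mean mode
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0
        phi = np.real(np.fft.ifft(phi_hat))
        efield = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

        # 3. Gather the field at the particle locations and push the particles.
        v = v + q_over_m * efield[cells] * dt
        x = (x + v * dt) % length              # periodic boundary
        return x, v

    # Tiny usage example with random particles.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 10000)
    v = rng.normal(0.0, 0.1, 10000)
    for _ in range(10):
        x, v = pic_step(x, v, q_over_m=-1.0, grid_n=64, length=1.0, dt=0.01)

Production codes add realistic geometry, higher-order deposition, electromagnetic field solvers and parallel domain decomposition on top of this same basic loop.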

SciDAC AST beam dynamics codes have been applied to several important projects within the DOE Office of Science. Examples include:

• existing colliders, including the Tevatron, RHIC and PEP-II

• future colliders such as the LHC, currently under construction

• proposed linear colliders such as the International Linear Collider (ILC)

• high-intensity machines, including the Fermilab Booster and the Spallation Neutron Source (SNS) ring under construction

• linacs for radioactive ion beams, such as the proposed Rare Isotope Accelerator (RIA)

• electron linacs for fourth-generation light sources, such as the Linac Coherent Light Source now under construction.

Electromagnetic Systems Simulation: Prototyping through Computation

The AST project has supported a well-integrated, multi-institutional, multi-disciplinary team to focus on the large-scale simulations necessary for the design and optimization of electromagnetic systems essential to accelerator facilities. As a result, an increasing number of challenging design and analysis problems are being solved through large-scale simulations, benefiting such projects as the PEP-II, Next Linear Collider (NLC), and the proposed ILC and RIA.

Central to these efforts is a suite of 3D parallel electromagnetic design codes: Omega3P, Tau3P, Track3P, S3P and T3P. Collaborations with SciDAC ISICs led to a number of improvements in the codes which have not only increased the speed and accuracy of the codes up to tenfold, but have expanded their capabilities for solving more complicated problems of importance to present and future accelerators. The codes have been used to advance a number of DOE accelerator projects, including the ILC, PEP-II, NLC, RIA and LCLS.

For example, the AST electromagnetic codes were applied to the PEP-II facility, which now operates with twice the luminosity of the original design and is aiming for another twofold increase. However, beam heating in the interaction region (IR) could limit the accelerator from meeting that goal. The Tau3P code has been used to calculate the heat load in the present design. Higher currents and shorter bunches of particles will require modifications to the IR to reduce the increased heating. T3P will be able to model the entire IR geometry and to simulate the actual curved beam paths in any new IR design.

The proposed RIA, which is ranked third in the Office of Science's 20-year Science Facility Plan, requires design of a variety of radio frequency quadrupole (RFQ) cavities (Figure 17) for its low-frequency linacs. Due to the lack of accurate predictions, tuners are designed to cover frequency deviations of about 1 percent. With Omega3P, frequency accuracy of 0.1 percent can be reached, significantly reducing the number of tuners and their tuning range. Parallel computing will play an important role in prototyping RIA's linacs and help reduce their costs.

FIGURE 17. (Left) A model of RIA's hybrid RFQ and (right) an enlarged portion of the corresponding Omega3P mesh.

Modeling Advanced Accelerators: Miniaturizing Accelerators from Kilometers to Meters

The long-term future of experimental high-energy physics research using accelerators is partly dependent on the successful development of novel ultra-high-gradient acceleration methods. New acceleration techniques using lasers and plasmas have already been shown to exhibit accelerating gradients and focusing forces more than 1,000 times greater than conventional technology. The challenge is to control these high-gradient systems and then to string them together. Such technologies would not only enable a cost-effective path to the outermost reaches of the high-energy frontier, but could also lead to the development of ultra-compact accelerators. Such compact accelerators could benefit science, industry and medicine by shrinking large facilities to a much reduced size and allowing them to be built nearer research organizations, high-tech businesses and medical centers.
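To see why a factor of 1,000 in gradient matters so much, consider a rough back-of-the-envelope calculation. The numbers below are illustrative round values only, not the parameters of any actual or proposed machine.

    # Energy gain in a linear accelerator is roughly gradient x length,
    # so the length needed for a given beam energy scales as 1/gradient.
    # Illustrative round numbers only; not parameters of any real facility.
    target_energy_mev = 1.0e6              # 1 TeV expressed in MeV

    conventional_gradient = 30.0           # MV/m, a typical RF-cavity scale
    plasma_gradient = 30.0 * 1000.0        # ~1000x higher, as cited above

    length_conventional = target_energy_mev / conventional_gradient   # ~33,000 m
    length_plasma = target_energy_mev / plasma_gradient               # ~33 m

    print(f"conventional linac: about {length_conventional / 1000.0:.0f} km")
    print(f"plasma-based linac: about {length_plasma:.0f} m")

The same arithmetic is what the phrase "from kilometers to meters" in the heading above refers to.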

Under the AST Project, the Advanced Accelerator effort has emphasized developing a suite of fully parallel 3D electromagnetic PIC codes. The codes have been benchmarked against each other and their underlying algorithms, as well as against experiments. The resulting codes provide more realistic models, which are being applied to advanced accelerators as well as more mainstream problems in accelerator physics. Furthermore, the effort has included running these codes to plan and interpret experiments and to study the key physics that must be understood before a 100+ GeV collider based on plasma techniques can be designed and tested.

In some advanced accelerator concepts a drive beam, either an intense particle beam or a laser pulse, is sent through a uniform plasma. The space charge or radiation pressure creates a space-charge wake on which a trailing beam of particles can surf. To model such devices accurately usually requires following the trajectories of individual plasma particles. Therefore, the software tools developed fully or partially under this project — OSIRIS, VORPAL, OOPIC, QuickPIC and UPIC — rely on PIC techniques.

Among the major accomplishments are:

The development of UPIC: Using highly optimized legacy code, a modern framework for writing all types of parallelized PIC codes, including electrostatic, gyrokinetic, electromagnetic, and quasi-static, has been developed. The UPIC Framework has obtained 30 percent of peak speed on a single processor and 80 percent efficiency on well over 1,000 processors. It is used across both the accelerator and fusion SciDAC projects.

The rapid construction of QuickPIC: The development of QuickPIC is a success story for the rapid construction of a new code using reusable parallel code via a SciDAC team approach. The basic equations and algorithms were developed from a deep understanding of the underlying physics involved in plasma and/or laser wakefield acceleration, while the code was constructed rapidly using the UPIC Framework. In some cases, QuickPIC can completely reproduce the results from previous algorithms with a factor of 50 to 500 savings in computer resources. It is being used to study beam-based science in regimes not accessible before.

Rapidly adding realism into the computer models: The large electromagnetic fields from the intense drive beams can field-ionize a gas, forming a plasma. Early SciDAC research using two-dimensional codes revealed that this self-consistent formation of the plasma from the drive beam needs to be included in many cases. Via the SciDAC team approach, ionization models have been added and benchmarked against each other in the fully three-dimensional PIC codes VORPAL and OSIRIS.

Extending the plasma codes to model the electron-cloud instability: The electron cloud instability is one of the major obstacles to obtaining the design luminosity in circular accelerators, storage rings and damping rings. A set of modules has been written for QuickPIC that models circular orbits under the influence of external focusing elements. This new software tool has already modeled 100,000 km of beam propagation in the SPS machine at CERN. It is a major improvement over previously existing tools for modeling electron cloud interactions.

FIGURE 18. 3D afterburner simulation results. (The axes of the original plot are labeled in units of c/ωp.)

Applying the suite of codes to discover new accelerator science: The suite of PIC codes has been used to model particle beam accelerator experiments at the Stanford Linear Accelerator Center and the laser-plasma experiments at the L'OASIS lab at Lawrence Berkeley National Laboratory. The codes have also been used to study key physics issues related to the afterburner concept, in which the energy of an existing beam from an accelerator is doubled with gradients near 10 GeV/m. An example is shown in Figure 18, which displays the beam and plasma density from a three-dimensional afterburner simulation that includes field ionization.

Principal Investigators: Robert Ryne, Lawrence Berkeley National Laboratory, and Kwok Ko, Stanford Linear Accelerator Center

Supernovae Science: Getting the Inside Story on How Stars Explode

While supernovae are awesome in their own right — these exploding stars expire in flashes brighter than whole galaxies — they can also provide critical information about our universe. Because of their regular brightness, Type Ia supernovae are used as astronomical "standard candles" for cosmic distances and the expansion of the universe. Supernovae are also helping scientists gain a better understanding of the dark energy thought to make up 75 percent of the universe. And finally, supernovae are also the source of the heavy elements in our universe, as well as prolific producers of neutrinos, the most elusive particles so far discovered.

The best perspective from which to observe supernovae is from a telescope mounted on a satellite. But earth-bound supercomputers are also proving to be powerful tools for studying the forces behind the most powerful explosions in the universe. To help understand the conditions which lead to supernova explosions, the SciDAC program launched two projects — the Supernova Science Center and the Terascale Supernova Initiative.

The Supernova Science Center

The Supernova Science Center (SNSC) was established with the objective of using numerical simulations to gain a full understanding of how supernovae of all types explode and how the elements have been created in nature. These computational results are then compared with astronomical observations, including the abundance patterns of elements seen in our own sun.

The explosions both of massive stars as "core-collapse" supernovae and of white dwarfs as "thermonuclear" supernovae (also called Type Ia) pose problems in computational astrophysics that have challenged the community for decades. Core-collapse supernovae require at least a two- (and ultimately three-) dimensional treatment of multi-energy-group neutrinos coupled to multi-dimensional hydrodynamics. Type Ia explosions are simulation problems in turbulent (nuclear) combustion.

During its first five years, the SNSC focused on forging the interdisciplinary collaborations necessary to address cutting-edge problems in a field that couples astrophysics, particle physics, nuclear physics, turbulence theory, combustion theory and radiation transport. The project team also worked to develop and modify the necessary computer codes to incorporate the physics efficiently and to do exploratory calculations.

Examples of these alliances are the chemical combustion groups at Lawrence Berkeley and Sandia National Laboratories; the NSF's Joint Institute for Nuclear Astrophysics (JINA); and radiation transport and nuclear physics experts at Los Alamos and Lawrence Livermore National Laboratories. With LBNL, the team applied codes previously optimized to study chemical combustion on large parallel computers to novel problems in nuclear combustion in white dwarf stars, carrying out for the first time fully resolved studies of the flame in both the "flamelet" and "distributed" regimes.

The project team worked with JINA to develop a standardized library of nuclear data for application to the study of nucleosynthesis in stars, supernovae and X-ray bursts on neutron stars.

The SNSC is also tapping the expertise of the SciDAC ISICs. For example, the TOPS ISIC is working with members of the team to produce more efficient solvers, with a goal of increasing the performance of the multi-dimensional codes by a factor of 10.

Other SNSC achievements include:


• Carrying out the first 3D, full-star simulation of core collapse and explosion. The project also carried out simulations in 2D with multi-group neutrino transport of the collapse, bounce, and early post-bounce evolution. Recently a new supernova mechanism was discovered, powered by neutron star vibrations.

• Developing and using 3D adaptive mesh relativistic hydrocodes to model the production of gamma-ray bursts in massive stars, which led to the prediction of a new kind of high-energy transient – cosmological X-ray flashes – which were later discovered. (An image from this model appears on the cover of this report.)

• Carrying out the world's best simulations of Type I X-ray bursts on neutron stars. The nuclear physics of these explosions is a primary science goal of DOE's proposed Rare Isotope Accelerator.

• Calculating the ignition of the nuclear runaway in a white dwarf star becoming a Type Ia supernova, which showed that the ignition occurs off-center. Other research has shown that the resulting supernova depends critically upon whether it is ignited centrally or at a point off center. This issue of how the white dwarf ignites has become the most debated and interesting issue in Type Ia supernova modeling today.

The Terascale Supernova Initiative: Shedding New Light on Exploding Stars

Established to "shed new light on exploding stars," SciDAC's Terascale Supernova Initiative (TSI) is a multidisciplinary collaboration which will use modeling of integrated complex systems to search for the explosion mechanism of core-collapse supernovae – one of the most important and challenging problems in nuclear astrophysics. The project team, which includes scientists, computer scientists and mathematicians, will develop models for core collapse supernovae and enabling technologies in radiation transport, radiation hydrodynamics, nuclear structure, linear systems and eigenvalue solution, and collaborative visualization. These calculations, when carried out on terascale machines, will provide important theoretical insights and support for the large experimental efforts in high energy and nuclear physics.

A key focus of the TSI project is understanding what causes the core of a large star to collapse. When the star runs out of fuel, pressures in the core build and fuse elements together, leading to accelerating combustion.

FIGURE 19. Snapshots of a thermonuclear supernova simulation with asymmetric ignition. After igniting a sub-sonic thermonuclear flame slightly off-center of the white dwarf progenitor star, it propagates towards the surface. The interaction of buoyancy and flame propagation changes the morphology of the initially spherical burning bubble (see 1st snapshot, where the flame corresponds to the blue isosurface and the white dwarf star is indicated by the volume rendering of the logarithm of the density) to a toroidal shape perturbed by instabilities (2nd snapshot, cf. Zingale et al., SciDAC 2005 proceedings). The flame interacts with turbulence and is accelerated, but due to the asymmetric ignition configuration, only a small part of the star is burned. Thus the white dwarf remains gravitationally bound, and when the ash bubble reaches its surface it starts to sweep around it (3rd snapshot), as also noticed by the ASC Flash Center at the University of Chicago. The question addressed in the present study was whether the collision of ash on the far side of the star's surface (4th snapshot) would compress fuel in this region strongly enough to trigger a spontaneous detonation (the "GCD scenario", Plewa et al., 2004). Three-dimensional simulations like those shown in the images disfavor this possibility.

At the end, only iron is left at the core and fusion stops. As the iron core grows, gravitational forces eventually cause the core to implode, or collapse, leading to an explosion. This explosion creates both heavy elements and neutrinos. Once the core can collapse no further, it "bounces" back and sends a shock wave through the external layers of the star. If the shock wave kept going, it would lead to the supernova blast. But, for reasons still not understood, the shock wave stalls before it can trigger the explosion in current simulations. TSI researchers are formulating new ways to examine critical events in the supernova core within the first second after the core bounces. One theory is that neutrino heating helps re-energize the stalled shock.

The TSI project created a series of 3D hydrodynamic simulations showing the flow in a stellar explosion developing into a strong, stable, rotational flow (streamlines wrapped around the inner core). The flow deposits enough angular momentum on the inner core to produce a core spinning with a period of only a few milliseconds. This new ingredient of core-collapse supernovae, which can only be modeled in full 3D simulations, may have a dramatic impact on both the supernova mechanism itself and on the neutron star left behind by the supernova event. The strong rotational flow below the supernova shock leads to an asymmetric density distribution that can dramatically alter the propagation of neutrinos, which may in turn enhance the efficiency of a neutrino-driven shock. This rotational flow will also provide a significant source of energy for amplifying any pre-existing magnetic fields through the magneto-rotational instability. Finally, the accretion of the rotational flow onto the central core can spin up the proto-neutron star to periods of order tens of milliseconds, consistent with the inferred rotational period of radio pulsars at birth.

Other team members were able to carry out the most physically detailed 2D core-collapse supernova simulations to date using preconditioners, which are approximating techniques for speeding up linear solvers. Earlier simulations used a gray approximation, which assumes a shape of the energy distribution spectrum for neutrinos. In contrast, the TSI simulations using the V2D code computed the energy spectra of the neutrinos, which change with time and with position as the supernova evolves. That is far more realistic, but requires extensive computing power. With SciDAC providing access to terascale supercomputers, the team was able to solve the problem.

FIGURE 20. This image illustrates the stable rotational flow found in recent 3D simulations of the SASI, computed on the Cray X1 at NCCS. The streamlines show the inward flow of the core being deflected by the nascent supernova shock and ultimately wrapping around the inner core.

FIGURE 21. A closeup look at a proto-neutron star model, shown at about 32 ms following core bounce, as calculated by the 2D radiation-hydrodynamic code V2D. The distance scale is given in kilometers. This view focuses on the inner 25 km of a model that extends nearly 6000 km into the oxygen shell. The figure is color mapped by entropy. Velocity direction is shown using an LEA texture map. Convective motion can be seen to encompass the entire inner core. However, even at this early stage of evolution, vigorous motion has ceased, leaving this region moving at only a few percent of the sound speed.

Using V2D, initial studies have shown a number of new results and confirmed some results reported by others. These include a much earlier onset of post-bounce radiative-hydrodynamic instabilities and the development of a hydrodynamically unstable region that quickly grows to engulf the entire region inside the core-bounce shock wave, including both neutrino-transparent regions as well as the proto-neutron star, where material is opaque to neutrinos. The most detailed simulations have proceeded to about 35 milliseconds following core bounce, and the team is working to extend these simulations to significant fractions of a second and beyond.

Interaction between astrophysicists and nuclear physicists on the TSI team resulted in the first stellar core collapse simulations to implement a sophisticated model for the stellar core nuclei. The increased sophistication in the nuclear models led to significant quantitative changes in the supernova models, demonstrating that both the microphysics and the macrophysics of core collapse supernovae must be computed with care. The significance of this work led to the publication of two associated Physical Review Letters, one focusing on the astrophysics and one focusing on the nuclear physics. The work has also motivated nuclear experiments to measure nuclear cross-sections as a way to validate the nuclear models used in supernova simulations.

In the last few years, experiments and observations have revealed that neutrinos have mass and can change their flavors (electron, muon and tau). This development could turn out to be significant for the core collapse supernova explosion problem, for associated nucleosynthesis and for neutrino signal detection. Although some experimental results indicate that neutrinos would not be able to transform to the point where they could affect the shock wave, the TSI team has mapped out scenarios in which the behavior of neutrinos could in principle affect the dynamics and nucleosynthesis deep inside the supernova environment.

National Computational Infrastructure for Lattice Gauge Theory: Accomplishments and Opportunities

The long-term goals of high energy and nuclear physicists are to identify the fundamental building blocks of matter, and to determine the interactions among them that lead to the physical world we observe. The fundamental theory of the strong interactions between elementary particles is known as quantum chromodynamics, or QCD. The strong force is one of four fundamental forces in nature, and provides the force that binds quarks together to form protons and neutrons, which account for about 98 percent of the matter in nature.

The objective of the SciDAC National Computational Infrastructure for Lattice Gauge Theory project is to construct the computational infrastructure needed to study QCD. Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in this project, and the infrastructure created will be available to all. The project includes the development of community software for the effective use of terascale computers, and research and development of specialized computers for the study of QCD.

The Department of Energy supports major experimental, theoretical and computational programs aimed at reaching these goals. Remarkable progress has been made through the development of the Standard Model of high energy and nuclear physics, which provides fundamental theories of the strong, electromagnetic and weak interactions. This progress has been recognized through the award of Nobel Prizes in Physics for the development of each of the components of the Standard Model: the unified theory of weak and electromagnetic interactions in 1979, and QCD, the theory of the strong interactions, in 1999 and 2004. However, our understanding of the Standard Model is incomplete because it has proven extremely difficult to determine many of the most interesting predictions of QCD, those that involve the strong coupling regime of the theory. To do so requires large-scale numerical simulations within the framework of lattice gauge theory.

The scientific objectives of lattice QCD simulations are to understand the physical phenomena encompassed by QCD, and to make precise calculations of the theory's predictions. Lattice QCD simulations are necessary to solve fundamental problems in high energy and nuclear physics that are at the heart of DOE's large experimental efforts in these fields. Major goals of the experimental programs in high energy and nuclear physics on which lattice QCD simulations can have an important impact are to: (1) verify the Standard Model or discover its limits; (2) understand the internal structure of nucleons and other strongly interacting particles; and (3) determine the properties of strongly interacting matter under extreme conditions, such as those that existed immediately after the Big Bang and are produced today in relativistic heavy-ion experiments. Lattice QCD calculations are essential to research in all of these areas.

The numerical study of QCD requires very large computational resources, and has been recognized as one of the grand challenges of computational science. The advent of terascale computing, coupled with recent improvements in the formulation of QCD on the lattice, provides an unprecedented opportunity to make major advances in QCD calculations. The infrastructure created under this SciDAC grant will play a critical role in enabling the U.S. lattice QCD community to take advantage of these opportunities. In particular, the hardware research and development work provides the groundwork for the construction of dedicated computers for the study of QCD, and the community software will enable highly efficient use of these computers and of the custom-designed 12,228-node QCDOC computer recently constructed at Brookhaven National Laboratory (BNL).

Software Development

The project's software effort has created a unified programming environment, the QCD Application Programming Interface (QCD API), that enables members of the U.S. lattice gauge theory community to achieve high efficiency on terascale computers, including the QCDOC, commodity clusters optimized for QCD, and commercial supercomputers. Design goals included enabling users to quickly adapt codes to new architectures, easily develop new applications, and preserve their large investment in existing codes.

The QCD API is an example of an application-specific code base serving a national research community. It exploits the special features of QCD calculations that make them particularly well suited to massively parallel computers. All the fundamental components have been implemented and are in use on the QCDOC hardware at BNL, and on both the switched and mesh architecture Pentium 4 clusters at Fermi National Accelerator Laboratory (FNAL) and Thomas Jefferson National Accelerator Facility (JLab). The software code and documentation are publicly available via the Internet.

Hardware Research and Development

The second major activity under the Lattice QCD SciDAC grant has been the design, construction and operation of commodity clusters optimized for the study of QCD. This work has taken place at FNAL and JLab. The objective has been to provide computing platforms to test the QCD API, and to determine optimal configurations for the terascale clusters planned for FY 2006 and beyond.

The clusters that have been built are being used to carry out important research in QCD. The bottleneck in QCD calculations on clusters, as on commercial supercomputers, is data movement, not CPU performance. QCD calculations take place on four-dimensional space-time grids, or lattices. To update variables on a lattice site, one only needs data from that site and a few neighboring ones. The standard strategy is to assign identical sub-lattices to each processor. Then, one can update lattice points on the interior of the sub-lattices, for which all the relevant neighbors are on the same processor, while data is being collected from a few neighboring processors to update lattice sites on the sub-lattice boundaries. This strategy leads to perfect load balancing among the processors, and, if the computer and code are properly tuned, to overlap of computation and communications.
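The sketch below illustrates that decomposition strategy on a toy two-dimensional grid (rather than the full four-dimensional lattice): each processor would hold a sub-lattice with a one-site halo, update interior sites from purely local data, fill the halo from its neighbors, and then update the boundary sites. The halo exchange is faked here with simple periodic array copies standing in for the actual message passing, and the update rule is a placeholder; none of this is code from the QCD API.

    import numpy as np

    def update_site(field, i, j):
        # Toy nearest-neighbor update (a stand-in for the real QCD kernel).
        return 0.25 * (field[i - 1, j] + field[i + 1, j] +
                       field[i, j - 1] + field[i, j + 1])

    def sweep_sublattice(local):
        # One update sweep over a sub-lattice stored with a one-site halo.
        # local has shape (n + 2, n + 2); rows/columns 0 and n + 1 are the halo.
        n = local.shape[0] - 2
        new = local.copy()

        # 1. Update interior sites; all their neighbors are already local,
        #    so in a real code this work overlaps with communication.
        for i in range(2, n):
            for j in range(2, n):
                new[i, j] = update_site(local, i, j)

        # 2. "Receive" halo data from neighboring processors. Here we fake it
        #    with periodic copies; a real code would use message passing.
        local[0, :] = local[n, :]
        local[n + 1, :] = local[1, :]
        local[:, 0] = local[:, n]
        local[:, n + 1] = local[:, 1]

        # 3. Update the boundary sites now that the halo is filled.
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if i in (1, n) or j in (1, n):
                    new[i, j] = update_site(local, i, j)
        return new

    sub = np.random.rand(10, 10)   # an 8x8 sub-lattice plus its halo
    sub = sweep_sublattice(sub)

Because every processor holds a sub-lattice of the same size and does the same work, the load is balanced by construction, which is the property the text above emphasizes.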

FIGURE 22. Under SciDAC's Lattice QCD project, two computer clusters were developed for use by the lattice gauge theory community. The system on the left is the 6n cluster at DOE's Jefferson Lab. At the right is the QCDOC computer, constructed at Brookhaven National Laboratory by a group centered at Columbia University. The RBRC computer on the right side of the photo was funded by the RIKEN Research Institute in Japan. Together, the two systems have a peak speed of 20 teraflop/s.

SCIENTIFIC SOFTWARE INFRASTRUCTURE

As supercomputers become more powerful and faster, computational scientists are seeking to adapt their codes to utilize this computing power to develop ever more detailed simulations of complex problems. Other scientists are using supercomputing centers to store, analyze and share data from large experimental facilities, such as particle colliders or telescopes searching deep space for supernovae.

But scaling up applications to run on the newest terascale supercomputers, which can perform tens of trillions of calculations per second, presents numerous challenges. First, many of these scientific applications were written by researchers over the course of years or decades for older computers with tens or hundreds of processors and may not be easily adapted to run on thousands or tens of thousands of processors. Second, in some cases, scaling an existing application to run on more processors can actually slow down the run time as the communication between processors bogs down. Because these "legacy codes" often represent significant intellectual investment aimed at solving very specific problems, scientists are reluctant to start anew in developing new codes. Finally, even when applications can be adapted to run more efficiently on one computer architecture, they may not achieve similar performance on another type of supercomputer, an issue of growing importance as supercomputing centers increasingly share their computers to meet the growing demand for greater computing capability. Understanding computer performance and developing tools to enhance this performance enables scientists to scale up their codes to study problems in greater detail for better understanding.

To help bridge the gap between these legacy codes and terascale computing systems, SciDAC established seven Integrated Software Infrastructure Centers, or ISICs, which brought together experts from government computing centers, scientific disciplines and industry to approach the problems from different perspectives and create broadly applicable solutions. In many cases, computing tools originally developed and refined for one type of problem were found to be useful in solving problems in other scientific areas, too. As a result of these focused efforts, scientists can now draw on a wide range of tools and techniques to ensure their codes perform efficiently on supercomputers of varying sizes, with different architectures and at various computing centers.

An important emphasis of SciDAC is that the ISIC teams worked with other projects to enable the effective use of terascale systems by SciDAC applications.

Specifically, the ISICs addressed the need for:

• new algorithms which scale to parallel systems having thousands of processors

• methodology for achieving portability and interoperability of complex high-performance scientific software packages

• operating systems tools and support for the effective management of terascale and beyond systems, and

• effective tools for feature identification, data management and visualization of petabyte-scale scientific datasets.

The result is a comprehensive, integrated, scalable, and robust high performance software infrastructure for developing scientific applications, improving the performance of existing applications on different supercomputers and giving scientists new techniques for sharing and analyzing data.

Building a Better Software Infrastructure for Scientific Computing

Infrastructure usually brings to mind images of physical structures such as roads, bridges, power grids and utilities. But computers also rely on an infrastructure of processors, software, interconnects, memory and storage, all working together to process data and accomplish specific goals. While personal computers arrive ready to use with easy-to-install software, scientists who rely on supercomputers have typically developed their own software, often building on codes that were developed 10 or 20 years ago.

Applied Mathematics ISICs

Three applied mathematics ISICs were created to develop algorithms, methods and mathematical libraries that are fully scalable to many thousands of processors with full performance portability.

An Algorithmic and Software Framework for Applied PDEs (APDEC)

Many important DOE applications can be described mathematically as solutions to partial differential equations (PDEs) at multiple scales. For example, combustion for energy production and transportation is dominated by the interaction of fluid dynamics and chemistry in localized flame fronts. Fueling of magnetic fusion devices involves the dispersion of material from small injected fuel pellets. The successful design of high-intensity particle accelerators relies on the ability to accurately predict the space-charge fields of localized beams in order to control the beams, preserve the beam emittance and minimize particle loss.

The goal of the Algorithmic and Software Framework for Applied PDEs (APDEC) ISIC project has been to develop a high performance algorithmic and software framework for multiscale problems based on the use of block-structured adaptive mesh refinement (AMR) for representing multiple scales. In this approach, the physical variables are discretized on a spatial grid consisting of nested rectangles of varying resolution, organized into blocks. This hierarchical discretization of space can adapt to changes in the solution to maintain a uniform level of accuracy throughout the simulation, leading to a reduction in the time-to-solution by orders of magnitude compared to traditional fixed-grid calculations with the same accuracy. In short, AMR serves as a computational microscope, allowing scientists to focus their computing resources on the most interesting aspects of a problem. The resulting algorithms have enabled important scientific discoveries in a number of disciplines.
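The heart of block-structured AMR is deciding where the fine grids go. The fragment below sketches that step for a one-dimensional field: cells whose local gradient exceeds a threshold are tagged, and contiguous runs of tagged cells become candidate refined blocks. It is a schematic illustration of the idea only, not code from the APDEC framework, and the threshold criterion is just one simple choice among many.

    import numpy as np

    def tag_cells(field, dx, threshold):
        # Tag cells whose gradient magnitude exceeds a refinement threshold.
        grad = np.abs(np.gradient(field, dx))
        return grad > threshold

    def tagged_runs_to_blocks(tags):
        # Group contiguous tagged cells into (start, end) index ranges; each
        # range would become a refined patch covering those coarse cells.
        blocks, start = [], None
        for i, tagged in enumerate(tags):
            if tagged and start is None:
                start = i
            elif not tagged and start is not None:
                blocks.append((start, i))
                start = None
        if start is not None:
            blocks.append((start, len(tags)))
        return blocks

    # Example: a smooth background with one sharp front in the middle.
    x = np.linspace(0.0, 1.0, 256)
    field = np.tanh((x - 0.5) / 0.01)
    blocks = tagged_runs_to_blocks(tag_cells(field, dx=x[1] - x[0], threshold=5.0))
    print(blocks)   # a single block of cells straddling the front gets refined

In a full AMR code this tagging is repeated recursively on each refined level, in multiple dimensions, and the resulting patches are load-balanced across processors.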

The APDEC center has been involved with the development of applications in the areas of combustion simulations, magnetohydrodynamic (MHD) modeling for fusion energy reactors called tokamaks, and modeling particle accelerators. In combustion, the center developed new, more efficient simulation methods for solving chemically reacting fluid flow problems. These algorithms and software have been used to study a variety of problems in the combustion of hydrocarbon fuels, as well as providing the starting point for the development of a new capability for simulating nuclear burning in Type 1a supernovae.

In the area of fusion research, the center developed AMR codes to investigate a variety of scientific problems, including tokamak fueling (Figure 1) and magnetic reconnection, which can help in the design of fusion reactors for future energy production.

In the area of accelerator modeling, two new capabilities were developed: an AMR particle-in-cell (AMR-PIC) algorithm that has been incorporated into the MLI and IMPACT beam dynamics codes (Figure 2), and a prototype capability for use in simulating gas dynamics problems in jets and capillary tubes arising in the design of plasma-wakefield accelerators (Figure 3).

FIGURE 1. AMR simulation of a fuel pellet being injected into a tokamak. The image shows the pressure along a slice as well as the outlines of the various refined grid patches (in collaboration with the SciDAC CEMM project).

FIGURE 2. AMR particle-in-cell calculation, showing the electrostatic field induced by the particles (in collaboration with the SciDAC AST project).

To manage these and other complex AMR algorithms and software, the APDEC team used a collection of libraries written in a combination of programming languages. This software architecture, which maps the mathematical structure of the algorithm space onto a hierarchy of software layers, enables the codes to be more easily adapted and reused across multiple applications. The project team has also developed new grid-generation tools for calculations using mathematical representations from microscopy, medical image data and geophysical data.

Since APDEC was established, a guiding principle has been to develop a software infrastructure that performs as efficiently as possible. By creating algorithms to efficiently use the number of available processors and combining these with tools for load balancing, the result is software which has a computational cost per grid point and scalability comparable to those of uniform-grid calculations using the same algorithms, giving scientists more detailed results without requiring additional computing resources.

For AMR algorithms for simple compressible flow models, the APDEC project has measured 75 percent efficiency up to 1,024 processors on IBM SP and HP Alpha systems. For incompressible flow, they have observed 75 percent efficiency up to 256 processors. For low-Mach-number combustion, production calculations on IBM SP and SGI systems are typically done using 128-512 processors, and the efficiency of the APDEC algorithms has enabled significant scientific achievements in combustion and astrophysics problems.

For many problems, such as combustion applications, a principal barrier to scaling past a few hundred processors is the performance of iterative methods for solving elliptic equations. To address this problem, the APDEC Center is developing a new class of AMR solvers in three dimensions. The goal is to create a solver with a far lower communications-to-computation ratio than traditional iterative methods, which means the solution can be calculated much more quickly. A preliminary implementation of this algorithm has obtained 85 percent efficiency on 1,024 processors, with less than 4 percent of the execution time spent in communications. The APDEC adaptive calculation used 3 billion grid points, as compared to the 2 trillion grid points that would have been required using a uniform grid algorithm at the same resolution, leading to a reduction in computing time of at least two orders of magnitude. This solver is the basis for the AMR-PIC method developed for the beam dynamics codes mentioned above.
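For readers who want the arithmetic behind that comparison: the grid-point counts alone differ by roughly a factor of several hundred, which is consistent with the quoted saving of at least two orders of magnitude (the exact runtime saving also depends on per-point cost and AMR overhead). A quick check:

    uniform_grid_points = 2e12      # points a uniform fine grid would need
    amr_grid_points = 3e9           # points the adaptive calculation used

    ratio = uniform_grid_points / amr_grid_points
    print(f"grid-point reduction: about {ratio:.0f}x")   # ~667, i.e. well over 100x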

Principal Investigator: Phil Colella, Lawrence Berkeley National Laboratory

Terascale Optimal PDE Solvers (TOPS) ISIC

In many areas of science, physical experimentation is impossible, as with cosmology; dangerous, as with manipulating the climate; or simply expensive, as with fusion reactor design. Large-scale simulations, validated by comparison with related experiments in well-understood laboratory contexts, are used by scientists to gain insight into and confirmation of existing theories in such areas, without the benefit of full experimental verification. But today's high-end computers, such as those at DOE's supercomputing centers, are one-of-a-kind, and come without all of the scientific software libraries that scientists expect to find on desktop workstations.

The Terascale Optimal PDE Solvers (TOPS) ISIC was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal-complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

FIGURE 3. Axisymmetric gas jet expanding into a vacuum, with the axis of symmetry along the bottom of the figure, in a laser-driven plasma-wakefield accelerator (in collaboration with the SciDAC AST project).

FIGURE 4. Model of the ILC low-loss cavity.

Nonlinear PDEs give mathematical expression to many core DOE mission applications. PDE simulation codes require implicit solvers for the multirate, multiscale, multicomponent, multiphysics phenomena of hydrodynamics, electromagnetism, chemical reaction and radiation transport. Currently, such problems typically reach into the tens of millions of unknowns — and this size is expected to increase 100-fold in just a few years.

Unfortunately, the algorithms traditionally used to solve PDEs were designed to address smaller problems and become much less efficient as the size of the system being studied increases. This creates a double jeopardy for applications, particularly when the algorithms are based on iterations — it takes more computing resources to solve each step of the problem, and the number of steps also goes up. Fortunately, the physical origin of PDE problems often allows them to be addressed by using a sequence of approximations, each of which is smaller than the one before. The solutions to the approximations, which may be obtained more efficiently, are combined judiciously to provide the solution to the original problem. One well-known example of this approach is called the multigrid method, which solves a problem by tackling a coarse approximation and then using the solution to generate the solution of a better approximation, and so on. It can be shown that the performance of the multigrid method is optimal for certain classes of problems. The underlying philosophy of this and other similar approaches is to make the majority of progress toward a high-quality result through less complex intermediate steps.
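The fragment below sketches the two-grid version of that idea for a one-dimensional Poisson problem: smooth on the fine grid, transfer the remaining error equation to a coarser grid where it is cheap to handle, then interpolate the correction back. It is a textbook-style illustration of the multigrid principle, not code from Hypre, PETSc, or the other TOPS packages named below.

    import numpy as np

    def smooth(u, f, h, sweeps=3):
        # Damped Jacobi sweeps for -u'' = f with zero boundary values.
        for _ in range(sweeps):
            u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def coarse_solve(f_c, h_c):
        # Direct solve of the small coarse-grid Poisson problem.
        m = len(f_c) - 2
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / (h_c * h_c)
        e = np.zeros_like(f_c)
        e[1:-1] = np.linalg.solve(A, f_c[1:-1])
        return e

    def two_grid_cycle(u, f, h):
        # One fine-grid smoothing / coarse-grid correction cycle.
        u = smooth(u, f, h)                      # pre-smooth: damp oscillatory error
        r = residual(u, f, h)
        r_c = r[::2].copy()                      # restrict residual to the coarse grid
        e_c = coarse_solve(r_c, 2.0 * h)         # handle smooth error cheaply
        fine = np.arange(len(u))
        e = np.interp(fine, fine[::2], e_c)      # interpolate the correction back
        return smooth(u + e, f, h)               # post-smooth

    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid_cycle(u, f, h)
    # Exact solution of -u'' = sin(pi x), u(0) = u(1) = 0, is sin(pi x) / pi**2.
    print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))

A full multigrid solver applies this correction recursively over a whole hierarchy of grids, which is what makes its cost grow only in proportion to the number of unknowns for suitable problems.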

The efforts defined for TOPS and its collaborations with other projects have been chosen to revolutionize large-scale simulation through incorporation of existing and new optimal algorithms and code interoperability. TOPS provides support for the software packages Hypre, PARPACK, PETSc, ScaLAPACK, Sundials, SuperLU and TAO, some of which are in the hands of thousands of users, who have created a valuable experience base on thousands of different computer systems.

Software developed and supported by the TOPS project is being used by scientists around the globe. In the past few years, researchers outside the SciDAC community authored more than two dozen scientific papers reporting results achieved using TOPS software. The papers cover many disciplines, from astronomy to chemistry to materials science to nanotechnology to optics.

TOPS solver software has also been incorporated into numerous other packages, some commercial and some freely available. Among the widely distributed packages maintained outside of the SciDAC program that employ or interface to TOPS software "under the hood" are Dspice, EMSolve, FEMLAB, FIDAP, Global Arrays, HP Mathematical Library, libMesh, Magpar, Mathematica, NIKE, Prometheus, SCIRun, SLEPc, Snark and Trilinos.

Finally, TOPS software is taught in many courses in the U.S. and abroad, and forms a core of the annual training workshop sponsored by DOE to introduce researchers to the Advanced CompuTational Software (ACTS) Collection. TOPS software is also regularly featured in short courses at professional meetings.

Principal Investigator: David Keyes, Cornell University

The Terascale Simulation Tools and Technology (TSTT) Center

Terascale computing provides an unprecedented opportunity to achieve numerical simulations at levels of detail and accuracy previously unattainable. DOE scientists in many different application areas can reach new levels of understanding through the use of high-fidelity calculations based on multiple coupled physical processes and multiple interacting physical scales. The best way to achieve detailed and accurate simulations in many application areas, and frequently the only way to obtain useful answers, is to use adaptive, composite, hybrid approaches. However, the best strategy for a particular simulation is not always clear, and the only way to find the best method is to experiment with various options. This is both time-consuming and difficult due to the lack of easy-to-apply, interoperable meshing, discretization and adaptive technologies.

The Terascale Simulation Tools and Technologies (TSTT) Center was established to address the technical and human barriers preventing the effective use of powerful adaptive, composite, and hybrid methods. The primary objective was to develop technologies that enable application scientists to easily use multiple mesh and discretization strategies within a single simulation on terascale computers. A key aspect of the project was to develop services that can be used interoperably to enable mesh generation for representing complex and possibly evolving domains, high-order discretization techniques for improved numerical solutions, and adaptive strategies for automatically optimizing the mesh to follow moving fronts or to capture important solution features.

The center encapsulated its research into software components with well-defined interfaces that enable different mesh types, discretization strategies and adaptive techniques to interoperate in a "plug and play" fashion. All software was designed for terascale computing environments with particular emphasis on scalable algorithms for hybrid, adaptive computations and single-processor performance optimization.

To ensure the relevance of the project's research and software developments to the SciDAC goals, the TSTT team collaborated closely with both SciDAC application researchers and other ISICs. Specifically, TSTT technologies were integrated into fusion, accelerator design, climate modeling and biology applications, both to provide near-term benefits for those efforts and to receive feedback for further improvements.

Working with the Accelerator Modeling project, TSTT researchers generated high-quality meshes from CAD (computer-aided design) models for accurate modeling of the interaction region of the PEP-II accelerator. This led to the first successful simulation with a transit beam, using the Tau3P accelerator modeling application. This success led to a decision to continue to use Tau3P for PEP-II computations for reaching higher luminosities. Similar mesh generation efforts by TSTT researchers were used to analyze the wakefield in the advanced Damped Detuned Structure and verify important performance characteristics of the system.

TSTT also partnered with SLAC and the TOPS ISIC to create an automatic design optimization loop that provides automatic tuning of accelerator geometries, significantly increasing the speed and reducing the cost of designing new accelerators.

In the field of plasma physics, TSTT researchers helped the Center for Extended MHD Modeling (CEMM) solve a number of problems, particularly the case of anisotropic diffusion, after determining that using fewer higher-order elements significantly decreased the total solution time needed to obtain a given accuracy. This demonstration resulted in a new effort by scientists at Princeton Plasma Physics Laboratory (PPPL) to develop fifth-order finite elements for their primary application code, M3D. TSTT researchers also worked with scientists at PPPL to insert adaptive mesh refinement technologies and error estimators directly into the new high-order M3D code.

TSTT researchers are collaborating with climate modeling scientists to develop enhanced mesh generation and discretization capabilities for anisotropic planar and geodesic surface meshes. The initial mesh is adapted and optimized to capture land surface orographic or topographic height fields. This technology has improved the prediction of rainfall, snowfall and cloud cover in regional weather models in prototype simulations.

TSTT researchers are collaborating with computational biologists to develop the Virtual Microbial Cell Simulator (VMCS), which is targeting DOE bioremediation problems to clean up heavy metal waste using microbes. Methods developed by TSTT were used to study how microbe communities aggregate in certain environments, providing new insight into the behavior of these microbes.

In partnership with combustion scientists, TSTT researchers have developed a new tool that combines adaptive mesh refinement with front tracking and used this technology to develop a new capability for diesel engine design. This effort has emphasized the modeling of the instability and breakup of a diesel jet into spray, thought to be the first such effort to provide a predictive capability in this arena.

Principal Investigators: Jim Glimm, State University of New York — Stony Brook; Lori Freitag Diachin, Lawrence Livermore National Laboratory

Computer Science ISICs

The software infrastructure vision of SciDAC is for a comprehensive, portable, and fully integrated suite of systems software and tools for the effective management and utilization of terascale computational resources by SciDAC applications. The following four Computer Science ISIC activities addressed critical issues in high performance component software technology, large-scale scientific data management, understanding application/architecture relationships for improved sustained performance, and scalable system software tools for improved management and utility of systems with thousands of processors.

Center for Component Technology for Terascale Simulation Software

The Center for Component Technology for Terascale Simulation Software (CCTTSS) was created to help accelerate computational science by bringing a "plug and play" style of programming to high performance computing. Through a programming model called the Common Component Architecture (CCA), scientists can dramatically reduce the time and effort required to compose independently created software libraries into new terascale applications. This approach has already been adopted by researchers in the application areas of combustion, quantum chemistry and climate modeling, with new efforts beginning in fusion and nanoscale simulations.

While the component approach has been more common in industry, the goal of the CCTTSS was to bring a similar approach to scientific computing. Unlike more narrowly focused commercial applications, the scientific CCA effort faced such challenges as maintaining high performance, working with a broad spectrum of scientific programming languages and computer architectures, and preserving DOE investments in legacy codes.

An example of the CCA approach in action is a prototype application being developed within the Community Climate System Model (CCSM) project. The CCA is used at the level of system integration to connect skeleton components for the atmosphere, ocean, sea ice, land surface, river routing, and flux coupler. Prototype applications within the Earth System Modeling Framework (ESMF), the infrastructure targeted for the future CCSM, also employ the CCA, including reusable components for visualization and connectivity.

In addition, CCTTSS researchers are collaborating closely with application scientists to create high-performance simulations in quantum chemistry and combustion. Moreover, new externally funded projects incorporate CCA concepts in applications involving nanotechnology, fusion, and underground transport modeling, and proposals have recently been submitted involving biotechnology and fusion.

As a part of its mission, the CCTTSS has developed production components that are used in scientific applications as well as prototype components that aid in teaching CCA concepts. These freely available components include various service capabilities, tools for mesh management, discretization, linear algebra, integration, optimization, parallel data description, parallel data redistribution, visualization, and performance evaluation. The CCTTSS is also collaborating with the APDEC, TSTT and TOPS SciDAC centers to define common interfaces for mesh-based scientific data management as well as linear, nonlinear, and optimization solvers.

SciDAC funding has accelerated both CCA technology development and the insertion of this technology into massively parallel scientific applications, including major SciDAC applications in quantum chemistry and combustion. Additionally, numerous CCA-compliant application components have been developed and deployed, and the underlying infrastructure is maturing and establishing itself in the scientific community.

Principal Investigator: Rob Armstrong, Sandia National Laboratories

High-End Computer System Performance: Science and Engineering ISIC

As supercomputers become ever more powerful tools of scientific discovery, demand for access to such resources also increases. As a result, many scientists receive less time on supercomputers than they would like. Therefore, it's critical that they make the most of their time allocations. One way to achieve this optimal efficiency is to analyze the performance of both scientific codes and the computer architectures on which the codes are run. The resulting improvements can be dramatic. In one instance, an astrophysics code which had been running at about 10 percent of a supercomputer's theoretical peak speed (which is typical) was able to run at more than 50 percent efficiency after careful performance analysis and tuning.

SciDAC’s High-End Computer

System Performance: Science andEngineering ISIC, also known asthe Performance ISIC or PERC,focused on developing tools andtechniques to help scientists deter-mine how they can best execute aspecific application on a given com-puter platform. The research wasaimed at determining what level ofachievable performance is realistic,how scientific applications can beaccelerated toward these levels, andhow this information can drive thedesign of future applications andhigh-performance computing systems.

In addition to helping scientists better understand the performance behavior of their codes, the major improvements to computer performance modeling technology made by PERC researchers are also helping supercomputer centers to better select systems, thereby ensuring higher scientific productivity on these publicly funded systems. Such supercomputer procurements run to millions of dollars, and PERC research is being used by centers operated by DOE, the National Science Foundation and the Department of Defense around the country. PERC researchers have also collaborated with several SciDAC scientific application efforts, resulting in major speedups for several high-profile computer codes used in these projects.

For the Community Climate System Model (CCSM), one of the United States' primary applications for studying climate change, PERC researchers helped determine optimal algorithmic settings for the Community Atmosphere Model (CAM), a key component of the CCSM, when running the Intergovernmental Panel on Climate Change scenario runs on the IBM p690 cluster computer at ORNL, accelerating the completion of this milestone. Similar studies are ongoing on the SGI Altix, Cray XD1, and Cray XT3 supercomputers. The resulting data from these runs will contribute to an international report on global climate change.


Working with the Plasma Microturbulence Project (PMP), PERC researchers applied Active Harmony, a software tool supporting distributed execution of computational objects, to GS2, a gyrokinetic turbulence simulator used by the fusion energy research community. The result was a 2.3 to 3.4 times speedup of GS2 for a common configuration used in production runs.

PERC researchers worked with the Terascale Supernova Initiative (TSI) project to port and optimize the EVH1 hydrodynamics code on the Cray X1 supercomputer, achieving excellent performance for large problems. The EVH1 performance analysis was completed for up to 256 processors on all current target platforms.

Working with two other ISICs, the Terascale Optimal PDE Solvers (TOPS) and Terascale Simulation Tools and Technology (TSTT) centers, PERC researchers analyzed the performance of a TOPS-TSTT mesh smoothing application and found that the sparse matrix-vector multiply achieves 90 percent of the peak performance imposed by the memory bandwidth limit.
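The "peak performance imposed by the memory bandwidth limit" can be estimated with a simple model: sparse matrix-vector multiply performs two floating-point operations per stored nonzero but must stream each nonzero's value and column index from memory. The sketch below applies that standard estimate; the byte counts and the bandwidth figure are illustrative assumptions, not measurements from the PERC study.

```python
# Roofline-style estimate of the memory-bandwidth ceiling for sparse
# matrix-vector multiply (SpMV) in a compressed-row format.  All numbers
# below are illustrative assumptions, not figures from the PERC analysis.
stream_bandwidth = 6.4e9      # bytes/s the memory system can sustain (assumed)
bytes_per_nonzero = 8 + 4     # one 8-byte value + one 4-byte column index
flops_per_nonzero = 2         # one multiply and one add per nonzero

# Best case: vector entries are reused from cache, so memory traffic is
# dominated by streaming the matrix itself.
ceiling_gflops = stream_bandwidth / bytes_per_nonzero * flops_per_nonzero / 1e9
print(f"bandwidth-limited SpMV ceiling: about {ceiling_gflops:.2f} GFLOP/s")

# Reaching 90 percent of this ceiling, as reported for the mesh smoothing
# kernel, means the code is essentially memory-bound; further gains must come
# from reducing data movement rather than from more arithmetic.
print(f"90 percent of the ceiling: about {0.90 * ceiling_gflops:.2f} GFLOP/s")
```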

PERC researchers collaborated closely with SciDAC's Lattice Gauge Theory project, conducting performance analyses of the MILC (MIMD Lattice Computation) application on Pentium-3 Linux clusters. The PERC team collected detailed performance data on different aspects of the code and identified the key fragments that affect the code performance, then presented their findings to the lattice gauge theory community.

In the Accelerator Science and Technology program, the PERC team has shown how to improve the performance of the post-processing phase within the Omega3P application at the Stanford Linear Accelerator Center. Post-processing can consume 40 percent or more of total execution time in a full run of Omega3P, where most of this time is due to a complex and redundant computation. By exchanging this redundant computation for cached lookups to pre-computed data, researchers were able to achieve a 4.5 times speedup of the post-processing phase, and a 1.5 times speedup of a full Omega3P run.
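The optimization described here, trading a redundant computation for lookups into precomputed results, is essentially memoization. The toy sketch below illustrates the pattern; the function name and workload are hypothetical and are not taken from Omega3P.

```python
from functools import lru_cache
import math

# Hypothetical stand-in for an expensive post-processing calculation that
# depends only on its inputs.  In the real code, this would be the redundant
# computation identified by the PERC analysis of Omega3P.
@lru_cache(maxsize=None)
def expensive_mode_quantity(mode_index: int) -> float:
    # Simulate a costly, deterministic computation.
    return sum(math.sin(mode_index * k) / (k + 1) for k in range(200_000))

# Post-processing typically revisits the same inputs many times; with the
# cache, each distinct input is computed once and then served from a lookup.
for sweep in range(10):
    for mode in range(8):
        expensive_mode_quantity(mode)

print(expensive_mode_quantity.cache_info())  # hits vs. misses show the savings
```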

In addition to working directly with scientists on the performance of codes, PERC researchers have made major improvements in the usability and effectiveness of several performance tools, enabling researchers to analyze their codes more easily. The result is an integrated suite of measurement, analysis and optimization tools to simplify the collection and analysis of performance data and help users in optimizing their codes.

Principal Investigator: David Bailey, Lawrence Berkeley National Laboratory

Scalable Systems Software for Terascale Computer Centers

As terascale supercomputers become the standard environment for scientific computing, the nation's premier scientific computing centers are at a crossroads. Having built the operations of their systems around home-grown systems software, system administrators and managers now face the prospect of rewriting their software, as these incompatible, ad hoc sets of systems tools were not designed to scale to the multi-teraflop systems that are being installed in these centers today. One solution would be for each computer center to take its home-grown software and rewrite it to be scalable. But this would incur a tremendous duplication of effort and delay the availability of terascale computers for scientific discovery.

The goal of SciDAC's Scalable Systems Software project is to provide a more timely and cost-effective solution by pulling together representatives from the major computer centers and industry and collectively defining standardized interfaces between system components. At the same time, this group is producing a fully integrated suite of systems software components that can be used by the nation's largest scientific computing centers.

The Scalable Systems Software suite is being designed to support computers that scale to very large physical sizes without requiring that the number of support staff scale along with the machine. But this research goes beyond just creating a collection of separate scalable components. By defining a software architecture and the interfaces between system components, the Scalable Systems Software research is creating an interoperable framework for the components. This makes it much easier and more cost-effective for supercomputer centers to adapt, update, and maintain the components in order to keep up with new hardware and software. A well-defined interface allows a site to replace or customize individual components as needed. Defining the interfaces between components across the entire system software architecture provides an integrating force between the system components as a whole and improves the long-term usability and manageability of terascale systems at supercomputer centers across the country.

The Scalable Systems Software project is a catalyst for fundamentally changing the way future high-end systems software is developed and distributed. The project is expected to reduce facility management costs by reducing the need to support home-grown software while making higher quality systems tools available. The project will also facilitate more effective use of machines by scientific applications by providing scalable job launch, standardized job monitoring and management software, and allocation tools for the cost-effective management and utilization of terascale and petascale computer resources.


Here are some of the highlights of the project:

Designed modular architecture: A critical component of the project's software design is its modularity. The ability to plug and play components is important because of the diversity in both high-end systems and the individual site management policies.

Defined XML-based interfaces that are independent of language and wire protocol: The interfaces between all the components have been fully documented and made publicly available, thereby allowing others to write replacement components (or wrap their existing components) as needed. The reference implementation has a mixture of components written in a variety of common programming languages, which allows a great deal of flexibility to the component author and allows the same interface to work on a wide range of hardware architectures and facility policies. (A hypothetical example of such an XML message appears after these highlights.)

Reference implementation released: Version 1.0 of the fully integrated systems software suite was released at the SC2005 conference — the leading international meeting of the supercomputing community — and will be followed by quarterly updates for the following year.

Production users: Full system software suites have been put into production on clusters at Ames Laboratory and Argonne National Laboratory. Pacific Northwest National Laboratory and the National Center for Supercomputing Applications have adopted one or more components from the suite, and the project team is in discussion with Department of Defense sites about use of some of the system software components.

Adoption of application programming interface (API): The suite's scheduler component is the widely used Maui Scheduler. The public Maui release (as well as the commercial Moab scheduler) has been updated to use the public XML interfaces and has added new capabilities for fairness, higher system utilization, and improved response time. All new Maui and Moab installations worldwide (more than 3,000 per month) now use the system software interfaces developed in this ISIC. Prominent users include 75 of the world's top 100 supercomputers and commercial industries such as Amazon.com and Ford Motor Co.
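To give a feel for what a language-neutral, XML-based component interface can look like, here is a purely hypothetical job-submission message built with Python's standard library. The element and attribute names are invented for illustration and are not the project's actual schema.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical job-submission request of the kind a queue-management
# component might accept.  Element and attribute names are invented; the real
# Scalable Systems Software interfaces are documented by the project itself.
request = ET.Element("job-request", attrib={"user": "jsmith", "account": "climate"})
ET.SubElement(request, "executable").text = "/home/jsmith/bin/cam"
ET.SubElement(request, "nodes").text = "64"
ET.SubElement(request, "walltime").text = "02:00:00"

message = ET.tostring(request, encoding="unicode")
print(message)

# Because the payload is plain XML, a scheduler written in C, a monitor
# written in Perl, and an accounting tool written in Java can all parse the
# same message, which is the point of defining interfaces independently of
# language and wire protocol.
parsed = ET.fromstring(message)
print(parsed.find("nodes").text)  # -> "64"
```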

Principal Investigator: Al Geist, Oak Ridge National Laboratory

The Scientific Data Management Center

Scientific exploration and discovery typically takes place in two phases: generating or collecting data and then analyzing the data. In the data collection/generation phase, large volumes of data are generated by simulation programs running on supercomputers or collected from experiments. As experimental facilities and computers have become larger and more powerful, the amount of resulting data threatens to overwhelm scientists. Without effective tools for data collection and analysis, scientists can find themselves spending more time on data management than on scientific discovery.

The Scientific Data Management Center (SDM) was established to focus on the application of known and emerging data management technologies to scientific applications. The center's goals are to integrate and deploy software-based solutions for the efficient and effective management of large volumes of data generated by scientific applications. The purpose is not only to achieve efficient storage of and access to the data using specialized indexing, compression, and parallel storage and access technology, but also to enhance the effective use of scientists' time by eliminating unproductive simulations, by providing specialized data-mining techniques, by streamlining time-consuming tasks, and by automating the scientists' workflows.

FIGURE 5. The mission of the Scalable Systems Software center is the development of an integrated suite of systems software and tools for the effective management and utilization of terascale computational resources, particularly those at the DOE facilities. The components of the suite include allocation management, security, system monitoring, job management, system build and configure, resource and queue management, accounting and user management, fault tolerance, and checkpoint restart.

The center's approach is to provide an integrated scientific data management framework where components can be chosen by scientists and applied to their specific domains. By overcoming the data management bottlenecks and unnecessary information-technology overhead through the use of this integrated framework, scientists are freed to concentrate on their science and achieve new scientific insights.

Today's computer simulations and large-scale experiments can generate data at the terabyte level, equivalent to about 5 percent of all the books in the Library of Congress. Keeping up with such a torrent of data requires efficient parallel data systems. In order to make use of such amounts of data, it is necessary to have efficient indexes and effective analysis tools to find and focus on the information that can be extracted from the data, and the knowledge learned from that information. Because of the large volume of data, it is also useful to perform analysis as the data are generated. For example, a scientist running a thousand-time-step, 3D simulation can benefit from analyzing the data generated by the individual steps in order to steer the simulation, saving unnecessary computation and accelerating the discovery process. However, this requires sophisticated workflow tools, as well as efficient dataflow capabilities to move large volumes of data between the analysis components. To enable this, the center uses an integrated framework that provides a scientific workflow capability, supports data mining and analysis tools, and accelerates storage access and data searching. This framework facilitates hiding the details of the underlying parallel and indexing technology, and streamlining the assembly of modules using process automation technologies.

Since it was established, SDM has adopted, improved and applied various data management technologies to several scientific application areas. By working with scientists, the SDM team not only learned the important aspects of the data management problems from the scientists' point of view, but also provided solutions that led to actual results. The successful results achieved so far include:

• More than a tenfold speedup in writing and reading NetCDF files was achieved by developing Parallel NetCDF software on top of MPI-IO.

• An improved version of PVFS is now offered by cluster vendors, including Dell, HP, Atipa, and Cray.

• A method for the correct classification of orbits in puncture plots from the National Compact Stellarator eXperiment (NCSX) at PPPL was developed by converting the data into polar coordinates and fitting second-order polynomials to the data points (see the sketch following this list).

• A new specialized method for indexing using bitmaps was used to provide flexible, efficient search over billions of collisions (events) in high energy physics applications. A paper on the system received the best paper award at the 2004 International Supercomputing Conference.

• A parallel version of the popular statistical package R was developed and is being applied to fusion, geographic information systems and proteomics (biochemistry) applications.

• A scientific workflow system was developed and applied to the analysis of microarray data, as well as astrophysics applications. The system greatly increased the efficiency of running repetitive processes.
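The orbit-classification item above lends itself to a compact illustration. The sketch below follows the stated recipe, converting puncture points to polar coordinates about an assumed axis and fitting a second-order polynomial, then uses the residual of the fit as a simple classification score; the synthetic data, the choice of axis, and the scoring rule are assumptions for illustration, not the SDM team's actual criteria.

```python
import numpy as np

def orbit_fit_residual(r_points, z_points, axis_r, axis_z):
    """Fit r(theta) with a second-order polynomial in polar coordinates
    about an assumed axis and return the RMS residual of the fit."""
    theta = np.arctan2(z_points - axis_z, r_points - axis_r)
    radius = np.hypot(r_points - axis_r, z_points - axis_z)
    order = np.argsort(theta)
    theta, radius = theta[order], radius[order]
    coeffs = np.polyfit(theta, radius, deg=2)        # second-order polynomial
    residual = radius - np.polyval(coeffs, theta)
    return np.sqrt(np.mean(residual ** 2))

# Synthetic example: a smooth, nearly circular orbit versus scattered points.
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, 400)
smooth_r = 1.4 + 0.30 * np.cos(angles) + 0.01 * rng.standard_normal(400)
smooth_z = 0.30 * np.sin(angles) + 0.01 * rng.standard_normal(400)
scattered_r = 1.4 + rng.uniform(-0.3, 0.3, 400)
scattered_z = rng.uniform(-0.3, 0.3, 400)

for label, (rp, zp) in [("smooth orbit", (smooth_r, smooth_z)),
                        ("scattered points", (scattered_r, scattered_z))]:
    score = orbit_fit_residual(rp, zp, axis_r=1.4, axis_z=0.0)
    print(label, "RMS residual:", round(score, 3))   # low residual = orbit-like
```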

Principal Investigator: Arie Shoshani, Lawrence Berkeley National Laboratory

FIGURE 6. The STAR experiment at Brookhaven National Laboratory generates petabytes of data resulting from the collisions of millions of particles. The Scientific Data Management ISIC developed software to make it easier for scientists to transfer, search and analyze the data from such large-scale experiments.

ADVANCED COLLABORATIVE INFRASTRUCTURE

Creating an Advanced Infrastructure for Increased Collaboration

A hallmark of many DOE research efforts is collaboration among scientists from different disciplines. The multidisciplinary approach is the driving force behind the "big science" breakthroughs achieved at DOE national laboratories and other research institutions. By creating teams comprising physicists, chemists, engineers, applied mathematicians, biologists, earth scientists, computer scientists and others, even the most challenging scientific problems can be defined, analyzed and addressed more effectively than if tackled by one or two scientists working on their own.

With the advent of high-performance computers and networks, such teams can now be assembled virtually, with a multitude of research talents brought to bear on scientific problems of national and global importance. DOE scientists and engineers have been developing technologies in support of these "collaboratories." The goal is to make collaboration among scientists at geographically dispersed sites as effective as if they were working at the same location. A key aspect of collaboratories is providing seamless and secure access to experimental facilities and computing resources at sites around the country.

As these collaborative experiments, whether a particle accelerator in New York or a massive telescope atop a volcano in Hawaii, come online, scientists will be inundated with massive amounts of data which must be shared, analyzed, archived and retrieved by multiple users from multiple sites. Large research programs are producing data libraries at the petabyte level, the equivalent of 500 billion standard-size document pages, or 100 times the data volume of the U.S. Library of Congress. Tools which enable the easy transfer of huge datasets are essential to the success of these research efforts.

Under SciDAC, a number of collaboratory infrastructure projects were established to advance collaborative research. These projects incorporated technologies from distributed systems often referred to as "Grids." Built upon networks like the Internet, a computing Grid is a service for sharing computer power and data storage capacity over the Internet, resulting in large, distributed computational resources. DOE is expanding the concept of Grids beyond computers to integrate access to experimental facilities and data archives around the world. While the idea is straightforward, developing a unified, secure infrastructure linking various systems with unique security and access procedures, sometimes across international boundaries, is a daunting task.

Grid technology is evolving to provide the services and infrastructure needed for building "virtual" systems and organizations. A Grid-based infrastructure provides a way to use and manage widely distributed computing and data resources to support science. The result is a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects, institutions, and countries.

National Collaboratory Projects

DOE's investment in National Collaboratories includes four SciDAC projects focused on the goal of creating collaboratory software environments to enable geographically separated scientists to effectively work together as a team and to facilitate remote access to both facilities and data.

The DOE Science Grid

A significant portion of DOE science is already, or is rapidly becoming, a distributed endeavor involving many collaborators who are frequently working at institutions across the country or around the world. Such collaborations typically have rapidly increasing needs for high-speed data transfer and high-performance computing. More and more, these requirements must be addressed with computing, data, and instrument resources that are often more widely distributed than the collaborators. Therefore, developing an infrastructure that supports widely distributed computing and data resources is critical to DOE's leading-edge science.

Such infrastructures have become known as Grids, which provide secure and reliable access to these critical resources. While the individual institutions have their own policies and procedures for providing access to users, successful deployment of a Grid requires that a common approach be implemented so that when an authorized user logs into a Grid, all appropriate resources can be easily accessed.

Under SciDAC, the DOE Science Grid was developed and deployed across the DOE research complex to provide persistent Grid services to advanced scientific applications and problem-solving frameworks. By reducing barriers to the use of remote resources, it has made significant contributions to SciDAC and the infrastructure required for the next generation of science. The DOE Science Grid testbed was initially deployed among five project sites: the National Energy Research Scientific Computing Center and Argonne, Lawrence Berkeley, Oak Ridge and Pacific Northwest national laboratories.

In addition to the construction of a Grid across five major DOE facilities with an initial complement of computing and data resources, other accomplishments include:

• Integration into the Grid of the large-scale computing and data storage systems at NERSC, DOE's Office of Science flagship supercomputing center.

• Design and deployment of a Grid security infrastructure that is facilitating collaboration between U.S. and European high energy physics projects, and within the U.S. magnetic fusion community. This infrastructure provides a global, policy-based method of identifying and authenticating users, which leads to a "single sign-on" so that any system on the Grid can accept a uniform user identity for authorization. This work is currently used by DOE's Particle Physics Data Grid, Earth System Grid, and Fusion Grid projects.

• A resource monitoring and debugging infrastructure that facilitates managing this widely distributed system and the building of high performance distributed science applications.

• Development and deployment partnerships established with several key vendors.

• Use of the Grid infrastructure by applications from several disciplines — computational chemistry, groundwater transport, climate modeling, bioinformatics, etc.

• In collaboration with DOE's Energy Sciences Network (ESnet), the design and deployment of the DOE Grids Certification Authority (Grid authentication infrastructure) that provides Grid identity certificates to users, systems, and services, based on a common, science-collaboration-oriented policy that is defined and administered by the subscribing virtual organizations.

Principal Investigator: William Johnston, Lawrence Berkeley National Laboratory

The National Fusion Collaboratory

Developing a reliable energy system that is economically and environmentally sustainable is the long-term goal of DOE's Fusion Energy Sciences (FES) research program. In the U.S., FES experimental research is centered at three large facilities with a replacement value of over $1 billion. As these experiments have increased in size and complexity, there has been a concurrent growth in the number and importance of collaborations among large groups at the experimental sites and smaller groups located nationwide.

The next-generation fusion experimental device is the ITER reactor, an international collaboration with researchers from China, Europe, India, Japan, Korea, Russia and the U.S. ITER will be a burning fusion plasma magnetic confinement experiment located in France. With ITER located outside the U.S., the ability to maximize its value to the U.S. program is closely linked to the development and deployment of collaborative technology. As a result of the highly collaborative present and future nature of FES research, the community is facing new and unique challenges.

The National Fusion Collaboratory established under SciDAC unites fusion and computer science researchers to directly address these challenges by creating and deploying collaborative software tools. In particular, the project has developed and deployed a national FES Grid (FusionGrid), a system for secure sharing of computation, visualization and data resources over the Internet. This represents a fundamental paradigm shift for the fusion community, where data, analysis and simulation codes, and visualization tools are now thought of as network services. In this new paradigm, access to resources (data, codes, visualization tools) is separated from their implementation, freeing the researcher from needing to know about software implementation details and allowing a sharper focus on the physics. The long-term goal of FusionGrid is to allow scientists at remote sites to participate as fully in experiments and computational activities as if they were working on site, thereby creating a unified virtual organization of the geographically dispersed U.S. magnetic fusion community — more than 1,000 researchers from over 40 institutions.

Fusion scientists use FusionGrid to provide wide access to the complex physics codes and associated data used for simulating and analyzing magnetic fusion experiments. The first FusionGrid service came online in fall 2002. Since then, the service — a transport code known as TRANSP — has been used by scientists to produce over 6,000 code runs and has supported analysis of fusion devices on three continents. TRANSP was also used to demonstrate the feasibility of between-shot analysis on FusionGrid.

Fusion scientists also use FusionGrid as an advanced collaborative environment service that provides computer-mediated communications techniques to enhance work environments and to enable increased productivity for collaborative work. In January 2004 a fusion scientist in San Diego remotely led an experiment on the largest European magnetic fusion experiment, located in the United Kingdom, using FusionGrid's remote collaboration services. These services included multiple video images and a unified audio stream along with real-time secure access to data. With the introduction of FusionGrid's remote collaboration services, remote participation in fusion experiments is becoming more commonplace.

The National Fusion Collaboratory project is also working to enhance collaboration within the control rooms of fusion experiments. Shared display walls have been installed in the control rooms of the three large U.S. fusion experiments for enhanced large-group collaboration. The project has created and deployed unique software that presents the scientists with a multi-user environment allowing them to simultaneously share data to the large display and interact with it to edit, arrange, and highlight information. For collocated scientists, the ability to publish and share a scientific graph on the display wall results in easier and broader discussion compared to the old model of a few individuals gathering around a small computer monitor. For remote scientists, the ability of the shared display to accept video combined with audio has allowed them to interact and even lead an experiment.

Taken as a whole, the technologies being deployed by the National Fusion Collaboratory project are further facilitating data sharing and decision-making by both collocated and remote scientists. With the eyes of the worldwide fusion community shifting towards France for the next-generation device, ITER, such capability becomes increasingly important for scientific success.

Principal Investigator: David Schissel, General Atomics

FIGURE 1. The control rooms of NSTX (a), DIII-D (b), and C-Mod (c) with shared display walls being used to enhance collocated collaboration. On the DIII-D shared display is also video from remote collaborators in Europe who were participating in that day's experiment.

FIGURE 2. FusionGrid services are being used to collaborate in real time by fusion scientists in the U.S. and abroad. Pictured in the foreground is Dr. deGrassie (San Diego) leading the JET experiment (England), and on the screen is the JET control room and various JET data traces being discussed in real time.


The Particle Physics Data Grid

The Department of Energy's national laboratories are home to some of the world's leading facilities for particle physics experiments. Such experiments typically generate millions of particles which must be extensively analyzed and filtered to find and measure the exotic particles that herald new discoveries. The Particle Physics Data Grid (PPDG) collaboratory was launched as a collaboration between physicists and computer scientists to adapt and extend Grid technologies and the physicists' applications to build distributed systems to handle the storage, retrieval and analysis of the particle physics data at DOE's most critical research facilities.

PPDG has been instrumental in the significant progress made over the past five years in deploying Grid-enabled, end-to-end capabilities into the production data processing infrastructure of the participating experiments, and in making extended and robust Grid technologies generally available. Also, PPDG has successfully demonstrated the effectiveness of close collaboration between end users and the developers of new distributed technologies.

The experiments involved range from those already in steady data-taking mode — where existing production systems were carefully extended step by step to take advantage of Grid technologies — to those developing new global systems for detector commissioning and data acquisition.

As a result of PPDG and other Grid projects in the U.S. and Europe, particle physics collaborations now include Grid-distributed computing as an integral part of their data processing models. By collaboratively deploying their developed technologies into significant production use, the computer science groups have extended their technologies and made them generally available to benefit other research communities.

In addition to the experiment-specific end-to-end systems, through partnering with the National Science Foundation's iVDGL and GriPhyN projects and working with European counterparts such as the European Data Grid and Enabling Grids for E-sciencE, PPDG has helped pave the way for a worldwide multi-science production Grid infrastructure. In the U.S., PPDG has played a leadership role in creating first the Grid3 and now the Open Science Grid infrastructure, which provides a common software installation, operational procedures and sharing of services for 50 laboratory and university sites.

The PPDG-supported improvements in data handling include a tenfold increase in the amount of sustained data distribution, a 50 percent reduction in the effort required to distribute data and analysis among five institutions, and up to a 50 percent increase in available resources due to Grid-enabled sharing. And even though funding for the PPDG has concluded, the collaborations established under the program continue to boost scientific productivity.

Significant scientific results and benefits that the technology developments and collaborations in PPDG have enabled include:

• The shortened turnaround time for analysis of the STAR nuclear physics experiment data produced early results in the first direct measurement of "open charm" particle production at the Relativistic Heavy Ion Collider. This was enabled through the experiment's collaboration with the Storage Resource Management group, which for the past few years has provided sustained transfer of more than 5 terabytes a week between Brookhaven National Laboratory in New York and Lawrence Berkeley National Laboratory in California.

• All results from the Fermilab Tevatron experiments, CDF and D0, now rely on distributed computing using Grid technologies. Up to 20 sites around the world now receive and analyze data from the D0 experiment, which has led to new discoveries, such as results showing the mixing of Bs mesons, mesons that contain a "beauty quark" and a "strange quark." The PPDG has contributed to such successes by improving a number of the software applications used to transfer and analyze data, more than doubling the efficiency of these operations. CDF seamlessly supports simulation and analysis jobs at up to 10 sites in Europe and the U.S.

• The BaBar experiment at the Stanford Linear Accelerator Center routinely moves all the data collected from the accelerator to computing centers in France, Italy and England. The Storage Resource Broker was extended to provide the distributed catalogs describing and managing the data. These catalogs are necessary to manage the location and selection information for the multi-petabyte distributed datasets.

• The Large Hadron Collider (LHC) experiments are participating in a worldwide Grid project to provide global distribution and analysis of data from CERN in Switzerland. The U.S. ATLAS and U.S. CMS collaborations are developing and testing their physics algorithms and distributed infrastructure in preparation for the onslaught of tens of petabytes a year of data to be analyzed by 2008. At the end of 2005, ATLAS produced more than one million fully simulated events on the Open Science Grid with 20 different physics samples to test the data distribution and analysis capabilities. CMS is running "data challenges" to distribute data from the CERN accelerator location to the global sites. Data is transferred from the U.S. Tier-1 Center to sites around the world as part of the data distribution and analysis system. Peak transfer rates of about 5 gigabits per second have been reached.

• The Open Science Grid Consortium provides a common infrastructure, validated software stack and ongoing operations and collaboration which all the groups in PPDG use and to which PPDG has made significant contributions. The use of the Open Science Grid has increased steadily since its launch in July 2005, and this work is expected to continue benefiting all the PPDG collaborators over the next few years. Ideally, the significant progress made by PPDG towards a usable, global computing infrastructure supporting large institutions can be extended to benefit smaller research programs and institutions as well.

Principal Investigators: Richard Mount, Stanford Linear Accelerator Center; Harvey Newman, California Institute of Technology; Miron Livny, University of Wisconsin-Madison; Ruth Pordes, Fermi National Accelerator Laboratory (added 2003).

The Earth System Grid II: Turning Climate Model Datasets into Community Resources

One of the most important — and challenging — scientific problems which requires extensive computational simulations is global climate change. The United States' climate research community has created the Community Climate System Model (CCSM), one of the world's leading climate modeling codes. By changing various conditions, such as the emission levels of greenhouse gases, scientists can model different climate change scenarios. Because running such simulations is computationally demanding, the code is only being run at a few DOE computing facilities and the National Center for Atmospheric Research. The resulting data archive, distributed over several sites, currently contains upwards of 100 terabytes of simulation data and is expected to grow to petabytes by the end of the decade. Making the data available to a broad range of researchers for analysis is key to a comprehensive study of climate change.

SciDAC's Earth System Grid (ESG) is a collaborative interdisciplinary project aimed at addressing the challenge of enabling management, discovery, access and analysis of these enormous and extremely important data assets. For the modeling teams, the daily management and tracking of their data is already proving to be a significant problem. Then there are climate researchers working at institutions and universities across the U.S. whose ability to discover and use the data is extremely limited. That's today, and the problem is rapidly escalating.

As computers become more powerful and can run the models faster and with more complex details, such as finer geographic resolution, the amount of data will increase significantly. Much of the current modeling activity is focused upon simulations aimed at the upcoming Intergovernmental Panel on Climate Change (IPCC) assessment, and these simulations have twice the horizontal resolution of the models that have been run for the past several years, which increases the resulting data volume fourfold. Figure 4 depicts new work representing yet another doubling of the resolution.
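The fourfold figure follows directly from the geometry: doubling the horizontal resolution doubles the number of grid points in each of the two horizontal directions. The short calculation below spells that out; the baseline grid size is illustrative, not an actual CCSM configuration.

```python
# Data volume scales with the number of horizontal grid cells, i.e. with the
# square of the horizontal resolution factor (two horizontal dimensions).
# The baseline cell count below is illustrative, not an actual CCSM grid.
baseline_cells = 128 * 256          # illustrative longitude x latitude grid

for doublings in range(3):
    factor = 2 ** doublings
    cells = baseline_cells * factor ** 2
    print(f"{factor}x horizontal resolution -> {cells:,} cells "
          f"({factor ** 2}x the data volume per output field)")
# Doubling the resolution gives 4x the data, and the further doubling shown
# in Figure 4 multiplies the volume by another factor of 4 (16x the original).
```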

All of this adds up to an enormous increase in the volume and complexity of data that will be produced. Moving that data will become increasingly costly, which could discourage the sharing of data among the community of climate researchers.

The heart of ESG is a simple, elegant, and powerful Web portal that allows scientists to register, search, browse, and acquire the data they need. This portal was demonstrated at the Supercomputing 2003 conference on high performance computing. The portal incorporates new capabilities for cataloging datasets and registering new users, as well as interoperability between different mass storage architectures.

One of ESG's strategies is to dramatically reduce the amount of data that needs to be moved over the network, and the project team has done groundbreaking work in developing generalized remote data access capabilities. This involves heavy collaboration with the SciDAC DataGrid Toolkit project, as well as joint work with the community OPeNDAP project.

FIGURE 3: Monitored jobs on the Open Science Grid, showing the number of running jobs per virtual organization (ATLAS, CDF, CMS, DZERO, LIGO, STAR, and many others) from October 2005 through September 2006, with peaks of several thousand concurrent jobs.

FIGURE 4: From the atmospheric component of the CCSM, this visualization shows precipitable water from a high-resolution experimental simulation (T170 resolution, about 70 km).

In summer 2004, ESG began serving IPCC and other model data to a global community in close partnership with the IPCC effort and the World Meteorological Organization.

Principal Investigator: Ian Foster, Argonne National Laboratory

Security and Policy for Group Collaboration

Today, scientific advances are rarely the result of an individual toiling in isolation, but rather the result of collaboration. Such collaborations are using their combined talents to address scientific problems central to DOE's mission in areas such as particle physics experiments, global climate change and fusion science. While considerable work has been done on tools to help perform the work of a collaboration, little attention has been paid to creating mechanisms for establishing and maintaining the structure of such collaborative projects. This structure includes methods for identifying who the members of the collaboration are, what roles they play, what types of activities they are entitled to perform, what community resources are available to members of the collaboration and what policies are set down by the owners of those resources.

The Security and Policy for Group Collaboration project was created to develop scalable, secure and usable methods and tools for defining and maintaining membership, rights, and roles in group collaborations. Such collaborations share common characteristics:

• The participants, as well as the resources, are distributed both geographically and organizationally.

• Collaborations can scale in size from a few individuals to thousands of participants, and membership may be very dynamic.

• Collaborations may span areas of expertise, with members filling different roles within the collaboration.

• The work of the team is enabled by providing team members with access to a variety of community resources, including computers, storage systems, datasets, applications, and tools.

Central to this problem of structure is determining the identity of both participants and resources and, based on this identity, determining the access rights of the participant and the policy of the resource. While mechanisms for authentication and authorization have been defined, the issues of distribution, dynamics and scale complicate their application to collaborative environments. Additionally, sites providing the resources for a collaboration often have overruling security mechanisms and policies in place which must be taken into account, rather than replaced by the collaboration.
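As a purely illustrative sketch of the kind of community-level authorization being described, the snippet below models a virtual organization's membership and role policy as plain data and checks a requested action against both the community policy and a site's own rules. All names and rules here are invented, and this is not the Globus Toolkit API.

```python
# Toy model of community (virtual organization) authorization.  Members,
# roles, rights and the site rule below are all invented for illustration.
VO_MEMBERS = {"alice": "experiment-leader", "bob": "analyst"}
ROLE_RIGHTS = {
    "experiment-leader": {"run-code", "write-data", "manage-members"},
    "analyst": {"run-code", "read-data"},
}
SITE_BLOCKED_ACTIONS = {"manage-members"}   # a resource owner's own policy

def authorize(user: str, action: str) -> bool:
    """Allow an action only if the user's VO role grants it AND the
    resource-owning site's local policy does not forbid it."""
    role = VO_MEMBERS.get(user)
    if role is None:
        return False                         # not a member of the collaboration
    if action in SITE_BLOCKED_ACTIONS:
        return False                         # site policy takes precedence
    return action in ROLE_RIGHTS.get(role, set())

print(authorize("alice", "write-data"))      # True
print(authorize("bob", "write-data"))        # False: role lacks the right
print(authorize("alice", "manage-members"))  # False: blocked by the site
print(authorize("carol", "run-code"))        # False: not a VO member
```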

The project team has instantiated its research results into the Globus Toolkit's widely used Grid Security Infrastructure. Since the Globus Toolkit is already adopted by many science projects — the Particle Physics Data Grid, Earth System Grid, DOE Science Grid and other non-DOE Grid activities like the NSF TeraGrid, NASA IPG and the European Data Grid — the project's mechanisms for security and authentication can be easily used by these scientists.

Principal Investigator: Stephen Tuecke, Argonne National Laboratory

Middleware Projects

Two projects are conducting research and development that will address individual technology elements to enable universal, ubiquitous, easy access to remote resources and to contribute to the ease with which distributed teams work together. Enabling high performance for scientific applications is especially important.

Middleware Technology to Support Science Portals: Gateways to the Grid

While extensive networks such as the Internet have been in existence for decades, it took the organization of the World Wide Web and the creation of easy-to-use Web browsers to make the Internet the dynamic information resource that it is today. In much the same way, networked Grids linking computing, data and experimental resources for scientific research have the potential to greatly accelerate collaborative scientific discovery.

Under SciDAC, the Middleware Technology to Support Science Portals project was created to develop a "Science Portal" environment which would allow scientists to program, access and execute distributed Grid applications from a conventional Web browser and other tools on their desktop computers. Portals allow each user to configure their own problem-solving environment for managing their work. The goal is to allow the scientist to focus completely on the science by making the Grid a transparent extension of the user's desktop computing environment.

The first step in this project was to create a software platform for building the portal, which could also be used as a testbed. Then the project team partnered with applications researchers to get feedback on how well the applications worked when accessed by portal users. For applications, the project partnered with several National Science Foundation (NSF) projects. The team also worked with the Global Grid Forum, with NSF funding, to refine the research portal technology for production use.

Principal Investigator: Dennis Gannon, Indiana University

DataGrid Middleware: Enabling Big Science on Big Data

Many of the most challenging scientific problems being addressed by DOE researchers require access to tremendous quantities of data as well as computational resources. Physicists around the world cooperate in the analysis of petabytes of accelerator data. Climate modelers compare massive climate simulation outputs. Output from multi-million-dollar online instruments, such as the Advanced Photon Source or earthquake engineering shake tables, must be visualized in real time so that a scientist can adjust the experiment while it is running.

In order for these applications to be feasible, infrastructure must be in place to support efficient, high-performance, reliable, secure, and policy-aware management of large-scale data movement. The SciDAC DataGrid Middleware project has provided tools in three primary areas in support of this goal. GridFTP, developed primarily at Argonne National Laboratory, provides a secure, robust, high-performance transport mechanism that is recognized as the de facto standard for transport on the Grid. The Globus replica tools, developed primarily at the University of Southern California's Information Sciences Institute, track replicated data sets and make it efficient to select which data sets to access. The Condor team at the University of Wisconsin provides storage resource management, particularly space reservation tools. Together, these three components provide the basis for DataGrid applications.
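The replica tools' central idea, keeping a catalog that maps each logical file name to the physical copies scattered across sites, can be illustrated in a few lines. The file names, sites, and selection rule below are invented, and this is not the Globus replica catalog interface.

```python
# Toy replica catalog: one logical name, many physical locations.
# All names and the "preferred site" rule are invented for illustration.
replica_catalog = {
    "climate/ccsm/run42/ocean.nc": [
        "gsiftp://storage.ornl.example/archive/ocean.nc",
        "gsiftp://storage.nersc.example/scratch/ocean.nc",
    ],
    "hep/star/events-001.root": [
        "gsiftp://storage.bnl.example/star/events-001.root",
    ],
}

def select_replica(logical_name: str, preferred_site: str) -> str:
    """Return a physical copy of the file, favoring the requester's site."""
    replicas = replica_catalog[logical_name]
    for url in replicas:
        if preferred_site in url:
            return url                        # a nearby copy avoids a WAN transfer
    return replicas[0]                        # otherwise fall back to any copy

print(select_replica("climate/ccsm/run42/ocean.nc", "nersc"))
print(select_replica("hep/star/events-001.root", "nersc"))
```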

These DataGrid tools are being used by other SciDAC projects. The Earth System Grid and the Particle Physics Data Grid use GridFTP servers to stage input data and move results to mass storage systems. They also employ the project's first-generation replica catalog to determine the best location from which to store and/or retrieve data. The Laser Interferometer Gravitational Wave Observatory project has moved over 50 terabytes of data and has a replica location service with over 3 million logical files and over 30 million physical filenames. The Grid3 project, part of the Grid Physics Network, moves over 4 terabytes of data a day. GridFTP is now a Global Grid Forum standard, and there is a replication working group working on standards as well. The tools developed as part of this project have become the de facto standard for data management and, in fact, are opening up new realms of scientific discovery.

For example, participants in the Large Hadron Collider experiment, which will come online in 2007 in Switzerland, are conducting "data challenges" to ensure that the data coming out of the experiment can be accommodated. In a 2005 challenge, a continuous average data flow of 600 megabytes per second was sustained over 10 days, for a total transfer of 500 terabytes of data — all enabled by GridFTP. Moving the same amount of data over a typical household network connection (at 512 kilobits per second) would take about 250 years.
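Those figures check out with simple arithmetic, as the short calculation below shows (it assumes decimal units throughout, i.e. 1 terabyte = 10^12 bytes).

```python
# Check the data-challenge figures quoted above (decimal units assumed).
rate_bytes_per_s = 600e6            # 600 megabytes per second, sustained
duration_s = 10 * 24 * 3600         # 10 days
total_bytes = rate_bytes_per_s * duration_s
print(f"total moved: {total_bytes / 1e12:.0f} TB")   # ~518 TB, i.e. roughly 500 TB

household_bits_per_s = 512e3        # 512 kilobits per second
seconds = total_bytes * 8 / household_bits_per_s
years = seconds / (365.25 * 24 * 3600)
print(f"same transfer at 512 kb/s: {years:.0f} years")  # ~257 years, i.e. about 250
```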

Principal Investigators: Ian Foster, Argonne National Laboratory; Carl Kesselman, University of Southern California Information Sciences Institute; Miron Livny, University of Wisconsin-Madison

Networking Projects

As high-speed computer networks play an increasingly important role in large-scale scientific research, scientists are becoming more dependent on reliable and robust network connections, whether for transferring experimental data, running applications on distributed supercomputers or collaborating across the country or around the globe. But as anyone who has logged onto the Internet can attest, network connections can unexpectedly break or bog down at any time. To help keep such system problems from disrupting scientific research, SciDAC funded three research projects to develop better methods for analyzing and addressing network performance.


INCITE: Monitoring Performance from the Outside Looking In

The INCITE (InterNet Control and Inference Tools at the Edge) project was aimed at developing tools and services to improve the performance of applications running on distributed computing systems linked by computational grids. Although such applications are complex and difficult to analyze, knowledge of the internal network traffic conditions and services could be used to optimize overall performance. Without special-purpose network support (at every router), the only alternative is to indirectly infer dynamic network characteristics from edge-based network measurements. Therefore, this project researched and developed network tools and services for analyzing, modeling and characterizing end-to-end network performance based solely on measurements made at the hosts and routers at the edges of the network, rather than at midpoints on the network. Specifically, the goal was to develop multi-scalable, on-line tools to characterize and map network performance as a function of space, time, applications, protocols and services for end-to-end measurement, prediction and network diagnosis.
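INCITE's actual estimators are statistical and operate on probe traffic; purely as an illustration of what "measurement at the edge" means, the following sketch (standard Python only, placeholder host names) times TCP connection setup from an end host to estimate round-trip delay without any cooperation from routers along the path.

```python
"""Illustration only: edge-based round-trip-time probing from an end host.

This is not INCITE's methodology, just a minimal example of inferring a path
property from measurements taken at the network edge. Hosts are placeholders."""
import socket
import time

def connect_rtt(host, port=80, samples=5):
    """Estimate RTT by timing TCP handshakes; no router support required."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            times.append(time.perf_counter() - start)
    return min(times)   # the minimum over samples filters out queueing noise

if __name__ == "__main__":
    for host in ["depot.example.gov", "portal.example.edu"]:   # placeholders
        try:
            print(host, round(connect_rtt(host) * 1e3, 1), "ms")
        except OSError as err:
            print(host, "unreachable:", err)
```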

Two of the tools developed by the project have been incorporated into a toolkit used to identify performance problems at a number of major networks and research sites. INCITE users include Globus, SciDAC's Supernova Science Center and Particle Physics Data Grid Pilot, Scientific Workspaces of the Future, TeraGrid, TransPAC at Indiana University, the San Diego Supercomputer Center, Telcordia, CAIDA, Autopilot, TAU and the European GridLab project.

Principal Investigator: Richard Baraniuk, Rice University

Logistical Networking: A New Approach for Data Delivery

The project for Optimizing Performance and Enhancing Functionality of Distributed Applications using Logistical Networking explored the use of a new technique to provide fast, efficient and reliable data delivery to support the high-performance applications used by SciDAC collaborators. Logistical networking (LN) is a new way of synthesizing networking and storage to create a communication infrastructure that provides superior control of data movement and management for distributed applications based on shared network storage. LN software tools allow users to create local storage "depots" or utilize shared storage depots deployed worldwide to easily accomplish long-haul data transfers, temporary storage of large datasets (on the order of terabytes), pre-positioning of data for fast on-demand delivery, and high-performance content distribution such as streaming video.
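The upload/augment/download workflow (depicted in Figure 5 below) can be pictured with a small hypothetical sketch. The classes and functions here are illustrative stand-ins invented for this report's purposes, not the actual IBP/LoRS programming interface.

```python
"""Hypothetical sketch of the logistical-networking workflow in Figure 5:
upload to a depot yields an exNode of metadata, augment adds replicas at
other depots, and download retrieves the nearest replica. The objects below
are illustrative stand-ins, not the IBP/LoRS API."""
from dataclasses import dataclass, field

@dataclass
class ExNode:
    """Metadata record listing every depot holding a copy of one file."""
    name: str
    size: int
    replicas: list = field(default_factory=list)    # (depot, distance) pairs

def upload(name, data, depot, distance):
    depot[name] = data                               # 1. UPLOAD to first depot
    return ExNode(name, len(data), [(depot, distance)])

def augment(exnode, depot, distance):
    src_depot, _ = exnode.replicas[0]
    depot[exnode.name] = src_depot[exnode.name]      # 2. AUGMENT: add a replica
    exnode.replicas.append((depot, distance))

def download(exnode):
    depot, _ = min(exnode.replicas, key=lambda r: r[1])
    return depot[exnode.name]                        # 3. DOWNLOAD from nearest

if __name__ == "__main__":
    ornl_depot, ncsu_depot = {}, {}                  # toy "depots"
    ex = upload("tsi_run.dat", b"simulation output", ornl_depot, distance=10)
    augment(ex, ncsu_depot, distance=2)
    assert download(ex) == b"simulation output"      # served from the NCSU copy
```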

A primary research thrust of the project was to work with SciDAC's Terascale Supernova Initiative (TSI) group, which uses LN to share massive datasets at speeds up to 220 Mbps between Oak Ridge National Laboratory (ORNL) in Tennessee and North Carolina State University (NCSU). A new, private LN infrastructure is in place, designed specifically for TSI and other SciDAC projects. Depots at ORNL, the San Diego Supercomputer Center, SUNY-Stony Brook and NCSU provide 8 terabytes of storage and form the network's backbone.

The second area of focus is support for SciDAC's fusion energy researchers. Typical fusion plasma experiments require real-time feedback for rapid tuning of experimental parameters, meaning data must be analyzed during the 15-minute intervals between plasma-generating pulses in experimental reactors. Such rapid assimilation of data is achieved by a geographically dispersed research team, a challenge for which LN technologies are well suited.

Principal Investigator: Micah Beck, University of Tennessee

Estimating Bandwidth to Find the Fastest, Most Reliable Path

The Bandwidth Estimation: Measurement Methodologies and Applications project focused on the research, development and deployment of scalable and accurate bandwidth estimation tools for high-capacity network links. Such tools would allow applications to adapt to changing network conditions, finding the fastest, most reliable network path. This adaptive ability would benefit a large class of data-intensive and distributed scientific applications. However, previous tools and methodologies for measuring network capacity, available bandwidth and throughput have been largely ineffective across real Internet infrastructures.

FIGURE 5: The interworking components of the Logistical Runtime System Tools (LoRS Tools) in the high-performance distribution of data between IBP depots at Terascale Supernova Initiative sites at ORNL and NCSU. (1) UPLOAD moves a file from a local disk to IBP depots, returning an exNode, which contains file metadata. (2) AUGMENT transfers data to other locations, which adds a replica to the exNode. (3) DOWNLOAD by the client retrieves data from the "nearest" depots.


The goal was to develop measurement methodologies and tools for various bandwidth-related metrics, such as per-hop capacity, end-to-end capacity and end-to-end available bandwidth. Capacity is the maximum throughput that the path can provide to an application when there is no competing traffic. Available bandwidth, on the other hand, is the maximum throughput that the path can provide to an application given the path's current traffic load. Measuring capacity is crucial for debugging, calibrating and managing a path, while measuring available bandwidth is important for predicting the end-to-end performance of applications, for dynamic path selection and for traffic engineering.
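To make the two definitions concrete, here is a toy sketch of the packet-pair idea commonly used for capacity estimation, together with the subtraction that yields available bandwidth. It assumes an idealized single narrow link with no queueing noise; it illustrates the general technique, not the project's actual algorithms.

```python
"""Toy illustration of the two metrics defined above (not Pathrate/Pathload code).

Packet-pair idea: two back-to-back packets of size L leave the narrow link of
capacity C spaced by L / C seconds, so measuring that dispersion gives C.
Available bandwidth is what remains of C under the current traffic load."""

PACKET_SIZE = 1500 * 8            # bits in a full-size Ethernet frame

def capacity_from_dispersion(dispersion_s):
    """Estimate narrow-link capacity (bit/s) from packet-pair spacing."""
    return PACKET_SIZE / dispersion_s

def available_bandwidth(capacity_bps, cross_traffic_bps):
    """Throughput left for a new application given the current load."""
    return max(capacity_bps - cross_traffic_bps, 0.0)

if __name__ == "__main__":
    # Hypothetical measurement: the pair arrives 12.4 microseconds apart.
    cap = capacity_from_dispersion(12.4e-6)      # ~968 Mbit/s link
    avail = available_bandwidth(cap, 400e6)      # 400 Mbit/s of cross traffic
    print(f"capacity  ~ {cap / 1e6:.0f} Mbit/s")
    print(f"available ~ {avail / 1e6:.0f} Mbit/s")
```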

As part of the project, Georgia Tech researchers developed two bandwidth estimation tools, Pathrate (which measures capacity) and Pathload (which measures available bandwidth), released in January 2004. Pathrate and Pathload have been downloaded by more than 2,000 users around the world.

Principal Investigators: K. C. Claffy, San Diego Supercomputer Center; Constantinos Dovrolis, Georgia Institute of Technology

84

Page 87: Progress report on the U.S. Department of Energy's Scientific
Page 88: Progress report on the U.S. Department of Energy's Scientific

LIST OF PUBLICATIONS

86

2001K. Abazajian, G. M. Fuller, M. Patel,“Sterile neutrino hot, warm, and cold darkmatter,” Physical Review D, 64, 023501,2001.

P. R. Amestoy, I. S. Duff, J.-Y. L’Excellentand X.S. Li, “Analysis and Comparison ofTwo General Sparse Solvers for DistributedMemory Computers,” ACM Transactionson Mathematical Software, 27:388-421,2001.

S. F. Ashby, M. J. Holst, T. A. Manteuffeland P. E. Saylor, “The Role of the InnerProduct in Stopping Criteria for ConjugateGradient Iterations,” B.I.T., Vol 41, No. 1,2001.

N. R. Badnell, and D. C. Griffin, “Electron-impact excitation of Fe20+, including n=4 lev-els,” Journal of Physics B, 34 681, 2001.

N. R. Badnell, D. C. Griffin, and D. M.Mitnik, “Electron-impact excitation ofFe21+, including n=4 levels,” Journal ofPhysics B, 34, 5071, 2001.

M. Baertschy and X. S. Li, “Solution of aThree-Body Problem in Quantum MechanicsUsing Sparse Linear Algebra on ParallelComputers,” Proc. of the SC2001 confer-ence, Denver, Colorado, November 2001.

M. Baker, “Cluster Computing White Paper,”International Journal of High PerformanceComputing Applications – special issueCluster Computing White Paper, Vol. 15,No. 2, Summer 2001.

S. Benson, L. Curfman McInnes, and J.More, “A Case Study in the Performance andScalability of Optimization Algorithms,”ACM Transactions of MathematicalSoftware 27:361-376, 2001.

J. M. Blondin, R. A. Chevalier, D. M.Frierson, “Pulsar Wind Nebulae in EvolvedSupernova Remnants,” AstrophysicalJournal Letters, 563, 806-815, 2001.

J. M. Blondin, K. J. Borkowski, S. P.Reynolds, “Dynamics of Fe Bubbles in YoungSupernova Remnants,” AstrophysicalJournal Letters, 557, 782-791, 2001.

M. Borland, H. Braun, S. Doebert, L.Groening, and A. Kabel, “RecentExperiments on the Effect of CoherentSynchrotron Radiation on the Electron Beamof CTF II,” Proc. of the 2001 ParticleAccelerator Conference, Chicago, Ill., June2001.

M. Brezina, A. Cleary, R. Falgout, V.Henson, J. Jones, T. Manteuffel, S.McCormick, and J. Ruge, 2001, “Algebraicmultigrid based on element interpolation(AMGe),” SIAM Journal of ScientificComputing, 22:1570-1592.

S. W. Bruenn, K. R. De Nisco, A.Mezzacappa, “General Relativistic Effects inthe Core Collapse Supernova Mechanism,”Astrophysical Journal Letters, 560, 326-338, 2001.

D. Bruhwiler, R. Giacone, J. Cary, J.Verboncoeur, P. Mardahl, E. Esarey, W.Leemans, B. Shadwick, “Particle-in-CellSimulations of Plasma Accelerators andElectron-Neutral Collisions,” PRST-AB, 4,101302, 2001.

Z. Cai, T. Manteuffel, S. McCormick andJ. Ruge, “First-order System LL* (FOSLL*):Scalar Elliptic Partial DifferentialEquations,” SIAM Journal on NumericalAnalysis, 39:1418-1445, 2001.

K. Campbell, “Functional SensitivityAnalysis for Computer Model Output,” Proc.of the Seventh Army Conference onStatistics, Santa Fe, NM, Oct. 2001.

H. Chen, et al., “Calibrating Scalable Multi-Projector Displays Using CameraHomography Trees,” Proc. of ComputerVision and Pattern Recognition, 2001.

H. Chen, et al., “Data DistributionStrategies for High-Resolution Displays,”Computers & Graphics, 25(5):811-818,2001.

T.-Y. Chen, “Preconditioning sparse matricesfor computing eigenvalues and computing lin-ear systems of equations,” PhD Dissertation,UC Berkeley. 2001.

Y. Chen, et al., “Software Environments forCluster-based Display Systems,” Proc. of theFirst IEEE/ACM InternationalSymposium on Cluster Computing andthe Grid, 2001.

B. Cheng, J. Glimm, X. L. Li D. H. Sharp,“Subgrid Models and DNS Studies of FluidMixing,” Proceedings of the 7thInternational Conference on the Physicsof Compressible Turbulent Mixing, E.Meshkov, Y. Yanilkin, V. Zhmailo, eds., p.385-390, 2001.

E. Chow, “Parallel Implementation andPractical Use of Sparse Approximate InversesWith A Priori Sparsity Patterns,”International Journal of High PerformanceComputing Appl., 15, pp. 56-74, 2001.

S. Chu, and S. Elliott, “Latitude versusdepth simulation of ecodynamics and dis-solved gas chemistry relationships in the cen-tral Pacific,” Journal of AtmosphericChemistry, 40(3), 305, 2001.

D. D. Clayton, E. A.-N. Deneault, B. S.Meyer, “Condensation of Carbon inRadioactive Supernova Gas,” AstrophysicalJournal Letters, 562, 480-493, 2001.

J. P. Colgan, M. S. Pindzola, D. M. Mitnik,D. C. Griffin, and I. Bray, “Benchmark non-perturbative calculations for the electron-impact ionization of Li(2s) and Li(2p),”Physical Review Letters, 87, 213201,2001.

J. P. Colgan, M. S. Pindzola, and F. J.Robicheaux, “Fully quantal (Á, 2e) calcula-

List of Publications

Page 89: Progress report on the U.S. Department of Energy's Scientific

LIST OF PUBLICATIONS

87

tions for absolute differential cross sections ofhelium,” Journal of Physics B, 34, L457,2001.

V. K. Decyk and D. E. Dauger,“Supercomputing for the Masses: A ParallelMacintosh Cluster,” Proc. 4th Intl. Conf. onParallel Processing and AppliedMathematics, Naleczow, Poland,September, 2001, Springer-Verlag LectureNotes in Computer Science 2328, ed. byR. Wryrzykowski, J. Dongerra, M.Paprzycki, and J. Wasniewski Springer-Verlag, Berlin, 2002.

C. H. Q. Ding and Y. He, “Ghost CellExpansion Method for Reducing Communi-cations in Solving PDE Problems,” Proc ofSC2001, Denver, Coloradao, Nov. 2001.

F. Dobrian, “External Memory Algorithmsfor Factoring Sparse Matrices,” PhD Disser-tation, Old Dominion University, 2001.

F. Dobrian and A. Pothen, 2001, “TheDesign of I/O-Efficient Sparse DirectSolvers,” Proc. of the SC 2001 conference,Denver, Colorado, 2001.

V. Eijkhout, “Automatic Determination ofMatrix Blocks,” Lapack Working note 151,Parallel Programming, 2001.

A. Gabric, W. Gregg, R. G. Najjar, D. J.Erickson III and P. Matrai, “Modeling thebiogeochemical cycle of DMS in the upperocean: A review,” Chemosphere - GlobalChange Science 3, 377, 2001.

F. Gerigk, M. Vretenar, and R. D. Ryne,“Design of the Superconducting Section of theSPL Linac at CERN,” Proc. PAC 2001, p.3909.

J. R. Gilbert, X. S. Li, E. G. Ng and B. W.Peyton, “Computing Row and ColumnCounts for Sparse QR and LU Factorization,”BIT 41:693-710, 2001.

J. Glimm, X. L. Li, Y. J. Liu, N. Zhao,“Conservative Front Tracking and Level SetAlgorithms,” Proc. National Academy ofSci., p. 14198-14201, 2001.

D. C. Griffin, D. M. Mitnik, J. P. Colganand M. S. Pindzola, “Electron-impact exci-tation of lithium,” Physical Review A, 64,032718, 2001.

D. C. Griffin, D. M. Mitnik and N. R.Badnell, “Electron-impact excitation of Ne+,”Journal of Physics B, 34, 4401 2001.

E. D. Held, J. D. Callen, C. C. Hegna, andC. R. Sovinec, “Conductive electron heat flowalong magnetic field lines,” Physics ofPlasmas 8, 1171, 2001.

P. Heggernes, S. Eisenstat, G. Kumfertand A. Pothen, “The ComputationalComplexity of the Minimum DegreeAlgorithm,” Proc. of the Nordic ComputerScience Conference (NIK), 2001.

W. D. Henshaw, “An Algorithm forProjecting Points onto a Patched CADModel,” Proc. of the 10th InternationalMeshing Roundtable, October 2001.

I. Hofmann, O. Boine-Frankenheim, G.Franchetti, J. Qiang, R. Ryne, D. Jeon, J.Wei, “Emittance Coupling in High IntensityBeams Applied to the SNS Linac,” Proc. ofthe 2001 Particle Accelerator Conference,p. 2902, Chicago, Ill., June 2001.

I. Hofmann, J. Qiang, R. D. Ryne and R.W. Garnett, “Collective Resonance Model ofEnergy Exchange in 3D NonequipartitionedBeams,” Physical Review Letters 86,2313–2316, 2001.

D. J. Holmgren, P. Mackenzie, D.Petravick, R. Rechenmacher and J.Simone, “Lattice QCD production on a com-modity cluster at Fermilab,” Computing inHigh-Energy Physics and Nuclear, 2,Beijing, China, Sept. 2001.

C. Huang, V. Decyk, S. Wang, E. S. Dodd,C. Ren, W. B. Mori, T. Katsouleas, and T.Antonsen Jr., “QuickPIC: A ParallelizedQuasi-Static PIC Code for Modeling PlasmaWakefield Acceleration,” Proc. of the 2001Particle Accelerator Conference, Chicago,Ill., June 2001.

E.-J. Im and K. A. Yelick, Paper:“Optimizing Sparse Matrix-Vector Multipli-cation for Register Reuse,” InternationalConference on Computational Science,San Francisco, Calif., May 2001

V. Ivanov, A. Krasnykh, “Confined FlowMultiple Beam Guns for High Power RFApplications,” Proc. of the 2nd IEEEInternational Vacuum ElectronicsConference, Noordwijk, Netherlands,April 2001.

V. Ivanov, A. Krasnykh, “3D Method forthe Design of Multi or Sheet Beam RFSources,” Proc. of the 2001 ParticleAccelerator Conference, Chicago, Ill.,June 2001.

A. Kabel, “Coherent Synchrotron RadiationCalculations Using TRAFIC4: Multi-proces-sor Simulations and Optics Scans,” Proc. ofthe 2001 Particle Accelerator Conference,Chicago, Ill., June 2001.

A. Kabel, “Quantum Corrections toIntrabeam Scattering,” Proc. of the 2001

Particle Accelerator Conference, Chicago,Ill., June 2001.

K. Keahey, T. Fredian, Q. Peng, D. P.Schissel, M. Thompson, I. Foster, M.Greenwald, and D. McCune,“Computational Grids in Action,” TheNational Fusion Collaboratory FutureGeneration Computer System, 2001.

S.-D. Kim, T. Manteuffel and S.McCormick, “First-order system leastsquares (FOSLS) for spatial linear elasticity:pure traction,” SIAM Journal on NumericalAnalysis, 38:1454-1482, 2001.

A. Kuprat, A. Khamayseh, “VolumeConserving Smoothing for Piecewise LinearCurves, Surfaces, and Triple Lines,” Journalof Computational Physics, 172, p. 99-118,2001.

C.-L. Lappen and D. A. Randall, “Towarda Unified Parameterization of the BoundaryLayer and Moist Convection. Part III:Simulations of Clear and CloudyConvection,” Journal of AtmosphericSciences., 58, 2052, 2001.

C.-L. Lappen and D. A. Randall “Towarda Unified Parameterization of the BoundaryLayer and Moist Convection. Part II: LateralMass Exchanges and Subplume-ScaleFluxes,” Journal of Atmospheric Sciences.,58, 2037, 2001.

C.-L. Lappen and D. A. Randall, “Towarda Unified Parameterization of the BoundaryLayer and Moist Convection. Part I: A NewType of Mass-Flux Model,” Journal ofAtmospheric Science, 58, 2021, 2001.

J. Larson, R. Jacob, I. Foster, J. Guo, “TheModel Coupling Toolkit,” Proc. of the. 2001International Conference onComputational Science, eds. V. N.Alexandrov, J. J. Dongarra, C. J. K. Tan,Springer-Verlag, 2001.

S. Lee, T. Katsouleas, R. G. Hemker, E. S.Dodd, and W. B. Mori, “Plasma-WakefieldAcceleration of a Positron Beam,” PhysicalReview E, 64, No. 4, 04550, 2001.

Z. Li, N. Folwell, G. Golub, A. Guetz, B.McCandless, C.-K. Ng, Y. Sun, M. Wolf,R. Yu and K. Ko, “Parallel Computations ofWakefields in Linear Colliders and StorageRings,” Proc. of the 2001 ParticleAccelerator Conference, Chicago, Ill.,June 2001.

Z. Li, K. L. Bane, R. H. Miller, T. O.Raubenheimer, J. Wang, R. D. Ruth,“Traveling Wave Structure Optimization forthe NLC,” Proc. of the 2001 Particle

Page 90: Progress report on the U.S. Department of Energy's Scientific

Accelerator Conference, Chicago, Ill.,June 2001.

Z. Li, P. Emma and T. Raubenheimer,“The NLC L-band Bunch Compressor,”Proc. of the 2001 Particle AcceleratorConference, Chicago, Ill., June 2001.

Z. Li, N. Folwell, G. Golub, A. Guetz, B.McCandless, C.-K. Ng, Y. Sun, M. Wolf,R. Yu and K. Ko, “Parallel Computations OfWakefields In Linear Colliders And StorageRings,” Proc. of the 2001 ParticleAccelerator Conference, Chicago, Ill.,June 2001.

Z. Liu, et al., “Progressive View-DependentIsosurface Propagation,” Proc. of the IEEETCVG Symposium on Visualization,2001.

W. P. Lysenko et al., “CharacterizingProton Beam of 6.7 MeV LEDA RFQ byFitting Wire-Scanner Profiles to 3-DNonlinear Simulations,” Proc. of the 2001Particle Accelerator Conference, p. 3051,Chicago, Ill., June 2001.

B. S. Meyer, S. S. Gupta, L.-S. The, S.Long, “Long-Lived Nuclear Isomers andShort-Lived Radioactivities,” Meteoritics &Planetary Science (Supp.) 36, 134, 2001.

D. M. Mitnik, D. C. Griffin and N. R.Badnell, “Electron-impact excitation ofNe5+,” Journal of Physics B, 34, 4455,2001.

P. Muggli, et al., “Collective refraction of abeam of electrons at a plasma-gas interface,”Physical Review Letters Special Topics-Accelerators & Beams, Vol. 4, No. 9, pp.091301-091303, Sept. 2001.

S. Nath et al., “Comparison of LinacSimulation Codes,” Proc. of the 2001Particle Accelerator Conference, p. 264,Chicago, Ill., June 2001.

J. Neiplocha, H. E. Trease, B. J. Palmer, D.R. Rector, “Building an ApplicationDomain Specific Programming Frameworkfor Computational Fluid DynamicsCalculations on Parallel Computers,” TenthSIAM Conference on Parallel Processingfor Scientific Computing, March 2001.

E. Parker, B. R. de Supinski and D.Quinlan, “Measuring the Regularity ofArray References,” Los Alamos ComputerScience Institute Symposium (LACSI2001), Los Alamos, NM, Oct. 2001.

N. A. Peterson, K. K. Chand, “Detectingtranslation errors in CAD surfaces andpreparing geometries for mesh generation,”

Proc. of 10th Annual Meshing RoundTable, Newport Beach, Calif., October2001.

J. Pruet, K. Abazajian, G. M. Fuller, “Newconnection between central engine weakphysics and the dynamics of gamma-ray burstfireballs,” Physical Review D, 64, 063002,2001.

J. Pruet, G. M. Fuller, C. Y. Cardall, “OnSteady State Neutrino-heated UltrarelativisticWinds from Compact Objects,”Astrophysical Journal Letters, 561, 957-963, 2001.

S. C. Pryor, R. J. Barthelmie, J. T. Schoof,L. L. Sorensen, D. J. Erickson III,“Implications of heterogeneous chemistry fornitrogen deposition to marine ecosystems:Observations and modeling,” InternationalJournal of Environmental Pollution,Water, Air and Soil Pollution, Focus 1, 99,2001.

J. Qiang and R. Ryne, “Parallel 3D PoissonSolver for a Charged Beam in a ConductingPipe,” Computer PhysicsCommunications 138, 18, 2001.

J. Qiang, R. Ryne, B. Blind, J. Billen, T.Bhatia, R. Garnett, G. Neuschaefer, H.Takeda, “High Resolution Parallel Particle-In-Cell Simulation of Beam Dynamics in theSNS Linac,” Nuclear Instrument andMethods Physics, Res. A, 457, 1, 2001.

J. Qiang, I. Hofmann, and R. D. Ryne,“Cross-Plane Resonance: A Mechanism forVery Large Amplitude Halo Formation,”Proc. of the 2001 Particle AcceleratorConference, p. 1735, Chicago, Ill., June2001.

J. Qiang and R. D. Ryne, “A Layer-BasedObject-Oriented Parallel Framwork for BeamDynamics Studies,” Proc. of the 2001Particle Accelerator Conference, p. 3060,Chicago, Ill., June 2001.

J. F. Remacle, M. S. Shephard, J. E.Flaherty, O. Klass, “Parallel Algorithm ori-ented mesh database,” Proc. 10thInternational Meshing Roundtable, p.341-349, Newport Beach, Calif., October2001.

R. Samanta, et al., “Parallel Rendering withK-Way Replication,” Proc. of the IEEESymposium on Parallel and Large-DataVisualization and Graphics, 2001.

S. L. Scott, M. Brim, R. Flanery, A. Geistand B. Luethke, “Cluster Command &Control (C3) Tools Suite,” Parallel and

Distributed Computing Practices – specialissue, Nova Science Publishers, Inc., ISSN1097-2803,Volume 4, Number 4, Dec.2001.

S. L. Scott, M. Brim and T. Mattson,“OSCAR: Open Source Cluster ApplicationResources,” 2001 Linux Symposium,Ottawa, Canada, July 2001.

S. L. Scott, M. Brim, G.A. Geist, B.Luethke and J. Schwidder, “M3C:Managing and Monitoring MultipleClusters,” CCGrid 2001 – IEEEInternational Symposium on ClusterComputing and the Grid, Brisbane,Australia, May 2001.

M. Seth, K. G. Dyall, R. Shepard, and A.Wagner, “The Calculation of f-f Spectra ofLanthanide and Actinide Ions by theMCDF-CI Method,” Journal of Physics B:At. Molecular and Optical Physics 34,2383-2406 2001.

R. Shepard, A. F. Wagner, J. L. Tilson andM. Minkoff, “The Subspace ProjectedApproximate Matrix (SPAM) Modificationof the Davidson Method,” Journal ofComputational Physics 172, 472-514.2001.

A. Snavely, “Modeling ApplicationPerformance by Convolving MachineSignatures with Application Profiles,” IEEE4th Annual Workshop on WorkloadCharacterization, Austin, Texas, Dec.2001.

C. R. Sovinec, J. M. Finn, and D. del-Castillo-Negrete, “Formation and sustain-ment of spheromaks in the resistive MHDmodel,” Physics of Plasmas 8, 475, 2001.

L. Sugiyama, W. Park, H. R. Strauss, et al.,“Studies of spherical tori, stellarators, andanisotropic pressure with the M3D code,”Nuclear Fusion 41, 739, 2001.

K. Teranishi, P. Raghavan and E. G. Ng,“Scalable Preconditioning Using IncompleteFactors,” Proc. of the Tenth SIAMConference on Parallel Processing forScientific Computing, 2001.

T. A. Thompson, A. Burrows and B. S.Meyer, “The Physics of Proto-Neutron StarWinds: Implications for r-ProcessNucleosynthesis,” Astrophysical Journal,562:887-908, Dec. 2001.

P. Vanek, M. Brezina and J. Mandel,“Convergence analysis of algebraic multigridbased on smoothed aggregation,” NumericalMathematics 88, 559—579, 2001.

LIST OF PUBLICATIONS

88

Page 91: Progress report on the U.S. Department of Energy's Scientific

R. Vuduc, J. Demmel and J. Bilmes,“Statistical Models for AutomaticPerformance Tuning,” International Journalof High Performance ComputingApplications, 2001.

T. P. Wangler et al., “Experimental Study ofProton-Beam Halo Induced by BeamMismatch in LEDA,” Proc. of the 2001Particle Accelerator Conference, p. 2923,Chicago, Ill., June 2001.

A. D. Whiteford, N. R. Badnell, C. P.Ballance, M. G. O’Mullane, H. P.Summers and A. L. Thomas, “A radiation-damped R-matrix approach to the electron-impact excitation of helium-like ions for diag-nostic application to fusion and astrophysicalplasmas,” Journal of Physics B, 34, 3179,2001.

K. Wu, E. J. Otoo, and A. Shoshani, “Aperformance comparison of bitmap indexes,”Proc. of the 2001 ACM CIKMInternational Conference on Informationand Knowledge Management, Atlanta,Georgia, USA, Nov. 2001.

2002J. Abate, S. Benson, L. Grignon, P.Hovland, L. McInnes, and B. Norris,“Integrating Automatic Differentiation withObject-Oriented Toolkits for HighPerformance Scientific Computing,” FromSimulation to Optimization, G. Corliss, C.Faure, A. Griewank, L. Hascoet and U.Naumann, eds, p 173-178, Springer-Verlag,New-York, 2002.

A. Adelmann, “Space Charge Studies atPSI,” Proc. 20th ICFA Advanced BeamDynamics Workshop on High BrightnessHadron Beams (ICFA-HB 2002), p. 316,Batavia, Ill. April 2002.

A. Adelmann and D. Feichtinger, “GenericLarge Scale 3D Visualization of Acceleratorsand Beam Lines,” Proc. ICCS 2002, p. 362,2002.

S. Adjerid, K. D. Devine, J. E. Flaherty, L.Krivodonova, “A posteriori error estimationfor discontinuous Galerkin solutions of hyper-bolic problems,” Computer Methods inApplied Mechanics and Engineering, 191,p. 1997-1112, 2002.

V. Akcelik, G. Biros and O. Ghattas, 2002,“Parallel Multiscale Gauss-Newton-KrylovMethods for Inverse Wave Propagation,”Proc. of the IEEE/ACM SC2002Conference, Baltimore, Md., Nov. 2002.

W. Allcock, J. Bester, J. Bresnahan, I.Foster, J. Gawor, J. Insley, J. Link, M.Papka, “GridMapper: A Tool for Visualizingthe Behavior of Large-Scale DistributedSystems,” Proc. of the 11th IEEEInternational Symposium on HighPerformance Distributed ComputingHPDC-11, 179-188, 2002.

C. K. Allen, K. C. D. Chan, P. L.Colestock, K. R. Crandall, R. W. Garnett,J. D. Gilpatrick, W. Lysenko, J. Qiang, J.D. Schneider, M. E. Shulze, R. L.Sheffield, H. V. Smith, and T. P. Wangler,“Beam-Halo Measurements in High-CurrentProton Beams,” Physical Review Letters,89, 214802, Nov. 2002.

J. Amundson and P. Spentzouris, “AHybrid, Parallel 3D Space Charge Code withCircular Machine Modeling Capabilities,”Proc. of the International ComputationalAccelerator Physics Conference 2002(ICAP2002), East Lansing, Michigan,Oct. 2002.

C. P. Ballance, N. R. Badnell, and K. A.Berrington, “Electron-impact excitation ofH-like Fe at high temperatures,” Journal ofPhysics B, 35, 1095, 2002.

G. P. Balls, P. Colella, “A finite differencedomain decomposition method using local cor-rections for the colution of Poissons’s equa-tion,” Journal of Computational Physics,180, 25, 2002.

D. B. Batchelor, L. A. Berry, M. D.Carter, E. F. Jaeger, C. K. Phillips, R.Dumont, P. T. Bonoli, D. N. Smithe, R.W. Harvey, D. A. D’Ippolito, J. R. Myra,E. D’Azevedo, L. Gray and T. Kaplan,“Tera-scale Computation of Wave-PlasmaInteractions in Multidimensional FusionPlasmas,” Plasma Physics and ControlledNuclear Fusion Research, 2002, Proc. ofthe 19th International Conference, Lyon,France, International Atomic EnergyAgency, Vienna, 2002.

J. B. Bell, M. S. Day and J. F. Grcar,“Numerical Simulation of PremixedTurbulent Methane Combustion,” Proc. ofthe Combustion Institute, 29:1987, 2002.

J. B. Bell, M. S. Day, J. F. Grcar, W. G.Bessler, C. Shultz, P. Glarborg and A. D.Jensen, “Detailed Modeling and Laser-

Induced Fluorescence Imaging of Nitric Oxidein an NH3-seeded non-premixed methane/airflame,” Proc. of the Combustion Institute,29:2195, 2002.

G. J. O. Beran, S. R. Gwaltney, and M.Head-Gordon, “Can coupled cluster singlesand doubles be approximated by a valenceactive space model?,” Journal of ChemistryPhysics. 117, 3040-3048, 2002.

C. Bernard, et al. (The MILCCollaboration), “Lattice Calculation ofHeavy-Light Decay Constants with TwoFlavors of Dynamical Quarks,” PhysicalReview D 66, 094501, 2002.

L. A. Berry, E. F. D’Azevedo, E. F. Jaeger,D. B. Batchelor and M. D. Carter, “All-Orders, Full-Wave RF Computations UsingLocalized Basis Functions,” 29th EPSConference on Plasma Phys. and Contr.Fusion, ECA Vol. 26B, P-4.109, June2002.

G. Biros and O. Ghattas, “InexactnessIssues in Lagrange-Newton-Krylov-SchurMethods for PDE-ConstrainedOptimization,” Lecture Notes inComputational Science and Engineering,30, Springer-Verlag, 2002.

D. L. Brown, L. Freitag, J. Gilman,“Creating interoperable meshing and dis-cretization software: the Terascale SimulationTools and Technology Center,” Proc.of theEighth International Conference onNumerical Grid Generation inComputational Field Simulations,Honolulu, Hawaii , June 2002.

X.-C. Cai and D. E. Keyes, “NonlinearlyPreconditioned Inexact Newton Algorithms,”SIAM Journal of Scientific Computing,24:183-200, 2002.

X.-C. Cai, D. E. Keyes and L.Marcinkowski, “Nonlinear AdditiveSchwarz Preconditioners and Applications inComputational Fluid Dynamics,”International Journal of NumericalMethods in Fluids, 40:1463-1470, 2002.

X.-C. Cai, D. E. Keyes and D. P. Young,“A Nonlinearly Additive SchwarzPreconditioned Inexact Newton Method forShocked Duct Flow,” Proc. of the 13thInternational Conference on DomainDecomposition Methods,” (N. Debit etal., eds.) CIMNE pp. 345-352, 2002.

M. D. Carter, F. W. Baity, Jr., G. C.Barber, R. H. Goulding, Y. Mori, D. O.Sparks, K. F. White, E. F. Jaeger, F. R.Chang-Diaz, and J. P. Squire, “Comparing

LIST OF PUBLICATIONS

89

Page 92: Progress report on the U.S. Department of Energy's Scientific

Experiments with Modeling for Light IonHelicon Plasma Sources,” Physics ofPlasmas 9, 5097, 2002.

L. Chatterjee, M. R. Strayer, J. S. Wu,“Pair production by neutrinobremsstrahlung,” Physical Review D, 65,057302, 2002.

H. Chen, et al., “Scalable Alignment ofLarge-Format Multi-Projector Displays UsingCamera Homography Trees,” Proc. of IEEEVisualization (VIZ), October 2002.

H. Chen, et al., “Memory PerformanceOptimizations for Real-Time SoftwareHDTV Decoding,” Proc. of the IEEEInternational Conference on Multimediaand Expo, August 2002.

H. Chen, et al., “A Parallel Ultra-HighResolution MPEG-2 Video Decoder for PCCluster Based Tiled Display System,” Proc.of International Parallel and DistributedProcessing Symposium, 2002.

H. Chen, et al., “Experiences withScalability of Display Walls,” Proc. of the7th Annual Immersive ProjectionTechnology Symposium, 2002.

E. Chow, T. A. Manteuffel, C. Tong, B. K.Wallin, “Algebraic elimination of slide sur-face constraints in implicit structural analy-sis,” International Journal for NumericalMethods in Eng., 01, 1-21, 2002.

S. Chu, S. Elliott and M. Maltrud, “Wholeocean fine resolution carbon management sim-ulations systems,” EOS, Transactions,American Geophysical Union 2002Spring Meeting, 83(19), S125, 2002.

C. E. Clayton, B. E. Blue, E. S. Dodd, C.Joshi, K. A. Marsh, W. B. Mori, S. Wang,P. Catravas, S. Chattopadhyay, E. Esarey,W. P. Leemans, R. Assmann, F. J. Decker,M. J. Hogan, R. Iverson, P. Raimondi, R.H. Siemann, D. Walz, T. Katsouleas, S.Lee and P. Muggli, “Transverse EnvelopeDynamics of a 28.5-GeV Electron Beam in aLong Plasma,” Phys. Rev. Lett., 88, 15 p.154801/1-4, April 2002.

D. D. Clayton, B. S. Meyer, L.-S. The, M.F. El Eid, “Iron Implantation in PresolarSupernova Grains,” Astrophysical JournalLetters, 578, L83-L86, 2002.

E. Colby, V. Ivanov, Z. Li, C. Limborg,“Simulation Issues for RF Photoinjector,”Proc. of the International ComputationalAccelerator Physics Conference 2002(ICAP2002), East Lansing, Michigan,Oct. 2002.

J. P. Colgan and M. S. Pindzola, “(Á, 2e)total and differential cross section calculationsfor helium at various excess energies,”Physical Review A, 65, 032729, 2002.

J. P. Colgan and M. S. Pindzola, “Core-excited resonance enhancement in the two-photon complete fragmentation of helium,”Physical Review Letters, 88, 173002,2002.

J. P. Colgan and M. S. Pindzola, “Doublephotoionization of beryllium,” PhysicalReview A, 65, 022709, 2002.

J. P. Colgan and M. S. Pindzola, “Time-dependent close-coupling studies of the elec-tron-impact ionization of excited-stateHelium,” Physical Review A, 66, 062707,2002.

J. P. Colgan, M. S. Pindzola and F. J.Robicheaux, “A Fourier transform method ofcalculating total cross sections using the time-dependent close-coupling theory,” PhysicalReview A, 66, 012718, 2002.

J. H. Cooley, T. M. Antonsen Jr., C.Huang, V. Decyk, S. Wang, E. S. Dodd, C.Ren, W. B. Mori, and T. Katsouleas,“Further Developments for a Particle-in-CellCode for Efficiently Modeling WakefieldAcceleration Schemes,” AdvancedAccelerator Concepts, Tenth Workshop,eds. C. E. Clayton and P. Muggli, AIPConf. Proc. No. 647, p. 232-239, 2002.

J. J. Cowan, C. Sneden, S. Burles, I. I.Ivans, T. C. Beers, J. W. Truran, J. E.Lawler, F. Primas, G. M. Fuller, B.Pfeiffer, K.-L. Kratz, “The ChemicalComposition and Age of the Metal-poor HaloStar BD +17°3248,” Astrophysical JournalLetters, 572, 861-879, 2002.

T. Cristian, I.-H. Chung and J. K.Hollingsworth, “Active Harmony: TowardsAutomated Performance Tuning,” Proc. ofthe SC 2002 conference, Baltimore, Md.,Nov. 2002.

S. Deng, F. Tsung, S. Lee, W. Lu, W. B.Mori, T. Katsouleas, P. Muggli, B. E.Blue., C. E. Clayton, C. O’Connell, E.Dodd, F. J. Decker, C. Huang, M. J.Hogan, R. Hemker, R. H. Iverson, C.Joshi, C. Ren, P. Raimondi, S. Wang andD. Walz, “Modeling of Ionization Physicswith the PIC Code OSIRIS,” AdvancedAccelerator Concepts, Tenth Workshop,eds. C. E. Clayton and P. Muggli, AIPConf. Proc. No. 647, p. 219-223, 2002.

S. Deng, T. Katsouleas, S. Lee, P. Muggli,W. B. Mori, R. Hemker, C. Ren, C.

Huang, E. Dodd, B. E. Blue., C. E.Clayton, C. Joshi, S. Wang, F-J. Decker,M. J. Hogan, R. H. Iverson, C. O’Connell,P. Raimondi, and D. Walz, “3-DSimulation of Plasma Wakefield Accelerationwith Non-Idealized Plasmas and Beams,”Advanced Accelerator Concepts, TenthWorkshop, eds. C. E. Clayton and P.Muggli, AIP Conf. Proc. No. 647, p. 592-599. 2002.

L. Derose, K. Ekanadham, J. K.Hollingsworth and S. Sbaraglia, “SIGMA:A Simulator to Guide Memory Analysis,”Proc. of the SC 2002 conference,Baltimore, Md., Nov. 2002.

D. A. Dimitrov, D. L. Bruhwiler, W.Leemans, E. Esarey, P. Catravas, C. Toth,B. Shadwick, J. R. Cary, and R. Giacone,“Simulations of Laser Propagation andIonization in l’OASIS Experiments,” Proc.Advanced, Accel. Concepts Workshop, p.192, Mandalay Beach, CA, 2002.

D. A. Dimitrov, D. L. Bruhwiler, W. P.Leemans, E. Esarey, P. Catravas, C. Toth,B. A. Shadwick, J. R. Cary and R. E.Giacone, “Simulations of Laser Propagationand Ionization in l’OASIS Experiments,”Proc. Advanced Accel. ConceptsWorkshop, AIP Conf. Proc. 647, eds. C.E. Clayton and P. Muggli, AIP, NewYork, 2002.

E. S. Dodd, R. G. Hemker, C.-K.Huang,S. Wang, C. Ren, W. B. Mori, S. Lee andT. Katsouleas, “Hosing and Sloshing ofShort-Pulse Ge-V-Class Wakefield Drivers,”Physical Review Letters, 88, 125001,March 2002.

E. D. Dolan and J. J. More, “BenchmarkingOptimization Software with PerformanceProfiles,” Mathematical Programming,91:201-213, 2002.

D. Dolgov, et al. (The LHPC Collabora-tion):, “Moments of nucleon light cone quarkdistributions calculated in full lattice QCD,”Physical Review D 66, 034506, 2002.

S. C. Doney and M. W. Hecht, “AntarcticBottom Water Formation and Deep WaterChlorofluorocarbon Distributions in a GlobalOcean Climate Model,” Journal of Physics,Oceanography 32, 1642, 2002.

R. J. Dumont, C. K. Phillips and D. N.Smithe, “ICRF wave propagation andabsorption in plasmas with non-thermal pop-ulations,” 29th EPS Conference on PlasmaPhys. and Contr. Fusion, ECA Vol. 26B,P-5.051, Montreux, Switzerland, June2002.

LIST OF PUBLICATIONS

90

Page 93: Progress report on the U.S. Department of Energy's Scientific

D. J. Erickson, III and J. L. Hernandez, “Aglobal, high resolution, satellite-based model ofair-sea isoprene flux,” Gas Transfer atWater Surfaces, ed. by M. A. Donelan, W.M. Drennan, E. S. Saltzman, and R.Wanninkhof, American GeophysicalUnion Monograph 127, 312, p. 312, 2002.

R. Evard, D. Narayan, J. P. Navarro, andD. Nurmi, “Clusters as large-scale develop-ment facilities,” Proc of the 4th IEEEInternational Conference on ClusterComputing (CLUSTER02), p. 54, IEEEComputer Society, 2002.

R. D. Falgout and U. M. Yang, “hypre: aLibrary of High PerformancePreconditioners,” in Computational Science- ICCS 2002 Part III, P. M. A. Sloot, C .J.K. Tan. J. J. Dongarra, and A. G. Hoekstra,eds., Lecture Notes in Computer Science,2331, p. 632-641, Springer-Verlag, 2002.

J. E. Flaherty, L. Krivodonova, J. F.Remacle, M. S. Shephard, “Aspects of dis-continuous Galerkin methods for hyperbolicconservation laws,” Finite Elements inAnalysis and Design, 38:889-908, 2002.

J. E. Flaherty, L. Krivodonova, J. F.Remacle, M. S. Shephard, “High-orderadaptive and parallel discontinuous Galerkinmethods for hyperbolic systems,” WCCM V:Fifth World Congress on ComputationalMechanics, Vienna Univ. of Tech.,Vienna, Austria, p. 1:144, 2002.

R. A. Fonseca, et al., “OSIRIS: A 3D, fullyrelativistic PIC code for modeling plasmabased accelerators,” Lect. Notes Comput.Sci, 2331, 3 p. 342-351. 2002.

T. W. Fredian and J. A. Stillerman,“MDSplus, Current Developments andFuture Directions,”The 3rd IAEATechnical Meeting on Control, DataAcquisition, and Remote Participation forFusion Research, Fusion Engineering andDesign, 60, 229-233, 2002.

L. Freitag, T. Leurent, P. Knupp, D.Melander, “MESQUITE Design: Issues inthe Development of a Mesh QualityImprovement Toolkit”, Proceedings of the8th Intl. Conference on Numerical GridGeneration in Computational FieldSimulations, Honolulu, p159-168, 2002.

C. L. Fryer and M. S. Warren, “ModelingCore-Collapse Supernovae in ThreeDimensions,” Astrophysical JournalLetters, 574:L65-L68, July 2002.

G. Fubiani, G. Dugan, W. Leemans, E.Esarey, P. Catravas, C. Toth, B. Shadwick,

J. R. Cary, and R. Giacone, “Semi-analyti-cal 6D Space Charge Model for DenseElectron Bunches with Large EnergySpreads,” Proc. of the AdvancedAccelerator Concepts WorkshopMandalay Beach, Calif., 2002.

G. M. Fuller, “The nuclear physics andastrophysics of the new neutrino physics,”AIP Conf. Proc. 610: Nuclear Physics inthe 21st Century, p. 231-246, 2002.

A. Gebremedhin, F. Manne, and A.Pothen, “Parallel Distance- k ColoringAlgorithms for Numerical Optimization,”Lecture Notes in Computer Science,2400, p. 912-921, Springer-Verlag, 2002.

E. George, J. Glimm, X. L. Li, A.Marchese, Z. L. Xu, “A Comparison ofExperimental, Theoretical and NumericalSimulation Rayleigh-Taylor Mixing Rates,”Proc. National Academy of Science, 59,p. 2587-2592, 2002.

S. J. Ghan, X. Bian, A. G. Hunt and A.Coleman, “The thermodynamic influence ofsubgrid orography in a global climate model,”Climate Dynamics, 20, 31-44, 2002.

A. Z. Ghalam, T. Katsouleas, S. Lee, W.B. Mori, C. Huang, V. Decyk and C. Ren,“Simulation of Electron-Cloud Instability inCircular Accelerators using Plasma Models,”Advanced Accelerator Concepts, TenthWorkshop, eds. C. E. Clayton and P.Muggli, AIP Conf. Proc. No. 647, p. 224j-231, 2002.

O. Ghattas and L. T. Biegler, “ParallelAlgorithms for Large-Scale Simulation-basedOptimization,” iModeling and Simulation-Based Life Cycle Engineering, SponPress, London, 2002.

T. A. Gianakon, S. E. Kruger, and C. C.Hegna, “Heuristic closures for numerical sim-ulation of neoclassical tearing modes,”Physics of Plasmas 9, 536, 2002.

J. Glimm, X. L. Li, A. Lin, “Nonuniformapproach to terminal velocity for single modeRayleigh-Taylor instability,” ActaMathematicae Applicatae Sinica, 18, p. 1-8, 2002.

J. Glimm, X. L. Li, Y. J. Liu, “ConservativeFront Tracking in One Dimension,” FluidFlow and Transport in Porous Media:Mathematical and Numerical Treatment,Contemporary Mathematics, Z. X. Chen,R. Ewing, eds., American MathematicalSociety, 295, p. 253-264, 2002.

L. Griogori and X. S. Li, “A NewScheduling Algorithm for Parallel Sparse LU

Factorization with Static Pivoting,” Proc. ofthe IEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

W. Gropp, L. McInnes and B. Smith,“Chapter 19 of CRPC Handbook of ParallelComputing,” Morgan Kaufmann, 2002.

P. T. Haertel and D. A. Randall, “Could apile of slippery sacks behave like an ocean?”Monthly Weather Review, 130, 2975,2002.

A. Heger, S. E.Woosley, T. Rauscher, R.D. Horman, and M. M. Boyes, “Massivestar evolution: nucleosynthesis and nuclearreaction rate uncertainties,” NewAstronomy Review, 46:463-468, July2002.

R. P. Heikes, “A comparison of vertical coor-dinate systems for numerical modeling of thegeneral circulation of the atmosphere,” Ph.D.thesis, Colorado State University, 2002.

W. D. Henshaw, “Generating CompositeOverlapping Grids on CAD Geometries,”Numerical Grid Generation in Compu-tational Field Simulations, June 2002.

W. D. Henshaw, “An Algorithm forProjecting Points onto a Patched CADModel,” Engineering with Computers, 18,p. 265-273, 2002.

V. E. Henson and U. M. Yang,“BoomerAMG: a Parallel AlgebraicMultigrid Solver and Preconditioner,”Applied Numerical Mathematics, 41, p.155-177, 2002.

J. J. Heys, C. G. DeGroff, T. Manteuffel, S.McCormick and W. W. Orlando, “First-order system least squares for elastohydrody-namics with application to flow in compliantblood vessels,” Biomed. Sci. Instrum. 38, p.277-282, 2002.

M. J. Hogan, C. D. Barnes, C. E. Clayton,C. O’Connell, F. J. Decker, S. Deng, P.Emma, C. Huang, R. Iverson, D. K.Johnson, C. Joshi, T. Katsouleas, P.Krejcik, W. Lu, K. A. Marsh, W. B. Mori,P. Muggli, R. H. Siemann and D. Walz,“Acceleration and Focusing of Electrons andPositrons using a 30 GeV Drive Beam,”Advanced Accelerator Concepts, TenthWorkshop, eds. C. E. Clayton and P.Muggli, AIP Conf. Proc. No. 647, pp. 3-10, 2002.

P. Hovland, B. Norris and B. Smith,“Making Automatic Differentiation trulyAutomatic: Coupling PETSc with ADIC,”Proc. of ICCS 2002, Amsterdam,Netherlands, 2002.

LIST OF PUBLICATIONS

91

Page 94: Progress report on the U.S. Department of Energy's Scientific

E. C. Hunke and J. K. Dukowicz, “TheElastic-Viscous-Plastic Sea Ice DynamicsModel in General Orthogonal CurvilinearCoordinates on a Sphere—Incorporation ofMetric Terms,” Monthly Weather Review,130, 1848, 2002.

C. Joshi, B. Blue, C. E. Clayton, E. Dodd,C. Huang, K. A. Marsh, W. B. Mori, S.Wang, J. J. Hogan, C. O’Connell, R.Siemann, D. Watz, P. Muggli, T.Katsouleas and S. Lee, “High EnergyDensity Plasma Science with anUltrarelativistic Electron Beam,” Physics ofPlasmas, Vol. 9, No. 5, p. 1845-1855, May2002.

J. Ivanic and K. Ruedenberg, “Deadwoodin Configuration Spaces. II. SD and SDTQSpaces,” Theoretical Chemistry Accounts,107, 220-228, 2002.

V. Ivanov, N. Folwell, G. Golub, A. Guetz,Z. Li, W. Mi, C.-K. Ng, G. Schussman, Y.Sun, J. Tran, M. Weiner, M. Wolf and K.Ko, “Computational Challenge In LargeScale Accelerator Modeling,” ICCM-2002,Novosibirsk, Russia, June 2002.

V. Ivanov, R. Ives, A. Krasnykh, G.Miram, M. Read, “An Electron Gun for aSheet Beam Klystron,” IVEC-2002,Monterey, Calif., April 2002.

V. Ivanov, G. Schussman, M. Weiner,“Particle Tracking Algorithms for 3D ParallelCodes in Frequency and Time Domains,”Proc. of 18th Annual Review of Progressin Applied ComputationalElectromagnetics, ACES-2002, Monterey,Calif., March 2002.

E. F. Jaeger, L. A. Berry, E. D’Azevedo, D.B. Batchelor, M. D. Carter, K. F. Whiteand H. Weitzner, “Advances in Full-WaveModeling of Radio Frequency Heated,Multidimensional Plasmas,” Physics ofPlasmas 9, 1873, 2002.

G. C. Jordan, B. S. Meyer, “Sensitivity ofIsotope Yields to Reaction Rates in the AlphaRich Freezeout,” Proc. of the Workshop onAstrophysics, Symmetries, and AppliedPhysics at Spallation Neutron Sources(ASAP 2002), p. 42. Oak Ridge, Tenn.March 2002.

A. Kabel, “A Parallel Code for LifetimeCalculations in P/P-Storage Rings in thePresence of Parasitic Beam-Beam Crossings,”Proc. of the International ComputationalAccelerator Physics Conference 2002(ICAP2002), East Lansing, Michigan,Oct. 2002.

A. Kabel, “Parallel Simulation Algorithmsfor the Strong-Strong Beam-BeamInteraction,” Proc. of the InternationalComputational Accelerator PhysicsConference 2002 (ICAP2002), EastLansing, Michigan, Oct. 2002.

T. Katsouleas, A. Z. Ghalam, S. Lee, W.B. Mori, C. Huang, V. Decyk and C. Ren,“Plasma Modeling of Wakefields in ElectronClouds,” ECLOUD’02 Mini-Workshop onElectron-Cloud Simulations for Protonand Positron Beams, CERN, Geneva,Switzerland, 2002.

K. Keahey and V. Welch, “Fine-GrainAuthorization for Resource Management inthe Grid Environment,” Proc. of the 3rdInternational Workshop on GridComputing, 2002.

D. E. Keyes, “Terascale Implicit Methods forPartial Differential Equations,” The BarrettLectures, University of TennesseeMathematics Department, 2001,Contemporary Mathematics 306:29-84,AMS, Providence, RI, 2002.

D. E. Keyes, P. D. Hovland, L. C.McInnes and W. Samyono, “UsingAutomatic Differentiation for Second-OrderMatrix-free Methods in PDE-constrainedOptimization,” Automatic Differentiationof Algorithms: From Simulation toOptimization (G. Corliss et al., eds.),Springer-Verlag, p 35-50, 2002.

A. Khamayseh, A. Kuprat, “Hybrid CurvePoint Distribution Algorithms,” SIAMJournal on Scientific Computing, 23, p.1464-1484, 2002.

A. Klawonn, O. B. Widlund and M.Dryja, “Dual-Primal FETI Methods forThree-dimensional Elliptic Problems withHeterogeneous Coefficients,” SIAM Journalon Numerical Analysis, 40:159-179, 2002.

K. Ko, “High Performance Computing inAccelerator Physics,” Proc. of 18th AnnualReview of Progress in AppliedComputational Electromagnetics, ACES-2002, Monterey, Calif., March 2002.

S. Lee, T. Katsouleas, P. Muggli, W. B.Mori, C. Joshi, R. Hemker, E. S. Dodd, C.E. Clayton, K. A. Marsh, B. Blue, S.Wang, R. Assmann, F. J. Decker, M.Hogan, R. Iverson, and D. Walz, “EnergyDoubler for a Linear Collider,” PhysicalReview Special Topics-Accelerators AndBeams, 5, 1, January 2002.

X. Li, J. Demmel, D. Bailey, G. Henry, Y.Hida, J. Iskandar, W. Kahan, S. Kang, A.

Kapur, M. Martin, B. Thompson, T. Tungand D. Yoo, “Design, Implementation andTesting of Extended and Mixed PrecisionBLAS,” ACM Transactions ofMathematical Software 22:152-205, 2002.

M. Liebendorfer, O. E. B. Messer, A.Mezzacappa, W. R. Hix, F.-K.Thielemann, K. Langanke, “The importanceof neutrino opacities for the accretion luminos-ity in spherically symmetric supernova mod-els,” Proc. of the 11th Workshop onNuclear Astrophysics, p. 126-131, Max-Planck-Institut fuer Astrophysik,Germany, 2002.

Z. Liu, et al., “Improving Progressive View-Dependent Isosurface Propagation,”Computers and Graphics, 26 (2): 209-218, 2002.

S. D. Loch, M. S. Pindzola and J. P.Colgan, “Electron-impact ionization of allionization stages of krypton,” PhysicalReview A, 66, 052708, 2002.

C.-D. Lu and D. A. Reed, “CompactApplication Signatures for Parallel andDistributed Scientific Codes,” Proc. of theSC 2002 conference, Baltimore, Md.,Nov. 2002.

X. Luo, M. S. Shephard, J. F. Remacle, R.M. Bara, M. W. Beall, B. A. Szab, “p-Version Mesh Generation Issues,” 11thInternational Meshing Roundtable, 2002.

X. Luo, M. S. Shephard, J. F. Remacle,“The Influence of Geometric Approximationon the Accuracy of High Order Methods,”Proc. 8th International Conference onNumerical Grid Generation inComputational Field Simulations, Miss.State University, 2002.

W. P. Lysenko, R. W. Garnett, J. D.Gilpatrick, J. Qiang, L. J. Rybarcyk, R. D.Ryne, J. D. Schneider, H. V. Smith, L. M.Young, and M. E. Schulze, “High OrderBeam Features and Fitting Quadrupole ScanData to Particle Code Model,” Proc. of theInternational Computational AcceleratorPhysics Conference 2002 (ICAP2002),East Lansing, Michigan, Oct. 2002.

K.-L. Ma, G. Schussman, B. Wilson, K.Ko, J. Qiang and R. Ryne, “AdvancedVisualization Technology for TerascaleParticle Accelerator Simulations,” Proc. ofthe IEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

J. L. McClean, P.-M. Poulain, J. W. Pelton,and M. E. Maltrud, “Eulerian andLagrangian statistics from surface drifters

LIST OF PUBLICATIONS

92

Page 95: Progress report on the U.S. Department of Energy's Scientific

and a high-resolution POP simulation in theNorth Atlantic,” Journal of PhysicalOceanography, 32, 2472, 2002.

C. W. McCurdy, P. Cummings, E. Stechel,B. Hendrickson and D. E. Keyes, “Theoryand Modeling in Nanoscience,” whitepapercommissioned by the U.S. DOE Officesof Basic Energy Sciences and AdvancedScientific Computing, 2002.

C. L. Mendes and D. A. Reed,“Monitoring Large Systems via StatisticalSampling,” LACSI Symposium, Santa Fe,Oct. 2002.

B. S. Meyer, “r-Process Nucleosynthesis with-out Excess Neutrons,” Physical ReviewLetters, 89, 231101, 2002.

D. M. Mitnik, D. C. Griffin, and M. S.Pindzola, “Time-dependent close-coupling cal-culations of dielectronic capture in He,”Physical Review Letters, 88, 173004,2002.

J. J. More, “Automatic Differentiation Toolsin Optimization Software,” in AutomaticDifferentiation 2000: From Simulation toOptimization, G. Corliss, C. Faure, A.Griewank, L. Hascoet, and U. Naumann,eds, p 25-34, Springer-Verlag, New-York,2002.

W. B. Mori, “Recent Advances and SomeResults in Plasma-Based AcceleratorModeling,” Advanced AcceleratorConcepts, Tenth Workshop, eds. C. E.Clayton and P. Muggli, AIP Conf. Proc.No. 647, p. 11-28, 2002.

P. Muggli, “Transverse Envelope Dynamicsof a 28.5-GeV Electron Beam in a LongPlasma,” Physical Review Letters, Vol. 88,No. 15 pp. 154801/1-4, April 2002.

C. Naleway, M. Seth, R. Shepard, A. F.Wagner, J. L. Tilson, W. C. Ermler and S.R. Brozell, “An Ab Initio Study of theIonization Potentials and the f-f Spectroscopyof Europium Atoms and Ions,” Journal ofChemistry Physics. 116, 5481-5493. 2002.

J. P. Navarro, R. Evard, D. Nurmi, and N.Desai, “Scalable cluster administration -{Chiba City I} approach and lessons learned,”Proc of the 4th IEEE InternationalConference on Cluster Computing(CLUSTER02), p. 54, IEEE ComputerSociety, 2002.

J.W. Negele, “Understanding parton distri-butions from lattice QCD: Present limitationsand future promise,” Nuclear Physics A711, 281, 2002.

C.-K. Ng, L. Ge, A. Guetz, V. Ivanov, Z.Li, G. Schussmann, Y. Sun, M. Weiner, M.Wolf, “Numerical Studies of Field Gradientand Dark Currents in SLAC Structures,”Proc. of the International ComputationalAccelerator Physics Conference 2002(ICAP2002), East Lansing, Michigan,Oct. 2002.

C.-K. Ng et. al.,“Modeling HOM Heating inthe PEP-II IR with Omega3P,” Proc. of theInternational Computational AcceleratorPhysics Conference 2002 (ICAP2002),East Lansing, Michigan, Oct. 2002.

B. Norris, S. Balay, S. Benson, L. Freitag,P. Hovland, L. McInnes, and B. Smith,“Parallel Components for PDEs andOptimization: Some Issues and Experiences,”Parallel Computing 28:1811-1831, 2002.

C. O’Connell et al., “Dynamic Focusing ofan Electron Beam through a Long Plasma,”Phys. Rev. STAB, 5, 121301, 2002.

R. J. Oglesby, S. Marshall, D. J. EricksonIII, J. O. Roads and F. R. Robertson,“Thresholds in atmosphere-soil moisture inter-actions: Results from climate model studies,”Journal of Geophysical Research,. 107,2002.

L. F. Pavarino and O. B. Widlund,“Balancing Neumann-Neumann methods forincompressible Stokes equations,” Comm.Pure Appl. Math. 55:302-335, 2002.

M. S. Pindzola, “Proton-impact excitation oflaser-excited lithium atoms,” PhysicalReview A, 66, 032716, 2002.

P. Ploumans, G. S. Winckelmans, J. K.Salmon, A. Leonard, and M. S. Warren,“Vortex Methods for High-ResolutionSimulation of Three-Dimensional Blu® BodyFlows; Application to the Sphere at Re=300,500 and 1000,” Journal of ComputationalPhysics, 178:427-463, 2002.

J. Pruet and N. Dalal, “Implications ofNeutron Decoupling in Short Gamma-RayBursts,” ApJ, 573:770-777, July 2002.

J. Pruet, S. Guiles, G. M. Fuller, “Light-Element Synthesis in High-EntropyRelativistic Flows Associated with Gamma-Ray Bursts,” Astrophysical Journal Letters,580, 368-373, 2002.

J. Qiang, P. L. Colestock, D. Gilpatrick, H.V. Smith, T. P. Wangler and M. E.Schulze, “Macroparticle Simulation Studiesof a Proton Beam Halo Experiment,” Phys.Rev. ST Accel. Beams, vol 5, 124201,2002.

J. Qiang, M. Furman and R. Ryne, “Strong-Strong Beam-Beam Simulation Using aGreen Function Approach,” Phys. Rev. STAccel. Beams, 5, 104402, Oct. 2002.

J. Qiang and R. Ryne, “Parallel BeamDynamics Simulation of Linear Accelerators,”Proceedings of the 2002 AppliedComputational Electromagnetics Society(ACES’2002) Conference, March 2002.

D. A. Randall, T. D. Ringler, R. P. Heikes,P. Jones and J. Baumgardner, “Climatemodeling with spherical geodesic grids,”Computing in Science and Engr., 4, 32-41. 2002.

T. Rauscher, A. Heger, R. D. Hoffmanand S. E. Woosley, “Nucleosynthesis inMassive Stars with Improved Nuclear andStellar Physics,” American Physics Journal,576:323-348, Sept. 2002.

J. F. Remacle, O. Klass, J. E. Flaherty, M.S. Shephard, “A parallel algorithm orientedmesh database,” Engineering withComputers, 18, 274,2002.

J. F. Remacle, M. S. Shepard, J. E.Flaherty, “Some issues on distributed meshrepresentations,” Proc. 8th InternationalConference on Numerical GridGeneration in Computational FieldSimulations, Miss. State University, 2002.

J. F. Remacle, O. Klass, M. S. Shephard,“Trellis: A framework for adaptive numericalanalysis based on multiparadigm program-ming in C++,” WCCM V: Fifth WorldCongress on Computational Mechanics,Vienna Univ. of Tech, Vienna, Austria, p.1-164, 2002.

J. F. Remacle, X. Li, N. Chevaugeon, M.S. Shepard, “Transient mesh adaptationusing conforming and non-conforming meshmodifications,” 11th International MeshingRoundtable, 2002.

T. D. Ringler and D. A. Randall, “TheZM Grid: An Alternative to the Z Grid,”Monthly Weather Review, 130,1411–1422, 2002.

T. D. Ringler, and D. A. Randall, “APotential Enstrophy and Energy ConservingNumerical Scheme for Solution of theShallow-Water Equations on a GeodesicGrid,” Monthly Weather Review, 130,1397–1410, 2002.

F. J. Robicheaux and J. D. Hanson,“Simulation of the expansion of an ultracoldneutral plasma,” Physical Review Letters,88, 055002. 2002.

LIST OF PUBLICATIONS

93

Page 96: Progress report on the U.S. Department of Energy's Scientific

J. M. Sampaio, K. Langanke, G. Martinez-Pinedo, D. J. Dean, “Neutral-current neutri-no reactions in the supernova environment,”Physics Letters B, 529, 19-25. 2002.

D. P. Schissel, M. J. Greenwald, and W .E.Johnston, “Fusion Energy SciencesNetworking: A Discussion of FutureRequirements,” DOE High PerformanceNetwork Planning Workshop, 2002.

D. P. Schissel, “An Advanced CollaborativeEnvironment to Enhance Magnetic FusionResearch, for the National FusionCollaboratory Project,” Proceedings of theWorkshop on Advanced CollaborativeEnvironments, 2002.

D. P. Schissel, et al., “Data Management,Code Deployment, and ScientificVisualization to Enhance Scientific Discoveryin Fusion Research through AdvancedComputing,” The 3rd IAEA TechnicalMeeting on Control, Data Acquisition,and Remote Participation for FusionResearch. Fusion Engineering and Design60, 481-486, 2002.

K. L. Schuchardt, J. D. Myers, E. G.Stephan, 2002, “A web-based data architec-ture for problem-solving environments: appli-cation of distributed authoring and versioningthe extensible computational chemistry envi-ronment,” Cluster Computing 5(3):287-296, 2002.

D. R. Schultz, C. O. Reinhold, P. S. Krsticand M. R. Strayer, “Ejected-electron spec-trum in low-energy proton-hydrogen colli-sions,” Physical Review A, 65, 052722,2002.

G. Schussman and K.-L. Ma, “ScalableSelf-Orienting Surfaces: A Compact, Texture-Enhanced Representation for InteractiveVisualization Of 3D Vector Fields,” Proc. ofthe Pacific Graphics 2002 Conference,Beijing, China, Oct. 2002.

G. Schussman and K.-L. Ma, “ScalableSelf-Orienting Surfaces: A Compact Texture-Enhanced Representation for InteractiveVisualization Of 3d Vector Fields,” Proc. ofthe 10th Pacific Conference on ComputerGraphics and Applications, IEEEComputer Society, Washington, D.C.,Oct. 2002.

S. L. Scott, B. Luethke and T. Naughton,“C3 Power Tools – The Next Generation,”4th DAPSYS Conference, Linz, Austria,Sept.–Oct. 2002.

S. L. Scott, B. Luethke, “HPC FederatedCluster Administration with C3 v3.0,”

Ottawa Linux Symposium (OLS 2002),Ottawa, Canada, July 2002.

S. L. Scott, J. L. Mugler and T. Naughton,“User Interfaces for Clustering Tools,” OttawaLinux Symposium (OLS 2002), Ottawa,Canada, July 2002.

S. L. Scott, B. Luethke, “C3 Power ToolsCommodity, High-Performance ClusterComputing Technologies and Applications,”Sixth World Multiconference onSystemics, Cybernetics, and Informatics(ISAS SCI2002), Orlando, Fla., July 2002.

S. L. Scott, T. Naughton, B. Barrett, J.Squyres, A. Lumsdain and Y. C. Fang,“The Penguin in the Pail – OSCAR ClusterInstallation Tool Commodity, High-Performance Cluster Computing Technologiesand Applications,” Sixth WorldMulticonference on Systemics,Cybernetics, and Informatics (ISASSCI2002), Orlando, Fla., July 2002.

B. A. Shadwick, G. M. Tarkenton, E. H.Esarey and W. P. Leemans, “Fluid simula-tions of intense laser-plasma interactions,”IEEE Trans. Plasma Sci. 30, 38, 2002.

M. S. Shephard, X. Luo, J. F. Remacle,“Meshing for p-version finite element meth-ods,” WCCM V: Fifth World Congress onComputational Mechanics, Vienna Univ.of Tech, Vienna, Austria, p. 1-464, 2002.

A. Snavely, et al., “A Framework forApplication Performance Modeling andPrediction,” Proc. of the SC 2002 confer-ence, Baltimore, Md., Nov. 2002.

P. Spentzouris, J. Amundson, J. Lackey, L.Spentzouris, R. Tomlin, “Space ChargeStudies and Comparison with SimulationsUsing the FNAL Booster,” Proc. of theInternational Computational AcceleratorPhysics Conference 2002 (ICAP2002),East Lansing, Michigan, Oct. 2002.

N. Sullivan, A. D. Jensen, P. Glarborg, M.S. Day, J. F. Grcar, J. B. Bell, C. J. Pope,and R. J. Kee, “Ammonia Conversion andNOx Formation in Laminar CoflowingNonpremixed Methane-Air Flames,”Combustion and Flame, 131(3):285, 2002.

Y. Sun, N. Folwell, Z. Li and G. H. Golub,“High Precision Accelerator Cavity DesignUsing the Parallel Eigensolver Omega3p,”Proc. of 18th Annual Review of Progressin Applied ComputationalElectromagnetics, ACES-2002, Monterey,Calif., March 2002.

Y. Sun, G. Golub and K. Ko, “Solving the Complex Symmetric Eigenproblem in Accelerator Structure Design,” Proc. of the International Computational Accelerator Physics Conference 2002 (ICAP2002), East Lansing, Michigan, Oct. 2002.

K. Teranishi, P. Raghavan, and E. G. Ng,“A New Data-Mapping Scheme for Latency-Tolerant Distributed Sparse TriangularSolution,” Proc. of the IEEE/ACMSC2002 Conference, Baltimore, Md., Nov.2002.

K. Teranishi, P. Raghavan, and E. G. Ng,“Reducing Communication Costs for ParallelSparse Triangular Solution,” Proc. of theIEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

F.-K. Thielemann, D. Argast, F. Brachwitz,G. Martinez-Pinedo, T. Rauscher, M.Liebendoerfer, A. Mezzacappa, P. Hoflich,K. Nomoto, “Nucleosynthesis and StellarEvolution,” Astrophysics and SpaceScience, 281, 25-37, 2002.

F.-K Thielemann,. D. Argast, F.Brachwitz, G. Martinez-Pinedo, T.Rauscher, M. Liebendoerfer, A.Mezzacappa, P. Hoflich, K. Iwamoto, K.Nomoto, “Nucleosynthesis in Supernovae.(I),” ASP Conf. Ser. 253: ChemicalEnrichment of Intracluster andIntergalactic Medium, p. 205, 2002.

F.-K. Thielemann, P. Hauser, E. Kolbe, G.Martinez-Pinedo, I. Panov, T. Rauscher,K.-L. Kratz, B. Pfeiffer, S. Rosswog, M.Liebendoerfer, A. Mezzacappa, “HeavyElements and Age Determinations,” SpaceScience Reviews, 100, 277-296, 2002.

J. L. Tilson, C. Naleway, M. Seth, R. Shepard, A. F. Wagner, and W. C. Ermler, “An Ab Initio Study of the f-f Spectroscopy of Americium+3,” Journal of Chemical Physics, 116, 5494-5502, 2002.

S. J. Thomas, J. M. Dennis, H. M. Tufo, P.F. Fischer, “A Schwarz preconditioner for thecubed-sphere,” Proc. of the CopperMountain Conference on IterativeMethods, 2002.

H. Trease, L. Trease, J. Fowler, R. Corley,C. Timchalk, K. Minard, D. Rommeriem,“A Case Study: Extraction, ImageReconstruction, And Mesh Generation FromNMR Volume Image Data From F344 RatsFor Computational Biology Applications,”8th International Conference on GridGeneration and Computational Physics,2002.

F. S. Tsung, R. G. Hemker, C. Ren, W. B. Mori, L. O. Silva and T. Katsouleas, “Generation of Ultra-Intense, Single-Cycle Laser Pulse using Photon Deceleration,” Proceedings of the National Academy of Sciences, Vol. 99, pp. 29-32, 2002.

T. Van Voorhis and M. Head-Gordon,“Implementation of generalized valence-bondinspired coupled cluster theories,” Journal ofChemical Physics, 117, 9190-9201, 2002.

J. L. Vay, P. Colella, P.McCorquodale, B.Van Straalen, A. Friedman, D.P. Grote,“Mesh refinement for particle-in-cell plasmasimulations: Applications to and benefits forheavy ion fusion,” Laser and ParticleBeams, 20, 4, p. 569-575, 2002.

J. S. Vetter and P. Worley, “AssertingPerformance Expectations,” Proc. of the SC2002 conference, Baltimore, Md., Nov.2002.

J. S. Vetter and A. Yoo, “An EmpiricalPerformance Evaluation of Scalable ScientificApplications,” Proc. of the SC 2002 con-ference, Baltimore, Md., Nov. 2002.

J. S. Vetter, “Dynamic Statistical Profiling ofCommunication Activity in DistributedApplications,” Proc. of SIGMETRICS2002: Joint International Conference onMeasurement and Modeling of ComputerSystems, ACM, 2002.

J. S. Vetter and F. Mueller, “CommunicationCharacteristics of Large-Scale ScientificApplications for Contemporary ClusterArchitectures,” Proc. of the InternationalParallel and Distributed ProcessingSymposium, 2002.

R. Vuduc, J. W. Demmel, K. A. Yelick, S.Kamil, R. Nishtala and B. Lee,“Performance Optimizations and Bounds forSparse Matrix-Vector Multiply,” Proc. of theIEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

R. Vuduc, S. Kamil, J. Hsu, R. Nishtala, J.W. Demmel and K. A. Yelick, “AutomaticPerformance Tuning and Analysis of SparseTriangular Solve,” ICS 2002: Workshop onPerformance Optimization via High-Level Languages and Libraries, 2002.

G. Wallace, “Display Wall ClusterManagement,” Proc. of the IEEEVisualization 2002 Workshop onCommodity-Based Visualization Clusters,Oct. 2002.

S. Wang, C. E. Clayton, B. E. Blue, E. S. Dodd, K. A. Marsh, W. B. Mori, C. Joshi, P. Muggli, T. Katsouleas, F. J. Decker, M. J. Hogan, R. H. Iverson, P. Raimondi, D. Watz, R. Siemann, and R. Assmann, “X-ray Emission from Betatron Motion in a Plasma Wiggler,” Physical Review Letters, Vol. 88, No. 13, p. 135004/1-4, April 2002.

A. Wetzels, A. Gurtler, L. D. Noordam, F.Robicheaux, C. Dinu, H. G. Muller, M. J.J. Vrakking and W. J. van der Zande,“Rydberg state ionization by half-cycle-pulseexcitation: strong kicks create slow electrons,”Physical Review Letters, 89, 273003.2002.

A. D. Whiteford, N. R. Badnell, C. P.Ballance, S. D. Loch, M. G. O’Mullaneand H. P. Summers, “Excitation of Ar15+and Fe23+ for diagnostic application tofusion and astrophysical plasmas,” Journal ofPhysics B, 35, 3729, 2002.

B. Wilson, K.-L. Ma and P. McCormick,“A Hardware-Assisted Hybrid RenderingTechnique for Interactive VolumeVisualization,” 2002 Symposium onVolume Visualization and Graphics,Boston, Mass., Oct. 2002.

B. Wilson, K.-L. Ma, J. Qiang, and R.Ryne, “Interactive Visualization of ParticleBeams for Accelerator Design,” InProceedings of the Workshop on ParticleAccelerator Science and Technology,ICCS 2002, Amsterdam, Netherlands,April 2002.

M. Wingate, et al. (The HPQCDCollaboration), “Heavy Light Physics UsingNRQCD/Staggered Actions,” NuclearPhysics B (Proc. Suppl.) 106, 379, 2002.

M. Wolf, A. Guetz and C.-K. Ng, “ModelingLarge Accelerator Structures with theParallel Field Solver Tau3P,” Proc. of 18thAnnual Review of Progress in AppliedComputational Electromagnetics, ACES-2002, Monterey, Calif., March 2002.

C. S. Woodward, K. E. Grant and R.Maxwell, “Applications of SensitivityAnalysis to Uncertainty Quantification forVariably Saturated Flow,” ComputationalMethods in Water Resources, S.M.Hassanizadeh, R. J. Schotting, W. G.Gray, and G. F. Pinder, eds., vol.1, pp. 73-80, Elsevier, Amsterdam, 2002.

S. E. Woosley, A. Heger, and T. A.Weaver, “The evolution and explosion ofmassive stars,” Reviews of ModernPhysics, 74:1015-1071, Nov. 2002.

P. H. Worley, “Scaling the Unscalable: ACase Study on the AlphaServer SC,” Proc. ofthe IEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

P. H. Worley, T. H. Dunigan, Jr., M. R.Fahey, J. B. White III, A. S. Bland, “EarlyEvaluation of the IBM p690,” Proc. of theIEEE/ACM SC2002 Conference,Baltimore, Md., Nov. 2002.

K. Wu, E. J. Otoo, and A. Shoshani, “Compressing bitmap indexes for faster search operations,” Proc. of SSDBM’02, p. 99, Edinburgh, Scotland, 2002.

M. Zingale, L. J. Dursi, J. ZuHone, A. C.Calder, B. Fryxell, T. Plewa, J. W. Truran,A. Caceres, K. Olson, P. M. Ricker, K.Riley, R. Rosner, A. Siegel, F. X. Timmesand N. Vladimirova, “Mapping InitialHydrostatic Models in Godunov Codes,”Astrophysical Journal Supplement,143:539-565, Dec. 2002.

2003

S. I. Abarzhi, J. Glimm, A. D. Lin, “Rayleigh-Taylor instability for fluids with a finite density contrast,” Phys. Fluids, 15, p. 2190-2197, 2003.

S. I. Abarzhi, J. Glimm, K. Nishihara,“Rayleigh-Taylor instability and Richtmyer-Meshkov instabilities for fluids with a finitedensity contrast,” Phys. Lett. A, 11, p. 1-7,2003.

M. Adams, M. Brezina, J. Hu, and R.Tuminaro, “Parallel multigrid smoothing:polynomial versus Gauss-Seidel,” Journal ofComputational Physics, 188, p. 593-610,2003.

V. Akcelik, J. Bielak, G. Biros, I.Epanomeritakis, A. Fernandez, O.Ghattas, E. J. Kim, D. O’Hallaron and T.Tu, 2003, “High-resolution Forward andInverse Earthquake Modeling on TerascaleComputers,” Proc. of the ACM/IEEESC03 conference, Nov. 2003.

C. Alexandrou et al., “Calculation of the Nto Delta electromagnetic transition matrix ele-ment,” Nuclear Physics B (Proc. Suppl.)119, 413, 2003.

P. R. Amestoy, I. S. Duff, J.-Y. L’Excellentand X. S. Li, “Impact of the Implementationof MPI Point-to-Point Communications onthe Performance of Two General SparseSolvers,” Parallel Computing, 29:833-849,2003.

J. Amundson, J. Lackey, P. Spentzouris,G. Jungman, L. Spentzouris, “Calibrationof the Fermilab Booster ionization profilemonitor,” Physics Review ST Accel.Beams. 6, 102801, 2003.

J. Amundson, P. Spentzouris, “FNALBooster: Experiment and Modeling,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

J. Amundson, P. Spentzouris, “SYNER-GIA: A Hybrid, Parallel Beam DynamicsCode with 3D Space Charge,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

C. Aubin, et al. (The MILCCollaboration), “Chiral logs with staggeredfermions,” Nuclear Physics B (Proc.Suppl.), 119, 233, 2003.

N. R. Badnell, D. C. Griffin, and D. M.Mitnik, “Electron-impact excitation of B+using the R-matrix with pseudo-statesmethod,” Journal of Physics B, 1337 2003.

N. R. Badnell, M. G. O’Mullane, H. P.Summers, Z. Altun, M. A. Bautista, J. P.Colgan, T. W. Gorczyca, D. M. Mitnik,M. S. Pindzola, and O. Zatsarinny,“Dielectronic recombination data for dynamicfinite-density plasmas: I. Goals and methodol-ogy,” Astronomy and Astrophysics, 406,1151, 2003.

A. B. Balantekin, G. M. Fuller, “Supernovaneutrino nucleus astrophysics,” Journal ofPhysics G, 29, 2513-2522, 2003.

C. P. Ballance, N. R. Badnell and E. S. Symth, “A pseudo-state sensitivity study on hydrogenic ions,” Journal of Physics B, 36, 3707, 2003.

C. P. Ballance, N. R. Badnell, D. C. Griffin, S. D. Loch and D. M. Mitnik, “The effects of coupling to the target continuum on the electron-impact excitation of Li+,” Journal of Physics B, 36, 235, 2003.

G. T. Balls, S. B. Baden, P. Colella,“SCALLOP: A highly scalable parallelPoisson solver in three dimensions,” Proc. ofACM/IEEE SC03 conference, Phoenix,Arizona, November, 2003.

M. W. Beall, J. Walsh, M. S. Shephard, “Acomparison of techniques for geometry accessrelated to mesh generation,” Proc. 7th USNat. Congress on Comp. Mech., p. 62,2003.

M. W. Beall, J. Walsh, M. S. Shephard, “Accessing CAD geometry for mesh generation,” 12th International Meshing Roundtable, Sandia National Laboratories, p. 33-42, 2003.

J. B. Bell, M. S. Day, J. F. Grcar, M. J.Lijewski, M. Johnson, R. K. Cheng and I.G. Shepherd, “Numerical Simulation of aPremixed Turbulent V-Flame,” 19thInternational Colloquium on theDynamics of Explosions and ReactiveSystems, July-Aug. 2003.

J. B. Bell, M. S. Day, A. S. Almgren, R. K.Cheng, and I. G. Shepherd, “NumericalSimulation of Premixed Turbulent MethaneCombustion,” Proc. of the Second MITConference on Computational Fluid andSolid Mechanics, June 2003.

J. B. Bell, M. S. Day, J. F. Grcar, and M. J.Lijewski, “Analysis of carbon chemistry innumerical simulations of vortex flame interac-tions,” 19th International Colloquium onthe Dynamics of Explosions and ReactiveSystems, July-Aug. 2003.

C. Bernard, et al. (The MILCCollaboration), “Topological Susceptibilitywith the Improved Asqtad Action,” Phys.Rev. D 68, 114501, 2003.

C. Bernard, et al. (The MILCCollaboration), “Light hadron propertieswith improved staggered quarks,” NuclearPhysics B (Proc. Suppl.), 119, 257, 2003.

C. Bernard, et al. (The MILCCollaboration), “Topological susceptibilitywith the improved Asqtad action,” NuclearPhysics B (Proc. Suppl.), 119, 991, 2003.

C. Bernard, et al. (The MILCCollaboration), “Static hybrid quarkoniumpotential with improved staggered quarks,”Nuclear Physics B (Proc. Suppl.), 119,598, 2003.

C. Bernard, et al. (The MILCCollaboration), “High temperature QCDwith three flavors of improved staggeredquarks,” Nuclear Physics B (Proc. Suppl.),119, 523, 2003.

C. Bernard, et al. (The MILCCollaboration), “Exotic hybrid mesons fromimproved Kogut-Susskind fermions,” NuclearPhysics B (Proc. Suppl.), 119, 260, 2003.

C. Bernard, et al. (The MILCCollaboration), “Heavy–light meson decayconstants with Nf =3,” Nuclear Physics B(Proc. Suppl.), 119, 613, 2003.

C. Bernard, et al. (The MILC Collaboration), “Lattice calculation of 1-+ hybrid mesons with improved Kogut-Susskind fermions,” Physical Review D, 68, 074505, 2003.

S. Bhowmick, L. McInnes, B. Norris, andP. Raghavan, 2003, “The Role of Multi-Method Linear Solvers in PDE-basedSimulations,” Proc. of the 2003International Conference onComputational Science and itsApplications, Montreal, Canada, LectureNotes in Computer Science, 2677, p. 828-839, V. Kumar, M. L. Gavrilova, C. J. K.Tan, and P. L’Ecuyer. eds, May 2003.

L. Biegler, O. Ghattas, M. Heinkenschlossand B. van Bloemen Waanders, editors,“PDE-Constrained Optimization: State-of-the-Art,” Lecture Notes in ComputationalScience and Engineering, 30, Springer-Verlag, 2003.

J, M. Blondin, “The Inherent Asymmetry ofCore-Collapse Supernovae,” ASP Conf. Ser.293: 3D Stellar Evolution, p. 290, 2003.

J. M. Blondin, A. Mezzacappa, C.DeMarino, “Stability of Standing AccretionShocks, with an Eye toward Core-CollapseSupernovae,” Astrophysical JournalLetters, 584, 971-980, 2003.

B. E. Blue et al., “Plasma-WakefieldAcceleration of an Intense Positron Beam,”Phys. Rev. Lett. 90, 214801, 2003.

P. A. Boyle, C. Jung and T. Wettig, “TheQCDOC supercomputer: Hardware, soft-ware, and performance,” Econf C0303241,THIT003, 2003.

M. Branstetter, and D. J. Erickson III,“Continental runoff dynamics in theCCSM2.0,” Journal of GeophysicalResearch 108(D17), 4550, 2003.

D. P. Brennan, R. J. La Haye, A. D.Turnbull, et al., “A mechanism for tearingonset near ideal stability boundaries,”Physics of Plasmas 10 1643, 2003.

J. A. Breslau, S. C. Jardin and W. Park,“Simulation studies of the role of reconnectionin the ‘current hole’ experiments in the jointEuropean torus,” Physics of Plasmas 101665, 2003.

M. Brewer, L. Diachin, T. Leurent, D.Melander, “The Mesquite Mesh QualityImprovement Toolkit,” Proc. of the 12thInternational Meshing Roundtable, SantaFe NM, p. 239-250, 2003.

P. N. Brown, B. Lee, T. A. Manteuffel, “A moment-parity multigrid preconditioner for the first-order least-squares formulation of the Boltzmann transport equations,” SIAM Journal of Scientific Computing, 25, 2, p. 513-533, 2003.

P. N. Brown, P. Vassilevski, and C. S.Woodward, “On Mesh-IndependentConvergence of an Inexact Newton-MultigridAlgorithm,” SIAM Journal of ScientificComputing, 25, 2, p. 570-590, 2003.

D. L. Bruhwiler, D. A. Dimitrov, J. R.Cary, E. Esarey, W. P. Leemans and R. E.Giacone, “Particle-in-cell simulations of tun-neling ionization effects in plasma-basedaccelerators,” Physics of Plasmas 10, p.2022, 2003.

D. L. Bruhwiler, D. A. Dimitrov, J. R.Cary, E. Esarey and W. P. Leemans,“Simulation of ionization effects for high-den-sity positron drivers in future plasma wake-field experiments,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

N. Bucciantini, J. M. Blondin, L. DelZanna, E. Amato, “Spherically symmetricrelativistic MHD simulations of pulsar windnebulae in supernova remnants,” Astronomy& Astrophysics, 405, 617-626, 2003.

T. Burch and D. Toussaint (The MILCCollaboration), “Hybrid configuration con-tent of heavy S-wave mesons,” PhysicalReview D, 68, 094504, 2003.

R. Butler, D. Narayan, A. Lusk and E.Lusk, “The process management component ofa scalable system software environment,”Proc. of the 5th IEEE InternationalConference on Cluster Computing(CLUSTER03), Kowloon, Hong Kong,Dec. 2003.

L. Bytautas, J. Ivanic, K. Ruedenberg,“Split-Localized Orbitals Can Yield StrongerConfiguration Interaction Convergence thanNatural Orbitals,” Journal of ChemicalPhysics, 119, 8217, 2003.

C. Cardall, A. Mezzacappa, “Conservativeformulations of general relativistic kinetic the-ory,” Physical Review D, 68, 023006,2003.

L. Carrington, A. Snavely, N. Wolter, andX. Gao, “A Performance PredictionFramework for Scientific Applications,”ICCS Workshop on PerformanceModeling and Analysis (PMA03),Melbourne, Australia, June 2003.

M. D. Carter, F. W. Baity Jr., G. C. Barber, R. H. Goulding, Y. Mori, D. O. Sparks, K. F. White, E. F. Jaeger, F. R. Chang-Diaz and J. P. Squire, “Comparing Experiments with Modeling to Optimize Light Ion Helicon Plasma Sources for VASIMR,” Proc. of Open Systems, Korea, Transactions of Fusion Science and Technology, 43(1), 2003.

J. R. Cary, R. E. Giacone, C. Nieter, D. L.Bruhwiler, E. Esarey, G. Fubiani and W.P. Leemans, “All-Optical Beamlet TrainGeneration,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

G. K. L. Chan and M. Head-Gordon,“Exact solution (in a triple zeta, doublepolarization basis) of the Schrödinger elec-tronic equation for water,” Journal ofChemical Physics, 118, 8851-8854, 2003.

B. Cheng, J. Glimm, H. Jin, D. H. Sharp,“Theoretical methods for the determination ofmixing,” Laser and Particle Beams, 21, p.429-436, 2003.

N. Chevaugeon, J. E. Flaherty, L.Krivodonova, J. F. Remacle, M. S.Shephard, “Adaptive and parallel discontin-uous Galerkin method for hyperbolic conser-vation law,” Proc. 7th US Nat. Congresson Comp. Mech., p. 448, 2003.

N. Chevaugeon, J. F. Remacle, J. E.Flaherty, D. Cler, M. S. Shephard,“Application of discontinuous Galerkinmethod. Large scale problem: Cannon blastmodeling,” Proc. 7th US Nat. Congress onComp. Mech., p. 470, 2003.

Y. Choi, S. Elliott, D. R. Blake, F. S.Rowland, et al., “PCA survey of whole airhydrocarbon data from the second airborneBIBLE expedition,” Journal of GeophysicalResearch, 108, 10, 2003.

T. Chartier, R. D. Falgout, V. E. Henson, J. Jones, T. Manteuffel, S. McCormick, J. Ruge and P. S. Vassilevski, “Spectral AMGe (ρAMGe),” SIAM Journal of Scientific Computing, 25, p. 1-26, 2003.

E. Chow, “An Unstructured MultigridMethod Based on Geometric Smoothness,”Numer. Linear Algebra Appl., 10, p. 401-421, 2003.

E. Chow, T. A. Manteuffel, C. Tong and B. K. Wallin, “Algebraic Elimination of Slide Surface Constraints in Implicit Structural Analysis,” International Journal for Numerical Methods in Engineering, 57, p. 1129-1144, 2003.

E. Chow and P. Vassilevski, “Multilevel Block Factorizations in Generalized Hierarchical Bases,” Numer. Linear Algebra Appl., 10, p. 105-127, 2003.

S. Chu, S. M. Elliott, and M. E. Maltrud, “Global eddy permitting simulations of surface ocean nitrogen, iron, sulfur cycling,” Chemosphere, 50, 223, 2003.

S. Chu, S. Elliott, M. Maltrud, F. Chai andF. Chavez, “Studies of the consequences ofocean carbon sequestration using Los Alamosocean models,” Proceedings of the DOE2nd Annual Conference on CarbonSequestration, Alexandria, Va., May 2003.

D. L. Cler, N. Chevaugeon, M. S.Shephard, J. E. Flaherty, J. F. Remacle,“CFD application to gun muzzle blast - Avalidation case study,” 41st AIAAAerospace Sciences Meeting and Exhibit,Jan. 2003.

A. Codd, T. Manteuffel and S.McCormick, “Multilevel first-order systemleast squares for nonlinear elliptic partial dif-ferential equations,” SIAM Journal onNumerical Analysis, 41, p. 2197-2209,2003.

A. Codd, T. Manteuffel, S, McCormickand J. Ruge, “Multilevel first-order systemleast squares for elliptic grid generation,”SIAM Journal on Numerical Analysis, 41,pp. 2210-2232, 2003.

J. P. Colgan, M. S. Pindzola, A. D.Whiteford, and N. R. Badnell,“Dielectronic recombination data for dynamicfinite-density plasmas: III. The Beryllium iso-electronic sequence,” Astronomy andAstrophysics, 412, 597, 2003.

J. P. Colgan and M. S. Pindzola, “Total and differential cross section calculations for the double photoionization of the helium 1s2s 1,3S states,” Physical Review A, 67, 012711, 2003.

J. P. Colgan, M. S. Pindzola, and F. J.Robicheaux, “Time-dependent studies of sin-gle and multiple photoionization of H2+,”Physical Review A, 68, 063413, 2003.

J. P. Colgan, M. S. Pindzola, and M. C.Witthoeft, “Time-dependent electron-impactscattering from He+ using variable latticespacings,” Physical Review A, 032713,2003.

J. P. Colgan, S. D. Loch, M. S. Pindzola,C. P. Ballance, and D. C. Griffin,“Electron-impact ionization of all ionizationstages of Beryllium,” Physical Review A,032712, 2003.

D. Datta, C. Picu, M. S. Shephard,“Multiscale simulation of defects using multi-grid atomistic continuum method,” Proc. 7thUS Nat. Congress on Comp. Mech., p.814, 2003.

C. Davies, et al. (The HPQCDCollaboration), “The Determination ofAlpha(s) from Lattice QCD with 2+1Flavors of Dynamical Quarks,” NuclearPhysics B (Proc. Suppl.), 119, 595, 2003.

J. Demmel and Y. Hida, “Accurate FloatingPoint Summation,” SIAM Journal ofScientific Computing, 25, 4, p.1214–1248, 2003.

S. Dey, L. Couchman, D. Datta, “Errorand convergence of p-refinement schemes forelastodynamics and coupled elasto-acoustics,”Proc. 7th US Nat. Congress on Comp.Mech., , p. 517, 2003.

M. di Pierro et al., “Charmonium with threeflavors of dynamical quarks,” NuclearPhysics B (Proc. Suppl.) 119, 586, 2003.

J. Dongarra, K. London, S. Moore, P.Mucci, D. Terpstra, H. You, and M. Zhou,“Experiences and Lessons Learned with aPortable Interface to Hardware PerformanceCounters,” PADTAD Workshop, IPDPS2003, Nice, France, April 2003.

J. Dongarra and V. Eijkhout, “Self-adaptingNumerical Software and Automatic Tuningof Heuristics,” ICCS 2003, Workshop onAdaptive Algorithms, Melbourne,Australia, June 2003.

J. Dongarra, A. Malony, S. Moore, P.Mucci, and S. Shende, “PerformanceInstrumentation and Measurement forTerascale Systems,” ICCS 2003, TerascaleSystems Workshop, Melbourne, Australia,June 2003.

P. Dreher, et al. (The SESAMCollaboration), “Continuum extrapolation ofmoments of nucleon quark distributions in fullQCD,” Nuclear Physics B (Proc. Suppl.),119, 392, 2003.

M. Dryja and O. Widlund, “A GeneralizedFETI-DP Method for a MortarDiscretization of Elliptic Problems,” Proc ofthe 14th International Symposium onDomain Decomposition Methods,Cocoyoc, Mexico, Jan. 2002, I. Herrera,D. E. Keyes, O. B. Widlund and R. Yateseds., 2003.

R. J. Dumont, C. K. Phillips, A. Pletzerand D.N. Smithe, “Effects of Non-Maxwellian Plasma Species on ICRF WavePropagation and Absorption in ToroidalMagnetic Confinement Devices,” invitedpaper in the 10th European FusionTheory Conference, Helsinki, Finland,Sept. 2003.

R. J. Dumont, C. K. Phillips, and D. N.Smithe, “Effects of Non-Maxwellian PlasmaSpecies on ICRF Propagation andAbsorption,” Radio Frequency Power inPlasmas, Proc. 15th Topical Conf., Moran,Wyoming, 2003, AIP, N. Y., 2003.

T. H. Dunigan, Jr., M. R. Fahey, J. B.White III, P. H. Worley, “Early Evaluationof the Cray X1,” Proc. of the ACM/IEEESC03 conference, Phoenix, Ariz., Nov.2003.

L. J. Dursi, M. Zingale, A. C. Calder, B.Fryxell, F. X. Timmes, N. Vladimirova, R.Rosner, A. Caceres, D. Q. Lamb, K.Olson, P. M. Ricker, K. Riley, A. Siegel,and J. W. Truran, “The Response of Modeland Astrophysical Thermonuclear Flames toCurvature and Stretch,” ApJ, 595:955-979,Oct. 2003.

S. Dutta, E. George, J. Glimm, X. L. Li, A.Marchese, Z. L. Xu, Y. M. Zhang, J. W.Grove, D. H. Sharp, “Numerical methodsfor the determination of mixing,” Laser andParticle Beams, 21, p. 437-442, 2003.

S. Dutta, J. Glimm, J. W. Grove, D. H.Sharp, Y. Zhang, “A fast algorithm for mov-ing interface problems,” InternationalConference on Computational Scienceand Its Applications (ICCSA 2003),Lecture Notes in Computer Science 2668,V. Kumar et al.., ed., Springer-Verlag,Berlin/Heidelberg, p. 782-790, 2003.

S. Elliott and S. Chu, “Comment on ‘oceanfertilization experiments initiate large scalephytoplankton blooms,’” GeophysicalResearch Letters, 30, 1274, 2003.

S. Elliott, and H. Hanson, “Syndication ofthe Earth system,” Environmental Scienceand Policy, 6, 457, 2003.

S. Elliott, D. R. Blake, F. S. Rowland, etal., “Whole air sampling as a window onPacific biogeochemistry,” Journal ofGeophysical Research, 108(D3), 8407,2003.

D. J. Erickson III, J. Hernandez, P.Ginoux, W. Gregg, C. McClain, and J.Christian, “Atmospheric Iron Delivery andsurface ocean biological activity in theSouthern Ocean and Patagonian region,”Geophysical Research Letter, 30(12),1609, 2003.

J. Fetter, G. C. McLaughlin, A. B.Balantekin, G. M. Fuller, “Active-sterile neu-trino conversion: consequences for the r-process and supernova neutrino detection,”Astroparticle Physics, 18, 433-448, 2003.

P. Fischer, F. Hecht, Y. Maday, “A Pararealin Time Semi-implicit Approximation of theNavier-Stokes Equations,” Proc. Of the 15thInt. Conf. on Domain DecompositionMethods, Berlin, 2003.

P. F. Fischer, J. W. Lottes, “HybridSchwarz-Multigrid Methods for the SpectralElement Method: Extensions to Navier-Stokes,” Proc. Of the 15th Int. Conf. onDomain Decomposition Methods, Berlin,2003.

N. Folwell, P. Knupp “Increasing Tau3PAbort-Time via Mesh Quality Improvement,”Proceedings of the 12th InternationalMeshing Roundtable, Santa Fe NM, p.379-392, 2003.

N. Folwell, P. Knupp and M. Brewer,“Increasing the TAU3P Abort-time via MeshQuality Improvement,” Proc. of the 12thInternational Meshing Roundtable, SandiaNational Laboratories, Santa Fe, NM,Sept. 2003.

N. Folwell, P. Knupp, K. Ko, “MeshQuality Improvement for Accelerator Design,”SIAM Conference on ComputationalScience and Engineering, San Diego,Calif., Feb. 2003.

C. L. Fryer and P. Meszaros, “Neutrino-driven Explosions in Gamma-Ray Burstsand Hypernovae,” ApJ Letters, 588:L25-L28, May 2003.

G. Fubiani, E. Esarey, W. Leemans, J.Qiang, R. D. Ryne and G. Dugan,“Numerical Studies of Laser-PlasmaProduced Electron Beam Transport,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

G. M. Fuller, A. Kusenko, I. Mocioiu, S.Pascoli, “Pulsar kicks from a dark-mattersterile neutrino,” Physical Review D, 68,103002, 2003.

E. Gabriel, G. Fagg and J. Dongarra,“Evaluating the Performance of MPI-2Dynamic Communicators and One-sidedCommunication,” EuroPVM/MPI 2003,Venice, Italy, Sept. 2003.

Z. Gan, Y. Alexeev, R. A. Kendall, and M.S. Gordon, “The Parallel Implementation ofa Full CI Program,” Journal of ChemicalPhysics, 119, 47, 2003.

X. Gao and A. Snavely, “ExploitingStability to Reduce Time-Space Cost forMemory Tracing,” ICCS Workshop onPerformance Modeling and Analysis(PMA03), Melbourne, Australia, June2003.

E. George, J. Glimm, J. W. Grove, X. L. Li, Y. J. Liu, Z. L. Xu, N. Zhao, “Simplification, conservation and adaptivity in the front tracking method,” in Hyperbolic Problems: Theory, Numerics, Applications, T. Hou, E. Tadmor, eds., Springer Verlag, Berlin and New York, p. 175-184, 2003.

F. X. Giraldo, J. B. Perot and P. F. Fischer,“A spectral element semi-Lagrangian (SESL)method for the spherical shallow water equa-tions,” J. of Comp. Phys., 190 2, p. 623-650, 2003.

J. Glimm, S. Hou, Y. Lee, D. Sharp, K. Ye,“Solution error models for uncertainty quan-tification,” Contemporary Mathematics,327, p. 115-140, 2003.

J. Glimm, H. Jin, M. Laforest, F.Tangerman, Y. Zhang, “A two pressurenumerical model of two fluid mixing,” SIAMJ. Multiscale Model. Simul., 1, p. 458-484,2003.

J. Glimm, Y. Lee, D. H. Sharp, K. Q. Ye,“Prediction using numerical simulations, ABayesian framework for uncertainty quantifi-cation and its statistical challenge,” inProceedings of the Fourth InternationalSymposium on Uncertainty Modelingand Analysis (ISUMA 2003), B. M.Ayyub, N. O. Attoh-Okine, eds., IEEEComputer Society, 2003.

J. Glimm, X. L. Li, Y. J. Liu, Z. L. Xu, N.Zhao, “Conservative front tracking withimproved accuracy,” SIAM J. NumericalAnalysis, 41, p. 1926-1947, 2003.

J. Glimm, X. L. Li, W. Oh, A. Marchese,M.-N. Kim, R. Samulyak, C. Tzanos, “Jetbreakup and spray formation in a dieselengine,” Proc. of the Second MITConference on Computational Fluid andSolid Mechanics, Cambridge, Mass., 2003.

P. Goldfeld, 2003, “Balancing Neumann-Neumann for (In)Compressible LinearElasticity and (Generalized) Stokes - ParallelImplementation,” Proc of the 14thInternational Symposium on DomainDecomposition Methods, Cocoyoc,Mexico, Jan. 2002, I. Herrera, D. E. Keyes,O. B. Widlund and R. Yates, eds., 2003.

P. Goldfeld, L. F. Pavarino and O. B.Widlund, “Balancing Neumann-NeumannPreconditioners for Mixed Approximations ofHeterogenous Problems in Linear Elasticity,”Numer. Math., 95, 283-324, 2003.

D. A. Goussis, M. Valorani, F. Creta, and H. N. Najm, “Inertial Manifolds with CSP,” Computational Fluid and Solid Mechanics, 1:1951-1955, K. J. Bathe, ed., Elsevier, 2003.

A. Gray, et al. (The HPQCDCollaboration), “The Upsilon Spectrum fromLattice QCD with 2+1 Flavors ofDynamical Quarks,” Nuclear Physics B(Proc. Suppl.), 119, 592, 2003.

J. F. Grcar, M. S. Day and J. B. Bell,“Conditional and opposed reaction path dia-grams for the analysis of fluid-chemistryinteractions,” 19th InternationalColloquium on the Dynamics ofExplosions and Reactive Systems, July-Aug. 2003.

D. C. Griffin, J. P. Colgan, S. D. Loch, M.S. Pindzola, and C. P. Ballance “Electron-impact excitation of beryllium and its ions,”Physical Review A, 68, 062705, 2003.

A. T. Hagler, et al. (The LHPCCollaboration), “Moments of nucleon gener-alized parton distributions in lattice QCD,”Physical Review D 68, 034505, 2003.

J. C. Hayes, M. L. Norman, “Beyond Flux-limited Diffusion: Parallel Algorithms forMultidimensional RadiationHydrodynamics,” Astrophysical JournalSupplement Series, 147, 197-220, 2003.

M. Head-Gordon, T. Van Voorhis, G.Beran and B. Dunietz, “Local CorrelationModels,” Computational Science: ICCS2003, Part IV, Lecture Notes inComputer Science, Vol. 2660, p. 96-102,2003.

A. Heger, C. L. Fryer, S. E. Woosley, N.Langer, and D. H. Hartmann, “HowMassive Single Stars End Their Life,” ApJ,591:288-300, July 2003.

E. D. Held, “Unified form for parallel ion viscous stress in magnetized plasmas,” Physics of Plasmas, 10, 4708, 2003.

E. D. Held, J. D. Callen, C. C. Hegna, andC. R. Sovinec, “Conductive electron heatflow along an inhomogeneous magnetic field,”Physics of Plasmas 10, 3933, 2003.

B. Hientzsch, 2003, “Fast Solvers andSchwarz Preconditioners for Spectral NedelecElements for a Model Problem in H(curl),”Proc of the 14th InternationalSymposium on Domain DecompositionMethods, Cocoyoc, Mexico, Jan. 2002, I.Herrera, D. E. Keyes, O. B. Widlund andR. Yates, eds., 2003.

W. R. Hix, O. E. B. Messer, A. Mezzacappa, M. Liebendoerfer, J. Sampaio, K. Langanke, D. J. Dean, G. Martinez-Pinedo, “Consequences of Nuclear Electron Capture in Core Collapse Supernovae,” Physical Review Letters, 91, 201102, 2003.

W. R. Hix, A. Mezzacappa, O. E. B.Messer, S. W. Bruenn, “Supernova scienceat spallation neutron sources,” Journal ofPhysics G, 29, 2523-2542, 2003.

I. Hofmann, G. Franchetti, O. Boine-Frankenheim, J. Qiang, and R. D. Ryne,“Space-Charge Resonances in Two- andThree-Dimensional Anisotropic Beams,”Physics Rev. ST Accel. Beams, 6, 024202,2003.

I. Hofmann, G. Franchetti, J. Qiang andR. D. Ryne, “Self-Consistency and CoherentEffects in Nonlinear Resonances,” Proc. ofthe 29th ICFA Advanced BeamDynamics Workshop on Beam HaloDynamics, Diagnostics, and Collimation(HALO’03), Montauk, NY, May 2003.

M. J. Hogan et al., “Ultrarelativistic-Positron-Beam Transport through Meter-ScalePlasmas,” Phys. Rev. Lett. 90, 205002,2003.

P. Hovland, K. Keahey, L. C. McInnes, B.Norris, L. Freitag and P. Raghavan, “AQuality-of-Service Architecture for High-Performance Numerical Components,”Proceedings of QoSCBSE2003, Toulouse,France, June 2003.

C. Hsu, “Effects of Block Size on the Block Lanczos Algorithm,” Senior Thesis, U.C. Berkeley Dept. of Mathematics, June 2003.

P. Hu, N. Chevaugeon, J. E. Flaherty, M.S. Shephard, “Modelizations of movingobject in a fluid flow using level set,” Proc.7th US Nat. Congress on Comp. Mech.,p. 288, 2003.

A. L. Hungerford, C. L. Fryer, and M. S.Warren, “Gamma-Ray Lines fromAsymmetric Supernovae,” ApJ, 594:390-403, Sept. 2003.

M. Ikegami, et al., “Beam Commissioning ofthe J-PARC Linac Medium Energy BeamTransport at KEK,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

M. Ikegami, T. Kato, Z. Igarashi, A. Ueno, Y. Kondo, J. Qiang, and R. Ryne, “Comparison of Particle Simulation with J-PARC Linac MEBT Beam Test Results,” Proc. of the 29th ICFA Advanced Beam Dynamics Workshop on Beam Halo Dynamics, Diagnostics, and Collimation (HALO’03), Montauk, NY, May 2003.

J. Ivanic and K. Ruedenberg, “A MCSCFMethod for Ground and Excited StatesBased on Full Optimizations of SuccessiveJacobi Rotations,” Journal ofComputational Chemistry, 24, 1250-1262,2003.

V. Ivanov, A. Krasnykh, G. Scheitrum, D.Sprehn, L. Ives, G. Miram, “3D ModelingActivity for Novel High Power ElectronGuns at SLAC,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

V. Ivanov, C. Adolphsen, N. Folwell, L.Ge, A. Guetz, Z. Li, C.-K. Ng, G.Schussman, J.W. Wang, M. Weiner, M.Wolf, K. Ko, “Simulating AcceleratorStructure Operation at High Power,” Proc.of IEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

V. Ivanov, C. Adolphsen, N. Folwell et al.,“Dark Current Simulation and Rise TimeEffects at High Power,” Workshop on HighGradient RF, Chicago, Ill., Oct. 2003.

E. F. Jaeger, L. A. Berry, J. R. Myra, D. B.Batchelor, E. D’Azevedo, P. T. Bonoli, C.K. Phillips, D. N. Smithe, D. A.D’Ippolito, M. D. Carter, R. J. Dumont, J.C. Wright, R. W. Harvey, “ShearedPoloidal Flow Driven by Mode Conversion inTokamak Plasmas,” Physical ReviewLetters, 90, 195001, 2003.

A. Jensen, V. Ivanov, A. Krasnykh, G.Scheitrum, “An Improved Version ofTOPAZ3D,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

N. A. C. Johnston, F. S. Rowland, S.Elliott, K. S. Lackner, et al., “Chemicaltransport modeling of CO2 sinks,” EnergyConversion and Management, 44, 681,2003.

J. E. Jones, P. S. Vassilevski, and C. S.Woodward, “Nonlinear Schwarz-FASMethods for Unstructured Finite ElementProblems,” Proc. of the Second M.I.T.Conference on Computational Fluid andSolid Mechanics, Cambridge, Mass., June2003.

R. M. Jones, Z. Li, R. H. Miller, T. O. Raubenheimer, “Optimized Wakefield Suppression & Emittance Dilution-Imposed Alignment Tolerances in X-Band Accelerating Structures for the JLC/NLC,” Proc. of IEEE Particle Accelerator Conference 2003, Portland, Ore., May 2003.

G. C. Jordan, S. S. Gupta, B.S. Meyer,“Nuclear reactions important in alpha-richfreeze-outs,” Physical Review C, 68,065801, 2003.

A. Kabel, “Maxwell-Lorentz Equations inGeneral Frenet-Serret Coordinates,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

A. Kabel, “Parallel Simulation Algorithmsfor the Three-dimensional Strong-StrongBeam-Beam Interaction,” Proc. of IEEEParticle Accelerator Conference 2003,Portland, Ore., May 2003.

A. Kabel, “Particle Tracking and BunchPopulation in TRAFIC4 2.0,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

A. Kabel, Y. Cai, B. Erdelyi, T. Sen, andM. Xiao, “A Parallel Code for LifetimeSimulations in Hadron Storage Rings in thePresence of Parasitic Weak-strong Beam-beamInteractions,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

K. Keahey, “Fine Grain AuthorizationPolicies in the Grid: Design andImplementation,” Proc. of the Middleware2003 Conference, 2003.

E. Kim, J. Bielak, and O. Ghattas, “Large-scale Northridge Earthquake simulationusing octree-based multiresolution meshmethod,” Proc. of 16th ASCE EngineeringMechanics Conference, Seattle, Wash.,July 2003.

L. Krivodonova, J. E. Flaherty, “Error esti-mation for discontinuous Galerkin solutions ofmultidimensional hyperbolic problems,”Advances in Computational Mathematics,19, 57, 2003.

K. Langanke, D. J. Dean, W. Nazarewicz,“Shell model Monte Carlo studies of nuclei inthe A~80 mass region,” Nuclear Physics A,728, 109-117, 2003.

K. Langanke, G. Martinez-Pinedo, J. M.Sampaio, D. J. Dean, W. R. Hix, O. E. B.Messer, A. Mezzacappa, M. Liebendorfer,H.-T. Janka, M. Rampp, “Electron CaptureRates on Nuclei and Implications for StellarCore Collapse,” Physical Review Letters,90, 241102, 2003.

B. Lee, S. McCormick, B. Philip, and D. Quinlan, “Asynchronous fast adaptive composite-grid methods: numerical results,” SIAM Journal of Scientific Computing, 25, p. 682-700, 2003.

S. Lefantzi and J. Ray, “A Component-basedScientific Toolkit for Reacting Flows,”Computational Fluid and SolidMechanics, 2:1401-1405, K. J. Bathe, Ed.,Elsevier, 2003.

S. Lefantzi, J. Ray and H. Najm, “Usingthe Common Component Architecture toDesign High Performance ScientificSimulation Codes,” Proc. of the Parallel andDistributed Processing Symposium, June2003.

J. Li, “A Dual-Primal FETI Method forSolving Stokes/Navier-Stokes Equations,”Proc of the 14th InternationalSymposium on Domain DecompositionMethods, Cocoyoc, Mexico, Jan. 2002, I.Herrera, D. E. Keyes, O. B. Widlund andR. Yates, eds., 2003.

L. Li, J. Glimm, X. L. Li, “All isomorphicdistinct cases for multi-component interfacesin a block,” J. Comp. Appl. Mathematics,152, p. 263-276, 2003.

X. S. Li and J. W. Demmel,“SuperLU_DIST: A Scalable Distributed-memory Sparse Direct Solver forUnsymmetric Linear Systems,” ACMTransactions on Math. Software, 29:110-140, 2003.

X. Li, M. S. Shephard, M. W. Beall,“General mesh adaptation using mesh modifi-cations,” Proc. 7th US Nat. Congress onComp. Mech., p. 17, 2003.

X. Li, M. S. Shephard, M. W. Beall,“Accounting for curved domains in meshadaptation,” International Journal forNumerical Methods in Engineering, 58,p. 246-276, 2003.

Z. Li, C. Adolphsen, D. L. Burke, V.Dolgashev, R. M. Jones, R. H. Miller, C.Nantista, C.-K. Ng, T. O. Raubenheimer,R. D. Ruth, P. T. Tenenbaum, J. W. Wang,N. Toge, T. Higo, “Optimization of the X-band Structure for the JLC/NLC,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

Z. Li, R. Johnson, S. R. Smith, T. Naito, J.Rifkin, “Cavity BPM with Dipole-mode-selective coupler,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

M. Liebendoerfer, A. Mezzacappa, O. E. B. Messer, G. Martinez-Pinedo, W. R. Hix, F.-K. Thielemann, “The neutrino signal in stellar core collapse and postbounce evolution,” Nuclear Physics A, 719, 144-152, 2003.

Y. Lin et al., “Ion Cyclotron Range ofFrequencies Mode Conversion ElectronHeating in Deuterium-Hydrogen Plasmas inthe Alcator C-Mod Tokamak,” Plasma Phys.Control. Fusion 45, (6) 1013, 2003.

Y. Lin, S. J. Wukitch, P. T. Bonoli, A.Mazurenko, E. Nelson-Melby, M.Porkolab, J. C. Wright, I. H. Hutchinson,E. S. Marmar, D. Mossessian, S. Wolfe, C.K. Phillips, G. Schilling, and P. Phillips,“Study of Ion Cyclotron Range of Frequenciesin the Alcator C-Mod Tokamak,” RadioFrequency Power in Plasmas, Proc. 15thTopical Conf., Moran, Wyoming, 2003.

S. D. Loch, J. P. Colgan, and M. S.Pindzola, M. Westermann, F.Scheuermann, K. Aichele, D.Hathiramani, and E. Salzborn, “Electron-impact ionization of Oq+ ions for q=1-4,”Physical Review A, 67, 042714, 2003.

Y. Luo, I. Malik, Z. Li, M S. Shephard,“Adaptive mesh refinement in acceleratormodeling with OMEGA3P,” Proc. 7th USNat. Congress on Comp. Mech., p. 50,2003.

W. P. Lysenko, J. Qiang, R. W. Garnett, T.P. Wangler, “Challenge of BenchmarkingSimulation Codes for the LANL Beam-HaloExperiment,” Proc. of IEEE ParticleAccelerator Conference 2003, Portland,Ore., May 2003.

K.-L. Ma, “Visualizing Time-Varying VolumeData,” IEEE Computing in Science andEngineering,” 5(2), p. 34, March/April2003.

M. Macri, S. De, M. S. Shephard,“Hierarchical tree-based discretization for themethod of spheres,” Computers andStructures, 81(8-11), p. 789-803, 2003.

G. G. Maisuradze, and D. L. Thompson,“Interpolating Moving Least-Squares Methodfor Fitting Potential Energy Surfaces:Illustrative Approaches and Applications,”Journal of Physical Chemistry A 107,7118-7124, 2003.

G. G. Maisuradze, D. L. Thompson, A. F.Wagner, and M. Minkoff, “InterpolatingMoving Least-Squares Method for FittingPotential Energy Surfaces: Detailed Analysisof One Dimensional Applications,” Journalof Chemical Physics 119, 10002-10014,2003.

T. Manteuffel, S. McCormick, and C. Pflaum, “Improved discretization error estimates for first-order system least squares (FOSLS),” Journal of Numerical Mathematics, 11, p. 163-177, 2003.

J. Marathe, F. Mueller, T. Mohan, B. R. deSupinski, S.A. McKee and A. Yoo, “MET-RIC: Tracking Down Inefficiencies in theMemory Hierarchy Via Binary Rewriting,”International Symposium on CodeGeneration and Optimization 2003, San Francisco, Calif., March 2003.

E. S. Marmar, B. Bai, R. L. Boivin, P. T.Bonoli, C. Boswell, R. Bravenec, B.Carreras, D. Ernst, C. Fiore, S.Gangadhara, K. Gentle, J. Goetz, R.Granetz, M. Greenwald, K. Hallatschek,J. Hastie, J. Hosea, A. Hubbard, J. W.Hughes, I. Hutchinson, Y. In, J. Irby, T.Jennings, D. Kopon, G. Kramer, B.LaBombard, W. D. Lee, Y. Lin, B.Lipschultz, J. Liptac, A. Lynn, K. Marr, R.Maqueda, E. Melby, D. Mikkelsen, D.Mossessian, R. Nazikian, W. M. Nevins,R. Parker, T. S. Pedersen, C. K. Phillips, P.Phillips, C. S. Pitcher, M. Porkolab, J.Ramos, M. Redi, J. Rice, B. N. Rogers, W.L. Rowan, M. Sampsell, G. Schilling, S.Scott, J. Snipes, P. Snyder, D. Stotler, G.Taylor, J. L. Terry, H. Wilson, J. R.Wilson, S. M. Wolfe, S. Wukitch, X. Q.Xu, B. Youngblood, H. Yuh, K.Zhurovich and S. Zweben, “Overview ofrecent Alcator C-Mod research,” Nucl.Fusion 43, 1610, 2003.

G. Martinez-Pinedo, D. J. Dean, K.Langanke, J. Sampaio, “Neutrino-nucleusinteractions in core-collapse supernova,”Nuclear Physics A, 718, 452-454, 2003.

W. Maslowski, and W. H. Lipscomb,“High-resolution simulations of Arctic sea iceduring 1979-93,” Polar Research, 22, 67,2003.

L. McInnes, B. Norris, S. Bhowmick, and P. Raghavan, “Adaptive Sparse Linear Solvers for Implicit CFD Using Newton-Krylov Algorithms,” Proc. of the Second MIT Conference on Computational Fluid and Solid Mechanics, Boston, Mass., June 2003.

D. J. McGillicuddy, L. A. Anderson, S. CDoney and M. E. Maltrud, “Eddy-drivensources and sinks of nutrients in the upperocean: results from a 0.1 degree resolutionmodel of the North Atlantic,” GlobalBiogeochemical Cycles, 17, 1035, 2003.

O. E. B. Messer, W. R. Hix, M. Liebendorfer, A. Mezzacappa, “Electron capture and shock formation in core-collapse supernovae,” Nuclear Physics A, 718, 449-451, 2003.

O. E. B. Messer, M. Liebendorfer, W. R.Hix, A. Mezzacappa, S. W. Bruenn, “TheImpact of Improved Weak Interaction Physicsin Core-Collapse Supernova Simulations,”From Twilight to Highlight: The Physicsof Supernovae, W. Hillebrandt, D.Leibundgut, eds., p. 70, Springer-Verlag,2003.

B. S. Meyer, “Neutrinos, supernovae, molyb-denum, and extinct 92Nb,” Nuclear PhysicsA, 719, 13-20, 2003.

A. Mezzacappa, “The Road to Prediction:Toward Gravitational Wave Signatures fromCore Collapse Supernovae,” AIP Conf. Proc.686: The Astrophysics of GravitationalWave Sources, p. 29-39, 2003.

A. Mezzacappa, J. M. Blondin, “A NewTwist on the Core Collapse SupernovaMechanism?” in From Twilight toHighlight: The Physics of Supernovae, W.Hillebrandt, D. Leibundgut, eds., p. 63,Springer-Verlag, 2003.

D. Mika, P. Myers, A. Majorell, L. Jiang,S. Kocak, J. Wan, M. S. Shephard, “Alloy718 extrusion modeling - unstable flow andmesh enrichment,” Proc. 7th US Nat.Congress on Comp. Mech., p. 342, 2003.

A. J. Miller, M. A. Alexander, G. J. Boer, F. Chai, K. Denman, D. J. Erickson III, R. Frouin, A. Gabric, E. Laws, M. Lewis, Z. Liu, R. Murtugudde, S. Nakamoto, D. J. Neilson, J. R. Norris, C. Ohlmann, I. Perry, N. Schneider, K. Shell and A. Timmermann, “Potential feedbacks between Pacific Ocean ecosystems and interdecadal climate variations,” Bulletin of the American Meteorological Society, 84, 617, 2003.

G. H. Miller, “An iterative Riemann solverfor systems of hyperbolic conservation laws,with application to hyperelastic solid mechan-ics,” Journal of Computational Physics193, p. 198-225, 2003.

D. M. Mitnik, D. C. Griffin, C. P.Ballance, and N. R. Badnell, “An R-matrixwith pseudo-states calculation of electron-impact excitation in C2+,” Journal ofPhysics B, 36, 717, 2003.

T. Mohan, B. R. de Supinski, S. A.McKee, F. Mueller, A. Yoo, M. Schulz,“Identifying and Exploiting SpatialRegularity in Data Memory References,”Proc. of ACM/IEEE SC03 conference,Phoenix, Ariz., Nov. 2003.

J. Mueller, S. Nagrath, X. Li, J. E. Jansen,M. S. Shephard, “Efficient computationalmethods for the investigation of vascular dis-ease,” Proc. 7th US Nat. Congress onComp. Mech., p. 195, 2003.

J. D. Myers, C. Pancerella, C. Lansing, K.L. Schuchardt and B. Didier, “Multi-scalescience: Supporting emerging practice withsemantically-derived provenance,” Proc. ofthe Workshop on Semantic WebTechnologies for Searching andRetrieving Scientific Data, Sanibel Island,Fla., Oct. 2003.

J. D. Myers, A. Chappell, M. Elder, A.Geist, J. Schwidder, “Re-integrating theresearch record,” IEEE Computing inScience and Engineering, 5(3):44-50, 2003.

D. Narayan, R. Bradshaw, R. Evard andA. Lusk, “Bcfg: a configuration managementtool for heterogeneous environments,” Proc. ofthe 5th IEEE International Conferenceon Cluster Computing (CLUSTER03),Kowloon, Hong Kong, Dec. 2003.

E. Nelson-Melby, M. Porkolab, P. T.Bonoli, Y. Lin, A. Mazurenko, and S. J.Wukitch, “Experimental Observations ofMode-Converted Ion Cyclotron Waves in aTokamak Plasma by Phase ContrastImaging,” Physical Review Letters, 90,155004, 2003.

R. M. Olson, M. W. Schmidt, M. S.Gordon and A. P. Rendell, “Enabling theEfficient Use of SMP Clusters: TheGAMESS/DDI Model,” Proc. of theACM/IEE SC03 conference, Phoenix,Ariz., Nov. 2003.

E. Ong, J. Larson and R. Jacob, “A RealApplication of the Model Coupling Toolkit,”Proc. of the 2002 InternationalConference on Computational Science,eds. P. Sloot, C. J. K. Tan, J. J. Dongarra,A. Hoekstra, Springer-Verlag, 2003.

C. M. Pancerella, J. Hewson, W. KoeglerD. Leahy, M. Lee, L. Rahn, C. Yang, J. D.Myers, D. Didier, R. McCoy, “Metadata inthe collaboratory for multi-scale chemical sci-ence,” Proc, of the 2003 Dublin CoreConference: Supporting Communities ofDiscourse and Practice-MetadataResearch and Applications, Seattle,Wash., Sept.-Oct. 2003.

W. Park, J. Breslau, J. Chen, et al,,“Nonlinear simulation studies of tokamaksand STs,” Nuclear Fusion 43, 483, 2003.

M. S. Pindzola, T. Minami, and D. R. Schultz, “Laser-modified charge-transfer processes in proton collisions with lithium atoms,” Physical Review A, 68, 013404, 2003.

M. S. Pindzola and F. Texier, “Collectivemodes of ground and dark-soliton states ofBose-Einstein condensates in anisotropictraps,” Journal of Physics B, 36, 1783,2003.

A. Pochinsky, et al., “Large scale commodityclusters for lattice QCD,” Nuclear Physics B(Proc. Suppl.) 119, 1044, 2003.

J. Pruet, “Neutrinos from the Propagation ofa Relativistic Jet through a Star,” ApJ,591:1104-1109, July 2003.

J. Pruet and G. M. Fuller, “Estimates ofStellar Weak Interaction Rates for Nuclei inthe Mass Range A=65-80,” AstrophysicalJournal Supplement, 149:189-203, Nov.2003.

J. Pruet, S. E. Woosley, and R. D.Hoffman, “Nucleosynthesis in Gamma-RayBurst Accretion Disks,” ApJ, 586:1254-1261, April 2003.

J. Qiang and R. W. Garnett, “SmoothApproximation with Acceleration in anAlternating Phase Focusing SuperconductingLinac,” Nuclear Instruments and Methodsin Physics Res. A 496, 33, 2003.

J. Qiang, M. Furman and R. Ryne,“Parallel Particle-In-Cell Simulation ofColliding Beams in High EnergyAccelerators,” Proc. of the ACM/IEEESC03 conference, Phoenix, Ariz., Nov.2003.

J. Qiang, M. Furman, R. D. Ryne, W.Fischer, T. Sen, M. Xiao, “Parallel Strong-Strong/Strong-Weak Simulations of Beam-Beam Interactions in Hadron Accelerators,”Proc. of the 29th ICFA Advanced BeamDynamics Workshop on Beam HaloDynamics, Diagnostics, and Collimation(HALO’03), Montauk, NY, May 2003.

J. Qiang, R. D. Ryne, and I. Hofmann,“Space-Charge Driven Emittance Growth ina 3D Mismatched Anisotropic Beam,” Proc.of IEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

J. Qiang, R. D. Ryne, T. Sen, M. Xiao,“Macroparticle Simulations of AntiprotonLifetime at 150 GeV in the Tevatron,” Proc.of IEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

D. Quinlan and M. Schordan, “A Source-to-Source Architecture for User-Defined Optimizations,” Joint Modular Languages Conference, Austria, Lecture Notes in Computer Science, 2789, p. 214-223, Springer-Verlag, Aug. 2003.

D. Quinlan, M. Schordan, B. Miller andM. Kowarschik, “Parallel Object-OrientedFramework Optimization,” Special Issue ofConcurrency: Practice and Experience,16, 2-3, p. 293-302, 2003.

D. Quinlan, M. Schordan, Q. Yi, and B. R.de Supinski, “Semantic-drivenParallelization of Loops Operating on User-Defined Containers,” 16th AnnualWorkshop on Languages and Compilersfor Parallel Computing, College Station,Texas, Oct. 2003.

D. Quinlan, M. Schordan, Qing Li, and B.R. de Supinski, “A C++ Infrastructure forAutomatic Introduction and Translation ofOpenMP Directives,” Proc. of theWorkshop on Open MP Applications andTools (WOMPAT’03), Lecture Notes inComputer Science, 2716, p. 13-25,Springer-Verlag, June 2003.

P. Raghavan, K. Teranishi, and E. G. Ng,“A Latency Tolerant Hybrid Sparse SolverUsing Incomplete Cholesky Factorization,”Numerical Linear Algebra withApplications 10, p. 541-560, 2003.

S. Ratkovic, S. Iyer Dutta, M. Prakash,“Differential neutrino rates and emissivitiesfrom the plasma process in astrophysical sys-tems,” Physical Review D, 67, 123002,2003.

D. A. Reed, C. L. Mendes and C. Lu,“Intelligent Application Tuning andAdaptation,” Chapter in The Grid:Blueprint for a New ComputingInfrastructure, 2nd edition, I. Foster & C.Kesselman eds., Morgan Kaufmann, Nov.2003.

D. A. Reed, C. Lu, C. L. Mendes, “BigSystems and Big Reliability Challenges,”Proc. of Parallel Computing 2003,Dresden, Germany, Sept. 2003.

J. F. Remacle, J. E. Flaherty, M. S. Shephard, “An adaptive discontinuous Galerkin technique with an orthogonal basis applied to compressible flow problems,” SIAM Review, 45(1), p. 53-72, 2003.

J. F. Remacle, M. S. Shephard, “An algo-rithm oriented mesh database,”International Journal for NumericalMethods in Engineering, 58, p. 349-374,2003.

J. Rikovska Stone, J. C. Miller, R. Koncewicz, P. D. Stevenson, M. R. Strayer, “Nuclear matter and neutron-star properties calculated with the Skyrme interaction,” Physical Review C, 68, 034324, 2003.

F. J. Robicheaux and J. D. Hanson,“Simulated expansion of an ultracold neutralplasma,” Physics of Plasmas, 10, 2217,2003.

T. M. Rogers, G. A. Glatzmaier, and S. E.Woosley, “Simulations of two-dimensionalturbulent convection in a density-stratifiedfluid,” Physical Review E, 67(2):026315,Feb. 2003.

G. Rosa, E. Lum, K.-L. Ma, and K. Ono,“An Interactive Volume Visualization Systemfor Transient Flow Analysis,” Proc. of theVolume Graphics 2003 Workshop, Tokyo,Japan, July 2003.

A. L. Rosenberg, J. E. Menard, J. R.Wilson, S. Medley, R. Andre, D. Darrow,R. Dumont, B. P. LeBlanc, C. K. Phillips,M. Redi, T. K. Mau, E. F. Jaeger, P. M.Ryan, D. W. Swain, R. W. Harvey, J.Egedal, and the NSTX Team,“Characterization of Fast Ion Absorption ofthe High Harmonic Fast Wave in theNational Spherical Torus Experiment,”Radio Frequency Power in Plasmas, Proc.15th Topical Conf., Moran, Wyoming,AIP, N. Y., 185, 2003.

G. Rumolo, A. Z. Ghalam, et al., “ElectronCloud Effects on Beam Evolution in aCircular Accelerator,” Phys. Rev. STAB, 6,081002, 2003.

J. M. Sampaio, K. Langanke, G. Martinez-Pinedo, E. Kolbe, D. J. Dean, “Electroncapture rates for core collapse supernovae,”Nuclear Physics A, 718, 440-442, 2003.

R. Samtaney, “Suppression of the Richtmyer-Meshkov instability in the presence of a mag-netic field,” Physics of Fluids 15, L53-56,2003.

S. Sankaran, J. M. Squyres, B. Barrett, A.Lumsdaine, J. Duell, P. Hargrove and E.Roman, “The LAM/MPI Checkpoint/RestartFramework: System-Initiated Checkpointing,”LACSI Symposium, Santa Fe, NM, Oct.2003.

D. P. Schissel, “National FusionCollaboratory - Advancing the Science ofFusion Research,” Proc. of the 2003 DOESciDAC PI Meeting, 2003.

M. Schnell, G. Gwinner, N. R. Badnell, M. E. Bannister, S. Bohm, J. P. Colgan, S. Kieslich, S. D. Loch, D. Mitnik, A. Muller, M. S. Pindzola, S. Schippers, D. Schwalm, W. Shi, A. Wolf and S. G. Zhou, “Observation of trielectronic recombination in Be-like Cl ions,” Physical Review Letters, 91, 043001, 2003.

M. Schordan and D. Quinlan, “A Source-to-Source Architecture for User-DefinedOptimizations,” Proc, of the Joint ModularLanguages Conference (JMLC’03),Lecture Notes in Computer Science, 2789,p. 214-223, Springer-Verlag, June 2003.

D. R. Schultz and P. S. Krstic, “Ionizationof Helium by antiprotons: Fully correlated,four-dimensional lattice approach,” PhysicalReview A, 022712, 2003.

G. L. Schussman, “Interactive AndPerceptually Enhanced Visualization OfLarge, Complex Line-Based Datasets,” Ph.D.thesis, University of California Davis. Dec.2003.

S. L. Scott, B. Luethke and J. Mugler,“Cluster & Grid Tools: C3 Power Tools withC2G,” Newsletter of the IEEE Task Forceon Cluster Computing, 4, 2, March/April2003.

S. L. Scott, C. Leangsuksun, L Chen andT. Lui, “Building High Availability andPerformance Clusters with HA-OSCARToolkits,” 2003 High Availability andPerformance Workshop, Santa Fe, NM,Oct. 2003.

S. L. Scott, C. Leangsuksun, L. Shen, T.Liu and H. Song, “Availability Predictionand Modeling of High Availability OSCARCluster,” Proc. of the 5th IEEEInternational Conference on ClusterComputing (CLUSTER03), Kowloon,Hong Kong, Dec. 2003.

S. L. Scott, B. Luethke and T. Naughton,“OSCAR Cluster Administration With C3,”Proc. of the 17th Annual InternationalSymposium on High PerformanceComputing Systems and Applications andthe 1st Annual OSCAR Symposium,Quebec, Canada, May 2003.

S. L. Scott, C. Leangsuksun, L. Shen, H.Song and I. Haddad, “The Modeling andDependability Analysis of High AvailabilityOSCAR Cluster System,” Proc. of the 17thAnnual International Symposium onHigh Performance Computing Systemsand Applications and the 1st AnnualOSCAR Symposium, Quebec, Canada,May 2003.

S. L. Scott, T. Naughton, B. Ligneris and N. Gorsuch, “Open Source Cluster Application Resources (OSCAR): Design, Implementation and Interest for the [Computer] Scientific Community,” Proc. of the 17th Annual International Symposium on High Performance Computing Systems and Applications and the 1st Annual OSCAR Symposium, Quebec, Canada, May 2003.

S. L. Scott, B. Luethke, “Scalable C3 PowerTools,” Cluster World Conference andExpo (CWCE 2003) and 4th LinuxCluster Institute Conference (LCI), SanJose, Calif., June 2003.

S. L. Scott, J. Mugler, T. Naughton, B.Barrett, A. Lumsdaine, J. Squyres, B.Ligneris, F. Giraldeau and C.Leangsuksun, “OSCAR Clusters,” Proc. ofthe Ottawa Linux Symposium (OLS’03),Ottawa, Canada, July 2003.

S. L. Scott, C. Leangsuksun, L. Shen, T.Liu and H. Song, “Dependability Predictionof High Availability OSCAR ClusterServer,” 2003 International Conference onParallel and Distributed ProcessingTechniques and Applications(PDPTA’03), Las Vegas, Nevada, June2003.

S. L. Scott, I. Haddad and C.Leangsuksun, “HA-OSCAR: The birth of aHighly Available OSCAR,” Linux Journal,2003, 115, Seattle, Wash., Nov. 2003.

S. L. Scott, T. Naughton, Y. C. Fang, P.Pfeiffer, B. Ligneris, and C. Leangsuksun,“The OSCAR Toolkit: Current and FutureDevelopments,” Dell Power Solution maga-zine, Nov. 2003.

S. L. Scott, T. Naughton, B. Barrett, J.Squyres, A. Lumsdaine, Y. C. Fang, and V.Mashayekhi “Looking Inside the OSCARCluster Toolkit,” Dell Power Solution mag-azine, Nov. 2003.

A. Snavely, L. Carrington, N. Wolter, C.Lee, J. Labarta, J. Gimenez, and P. Jones,“Performance Modeling of HPC Applications,Mini-symposium on Performance Modeling,”Proc. of Parallel Computing 2003,Dresden, Germany, Sept. 2003.

C. Sneden, J. J. Cowan, J. E. Lawler, I. I.Ivans, S. Burles, T. C. Beers, F. Primas, V.Hill, J. W. Truran, G. M. Fuller, B. Pfeiffer,K.-L. Kratz, “The Extremely Metal-poor,Neutron Capture-rich Star CS 22892-052: AComprehensive Abundance Analysis,”Astrophysical Journal Letters, 591, 936-953, 2003.

C. R. Sovinec, T. A. Gianakon, E. D. Held, S. E. Kruger, D. D. Schnack and the NIMROD Team, “NIMROD: a computational laboratory for studying nonlinear fusion magnetohydrodynamics,” Physics of Plasmas 10, 1727, 2003.

C. R. Sovinec, B. I. Cohen, G. A. Cone, E.B. Hooper, and H. S. McLean, “Numericalinvestigation of transients in the SSPX spher-omak,” Physical Review Letters 94,035003, 2003.

P. Spentzouris, J. Amundson, “FNALBooster: experiment and modeling,” Proc. ofIEEE Particle Accelerator Conference2003, Portland, Ore., May 2003.

N. Stojic, J. W. Davenport, M. Komelj, J.Glimm, “Surface magnetic moment in\alpha-uranium by density-functional theory,” Phys.Rev. B, 68, 094407, 2003.

P. H. Stoltz, M. A. Furman, J.-L. Vay, A. W. Molvik, R. H. Cohen, “Numerical Simulation of the Generation of Secondary Electrons in the High Current Experiment,” Phys. Rev. ST Accel. Beams, 6, 054701, 2003.

A. Stompel, K.-L. Ma, J. Bielak, O.Ghattas and E. Kim, “Visualizing Large-Scale Earthquake Simulations,” Proc. of theACM/IEEE SC03 conference, Phoenix,Ariz., Nov. 2003.

A. Stompel, K.-L. Ma, E. Lum, J. Ahrens, and J. Patchett, “SLIC: Scheduled Linear Image Compositing for Parallel Volume Rendering,” Proc. of the 2003 Symposium on Parallel Visualization and Graphics, Seattle, WA, Oct. 2003.

D. W. Swain, M. D. Carter, J. R. Wilson,P. M. Ryan, J. B. Wilgen, J. Hosea and A.Rosenberg, “Loading and AsymmetryMeasurements and Modeling for the NationalSpherical Torus Experiment Ion CyclotronRange of Frequencies System,” FusionScience and Technology, 43, 503, 2003.

K. Takahashi, K. Sato, A. Burrows, and T.Thompson, “Supernova neutrinos, neutrinooscillations, and the mass of the progenitorstar,” Phys. Rev. D, 68:113009, 2003.

T. J. Tautges, “MOAB-SD: IntegratedStructured and Unstructured MeshRepresentation,” 6th U.S. Congress onComputational Mechanics, Trends inUnstructured Mesh Generation,Albuquerque, NM, July 2003.

F.-K. Thielemann, D. Argast, F. Brachwitz,W. R. Hix, P. Hoflich, M. Liebendorfer, G.Martinez-Pinedo, A. Mezzacappa, I.Panov, T. Rauscher, “Nuclear cross sections,nuclear structure and stellar nucleosynthesis,”Nuclear Physics A, 718, 139-146, 2003.

F.-K. Thielemann, D. Argast, F. Brachwitz, W. R. Hix, P. Hoflich, M. Liebendorfer, G. Martinez-Pinedo, A. Mezzacappa, K. Nomoto, I. Panov, “Supernova Nucleosynthesis and Galactic Evolution,” From Twilight to Highlight: The Physics of Supernovae, W. Hillebrandt, D. Leibundgut, eds., p. 331, Springer-Verlag, 2003.

S. J. Thomas, J. M. Dennis, H. M. Tufo, P.F. Fischer, “A Schwarz Preconditioner forthe Cubed-Sphere,” SIAM J. Sci. Comp., 25,p. 442-453, 2003.

M. Thompson, “Fine-grained Authorizationfor Job and Resource Management usingAkenti and Globus,” Proceedings of the2003 Conference for Computing in HighEnergy and Nuclear Physics, 2003.

T. A. Thompson, A. Burrows, and P. A.Pinto, “Shock Breakout in Core-CollapseSupernovae and Its Neutrino Signature,”Astrophysical Journal, 592:434-456, July2003.

C. Tse, S. T. Teoh, and K.-L. Ma, “AnInteractive Isosurface Visualization Cluster,”Proc. of High Performance Computing2003 Symposium, Orlando, Fla., March2003.

M. Valorani, F. Creta and D. A. Goussis,“Local and Global Manifolds in StiffReaction-Diffusion Systems,” ComputationalFluid and Solid Mechanics 2003, 1548-1551, K. J. Bathe, ed., Elsevier, 2003.

M. Valorani, H. N. Najm and D. A.Goussis, “CSP Analysis of a TransientFlame-Vortex Interaction: Time Scales andManifolds,” Combustion and Flame,134(1-2):35-53, July 2003.

J. S. Vetter and F. Mueller, “CommunicationCharacteristics of Large-Scale ScientificApplications for Contemporary ClusterArchitectures,” Journal of Parallel andDistributed Computing, 63, p. 853-865,2003.

R. Vuduc, “Automatic Performance Tuning ofSparse Matrix Kernels,” Ph.D. thesis, U.C.Berkeley Dept. of Computer Science,Dec. 2003.

R. Vuduc, A. Gyulassy, J. W. Demmel andK. A. Yelick, “Memory HierarchyOptimizations and Performance Bounds forSparse ATA*x,” Proc. of ICCS 2003:Workshop on Parallel Linear Algebra,Melbourne, Australia, June 2003.

G. Wallace, et al., “DeskAlign: Automatically Aligning a Tiled Windows Desktop,” Proc. of the IEEE International Workshop on Projector-Camera Systems (PROCAMS), Oct. 2003.

G. Wallace, et al., “Color Gamut Matchingfor Tiled Display Walls,” Proc. of theImmersive Projection TechnologyWorkshop (IPT), May 2003.

J. Wan, S. Kocak, M. S. Shephard,“Automated adaptive forming simulations,”12th International Meshing Roundtable,Sandia National Laboratories, p. 323-334,2003.

J. Wan, S. Kocak, M. S. Shephard,“Controlled mesh enrichments for evolvinggeometries,” Proc. 7th US Nat. Congresson Comp. Mech., p. 42, 2003.

T. P. Wangler, K. R. Crandall, R. W.Garnett, D. Gorelov, P. Ostroumov, J.Qiang, R. Ryne, R. York, “Advanced Beam-Dynamics Simulation Tools for the RIADriver Linac, Part I: Low Energy BeamTransport and Radiofrequency Quadrupole,”Proc. RIA R&D Workshop, Washington,D.C., Aug. 2003.

A. Wetzels, A. Gurtler, F. Rosca-Pruna, S.Zamith, M. J. J. Vrakking, F. Robicheauxand W. J. van der Zande, “Two-dimensionalmomentum imaging of Rydberg states usinghalf-cycle pulse ionization and velocity mapimaging,” Physical Review A, 68, 041401,2003.

M. Wingate, et al. (The HPQCDCollaboration), “Bs Mesons Using StaggeredLight Quarks,” Nuclear Physics B (Proc.Suppl.) 119, 604, 2003.

M. C. Witthoeft, J. P. Colgan, and M. S.Pindzola, “Electron-impact excitation of Lito high principal quantum number,” PhysicalReview A, 68, 022711, 2003.

F. Wolf, B. Mohr, “Automatic performanceanalysis of hybrid MPI/OpenMP applica-tions,” Journal of Systems Architecture,Special Issue: Evolutions in ParallelDistributed and Network-BasedProcessing, A. Clematis, D. D’Agostino,eds., Elsevier, 49(10-11), p. 421-439, Nov.2003.

P. H. Worley, T. H. Dunigan, “EarlyEvaluation of the Cray X1 at Oak RidgeNational Laboratory,” Proc. of the 45thCray User Group Conference, Columbus,Ohio, May 2003.

J. C. Wright, P. T. Bonoli, M. Brambilla, F. Meo, E. D’Azevedo, D. B. Batchelor, E. F. Jaeger, L. A. Berry, C. K. Phillips and A. Pletzer, “Full Wave Simulations of Fast Wave Mode Conversion and Lower Hybrid Wave Propagation in Tokamaks,” Bull. Am. Phys. Soc. 48, 127, 2003.

K. Wu, W. Koegler, J. Chen and A.Shoshani, “Using bitmap index for interac-tive exploration of large datasets,” Proc. ofSSDBM 2003, p. 65, Cambridge, Mass,USA, 2003.

K. Wu, W.-M. Zhang, A. Sim, J. Gu andA. Shoshani, “Grid collector: An event cata-log with automated file management,” Proc.of IEEE Nuclear Science Symposium2003, 2003.

R. G. Zepp, T. V. Callaghan and D. J. Erickson III, “Interactive effects of ozone depletion and climate change on biogeochemical cycles,” Photochemical and Photobiological Sciences, 2, 51, 2003.

O. Zatsarinny, T. W. Gorczyca, K. T.Korista, N. R. Badnell and D. W. Savin,“Dielectronic recombination data for dynamicfinite-density plasmas: II. The Oxygen isoelec-tronic sequence,” Astronomy andAstrophysics, 412, 587, 2003.

M. Zeghal, U. El Shamy, M. S. Shephard,R. Dobry, J. Fish, T. Abdounm, “Micro-mechanical analysis of saturated granularsoils,” Journal of Multiscale ComputationalEngineering, 1(4), p. 441-460, 2003.

W. Zhang, S. E. Woosley, and A. I.MacFadyen, “Relativistic Jets inCollapsars,” Astrophysical Journal,586:356-371, March 2003.

2004

A. Adelmann, J. Amundson, P. Spentzouris, J. Qiang, R. D. Ryne, “A Test Suite of Space-charge Problems for Code Benchmarking,” Proc. of EPAC 2004, Lucerne, Switzerland, p. 1942, 2004.

P. K. Agarwal, R. A. Alexander, E. Apra, S. Balay, A. S. Bland, J. Colgan, E. F. D’Azevedo, J. J. Dongarra, T. H. Dunigan, Jr., M. R. Fahey, R. A. Fahey, A. Geist, M. Gordon, R. J. Harrison, D. Kaushik, M. Krishnakumar, P. Lusczek, A. Mezzacappa, J. A. Nichols, J. Nieplocha, L. Oliker, T. Packwood, M. S. Pindzola, T. C. Shulthess, J. S. Vetter, J. B. White III, T. L. Windus, P. H. Worley, T. Zacharia, “ORNL Cray X1,” Proc. of the 46th Cray User Group Conference, Knoxville, TN, May 2004.

V. Akcelik, J. Bielak, G. Biros, I.Epanomeritakis, O. Ghattas, L. Kallivokasand E. Kim, “An Online Framework forInversion-Based 3D Site Characterization,”Lecture Notes in Computer Science,3038, p. 717, Springer-Verlag, 2004.

M. A. Albao, D.-J. Liu, C. H. Choi, M. S.Gordon and J. W. Evans, “Atomistic model-ing of morphological evolution during simul-taneous etching and oxidation of Si(100),”Surface Science 555, 51, 2004.

A. Alexakis, A. C. Calder, A. Heger, E. F.Brown, L. J. Dursi, J. W. Truran, R. Rosner,D. Q. Lamb, F. X. Timmes, B. Fryxell, M.Zingale, P. M. Ricker and K. Olson, “OnHeavy Element Enrichment in ClassicalNovae,” ApJ, 602:931-937, Feb. 2004.

C. Alexandrou, et al., “N to Delta electromagnetic transition form factors from lattice QCD,” Physical Review D, 69, 114506, 2004.

C. Alexandrou, et al., “gamma N – Deltatransition form factors in quenched and N(F)= 2 QCD,” Nuclear Physics (Proc. Suppl.)129-130, 302, 2004.

J. Amundson, P. Spentzouris, “Emittancedilution and halo creation during the firstmilliseconds after injection at the FermilabBooster,” Proc. 33rd ICFA AdvancedBeam Dynamics Workshop on HighIntensity & High Brightness HadronBeams, Bensheim, Germany, Oct. 2004.

J. Amundson, P. Spentzouris, J. Qiang, R.D. Ryne, A. Adelmann, “A test suite ofspace-charge problems for code benchmark-ing,” 9th European Particle AcceleratorConference (EPAC 2004), Lucerne,Switzerland, July 2004.

A. Andrady, P. J. Aucamp, A. F. Bais, C. L. Ballare, L. O. Bjorn, J. F. Bornman, M. Caldwell, A. P. Cullen, D. J. Erickson III, F. R. de Gruijl, D.-P. Hader, M. Ilyas, G. Kulandaivelu, H. D. Kumar, J. Longstreth, R. L. McKenzie, M. Norval, H. H. Redhwi, R. C. Smith, K. R. Solomon, Y. Takizawa, X. Tang, A. H. Teramura, A. Torikai, J. C. van der Leun, S. Wilson, R. C. Worrest, and R. G. Zepp, “Environmental effects of ozone depletion and its interactions with climate change: Progress Report 2003,” Photochemical and Photobiological Sciences, 3, 1-5, 2004.

C. Aubin, et al. (The HPQCD, MILC and UKQCD Collaborations), “First determination of the strange and light quark masses from full lattice QCD,” Physical Review D, 70, 031504(R), 2004.

C. Aubin, et al. (The MILCCollaboration), “Light hadrons withimproved staggered quarks: approaching thecontinuum limit,” Physical Review D 70,094505, 2004.

C. Aubin, et al. (The MILCCollaboration), “Light pseudoscalar decayconstants, quark masses, and low energy con-stants from three-flavor lattice QCD,”Physical Review D, 70, 114501, 2004.

C. Aubin, et al. (The MILCCollaboration), “Pion and kaon physics withimproved staggered quarks,” NuclearPhysics B (Proc. Suppl.), 129-130, 227,2004.

T. Austin, T. Manteuffel, and S. McCormick, “A robust approach to minimizing H(div) dominated functionals in an H1-conforming finite element space,” Numerical Linear Algebra with Applications, 11, p. 115, 2004.

I. Baraffe, A. Heger, and S. E. Woosley,“Stability of Supernova Ia Progenitors againstRadial Oscillations,” ApJ, 615:378-382,Nov. 2004.

S. Basak, et al. (The LHPCCollaboration), “Baryon operators and spec-troscopy in lattice QCD,” Nuclear Physics(Proc. Suppl.) 129-130, 186, 2004.

S. Basak, et al. (The LHPCCollaboration), “Mass spectrum of N* andsource optimization,” Nuclear Physics(Proc. Suppl.) 129-130, 209, 2004.

D. B. Batchelor, E. F. Jaeger, L. A. Berry,E. D’Azevedo, M. D. Carter, R. W.Harvey, R. J. Dumont, J. R. Myra, A. A.D’Ippolito, D. N. Smithe, C. K. Phillips, P.T. Bonoli and J. C. Wright, “Integratedmodeling of wave-plasma interactions infusion systems,” Theory of Fusion Plasmas,Proc. of the Joint Varenna-LausanneInternational Workshop Varenna, Italy,Aug.-Sept. 2004.

M. W. Beall, J. Walsh, M. S. Shephard, “Acomparison of techniques for geometry accessrelated to mesh generation,” Engineeringwith Computers, 20(3), p. 210-221, 2004.

J. B. Bell, M. S. Day, C. A. Rendleman, S.E. Woosley, and M. A. Zingale, “DirectNumerical Simulations of Type IaSupernovae Flames II: The Rayleigh-TaylorInstability,” Astrophysical Journal, 608,883, 2004.

J. B. Bell, M. S. Day, C. A. Rendleman, S.E. Woosley, and M. A. Zingale, “DirectNumerical Simulations of Type IaSupernovae Flames I: The Landau-DarrieusInstability,” Astrophysical Journal, 606,1029, 2004.

J. B. Bell, M. S. Day, C. A. Rendleman, S.E. Woosley, and M. A. Zingale, “Adaptivelow Mach number simulations of nuclearflame microphysics,” Journal ofComputational Physics, 195:677-694,April 2004.

J. B. Bell, M. S. Day, J. F. Grcar, and M. J.Lijewski, “Stochastic Algorithms for theAnalysis of Numerical Flame Simulations,”Journal of Computational Physics, 202,262, 2004.

S. Belviso, L. Bopp, C. Moulin, J. C. Orr,T. R. Anderson, O. Aumont, S. Chu, S.Elliott, M. E. Maltrud and R. Simó,“Comparison of global climatological maps ofsea surface dimethylsulfide,” GlobalBiogeochemical Cycles, 18(3), GB3013,2004.

S. J. Benson, L. Curfman McInnes, J. J.Moré and J. Sarich, “Scalable Algorithms inOptimization: Computational Experiments,”AIAA Multidisciplinary Analysis andOptimization Conference, 2004.

G. J. O. Beran and M. Head-Gordon,“Extracting dominant pair correlations frommany-body wavefunctions,” Journal ofChemical Physics, 121, 78-88, 2004.

C. Bernard, et al. (The MILCCollaboration), “Quark Loop Effects inSemileptonic Form Factors for Heavy-LightMesons,” Nuclear Physics B (Proc. Suppl.),129-130, 364, 2004.

C. Bernard, et al. (The MILCCollaboration), “The Phase Diagram ofHigh Temperature QCD with Three Flavorsof Improved Staggered Quarks,” NuclearPhysics B (Proc. Suppl.), 129-130, 626,2004.

C. Bernard, et al. (The MILCCollaboration), “Excited States in StaggeredMeson Propagators,” Nuclear Physics B(Proc. Suppl.), 129-130, 230, 2004.

S. Bhowmick, P. Raghavan, L. McInnes,and B. Norris, “Faster PDE-basedSimulations Using Robust Composite LinearSolvers,” Future Generation ComputerSystems, The International Journal ofGrid Computing: Theory, Methods andApplications, 20, p. 373-387, 2004.

B. Billeter, C. DeTar, and J. Osborn,“Topological susceptibility in staggered fermi-on chiral perturbation theory,” PhysicalReview D, 70, 077502, 2004.

L. Bonaventura, and T. D. Ringler,“Analysis of discrete shallow-water models ongeodesic Delaunay grids with C-type stagger-ing,” Monthly Weather Review, 133,2351, 2004.

P. T. Bonoli et al., “Full-waveElectromagnetic Field Simulations in theLower-Hybrid Range of Frequencies,” Bull.American Phys. Soc. 49, 325, RI13, 2004.

M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick and J. Ruge, “Adaptive Smoothed Aggregation (αSA),” SIAM Journal of Scientific Computing, 25, 1896-1920, 2004.

S. W. Bruenn, “Issues with Core-CollapseSupernova Progenitor Models,” in OpenIssues in Core Collapse SupernovaTheory, Proc. of the Institute for NuclearTheory, Vol. 14, p. 99, University ofWashington, Seattle, Wash., June 2004.

N. Bucciantini, R. Bandiera, J. M. Blondin,E. Amato, L. Del Zanna, “The effects ofspin-down on the structure and evolution ofpulsar wind nebulae,” Astronomy &Astrophysics, 422, 609-619, 2004.

A. Burrows, S. Reddy, and T. A.Thompson, “Neutrino Opacities in NuclearMatter,” Nuclear Physics A, in press, 2004.

J. R. Burruss, et al., “Remote Computing onthe National Fusion Grid,” The 4th IAEATechnical Meeting on Control, DataAcquisition, and Remote Participation forFusion Research. Fusion Engineering andDesign 71, 251-255, 2004.

L. Bytautas and K. Ruedenberg,“Correlation Energy Extrapolation ThroughIntrinsic Scaling. I,” Journal of ChemicalPhysics, 121, 10905, 2004.

L. Bytautas and K. Ruedenberg,“Correlation Energy Extrapolation ThroughIntrinsic Scaling. II,” Journal of ChemicalPhysics, 121, 10919, 2004.

L. Bytautas and K. Ruedenberg,“Correlation Energy Extrapolation ThroughIntrinsic Scaling. III. CompactWavefunctions,” Journal of ChemicalPhysics, 121, 10852, 2004.

A. C. Calder, L. J. Dursi, B. Fryxell, T. Plewa, V. G. Weirs, T. Dupont, H. F. Robey, R. P. Drake, B. A. Remington, G. Dimonte, J. Hayes, J. M. Stone, P. M. Ricker, F. X. Timmes, M. Zingale and K. Olson, “Issues with Validating an Astrophysical Simulation Code,” Computing in Science and Engineering, 6(5):10, 2004.

C. Y. Cardall, A. O. Razoumov, E.Endeve, A. Mezzacappa, “The Long Term:Six-dimensional Core-collapse SupernovaModels,” in Open Issues in Core CollapseSupernova Theory, Proc. of the Institutefor Nuclear Theory, Vol. 14, p. 196,University of Washington, Seattle, Wash.,June 2004.

G. R. Carr, I. L. Carpenter, M. J. Cordery,J. B. Drake, M. W. Ham, F. M. Hoffmanand P. H. Worley, “Porting and Performanceof the Community Climate System Model(CCSM3) on the Cray X1,” Proc. of the11th Workshop on High PerformanceComputing in Meteorology, Reading,UK, Oct. 2004.

L. Carrington, N. Wolter, A. Snavely andC. B. Lee, “Applying an AutomatedFramework to Produce Accurate BlindPerformance Predictions of Full-Scale HPCApplications,” DOD User GroupConference 2004, Williamsburg, Va., June2004.

Y. Chen, et al., “Chiral Logs in QuenchedQCD,” Physical Review D, 70, 034502,2004.

Z. Chen, J. Dongarra, P. Luszczek, and K.Roche, “LAPACK for Clusters Project: AnExample of Self Adapting NumericalSoftware,” Hawaii InternationalConference on System Sciences (HICSS-37), Jan. 2004.

S. Chu, M. Maltrud, S. Elliott, J. L.Hernandez and D. J. Erickson III,“Ecodynamic and eddy resolving dimethylsulfide simulations in a global ocean biogeo-chemistry/circulation model,” EarthInteractions, 8 (11), 1, 2004.

S. Chu, S. Elliott, M. E. Maltrud, F.Chavez and F. Chai, “Ecodynamics of mul-tiple Southern Ocean iron enrichments simu-lated in a global, eddy permitting GCM,” inEnvironmental Sciences andEnvironmental Computing Vol. II, TheEnviroComp Institute, 2004.

S. Chu, S. Elliott, M. E. Maltrud, F.Chavez, and F. Chai, “Southern Ocean IronEnrichments Simulated in a global, eddy per-mitting GCM,” in Environmental Sciencesand Environmental Computing Vol. II, ed.P. Zannetti, The EnviroComp Institute,2004.

I.-H. Chung, and J. K. Hollingsworth,“Using Information from Prior Runs toImprove Automated Tuning Systems,” Proc.of ACM/IEEE SC2004 conference,Pittsburgh, Penn., Nov. 2004.

I.-H. Chung, and J. K. Hollingsworth,“Automated Cluster-Based Web ServicePerformance Tuning,” Proceedings of IEEEConference on High Performance Distri-buted Computing (HPDC), June 2004.

J. P. Colgan, M. S. Pindzola, and N. R.Badnell, “Dielectronic recombination data fordynamic finite-density plasmas: V. TheLithium isoelectronic sequence,” Astronomyand Astrophysics, 417, 1183, 2004.

J. P. Colgan and M. S. Pindzola, “Doublephotoionization of helium at high photonenergies,” Journal of Physics B, 37, 1153,2004.

D. Crawford, K.-L. Ma, M.-Y. Huang, S.Klasky and S. Ethier, “VisualizingGyrokinetic Simulations,” Proc. of the IEEEVisualization 2004 conference, Austin,Texas, Oct. 2004.

T. L. Dahlgren, P. T. Devanbu, “Adaptable Assertion Checking for Scientific Software Components,” Proc. of the First International Workshop on Software Engineering for High Performance Computing System Applications, Edinburgh, Scotland, UK, p. 64-69, May 2004.

D. Datta, R. C. Picu, M. S. Shephard,“Composite Grid Atomistic ContinuumMethod: An adaptive approach to bridgecontinuum with atomistic analysis,” Journalof Multiscale Computational Engineering,2(3), p. 401-420, 2004.

C. T. H. Davies, et al., “High-PrecisionLattice QCD Confronts Experiment,”Physical Review Letters, 92, 022001, 2004.

T. A. Davis, J. R. Gilbert, S. I. Larimore,and E. G. Ng, “A column approximate mini-mum degree ordering algorithm,” ACMTransactions of Mathematical Software30, p. 353-376, 2004.

T. A. Davis, J. R. Gilbert, S. I. Larimore,and E. G. Ng, “Algorithm 836: COLAMD,a column approximate minimum degreeordering algorithm,” ACM Transactions ofMathematical Software 30, p. 377-380,2004.

D. J. Dean, “Intersections of Nuclear Physicsand Astrophysics,” in The FutureAstronuclear Physics, A. Jorissen, S.Goriely, M. Rayet, L. Siers, J. Boffin, eds.,EAS Publications Series, p. 175-189, 2004.

H. de Sterck, T. Manteuffel, S.McCormick, and L. Olson, “Least-squaresfinite element methods and algebraic multi-grid solvers for linear hyperbolic PDEs,”SIAM Journal of Scientific Computing,26, 31-54, 2004.

S. Deng et al., “Plasma wakefield accelera-tion in self-ionized gas or plasmas,” PhysicalReview E 68, 047401, 2004.

Y. Deng, J. Glimm, J. Davenport, X. Cai,E. Santos, “Performance models on QCDOCfor molecular dynamics with coulomb poten-tials,” International Journal of HighPerformance Computing Applications,18, p. 183-198, 2004.

M. di Pierro et al., “Properties of charmoni-um in lattice QCD with 2+1 flavors ofimproved staggered sea quarks,” NuclearPhysics (Proc. Suppl.) 129-130, 340. 2004.

M. di Pierro, et al., “Ds spectrum and lep-tonic decays with Fermilab heavy quarks andimproved staggered light quarks,” NuclearPhysics (Proc. Suppl.) 129-130, 328, 2004.

J. B. Drake, P. H. Worley, I. Carpenter, M.Cordery, “Experience with the Full CCSM,”Proc. of the 46th Cray User GroupConference, Knoxville, TN, May 2004.

M. Dryja, A. Gantner, O. Widlund, and B.I. Wohlmuth, “Multilevel Additive SchwarzPreconditioner for Nonconforming MortarFinite Element Methods,” Journal onNumerical Analysis, Math., 12, 1, 23-38,2004.

T. H. Dunigan, Jr., J. S. Vetter, and P. H.Worley, “Performance Evaluation of theCray X1 Distributed Shared MemoryArchitecture,” Proc. of IEEE HotInterconnects, 2004.

S. Dutta, J. Glimm, J. W. Grove, D. H.Sharp, Y. Zhang, “Error comparison intracked and untracked spherical simulations,”Computers and Mathematics withApplications, 48, p. 1733-1747, 2004.

S. Dutta, J. Glimm, J. W. Grove, D. H.Sharp, Y. Zhang, “Spherical Richtmyer-Meshkov instability for axisymmetric flow,”Mathematics and Computers inSimulations, 65, p. 417-430, 2004.

S. I. Dutta, S. Ratkovic, M. Prakash,“Photoneutrino process in astrophysical sys-tems,” Physical Review D, 69, 023005,2004.

R. G. Edwards, G. Fleming, D. G. Richards, H. R. Fiebig and The LHPC Collaboration, “Lattice QCD and nucleon resonances,” Nuclear Physics A, 737, 167, 2004.

S. Elliott, S. Chu, M. Maltrud and A.McPherson, “Animation of global marinechlorophyll distributions from fine grid bio-geochemistry/transport modeling,” inEnvironmental Sciences andEnvironmental Computing Vol. II (Editedby P. Zannetti), The EnviroCompInstitute, 2004.

S. Ethier and Z. Lin, “Porting the 3D gyro-kinetic particle-in-cell code GTC to the NECSX-6 vector architecture: perspectives andchallenges,” Computer PhysicsCommunications, 164, 456, 2004.

R. D. Falgout and P. S. Vassilevski, “OnGeneralizing the AMG Framework,” SIAMJournal on Numerical Analysis, 42, p.1669-1693, 2004.

D. G. Fedorov, R. M. Olson, K. Kitaura,M. S. Gordon, and S. Koseki, “A new hier-archical parallelization scheme: Generalizeddistributed data interface (GDDI), and anapplication to the fragment molecular orbitalmethod (FMO),” Journal of ComputationalChemistry, 25, 872, 2004.

S. Flanagan, et al., “A General Purpose DataAnalysis Monitoring System with CaseStudies from the National Fusion Grid andthe DIII-D MDSplus between Pulse AnalysisSystem,” 4th IAEA Technical Meeting onControl, Data Acquisition, and RemoteParticipation for Fusion Research, FusionEngineering and Design 71, 263-267,2004.

E. Follana, et al. (The HPQCDCollaboration), “Improvement and tastesymmetry breaking for staggered quarks,”Nuclear Physics (Proc. Suppl.) 129-130,384, 2004.

N. Folwell, L. Ge, A. Guetz, V. Ivanov, etal., “Electromagnetic System Simulation –Prototyping Through Computation,” SciDACPI meeting, 2004.

C. L. Fryer, “Neutron Star Kicks fromAsymmetric Collapse,” ApJ Letters,601:L175-L178, Feb. 2004.

C. L. Fryer, D. E. Holz, and S. A. Hughes,“Gravitational Waves from Stellar Collapse:Correlations to Explosion Asymmetries,” ApJ,609:288-300, July 2004.

C. L. Fryer and M. S. Warren, “The Collapse of Rotating Massive Stars in Three Dimensions,” Astrophysical Journal, 601:391-404, Jan. 2004.

L. Ge, L.-Q. Lee, Z. Li, C.-K. Ng, K. Ko,Y. Luo and M. Shephard, “Adaptive MeshRefinement For High Accuracy Wall LossDetermination In Accelerating CavityDesign,” Proc. of the Eleventh BiennialIEEE Conference on ElectromagneticField Computation, Seoul, Korea, June2004.

L. Ge, L.-Q. Lee, Z. Li, C.-K. Ng, K. Ko, Y. Luo, M. S. Shephard, “Adaptive Mesh Refinement for High Accuracy Wall Loss Determination in Accelerating Cavity Design,” IEEE Conf. on Electromagnetic Field Computations, June 2004.

C. G. R. Geddes, C. Toth, J. van Tilborg,E. Esarey, C. B. Schroeder, D. Bruhwiler,C. Nieter, J. Cary and W. P. Leemans,“High-quality electron beams from a laserwakefield accelerator using plasma-channelguiding,” Nature 431, p.538-541, 2004.

J. Glimm, J. W. Grove, Y. Kang, T. Lee, X.Li, D. H. Sharp, Y. Yu, K. Ye, M. Zhao,“Statistical riemann problems and a composi-tion law for errors in numerical solutions ofshock physics problems,” SIAM Journal onScientific Computing, 26, p. 666-697,2004.

J. Glimm, S. Hou, Y. Lee, D. Sharp, K. Ye,“Sources of uncertainty and error in the simu-lation of flow in porous media,” Comp. andApplied Mathematics, 23, p. 109-120,2004.

J. Glimm, H. Jin, Y. Zhang, “Front trackingfor multiphase fluid mixing,”Computational Methods in MultiphaseFlow II, A. A. Mammoli, C. A. Brebbia,eds., p. 13-22, WIT Press, Southampton,UK, 2004.

J. N. Goswami, K. K. Marhas, M.Chaussidon, M. Gounelle, B. Meyer,“Origin of Short-lived Radionuclides in theEarly Solar System,” Workshop onChondrites and the Protoplanetary Disk,p. 9120, 2004.

J. F. Grcar, P. Glarborg, J. B. Bell, M. S.Day, A. Loren, and A. D. Jenson, “Effectsof Mixing on Ammonia Oxidation inCombustion Environments at IntermediateTemperatures,” Proceedings of theCombustion Institute, 30:1193, 2004.

L. Grigori and X. S. Li, “PerformanceAnalysis of Parallel Right-looking Sparse LUFactorization on Two-dimensional Grids ofProcessors,” Proceedings of the PARA'04Workshop on State-of-the art in ScientificComputing, Springer Lecture Notes inComputer Science, 2004.

Y. Guo, A. Kawano, D. L. Thompson, A.F. Wagner and M. Minkoff, “InterpolatingMoving Least-Squares Methods for FittingPotential Energy Surfaces: Applications toClassical Dynamics Calculations,” Journal ofChemical Physics, 121, 5091-5097, 2004.

A. Gurtler, F. Robicheaux, W. J. van derZande and L. D. Noordam, “Asymmetry inthe strong field ionization of Rydberg atomsby few-cycle pulses,” Physical ReviewLetters, 033002, 2004.

A. Gurtler, F. Robicheaux, M. J. J.Vrakking, W. J. van der Zande and L. D.Noordam, “Carrier phase dependence in theionization of Rydberg atoms by short radio-frequency pulses: a model system for highorder harmonic generation,” PhysicalReview Letters, 92, 063901, 2004.

S. W. Hadley, D. J. Erickson III, J. L.Hernandez and S. Thompson, “FutureU.S. energy use for 2000-2025 as computedwith temperatures from a global climate pre-diction model and energy demand model,”Proc. of the 24th Annual North AmericanConference of the USAEE/IAEE, UnitedStates Association for Energy Economics,Cleveland, OH, 2004.

P. T. Haertel, D. A. Randall and T. G.Jensen, “Toward a Lagrangian ocean model:Simulating upwelling in a large lake usingslippery sacks,” Monthly Weather Review,132, 66, 2004.

A. T. Hagler, et al. (The LHPCCollaboration), “Transverse structure ofnucleon parton distributions from latticeQCD,” Physical Review Letters, 93,112001, 2004.

W. C. Haxton, “Supernova Neutrino-Nucleus Physics and the r-Process,” The r-Process: The Astrophysical Origin of theHeavy Elements and Related RareIsotope Accelerator Physics, Proc. of theFirst Argonne/MSU/JINA/INT R/AWorkshop, p. 73, World Scientific, 2004.

Y. He and C. Ding, “MPI and OpenMPParadigms on Cluster of SMP Architectures:the Vacancy Tracking Algorithm for Multi-Dimensional Array Transpose,” Journal ofParallel and Distributed ComputingPractice, 2, 5, 2004.

E. D. Held, J. D. Callen, C. C. Hegna, C.R. Sovinec, T. A. Gianakon, S. E. Kruger,“Nonlocal closures for plasma fluid simula-tions,” Physics of Plasmas 11, 2419, 2004.

J. J. Heys, C. G. DeGroff, T. Manteuffel, S. McCormick, and H. Tufo, “Modeling 3-d compliant blood flow with FOSLS,” Biomed Sci Instrum. 40, p. 193-9, 2004.

J. Heys, T. Manteuffel, S. McCormick,and L. Olson, “Algebraic multigrid (AMG)for higher-order finite elements,” Journal ofComputational Physics 195, p. 560-575,2004.

J. Heys, T. Manteuffel, S. McCormick,and J. Ruge, “First-order system least squares(FOSLS) for coupled fluid-elasticity prob-lems,” Journal of Computational Physics195, p. 560-575, 2004.

D. Higdon, M. Kennedy, J. C. Cavendish,J. A. Cafeo, R. D. Ryne, “Combining FieldData and Computer Simulations forCalibration and Prediction,” SIAM Journalon Scientific Computing, 26, 2, p. 448-466, March 2004.

W. R. Hix, O. E. B. Messer, A.Mezzacappa, “The interplay betweennuclear electron capture and fluid dynamicsin core collapse supernovae,” in Cosmicexplosions in Three Dimensions, P.Hoflich, P. Kumar, J. C. Wheeler, eds., p.307, Cambridge University Press, 2004.

F. M. Hoffman, M. Vertenstein, H.Kitabata, J. B. White III, P. H. Worley, J.B. Drake, M. Cordery, “Adventures inVectorizing the Community Land Model,”Proc. of the 46th Cray User GroupConference, Knoxville, TN, May 2004.

I. Hofmann, G. Franchetti, J. Qiang, R. D.Ryne, “Dynamical Effects of the MontagueResonance,” Proc. EPAC 2004, Lucerne,Switzerland, p. 1960, 2004.

I. Hofmann, G. Franchetti, M.Giovannozzi, M. Martini, E. Metral, J.Qiang, R. D. Ryne, “Simulation aspects ofthe code benchmarking based on the CERN-PS Montague-resonance experiment,” Proc.33rd ICFA Advanced Beam DynamicsWorkshop on High Intensity & HighBrightness Hadron Beams, Bensheim,Germany, Oct. 2004.

D. J. Holmgren, P. Mackenzie, A. Singhand J. Somine, “Lattice QCD clusters atFermilab,” Computing in High-EnergyPhysics (CHEP ’04), Interlaken,Switzerland, Sept. 2004.

U. Hwang, J. M. Laming, C. Badenes, F. Berendse, J. Blondin, D. Cio, T. DeLaney, D. Dewey, R. Fesen, K. A. Flanagan, C. L. Fryer, P. Ghavamian, J. P. Hughes, J. A. Morse, P. P. Plucinsky, R. Petre, M. Pohl, L. Rudnick, R. Sankrit, P. O. Slane, R. K. Smith, J. Vink, and J. S. Warren, “A Million Second Chandra View of Cassiopeia A,” ApJ Letters, 615, L117-L120, Nov. 2004.

A. V. Ilin, F. R. Chang-Diaz, J. P. Squire,A. G. Tarditi, M. D. Carter, “Improved sim-ulation of the ICRF waves in the VASIMRplasma,” Proceedings of the 18thInternational conference on the numericalSimulation of Plasmas, Computer PhysicsCommunications 164, 251, 2004.

E.-J. Im, I. Bustany, C. Ashcraft, J.Demmel and K. Yelick, “Toward automaticperformance tuning of matrix triple productsbased on matrix structure,” PARA'04Workshop on State-of-the-art in ScientificComputing, Copenhagen, Denmark, June2004.

E.-J. Im, K. A. Yelick and R. Vuduc,“SPARSITY: An Optimization Frameworkfor Sparse Matrix Kernels,” InternationalJournal of High Performance ComputingApplications, 18 (1), p. 135-158, Feb.2004.

J. P. Iorio, P. B. Duffy, B. Govindasamy, S.L. Thompson, M. Khairoutdinov and D.A. Randall, “Effects of model resolution andsubgrid-scale physics on the simulation ofdaily precipitation in the continental UnitedStates,” Climate Dynamics, 23, 243, 2004.

L. Ives, T. Bui, W. Vogler, A. Bauer, M.Shephard, “Operation and performance of a3D finite element charged particle code withadaptive meshing Source,” 5th IEEE Int.Vacuum Electronics Conf. (IVEC 2004),p. 316-317, 2004.

C. Jablonowski, “Adaptive grids in weatherand climate modeling,” University ofMichigan thesis, 2004.

C. Jablonowski, M. Herzog, J. E. Penner,R. Oehmke, Q. F. Stout, B. van Leer,“Adaptive grids for weather and climate mod-els,” ECMWF Seminar Proceedings onRecent Developments in NumericalMethods for Atmospheric and OceanModelling, Reading, UK, Sept. 2004.

S. C. Jardin, “A triangular finite elementwith first-derivative continuity applied tofusion MHD applications,” Journal ofComputational Physics 200, 133, 2004.

G. C. Jordan, B. S. Meyer, “Nucleosynthesisin Fast Expansions of High-Entropy, Proton-rich Matter,” Astrophysical JournalLetters, 617, L131-L134, 2004.

G. C. Jordan, B. S. Meyer, “Nuclear Reaction Rates and the Production of Light p-Process Isotopes in Fast Expansions of Proton-Rich Matter,” The r-Process: The Astrophysical Origin of the Heavy Elements and Related Rare Isotope Accelerator Physics, Proc. of the First Argonne/MSU/JINA/INT R/A Workshop, p. 157, World Scientific, 2004.

R. Katz, M. Knepley and B. Smith,“Developing a Geodynamics Simulation withPETSc,” in Numerical Solution of PartialDifferential Equations on ParallelComputers, A. M. Bruaset, P. Bjorstad,and A. Tveito, eds., p. 413, Springer-Verlag, 2004.

S. R. Kawa, D. J. Erickson III, S. Pawson,and Z. Zhu, “Global CO2 transport simula-tions using meteorological data from theNASA data assimilation system,” Journal ofGeophysical Research, 109. D18, 2004.

K. Keahey et al., “Agreement BasedInteractions for Experimental Science,” Proc.of Europar 2004 conference, 2004.

K. Keahey, et al., “Grids for ExperimentalScience: The Virtual Control Room,” Proc. ofthe CLADE 2004 Workshop, 2004.

K. Keahey, et al., “Fine-GrainedAuthorization for Job Execution in the Grid:Design and Implementation,” Concurrencyand Computation: Practice andExperience 16 (5), 477-488, 2004.

J. P. Kenny, S. J. Benson, Y. Alexeev, J.Sarich, C. L. Janssen, L. CurfmanMcInnes, M. Krishnan, J. Nieplocha, E.Jurrus, C. Fahlstrom and T. L. Windus,“Component-Based Integration of Chemistryand Optimization Software,” Journal ofComputational Chemistry, 25, 1717-1725,2004.

M. E. Kowalski, C.-K. Ng, L.-Q. Lee, Z.Li and K. Ko, “Time-Domain Calculation ofGuided Electromagnetic Wave Propagationon Unstructured Grids,” U. S. NationalCommittee of the International Union ofRadio Science (URSI) National RadioScience Meeting, Boulder, Colo., Jan.2004.

B. C. Lee, R. Vuduc, J. Demmel and K.Yelick, “Performance Models for Evaluationand Automatic Tuning of Symmetric SparseMatrix-Vector Multiply,” InternationalConference on Parallel Processing,Montreal, Quebec, Canada, Aug. 2004.

L.-Q. Lee, L. Ge, M. Kowalski, Z. Li, C.-K. Ng, G. Schussman, M. Wolf and K. Ko, “Solving Large Sparse Linear Systems In End-To-End Accelerator Structure Simulations,” Proc. of the 18th International Parallel and Distributed Processing Symposium, Santa Fe, NM, April 2004.

T. Lee, Y. Yu, M. Zhao, J. Glimm, X. Li,K. Ye, “Error analysis of composite shockinteraction problems,” Proc. of PMC04Conference, 2004.

W. W. Lee, “Theoretical and numericalproperties of a gyrokinetic plasma: implica-tions on transport scale simulation,”Computer Physics Communications, 164,244, 2004.

W. W. Lee, “Steady State GlobalSimulations of Microturbulence,” AmericanPhysical Society Division of PlasmaPhysics, Savannah, Ga., Nov. 2004.

J. Li and O. Widlund, “FETI-DP, BDDC, and Block Cholesky Methods,” International Journal for Numerical Methods in Engineering, 2004.

N. Li, Y. Saad and E. Chow, “CroutVersions of ILU for General Sparse Matrices,”SIAM Journal of Scientific Computing,25, p. 716-728, 2004.

X. Li, J. F. Remacle, N. Chevaugeon, M.S. Shephard, “Anisotropic Mesh GradationControl,” 13th International MeshingRoundtable, Sandia NationalLaboratories, p. 401-412, 2004.

Z. Li, N. T. Folwell, L. Ge, A. Guetz, V.Ivanov, M. Kowalski, L.-Q. Lee, C.-K. Ng,G. Schussman, R. Uplenchwar, M. Wolfand K. Ko, “X-Band Linear Collider R&DIn Accelerating Structures Through AdvancedComputing,” Proc. of the 9th EuropeanParticle Accelerator Conference, Lucerne,Switzerland, July 2004.

Z. Li, N. Folwell, L. Ge, A. Guetz, V.Ivanov, M. Kowalski, L.-Q. Lee, C.-K. Ng,G. Schussman, R. Uplenchwar, M. Wolf,L. Xiao and K. Ko, “High PerformanceComputing In Accelerator Structure DesignAnd Analysis,” Proc. of the 8thInternational Computational AcceleratorPhysics Conference, St. Petersburg,Russia, June 2004.

M. Liebendoerfer, O. E. B. Messer, A.Mezzacappa, S. W. Bruenn, C. Y. Cardall,F.-K. Thielemann, “A Finite DifferenceRepresentation of Neutrino RadiationHydrodynamics in Spherically SymmetricGeneral Relativistic Spacetime,”Astrophysical Journal Supplement Series,150, 263-316, 2004.

Y. Lin, S. Wukitch, P. Bonoli, E. Nelson-Melby, M. Porkolab, J. C. Wright, N. Basse, A. E. Hubbard, J. Irby, L. Lin, E. S. Marmar, A. Mazurenko, D. Mossessian, A. Parisot, J. Rice, S. Wolfe, C. K. Phillips, G. Schilling, J. R. Wilson, P. Phillips and A. Lynn, “Investigation of Ion Cyclotron Range of Frequencies Mode Conversion at the Ion-Ion Hybrid Layer in Alcator C-Mod,” Physics of Plasmas 11 (5), 2466-2472, 2004.

Z. Lin and T. S. Hahm, “Turbulence spread-ing and transport scaling in global gyrokineticparticle simulations,” Physics of Plasmas,11, 1099, 2004.

Z. Lin et al., “Electron Thermal Transport inTokamaks: ETG or TEM Turbulence?”International Atomic Energy AgencyConference, IAEA-CN/TH/8-4,Vilamoura, Portugal, 2004.

W. Lipscomb and T. D. Ringler, “An incre-mental remapping transport scheme on aspherical geodesic grid,” Monthly WeatherReview, 133, 2335, 2004.

E. Livne, A. Burrows, R. Walder, I.Lichtenstadt and T. A. Thompson, “Two-dimensional, Time-dependent, Multigroup,Multiangle Radiation Hydrodynamics TestSimulation in the Core-Collapse SupernovaContext,” Astrophysical Journal, 609:277-287, July 2004.

S. D. Loch, C. J. Fontes, J. P. Colgan, M.S. Pindzola, C. P. Ballance, D. C. Griffin,M. G. O’Mullane and H. P. Summers, “Acollisional-radiative study of lithium plas-mas,” Physical Review E, 69, 066405,2004.

C. Lu and D. Reed, “Assessing FaultSensitivity in MPI Applications,” Proc. ofthe ACM/IEEE SC2004 conference,Pittsburgh, Penn., Nov. 2004.

X. J. Luo, M. S. Shephard, R. M. O’Bara, R. Nastasia, M. W. Beall, “Automatic p-version mesh generation for curved domains,” Book of Abstracts: ICOSAHOM 2004, Brown University, June 2004.

X. J. Luo, M. S. Shephard, R. M. O’Bara, R. Nastasia, M. W. Beall, “Automatic p-version mesh generation for curved domains,” Engineering with Computers, 20(3), p. 273-285, 2004.

K.-L. Ma and E. Lum, “Techniques for Visualizing Time-Varying Volume Data,” in Visualization Handbook, C. Hansen and C. Johnson, eds., Academic Press, Oct. 2004.

K.-L. Ma, G. Schussman and B. Wilson, “Visualization for Computational Accelerator Physics,” in Visualization Handbook, C. Hansen and C. Johnson, eds., Academic Press, Oct. 2004.

H. MacMillan, T. Manteuffel and S.McCormick, “First-order system leastsquares and electrical impedance tomogra-phy,” SIAM Journal on NumericalAnalysis, 42, p. 461-483, 2004.

G. Mahinthakumar, M. Sayeed, J. Blondin,P. Worley, W. Hix and A. Mezzacappa,“Performance Evaluation and Modeling of aParallel Astrophysics Application,” Proc. ofthe High Performance ComputingSymposium 2004, The Society forModeling and Simulation International,Arlington, Va., 2004.

G. G. Maisuradze, A. Kawano, D. L.Thompson, A. F. Wagner, and M.Minkoff, “Interpolating Moving Least-Squares Methods for Fitting Potential EnergySurfaces: Analysis of an Application to a Six-Dimensional System,” Journal of ChemicalPhysics, 121, 10329-10338, 2004.

G. Martinez-Pinedo, K. Langanke, J. M.Sampaio, D. J. Dean, W. R. Hix, O. E. B.Messer, A. Mezzacappa, M. Liebendorfer,H.-T. Janka, M. Rampp, “Weak InteractionProcesses in Core-Collapse Supernovae,” IAUColloq. 192: Cosmic Explosions, On the10th Anniversary of SN1993J, p. 321,2005.

N. Mathur, et al., “A Study of Pentaquarkson the Lattice with Overlap Fermions,”Physical Review D, 70, 074508, 2004.

B. Lee, S. McCormick, B. Philip, and D. Quinlan, “Asynchronous fast adaptive composite-grid methods for elliptic problems: theoretical foundations,” SIAM Journal on Numerical Analysis, 42, p. 130-152, 2004.

P. McCorquodale, P. Colella, D. P. Grote,J.-L. Vay, “A node-centered local refinementalgorithm for Poisson's equation in complexgeometries,” Journal of ComputationalPhysics, 201, p. 34-60, 2004.

E. Métral, C. Carli, M. Giovannozzi, M.Martini, R. R. Steerenberg, G. Franchetti,I. Hofmann, J. Qiang, R. D. Ryne,“Intensity Dependent Emittance TransferStudies at the CERN Proton Synchrotron,”Proc. EPAC 2004, Lucerne, Switzerland,p. 1894, 2004.

E. Métral, M. Giovannozzi, M. Martini, R. R. Steerenberg, G. Franchetti, I. Hofmann, J. Qiang, R. D. Ryne, “Space-Charge Experiments at the CERN Proton Synchrotron,” Proc. 33rd ICFA Advanced Beam Dynamics Workshop on High Intensity & High Brightness Hadron Beams, Bensheim, Germany, Oct. 2004.

B. S. Meyer, “Synthesis of Short-livedRadioactivities in a Massive Star,” ASPConf. Ser. 341: Chondrites and theProtoplanetary Disk, p. 515, 2005.

B. S. Meyer, “Online Tools for Astronomyand Cosmochemistry,” 36th Annual Lunarand Planetary Science Conference, p.1457, 2005.

B. S. Meyer, “Nucleosynthesis of Short-livedRadioactivities in Massive Stars,”Workshop on Chondrites and theProtoplanetary Disk, p. 9089, 2004.

P. A. Milne, A. L. Hungerford, C. L.Fryer, T. M. Evans, T. J. Urbatsch, S. E.Boggs, J. Isern, E. Bravo, A. Hirschmann,S. Kumagai, P. A. Pinto, and L.-S. The,“Unified One-Dimensional Simulations ofGamma-Ray Line Emission from Type IaSupernovae,” Astrophysical Journal,613:1101-1119, Oct. 2004.

P. Mucci, J. Dongarra, R. Kufrin, S.Moore, F. Song, F. Wolf, “Automating theLarge-Scale Collection and Analysis ofPerformance,” Proc. of the 5th LCIInternational Conference on LinuxClusters: The HPC Revolution, Austin,Texas, May 2004.

P. Muggli, B. E. Blue, C. E. Clayton, S. Deng, F.-J. Decker, M. J. Hogan, C. Huang, R. Iverson, C. Joshi, T. C. Katsouleas, S. Lee, W. Lu, K. A. Marsh, W. B. Mori, C. L. O'Connell, P. Raimondi, R. Siemann, and D. Walz, “Meter-Scale Plasma-Wakefield Accelerator Driven by a Matched Electron Beam,” Physical Review Letters 93, 014802, 2004.

J. W. Murphy, A. Burrows and A. Heger,“Pulsational Analysis of the Cores of MassiveStars and Its Relevance to Pulsar Kicks,”Astrophysical Journal, 615:460-474, Nov.2004.

J. D. Myers, T. C. Allison, S. Bittner, D.Didier, M. Frenklach, W. H. Green, Jr., Y.-L. Ho, J. Hewson, W. Koegler, C.Lansing, D. Leahy, M. Lee, R. McCoy,M. Minkoff, S. Nijsure, G. von Laszewski,D. Montoya, L. Oluwole, C. Pancerella,R. Pinzon, W. Pitz, L. A. Rahn, B. Ruscic,K. Schuchardt, E. Stephan, A. Wagner, T.Windus, C. Yang, “A collaborative informat-ics infrastructure for multi-scale science,”Challenges of Large Applications inDistributed Environments (CLADE)Workshop, Honolulu, Hawaii, June 2004.

J. R. Myra, L. A. Berry, D. A. D’Ippolito,E. F. Jaeger, “Nonlinear Fluxes and Forcesfrom Radio-Frequency Waves withApplication to Driven Flows in Tokamaks,”Physics of Plasmas 11, 1786, 2004.

D. Narayan, R. Bradshaw, A. Lusk and E.Lusk, “MPI cluster system software,”Recent Advances in Parallel VirtualMachine and Message Passing Interface,Lecture Notes in Computer Science,3241, p. 277-286. Springer-Verlag, 2004.

D. Narayan, R. Bradshaw, A. Lusk, E.Lusk and R. Butler, “Component-based clus-ter systems software architecture: A casestudy,” Proc. of the 6th IEEEInternational Conference on ClusterComputing (CLUSTER04), p. 319-326,IEEE Computer Society, 2004.

J. W. Negele, et al. (The LHPCCollaboration), “Insight into nucleon struc-ture from generalized parton distributions,”Nuclear Physics (Proc. Suppl.) 129-130,910, 2004.

J. W. Negele, et al., “Insight into nucleonstructure from lattice calculations of momentsof parton and generalized parton distribu-tions,” Nuclear Physics (Proc. Suppl.) 128,170, 2004.

J. W. Negele, “What we are learning aboutthe quark structure of hadrons from latticeQCD,” 3rd International Symposium onthe Gerasimov-Drell-Hearn Sum Ruleand its Extensions (GDH 2004), Norfolk,Virginia, June 2004.

C.-K. Ng, N. Folwell, A. Guetz, V. Ivanov,L.-Q. Lee, Z. Li, G. Schussman and K.Ko, “Simulating Dark Current In NlcStructures,” Proc. of the 8th InternationalComputational Accelerator PhysicsConference, St. Petersburg, Russia, June2004.

M. A. Nobes and H. D. Trottier, “Progressin automated perturbation theory for heavyquark physics,” Nuclear Physics (Proc.Suppl.) 129-130, 355, 2004.

B. Norris, J. Ray, R. C. Armstrong, L. C.McInnes, D. E. Bernholdt, W. R. Elwasif,A. D. Malony and S. Shende,“Computational quality of service for scientif-ic components,” Proc. of the InternationalSymposium on Component-BasedSoftware Engineering (CBSE7),Edinburgh, Scotland, May 2004.

K. Ohmi, M. Tawada, Y. Cai, S. Kamada, K. Oide, J. Qiang, “Beam-beam limit in e+e- circular colliders,” Physical Review Letters 92, 214801, 2004.

K. Ohmi, M. Tawada, Y. Cai, S. Kamada,K. Oide, and J. Qiang, “Luminosity limitdue to the beam-beam interactions with orwithout crossing angle,” Physical ReviewST Accel. Beams, vol 7, 104401, 2004.

C. D. Ott, A. Burrows, E. Livne and R.Walder, “Gravitational Waves fromAxisymmetric, Rotating Stellar CoreCollapse,” Astrophysical Journal, 600:834-864, Jan. 2004.

S. Ovtchinnikov and X.-C. Cai, “One-levelNewton-Krylov-Schwarz algorithm forunsteady nonlinear radiation diffusion prob-lem,” Numerical Linear Alg. Applic., 11,867-881, 2004.

P. P. Pebay, H. N. Najm, and J. Pousin, “ANon-Split Projection Strategy for Low MachNumber Reacting Flow,” InternationalJournal of Multiscale Computational Eng.,2(3):445-460, Aug. 2004.

P. Petreczky and K. Petrov, “Free energy ofa static quark anti-quark pair and the renor-malized Polyakov loop in three flavor QCD,”Physical Review D, 70, 054503, 2004.

M. Prakash, et al., “Neutrino Processes inSupernovae and Neutron Stars in TheirInfancy and Old Age,” in Compact Stars:The Quest for New States of DenseMatter, Proc. of the KIAS-APTCPInternational Symposium on Astro-Hadron Physics, Seoul, Korea, p. 476,2004.

J. Pruet, R. Surman and G. C.McLaughlin, “On the Contribution ofGamma-Ray Bursts to the Galactic Inventoryof Some Intermediate-Mass Nuclei,”Astrophysical Journal Letters, 602:L101-L104, Feb. 2004.

J. Pruet, T. A. Thompson and R. D.Hoffman, “Nucleosynthesis in Outflows fromthe Inner Regions of Collapsars,”Astrophysical Journal, 606:1006-1018,May 2004.

J. Qiang, M. Furman and R. Ryne, “AParallel Particle-In-Cell Model for Beam-Beam Interactions in High Energy RingColliders,” Journal of ComputationalPhysics vol. 198, 278, 2004.

J. Qiang and R. Gluckstern, “Three-Dimensional Poisson Solver for a ChargedBeam with Large Aspect Ratio in aConducting Pipe,” Computer PhysicsCommunications, 160, 120, 2004.

J. Qiang, R. D. Ryne, I. Hofmann, “Space-charge driven emittance growth in a 3D mis-matched anisotropic beam,” Physical ReviewLetters, 92, 174801, 2004.

J. Qiang, “Halo formation due to beam-beaminteractions of beams optically mismatched atinjection,” Physical Review ST Accel.Beams, vol 7, 031001, 2004.

J. Qiang, “Strong-strong beam-beam simula-tion on parallel computer,” ICFA BeamDynamics Newsletter, No. 34, p.19-26,2004.

D. Quinlan, M. Schordan, Q. Yi and A.Saebjornsen, “Classification and Utilizationof Abstractions for Optimization,” FirstInternational Symposium on LeveragingApplications of Formal Methods, Paphos,Cyprus, Oct. 2004.

D. A. Randall, “The new CSU dynamical core,” Seminar on Recent Developments in Numerical Methods of Atmosphere and Ocean Modeling, European Centre for Medium-Range Weather Forecasts, UK, Sept. 2004.

D. A. Randall and W. H. Schubert,“Dreams of a stratocumulus sleeper,”Atmospheric Turbulence and MesoscaleMeteorology, E. Fedorovich, R. Rotunoand G. Stevens, eds., p. 71, CambridgeUniversity Press, 2004.

K. Robinson, L. J. Dursi, P. M. Ricker, R.Rosner, A. C. Calder, M. Zingale, J. W.Truran, T. Linde, A. Caceres, B. Fryxell,K. Olson, K. Riley, A. Siegel and N.Vladimirova, “Morphology of RisingHydrodynamic and MagnetohydrodynamicBubbles from Numerical Simulations,”Astrophysical Journal, 601:621-643, Feb.2004.

A. L. Rosenberg, J. E. Menard, J. R.Wilson, S. Medley, R. Andre, D. Darrow,R. Dumont, B. P. LeBlanc, C. K. Phillips,M. Redi, T. K. Mau, E. F. Jaeger, P. M.Ryan, D. W. Swain, R. W. Harvey, J.Egedal and the NSTX Team, “Fast IonAbsorption of the High Harmonic Fast Wavein the National Spherical Torus Experiment,”Physics of Plasmas 11, 2441, 2004.

B. Ruscic, “Active Thermochemical Table,” in2005 Yearbook of Science andTechnology, an annual update to theMcGraw-Hill Encyclopedia of Scienceand Technology, McGraw-Hill, NewYork, 2004.

B. Ruscic, R. E. Pinzon, M. L. Morton, G. von Laszewski, S. Bittner, S. G. Nijsure, K. A. Amin, M. Minkoff, A. F. Wagner, “Introduction to active thermochemical tables: Several ‘key’ enthalpies of formation revisited,” Journal of Physical Chemistry A, 108:9979-9997, 2004.

R. S. Samtaney, S. Jardin, P. Colella, D. F. Martin, “3D adaptive mesh refinement simulations of pellet injection in tokamaks,” Computer Physics Communications 164, 1-3, 220, 2004.

A. R. Sanderson, et al., “Display of VectorFields Using a Reaction-Diffusion Model,”Proc. of the 2004 IEEE VisualizationConference, 115-122, 2004.

D. P. Schissel, “The National FusionCollaboratory Project,” Proc. of the 2004DOE SciDAC PI Meeting, 2004.

D. P. Schissel, “The National FusionCollaboratory Project: Applying GridTechnology for Magnetic Fusion Research,”Proc. of the Workshop on Case Studieson Grid Applications at GGF10, 2004.

D. P. Schissel, et al., “Building the U.S.National Fusion Grid: Results from theNational Fusion Collaboratory Project,” The4th IAEA Technical Meeting on Control,Data Acquisition, and RemoteParticipation for Fusion Research. FusionEngineering and Design 71, 245-250,2004.

W. Schroers, et al. (The LHPCCollaboration), “Moments of nucleon spin-dependent generalized parton distributions,”Nuclear Physics (Proc. Suppl.) 129-130,907, 2004.

W. H. Schubert, “Generalization of Ertel'spotential vorticity to a moist atmosphere,”Meteorologische Zeitschrift, 13, (HansErtel Special Issue), 465, 2004.

G. Schussman and K.-L. Ma, “AnisotropicVolume Rendering for Extremely Dense, ThinLine Data,” Proc. of the IEEEVisualization 2004 Conference, Austin,Texas, Oct. 2004.

S. L. Scott, C. B. Leangsuksun, T. Roa, J.Xu, and R. Libby, “Experiments andAnalysis towards effective cluster manage-ment system,” 2004 High Availability andPerformance Workshop (HAPCW), SantaFe, NM, Oct. 2004.

S. L. Scott, C. Leangsuksun, T. Lui and R. Libby, “HA-OSCAR Release 1.0 Beta: Unleashing HA-Beowulf,” Proc. of 2nd Annual OSCAR Symposium, Winnipeg, Manitoba, Canada, May 2004.

S. L. Scott, I. Haddad and C.Leangsuksun, “HA-OSCAR: TowardsHighly Available Linux Clusters,” LinuxWorld Magazine, 2, 3, March 2004.

M. S. Shephard, M. W. Beall, R. M.O’Bara, B. E. Webster, “Toward simulation-based design,” Finite Elements in Analysisand Design, 40, p. 1575-1598, 2004.

M. S. Shephard, M. W. Beall, B. E.Webster, “Supporting Simulations to GuideEngineering Design,” AIAA-2004-4448,AIAA, Richmond, Va., 2004.

J. Shigemitsu, et al. (The HPQCD collab-oration), “Heavy-light Meson SemileptonicDecays with Staggered Light Quarks,”Nuclear Physics (Proc. Suppl.) 129-130,331, 2004.

L. O. Silva et al., “Proton shock accelerationin laser-plasma interactions,” PhysicalReview Letters, 92, 015002, 2004.

F. Song, F. Wolf, N. Bhatia, J. Dongarra, S. Moore, “An Algebra for Cross-Experiment Performance Analysis,” 2004 International Conference on Parallel Processing (ICPP-04), Montreal, Quebec, Canada, Aug. 2004.

C. R. Sovinec, A. H. Glasser, D. C.Barnes, T. A. Gianakon, R. A. Nebel, S. E.Kruger, D. D. Schnack, S. J. Plimpton, A.Tarditi, M. S. Chu and the NIMRODTeam, “Nonlinear magnetohydrodynamicswith high-order finite elements,” Journal OfComputational Physics 195, 355, 2004.

H. R. Strauss, A. Pletzer, W. Park, S.Jardin, J. Breslau, and L. Sugiyama,“MHD simulations with resistive wall andmagnetic separatrix,” Computer PhysicsCommunications 164, 1-3 40-45, 2004.

H. R. Strauss, L. Sugiyama, G. Y. Fu, W.Park, J. Breslau, “Simulation of two fluidand energetic particle effects in stellarators,”Nuclear Fusion 44, 9, 1008-1014, 2004.

M. M. Strout and P. D. Hovland, “Metricsand Models for Reordering Transformations,”Proc. of the Second ACM SIGPLANWorkshop on Memory SystemPerformance (MSP), June 2004.

F. D. Swesty, E. S. Myra, “Advances inMulti-Dimensional Simulation of Core-Collapse Supernovae,” in Open Issues inCore Collapse Supernova Theory, Proc.of the Institute for Nuclear Theory, Vol.14, p. 176, University of Washington,Seattle, Wash., June 2004.

T. J. Tautges, “MOAB-SD: IntegratedStructured and Unstructured MeshRepresentation,” Engineering WithComputers, 20:286-293, 2004.

S. Thompson, B. Govindasamy, A. Mirin,K. Caldeira, C. Delire, J. Milovich, M.Wickett and D. J. Erickson III,“Quantifying the effects of CO2-fertilizedvegetation on future global climate and car-bon dynamics,” Geophysical ResearchLetters, 31, L23211, 2004.

M. M. Tikir, J. K. Hollingsworth, “UsingHardware Counters to AutomaticallyImprove Memory Performance,” Proc. ofACM/IEEE SC2004 conference,Pittsburgh, Penn., Nov. 2004.

J. L. Tilson, C. Naleway, M. Seth, R.Shepard, A. F. Wagner, and W. C. Ermler,“An Ab Initio Study of AmCl+: f-fSpectroscopy and Chemical Binding,” Journalof Chemical Physics, 121, 5661-5675,2004.

A. Toselli and O. Widlund, “DomainDecomposition Methods - Algorithms andTheory,” Springer Series in ComputationalMathematics, 34, Oct. 2004.

H. D. Trottier, “Higher-order perturbationtheory for highly-improved actions,” NuclearPhysics (Proc. Suppl.) 129-130, 142, 2004.

F. S. Tsung et al., “Near-GeV-energy laser-wakefield acceleration of self-injected electrons in a centimeter-scale channel,” Physical Review Letters 93, 185002, 2004.

S. Vadlamani, S. E. Parker, Y. Chen, “TheParticle-Continuum method: an algorithmicunification of particle-in-cell and continuummethods,” Computer PhysicsCommunications, 164, 209, 2004.

J.-L. Vay, P. Colella, J. W. Kwan, P.McCorquodale, D. B. Serafini, A.Friedman, D. P. Grote, G. Westenskow, J.C. Adam, A. Héron and I. Haber,“Application of adaptive mesh refinement toparticle-in-cell simulations of plasmas andbeams,” Physics of Plasmas 11, 2928, 2004.

J.-L. Vay, P. Colella, A. Friedman, D. P. Grote, P. McCorquodale, D. B. Serafini, “Implementations of mesh refinement schemes for particle-in-cell plasma simulations,” Computer Physics Communications 164, p. 297-305, 2004.

M. Vertenstein, K. Oleson, S. Levis, and F.M. Hoffman, “Community Land ModelVersion 3.0 (CLM3.0) User's Guide,”National Center for AtmosphericResearch, June 2004.

R. Vuduc, J. W. Demmel and J. A. Bilmes,“Statistical Models for Empirical Search-Based Performance Tuning,” InternationalJournal of High Performance ComputingApplications, 18 (1), p. 65-94, February2004.

G. Wallace, et al., “Automatic Alignment ofTiled Displays for a Desktop Environment,”Journal of Software. 15(12): 1776-1786,Dec. 2004.

W. X. Wang, et al., “Global delta-f particlesimulation of neoclassical transport andambipolar electric field in general geometry,”Computer Physics Communications, 164,178, 2004.

T. P. Wangler, C. K. Allen, K. C. D. Chan, P. L. Colestock, K. R. Crandall, R. W. Garnett, J. D. Gilpatrick, W. Lysenko, J. Qiang, J. D. Schneider, M. E. Schulze, R. L. Sheffield, H. V. Smith, “Beam-Halo in Mismatched Proton Beams,” Nuclear Instruments and Methods in Physics Research A 519, 425, 2004.

J. Wei, A. Fedotov, W. Fischer, N.Malitsky, G. Parzen, J. Qiang, “Intra-beamScattering Theory and RHIC Experiments,”Proc. 33rd ICFA Advanced BeamDynamics Workshop on High Intensity &High Brightness Hadron Beams,Bensheim, Germany, Oct. 2004.

M. Wingate, et al. (The HPQCD collabo-ration), “Progress Calculating DecayConstants with NRQCD and AsqTadactions,” Nuclear Physics (Proc. Suppl.)129-130, 325, 2004.

M. Wingate, et al. (The HPQCDCollaboration), “The Bs and Ds decay con-stants in 3 flavor lattice QCD,” PhysicalReview Letters 92, 162001, 2004.

F. Wolf, B. Mohr, J. Dongarra, S. Moore,“Efficient Pattern Search in Large Tracesthrough Successive Refinement,” Proc. ofEuro-Par 2004, Pisa, Italy, Aug.-Sept.2004.

S. E. Woosley, A. Heger, A. Cumming, R. D. Hoffman, J. Pruet, T. Rauscher, J. L. Fisker, H. Schatz, B. A. Brown and M. Wiescher, “Models for Type I X-Ray Bursts with Improved Nuclear Physics,” Astrophysical Journal Supplement, 151:75-102, March 2004.

S. E. Woosley, S. Wunsch and M. Kuhlen,“Carbon Ignition in Type Ia Supernovae: AnAnalytic Model,” Astrophysical Journal,607:921-930, June 2004.

P. H. Worley, J. Levesque, “ThePerformance Evolution of the Parallel OceanProgram on the Cray X1,” Proc. of the 46thCray User Group Conference, Knoxville,TN, May 2004.

J. C. Wright, P. T. Bonoli, M. Brambilla, F.Meo, E. D’Azevedo, D. B. Batchelor, E. F.Jaeger, L. A. Berry, C. K. Phillips and A.Pletzer, “Full Wave Simulations of FastWave Mode Conversion and Lower HybridWave Propagation in Tokamaks,” Physics ofPlasmas 11, 2473, 2004.

J. C. Wright, P. T. Bonoli, E. D'Azevedoand M. Brambilla, “Ultrahigh resolutionsimulations of mode converted ion cyclotronwaves and lower hybrid waves,” Proc. ofthe 18th International Conference on theNumerical Simulation of Plasmas,Computer Physics Communications 164,330, 2004.

K. Wu, E. J. Otoo, and A. Shoshani, “On the performance of bitmap indices for high cardinality attributes,” Proc. of the Thirtieth International Conference on Very Large Data Bases, Toronto, Canada, Morgan Kaufmann, Sept. 2004.

K. Wu, W.-M. Zhang, V. Perevoztchikov,J. Lauret and A. Shoshani, “The grid collec-tor: Using an event catalog to speed up useranalysis in distributed environment,” Proc.of Computing in High Energy andNuclear Physics (CHEP), Interlaken,Switzerland, Sept. 2004.

S. Wunsch and S. E. Woosley, “Convectionand Core-Center Ignition in Type IaSupernovae,” Astrophysical Journal,616:1102-1108, Dec. 2004.

D. Xu and B. Bode, “Performance Studyand Dynamic Optimization Design forThread Pool Systems,” Proc. of theInternational Conference on Computing,Communications and ControlTechnologies (CCCT’2004), Austin,Texas, Aug. 2004.

C. Yang, W. Gao, Z. Bai, X. Li, L. Lee, P.Husbands and E. Ng., “Algebraic Sub-struc-turing for Large-scale ElectromagneticApplications,” PARA'04 Workshop onState-of-the-art in Scientific Computing,Copenhagen, Denmark, June 2004.

U. M. Yang, “On the Use of RelaxationParameters in Hybrid Smoothers,” Numer.Linear Algebra Appl., 11, p. 155-172,2004.

Q. Yi, K. Kennedy, H. You, K. Seymour,and J. Dongarra, “Automatic Blocking of QRand LU Factorizations for Locality,” SecondACM SIGPLAN Workshop on MemorySystem Performance, Washington, D.C.,June 2004.

Q. Yi and D. Quinlan, “Applying LoopOptimizations to Object-OrientedAbstractions Through General Classificationof Array Semantics,” 17th InternationalWorkshop on Languages and Compilersfor Parallel Computing, West Lafayette,Ind., Sept. 2004.

H. Yu, K.-L. Ma and J. Welling, “A ParallelVisualization Pipeline for TerascaleEarthquake Simulations,” Proc. of theACM/IEEE SC04 conference,Pittsburgh, Penn., Nov. 2004.

H. Yu, K.-L. Ma and J. Welling, “I/OStrategies for Parallel Rendering of LargeTime-Varying Volume Data,” Proc. of theEurographics/ACM SIGGRAPHSymposium on Parallel Graphics andVisualization, June 2004.

O. Zatsarinny, T. W. Gorczyca, K. T.Korista, N. R. Badnell and D. W. Savin,“Dielectronic recombination data for dynamicfinite-density plasmas: IV. The Carbon iso-electronic sequence,” Astronomy andAstrophysics, 417, 1173, 2004.

H. Zhang, et al., “Providing Data Transferwith QoS as Agreement-Based Service,”Proc. of the IEEE InternationalConference on Services Computing, 2004.

W. Zhang, S. E. Woosley and A. Heger,“The Propagation and Eruption ofRelativistic Jets from the Stellar Progenitorsof Gamma-Ray Bursts,” AstrophysicalJournal, 608:365-377, June 2004.

2005-2006

G. Abla, et al., “Shared Display Wall Based Collaboration Environment in the Control Room of the DIII-D National Fusion Facility,” Fifth Annual Workshop on Advanced Collaborative Environments (WACE2005), 2005.

G. Abla, et al., “Advanced Tools forEnhancing Control Room Collaborations,”The 5th IAEA Technical Meeting onControl, Data Acquisition, and RemoteParticipation for Fusion Research, 2005.

Mark Adams, “Ax=b: The Link BetweenGyrokinetic Particle Simulations of TurbulentTransport in Burning Plasmas and Micro-FEAnalysis of Whole Vertebral Bodies inOrthopaedic Biomechanics,” invited talk,SciDAC Conference, San Francisco, Calif.,June 2005.

C. M. Aikens, G. D. Fletcher, M. W.Schmidt and M. S. Gordon, “ScalableImplementation Of Analytic Gradients ForSecond-Order Z-Averaged PerturbationTheory Using The Distributed DataInterface,” Journal of Chemical Physics.,124, 014107, 2006.

V. Akcelik, J. Bielak, I. Epanomeritakis, O.Ghattas and E. Kim, “High fidelity forwardand inverse earthquake modeling in largebasins,” Proc. of the Sixth EuropeanConference on Structural DynamicsEURODYN'05, Paris, France, Sept. 2005.

V. Akcelik, G. Biros, A. Dragenescu, J.Hill, O. Ghattas and B. van BloemenWaanders, “Dynamic data-driven inversionfor terascale simulations: Real-time identifica-tion of airborne contaminants,” Proc. of,ACM/IEEE SC2005 conference, Seattle,Wash., Nov. 2005.

V. Akcelik, G. Biros, O. Ghattas, D.Keyes, K. Ko, L.-Q. Lee and E. Ng,“Adjoint Methods For Electromagnetic ShapeOptimization Of The Low-Loss Cavity ForThe International Linear Collider,” Proc. ofSciDAC 2005 Conference, San Francisco,Calif., June 2005.

V. Akcelik, G. Biros, O. Ghattas, D.Keyes, K. Ko, L.-Q. Lee and E. G. Ng,“Adjoint Methods for Electromagnetic ShapeOptimization of the Low-Loss Cavity for theInternational Linear Collider,” Proc. ofSciDAC 2005, San Francisco, Calif., June2005.

V. Akcelik, G. Biros, O. Ghattas, J. Hill, D.Keyes, and B. van Bloeman Waanders,“Parallel PDE constrained optimization,” inFrontiers of Parallel Computing, M.Heroux, P. Raghaven, and H. Simon, eds,SIAM, to appear 2006.

H. Akiba, K.-L. Ma, and J. Clyne, “End-to-end Data Reduction and Hardware Accelerated Rendering Techniques for Visualizing Time-Varying Non-uniform Grid Volume Data,” Proc. of the International Workshop on Volume Graphics 2005, Stony Brook, NY, June 2005.

M. A. Albao, D.-J. Liu, C. H. Choi, M. S.Gordon and J. W. Evans, “CompetitiveEtching and Oxidation of Vicinal Si(100)Surfaces,” MRS Proceedings Vol. 859E,JJ3.6., 2005.

M. A. Albao, M. M. R. Evans, J. Nogami,D. Zorn, M. S. Gordon and J. W. Evans,“Atomistic processes mediating growth of one-dimensional Ga rows on Si(100),” PhysicalReview B 72, 035426, 2005.

M. A. Albao, D.-J. Liu, M. S. Gordon andJ. W. Evans, “Simultaneous Etching andOxidation of Vicinal Si(100) Surfaces:Atomistic Lattice-Gas Modeling ofMorphological Evolution,” Physical ReviewB 72, 195420, 2005.

A. Alexandru, et al., “Progress on aCanonical Finite Density Algorithm,” NuclearPhysics B (Proc. Suppl.) 140, 517, 2005.

C. Alexandru, et al. (The LHPCCollaboration), “First principles calculationsof nucleon and pion form factors:Understanding the building blocks of nuclearmatter from lattice QCD,” Journal ofPhysics Conf. Ser. 16, 174, 2005.

C. Alexandrou, et al., “Momentum depend-ence of the N to Delta transition form fac-tors,” Nuclear Physics B (Proc. Suppl.)140, 293, 2005.

I. F. Allison et al. (The HPQCDCollaboration), “Mass of the B/c meson inthree-flavor lattice QCD,” Physical ReviewLetters 94, 172001, 2005.

I. F. Allison et al. (The HPQCDCollaboration), “A precise determination ofthe B/c mass from dynamical lattice QCD,”Nuclear Physics (Proc. Suppl.) 140, 440,2005.

A. S. Almgren, J. B. Bell, C. A.Rendleman, and M. Zingale, “Low MachNumber Modeling of Type Ia Supernovae,”ApJ, 637, 922, 2006.

J. Amundson, P. Spentzouris,“Space chargeexperiments and simulation in the FermilabBooster,” Proc. of Particle AcceleratorConference (PAC 05), Knoxville, Tenn.,May 2005.

J. Amundson, P. Spentzouris, J.Qiang andR. Ryne, “Synergia: An accelerator modelingtool with 3-D space charge,” Journal ofComputational Physics vol. 211, 229,2006.

A. Andrady, P. J. Aucamo, A. F. Bais, C.L. Ballare, L. O. Bjorn, J. F. Bornman, M.Caldwell, A. P. Cullen, D. J. Erickson III,F. R. de Gruijl, D.-P. Hader, M. ilyas, G.Kulandaivelu, H. D. Kumar, J. Longstreth,R. L. McKenzie, M. Norval, H. H.Redhwi, R. C. Smith, K. R. Solomon, Y.Takizawa, X. Tang, A. H. Teramura, A.Torikai, J. C. van der Leun, S. Wilson, R.C. Worrest and R. G. Zepp,“Environmental effects of ozone depletion andits interactions with climate change: ProgressReport 2004,” Photochem. Photobiol. Sci.,2005.

D. J. Antonio et al. (The RBC andUKQCD Collaborations), “Baryons in 2+1flavour domain wall QCD,” Proc. of XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 098, 2005.

C. Aubin, et al. (The MILCCollaboration), “Properties of light quarksfrom lattice QCD simulations,” Journal ofPhysics: Conference Proceedings, 16, 160,2005.

C. Aubin, et al. (The Fermilab Lattice,MILC, and HPQCD Collaborations),“Semileptonic decays of D mesons in three-fla-vor lattice QCD,” Physical Review Letters94, 011601, 2005.

C. Aubin, et al. (The Fermilab Lattice,MILC, and HPQCD Collaborations),“Charmed meson decay constants in three fla-vor lattice QCD,” Physical Review Letters95, 122002, 2005.

C. Aubin, et al. (The MILCCollaboration), “Topological susceptibilitywith three flavors of staggered quarks,”Nuclear Physics. B (Proc. Suppl.) 140,600, 2005.

C. Aubin, et al. (The MILCCollaboration), “Results for light pseu-doscalars from three-flavor simulations,”Nuclear Physics. B (Proc. Suppl.) 140,231, 2005.

C. Aubin, et al. (The MILCCollaboration), “The scaling dimension oflow lying Dirac eigenmodes and of the topo-logical charge density,” Nuclear Physics. B(Proc. Suppl.) 140, 626, 2005.

B. Bader, O. Ghattas, B. van BloemenWaanders and K. Willcox, “An optimiza-tion framework for goal-oriented model-basedreduction of large-scale systems,” 44th IEEEConference on Decision and Control,Seville, Spain, Dec. 2005.

D. H. Bailey and A. S. Snavely,“Performance Modeling: Understanding thePresent and Predicting the Future,” Proc. ofSIAM PP04, 2005.

A. H. Baker, E. R. Jessup, and T. A.Manteuffel, “A technique for accelerating theconvergence of GMRES,” SIAM J. Mat.Anal. & App., 26, 4, p. 962, 2005.

S. Basak, et al., “Combining quark and linksmearing to improve extended baryon opera-tors,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 076, 2005.

S. Basak, et al. (The LHPCCollaboration), “Baryonic sources using irre-ducible representations of the double-coveredoctahedral group,” Nuclear Physics. (Proc.Suppl.) 140, 281, 2005.

S. Basak, et al. (The LHPCCollaboration), “Analysis of N* spectra usingmatrices of correlation functions based on irre-ducible baryon operators,” Nuclear Physics(Proc. Suppl.) 140, 278, 2005.

S. Basak, et al. (The LHPCCollaboration), “Group-theoretical construc-tion of extended baryon operators,” NuclearPhysics. (Proc. Suppl.) 140, 287, 2005.

S. Basak, et al. (The LHPCCollaboration), “Group-theoretical construc-tion of extended baryon operators in latticeQCD,” Physical Review D, 72, 094506,2005.

S. Basak, et al. (The LHPCCollaboration), “Clebsch-Gordan construc-tion of lattice interpolating fields for excitedbaryons,” Physical Review D, 72, 074501,2005.

A. C. Bauer, M. S. Shephard, M.,Nuggehally, “Design overview of the TSTTintegrated field approximation for numericalsimulation (FANS) library,” Proc. 8th USNat. Congress on Comp. Mech., 2005.

J. B. Bell, M. S. Day, J. F. Grcar, and M. J.Lijewski, “Active Control for StatisticallyStationary Turbulent Premixed FlameSimulations,” Communications in AppliedMathematics and Computational Science,1, 29, 2006.

J. B. Bell, M. S. Day, I. G. Shepherd, M.Johnson, R. K. Cheng, J. F. Grcar, V. E.Beckner, and M. J. Lijewski, “NumericalSimulation of a Laboratory-Scale TurbulentV- flame,” Proceedings of the NationalAcademy of Sciences, 102 (29) 10006,2005.

G. J. O. Beran, B. Austin, A. Sodt, and M.Head-Gordon, “Unrestricted perfect pairing:The simplest wave-function-based modelchemistry beyond mean field,” Journal ofPhysical Chemistry A, 109, 9183-9192,2005.

C. Bernard, et al. (The MILCCollaboration), “The locality of the fourthroot of staggered fermion determinant in theinteracting case,” XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 299, 2005.

C. Bernard, et al. (The MILCCollaboration), “More evidence of localiza-tion in the low-lying Dirac spectrum,” XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 299, 2005.

C. Bernard, et al. (The MILCCollaboration), “The Equation of State forQCD with 2+1 Flavors of Quarks,”Proceedings of Science, Lattice 2005, 156,2005.

C. Bernard, et al. (The MILCCollaboration), “Update on pi and KPhysics,” XXIII International Symposiumon Lattice Field Theory (Lattice 2005),025, 2005.

C. Bernard, et al. (The MILC Collaboration), “QCD Thermodynamics with Three Flavors of Improved Staggered Quarks,” Physical Review D, 71, 034504, 2005.

C. Bernard, et al. (The MILCCollaboration), “Heavy-light decay con-stants using clover valence quarks and threeflavors of dynamical improved staggeredquarks,” Nuclear Physics. B (Proc. Suppl.)140, 449, 2005.

C. Bernard, et al. (The MILCCollaboration), “Three Flavor QCD atHigh Temperatures,” Nuclear Physics B(Proc. Suppl.) 140, 538, 2005.

M. Berndt, T. Manteuffel, and S.McCormick, and G. Starke, “Analysis offirst-order system least squares (FOSLS) forelliptic problems with discontinuous coeffi-cients: Part I,” SIAM Journal on NumericalAnalysis, 43, p. 386-408, 2005.

M. Berndt, T. Manteuffel and S.McCormick, “Analysis of first-order systemleast squares (FOSLS) for elliptic problemswith discontinuous coefficients: Part II,”SIAM Journal on Numerical Analysis, 43,p. 409-436, 2005.

M. Barad, P. Colella, “A fourth-order accurate local refinement method for Poisson's equation,” J. Comput. Phys. 209, p. 1-18, 2005.

N. Bhatia, F. Song, F. Wolf, J. Dongarra,B. Mohr, S. Moore, “AutomaticExperimental Analysis of CommunicationPatterns in Virtual Topologies,” Proc. of theInternational Conference on ParallelProcessing (ICPP-05), Oslo, Norway, June2005.

B. Bhattacharjee, P. Lemonidis, W. H.Green, and P. I. Barton, “Global Solution ofSemi-Infinite Programs,” MathematicalProgramming, 103, 283, 2005.

G. Biros and O. Ghattas, “ParallelLagrange-Newton-Krylov-Schur Methods forPDE-Constrained Optimization. Part I: TheKrylov-Schur Solver,” SIAM Journal onScientific Computing, 27, 687, 2005.

G. Biros and O. Ghattas, “ParallelLagrange-Newton-Krylov-Schur Methods forPDE-Constrained Optimization. Part II: TheLagrange-Newton Solver, and its Applicationto Optimal Control of Steady Viscous Flows,”SIAM Journal on Scientific Computing,27, 714, 2005.

B. Bistrovic, et al. (The LHPCCollaboration), “Understanding hadronstructure from lattice QCD in the SciDACera,” Journal of Physics Conf. Ser. 16, 150,2005.

J. M. Blondin, “Discovering new dynamics ofcore-collapse supernova shock waves,” Journalof Physics Conference Series, 16, 370-379,2005.

J. M. Blondin, A. Mezzacappa, “TheSpherical Accretion Shock Instability in theLinear Regime,” ArXiv Astrophysics e-prints, July 2005.

T. Blum, “Lattice calculation of the lowest-order hadronic contribution to the muonanomalous magnetic moment,” NuclearPhysics B (Proc. Suppl.) 140, 311, 2005.

T. Blum and P. Petreczky, “Cutoff effects inmeson spectral functions,” Nuclear Physics B(Proc. Suppl.) 140, 553, 2005.

F.D.R. Bonnet, et al. (The LHPCCollaboration), “Lattice computations of thepion form factor,” Phys. Rev. D 72, 054506,2005.

P. A. Boyle, et al., “Overview of theQCDSP and QCDOC computers,” IBMResearch Journal 49, 351, 2005.

P. A. Boyle, et al., “QCDOC: Project statusand first results,” Journal of Physics Conf.Ser. 16, 129, 2005.

P. A. Boyle, et al., “Localisation and chiralsymmetry in 2+1 flavour domain wallQCD,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 141, 2005.

P. A. Boyle, et al., “The QCDOC project,”Nuclear Physics (Proc. Suppl.) 140, 169,2005.

J. Brannick, M. Brezina, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, “An energy-based AMG coarsening strategy,” Numer. Linear Algebra Appl., 13, 133, 2006.

D. P. Brennan, S. E. Kruger, T. A. Gianakon, and D. D. Schnack, “A categorization of tearing mode onset in tokamaks via nonlinear simulation,” Nuclear Fusion 45, 1178, 2005.

M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick and J. Ruge, “Adaptive Smoothed Aggregation (αSA),” SIAM Review: SIGEST, 47, p. 317, 2005.

M. Brezina, R. Falgout, S. MacLachlan, T.Manteuffel, S. McCormick and J. Ruge,“Adaptive Algebraic Multigrid,” SIAMJournal of Scientific Computing, 27, 1534,2006.

M. Brezina, C. Tong, R. Becker, “ParallelAlgebraic Multigrids for StructuralMechanics,” SIAM Journal of ScientificComputing, to appear.

R. C. Brower, H. Neff, and K. Orginos,“Moebius fermions: Improved domain wallchiral fermions,” Nuclear Physics. (Proc.Suppl.) 140, 686, 2005.

J. R. Burruss, et al., “Security on the U. S.Fusion Grid,” The 5th IAEA TechnicalMeeting on Control, Data Acquisition,and Remote Participation for FusionResearch, 2005.

J. R. Burruss, et al., “Simplifying FusionGridSecurity,” Proc. Challenges of LargeApplications in DistributedEnvironments, 95-105, 2005.

J. R. Burruss, “The Resource OrientedAuthorization Manager (ROAM),” Proc.14th IEEE International Symposium onHigh Performance DistributedComputing, 308-309, 2005.

J. R. Burruss, et al., “Developments inRemote Collaboration and Computation,”The 14th ANS Topical Meeting on theTechnology of Fusion Energy, FusionScience and Technology, 47, 3, 814, 2005.

L. Bytautas and K. Ruedenberg,“Correlation Energy Extrapolation throughIntrinsic Scaling. IV. Accurate BindingEnergies of the Homonuclear DiatomicMolecules Carbon, Nitrogen, Oxygen andFluorine,” Journal of Chemical Physics,122, 154110, 2005.

X.-C. Cai, L. Marcinkowski, and P.Vassilevski, “An element agglomerationnonlinear additive Schwarz precondi-tioned Newton method for unstructuredfinite element problems,” Applications ofMathematics, 50, p. 247-275, 2005.

C. Y. Cardall, E. J. Lentz, A. Mezzacappa,“Conservative special relativistic radiativetransfer for multidimensional astrophysicalsimulations: Motivation and elaboration,”Physical Review D, 72, 043007, 2005.

C. Y. Cardall, A. O. Razoumov, E.Endeve, E. J. Lentz, A. Mezzacappa,“Toward five-dimensional core-collapse super-nova simulations,” Journal of PhysicsConference Series, 16, 390-394, 2005.

G. R. Carr, M. J. Cordery, J. B. Drake, M.W. Ham, F. M. Hoffman and P. H.Worley, “Porting and Performance of theCommunity Climate System Model(CCSM3) on the Cray X1,” Proc. of the47th Cray User Group Conference,Albuquerque, NM, May 2005.

L. Carrington, M. Laurenzano, A.Snavely, R. Campbell and L. Davis, “Howwell can simple metrics represent the perform-ance of HPC applications?” Proc. of theACM/IEEE SC05 conference, Seattle,Wash., Nov. 2005.

T. Chartier, R. Falgout, V. E. Henson, J.Jones, T. Manteuffel, S. McCormick, J.Ruge and P. Vassilevski, “Spectral elementagglomerate AMGe,” Lecture Notes inComputational Science and Engineering,to appear.

J. Chen, S. C. Jardin, and H. R. Strauss,“Solving anisotropic transport equations onmisaligned grids,” Lecture Notes InComputer Science, 3516, 1076-1079,2005.

J. Chen, J. Breslau, G. Fu, S. Jardin, W.Park, “Symmetric solution in M3D,”Computer Physics Communications 164,1-3, 468-471, 2005.

H. Chen, et al., “Memory PerformanceOptimizations for Real-Time SoftwareHDTV Decoding,” The Journal of VLSISignal Processing, 21, 2, 193-207, 2005.

L. Chen, F. Zonca, and Z. Lin, “NonlinearToroidal Mode Coupling: A New Paradigmfor Drift Wave Turbulence in ToroidalPlasmas,” Plasma Phys. Contr. Fusion 47,B71, 2005.

M. Cheng, et al. (The RBC-BielefeldCollaboration), “Scaling test of the P4-improved staggered fermion action,” Proc. ofXXIII International Symposium onLattice Field Theory (Lattice 2005), 045,2005.

N. Chevaugeon, P. Hu, X. Li, J. Xin, D.Cler, J. E. Flaherty, M. S. Shephard,“Discontinuous Galerkin methods applied toshock and blast problems,” Journal ofScientific Computing, 22, p. 227-243,2005.

E. Chow, “An Aggregation MultilevelMethod Using Smooth Error Vectors,” SIAMJournal of Scientific Computing, 27, 1727,2006.

M. Choi, V. S. Chan, V. Tang, P. Bonoli,R. I. Pinsker, “Monte-Carlo Orbit/Full WaveSimulation of Fast Alfvén (FW) Dampingon Resonant Ions in Tokamaks,” RadioFrequency Power in Plasmas, Proc. 16thTopical Conf., Park City, Utah, 2005.

M. A. Clark, P. de Forcrand and A. D.Kennedy, “Algorithm shootout: R versusRHMC,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 115, 2005.

C. S. Co, A. Friedman, D. P. Grote, J.-L.Vay, W. Bethel, K. I. Joy, “InteractiveMethods for Exploring Particle SimulationData,” Proc. of EuroVis 2005 conference,2005.

B. Cohen, E. B. Hooper, R. H. Cohen, D.N. Hill, H. S. McLean, R. D. Wood, S.Woodruff, C. R. Sovinec, and G. A. Cone,“Simulation of spheromak evolution and ener-gy confinement,” Physics of Plasmas, 12,056106, 2005.

S. Cohen, “Preliminary Study of BK on 2+1flavor DWF lattices from QCDOC,” Proc.of XXIII International Symposium onLattice Field Theory (Lattice 2005), 346,2005.

P. Colella, D. T. Graves, B. J. Keen, D.Modiano, “A cartesian grid embeddedboundary method for hyperbolic conservationlaws,” Journal of Computational Physics,211, p. 347-366, 2006.

I. R. Corey, J. R. Johnson and J. S. Vetter, “Performance Assertions and Sensitivity Analysis: A Technique for Developing Adaptive Grid Applications,” Proceedings of the Eleventh IEEE International Symposium on High Performance Distributed Computing (HPDC), Edinburgh, Scotland, July 2006.

A. P. Craig, R. L. Jacob, B. Kauffman, T.Bettge, J. Larson, E. Ong, C. Ding, and Y.He, “CPL6: The New Extensible, High-Performance Parallel Coupler for theCommunity Climate System Model,”International Journal of HighPerformance Computing Applications,19, 3, p. 309, Aug. 2005.

R. K. Crockett, P. Colella, R. Fisher, R. I.Klein, C. F. McKee, “An unsplit, cell-cen-tered Godunov method for ideal MHD,” J.Comput. Phys. 203, p. 422-448, 2005.

C.T.H. Davies, G. P. Lepage, F.Niedermayer and D. Toussaint, (TheHPQCD, MILC and UKQCDCollaborations), “The Quenched ContinuumLimit,” Nuclear Physics B (Proc. Suppl.)140, 261, 2005.

M. S. Day, J. B. Bell, J. F. Grcar, and V. E.Beckner, “Numerical Control of 3DTurbulent Premixed Flame Simulations,”20th International Colloquium on theDynamics of Explosions and ReactiveSystems, July-Aug, 2005.

H. de Sterck, T. Manteuffel, S.McCormick and L. Olson, “Numerical con-servation properties of H(div)-conformingleast-squares finite element methods for theBurgers equation,” SIAM Journal ofScientific Computing, 26, p. 1573-1597,2005.

H. de Sterck, R. Markel and R. Knight,“TaskSpaces: a software framework for paral-lel bioinformatics on computational grids,” inParallel Computing in Bioinformatics andComputational Biology, A. Zomaya, edi-tor, J. Wiley and Sons, 2005.

J. Demmel, J. Dongarra, V. Eijkhout, E.Fuentes, A. Petitet, R. Vuduc, R. C.Whaley, and K. Yelick, “Self-AdaptingLinear Algebra Algorithms and Software,”Proc. of the IEEE, Special Issue onProgram Generation, Optimization, andPlatform Adaptation, 93(2), Feb. 2005.

N. Desai, R. Bradshaw, S. Matott, S. Bittner, S. Coghlan, R. Evard, C. Luenighoener, T. Leggett, J. P. Navarro, G. Rackow, C. Stacey, and T. Stacey, “A case study in configuration management tool deployment,” Proc. of the Nineteenth System Administration Conference (LISA XIX), San Diego, Calif., Dec. 2005.

N. Desai, A. Lusk, R. Bradsha, and E.Lusk, “{MPISH}: A parallel shell for {MPI}programs,” Proc. of the 1st Workshop onSystem Management Tools for Large-Scale Parallel Systems (IPDPS '05),Denver, Colo., 2005.

Z. M. Djurisic, D. Amusin, T. Bereknyei,T. C. Allison, M. Frenklach, “Reporting ofexperimental data for development and vali-dation of chemical kinetic models,” FallMeeting of the Western States Section ofthe Combustion Institute, Stanford, CA,Oct. 2005.

J. B. Drake, P. W. Jones, G. R. Carr Jr.,“Overview of the Software Design of theCommunity Climate System Model,”International Journal of HighPerformance Computational Application,19, 177, 2005.

T. Draper, et al., “Improved Measure ofLocal Chirality,” Nuclear Physics B (Proc.Suppl.) 140, 623, 2005.

T. Draper, et al., “Locality and Scaling of Quenched Overlap Fermions,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 120, 2005.

H. Duan, G. M. Fuller, Y.-Z. Qian,“Collective Neutrino Flavor TransformationIn Supernovae,” ArXiv Astrophysics e-prints, Nov. 2005.

J. Dudek, R. G. Edwards and D. G.Richards, “Radiative transitions in charmo-nium,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 210, 2005.

T. J. Dudley, R. M. Olson, M. W.Schmidt, and M .S. Gordon, “ParallelCoupled Perturbed CASSCF Equations andAnalytic CASSCF Second Derivatives,”Journal of Computational Chemistry, 27,352, 2006.

T. H. Dunigan, Jr., J. S. Vetter and P. H.Worley, “Performance Evaluation of the SGIAltix 3700,” Proc. of the 2005International Conference on ParallelProcessing, Oslo, Norway, June 2005.

T. H. Dunigan, Jr., J. S. Vetter, J. B. WhiteIII, and P. H. Worley, “PerformanceEvaluation of the Cray X1 DistributedShared-Memory Architecture,” IEEE Micro,25(1), p. 30, Jan./Feb. 2005.

R. J. Dumont, C. K. Phillips and D. N.Smithe, “Effects of non-Maxwellian specieson ion cyclotron wave propagation andabsorption in magnetically confined plasmas,”Physics of Plasmas 12, 042508, 2005.

R. G. Edwards, et al. (The LHPCCollaboration), “Hadron structure withlight dynamical quarks,” Proc. of XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 056, 2005.

R. G. Edwards et al. (The LHPC andUKQCD Collaborations), “The ChromaSoftware System for Lattice QCD,” NuclearPhysics (Proc. Suppl.) 140, 832, 2005.

S. Ethier, “Gyrokinetic Particle-in-CellSimulations of Plasma Microturbulence onAdvanced Computing Platforms,” keynotetalk, SciDAC Conference, San Francisco2005.

J. W. Evans, “Kinetic Monte CarloSimulation of Non-Equilibrium Lattice-GasModels: Basic and Refined Algorithmsapplied to Surface Adsorption Processes,” inHandbook of Materials Modeling, Part A,Ch. 5.12, ed. S. Yip, Springer, Berlin,2005.

M. R. Fahey, S. Alam, T. H. Dunigan, Jr.,J. S. Vetter and P. H. Worley, “EarlyEvaluation of the Cray XD1,” Proc. of the47th Cray User Group Conference,Albuquerque, NM, May 2005.

R. Falgout, J. Brannick, M. Brezina, T.Manteuffel, and S. McCormick, “Newmultigrid solver advances in TOPS,”Proceedings of SciDAC 2005, Journal ofPhysics: Conference Series, Institute ofPhysics, San Francisco, Calif., June 2005.

R. D. Falgout, J. E. Jones, and U. M. Yang,“The Design and Implementation of hypre, aLibrary of Parallel High PerformancePreconditioners,” chapter in NumericalSolution of Partial Differential Equationson Parallel Computers, A. M. Bruaset, P.Bjørstad, and A. Tveito, eds., Springer-Verlag, 2006.

R. D. Falgout, J. E. Jones, and U. M. Yang,“Conceptual Interfaces in hypre,” FutureGeneration Computer Systems, SpecialIssue on PDE software, 22, p. 239-251,2005.

R. D. Falgout, J. E. Jones and U. M. Yang,“Pursuing Scalability for hypre's ConceptualInterfaces,” ACM Transactions ofMathematical Softw., 31, p. 326-350,2005.

R. D. Falgout, P. S. Vassilevski, and L. T.Zikatanov, “On Two-Grid ConvergenceEstimates,” Numer. Linear Algebra Appl.,12, p. 471-494, 2005.

D. G. Fedorov, K. Kitaura, H. Li, J. H. Jensen, and M. S. Gordon, “The polarizable continuum model (PCM) interfaced with the fragment molecular orbital method (FMO),” Journal of Computational Chemistry, in press.

J. Feng, W. Wan, J. Qiang, A. Bartelt, A. Comin, A. Scholl, J. Byrd, R. Falcone, G. Huang, A. MacPhee, J. Nasiatka, K. Opachich, D. Weinstein, T. Young and H. A. Padmore, “An Ultra-Fast X-Ray Streak Camera for the Study of Magnetization Dynamics,” Proceedings of SPIE, 2005.

G. Fleming, et al. (The LHPC Collaboration), “Pion form factor using domain wall valence and asqtad sea quarks,” Nuclear Physics (Proc. Suppl.) 140, 302, 2005.

G. Folatelli, C. Contreras, M. M. Phillips,S. E. Woosley, S. Blinnikov, N. Morrell,N. B. Suntzeff, B. L. Lee, M. Hamuy, S.Gonzalez, W. Krzeminski, M. Roth, W.Li, A. V. Filippenko, W. L. Freedman, B.F. Madore, S. E. Persson, D. Murphy, S.Boissier, G. Galaz, L. Gonzalez, P. J.McCarthy, A. McWilliam and W. Pych,“SN 2005bf: A Transition Event BetweenType Ib/c Supernovae and Gamma RayBursts,” ArXiv Astrophysics e-prints, Sept.2005.

I. Foster, “Service-Oriented Science,” ScienceMagazine, 308, 5723, 814-817, May 2005.

N. Fout, H. Akiba, K.-L. Ma, A. Lefohn,and J. Kniss, “High Quality Rendering ofCompressed Volume Data Formats,” Proc. ofthe Eurographics/IEEE-VGTCSymposium on Visualization (EuroVis2005), Leeds, UK, June 2005.

N. Fout, K.-L. Ma, and J. Ahrens, “Time-Varying Multivariate Volume DataReduction,” Proc. of the ACM Symposiumon Applied Computing 2005, p. 1224-1230, Santa Fe, NM, March 2005.

C. L. Fryer and A. Heger, “Binary MergerProgenitors for Gamma-Ray Bursts andHypernovae,” ApJ, 623:302-313, April2005.

C. L. Fryer, G. Rockefeller, A.Hungerford, and F. Melia, “The Sgr B2 X-ray Echo of the Galactic Center SupernovaExplosion that Produced Sgr A East,” ArXivAstrophysics e-prints, June 2005.

G. M. Fuller, Y.-Z. Qian, “SimultaneousFlavor Transformation of Neutrinos andAntineutrinos with Dominant Potentials fromNeutrino-Neutrino Forward Scattering,”ArXiv Astrophysics e-prints, May 2005.

E. Gamiz, et al. (The HPQCDCollaboration), “Dynamical Determinationof BK from Improved Staggered Quarks,”Proc. of XXIII International Symposiumon Lattice Field Theory (Lattice 2005),47 2005.

E. Gamiz, et al. (The HPQCDCollaboration), “BK from ImprovedStaggered Quarks,” Nuclear Physics (Proc.Suppl.) 140, 353, 2005.

W. Gao, X. S. Li, C. Yang, Z. Bai,“Performance Evaluation of a MultilevelSub-structuring Method for SparseEigenvalue Problems,” Proc. of the 16thInternational Conference on DomainDecomposition Methods, 2005.

X. Gao, B. Simon, and A. Snavely,“ALITER: An Asynchronous LightweightInstrumentation Tool for Event Recording,”Workshop on Binary Instrumentation andApplications, St. Louis, Mo. Sept. 2005.

X. Gao, M. Laurenzano, B. Simon, and A.Snavely, “Reducing Overheads for AcquiringDynamic Traces,” International Symposiumon Workload Characterization (ISWC05),Austin, Texas, Sept. 2005.

R. Garnett, J. Billen, T. Wangler, P.Ostroumov, J. Qiang, R. Ryne, R. York, K.Crandall, “Advanced Beam-DynamicsSimulation Tools for RIA,” Proc. PAC2005,Knoxville, Tenn., May 2005.

C. Gatti-Bono, P. Colella, “An anelastic all-speed projection method for gravitationallystratified flows,” Journal of ComputationalPhysics, in press.

L. Ge, L.-Q. Lee, Z. Li, C.-K. Ng, G.Schussman, L. Xiao and K. Ko, “AdvancedEigensolver For Electromagnetic Modeling OfAccelerator Cavities,” The 15th Conferenceon the Computation of ElectromagneticFields, Shenyang, China, June 2005.

L. Ge, L.-Q. Lee, Z. Li, C.-K. Ng, K. Ko,E. Seol, A. C. Bauer and M. Shephard,“Adaptive Mesh Refinement In AcceleratorCavity Design,” SIAM Conference onComputational Science & Engineering,Orlando, Fla., Feb. 2005.

C. G. R. Geddes, C. Toth, J. van Tilborg, E. Esarey, C. B. Schroeder, D. Bruhwiler, C. Nieter, J. Cary, and W. P. Leemans, “Production of high quality electron bunches by dephasing and beam loading in channeled and unchanneled laser plasma accelerators,” Physics of Plasmas, 12, 5, p. 56709, May 2005.

S. J. Ghan, and T. Shippert, “Load balanc-ing and scalability of a subgrid orographyscheme in a global climate model,”International Journal of High PerformanceComputer Applications., 19, 237, 2005.

J. Glimm, X. L. Li, Z. L. Xu, “Front track-ing algorithm using adaptively refined mesh-es,” Proc. of the 2003 Chicago Workshopon Adaptive Mesh Refinement Methods,Lecture Notes in Computational Scienceand Engineering, 41, 83-90, 2005.

M. S. Gordon and M. W. Schmidt,“Advances in Electronic Structure Theory:GAMESS a Decade Later,” Theory andApplications of ComputationalChemistry, Ch. 41, C. E. Dykstra, G.Frenking, K. S. Kim, G. E. Scuseria, Eds.,Elsevier, 2005.

S. Gottlieb, et al., “Onium masses with threeflavors of dynamical quarks,” Proc. of XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 203, 2005.

D. A. Goussis and M. Valorani, “An effi-cient iterative algorithm for the approxima-tion of the fast and slow dynamics of stiff sys-tems,” Journal of Computational Physics,214, 316, 2006.

D. A. Goussis, and M. Valorani, F. Cretaand H. N. Najm, “Reactive andReactive/Diffusive Time Scales in StiffReaction-Diffusion Systems,” Prog. inComputational Fluid Dynamics, 5(6):316-326, Sept. 2005.

A. Gray, et al. (The HPQCDCollaboration), “The B meson decay con-stant from unquenched lattice QCD,”Physical Review Letters 95, 212001,2005.

A. Gray, et al. (The HPQCDCollaboration), “The Upsilon spectrum andm(b) from full lattice QCD,” PhysicalReview D, 72, 094507, 2005.

A. Gray, et al. (The HPQCDCollaboration), “B Leptonic Decays andB_B Mixing with 2+1 Flavors ofDynamical Quarks,” Nuclear Physics(Proc. Suppl.) 140, 446, 2005.

J. F. Grcar, M. S. Day, and J. B. Bell, “ATaxonomy of Integral Reaction PathAnalysis,” Combust. Theory Modelling, toappear.

C. M. Greenfield, “Advanced TokamakResearch at the DIII-D National FusionFacility in Support of ITER,” Proceedingsof SciDAC 2005, Journal of Physics:Conference Series, Institute of Physics,San Francisco, Calif., June 2005.

M. Greenwald, D. P. Schissel, J. R.Burruss, T. Fredian, J. Lister and J.Stillerman, “Visions for Data Managementand Remote Collaboration on ITER,” Proc.of ICALEPCS05, 2005.

E. Gulez, et al. (The HPQCDCollaboration), “B Semileptonic Decayswith 2+1 Dynamical Quark Flavors,” Proc.of XXIII International Symposium onLattice Field Theory (Lattice 2005), 220,2005.

O. D. Gurcan, P. H. Diamond, T. S.Hahm and Z. Lin, “Dynamics of TurbulenceSpreading in Magnetically ConfinedPlasmas,” Physics of Plasmas, 12, 032303,2005.

A. T. Hagler, et al. (The LHPCCollaboration), “Helicity dependent andindependent generalized parton distributionsof the nucleon in lattice QCD,” EuropeanPhysics Journal, A24S1, 29, 2005.

T. S. Hahm, P. H. Diamond, Z. Lin, G.Rewoldt, O. Gurcan, and S. Ethier, “Onthe Dynamics of Edge-Core Coupling,”Physics of Plasmas, 12, 090903, 2005.

K. Hashimoto, T. Izubuchi and J. Noaki,“The static quark potential in 2+1 flavourdomain wall QCD from QCDOC,” Proc. ofXXIII International Symposium onLattice Field Theory (Lattice 2005), 093,2005.

J. C. Hayes, S. W. Bruenn, “Toward 2-Dand 3-D simulations of core-collapse super-novae with magnetic fields,” Journal ofPhysics Conference Series, 16, 395-399,2005.

Y. He and C. Ding, “Coupling Multi-Component Models with MPH onDistributed Memory ComputerArchitectures,” International Journal ofHigh Performance ComputingApplications, 19, 3, p. 329, 2005.

M. Head-Gordon, G. J. O. Beran, A. Sodtand Y. Jung, “Fast electronic structure meth-ods for strongly correlated molecular systems,”Journal of Physics: Conf. Ser. 16, 233-242,2005.

A. Heger, E. Kolbe, W. C. Haxton, K. Langanke, G. Martinez-Pinedo and S. E. Woosley, “Neutrino nucleosynthesis,” Physics Letters B, 606, 258-264, Jan. 2005.

A. Heger, S. E. Woosley and H. C. Spruit,“Presupernova Evolution of DifferentiallyRotating Massive Stars Including MagneticFields,” ApJ, 626:350-363, June 2005.

J. J. Heys, C. G. DeGroff, T. Manteuffel,and S. McCormick, “First-order system leastsquares (FOSLS) for modeling blood flow,”Med. Eng. Phys., to appear.

A. C. Hindmarsh, P. N. Brown, K. E.Grant, S. L. Lee, R. Serban, D. E.Shumaker, and C. S. Woodward, “SUN-DIALS: Suite of Nonlinear andDifferential/Algebraic Equation Solvers,”ACM Transactions on MathematicalSoftware, 31(3), p. 363-396, 2005.

W. R. Hix, O. E. B. Messer, A.Mezzacappa, M. Liebendoerfer, D. J.Dean, K. Langanke, J. Sampaio, G.Martinez-Pinedo, “Terascale input physics:the role of nuclear electron capture in core col-lapse supernovae,” Journal of PhysicsConference Series, 16, 400-404, 2005.

W. R. Hix, B. S. Meyer, “ThermonuclearKinetics in Astrophysics,” Nuclear PhysicsA, in press, Sept. 2005.

F. M. Hoffman, W. W. Hargrove, D. J.Erickson III and W. Oglesby, “Using clus-tered climate regimes to analyze and comparepredictions from fully coupled general circula-tion models,” Earth Interactions, 9, 1, 2005.

F. M. Hoffman, M. Vertenstein, H.Kitabata, and J. B. White III. “Vectorizingthe Community Land Model (CLM),”International Journal of HighPerformance Computing Applications,19(3), 247, Aug. 2005.

F. M. Hoffman, W. W. Hargrove, D. J.Erickson, and R. J. Oglesby, “UsingClustered Climate Regimes to Analyze andCompare Predictions from Fully CoupledGeneral Circulation Models,” EarthInteractions, 9(10), 1-27, Aug. 2005.

I. Hofmann, G. Franchetti, A. U. Luccio,M. Giovannozzi, E. Métral, J. F.Amundson, P. Spentzouris, S. Machida, J.Qiang, R. Ryne, S. M. Cousineau, J. A.Holmes, F. Jones, “Benchmarking ofSimulation Codes Based on the MontagueResonance in the CERN-PS,” Proc.PAC2005, Knoxville, Tenn., May 2005.

M. J. Hogan, C. D. Barnes, C. E. Clayton, F. J. Decker, S. Deng, P. Emma, C. Huang, R. H. Iverson, D. K. Johnson, C. Joshi, T. Katsouleas, P. Krejcik, W. Lu, K. A. Marsh, W. B. Mori, P. Muggli, C. L. O’Connell, E. Oz, R. H. Siemann and D. Walz, “Multi-GeV Energy Gain in a Plasma-Wakefield Accelerator,” Physical Review Letters 95, 054802, 2005.

J. Hollingsworth, A. Snavely, S. Sbaraglia,and K. Ekanadham, “EMPS: AnEnvironment for Memory PerformanceStudies,” IPDPS 2005, Next GenerationSoftware Workshop, to appear.

D. J. Holmgren, “PC clusters for latticeQCD,” Nuclear Physics (Proc. Suppl.)140, 183, 2005.

D. J. Holmgren, “U.S. Lattice Clusters andthe USQCD Project,” Proc. of XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 105, 2005.

C. Homescu, L. Petzold, and R. Serban,“Error estimation for reduced order models ofdynamical systems,” SIAM Journal onNumerical Analysis, 43, 1693, 2005.

E. B. Hooper, R. A. Kopriva, B. I. Cohen, D. N. Hill, H. S. McLean, R. D. Wood, S. Woodruff and C. R. Sovinec, “A magnetic reconnection event in a driven spheromak,” Physics of Plasmas 12, 092503, 2005.

I. Horvath, et al., “Inherently Global Natureof Topological Charge Fluctuations in QCD,”Physics Letters B, 612, 21, 2005.

I. Horvath, et al., “The Negativity of theOverlap-Based Topological Charge DensityCorrelator in Pure-Glue QCD and the Non-Integrable Nature of its Contact Part,”Physics Letters B, 617, 49, 2005.

M. Huang, and B. Bode, “A PerformanceComparison of Tree and Ring Topologies inDistributed Systems,” Parallel andDistributed Scientific and EngineeringComputing PDSEC-05, Denver, Colo.,April 2005.

A. L. Hungerford, C. L. Fryer, and G. M.Rockefeller, “Gamma-Rays from SingleLobe Supernova Explosions,” ArXivAstrophysics e-prints, July 2005.

F.-N. Hwang, and X.-C. Cai, “Parallel fullycoupled Schwarz preconditioners for saddlepoint problems,” Electronic Transactionson Numerical Analysis, 2005.

F.-N. Hwang, and X.-C. Cai, “A parallelnonlinear additive Schwarz preconditionedinexact Newton algorithm for incompressibleNavier-Stokes equations,” Journal ofComputational Physics, 204, p. 666-691,2005.

F.-N. Hwang, and X.-C. Cai, “A combinedlinear and nonlinear preconditioning tech-nique for incompressible Navier-Stokes equa-tions,” J. Dongarra, K. Madsen, and J.Wasniewski, eds., Lecture Notes inComputer Science, 3732, p. 313-322,Springer-Verlag, 2005.

E. Ipek, B. R. de Supinski, M. Schulz andS. A. McKee, “An Approach to PerformancePrediction for Parallel Applications,” Euro-Par 2005, Lisbon, Portugal, Sept. 2005.

R. Jacob, J. Larson, and E. Ong, “MxNCommunication and Parallel Interpolation inCCSM3 Using the Model Coupling Toolkit,”International Journal of HighPerformance Computational Applications,19(3), 293, 2005.

E. F. Jaeger, “Self-consistent full-wave andFokker-Planck calculations for ion cyclotronheating in non-Maxwellian plasmas,” Bull.American Physics Society 50, 257, 2005.

E. F. Jaeger, L. A. Berry, R. W. Harvey, etal., “Self-Consistent Full-Wave / Fokker-Planck Calculations for Ion CyclotronHeating in non-Maxwellian Plasmas,” RadioFrequency Power in Plasmas, Proc. 16thTopical Conf., Park City, Utah, 2005.

S. C. Jardin, J. A. Breslau, “Implicit solutionof the four-field extended-magnetohydrody-namic equations using high-order high-conti-nuity finite elements,” Physics of Plasmas12, 10, 2005.

J.E. Jones and B. Lee, “A Multigrid Methodfor Variable Coefficient Maxwell'sEquations,” SIAM Journal on ScientificComputing, 27, 168, 2006.

P. W. Jones, P. H. Worley, Y. Yoshida, J. B.White III and J. Levesque, “Practical per-formance portability in the Parallel OceanProgram (POP),” ConcurrencyComputation-Practice Experiment &Experience, 17, 1317, 2005.

C. Jung, “Thermodynamics using p4-improved staggered fermion action onQCDOC,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 150, 2005.

A. Juodagalvis, D. J. Dean, “Gamow-TellerGT+ distributions in nuclei with mass A =90-97,” Physical Review C, 72, 024306,2005.

A. Juodagalvis, K. Langanke, G. Martinez-Pinedo, W. R. Hix, D. J. Dean, J. M.Sampaio, “Neutral-current neutrino-nucleuscross sections for A~50-65 nuclei,” NuclearPhysics A, 747, 87-108, 2005.

D. J. Kerbyson and P. W. Jones, “APerformance Model of the Parallel OceanProgram,” International Journal of HighPerformance Computing Applications,19, 261, 2005.

C. C. Kim, C. R. Sovinec, S. E. Parker andthe NIMROD Team, “Hybrid kinetic-MHD simulations in general geometry,”Computer Physics Communications 164,1-3, 448, 2005.

S. Klasky, “Data Management on the FusionComputational Pipeline,” invited talk,SciDAC Conference, San Francisco, Calif.,June 2005.

A. Klawonn, O. Rheinbach and O.Widlund, “Some Computational Results forDual-Primal FETI Methods for EllipticProblems in 3D,” Proc. of the 15thInternational Symposium on DomainDecomposition Methods, Berlin,Germany, July 2003.

A. Klawonn and O. Widlund, “SelectingConstraints in Dual-Primal FETI Methodsfor Elasticity in Three Dimensions,” Proc. ofthe 15th International Symposium onDomain Decomposition Methods, Berlin,Germany, July 2003.

K. Ko, N. Folwell, L. Ge, A. Guetz, L.-Q.Lee, Z. Li, C.-K. Ng, E. Prudencio, G.Schussman, R. Uplenchwar and L. Xiao,“Advances in Electromagnetic Modelingthrough High Performance Computing,”Proc. of 12th International Workshop onRF Superconductivity (SRF 2005),Cornell, New York, July 2005.

K. Ko, N. Folwell, L. Ge, A. Guetz, V.Ivanov, A. Kabel, M. Kowalski, L. Lee, Z.Li, C. Ng, E. Prudencio, G. Schussman, R.Uplenchwar, L. Xiao, E. Ng, W. Gao, X.Li, C. Yang, P. Husbands, A. Pinar, D.Bailey, D. Gunter, L. Diachin, D. Brown,D. Quinlan, R. Vuduc, P. Knupp, K.Devine, J. Kraftcheck, O. Ghattas, V.Akcelik, D. Keyes, M. Shephard, E. Seol,A. Brewer, G. Golub, K. Ma, H. Yu, Z.Bai, B. Liao, T. Tautges and H. Kim,“Impact of SciDAC on accelerator projectsacross the Office of Science through electro-magnetic modeling,” Proc. of SciDAC 2005meeting, San Francisco, Calif., June 2005.

S. Kronfeld, et al. (The Fermilab Latticeand MILC Collaborations), “Predictionsfrom Lattice QCD,” Proc. of XXIIIInternational Symposium on Lattice FieldTheory (Lattice 2005), 206, 2005.

S. E. Kruger, C. R. Sovinec, D. D. Schnack and E. D. Held, “Free-boundary simulations of DIII-D plasmas with the NIMROD code,” Computer Physics Communications 164, 1-3, 34, 2005.

S. E. Kruger, D. D. Schnack and C. R.Sovinec, “Dynamics of the major disruptionof a DIII-D plasma,” Physics of Plasmas12, 056113, 2005.

M. Kuhlen, S. E. Woosley and G. A.Glatzmaier, “Carbon Ignition in Type IaSupernovae: II. A Three-DimensionalNumerical Model,” ApJ 640, 407, 2006.

J.-F. Lamarque, J. T. Kiehl, P. G. Hess, W.D. Collins, L. K. Emmons, P. Ginoux, C.Luo and X. X. Tie, “Response of a coupledchemistry-climate model to changes in aerosolemissions: Global impact on the hydrologicalcycle and the tropospheric burdens of OH,ozone and NOx,” Geophysical ResearchLetter, 32, 16, L16809, 2005.

C.-L. Lappen and D. A. Randall, “Usingidealized coherent structures to parameterizemomentum fluxes in a PBL mass-fluxmodel,” Journal of Atmospheric Sciences,62, 2829, 2005.

J. Larson, R. Jacob and E. Ong, “TheModel Coupling Toolkit: A New Fortran90Toolkit for Building Multi-Physics ParallelCoupled Models,” International Journal ofHigh Performance ComputingApplications, 19(3), 277-292, 2005.

M. Laurenzano, B. Simon, A. Snavely andM. Gunn, “Low Cost Trace-driven MemorySimulation Using SimPoint,” Workshop onBinary Instrumentation and Applications,St. Louis, Mo., Sept. 2005.

I. Lee, P. Raghavan, and E. G. Ng,“Effective Preconditioning Through OrderingInterleaved with Incomplete Factorization,”SIAM J. Matrix Anal. Appls., 27, 106,2006.

J. C. Lee, H. N. Najm, S. Lefantzi, J. Ray,M. Frenklach, M. Valorani and D. A.Goussis, “On chain branching and its role inhomogeneous ignition and premixed flamepropagation,” Computational Fluid andSolid Mechanics 2005, K. J. Bathe, ed., p.717, Elsevier, 2005.

L.-Q. Lee, Z. Bai, W. Gao, L. Ge, Z. Li,C.-K. Ng, C. Yang, E. Ng and K. Ko,“Nonlinear Eigenvalue Problem inAccelerator Cavity Design,” SIAM AnnualMeeting, New Orleans, La., July 2005.

L.-Q. Lee, Z. Bai, W. Gao, L. Ge, P. Husbands, M. Kowalski, X. Li, Z. Li, C.-K. Ng, C. Yang, E. Ng and K. Ko, “Modeling RF Cavities With External Coupling,” SIAM Conference on Computational Science & Engineering, Orlando, Fla., Feb. 2005.

L.-Q. Lee, L. Ge, Z. Li, C. Ng, G.Schussman, K. Ko, E. Ng, W. Gao, P.Husbands, X. Li, C. Yang, V. Akcelik, O.Ghattas, D. Keyes, T. Tautges, H. Kim, J.Craftcheck, P. Knupp, L. Freitag Diachin,K.-L. Ma, Z. Bai and G. Golub,“Achievements in ISICs/SAPP collaborationsfor electromagnetic modeling of accelerators,”Proc. of SciDAC 2005 meeting, SanFrancisco, Calif., June 2005.

W. P. Leemans, E. Esarey, J. van Tilborg,P. A. Michel, C. B. Schroeder, C. Toth, C.G. R. Geddes and B. A. Shadwick,“Radiation from laser accelerated electronbunches: coherent terahertz and femtosecondx-rays,” IEEE Trans. Plasma Sci., 33, 1,0093-3813, Feb. 2005.

S. Lefantzi, J. Ray, C. A. Kennedy and H.N. Najm, “A Component-based Toolkit forReacting Flows with Higher Order SpatialDiscretizations on Structured AdaptivelyRefined Meshes,” Prog. in ComputationalFluid Dynamics, 5(6), 298-315, Sept. 2005.

J. L. V. Lewandowski, “Low-noise collisionoperators for particle-in-cell simulations,”Physics of Plasmas, 12, 052322, 2005.

J. L. V. Lewandowski, “Global particle-in-cell simulations of microturbulence with kinet-ic electrons,” invited talk, APS/DPP meet-ing, Denver, Colo., 2005.

H. Li, and M. S. Gordon, “Gradients of theExchange-Repulsion Energy in the EffectiveFragment Potential Method,” TheoreticalChemistry Accounts, in press.

K. Li, et al., “Dynamic ScalableVisualization for Collaborative ScientificApplications,” Proceedings of the NextGeneration Software Workshop, DenverColo., April 2005.

X. Li, M. S. Shepard, M. W., Beall, “3-DAnisotropic Mesh Adaptation by MeshModifications,” Computer Methods inApplied Mechanics and Engineering,194(48-49), p. 4915-4950, 2005.

X. S. Li, “An Overview of SuperLU:Algorithms, Implementation, and UserInterface,” ACM Trans. on Math.Software, 31, 3, 2005.

X. S. Li, W. Gao, P. J. R. Husbands, C.Yang and E. G. Ng, “The Roles of SparseDirect Methods in Large-Scale Simulations,”Proc. of SciDAC 2005 meeting, SanFrancisco, Calif., June 2005.

Z. Li, A. Kabel, L. Ge, R. Lee, C.-K. Ng,G. Schussman, L. Xiao and K. Ko,“Electromagnetic Modeling For The ILC,”International Linear Collider Physics andDetector Workshop and 2nd ILC acceler-ator Workshop, Snowmass, Colo., Aug.2005.

M. Liebendoerfer, M. Rampp, H.-T. Janka,A. Mezzacappa, “Supernova Simulationswith Boltzmann Neutrino Transport: AComparison of Methods,” AstrophysicalJournal Letters, 620, 840-860, 2005.

H.-W. Lin and N. H. Christ, “Non-pertur-batively determined relativistic heavy quarkaction,” Proc. of XXIII InternationalSymposium on Lattice Field Theory(Lattice 2005), 225, 2005.

M.-F. Lin, “Probing the chiral limit of Mπ and fπ in 2+1 flavor QCD with domain wall fermions from QCDOC,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 94, 2005.

Y. Lin, S. Wukitch, A. Parisot, J. C.Wright, N. Basse, P. Bonoli, E. Edlund, L.Lin, M. Porkolab, G. Schilling and P.Phillips, “Observation and modeling of ioncyclotron range of frequencies waves in themode conversion region of Alcator C-Mod,”Plasma Phys. and Control. Fusion 47,1207, 2005.

Y. Lin, X. Wang, Z. Lin, and L. Chen, “AGyrokinetic Electron and Fully Kinetic IonPlasma Simulation Model,” Plasma Phys.Contr. Fusion 47, 657, 2005.

Z. Lin, L. Chen, and F. Zonca, “Role ofNonlinear Toroidal Coupling in ElectronTemperature Gradient Turbulence,” Physicsof Plasmas, 12, 056125, 2005.

D.-J. Liu and J. W. Evans, “From AtomicScale Ordering to Mesoscale Spatial Patternsin Surface Reactions: A HeterogeneousCoupled Lattice-Gas (HCLG) SimulationApproach,” SIAM Multiscale Modelingand Simulation, 4, 424, 2005.

K. F. Liu and N. Mathur, “A Review ofPentaquark Calculations on the Lattice,”Proc. of 2005 Int. Conf. on QCD andHadron Physics, 2005.

X. Luo, L. Yin, M. S. Shephard, R. M.O’Bara, M. W. Beall, “An automatic adap-tive directional variable p-version method for3-D curved domains,” Proc. 8th US Nat.Congress on Comp. Mech., 2005.

E. Lusk, N. Desai, R. Bradshaw, A. Lusk and R. Butler, “An Interoperability Approach to System Software, Tools and Libraries for Clusters,” International Journal of High Performance Computing Applications (to appear 2006).

K.-L. Ma, E. B. Lum, H. Yu, H. Akiba, M.-Y. Huang, Y. Wang and G. Schussman, “Scientific Discovery through Advanced Visualization,” Journal of Physics, 16, DOE SciDAC 2005 Conference, San Francisco, Calif., June 2005.

Q. Mason et al. (The HPQCDCollaboration), “Accurate determinations ofalpha(s) from realistic lattice QCD,”Physical Review Letters 95, 052002,2005.

T. Manteuffel, S. McCormick, J. Ruge andJ. G. Schmidt, “First-order system LL*(FOSLL*) for general scalar elliptic prob-lems in the plane,” SIAM Journal onNumerical Analysis, 43, 209, 2005.

T. Manteuffel, S. McCormick, O. Röhrleand J. Ruge, “Projection multilevel methodsfor quasilinear elliptic partial differentialequations: numerical results,” SIAM Journalon Numerical Analysis, 44, 120, 2006.

T. Manteuffel, S. McCormick and O.Röhrle, “Projection multilevel methods forquasilinear elliptic partial differential equa-tions: theoretical results,” SIAM Journal onNumerical Analysis, 44, 139, 2006.

J. Marathe, F. Mueller and B. R. deSupinski, “A Hybrid Hardware/SoftwareApproach to Efficiently Determine CacheCoherence Bottlenecks,” Proc. of theInternational Conference onSupercomputing, Heidelberg, Germany,June 2005.

D. F. Martin, P. Colella, M. Anghel, F.Alexander, “Adaptive mesh refinement formultiscale nonequilibrium physics,”Computers in Science and Engineering, 7,3, p. 24-31, 2005.

N. Mathur, et al., “Roper Resonance andS11(1535) from Lattice QCD,” PhysicsLetters B, 605, 137, 2005.

S. McCormick, “Projection multilevel meth-ods for quasilinear elliptic partial differentialequations: V-cycle theory,” SIAM J. Mult.Modeling Sim., 4, 133, 2006.

P. McCorquodale, P. Colella, G. Balls, S.B. Baden, “A scalable parallel Poisson solverwith infinite domain boundary conditions,”Proc. of the 7th Workshop on HighPerformance Scientific and EngineeringComputing, Oslo, Norway, June 2005.

D. M. Medvedev, E. M. Goldfield and S.K. Gray, “An OpenMP/MPI Approach tothe Parallelization of Iterative Four-AtomQuantum Mechanics,” Computer PhysicsComm. 166, 94-108, 2005.

D. M. Medvedev, S. K. Gray, A. F.Wagner, M. Minkoff and R. Shepard,“Advanced Software for the Calculation ofThermochemistry, Kinetics, and Dynamics,”Journal of Physics: Conference Series 16,247-251, 2005.

J. E. Menard, R. E. Bell, E. D.Fredrickson, W. Park, et al., “Internal kinkmode dynamics in high-beta NSTX plasmas,”Nuclear Fusion 45 (7), 539-556, 2005.

A. Mezzacappa, “Ascertaining the RoadAhead,” Annual Review of Nuclear andParticle Science, 55, 467-515, 2005.

B. Mohr, A. Khnal, M.-A. Hermanns, F.Wolf, “Performance Analysis of One-sidedCommunication Mechanisms,” Mini-Symposium “Tools Support for ParallelProgramming,” Proc. of Parallel Compu-ting (ParCo), Malaga, Spain, Sept. 2005.

S. Moore, F. Wolf, J. Dongarra, S. Shende,A. Malony, B. Mohr, “A Scalable Approachto MPI Application Performance Analysis,”Proc. of the 12th European ParallelVirtual Machine and Message PassingInterface Conference, Sept. 2005.

S. Moore, F. Wolf, J. Dongarra, B. Mohr,“Improving Time to Solution with AutomatedPerformance Analysis,” Second Workshopon Productivity and Performance inHigh-End Computing, 11th InternationalSymposium on High PerformanceComputer Architecture (HPCA-2005),San Francisco, Calif., Feb. 2005.

J. Mueller, O. Sahni, K. E. Jansen, M. S.Shephard, “An anisotropic mesh adaptionprocedure for pulsatile blood flow simulation,”Proc. 8th US Nat. Congress on Comp.Mech., 2005.

J. Mueller, O. Sahni, K. E. Jansen, M. S.Shephard, C. A. Taylor, “A tool for the effi-cient FE-simulation of cardio-vascular flow,”Proc. 2nd NAFEMS CFD Seminar:Simulation of Complex Flows (CFD),April 2005.

B. Mustapha, V. N. Aseev, P. Ostroumov,J. Qiang, R. Ryne, “RIA Beam Dynamics:Comparing TRACK to IMPACT,” Proc. ofParticle Accelerator Conference (PAC05), Knoxville, Tenn., May 2005.

J. R. Myra, D. A. D’Ippolito, D. A. Russell, L. A. Berry, E. F. Jaeger and M. D. Carter, “Nonlinear ICRF-Plasma Interactions,” Radio Frequency Power in Plasmas, Proc. 16th Topical Conf., Park City, Utah, 2005.

H. N. Najm, J. C. Lee, M. Valorani, D. A.Goussis and M. Frenklach, “Adaptivechemical model reduction,” Journal ofPhysics: Conference Series, 16, 101-106,June 2005.

D. Narayan, “Bcfg2: A Pay As You GoApproach to Configuration Complexity,”Proc. of the 2005 Australian UNIX UsersGroup, Sydney, Australia, Oct. 2005.

D. Narayan, E. Lusk and R. Bradshaw,“{MPISH2}: Unix integration for {MPI} pro-grams,” Proc, of EuroPVM-MPI 2005,Sept. 2005.

H. Nassar, M. Paul, I. Ahmad, J. Ben-Dov, J. Caggiano, S. Ghelberg, S. Goriely, J. P. Greene, M. Hass, A. Heger, A. Heinz, D. J. Henderson, R. V. F. Janssens, C. L. Jiang, Y. Kashiv, B. S. Nara Singh, A. Ofan, R. C. Pardo, T. Pennington, K. E. Rehm, G. Savard, R. Scott and R. Vondrasek, “The 40Ca(α,γ)44Ti reaction in the energy regime of supernova nucleosynthesis,” Physical Review Letters, 96, 041102, 2006.

Y. Nishimura, Z. Lin, J. L. V.Lewandowski and S. Ethier, “A FiniteElement Poisson Solver for GlobalGyrokinetic Particle Simulations in a GlobalField Aligned Mesh,” Journal ofComputation & Physics, 214, 657, 2006.

B. Norris, L. McInnes, and I. Veljkovic,“Computational quality of service in parallelCFD,” Proc. of the 17th InternationalConference on Parallel ComputationalFluid Dynamics, University of Maryland,College Park, Md., May 2005.

M. Nuggehally, M. Picu, C. Shephard, M.S. Shephard, “Adaptive model selection pro-cedure in multiscale problems,” Proc. 8th USNat. Congress on Comp. Mech., 2005.

M. Okamoto, et al. (The Fermilab Lattice, HPQCD and MILC Collaborations), “Semileptonic D → π/K and B → π/D decays in 2+1 flavor lattice QCD,” Nuclear Physics B (Proc. Suppl.) 140, 461, 2005.

L. Oliker, J. Carter, M. Wehner, A. Canning, S. Ethier, A. Mirin, G. Bala, D. Parks, P. Worley, S. Kitawaki and Y. Tsuda, “Leading Computational Methods on Scalar and Vector HEC Platforms,” Proc. of the ACM/IEEE SC05 conference, Seattle, Wash., Nov. 2005.

O. O. Oluwole and W. H. Green, “Rigorous Error Control in Reacting Flow Simulations Using Reduced Chemistry Models,” Computational Fluid and Solid Mechanics: Proceedings of the Third MIT Conference on Computational Fluid and Solid Mechanics, ed. by K. J. Bathe, Elsevier, 2005.

C. D. Ott, S. Ou, J. E. Tohline and A. Burrows, “One-armed Spiral Instability in a Low-T/|W| Postbounce Supernova Core,” ApJ Letters, 625:L119-L122, June 2005.

C. Ozen, D. J. Dean, “Shell Model Monte Carlo method in the pn-formalism and applications to the Zr and Mo isotopes,” ArXiv Nuclear Theory e-prints, Aug. 2005.

J. E. Penner, M. Herzog, C. Jablonowski, B. van Leer, R. C. Oehmke, Q. F. Stout and K. G. Powell, “Development of an atmospheric climate model with self-adapting grid and physics,” Journal of Physics: Conference Series, 16, 353, 2005.

P. Petreczky, K. Petrov, D. Teaney and A. Velytsky, “Heavy quark diffusion and lattice correlators,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 185, 2005.

K. Petrov, A. Jakovac, P. Petreczky and A. Velytsky, “Bottomonia correlators and spectral functions at zero and finite temperature,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 153, 2005.

J. Petrovic, N. Langer, S.-C. Yoon and A. Heger, “Which massive stars are gamma-ray burst progenitors?” Astronomy & Astrophysics, 435:247-259, May 2005.

L. R. Petzold, Y. Cao, S. Li and R. Serban, “Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations,” Proc. of the Chemical Process Control Conference, Lake Louise, Alberta, Canada, Jan. 2006.

J. Pjesivac-Grbovic, T. Angskun, G. Bosilca, G. Fagg, E. Gabriel, J. Dongarra, “Performance Analysis of MPI Collective Operations,” 4th International Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems (PMEO-PDS 2005), Denver, Colo., April 2005.

A. Pochinsky, “Domain Wall Fermion Inverter on Pentium 4,” Nuclear Physics B (Proc. Suppl.) 140, 859, 2005.

E. Prudencio, R. Byrd, and X.-C. Cai, “Parallel full space SQP Lagrange-Newton-Krylov-Schwarz algorithms for PDE-constrained optimization problems,” SIAM Journal on Scientific Computing, 27, 130, 2006.

J. Pruet, S. E. Woosley, R. Buras, H.-T. Janka and R. D. Hoffman, “Nucleosynthesis in the Hot Convective Bubble in Core-Collapse Supernovae,” ApJ, 623:325-336, April 2005.

J. Qiang, “Terascale Beam-Beam Simulations for Tevatron, RHIC and LHC,” Proc. PAC2005, Knoxville, Tenn., May 2005.

J. Qiang, M. A. Furman, R. D. Ryne, W. Fischer, K. Ohmi, “Recent advances of strong-strong beam-beam simulation,” Nuclear Instruments & Methods in Physics Research A, 558, 351, 2006.

J. Qiang, S. M. Lidia, R. Ryne, C. Limborg-Deprey, “A 3D Parallel Beam Dynamics Code for Modeling High Brightness Beams in Photoinjectors,” Proc. PAC2005, Knoxville, Tenn., May 2005.

J. Qiang, D. Leitner, D. Todd, “A Parallel 3D Model for the Multi-Species Low Energy Beam Transport System of the RIA Prototype ECR Ion Source VENUS,” Proc. PAC2005, Knoxville, Tenn., May 2005.

P. Raghavan, M. J. Irwin, L. C. McInnes and B. Norris, “Adaptive software for scientific computing: Co-managing quality-performance-power tradeoffs,” Proc. of the IEEE International Parallel & Distributed Processing Symposium 2005, IEEE Computer Society Press, 2005.

J. J. Ramos, “Fluid formalism for collisionless magnetized plasmas,” Physics of Plasmas 12, 052102, 2005.

J. J. Ramos, “General expression of the gyroviscous force,” Physics of Plasmas 12, 112301, 2005.

N. S. Rao, S. M. Carter, Q. Wu, W. R. Wing, M. Zhu, A. Mezzacappa, M. Veeraraghavan, J. M. Blondin, “Networking for large-scale science: infrastructure, provisioning, transport and application mapping,” Journal of Physics: Conference Series, 16, 541-545, 2005.

J. Ray, C. Kennedy, J. Steensland and H. Najm, “Advanced algorithms for computations on block-structured adaptively refined meshes,” Journal of Physics: Conference Series, 16:113-118, June 2005.

D. A. Reed, C. Lu and C. L. Mendes, “Reliability Challenges in Large Systems,” Future Generation Computer Systems, 22, 293, 2006.

J. F. Remacle, S. S. Frazao, M. S. Shephard, M. S. Flaherty, “An Adaptive Discretization of Shallow Water Equations based on Discontinuous Galerkin Methods,” International Journal for Numerical Methods in Fluids, 62(7), pp. 899-923, 2005.

J. F. Remacle, X. Li, M. S. Shephard, J. E. Flaherty, “Anisotropic Adaptive Simulation of Transient Flows using Discontinuous Galerkin Methods,” International Journal for Numerical Methods in Engineering, 62, pp. 899-923, 2005.

D. B. Renner, et al. (The LHPC Collaboration), “Hadronic physics with domain-wall valence and improved staggered sea quarks,” Nuclear Physics B (Proc. Suppl.) 140, 255, 2005.

G. Rewoldt, “Particle-In-Cell simulations of electron transport,” invited talk, SciDAC Conference, San Francisco, Calif., June 2005.

D. G. Richards, “Lattice studies of baryons,” Journal of Physics Conf. Ser. 9, 238, 2005.

G. Rockefeller, C. L. Fryer, F. K. Baganoff and F. Melia, “The X-ray Ridge Surrounding Sgr A* at the Galactic Center,” ArXiv Astrophysics e-prints, June 2005.

G. Rockefeller, C. L. Fryer, and F. Melia, “Spin-Induced Disk Precession in Sagittarius A*,” ArXiv Astrophysics e-prints, July 2005.

F. K. Roepke, W. Hillebrandt, J. C. Niemeyer, and S. E. Woosley, “Multi-spot ignition in type Ia supernova models,” ArXiv Astrophysics e-prints, Oct. 2005.

T. M. Rogers and G. A. Glatzmaier, “Penetrative Convection within the Anelastic Approximation,” ApJ, 620:432-441, Feb. 2005.

D. Rotem, K. Stockinger, and K. Wu, “Optimizing candidate check costs for bitmap indices,” Proc. of CIKM 2005, ACM Press, 2005.

D. Rotem, K. Stockinger, and K. Wu, “Optimizing i/o costs of multi-dimensional queries using bitmap indices,” in DEXA, Lecture Notes in Computer Science 3588, p. 220, Springer-Verlag, 2005.

B. Ruscic, J. E. Boggs, A. Burcat, A. G. Csaszar, J. Demaison, R. Janosche, J. M. L. Martin, M. L. Morton, M. J. Rossi, J. F. Stanton, P. G. Szalay, P. R. Westmoreland, F. Zabel, T. Berces, “IUPAC critical evaluation of thermochemical properties of selected radicals: Part I,” Journal of Physical and Chemical Reference Data, 34(2), 573-656, 2005.

B. Ruscic, R. E. Pinzon, G. von Laszewski, D. Kodeboyina, A. Burcat, D. Leahy, D. Montoya, A. F. Wagner, “Active thermochemical tables: Thermochemistry for the 21st Century,” Journal of Physics: Conference Series, 16:561-570, 2005.

R. Ryne, D. Abell, A. Adelmann, J. Amundson, C. Bohn, J. Cary, P. Colella, D. Dechow, V. Decyk, A. Dragt, R. Gerber, S. Habib, D. Higdon, T. Katsouleas, K.-L. Ma, P. McCorquodale, D. Mihalcea, C. Mitchell, W. Mori, C. T. Mottershead, F. Neri, I. Pogorelov, J. Qiang, R. Samulyak, D. Serafini, J. Shalf, C. Siegerist, P. Spentzouris, P. Stoltz, B. Terzic, M. Venturini, P. Walstrom, “SciDAC Advances and Applications in Computational Beam Dynamics,” Proceedings of SciDAC 2005, San Francisco, Calif., June 26-30, 2005.

M. J. Savage, “Nuclear Physics and QCD,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 020, 2005.

E. Scannapieco, P. Madau, S. Woosley, A. Heger and A. Ferrara, “The Detectability of Pair-Production Supernovae at z < 6,” Astrophysical Journal, 633, 1031, 2005.

D. P. Schissel, “Advances in Remote Participation for Fusion Experiments,” Proc. of the 21st IEEE/NPSS Symposium on Fusion Engineering, 2005.

D. P. Schissel, et al., “The Collaborative Tokamak Control Room,” The 5th IAEA Technical Meeting on Control, Data Acquisition, and Remote Participation for Fusion Research, 2005.

D. P. Schissel, “Advances in the Real-Time Interpretation of Fusion Experiments,” Proceedings of SciDAC 2005, Journal of Physics: Conference Series, Institute of Physics, San Francisco, Calif., June 2005.

D. P. Schissel, et al., “Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences,” Physics of Plasmas 12, 058104, 2005.

D. P. Schissel, et al., “Advances in Remote Participation for Fusion Experiments,” The 23rd Symposium on Fusion Technology, 2005.

W. J. Schroeder, M. S. Shephard, “Computational Visualization,” Encyclopedia of Computational Mechanics, E. Stein, R. DeBorst, T. J. R. Hughes, eds., Wiley, 2005.

K. Schuchardt, O. Oluwole, W. Pitz, L. A. Rahn, J. William, H. Green, D. Leahy, C. Pancerella, M. Sjöberg and J. Dec, “New Approaches for Collaborative Sharing of Chemical Model Data and Analysis Tools,” 2005 Joint Meeting of the US Sections of the Combustion Institute, 2005.

K. Schuchardt, O. Oluwole, W. Pitz, L. A. Rahn, J. William, H. Green, D. Leahy, C. Pancerella, M. Sjöberg, J. Dec, “Development of the RIOT web service and information technologies to enable mechanism reduction for HCCI simulations,” Journal of Physics: Conference Series, 16:107-112, 2005.

M. Schulz, D. Ahn, A. Bernat, B. R. de Supinski, S. Ko, G. Lee and B. Rountree, “Scalable Dynamic Binary Instrumentation for BlueGene/L,” Workshop on Binary Instrumentation and Applications, Sept. 2005.

P. Schwartz, D. Adalsteinsson, P. Colella, A. P. Arkin, M. Onsum, “Numerical computation of diffusion on a surface,” Proc. Nat. Acad. Sci., 102, p. 11151-11156, 2005.

P. Schwartz, M. Barad, P. Colella, T. J. Ligocki, “A Cartesian grid embedded boundary method for the heat equation and Poisson’s equation in three dimensions,” Journal of Computational Physics 211, p. 531-550, 2006.

E. S. Seol, M. S. Shephard, “A Flexible Distributed Mesh Data Structure to Support Parallel Adaptive Analysis,” Proc. 8th US Nat. Congress on Comp. Mech., 2005.

R. Serban and A. C. Hindmarsh, “CVODES: the Sensitivity-Enabled ODE Solver in SUNDIALS,” Proc. of the 5th International Conference on Multibody Systems, Nonlinear Dynamics and Control, Long Beach, Calif., Sept. 2005.

B. A. Shadwick, G. M. Tarkenton, E. Esarey and C. B. Schroeder, “Fluid and Vlasov models of low temperature, collisionless laser-plasma interactions,” Physics of Plasmas, 12, 5, p. 56710-18, May 2005.

S. Shende, A. Malony, A. Morris, F. Wolf, “Performance Profiling Overhead Compensation for MPI Programs,” Proc. of the 12th European Parallel Virtual Machine and Message Passing Interface Conference, Sept. 2005.

M. S. Shephard, A. C. Bauer, E. S. Seol, J. Wan, “Construction of Adaptive Simulation Loops,” Proc. 8th US Nat. Congress on Comp. Mech., 2005.

M. S. Shephard, J. E. Flaherty, K. E. Jansen, X. Li, X. J. Luo, N. Chevaugeon, J. F. Remacle, M. W. Beall, R. M. O’Bara, “Adaptive mesh generation for curved domains,” Applied Numerical Mathematics, 53(2-3), p. 251-271, 2005.

R. Shepard, M. Minkoff, and Y. Zhou, “Software for Computing Eigenvalue Bounds for Iterative Subspace Methods,” Computer Physics Comm. 170, 109-114, 2005.

M. S. Shephard, E. Seol, J. Wan, A. C. Bauer, “Component-based adaptive control procedures,” Proc. from the Conference on Analysis, Modeling and Computation of PDE and Multiphase Flow, Kluwer Academic Press, 2005.

J. Shigemitsu, et al. (The HPQCD Collaboration), “Semileptonic B Decays with Nf = 2+1 Dynamical Quarks,” Nuclear Physics (Proc. Suppl.) 140, 464, 2005.

J. N. Simone, et al. (The Fermilab Lattice, HPQCD and MILC Collaborations), “Leptonic decay constants fDs and fD in three flavor lattice QCD,” Nuclear Physics B (Proc. Suppl.) 140, 443, 2005.

A. Socrates, O. Blaes, A. Hungerford, and C. L. Fryer, “The Neutrino Bubble Instability: A Mechanism for Generating Pulsar Kicks,” Astrophysical Journal, 632:531-562, Oct. 2005.

C. R. Sovinec, C. C. Kim, D. D. Schnack, A. Y. Pankin, S. E. Kruger, E. D. Held, D. P. Brennan, D. C. Barnes, X. S. Li, D. K. Kaushik, S. C. Jardin and the NIMROD Team, “Nonlinear Magnetohydrodynamic (MHD) Simulations using High-Order Finite Elements,” SciDAC 2005 meeting, Journal of Physics: Conference Series, 16, p. 25-34, San Francisco, Calif., June 2005.

C. Stan, “The mean meridional circulation: A new potential-vorticity, potential-temperature perspective,” Ph.D. thesis, Colorado State University, 2005.

A. W. Steiner, M. Prakash, J. M. Lattimer, P. J. Ellis, “Isospin asymmetry in nuclei and neutron stars,” Physics Reports, 411, 325-375, 2005.

K. Stockinger, J. Shalf, E. W. Bethel and K. Wu, “Dex: Increasing the capability of scientific data analysis pipelines by using efficient bitmap indices to accelerate scientific visualization,” Proc. of 2005 Scientific and Statistical Database Management Conference, p. 35, 2005.

K. Stockinger, J. Shalf, E. W. Bethel and K. Wu, “Query-driven visualization of large data sets,” IEEE Visualization 2005, Minneapolis, MN, Oct. 2005.

K. Stockinger, K. Wu, R. Brun and P. Canal, “Bitmap indices for fast end-user physics analysis in Root,” Nuclear Instruments and Methods in Physics Research Section A, 539, 99, 2006.

K. Stockinger, K. Wu, S. Campbell, S. Lau, M. Fisk, E. Gavrilov, A. Kent, C. E. Davis, R. Olinger, R. Young, J. Prewett, P. Weber, T. P. Caudell, E. W. Bethel and S. Smith, “Network traffic analysis with query driven visualization,” Proc. of the ACM/IEEE SC05 conference, Seattle, Wash., Nov. 2005.

P. Strack and A. Burrows, “Generalized Boltzmann formalism for oscillating neutrinos,” Physics Rev. D, 71(9), 093004, May 2005.

L. Sugiyama, W. Park, H. R. Strauss, G. Fu, J. Breslau, J. Chen and S. Klasky, “Plasmas beyond MHD: two-fluids and symmetry breaking,” Journal of Physics: Conference Series 16, 54-58, 2005.

F. D. Swesty, E. S. Myra, “Multigroup models of the convective epoch in core collapse supernovae,” Journal of Physics: Conference Series, 16, 380-389, 2005.

S. Tamhankar, et al., “Charmonium Spectrum from Quenched QCD with Overlap Fermions,” Nuclear Physics B (Proc. Suppl.), 140, 434, 2005.

T. A. Thompson, E. Quataert, and A. Burrows, “Viscosity and Rotation in Core-Collapse Supernovae,” Astrophysical Journal, 620:861-877, Feb. 2005.

D. Trebotich, G. H. Miller, P. Colella, D. T. Graves, D. F. Martin, P. O. Schwartz, “A tightly coupled particle-fluid model for DNA-laden flows in complex microscale geometries,” Proc. of the Third MIT Conference on Computational Fluid and Solid Mechanics, June 2005.

D. Trebotich, P. Colella, G. H. Miller, “A stable and convergent scheme for viscoelastic flow in contraction channels,” J. Comput. Phys. 205, p. 315-342, 2005.

D. Todd, D. Leitner, C. Lyneis, J. Qiang, D. Grote, “Particle-in-Cell Simulations of the VENUS Ion Beam Transport System,” Proc. PAC2005, Knoxville, Tenn., May 2005.

D. Toussaint and C. Davies (MILC and UKQCD Collaborations), “The Ω− and the strange quark mass,” Nuclear Physics B (Proc. Suppl.), 140, 234, 2005.

T. Tu, D. O’Hallaron, and O. Ghattas, “Scalable parallel octree meshing for terascale applications,” Proceedings of the ACM/IEEE SC05 conference, Seattle, Wash., November 2005.

R. Tweedie, “Light meson masses and decay constants in a 2+1 flavour domain wall QCD,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 080, 2005.

F. Y. Tzeng and K. L. Ma, “Intelligent Feature Extraction and Tracking for Large-Scale 4D Flow Simulations,” Proc. of the ACM/IEEE SC05 conference, Seattle, Wash., Nov. 2005.

M. Valorani, F. Creta, D. A. Goussis, H. N. Najm, and J. C. Lee, “Chemical kinetics mechanism simplification via CSP,” Computational Fluid and Solid Mechanics 2005, K. J. Bathe, ed., p. 900, Elsevier, 2005.

M. Valorani, D. A. Goussis, F. F. Creta and H. N. Najm, “Higher Order Corrections in the Approximation of Low Dimensional Manifolds and the Construction of Simplified Problems with the CSP Method,” Journal of Computational Physics, 209(2), 754-786, Nov. 2005.

J.-L. Vay, P. Colella, P. McCorquodale, D. Serafini, V. Straalen, A. Friedman, D. P. Grote, J.-C. Adam, A. Héron, “Application of Adaptive Mesh Refinement to Particle-In-Cell Simulations of Plasmas and Beams,” Workshop on Multiscale Processes in Fusion Plasmas, IPAM, Los Angeles, Calif., Jan. 2005.

J.-L. Vay, M. Furman, R. Cohen, A. Friedman, D. Grote, “Initial self-consistent 3-D electron-cloud simulations of LHC beam with the code WARP+POSINST,” Proc. PAC2005, Knoxville, Tenn., May 2005.

J. Vetter, B. R. de Supinski, J. May, L. Kissel and S. Vaidya, “Evaluating High Performance Computers,” in Concurrency: Practice and Experience, 17, 1239, 2005.

J. S. Vetter, S. R. Alam, T. H. Dunigan, Jr., M. R. Fahey, P. C. Roth and P. H. Worley, “Early Evaluation of the Cray XT3 at ORNL,” Proc. of the 47th Cray User Group Conference, Albuquerque, NM, May 2005.

R. Vuduc, J. Demmel and K. Yelick, “OSKI: A library of automatically tuned sparse matrix kernels,” Proc. of SciDAC 2005, Journal of Physics: Conference Series, San Francisco, Calif., June 2005.

R. Vuduc and H.-J. Moon, “Fast sparse matrix-vector multiplication by exploiting variable block structure,” Proc. of the International Conference on High-Performance Computing and Communications, Sorrento, Italy, Sept. 2005.

G. Wallace, et al., “Tools and Applications for Large-Scale Display Walls,” IEEE Computer Graphics & Applications, Special Issue on Large Displays, 25, 24, July/Aug. 2005.

R. Walder, A. Burrows, C. D. Ott, E. Livne, I. Lichtenstadt and M. Jarrah, “Anisotropies in the Neutrino Fluxes and Heating Profiles in Two-dimensional, Time-dependent, Multigroup Radiation Hydrodynamics Simulations of Rotating Core-Collapse Supernovae,” Astrophysical Journal, 626:317-332, June 2005.

W. Wan, Y. Chen, S. E. Parker, “Gyrokinetic delta f simulation of the collisionless and semicollisional tearing mode instability,” Physics of Plasmas 12, 012311, 2005.

J. Wan, M. S. Shephard, “Contact Surface Control in Forming Simulations,” Proc. 8th US Nat. Congress on Comp. Mech., 2005.

W. X. Wang, “Neoclassical and turbulent transport in shaped toroidal plasmas,” invited talk, APS/DPP meeting, Denver, Colo., 2005.

J. Weinberg, M. O. McCracken, A. Snavely, and E. Strohmaier, “Quantifying Locality in the Memory Access Patterns of HPC Applications,” Proc. of the ACM/IEEE SC05 conference, Seattle, Wash., Nov. 2005.

V. Wheatley, D. I. Pullin and R. Samtaney, “On the regular refraction of a MHD shock at a density interface,” Journal of Fluid Mechanics, 522, 179-214, 2005.

V. Wheatley, D. I. Pullin and R. Samtaney, “Stability of an impulsively accelerated density interface in MHD,” Physical Review Letters 95, 12, 2005.

B. S. White, S. A. McKee, B. R. de Supinski, B. Miller, D. Quinlan and M. Schulz, “Improving the Computational Intensity of Unstructured Mesh Applications,” Proceedings of the International Conference on Supercomputing, Heidelberg, Germany, June 2005.

B. Willems, M. Henninger, T. Levin, N. Ivanova, V. Kalogera, K. McGhee, F. X. Timmes, and C. L. Fryer, “Understanding Compact Object Formation and Natal Kicks. I. Calculation Methods and the Case of GRO J1655-40,” Astrophysical Journal, 625, 324-346, May 2005.

F. Wolf, A. Malony, S. Shende, A. Morris, “Trace-Based Parallel Performance Overhead Compensation,” Proc. of the International Conference on High Performance Computing and Communications (HPCC), Sorrento, Italy, Sept. 2005.

S. E. Woosley and T. Janka, “The Physics of Core-Collapse Supernovae,” Nature Physics, 1, 147, 2005.

P. H. Worley, S. Alam, T. H. Dunigan, Jr., M. R. Fahey and J. S. Vetter, “Comparative Analysis of Interprocess Communication on the X1, XD1, and XT3,” Proc. of the 47th Cray User Group Conference, Albuquerque, NM, May 2005.

P. H. Worley and J. B. Drake, “Performance Portability in the Physical Parameterizations of the Community Atmospheric Model,” International Journal for High Performance Computer Applications, 19(3), p. 187, Aug. 2005.

J. C. Wright, L. A. Berry, P. T. Bonoli, D. B. Batchelor, E. F. Jaeger, M. D. Carter, E. D’Azevedo, C. K. Phillips, H. Okuda, R. W. Harvey, D. N. Smithe, J. R. Myra, D. A. D’Ippolito, M. Brambilla and R. J. Dumont, “Nonthermal particle and full-wave diffraction effects on heating and current drive in the ICRF and LHRF regimes,” Nuclear Fusion 45, 1411, 2005.

J. C. Wright, P. T. Bonoli, M. Brambilla, E. D’Azevedo, L. A. Berry, D. B. Batchelor, E. F. Jaeger, M. D. Carter, C. K. Phillips, H. Okuda, R. W. Harvey, J. R. Myra, D. D’Ippolito and D. N. Smithe, “Full-wave Electromagnetic Field Simulation of Lower Hybrid Waves in Tokamaks,” Radio Frequency Power in Plasmas, Proc. 16th Topical Conf., Park City, Utah, 2005.

K. Wu, E. J. Otoo, and A. Shoshani, “An efficient compression scheme for bitmap indices,” ACM Transactions on Database Systems, to appear.

K. Wu, J. Gu, J. Lauret, A. M. Poskanzer, A. Shoshani, A. Sim, and W. M. Zhang, “Grid collector: Facilitating efficient selective access from data grids,” Proc. of International Supercomputer Conference, Heidelberg, Germany, June 2005.

B. Wylie, B. Mohr, F. Wolf, “Holistic hardware counter performance analysis of parallel programs (short version),” Proceedings of Parallel Computing 2005 (ParCo 2005), Malaga, Spain, Sept. 2005.

A. Yamaguchi, “Chiral condensate and spectral flows on 2+1 flavours Domain Wall QCD from QCDOC,” Proc. of XXIII International Symposium on Lattice Field Theory (Lattice 2005), 135, 2005.

T. Yamaguchi, “Entraining cloud-topped boundary layers,” M.S. Thesis, Colorado State University, 2005.

C. Yang, “Solving Large-scale Eigenvalue Problems in SciDAC Applications,” SciDAC 2005 meeting, Journal of Physics: Conference Series, 16, p. 425-434, San Francisco, Calif., June 2005.

C. Yang, W. Gao, Z. Bai, X. Li, L.-Q. Lee, P. Husbands, and E. Ng, “An Algebraic Sub-Structuring for Large-Scale Eigenvalue Calculation,” SIAM Journal on Scientific Computing, 27, 873, 2005.

U. M. Yang, “Parallel Algebraic Multigrid Methods - High Performance Preconditioners,” chapter in Numerical Solution of Partial Differential Equations on Parallel Computers, A. M. Bruaset, P. Bjørstad, and A. Tveito, eds., Springer-Verlag, 2006.

L. Yin, X. Luo, M. S. Shephard, “Identifying and meshing thin section of 3-d curved domains,” Proc. 14th International Meshing Roundtable, p. 33-54, Springer, 2005.

H. Yu and K.-L. Ma, “A Study of I/O Techniques for Parallel Visualization,” Journal of Parallel Computing, 31, 2, p. 167-183, Feb. 2005.

P. A. Young, C. Meakin, D. Arnett, and C. L. Fryer, “The Impact of Hydrodynamic Mixing on Supernova Progenitors,” Astrophysical Journal Letters, 629:L101-L104, Aug. 2005.

Y. Zhang, X. S. Li, and O. Marques, “Towards an Automatic and Application-Based Eigensolver Selection,” LACSI Symposium 2005, Santa Fe, NM, Oct. 2005.

Y. Zhou, R. Shepard and M. Minkoff, “Computing Eigenvalue Bounds for Iterative Subspace Methods,” Computer Physics Comm. 167, 90-102, 2005.

M. Zingale, S. E. Woosley, C. A. Rendleman, M. S. Day, and J. B. Bell, “Three-dimensional Numerical Simulations of Rayleigh-Taylor Unstable Flames in Type Ia Supernovae,” Astrophysical Journal, 632, 1021-1034, Oct. 2005.

DISCLAIMER
This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or The Regents of the University of California.

Ernest Orlando Lawrence Berkeley National Laboratory is an equal opportunity employer.

This report was researched, written and edited by Jon Bashor and John Hules of the LBNL Computing Sciences Communications Group, and designed by the Creative Services Office at LBNL. JO11877

LBNL-61963
