High-Precision Floating-Point Arithmetic in Scientific Computation

FEATURE: FLOATING-POINT OPERATIONS

IEEE 64-bit floating-point arithmetic is sufficient for most scientific applications, but a rapidly growing body of scientific computing applications requires a higher level of numeric precision. New software packages have yielded interesting scientific results that suggest numeric precision in scientific computations could be as important to program design as algorithms and data structures.

DAVID H. BAILEY
Lawrence Berkeley National Laboratory

Computing in Science & Engineering, May/June 2005
1521-9615/05/$20.00 © 2005 IEEE. Copublished by the IEEE CS and the AIP.


Virtually all present-day computer systems, from PCs to the largest supercomputers, implement the IEEE 64-bit floating-point arithmetic standard, which provides 53 mantissa bits, or approximately 16-decimal-digit accuracy. For most scientific applications, this is more than sufficient—in fact, for some applications (such as routine processing of experimental data), the 32-bit standard often provides sufficient accuracy.

However, for a rapidly expanding body of applications—ranging from interesting new mathematical computations to large-scale physical simulations performed on highly parallel supercomputers—64-bit IEEE arithmetic is no longer sufficient. In these applications, portions of the code typically involve numerically sensitive calculations that produce results of questionable accuracy using conventional arithmetic. These inaccurate results could in turn induce other errors, such as taking the wrong path in a conditional branch.

Few of the scientists and engineers currently involved in technical computing have rigorous backgrounds in numerical analysis. What’s more, this scarcity of numerical expertise is likely to worsen in the future rather than improve, in part because of the dearth of students and young researchers interested in numerical mathematics. Thus, while some might argue that numerically sensitive calculations can be remedied by using different algorithms or coding techniques, in practice such changes are highly error-prone and very expensive. It’s usually easier, cheaper, and more reliable to use high-precision arithmetic to overcome these difficulties, even if other remedies are theoretically feasible.

This article focuses on high-precision floating-point (rather than integer) arithmetic, although high-precision (that is, multiword) integer arithmetic is an interesting arena in its own right, with numerous applications in both pure and applied mathematics. When you use a secure Web site to purchase a book or computer accessory, for example, at some point the browser software performs high-precision integer computations to communicate credit-card numbers and other secure information. Closely related to this technology is ongoing research in large integer factorization algorithms, in which several remarkable advances have recently been made.1

High-Precision Software

Software packages that perform high-precision floating-point arithmetic have been available since the early days of computing—a package Richard Brent wrote, for example, has been in use since the 1970s.2 However, many of these packages require the user to rewrite a scientific application with individual subroutine calls for each arithmetic operation. The difficulty of making such changes, and the difficulty of debugging the resulting code, has deterred all but a few scientists from using such software. In the past few years, though, high-precision software packages have increasingly included high-level language interfaces that make such conversions relatively painless. These packages typically use custom data types and operator overloading features, which are now available in languages such as C++ and Fortran 90, to facilitate conversion.

Even more advanced high-precision computation facilities are available in the commercial products Mathematica and Maple, which incorporate arbitrary-precision arithmetic in a very natural way. Although these packages are extremely useful in some contexts, they’re generally not as fast as specialized high-precision arithmetic packages, and they don’t provide a means to convert existing scientific programs written in, say, Fortran 90 or C++, to use their high-precision arithmetic facilities.

The following list gives some examples of freely available high-precision arithmetic software packages (ARPREC, DDFUN, QD, and MPFUN90 are available on my Web site: http://crd.lbl.gov/~dhbailey/mpdist or from www.experimentalmath.info):

• ARPREC includes routines to perform arithmetic with an arbitrarily high (but preset) level of precision, including many algebraic and transcendental functions. High-level language interfaces available for C++ and Fortran 90 support real, integer, and complex data types. Xiaoye S. Li, Brandon Thompson, and I wrote this software.

• DDFUN provides “double-double” (approximately 31 digits) arithmetic in an all-Fortran 90 environment. A high-level language interface supports real, integer, and complex data types. This package is much faster than using arbitrary precision or “quad-double” (approximately 62 digits) software in applications where 31 digits are sufficient, and it’s often faster than using the real*16 data type available in some Fortran systems.

• FMLIB is a low-level multiple-precision library, and FMZM90 provides high-level Fortran 90 language interfaces for real, integer, and complex data types. David Smith of Loyola Marymount University wrote this software (http://myweb.lmu.edu/dmsmith/FMLIB.html).

• GMP includes an extensive library of routines to support high-precision integer, rational, and floating-point calculations. Produced by a volunteer effort, GMP is distributed under a GNU license by the Free Software Foundation (www.swox.com/gmp).

• Predicates was specifically designed to perform the numerically sensitive operations that often arise in computational geometry. Jonathan Shewchuk of the University of California, Berkeley, wrote this software (http://www-2.cs.cmu.edu/~quake/robust.html).

• QD includes routines to perform double-double and quad-double arithmetic. High-level language interfaces are available for C++ and Fortran 90, supporting real, integer, and complex data types. The QD package is much faster than using arbitrary-precision software in applications where 31 or 62 digits are sufficient. Xiaoye S. Li, Yozo Hida, and I wrote this software.

• The MPFR library is a C library for multiple-precision floating-point computations with exact rounding, and is based on the GMP multiple-precision library (www.mpfr.org).

• MPFR++ is a high-level C++ interface to MPFR (http://perso.ens-lyon.fr/nathalie.revol/software.html).

• MPFUN90 is equivalent to ARPREC in its user-level functionality, but it’s written entirely in Fortran 90 and provides a Fortran 90 language interface.

• VPA provides an arbitrary-precision functionality together with a Fortran language interface. J.L. Schonfelder of the UK’s N.A. Software wrote this application (http://pcwww.liv.ac.uk/~jls/vpa20.htm).

Several of these packages include sample application programs—for example, the ARPREC and MPFUN90 packages include programs that implement the PSLQ algorithm for integer-relation detection. The ARPREC, MPFUN90, and QD packages also include programs to evaluate definite integrals to high precision. In fact, it’s now possible to evaluate many integrals that arise in mathematics or physics to very high accuracy, even in cases in which the integrand functions have singularities such as vertical derivatives or infinite values at endpoints. When these high-precision integration schemes are combined with integer-relation-detection methods, it’s often possible to obtain analytic evaluations of definite integrals that otherwise have no known solution.3,4


Using high-precision software definitely increases computer runtimes (compared to using conventional machine precision). Computations using double-double precision, for example, typically run five times longer than with ordinary 64-bit arithmetic. This figure rises to 25 times for quad-double arithmetic, and to more than 50 times for 100-digit arithmetic. Beyond 100-digit precision, the computational cost for precision p increases roughly as p^2 up to roughly 1,000 digits, after which the cost increases as roughly p log p (presuming we’re using fast Fourier transform [FFT]-based multiplication, which is available in ARPREC and MPFUN90). Fortunately, it’s often not necessary to employ high-precision arithmetic for the entire calculation; one critical loop is often all that requires this higher accuracy.
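Double-double arithmetic is built from error-free transformations that cost only a few ordinary machine operations each, which is why the slowdown is a small constant factor rather than orders of magnitude. A minimal sketch of the core building block, Knuth's two-sum, follows (in Python for illustration; the packages above implement this in Fortran 90 or C++, and the dd_add below is a simplified version of their fully renormalized addition):

```python
def two_sum(a, b):
    # Error-free transformation (Knuth): returns (s, e) with
    # s = fl(a + b) and s + e == a + b exactly, using only doubles.
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def dd_add(x, y):
    # Add two double-double values represented as (hi, lo) pairs.
    # A simplified sketch, not the full renormalization in QD/DDFUN.
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# Plain doubles lose the 1.0 entirely; double-double keeps it.
print((1e16 + 1.0) - 1e16)              # 0.0 in ordinary arithmetic
print(dd_add((1e16, 0.0), (1.0, 0.0)))  # (1e+16, 1.0)
```

Because every operation here is an ordinary hardware add or subtract, a double-double add costs a fixed handful of machine instructions, consistent with the roughly fivefold slowdown cited above.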

Let’s examine some of the scientific applications that use high-precision arithmetic.

Climate Modeling

We all know that weather and climate simulations are fundamentally chaotic—if microscopic changes occur in the present state, within a certain period of simulated time, the future state will change completely. As a result, computational scientists involved in climate-modeling applications have long resigned themselves to knowing that their codes quickly diverge from any baseline calculation, even if they only change the number of processors used to run the code. Consequently, it’s not only difficult for researchers to compare results, but it’s often problematic to even determine whether they correctly deployed their code on a given system.

Recently, Helen He and Chris Ding at Lawrence Berkeley National Laboratory investigated this non-reproducibility phenomenon in a widely used climate-modeling code.5 They observed that almost all the numerical variation occurred in one inner product loop in the atmospheric data assimilation step and in a similar operation in a large conjugate gradient calculation. He and Ding found that a straightforward solution was to employ double-double arithmetic (using the DDFUN package) for these loops. This single change dramatically reduced the entire application’s numerical variability, permitting computer runs to be compared for much longer runtimes than before.

In retrospect, it’s not clear that handling these sums in this manner is the best solution—other approaches could preserve meaningful numerical accuracy and yield reproducible results. Such phenomena deserve further study and are currently being investigated, but in the meantime, He and Ding’s solution is a straightforward and effective way to deal with this problem.
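One alternative in this spirit is compensated summation, which carries each addition's rounding error along in a second accumulator. The sketch below uses Neumaier's variant of the technique; it is an illustration of the general idea, not the code He and Ding used:

```python
def neumaier_sum(values):
    # Compensated summation (Neumaier's variant of Kahan summation):
    # a running correction recovers the low-order bits each add discards.
    total = 0.0
    correction = 0.0
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            correction += (total - t) + x   # low bits of x were lost
        else:
            correction += (x - t) + total   # low bits of total were lost
        total = t
    return total + correction

data = [1e16, 1.0, -1e16]
print(sum(data))           # 0.0: the 1.0 vanishes in plain summation
print(neumaier_sum(data))  # 1.0
```

Python's math.fsum applies a related technique to produce correctly rounded sums of arbitrary sequences of doubles.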

Supernova Simulations

Edward Baron, Peter Hauschildt, and Peter Nugent recently used the QD package, which provides double-double (128-bit or 31-digit) and quad-double (256-bit or 62-digit) data types, to solve for the nonlocal thermodynamic equilibrium populations of iron and other atoms in the atmospheres of supernovae and astrophysical objects.6 Iron, for example, could exist as Fe II in the outer parts of the atmosphere, but in the inner parts, Fe IV or Fe V might dominate. Introducing artificial cutoffs leads to numerical glitches, so it is necessary to solve for all these populations simultaneously. Because the relative population of any state from the dominant stage is proportional to the exponential of the ionization energy, the dynamic range of these numerical values can be quite large.

To handle this potentially very large dynamic range, yet also perform the computation in reasonable time, Baron, Hauschildt, and Nugent employed a dynamic scheme to determine whether to use 64-bit, 128-bit, or 256-bit arithmetic both in constructing the matrix elements and in solving the linear system. The portions of the code using 256-bit arithmetic took up a large fraction of the total runtime, so the researchers are planning an even faster version of the 256-bit routines to reduce these runtimes.

Coulomb N-Body Atomic System Simulations

Researchers perform numerous computations with high-precision arithmetic to study atomic-level Coulomb systems. Alexei Frolov of Queen’s University in Ontario, Canada, for example, used high-precision software to solve the generalized eigenvalue problem (H − ES)C = 0, where the matrices H and S are large (typically 5,000 × 5,000 in size) and very nearly degenerate.7,8 Until recently, progress in this arena was severely hampered by the numerical difficulties these matrices induce.

Frolov did his calculations using the MPFUN package, with a numeric precision level exceeding 100 digits. He noted to me that in this way, “we can consider and solve the bound state few-body problems which have been beyond our imagination even four years ago.” He also used MPFUN to compute the matrix elements of the Hamiltonian matrix H and the overlap matrix S in four- and five-body atomic problems. Future plans include generalizing this method for use in photodetachment and scattering problems.

Studies of the Fine Structure Constant

In the past few years, researchers have made significant progress in using high-precision arithmetic to obtain highly accurate solutions to the Schrödinger equation for the lithium atom. In particular, they’ve calculated the nonrelativistic ground-state energy to an accuracy of a few parts in a trillion, a factor of 1,500 improvement over the best previous results. With these highly accurate wave functions, Zong-Chao Yan and others have tested the relativistic and quantum electrodynamics effects at the 50 parts per million (ppm) level as well as the 1 ppm level.9 Yan has also calculated several properties of lithium and lithium-like ions, including the oscillator strengths for certain resonant transitions, isotope shifts in some states, dispersion coefficients, and Casimir-Polder effects between two lithium atoms.

Theoretical calculations of the fine structure splittings in helium atoms have now advanced to the planning of highly accurate experiments. When certain additional computations finish, we could get a unique atomic physics value of the fine structure constant to an accuracy of 16 parts per billion.10

Electromagnetic Scattering Theory

A key operation in computational studies of electromagnetic scattering is to find the branch points of the spheroidal wave function’s asymptotic expansion, typically by using Newton-Raphson iterations. Unfortunately, the accuracy of the results from conventional 64-bit arithmetic for these Newton iterations significantly limits the range of the wave functions that we can study.

Ben Barrowes has used the MPFUN package to remedy this situation.11 His calculation required the conversion of a large body of existing code, which was facilitated by the Fortran 90 language translation modules in the MPFUN package. However, in the course of his work, Barrowes found that arbitrary-precision versions of the Fortran 90 array operations were required. He then worked with the author to implement these functions, which are now available as an optional part of the MPFUN90 package.
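The underlying tactic, rerunning a Newton iteration at an elevated working precision, can be sketched with Python's standard decimal module. The square-root example below is purely illustrative (it is not Barrowes' branch-point computation, and decimal stands in for MPFUN):

```python
from decimal import Decimal, getcontext

def newton_sqrt(a, digits):
    # Newton's iteration x <- (x + a/x)/2 for sqrt(a), carried at
    # 'digits'-digit working precision plus guard digits.
    getcontext().prec = digits + 10
    a = Decimal(a)
    x = a
    tol = Decimal(10) ** -(digits + 5)
    while True:
        x_new = (x + a / x) / 2
        if abs(x_new - x) < tol:
            break
        x = x_new
    getcontext().prec = digits
    return +x_new  # unary plus rounds the result to the target precision

print(newton_sqrt(2, 50))  # sqrt(2) to 50 significant digits
```

Because Newton's method roughly doubles the number of correct digits per step, only a handful of extra iterations are needed even at 100-digit precision, so the cost is dominated by the multiprecision arithmetic itself.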

Vortex Sheet Roll-Up Simulations

Researchers have exploited high-precision arithmetic for some time in their attempts to resolve complex phenomena associated with fluid flows. In 1993, for example, Russel Caflisch used 32-, 64-, and even 128-digit arithmetic in studies of singularity formation under three-dimensional incompressible Euler flows.12

A long-standing problem of fluid dynamics is whether a fluid undergoing vortex sheet roll-up assumes a true power law spiral. This problem’s resulting calculation, which is fairly long-running even with standard precision, becomes prohibitively expensive on a single-processor system with high-precision arithmetic. However, Robert Krasny, Richard Pelz, and I were able to implement the calculation on a highly parallel system using 64 processors.13 The calculation ultimately used 10,000 CPU-hours, and the result showed that when we reduce a critical parameter below 0.02, the solution is no longer a smooth spiral, but instead develops “blisters” that aren’t merely artifacts of insufficient precision (see Figure 1). These blisters aren’t well understood, and thus warrant further investigation.

Computational Geometry and Grid Generation

Grid generation, contour mapping, and several other computational geometry applications rely on highly accurate arithmetic. To illustrate this, William Kahan and Joseph Darcy showed that small numerical errors in the computation of the point nearest to a given point on a line intersecting two planes can result in the computed point being so far from either plane as to rule out the solution being correct for a reasonable perturbation of the original problem.14

Figure 1. Vortex roll-up computed with quad-double arithmetic. The “blisters” aren’t merely artifacts of insufficient precision; their source is not yet understood.


Two commonly used computational geometry operations are the orientation test and the in-circle test. The orientation test attempts to unambiguously determine whether a point lies to the left of, to the right of, or on a line or plane defined by other points. In a similar way, the in-circle test determines whether a point lies inside, outside, or on a circle defined by other points. Each of these tests is typically performed by evaluating the sign of a determinant expressed in terms of the points’ coordinates. If these coordinates are expressed as single- or double-precision floating-point numbers, round-off error might lead to an incorrect result when the true determinant nears zero. In turn, this misinformation can lead an application to fail or produce incorrect results, as noted earlier.

To remedy such problems, Jonathan Shewchuk produced a software package called Predicates (http://www-2.cs.cmu.edu/~quake/robust.html) that performs “adaptive” floating-point arithmetic, which dynamically increases numeric precision until an unambiguous result is obtained.
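The determinant-sign idea is easy to sketch. The toy comparison below (illustrative only; Shewchuk's adaptive predicates are far faster because they escalate precision only when the sign is in doubt) contrasts a naive double-precision orientation test with an exact rational one:

```python
from fractions import Fraction

def sign(x):
    return (x > 0) - (x < 0)

def orient2d_naive(a, b, c):
    # Sign of the determinant |b-a  c-a| in double precision.
    return sign((b[0] - a[0]) * (c[1] - a[1])
                - (b[1] - a[1]) * (c[0] - a[0]))

def orient2d_exact(a, b, c):
    # Same determinant in exact rational arithmetic: the sign is
    # always correct, at a substantial cost in speed.
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    return sign((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))

# Nearly collinear points: round-off drives the naive determinant to zero.
a, b, c = (0.0, 0.0), (1e16, 1.0), (1.0, 1e-16)
print(orient2d_naive(a, b, c))  # 0: reported "collinear" (wrong)
print(orient2d_exact(a, b, c))  # +1 or -1: a definite, correct orientation
```

Adaptive predicates get the same guaranteed sign while running at nearly double-precision speed in the common, well-conditioned case.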

Computational Number Theory

Computational number theory is an area of current mathematical research that has applications in fields such as cryptography. These computations frequently involve high-precision arithmetic in general and high-precision floating-point arithmetic in particular. For example, the current largest known explicit prime number—namely, 2^24036583 − 1 (with 7 million decimal digits)—was recently proven prime via the Lucas-Lehmer test. This test requires very large-integer arithmetic, which in turn uses FFTs to perform multiplications; these multiplications are merely large linear convolutions.1 The numerical accuracy of IEEE arithmetic limits the size of these convolutions that can be done reliably—unless we store just a few bits in each word of memory (which means we waste memory). Thus, fast double-double arithmetic has a significant advantage in these computations.
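The Lucas-Lehmer test itself is only a few lines; in the sketch below, Python's built-in big integers stand in for the FFT-based multiplication that production codes use:

```python
def lucas_lehmer(p):
    # Lucas-Lehmer test for an odd prime p: M = 2^p - 1 is prime iff
    # s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k^2 - 2 (mod M).
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # the squaring is a large linear convolution
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)])
# Mersenne prime exponents among these: [3, 5, 7, 13, 17, 19]
```

For exponents in the millions, essentially all the runtime is spent in the repeated squaring, which is why the reliable size of the underlying floating-point convolutions matters so much.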

Many questions in mathematical number theory revolve around the celebrated Riemann zeta function

ζ(s) = Σ_{n≥1} 1/n^s, (1)

which, among other things, encodes profound information about the distribution of prime numbers. The best method currently known for evaluating the classical function π(x), the number of primes not exceeding x, involves calculations of ζ(3/2 + it), but Will Galway has observed that calculating zeta using IEEE 64-bit arithmetic only lets us accurately compute π(x) for x up to 10^13.1 Thus, there is considerable interest in using higher-precision floating-point arithmetic, which will permit researchers to press up toward the current frontier of knowledge—namely, π(10^22).

Experimental Mathematics

High-precision computations have proven to be an essential tool for the emerging discipline of experimental mathematics—namely, the usage of modern computing technology as an active agent of exploration in mathematical research.15,16 One of the key techniques used here is the PSLQ integer-relation-detection algorithm, which searches for linear relationships satisfied by a set of numerical values.17 Integer-relation computations require very high precision in the input vector to obtain numerically meaningful results. Computations with several hundred digits are typical, although one extreme instance required 50,000-digit arithmetic to obtain the desired result.18
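The flavor of integer-relation detection can be conveyed by a toy brute-force search. This is purely illustrative: PSLQ finds relations among values known to hundreds of digits, a regime where exhaustive search is hopeless:

```python
from itertools import product

def naive_relation(values, bound=10, tol=1e-9):
    # Look for integers (a_1, ..., a_n), not all zero, such that
    # a_1*v_1 + ... + a_n*v_n is numerically zero. PSLQ does this
    # efficiently at high precision; this stand-in only handles
    # tiny cases with small coefficients.
    for coeffs in product(range(-bound, bound + 1), repeat=len(values)):
        if any(coeffs) and abs(sum(a * v for a, v in zip(coeffs, values))) < tol:
            return coeffs
    return None

# alpha = 1 + sqrt(2) satisfies alpha^2 - 2*alpha - 1 = 0, so the vector
# (1, alpha, alpha^2) admits the relation (-1, -2, 1), up to scaling.
alpha = 1 + 2 ** 0.5
print(naive_relation([1.0, alpha, alpha ** 2]))
```

The tolerance illustrates why precision matters: with only 16 digits in the inputs, a "relation" found this way could be a numerical accident, which is why genuine discoveries are verified to hundreds or thousands of digits.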

Figure 2. Some new math identities found by high-precision computations.

The best-known application of PSLQ in experimental mathematics is the 1995 discovery, by computer, of what is now known as the Bailey-Borwein-Plouffe (BBP) formula for π:

π = Σ_{k≥0} 1/16^k [4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)]. (2)

Remarkably, this formula lets us calculate binary or hexadecimal digits beginning at the nth digit, without needing to calculate any of the first n − 1 digits, by using a simple scheme that requires very little memory and no multiple-precision arithmetic software.16,19 The BBP formula is currently used in the g95 Fortran compiler as part of transcendental function evaluation software.
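The digit-skipping scheme is short enough to sketch in full: the head of each BBP series is computed with modular exponentiation, keeping only fractional parts, so no multiprecision arithmetic is ever needed. (Illustrative Python; production implementations take more care with round-off.)

```python
def pi_hex_digits(n, count=6):
    # Hex digits of pi starting at position n+1 after the point, via the
    # BBP formula: O(n) modular exponentiations, a few doubles of memory.
    def frac_series(j):
        # Fractional part of sum_k 16^(n-k) / (8k + j).
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while 16.0 ** (n - k) / (8 * k + j) > 1e-17:
            s = (s + 16.0 ** (n - k) / (8 * k + j)) % 1.0
            k += 1
        return s

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    out = ""
    for _ in range(count):
        x *= 16
        d = int(x)
        out += "0123456789ABCDEF"[d]
        x -= d
    return out

print(pi_hex_digits(0))  # '243F6A': pi = 3.243F6A8885... in hex
```

Because the modular powers keep every intermediate quantity in [0, 1), double precision suffices to extract a run of hex digits even for n in the billions.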

Since 1995, researchers have used the PSLQ-based computational approach to find—and then prove—numerous other formulas of this type.16 Figure 2 shows a sampling of these discoveries, including a recent discovery regarding the integrals

. (3)

The ?= notation in Figure 2 means that this “identity” has been discovered numerically and verified to 2,000-digit accuracy, but no formal proof is yet known.

These computer-discovered formulas have implications for the age-old question of whether (and why) the digits of constants such as π and log 2 appear statistically random.16,20 They’ve also attracted considerable attention, including feature articles in Scientific American and Science News.21,22

Moreover, this same line of investigation has led to a formal proof of normality (statistical randomness in a specific sense) for an uncountably infinite class of explicit real numbers. The simplest example of this class is the constant

α_{2,3} = Σ_{n≥1} 1/(3^n 2^(3^n)), (4)

which is 2-normal: every string of m binary digits appears, in the limit, with frequency 2^(−m).16,23 For an efficient pseudorandom number generator based on this constant’s binary digits, visit http://crd.lbl.gov/~dhbailey/mpdist.

Identification of Constants in Quantum Field Theory

A few years ago, British physicist David Broadhurst found an application of high-precision PSLQ computations in quantum field theory.24 In particular, he found that by using PSLQ computations in each of 10 cases with unit or zero mass, the finite part of the scalar three-loop tetrahedral vacuum Feynman diagram reduces to four-letter “words” that represent iterated integrals in an alphabet of seven “letters”—namely, the one-forms ω := dx/x and ω_k := dx/(λ^(−k) − x), where λ := (1 + √−3)/2 is the primitive sixth root of unity, and k runs from 0 to 5. In this context, a four-letter word is a four-dimensional iterated integral, such as

U = Σ_{j>k>0} (−1)^(j+k)/(j^3 k) (5)

and

V = Re[ Σ_{j>k>0} (−1)^j cos(2πk/3)/(j^3 k) ]. (6)

The remaining terms in the diagrams reduce to products of constants found in Feynman diagrams with fewer loops. Figure 3 shows these 10 cases; in the diagrams, dots indicate particles with nonzero rest mass. Figure 4 gives the formulas for the corresponding constants that researchers have found by using PSLQ. Here, the constant C = Σ_{k>0} sin(kπ/3)/k^2.

In each case, these formulas were later proven to be correct, but until the experimental results of the computations were found, no one had any solid reason to believe that such relations might exist. Indeed, the adjective “experimental” is quite appropriate here, because the PSLQ computations played a role entirely analogous to (and just as crucial as) a conventional laboratory experiment.

Figure 3. The 10 Feynman diagrams for the 10 tetrahedral cases (V1, V2A, V2N, V3T, V3S, V3L, V4A, V4N, V5, and V6).


We’ve just reviewed a brief survey of the rapidly expanding usage of high-precision arithmetic in modern scientific computing, but it’s worth noting that all these examples have arisen in the past 10 years. We could be witnessing the birth of a new era of scientific computing, in which the numerical precision required for a computation is as important to program design as algorithms and data structures.

In one respect, perhaps it’s not a coincidence that interest in high-precision arithmetic has occurred during the same period that a concerted attempt has been made to implement many scientific computations on highly parallel and distributed systems. These systems have made possible much larger-scale runs than before, greatly magnifying numerical difficulties. Even single-processor system memory and performance have greatly expanded (a gift of Moore’s law), enabling much larger computations than were feasible only a few years ago.

Other computations being performed on these systems today could have significant numerical difficulties, but those who use these codes might not yet realize the extent of these difficulties. With the advent of relatively painless software to convert programs to use high-precision arithmetic, researchers can periodically run a large scientific computation with double-double or higher-precision arithmetic and determine how accurate its results really are. In fact, the software tools currently available for high-precision arithmetic can be used in every stage of numerical error management:

• periodic testing of a scientific code to see if it’s becoming numerically sensitive;
• determining how many digits in the results (intermediate or final) are reliable;
• pinning down a numerical difficulty to a few lines of code; and
• rectifying the numerical difficulties that are uncovered.
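The second item, determining how many digits are reliable, can be approximated even before any large-scale conversion: rerun a suspect kernel at higher precision and compare. A toy sketch, using Python's standard decimal module as a stand-in for the packages above, applies this to the cancellation-prone expression (1 − cos x)/x^2:

```python
import math
from decimal import Decimal, getcontext

def f_double(x):
    # (1 - cos x)/x^2 in 64-bit arithmetic: for tiny x the subtraction
    # cancels catastrophically (the true value is near 1/2).
    return (1.0 - math.cos(x)) / (x * x)

def f_highprec(x, digits=50):
    # Same expression, with cos evaluated from its Taylor series in
    # 'digits'-digit decimal arithmetic.
    getcontext().prec = digits
    x = Decimal(x)
    term, cos_x, k = Decimal(1), Decimal(1), 0
    while abs(term) > Decimal(10) ** -digits:
        k += 2
        term *= -x * x / (k * (k - 1))
        cos_x += term
    return (1 - cos_x) / (x * x)

print(f_double(1e-8))             # wildly wrong: the cancellation wins
print(float(f_highprec("1e-8")))  # ~0.5, the true value
```

Comparing the two runs immediately reveals that essentially no digits of the double-precision result are trustworthy for small x, pinpointing the offending expression without any numerical-analysis detective work.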

Although the software described here and elsewhere will help scientists in the short run, it is essential in the longer run that computer vendors recognize the need to provide both hardware and software support for high-precision arithmetic. An IEEE committee is updating the IEEE-754 floating-point arithmetic standard, and 128-bit support is one of the key provisions of their draft standard. This same committee is also considering establishing standards for arbitrary-precision arithmetic. These are all welcome and timely developments.

Acknowledgments

The author acknowledges the contributions of numerous colleagues in the preparation of this article, including Ed Baron, Ben Barrowes, Jonathan Borwein, Richard Crandall, Chris Ding, Alexei Frolov, William Kahan, Peter Nugent, Michael Strayer, and Zong-Chao Yan. Bailey’s work was supported by the director of the Office of Computational and Technology Research, Division of Mathematical, Information, and Computational Sciences, of the US Department of Energy, under contract number DE-AC03-76SF00098.

References

1. R. Crandall and C. Pomerance, Prime Numbers: A Computational Perspective, Springer-Verlag, 2001.

2. R.P. Brent, “A Fortran Multiple-Precision Arithmetic Package,” ACM Trans. Mathematical Software, vol. 4, no. 1, 1978, pp. 57–70.

3. D.H. Bailey and X.S. Li, “A Comparison of Three High-Precision Quadrature Schemes,” Computational Research Division, Berkeley Lab Computing Sciences, 2004; http://crd.lbl.gov/~dhbailey/dhbpapers/quadrature.pdf.

4. D.H. Bailey and S. Robins, “Highly Parallel, High-Precision Numerical Integration,” Computational Research Division, Berkeley Lab Computing Sciences, 2004; http://crd.lbl.gov/~dhbailey/dhbpapers/quadparallel.pdf.

5. Y. He and C. Ding, “Using Accurate Arithmetics to Improve Numerical Reproducibility and Stability in Parallel Applications,” J. Supercomputing, vol. 18, no. 3, 2001, pp. 259–277.

6. P.H. Hauschildt and E. Baron, “The Numerical Solution of the Expanding Stellar Atmosphere Problem,” J. Computational and Applied Mathematics, vol. 109, 1999, pp. 41–63.

7. D.H. Bailey and A.M. Frolov, “Universal Variational Expansion for High-Precision Bound-State Calculations in Three-Body Systems: Applications to Weakly-Bound, Adiabatic and Two-Shell Cluster Systems,” J. Physics B, vol. 35, no. 20, 2002, pp. 4287–4298.

8. A.M. Frolov and D.H. Bailey, “Highly Accurate Evaluation of the Few-Body Auxiliary Functions and Four-Body Integrals,” J. Physics B, vol. 36, no. 9, 2003, pp. 1857–1867.

Figure 4. Formulas found by PSLQ for the 10 cases in Figure 3.

MAY/JUNE 2005 61

9. Z.-C. Yan and G.W.F. Drake, “Bethe Logarithm and QED Shift for Lithium,” Physical Rev. Letters, vol. 81, 12 Sept. 2003, pp. 774–777.

10. T. Zhang, Z.-C. Yan, and G.W.F. Drake, “QED Corrections of O(mc²α⁷ ln α) to the Fine Structure Splittings of Helium and He-Like Ions,” Physical Rev. Letters, vol. 77, no. 26, 1994, pp. 1715–1718.

11. B.E. Barrowes et al., “Asymptotic Expansions of the Prolate Angular Spheroidal Wave Function for Complex Size Parameter,” to appear in Studies in Applied Mathematics, 2005.

12. R.E. Caflisch, “Singularity Formation for Complex Solutions of the 3D Incompressible Euler Equations,” Physica D, vol. 67, no. 1, 1993, pp. 1–18.

13. D.H. Bailey, R. Krasny, and R. Pelz, “Multiple Precision, Multiple Processor Vortex Sheet Roll-Up Computation,” Proc. 6th SIAM Conf. Parallel Processing for Scientific Computing, SIAM Press, 1993, pp. 52–56.

14. W. Kahan and J. Darcy, “How Java’s Floating-Point Hurts Everyone Everywhere,” 1998; www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf.

15. D.H. Bailey, “Integer Relation Detection,” Computing in Science & Eng., vol. 2, no. 1, 2000, pp. 24–28.

16. J.M. Borwein and D.H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, AK Peters, 2003.

17. D.H. Bailey and D. Broadhurst, “Parallel Integer Relation Detection: Techniques and Applications,” Mathematics of Computation, vol. 70, no. 236, 2000, pp. 1719–1736.

18. D.H. Bailey and D.J. Broadhurst, “A Seventeenth-Order Polylogarithm Ladder,” Computational Research Division, Berkeley Lab Computing Sciences, 1999; http://crd.lbl.gov/~dhbailey/dhbpapers/ladder.pdf.

19. D.H. Bailey, P.B. Borwein, and S. Plouffe, “On the Rapid Computation of Various Polylogarithmic Constants,” Mathematics of Computation, vol. 66, no. 218, 1997, pp. 903–913.

20. D.H. Bailey and R.E. Crandall, “On the Random Character of Fundamental Constant Expansions,” Experimental Mathematics, vol. 10, no. 2, 2001, pp. 175–190.

21. W. Gibbs, “A Digital Slice of Pi,” Scientific Am., vol. 288, no. 5, 2003, pp. 23–24.

22. E. Klarreich, “Math Lab: How Computer Experiments Are Transforming Mathematics,” Science News, vol. 165, 24 Apr. 2004, pp. 266–268.

23. D.H. Bailey and R.E. Crandall, “Random Generators and Normal Numbers,” Experimental Mathematics, vol. 11, no. 4, 2004, pp. 527–546.

24. D.J. Broadhurst, “Massive Three-Loop Feynman Diagrams Reducible to SC* Primitives of Algebras of the Sixth Root of Unity,” to appear in European Physical J. C, 2005; http://xxx.lanl.gov/abs/hep-th/9803091.

David H. Bailey is chief technologist of the Computational Research Department at Lawrence Berkeley National Laboratory. In 1993, he received the IEEE Computer Society’s Sidney Fernbach Award for contributions to numerical computational science, including innovative algorithms for FFTs, matrix multiply, and multiple-precision arithmetic on vector computer architecture. Bailey received a BS in mathematics from Brigham Young University and a PhD in mathematics from Stanford University. Contact him at [email protected].
