
SIAG/OPT Views-and-News
A Forum for the SIAM Activity Group on Optimization

Volume 15 Number 1 March 2004

Contents

Articles
R: A Statistical Tool
  John C. Nash ........................................ 1
Euclidean Distance Matrices and the Molecular Conformation Problem
  Abdo Y. Alfakih and Henry Wolkowicz ................. 5
Three Interviews
  An Interview with R. Tyrrell Rockafellar ............ 9
  An Interview with M. J. D. Powell ................... 14
  An Interview with W. R. Pulleyblank ................. 19
Bulletin .............................................. 22
Chairman's Column
  Henry Wolkowicz ..................................... 26
Comments from the Editor
  Luís N. Vicente ..................................... 27

Obituary

Jos F. Sturm (1971–2003)

Jos Sturm was born in Rotterdam, The Netherlands, on August 13, 1971.

He graduated in August 1993 in operations research from the Department of Econometrics at the University of Groningen, where he had also been a teaching assistant in statistics and a research assistant in econometrics. From September 1993 to September 1997, he was a PhD student at the Econometric Institute and the Tinbergen Institute of the Erasmus University in Rotterdam, under the supervision of Shuzhong Zhang.

Jos spent the academic year 1997/1998 as a post-doctoral fellow at the Communications Research Laboratory (CRL) of McMaster University, Hamilton, Canada, on a TALENT stipend from the Netherlands Organization for Scientific Research (NWO). At McMaster University, he was a member of the ASPC group of Zhi-Quan (Tom) Luo.

After his post-doctoral fellowship at McMaster, he returned to the Netherlands, and from October 1998 to January 2001 he was a lecturer at Maastricht University, in the Department of Quantitative Economics, in the group of Antoon Kolen.

In January 2000, his PhD thesis was awarded the Gijs de Leve prize for the best thesis in operations research in The Netherlands in the years 1997-1999.

In February 2001, Jos was appointed as an Associate Professor at Tilburg University, and a fellow of CentER (Center for Economic Research). In July 2001, he was awarded the prestigious Vernieuwingsimpuls (Innovation) grant of the Netherlands Organization for Scientific Research (NWO).

Jos was the editor of the newsletter SIAG/Optimization Views-and-News of the SIAM Activity Group on Optimization (SIAG/OPT), and a council member-at-large of the Mathematical Programming Society (MPS).

His scholarly works include more than 30 papers, and his PhD thesis was published in edited form in the volume 'High Performance Optimization', Frenk et al., eds., Kluwer Academic Press, 2000. He was also the author of a widely used optimization software package called SeDuMi.

He is survived by his wife Changqing and daughter Stefanie.


Articles

R: A Statistical Tool

John C. Nash
School of Management, University of Ottawa, Ontario K1N 6N5, Canada ([email protected]).

1. Introduction

The objective of this article is to illustrate the R statistical system, especially as it may be used by optimization workers. R is an open-source project that was started by Robert Gentleman and Ross Ihaka of the University of Auckland, New Zealand. R follows the spirit and much of the functionality of the S language of Chambers (1998) and colleagues at Bell Laboratories. S has been developed since the 1970s, and is commercialized as S-Plus. There are many differences in the user interface and features between R and S. However, scripts written for one usually run in the other, often with minimal modification. Thus, although what is written here is about R, readers will likely find it applies "mostly" to S, at least concerning computational capabilities.

This article came about as a result of a paper I gave at the McMaster Optimization conference organized by Tamas Terlaky in August 2002. In that paper, I tried to show some of the ways that uncertainty may be introduced into the results of optimization calculations. This uncertainty is essentially a statistical property, so it was natural to use a statistical package to analyze and display results. I chose R because, in talking to optimization researchers, it was clear that many people were unaware of the capabilities of R or S. As sometimes happens with conference talks, the audience paid little attention to "uncertainty", but were very keen to have the URL http://www.r-project.org.

2. About R

There are several features of R that should appeal to optimizers.

• R has fairly simple and powerful graphics, including contour and 3D plots.

• R can be extended fairly easily to incorporate C and Fortran programs.

• R already has some modest optimization functions included. Some of the routines are my own algorithms from Compact Numerical Methods for Computers (Nash, 1979), implemented by Brian Ripley.

• R has a number of useful basic statistical functions that are generally lacking (or badly implemented) in traditional document processing or spreadsheet tools.

My experience is that R installs very quickly and easily using pre-compiled binaries for Windows, Linux and Macintosh. There is also the source code for those with other systems. In addition, many researchers have contributed "packages" to carry out special computations and extend the R base. These have to be "required" by scripts, a detail that can prove troublesome to novices; a minimal example is shown below.
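For readers new to R, here is a minimal sketch (my addition, not from the original article) of what "requiring" a package looks like before its functions become visible; the package MASS and the rlm() call are just illustrative choices.

# Load a contributed package before using its functions.
# require() returns FALSE (with a warning) if the package is absent,
# whereas library() stops with an error.
if (require(MASS)) {
  # rlm() lives in MASS; it is unavailable until the package is loaded
  fit <- rlm(stack.loss ~ ., data = stackloss)
  print(coef(fit))
} else {
  cat("Package MASS is not installed.\n")
}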

Learning to use the built-in functionality of R is not particularly difficult, though the Internet generation who have a mouse grafted into their hand will likely find its command-line interface very awkward. (S-Plus has pull-down menus, though I have found that I am more comfortable with R and its commands.) Generally I like to try out commands then build them into a script that I can run later. It seems that there are always minor changes in the input data or an optional parameter, so that running from a script saves time and effort. Moreover, it documents the computations when I have to come back to revise an article or answer some query about my work.

3. Graphical features of R

Optimization workers are likely to be most interested in R's graphics. Consider the following task, which we will walk through below:

Draw a 3D perspective plot of the "volcano with terraces" function defined by

f(d) = (10 − 0.5 d) + sin(2 d),

where

d = √((x − 1)² + (y − 5)²).

The "solution" to this problem is presented in Figure 1. As someone who is an occasional user of TeX, I confess to spending much more time learning how to include this graphic in the article than I did learning how to prepare a perspective plot with R.

Figure 1: The volcano-with-terraces function.

The script for preparing Figure 1 in R is straightforward:

x <- seq(0, 15, 0.2)   # creates x as a vector 0, 0.2, 0.4, ..., 15
y <- seq(0, 15, 0.2)   # similarly for y
hc <- function(x, y) {
  d <- sqrt((x - 1)^2 + (y - 5)^2)   # distance from (1,5)
  val <- (10 - 0.5*d) + sin(2*d)     # hc is the terraced volcano function
  val                                # return the function values
}
z <- outer(x, y, hc)   # outer() creates the grid of z values
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue",
      ltheta = 120, shade = 0.75, ticktype = "detailed",
      xlab = "X", ylab = "Y", zlab = "Z")
# Next line adds the title to the graph
title("jnwave function: volcano-with-terraces")
# Finally we "print" the graph to an Encapsulated PostScript file
dev.print(postscript, file = "c:/temp/jnwavep01.eps",
          horizontal = FALSE, onefile = FALSE, paper = "special")

Figure 2: jnwave function with constraint lines.

I wanted to find minima of the "volcano" function subject to some linear constraints, namely,

A : y > 5 + 0.1x

and

B : y < 8 − 0.2x.

In addition, I wanted to show how uncertainty in specifying the "slope" of the second line could affect the results. For example, if the line is really

B′ : y < 8 − 0.4x

we will find a very different minimum if we use constraint B′ rather than B. This can be illustrated quickly via a contour plot, especially if colour is available. Figure 2 shows this using a dashed line for the modified constraint. It actually looks nicer in colour, and I had less trouble getting colour to function properly.
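The article does not reproduce the commands used for Figure 2; the sketch below is my reconstruction, reusing the hc() function and the x, y, z objects from the script above, with the number of contour levels and line styles chosen as guesses rather than taken from the original:

# Contour plot of the volcano function with the constraint lines
contour(x, y, z, nlevels = 10, xlab = "X", ylab = "Y")
abline(a = 5, b = 0.1)              # boundary of constraint A: y = 5 + 0.1x
abline(a = 8, b = -0.2)             # boundary of constraint B: y = 8 - 0.2x
abline(a = 8, b = -0.4, lty = 2)    # perturbed constraint B': dashed line
title("jnwave function with constraint lines")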

4. Composite graphics

When comparing different optimization runs or methods, I often want to compare output. Placing graphs side by side or in an array is helpful in picking out the differences. R makes this very easy.

As an example, suppose we wish to apply the Jackknife technique to gauge the stability of the parameters of a logistic growth function estimated by nonlinear least squares. We start with the following weed infestation data.

Year   Weed Density
  1        5.308
  2        7.24
  3        9.638
  4       12.866
  5       17.069
  6       23.192
  7       31.443
  8       38.558
  9       50.156
 10       62.948
 11       75.995
 12       91.972

We want to model this data with the growth function

model ≃ b1/(1 + b2 exp(−b3 year)).

We will fit the parameters by minimizing the sum of squares of the deviations between the data weeds and model. To obtain a measure of uncertainty in the parameters, however, we will estimate the parameters 13 times – once with all the data, then omitting each of the 12 observations in turn. We would then like to examine b1, b2, b3 and the sum of squares for each of the 13 sets of input data. And we would like the graphs to be displayed together. This is fairly easy with R. We simply set up the graphical window first with the command

par(mfrow=c(2,2))

After this, each of four "plot" commands will be sent to a separate portion of the plotting window, along with its related commands for titles and legends. The result is in Figure 3.

Figure 3: Hobbs weed infestation Jackknife estimates.
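The article shows only the par() call; the rest of the computation is left to the reader. The sketch below is my own reconstruction of one way to carry it out with nls(); the starting values and the plotting choices are guesses, not taken from the original, and nls() may need different starting values to converge on some leave-one-out subsets.

df <- data.frame(year = 1:12,
                 weeds = c(5.308, 7.24, 9.638, 12.866, 17.069, 23.192,
                           31.443, 38.558, 50.156, 62.948, 75.995, 91.972))
# Fit the logistic growth model 13 times: all data, then leaving out
# each observation in turn; record b1, b2, b3 and the sum of squares.
cases <- c(0, 1:12)            # 0 means "use all observations"
results <- matrix(NA, nrow = length(cases), ncol = 4,
                  dimnames = list(NULL, c("b1", "b2", "b3", "ss")))
for (k in seq_along(cases)) {
  dat <- if (cases[k] == 0) df else df[-cases[k], ]
  fit <- nls(weeds ~ b1/(1 + b2*exp(-b3*year)), data = dat,
             start = list(b1 = 200, b2 = 50, b3 = 0.3))
  results[k, ] <- c(coef(fit), sum(residuals(fit)^2))
}
# Four plots in one window, as in the article
par(mfrow = c(2, 2))
for (j in 1:4) {
  plot(cases, results[, j], xlab = "omitted observation",
       ylab = colnames(results)[j])
  title(paste("Jackknife:", colnames(results)[j]))
}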

5. Help in learning R

There are now a number of books and online resources for learning R or S. In book form, I like Venables and Ripley (1994). Though R is not mentioned, versions of all the scripts exist for R. There are some differences in the binary formats used for R and S, but plain text is supposed to be equivalent. Venables and Ripley include an introductory chapter to the S language, followed by a chapter on graphics that gives the commands to prepare some of the more interesting outputs, then a chapter on writing your own scripts. The rest of the book covers various statistical techniques. Thus users can access the base material quickly, and then pick and choose more specialized tools.

Online (there are links on the r-project site), I like John Maindonald's tutorial (http://wwwmaths.anu.edu.au/~johnm/r/usingR.pdf). I walked through the examples while reading the material, and I found a printout of the tutorial helpful, along with scripts downloaded from Maindonald's site (http://wwwmaths.anu.edu.au/~johnm). The quick reference sheet by Jonathan Baron (http://www.psych.upenn.edu/~baron/refcard.pdf) can be helpful. There are other documents, some in languages other than English. All this is in addition to the official R documentation, which I find more useful for reference.

Another approach, which has wider application, is Rweb by Jeff Banfield, which has been described in an online article (http://www.jstatsoft.org/v04/i01/Rweb/Rweb.html) and for which the software can be downloaded (http://www.math.montana.edu/Rweb/Resources.html). This uses a web server to provide example scripts for common statistical calculations. The user pastes a script into a submission box and launches the calculation. The results (text and graphic) are presented via the client's web browser. I found it straightforward to install Rweb on our Faculty's experimental server. You can try it there (http://courses.gestion.uottawa.ca/Rweb) or at Banfield's own site (http://rweb.stat.umn.edu/Rweb). A warning: it can be slow as our server is a machine that was intercepted on its way to disposal.

REFERENCES

[1] J. M. Chambers, Programming with Data: A Guide to the S Language, Springer-Verlag, Berlin, 1998.

[2] R. Gentleman, A (different) Talk about R and Modern Statistical Computing, 2002. See http://biosun1.harvard.edu/~rgentlem/Ppt/MSC.ppt, also from the Statistical Society of Ottawa site, http://macnash.admin.uottawa.ca/~sso/feb15.htm.

[3] R. Gentleman, A Talk about R and Statistical Computing: An Overview of R (date unknown). See http://biosun1.harvard.edu/~rgentlem/Ppt/Rcomput.ppt and http://www.r-project.org.

[4] J. C. Nash, Compact Numerical Methods for Computers: Linear Algebra and Function Minimization, First Edition 1979, Second Edition 1990, Adam Hilger, Bristol (an imprint of Institute of Physics Publications).

[5] W. N. Venables and B. D. Ripley, Modern Applied Statistics with S-Plus, Springer-Verlag, Berlin, 1994.

Euclidean Distance Matrices and the Molecular Conformation Problem

Abdo Y. Alfakih
Department of Mathematics and Statistics, University of Windsor, Windsor, Ontario N9B 3P4, Canada ([email protected]).

Henry Wolkowicz
Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada ([email protected]).

1. Introduction

In molecular conformation theory [7] and in multidimensional scaling in statistics [6, 17], one is usually interested in the following problem, known as the graph realizability problem (GRP). Given an edge-weighted undirected incomplete simple graph G = (V, E, ω) with node set V = {v1, v2, . . . , vn}, edge set E ⊂ V × V, and weights ω : E → IR+, the GRP is the problem of determining whether or not G is realizable in some Euclidean space. A graph G = (V, E, ω) is said to be realizable in IR^r if and only if there exist points p1, p2, . . . , pn in IR^r such that ‖pi − pj‖² = ωij for all edges (vi, vj) ∈ E, where ‖ . ‖ is the Euclidean norm.

In the context of molecular conformation theory, graph G represents a molecule, with V, the set of nodes of G, representing the atoms. It is possible, using nuclear magnetic resonance [22, 24], to determine some of the pair-wise distances of the atoms. If the distance between atoms i and j is known, then edge (vi, vj) is created in graph G with weight ωij equal to the square of this distance. Naturally, the missing edges of G correspond to those unknown interatomic distances. Thus the shape of this molecule, which is important in determining its chemical and biological properties, can be found by solving the GRP. In this case we want the graph to be realizable in IR^3.

Euclidean distance matrices (EDMs) provide a useful tool in the study of the GRP. In fact, the GRP can be formulated as a Euclidean distance matrix completion problem [5]. Other optimization methods for the GRP are discussed in [14, 18, 21].

An n × n matrix D = (dij) is said to be a Euclidean distance matrix (EDM) if and only if there exist n points p1, p2, . . . , pn in some Euclidean space IR^r such that ‖pi − pj‖² = dij for all i, j = 1, . . . , n. The dimension of the smallest Euclidean space containing the points p1, p2, . . . , pn is called the embedding dimension of D. Given graph G = (V, E, ω), define an n × n partial symmetric matrix AG = (aij) with some entries specified (or fixed) and the rest unspecified (or free) such that

aij := ωij if and only if (vi, vj) ∈ E, and aij is free otherwise.   (1)

An n × n matrix D1 is said to be an EDM completion of AG if and only if D1 is an EDM and d1ij = aij for all fixed elements of AG. Thus it immediately follows that G = (V, E, ω) is realizable in IR^r if and only if AG has an EDM completion of embedding dimension r.

In Section 2 we present some of the properties of EDMs and we show how the GRP can be formulated as a semidefinite programming problem. In Section 3, we study the problem of determining whether or not a given EDM completion is unique. Finally, in Section 4 we present an interesting connection between the set of EDMs and the set of correlation matrices.

2. Euclidean distance matrices

A necessary and sufficient condition for a symmetric matrix D with zero diagonal to be an EDM was given by Schoenberg [20]. The following is a slightly modified version of Schoenberg's result, due to Gower [12].

Let M be the subspace orthogonal to e, the vector of all ones, i.e.,

M := {e}⊥ = {x ∈ IR^n : e^T x = 0}.

Let J be the orthogonal projection onto M, i.e., J := In − ee^T/n, where In is the identity matrix of order n. Then we have the following result.

Theorem 2.1 A matrix D with zero diagonal is an EDM if and only if the matrix JDJ is negative semidefinite. Furthermore, the embedding dimension of D is given by the rank of JDJ.

Let D be an EDM and let rank(JDJ) = r. Then the points p1, p2, . . . , pn that generate D are given by the rows of the n × r matrix P, where −(1/2) JDJ = PP^T. Note that the centroid of the points pi, i = 1, . . . , n, coincides with the origin. This follows since P^T e = 0, which is implied by the definition of J, namely Je = 0.
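As a small illustration (my addition, not in the original article), take n = 2 points with squared distance d12 = 1. Then

\[
D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
J = I_2 - \tfrac{1}{2}ee^T = \tfrac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \qquad
JDJ = \begin{pmatrix} -\tfrac12 & \tfrac12 \\ \tfrac12 & -\tfrac12 \end{pmatrix} \preceq 0,
\]

with rank(JDJ) = 1, so the embedding dimension is 1. Factoring −(1/2)JDJ = PP^T with P = (1/2, −1/2)^T recovers the two points 1/2 and −1/2 on the real line, centred at the origin and at squared distance 1, as expected.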

An alternative statement of Theorem 2.1, which is more convenient for our purposes, is given in the next theorem [3, 8]. Let Sn denote the space of symmetric matrices of order n and let SH denote the subspace of Sn defined as

SH = {B ∈ Sn : diag B = 0}.

Let V be the n × (n − 1) matrix whose columns form an orthonormal basis of M; that is, V satisfies

V^T e = 0,  V^T V = In−1.   (2)

Note that J = V V^T. Now define the linear operators KV : Sn−1 → SH and TV : SH → Sn−1 such that

KV(X) = diag(V X V^T) e^T + e (diag(V X V^T))^T − 2 V X V^T,
TV(D) = −(1/2) V^T D V.
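A quick check (my own remark, not from the original) shows how these two operators fit together: by (2) we have V^T e = 0 and V^T V = In−1, so the first two terms of KV(X) vanish when multiplied by V^T on the left and V on the right, and

\[
T_V(K_V(X)) \;=\; -\tfrac{1}{2}\, V^T \big(\operatorname{diag}(VXV^T)e^T + e\,\operatorname{diag}(VXV^T)^T - 2\,VXV^T\big) V
\;=\; -\tfrac{1}{2}(-2X) \;=\; X .
\]

In particular, if D = KV(X) as in statement 3 of Theorem 2.2 below, then TV(D) = X, which is exactly the matrix appearing in statement 2.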

Theorem 2.2 [3] The following statements are equivalent:

1. D ∈ SH is an EDM.

2. TV(D) is positive semidefinite.

3. D = KV(X) for some positive semidefinite matrix X in Sn−1.

Let AG be the partial matrix associated with G = (V, E, ω). Without any loss of generality, we can set the free elements of AG initially to zero. Let H be the adjacency matrix of graph G. Then graph G has a realization in IR^r for some positive integer r ≤ n − 1 if and only if the optimal objective value of the following semidefinite programming problem is equal to zero:

min µ(X) = ||H ◦ (KV(X) − AG)||_F^2
subject to X ⪰ 0,   (3)

where || . ||_F is the Frobenius norm, defined by ||A||_F^2 = trace A^T A, ◦ denotes the Hadamard product, and X ⪰ 0 means that the matrix X is symmetric positive semidefinite. If X∗ is an optimal solution of problem (3) such that µ(X∗) = 0, then D1 = KV(X∗) is an EDM completion of AG. Note that we could answer the question of whether graph G has a realization in IR^k for a given integer k simply by adding to problem (3) the side constraint

rank X = k.

Unfortunately, the feasible region of problem (3) with this extra constraint is not convex. In fact, it was shown by Saxe [19] that, given an edge-weighted graph G and an integer k, the problem of determining whether or not G is realizable in IR^k is NP-hard. Whereas the GRP, i.e., the problem of determining whether or not G is realizable in some Euclidean space, is still open [15], the GRP can be solved "approximately" in polynomial time, since problem (3) can be solved using interior point algorithms for semidefinite programming [23]. For polynomial instances of the GRP, see [16].

A primal-dual interior point algorithm for problem (3) was given in [3], where numerical tests were also presented. Note that by taking H to be the matrix of all ones, problem (3) can be used to find the closest Euclidean distance matrix, in the Frobenius sense, to a given matrix A. For another approach to solving the closest EDM problem, see [1, 11].

3. Uniqueness of EDM completions

Given an EDM completion D1 of a given partial matrix AG, an interesting problem is that of determining whether or not D1 is unique. A characterization of the uniqueness of D1 is discussed next, using the notion of the Gale transform from the theory of polytopes [10, 13].

Let p1, p2, . . . , pn be points in IR^r whose centroid coincides with the origin. Assume that the points p1, p2, . . . , pn are not contained in a proper hyperplane. Then the n × r matrix

P := [ p1^T ]
     [ p2^T ]
     [  ... ]
     [ pn^T ]

is of rank r. Let r̄ = n − 1 − r. For r̄ ≥ 1, let Λ be an n × r̄ matrix whose columns form a basis for the null space of the (r + 1) × n matrix obtained by stacking P^T on top of e^T; that is,

P^T Λ = 0,  e^T Λ = 0,  Λ has full column rank.   (4)

Λ is called a Gale matrix corresponding to the EDM matrix D generated by pi, i = 1, . . . , n; and the ith row of Λ, considered as a vector in IR^r̄, is called a Gale transform of pi. It is clear that Λ is not unique. In fact, for any nonsingular r̄ × r̄ matrix Q, ΛQ is also a Gale matrix. We will exploit this property to define a special Gale matrix Z which is more convenient for our purposes.

Let us write Λ in block form as

Λ = [ Λ1 ]
    [ Λ2 ],

where Λ1 is r̄ × r̄ and Λ2 is (r + 1) × r̄. Without loss of generality we can assume that Λ1 is nonsingular. Then Z is defined as

Z := Λ Λ1^{-1} = [ I_r̄        ]
                 [ Λ2 Λ1^{-1} ].   (5)

Let zi^T denote the ith row of Z, i.e.,

Z := [ z1^T ]
     [ z2^T ]
     [  ...  ]
     [ zn^T ].

Hence zi, the Gale transform of pi, is equal to the ith unit vector in IR^r̄ for i = 1, . . . , r̄. Now we have the following result.

Theorem 3.1 [2] Let A be a given partial symmetric matrix and let D1 be an EDM completion of A. Let X1 = TV(D1) and let r̄ = n − 1 − rank X1. Then

1. If r̄ = 0 then D1 is not unique.

2. If r̄ = 1, let Z be the Gale matrix corresponding to D1 defined in (5). The following condition is necessary and sufficient for D1 to be unique:

(a) There exists a positive definite r̄ × r̄ matrix Ψ such that zi^T Ψ zj = 0 for all i, j such that aij is free.

For a proof of this theorem and other results concerning the case r̄ ≥ 2, see [2].


4. EDMs and the elliptope

As it turns out, there is an interesting connection between the set of EDMs and the set of correlation matrices, i.e., the set of symmetric positive semidefinite matrices whose diagonal entries are all equal to one. Denote the set of all n × n EDM matrices by Dn. Then it easily follows from Theorem 2.1 that Dn is a closed convex cone. Let En denote the set of n × n correlation matrices, often referred to as the elliptope [9]. Next we discuss the relationship between Dn and En.

Given a convex set K ⊆ Sn and a point A ∈ K, let R(K, A) and N(K, A) denote, respectively, the radial cone and the normal cone of K at A; that is,

R(K, A) = {B : B = λ(X − A), X ∈ K, λ ≥ 0},
N(K, A) = {B : 〈B, A − X〉 ≥ 0 for all X ∈ K},

where 〈 , 〉 denotes the matrix inner product 〈A, B〉 = trace AB. Hence N(K, A) = (R(K, A))◦, where Q◦ denotes the polar of a cone Q, defined as

Q◦ = {B ∈ Sn : 〈B, X〉 ≤ 0 for all X ∈ Q}.

Furthermore, (N(K, A))◦ = closure of R(K, A) = T(K, A), where T(K, A) is called the tangent cone of K at A. Let E denote the matrix of all ones. Then it is not difficult to show that

N(En, E) = {B : B = Diag y − V Φ V^T},

where y is any vector in IR^n and Φ is any positive semidefinite matrix in Sn−1. It then easily follows that Dn = −T(En, E); i.e., the cone of EDMs of order n is equal to the negative of the tangent cone of the set of n × n correlation matrices at E, the matrix of all ones [9, page 535].

An interesting question is whether the radial cone of En at E is closed. If this is the case, then R(En, E) = T(En, E), which would imply that for any EDM matrix D there exists a nonnegative scalar λ and a correlation matrix C such that D = λ(E − C). Unfortunately, this is not true. The radial cone R(En, E) is not closed, as can be shown by the following example.

Let

D = [ 0  1  4 ]
    [ 1  0  1 ]
    [ 4  1  0 ].

Then clearly D is an EDM, generated by the three points p1, p2, and p3 on the x-axis with coordinates −1, 0 and 1, respectively. It is also clear that there exists no λ ≥ 0 such that D = λ(E − C1) for any correlation matrix C1, since the matrix λE − D is indefinite for all λ.
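A short verification of this last claim (my own addition, not in the original): the determinant of λE − D does not depend on λ and is negative,

\[
\det(\lambda E - D) \;=\;
\det\begin{pmatrix} \lambda & \lambda-1 & \lambda-4 \\ \lambda-1 & \lambda & \lambda-1 \\ \lambda-4 & \lambda-1 & \lambda \end{pmatrix}
\;=\; -8 \qquad\text{for every } \lambda,
\]

so λE − D always has a negative eigenvalue and is never positive semidefinite. Hence E − D/λ is never a correlation matrix for λ > 0, and D cannot be written as λ(E − C1).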

For a characterization of those EDM matrices D which can be expressed as D = λ(E − C) for some scalar λ and some correlation matrix C, see [4]. Here again, this characterization is given in terms of the Gale transform of the points p1, p2, . . . , pn that generate D.

REFERENCES

[1] S. Al-Homidan and R. Fletcher, Hybrid methods for finding the nearest Euclidean distance matrix, Recent Advances in Nonsmooth Optimization, (1995), pp. 1–17.

[2] A. Y. Alfakih, On the uniqueness of Euclidean distance matrix completions, technical report, 2002.

[3] A. Y. Alfakih, A. Khandani, and H. Wolkowicz, Solving Euclidean distance matrix completion problems via semidefinite programming, Comput. Optim. Appl., 12 (1999), pp. 13–30.

[4] A. Y. Alfakih and H. Wolkowicz, Two theorems on Euclidean distance matrices and Gale transform, Linear Algebra Appl., 340 (2002), pp. 149–154.

[5] M. Bakonyi and C. R. Johnson, The Euclidean distance matrix completion problem, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 646–654.

[6] I. Borg and P. Groenen, Modern Multidimensional Scaling: Theory and Applications, Springer-Verlag, Berlin, 1997.

[7] G. M. Crippen and T. F. Havel, Distance Geometry and Molecular Conformation, John Wiley & Sons, New York, 1988.

[8] F. Critchley, On certain linear mappings between inner-product and squared distance matrices, Linear Algebra Appl., 105 (1988), pp. 91–107.

[9] M. Deza and M. Laurent, Geometry of Cuts and Metrics, Algorithms and Combinatorics, vol. 15, Springer-Verlag, Berlin, 1997.

[10] D. Gale, Neighboring vertices on a convex polyhedron, in Linear Inequalities and Related Systems, Princeton University Press, Princeton, (1956), pp. 255–263.


[11] W. Glunt, T. L. Hayden, S. Hong, and J. Wells, An alternating projection algorithm for computing the nearest Euclidean distance matrix, SIAM J. Matrix Anal. Appl., 11 (1990), pp. 589–600.

[12] J. C. Gower, Properties of Euclidean and non-Euclidean distance matrices, Linear Algebra Appl., 67 (1985), pp. 81–97.

[13] B. Grünbaum, Convex Polytopes, John Wiley & Sons, New York, 1967.

[14] B. Hendrickson, The molecule problem: Exploiting structure in global optimization, SIAM J. Optim., 5 (1995), pp. 835–857.

[15] M. Laurent, Cuts, matrix completions and graph rigidity, Math. Programming, 79 (1997), pp. 255–283.

[16] M. Laurent, Polynomial instances of the positive semidefinite and Euclidean distance matrix completion problems, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 874–894.

[17] J. De Leeuw and W. Heiser, Theory of multidimensional scaling, in P. R. Krishnaiah and L. N. Kanal, editors, Handbook of Statistics, North-Holland, vol. 2 (1982), pp. 285–316.

[18] J. Moré and Z. Wu, Distance geometry optimization for protein structures, J. Global Optim., 15 (1999), pp. 219–234.

[19] J. B. Saxe, Embeddability of weighted graphs in k-space is strongly NP-hard, Proc. 17th Allerton Conf. in Communications, Control, and Computing, (1979), pp. 480–489.

[20] I. J. Schoenberg, Remarks to Maurice Fréchet's article: Sur la définition axiomatique d'une classe d'espaces vectoriels distanciés applicables vectoriellement sur l'espace de Hilbert, Ann. Math., 36 (1935), pp. 724–732.

[21] M. W. Trosset, Distance matrix completion by numerical optimization, Comput. Optim. Appl., 17 (2000), pp. 11–22.

[22] C. P. Wells, An improved method for sampling of molecular conformation space, PhD thesis, University of Kentucky, 1995.

[23] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors, Handbook of Semidefinite Programming: Theory, Algorithms and Applications, Kluwer Academic Publishers, Dordrecht, 2000.

[24] K. Wüthrich, The development of nuclear magnetic resonance spectroscopy as a technique for protein structure determination, Accounts of Chemical Research, 22 (1989), pp. 36–44.

Three Interviews

The three interviews that follow appeared first in the Bulletin of the International Center for Mathematics (Centro Internacional de Matemática, CIM): http://www.cim.pt. The Editor would like to thank the Editors of the Bulletin of CIM, as well as the interviewees, for permission to reprint the three interviews here.

An Interview with R. Tyrrell Rockafellar

Published originally in the Bulletin of the International Center for Mathematics, n. 12, June 2002. http://www.cim.pt/cimE/boletim.html

There are obvious reasons for concern about the current excessive scientific specialization and about the uncontrolled breadth of research publication. Do you see a need for increasing coordination of events and publications in the mathematical community (in particular in the optimization community) as a way to improve quality?

There are too many meetings nowadays, even too many in some specialized areas of optimization. This is regrettable, but perhaps self-limiting because of constraints on the time and budgets of participants. In many ways, the huge increase in the number of meetings is a direct consequence of globalization—with more possibilities for travel and communication (e.g. e-mail) than before, and this is somehow good. The real problem, I think, is how to preserve quality under these circumstances. Meetings shouldn't just be touristic opportunities, and generally they aren't, but in some cases this has indeed become the case. I see no hope, however, for a coordinating body to control the situation.

An aspect of meetings that I believe can definitely have a bad effect on the quality of publications is the proliferation of "conference volumes" of collected papers. This isn't a new thing, but has gotten worse. In principle such volumes could be good, but we all know that it's not a good idea to submit a "real" paper to such a volume. In fact I often did that in the past, but it's clear now that such papers are essentially lost to the literature after a few years and unavailable. Of course, the organizers of a conference often feel obliged to produce such a book in order to justify getting the money to support the conference. But for the authors, the need to produce papers for that purpose is definitely a big distraction from their more serious work. Therefore it can have a bad effect on activities that are mathematically more important.

There are also too many journals. This is a difficult matter, but it may also be self-limiting. Many libraries now aren't subscribing to all the available journals. At my own university, for example, we have decided to omit many mathematical journals that we regard as costing much more than they are worth, and this even includes some older journals that are quite well known (I won't name names). And hardly a month goes by without the introduction of yet another journal. Besides the problem of paying for all the journals (isn't this often really a kind of business trick of publishers in which ambitious professors cooperate?), there is the quality problem that there aren't enough researchers to referee the papers that get submitted. Furthermore, one sees that certain fields of research that are perhaps questionable in value and content start separate journals of their own and thereby escape their critics on the outside. The governments paying for all of it may some day become disillusioned, and that would hurt us all.

Before I ask you questions about yourself and your work, let me pose you another question about research policy. How do you see the importance and impact of research in the professor's teaching activity? Do you consider research as a necessary condition for better university teaching?

Personally, I believe that an active acquaintance with research is important to teaching mathematics on many levels. The nature of the subject being taught, and the kind of research being done, can make a big difference in this, however. Ideally, mathematics should be seen as a thought process, rather than just as a mass of facts to be learned and remembered, which is so often the common view. The thought process uses logic but also abstraction and needs to operate with a clear appreciation of goals, whether coming directly out of applications or for the sake of more complete insights into a central issue.

Even with standard subjects such as calculus, I think it's valuable to communicate the excitement of the ideas and their history, how hard they were to develop and understand properly—which so often reflects difficulties that students have themselves. I don't see how a teacher can do that well without some direct experience in how mathematics continues to grow and affect the world.

On the higher levels, no teacher who does not engage in research can even grasp the expanding knowledge and prepare the next generation to carry it forward. And, practically speaking, without direct contact with top-rate researchers, a young mathematician, no matter how brilliant, is doomed to a scientifically dull life far behind the frontiers.

You started your career in the sixties working intensively in convex analysis. Your book "Convex Analysis", Princeton University Press, 1970, became a landmark in the field. How exciting was that time and how do you see now the impact that the book had in the applied mathematical field?

C. Caratheodory, W. Fenchel, V. L. Klee, J.-J. Moreau, F. A. Valentine,... Who do you really think set the ground for convex analysis? Werner Fenchel?

Was it A. W. Tucker himself who suggested the name "Convex Analysis"? What are your recollections of Professor Tucker and his influential activity?

Some of the history of "convex analysis" is recounted in the notes at the ends of the first two chapters of my book Variational Analysis, written with Roger Wets. Before the early 1960's, there was plenty of convexity, but almost entirely in geometric form with little that could be called "analysis". The geometry of convex sets had been studied by many excellent mathematicians, e.g. Minkowski, and had become important in functional analysis, specifically in Banach space theory and the study of norms. Convex functions other than norms began to attract much more attention once optimization started up in the early 1950's, and through the economic models that became popular in the same era, involving games, utility functions, and the like. Still, convex functions weren't handled in a way that was significantly different from that of other functions. That only came to be true later.

As a graduate student at Harvard, I got interested in convexity because I was amazed by linear programming duality and wanted to invent a "nonlinear programming duality". That was around 1961. The excitement then came from all the work going on in optimization, as represented in particular by the early volumes of collected papers being put together by Tucker and others at Princeton, and from the beginnings of what later became the sequence of Mathematical Programming Symposia. It didn't come from anything in convexity itself. At that time, I knew of no one else who was really much interested in trying to do "new" things with convexity. Indeed, nobody else at Harvard had much awareness of convexity, not to speak of optimization.

It was while I was writing up my dissertation—focused then on dual problems stated in terms of polar cones—that I came across Fenchel's conjugate convex functions, as described in Karlin's book on game theory. They turned out to be a wonderful vehicle for expressing "nonlinear programming duality", and I adopted them wholeheartedly. Around the time the thesis was nearly finished, I also found out about Moreau's efforts to apply convexity ideas, including duality, to problems in mechanics.

Moreau and I, independently in those days at first, but soon in close exchanges with each other, made the crucial changes in outlook which, I believe, created "convex analysis" out of "convexity". For instance, he and I passed from the basic objects in Fenchel's work, which were pairs consisting of a convex set and a finite convex function on that set, to extended-real-valued functions implicitly having "effective domains", for which we moreover introduced set-valued subgradient mappings. Nevertheless, the idea that convex functions ought to be treated geometrically in terms of their epigraphs instead of their graphs was essentially something we had gotten from Fenchel.

Less than a year after completing my thesis, I went to Copenhagen to spend six months at the institute where Fenchel was working. He was no longer engaged then in convexity, so I had no scientific interaction with him in that respect, except that he arranged for Moreau to visit, so that we could talk.

Another year later, I went to Princeton for a whole academic year through an invitation from Tucker. I had kept contact with him as a student, even though I was at Harvard, not Princeton, and had never actually met him. (He had helped to convince my advisor that my research was promising.) He had me teach a course on convex functions, for which I wrote the lecture notes, and he then suggested that those notes be expanded to a book. And yes, it was he who suggested the title, Convex Analysis, thereby inventing the name for the new subject.

So, Tucker had a great effect on me, as he had had on others, such as his students Gale and Kuhn. He himself was not a very serious researcher, but he believed in the importance of the new theories growing out of optimization. With his personal contacts and influence, backed by Princeton's prestige, he acted as a major promoter of such developments, for example by arranging for "Convex Analysis" to be published by Princeton University Press. I wonder how the subject would have turned out if he hadn't moved me and my career in this way.

I think of Klee (a long-time colleague of mine in Seattle, who helped me get a job there), and Valentine (whom I once met but only briefly), as well as Caratheodory, as involved with "convexity" rather than "convex analysis". Their contributions can be seen as primarily geometric.

Since the mid seventies you have been working on stochastic optimization, mainly with Roger Wets. It seems that it took a long while to see stochastic optimization receiving proper attention from the optimization community. Do you agree?

I owe my involvement in stochastic programming to Roger Wets. This was his subject when we first became friends around 1965. He has always been motivated by its many applications, whereas for me the theoretical implications, in particular the ones revolving around, or making use of, duality, provided the most intriguing aspects. We have been good partners from that perspective, and the partnership has lasted for a long time.

Stochastic programming has been slow to gain ground among practitioners for several reasons, despite its obvious relevance to numerous problems. For many years, the lack of adequate computing power was a handicap. An equal obstacle, however, has been the extra mental machinery required in treating problems in this area and even in formulating them properly. I have seen that over and over, not just in the optimization community but also in working with engineers and trying to teach the subject to students. A different way of thinking is often needed, and people tend to resist that, or to feel lost and retreat to ground they regard as safer. I'm confident, though, that stochastic programming will increasingly be accepted as an indispensable tool for many purposes.

Your recent book "Variational Analysis", Springer-Verlag, 1998, with Roger Wets, emerges as an overwhelming life-time project. You say in the first paragraph of the Preface: "In this book we aim to present, in a unified framework, a broad spectrum of mathematical theory that has grown in connection with the study of problems of optimization, equilibrium, control, and stability of linear and nonlinear systems. The title Variational Analysis reflects this breadth." How do you feel about the book a few years after its publication? Has the purpose of forming a "coherent branch of analysis" been well digested by the book audience?

That book took over 10 years to write—if one includes the fact that at least twice we decided to start the job from the beginning again, totally reorganizing what we had. In that period I had the feeling of an enormous responsibility, but a joyful burden, even if one involved with pain, somewhat like a woman carrying a baby within her and finally giving birth. I am very happy with the book (although it would be nice to have an opportunity to make a few little corrections), and Wets and I have heard many heart-warming comments about it. Also, it has won a prize (the Frederick W. Lanchester Prize, INFORMS, 1997).

Still, I have to confess that I have gone through a bit of "post partum depression" since it was finished. It's clear—and we always knew it—that such a massive amount of theory can't be digested very quickly, even by those who could benefit from it the most. Another feature of the situation, equally predictable, is that some of the colleagues who could most readily understand what we have tried to do often have their own philosophies and paradigms to sell. It's discouraging to run into circumstances where developments we were especially proud of, and which we regarded as very helpful and definitive, appear simply to be ignored.

But in all this I have a very long view. We now take for granted that "convex analysis" is a good subject with worthwhile ideas, yet it was not always that way. There was actually a lot of resistance to it in the early days, from individuals who preferred a geometric presentation to one targeting concepts of analysis. Even on the practical plane, it's fair to say that little respect was paid to convex analysis in numerical optimization until around 1990, say. Having seen how ideas that are vital, and sound, can slowly win new converts over many years, I can well dream that the same will happen with variational analysis.

Of course, in the meantime there are many projects to work on, whether directly based on variational analysis or aimed in a different direction, and such matters are keeping me thoroughly busy.

Nonlinear optimization has also been part of your research interests, in particular duality and Lagrange multiplier methods. Nonlinear optimization has recently been enriching its classical methodology with new techniques especially tailored to simulation models that are expensive, ill-posed or that require high performance computing. Would you like to elaborate your thoughts on this new trend?

The growth of numerical methodology based on duality and new ways of working with, or conceiving of, Lagrange multipliers has been thrilling. Semidefinite programming fits that description, but so too do the many decomposition schemes in large-scale optimization, including optimal control and stochastic programming. Also in this mix, at least as close cousins, are schemes for solving variational inequality problems.

I've been active myself in some of this, but on a more basic level of theory a bigger goal has been to establish a better understanding of how solutions to optimization problems, both of convex and nonconvex types, depend on data parameters. That's essential not only to numerical efficacy and simulation, but also to the stability of mathematical models. I find it to be a tough but fascinating area of research with broad connections to other things. It requires us to look at problems in different ways than in the past, and that's always valuable. Otherwise it won't be possible to bring optimization to the difficult tasks for which it is greatly needed in economics and technology.

Let me now increase my level of curiosity and ask you more personal questions. The George B. Dantzig Prize (SIAM and Mathematical Programming Society, 1982), the John von Neumann Lecture (SIAM, 1992), and the John von Neumann Theory Prize (INFORMS, 1999) are impressive recognitions. However, it is clear that it is neither recognition nor any other career-oriented goal that keeps you moving on. What makes you so active at your age? Are you addicted to mathematics?

It's the excitement of discovering new properties and relationships—ones having the intellectual beauty that only mathematics seems able to bring—that keeps me going. I never get tired of it. This process builds its own momentum. New flashes of insight stimulate curiosity more and more.

Of course, a mathematician has to be in tune with some of the basics of a mathematical way of life, such as pleasure in spending hours in quiet contemplation, and in dedication to writing projects. But we all know that this somewhat solitary side of mathematical life also brings with it a kind of social life that few people outside of our professional world can even imagine. The frequent travel that's not just tied to a few laboratories, the network of friends and research collaborators in different cities and even different countries, the extended family of former students, and the interactions with current students—what fun, and what an opportunity to explore music, art, nature, and our many other interests. All these features keep me going too.

Recently, at the end of a live radio interview by telephone that was being broadcast nationally in Australia, I was asked whether I really liked mountain hiking and backpacking. The interviewer had seen that about me on a web site and appeared to be incredulous that someone with such outdoor activities could fit her mental picture of a mathematician.

So little did she know about the lives we lead!

Have you ever felt that a result of yours was unfairly neglected? Which? Why?

Yes, I have often felt that certain results I had worked very hard to obtain, and which I regarded as deep and important, were neglected. That was the case in the early days and still goes on now. For instance, the duality theorems I developed in the 1960's, connecting duality with perturbations, were ignored for a long time while most people in optimization thought only about "Lagrangian duality". And in the last couple of years, I and several of my students have worked very hard at bringing variational analysis to bear on Hamilton-Jacobi theory, but despite strong theorems can't seem to get attention from the PDE people who work in that subject.

In most cases the trouble has come from the fact that new ideas have been involved which other people didn't have the time or energy to appreciate. That can be an unhappy state of affairs, but time can change it. I've never been seriously bothered by it and have simply operated on the principle that good ideas will come through eventually. This has in fact been my experience.

Anyway, there are always so many other exciting projects to work on that one can't be very distracted by such disappointments, which may after all only be temporary.

What would you like to prove or see proven that is still open?

Oh, this is a hard kind of question for me. I belong to the class of mathematicians who are theory-builders more than problem-solvers. I get my satisfaction from being able to put a subject into a robust new framework which yields many new insights, rather than from cracking a hard nut like Fermat's last theorem. Of course, I spend a lot of time proving a lot of things, but for me the main challenge ultimately is trying to get others to look at something in a different and better way. Of course, that can be frustrating! But, to tie it in with an earlier question, a key part is getting students to follow the desired thought patterns. That's good for them and also for the theoretical progress. Without having been so deeply engaged with teaching for many years, I don't think I could have gone as far with my research.

So, if I would state my own idea of an open challenge, it would be, for instance, on the grand scale of enhancing the appreciation and use of "variational analysis" (by which I don't just mean my book!). I do nonetheless have specific results that I would like to be able to prove in several areas, but they would take much more space to describe.

What was the most gratifying paper you ever wrote? Why?

Oh, again very hard to say. There are so many papers, and so many years have gone by. And I've worked on so many different topics, often in different directions. Anyway, for "gratification" it's hard to beat books. The two books that I'm most proud of are obviously Convex Analysis and Variational Analysis. Both have greatly gratified me, both "externally" (recognition) and "internally" (personal feeling of accomplishment). So far, Convex Analysis has been the winner externally, but Variational Analysis is the winner internally.

About the interviewee: R. Tyrrell Rockafellar completed his undergraduate studies at Harvard University in 1957, and his PhD in 1963 at Harvard as well. He has been on the faculty of the Department of Mathematics of the University of Washington since 1966.

His research and teaching interests focus on convex and variational analysis, optimization, and control. He is well known in the field and his contributions can be found in several books and in more than one hundred papers.

Professor Rockafellar gave a plenary lecture at the conference Optimization 2001, held in Aveiro, Portugal, July 23-25, 2001.

Interviewer: L. N. Vicente (University of Coimbra, Portugal).

An Interview with M. J. D. Powell

Published originally in the Bulletin of the International Center for Mathematics, n. 14, June 2003. http://www.cim.pt/cimE/boletim.html

I am sure that our readers would like to know a bit about your academic education and professional career first. Why did you choose to go to the Atomic Energy Establishment (Harwell) right after college in 1959?

When I studied mathematics at school, nearly all of my efforts were applied to solving problems in text books, instead of reading the texts. Then my teachers marked and discussed my solutions instead of instructing me in a formal way. I enjoyed this kind of work greatly, especially when I was able to find answers to difficult questions myself. Thus I gained a good understanding of some fields of mathematics, but I became unwilling to learn about new subjects at a general introductory level, because I do not have a good memory, and to me it was without fun. I also disliked the breadth of the range of courses that one had to take at Cambridge University as an undergraduate in mathematics. Fortunately, I was able to complete that work adequately in two years, which allowed me to study for the Diploma in Numerical Analysis and Computation during my third year. It was a relief to be able to solve problems again most of the time, and the availability of the Edsac 2 computer was a bonus. I welcomed the use of analysis and the satisfaction of obtaining answers. I wished to continue this kind of work after graduating, but the possibility of remaining in Cambridge for a higher degree was not suggested to me. Contributing to academic research and publishing papers in journals were not suggested either, although I developed a successful algorithm for adaptive quadrature in a third year project. Therefore in 1959 I applied for three jobs at government research establishments, where I would assist scientists with numerical computer calculations. I liked the location of Harwell and the people who interviewed me there, so it was easy for me to accept their offer of employment.

You obtained your doctor of science only in 1979, twenty years after your bachelor's degree and three years after being hired as a professor in Cambridge. Why was that the case?

After graduating from Cambridge in 1959 with a BA degree, I had no intention of obtaining a doctorate. All honours graduates from Cambridge are eligible for an MA degree after about 3 further years, without taking any more courses or examinations, but from my point of view that opportunity was not advantageous, partly because one had to pay a fee. When I became the Professor of Applied Numerical Analysis at Cambridge in 1976, I was granted all the privileges of an MA automatically, and my official degree became BA with MA status. Two years later, I was fortunate to be elected as a Professorial Fellow at Pembroke College, and the Master of Pembroke suggested that I should follow the procedure for becoming a Master of Arts. Rather than expressing my reservations about it, I offered to seek an ScD degree instead, which required an examination of much of my published work. Thus I became an academic doctor in 1979.

Was it hard to adapt to academic life after so many years at Harwell?

After about five years at Harwell, most of my time was spent on research, which included the development of Fortran software for general computer calculations, the theoretical analysis of algorithms, and of course the publication of papers. The purpose of the administrative staff there was to make it easier for scientists to carry out their work. On the other hand, I found at Cambridge that one had to create one's own opportunities for research, which required some stubbornness and lack of cooperation, because of the demands of teaching, examining and admitting students, and also because administrative duties at universities can consume the time that remains, especially during terms. This change was particularly unwelcome, and is very different from the view that most of my relatives and friends have of university life. Indeed, when I was at Harwell they did not doubt that I had a full time job, but they assume that at Cambridge the vacations provide a life of leisure.

In your work in optimization we find several interesting and meaningful examples and counter-examples. Where did you get this training (assuming that not all is natural talent)? From your exposure to approximation theory? From the hand calculations of the old computing times?

The construction of examples and counter-examples is a natural part of my strong interest in problem solving, and of the development of software that I have mentioned. Specifically, numerical results during the testing of an algorithm often suggest the convergence and accuracy properties that are achieved, so conjectures arise that may be true or false. Answers to such questions are either proofs or counter-examples, and often I have tried to discover which of these alternatives applies. Perhaps my training started with my enjoyment of geometry at school, but then the solutions were available. I am pleased that you mention hand calculations, because I still find occasionally that they are very useful.

Was exemplification a relevant tool for you when you taught numerical analysis classes? Did your years as a staff member at Harwell influence your teaching?

My main aim when teaching numerical analysis to students at Cambridge was to try to convey some of the delightful theory that exists in the subject, especially in the approximation of functions. Only 36 lectures are available for numerical analysis during the three undergraduate years, however, except that there are also courses on computer projects in the second and third years, where attention is given to the use of software packages and to the numerical results that they provide. Moreover, in most years I also presented a graduate course of 24 lectures, in order to attract research students. The main contribution to my teaching from my years at Harwell was that I became familiar with much of the relevant theory there, because it was developed after I graduated in 1959, but I hardly ever mentioned numerical examples in my lectures, because of the existence of the Cambridge computer projects, and because the mathematical analysis was more important to my teaching objectives. Therefore my classes were small. Fortunately, some of the strongest mathematicians who attended them became my research students. I am delighted by their achievements.

Could you tell us how computing resources evolved at Harwell in the sixties and seventies and how that impacted on the numerical calculations of those times?

Beginning in 1958, I have always found that the speed of computers and the amount of storage are excellent, because of the huge advances that occur about every three years. On the other hand, the turnaround time for the running of computer programs did not improve steadily while I was at Harwell. Indeed, for about four years after I started to use Fortran in 1962, those programs were run on the IBM Stretch computer at Aldermaston, the punched cards being transported by car. Therefore one could run each numerical calculation only once or twice in 24 hours. Of course it was annoying to have to wait so long to be told that one had written dimesnion instead of dimension, but ever since I have been grateful for the careful attention to detail that one had to learn in that environment. Moreover, it was easier then to develop new algorithms that extend the range of calculations that can be solved. Conveying such advances to Harwell scientists was not straightforward, however, mainly because they wrote their own computer programs, using techniques that were familiar to them. The Harwell Subroutine Library, which I started, was intended to help them, and to reduce duplication in Fortran software. Often it was highly successful, but many computer users, both then and now, prefer not to learn new tricks, because they are satisfied by the huge gains they receive from increases in the power of computers.

You once wrote: "Usually I produced a Fortran program for the Harwell subroutine library whenever I proposed a new algorithm, ..." [1]. In fact, writing numerical software has always been a concern of yours. Could you have been the same numerical analyst without your numerical experience?

[1] A View of Nonlinear Optimization, in History of Mathematical Programming: A Collection of Personal Reminiscences (J.K. Lenstra, A.H.G. Rinnooy Kan, and A. Schrijver, eds), North-Holland (Amsterdam), 119-125 (1991).

My principal duty at Harwell was to produce Fortran programs that were useful for general calculations, which justified my salary. My work on the theoretical side of numerical analysis was also encouraged greatly, and its purpose was always to advance the understanding of practical computation. Indeed, without numerical experience, I would be cut off from my main source of ideas. It is unusual for me to make progress in research by studying papers that other people have written. Instead I seek fields that may benefit from a new algorithm that I have in mind. I also try to explain and to take advantage of the information that is provided by both good and bad features of numerical results.

Roger Fletcher wrote once that "your style of programming is not what one might call structured". Some people think that a piece of software should be well structured and documented. Others that it should be primarily efficient and reliable. What are your views on this?

I never study the details of software that is written by other people, and I do not expect them to look at my computer programs. My writing of software always depends on the discipline of subroutines in Fortran, where the lines of code inside a subroutine can be treated as a black box, provided that the function of each subroutine is specified clearly. Finding bugs in programs becomes very painful, however, if there are any doubts about the correctness of the routines that are used. Therefore I believe that the reliability and accuracy of individual subroutines is of prime importance. If one fulfils this aim, then in my opinion there is no need for programs to be structured in a formal way, and conventional structures are disadvantageous if they do not suit the style of the programmer who must avoid mistakes. Those people who write reliable software usually achieve good efficiency too. Of course it is necessary for the documentation to state what the programs can do, but otherwise I do not favour the inclusion of lots of internal comments.

And by the way, how do you regard the recent advances in software packages for nonlinear optimization?

Most of my knowledge of recent advances in software packages has been gained from talks at conferences. I am a strong supporter of such activities, as they make advances in numerical analysis available for applications. My enthusiasm diminishes, however, when a speaker claims that his or her software has solved successfully about 90% of the test problems that have been tried, because I could not tolerate a failure rate of 10%. Another reservation, which applies to my programs too, is that many computer users prefer software that has not been developed by numerical analysts. I have in mind the popularity of simulated annealing and genetic algorithms for optimization calculations, although they are very extravagant in their use of function evaluations.

Many people working in numerical mathematics undervalue the paramount importance of numerical linear algebra (matrix calculations). Would you like to comment on this issue? How often was research in numerical linear algebra essential to your work in approximation and optimization?

An optimization algorithm is no good if its matrix calculations do not provide enough accuracy, but, whenever I try to invent a new method, I assume initially that the computer arithmetic is exact. This point of view is reasonable for the minimization of general smooth functions, because techniques that prevent serious damage from nonlinear and non-quadratic terms in exact arithmetic can usually cope with the effects of computer rounding errors, as in both cases one has to restrict the effects of perturbations. Therefore I expect my algorithms to include stability properties that allow the details of the matrix operations to be addressed after the principal features of the algorithm have been chosen. Further, I prefer to find ways of performing the matrix calculations myself, instead of studying relevant research by other people.

I read in one of your articles that "a referee suggested rejection because he did not like the bracket notation". What is your view about the importance of refereeing? How do you classify yourself as a referee?...

The story about the bracket notation is remarkable, because the paper that was nearly rejected is the one by Roger Fletcher and myself on the Davidon–Fletcher–Powell (DFP) algorithm. As a referee, I ask whether submitted work makes a substantial contribution to its subject, whether it is correct, and whether the amount of detail is about right. I believe strongly that we can rely on the accuracy of published papers only if someone, different from the author(s), checks every line that is written, and in my opinion that task is the responsibility of referees. When it is done carefully, then refereeing becomes highly important. I try to act in this way myself, but, because my general knowledge of achievements in my fields is not comprehensive, I often consider submissions in isolation, although I should relate them to published work.

Actually, in my previous question I had in mind the difficulty that others might face in meeting your high standards. This brings me to your activity as a Ph.D. adviser. What difficulties and what rewards do you encounter when advising Ph.D. students?

Of course I take the view that my requirements for the quality of the work of my PhD students are reasonable. I require their mathematics to be correct, I require relevance to numerical computation, and I require some careful investigations of new ideas, instead of a review of a subject with some superficial originality. Further, I prefer my students to work on topics that are not receiving much attention from other researchers, in order that they can become leading experts in their fields. Some of them have succeeded in this way, which is a great reward, but two of them switched to less demanding supervisors, and another one switched to a well paid job instead of completing his studies. I also had a student that I never saw after his first four terms. Eventually he submitted a miserable thesis, that was revised after his first oral examination, and then the new version was passed by the examiners, but the outcome would have been different if university regulations had allowed me to influence the result. On the other hand, all my other students have done excellent work and have thoroughly deserved their PhDs. One difficulty has occurred in several cases, namely that, because each student has to gain experience and to make advances independently, one may have to allow his or her rate of progress to be much slower than one could achieve oneself. Another difficulty is that my knowledge of pure mathematics has been inadequate for easy communication between myself and most of my students who have studied approximation theory. Usually they were very tolerant about my ignorance of distributions and properties of Fourier transforms, for example, but my heart sinks when I am asked to referee papers that depend on these subjects.

Most of your publications are single-authored. Why do you prefer to work on your own?

I believe I have explained already why I enjoy working on my own. Therefore, when I begin some new research, I do not seek a co-author. Moreover, as indicated in the last paragraph, I prefer my students to make their own discoveries, so usually I am not a co-author of their papers.

I have been trying to avoid technical questions, but there is one I would like to ask. What is your view on interior-point methods (a topic to which you made only a couple of — but, as always, relevant and significant — contributions)?

My view of interior point methods for optimization calculations with linear constraints is that it seems silly to introduce nonlinearities and iterative procedures for following central paths, because these complications are not present in the original problem. On the other hand, when the number of constraints is huge, then algorithms that treat constraints individually are also unattractive, especially if the attention to detail causes the number of iterations to be about the number of constraints. It is possible, however, to retain linear constraints explicitly, and to take advantage of the situation where the boundary of the feasible region has so many linear facets that it seems to be smooth. This is done by the TOLMIN software that I developed in 1989, for example, but the number of variables is restricted to a few hundred, because quadratic models with full second derivative matrices are employed. Therefore eventually I expect interior point methods to be best only if the number of variables is large. Another reservation about this field is that it seems to be taking far more than its share of research activity.

You published a book in approximation theory. Have you ever thought about writing a book in nonlinear optimization?

My book on Approximation Theory and Methods was published in 1981. Two years later, my son died in an accident, and then I wished to write a book on Nonlinear Optimization that I would dedicate to him. I have not given up this idea, but other duties, especially the preparation of work for conferences and their proceedings, have caused me to postpone the plan. Of course, because of the circumstances, I would try particularly hard to produce a book of high quality.

Let me end this interview with the very same questions I asked T.R. Rockafellar (who, by the way, shared with you the first Dantzig Prize in 1982). Have you ever felt that a result of yours was unfairly neglected? Which? Why? What would you like to prove or see proven that is still open (both in approximation theory and in nonlinear optimization)? What was the most gratifying paper you ever wrote? Why?

I was taught the FFT (Fast Fourier Transform) method by J.C.P. Miller in 1959, and then it made Cooley and Tukey famous a few years later. Moreover, my 1963 paper with Roger Fletcher on the DFP method is mainly a description of work by Davidon in 1959, and it has helped my career greatly. Therefore, by comparison, none of my results has been unfairly neglected. My main theoretical interest at present is trying to establish the orders of convergence that occur at edges, when values of a smooth function are interpolated by the radial basis function method on a regular grid, which is frustrating, because the orders are shown clearly by numerical experiments. In nonlinear optimization, however, most of my attention is being given to the development of algorithms. Since you ask me to mention a gratifying paper, let me pick "A method for nonlinear constraints in minimization problems", because it is regarded as one of the sources of the "augmented Lagrangian method", which is now of fundamental importance in mathematical programming. I have been very fortunate to have played a part in discoveries of this kind.
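
In its now-standard textbook form (a minimal sketch with generic multiplier estimate \(\lambda\) and penalty parameter \(\rho > 0\), rather than Powell's original notation), the augmented Lagrangian for minimizing \(f(x)\) subject to \(c(x) = 0\) is

\[
L_A(x, \lambda; \rho) \;=\; f(x) \;+\; \lambda^T c(x) \;+\; \tfrac{\rho}{2}\,\|c(x)\|_2^2,
\]

with the multiplier update \(\lambda \leftarrow \lambda + \rho\, c(x)\) after each approximate minimization over \(x\).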

About the interviewee: M. J. D. Powell completed his undergraduate studies at the University of Cambridge in 1959. From 1959 to 1976 he worked at the Atomic Energy Establishment, Harwell, where he was Head of the Numerical Analysis Group from 1970. He has been John Humphrey Plummer Professor of Applied Numerical Analysis, University of Cambridge, since 1976 and a Fellow of Pembroke College, Cambridge, since 1978.

He made many seminal contributions in approximation theory, nonlinear optimization, and other topics in numerical analysis. He has written a book in approximation theory and more than one hundred and fifty papers.

Interviewer: L. N. Vicente (University of Coimbra, Portugal).

An Interview with W. R. Pulleyblank

Published originally in the Bulletin of the International Center for Mathematics, no. 15, December 2003.

http://www.cim.pt/cimE/boletim.html

Many of us have no idea what the research environment in a private laboratory like the IBM T.J. Watson Research Center is like. Could you start by telling us about this research environment, in particular the one in the Department of Mathematical Sciences?

There is probably as much difference between different industrial research laboratories as there is between different universities. IBM Research has consistently had a mission that combined carrying out a top level scientific research agenda with the desire to make the results relevant to the corporation. In some ways, the Mathematical Sciences Department operates like a university department. We write and referee papers, edit journals and present papers at conferences. Some department members teach courses and supervise graduate students at nearby universities, for example, Columbia, NYU, Yale and MIT. However, there are significant differences. Many of the problems we work on come from IBM customers and other units within IBM. Often we are able to apply our work directly to real world problems. In addition, we always have the possibility of seeing the results of our research realized in the form of products. We do get pretty excited when this happens.

I recently heard a biographer of T. J. Watson (the father) emphasizing the importance of Research in the early days of the IBM company. Do you also think that research has played a vital role in the long success of the company?

Absolutely. When Lou Gerstner, our previous Chairman, formulated the principles that he wanted to guide the company, the first was "at our core, we are a technology company". He was a strong supporter of Research, as is our present chairman, Sam Palmisano. There is a feeling here that the things we do really have a chance to have an impact on the company and on our customers. It is very energizing.

In particular, how do you envision the Department of Mathematical Sciences thirty years from now? Will research staff there continue to do basic research and prove theorems?

I hope so. The model of combining serious mathematics with doing things that have the potential to positively affect the company has been remarkably successful, and robust. Examples range from devices like the Trackpoint, to software systems like OSL, to inventing new algorithms for digital half-toning. At the same time, there has been a remarkable collection of papers and books written by department members. The legacy of current and former department members like Shmuel Winograd, Mike Shub, Roy Adler, Alan Hoffman, Ellis Johnson, Ralph Gomory and Herman Goldstine sets quite an example!

Now, let us focus on your career. In 1990 you moved from the University of Waterloo in Canada to the Watson Research Center. Would you like to comment on those times? What was the driving force that made you move?

In 1987, I was awarded an NSERC Industrial Research Chair in Optimization and Computer Applications, in part funded by CP Rail. This gave me a chance to expand the applied part of my research program, and also started me thinking about what I would do for the next 25 years of my career. In the spring of 1989, Ellis Johnson called and said he supposed that I would not move, but wanted suggestions for a possible successor to himself as Manager of the Optimization Center. It got me thinking about alternatives and I came here for a visit. I soon realized that IBM Research would be an excellent place to work on applied problems. Also, earlier in my career I had worked for IBM as a systems engineer. I had always had a high regard for the company, and the Watson Research Center had always seemed to me to be an exciting place.

So, after a lot of discussion with my wife, Diane, we decided to give it a shot and here we are.

How did your research program change as a result of moving to industry?

It evolved. I have always been an interactive mathematician, enjoying working with co-authors. Here I had the chance to work with people like John Tomlin, John Forrest and Ellis Johnson. I became very interested in computational questions. I was granted my first patent ever, jointly with John Tomlin and Alan Hoffman — an application of the Koenig edge coloring theorem to make certain matrix computations much more efficient.

A few years later you were chairing the Department of Mathematical Sciences in the Watson Research Center. How do you describe the leadership skills required for this job in comparison to those needed for a similar academic position?

The scope of a department Director's job is in some ways similar in scale to that of a dean. Tom Brzustowski, when he was Provost at the University of Waterloo, described the faculty at the university as "800 small businessmen sharing a library and a parking lot". Today, he would probably add a computer network to the list. IBM is a much more hierarchical organization. When someone becomes a manager, it is not assumed that it is a temporary, three to five year, assignment. A department Director has a responsibility to generate funding for the department as well as to make sure that the careers of department members are progressing satisfactorily. In addition, it is important to understand and be able to present all the work done in a department. This has really encouraged me to broaden my outlook. For example, I am sure that I know much more Computational Biology now than I ever would have learned in a university mathematics department.

Do you think that an academic training can position one better for industry than the other way around? I mean, do you think that someone with a career in industry would have a more difficult time chairing an academic department?

I believe that some of the skills needed for success in an industrial research position can be learned on the job. However, I think that the only way to understand what it takes to carry out a serious research agenda is to do it oneself. I think it is feasible for a person who has worked in an industrial research lab like IBM to be quite successful in a university environment — there are several examples I can think of who have made this switch. However, I have not seen many people who have not got a research background being very successful running a research department in industry or at a university.

How did you find time to write a book during those years? Has it paid off?

Ron Graham once, when asked a similar question, said that his secret is that every day contains 24 hours! You can do a lot of things if you decide to do so. In the case of our best seller Combinatorial Optimization (number 158,831 on the Amazon best seller list!), the big thing was the co-authors. Bill Cook and I launched it one night at Oberwolfach. We began by getting a group of luminaries to contribute comments for the back of the dustcover. Our plan was to write the index next, because then, we thought, it would be simple to write the book — just see what page we were on, check in the index, and see what had to go there. Later Bill Cunningham and Lex Schrijver joined the project. It was really interesting working through this material, that we all really loved, trying to combine our different pedagogic approaches.

I enjoyed doing it — I learned a lot and am very satisfied with the end result. However, I still earn more from my day job than from my book royalties.

Could you also tell us a bit about the Deep Computing project which you are currently coordinating? Has it been a rewarding experience for you? How will it impact IBM's future development policies?

The Deep Computing Institute at IBM Research was formed following our second chess match with Garry Kasparov in 1997 (which IBM's Deep Blue won 3.5 to 2.5!). The challenge was to see what we could do to take the ideas and apply them to a much broader set of problems. The idea of combining large amounts of computation and data to solve business and scientific decision problems is very broad, and the challenge has been to make it concrete. The breadth of topics — from simulation to optimization to data mining to advanced computation — has been extremely interesting and has, I believe, led to some interesting research. For example, one of the projects I am currently leading is to construct BlueGene — the largest supercomputer in the world (by a large margin).

Let us now backtrack to the old days in Waterloo. It always impressed me that Waterloo has a School of Mathematics consisting of different departments. Did you see it as positive too?

The University of Waterloo has always been a very successful and innovative institution. In the sixties, the university decided to focus on mathematics, engineering, and an emerging discipline — computer science. It pioneered co-op education in Canada. The idea of creating a Faculty of Mathematics, including pure and applied mathematics, statistics, computer science and "Combinatorics and Optimization", had very positive consequences. There were stresses and conflicts that had to be resolved, but it seemed easier because of the common background of so many of the faculty members. And, it was really fun being bigger than Engineering!

How exciting was it to do combinatorics in Waterloo? Who had the greatest impact on you? Do you miss those times?

It was wonderful. I spent two and a half years at Waterloo as a PhD student and nine years there as a faculty member. I had the great fortune to be part of an extraordinary group of researchers in C&O. Jack Edmonds was a huge influence and we were all inspired by being able to work around Bill Tutte. I really enjoyed the time I spent as Managing Editor of Journal of Combinatorial Theory–B with Bill, Adrian Bondy and U.S.R. Murty. One of the exciting things was the set of visitors continually passing through. I also really enjoyed working with some of the young pups — we had quite a few Bruces at Waterloo — ranging from PhD students to Full Professors. They were an amazing bunch of colleagues. Somehow the group of students, postdocs and faculty members formed an amazingly homogeneous group of researchers. The thing that mattered most was the mathematics — everything else existed to support that.

We talked about mathematics in Canada and I don't want to miss this opportunity to ask you to compare the pre-college mathematical education in Canada to the one in the United States.

Clearly there is a huge variety within both countries. I do think that the Canadian system has been more strongly influenced by the British, or European, model, and we expect students to take significant responsibility for their own programs and activities. The American system has huge diversity — ranging from top tier research schools to nurturing educational environments. Top schools in both countries are very competitive with each other.

Also, do you think graduate programs in the US are stronger than in Canada, especially when it comes to applied mathematics and connections with industry?

I like the practice of including external examiners on PhD committees in Canada. I believe that it raises the standard of the doctoral program and ensures a high quality of result. I think that the NSERC funding programs have been remarkably effective in supporting a broad base of graduate research. However, the much larger size of the United States educational enterprise does result in a huge variety of opportunities.

Both systems work — top graduates from both systems carry out excellent research programs and have great careers.

It is time to end this interview with your future projects. What do you have in hand for the next few years?

The big thing right now is building BlueGene — a single computer with about as much power as the total of the world's 500 largest machines today. This includes hardware, software and finding ways to construct applications that can exploit this machine. We should be able to solve some pretty big optimization problems very quickly!

How and when would you like to end your career...?

I don't think of my career ending as much as changing focus. There are still many things that I have not had time to do yet — understand quantum mechanics, the proof of Fermat's Last Theorem, and how the human cell translates DNA into proteins. I'd also like to finish some of the novels that I have started. And, I've still got a long way to go before Eric Clapton will consider me a rival on blues guitar.

About the interviewee: W. R. Pulleyblank chaired the Department of Mathematical Sciences of the IBM T. J. Watson Research Center from 1994 to 2000. He is currently directing the Deep Computing Institute of this Center.

He held faculty positions at the University of Calgary (1974-1981) and at the University of Waterloo (1982-1991), before moving to the IBM Corporation in 1991.

Bill Pulleyblank is one of the authors of the book Combinatorial Optimization, John Wiley and Sons, 1998, and the author of more than seventy research papers in this field. He has served on an extensive number of external and editorial boards.

Interviewer: L. N. Vicente (University of Coimbra, Portugal).

Bulletin

1. Workshop Announcements

V Brazilian Workshop on Continuous Optimization: In honor of the 60th birthday of Clovis Caesar Gonzaga
March 22–25, 2004, UFSC – Florianópolis, Brazil

http://jurere.mtm.ufsc.br/~workshop

The V Brazilian Workshop on Continuous Optimization will take place at the Jurere Beach Village, in Florianópolis, Brazil, between March 22 and 25, 2004. Subjects to be discussed encompass theoretical, computational and implementation issues, in both linear and nonlinear programming, including variational inequalities, complementarity problems, nonsmooth optimization, vector optimization, generalized equations, etc. The workshop will consist of invited talks and sessions with contributed talks.

Large Scale Nonlinear and Semidefinite Programming: Workshop in memory of Jos Sturm
May 12–15, 2004, University of Waterloo, Canada

http://orion.math.uwaterloo.ca/~hwolkowi/w04workshop.d/readme.html

The breathtaking progress in algorithmic nonlinear optimization, but also in computer hardware, has thrown new light on the solution of large scale nonlinear programs. The aim of the workshop is therefore to bring together researchers from several communities, such as: algorithmic nonlinear optimization; combinatorial optimization, dealing with computational methods for NP-hard problems; and computer scientists interested in scientific parallel computing, who share a common interest in doing computations on (large-scale) hard combinatorial optimization problems.

Topics to be covered include: general nonlinear programming problems, semidefinite programming and reformulation schemes, quadratic and multi-dimensional assignment problems, boolean quadratic programming, massive graph problems, massively parallel distributed processing, VLSI design.

Invited plenary speakers: S. Boyd, N. I. M. Gould, D. Henrion, M. Kocvara, J. Lasserre, Y. Nesterov, J. Nocedal, P. Parrilo, J. Sturm (who was scheduled to be a plenary speaker but passed away on Saturday, Dec. 6, 2003), R. Vanderbei, Y. Ye.

On the first day of the conference, Wednesday May 12, 2004, there will be two short courses: (1) Theory and Applications of SDP (9:00-12:00), including motivation as to how large scale SDPs arise both in theory and applications; (2) Algorithms for SDP (14:00-17:00).

IPCO X
June 9–11, 2004, Columbia University, New York City, USA
IPCO Summer School, June 7–8, 2004

http://www.corc.ieor.columbia.edu/meetings/ipcox/ipcox.html

This meeting, the tenth in the series of IPCO conferences, is a forum for researchers and practitioners working on various aspects of integer programming and combinatorial optimization. The aim is to present recent developments in theory, computation, and applications of integer programming and combinatorial optimization.

Topics include, but are not limited to: integer programming, polyhedral combinatorics, cutting planes, branch-and-cut, lift-and-project, semidefinite relaxations, geometry of numbers, computational complexity, network flows, matroids and submodular functions, 0,1 matrices, approximation algorithms, scheduling theory and algorithms.

In all these areas, the organizers welcome structural and algorithmic results, revealing computational studies, and novel applications of these techniques to practical problems. The algorithms studied may be sequential or parallel, deterministic or randomized.

The IPCO Summer School will take place June 7 and 8, 2004, and will present the following speakers: J. Feigenbaum, T. Roughgarden, R. Vohra. The summer school will focus on the interactions between operations research, computer science, and economics.

8th International Workshop on High Performance Optimization Techniques: Optimization and Polynomials (HPOPT 2004)
June 23–25, 2004, CWI, Amsterdam, The Netherlands

http://www.cwi.nl/~monique/hpopt20041

This workshop focuses on the recent exciting developments on the interplay between optimization and polynomials, in particular, the field of real algebraic geometry dealing with representations of positive polynomials as sums of squares. The linking element relies on the following facts:
• tight approximations for optimization problems can be constructed via sums of squares of polynomials (and the dual theory of moments);
• sums of squares of polynomials can be modelled using semidefinite programming (sketched below) and thus can be computed efficiently with interior-point algorithms.
The theoretical justification for the convergence of the relaxed bounds is provided by representation results for positive polynomials. The aim of this workshop is to bring together researchers with interests in optimization and polynomial representations. The objective is to provide a forum for the exchange of ideas and knowledge about algorithmic developments and new theoretical achievements.
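
As a minimal sketch of this modelling step (standard notation, not taken from the announcement): a polynomial \(p\) of even degree \(2d\) is a sum of squares exactly when it admits a Gram-matrix representation

\[
p(x) = z(x)^T Q \, z(x), \qquad Q \succeq 0,
\]

where \(z(x)\) is the vector of monomials of degree at most \(d\). Checking the sum-of-squares property, or maximizing a lower bound \(\gamma\) such that \(p - \gamma\) is a sum of squares, is therefore a semidefinite feasibility or optimization problem.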

The workshop will begin with a one-day tutorial on the theme Sums of Squares in Optimization, whose aim is to present results about polynomial representations and optimization to non-specialists. The tutorial lectures will be delivered by three experts in the area.

International School of Mathematics "G. Stampacchia", 40th Workshop, Large Scale Nonlinear Optimization
June 22 – July 1, 2004, Erice, Italy
http://www.dis.uniroma1.it/~erice2004

The workshop aims to review and discuss recent advances in the development of methods and algorithms for Nonlinear Optimization and its Applications, with a main interest in the large scale case, the current forefront of the research effort. It is the fourth in a series of Workshops on Nonlinear Optimization: the preceding ones were held in 1995, 1998, and 2001.

Topics include, but are not limited to: constrained and unconstrained optimization, large scale optimization, global optimization, derivative-free methods, interior point techniques for nonlinear programming, linear and nonlinear complementarity problems, variational inequalities, nonsmooth optimization, neural networks and optimization, innovative applications of nonlinear optimization.

The workshop will consist of invited lectures and contributed lectures. Invited lecturers who have confirmed their participation are: D. P. Bertsekas, F. Bonnans, O. Burdakov, Y. Evtushenko, F. Facchinei, J. Gondzio, N. I. M. Gould, A. Griewank, L. Grippo, S. Leyffer, L. Luksan, J. More, J. Nocedal, J.-S. Pang, P. Pardalos, M. J. D. Powell, L. Qi, E. W. Sachs, S. Scholtes, J. P. Vial, H. Wolkowicz, M. H. Wright, J. Zhang.

Workshop on Linear Matrix Inequalities in Control
July 1–2, 2004, LAAS-CNRS, Toulouse, France
http://www.laas.fr/~henrion/lmi04

The workshop aims at reporting the latest achievements in the area of linear matrix inequality (LMI) methods in systems control. Confirmed invited speakers are: V. R. Balakrishnan, B. Clement, J. C. Geromel, F. Leibfritz, E. Prempain, C. W. Scherer, L. Vandenberghe.

Optimization 2004
July 25–28, 2004, University of Lisbon, Portugal
http://www.opti2004.fc.ul.pt

Optimization 2004 is the fifth international conference on optimization to be held in Portugal since 1991. The meeting will be held at the Faculty of Science, University of Lisbon.

The invited plenary sessions are: W. J. Cook (Solving Traveling Salesman Problems), M. Dorigo (The Ant Colony Optimization Metaheuristic), C. A. Floudas (Deterministic Global Optimization: Advances and Challenges), P. E. Gill (Recent Advances in Large-Scale Nonlinearly Constrained Optimization), T. L. Magnanti (All Roads Lead to Lisbon: Some Modeling Approaches for Designing Optimal Networks), F. Rendl (Does Semidefinite Programming help to Approximate Combinatorial Optimization Problems?).

A special stream on "Network Design" will also be organized. A collection of selected papers of this stream will be published in a special issue of Networks, to be edited by L. Gouveia and S. Voss.

4th Annual McMaster Optimization Conference: Theory and Applications (MOPTA 04)
July 28–30, 2004, McMaster University, Canada
http://www.cas.mcmaster.ca/~mopta

The 4th annual McMaster Optimization Conference (MOPTA 04) will be held on the campus of McMaster University. It will be hosted by the Advanced Optimization Lab at the Department of Computing and Software and is co-sponsored by the Fields Institute, MITACS, and IBM Canada.

The conference aims to bring together a diverse group of people from both discrete and continuous optimization, working on both theoretical and applied aspects. We aim to bring together researchers from both the theoretical and applied communities who do not usually get the chance to interact in the framework of a medium-scale event.

Invited speakers include: A. Ben-Tal, L. Biegler, J. Lee, J.-S. Pang, C. Shoemaker, G. Vanderplaats.

Inaugural International Conference on Continuous Optimization (ICCOPT I)
August 2–4, 2004, Rensselaer Polytechnic Institute, Troy, New York, USA
http://www.math.rpi.edu/iccopt

The conference is sponsored in part by the Mathematical Programming Society. It is organized in cooperation with the Society for Industrial and Applied Mathematics (SIAM) and the SIAM Activity Group on Optimization.

The scientific program of ICCOPT will cover all major aspects of continuous optimization: theory, algorithms, applications, and related problems. A partial list of topics includes linear, nonlinear, and convex programming; equilibrium programming; semidefinite and conic programming; stochastic programming; complementarity and variational inequalities; nonsmooth and variational analysis; nonconvex and global optimization; optimization of partial differential systems; applications in engineering, economics, finance, statistics, game theory, and bioinformatics; energy modeling and electric power market modeling; optimization over computing grids; modeling languages and web-based optimization systems.

The conference will consist of a mixture of plenary, semiplenary, invited, and contributed talks. It is anticipated that at most four sessions will be scheduled in parallel. Selected papers will appear in a special issue of Mathematical Programming Series B.

A dedicated session will be devoted to papers by young colleagues, to be chosen by a panel of reviewers. See the separate Call for Papers by Young Researchers for details, including guidelines and submission information.

The conference will be preceded by a summer school for graduate students, junior faculty, and other interested participants, which will describe some of the recent exciting developments in continuous optimization.

Eighth SIAM Conference on Optimization, 2005 SIAG-OPT VIII
Sunday to Wednesday, May 15–18, Stockholm, Sweden

The conference is sponsored by the SIAM Activity Group on Optimization and the invited speakers include:

• D. Bienstock, Columbia University, Discrete Optimization and Network Design;

• M. C. Ferris, University of Wisconsin, Complementarity Problems and Applications;

• O. Ghattas, Carnegie Mellon University, Large Scale Earthquake Inversion Problems;

• J. Gondzio, The University of Edinburgh, Practical Aspects for Large Scale Interior-Point Methods;

• M. Kojima, Tokyo Institute of Technology, Parallel Computing for Semidefinite Programming and Polynomial Optimization;

• M. Laurent, CWI, Semidefinite Programming and Approximation Algorithms in Combinatorial Optimization;

• A. Shapiro, Georgia Institute of Technology, Stochastic Programming;

• A. Sofer, George Mason University, Optimization in Medicine.

More details (including information about short courses) will appear in upcoming issues of SIAG/OPT Views-and-News.

2. Special Issue

Call for papers: Mathematical Programming Series B, Special Issue on "Large Scale Nonlinear and Semidefinite Programming"
http://orion.math.uwaterloo.ca/~hwolkowi/w04workshop.d/mpbspecialissue.d/callforpapers.html

We invite research articles for a forthcoming issue of Mathematical Programming, Series B, on "Large-Scale Nonlinear and Semidefinite Programming". This issue is in memory of and dedicated to Jos Sturm.

The issue is associated with the May 2004 Workshop at the University of Waterloo on this topic. Interest in algorithmic nonlinear and semidefinite optimization has increased in recent years for many reasons, of which we mention three. First, the great success of interior-point methods (IPMs) in linear programming, at both a practical and theoretical level, and the development of highly appealing interior-point theory and methods for cone programming problems, has motivated researchers to seek practical interior-point methods for wide classes of large-scale nonlinear problems. Second, the door to practical solution of hard combinatorial optimization problems has been opened by nonlinear and semidefinite relaxations of these problems, along with astonishing advances in computational hardware. Third, many new applications of nonlinear and semidefinite programming have been identified in recent years, but solution of practical instances awaits the development of algorithms for large-scale problems.

The focus of this special issue is on large-scale nonlinear programming, including cone programming problems such as semidefinite and second-order cone programming. We invite papers that address the following topics, individually or in combination:

• algorithmic nonlinear optimization;

• computational methods and NP-hard problems involving large-scale nonlinear and/or semidefinite programming;

• scientific parallel computing for (large-scale) nonlinear and semidefinite programming and its application to hard combinatorial optimization problems.

Deadline for submission of full papers: Nov. 1, 2004. We aim at completing a first review of all papers by May 1, 2005.

Electronic submissions to the guest editors in the form of pdf files are encouraged. All submissions will be refereed according to the usual standards of Mathematical Programming. Information about Mathematical Programming, Series B, including author guidelines and other special issues in progress, is available at the URL http://www.cs.wisc.edu/~swright/mpb. Additional information about the special issue can be obtained from the guest editors:
Erling Andersen (e.d.andersen@mosek.com),
Etienne de Klerk (edeklerk@uwaterloo.ca),
Levent Tuncel (ltuncel@math.uwaterloo.ca),
Henry Wolkowicz (hwolkowicz@uwaterloo.ca),
Shuzhong Zhang (zhang@se.cuhk.edu.hk).

Chairman’s Column

It is time to welcome the new committee members for our SIAG-Opt.

First, I would like to thank Luís for stepping in to replace Jos as editor of our newsletter. As mentioned in Luís' comments below, Jos did an exceptional job. We will all miss his pleasant nature and his contributions to our field.

Kurt Anstreicher is our new leader and Chair. He will be assisted by Bob Vanderbei as Vice Chair, Sven Leyffer as Program Director, and Kees Roos as Secretary/Treasurer. We (the previous committee) are leaving the activity group in exciting and capable hands.

As part of the current committee, I am fortunate to be involved in the organization of the upcoming SIAM Conference on Optimization to be held May 15-18 in Stockholm, Sweden. I hope that you will all participate. We have already chosen eight outstanding plenary speakers for our Eighth Conference. Consider this a request both for contributed papers and for minisymposium organizers. The conference will be extended to a fourth day if we get enough preregistered participants. So, once the preregistration web site is up, please tell your friends AND preregister.

It seems that my term as Chair has just started rather than already ended. I have particularly enjoyed working with the other members of my committee. Thank you Philippe, Anders, Natalia, and thank you Jos.

Henry Wolkowicz, SIAG/OPT Chair
Department of Combinatorics and Optimization
University of Waterloo
Waterloo, Ontario N2L 3G1
hwolkowi@uwaterloo.ca
http://orion.math.uwaterloo.ca/~hwolkowi

Comments from the Editor

As the new editor of the SIAG/Optimization Views-and-News, I would first like to thank my predecessors, Larry Nazareth, Juan C. Meza, and Jos F. Sturm, for their excellent work. They have given our newsletter a reputation of the highest quality.

It is a great honor for me to continue the work started by Jos. I feel that my responsibility is increased since this was a job that he was unfortunately unable to continue. I first met Jos at a workshop at the IMA in Minneapolis in the Winter of 2003. We did not have much time to get to know each other, but I appreciated talking to him and I enjoyed the too brief moments we spent together. I will do my best to maintain the high standards that he has established for this newsletter.

This issue contains two expository articles by J. C. Nash (on a statistical tool called R) and by A. Y. Alfakih and H. Wolkowicz (on Euclidean distance matrices) which were in the queue when I assumed my editorship. In addition, the issue features interviews with three of our greatest optimizers: Terry Rockafellar, Mike Powell, and Bill Pulleyblank. I hope you enjoy reading their opinions and thoughts as much as I did.

I take this opportunity to ask the community for contributions to the newsletter, e.g.: expository articles on interesting topics (ranging from applications or case studies to theory or software), announcements of events, book and software releases, etc.

Luís N. Vicente, Editor
Department of Mathematics
University of Coimbra
3001-454 Coimbra
lnv@mat.uc.pt
http://www.mat.uc.pt/~lnv