
High-performance computing in finance: The last 10 years and the next

Stavros A. Zenios *

Department of Public and Business Administration, University of Cyprus, Kallipoleos 75, P.O. Box 537, Nicosia 1678, Cyprus

Abstract

Almost two decades ago supercomputers and massively parallel computers promised to revolutionize the landscape of large-scale computing and provide breakthrough solutions in several application domains. Massively parallel processors today achieve teraFLOPS performance – trillion floating point operations per second – and they deliver on their promise. However, the anticipated breakthroughs in application domains have been more subtle and gradual. They came about as a result of combined efforts with novel modeling techniques, algorithmic developments based on innovative mathematical theories, and the use of high-performance computers that vary from top-range workstations, to distributed networks of heterogeneous processors, to massively parallel computers. An application that benefited substantially from high-performance computing is that of finance and financial planning. The advent of supercomputing coincided with the so-called "age of the quants" on Wall Street, i.e., the mathematization of problems in finance and the strong reliance of financial managers on quantitative analysts. These scientists, aided by mathematical models and computer simulations, aim at a better understanding of the peculiarities of the financial markets and the development of models that deal proactively with the uncertainties prevalent in these markets. In this paper we give a modest synthesis of the developments of high-performance computing in finance. We focus on three major developments: (1) the use of Monte Carlo simulation methods for security pricing and Value-at-Risk (VaR) calculations; (2) the development of integrated financial product management tools and practices – also known as integrative risk management or enterprise-wide risk management; and (3) financial innovation and the computer-aided design of financial products. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: High-performance computing; Finance; Financial planning; Monte Carlo simulation; Security pricing; Risk management; Financial innovation

Parallel Computing 25 (1999) 2149–2175
www.elsevier.com/locate/parco

Research partially supported by the INCO-DC grant HPCFIN from the European Commission on High-Performance Computing for Financial Planning. The paper was written while the author was visiting the University of Vienna, and the support of Professor Georg Pflug in making the visit possible is acknowledged. The author benefited from the comments of seminar participants at the University of Vienna, National University of Singapore, University of Geneva, Lehigh University and the Wharton School.

* Also at: The Wharton School, University of Pennsylvania, PA, USA. Tel.: +357-2-892000; fax: +357-2-339063. E-mail address: [email protected] (S.A. Zenios)

0167-8191/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0167-8191(99)00083-6

1. Introduction

Computing power improved by a factor of one million in the period 1955–1990, and it is improving by this factor again just within the current decade. This accelerated improvement is sustained through technological developments in parallel computer architectures: multiple, semi-autonomous processors coordinating for the solution of a single problem. The late 1980s witnessed the advent of massive parallelism; in the 1990s we saw the dominance of heterogeneous networked computing with large clusters of workstations. These rapid technological developments are transforming significantly the exact sciences, the social sciences and all branches of engineering. The American Academy of Arts and Sciences devoted the winter 1992 issue of its journal, Daedalus, to these developments, referring to what was termed a "New Era of Computation."

The domain of finance and financial planning has not been oblivious to the developments in high-performance computation, although these technological developments were motivated by problems in engineering and weapons design applications, and not by the significant challenges posed in business and economics. The two major milestones in the evolution of supercomputing – the development of the experimental parallel machine Illiac IV in the early 1970s and the introduction of the CRAY 1 supercomputer in 1976 – bracketed the breakdown of the Bretton Woods agreement in 1974 and the ensuing interest rate deregulation. Immense computing power was becoming available as the financial markets were facing unprecedented volatility, the promise of economic growth due to financial liberalization going hand-in-hand with commensurate risks.

Participants in the financial markets were turning increasingly to mathematical models and computer simulations in order to understand the peculiarities of the financial markets and to develop risk-management tools. Of course mathematical models for equities portfolio management date back to the work of Markowitz [39], but it was not until 1990 that the significance of this line of research was fully appreciated and rewarded with a Nobel Prize in Economics. Financial liberalization added another dimension to risk – in addition to the volatility of stock prices modeled by Markowitz and studied by Mandelbrot [38] – that of volatile interest rates, and hence of exchange rates. Further mathematical tools were developed to model these joint risks, most notable being the celebrated Merton–Black–Scholes model for options pricing [6,40].

As scholars and practitioners came to grips with increasingly more dimensions of financial risk they realized that these dimensions are interrelated and a holistic view of risk measurement had to be taken; see, e.g., [60]. The Modigliani–Miller work on the cost of capital for corporations in the late 1950s and Merton's work on the pricing of corporate debt are the seminal contributions in this direction; see, e.g., [41] for a comprehensive treatment.

So, what do we have so far? In a nutshell, increasingly more advanced mathematical models for pricing the risks of equities, bonds, derivatives of the above, baskets of stocks and bonds and their derivatives, and of firms that are viewed as baskets of risk exposures. These models are, in general, stochastic process models either in discrete or continuous time. The need to compute solutions to such models – usually in a discretized setting – prompted the development of Monte Carlo simulation methods, borrowing from similar techniques in the physical sciences. Boyle [7] made the seminal contribution in this direction, and a comprehensive survey is given in Boyle, Broadie and Glasserman [8].

The need of investors to manage the risks of their asset portfolios vis-à-vis their liabilities also prompted the development of mathematical models for portfolio management. These models are based on optimization methods. The earlier ones trace their roots to Markowitz's mean-variance optimization models for the management of equities portfolios. However, in keeping up with the increasingly complex risk profiles of new securities and their derivatives we have also seen the development of an array of portfolio optimization models; see the articles in [60,62]. The financial optimization models were developed in parallel to the models for risk pricing. First, we have models that capture increasingly more dimensions of a portfolio's risks in a dynamic, multiperiod setting. Such models go under the term of stochastic programming; Bradley and Crane [9] and Kusy and Ziemba [37] made the seminal contributions in this direction, and the most notable recent works are due to Mulvey and Vladimirou [48], Zenios [59–61], Hiller and Eckstein [28], Dempster and Ireland [22] and Dupacova et al. [24]. Second, we have models that take an integrated view of the risk management process of a financial intermediary. Merton [41,42] was the first to argue for the need of integration of the functions performed by a financial intermediary. Holmer and Zenios [30] took the integration a step further and proposed the framework of integrated financial product management for the processes of a financial intermediary. The integrated approach to financial product management has been gaining acceptance among practitioners; some recent applications in practical settings are reported in the two papers by Carino and co-authors [13,14], Mulvey [44], Correnti [19], Holmer [29], Holmer and Zenios [30], Mulvey et al. [35], and Zenios et al. [64].

One would think that there must be an end to the evolution of models for risk pricing. Alas – or, rather, fortunately – there is not, as Allen and Gale [1] aptly summarize in their recent book on financial innovation. The financial markets are constantly attacked by innovative financial instruments, some that are short-lived, others that remain around for decades, and all of which need to be designed, priced and capitalized, and finally traded in or out of portfolios. Hence, not only is there no end to the need of developing increasingly more complete (and complex) models for security pricing and portfolio management, but there is also the need to develop formal mathematical models for designing the new products. We are witnessing the development of computer-aided design of financial products, following the computer-aided design of all kinds of manufactured products, from nuclear weapons to supersonic aircraft, and racing yachts and cars.

The evolution of mathematical models in finance outlined above – risk pricing, integrated financial product management and financial innovation – is essentially driven by the developments in the financial markets with the advent of the global information-based economy. However, high-performance computations have been instrumental in several developments. In the sections that follow we will give a description of significant computational advances in financial modeling in the three major areas: (1) the use of Monte Carlo simulation methods for security pricing and Value-at-Risk (VaR) calculations; (2) the development of integrated financial product management modeling tools and practices; and (3) financial innovation and the development of models for computer-aided design of financial products.

1.1. The impact of high-performance computing in finance

We wish early in our exposition to convey a balanced view of the impact of high-performance computers on the development of computational financial modeling. Significant progress, especially in risk pricing, has been made due to the development of the mathematical models described above and the use of state-of-the-art workstations for their solution. Some of the more complex risk pricing models would exhaust the capabilities of workstations, and analysts then resort to the use of clusters of workstations. We would also see an occasional use of massively parallel or high-performance computers for the prototyping and testing of some pricing model, before the model would be deployed in practice on a cluster of workstations.

Models of integrative risk management – especially those based on stochastic programming – more often than not rely on high-performance computing for their solution. We have even seen extensive research in the development of special-purpose parallel algorithms; see the book by Censor and Zenios [15]. Finally, the computer-aided design of financial products has been relying solely on high-performance parallel computations, but this is merely a manifestation that current research in this direction is still at an early experimental stage. Once computer-aided design becomes more commonplace we expect to see it being carried out on a wide variety of platforms, from high-end workstations to clusters of workstations and perhaps massively parallel systems.

In summary, the most practical and routine applications of computational finance are running either on high-end workstations or clusters of workstations. Explorations into new model domains rely on supercomputers and massively parallel computers, until they become practical tools that are run on a routine basis on more "routine" computer platforms. A survey on the use of supercomputers in finance was published by Worzel and Zenios [56], and more recent developments are discussed in the study published by the Intertek Group [25] and the volume edited by Zenios [63].

1.2. Organization of the paper

The paper is organized in three major sections that relate, respectively, to security pricing, integrated financial product management, and financial innovation. In each section we briefly describe some basic model, and then discuss the computational issues that need to be resolved in computing a solution to the model. Computational results are then reported from the published literature to illustrate what has been achieved with the use of high-performance computers. We also point out recent extensions of the models that were facilitated through the use of high-performance computing. Our discussion gives only a schematic description of the models; details can be found in the cited literature, the recent survey article by Mulvey et al. [45] and the book by Zenios [60].

2. High-performance computing for the Monte Carlo simulation of security pricing

Financial processes – such as interest rates, credit premia, stock prices and the like – are modeled using stochastic differential equations such as

dx_t = f(x_t) dt + σ x_t dz.   (1)

This equation can represent as special cases some of the well-known single-factor models such as the Vasicek model, the Cox–Ingersoll–Ross model or the exponential Ornstein–Uhlenbeck process; see, e.g., [33] for an advanced textbook treatment of this literature. These models are termed single-factor in that they model a single financial process x_t, which is typically taken to be the fundamental process of short-term risk-free interest rates r_t.

Under risk neutrality a fair market price of a financial instrument with a payoff function C(t, r_t) is given by the expected present value of the cashflows discounted at the instantaneous risk-free rate

P = ∫_0^T C(t, r_t) e^(−rt) dt.   (2)

Implementations of the stochastic process model typically use a discrete lattice approximation, whereby from period t to t + Δt the process x_t jumps from its initial state x_t^0 to one of a finite number of states x_{t+Δt}^i, i = 1, 2, …, n, with some probability p_{t+Δt}^i. The number of states and/or their values and/or the probabilities are computed in such a way that, at the limit as Δt → 0, the discrete lattice process converges to the continuous stochastic process.

Depending on postulated structural model assumptions there are different ways to specify concrete stochastic processes and to obtain discrete lattice models to approximate them. One example, which we will use here to illustrate the computational requirements, is the binomial lattice model of Black et al. [5]. This model specifies exogenously jump probabilities of 0.5 for each of the two possible states (up and down), and assumes that the lattice is recombining, i.e., an up/down movement brings the process to the same state as a down/up movement. The lattice is described by a series of base rates {r_t^0}_{t=0}^T and volatilities {k_t}_{t=1}^T, and the short-term rate at any state σ of the lattice at period t is given by r_t^σ = r_t^0 (k_t)^σ. The model parameters are estimated so that the random variable r_t^σ is in agreement with market data. Fig. 1 illustrates a binomial lattice.
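The lattice construction just described is easy to sketch in code. The sketch below is illustrative only: the base rates and volatility multipliers are made-up placeholders, not parameters calibrated to market data as the model requires.

```python
def build_lattice(base_rates, vols):
    """Binomial lattice of short rates: state sigma of period t carries
    the rate r_t^sigma = r_t^0 * (k_t)**sigma; period t has t + 1 states."""
    lattice = []
    for t, r0 in enumerate(base_rates):
        k = vols[t]
        lattice.append([r0 * k**sigma for sigma in range(t + 1)])
    return lattice

# Illustrative, uncalibrated parameters.
base_rates = [0.050, 0.048, 0.047, 0.046]  # r_t^0
vols = [1.0, 1.10, 1.10, 1.10]             # k_t
lattice = build_lattice(base_rates, vols)
```

In practice the r_t^0 and k_t would be fitted so that the lattice reprices a set of market instruments, as the text notes.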

The pricing equation (2) is written in the discrete setting as

P = (1/S) Σ_{s=1}^S Σ_{t=1}^T C(t, s) / Π_{i=0}^{t−1} (1 + r_i^s).   (3)

Here expectations are computed over the set of all possible interest rate paths, denoted by the scenario index s = 1, 2, …, S, of the stochastic process r_t^s, that starts from t = 0 and terminates at some finite time horizon T.
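A minimal Monte Carlo evaluation of Eq. (3) can be sketched as follows; the lattice layout (a list of per-period state rates) and the `cashflow` callback are assumptions of this sketch, not the paper's implementation.

```python
import random

def price_paths(lattice, cashflow, S, seed=0):
    """Estimate Eq. (3): average over S sampled interest rate paths of the
    cashflows discounted by prod_{i=0}^{t-1} (1 + r_i^s); up/down moves
    are equally likely and the lattice is recombining."""
    rng = random.Random(seed)
    T = len(lattice)
    total = 0.0
    for _ in range(S):
        sigma, path = 0, []
        for t in range(T):
            path.append(lattice[t][sigma])
            sigma += rng.randint(0, 1)       # up with probability 0.5
        discount, pv = 1.0, 0.0
        for t in range(1, T + 1):
            discount *= 1.0 + path[t - 1]
            pv += cashflow(t, path[:t]) / discount
        total += pv
    return total / S

# Illustrative: a zero-coupon bond paying 1 at T = 3 on a flat 5% lattice,
# where every path gives the same discounted value.
flat = [[0.05] * (t + 1) for t in range(3)]
zero = lambda t, hist: 1.0 if t == 3 else 0.0
price = price_paths(flat, zero, S=200)
```

Passing the path history to `cashflow` is what lets the same routine handle the path-dependent payoffs discussed below; for the flat example above the result is 1/1.05^3 regardless of the sampled paths.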

Having a fair estimate of a security's price is the first step in developing trading strategies by identifying mispriced securities. The next step is to develop estimates for the distribution of the security prices at future time periods, thereby estimating our risk exposure over a prespecified time horizon. The finance industry has even developed a standard for estimating risk exposure, the VaR concept, pioneered by J.P. Morgan and finding today widespread acceptance in practice; see, e.g., the RiskMetrics Technical Document, Morgan, New York, 1996. VaR is the maximum value of losses that can be realized within a given planning horizon with a given probability. The probability is typically taken to be 95%, and the time interval is one day for actively traded positions, one month for portfolio management, and 10 days for the regulatory requirements of the Bank for International Settlements. In the case of portfolios consisting of plain vanilla instruments VaR estimates can usually be obtained in closed form. In its most general implementation, however, with the inclusion of complex securities with embedded options, VaR calculations require Monte Carlo simulation of security prices at the end of the time horizon, so that the loss probabilities can be estimated. (VaR methods are subject to widespread criticism. Nevertheless, this particular risk measure has even been adopted by the Bank for International Settlements as a required statistic to be reported by banks for regulatory monitoring.)
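The simulation-based VaR calculation just described reduces to taking a quantile of simulated losses. In the sketch below, the Gaussian profit-and-loss scenarios are a stand-in for the repriced portfolio, purely for illustration.

```python
import random

def value_at_risk(pnl_scenarios, confidence=0.95):
    """Maximum loss not exceeded with the given probability:
    the empirical quantile of the simulated loss distribution."""
    losses = sorted(-p for p in pnl_scenarios)   # losses as positive numbers
    k = int(confidence * len(losses)) - 1        # simple quantile index, no interpolation
    return losses[k]

# Stand-in P&L: in practice these come from repricing the portfolio
# at the end of the holding period under each simulated scenario.
rng = random.Random(7)
pnl = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
var95 = value_at_risk(pnl)
```

The regulatory variants mentioned in the text only change the confidence level and the length of the simulated holding period.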

Using a lattice approximation of the stochastic process we can estimate prices at the end of the horizon τ for each state of the lattice σ = 1, 2, …, τ, (P_τ^σ), by computing the expected present value of the cashflows on the sub-lattice, denoted by S^σ, emanating from each state σ at period τ. In particular we have

P_τ^σ = (1/S^σ) Σ_{s=1}^{S^σ} Σ_{t=τ+1}^T C(t, s) / Π_{i=0}^{t−1} (1 + r_i^s).   (4)

Fig. 1 illustrates the state-dependent pricing of securities from a lattice. Once future prices are estimated, conditional on the interest rate scenarios realized during the holding period, one can estimate holding-period returns and proceed with VaR calculations. The estimation of holding-period returns for portfolio management applications – proposed in [63] and used in [49] – is in agreement with recent accounting standards that require asset value reports based on mark-to-market rather than book-value data.


2.1. Computational requirements

We note that the price at some period τ and given a state of the lattice σ can be obtained recursively by P_τ^σ = 0.5 P_{τ+1}^σ + 0.5 P_{τ+1}^{σ+1}, where 0.5 are the probabilities of an up or down jump. If we assume that the cashflows C(t, s) depend only on the time period t and the value of the financial process r_t^s – and not on any previous history of these cashflows or of the process – the recursive equation can be used to compute the price P starting at the end of the lattice t = T and folding back to t = 0. For a wide variety of financial instruments, however, the payments C(t, s) depend in some way on the history of the stochastic process, i.e., C(t, s) ≐ C(t, {r_τ^s}_{τ=0}^t). For instance, the payoff of a lookback option may depend on the average or maximum or minimum value of the process during some periods prior to its exercise [23]; the prepayments of a mortgage-backed security (MBS) depend not only on market prevailing refinancing rates, but also on the history of prepayments since the security was issued [36]; the payoff of a callable bond will depend on whether the call option is still active or has already been exercised [16]. Such instruments are called path-dependent and their prices cannot be computed recursively but require the evaluation of Eq. (3). We also note that when the recursive approach is applicable the state-dependent prices P_τ^σ are unique for each state σ of the lattice and can be calculated recursively for each time period τ. For path-dependent instruments these prices are also path-dependent (i.e., P_τ^σ ≐ P_τ^σ({r_i^{s_τσ}}_{i=0}^τ), where s_τσ denotes paths that go through state σ at time period τ). Hence, for path-dependent instruments we need to sample paths that go through a particular state and for each path evaluate Eq. (4) using Monte Carlo simulations.

Fig. 1. A binomial lattice approximation of a financial stochastic process, and path- and state-dependent pricing of a stream of cashflows C(t, s).

For path-independent instruments the price estimation on a lattice of T time periods requires O(T²/2) calculations. For path-dependent instruments the calculations involved in the expected value estimation in (3) are of the order O(2^T). We can see that for a typical thirty-year lattice with monthly steps these calculations are impossible to perform. Here analysts have been resorting to combinations of variance reduction techniques [10,11,26,55] and high-performance computations.
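Among variance reduction techniques, antithetic variates is the simplest to illustrate; the sketch below is a generic textbook device, not the specific methods of [10,11,26,55]. Each normal draw z is paired with −z, and the pairwise average is used as one sample.

```python
import random
import statistics

def mc_plain(f, n, rng):
    """Plain Monte Carlo estimate of E[f(Z)] for Z standard normal."""
    return statistics.fmean(f(rng.gauss(0.0, 1.0)) for _ in range(n))

def mc_antithetic(f, n, rng):
    """Antithetic variates: average f over the pair (z, -z); for monotone f
    the two halves are negatively correlated, which lowers the variance."""
    return statistics.fmean(0.5 * (f(z) + f(-z))
                            for z in (rng.gauss(0.0, 1.0) for _ in range(n // 2)))

# Extreme case: for a linear payoff the antithetic estimator is exact,
# since 0.5 * (z + (-z)) = 0 for every pair.
est = mc_antithetic(lambda z: z, 1000, random.Random(1))
```

Variance reduction lowers the number of sampled paths needed for a given accuracy; it does not change the O(2^T) size of the path space itself, which is why it is combined with high-performance computations rather than replacing them.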

The first application of high-performance computers for pricing complex financial instruments – MBSs – was carried out at the HERMES Laboratory for Financial Modeling and Simulations at the Wharton School using a massively parallel Connection Machine CM-2, and provided breakthrough performance; see [31]. Fig. 2 illustrates the results obtained on a variety of high-performance computers of the time, and shows that simulations that would take several hours on state-of-the-art workstations could be completed within a few minutes (15 or so) on a CRAY X-MP supercomputer, and within less than a minute on the massively parallel system. These results were obtained in a research setting, and provided the motivation for the adoption of high-performance computers in industrial settings; see the discussion in [25]. However, the industrial applications are typically carried out on clusters of workstations or more modest parallel architectures. The results of Cagan et al. [12], summarized in Fig. 3, show that acceptable performance can be achieved on a variety of parallel machines, shared or distributed memory, and clusters of workstations. More importantly, this study showed that such implementations can be portable, and when implemented on networks of workstations do not require any substantial capital outlay.

Fig. 2. State-dependent pricing of a portfolio of mortgage-backed securities on a variety of workstations, a CRAY X-MP supercomputer, a workstation cluster, and a massively parallel Connection Machine CM-2a.

High-performance computing has also been the catalyst for the development of novel modeling approaches for risk pricing. In addition to the development of models cited above for pricing increasingly more dimensions of risk we have also seen an interest in the sensitivity and robustness of these models to errors in the input data. Perhaps the most notable application in this direction is the work of Bertocchi et al. [2] and Dupacova et al. [24] that studies the effect of market data inaccuracies – term structure and/or volatility estimates – on the pricing and portfolio optimization models. High-performance computations have assisted modelers in building confidence in their own models, or understanding their limitations.

Fig. 3. The performance of a portable Monte Carlo simulation model for pricing mortgage-backed securities improves almost linearly on a variety of computer platforms with different parallel architectures or on a cluster of workstations.


3. High-performance computing for integrated financial product management

The complexities of the new financial instruments – with their multidimensional risk profiles described in the previous section – and the increased integration of the financial markets necessitate a broad perspective on the function of risk management. In particular, it has been argued by many scholars and practitioners that an integrated approach should be taken to the design, pricing, and funding of financial products together with the capitalization of the firm's portfolio. This approach was articulated under the framework of integrated financial product management (IPM) by Holmer and Zenios [30]. Of course, asset management practices that take the broader asset/liability management view have been in widespread use for a long time. These approaches, however, have typically been restricted to the study of a single liability family, or of broadly defined asset classes. They have rarely included the firm's balance sheet of assets and liabilities, they assign the problem of portfolio construction to traders, and they never incorporate the problem of security design. Integrated financial product management works with a view towards the overall process of a financial intermediary; in practical settings it has appeared under the terms of enterprise-wide risk management or total integrated risk management.

To implement IPM we need to develop models that can integrate the characteristics of the product – such as the state-dependent prices during the holding period, P_τ^σ – with the financial performance of different funding assets in an uncertain world. In deciding the capitalization structure of the firm's portfolio we also need models that can optimize equity allocation decisions given risk-adjusted measures of return-on-equity distributions of the various products. Return-on-equity distributions are obtained from Monte Carlo simulation methods – such as those described in Section 2 – for both sides of the line-of-business balance sheet. To synthesize these distributions into a portfolio we then need optimization models that are dynamic across time and take into account the scenarios for the evolution of returns.

Mathematical models under the general term of stochastic programming with recourse provide the framework for dealing with the broad problem of IPM. These models seek a policy, i.e., a vector of portfolio decisions, that anticipates future observations but also takes into account that – as observations are made about the uncertain parameters – the policy can adapt by taking recourse decisions. For example, a portfolio manager specifies the composition of a portfolio considering both future movements of stock prices (anticipation) and that the portfolio will be rebalanced as prices change (adaptation).

The two-stage version of this model has been studied extensively. It is amenable to formulation as a large-scale deterministic nonlinear program with a special structure of the constraints matrix. This formulation yields naturally to solution via decomposition algorithms and parallel computations.

To formulate the two-stage stochastic program with recourse we need two vectors of decision variables to distinguish between the anticipative policy and the adaptive policy. The following notation is used:

• x ∈ R^{n0} denotes the vector of first-stage decisions. These decisions are made before the random variables are observed and are anticipative.


• y ∈ R^{n1} denotes the vector of second-stage decisions. These decisions are made after the random variables have been observed and are adaptive. They are constrained by decisions made at the first stage, and depend on the realization of the random vector.

Once a first-stage decision x has been made, some realization of the random vector ω can be observed. Let q(y, ω) denote the second-stage cost function, and let {T(ω), W(ω), h(ω) | ω ∈ Ω} be the model parameters. These parameters are functions of the random vector and are, therefore, random parameters. T is the technology matrix of dimension m1 × n0. It contains the technology coefficients that convert the first-stage decision x into resources for the second-stage problem. W is the recourse matrix of dimension m1 × n1. h is the second-stage resource vector of dimension m1.

The two-stage stochastic program minimizes the cost of the first-stage decision plus the expected cost of the second-stage decisions. It is formulated as:

Min f(x) + E[ Min_{y ∈ R^{n1}_+} { q(y, ω) | T(ω)x + W(ω)y = h(ω) } ]   (5)
s.t. Ax = b,   (6)
x ∈ R^{n0}_+,   (7)

where E[·] denotes expectation, and Min denotes the minimal function value.

When the random vector ω has a discrete and finite distribution, with support Ω = {ω^1, ω^2, ..., ω^N} — Ω is called a scenario set — we can write a deterministic equivalent large-scale formulation of the stochastic program. Denote by p_s the probability of realization of the sth scenario ω^s. That is, for every s = 1, 2, ..., N,

p_s := Prob(ω = ω^s) = Prob{ (q(y, ω), W(ω), h(ω), T(ω)) = (q(y, ω^s), W(ω^s), h(ω^s), T(ω^s)) }.

It is assumed that p_s > 0 for all ω^s ∈ Ω, and that Σ_{s=1}^N p_s = 1. The stochastic nonlinear program (5)–(7) can now be reformulated as the following large-scale deterministic equivalent nonlinear program:

Min  f(x) + Σ_{s=1}^N p_s q(y^s, ω^s)   (8)
s.t.  Ax = b,   (9)
      T(ω^s)x + W(ω^s)y^s = h(ω^s)  for all ω^s ∈ Ω,   (10)
      x ∈ R^{n0}_+,   (11)
      y^s ∈ R^{n1}_+.   (12)

The constraints (9)–(12) of this deterministic equivalent program can be combined into a matrix equation with block-angular structure:



⎡ A                                   ⎤ ⎡ x   ⎤   ⎡ b      ⎤
⎢ T(ω^1)  W(ω^1)                      ⎥ ⎢ y^1 ⎥   ⎢ h(ω^1) ⎥
⎢ T(ω^2)          W(ω^2)              ⎥ ⎢ y^2 ⎥ = ⎢ h(ω^2) ⎥   (13)
⎢   ⋮                      ⋱          ⎥ ⎢  ⋮  ⎥   ⎢   ⋮    ⎥
⎣ T(ω^N)                       W(ω^N) ⎦ ⎣ y^N ⎦   ⎣ h(ω^N) ⎦
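As a concrete illustration, the deterministic-equivalent system (13) can be assembled mechanically once A, the T(ω^s), W(ω^s) and h(ω^s) are known. The sketch below is a toy instance with made-up dimensions and data (not taken from the paper), showing only the block-angular pattern:

```python
import numpy as np

def block_angular(A, b, T_list, W_list, h_list):
    """Assemble the deterministic-equivalent system (13): first-stage
    constraints Ax = b on top, then one row block
    T(w^s) x + W(w^s) y^s = h(w^s) per scenario."""
    m0, n0 = A.shape
    N = len(T_list)
    m1, n1 = W_list[0].shape
    M = np.zeros((m0 + N * m1, n0 + N * n1))
    rhs = np.zeros(m0 + N * m1)
    M[:m0, :n0] = A
    rhs[:m0] = b
    for s in range(N):
        r = m0 + s * m1
        c = n0 + s * n1
        M[r:r + m1, :n0] = T_list[s]        # technology matrix couples x
        M[r:r + m1, c:c + n1] = W_list[s]   # recourse block on the diagonal
        rhs[r:r + m1] = h_list[s]
    return M, rhs

# Toy instance: 2 scenarios, n0 = 2 first-stage and n1 = 1 recourse variables
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
T = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
W = [np.array([[1.0]]), np.array([[1.0]])]
h = [np.array([0.7]), np.array([0.4])]
M, rhs = block_angular(A, b, T, W, h)
print(M.shape)   # (3, 4)
```

The zero blocks between the W(ω^s) are what decomposition methods exploit: fixing x decouples the system into N independent scenario problems.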

Stochastic programming models of the general form given above have recently found application in integrated financial product management decisions. The most notable examples are perhaps the models of Frank Russell/Yasuda Kasai [13] and Towers Perrin [47], which were applied by the respective corporations in operational settings. Some of the earlier research works in this direction are due to Kusy and Ziemba [37] for bank asset/liability management, Mulvey and Vladimirou [48] for asset allocation, and Zenios [59–61] for fixed-income portfolio management.

These models have been very effective in dealing with the problems of IPM that they were called upon to solve. For instance, Golub et al. [27] designed a model to specify funding decisions for a 3-year guaranteed investment contract using assets from the mortgage markets. The results of Fig. 4 show that the IPM model was able to achieve the target (minimum guarantee) return under all scenarios, and this model provided the basis for a decision support system for the BlackRock Financial Management firm.

Worzel et al. [57] report on a model for developing asset portfolios that track a broadly defined fixed-income index (the Salomon Brothers index of MBSs). Fig. 5 shows that the model was effective in tracking this volatile index, and it achieved

Fig. 4. The performance of an integrated financial product management model for funding a three-year guaranteed investment contract using mortgage securities. The target return, denoted by the horizontal line, is met under all scenarios when using IPM models.



an ex post Sharpe ratio higher than the Sharpe ratio of the index. This model was used by Metropolitan Life Insurance for the management of their mortgage portfolio.

More recent work in IPM has focused on the integration of exchange rate risk in portfolio management decisions in order to address global risk management issues. Consiglio and Zenios [17] report on the development and performance of a model that tracks an international government bond index; see Fig. 6. This model is now providing the building blocks for the development of a decision support system by a major Swiss bank.

Fig. 5. Performance of integrated financial product management models in tracking a broadly defined index of fixed-income securities, the Salomon Brothers Mortgage Index.



3.1. Computational requirements

What are the computational requirements of IPM models? These models integrate Monte Carlo simulations of security prices with large-scale stochastic programming optimization models. The Monte Carlo simulation procedures benefit naturally from parallel computations, as already discussed in Section 2.
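Scenario generation is embarrassingly parallel: each block of paths uses an independent seed and can be farmed out to a separate worker. The sketch below illustrates the structure with a hypothetical geometric Brownian motion price model; thread workers are used only to keep the sketch portable — a production code would distribute the blocks over the processors of a parallel machine.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate_block(seed, n_paths, s0=100.0, mu=0.05, sigma=0.20, horizon=1.0):
    """Simulate one block of terminal prices under geometric Brownian motion
    (illustrative model and parameters). Blocks use independent seeds, so
    they can run on separate workers without coordination."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return s0 * np.exp((mu - 0.5 * sigma**2) * horizon
                       + sigma * np.sqrt(horizon) * z)

# Threads keep the sketch self-contained; on a parallel machine the same
# per-seed decomposition maps onto process pools or message passing.
with ThreadPoolExecutor(max_workers=8) as pool:
    blocks = pool.map(lambda seed: simulate_block(seed, 25_000), range(8))
    prices = np.concatenate(list(blocks))
print(prices.size)   # 200000 paths; prices.mean() is close to 100*exp(0.05)
```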

The solution of large-scale stochastic programming models benefited substantially from developments in high-performance computing and in particular from parallel computations. One rarely reads a report today on the solution of stochastic programs that does not employ some form of parallel architecture. See, e.g., [15] for a textbook treatment of recent algorithmic developments, a discussion of implementation issues and summary computational results. While several algorithmic developments occurred recently in the field of stochastic programming, here we can accurately state that the breakthroughs in solution methods were enabled by parallel computing technology.

The first massively parallel solution algorithms for stochastic programming focused on the solution of problems with special structure (in particular problems with network recourse). The results in [50,51], summarized in Figs. 7 and 8, show that problems with hundreds of thousands of scenarios can be solved with special-purpose algorithms on massively parallel architectures. Models of this size are unsolvable with general-purpose solvers, even when running on supercomputers. (These results are obtained from Figs. 15.7 and 15.10 of [15], where interested readers can find further details on problem size, solution algorithms and discussions of the results.)

The early successes with special-purpose algorithms motivated further research in the development of general-purpose parallel methods. Work in this area focused on decomposition methods — see, e.g., Dantzig and Infanger [21] or Infanger [32], Mulvey and Ruszczyński [46] or Nielsen and Zenios [52] — or interior point methods

Fig. 6. Tracking and outperforming an international government bond index using integrated financial product management models.



– see, e.g., the matrix factorization procedures of Birge and Qi [3] and Birge and Holmes [4], and their parallel extensions by Jessup et al. [34] and Yang and Zenios [58].

Summary comparative results are reported in Table 1. (This is Table 15.18 from [15], where interested readers can find further details on problem size, solution algorithm and discussion of the results.) We observe that these methods achieve uniformly high performance on parallel machines and substantially outperform serial methods. Parallel interior point methods in particular have proven robust, efficient and scalable in solving models arising from IPM applications.
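A minimal sketch of the decomposition idea may help. The L-shaped (Benders) method alternates between scenario subproblems, whose dual prices yield cuts, and a small master problem; the subproblems are independent of one another and are the natural unit of parallel work. The toy newsvendor-style instance below uses invented data and SciPy's HiGHS interface for the duals; it converges in a handful of cuts:

```python
import numpy as np
from scipy.optimize import linprog

def second_stage(x, demand):
    """Recourse LP: sell y = min(x, demand) at margin 1.5 (min of -1.5*y).
    Returns the cost Q_s(x) and the subgradient dQ_s/dx (the dual price
    of the constraint y <= x, reported by HiGHS as a marginal)."""
    res = linprog(c=[-1.5], A_ub=[[1.0], [1.0]], b_ub=[x, demand],
                  bounds=[(0, None)], method="highs")
    return res.fun, res.ineqlin.marginals[0]

demands, probs = [5.0, 15.0], [0.5, 0.5]   # two invented demand scenarios
cuts = []                                  # each cut: theta >= alpha + beta*x
x, lb, ub = 0.0, -np.inf, np.inf
for it in range(20):
    Q = g = 0.0
    for d, p in zip(demands, probs):       # independent: parallelizable
        q, grad = second_stage(x, d)
        Q += p * q; g += p * grad
    ub = min(ub, x + Q)                    # true cost of the current x
    cuts.append((Q - g * x, g))            # theta >= (Q - g*x_k) + g*x
    # Master over (x, theta): min x + theta subject to all cuts so far
    A = [[beta, -1.0] for alpha, beta in cuts]
    b = [-alpha for alpha, beta in cuts]
    m = linprog(c=[1.0, 1.0], A_ub=A, b_ub=b,
                bounds=[(0, 20), (-100, None)], method="highs")
    x, lb = m.x[0], m.fun
    if ub - lb < 1e-6:
        break
print(round(x, 4), round(ub, 4))   # converges to x ≈ 5.0, cost ≈ -2.5
```

In a parallel implementation the inner loop over scenarios is distributed, which is exactly where the block-angular structure of (13) pays off.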

Fig. 8. Solving several network problems — stochastic or deterministic — with a special-purpose parallel algorithm on a Connection Machine CM-2 with 16K processing elements and a general-purpose simplex code running on a Cray Y-MP supercomputer. The general-purpose optimizer could not solve the larger problems.

Fig. 7. Solving nonlinear stochastic network problems with a special-purpose parallel algorithm on a Connection Machine CM-2 with 8K processing elements and a general-purpose interior point code on an IRIS 4D/70 workstation. Problems range in size from 1923 rows by 5468 columns to 7647 rows by 21,777 columns and have from 18 to 72 scenarios.



For the sake of presenting a balanced view on the use of high-performance computing in this domain we should point out that the successes with parallel optimization algorithms described above have been mostly confined to the research settings where they were originally developed. Once sufficient insight was gained into the performance of a model it was possible to develop reduced versions, containing a small sample of scenarios, and such models are typically solved on high-performance workstations or clusters of workstations. While the solution times can be quite large in these cases — several hours of dedicated computer time — such times are not prohibitive for a well-developed and stable model. One should bear in mind that portfolio rebalancing decisions are made rather infrequently (monthly or quarterly), and within this time frame solution times of several hours are acceptable. However, the research that led to the development of these industry-adopted models could not have been carried out if each experiment and model modification had required hours of solution time.

4. High-performance computing in financial innovation and the computer-aided design of financial products

We now turn to the third application in finance where high-performance computing is having a significant impact. Financial innovation — the unforecastable,

Table 1
Comparing the serial interior point code LOQO, executing on a single processor of the Connection Machine CM-5e, with a parallel interior point algorithm. Time is given in seconds. The number of processors used by the parallel algorithm is equal to the number of scenarios, up to a maximum of 64 (the size of the machine). The number of scenarios is indicated by the number after the period in the problem name. The largest SEN problem has more than 2M constraints and 13M variables; the largest SCSD8 problem has more than 2.5M constraints and 18M variables. NA means not available.

Problem name    LOQO                   Parallel code
                Iterations    Time     Iterations    Time
SCSD8.8              9          1.45        9          5.54
SCSD8.16             9          3.37        9          6.03
SCSD8.32             9          5.92        9          5.45
SCSD8.64             9         12.8         9          5.42
SCSD8.128           NA           NA        10          1.88
SCSD8.256           NA           NA        10          2.39
SCSD8.512           NA           NA        11          3.79
SCSD8.1024          NA           NA        12          6.73
SCSD8.2048          NA           NA        14         13.82
SEN.8               14         37.31       19         25.1
SEN.16              16        188.1        19         48.5
SEN.32              17        837.2        20         14.5
SEN.64              19       1702.1        21         16.1
SEN.128             NA           NA        23         30.8
SEN.256             NA           NA        31         78.3
SEN.512             NA           NA        31        153.5
SEN.16384           NA           NA        49       7638.3



unanticipated changes in financial instruments — has been characterized as a "revolution" in financial economics by Miller [43]. "Significant and successful" financial innovation, using Miller's notions, is usually associated with the recent stream of exotic financial instruments like futures, options, synthetic securities with embedded options and the like.

The rapid pace of financial innovation has highlighted the need to understand this phenomenon: why is it happening, and why do the securities that are used have the form they do? See, for example, [1]. Despite the extensive literature on this topic, and the even more extensive list of novel financial instruments that appear continuously, very little has been done to develop a scientific methodology for the design of products. The need for a scientific approach to security design becomes more pressing as the complexity of synthetic securities increases. Ross (1989) writes:

"[Financial economists] are called upon not only to value the new instruments and the new [dynamic trading] strategies [that make heavy use of the new instruments] but to design them as well. Like engineers who use physics, financial engineers use the techniques of modern finance to build the equivalent of bridges and airplanes."

Almost 10 years after he delivered these comments in the Presidential Address to the American Finance Association, the design of financial products still remains more of an art than a science. The analog of concurrent engineering for the design and manufacturing of products has not yet been adopted in the domain of finance. Some of the developments in high-performance computing for security pricing and integrated financial product management provided the impetus for the development of the models by Consiglio and Zenios [17,18] that deal with the design of a specific type of financial product, namely callable bonds. Here we discuss these models.

We consider an institution operating in an uncertain world, and we capture uncertainty in the form of a discrete set of scenarios denoted by Ω := {1, 2, ..., S}. The institution holds a portfolio of assets, and we use r^s_A to denote 1 plus the rate of return of this portfolio under scenario s ∈ Ω, during some holding period of interest. It is assumed here that the portfolio of assets is given a priori, and scenarios of holding-period returns for this portfolio can be computed using the Monte Carlo simulation methods of Section 2. The institution funds these assets through the issue of debt (D) and the investment of shareholders' equity (E).

Debt is issued in the form of a portfolio of financial products, and the yield on this debt is denoted by R^s_L. We assume that the yield on the debt, during a holding period, can be calculated using appropriate pricing models, but we do not assume that the debt structure is given a priori. Indeed, designing the most appropriate debt structure is a key aspect of the institution's problem.

The position of the institution at the end of the holding period is given by the terminal wealth

WT^s = (D + E) r^s_A − R^s_L.   (14)



The return to shareholders is given by the Return on Equity (ROE)

ROE^s := WT^s / E = ((D + E) r^s_A − R^s_L) / E.   (15)

This equation reveals a well-known and important relation between financial leverage (i.e., the debt-to-equity ratio) and ROE. For scenarios under which the return on assets exceeds the interest rate on debt, financial leverage will increase the return on equity. When the return on assets falls below the cost of debt, financial leverage will decrease the ROE. Since the expected return on assets exceeds the required yield on debt, it follows that financial leverage has two effects on the return on equity: it increases the expected ROE, while at the same time it increases the variability of this return. Therefore, depending on the shareholders' aversion towards risk, there is an optimal level of financial leverage.
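A small numerical illustration of this trade-off, with invented scenario returns, applying Eq. (15) with the debt repayment taken as R_L^s = D·r_L:

```python
import numpy as np

# Hypothetical four-scenario example (values chosen for illustration only):
# rA = 1 + asset return per scenario; rL = 1 + cost of debt, scenario-free here.
rA = np.array([1.12, 1.07, 1.03, 0.98])   # mean 1.05 > rL, so leverage pays on average
rL = 1.04

def roe(D, E):
    """ROE^s = ((D+E)*rA^s - D*rL) / E, as in Eq. (15) with R_L^s = D*rL."""
    return ((D + E) * rA - D * rL) / E

low  = roe(D=20, E=80)   # leverage D/E = 0.25
high = roe(D=80, E=20)   # leverage D/E = 4.0
print(round(float(low.mean()), 4), round(float(high.mean()), 4))   # ≈ 1.0525  1.09
print(round(float(low.std()), 4),  round(float(high.std()), 4))    # ≈ 0.0643  0.2574
```

Expected ROE rises with leverage, but its scenario-to-scenario dispersion rises four times as fast here, which is exactly the trade-off that makes the optimal leverage depend on risk aversion.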

In order to determine the optimal level of leverage we need to determine the cost of debt under different scenarios s ∈ Ω. (Recall that the value of the other parameter that appears in the ROE calculation, i.e., the return on the asset portfolio r^s_A, is exogenous to the institution's decisions.) The cost of debt is determined by the amount of debt raised through issues in different financial products. Hence, the institution has to determine simultaneously the structure of its debt (i.e., the amount issued in each product) and the optimal level of financial leverage. A model that provides an answer to this question is briefly discussed next. However, there is an additional level of flexibility for our institution. The return on the debt is a function not only of the amount issued in different types of products, but also of the yield of each particular product. Herein lies the opportunity for financial innovation and the design of new products. Since the institution is issuing the products, their yield can be adjusted by the judicious setting of some design parameters.

The problem facing our institution can be formulated as a hierarchy of optimization models: for a given universe of liability products and their associated returns, the institution can decide the optimal product mix and the optimal level of leverage; these two decisions can be modeled as a nonlinear programming problem. The institution then has the option to introduce new products to redefine the input of the optimization problem. The overall goal is to find the design of products that, together with an appropriate product mix and an appropriate financial leverage, yields a maximal expected utility of the ROE.

The hierarchy of optimization models is given in Appendix A. These models were used to design portfolios of callable bonds for a mortgage agency in the US. Fig. 9 illustrates the holding-period returns of the target mortgage asset and three alternative bond designs for a variety of interest rate scenarios. It is clear from this figure that bonds B and C are preferable liabilities to hold against the mortgage asset, as opposed to bond A, which has a much higher rate of return than the asset for all interest rates above 8.5%. Fig. 10 shows the tracking error (target asset return minus liability return) of optimally designed bonds and of a portfolio of bonds optimized using the models of Appendix A. The models can also be used to study the effect of the decision makers' risk aversion on the optimal financial leverage. Fig. 11 illustrates the effect of the debt-to-equity ratio on the certainty equivalent return on equity (CEROE) for corporations with different risk aversion characteristics.

4.1. Computational requirements

The nonlinear program that optimizes the portfolio mix and the financial leverage falls into the category of stochastic programming models described in Section 3 and can be solved using the parallel optimization techniques discussed earlier. However, in practice it can be sufficient to rely on a single-period model — see model (A.3)–(A.7) in Appendix A — and such models can be solved on workstations using state-of-the-art general-purpose optimization software. For instance, the models in [18] were solved within 2 s of CPU time on a PowerPC 604.

Even so, the upper-level optimization model that designs the portfolio of products — model (A.8) and (A.9) in Appendix A — is computationally very demanding. Its objective function is obtained by solving the lower-level optimization problem

Fig. 9. Holding-period returns of a mortgage-backed security (MBS) and three alternative callable bonds, under different interest rate scenarios. The horizontal axis denotes the geometric mean of the term structure of interest rates during the holding period (36 months); the vertical axis denotes holding-period returns.



(A.3)–(A.7) with input data generated from the simulation of the holding-period returns of the products designed by the upper-level problem (A.8) and (A.9). Hence, the generation of input data resorts to the parallel Monte Carlo simulation methods discussed in Section 2. Further complications arise from the fact that the objective function of the upper-level problem is not unimodal, since it is the objective function of another nonlinear program with simulated input data. Hence, to solve the upper-level problem we need to resort to techniques for global optimization, such as simulated annealing or tabu search. Both of these techniques require a very large

Fig. 10. Tracking error between the mortgage assets and the issued debt of callable bonds under different interest rate scenarios. The horizontal axis denotes the geometric mean of the term structure of interest rates during the holding period (36 months).

Fig. 11. The effect of risk aversion on the optimal financial leverage and the corresponding certainty equivalent return on equity (CEROE). In this experiment we use the one-parameter family of isoelastic utility functions given by U(ROE) = ROE^γ / γ. A parameter value γ = −10 corresponds to relatively high risk aversion, while γ = 1 corresponds to risk neutrality. As γ → 0 we recover the log utility function used in the rest of the paper.



number of steps in order to assure a good approximation to the global solution, but they are both amenable to implementation on parallel architectures. Indeed, the parallelization of global optimization algorithms has been investigated extensively, but it is beyond the scope of our article to discuss this literature; see [53,54]. A classification of parallel approaches to tabu search algorithms is discussed in Crainic et al. [20].
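To fix ideas, a minimal serial tabu search might look as follows. The objective here is a synthetic multimodal stand-in for CEROE(P) over a single design parameter, not the actual model of [18]; a parallel version would evaluate the neighbourhood (or several search threads) concurrently.

```python
import numpy as np

def ceroe(p):
    """Synthetic stand-in for the non-unimodal design objective CEROE(P);
    the real objective would solve program (A.3)-(A.7) on simulated data."""
    return np.sin(3 * p) * np.exp(-0.1 * p) + 0.05 * p

def tabu_search(lo=0.0, hi=10.0, step=0.1, tenure=5, iters=200):
    current = lo
    best, best_val = current, ceroe(current)
    tabu = []                                   # recently visited points
    for _ in range(iters):
        # Neighbourhood: one grid step left or right, minus tabu moves
        nbrs = [p for p in (current - step, current + step)
                if lo <= p <= hi and round(p, 4) not in tabu]
        if not nbrs:
            break
        current = max(nbrs, key=ceroe)          # best admissible neighbour,
        tabu.append(round(current, 4))          # even if it worsens the objective
        tabu = tabu[-tenure:]                   # fixed-tenure tabu list
        if ceroe(current) > best_val:
            best, best_val = current, ceroe(current)
    return best, best_val

best_p, best_val = tabu_search()
print(round(best_p, 1), round(best_val, 3))     # ≈ 0.5  0.974
```

The tabu list forces the search off a local peak instead of cycling back, which is what distinguishes it from plain hill climbing; the incumbent `best` preserves the best design ever visited.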

Fig. 12 illustrates the performance of a parallel tabu search implementation for the bond design problem, as reported in [18]. The authors report that the total running time of 7.5 h on a single processor would reduce to less than half an hour with the use of parallelism. Even the parallel running times are substantial in this case, but they nevertheless facilitated the development and testing of the models and produced the model results reported above. The nature of the application warrants that further speedups can be obtained on machines with larger numbers of processors, but this application has not resulted in a decision support system and no further attempts have been made to date at its parallelization.
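The reported figures translate into speedup and efficiency as follows (a one-line calculation, included for concreteness; 0.5 h is taken as the upper bound on the parallel time):

```python
def speedup_efficiency(t_serial_h, t_parallel_h, n_procs):
    """Speedup = serial time / parallel time; efficiency = speedup / processors."""
    s = t_serial_h / t_parallel_h
    return s, s / n_procs

# 7.5 h serial versus under 0.5 h on the 16 processors of the Parsytec CC-16
s, e = speedup_efficiency(7.5, 0.5, 16)
print(s, e)   # speedup of at least 15, i.e. efficiency of roughly 94%
```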

5. Conclusions

High-performance computations have been playing a significant role in recent advances in computational finance and in corporate problems of planning under uncertainty. Problems of computational finance — from a wide spectrum of applications — were shown to be well suited for solution on parallel architectures, from clustered networks of workstations to massively parallel architectures. However, while the computational experiences — speedups, efficiency, scalability — have been uniformly good in research settings, the adoption of this technology in practice has been modest, relying typically on high-performance workstations or clusters of

Fig. 12. Relative speedup of the parallel tabu search implementation on a Parsytec CC-16 utilizing up to 16 processors.



workstations. This observation stands despite the enthusiasm that both the research and practitioner communities show for the developments in high-performance computing technology.

With the increased globalization of the financial markets, the developments of financial innovation with the constant introduction of novel financial instruments, and heightened market volatilities, the significance of computational finance is only strengthened. We expect high-performance computing to play an increasingly significant role in this development. With the development of the EURO-zone we see a shift from currency risk management towards credit risk management, while the currency exchange risk between the three major currencies is still there. RiskMetrics tools have been followed by CreditMetrics tools, and the need to optimize the resulting VaR estimates is intensified. Hence, one would expect to see further use of high-performance computations and advanced modeling techniques towards the adoption of integrated financial product management practices. Also, financial innovation is a non-abating phenomenon, and we expect to see it put on a more scientific basis with the computer-aided design of financial products; this development also necessitates high-performance computing. Finally, we have said nothing in this paper about modeling high-frequency data using nonlinear chaotic dynamics and artificial neural networks. Such approaches have met with success in many financial applications and they are also compute intensive. The continued penetration of these tools into modeling problems in finance will further intensify the use of high-performance computers. But we will not hazard a guess on whether this use will be in the form of massively parallel systems or clusters of heterogeneous networks.

Appendix A. Model formulation for the computer-aided design of financial products

This appendix develops the models for the design of portfolios of financial products. We formulate a hierarchy of optimization models. For a given universe of products and their associated returns, the institution can decide the optimal mix of products to be issued and the optimal level of leverage. A nonlinear program can be formulated to resolve these two decisions. New products can then be introduced, and those redefine the input data to the nonlinear optimization program. Our goal is to find the design of products that, together with a properly optimized product mix and financial leverage, yields the overall highest certainty equivalent return on equity (CEROE). This is then the design of the best quality.

First introduce some notation:

· J denotes the set of financial products {1, 2, ..., N}. Each product in this set is characterized by some design parameters.
· p_j denotes the vector of design parameters for product j ∈ J. For example, if the product is a callable bond the design parameters could be the lockout period and the schedule of redemption prices before maturity. We assume that n parameters are needed to characterize each product, so that p_j is a vector in R^n.
· P = ((p_1)^T, (p_2)^T, ..., (p_N)^T)^T is the concatenated vector of design parameters for all products in J.



· r^s_j(p_j) denotes 1 plus the rate of return of the jth product. The return of each product depends on the design parameters p_j and on the scenario denoted by s ∈ Ω := {1, 2, ..., S}.
· y = (y_j)_{j=1}^N is the vector denoting the amount (in face value) of debt issued in each product j ∈ J. This vector specifies the debt structure.
· π = (π_j)_{j=1}^N denotes the price per unit face value of each product in J.

With this notation we can express the yield on the debt portfolio by

R^s_L(y, P) = Σ_{j=1}^N y_j r^s_j(p_j).   (A.1)

The ROE (Eq. (15)) can now be written in a way that reflects its dependence on the policy of our institution:

ROE^s(y, P, D/E) = ((D + E) r^s_A − R^s_L(y, P)) / E.   (A.2)

We point out that the calculation of the ROE involves the evaluation of the nonlinear equation (A.1) in y and P. Furthermore, the values of r^s_j(p_j) as a function of the design parameters p_j are not available analytically, but are obtained through Monte Carlo simulation procedures such as those described in Section 2.

We now formulate two models that address the problems facing an institution that is issuing financial products. The first model assumes that a universe of designed products is given and determines the optimal debt structure (y) and the optimal level of financial leverage (E). The second model expands the universe of available products by designing new products, thereby improving the solution obtained by the first model. The two models are used hierarchically; therefore their joint solution solves in an integrated fashion both the problem of debt structure and that of financial leverage.

A.1. Debt structure and optimal leverage

For a given value of the vector P the quantities r^s_j(p_j) are parameters estimated using Monte Carlo simulation procedures. Let CEROE(P) denote the certainty equivalent of the optimal value of the following optimization program, which determines the optimal leverage and the debt structure:

Max_{y ∈ R^N, E ∈ R_+}  (1/S) Σ_{s ∈ Ω} U(ROE^s(y, P, D/E))   (A.3)
s.t.  (A.1) and (A.2),   (A.4)
      Σ_{j=1}^N π_j y_j = D,   (A.5)
      D + E = 100,   (A.6)
      (D + E) r^s_A − Σ_{j=1}^N y_j r^s_j(p_j) ≥ 0  for all s ∈ Ω.   (A.7)



(U(·) denotes the decision maker's utility function.) The last inequality, taken together with the nonnegativity of E, ensures that ROE remains nonnegative under all scenarios, and hence it guarantees the solvency of the institution for any debt structure and leverage obtained by the model. Changing the objective function (A.3) to read Min_{E ∈ R_+} E we can also calculate the minimum equity required to ensure solvency under extreme-case scenarios.

The model is a single-period one, maximizing the expected utility of ROE over a target holding period. No decisions to rebalance the portfolio are made during this period, and any cashflows generated by the assets or liabilities are invested at the riskless rate and are captured in the calculations of holding-period returns. It is possible to formulate a multiperiod stochastic programming model for this problem. Such a model could incorporate portfolio decisions made during the target period and capture intertemporal aspects of the problem.
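A sketch of how the single-period program (A.3)–(A.7) might be set up with a general-purpose NLP solver follows. The three-scenario data, two-product universe and log utility are invented for illustration only, and the ROE is clipped away from zero so the logarithm stays defined during the solver's line search:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: 3 scenarios, 2 liability products, log utility.
rA = np.array([1.10, 1.05, 1.01])            # 1 + asset return, per scenario
rj = np.array([[1.06, 1.04, 1.03],           # 1 + return of product 1
               [1.12, 1.05, 1.00]])          # 1 + return of product 2
prices = np.array([1.0, 1.0])                # price pi_j per unit face value

def neg_expected_utility(z):
    y, E = z[:2], z[2]
    D = 100.0 - E                            # budget normalization (A.6)
    RL = rj.T @ y                            # debt payoff R_L^s, Eq. (A.1)
    roe = ((D + E) * rA - RL) / E            # Eq. (A.2)
    return -np.mean(np.log(np.maximum(roe, 1e-9)))   # maximize E[log ROE]

cons = [
    {"type": "eq",  "fun": lambda z: prices @ z[:2] - (100.0 - z[2])},  # (A.5)
    {"type": "ineq", "fun": lambda z: 100.0 * rA - rj.T @ z[:2]},       # (A.7)
]
res = minimize(neg_expected_utility, x0=[40.0, 40.0, 20.0],
               bounds=[(0, None), (0, None), (1.0, 100.0)],
               constraints=cons, method="SLSQP")
print(res.success, round(float(res.x[2]), 1))   # expect success; E gives the leverage
```

The same structure scales to the real problem once r^s_j(p_j) comes from the Monte Carlo pricing of Section 2; the multi-scenario version with rebalancing is the stochastic program of Section 3.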

A.2. Optimal design of products

The design parameters of the financial products are now the decision variables of the following program:

Max_{P ∈ R^{nN}}  CEROE(P)   (A.8)
s.t.  P ∈ X,   (A.9)

where CEROE(P) is the objective value of (A.3)–(A.7), and X denotes some constraints on the design parameters (e.g., maximum allowable maturity). Note that this is a global optimization program, since, in general, the function CEROE(P) is not unimodal in P. This problem can be solved using techniques from global optimization, bearing in mind that the objective function is not available analytically but is obtained as the solution to the optimization program (A.3)–(A.7).

A systematic procedure for solving this hierarchy of optimization models is summarized as follows:

· Initialization: Parametrize the design characteristics of the financial products and assume some initial values. The specification of the holding period, the generation of the scenario set Ω, and the estimation of the holding-period returns for the target asset are carried out during the initialization step.

· Step 1. Generate holding-period returns for the target product designs, using the same scenarios and holding period used to estimate the returns of the target asset. Scenarios of holding-period returns are generated using appropriate Monte Carlo simulation procedures (Section 2).

· Step 2. Solve the nonlinear program (A.3)–(A.7) to obtain the optimal debt structure and leverage. The optimal objective value of this program is the CEROE for the product design simulated in Step 1. If termination criteria are satisfied, stop. Otherwise proceed with Step 3.

· Step 3. Adjust the design parameters of the financial products and return to Step 1.

This iterative procedure seeks product designs that allow the institution to specify an optimal debt structure and optimal leverage with the highest possible CEROE. Of



particular interest is the specification of rules for adjusting the design parameters in Step 3, in order to maximize the CEROE obtained in Step 2. The rules should take into account the fact that there exist multiple locally optimal solutions to the product design problem. It may be possible to find a design that is optimal only in a small neighborhood of the design parameters. A tabu search heuristic that searches for a globally optimal solution from among the local maxima is developed in [18].
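The Initialization/Step 1–3 loop can be sketched end-to-end. All ingredients below are invented stand-ins: four fixed scenarios, a one-parameter design family whose single parameter trades a flat coupon against asset tracking, and a fixed-leverage lower-level score instead of the full program (A.3)–(A.7).

```python
import numpy as np

rA = np.array([1.12, 1.08, 1.04, 1.00])    # Initialization: scenario set (invented)

def simulate_liability_returns(P):
    """Step 1 (stub): scenario returns of the designed product. In the real
    model these come from the Monte Carlo pricing of Section 2; here the
    design parameter P in [0, 1] blends a flat coupon with asset tracking."""
    return (1.0 - P) * 1.05 + P * (rA - 0.01)

def ceroe(P, D=50.0, E=50.0):
    """Step 2 (stub): CEROE of a one-product debt structure at fixed
    leverage, i.e. the certainty equivalent under log utility of Eq. (A.2)."""
    roe = ((D + E) * rA - D * simulate_liability_returns(P)) / E
    return np.exp(np.mean(np.log(np.maximum(roe, 1e-9))))

# Step 3 (stub): crude design adjustment by grid refinement around the incumbent
best_P = max(np.linspace(0.0, 1.0, 21), key=ceroe)
best_P = max(np.linspace(max(0.0, best_P - 0.05), min(1.0, best_P + 0.05), 21),
             key=ceroe)
print(float(best_P), float(round(ceroe(best_P), 4)))   # → 1.0 1.0691
```

In this contrived family the expected ROE is the same for every P, so log utility favors the design that tracks the asset most closely (P = 1), mirroring the tracking logic behind the bond designs of Fig. 9; the real Step 3 would use the tabu search of [18] rather than grid refinement.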

References

[1] F. Allen, D. Gale, Financial Innovation and Risk Sharing, MIT Press, Cambridge, MA, 1994.
[2] M. Bertocchi, J. Dupacova, V. Moriggia, Sensitivity of bond portfolio's behavior with respect to random movements in yield curve, Working paper, University of Bergamo, Bergamo, Italy, 1998.
[3] J.R. Birge, L. Qi, Computing block-angular Karmarkar projections with applications to stochastic programming, Management Science 34 (1988) 1472–1479.
[4] J.R. Birge, D.F. Holmes, Efficient solution of two-stage stochastic linear programs using interior point methods, Computational Optimization and Applications 1 (1992) 245–276.
[5] F. Black, E. Derman, W. Toy, A one-factor model of interest rates and its application to treasury bond options, Financial Analysts Journal (1990) 33–39.
[6] F. Black, M.J. Scholes, The pricing of options and corporate liabilities, Journal of Political Economy 81 (1973) 637–654.
[7] P. Boyle, Options: A Monte Carlo approach, Journal of Financial Economics 4 (1977) 323–338.
[8] P. Boyle, M. Broadie, P. Glasserman, Monte Carlo methods for security pricing, Journal of Economic Dynamics and Control 21 (1997) 1267–1322.
[9] S.P. Bradley, D.B. Crane, A dynamic model for bond portfolio management, Management Science 19 (1972) 139–151.
[10] M. Broadie, P. Glasserman, Estimating security price derivatives by simulation, Management Science 42 (1996) 269–285.
[11] M. Broadie, P. Glasserman, Pricing American-style options using simulation, Journal of Economic Dynamics and Control 21 (1997) 1323–1352.
[12] L.D. Cagan, N.S. Carriero, S.A. Zenios, A computer network approach to pricing mortgage-backed securities, Financial Analysts Journal (1993) 55–62.
[13] D.R. Carino, D.H. Myers, W.T. Ziemba, Concepts, technical issues and uses of the Russell–Yasuda Kasai financial planning model, Operations Research 46 (1998) 450–462.
[14] D.R. Carino, W.T. Ziemba, Formulation of the Russell–Yasuda Kasai financial planning model, Operations Research 46 (1998) 433–449.
[15] Y. Censor, S.A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications, Series on Numerical Mathematics and Scientific Computation, Oxford University Press, New York, 1997.
[16] A. Consiglio, S.A. Zenios, Optimal design of callable bonds using tabu search, Journal of Economic Dynamics and Control 21 (1997) 1445–1470.
[17] A. Consiglio, S.A. Zenios, Integrated simulation and optimization models for tracking international fixed income indices, Working paper 98-05, Department of Public and Business Administration, University of Cyprus, Nicosia, Cyprus, 1998.
[18] A. Consiglio, S.A. Zenios, Designing portfolios of financial products via integrated simulation and optimization models, Operations Research, forthcoming.
[19] S. Correnti, Integrated risk management for insurance and reinsurance companies – FIRM, Global Reinsurance (1997) 81–83.
[20] T. Crainic, M. Toulouse, M. Gendreau, Towards a taxonomy of parallel tabu search algorithms, Report 933, Center for Research in Transportation, University of Montreal, Montreal, Canada, 1993.
[21] G.B. Dantzig, G. Infanger, Large-scale stochastic linear programs: importance sampling and Benders

decomposition. Report sol 91±4, Department of Operations Research, Stanford University, 1991.

S.A. Zenios / Parallel Computing 25 (1999) 2149±2175 2173

Page 26: High-performance computing in finance: The last 10 years and the next

[22] M.A.H. Dempster, and A.M. Ireland. A ®nancial expert decision support system, in: G. Mitra (Ed.),

Mathematical Systems for Decision Support, vol. F48, NATO ASI Series, Springer, Berlin, 1998, pp.

415±440.

[23] E. Dockner, H. Moritsch, Pricing constant maturity ¯oaters with embedded options using Monte

Carlo simulation, Aurora working paper, Department of Statistics, Operations Research and

Informatics, University of Vienna, Vienna A-1010, Austria, 1999.

[24] J. Dupacova, M. Bertocchi, V. Morrigia, Postoptimality of scenario based ®nancial planning models

with an application to bond portfolio management, in: W.T. Ziemba, J.M. Mulvey (Eds.), World

Wide Asset and Liability Management, Cambridge University Press, Cambridge, 1998, pp. 263±285.

[25] S. Focardi, C. Jonas, Modeling the Market: New Theories and Techniques, Frank J. Fabozzi

Associates, New Hope, PA, 1990.

[26] P.W. Glynn, D.L. Iglehart, Importance sampling for stochastic simulations, Management Science 35

(1989) 1367±1392.

[27] B. Golub, M. Holmer, R. McKendall, L. Pohlman, S.A. Zenios, Stochastic programming models for

money management, European Journal of Operational Research 85 (1997) 282±296.

[28] R.S. Hiller, J. Eckstein, Stochastic dedication: Designing ®xed income portfolios using massively

parallel Benders decomposition, Management Science 39 (11) (1994) 1422±1438.

[29] M.R. Holmer, The asset liability management system at fannie mae, Interfaces 24 (3) (1994) 3±21.

[30] M.R. Holmer, S.A. Zenios, The productivity of ®nancial intermediation and the technology of

®nancial product management, Operations Research 43 (6) (1995) 970±982.

[31] J.M. Hutchinson, S.A. Zenios, Financial simulations on a massively parallel connection machine,

International Journal of Supercomputer Applications 5 (1991) 27±45.

[32] G. Infanger, Monte carlo importance sampling with a benders decomposition algorithm for stochastic

linear programs, Annals of Operations Research 39 (1992) 69±95.

[33] J.E. Ingersoll, Theory of Financial Decision Making, Studies in Financial Economics, Rowman

Little®eld, Totowa, New Jersey, 1987.

[34] E.R. Jessup, D. Yang, S.A. Zenios, Parallel factorization of structured matrices arising in stochastic

programming, SIAM Journal on Optimization 4 (4) (1994) 833±846.

[35] J.M. Mulvey, S. Correnti, J. Lummis, Total integrated risk management: insurance elements. SOR

Report 97±2, Statistics and Operations Research, Princeton University, Princeton, NJ, 1997.

[36] P. Kang, S.A. Zenios, Complete prepayment models for mortgage backed securities, Management

Science 38 (11) (1992) 1665±1685.

[37] M.I. Kusy, W.T. Ziemba, A bank asset and liability management model, Operations Research 34 (3)

(1986) 356±376.

[38] B. Mandelbrot, The variation of certain speculative prices, Journal of Business 36 (1963) 394±419.

[39] H. Markowitz, Portfolio selection, Journal of Finance 7 (1952) 77±91.

[40] R.C. Merton, Theory of rational option pricing, Bell Journal of Economics and Management Science

4 (1973) 141±183.

[41] R.C. Merton, Continuous-Time Finance, Blackwell, Cambridge, MA, 1990.

[42] R.C. Merton, The ®nancial system and economic performance, Journal of Financial Services

Research (1990) 263±300.

[43] M.H. Miller, Financial innovation: the last twenty years and the next, Journal of Financial and

Quantitative Analysis 21 (1986) 459±471.

[44] J.M. Mulvey (Ed.), Total Integrative Risk Management, RISK, June 1994 (Special Suppl.).

[45] J.M. Mulvey, D.P. Rosenbaum, B. Shetty, Strategic ®nancial risk management and operations

research, European Journal of Operational Research 97 (1995) 1±16.

[46] J.M. Mulvey, A. Ruszczy�nski, A new scenario decomposition method for large-scale stochastic

optimization, Operations Research, to appear.

[47] J.M. Mulvey, A.E. Thorlacius, The towers perrin global capital market scenario generation system,

in: W.T. Ziemba, J.M. Mulvey (Eds.), World Wide Asset and Liability Management, Cambridge

University Press, Cambridge, 1998, 286±312.

[48] J.M. Mulvey, H. Vladimirou, Stochastic network programming for ®nancial planning problems,

Management Science 38 (1992) 1643±1664.

2174 S.A. Zenios / Parallel Computing 25 (1999) 2149±2175

Page 27: High-performance computing in finance: The last 10 years and the next

[49] J.M. Mulvey, S.A. Zenios, Capturing the correlations of ®xed-income instruments, Management

Science 40 (1994) 1329±1342.

[50] S.S. Nielsen, S.A. Zenios, A massively parallel algorithm for nonlinear stochastic network problems,

Operations Research 41 (2) (1993) 319±337.

[51] S.S. Nielsen, S.A. Zenios, Solving multistage stochastic network programs on massively parallel

computers, Mathematical Programming 75 (1996) 227±250.

[52] S.S. Nielsen, S.A. Zenios, Scalable parallel Benders decomposition for stochastic linear programming,

Parallel Computing (1998).

[53] P.M. Pardalos, editor. Advances in Optimization and Parallel Computing, North-Holland, The

Netherlands, 1992.

[54] P.M. Pardalos, A.T. Phillips, J.B. Rosen, Topics in Parallel Computing in Mathematical Program-

ming, Science Press, New York, 1992.

[55] M.S. Shtilman, S.A. Zenios, Constructing optimal samples from a binomial lattice, Journal of

Information and Optimization Sciences 14 (1993) 1±23.

[56] K. Worzel, S.A. Zenios, Parallel- and super-computing in the ®nancial services industry, Economic

and Financial Computing 2 (1992) 169±184.

[57] K.J. Worzel, C. Vassiadou-Zeniou, S.A. Zenios, Integrated simulation and optimization models for

tracking ®xed-income indices, Operations Research 42 (2) (1994) 223±233.

[58] D. Yang, S.A. Zenios, A scalable parallel interior point algorithm for stochastic linear programming

and robust optimization, Computational Optimization and Applications 7 (1997) 143±158.

[59] S.A. Zenios, Massively parallel computations for ®nancial modeling under uncertainty, in: J. Mesirov

(Ed.), Very Large Scale Computing in the 21st Century, SIAM, Philadelphia, PA, 1991, 273±294.

[60] S.A. Zenios (Ed.), Financial Optimization, Cambridge University Press, Cambridge, England, 1993.

[61] S.A. Zenios, A model for portfolio management with mortgage-backed securities, Annals of

Operations Research 43 (1993) 337±356.

[62] S.A. Zenios, Asset/liability management under uncertainty for ®xed-income securities, Annals of

Operations Research 59 (1995) 77±98 (reprinted in: W.T. Ziemba, J.M. Mulvey (Eds.), World Wide

Asset and Liability Modeling Cambridge University Press, Cambridge, 1998).

[63] S.A. Zenios (Ed.), Quantitative Methods, AI and Supercomputers in Finance, Stanley Thornes,

Cheltenham, 1995.

[64] S.A. Zenios, M. Holmer, R. McKendall, C. Vassiadou-Zeniou, Dynamic models for ®xed-income

portfolio management under uncertainty, Journal of Economic Dynamics and Control 22 (1998)

1517±1541.

S.A. Zenios / Parallel Computing 25 (1999) 2149±2175 2175