Portfolio Efficiency: Traditional Mean-Variance Analysis
versus Linear Programming
Steve Eli Ahiabu
University of Toronto†
Spring 2003
† Please send comments to [email protected]. I thank Prof. Adonis Yatchew for his comments and suggestions on this project. All remaining shortcomings are entirely mine.
Abstract
So strong is the influence of Markowitz [1952] on modern finance that portfolio selection tasks and efficiency tests are dominated by one definition of mean-variance efficiency. Not much regard is paid to the fact that standard mean-variance utility functions satisfy the necessary and sufficient conditions of expected utility theory if and only if return distributions are elliptical. In this paper, I explore efficiency implications of portfolios using the mainstream mean-variance methodology and compare my results to a newer test approach which assumes only nonsatiation and concavity of the utility functions, similar to Arrow [1971]. The results are interesting and suggest that better diversification is required of the otherwise popular portfolio, the value-weighted S&P 500.
Keywords: MV-efficiency, HL-efficiency, Spanning, Intersection
I Introduction
The concepts of Spanning and Intersection are major bedrocks of modern finance
in general and portfolio selection theory in particular. Given market completeness, the
market portfolio frontier is said to span all asset returns in the mean-variance sense. A
portfolio $A$ with $n + N$ assets is said to span a narrower portfolio $B$ of $n$ assets if the
efficient frontiers of the two portfolios coincide. In fact, an investor holding the efficient
allocation $B$ does not benefit, mean-variance wise, from adding any of the extra $N$ assets
in the broader portfolio $A$. Thus there is no benefit from further expansion or contraction
of the current portfolio. Spanning occurs if no investor benefits from any diversification
move, irrespective of their degree of risk aversion.
If however the efficient frontiers intersect, then there is exactly one utility
function for which an investor does not benefit. Only investors with that utility function will
be optimizing if they fail to add assets from the broader portfolio. Alternatively, there
exists one coefficient of risk aversion for which diversification benefits do not occur.
Intersection thus tests whether or not we can find any rational agent, with any conceivable
degree of risk aversion (however absurd, as often suggested in the literature on the equity
premium puzzle), who might not benefit from diversification. Spanning thus implies
intersection, but the reverse is not the case. Assets that are spanned can be ignored for the
purpose of portfolio selection. For diversification purposes therefore, it is a standard task
for an investor to test for potential benefits from extending portfolio content to a broader
spectrum either for higher return, to economize on risk or in general, to achieve some
preferred return distribution.
Mean-variance analysis is a popular tool for analyzing portfolio efficiency. The
procedure typically involves maximizing a specified mean-variance utility function
subject to some constraints, including feasibility.1 Since Markowitz [1952, 1959] and Tobin
[1958], mean-variance analysis has been dominated by one definition of efficiency.
According to these authors, a portfolio is efficient if there exists no other portfolio within
the feasible set that is characterized either by a higher expected return with no worse
volatility or by a lower volatility with no worse mean return. In this paper, I refer to this
approach, which is the most popular, as the standard or mainstream mean-variance theory.
It is important however to stress that standard mean-variance analysis is only a
subset of the broader framework of Stochastic Dominance and is not necessarily
consistent with expected utility theory (see Arrow [1952]). It is only consistent if the
return distribution is elliptical, which is hardly the case with finance data. In particular,
an investor may prefer an asset with lower mean return and higher volatility if that asset
has more preferred distributional properties, including skewness, kurtosis and persistence,
compared to the benchmark. That is, the standard Markowitz-type definition of mean-
variance efficiency is a special case which is particularly not ideal given the nature of
financial data.
Hanoch and Levy [1970] provide an alternative definition of mean-variance
efficiency which is more consistent and free of distributional assumptions. In their
framework, a portfolio is efficient if there exists a monotone and concave (utility)
function that rationalizes that portfolio. Post [2001] uses this definition in his proposed
linear programming test for spanning and intersection using second order stochastic
dominance. Thus, by using second order stochastic dominance, his test is mindful only of
two of the most important properties of expected utility theory: non-satiation and risk
aversion. Further, by using linear programming, the test avoids the computational burden
of quadratic solutions that characterizes mainstream mean-variance analysis, and it has
the additional flexibility to incorporate logical extensions such as transaction costs.
1 Other approaches include minimizing variance subject to a minimum desirable return.
In this paper, I evaluate portfolio efficiency and mean-variance spanning and
intersection via the restrictive standard approach, initially using simple Sharpe ratios2
and next with the popular full characterization. I then repeat the task using the Hanoch-
Levy definition of efficiency and then compare the results of both branches. The rest of the
paper is organized as follows. The next section reviews the literature on portfolio
efficiency and mean-variance spanning and intersection. Section III presents the tests
considered in this paper as well as methods and algorithm employed. In section IV, I
present the data and the main results. I also discuss briefly the implications. Section V
concludes.
II Literature Review
The literature on Spanning and Intersection is vast and the intention here is not to
give a complete overview but merely to briefly recap the main branches and to indicate to
an interested reader where to look.
Applications of standard mean-variance analysis (MVA) abound. DeSantis
[1995] and Cumby and Glen [1990] employ MVA to ask whether US investors can
benefit from international diversification. Taking the viewpoint of a US investor who
initially invests only in the US, these authors study whether such an investor can
enhance the mean-variance characteristics of their portfolio by also investing in other
developed markets. DeSantis [1994], Bekaert and Urias [1996], Errunza, Hogan and
Hung [1998], and DeRoon, Nijman and Werker [2001] investigate mean-variance
portfolio advantages to the US investor who holds assets in the developed markets in the
US, Japan and Europe by investing in emerging markets. Glen and Jorion [1993] take the
argument further by investigating whether mean-variance optimizing investors with well-
diversified international portfolios should add currency futures to their portfolios. That is,
2 Sharpe [1971]
should they hedge the currency risk that arises from positions taken in cross border stocks
and bonds?
Some authors have explored spanning and intersection in joint formulations
involving mean-variance frontiers and volatility bounds in what has come to be termed
“duality” tests. Ferson, Foerster, and Keim [1993], DeSantis [1994], Ferson [1995] and
Bekaert and Urias [1996] demonstrate that the hypothesis of mean-variance spanning and
intersection can be reformulated in terms of the volatility bounds similar to those by
Hansen and Jagannathan [1991]. In their framework, the question is whether the set of
additional assets contain information about the volatility of the stochastic discount factor
or pricing kernel that is not already present in the current portfolio. A mean-variance
improvement in this case occurs if the diversification into emerging markets for instance
provides tighter volatility bounds on the stochastic discount factor than returns from the
developed markets only. Bansal and Lehmann [1997] provide a bound on the mean of the
logarithm of the pricing kernel, using growth optimal portfolios. Balduzzi and Kallal
[1997] show how additional knowledge about risk premia may lead to sharper bounds on
the volatility of the discount factor, and Balduzzi and Robotti [2000] use the minimum
variance discount factor to estimate risk premia associated with economic risk variables.
There is literature that uses conditioning information. Finance return data are
hardly independently and identically distributed (i.i.d.). Cochrane [1997] and Bekaert and
Urias [1996] develop models that allow the incorporation of conditional information in
their tests. Though their procedures are intuitive and involve only a rescaling of returns, a
disadvantage of this method is that the dimension of the estimation and testing problem
increases quickly. Harvey [1989], Campbell and Viceira [1998] and DeRoon, Nijman
and Werker [1998] show how the problem can be largely circumvented by assuming that
variances and covariances are homoskedastic, while expected returns are allowed to vary
over time, although this assumption is largely in conflict with the empirical evidence
regarding time-varying second moments. Appealing to this simplifying assumption
however, the authors show that the conditioning variables can easily be accounted for by
using them as additional regressors. The restrictions for the intersection and spanning
hypotheses then become similar to the restrictions in the standard case with i.i.d.
variables. This way of incorporating conditional variables also has the additional
advantage that the regression estimates indicate the economic circumstances, i.e., for
what values of the conditioning variables intersection and spanning can or cannot be
rejected, as demonstrated in Shanken [1990] and Ferson and Schadt [1996].
Markowitz [1952, 1959] and Tobin [1958] present quantitative approaches to
portfolio analysis. Their prescription remains dominant in practice to date. Markowitz
proposes choosing the portfolio that minimizes variance subject to a restriction on the
mean return. These methodologies involve large quadratic programming solution rules.
To simplify the problem, Yamazaki and Konno [1991] present linear methods involving
mean absolute deviation analysis (MADA), while Young [1998] formalizes a maximum (and
minimum) return approach requiring linear programming. The latter also establishes the
exact relationship between his minimax approach and Markowitz's.
Most mean-variance analysis methodologies (the methods above) consider the case where the
investment possibility set is given. Kandel and Stambaugh [1987] and De Roon, Nijman
and Werker [2001] propose mean-variance spanning and intersection tests for the case
where new assets, such as IPOs, appear in the market.
Though dominated by standard mean-variance analysis, it is important to stress
that standard MVA is only a subset of the broader subject of Stochastic Dominance (SD).
The advantages of using this standard form include its tractability, ease of testability (see
Huberman and Kandel [1987] and De Roon et al. [2001]) as well as flexibility to allow
for logical extensions such as transaction costs and short selling constraints. As pointed
out by Bigelow [1993], this standard definition of MVA is not necessarily consistent with
expected utility theory; it is consistent only if the return distribution is elliptical. Hanoch and
Levy [1970] make this claim much more intuitive in their characterization of
results when different classes of mean-variance utility functions are assumed. They
suggest that the stronger the restrictions assumed on admissible utility functions, the
closer one gets to a complete individual preference ordering. Therefore, the number of
items in the efficient set is reduced as the condition for dominance becomes more
specialized, as in the standard mean-variance utility case.
Meyer [1979] presents necessary and sufficient conditions for testing whether or
not a given portfolio is efficient in the Hanoch-Levy sense. Similar to the case of standard
MVA, the implementation of such tests involves quadratic programming, which can be
computationally taxing. Post [2001] demonstrates a way to derive necessary and
sufficient conditions that require only linear programming, hence reducing the
computational burden enormously. His test also offers increased flexibility, including the
opportunity to consider transaction costs and short selling constraints. In the next section,
I recap standard mean-variance analysis, starting with the Sharpe ratio and then a full
characterization. Portfolio efficiency tests are highlighted both for standard MVA and for
the Hanoch-Levy type utility due to Post [2001].
III Methods and Procedures
The Sharpe ratio is perhaps the crudest and quickest implementation of the standard mean-
variance criterion. A portfolio is mean-variance efficient if no alternative feasible
portfolio yields at least the same mean return with a lower variance, or a higher mean
return with at worst the same variance.3 The ratio is simply4

$$ SR(\omega \mid \phi_t) = \frac{\mu(r_\omega)}{\sigma(r_\omega)} $$

where $\mu(r_\omega)$ and $\sigma(r_\omega)$ are the mean return and standard deviation of the portfolio $\omega$,
and $\phi_t$ is simply the data set available up to date $t$.
It is easy to spot potential flaws with the Sharpe ratio. For instance, this criterion
automatically ranks risk-free assets as the most efficient, with a ratio of positive infinity,
regardless of the distribution of returns on risky assets. This is so because no asset
other than the risk-free asset has zero variance.
3 An alternative measure of portfolio performance similar to the Sharpe ratio is Jensen's alpha, following Jensen [1968].
4 Other formulations of the Sharpe ratio use the mean-to-variance ratio. This is simply a normalization and yields the same portfolio ranking as the above formulation.
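The ratio above is straightforward to compute from a return history. A minimal sketch (the return matrix below is made up for illustration; the sample standard deviation is used):

```python
import numpy as np

def sharpe_ratio(returns, weights):
    """Mean-to-standard-deviation ratio of the portfolio return series r_t * omega."""
    port = returns @ weights          # T-vector of portfolio returns
    return port.mean() / port.std(ddof=1)

# Made-up monthly returns: three periods, two assets, equal weights.
R = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
w = np.array([0.5, 0.5])
print(sharpe_ratio(R, w))   # portfolio returns 1.5, 3.5, 5.5 -> 3.5 / 2.0 = 1.75
```

As the text notes, a (near) zero-variance series sends this ratio to infinity, which is one of its flaws.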
A more rigorous mean-variance analysis follows. An investor is assumed to be
faced with the one-period problem of maximizing the indirect utility of future wealth

$$ \max_{\omega} \; E_t\!\left[ u(W_{t+1}) \right] $$

subject to the constraints $W_{t+1} = W_t\, \omega' r_{t+1}$ and $\omega' i_n = 1$ (see Ingersoll
[1987]). Here $\omega$ is the vector of weights assigned to each of the $n$ assets within the portfolio,
$i_n$ is an $n \times 1$ unit vector, and $r_{t+1}$ is an $n \times 1$ vector of next-period returns. Utility $u$ is
assumed to satisfy the usual properties. The agent's problem thus can be rewritten as:

$$ \max_{\omega} \; E_t\!\left[ u(W_t\, \omega' r_{t+1}) \right] + \eta \left( 1 - \omega' i_n \right) $$

where $\eta$ is a Lagrange multiplier. The first order condition of the above implies that

$$ E_t\!\left[ m_{t+1}\, r_{t+1} \right] = i_n $$

where $m_{t+1}$ is the stochastic discount factor or pricing kernel, which is assumed to exist if
the law of one price holds.5 It is given by $m_{t+1} = W_t\, u'(W_t\, \omega' r_{t+1})\, \eta^{-1}$.
As suggested by the previous section, and as shown by Ferson, Foerster, and
Keim [1993], DeSantis [1994], Ferson [1995] and Bekaert and Urias [1996], the concept
of mean-variance spanning and intersection has a dual interpretation in terms of volatility
bounds. In this regard, mean-variance spanning means that the volatility bound derived
from the returns $r_{1,t+1}$ is the same as the bound derived from $(r_{1,t+1}; r_{2,t+1})$. Therefore, the
minimum variance stochastic discount factors for $r_{1,t+1}$, $m(r_{1,t+1})$, are also the minimum
variance stochastic discount factors for $(r_{1,t+1}; r_{2,t+1})$, and the asset returns $r_{2,t+1}$ do not
provide information about the necessary volatility of stochastic discount factors that is
not already present in $r_{1,t+1}$.
Using the definition of covariance, $\mathrm{cov}_t(x_t, y_t) = E_t(x_t y_t) - E_t(x_t)E_t(y_t)$, we can
rewrite the above FOC as:

$$ E_t\!\left[ r_{t+1} \right] = E_t\!\left[ m_{t+1} \right]^{-1} i_n - E_t\!\left[ m_{t+1} \right]^{-1} \mathrm{cov}_t\!\left( m_{t+1},\, r_{t+1} \right) $$

5 Substituting the stronger assumption of "no arbitrage" for the "law of one price", one can show that $m_{t+1} > 0$. The same result is arrived at when one interprets the kernel function as the intertemporal marginal rate of substitution.
The optimal portfolio weights ω can be found from the above if the utility function and
Lagrange multiplier η are known.
The problem becomes more tractable if one restricts the objective to mean-
variance optimization (that is, assuming a mean-variance utility function). Assume that a
fund initially has $n$ assets in its portfolio. A portfolio $\omega$ is mean-variance efficient if it is
chosen to solve

$$ \max_{\omega} \; u = \omega' \mu - \frac{\gamma}{2}\, \omega' \Sigma\, \omega \qquad (1) $$

subject to the constraint $\omega' i_n = 1$, where $\mu$ is an $(n \times 1)$ vector of gross returns, $\gamma$ is the
coefficient of risk aversion and $\Sigma$ the $(n \times n)$ variance-covariance matrix of returns.
Optimal allocation requires the assigned weights to be generated as:

$$ \omega = \gamma^{-1} \Sigma^{-1} \left( \mu - \eta\, i_n \right) \qquad (2) $$

In the above, the Lagrange multiplier $\eta$ can be interpreted as the zero-beta rate, i.e. the
return of the portfolio that is not correlated with the optimal portfolio.
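The closed form (2) is easy to evaluate numerically once $\eta$ is pinned down by the budget constraint $\omega' i_n = 1$ (which yields $\eta = (i_n'\Sigma^{-1}\mu - \gamma)/(i_n'\Sigma^{-1}i_n)$, the rearrangement of equation (8) below). A sketch with made-up return figures:

```python
import numpy as np

def mv_weights(mu, Sigma, gamma):
    """Mean-variance optimal weights from equation (2),
    omega = (1/gamma) * Sigma^{-1} (mu - eta * i_n),
    with the zero-beta rate eta chosen so the weights sum to one."""
    n = len(mu)
    i_n = np.ones(n)
    Sinv_mu = np.linalg.solve(Sigma, mu)
    Sinv_i = np.linalg.solve(Sigma, i_n)
    eta = (i_n @ Sinv_mu - gamma) / (i_n @ Sinv_i)   # budget constraint
    return (Sinv_mu - eta * Sinv_i) / gamma

# Illustrative (made-up) monthly mean returns and covariance matrix:
mu = np.array([1.0, 1.2])
Sigma = np.array([[30.0, 25.0],
                  [25.0, 50.0]])
w = mv_weights(mu, Sigma, gamma=2.0)
print(w, w.sum())   # weights sum to 1 by construction
```

By construction the first order condition $\mu - \gamma\Sigma\omega = \eta\, i_n$ holds, i.e. the residual vector has equal components.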
The return vector on the entire market $R_t$ can be partitioned into $(r_{1,t}',\, r_{2,t}')'$, where
$r_{1,t}$ is $(n \times 1)$ while $r_{2,t}$ is $(N \times 1)$. We regard $r_{1,t}$ as the benchmark portfolio and $r_{2,t}$ the
vector of test assets. If the benchmark portfolio is mean-variance efficient, then we
expect efficient portfolios to be of the form:

$$ \omega = \begin{pmatrix} \omega_1 \\ 0 \end{pmatrix} \qquad (3) $$

with $\omega_1$ being $(n \times 1)$ and $\omega$ being $\left[ (n+N) \times 1 \right]$. From equation (2), in the general case
we have $\mu - \eta\, i = \gamma\, \Sigma\, \omega$, a partitioning of which implies

$$ \begin{pmatrix} \mu_1 - \eta\, i_n \\ \mu_2 - \eta\, i_N \end{pmatrix} = \gamma \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \begin{pmatrix} \omega_1 \\ 0 \end{pmatrix} \qquad (4) $$

The first equation in (4) implies $\omega_1 = \gamma^{-1} \Sigma_{11}^{-1} \left( \mu_1 - \eta\, i_n \right)$ and, substituting this into the
second line, we have:

$$ \mu_2 = \Sigma_{21} \Sigma_{11}^{-1} \mu_1 + \eta \left( i_N - \Sigma_{21} \Sigma_{11}^{-1} i_n \right) \qquad (5) $$
If there is only one value of $\eta(\gamma)$ for which this condition holds, we say that there is
intersection. In this case, the two efficient frontiers intersect, and there exists a rational
risk averse investor (i.e. an investor with a specific degree of risk aversion) who has no
benefit, in terms of the standard mean-variance tradeoff, from including the extra $N$ assets
in her portfolio. If this condition holds for every value of $\eta(\gamma)$, we say that there is
spanning. Thus, spanning implies:

$$ \mu_2 - \Sigma_{21} \Sigma_{11}^{-1} \mu_1 = 0 $$
$$ \Sigma_{21} \Sigma_{11}^{-1} i_n - i_N = 0 $$

Spanning means that the mean-variance frontier of the $n$ assets completely coincides
with the mean-variance frontier of the $n + N$ assets, and no investor, irrespective of
degree of risk aversion, can benefit from further diversification. Intersection on the other
hand means that the two mean-variance frontiers have only one common point
(portfolio).
In the two-asset case equation (5) becomes

$$ \mu_2 = \eta \left( 1 - \frac{\sigma_{21}}{\sigma_1^2} \right) + \frac{\sigma_{21}}{\sigma_1^2}\, \mu_1 \qquad (6) $$

where subscript 1 refers to the initial asset portfolio and 2 to the asset considered for potential
addition. In (6), the ratio $\sigma_{21}/\sigma_1^2$ is precisely the slope coefficient (beta) from
regressing the return of asset 2 on that of asset 1. The hypothesis $\omega_2 = 0$ can thus be
tested by running the regression:

$$ r_{2,t} = \alpha + \beta\, r_{1,t} + u_t \qquad (7) $$

and testing the restriction $\alpha = \eta \left( 1 - \beta \right)$. Again, the Lagrange multiplier $\eta$ is
unobserved. Pre-multiplying equation (2) by $i_2'$, we have $1 = i_2' \omega = \gamma^{-1} i_2' \Sigma^{-1} \left( \mu - \eta\, i_2 \right)$.
Solving for the relative risk aversion coefficient,

$$ \gamma = i_2' \Sigma^{-1} \mu - \eta\, i_2' \Sigma^{-1} i_2 \qquad (8) $$
This implies that one can test for intersection (for a specific investor with a given degree
of risk aversion $\gamma$, hence $\eta$) as well as test for spanning (for all investors, irrespective of
$\gamma$) using the above methodology. In other words, testing intersection involves choosing
$\gamma$ (hence $\eta$) and testing whether the condition $\alpha = \eta \left( 1 - \beta \right)$ holds. A test for
spanning requires this condition to hold for all $\gamma$ (hence $\eta$). This amounts to the joint
hypothesis $\beta = 1$ and $\alpha = 0$.6

To test for intersection in this standard Markowitz-type utility framework, I
reparameterize and test $\kappa = 0$ in the regression

$$ r_{2,t} - \eta = \kappa + \beta \left( r_{1,t} - \eta \right) + u_t $$

for several values of $\gamma$ (hence $\eta$). The test of spanning involves the reparameterized
regression

$$ r_{2,t} - r_{1,t} = \alpha + \lambda\, r_{1,t} + u_t $$

and the joint hypothesis $\lambda = \beta - 1 = 0$ and $\alpha = 0$.
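The intersection regression above is a one-line OLS fit. A sketch on made-up return series, with the t-statistic on $\kappa$ computed from the usual OLS standard error (an illustration of the mechanics, not the paper's estimation code):

```python
import numpy as np

def intersection_test(r1, r2, eta):
    """Regress (r2 - eta) on (r1 - eta) with intercept kappa;
    return (kappa, beta, t-statistic for H0: kappa = 0)."""
    y = r2 - eta
    X = np.column_stack([np.ones_like(r1), r1 - eta])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    T, k = X.shape
    s2 = resid @ resid / (T - k)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)     # OLS coefficient covariance
    kappa, beta = coef
    return kappa, beta, kappa / np.sqrt(cov[0, 0])

# Made-up benchmark and test-asset returns (monthly, in percent):
rng = np.random.default_rng(1)
r1 = rng.normal(1.0, 5.0, 500)
r2 = 0.1 + 0.9 * r1 + rng.normal(0.0, 1.0, 500)
kappa, beta, t_stat = intersection_test(r1, r2, eta=0.5)
```

Non-rejection of $\kappa = 0$ for a given $\eta$ corresponds to intersection at that degree of risk aversion; the spanning version adds the joint restriction on the slope.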
The Hanoch-Levy (HL) approach is critical of the above mainstream MVA because of the
distributional assumptions inherent in the utility function specification. Since
utility functions are not observable, this provides a rationale for the use of general
assumptions such as nonsatiation and risk aversion. This notion is evident in their
alternative definition of portfolio efficiency. A portfolio $\omega^* \in \Omega$ is HL-efficient if it is
optimal relative to some function $u \in U$, where $U$ is the set of all monotone concave
utility functions. This is a much stronger definition than the standard mean-variance
definition of efficiency (see above). In contrast to the Sharpe ratio, the HL definition
typically classifies a riskless fund as inefficient, consistent with Arrow [1971], since
stocks generally earn a mean return above the risk-free rate. Thus, a HL-efficient
portfolio satisfies

$$ \min_{u \in U} \max_{\omega \in \Omega} \left[ \int u(r\,\omega)\, dF(r) - \int u(r\,\omega^*)\, dF(r) \right] = \min_{u \in U} \max_{\omega \in \Omega} \frac{1}{T} \sum_{t=1}^{T} \left[ u(r_t\, \omega) - u(r_t\, \omega^*) \right] = 0 $$

where $F(r)$ is the empirical return distribution function.

The linear programming test by Post [2001] is designed using the above approach.
His test asks whether or not one can construct a monotone and concave utility
$u \in U$ that can rationalize the portfolio choice $\omega^*$.

6 An alternative test of spanning and intersection is the Jensen's alpha approach, due to Jensen [1968].

In other words, can we find support
lines for a monotone and concave quadratic function that justify the diversification
strategy evident in the current portfolio? If we can, then we call such a function a
potential utility function for an optimizing agent holding that portfolio. The above minimax
formulation has often appeared in the literature in measures of production efficiency7 as
well as in the finance literature (see Young [1998]). In the HL definition, the class of
monotone and concave quadratic utility functions is restricted to:

$$ U \equiv \left\{ u \in U : u(r) = \alpha_1 + \alpha_2\, r + \alpha_3\, r^2,\; \alpha_3 \le 0,\; \alpha_2 + 2 \alpha_3 \Delta \ge 1 \right\} $$

The first constraint, $\alpha_3 \le 0$, ensures concavity. $\Delta$ can be seen as the marginal
increase/decrease in return due to a portfolio reallocation, and hence the second constraint,
$\alpha_2 + 2 \alpha_3 \Delta \ge 1$, restricts the function to be strictly increasing over the entire return
interval.
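Membership in this restricted class is just two linear inequalities on $(\alpha_2, \alpha_3)$. A trivial sketch (the numbers and the name `admissible` are illustrative, and $\Delta$ is interpreted as the relevant return bound from the text):

```python
def admissible(a2, a3, delta):
    """Check membership in the restricted quadratic utility class:
    u(r) = a1 + a2*r + a3*r^2 with a3 <= 0 (concavity) and
    a2 + 2*a3*delta >= 1 (monotonicity over the relevant return range)."""
    return a3 <= 0 and a2 + 2 * a3 * delta >= 1

print(admissible(1.0, 0.0, delta=5.0))    # risk-neutral case: True
print(admissible(1.5, -0.1, delta=5.0))   # 1.5 - 1.0 = 0.5 < 1: False
```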
If the relevant portfolio is efficient, then such support lines must exist, and if such
support lines exist, then the portfolio must be efficient. There are two important issues to
note concerning the statistic below. First, the statistic does not represent a meaningful
performance measure that can be used to rank different portfolios by degree of
efficiency. Secondly, the support lines are used as an instrument for testing the efficiency
of the portfolio rather than as an estimate of the utility function for which the given portfolio is
efficient. This is because once a portfolio is found to be efficient, there typically exist
multiple candidate utility functions that could equally justify that portfolio. The linear
programming formulation requires the test statistic

$$ \xi(\omega^*) = \min_{\theta,\, (\alpha_2, \alpha_3) \in \Theta} \left\{ \theta \;:\; \theta \ge \frac{1}{T} \sum_{t=1}^{T} \left( \alpha_2 + 2 \alpha_3\, r_t\, \omega^* \right) \left( r_{i,t} - r_t\, \omega^* \right) \quad \forall\, i \in \{1, \ldots, n+N\} \right\} $$

with $\Theta \equiv \left\{ (\alpha_2, \alpha_3) \in \Re \times \Re_{-} : \alpha_2 + 2 \alpha_3 \Delta \ge 1 \right\}$. The problem hence has three choice
variables ($\theta$, $\alpha_2$ and $\alpha_3$) and $n + N + 1$ constraints. The portfolio $\omega^*$ is efficient if and
only if $\xi(\omega^*) = 0$.8

7 See Debreu [1951] and Farrell [1957].
8 See Post [2001] for proof.
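Since $\theta$, $\alpha_2$ and $\alpha_3$ enter every constraint linearly, the statistic is a small linear program. A sketch using `scipy.optimize.linprog` on made-up return data (the constraint layout follows the reconstruction above; $\Delta$ and the returns are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def hl_efficiency_lp(R, w_star, delta):
    """xi(w*): minimize theta subject to
    theta >= (1/T) sum_t (a2 + 2*a3*r_t w*)(r_it - r_t w*) for every asset i,
    a2 + 2*a3*delta >= 1 and a3 <= 0.  Decision variables: x = [theta, a2, a3]."""
    T, n = R.shape
    port = R @ w_star                                 # r_t w*, length T
    diff = R - port[:, None]                          # r_it - r_t w*, T x n
    a = diff.mean(axis=0)                             # coefficient on a2, per asset
    b = (2.0 * port[:, None] * diff).mean(axis=0)     # coefficient on a3, per asset
    # linprog wants A_ub @ x <= b_ub, so write a_i*a2 + b_i*a3 - theta <= 0
    # and -(a2 + 2*delta*a3) <= -1:
    A_ub = np.vstack([np.column_stack([-np.ones(n), a, b]),
                      [0.0, -1.0, -2.0 * delta]])
    b_ub = np.concatenate([np.zeros(n), [-1.0]])
    res = linprog(c=[1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (None, 0.0)])
    return res.fun   # xi(w*) >= 0, zero exactly when w* is HL-efficient

# Single-asset opportunity set: trivially efficient, so xi(w*) = 0.
R = np.array([[1.0], [2.0], [0.5]])
print(hl_efficiency_lp(R, np.array([1.0]), delta=2.0))
```

The $\omega^*$-weighted combination of the asset constraints forces $\theta \ge 0$, so the program is bounded and the statistic is nonnegative, matching the single-asset observation made later in the text.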
In the above, the necessary condition follows directly from the Kuhn-Tucker conditions
for the problem

$$ \omega^* = \arg\max_{\omega \in \Omega} \frac{1}{T} \sum_{t=1}^{T} u(r_t\, \omega) $$

for $u \in U$. Thus the condition

$$ \frac{1}{T} \sum_{t=1}^{T} u'(r_t\, \omega^*) \left( r_t\, \omega^* - r_t\, \omega \right) \ge 0 \qquad \forall\, \omega \in \Omega \qquad (9) $$

That is to say, $\omega^*$ is optimal for the concave monotone set of utility functions only if all
alternative portfolios $\omega \in \Omega$ are enveloped by the tangent hyperplane defined by the
vector $u'(\omega^*) \equiv \left( u'(r_1\, \omega^*), \ldots, u'(r_T\, \omega^*) \right)$. By construction (given our choice of $\alpha_2$ and
$\alpha_3$), this marginal utility vector is feasible. The
inequality (9) above implies that $\omega^*$ is efficient only if $\xi(\omega^*) = 0$, which is the
necessary condition for efficiency.
Sufficient conditions are established by taking the optimal $(\alpha_2, \alpha_3)$ for the
portfolio and the concave function $u(W) = \alpha_2^* W + \alpha_3^* W^2$. If $\omega^*$ is efficient, i.e.
$\xi(\omega^*) = 0$, then

$$ \frac{1}{T} \sum_{t=1}^{T} u'(r_t\, \omega^*)\, r_t\, \omega^* = \max_{\omega \in \Omega} \frac{1}{T} \sum_{t=1}^{T} u'(r_t\, \omega^*)\, r_t\, \omega \qquad (10) $$

Concavity of $u$ implies the gradient inequality

$$ u(r_t\, \omega) \le u(r_t\, \omega^*) + u'(r_t\, \omega^*) \left( r_t\, \omega - r_t\, \omega^* \right) $$

for all $\omega \in \Omega$. Averaging this over $t$ and applying equation (10), the second term on the
right is non-positive, so $\frac{1}{T} \sum_{t=1}^{T} u(r_t\, \omega) \le \frac{1}{T} \sum_{t=1}^{T} u(r_t\, \omega^*)$ for every feasible $\omega$.
That is, $\omega^*$ maximizes sample expected utility for this admissible $u$, which simply implies
that $\xi(\omega^*) = 0$ is also a sufficient condition for efficiency.

One observation about the test statistic is that there is always a feasible solution,
for instance $\alpha_2 = 1$ and $\alpha_3 = 0$, in which case $\theta = \max_i \frac{1}{T} \sum_{t=1}^{T} \left( r_{i,t} - r_t\, \omega^* \right)$
necessarily satisfies the constraints. This refers to the case of a risk neutral agent with linear
utility who seeks to maximize only expected return. A second observation is as follows. Consider an
initial portfolio of just one stock ($n = 1$) and set $\omega = \omega^*$. Thus $r_{i,t} = r_{1,t}$ and hence
$r_t\, \omega^* - r_{i,t} = 0$. Immediately, one gets the result $\theta = 0$ and the portfolio is efficient. In
other words, if there is only one asset in the investment opportunity set, then that single
asset makes an efficient portfolio.
To test the efficiency hypothesis, Post [2001] develops an alternative formulation,
which he calls the "dual formulation", that is very similar to that outlined above. The dual
statistic is given as:

$$ \psi(\omega^*) = \max_{\omega \in \Omega} \left\{ \mu(\omega, \omega^*) : \sigma(\omega, \omega^*) \ge 0 \right\} $$

where the mean difference between a potential feasible alternative $\omega$ and the current
portfolio $\omega^*$ is

$$ \mu(\omega, \omega^*) = \frac{1}{T} \sum_{t=1}^{T} \left( r_t\, \omega - r_t\, \omega^* \right) $$

and

$$ \sigma(\omega, \omega^*) = \frac{1}{T} \sum_{t=1}^{T} \left( \Delta - r_t\, \omega^* \right) \left( r_t\, \omega - r_t\, \omega^* \right) $$

is the co-movement measure between the two portfolios. Again, the derivation of this is quite
similar to that recapped above (see Post [2001]). The portfolio $\omega^*$ is HL-efficient if and
only if $\psi(\omega^*) = 0$.
This paper adopts the dual formulation for the tests conducted in the next section.
The algorithm used is rather simple, though not fast for a large asset space and a high
accuracy level. First, I decide the degree of precision that an investor may consider
important with regard to portfolio weights. That is, an investor seeking fairly high accuracy
may choose weights to a precision of, say, $1 \times 10^{-5}$. Next, I initialize $\psi$ to zero, the value
attained at $\omega = \omega^*$ itself. Then I take the current portfolio weight vector $\omega^*$ and formulate all possible
alternative weight permutations to the required accuracy level, bearing in mind the
investment constraints $\sum_{i=1}^{n+N} \omega_i = 1$ and $\omega_i \ge 0 \;\; \forall\, i \in \{1, \ldots, n+N\}$. Using each
hypothetical weight vector $\omega$, I evaluate $\mu(\omega, \omega^*)$ and $\sigma(\omega, \omega^*)$. If $\sigma(\omega, \omega^*) \ge 0$ and
$\mu(\omega, \omega^*)$ exceeds the previously saved $\psi$, I replace $\psi$ with the current $\mu(\omega, \omega^*)$
and also save the current weight vector. Then I try the next hypothetical weight vector.
My final $\psi^*$ is the value remaining after the entire iteration is done.
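The grid-search procedure just described can be sketched as follows, with a deliberately coarse step so the enumeration stays small; the return matrix and $\Delta$ are made up for illustration:

```python
import itertools
import numpy as np

def dual_test(R, w_star, delta, step=0.05):
    """Brute-force evaluation of the dual statistic psi(w*):
    enumerate long-only weight vectors on a grid summing to one, and keep
    the largest mean improvement mu(w, w*) among candidates with
    co-movement sigma(w, w*) >= 0."""
    T, n = R.shape
    port_star = R @ w_star
    psi, best_w = 0.0, w_star          # psi = 0 is attained at w = w* itself
    ticks = round(1.0 / step)
    # All weight vectors on the grid {0, step, 2*step, ..., 1} summing to 1:
    for combo in itertools.product(range(ticks + 1), repeat=n):
        if sum(combo) != ticks:
            continue
        w = np.array(combo) / ticks
        diff = R @ w - port_star
        mu = diff.mean()
        sigma = ((delta - port_star) * diff).mean()
        if sigma >= 0 and mu > psi:
            psi, best_w = mu, w
    return psi, best_w

# Two made-up assets where the second strictly dominates the first:
R = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
psi, w = dual_test(R, np.array([1.0, 0.0]), delta=4.0)
print(psi, w)   # psi > 0: the single-asset holding is not HL-efficient
```

The number of grid points grows combinatorially in the number of assets and in $1/\text{step}$, which is exactly the speed limitation noted in the text.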
IV Data Description and Main Results
In the current paper, I use monthly return data on seven of the most widely
watched benchmark portfolios/indexes of stock market activity. The first is the S&P 500
value-weighted cum-dividend return, with a time range of July 1926 to December 2002,
from the CRSP database. The remaining six are Fama and French book-to-market-sorted and size-
sorted benchmark portfolio return data with the same time range.9 Summary descriptive
statistics are reported in table 1.

Table 1: Descriptive Statistics: monthly return data in percentages, July 1926 to Dec 2002.

              S&P        B/L        B/M        B/H        S/L        S/M        S/H
Mean          0.9694     0.9137     0.9765     1.1974     1.0483     1.2680     1.4585
SD            5.6958     5.5060     5.9256     7.4732     7.9488     7.2302     8.4982
Minimum     -28.7100   -28.1500   -27.7900   -35.3600   -32.3400   -30.8500   -33.7000
Maximum      41.6800    32.4700    51.6100    70.5600    64.3200    64.3900    82.0200
Skewness      0.4443    -0.1138     1.3966     1.6900     0.9777     1.4312     2.0744
Kurtosis     12.5277     8.1188    20.2693    21.2780    12.9412    18.1142    22.5401
Jarque-Bera 3482.505   997.3123  11646.53   13149.59    3904.517   9004.619  15187.38

Variance-Covariance Matrix
              S&P        B/L        B/M        B/H        S/L        S/M        S/H
S&P          32.4424
B/L          30.4441    30.3159
B/M          32.3927    29.1441    35.1124
B/H          38.6742    33.6901    41.3126    55.8484
S/L          38.3310    37.5513    38.7050    47.9873    63.1841
S/M          36.1469    33.6757    38.0098    48.4094    54.4517    52.2764
S/H          41.0795    33.6757    44.1789    58.3386    60.3318    59.1884    72.2200

Unconditional Correlation Matrix
              S&P        B/L        B/M        B/H        S/L        S/M        S/H
S&P           1.0
B/L           0.9708     1.0
B/M           0.9598     0.8933     1.0
B/H           0.9086     0.8188     0.9329     1.0
S/L           0.8466     0.8580     0.8217     0.8078     1.0
S/M           0.8777     0.8459     0.8872     0.8959     0.9474     1.0
S/H           0.8487     0.7883     0.8773     0.9186     0.8931     0.9633     1.0

Monthly returns for the S&P 500 were retrieved from the CRSP database. These returns are inclusive of dividends. Returns on the Fama and French portfolios are from the web site of Kenneth French at http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/. Jarque and Bera [1980] provide an LM test for normality. In this case, I test the normality of returns. The statistic is distributed as Chi-Squared with 2 degrees of freedom. The test rejects the normality assumption in all 7 cases.
Interestingly, the S&P 500 has one of the lowest mean returns among the seven
portfolios. In compensation, it does exhibit modest volatility. The index correlates highest
with the F&F portfolio coded B/L and lowest with S/L. The portfolio B/L seems to be a
rather conservative portfolio, exhibiting the lowest return, volatility and kurtosis, and negative
skewness. S/H seems to be the most adventurous portfolio. The Jarque-Bera [1980] test for
normality reports strong rejection of the null in all seven cases. A potential implication is
that standard mean-variance analysis (Markowitz [1952]) is flawed.

Next, I compare conditional Sharpe ratios for the above portfolios. The Sharpe ratios
reported in figure 1 do not show an overwhelming performance advantage for or against any
portfolio, since all the graphs seem neck-and-neck, with the possible exception of F&F S/L,
where the S&P seems to do better. However, this is far from conclusive, since this pair
also exhibits the lowest correlation, which suggests a good avenue for risk-curbing
diversification. Further, as highlighted earlier, the Sharpe ratio can be highly
uninformative and even erroneous. For instance, two risk-free assets with different returns
will be ranked equally, with a ratio of positive infinity, because both have zero return
variance.
9 See http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
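The Jarque-Bera statistic reported in table 1 combines sample skewness and excess kurtosis. A minimal sketch of its computation (the demo series is made up; the statistic is chi-squared with 2 degrees of freedom under normality):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera LM statistic: JB = T/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K sample kurtosis."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    d = x - x.mean()
    m2 = (d**2).mean()
    skew = (d**3).mean() / m2**1.5
    kurt = (d**4).mean() / m2**2
    return T / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)

# A heavy-tailed made-up series yields a large statistic relative to
# the 5% chi-squared(2) critical value of about 5.99.
rng = np.random.default_rng(0)
print(jarque_bera(rng.standard_t(df=3, size=5000)))
```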
Figure 1: Conditional Sharpe Ratios, S&P 500 versus F&F benchmark portfolios.
These ratios start from the 201st observation (March 1943) and run up to the 918th (Dec 2002). The first 200 observations were used as previous (conditioning) information to calculate the ratio reported at date 201, and so on. The solid lines are moving (conditional) Sharpe ratios for the S&P 500, while the broken lines are moving Sharpe ratios for the F&F portfolio reported in each title.
Tests for intersection were carried out for varying degrees of relative risk aversion
using standard mean-variance utility maximization as outlined in the previous section.
The question is whether or not a given agent with specific degree of risk aversion γ
(hence η ) will be benefit from diversifying beyond the S&P benchmark and add of the
F&F portfolios. Table 2 shows t-statistics for ( )βηα −= 1:0H in using the
reparameterized regression ( ) ttt urr +−+=− ηβκη ,1,2 .
Table 2: t-statistics for different degrees of relative risk aversion.

γ       B/L      B/M      B/H       S/L       S/M       S/H
0.00    0.00     0.00     0.0001    0.00      0.0002    0.0003
0.25    0.0003   0.00    -0.0007   -0.0008   -0.0004   -0.0009
0.50    0.0005   0.00    -0.0016   -0.0017   -0.0009   -0.0021
0.75    0.0008   0.00    -0.0025   -0.0026   -0.0015   -0.0034
1.00    0.0011   0.00    -0.0033   -0.0034   -0.0021   -0.0046
1.25    0.0014   0.00    -0.0042   -0.0043   -0.0026   -0.0058
1.50    0.0017   0.0001  -0.0051   -0.0052   -0.0032   -0.0070
1.75    0.0020   0.0001  -0.0059   -0.0060   -0.0037   -0.0082
2.00    0.0023   0.0001  -0.0068   -0.0069   -0.0043   -0.0094
2.50    0.0029   0.0001  -0.0085   -0.0086   -0.0054   -0.0118
3.00    0.0035   0.0001  -0.0103   -0.0103   -0.0065   -0.0142
4.00    0.0046   0.0001  -0.0137   -0.0138   -0.0087   -0.0191
5.00    0.0058   0.0002  -0.0172   -0.0173   -0.0110   -0.0239
10.00   0.0116   0.0003  -0.0345   -0.0346   -0.0221   -0.0481
20.00   0.0232   0.0006  -0.0691   -0.0693   -0.0444   -0.0964
As explained in Section III, in all the above cases the variable r1,t refers to S&P 500 returns, while r2,t
refers to the returns of the F&F portfolio noted at the top of each column. η is derived from γ and the
appropriate variance-covariance matrix using equation (8) above, that is,
η = (ι_n′ Σ_nn⁻¹ μ_n − γ) / (ι_n′ Σ_nn⁻¹ ι_n).
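For concreteness, η can be computed directly from the sample moments. The sketch below assumes equation (8) takes the standard zero-beta form η = (ι′Σ⁻¹μ − γ)/(ι′Σ⁻¹ι); the function and variable names are illustrative.

```python
import numpy as np

def zero_beta_rate(mu, sigma, gamma):
    """Zero-beta rate implied by relative risk aversion gamma, assuming
    eq. (8) is eta = (iota' Sigma^-1 mu - gamma) / (iota' Sigma^-1 iota)."""
    mu = np.asarray(mu, dtype=float)
    iota = np.ones(len(mu))
    s_inv = np.linalg.inv(np.asarray(sigma, dtype=float))
    return (iota @ s_inv @ mu - gamma) / (iota @ s_inv @ iota)

# Single benchmark asset: under risk neutrality (gamma = 0) eta equals the
# mean return, and eta falls as risk aversion rises.
mu = np.array([0.01])
sigma = np.array([[0.0016]])
print(zero_beta_rate(mu, sigma, 0.0))   # ≈ 0.01
print(zero_beta_rate(mu, sigma, 2.0))   # ≈ 0.0068
```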
Table 2 above suggests a clear case of non-rejection of the hypothesis of
intersection for all degrees of risk aversion, even as high as 20. For the range of relative
risk aversion considered, agents do not benefit from diversification: they are not
acting inefficiently by failing to add any F&F portfolio to the benchmark S&P 500. As
one considers agents with higher risk aversion, the value of the statistic
rises slightly within reasonable ranges of γ, yet the hypothesis is still nowhere close to
the rejection zone. The first row of Table 2 refers to the risk-neutral agent, who is
interested only in expected return. One way to interpret that row is that, from the
perspective of a risk-neutral agent, the mean returns of all seven assets are not statistically
different, hence no diversification is suggested. Comparing Table 2 to Table 1
suggests why diversification benefits are not apparent under the mean-variance
approach: most of the F&F portfolios have slightly higher returns than the S&P 500,
but they also have higher volatility as well as strong correlation with it.
The test for spanning asks whether no agent, irrespective of the degree of risk
aversion, would benefit from diversification. From the intersection tests above, it is
apparent that spanning (that no agent benefits) will most probably not be rejected
either, given the rather low statistics reported. Table 3 below shows Wald
statistics for the joint hypothesis β = 1 and α = 0 in each of the six cases. The test is
conducted here using the reparameterized regression r2,t − r1,t = α + λ r1,t + ut,
with H0: λ = β − 1 = 0 and α = 0.
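Under the simplifying assumption of homoskedastic errors, this Wald statistic reduces to a standard quadratic form in the two OLS coefficients. The sketch below and its simulated data are illustrative and require only NumPy.

```python
import numpy as np

def spanning_wald(r1, r2):
    """Wald statistic for H0: alpha = lambda = 0 in the regression
    r2 - r1 = alpha + lambda * r1 + u (chi-squared, 2 d.o.f. under H0)."""
    y = r2 - r1
    X = np.column_stack([np.ones_like(r1), r1])
    delta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ delta
    T, k = X.shape
    s2 = resid @ resid / (T - k)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)     # OLS covariance of (alpha, lambda)
    return float(delta @ np.linalg.inv(cov) @ delta)

# Simulated example consistent with spanning (alpha = 0, beta = 1), so the
# statistic is a chi-squared(2) draw under the null.
rng = np.random.default_rng(2)
r1 = rng.normal(0.005, 0.04, 600)
r2 = r1 + rng.normal(0, 0.02, 600)
print(spanning_wald(r1, r2))
```

Values below the 5% critical value of 5.99 fail to reject spanning, which is the pattern reported in Table 3.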
Table 3: Wald statistics, irrespective of the degree of relative risk aversion.

        B/L      B/M      B/H      S/L      S/M      S/H
Wald    0.0000   0.0000   0.0000   0.0002   0.0005   0.0012
Again, the variable r1,t refers to S&P 500 returns while r2,t refers to the F&F portfolios. With δ = (α, λ)′ and h(δ) the vector of restrictions, the statistic is the standard Wald form
W = h(δ̂)′ [ (∂h(δ̂)/∂δ′) V̂(δ̂) (∂h(δ̂)/∂δ′)′ ]⁻¹ h(δ̂),
distributed χ² with degrees of freedom equal to the number of restrictions under H0.
The Wald statistic with 2 degrees of freedom has a 5% critical value of 5.99. According
to Table 3, spanning cannot be rejected in any of the six cases. In words, agents,
irrespective of their degree of risk aversion, will not benefit from diversification beyond the
S&P benchmark if the other available assets are the Fama and French portfolios atop each
column.
As is obvious from the above tests, mainstream mean-variance analysis depends
strongly on distributional assumptions, which are inaccurate here, as is evident from the
descriptive statistics and the Jarque-Bera test statistics in Table 1. HL-efficiency thus
offers an opportunity to test the efficiency of portfolios relying only on widely accepted
properties of utility functions, namely nonsatiation and risk aversion; a test with fewer
return-distribution assumptions.
Table 4: Dual formulation statistic ψ* comparing the value-weighted S&P 500 to each F&F portfolio.

       B/L     B/M      B/H        S/L     S/M        S/H
ψ*     0.00    6.4900   209.2800   0.00    274.1000   448.9500

ψ* = 0 implies that a stand-alone S&P 500 portfolio is efficient and there is no need to diversify if the only other asset available is the F&F portfolio in that column. For instance, in the B/M column ψ* > 0, so if asset B/M is available, a portfolio made solely of the S&P 500 is sub-optimal.

In contrast to the standard mean-variance tests, the dual test statistic above rejects
efficiency of a portfolio made solely of the value-weighted S&P 500 when alternative
assets B/M, B/H, S/M and S/H are available for diversification purposes. It is
important to caution again that the values reported in Table 4 only compare the S&P to each
of the six Fama and French portfolios one at a time. The dual statistic ψ* by design does not
offer an opportunity to rank these six Fama and French portfolios based only on the
information contained in the table above.
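Post's [2001] statistic comes from a linear program; the precise dual used in this paper is given in Section III, but the idea can be sketched in a stylized form, assuming SciPy's linprog is available. We search for nonnegative, nonincreasing marginal-utility weights θt (ordered by benchmark return and normalized to sum to T) under which the candidate asset does not raise expected utility at the margin; the minimized worst-case marginal gain plays the role of ψ*, with ψ* = 0 indicating efficiency. All names and the normalization are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def dual_psi(bench, asset):
    """Stylized LP efficiency check in the spirit of Post [2001].

    Variables: theta_1..theta_T (marginal utilities) and psi. Minimize psi
    subject to (1/T) sum_t theta_t * (asset_t - bench_t) <= psi,
    theta nonincreasing in the benchmark return (risk aversion),
    sum_t theta_t = T and theta_t >= 0 (normalization, nonsatiation).
    psi* > 0 means no admissible utility rationalizes holding only the
    benchmark, i.e. the benchmark-only portfolio is inefficient."""
    d = np.asarray(asset, float) - np.asarray(bench, float)
    T = len(d)
    order = np.argsort(bench)              # ascending benchmark return
    c = np.zeros(T + 1); c[-1] = 1.0       # objective: minimize psi

    # Marginal-benefit constraint: (1/T) d'theta - psi <= 0
    A_ub = [np.append(d / T, -1.0)]
    b_ub = [0.0]
    # Marginal utility nonincreasing in wealth: theta at a higher benchmark
    # return must not exceed theta at the next-lower one.
    for lo, hi in zip(order[:-1], order[1:]):
        row = np.zeros(T + 1); row[hi] = 1.0; row[lo] = -1.0
        A_ub.append(row); b_ub.append(0.0)

    A_eq = [np.append(np.ones(T), 0.0)]    # sum of theta = T
    bounds = [(0, None)] * T + [(None, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[float(T)], bounds=bounds)
    return max(0.0, res.fun)

# A dominated alternative (always 0.01 below the benchmark) yields psi* = 0,
# while a dominating one forces psi* above zero:
bench = np.array([0.02, -0.01, 0.03, 0.0, 0.01])
print(dual_psi(bench, bench - 0.01))  # 0.0
print(dual_psi(bench, bench + 0.01))  # ≈ 0.01
```

The appeal of the LP formulation, as the conclusion notes, is that it replaces a quadratic optimization with a problem standard solvers handle at scale.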
V Conclusion
Sharpe [1971] remarked that “if the essence of the portfolio analysis problem
can be adequately captured in a form suitable for linear programming methods, the
prospect for practical application would be greatly enhanced”. In the current paper, I have
demonstrated one of the growing number of applications of linear programming
that effectively answer topical questions in finance which would otherwise have required
solving a large quadratic program.
In the standard mean-variance utility framework of Markovitz [1952], spanning
and intersection analysis examines the effect that the introduction of additional assets
has on the mean-variance frontier. If the mean-variance frontier of the benchmark assets
and the frontier of the benchmark plus the new assets have exactly one point in common,
they are said to intersect; this is what is termed “intersection”. It means that an agent
with the corresponding mean-variance utility function is already optimizing by holding the
benchmark: for that agent, there is no benefit in standard mean-variance utility from
adding the new assets. If the mean-variance frontier of the benchmark assets plus the new
assets coincides with the frontier of the benchmark assets alone, there is “spanning”.
In that case no standard mean-variance utility-optimizing investor can benefit from
adding the new assets to her initial portfolio of benchmark assets. The foregoing
definition is accurate only if the return distribution is elliptical. Hanoch and
Levy [1970] provide an alternative definition of mean-variance efficiency that is
consistent with Arrow’s theorem even in a world of non-elliptical returns.
In this paper, I implemented tests of spanning and intersection using a portfolio
made solely of the value-weighted S&P 500 as the benchmark, under both test
approaches: the mainstream mean-variance utility approach using quadratic optimization,
and a second test designed to satisfy the Hanoch-Levy definition of portfolio efficiency
and to make use of linear programming tools. The objective is to test for spanning and
intersection with respect to six Fama and French portfolios, considered here as assets
available for possible diversification.
While the standard mean-variance tests are unable to reject the hypothesis of
efficiency of the benchmark, the Hanoch-Levy approach rejects it in four of the six cases.
There are therefore utility benefits, for at least some nonsatiated risk-averse investors,
from extending asset holdings beyond the S&P benchmark. The difference in results is
accounted for by the distributional assumptions inherent in mainstream mean-variance
optimization theory.
References

Arrow, K., (1971), Essays in the Theory of Risk-Bearing, North-Holland Publishing Company, Amsterdam.
Balduzzi, P., and Kallal, H., (1997), “Risk Premia and Variance Bounds”, Journal of Finance, 52, pp 1913-1949.
Balduzzi, P., and Robotti, C., (2000), “Minimum-Variance Kernels and Economic Risk Premia”, Working Paper, Boston College.
Bansal, R., and Lehmann, B.N., (1997), “Growth-Optimal Portfolio Restrictions on Asset Pricing Models”, Macroeconomic Dynamics, 1, pp 333-354.
Bekaert, G., and Liu, J.L., (1999), “Conditioning Information and Variance Bounds on Pricing Kernels”, Working Paper, Stanford University.
Bekaert, G., and Urias, M.S., (1996), “Diversification, Integration, and Emerging Market Closed-End Funds”, Journal of Finance, 51, pp 835-870.
Bigelow, J.P., (1993), “Consistency of the mean-variance analysis and expected utility analysis: A complete characterization”, Economics Letters, 43, pp 187-192.
Campbell, J.Y., and Viceira, L.M., (1999), “Consumption and Portfolio Decisions when Expected Returns are Time-Varying”, Quarterly Journal of Economics, 114, pp 433-495.
Chen, Z., and Knez, P.J., (1996), “Portfolio Performance Measurement: Theory and Applications”, Review of Financial Studies, 9, pp 511-556.
De Roon, F.A., Nijman, T.E., and Werker, B.J., (2001), “Testing for mean-variance spanning with short sales constraints and transaction costs: The case of emerging markets”, Journal of Finance, 56(2), pp 721-741.
Debreu, G., (1951), “The Coefficient of Resource Utilization”, Econometrica, 19, pp 273-292.
DeSantis, G., (1994), “Asset Pricing and Portfolio Diversification: Evidence from Emerging Financial Markets”, in Howell, M. (ed.): Investing in Emerging Markets, Euromoney Books, London.
DeSantis, G., (1995), “Volatility Bounds for Stochastic Discount Factors: Tests and Implications from International Financial Markets”, Working Paper, University of Southern California.
Errunza, V., Hogan, K., and Hung, M.W., (1999), “Have the Gains from International Diversification Disappeared?”, Journal of Finance, 54, pp 2075-2107.
Farrell, M.J., (1957), “The Measurement of Productive Efficiency”, Journal of the Royal Statistical Society, Series A, 120, pp 253-281.
Ferson, W.E., (1995), “Theory and Empirical Testing of Asset Pricing Models”, in Jarrow, R.A., Maksimovic, V., and Ziemba, W.T. (eds.), Handbooks in Operations Research and Management Science 9: Finance, Elsevier, North Holland.
Ferson, W.E., Foerster, S.R., and Keim, D.B., (1993), “General Tests of Latent Variable Models and Mean-Variance Spanning”, Journal of Finance, 48, pp 131-156.
Ferson, W.E., and Schadt, R.W., (1996), “Measuring Fund Strategy and Performance in Changing Economic Conditions”, Journal of Finance, 51, pp 425-462.
Hanoch, G., and Levy, H., (1970), “Relative Effectiveness of Efficiency Criteria for Portfolio Selection”, Journal of Financial and Quantitative Analysis, 5(1), pp 63-76.
Hansen, L.P., and Jagannathan, R., (1991), “Implications of Security Market Data for Models of Dynamic Economies”, Journal of Political Economy, 99, pp 225-262.
Harvey, C.R., (1989), “Time-Varying Conditional Covariances and Tests of Asset Pricing Models”, Journal of Financial Economics, 24, pp 289-317.
Ingersoll, J.E., (1987), Theory of Financial Decision Making, Rowman and Littlefield, Maryland.
Kandel, S., and Stambaugh, R.F., (1987), “On Correlations and Inferences about Mean-Variance Efficiency”, Journal of Financial Economics, 18, pp 61-90.
Markovitz, H.M., (1952), “Portfolio Selection”, Journal of Finance, 7(1), pp 77-91.
Markovitz, H.M., (1959), Portfolio Selection: Efficient Diversification of Investments, John Wiley, New York.
Meyer, J., (1979), “Mean-Variance Efficient Sets and Expected Utility”, Journal of Finance, 34(5), pp 1221-1229.
Post, T., (2001), “LP Tests for MV Efficiency”, ERIM Report Series Research in Management.
Shanken, J., (1990), “Intertemporal Asset Pricing: An Empirical Investigation”, Journal of Econometrics, 45, pp 99-120.
Sharpe, W.F., (1971), “A Linear Programming Approximation for the General Portfolio Analysis Problem”, Journal of Financial and Quantitative Analysis, 6(5), pp 1263-1275.
Tobin, J., (1958), “Liquidity Preference as Behaviour Towards Risk”, Review of Economic Studies, 25, pp 65-86.
Yamazaki, H., and Konno, H., (1991), “Mean Absolute Deviation Portfolio Optimization Model and Its Application to Tokyo Stock Market”, Management Science, 37, pp 519-531.
Young, M.R., (1998), “A Minimax Portfolio Selection Rule with Linear Programming Solution”, Management Science, 44(5), pp 673-683.