ISSN 1936-5330

The Taylor Rule and the Transformation of Monetary Policy*
Pier Francesco Asso (University of Palermo) George A. Kahn (Federal Reserve Bank of Kansas City)
Robert Leeson (Hoover Institution)
December 2007
RWP 07-11
Abstract: This paper examines the intellectual history of the Taylor Rule and its considerable influence on macroeconomic research and monetary policy. The paper traces the historical antecedents to the Taylor rule, emphasizing the contributions of three prominent advocates of rules--Henry Simons, A.W.H. Phillips, and Milton Friedman. The paper then examines the evolution of John Taylor's thinking as an academic and policy advisor leading up to his formulation of the Taylor rule. Finally, the paper documents the influence of the Taylor rule on macroeconomic research and the Federal Reserve's conduct of monetary policy.

Key words: Taylor rule, monetary policy, rules versus discretion

JEL classification: B22, B31, E52

∗This paper is a revised and shortened version of a paper prepared for presentation at the Federal Reserve Bank of Dallas’ conference on “John Taylor’s Contribution to Monetary Theory and Policy.” It has benefited from conference participants’ comments and from comments of seminar participants at the Federal Reserve Board of Governors and at the Federal Reserve Banks of Boston and New York. The views expressed herein are those of the authors and not necessarily those of the Federal Reserve Bank of Kansas City or the Federal Reserve System.

1. Introduction
At the November 1992 Carnegie Rochester Conference on Public Policy, John Taylor
(1993a) suggested that the Federal funds rate (r) should normatively (with qualifications) be set
and could positively (at least in the previous 5 years) be explained by a simple equation:
r = p + 1/2y + 1/2(p-2) + 2, where y = percent deviation of real GDP from trend and
p = rate of inflation over the previous four quarters. With inflation on target (p=2) and real GDP
on trend (≅2, and therefore y=0), the real ex post interest rate (r-p) possesses the same symmetry
(=2).
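
The arithmetic can be made concrete with a short illustrative computation (ours, not Taylor's; the function name and the two scenarios are hypothetical, while the coefficients are those of the equation above):

```python
def taylor_rule(p: float, y: float) -> float:
    """Taylor (1993a): r = p + 1/2*y + 1/2*(p - 2) + 2, with p the four-quarter
    inflation rate (percent) and y the percent deviation of real GDP from trend."""
    return p + 0.5 * y + 0.5 * (p - 2.0) + 2.0

# With inflation on target (p = 2) and output on trend (y = 0), the prescribed
# nominal funds rate is 4 percent, i.e., a real ex post rate (r - p) of 2 percent.
print(taylor_rule(2.0, 0.0))   # -> 4.0

# Inflation one point above target and output one percent below trend:
print(taylor_rule(3.0, -1.0))  # -> 5.0
```
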
Fed policymakers and Fed watchers quickly took notice. Salomon Brothers advised their
clients that “a hypothetical policy rule, modeled on the policy behavior that produced the latest
decline in inflation, also indicated that the Federal funds rate is now too low” (Lipsky, 1993, 9, 6,
n6). This advice was followed up with some more detail: the parameters of the Taylor Rule
“capture the stated intentions of virtually all Fed officials”. The Taylor Rule was used to predict
future interest rate changes for the remainder of that year: “the Taylor Rule is likely to prescribe
some relaxation of policy, baring a sharp run-up in current inflation” (DiClemente and Burnham
1995, 6). The Taylor Rule also figured in the Financial Times (Prowse, July 3, 1995) and
Business Week (Foust, October 9, 1995).
Glenn Rudebusch attended the Carnegie Rochester conference and began to apply the
Taylor Rule to monetary policy analysis as a member of the staff of the Board of Governors. In
Spring 1993, Donald Kohn (then staff director for monetary affairs at the Fed and secretary to
the Federal Open Market Committee (FOMC)) discussed the Taylor Rule with its author during a
stint as visiting professor at Stanford. This interest rapidly reached the FOMC: Governor Janet
Yellen indicated that she used the Taylor Rule to provide her “a rough sense of whether or not
the funds rate is at a reasonable level” (FOMC transcripts, January 31-February 1, 1995). In
August 1995, Kohn requested from Taylor an update on the rule. Taylor visited with Fed staff
economists for three days in September 1995 and by November 1995, Board staff began
providing the FOMC with a chart summarizing various versions of the Taylor Rule. On
December 5, 1995, Taylor discussed the rule with Chairman Greenspan and other members of
the Board of Governors.1

1 This paragraph is based in part on conversations with Rudebusch and Taylor.
In addition to prescribing a method of reducing the swings of the business cycle, the
Taylor Rule also apparently described the stabilization method unwittingly used by the Fed.
Deviations from the rule could also shed light on Fed discretion, excessive or otherwise (Taylor
2007a). However, Chairman Greenspan (1997) regarded a substantial degree of discretion as
desirable so as to respond to shocks that were “outside our previous experience … policy rules
might not always be preferable”. The Taylor Rule, Greenspan argued, assumed that the future
would be like the past: “Unfortunately, however, history is not an infallible guide to the future”.
Just prior to the Carnegie Rochester conference, Ben Bernanke and Rick Mishkin (1992) argued
that “Monetary policy rules do not allow the monetary authorities to respond to unforeseen
circumstances”: an argument from which Taylor (1992a) dissented.
Athanasios Orphanides (2003) examined ‘Historical Monetary Policy Analysis and the
Taylor Rule’; this paper examines the intellectual history of the concept. It is part of a wider
examination of ‘Monetary Policy Rules from Adam Smith to John Taylor’ (Asso, Kahn and
Leeson 2007). Section 2 below examines the power of the Taylor Rule. Sections 3-4 examine
three influential advocates of rules: Henry C. Simons (section 3), and A.W.H. Phillips and
Milton Friedman (section 4). Sections 5-6 examine the evolution of Taylor’s thinking between
1976 and 1991, including his two spells at the Council of Economic Advisers (CEA) (section 5), and in the
months immediately preceding the Carnegie Rochester conference (section 6). Sections 7-8
examine the influence of the Taylor Rule on macroeconomic research (section 7) and the FOMC
(section 8). Concluding remarks are provided in section 9.
2. The Power of the Taylor Rule
Taylor-type Rules have become the standard by which monetary policy is introduced in
macroeconomic models both small and large. They have been used to explain how policy has
been set in the past and how policy should be set in the future. Indeed, they serve as benchmarks
for policymakers in assessing the current stance of monetary policy and in determining a future
policy path.
2.1 Timing
The Taylor Rule was the culmination of a revival of interest in interest rate rules.
Initially, new rational expectations methods led to real business cycle models without a role for
monetary policy. Academic interests appeared to become decoupled from the needs of policy
makers. Taylor (2007c) recalled that this was a “tough time”: the “dark ages” for monetary
policy rules research. A small group of monetary economists saw themselves as “toiling in the
vineyards” (McCallum 1999). A revival (at least in policy circles) began in the later 1980s.
In 1985, the Brookings Institution and the Center for Economic Policy Research (later in
association with the International Monetary Fund) launched a research project to investigate
international macroeconomic interactions and policy. At the December 1988 Macroeconomic
Policies in an Interdependent World conference, several papers investigated policy rules. At this
conference, Taylor (1989b, 125, 138) took the short-run interest rate as the primary operating
instrument of monetary policy: “placing some weight on real output in the interest rate reaction
function is likely to be better than a pure price rule”.
But this rules-based literature appeared not to be leading to a consensus. In March 1990,
Taylor (1993b, 426-9) noted that “significant progress” had been made, but “the results vary
from model to model. No particular policy rule with particular parameters emerges as optimal for
any single country, let alone all countries. Because of the differences among the models and the
methodology, I would have been surprised if a clear winner had presented itself”. However,
policy rules which focused “on the sum of real output and inflation” outperform other types: “a
consensus is emerging about a functional form”. Yet there was “no consensus” about the size of
coefficients. Shortly afterwards, Taylor cut this Gordian knot with his simple but persuasive
equation: a compromise between academic complexity and policy-influencing simplicity.
At the same time, various institutional and procedural transformations were creating a
new policy making environment and culture. In 1991, when Mervyn King (2000, 2) joined the
Bank of England and asked former Fed chairman Paul Volcker for a word of advice, Volcker obliged
with a word, “mystique”. Volcker (1990, 6) described the central bankers of the Bretton Woods
system as “high priests, or perhaps stateless princes”. Some Fed watchers sought to divine the
Secrets of the Temple (Greider 1987) by deconstructing its Delphic utterances. The process by
which monetary policy decisions were made resembled the elusive mysteries of papal
successions.
Yet important changes were taking place. In February 1987, the Fed announced that it
would no longer set M1 targets, and in July 1993 Chairman Greenspan testified before Congress
that the Fed would “downgrade” the use of M2 “as a reliable indicator of financial conditions in
the economy”. Having returned to an explicit federal funds rate target in the 1980s, the Fed kept
the rate constant at 3% from late 1992 to January 1994. When the Fed tightened policy in
February 1994, the tightening was accompanied by a new policy procedure: it was announced
rather than left for financial markets to infer. Later, in 1995, the FOMC began announcing how
changes in policy would be reflected in the Fed’s target for the federal funds rate. In May 1999,
the FOMC began to publicly announce its policy decision regardless of whether its policy rate
had been adjusted. Further transparency was injected into the system with “direction of bias”
announcements (May 1999), replaced by a “balance of risks” announcement (February 2000).
Transcripts of FOMC meetings are now released (with a five year delay), and since January
2005, FOMC minutes are released expeditiously (three weeks after the announcement of the
FOMC’s policy decision at each regularly scheduled meeting).2

2 Transcripts were made available beginning in 1994 for meetings held five or more years earlier. Transcripts for meetings prior to 1994 were produced from original raw transcripts in the FOMC Secretariat’s files. Shortly after each meeting beginning in 1994, audio recordings were transcribed and, where necessary to facilitate the reader’s understanding, lightly edited by the FOMC Secretariat, and meeting participants were then given an opportunity within the next several weeks to review the transcript for accuracy.
Similar changes were happening elsewhere. On September 16, 1992, British interest
rates and foreign exchange reserves were used in a futile effort to retain membership of the
European Exchange Rate Mechanism, with adverse consequences for housing foreclosures and
Conservative Party re-election chances. King (2000, 2) believes that this episode facilitated a
central banking revolution in the UK: “there are moments when new ideas come into their own.
This was one of them … We decided to adopt and formalize a … commitment to an explicit
numerical inflation target”. Thus, a failed monetary policy was followed by transparency (1993,
Inflation Report by the Bank of England) and Bank independence (May 1997). The Taylor Rule
has become a key conceptual framework in a central banking environment committed to time-
consistency (credibility-commitment), transparency and (varying degrees of) independence.
2.2 y-hawks and p-hawks
It is tempting to suggest that Taylor (1946-) chose his timing well in other respects too!
The two competing intellectual leaders of the rules versus discretion debate died in 1946: Simons
(the leader of the Chicago “rules party”, who advocated a price level rule, emphasizing p) and
John Maynard Keynes (whose name is associated with “sticky” wages and prices, and whose
followers emphasized y). 1946 was also an important year for two economists who were to exert
seminal influences over Taylor’s intellectual development: Phillips enrolled at the London
School of Economics and Friedman returned to Chicago (shortly afterwards he rediscovered the
quantity theory as a tool for challenging his Keynesian opponents and developed the k% money
growth rule as an alternative to Simons’s price level rule). The Taylor Rule (with r, not M, on the
left hand side) replaced the Friedman Rule with a lag.
From the Lucas Critique to the Taylor Rule (1976-1992), Taylor had a foot in academia
and an almost equally sized foot in the policy apparatus (CEA 1976-7, Research Adviser at the
Philadelphia Fed 1981-4, CEA 1989-1991). By placing almost equal career coefficients on
government service and academia, Taylor acquired an invaluable understanding of policy
constraints and communication issues. The 1946 Employment Act created the CEA and initiated
the Economic Report of the President. The Act did not specify priorities about p and y: this dual
mandate sought to “promote maximum employment, production, and purchasing power”. But by
the 1960s many economists saw an irreconcilable conflict between promoting “maximum
employment, production” and promoting stable prices (maximum purchasing power).
Keynesians tended to favor a Phillips curve discretionary trade-off as an expression of the
emphasis attached to y.
The Taylor Rule synthesized (and provided a compromise between) competing schools of
thought in a language devoid of rhetorical passion. The Great Depression created a constituency
which tended to emphasize the importance of minimizing y (and hence tended to increase the
weight attached to y). Inflation was accommodated, as a necessary cost of keeping debt servicing
low (pre-1951), tolerated, or ‘controlled away’. The Great Inflation and the costs associated with
the Great Disinflation created a constituency that sought to minimize p (and hence tended to
increase the weight attached to p).
Keynes intentionally divided economists into (obsolete) “classics” and (modern)
Keynesians; Friedman divided the profession into (destabilizing) fiscalists and (stabilizing)
monetarists. Taylor (1989a) heretically suggested that different schools of thought should be
open to alternative perspectives and his Evaluating Policy Regimes commentary suggested that
“some of the differences among models do not represent strong ideological differences” (1993b,
428). The Taylor Rule equation with its equal weights has the advantage of offering a
compromise solution between y-hawks and p-hawks.
The rules versus discretion debate has often been broadcast at high decibels. Part of the
Keynesian-Monetarist econometric debate was described as the battle of the radio stations: FM
(Friedman and Meiselman) versus AM (Ando and Modigliani). Around the time of Taylor’s
first publication (1968) the macroeconomic conversation came to be dominated by what some
regarded as the NPR ‘radio of the right’ (“Natural” rate of unemployment, “Perfectly” flexible
prices and wages, or “Perfect” competition, “Rational” expectations).
Robert Solow (1978, 203) detected in the rational expectations revolutionaries “a
polemical vocabulary reminiscent of Spiro Agnew”; but the revolutionaries doubted that
“softening our rhetoric will help matters” (Lucas and Sargent 1978, 82, 60). In a review of Tom
Sargent’s Macroeconomic Theory, Taylor (1981a) commented on Sargent’s “frequently rousing
style” of adversely contrasting new classical macro with the “Keynesian–activist” view. Taylor’s
theoretical framework (in which his rule is embedded) embraced R (and in the background,
though it is not required) N, replaced P with contracts, and provided a policy framework minus
the inflammatory rhetoric.
3. Simons and the modern rules versus discretion debate
In February 1936, Simons (1936a) effectively created what Richard Selden (1961)
described as the “Rules Party” with his ‘Rules Versus Authorities in Monetary Policy’. In the
same month, Keynes (1936, 164, 378, 220-1) explained that he had become “somewhat skeptical
of the success of a merely monetary policy directed towards influencing the rate of interest. I
expect to see the State, which is in a position to calculate the marginal efficiency of capital-
goods on long views and on the basis of the general social advantage, taking an even greater
responsibility for directly organizing investment … I conceive, therefore, that a somewhat
comprehensive socialisation of investment will prove the only means of securing an
approximation to full employment”.
Alvin Hansen (the “American Keynes”) favored a “dynamic approach” – which stood in
contrast to the passive acceptance of “the play of ‘natural’ forces … many economists are
coming to think that action along these traditional lines would by itself be wholly inadequate. It
is increasingly understood that the essential foundation upon which the international security of
the future must be built is an economic order so managed and controlled that it will be capable of
sustaining full employment” (Hansen and Kindleberger 1942, 467).
In response, Simons (1939, 275) complained that Hansen’s proposals would generate “a
continuing contest between the monetary authority seeking to raise employment and trade-unions
seeking to raise wage rates”. Simons also bemoaned that “the gods are surely on his side. What
he proposes is exactly what many of us, in our most realistic and despairing moods, foresee
ahead as the outcome of recent trends”. If Hansen succeeded in establishing a monetary system
“dictated by the ad hoc recommendations of economists like himself … the outlook is dark
indeed”.
For Simons (1948 [1944]; 1943, 443-4), tariffs were part of the government sponsored
“racketeering” which his “rules” were designed to thwart. Simons sought to defend “Traditional
Liberal Principles”; his “faith and hope” for the post war world rested on the construction of a
“free-trade front”. Simons (1936b) believed that the General Theory could easily become “the
economic bible of a fascist movement”. Keynes had now embarked on a mission which Simons
found repellent: an authentic genius “becoming the academic idol of our worst cranks and
charlatans”. According to Simons (1948 [1945], 308) the New Deal had delegated arbitrary power
to a series of agencies. This “high-road to dictatorship” was “terrifying” for “an old-fashioned
liberal”. Elevating the “government of men” over the “government of rules” was tantamount to
“accepting or inviting fascism”.
At the Chicago Harris Foundation lectures and seminars on “Unemployment as a World
Problem”, Keynes (1931, 94) advocated discretionary macroeconomic management to “keep the
price index and the employment index steady”. Hansen (1931, 94) asked whether it was not the
case that “in our present state of knowledge we have no guide at all dependable, and
consequently the system you propose is a purely Utopia one?” Keynes (1931, 94) responded that
“statistics are becoming more adequate … I think we economists have given the practical
business men very little real help in the past. If they were aided by more complete statistical data,
then I think we should find central banks doing their best duty”. When asked by Hansen (1931,
94) about the reliability of the “judgment” of the central bankers, Keynes (1931, 94) replied: “I
think we already know enough to give them general suggestions … Painful experience works
wonders. It is really the economists who are primarily at fault. We have never given any sort of
scientific conclusions, such as you would expect. So long as the supposedly experts fail to agree
among themselves, it seems to me reasonable for the practical business men to pay only
moderate attention to them”.
According to his disciples, Keynes “trusted to human intelligence. He hated enslavement
by rules. He wanted governments to have discretion and he wanted economists to come to their
assistance in the exercise of that discretion” (Cairncross 1978, 47-8).
4. Phillips, Friedman and Taylor
Friedman’s k% money growth rule (and its breakdown) exerted a profound influence on
monetary economics; his various influences on Taylor (his Hoover colleague) are apparent. Less
widely known, perhaps, is Phillips’ influence on Taylor. Taylor’s (1968) first publication
(“Fiscal and Monetary Stabilization Policies in a Model of Endogenous Cyclical Growth”)
combined two strands of Phillips’ (1954, chapter 16) theoretical evaluation of policy rules and
models of cyclical growth (1961, chapter 20).3 Taylor’s (1968, 1) objective was to “describe the
product and money markets as developed by Phillips, and derive the government policies which
will regulate the model”.

3 All references to A. W. H. Phillips can be found in Leeson (2000).
The money market had the interest rate as a function of the price level (P), actual income
(YA) and the money supply (M):
r = f(P,YA,M).
The Phillips curve equation had the rate of inflation depending on the gap between actual and
full capacity income and on changes in growth (a proxy for productivity):
p = b(x -1) - Yg + d
where p = inflation rate, x = ratio of actual output to full capacity output (YA/YF), Yg = the
proportionate growth rate of full capacity output, and d = constant. Taylor (1968, 5, n5) noted
that these monetary policy rules (which describe how the money supply is set) were “modified
versions of the types of fiscal policies first suggested by A.W. Phillips (1954, [2000, chapter
16])”. If monetary policy simply sets M exogenously, the authorities effectively set an interest
rate that depends on prices and income. There are thus clear similarities between these
1968 Taylor equations and the Taylor Rule.
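
A minimal numerical sketch of that inversion may help (ours, not Taylor's; the log-linear money demand function and its coefficients are purely hypothetical stand-ins for the general form r = f(P, YA, M) above):

```python
import math

# Hypothetical log-linear money demand: log M = log P + a*log Y - b*r.
# With M held fixed at M_bar, solving for r gives an implied interest rate
# "rule" r = (log P + a*log Y - log M_bar) / b, rising with prices and income.
a, b = 1.0, 2.0        # hypothetical income and interest semi-elasticities
M_bar = 1.0            # exogenously fixed money supply

def implied_rate(P: float, Y: float) -> float:
    return (math.log(P) + a * math.log(Y) - math.log(M_bar)) / b

print(implied_rate(1.00, 1.00))   # baseline: 0.0
print(implied_rate(1.02, 1.01))   # higher prices and output -> higher implied rate
```

The coefficients of such an inverted equation are not usually the same as those of the Taylor Rule, a point Taylor (2007b) makes in section 5 below.
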
The Taylor Rule can be seen as a method of compressing the swing of the business cycle
– minimizing the deviations from the “optimal” spot on an inflation-anchored Phillips curve. The
continuities between Taylor and Phillips and between Taylor in 1992 (age 45) and Taylor in
1968 (the 21-year-old undergraduate) will be outlined below.
As an undergraduate, Phillips constructed a large physical model with which to explore
the macroeconomic policy options (one version is on permanent display at the Science Museum
in South Kensington, London). Dennis Robertson “practically danced a jig” when he saw the
Phillips Machine in operation. When the Chancellor of the Exchequer and the Governor of the
Bank of England attended a dinner at LSE, they adjourned to the Machine room where the
Chancellor was given control of the fiscal levers and the Governor control of the monetary ones
(Dorrance 2000).
There is a distinct continuity between the Phillips Machine, the 1954 theoretical Phillips
Curve, the 1958, 1959 and 1962 empirical Phillips Curves, Phillips’ growth model and Taylor’s
work. In a ‘Simple Model of Employment, Money and Prices in a Growing Economy’, Phillips
(2000 [1961], chapter 20) described his inflation equation as being “in accordance with an
obvious extension of the classical quantity theory of money, applied to the growth equilibrium
path of a steadily expanding economy”. His steady state rate of interest, rs (“the real rate of
interest in Fisher's sense, i.e., as the money rate of interest minus the expected rate of change of
the price level”) was also “independent of the absolute quantity of money, again in accordance
with classical theory.” His interest rate function was “only suitable for a limited range of
variation of YP/M”. With exchange rate fixity, the domestic money supply (and hence the
inflation rate) becomes endogenously determined; the trade-off operates only within a narrow low
inflation band.
Phillips (2000 [1961], chapter 20) described the limits of his model: he was only
“interested” in ranges of values in which actual output (Y) fluctuates around capacity output (Yn)
by a maximum of five per cent: “In order to reduce the model with money, interest and prices to
linear differential equations in x [=Y/Yn], yn and p it is necessary to express log Y … in terms of
log Yn and x. For this purpose we shall use the approximation
log Y ≅ log Yn + (Y - Yn)/Yn
= log Yn + x - 1
The approximation is very good over the range of values of (Y - Yn)/Yn, say from -0.05 to
0.05, in which we are interested [emphasis added].” Since Phillips (2000 [1961], chapter 20)
stated that these output fluctuations were “five times as large as the corresponding fluctuations in
the proportion of the labour force employed”, this clearly indicates that Phillips limited his
analysis to the compromise zone of plus or minus one percentage point deviations of
unemployment from its rate at normal capacity output. Phillips was re-stating the conclusion of his
empirical work; normal capacity output (and approximately zero inflation) was consistent with
an unemployment rate “a little under 2½ per cent” (2000 [1958], chapter 18).
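
A quick numerical check (ours, not Phillips') confirms how tight the approximation is over the stated band:

```python
import math

# Check of Phillips' approximation  log Y ≈ log Yn + (Y - Yn)/Yn = log Yn + x - 1
# over the range of interest, (Y - Yn)/Yn between -0.05 and +0.05.
Yn = 100.0                                   # arbitrary normal-capacity output
for gap in (-0.05, -0.02, 0.0, 0.02, 0.05):
    Y = Yn * (1 + gap)
    exact = math.log(Y)
    approx = math.log(Yn) + (Y - Yn) / Yn
    print(f"gap={gap:+.2f}  exact={exact:.5f}  approx={approx:.5f}  error={exact - approx:+.6f}")

# The absolute error never exceeds about 0.0013, which is why Phillips could
# describe the approximation as "very good" within the five per cent band.
```
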
Underpinning the original Phillips Curve was the argument that “One of the important
policy problems of our time is that of maintaining a high level of economic activity and
employment while avoiding a continual rise in prices” [emphasis added]. Phillips explained that
there was “fairly general agreement” that the prevailing rate of 3.7 per cent inflation was
“undesirable. It has undoubtedly been a major cause of the general weakness of the balance of
payments and the foreign reserves, and if continued it would almost certainly make the present
rate of exchange untenable”. His objective was, if possible, “to prevent continually rising prices
of consumer goods while maintaining high levels of economic activity ... the problem therefore
reduces to whether it is possible to prevent the price of labour services, that is average money
earnings per man-hour, from rising at more than about 2 per cent per year ... one of the main
purposes of this analysis is to consider what levels of demand for labour the monetary and fiscal
authorities should seek to maintain in their attempt to reconcile the two main policy objectives of
high levels of activity and stable prices. I would question whether it is really in the interests of
workers that the average level of hourly earnings should increase more rapidly than the average
rate of productivity, say about 2 per cent per year” (2000 [1959], 261, 269-80; [1962], 208;
[1961], 201; [1962], 218; [1958], 259).
Taylor’s work was in this Phillips tradition: “Friedman’s (1968) AEA presidential
address was given during the middle of my senior year. Since I had a Phillips curve in the model
used in my thesis, I am sure I discussed the issue with my advisers. In the thesis I did not exploit
the long run trade off implicit in the Phillips curve by increasing the money growth rate and the
inflation rate permanently to get a permanently higher utilization rate. This could have reflected
a judgment that one could not in practice exploit the curve this way, despite what the algebra
said. More likely it was simply that I was interested in stabilization policy rules, and such rules,
very sensibly, did not even consider such a possibility” (Taylor 2007b).
Peter Phillips (2000) and Robin Court (2000) have highlighted Phillips’ analysis of the
relationship between policy control and model identification, and the similarity between the
equations used by Phillips and Robert Lucas (1976) to derive their conclusions about
econometric policy evaluation. Peter Phillips argues that the Phillips Critique implies “that even
deep structural parameters may be unrecoverable when the reduced form coefficients are
themselves unidentified. One can further speculate on the potential effects of unidentifiable
reduced forms on the validity of econometric tests of the Lucas critique … [this] may yet have an
influence on subsequent research, irrespective of the historical issue of his work on this topic
predating that of Lucas (1976)”.
Two decades before Lucas, Phillips (2000 [1956], chapter 17) stressed that “There are,
therefore, two questions to be asked when judging how effective a certain policy would be in
attaining any given equilibrium objectives. First, what dynamic properties and cyclical
tendencies will the system as a whole possess when the policy relationships under consideration
themselves form part of the system? [emphasis added]. Second, when the system has these
dynamic properties, will the equilibrium objectives be attained, given the size of the probable
disturbances and the permissible limits to movements in employment, foreign reserves, etc. The
answer to the first question is important, not only because the reduction of cyclical tendencies is
itself a desirable objective, but also because the second question cannot be answered without
knowing the answer to the first. And the first question cannot be answered without knowing the
magnitudes and time-forms of the main relationships forming the system”.
Phillips (2000 [1968], chapter 50) concluded that “The possibility that operation of the
control may prevent re-estimation of the system should lead us to ask whether the decision
analysis we have been considering does not have some fundamental deficiency. And indeed it
has. The basic defect is simply that in deriving the decision rules no account was taken of the
fact that the parameters of the system are not known exactly, and no consideration was given to
ways in which we can improve our knowledge of the system while we are controlling it. In my
view it cannot be too strongly stated that in attempting to control economic fluctuations we do
not have the two separate problems of estimating the system and of controlling it, we have a
single problem of jointly controlling and learning about the system, that is, a problem of learning
control or adaptive control.”
Taylor (2007b) followed this “learning” path also: “my Ph.D. thesis was on policy rules.
The problem was to find a good policy rule in a model where one does not know the parameters
and therefore had to estimate them and control the dynamic system simultaneously. An
unresolved issue was how much “experimentation” should be built into the policy rule through
which the instrument settings would move around in order to provide more information about the
parameters, which would pay off in the future. I proved theorems and did simulations, which
showed various convergence properties of the least squares or Bayesian learning rules. My main
conclusion from that research, however, was that in many models simply following a rule
without special experimentation features was a good approximation. That made future work
much simpler of course because it eliminated a great deal of complexity”.
5. Rational Expectations plus Contracts: the “general theory” of policy rules
The Taylor Rule has an existence independent of the theoretical framework that Taylor is
associated with. However, for the purposes of this paper it seems sensible to describe, at least in
outline, the components of that framework.
Experiences in the policy arena led Taylor (1998; 1989b) to propose (in his Harry
Johnson lecture) a “translational” theory of policy regime change, in contrast to Johnson-style
emphasis on revolution and counter-revolution (Johnson 1971). Shortly after leaving the CEA,
Taylor – together with Phelps, and in parallel with Stanley Fischer (1977) – sought to rescue from the clutches of the
Sargent and Wallace (1975) Policy Ineffectiveness Proposition the “old doctrine” that
“systematic monetary policy matters for fluctuation of output and employment”. Phelps and
Taylor (1977) “bottle[d]” the “old wine” in a rational expectations model in order to build a
better model to evaluate monetary policy rules.
Taylor (2007b) was also persuaded by his first CEA experience to revise the Taylor
Curve paper (Taylor 1979, first draft 1976) to make it “more practical and more useful in
practice”. That paper had the first empirically realistic monetary policy rule that was calculated
with new rational expectations methods. During his first CEA experience, Taylor (2007b) saw
the need to “do a better job at explaining the persistence of inflation with rational expectations.
That is where the staggered contract model came from”. After leaving the CEA, Taylor (1977)
wrote about the “incentive structure under which policy decisions are made” and “the fairly
vigorous competition for ideas” and began to think systematically about the administrative
dynamics of policy making.
Taylor (2007b) recalled that “The Taylor Curve paper (Taylor 1979) was reviewed
favorably at the time by people on both sides of the spectrum (the favorable review from Lucas
was certainly a big boost for me), and because it showed that this approach to monetary policy
could work in practice, it was a very big development on the road to the Taylor Rule. The
monetary policy rule in that paper had exactly the same variables on the right hand side as the
eventual Taylor Rule. The rule had the objective of minimizing the weighted sum of the variance
of output and variance of inflation; it also presented and estimated the first variance tradeoff
(Taylor Curve) with inflation and output and contained a simple staggered price setting model
(laid out in an appendix and covered in much more detail in my 1980 JPE paper). The big
difference from the future, of course, was that the money supply was on the left hand side. The
transition from the money supply to the interest rate on the left hand side of the rule occurred a
few years later”.
Taylor (1999) saw the r-based rule as “complement[ing] the framework provided by the
quantity equation of money so usefully employed by Friedman and Schwartz (1963)”: “this
actually goes back to the inverted money demand equation in my 1968 paper. Such an inverted
equation can generate interest rate behavior with similar characteristics to interest rate rules.
When GDP rises, the interest rate also rises, for example. But the coefficients are not usually the
same as interest rate rules like the Taylor Rule” (Taylor 2007b).
At the Philadelphia Fed, Taylor (1981b, 145) assessed monetarist rules and nominal GNP
targeting, concluding that monetarist rules were inefficient relative to a monetarist (no
accommodation of inflation)/Keynesian (countercyclical) compromise: “a classic countercyclical
monetary policy combined with no accommodation of inflation is efficient”. Taylor (2007b)
recalled: “This was a way for me to emphasize that monetary policy had to react more strongly
to both real GDP and inflation. By providing no accommodation to inflation, by keeping money
growth constant, in the face of inflation shocks, the central bank would create a larger increase in
the interest rate. At the same time, they could also respond aggressively to reduction in real
GDP”.
With respect to nominal GNP targeting as a “new rule for monetary policy”, Taylor
(1985, 61, 81) detected merits (“the virtue of simplicity. Explaining how it works to policy
makers seems easy”) and explanatory power (“during much of the post war period, the Fed can
be interpreted as having used a type of nominal GNP rule”) plus a fundamental flaw (“This rule,
when combined with a simple price-adjustment equation, has contributed to the cycle by causing
overshooting and “boom-bust” behavior”). As an alternative he proposed a “new policy rule … a
modified nominal GNP rule that keeps constant the sum of the inflation rate and the proportional
deviations of real output from trend … The rule can be generalized to permit less than, or more
than, one-to-one reactions of real GNP to inflation, depending on the welfare significance of
output fluctuations versus inflation fluctuations”.
During his second CEA experience (1989-1991) Taylor co-wrote the February 1990
Economic Report of the President. This report (1990, 65, 84, 86, 64, 65, 107) noted that the
“simple” (Friedman-style) monetary growth rule had become “unworkable”; it was
“inappropriate” to follow “rigid monetary targeting”. However, the Fed had “not regressed to an
undisciplined, ad hoc approach to policy … a purely discretionary approach”. Rather it had
“attempted to develop a more systematic, longer-run approach”. Policies should be designed to
“work well with a minimum of discretion … the alternative to discretionary policies might be
called systematic policies … Unpredictable changes in economic and financial relationships
imply that appropriate policy rules in some circumstances are rather general”. The February 1990
Economic Report of the President was a “translational” play: an opportunity to move the “ball”
towards the rules party “goal line” (Taylor 1998).
6. Immediate Prelude to the Taylor Rule
After leaving the CEA, Taylor (1993d, xv, 251) returned to his almost finished
monograph on Macroeconomic Policy in a World Economy: From Econometric Design to
Practical Operation: “this book is considerably different from the book that would have been
published three years ago”. The Taylor Rule must have been a product of reflection as the book was
completed (1992): an equation in “Looking for a better monetary policy rule” almost described
the Taylor Rule.
In the thirteen months prior to the Carnegie Rochester conference, four other conferences
also appeared to have influenced Taylor’s progress towards the rule (the first two provocatively).
At the Bank of Japan conference on ‘Price Stabilization in the 1990s’ (October 1991) David
Laidler (1993, 336, 353) argued that the apparent instability of money demand functions required
discretionary offsetting shifts in money supply. Faith that a “legislated, quasi-constitutional”
money growth rule would produce price stability now appeared “naïve … uncomfortably like
those for perpetual motion or a squared circle”. Laidler saw the optimal route to price stability
through independent central banks: “We are left, then, with relying on discretionary power in
order to maintain price stability”. Taylor (1993a, 5) noted that “Michael Parkin’s oral comments
at the conference were consistent with that view, and I think that there was a considerable
amount of general agreement at the conference”.
In April 1992, Taylor commented on Bernanke and Mishkin’s (1992) ‘Central Bank
Behavior and the Strategy of Monetary Policy’, taking particular exception to the proposition
that “Monetary policy rules do not allow the monetary authorities to respond to unforeseen
circumstances”. Taylor (1992a, 235) argued that “if there is anything we have learned from
modern macroeconomics it is that rules need not entail fixed settings as in constant money
growth rules”. Taylor appeared to be suggesting that Bernanke and Mishkin were leading
monetary economics in the wrong direction: their paper “eschews models and techniques, which
endeavors to go directly to a policy making perspective … My experience is that there are far too
many policy papers in government that do not pay enough attention to economic models and
theory”.4

4 In September 1992 Bernanke and Blinder (1992, 910-912) published a paper with a section entitled ‘Federal Reserve’s Reaction Function’: “If the Federal funds rate or some related variable is an indicator of the Federal Reserve’s policy stance, and if the Fed is purposeful and reasonably consistent in its policy-making, then the funds rate should be systematically related to important macroeconomic target variables like unemployment and inflation”. Bernanke and Blinder then present estimated policy reaction functions which “show this to be true … The results look like plausible reaction functions. Inflation shocks drive up the funds rate (or the funds rate spread), with the peak effect coming after 5-10 months and then decaying very slowly. Unemployment shocks push the funds rate in the opposite direction, but with somewhat longer lags and smaller magnitudes. To our surprise, these relationships did not break down in the post-1979 period. Reaction functions estimated in the same way for the 1979-1989 period looked qualitatively similar”.
At the Federal Reserve System’s Committee on Financial Analysis meeting (St Louis
Fed, June 1992) Taylor commented on an early draft of Jeffrey Fuhrer and George Moore’s
(1995) ‘Inflation Persistence’. Taylor (1992b) noted that the authors had made “an important
contribution to the methodology of monetary policy formulation … they look at the response of
the economy to a policy rule which they write algebraically, arguing that the functional form
comes close to what the Federal Reserve has been using in practice … Their results, taken
literally, are quite striking. They find that a policy rule that is a fairly close representation of Fed
policy for the last eight or 10 years is nearly optimal. The rule entails changing the federal funds
rate, according to whether the inflation rate is on a target and whether output is on a target. Their
results are not very sensitive to the choice of a welfare function. Basically, as long as price
stability and output stability are given some weight, movements too far away from this particular
rule worsen performance. This is a remarkable result and deserves further research. What are the
implications for policy? The literal implication is to keep following that rule … It is perhaps too
abstract for policy makers to think in terms of a policy rule, but it seems to me that this is the
only way to think of implementing or taking seriously the policy implications of the paper”.
At the Reserve Bank of Australia in July 1992, Taylor (1992c, 9, 13, 15, 26, 29) noted
that the historical era of “great” inflation/disinflation was “concluding”. A repeat of this
unfortunate history was, he thought, “unlikely”. The intellectual justification for inflation (the
Phillips curve trade-off) had been “mistaken” and based on “faulty” models. Taylor argued that
“the most pressing task is to find good rules for monetary policy – probably with the interest rate
as the instrument – that reflect such [short-term inflation-output] trade-offs … monetary policy
should be designed in the future to keep price and output fluctuations low … the recent research
on policy rules in this research is very promising. There is a need to find ways to characterize
good monetary policy as something besides pure discretion”.
At the same conference, Charles Goodhart (1992, 326, 324) noted that “unspecified”
1946-era multiple goals had been replaced by a philosophy which was reflected in Article 2 of
the Statute of the European System of Central Banks (1992): “The primary objective of the
ESCB shall be to maintain price stability”. Goodhart pondered a “backbone brace” rule in
which interest rates should rise by 1.5% for each 1% rise of inflation above zero with a
requirement that any divergence from that rule should be formally accounted for by the monetary
authorities. But this ‘Goodhart Principle’ was inflation-first-and-foremost-based and possibly
“too mechanical”.
Taylor is a mild-mannered person who obviously survives and thrives in battle zones. The
“rules party” was being abandoned in favor of discretion by Laidler and Parkin, and criticized as
inflexible by Bernanke and Mishkin. On top of those intellectual challenges, the Bush
administration (in which he had served for 2 years) was headed toward defeat, partly on the basis
of having reneged on its “no new taxes” promise. On November 3, 1992, a sitting Republican
President was ousted at the polls following a challenge for his party’s nomination (Taylor spent
August 1992 at the Republican Party convention in Houston disputing with non-economists over
the content of the platform). But judging from the paper that was presented in the same month,
the turmoil must have been productive.
7. Impact of the Taylor Rule: macroeconomic research
The broad appeal of the Taylor Rule comes from its simplicity, intuitiveness, and focus
on short-term interest rates as the instrument of monetary policy. The rule is simple in that it
relates the policy rate—the federal funds rate—directly to the goals of monetary policy—
minimizing fluctuations in inflation relative to its objective and output relative to potential output
(the output gap). In addition, as originally described (prescribed?), the rule requires knowledge
of only the current inflation rate and output gap. Taylor provided his own parameters for the key
unobservables in the rule.
The rule is intuitive because it calls for policymakers to move the funds rate to lean
against the wind of aggregate demand shocks and take a balanced approach to aggregate supply
shocks. In addition, the “Taylor Principle” embedded in Taylor’s Rule requires that the real
federal funds rate be increased when inflation is above the inflation objective. In other words, the
nominal funds rate should rise more than one-for-one with an increase in inflation above
objective. This principle is also intuitive as a device for ensuring inflation remains anchored over
time at its objective.
The Taylor Rule appeared to satisfy the dual mandate. However, the Taylor Principle
emerges by rearranging Taylor’s equation, and interpreting y as a harbinger of future
inflationary pressures leads to single-mandate inflation targeting:
r = 1.5p + 0.5y + 1
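
The rearrangement is just one line of algebra on the equation stated in the introduction:

```latex
\begin{align*}
r &= p + \tfrac{1}{2}\,y + \tfrac{1}{2}\,(p - 2) + 2 \\
  &= \tfrac{3}{2}\,p + \tfrac{1}{2}\,y + 1
\end{align*}
% The nominal funds rate rises 1.5-for-1 with inflation, so the real rate rises
% whenever inflation moves above the 2 percent objective (the Taylor Principle).
```
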
The Taylor Rule also has broad appeal because it approximates the way policymakers
think about the conduct of monetary policy. In much, but not all, of the academic literature
leading up to 1993, monetary policy was represented by an exogenous autoregressive process on
the money supply. Needless to say, this was not how policymakers viewed themselves as making
policy. Except perhaps for the 1979-1983 period, the main instrument of Fed policy in the post-
Accord period (1951-) has been a short-term interest rate, with the federal funds rate gaining
increasing importance through the 1960s (Meulendyke, 1998). And, by the time Taylor had
articulated his rule, policymakers in the United States were well on their way to abandoning the
specification of target ranges for the monetary aggregates.
Of course the appeal of a simple, intuitive, and realistic policy rule would be considerably
diminished if it could not describe past policy or provide guidance about the future. The Taylor
Rule did both. As Taylor (1993a) showed, his rule closely tracked the actual path of the federal
funds rate from 1987 to 1992. And because this was a period of relative macroeconomic
stability, the rule subsequently became viewed as a prescription for conducting monetary policy
going forward.5

5 Taylor (1993a) emphasized the normative aspect of the rule and the desirability of systematic rule-like behavior on the part of policymakers. Taylor also discussed the use of discretion within the context of a policy rule and issues involved in the transition from pure discretion to a policy rule or from one policy rule to another. The close fit of the Taylor Rule to data from 1987 to 1992 suggested the rule was a feasible prescription for policy.
However, Taylor (1993a, 197) did not advocate that policymakers follow a rule
mechanically: “…There will be episodes where monetary policy will need to be adjusted to deal
with special factors”. Nevertheless, Taylor viewed systematic policy according to the principles
of a rule as having “major advantages” over discretion in improving economic performance:
“Hence, it is important to preserve the concept of a policy rule even in an environment where it
is practically impossible to follow mechanically the algebraic formulas economists write down to
describe their preferred policy rules”.
Given these features, the Taylor Rule has had a profound influence on macroeconomic
research. For one thing, it fostered renewed interaction and communication between academic
and central bank economists. In the late 1970s and early 1980s, the rational expectations/real
business cycle revolution had led many academics to question the effectiveness of activist
monetary policy. A communication gap had emerged between academic economists studying
the propagation of business cycles in flexible price models and economists at central banks who
were still interested in designing stabilization policies in models where monetary policy had real
effects. In combination with New-Keynesian sticky price models, the Taylor Rule put academic
and central bank economists back on the same research track. Today, economists and economic
ideas move freely between academic and central bank research departments.
The resulting literature on Taylor Rules has been both positive and normative, theoretical
and empirical. It is vast and growing. Rather than an exhaustive survey, this section describes a
handful of issues that have been addressed within the Taylor Rule framework.
First, as McCallum (1993) pointed out in his discussion of the original Taylor Rule paper,
the Taylor Rule is not strictly operational since policymakers cannot observe current quarter
GDP. So, one line of research has been to make the Taylor Rule operational through the use of
lagged output and inflation or the explicit use of forecasts.
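
As a minimal sketch of what "operational" means here (the lag convention and function names are ours; the coefficients are Taylor's original ones):

```python
def taylor_rate(p: float, y: float) -> float:
    """Taylor (1993) prescription with a 2 percent inflation target and a
    2 percent equilibrium real rate."""
    return p + 0.5 * y + 0.5 * (p - 2.0) + 2.0

def operational_taylor_rate(p_lagged: float, y_lagged: float) -> float:
    """One simple operational variant: evaluate the rule with last quarter's
    (already published) inflation and output gap rather than the unobservable
    current-quarter values; forecast-based variants substitute projections."""
    return taylor_rate(p_lagged, y_lagged)

print(operational_taylor_rate(3.0, 1.0))   # -> 6.0
```
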
Second, researchers have used the Taylor Rule to evaluate historical monetary policy.
This line of research led to the recognition that policy is conducted with contemporaneous data
and that researchers need to be careful to use real time data in assessing the historical record.
Whether policymakers responded aggressively enough to inflation in the 1970s given real time
data remains a topic of debate.
Third, researchers have computed policy rules that are optimal with respect to a particular
macroeconomic model and central bank loss function or that maximize a representative agent’s
welfare in small DSGE models. Attention is given in these models to the specification of Taylor
Rule parameters that rule out indeterminacies and sunspot equilibria. Typically, a coefficient on
inflation that adheres to the Taylor principle and coefficients on inflation and output that are not
too extreme generate favorable macroeconomic outcomes.
Fourth, researchers have examined the robustness of policy rules across a variety of
structural models. Again, policy rules that are similar to Taylor’s original specification appear
relatively robust—although other specifications may be more robust across models with forward-
looking behavior and rational expectations. One example is a first differenced version of the
Taylor Rule.
Other issues include the use of Taylor Rules in small open economies, the identification
of conditions under which it may be necessary or desirable to deviate from rule-like behavior, the
use of forecast-based rules versus backward-looking rules, the generalization of Taylor Rules to
allow for regime switching or time variation in the rule’s coefficients, the desirability of
instrument rules versus target rules in central bank decision-making and communications, and
the role of asset prices in policy rules.
8. Impact of the Taylor Rule: the FOMC
Taylor (1993a, 202-03) argued that the FOMC appeared to have acted systematically and
in accordance with his simple rule from 1987 to 1992: “What is perhaps surprising is that this
rule fits the actual policy performance during the last few years remarkably well.... In this sense
the Fed policy has been conducted as if the Fed had been following a policy rule much like the
one called for by recent research on policy rules”.
Taylor (1993a, 208) suggested that a specific policy rule could be added to the list of
factors—such as leading indicators, structural models, and financial market conditions—that the
FOMC already monitored. “Each time the FOMC meets, the staff could be asked to include in
the briefing books information about how recent FOMC decisions compare with the policy rule.
Forecasts for the next few quarters—a regular part of the staff briefing—could contain forecasts
of the federal funds rate implied by the policy rule. There are many variants on this idea. For
instance, there could be a range of entries corresponding to policy rules with different
coefficients, or perhaps a policy rule where the growth rate of real GDP rather than its level
appears. Bands for the federal funds rate could span these variants”.
The FOMC was likely unaware before 1993 that its behavior could be described by a
simple policy rule. But the Taylor Rule very quickly became a part of the information set that the
FOMC regularly reviewed. And, Taylor’s description of how a rule could be used in practice
proved prescient. By at least 1995, FOMC members were regularly consulting the Taylor Rule
for guidance in setting monetary policy. A review of transcripts of FOMC meetings from 1993 to
2001—the last year for which transcripts have been made publicly available—shows that the
FOMC used the Taylor Rule very much in the way Taylor recommended in 1993. Not only did
the staff prepare a range of estimates of the current stance of policy and the future policy path
based on various policy rules, but members of the FOMC also regularly referred to rules in their
deliberations.
8.1 A guide for policy
According to the transcripts, the first mention of the Taylor Rule at an FOMC meeting
occurred at the January 31-February 1, 1995, meeting. At that meeting, Janet Yellen described
the rule and its close approximation to actual FOMC policy decisions since 1986 and suggested
that the rule was currently calling for a funds rate of 5.1 percent—close to the current stance of
monetary policy. In contrast, she noted, the financial markets were expecting an increase of 150
basis points “before we stop tightening…,” and the Greenbook (the document prepared for each
FOMC meeting describing the staff’s detailed forecast for economic activity and inflation)
suggested the federal funds rate should be 7 percent. “I do not disagree with the Greenbook
strategy. But the Taylor Rule and other rules… call for a rate in the 5 percent range, which is
where we already are. Therefore, I am not imagining another 150 basis points”.
In subsequent meetings, Yellen pointed repeatedly to the Taylor Rule as a guide to her
views on the proper stance for monetary policy. Other Committee members—especially
Governors Meyer and Gramlich and President Parry—also relied heavily on the Taylor Rule.
8.2 A framework for analyzing issues
From 1995 to 2001, the Taylor Rule was also used to analyze a range of issues. Many of
the discussions paralleled research being conducted by academic and Federal Reserve
economists on policy rules. Although firm conclusions were not always reached, it is clear from
the transcripts that the Taylor Rule became over time a key input into the FOMC’s policy
process. Among the issues debated were the following:
8.2.1 The sensitivity of the rule to the inflation measure
At the May 1995 meeting, FOMC members discussed what measure of inflation should
be used in determining the Taylor Rule’s prescription for policy. Chairman Greenspan asked
what measure of inflation Taylor used and noted that, when the data on GDP were revised, the
normative prescription from the rule would change. Donald Kohn indicated that using the
implicit price deflator gave a policy prescription for the funds rate of 4 ¼ percent, while using
the CPI gave a prescription of around 5 ¾ percent. Kohn noted, however, that a rule using CPI
inflation would not track Committee actions in earlier years as well as the Taylor Rule which
relied on inflation as measured by the implicit GDP deflator. Alan Blinder, vice chairman of the
Board of Governors, added that the parameters of the Taylor Rule would likely change if the
variables on the right hand side were to be changed (FOMC, May 1995, 30).
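
The sensitivity is easy to see in the rule itself: with a coefficient of 1.5 on inflation, a one percentage point difference between two inflation measures moves the prescription by 1.5 percentage points (the numbers below are illustrative; they are not the 1995 staff estimates):

```python
def taylor_rate(p: float, y: float) -> float:
    return p + 0.5 * y + 0.5 * (p - 2.0) + 2.0

output_gap = 0.0                                # hypothetical
deflator_inflation, cpi_inflation = 2.2, 3.2    # hypothetical readings for the same period

print(taylor_rate(deflator_inflation, output_gap))  # -> 4.3
print(taylor_rate(cpi_inflation, output_gap))       # -> 5.8, i.e., 1.5 points higher
```
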
8.2.2 Staff concerns and caveats
By November 1995, Board staff began providing the FOMC with a chart summarizing
various versions of the Taylor Rule. In discussing the new chart at the November 1995 FOMC
meeting, Board staff noted several caveats. First, the Taylor Rule was not forward looking except
in the sense that the inclusion of the output gap on the right hand side provided an indicator of
future inflationary pressure. It was noted that the performance of a rule-based monetary policy
might be improved by incorporating forecasts of inflation and the output gap instead of their
current levels.
Second, the equal weights on inflation and the output gap in the Taylor Rule may not
always be appropriate. While equal weights might be well suited for supply shocks, a greater
weight on the output gap may be better suited for demand shocks. This would allow for a
“prompt closing of the output gap” that would “forestall opening up a price gap.”
Third, it was again noted that the Taylor Rule’s prescribed funds rate target is highly
sensitive to how output and inflation are measured. According to the Taylor Rule, the current
setting of the funds rate was high relative to the equilibrium level, suggesting policy was
restrictive. However, the current funds rate appeared close to its equilibrium level when
measures of inflation other than the implicit GDP deflator were used in determining the deviation
of inflation from Taylor’s 2 percent objective.
Fourth, an estimated version of the Taylor Rule that allows the funds rate target to adjust only gradually toward the rate prescribed by the rule suggests the FOMC placed greater weight on closing the output gap, and less weight on bringing inflation down, than Taylor's coefficients imply (a partial-adjustment rule of this kind is sketched after the fifth caveat below). To
some extent, this result reflected “the influence of the credit crunch period when the funds rate
for some time was below the value prescribed from Taylor’s specification.”
Fifth, Federal Reserve monetary policy from 1987 to 1993 was focused on bringing
inflation down and, therefore, policy was generally restrictive. Policy remained slightly
restrictive in November 1995 with an estimated real funds rate somewhat higher than the 2
percent equilibrium real funds rate assumed in the Taylor Rule. However, the Board staff’s forecast
called for steady inflation at the current nominal and real federal funds rate. In other words, the
staff forecast implicitly incorporated a higher equilibrium real funds rate than that assumed in the
Taylor Rule: “The real funds rate is only an index or proxy for a whole host of financial market
conditions that influence spending and prices in complex ways. Among other difficulties, the
relationship of the funds rate to these other, more important, variables may change over time.”
Thus, the Board staff viewed the equilibrium real funds rate as a concept that changed over time,
making the Taylor Rule as originally specified less reliable (FOMC, November 1995, 1-5).
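The partial-adjustment idea behind the fourth caveat is easy to write down. This is a minimal sketch rather than the staff's estimated equation; the smoothing weight rho is a hypothetical value chosen only to show how the funds rate converges gradually toward the rule's prescription.

```python
def taylor_rate(inflation, output_gap, target=2.0, r_star=2.0):
    return inflation + 0.5 * output_gap + 0.5 * (inflation - target) + r_star

def inertial_rate(prev_rate, inflation, output_gap, rho=0.7):
    # Move only part of the way toward the prescription each period (rho is hypothetical).
    return rho * prev_rate + (1 - rho) * taylor_rate(inflation, output_gap)

# Hypothetical path: the rule prescribes 5.5 percent while the funds rate starts at 4.0.
rate = 4.0
for _ in range(4):
    rate = inertial_rate(rate, inflation=3.0, output_gap=0.0)
    print(round(rate, 2))
# prints approximately 4.45, 4.77, 4.99, 5.14: gradual convergence toward 5.5
```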
8.2.3 Deliberate versus opportunistic disinflation
At the same meeting, members briefly discussed the Taylor Rule as a framework for
deliberate, as opposed to opportunistic, disinflation. Gary Stern, president of the Minneapolis
Fed, questioned whether policy should be tighter than indicated by the Taylor Rule “to bend
inflation down further from here.” Governor Lawrence Lindsey responded that, with inflation
above the assumed Taylor Rule target of 2 percent, the prescription for policy from the rule itself
was deliberately restrictive, placing steady downward pressure on inflation (FOMC, November
1995, 49-50).
This topic was taken up again at the next two meetings. For example, in January 1996,
Robert Parry, president of the San Francisco Fed, suggested that an opportunistic disinflation
strategy would involve a much more complicated description of policy than a Taylor Rule. An
opportunistic strategy is one in which monetary policy aims to hold inflation steady at its current
level until an unanticipated shock pulls inflation down. At that point, policymakers
“opportunistically” accept the lower inflation rate as the new target for policy and attempt to
maintain the lower inflation rate until an unexpected shock again pulls inflation down. Parry
questioned whether such an opportunistic approach wouldn’t require “a complicated
mathematical expression of our policy processes with lots of nonlinearities?” Parry’s concern
was that adopting an opportunistic approach to further disinflation would inevitably lead to a
“loss of understanding” in financial markets about how the FOMC reacts to incoming
information (FOMC, January 1996, 51).
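One crude way to make the mechanics concrete is a rule whose operative inflation target ratchets down whenever realized inflation comes in below it and is otherwise unchanged. The sketch below is only that reading, with a hypothetical inflation path; Parry's point was precisely that a faithful description of such behavior would be far more complicated.

```python
def taylor_rate(inflation, output_gap, target, r_star=2.0):
    return inflation + 0.5 * output_gap + 0.5 * (inflation - target) + r_star

def opportunistic_path(inflation_path, initial_target, output_gap=0.0):
    # Hold the target until a favorable surprise lowers inflation, then adopt
    # the lower inflation rate as the new target.
    target = initial_target
    for inflation in inflation_path:
        target = min(target, inflation)
        yield target, taylor_rate(inflation, output_gap, target)

# Hypothetical inflation path with two favorable surprises (2.4 and 2.0).
for target, rate in opportunistic_path([3.0, 3.1, 2.4, 2.6, 2.0], initial_target=3.0):
    print(target, round(rate, 2))
# the operative target ratchets from 3.0 to 2.4 to 2.0 as the surprises arrive
```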
In Taylor’s terminology, opportunistic disinflation involves a series of transitions from
one policy rule to another as the target inflation rate is opportunistically lowered. Taylor (1993a,
207) cautions that “in the period immediately after a new policy rule has been put in place,
people are unlikely either to know about or understand the new policy or to believe that
policymakers are serious about maintaining it. Simply assuming that people have rational
expectations and know the policy rule is probably stretching things during this transition period.
Instead, people may base their expectations partly on studying past policy in a Bayesian way, or
by trying to anticipate the credibility of the new policy by studying the past records of
policymakers, or by assessing whether the policy will work”. Thus, Taylor appears to have
anticipated Parry’s concerns.6
6 See Orphanides and Wilcox (1996) and Orphanides, Small, Wieland, and Wilcox (1997) for other interpretations of opportunistic disinflation.
8.2.4 Forward- versus backward-looking Taylor Rules
In 1997, FOMC members began to consider various alternative specifications of the Taylor Rule. Governor Meyer noted that, while the standard Taylor Rule suggested policy
should remain on hold at the present time, the staff’s forecast suggested policy would need to be
tightened in the future. He argued that if current values of inflation and the output gap were
replaced in the Taylor Rule with forecasts, the rule would be prescribing an immediate tightening
of policy. Using a “maxi/min” analysis, he viewed the cost of not tightening when tightening
turns out to be the appropriate action as greater than the cost of tightening when not tightening
turns out to be appropriate. The policy prescription coming from a forward-looking Taylor Rule
and the implications of a maxi/min strategy were among the reasons Meyer cited in support of a
tightening of monetary policy (FOMC, March 1997, 54-57).
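Meyer's argument changes the rule's inputs rather than its form: forecasts of inflation and the output gap replace current readings. A minimal sketch, with hypothetical current and forecast values chosen only to show how a forward-looking version can call for tightening while the standard version says hold:

```python
def taylor_rate(inflation, output_gap, target=2.0, r_star=2.0):
    return inflation + 0.5 * output_gap + 0.5 * (inflation - target) + r_star

current_rate = 4.0                                  # hypothetical current funds rate target
current = {"inflation": 2.0, "output_gap": 0.0}     # hypothetical current readings
forecast = {"inflation": 2.8, "output_gap": 1.0}    # hypothetical year-ahead forecast

print(round(taylor_rate(**current), 2))   # 4.0: the backward-looking rule says stay on hold
print(round(taylor_rate(**forecast), 2))  # 5.7: the forecast-based rule prescribes tightening now
```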
8.2.5 The equilibrium real federal funds rate
In 1997, FOMC members began to question the constant 2 percent equilibrium real
federal funds rate assumed in the Taylor Rule. Governor Meyer said, “While I am a strong
believer in some of the wisdom embedded in the Taylor Rule, I have been concerned for a long
time that we need to be more careful about how we set its level by coming up with a more
reasonable estimate of the equilibrium funds rate” (FOMC, August 1997, 66-67). Two key issues
at the time were the dependence of estimates of the equilibrium real rate on the particular
measure of inflation and the possibility that the equilibrium real rate varied over time.
Later, as evidence mounted that trend productivity growth had increased, the issue of the
equilibrium real rate reemerged. Members were concerned that maintaining Taylor’s fixed 2
percent real rate would lead to an overly stimulative policy. Alfred Broaddus, president of the
Richmond Fed, said “…an increase in trend productivity growth means that real short rates need
to rise…. [T]he reason is that households and businesses would want to borrow against their
perception of higher future income now in order to increase current consumption and investment
before it’s actually available…. The Taylor Rule doesn’t give any attention to that kind of real
business cycle reason for a move in rates. It only allows reaction to inflation gaps and output
gaps” (FOMC, June 1999, 99-100).
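One way to read this concern is that the rule's intercept, rather than its response coefficients, is what goes wrong when trend productivity growth rises: the constant 2 percent stands in for an equilibrium real rate that may move over time. A minimal sketch in which that constant becomes an input (the r-star values are illustrative, not estimates):

```python
def taylor_rate(inflation, output_gap, r_star, target=2.0):
    # Same rule, but the equilibrium real funds rate is supplied rather than fixed at 2 percent.
    return inflation + 0.5 * output_gap + 0.5 * (inflation - target) + r_star

# Hypothetical: faster trend productivity growth raises r* from 2.0 to 3.0 percent,
# lifting the prescription one-for-one even with unchanged inflation and output gaps.
print(taylor_rate(2.0, 0.0, r_star=2.0))  # prints 4.0
print(taylor_rate(2.0, 0.0, r_star=3.0))  # prints 5.0
```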
8.2.6 The zero interest rate bound
In 1998, Board staff briefed the FOMC on issues arising from the zero constraint on
nominal interest rates. Again, a good part of the discussion was based on how the Taylor Rule
might be adjusted to address the issue. One alternative was to increase the coefficients on the
inflation and output gaps in the Taylor Rule. Another alternative was to act more aggressively
only when inflation is already deemed “low.” Jerry Jordan, president of the Cleveland Fed,
suggested that conducting monetary policy “through a monetary base arrangement of supply and
demand for central bank money” might be an alternative to the Taylor framework when interest
rates were approaching the zero bound. President Parry pointed out that policy would be more
preemptive under either a more aggressive Taylor Rule or a forecast-based Taylor Rule (FOMC,
June/July 1998, 89-96).
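Purely for illustration, the two adjustments the staff described can be combined into a single modified prescription: respond more aggressively once inflation is already deemed low, and never prescribe a negative nominal rate. The threshold and the doubled coefficients below are hypothetical choices, not the staff's.

```python
def taylor_rate(inflation, output_gap, target=2.0, r_star=2.0, w_pi=0.5, w_y=0.5):
    return inflation + w_y * output_gap + w_pi * (inflation - target) + r_star

def near_zero_bound_rate(inflation, output_gap, low_inflation=1.0):
    # Double the gap coefficients once inflation is "low" (hypothetical threshold),
    # then impose the zero floor on the nominal funds rate.
    if inflation < low_inflation:
        prescription = taylor_rate(inflation, output_gap, w_pi=1.0, w_y=1.0)
    else:
        prescription = taylor_rate(inflation, output_gap)
    return max(0.0, prescription)

print(near_zero_bound_rate(0.5, -2.0))  # prints 0.0: aggressive easing runs into the zero bound
```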
8.2.7 Uncertainty about the output gap
In February 1999, Governor Meyer pointed out that virtually all versions of the Taylor
Rule then tracked by Board staff for the FOMC—whether based on the CPI or GDP deflator,
whether backward- or forward-looking, whether with Taylor’s coefficients or estimated
coefficients—prescribed a funds rate that was higher than the current funds rate target. He
attributed this divergence from the rule to a number of factors including the Asian financial
crisis, the Russian debt default, forecasts that had been calling for a spontaneous slowdown, and,
importantly, structural change suggested by the combination of declining inflation and declining
unemployment.
Meyer proposed an asymmetric strategy for setting the funds rate target in such an
environment where there was uncertainty about the level of the non-accelerating inflation rate of
unemployment (NAIRU). He suggested determining the level of the NAIRU under the
assumption that the current setting of the funds rate was the one prescribed by the Taylor Rule.
Then, he recommended following the Taylor Rule if above trend growth pushed the
unemployment rate even lower. In contrast, if the unemployment rate rose modestly, Meyer
recommended taking no immediate action to ease policy. Similarly, Meyer recommended policy
respond to an increase in (core) inflation according to the Taylor Rule, but respond passively to a
decline in inflation (FOMC, February 1999, 65-66).
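Restated in output-gap rather than unemployment terms, Meyer's asymmetric strategy amounts to applying the rule's prescribed change in one direction only, starting from a baseline at which the current funds rate is taken to equal the rule's prescription. The sketch below is that restatement, using the coefficients implied by Taylor's rule (1.5 on inflation, 0.5 on the gap) and hypothetical changes in the data:

```python
def asymmetric_change(d_inflation, d_output_gap):
    # Change in the funds rate from a baseline where the current rate is assumed to equal
    # the rule's prescription: follow the rule when inflation rises or the output gap widens
    # further above potential; respond passively (no cut) when they move the other way.
    return 1.5 * max(0.0, d_inflation) + 0.5 * max(0.0, d_output_gap)

print(round(asymmetric_change(d_inflation=0.3, d_output_gap=0.5), 2))    # 0.7: tighten per the rule
print(round(asymmetric_change(d_inflation=-0.3, d_output_gap=-0.5), 2))  # 0.0: no immediate easing
```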
Other members offered other approaches to dealing with uncertainty about the output
gap. For example, Governor Gramlich suggested a “speed limit rule.” He argued that the FOMC
“should target growth in aggregate demand at about 3 percent, or perhaps a bit less, and stay with
that policy for as long as inflation does not accelerate” (FOMC, March 1999, 44-45). At a later
meeting, Gramlich offered two additional approaches. First, the Committee could drop the output
gap term from the Taylor Rule and implement an inflation-targeting rule.7 And second, the
FOMC could adopt a “nominal GDP standard” (FOMC, May 1999, 45). Meyer viewed a
temporary downweighting of the output gap as sensible but rejected ignoring output altogether.
“This is a difference between uncertainty and total ignorance” (FOMC, June 1999, 93-94).
President Broaddus suggested finding another variable to substitute for the output gap that would
serve as a forward-looking indicator of inflation expectations such as survey information or long-
term interest rates (FOMC, June 1999, 99-100).
7 Gramlich actually discussed his approaches in terms of the associated unemployment gap.
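Of the alternatives Gramlich and Broaddus raised, the one easiest to state in the rule's own notation is the inflation-targeting rule obtained by dropping the output-gap term. A minimal sketch of that variant (the inflation reading is hypothetical, and this is not a specification the Committee adopted):

```python
def inflation_only_rate(inflation, target=2.0, r_star=2.0):
    # Taylor's rule with the output-gap term dropped, as Gramlich suggested.
    return inflation + 0.5 * (inflation - target) + r_star

print(round(inflation_only_rate(2.4), 2))  # prints 4.6: the rule responds to inflation alone
```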
8.2.8 Uncertainty about the inflation target
As inflation moderated, FOMC members, in addition to questioning the role of the output
gap, began to question Taylor’s assumed inflation objective of 2 percent as measured by the
implicit GDP deflator. Governor Gramlich complained that “we must have point estimates of our
targets for both inflation and unemployment. At the very best I think we have bands; we do not
have point estimates” (FOMC, December 1998, 45). Governor Meyer suggested it might be
more reasonable for the FOMC to tell the staff what its inflation objective is as opposed to
simply accepting Taylor’s assumption (FOMC, June 2000, 90). He later expressed frustration
that “we start off from the inflation target that John Taylor set but do so without any
communication from the Committee to the staff about the inflation objectives Committee
members might have” (FOMC, January 2001, 187-88).
8.3 The Taylor Rule in policy since 2001
While transcripts of FOMC meetings since 2001 have not yet been made public, it is clear that the Taylor Rule and its various offshoots have continued to inform Committee discussions. One question, which will likely be debated for many years to come, is when it is appropriate to deviate from rule-like behavior. For example, in the aftermath of the 1987
stock market collapse and the 1998 Russian debt default, policymakers eased policy relative to
the Taylor Rule prescription to limit the impact of financial market turbulence on the real
economy. These two relatively brief deviations from rule-like behavior have been viewed largely
as successful examples of discretionary policy, although concern has emerged about the
associated moral hazard.
More recently, policy deviated from the classical Taylor Rule from 2003 to 2006, when
the funds rate was kept below the Taylor Rule prescription for a prolonged period in an effort to
offset incipient deflationary pressures. Taylor (2007a) criticized this use of discretion as
contributing to the surge in housing demand and house-price inflation. According to
counterfactual simulations, Taylor concluded that, if policy had adhered more closely to the
Taylor Rule, much of the housing boom would have been avoided. Moreover, the reversal of the
boom, with its resulting financial market turmoil, would not have been as sharp.
Looking ahead, the issue of discretionary deviations from rule-like behavior will likely
continue to be debated by economists and policymakers. But few would argue against the merits of systematic policy, at least during normal times. In addition to the Taylor Principle, perhaps Taylor's biggest contribution is that monetary policy is now viewed, through the lens of the Taylor Rule, as a systematic response to incoming information about economic activity and inflation rather than as a period-by-period optimization problem under pure discretion.
9. Concluding remarks
This paper has described an important component of the transformation that swept
through the monetary policy landscape in remarkably few years following the abandonment of
monetary targeting. The Taylor Rule became an operational framework for central banks just as
time-consistency (commitment-credibility), transparency and independence replaced a culture of
discretion, “mystique” and “democracy” (i.e. politically-driven or influenced monetary policy).
During this period, the Fed embraced (relatively) clear, information-laden messages as an
alternative to allowing financial markets to infer what policy changes had taken place. The
Taylor Rule can be seen as part of that information process. The dynamics of macroeconomic
policy formation are as important as conventional macroeconomic dynamics: this paper has
attempted to illuminate aspects of that dynamic process.
References
Asso, P.F., Kahn, G., and Leeson, R. 2007. Monetary Policy Rules from Adam Smith to John Taylor, mimeo.
Bernanke, B. and Blinder, A. 1992. The Federal Funds Rate and the Channels of Monetary Transmission. American Economic Review, September, pp. 901-922.
Bernanke, B. and Mishkin, F. 1992. Central Bank Behavior and the Strategy of Monetary Policy: Observations from Six Industrialized Countries. NBER Macroeconomics Annual. Cambridge, Mass.: MIT Press.
Blundell-Wignall, A., ed. 1992. Inflation, Disinflation and Monetary Policy. Proceedings of a Conference, Sydney: Reserve Bank of Australia.
Brunner, K. and Meltzer, A., eds. 1976. The Phillips Curve and Labour Markets, Vol. 1 of Carnegie-Rochester Conference Series on Public Policy. Amsterdam: North Holland Publishing Co.
Bryant, R.C., Hooper, P., and Mann, C.L., eds. 1993. Evaluating Policy Regimes: New Research in Empirical Macroeconomics. Washington, D.C.: Brookings Institution Press.
Bryant, R.C., Currie, R., Frenkel, J., Masson, P., and Portes, R., eds. 1989. Macroeconomic Policies in an Interdependent World. Washington, D.C.: International Monetary Fund.
Cairncross, A. 1978. Keynes and the Planned Economy. In Thirlwall, A.P., ed., Keynes and Laissez-faire. London: Macmillan.
Court, R. 2000. The Lucas Critique: Did Phillips Make a Comparable Contribution? In Leeson, ed.
DiClemente, R.V. and Burnham. 1995. Policy Rules Shed New Light on Fed Stance. Economic and Market Analysis: Monetary Policy Update, Salomon Brothers, June 26.
Dorrance, G. 2000. Early Reactions to Mark I and II. In Leeson, ed.
Economic Report of the President. 1990. Washington, D.C.: U.S. Government Printing Office.
Federal Open Market Committee. 1995-2001. Transcripts of FOMC meetings, various issues, www.federalreserve.gov/fomc/transcripts.
Federal Reserve Bank of Boston. 1978. After the Phillips Curve: Persistence of High Inflation and High Unemployment. Boston: Federal Reserve Bank of Boston.
Fischer, S. 1977. Long-Term Contracts, Rational Expectations and the Optimal Money Supply Rule. Journal of Political Economy, vol. 85, no. 1, February, pp. 191-206.
Foust, Dean. 1995. How low should rates be? Business Week, October 9, pp. 68-72.
Friedman, M. 1968. The Role of Monetary Policy. American Economic Review, vol. 58, May, pp. 1-17.
___ and Schwartz, A. 1963. A Monetary History of the United States, 1867-1960. Princeton, N.J.: Princeton University Press.
Fuhrer, J. and Moore, G. 1995. Inflation Persistence. Quarterly Journal of Economics, vol. 110, no. 1, February, pp. 127-159.
Goodhart, C.A.E. 1992. The Objectives for, and Conduct of, Monetary Policy in the 1990s. In Blundell-Wignall, A., ed.
Greenspan, A. 1993. Testimony before the Committee on Banking, Finance, and Urban Affairs, U.S. House of Representatives, July 20.
___. 1997. Speech at CEPR, Stanford University, September 5.
Greider, William. 1987. Secrets of the Temple: How the Federal Reserve Runs the Country. New York: Simon and Schuster.
Hansen, A.H. 1931. Discussion. In Wright, Q., ed.
Hansen, A. and Kindleberger, C.P. 1942. The Economic Tasks for the Post-war World. Foreign Affairs, pp. 466-476.
Johnson, H.G. 1971. The Keynesian Revolution and the Monetarist Counter-Revolution. American Economic Review, vol. 61, May, pp. 1-14.
Keynes, J.M. 1931. An Economic Analysis of Unemployment. In Wright, Q., ed.
___. 1936. The General Theory of Employment, Interest and Money. London: Macmillan.
King, M. 2000. Speech to the joint luncheon of the American Economic Association and the American Finance Association, Boston Marriott Hotel, January 7, 2000.
Laidler, D. 1993. Price Stability and the Monetary Order. In Shigehara, K., ed.
Leeson, R., ed. 2000. A.W.H. Phillips: Collected Writings in Contemporary Perspective. Cambridge: Cambridge University Press.
Lipsky, J. 1993. Keeping Inflation Low in the 1990s. Economic and Market Analysis: Prospects for Financial Markets, Salomon Brothers, December.
Lucas, R.E. 1976. Econometric Policy Evaluation: A Critique. In Brunner and Meltzer, eds., pp. 19-46.
___ and Sargent, T. 1978. After Keynesian Macroeconomics, and Response to Friedman. In Federal Reserve Bank of Boston.
McCallum, B. 1993. Discretion and Policy Rules in Practice: Two Critical Points. A Comment. Carnegie-Rochester Conference Series on Public Policy, vol. 39, December, pp. 215-220.
___. 1999. Issues in the Design of Monetary Policy Rules. In Taylor, J., and Woodford, M., eds.
Meulendyke, A. 1998. U.S. Monetary Policy and Financial Markets. New York: Federal Reserve Bank of New York.
Orphanides, A. 2003. Historical Monetary Policy Analysis and the Taylor Rule. Journal of Monetary Economics, vol. 50, no. 5, July, pp. 983-1022.
Orphanides, A. and Wilcox, D. 1996. The Opportunistic Approach to Disinflation. Finance and Economics Discussion Series, 96-24, Board of Governors of the Federal Reserve System, May.
Orphanides, A., Small, D., Wieland, V., and Wilcox, D. 1997. A Quantitative Exploration of the Opportunistic Approach to Disinflation. Finance and Economics Discussion Series, 97-36, Board of Governors of the Federal Reserve System, June.
Phelps, E. and Taylor, J. 1977. Stabilizing Powers of Monetary Policy under Rational Expectations. Journal of Political Economy, vol. 85, no. 1, February, pp. 163-190.
Phillips, P. 2000. The Bill Phillips Legacy of Continuous Time Modeling and Econometric Model Design. In Leeson, ed.
Prowse, M. 1995. Decision Time for Alan Greenspan. Financial Times, July 3.
Sargent, T. and Wallace, N. 1975. “Rational” Expectations, the Optimal Monetary Instrument and the Optimal Money Supply Rule. Journal of Political Economy, vol. 83, no. 2, April, pp. 241-254.
Selden, R.T. 1961. The Postwar Rise in the Velocity of Money: A Sectoral Analysis. The Journal of Finance, vol. 16, no. 4, December, pp. 483-545.
Shigehara, K., ed. 1993. Price Stabilization in the 1990s. London: Macmillan.
Simons, H.C. 1933. Mercantilism as Liberalism. A Review Article on Charles A. Beard (ed.), America Faces the Future. Journal of Political Economy, vol. 41, no. 4, August, pp. 548-551.
___. 1934. A Positive Program for Laissez Faire: Some Proposals for a Liberal Economic Policy, Public Policy Pamphlet, no. 15. Chicago: University of Chicago Press.
___. 1936a. Rules versus Authorities in Monetary Policy. Journal of Political Economy, vol. 44, no. 1, February, pp. 1-30.
___. 1936b. Review of John Maynard Keynes, The General Theory of Employment, Interest and Money. Christian Century, July 22, pp. 1016-1017.
___. 1939. Review of Alvin Harvey Hansen, Full Recovery or Stagnation? Journal of Political Economy, vol. 47, no. 2, April, pp. 272-276.
___. 1943. Postwar Economic Policy: Some Traditional Liberal Proposals. American Economic Review, vol. 33, no. 1, March, pp. 431-445.
___. 1944. The US Holds the Cards. Fortune, September, pp. 156-159 and 196-200.
___. 1945. The Beveridge Program: An Unsympathetic Interpretation. Journal of Political Economy, vol. 53, no. 3, September, pp. 212-233.
___. 1948. Economic Policy for a Free Society. Chicago: University of Chicago Press.
Solow, R. 1978. Summary and Evaluation. In Federal Reserve Bank of Boston.
Taylor, J.B. 1968. Fiscal and Monetary Stabilization Policies in a Model of Endogenous Cyclical Growth. Princeton Econometric Research Program Series, October.
___. 1975. Monetary Policy During a Transition to Rational Expectations. Journal of Political Economy, vol. 83, no. 5, October, pp. 1009-1021.
___. 1977. The Determinants of Economic Policy with Rational Expectations. Proceedings of IEEE Conference on Decision and Control, December.
___. 1979. Staggered Wage Setting in a Macro Model. American Economic Review, vol. 69, no. 2, May, pp. 108-113.
___. 1980. Aggregate Dynamics and Staggered Contracts. Journal of Political Economy, vol. 88, no. 1, February, pp. 1-23.
___. 1981a. Review of Macroeconomic Theory by Thomas J. Sargent. Journal of Monetary Economics, September, pp. 139-142.
___. 1981b. Stabilization, Accommodation, and Monetary Rules. American Economic Review, Papers and Proceedings, vol. 71, no. 2, May, pp. 145-149.
___. 1982. The Role of Expectations in the Choice of Monetary Policy. Monetary Policy Issues for the 1980s: A Symposium. Federal Reserve Bank of Kansas City, December, pp. 47-76.
___. 1985. What Would Nominal GNP Targeting Do to the Business Cycle? Carnegie-Rochester Conference Series on Public Policy, vol. 22, Spring, pp. 61-84.
___. 1989a. The Evolution of Ideas in Macroeconomics. Economic Record, vol. 65, no. 189, June, pp. 185-189.
___. 1989b. Policy Analysis with a Multicountry Model. In Bryant, Currie, Frenkel, Masson, and Portes, eds., pp. 122-141.
___. 1989c. Monetary Policy and the Stability of Macroeconomic Relationships. Journal of Applied Econometrics, vol. 4 (Supplement), December, pp. S161-S178.
___. 1992a. Comment on Bernanke, B. and Mishkin, F., ‘Central Bank Behavior and the Strategy of Monetary Policy: Observations from Six Industrialized Countries’. NBER Macroeconomics Annual, pp. 234-37. Cambridge, Mass.: MIT Press.
___. 1992b. Comments on ‘Inflation Persistence’ by J. Fuhrer and G. Moore. Federal Reserve Bank of St. Louis, June.
___. 1992c. The Great Inflation, the Great Disinflation, and Policies for Future Price Stability. In Blundell-Wignall, A., ed.
___. 1993a. Discretion Versus Policy Rules in Practice. Carnegie-Rochester Conference Series on Public Policy, vol. 39, December, pp. 195-214.
___. 1993b. Comments on ‘Evaluating Policy Regimes: New Research in Empirical Macroeconomics’. In Bryant, Hooper, and Mann, eds.
___. 1993c. The Use of the New Macroeconometrics for Policy Formulation. American Economic Review, vol. 83, no. 2, May, pp. 300-305.
___. 1993d. Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation. New York: W. W. Norton.
___. 1998. Applying Academic Research on Monetary Policy Rules: An Exercise in Translational Economics, The Harry G. Johnson Lecture. The Manchester School Supplement, vol. 66, June, pp. 1-16.
___. 1999a. An Historical Analysis of Monetary Policy Rules. In Taylor, J.B., ed.
___, ed. 1999b. Monetary Policy Rules. Chicago: University of Chicago Press.
___ and Woodford, M., eds. 1999. Handbook of Macroeconomics, vol. 1c. Amsterdam: North Holland.
Taylor, J.B. 2007a. Housing and Monetary Policy. Paper presented at the Federal Reserve Bank of Kansas City’s Symposium on Housing, Housing Finance, and Monetary Policy, Jackson Hole, Wyoming, August.
___. 2007b. Interview with Robert Leeson, September 12, mimeo.
___. 2007c. Thirty-five Years of Model Building for Monetary Policy Evaluation: Breakthroughs, Dark Ages and a Renaissance. Journal of Money, Credit and Banking, vol. 39, no. 1, pp. 193-201.
Volcker, P. 1990. The Triumph of Central Banking. The 1990 Per Jacobsson Lecture, Per Jacobsson Foundation, Washington, D.C., September 23.
Wright, Q., ed. 1931. Unemployment as a World Problem. Chicago: University of Chicago Press.