Journal of Accounting Research, Vol. 40 No. 4, September 2002

Printed in U.S.A.

Classifications Manipulation and Nash Accounting Standards

RONALD A. DYE∗

Received 19 May 2001; accepted 10 April 2002

ABSTRACT

This paper studies a model of "classifications manipulation" in which accounting reports consist of one of two binary classifications, preparers of accounting reports prefer one classification over the other, an accounting standard designates the official requirements that have to be met to receive the preferred classification, and preparers may engage in "classifications manipulation" in order to receive their preferred accounting classification. The possibility of classifications manipulation creates a distinction between the official classification described in the statement of the accounting standard and the de facto classification, determined by the "shadow standard" actually adopted by preparers. The paper studies the selection and evolution of accounting standards in this context. Among other things, the paper evaluates "efficient" accounting standards, it determines when there will be "standards creep," it introduces and analyzes the notion of a Nash accounting standard, and it compares the standards set by sophisticated standard-setters to those set with less knowledge of firms' financial reporting environments.

1. Introduction

Financial reporting is, at its roots, a process of classification: firms are "going concerns" or not; transactions are "recognized" in a firm's financial

∗Kellogg School of Management, Northwestern University. Paper formerly entitled "Transactions Manipulation and Nash Accounting Standards." The author thanks seminar participants at Duke University, Emory University, Northwestern University, Ohio State University, Pennsylvania State University, Purdue University, Stanford University, the University of Illinois at Chicago, and the University of Southern California, as well as two anonymous referees, for helpful comments on previous drafts of this manuscript, and the Accounting Research Center at Northwestern University for financial support.

1125

Copyright ©, University of Chicago on behalf of the Institute of Professional Accounting, 2002


statements or not; leases are capital leases or operating leases; expenditures are assets or expenses; financial claims are liabilities or equity, etc. Many analytical studies acknowledge this central role of classification in financial reporting explicitly by representing the financial reporting process as the production of a partition on some underlying state space (see, e.g., Christensen and Demski [2003], Demski [1980], Dye [1985], Ijiri [1975]).

When this classification process works well, financial reporting helps investors predict firms' future cash flows, which in turn affects investors' share purchase decisions. To the extent the capital investors supply to firms through their share purchases is used to finance the firms' subsequent investments, it follows that the classification process underlying financial reporting has real resource allocational effects. This paper contains an examination of the relationship between this classification process and the economy's productive efficiency in a model that captures several features of the financial reporting environment.

First, the representation of accounting standards adopted in this paper adheres to the conventional binary classification often found in GAAP and GAAS, and the binary classification suppresses many probabilistic nuances. Each of the examples of accounting classification presented in the opening paragraph exhibits both this binary structure and the suppression of probabilistic details. For example, whether a firm will remain a going concern over a particular time interval is uncertain until the time interval passes, yet a firm's auditors either issue a going concern qualification or they do not. Whether an expenditure will generate future benefits to a firm is often uncertain in the period the expenditure is made, yet the firm must report the expenditure either as an asset or as an expense. And so on.

Second, one of the binary classifications is unambiguously perceived more favorably by investors than the other, and uncertainty regarding which of the two classifications is deemed appropriate is resolved in the financial reporting process by requiring that some (typically context-dependent) threshold be exceeded in order to receive the more favorable classification. The examples of GAAP and GAAS in the opening paragraph illustrate this, too. Obviously, avoiding a going concern qualification is preferred to the alternative, and the likelihood of financial distress influences an auditor's decision to issue the qualification. Most firms prefer recognizing revenue today to not recognizing it (or recognizing it with a delay), and whether revenue gets recognized following a sale depends on whether the earnings process is considered to be sufficiently complete and whether the proceeds from sale are measurable with a reasonable degree of certainty, etc.

Third, the analysis considers the incentives of firms to engage in "classifications manipulation" in their attempt to secure the preferred accounting classification. The financial reporting process is replete with examples of such behavior: firms often structure lease agreements that are essentially purchase transactions so as to skirt GAAP criteria for capital leases; firms try to include one-time gains in income from continuing operations; firms try to convince their auditors that whatever contingent liabilities they have


outstanding are insufficiently probable so as not to warrant inclusion among their estimated liabilities, etc.

In the presence of these features of the financial reporting process, the analysis emphasizes the distinction between the "official" (or de jure) partitioning of firms (or their activities) designated by GAAP and the "effective" (or de facto) partitioning of firms (or their activities) induced by firms' classifications manipulation. Classifications manipulation results in more firms receiving the favorable accounting classification than a strict application of GAAP would warrant. And, sophisticated investors adjust their interpretation of a firm's financial statements accordingly. They recognize that the boundary delimiting the more and less favorable classifications encoded in a GAAP (or GAAS) standard will, as a consequence of classifications manipulation, be shifted to a lower threshold, a shadow standard, demarking the boundary between those firms who are or are not willing to pay the cost of classifications manipulation to secure the more favorable accounting classification.

While official standards can be viewed as either exogenous or endogenous, depending on whether one perceives the process defining GAAS and GAAP as exogenous or endogenous, a shadow standard is always endogenous: it is the outcome of an equilibrium process involving the specification of mutually consistent behavior for both firms and investors, and it depends on each of: the prevailing official accounting standards, the differential amounts investors attach to firms that receive the more or less preferred accounting classifications, the costs of classifications manipulation, and investors' understanding (or lack thereof) of the firms' production technologies.

The analysis demonstrates that, as investors learn more about firms' production technologies over time, the relationship between the official and shadow standards changes. As a consequence of this learning, accounting standard-setters face a fundamental trade-off: they can choose to hold the official standard constant over time, which causes the shadow standard to change over time, or they can hold the shadow standard constant by repeatedly changing the official standard, but they cannot hold constant both the official and the shadow standards. This trade-off appears to be new to the accounting literature. Moreover, the paper establishes that, if standard-setters choose to hold the shadow standard constant, then on average they will have to increase the official standard over time. To the extent that increases in the official standard represented in this paper proxy for an expansion in the set of financial reporting standards in GAAP, the paper provides an explanation for the perpetual increase in GAAP standards.

When the official standard is considered endogenous, the question arises: what are GAAP/GAAS standards designed to maximize? We posit that standards are chosen to maximize the expected value of the firms subject to the standards net of all relevant costs, including the costs of classifications manipulation. What standards emerge as optimal are shown to depend on the extent to which the distribution of available projects changes as accounting


standards change, on how well standard-setters anticipate the financial market's reaction to changes in standards, and, relatedly, on how well standard-setters know the parameters of the economy.

Since, in practice, standard-setters may not have good knowledge of some dimensions of the economy, it is important to understand how robust the firms' values are to standards that depart from those set by fully informed standard-setters. The paper evaluates the effects of such errors in two ways. First, it contains explicit calculations of the loss in value from incorrectly set standards. Second, it compares the standards chosen by a sophisticated standard-setter who is fully aware of how preparers react to a change in standards to the standards chosen by a naive standard-setter who selects standards assuming that the actions taken by preparers this year will be the same as their actions last year, and hence who assumes that there will be no reaction to a change in standards. While the naive standard-setter's beliefs will often be wrong (that is, preparers' actions this year often will be different from the actions they chose in previous years), over time, we show that in stable environments, the standards chosen by naive standard-setters will converge to a standard, dubbed a "Nash" standard in the following, in which preparers repeat their actions over time. Upon comparing the Nash standard to the optimal standard chosen by a sophisticated standard-setter (dubbed a "Stackelberg" standard), we find that the Nash standard is below the Stackelberg standard. That is, we show that if standard-setters do not anticipate the reaction of preparers to a change in standards, then insufficiently stringent standards emerge. Additional comparisons between Nash and Stackelberg standards are also made in the paper.

The contemporary accounting literature related to this paper is sparse. Demski [1973], [1974] made the accounting profession aware of the difficulties in choosing among accounting standards in general environments where the information supplied by accounting reports had wealth redistribution effects and where the information supplied under alternative accounting standards could not be Blackwell-ranked. Demski's work suggested the development of both narrower specifications of accounting standards and the adoption of narrower efficiency criteria than Pareto-optimality; the present work represents one attempt at developing a theory of standards responsive to these concerns. Arya, Fellingham, Glover, and Schroeder [1998] have recently proposed evaluating depreciation policies designed to achieve efficient investment selection, just as the present work evaluates financial reporting standards from an efficiency perspective.

The paper proceeds as follows: a description of the base model is presented in section 2. Following that, the notion of a financial reporting equilibrium is presented in section 3. The characteristics and evolution of a financial reporting equilibrium are given in section 4. Value-maximizing standards that assume the distribution of firms subject to the standards is fixed are described in section 5. Section 6 presents the effects of introducing errors in the formulation of accounting standards. In section 7, characteristics of value-maximizing standards are considered again, but in this section,


the distributions of firms subject to the standards can change as the standards themselves change. The notions of Nash and Stackelberg standards are introduced and analyzed in that section. Section 8 summarizes the results. The appendix contains all relevant proofs.

2. Model Description

In the base model, entrepreneur i has a "stand-alone" project or production technology which, for life cycle or cash flow reasons, he wishes to sell. If the entrepreneur's production technology is viable, then by investing $I in the technology after the sale, the purchasers of this technology (also known as "investors" in what follows) receive β̃i I^α/α + ε̃i in cash flows in the period following the sale. Here, β̃i denotes a random "productivity parameter" with prior mean β̄i, and ε̃i is a mean-zero error term independent of β̃i. When there are multiple entrepreneurs (indexed by i), we assume ε̃i is independently distributed across i. The scalar α ∈ (0, 1) indicates the rate at which decreasing returns to investment occur. If the entrepreneur's production technology is nonviable, then by investing $I after the sale, investors receive no cash flows in the period following the sale.

Production technologies differ from each other in terms of the probability they are viable. The probability φi that the entrepreneur's technology is viable is the realization of some random variable φ̃i, with density f(φi) and sample space [φl, φu]. This probability is not observable to external investors, nor is it capable of being communicated directly by the entrepreneur. Consistent with the remarks made in the Introduction, we model reports produced in compliance with an accounting/auditing standard as providing information about the realized φi through a binary partition on φ̃i's sample space. Specifically, we define an accounting/auditing standard as a threshold probability φs that partitions the sample space into two sets W ≡ [φl, φs) and B ≡ [φs, φu] (W ≡ "worse"; B ≡ "better"). The accounting/auditing standard is "bright line" insofar as it designates a particular probability threshold that must be attained to achieve the better (B) classification.1

We assume that the entrepreneur's auditor, or some similar party, administers the bright-line standard: the auditor evaluates which of the classifications W or B is consistent with the appearance of the production technology, and then the auditor issues a report of its findings. Given the report (W or B), investors must make inferences about φ̃i. This inference process

1 Some prevailing accounting standards are explicitly bright line (e.g., accounting for capital and operating leases); others are nearly so (e.g., where the "more likely than not" criterion is employed in the valuation allowance for deferred tax assets, and in the accounting for investments, where the 20% and 50% thresholds are focal points, if not dispositive); and others are somewhat more vague (e.g., the criteria distinguishing between contingent and estimated liabilities). Why there are such variations in the precision of the statement of accounting/auditing thresholds is not addressed in this paper.

I wish to thank Lenny Soffer for suggesting the deferred valuation allowance example mentioned above.


is complicated because the entrepreneur has the capability to engage in classifications manipulation, that is, the ability to alter the appearance of the production technology so as to achieve the better classification. We posit that an entrepreneur with a technology φi < φs can alter, at cost c × (φs − φi), its appearance to qualify for the better classification.2 With this specification, the farther the realized φi is below the official standard φs, the more costly it is for the entrepreneur to obtain the better classification.

An entrepreneur bent on classifications manipulation will do so by just enough to qualify for the preferred accounting treatment, since the investing public only observes the reported classification. An implication of this is that if we, researchers, could observe the fraction of production technologies/projects that just met the standard φs for the preferred classification, then we would see an atom (i.e., a positive fraction) of projects there, even if the underlying distribution of projects φ̃i were atomless (i.e., no single point has a positive probability of occurrence). Such "bunching" would be direct evidence of classifications manipulation.3
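The bunching prediction is easy to see numerically. The following is an illustrative simulation, not taken from the paper: all parameter values (the official standard, the price gap mB − mW, and the manipulation cost c) are hypothetical, and the price gap is held fixed rather than determined in equilibrium. Entrepreneurs with φi just below the standard report exactly φs, producing an atom there even though the prior distribution is atomless.

```python
import random

# Hypothetical parameters, chosen only for illustration.
random.seed(0)
phi_s = 0.6          # official standard
price_gap = 0.08     # m_B - m_W, held fixed here
c = 0.4              # marginal cost of classifications manipulation
phi_bar = phi_s - price_gap / c   # shadow standard, as in eq. (1)

reported = []
for _ in range(100_000):
    phi = random.random()            # atomless prior: Uniform[0, 1]
    if phi_bar <= phi < phi_s:
        reported.append(phi_s)       # manipulate just enough to qualify
    else:
        reported.append(phi)         # no manipulation

atom = sum(1 for r in reported if r == phi_s) / len(reported)
print(f"shadow standard: {phi_bar:.2f}")
print(f"mass exactly at phi_s: {atom:.3f}")   # roughly phi_s - phi_bar = 0.20
```

The mass of reports sitting exactly at φs approximates the measure of the interval [φ̄, φs), which is the "positive fraction" of projects the paragraph describes.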

Whether an entrepreneur will choose to engage in classifications manipulation depends upon whether the benefits from it exceed the costs. If the market value of a project classified as W (resp., B) is mW (resp., mB), then the calculus of classifications manipulation leads to the following optimizing behavior by preparers:

classify φi as B if mB − c × max{φs − φi, 0} ≥ mW; otherwise, classify φi as W.

Let φ be the minimum of the probabilities φi for which the entrepreneur finds it advantageous to engage in classifications manipulation,4 i.e.,

φ = φs − (mB − mW)/c.    (1)

This φ = φ(φs) is what was referred to in the Introduction as the shadow standard, since it determines the effective partitioning of projects induced

2 Two points should be made here: first, obviously, no entrepreneur with a production technology that is viable with probability φ ≥ φs must incur a cost to secure the better classification. Second, if an entrepreneur with production technology φ < φs does engage in enough classifications manipulation so that his firm qualifies for the better classification, I assume that the manipulation is "form rather than substance," i.e., the actual probability the production technology is viable remains at its original level φ.

3 This is consistent with the “bunching” documented in Degeorge, Patel, and Zeckhauser[1999] and Burgstahler and Dichev [1997].

4 This equation is governing only when the threshold φ is inside the support of φ̃. If φ̃ has support [φl, φu], then a complete specification of φ is: if φs − (mB − mW)/c ∈ [φl, φu], then φ = φs − (mB − mW)/c; if φs − (mB − mW)/c > φu, then φ = φu; if φs − (mB − mW)/c < φl, then φ = φl.

In practice, the boundary cases φ ∈ {φl, φu} will be unimportant, since any official standard that induced φ to assume one of these boundaries is tantamount to a standard in which all projects are classified in the same way, in which case the standard (and associated classification of projects) serves no allocative function. In such cases, a policy of having no standard will be weakly superior to having any standard.


by the interaction between the official standard φs and preparers' optimizing classifications manipulation.5 Thus, a given official standard φs induces two partitions on projects, the "official" one {[φl, φs), [φs, φu]}, and the "effective" one, {[φl, φ), [φ, φu]}.6 In the following, considerable attention is devoted to the shadow standard and its associated effective partition, because the economics of an accounting standard are not embodied in the official standard or partition, but rather in the shadow standard and effective partition induced by the official standard. This occurs because the allocation of resources, and the market prices of projects, are determined by the shadow standard, not the official standard.

3. A Financial Reporting Equilibrium

So far, we have discussed how an official standard gets transformed into a shadow standard through an entrepreneur's optimizing classifications-manipulating behavior, taking as given the market values mB and mW associated with the classifications B and W. But these market values are endogenous and depend upon how much classifications manipulation investors believe the entrepreneur has engaged in. To define mB and mW in an internally consistent fashion based on exogenous variables, we must describe and characterize a financial reporting equilibrium relative to a given official standard. We do this presently.

We start by expanding on the time line governing the sequence of events. First, an entrepreneur learns the probability φi that his project is viable. Anticipating the market values mB and mW associated with the classifications B and W, the entrepreneur then decides whether to engage in classifications manipulation. The financial report (B or W) for the project is then released to investors. Competition among the (presumed risk-neutral) investors results in the project being sold for the expected value of the future cash flows anticipated to be generated by the project's stochastic production technology, net of the cost of anticipated future investment they must make in the project, given the reported classification. After the sale, investors proceed with their previously anticipated investments. Finally, the project's realized cash flows are generated and consumed by the investors. We now discuss the endogenous components of this event sequence in detail.

5 I wish to thank my colleague Sri Sridhar for suggesting the name "shadow standard" for this threshold.

6 While, as the Introduction noted, accounting classifications in practice are often binary, there is nothing in the model that would prevent consideration of partitions with more than two elements. I would like to thank an anonymous referee for this observation.

An expanded analysis could endogenize the optimal number of elements in the partition. In such an expanded analysis, it would typically be undesirable to have a large number of elements in the partition, even if the costs of writing the extra classifications were zero, in order to economize on the costs of classifications manipulation. (These costs typically increase as the number of elements in the partition increases, since the extra elements create more opportunities for classifications manipulation.)


We first describe how a project's expected cash flows are linked to its classification and the level of investment made in the project. Suppose investors conjecture that the official standard φs will generate the shadow standard φ. If the released report is B (resp., W), then investors calculate the probability that the project's stochastic production technology is viable to be Pr(viable | B, φ) = E[φ̃i | φ̃i ≥ φ] (resp., Pr(viable | W, φ) = E[φ̃i | φ̃i < φ]). Thus, from outsiders' perspective, the net expected cash flows from an investment of $I on a project that receives the classification R ∈ {B, W} is:

Pr(viable | R, φ) × β̄i I^α/α − I.    (2)

(This requires recalling that investment in a nonviable technology produces no return.)

Next, we discuss how the investment of $I in a project gets determined. Since the project is sold to investors, and it is these investors (not the entrepreneur) who make the investment decision, the investment level $I chosen must be based on information available to them. Investors know which of the classifications B or W is associated with the project they purchase, and so they can predicate their choice of $I at least on whether φ̃i < φ or φ̃i ≥ φ. The following analysis proceeds by assuming that this is all the information about φ̃i on which investors base their choice of $I. However, it is important to digress briefly to consider what qualitative differences would emerge in extensions of the analysis were investors to obtain additional post-sale information about φ̃i.

Digression on when accounting standards have economic consequences

For any such extension, for arbitrary φ, either investors' ultimate knowledge of φ̃i varies with what they knew about φ̃i at the time of sale, that is, on whether φ̃i < φ or φ̃i ≥ φ (call this case I), or else their ultimate knowledge of φ̃i is independent of what they knew about φ̃i at the time of sale (call this case II). An example of case I is this: after the sale of a project that received the classification B (resp., W), investors additionally learn whether φ̃i's realization belongs to the upper or lower half of the interval [φ, φu] (resp., [φl, φ)), i.e., they learn which element of the partition {[φl, (φl + φ)/2), [(φl + φ)/2, φ), [φ, (φ + φu)/2), [(φ + φu)/2, φu]} the realized φ̃i belongs to.7 Generically, the only example of case II of which we are aware involves investors learning the exact realization of φ̃i.8

7 Note that this refined partition varies with φ and hence is consistent with case I.

8 Clearly, if following the sale, investors learn the exact realization of φ̃, then their knowledge of φ̃ at the time of sale (namely, whether φ̃ < φ or φ̃ ≥ φ) does not affect their ultimate knowledge of φ̃. Hence, learning the exact realization of φ̃ is consistent with case II.

We now argue that, generically, the converse is also true: that is, unless investors ultimately learn the exact value of φ̃, then their knowledge of φ̃ will vary with what they know about φ̃ at the time of sale.

The argument runs as follows. For a fixed accounting standard φs and associated shadow standard φ, as a consequence of the accounting report R ∈ {B, W}, investors know which


In extensions of the present analysis, the economic consequences of an accounting standard vary significantly depending on which of case I or case II is applicable. If case I is applicable, then, as in the analysis that follows, selecting an accounting standard φs has both distributional and allocational consequences: the choice of φs affects φ, which in turn affects investors' ultimate knowledge of φ̃i, which in turn affects their choice of $I. So, both the selling price of the project (a distributional effect) and the amount of investment in the project (an allocational effect) result from the specification of φs in case I. In contrast, when case II is applicable, there are no allocational effects associated with φs, although there are distributional effects. In that case, the specification of φs affects φ and hence affects the project's selling price, but it has no allocational effects, since investors' ultimate knowledge of φ̃i is, by definition of case II, independent of φ (and hence independent of φs), and so the investment $I does not vary with φs. Thus, in case II, there are no allocational consequences of choosing among accounting standards.

While some of our main results9 will hold for extensions of our analysis involving both cases I and II, the results below regarding efficient accounting standards are suggestive of the kinds of results that would obtain were investors to acquire additional post-sale information about φ̃i only when, as in case I, the information gleaned from the accounting reports remains incrementally informative relative to whatever additional post-sale information investors acquire.10

3.1 THE DEFINITION OF EQUILIBRIUM

Returning to the model, the efficient allocation of capital, based on investors' knowledge of the returns on investment, requires maximizing the net expected cash flows displayed in (2). The net expected cash flows from the project based on this anticipated investment then determine the market prices mW and mB. Finally, these market prices are linked to the equilibrium amount of classifications manipulation through (1). When all of these

element of the two-element partition Φ = {[φl, φ), [φ, φu]} the realized value of φ̃ falls in. Suppose, as a consequence of information the investors discover post-sale, investors also learn that φ̃'s realization falls in some element of the partition Ψ = {[φ0, φ1), [φ1, φ2), [φ2, φ3), . . . , [φn−1, φn), . . .}.

Clearly, the information contained in the original accounting report is redundant in the presence of this post-sale information if and only if Φ ⊂ Ψ, i.e., if and only if Ψ is finer than partition Φ. Equivalently, the information contained in the original accounting report is redundant if and only if φ = φi for some i.

If φ ≠ φi for any i, we are done: the accounting classification is incrementally informative relative to Ψ. But φ = φ(φs) cannot equal φi for any i generically, since φ′(φs) > 0 (i.e., there can be only a finite number of φs for which φ(φs) = φi for any i). This proves the claim.

9 In particular, each of the following continues to be valid: the distinction between a shadow standard and the official standard, the importance of classifications manipulation, and the drift over time in the relationship between the official and shadow standards depicted in Theorem 2 below.

10 Case I is the practically relevant case when investors cannot acquire exact information about their production technology (and hence φ̃'s realization) following the technology's sale.


conditions hold concurrently, we obtain a financial reporting equilibrium. Formally:

DEFINITION 1. A financial reporting equilibrium relative to accounting standard φs consists of a pair of investment levels I*(B) and I*(W), market values mB and mW, and a shadow standard φ such that:

(i) An entrepreneur whose project is viable with probability φi engages in classifications manipulation if and only if φi ≥ φ;

(ii) Pr(viable | B, φ) = E[φ̃i | φ̃i ≥ φ] and Pr(viable | W, φ) = E[φ̃i | φ̃i < φ];

(iii) For R ∈ {B, W}, the investment I*(R) attains the maximum of: max_I Pr(viable | R, φ) β̄i I^α/α − I;

(iv) For R ∈ {B, W}, the market value mR is given by: mR = Pr(viable | R, φ) β̄i (I*(R))^α/α − I*(R);

(v) φ = φs − (mB − mW)/c.

In brief, a financial reporting equilibrium consists of a pair of investments and market values and a shadow standard determining how projects are classified, given the prevailing accounting standard, so that:

• investment is efficient given investors' knowledge;
• prices of projects are set correctly;
• preparers optimize when engaging in classifications manipulation.
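To make the definition concrete, conditions (ii)–(v) can be solved numerically as a fixed point. The sketch below is an illustration of mine, not part of the paper: it assumes α = .5 and φ̃i uniform on [l, u] (the parameterization adopted later in the paper), so that mR = (β̄ E[φ̃i | R])², and the parameter values are invented.

```python
# Illustrative fixed-point solver for Definition 1 (my sketch; assumes
# alpha = .5 and phi_i ~ Uniform[l, u], so E[phi|B] = (u+phi)/2 and
# E[phi|W] = (l+phi)/2, giving m_R = (beta * E[phi|R])**2).
def equilibrium(phi_s, c, beta, l, u, n_iter=200):
    phi = phi_s                                 # start at the official standard
    for _ in range(n_iter):
        m_B = (beta * (u + phi) / 2) ** 2       # condition (iv) for R = B
        m_W = (beta * (l + phi) / 2) ** 2       # condition (iv) for R = W
        phi = phi_s - (m_B - m_W) / c           # condition (v)
    return phi, m_B, m_W

# Invented parameter values:
phi, m_B, m_W = equilibrium(phi_s=0.5, c=25.0, beta=1.0, l=0.0, u=1.0)
assert phi < 0.5 and m_B > m_W                  # shadow standard below phi_s
```

The map φ ↦ φs − (mB − mW)/c is a contraction here (its slope is β̄²(u − l)/(2c) = 0.02), so the iteration converges quickly; for these numbers the fixed point is φ = 49/102 ≈ 0.4804.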

4. Characteristics and Evolution of a Financial Reporting Equilibrium

We start the formal analysis with a more explicit description of the relationships among the equilibrium investment levels, market values, and the shadow standard. We then follow this with an analysis of how the financial reporting equilibrium evolves over time.

A simple calculation shows that, when the accounting classification is B, the optimal level of investment, I∗(B), is given by:

I∗(B) = (β̄i E[φ̃i | φ̃i ≥ φ])^{1/(1−α)}, (3)

which implies that the project has market value11

mB = E[φ̃i | φ̃i ≥ φ] β̄i I∗(B)^α/α − I∗(B) = ((1 − α)/α)(β̄i E[φ̃i | φ̃i ≥ φ])^{1/(1−α)}. (4)

11 The computation of the market value mB runs as follows. The first-order condition for I∗(B) is

E[φ̃ | φ̃ ≥ φ] β̄ I∗(B)^{α−1} = 1.

So,

CLASSIFICATIONS MANIPULATION AND NASH STANDARDS 1135

Similar calculations for a project that receives the classification W yield:

I∗(W) = (β̄i E[φ̃i | φ̃i < φ])^{1/(1−α)} (5)

and

mW = ((1 − α)/α)(β̄i E[φ̃i | φ̃i < φ])^{1/(1−α)}. (6)

The equation describing the shadow standard (1) links these two market prices together with the cost of classifications manipulation:

φ = φs − ((1 − α)/(αc)) × β̄i^{1/(1−α)} × (E[φ̃i | φ̃i ≥ φ]^{1/(1−α)} − E[φ̃i | φ̃i < φ]^{1/(1−α)}). (7)

We shall show below that, even when this equation cannot be solved explicitly, much can be learned about the relationship between the official and shadow standards. However, to exhibit a financial reporting equilibrium explicitly, it is useful to parameterize the model in a way which admits an explicit solution to this equation. We parameterize the problem by assuming α = .5 and that φ̃i is uniform on [l, u], i.e., f(φ̃i) = 1/(u − l), φ̃i ∈ [l, u].12

For this parameterization, we conclude:

THEOREM 1 (The Relationship Between Official and Shadow Standards). Assume α = .5, φ̃i ∼ Uniform[l, u], and that the official standard φs is such that the shadow standard φ is interior (i.e., φ ∈ (l, u)). Then, a financial reporting equilibrium relative to accounting standard φs and distribution φ̃i exists, is unique, and is (partially) described by the shadow standard:

φ = (φs − β̄i²(u² − l²)/(4c)) / (1 + β̄i²(u − l)/(2c)) (8)

and the market prices:

mW = β̄i²((l + φ)/2)² and mB = β̄i²((u + φ)/2)². (9)
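As a quick consistency check (my own sketch, with invented parameter values), the closed form (8)–(9) should satisfy the defining relation φ = φs − (mB − mW)/c exactly:

```python
# Check that Theorem 1's closed form reproduces the shadow-standard
# equation (illustrative parameter values; alpha = .5, Uniform[l, u]).
l, u, beta, c, phi_s = 0.0, 1.0, 1.0, 25.0, 0.5

phi = (phi_s - beta**2 * (u**2 - l**2) / (4 * c)) / (1 + beta**2 * (u - l) / (2 * c))  # (8)
m_W = beta**2 * ((l + phi) / 2) ** 2    # (9)
m_B = beta**2 * ((u + phi) / 2) ** 2    # (9)

assert abs(phi - (phi_s - (m_B - m_W) / c)) < 1e-12   # defining relation holds
assert l < phi < phi_s                                # interior, and below phi_s
```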

The expression (8) relates the threshold φ to the standard φs, the cost of classifications manipulation, the parameters of the distribution of φ̃i, and the unknown productivity parameter β̃i. Notice that φ < φs unless the cost of classifications manipulation is infinite, so the set of projects that receive the

mB = E[φ̃ | φ̃ ≥ φ] β̄ I∗(B)^α/α − I∗(B)

= (E[φ̃ | φ̃ ≥ φ] β̄ I∗(B)^{α−1}/α − 1) × I∗(B)

= (1/α − 1) × I∗(B) = ((1 − α)/α)(β̄ E[φ̃ | φ̃ ≥ φ])^{1/(1−α)}.

The computations for mW are similar.

12 We write this in the following as φ̃ ∼ Uniform[l, u].


better classification B is always overstated relative to what a strict application of GAAP would warrant, as suggested above.

While comparative statics can be performed directly on the shadow standard φ, the comparative statics that are of most interest employ this shadow standard to calculate the probability that a project will receive a particular classification. We state these comparative statics in the next corollary, after rewriting the shadow standard in terms of the mean and standard deviation of φ̃i's distribution. With µ = (u + l)/2 and σ = (u − l)/(2√3), the shadow standard can be written as

φ = (φs − √3σβ̄i²µ/c) / (1 + √3σβ̄i²/c). (10)

COROLLARY 1.13 (The Comparative Statics of Shadow Standards). Maintain the assumptions of the preceding theorem. The probability that a project will receive the classification B (resp., W) is

(i) decreasing (resp., increasing) in the standard φs and the cost of classifications manipulation c, and

(ii) increasing (resp., decreasing) in β̄i and µ.

Most of these results are intuitive: as the standard φs goes up, or as the cost c of classifications manipulation increases, then the probability of getting the more favorable accounting treatment declines. Also, as the expected value of the productivity parameter β̄i increases, or as the prior probability µ that a project is viable increases, then the returns to the preferred classification also increase (since ∂(mB − mW)/∂β̄i > 0 and ∂(mB − mW)/∂µ > 0) and so preparers will engage in classifications manipulation more often (i.e., for a bigger set of realized φi's). No comparative static is reported involving changes in the standard deviation of φ̃i on the probability that a project is classified as B, since that effect is generally ambiguous.14

Next, we consider the preceding model and associated equilibrium as a snapshot of a multi-period model in which, in any given period, the (current generation) entrepreneur and (current generation) investors get to witness and learn from what happened in prior periods when other projects were transferred between previous generations of entrepreneurs and investors. In this setting, it is natural to inquire how the financial reporting equilibrium evolves over time. We study the stationary case, where the distribution of φ̃i is independently and identically distributed over time, and—with the possible exception of the productivity parameter β̃i—all other parameters of the model (the cost of classifications manipulation c, and the preferences of the entrepreneurs and investors) remain constant. The following theorem

13 The proof of the corollary involves simple algebra and is not included in the appendix.

14 But, if the standard φs is at or below the mean probability µ that a firm is viable, one can conclude that the probability a firm will be classified as viable is decreasing in the standard deviation σ.


applies in both the "completely" stationary case where β̃i is constant over time, as well as in the "partially" stationary case where β̃i varies over time but has some stationary component. To be specific, in the partially stationary case, it suffices that there be no information available at time t better than the time t value of the productivity parameter that is helpful in predicting the productivity parameter at any time t′ > t and that the time series β̃it be a martingale. More formally, if ϑt represents all that can be known about the time series of the productivity parameter up to time t, then E[β̃it′ | ϑt, βit] = βit for all t′ > t.

The following theorem demonstrates that, even with all this stationarity, the relationship between the shadow standard and the official standard is not constant over time. The theorem shows that if standard setters want to hold the shadow standard constant over time (perhaps because they prefer consistency in the classification of projects across time), then on average they will have to increase the official standard over time. That is, there must be "standards creep."

THEOREM 2 (Standards Creep). If the official standard φs is chosen in each period so as to induce the same shadow standard φ and the environment is (partially or completely) stationary, then the expected value of the official standard φs must be increasing over time.

This result is suggestive of a fundamental trade-off in designing accounting standards: accounting standards can be chosen so that the "face" value of the standard φs is intertemporally constant, or accounting standards can be chosen so that the induced shadow standard φ is intertemporally constant, but not both.

The result is somewhat surprising given the assumed stationarity in the environment. To obtain intuition for the result, we will consider in the text the special case where α = .5 and β̃i is "completely" stationary (though the result also holds for any α ∈ (0, 1) and for the case of time-varying β̃it's too). In this case, the equation (7) defining the shadow standard reduces to:

φ = φs − (1/c) × β̄i² × (E[φ̃i | φ̃i ≥ φ]² − E[φ̃i | φ̃i < φ]²). (11)

It is apparent from (11) that the shadow standard φ prevailing in a period is affected by the market's perceptions of β̃i only through β̄i². In particular, if β̄i² increases, then the official standard must be raised to keep the shadow standard constant.15

Now, consider what happens with the passage of time. As time evolves, successive generations of investors will acquire more and more precise information about the value of the unknown productivity parameter β̃i. This learning about β̃i will occur since the cash flows generated by each viable project, β̃i × I^α/α + ε̃i, provide investors with indirect information about

15 Since E[φ̃i | φ̃i ≥ φ]² − E[φ̃i | φ̃i < φ]² > 0.


β̃i's realized value. To describe this formally, we recall the notation ϑt introduced above. It represents all that will be known about the productivity parameter β̃i up through period t. If "today" is period 0, then ϑ̃t is uncertain for any t > 0 whereas ϑ0 is known. In this notation, today's (resp., period t's) expected value of the productivity parameter β̃i is denoted by E[β̃i | ϑ0] ≡ β̄i (resp., E[β̃i | ϑ̃t]).

In any period t > 0, the official standard φst and the shadow standard φt prevailing in period t will be connected to each other through the following counterpart to (11):

φt = φst − ((1 − α)/(αc)) (E[β̃i | ϑ̃t])² (E[φ̃i | φ̃i ≥ φt]² − E[φ̃i | φ̃i < φt]²). (12)

Analogous to what (7) demonstrated in period 0, equation (12) demonstrates that the shadow standard φt prevailing in period t is affected by the market's perceptions of β̃i at that time only through the expectation (E[β̃i | ϑ̃t])². In fact, (12) tells us more: it tells us that to hold the shadow standard constant as (E[β̃i | ϑ̃t])² increases, the official standard φst must increase. This fact drives the theorem: since more and more information accumulates about β̃i as time passes, the expected conditional variance of β̃i (calculated today), E[Var(β̃i | ϑ̃t) | ϑ0], will decrease the further into the future we look (i.e., the higher t is). That is,

E[Var(β̃i | ϑ̃t) | ϑ0] = E[E[β̃i² | ϑ̃t] − E[β̃i | ϑ̃t]² | ϑ0]

= E[β̃i² | ϑ0] − E[E[β̃i | ϑ̃t]² | ϑ0]

decreases as t increases, and so E[E[β̃i | ϑ̃t]² | ϑ0] increases as time passes, and hence the official standard φst must be adjusted upwards to keep φ constant. Thus, the anticipation of learning about β̃i is responsible for an increase in the expected value of the official standard over time.
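The driving force is just the law of total variance. As a numerical illustration (my own, not the paper's: I assume normal–normal learning about β̃i, with invented prior and noise variances), the posterior variance falls deterministically with the number of observed cash flows, so E[E[β̃i | ϑ̃t]² | ϑ0] rises with t, and by (12) the official standard needed to hold φ fixed must rise in expectation:

```python
# My illustration of the learning effect behind Theorem 2, assuming
# normal-normal updating: prior beta ~ N(m0, v0), each period's signal has
# noise variance ve.  The posterior variance after t signals is
# v_t = 1/(1/v0 + t/ve), and by the law of total variance
# E[E[beta|info_t]^2] = m0^2 + (v0 - v_t), which increases in t.
m0, v0, ve = 1.0, 1.0, 1.0      # invented prior mean/variance, noise variance

def expected_sq_posterior_mean(t):
    v_t = 1.0 / (1.0 / v0 + t / ve)     # posterior variance after t signals
    return m0**2 + (v0 - v_t)           # E[ (posterior mean)^2 ]

vals = [expected_sq_posterior_mean(t) for t in range(6)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # strictly increasing in t
```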

Moreover, notice that this effect will be reinforced if the cost c of classifications manipulation were to decline over time, perhaps because of the information diffused through investment bankers about how a preferred accounting treatment can be achieved.16

16 I would like to thank Mary Barth for the observation that the cost c of classifications manipulation might, under some circumstances, increase over time. While the diffusion of innovations by investment bankers across firms would tend naturally to reduce c over time (as the text mentions), increased scrutiny by SEC officials of practices that they considered to be violative of the spirit of some standards could cause these costs to increase over time. Of course, if this happened, then the interaction between the learning effects discussed in Theorem 2 and changes in c would lead to indeterminate effects on the time series evolution of the official standard φs.

I also want to thank Lenny Soffer for his observation that, apart from the behavior of investment bankers, one might expect c to decline over time because of firms' constantly "pushing the envelope" to determine what accounting treatments qualify as being in accordance with GAAP. Once a firm, or a collection of firms, discovers that no objection gets raised to what previously was regarded as an aggressive accounting treatment of some transaction, then that


5. Value-Maximizing Standards When the Distribution of Available Projects Is Fixed

The analysis so far has taken the official standard φs as exogenously given. In this section, as well as throughout much of the remainder of the paper, we consider endogenizing the choice of the official standard. In making the official standard endogenous, we must endow the standard setters empowered to choose the standard with an objective function. In this section, we presume that standards are chosen to maximize the expected value of a project, before the project is classified, net of the expected cost of classifications manipulation, while holding the distribution of projects fixed.17

Choosing a standard to maximize this objective in part involves trading off the costs of two kinds of errors: misclassifying a viable project as nonviable, and misclassifying a nonviable project as viable. The standard setters must also account for the relative frequency of viable and nonviable projects in the population, as well as the cost of classifications manipulation.

Once again we restrict attention to the parameterizations α = .5 and φ̃i ∼ Uniform[l, u]. The expected value of a project net of the cost of classifications manipulation is given by:

Pr(φ̃i ≤ φ(φs)) × mW + Pr(φ̃i > φ(φs)) × mB − ∫_{φ(φs)}^{φs} c(φs − φi) f(φi) dφi. (13)

The last term is the expected cost of classifications manipulation. When φ(φs) ∈ (l, u), these expected costs integrate to:18

((u − l)β̄i⁴/(8c)) (φ(φs) + (u + l)/2)². (14)
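Expression (14) can be sanity-checked by integrating the cost term of (13) numerically (a sketch of mine; the parameter values are invented and φ comes from the closed form (8)):

```python
# Midpoint-rule check of (14) against the integral in (13)
# (illustrative values; alpha = .5, phi_i ~ Uniform[l, u]).
l, u, beta, c, phi_s = 0.0, 1.0, 1.0, 25.0, 0.5

phi = (phi_s - beta**2 * (u**2 - l**2) / (4 * c)) / (1 + beta**2 * (u - l) / (2 * c))

n = 100_000
h = (phi_s - phi) / n
direct = sum(c * (phi_s - (phi + (i + 0.5) * h)) / (u - l) for i in range(n)) * h

closed_form = (u - l) * beta**4 / (8 * c) * (phi + (u + l) / 2) ** 2   # (14)
assert abs(direct - closed_form) < 1e-8
```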

Since the shadow standard φ(φs) increases in φs, we see that the expected costs of classifications manipulation increase as the standard φs increases. It

aggressive treatment subsequently becomes the benchmark against which even more aggressive reporting of transactions will be compared in the future.

17 In later sections, we allow the distribution of projects to change as the accounting standard changes.

18 Recall from Theorem 1 that φs and φ are related through the equation

φ (1 + β̄i²(u − l)/(2c)) + β̄i²(u² − l²)/(4c) = φs.

Hence, we can express the expected cost of transactions manipulation as:

(1/(u − l)) ∫_{φ}^{φs} c(φs − φi) dφi = (c/(2(u − l)))(φs − φ)² = (c/(2(u − l))) ((β̄i²(u − l)/(2c)) φ + (β̄i²/(4c))(u² − l²))²

= ((u − l)β̄i⁴/(8c)) (φ + (u + l)/2)².


can be shown that these costs are also always increasing in the productivity parameter β̄i.19 Further, the expected cost of classifications manipulation is never monotonic in c.20 The graph in figure 1 below illustrates how these costs change as the parameter c changes.21

The official standard φs and the associated shadow standard φ(φs) that maximize the expected value of a project net of the expected cost of classifications manipulation are detailed in the following theorem.

THEOREM 3 (Value-Maximizing Standards when the Distribution of Projects is Fixed). When α = .5 and φ̃i ∼ Uniform[l, u], and the standard φs is chosen to maximize the expected value of a project net of the expected cost of classifications manipulation, then

(i) when c > (3l + u)β̄i²/2, the optimal shadow standard is

φ = ((u + l)/2) × (2 − (u − l)β̄i²/c) / (2 + (u − l)β̄i²/c) = µ × (1 − √3σβ̄i²/c) / (1 + √3σβ̄i²/c)

and the official standard associated with this shadow standard is

φs = (u + l)/2 = µ;

(ii) the net expected value of the project is strictly increasing in the cost c of classifications manipulation, for c > (3l + u)β̄i²/2.

(iii) When c ≤ (3l + u)β̄i²/2, the optimal standard is no standard, i.e., φ = φs = l.
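Part (i) can be verified by brute force (my sketch; invented parameter values satisfying c > (3l + u)β̄i²/2): grid-search the official standard against the objective Pr(W) × mW + Pr(B) × mB minus the expected cost (14).

```python
# Grid-search check of Theorem 3(i) (illustrative values: here
# (3l + u) * beta**2 / 2 = 0.5 < c, so part (i) applies).
l, u, beta, c = 0.0, 1.0, 1.0, 25.0

def net_value(phi_s):
    phi = (phi_s - beta**2 * (u**2 - l**2) / (4 * c)) / (1 + beta**2 * (u - l) / (2 * c))
    m_W = beta**2 * ((l + phi) / 2) ** 2
    m_B = beta**2 * ((u + phi) / 2) ** 2
    cost = (u - l) * beta**4 / (8 * c) * (phi + (u + l) / 2) ** 2   # (14)
    pr_B = (u - phi) / (u - l)                                      # Pr(classified B)
    return (1 - pr_B) * m_W + pr_B * m_B - cost

grid = [0.30 + i * 1e-4 for i in range(4001)]    # phi_s candidates in [0.30, 0.70]
best = max(grid, key=net_value)
assert abs(best - (u + l) / 2) < 1e-3            # optimum at the mean, as in (i)
```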

Several parts of the theorem are of interest. First, one might expect that, since the probability a project is viable is uniformly distributed, the optimal standard would result in half the projects receiving the classification B and half receiving the classification W. But, according to the theorem, even though φs = µ, more than 50% of all projects are classified as B. This follows since the reported classifications are determined by the shadow standard φ, not the official standard φs, and φ < µ. The explanation for this asymmetry is the presence of, and the accounting for, the costs of classifications manipulation in the standard setters' objective function. Since the costs of classifications manipulation are incurred only when the better (B) classification is received, the optimal standard that "nets out" these costs of classifications manipulation is set below the mean. It can be shown that, were the objective

19 That is, ((u − l)β̄i⁴/(8c)) ((φs − β̄i²(u² − l²)/(4c))/(1 + β̄i²(u − l)/(2c)) + (u + l)/2)² is increasing in β̄i.

20 This last claim is easiest to see by writing the expected cost of transactions manipulation directly in terms of φs. Omitting the algebra, these costs can be shown to be expressible as ((u − l)β̄i⁴/(8c)) ((φs + (u + l)/2)/(1 + β̄i²(u − l)/(2c)))². Now, for any positive finite c, under the preceding conditions, these expected costs are positive. Since, as c approaches 0, the costs approach zero, and as c approaches infinity, the costs also approach zero, the nonmonotonicity follows.

21 I would like to thank Madhav Rajan for pointing out an error in the construction of this graph in a previous version of this manuscript.


FIG. 1.—Plot of expected cost of classifications manipulation as a function of c; φ̃i ∼ Uniform[0, 1]; φs = .5; β̄i = 1.

designed to maximize the expected value of a project gross of these expected costs of classifications manipulation, then the optimal shadow standard is set at the mean, and the official standard is set above the mean. But, since the costs of classifications manipulation are real costs, standard setters should account for them when designing standards, and so, other things equal, standards should be set in part to economize on these costs.

Second, it is interesting to note from (i) that, even though the standard is chosen to maximize the expected value of a project net of the cost of classifications manipulation, the cost c does not enter into the determination of the optimal standard φs, as long as the cost c exceeds (3l + u)β̄i²/2. However, as part (iii) reports, when the cost c is low (c ≤ (3l + u)β̄i²/2), standards serve no function. For such low c's, nontrivial standards (φs > l) generate costs of classifications manipulation, yet they fail to discriminate among projects: all entrepreneurs, regardless of their project's realized φi, choose to have their project classified as B. This may be the economics underlying SFAS 2 and other standards that purposefully do not discriminate between economically different expenditures.

Part (ii) observes that increases in the cost of classifications manipulation always increase the net expected value of a project. While, holding projects' market prices fixed, entrepreneurs may prefer to reduce the cost of obtaining the better classification, ex ante they are always worse off by having these costs decline, since the market will anticipate their subsequent exploitation of the reduced costs of getting the preferred classification. High costs of classifications manipulation (i.e., high c's) commit entrepreneurs to engage in less manipulation, which is ex ante efficient.

The theorem also yields several comparative statics. According to the theorem, the optimal standard φs equals the expected probability that a project is viable, and so it increases as this mean probability increases. φs is otherwise independent of the parameters of the financial reporting environment, a


fact that will be important in the analysis of errors in standard setters' beliefs in the next section. Finally, the optimal shadow standard φ is: below the mean probability that a project's production technology is viable; increasing in the mean and decreasing in the standard deviation of the distribution of φ̃i; and decreasing in the expected value of the productivity parameter β̄i. These are all testable implications of the theorem.

Before concluding this section, we discuss the robustness of the results in this section to variations in the distribution of φ̃i. Since the uniform distribution is a special case of the beta distribution, one way to evaluate the robustness of the above results is by comparing them to corresponding results generated by other beta distributions. While members of the beta family other than the uniform distribution are difficult to deal with analytically, numerical plots of these cases provide some insight into these robustness questions. Figures 2a–2d plot four densities drawn from the beta class, and Figures 3a–3d plot the net expected value of a project (net of the expected cost of classifications' manipulation) as a function of the shadow standard that de facto partitions the better (B) and worse (W) accounting classifications.

The beta distributions depicted in Figures 2a and 2b are symmetric, and as careful scrutiny of Figures 3a and 3b reveals, the optimal shadow standards corresponding to these symmetric distributions are close to, but slightly below, the (common) mean (.5) of these distributions. In contrast, the beta distribution in Figure 2c (resp., Figure 2d) is skewed right (resp., left), and upon careful scrutiny, Figure 3c (resp., Figure 3d) reveals that the optimal value of the shadow standard is close to but to the left (resp., right) of the distribution's mean (which is 5/7 (resp., 2/7) for the distribution in Figure 3c (resp., Figure 3d)). Generalizing from these figures, it appears that a useful heuristic is to have the shadow standard close to the mean of φ̃i's distribution, and that if φ̃i is either symmetric or skewed right, then the optimal value of the shadow standard is slightly below the distribution's mean, consistent with the results presented above: in these cases, the cost of classifications' manipulation "pulls down" the shadow standard below the mean of φ̃i's distribution. But, if φ̃i's distribution is skewed sufficiently left, then, notwithstanding the extra cost of classifications manipulation it induces, the optimal shadow standard (and, a fortiori, the optimal official standard) may be to the right of the distribution's mean.

6. Errors in Specifying Standards

The analysis in the preceding section assumed that standard setters know the economic environment in which the sale of projects was conducted: they know the parameters of the distribution (u and l) generating viable projects; they know the cost of classifications manipulation; they have the same beliefs about the expected value of the productivity parameter as do other market participants, etc. While the actual participants in an economy have strong financial incentives to acquire such a sophisticated understanding of the environment in which they work, it is not clear that standard


FIG. 2.—Plots of beta densities for various parameterizations.

setters have corresponding incentives. Indeed, one might argue that many lobbying efforts are designed to prevent standard-setters from acquiring such knowledge. It is important, consequently, in any discussion of standards, to consider the robustness of the standards with respect to errors


FIG. 3.—Plots of net expected value of project as a function of the shadow standard φ for various beta distributions when c = 25 and β̄i = 1.

in the standard setters' knowledge. This section discusses these robustness questions.

To evaluate errors in standards, we continue to assume that market participants (preparers and investors) know the correct values of the economy's


parameters, and we consider what happens when standards are set incorrectly. That is, regardless of how standard setters arrive at the specified official standard φs, the shadow standard φ and the selling prices mW and mB are set rationally according to (8) and (9). We further assume that, as before, standard setters choose standards that, based on their information, are perceived to maximize the expected value of a project net of the expected cost of classifications manipulation. That is, if the standard setters believe φ̃i ∼ Uniform[le, ue], then they set φs at φs = (ue + le)/2.

In this setting, notice that standard setters can remain ignorant—or wrongheaded—concerning both the cost c of classifications manipulation and the expected value β̄i of the productivity parameter β̃i, and it makes no difference to the construction of the standard. Implementation of the optimal standard is robust with respect to these kinds of errors. And, if the standard setters are wrong about the support of the distribution, i.e., about le and ue, the only sense in which their error matters to the formation of the standard is the error in the mean: e = (ue + le)/2 − (u + l)/2. To evaluate the consequences of the error e, we calculate the difference in the maximum expected value of a project had no error in standards' specification occurred to the maximum expected value of a project in the presence of this erroneously set standard. We refer to this difference as the absolute robustness of a standard subject to error e. (The relative robustness of a standard subject to error e is the ratio of this difference to the maximum value of a project when the standard is constructed with no error.)

THEOREM 4.22 (The Robustness of Standards to Errors in Standard Setters' Beliefs). Suppose α = .5 and φ̃i ∼ Uniform[l, u]. If the objective in setting a standard is to maximize the expected value of a project net of the expected cost of classifications manipulation, then the robustness of a standard with error e = (ue + le)/2 − (u + l)/2 is given by:

error: e
absolute robustness: β̄i² c e² / (4c + 2β̄i²(u − l))
relative robustness: 8ce² / (2c(5l² + 6lu + 5u²) − β̄i²(u − l)³)
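Because the net value of a project is exactly quadratic in φs under this parameterization, the absolute-robustness entry can be checked directly (a sketch of mine with invented numbers): set the erroneous standard at µ + e and compare the value lost with the tabled formula.

```python
# Direct check of Theorem 4's absolute robustness (illustrative values).
l, u, beta, c = 0.0, 1.0, 1.0, 25.0

def net_value(phi_s):
    phi = (phi_s - beta**2 * (u**2 - l**2) / (4 * c)) / (1 + beta**2 * (u - l) / (2 * c))
    m_W = beta**2 * ((l + phi) / 2) ** 2
    m_B = beta**2 * ((u + phi) / 2) ** 2
    cost = (u - l) * beta**4 / (8 * c) * (phi + (u + l) / 2) ** 2
    pr_B = (u - phi) / (u - l)
    return (1 - pr_B) * m_W + pr_B * m_B - cost

mu = (u + l) / 2                         # error-free optimal standard (Theorem 3)
for e in (0.1, -0.1, 0.05):
    loss = net_value(mu) - net_value(mu + e)
    formula = beta**2 * c * e**2 / (4 * c + 2 * beta**2 * (u - l))
    assert abs(loss - formula) < 1e-12   # matches, and is symmetric in e
```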

Notice that the robustness of a standard is quadratic in the error e. This has two implications. First, the first-order effect of small errors is zero in both absolute and relative terms (i.e., (d/de) β̄i²ce²/(4c + 2β̄i²(u − l)) |e=0 = 0 and (d/de) 8ce²/(2c(5l² + 6lu + 5u²) − β̄i²(u − l)³) |e=0 = 0). Second, the direction of the error does not matter: overestimates of the mean (u + l)/2 are equally costly as underestimates (and vice versa).23 It can also be shown that the absolute (resp.,

22 The proof of this theorem, which involves simple, but lengthy, algebraic manipulations, is omitted.

23 Prior to obtaining this result, one might have guessed that the presence of—and concern over—classifications costs might have induced an asymmetry between the effects of overstated errors and understated errors: errors of overstatement lead to higher standards, which in turn would seem to lead to higher classifications costs, which would seem to make overestimates by standard setters more costly than underestimates.


relative) robustness of a standard is decreasing (resp., increasing) in β̄i, so larger values for the productivity parameter accentuate the loss in the expected value of a project due to an erroneously set standard, but this loss decreases in percentage terms. In contrast, it can be shown that increases in the classifications cost parameter c adversely affect both the absolute and relative loss due to errors in a standard's specification.

7. Value-Maximizing Standards When the Distribution of Available Projects Is Endogenous

So far, the paper has examined determinants of efficient accounting standards, the evolution of standards, and the effects of errors in standards in a setting where the real resource allocation effects of standards derive from their impact on the size of the investments in the set of available existing projects. This might be considered an analysis of the ex post effects of standards, since the set of available projects is taken as given—that is, not responsive to changes in the accounting standards. But, in practice, we might expect there to be ex ante effects of changes in standards as well, since the behavior of entrepreneurs and others involved in the production of projects, IPOs, new ventures, and the like may respond to changes in accounting standards. This happens in practice: which deals (spin-offs, takeovers, investments, leases, etc.) are completed is affected by prevailing accounting standards, and as standards change, both the kind and structuring of these deals change. We now formally account for these responses by allowing the distribution of projects to change as accounting standards change.

The standards that emerge in this setting depend on the extent to which standard setters anticipate the dependence of the (now) endogenous distribution of projects on the chosen standards. We will consider two types of standard setters: "sophisticated" (also known as "Stackelberg leaders") and "naive" (also known as "followers"). A sophisticated standard setter correctly anticipates how preparers will respond to changes in standards. Necessarily, a sophisticated standard setter knows all the details of the economic environment in which standards are set. In contrast, a naive standard setter is presumed to know nothing about the economic environment other than what can be deduced from observing preparers' past behavior. Moreover, a naive standard setter believes that preparers do not alter the distribution of their projects in reaction to a change in standards. More detail concerning the behavior of both naive and sophisticated standard setters will be presented below.

Given these assumed differences in sophistication, it is clear that if the only purpose of the analysis were to compare the expected performance of these two types of standard setters, then there would be no need for a formal analysis. Whatever the objective used to calculate the performance of standards, a naive standard setter will never select a better standard than


a sophisticated standard setter. It is nevertheless worthwhile to evaluate formally the behavior of both sophisticated and naive standard setters because, in practice, standard setters may not have sufficient information to be able to act as "our" sophisticated standard setters do. So, it is desirable to compare the standards adopted by the two standard-setting types, and see what parameters of the model are key to explaining the differences in their promulgated standards. To identify these differences, we turn to a formal analysis.

We start by describing how the distribution of projects is endogenized. In our model, projects differ from each other in terms of the probability φi they are viable, and an accounting standard is a threshold probability that a project is viable. So, it seems natural to posit that a change in an accounting standard induces the entrepreneur to change the probability distribution of φ̃i. We concentrate on mean effects, and assume that the cost of increasing the mean probability a project is viable is quadratic in the mean. Specifically, we assume that at cost (k/2)Δ², the entrepreneur can shift the mean of φ̃i to µ = E[φ̃i] = µ̲ + Δ, where µ̲ ∈ (0, 1) is some base probability and k is some positive constant.24

As discussed above, a sophisticated standard setter correctly anticipates how preparers (i.e., entrepreneurs) respond to a selected standard. To describe this response precisely, we must adapt the previously given definition of a financial reporting equilibrium to the present situation in which the distribution φ̃i is endogenous. (In this definition, we let f(φi | µ) denote the probability density that a project is viable, given that the mean probability of viability is µ.)

DEFINITION 2. A financial reporting equilibrium relative to accounting standard φs when the mean of the distribution of projects φ̃i is endogenous consists of a pair of investment levels I∗(B) and I∗(W) and market values mB and mW, a shadow standard φ, and a mean µ∗ such that:

(i) An entrepreneur whose project is viable with probability φi engages in classifications manipulation if and only if φi ≥ φ;

(ii) Pr(viable | B, φ, µ∗) = E[φ̃i | φ̃i ≥ φ, µ∗] and Pr(viable | W, φ, µ∗) = E[φ̃i | φ̃i < φ, µ∗];

(iii) For each µ, Pr(W | φ, µ) = Pr(φ̃i < φ | µ) and Pr(B | φ, µ) = Pr(φ̃i ≥ φ | µ);

(iv) For R ∈ {B, W}, the investment I∗(R) attains the maximum of:

max_I Pr(viable | R, φ, µ∗) × β̄i I^α/α − I.

(v) For R ∈ {B, W}, mR = Pr(viable | R, φ, µ∗) β̄i (I∗(R))^α/α − I∗(R);

24 Adopting a quadratic functional form for the cost of production is often done in theagency and disclosure literatures when explicit closed form solutions are sought. See, e.g.,Baiman and Sivaramarkrishnan [1991], Holmstrom [1979], and Kirby [1988].

1148 R. A. DYE

(vi) Taking φ, mW , and m B as given, µ∗ attains the maximum of

maxµ

Pr(W |φ, µ) × mW + Pr(B |φ, µ) × m B − k2

(µ − µ)2

−∫ φs

φc(φs − φi

)f (φi | µ)dφi .

(vii) φ = φs − m B − mWc .

There are two principal differences between this definition and the financial reporting equilibrium previously defined. The first is condition (vi), in which preparers maximize with respect to the choice of distribution φ̃i of projects. The first two terms in (vi) relate to the expected selling price of a project. The third term, $-\frac{k}{2}(\mu - \bar\mu)^2$, is the cost of selecting a distribution with a mean probability µ, as discussed above. The last term is the expected cost of classifications manipulation.25 The second principal difference is in condition (iii), in which preparers anticipate how their choice among distributions affects how projects are subsequently classified.

Notice that this definition presumes that, although investors reading financial statements get to infer what distribution preparers choose (through condition (vi)), investors do not get to observe this choice directly. That is, preparers take investors' beliefs about this distribution as given (and therefore they also take as given the market prices associated with the classification of projects) when they choose what distribution φ̃i to select. In equilibrium, however, preparers' selection of a distribution of projects and investors' inferences about the distribution of projects coincide.

7.1 THE STACKELBERG STANDARD CHOSEN BY A SOPHISTICATED STANDARD SETTER

We call the mapping, implicit in the previous definition of a financial reporting equilibrium, from an accounting standard φs to the induced equilibrium probability µ* that a project is viable the equilibrium Nash response function, and we denote it by µ* = µ*(φs). Associated with the equilibrium Nash response function are the equilibrium market values mW = mW(φs) and mB = mB(φs) and equilibrium shadow standard φ(φs). A sophisticated standard setter knows all of these mappings at the time he selects among accounting standards. Consequently, a sophisticated standard setter also knows the expected value of an investment, net of both the cost of producing the project and the expected cost of classifications manipulation, as a function of his chosen standard:

$$\Pr(W \mid \phi(\phi^s), \mu^*(\phi^s)) \times m_W(\phi^s) + \Pr(B \mid \phi(\phi^s), \mu^*(\phi^s)) \times m_B(\phi^s) - \frac{k}{2}(\mu^*(\phi^s) - \bar\mu)^2 - \int_{\phi(\phi^s)}^{\phi^s} c(\phi^s - \phi_i)\, f(\phi_i \mid \mu^*(\phi^s))\, d\phi_i. \quad (15)$$

25 In the appendix, this expected cost appears in the form $\frac{1}{2\sigma\sqrt{3}}\int_{\phi}^{\phi^s} c(\phi^s - \phi)\, d\phi$.

We shall assume that the objective of all standard setters, sophisticated and naive, is to select a standard φs that maximizes the expected value of a project net of all relevant costs, based on their beliefs about the market's response to the standards they choose. In particular, a sophisticated standard setter is assumed to maximize (15). Given this objective, it is clear why in the above we referred to a sophisticated standard setter alternatively as a Stackelberg leader: in analogy with the corresponding concept from the literature on industrial organization, a sophisticated standard setter moves first, correctly anticipating how the rest of the market will "follow" his chosen standard.26

With φs_STACKELBERG denoting the standard that attains this maximum, we have the following result.

LEMMA 1 (The Stackelberg Standard). Assume α = .5 and φ̃i ~ Uniform[l, u]. The standard chosen by a sophisticated standard setter when the distribution of projects is endogenous exists and is unique. It is given by

$$\phi^s_{\mathrm{STACKELBERG}} = \bar\mu \times \frac{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c} + \frac{\bar\beta_i^2}{2k}}{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c} - \frac{3\bar\beta_i^2}{2k}}.$$

By analyzing φs_STACKELBERG, it is easy to show that the Stackelberg standard is increasing in µ̄ and β̄i and decreasing in k. These are intuitive comparative statics: increasing µ̄ or β̄i or decreasing k increases the expected productivity of a project in so far as it either reduces the cost of producing the project with a given expected probability of being viable (for µ̄ and k) or else increases the output of a viable project (for β̄i). As is shown in the appendix, the equilibrium Nash response function is given by

$$\mu^*(\phi^s) = \frac{\bar\mu\left(1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c}\right) + \frac{\bar\beta_i^2\phi^s}{2k}}{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c} - \frac{\bar\beta_i^2}{2k}}. \quad (16)$$

Since, as this expression shows, the equilibrium Nash response function is increasing in the value of the standard φs, it follows that a sophisticated standard setter bent on maximizing the ex ante expected value of a project will exploit improvements in the project's production technology attending changes in µ̄, β̄i, and k by increasing the threshold required to get the more favorable accounting treatment.
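Because (16) is an explicit ratio, the response function is easy to evaluate numerically. The sketch below is a minimal Python implementation; the parameter values are illustrative assumptions (not values from the paper), chosen so the denominator of (16) is positive:

```python
# Equilibrium Nash response function mu*(phi_s) from (16), for alpha = .5 and
# phi_i ~ Uniform[l, u].  All parameter values are illustrative assumptions.

SQRT3 = 3 ** 0.5

def mu_star(phi_s, mu_bar=0.5, sigma=0.2, beta_bar=0.5, c=1.0, k=2.0):
    """Mean viability probability preparers choose when facing standard phi_s."""
    t = 1.0 + sigma * beta_bar ** 2 * SQRT3 / c  # recurring term 1 + sigma*beta^2*sqrt(3)/c
    return (mu_bar * t + beta_bar ** 2 * phi_s / (2 * k)) / (t - beta_bar ** 2 / (2 * k))

# The response function is linear and increasing in the standard phi_s:
values = [mu_star(s / 10) for s in range(11)]
assert all(b > a for a, b in zip(values, values[1:]))
assert all(0.0 < v < 1.0 for v in values)
```

The monotonicity check mirrors the observation in the text: a higher official threshold induces preparers to raise the mean viability of their projects.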

Perhaps the most interesting feature of the Stackelberg standard is how it compares to the standard chosen by a naive standard setter. We make this comparison in the next subsection.

7.2 THE NASH STANDARD CHOSEN BY A NAIVE STANDARD SETTER

As noted above, the knowledge a sophisticated standard setter must possess to calculate the Stackelberg standard identified above is really advanced: he must know, and anticipate, how preparers respond to a change in standards. One of the advantages of having this sophisticated knowledge is that,

26 I wish to thank an anonymous referee for suggesting this analogy.


once a standard is set by such a sophisticated standard setter, there is no reason for the standard setter to change it.27 In this respect, at least, the assumed sophistication of such standard setters seems to exceed that possessed by the FASB and similar bodies, because practicing standard setters repeatedly modify, amend, or rescind previously issued standards. For some recent examples, note that SFAS 140 on the transfer and servicing of financial assets and liabilities amends SFAS 125, which in turn amends SFAS 76 and 77; SFAS 138 amends the "derivatives" standard, SFAS 133, which in turn amended SFAS 119; SFAS 132 on pensions amends SFAS 87 and 106; SFAS 111 rescinds SFAS 32, etc. While some of these changes and corrections may have been designed to provide clarification of points that were ambiguous in the original standards, these standards are also modified in response to firms' exploitation of loopholes and/or other unanticipated consequences of previously formulated standards.

In this section, we investigate the polar opposite of a sophisticated standard setter by examining the long run behavior of a naive standard setter who, as noted above, erroneously assumes that preparers are completely unresponsive to changes in standards, and whose only knowledge of the economy is based on preparers' past actions.

We begin by expanding on the description of a naive standard setter's behavior. The behavior of a naive standard setter can be described only in a dynamic context. At the start of, say, period n + 1, the standard setter engages in a comprehensive survey of projects sold in the previous period n. The survey reveals the average frequency, say µn, of viable projects produced in period n. (The number of projects undertaken is assumed to be sufficiently large so that the mean of the empirical distribution is indistinguishably close to the mean of the ex ante distribution.) Armed with this knowledge, the naive standard setter selects the standard φs_{n+1} in period n + 1 to be the mean µn of the distribution of viable projects he observed in period n: φs_{n+1} = µn.

There are two justifications for having a naive standard setter behave in this way. First, if standards are to "catch up" with the actual distribution of projects, then in any period they should be based on the most recent distribution of past projects. Second, given that a naive standard setter does not believe that the distribution of projects will change as standards change, a naive standard setter bent on maximizing what he perceives to be the expected value of a project will be guided by Theorem 3: that theorem states that, when the distribution of projects is taken as fixed, the expected value-maximizing standard is given by the standard setter's perceived mean of the distribution of φ̃i. Thus, this theorem, combined with the naive standard setter's beliefs and survey of period n projects, compels the naive standard setter to select φs_{n+1} = µn.

27 The preceding comment applies when β̃i's realized value is known, or more generally, when no additional learning about β̃i takes place.


After the standard setter has chosen this standard for period n + 1, preparers proceed to select the—now endogenous—distribution of projects for this period in accordance with the previously given definition of a financial reporting equilibrium. This results in the actual distribution of period n + 1 projects having mean probability of viability of µ_{n+1} = µ*(φs_{n+1}). At the start of period n + 2, the naive standard setter canvases period n + 1's distribution of projects—and learns µ_{n+1}—and the game is then repeated. Thus, this is a "cobweb" game in which, at the start of each period, the naive standard setter anticipates that what will happen in that period is what did happen in the previous period.

To distinguish the reasons for the evolution of standards in this section from the evolution due to learning about the productivity parameter demonstrated in Theorem 2 above, we will assume in this section that the productivity parameter is a fixed, known constant.

From this description, it is clear that in any given period, the standard chosen by a naive standard setter will depend upon the past behavior of preparers, which in turn depends upon previously chosen standards.28 To obtain predictions about the standards chosen by a naive standard setter, we examine environments in which the long run evolution of standards is independent of the history of previously chosen standards. In the following, we refer to such environments as stable, which we define formally as follows.
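The cobweb process just described can be simulated directly. In this sketch, mu_star reproduces (16) for the uniform, α = .5 case; all parameter values are illustrative assumptions chosen to make the environment stable:

```python
# Cobweb dynamics of a naive standard setter: phi_s[n+1] = mu_n = mu*(phi_s[n]).
# mu_star is the response function (16); all parameter values are illustrative.

SQRT3 = 3 ** 0.5

def mu_star(phi_s, mu_bar=0.5, sigma=0.2, beta_bar=0.5, c=1.0, k=2.0):
    t = 1.0 + sigma * beta_bar ** 2 * SQRT3 / c
    return (mu_bar * t + beta_bar ** 2 * phi_s / (2 * k)) / (t - beta_bar ** 2 / (2 * k))

def naive_path(phi_s0, periods=60):
    """Standards chosen period by period by the naive standard setter."""
    path = [phi_s0]
    for _ in range(periods):
        path.append(mu_star(path[-1]))  # next standard = mean observed last period
    return path

low, high = naive_path(0.2), naive_path(0.95)

# Both histories converge to the same limiting (Nash) standard:
assert abs(low[-1] - high[-1]) < 1e-9
# A history starting below the limit creeps upward, never overshooting:
assert all(b >= a for a, b in zip(low, low[1:])) and low[1] > low[0]
```

The two assertions preview the two properties formalized next: history-independence of the limit (stability) and "standards creep" from below.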

DEFINITION 3. A financial reporting environment is stable if, for any initial standard φs_0, the sequence of standards {φs_n}_{n≥0} defined by φs_{n+1} = µn = µ*(φs_n) converges to some limiting standard φs_∞ ∈ [0, 1] that is independent of φs_0.

In stable financial reporting environments, a naive standard setter sets standards that eventually converge to the limiting standard φs_∞. Moreover, since µ*(·) is continuous,29 if a naive standard setter ever selects the limiting standard φs_∞, the standard setter will continue to choose that standard indefinitely, as:

$$\mu^*(\phi^s_\infty) = \mu^*\Big(\lim_{n\to\infty}\phi^s_n\Big) = \lim_{n\to\infty}\mu^*(\phi^s_n) = \lim_{n\to\infty}\phi^s_{n+1} = \phi^s_\infty.$$

In the following, we refer to standards that have this self-repeating characteristic as Nash standards.30

28 This is unlike what happens for a sophisticated standard setter: given the economy's parameters, the standard expected to be chosen by a sophisticated standard setter is independent of what standards were chosen previously.

29 µ*(·) is continuous since, as noted in (16) above, µ*(·) is linear.

30 Referring to these self-repeating standards as Nash standards is appropriate when we consider the incentives of preparers and the beliefs of naive standard setters. Recall that the strategies that define a Nash equilibrium of any game have the property that the strategy of each player in the game maximizes that player's utility, taking that player's beliefs and the strategies of other players as given. Since naive standard setters believe that the market does not respond to their chosen standard, as noted above, Theorem 3 states that such a standard setter should choose a standard φs that coincides with his perception of the mean of φ̃. Thus,


DEFINITION 4. An accounting standard φs_NASH is a Nash standard if µ*(φs_NASH) = φs_NASH.

The next theorem indicates that a Nash standard exists and is unique; it determines when an environment is stable; and it shows how, in stable environments, standards selected by a naive standard setter evolve over time. Finally, the theorem documents that the Nash standard is always below the Stackelberg standard in every stable environment, and it assesses the import of this result for evaluating the behavior of preparers.

THEOREM 5 (The Nash Standard and its Comparison to the Stackelberg Standard when α = .5 and φ̃i ~ Uniform[l, u]).

(a) There exists exactly one Nash standard. It is given by

$$\phi^s_{\mathrm{NASH}} = \bar\mu \times \frac{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c}}{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c} - \frac{\bar\beta_i^2}{k}}.$$

(b) The financial reporting environment is stable if and only if

$$\bar\beta_i^2 < k \times \left(1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c}\right).$$

When the financial reporting environment is stable,

(c) if accounting standard φs_0 is initially set below the Nash standard φs_NASH, then the sequence of successive standards φs_{n+1} = µ*(φs_n) increases over time: φs_{n+1} > φs_n for all n;

(di) the Nash standard is below the Stackelberg standard: φs_NASH < φs_STACKELBERG;

(dii) the mean probability that a project is viable under the Nash standard is lower than under the Stackelberg standard: µ*(φs_NASH) < µ*(φs_STACKELBERG); and

(diii) the mean probability that a project is viable under the Stackelberg standard is lower than the Stackelberg standard itself: µ*(φs_STACKELBERG) < φs_STACKELBERG.

a naive standard setter who observes that the empirical distribution of viable projects in the previous period had mean µ = φs_NASH will respond by setting the next period's standard to (also) be φs_NASH. Since preparers will maximize their expected profits in responding to this standard by choosing µ*(φs_NASH) = φs_NASH, their optimizing behavior is in conformity with the naive standard setter's expectations about their behavior.

The fact that a Nash standard is a fixed point of the equilibrium Nash response function is simply a summary way of expressing that a Nash standard defines a Nash equilibrium (in the usual sense).


Part (a) of the theorem indicates that the Nash standard exhibits many intuitive characteristics. As the project's production technology improves (β̄i increases), as the base probability that a project is viable increases (µ̄ increases), and as the cost of classifications manipulation increases (c increases), the Nash standard increases. Also, as the marginal cost of increasing the probability that a project is viable goes up (k increases), and as the standard deviation of the distribution of projects increases (σ increases), the Nash standard declines.

To explain (b), recall the form of the equilibrium Nash response function µ* = µ*(φs) given in (16) above. Since this function is linear in φs with slope

$$\frac{\frac{\bar\beta_i^2}{2k}}{1 + \frac{\sigma\bar\beta_i^2\sqrt{3}}{c} - \frac{\bar\beta_i^2}{2k}},$$

a financial reporting environment is stable provided preparers' response to a change in a standard is less extreme than the change in the standard itself, i.e., if dµ*(φs)/dφs < 1. The condition for stability given in part (b) of the statement of the theorem is this inequality, rearranged.
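The equivalence between the slope condition dµ*(φs)/dφs < 1 and the rearranged inequality in part (b) can be checked mechanically. In this sketch the parameter values are illustrative assumptions, chosen so the denominator of (16) stays positive:

```python
# Check that dmu*/dphi_s < 1 is equivalent to the stability condition in (b):
#   beta_bar^2 < k * (1 + sigma*beta_bar^2*sqrt(3)/c).
# All parameter values are illustrative assumptions.

SQRT3 = 3 ** 0.5
sigma, c, k = 0.2, 1.0, 2.0

def slope(beta_bar):
    """Slope of the linear response function (16) with respect to phi_s."""
    t = 1.0 + sigma * beta_bar ** 2 * SQRT3 / c
    return (beta_bar ** 2 / (2 * k)) / (t - beta_bar ** 2 / (2 * k))

def stable(beta_bar):
    """The rearranged inequality of Theorem 5(b)."""
    return beta_bar ** 2 < k * (1.0 + sigma * beta_bar ** 2 * SQRT3 / c)

# Across a range of technologies, slope < 1 holds exactly when (b) holds:
for beta_bar in (0.5, 1.5, 2.0, 2.5, 2.6, 3.0):
    assert (slope(beta_bar) < 1.0) == stable(beta_bar)
```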

Part (c) explains why standards can be expected to increase over time based on a different principle from that employed in Theorem 2 above. In the present theorem, in a stable environment, if a standard were initially set below the Nash standard, preparers' mean response to the standard is sufficiently dampened so as to remain below the Nash standard. So, if a standard starts out "low" relative to the Nash standard, it will stay "low," but—since all standards eventually converge to the Nash standard in a stable environment—the sequence of standards must gradually increase over time.

Part (d) compares the Nash standard to the Stackelberg standard, as well as the behavior the Nash standard induces to the behavior the Stackelberg standard induces. Part (di) states that, in stable environments, the Stackelberg standard is higher than the Nash standard. That is, standard setters who anticipate correctly the response of preparers to changes in standards will choose a higher threshold for the standard than standard setters who fail to anticipate such responses. As a consequence, part (dii) notes that preparers select a lower mean probability that a project is viable when standards are set by naive standard setters than when standards are set by sophisticated standard setters. Finally, part (diii) implies that, when standards are set optimally by a sophisticated standard setter, more than half of the induced projects will fail to achieve a probability of viability equal to the threshold designated by the official standard.

Parts (di) and (diii) combined indicate some of the difficulties involved in evaluating the performance of both accounting standard setters and those whose reporting behavior is governed by accounting standard setters. It might seem that a standard setter who sets a standard equal to the mean probability that a project is viable would be doing a good job, since such a standard setter is sending an accurate signal to investors about the mean characteristics of the projects they are purchasing. Relatedly, it would seem that an entrepreneur who selected projects whose average probability of being viable equals the threshold specified by the prevailing accounting standard would be doing a good job, too. But the standard that has these characteristics is the Nash standard, and the standard setter who chooses this standard is the naive standard setter. Since (from (di)) the Nash standard is known to be below the optimal Stackelberg standard, and since (from (diii)) the mean probability that an entrepreneur's project is viable under a Stackelberg standard is below the threshold designated by the Stackelberg standard, these seemingly reasonable criteria for judging a standard setter's performance to be high—or the performance of an entrepreneur whose accounting reports are governed by accounting standards to be high—are in fact inappropriate. According to results (di) and (diii), optimal (Stackelberg) accounting standards set reporting thresholds above the mean probability of viability of those projects whose accounting reports are governed by the standards.
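Since Lemma 1 and Theorem 5(a) give both standards in closed form, the rankings in (di)-(diii) can be verified directly for any stable parameterization. A sketch with illustrative parameter values (assumptions, not values from the paper):

```python
# Numerical check of Theorem 5(di)-(diii): Nash vs. Stackelberg standards for
# alpha = .5, phi_i ~ Uniform[l, u].  Parameter values are illustrative only.

SQRT3 = 3 ** 0.5
mu_bar, sigma, beta_bar, c, k = 0.5, 0.2, 0.5, 1.0, 2.0

t = 1.0 + sigma * beta_bar ** 2 * SQRT3 / c
assert beta_bar ** 2 < k * t  # stability condition, Theorem 5(b)

nash = mu_bar * t / (t - beta_bar ** 2 / k)  # Theorem 5(a)
stackelberg = (mu_bar * (t + beta_bar ** 2 / (2 * k))
               / (t - 3 * beta_bar ** 2 / (2 * k)))  # Lemma 1

def mu_star(phi_s):
    # Equilibrium Nash response function (16).
    return (mu_bar * t + beta_bar ** 2 * phi_s / (2 * k)) / (t - beta_bar ** 2 / (2 * k))

assert nash < stackelberg                    # (di)
assert mu_star(nash) < mu_star(stackelberg)  # (dii)
assert mu_star(stackelberg) < stackelberg    # (diii)
assert abs(mu_star(nash) - nash) < 1e-12     # the Nash standard is a fixed point
```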

7.3 GENERAL COMPARISONS BETWEEN NASH AND STACKELBERG STANDARDS

Finally, we consider the relationship between Stackelberg and Nash standards in a very general setting. To that end, let g(µ, µ̂, φs) denote the objective function to be maximized by preparers, when the preparers privately choose actions µ, investors/purchasers conjecture that the preparers have chosen actions µ̂, and accounting standard φs is in effect. Unlike the setup described in the earlier sections, here we allow the actual and conjectured actions (µ and µ̂) to belong to a space, say ℜⁿ, of different dimension from the space ℜ that the accounting standard φs occupies. For expositional convenience, we continue to assume in the following that the standard φs is one-dimensional; the following results apply to multi-dimensional standards on a component-by-component basis.

While this generalization requires no modification of the notion of an equilibrium Nash response function µ*(φs) (apart from replacing µ*(φs) ∈ ℜ by µ*(φs) ∈ ℜⁿ), it does call for a change in the definition of a Nash standard chosen by a naive standard setter. Now, a Nash standard derives from the "standard component" of a Nash equilibrium, defined as follows: it is a pair (φs_NASH, µ_NASH) such that µ_NASH = µ*(φs_NASH) and φs_NASH ∈ arg max over φs of g(µ_NASH, µ_NASH, φs).

The fundamental premise underlying the behavior of the naive standard setter—as well as a Nash equilibrium—is that the standard setter assumes preparers do not alter their behavior in response to his choice of standards. Thus, assuming the choice among standards can be characterized by first-order conditions, φs_NASH is determined as follows (here, the subscript 3 refers to the partial derivative with respect to the third argument):

$$g_3(\mu_{\mathrm{NASH}}, \mu_{\mathrm{NASH}}, \phi^s_{\mathrm{NASH}}) = 0. \quad (17)$$

Contrast this to the characterization of a Stackelberg standard. To describe the Stackelberg standard, first consider the total effect of any (small) change in a standard φs on the preparer's objective function. In equilibrium, we must evaluate such changes at an equilibrium point, i.e., at triples (µ, µ̂, φs) of the form (µ*(φs), µ*(φs), φs). At any such point, the marginal effect of a change in a standard φs on the objective function g(·) is

$$(g_1(\cdot) + g_2(\cdot)) \times \frac{d\mu^*(\phi^s)}{d\phi^s} + g_3(\cdot) = g_2(\cdot)\,\frac{d\mu^*(\phi^s)}{d\phi^s} + g_3(\cdot). \quad (18)$$

(The equality follows since the initial owners maximize with respect to their choice of µ while taking µ̂ as given, so g₁(·) = 0 at any equilibrium point.)31

In particular, since a Stackelberg standard maximizes g(µ*(φs), µ*(φs), φs), a Stackelberg standard satisfies the first-order condition:

$$g_2(\cdot) \times \left.\frac{d\mu^*(\phi^s)}{d\phi^s}\right|_{\phi^s = \phi^s_{\mathrm{STACKELBERG}}} + g_3(\cdot) = 0. \quad (19)$$

(Here, the arguments of g(·) are evaluated at (µ*(φs), µ*(φs), φs) where φs = φs_STACKELBERG.)

A comparison of this last first-order condition (19) to the Nash first-order condition (17) exposes the cost of the naive standard setter's failure to recognize preparers' reactions to his choice of standards. Recalling the derivations that led to (17) and (18) above, the total marginal impact of a change in standards on the preparers' objective function, when evaluated at the Nash standard, is given by

$$g_2(\cdot)\left.\frac{d\mu^*(\phi^s)}{d\phi^s}\right|_{\phi^s = \phi^s_{\mathrm{NASH}}} + g_3(\cdot) = g_2(\cdot)\left.\frac{d\mu^*(\phi^s)}{d\phi^s}\right|_{\phi^s = \phi^s_{\mathrm{NASH}}}.$$

This observation leads immediately to the following pair of results.

THEOREM 6 (A Comparison of the Nash and Stackelberg Standards in General Environments).

(i) If g₂(·) > 0 and dµ*(φs)/dφs > 0, then the preparers' objective function can be increased by setting standards higher than that specified by the Nash standard.

(ii) If g₂(·) > 0, dµ*(φs)/dφs > 0, and g(µ*(φs), µ*(φs), φs) is strictly concave in φs, then the Stackelberg standard is strictly greater than the Nash standard.

Part (i) of this theorem asserts that, under the (reasonable) conditions that the preparers' objective function is increasing in investors' perceptions of the preparers' actions and that the equilibrium response of preparers to a change in standards involves increasing the value(s) of their actions, the objective function of preparers can be increased by pushing the standard above the Nash standard. Part (ii) adds that, provided there are globally diminishing returns to increasing the standard (i.e., the objective function is concave), the Stackelberg standard is above the Nash standard. These results are consistent with those reported in Theorem 5, but they rely on none of Theorem 5's parametric assumptions.
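Theorem 6 can be illustrated with a toy objective function. Everything below is an assumed example rather than the paper's model: a quadratic g with g₂ > 0 and an increasing response function, for which the Nash standard (condition (17)) and the Stackelberg standard (the maximizer of the reduced objective) can both be computed and compared:

```python
# Toy illustration of Theorem 6: with g_2 > 0 and dmu*/dphi_s > 0, the
# Stackelberg standard exceeds the Nash standard.  The quadratic g below is an
# assumed example, not the paper's model.

def g(mu, mu_hat, phi_s, a=1.0, b=0.5, s0=1.0):
    # g_1 = b*phi_s - mu, g_2 = a > 0, g_3 = b*mu - (phi_s - s0).
    return a * mu_hat + b * mu * phi_s - mu ** 2 / 2 - (phi_s - s0) ** 2 / 2

def mu_star(phi_s):
    # Preparer's best response: g_1 = 0 gives mu = b*phi_s (b = 0.5 here).
    return 0.5 * phi_s

def d3(phi_s, h=1e-6):
    # Partial derivative of g in its third argument at the equilibrium point.
    m = mu_star(phi_s)
    return (g(m, m, phi_s + h) - g(m, m, phi_s - h)) / (2 * h)

# Nash standard: solve g_3 = 0 at the induced equilibrium (condition (17)).
lo, hi = 0.0, 5.0                      # d3 changes sign on this bracket
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if d3(mid) > 0 else (lo, mid)
nash = (lo + hi) / 2

# Stackelberg standard: maximize the reduced objective g(mu*(s), mu*(s), s).
grid = [i / 1000 for i in range(5001)]
stackelberg = max(grid, key=lambda s: g(mu_star(s), mu_star(s), s))

assert stackelberg > nash              # Theorem 6(ii) for this toy g
assert abs(nash - 4 / 3) < 1e-3        # analytic Nash value s0/(1 - b^2)
assert abs(stackelberg - 2.0) < 1e-3   # analytic Stackelberg (a*b + s0)/(1 - b^2)
```

The gap between the two standards is exactly the ignored term g₂ · dµ*/dφs in (18): the naive setter stops where g₃ = 0, while the Stackelberg setter keeps raising the standard as long as the induced response remains valuable.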

31 When µ is a vector, g₁(·) should be interpreted as a vector of derivatives (i.e., a gradient), and products such as (g₁(·) + g₂(·)) × dµ*(φs)/dφs should be interpreted as dot products.


8. Summary

This paper studies a model that captures several features of the financial reporting environment: accounting standards consist of binary classifications that suppress probabilistic details; preparers prefer one of the binary classifications to the other; exceeding some context-dependent threshold is required to receive the more favorable classification; preparers evaluate the costs and benefits of engaging in classifications manipulation to obtain the preferred accounting classification. The possibility of classifications manipulation creates a distinction between the official classification described in the statement of an accounting standard and the de facto classification, determined by the "shadow standard" actually adopted by preparers. The paper demonstrates that learning in this environment implies that it is impossible to hold constant simultaneously both the official standard and the shadow standard. A further implication of this is that if standards are designed to keep the actual partitioning of firms constant over time, then there must be expected "standards creep," i.e., an increase in the expected official standard over time.

The paper also considers how the accounting standard chosen by a standard setter varies with the standard setter's knowledge of the economic environment in which standards are set. Two types of standard setters are singled out for attention: sophisticated (also known as Stackelberg) standard setters, who can anticipate correctly how preparers respond to changes in standards, and naive standard setters, who are unable to calculate preparers' responses to changes in accounting standards and, hence, who choose accounting standards that are value-maximizing for the distribution of projects preparers were observed to select in the past. The paper demonstrates that, in stable environments, the standard naive standard setters adopt over time converges to a limiting Nash standard that is independent of the history of previously chosen standards, and this limiting standard is less stringent than the Stackelberg standard chosen by a sophisticated standard setter. Moreover, under the Stackelberg standard, preparers produce projects that have both higher expected value and higher mean probability of success than under the Nash standard.

APPENDIX: PROOFS OF THEOREMS

PROOF OF THEOREM 1. It is clear from (4) and (6) that mW and mB are given by

$$m_W = \bar\beta_i^2\left(\frac{l + \phi}{2}\right)^2 \quad\text{and}\quad m_B = \bar\beta_i^2\left(\frac{u + \phi}{2}\right)^2.$$


Substituting these values into (7) implies that φ is given implicitly by:

$$\phi = \phi^s - \bar\beta_i^2\,\frac{\left(\frac{u+\phi}{2}\right)^2 - \left(\frac{l+\phi}{2}\right)^2}{c},$$

i.e.,

$$\phi\left(1 + \frac{\bar\beta_i^2(u - l)}{2c}\right) = \phi^s - \bar\beta_i^2\,\frac{u^2 - l^2}{4c}.$$

Solving this equation for φ produces the expression that appears in Theorem 1's statement. Since this derivation is constructive, and yields only one solution for φ, this proves both existence and uniqueness.
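The constructive argument can be sanity-checked numerically: the implicit equation for φ is a contraction, so fixed-point iteration must reproduce the closed-form solution of the linear equation. Parameter values below are illustrative assumptions:

```python
# Numerical check of Theorem 1: the shadow standard phi solves
#   phi = phi_s - beta^2 * [((u+phi)/2)^2 - ((l+phi)/2)^2] / c,
# whose unique solution is the closed form from the linear equation above.
# Parameter values are illustrative only.

l, u, beta2, c, phi_s = 0.2, 0.9, 0.25, 1.0, 0.6

# Closed form: phi * (1 + beta^2*(u-l)/(2c)) = phi_s - beta^2*(u^2-l^2)/(4c).
phi_closed = (phi_s - beta2 * (u ** 2 - l ** 2) / (4 * c)) / (1 + beta2 * (u - l) / (2 * c))

# Fixed-point iteration of the implicit equation (a contraction here, since
# the map's derivative has magnitude beta^2*(u-l)/(2c) < 1).
phi = phi_s
for _ in range(200):
    phi = phi_s - beta2 * (((u + phi) / 2) ** 2 - ((l + phi) / 2) ** 2) / c

assert abs(phi - phi_closed) < 1e-12
assert l < phi_closed < u  # the shadow standard is interior for these values
```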

PROOF OF THEOREM 2. Pick two periods t and t′ > t. Let φs_t be the accounting standard prevailing in period t, and let φ̃s_{t′} be the accounting standard prevailing in period t′. As of period t, the accounting standard φ̃s_{t′} is a random variable. Also, as in the text, let ϑt be whatever information is known about the productivity parameter β̃it up to the beginning of period t, and similarly, let ϑ̃_{t′} be whatever information is known about the productivity parameter β̃_{it′} up to the beginning of period t′. Notice that ϑ̃_{t′} is random as of period t. Since 0 < α < 1 and β̃_{it′} is a martingale, Jensen's inequality applies to yield:

$$E\Big[\big(E[\tilde\beta_{it'} \mid \tilde\vartheta_{t'}]\big)^{\frac{1}{1-\alpha}} \,\Big|\, \vartheta_t\Big] \ge \Big(E\big[E[\tilde\beta_{it'} \mid \tilde\vartheta_{t'}] \mid \vartheta_t\big]\Big)^{\frac{1}{1-\alpha}} = E[\tilde\beta_{it'} \mid \vartheta_t]^{\frac{1}{1-\alpha}} = \Big(E\big[E[\tilde\beta_{it'} \mid \tilde\beta_{it}] \mid \vartheta_t\big]\Big)^{\frac{1}{1-\alpha}} = \big(E[\tilde\beta_{it} \mid \vartheta_t]\big)^{\frac{1}{1-\alpha}}.$$

Define $\Delta \equiv \big(E[\tilde\phi_i \mid \tilde\phi_i > \phi]^{\frac{1}{1-\alpha}} - E[\tilde\phi_i \mid \tilde\phi_i \le \phi]^{\frac{1}{1-\alpha}}\big) \times \frac{1-\alpha}{\alpha}$. Of course, Δ > 0 always holds. So, we conclude by (7) that

$$E[\tilde\phi^s_{t'} \mid \vartheta_t] = \phi + \frac{E\big[\big(E[\tilde\beta_{it'} \mid \tilde\vartheta_{t'}]\big)^{\frac{1}{1-\alpha}} \mid \vartheta_t\big]}{c} \times \Delta \ge \phi + \frac{E[\tilde\beta_{it} \mid \vartheta_t]^{\frac{1}{1-\alpha}}}{c} \times \Delta = \phi^s_t.$$

PROOF OF THEOREM 3. Appealing to (14) for the expression for the expected cost of classifications manipulation, we see that the objective function (13) is given by:

$$\frac{\phi(\phi^s) - l}{u - l}\times\left(\bar\beta_i\,\frac{\phi(\phi^s) + l}{2}\right)^2 + \frac{u - \phi(\phi^s)}{u - l}\times\left(\bar\beta_i\,\frac{\phi(\phi^s) + u}{2}\right)^2 - \frac{(u - l)\bar\beta_i^4}{8c}\left(\phi(\phi^s) + \frac{u + l}{2}\right)^2.$$

Maximizing this is equivalent to maximizing:32

$$(\phi - l)(\phi + l)^2 + (u - \phi)(\phi + u)^2 - \frac{(u - l)^2\bar\beta_i^2}{2c}\left(\phi + \frac{u + l}{2}\right)^2, \quad (20)$$

and the first-order condition derived from maximizing this latter function with respect to φ is given by:

$$(\phi + l)^2 + 2(\phi - l)(\phi + l) - (\phi + u)^2 + 2(u - \phi)(\phi + u) - \frac{(u - l)^2\bar\beta_i^2}{c}\left(\phi + \frac{u + l}{2}\right) = 0.$$

This simplifies to:

$$\phi = \frac{u + l}{2}\times\frac{2 - \frac{(u - l)\bar\beta_i^2}{c}}{2 + \frac{(u - l)\bar\beta_i^2}{c}}.$$

The second-order condition is satisfied ($-2 - \frac{(u - l)\bar\beta_i^2}{c} < 0$), so this is indeed a maximum, provided φ ∈ (l, u). Since $\frac{2 - \frac{(u - l)\bar\beta_i^2}{c}}{2 + \frac{(u - l)\bar\beta_i^2}{c}} < 1$, we only have to worry about whether φ > l. The latter holds if and only if

$$\frac{u + l}{2}\times\left(2 - \frac{(u - l)\bar\beta_i^2}{c}\right) > l\left(2 + \frac{(u - l)\bar\beta_i^2}{c}\right),$$

i.e., if and only if

$$c > \frac{\bar\beta_i^2}{2}(u + 3l). \quad (21)$$

This proves the first part of the theorem.

Recall the relationship between φ and φs established in Theorem 1:

$$\phi = \frac{\phi^s - \frac{\bar\beta_i^2}{4c}(u^2 - l^2)}{1 + \frac{\bar\beta_i^2}{2c}(u - l)}.$$

Substituting from the just-derived optimal value for φ, we get:

$$\frac{u + l}{2}\times\frac{1 - \frac{(u - l)\bar\beta_i^2}{2c}}{1 + \frac{(u - l)\bar\beta_i^2}{2c}} = \phi = \frac{\phi^s - \frac{\bar\beta_i^2}{4c}(u^2 - l^2)}{1 + \frac{\bar\beta_i^2}{2c}(u - l)},$$

which implies

$$\phi^s = \frac{u + l}{2}.$$

This proves part i of the theorem.

32 Note that it makes no difference whether the maximization proceeds by directly determining the optimal φs, or rather indirectly by determining the cutoff φ associated with φs: by the envelope theorem, $\frac{d}{d\phi^s}\mathrm{OBJ}(\phi(\phi^s)) = \frac{\partial}{\partial\phi}[\mathrm{OBJ}(\phi)]\,\frac{\partial\phi}{\partial\phi^s} = 0$ implies, since $\frac{\partial\phi}{\partial\phi^s} > 0$, that the optimum is determined by that φs for which the associated φ satisfies $\frac{\partial}{\partial\phi}\mathrm{OBJ}(\phi) = 0$.


Part ii follows from an application of the envelope theorem to (20).

Part iii follows from (21). When the inequality in (21) is reversed, the shadow standard φ falls below the left endpoint l of φ̃i's distribution. This means that the classifications cost parameter c is sufficiently low so that all projects are classified as viable. Obviously, the standard serves no allocational function in this case, and so results in the incurrence of unnecessary classification costs.

PROOF OF LEMMA 1. We prove the lemma constructively.

The net expected value of producing a project with mean probability µ = (l + u)/2 of being viable is given by:

$$\frac{\phi - l}{u - l}\times m_W + \frac{u - \phi}{u - l}\times m_B - \frac{k}{2}(\mu - \bar\mu)^2 - \frac{c(\phi^s - \phi)^2}{2(u - l)}.$$

Expressed in terms of µ and σ, this equals (since $l = \mu - \sigma\sqrt{3}$ and $u = \mu + \sigma\sqrt{3}$):

$$\frac{\phi - (\mu - \sigma\sqrt{3})}{2\sigma\sqrt{3}}\times m_W + \frac{\mu + \sigma\sqrt{3} - \phi}{2\sigma\sqrt{3}}\times m_B - \frac{k}{2}(\mu - \bar\mu)^2 - \frac{c(\phi^s - \phi)^2}{4\sigma\sqrt{3}}. \quad (22)$$

Preparers take the market values mW, mB as given when they choose the distribution of projects, since investors do not get to observe directly preparers' choices. Preparers take the shadow standard φ as given as well: if the preparers anticipate the market values mW, mB, optimal classifications manipulation dictates that they will engage in classifications manipulation ex post (after φ̃i has realized its value φi) as long as $\phi_i \ge \phi^s - \frac{m_B - m_W}{c} = \phi$. Hence, when preparers maximize with respect to the choice of µ (as required in equilibrium condition (vi)), they obtain as the first-order condition for the optimal mean µ*:

$$m_B - m_W = (\mu^* - \bar\mu)\,2k\sigma\sqrt{3}. \quad (23)$$

This condition determines µ* in terms of the endogenous market prices mW, mB.

The market price mB is determined by the shadow standard φ and µ* through equilibrium condition (v). Using each of (23), (9), $u = \mu^* + \sigma\sqrt{3}$, and $\phi = \phi^s - \frac{m_B - m_W}{c}$, and simplifying, we find that mB equals:

$$m_B = \left(\bar\beta_i\,\frac{u + \phi}{2}\right)^2 = \left(\bar\beta_i\,\frac{\mu^* + \sigma\sqrt{3} + \phi}{2}\right)^2 = \left(\bar\beta_i\,\frac{\mu^* + \sigma\sqrt{3} + \phi^s - \frac{m_B - m_W}{c}}{2}\right)^2 = \bar\beta_i^2\left(\frac{\mu^* + \sigma\sqrt{3} + \phi^s - (\mu^* - \bar\mu)\frac{2k\sigma\sqrt{3}}{c}}{2}\right)^2 \equiv \bar\beta_i^2\left(\frac{\mu^* d + \sigma\sqrt{3} + f}{2}\right)^2, \quad (24)$$

where

$$d \equiv 1 - \frac{2k\sigma\sqrt{3}}{c} \quad\text{and}\quad f \equiv \phi^s + \frac{2k\sigma\bar\mu\sqrt{3}}{c}.$$

Likewise, using (23), (9), $l = \mu^* - \sigma\sqrt{3}$, and $\phi = \phi^s - \frac{m_B - m_W}{c}$, and simplifying, we find

$$m_W = \left(\bar\beta_i\,\frac{l + \phi}{2}\right)^2 = \left(\bar\beta_i\,\frac{\mu^* - \sigma\sqrt{3} + \phi}{2}\right)^2 = \bar\beta_i^2\left(\frac{\mu^* - \sigma\sqrt{3} + \phi^s - (\mu^* - \bar\mu)\frac{2k\sigma\sqrt{3}}{c}}{2}\right)^2 = \bar\beta_i^2\left(\frac{\mu^* d - \sigma\sqrt{3} + f}{2}\right)^2. \quad (25)$$

Thus, (24) and (25) together imply:

$$m_B - m_W = \bar\beta_i^2(\mu^* d + f)\,\sigma\sqrt{3}.$$

Combining this last equation with (23), we conclude

$$(\mu^* - \bar\mu)\,2k\sigma\sqrt{3} = m_B - m_W = \bar\beta_i^2(\mu^* d + f)\,\sigma\sqrt{3}. \quad (26)$$

Solving this equation for $\mu^* = \mu^*(\phi^s)$, we obtain, for arbitrary $\phi^s$, the equilibrium Nash response function:

\[
\mu^* = \mu^*(\phi^s) = \frac{2\bar{\mu}k\left(1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}\right) + \bar{\beta}_i^2\, \phi^s}{2k\left(1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}\right) - \bar{\beta}_i^2}.
\tag{27}
\]
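The closed form (27) can be verified against equation (26), which it solves. A minimal numeric sketch, with hypothetical parameter values (here `beta2` stands for $\bar{\beta}_i^2$):

```python
import math

# Hypothetical parameter values, for illustration only.
k, c, sigma, mu_bar, beta2 = 2.0, 3.0, 0.5, 0.6, 0.8
s3 = sigma * math.sqrt(3)
d = 1 - 2 * k * s3 / c                 # d as defined after (24)
f = lambda phi_s: phi_s + 2 * k * sigma * mu_bar * math.sqrt(3) / c

def mu_star(phi_s):
    """Equilibrium Nash response function (27)."""
    x = 1 + sigma * beta2 * math.sqrt(3) / c
    return (2 * mu_bar * k * x + beta2 * phi_s) / (2 * k * x - beta2)

# mu_star(phi_s) should solve (26):
# (mu* - mu_bar)*2k*sigma*sqrt(3) = beta2*(mu*d + f)*sigma*sqrt(3)
phi_s = 0.7
m = mu_star(phi_s)
lhs = (m - mu_bar) * 2 * k * s3
rhs = beta2 * (m * d + f(phi_s)) * s3
assert abs(lhs - rhs) < 1e-9
```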

As a further step toward calculating the Stackelberg standard, we have to solve for the equilibrium shadow standard $\phi(\phi^s)$ for each standard $\phi^s$. To that end, we substitute $m_B - m_W$ as specified in (26) above into the relationship $\phi = \phi^s - \frac{m_B - m_W}{c}$, combined with the equilibrium Nash response function, to obtain:

\[
\phi(\phi^s) = \phi^s - \frac{m_B - m_W}{c} = \phi^s - \frac{(\mu^*(\phi^s) - \bar{\mu})\, 2k\sigma\sqrt{3}}{c}.
\]

We take this expression, along with the standard-dependent expressions for $m_W$ and $m_B$ (obtained by substituting the equilibrium Nash response function into each of (24) and (25)), and insert these expressions into (22). Then, we maximize with respect to the choice of $\phi^s$. Omitting the details, we obtain$^{33}$ the expression for $\phi^s_{\mathrm{STACKELBERG}}$ appearing in the statement of the lemma.

$^{33}$ It is easy to show that the second-order condition required for $\phi^s_{\mathrm{STACKELBERG}}$ to be a maximum is satisfied whenever the denominator of $\phi^s_{\mathrm{STACKELBERG}}$ is positive. It is also easy to show that if this denominator is positive, then the stability condition is also satisfied.


PROOF OF THEOREM 5. The Nash standard is the fixed point of the equilibrium Nash response function, i.e., it is the solution to

\[
\mu^*(\phi^s) = \phi^s.
\]

Using the previously determined calculation of the equilibrium Nash response function in (27) yields the unique expression for the Nash standard appearing in (a).

To prove (b), note that the financial reporting environment is stable if and only if:

\[
\frac{\partial \mu^*(\phi^s)}{\partial \phi^s} < 1.
\]

This last inequality can be rewritten as:

\[
\bar{\beta}_i^2 < k\left(1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}\right).
\]

This proves (b).

To prove (c), first note that if a standard $\phi^s_0$ is below $\phi^s_{\mathrm{NASH}}$, then so are all elements of the sequence $\phi^s_n = \mu^*(\phi^s_{n-1})$. This follows from the monotonicity of $\mu^*(\cdot)$:

\[
\phi^s_0 < \phi^s_{\mathrm{NASH}} \;\Rightarrow\; \phi^s_1 = \mu^*(\phi^s_0) < \mu^*(\phi^s_{\mathrm{NASH}}) = \phi^s_{\mathrm{NASH}}.
\]

Hence (by induction),

\[
\phi^s_{n+1} = \mu^*(\phi^s_n) < \mu^*(\phi^s_{\mathrm{NASH}}) = \phi^s_{\mathrm{NASH}} \quad \text{for all } n \geq 1.
\]

Next note that if the environment is stable, then the denominator of $\mu^*(\cdot)$ is positive, and so (as simple algebra confirms) $\phi^s < \mu^*(\phi^s)$ if and only if $\phi^s < \phi^s_{\mathrm{NASH}}$. As all elements of the sequence $\phi^s_n$ have already been shown to be below $\phi^s_{\mathrm{NASH}}$, it follows that

\[
\phi^s_n < \mu^*(\phi^s_n) = \phi^s_{n+1}
\]

for all $n$. This proves (c).
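The monotone convergence in (c) can be illustrated numerically: under a stable (hypothetical, invented-for-illustration) parameterization, iterating the Nash response function from below climbs monotonically to the Nash standard.

```python
import math

# Hypothetical parameters; stability requires beta2 < k * x, checked below.
k, c, sigma, mu_bar, beta2 = 2.0, 3.0, 0.5, 0.6, 0.8
x = 1 + sigma * beta2 * math.sqrt(3) / c
assert beta2 < k * x                       # stability condition from part (b)

def mu_star(phi_s):
    """Equilibrium Nash response function (27)."""
    return (2 * mu_bar * k * x + beta2 * phi_s) / (2 * k * x - beta2)

phi_nash = mu_bar * x / (x - beta2 / k)    # fixed point: mu*(phi_nash) = phi_nash
assert abs(mu_star(phi_nash) - phi_nash) < 1e-12

# Start below the Nash standard and iterate phi_n = mu*(phi_{n-1}):
seq = [phi_nash - 0.3]
for _ in range(15):
    seq.append(mu_star(seq[-1]))

# The sequence increases monotonically toward the Nash standard, as in (c).
assert all(a < b for a, b in zip(seq, seq[1:]))
assert seq[-1] < phi_nash and abs(seq[-1] - phi_nash) < 1e-9
```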

(di) To show $\phi^s_{\mathrm{NASH}} < \phi^s_{\mathrm{STACKELBERG}}$, write $\phi^s_{\mathrm{STACKELBERG}}$ as $\frac{\bar{\mu}\left(x + \frac{\bar{\beta}_i^2}{2k}\right)}{x - \frac{3\bar{\beta}_i^2}{2k}}$ and $\phi^s_{\mathrm{NASH}}$ as $\frac{\bar{\mu}x}{x - \frac{\bar{\beta}_i^2}{k}}$, where $x \equiv 1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}$. So, we must show that, when the economy is stable,

\[
\frac{\bar{\mu}x}{x - \frac{\bar{\beta}_i^2}{k}} < \frac{\bar{\mu}\left(x + \frac{\bar{\beta}_i^2}{2k}\right)}{x - \frac{3\bar{\beta}_i^2}{2k}}.
\]

Upon rearrangement, this inequality can be written as:

\[
\bar{\beta}_i^2 < 2kx = 2k\left(1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}\right).
\]


Since the requirement for stability is $\bar{\beta}_i^2 < k\left(1 + \frac{\sigma\bar{\beta}_i^2\sqrt{3}}{c}\right)$, the stability condition guarantees $\phi^s_{\mathrm{NASH}} < \phi^s_{\mathrm{STACKELBERG}}$. This proves (di).
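The two closed forms used in (di) can also be compared numerically; a sketch with invented parameters satisfying the stability condition:

```python
import math

# Hypothetical parameters; beta2 stands for beta_bar_i**2.
k, c, sigma, mu_bar, beta2 = 2.0, 3.0, 0.5, 0.6, 0.8
x = 1 + sigma * beta2 * math.sqrt(3) / c
assert beta2 < k * x                       # stability (part (b))
assert x - 3 * beta2 / (2 * k) > 0         # positive Stackelberg denominator

phi_nash = mu_bar * x / (x - beta2 / k)
phi_stackelberg = mu_bar * (x + beta2 / (2 * k)) / (x - 3 * beta2 / (2 * k))

# Under stability beta2 < 2*k*x, which is equivalent to phi_nash < phi_stackelberg.
assert beta2 < 2 * k * x
assert phi_nash < phi_stackelberg
```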

(dii) follows immediately from the fact that the equilibrium Nash response function $\mu^*(\phi^s)$ is increasing in $\phi^s$:

\[
\phi^s_{\mathrm{NASH}} = \mu^*(\phi^s_{\mathrm{NASH}}) < \mu^*(\phi^s_{\mathrm{STACKELBERG}}).
\]

(diii) follows immediately from the combined facts that: when the environment is stable, $\frac{\partial \mu^*(\phi^s)}{\partial \phi^s} < 1$; $\phi^s_{\mathrm{NASH}} = \mu^*(\phi^s_{\mathrm{NASH}})$; and $\phi^s_{\mathrm{NASH}} < \phi^s_{\mathrm{STACKELBERG}}$.
