

The Disparate Impacts of Accountability – Searching

for Causal Mechanisms

Alisa Hicklin Fryar University of Oklahoma

[email protected]

Prepared for the 2011 Public Management Research Conference in Syracuse, NY

Special thanks to the W.T. Grant Foundation for the funding for this work, Tom Rabovsky, the

policy doctoral students at the University of Oklahoma, the faculty at the La Follette School of

Public Affairs, and the faculty and students at American University for comments on earlier

iterations of this project.


The Disparate Impacts of Accountability – Searching for Causal Mechanisms

In 2009, Complete College America, a nonprofit advocacy group committed to

“increasing the nation’s college completion rate through state policy,” was formed (CCA 2011).

This organization has worked to recruit states into its alliance by encouraging governors and

state legislators to commit to change the way higher education is governed by moving higher

education policy to a more performance-based culture. The shifts promoted by this organization

include setting performance goals, moving policy to incentivize better performance (with respect

to undergraduate degree completion), and collecting and reporting better data on institutional

performance. As of May 2011, twenty-nine states had joined these efforts.

Although one of the leading organizations currently involved in these efforts, Complete

College America is not alone in its concerns over undergraduate degree attainment. A number of

organizations, both at the national level (such as the Lumina Foundation, the Bill and Melinda

Gates Foundation) and the state level (such as the Texas Public Policy Foundation) have also

pushed for more data-driven, performance-based governance regimes in higher education,

although these initiatives vary considerably in the ways in which they approach these issues.

Some efforts to promote student degree attainment focus more on developing better ways to

prepare college-bound students in high school and offer these students the help they need to be

successful once enrolled in a postsecondary institution. These initiatives are largely process-

driven and incremental, focused on improving the core functions of the traditional education

system. Other efforts are aimed more at revolutionizing the way we think about higher

education, taking a bold step away from the traditional model, and inducing higher performance

and efficiency through incentives and competition for resources.


At the heart of these policy discussions lies a very familiar debate within public

administration: can we improve public sector performance through incentives and performance

funding? This paper draws on the literature in public management on accountability and

performance management to explore the effect of performance-based funding policies on public

university student outcomes. In doing so, this paper also engages the argument that

these policies could have differential impacts across institutions, which will also be investigated.

This analysis will bring together quantitative and qualitative data to both identify the nature of

these relationships and explore the underlying causal mechanisms.

The paper proceeds as follows. First, I review some of the literature on accountability and

performance management in public administration. Second, I review the literature in higher

education on accountability, performance funding, and student success. I then present the

findings from a quantitative analysis of all public universities in the US, investigating the direct

influence of these policies on outcomes and any differential impacts. Finally, I draw on

qualitative data gathered in interviews conducted in a single state during the process of

performance-funding policy formation.

Accountability and Performance Management

The literature on accountability in the public sector spans a broad range of work within

both public administration and political science. Much of the recent work on accountability

issues focuses on efforts much like those in higher education – efforts to “hold public organizations

accountable” through tracking, publishing, and tying funding to performance data. These

policies vary tremendously on a number of dimensions, including the breadth of data collected,

the level of bureaucratic participation in specifying the performance measures that will be


considered, and the extent to which these data will have any bearing on appropriations for the

institution (Schick 2001). Many of these efforts have been ongoing for decades, as governments

around the world have collected data on public agency service delivery, costs, and performance

(though often not quantified). But with the surge of support for bringing private-sector values –

such as efficiency, incentives, and concern for a “bottom line” – into the public sector, many of

these efforts have morphed into more defined grading systems that are used in appropriations and

personnel decisions. The movement toward private-sector values in bureaucracy is often

attributed to the New Public Management (Osborne and Gaebler 1992) or Reinventing

Government (Gore 1993) efforts, but we see much of the support for these ideas outside of the

organized national movements, manifested in various state- and local-level initiatives.

A number of scholars have raised a host of concerns about the design and implementation

of performance-based accountability policies (Schick 2001; Moynihan 2008; Talbot 2005;

Dahler-Larsen 2005), and some of this work focuses specifically on education (Radin 2006;

Heinrich 2009).

Some of the concerns identified by scholars in public administration are the foundation

for the policy debates in higher education, including:

1. Attribution: To what extent can university practices affect whether a student

graduates? Who or what affects completion rates?

2. One size fits all: Should all universities be held to the same standard? If not, what

would be an appropriate comparison set?

3. Legitimacy: To what extent should state officials set the goals of a public university?

Do these policies infringe on academic freedom? What is the appropriate relationship

between academic freedom and democratic accountability?


4. Resources-Performance Link: Can the threat of funding cuts incentivize better

performance? Would more resources be needed to improve performance?

5. Values: Does the adoption of performance-funding accountability policies signal that

legislators do not trust leaders of public organizations? Could the adoption of these

policies signify distrust? Could it create distrust?

6. Effectiveness: Do these policies actually improve results?

A number of scholars have studied performance-based accountability policies in a wide

variety of public organizations at the state and federal level and have identified a number of

problems with these types of policies. Radin (2006) presents a detailed and thorough discussion

of the problems embedded in many performance/accountability discussions and identifies many

of these issues that have been faced in higher education. She argues that many performance-

accountability efforts are billed as a panacea for ailing organizations, but they often produce

many negative consequences. Other work by scholars of performance management raises a host

of other concerns, chief among them, the lack of evidence that performance budgeting/funding

actually produces results (Moynihan and Andrews 2010; Andrews and Hill 2003). Additionally,

the use of performance data is often viewed as a more objective way to evaluate institutions,

when, in fact, performance data often introduces considerable ambiguity, as individuals may

often perceive performance indicators, or the determinants of these indicators, in different ways

(Moynihan 2006).

Additionally, the introduction of many of these policies fails to consider the critical

differences among public institutions, especially with respect to mission and clientele, and these

policies are often quite critical – to the point of being threatening – to leaders of public

organizations (Radin 2006). The critical nature of these policies often establishes a culture of


mistrust among elected officials and public administrators and can lead to a wide range of

dysfunctional and counterproductive behaviors (Jacob and Levitt 2003; Radin 2006; Meier and

Bohte 2000). Additionally, we may be most likely to see these unintended consequences in more

disadvantaged institutions (Meier and Bohte 2000).

The existing work in public administration can substantially inform our discussions on

accountability policies in higher education. While we have many reasons to expect the findings

in work on other public agencies would also apply to public universities, there are some

differences that are important to note. First, university presidents are rarely thought of as

bureaucrats or even public managers, in a traditional sense. Second, public universities have

traditionally enjoyed a high level of autonomy from government influence, and they are often

highly regarded by citizens and policymakers alike. Yet, as public confidence in higher

education has begun to erode and state budgets tighten, leaders of public universities have faced

more scrutiny than they have in the past. Some may question whether “what we know” about

traditional bureaucratic agencies can be applied to public universities, but the parallels are

strong. The following section reviews the literature on performance funding in higher education,

in which many of the themes discussed in the public administration literature come to the

forefront as well.

Research on Performance Funding Policies in Higher Education

Performance funding policies have been an area of interest in higher education for

decades, having begun as a discussion of variations in state fiscal policies in higher education.

Burke and his associates (Burke and Serban 1997; Burke and Modarresi 1999; Burke, Rosen, and

Minassians 2000; Burke and Minassians 2001, 2002, 2003; Burke 2005) collected data through


annual surveys on the ways in which performance was considered in the higher education

appropriations process. In this framework, policies were grouped in three categories:

performance reporting, performance budgeting, and performance funding. The differences

among these categories lie in the relationship between performance data and the appropriations

process, from very weak ties (performance reporting) to loose ties (performance budgeting) to

stronger, more formulaic ties (performance funding). Performance funding policies, the focus of

this study, are those in which the state has enacted a policy by which a pre-set amount of

appropriations monies will be distributed through a pre-set, known formula. As such, these

policies are designed through a political process that requires the specification of which performance

outcomes will be valued and allows leaders of public universities to have some idea of “where

they stand.” This stability allows for two important dynamics to emerge. It establishes

(somewhat) stable monetary incentives for public universities to maximize performance on

specified outcome measures, and it gives leaders of public universities the ability to predict how

they will fare under the policy in a given year.

Despite a relatively long history of performance funding policies in higher education,

many scholars have noted their increasing popularity in recent years (Alderman and Carey

2001; Zumeta 2001), which has motivated some scholars to explore the determinants of

adoption. Some of this work has argued that the public and elected officials have lost their faith

in public universities and are no longer willing to allow institutions to enjoy the autonomy they

once had (Zumeta 2001; Richardson and Martinez 2009). McLendon, Hearn, and Deaton (2006)

conducted a quantitative analysis of the adoption of performance funding policies and found that

only two factors were significant predictors of policy adoption: the percent of the legislature that

is Republican and the centralization of the state’s higher education governing board. Interestingly,


none of the other predictors associated with the state’s higher education system (educational

attainment, tuition levels, enrollment) were significant predictors. Others have noted the demise

of performance funding policies in certain states as well. Dougherty and Natow (2009) conducted

a qualitative study of the abandonment of performance funding policies in three states and found

that state budget shortfalls and waning support for the policies were most often cited as the

reasons the policies were not continued.

More recent work has investigated the effectiveness of these policies, yielding some

mixed results. Shin and Milton (2004), in a national study, found no significant effect of either

performance budgeting or performance funding on graduation rates, nor did Volkwein and

Tandberg (2008), when evaluating state-level achievement in higher education. Doyle and

Noland’s (2006) study of performance funding in Tennessee (the most long-standing state

performance funding policy) found that most institutions were unaffected by the policy, but a

few universities saw modest gains in retention rates. Sanford and Hunter (2010) extended the

work of Doyle and Noland (2006) by exploring whether the inclusion of graduation rates as a

valued outcome in the performance policy raised graduation rates and whether the increase in the

amount of funding tied to the program improved performance. In short, they found no evidence

that the policy in Tennessee – highly regarded as one of the best examples of performance

funding policies in higher education – had any discernible effect on performance.

Despite a developing body of work on the effectiveness (or lack thereof) of performance

funding policies, few scholars have examined whether these policies would have dissimilar

impacts on institutions. While some work on accountability policies in K-12 strongly suggests

that these policies can often benefit advantaged organizations and further harm disadvantaged

organizations (Abernathy 2007; Radin 2006), few have examined whether performance funding


policies have similar effects in higher education. Furthermore, few scholars have attempted to

better understand why these policies are not working in some, most, or all institutions. This

paper seeks to do both, first by exploring the effect of performance funding policies on public

four-year institutions, examining the overall effect and any potential disparate effects, and then

moves to a qualitative analysis that seeks to explore why these policies may not be working or

may be harming disadvantaged students and/or universities.

For the sake of clarity in the quantitative analysis and findings, I specify two hypotheses.

The first hypothesis is the one advanced by those who promote the adoption of performance

funding policies:

Hypothesis One: Performance funding policies will improve graduation rates.

The second hypothesis is the one that is often cited as a concern for broad performance-based

accountability policies that are enacted for dissimilar institutions:

Hypothesis Two: Performance funding policies will improve graduation rates for advantaged

institutions and lower graduation rates for less advantaged institutions.

Quantitative Analysis: Data and Methods

The data for the quantitative analysis are drawn from multiple sources. For this study,

the unit of analysis is the university and includes all public four-year institutions in the 50 United

States1. The dependent variables, graduation rates for different groups of students, are cohort

measures drawn from the U.S. Department of Education’s Integrated Postsecondary Education

Data System. These data represent the percentage of undergraduate, bachelor's-degree-seeking

students, entering as first-time, full-time freshmen, who completed a bachelor's degree within six

1 Private universities, community colleges, stand-alone medical schools, and senior colleges (without freshman and

sophomore offerings) are not included.


years. These data represent six cohorts, entering between 1996 and 2003, and are reported both for

the aggregate student population and for certain racial classifications.

The institutional average for six-year graduation rates in these data is approximately

45.5%, with a standard deviation of 16. The bottom quartile of universities in the U.S. have six-

year graduation rates below 36%, while the top quartile have graduation rates just over 55%, and

the distribution of graduation rates is mostly normal but very slightly right-skewed, which is

attributable to the handful of elite institutions in the dataset. For the models predicting black and

Hispanic student graduation rates, I censored the models to include only cohorts that had five or

more students for that group, to avoid the statistical problems that come from using percentages

with very small numbers. For this set of institutions, black students' graduation rates trail the

aggregate rates by approximately 10% across the board, with a mean of 38% and an interquartile range of 25 (25%-

50%). Hispanic students fare a bit better, with a mean of 41% and an interquartile range of 23

(29%-52%).
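The censoring rule described above can be sketched as follows; the cohort counts and rates here are invented for illustration, not drawn from IPEDS:

```python
import numpy as np

# Sketch: drop institution-year observations where the subgroup's entering
# cohort has fewer than five students, to avoid the statistical problems of
# percentages computed from very small denominators. All data are invented.
black_cohort_n = np.array([120, 3, 45, 0, 8])                  # entering cohort sizes
black_grad_rate = np.array([38.0, 66.7, 41.2, np.nan, 25.0])   # six-year rates (%)

keep = black_cohort_n >= 5              # the censoring rule: five or more students
censored_rates = black_grad_rate[keep]  # only these observations enter the models
```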

The key independent variable is a dummy variable representing the presence of a state-

level performance funding policy. This variable, compiled from the data collected by Joseph

Burke and his associates (Burke and Minassians 2003), Kevin Dougherty and Monica Reid

(Dougherty and Reid 2007), and Education Sector (Aldeman and Carey 2009a), counts any state

policy that links any state appropriations with some kind of outcome data for public, four-year

universities, with graduation rate being the most common indicator of performance. Between

1996 and 2003, the number of states with performance funding policies ranged from 9 to 17 with

some states adopting and others abandoning these policies during the time period. It is important

to note that, although the percentage of states with these policies ranges from 18% to 34%, the

percentage of universities in the analysis that are subject to these policies ranges from 25% to 43%.
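Constructing such a dummy for a university-year panel can be sketched as follows; the state policy spans below are hypothetical placeholders, not the actual coding from Burke and Minassians, Dougherty and Reid, or Aldeman and Carey:

```python
# Sketch: a performance funding dummy for a university-year panel.
# Policy spans below are hypothetical placeholders, not the actual coding.
policy_years = {
    "TN": range(1996, 2004),   # hypothetical: policy in force 1996-2003
    "SC": range(1999, 2002),   # hypothetical: adopted 1999, dropped after 2001
}

def perf_funding(state, year):
    """Return 1 if the state had a performance funding policy that year, else 0."""
    return int(year in policy_years.get(state, ()))

# Toy panel of (university, state, year) observations
panel = [("Univ A", "TN", 1998), ("Univ B", "SC", 1997), ("Univ C", "SC", 2000)]
dummies = [perf_funding(s, y) for _, s, y in panel]
```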


A number of control variables are included to pick up important variations among

universities, many of which have been found to be significant predictors of graduation rates

(Titus 2006; Zhang 2009). The most influential difference among institutions is the selectivity of

the institution, as more selective universities, on average, have higher graduation rates. The

selectivity data are drawn from Barron's Profiles of American Colleges, which categorizes each

institution based on how competitive the admissions process is, a categorization that is based on

various admissions factors. The variable is a six-point ordinal scale ranging from least to most

competitive. Other controls capture basic institutional differences. I include dummy variables

for institutional mission, collapsed into three categories: bachelors-degree granting institutions,

masters institutions, and research/doctoral institutions, all of which come from Carnegie

classifications. Other control variables include size (total enrollment), institutional wealth

(revenue per student, instructional expenditures per student, average faculty salary), student

population demographics (percent black students, percent Latino students, percent of students on

Pell grants), and a dummy variable for whether the institution is an HBCU. These data all come

from the Department of Education’s Integrated Postsecondary Education Data System.

The structure of these data requires panel data analysis techniques, as there are many

institutions observed over multiple years. This analysis employs panel-corrected standard errors models,

patterned after the work of Beck and Katz (1996), with panel-level (institution-specific) AR(1)

terms. As a robustness check, these models were also run as two-way fixed effects models

(state and year), and the results were substantively unchanged.
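The robustness check can be illustrated on simulated data. The sketch below estimates a two-way (state and year) fixed effects model by ordinary least squares with dummy variables; it does not implement Beck-Katz panel-corrected standard errors or the AR(1) terms, and every number in it is simulated, not from the paper's data:

```python
import numpy as np

# Sketch: two-way fixed effects on a simulated state-year panel.
rng = np.random.default_rng(42)
n_states, n_years = 10, 8
beta_policy = -0.5                                    # "true" simulated policy effect

states = np.repeat(np.arange(n_states), n_years)      # state index for each obs
years = np.tile(np.arange(n_years), n_states)         # year index for each obs
policy = ((states < 5) & (years >= 3)).astype(float)  # half the states adopt in year 3

state_fe = rng.normal(45.0, 10.0, n_states)[states]   # state-specific levels
year_fe = np.linspace(0.0, 2.0, n_years)[years]       # common time trend
grad_rate = state_fe + year_fe + beta_policy * policy + rng.normal(0, 0.1, states.size)

# Design matrix: policy dummy, state dummies (absorbing the intercept), year dummies
X = np.column_stack([
    policy,
    (states[:, None] == np.arange(n_states)).astype(float),
    (years[:, None] == np.arange(1, n_years)).astype(float),
])
beta_hat = np.linalg.lstsq(X, grad_rate, rcond=None)[0][0]  # recovers ~ -0.5
```

In practice a package such as statsmodels or linearmodels would be used, but the dummy-variable regression above is the same estimator for the fixed effects specification.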


Findings from the Quantitative Analysis

Table one presents the models predicting an institution’s overall six-year graduation rate,

with the first model including all of the variables but no interaction, and the second model

including the interaction discussed in hypothesis two. Overall, the model performs quite nicely,

predicting 95% of the variation in institutional graduation rates. Most of the control variables

perform in predicted ways, with each one-point increase on the six-point selectivity scale

associated with a 6% increase in graduation rates. Masters institutions and research institutions also outperformed

bachelors-degree institutions (by 2.6% and 3.9%, respectively). Less expected was the effect of

institutional size. Some may expect that students would do better at smaller universities (with

the expectation of receiving more attention), but larger institutions enjoy significantly

higher graduation rates on average. The effect size is rather small, however: a 1,000-student increase

is linked to a 0.15-point increase in graduation rates. All three of the institutional wealth variables –

instructional expenditures per student, revenue per student, and average faculty salary – are

positive and significant predictors of graduation rates.

[Table One about Here]

The variables for student demographics are also significant predictors in the expected

direction. As is well documented in the literature on race and education, black and Hispanic

students often face a number of obstacles which lessen their chance of success, both at the

individual level, and at the aggregate level, as seen in Model 1. However, HBCUs, when

compared to similar institutions, have higher graduation rates – 13% on average – than non-

HBCUs with similar student demographics. Finally, the percentage of students receiving federal

aid, the measure of poverty for this analysis, is significant and negative, although substantively

Page 13: The Disparate Impacts of Accountability Searching for ... · 2 The Disparate Impacts of Accountability – Searching for Causal Mechanisms In 2009, Complete College America, a nonprofit

13

small, with a 20% increase in students on Pell grants resulting in a 1% decrease in graduation

rates.


The first test of hypothesis one is in the first model, where we see that funding policies

have a negative, significant effect on graduation rates. Although the substantive impact is

relatively small, performance funding policies are linked to a 0.5% decrease in graduation rates.

Even with the substantively small coefficient, these findings offer considerable evidence that, at

a minimum, these performance funding policies are not improving performance for public

universities. There are a number of reasons why this may be the case. Recent work by

Rabovsky (2011) finds that performance funding policies do not really strengthen the link

between performance and appropriations, and the effect on institutional priorities is minimal.

But why would we find any support for negative outcomes? Institutions could very well ignore

these policies, but why would they produce negative results? Usually, in these situations, we

might expect endogeneity to be the culprit, but the work on the adoption of performance funding

policies (McLendon, Hearn, and Deaton 2006) finds no link between educational attainment in a

state and the adoption of these policies.

Model two introduces an interaction between performance funding policies and the

percentage of low-income students (measured as Pell-grant receipt), as a test of the second

hypothesis. If the second hypothesis were supported, we would see a positive, significant

coefficient for the main effect of performance policies and a negative, significant effect for the

interaction with the percentage of students receiving federal aid. However, this is not the case.

Instead, the coefficient for performance funding policies remains negative and significant, and

the coefficient for the interaction terms is insignificant. Figure one graphs the interactive effect,


in which we see that performance funding policies have a negative effect on graduation rates for

institutions with less than 40% of students receiving Pell grants. Although this seems like it

would only be a small subsection of institutions, almost 75% of institutions have less than 40%

of their students on Pell grants, meaning that for most institutions, performance funding policies

can be linked to decreases in graduation rates. Surprisingly, this relationship is not significant

for high-poverty institutions, which suggests that hypothesis two should be rejected.
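The conditional effect plotted in figure one follows the standard delta-method calculation for interaction models. The sketch below illustrates it with hypothetical coefficients and (co)variances chosen only to mimic the qualitative pattern described — significance below roughly 40% Pell receipt — and not the paper's actual estimates:

```python
import numpy as np

# Sketch: marginal effect of a performance funding policy conditional on the
# percentage of students receiving Pell grants, for a model of the form
#   grad_rate = ... + b1*policy + b2*(policy * pell_pct) + ...
# All numbers below are hypothetical illustrations, not estimated values.
b1, b2 = -1.2, 0.012                  # hypothetical main and interaction effects
var_b1, var_b2, cov_b1b2 = 0.09, 3e-5, 0.0

pell_pct = np.arange(0, 101, 10)      # evaluate the effect from 0% to 100% Pell
marginal_effect = b1 + b2 * pell_pct  # effect of policy at each Pell share
se = np.sqrt(var_b1 + pell_pct**2 * var_b2 + 2 * pell_pct * cov_b1b2)
significant = np.abs(marginal_effect / se) > 1.96   # .05-level z-test
```

With these illustrative numbers, the policy effect is negative and significant only for low-Pell institutions, matching the pattern described for figure one.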

[Tables Two and Three about Here]

Tables two and three replicate these analyses for black and Hispanic students. In table

two, the models for black student graduation rates have some commonalities, but there are a few

important differences. First, the models explain less of the variation in graduation rates,

dropping to around 80%. Second, the effect of selectivity is more pronounced on black

graduation rates, but the differences in institutional missions drop out of significance completely.

The variables for institutional wealth remain positive and significant, and the variable for percent

Hispanic continues to be negative and significant. Interestingly, percent black is no longer a

significant predictor, but percent receiving federal aid continues to be negative and significant.

In the models for black graduation rates, performance funding policies are insignificant,

both in the model without the interaction and in the model where the interaction is added. The

interaction is insignificant at the .05 level but is negative and significant (though substantively

small) at the .10 level, which some may consider to offer minimal support to hypothesis two.

Overall, these models suggest that the institutional dynamics that predict the aggregate

graduation rates are a bit different for black student graduation rates. Most importantly,

performance funding policies have no strong effect on graduation rates for black students.


The findings for Hispanic student outcomes have much in common with the findings for

black student outcomes. Again, selectivity, enrollment, and institutional wealth are positive and

significant predictors of graduation rates, and institutional mission is insignificant. Interestingly,

the coefficient for percent black students is negative and significant, while percent Hispanic is

only significant at the .10 level and is very weak. Across groups, the pattern of strong negative

relationships for minority students who are not co-ethnics and the lack of a negative

relationship for co-ethnics is interesting and certainly warrants attention in future work. In these

models, we also see a stronger, negative relationship between the percentage of students

receiving financial aid and Hispanic graduation rates.

The coefficients for performance funding policies, again, are insignificant both for the

main effect and the interactive term2. Once again, we see consistent evidence that we can reject

the idea that performance funding policies are effective in raising graduation rates, and we have a

little evidence that would suggest that these policies may lead to negative outcomes for some

institutions. However, we have no evidence that these policies are helping advantaged

institutions and hurting disadvantaged institutions, nor do we see that these policies are hurting

minority students.

So why are these policies failing to produce positive gains? And why would we see

declines in performance? There are a number of possible answers. University administrators

may not be responding to these shifts in incentives at all, for a number of reasons – disagreement

with the policy’s goals, an incentive structure that is too weak, or general apathy. Or, it may be

the case that universities are responding to these policy shifts in ways that could be detrimental

to student outcomes, either because administrators are uninformed on the ways to increase

² Graphing the interactions for performance funding policies for both black and Hispanic graduation rates shows that zero falls within the confidence intervals for all values in the dataset. These graphs are not shown but are available upon request.


student achievement (assuming these interventions exist and can be effective across institutions)

or because administrators are trying to “game the system” by inflating their numbers rather than making real efforts. Much like other work on performance funding policies, we have very

little research that explores the ways in which managers of public organizations view and

respond to policies that incentivize increased performance on certain metrics. In an effort to

understand how managers view and respond to policy change, I conducted a case study to explore these possible causal mechanisms.

Managerial Responses to Performance Funding

In the spring of 2010, the Texas commissioner of higher education put forth a proposal to

introduce a performance funding policy, a proposal that was seen as a response to strong pressure

from the governor’s office to reform higher education and make universities “more accountable.”

In short, the proposal argued for changing the funding formula. Currently, the state of Texas funds public universities on a formula that is largely enrollment driven: a census is taken on the 12th class day, enrollments are weighted by certain factors (graduate versus undergraduate, for example), and appropriations are distributed. The proposed policy, if adopted, would change the census date from the 12th class day to the last class day, so that if a student dropped a course, the institution would receive no funding for that enrollment.
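As a rough illustration of the mechanics at stake, the sketch below contrasts an enrollment-driven appropriation computed at an early census date with one computed at the last class day. All weights, rates, and enrollment figures are invented for illustration; the actual Texas formula is considerably more detailed.

```python
# Hypothetical enrollment-driven funding formula and the proposed
# census-date change. Every number here is an invented placeholder.

RATE_PER_WEIGHTED_SCH = 55.0  # dollars per weighted semester credit hour (hypothetical)
WEIGHTS = {"undergraduate": 1.0, "graduate": 2.5}  # hypothetical level weights

def appropriation(enrollments):
    """enrollments: list of (level, credit_hours) counted at the census date."""
    return sum(WEIGHTS[level] * hours * RATE_PER_WEIGHTED_SCH
               for level, hours in enrollments)

# Under a 12th-class-day census, later drops still generate funding;
# under a last-class-day census, they do not.
start_of_term = [("undergraduate", 30000), ("graduate", 4000)]
after_drops   = [("undergraduate", 27000), ("graduate", 3800)]  # some students dropped

funding_12th_day = appropriation(start_of_term)
funding_last_day = appropriation(after_drops)
lost_funding = funding_12th_day - funding_last_day
```

The gap between the two totals is the revenue an institution with heavy course-drop rates would lose under the proposal, which is why less advantaged institutions saw the change as redistribution rather than an incentive.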

This proposal was circulated in late spring, with the expectation that it would be proposed

in the next legislative session, which was to begin in January of 2011. Between August and

October of 2010, I conducted interviews with public university administrators in the state of

Texas, asking about their perceptions of the policy and the supporting pressures to increase


accountability.³ The interviews included seven university presidents, two multi-institutional chancellors, and three vice presidents, with one vice president interviewed when the president was not available. Most of these interviews took place in the administrator’s office and lasted about an hour, but a few were conducted in other ways or locations, based on the administrator’s request.

Although the interviews covered a range of topics, this discussion focuses on responses to

two questions. First, what do you [the administrator] think about the policy proposal put forth by

the commissioner to change the funding formula? And second, if adopted, how will your

institution respond to the shift? For this paper, the purpose of the qualitative work is to explore

possible explanations for why similar policies are ineffective at improving performance, not to

aggregate the responses to produce a quantitative analysis. As such, this discussion will be less

structured and more exploratory, in an effort to identify the ways in which managers of public agencies

view political interventions aimed at increasing accountability through performance funding.

Administrative Views of the Performance Funding Proposal

On the most basic question – what do you think about the proposed accountability policy –

responses varied considerably and seemed to track many of the factors that were positive predictors of graduation rates in the quantitative work. Two presidents from more

advantaged institutions showed either support or indifference to the proposal, but even the

support was framed in an interesting way. The most supportive president said:

The thought of funding on completion makes some sense for legislators and taxpayers. Why pay for a class that a student didn’t take? But it would have a lot of

negative impacts on universities that are below average. Redistribution would

³ In November, the proposal was changed substantially and, as of this writing, had failed to pass the legislature. However, during the fall of 2010 and the spring of 2011, the governor became more involved in similar accountability efforts.


create winners and losers, so the presidents of institutions that would be losing out

are against it because it costs money. But, so far, that’s their only reason to be

against it. That’s not a good enough answer. Consumers and taxpayers would see

funding on completion as a rational plan.

This response is interesting as it is supportive of the policy, but the support is mostly for reasons

that can be linked to citizen and legislative preferences. Additionally, this president expects the

policy will be detrimental to “below average” institutions and implicitly acknowledges

opposition on the part of less advantaged institutions, but believes that the opposition rests not on the merits of the policy but only on the fact that opposing institutions will lose money if the

policy is adopted. This same president then moves from voicing their personal opinion to

discussing their public response to the policy, saying:

After talking about it with other [institutions in our system], [we] had to defer to

other schools… because, in the end, we don’t want to hurt [the less advantaged

institutions]. But I think that institutions ought to be rewarded for success. I

don’t have any problem with funding based on graduation rates or completion.

We owe it to ourselves to earn the trust of the people by not shying away from

reasonable standards.

This response clearly indicates that the leader of this more advantaged institution was personally supportive of the policy but publicly neutral toward it (though not opposed) because of the political structure in the state. Compare this response to that of another advantaged institution, one that did not face

much pressure at the system level, who said, “[The proposal] actually wouldn’t affect us much…

it would be an increase in [funding],⁴ which isn’t a lot, but definitely doesn’t hurt us.”

Taken together, neither of the respondents from the more advantaged institutions was

opposed to the policy, but neither was vocally supportive (albeit for different reasons). These

responses were markedly different from other institutional responses. A president from a

⁴ The respondent cited the amount of money, but the specific amount is removed to protect the identity of the respondent.


middle-range institution raised a number of concerns about disparate impacts within the

institution, saying:

Our university has many subpopulations. The current proposal to shift funding

wouldn’t result in a serious impact, overall, on our institution, but it would affect

some more than others. One-third of our university population are first-generation students. Many of them are from low SES backgrounds and they often

have to work to pay for school. A shift like this would impact them the most. …

The state has to consider disparate impacts.

Here, we start to see some concerns about the policies, although, again, the view of the president

is framed by whether the institution would gain or lose funding. Given that this institution would

not lose (or gain) much from the policy, the president was concerned, but that concern stopped short of full opposition.

The presidents from the less advantaged institutions were the most vocal in their

opposition to the policy. Their responses often hinted at

concerns that the policy was not designed to improve performance, but instead was designed to

give more money to the more advantaged institutions. One president referred to an internal

report that found no relationship between course completion and on-time graduation, remarking,

“Why spend political capital on this proposal? It doesn’t improve graduation rates. Why

redistribute the funds?” Similarly, another president of a less-advantaged institution also focused

on concerns over redistribution, saying, “We don’t get the top students. They want to punish us

for serving [less capable students]. They should pay us more. The state needs to fix the K12

system first. They’re going to punish the senior colleges for failing when you’re starting with a

losing proposition to begin with.”

If one were to think about placing these views on a continuum, most presidential attitudes

would fall somewhere between indifference and opposition, with the only supportive administrator

choosing to suppress his/her support for the sake of sister institutions. As such, there is


considerable evidence that these policies introduce some level of conflict into

political/bureaucratic relationships. Within the literature on performance management, scholars

often discuss conflict as part of a principal-agent framework and assume goal conflict between

politicians and bureaucrats. But in the case of performance funding and higher education, the

actual goals of student success are not in conflict at all. However, the decision to reward and (as presidents see it) punish institutions leads some administrators to believe that politicians do

not trust them to have the students’ best interest at heart. This lack of trust seems to lead to a

more dysfunctional relationship between political principals and public organization leaders,

which could be why we see either no effect or a negative effect of performance funding policies.

Institutional Response to the Performance Funding Proposal

When asked how the administration would respond to the policy if it were adopted,

expectations varied in a similar pattern. More advantaged institutions – those that would receive more money – had no intention of changing their institutional practices if the policy were

adopted. Responses from less advantaged institutions, however, were quite varied. One president discussed relatively minor changes to internal policies relating to

dropping courses, saying, “we’re currently discussing a shift in our add/drop policy, and it would

affect [disadvantaged students] more than others.” Given the details of the policy proposal, this

response focused on adjusting institutional policies to the specific metrics of the proposal.

Although it was not the response that political leaders likely would have preferred, it was a very

rational response to the proposed changes.

Another president believed that there was only one viable response to these kinds of

policies: “You raise entrance requirements and exclude a whole population from ever getting a


college degree. We’re guilty of that now. We know where the funding’s headed.” Both

presidents of less-advantaged institutions believed that they were already doing everything they could to support student success and that these changes to admissions or drop policies would

protect their institutions from being hurt by what they perceived to be a poorly-designed policy

intervention. Yet, this decision to buffer the negative consequences of the policy change through

internal policies was not shared by all presidents. One president had little interest in trying to

work the new system, but instead planned to fight the policy on normative grounds, saying,

“I’m going to fight it with every ounce of my body. It’s going to cost us [dollar amount

removed]. And it sends the wrong message. To students. To teachers.”

Policy Implications and Contributions to Management Theory

To fully appreciate these responses, it is important to revisit the logic of performance

funding policies. Political leaders begin with concerns over poor performance in higher

education, often pointing to low graduation rates. In thinking about why performance is (seen to

be) lagging, some argue that universities have no incentive to care about graduation rates.

Universities are funded on enrollment, so many argue that they are only incentivized to recruit

students, not to retain and graduate them. If one believes that the problem is a lack of incentives, it

logically follows that offering incentives to improve performance on certain metrics would result

in improved performance.

Yet, we are not seeing much evidence that these policies are actually improving

performance, nor are they inducing leaders of public organizations to increase their investment in

(or their concern for) undergraduate student success. These findings, while somewhat surprising,

are in line with the work of Weibel et al (2009). If university administrators are already


intrinsically motivated to improve undergraduate student success, the creation of performance

funding policies could produce negative shifts for two reasons. First, as discussed in Weibel et al

(2009), the introduction of extrinsic motivations (financial incentives) could weaken intrinsic

motivations to improve performance. Second, the introduction of these financial incentives

could be construed by leaders of public organizations as a signal that political principals believe

that public leaders do not care about performance. This signal is especially strong when these

policy proposals are paired with rhetoric that questions the effectiveness of public leaders and their commitment to the state or nation, as was the case in Texas. This exchange can easily lead to a culture of mistrust, which can produce dysfunctional behaviors in public organizations and would likely lead to negative outcomes, as argued by Radin (2006).

Given the evidence presented in the existing work on performance funding and the

evidence discussed in these two analyses, it is difficult to argue that performance funding policies are likely to produce performance gains in higher education. However, it is also important

to note that opponents of performance funding policies often make assumptions that also lack

strong empirical support. For example, the assertion that performance funding policies are

substantially harming public university performance or disproportionately hurting disadvantaged

universities and/or students does not enjoy strong empirical backing. This analysis

identified a very slight negative effect on aggregate graduation rates for some institutions, and

the evidence of disparate effects was only significant at the .10 level. However, the qualitative

work uncovered strong differences in attitudes among public university presidents, especially

those at the most disadvantaged institutions.

These findings bring to mind the work of McLendon, Hearn, and Deaton (2006) and

Radin (2006). McLendon et al found that the adoption of these policies is not motivated by an


empirical performance failure, but instead, is largely tied to political and structural differences.

Much like the work on New Public Management, there is evidence that these policies are often

more about politics and a strongly held belief in the promise of incentives than they are about

actual gains in performance. The work of advocacy groups supports this notion, as they often urge all states to adopt performance funding policies, not just those with “below-average”

institutional performance, nor are these policies often designed to target poor performers within a

state. Instead, they seem to be motivated by a set of strongly held beliefs: universities do not

have a strong incentive to care about undergraduate student success, universities are failing, and

universities would not be failing if they had the incentive to focus on student success.

Of course, the university itself cannot care more or do anything differently. As with most

performance funding policies, the targets of these policies are the administrators in these

organizations. As such, these policies implicitly (and sometimes explicitly) assume that

university presidents do not care about students and can only be made to care if they are

rewarded or punished monetarily. We have many reasons to believe that university leaders care

about students and take their jobs seriously, so it should not be surprising to see that those who

would suffer under these policies often become defensive. Over time, these interactions can lead

to the type of dysfunction discussed by Radin (2006) when accountability policies cast such a

negative light on leaders of public agencies. While we may not see quantifiable gains or losses

attributable to these policies, the deterioration of the relationship between elected leaders and university administrators is worth considering and merits further study.


References

Aldeman, Chad, and Kevin Carey. 2009. Ready to Assemble: Grading State Higher Education

Accountability Systems. Washington, DC: Education Sector.

Andrews, Matthew and Herb Hill. 2003. “The Impact of Traditional Budgeting Systems on the

Effectiveness of Performance-Based Budgeting: A Different Viewpoint on Recent

Findings.” International Journal of Public Administration. 26(2): 135-55.

Beck, Nathaniel, and Jonathan N. Katz. 1995. “What to do (and not to do) with Time-Series

Cross-Section Data.” The American Political Science Review 89(3): 634-647.

Burke, Joseph C. 2005. Achieving Accountability in Higher Education: Balancing Public,

Academic, and Market Demands. San Francisco: Jossey-Bass.

Burke, Joseph C. and H.P. Minassians. 2001. Linking Resources to Campus Results: From Fad

to Trend, the Fifth Annual Survey. Albany, NY: Rockefeller Institute of Government.

Burke, Joseph C. and H.P. Minassians. 2002. Performance Reporting: The Preferred “No Cost”

Accountability Program – the Sixth Annual Survey. Albany, NY: Rockefeller Institute of

Government.

Burke, Joseph C. and H.P. Minassians. 2003. Real Accountability or Accountability “Lite”:

Seventh Annual Survey. Albany, NY: Rockefeller Institute of Government.

Burke, Joseph C. and S. Modarresi. 1999. Performance Funding and Budgeting: Popularity and

Volatility – the Third Annual Survey. Albany, NY: Rockefeller Institute of Government.

Burke, Joseph C., J. Rosen, H.P. Minassians, and T. Lessard. 2000. Performance Funding and

Budgeting – the Fourth Annual Survey. Albany, NY: Rockefeller Institute of

Government.

Burke, Joseph C. and A. Serban. 1997. State Performance Funding and Budgeting for Public

Higher Education. Albany, NY: Rockefeller Institute of Government.

CCA. 2011. Complete College America. www.completecollege.org

Dahler-Larsen, Peter. 2005. “Evaluation and Public Management.” In The Oxford Handbook of

Public Management. Edited by Ewan Ferlie, Laurence E. Lynn Jr, and Christopher

Pollitt. Oxford University Press.

Dougherty, Kevin J. and Rebecca S. Natow. 2009. The Demise of Higher Education

Performance Funding Systems in Three States. CCRC Working Paper No. 17. Teachers

College, Columbia University.


Dougherty, Kevin J., Rebecca S. Natow, and Blanca E. Vega. 2012. “Popular but Unstable: Explaining Why State

Performance Funding Systems in the United States Often Do Not Persist.” Teachers

College Record.

Doyle, William and B. Noland. 2006. “Does Performance Funding Make a Difference for

Students?” Presented at the Association for Institutional Research Meeting. Chicago, IL.

Gore, Al. 1993. Creating a Government that Works Better and Costs Less: A Report of the

National Performance Review. Washington, DC: US Government Printing Office.

Heinrich, Carolyn. 2009. “Third-Party Governance under No Child Left Behind: Accountability

and Performance Management Challenges.” Journal of Public Administration Research

and Theory. 20:i59-i80.

Jacob, Brian A., and Steven D. Levitt. 2003. “Rotten Apples: An Investigation of The Prevalence

and Predictors of Teacher Cheating.” Quarterly Journal of Economics 118(3): 843-877.

McLendon, Michael K., James C. Hearn, and Steven B. Deaton. 2006. “Called to Account:

Analyzing the Origins and Spread of State Performance-Accountability Policies for

Higher Education.” Educational Evaluation and Policy Analysis 28(1): 1-24.

Meier, Kenneth J. and John Bohte. 2000. “Goal Displacement: Assessing the Motivation for

Organizational Cheating.” Public Administration Review. 60(March/April): 173-182.

Moynihan, Donald P. 2006. “What Do We Talk About When We Talk About Performance?

Dialogue Theory and Performance Budgeting.” Journal of Public Administration

Research and Theory 16(2): 151-168.

Moynihan, Donald P. 2008. The Dynamics of Performance Management. Washington, DC:

Georgetown University Press.

Moynihan, Donald P. and Matthew Andrews. 2011. “Budgets and Financial Management.” In

Public Management and Performance: Research Directions. Edited by Richard M.

Walker, George A. Boyne, and Gene A. Brewer. Cambridge University Press.

Osborne, David E., and Ted Gaebler. 1992. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. Reading, MA: Addison-Wesley.

Radin, Beryl A. 2006. Challenging the Performance Movement: Accountability, Complexity, and

Democratic Values. Washington, DC: Georgetown University Press.

Richardson, Jr., Richard and Mario Martinez. 2009. Policy and Performance in American

Higher Education: An Examination of Cases Across State Systems. Baltimore: Johns

Hopkins University Press.


Rabovsky, Thomas. 2011. “Accountability in Higher Education: Exploring Impacts on State

Budgets and Institutional Spending Patterns.” Presented at the 2011 Public Management

Research Association Conference. Syracuse, NY.

Sanford, Thomas and James M. Hunter. 2010. “Impact of Performance Funding on Retention

and Graduation Rates.” Presented at the 2010 Association for the Study of Higher

Education Conference. Indianapolis, IN.

Schick, Allen. 2001. “Getting Performance Measures to Measure Up.” In Quicker, Better,

Cheaper: Managing Performance in American Government. Edited by Dall W. Forsythe.

Rockefeller Institute Press.

Shin, Jung-Cheol and Sande Milton. 2004. “The Effects of Performance Budgeting and Funding

Programs on Graduation Rate in Public Four-year Colleges and Universities.” Education

Policy Analysis Archives. 12(22), 1-26.

Talbot, Colin. 2005. “Performance Management.” In The Oxford Handbook of Public

Management. Edited by Ewan Ferlie, Laurence E. Lynn Jr, and Christopher Pollitt.

Oxford University Press.

Titus, Marvin A. 2006. “Understanding the Influence of Financial Context on Student

Persistence at Four-Year Colleges and Universities.” Journal of Higher Education.

77(2): 353-375.

Volkwein, J. Fredericks, and David Tandberg. 2008. “Measuring Up: Examining the

Connections among State Structural Characteristics, Regulatory Practices, and

Performance.” Research in Higher Education 49(2): 180-197.

Weibel, Antoinette, Katja Rost, and Margit Osterloh. 2009. “Pay for Performance in the Public

Sector – Benefits and (Hidden) Costs.” Journal of Public Administration Research and

Theory. 20:387-412.

Zhang, Liang. 2009. “Does State Funding Affect Graduation Rates at Public Four-Year Colleges

and Universities?” Educational Policy 23(5): 714-731.


Table One: Graduation Rates - All Students

                                                     Model 1       Model 2
Barron's Selectivity                                 6.000***      5.995***
                                                     (0.23)        (0.22)
Enrollment (in 1000s)                                0.151***      0.151***
                                                     (0.04)        (0.04)
Master's (Carnegie)                                  2.564***      2.626***
                                                     (0.54)        (0.53)
Research (Carnegie)                                  3.946***      4.020***
                                                     (0.73)        (0.73)
% Black Students                                    -0.238***     -0.243***
                                                     (0.01)        (0.01)
% Hispanic Students                                 -0.206***     -0.208***
                                                     (0.01)        (0.01)
Historically Black College or University            13.015***     13.510***
                                                     (1.24)        (1.27)
Instructional Expend. per Student (in $1000s)        0.091*        0.090+
                                                     (0.05)        (0.05)
Revenue Per Student                                  0.242***      0.241***
                                                     (0.02)        (0.02)
Average Faculty Salary (in $1000s)                   0.252***      0.254***
                                                     (0.02)        (0.02)
Performance Funding                                 -0.493*       -0.938*
                                                     (0.22)        (0.45)
% of Students Receiving Federal Aid                 -0.047***     -0.052***
                                                     (0.01)        (0.01)
Performance Funding * % Receiving Federal Aid                      0.013
                                                                   (0.01)
Constant                                            12.647***     12.785***
                                                     (1.15)        (1.17)
Observations                                         2974          2974
R2                                                   0.948         0.949

Panel corrected standard errors in parentheses. + p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001


Table Two: Graduation Rates - Black Students

                                                     Model 3       Model 4
Barron's Selectivity                                 7.433***      7.440***
                                                     (0.37)        (0.37)
Enrollment (in 1000s)                                0.096*        0.099*
                                                     (0.05)        (0.05)
Master's (Carnegie)                                  0.709         0.801
                                                     (0.92)        (0.91)
Research (Carnegie)                                  0.382         0.399
                                                     (1.22)        (1.22)
% Black Students                                    -0.032        -0.036
                                                     (0.02)        (0.02)
% Hispanic Students                                 -0.069**      -0.068**
                                                     (0.02)        (0.02)
Historically Black College or University             9.992***     10.379***
                                                     (1.86)        (1.82)
Instructional Expend. per Student (in $1000s)        0.327**       0.317**
                                                     (0.12)        (0.12)
Revenue Per Student                                  0.165***      0.166***
                                                     (0.04)        (0.04)
Average Faculty Salary (in $1000s)                   0.212***      0.214***
                                                     (0.03)        (0.04)
Performance Funding                                  0.006         1.322
                                                     (0.47)        (0.91)
% of Students Receiving Federal Aid                 -0.123***     -0.109***
                                                     (0.02)        (0.02)
Performance Funding * % Receiving Federal Aid                     -0.038+
                                                                   (0.02)
Constant                                             3.356+        2.776
                                                     (1.85)        (1.88)
Observations                                         2630          2630
R2                                                   0.809         0.810

Panel corrected standard errors in parentheses. + p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001


Table Three: Graduation Rates - Hispanic Students

                                                     Model 5       Model 6
Barron's Selectivity                                 5.384***      5.393***
                                                     (0.33)        (0.33)
Enrollment (in 1000s)                                0.168***      0.168***
                                                     (0.05)        (0.05)
Master's (Carnegie)                                  0.085         0.123
                                                     (1.14)        (1.14)
Research (Carnegie)                                  0.409         0.428
                                                     (1.24)        (1.24)
% Black Students                                    -0.103**      -0.102**
                                                     (0.03)        (0.03)
% Hispanic Students                                 -0.029+       -0.031+
                                                     (0.02)        (0.02)
Historically Black College or University             6.384*        6.394*
                                                     (2.96)        (2.94)
Instructional Expend. per Student (in $1000s)        0.141**       0.145**
                                                     (0.05)        (0.05)
Revenue Per Student                                  0.203***      0.202***
                                                     (0.03)        (0.03)
Average Faculty Salary (in $1000s)                   0.271***      0.269***
                                                     (0.04)        (0.04)
Performance Funding                                  0.458        -0.588
                                                     (0.52)        (1.08)
% of Students Receiving Federal Aid                 -0.153***     -0.166***
                                                     (0.02)        (0.03)
Performance Funding * % Receiving Federal Aid                      0.032
                                                                   (0.03)
Constant                                            10.143***     10.575***
                                                     (2.06)        (2.11)
Observations                                         2393          2393
R2                                                   0.772         0.771

Panel corrected standard errors in parentheses. + p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001


Figure One: Graphing the Interactive Effect of Performance Funding Policies

[Figure: the marginal effect of performance funding policies (y-axis, approximately -2 to 2) plotted against the percentage of students receiving Pell Grants (x-axis, 0 to 100). Dashed lines indicate 95% confidence intervals.]