Chapter 1 The Transparency of Transparency Measures

It’s one of the real black marks on the history of higher education that an entire industry that’s supposedly populated by the best minds in the country— theoretical physicists, writers, critics—is bamboozled by a third-rate news magazine. . . . They do almost a parody of real research. I joke that the next thing they’ll do is rank churches. You know, “Where does God appear most frequently? How big are the pews?”

—Leon Botstein, president of Bard College

We live in an era when individuals, organizations, and governments face pressing demands to be accountable.1 Not only do we expect actions to be transparent, we also expect them to be demonstrably transparent: the general public has the right to see disinterested evidence of performance, competence, and relative achievement. Quantitative measures seem to offer the best means to achieve these goals. They have the patina of objectivity: stripped of rhetoric and emotion, they show what is “really going on.” Even more, they can reduce vast amounts of information to a figure that is easy to understand, a simplicity that intimates that there is nothing to hide, and indeed that nothing can be hidden.

Consider, however, three recent controversies:

An inspector general’s review of a Veterans Affairs Health Care System hospital in Phoenix, Arizona, found that administrators had doctored records to make the wait times for medical appointments appear shorter than they actually were.2 In the face of an agency goal of thirty-day wait times for initial appointments—a goal to which bonuses and salary increases were tied—administrators misreported the wait times for some veterans and placed as many as 1,700 others on unofficial “secret lists” until appointments could be made for them. Veterans on these unofficial lists were not included in the hospital’s official statistics and therefore did not affect the hospital’s mean-wait-time statistic.3 The report states that this sort of gaming was not limited to the Phoenix hospital, but is a “systematic problem nationwide.”4

The New York Times reports inconsistencies with the Medicare rating system of nursing homes, which is based on awarding up to five stars for different aspects of the care offered.5 This rating system, designed to make quality distinctions among nursing homes, is used not only by consumers making decisions about elder care but also by referring doctors and insurers. Key components of the ratings, however, are self-reported, and the Times investigation shows that there is clear evidence of gaming. Homes manipulate their scores by temporarily hiring more staff before scheduled annual surveys, providing an inflated representation of their true staffing levels. They also misreport their quality measures, knowing that these data are not easily auditable. This means that even recognizably poor facilities can score well on the ratings. Advocates of the rating system claim that these numbers have led to improvements in the nursing home industry, but others point out that results are “implausible” and lead to a “false sense of security.”6

Facing the threat of sanctions, humiliation, and possibly even the closure of their school as a result of poor test results, teachers—by all accounts dedicated and otherwise conscientious—at Park Middle School in Atlanta admitted to systematically changing student answers on tests that determine whether schools are meeting the federal standards outlined by No Child Left Behind. Investigators later found that cheating by teachers in this district was rampant and attributed it to “a culture of fear, intimidation and retaliation that [had] infested a district” that was using data “as an abusive and cruel weapon to embarrass and punish.” This is by no means an exceptional case: Rachel Aviv reports that the Government Accountability Office found comparable instances of cheating in forty states.7

These three high-profile news stories provide glimpses into the unexpected problems that public measures designed to assure transparency and accountability can create: despite their appearance of objectivity and impartiality, measures are often the product of political processes and contain biases. Detailed investigations of how measures are constructed and implemented often make one less, rather than more, confident about their validity and reliability; the production of measures necessarily entails subjective decisions about how the measures are chosen, assembled, and weighted.

Measures create new incentives and power dynamics. Jobs and jobholders come to be defined by the numbers of their office or organization. As the pressure to chase better numbers increases, the quest for numerical improvement can be used as a weapon by those who are dissatisfied with specific outcomes or desire change. This pressure to produce the best numbers possible also motivates those in charge of the numbers to cheat. Numbers can be gamed and measures subverted, especially when financial or other motivating incentives are involved. Short of cheating, quantitative assessments drive people to “teach to the test,” focusing their attention on improving the numbers instead of the qualities the numbers are designed to represent.

Accountability measures do not produce transparency simply, or simply produce transparency. Although most do make some aspects of social processes more apparent, they are complicated constructions—often more complicated than we give them credit for—that have the tendency to transform the phenomena they are meant only to reflect. Even more, they nearly always displace discretion rather than expose what is hidden about social processes and phenomena. At worst, they create new forms of obfuscation, forms that pose new dangers because it is difficult to discern precisely how measures are constructed and what kinds of information are left aside. There is a deep irony here: transparency measures themselves often lack transparency.

Concerns about accountability measures are especially relevant because quantitative evaluation has come to permeate social life. If it seems difficult to escape talk of accountability and assessment, it is because these ideas are now far more common than in the past. Google’s Ngram Viewer shows a steep increase in the use of the words “rankings,” “transparency,” “accountability,” “audit,” “performance measures,” and “metrics” during the last two decades. These new forms of evaluation, along with the surveillance and discipline that go hand in hand with them, are applied to an astonishing range of organizations, from churches and schools to insurance providers and philanthropies, and permeate the activities within them. Increasingly, we even apply these principles of accountability to ourselves as new devices allow us to carefully measure and assess our fitness, mood, and sleep in an attempt to produce a “quantified self.” We are truly awash in the numbers that are used to provide evidence of accountability, transparency, and efficiency.

This book aims to develop a better understanding of this new culture of evaluation by thoroughly scrutinizing a single set of numbers and their consequences. Specifically, we examine the U.S. News and World Report (hereafter, USN) rankings of law schools and the sweeping changes these rankings have produced in legal education. We have spent more than ten years studying the USN rankings, conducting over two hundred in-depth interviews with law school students, faculty, and administrators; collecting observational data at schools, job fairs, and professional meetings; and combing through decades of school statistics, newspaper reports, online bulletin boards, and organizational documents. We contend that a close case study of this type is necessary to understand the full extent of these measures’ effects in terms of the pressures they generate, the psychological changes they induce, and the organizational behaviors, patterns, and routines they alter. The amount of work needed to unpack the complexities of a seemingly straightforward and simple numerical evaluation is itself a telling illustration of what numbers obscure: only through an intensive examination such as this one can we make the effects of transparency measures more transparent.

Our title, Engines of Anxiety, is meant to highlight two of the central points that this book makes about rankings and reactions to them. The first is the fear of falling in rank that dominates the consciousness of those subject to them. Nearly everyone we spoke with lived in dread of the inevitable day that new rankings would come out showing that their school had dropped to a worse number or tier, and many of the changes caused by the rankings can be directly traced to this fear. The second point is that rankings are structured to constantly generate and regenerate these anxieties and reactions. In much the same way as sociologist Donald MacKenzie documents the ways in which quantitative models actively produce financial markets in An Engine, Not a Camera, we show that rankings are constitutive rather than simply reflective of what they are attempting to measure.

RANKINGS

Rankings are a compelling example of accountability measures both because they are so common in contemporary society and because their precise comparisons generate intense competition among those being evaluated, a competition that makes the rankings’ effects easier to see. Rankings—of sports teams, cities, schools, police departments, doctors, lawyers, and so forth—are seemingly everywhere; there appears to be no limit to our demand to know who is the best and where we stand in relation to one another. We are so accustomed to rankings that they have become a naturalized way of making sense of the world. But while we often express doubts about the results of rankings—about where our team or city or school lands on a particular ranking—we rarely question the legitimacy of rankings per se or ask whether they are a productive way of evaluating the people or things they rank.

With the possible exception of sports teams, the ranking of schools is the most popular and influential form of ranking in the United States. A school’s rank serves as a status marker and a signal of what a degree might be worth. USN’s law school rankings, much like educational rankings of other fields, create a very public hierarchy among schools, one that overwhelms other conceptions of how schools might be compared to one another. Within this ranking universe of educational institutions, legal education is unique: in this field, one ranking entity has a monopoly on public perception, and all accredited law schools are ranked together according to the same metrics. These characteristics of law school rankings make it easier to directly connect school action to particular criteria used in the rankings and to see variations in how schools respond to the rankings. (Other fields in which rankings have a powerful influence, such as those of business schools and world universities, either have multiple rankers assessing schools in different ways or have schools divided into subgroups according to their characteristics and missions.)

Moreover, given the hyperimportance of status in the legal field8 and, at least in our experience, the tendency of lawyers to speak their mind, the legal field provides an ideal opportunity to document the anxieties produced by rankings for students, faculty, and administrators; the range of protest and criticism leveled at this form of public assessment; and the shockingly extensive efforts adopted by schools to “do something” to surpass—or often just to keep up with—peers and rivals. Our subjects were often very forthcoming and eloquent about their concerns about rankings as well as their battles with each other and USN.

All of these factors led us to focus our attention on how rankings have affected legal education. We emphasize, however, that while legal education provides a particularly clear window into the effects of rankings, the dynamics created by rankings are very similar in other contexts. As we show in chapter 7, the patterns of evaluation and response created by rankings in legal education, as well as the patterns of effects they produce, are apparent not only in other forms of educational rankings (of undergraduate institutions, medical schools, business schools, graduate programs, and world universities), but also in nearly every other form of public quantitative assessment: from the ratings of doctors and hospitals to the management of crime statistics; from the measure of “hits” newspaper articles receive to international indices of corruption and well-being.9 We are confident that the dynamics we document here—the redistribution of attention and effort, the gaming strategies, the anxiety—will be familiar to everyone in higher education and the many other fields now subject to rankings and other accountability metrics.

EMPIRICAL AIM

The empirical aim of this book is to meticulously document the effects, both intended and unintended, that rankings have had on the field of legal education and to demonstrate how they have changed law schools, influenced the people who work and study within these schools, and altered the perceptions of the external constituents who play a powerful role in directing the future course of these schools. In the following chapters we carefully trace the influence of rankings through legal education, showing how rankings affect prospective students, admissions, deans, faculty, career services, alumni, and employment. This approach allows us to demonstrate the extent to which rankings have permeated the decision-making of schools from the bottom to the top of the status hierarchy.

Broadly, we argue that rankings produce a new status system that reorganizes how law schools are stratified. This broad change in legal education influences how law schools define their goals, admit students, and deploy resources, and how employers evaluate candidates. These kinds of changes affect who gets to be a lawyer, what kind of lawyer he or she becomes, people’s sense of their own status, and the ways legal jobs are allocated among schools and persons. Because rankings are standardized algorithms applied to all schools, they promote a single, idiosyncratic definition of what it means to be a “good school” and punish schools that do not conform to the image of excellence embedded and embodied in the rankings.

The practical consequences of rankings, as will be made apparent throughout the following chapters, are wide-ranging and often disparate. Our data show that rankings have altered how people in all types of law schools make decisions, allocate resources, and think about themselves and others. Nearly everyone would agree with the dean of a school ranked outside the top fifty who said, “[Rankings] are always in the back of everybody’s head. With every issue that comes up, we have to ask, ‘How is this impacting our ranking?’ ” At the same time, our data also demonstrate that although the rankings are a force with which all law schools must constantly contend, not all law schools are affected—or choose to respond—in the same way. Even more, the pressures that rankings exert on schools change over time and in light of current events. Take, for example, the turmoil created by an event like the Great Recession and its aftermath. The downturn transformed the economic environment and was accompanied by a severe constriction of the legal job market and a steep decline in law school applications.10 This upheaval changed many aspects of legal education, but it did not diminish the attention schools paid to rankings or the efforts they made to achieve a higher rank; it only mediated these effects. As highly ranked schools were much less affected by these events, improving rankings once again became an expedient answer for schools looking to fill classes or ensure future employment for more of their graduates.

It is challenging to catalog such diffuse effects, but we have been able to identify three particularly powerful categories of transformations produced by these measures: they transform the power relations within schools, day-to-day organizational practices, and the ways professional opportunities are distributed. These categories provide perspective on the changes created by rankings while also pointing toward the types of unintended consequences that the implementation of other accountability measures can produce generally.

THEORETICAL AIM

In addition to documenting the practical effects of rankings, our study of the consequences of rankings on schools provides new insights into how we understand the nature of quantitative measures and the changes they engender. First, we develop an explanation of how rankings generate the far-reaching and transformative effects we describe throughout the book. We argue that the “reactivity” of social measures is a key reason for the changes generated by rankings. In the case of law school rankings, for instance, “reactivity” refers to the fact that the measures do not simply reflect an underlying social hierarchy but in fact play a crucial role in creating this hierarchy by changing how people think about and react to law school and legal education. This new cognitive map of the field of legal education powerfully influences how quality is defined and which aspects of legal education are prioritized. We believe that reactivity is a key component of all social measures and is often responsible for the unintended consequences these measures produce. In chapter 2 we take a close look at the mechanisms that contribute to this reactivity—commensuration, self-fulfilling prophecies, narrative, and reverse engineering—and explain why this process is so potent.

Second, our work provides new insights into how numbers create accountability and, importantly, specifies the distinctive type of accountability they create. We argue that the accountability produced by quantitative assessments like rankings is best characterized as “selective accountability,” meaning that these assessments hold people or organizations accountable on some dimensions while obscuring other aspects of the processes they measure. These biases are often overlooked not only because numbers are useful simplifications of complex social realities, but also because they are granted a great deal of cultural authority. We tend to see numerical measures as objective and legitimate because we associate them with technical efficiency as well as mathematical and scientific rigor.

Finally, our detailed examination of the effects of rankings provides a rare opportunity to explore the often-ignored moral aspects of public measurement. Moral assumptions—such as about what is a good school, what is a good student, what is valuable about education—are embedded in the measures, but these assumptions tend to become invisible in the face of quantitative authority. In this book we hope to draw attention to the ethical dilemmas that quantitative assessments create for those who must manage them, the culture of cynicism that the mind-set of “playing to the test” promotes, and, most generally, the importance of distinguishing between the inarguable usefulness of quantification and the moral implications of creating elaborate measures of everything.

In examining these broad questions, we draw on and contribute to traditions in the sociology of culture and organizations. Like other sociologists, we see cultural processes as instrumental in creating, reinforcing, and redressing inequality.11 Rankings have changed the stratification system of higher education by transforming shared understandings about what education is for; who should have access to it; what it means to be a good student, faculty member, scholar, or administrator; and what excellence is. Even more profoundly, rankings have replaced a system of loosely structured meanings and symbols with arbitrary precision that erases ambiguity while also eroding the authority of professional expertise in favor of calculation and numerical facility.

Along these same lines, rankings also change the categories and schemas through which people and schools are classified and evaluated. Categories are a form of symbolic boundary that allow us to make distinctions between what (or who) should be included and what (or who) should not.12 As we show, rankings directly change how we categorize schools: we now focus on a specific numerical rank or tier instead of on earlier, more nuanced definitions of what qualifies as a worthy school. In doing so, rankings reorient how both insiders and outsiders conceptualize where schools stand in relation to one another and how the field of legal education is structured. In short, these cultural changes have transformed not only the traditional meaning of legal education but also the values that define this field.

This book also speaks directly to organizational research by demonstrating the distinctive effects of performance measures on organizations and the fields in which they operate. Throughout the book we document the many ways in which these measures impose organizational change: rankings have clear effects on organizational behavior, policy, and strategy. One reason that rankings generate such extensive change is that they override the strategies organizations normally use to manage external pressures. As many scholars have pointed out, organizations often respond to external threats or interference by developing symbolic responses that leave their core activities untouched. New regulation, for example, often results in the creation of formal departments or committees that take on the appearance of compliance but effect little change: organizations may create offices, put ineffective programs into practice, or develop policies that may never be implemented in order to appear responsive.13 This “buffering” of the organization from external efforts to influence it is circumvented in the case of rankings. Rankings make symbolic compliance harder to achieve by allowing outsiders to easily scrutinize the organization, by creating powerful inducements to try to game the rankings, and by providing insiders with incentives to both adopt the goals implicit in rankings and internalize rankings as a form of professional identity. Accountability measures like rankings are often simultaneously coercive and seductive, and this is what makes them such powerful agents of transformation.

These more general questions about how social measures work and the nature of the accountability that they produce both contextualize our findings and, more important, provide a framework for understanding the ramifications and limitations of rankings, ratings, and accountability measures. With these broad implications in mind, we now turn to the details of the numbers for our particular case, explaining the history and construction of law school rankings.

A BRIEF HISTORY OF RANKINGS

Rankings of U.S. universities date back more than a century. James Cattell’s American Men of Science, published in 1910, ranked schools on the basis of the number of eminent scientists they produced.14 In 1911, the United States Bureau of Education created an evaluative system that divided hundreds of American colleges and universities into one of five tiers, but this ranking was never published owing to the outcry of college officials who believed their school’s quality was not fairly represented in the assessment and were fearful of the reaction of their constituents.15 Raymond Hughes is credited with creating the first rankings of graduate programs in 1925 and, in conjunction with the American Council on Education, again in 1934.16 These early evaluations and others that followed were prepared for and mostly used by academics. It was only much later, in the 1980s, that popular media regularly began producing rankings of colleges and graduate programs intended for consumers rather than educators. U.S. News and World Report helped pioneer media rankings when it published its first issue ranking colleges in 1983.17

When Mort Zuckerman acquired U.S. News and World Report in 1984 and became its editor in chief, it was a lackluster news weekly overshadowed by its more successful rivals, Newsweek and Time. Zuckerman hired a new group of editors to enliven and distinguish USN, one of whom was Mel Elfin, the former Washington bureau chief for Newsweek. Earlier, USN had published the results of two simple surveys in which college presidents named the best college and universities. Elfin recalls that it was Zuckerman’s idea to expand these rankings and issue them annually as a way to solidify USN’s reputation as the magazine providing “news you can use.”18 As editor of special projects, Elfin was charged with figuring out ways to do this, and he quickly became a driving force behind rankings. He remembers feeling skeptical and daunted by the task that his bosses “dropped on his desk,” wondering if it was possible to get the right kind of information, whether they could make sense of it if they did get it, and, finally, whether anyone would notice.

They did notice. After publishing initial surveys in 1985 and 1987, USN produced revised college rankings in 1988, incorporating statistics collected from colleges, public sources, and results from a survey of college administrators. After 1988 the college rankings issue was published annually. Readers bought it in droves.19 In 1990, USN built on its success with an annual issue dedicated to rankings of graduate schools (the magazine had published a more rudimentary version of graduate rankings once before, in 1987). Among those ranked were schools of law, medicine, business, education, and engineering, as well as graduate programs ranging from chemistry to music. The first law school rankings were derived from a simple survey sent to deans asking them to name the 10 best American law schools. Deans from 96 of 183 accredited schools responded, and Elfin’s staff culled from their responses a list of the “Best Law Schools.”20 This admittedly crude early model spawned enough interest to warrant further development. In 1990, when USN began ranking law schools and other graduate programs annually, they used more sophisticated rankings that combined survey data on reputations with statistical measures.
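The general logic of such a composite ranking, in which reputation survey results are blended with statistical measures into a single ordered list, can be sketched in a few lines. The schools, measures, and weights below are invented purely for illustration; they are not USN’s actual data or formula.

```python
# Hypothetical composite ranking: each school's score is a weighted
# sum of a reputation survey result (on a 1-5 scale, normalized to
# 0-1) and two statistical measures already on a 0-1 scale.
# All names, numbers, and weights here are illustrative inventions.

schools = {
    "School A": {"reputation": 4.2, "selectivity": 0.90, "placement": 0.85},
    "School B": {"reputation": 3.8, "selectivity": 0.95, "placement": 0.80},
    "School C": {"reputation": 4.5, "selectivity": 0.70, "placement": 0.75},
}

# Hypothetical weights; they sum to 1.0.
weights = {"reputation": 0.40, "selectivity": 0.35, "placement": 0.25}

def composite_score(measures):
    """Weighted sum of the school's measures, with reputation normalized."""
    normalized = dict(measures, reputation=measures["reputation"] / 5.0)
    return sum(weights[k] * normalized[k] for k in weights)

# Rank schools from highest to lowest composite score.
ranked = sorted(schools, key=lambda s: composite_score(schools[s]), reverse=True)
for position, name in enumerate(ranked, start=1):
    print(position, name)
```

Note that the ordering depends entirely on the chosen weights: a different weighting reorders the same schools, which is one concrete sense in which a ranking embodies a single, contestable definition of a “good school.”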

The man in charge of designing and implementing this more complex ranking methodology was Robert Morse, the responsive and quietly passionate director of data research at U.S. News and World Report. Morse has worked at the magazine since 1976; he now presides over USN’s rankings empire from a small office in a corner of the magazine’s Washington headquarters. Morse has been described by one boss as “the brains of the operation, the heart and soul of the engine,” as “Mr. Rankings” by colleagues, and as maybe “the most powerful man you never heard of” by a reporter for a college magazine.21 Morse oversees the production of the USN college and graduate school rankings. According to Morse, law school rankings have been the most popular and controversial of the graduate and professional school rankings.22

In launching their annual rankings of law schools in 1990, USN invoked the language of the marketplace to frame rankings—perhaps borrowing from the example of Consumer Reports, a respected magazine that had been evaluating products for consumers since 1936. USN’s law school rankings provide consumers with useful information about a specialized product market: legal education. Mel Elfin told readers, “The sad truth is that those who face the daunting prospect of raising upwards of $75,000 to finance a legal education often can find out more information on the relative merits of two $250 compact disc players than on the relative merits of law schools.”

Over time, the rationale for rankings was adapted to new uses and linked to new concerns. Students and parents are not the only ones who are concerned with the quality of educational products. Employers also care deeply about the schools their employees come from. It seems unlikely that when the editors at USN launched their rankings issues they imagined law firms would one day be using them to screen candidates; but as they became aware of this practice, the magazine elaborated its framing of rankings. Elfin, speaking in 1998, explained that law firms and their clients should also care about rankings: “People want to know, when they walk into the hiring partner’s office, what does it mean when it says X school on your résumé, and what does it mean when it says Y school? This is part of the thing you’re buying.”23

With the advent of vibrant “accountability movements” spanning many countries and institutions—health care, government, business, philanthropy, and especially education—rankings are now advertised as addressing even more fundamental concerns. Rankings, the publishers suggest, offer ordinary people the means for holding powerful organizations accountable to those who use their services or buy their products. Robert Morse’s blog quotes approvingly from a speech made by Margaret Spellings, U.S. secretary of education, to a group of education accrediting officials on December 7, 2007. She praised her audience for their oversight of education. As evidence of the public demand for accessible information about choosing and paying for college, she enlisted the example of USN college rankings: “If you ever doubt the need or appetite for your mission, consider the U.S. News college rankings. It’s been called the ‘Swimsuit Edition’ of postsecondary reporting. Within 72 hours of its release, the U.S. News
website was viewed 10 million times. There’s a reason why this magazine is so popular.” Morse wrote in his blog:

U.S. News and U.S. Secretary of Education Margaret Spellings share an important goal. We both believe that there should be considerably more transparency at colleges and universities so prospective students and their parents can be informed about the costly and very important decision of which college to attend. In fact, U.S. News has been a leader in the drive for increased accountability among higher education institutions, and our rankings have been one of the factors that have pushed schools to publish more evaluative and consumer-friendly information about themselves.24

Rankings illustrate a general pattern in innovation and diffusion. As a new idea or technology becomes available, and as new groups begin to use it, the meaning of the innovation changes, including the nature of the problems it is designed to solve. As Michael D. Cohen, James G. March, and Johan P. Olsen suggest, problems, solutions, and decision-makers need not be aligned; successful “solutions” often go in search of new problems to which they can be applied both inside and among organizations.25

In offering consumers better information about making their college investment, rankings respond to dramatic changes in higher education in the United States, many of which began after World War II. After returning soldiers and, later, their baby boomer children began entering colleges in large numbers, the meaning of higher education changed. The federal government invested enormously in expanding higher education and making it accessible and affordable for students who, a generation earlier, would never have considered it.26 As colleges expanded, proliferated, and provided social mobility, the field of education was transformed. Once attending college became routine for a higher percentage of Americans, that one had attended college was no longer enough: where one attended college became increasingly important, and stratification among colleges and universities sharply increased.27 This shifting stratification quickly began to shape career trajectories and income. Those who attend a selective college can clearly benefit from the institution’s status and the robust social networks that this selectivity affords, and they may enjoy higher incomes than those who attend less selective schools.28

The stratification of schools is also closely associated with another important trend: the nationalization of the market for higher education. As late as the 1960s, most people went to college near where they lived, options were fewer, and it was easier to learn about the various alternatives through informal processes. Even Ivy League schools were still largely
regional colleges, and most who could pay their tuition were admitted.29 In the 1950s, Harvard admitted 60 percent of its applicants, whereas in 2015 it admitted just 5.3 percent of a much larger pool of applicants.30 This shift toward greater selectivity began most prominently in the 1960s as part of an emphasis on merit rather than class or legacy in admissions decisions.31 One result of these changes is that competition for admission to the top schools increased dramatically, beginning in the early 1980s.32 This places enormous pressure on upper- and middle-class applicants and their parents, who are well aware of the social and economic stakes involved in getting into the “right college.” Patricia McDonough argues, “This knowledge, however tacit, is a bone-chilling wind blowing through suburbia where the dread of downward mobility is very real.”33 She reports that parents no longer feel as though their experience is helpful to their children; many view college admissions as an “erratic, chancy game” over which they have little control.34 Such anxiety and insecurity are also fueled by a glut of media stories on the competition, high stakes, and travails involved in getting into a “good” college.35 Emerging from all this uncertainty and competition is what McDonough describes as a “college admissions industry” that includes private admissions counselors, test-preparation companies, a proliferation of guidebooks of all sorts, and, of course, media rankings.36 Most of these trends in undergraduate education apply to law schools and other professional schools as well as to colleges and universities. Most salient for our purposes is that students applying to law schools in the last decade have grown up amid this hysteria over admissions, and that magazine rankings are a direct response to these changes.

With the stakes higher, the competition fiercer, and the choices more elaborate, the appeal of rankings for applicants and their families is easy to understand. Accessible information that simplifies important decisions filled a void for people who felt overwhelmed by the complexity of college admissions, and this helps to explain the popularity of the rankings. As David Webster has pointed out, colleges (and law schools), eager to market themselves in the best possible light, produce information that is self-serving.37 Students know they are objects of sophisticated marketing campaigns; after enough photos of bucolic settings populated by beautiful, diverse students, it is easy to become skeptical of the PR.

But there are other reasons for the popularity of rankings. Webster and other commentators make note of Americans’ “mania for rankings.”38 It is difficult to pin down the origins of this penchant, but a quick perusal of magazines testifies to its breadth, and it is clear that the media both cater to and cultivate this taste. Theodore Porter suggests that there has long been a peculiar relationship between quantification and populism in the United States, especially surrounding politics and administrative culture;
how policy is scrutinized and defended, Porter argues, sheds light on our ambivalent relationship to elite power and expertise.39 Perhaps this trait reflects America’s fascination with the self-made man or a political culture that idealizes the wisdom of the common people. Perhaps it is because we are especially wary of some forms of elite discretion, yet nonetheless believe fervently in progress and the importance of expert knowledge in improving our lives. Or maybe we fancy rankings because they are embedded within a vibrant popular culture dedicated to self-improvement, social mobility, and knowing how one compares to the competition; the widespread fixation on sports rankings in the United States lends support to this view. Whatever the precise origins of this collective fascination with lists and rankings, it is so familiar and extensive that we enjoy mocking our obsession. The genre of the top-ten list, made famous by the comedian David Letterman, now appears regularly on T-shirts and websites, in barroom debates, and, as we ring in the New Year, in newspapers and magazines everywhere.

Given these big changes and widespread predilections, it is perhaps not so surprising that Mort Zuckerman’s hunch paid off. In creating rankings he was tapping into a broad anxiety about important shifts in the stratification of schools and the evolving role schools play in mediating social standing more broadly. Rankings became extremely lucrative for USN and are now, in today’s parlance, so fundamental to its “brand” that it is hard to imagine USN without them. According to Mel Elfin, rankings “became, essentially, our franchise.”40 “As the editor,” he says, “I am very proud of [rankings]. It kept the magazine in the game. When you talk about colleges the first thing that comes to mind for many young people is USN rankings.”41

HOW USN CALCULATES LAW SCHOOL RANKINGS

USN law school rankings are made up of four general indicators. Reputation accounts for 40 percent of a school’s score and is based on surveys sent to academics and practitioners. Selectivity determines 25 percent of the overall score and is calculated using student Law School Admission Test (LSAT) scores and grade-point averages (GPAs) and the school’s acceptance rate (the ratio of students accepted to those who applied). Placement success accounts for 20 percent of the overall ranking and is based on the percentage of students employed at graduation and nine months after graduation and the percentage of students who pass the bar exam. Finally, “faculty resources” represents 15 percent of the overall ranking and is composed of four separate measures: expenditure rate per student (for instruction,
library, and supporting services), student-faculty ratio, “other” per-student spending (primarily financial aid), and the number of volumes in the library. These factors account for 65 percent, 20 percent, 10 percent, and 5 percent of the faculty-resources indicator, respectively. To compute the final ranking, each school’s score is standardized. These scores are then weighted, totaled, and rescaled so that the top school receives a score of 100 and other schools receive a percentage of the top score.
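The procedure just described—standardize each indicator, weight and total the results, then rescale so the top school receives 100—can be sketched in a few lines. Only the four top-level weights come from the text; the school names, raw indicator values, the collapsing of each indicator into a single number, and the exact rescaling step are assumptions made for illustration.

```python
# A minimal sketch of the composite-score procedure described above.
# The four weights follow the text; everything else (school names, raw
# values, the rescaling formula) is hypothetical, since USN does not
# publish every computational detail.

from statistics import mean, pstdev

WEIGHTS = {
    "reputation": 0.40,         # academic and practitioner surveys
    "selectivity": 0.25,        # LSAT, GPA, acceptance rate (collapsed here)
    "placement": 0.20,          # employment and bar-passage rates
    "faculty_resources": 0.15,  # spending, ratios, library volumes
}

# Hypothetical raw indicator values for three schools.
schools = {
    "School A": {"reputation": 4.8, "selectivity": 172, "placement": 0.97, "faculty_resources": 90},
    "School B": {"reputation": 3.9, "selectivity": 165, "placement": 0.91, "faculty_resources": 70},
    "School C": {"reputation": 2.5, "selectivity": 158, "placement": 0.82, "faculty_resources": 55},
}

def composite_scores(schools, weights):
    """Standardize each indicator (z-score), weight and total them,
    then rescale so the top school scores 100 and the rest fall below."""
    names = list(schools)
    totals = dict.fromkeys(names, 0.0)
    for indicator, w in weights.items():
        values = [schools[n][indicator] for n in names]
        mu, sigma = mean(values), pstdev(values)
        for n in names:
            z = (schools[n][indicator] - mu) / sigma if sigma else 0.0
            totals[n] += w * z
    # Weighted z-score totals can be negative, so one simple rescaling
    # (an assumption) shifts the minimum to zero and scales the top to 100.
    lo, hi = min(totals.values()), max(totals.values())
    return {n: round(100 * (totals[n] - lo) / (hi - lo), 1) for n in names}

print(composite_scores(schools, WEIGHTS))
```

Because every indicator is standardized before weighting, a component on which schools differ sharply can dominate the final ordering regardless of its nominal weight—a property that figures in the critiques discussed later in the chapter.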

Along with these composite rankings, USN evaluates eight specialty areas. These specialty rankings are derived from surveys sent to legal educators, who pick the top fifteen schools for the designated specialties. Recently, USN evaluated specialties in clinical training, dispute resolution, environmental law, health-care law, intellectual property law, international law, tax law, and trial advocacy. The magazine has changed how some components are constructed, but the basic structure has remained the same. In adapting its rankings, USN has made them more comprehensive and precise over time.

USN treats law schools differently than other professional schools. Whereas in the other professional fields only the top twenty-five or top fifty schools are ranked, since 1992 the magazine has ranked every law school accredited by the American Bar Association (ABA), the accrediting body for U.S. law schools. For most of this period, law schools were divided into four tiers: the top tier listed the fifty highest-rated programs, by ranking, with other schools divided into the second, third, and fourth tiers and listed alphabetically within each tier. Beginning with the 2004 rankings, USN expanded its ordinal rankings by reporting the top one hundred law schools by rank and dividing the rest into the third and fourth tiers, again listing these schools alphabetically. Now USN ranks the top one hundred fifty schools and leaves only the fourth tier—approximately fifty schools—unranked. Because USN publicly evaluates every school, not just the most elite schools, the influence of rankings has become pervasive in legal education.

Each year USN asks schools to prepare an elaborate report to supply the information the magazine uses to compile rankings. Initially some schools refused to comply, so the magazine estimated the missing information. Administrators doubted the validity of these estimates, suggesting that they were almost always conservative, punishing schools that did not provide information to USN by ranking them lower than they would have been had they submitted it. Some administrators complained about providing what they saw as an unwarranted subsidy to a for-profit firm. Generating the information for USN remains tedious and time-intensive, but USN forms have evolved to conform more closely to the statistical reporting requirements of the ABA; this has made the reporting process less laborious for schools and easier for USN to verify.

16 Engines of Anxiety

Many law school administrators and faculty are highly skeptical of rankings methodology, and their criticisms have helped fuel the controversy over rankings. Although evaluating the rankings is not our focus, it is worth noting that many researchers have criticized USN’s methods strenuously. Several papers have outlined the weaknesses of each measure employed by USN.42 A study by Stephen Klein and Laura Hamilton, commissioned by the Association of American Law Schools, presented a scathing critique of USN’s methods.43 The authors depict the twelve measures used by USN to construct their rankings as deeply flawed, but their most eye-catching finding is the disproportionate influence of LSAT scores in determining the differences in rank among schools.44 Klein and Hamilton conclude:

About 90 percent of the overall differences in ranks among schools can be explained solely by median LSAT score of their entering classes and essen-tially all of the differences can be explained by the combination of LSAT and Academic reputation ratings. Consequently, all of the other 10 factors US News measures (such as placement of graduates) have virtually no effect on the overall ranks and because of measurement problems, what little influence they do have may lead to reducing rather than increasing the validity of the results.45

Brian Leiter, a law professor at the University of Chicago and the creator of an alternative set of rankings, the “Educational Quality Rankings of U.S. Law Schools,” criticizes USN rankings for excluding information about scholarship and for harboring biases against large public schools.46 Michael D. McGuire’s analysis of college rankings highlights the volatility of rankings, showing how small adjustments in the weights attached to various components generate wild fluctuations.47 He also criticizes the weights attached to components as reflecting the judgment of editors rather than being informed by empirical research.
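McGuire’s volatility point can be seen in a toy example: with two schools and two components, nudging the reputation weight from 0.60 to 0.55 reverses the ordering even though the underlying scores never change. The school names and component scores below are invented solely for illustration.

```python
# Toy illustration of rank volatility under small weight changes.
# Both schools' normalized component scores are hypothetical.

def rank_order(scores, w_reputation):
    """Order schools by a weighted sum of two components."""
    w_placement = 1.0 - w_reputation
    totals = {
        name: w_reputation * s["reputation"] + w_placement * s["placement"]
        for name, s in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "School X": {"reputation": 0.90, "placement": 0.60},
    "School Y": {"reputation": 0.70, "placement": 0.85},
}

print(rank_order(scores, w_reputation=0.60))  # School X ranks first
print(rank_order(scores, w_reputation=0.55))  # School Y ranks first
```

Since the weights are editorial choices rather than empirically derived quantities, reorderings like this reflect the judgment of editors as much as any difference between the schools.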

Legal educators from schools across the rankings spectrum have also emphasized the important qualities that rankings ignore. For example, a letter published in 1997 by the Law School Admission Council (LSAC), the organization that administers the LSAT, and signed by nearly every dean of an accredited law school was sent to every prospective law student for years. It lists twenty-two factors that students identified as “among the most important in influencing their choices of law school.”48 Among these factors not included in USN rankings are the quality of teaching, the accessibility of teachers, racial and gender diversity within the faculty and student body, the size of first-year classes, the strength of alumni networks, and tuition.49 In an unpublished study, Richard Lempert, a sociologist and law professor at the University of Michigan, characterizes rankings as
“pseudo-science” and describes every rankings factor as deeply flawed.50 Judith Wegner, past dean of the University of North Carolina’s law school, depicts USN’s methods as “so seriously flawed that it makes any thinking person despair [of] journalistic ethics.”51 But not all deans feel this way. David Van Zandt, the former dean of Northwestern Law School who became the president of the New School for Social Research in 2011, believes that rankings are useful, if imperfect, measures and simply formalize reputations that were already widely known.52

Our focus is on law schools, but the patterns we describe here are not unique to them. Accountability measures such as rankings simplify complex and dynamic institutions, but they are rarely the neutral technical feat we sometimes imagine them to be. Because people are reflective and reflexive, they tend to react to being measured in unanticipated ways. People scrutinize rankings, worry over them, invest in them, and act differently because of them, and these behaviors change the institutions that rankings evaluate. In short, measures are hard to control. So it is important that we understand the results of our efforts to create accountability. To do so, we will need to examine numbers as they are used rather than simply assuming that we know what their effects will be. This will entail an up-close examination. Law schools offer a useful vantage point, a good place to begin.

OVERVIEW OF THE BOOK

In chapter 2 we discuss in detail the context of our arguments and the theoretical contributions of the book. In particular, we lay out the history of accountability measures and the particular form of accountability that numbers provide. We then offer an explanation for why numerical accountability as embodied in rankings can produce such powerful effects and unintended consequences. In a nutshell, we argue that rankings and other measures are “reactive”: instead of simply providing a neutral measure of social phenomena—as a thermometer provides a neutral appraisal of temperature—these measures change how people conceptualize the social world. In other words, measures change what they are designed to reflect and in doing so transform how actors see themselves and make decisions. We elaborate on the reactivity of social measures by outlining five key ways in which this reactivity drives change.

We organize the remaining chapters by constituency, loosely conforming to the academic cycle. We begin in chapter 3 with prospective law students, whose use of rankings illuminates the demand for rankings and their populist appeal. Rankings simplify difficult decisions about where to apply to and attend law school. We show how, why, and when rankings
are used by students, how they use them to evaluate the competence of administrators, and how they become internalized as symbols of professional status.

We next examine how rankings affect the work of admissions officers who must juggle commitments to admitting accomplished and diverse classes with protecting their schools’ selectivity as measured by USN. In chapter 4 we show how rankings affect who is admitted to which programs in law schools, the content of admissions work, and the moral and professional identities of administrators.

From admissions, we move to the deans’ offices. Deans are responsible for defining and implementing the mission of law schools, overseeing the hiring and firing of faculty and staff, presenting the public face of law schools to external groups, and raising funds for their schools. In chapter 5 we describe how rankings have decreased the discretion of deans, changed the terms under which they are held accountable, and shaped how they relate to peers, alumni, and employers.

In chapter 6 we describe how rankings affect administrators in career services departments in their efforts to help students secure good jobs. We explain how rankings exacerbate pressures to improve placement statistics and how this has encouraged schools to shift resources toward tracking students for the purpose of USN reporting and away from the counseling and network building that have been their traditional purpose. In particular, we examine how employers use rankings to decide whom to hire and interview and how anxiety about this use drives reactivity in career services.

In chapter 7 we summarize the effects of rankings on legal education and describe how rankings have changed in reaction to law schools’ responses to them. We also call for new and more sophisticated empirical studies of the effects of performance measures such as rankings and for research that evaluates alternative forms of evaluation to complement quantitative measures. We discuss how our findings can be applied to other types of rankings, emphasizing that while some effects are unique to law schools, the reactivity of public measures is a central feature of all modern institutions and modern identities. We conclude by highlighting the often-overlooked moral dimensions of performance measures.
