
Performance Management of Inter-Organizational Governance Networks: Understanding the Effects of Contextual Complexity and Collaborative Capacity on the Prioritization of Performance Measures for MPOs

Asim Zia (Corresponding Author)
Department of Community Development and Applied Economics, University of Vermont; 146 University Place, Burlington VT 05405
Phone: 802-656-4695; Fax: 802-656-4447; Email: [email protected]

Christopher Koliba
Department of Community Development and Applied Economics, University of Vermont; 146 University Place, Burlington VT 05405
Phone: 802-656-3772; Fax: 802-656-4447; Email: [email protected]

Jack Meek
College of Business and Public Management, University of La Verne; 2220 Third St., La Verne CA 91750
Phone: 909-593-3511; Fax: 909-596-5860; Email: [email protected]

Erica K. Campbell
Integrative Sustainable Solutions (ISESS), LLC; 379 Marshall Rd., Waterbury, VT 05676
Phone: 802-272-0886; Email: [email protected]

Brian H. Y. Lee
College of Engineering and Mathematical Sciences and Transportation Research Center, University of Vermont; 210 Colchester Ave., Burlington VT 05405
Phone: 802-656-1306; Fax: 802-656-9892; Email: [email protected]

Diana Colangelo
Transportation Research Center, University of Vermont; 210 Colchester Ave., Burlington VT 05405
Email: [email protected]



Performance Management of Inter-Organizational Governance Networks: Understanding

the Effects of Contextual Complexity and Collaborative Capacity on the Prioritization of

Performance Measures for MPOs

ABSTRACT

The performance management of emergent governance networks poses considerable challenges for federal and state governments when they provide base funding to sustain network operations, as with the federally funded Metropolitan Planning Organizations (MPOs). Under federal legislation enacted over the last two decades, MPOs have been mandated to develop and sustain inter-governmental and inter-organizational governance networks for developing and implementing short-, medium- and long-range transportation plans. Neither the conventional top-down uniform imposition of performance measures nor a demand-driven bottom-up generation of performance measures for each MPO appears to provide adequate performance management options for MPOs. Instead, a more sophisticated "alternative path" is needed to prioritize performance measures for MPOs, one that takes into account their contextual complexity and their technical, administrative and collaborative capacity. Drawing on data from a September 2009 Government Accountability Office (GAO) survey of all 381 MPOs in the United States, with an 86% response rate, we test four hypotheses and propose such an "alternative path" for the federal government to effectively manage the performance of MPOs. We find that the size of the community where an MPO is located and the collaborative capacity of an MPO have powerful and significant effects on the differential choice of performance measures. Instead of imposing a uniform prioritization of performance measures on MPOs, we propose that federal government agencies develop more sophisticated performance management weighting schemes for MPOs serving communities of different sizes and contexts, and operating networks with different structures, functions, and technical and collaborative capacities. Implications are drawn for theory development regarding the relationship between inter-organizational governance network capacity and performance management, and for research on performance indicator use and valuation in regional planning, public administration and public management practice.

1. INTRODUCTION

Those who have studied the use and valuation of performance data within inter-

organizational governance network arrangements have noted the critical challenges that confront

network administrators who are responsible for developing and operating performance

management systems within these settings (Radin, 2006; Frederickson and Frederickson, 2006;

Yang, 2007; Moynihan, 2008; Koliba et al., 2010). Conlan and Posner (2008: 4) observe that as

“our need for understanding complex intergovernmental relationships is growing, our

institutional capacity for intergovernmental monitoring and analysis has been weakened.” Case

studies of organizations that have successfully used performance data to inform decision-making

and action have noted the existence of a “performance measurement culture” that persists within

these organizations (Moynihan, 2008). In particular, Frederickson and Frederickson (2006) have

noted how certain health care delivery, health insurance, and health research inter-organizational

networks have developed this kind of culture.

With the increased attention given to the important role that collaboration plays within

networked settings, other researchers have noted how successful performance management in

networked settings hinges on the “collaborative capacity” of the actors within the network

(Moynihan and Pandey, 2005; Page, 2008; Weber and Khademian, 2008). Collaborative

capacity within networked settings is defined here as, “the ability of organizations to enter into,


develop, and sustain inter-organizational systems in pursuit of collective outcomes” (Hocevar, et

al., 2006: 264). It has been suggested that a lack of collaborative capacity leads to frequent

conflict and substantial differences about what defines effective performance, how performance

data should be collected, and how performance data should be integrated (Page, 2004). In other

words, this body of research suggests that the extent to which performance data is used and

valued within a network hinges on the capacity of network actors to collaborate.

In this article we analyze the relationship between performance management and

collaborative capacity within the context of metropolitan planning organizations (MPOs) across

the United States. All MPOs are situated within larger transportation planning networks

comprised of institutional actors that often include local governments, state and federal agencies,

and interest groups. The MPOs are thus real-world laboratories to study inter-organizational

governance networks in action. Under the current legislation, MPOs are required by the United

States Department of Transportation to collect and devise processes for using certain kinds of

performance data. The relatively robust set of performance measures that are collected to

evaluate a region’s transportation infrastructure provide a rich context through which to examine

the relationship between performance prioritization and the collaborative capacity of a

governance network. A series of hypotheses relating to the prioritization of performance

measures, the use and valuation of performance measures, certain contextual factors specific to

transportation planning, and the collaborative capacity of MPOs and their effects on performance

indicator use are presented and tested in this study.

Performance management practices within metropolitan planning organizations (MPOs)

and across their regional networks have been indelibly shaped by the Intermodal Surface

Transportation Efficiency Act of 1991 (ISTEA). It has been noted elsewhere that this piece of legislation and the subsequent SAFETEA-LU Act of 2005 have helped to shape the

performance management culture of the nation’s regional transportation planning networks

(Meyer, 1994; Cambridge Systematics, 2010; Koliba et al., 2011). Given the inter-modal short-

to medium- and long-range transportation planning responsibilities of MPOs, their performance

measures must be able to not only capture conventional efficiency measures, but they must also

capture broader social, economic, and environmental impacts of regional transportation planning

choices. The construction and use of a region’s transportation infrastructure affects

environmental, social, and economic conditions (Codd & Walton, 1996). Such impacts include

energy consumption, air quality, impact on natural resources, safety, neighborhood integrity,

employment, and economic output. These impacts are vital components of an accurate

representation of the transportation system’s performance. Traditionally, these impacts have not

been captured by conventional output measures. ISTEA recognizes the importance of these

outcomes of the transportation system: the desired social and economic ends of the system’s

users, and the environmental, social, and economic impacts resulting from the use of the system.

Any performance management system under ISTEA must measure these outcomes.

Arguments have been presented for a broad-based systems management perspective to conceptualize and design performance measures for short- to long-range transportation plans, framing the challenge of performance management faced by MPOs as one of being

responsive to the needs and demands of the consumers of the transportation system (Meyer,

2002). Under this demand-driven approach, a way to increase the importance of management

and operations strategies aimed at enhancing transportation performance is to collect data on

system performance. By monitoring key performance measures that reflect what is of greatest

concern to the users of the transportation system, state, regional and local officials can link this


understanding with the types of strategies and actions that best improve this performance. In

addition, by providing meaningful system performance information at the national level, system-

based management and operations could become an important element of a federal program for

improving the nation’s transportation system (Meyer, 1996: 155). This argumentation implies

that different regions will prioritize different performance measures, based upon what is

considered important by the transportation system “customers”.

In contrast to this bottom-up demand-driven approach to using public perception surveys

to prioritize performance measures (Meyer, 2002), other studies have either focused on the use of

specific performance measures, such as equity and environmental justice (Duthie et al., 2007),

travel time reliability (Lyman & Bertini, 2008), and safety (Montes de Oca & Levinson, 2006);

or a small set of uniform performance measures, such as “the total amount of vehicle miles

traveled, the amount of the network experiencing congestion during peak periods, the total

amount of delay in the network, and the level of airborne emissions” proposed for Puget Sound

Regional Council (Reinke & Malarkey, 2006, p. 75). Typically, a case-study approach has been

used to undertake an in-depth thick description of performance management practices of

different MPOs (Handy, 2008; Koliba et al., In Press). A recent study (Koliba et al., In Press)

using comparative case study methodology concluded that it is highly unlikely that one unifying

performance management system exists for a given region. Cataloging the types of performance

data collected and analysis mechanisms in place (such as annual reports, forecasts and

simulations, and real-time data collection technology) within four regions (Orlando; Dallas-Fort Worth; Minneapolis-St. Paul; and San Diego), Koliba et al. (In Press) concluded that many

actors across a transportation planning network will have dissimilar performance management

systems—prioritizing performance data differently.


The existing research conducted on the role of performance management within MPOs

provides fertile ground for those looking to understand the relationship between the use and

valuation of performance measures, and the roles that administrative, collaborative and technical

capacity play in determining indicator use. Building on the literature pertaining to performance

management within governance networks cited in our introduction, we lay out a series of hypotheses to be tested using a nationwide survey of MPOs undertaken by the GAO in 2009.

This study examines whether the contextual complexity and collaborative and technical capacity

of an MPO influence the selection of which performance measures are prioritized and which are

perceived to be effective.

The MPOs operating in small- and medium-sized communities (defined here as MPOs

serving populations of under 200K) could have vastly different mandates and public needs than

the MPOs serving relatively larger populations and/or geographical areas. Further, whether

MPOs are multi-state or whether they are located in air quality non-attainment districts will also

likely affect the type of performance measures that they must prioritize. Different weights on

performance measures imply different prioritizations. In this paper, we hypothesize that MPOs

across the nation do not assign equal weights to performance measures (Hypothesis 1).

Rather, various complex contextual factors, such as the size of the population the MPO is

serving, whether it is located in a multi-state area and/or an air quality non-attainment district,

also affect the selection of which performance measures are used and valued (Hypothesis 2).

The variability in how MPOs and their regions use and value performance data

contributes to a “performance management gap”, manifested as the differences between what

performance data is collected and what performance data is valued. This study formally defines

the performance management gap as the statistical distance between the values ascribed to


performance measures (normative performance measures) and the current practices for

measuring performance (descriptive performance measures). We test the hypothesis that there is

a significant performance management gap in transportation planning networks in the US

(Hypothesis 3).

One of the critical contributors to a performance management gap concerns the capacity

of MPOs located within small- to medium-sized communities to collect and utilize performance

data to inform planning and project selection. Smaller MPOs are constrained by access to fewer

financial and human capital resources. When traditional transportation planning considerations

are addressed, the correlation between the size of the MPO’s region, its more limited access to

capital, and its resulting limited capacity to collect and utilize performance data may be viewed

as less of a challenge. However, as these smaller regions face increasing demands to collect, use and be held accountable to performance goals, and as the scope of regional planning needs expands (such as coupling land use and transportation planning, developing comprehensive emergency response plans for the region, and meeting rising expectations for all regions to address carbon emissions), the link between the size of the region and the region's capacity to coordinate activities becomes more critical. Thus, the importance of

collaboration in delivering transportation services has been emphasized (Meyer et al., 2005).

Oftentimes, different agencies have different responsibilities within the transportation system.

Some organizations are responsible for the road system, and others are responsible for transit,

emergency response, human resources, and so forth. Each of these organizations collects data

and produces information on the performance of the transportation system or of its services

(Meyer et al., 2005, p. 159). So, in addition to the critical role of technical capacity (Handy,

2008), we hypothesize in this paper that both the administrative and collaborative capacities of MPOs as network organizations affect MPO prioritization of performance measures (Hypothesis 4). We distinguish between external and internal collaborative capacities. External

collaborative capacity is the capacity of MPOs to coordinate activities with a variety of

stakeholders—operationalized here as the number and depth of ties with external constituencies.

On the other hand, internal collaborative capacity is the capacity of MPO boards to integrate a

broader range of stakeholder groups in deliberative decision making—operationalized here as the

number of different kinds of constituencies represented on governing boards.

Next, Section 2 presents the research methodology. In particular, we provide an overview of the survey data, the coding procedures used for measuring independent and dependent variables, and the logistic regression models. Section 3 presents results and Section 4 discusses the

implications of these results for transportation policy and public administration and public

management theory.

2. RESEARCH DESIGN AND METHODOLOGY

We use the GAO survey of all 381 MPOs in the United States, which achieved a response rate of 86%. The GAO survey questionnaire is available online (GAO, 2009). The survey comprised 45 questions asking the directors of MPOs (or their designees) about a variety of factors that shape an MPO's structure and

functioning. From the survey, we focus on the set of questions that specifically relate to the

MPO’s current use and valuation of performance measures, the level of importance MPO

directors ascribe to certain performance measures, and the level of technical and collaborative

capacity that MPOs possess. Table 1 provides descriptive statistics for the variables measuring contextual complexity, administrative structure, descriptive and normative performance measures, collaborative and technical capacity, and capacity challenges, all coded from the GAO survey data.

Table 1: Descriptive Statistics

| Variable | Symbol | N | Min | Max | Mean | Std. Dev. |
|---|---|---|---|---|---|---|
| CONTEXTUAL COMPLEXITY | | | | | | |
| TMA Urban (>200K) | CC1 | 327 | 0 | 1 | .45 | .498 |
| Multi-state area | CC2 | 327 | 0 | 1 | .12 | .328 |
| Located within an air quality non-attainment or maintenance area | CC3 | 328 | 0 | 1 | .50 | .501 |
| Air quality non-attainment area and TMA (>200K) | CC1*CC3 | 327 | .00 | 1.00 | .3242 | .4687 |
| ADMINISTRATIVE STRUCTURE | | | | | | |
| Independent organization | AS1 | 328 | 0 | 1 | .18 | .385 |
| Part of a regional council/council of governments | AS2 | 328 | 0 | 1 | .38 | .486 |
| Part of a county government | AS3 | 326 | .00 | 1.00 | .1319 | .3389 |
| Part of a city government office | AS4 | 326 | .00 | 1.00 | .1902 | .393 |
| DESCRIPTIVE PERFORMANCE MEASURES | | | | | | |
| Project implementation (Descriptive) | DPM1 | 310 | .00 | 1.00 | .4323 | .4961 |
| Travel demand model accuracy (Descriptive) | DPM2 | 297 | .00 | 1.00 | .2357 | .4251 |
| Transportation system safety (Descriptive) | DPM3 | 310 | .00 | 1.00 | .4613 | .4993 |
| Transportation system reliability (Descriptive) | DPM4 | 307 | .00 | 1.00 | .4430 | .4975 |
| Transportation system accessibility (Descriptive) | DPM5 | 310 | .00 | 1.00 | .4419 | .4974 |
| Transportation system security (Descriptive) | DPM6 | 301 | .00 | 1.00 | .1561 | .3636 |
| Compliance with federal and state rules (Descriptive) | DPM7 | 320 | .00 | 1.00 | .7938 | .4052 |
| Satisfaction among local stakeholders (Descriptive) | DPM8 | 315 | .00 | 1.00 | .8540 | .3537 |
| Satisfaction among general public (Descriptive) | DPM9 | 317 | .00 | 1.00 | .6593 | .4746 |
| Extent of coordination and stakeholder involvement (Descriptive) | DPM10 | 319 | .00 | 1.00 | .6771 | .4683 |
| Measure of public participation (Descriptive) | DPM11 | 317 | .00 | 1.00 | .4606 | .4992 |
| Level of highway congestion (Descriptive) | DPM12 | 311 | .00 | 1.00 | .5209 | .5003 |
| Air quality (Descriptive) | DPM13 | 282 | .00 | 1.00 | .3723 | .4842 |
| Mobility for disadvantaged populations (Descriptive) | DPM14 | 311 | .00 | 1.00 | .4309 | .4960 |
| Condition of transportation network (Descriptive) | DPM15 | 314 | .00 | 1.00 | .4873 | .5006 |
| NORMATIVE PERFORMANCE MEASURES | | | | | | |
| Project implementation (Normative) | NPM1 | 324 | .00 | 1.00 | .7222 | .4486 |
| Travel demand model accuracy (Normative) | NPM2 | 318 | .00 | 1.00 | .5818 | .4940 |
| Transportation system safety (Normative) | NPM3 | 323 | .00 | 1.00 | .7523 | .4323 |
| Transportation system reliability (Normative) | NPM4 | 320 | .00 | 1.00 | .7344 | .4423 |
| Transportation system accessibility (Normative) | NPM5 | 319 | .00 | 1.00 | .7179 | .4507 |
| Transportation system security (Normative) | NPM6 | 310 | .00 | 1.00 | .4032 | .4913 |
| Compliance with federal and state rules (Normative) | NPM7 | 323 | .00 | 1.00 | .7121 | .4535 |
| Satisfaction among local stakeholders (Normative) | NPM8 | 319 | .00 | 1.00 | .9216 | .2691 |
| Satisfaction among general public (Normative) | NPM9 | 319 | .00 | 1.00 | .8715 | .3352 |
| Extent of coordination and stakeholder involvement (Normative) | NPM10 | 320 | .00 | 1.00 | .8031 | .3982 |
| Measure of public participation (Normative) | NPM11 | 319 | .00 | 1.00 | .6928 | .4620 |
| Level of highway congestion (Normative) | NPM12 | 317 | .00 | 1.00 | .7161 | .4516 |
| Air quality (Normative) | NPM13 | 285 | .00 | 1.00 | .5368 | .4995 |
| Mobility for disadvantaged populations (Normative) | NPM14 | 318 | .00 | 1.00 | .7516 | .4327 |
| Condition of transportation network (Normative) | NPM15 | 314 | .00 | 1.00 | .7803 | .4147 |
| COLLABORATIVE CAPACITY | | | | | | |
| Stakeholder representation on MPO board | COC1 | 328 | .00 | 12.00 | 4.5122 | 2.5425 |
| External collaborative capacity | COC2 | 328 | .00 | 100.00 | 44.7866 | 21.9686 |
| TECHNICAL CAPACITY | | | | | | |
| In-house capacity for generating travel forecasts | TC1 | 328 | .00 | 1.00 | .4451 | .4977 |
| Using a travel demand model | TC2 | 328 | .00 | 1.00 | .9360 | .2451 |
| CAPACITY CHALLENGES | | | | | | |
| Lack of funding | CCH1 | 326 | .00 | 4.00 | 2.6840 | 1.1508 |
| Competing priorities | CCH2 | 322 | .00 | 4.00 | 1.9752 | 1.1731 |
| Obtaining public input | CCH3 | 321 | .00 | 4.00 | 2.2991 | 1.0417 |
| Lack of flexibility | CCH4 | 317 | .00 | 4.00 | 1.8486 | 1.2638 |
| Lack of ability to find local match | CCH5 | 324 | .00 | 4.00 | 2.2006 | 1.3213 |
| Fiscal constraints | CCH6 | 327 | .00 | 4.00 | 2.5719 | 1.1378 |
| Limited authority | CCH7 | 312 | .00 | 4.00 | 2.3942 | 1.0914 |
| Limitations in TDM capacity | CCH8 | 317 | .00 | 4.00 | 1.7634 | 1.0984 |
| Data limitations | CCH9 | 321 | .00 | 4.00 | 2.0436 | 1.0237 |
| Coordination with land-use agencies | CCH10 | 321 | .00 | 4.00 | 1.6168 | 1.0721 |
| Coordination with other regions | CCH11 | 311 | .00 | 4.00 | 1.0161 | .9348 |
| Coordination with state DOT | CCH12 | 326 | .00 | 4.00 | 1.4448 | 1.1156 |
| Lack of trained staff | CCH13 | 322 | .00 | 4.00 | 1.5497 | 1.1269 |

Below, Sections 2.1-2.6 discuss how these variables were defined in the study and present a very brief overview of their descriptive statistics. Sections 2.7 and 2.8 then describe the statistical analysis approach. Section 2.7 describes the paired t-test approach used to ascertain the relationship between the performance measures that are used and those that are valued, responding to Hypothesis 3 in the process. Section 2.8 describes the structure of the logistic regression models that deploy contextual complexity, administrative structure, technical capacity, collaborative capacity and capacity challenges as independent variables to explain variation in the prioritization of performance measures, responding to Hypotheses 2 and 4 in the process.
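The gap test of Section 2.7 can be sketched in a few lines. The code below is an illustrative paired t-test on invented 0/1 responses; the function and variable names are ours, not the GAO's or a reproduction of the study's actual analysis code.

```python
# Illustrative sketch with hypothetical data; not the authors' code.
from statistics import mean, stdev

def paired_t(normative, descriptive):
    """Paired t statistic for the gap between valued (normative) and
    currently used (descriptive) indicators reported by the same MPOs."""
    diffs = [n - d for n, d in zip(normative, descriptive)]
    se = stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean difference
    return mean(diffs), mean(diffs) / se

# Hypothetical 0/1 answers from eight MPOs for a single measure:
npm = [1, 1, 0, 1, 1, 0, 1, 1]  # "this measure is important" (normative)
dpm = [1, 0, 0, 1, 0, 0, 1, 0]  # "we currently measure this" (descriptive)

gap, t = paired_t(npm, dpm)
print(gap, t)  # a positive gap means the measure is valued more often than used
```

In the study itself this comparison would be run once for each of the fifteen NPM/DPM pairs listed in Table 1.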

2.1. Contextual Complexity

The role that contextual factors play within virtually any study of inter-organizational

governance networks needs to be taken into consideration. In the transportation planning arena,

we represent this contextual complexity in terms of the size of an MPO’s regional population, the

extent to which the MPO region spanned more than one state, and whether the MPO’s region is

in an air quality non-attainment area (meaning whether the region had been identified as having

poor air quality). The Contextual Complexity Variables (CV1-3) for this study were measured

from responses to different survey questions.

2.1.1. Population (CC1)

Question # 28 of the GAO survey asked respondents: “is your MPO part of a

transportation management area (TMA), that is, an urbanized area larger than 200,000

population?” If respondents answered yes, we coded it as an MPO with a large population,

otherwise, it was coded as an MPO with a small or medium-sized population for CC1. Table 1


shows that 45% of the MPO respondents are large and the remaining 55% are based in small or

medium-sized communities. The size of the MPO’s metropolitan region plays a significant role

in determining the depth and breadth of the regional transportation network and the kinds of

resources that it uses.

2.1.2. Area of Representation (CC2)

Similarly, Question #3 in the survey enabled us to code CC2, i.e. whether the MPO

represents a multi-state area. Table 1 shows that 12% of the MPOs in the sample (N=327)

represent multi-state areas. This factor could play a significant role in MPO governance, as

government agencies, rules and regulation from multiple states must be accounted for.

2.1.3. Air Quality Non-Attainment Area (CC3)

Question #6 provided an answer for CC3: 50% of the MPOs are located in air quality

non-attainment areas, meaning that they have been identified as having poor air quality. We also

generated an interaction term between the size of the MPO (CC1) and the air quality non-attainment area (CC3), which tells us that 32.42% of MPOs in the sample are large in size

(TMA>200K) and located in air quality non-attainment areas. The designation of air quality

non-attainment is an important feature of the Clean Air Act.

2.2. Administrative Structure (AS)

For measuring administrative structure, four proxy variables were measured from

question #2 of the survey that asked respondents: “what best describes your MPO staff

structure?” 18% are independent organizations, 38% are part of a regional council/council of

governments, 13.19% are part of a county government and 19.02% are part of a city government

office. The remaining MPOs are either part of the state DOT or have some “other” staff

structure. The variability of administrative structures of MPOs provides an opportunity to


explore the extent to which the administrative “home” of an MPO affects its capacity to use and

value performance measures.

2.3. Collaborative Capacity (COC)

As regional organizations, MPOs are required to collaborate with a variety of other

institutional actors, including their state DOTs, the local governments of their region, federal

agencies, and a host of other regional organizations from the nonprofit and business sectors.

Many of these interests are formally integrated into the internal governance structure of the

MPO. MPO boards of directors are often comprised of representatives from local governments,

state DOTs, and federal agencies and often have other interests integrated into their board

governance structure. The extent of their external collaboration with these entities may be

measured in terms of the kind and frequency of contacts that an MPO has with these

stakeholders.

2.3.1. Internal Collaborative Capacity (COC1)

The GAO survey asks a question relating to the MPO’s internal governance structure—

specifically the range of backgrounds of MPO board members. Question #4 in the survey asked

respondents: “which of the following types of officials are members (including both voting and

non-voting) of your MPO’s board…?” The GAO survey provided a list of the following 12

stakeholder groups: FHWA, FTA, State DOT, State or local environmental agency, transit

operator, local government (elected), local government (non-elected), other regional agency,

environmental advocacy organizations, business advocacy groups, citizen participation groups,

and private sector. The variable internal collaborative capacity was computed on a continuous

scale from 0 to 12 (as shown in the minimum and maximum values for this variable in Table 1),

with 0 implying that none of these 12 stakeholders are on the MPO board and 12 implying that


all 12 stakeholders are represented on the board. The mean value for internal collaborative

capacity is 4.51 on this scale (0-12) with a standard deviation of 2.54. We recognize that many

measures of internal collaborative capacity may be identified through additional means,

including measuring the collaborative capacity of critical committees, teams and other

communities of practice (Gajda and Koliba, 2007), the collaborative dispositions of staff, and the

density of the social network.
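Operationally, COC1 is a simple count of checked boxes. As a minimal sketch, with a hypothetical response to Question #4 (the data structure and the sample answers are ours, not the GAO's):

```python
# Hypothetical checkbox responses for one MPO to GAO Question #4; the 12
# stakeholder groups are those listed in the text. Illustrative only.
board_members = {
    "FHWA": True, "FTA": False, "State DOT": True,
    "State or local environmental agency": False, "Transit operator": True,
    "Local government (elected)": True, "Local government (non-elected)": False,
    "Other regional agency": True, "Environmental advocacy organizations": False,
    "Business advocacy groups": False, "Citizen participation groups": False,
    "Private sector": False,
}

# COC1: number of the 12 stakeholder groups represented on the board (0-12).
coc1 = sum(board_members.values())
print(coc1)  # 5 for this hypothetical MPO
```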

2.3.2. External Collaborative Capacity (COC2)

For measuring External Collaborative Capacity (COC2), we used Question #8 in the

GAO survey: “how, if at all, does your MPO coordinate its planning activities with the following

types of organizations?” The GAO survey provided a list of the following ten organizations:

Federal DOT (FHWA and FTA), State DOT, city and county entities (e.g. planning boards),

adjacent MPOs, councils of government/regional council, regional transit operators,

environmental agency (e.g. EPA or state department of natural resources), air quality

organization (e.g. regional air quality management district), regional civic organization(s), and

advocacy group(s) (e.g. business-oriented or environmental-oriented interest groups). For

reporting coordination mechanisms with each of these ten organizations, respondents reported

whether coordination took place through representation on MPO committees (which we assigned

a weight of 4), through regular meetings (weight of 3), through regular correspondence (weight

of 2), solicitation of input/feedback on an ad-hoc basis (weight of 1) or does not coordinate

(weight of 0). If an MPO coordinates with an organization through all four coordination

mechanisms, it gets the maximum score of 10 for that organization; if it uses none, it gets 0. Since respondents reported for 10 different organizations, each MPO's external collaborative capacity (COC2) is

measured on a scale from 0 to 100, where 100 implies that MPOs have the highest collaborative


capacity (i.e. they are coordinating with all ten organizations through all four mechanisms). The mean external collaborative capacity (N=328) is 44.79 with a standard deviation of 21.97 on this 0-100 scale.
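This weighting scheme can be sketched as follows. The mechanism weights follow the text; the data structure and the sample responses are hypothetical, not the GAO's format:

```python
# Weights for each coordination mechanism from GAO Question #8 (as described
# in the text). The sample data below are invented for illustration.
MECHANISM_WEIGHTS = {
    "representation on MPO committees": 4,
    "regular meetings": 3,
    "regular correspondence": 2,
    "ad-hoc input/feedback": 1,
}

def coc2_score(coordination):
    """COC2 on a 0-100 scale: for each of the ten organization types,
    sum the weights of the mechanisms used (max 4+3+2+1 = 10 per org)."""
    return sum(
        sum(MECHANISM_WEIGHTS[m] for m in mechanisms)
        for mechanisms in coordination.values()
    )

# Hypothetical responses for three of the ten organization types; the other
# seven report "does not coordinate", i.e. an empty mechanism list.
coordination = {
    "State DOT": ["representation on MPO committees", "regular meetings"],  # 7
    "Adjacent MPOs": ["regular correspondence"],                            # 2
    "Regional transit operators": ["ad-hoc input/feedback"],                # 1
}
print(coc2_score(coordination))  # 10
```

An MPO that used all four mechanisms with all ten organization types would score the maximum of 100.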

2.4. Technical Capacity (TC) for Modeling

Our ability to measure the technical capacity of the MPOs was limited to the types of

questions that were asked on the GAO survey, which focused solely on the capacity of MPOs to

undertake travel demand modeling “in house.” We note that other factors may eventually be

built in to an indicator for technical capacity such as the professional skill set of MPO staff

members, the number of designated data management professionals, and the availability of

financial resources devoted to data collection and analysis. For this study, the technical capacity

(TC) for generating travel forecasts is measured through two proxy variables. Response to

question #13 enables us to determine that 44.51% of the MPOs (N=328) have in-house capacity

to generate travel forecasts; and (from question #15) 93.60% of the MPOs (N=328) are using

travel demand models to develop a travel forecast for their MPO’s long-range plan.

2.5. Capacity Challenges

The GAO survey asked MPO respondents to indicate the extent to which their MPO faces

certain capacity challenges. This list of challenges is comprised of some of the widely

recognized factors that have been cited as barriers to effective performance for public sector

organizations more generally. Many of these capacity challenges, such as limited funding,

difficulties garnering public input, and challenging relationships with state-level and local-level actors, are common to many organizations operating within networked

environments. Capacity Challenges (CCH1-13) were measured through question #27 in the

survey: “in your opinion, how much of a challenge, if any, do the following issues present for

your MPO in carrying out the federal requirements for transportation planning?” The


respondents ranked 13 issues (variables CCH1-CCH13 in Table 1) on a Likert scale: very great challenge (coded as 4), great challenge (3), moderate challenge (2), some or little challenge (1), and no challenge (0). From Table 1, we can determine that lack of funding presents the greatest challenge, with a mean score of 2.68 (on a scale from 0 to 4), while coordination with other regions is the least challenging issue (mean score of 1.01).

2.6. Use and Valuation of Performance Measures

The range of performance measures that matter to MPOs and their networks includes a fairly well-developed set of metrics that are commonly accepted forms of performance

measures found within the transportation planning field (Cambridge Systematics, 2010; Koliba,

et al., 2011). In the GAO survey, performance measures were identified along the following

parameters: the extent to which project implementation evaluations were conducted, travel

demand models were used, the safety, accessibility and security of the regional transportation

system, the level of compliance with federal and state rules, local stakeholder and general public

satisfaction, the extent of coordination with stakeholders, measures of public participation, levels

of traffic congestion, air quality and mobility for disadvantaged populations, and assessments of

the condition of the regional transportation network. Space precludes a detailed description of

responses for each of these categories. The reader will find the mean and standard deviation of

responses for each of these performance measures in table 1.

Data pertaining to the descriptive performance measures was gathered through question

37 of the GAO survey: “To what extent, if at all, does your MPO use the following indicators to

evaluate its effectiveness?” The survey instrument provided a list of 15 performance measures

and asked respondents to select one answer from “very great extent, great extent, moderate extent, some or little extent, no extent, and no basis to judge”. We coded “very great extent”


and “great extent” as being high priority (1) and other responses as low priority (0), with no basis

to judge as missing values, to generate fifteen “Descriptive Performance Measures” (DPM1-15),

see Table 1.
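The binary recoding described above amounts to a simple mapping from Likert categories to 1, 0, or missing. The sketch below is our illustration of that coding rule, not the study's actual processing code:

```python
HIGH_PRIORITY = {"very great extent", "great extent"}
MISSING = {"no basis to judge"}

def code_response(response):
    """Recode a survey response: 1 = high priority, 0 = low priority,
    None = missing (excluded from the analysis)."""
    if response in MISSING:
        return None
    return 1 if response in HIGH_PRIORITY else 0

responses = ["great extent", "moderate extent", "no basis to judge",
             "some or little extent", "very great extent"]
assert [code_response(r) for r in responses] == [1, 0, None, 0, 1]

# The reported mean of a DPM is then the share of non-missing
# respondents coding it as high priority.
coded = [c for c in map(code_response, responses) if c is not None]
mean = sum(coded) / len(coded)
assert mean == 0.5
```

The same rule applies to the normative items, with "very useful" and "useful" playing the role of the high-priority categories.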

To illustrate how to interpret the descriptive statistics of the descriptive and normative

performance measures in Table 1, we provide the following: DPM1 measures “project

implementation” (i.e. number of planned projects that were implemented). The mean value of

.4323 indicates that 43.23% of the survey sample (N=310, excluding missing cases) ranked this

performance measure as being used by that particular MPO to a great or very great extent. The

other 14 DPM and 15 NPM variables in Table 1 can be interpreted accordingly.

Data pertaining to the normative performance measures was gathered through question

38 of the GAO survey: “Regardless of your individual answers to question 37, from your

perspective, how useful, if at all, could the following indicators be for evaluating the

effectiveness of MPOs?” The same list of 15 performance measures that was given in Q37 was

provided to the respondents, who were asked to select an answer from very useful, useful, moderately useful, of some or little use, of no use, or no opinion/no basis to judge. We coded

“very useful” and “useful” as being high priority (1) and other responses as low priority (0), with

no basis to judge as missing values, to generate fifteen “Normative Performance Measures”

(NPM1-15). Normative performance measures thus solicit what survey respondents value, while

descriptive performance measures solicit what survey respondents practice. Figure 1a shows the

mean and the standard error of the mean for the responses to the 15 descriptive performance

measure listed in the GAO survey. Figure 1b shows similar statistics for the 15 normative

performance measures. Figure 1a shows that the satisfaction among local stakeholders and

compliance with federal and state rules are accorded significantly higher priority than other performance measures being used by MPOs. Transportation system security and travel demand model accuracy are ranked lowest in terms of their utility as performance measures.

Figure 1: Variability in the Prioritization of Descriptive and Normative Performance Measures (Mean Response has been re-scaled on a binary scale)

Figure 1b shows similar patterns for the normative/desirable performance measures, except that satisfaction among the general public is ranked almost as high as satisfaction among local stakeholders. While Figure 1 presents an overall picture of current and desirable rankings for these 15 performance measures, as we argued earlier, there is large variability in the prioritization of these performance measures that could be explained by contextual variables and capacity factors. Figure 1a confirms Hypothesis 1: MPOs across the United States do not assign equal weights to performance measures.

Furthermore, questions 37 and 38 also provided respondents an open-ended opportunity

to list any other descriptive and normative performance measures. Two coders from our research

team independently coded these open-ended responses to determine whether any additional

performance measures were identified or any other issues were identified with respect to the

identification and prioritization of these performance measures. We enforced a 90% inter-coder

reliability match. In their open-ended responses, MPO respondents mentioned a variety of other measures that are being actively used beyond those identified in the survey. Some of the more prominent indicators they mention include the alignment of plans with projects that are actually implemented, the overall reliability of the transportation system (presumably generated through an index), and land use indicators.

Multiple MPO respondents also mentioned that they measure the financial stability of their

networks, economic development measures, project benefits, MPO board satisfaction, and

traveler mobility. Of those measures most useful, but not listed in the survey, economic

development, VMT (Vehicle Miles Travelled), and mobility choice were the most mentioned in


the open ended question. Multiple respondents also mentioned transit ridership, collision rates,

and land use indicators as measures that are not currently being measured, but could be useful.
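The 90% inter-coder reliability threshold mentioned above corresponds to simple percent agreement between the two coders. A minimal sketch, assuming percent agreement (no chance correction); the coded categories below are illustrative, not the study's actual codebook:

```python
def percent_agreement(codes_a, codes_b):
    """Share of open-ended responses that two coders assigned to the
    same category (simple percent agreement)."""
    if len(codes_a) != len(codes_b):
        raise ValueError("coders must rate the same set of responses")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

coder_1 = ["economic development", "VMT", "transit ridership",
           "land use", "mobility choice", "collision rates",
           "economic development", "VMT", "land use", "project benefits"]
coder_2 = ["economic development", "VMT", "transit ridership",
           "land use", "mobility choice", "collision rates",
           "economic development", "VMT", "land use", "board satisfaction"]

# 9 of 10 codes agree, meeting the 90% threshold used in the study.
assert percent_agreement(coder_1, coder_2) == 0.9
```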

2.7. Statistical Analysis

2.7.1. Evidence of a “Performance Management Gap”

A cursory look at panels a and b of Figure 1 shows that respondents in general

provided higher rankings to normative performance measures than the descriptive performance

measures. This consistent, across-the-board pattern of higher rankings for normative

performance measures could be explained as the perception on the part of MPO survey

respondents that there is a performance management gap in the current performance

measurement practices of MPOs. To formally test this difference in the means between

descriptive and normative performance measures, we conducted a paired t-test, reported in

Table 2.

The paired t-tests in Table 2 indicate that the prioritizations/rankings for 14 of the 15 normative performance measures are significantly higher than their descriptive counterparts. This allows us to reject the null hypothesis of similar rankings at the 95% confidence level for all 15 paired descriptive and normative performance measures. For example, the mean difference of -.3554 in the second pair shows that travel demand model accuracy is ranked as a high priority by 35.54 percentage points more respondents from a normative standpoint than from a descriptive standpoint. The only exception is the performance measure of

compliance with federal and state rules, in which case the normative ranking is 8.57 percentage points

lower than the descriptive ranking. This finding implies that MPOs are willing to reduce the

weight on the compliance with federal and state rules. For the remaining 14 performance

measures, we confirm Hypothesis 3 that there is a significant performance management gap among the MPOs in the US. In terms of magnitude, the largest gaps are observed for travel demand model accuracy (.36 points) and mobility for disadvantaged populations (.32 points), while the smallest gaps are observed for satisfaction among local stakeholders (.07 points) and extent of coordination and stakeholder involvement (.13 points).

TABLE 2 Paired samples t-test: paired differences (NPM - DPM)

Pair  Performance Measure                                   Mean      Std. Dev.  Std. Error  t       df   Sig. (2-tailed)
1     Project Implementation                                -.2105    .65244     .03730      -7.535  305  .000
2     Travel Demand Model Accuracy                          -.35540   .63065     .03723      -9.547  286  .000
3     Transportation System Safety                          -.29180   .66151     .03788      -7.704  304  .000
4     Transportation System Reliability                     -.30435   .63801     .03690      -8.249  298  .000
5     Transportation System Accessibility                   -.26910   .64602     .03724      -7.227  300  .000
6     Transportation System Security                        -.23860   .59864     .03546      -6.729  284  .000
7     Compliance with federal and state rules                .08571   .60960     .03435       2.496  314  .013
8     Satisfaction among local stakeholders                 -.06863   .41905     .02396      -2.865  305  .004
9     Satisfaction among general public                     -.21104   .56868     .03240      -6.513  307  .000
10    Extent of coordination and stakeholder involvement    -.12862   .58711     .03329      -3.863  310  .000
11    Measure of public participation                       -.23377   .64361     .03667      -6.374  307  .000
12    Level of highway congestion                           -.20000   .66945     .03865      -5.175  299  .000
13    Air quality                                           -.15918   .70940     .04532      -3.512  244  .001
14    Mobility for disadvantaged populations                -.32226   .65254     .03761      -8.568  300  .000
15    Condition of transportation network                   -.30333   .61038     .03524      -8.608  299  .000
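The paired t-statistics in Table 2 follow directly from the per-respondent differences (NPM minus DPM). The sketch below shows the computation on illustrative data, not the GAO responses:

```python
import math

def paired_t(differences):
    """Return (mean, standard error, t) for a paired t-test on a list
    of per-respondent differences (here, NPM - DPM for one measure)."""
    n = len(differences)
    mean = sum(differences) / n
    # Sample variance with the n-1 (Bessel) correction.
    var = sum((d - mean) ** 2 for d in differences) / (n - 1)
    se = math.sqrt(var / n)
    return mean, se, mean / se

# Toy differences: most respondents rank the measure higher normatively
# (difference -1); a few rank it the same (0).
diffs = [-1, 0, -1, -1, 0, -1]
mean, se, t = paired_t(diffs)
assert abs(mean + 2 / 3) < 1e-12
assert t < 0  # normative ranking higher, matching the sign pattern in Table 2
```

The relationship t = mean / (standard error) can be checked against Table 2; for example, Pair 2 gives -.3554 / .03723 ≈ -9.55, matching the reported t of -9.547.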

The implications of this apparent gap will need to be explored in greater detail through

additional research. It should be noted that the GAO survey was completed by the executive

director of the MPO or her or his designee. We may presume that responses related to

performance indicator use have a greater likelihood of being grounded in actual practice.

Responses to question 38 concerning the relative value of specific performance measures may be

viewed as statements of opinion and perception that may (or may not) be informed by reasoned consideration. Thus, the apparent gap between the performance measures that are used and those

that are valued needs to be couched in terms of the opinions of those completing the survey

itself. However, the potential of such a gap may be an important signal in ascertaining the extent

to which a performance management culture persists across the transportation planning networks

in this study, a point that we will return to in our conclusion.

2.7.2. Logistic Regression Models

FIGURE 2: An Overview of the Logistic Regression Model Design

We developed logistic regression models to examine the effects of contextual complexity,

administrative capacity, collaborative capacity, technical capacity and capacity challenges on

MPOs’ proclivity to use and value specific performance measures. Figure 2 presents an overview


of the regression models that we used to test the study hypotheses 2 and 4. Respondent

prioritizations of 15 performance measures, from both descriptive and normative standpoints, were used as binary response variables in 30 binomial logistic regression models.

3. RESULTS AND KEY FINDINGS FROM LOGISTIC REGRESSION MODELS

We find that the size of the community where an MPO is located and the collaborative

capacity of an MPO have a powerful and significant effect on the differential choice of

performance measures. Technical capacity and administrative structure also matter. There are

some outstanding capacity challenges (such as lack of funding, competing priorities and shortage

of staff) that also significantly affect the prioritization of performance measures.

Table 3 presents the results from 30 binomial logistic regression models. The response

variable in each of these 30 binomial logistic regression models is either a descriptive or a

normative performance measure (on a scale from 0 to 1 with a cut-point threshold of 0.5 for

the underlying logistic distribution). There are 25 predictors in each binomial logistic regression model that capture the context (e.g., size), administrative capacity, collaborative capacity,

technical capacity and capacity challenges for these MPOs. All 30 binomial regression models

shown in Table 3 are jointly significant. We reject the null hypothesis that the joint variation

explained by the inclusion of 25 independent variables in each of the 30 models is the same as

without any independent variables in the models. In general, models with descriptive

performance measures have higher Nagelkerke R2 and Cox and Snell R2 as compared to the

models with normative performance measures. In general, for models with descriptive

performance measures as response variables, Nagelkerke R2 ranges from 18% to 31.5%. Models

with normative performance measures have relatively lower Nagelkerke R2, ranging from 7.1%

to 19.4%, as reported in Table 3.
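As a hedged illustration of the modeling approach (not the authors' code, and using synthetic data), a single-predictor binomial logistic regression can be fit by gradient ascent on the log-likelihood; its exponentiated coefficient is the odds ratio interpreted in the sections that follow:

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=5000):
    """Fit p(y=1|x) = 1/(1+exp(-(b0 + b1*x))) by batch gradient ascent
    on the log-likelihood. A minimal sketch of one of the 30 binomial
    logistic regressions, reduced to a single binary predictor."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic data: x = 1 for a large MPO, y = 1 if a given descriptive
# performance measure is prioritized. Large MPOs prioritize it less often.
xs = [0] * 10 + [1] * 10
ys = [1] * 8 + [0] * 2 + [1] * 3 + [0] * 7
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)
assert odds_ratio < 1  # large MPOs have lower odds of prioritizing it
```

The study's models each include 25 predictors and would in practice be estimated with standard maximum-likelihood software; the exponentiated coefficients are interpreted the same way.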


As discussed earlier in the context of paired t-tests (Table 2), relatively lower

Nagelkerke R2 for models with normative performance measures might be due to the propensity

of the MPO survey respondents to view MPO performance more favorably along all of the 15

normative performance measures. Our analysis and interpretation of the results below will focus

more on the models with descriptive performance measures as response variables, because these

models reflect actual planning practice in MPOs across the nation. However, a similar

interpretation could be undertaken for models with normative performance measures to

understand the perceptions of MPOs about the desirability of performance measures in terms of

reducing the performance management gap. Further, we only discuss those predictor variables

that are statistically significant at the 90% confidence level or higher.

3.1. The Effects of Contextual Complexity

In Table 3, for analyzing the effects of contextual complexity variables on the response

variables, small- or medium-sized MPOs (population <200K) that are located in air quality

attainment areas must be considered as the base group. The coefficients on the first contextual

variable, MPO size, thus present the marginal probabilistic difference between large MPOs

(population > 200K) and small- or medium-sized MPOs (population <200K) that are located in

air quality attainment areas. So, large MPOs located in air quality attainment areas are (1-.38 =

.62) 62% less likely to prioritize project implementation; (1-.35 = .65) 65 % less likely to

prioritize transportation system safety; (1-.33 = .67) 67% less likely to prioritize transportation

system accessibility; (1-.35 = .65) 65% less likely to prioritize the extent of coordination and

stakeholder involvement and (1-.32 = .68) 68% less likely to prioritize the condition of the

transportation network as performance measures compared with the small- or medium-sized

MPOs located in air quality attainment areas.


Small- or medium-sized MPOs located in air quality non-attainment areas are (1-.44 =

.56) 56% less likely to prioritize project implementation; (1-.39 = .61) 61% less likely to

prioritize mobility for disadvantaged populations; (2.42-1= 1.42) 142% more likely to prioritize

level of highway congestion; and (15.17-1=14.17) 1417% more likely to prioritize air quality as

performance measures compared with the small- or medium-sized MPOs located in air quality

attainment areas. Further, the interpretation of the coefficient on the interaction term reveals that

the large MPOs located in air quality non-attainment areas are (3.74-1=2.74) 274% more likely

to prioritize transportation system accessibility, but (1-.2=.8) 80% less likely to prioritize air

quality as performance measures compared with the small- or medium-sized MPOs located in air

quality attainment areas. This last finding appears to be counter-intuitive as large MPOs located

in air quality non-attainment areas should be prioritizing air quality as a performance indicator.

Does this finding suggest that large MPOs located in air quality non-attainment areas are trying

to avoid the use of air quality as a performance measure (to appear more qualified for federal and

state funding)? Or is it that they do not believe that air quality could be improved in those areas?

More follow-up research is warranted to explore this counter-intuitive finding for MPOs located in large communities and air quality non-attainment areas. It must be interpreted in light of the expected significant finding that small- or medium-sized MPOs located in air quality non-attainment areas are 15.17 times more likely to

prioritize air quality as a performance measure than the small/medium sized MPOs located in air

quality attainment areas.
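The "% more/less likely" arithmetic used throughout this section converts an exponentiated logistic coefficient (an odds ratio) into a percentage change. A small helper, our illustration rather than part of the analysis code, makes the convention explicit:

```python
def pct_change(odds_ratio):
    """Express an odds ratio as the percent change in likelihood used
    in the text: OR < 1 -> negative (% less likely), OR > 1 -> positive
    (% more likely)."""
    return (odds_ratio - 1) * 100

# Examples matching the calculations in the text:
assert round(pct_change(0.38)) == -62    # (1-.38): 62% less likely
assert round(pct_change(0.35)) == -65    # (1-.35): 65% less likely
assert round(pct_change(2.42)) == 142    # (2.42-1): 142% more likely
assert round(pct_change(15.17)) == 1417  # (15.17-1): 1417% more likely
```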

Overall, these findings confirm Hypothesis 2 that contextual complexity variables, such as the size of the population that an MPO serves and whether it is located in a multi-state area and/or

an air quality non-attainment district significantly affect the differential weights assigned to


different performance measures.

3.2. The Effects of Collaborative Capacity

Regarding the effect of internal collaborative capacity, we find that as stakeholder representation on the MPO board broadens (on a scale from 0 to 12), MPOs are (1-.84=.16) 16% less likely to prioritize satisfaction among local stakeholders and (1.11-1=.11) 11% more likely to prioritize the condition of the transportation network as performance measures. The first finding suggests that as internal collaborative capacity increases, the practice of prioritizing satisfaction among local stakeholders diminishes significantly. While this finding might appear counter-intuitive, it could be explained as follows: when MPO boards have already established wide stakeholder representation, the practice of prioritizing stakeholder satisfaction decreases, while the condition of the transportation network, which reflects a regional-level focus, is prioritized instead.

External collaborative capacity appears to be the strongest predictor across all 30 binomial logistic regression models. As the collaborative capacity of an MPO increases by one percentage point (on a scale from 0 to 100), it is 1% more likely to prioritize condition of transportation

network and transportation system accessibility; 2% more likely to prioritize project

implementation, transportation system safety, transportation system reliability, air quality,

mobility for disadvantaged populations, compliance with federal and state rules, satisfaction

among the general public, and measure of public participation; 3% more likely to prioritize travel

demand model accuracy, transportation system security, and extent of coordination and

stakeholder involvement; and 4% more likely to prioritize satisfaction among local stakeholders.

Overall, we confirm Hypothesis 4 that the collaborative capacity of MPOs as network organizations

significantly affects MPO prioritization of performance measures.

3.3. The Effects of Technical Capacity and Capacity Challenges


From the technical capacity perspective, MPOs with in-house capacity for generating

travel forecasts are 56% less likely to prioritize satisfaction among the general public and 43%

less likely to prioritize the level of highway congestion as performance measures compared with

MPOs that do not have in-house capacity for generating travel forecasts. Further, MPOs that use

a travel demand model are 79% less likely to use the condition of the transportation network as a

performance measure compared with MPOs that do not use a travel demand model.

Among the capacity challenges, MPOs that rank lack of funding as a major challenge are

80% more likely to prioritize travel demand model accuracy, 47% more likely to prioritize

transportation system safety, 56% more likely to prioritize transportation system reliability, 37%

more likely to prioritize transportation system accessibility, 73% more likely to prioritize

satisfaction among the general public, 76% more likely to prioritize extent of coordination and

stakeholder involvement, 35% more likely to prioritize the measure of public participation, 49%

more likely to prioritize the level of highway congestion, 74% more likely to prioritize air

quality, and 35% more likely to prioritize the condition of the transportation network as

performance measures compared with MPOs that do not rank lack of funding as a major

challenge. Furthermore, MPOs that rank competing priorities as a major challenge are 25% less

likely to prioritize transportation system safety, 28% less likely to prioritize transportation

system reliability, 25% less likely to prioritize transportation system accessibility, and above all,

29% less likely to prioritize satisfaction among the general public as performance measures. This

implies that under competing trade-off situations, transportation system safety, reliability,

accessibility and satisfaction measures are less likely to be used.

Finally, among the capacity challenges, we would like to highlight the MPOs that listed

coordination with other regions and state DOTs and lack of trained staff as major challenges.


MPOs that listed coordination with state DOTs as a major challenge are 33% less likely to

prioritize project implementation, 36% less likely to prioritize travel demand model accuracy,

35% less likely to prioritize transportation system safety, 34% less likely to prioritize

transportation system reliability, 24% less likely to prioritize transportation system accessibility,

35% less likely to prioritize compliance with federal and state rules, and 24% less likely to

prioritize level of highway congestion as performance measures compared with MPOs that do

not list coordination with state DOTs as a major challenge. Similarly, MPOs that listed shortage

of trained staff as a major challenge are 43% less likely to prioritize project implementation, 35%

less likely to prioritize extent of coordination and stakeholder involvement, 44% less likely to

prioritize level of highway congestion, and 32% less likely to prioritize both travel demand

model accuracy and transportation system reliability as performance measures compared with

MPOs that do not list lack of trained staff as a major challenge.

4. DISCUSSION

We conclude that the differences found in the structure, capacity and location of MPOs

across the United States do influence which performance measures are used. As reported here,

this is an important finding with regard to understanding how the assessment of MPOs can be

addressed from an aggregate perspective. Our research indicates that setting aggregate or across-

the-board uniform measures of performance would not serve MPOs well. We simply need more

sophisticated ways of understanding performance that reflect context, capacity, and organizational design realities. Simply put, the complexity of the regional

transportation planning structures in which MPOs are situated does not permit a homogeneous or uniform prioritization of performance measures to evaluate the performance of MPOs under ISTEA, SAFETEA-LU, and other transportation reauthorization legislative requirements.


We have demonstrated that the interactive complexity of MPO location, their size, whether they

are in an air quality non-attainment area, whether they are multi-state, and so forth leads to

differential prioritization of performance measures.

The case of MPOs operating within regional transportation planning networks that are

provisioned and regulated by the federal government brings to light the question of how, and to what extent, a centralized national authority can play a role in ensuring that governance networks perform. How can the federal government set up a systematic, standardized performance

management system that enables performance-based comparison of similar MPOs (i.e.

small/medium vs. large scaled regions) and other contextual criteria discussed above? Given this

high level of contextual complexity and large variance in the observed administrative, technical

and collaborative capacities of MPOs, we do not recommend that federal government agencies

(DOT, FHWA, GAO, etc.) impose uniform performance measures and/or similar weights on all

MPOs. Rather, the logistic models presented in this study could be adapted to generate specific

weighting schemes on performance measures for the MPOs of different sizes and capacities. The

value of this “alternate middle path” approach is to recognize the many different design realities

that are evident among MPO structures and context and to further embrace these realities in a

more meaningful frame of reference from which both the goals of performance management and

contextual realities are framed to improve MPO effectiveness. This approach will ultimately

mean that MPOs will have different emphases in terms of performance, but they will need to

identify that emphasis within the context of both their own realities and that of seeking improved

performance.

A wider implication may be drawn from this observation that centers on the need to

develop a greater capacity to describe and assess the use of performance measures within


specific inter-organizational networked contexts. One step toward developing an initial

understanding of generalized performance principles is to deepen both agency and across-agency

assessment. Since there is still a large amount of unexplained variation in these logistic models,

which is primarily due to the complexity of designing such performance management systems,

we recommend the design of an agent-based model in future research. The study finds that the size of the community where an MPO is located and the collaborative capacity of an MPO have a

powerful and significant effect on the differential choice of performance measures. Instead of

imposing uniform prioritization of performance measures on MPOs, the federal government

agencies could develop more sophisticated performance management weighting schemes for

MPOs serving communities of different sizes, contexts and capacities.

A second major conclusion to be drawn from this study pertains to the evidence of a

performance management gap that appears to exist. The disparities in the frequency of

performance measures that are used as compared to those performance measures that are valued

suggest that the performance measures that are most used may not be found to be the most

useful. We recognize that we have grounded the evidence of a performance management gap in

an analysis of the perceptions of the survey respondents: MPO executive directors or their

designees. The extent to which the gap is a perception held by just this population of respondents, a perception held by MPO governing boards, or a perception widely held across the regional network is a distinction worth noting. We are only comfortable saying that perceptions of a performance management gap exist among those responding to the GAO survey.

These findings also speak to the importance of collaboration in determining which performance measures are used and how. These findings affirm the notion that regional

organizations like MPOs are, by necessity, at the center of regional networks that are highly


influenced by state and federal authorities. These findings also affirm the notion that regional

organizations like MPOs are influenced by local governments, interest groups and individual

citizens’ involvement within the region’s network. That collaborative capacity served as the most

statistically significant variable in determining performance measurement use speaks to the role

that collaborative capacity plays relative to the performance management systems of governance

networks. It suggests that despite the inherent complexities arising in inter-organizational

governance networks, a commitment to using performance measures can be established.

Our findings relative to the role of collaborative capacity vis-a-vis performance

measurement utilization contribute to our general understanding of the relationship between

collaborative ties in governance networks and the capacity of these networks to monitor their

performance. In our model of network collaborative capacity, we distinguished between internal

collaborative ties and external collaborative ties. The extent to which these collaborative ties

contribute to particular decisions to use certain performance measures needs to be couched in

terms of the different roles that internal collaborators play versus those roles undertaken by

external collaborators. Internal collaborators have been defined here as voting members of the

MPO governing board. These collaborators take on a deliberative role, in that they have a direct

say in the major decisions affecting the MPO, presumably including those decisions pertaining to

what performance measures to use. External collaborators were defined here as those

stakeholders with whom the MPO has undertaken routine communication, either through

informal information networks or through more formalized roles in advisory committees.

Presumably, these stakeholders are playing a consultative role relative to major MPO decisions,

including those decisions relating to performance measurement use. The finding that external

collaborative capacity has a stronger correlation with performance measurement use might best


be explained by one of the foundational premises of social capital theory, namely, the strength of

weak ties (Granovetter 1973). Weak ties have been described as facilitating the flow of

information, particularly information that fosters innovation. The correlation between weak

collaborative ties and the use of performance data suggests that such ties may foster the kind of organizational learning and performance information feedback loops that are often associated with effective performance management systems (Moynihan 2008; Moynihan and Pandey 2010).

Agranoff and McGuire (2003) and others (e.g. Weber, et al., 2007) have noted how

collaborative ties can be mediated through both horizontal and vertical administrative

arrangements. While horizontal administrative arrangements are marked by voluntary

associations predicated on trust and generally weaker ties, vertical administrative arrangements

are predicated on principal-agent relationships. In their purest sense, vertical arrangements are structured through classical command-and-control relationships. In practice, however, vertically arranged collaborative ties are marked by the kind of substantive negotiation and bargaining often associated with the “principal-agent problem.” Understood in the context of internal

collaborative capacity, negotiation and bargaining “in good faith” (Weber et al., 2007: 208) occur as part of the normal give and take between the governing board and the professional staff of the MPO. The extent to which this give and take involves a broader or narrower range of actors likely matters, and may help explain why the stronger ties of internal collaborators were less statistically significant. Our analysis did not examine, at a finer grain, which kinds of actors were more likely to influence the use of specific performance measures; we believe such a study would be extremely useful.


These observations about collaborative capacity are relevant to perceptions of a performance management gap because the “culture of performance” within MPOs and across their network ties will likely develop in the context of using performance data, but not necessarily of valuing performance data. This observation calls for a deeper, more nuanced explanation that can only come through extensive qualitative research and calibrated computer simulation using agent-based modeling.

performance measures may be predicated on the institutional rules that shape network ties. The

value of performance measures will likely be predicated on the mental models and belief systems

of critical actors in the system. Those completing the GAO survey rendered a value judgment

concerning which performance measures were most valuable. It thus becomes important to determine how the belief systems of various actors combine, commingle, and compete in these settings. Given how strongly collaborative capacity shapes performance measurement use, this distinction between valuing and using measures calls for further inquiry, along the lines suggested above, across a wide variety of inter-organizational governance network policy settings.

We make all of these assertions recognizing that much more research must be done to

explain just how performance measures are discussed and used to inform actual decision making

in inter-organizational governance networks. We also recognize that the data analyzed here support no inferences about how the use of performance measurement data actually leads to improved network performance. Answering that question will require additional data linking respondent identity to specific survey responses.


5. CONCLUSIONS

In this article, we have provided readers with an analysis of the performance management

capacity of metropolitan planning organizations and drawn some conclusions about the roles that

the size of the region, MPOs’ collaborative capacity, and a variety of capacity challenges play in

determining which performance measures are used. We provided evidence suggesting that MPO leaders perceive that a performance management gap persists for their organizations and the wider networks that their organizations serve. We found that the complexity of MPO structures does not permit a homogeneous or uniform prioritization of performance measures to evaluate the performance of MPOs. The size of the community where an MPO is located and the collaborative capacity of an MPO have a powerful and significant effect on the differential choice of

performance measures. Instead of imposing a uniform prioritization of performance measures on MPOs, federal government agencies could develop more sophisticated performance management weighting schemes tailored to MPOs that serve communities of different sizes and contexts, and whose network structures and functions reflect different technical and collaborative capacities. The logistic models presented here could be adapted to generate specific weighting schemes to prioritize performance measures for MPOs of different sizes and capacities. With this in-depth statistical analysis, we hope to have contributed to the growing literature on performance management and inter-organizational governance networks.
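As a rough illustration of how such an adaptation might work, the sketch below converts an MPO's size and collaborative capacity into priority weights over performance measures via a logistic function. The measure names and coefficients are purely hypothetical placeholders, not the fitted estimates from the models reported in this article; an agency would substitute its own estimated coefficients.

```python
import math

# Illustrative coefficients only -- hypothetical, NOT the fitted estimates
# from this article's logistic models. For each measure, the log-odds of
# use are modeled as: logit(p) = b0 + b1*log_population + b2*capacity
COEFS = {
    "congestion_delay":   (-2.0, 0.30, 0.8),
    "transit_ridership":  (-3.5, 0.45, 1.2),
    "pavement_condition": (-0.5, 0.05, 0.3),
}

def use_probability(coefs, log_pop, capacity):
    """Predicted probability that an MPO uses a given measure."""
    b0, b1, b2 = coefs
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * log_pop + b2 * capacity)))

def priority_weights(log_pop, capacity):
    """Normalize predicted use probabilities into weights that sum to 1."""
    probs = {m: use_probability(c, log_pop, capacity) for m, c in COEFS.items()}
    total = sum(probs.values())
    return {m: p / total for m, p in probs.items()}

# Compare a large, high-capacity MPO with a small, low-capacity one:
large = priority_weights(log_pop=14.5, capacity=0.9)
small = priority_weights(log_pop=11.0, capacity=0.3)
```

Under this scheme, differences in community size and collaborative capacity shift the predicted probabilities of measure use, and hence the relative weights, rather than imposing a single ranking on all MPOs.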

In addition to the specific observations that we can make about MPOs and the wider networks of which they are a part, we believe this study sheds additional light on

the construction of performance management systems in inter-organizational governance

networks. The biggest conclusion that we can draw from this study for those who study and practice within other types of governance networks pertains to the role that contextual complexity and collaborative capacity evidently play in determining performance measurement use. Furthermore, it suggests that despite the complexities of inter-organizational arrangements, collaboration for performance management is not only possible, but highly

desirable. We therefore conclude that greater attention should be paid to exploring the factors

that contribute to the development of collaborative capacity, a focus of much of the recent

literature pertaining to collaborative management (Agranoff and McGuire, 2003; Bingham and

O’Leary, 2008), collaborative governance (see Ansell and Gash, 2007), and collaborative

networks (Milward and Provan, 2006). These findings suggest that harnessing the collaborative capacity of inter-organizational governance networks needs to be viewed as a contributing factor in the successful design of performance management systems.

ACKNOWLEDGMENTS


REFERENCES

Agranoff, Robert and McGuire, Michael. Collaborative public management: New strategies for local government. Washington, DC: Georgetown University Press; 2003.

Ansell, Chris and Gash, Alison. Collaborative governance in theory and practice. Journal of Public Administration Research and Theory; 2008; 18(4): p. 543-571.

Bingham, Lisa and O’Leary, Rosemary. (eds). Big ideas in collaborative public management.

Armonk, NY: M.E.Sharpe; 2008.

Cambridge Systematics, et al. Network performance measurement handbook. National Cooperative Highway Research Program; 2010; Project 08-67.

Codd, Ned and Michael C. Walton. Performance Measures and Framework for Decision Making Under the National Transportation System. In Transportation Research Record; 1996; No. 1518: p. 70-77.

Duthie Jen, Cervenka Ken and S. Travis Waller. Environmental Justice Analysis: Challenges for Metropolitan Transportation Planning. In Transportation Research Record; 2007; No. 2013: p. 8-12.

Frederickson, David and Frederickson, H. George. Measuring the performance of the hollow state. Washington, DC: Georgetown University Press; 2006.

Gajda Rebecca and Chris Koliba. Evaluating the imperative of intra-organization collaboration: A school improvement perspective. American Journal of Evaluation; 2007; 28(1): p. 26-44.

GAO. Transportation Planning: Survey of Metropolitan Planning Organizations (GAO-09-867SP, September 2009), an E-supplement to GAO-09-868. Available online at http://www.gao.gov/special.pubs/gao-09-867sp/09-867sp4.html. Accessed July 20, 2010.

Granovetter, Mark. The strength of weak ties. American Journal of Sociology; 1973; 78(6): p. 1360-1380.


Handy Susan. Regional transportation planning in the US: An examination of changes in

technical aspects of the planning process in response to changing goals. In Transport Policy,

2008, No. 15: p. 113-126.

Hocevar Susan, Thomas Gail Fann, & Erik Jansen. Building collaborative capacity: An

innovative strategy for homeland security preparedness. In M.M. Beyerlein, D.A. Johnson

and S.T. Beyerlein (eds), Innovation through collaboration. New York: Elsevier; 2006; (12):

263-283

Koliba Chris, Zia Asim & Brian Lee. The analysis of complex governance system dynamics:

Emergent patterns of formation, operation and performance of regional planning networks.

Decision Theater Workshop on Policy Informatics. Arizona State University. Tempe, AZ.;

2010.

Koliba Chris, Meek Jack, & Asim Zia. Governance Networks in Public Administration and Public Policy. Boca Raton: CRC Press; 2011. p. 189-224.

Koliba Chris, Campbell Erica & Asim Zia. Performance measurement considerations in congestion management networks: Aligning data and network accountability. In Public Performance & Management Review; In Press.

Lyman Kate and Robert L. Bertini. Using Travel Time Reliability Measures to Improve Regional Transportation Planning and Operations. In Transportation Research Record; 2008; No. 2046: p. 1-10.

Meyer Michael. Alternative methods for measuring congestion levels. In Curbing Gridlock:

Peak-Period Fees to Relieve Congestion. TRB, National Research Council, Washington,

D.C.; 1994; (242): 32–61.


Meyer Michael. Measuring System Performance: Key to Establishing Operations as a Core

Agency Mission. In Transportation Research Record; 2002; No. 1817: p.155-162.

Meyer Michael D., Campbell Sarah, Leach Dennis, & Matt Coogan. Collaboration: The Key to

Success in Transportation. In Transportation Research Record; 2005; No. 1924: p. 153-162.

Milward, Brinton and Keith Provan. A manager’s guide to choosing and using collaborative networks. Washington, D.C.: IBM Center for the Business of Government; 2006.

Montes de Oca, Norah and David Levinson. Network Expansion Decision Making in Minnesota’s Twin Cities. In Transportation Research Record; 2006; No. 1981: p. 1-11.

Moynihan, Donald. The dynamics of performance management: Constructing information and

reform. Washington, DC: Georgetown University Press. 2008.

Moynihan, Donald and Pandey, Sanjay. Testing how management matters. Journal of Public Administration Research and Theory; 2005; 15(3): p. 421-439.

Moynihan, Donald P. and Sanjay K. Pandey. The Big Question for Performance Management:

Why Do Managers Use Performance Information? Journal of Public Administration

Research and Theory; 2010; 20: p. 849-866.

Page Stephen. Measuring accountability for results in interagency collaboratives. Public

Administration Review; 2004; 64(5): p. 591-606.

Page Stephen. Managing for results across agencies: Building collaborative capacity in the human services. In Bingham, L.B. and R. O’Leary (eds.) Big ideas in collaborative public management. Armonk, NY: M.E. Sharpe; 2008. p. 138-161.

Radin Beryl. Challenging the performance movement: Accountability, complexity and

democratic values. Washington, DC; Georgetown University Press; 2006.


Reinke David & Daniel Malarkey. Implementing Integrated Transportation Planning in Metropolitan Planning Organizations: Procedural and Analytical Issues. In Transportation Research Record; 2006; No. 1552: p. 71-78.

Weber, Edward P. and Anne M. Khademian. Wicked problems, knowledge challenges, and collaborative capacity builders in network settings. Public Administration Review; 2008; 68(2): p. 334-348.

Yang Kaifeng. Responsiveness in network governance: Revisiting a fundamental concept. Public Performance & Management Review; 2007; 31(2): p. 131-143.