Users' Guide for Measuring Public Administration Performance


UNDP Oslo Governance Centre

A USERS' GUIDE TO MEASURING PUBLIC ADMINISTRATION PERFORMANCE

Copyright 2009 by UNDP. All rights reserved. For information regarding the appropriate use of this document, please contact the UNDP Oslo Governance Centre.

    Cover design and lay-out by Keen Media Co., Ltd.

United Nations Development Programme
Oslo Governance Centre
Democratic Governance Group
Bureau for Development Policy
Borggata 2B, 0650 Oslo, Norway

    FIRST EDITION June 2009


CONTENTS

INTRODUCTION

CHAPTER 1: ASSESSMENTS OF PUBLIC ADMINISTRATION PERFORMANCE
1.1 Our classification of assessment tools and information sources
1.2 Balancing normative and technocratic assessments
1.3 Scope of information sources and assessment tools
1.4 Designing a PAP assessment
1.5 What are the objectives of the user?
1.6 What do the tools and information sources actually measure?
1.7 What is the normative basis of these assessments?
1.8 Are good governance principles embedded in assessment tools?
1.9 How can PAP assessments focus on particular objectives of government policy?
1.10 Contextualizing PAP assessment tools
1.11 Should indicators measure inputs, processes, outputs or outcomes?
1.12 Combining quantitative data with qualitative data
1.13 What are the most appropriate techniques to collect information?
1.14 Using composite indices for cross-country comparisons: Uses and pitfalls
1.15 How can ownership be fostered?

CHAPTER 2: GOOD PRACTICES: GUIDANCE FOR GOVERNMENTS

CHAPTER 3: CASE STUDIES
Case study 1: 360-degree feedback tool for Government policy making
Case study 2: Designing an Employee Attitude Survey
Case study 3: Designing a Citizen Report Card
Case study 4: Monitoring a National PAR strategy

REFERENCES

SOURCE GUIDE

Assessments
1. Quantitative Service Delivery Surveys (QSDSs)
2. Citizen Report Cards
3. Common Assessment Framework (CAF)
4. Country Governance Assessment (CGA)
5. Capability Reviews
6. Public Expenditure Tracking Surveys (PETSs)
7. Self-Assessment Tool for Customer Service Excellence
8. Performance Measurement Framework (Public Expenditure and Financial Accountability - PEFA)
9. Public Officials Survey
10. Country Assessment in Accountability and Transparency (CONTACT)
11. Evaluation Matrix of Civil Service Human Resource Management in the European Union
12. Control and Management System Baselines
13. Human Resources Self-Assessment Guide
14. Human Resource Management (HRM) Assessment Instrument
15. Analytical Framework for Institutional Assessment of Civil Service Systems
16. Engendering Budgets: A Practitioner's Guide to Understanding and Implementing Gender-Responsive Budgets
17. National Integrity Systems (NIS)
18. Diagnostic Framework for Revenue Administration

Information Sources
1. Country Policy and Institutional Assessment (CPIA)
2. Governance Matters
3. World Competitiveness Yearbook
4. Government at a Glance
5. Bertelsmann Reform Index (BRI)
6. Open Budget Initiative
7. Cross-National Data on Government Employment & Wages
8. Integrity Indicators


INTRODUCTION

The origins, purpose and structure of this guide

This Guide is intended to respond to an increasing demand from UNDP Country Offices and a wide range of national stakeholders for guidance on the multiplicity of tools and methods that are being used to measure, assess and monitor the performance of public administration (PAP).[1] There is also a growing demand for more operational and nationally-owned measurement tools for public administration (PA), which is partly a reaction to traditional public administration reforms (PAR) and assessments that were mainly donor-driven.

In this Guide, those tools which produce information and ratings for countries are called information sources, and those which users can apply themselves are referred to as assessment tools.[2]

The proliferation of different assessment tools has resulted in some confusion about which tools to use, how they should be applied and their weaknesses and strengths. It is our hope that this Guide will help clear up some of this confusion by providing practical guidance to those who are seeking out better ways to measure PA at the country level, including government officials, donor agency staff, reform practitioners, civil society, and researchers. More specifically, readers will find answers to such questions as:

- How to select amongst the existing (or decide to design a new set of) PA indicators?
- How to deal with the preparation and launching of an assessment?
- How to secure national ownership of an assessment process in order to ensure that results are both useful and used by national actors?
- How to ensure that the assessment is rigorous and methodologically sound?
- What to do with the results?
- How to address problems of sustainability?

The Guide is essentially made up of two parts. The first part critically reviews the existing assessment tools and information sources which are readily accessible to potential users on the internet. It then sets out some practical guidance for users, partly in the form of short stories which illustrate some common measurement problems users may face and how they can be solved. The guidance is based on direct feedback from users of assessment tools, and from a distillation of good practices. To this end, 20 telephone interviews were conducted with PA practitioners (mainly intergovernmental agency staff, consultants and researchers) who have been directly involved in public administration reforms at the country level.

The second part is the Source Guide, which is an inventory of extant assessment tools and methodologies. At present, there is no resource that offers a global overview bringing together all extant approaches, tools and methods in the area of PA. The Source Guide is structured in a way to provide detailed information on each tool, including: history, objectives, measurement focus, types of information generated, methodology used, strengths and weaknesses (including gender and poverty focus), and the website from which a user can access the tool. The purpose of compiling and organising this information is to provide stakeholders engaged in public administration with a resource that can be drawn on for developing new assessment tools or adapting existing assessment approaches to their specific contexts. It is important to note that the Guide does not provide a new measurement or assessment methodology, nor does it provide a specific blueprint to conduct such assessments.

[1] This Guide offers measurement guidance related to the performance of the Public Administration (PAP). It will refer to Public Administration (PA) in general (with the understanding that the focus is on public administration performance) and to Public Administration Reform (PAR) when it explicitly deals with issues of reform.

[2] This distinction is based on current practice, although in principle information sources could be used to enable national stakeholders to conduct their own assessments.


The Guide is an important component of UNDP's body of guidance on measuring and assessing democratic governance, developed as part of UNDP's Global Programme on Capacity Development for Democratic Governance Assessments and Measurement.[3] This Programme supports nationally owned processes for assessing and measuring democratic governance and aims to facilitate the development of tools which have broad-based national ownership, are pro-poor and gender-sensitive, and are designed to identify governance weaknesses and capacity gaps.

[3] http://www.undp.org/oslocentre/flagship/democratic_governance_assessments.html

CHAPTER 1: ASSESSMENTS OF PUBLIC ADMINISTRATION PERFORMANCE

We have used the term Public Administration (PA) throughout this guide, although the terms Public Management, Public Sector Management, or Civil Service Management are often used interchangeably. For the purpose of this guide, Public Administration refers to: (1) the aggregate machinery (policies, rules, procedures, systems, organizational structures, personnel, etc.) funded by the state (or local government) budget and in charge of the management and direction of the affairs of the executive (and local) government(s), and its (their) interaction with other stakeholders in the state, society and external environment at national and sub-national levels; and (2) the management and implementation of the whole set of (local and national) government activities dealing with the implementation of laws, regulations and decisions of the government(s) and the management related to the provision of public services at national and sub-national levels.[4]

Simply put, public administration reforms can be said to consist of "deliberate changes to the structures and processes of public sector organizations with the objective of getting them... to run better".[5] Depending on the context, it includes mechanisms to improve policymaking and coordination, the building of robust organisational structures, deconcentration and devolution, human resource management, as well as communication and information systems. Particular changes to the physiognomy and modus operandi of the public administration will of course be informed by particular ideas and ideologies (such as New Public Management) and they will be shaped by the views and priorities of politicians and government.

Historically, developing countries have tended to follow a similar reform path to the one adopted in advanced countries that have reformed their public administrations. In Africa, for example, reform programmes can be characterised as following three main phases which in practice have overlapped. In the first phase (1980s), reform programmes were launched in the main to achieve fiscal stability by reducing the costs of the bureaucracy. Emphasis was placed on rationalising the payroll, controlling establishments and often outright downsizing of the public service. In the second phase (1990s), once fiscal stability had been more or less achieved, the objectives shifted towards efficiency and effectiveness. This was the era where governments undertook to restructure their public services, to decentralize services and to reform systems of financial and human resource management. The term "building capacity" gained wide currency during this period. In the new century, the emphasis shifted once again with the advent of sectoral programmes (and related sector-wide approaches) which demanded that the public service concentrate on delivering better and more equitable services for its citizens. This new paradigm shift towards open government[6] led to new reforms designed to strengthen the service delivery capacity of the public service and its responsiveness to citizens, with a particular emphasis on target setting.

However, there is no neat dividing line between these phases, and governments have carried forward unfinished agendas as new programmes have been initiated. As a result, many developing country governments find themselves focusing on many objectives simultaneously and as a result carrying out very broad programmes of PAR. Until recently, having a comprehensive programme was considered good practice because successful reform usually requires tackling the whole system rather than isolated parts of it. But today "big bang" approaches are increasingly being questioned, as positive results with comprehensive reforms remain scarce. While certain reforms undoubtedly need to be steered at the corporate level, sectoral approaches are getting increasing attention both from governments and from the donor community. Hence, not all governments are willing or capable of pursuing comprehensive reform agendas.

[4] UNDP Practice Note on Public Administration Reform, 2003.
[5] Christopher Pollitt and Geert Bouckaert, Public Management Reform: A Comparative Analysis, Oxford University Press, 2000.
[6] This paradigm shift towards open government was inspired by the World Public Sector Reform Report, Unleashing capacities for public sector management.


1.1 Our classification of assessment tools and information sources

This guide starts by examining the key elements that need to be taken into account when measuring the performance of the public administration,[7] in terms of their contribution to the efficiency, quality and effectiveness of public administration. The model can be applied when examining the performance of the public administration at the aggregate level, the sectoral level, or the organisational/agency level. Our working typology is summarised in Box 1 below.[8] Some of these themes will be familiar to readers, others less so. It is therefore necessary to make the following clarifications:

- Civil service leadership has been added as a theme in recognition of the critical role that senior managers play, in some cases as part of a separate executive service, both in driving reforms and maintaining an effective public service;

- Service delivery is regarded by many commentators as an outcome of reform, where public administration engages with the citizen. But it is also a key driver, in that ambitious poverty reduction programmes require a more efficient and effective public administration in order to succeed. As such, improving the processes and capacities for delivering public services is an increasingly significant focus in many reform programmes.

    1.2 Balancing normative and technocratic assessments

This classification is technocratic because it focuses on the content of particular reform ingredients rather than the quality of public administration per se. Further, it ignores both the political environment in which PAR is implemented and the specific aims which are sought by the public administration. The paradigm shift towards more open government as an essential component of a democratic governance infrastructure has widened the debate on PAR to also include the fostering of dynamic partnerships with civil society and the private sector, and ensuring broader participation of citizens in decision-making and public service performance monitoring. It means that apart from a critical focus on delivering services, special attention is also needed to transform the public service into a representative and accountable institution.

UNDP sees the public administration not only as an instrument to ensure effective policy making, sound management of resources and the delivery of essential services, but also as an essential means to advance the international norms and principles that underpin the democratic governance paradigm in a given society. For instance, rule of law deficits in the public administration (lack of written decisions, or resistance of lower agencies to comply with decisions and rulings) can have a serious impact on individuals and communities. By situating the public administration within a rule of law framework, the users of the system become rights-holders, able to legally claim services and hold state agents accountable.

[7] For the purposes of this guide, the public administration will refer to the executive branch of the central government. It includes the ministries which serve the executive, as well as non-ministerial departments, semi-autonomous bodies and front-line service delivery units such as schools and hospitals.
[8] A further component, the machinery of government (which is concerned with the rules, institutions, and structure of the administration necessary to carry out government policy), has been omitted from this classification because we were unable to find a single tool which purported to measure it. This is a major gap in the range of assessment tools, because restructuring is typically a key component of PAR in developing countries. Finally, procurement systems and e-government are not included as separate elements of PAR in our typology.

BOX 1: A WORKING TYPOLOGY FOR MEASURING PUBLIC ADMINISTRATION PERFORMANCE

- Civil service management: legal and ethical framework, HRM policies and procedures, institutional framework
- Public financial management: budget preparation and execution, accounting systems, audit and legislative scrutiny
- Government policy making: processes, structures and capacity for analysing problems, identifying and appraising options, consulting with stakeholders, decision-making, monitoring and evaluation
- Leadership: selecting, rewarding, deploying and developing capacity of senior public servants
- Service delivery: identifying client needs, developing service standards and targets, monitoring performance, building capacity to deliver


Another example of how a democratic governance framework calls attention to particular dimensions of the public administration relates to the principle of accountability. Still today, in many bureaucracies around the world, accountability is essentially internal, through a hierarchy of supervisors and horizontal control structures, with little concern for accountability to the public or to elected representatives. Client charters are easily applicable methods of downward accountability. Public hearings and public grievance mechanisms that allow citizens to report abuses of authority can also promote democratic oversight at the local level.[9]

Clearly, the normative framework selected will influence the choice of interventions, the priorities within them and how they should be implemented in particular country contexts. It will also affect the particular themes and indicators which are considered relevant in any assessment exercise. For instance, the Country Governance Assessment framework, developed by the Asian Development Bank, provides guidance questions for the evaluation of revenue administration and public expenditure management. While many of these questions are purely technical, some are informed by democratic governance principles, such as responsiveness ("Do budget priorities reflect underlying needs and demands of civil society?") and transparency ("Is the budget published and made available to the public?"). In later sections of this guide, we look at more concrete examples where the choice of indicators reflects a democratic governance perspective of public administration.

1.3 Scope of information sources and assessment tools

With the exception of public financial management, public administration reform has lost some momentum in recent years as a consequence of the explosion of interest, for good reasons, in the broader concept of governance among the donor community. In response to this, many information sources and assessment tools have emerged in recent years for measuring governance and corruption,[10] but not that many measure different aspects of public administration performance, even though several anti-corruption assessment tools relate directly to PA. However, there is a new realisation in some quarters, notably the World Bank and the OECD, that there is a gap in the measurement of PA and its impact on the effectiveness of government that needs to be filled. As a result, promising new tools are now being developed.[11] The recent financial crisis, and its impact on the performance of governments and the delivery of services in particular, has put public administration high on the governance agenda again.

There are far more assessment tools available to users than information sources

Our research has unearthed a number of relevant existing tools and information sources which can be used to measure different components of PAP. They are all publicly accessible, although a few require a simple registration process before they can be used. No licensed instruments, which can only be administered by a particular organization on the payment of a fee, have been included. Of the 25 instruments highlighted in Table 1, 18 are assessment tools and only 7 are information sources.[12]

There is a reasonably good range of assessment tools which are accessible to users if we search beyond the international aid community

These 25 instruments can be further categorised as follows:

- Purpose-built tools which purport to measure a particular aspect of public administration performance
- Broader governance assessment tools and information sources which include a component on public administration
- Tools designed for a different purpose, but which measure the relevant processes

[9] A UN Discussion paper on the approach to public administration, local governance and financial transparency and accountability in post-conflict peace building operations (draft), Dec. 2008.
[10] See UNDP, Governance Indicators: A Users' Guide, second edition (2007).
[11] Such forthcoming tools include the World Bank's HRM Actionable Indicators and the OECD's Government at a Glance indicators.
[12] A number of broader governance tools are excluded from our list because their measures of public administration are limited.


The inclusion of the third category provides the potential user with a significant amount of choice. For instance, the Common Assessment Framework (CAF) developed by the European Institute of Public Administration was originally designed to assess the effectiveness of public sector organizations (from a Total Quality Management perspective), but it can be adapted to measure systemic issues. And there are an additional two tools under development (the World Bank's Actionable Governance Indicators and the OECD's Government at a Glance).

As a first screen for users, Table 1 indicates the thematic areas covered by each assessment tool and information source. (A short sketch of how such a screen might be put to work follows the table.)

Table 1: What do assessment tools and information sources measure?
Themes: leadership (L), civil service (CS), public financial management (PFM), policy making (PM), service delivery (SD).

Assessment tools
1. Quantitative Service Delivery Surveys (World Bank): SD
2. Public Expenditure Tracking Surveys (World Bank): PFM
3. Citizen Report Cards (Public Affairs Centre): SD
4. Country Governance Assessment (Asian Development Bank, ADB): CS, PFM, PM
5. Common Assessment Framework (European Institute of Public Administration, EIPA): L, CS, PFM, PM, SD
6. Capability Reviews (UK Civil Service): L
7. Self-Assessment Tool for Customer Service Excellence (UK Civil Service): SD
8. Public Expenditure and Financial Accountability (PEFA) (multi-institution project: World Bank, IMF, EC, DFID, SPA): PFM
9. Public Officials Survey (World Bank Netherlands Partnership Program, BNPP): CS, PM
10. Country Assessment in Accountability and Transparency (CONTACT) (UNDP): PFM
11. Evaluation Matrix of Civil Service Human Resource Management in the EU (World Bank): L, CS
12. Control & Management System Baselines (SIGMA, OECD & EU): CS, PFM, PM
13. Human Resources Self-Assessment Guide (State of Texas): CS
14. Human Resource Management (HRM) Assessment Instrument (Management Sciences for Health): CS
15. Analytical Framework for Institutional Assessment of Civil Service Systems (Inter-American Development Bank, IADB): CS
16. Engendering Budgets: A Practitioner's Guide to Understanding and Implementing Gender-Responsive Budgets (Commonwealth Secretariat): PFM
17. National Integrity Systems (Transparency International): CS, PFM
18. HRM Actionable Governance Indicators (World Bank; currently under development): CS
19. Diagnostic Framework for Revenue Administration (World Bank): PFM

Information sources
20. Government at a Glance (OECD; currently under development): CS, PFM, PM, SD
21. Management Index, Bertelsmann Reform Index (Bertelsmann Stiftung): L, PM
22. Open Budget Initiative (International Budget Project): PFM
23. Cross-National Data on Government Employment & Wages (World Bank): CS
24. World Competitiveness Yearbook, Government Efficiency (IMD): PFM, PM
25. Governance Matters, Government Effectiveness (World Bank): CS, PM, SD
26. Country Policy and Institutional Assessment (CPIA), Criteria 13: Quality of Budgetary & Financial Management; Criteria 15: Quality of Public Administration (World Bank): CS, PFM, PM, SD
27. Integrity Indicators (Global Integrity): CS, PFM, PM, SD
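Because Table 1 is at heart a matrix of instruments against themes, a user's first screen can be expressed as a simple filter over that matrix. The fragment below is a minimal sketch of the idea in Python, under stated assumptions: the TABLE_1 dictionary hand-encodes only a handful of rows from the table, and the helper name tools_covering is our own illustrative choice, not part of any instrument listed above.

```python
# A minimal "first screen" over Table 1: filter instruments by the theme(s)
# they measure. The dictionary hand-encodes only a few rows, for illustration.

TABLE_1 = {
    "Quantitative Service Delivery Surveys": {"service delivery"},
    "Public Expenditure Tracking Surveys": {"public financial management"},
    "Citizen Report Cards": {"service delivery"},
    "Common Assessment Framework": {
        "leadership", "civil service", "public financial management",
        "policy making", "service delivery",
    },
    "Capability Reviews": {"leadership"},
    "PEFA": {"public financial management"},
}

def tools_covering(theme: str) -> list[str]:
    """Return every instrument whose thematic coverage includes `theme`."""
    return sorted(name for name, themes in TABLE_1.items() if theme in themes)

print(tools_covering("public financial management"))
# -> ['Common Assessment Framework', 'PEFA', 'Public Expenditure Tracking Surveys']
```

The same structure extends naturally to screening on several themes at once (set intersection), which mirrors how a user might shortlist broad tools before turning to the single-theme instruments discussed below.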

There is one assessment tool, EIPA's Common Assessment Framework (CAF), which attempts to measure all five dimensions of PAP. This assessment focuses on the processes and results of an organization, rather than the public administration environment in which it sits. However, because organizational processes will reflect the constraints imposed by the central rules and regulations, application of the tool across a number of public institutions can generate evidence of systemic PA problems. In addition, there are two tools which measure three of the core themes, namely human resource management, public financial management and policy making. These are the Asian Development Bank's Country Governance Assessment and SIGMA's Control and Management System Baselines.

There is a preponderance of assessment tools and information sources which measure public financial management reform and human resource management reform

When looking at both assessment tools and information sources, we find 13 instruments measuring PFM and 15 instruments measuring HRM. Both are critical reform areas. However, PFM tools have attracted a greater interest than HRM tools within the development community in the last few years. As a consequence, the PFM tools tend to be more contemporary than the HRM tools. In part, this situation reflects the self-interest of donors in the effectiveness of their aid, as well as the shift in aid practice towards direct budget support mechanisms, which give rise to concerns about fiduciary risk. It also reflects the professional biases of international agencies, which tend to be dominated by economists and financial management specialists. But traditionally also, these two sectors (financial and human resources) were seen as the main drivers of public sector efficiency. Attention to leadership, service delivery, the quality of policy making and information and communication capacity has come at a later stage. However, the forthcoming Actionable Governance Indicators from the World Bank will to some extent restore a balance.

Instruments which focus on a single theme can generate deeper assessments than the broader tools

Tools which attempt to measure a number of themes, in particular the broader governance assessments, do not generally measure them in great depth. In contrast, instruments which focus on a single theme provide for a much deeper assessment of the phenomena being measured in a relatively short period of time. Our mapping has identified the following tools and sources which focus on a single theme (Table 2):


Table 2: Assessment tools and information sources which focus on a single theme

Civil service management:
- Human Resources Self-Assessment Guide
- Human Resource Management Assessment Instrument
- Cross-National Data on Government Employment & Wages
- Analytical Framework for Institutional Assessment of Civil Service Systems

Public financial management:
- Public Expenditure Tracking Surveys
- CONTACT
- Public Expenditure and Financial Accountability
- Engendering Budgets: A Practitioner's Guide to Understanding and Implementing Gender-Responsive Budgets
- Open Budget Initiative
- Diagnostic Framework for Revenue Administration

Service delivery:
- Quantitative Service Delivery Surveys
- Citizen Report Cards
- Self-Assessment Tool for Customer Service Excellence

Leadership:
- Capability Reviews

    1.4 Designing a PAP assessment

The majority of instruments have been designed by international agencies, which means that special efforts must be made to achieve host government ownership of the findings

More than 60 per cent of the information sources and assessment tools captured have been designed (or are in the process of being designed) by official international agencies. Others have been developed by public sector organizations in advanced countries and a private consulting company. Few instruments have been developed by civil society organizations (NGOs, academic and research organizations).

In practice, therefore, a national government wishing to carry out a PAP assessment will often be choosing a tool which has been designed by an international agency for application in a number of countries. Users will therefore have legitimate concerns about the relevance of the indicators and methodology to their particular situation, and as a result the findings may lack face validity. Potential users are therefore encouraged to examine the procedure that was used to develop the instrument. For example, to what extent was a sample of national stakeholders consulted in the design? Or, was the tool first tested in developing country contexts and subsequently refined before its wider application?

PAP assessments are only useful to the extent that they lead to action by national stakeholders. And action will only occur when findings are perceived as valid by the user, which in turn will depend on ownership of the assessment process. Ownership is therefore a central theme in this guide. Our contention is that ownership is more likely to be achieved if the user is able to make a contribution to the design of the assessment tool. As illustrated in Figure 1, the design involves three key steps: deciding upon objectives, selecting specific measures and developing the methodology. All three steps are important for generating accurate and valid data for users, and for building ownership of the assessment process.

Figure 1: Users' contribution to the design of the assessment is key for securing ownership. (The figure shows the three design steps, objectives, measures and method, feeding into ownership.)


1.5 What are the objectives of the user?

An essential criterion for selecting a source or tool is whether it meets the objectives of the user. By objectives, we mean the ultimate purposes for which the information will be utilised. As a very first step, users should have a clear understanding of why the assessment is important and why they are carrying it out. In browsing through the official information provided by the owners of different tools, the potential user will find a veritable dictionary of words used to describe the intended objectives. For example, the list includes assessment, benchmarking, cross-country comparison, research, dialogue, diagnosis, improving performance, aid allocation, feedback, tracking changes, measuring impact of reform and so on. Some tools are not explicit about objectives and simply refer to the information and results that the source or tool is capable of generating. When this is the case, clarity of purpose for undertaking an assessment is all the more important if users are to be able to select the right tool only by examining the information generated by it.

The user should be absolutely clear on his or her objectives for the assessment exercise before choosing a tool

For this guide, objectives have been classified into five categories based on the information provided in the official literature on the tool and, in some cases, our interviews:

- Diagnosis & planning: where the information generated constitutes a formal review of strengths and weaknesses feeding into a planning exercise in which decisions are made on reform priorities. Many tools claim to be useful for diagnostic purposes. The reality, however, is that the majority provide data on high-level indicators corresponding to the immediate objectives of reform, which need to be analysed further. Such tools can identify the weak areas for further investigation as part of a more intensive diagnostic process. [Example: HRM self-assessment]

- Monitoring and accountability: in which information paints a picture of how well government is performing in a particular thematic area. Monitoring is useful to establish the current situation in relation to an established standard against which future improvements can be measured. Many tools can be used for monitoring purposes. But if the user wishes to monitor progress of an ongoing reform programme, the measures selected will need to be customised to reflect its strategic objectives. [Example: Citizen Report Cards]

- Cross-comparison and benchmarking: in which information is used to compare one country with another, usually by calculating a score for the phenomena being measured. Benchmarking is a further development which involves a measurement against a standard, or good practice. This is a common objective, particularly for those sources and tools which originate in the donor community. [Example: Cross-National Data on Government Employment and Wages]

- Dialogue and joint decision-making: where information can be used to initiate a dialogue about changes and improvements. Dialogue is a modest objective which is particularly relevant where donor-government partnerships are being promoted in line with the Paris Declaration. Dialogue can form the first step in a more in-depth diagnosis. [Example: Open Budget Index]

- Resource allocation: in which the outcome of the exercise is linked to the allocation of aid. This is not an explicit objective of any of our tools or sources, but it is included because it may be a perceived objective by national governments. [Example: PEFA]

The stated objectives of sources and tools are expressed in terms of this classification in Table 3. International agencies are interested in measurement for a variety of purposes: dialogue, diagnosis, monitoring and international benchmarking. In contrast, national governments are mainly interested in measuring PA for diagnostic and monitoring purposes, as well as for priority setting, and to a lesser extent for dialogue and joint decision making. Middle income countries aspiring to become developed nations are often interested in international benchmarking, but most developing countries less so.

Many of the sources and tools are serving multiple objectives. In general, where a single tool has many objectives it tends to be difficult to meet them all satisfactorily. Some objectives are complementary. For instance, international comparisons and benchmarking can form part of a locally applied diagnostic tool or monitoring exercise. And information for dialogue can be useful for diagnostic purposes. However, information for dialogue is not the same as a diagnostic, which examines causes as well as symptoms. If the user has multiple objectives, it will be necessary to mix and match different tools.


Assessment exercises usually work best when the user has a single objective, and where this mirrors the objectives of other stakeholders

It is often claimed that a tool can be used both for diagnostic and for monitoring and accountability purposes. This is not strictly correct. A diagnostic tool designed to formulate a reform plan will normally need to delve deeply into relevant processes (e.g. recruitment and selection of civil servants) to determine how they are working and why they are working the way they are. This level of detail may be required for monitoring purposes by the specialists tasked with making the changes. But it will be less useful for reform managers, who are more likely to want information on the immediate results which the process changes are intended to produce. In practice, therefore, the information needs for diagnosis and monitoring will differ, although there will be areas of overlap.

Ideally, there should be agreement on the objectives of the exercise among all stakeholders. Objectives should be stated explicitly. Problems can arise when the user perceives that a key stakeholder has a hidden agenda. An example is PEFA, which is intended to monitor the performance of a country's public financial management system, as well as providing a platform for dialogue between donors and governments. However, because of the comprehensiveness of the tool, some assessments have formed part of a fiduciary risk assessment. In such cases, a national government may attempt to cover up any deficiencies because this could reduce the prospects of incremental development aid. This potential blurring of objectives occurs in large measure because of the success of PEFA, which has tended to supersede many former tools. The problem can be overcome through extensive consultation between government stakeholders and the assessment team during the process, and by placing emphasis on strengthening PFM systems over time, rather than meeting a fixed standard.

Objectives, both explicit and perceived, can influence behaviour and hence the quality and reliability of the information generated

Where the objective of the user is to provide information for monitoring and accountability purposes, there will be a tendency for those providing the information either to set the target low or to inflate the measure so that performance appears better than it is. A diagnostic tool can lead to covered-up problems where those providing the information have a vested interest in preserving the status quo. This will occur when there is a separation between the user, such as a Minister or senior government official, and the respondents, who may be employees at the operational level.

1.6 What do the tools and information sources actually measure?

A second criterion for choosing a tool is the relevance and quality of the information generated. After the user has identified the range of instruments which meets his objectives, he should examine more closely the indicator set they employ.

There is a critical distinction between practices de jure (in law) and those de facto (in use). Often, official practices are not carried through, or implemented only partially. For this reason most tools attempt to measure practices in use, although this is not always explicit in the way the indicator or question is framed.[13] The World Bank's forthcoming HRM Actionable Governance Indicators will classify each question as "in law" or "in practice". There is always a risk that respondents will answer the easier question (whether a written procedure exists), especially when there is no clear-cut answer on whether it is implemented in practice. As a rule, when an assessment enquires whether a written procedure exists, it should also ask the extent to which it is actually applied.

The user has choices in the thematic areas of service delivery and human resource management reform, where there is significant diversity in indicator sets and methodologies. The tools measuring PFM and government policy making are more uniform, in part reflecting a consensus among agencies on good practice.

[13] The exceptions are the SIGMA Control and Management System Baselines and the Matrix for EU Accession, which place considerable emphasis on the existence of policies and procedures in law. In these tools the distinction between practices de jure and those in use is made explicit.


Service delivery

There are four service delivery tools presented in our source guide, which measure three different aspects of service delivery. The World Bank surveys tend to focus on concrete dimensions of service delivery, such as quality, timeliness, access and efficiency, which can be tailored to the sector or organization. Usually, quantitative information is obtained from the service providers' records. Both the Citizen Report Card and the EIPA's CAF measure service delivery, but from the perspective of the user rather than the provider. The former tool actually asks citizens for their views, whereas the latter simply requires management to provide an opinion on customer satisfaction.

In contrast to these two tools, the UK Civil Service Self-Assessment Tool for Customer Service Excellence emphasizes how services are provided, rather than what is actually delivered to the customer. The focus is on improving processes and capacity that are likely to deliver value to the customer. It therefore looks at the determinants of customer satisfaction. None of these tools is necessarily superior to any other. Each can be used as a stand-alone tool, or they can be used in combination to provide a more comprehensive picture of service delivery.

Public Human Resource Management

In measuring HRM reform the user has a more difficult choice to make. There are nine assessment tools in our source guide, one of which is under development (the World Bank's HRM Actionable Governance Indicators). Three assessment tools (authored by EIPA, MSH and the State of Texas) measure policies and processes covering the full range of human resource management activity. In addition, the MSH tool examines the capacity of the organization to manage its people. These tools all derive their indicators from good private sector practice, which are arguably applicable to public sector organizations. Indeed, they provide a more comprehensive coverage of HR practices than the purpose-built tools.

However, there are some important differences between HRM in the public and private sectors. The first is that in the public service HR practices are usually governed by a Civil Service Law which defines the basic principles of employment for all public service bodies, with procedural details established in subsidiary regulations.[14] A second difference is that employees in the public service are required to uphold certain ethical and behavioural standards, such as impartiality, which are derived from a common set of values. A final critical difference is that, because the principle of entry to the service on merit must be upheld, special arrangements are necessary to check that the executive does not abuse its power to make appointments for reasons of nepotism or political patronage. Independent oversight of the executive can be achieved either by establishing a Public Service Commission or an Ombudsman Office reporting to Parliament, or by providing legal mechanisms for individuals to seek redress through the Courts.

There are three assessment tools which recognise the distinctive nature of human resource management for the civil service. These are the World Bank's Evaluation Matrix of Civil Service HRM in the EU, SIGMA's Control and Management System Baselines, and the Asian Development Bank's Country Governance Assessment tool. All of these pay attention to the legal framework, the institutional arrangements for managing the civil service, and the conduct of civil servants. At the same time, they also cover the key HRM policies and procedures. These are all good tools in terms of the breadth of indicators, but the World Bank tool is arguably the most comprehensive. The World Bank's forthcoming Actionable Governance Indicators for civil service management promises to be an improved version of the EU accession instrument.

Public financial management

Public financial management is well-trodden territory, and the user should not expect to find much variation in the indicators employed in the many tools available. The technical scope of PFM embraces budgeting, accounting, audit and legislative oversight. The PEFA tool is the most contemporary tool, which is well researched and supported by a consortium of international agencies. The framework of six dimensions and 31 high-level indicators is comprehensive, reflecting the problems commonly found in developing country systems, and also many developed countries.[15] The Common Assessment Framework (CAF) by EIPA is also comprehensive in scope, but it tends to view good practice from an organizational rather than a systems perspective. Both of these tools can be recommended to potential users. UNDP's CONTACT is a more detailed question-based instrument which finance specialists will find useful.

[14] While private companies also have similar statutory frameworks and implementation arrangements, they tend to differ from one company to another, as opposed to Civil Service Law, which establishes a common framework for all public service bodies.
[15] The Government of Norway, for example, did a self-assessment using the PEFA instrument and did not perform well on a number of indicators. This indicates the somewhat ambitious nature of PEFA standards, perhaps too ambitious for many developing countries.


The Open Budget Index (OBI) is an information source which complements the more technical tools by assessing budget transparency. It provides information on the availability of budget documentation, the executive's budget proposal and the budget process in 59 countries. OBI has built into its indicator set many of the key elements of the budget process that are measured by PEFA. For example, in order for a country to score highly on the OBI index, its budget must be comprehensive, it must adopt a standard budget classification, and resources reflected in the budget must be aligned with government's policy objectives. These processes guarantee that the information provided to civil society is meaningful.

Government policy making

The policy making assessment tools are all checklists which capture very similar information. They focus on the policy making process, but they also examine capacity to some extent. Of these, the SIGMA checklist is perhaps the most widely accepted and recognised. It examines: (a) the alignment of sectoral policies with overall government strategy; (b) rules for the preparation and submission of policies; (c) provision for inter-ministerial consultations; (d) the efficiency and quality of decision making; and (e) capacity for preparing and negotiating policies in ministries. The CAF tool differs from the others in that it measures the development of strategy and policy from the perspective of individual public organizations rather than the wider government policy process.

Leadership

The two leadership assessment tools both measure the behaviour of senior managers, although there are differences in the competences selected. One important difference is that the CAF assesses relationships with stakeholders, including with politicians, a dimension which is missed by the Capability Reviews.

1.7 What is the normative basis of these assessments?

Governance assessments are concerned with the extent to which particular principles and values (e.g. transparency, accountability, participation, integrity) are put into practice. They are therefore inherently normative in their orientation. In contrast, PAP assessment tools tend to focus on practices rather than principles, which often obscures any value judgements from view. In addition, they are typically neutral with respect to the policy objectives (e.g. poverty reduction, gender equality) that are sought from better public administration.

The available tools for PFM and Government policy making incorporate indicators that are universally accepted

Is there a set of good practice indicators for each thematic area that are universally valid, or do the indicators inevitably reflect the values and biases of the tools' author(s)? PEFA was a genuine attempt by a number of donor agencies to develop a universal set of high-level PFM indicators relevant to all countries. On the whole the assessment tool meets this ambitious claim, given its successful application in many different countries.[16] A similar claim can also be made for the OECD/SIGMA policy making tool, which has been embraced by many practitioners. Its standards are internationally valid, not just for European countries and those aspiring to join the EU, for whom they were originally intended.

One of the reasons that these tools have acquired this status is that they have been kept intentionally simple and restricted to a limited number of high-level indicators, which correspond to broad and widely accepted standards. The beauty of the OECD/SIGMA tool is that it does not prescribe the precise ways in which these standards should be accomplished. For instance, one standard states that "effective inter-ministerial cooperation should take place... through the exchange of information", but it does not say what mechanisms should be used to secure cooperation or what media should be used to exchange information. These decisions are left to the discretion of the government. As soon as one starts to prescribe the specific procedures for meeting agreed standards, then legitimate choices start to emerge and the tool will relinquish its claim to universality.

[16] One practitioner indicated that the indicator on parliamentary scrutiny was geared towards the Anglophone public administration tradition. However, this is only one indicator out of 31.


But, as yet, there is no equivalent public human resource management assessment tool which can claim to embody a universal set of indicators

There are a number of tools which have a number of indicators in common, but also many divergent measures. It appears that the main reason for the existence of many diverse tools is that no agency has yet invested in the research and development of a universal tool which draws together the good features of the existing tools. The different indicator sets simply reflect the different emphases introduced by the practitioners who developed them.

Both PFM and government policy making have evolved to a point where there exist universal indicator sets that can be legitimately applied to any assessment exercise. The user therefore does not need to reinvent the wheel. However, in the relatively neglected area of public human resource management reform, there is far less consensus on what constitute valid indicators. For this thematic area users are advised to customise their assessment instrument, drawing upon the available indicators in different tools.

    1.8 Are good governance principles embedded in assessment tools?

Even if such principles are not explicit, it may be that they are reflected in specific indicators. Accordingly, as examples, we examined the extent to which the principles of responsiveness[17] and transparency are incorporated within the various assessment tools.[18] In the main, such principles are not included, with the exceptions shown in Box 2 below.

BOX 2: INCLUSION OF RESPONSIVENESS AND TRANSPARENCY PRINCIPLES: EXAMPLES

Country Governance Assessment (Asian Development Bank)
- Participation of the public in the policy-making and legislative process
- Are criteria for selection of public servants transparent?
- Do budget priorities reflect underlying needs and demands of civil society?
- Is the budget published and made available to the public?

Self-Assessment Tool for Customer Service Excellence (UK Civil Service)
- Information and access: accurate and comprehensive information is delivered through the most appropriate channel for customers
- Customer insight: effectively identifying customers and consulting them in a meaningful way

Public Expenditure and Financial Accountability (WB, IMF, EC, DFID, SPA)
- Transparency of inter-governmental fiscal relations
- Public access to key fiscal information

Evaluation Matrix of Civil Service HRM in the EU (World Bank)
- Establishment of citizens' grievance process in law
- Existence of an oversight body to ensure the application of civil service policies
- Public advertisement of posts to ensure equal competition
- Access of civil servants to their performance appraisal
- Disciplinary principles based on transparent and fair principles

Open Budget Index (International Budget Project)
- The entire tool measures the access of citizens to budget information

[17] Defined as taking account of citizens' aspirations and needs.
[18] Other relevant public administration principles include accountability, participation, fairness, integrity.


Since the principle of responsiveness is inherent in the concept of customer service, it is not surprising that it is captured in the indicators for the customer service excellence tool. It is surely relevant to government policy making too. Yet the existing tools focus on internal processes, in so doing downplaying the interface between government actors and civil society. There is a strong case for including additional measures to reflect this principle. The principle of transparency is relevant to the government budget and to certain human resource management processes (e.g. recruitment and selection, performance appraisal, discipline). And it does seem to be captured reasonably well by the better PFM and public HRM tools.

1.9 How can PAP assessments focus on particular objectives of government policy?

UNDP's normative standpoint in all governance support is a concern for the rights, needs and interests of the poor, women and other vulnerable groups. Can PAP assessments capture such impacts? Service delivery is the one thematic area where the tools in our source guide pay explicit attention to the impacts on disadvantaged groups and women.[19] For instance:

- Citizen report cards gather the views and perceptions of the actual users of services, who can be stratified into different groups, including poor people and women;
- The World Bank's Quantitative Service Delivery Survey examines the quality and efficiency of the service provided which, depending on the survey design and location, can also collect information on the beneficiaries of the services; and
- The UK Civil Service's Self-Assessment Tool for Customer Service Excellence includes a question about the processes in place to identify the needs of hard-to-reach groups.

However, the assessment tools covering the three core themes (public HRM reform, public financial management reform, government policy making) are primarily concerned with the measurement of internal processes, which are one step removed from the service delivery processes which deliver benefits directly to citizens

In these cases it is less obvious how a pro-poor perspective can be captured within the assessment tool itself. Disaggregating indicators by particular target groups (e.g. poor people, women) only works where the direct impact of the concept being measured (e.g. corruption) is experienced by such groups.

The solution lies in linking specific reform interventions to desired policy objectives in particular country contexts

Country Poverty Reduction Support Programmes which incorporate the MDGs provide an excellent foundation for this thinking because they specify the high level policy goals which governments wish to pursue. The steps to be taken are as follows:

Step 1. Select a specific policy goal
Step 2. Identify the outputs that will contribute to this goal
Step 3. Develop concrete indicators for these outputs
Step 4. Carry out the PAP assessment

The output indicators provide the focus for a PAP assessment. A reform programme can then be designed based on the gaps identified between the actual and the desired situation. Box 3 shows examples of concrete reform outputs which are relevant to pro-poor policy objectives.

19 Significantly, however, none of these tools are designed to measure the extent to which disadvantaged groups have access to services, because they focus on existing service users.


BOX 3: THE CASE OF UNIVERSAL PRIMARY EDUCATION: EXAMPLES OF OUTPUT INDICATORS

Policy process: Education policies are approved which address the causes of low enrolment ratios

Human Resource Management: Performance of head teachers is assessed on the basis of pupil retention, not examination results; teachers are recruited or deployed 20 to overcrowded schools; female teachers are engaged in schools where girls' enrolment ratios are low; low teacher absenteeism

Public Financial Management: Increased government recurrent budget allocated to primary education; low staff costs per pupil
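For users who want to operationalise the four steps above, the sketch below (Python, purely illustrative) encodes the Box 3 example as a simple goal-to-outputs-to-indicators structure. The class and field names are invented for this illustration and do not come from any assessment tool.

```python
# Illustrative only: Steps 1-3 captured as data, using the Box 3 example.
# Class and field names are hypothetical, not part of any standard schema.
from dataclasses import dataclass, field

@dataclass
class ReformOutput:
    theme: str                                   # e.g. "Human Resource Management"
    indicators: list = field(default_factory=list)

@dataclass
class PolicyGoal:
    name: str                                    # Step 1: the policy goal
    outputs: list = field(default_factory=list)  # Steps 2-3: outputs and indicators

upe = PolicyGoal(
    name="Universal primary education",
    outputs=[
        ReformOutput("Policy process",
                     ["Education policies address causes of low enrolment"]),
        ReformOutput("Human Resource Management",
                     ["Head teachers assessed on pupil retention",
                      "Female teachers engaged where girls' enrolment is low",
                      "Low teacher absenteeism"]),
        ReformOutput("Public Financial Management",
                     ["Increased recurrent budget for primary education",
                      "Low staff costs per pupil"]),
    ],
)

# Step 4: the PAP assessment then scores each indicator against the actual situation.
for output in upe.outputs:
    for indicator in output.indicators:
        print(f"{output.theme}: {indicator}")
```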

1.10 Contextualizing PAP assessment tools

The choice of appropriate indicators will depend on the level of reform maturity

Consider three country types: post-conflict, developing and middle income. Although there will always be exceptions, their levels of maturity in a particular reform area will typically mirror their levels of development. Assessment tools are generally designed to meet the specific requirements of a typical developing country, which may not be appropriate for a post-conflict or a middle income country. Countries emerging from conflict will have experienced a collapse of many of the basic systems and rules of public administration. Where fiscal discipline is the most immediate problem that needs to be faced, there seems little purpose in asking questions about medium-term budgeting or the efficiency of spending. Similarly, a middle income country which has aspirations to become a developed country is more likely to want to know whether its HR processes reflect modern practice than whether its civil service law is sound. For this reason, users should select those indicators that are most appropriate to their level of maturity in a particular thematic area. 21

Indicators need to be consistent with country traditions of public administration

Countries not only differ in their relative levels of development; they have different traditions of public administration which effectively constrain the choices they can make. The theory of path dependency states that history and traditions will affect what reforms are feasible in particular countries. It is no coincidence, for instance, that the developed countries that have been most receptive to New Public Management approaches to reform are those with an Anglo-Saxon administrative culture, such as New Zealand, the UK and the USA. In contrast, NPM has had far less impact in countries in Continental Europe which uphold the Rechtsstaat model of public administration, such as France and Germany, although both have experimented with some elements. Users will need to be open to alternative perspectives, whilst at the same time being willing to challenge those indicators which do not feel right.

Indicator sets should be regarded as flexible frameworks which will need to be reviewed and revised as reforms evolve

It is often claimed that, if an assessment is repeated in the same country at a later date, it will measure reform progress over time. This is a goal to which PEFA aspires. Because changes in the high level indicator set will occur relatively slowly, it is recommended that a PEFA assessment is conducted every two to three years. The assumption behind this claim is that governments' reform objectives and trajectory remain unchanged. It is doubtful, however, that this is a wholly valid assumption. Whilst the direction of reform is likely to remain constant, the pathway chosen may change over time. Case studies developed at the World Bank have demonstrated that it is rare in the second year of a reform programme that a government will be undertaking an identical programme to the one planned in the first year. Operational indicators may therefore change where learning demonstrates that a particular approach will not work. Priorities may also evolve in response to changed circumstances, and high level indicators may need to be adjusted accordingly. An assessment instrument must accordingly be regarded as a flexible tool, to be adjusted as the reform trajectory evolves.

20 The solution will depend upon whether a devolved recruitment and selection system is adopted.
21 One PEFA practitioner felt that a 'gold standard' version could be adopted for countries whose PFM systems are more advanced.

    1.11 Should indicators measure inputs, processes, outputs or outcomes?

This guide follows the approach adopted by the OECD in Government at a Glance by defining:

Inputs as units of labour, capital, goods and services sacrificed for the production of services (e.g. for health services, inputs can be defined as the time of the medical staff, the drugs, the electricity, etc.);

Public sector processes as structures, procedures and management arrangements with a broad application within the public sector (e.g. social and financial audit of local clinics);

Outputs as the services that are delivered to the end user by a public sector organization (e.g. number of babies delivered safely, number of pupils completing primary education); and

Outcomes as the impact of these services on the end user (e.g. reduced maternal mortality, improved income).

Public administration reforms contribute to the achievement of outputs and outcomes, which will typically be reflected in the higher level objectives of the overall reform programme. Reforms create changes in processes (structures, skills, staffing and incentives), which in turn lead to changes in outputs.
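To make the distinctions easier to apply in practice, here is a minimal, hypothetical tagging of the text's own examples by level; the enum and its labels are for illustration only and are not part of the OECD framework.

```python
# Illustrative classification of indicators by level, using the examples above.
from enum import Enum

class IndicatorLevel(Enum):
    INPUT = "input"        # resources consumed (staff time, drugs, electricity)
    PROCESS = "process"    # structures and procedures (audits of local clinics)
    OUTPUT = "output"      # services delivered (babies delivered safely)
    OUTCOME = "outcome"    # impact on the end user (reduced maternal mortality)

examples = {
    "time of the medical staff": IndicatorLevel.INPUT,
    "social and financial audit of local clinics": IndicatorLevel.PROCESS,
    "number of babies delivered safely": IndicatorLevel.OUTPUT,
    "reduced maternal mortality": IndicatorLevel.OUTCOME,
}

for indicator, level in examples.items():
    print(f"{level.value:>7}: {indicator}")
```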

The OECD's Government at a Glance notes that there is a dearth of output indicators in existing sources and tools. In this Guide, only three tools were found to measure outputs exclusively, namely the Quantitative Service Delivery Surveys, the Public Expenditure Tracking Survey and the Citizen Report Card. In addition, the EIPA's CAF measures both outputs and outcomes as one element in the assessment.

Most tools and sources assess processes. For instance, the civil service management component of the EU accession matrix tool measures: (a) the Legal and Ethical Framework; (b) the Institutional Framework (structures); (c) Pay Policy and Management (incentives); and (d) HRM policies and practices (staffing, skills).

Ultimately, whether process indicators or output indicators are most useful will depend upon the objectives of the user. Where the information is required for diagnostic or dialogue purposes, indicators which measure particular reform processes are likely to be most helpful because they will help the user to pinpoint exactly what is wrong and in need of improvement. To do this, the indicator must assess the processes in use as well as those defined in law or regulations. An example is provided in Box 4.

BOX 4: PARTICIPATION OF WOMEN IN THE CIVIL SERVICE: EXAMPLE OF INDICATORS

Public Administration Reform objective: To ensure the representation of women in senior management broadly reflects their composition in society

Output indicator: % of vacant management positions filled by women

Process indicators: New selection criteria which recognise women's contribution; flexible employment policies which allow women more freedom to choose their hours of work; equal training opportunities which permit women with child-rearing responsibilities to attend training close to home; external competition for jobs to widen the pool of potential female applicants
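As a worked illustration, the Box 4 output indicator can be computed directly from appointment records; the record format below is invented for the example.

```python
# Hypothetical records of filled management vacancies: (position, appointee gender).
appointments = [
    ("Director of Budget", "F"),
    ("Head of Procurement", "M"),
    ("District Education Officer", "F"),
    ("Chief Internal Auditor", "M"),
]

filled_by_women = sum(1 for _, gender in appointments if gender == "F")
share = filled_by_women / len(appointments)
print(f"Vacant management positions filled by women: {share:.0%}")  # 50%
```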


Process indicators are therefore the 'actionable governance indicators' that users have been demanding. In contrast, output indicators are far more useful for monitoring and accountability purposes because they tell the user what results the reforms are achieving without providing unnecessary detail on specific processes. That level of detail can be provided subsequently if it is evident that the intended results have not been achieved.

    1.12 Combining quantitative data with qualitative data

Generally, PAP assessment tools draw from five sources of information: written surveys (e.g. self-assessments), dialogue-based processes (e.g. interviews, focus groups), indirect observation (e.g. what the assessment team sees when it is in a government building), direct observation (e.g. shadowing a staff member over a day to see how business processes work), and quantitative data (e.g. on services provided, such as patients served at a health post).

It is often tempting to believe that tools and sources which generate metrics (numbers) are somehow providing more objective data. Unfortunately, such objectivity is largely illusory. Objectivity will depend on whose opinion is being quantified. For example, survey data, while presented in a quantitative fashion, is perception-based. Also, many qualitative assessments (such as PEFA) are converted into scores or ratings, usually based on clear scoring criteria. Objectivity will also depend upon the accuracy of the source when data is presented in the form of raw statistics. A good example is the World Bank's wage-employment database, where it is always problematic for country economists to obtain accurate and up-to-date employment data from government HR information systems. In addition, cross-country comparisons are risky because of the different national definitions of civil service employment used. It is essential therefore for the user to dig beneath the surface to determine how the numbers are generated.

Quantitative data is not necessarily more accurate than qualitative data

Most of the tools and sources rely heavily upon qualitative data, which is essential for measuring change processes in each of the thematic areas. For some tools, qualitative data is reported in a narrative format and, for others, it is converted to a score. Both qualitative and quantitative information are necessary for measuring PAP components, but neither is necessarily superior to the other. There is often less risk attached to qualitative data, whereas quantitative data can be spurious or inaccurate depending upon the source. Ideally, qualitative data should be substantiated by quantitative data. In practice, it is often difficult to generate the quantitative data required during the short time in which the typical assessment takes place, unless government information systems have already been designed to produce this information.

Perceptions of key stakeholders represent valid data for PAP assessments

One type of qualitative data, perceptions, is often treated dismissively, especially by professionals who are trained to search for numbers. Yet the opinions of customers (on services delivered) and the views of civil servants (on human resource management practices) provide valid data concerning the strengths and weaknesses of particular PA themes. Because the views of stakeholders affected by PA performance are extremely relevant, such 'soft' measures can be legitimately regarded as hard data.

1.13 What are the most appropriate techniques to collect information?

Methodology refers to the techniques and processes used for the collection of data, its subsequent analysis and the provision of reports for the user.

As summarised in Table 3, a wide range of methods are employed by the different information sources and assessment tools. The most common methods include surveys, questionnaires, interviews, desk research and discussion. Many assessment tools use a combination of methods. For instance, the institutional assessment of the civil service system employs an assessment questionnaire, desk research, interviews and expert panels. In contrast, the information sources rely exclusively on either desk research or a combination of surveys and desk research. A few assessment tools are designed as self-assessment instruments, and others incorporate a self-assessment option.


One advantage of using multiple methods is that they have the potential to generate a wider range of information from a number of different sources, and they permit the validation and triangulation of data. However, employing many methods can be costly and complex to administer, and in the end quality may be compromised. Also, a government should not attempt too much research if it lacks the capacity to process the data into meaningful information and provide feedback to those who volunteered the information. For instance, carrying out extensive surveys of civil servants and citizens may sound like a laudable endeavour, but it can backfire if the respondents do not see the results. It will simply confirm their prejudices that the government was not really interested in acting on the data in the first place.

Some methods can be used to complement each other, rather than to generate stand-alone data. For example, the Common Assessment Framework uses a self-assessment to generate preliminary data which can be used as a basis for discussion. Such an approach enriches the analysis of problems and helps to build a consensus on the top problems that need to be tackled. This is a 'light' method which can generate valid information from respondents who are familiar with the issues, without the need for extensive research to assemble quantitative evidence.

Whilst surveys and questionnaires have the potential to obtain the views of many respondents in a relatively short period of time, it should be borne in mind that these will inevitably be carried out on a sample basis. As far as possible, the sample should be selected at random. Selection biases can occur if the respondents are senior civil servants, who are less likely to be critical of government policies and processes. It will also be necessary to conduct statistical analyses (of the standard error) to be confident that the results of the sample reflect those which prevail in the total population.
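As a minimal sketch of such an analysis, assuming a simple random sample and a yes/no survey question (all figures are invented for the example):

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Standard error and ~95% confidence interval for a survey proportion,
    assuming a simple random sample from a large population."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se, (p - z * se, p + z * se)

# e.g. 312 of 400 sampled civil servants rate the appraisal process as fair
p, se, (low, high) = proportion_ci(312, 400)
print(f"estimate {p:.1%}, standard error {se:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```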

However, if the user does not have the time, the skills or the resources to design and administer a fully-fledged statistical survey, one option is to carry out a rapid review to produce a snapshot of the key issues. Such a review would involve semi-structured interviews with a few key informants which aim to cover the full scope of the assessment. The rapid review can serve as a 'funnelling' technique which enables critical issues to be investigated further using other methods, such as interviews or focus groups.

Self-assessments are worth considering as a contribution to a discussion, rather than a final verdict

Self-assessments can work well where there are explicit criteria to enable respondents to form an opinion. Even so, there may be a tendency for respondents to inflate the rating if they believe they will be held accountable for poor performance. Also, the majority of people tend not to be very self-critical. These problems can be countered to some extent by including different perspectives (for instance, civil servants who are affected by, but not directly responsible for, the thematic area) and by insisting that individual assessments are justified. Our view is that self-assessment data is potentially useful because those generating the data have a real stake in the outcome, with the caveat that it is clearly understood that the information will be used for improvement, and not accountability, purposes.

The final column in Table 3 shows who provides data for the assessment tools and information sources. The main respondents are: (a) independent experts and consultants; (b) agency staff; (c) government officials; and (d) external stakeholders.

Involving external and internal consultants in a team approach can enrich the quality of the assessment

Many of the assessment tools rely to some extent on the judgments of independent consultants, experts and academics. 22 There are two main reasons for this. Firstly, they are deemed to have expertise in the theme being assessed. Secondly, their opinions are perceived as more independent, and hence more objective, than those of the responsible government officials. Using outsiders may produce an accurate assessment, but unfortunately this may not result in action by government if external experts are not seen as credible and trusted sources. For this reason, a national or regional expert who has a good understanding of the local context is often preferable to an expert who is external to the region. For example, a PAP assessment in an EU candidate country to assess compliance with EU accession criteria might be effectively led by an expert from a new member state of the region, but less so by a national expert, who might not be favoured in view of an affiliation with previous governments. There is also a risk that national consultants might be subjected to undue pressure by government officials seeking a favourable assessment. A pragmatic solution is to embrace both foreign (ideally regional) and local consultants working as a team.

22 Agency staff are a special case of independent experts.

Government officials have a critical role to play in the assessment

Government officials tend to be used far less frequently as data sources than independent experts and donor agency staff. In fact, they are not used at all in any of the information sources. From one viewpoint, this is perhaps surprising because government officials are usually far more knowledgeable about their problems than a foreigner who jets in for a few weeks. Indeed, an external consultant will usually be obliged to obtain much of the information for his/her assessment by talking to local government officials! It seems that the agencies who have developed these tools either believe that government officials have insufficient expertise or that they do not have enough time to devote to a comprehensive assessment. Fortunately, this view is now changing, with new tools being designed for the participation of government officials.

Joint assessments carried out by external consultants and government officials have the greatest potential to generate balanced assessments 23

Joint assessments have two main advantages. Firstly, they are likely to generate high quality data and, secondly, they increase the likelihood that the results will be acceptable to government. However, whether a joint assessment can be successfully conducted will depend upon the skills available within government. In practice, it is often difficult to find enough civil servants who are capable and willing to dedicate themselves full-time to the task. And some managers are often reluctant to release the most qualified staff because they are in high demand for operational tasks. Another, less significant, issue is that some civil servants may be reluctant to be seen to be judging their peers or superiors. Where these practical problems can be overcome, joint assessments work best if they are government-led, with external consultants used in an advisory capacity. In this context, the Government of Zambia recently led a PEFA assessment, although the majority of such assessments have been consultant-led.

360 degree feedback is a potentially useful approach, but the cultural limitations need to be recognised

Is it possible to take the joint assessment approach a step further and adopt a 360 degree feedback instrument which takes account of the views of all stakeholders? Taking a particular PAP theme as the target for assessment, the 360 degree approach (Figure 3) would collect the views of ministers, experts (local or foreign), civil servants and citizens (directly or through civil society organizations). None of the assessment tools take this panoramic perspective, although the Asian Development Bank's Country Governance Assessment comes close. The choice of 360 degree feedback is especially appropriate for assessing the government policy making process, and it is also a relevant perspective for civil service management. The approach is less suited to the assessment of public financial management, where the internal processes under review may not be visible to either civil servants or citizens. 360 degree feedback is therefore a useful perspective depending on the knowledge and experience of the respective stakeholders.

However, hierarchic and autocratic governments which have limited tolerance for criticism may be reluctant to accept the views of citizens and civil servants as valid or legitimate. And in many countries, civil servants and citizens may be reluctant to voice their true opinions where they fear that the questionnaire or survey is not truly confidential. This can even occur where the necessary safeguards have been put in place to assure the anonymity of opinions expressed. Such governments may also reject the views of experts. In such circumstances, where governments are generally reluctant to listen to outsiders because they fear criticism, a self-assessment represents a practical entry point to an assessment. The assessment could constitute the basis for a discussion facilitated by a trusted and credible outsider.

23 Government officials may include those with direct responsibility for the thematic area and other civil servants with good diagnostic skills.


Figure 3: 360 degree feedback. [The figure depicts ministers, experts, civil society and civil servants all feeding their views into the assessment of the policy making process.]

It doesn't really matter who produces the report, so long as the assessment process agreed by stakeholders at the outset is followed

Most of the assessment reports and information sources are produced by the international agencies who own the tools. This leaves open the possibility that a recipient government will reject the findings because they are seen to represent the agency's view. This is more likely to happen if there is evidence that the agency has manipulated some of the findings after the completion of the assessment. However, provided the conclusions in the report are clearly based on evidence which was discussed and agreed with the host government, then it should not matter who formally authors the final report. In fact, those government officials who participated in the assessment may actually prefer the report to be produced by an international agency. The agency is in a safer position to present unpalatable truths to government, which officials will sometimes be reluctant to communicate themselves.

1.14 Using composite indices for cross-country comparisons: Uses and pitfalls

Information sources and many assessment tools use ratings both for benchmarking against a standard over time and for cross-country comparisons. This does not pose a problem where the rating simply converts an assessment on a particular indicator to a score. But where the ratings on indicators are aggregated to give a composite score, then the practice is highly questionable because it obscures what is being measured. If the rating is to be linked to some form of recognition, such as the UK's Self-Assessment Tool for Customer Service Excellence, then scoring is both necessary and justifiable. However, where the assessment is to be used for the purpose of diagnosis and dialogue, then scoring is less relevant.

Ratings are superficially attractive, but there are inherent risks

Scoring can help the user to compare performance across a number of different dimensions. But it is critical to understand the basis of the scoring. For instance, PEFA provides detailed guidance to help assessors differentiate between an A, B, C or D for each indicator, thereby requiring the ratings to be evidence-based. The process is therefore transparent and allows government to challenge an assessment based upon an examination of the evidence. In contrast, whilst the UK Civil Service's Capability Reviews use a common set of behavioural criteria for the departmental reviews, the methodology gives little concrete guidance for differentiating between the five assessment categories: strong; well placed; development area; urgent development area; serious concerns. In its defence, the Capabilities Review Team in the UK Civil Service moderates the exercise for the civil service as a whole to ensure that the judgements of the departmental review teams are broadly consistent. This central oversight function is of course not an option for a user that wishes to carry out such a review for a single department.



A major disadvantage of the use of ratings is that it can divert the energies of the parties involved in the assessment. Too much time is spent negotiating and deciding upon the indicator score, and consequently not enough emphasis is placed on the nature of the weakness and the opportunities for improvement. Where ratings are provided, there is a strong case for supporting these with a narrative assessment, as is provided in the PEFA assessment tool.
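A toy numerical illustration of the point: the component names and the A=4 to D=1 conversion below are invented for the example (they are not PEFA's), but they show how an average can hide a critical weakness.

```python
# Invented component ratings, converted to numbers (A=4, B=3, C=2, D=1).
scores = {"budget credibility": 4, "transparency": 4,
          "payroll controls": 1, "external audit": 4}

composite = sum(scores.values()) / len(scores)
print(f"composite score: {composite:.2f}")          # 3.25 looks respectable...

weakest = min(scores, key=scores.get)
print(f"weakest dimension: {weakest} = {scores[weakest]}")  # ...but hides a D
```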

1.15 How can ownership be fostered?

Users should participate in the design of the assessment process

In the foregoing three sections, guidance has been provided to enable users to: (a) select tools which meet their objectives; (b) choose relevant indicators; and (c) design a methodology for collecting data on these indicators. Ownership of the assessment process is therefore achieved when the user is able to contribute to these technical decisions. People usually commit to what they help to create. An international agency may believe it has the right answer on these questions for its government client. But if this view is imposed, or perceived to be imposed, then compliance rather than ownership will be the inevitable result.

The quantitative service delivery surveys, the public expenditure tracking survey and citizen report cards insist upon user participation because they must be customised to the specific public service that is being assessed. And whilst those tools which measure the core PAP themes appear to have fixed indicator sets, the user is encouraged to propose including additional indicators and removing existing ones.

Participation is essential because tools will inevitably need to be customised

A government may wish to use an assessment tool as part of a diagnostic exercise which feeds into a PAR strategy development process. The measures and methods employed in the tool should be designed to fit the specific objectives of the assessment, not the reverse. There is an even stronger case for government to be involved in the design of an assessment tool which is intended for monitoring an ongoing reform programme. Here it is essential that the indicators are aligned with the government's reform strategy. The measures should be derived directly from the specific programmes and targets previously agreed upon in the strategy.

Ideally, users should also contribute to the actual assessment

User engagement in the design process is the best insurance for generating results that are accepted and acted upon. However, the findings may still be rejected where the user decided to contract out the entire assessment to a third party. This is the classic 'turnkey' operation. This problem is best avoided by involving government officials in the collection and analysis of data, or better still in contributing to the actual assessment. People rarely argue with their own data. Valid and actionable assessments cannot be constructed on the basis of the opinions of experts alone.

In the long term, governments must build their capability to conduct assessments on a regular basis

Most tools are designed as one-off instruments to be applied at discrete intervals, resulting in ad hoc data collection exercises. Whilst it will always be necessary to conduct special surveys, governments must also strive to build their own information systems which can routinely generate key data on PAP. Surveys can be prohibitively expensive to administer, especially those of citizens and civil servants. However, surveys of citizens do not need to be undertaken too frequently, and the costs of surveying civil servants can be minimised by designing questionnaires which can be completed on-line.

It will take time for governments to build their capabilities to conduct PAP assessments without external assistance. But they can learn quickly by insisting that government officials participate in joint assessment teams working alongside more experienced consultants.


Table 3: Assessment tools and information sources (columns: Ref; Tool & producer; Themes; Users; Objectives 24; Methods; Respondents 25)

Assessment tools

1. Quantitative Service Delivery Surveys, by World Bank
   Themes: Service delivery; also useful to inform public expenditure management reforms
   Users: Government; service providers; donors
   Objectives: Diagnosis; monitoring
   Methods: Provider records; interviews; surveys
   Respondents: World Bank staff; provider staff; clients

2. Public Expenditure Tracking Survey, by World Bank
   Themes: Public financial management; to examine efficiency of expenditure and diversion of resources
   Users: Donors; government
   Objectives: Monitoring; diagnosis
   Methods: Interviews; provider records; surveys
   Respondents: World Bank staff; provider staff; clients

3. Citizen report cards, by Public Affairs Centre
   Themes: Service delivery; for beneficiaries to engage in dialogue with service providers
   Users: Civil society; government; service providers; service beneficiaries
   Objectives: Dialogue and joint decision making; benchmarking over time; monitoring; diagnosis
   Methods: Interviews; focus groups
   Respondents: Civil society; clients

4. Country Governance Assessment, by Asian Development Bank (ADB)
   Themes: Policy-making; CSR; PFM; also assesses local governance
   Users: Donors; government
   Objectives: Diagnosis; dialogue and joint decision making; resource allocation
   Methods: Desk research; field visits; interviews; workshops
   Respondents: Donor staff; consultants; government; local stakeholders

5. Common Assessment Framework, by European Institute of Public Administration (EIPA)
   Themes: Leadership; CSR; PFM; policy-making; service delivery; for use in all parts of the public sector, at national, regional and local levels
   Users: Public organization; government