
    Proceedings of the 2006 Crystal Ball User Conference

USING THE VALUE MEASURING METHODOLOGY TO EVALUATE GOVERNMENT INITIATIVES

Kevin Foley
Booz Allen Hamilton
8283 Greensboro Drive
McLean, VA 22102 USA

    ABSTRACT

Booz Allen Hamilton, in conjunction with a team from Harvard University's Kennedy School of Government, developed a method to properly value various government initiatives. Historically, the quantitative value of an alternative was used as a major factor in the decision to implement a given program. However, this approach overlooks the qualitative aspects of a program that may be in the Government's best interest. The Value Measuring Methodology (VMM) uses both quantitative value and qualitative values, such as value to society, the end user, and the government, as inputs to derive new measures for comparing alternative projects. Given the level of risk in many government projects, VMM relies on Crystal Ball to describe uncertainty in a style that is acceptable to major oversight organizations.

    1 INTRODUCTION

People and organizations make decisions every day: important decisions, trivial decisions, cost-free decisions, and expensive decisions. Some decisions are made on a repeated basis, such as yearly budgeting, and some are singular events, such as implementing a new IT system. The larger the organization and the more important the decision, the more complex the decision-making process becomes. For organizations like the U.S. Federal Government, this decision-making process involves numerous stakeholders, both internal and external to the government. With so many stakeholders, reaching consensus on the ideal solution often becomes arbitrary and time consuming.

While there is an entire spectrum of possible decisions to be made, many decisions focus on choosing between competing alternatives. A problem exists, and there are multiple solutions, but which solution is the best? And who determines what is best in any given situation? The Value Measuring Methodology (VMM) was developed to enable disparate stakeholders to reach consensus by providing a decision framework that allows the different alternatives to be compared against each other on a single, consistently weighted scale.

    2 HISTORY

In July 2001, the Social Security Administration's (SSA) Office of Electronic Services, in cooperation with the General Services Administration (GSA), initiated the task of developing an effective methodology to assess the value of electronic services (e-Services) that would be: compliant with current federal regulations and OMB guidance, applicable across the federal government, and doable.

To aid them in this effort, Booz Allen analysts and academics affiliated with Harvard University's Kennedy School of Government worked to develop VMM. Both SSA and GSA had experienced the limitations of traditional analyses that focused on direct benefits quantified in financial terms. Both had used financial metrics to justify investment, relegating critical non-financial (a.k.a. qualitative or intangible) benefits to narrative descriptions. This treatment of primarily non-governmental benefits provided managers and decision makers with little insight into the results the project should deliver. In addition, it provided little analytical substance or rigor to the consideration of benefits that often were more influential in decision-making than government financial benefits.

The original focus of this effort targeted the development of a means to quantify "value", specifically the non-financial benefits associated with electronic government services. However, this focus was broadened during the development of the methodology, when it became apparent that an analysis of value alone had little worth when considered in isolation. Additionally, it became clear that this methodology could provide important insight into the business value of other investments, such as those associated with the re-alignment of an organization, the overall impact of policies or regulations, and the acquisition of information technology (IT) systems.


3. Government Foundational/Operational Value: ... value associated with investments in building up the infrastructure, whether in staff skills or technology, required to accommodate a changing environment.

4. Government Financial Value: Financial benefit (e.g., cost savings, cost avoidance) realized by the government. Captures the information necessary to calculate financial return-on-investment ratios.

5. Strategic/Political Value: Ability of an initiative to move an organization closer to achieving its strategic goals, the priorities established by the Executive Office of the President, and congressional mandates. Captures and quantifies the value associated with advancing organizational and government-wide initiatives and mandates.

    4.1.2 Prioritize the Value Factors

Although the realization of value in each of the Factors must be considered, delivering value in one Factor may be more important than delivering value in another. In other words, the Factors are not necessarily equal in importance and therefore should not carry equal weight in the decision making process.

In order to reflect the relative importance of the five Value Factors, they must be ranked or weighted. This should be done at the highest appropriate level of agency management. The prioritization process is most effective when completed in a facilitated session with an automated and analytically sound tool.

    For NASA, this prioritization process was done using an Analytic Hierarchy Process, or AHP. This proc-ess compares each Value Factor against the others, in order to establish a ranking. This is a crucial step for any

    VMM analysis, as it allows the stakeholders to share their understanding of the situation. Additionally, this proc-ess creates buy-in from the stakeholders, allowing the final decision to have greater strength and impact. Theresults of this process are shown in Table 1:

    Table 1: NASA Value Factor Ranking

Value Factor                            Weight
Social                                   28.7%
Direct User                              26.5%
Government Foundational/Operational      24.4%
Government Financial                     11.6%
Strategic/Political                       8.8%
Total                                   100.0%

Table 1 represents a significant amount of input from the stakeholders. Interpreting this table shows that the stakeholders are concerned most with the impact of their choice of alternatives on society as a whole. NASA and the other stakeholders wanted to ensure that their decision positively impacted the largest number of individuals possible. The second most important factor is the direct users of the geospatial data. If the direct users of the data are not addressed, and their concerns not appreciated by management, the stakeholders felt that there would be negative impacts to the long-term success of the project. Finally, the stakeholders were aware that their decision on standards would have a lasting impact on government infrastructure development within the geospatial community, and this is represented by the Government Foundational/Operational category.

Note that the remaining two categories, financial impact and strategic impact, while ranked lower than the others, were still considered important. If one of the alternatives were significantly more expensive to implement, its score in the Government Financial category would be lowered.
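Mechanically, the AHP step reduces each round of pairwise judgments to a normalized weight vector. The Python sketch below is a minimal illustration, not NASA's actual tooling or judgments: the comparison matrix is hypothetical, and the weights are approximated with the common row geometric-mean method.

    import numpy as np

    # Hypothetical pairwise comparison matrix for the five Value Factors,
    # ordered: Social, Direct User, Foundational/Operational, Financial,
    # Strategic/Political. Entry [i, j] > 1 means factor i was judged more
    # important than factor j (Saaty's 1-9 scale); reciprocal by construction.
    A = np.array([
        [1.0, 2.0, 2.0, 3.0, 4.0],
        [1/2, 1.0, 2.0, 3.0, 4.0],
        [1/2, 1/2, 1.0, 3.0, 4.0],
        [1/3, 1/3, 1/3, 1.0, 2.0],
        [1/4, 1/4, 1/4, 1/2, 1.0],
    ])

    # Row geometric means, normalized to sum to 1, approximate the principal
    # eigenvector of the matrix and hence the factor weights.
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])
    weights = gm / gm.sum()

    factors = ["Social", "Direct User", "Foundational/Operational",
               "Financial", "Strategic/Political"]
    for name, w in zip(factors, weights):
        print(f"{name:26s} {100 * w:5.1f}")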

4.1.3 Identify and Define the Value Measures

The second layer of the Value Structure comprises the individual value measures defined in each of the Value Factors. The selection of data sources used to define value measures is dependent upon the Value Factor under consideration. For example, to identify value measures in the Direct User Value Factor, users should be consulted. The identification of measures in the Strategic/Political Value Factor should be based upon organizational and government-wide goals and objectives.

The definition of each value measure is broken down into four parts: name, description, metrics, and target information. The metrics must be quantifiable, using the most appropriate unit of measure. For example, users may value reliable customer service. Based on user input, reliability may be defined as service that is


timely, available, and effective. Appropriate metrics for this value measure should include the cycle time between the notification of a problem and the resolution of that problem. The target associated with the metric represents the desired level of performance. For example, if it were estimated that a particular alternative would provide customers with a two-hour cycle time between the report of a problem and the resolution of that problem, the alternative would score a 40. A second alternative that could provide a one-hour cycle time in the same area would score a 100.
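VMM leaves the exact metric-to-score mapping to the stakeholders who define each measure. The sketch below assumes a simple linear interpolation between a worst-acceptable level (score 0) and the target (score 100); the anchor values are assumptions chosen so that the function reproduces the cycle-time scores from the example above.

    def measure_score(value, target, worst, higher_is_better=False):
        """Map a raw metric value to the 0-100 scoring scale by linear
        interpolation between a worst-acceptable level (0) and the target
        (100). The anchors and the linear form are illustrative assumptions."""
        if higher_is_better:
            frac = (value - worst) / (target - worst)
        else:
            frac = (worst - value) / (worst - target)
        return 100.0 * min(max(frac, 0.0), 1.0)

    # Cycle time from problem report to resolution, in hours (lower is
    # better). A 1-hour target and a ~2.67-hour worst-acceptable level
    # reproduce the scores from the text: 2 hours -> 40, 1 hour -> 100.
    print(measure_score(2.0, target=1.0, worst=8 / 3))  # ~40.0
    print(measure_score(1.0, target=1.0, worst=8 / 3))  # 100.0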

For NASA, Table 2 lists the value measures used to quantify the differences between the alternatives for the Direct User benefits. The value measures are Ease of Use, Broad Data-Sharing Capabilities, and Data Availability and Accessibility. Each of these measures has corresponding metrics tied to it. The metrics are scored on a scale of 0% to 100% for each alternative.

Table 2: NASA Direct User Value Measures and Metrics

Measure: Ease of Use
Description: Geospatial resources and systems are easy to translate, transform, and ingest.
Metrics:
- Expertise required to support data transmission
- Field mappings: number of semantic changes required to pass data
- Complexity of data: number of changes in field length/value/types/etc.

Measure: Broad Data-Sharing Capabilities
Description: Capabilities exist for broad GI data-sharing between communities of interest.
Metrics:
- Level of effort required to support data transmission
- Number of inquiries for metadata

Measure: Data Availability and Accessibility
Description: Geospatial data and applications are readily available and accessible for communities of interest.
Metrics:
- Is the data available in real time?
- Number of hits (per day/hour/etc.)
- Number of downloads (per day/hour)
- Are data available via inter- or intranet?

    4.1.4 Prioritize the Value Measures

As with the Value Factors, all of the value measures identified within the Value Factors are important to consider; however, their relative importance may not be equivalent. For example, users may feel that the statistical accuracy of research data is relatively more important than the timeliness of information. This assessment may be made based on the rationale that timely information that is not statistically accurate is of little use. The prioritization of the value measures should be determined during facilitated working sessions with representatives of user communities, program managers, business line staff, and, when appropriate, partnering agencies.

For NASA, this second ranking was performed after the Value Factors ranking had been determined. This was the appropriate time, because discussion of the metrics clarified the meaning of several of the Value Factors. When participants were aware of the measures and metrics that would ultimately define the Value Factor, their responses to the AHP session were better informed.

    4.1.5 Identify and Define the Risk Structure

The Risk Structure provides a starting point for identifying and documenting an inventory of risks. It enables program managers to build a sense of confidence that all factors that may jeopardize an initiative's success are


considered when defining and analyzing each alternative solution. The risk structure is comprised of the following:

1. Risk inventory - A risk inventory identifies and documents the risks associated with an initiative. The identification of risk should occur during working sessions with technical and policy staff or representatives of partner agencies. Issues and challenges raised during preliminary planning sessions should be documented and defined. They will be used during the definition of alternatives to ensure that risk is mitigated to the greatest degree possible. Consideration of risk from the point of view of each of the five Value Factors will help to ensure that all risks have been considered and appropriately addressed.

2. Risk probability and impact scale - A risk probability and impact scale will be used to translate narrative assessments of the probability and impact of risk (e.g., high, medium, and low) to percentages.

For NASA, the risks were defined using a standard set of 19 risk categories used by Booz Allen. These 19 categories cover technical, schedule, and feasibility risks related to IT projects. These risks were reviewed by the NASA team and were defined for each of the case studies in question.

    4.1.6 Identify and Define the Cost Element Structure

A Cost Element Structure (CES) deconstructs cost into its individual components, helping to ensure a complete, comprehensive cost estimate and to alleviate the risk of missing costs or double counting. It is important to tailor the CES to the requirements of a particular initiative. Steps to develop a tailored CES include the following:

1. Review existing CESs developed for analogous initiatives
2. Review / incorporate business best practices and government lessons learned
3. Incorporate elements as necessary to address costs associated with achieving value and mitigating risk

The CES created for the NASA case study was composed of three major cost categories: Planning, Implementation, and Operations and Maintenance. The same CES was used for both case studies, to ensure fair treatment of both alternatives.

    4.1.7 Begin Documentation

At the end of each of the VMM Steps is a task associated with documentation. Step 1 documentation should include, at a minimum:

1. The decision framework (Value, Risk, and Cost Structures)
2. Global assumptions (e.g., economic factors such as the discount and inflation rates)
3. Project-specific drivers and assumptions derived from tailoring the Value Structure
4. Documentation of research and prioritization sessions

    4.2 Define Alternatives

An alternatives analysis requires thinking through possible ways in which an issue may be addressed, yielding the data required not only to justify an investment or course of action and support the completion of budget justification documentation, but also to provide an expected baseline of value, costs, and risks to guide the management and on-going evaluation of an investment.

An alternatives analysis must consistently assess the value, cost, and risk associated with more than one alternative for a specific initiative, including the base case, within the specific parameters of the decision framework, without stalling the planning process in the quagmire of analysis paralysis. The estimation of cost and projection of value must, to the greatest extent possible, predict actual costs and value. This is accomplished in part by using ranges to define specific elements of cost and measures of performance, and then subjecting those ranges to an uncertainty analysis to develop a range of expected value. A sensitivity analysis identifies the variables that have a significant impact on this expected value. Although these analyses will increase confidence in the accuracy of an estimate of cost and prediction of performance, they do not consider how other factors may drive up expected costs or degrade predicted performance: to do so requires the performance of a risk analysis.
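In Crystal Ball these steps amount to assigning distributions to input cells and running trials. The Python sketch below mimics the same workflow with hypothetical cost elements: each range becomes a triangular distribution, the trials yield a distribution of total cost (the uncertainty analysis), and the rank correlation of each input with the total gives a rough sensitivity ranking.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000  # Monte Carlo trials

    # Hypothetical cost elements, each a (low, most likely, high) range
    # rather than a point estimate, as VMM prescribes.
    elements = {
        "Planning":       (100_000, 120_000, 160_000),
        "Implementation": (400_000, 450_000, 600_000),
        "O&M":            (150_000, 170_000, 220_000),
    }

    # Uncertainty analysis: sample each element from a triangular
    # distribution over its range; the sum is the total-cost distribution.
    samples = {name: rng.triangular(lo, mode, hi, N)
               for name, (lo, mode, hi) in elements.items()}
    total = sum(samples.values())
    print(f"expected total cost: ${total.mean():,.0f}")
    print(f"90% interval: ${np.percentile(total, 5):,.0f} "
          f"to ${np.percentile(total, 95):,.0f}")

    # Sensitivity analysis: rank each input by its rank (Spearman)
    # correlation with total cost.
    def spearman(x, y):
        return np.corrcoef(x.argsort().argsort(), y.argsort().argsort())[0, 1]

    for name, s in sorted(samples.items(),
                          key=lambda kv: -abs(spearman(kv[1], total))):
        print(f"{name:15s} rho = {spearman(s, total):+.2f}")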


For the NASA case study, the alternatives were defined in the problem definition, and each case study represented one alternative. While it is not always possible to find meaningful case study examples, their use here shows the flexibility of the methodology. If no case studies had been available, suitable alternatives with costs, risks, and benefits would have been developed to illustrate the differences between the two approaches.

    4.2.1 Identify and Define Alternatives

The starting point for developing alternatives should be the information in the Value and Risk Structures and preliminary value drivers identified in the initial basis of estimate (see Step 1). Using this information will help to ensure that the alternatives and, ultimately, the chosen solution accurately reflect a balance of performance, priorities, and business imperatives. Successfully identifying and defining alternatives requires cross-functional collaboration and discussion among the managing agency, partner agencies, business line staff, technologists and engineers, and policy staff.

    4.2.2 Estimate Value and Cost

Costs are estimated in units of dollars and are inflated or discounted as necessary. They do not require normalization; however, they do need to be adjusted to reflect the time value of money. The CES and Value Structure are used as frameworks for estimating cost and value, respectively. The first step in developing these estimates is to collect information. The data sources used will vary based on what is being estimated and the phase of the project. For example, if the project being analyzed is in the early stages of development, data sources for both value and cost will typically include benchmarks and data from analogous projects.

Rather than attempting to assign point estimates for value and cost, VMM requires that ranges be used to ensure 90% confidence in the estimate. For example, an analyst may be unsure of the cost associated with the development of a change management plan. Rather than developing a back-of-the-envelope estimate, the analyst using VMM would look for relevant benchmarks. Based on that information, the analyst sets a range of cost that he or she is 90% certain will capture the actual cost of the development of the plan.
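One way to turn such a 90% range into a distribution for simulation is to treat the endpoints as the 5th and 95th percentiles of an assumed shape. The sketch below assumes a normal distribution purely for illustration (Crystal Ball offers many alternative shapes, such as lognormal and triangular), and the dollar figures are hypothetical.

    from statistics import NormalDist

    def normal_from_90pct_range(low, high):
        """Mean and standard deviation of a normal distribution whose 5th
        and 95th percentiles match the analyst's 90%-confidence range."""
        z95 = NormalDist().inv_cdf(0.95)  # about 1.645
        return (low + high) / 2.0, (high - low) / (2.0 * z95)

    # Hypothetical: 90% certain the change management plan will cost
    # between $40,000 and $70,000.
    mu, sigma = normal_from_90pct_range(40_000, 70_000)
    print(f"mean ${mu:,.0f}, std dev ${sigma:,.0f}")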

Despite our best attempts, initial cost estimates and value projections are very rarely accurate within an acceptable percentage of error. Additional analyses, specifically uncertainty analyses, help analysts increase their confidence that they have identified the most likely cost and value of each alternative. Uncertainty analyses are most easily and accurately performed using Crystal Ball. Crystal Ball allows VMM to accurately define the uncertainty around each individual cost in the cost element structure. This added flexibility allows for a more well-defined analysis of the costs.

All assumptions, limitations, and data sources used in the estimation of value and cost for each alternative must be documented in a basis of estimate.

    4.2.3 Conduct a Risk Analysis

Even the most comprehensive risk mitigation efforts will leave behind some level of residual risk that will cause value to slip and/or cost to increase. VMM requires that the probability and impact of each risk factor on both cost and value be considered. This provides analysts with the information necessary to determine the interaction between impact and probability and to predict how that interaction will change the value and cost of the investment under consideration.

    The risk analysis is performed as follows:

    1. The probability and impact of each risk are estimated at the lowest practical level of cost element and value measure. This initial assessment is performed using a narrative scale (e.g., low, medium, high).

    2. The narrative assessments are translated into percentages using the risk scale established in Step 1.

Since risk increases cost and lowers value, Crystal Ball can be used to put upper and lower bounds on this impact. This enables the analyst to look at the specific cost-risk or value-risk interaction and decide what the range of possible outcomes is, which provides a more realistic understanding of these interactions.
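As an illustration of that mechanism, the sketch below applies a hypothetical risk inventory to a simulated cost element: each risk fires with the probability taken from the narrative scale and, when it does, escalates that trial's cost by up to its impact percentage. The scale values and risk entries are assumptions for illustration, not Booz Allen's standard categories.

    import numpy as np

    # Narrative-to-percentage scales from Step 1 (values are assumptions).
    PROBABILITY = {"low": 0.10, "medium": 0.40, "high": 0.70}
    IMPACT = {"low": 0.05, "medium": 0.15, "high": 0.30}

    # Hypothetical risk inventory: (risk, probability, cost impact).
    risks = [("schedule slip", "medium", "high"),
             ("integration complexity", "high", "medium")]

    rng = np.random.default_rng(1)
    N = 10_000
    base_cost = rng.triangular(400_000, 450_000, 600_000, N)  # pre-risk cost

    # Each risk fires with its probability; when it fires, it escalates the
    # trial's cost by a uniformly sampled fraction up to its impact bound.
    adjusted = base_cost.copy()
    for _, prob, impact in risks:
        fires = rng.random(N) < PROBABILITY[prob]
        adjusted *= 1.0 + fires * rng.uniform(0.0, IMPACT[impact], N)

    print(f"pre-risk mean cost:  ${base_cost.mean():,.0f}")
    print(f"post-risk mean cost: ${adjusted.mean():,.0f}")
    print(f"cost risk score: {adjusted.mean() / base_cost.mean() - 1.0:.1%}")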


    4.2.4 On-going Documentation

For each alternative, the initial documentation of the general assumptions and risks is expanded to include a description of the alternative being analyzed; a complete, comprehensive list of both cost and value assumptions; and assumptions regarding the risks associated with a specific alternative, including expansion of the initial risk inventory.

    4.3 Analyze Alternatives

    4.3.1 Aggregate the Cost Estimate

The total cost estimate is calculated by aggregating the expected value of each cost element. For the NASA project, Table 3 lists the estimated pre-risk costs for both case studies:

Table 3: Pre-Risk Adjusted Costs

Case 1                              2004        2005        2006        2007        2008        2009       Total
1.0 Planning                    $119,708    $125,694     $42,272     $44,386     $46,605     $48,935    $427,601
2.0 Implementation            $1,780,036  $1,869,038    $410,396    $430,915    $452,461    $475,084  $5,417,930
3.0 Maintenance & Operation     $374,368    $393,086    $163,219    $171,380    $179,949    $188,947  $1,470,949
Total                         $2,274,112  $2,387,818    $615,887    $646,681    $679,016    $712,966  $7,316,480

Case 2                              2004        2005        2006        2007        2008        2009       Total
1.0 Planning                     $46,075     $48,379     $50,798     $53,338     $56,004     $58,805    $313,398
2.0 Implementation               $68,400     $71,820     $75,411     $79,182     $83,141     $87,298    $465,251
3.0 Maintenance & Operation     $965,808  $1,014,098  $1,064,803  $1,118,043  $1,173,946  $1,232,643  $6,569,342
Total                         $1,080,283  $1,134,297  $1,191,012  $1,250,563  $1,313,091  $1,378,745  $7,347,991

It is important to note that costs are estimated for future years. The NASA study required this, as any investment undertaken by NASA will have lasting impacts. Also note that these numbers are disguised to hide the identity of the case study participants.

    4.3.2 Calculate the Return-on-Investment

There are numerous return-on-investment (ROI) or financial metrics that can be calculated to express the relationship between the funds invested in an initiative and the financial benefits the initiative will generate for the government. An ROI metric often used by the Federal Government is the Savings to Investment Ratio (SIR); however, the definition of ROI may change based on the analysis in question.

For NASA, the SIR was used. In this situation, savings is defined as the difference between the two case studies' maintenance costs, and the investment is defined as the difference in the two case studies' planning and implementation costs. Case study 2 was used as the baseline, as it has a significantly higher long-term maintenance cost. With this definition, Case 1 had an SIR of 119%.
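The SIR itself is a simple ratio of those two differences, encoded directly in the sketch below with Table 3's pre-risk totals. Note that on these undiscounted, pre-risk figures the ratio comes out near 101%, so the study's reported 119% evidently reflects adjustments (such as discounting or risk) that are not broken out in Table 3.

    def savings_to_investment_ratio(baseline, alternative):
        """SIR as defined for the NASA study: savings are the reduction in
        maintenance & operations cost versus the baseline; investment is
        the extra planning plus implementation spending required."""
        savings = baseline["O&M"] - alternative["O&M"]
        investment = ((alternative["Planning"] + alternative["Implementation"])
                      - (baseline["Planning"] + baseline["Implementation"]))
        return savings / investment

    # Pre-risk life-cycle totals from Table 3 (Case 2 is the baseline).
    case1 = {"Planning": 427_601, "Implementation": 5_417_930, "O&M": 1_470_949}
    case2 = {"Planning": 313_398, "Implementation": 465_251, "O&M": 6_569_342}

    print(f"SIR = {savings_to_investment_ratio(case2, case1):.0%}")  # ~101%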

    4.3.3 Calculate the Value Score

The Value Score is calculated by aggregating the estimated performance of an alternative or base case against the value measures, according to the assigned Value Factor and value measure weights. Table 4 lists the pre- and post-risk-adjusted value scores for the NASA case study.


Table 4: Pre- and Post-Risk Adjusted Value Scores

                                                           Case Study 1           Case Study 2
Value Factors & Benefits                        Weight   Weighted  Risk-Adj.   Weighted  Risk-Adj.
                                                          Score     Score       Score     Score
1   Direct User                                 26.50%     22.7      19.6        11.8      10.3
1.1 Ease of Use                                  9.81%      8.1       7.0         5.1       4.5
1.2 Broad Data Sharing Capabilities              6.63%      5.1       4.4         2.6       2.3
1.3 Data Availability and Accessibility         10.07%      9.6       8.2         4.0       3.5
2   Social Value                                28.70%     26.9      23.1        17.6      15.3
2.1 Institutional Effectiveness                  5.74%      5.1       4.4         4.0       3.5
2.2 Efficient use of taxpayer resources          3.73%      3.7       3.2         3.4       3.0
2.3 Minimal barriers exist to finding and
    obtaining data                               5.74%      5.5       4.7         3.2       2.7
2.4 Citizens are able to make better decisions   7.75%      7.5       6.4         3.6       3.1
2.5 Extra-Governmental Coordination              5.74%      5.2       4.4         3.3       2.9
3   Government Foundational/Operational         24.40%     19.6      16.8        14.9      13.0
3.1 Intra-governmental collaboration             4.15%      3.1       2.7         3.1       2.7
3.2 Mainstreaming of GIS technology              2.68%      2.7       2.3         2.2       1.9
3.3 Interagency Collaboration                    3.42%      2.8       2.4         2.4       2.1
3.4 Reuse, Adaptation, and Consolidation         3.42%      3.4       2.9         3.0       2.6
3.5 Public Participation and Accountability      3.66%      2.4       2.0         1.6       1.4
3.6 Ease of Integration                          5.61%      4.0       3.5         2.2       2.0
3.7 IT performance                               1.46%      1.1       0.9         0.3       0.3
4   Strategic/Political Value                    8.80%      7.9       6.8         4.3       3.7
4.1 Supports improved decision making            2.64%      2.4       2.0         1.3       1.1
4.2 Supports NSDI                                2.46%      2.1       1.8         1.7       1.5
4.3 Close Working Relationships                  2.64%      2.3       2.0         0.7       0.6
4.4 e-Gov Support                                1.06%      1.1       0.9         0.5       0.5
5   Government Financial                        11.60%      7.0       6.0         4.9       4.3
5.1 Total Cost Savings                           7.19%      4.3       3.7         3.6       3.1
5.2 Total Cost Avoided                           4.41%      2.6       2.3         1.3       1.2
Total                                                      84.0      72.3        53.5      46.5
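The aggregation behind these scores is a weighted sum: each measure's 0-100 performance score is multiplied by its global weight (Value Factor weight times within-factor measure weight) and the products are summed, first within a Value Factor and then across factors. A minimal sketch, using the three Direct User rows with scores back-calculated from Table 4's Case 1 column:

    # Value Score aggregation: each measure's 0-100 performance score times
    # its global weight, summed. Scores below are back-calculated from the
    # Case 1 column of Table 4 for illustration.
    measures = [
        # (measure, global weight, Case 1 score on the 0-100 scale)
        ("Ease of Use",                         0.0981, 82.6),
        ("Broad Data Sharing Capabilities",     0.0663, 76.9),
        ("Data Availability and Accessibility", 0.1007, 95.3),
    ]

    direct_user = sum(w * s for _, w, s in measures)
    print(f"Direct User weighted score: {direct_user:.1f}")
    # ~22.8; Table 4 shows 22.7 for Direct User after per-row rounding.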

    4.3.4 Calculate the Cost and Value Risk Scores

Risk scores represent the expected percentage of overall performance slippage or cost escalation that could occur due to risk. They provide decision makers with the information required to determine the degree to which value and cost will be negatively impacted by risk.

For each case study in the NASA example, detailed cost estimates were created using the CES. These costs were then risk-adjusted using the risks defined earlier.

    4.3.5 Compare Value, Risk and Cost

By performing some straightforward mathematical operations, it is possible to model and visualize the effect risk will have on estimated value and cost. It is also possible to determine the government's financial ROI and, if comparing alternatives, to divide the value of an initiative by the level of investment to determine the value per dollar invested.

Table 5 shows a summary of the comparison of the risk-adjusted value and cost. In this example, Case 1 has a lower risk-adjusted cost and a higher risk-adjusted value, making it the clear choice for implementation.



    Table 5: NASA Case Study Summary Information

                       Case 1        Case 2
Cost, Pre-Risk     $7,316,480    $7,347,991
Cost, Post-Risk    $9,116,129   $11,505,551
Cost Risk Score         24.6%         56.6%
Value, Pre-Risk          84.0          53.5
Value, Post-Risk         72.3          46.5
Value Risk Score       -13.9%        -13.1%

    4.4 Document and Communicate

The outputs of VMM may be used to communicate the overall value of an initiative to a variety of stakeholders. The analysis and planning required by VMM produce outputs that fully satisfy or strongly support many requirements for justifying budget requests.

    5 CONCLUSIONS

Using the Value Measuring Methodology allowed NASA to achieve a better understanding of the decision in question by providing detailed cost, value, and risk information for both quantitative and qualitative aspects of the alternatives under consideration. While the VMM process is intensive, the additional information provided enables decision makers to more fully understand the nuances of their decisions. Additionally, the quantitative analysis provides the information and insight required to identify and assess alternatives and to create a baseline for on-going management and evaluation. The comparison of value, cost, and risk provides the insight required to make tradeoffs in order to optimize value, minimize cost, and diminish risk. The granularity and depth of the analysis provides the basis for effective and tailored communication to stakeholders, customers, and funding authorities.

    REFERENCES

Value Measuring Methodology How-To Guide, October 2002. Available at: http://www.cio.gov/archive/ValueMeasuring_Methodology_HowToGuide_Oct_2002.pdf

Geospatial Interoperability (GI) Return on Investment (ROI) Study Report, April 2005. Available at: http://gio.gsfc.nasa.gov/publication.html

    BIOGRAPHY

Kevin Foley (foley_kevin@bah.com, 703-377-5444) is an associate with Booz Allen Hamilton's Economics and Business Analysis (EBA) Team in McLean, VA. Mr. Foley has a B.S. in aerospace engineering, an MBA in strategy and policy analysis, and is currently pursuing a Master's degree in applied economics at Johns Hopkins University.