Use of Evaluative Information in Foundations: Benchmarking Data


  • Use of Evaluative Information in Foundations: Benchmarking Data
    Patrizi Associates, June 2010

  • Study Purpose

    Benchmark foundation practices regarding evaluation functions and responsibilities and how evaluation resources are deployed.

    Explore perceptions of how well foundations use evaluative information.

    Explore patterns of demand for evaluative information.

    Set the stage for the July 2010 Evaluation Roundtable Meeting to consider how evaluative information can be used effectively to advance foundation capacity to develop and guide strategy in complex and challenging environments.

  • Study Overview

    About the study: The focus of this study is on evaluative information rather than on evaluation, in order to capture the full range of functions and products foundations use to gauge their own effectiveness. The questions were posed to those responsible for evaluation in each of the participating foundations. Although we've conducted benchmarking studies in the past, they were more narrowly focused on evaluation, more qualitative in nature, and included between 10 and 14 foundations, so we are reluctant to use them as true points of comparison. However, we've included some historical references in this presentation drawn from those previous Evaluation Roundtable studies and from interviews conducted as part of this study.

    Approach: 33 foundations (US and Canada) with a history of strong evaluation use were invited to participate. We sent a web-based survey to the person who led the evaluation unit or had major responsibility for evaluation. 31 foundations completed the survey, 26 returned foundation expenditure information, and 29 participated in follow-up phone interviews.

    Time period of study: The study was conducted in summer 2009. Respondents were asked to provide data for 2007 and 2008 and to reflect on changes over the last five years.

    Analysis: We examined overall responses to identify patterns across respondents, and segmented responses by the size of the foundation's yearly grantmaking (foundations under $50 million; foundations of $50 million and under $200 million; foundations of $200 million and over) and by reporting structure (whether the evaluation unit reports to the CEO, a program leader, or an administrative leader).

    Caveat: The sample size is small, although it includes over 50% of foundations with grantmaking over $200 million annually and nearly 15% of those awarding over $50 million annually. The universe of foundations of interest is even smaller, in light of our criterion that a participating foundation have an expressed and demonstrated strong interest in evaluation.
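The three-way size segmentation described above can be sketched as a small helper. This is purely an illustration of the tier definitions; the function name and the handling of the exact $50M/$200M boundaries are our reading of the study's wording, not part of the study itself:

```python
def grantmaking_tier(annual_grants_usd: float) -> str:
    """Assign a foundation to one of the study's three size tiers.

    Boundary handling follows the study's wording: "under $50 million",
    "$50 million and under $200 million", "$200 million and over".
    """
    if annual_grants_usd < 50_000_000:
        return "Tier 1: Under $50 million"
    if annual_grants_usd < 200_000_000:
        return "Tier 2: $50 to $200 million"
    return "Tier 3: Over $200 million"
```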

  • Participating Foundations by Grantmaking Size and Reporting Structure

    * These foundations did not provide evaluative information expenditure data.

    + Because of the unique nature of BMGF's operations, it was counted as 3 separate foundations. At the time of the survey, evaluation staff reported to an administrator; shortly after the survey was administered, the Foundation delegated this function to the three program presidents.

    Evaluation reports to CEO
      Under $50 million: Bruner Foundation*, Colorado Trust, Edna McConnell Clark Foundation, J.W. McConnell Family Foundation, New York State Health Foundation
      $50 to $200 million: Cleveland Foundation, James Irvine Foundation, Ontario Trillium Foundation, Wallace Foundation, William Penn Foundation
      Above $200 million: Hewlett Foundation, Robert Wood Johnson Foundation

    Evaluation reports to Administrator
      Under $50 million: Barr Foundation*, Marin Community Foundation
      $50 to $200 million: California Endowment, Lumina Foundation, Rockefeller Foundation
      Above $200 million: Atlantic Philanthropies, Gates Global Health*+, Gates Global Development*+, Gates U.S. Program*+, Pew Charitable Trusts

    Evaluation reports to Program
      Under $50 million: California Health Care Foundation, Kauffman Foundation
      $50 to $200 million: Annie E. Casey Foundation, California Wellness Foundation, Hilton Foundation, Knight Foundation
      Above $200 million: Ford Foundation, Kellogg Foundation, Packard Foundation

  • Study Context: Forces Shaping the Evolution of Evaluation in Foundations

    The evaluation function in philanthropy is relatively new, dating to the late 1970s and early 1980s, with a large expansion in the number of foundations with dedicated evaluation staff in the 1990s. Evaluation emerged during a period of professionalization of philanthropy, when many foundations shifted their approach from that of a charitable grantmaker responding to grant requests toward a more directive and purposeful role as a strategist. With their expanded role as strategic actors, foundations increased their attention to evaluation. Being a strategic philanthropy soon became linked to being an effective philanthropy, and with this came an increased focus on measurement.

    Reflecting this overall shift toward strategic philanthropy, evaluation units have expanded their focus from assessing whether grantees are effective toward assessing whether foundation strategies are effective and whether foundations add value. A look at the trends in evaluation unit titles is revealing; we hypothesize that much of this evolution corresponds to the growth of strategic philanthropy.

    Trends in Evaluation Unit Titles

    1980s: Research and evaluation
    Early 1990s: Planning and evaluation
    Late 1990s: Organizational learning
    2000s: "Impact [something]"; "Strategic [something]"

  • Study Context: Tensions and Challenges

    The role and function of evaluation sit astride several philosophical debates related to this evolution of philanthropy, largely pivoting on the degree to which evaluation serves to increase learning versus its role in assuring accountability.

    This core tension is played out in numerous questions regarding reporting, responsibility, orientation (internal or external) and level of resources relative to value.

    In the shift toward strategic philanthropy, foundations have been challenged to reframe evaluation from an older model of post hoc assessment of grantees for accountability to one that examines their own work and is structured to inform strategy from start to finish.

    This shift has resulted in many changes in the roles, responsibilities, and structure of the evaluation function. This survey sought to describe how foundations use evaluative information (in its many forms) to guide their work, and to analyze whether reporting relationship or size affects the use of evaluative information. We looked at:

    - The range of foundation activities employed to produce evaluative information
    - The resources available for these functions
    - Perceptions of internal use of, and demand for, evaluative information by different groups within foundations

    This study sets the stage for discussion at the Evaluation Roundtable Meeting: Information and Its Use in Supporting Strategy. We hope to reflect on the data and its implications for what foundations need to enhance strategic learning.

  • Benchmarking Evaluation in Foundations: 2009 Findings

    I. Evaluation Functions and Responsibilities
    II. Evaluation Resources: Staffing and Budget
    III. Perceptions of Demand for and Use of Evaluative Information

  • What Functions Do Evaluation Units Perform?

    What are the principal responsibilities of your unit?

    Most foundations have expanded the role of the unit beyond supporting evaluations, including but not limited to the functions illustrated in the table. Evaluation units have a major role in knowledge management in 60% of responding foundations, and this is more often the case when staff report to the CEO (73%) or to program (67%); only 40% of those reporting to administrators are involved in knowledge management in any way. Two developments have emerged as important aspects of the job: involvement in performance metrics and in program strategy. We know from prior work that an evaluation unit's involvement in strategy development was relatively rare five years ago; now 90% have a role in strategy.

  • What Types of Evaluative Activities Do Evaluation Units Do?

    Which of the following types of evaluation/performance metric activities does your foundation do?

    There has been an increase in all types of evaluative information. In the past, individual grant and initiative evaluations made up the majority of the portfolio of foundation evaluation work. Now, although nearly all foundations surveyed still evaluate individual grants and initiatives, the vast majority also support the full spectrum of evaluative activities listed above. Indicators work and strategy evaluations are an increasingly important part of the portfolio, whereas they were relatively rare five years ago. Evaluations of larger aspects of foundation work, such as entire program area assessments and foundation-wide assessments, are also more common; however, about 25% of the responding foundations typically do not conduct these types of assessments.

    * This chart does not reflect cases where other units have primary responsibility. 4% of respondents indicated that another unit had primary responsibility for each activity except perception surveys, where 21% of foundations delegated this responsibility to another unit.

  • How Is Responsibility for Evaluation Distributed Throughout Foundations?

    We asked a series of questions about which unit in the foundation takes primary responsibility for an evaluative task. Response options were:

    - My unit has primary responsibility
    - Program has primary responsibility
    - My unit and program share responsibility
    - Other unit has responsibility
    - My foundation does not do this type of work

    The purpose of these questions was to surface information regarding the role of evaluation staff in relation to program staff in the work of evaluation.

  • Evaluation Is Not Exclusively the Responsibility of Evaluation Staff

    Which unit has primary responsibility: evaluation, program, or shared?

    Evaluation units tend to have primary responsibility when the focus of assessment is larger (i.e., strategy or foundation-wide evaluations); conversely, program staff tend to have primary responsibility for individual evaluations, a smaller focus of assessment. In most foundations, program staff assume a great deal of the responsibility for most types of evaluation. Most foundations give evaluation staff primary responsibility for foundation-level assessments (including perception surveys), while program staff have at least shared, if not primary, responsibility for identifying and tracking indicators of grantmaking performance.

  • Allocation of Primary Responsibility Differs Considerably Based on Reporting Structure

    Evaluation reports to administrator: Units reporting to an administrator are more likely to have primary responsibility for most evaluative activities, except identifying and tracking foundation indicators.

    Evaluation reports to program: A much different picture emerges when evaluation reports to program. Not one evaluation unit reporting to program has primary responsibility for individual grant evaluations, strategy evaluations, entire program area assessments, or identifying grantmaking performance indicators.

    Evaluation reports to CEO: CEO reports are much more likely to have primary responsibility for every type of evaluation and assessment.

  • Evaluation Units Assume an Increased Role in Program Strategies

    Evaluation units now have at least some role in the strategy development process, with nearly all respondents reporting that they are at least somewhat or heavily involved at both the start and the end/renewal points of program strategy. Overall, 64% report that they are heavily involved in strategy discussions in the early stages of strategy development, and nearly the same share (61%) report heavy involvement at the end or renewal point. Participation in strategy drops off considerably in the ongoing stages of strategy evolution, with only 27% reporting that they are heavily involved in providing feedback or critique on an ongoing basis. However, respondents who report to the CEO are more involved in program strategy at every stage of the strategy cycle (outset, ongoing, and end) than those reporting to others.

  • Summation/Questions/Points to Consider: Functions and Responsibilities

    - The role of evaluation has expanded, particularly in program strategy.
    - There is an increase in the types of evaluative activities employed.
    - Reporting structure varies and seems to affect who has primary responsibility.

    Thought starters:

    - How does the distribution of responsibility reflect learning or accountability? What are the tradeoffs?
    - How do you interpret the way the role of evaluation differs under each reporting relationship studied?
    - What kinds of skills do program staff need to meet their responsibilities in the design and implementation of evaluations and evaluative activities?
    - Why does the evaluation role in program strategy drop off during implementation?

  • Benchmarking Evaluation in Foundations: 2009 Findings

    I. Evaluation Functions and Responsibilities
    II. Evaluation Resources: Staffing and Budget
    III. Perceptions of Demand for and Use of Evaluative Information

  • Tracking Spending on Evaluative Information

    Nearly every foundation found it difficult to provide data on evaluation spending; most foundations do not systematically track these costs. We asked respondents to submit estimates* of their spending on a range of evaluative information activities where the foundation is a primary user, including:

    - Evaluations
    - Collection of data for indicators of foundation or program performance
    - Other related expenditures to gather data to inform knowledge of foundation effectiveness

    We received expenditure data from 26 foundations. Those not submitting these data are asterisked on slide 4. Neither the largest foundation (BMGF) nor the smallest (Bruner) is included in the analysis. To augment these data, we asked staff for their perceptions of how spending on evaluative information compared to spending on grants over the last five years.

    * We have every reason to believe that respondents submitted as accurate a number as possible. Respondents who felt they could not produce good estimates did not participate in this part of the survey.

  • The Percentage Spent on Evaluative Information (Relative to the Grant Budget) Varies Greatly

    Dollars Spent on Evaluative Information, and as a Percentage of the Grantmaking Budget

    Although the highest percentage spent on evaluative information was 17.8%, nearly 40% of all foundations surveyed invest less than 1% in these activities. Smaller foundations tend to invest a greater portion of their grantmaking budget than those in the other two tiers, with the majority spending over 7% of their grantmaking budget. In addition, those who report to the CEO or an administrator have invested at least 33% more in evaluative information than those who report to program.

    These data are based on two years of spending on both evaluative information and grants: FY 2007 and 2008. They should be considered estimates, as many foundations do not formally track this information.

    Foundation Size               Mean                 Median              Minimum           Maximum
    Overall                       $4,664,652 (3.7%)    $1,629,313 (2.2%)   $212,451 (0.3%)   $28,719,575 (17.8%)
    Tier 1: Under $50 million     $1,150,068 (7.2%)    $1,054,500 (7.4%)   $212,451 (0.8%)   $3,000,000 (17.8%)
    Tier 2: $50 to $200 million   $2,354,650 (2.4%)    $1,513,145 (1.6%)   $273,281 (0.3%)   $10,650,000 (6.5%)
    Tier 3: Over $200 million     $12,139,240 (2.6%)   $6,037,536 (2.3%)   $500,000 (0.3%)   $28,719,575 (4.9%)
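The percentage figures reported above are simply evaluative-information spending divided by the grantmaking budget. As a worked illustration (the function name and the example dollar amounts are hypothetical, not drawn from any participating foundation):

```python
def eval_share_of_grants(eval_spend_usd: float, grant_budget_usd: float) -> float:
    """Evaluative-information spending as a percent of the grantmaking budget."""
    return 100.0 * eval_spend_usd / grant_budget_usd

# A hypothetical foundation spending $1.05M on evaluative information
# against a $15M grant budget lands at about 7%, near the Tier 1 median of 7.4%.
share = eval_share_of_grants(1_050_000, 15_000_000)  # ≈ 7.0
```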

  • Most Respondents Perceived an Increase in Evaluation Investments Prior to the Economic Downturn

    Not considering the recent economic downturn: over the last five years, what is your perception of how funding levels for evaluation have changed relative to shifts in the size of the grants budget?

    Most respondents believe that evaluation spending increased somewhat compared to their foundation's grantmaking spending. Respondents who perceived increases in investment were largely those reporting to the CEO or an administrator: compared to those reporting to program, more than twice as many administrator reports, and 60% more CEO reports, responded that their foundation increased its investments in evaluation. Decreases in spending on evaluation were perceived only in units reporting to program.

    All respondents — availability of evaluation funds compared to grantmaking spending, net of the recent economic downturn:

    Increased dramatically     3%
    Increased somewhat        59%
    Stayed about the same     31%
    Decreased somewhat         7%
    Decreased dramatically     0%

    By reporting structure — availability of evaluation funds compared to grantmaking spending:

                   CEO    Admin   Program
    Increased      67%    78%     38%
    Stayed same    33%    22%     38%
    Decreased       0%     0%     25%

  • What Was the Impact of the Economic Downturn on Evaluation?

    How has the recent economic downturn affected the amount of funds available for evaluation compared to those available for grantmaking?

    In most foundations, the poor economy did not affect investment in evaluation more than it did grantmaking: the majority of respondents report that evaluation spending relative to grantmaking spending remained constant despite the downturn. However, there were clear differences based on reporting structure. All respondents who perceived an increase in evaluation spending were in units reporting to CEOs or administrators, while decreases in spending occurred most frequently in units reporting to program. This is notable if we consider the order of magnitude of the differences.

    All respondents — availability of evaluation funds compared to grantmaking spending, considering the economic downturn:

    Increased dramatically     0%
    Increased somewhat        10%
    Stayed about the same     62%
    Decreased somewhat        21%
    Decreased dramatically     7%

    By reporting structure — availability of evaluation funds compared to grantmaking spending:

                   CEO    Admin   Program
    Increased       8%    20%      0%
    Stayed same    75%    50%     57%
    Decreased      17%    30%     43%

  • How Is the Evaluation Function Staffed?

    Evaluation Unit Professional Staffing (in FTEs)

    The smallest foundations (Tier 1) have more evaluation staff relative to their size than those in the other tiers. Staffing within the largest foundations (Tier 3) varies tremendously and is heavily skewed by two foundations with 14 FTEs each, almost 3 times the mean. Overall, staffing has declined since 2005, from an overall mean of 3.9 FTEs and a median of 3.5 FTEs.


    Foundation Size               Mean   Median   Minimum   Maximum
    Overall                       3.0    2.0      0         14
    Tier 1: Under $50 million     2.1    2.0      0.8       4.8
    Tier 2: $50 to $200 million   2.0    2.0      0         4.0
    Tier 3: Over $200 million     5.0    3.5      0         14

  • Number of Evaluation FTE Staff by Reporting Structure

    Reporting structure again greatly influences the number of FTEs allocated to evaluation functions. Those reporting to administrators have 2.5 times the staff of those reporting to program, and those reporting to CEOs have over twice the staff of those reporting to program.

    These data are based on FY 2007 and 2008 and should be considered estimates, as many foundations do not formally track this information.

                     Mean   Median   Minimum   Maximum
    Total in FTEs    3.0    2.0      0         14.0
    CEO              3.3    2.0      0.25      14.0
    Administrator    4.0    2.75     0.75      14.0
    Program          1.6    1.3      0         4.0
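The staffing ratios cited on this slide fall directly out of the mean FTE column of the table; a quick check (the dictionary is just an illustration holding the table's mean values):

```python
# Mean professional FTEs by reporting structure, copied from the table above.
mean_ftes = {"CEO": 3.3, "Administrator": 4.0, "Program": 1.6}

# Administrator-report units have 2.5 times the staff of program-report units...
admin_vs_program = mean_ftes["Administrator"] / mean_ftes["Program"]  # 2.5
# ...and CEO-report units have just over twice the staff.
ceo_vs_program = mean_ftes["CEO"] / mean_ftes["Program"]  # ≈ 2.06
```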

  • Perceptions of the Level of Investment: Is It Appropriate?

    How would you assess the amount your foundation invests (both in terms of staff and funding) in each of the following functions?

    Half of the respondents believe that their foundation invests an appropriate amount (in dollars and staff time) in program strategy, foundation strategy, performance metrics, and evaluation functions. Still, sizeable percentages, 31% to 47%, believe too little (or far too little) is invested in these areas. Dissatisfaction was articulated most frequently regarding foundation investment in knowledge management and formal learning functions, where 67% say foundation investments were inadequate. Program strategy was the only area where a number of respondents felt that their foundation invested far too much.

    "There is a lot of ya-ya-ya-ing out there on learning... but it doesn't happen."

  • Dissatisfaction with the Level of Investment Was Highest Among Those Reporting to Program and Those in Mid-size Foundations

    How would you assess the amount your foundation invests in each of the following functions? (Percent of responses indicating a "less than appropriate amount.")

    Knowledge management and formal learning functions were of greatest concern across all reporting structures and all foundation sizes. About 2 out of 3 (or more) of those reporting to program believe that a less than appropriate amount is invested in all of these functions. A majority (or more) of those reporting to administrators are most concerned about the level of investment in evaluation, knowledge management, and learning. The greatest weight of dissatisfaction was expressed by those in foundations with over $50 million in grantmaking.

    By reporting structure:

                                     CEO    Administrator   Program
    Evaluation                       25%    50%             75%
    Performance Metrics/Indicators   25%    40%             75%
    Knowledge Management             75%    50%             75%
    Formal Learning Functions        42%    70%             62%
    Program Strategy                 27%    10%             62%
    Foundation Strategy              25%    30%             62%

    By foundation size:

                                     Under $50M   $50M-$200M   Above $200M
    Evaluation                       25%          55%          60%
    Performance Metrics/Indicators   37%          55%          40%
    Knowledge Management             62%          62%          70%
    Formal Learning Functions        50%          64%          60%
    Program Strategy                 43%          45%          0%
    Foundation Strategy              38%          36%          30%

  • Summation/Questions/Points to Consider: Evaluation Resources, Staffing and Budget

    - Overall financial support appears to be holding for evaluative activities.
    - Staffing for evaluation, however, has dropped considerably since 2005.
    - Knowledge management and learning were the areas of largest concern regarding the adequacy of foundation investment, across all size segments and reporting relationships.
    - Those reporting to program expressed the greatest concern about the adequacy of investment made in strategy, evaluation, learning, knowledge management, and indicators.

    Thought starters:

    - Are resources (across the organization) adequate to meet the knowledge challenge of strategic philanthropy?
    - How do you interpret the influence of reporting relationships on evaluative information investment decisions?

  • Benchmarking Evaluation in Foundations: 2009 Findings

    I. Evaluation Functions and Responsibilities
    II. Evaluation Resources: Staffing and Budget
    III. Perceptions of Demand for and Use of Evaluative Information

  • Management Demand for Evaluative Information Increased in Most Foundations

    Over the last five years, what is your perception of trends in management demand for the following?

    No decrease in demand was reported by any respondent. Respondents perceived large increases in demand for all types of evaluative activities, and most perceived dramatic increases in demand for program performance metrics and strategy evaluations. Although overall demand for individual grant evaluations increased, it was not perceived to be as strong as that for other forms of evaluative work.

    Differences by reporting structure: units that report to program experience much greater demand for individual grant evaluations, research to inform program strategy, and program metric data, while units that report to CEOs experience greater demand for strategy evaluations and overall foundation assessments. Our interviews revealed that increases in demand were largely driven by CEO or board interest.

    "We have a new president and (s)he's driving the change here. Having the CEO focused on [evaluation] is critical. Before (s)he came in, it was hard to get program officers' attention... I can't overstate how vital that is."

    "The board is asking in a more explicit way what results are being achieved with our grantmaking, and that is the reason for the increase."

  • Most Respondents Believe Program Staff Use of Evaluation Is at Least Acceptable

    How would you assess program staff's use of evaluation to inform the following?

    About 2/3 or more of respondents report at least acceptable use of evaluation at all the different stages of program work. The most frequently cited problem area was how well evaluation informs midcourse decision making.

    "We spend a lot of time and resources in developing strategies, but they can become like railroad tracks... once you get going, the mechanism for switching tracks on the basis of evaluative information is difficult. The way in which strategies get realigned is still a work in progress."

                                                 Good   Acceptable   Poor
    Programmatic Development Work                23%    57%          20%
    Mid-course Decisions During Implementation   33%    33%          33%
    Summative Performance Assessments            43%    36%          21%

  • How Much Are Evaluation Findings Disseminated? An Early Indicator of Use

    What portion of your evaluations, either in full or summary form, do you estimate are shared with the following audiences? (Percent reporting a large majority, or all or almost all.)

    Reporting structure is a factor in how evaluation findings are shared. More respondents reporting to CEOs share their evaluation results with every major audience listed than do those reporting to administrators or program. This is even the case in sharing with grantees (a 17-point difference from program) and the broader field (an 8-point difference). Fewer of those reporting to administrators share reports with the broader field.

    "Frankly, there is a lot of cherry picking going on [regarding sharing evaluation findings]. Access to findings is more managed here. Findings get shared, but warts and all? Not generally."

    "The CEO controls the message to the board and s/he errs on the side of less information. S/he would not [share] a full evaluation report that had any equivocation."

    Evaluation unit reports to:   CEO    Administrator   Program
    Management                    92%    40%             75%
    Board                         58%    40%             38%
    Program Staff                 92%    80%             87%
    Grantees                      67%    60%             50%
    Broader Field                 58%    20%             50%

  • Comments on Issues of Use from Interviews

    "It gets to be more difficult when people feel that there is more at stake. If a program director is stepping down or if someone's under threat, it's very difficult to do healthy learning."

    "We have many meetings where a program director will pitch an idea for funding, but there won't be a question on how these grants build on prior experience or investments. There are no moments where people are asked to look back on what they've learned."

    "When I came to the Foundation, I looked at all the evaluations that they did over the last 4 to 5 years and I could not tell the impact that those evaluations had had on the business. Partly, entire programs had changed. Partly, officers and directors are no longer there, so there is no ownership of information as it comes back into the foundation... so it gets ignored."

  • Most Respondents Also Believe Program Use of Metrics Is at Least Acceptable

    How would you assess program staff's use of performance metrics to inform the following?

    Again, about 2/3 or more of respondents rated staff use of performance metrics as acceptable or higher.

                                                 Good   Acceptable   Poor
    Programmatic Development Work                17%    50%          33%
    Mid-course Decisions During Implementation   8%     67%          25%
    Summative Performance Assessments            22%    44%          35%

  • There Were Surprisingly Few Differences Between How Respondents Viewed Evaluation and Metrics in Terms of Use

    How important are the following potential uses of evaluation and performance metrics at your foundation?

    In considering these purposes, few respondents saw differences in the importance of use between evaluation and metrics; we were surprised by the relative lack of variation in responses. Across the spectrum of uses cited, both evaluation and performance metrics were largely seen as very or moderately important tools. The largest spreads were: performance metrics were seen as less useful (by 12 points) than evaluation in making summative judgments, and evaluation was seen as less useful (by 13 points) than performance metrics in providing information about foundation progress at regular intervals.

    Percent reporting "very" or "moderately" important:

                                                                         Evaluation   Performance Metrics
    Provide information about implementation                             97%          89%
    Sharpen focus or operationalize a goal or strategy                   87%          78%
    Provide periodic information on program performance at regular
      intervals                                                          83%          93%
    Provide information to make summative judgments                      76%          64%
    Provide information about foundation progress at regular intervals   69%          82%

  • Respondents Raised Several Issues Regarding the Use of Metrics in Our Interviews

    Evaluation vs. metrics: "Executive leadership is looking less to evaluation and more to performance indicators, without understanding the difference."

    Program vs. operational metrics: "Each program can define the metrics they want to measure. [Staff tend] to rely on what you would report to finance, which isn't necessarily helpful as program impact indicators. The indicators we have now for the board work well, except for assessing program performance."

    "Interestingly, the first metrics out the door are more operational than programmatic (e.g., turn-around time of grants, volume, etc.)."

    Using appropriate metrics in complex systems: "If we looked only at the [metrics], it was a failure. But if we looked at the dynamics of schools and community, you'd see what was happening that explained the initial drop. It took looking at the broader dynamics, beyond metrics alone. It took the program officer really reaching into the community to understand what was behind those metrics."

    When to identify metrics: "Should we focus on metrics when we haven't defined strategy yet?"

  • Management Support for Evaluation Was Seen as Strong; CEO Reports Experience the Most Consistent Support

    How well would you assess managerial support of evaluative information at your foundation?

    About 2/3 of respondents report that management support occurs "frequently" or "often." Respondents who report to the CEO were more likely to perceive strong managerial support. Only 25% of evaluation units reporting to program believe that management often or frequently addresses foundation problems identified in evaluations.

    [Chart: percent of responses indicating "frequently" or "often," by the unit's reporting structure]

  • Board Support Reveals a Similar Pattern to Management Support

    How would you assess the board's position on the following?

    About half of the respondents report high support from their boards, and relatively few feel limited or low support. Perceptions of board support tend to be lowest in evaluation units that report to program.

    [Chart: percent of responses indicating high support, by the unit's reporting structure]

  • Many Respondents Raised Four Organizational Culture Factors as Impeding Good Learning at Their Foundation

    To what extent are the following organizational culture factors an impediment to good learning at your foundation?

    The top four factors, each identified by over 20% of respondents as often or always impeding good learning, are:

    - Highly individualistic grantmaking
    - Limited thinking or reflecting with others
    - Lack of attention to implementation issues
    - Limited constructive feedback

    These tendencies were cited as at least sometimes a factor in 63% or more of foundations. Relatively few respondents see isolation from the field, over-commitment despite uncertainty, pressure to make larger grants, or unwillingness to make small exploratory grants as impeding learning in their foundations.

    All respondents:

    Factor                                                            | Not a Factor | Sometimes | Often | Always
    Highly individualistic grantmaking                                | 20%          | 53%       | 20%   | 7%
    Lack of attention to implementation                               | 27%          | 50%       | 23%   | 0%
    Limited constructive feedback                                     | 37%          | 40%       | 20%   | 3%
    Limited thinking or reflecting with others                        | 28%          | 52%       | 10%   | 10%
    Isolation from others in the field                                | 45%          | 41%       | 14%   | 0%
    Over-commitment when knowledge is limited and uncertainty is high | 45%          | 41%       | 14%   | 0%
    Pressure to make larger grants                                    | 52%          | 34%       | 10%   | 3%
    Predisposition toward relatively untested grantmaking             | 33%          | 57%       | 7%    | 3%
    Unwillingness to make small exploratory grants                    | 63%          | 33%       | 3%    | 0%

  • Respondents Who Report to an Administrator or to Program Believe That Organizational Culture Factors More Frequently Impede Learning at Their Foundation

    To what extent are the following organizational culture factors an impediment to good learning at your foundation?

    By reporting structure (percent of responses indicating "always a factor" or "often a factor"):
    - Respondents who report to the CEO were generally less likely than others to believe that these cultural factors inhibit learning.
    - Among those who report to administrators, about a third identify the following as factors that reduce institutional learning:
      - Limited thinking with others
      - Highly individualistic grantmaking
      - Pressure to make large grants
      - Limited constructive feedback
    - Among those who report to program:
      - Half cite a lack of attention to implementation as a factor that often impedes foundation learning.
      - Highly individualistic grantmaking is another common impediment to learning.

    Percent of responses indicating "always a factor" or "often a factor," by where the evaluation unit reports:

    Factor                                                            | President | Administrator | Program
    Limited thinking or reflecting with others                        | 18%       | 30%           | 12%
    Highly individualistic grantmaking                                | 17%       | 30%           | 38%
    Over-commitment when knowledge is limited and uncertainty is high | 17%       | 11%           | 12%
    Pressure to make large grants                                     | 9%        | 30%           | 0%
    Isolation from others in field                                    | 9%        | 20%           | 12%
    Limited constructive feedback                                     | 8%        | 40%           | 25%
    Unwillingness to make small exploratory grants                    | 8%        | 0%            | 0%
    Lack of attention to implementation                               | 8%        | 20%           | 50%
    Predisposition toward relatively untested grantmaking approaches  | 0%        | 20%           | 12%

  • Respondents from Larger Foundations Are More Likely to Believe That Culture Factors Impede Learning

    To what extent are the following organizational culture factors an impediment to good learning at your foundation?

    By foundation size (percent of responses indicating "always a factor" or "often a factor"):
    - Those working in foundations with grantmaking under $50M report few organizational factors that impede learning.
    - Among mid-size grantmaking institutions, more than half report highly individualistic grantmaking as a serious impediment to learning. A lack of attention to implementation was also cited as a factor inhibiting learning by about a third of the foundations in this tier.
    - Respondents from larger foundations saw a greater number of cultural factors as impeding good learning.

    Percent of responses indicating "always a factor" or "often a factor," by grantmaking budget:

    Factor                                                            | Under $50M | $50M to $200M | Above $200M
    Limited constructive feedback                                     | 22%        | 18%           | 30%
    Unwillingness to make small exploratory grants                    | 11%        | 0%            | 0%
    Limited thinking or reflecting with others                        | 11%        | 27%           | 22%
    Predisposition toward relatively untested grantmaking approaches  | 0%         | 18%           | 10%
    Pressure to make large grants                                     | 0%         | 0%            | 40%
    Over-commitment when knowledge is limited and uncertainty is high | 0%         | 10%           | 30%
    Lack of attention to implementation                               | 0%         | 36%           | 30%
    Isolation from others in field                                    | 0%         | 9%            | 30%
    Highly individualistic grantmaking                                | 0%         | 54%           | 20%

  • Summation/Questions/Points to Consider: Perceptions of Demand for and Use of Evaluative Information

    - Demand is increasing for all evaluative services and products.
    - Information use is seen as most problematic while programs and strategies are ongoing. This is strongly supported by the qualitative interviews.
    - For the most part, those reporting to program expressed greater dissatisfaction with program use of evaluative information.
    - Those reporting to program also felt their management and board were less supportive of evaluation than did those reporting to the CEO or an administrator.

    Thought Starters:
    - Demand has increased, yet staff size is down and spending remains steady. Are foundations able to do more with less? Is something being sacrificed?
    - We expected that reporting to program would result in greater evaluation use and more general buy-in, yet this seems not to be the case. How do you interpret this?
    - What other sources of evaluative information are available to program staff outside of the evaluation unit? How are all sources integrated and made available to program and management?
    - How is program learning surfaced and facilitated?
    - Do/should performance metrics and evaluation serve different purposes, and what are they?

  • In Sum: If Strategy Is Learned, How Well Are We Doing?

    "Strategy is learned, not planned." (Henry Mintzberg)

    As foundations have increasingly engaged in strategic philanthropy, the evaluation function has evolved in a corresponding manner, with a greater focus on issues of strategy. Examples abound of highly valued theory-of-change work serving to improve and clarify foundation intent, focus, feasibility, and more.

    Yet we also see areas where improvement is needed, particularly the deep and ongoing learning necessary for strategy evolution. We believe that, as important strategic actors, foundations need strong capacities and a firm commitment to learn throughout strategy evolution.

    The question becomes: How can philanthropic organizations deepen their capacity to learn and adapt their strategies?

    This is not just an evaluation issue, but an organizational challenge requiring the commitment of evaluation, program and management.
