Responding to the demand for quicker evaluation findings.

Heather Nunns

Abstract

Some public sector stakeholders are demanding evaluative findings within a short timeframe. Although evaluators want to be responsive to such requests, there are a number of barriers that hinder their ability to produce evaluative information more quickly. This paper describes the results of an investigation into ways to help evaluators respond to such evaluation "timeliness" issues. It examines the factors that underpin the issue and the barriers to addressing it. A review of the literature identifies three approaches evaluators can use to address the timeliness issue. An unintended result of the investigation is also presented. Based on the findings of the literature review, a tool (named the "time/resource matrix") has been developed for responding to and managing stakeholder demand for quicker evaluative findings.

THE ISSUE

This investigation is the result of my experiences as an evaluator in a public sector organisation. Evaluation stakeholders (notably policy and programme managers) are requesting evaluations with short timeframes for the reporting of findings. An examination of requests for proposals (RFPs) posted on the Government Electronic Tenders Service (GETS) website in 2007 indicated that such demand is occurring across the public sector. Some RFPs allow only 6 to 12 weeks between the awarding of an evaluation contract and the reporting deadline.

Timeliness is an important consideration for evaluators. Utility is the first of the four categories of programme evaluation standards compiled by the Joint Committee on Standards for Educational Evaluation. The utility standards stress that evaluation findings should be disseminated "so that they can be used in a timely fashion" (Sanders 1994:53-57). In New Zealand, the Social Policy Evaluation and Research Committee (SPEaR) has identified timeliness as one of four features that make research and evaluation useful for social policy purposes (Bedford et al. 2007).

The importance of the timeliness of evaluative findings is expressed succinctly by Grasso (2003:512): "Timing is almost everything". Other authors stress the relationship between use and timeliness:

The timeliness of information is no less critical than its accuracy, as exigencies often force program managers to make decisions before thorough analyses can be completed. In some instances, less rigorous analyses delivered at the right time may be superior to exhaustive analyses delivered too late. (McNeil et al. 2004:287)

Given the relationship between the timeliness of evaluative findings and their subsequent use, it is appropriate for evaluators to consider how they can respond to requests for a quicker turn-around of evaluation results.

CONTRIBUTING FACTORS IN THE POLICY ENVIRONMENT

Three factors in the policy environment appear to contribute to this demand for quicker evaluation findings: the policy-making process, the conceptualisation of evaluation in the policy process, and misaligned timeframes.

Policy-making Process

New Zealand's policy environment is characterised by "a degree of volatility" (Williams 2003:199). This is due in part to New Zealand's three-year electoral cycle, which means that major policy changes can occur very rapidly (Williams 2003:199) and may also result in "political exigencies that leave policy analysts with little time or incentive to track down and digest evidence" (Baehler 2003:37). Baehler's observation is congruent with my experience as a public sector evaluator. The demand from policy analysts for quick turn-around of evaluative findings is often the result of a Minister's request for new or revised policy within a very short timeframe.

Conceptualisation of Evaluation in the Policy Process

The policy process is typically portrayed as a cyclical, unidirectional process, with evaluation as the end stage that provides information for decision-making in the next cycle (Baehler 2003, McKegg 2003). Both Baehler and McKegg challenge this conceptualisation. Baehler (2003:32) argues:

The typical textbook portrayal diverges from reality ... single-loop models of policy making ... fail to recognise the different arenas in which decisions are shaped and the different stages of the cycle where key actors may be more and less open to learning from evaluation results.

McKegg (2003) notes that this linear conceptualisation fails to address the wide range of possible purposes and uses of evaluation. As a result, it fails to capture the many ways in which evaluation can interact with and inform policy development and review, along with programme design and delivery.

Misaligned Timeframes

There is an inherent mismatch between the timeframes of the policy process and those of evaluation activity (Baehler 2003, Williams 2003). Policy processes are aligned with the electoral cycle, with policy making and funding for new initiatives typically occurring at the beginning of the three-year period. However, the funding of evaluations via the Government Budget process commences at the beginning of the financial year (1 July), which often does not fit with key decision-making cycles (Williams 2003). For example, policy makers require evaluative information approximately 12 months before the end of an existing programme's funding in order to secure ongoing funding through the annual Budget-setting process. If an evaluation is scheduled towards the end of the policy cycle (as in the traditional conceptualisation described above), its findings will arrive too late to be used for decision-making purposes.

It should be noted that two of the factors described above (the conceptualisation of evaluation in the policy process and the misalignment of policy and evaluation timeframes) are not only contributing to the demand for quicker evaluation findings, but are also having a negative impact on evaluation use generally (McKegg 2003). These factors will only be addressed through structural change (such as aligning policy/programme funding allocation with evaluation funding allocation) and through strategies to increase understanding of evaluative activity. Such strategies could include educating public sector managers about the ways evaluation can be used for decision-making, policy development, and programme design and development.

ADDITIONAL BARRIERS

Within the context of the policy environment identified above, there are other factors that further limit the ability of evaluators to respond to requests for rapid evaluation findings. These barriers relate to resourcing, evaluator supply, the evaluability of some programmes and policies, and stakeholder expectations. Each is briefly discussed below.

The first barrier concerns internal resourcing limitations. Many public sector evaluation teams are small (perhaps three to eight staff) in comparison with the other teams in the organisation that commission evaluations (for example...
