Building social policy evaluation capacity

Author: Duignan, Paul

Abstract

The last three years have seen increasing interest in evaluation in the public sector in New Zealand. This trend could result in an adequately resourced and sophisticated approach to evaluation, involving policy and provider levels within government, Maori and third-sector/community organisations. This in turn could lead to better designed and better implemented social programmes and policies. On the other hand, it is possible that unrealistic expectations, an unsophisticated model of evaluation, a lack of strategic involvement of stakeholders and inadequate investment in appropriate evaluation capacity building will see the current wave of enthusiasm ultimately turn to disillusionment. If we use the current increased interest in evaluation to build and embed a sophisticated evaluation capacity across the social policy sector, we are likely to see the more positive outcome. To achieve this we need to use appropriate evaluation models, including those appropriate for Maori programmes; build a sector culture of evaluation through appropriate evaluation training and awareness-raising at all levels; and attempt to foster strategic, sector-wide priority setting of evaluation questions.

INTRODUCTION

The final years of the last decade saw mounting interest in evaluation and an outcomes focus within the New Zealand social policy community (Schick 1996, Bushnell 1998, Duignan 1999, State Services Commission 1999, Controller and Auditor-General 2000). From the point of view of the working evaluator, this seems to have been accompanied by a significant rise in the amount of evaluation being funded and undertaken in New Zealand. It will be fascinating to watch how this develops over the next decade. If we are lucky it will result in more sophisticated evaluation being undertaken, which will feed into the formation and implementation of better social policy. If we are unlucky there is likely to be an initial burst of evaluation activity for a few years with a lot of resources spent on elaborate technical evaluation designs, followed by a phase of disillusionment due to unrealistic expectations as to what evaluation can deliver for social policy in New Zealand.

If we are to get the most out of the increased interest in evaluation, we must build an enduring evaluation capacity in the social policy area. Part of this involves increasing the number of evaluators working in the sector, as has been done in some evaluation capacity-building initiatives (Compton et al. 2001), but it needs to go beyond this to put in place the following three elements:

* using appropriate evaluation models;

* developing a culture of evaluation throughout the social policy sector by teaching evaluation skills appropriate for each level of the sector; and

* sector-level strategising to identify priority evaluation questions, rather than just relying on evaluation planning at the individual programme level.

Each of these needs to involve government, community organisations and Maori stakeholders in the development of a more strategic approach to social policy evaluation.

USING AN APPROPRIATE EVALUATION MODEL

Discussing an appropriate evaluation model may seem a slightly obscure and theoretical place to start thinking about building social policy evaluation capacity. However, evaluation can be described in a number of different ways, and evaluators use a variety of models and typologies (Cook and Campbell 1979, McClintock 1986, Patton 1986, Guba and Lincoln 1989, Rossi and Freeman 1989, Scriven 1991, Fetterman et al. 1996, Chelimsky and Shadish 1997). In the author's experience, these models and approaches differ in their suitability for social policy evaluation capacity building. Suitable evaluation models should:

* attempt to demystify evaluation so that it can be understood and practised at all levels within the social policy sector;

* use a set of evaluation terms that emphasises that evaluation can take place across a programme's life cycle and is not limited to outcome evaluation;

* allow a role for both internal and external evaluators;

* have methods for hard-to-evaluate, real-world programmes, not just ideal-type, large-scale, expensive, external evaluation designs;

* not privilege any one meta-approach to evaluation (for example, goal-free, empowerment);

* be based on a sophisticated understanding of what evaluation can actually deliver in terms of an evidence base for social policy; and

* take into account the need for approaches for evaluating Maori programmes that may be different from mainstream evaluation approaches.

Some evaluation models meet these criteria better than others. Each of the criteria is discussed below.

Demystifying Evaluation

An appropriate evaluation model for social policy evaluation capacity building should be capable of being explained in clear terms to a wide range of stakeholders with diverse training, backgrounds and experience from across government, Maori and the community sectors. At the same time, such a model must be able to accommodate complex technical evaluation methodologies within this easily understandable framework.

One way to describe evaluation for capacity building is to conceptualise it as asking questions of our programmes, organisations and policies. These questions are not something that evaluators alone should attempt to answer; they should be an important concern of every policy maker, manager, staff member and programme participant. The high-level question I use in describing evaluation is always:

* Is this (organisational activity, policy or programme) being done in the best possible way?

This is then unpacked into a series of subsidiary questions:

* How can we improve this organisation, programme or policy?

* Can we describe what is happening in this organisation, programme or policy?

* What have been the intended or unintended outcomes from this organisation, programme or policy?

A question-based introduction to evaluation helps to demystify the process of evaluation. It puts the responsibility for evaluation back where it belongs: with the policy makers, funders, managers, staff and programme participants who must identify the questions they are interested in, rather than leaving this solely to evaluators. It highlights that programme managers and staff cannot avoid these questions; they simply have to work out ways of answering them. In most cases stakeholders will be able to answer these questions through their own efforts; in some instances, however, they will need to call in specialised evaluation assistance. A question-based approach to evaluation is also well positioned to highlight the concept of sector-level strategising about priority evaluation questions, which is discussed later in this article.

A Set of Evaluation Terms That Apply Across the Programme Life Cycle

In New Zealand, at least, most stakeholders unfamiliar with evaluation still see it mainly in terms of outcome evaluation, although this narrow perspective is now starting to change. An appropriate set of terms for the different types of evaluation should highlight that evaluation consists of much more than this. Two important dichotomies are often used to describe evaluation: the distinction between formative and summative evaluation and the distinction between process and outcome evaluation. Combining elements from both leads us to a three-way typology (formative, process and impact/outcome) that emphasises that evaluation can take place right across the programme life cycle, not just at the end. This is the three-way split used in the evaluation work of the Alcohol & Public Health Research Unit (Casswell and Duignan 1989, Duignan 1990, Duignan and Casswell 1990, Duignan et al. 1992a, Duignan et al. 1992b, Turner et al. 1992, Duignan 1997, Waa et al. 1998, Casswell 1999, Health Research Council n.d.).

In this typology, which is based on the purpose for which evaluation will be used, formative evaluation (McClintock 1986, Dehar et al. 1993, Tessmer 1993) is defined as evaluation activity directed at optimising a programme. (It can alternatively be described as design, developmental or implementation evaluation.)

Process evaluation (Scheirer 1994) is defined in our typology as describing and documenting what happens in the context and course of a programme to assist in understanding a programme and interpreting...
