1.18 Understanding Evaluation
Evaluations need to be planned, and the 4Es can provide a useful framework. You also need to close the loop from asking questions and getting answers through to drawing conclusions.
Performance measurement compares outputs against desired outcomes.
Performance management uses performance measurement as a basis for intervening.
Evaluation looks at both from several perspectives to identify the need for further action.
Evaluation can be strategic or operational, and each can focus on either outcomes or processes. It involves weighing the interests of several stakeholders in line with operational performance or strategic development.
Evaluations can be done for show (to make you look good) or as good management practice. Some evaluations have no real purpose, but otherwise they will be about:
\- Learning
\- Proving (Efficiency)
\- Improving
\- Controlling (Effectiveness)
What criteria?
When evaluating an intervention, look at the goals originally set: “Is it working as we intended?” These goals have to be concrete and measurable. If you look only at the goals, however, you prevent yourself from discovering unintended effects. Also, one stakeholder might be happy with the outcome while others are not.
This is where Scriven’s approach of goal-free/needs-based evaluation comes in.
1\. Less likely to miss unanticipated outcomes
2\. More open to negative effects
3\. Removes the bias introduced by stated goals
4\. Maintains the evaluator’s objectivity and independence
Scriven noted that the less an external evaluator knows about the goals, the less tunnel vision they will have. In practice, though, you normally still need to take the original objectives into account.
To set meaningful goals:
\- Distinguish between outcome goals and activities
\- Outcome goals should be clearly outcome-oriented
\- It must be possible to find out whether a desired outcome has not been attained
\- Goals and objectives should be understandable
\- Separate goals from indicators
\- Don’t borrow another organisation’s goals
Illuminative evaluation is another method; it is not tied to a goal and aims instead to build understanding of a situation. The starting point is normally a concern, from which you draw views and information together. From there you move on to clarification, focusing down on the specifics to reach a new understanding. This form of evaluation may be contested, though, as it does not rely on hard facts. Facilitation is important.
Who is the evaluation for?
One difference between performance measurement and evaluation is that evaluation is done for specific stakeholders. You need to:
\- Report both positives and negatives
\- Look for short-term positives that are negative in the long term
\- Look at (harmful) indirect effects on other stakeholder groups
Who should do the evaluation?
Some evaluations are done by management and others by specialists; the choice depends on:
\- Accountability to others
\- Insufficient in-house expertise
\- Controversial subject
You need to choose for technical skills as well as personal skills and credibility. Remember that evaluations cost money: direct costs (data processing and consumables), indirect costs (staff time), and possibly opportunity costs.
How to do it?
For collecting data you have several sources (B18P27):
\- Record keeping
\- Archival data
\- Benchmark data
\- Questionnaire surveys
Patton has created a utilisation-focused outcomes framework:
1\. Client target group
2\. Desired outcome(s) for the target group
3\. One or more indicators for each desired outcome
4\. Performance targets
5\. Details of data collection
6\. How the results will be used
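The six steps above amount to a structured record per target group. As a rough sketch, they could be captured in a simple data structure (the field names and the example entry are my own illustration, not Patton's wording):

```python
from dataclasses import dataclass

@dataclass
class OutcomeFrameworkEntry:
    """One entry in a utilisation-focused outcomes framework (hypothetical field names)."""
    target_group: str            # 1. client target group
    desired_outcome: str         # 2. desired outcome for the target group
    indicators: list             # 3. one or more indicators for the outcome
    performance_target: str      # 4. target level of performance
    data_collection: str         # 5. details of data collection
    use_of_results: str          # 6. how the results will be used

# Invented example entry for illustration
entry = OutcomeFrameworkEntry(
    target_group="Long-term unemployed adults",
    desired_outcome="Participants gain stable employment",
    indicators=["% in paid work 6 months after the programme"],
    performance_target="At least 40% in paid work within 6 months",
    data_collection="Follow-up survey 6 months after completion",
    use_of_results="Annual review of programme funding",
)
print(entry.desired_outcome)
```

Writing each outcome out in this form makes gaps obvious: an outcome with no indicator, or an indicator with no planned use for its results, shows up immediately.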
Economic evaluation aims to clarify, quantify and value all the relevant options, their inputs and their consequences. For a cost-benefit analysis you need to express both inputs and outcomes in monetary terms so they can be compared. Cost-effectiveness analysis instead uses a common non-monetary measure of output (e.g. lives saved), which leads to a cost-per-unit figure.
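A minimal sketch of the cost-effectiveness arithmetic, using invented programmes and figures purely for illustration:

```python
# Cost-effectiveness: compare options by cost per unit of a common
# outcome measure (here "lives saved"). All figures are invented.
options = {
    "Screening programme": {"cost": 2_000_000, "lives_saved": 80},
    "Vaccination drive":   {"cost": 1_500_000, "lives_saved": 75},
}

for name, o in options.items():
    cost_per_life = o["cost"] / o["lives_saved"]
    print(f"{name}: {cost_per_life:,.0f} per life saved")
# Screening programme: 25,000 per life saved
# Vaccination drive: 20,000 per life saved
```

Note that the cheaper cost-per-unit option (the vaccination drive here) is not automatically the right choice: it also saves fewer lives in total, which is exactly the kind of trade-off the evaluation should surface.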
When doing your evaluations, make sure your data is unbiased and that you have the co-operation of the staff. There are several influences on an evaluation, though:
1\. Evaluator characteristics
2\. User characteristics
3\. Contextual characteristics
4\. Evaluation characteristics

