Monitoring and evaluation
(Originally published on the OUBS Blog)
This post is about the second part of the decision-making loop: monitoring and evaluating the performance of those decisions.
Monitoring tells you what is happening. It is clearly part of the process of evaluation, but it is not the whole picture: evaluation seeks to assess how well the chosen path was executed.
If you need to control something, always put the control on the task, not on the people.
When it comes to monitoring and objectives, you should avoid standards that are open to interpretation and instead make them SMART (specific, measurable, achievable, relevant and time-bound); for example, "respond to all customer enquiries within two working days" rather than "respond promptly". Ideally, they should also be meaningful, clear, fair, adjustable and honoured.
Monitoring is often associated with gathering information for control, which can be done through:
Involvement and observation
Regular reporting
Exception reporting
Questioning and discussion
Records and routine statistics
Next, the information you have gathered needs to be interpreted and acted upon. Understand why something happened, and then either revise the standard or target, modify the activities, or continue as is.
If you have control, you need to exercise it; this is what we call evaluation. Evaluation is about finding out whether you are achieving what you set out to achieve, and where and how things could be improved. You should continuously look for ways to improve, never settling for the status quo.
The process of evaluation is iterative, meaning you go through the following steps repeatedly:
1. Asking questions (-> monitoring)
2. Answering questions (-> review)
3. Drawing conclusions (-> assessment)
4. Making the necessary changes
Evaluation will highlight any differences in goals, values or objectives that exist in an organisation and force them to be addressed.
There are different kinds of evaluation:
Performance evaluation (are targets being met?)
Process evaluation (how are we working?)
Impact evaluation (are the desired outcomes being achieved?)
Strategic evaluation (are we doing the right thing?)
Composite evaluation
How do we design a formal evaluation? The following process is taken from McCollam and White (1993):
1. Define the project aims
2. Define the purposes of the evaluation
3. Determine the focus and audience
4. Specify the timescale
5. Describe the work of the project (possibly go back to step 2)
6. Choose an evaluator
7. Select methods and collect information
8. Analyse and write up the results
9. Use the results internally (possibly go back to step 1)
10. Disseminate them externally
The type of evaluation depends on its focus, and the balance between quantitative information (measurable; the results speak for themselves) and qualitative information (the reasons behind the numbers) is important.
Then you need to analyse and report the results. Examine your data for evidence relating to the achievement of your objectives, for patterns in that evidence, and for unexpected results. Remember that analysis takes time; write for the people who will actually read the report, and make it look good.
Finally, make use of the results and disseminate the findings. If the evaluation calls for changes, you will need to plan their implementation.
There are some issues to consider. Conflict can arise from discovered differences in expectations between groups: uncovering slack is never popular, and neither is a manager who makes savings, so finding inefficiencies can cause conflict too. Remember the 80/20 rule: roughly 20% of the activities you do each day take up 80% of the time.
Over-commitment is another important issue, arising when activities are taken on without the necessary resources. It might be better to close down an entire unit than to cut everyone's expenses.
Collusion means that people present a misleading picture of things important to your enquiry if they feel threatened.
There might also be resistance to the findings. You therefore need to make stakeholders feel involved and consulted from the start. Those intimately involved need to be able to reflect on the findings, and you need to consider long-term changes.

