Unit 1: Evaluation

3.1.2 What Is Evaluation?


Evaluation is the determination of the value of a thing. It is the formal determination of the
quality, value or effectiveness of a programme, project or process, and is primarily concerned
with measuring the impact of inputs on the quality of people's lives.
Educational Programmes have intended outcomes. They have plans that are being followed in
order to achieve these outcomes. These plans consist of a range of components working together
to ensure their successful implementation. It is by monitoring plans and evaluating their outcome
once completed that educators seek to ensure that they are being accountable to their stake-
holders (parents, government, students and society), true to their intentions, and that they
themselves will learn from past experience of the programme for further work they might do.
3.1.3 Evaluation for What?
Evaluation of the educational Organization and programme is one of the most difficult and most
important phases of educational administration. Evaluations are constantly being sought by
various individuals or groups. The parents and members of the public want to know how good
their schools are. The government must make continual judgments regarding the schools as such
judgments are basic for the establishment or review of the various policies adopted by the
government. The administrators are not in a position to make recommendations on desirable
developments in the school system unless the results of evaluations are available to them.
Teachers also are interested in evaluation, so that they may have some knowledge of the results
of their efforts. Their morale is closely related to their understanding of how smoothly the
school system of which they are a part is running. In view of the foregoing, you can see that
evaluation is indispensable for measuring the effectiveness and efficiency of the school
system.
2. Briefly explain the need for evaluation.
Answer: Evaluation is relevant to educational management. It is the last phase of the
management function and it helps to measure the effectiveness and efficiency of the school system.

3.3 The Nature of Evaluation


Monitoring is a prerequisite for successful project evaluation. Monitoring and evaluation are two
activities which support each other and enable stakeholders to make informed decisions about a
project's future. Essentially, evaluation is concerned with the worth and value of a
project or programme. However, such judgments are made in the context of programme
operations. For example, if a State Government in the country decides to supply free lunch to
students in its Day Secondary Schools, we may wish to know whether the students learn more
or become better nourished. Thus, evaluation is concerned with the "so what" of inputs, that is, the
long - term changes that a particular project helps bring about in the behaviours and conditions of
those whom it touches.
From the data generated through monitoring and evaluation, one may decide to do either
of the following:
1. Discontinue the project if it is beset with basic faults that cannot be easily solved;
2. Revise the project's design; or
3. Continue the project with no changes.
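Purely as an illustrative sketch (the document itself prescribes no code, and the function and its arguments here are hypothetical), the three options above can be expressed as a simple decision rule:

```python
def project_decision(has_unfixable_faults: bool, needs_design_changes: bool) -> str:
    """Map monitoring and evaluation findings to one of the three options above."""
    if has_unfixable_faults:
        # Option 1: basic faults that cannot be easily solved.
        return "discontinue the project"
    if needs_design_changes:
        # Option 2: the project is viable but its design needs revision.
        return "revise the project's design"
    # Option 3: the project is on track.
    return "continue the project with no changes"

# A project with fixable design problems would be revised rather than stopped.
print(project_decision(has_unfixable_faults=False, needs_design_changes=True))
```

The point of the sketch is simply that the decision follows from the data generated through monitoring and evaluation, not from opinion.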
You should note that monitoring and evaluation are continuous activities. They occur throughout
the life of a programme.
3.4 Importance of Evaluation
The significance of evaluation in educational management lies in the fact that evaluation is the
springboard on which the future development of education and the entire school system repose.
Parents, students, members of the public, teachers, government and administrators have their
views and judgments with respect to the strengths and limitations of given schools or school
systems. Educational administrators recognize that evaluation is part and parcel of their
function; however, they are often confronted by issues of the validity and credibility of the data
collected, as some of these may be inadequate. While the task of evaluation is difficult and
complex, that is no sufficient reason to fail to recognize its importance in the school system. If a
problem arises in any of the administrator's numerous other tasks, carrying out an evaluation of
the problem area would assist him in no small measure in deciding how to go about solving the
problem.
3.5 Purposes Of Evaluation
Evaluation is carried out for a variety of purposes. Some of these are listed below:
1. To secure the basis for making judgments at the end of a period of operation; for example, at
the end of a school term, school year or even a week of school term.
2. To ensure continuous, effective and improved programme operation.
3. To diagnose difficulties and avoid destructive problems.
4. To improve the ability of staff and members of the public to develop the educational system.
5. To test new approaches to problems and to conduct pilot studies through which
advancement and progress can be effected. Essentially, the management of schools involves
evaluation for the following educational objectives:
a) To evaluate instructional programmes
b) To assess students' progress
c) To facilitate students' progress
d) To understand the individual student
e) To facilitate self-understanding by students
f) To contribute to a knowledge of students' abilities
g) To assist in administrative judgment
Let us take a brief look at each one of these.
a) To evaluate instructional programmes. The evaluation of instructional programmes is
necessary for both the teacher and the learners in order to determine the causes of a poor
learning situation. It could be that the objectives are not realistic; methods of teaching may be
ineffective; examination tests may be too hard or inadequate; or specific characteristics of the
students may have resulted in poor performance.
b) To assess students' progress. A student needs to know when he is making progress in his
learning and when he is not, in order to help him improve.
c) To facilitate students' progress. In daily, weekly and long-term learning tasks, the teacher
should ascertain how well the student is learning and, on this basis, award him a grade or a
rating.
d) To understand the individual student. Various interest inventories and academic aptitude
tests should be used to facilitate the evaluation of the student's abilities in the cognitive,
affective and psychomotor domains.
e) To facilitate self-understanding by students. The impact of school on the student is
crucial for his later life. By the time students finish secondary school, they are expected to set
realistic goals and evaluate their progress towards these goals. This depends, however, on the
teacher and student collecting information about ability, and on the teacher's task of
interpreting such information to the student, if he is to achieve self-understanding.
f) To contribute to a knowledge of students' abilities. Improvement in the teaching-learning
process can be better induced through an increased knowledge of students' abilities and of
instruction.
g) To assist in administrative judgment. We need to know which students should be retained
in a particular class, which should be promoted, and which should be given accelerated
promotion. In addition, we need to know the student's mental state of fitness.
Exercise 1.2
Pause and think for a little while about what our school system would be like if there were no
evaluation. Your reflection may include ideas such as these: the school system may not run
smoothly, as there would be no way of knowing the areas of strength and weakness in the
system. We may therefore make very little or even no progress at all, as we would be unable to
measure the achievement of our predetermined goals. There would be no checks and balances
on the way members of the school behave and perform their tasks.
3.6 Types of Evaluation.
Evaluation uses inquiry and judgment methods including:
a) Determining standards for judging quality and deciding whether those standards should be
relative or absolute;
b) Collecting relevant information;
c) Applying the standards to determine quality.
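As a purely illustrative sketch (the document contains no code; the names, scores and pass mark below are all hypothetical), the three inquiry and judgment steps above can be expressed as a short program: determine a standard, collect relevant information, and apply the standard to determine quality.

```python
# Hypothetical sketch of the three inquiry and judgment steps above.

# (a) Determine a standard; here an absolute standard (a fixed pass mark).
ABSOLUTE_PASS_MARK = 50

def collect_information():
    # (b) Collect relevant information; in practice this would come from
    # tests, records or observations rather than a hard-coded dictionary.
    return {"Ada": 72, "Bayo": 48, "Chika": 55}

def apply_standard(scores, pass_mark=ABSOLUTE_PASS_MARK):
    # (c) Apply the standard to each piece of information to judge quality.
    return {name: ("satisfactory" if score >= pass_mark else "needs support")
            for name, score in scores.items()}

print(apply_standard(collect_information()))
```

A relative standard would instead compare each score against a reference group, for example the class average, rather than against a fixed threshold.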
There are four dimensions to evaluation: the formative and the summative, the internal and the
external dimensions.
i) Formative evaluation is conducted during the operation of a programme to provide the
programme managers with evaluative information that is useful in improving the programme.
For example, if we are developing a curriculum package, formative evaluation would involve
inspection of the curriculum content by subject experts, pilot tests with a small number of
students, field tests with larger numbers of students and teachers in several schools, and so on.
Each stage would result in immediate feedback to the developers, who would use the
information to make the necessary revisions.
ii) Summative evaluation is conducted at the end of a programme to provide potential
consumers with judgments about the programme's worth or merit. For example, after the
curriculum package is completely developed, a summative evaluation might be conducted to
determine how effective the package is with a national sample of typical schools, teachers, and
students at the level for which it was developed. The findings of the summative evaluation would
then be made available to consumers.
You would note that the audiences and uses for these two evaluation roles are very different. In
formative evaluation, the audience is programme personnel, that is, in our example they are those
responsible for developing the curriculum. Summative evaluation audiences include potential
consumers such as students, teachers, and other professionals; funding agents such as taxpayers;
and supervisors and other officials, as well as programme personnel. Formative evaluation leads
to decisions about programme development, including modification, revision and the like.
Summative evaluation leads to decisions concerning programme continuation, termination,
expansion, adoption and so on. You should be aware that both formative and summative
evaluation are essential because decisions are needed during the initial, developmental stages of
a programme so as to improve and strengthen it, and again, when it has stabilized, to judge its
final worth or determine its future. Unfortunately, many educators conduct only summative
evaluation. This is unfortunate because the development process, without formative evaluation,
is incomplete and inefficient. Try to imagine a situation in which a new aircraft design was
developed and submitted to a summative test flight without first testing it in the "formative"
wind tunnel. Educational test flights can be expensive too, especially when we do not have a clue
about the probability of success.
Evaluation may also be classified as either internal or external. An internal evaluation is one
conducted by the programme employees, and an external evaluation is one conducted by
outsiders. An experimental remedial programme in a secondary school may be evaluated by a
member of the school staff (internal evaluation) or by a team of inspectors from the school's
Zonal Education Office (external evaluation). These two types of evaluation have advantages
and disadvantages, some of which are listed below:
1. The internal evaluator surely knows more about the programme than an outsider; however,
this closeness to the programme may prevent her from being completely objective in her
judgment of the programme.
2. It is difficult for an external evaluator to learn as much about the programme as the insider
knows.
3. Sometimes an internal evaluator may dwell on unimportant details about the programme
while overlooking several critical factors.
4. The internal evaluator may be familiar with important contextual information that could
temper the evaluation recommendations.
4.0 Conclusion
In this unit, you have learnt a number of basic and important issues relating to the concepts of
monitoring and evaluation in educational management. You now know the reasons for
evaluation. The nature and importance of monitoring and evaluation have been highlighted. In
addition, we have pointed out that educational administrators need to evaluate their school
programmes regularly and continuously. The purposes of evaluation were also described, and
the four main types of evaluation were outlined.
Functions of Evaluation
It is through evaluation that we learn to what extent the goals of education are being achieved. It
enables us to review the progress of education and to devise new measures for its improvement
and development. Four main functions of evaluation are described here.
3.2.1 Diagnosis
You can use evaluation to discover or locate weaknesses in your students as to what they do not
know. Diagnostic testing will enable you to decide whether some of your students need remedial
courses or not. Pre-tests given at the beginning of a class are good for determining what the
students already know and what they do not know. For example, at the beginning of an English
lesson, you may ask for the meanings of some words to find out if your students have come
across those words. You may then have to explain the meanings of such words even before the
passage is read. This is to facilitate the reading exercise. This is a form of diagnostic evaluation
you have got some information by which you have judged the knowledge of the students and
finally you have taken action to remedy the situation.
3.2.2 Prediction
Sometimes we give tests to identify the aptitudes and abilities of the students. This sort of test is
varied so that different types of abilities are catered for. From such a test you can predict which
students are creative, technically inclined or arts oriented, and as a teacher you can give them
exercises that will help develop each individual's interests. The National Examination given in
Nigeria to select gifted children is a good example of this.
3.2.3 Selection
Through evaluation we learn where additional and better resources (human, material and
financial) are required. Thus evaluation is used to identify suitable persons for particular
courses, jobs, entitlements and so on.
3.2.4 Grading
Evaluation whereby students are ranked and graded in order of performance is commonly used
in schools. Grading of schools in terms of examination results and other performance criteria
provides parents and the public with a measure for choosing which school to send their
children to.
3.3 Evaluation and Target Setting
In target setting, you need to have a specific objective (or target) you want to accomplish, a plan
as to how you will achieve that target and then evaluation procedures to indicate whether it has
been achieved.
For example, you may have some under-qualified teachers in your school, who can adversely
affect the quality of education. You may decide that you need to enable them to obtain training
through upgrading. You will need to set a time limit for this upgrading programme and also
decide what method of upgrading will be immediately useful. After setting the time target for the
upgrading, you will need to plan your approach. As teachers on the job, their upgrading
programme has to be an in-service course. You then need to decide on how many of them should
go for Sandwich programmes held in universities during the holidays. The final step in the
process is to decide on criteria for evaluating whether the objective has been achieved - and to
ensure that the results of any evaluation are utilized to plan the next development.
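The target-setting steps above (a specific objective, a time limit, and evaluation criteria) could be sketched as a small data structure. This is an illustrative sketch only; the counts and the deadline below are hypothetical, not taken from the text.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Target:
    objective: str                 # the specific objective to accomplish
    deadline_year: int             # the time limit set for the programme
    achieved: Callable[[], bool]   # the evaluation procedure for this target

# Hypothetical example: 7 of 10 under-qualified teachers upgraded so far.
teachers_total = 10
teachers_upgraded = 7

target = Target(
    objective="All under-qualified teachers complete an in-service upgrading course",
    deadline_year=2026,
    achieved=lambda: teachers_upgraded >= teachers_total,
)

print("Target achieved:", target.achieved())
```

Attaching the evaluation procedure to the target itself mirrors the text's point that the criteria for judging success should be decided as part of the plan, not after the fact.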
Exercise 1.1 Can you provide examples of ways in which the neglect of evaluation reports has
hindered the development of effective teaching and learning in your school?
You may be able to cite examples of evaluation projects undertaken with particular goals, for
example improving the quantity and quality of food in your school, repairing damaged furniture,
improving discipline and so on. When actions are not taken as recommended in evaluation
reports, there is unlikely to be any improvement, and the quality of teaching might be
affected.
3.4 Evaluation and Factors of Effective School Management
Earlier, in Unit 1, you learnt that monitoring and evaluation are important school management
functions, necessary for ensuring effective and efficient schools. A school is considered
effective if the following elements are present in it.
3.4.1 A Well-Organized School
A school is established to ensure that teaching and learning take place. A school that is able to
discharge its daily routines effectively is providing value for money. For this to happen, the
school should be well planned so that learning can take place in a conducive atmosphere. A
democratic management style is required, in which planning is done in advance of events. An
efficient school should have good communication channels to enhance effective administration.
There must be discipline in the school. Specific duties are attached to each office in the school,
and the failure of one officer will affect the effective administration of the school.
3.4.2 School Curriculum
At the beginning of a new academic year, the school head should collect the school calendar
from the Zonal Education Office and must ensure that all school activities are in line with the
calendar. Before the school resumes, he must see that textbooks, stationery, furniture, games
equipment, and library books are ready. The syllabuses of all the classes must be available, and
with the assistance of the vice-principals, teachers are helped to prepare schemes of work.
The school head and his assistants should ensure that lesson notes are prepared daily and that
teachers' teaching does not deviate from the lesson notes. Assignments, tests and examinations
should be marked and recorded promptly and corrections done where necessary.
3.4.3 Parents-Teachers Association (PTA)
The school head should ensure that a functional PTA exists in his school, and that its meetings
are held regularly, at least once a term. It is essential that there is a cordial relationship between
parents and teachers for effective administration of the school. If parents are properly
approached, they can do a lot to help ease some of the financial problems in the school. In this
country, PTAs have assisted schools with the building of classrooms, the provision of school
buses, the purchase of laboratory equipment, and so on.
3.4.4 Staff Meetings
There may be meetings of the entire school staff, of departments, and of special committees.
These should be held regularly to review the running of the school. School
heads should use a democratic approach by listening to their staff and understanding their
personal and professional problems. Participative decision making would also enhance the
performance and productivity of the staff.
3.4.5 School Records
The school head should ensure that complete and accurate records on students and staff, stock
ledgers and registers, and other vital records which provide a full picture of the school's life are
kept.
3.4.6 School Accounts
The school head should keep proper accounts of income and expenditure, and bills and receipts
must be accompanied by vouchers. These are required for the auditing of the school accounts,
as well as for promoting the principles and practice of accountability and evaluation in the
school.
3.4.7 Inspection Reports
A record of all inspection reports must be kept by the school head, and he must discuss these
with the staff so that recommendations on how the system can be improved and developed may
be implemented.
It is important to point out that the data contained in an inspection report should provide
information for the school head to use as a means of effecting changes in the school. If this is
done, then monitoring and evaluation become major tools for effective management.
4.0 Conclusion
In this unit you have learned that school heads are accountable for whatever goes on in their
schools. They are answerable to the government, which has expended a lot of its resources on
education; to parents, whose hopes for the future rest on the students; and to members of the
public, who as taxpayers contribute substantially to education. It is through regular evaluation
that they can make the school system profitable.
Evaluation is a rigorous and independent assessment of either completed or ongoing activities to
determine the extent to which they are achieving stated objectives and contributing to decision
making. Evaluations, like monitoring, can apply to many things, including an activity, project,
programme, strategy, policy, topic, theme, sector or organization. The key distinction between
the two is that evaluations are done independently to provide managers and staff with an
objective assessment of whether or not they are on track. They are also more rigorous in their
procedures, design and methodology, and generally involve more extensive analysis. However,
the aims of both monitoring and evaluation are very similar: to provide information that can help
inform decisions, improve performance and achieve planned results.

The distinction between monitoring and evaluation and other oversight activities
Like monitoring and evaluation, inspection, audit, review and research functions are oversight
activities, but they each have a distinct focus and role and should not be confused with
monitoring and evaluation.
Inspection is a general examination of an organizational unit, issue or practice to ascertain the
extent it adheres to normative standards, good practices or other criteria and to make
recommendations for improvement or corrective action. It is often performed when there is a
perceived risk of non-compliance.
Audit is an assessment of the adequacy of management controls to ensure the economical and
efficient use of resources; the safeguarding of assets; the reliability of financial and other
information; the compliance with regulations, rules and established policies; the effectiveness of
risk management; and the adequacy of organizational structures, systems and processes.
Evaluation is more closely linked to MfDR and learning, while audit focuses on compliance.
Reviews, such as rapid assessments and peer reviews, are distinct from evaluation and more
closely associated with monitoring. They are periodic or ad hoc, often light assessments of the
performance of an initiative and do not apply the due process of evaluation or rigor in
methodology. Reviews tend to emphasize operational issues. Unlike evaluations conducted by
independent evaluators, reviews are often conducted by those internal to the subject or the
commissioning organization.
Research is a systematic examination completed to develop or contribute to knowledge of a
particular topic. Research can often feed information into evaluations and other assessments but
does not normally inform decision making on its own.
Evaluations of development assistance typically examine areas such as:
- Effectiveness of development assistance initiatives, including partnership strategies
- Contribution and worth of this assistance to national development outcomes and priorities,
including the material conditions of programme countries, and how this assistance visibly
improves the prospects of people and their communities
- Key drivers or factors enabling successful, sustained and scaled-up development initiatives,
alternative options, and the comparative advantages of UNDP
- Efficiency of development assistance, partnerships and coordination to limit transaction costs
- Risk factors and risk management strategies to ensure success and effective partnership
- Level of national ownership and measures to enhance national capacity for sustainability of
results
While monitoring provides real-time information required by management, evaluation provides more
in-depth assessment. The monitoring process can generate questions to be answered by
evaluation. Also, evaluation draws heavily on data generated through monitoring during the
programme and project cycle, including, for example, baseline data, information on the
programme or project implementation process and measurements of results.
MEANING OF MONITORING AND EVALUATION:
What do you understand by the terms: monitoring and evaluation?
1. You would have observed that monitoring and evaluation are twin terms that go hand
in hand. Monitoring means checking on a person or thing to ensure that he is doing the right
thing at the right time. It entails informing a person in respect of his duty. Evaluation is the
determination of the value or worth of a thing or programme.
3.1.1 What is Monitoring?
In simple terms, monitoring refers to watching or checking on a person, things or objects in order
to warn or admonish. It entails warning about faults or informing one in respect of his duty.
Monitoring could also mean giving advice and instruction by way of reproof or caution. It can be
said to mean keeping order in a particular situation.
Monitoring can be defined as collecting information at regular intervals about ongoing projects
or programmes within the school system, concerning the nature and level of their performance.
Regular monitoring provides a basis for judging the impact of the inputs that have been fed into
the system.
Monitoring is also an ongoing process. The lessons from monitoring are discussed periodically
and used to inform actions and decisions. Evaluations should be done for programme
improvement while the programme is still ongoing, and also to inform the planning of new
programmes. This ongoing process of doing, learning and improving is what is referred to as
the RBM life-cycle approach.
3.2 The Nature and Importance of Monitoring
Monitoring is concerned with whether a project or programme is implemented in a manner that
is consistent with its design. In other words, in monitoring we are interested in determining if the
inputs were delivered at the times and in the quantities envisaged by the plan; if activities
occurred qualitatively and quantitatively in the manner prescribed by the plan; if resources were
expended at the times and levels outlined in the plan; and, if the individuals and communities
targeted by the plan were the ones who were actually served by the project.
Monitoring is important for many reasons, some of which are described here:
1. It enables us to describe the programme we will subsequently evaluate. If we do not know the
degree to which it is implemented, it is difficult to arrive at conclusions about the adequacy of
that programme.
2. It is a powerful tool for programme managers who wish to determine the specific "nuts and
bolts" they must address in order to improve a project's impact.
3. It is an essential element of accountability to counterparts, employers and colleagues.
Monitoring can be defined as the ongoing process by which stakeholders obtain regular
feedback on the progress being made towards achieving their goals and objectives. Contrary to
many definitions that treat monitoring as merely reviewing progress made in implementing
actions or activities, the definition used in this Handbook focuses on reviewing progress against
achieving goals. In other words, monitoring in this Handbook is not only concerned with asking
“Are we taking the actions we said we would take?” but also “Are we making progress on
achieving the results that we said we wanted to achieve?” The difference between these two
approaches is extremely important. In the more limited approach, monitoring may focus on
tracking projects and the use of the agency’s resources. In the broader approach, monitoring also
involves tracking strategies and actions being taken by partners and non-partners, and figuring
out what new strategies and actions need to be taken to ensure progress towards the most
important results.
parameter (noun, usually plural): a set of facts which describes and puts limits on how
something should happen or be done. Example: "The report defines the parameters of best
practice at a strategic, operational, and process level within an organization."
Throughout our review of the effectiveness of different habitat rehabilitation
techniques we have emphasized the need for better monitoring and evaluation.
Our understanding of the effectiveness of different habitat rehabilitation techniques is limited
because monitoring has often not been adequately replicated spatially (i.e.,
number of sites) or temporally (i.e., too short) and has often been designed as an
afterthought. Designing appropriate monitoring and evaluation programmes for
stream rehabilitation will differ by project type as well as by region,
geomorphology, scale, and a host of other factors. However, there are several
basic steps that must be taken to design an effective monitoring and evaluation
programme that will allow us to learn more about rehabilitation techniques. In
an effort to provide clear guidance on monitoring and evaluation, we provide a
brief overview of steps and considerations for developing a rigorous monitoring
and evaluation programme for single or multiple projects and at fine (habitat or
reach) or coarse (watershed) scales, and provide key references where more
detailed information can be obtained. We draw heavily from Roni (2005) and
refer the reader to this reference for an in-depth treatment of the concepts we
discuss in this section.

A well-designed monitoring and evaluation programme is a critical component of
any resource management, conservation, or rehabilitation activity. It can also
help reduce the cost and increase the benefits of future rehabilitation in part by
minimizing failures (Lewandowski et al., 2002). The development of a monitoring
programme is best done as an integral part of the design phase of rehabilitation.
Many previous studies have been of limited usefulness because they were not
designed and implemented as part of the initial rehabilitation project. The
objectives of individual rehabilitation programmes and projects vary, as do the
objectives of monitoring programmes. Numerous decisions that need to be made
in designing a monitoring programme are often interrelated with those that need
to be made in developing a rehabilitation project. Thus the two should occur
concurrently, well before construction of the project occurs. That is not to say that
retrospective studies of past rehabilitation activities are without utility, but rather that
most questions or hypotheses will require collection of data before and after
rehabilitation.

TABLE 21
Definitions of monitoring types (adapted from MacDonald et al., 1991 and
Roni, 2005) and examples of what might be monitored for a wood
placement project targeting fish. Effectiveness and validation monitoring
are typically the types used to evaluate habitat rehabilitation actions.

Monitoring type (other names) | Description (hypotheses) | Examples
Baseline | Characterizes the existing biota, chemical, or physical conditions for planning or future comparisons | Fish presence, absence, or distribution
Status | Characterizes the condition (spatial variability) of physical or biological attributes across a given area | Abundance of fish at time x in a watershed
Trend | Determines changes in biota or conditions over time | Spawner surveys and temporal trends in abundance
Implementation (administrative, compliance) | Determines whether the project was implemented as planned | Did the contractor place the number and size of logs described in the plan?
Effectiveness | Determines whether actions had desired effects on watershed, physical processes, or habitat | Did pool area increase?
Validation (research, sometimes considered part of effectiveness) | Evaluates whether the hypothesized cause-and-effect relationship between the rehabilitation action and the response (physical or biological) was correct | Did the change in pool area lead to the desired change in fish or biota abundance?
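The monitoring types in Table 21 can be summarized compactly; the following is an illustrative sketch only (Python is used here purely for compactness, and the short descriptions paraphrase the table):

```python
from enum import Enum

class MonitoringType(Enum):
    # Descriptions summarized from Table 21 (MacDonald et al., 1991; Roni, 2005).
    BASELINE = "existing conditions for planning or future comparisons"
    STATUS = "spatial variability of attributes across a given area"
    TREND = "changes in biota or conditions over time"
    IMPLEMENTATION = "whether the project was implemented as planned"
    EFFECTIVENESS = "whether actions had the desired effects on habitat"
    VALIDATION = "whether the hypothesized cause-and-effect relationship was correct"

# Per the table caption, these are the types typically used to
# evaluate habitat rehabilitation actions.
EVALUATION_TYPES = {MonitoringType.EFFECTIVENESS, MonitoringType.VALIDATION}

def evaluates_rehabilitation(mtype: MonitoringType) -> bool:
    return mtype in EVALUATION_TYPES
```

The distinction matters in practice: implementation monitoring can be a simple yes-no checklist, whereas the two evaluation types require longer-term, more complex sampling.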

5.1 DEFINITION OF MONITORING AND EVALUATION

Before discussing steps for monitoring and evaluation, it is important that we
define what we mean by monitoring, as there are several types. As with the
terminology of restoration, there is much confusion about monitoring terminology
or types. Monitoring is technically defined as systematically checking or
scrutinizing something for the purpose of collecting specified categories of data.
In ecology it generally refers to sampling something in an effort to detect a
change in a physical, chemical, or biological parameter. The common types of
monitoring used to examine changes in aquatic habitat and biota include:
baseline, trend, implementation, effectiveness, and validation monitoring (Table
21; MacDonald et al., 1991; Roni, 2005). Determining whether a rehabilitation
project was implemented correctly (implementation or compliance monitoring) is
an important part of understanding why it may or may not have achieved goals
and objectives. Implementation monitoring is relatively straightforward, involves
quality assurance and project construction management, and may be as simple
as a yes-no checklist (Kershner, 1997). Effectiveness and validation monitoring,
which typically focus on determining whether an action had the desired physical
and biological effects (Table 21), are often much more complex, more difficult,
and longer term than implementation monitoring. They are also the type of
monitoring we use to evaluate rehabilitation actions and the focus of our
discussion. Other types of monitoring (status and trend) may also help plan and
inform evaluation of rehabilitation actions.

5.2 STEPS FOR DEVELOPING MONITORING PROGRAMMES

Regardless of the type, number, and scale of aquatic rehabilitation actions, there
are several logical steps that should be taken when designing any monitoring
and evaluation programme. These include establishing project goals and
objectives, defining clear hypotheses, selecting the monitoring design, selecting
monitoring parameters, spatial and temporal replication, selecting a sampling
scheme for collecting parameters, implementing the programme, and finally,
analyzing and communicating results (Figure 16). Many of these steps are
interrelated and some steps could occur simultaneously or in a different order
than presented here. For example, monitoring design depends on hypotheses
and spatial scale, just as the number of sites or years to monitor depends in part
on the parameters selected. The first steps are critical for designing an effective
monitoring and evaluation programme and we focus our discussion on these.

Determining the objectives of the project and defining key questions and
hypotheses are the critical first steps in developing a monitoring programme.
Defining the key questions will depend on the overall project objectives.
Evaluation of rehabilitation actions can be broken down into four major questions
based on scale (e.g. site, reach, watershed) and desired level of inference
(number of projects). These include evaluations of single or multiple reach-level
projects and single watershed or multiple watershed-level projects (Table 22).
For example, if one is interested in whether an individual rehabilitation action
affects local conditions or abundance (reach scale), the key question would be:
What is the effect of rehabilitation project x on local physical and biological
conditions? In contrast, if one is interested in whether a suite of different project
types has a cumulative effect at the watershed scale, then the key question
would be: What is the cumulative effect of all rehabilitation actions within the
watershed on physical habitat and populations of fish or other biota? While some
actions such as riparian plantings or instream wood placement can cover multiple
adjacent reaches or occur in patches throughout a geomorphically distinct reach,
the initial question is still whether one is interested in examining local (site or
reach scale) or watershed-level effects on physical habitat and biota.

Determining the scale of influence for physical habitat responses requires
distinguishing between habitat unit, reach, and watershed-scale effects (Frissell
and Ralph, 1998; Roni et al., 2003). However, for fishes and other mobile
organisms, determining the appropriate scale requires differentiating between
changes in local abundance and changes in population parameters at a
watershed or larger scale. Most research on habitat and biota, both for
rehabilitation and other ecological studies, has focused on reach scale or
individual habitat units. This information is important, but uncertainty about
movement, survival, and population dynamics of biota prevent these reach-scale
studies from addressing watershed or population-level questions. Studies
designed to assess watershed or population-level effects can provide valuable
information but also face multiple challenges (e.g. upstream-downstream trends,
sampling logistics; Conquest 2000; Downes et al., 2002).

TABLE 22
Overarching hypotheses for monitoring aquatic rehabilitation divided by
scale and number of projects of interest (from Roni, 2005). Most
appropriate study designs are listed in parentheses. BA = before-after
study design, BACI = before-after control-impact, and EPT = extensive
post-treatment design. Extensive design refers to a design that is spatially
replicated (many study sites, reaches, or watersheds).

Single project, reach/local scale: Does a single project affect habitat
conditions or biota abundance? (BA or BACI)

Single project, watershed/population scale: Does an individual project affect
watershed conditions or biota populations? (BA or BACI)

Multiple projects, reach/local scale: Do projects of this type affect local
habitat conditions or biota abundance? (EPT or replicated BA or BACI)

Multiple projects, watershed/population scale: A. What are the effects of a
suite of different projects on watershed conditions or biota populations?
(BA or BACI) B. What is the effect of projects of type x on watershed
conditions or biota populations? (BA or BACI)

FIGURE 16
Key steps for developing a monitoring and evaluation programme for rehabilitation
actions. Modified from Roni (2005)
From the key questions and specific hypotheses will flow the other important
decisions including appropriate monitoring design, duration and scale of
monitoring, sampling protocols, etc. The most difficult part and the biggest
shortcoming of many rehabilitation evaluation programmes is the study design.
As noted in our review of riparian rehabilitation, the lack of preproject data,
adequate treatments and controls, and reference sites, as well as various
management factors, have limited the ability of many studies to determine the
effects of rehabilitation actions. There are many potential study designs for
monitoring single or multiple
rehabilitation actions. None is ideal for all situations and each has its own
strengths and weaknesses. Hicks et al. (1991) distilled these possibilities down
to a handful of experimental designs based on whether data are collected before
and after treatment (before-after, or post-treatment designs) and whether they
are spatially replicated or involved single or multiple sites (intensive or
extensive). They also described the pros and cons of each approach (Table 23).
Many variations of these basic study designs have been used or proposed in
monitoring of land use, pollution, and habitat alterations (e.g. Johnson and
Heifetz, 1985; Walters et al.,1988; Bryant, 1995) and can easily be modified for
use in evaluating rehabilitation actions. However, most of these modifications can
be classified as either before-after or post-treatment study designs. The former
include collection of data before and after implementation of the rehabilitation
project, often with a control reach or watershed (before-after control-impact or
BACI design), while the latter are retrospective studies implemented after
rehabilitation that rely on comparing treated areas to suitable control (same but
no treatment) or reference (ideal or natural conditions) areas. The many
strengths and weaknesses of different designs are thoroughly reviewed in
Hicks et al. (1991), Downes et al. (2002), and Roni (2005) (Table 23). No one
design is correct for all situations; the key questions and hypotheses will help
determine the most appropriate design.
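The core logic of a BACI evaluation can be sketched as a difference-in-differences calculation: the change observed at the treated (impact) site minus the change observed at the untreated control site, which removes year-to-year effects (e.g. climate) common to both. The following is a minimal illustration with hypothetical pool-area values; a real analysis would add replication and a formal statistical test such as a mixed model with a period-by-treatment interaction term.

```python
# Minimal BACI (before-after control-impact) sketch for a wood placement
# project. The pool-area values (m^2) are hypothetical annual surveys,
# not data from any real study.

def mean(xs):
    return sum(xs) / len(xs)

def baci_effect(impact_before, impact_after, control_before, control_after):
    """Difference-in-differences estimate of the treatment effect:
    change at the impact reach minus change at the control reach."""
    return (mean(impact_after) - mean(impact_before)) - \
           (mean(control_after) - mean(control_before))

# Hypothetical surveys: pool area rose at the treated reach but also
# drifted upward at the control (e.g. a wet year); BACI removes that
# shared background trend from the effect estimate.
impact_before = [120, 130, 125]   # pre-project, treated reach
impact_after = [180, 175, 185]    # post-project, treated reach
control_before = [110, 115, 112]  # pre-project, control reach
control_after = [125, 120, 130]   # post-project, control reach

effect = baci_effect(impact_before, impact_after,
                     control_before, control_after)
print(f"Estimated treatment effect on pool area: {effect:.1f} m^2")
```

A simple before-after comparison at the impact reach alone would attribute the full 55 m^2 change to the project; subtracting the control reach's change yields a smaller, more defensible estimate.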

Determining which metrics and parameters to monitor and measure logically
follows defining goals and objectives, key questions and hypotheses, definition of
scale, and selection of study design. Selecting parameters also goes hand in
hand with spatial and temporal replication and sampling schemes discussed
below. Parameters and metrics should not be selected arbitrarily or simply
because they were used in other studies. Monitoring parameters should be
relevant to the questions asked, strongly associated with the rehabilitation action,
ecologically and socially significant, and efficient to measure (Downes et
al., 2002; Bauer and Ralph, 2001; Kurtz et al., 2001). For example, monitoring of
riparian rehabilitation will likely focus on indicators of plant growth and
diversity as well as some channel features, while instream habitat improvement
may focus on instream habitat features and changes in fish numbers or diversity.
Moreover, to be useful a parameter must change in a measurable way in
response to treatment, be directly related to the resource of concern, have
limited variability, and be unlikely to be confounded by temporal or spatial
factors (Conquest and Ralph, 1998).

TABLE 23
Summary of advantages and disadvantages of the major study designs
used to evaluate stream or watershed rehabilitation or habitat alteration
(modified from Roni, 2005). Intensive study designs generally include
sampling at one or two study sites or streams; extensive designs sample
multiple study sites, streams, or watersheds. Years of monitoring needed
to detect a fish response are general estimates based on juvenile salmonid
studies; extensive study designs assume more than 10 sites are sampled
(space-for-time substitution), so fewer years of monitoring are needed.

Column abbreviations: BA-Int = before-after intensive, BA-Ext = before-after
extensive, PT-Int = post-treatment intensive, PT-Ext = post-treatment extensive.

Attribute (pros and cons)                              BA-Int   BA-Ext   BACI     PT-Int   PT-Ext
Includes collection of preproject data                 yes      yes      yes      no       no
Ability to assess interannual variation                yes      yes      yes      yes      no
Ability to detect short-term response                  yes      yes      yes      no       yes
Ability to detect long-term response                   yes      no       yes      yes      yes
Appropriate scale (WA = watershed, R = reach)          R/WA     R/WA     R/WA     R        R/WA
Ability to assess interaction of physical
  setting and treatment effects                        low      high     low      low      high
Applicability of results                               limited  broad    limited  limited  broad
Potential bias due to small number of sites            yes      no       yes      yes      no
Assumes treatment and controls are similar
  before treatment                                     NA       NA       no       yes      yes
Results influenced by climate, etc.                    yes      yes      yes      yes      no
Years of monitoring needed to detect
  a fish response                                      10+      1-3      10+      5+       1-3

NA = not applicable

The appropriate parameters to monitor will differ by type of rehabilitation as
well as by the specific hypothesis. The choice of a parameter should in part be
based on the
different sources of spatial and temporal variability associated with that
parameter. Both observation error and natural variability of a quantity will reduce
the precision with which the mean of the quantity is estimated. For example,
electrofishing and snorkeling are both used to estimate juvenile fish densities in
small streams. While electrofishing may have a smaller observation error, it is
more time consuming and thus leads to fewer surveyed habitat units. If the
variability of fish is high between units then the marginal reduction in observation
error may have a relatively small effect on the precision of the mean density
estimate when compared with the increase in precision from snorkeling more
units. Moreover, temporal variation within sites and across sites can affect the
usefulness of an indicator or parameter for detecting local and regional trends in
biota or habitat (Larsen et al., 2001). It is important to consider these different
types of error when selecting monitoring parameters.
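The electrofishing versus snorkeling trade-off described above can be made concrete with a small simulation. All numbers below (densities, error magnitudes, unit counts) are hypothetical and chosen only to illustrate the principle: when among-unit variability dominates observation error, surveying more units with a noisier method can yield a more precise mean-density estimate.

```python
# Illustrative simulation (hypothetical numbers): precision of a mean
# fish-density estimate from two survey methods under a fixed time
# budget. Electrofishing: small observation error but few units surveyed;
# snorkeling: larger observation error but many units surveyed.
import random
import statistics

random.seed(42)

# A "watershed" of 500 habitat units with high among-unit variability.
TRUE_DENSITIES = [random.gauss(50, 20) for _ in range(500)]

def survey(n_units, obs_sd):
    """Mean density from n_units randomly chosen habitat units, each
    observed with Gaussian measurement error of sd = obs_sd."""
    units = random.sample(TRUE_DENSITIES, n_units)
    return statistics.mean(d + random.gauss(0, obs_sd) for d in units)

def se_of_mean(n_units, obs_sd, trials=2000):
    """Empirical standard error of the mean-density estimate,
    obtained by repeating the survey many times."""
    estimates = [survey(n_units, obs_sd) for _ in range(trials)]
    return statistics.stdev(estimates)

# Same field time: 10 electrofished units vs 40 snorkeled units.
se_electro = se_of_mean(n_units=10, obs_sd=2)   # precise counts, few units
se_snorkel = se_of_mean(n_units=40, obs_sd=10)  # noisier counts, more units
print(f"SE, electrofishing (10 units): {se_electro:.2f}")
print(f"SE, snorkeling     (40 units): {se_snorkel:.2f}")
```

With these illustrative values the snorkeling estimate is more precise despite its larger observation error, because the among-unit standard deviation (20) dwarfs both methods' measurement noise; reversing that balance would favour electrofishing.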
Numerous publications discuss different parameters to measure and their
strengths and weaknesses. Many regional protocols exist for different parameters.
Parameters typically address watershed processes or physical, chemical, and
biological changes. We summarize common parameters in each of these
categories in Table 24 and attempt to link them to basic watershed processes as
outlined in Figure 1. The reader should consult regional protocols for more
information on which might be most useful in their region.

Determining the spatial and temporal replication needed to detect changes
following rehabilitation can and should be established prior to monitoring. This
will also help determine whether the initially selected parameters will be useful
in detecting change from the rehabilitation action in question. This can be done
using relatively straightforward power analyses found in statistical software
packages and statistical texts. Similarly, sampling schemes for collecting data
within a given study area are covered in similar texts (e.g. simple random,
systematic, stratified random, multistage, double sampling, line transect).
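A back-of-envelope version of such a power analysis can be written in a few lines using a normal approximation. The numbers below are hypothetical (a 20 percent increase in fish density against a stated among-site variability); a statistical package would refine this with a t-distribution and allowance for unequal variances.

```python
# Approximate power analysis (normal approximation) for choosing the
# number of sites per group. Effect size and variability are hypothetical
# placeholders, not recommendations.
from math import sqrt
from statistics import NormalDist

def power_two_sample(n, effect, sd, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test with n sites
    per group, true mean difference `effect`, and common sd `sd`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sd * sqrt(2 / n)
    return 1 - NormalDist().cdf(z_crit - effect / se)

def n_for_power(effect, sd, target=0.8, alpha=0.05):
    """Smallest per-group sample size reaching the target power."""
    n = 2
    while power_two_sample(n, effect, sd, alpha) < target:
        n += 1
    return n

# Hypothetical scenario: baseline density 50 fish per 100 m^2, expected
# increase of 10 (20 percent), among-site standard deviation of 15.
n = n_for_power(effect=10.0, sd=15.0)
print(f"Sites per group for 80% power: {n}")
```

Running this kind of calculation before monitoring begins shows immediately whether the planned replication can plausibly detect the expected response, or whether a less variable parameter or more sites are needed.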

Once the monitoring programme has been designed and implemented, the
results obviously need to be written up and published. While this seems intuitive,
many studies on habitat rehabilitation have only been published as grey
literature. Moreover, the published literature is likely biased towards projects that
showed an improvement following rehabilitation. Rehabilitation actions are
experiments, and reporting both positive and negative findings is critical for
improving our understanding of the effectiveness of different measures, spending
limited rehabilitation funds wisely, and restoring aquatic habitats and
ecosystems.

5.3 CONCLUSIONS FOR MONITORING AND EVALUATION

Monitoring and evaluation is critical to understanding rehabilitation actions and
spending future funds wisely. Key factors to consider when developing a
monitoring and evaluation programme for rehabilitation include:

 Establishing hypotheses, choosing an appropriate study design, and selecting sensitive
parameters linked to the hypotheses.

 The lack of published evaluations of habitat rehabilitation emphasizes the need for better
reporting and publishing of both successful and unsuccessful projects.

 Design of monitoring is best done as part of project design, not as an afterthought.

 The failure to detect significant changes in watershed processes, physical habitat, or biota has
often been due to poorly designed monitoring and not following the steps defined above.
TABLE 24
Common parameters utilized to evaluate rehabilitation projects

Roads/sediment/hydrology
  Watershed/riverine processes: mass wasting (landslide) rate and volume;
  fine and coarse sediment delivery and storage; surface erosion; hydrology
  (discharge); connectivity of roads with stream channel; sediment storage
  and transport; bed scour and fill
  Physical habitat: channel cross sections and long profile; channel width;
  channel units (habitats); residual pool depth; fine sediment; substrate
  size and composition
  Water/nutrients: turbidity, nutrients, water chemistry
  Biota: macroinvertebrate diversity and abundance; fish abundance,
  diversity, and survival

Riparian
  Watershed/riverine processes: sediment and nutrient transport and
  retention; channel aggradation and migration; wood transport and
  retention; changes in hydrology and groundwater levels; vegetation
  succession and composition
  Physical habitat: channel geometry; large woody debris; fine sediment;
  bank stability; soil conditions; percent cover; habitat quality (pool
  depth); shade and canopy cover
  Water/nutrients: temperature, nutrients, water and soil chemistry
  Biota: vegetation species composition, diversity, growth, survival,
  biomass, root density; vertebrate and invertebrate measurements may also
  be appropriate for some projects

Floodplain
  Watershed/riverine processes: rate of channel migration; sediment
  transport and storage; wood movement, transport and retention; hydrology;
  nutrient transport and retention; riparian species succession and
  composition
  Physical habitat: channel geometry, channel pattern, length, density;
  habitat units
  Water/nutrients: temperature, nutrients, water and soil chemistry
  Biota: juvenile and adult fish diversity, abundance, survival, movement;
  macroinvertebrate diversity, abundance; periphyton and aquatic macrophyte
  growth, species composition, biomass

Instream structures
  Watershed/riverine processes: NA
  Physical habitat: habitat units; channel morphology; large woody debris;
  cover; substrate; cross sections; long profile
  Water/nutrients: temperature
  Biota: juvenile and adult fish diversity, abundance, survival, movement;
  macroinvertebrate diversity, abundance; periphyton and aquatic macrophyte
  growth, species composition, biomass

In-lake structures
  Watershed/riverine processes: NA
  Physical habitat: NA
  Water/nutrients: NA
  Biota: juvenile and adult fish species abundance, movement

Nutrient enrichment
  Watershed/riverine processes: nutrient retention and uptake, including
  stable isotope analysis of biota
  Physical habitat: NA
  Water/nutrients: chemistry and nutrients
  Biota: periphyton, primary production and chlorophyll a;
  macroinvertebrate and fish growth, abundance and biomass

Acquisitions and conservation easements (habitat protection)
  Watershed/riverine processes: hydrology; connectivity of habitats;
  channel migration; sediment transport
  Physical habitat: landowner and human use; see also list under floodplain
  rehabilitation
  Water/nutrients: temperature, chemistry, nutrients
  Biota: species composition and richness; presence or absence of invasive
  species and of rare or sensitive species; behaviour of biota
  (reproduction, rearing, refuge, migration); abundance, survival, and
  growth of key species

NA = not applicable

Policy monitoring comprises a range of activities describing and analyzing the
development and implementation of policies, identifying potential gaps in the
process, outlining areas for improvement, and holding policy implementers
accountable for their activities (Policy Monitoring, Wikipedia:
https://en.wikipedia.org/wiki/Policy_Monitoring).
Definition:
Evaluation is the collection, analysis, and interpretation of information about any
aspect of a programme of education or training as part of a recognised process of
judging its effectiveness, its efficiency and any other outcomes it may have.

Purpose of Evaluation:

 To review instruction policy and assess the capability of the institution.

 To identify the issues and problems of institutions and the reactions of
significant groups of people.

 To measure the effectiveness of the instructor and to provide feedback.

Goals of Evaluation

 To find out the weaknesses of the programme

 To examine the objectives of the programme

 To suggest ways and means to improve the programme

 To conduct follow-up studies

Types of Evaluation
Evaluation can be classified in two ways:

 As per quality of evaluation

 As per time of evaluation

Quality of Evaluation

Quantitative Evaluation

This type of evaluation provides a quantifiable, objective measure, which can be
expressed in proportions.

Example: How many students have scored above 60 percent?
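A quantitative measure of this kind reduces raw marks to an objective figure that can be compared across classes or years. As a toy illustration (the marks below are hypothetical):

```python
# Hypothetical marks for a class of ten students; a quantitative
# evaluation reduces them to an objective count and proportion.
marks = [72, 55, 68, 91, 47, 60, 83, 76, 58, 64]

above_60 = [m for m in marks if m > 60]
proportion = len(above_60) / len(marks)

print(f"Students above 60%: {len(above_60)} of {len(marks)}")
print(f"Proportion: {proportion:.0%}")
```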

Qualitative Evaluation

It provides the means to communicate general expectations, which can be
expressed in grading.

Example: What is the socio-economic status of the students?

As per Time of Evaluation

Formative Evaluation: Ongoing evaluation carried out during an instructional
period to assess the participation and progress of students and to give feedback
to the instructor.

For example: class tests.

Summative Evaluation: Conducted at the end of a course or academic term to
judge the performance of students, the effectiveness of the instructor, and the
effectiveness of the course.

For example: mid-term or annual examinations.
