
LAP 20 Teaching Strategies


LAP Code: LAP-20
No. of Hours: 3 hours/meeting
LAP Subject Title: Teaching Strategies for Elementary Science

LAP-20
Assessment Strategies for Science
(Physics, Earth, and Space Science)
A. Topic Outline

Unit: Unit 4: Using Performance Tasks
Content Standard: LAP 20
Learning Objectives:
- To characterize effective use of performance tasks in classroom instruction;
- To discuss guidelines in designing and implementing performance tasks;
- To distinguish among the types of performance tasks; and
- To design performance tasks for earth science and physics.
Activities/Assignment: Analyzing Concepts

B. Introductory Activity: Analyzing Concepts (10 points). Use short-size bond paper for your answer.
Recall a performance task you demonstrated when you were still in high school or in one of your
subjects in your undergraduate studies. What kind of task was assigned to you or your group? How did
you complete your task?
C. Abstraction
What is Performance-based Assessment?
A performance-based assessment is the assessment of students’ ability to apply knowledge,
skills, and understanding, usually in authentic, real-life settings that are like those encountered in the
world outside the classroom (Murchan & Shiel, 2017). Typically, the students are required to create a
product or demonstrate a process. Performance-based assessment can be used to measure a broad
range of learning outcomes, including more complex outcomes that cannot be assessed using indirect
measures, such as multiple-choice tests and written examinations. Some examples of performance-
based assessments include:
 Representing a character from a drama or play.
 Keeping a portfolio of artwork.
 Demonstrating a routine, movement, or dance.
 Making a video to dramatize a historical theme.
 Editing a story, term paper, or essay.
 Conducting a science experiment.
 Working with a group of students to design a student attitude survey.
 Using equipment or a machine to complete a task.
 Preparing a meal/baking pastries or cakes in a culinary subject; and
 Reporting on a project by delivering a multimedia presentation
Typically, assessing performance involves evaluating student learning. The evaluation (making a
judgment about the quality of a performance) can be conducted by a teacher, an external marker,
or the students themselves. Klenowski & Wyatt-Smith (2014) identified student self-assessment,
whereby the students evaluate their own learning and, most importantly, internalize assessment
standards or criteria, as a major benefit of performance-based assessment. In conducting an
assessment, the rater may use a scoring tool such as a checklist, a rating scale, or a scoring rubric. The
use of an appropriate scoring tool is essential to ensure that the relevant aspects of the performance are
assessed (validity) and that the assessment is marked in a consistent manner (reliability). Evaluation can
occur during (e.g., delivery of oral presentation) or after the performance (e.g., completion of an
essay, portfolio, or project).
Performance assessments can vary in length, from activities that take just a few minutes to
complete to tasks that take several weeks and require the students to present their findings to an
audience inside and outside the school.
Various authors have identified aspects of knowledge and dispositions that can best be assessed
using performance-based assessments, and some of these frameworks overlap:
 Habits of mind – according to Costa & Kallick (2008), these are problem-solving, life-related
skills that are needed to operate effectively in society and include persisting, thinking
flexibly, managing impulsivity, thinking about one’s thinking (metacognition), applying
past knowledge to new situations, taking responsible risks, thinking interdependently, and
remaining open to continuous learning.
 Collaborative problem-solving – the students are assessed as they work together to
complete a project or another performance task (e.g., Von Davier & Halpin, 2013). To
determine the outcomes of cooperative learning, there may be learning outcomes relating
to the overall success of the project as well as outcomes specifying the expected
contributions of the individuals.
 Twenty-first-century skills – these are skills that are deemed important for the world of work
in the 21st century. Griffin & Care (2015) describe these as:
a) ways of thinking (creativity and innovation, critical thinking and problem-solving, metacognition);
b) ways of working (communication, collaboration/teamwork);
c) tools for working (information literacy, ICT literacy); and
d) living in the world (citizenship, life and career, personal and social responsibility).
 Higher-order thinking skills (HOTS) – these comprise the more advanced skills in Bloom’s
revised taxonomy (Anderson & Krathwohl, 2011) and include applying (using information in
new situations), analyzing (drawing connections among ideas), evaluating (justifying a
stand or decision), and creating (producing new or original work).
A key rationale for using performance-based assessment is that it is possible to establish strong
links between curriculum (expressed as goals or objectives), learning (expressed as performance
standards or learning outcomes), and assessment. Specifically, aspects of the curriculum that cannot
otherwise be assessed, like collaborative problem-solving, are emphasized, and the students can
demonstrate their strengths in these areas. The outcomes of assessment can then feed into further
teaching and learning activities, and gaps in student performance can be addressed. Klenowski &
Wyatt-Smith (2014) proposed that performance-based assessment, when used effectively, has considerable
potential as an instrument of educational reform and as a disincentive to teaching to the test (that is,
preparing students to sit examinations that are often predictable in format and content). In addition, they
suggest that it is consistent with social constructivist learning theories.
Implementing a Performance-based Assessment
A performance-based assessment task can be developed and scored by an individual teacher, a
subject department, an external assessor, or an examining board. A performance task seeks to assess
learning targets or objectives that are specified in the curriculum (Murchan & Shiel, 2012).
Such tasks may be carried out by individuals or groups. They can be scored as the students work
on the task or after it has been completed. Often, curriculum objectives are expressed as standards or
learning outcomes, and these become the focus of a rating scale or a rubric.
A moderation process may be put in place, where a check on the quality of the grades assigned
by the teacher is undertaken (Murchan & Shiel, 2012). This could involve a different rater taking a
random sample of completed tasks and scoring them independently. Discrepancies between two or
more raters can then be addressed in a marking or moderation conference. Sometimes, when
moderation unearths a discrepancy, the assessor may need to review the standards (learning outcomes)
to achieve a better understanding of them.
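As a minimal sketch of how such a moderation check might be organized, the Python fragment below takes a random sample of tasks already scored by the teacher and flags discrepancies with a second rater's scores for discussion; the student names, scores, and the moderator's rescoring rule are all hypothetical and used only so the example runs end to end.

```python
import random

# Hypothetical scores (on a 1-5 scale) assigned by the class teacher.
teacher_scores = {
    "Ana": 5, "Ben": 3, "Carla": 4, "Dino": 2,
    "Ella": 4, "Faye": 5, "Gio": 3, "Hana": 4,
}

# Moderation: a second rater independently rescores a random sample of tasks.
sampled_students = random.sample(list(teacher_scores), k=3)
moderator_scores = {}
for student in sampled_students:
    # In practice the moderator rescores the work by hand; a fixed offset is
    # used here only to make the example self-contained.
    moderator_scores[student] = max(1, teacher_scores[student] - 1)

# Flag discrepancies to be resolved in a marking or moderation conference.
for student in sampled_students:
    difference = abs(teacher_scores[student] - moderator_scores[student])
    if difference > 0:
        print(f"{student}: teacher {teacher_scores[student]}, "
              f"moderator {moderator_scores[student]} -> discuss at conference")
```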
The final stage in assessing performance on a task is to assign a grade or mark. This may take the
form of a numerical score, a descriptor, or a grade. More extensive feedback may be provided to the
student who completed the task, such as comments, an indication of areas in need of further
improvement, or targets that the student should strive to reach in the future.

Developing Observable Performance Criteria


The value and richness of performance assessments depend heavily on identifying criteria that
can be observed and judged. It is therefore important that the criteria be clear in the teacher’s mind and
that the students be taught the criteria. Russell & Airasian (2012) proposed the following guidelines for
this purpose.
 Select the performance or product to be assessed and either perform it yourself or imagine
yourself performing it.
 List the important aspects of the performance or product.
 Try to limit the number of performance criteria, so they all can be observed during a
student’s performance.
 If possible, have groups of teachers think through the important criteria included in a task.
 Express the performance criteria in terms of observable student behaviors or product
characteristics.
 Do not use ambiguous words that cloud the meaning of the performance criteria. Avoid
vague adverbs (such as many of those ending in -ly) and vague terms such as “good” or “appropriate.”
 Arrange the performance criteria in the order in which they are likely to be observed.
 Check for existing performance criteria before defining your own.
Tools in Assessing Performance-based Assessment
There are four tools that can be used to assess how well the students do on a performance-based
task: anecdotal records, observational checklists, rating scales, and scoring rubrics.
1. Anecdotal records
These are based on the teacher’s observations about the students as they perform an
assessment task. They allow the teachers to document the students’ strengths and weaknesses
as they edit a text, solve a problem, or search for information. Data gleaned from anecdotal
notes can be reviewed with other information (such as a finished product) to arrive at an overall
judgment of a student’s performance (Murchan & Shiel, 2012).
Of all the tools used in assessing a student’s performance, the anecdotal record is the
most detailed yet the most time-consuming. It is not meant to be a free-flowing report; rather, it is a
description of a student’s performance guided by prespecified performance criteria. Thus, judgments
and recommendations are absent from the record and are made when the record is reviewed at a
later time.
2. Observational checklist
A checklist consists of a list of behaviors, characteristics, or activities and a place for
marking whether each is present or absent. It can focus on a procedure, a behavior, or a product
(Murchan & Shiel, 2012). Checklists are diagnostic, reusable, and capable of charting student
progress. They provide a detailed record of the students’ performances, one that can and should
be shown to the students to help them see where improvement is needed (Russell &
Airasian,2012).
The students may use a self-evaluation checklist to review their own work. This may
enable them to internalize the criteria for performing well on a task, and they can also build
metacognitive knowledge as their understanding of their learning processes increases. On the
other hand, a potential disadvantage of a checklist is that it does not show degrees of quality,
only whether a criterion has been met or not.
There are, however, disadvantages associated with checklists. One important
disadvantage is that checklists give the teacher only two choices for each criterion: performed
or not performed. A checklist provides no middle ground for scoring (Russell & Airasian,2012).
Another drawback is the difficulty of summarizing a student’s performance into a single score.
To address these concerns, performance on a checklist can be summarized by
setting up rating standards or by calculating the percentage of criteria accomplished (Russell &
Airasian, 2012).
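As a minimal sketch of the second option, the Python fragment below converts a completed observational checklist into the percentage of criteria accomplished; the checklist criteria and the markings are hypothetical and serve only as an illustration.

```python
# Hypothetical observational checklist for a simple science-experiment task.
# Each criterion is marked True (performed) or False (not performed).
checklist = {
    "States the problem or question": True,
    "Formulates a testable hypothesis": True,
    "Follows safety procedures": True,
    "Records observations accurately": False,
    "States a conclusion supported by the data": False,
}

# Summarize the checklist as the percentage of criteria accomplished.
accomplished = sum(checklist.values())
percentage = 100 * accomplished / len(checklist)
print(f"Criteria accomplished: {accomplished} of {len(checklist)} ({percentage:.0f}%)")
```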
3. Rating scales
These are often used for aspects of a complex performance that do not lend themselves to a
yes-no or present-absent judgment. A rating scale assesses the degree to which a student has
attained the learning outcomes linked to a performance task. It can be used as a teaching tool (to
familiarize the students with what is required to achieve a standard) as well as an assessment tool. The
end points of a rating scale are usually anchored (“always,” “never”), with intermediate points defining
levels of performance (“seldom,” “occasionally,” “frequently”). In general, more points on the rating
scale allow finer distinctions in performance, although, as noted below, too many points can make
the ratings less reliable.
Three of the most common types of rating scales are the numerical, graphic, and descriptive
scales (Russell & Airasian, 2012). In numerical scales, a number stands for a point on the rating scale.
For example, “1” can correspond to a student “always” performing the behavior, “2” to a student
“usually” performing the behavior, and so on. Graphic scales require the rater to
mark a position on a line divided into sections based on the scale. The rater marks an “x” at the
point on the line that best describes the student’s performance. Descriptive rating scales are also
known as scoring rubrics, where the rater chooses among different descriptions of the actual
performance.
Regardless of the type of rating scale the teacher uses, two general rules will improve
its accuracy. The first rule is to limit the number of rating categories. There is a tendency to think
that the greater the number of rating categories to choose from, the better the rating scale. However,
few observers can make reliable distinctions in a performance when the rating scale has more
than five categories. Adding many categories to a rating scale is likely to make the ratings less, not
more, reliable. Stick to three to five well-defined and distinct rating scale points (Russell &
Airasian, 2012). The second rule is to use the same rating scale for each performance criterion. This
is not usually possible in descriptive rating scales, where the descriptions vary with each
performance criterion. For numerical and graphic scales, however, it is best to select a single rating
scale and use it for all performance criteria. Using many different rating categories requires the
observer to change focus frequently and will decrease rating accuracy by distracting the rater’s
attention from the performance.
Numerical summarization is the most straightforward and commonly used approach to
summarizing performance on rating scales. It assigns a point value to each category in the
scale and sums the points across the performance criteria.
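A minimal sketch of numerical summarization, using an invented three-point scale and hypothetical performance criteria, might look like this:

```python
# Hypothetical performance criteria rated on a three-point numerical scale:
# 1 = rarely, 2 = sometimes, 3 = consistently.
ratings = {
    "Sets up the equipment correctly": 3,
    "Takes measurements accurately": 2,
    "Interprets the results using relevant concepts": 2,
    "Communicates findings clearly": 3,
}

# Numerical summarization: sum the point values across all performance criteria.
total = sum(ratings.values())
maximum = 3 * len(ratings)
print(f"Total score: {total} out of {maximum}")
```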
4. Scoring rubrics
According to Murchan & Shiel (2012), these are a type of rating scale on which each
level has a complete description of performance and quality. A rubric also lays out the criteria for
different levels of performance, which are usually descriptive rather than numerical (Russell &
Airasian, 2012).
They may be analytic, where each of several dimensions is assessed separately, or holistic, where
a single overall judgment about the quality of the performance is made. Rubrics may
also be general (e.g., the same rubric can be applied to different tasks) or task-specific (where the
rubric describes quality with respect to a particular task). An analytic rubric has the potential to
generate specific feedback on strengths and weaknesses on each dimension of a task (Murchan &
Shiel, 2017).
Russell & Airasian (2012) explained how rubrics help both teachers and students in
various ways. Rubrics help teachers by:
 Specifying criteria to focus instruction on what is important.
 Specifying criteria to focus student assessment.
 Increasing the consistency of assessments.
 Limiting arguments over grading because clear criteria and scoring levels reduce subjectivity;
and
 Providing descriptions of student performance that are informative to both parents and the
students.
Furthermore, rubrics help the students by:
 Clarifying the teacher’s expectations about performance.
 Pointing out what is important in a process or product.
 Helping them monitor and critique their own work; and
 Providing clearer performance information than traditional letter grades provide.
General Steps in Preparing and Using Rubrics
A rubric includes both the aspects or characteristics of a performance that will be assessed and a
description of the criteria used to assess each aspect. The following steps, simplified from Russell &
Airasian (2012), can help teachers prepare rubrics with ease; a brief illustrative sketch follows the list.
1) Select a process or product to be taught.
2) State performance criteria for the process or product.
3) Decide on the number of scoring levels for the rubric, usually three to five.
4) State the description of performance criteria at the highest level of student performance.
5) State the description of performance criteria at the remaining scoring levels (e.g., the “good”
and “poor” levels of the book report rubric).
6) Compare each student’s performance with each scoring level.
7) Select the scoring level closest to a student’s actual performance or product.
8) Grade the student.
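To make the steps above concrete, the following sketch encodes a hypothetical three-level analytic rubric for a simple physics investigation and applies steps 6 to 8; all criteria, level descriptions, and sample ratings are invented for illustration only.

```python
# Hypothetical three-level analytic rubric for a simple physics investigation
# (step 3: three scoring levels; the level descriptors are invented examples).
rubric = {
    "Data collection": {
        3: "Measurements are complete, repeated, and recorded with correct units",
        2: "Measurements are complete but not repeated, or some units are missing",
        1: "Measurements are incomplete or recorded without units",
    },
    "Explanation of results": {
        3: "Explains the results using correct physics concepts",
        2: "Explains the results with minor misconceptions",
        1: "Explanation is missing or largely incorrect",
    },
}

# Steps 6-7: the rater compares the student's work with each level and records
# the level closest to the actual performance for every criterion.
student_levels = {"Data collection": 3, "Explanation of results": 2}

for criterion, level in student_levels.items():
    print(f"{criterion}: level {level} - {rubric[criterion][level]}")

# Step 8: assign a grade, here reported as the total of the selected levels.
print("Total:", sum(student_levels.values()), "out of", 3 * len(student_levels))
```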
D. Application: Situational Analysis (20 points)
1. What are the possibilities and applications of using performance assessment with very young
children and primary school students?
2. To what extent can young students engage with the self-assessment aspect of performance
assessment?
3. Two challenges in implementing performance assessments are time constraints and workload
management for teachers and students. Think of a subject in which you can introduce a performance
assessment such as a portfolio or a project. What steps can you take to make it more manageable?
4. Think of a performance task that can help the students utilize their knowledge and skills in
physics and earth science. You may also include other disciplines to which the students are most
exposed. Provide a rubric for assessing their performance.
5. How can performance tasks be more effective and manageable for both the teacher and the
students?
