SUMMARY OF IMPROVING PERFORMANCE ASSESSMENT
By Group 5:
FAIZAH CHOLIL TSUQOIBAQ, NIM. 113411131
NASHIHUN AMIN, NIM. 113411112
Performance assessments provide a systematic way of evaluating those reasoning and skill outcomes that cannot be adequately measured by the typical objective or essay test. These outcomes are important in many different types of courses. Performance assessments can help to fill this gap, because they not only engage students and give them a chance to demonstrate their knowledge but also disclose more in-depth information on students' academic needs.
A. Performance task forms:
a. Solving realistic problems.
b. Oral or psychomotor skills without a product.
c. Writing or psychomotor skills with a product.
These various types of performance may be very restricted to fit a specific instructional objective, or they may be so extended and comprehensive that numerous instructional objectives are involved. For example, we may judge how well students can select and use laboratory equipment, or we may have them conduct an experiment that involves planning the experiment and writing a report of the findings. We might also have them present their findings to the class and defend their procedures and conclusions. Because performance tasks can vary so widely in comprehensiveness, it is helpful to use the designations restricted performance tasks and extended performance tasks.
English and foreign language courses are concerned with such skills as map and graph construction and operating effectively in a group. Although tests can tell us whether students know what to do in particular situations, performance assessments are needed to evaluate their actual performance skills.
B. Performance assessment types:
1. Paper-and-pencil performance.
In a number of instances, paper-and-pencil performance can provide a product of educational significance. A course in test construction, for example, might require students to perform activities such as the following:
- Construct a set of test specifications for a unit of instruction.
- Construct test items that fit a given set of specifications.
- Construct a checklist for evaluating an achievement test.
2. Identification test.
The identification test includes a wide variety of situations representing various degrees of realism. In some cases, a student may be asked simply to identify a tool or piece of equipment and to indicate its function.
Although identification tests are widely used in industrial education, they are by no means limited to that area. Foreign language students may be asked to identify correct pronunciation, and English students to identify the best expression to be used in writing.
3. Structured performance test.
A structured performance test provides for an assessment under standard, controlled conditions. It might involve such things as making prescribed measurements, adjusting a microscope, following safety procedures in starting a machine, or locating a malfunction in electronic equipment.
4. Simulated performance.
Simulated performance is an attempt to match the performance in a real situation, either in whole or in part. In some situations, simulated performance testing might be used as the final assessment of a performance skill.
5. Work sample.
The work sample requires the student to perform actual tasks that are representative of the total performance to be measured. The work sample approach to assessing performance is widely used in occupations involving performance skills, and many of these situations can be duplicated in the school setting.
6. Extended research project.
One of the most comprehensive types of performance assessment involves the extended research project. The extended research project provides for assessment of multiple outcomes (e.g., research, writing, speaking, thinking, and self-assessment skills) and can be adapted to various areas of instruction.
C. Selecting the method of observing, recording, and scoring performance assessment.
Whether judging procedures, products, or some combination of the two, some type of guided observation and some method of recording and scoring the results are needed.
Commonly used procedures include:
1. Systematic observation and anecdotal records.
Observing students in natural settings is one of the most common methods of assessing performance outcomes. For more comprehensive performance situations, however, the observations should be systematic, and typically some record of the observations should be made. This will enhance their objectivity, meaningfulness, and usefulness at a later date.
An anecdotal record is a brief description of some significant event. It typically includes the observed behavior, the setting in which it occurred, and a separate interpretation of the event.
2. Checklist.
The checklist is basically a list of measurable dimensions of a performance or product, with a place to record a simple "yes" or "no" judgment. If a checklist were used to evaluate a set of procedures, for example, the steps to be followed might be placed in sequential order on the form; the observer would then simply check whether each action was taken or not taken.
3. Rating scales.
The rating scale is similar to the checklist and serves somewhat the same purpose in judging procedures and products. The main difference is that the rating scale provides an opportunity to mark the degree to which an element is present instead of using the simple "present-absent" judgment. The scale for rating is typically based on the frequency with which an action is performed (e.g., always, sometimes, never), the general quality of a performance (e.g., outstanding, above average, average, below average), or a set of descriptive phrases that indicates degrees of acceptable performance (e.g., completes task quickly, slow in completing task, cannot complete task without help). Like the checklist, the rating scale directs attention to the dimensions to be observed and provides a convenient form on which to record the judgment.
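The contrast between the two instruments can be sketched in a few lines of code. The dimension names, scale values, and sample judgments below are invented for illustration only and are not drawn from any standard observation form:

```python
# Sketch: a checklist records presence/absence, while a rating scale
# records degree. All dimension names and scores here are hypothetical.

CHECKLIST_DIMENSIONS = [
    "selects correct equipment",
    "follows safety procedures",
    "records measurements accurately",
]

# Rating-scale values for frequency: 0 = never, 1 = sometimes, 2 = always.

def score_checklist(judgments):
    """Count how many dimensions were marked 'yes' (present)."""
    return sum(1 for present in judgments.values() if present)

def score_rating_scale(ratings):
    """Total the graded judgments, so degree (not just presence) counts."""
    return sum(ratings.values())

# A checklist judgment: each dimension is simply present or absent.
checklist = {d: True for d in CHECKLIST_DIMENSIONS}
checklist["records measurements accurately"] = False

# A rating-scale judgment of the same performance, by degree.
ratings = {
    "selects correct equipment": 2,        # always
    "follows safety procedures": 2,        # always
    "records measurements accurately": 1,  # sometimes
}

print(score_checklist(checklist))   # 2 of 3 dimensions present
print(score_rating_scale(ratings))  # 5 of a possible 6 points
```

The checklist collapses the third dimension to "absent," while the rating scale preserves the partial credit, which is exactly the distinction the paragraph above describes.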
D. Validity and reliability in performance assessment.
Assessment of teacher practice must be both valid and reliable if it is to be believed and trusted. Validity relates to the question of whether or not one assesses what one claims to or intends to assess. It deals with whether or not an assessor's findings correspond to some form of objective reality. The data collected during an assessment must in some way accurately reflect the action being assessed. To the extent that this is so, the assessment is valid. Reliability relates to whether or not the findings can be replicated, either by the same observer watching similar teaching practice or by another observer viewing the same teaching practice as the first assessor. If an assessment practice is reliable, then both assessors should arrive at approximately the same score. To the extent that the assessors agree in their scoring, the assessment is reliable.
Validity does not ensure reliability, and reliability does not ensure validity. For instance, a study can be valid but lack reliability, and vice versa.
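One simple way to quantify the inter-assessor agreement described above is percent agreement: the share of items on which two assessors give the same score. The scores below are invented illustration data, not results from any real assessment:

```python
# Sketch: percent agreement as a minimal check of inter-rater reliability.

def percent_agreement(scores_a, scores_b):
    """Fraction of items on which the two assessors agree exactly."""
    if len(scores_a) != len(scores_b):
        raise ValueError("both assessors must rate the same items")
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Hypothetical ratings of eight performances by two independent assessors.
assessor_1 = [3, 2, 4, 4, 1, 3, 2, 4]
assessor_2 = [3, 2, 3, 4, 1, 3, 2, 4]

print(percent_agreement(assessor_1, assessor_2))  # 0.875
```

More rigorous indices, such as Cohen's kappa, also correct for the agreement expected by chance, but percent agreement captures the basic idea: the closer the value is to 1.0, the more reliable the assessment.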