Assessment Process
It is our hope that this document will serve as a useful
tool for James Madison University (JMU) faculty and for
faculty from other institutions interested in the process
of assessment. We have provided a general framework of
assessment practice that takes you from the early
stages of assessment (specification of program focus) through
the later stages (maintenance of assessment practice).
We begin with a bit of background about the role of the
Center for Assessment and Research Studies (CARS) in JMU's
assessment efforts; we then take you through each phase
of the assessment process, providing examples as needed.
The phases of the assessment process include: 1) specification
of program focus; 2) development of goals and objectives;
3) linkage of goals and objectives to curriculum; 4) linkage
of goals and objectives to assessment methods; 5) linkage
of curricular experiences to data collection points;
6) method identification; 7) method selection; 8) method
construction, pilot, and refinement; and 9) maintenance
of assessment practice.
CARS faculty assist departments and programs in the
development and maintenance of strong assessment programs.
This assistance is extended to all our University programs,
whether undergraduate or graduate, student affairs or academic.
We serve in a consulting role, assisting faculty and staff
throughout the process of assessment.
Specification of Program Focus
An important and useful place to begin the assessment
process is to consider the context within which a program
operates. For example, an academic program actually has
several homes: the program resides within a department,
a school or college, and the university as a whole. Some
academic programs also maintain strong relationships with
agencies external to the university. For example, many
nursing programs have relationships with hospitals and
external nursing facilities that provide experiential and
professional opportunities for students while providing
service to the community. The congruence of a program's
mission with the missions of each of these larger units
is an important linkage: alignment can help assure continued
maintenance and support for a program, while misalignment
can jeopardize it.
Development of Goals and Objectives
The development of a design for
assessment requires that we have a clear and shared idea
of what it is we are trying
to measure. This is one of the most difficult but also
one of the most important stages of assessment program
design. We begin by delineating the goals and objectives
of the program. Our staff help faculty to arrive at clear
statements of the goals and objectives of their programs.
We find that when the goals and objectives are clearly
described, the appropriate assessment methods become
apparent. Good goals and objectives are the engine that
drives the
assessment process. Once the goals and objectives are
drafted, we encourage program faculty to revisit them frequently
to ensure that they remain aligned with actual instruction
and program delivery.
Many goals and objectives
may not be easily measured. It is very important at
this stage of program design to
respect the complexity of the phenomenon under study.
There are many important goals that are difficult to describe,
let alone measure. An important program objective should
not be abandoned simply because we cannot think of an
easy
way to measure it. For example, JMU's Social Work
department has struggled for many years with the refinement
of a process and outcome that its faculty refer to as "the
development of professional self." They believe this objective
is a very important component of being an effective social
worker and remain committed to it, though they are not yet
satisfied with their assessment of this goal.
Linkage of Goals and Objectives to Curriculum
When program goals and objectives have been drafted,
discussed, and agreed upon by faculty members, a useful
exercise is to link them to the curricular experiences the
program offers students. For example, a chart, like the
one below, can be constructed that lists each of the goals
and objectives of the program matched with the opportunities
the program provides to meet each one. In this way, faculty
and students can see where and in what sequence goals and
objectives are addressed and reinforced. The experience of
several programs has indicated that this is a valuable
process. Through this method, faculty members have discovered
that information they thought had been covered in previous
classes in fact had not been. Other programs have indicated
that this information has been particularly beneficial for
new faculty members, providing structure for course planning
and instructional delivery: they know what their courses
are intended to do, and they can plan accordingly.
| Objective | Courses and co-curricular opportunities where the objective is addressed |
| --- | --- |
| Objective 1: | |
| Objective 2: | |
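For programs that want to audit such a chart automatically, the mapping can also be kept as a simple data structure. The sketch below illustrates the idea; all objective and course names are hypothetical:

```python
# Hypothetical curriculum map: each program objective is linked to the
# courses and co-curricular experiences where it is addressed.
curriculum_map = {
    "Objective 1: Explain core disciplinary concepts": ["PSYC 101", "PSYC 210"],
    "Objective 2: Design and conduct a small study": ["PSYC 211", "Capstone seminar"],
    "Objective 3: Communicate findings in writing": [],  # not yet covered anywhere
}

# Flag objectives with no linked curricular experience -- these are the
# coverage gaps the mapping exercise is meant to reveal.
gaps = [obj for obj, courses in curriculum_map.items() if not courses]
for obj in gaps:
    print("No curricular coverage:", obj)
```

A spreadsheet serves the same purpose; the point is that an explicit mapping makes gaps visible rather than assumed.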
Linkage of Goals and Objectives to Assessment Methods
With the goals
and objectives outlined and with assurances that students
will have the opportunity to learn and practice
them, faculty then identify appropriate assessment methods.
Many programs have found it useful to develop a matrix
that links the selected assessment method(s) to each
of the program goals and objectives, as is shown in the
example
below. It is not necessary to develop a different method
for each objective. For example, a knowledge test can
be designed to assess a variety of objectives, and a performance
task can be designed to address several objectives. Some
program goals are simply not amenable to measurement
with
a multiple-choice or selected-response instrument. There
are many assessment methods from which to choose: selected-response
tests, constructed response tests, recitals, performance
tasks, surveys of different groups, focus groups, or
interviews. It may not be possible to develop methods to
assess all
of your goals and objectives right away. The important
thing is to get started and to develop a plan to do so.
In addition, it may not be feasible to assess all goals
each year. However, it is important to create a systematic
plan to meaningfully assess all goals on a reasonable
schedule.
| Objective | Methods to Assess Objective |
| --- | --- |
|  |  |
|  |  |
Linkage of Curricular Experiences to Data Collection Points
Once assessment methods have been identified, it is
necessary to decide when to administer them. Many curricular
designs provide natural opportunities for assessment data
collection. Examples of data collection points are the
entry-level course, the keystone course, and the capstone
course.
For many program goals, it may be useful to assess student
understanding when students enter the major. An entry-level
required course that serves as an introduction to the major
can serve as a fine pre-test data collection point. JMU's
Communication Sciences and Disorders undergraduate program
has taken advantage of its entry-level course to show new
majors the kinds of competencies and knowledge they will
be expected to have at the end of their senior year. Students
have expressed genuine wonder at, and enhanced respect for,
the major once they discern the nature and breadth of the
field of study upon which they have embarked. Using this
design, faculty have been able to assess the readiness of
their new majors across several academic years. Further,
when the program assesses its graduating seniors during
their final semester, the pre-test data provide a meaningful
frame of reference for interpreting program outcomes.
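The "frame of reference" idea can be sketched numerically. In this minimal example, the scores, scale, and cohort size are all hypothetical:

```python
# Hypothetical scores on the same instrument at two data collection points.
pretest_scores = [42, 55, 48, 61, 50]    # entry-level course (new majors)
capstone_scores = [68, 75, 70, 82, 74]   # final-semester seniors

pre_mean = sum(pretest_scores) / len(pretest_scores)
post_mean = sum(capstone_scores) / len(capstone_scores)

# The pre-test mean supplies the frame of reference: it suggests how much
# of the capstone performance reflects growth during the program rather
# than knowledge students arrived with.
print(f"Entry mean: {pre_mean:.1f}, "
      f"capstone mean: {post_mean:.1f}, "
      f"gain: {post_mean - pre_mean:.1f}")
```

In practice the comparison would involve matched or cohort-level data and appropriate statistical analysis; the sketch only shows why a baseline matters for interpretation.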
Other programs have embedded assessment methods in what
are referred to as keystone courses. This type of curricular
structure involves a set of common core courses required
of all majors; such a core is frequently seen in academic
majors that offer separate concentrations. For example,
all students in JMU's Psychology undergraduate major must
take a series of three courses: Psyc 101, 210, and then
211. Psychology 211 is a keystone course because it marks
the last course in the required series. By the time students
reach it, they have completed several prerequisite educational
experiences and are declared Psychology majors, so assessment
at this point verifies that all students have acquired the
skills considered prerequisite for advanced academic standing.
Collecting data from students
after this keystone course can provide programs with significant
opportunities to
assess the quality of their core courses and compare the
quality of their majors over time. Another frequently used
assessment point is the capstone course. Many programs
provide students with curricular designs or course concentrations
that culminate in a final integrating course, known as
a capstone. These final course requirements can include
student research experiences, laboratory tasks, term papers,
performance recitals, or seminar activities through which
students demonstrate many of the most important program
goals and objectives. Capstone courses therefore provide
a natural home for assessment activities.
Method Identification
When program faculty have arrived
at clear ideas of what the goals and objectives of the
program are
and where students
learn and practice the skills and competencies, they
can begin to identify the methods most appropriate for
assessing
student learning and development. As stated earlier,
well-written goals and objectives tend to clarify which
assessment methods are best suited for use. Strong, mature
assessment programs are characterized by multiple methods
of assessment; they do not rely on a single test to provide
the information they need about their many program goals
and objectives. Newer programs, with little assessment
experience, often begin the process with multiple-choice
knowledge tests. Some programs find that existing instruments
can meet their needs, such as the ETS Major Field Achievement
Tests (MFAT), or tests developed by a national disciplinary
association, such as the American Chemical Society or the
National League for Nursing. However, many disciplines have a strong
tradition of performance assessment in which recitals,
dance, demonstrations, or portfolios are reviewed. Some
objectives seem to be more amenable to assessment with
multiple-choice examinations and others are best evaluated
with an actual performance or product review. At this
stage of design, identification and selection of the best
methods
for each of the program objectives is conducted. The
actual selection or design of the instrument to use comes
later.
What is currently available in the marketplace should
not influence the specification of program goals and
objectives or the selection of the best method.
Method Selection
CARS faculty assist program faculty with the selection
and development of all assessment methods, to assure that
the methods used for program or student review are sufficient
to the task; we are concerned above all with the adequacy
of measurement. The purpose of our assessment process is
to provide information to facilitate program improvement.
We work with faculty to achieve sufficient reliability and
validity of assessment methods, as a high standard of
measurement quality is essential for confident inferences
and actions based on these methods. For faculty interested
in commercially available instruments, we assist by requesting
the test documentation and reviewing the psychometric
properties of the instruments. We then discuss this information
with the faculty to help them decide whether the test can
meet their needs. Faculty members are encouraged to carefully
review the items that comprise an instrument to see whether
it covers their program goals and objectives. By creating
a table, like the one below, that lists the goals and
objectives of the program, faculty can map the individual
test items back to their program goals to assure adequate
coverage. This also makes it possible to see whether
supplementing a commercial test with a set of carefully
designed items can enhance coverage of the program's goals
and objectives.
| Objective | Items that assess the objective | Number of items | Percent of items |
| --- | --- | --- | --- |
|  |  |  |  |
|  |  |  |  |
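The item-mapping exercise can also be tallied automatically once each item has been assigned to an objective. This is only a sketch; the item numbers and objective labels are hypothetical:

```python
from collections import Counter

# Hypothetical mapping of test items to the program objective each addresses.
item_map = {
    1: "Objective 1", 2: "Objective 1", 3: "Objective 2",
    4: "Objective 1", 5: "Objective 2", 6: "Objective 3",
}

# Tally the number and percent of items per objective to check coverage.
counts = Counter(item_map.values())
total = len(item_map)
for objective in sorted(counts):
    n = counts[objective]
    print(f"{objective}: {n} items ({100 * n / total:.0f}% of test)")
```

An objective receiving few or no items is a signal that supplemental, locally written items may be needed.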
Method Construction, Pilot, and Refinement
We often find
that commercially available instruments are not appropriate
for assessment of our programs. In
fact, our faculty members in collaboration with CARS
faculty have developed over 90% of the instruments used
for assessment
at JMU. By creating our own instruments, we can tailor
them specifically to the goals and objectives of each
program. We can also pilot and revise the instruments to
assure
they have sufficient reliability and validity to meet
our needs. As we modify our program goals and objectives
or
instructional delivery, we can revise our instruments.
Being able to develop and modify our own assessment
instruments gives us the flexibility to maintain vigorous
programs that can respond effectively and efficiently to
changing demands and developments. Fundamental to all
development is the fidelity of the assessment methods and
techniques to the program's goals and objectives.
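One common check during piloting is an internal-consistency estimate of reliability. The sketch below computes Cronbach's alpha on hypothetical pilot scores (rows are students, columns are items); real analyses would use a statistical package and a much larger sample:

```python
# Hypothetical pilot data: each row is one student's scores on four items.
scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 4, 5],
    [1, 2, 2, 2],
    [3, 3, 4, 4],
]

def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(scores[0])                                  # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])  # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Low alpha on a pilot suggests revising or replacing items before the instrument is used for program decisions; alpha is only one piece of the reliability and validity evidence mentioned above.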
Maintenance of Assessment Practice
Once the assessment design has been developed, implemented,
and refined, the maintenance of practice must be addressed.
We assist JMU programs with many components of this
maintenance. For example, we archive program methods,
instruments, and data, and we assist in data analysis and
in the interpretation of results. Many of the results have
led to meaningful research questions about assessment
practice, in general, and student motivation, in particular.
We are concerned with the vitality of assessment practice.
It is our goal that the level of assessment practice at
our University represents the best our profession has to
offer. Engagement in assessment represents new opportunities
for scholarly inquiry. Many of our faculty members have
found that assessment provides new channels for professional
development and research. There is a growing list of assessment-related
books, chapters, and journal articles that have been published
by James Madison faculty members and an even longer list
of scholarly presentations. We are committed to sustaining
the vitality of assessment practice.