Standard 2 Addendum:

Assessment System and Unit Evaluation

Evidence for the Onsite BOE Team to validate during the onsite visit

(1) Assessment system: Steps to be taken to further validate the sensitivity of the instrument to pick up growth throughout the semester. How do you propose to expand on inter-rater agreement analysis?

A prime example of how the unit ensures that its assessment tools are dependable and consistent is the ongoing examination of the student teaching evaluation tool, referred to as the ST-9.

All initial programs in the unit use the Assessment of Student Teaching (ST-9) as a formative assessment of candidates' performance at the end of their programs. Each program has developed and published a set of Reference Guides to be followed when supervising in the field. Using state competencies and SPA standards, program-specific reference guides were developed by clinical and university faculty within the MidValley Consortium. As SPA standards are updated, the reference guides are updated as well; the most recent example is the PHETE reference guide, which was reviewed and revised based on the NASPE/NCATE standards. These guides are used by university supervisors and clinical faculty and encourage a performance-based process for supporting the professional growth of pre-service teachers over time. (Standard 2 Addendum Exhibit 1: Reference Guides)

In November 2011, mid-block ST-9 data were reviewed at the monthly PECC meeting. The committee discussed whether it was reasonable and useful for university supervisors to complete mid-block evaluations, and whether the instrument itself provides helpful data at mid-block. The committee asked to see further data showing mid-block and final evaluations together; such data were reviewed at the January 2012 PECC meeting. The following points emerged from these conversations:

  • University supervisor mid-point evaluations provide feedback to candidates, so that they have a record of any areas of weakness before the experience is complete.
  • The mean scores of student performance are lower at mid-block than at final (for Fall 2011 data, the average score was 2.78 at mid-block vs. 2.94 at final, on a three-point scale), suggesting that the instrument is indeed sensitive to candidates' lower skill level earlier in their field experiences; an illustrative computation of this comparison is sketched below.

The consensus of the group was to continue the current process of having both university supervisors and cooperating teachers complete both the mid-point and final ST-9 evaluations.
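As a concrete illustration of the sensitivity comparison noted above, the following minimal sketch computes mean ST-9 scores at mid-block and final for the same set of candidates. The data layout and the paired ratings are hypothetical placeholders, not actual ST-9 data; in practice the analysis would be run on records exported from TK20.

# Minimal sketch of the mid-block vs. final sensitivity check on ST-9 scores.
# The paired ratings below are illustrative placeholders, not actual ST-9 data.
from statistics import mean

# One (mid_block, final) overall rating per candidate on the three-point ST-9 scale.
paired_ratings = [
    (2.5, 3.0),
    (3.0, 3.0),
    (2.5, 2.5),
    (3.0, 3.0),
    (2.5, 3.0),
]

mid_scores = [mid for mid, _ in paired_ratings]
final_scores = [final for _, final in paired_ratings]

print(f"Mid-block mean: {mean(mid_scores):.2f}")    # lower mean expected at mid-block...
print(f"Final mean:     {mean(final_scores):.2f}")  # ...if the instrument is sensitive to growth

# Average within-candidate growth from mid-block to final.
growth = mean(final - mid for mid, final in paired_ratings)
print(f"Mean growth:    {growth:.2f}")

A mid-block mean below the final mean, together with positive average within-candidate growth (as with the Fall 2011 values of 2.78 and 2.94), supports the claim that the instrument registers candidate growth over the block.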

Inter-rater agreement on the ST-9 has been reviewed in PECC. The most recent such discussion, in January 2012, revealed that exact agreement between raters (cooperating teachers and university supervisors) ranged from 51% to 97% at mid-block and from 72% to 99% at final evaluation. For each item, agreement was higher at final than at mid-block. This is reasonable: at mid-block, university supervisors have had fewer opportunities to observe candidates (sometimes only once or twice by that point) than the cooperating teachers, so consistency between the two raters is harder to achieve. By the end of the block, both raters have established a better understanding of the candidate's strengths and weaknesses, which in turn facilitates rater agreement.
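Here, exact agreement is simply the proportion of candidates for whom the two raters assigned the identical rating on a given item. The following minimal sketch shows how per-item exact agreement could be computed; the item names and ratings are hypothetical placeholders, not actual ST-9 items or data.

# Minimal sketch of per-item exact inter-rater agreement on the ST-9.
# Item names and ratings are hypothetical placeholders.
# ratings[item][rater] -> one rating per candidate, in the same candidate order.
ratings = {
    "Instructional Planning": {
        "cooperating_teacher":   [3, 2, 3, 3, 2],
        "university_supervisor": [3, 2, 2, 3, 2],
    },
    "Classroom Management": {
        "cooperating_teacher":   [2, 3, 3, 2, 3],
        "university_supervisor": [2, 3, 3, 2, 3],
    },
}

for item, by_rater in ratings.items():
    ct = by_rater["cooperating_teacher"]
    us = by_rater["university_supervisor"]
    # Exact agreement: share of candidates given identical ratings by both raters.
    agreement = sum(a == b for a, b in zip(ct, us)) / len(ct)
    print(f"{item}: {agreement:.0%} exact agreement")

The same computation, applied separately to mid-block and final ratings for each ST-9 item, yields the per-item agreement ranges reported above.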

Further analyses to be conducted:

Analyses of ST-9 data will be conducted routinely (see attachment); these analyses will be disseminated and discussed via PECC.

In programs that will use multiple raters for the new Unit dispositions rubric, agreement analyses will be completed to build validity evidence for the measure.

(2) Assessment system: Steps taken when a candidate does not meet the requirements at a transition point. What remediation is offered?

As demonstrated in Exhibits 2.3a and 2.3b in the IR, transition point criteria may vary slightly from program to program and across initial and advanced programs in the unit; as such, there are variations in the ways in which remediation is offered. Essentially, the transition points are marked by criteria that include completion of specific trainings, documentation and letters of support, minimum grades on required exams or courses, and satisfactory performance evaluations as determined by supervisors. The circumstances surrounding a candidate's difficulty in meeting transition point criteria influence the remediation that is offered.

In some cases, candidates self-select not to continue into the fifth year; in other instances, candidates may not have the requisite grade point average, may not have passed required exams, or may not have met the expectations of the field experiences. IR Exhibit 2.3b provides specific information on the candidates who did not progress through all gates. The mastery learning model embraced by faculty across the unit gives programs multiple, in-depth views of candidates' progress. As indicated in IR Exhibit 2.3b, explanations for the attrition rate vary, as do rationales for determining the level of remediation to provide, if any.

For initial licensure programs, the unit provides remediation opportunities for candidates who are having difficulty meeting the requirements for admission into Teacher Education, primarily through group meetings, individual advising, and support services provided both by the College of Education (e.g., Praxis I peer tutoring by TEACH ambassadors) and by the university (e.g., Learning Resource Centers). Advanced candidates seek support for admission requirements through the Graduate School (which provides information on GRE preparation) and/or program advisors.

Key assessments are not directly identified as "transition points" in the programs but clearly play a critical role in candidates' progression through the programs. Each program monitors candidate performance on key assessments. As reflected in the key assessment data summaries in IR Exhibit 1.3.c-h, the overwhelming majority of candidates in each program meet the expected competencies. Program faculty work with candidates from a mastery learning perspective; thus, candidates are provided multiple opportunities for feedback and correction on key assessments.

(3) Assessment system: Advanced programs' evaluation of their practicum and internship experiences. What types of rubrics are used?

Candidates in advanced programs are evaluated on projects implemented either in their own classrooms or in internship experiences outside of their own classrooms. In all cases, the assessments have been developed to provide opportunities to apply the principles and theories studied in university courses. The rubrics used to assess the projects are designed to allow candidates to demonstrate their knowledge and skills while working directly in the contexts in which they are being prepared to serve as education professionals and leaders; these rubrics are presented in Standard 2 Addendum Exhibit 2: Advanced Program Practicum and Intern Evaluations.

For example, candidates in the Educational Leadership program are required to complete an internship in two school-level buildings other than their own, in addition to time in the central office. The course syllabus clearly delineates the expectations of the experiences, including documenting specific experiences in a journal. The accompanying rubric describes how the journal is evaluated.

The Internship in School Psychology is the part of the training program that provides the student and supervisors an opportunity to evaluate the student's knowledge and skills in a controlled but real and practical setting. It is viewed as an opportunity for the student to develop a clear professional identity and move toward assuming full responsibility as a school psychologist. The internship integrates all previous training experiences through practical application in the schools and, in some cases, in additional settings.

The School Psychology Handbook is a comprehensive document that clearly articulates the expectations for the internship, including the rubric that will be used to evaluate the experience.

Detailed information for the two programs described above and other advanced programs is available in Standard 2 Addendum Exhibit 2: Advanced Program Practicum and Intern Evaluations.

(4) Assessment System: Description of the assessment system that explains the difference between initial and advanced programs.

The assessment system for both initial and advanced programs is aligned with the Conceptual Framework, state licensing requirements, and national standards. Key assessments have been identified for initial and advanced programs that measure candidate performance across five areas: content knowledge, pedagogical and content knowledge, impact on student learning, diversity, and dispositions. A thorough description of the unit assessment system is presented in IR Exhibit 2.3a, which also includes an explanation of the differences in transition points between and among all programs.

(5) Assessment system: Information on how the TK20 system is used. How are candidates, faculty, and school partners involved?

TK20 is the primary system for collecting data on our candidates in initial and advanced programs. It is linked to several important processes in the unit: admission; documentation that candidates have met the criteria at program transition points; documentation of successful completion of all required exams; performance on key assessments; and completion of all program requirements resulting in a recommendation for licensure. Candidates in all initial programs are required to complete the application for admission by entering their data into the TK20 system. Once formally admitted, candidates refer to their personal data in TK20 to track their progress through the programs. TK20 also serves as the vehicle through which candidates evaluate their student teaching experience and supervision. University and school-based faculty depend upon TK20 to document candidates' performance on program key assessments. University supervisors and cooperating teachers access TK20 to record candidates' performance during student teaching. All faculty who administer and score key assessments use the system to record candidate scores. The system is also used by faculty advisors to track their advisees' progress through the programs.

Standard 2 Addendum Exhibit 3: TK20 Applications presents the various roles, responsibilities, and applications of the data management system.

(6) Data collection, analysis, and evaluation: Clarify how candidate assessment data are disaggregated by alternate route, off-campus, and distance learning. Does the unit conduct these analyses? If yes, what information is gleaned from these analyses?

The unit does not have programs whose data would be disaggregated across these categories: we are not approved for alternate route programs, and no programs at either the initial or advanced level combine delivery systems.

Standard 2 Addendum Exhibit 4: Program Delivery Models

(7) Data collection, analysis and evaluation: Clarify information about graduate survey data. The unit states that graduate survey data are collected from candidates exiting and from graduates three years out. Data presented were from current students. Are data collected from graduates available? If yes, what do the data indicate?

Candidates who are planning to graduate are invited, at the end of that semester, to complete a survey reflecting on their experiences in their professional education program. The survey is deployed about two weeks before the end of the semester and remains active until about two weeks after graduation, allowing new graduates to access the survey as their schedules permit. Four surveys have been administered in this manner: to May 2010, December 2010, May 2011, and December 2011 graduates. Both graduate surveys conducted in 2009 were sent out after graduation; because graduates rarely used the university email accounts to which the surveys were sent, the unit decided to invite new graduates to take the survey just before graduation. A three-year-out survey was completed in May 2010; students who had graduated in May or December 2007 were invited to participate. Standard 2 Addendum Exhibit 5: Graduate Survey Data presents these data.

Three surveys of advanced program graduates have been completed: Math Education (see IR Exhibit 1.3.i), Educational Technology, and Educational Leadership (see Standard 1 Addendum Exhibit 3: Educational Leadership and Educational Technology Surveys).

In summary, survey findings indicate that, overall, candidates felt adequately prepared for the profession. Working with limited English proficiency students and with students with disabilities are the two areas in which a few program completers felt least comfortable. Educational Technology graduates felt prepared to work in the field of educational technology but not to answer interview questions. Respondents to the Math M.Ed. graduate survey indicated that the program's courses did not lend themselves to an online format. Graduates of the Educational Leadership program indicated that the program prepared them to interview, take the licensure assessment, and work as administrators. Suggested improvements included offering more summer courses and providing a greater variety of instructors. All data gathered through the surveys are fed back to the programs and inform the data-driven decisions reflected in the program Assessment Progress Templates (APTs; IR Exhibit 2.3.d).

(8) Use of data for program improvement: Clarify how data are used to support changes in programs and unit operations. Examples provided deal with the nature of changes in selected programs and the assessment system. How data were used to guide these changes was not clearly articulated.

The examples presented in IR Exhibit 2.3.h of significant changes made to courses, programs, and the unit in response to data gathered from the assessment system reflect decisions made by programs following completion of the university Assessment Progress Template (APT). The entire APT is presented in IR Exhibit 2.3.d. To see the linkages among program goals, assessments, data, and the resulting changes in programs, the entire APT must be examined.

The education programs in the College are working toward establishing a reporting cycle that facilitates using data to guide changes. Programs receive feedback about their APT around October 1; this feedback is sent to the Dean, the Assessment Director, and the department head.

The APT for Middle Education serves as an example of how the process provides the context for data-driven decision making. In fall 2011, Middle Education received feedback from the Center for Assessment and Research Studies on the report it had submitted on or before June 1, 2011. That report included data from the Spring 2010, Summer 2010, and Fall 2010 semesters.

Sections I and II of the APT describe the objectives and the courses/learning experiences. Section III describes the assessments used to measure candidate progress, and Section IV presents the data from those assessments. Section V describes how the assessment results are shared with faculty and incorporated into the planning and governance structure of the program. Finally, Section VI demonstrates how the assessment results were used to contribute to program improvement and enhanced student learning and growth.

In the 2011 APT, the Middle Education program's decision, described in Section VI, to provide support to MAT candidates who had not passed Praxis II can be traced back to the data presented in Section IV.

For all programs, a similar mapping between the decisions presented in IR Exhibit 2.3.h and the data gathered from the agreed-upon assessments can be made by examining the complete APTs presented in IR Exhibit 2.3.d.

During the onsite visit, the BOE members will have an opportunity to talk to Dr. Amy Thelk in more detail about the APT process and how it supports the Unit Assessment System.