Standard 2. The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs.
2.1 How does the unit use its assessment system to improve candidate performance, program quality and unit operations?
James Madison University has a rich history of transforming curriculum through data-based decision-making. Its nationally recognized Center for Assessment and Research Studies (CARS) conducts comprehensive, systematic assessment of all programs.
The JMU Professional Education Unit Assessment System for our initial and advanced programs has been developed collaboratively to reflect the professional education unit Conceptual Framework (CF). Incorporating national and state standards, the system provides continuous data supporting the improvement of candidate performance, program quality and unit operations. Five core areas are reviewed for both initial and advanced programs: content knowledge, pedagogical content knowledge, impact on student learning, diversity and dispositions. This review structures discussion of our CF and of whether candidates are achieving those competencies. In addition, data gathered through the system inform decision-making about resource needs, facilities management and other issues related to efficient unit operations.
The process has been refined through repeated annual cycles. On an annual basis, all programs are required by JMU to complete an Assessment Progress Template (APT) as a way to gather data for the university's accreditation with the Southern Association of Colleges and Schools (SACS). Following the review, feedback is returned to programs and informs the next year's submission, which describes improvements made to the assessment plan.
The APT report interfaces with our Unit Assessment System. Program key assessment data are presented in the APT, and the process promotes a structure for collecting and disseminating data for departmental and program review and interpretation. From January through March, the director of assessment organizes the data and uses them to complete the relevant parts of the APT report. The documents are then forwarded to program faculty, who complete the remaining sections based on conversations about the data. Changes to programs, decisions about programs, and insights into program strengths and weaknesses all rest on this data-driven decision-making process.
The unit has clearly defined admission and program completion criteria and uses data from unit key assessments to monitor candidate performance throughout program transition points. Each program has defined a set of key assessments that are comprehensive and fully integrated within the curriculum. These instruments provide evidence about whether candidates meet the competencies outlined in the CF and in NCATE and professional standards. Key assessments include, but are not limited to, lesson plans, unit plans, case studies, and teacher work samples. Key assessments are both formative and summative in the sense that they may represent a culmination of effort over the length of the course in which they are embedded or lay the groundwork for subsequent activities. From admission to recommendation for licensure, candidates are given clear directions on appeal processes should they fail to meet the progression standards. Documentation of the results of appeals is kept on file, and follow-up action is taken when warranted.
Program expectations are presented to candidates at the start of their programs, as well as in course syllabi and other program materials (e.g., the Teacher Education Handbook). Most key assessments are designed to allow candidates to revise their projects throughout the semester, so the key assessment grade represents their best and final effort.
Key assessments, developed and administered at the course level across the unit, are evaluated by faculty. Final rubric scores for each program key assessment are entered into Tk20, the unit data management system. The director of assessment monitors the completion of this step on a monthly basis, at a minimum.
Tk20 report summaries are updated regularly by the director of assessment, providing readily available up-to-date results. Report summaries are used to develop the departmental annual reports and the university’s assessment progress templates, both of which are due each June.
Data are regularly and systematically collected, compiled, aggregated, summarized, and analyzed. The unit assessment system is continuously evaluated by internal stakeholders (i.e., unit faculty, administrators, and candidates). The professional education unit Assessment Committee, the Professional Education Coordinating Council and community members participating in the School Partnership Committee are all presented with opportunities to review the unit assessment system and provide feedback.
The unit assessment system supports decision making about unit operations, as well as guiding program decisions. While resource allocations at the university are determined based on college or academic unit needs, the professional education unit has been successful in using data to leverage additional support for its operations. For example, an analysis of the expenditures of the field-based operations resulted in a significant increase in the College of Education budget targeted to offset those costs across the unit. In addition, reporting processes and systems required of all academic units and centers at JMU (the Planning Data Base and Annual Reports) have yielded data that support the unit as a whole. One specific example is the addition of a PC lab in Memorial Hall, home to the College of Education and to the majority of courses leading to initial teaching licensure at JMU. Finally, data gathered from candidates across the unit identified the need for extended operating hours in the Education Technology and Media Center; a subsequent request for increased funding to staff the extended hours was granted.
To ensure the fairness of its assessments, the unit has carefully aligned its curriculum with its candidate proficiencies, state licensure regulations, P-12 standards, and national professional standards. These alignments map courses in the curriculum where candidates have had the opportunity to learn and practice the material being assessed. The alignments with national professional standards are regularly reviewed and updated. Course syllabi state the timing and structure of key assessments, how they are scored, and how they contribute toward program completion. This information is also included in such documents as program handbooks or other course materials.
Construct validity is addressed through outcome alignment with the unit conceptual framework and SPA standards. Faculty members use their experience and expertise to produce and revise assessments. Assessment results are regularly reviewed in light of related external assessments such as grades, Praxis testing, GPAs, and program retention/completion.
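To illustrate the kind of convergent-validity check described above, the sketch below computes a Pearson correlation between key-assessment rubric scores and an external measure such as Praxis scores. This is a minimal, hypothetical example: the variable names and data are invented for illustration and do not represent the unit's actual records or tooling.

```python
# Minimal sketch of a convergent-validity check: correlating key-assessment
# rubric scores with an external measure. All data here are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented paired scores for a small cohort of candidates.
rubric_scores = [2.4, 2.7, 2.9, 2.1, 2.8, 3.0, 2.5]
praxis_scores = [168, 175, 181, 160, 178, 185, 170]

print(f"convergent validity: r = {pearson_r(rubric_scores, praxis_scores):.2f}")
```

A strong positive correlation would support the claim that a key assessment measures the intended construct; a weak one would prompt the kind of faculty review described above.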
The unit is refining processes to gauge whether key assessments produce results that are dependable and consistent. The unit measures the extent to which internal consistency and inter-rater reliability are present once an assessment is in place and administered. Several of the unit's key assessments are scored by multiple raters. The unit provides specific training for raters as appropriate, and inter-rater reliability data are regularly collected and analyzed. For example, at the initial level, the unit's Student Teaching Evaluation Form (ST-9) is scored by both the university supervisor and the cooperating teacher, and an analysis of their ratings occurs annually. Both groups receive training during regularly scheduled support meetings, clinical faculty training or refresher sessions, or as part of web-based training modules.
A major factor in avoiding bias within an assessment system or within individual assessments is a combination of having accurate assessments and using them in a consistent fashion. Because this alone is not sufficient, the unit has developed specific procedures to check assessments for bias. Faculty members regularly review unit assessments to ensure that they are free of racial and ethnic stereotypes and that they use culturally sensitive language and task situations. The majority of the key assessments are iterative, course-embedded, untimed, and completed in a setting of the candidates' choosing.
When data are reviewed, faculty members consider all available information to determine whether the results reflect candidate work, program issues, or a combination of both. These conversations form the core of discussion at two to three departmental retreats each year. In addition, programs aim to hold regular meetings outside of monthly departmental meetings. The results of these conversations are chronicled, in part, in the APTs.
2.2.b Continuous Improvement
As the unit continued to review and develop its assessment system, the need for a dedicated assessment position became apparent. A director of assessment and evaluation was hired during the 2006-07 academic year, and a specialist was hired in 2007 to support the unit's growing data-management needs.
In spring 2007, we collected data into a variety of centrally located program-level databases and began to conduct some analyses and reporting. The Teacher Education database, an Access-based system, handled only field experience, clinical practice and licensure data; it did not house candidate performance data. LiveText was utilized briefly but did not meet the data needs of the unit and was dropped. The limitations of these databases indicated a need for a centralized data management system. During the 2007-08 reporting period, time was spent reviewing various electronic data management systems. After weighing and comparing the merits of several systems and identifying the features we desired, the decision was made in May 2007 to purchase the Tk20 system. An ad hoc Tk20 Advisory Team composed of faculty and administrators was established to help oversee the unit's transition to Tk20. Major emphases of the Advisory Team included piloting the use of Tk20 by select instructional faculty within the college and transitioning from reliance on the Teacher Education (TED) database housed in the Education Support Center to confident and increasing reliance on data management in Tk20. This team facilitated training for faculty and staff in the use of Tk20 and developed a plan for assimilating the system into unit operations. In fall 2008, the process of loading CoE candidate data into the system began, and troubleshooting of subsequent issues with the interface between Tk20 and PeopleSoft (JMU's student information management system) commenced. Key assessments were collected, and their transition to electronic rubrics in Tk20 began in fall 2008.
An ongoing endeavor related to assessment is improving the functionality of Tk20. The structure of our unit makes it challenging to require candidates in advanced and non-teaching licensure programs to purchase a subscription to Tk20, necessitating the maintenance of "shadow" data collection processes. Interface issues continue to be encountered and addressed with the vendor. As of July 1, 2011, all new applicant data are entered into Tk20. From September 2008 through June 2011, applications were completed using a web-based interface, and new student data were entered into an Access-based system; prior to that, application materials were completed primarily in paper form. As a final step of their application for admission to Professional Education, initial-level candidates are now required to purchase a subscription to Tk20. In addition, field experience forms are now distributed and collected via Tk20. At this time, Tk20 is the platform used to assess candidate student teaching performance.
The resulting reports are shared with programs to use in developing their APTs. Intentional and meritorious completion of the university Assessment Progress Template has been a focus over the last two years. Starting in spring 2010, the scope of the report was changed to cover data from the prior calendar year. The director of assessment and evaluation compiles the first sections of the report (program objectives, linkages to courses, measurement tools, and collected data) and then asks program faculty to focus on discussing, interpreting, disseminating and acting upon the results. After submission, a panel of raters composed of university faculty members and doctoral students reviews the APTs and provides feedback about the quality of the assessment plan. Because it is completed through this collaborative model, this well-organized and highly reviewed report (at the university, college, unit, departmental and program levels) now serves as the anchoring event for subsequent data conversations and reporting.
In spring 2010, the unit Assessment Committee developed a unit dispositions rubric. The committee decided to draft a general instrument that could be used across programs, settings and time points. The instrument was reviewed and adopted by the Professional Education Coordinating Council in April 2011. The new rubric will be piloted with select initial and advanced programs in fall 2011.
The first ST-9 rater agreement analysis was conducted using spring 2010 final assessment data. At the PECC discussion of those data, committee members asked that mid-block ratings be included in the analysis as well. Both mid-block and final ratings have been included in subsequent analyses (fall 2010 and spring 2011). In general, the analyses demonstrate two important results. First, agreement between raters is higher at the time of final assessment than at mid-block. From a measurement standpoint, this illustrates that reliability is lower when fewer observations are used and raters have less experience with a rubric. The higher percent agreement between the raters at final evaluation (range: 81%-98% in spring 2011) reinforces that final rubric scores are a reliable evaluation of candidate behavior. Second, mean scores are close to the top of the scale both at mid-block (average scores ranged from 2.63 to 2.98 in spring 2011) and at final evaluation (2.57-2.99 in spring 2011). Our unit has discussed this phenomenon on several occasions. Scores should be high at the time of the final evaluation; however, the unit plans to undertake further validity work to determine whether the instrument is sensitive enough to detect growth throughout the semester.
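To make the rater-agreement procedure concrete, the sketch below shows one way exact percent agreement and mean ratings might be computed from paired supervisor and cooperating-teacher ST-9 scores at each time point. The data, rating scale, and function names are hypothetical; the unit's actual analyses are run on its Tk20 records.

```python
# Hypothetical sketch of an ST-9 rater-agreement analysis: exact percent
# agreement between university supervisor and cooperating teacher, plus
# the mean rating across both raters, at mid-block and final evaluation.

def percent_agreement(pairs):
    """Share of items on which the two raters assigned identical scores."""
    agree = sum(1 for supervisor, cooperating in pairs if supervisor == cooperating)
    return 100.0 * agree / len(pairs)

def mean_rating(pairs):
    """Mean of all scores from both raters across all items."""
    scores = [score for pair in pairs for score in pair]
    return sum(scores) / len(scores)

# Invented (supervisor, cooperating teacher) ratings on a 0-3 rubric.
mid_block = [(2, 2), (2, 3), (3, 3), (1, 2), (2, 2), (3, 2)]
final     = [(3, 3), (3, 3), (3, 3), (2, 2), (3, 3), (3, 2)]

for label, pairs in [("mid-block", mid_block), ("final", final)]:
    print(f"{label}: agreement = {percent_agreement(pairs):.0f}%, "
          f"mean = {mean_rating(pairs):.2f}")
```

In this invented example, agreement rises from 50% at mid-block to 83% at final evaluation, and the mean rating rises toward the top of the scale, mirroring the pattern observed in the unit's actual analyses.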
In 2010, the unit instituted what it intends to be an ongoing event. Our university holds two dedicated assessment days each year: one in February, for students with 45-70 completed credits, and one in August, for incoming first-year students. Classes are cancelled on the February assessment day, giving faculty an opportunity to work on projects as a large group. In February 2010, faculty development workshops related to assessment were held. The events included discussion and refinement of the Teacher Work Sample (used by the Middle, Secondary and Mathematics Education department), work on a unit-wide dispositions rubric, focused work by the Diversity Committee, and a discussion of the SPA reporting process. A similar opportunity was offered in February 2011.
The JMU College of Education was one of three colleges/schools of education in the Commonwealth selected by the State Council of Higher Education for Virginia to provide leadership in the development, implementation, and assessment of the statewide Teacher Education and Licensure (TEAL) initiative, a longitudinal educator preparation data-tracking, management, and analysis effort. The primary emphasis of TEAL was to gather and track data on all professional educator preparation students from the point of entry through program completion, with follow-up in the 1st, 3rd, 5th, and 10th years of service. Unfortunately, full implementation of this statewide system (later renamed VITAL) was never realized, and the project was terminated by the state. The unit therefore had to resume developing its own system for disseminating alumni and employer surveys. A graduate alumni survey was administered in fall 2006, and graduate and employer survey data are now collected regularly at the unit level. Graduates are surveyed the semester they are scheduled to graduate, and each spring a cohort of graduates from three years prior is also surveyed. The web-based surveys ask about their current vocational situation, their feelings of preparedness attributable to training at JMU, and their professional attitudes. During the summer, employers at Virginia public schools are surveyed to provide feedback on graduates employed in their schools. Educational Technology and Educational Leadership are developing a survey for the employers of their graduates and plan to deploy it at the close of the fall 2011 semester.
The unit has also responded to several changes in the Virginia Department of Education's Regulations Governing the Licensure of School Personnel and the Review and Approval of Education Programs. One key feature of these changes was the Cycle of the Review and Approval of Education Programs in Virginia and the requirements associated with program compliance and Biennial Measurable Targets. The first report on the 7th Biennial Measurable Target ("Partnerships and collaborations based on P-12 school needs") was submitted to VDOE in July 2008. Faculty in the unit continue to address program matrices and the six other Biennial Measurable Targets. The Education Support Center has continued to revise its database monitoring of candidate performance to reflect changes in prescribed state assessment guidelines (e.g., the Virginia Reading Assessment (VRA); Reading for Virginia Educators (RVE, which replaced the VRA effective July 1, 2011); the Virginia Communication and Literacy Assessment (VCLA); and the School Leaders Licensure Assessment (SLLA)).