Measure 1 (Initial): Completer effectiveness and impact on P-12 learning and development. (R4.1)
We are committed to preparing candidates who are equipped to make a positive impact in their own classrooms, effectively using their knowledge, skills, and dispositions to contribute to their diverse students’ learning and growth. Because we are positioned in a state in which the Department of Education does not provide EPPs with completer impact data, we triangulate data from several sources – surveys, evaluation data from school division partnerships, and focus groups – to ensure that we are meeting this standard.
The Virginia Department of Education (VDOE) has identified eight Virginia Uniform Performance Standards (VUPs) to form the basis of teacher evaluation. We align our assessments to these VUPs as well as to the InTASC standards. Completer effectiveness aligns to VUP Standard 8, Student Academic Progress.
Survey Data
As part of our triangulated data, we use input from completers and their employers acquired through surveys administered by the Virginia Education Assessment Collaborative (VEAC; https://projectveac.org), a volunteer team of assessment and accreditation professionals who work together to coordinate data collection. VEAC provides a centralized assessment structure for Virginia EPPs that standardizes and reduces the complexity of data collection for the VDOE and CAEP. VEAC administers common surveys to completers and their employers for initial and advanced programs for nearly all of the 36 EPPs in Virginia. Two parallel surveys, the Completer Survey and the Employer Survey, are administered by VEAC each spring to participating EPPs. Completers and Employers respond to all items on a four-point scale ranging from 1 (Unacceptable) to 4 (Exemplary). The expected performance for each item is an average score of 3 (Proficient).
Three items on these surveys are aligned to CAEP R4.1 and to VUP Standard 8, Student Academic Progress:
Please rate [completer]’s performance on each of the following:
Item ID: Systematically gathers, analyzes, and uses all relevant data to measure student academic progress, guide instructional content and delivery methods, and provide timely feedback to students, caregivers, and other educators.
Item IG: Engages in practices that result in acceptable, measurable, and appropriate student academic progress.
Item IM: Uses assessment results to inform and adjust practice.
On the Completer Survey, our completers self-reported average ratings between Proficient and Exemplary on a four-point scale on each of these items across cycles. Similarly, on the Employer Survey, employers rated our completers between Proficient and Exemplary on these items across cycles. The most directly related item, IG, reflects this proficiency most clearly:
2023-24 (and Historical) VEAC Initial Licensure Employer and Completer Self-Ratings on Completer Impact (Survey Item IG)
Year | Employer Rating | Completer Self-Rating |
---|---|---|
2023-24 | 3.24 | 3.30 |
2022-23 | 3.15 | 3.40 |
2021-22 | 3.28 | 3.24 |
Although survey data alone are insufficient to demonstrate proficiency in this area, these scores indicate that both completers and their employers feel that teachers who were prepared at JMU are making a positive impact on student learning.
Evaluation Data from District and Educational Support Partnerships
An essential part of our structure for building and sustaining relationships with local P-12 partners to ensure our completers’ success is the MidValley Consortium (MVC), a stakeholder group of four local EPPs and seven P-12 school divisions. As part of our MOU with the school division partners in the MVC, these divisions provide evaluation data to support our continuous improvement and accreditation work. Six P-12 school systems provided teacher evaluation data for JMU completers, including standard-by-standard evaluation scores and overall performance ratings. These school systems vary in racial/ethnic and socioeconomic diversity.
Because each school system in Virginia can design its own evaluation system to address the VUPs, each system uses a unique set of ratings (e.g., Unacceptable/Developing/Proficient/Exemplary; Ineffective/Approaching Effective/Effective). To account for this variability, with rating scales that range from three to five scoring points and often change within systems across years as well as across different systems, we trichotomized the data for ease of interpretation (a brief coding sketch follows the list below):
- Green: Any "good to go" ratings that indicate teachers performing at acceptable or better levels: effective, proficient, highly effective, exemplary, etc.
- Yellow: Any "cautionary" ratings that indicate room for improvement, such as approaching effective, developing, etc.
- Red: Any "danger" ratings that indicate unacceptable or ineffective teaching
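To illustrate how this trichotomization can be applied in practice, the brief sketch below (in Python) maps division-specific rating labels onto the Green/Yellow/Red categories and summarizes the share of Green ratings. The specific label strings and helper names are hypothetical illustrations, not the actual labels used by any particular division.

```python
# Minimal sketch (hypothetical labels and names): collapse division-specific
# evaluation ratings into Green/Yellow/Red for cross-division comparison.
RATING_MAP = {
    # "Good to go" ratings: acceptable or better performance
    "effective": "Green",
    "highly effective": "Green",
    "proficient": "Green",
    "exemplary": "Green",
    # "Cautionary" ratings: room for improvement
    "approaching effective": "Yellow",
    "developing": "Yellow",
    # "Danger" ratings: unacceptable or ineffective teaching
    "ineffective": "Red",
    "unacceptable": "Red",
}

def trichotomize(label: str) -> str:
    """Return the Green/Yellow/Red category for a division-specific rating label."""
    return RATING_MAP[label.strip().lower()]

# Example: summarize ratings for one standard across divisions.
ratings = ["Exemplary", "Proficient", "Developing", "Effective", "Proficient"]
categories = [trichotomize(r) for r in ratings]
green_pct = 100 * categories.count("Green") / len(categories)
print(f"Green: {green_pct:.0f}%")  # Green: 80%
```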
For VUP Standard 8, Student Academic Progress, our completers received over 96% Green ratings, indicating performance that is proficient or exceeds expectations. This is one of our best indications that our completers are effective in contributing to P-12 learner growth, using their knowledge and skills to help their students succeed. Out of 160 teacher evaluations across three years, only one completer from one cohort (a secondary STEM teacher from 2022-23) received a Red rating, indicating ineffective or unacceptable performance on this standard.
Focus Group and Interview Data
Our survey and evaluation data provide large quantities of information, but they do not provide as much depth as other methods of assuring our completers’ impact. Accordingly, we triangulate these data with feedback gathered during small focus groups and interviews with completers. We partnered with JMU’s Center for Assessment and Research Studies Program Assessment Support Services (CARS PASS) so that unbiased assessment experts could conduct our focus groups and interviews using a protocol developed by VEAC to standardize completer focus groups across Virginia EPPs. We conducted a pilot focus group using the VEAC protocol in Fall 2023, and PASS then conducted focus groups or interviews in Spring 2024 and Fall 2024. Across these three outreach cycles, completers from many programs were represented, including Inclusive Early Childhood Education, Special Education, Secondary Education, Elementary Education, and Middle Grades Education. Across all discussions, completers emphasized the importance of using assessments to understand their impact on their students’ learning. They also noted the importance of other types of impact, such as socioemotional growth, relationship building, and “lightbulb” moments that are harder to measure. Completers indicated that standardized testing data are important for understanding impact, but so are less standardized sources of evidence such as student letters or projects. Given the value of these small, in-depth discussions with completers, we are working on methods to expand participation in future years, including piloting an exit survey to gather better completer data.
Measure 2 (Initial and Advanced): Satisfaction of employers and stakeholder involvement. (R4.2|R5.3|RA4.1)
JMU gathers information about employer satisfaction in a variety of ways, using quantitative and qualitative methods, both formal and informal. In Virginia, we are fortunate to have strong collaboration among our EPPs, which we have leveraged to form the Virginia Education Assessment Collaborative (VEAC). VEAC administers two parallel surveys each spring to participating EPPs:
Initial Licensure Completer Survey: Sent to completers who finished their programs one to three years ago and are currently teaching in their licensure areas. This 13-item survey asks completers to rate their satisfaction with 12 aspects of their teacher preparation program and then to evaluate their overall satisfaction with the program. As part of this survey, completers verify employer information, helping VEAC accurately contact employers for the Employer Survey.
Initial Licensure Employer Survey: Sent to employers of our recent completers. This parallel survey presents the same 13 items as the completer survey, reframed to ask employers to rate their satisfaction with our completers in these areas and to evaluate completers’ overall readiness to make an impact in their teaching roles. (Employers receive a separate survey for each completer working in their schools.) For initial programs, employers completing the survey are usually principals.
The purpose of these surveys is explicitly to help measure CAEP Standard R4, and the technical guide provides extensive documentation about how the surveys meet CAEP requirements. Administration procedures are clearly delineated and shared across all participating Virginia EPPs. For initial programs, VEAC designed the surveys to align with the InTASC standards and the VUP standards. Because items are geared directly toward the InTASC/CAEP standards and the professional state standards on which school systems evaluate their teachers, outcomes provide useful and actionable program feedback for continuous improvement. Items were constructed to be direct, unambiguous, neutral, and behavioral (versus opinion-based), and to use the same language as the professional standards. The response options reflect VDOE rating categories for teachers’ performance standards for evaluation: Exemplary (4); Proficient (3); Developing/Needs Improvement (2); and Unacceptable (1). Initial input on survey design was solicited across EPPs at a Virginia Association of Colleges for Teacher Education (VACTE) conference, and before the full rollout of these surveys in 2020-21, VEAC conducted a pilot study with a small sample of institutions in Spring 2020.
We interpret our findings in two ways. First, VEAC provides a summary report of our item-level EPP data benchmarked against overall VEAC data, giving us a mile-high view of our results. Second, to provide more targeted and actionable data to our programs, our Director of Assessment disaggregates the data for each program by cycle and compiles program-specific open-ended comments (a sketch of this disaggregation appears below). This deep dive sometimes results in small sample sizes, but the data are more specific. Balancing these two types of reports provides multiple perspectives to triangulate in pursuit of useful takeaways for programs. We find that the most actionable feedback comes from the open-ended comments provided on these surveys.
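As an illustration of this two-level reporting, the sketch below computes an EPP-wide benchmark and a program-by-cycle deep dive from item-level survey responses. The column names (program, cycle, item, rating) and the sample values are hypothetical placeholders and do not reflect the actual VEAC export format.

```python
# Minimal sketch (hypothetical column names and values; not the actual VEAC
# export format): summarize item-level ratings at the EPP level and
# disaggregated by program and cycle.
import pandas as pd

responses = pd.DataFrame({
    "program": ["Elementary", "Elementary", "Secondary", "Secondary"],
    "cycle":   ["2022-23", "2023-24", "2022-23", "2023-24"],
    "item":    ["IG", "IG", "IG", "IG"],
    "rating":  [3.0, 4.0, 3.0, 3.5],
})

# EPP-wide benchmark: mean, SD, and N per item.
epp_summary = responses.groupby("item")["rating"].agg(["mean", "std", "count"])

# Program-level deep dive: mean and N per program, cycle, and item.
program_summary = (
    responses.groupby(["program", "cycle", "item"])["rating"]
    .agg(["mean", "count"])
    .reset_index()
)

print(epp_summary)
print(program_summary)
```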
At the unit level, the VEAC Employer Survey summary reports show that completers' highest scores tend to be in the areas of learning differences and equity, learning environment, professionalism, and collaboration (InTASC 2, 3, 9, and 10), and lowest in assessment (InTASC 6). Benchmark data from VEAC indicate that assessment tends to be difficult for completers across EPPs, and our scores are similar to VEAC-wide averages. Indeed, across cycles, our EPP average ratings were similar to the VEAC means, with no significant differences. Our goal is to average at least 3.0 on the 4-point scale on all items, indicating that employers rate our completers as Proficient or better in each preparation area. We have consistently met this target, with all items exceeding it across all cycles. Although our overall averages remain high, we noted a trend toward less favorable ratings in 2022-23 as compared to (though not significantly different from) VEAC-wide data. Our averages in 2023-24 improved, so we are optimistic that the 2022-23 ratings dip was an anomaly, possibly related to the joint impacts of COVID-19 and a transition from five-year to four-year teacher preparation programs for many licensure areas. We review and discuss the data trends and potential changes needed in our programs with our Professional Education Coordinating Council (PECC). Overall satisfaction ratings from employers (Item I_O) have indicated consistently high impressions of our completers, with most rated as “Fully ready” or “Mostly ready” to meet the needs of students in their schools.
Results from the 2023-24 Initial Licensure Employer Survey
VEAC Item | VEAC Item Description | InTASC | VUPS 2021 | VEAC M | VEAC SD | VEAC N | JMU M | JMU SD | JMU N |
---|---|---|---|---|---|---|---|---|---|
IA | Demonstrates an understanding of the curriculum, subject content, and the developmental needs of students by providing relevant learning experiences. | 1,2,4 | 1 | 3.31 | 0.62 | 1413 | 3.32 | 0.59 | 247 |
IB | Plans using state standards, the school’s curriculum, engaging and research-based strategies and resources, and data to meet the needs of all students. | 1,2,7,8 | 2 | 3.29 | 0.63 | 1409 | 3.28 | 0.62 | 147 |
IC | Effectively engages students in learning by using a variety of research-based instructional strategies in order to meet individual learning needs. | 1,2,8 | 3 | 3.25 | 0.70 | 1414 | 3.25 | 0.70 | 247 |
ID | Systematically gathers, analyzes, and uses all relevant data to measure student academic progress, guide instructional content and delivery methods, and provide timely feedback to students, caregivers, and other educators. | 6,10 | 4,8 | 3.19 | 0.69 | 1404 | 3.15 | 0.67 | 246 |
IE | Uses resources, routines, and procedures to provide a respectful, positive, safe, student-centered environment that is conducive to learning. | 3 | 5 | 3.31 | 0.72 | 1415 | 3.30 | 0.72 | 247 |
IF | Maintains a commitment to professional ethics, collaborates and communicates effectively, and takes responsibility for and participates in professional growth that results in enhanced student learning. | 1,2,9 | 7 | 3.38 | 0.65 | 1416 | 3.38 | 0.68 | 247 |
IG | Engages in practices that result in acceptable, measurable, and appropriate student academic progress. | 6,7,8 | 8 | 3.26 | 0.66 | 1412 | 3.24 | 0.71 | 247 |
IH | Uses content-aligned and developmentally appropriate instructional technology to enhance student learning. | 7,8 | 3 | 3.32 | 0.60 | 1408 | 3.35 | 0.61 | 246 |
IJ | Demonstrates a commitment to equity by providing instructional practices and classroom strategies that result in culturally inclusive and responsive learning environments and academic achievement for all students. | 2,3,8 | 5,6 | 3.37 | 0.60 | 1408 | 3.38 | 0.61 | 246 |
IL | Collaborates with the learning community (e.g. school personnel, caregivers, and volunteers) to meet the needs of all learners and contribute to a supportive culture. | 3,9,10 | 7 | 3.38 | 0.63 | 1413 | 3.41 | 0.62 | 247 |
IM | Uses assessment results to inform and adjust practice. | 6 | 4,8 | 3.22 | 0.67 | 1399 | 3.24 | 0.66 | 245 |
IN | Engages in reflection on the impact of their teaching practice and adapts to meet the needs of each learner. | 9 | 7 | 3.25 | 0.69 | 1411 | 3.27 | 0.69 | 246 |
In addition to these unit-wide data, each program is provided with its program-specific data for use in continuous improvement. Examining the deep-dive data disaggregated by program and completer cohort confirms that most programs across most cycles show averages in the Proficient to Exemplary range (>3.0) across items. Some smaller programs have average ratings in the Developing to Proficient ranges for some items in some cycles. Open-ended responses are mostly positive, with employers praising our completers' leadership, openness to feedback and growth, and positive contributions to the school environment. A few responses indicate possible areas for improvement, such as classroom management or instruction.
We also examine employer satisfaction through evaluation data for teachers in the MVC partner schools. Teacher evaluation data provided by our MVC P-12 partners show that employers evaluate our completers favorably across all VUP standards. Across cycles and programs, 93% of ratings were at the Proficient/Exemplary levels for all standards. Our completers performed particularly well on the standards of Professional Knowledge, Culturally Responsive Teaching and Equitable Practices, Student Academic Progress, and Professionalism. Very few (<1%) ratings were at the Unacceptable level.
Additionally, on the VEAC Employer Survey, employers are asked to respond to the following question for each completer: “Based on your experiences with this teacher, what best describes the extent to which they were ready to meet the needs of students in your school?” Responding employers were presented with the options:
- 5 – Fully ready (able to have an immediate impact on student learning)
- 4 – Mostly ready (able to successfully meet the needs of most students)
- 3 – Moderately ready (in order to be successful, needed additional training, support, and coaching beyond what is typically provided to beginning teachers)
- 2 – Minimally ready (limited success meeting the needs of students and improving outcomes even with additional supports)
- 1 – Not ready (unable to meet the needs of students even with additional supports)
James Madison University’s mean has consistently been greater than 4.0, indicating employers observed that their JMU-prepared teachers have a high average readiness level to impact student learning.
2023-24 (and Historical) VEAC Initial Licensure Employment Readiness Ratings for JMU and VEAC
Year | EPP | M | N |
---|---|---|---|
2021-22 | JMU | 4.53 | 118 |
2021-22 | All other VEAC EPPs | 4.43 | 1,100 |
2022-23 | JMU | 4.24 | 129 |
2022-23 | All other VEAC EPPs | 4.33 | 1,208 |
2023-24 | JMU | 4.40 | 249 |
2023-24 | All other VEAC EPPs | 4.41 | 1,431 |
Following the successful implementation of the Completer and Employer surveys for initial licensure completers, VEAC piloted similar surveys for advanced licensure programs in 2022-23. Because each advanced licensure program is guided by different professional standards (unlike the initial licensure programs, which are all guided by InTASC), each advanced survey is unique. Items are aligned to the relevant professional standards and to the CAEP RA1.1 general competencies. In all cases, surveys follow the same pattern, administering parallel items to completers and their employers.
The 2022-23 pilot and initial administration of VEAC surveys for advanced licensure programs resulted in limited data for JMU completers and their employers. Employers provided data for three Educational Leadership program completers, rating them all as Proficient or Exemplary in all categories. In 2023-24, we made a concerted effort to gather more and better data for our program completers, resulting in more robust data that enabled us to draw useful conclusions for completers in each program. We interpret our findings through summary and disaggregated reports. VEAC provides a summary report of our item-level EPP data benchmarked against overall VEAC data. The VEAC summary report provides collapsed employer data for three cycles of administration: 2022, 2023, and 2024. VEAC collapsed data across administrations because sample sizes are small, particularly for some programs and in the initial pilot administration. This evidence provides a snapshot of how JMU employer data compare to EPPs across the state. To provide more targeted and actionable data to our programs, our Director of Assessment disaggregates the data for each program by cycle (completers from 2023, 2022, 2021, and earlier as available) and compiles program-specific open-ended comments. This sometimes results in small sample sizes, but the data are more specific. Balancing these two types of reports provides multiple perspectives to triangulate in pursuit of useful takeaways for programs.
For Educational Leadership, 25 employers responded to the VEAC Employer Survey across the 2022-24 administrations. On the survey’s four-point scale, all item averages were between 3 (Proficient) and 4 (Exemplary), suggesting that employers positively evaluate our completers. A closer look at the disaggregated data reveals a pattern of increasing average ratings across cycles: 2020=2.50; 2021=3.08; 2022=3.25; 2023=3.53. Overall employer satisfaction is rated on a scale of 1 (Not ready) to 5 (Fully ready); the JMU overall satisfaction score was 4.23, indicating employers view our completers as mostly to fully ready to make an impact in their roles. All employer comments were positive.
For Literacy, recent cohorts have been small, and the number of responses reflects this (n=2, representing 2020 and 2022 completers). Nevertheless, the data are favorable: all items on the Employer Survey averaged 3.5 or 4.0 on the scale of 1-4, indicating high performance levels for our literacy completers, with scores of Proficient or Exemplary on all items. Overall employer satisfaction on the 5-point readiness scale was 5.0, indicating employers rated our completers as fully ready. Because of the small number of responses, we are unable to disaggregate by cohort, and employers did not provide additional comments.
JMU also monitors employer satisfaction through regular and frequent communication with the local school divisions that employ our graduates. This communication happens in both formal and informal ways. JMU faculty prioritize strong relationships with local school divisions and work to ensure that JMU is meeting their needs.
Stakeholder involvement is a key element of ensuring that JMU’s programs are designed to prepare program completers for success. There are many layers of stakeholder involvement at JMU:
- Professional Education Coordinating Council (PECC)
Stakeholders serve on the JMU’s Professional Education Coordinating Council (PECC), a group that meets monthly to review policies, needs of school partners, and changes in curriculum, assessment, and program/professional requirements. PECC includes program coordinators and/or assistant academic unit heads from every initial and advanced licensure education program at JMU. It also includes representatives from local school divisions.
More information about the PECC is available in the Professional Education Handbook
- MidValley Consortium (MVC)
JMU is part of the MidValley Consortium (MVC), in which we collaborate with three other area EPPs (Mary Baldwin University, Bridgewater College, and Eastern Mennonite University) and seven partner school divisions. The MVC Mentorship and Clinical Experience Team (MCET), representing teachers, administrators, and teacher educators, meets monthly to plan consortium activities. The MCET and the MidValley Consortium Advisory Council meet to evaluate consortium projects, set policy, and approve the annual budget. Each school division and college or university supports consortium activities. In addition to maintaining regular collaborative workflows and relationships, the MVC engages in timely special projects. Recently, the MVC (a) revised the collaborative Student Teaching Performance Assessment to align to the updated Virginia Uniform Performance Standards (reflecting the addition of Culturally Responsive Teaching as Standard 6); and (b) convened to discuss opportunities for innovative field placement formats, including the possibility of leveraging field placement experiences to meet long-term substitute needs in local schools facing teacher shortages.
More information about the MVC is available online and in the MVC Handbook
- College of Education Quality Assurance Advisory Team (CoEQAAT)
In 2022-23, the Director of Assessment, Accreditation, and Accountability for the College of Education convened a new panel of faculty, staff, and students from the College to provide high-level guidance and input on quality assurance issues. The purpose of the College of Education Quality Assurance Advisory Team (CoEQAAT) is to advise on issues related to gathering, summarizing, disseminating, and using educational data. The team brings diverse perspectives from across the College of Education to guide assessment and accreditation practices and to optimize the value and effectiveness of our work. The CoEQAAT includes internal stakeholder representation from each education department and from college staff, as well as initial licensure and advanced licensure student representatives.
- Future Teachers Advisory Council (FTAC)
In 2023-24, the College of Education Quality Assurance Advisory Team (CoEQAAT) recommended creating a better avenue for the Dean of the College of Education to hear directly from students about issues related to education programs. In Spring 2024, we piloted our inaugural Future Teachers Advisory Council (FTAC), an open forum in which current initial licensure education students have dinner and share input with the Dean. We hosted our second “Dinner with the Dean” in Spring 2025. Facilitated by a faculty member, this dinner provides an opportunity for students to share directly with the Dean what is working well and what suggestions they have for program and College improvement.
- Advanced Programs and Stakeholder Involvement
Although the PECC, MVC, and CoEQAAT involve advanced programs, they are often more targeted to the immediate needs of initial licensure programs. Most candidates in advanced licensure programs are already employed in schools; typically, candidates in these programs enter in cohorts of colleagues already holding teaching and other academic roles. Accordingly, the needs and nature of stakeholder involvement in advanced licensure programs differ.
In JMU’s Educational Leadership programs (certificate and M.Ed.), faculty include former school superintendents with rich networks of professional colleagues. The Educational Leadership program also employs approximately 20 adjunct professors, all of whom hold high-level positions in school divisions across Virginia. This model ensures that local school divisions are directly involved in the creation and dissemination of program content and structure. Having such extensive buy-in from Virginia administrators strengthens the program and keeps it responsive and proactive regarding current issues of practice in the schools. Educational Leadership program faculty also meet informally with a network of superintendents at the twice-annual meetings of the Virginia Association of School Superintendents.
The Literacy program holds an Annual Stakeholder Meeting to discuss program structure, assessment, trends in literacy education, policy, and other relevant topics. Participating stakeholders in this advisory group include current students, alumni who are working as Reading Specialists, and other faculty and administrators from Virginia schools. In 2022, the Literacy program adjusted its process for acquiring stakeholder input: in even years it holds synchronous meetings, and in odd years it uses surveys to gather stakeholder input asynchronously. The Literacy program also initiated an annual event, the JMU Literacy Leader Awards, in 2022. This annual event convenes local reading specialists for networking and learning and helps publicize the JMU Literacy program while celebrating high-achieving literacy educators in Virginia schools.
Measure 3 (Initial and Advanced): Candidate competency at program completion. (R3.3|RA3.4)
James Madison University triangulates data from several sources, both internal and external, to ensure candidate competency at completion. These measures build from internal, program-level gateways and key assessments during the program, to student teaching data gathered during final initial licensure field placements, to external proprietary assessments necessary for initial and advanced licensure.
Each education program at JMU uses a variety of program- and unit-level gateways and key assessments to ensure candidates are making sufficient progress throughout the program and are well prepared for employment at program completion. Programs design and administer their own program-level measures and evaluate student progress as part of program-level meetings and college-level annual program assessment requirements.
Our education program curricula are designed to meet state professional and content standards, and our rigorous state approval process certifies alignment of our course content and assessments to address these standards. Because these standards are designed specifically for teachers in each licensure area, including appropriate applications of technology, designing our courses to align directly to these standards and then assessing candidates with measures aligned to these same standards creates clear scaffolding to support candidates from early education courses through field experiences and into in-service teaching roles. Each program uses course-based program-level assessments, identified in course syllabi and linked to their state matrices, as verification that candidates are developing essential knowledge and skills for their specific program.
The MidValley Consortium, composed of local EPPs and P-12 partners, intentionally designed our co-developed Student Teaching Performance Assessment (STPA) to align to the InTASC standards and the Virginia Uniform Performance Standards (VUPs) on which our candidates will be professionally evaluated as in-service teachers. We verify that candidates are ready for their teaching roles by confirming that they meet our rigorous standards on this measure. Before this summative evaluation, however, we examine performance during multiple practicum experiences and at the midpoint of student teaching.
The unit-wide Practicum Performance Assessment (PPA) is a backwards-designed instrument comprising eight essential elements selected from the STPA that we assess during practicum to ensure that candidates are progressing as expected in their early field experiences. The PPA evaluates candidates on a four-point scale corresponding to ratings of Does Not Meet, Developing Towards Expectations, Meets Expectations, and Exceeds Expectations. Our target for practicum is Developing Towards Expectations, which indicates that candidates have room to grow but are on track toward meeting standards for in-service teachers. Across cycles and programs, candidates average in the Developing to Meets ranges on all items. Any ratings of Does Not Meet, though uncommon, require support or intervention plans to help program faculty and university supervisors scaffold candidates for success in areas of concern.
To measure readiness for completion, we examine final scores on the Student Teaching Performance Assessment, and to ensure that we are helping our candidates develop their skills as well as their capacity for reflection and self-development, we measure their growth from the midpoint to the end of student teaching. Cooperating teachers and university supervisors use the STPA to evaluate candidates on a four-point scale corresponding to ratings of Does Not Meet, Developing Towards Expectations, Meets Expectations, and Exceeds Expectations. Our target for student teaching is Meets Expectations, indicating that candidates are ready to meet expectations for in-service teachers. Candidates may have some areas where they are still Developing Towards Expectations, but any ratings of Does Not Meet require intervention plans that must be successfully completed before candidates can finish student teaching. In some cases, candidates may need to repeat student teaching to meet those standards.

Outcome data show that across programs and cycles, candidates exceed the Meets threshold of 3.0 for all items. Some smaller programs have averages of less than 3.0 on some items, but aggregations across cycles and programs show consistent performance at the Meets Expectations level. Growth data on the STPA show that candidates make substantial gains from the midpoint to the end of their student teaching placements. This growth echoes feedback from completers that time in the classroom is essential for developing their teaching skills.

The cooperating teacher also evaluates candidates’ final dispositions at the end of student teaching with the Student Teaching Dispositions Assessment. Results from this assessment also demonstrate our candidates’ readiness for teaching, with average scores across items and cycles exceeding Meets Expectations. Overall, we are satisfied that data on these measures indicate that candidates develop strong skills during their programs and leave prepared to meet expectations for full-time teachers.
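As a simple illustration of the growth calculation described above, the sketch below compares midpoint and final STPA ratings on the four-point scale and reports per-item gains. The item names and scores are hypothetical examples rather than actual candidate data.

```python
# Minimal sketch (hypothetical items and scores): compute a candidate's growth
# on each STPA item from the midpoint to the end of student teaching.
# Ratings use the 1-4 scale (Does Not Meet ... Exceeds Expectations).
midpoint = {"Planning": 2.0, "Assessment": 2.5, "Learning Environment": 3.0}
final    = {"Planning": 3.0, "Assessment": 3.5, "Learning Environment": 3.5}

growth = {item: final[item] - midpoint[item] for item in midpoint}
average_growth = sum(growth.values()) / len(growth)

for item, gain in growth.items():
    print(f"{item}: {midpoint[item]:.1f} -> {final[item]:.1f} (gain {gain:+.1f})")
print(f"Average gain across items: {average_growth:+.2f}")
```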
More information about the practicum assessments, STPA, and ST Dispositions measures is available online: Field Experience Information
All of our completers meet all of the state’s licensing requirements before leaving their education programs. As the most recent (2023) VDOE Biennial Measures report indicates, 100% of our initial licensure completers passed their licensure assessments.
In Educational Leadership, we ensure competency at completion through a combination of grade review, clinical experiences, and culminating assessments. Ultimately, our goal is to produce candidates who are well prepared to pass the School Leaders Licensure Assessment (SLLA) so they are eligible for the Administration and Supervision endorsement to become licensed school leaders. Our candidates have well exceeded the state’s targeted 80% pass rate, with pass rates of 100% in the last two cycles (2023-24, 2022-23) and 96% in the previous cycle (2021-22). Triangulated, these evidence sources show that our candidates leave well-prepared for roles in school leadership.
The Literacy program ensures competency at completion through a combination of grade review, course-embedded key assessments, clinical experiences, and regular dispositions reviews. Ultimately, our goal is to produce candidates who are well prepared to pass the Praxis Reading Specialist licensure exam (formerly, the Reading for Virginia Educators [RVE] Reading Specialist Assessment) so they can obtain the reading specialist endorsement. Our candidates have exceeded the state’s targeted 80% pass rate over the last three cycles with available data, with pass rates of 100% in two of the last three cycles (2023-24, 2019-20) and 80% in 2021-22. Triangulated, these evidence sources show that our candidates leave well-prepared for roles as reading specialists and school leaders in literacy.
JMU annually completes reports documenting its candidates’ program success and ability to meet licensure requirements. The U.S. Department of Education annually gathers data for Title II of the Higher Education Act from all institutions of higher education with teacher preparation programs. These reports provide publicly available data about teacher preparation and certification, including licensure pass rates. JMU also provides data to the Virginia Department of Education (VDOE) for annual and biennial reports of teacher readiness.
JMU’s Title II annual reports and VDOE Biennial Measures reports are available on the College of Education assessment reports site: State & Federal Reporting
Measure 4 (Initial and Advanced): Ability of completers to be hired in education positions for which they have prepared.
Because we ensure rigorous preparation and career readiness for our candidates, JMU’s program completers typically find employment upon (or prior to) graduation.
The Commonwealth of Virginia does not provide comprehensive P-12 employment data to EPPs, and obtaining complete employment data for program graduates is not currently possible. JMU is piloting an exit survey to track this information more thoroughly for completers. However, the JMU Career Center reports annually on career outcomes for graduating students with known data. The most recent available report for the College of Education, covering 2023 graduates, includes outcome data for Bachelor’s-level and graduate-level students. Among Bachelor’s-level graduates from the College of Education with known data, 2% were still seeking employment six months after graduation, 78% were employed, and 20% were pursuing continuing education.
Our Literacy candidates are typically employed in full-time teaching roles when they begin the program, and our career outcomes reflect continuing successful employment, with all Literacy completers across the last three cycles with available outcome data reporting they are employed. For Educational Leadership, across the last three cycles (2021, 2022, and 2023), 100% of our completers who reported career outcomes were successfully employed as of six months after completing the program.
Information about career outcomes for graduates from the College of Education is available on the Career Center website.