The following list highlights the ways that CARS faculty, staff, and graduate assistants contribute to the higher education assessment community. Some of these are milestones we have celebrated; others represent ongoing work. We believe each of these efforts has added value and helped elevate the standards of assessment practice in higher education.

  1. Connection to the Assessment and Measurement PhD Program  
  2. Learning Improvement (Use of Results)  
  3. Assessment Day 
  4. Motivation Research in Low-Stakes Testing Environments 
  5. Implementation Fidelity 
  6. Advanced Measurement Techniques Applied to Assessment Practice  
  7. Defining and Evaluating Assessment Quality (Meta-Assessment)  
  8. Partnerships with University Content Experts to Develop Assessments and Communicate Findings  
  9. Professional Development in Assessment (Assessment 101)  
  10. Competency-Based Testing  

CARS receiving the 2022 Banta Lifetime Achievement Award at the Assessment Institute


Connection to the Assessment and Measurement PhD Program

Initially seeded by the Fund for the Improvement of Postsecondary Education, the assessment and measurement doctoral program at JMU was launched in 1998. At the time, few colleges possessed the expertise to conduct useful assessment. In fact, Dr. Peter Ewell characterized the early campus assessment initiators as “happy amateurs.” The Center for Assessment and Research Studies (CARS) at JMU and the subsequent assessment and measurement PhD program responded to the national need for innovative, high-quality assessment. JMU added a Master’s degree in psychological sciences with a concentration in quantitative psychology in 2001. Many of these students matriculate into the doctoral program.

Most graduate students in the Ph.D. and M.A. programs are also awarded graduate assistantships in the Center, providing an opportunity to apply the skills they are learning in real time. Moreover, many of the publications cited on this webpage are co-authored by our graduate students.

More information on JMU’s Assessment and Measurement Ph.D. program.  

More information on JMU’s Master’s in psychological sciences (quantitative concentration).  

(Back to Top) 

Learning Improvement (Use of Results)

While student learning improvement is championed on many campuses, few universities have evidenced such improvement. In 2014, CARS faculty and students published a simple model for learning improvement through NILOA (the National Institute for Learning Outcomes Assessment). The “weigh pig, feed pig, weigh pig” model demonstrates how universities can evidence student learning improvement through a process of assessment, intervention, and re-assessment.

Internally, CARS collaborated with JMU’s administration and the Center for Faculty Innovation to pilot the model on campus. Externally, CARS is partnering with prominent organizations and institutions to shift academe from a culture of assessment to a culture of improvement.  

(Back to Top) 

Assessment Day

How does a university with more than 20,000 undergraduates assess general education learning outcomes and other large-scale initiatives? JMU conducts what is known as Assessment Day (A-Day, for short). Students participate in two assessment days: once as incoming first-year students in August, and again after earning 45-70 credit hours (typically in the spring semester of their second year). Students complete the same tests on both occasions. This pretest-posttest design allows the university to gauge how much students have learned as a function of their general education coursework. 
Assessment Day enables the university to answer important questions, asked increasingly by students, parents, employers, and legislators, about what a college degree is worth. Student learning data also help the university improve its educational offerings. 
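
For readers who want to see what this pretest-posttest comparison looks like in practice, here is a minimal sketch in Python; the scores, scale, and sample size are fabricated for illustration, not JMU data:

```python
# Illustrative pre/post gain computation (hypothetical data, not JMU's).
import numpy as np

pre = np.array([14, 18, 11, 20, 16, 13, 17, 15])   # first-year A-Day scores
post = np.array([17, 21, 15, 22, 18, 17, 20, 18])  # scores after 45-70 credit hours

diff = post - pre
gain = diff.mean()
# Standardized gain (Cohen's d for paired scores): mean change / SD of change
d = gain / diff.std(ddof=1)
print(f"Mean gain: {gain:.2f} points; standardized gain d = {d:.2f}")
```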

CARS is responsible for coordinating JMU’s A-Day—setting up the test conditions, proctoring the assessments, and analyzing the data collected. This data collection process is widely recognized as one of the most successful and longstanding in the nation, having been in place for over 30 years.  

(Back to Top) 

Motivation Research in Low-Stakes Testing Environments

Imagine that a university carefully selected tests and had a representative sample of students: a good start to assessment, no doubt. However, if performance on the test matters very little to students personally, they may not be motivated to do well. In that situation, despite otherwise robust methodology, test scores would not reflect what students actually know, think, or are able to do. This all-too-common situation is why JMU has studied motivation in low-stakes settings and how to improve it.

A few practical procedures that allow CARS to address validity issues related to motivation include:

  • Training test proctors to keep students motivated.  
  • Customizing test instructions to increase their relevance to students.  
  • Evaluating how much effort students give during test taking.  
  • Removing (i.e., filtering) data from students who expended little to no test-taking effort; a minimal sketch of this filtering step follows the list.  
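
As an illustration of the filtering step, here is a minimal sketch assuming each examinee has a self-reported effort rating (say, on a 1-5 scale) recorded alongside the test score; the records and the 3.0 cutoff are hypothetical:

```python
# Minimal motivation-filtering sketch (hypothetical effort scale and cutoff).
records = [
    {"id": 1, "score": 78, "effort": 4.2},
    {"id": 2, "score": 41, "effort": 1.5},  # low effort: filtered out
    {"id": 3, "score": 66, "effort": 3.8},
]

EFFORT_CUTOFF = 3.0  # hypothetical threshold on a 1-5 self-report effort scale

retained = [r for r in records if r["effort"] >= EFFORT_CUTOFF]
mean_all = sum(r["score"] for r in records) / len(records)
mean_kept = sum(r["score"] for r in retained) / len(retained)
print(f"Mean score, all examinees: {mean_all:.1f}; after filtering: {mean_kept:.1f}")
```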

(Back to Top) 


Implementation Fidelity

Think for a moment about a doctor prescribing a drug to a patient experiencing an illness. Two weeks later, the patient returns, describing persistent medical issues. During the consultation, the doctor would ask how much of the drug the patient took, and how often.

This situation is analogous to JMU’s implementation fidelity work in student affairs programs. The success of a planned program, such as Orientation, relies not only on how the program was designed (prescribed) but also on whether it was implemented as planned (whether the medicine was taken at the appropriate dosage for the appropriate time).

CARS takes a creative approach to measuring implementation fidelity. For example, graduate students have posed as first-year students experiencing Orientation. These auditors take notes on the quality and duration of the program, the topics covered, students’ level of engagement, and more.

This check to ensure the program is implemented as prescribed allows program leaders to identify areas for improvement. Instead of changing the designed program because students are not learning, implementation fidelity assessment ensures that programmatic changes are based on an accurate assessment of what is actually being taught (i.e., the delivered program).   
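
To make the idea concrete, the sketch below summarizes a hypothetical auditor checklist into a simple adherence rate; the components and data are invented for illustration and do not represent CARS’s actual fidelity protocol:

```python
# Hypothetical fidelity checklist from an Orientation audit: did each
# prescribed program component occur as designed?
checklist = {
    "covered_academic_expectations": True,
    "covered_campus_resources": True,
    "small_group_discussion_held": False,   # skipped by this facilitator
    "session_ran_full_60_minutes": True,
    "activity_matched_lesson_plan": False,
}

delivered = sum(checklist.values())
adherence = delivered / len(checklist)
print(f"Adherence: {delivered}/{len(checklist)} components ({adherence:.0%})")
```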

(Back to Top)  


Advanced Measurement Techniques Applied to Assessment Practice

The methodological tools available to assessment practitioners today go beyond ANOVA and coefficient alpha and include such techniques as structural equation modeling, item response theory, hierarchical linear modeling, generalizability theory, and mixture modeling. CARS faculty and students contribute to the use, development, and study of such advanced methodologies not only in higher education assessment, but in educational research more broadly. 
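
As a small taste of one technique on this list, the sketch below evaluates the item response function of the two-parameter logistic (2PL) IRT model; the item parameters are made up for illustration:

```python
# Two-parameter logistic (2PL) IRT model: probability of a correct response
# as a function of examinee ability (theta), item discrimination (a), and
# item difficulty (b). Parameters below are made up for illustration.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"theta = {theta:+.1f}  P(correct) = {p_correct(theta, a=1.2, b=0.5):.3f}")
```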

Advanced methodologies can be challenging to understand. CARS faculty and students are known for their ability to explain complicated techniques and concepts in understandable ways. This union of technical expertise and effective communication has resulted not only in award-winning teaching and highly sought-after workshops, but also in several publications that serve as “go-to” resources for applied methodologists.

(Back to Top) 


Defining and Evaluating Assessment Quality (Meta-Assessment)

Many institutions struggle to convince accreditors of their assessment quality. To address this issue, CARS worked with university stakeholders to articulate various levels of assessment quality via a rubric. For example, the rubric helps faculty and administrators distinguish among beginning, developing, good, and exemplary statements of learning objectives.  

At JMU, all academic degree programs submit assessment reports, and trained raters provide feedback on each report via the rubric. This feedback is shared with faculty assessment coordinators, department heads, and upper administration. It is also aggregated across all programs, providing a university-level index of assessment quality.
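
Here is a minimal sketch of that aggregation, assuming the four rubric levels are scored 1 (beginning) through 4 (exemplary); the rubric elements, program names, and ratings are hypothetical:

```python
# Hypothetical meta-assessment aggregation: rubric ratings (1 = beginning,
# 2 = developing, 3 = good, 4 = exemplary) per program and rubric element.
from statistics import mean

reports = {
    "Program A": {"objectives": 4, "methods": 3, "results": 3, "use_of_results": 2},
    "Program B": {"objectives": 3, "methods": 2, "results": 2, "use_of_results": 2},
    "Program C": {"objectives": 4, "methods": 4, "results": 3, "use_of_results": 3},
}

# Program-level quality: average rating across rubric elements
program_means = {program: mean(ratings.values()) for program, ratings in reports.items()}

# University-level index: average of the program-level means
university_index = mean(program_means.values())
print(program_means)
print(f"University-level assessment-quality index: {university_index:.2f}")
```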

(Back to Top) 

Partnerships with University Content Experts to Develop Assessments and Communicate Findings

Building a good assessment takes a partnership between content experts (faculty and staff) and assessment experts. Developing student learning outcomes and creating test items and rubrics to assess student learning is an iterative process. Content experts articulate what students should know, think, or be able to do. Assessment experts help design assessments. 

By partnering, these teams ensure that assessment instruments fit program learning and development objectives. In fact, 90% of the assessments used on campus are designed by faculty and staff at JMU (e.g., the Ethical Reasoning Rubric). Content experts and assessment practitioners also work together to ensure findings are clearly communicated. See, for example: Halonen, J., Harris, C. M., Pastor, D. A., Abrahamson, C. E., & Huffman, C. J. (2005). Assessing general education outcomes in introductory psychology. In D. S. Dunn & S. Chew (Eds.), Best Practices in Teaching Introduction to Psychology (pp. 195-210). Mahwah, NJ: Lawrence Erlbaum Associates.

(Back to Top) 

Professional Development

CARS has offered professional development opportunities in assessment, in one form or another, for over a decade; you can read more about our workshops here. Our flagship offering, Assessment 101, is a workshop that covers the fundamentals of the assessment cycle.

Assessment 101 is offered multiple times each year, accommodating roughly 30 faculty, staff, and students per session. Although originally intended as a professional development opportunity for JMU faculty alone, Assessment 101 is now open to the public and welcomes participants from other institutions, both in the United States and internationally. 

(Back to Top) 

Competency-Based Testing

Recently, higher education has focused its attention on competency-based testing, a practice that has been a part of JMU’s culture for years. For example, all students must pass JMU’s Madison Research Essentials Test (MREST)—a test of information literacy skills—within their first academic year.  

Because JMU deems information literacy skills fundamental to the success and maturation of an engaged and enlightened citizen, determining what score qualifies as passing was an important task. CARS faculty are experts in standard-setting methods such as the Bookmark and Angoff procedures, and they assisted in setting cut scores for the MREST along with many other assessments at JMU.
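
To illustrate the Angoff approach: each judge estimates, item by item, the probability that a minimally competent examinee would answer correctly, and the cut score is the sum of the item-level mean estimates. Below is a minimal sketch with fabricated ratings (not MREST data):

```python
# Illustrative Angoff standard setting (fabricated ratings, not MREST data).
# Each judge estimates, per item, the probability that a minimally competent
# examinee would answer correctly. The cut score is the sum across items of
# the mean estimate over judges.
judge_ratings = [
    [0.70, 0.55, 0.80, 0.60],  # judge 1 (items 1-4)
    [0.65, 0.60, 0.85, 0.50],  # judge 2
    [0.75, 0.50, 0.90, 0.55],  # judge 3
]

n_items = len(judge_ratings[0])
item_means = [
    sum(judge[i] for judge in judge_ratings) / len(judge_ratings)
    for i in range(n_items)
]
cut_score = sum(item_means)  # expected raw score of a borderline examinee
print(f"Item means: {[round(m, 2) for m in item_means]}")
print(f"Angoff cut score: {cut_score:.2f} out of {n_items}")
```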

For example, faculty in the Department of Social Work have collaborated with CARS faculty on an assessment that social work seniors must pass in order to graduate. Other programs at JMU have set standards to help interpret student performance: How many students meet faculty expectations on the assessment?

(Back to Top) 
