
Top 10 Initiatives: JMU’s Leading Contributions to Assessment

CARS faculty, staff, and graduate assistants generated and endorsed the following list of 10 initiatives. These 10 contributions to assessment highlight the work that JMU assessment has done, and is still doing, to elevate higher education assessment practices.

*Denotes graduate student authorship.  



1. Connection to the Assessment and Measurement PhD program

Initially seeded by the Fund for the Improvement of Postsecondary Education, the assessment and measurement doctoral program at JMU was launched in 1998. At the time, few colleges possessed the expertise to conduct useful assessment. In fact, Dr. Peter Ewell characterized the early campus assessment initiators as “happy amateurs.” The Center for Assessment and Research Studies (CARS) at JMU and the subsequent assessment and measurement PhD program responded to the national need for innovative, high-quality assessment.

In 2001, JMU added a Master’s degree in psychological sciences with a concentration in quantitative psychology. Many of these students continue into the doctoral program.

Students and faculty in these PhD and Master’s programs staff CARS—making the center one of the largest of its kind. By working closely with faculty mentors to assist academic and student affairs programs at JMU, students learn advanced skills in measurement, statistics, consultation, and higher education policy.

Graduates of the program are sought after by universities, testing companies, and other businesses for their blend of technical skills and practical know-how. Moreover, many of the publications cited below are co-authored by our graduate students.

Erwin, T. D., & Wise, S. L. (2002). A scholar-practitioner model for assessment. In T. W. Banta (Ed.), Building a Scholarship of Assessment. San Francisco: Jossey-Bass.

Wise, S. L. (2002). The assessment professional: Making a difference in the 21st century. Eye on Psi Chi, 6(3), 16-17.

More information on JMU’s assessment and measurement PhD program.

More information on JMU’s Master’s in psychological sciences (quantitative concentration).

2. Learning Improvement (Use of Results)

While student learning improvement is championed on many campuses, few universities have evidenced such improvement. In 2014, CARS faculty and students published a simple model for learning improvement through NILOA (the National Institute for Learning Outcomes Assessment). The “weigh pig, feed pig, weigh pig” model demonstrates how universities can evidence student learning improvement through a process of assessment, intervention, and re-assessment.

Internally, CARS is collaborating with JMU’s administration and the Center for Faculty Innovation to pilot the model on campus. Externally, CARS is partnering with prominent organizations and institutions to shift academe from a culture of assessment to a culture of improvement.  

Fulcher, K. H., *Good, M. R., *Coleman, C. M., & *Smith, K. L. (2014, December). A simple model for learning improvement: Weigh pig, feed pig, weigh pig. (NILOA Occasional Paper). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

3. Assessment Day

How does a university with more than 19,000 undergraduates assess general education learning outcomes? JMU conducts what is known as Assessment Day (A-Day, for short). Students participate in two assessment days—once as incoming first-year students during August Orientation and again after earning 45-70 credit hours (typically in the spring semester of their second year). Students complete the same tests at both times. This pre- and post-test design allows the university to gauge how much students have learned as a function of their general education coursework.
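
As a rough illustration of how such a pre- and post-test design can be summarized, the sketch below computes a mean gain and a standardized effect size from paired scores. The file and column names are hypothetical, not JMU's actual data layout.

```python
# Minimal sketch: summarizing paired pre/post Assessment Day scores.
# The CSV file and column names are hypothetical illustrations.
import numpy as np
import pandas as pd

scores = pd.read_csv("aday_scores.csv")     # one row per student (hypothetical)
pre = scores["pretest_score"].to_numpy()    # August Orientation administration
post = scores["posttest_score"].to_numpy()  # administration at 45-70 credit hours

gain = post - pre
effect_size = gain.mean() / gain.std(ddof=1)  # standardized mean gain for a paired design

print(f"Mean gain: {gain.mean():.2f}")
print(f"Standardized effect size: {effect_size:.2f}")
```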

Assessment Day enables the university to answer important questions asked increasingly by students, parents, employers, and legislators about what a college degree is worth. Student learning data also help the university improve its educational offerings.

CARS is responsible for coordinating JMU’s A-Day—setting up the test conditions, proctoring the assessments, and analyzing the data collected. This data collection process is widely recognized as one of the most successful and longstanding in the nation, having been in place for over 25 years.

Hathcoat, J. D., Sundre, D. L., & *Johnston, M. M. (2015). Assessing college students’ quantitative and scientific reasoning: The James Madison University story. Numeracy, 8(1), Article 2.

Lau, A. R., Swerdzewski, P. J., Jones, A. T., Anderson, R. D., & Markle, R. E. (2009). Proctors matter: Strategies for increasing examinee effort on general education program assessments. Journal of General Education, 58(3), 196-217.

Pieper, S. L., Fulcher, K. H., Sundre, D. L., & Erwin, T. D. (2008). “What do I do with the data now?”: Analyzing assessment information for accountability and improvement. Research and Practice in Assessment, 3, 4-10.

4. Motivation Research and Intervention in Low-Stakes Testing Environments

Imagine that a university carefully selected tests and had a representative sample of students: a good start to assessment, no doubt. However, if performance on the test matters very little to students personally, they may not be motivated to do well. In this situation, despite otherwise robust methodology, test scores would not reflect what students actually know, think, or are able to do. This all-too-common situation is why JMU has studied motivation and how to improve it in low-stakes situations.

A few practical procedures that allow CARS to examine validity issues related to motivation include:

  • Training test proctors to keep students motivated.
  • Customizing test instructions to increase relevancy to students.
  • Evaluating how much effort students give during test taking.
  • Removing (i.e., filtering) data from students who expended little to no test-taking effort, as sketched below.
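
A minimal sketch of that final filtering step, assuming a hypothetical effort measure (e.g., a Student Opinion Scale effort subscale) and an illustrative cutoff; neither the column names nor the threshold reflect JMU's actual procedures.

```python
# Illustrative motivation filter: drop records from examinees whose
# self-reported effort falls below a cutoff. Column names and the cutoff
# value are hypothetical.
import pandas as pd

data = pd.read_csv("assessment_results.csv")  # hypothetical file
EFFORT_CUTOFF = 10                            # illustrative effort-subscale cutoff

motivated = data[data["sos_effort"] >= EFFORT_CUTOFF]
removed = len(data) - len(motivated)

print(f"Filtered out {removed} low-effort records before analysis.")
```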

Barry, C. L., Horst, S. J., Finney, S. J., Brown, A. R., & Kopp, J. (2010). Do examinees have similar test-taking effort? A high-stakes question for low-stakes testing. International Journal of Testing, 10(4), 342-363.

Finney, S. J., Sundre, D. L., *Swain, M. S., & *Williams, L. M. (in press). The validity of value-added estimates from low-stakes testing contexts: The impact of change in test-taking motivation and test consequences. Educational Assessment.

Swerdzewski, P. J., Harmes, J. C., & Finney, S. J. (2011). Two approaches for identifying low-motivated students in a low-stakes assessment context. Applied Measurement in Education, 24(2), 162-188.

Thelk, A., Sundre, D. L., Horst, S. J., & Finney, S. J. (2009). Motivation matters: Using the Student Opinion Scale (SOS) to make valid inferences about student performance. Journal of General Education, 58(3), 131-151.

Wise, S. L., & DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1), 1-17.

5. Implementation Fidelity Applied to Student Affairs Programs

Think for a moment about a doctor prescribing a drug to a patient experiencing an illness. Two weeks later, the patient returns to the doctor, describing persistent medical issues. During the consultation, the doctor would ask the patient how much and how often the patient took the prescribed drug.

This situation is analogous to JMU’s implementation fidelity checks in student affairs programs. The success of a planned program, such as Orientation, relies not only on how the program was designed (prescribed) but also on whether the program was implemented as planned (the medicine was taken in the appropriate dosage for the appropriate time).

CARS is creative about how to measure implementation fidelity. For example, graduate students have posed as first-year students experiencing Orientation. These auditors take notes about the quality and duration of the program, topics covered, students’ level of engagement, and more.

This check to ensure the program is implemented as prescribed allows program leaders to identify areas for improvement. Instead of changing the designed program because students are not learning, implementation fidelity assessment ensures that programmatic changes are based on an accurate assessment of what is actually being taught (i.e., the delivered program). 
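
One simple way such auditor checklists can be summarized, shown here as a rough sketch, is the proportion of prescribed program components actually delivered as designed. The component names below are invented for illustration; they are not JMU's Orientation checklist.

```python
# Hypothetical fidelity summary: proportion of prescribed Orientation
# components an auditor observed being delivered as planned.
prescribed_components = {
    "academic_expectations_session": True,   # delivered as designed
    "honor_code_discussion": True,
    "library_resources_overview": False,     # skipped or shortened
    "small_group_reflection": True,
}

delivered = sum(prescribed_components.values())
adherence = delivered / len(prescribed_components)

print(f"Adherence: {adherence:.0%} of prescribed components delivered as planned.")
```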

*Gerstner, J. J., & Finney, S. J. (2013). Measuring the implementation fidelity of student affairs programs: A critical component of the outcomes assessment cycle. Research and Practice in Assessment, 8, 15-28.

*Swain, M. S., Finney, S. J., & *Gerstner, J. J. (2013). A practical approach to assessing implementation fidelity. Assessment Update, 25(1), 5-7, 13.

*Fisher, R., *Smith, K. L., Finney, S. J., & *Pinder, K. E. (2014). The importance of implementation fidelity data for evaluating program effectiveness. About Campus, 19, 28-32.

6. Advanced Measurement Techniques Applied to Assessment Practice

The methodological tools available to assessment practitioners today go beyond ANOVA and coefficient alpha and include such techniques as structural equation modeling, item response theory, hierarchical linear modeling, generalizability theory, and mixture modeling. CARS faculty and students contribute to the use, development, and study of such advanced methodologies not only in higher education assessment, but in educational research more broadly.
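
As one concrete illustration of these techniques, the two-parameter logistic (2PL) item response theory model expresses the probability of a correct response as a function of examinee ability and item parameters. The sketch below uses invented parameter values, not estimates from JMU data.

```python
# 2PL item response function: probability of a correct response given
# examinee ability (theta), item discrimination (a), and item difficulty (b).
# Parameter values are illustrative only.
import numpy as np

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)          # a range of examinee abilities
print(p_correct(theta, a=1.2, b=0.5))  # one moderately difficult, discriminating item
```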

Advanced methodologies can be challenging to understand. CARS faculty and students are known for their ability to explain complicated techniques and concepts in understandable ways. CARS’ union of technical expertise and effective communication skills has resulted not only in award-winning teaching and highly sought-after workshops, but also in several publications that serve as “go-to” resources for applied methodologists.

DeMars, C. (2010). Item Response Theory. New York, NY: Oxford University Press.

Finney, S. J., & DiStefano, C. (2013). Nonnormal and categorical data in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed.) (pp. 439-492). Charlotte, NC: Information Age Publishing, Inc.

Pastor, D. A., & Gagné, P. (2013). Mean and covariance structure mixture models. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed.) (pp. 343-294). Charlotte, NC: Information Age Publishing, Inc.

Pastor, D. A., Kaliski, P. K., & Weiss, B. A. (2007). Examining college students’ gains in general education. Research and Practice in Assessment, 2, 1-20.

Taylor, M. A., & Pastor, D. A. (2013). An application of generalizability theory to evaluate the technical quality of an alternate assessment. Applied Measurement in Education, 26(4), 279-297.

7. Defining and Evaluating Assessment Quality (Meta-Assessment)

Many institutions struggle to convince accreditors of their assessment quality. To address this issue, CARS worked with university stakeholders to articulate various levels of assessment quality via a rubric. For example, the rubric helps faculty and administrators distinguish among beginning, developing, good, and exemplary statements of learning objectives.

At JMU, all academic degree programs submit assessment reports. Subsequently, trained raters provide feedback via the rubric. This feedback is shared with faculty assessment coordinators, department heads, and upper administration. It is also aggregated across all programs, providing a university-level index of assessment quality.
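
As a hedged sketch of the aggregation idea, the example below converts rubric levels to numeric codes and averages them across programs to form a single index. The level labels mirror the rubric described above; the program names and ratings are invented.

```python
# Hypothetical aggregation of meta-assessment rubric ratings across programs.
LEVELS = {"beginning": 1, "developing": 2, "good": 3, "exemplary": 4}

# Invented ratings for one rubric element (e.g., statements of learning objectives).
program_ratings = {
    "Program A": "good",
    "Program B": "developing",
    "Program C": "exemplary",
}

index = sum(LEVELS[rating] for rating in program_ratings.values()) / len(program_ratings)
print(f"University-level index for this rubric element: {index:.2f} (1-4 scale)")
```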

Fulcher, K. H., & *Bashkov, B. M. (2012, November/December). Do we practice what we preach? The accountability of an assessment office. Assessment Update, 24(6), 5-7, 14.

Fulcher, K. H., & *Orem, C. D. (2010). Evolving from quantity to quality: A new yardstick for assessment. Research and Practice in Assessment, 5, 13-17.

*Rodgers, M., Grays, M. P., Fulcher, K. H., & Jurich, D. P. (2012). Improving academic program assessment: A mixed methods study. Innovative Higher Education, 38(5), 383-395.

8. Partnerships with University Content Experts to Develop Tests

Building a good test takes a partnership between content experts (faculty and staff) and assessment experts. Developing student learning outcomes and creating test items and rubrics to assess student learning is an iterative process. Content experts articulate what students should know, think, or be able to do. Assessment experts help design tests.

By partnering, these teams ensure that assessment instruments fit program learning and development objectives. In fact, 90% of the assessments used on campus are designed by faculty and staff at JMU (e.g., the Ethical Reasoning Rubric).

Cameron, L., Wise, S. L., & *Lottridge, S. M. (2007). The development and validation of the information literacy test. College & Research Libraries, 68(3), 229-237.

Finney, S. J., Pieper, S. L., & Barron, K. E. (2004). Examining the psychometric properties of the Achievement Goal Questionnaire in a general academic context. Educational and Psychological Measurement, 64(2), 365-382.

Halonen, J., Harris, C. M., Pastor, D. A., Abrahamson, C. E., & Huffman, C. J. (2005). Assessing general education outcomes in introductory psychology. In D. S. Dunn and S. Chew (Eds.), Best Practices in Teaching Introduction to Psychology (pp. 195-210). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Kopp, J. P., Zinn, T. E., Finney, S. J., & Jurich, D. P. (2011). The development and evaluation of the Academic Entitlement Questionnaire. Measurement and Evaluation in Counseling and Development, 44(2), 105-129.

9. Assessment Fellows

Lack of time is often the largest obstacle between faculty and quality assessment. The JMU Assessment Fellows program, held every summer, provides faculty with the time and support to refine assessment processes. Fellows work with CARS faculty and graduate assistants on projects decided in conjunction with their home departments and deans.

10. Competency-based Testing

Recently, higher education has focused its attention on competency-based testing, a practice that has been a part of JMU’s culture for years. For example, all students must pass JMU’s Madison Research Essentials Test (MREST)—a test of information literacy skills—within their first academic year.

Because JMU deems information literacy skills fundamental to the success and maturation of an engaged and enlightened citizen, determining what score qualifies as passing was an important task. CARS faculty are experts in standard-setting methods such as the Bookmark and Angoff procedures, and they assisted in setting cut scores for the MREST along with many other assessments at JMU.
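
In a traditional Angoff procedure, each panelist estimates the probability that a minimally competent examinee would answer each item correctly, and the cut score is typically the average of panelists' summed item ratings. The sketch below uses invented ratings; they are not MREST judgments.

```python
# Simplified Angoff standard setting with invented panelist ratings.
# Rows = panelists, columns = items; each value is the estimated probability
# that a minimally competent examinee answers the item correctly.
import numpy as np

ratings = np.array([
    [0.80, 0.65, 0.90, 0.55],
    [0.75, 0.70, 0.85, 0.60],
    [0.85, 0.60, 0.95, 0.50],
])

cut_score = ratings.sum(axis=1).mean()  # average of panelists' summed ratings
print(f"Recommended cut score: {cut_score:.1f} out of {ratings.shape[1]} items")
```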

For example, faculty in the department of social work have collaborated with CARS faculty on an assessment which social work seniors must pass to graduate. Other programs at JMU have set standards to help interpret student performance: How many students meet faculty expectations on the assessment?

DeMars, C. E., Sundre, D. L., & Wise, S. L. (2002). Standard setting: A systematic approach to interpreting student learning. Journal of General Education, 51(1), 1-20.