The Madison Collaborative: Campus-wide ethical reasoning initiatives

By Elizabeth R. H. Sanchez (‘15M)

By now, nearly every academic and student affairs employee at James Madison University understands the importance of assessment. Higher education institutions nationwide complete assessment reports for accreditation purposes and to gauge how students are performing. However, the Center for Assessment and Research Studies (CARS) at JMU is urging faculty to look at assessment in a new light: to treat the assessment process as a way to evidence and improve student learning. This nationally significant practice can be distilled into three demanding yet concrete steps: assess students, make programmatic changes based on those results, and re-assess students to determine whether the changes led to learning improvement.

Otherwise known as “closing the loop,” this final step of the assessment process is often the least articulated yet arguably the most important. JMU is off to a strong start: many programs are assessing students and changing curricula and pedagogy based on the data they collect. The Madison Collaborative (MC), James Madison University’s homegrown initiative to teach ethical reasoning skills through an eight-question critical thinking approach, is one of the few programs nationwide to consciously plan for assessment, intervention, and re-assessment throughout the entire process of its development.

According to Dr. William Hawk, professor of Philosophy and Religion and MC Chair, the ethical reasoning skills improvement initiative began with a stark observation: his instruction was ineffective at teaching students how to apply what they learned in the classroom to the real moral dilemmas they face, and will continue to face, daily. Hawk’s research shows that developed ethical reasoning skills are valued not only by employers, professional organizations, and even James Madison himself, but also by students. Yet, as Hawk states, “students were not being intentionally taught how to make sound ethical decisions.”

Moving from a mere observation to a goal of improving student learning is no easy task. In fact, for the Madison Collaborative, it took a supportive and collaborative group of faculty, staff, administrators, students, and assessment specialists at CARS to create measurable student learning outcomes (SLOs), develop multiple assessment instruments that provide reliable and valid data aligned with each SLO, and cultivate the Eight Key Questions ethical reasoning framework that met accrediting agency SACSCOC’s standards.


“It took a lot of convincing [for the university to accept improved ethical reasoning as a quality enhancement plan],” says Hawk, “but we worked with faculty who offered suggestions and saw the critical thinking [approach to ethical reasoning development] at work. We also had a lot of help from CARS. We knew from the beginning that a program like this would face a lot of criticism, so we knew that we would have to have a way to assess students—to demonstrate effectiveness.” In order to demonstrate effectiveness, the Madison Collaborative team first had to assess students before any ethical reasoning skill development intervention took place.

Admittedly, the MC was at an advantage when collecting data on JMU’s Assessment Day. “It’s rare that the initial [data] sample is so pure,” noted MC liaison and assessment and measurement doctoral student Kristen Smith. Indeed, the analysis and interpretation of the preliminary data was relatively straightforward: although students are interested in reasoning through difficult ethical dilemmas and in developing a skillset to do just that, they “weren’t that good at using the Eight Key Questions or identifying them,” comments Hawk, “which is understandable; they had no exposure—no training or practice.” However, he mentions, “I think [faculty] who read initial [student] essays were surprised by [their] inability to put forth good ethical reasoning,” which really drove home the university’s “need to teach and value the teaching of ethical reasoning skills.”

After the initial assessment and analysis of data, the Madison Collaborative’s Eight Key Questions framework was (and continues to be) integrated into First Year Orientation, student affairs programming, General Education, and upper-level academic courses. For example, the way Hawk and other interested faculty teach ethics is now centered on the critical thinking approach. Further, nearly all incoming freshmen at JMU are introduced to the Eight Key Questions via a two-page spread in The One Book, which they receive upon paying their deposit. Also, during 1787 August Orientation, trained faculty and staff facilitators lead It’s Complicated, an activity and time of reflection that relies on students’ use of the Eight Key Questions to sort through a hypothetical ethical dilemma. And recently, the Madison Interactive, an online program with nine episodes, premiered in many communications courses.


For some programs, these interventions are where the assessment and student learning “improvement” process ends. But as Smith emphasizes, “it’s important that programs re-assess students [after implementing changes to pedagogy or curricula].” She adds, “[assessment] doesn’t have to be a ‘one size fits all’ model, especially when it comes to using assessment results for improvement,” and urges programs not to abandon their work. Still, re-assessing students after making a programmatic change can be daunting. As Hawk states, “We [as faculty] all think that we are pretty good at what we do… and we can be afraid to see whether or not we are accomplishing what we think we are accomplishing.”

Now, students who were given information on the MC and the Eight Key Questions in The One Book, who participated in It’s Complicated, and who may have encountered the Eight Key Questions in residential life, student affairs, and academic programs are being re-assessed using the same instruments on subsequent Assessment Days. The results of those re-assessments have been revealing, informative, and incredibly valuable for the Madison Collaborative.

Some re-assessment data, states Dr. Hawk, “show that students are learning—they are beginning to recognize and use the Eight Key Questions,” while other results suggest that there is much more work to be done. Based on these results, the Madison Collaborative is modifying how the framework is taught. “[The Madison Collaborative has] a better idea of which of the Eight Key Questions are the clearest to students and which are most vague,” which is shaping the layout and material of the new online interactive program for students wanting to improve their ethical reasoning skills. “[We’re also] writing more supportive materials [for faculty wanting to use the framework in their courses].” As Smith points out, “change is slow…improving student learning is a long-term investment, and we, as assessment practitioners, have not always done well to facilitate these more long-term processes.” She adds that any program re-assessing students should not expect to see results too quickly.

“No program should feel ashamed of assessment results,” states Smith, who holds that the information collected by the Madison Collaborative or other programs on Assessment Day “should not be categorized as positive or negative. It truly is just data. We wouldn’t say ‘the sky is positive’ or ‘the sky is negative’; the sky is just the sky. Same with assessment data.” Smith continues, “if your program is taking assessment results, making meaningful, logical pedagogical and curricular changes, and then re-assessing students using the same instrumentation, you are already doing assessment at a national level—better than the majority of people out there.”

As for the Madison Collaborative, it’s important to keep in mind that student learning is the overall goal, that “this difficult, complicated reasoning process is a valuable skillset for students to know…[and]…that students are interested in learning.” Assessment of the MC, according to Hawk, is necessary. It provides an “external confirmation of temporary success or not,” which creates “a great feedback system that prompts more effective pedagogy.” He admits, “it takes a lot of program self-confidence and willingness to change,” but “students benefit from faculty continually using results to improve instruction.”
