ERiA Newsletter: Spring 2023

It's Complicated: Special Edition


by Kacey Damaty

It's Complicated 2023: AI-CARE

Artificial intelligence and mental health? It's Complicated! 

This year, students at JMU will engage the ethical dimensions of the new It's Complicated scenario, AI-Care, as they wrestle with the question: Should a university adopt an artificial intelligence chatbot app to help address the growing demand for student mental health services?

Through a variety of instructional modalities, It’s Complicated provides a foundational experience of a flexible ethical reasoning strategy – the Eight Key Questions (8KQ) – that anyone can use in their personal, professional and civic lives to make better and more ethical decisions. In this special edition of the Ethical Reasoning in Action newsletter, get an insider’s look at how AI-Care was developed, the changes coming to It's Complicated this fall, and how AI-Care and the Eight Key Questions are already inspiring and shaping conversations across campus. 


ERiA and JMU Libraries partner to move It's Complicated online

ERiA plans to deliver It’s Complicated virtually to nearly five thousand incoming JMU students beginning this fall. Data from the past ten years of facilitating It's Complicated in person demonstrates that it is an effective intervention for strengthening students' ability to reason ethically. What effect will an online delivery have? Working with a team of instructional designers from JMU Libraries, ERiA will use this high-impact project to find answers to that question. Over the next six months, both teams will work together to: 

  • develop the online course infrastructure,
  • re-design It's Complicated content and assessments,
  • map online learning objectives to ERiA's Student Learning Outcomes, and
  • ensure that It's Complicated remains engaging and effective.

JMU's instructional design experts are training the ERiA team to use the right tools in Canvas to help students reflect on the ethics of AI-Care in more accessible and engaging ways, with longer academic think time built into asynchronous instruction. What better scenario to introduce in a new online format than one that asks: When should technology be used to replace authentic human interactions?


Ethical Reasoning Educators bring It's Complicated to Wellness Passport Events

For students on campus who want to engage AI-Care and experience It’s Complicated in person, ERiA began delivering the interactive workshop through Wellness Passport events this spring. Over 60 students attended the events, which prompted participants to ask ethical questions such as: 

  • "How will AI-Care respond to issues it can't process or understand?" 
  • "How should students be expected to connect to a robot that can't empathize with them?" 
  • "What are ways to make AI-Care accessible to all?"
  • "How will implementation of AI-Care impact the in-person side of mental health care? Will counselors now be expected to out-perform a chat bot?" 
  • "Does the university have an obligation to offer both [in-person & AI-Care]?" 

In addition to AI-Care, the Ethical Reasoning Educators hosted the first in an ongoing series of reality TV-based Wellness Passport events, titled “Netflix’s The Ultimatum: Is It Ultimately Moral?”, where attendees were asked to think about some of the ethical dilemmas in reality television shows through the lens of the Eight Key Questions.

Crafting the Scenario
A Conversation with Dr. Christian Early and Daniel George

Over a year ago, ERiA began working on a brand-new scenario for It’s Complicated. At the time, the idea of an AI chatbot that could provide mental health support seemed very hypothetical. Enter ChatGPT. The chatbot’s machine learning program digests millions of data points every day as new users sign up to interact with it. It has likely “read” this newsletter before it even showed up in your inbox. While educators and students are still adjusting to the rapidly changing landscape of educational technology after COVID, the once distant world of AI is now here. AI generates art and music. It writes computer code and academic papers with equal ease. Did the world just take a leap into the future? Does it change the questions we should be asking to guide our decisions? Scenario author Dr. Christian Early and JMU Counseling Center staff psychologist Daniel George reunited to talk about how AI-Care addresses the timely topics of artificial intelligence and mental health.

How did you first become interested in the intersection of technology and mental health? 

GEORGE: David Onestack, director of the JMU Counseling Center, connected me with this project, because my research area has been the confluence of technology and psychotherapy for about 7 years. It seemed like a natural fit and I was excited to be involved. 

EARLY: The intention behind the scenario was to address the mental health crisis. In 2022 there were several tragic events on our campus, but also on other campuses in Virginia and nationally. The mental health conversation had been simmering in the background, but it moved to the foreground that year. Students were saying, “Okay, enough now. We need help and universities have to do something.” I thought, why not write a scenario that addresses the lack of service providers in the field, the growing demand, and the very real constraints on university budgets? Hiring one more counselor is not a solution because of the scale of the problem. AI could address that cluster of problems, but none of us are experts in the field, which is why we reached out to the Counseling Center and then to Daniel. Hearing that it was his area of expertise was just too perfect. 

Describe how the landscape of artificial intelligence and mental health has changed just since you first worked on AI-Care: 

EARLY: Many of us were shocked and in disbelief when ChatGPT was released. The sophistication and the number of things that AI could do took us by complete surprise. How can it respond to my prompting with such seeming thoughtfulness? There is a conversation I’m now hearing regarding chatbots. AI programs have what is called a black box, which essentially describes the fact that we can't retrace the connections between a prompt and a chatbot response. We cannot “see” how it arrives at specific responses, for good or for bad. We don't know how it can arrive at such wrong things to say. A couple of months ago I asked Daniel, “How far away do you think we are?” and you said, “Well, probably a couple of years, but I wouldn't be surprised if something came out tomorrow.” It was mid-December when the news hit about ChatGPT, and the AI discussion took off and went in all sorts of different directions that I just didn't see coming - I did not think tomorrow was going to be mid-December. 

GEORGE: I didn't either. I could see all these things that have been going on with deep learning for probably at least 5 or 6 years that were very, very impressive, and I can remember one of the things that really blew my mind. Quite a while ago, there was a study conducted on eye scans that showed AI was able to tell all these different things about people just based on their retina scans, including their sex at birth. That is something the ophthalmologists had never been able to do, and they were unable to understand what the AI was seeing - it's the black box thing again. AI-Care is a fictional scenario, but we are now on the cusp of a deep transformation of society. A system like ChatGPT hasn’t been trained on therapy data, as far as I know, but if it were, ChatGPT and bots like it could end up being competent therapists, at least on paper. The question remains: how important is that human connection to the process? 

Why is AI-Care important for students to consider? 

EARLY: For me, it’s important to consider this scenario because it raises all of the Eight Key Questions. Let's say that you feel better after an interaction with a chatbot. That's a positive outcome, right? Or is it? Do we want to train ourselves to become the kinds of people who can't sit with negative emotions? There's a whole conversation there about who we are becoming, and that discussion is bigger than just an app. It’s digital humanities, it’s human/computer interaction; that's a big conversation, but it is one we have to have urgently. There are conversations around responsibility, character, long-term outcomes, and fairness questions about equity. AI-Care hits every ethical dimension; it's all there, waiting for participants to begin to unpack. It's a fantastically effective way to get people to think more deeply about what we're doing.

GEORGE: I resonate with everything you just said, and 100% agree. And I think it's about the most timely possible thing, because we are seeing three things: One is technology becoming more and more pervasive, moving into every last corner of our existence. Another is a notable increase in mental health problems, especially among the youngest generations, and an apparent connection, at least in part, between technology and what's going on in terms of mental health. The final, most timely part is that AI is in the midst of transforming culture. There's a lot of conversation now about what jobs are not going to exist anymore. Automation has been a huge issue in so many industries, but we never really thought automation could take over the role of lawyers or authors and artists. We’re not there yet, and I don't know what the timeline looks like, but we have to contend with it, and I think the more people that are thinking about it and conscious of it the better. 


Ethical Reasoning Across Campus

JMU X-Lab's Esports team invited ERiA to present the Eight Key Questions ethical reasoning strategy to help members discuss some of the ethical issues in the world of online competitive gaming. Forty-three members of the team spent their Sunday evening using the 8KQ to navigate difficult moral dilemmas in Esports.

Ethical Reasoning Faculty Fellow Dr. Joe Derby, Assistant Professor of Marketing, took a team of talented College of Business students to the International Business Ethics and Sustainability Case Competition hosted by Loyola Marymount University in Los Angeles, CA. Across the three rounds of competition, the JMU team took first place in the 10-minute presentation and the 90-second elevator pitch, and finished as runner-up in the 25-minute competition.

Join ERiA in congratulating: 

  • Becca DuBois, Senior Management major with a Global Supply Chain Management minor
  • Dylan Deffinbaugh, Senior Finance major
  • Nicholas Lesky, Senior Marketing major
  • Ellan Miano, Senior Marketing major with a concentration in Digital Marketing


Upcoming ERiA Events

NEW THIS FALL

Does your course have an ethics component? Are you in need of fresh ideas for an ethics-based classroom activity? Request a workshop with the Ethical Reasoning Educators and experience the new AI-Care scenario in the comfort of your classroom.

Incorporating the Eight Key Questions? Tell us! 

How are you using the 8KQ in your classes, programs, work, research, and professional and daily life? Hearing your stories and, with your permission, sharing them in an upcoming ERiA newsletter helps nurture a culture at JMU where ethical reasoning flourishes! 

Send your experiences, ideas, and insights to ethicalreasoning@jmu.edu 

A look back...a look ahead: 

Ethical Reasoning in Action’s mission is to offer an ethical reasoning strategy for decision-making that anyone can apply when facing moral situations in their personal, professional, and civic lives. The scenarios described in It’s Complicated have all addressed significant issues: Hurricane Sharon (natural disaster), Contagion (virus epidemic), and Narcan (opioid addiction). Those issues – global climate change, infectious disease management, and the opioid crisis – are still relevant. They are among the most critical ethical issues the world is facing right now. 

ERiA plans to deliver AI-Care over the coming three years. What ethical issues should we be thinking of next? What will the next ten years look like? Imagining that future is an exciting endeavor, and we invite your contribution!

Published: Monday, February 27, 2023

Last Updated: Friday, December 1, 2023
