Bridging the AI Trust Gap

Last month I was on a virtual panel hosted by the Chronicle of Higher Education titled "Bridging the AI Trust Gap." Lee Rainie (Elon University), Gemma Garcia (Arizona State University), and I tried to unpack the differences in how higher ed administrators, faculty, and students approach generative AI in teaching and learning. Moderator Ian Wilhelm from the Chronicle asked very good questions and relayed even more good questions from the audience, and my fellow panelists brought a lot of clarity to the current fractured landscape of AI in higher ed. You can watch a recording of the entire "Bridging the AI Trust Gap" event on the Chronicle website. I thought I would share some of my prep notes here in the newsletter for those interested in my take on some of Ian Wilhelm's questions.

According to a Chronicle survey of 820+ faculty members and administrators conducted last year, faculty were less likely to view generative AI as "an opportunity for higher education to improve how it educates, operates, and conducts research." 86% of administrators said it did, compared to 55% of faculty. Why this divide? Is it reflected on your campus? What do you hear from faculty about AI?

I suspect some of this is because generative AI is having different impacts on different areas of work across higher ed. Administrators are more likely to see all those various areas of work and say, yes, some of those will be affected by AI. Faculty naturally focus on their own teaching and research, and in some disciplines the current impact of AI is rather minimal.

I think also that faculty are on the front lines with students using AI for unauthorized assistance. That's deeply concerning for a lot of faculty, and for good reason. When you ask faculty about their views on AI, that's likely to be front of mind. And faculty who are perhaps a little less worried about cheating are still worried that student AI use will undercut student learning, that is, that students will rely too much on AI and not do the hard work of learning.

As I hear those stats, I also think it's useful to distinguish between students using AI on assignments and faculty using AI to help create assignments, design rubrics, and build lessons. I find that faculty are more neutral-to-positive on that latter use case, and that if you start conversations with faculty on their own use of AI as an instructional assistant of sorts, they approach the student use cases with more nuance. When faculty see that an AI chatbot can help them write a new case study, provide ideas for active learning in their classes, or draft pretty good exam questions, they start to think differently about the roles AI might play in student work.

Bridging trust in AI isn't only on the shoulders of faculty. Many have legitimate concerns about the tools. What do administrators need to understand to better support faculty as AI potentially changes the way they teach or do other work? What guardrails do administrators need to set for faculty use of these tools?

A big part of bridging the AI trust gap is taking seriously the concerns that faculty have about integrating AI into their teaching or their students' learning. Some faculty tried ChatGPT two years ago, didn't find it useful, and have been assuming AI won't make a difference in their discipline. These faculty probably need to do a bit more research and update their understanding of the capabilities of generative AI.
But other faculty have done the research and considered very carefully the ways that AI might undercut student learning. Cate Denial is a historian at Knox College and the author of A Pedagogy of Kindness. She blogged earlier this year about her very informed and thoughtful decision not to use AI in her course. For the skills she wants her students to learn, she has determined that AI won't help them get there. We have to respect informed choices like that, while also making space for other faculty to make different informed choices about their teaching contexts.

Another big piece is that many faculty are looking to administration to provide students, faculty, and staff with access to AI tools that are both best in class and properly vetted for student privacy. Faculty are concerned that if the institution doesn't provide tools, the students with financial means will have a leg up over the students who can't afford all the new tools. Marc Watkins at the University of Mississippi has argued on his blog that students have plenty of free-access options to powerful AI tools, but the instructors I talk to don't see it that way. They want to use the AI tools the institution provides, since those site licenses have provisions for student privacy, but they don't want those tools to be second-rate, not when they're preparing students for a workforce using first-rate AI technology.

A survey conducted by Lee's group and AAC&U showed that students are outpacing faculty in their use of gen AI. How could this dynamic affect teaching and learning in the years ahead?

One of my biggest worries is that this will be yet another dynamic that pits students and faculty against each other. There are enough faculty who see any AI use as cheating that students are scared of admitting to using AI, even if they rely on ChatGPT for all kinds of things. I know faculty who have "green light" AI policies in which students are actively encouraged to explore the use of AI in their learning and to document and reflect on those explorations. And even those faculty sometimes struggle to get students to share their AI use, even when it's clear there's no penalty for doing so!

It doesn't help students that faculty have such widely varying course policies on AI. It's a lot to navigate when one instructor is asking you to use AI on an assignment and another says that any AI use is cheating. I know how important it is for individual faculty to decide AI policy for their particular courses, but I also hope that we'll see more programs and majors figure out what role AI should play at the curriculum level. What AI skills should the students in your program be learning, if any? And where in the curriculum should you help students develop those skills? It will be a lot easier for the first course in a major to have a "red light" policy if both the instructor and students know there will be "green light" courses down the road.

And let's not forget that many students want to learn more about AI as it applies to their chosen professional fields. If faculty aren't providing that, our courses and majors will be less relevant and useful to students. That's another dynamic I would prefer we avoid.

There's lots more on these topics in the full "Bridging the AI Trust Gap" recording!

Intentional Teaching+

If you pull up the Intentional Teaching podcast on your favorite podcast app or on the web, you'll see something a little different this week.
You can now subscribe to the podcast for $3 US per month and get access to subscriber-only bonus episodes. There are six bonus episodes in the feed right now, and I'll be adding more in the weeks to come.

When I interview guests for the podcast, I usually end up cutting some of the conversation for time, and I've been sharing the best of those cuts as bonus episodes for my Patreon supporters. Thanks to my podcast host, Buzzsprout, I now have another way to share those bonus episodes, through Buzzsprout's subscription feature. If you subscribe to Intentional Teaching, you'll receive a personal RSS feed that includes the bonus episodes and any other subscriber-only audio material I cook up in the future. You can use that RSS feed to listen to "Intentional Teaching+" in your favorite podcast app. (Unless your favorite podcast app is Spotify, which doesn't accept RSS feeds. Get it together, Spotify.)

For those of you currently supporting Intentional Teaching on Patreon, thank you thank you thank you. Your financial support means the world to me. Also, you don't have to do anything different. I'll continue posting bonus episodes and any other subscriber-only audio to Patreon as well as to Buzzsprout. And the cost to support is the same ($3 per month) on both platforms.

However you support the show, know that your dollars go towards the cost of producing the podcast and newsletter. I just checked, and I'm spending about $80 per month on software and services for the podcast (Zencastr to record interviews, Hindenburg to edit the audio, Rev to generate transcripts, and Buzzsprout to host the podcast), so there are definitely costs to defray.

And for anyone reading this, please feel no pressure to support Intentional Teaching financially. Three dollars a month isn't a lot, but in this economy, discretionary income isn't a given, so no worries at all if you don't want to or can't chip in. Sharing a podcast episode or newsletter edition is also a fantastic way to support the work here, so feel free to do that instead!
Welcome to the Intentional Teaching newsletter! I'm Derek Bruff, educator and author. The name of this newsletter is a reminder that we should be intentional in how we teach, but also in how we develop as teachers over time. I hope this newsletter will be a valuable part of your professional development as an educator.
Annotation and Learning with Remi Kalir

It's one thing to pull a book off a shelf, highlight a passage, and make a note in the margin. That's annotation, and it can be a useful learning tool for an individual. It's another thing to share your annotations in a way that others can read and respond to. That's social annotation, and when I heard years ago about digital tools that would allow a class of students to collaboratively annotate a shared textbook, I thought, well, that's the killer app...
Structure Matters: Custom Chatbot Edition

Many years ago when educators were seeing what they could do with Twitter in their teaching, I wrote a blog post noting that structured Twitter assignments for students seemed to work better than more open-ended invitations for students to use Twitter to post about course material. Somewhat more recently, I started sharing the structured...
Students as Partners in Teaching about Generative AI

Last year on the podcast, I talked with Pary Fassihi about the ways she was exploring and integrating the use of generative AI in the writing courses she teaches at Boston University. During that interview, Pary mentioned an AI affiliate program running out of the writing program at Boston University. This program involved matching undergraduate students—the AI Affiliates—with writing instructors, giving the AI Affiliate a role in...