Study Hall: AI and Learning Edition

I'm excited to try out a new format on the Intentional Teaching podcast this week. Once again, I’ve been inspired by the American Birding Association podcast. The ABA podcast uses a format they call "This Month in Birding," where host Nate Swick invites three great guests to discuss recent studies or news articles from the world of ornithology. I learn a lot listening to these episodes, and I thought I would try the format out here on my podcast. Doing something called "This Month in the Scholarship of Teaching and Learning" sounded a little ambitious to me. There’s no way I can do this monthly! So I’m calling this format Study Hall, since we’re gathered together to discuss interesting teaching and learning studies.

For this first edition of Study Hall, we’re focusing on scholarly articles that have something to say about generative AI and education. The panelists are all colleagues in the field of educational development who do a great job finding and sharing educational research that’s interesting and practical. Lance Eaton is senior associate director of AI in teaching and learning at Northeastern University and author of a great blog exploring the intersection of AI and education. Michelle D. Miller is professor of psychological sciences at Northern Arizona University and author of multiple fantastic books applying psychology to teaching and learning. David Nelson is associate director at the Center for Instructional Excellence at Purdue University, where he’s been supporting a variety of teaching initiatives for 17 years.

In our conversation, we talk about cognitive offloading, chatbot sycophancy, student agency, and more! I invite you to listen to the full conversation, but here are a few highlights to pique your interest:

Michelle Miller led our discussion of "Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future," a 2022 literature review by Grinschgl and Neubauer. She noted that the authors point out that cognitive offloading "is not always a universally negative thing." Michelle said she worries that "people say offloading means that we sort of are atrophying cognitively or something like that and it's not that simple." She used the example of GPS for navigating around town. It's true that if you depend on your GPS to get around, you won't form as robust a mental map of your town. "However," Michelle said, "it doesn't mean that my ability to make mental maps in general is degraded." I think that's an important aspect of cognitive offloading to attend to as we explore the ways that we (and our students) are using AI for offloading.

Dave Nelson brought us a 2025 preprint to discuss, "Be Friendly, Not Friends: How LLM Sycophancy Shapes User Trust" by Sun and Wang. The authors define sycophancy as the tendency of an AI chatbot to agree with the user, even when the user is demonstrably incorrect. They distinguish this feature of chatbots from others, like friendliness. Dave shared an experiment he ran this spring in a course about learning and AI. He built two AI-powered chatbots: a "Ted Lasso" bot that was relentlessly positive and a "Severus Snape" bot that was "caustic and refused to answer a question directly the first time, always redirecting students back to the material." His students really liked the Ted Lasso bot but found it didn't provide useful answers. Conversely, the Severus Snape bot rubbed them the wrong way but "was much better at strong answers."
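If you're curious how this kind of persona-shaping works under the hood, here's a minimal sketch. To be clear, this is my illustration, not Dave's actual setup: it assumes the OpenAI Python client, and the model name and prompt wording are placeholders. The key idea is that the two bots differ only in their system prompts.

```python
# Minimal sketch: two course "tutor" chatbots that differ only in their
# system prompt. Assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment. The model name and persona
# wording are placeholders, not the prompts used in Dave's experiment.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    # Relentlessly positive and agreeable -- prone to sycophancy
    "lasso": (
        "You are an upbeat, endlessly encouraging course tutor. "
        "Affirm the student's ideas and keep the mood positive."
    ),
    # Caustic and indirect -- pushes students back to the material
    "snape": (
        "You are a stern, exacting course tutor. Never answer a question "
        "directly on the first attempt; instead, redirect the student to "
        "the relevant course material with pointed questions."
    ),
}

def ask(persona: str, question: str) -> str:
    """Send a student question to the chosen persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("snape", "What's the answer to question 3 on the problem set?"))
```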
This discussion reinforced for me that the personality-like qualities of AI chatbots are programmed in, that some of those qualities (like sycophancy) are problematic, and that with the right tools we can design chatbots that have better qualities.

Lance Eaton reviewed a 2024 article by Darvishi et al. titled "Impact of AI Assistance on Student Agency," one that I had blogged about last year. This is the study where students were asked to provide feedback on peer work, initially with some AI coaching and later without that coaching. The researchers found that the quality of feedback dropped off when the AI coaching was removed, meaning that the AI scaffolding wasn't "sticky" for the students. Lance noted that some students in the study received both AI coaching and additional resources on giving good feedback, and that some data suggested those two layers of student support were too much. The students ignored the additional resources and just leaned on the AI coaching. The study got Lance thinking "about how and where AI is introduced as a scaffold on, or how we scaffold off AI in certain contexts." Might we sometimes provide students with too many supports, leading to "decision fatigue or cognitive overload"? And how can we better support student agency? "How does this help them really figure out self-regulation in their learning?"

You can listen to the Study Hall panel with Lance Eaton, Michelle Miller, and Dave Nelson here, or search for "Intentional Teaching" in your favorite podcast app.

Thanks for reading!

If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching. Or consider subscribing to the Intentional Teaching podcast. For just $3 US per month, you can help defray production costs for the podcast and newsletter, and you get access to subscriber-only podcast bonus episodes.