Study Hall: AI and Learning Edition

I'm excited to try out a new format on the Intentional Teaching podcast this week. Once again, I've been inspired by the American Birding Association podcast. The ABA podcast uses a format called "This Month in Birding," in which host Nate Swick invites three great guests to discuss recent studies or news articles from the world of ornithology. I learn a lot listening to those episodes, and I thought I would try the format out here on my podcast. Something called "This Month in the Scholarship of Teaching and Learning" sounded a little ambitious to me (there's no way I can do this monthly!), so I'm calling the format Study Hall, since we're gathered together to discuss interesting teaching and learning studies.

For this first edition of Study Hall, we're focusing on scholarly articles that have something to say about generative AI and education. The panelists are all colleagues in the field of educational development who do a great job finding and sharing educational research that's interesting and practical. Lance Eaton is senior associate director of AI in teaching and learning at Northeastern University and author of a great blog exploring the intersection of AI and education. Michelle D. Miller is professor of psychological sciences at Northern Arizona University and author of multiple fantastic books applying psychology to teaching and learning. David Nelson is associate director at the Center for Instructional Excellence at Purdue University, where he has been supporting a variety of teaching initiatives for 17 years.

In our conversation, we talk about cognitive offloading, chatbot sycophancy, student agency, and more! I invite you to listen to the full conversation, but here are a few highlights to pique your interest.

Michelle Miller led our discussion of "Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future," a 2022 literature review by Grinschgl and Neubauer. She noted that the authors point out that cognitive offloading "is not always a universally negative thing." Michelle said she worries that "people say offloading means that we sort of are atrophying cognitively or something like that and it's not that simple." She used the example of GPS for navigating around town. It's true that if you depend on your GPS to get around, you won't form as robust a mental map of your town. "However," Michelle said, "it doesn't mean that my ability to make mental maps in general is degraded." I think that's an important aspect of cognitive offloading to attend to as we explore the ways that we (and our students) are using AI for offloading.

Dave Nelson brought us a 2025 preprint to discuss, "Be Friendly, Not Friends: How LLM Sycophancy Shapes User Trust" by Sun and Wang. The authors define sycophancy as the tendency of an AI chatbot to agree with the user, even when the user is demonstrably incorrect, and they distinguish this feature of chatbots from others, like friendliness. Dave shared an experiment he ran this spring in a course about learning and AI. He built two AI-powered chatbots: a "Ted Lasso" bot that was relentlessly positive and a "Severus Snape" bot that was "caustic and refused to answer a question directly the first time, always redirecting students back to the material." His students really liked the Ted Lasso bot but found it didn't provide useful answers. Conversely, the Severus Snape bot rubbed them the wrong way but "was much better at strong answers."
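If you're curious how a persona like Dave's gets built, custom bots of this kind typically set the personality with a system prompt. Here's a minimal sketch of that idea using the OpenAI Python SDK; the model name and the persona text are my own illustrative assumptions, not Dave's actual setup.

```python
# Minimal sketch: a chatbot "persona" set via a system prompt.
# The persona wording and model choice below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical non-sycophantic, Snape-flavored tutor persona
PERSONA_PROMPT = (
    "You are a demanding tutor. Do not simply agree with the student. "
    "If a student's claim is incorrect, say so plainly, and redirect them "
    "to the course material before offering a direct answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": "Isn't cognitive offloading always bad?"},
    ],
)
print(response.choices[0].message.content)
```

Swapping in a relentlessly encouraging persona prompt would give you the Ted Lasso bot instead; everything else stays the same, which is the point: these qualities are design choices.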
This discussion reinforced for me that the personality-like qualities of AI chatbots are programmed in, that some of those qualities (like sycophancy) are problematic, and that with the right tools we can design chatbots with better qualities.

Lance Eaton reviewed a 2024 article by Darvishi et al. titled "Impact of AI Assistance on Student Agency," one that I blogged about last year. This is the study in which students were asked to provide feedback on peer work, initially with some AI coaching and later without that coaching. The researchers found that the quality of feedback dropped off when the AI coaching was removed, meaning that the AI scaffolding wasn't "sticky" for the students. Lance noted that some students in the study received both AI coaching and additional resources on giving good feedback, and that some data suggested those two layers of support were too much: the students ignored the additional resources and just leaned on the AI coaching. The study got Lance thinking "about how and where AI is introduced as a scaffold on, or how we scaffold off AI in certain contexts." Might we sometimes provide students with too many supports, leading to "decision fatigue or cognitive overload"? And how can we better support student agency? "How does this help them really figure out self-regulation in their learning?"

You can listen to the Study Hall panel with Lance Eaton, Michelle Miller, and Dave Nelson here, or search for "Intentional Teaching" in your favorite podcast app.

Thanks for reading! If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching. Or consider subscribing to the Intentional Teaching podcast. For just $3 US per month, you can help defray production costs for the podcast and newsletter, and you get access to subscriber-only podcast bonus episodes.
Welcome to the Intentional Teaching newsletter! I'm Derek Bruff, educator and author. The name of this newsletter is a reminder that we should be intentional in how we teach, but also in how we develop as teachers over time. I hope this newsletter will be a valuable part of your professional development as an educator.
It's August, always the busiest month of the year in my world. I just spent a few days "on Grounds" at the University of Virginia last week leading or co-leading several workshops on teaching and generative AI, including an all-day institute for UVA's new cohort of Faculty AI Guides. That's why there was no newsletter last week! How do you like the new logo? I thought after almost three years, it was time for a fresh coat of paint. This week I'm giving three presentations at other...
Developing AI Literacy This week on the podcast, I talk with Alex Ambrose, professor of the practice and director of the Lab for AI in Teaching and Learning at the Kaneb Center for Teaching Excellence at Notre Dame. I heard Alex on the Kaneb Center's podcast, Designed for Learning, hosted by Jim Lang, a few months ago, and I was very interested in what Alex had to say about the evolving state of generative AI in education at Notre Dame. I was thrilled when Alex agreed to come on Intentional...
High Structure Course Design Justin Shaffer has a new book out on high structure course design! I met Justin a few years ago through a Macmillan Learning webinar on teaching with classroom response systems. I learned that not only did he use that particular technology very effectively in his teaching, but he also had a wealth of experience in active learning and course design more generally. When I wanted to put together a podcast episode on "studio" approaches to biology (in which lab and...