New resource on high structure course design
High Structure Course Design

Justin Shaffer has a new book out on high structure course design!

I met Justin a few years ago through a Macmillan Learning webinar on teaching with classroom response systems. I learned that not only did he use that particular technology very effectively in his teaching, but he also had a wealth of experience in active learning and course design more generally. When I wanted to put together a podcast episode on "studio" approaches to biology (in which lab and lecture are combined), I had Justin on the show to share his experiences with studio biology at the Colorado School of Mines, where he's a teaching professor of chemical and biological engineering and associate dean of undergraduate students. Justin became the first repeat guest on the Intentional Teaching podcast when I had him back on to talk about using technology as formative assessment (making "thin slices" of student learning visible, as I often say).

Justin is a thoughtful and effective teacher, and he has a gift for helping others understand and adopt evidence-based teaching practices. That gift is on full display in his new book High Structure Course Design: An Evidence-Based Guide for Designing, Implementing, and Assessing STEM Courses, published as part of Macmillan Learning's Scientific Teaching Series. Justin tackles the entire course design process in the book, from defining and sharing learning goals to selecting assessments that align with those goals to planning pre-class, in-class, and after-class learning activities. And the book itself is a high structure experience! Each chapter features a list of learning objectives and key terms, several "before we begin" reflection questions, and a chapter "exit exercise." The chapters are also very practical, each with a summary of research on the given teaching practice along with examples and tips for implementing the practice.

High Structure Course Design is something of a workbook, too, thanks to embedded "course design checkpoints" prompting the reader to apply the ideas from the text, as well as a number of templates and worksheets in the book's appendices. As Justin writes in the introduction, "Once you have completed these chapters, you'll have sample materials for an entire high structure lesson, and you'll be well-versed in how to create a high structure STEM course."

I'm very glad to see this book out in the wild. It is a valuable resource for any STEM educator interested in fostering student learning and student success. Thanks, Justin, for writing such a practical and useful book!

Generative AI in the Learning Management System

You may have seen the news that Instructure, the company behind the popular learning management system Canvas, announced a deal with OpenAI, the company behind the popular generative AI chatbot ChatGPT. Canvas will now feature a host of AI-powered tools for instructors through a feature called IgniteAI. I'm not clear on the financial side of this partnership--will these AI tools be available to all instructors using Canvas or just ones at an institution that has an enterprise subscription to ChatGPT?--but I do have thoughts on the pedagogical aspects of this deal.

You can read a few of my thoughts about the Instructure/OpenAI announcement in Kathryn Palmer's Inside Higher Ed article today, "Faculty Latest Targets of Big Tech's AI-ification of Higher Ed." I'll let y'all debate the use of the word "targets" in that headline. I'll focus instead on the quote in the article from José Antonio Bowen:

"The LMS might make it easier [for instructors to use AI tools], but giving people a couple of extra buttons isn’t going to substitute for training faculty to build AI into their assignments in the right way—where students use AI but are still learning."

I wasn't surprised to hear about the Instructure/OpenAI partnership. As I say in Kathryn Palmer's piece, "It was only a matter of time before something like this happened with one of the major learning management systems." Blackboard, for instance, has had an "AI Design Assistant" for some time now, and they've recently rolled out "AI Conversation," which allows instructors to create basic AI-powered assignments for students to interact with. I don't know how the new Canvas tools will compare with Blackboard's offering, but the OpenAI partnership announcement indicates they're breaking some new ground.

It's the use cases described in the announcement that are of most interest to me. For example, IgniteAI in Canvas will have a "discussion summarizer" that provides an AI-generated summary of a set of student discussion posts. That reminds me of the AI-generated summary of restaurant reviews that I use all the time in Google Maps when deciding where to eat out. I find that these summaries are generally accurate and help me decide which restaurants merit further investigation. I can see instructors using this kind of discussion summary in Canvas to get a broad sense of how a particular discussion board is going and then selectively diving into the student responses for further reading. I don't think that's a game changer, but it is a useful feature.

On the other hand, Instructure has also described something they call an "LLM-Enabled Assignment." The idea seems to be that an instructor would create a custom AI chatbot in Canvas (like the "GPTs" that paying ChatGPT users can create) aimed at particular learning objectives, then invite students to interact with the chatbot. The AI would then analyze the student-chatbot conversations for "evidence of learning" aligned with the given objectives and create some kind of report for the instructor to review. The announcement says that instructors will see "exactly where and how a student demonstrated the required understanding in the conversation."

I'm more skeptical about this use case, mainly because the instructors I know who are actively building and implementing custom AI chatbots for students (using other tools) aren't thinking of the student-chatbot conversation as the meat of the assignment. They might be interested in using the conversation as a form of formative assessment to better understand how students are thinking about the course material, but they're certainly not interested in evaluating student learning based on a student's conversation with a chatbot. Doing so runs the risk of reducing the activity's utility for student learning or formative assessment. Here's my quote about this from the IHE article:

"If students know that their interactions with the chat bot are going to be evaluated by the chat bot and then perhaps scored and graded by the instructor, now you’re in a testing environment and student behavior is going to change. You’re not going to get the same kind of insight into student questions or perspective, because they’re going to self-censor."

The faculty using AI-powered chatbots that I've talked to, including Matthew Clemson and Isabelle Hesse at the University of Sydney on a recent podcast episode, tend to think of these chatbots as providing some kind of support or assistance to students as they work through course material and complete assignments. Matthew's chatbot (dubbed Dr. MattTabolism) is geared toward Socratic tutoring for students with questions about the biochemistry material, and Isabelle's chatbot is designed to help students with a specific part of a specific assignment (providing feedback on potential essay topics). The point of these chatbots is to support and enhance student learning; any insight the instructors can gain about patterns in student questions or understanding is a secondary benefit.

Maybe I'm just grumpy about the framing of this "LLM-Enabled Assignment" idea in the Instructure announcement, which positions it as an assessment tool. I like the general idea that IgniteAI would make it easy for instructors to create chatbots like the ones used by our University of Sydney colleagues, but I'm not sold on the use case of evaluating student learning based on their chatbot conversations. I'm reminded of discussions around classroom response systems ("clickers") maybe 15 years ago. Some vendors sold clickers as a way to quickly and easily quiz students. I argued that a better use for the technology was to structure and support active learning in the classroom. Same tech, very different messages for instructors.

One upside to integrating AI tools in the learning management system is that instructors can feel a little better encouraging their students to interact with AI chatbots knowing that those interactions are protected by the same student privacy policies already in place for learning management systems. Another advantage is that these tools provide instructors some visibility into student-AI interactions, something that's not easy if you're sending students off to use ChatGPT or Claude or Copilot outside of the LMS.

Overall, I'm cautiously optimistic about AI tools being deployed intentionally for educational purposes, but as José Bowen said, the real value will depend on how instructors choose to use these tools.

Thanks for reading!

If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching.

Or consider subscribing to the Intentional Teaching podcast. For just $3 US per month, you can help defray production costs for the podcast and newsletter and you get access to subscriber-only podcast bonus episodes.

Intentional Teaching with Derek Bruff

Welcome to the Intentional Teaching newsletter! I'm Derek Bruff, educator and author. The name of this newsletter is a reminder that we should be intentional in how we teach, but also in how we develop as teachers over time. I hope this newsletter will be a valuable part of your professional development as an educator.
