Generative AI and learning. It's complicated.


"Imagine an issue. Wrong, it's more complicated than that."

I've been taken with the above snarky quote from minor social media celebrity @internethippo. It seems to describe so many of the issues we've been debating and discussing here in the United States in the year 2024. So many people want simple answers to complicated problems, whether that's combating inflation or balancing the federal budget or a whole host of culture war debates. But complicated problems rarely have simple answers, and that's something we should remember in higher education.

I was reminded of this yesterday while reading Marc Watkins' new essay, "When AI Does the Reading for Students." Marc is an assistant director of academic innovation at the University of Mississippi and a former podcast guest. In the essay, Marc describes some of the many ways students can use generative AI tools like ChatGPT and NotebookLM to summarize or outline or change the reading level of a text. Reading is a "foundational" skill, Marc writes, and "yet in the near future, we'll be able to offload close-reading skills to private corporations in exchange for instant summaries tailored to our reading level, and very likely, our political interests."

That sounds pretty terrible, and it is. However, it's more complicated than that.

Marc goes on to mention his piloting of two AI reading assistants in his writing courses in the spring of 2023, noting that these tools could have immense value to students with "hidden disabilities" or those for whom English is not a first language. "And that's the frustrating aspect of generative AI in education: This is technology that can truly help some students and yet can also, through misuse or overreliance, seriously weaken the skills of many others."

Or consider the frequently stated and completely true claim that large language models (LLMs) like the ones used by ChatGPT and Claude often produce statements that are factually incorrect. Some call these statements hallucinations or confabulations or simply BS, and their common occurrence is why we shouldn't treat generative AI as a search engine or an oracle. Even when something like Google's AI Overviews is correctly summarizing a source, it can select the wrong source to summarize and tell users to try putting glue on their pizzas.

Generative AI is bad at facts. However, it can be surprisingly good at fact-checking.

In his recent post, "Teaching Critical Reasoning with AI: Wright Brothers," Mike Caulfield, co-author of a fantastic book about information literacy called Verified, shares how he took a TikTok video about secret government funding for aeronautics research at the turn of the 20th century and asked Claude to evaluate the evidence presented in this video. Claude did a great job identifying the evidentiary gaps in the argument, and it was able (with some additional prompting) to generate a number of search queries someone might use to explore those gaps.

Mike argues that the LLM can be a very useful tool for what he and his Verified co-author Sam Wineburg call "taking bearing." Here's how he describes this part of a research endeavor:

"Taking bearing means surveying the landscape before running off into your research session. Part of that is understanding what you’re looking at, whether a source, claim, or set of Google results. But I think part of it is this too, getting something to generate a list of relevant questions, the stuff you might not be thinking about, the things that aren’t part of your initial reaction. You can chose the path yourself, but here’s what’s available. There are usually more directions you can go than you think."

Mike has a series of posts about critical reasoning with AI, and they're well worth reading.

Here's another example at the intersection of AI and teaching: process tracking. I think I first heard about this back in the summer of 2023 during a conference session on the problems with students using generative AI on written assignments. How can we know if an essay was actually written by a student without AI help? Well, we could ask them to compose the entire essay in Google Docs and then examine the version history if we suspect the student had unauthorized aid. If a big piece of the essay just appears in the document all at once, that's an indication it was copied and pasted from ChatGPT.

That's process tracking, and the tools for process tracking have gotten more sophisticated since 2023. Grammarly, for instance, has an Authorship feature that promises to help students demonstrate that their work is their own by combining an enhanced version history with AI detection features. The idea is to communicate to others how much time was spent writing, where text was copied and pasted from, and what role grammar and spelling suggestions played in the writing process.
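To make the basic idea concrete, here's a minimal sketch of the version-history heuristic described above. It assumes revisions are simple (timestamp, full text) pairs, and the 500-character threshold is purely illustrative; this is not how Google Docs or Grammarly Authorship actually works under the hood.

```python
def flag_large_insertions(revisions, threshold=500):
    """Return (timestamp, chars_added) for any revision where the text
    grew by more than `threshold` characters at once -- a possible sign
    of a large copy-and-paste rather than gradual drafting."""
    flagged = []
    for (_, prev_text), (t_curr, curr_text) in zip(revisions, revisions[1:]):
        added = len(curr_text) - len(prev_text)
        if added > threshold:
            flagged.append((t_curr, added))
    return flagged

# Hypothetical revision history: two small edits, then one huge jump.
history = [
    ("09:00", "Outline: AI and reading."),
    ("09:10", "Outline: AI and reading. First paragraph drafted slowly."),
    ("09:11", "Outline: AI and reading. First paragraph drafted slowly." + "x" * 800),
]
print(flag_large_insertions(history))  # only the 09:11 revision is flagged
```

Real process-tracking tools are far more nuanced than a character count, of course; they also consider where text came from and how it was revised afterward. But the underlying signal is the same: sudden, large insertions look different from incremental drafting.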

The MLA-CCCC Joint Task Force on Writing and AI recently published a piece titled "What Is Process Tracking and How Is It Used to Deter AI Misuse?" Not only does the piece provide more information about process tracking, but it also features the views of five task force members about process tracking. Four of the task force members were not keen on process tracking:

  • "I believe some faculty may rely too much on the surveillance of writing than the actual teaching of it." (Kofi Adisa)
  • "Process-tracking software, I think, doesn’t align with our values in Writing Studies." (Antonio Byrd)
  • "This subverts what is a good thing in writing pedagogy – focusing on process – and turns it into a prescriptive and auditable form of surveillance." (Leonardo Flores)
  • "If we force students to disclose information about how and when they write, it could undermine trust and egalitarianism." (Liz Losh)

All four had very thoughtful takes on process tracking, and I recommend reading them all. However, the fifth contributor, Anna Mills, a writing instructor and author of the open textbook How Arguments Work, took a different approach. She argues that "even with the best pedagogy, some temptation to take shortcuts with AI will remain when students feel insecure or under pressure," so she has been experimenting with the use of Grammarly Authorship to provide a kind of accountability for her writing students. On her blog, Anna takes a deep dive into Authorship, its strengths and limitations, and the ways she's using it with her students. She also shares a sample writing activity report generated by Authorship, for those who want to see concretely what the tool does.

Process tracking is pretty terrible, except when maybe it isn't. It's complicated.

As higher education heads into its third year grappling with the roles of generative AI in teaching and learning, here's hoping we remember that we aren't likely to find simple answers and that we take the time to explore the complicated ones.

Thanks for reading!

If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching.

Or consider supporting Intentional Teaching through Patreon. For just $3 US per month, you can help defray production costs for the podcast and newsletter and you get access to Patreon-only interviews and bonus clips.

Intentional Teaching with Derek Bruff

Welcome to the Intentional Teaching newsletter! I'm Derek Bruff, educator and author. The name of this newsletter is a reminder that we should be intentional in how we teach, but also in how we develop as teachers over time. I hope this newsletter will be a valuable part of your professional development as an educator.
