Generative AI and writing: automation or catalyst?


I was on the fence about reading John Warner’s new book, More Than Words: How to Think about Writing in an Age of AI. It wasn’t that I didn’t respect Warner; I had been following his work for years, especially his Just Visiting blog on Inside Higher Ed, where he writes about teaching. His was an essential voice for me after ChatGPT launched in late 2022. He had already argued in his book Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities that writing instruction in secondary and post-secondary schools was failing. In his December 2022 newsletter, he noted that if ChatGPT was good at writing the kinds of essays we ask students to write, then maybe those essays weren’t worth writing to begin with:

“But part of the problem is that we – and very much including myself here – have been conditioned to reward surface-level competence (like fluent prose) with a grade like a C+, B-, or B. We may have to get used to not rewarding pro forma work that goes through the motions with passing grades, or it may mean finding other elements of the experience to focus on in terms of grading.”

I quoted that in many of my talks and workshops on teaching and AI throughout 2023!

No, I was on the fence about reading More Than Words because a book about a new technology’s implications for teaching and learning, published just over two years after that technology became widely available, is quite likely to be premature. I had read more than one book on teaching and AI that wasn’t fully baked. And I have more books to read than time to read them, so I try to be selective about which books I add to my shelf of reading opportunity. What convinced me to pick up Warner’s book was a comment of his on Marc Watkins’ newsletter this past January. Here’s what Warner wrote:

“In More Than Words, the concluding chapters are titled Resist, Renew, and Explore, where I first articulate what should be resisted (e.g., anthropomorphizing LLMs), and then move to what we should renew (human connections, writing as an act of valuing a ‘unique intelligence’) and then moving on to what we must explore (the ways technology can enhance human flourishing).
The challenge, as you articulate here, is that we’re trying to do these things simultaneously and the ubiquity of the applications and speed of change (the unavoidable stuff) leaves it hard to pause and orient in ways that allow us to explore productively.”

That’s a challenge I can get behind! In fact, I feel that much of my work in higher ed over the last two years has been about helping faculty and other instructors figure out (individually and collectively) what about generative AI should be resisted, what aspects of teaching and learning should be valued regardless of AI, and how we might find useful roles for AI in our work as educators. This is, for example, why I’ve produced a dozen episodes of my Intentional Teaching podcast on the topic of AI and teaching. Educators across higher ed are struggling to figure these things out, and it is helpful in that process to hear from others who have been doing just this kind of exploring.

I finished reading More Than Words a couple of weeks ago, and I’ve been struggling with how to write about it. Warner is a fantastic writer, and he shares many stories from his own writing life and from his classrooms about the value of writing as a way to think and to feel. He argues compellingly that we shouldn’t outsource our writing to some AI automation tool. “To abandon all writing to generative AI is to abandon thinking itself,” Warner writes on page 68. And on page 78 he writes, “What happens when we divorce the necessity of feeling from the act of writing by outsourcing it to something like ChatGPT, which cannot think or feel?” I largely agree with Warner on these points; as James Lang writes in an endorsement, we should resist allowing “automation to supplant the work that makes us human.”

My problem is that there is a fair patch of middle ground between “abandon[ing] all writing to generative AI” and resisting the use of AI altogether. And it’s in that middle ground where I see thoughtful and caring educators spending their time exploring the intersection of AI and teaching.

For example, consider the ChatGPT-as-peer-review assignment that Pary Fassihi shared during her interview on Intentional Teaching. In the assignment, she asked students to request feedback on their essay drafts from ChatGPT using very specific prompts tied to the criteria Fassihi had set up for the essay assignment. Here’s one: “Evaluate the evidence used to support the main argument. Is the evidence relevant, sufficient, and effectively integrated into the argument?” Here’s another: “How well does the paper analyze the implications of digital technology on academic integrity and authorship? What insights or unique perspectives does the paper offer?”

There’s some evidence (Steiss et al., 2024) that AI chatbots can give feedback on par with human feedback when directed via criterion-based prompts like this. Warner cites this study, then dismisses it because he didn’t find the essay assignments in the study compelling. “The assignments used in these studies are lousy, canned prompts that have been answered by students a zillion times with no room for individual engagement or knowledge building” (page 240). The assignments are available in the paper’s appendix, so readers can judge for themselves. I find them a little cookbook, in that they are fairly prescriptive about what writing moves students should be making. But in a way, so is Fassihi’s assignment, given the clear criteria she laid out for her students through the suggested prompts.

If we can believe for the moment that the assignment at hand is a reasonable one, Warner is still resistant to any kind of AI feedback on student work. He writes the following on page 260:

“The danger of introducing something like AI feedback on student writing is to create a kind of ‘official’ definition of taste as determined by the algorithm. If the imperative of school is to please the algorithm, we will have generations of students who have never even had a chance to explore their own tastes through attempts at expression.”

However, consider the instructions Fassihi gives her students on this assignment. She specifically asks them not to make all the changes that ChatGPT recommends:

“Remember: Engage with ChatGPT critically, remain skeptical, and do research on its responses if you need to (For example, if ChatGPT tells you that a particular word is not used in this particular context or culture, make sure you do your research before you just accept its response). Some of ChatGPTs feedback may be useful, but some may not! Please avoid changing everything it asks you to change, and make sure your voice and YOU still remain very present throughout your paper.”

Fassihi goes on to ask students to revise their essays and also to write a short reflection about how they used or didn’t use the ChatGPT-generated feedback. There’s no imperative here to “please the algorithm.” On the contrary, Fassihi is making an effort to explore with her students when and how AI might provide useful feedback while also taking a critical look at AI and its limitations.

That’s the kind of exploratory work that I thought Warner was calling for in More Than Words, but there are very few examples of it in the book. The one memorable example is Warner’s tentative embrace of using AI as a reading assistant of sorts. Page 120: “ChatGPT and other equally powerful models are especially good processors of text, and sometimes, having a tool that can process a lot of text very quickly and mostly accurately can be a useful substitute for some of the ‘reading’ we are often required to do.” Warner goes on to share a story about using ChatGPT to generate an outline of a book he had read several years prior, then using that outline to conduct a targeted re-reading of the book, saving him the time of skimming the whole book.

That is a great example of using AI as an assistant instead of an automation or outsourcing technology. I talked with an education faculty member last fall whose independent-study student used AI in a similar way. The student was reading journal articles she found very hard to follow, so she asked Google NotebookLM to produce one of its chatty-podcast-style audio summaries of one of the articles. That summary gave the student the big ideas and structure of the paper, which made it easier to dig into the full paper after listening. In this case, as in Warner’s example, the AI didn’t replace the reading of the text. Instead, it played a helpful role in that reading.

I can think of other examples from the guests I’ve had on my podcast. Heidi Nobles talked about ways to use generative AI as an editor when writing, getting an approximation of the kind of feedback a human editor or a writing group might provide. Ryan Wetzel shared experiments in having AI serve as a catalyst for students working through a design thinking process, helping students with divergent thinking or rapid prototyping. And way back in November 2022, before ChatGPT was released to the world, I talked with Robert Cummings about AI writing, and he pointed me to the band Yacht, who used an AI tool to generate lyrics and music based on their own catalog of songs as well as on songs from their favorite artists. They took the pieces of the AI output they liked and wove those into new songs for a new album. The AI became (to borrow Wetzel’s term) a catalyst for the group’s creativity.

Warner is right to push back against what he calls AI “true believers,” the Sam Altmans of the world who make hyperbolic claims about the power and impact of AI. But I learned a long time ago not to pay much attention to the wild claims of technology companies. I would much rather explore the technology itself and see where it might be useful. I was hoping for more of this kind of exploration in More Than Words, in part because examples of explorations like the ones above were out in the world at the time of Warner’s writing. More Than Words has a lot to say about the power and value of writing, but by limiting its treatment of AI to AI-as-automation and not fully considering ways AI might serve as a catalyst for thinking, the book doesn’t provide the roadmap to the future of AI in writing (and learning) that I hoped it would.

Thanks for reading!

If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching.

Or consider supporting Intentional Teaching through Patreon. For just $3 US per month, you can help defray production costs for the podcast and newsletter and you get access to Patreon-only interviews and bonus clips.
