Pitchforks Out for Generative Text Users?

I had an interesting discussion with colleagues today about student use of text generation tools, specifically the probability-based “AI” text generators built on large language models. The arguments I found most persuasive are that—regardless of what tools might be used in the workplace someday—students need to learn how to think, and that we lack evidence that productive cognitive work occurs when students use generative tools, no matter how much they wrangle with the output to adjust the text. It’s almost like saying, “The best editor in the world will also write the best novels.” Most people recognize that statement as completely ridiculous, and I’d argue the reverse is pretty ridiculous too. The two skills are connected, but not in a way that lets someone developing editing skills transfer them to writing, or vice versa. Meanwhile, students generally use these tools as a shortcut around the work rather than as a tool for doing it. Even the most organized, best-intentioned students can find themselves in a time crunch, and if there’s a tool that will spit out even a mediocre paper, that beats turning in nothing.

But even with consensus on AI tools and their impact on critical thinking, we’re left with a bigger question about how to judge those who use the tools. In my circles—publishing and academia—the general tendency is to smear users of generative text tools as, at the least, lazy and, at the most, morally bankrupt. There are big questions about how the tools can damage the fulfillment people draw from work. (To wit, we’re happier when we have meaningful jobs. If AI takes away opportunities for people to work, or gives them work that is less meaningful, then the tools are increasing human suffering.) There are well-founded and serious concerns about environmental damage (even though most of the proposed data centers have yet to be built and may never be built). There is the problem of big tech profiting from a technology that steals—yes, literally steals—from the copyrighted works of others. There are concerns about student learning, because students who lean on the tools are not engaged in the critical thought necessary to produce quality writing. And some of my biggest concerns center on opportunity costs, especially in education: a student brainstorming a paper with their roommate generates benefits for both of them, and few of those benefits remain if the student instead turns to a chatbot.

All of these issues deserve attention, but how should we focus and harness our concerns? Should we muster rage equal to the blind faith and enthusiasm for the technology spouted by the companies producing it (and the generally sycophantic tech and financial press repeating those claims verbatim)? This seems, among other things, exhausting. Should we try to ignore the issue and hope it goes away? Should we rely on AI detection tools and simply bar students from using generative tools for papers in our classes? (I just saw a presentation from one of my colleagues showing solid evidence that Turnitin’s plagiarism and AI detection works quite well.) These approaches seem likely to make those of us arguing against the tools sound like we’re stuck in the past. They aren’t inspiring arguments, and I think a different approach is needed.

I think we need to go on the offensive with advocacy for critical thinking and traditional methods of writing. This means letting others do as they do and treating them simply as folks we’ve not yet persuaded. It also means celebrating things like writing by hand, diagramming on a whiteboard, brainstorming with others, reading (truly reading) more material and a wider array of it, and working through rounds of edits to revise a paper. The evidence is on our side that all of these practices are crucial to critical thinking. Now we just need to show other people how much we value these approaches and why they are irreplaceable.

This approach, I believe, will keep the dialogue focused on learning while avoiding the trap of condemning people we should be trying to persuade. Enthusiasm for AI tools may run higher in the workplace than it does among professors in the classroom, but the research tells us that companies still do better when they employ critical thinkers who can work together in teams. Nothing about AI tools changes that fact or provides new ways for teams to collaborate. Human ingenuity is still king, and I do not see a future where that changes. Advocating for a future that benefits everyone—and rejecting the tools that stand in its way—is, to me, the best approach.
