Like most academics who use writing to help students learn, I’ve been thinking a lot about generative AI tools like ChatGPT and Gemini. Most conversations about them have a panicked tone: how are we going to keep students from cheating with them? That’s a problem, yes, but students have always had ways to cheat. Now it’s both easier to do and harder to detect, which is a nightmare of sorts, but it’s not a new problem. For better or worse, it doesn’t keep me up at night.
There are very real ethical concerns about it, and plenty of other reasons I wish it had never been developed. Nevertheless, the genie is out of the bottle, and we’re going to have to figure out how to live in an AI-infused world. It has already saved me a ton of time writing multiple-choice questions targeting several levels of Bloom’s taxonomy and generating cases for my ethics students to grapple with. It can take much of the labor out of working up material for a day in class, leaving me with more time to spend on the parts of my job that are meaningful. I’m sure it has tons of other cool uses as well.
In the meantime I’ve been wondering how we can use it, and teach students to use it, to help them learn. The American Association of Colleges & Universities, partnering with Elon University, just published an AI guide for students. It’s pretty good, and I’ve posted it for my students. But it does talk about using AI to help them outline a paper (on p. 6), and this troubles me. If something else is generating the ideas, is the student doing the work needed to get the learning I’m hoping they’ll get?
In past semesters, I’ve told students that the point of assigning them work is usually not the product itself. The learning happens when they do the thinking that goes into the product. If I could get away with not assigning papers, I’d sure do it; nobody loves grading. But I can’t read their minds, so to show me that they’re doing the thinking, they have to give me evidence, frequently in the form of some writing.
As I ruminated on this while walking one evening, I ended up returning to what must be my favorite classic thought experiment, since I keep coming back to it: Robert Nozick’s Experience Machine. If you’re old enough to know what the Matrix is, think that: the idea is that you could plug in and live in essentially whatever world you want, and you wouldn’t know the difference. So as the video I just linked to says, you could have the experience of having won an Olympic medal, written a great novel, been awarded an Oscar, etc. Would you plug in?
Some people say they would, and can stake out a reasonable argument for why. But the idea is supposed to be that if you wouldn’t, then that’s probably because something besides just pleasure matters to you. It’s not just that you want the pleasure of the medal/novel/Oscar, but that you want that experience to be authentic. You want to have earned the recognition, not just enjoy it.
If you wouldn’t enter the Experience Machine in order to feel like you’d accomplished something, I thought, why would you use AI to write anything for you? This seems to me to be the crux of the matter. Personally, I do a lot of writing, in part because I use writing to think. I wouldn’t want to use an AI to produce my Substack pieces, for instance.
So I think the questions we need to tackle are about what the point of writing really is. What’s valuable here, the process or the product? Or—when is the product the main thing of value, and when is it the process?
Generative AI can’t produce anything truly new.1 But it might—I haven’t tried this—be able to help you process and shape nebulous thoughts or complicated data. It might sometimes help us see what we’re struggling to see.
So it seems that I (we?) wouldn’t want it to do my thinking for me. But when it’s the product that counts—when it doesn’t matter what process produced it—then it might make sense to use AI. When is that, though?
I definitely don’t mind using it to save me the labor of writing multiple-choice questions, or cooking up ethical scenarios for my students to analyze, or giving me a sense of what to think about when choosing plants for a garden. Those activities are grunt work, and they do nothing to bring meaning to my work. But I wouldn’t want to use it to write these little essays, or my book. Writing is work, sometimes hard work, but for projects like that the process is just as much the point as the product. It’s not that I want to have written a book; it’s that I think I have things worth saying, things others might find something good in.
In that kind of case, writing creates meaning. That’s not something I’d like to farm out to AI. To be meaningful, something needs to be yours; that’s what the Experience Machine is meant to show.2 We know that meaning comes from flow, that absorption in an activity that exercises your talents well. And we know that meaning is good for our well-being. Perhaps judicious use of AI can boost you into that flow zone: if it can turn a task that’s too much of a struggle to produce flow into one that does, then maybe that’s a good thing. And maybe nothing general can be said about when it can do that and when it can’t.
If that’s the case, then conversations with students need to revolve around these issues. Maybe this provides us the occasion to talk about the real value and meaning of education. It has never been about the product (the diploma); it has always been about the process. The diploma is just supposed to provide evidence that you engaged in that process. Maybe we can use this moment to remind ourselves of that.
1. Though honestly, when do any of us produce anything truly new, given that we swim in a sea of ideas that influence us? But that’s a whole other topic. What the AI is doing is clearly quite different from what we’re doing when we write.
2. This raises a whole host of other questions for me: Do we value others’ writing, does it move us, in part because we know it’s the work of another human being? Would we retroactively revise our judgment of an AI-generated work if we’d enjoyed and admired it before we knew it was AI-generated? If so, is that just prejudice, or is it revealing something important about how we enjoy art and literature? These are questions for another time, I think.