He illustrates how ChatGPT can be used to avoid detection - or thinking. Recalling perhaps an assignment from Columbia's famous Core, he shows how one might ChatGPT an essay on The Iliad. You could start with this ask:
I have to write a 6-page close reading of the Iliad. Give me some options for very specific thesis statements.
In no time, ChatGPT supplies ten, all compelling, including the one he decides to work with.
For novice users, ChatGPT produces generic-seeming essays that aren't long enough for college assignments, but there are easy ways around this, we learn.
Annnnd... done! The words of the essay are the student's own, but the argument and the structure have all been outsourced. The Iliad has to be read, but perhaps not until you've got your marching orders for fruitful foraging. Terry argues that if students are still to be made to think, and to demonstrate that they can, take-home essays have to be supplemented or even replaced by oral exams, in-class writing assignments, or some new form of assignment that's ChatGPT-proof.
I found this essay really instructive. The ease with which smart kids at an Ivy are making use of ChatGPT makes me realize that my ideas about writing essays are really Luddite and don't even correspond to what I routinely do myself. My school's harm-reduction approach to generative AI - which doesn't attempt to prevent use of such programs but rather seeks ways for students to use them thoughtfully and take responsibility for their use of them - makes more and more sense.
The use of ChatGPT to outwit the Columbia Core helps me see what kids have surely been doing for a while in the age of online search engines and algorithms: ChatGPT marks for them a difference in degree, not in kind, in essay writing in the age of the Web. Folks my age made use of a thesaurus to help find the right word, or dictionaries of quotations to spice up a speech. I never read CliffsNotes, but many books offer the wonders of indexes, which can take you swiftly to the section of a text relevant to your research - or tell you you're looking in the wrong place. Then came databases, turbocharged when they went online and launched into the stratosphere by the digitization of most of their sources, now conveniently searchable too. It's a rare luxury to just read a text - the whole thing - and reflect on it, when there are, just clicks away, reviews and analyses and the transcripts of podcasts. I suppose college-bound students are taught most of these tricks while still in high school now. In school, as in every other part of their lives, the aggregated and sorted voices of countless others are always just a click away. They learn when to defer to the cloud and how to go beyond it to toot their own horn.
Some of Terry's classmates are using ChatGPT to get out of thinking, but others are using it to think - or at least to produce the essays and papers that we faculty take as evidence of thinking. Who is cheated more? ChatGPT makes the effort of wrestling with the complexities of a text, forming an interpretation or critique, finding the right phrasing for it, and weighing arguments for and against it seem extravagant. Shouldn't one be able to do that in seconds? More fundamentally, hasn't it all already been done by someone? Why should I slog through when I could just add my descant to the chorus of cloud wisdom orchestrated by generative AI?
One thing I feel gets lost here is the possibility that a text - it doesn't have to be Homer - becomes a dialogue partner for the reader, questioning me even as I question it. The text calls me to account. ("Don't be a descant," it says, "think for yourself!") While it's good to know what others have made of a text, and good if one can develop an original take on it, the point is - also, mainly - to form a relationship with it. I'm never entirely alone with a text, of course (Gadamer's "fusion of horizons" includes the reality that I bring a horizon too), but there's something huge about forming a personal relationship with it, becoming the sort of person who can be in a relationship with books or arguments. That's one of the rationales not just for great books curricula like Columbia's Core but for liberal arts education more broadly.
By the way, a student I was hanging out with today was familiar with the article. When I told him I'd found it revealing for showing how a piece might be written by a student while the thinking was outsourced to ChatGPT, he replied that our students may be "doing it backwards": doing their own thinking but letting ChatGPT do the writing - "and that's why they get caught"!