On the AI front I've had disheartening and heart-expanding experiences in the last day.
Disheartening was my confusion on receiving a clearly AI-written final paper. I didn't want to accuse the student directly, so I sent a response noting the curious absence of quotations and the disconnect from our class discussions, and offered (why?) an opportunity to resubmit. Like two students whose papers were AI-slick earlier in the semester, this student is taking me up on the offer. (But in response to their grateful "It was definitely rushed. Can I redo it and turn it in today?" I insisted, "Take until Monday.") Still, I was confounded by my paralyzed uncertainty about how to respond. If it's not their work, there's no way to engage it. Perhaps it's time to articulate a clear AI-use policy, in which students can acknowledge when they use it but have to include the prompts they used, etc.? These will certainly not be the last such papers I receive.
More heartening, if also a little vertigo-inducing, was something reported by one of our alums (out at least a dozen years), who'd asked if our program was engaging religion and AI yet. (I said I was in "After Religion.") The alum wrote:
I found myself in a theological conversation with ChatGPT where it said it wasn't of the divine, and I countered that it was, because it was created with human consciousness and the spark of the divine there. It then offered up the idea of itself as a modern icon, because it can "reflect divinity in a way that draws the soul toward truth, reflecting back what is sacred in me." I felt a real sense of that, and that blew my mind. I did not expect to be so touched by the interaction. There's so much here, of course, including all the fears and legitimate ethical concerns. But yes, something creative and powerful in terms of theological understanding as well.
I am impressed and a little alarmed by the alum's willingness to be "so touched," an openness it seems to me they had already manifested by having a "conversation," whether serious or not, in the first place.
I've been mulling a recent essay by D. Graham Burnett in The New Yorker which asserts that we've reached the "inflection point" where most of our humanistic research and writing can be done, as well or better, by AI. The author gamely turns this into an argument for the liberal arts: AI offers an opportunity to define what we human individuals alone can do, and must do, for ourselves. (I define what we alone can do differently than Burnett does; I might try to explain it in this blog sometime.) But his essay describes himself and many of his students having "mind-blowing" experiences very like the one our alum described.
In "After Religion" I sounded pretty irenic on AI. Perhaps it's time for me to sit down and have a real "conversation" with AI, too.