I sat next to one of the few other faculty there (others may have been watching the livestream). A historian of 19th-century France who long ago moved into administration, she said she knew nothing about ChatGPT but that the hysteria about it reminded her of an article she wrote long ago about anxieties around how railroads would destroy distance, local identity, civilization itself (which they didn't... did they?).
The director of IT started us off with a lightning history of AI and some bold thoughts about the future unfolding before our eyes, building from "Imagine AI not as our creation but as our co-creator" to "Imagine a future where the Turing Test is not for AI but for us."
An old professor of Design & Technology said the panic about AI reminded him of resistance to bringing personal computers on campus in the late 1980s, something ultimately embraced not because one side won but because students demanded it. Everyone wound up benefiting! As for AI, he told us he uses it all the time to get new ideas; it "shakes up his brain" in a way nothing else can.
A freshly graduated industrial designer described how, over the last year, AI had found its way into everything the students he knew did, but he had noticed it was mostly for the things people weren't "passionate" about - like writing emails (especially onerous for those for whom English isn't their first language), artist's statements, etc. Using AI for these "clerical" things, he felt like there were now ten of him: how cool to have a team!
A faculty member from the public policy school recounted asking students if they used ChatGPT; one said she'd used it to write the paper for another class. Did she hand it in? Yes. How did she feel? A little strange... but she was really under a lot of stress and didn't have time to write it herself, and without the paper she would have failed the class. The faculty member "left it there" in his class, and with us. But, he added, our students really are under a lot of stress.
A student in Design & Technology described being inspired by a Florida museum's interactive Salvador Dalí replica, with whom patrons could have AI-enabled conversations. She and a classmate decided they wanted to be "immortal like Dalí," she said, and brought together different AI programs recording and manipulating their facial expressions, movements, "voice cloning" and the like. In the end it "fell apart" (somehow her avatar wound up with an English accent!) but she found it amazing what they were able to do in just a day and a half (!!!), with just a few 10-minute YouTube videos explaining how to use these programs, and all for $0 - cheapest Parsons project ever!
The somewhat heavy-handed moderator, the director of our virtual reality lab, emphasized how ChatGPT has radically democratized what had been a very exclusive world. She deflected leery questions from the audience regarding ethics and privacy and expertise, arguing that being a creator is the best way to understand the ethics of AI, learning through doing how you can manipulate the model and how it manipulates you.
The discussion brought out a few problems. There's a vast amount of data more and more people now have access to, but that data set is still biased, and likely to remain so. (Nobody mentioned how much of that data is not just incomplete but false, or designed to mislead.) AI still sometimes makes things up ("hallucinates"), but we were assured it's getting better rapidly. Copyright issues were raised but not plagiarism, or the way the student who outsourced her paper to ChatGPT shortchanged not just the school but herself.
Nobody on the panel thought that use of AI could be or should be constrained: that train has left the station. The point instead was how to live into its possibilities, how not to be left behind. The public policy professor argued that we should require all our students to learn to use AI, and make this a selling point of a New School degree - and we need to do so in a matter of months, not years! The other professor agreed but noted there was no way faculty could stay ahead of students here; it was an opportunity for co-learning.
The student who'd worked on an immortal avatar told us her own misgivings had been assuaged by a friend who told her that the invention of photography, experienced as an existential threat by artists, had instead led to an artistic renaissance, freeing artists from the "demands of realism." She felt the ubiquity of AI released her from the "pressure to be perfect," and allowed her to focus on work that is "rich in feeling" and storytelling. In the face of the crisis of loneliness, we need work with "emotional intelligence." (Like this?)
The recent graduate noted that as a New School student he had learned a skepticism that made him wonder whether this was all moving too fast - last he heard, "fast fashion" and "fast furniture" were bad things! But he also told how a fan had created a deepfake song by the Canadian singer Drake which was the best Drake song in years. Drake had quashed it, but another singer, Grimes, had a better response, allowing fans to use her voice in generating songs and offering them 50% of the profits. Imagine when Grimes sings one of those fan-generated songs in a concert, he said; it'll be insane.
This was the first of what will be many events exploring how we can and should integrate AI into our work as a university. (Some in the future might have better representation of those concerned not just with making but with research and analysis.) It convinced me I should face it head-on in my own classes, asking not whether but how it can be used in our learning and thinking, and building on the ways students have already integrated it, knowingly and unknowingly, into their ways of being. Actual experience with manipulating and being manipulated will make for more meaningful discussion.
Speaking of ways of being... The AI-enthusiast veteran of the PC wars, citing some old sci-fi series where robots have long surpassed human capacities but "keep us around for some reason," said he always asks himself what makes him different from the machines he uses, eliciting appreciative nods from the audience. But what sticks with me is the table-turned Turing Test question. The better AI gets at replicating what we think important and valuable (like "emotional intelligence"), the less competent we may feel ourselves to be, not just at what machines do well but even at being human.