Wednesday, August 23, 2023

Don't trust anything it says

Our provost's office has sent around suggestions for responsible integration of generative AI into classes, with some sample language for syllabi. I quite like these, from courses at the University of Pennsylvania:

You may use AI programs e.g. ChatGPT to help generate ideas and brainstorm. However, you should note that the material generated by these programs may be inaccurate, incomplete, or otherwise problematic. Beware that use may also stifle your own independent thinking and creativity. You may not submit any work generated by an AI program as your own. If you include material generated by an AI program, it should be cited like any other reference material (with due consideration for the quality of the reference, which may be poor). Any plagiarism or other form of cheating will be dealt with severely under relevant Penn policies. 

That's from a class on bioethics and the law. An entrepreneurship class is even more gung-ho:

I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I provide tutorials in Canvas about how to use them. I am happy to meet and help with these tools during office hours or after class. Be aware of the limits of ChatGPT: 

• If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts in order to get good outcomes. This will take work. 

• Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand. 

• AI is a tool, but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of the academic honesty policies. 

Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance.

I'm sure ChatGPT will come up in my classes, but the kind of work demanded in a seminar makes it less of a threat than for some other kinds of instruction. It came up today when I was talking with the TAs for my upcoming lecture course "After Religion," too. Since the opening and closing assignments ask students either to reflect on personal experiences or to work in a creative medium to articulate a personal vision, ChatGPT will be of limited use. The more bookish midterm essay prompts juxtapose very specific readings, so ChatGPT may draw a blank there too.

But there is one assignment where I thought ChatGPT might sneak in, and maybe to the good: a 3-minute video on a contemporary movement or phenomenon of interest to the students, an assignment included as much for the benefit of the class - so everyone becomes aware of a variety of topics - as for the individual students. Though they'll still have to generate the content and narrate it, it's an obvious case where ChatGPT might promise to help.

I just confirmed it, asking "Write me a three-minute presentation on Vale do Amanhecer" (the example I offer in class for this assignment) and receiving an effective presentation in nine paragraphs, each with a title - just right for a PowerPoint. Students in a hurry often have a hard time distinguishing minor details from major factors, and I've found student presentations consistently weak on providing enough framing context to help their audiences engage with what they're talking about. ChatGPT's blandly competent format would actually serve the class better.

I'm not about to let students bypass research, though. Remember: 

• Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand. 

And presumably the topics they choose will be ones they have ideas or questions about, if not yet an understanding of. Could this perhaps motivate meaningful research and reflective work after all, as well as more user-friendly presentations?

Hey, I've just done something recommended in an article in the Chronicle of Higher Education: instead of ignoring generative AI or trying to forbid it, find ways of using it in your field or course in a meaningful way. As the article promised, seeing what it can and can't do in a setting I understand opens a new vista for assignments.