When I ran the first-year program some time ago, I was introduced to the "harm reduction" approach to substance abuse. Students would use legal and illicit drugs whether we forbade it or not, and they would surely not listen to us if we simply told them it was wrong. A public health approach, as distinguished from a legal one, encouraged them to understand what they were risking and to think about using in safer, more responsible ways, minimizing the likelihood of harm to themselves and others. This approach could not only earn a hearing but might convince them to cut back or even stop.
Something like a harm reduction approach is evidently being taken by our university in response to the arrival of AI programs like ChatGPT. Students will make use of it whether we tell them to or not, the thinking seems to go, but maybe we can get them to use it in a more thoughtful way - which might even lead them to realize that they're better off not partaking. This, which went out to faculty from the provost's office today, seems a little too accommodating, though.
"How might student learning be supported by the use of generative AI?" It took me a while to make the gestalt switch to recognize the wisdom of harm reduction; this one may take me some time too.